EventBridge Patterns Guide
Master EventBridge: event buses, advanced patterns, Pipes, Scheduler, schema registry, and production workflows.
Prerequisites
- Basic understanding of event-driven architecture
- Familiarity with AWS Lambda and IAM
EventBridge: The Serverless Event Bus
Amazon EventBridge is a serverless event bus that connects application data from your own apps, SaaS partners, and AWS services. It is the backbone of event-driven architectures on AWS, superseding the older CloudWatch Events service with a more powerful and flexible platform. EventBridge scales to millions of events per second, delivers events at least once, and provides content-based filtering, schema discovery, event replay, and cross-account event routing.
Unlike simple message queues (SQS) or pub/sub topics (SNS), EventBridge provides rich content-based filtering. Instead of subscribing to all events and filtering in your code, you define event patterns that match specific fields within the event JSON. Only matching events are delivered to your targets, reducing compute costs and complexity.
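To make the matching semantics concrete, here is a simplified local sketch of how a pattern is evaluated against an event. This is an illustration of the semantics only, not EventBridge's actual implementation, and it covers just exact, prefix, and numeric matchers:

```python
def _match_one(matcher, value):
    """Evaluate one matcher against one field value (subset of pattern types)."""
    if isinstance(matcher, dict):
        if "prefix" in matcher:
            return isinstance(value, str) and value.startswith(matcher["prefix"])
        if "numeric" in matcher:
            if not isinstance(value, (int, float)):
                return False
            ops = {">": value.__gt__, ">=": value.__ge__,
                   "<": value.__lt__, "<=": value.__le__}
            terms = matcher["numeric"]  # alternating operator/bound pairs, e.g. [">=", 50, "<=", 1000]
            return all(ops[terms[i]](terms[i + 1]) for i in range(0, len(terms), 2))
    return matcher == value  # plain values mean exact match

def matches(pattern, event):
    """Every pattern key must match; a list of matchers is OR'd together."""
    for key, matchers in pattern.items():
        if isinstance(matchers, dict):  # nested object such as "detail"
            if not (isinstance(event.get(key), dict) and matches(matchers, event[key])):
                return False
        elif not any(_match_one(m, event.get(key)) for m in matchers):
            return False
    return True

pattern = {"source": ["com.myapp.orders"],
           "detail": {"total": [{"numeric": [">=", 100]}]}}
print(matches(pattern, {"source": "com.myapp.orders", "detail": {"total": 250}}))  # True
print(matches(pattern, {"source": "com.myapp.orders", "detail": {"total": 50}}))   # False
```

The key property this models is that filtering happens before delivery: a non-matching event never reaches your target, so you pay no compute for it.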
This guide covers the core EventBridge concepts (event buses, rules, targets), advanced event patterns, EventBridge Pipes for point-to-point integrations, EventBridge Scheduler for time-based events, schema registry, and production patterns for building reliable event-driven architectures.
EventBridge Pricing
EventBridge pricing is straightforward: $1.00 per million custom events published, whether to the default bus, a custom event bus, or a partner event bus. Events that AWS services publish to the default event bus are free. Schema discovery costs $0.10 per million events ingested, and EventBridge Pipes charges based on the number of requests processed.
Event Buses and Rules
An event bus receives events and routes them to targets based on rules. Every AWS account has a default event bus that receives events from AWS services automatically. You can create custom event buses for your application events and partner event buses for SaaS integrations (Zendesk, Datadog, Auth0, etc.).
# Create a custom event bus for your application
aws events create-event-bus --name "myapp-events"
# List all event buses
aws events list-event-buses \
--query 'EventBuses[].{Name:Name, Arn:Arn}' --output table
# Publish a custom event
aws events put-events --entries '[
{
"EventBusName": "myapp-events",
"Source": "com.myapp.orders",
"DetailType": "Order Created",
"Detail": "{"orderId": "ord-123", "customerId": "cust-456", "total": 99.99, "items": 3}"
}
]'
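The Detail field must itself be a JSON-encoded string, which is easy to get wrong when quoting by hand in a shell. When publishing from application code, serialize it with a JSON library instead; a minimal Python sketch (the boto3 call, shown as a comment, is the real PutEvents API but is not executed here):

```python
import json

# Build the same put-events entry in code so Detail is always valid JSON,
# avoiding shell-quoting mistakes.
detail = {"orderId": "ord-123", "customerId": "cust-456", "total": 99.99, "items": 3}
entry = {
    "EventBusName": "myapp-events",
    "Source": "com.myapp.orders",
    "DetailType": "Order Created",
    "Detail": json.dumps(detail),  # Detail must be a JSON-encoded *string*
}
# With boto3 installed and credentials configured, this entry would be sent via:
#   boto3.client("events").put_events(Entries=[entry])
print(entry["Detail"])
```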
# Create a rule with an event pattern
aws events put-rule \
--name "high-value-orders" \
--event-bus-name "myapp-events" \
--event-pattern '{
"source": ["com.myapp.orders"],
"detail-type": ["Order Created"],
"detail": {
"total": [{"numeric": [">=", 100]}]
}
}' \
--description "Match orders over $100"
# Add a Lambda target to the rule
aws events put-targets \
--rule "high-value-orders" \
--event-bus-name "myapp-events" \
--targets '[{
"Id": "process-high-value",
"Arn": "arn:aws:lambda:us-east-1:123456789012:function:process-high-value-order"
}]'Advanced Event Patterns
EventBridge event patterns support sophisticated matching beyond simple equality. You can match on prefixes, suffixes, numeric ranges, IP address ranges, existence checks, and combine conditions with logical operators. Understanding these patterns is key to building efficient event-driven architectures.
{
"source": ["com.myapp.orders"],
"detail-type": ["Order Created", "Order Updated"],
"detail": {
"status": ["CONFIRMED", "SHIPPED"],
"total": [{ "numeric": [">=", 50, "<=", 1000] }],
"region": [{ "prefix": "us-" }],
"email": [{ "suffix": "@company.com" }],
"priority": [{ "exists": true }],
"tags": {
"environment": [{ "anything-but": ["test", "staging"] }]
},
"metadata": {
"ip": [{ "cidr": "10.0.0.0/8" }]
}
}
}
Pattern Matching Reference
| Pattern Type | Syntax | Example Match |
|---|---|---|
| Exact match | ["value1", "value2"] | Field equals "value1" or "value2" |
| Prefix | [{"prefix": "us-"}] | "us-east-1", "us-west-2" |
| Suffix | [{"suffix": ".com"}] | "user@example.com" |
| Numeric range | [{"numeric": [">=", 100]}] | 100, 200, 500 |
| Anything-but | [{"anything-but": ["test"]}] | Everything except "test" |
| Exists | [{"exists": true}] | Field is present in the event |
| CIDR | [{"cidr": "10.0.0.0/8"}] | IP addresses in the 10.x.x.x range |
Use the Event Pattern Sandbox
The EventBridge console includes an event pattern sandbox where you can test your patterns against sample events before deploying rules. This saves time and prevents rules from silently failing to match. You can also use aws events test-event-pattern from the CLI to validate patterns programmatically.
EventBridge Pipes
EventBridge Pipes is a point-to-point integration service that connects sources to targets with optional filtering, enrichment, and transformation. Unlike rules on an event bus, Pipes provide a direct connection between a source (SQS queue, DynamoDB stream, Kinesis stream, Kafka topic, or MQ broker) and a target, with built-in batching, error handling, and transformation.
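Conceptually, a pipe applies its stages in a fixed order: poll the source, filter, enrich, then invoke the target. A minimal local sketch of that flow (plain Python, no AWS calls; the stage functions are hypothetical stand-ins for the filter criteria, enrichment Lambda, and target):

```python
def run_pipe(records, filter_fn, enrich_fn, target_fn):
    """Sketch of the Pipes stage order: filter -> enrich -> target."""
    for record in records:
        if not filter_fn(record):      # filtered records are simply dropped
            continue
        enriched = enrich_fn(record)   # e.g. a Lambda enrichment step
        target_fn(enriched)            # e.g. invoke Lambda / start Step Functions

delivered = []
run_pipe(
    records=[{"total": 20}, {"total": 120}],
    filter_fn=lambda r: r["total"] >= 50,             # mirrors a filter_criteria pattern
    enrich_fn=lambda r: {**r, "tier": "high-value"},  # add data before delivery
    target_fn=delivered.append,
)
print(delivered)  # [{'total': 120, 'tier': 'high-value'}]
```

Filtering before enrichment matters for cost: records dropped by the filter never invoke the enrichment function or the target.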
# Create a pipe: SQS -> Transform -> Lambda
aws pipes create-pipe \
--name "order-processing-pipe" \
--source "arn:aws:sqs:us-east-1:123456789012:orders-queue" \
--source-parameters '{
"SqsQueueParameters": {
"BatchSize": 10,
"MaximumBatchingWindowInSeconds": 30
}
}' \
--target "arn:aws:lambda:us-east-1:123456789012:function:process-orders" \
--target-parameters '{
"LambdaFunctionParameters": {
"InvocationType": "REQUEST_RESPONSE"
}
}' \
--role-arn "arn:aws:iam::123456789012:role/PipeRole"Terraform Pipe Configuration
resource "aws_pipes_pipe" "order_processing" {
name = "order-processing-pipe"
role_arn = aws_iam_role.pipe_role.arn
source = aws_sqs_queue.orders.arn
source_parameters {
sqs_queue_parameters {
batch_size = 10
maximum_batching_window_in_seconds = 30
}
filter_criteria {
filter {
pattern = jsonencode({
body = {
total = [{ numeric = [">=", 50] }]
}
})
}
}
}
enrichment = aws_lambda_function.enrich_order.arn
target = aws_sfn_state_machine.process_order.arn
target_parameters {
step_function_state_machine_parameters {
invocation_type = "FIRE_AND_FORGET"
}
}
}
EventBridge Scheduler
EventBridge Scheduler is a serverless scheduler that creates, runs, and manages scheduled tasks at scale. Unlike cron-based EventBridge rules, Scheduler supports one-time schedules, flexible time windows, time zones, and millions of schedules per account. It replaces CloudWatch Events cron rules for most use cases.
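Scheduler accepts three expression types: rate(value unit) for recurring intervals, cron(fields) for calendar schedules, and at(yyyy-mm-ddThh:mm:ss) for one-time runs. A small Python sketch that builds the first and third forms (the helper names are illustrative, not part of any AWS SDK):

```python
from datetime import datetime

def rate(value, unit):
    """Build a rate() expression; the unit is singular for 1, plural otherwise."""
    unit = unit.rstrip("s")
    return f"rate({value} {unit})" if value == 1 else f"rate({value} {unit}s)"

def at(when):
    """Build an at() expression for a one-time schedule. The timestamp carries
    no timezone; the zone is passed separately via --schedule-expression-timezone."""
    return f"at({when.strftime('%Y-%m-%dT%H:%M:%S')})"

print(rate(5, "minutes"))                  # rate(5 minutes)
print(rate(1, "hour"))                     # rate(1 hour)
print(at(datetime(2026, 3, 31, 23, 59)))   # at(2026-03-31T23:59:00)
```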
# Create a recurring schedule (every 5 minutes)
aws scheduler create-schedule \
--name "health-check" \
--schedule-expression "rate(5 minutes)" \
--flexible-time-window '{"Mode": "OFF"}' \
--target '{
"Arn": "arn:aws:lambda:us-east-1:123456789012:function:health-check",
"RoleArn": "arn:aws:iam::123456789012:role/SchedulerRole"
}'
# Create a one-time schedule (future execution)
aws scheduler create-schedule \
--name "report-generation-march" \
--schedule-expression "at(2026-03-31T23:59:00)" \
--schedule-expression-timezone "America/New_York" \
--flexible-time-window '{"Mode": "FLEXIBLE", "MaximumWindowInMinutes": 15}' \
--target '{
"Arn": "arn:aws:lambda:us-east-1:123456789012:function:generate-report",
"RoleArn": "arn:aws:iam::123456789012:role/SchedulerRole",
"Input": "{"reportType": "monthly", "month": "2026-03"}"
}'
# Create a cron schedule
aws scheduler create-schedule \
--name "daily-cleanup" \
--schedule-expression "cron(0 2 * * ? *)" \
--schedule-expression-timezone "UTC" \
--flexible-time-window '{"Mode": "FLEXIBLE", "MaximumWindowInMinutes": 30}' \
--target '{
"Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:cleanup-workflow",
"RoleArn": "arn:aws:iam::123456789012:role/SchedulerRole"
}'
Schema Registry and Discovery
EventBridge Schema Registry automatically discovers and stores event schemas from events flowing through your event buses. It generates code bindings for TypeScript, Python, Java, and Go, making it easy to work with strongly-typed events in your application code.
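In consumer code, a binding lets you work with the event detail as a typed object instead of a raw dict. A hedged sketch of what that looks like, using a hand-written dataclass as a stand-in (real generated class names and fields depend on the discovered schema):

```python
from dataclasses import dataclass

# Hand-written stand-in for a generated code binding.
@dataclass
class OrderCreated:
    orderId: str
    customerId: str
    total: float
    items: int

def handler(event):
    """Parse the EventBridge envelope's 'detail' into a typed object."""
    return OrderCreated(**event["detail"])

evt = {"source": "com.myapp.orders", "detail-type": "Order Created",
       "detail": {"orderId": "ord-123", "customerId": "cust-456",
                  "total": 99.99, "items": 3}}
order = handler(evt)
print(order.total)  # 99.99
```

A typo like order.totla now fails loudly instead of silently returning None, which is the main payoff of typed bindings.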
# Enable schema discovery on a custom event bus
aws schemas put-discoverer \
--source-arn "arn:aws:events:us-east-1:123456789012:event-bus/myapp-events" \
--description "Discover schemas from myapp events"
# List discovered schemas
aws schemas list-schemas \
--registry-name "discovered-schemas" \
--query 'Schemas[].SchemaName' --output table
# Get a specific schema
aws schemas describe-schema \
--registry-name "discovered-schemas" \
--schema-name "com.myapp.orders@OrderCreated"
# Export code bindings
aws schemas get-code-binding-source \
--registry-name "discovered-schemas" \
--schema-name "com.myapp.orders@OrderCreated" \
--language "Python36" \
--schema-version "1" \
output-bindings.zip
Cross-Account and Cross-Region Patterns
EventBridge supports sending events across AWS accounts and regions. This is essential for centralized event processing in multi-account architectures. You configure resource-based policies on the target event bus to allow events from source accounts.
# On the target account: allow events from source account
aws events put-permission \
--event-bus-name "central-events" \
--action "events:PutEvents" \
--principal "111222333444" \
--statement-id "AllowSourceAccount"
# On the source account: send events to the target account
aws events put-events --entries '[
{
"EventBusName": "arn:aws:events:us-east-1:999888777666:event-bus/central-events",
"Source": "com.source-app.orders",
"DetailType": "Order Created",
"Detail": "{"orderId": "ord-789"}"
}
]'
# Cross-region: create a rule that forwards events to another region
aws events put-rule \
--name "forward-to-dr-region" \
--event-pattern '{"source": ["com.myapp.critical"]}' \
--event-bus-name "default"
aws events put-targets \
--rule "forward-to-dr-region" \
--targets '[{
"Id": "dr-event-bus",
"Arn": "arn:aws:events:us-west-2:123456789012:event-bus/default",
"RoleArn": "arn:aws:iam::123456789012:role/EventBridgeCrossRegionRole"
}]'
Production Patterns and Best Practices
Building reliable event-driven architectures requires attention to error handling, dead-letter queues, idempotency, and observability. Here are key patterns for production EventBridge deployments.
Dead-Letter Queue Configuration
resource "aws_cloudwatch_event_rule" "order_rule" {
name = "process-orders"
event_bus_name = aws_cloudwatch_event_bus.app.name
event_pattern = jsonencode({
source = ["com.myapp.orders"]
detail-type = ["Order Created"]
})
}
resource "aws_cloudwatch_event_target" "order_lambda" {
rule = aws_cloudwatch_event_rule.order_rule.name
event_bus_name = aws_cloudwatch_event_bus.app.name
arn = aws_lambda_function.process_order.arn
dead_letter_config {
arn = aws_sqs_queue.dlq.arn
}
retry_policy {
maximum_event_age_in_seconds = 3600
maximum_retry_attempts = 3
}
input_transformer {
input_paths = {
orderId = "$.detail.orderId"
customerId = "$.detail.customerId"
total = "$.detail.total"
}
input_template = <<-EOF
{
"orderId": <orderId>,
"customerId": <customerId>,
"total": <total>,
"processedAt": <aws.events.event.ingestion-time>
}
EOF
}
}
Event-Driven Architecture Checklist
| Concern | Solution | Implementation |
|---|---|---|
| Failed delivery | Dead-letter queues | SQS DLQ on each rule target |
| Duplicate events | Idempotent consumers | DynamoDB conditional writes |
| Event replay | Event archive and replay | EventBridge Archive feature |
| Observability | CloudWatch metrics | Monitor InvocationsCount, FailedInvocations |
| Schema evolution | Schema Registry versioning | Backward-compatible schema changes |
| Throttling | SQS buffer before Lambda | Use Pipes with SQS source |
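The "duplicate events" row deserves a sketch, because at-least-once delivery guarantees you will eventually see retries. The standard fix is a conditional write keyed on the event id; here an in-memory dict stands in for DynamoDB (the real pattern uses PutItem with a ConditionExpression of attribute_not_exists on the key):

```python
processed = {}  # in-memory stand-in for a DynamoDB idempotency table

def conditional_put(event_id):
    """Mimics a DynamoDB conditional write: returns False if the id exists."""
    if event_id in processed:
        return False
    processed[event_id] = True
    return True

def handle(event):
    # Claim the event id first; a retry of the same event becomes a no-op.
    if not conditional_put(event["id"]):
        return "skipped-duplicate"
    return f"processed {event['detail']['orderId']}"

evt = {"id": "evt-1", "detail": {"orderId": "ord-123"}}
print(handle(evt))  # processed ord-123
print(handle(evt))  # skipped-duplicate
```

In production the idempotency record should carry a TTL so the table does not grow without bound.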
# Create an event archive for replay
aws events create-archive \
--archive-name "myapp-events-archive" \
--event-source-arn "arn:aws:events:us-east-1:123456789012:event-bus/myapp-events" \
--retention-days 90 \
--event-pattern '{"source": ["com.myapp.orders"]}'
# Replay events from archive
aws events start-replay \
--replay-name "replay-march-orders" \
--event-source-arn "arn:aws:events:us-east-1:123456789012:event-bus/myapp-events" \
--destination '{"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/myapp-events"}' \
--event-start-time "2026-03-01T00:00:00Z" \
--event-end-time "2026-03-14T23:59:59Z"Event Size Limit
EventBridge events have a maximum size of 256 KB. For larger payloads, store the data in S3 and include the S3 key in the event. This "claim check" pattern keeps events small and fast while supporting arbitrarily large data. The consumer retrieves the full data from S3 using the key in the event.
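The claim check pattern can be sketched in a few lines; here a dict stands in for the S3 bucket, and the producer/consumer helpers are illustrative names (in practice you would call s3.put_object and s3.get_object):

```python
import json
import uuid

bucket = {}  # in-memory stand-in for an S3 bucket

def publish_large(payload):
    """Claim check: store the full payload, publish only a small reference."""
    key = f"events/{uuid.uuid4()}.json"
    bucket[key] = json.dumps(payload)    # s3.put_object(...) in practice
    return {"source": "com.myapp.orders",
            "detail-type": "Order Created",
            "detail": {"s3Key": key}}    # the event stays well under 256 KB

def consume(event):
    """The consumer redeems the claim check by fetching the full payload."""
    return json.loads(bucket[event["detail"]["s3Key"]])

evt = publish_large({"orderId": "ord-123", "lineItems": list(range(1000))})
print(consume(evt)["orderId"])  # ord-123
```

Note the consumer needs s3:GetObject permission on the bucket in addition to being an event target.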
Key Takeaways
1. EventBridge supports content-based filtering with prefix, suffix, numeric, CIDR, and exists patterns.
2. EventBridge Pipes connect sources to targets with built-in filtering, enrichment, and transformation.
3. EventBridge Scheduler supports one-time, rate, and cron schedules with time zones and flexible windows.
4. Schema Registry discovers schemas automatically and generates typed code bindings.
Frequently Asked Questions
How is EventBridge different from SNS?
SNS is a pub/sub service where subscribers receive every message published to a topic, with comparatively limited filter policies. EventBridge routes events using rich content-based pattern matching on any field of the event JSON, and adds schema discovery, archive and replay, and native SaaS partner integrations.
What is the maximum event size for EventBridge?
256 KB. For larger payloads, use the claim check pattern described above: store the data in S3 and include the object key in the event.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.