Messaging Services Comparison

Compare messaging services across AWS, Azure, and GCP, covering queues, event buses, streaming, dead-letter handling, and cross-cloud patterns.

CloudToolStack Team · 25 min read · Published Feb 22, 2026

Prerequisites

  • Basic understanding of messaging patterns (queues, pub/sub, streaming)
  • Familiarity with at least one cloud messaging service
  • Experience with event-driven architecture concepts

Multi-Cloud Messaging Overview

Messaging services are the nervous system of distributed applications. They decouple producers from consumers, buffer traffic spikes, enable event-driven architectures, and provide reliable asynchronous communication between microservices. Every major cloud provider offers a suite of messaging services, each with different delivery semantics, throughput characteristics, and integration patterns.

AWS provides SQS (queues), SNS (pub/sub notifications), EventBridge (event bus), and Kinesis (streaming). Azure offers Service Bus (queues and topics), Event Grid (event routing), and Event Hubs (streaming). Google Cloud provides Pub/Sub (unified pub/sub and streaming) and Eventarc (event routing). Each ecosystem has evolved independently, resulting in different abstractions, APIs, and operational models.

This guide compares messaging services across all three providers, covering queue services, event buses, streaming platforms, and cross-cloud messaging patterns. We examine delivery guarantees, throughput limits, ordering semantics, dead-letter handling, and cost models to help you choose the right service, or design a multi-cloud messaging architecture that spans providers.

Messaging vs. Streaming

There is an important distinction between messaging (discrete messages consumed and acknowledged individually) and streaming (continuous, ordered event logs that support replay). SQS, Service Bus queues, and Pub/Sub subscriptions with acknowledgment are messaging systems. Kinesis, Event Hubs, and Pub/Sub Lite are streaming systems. Some services blur the line: Google Cloud Pub/Sub supports both messaging and streaming patterns. Choose based on whether you need message-level acknowledgment or partition-based ordered replay.
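
The distinction can be made concrete with a minimal in-memory sketch (plain TypeScript, no cloud SDKs): an acknowledged message disappears from a queue for good, while a log retains every event and lets a consumer replay from any offset.

```typescript
// Messaging: each message is deleted once a consumer acknowledges it.
class AckQueue<T> {
  private messages = new Map<number, T>();
  private nextId = 0;
  send(msg: T): number {
    const id = this.nextId++;
    this.messages.set(id, msg);
    return id;
  }
  receive(): [number, T] | undefined {
    for (const entry of this.messages) return entry; // oldest in-flight message
    return undefined;
  }
  ack(id: number): void {
    this.messages.delete(id); // gone permanently; no replay possible
  }
}

// Streaming: the log is append-only; consumers track their own offsets.
class EventLog<T> {
  private log: T[] = [];
  append(event: T): void {
    this.log.push(event);
  }
  read(offset: number): T[] {
    return this.log.slice(offset); // replay from any offset, any number of times
  }
}
```

A queue consumer that crashes after `ack` cannot re-read the message; a log consumer simply rewinds its offset.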

AWS Messaging Services

AWS provides the broadest set of messaging services among the three providers, with specialized services for different use cases. The core services are:

Amazon SQS (Simple Queue Service)

SQS is a fully managed message queue that supports two queue types: Standard (at-least-once delivery, best-effort ordering, nearly unlimited throughput) and FIFO (exactly-once processing, strict ordering, 3,000 messages per second with batching). SQS is the simplest and most widely used messaging service on AWS.
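
As an illustration of how FIFO content-based deduplication behaves, here is a toy TypeScript model (not the SQS implementation): the deduplication ID is a SHA-256 hash of the body, and a matching ID seen within the five-minute window causes the send to be treated as a duplicate.

```typescript
import { createHash } from "node:crypto";

// SQS FIFO deduplicates within a 5-minute window.
const DEDUP_WINDOW_MS = 5 * 60 * 1000;

class FifoDedup {
  private seen = new Map<string, number>(); // dedup ID -> first-seen timestamp

  // Returns true if the message is accepted, false if dropped as a duplicate.
  accept(body: string, now: number): boolean {
    const id = createHash("sha256").update(body).digest("hex");
    const firstSeen = this.seen.get(id);
    if (firstSeen !== undefined && now - firstSeen < DEDUP_WINDOW_MS) {
      return false; // same body inside the window: duplicate
    }
    this.seen.set(id, now);
    return true;
  }
}
```

This is why producers that legitimately resend identical bodies (e.g. heartbeats) should supply an explicit `MessageDeduplicationId` instead of relying on content-based deduplication.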

Amazon SNS (Simple Notification Service)

SNS is a pub/sub messaging service that fans out messages to multiple subscribers. Subscribers can be SQS queues, Lambda functions, HTTP endpoints, email addresses, SMS, or mobile push notifications. SNS supports message filtering, allowing subscribers to receive only messages that match specific attribute patterns.

Amazon EventBridge

EventBridge is a serverless event bus that routes events from AWS services, SaaS applications, and custom sources to target services based on rules. It supports content-based filtering, event transformation, schema discovery, and event replay. EventBridge Pipes connects sources to targets with optional filtering, enrichment, and transformation steps.

Amazon Kinesis

Kinesis Data Streams is a real-time streaming service for high-throughput, ordered event processing. It supports multiple consumers reading the same stream (fan-out), shard-level ordering, and 365-day retention. Kinesis is ideal for log aggregation, real-time analytics, and IoT data ingestion.

bash
# Create an SQS FIFO queue with dead-letter queue
aws sqs create-queue \
  --queue-name orders-dlq.fifo \
  --attributes FifoQueue=true,ContentBasedDeduplication=true

aws sqs create-queue \
  --queue-name orders.fifo \
  --attributes '{
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true",
    "RedrivePolicy": "{"deadLetterTargetArn":"arn:aws:sqs:us-east-1:123456789012:orders-dlq.fifo","maxReceiveCount":"3"}",
    "VisibilityTimeout": "300"
  }'

# Create an SNS FIFO topic with an SQS subscription and filter policy
# (a FIFO queue can only be subscribed to a FIFO topic)
aws sns create-topic \
  --name order-events.fifo \
  --attributes FifoTopic=true,ContentBasedDeduplication=true
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:order-events.fifo \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:orders.fifo \
  --attributes '{
    "FilterPolicy": "{\"event_type\":[\"order.created\",\"order.updated\"],\"region\":[\"us-east\"]}"
  }'

# Create an EventBridge rule with transformation
aws events put-rule \
  --name order-processing-rule \
  --event-bus-name custom-events \
  --event-pattern '{
    "source": ["com.myapp.orders"],
    "detail-type": ["OrderCreated"],
    "detail": {
      "amount": [{"numeric": [">", 100]}]
    }
  }'

Azure Messaging Services

Azure provides a mature messaging ecosystem with enterprise-grade features. The three primary messaging services are Azure Service Bus, Event Grid, and Event Hubs.

Azure Service Bus

Service Bus is Azure's enterprise messaging broker supporting queues (point-to-point) and topics with subscriptions (pub/sub). It offers features that go beyond basic messaging: sessions (ordered message groups), scheduled delivery, message deferral, dead-lettering, duplicate detection, and transactions. Service Bus supports AMQP 1.0, making it interoperable with non-Azure clients and brokers like RabbitMQ.

Azure Event Grid

Event Grid is a serverless event routing service that delivers events from Azure services, custom topics, and partner sources to handlers like Azure Functions, Logic Apps, webhooks, and Service Bus queues. Event Grid supports CloudEvents 1.0 natively, event filtering, batching, and dead-lettering. It is ideal for reactive, event-driven architectures.

Azure Event Hubs

Event Hubs is a big data streaming platform capable of ingesting millions of events per second. It uses a partitioned consumer model similar to Apache Kafka and offers a Kafka-compatible API endpoint. Event Hubs is used for telemetry ingestion, log streaming, and real-time analytics pipelines.

bash
# Create a Service Bus namespace and queue with dead-letter
az servicebus namespace create \
  --name sb-prod-messaging \
  --resource-group rg-messaging \
  --location eastus \
  --sku Premium \
  --capacity 1

az servicebus queue create \
  --namespace-name sb-prod-messaging \
  --resource-group rg-messaging \
  --name orders \
  --max-delivery-count 5 \
  --default-message-time-to-live P14D \
  --lock-duration PT5M \
  --enable-dead-lettering-on-message-expiration true \
  --enable-partitioning false \
  --enable-session true

# Create a Service Bus topic with subscription and filter
az servicebus topic create \
  --namespace-name sb-prod-messaging \
  --resource-group rg-messaging \
  --name order-events \
  --enable-partitioning false

az servicebus topic subscription create \
  --namespace-name sb-prod-messaging \
  --resource-group rg-messaging \
  --topic-name order-events \
  --name high-value-orders

az servicebus topic subscription rule create \
  --namespace-name sb-prod-messaging \
  --resource-group rg-messaging \
  --topic-name order-events \
  --subscription-name high-value-orders \
  --name amount-filter \
  --filter-sql-expression "amount > 100 AND region = 'us-east'"

# Create an Event Hub for streaming
az eventhubs namespace create \
  --name eh-prod-streaming \
  --resource-group rg-messaging \
  --location eastus \
  --sku Standard \
  --capacity 2

az eventhubs eventhub create \
  --namespace-name eh-prod-streaming \
  --resource-group rg-messaging \
  --name telemetry-events \
  --partition-count 8 \
  --message-retention 7

Service Bus Sessions for Ordered Processing

Azure Service Bus sessions provide ordered, grouped message processing, a feature unique among cloud messaging services. When sessions are enabled, messages with the same SessionId are guaranteed to be processed in order by a single consumer. This is ideal for processing sequences of events that belong to the same entity (e.g., all events for a specific order). Neither SQS FIFO groups nor Pub/Sub ordering keys provide the same level of session affinity.
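
A rough in-memory model of session affinity: messages are routed so that everything sharing a `SessionId` lands on one consumer, in arrival order. The sticky round-robin assignment below is illustrative; Service Bus actually uses session locks acquired by receivers, not a static mapping.

```typescript
// Sketch of session affinity: same SessionId -> same consumer, in order.
class SessionDispatcher {
  private assignment = new Map<string, number>(); // sessionId -> consumer index
  readonly delivered: string[][];

  constructor(private consumerCount: number) {
    this.delivered = Array.from({ length: consumerCount }, () => []);
  }

  // Returns the consumer index the message was handed to.
  dispatch(sessionId: string, body: string): number {
    let consumer = this.assignment.get(sessionId);
    if (consumer === undefined) {
      // Sticky assignment: a session binds to one consumer for its lifetime.
      consumer = this.assignment.size % this.consumerCount;
      this.assignment.set(sessionId, consumer);
    }
    this.delivered[consumer].push(body);
    return consumer;
  }
}
```

The key property is that per-session order is preserved even with many consumers, because no two consumers ever process the same session concurrently.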

GCP Messaging Services

Google Cloud takes a more consolidated approach to messaging with Pub/Sub as the primary service for both messaging and streaming patterns, complemented by Eventarc for event routing.

Google Cloud Pub/Sub

Pub/Sub is a global, serverless messaging service that supports both push and pull delivery, exactly-once delivery (in pull mode), message ordering (within an ordering key), filtering, dead-letter topics, and schema enforcement. Unlike SQS or Service Bus, Pub/Sub is inherently global; a topic in one region can have subscribers in any other region with automatic message replication.

Pub/Sub supports two subscription types: pull (consumers poll for messages) and push (Pub/Sub sends messages to an HTTP endpoint). Push subscriptions are ideal for serverless architectures with Cloud Run or Cloud Functions as targets.
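
A push subscription POSTs a JSON envelope with a base64-encoded `data` field to the endpoint; a 2xx response acknowledges the message. A minimal handler-side decoder might look like this (the envelope interface is trimmed to the commonly used fields):

```typescript
// Shape of the JSON body Pub/Sub POSTs to a push endpoint (fields trimmed).
interface PushEnvelope {
  message: {
    data: string; // base64-encoded payload
    attributes?: Record<string, string>;
    messageId: string;
  };
  subscription: string;
}

// Decode the base64 payload and surface the attributes.
function decodePushMessage(envelope: PushEnvelope): {
  payload: unknown;
  attributes: Record<string, string>;
} {
  const json = Buffer.from(envelope.message.data, "base64").toString("utf8");
  return { payload: JSON.parse(json), attributes: envelope.message.attributes ?? {} };
}
```

Returning a non-2xx status (or timing out) causes Pub/Sub to redeliver, so the handler body should be idempotent.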

Eventarc

Eventarc is Google Cloud's event routing service that connects event sources (Cloud Audit Logs, Cloud Storage, Pub/Sub, and over 130 Google Cloud sources) to targets (Cloud Run, Cloud Functions, GKE, Workflows). Eventarc uses CloudEvents 1.0 format and provides a unified eventing layer across GCP services.

bash
# Create a Pub/Sub topic with schema enforcement
gcloud pubsub schemas create order-schema \
  --type=avro \
  --definition='{
    "type": "record",
    "name": "OrderEvent",
    "fields": [
      {"name": "order_id", "type": "string"},
      {"name": "amount", "type": "double"},
      {"name": "event_type", "type": "string"},
      {"name": "timestamp", "type": "string"}
    ]
  }'

gcloud pubsub topics create order-events \
  --schema=order-schema \
  --message-encoding=json

# Create a subscription with dead-letter topic and ordering
gcloud pubsub topics create order-events-dlq

gcloud pubsub subscriptions create order-processor \
  --topic=order-events \
  --ack-deadline=300 \
  --enable-exactly-once-delivery \
  --enable-message-ordering \
  --dead-letter-topic=order-events-dlq \
  --max-delivery-attempts=5 \
  --filter='attributes.event_type = "order.created" AND attributes.region = "us-east"'

# Create a push subscription to Cloud Run
gcloud pubsub subscriptions create order-webhook \
  --topic=order-events \
  --push-endpoint=https://order-service-xyz.run.app/events \
  --push-auth-service-account=pubsub-push@my-project.iam.gserviceaccount.com

# Create an Eventarc trigger for Cloud Storage events
gcloud eventarc triggers create upload-trigger \
  --location=us-central1 \
  --destination-run-service=file-processor \
  --destination-run-region=us-central1 \
  --event-filters="type=google.cloud.storage.object.v1.finalized" \
  --event-filters="bucket=my-uploads-bucket" \
  --service-account=eventarc-sa@my-project.iam.gserviceaccount.com

Queue Services Comparison

Queue services provide point-to-point messaging where each message is consumed by exactly one consumer. The following table compares the core queue offerings across all three providers.

| Feature | Amazon SQS | Azure Service Bus Queue | Google Cloud Pub/Sub |
|---|---|---|---|
| Max message size | 256 KB (2 GB with Extended Client) | 256 KB (Standard) / 100 MB (Premium) | 10 MB |
| Max retention | 14 days | Unlimited (auto-forward to topic) | 31 days |
| Delivery guarantee | At-least-once (Standard) / Exactly-once (FIFO) | At-least-once / Exactly-once (sessions) | At-least-once / Exactly-once (pull mode) |
| Message ordering | FIFO queues (message group ID) | Sessions (session ID) / FIFO | Ordering key |
| Dead-letter queue | Native (separate queue) | Built-in (sub-queue) | Dead-letter topic |
| Scheduled delivery | Delay queues (up to 15 min) | Scheduled enqueue time (unlimited) | Not native (use Cloud Scheduler) |
| Transactions | Not supported | Supported (send + complete atomically) | Not supported |
| Throughput | Nearly unlimited (Standard); 3,000 msg/s (FIFO) | Up to 1,000 msg/s (Standard); 16 MB/s (Premium) | Unlimited (auto-scales) |
| Protocol | HTTPS / AWS SDK | AMQP 1.0 / HTTPS | gRPC / HTTPS |
| Global availability | Regional | Regional (Geo-DR for Premium) | Global (automatic replication) |

Event Bus & Routing Comparison

Event bus services route events from sources to targets based on content-based filtering rules. They are the backbone of event-driven architectures, enabling loose coupling between event producers and consumers.

| Feature | Amazon EventBridge | Azure Event Grid | GCP Eventarc |
|---|---|---|---|
| Event format | EventBridge event format | CloudEvents 1.0 / Event Grid schema | CloudEvents 1.0 |
| Event sources | 90+ AWS services, SaaS partners, custom | Azure services, custom topics, partner topics | 130+ GCP sources, Pub/Sub, custom |
| Content filtering | JSON-path patterns with prefix, suffix, numeric, exists | Advanced filters (string, number, boolean, array operators) | Attribute-based filtering (exact match) |
| Event transformation | Input transformers, Pipes enrichment | Limited (via Azure Functions) | Via Workflows or Cloud Functions |
| Schema registry | EventBridge Schema Registry with code generation | Event Grid schema validation | Pub/Sub schema validation |
| Event replay | Archive & replay (up to indefinite) | Not native | Not native (use Pub/Sub retention) |
| Targets | 20+ AWS service targets | Functions, Logic Apps, Event Hubs, Service Bus, webhooks | Cloud Run, Functions, GKE, Workflows |
| Throughput | Thousands of events/second (soft limit) | 10,000 events/second per topic | Depends on underlying Pub/Sub |
| Pricing | $1.00 per million events | $0.60 per million operations | Based on Pub/Sub pricing |

Event Ordering in Event Buses

None of the event bus services (EventBridge, Event Grid, Eventarc) guarantee strict event ordering by default. If your event consumers depend on receiving events in the order they were produced, use a streaming service (Kinesis, Event Hubs) or queue service with ordering (SQS FIFO, Service Bus sessions, Pub/Sub ordering keys) instead. Event buses are designed for loose coupling and fan-out, not ordered processing.

Streaming Services Comparison

Streaming services provide ordered, replayable event logs for high-throughput, real-time data processing. They are used for log aggregation, clickstream analytics, IoT data ingestion, and Change Data Capture (CDC) pipelines.

| Feature | Amazon Kinesis Data Streams | Azure Event Hubs | Google Cloud Pub/Sub |
|---|---|---|---|
| Partition model | Shards (explicit provisioning or on-demand) | Partitions (2–32 per Event Hub) | Auto-partitioned (transparent) |
| Throughput per partition | 1 MB/s in, 2 MB/s out per shard | 1 MB/s in, 2 MB/s out per TU | Auto-scaled (no per-partition limits) |
| Max retention | 365 days | 7 days (Standard) / 90 days (Premium); unlimited via Capture | 31 days (or BigQuery subscription for unlimited) |
| Kafka compatibility | Not native (use Amazon MSK) | Kafka-compatible endpoint (SASL/OAuth) | Not native (use Confluent on GCP) |
| Consumer model | Shard iterator / Enhanced fan-out | Consumer groups with checkpointing | Subscriptions with acknowledgment |
| Serverless processing | Kinesis Data Analytics (Apache Flink) | Stream Analytics / Azure Functions | Dataflow (Apache Beam) |
| Data capture to storage | Kinesis Firehose to S3/Redshift | Event Hubs Capture to Blob/ADLS | BigQuery subscriptions / Dataflow to GCS |
kinesis-vs-eventhubs.sh
# AWS: Create a Kinesis Data Stream (on-demand mode)
aws kinesis create-stream \
  --stream-name telemetry-events \
  --stream-mode-details StreamMode=ON_DEMAND

# AWS: Put a record to Kinesis (printf avoids base64-encoding a trailing newline)
aws kinesis put-record \
  --stream-name telemetry-events \
  --partition-key "device-001" \
  --data "$(printf '%s' '{"device":"device-001","temp":72.5}' | base64)"

# Azure: Send events to Event Hubs using Kafka protocol
# Event Hubs provides a Kafka-compatible endpoint
# Connection string: Endpoint=sb://eh-prod.servicebus.windows.net/;SharedAccessKeyName=send;SharedAccessKey=xxx

# GCP: Publish a message with ordering key
gcloud pubsub topics publish telemetry-events \
  --message='{"device":"device-001","temp":72.5}' \
  --ordering-key="device-001" \
  --attribute="device_id=device-001,region=us-central"

Cross-Cloud Messaging Patterns

Organizations running workloads across multiple clouds need messaging patterns that bridge provider boundaries. There are several approaches, each with different trade-offs in latency, reliability, and complexity.

Pattern 1: HTTP Webhook Bridge

The simplest cross-cloud pattern uses HTTP webhooks. An event in one cloud triggers an HTTP call to a service in another cloud. SNS can send to an HTTPS endpoint on Azure or GCP. Event Grid and Eventarc both support webhook targets. This approach is simple but lacks guaranteed delivery. Failed webhooks may be retried with exponential backoff, but persistent failures lose messages.
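
The retry behavior can be sketched as capped exponential backoff with an explicit dead-letter step; `send` and `deadLetter` here are hypothetical callbacks standing in for the HTTP POST and whatever store you park undeliverable messages in.

```typescript
// Capped exponential backoff: 1s, 2s, 4s, ... up to the cap.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry delivery until it succeeds or the attempt budget is exhausted,
// then dead-letter instead of silently losing the message.
async function deliverWithRetry(
  send: () => Promise<boolean>, // returns true on a 2xx response
  deadLetter: () => void,       // invoked when the budget is exhausted
  maxAttempts = 5,
  baseMs = 1000,
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    if (await send()) return true;
    await new Promise((r) => setTimeout(r, backoffDelayMs(attempt, baseMs)));
  }
  deadLetter();
  return false;
}
```

The dead-letter step is the part the plain webhook pattern lacks: without it, a receiver outage longer than the retry budget drops events.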

Pattern 2: Shared Message Broker

Deploy a managed Kafka cluster (Amazon MSK, Confluent Cloud, or self-managed) as a neutral message broker accessible from all clouds. Producers in any cloud publish to Kafka topics; consumers in any cloud subscribe. Confluent Cloud supports multi-cloud cluster linking for replicating topics between providers. This is the most robust pattern but adds operational complexity and cost.

Pattern 3: Event Replication

Replicate events between provider-native services using bridge functions. For example, an AWS Lambda function subscribes to an SQS queue and publishes messages to Google Cloud Pub/Sub via the Pub/Sub client library. Similarly, an Azure Function triggered by Service Bus can publish to SNS. This pattern uses native services on each side but requires custom bridge code.

cross-cloud-bridge.ts
// AWS Lambda: Bridge SQS messages to Google Cloud Pub/Sub
import { SQSHandler } from 'aws-lambda';
import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub({ projectId: process.env.GCP_PROJECT_ID });
// messageOrdering must be enabled on the publisher for orderingKey to be accepted
const topic = pubsub.topic('cross-cloud-events', { messageOrdering: true });

export const handler: SQSHandler = async (event) => {
  const publishPromises = event.Records.map(async (record) => {
    const message = JSON.parse(record.body);

    await topic.publishMessage({
      data: Buffer.from(JSON.stringify(message)),
      attributes: {
        source_cloud: 'aws',
        source_queue: record.eventSourceARN,
        original_message_id: record.messageId,
      },
      orderingKey: message.entity_id,
    });
  });

  await Promise.all(publishPromises);
};

// Azure Function: Bridge Service Bus to Amazon SNS
// (similar pattern using @aws-sdk/client-sns)
// GCP Cloud Function: Bridge Pub/Sub to Azure Service Bus
// (similar pattern using @azure/service-bus)

CloudEvents for Interoperability

The CloudEvents specification (CNCF project, now v1.0) provides a standard envelope format for events across systems. Azure Event Grid and GCP Eventarc support CloudEvents natively. AWS EventBridge uses its own format but can be mapped to CloudEvents with minimal transformation. When building cross-cloud messaging, standardize on CloudEvents format to simplify event handling across providers.
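
As a sketch of that mapping, the EventBridge envelope translates to CloudEvents 1.0 almost field-for-field (`detail-type` becomes `type`, `detail` becomes `data`); the interfaces below are trimmed to the relevant fields.

```typescript
// EventBridge envelope (trimmed to the fields used in the mapping).
interface EventBridgeEvent {
  id: string;
  source: string;
  "detail-type": string;
  time: string;
  detail: unknown;
}

// CloudEvents 1.0 envelope with the required context attributes.
interface CloudEvent {
  specversion: "1.0";
  id: string;
  source: string;
  type: string;
  time: string;
  datacontenttype: string;
  data: unknown;
}

function toCloudEvent(event: EventBridgeEvent): CloudEvent {
  return {
    specversion: "1.0",
    id: event.id,
    source: event.source,
    type: event["detail-type"], // detail-type maps to the CloudEvents type
    time: event.time,
    datacontenttype: "application/json",
    data: event.detail,        // the business payload maps to data
  };
}
```

Running this transform at the bridge boundary means consumers on Azure and GCP see the same envelope they get from Event Grid and Eventarc natively.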

Schema Management & Serialization

As messaging systems grow, managing message schemas becomes critical. Schema enforcement prevents producers from sending invalid messages and enables consumers to deserialize messages reliably. Each provider offers different schema management capabilities:

| Capability | AWS | Azure | GCP |
|---|---|---|---|
| Schema registry | EventBridge Schema Registry / Glue Schema Registry | Azure Schema Registry (Event Hubs) | Pub/Sub schema validation |
| Schema formats | JSON Schema, OpenAPI, Avro | Avro, JSON Schema | Avro, Protocol Buffers |
| Schema validation | Client-side (Glue) / Discovery (EventBridge) | Client-side serialization | Server-side enforcement on publish |
| Schema evolution | Compatibility modes in Glue Schema Registry | Compatibility checking in Schema Registry | Revision management with compatibility |
| Code generation | EventBridge Schema Registry generates TypeScript, Python, Java | Manual from Avro/JSON schemas | Protocol Buffers generates for all languages |

Serialization Best Practices

  • Use a binary format for high-throughput streams: Avro or Protocol Buffers reduce message size by 40–70% compared to JSON. This lowers networking costs and improves throughput on Kinesis, Event Hubs, and Pub/Sub.
  • Use JSON for event buses: EventBridge, Event Grid, and Eventarc all use JSON-based event formats. JSON is human-readable and easier to debug in event-driven architectures.
  • Enforce schemas at the boundary: GCP Pub/Sub is the only provider that enforces schemas server-side on publish. For AWS and Azure, enforce schemas in producer client libraries using a schema registry client.
  • Plan for schema evolution: Use backward-compatible schema changes (adding optional fields, not removing required fields) to avoid breaking consumers during rolling deployments.
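
The last rule can be expressed as a simple compatibility check, here over a simplified field model (name plus a required flag) rather than a full Avro or Protobuf schema:

```typescript
interface Field {
  name: string;
  required: boolean;
}

// A new schema version stays compatible with deployed consumers if it only
// adds optional fields and never removes an existing field or adds a
// required one (which old producers would not populate).
function isCompatibleEvolution(oldFields: Field[], newFields: Field[]): boolean {
  const next = new Set(newFields.map((f) => f.name));
  for (const f of oldFields) {
    if (!next.has(f.name)) return false; // field removed: breaking
  }
  const old = new Set(oldFields.map((f) => f.name));
  for (const f of newFields) {
    if (!old.has(f.name) && f.required) return false; // new required field: breaking
  }
  return true;
}
```

Real schema registries apply format-specific rules (Avro defaults, Protobuf field numbers), but the shape of the check is the same.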

Choosing the Right Messaging Architecture

The right messaging service depends on your specific requirements: delivery guarantees, throughput needs, ordering requirements, integration patterns, and budget constraints. Understanding cost models is an important part of this decision.

Cost Comparison

Messaging costs depend on message volume, message size, retention, and feature tier. The following table estimates monthly costs for a moderate workload of 100 million messages per month with an average message size of 4 KB:

| Service | Pricing Model | Estimated Monthly Cost (100M messages) |
|---|---|---|
| Amazon SQS (Standard) | $0.40 per million requests (first 1B) | $40 |
| Amazon SQS (FIFO) | $0.50 per million requests | $50 |
| Amazon EventBridge | $1.00 per million events | $100 |
| Azure Service Bus (Standard) | $0.05 per million operations + base rate | $15–$25 |
| Azure Event Hubs (Standard) | $0.028 per throughput unit/hour + $0.015 per million events | $42–$60 |
| Google Cloud Pub/Sub | $40 per TiB of message data delivered | $16 (at 4 KB avg) |

Data Transfer Costs Matter

Messaging costs extend beyond the per-message pricing. Data transfer between availability zones and regions adds significant cost at high volumes. AWS charges $0.01/GB for cross-AZ data transfer, which affects SQS consumers in different AZs from the queue. Azure Service Bus Premium includes cross-zone transfer at no extra cost. Google Cloud Pub/Sub includes intra-region transfer in the per-message price. Factor data transfer into your total cost calculations, especially for high-volume workloads.
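
For the 100-million-message workload in the cost table above, the cross-AZ arithmetic looks like this (assuming the $0.01/GB rate and that every message crosses an AZ boundary exactly once):

```typescript
// Cross-AZ transfer cost for the example workload.
const messages = 100_000_000;
const messageBytes = 4 * 1024; // 4 KB average message
const gb = (messages * messageBytes) / 1024 ** 3; // total GB transferred
const crossAzCost = gb * 0.01; // USD, at $0.01/GB
```

At 4 KB per message this comes to under $4 a month, so per-message fees dominate at this payload size; the transfer line grows linearly with message size, so it matters far more for megabyte-scale messages or multi-hop fan-out.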

Here are decision frameworks for common scenarios:

When to Use Queues

Use queue services (SQS, Service Bus queues, Pub/Sub with single subscription) for work distribution patterns where each message should be processed exactly once by one consumer. Common use cases: order processing, task queues, background job execution, and request buffering.

When to Use Event Buses

Use event bus services (EventBridge, Event Grid, Eventarc) for event-driven architectures where events are routed to multiple consumers based on content. Common use cases: microservice integration, infrastructure automation (react to resource changes), audit logging, and workflow orchestration.

When to Use Streaming

Use streaming services (Kinesis, Event Hubs, Pub/Sub with ordering) for high-throughput, ordered event processing with replay capability. Common use cases: log aggregation, real-time analytics, IoT data ingestion, change data capture, and event sourcing architectures.

Provider Selection Guide

  • Choose AWS when you need the broadest service integration (EventBridge connects to 90+ services), fine-grained event replay, or Lambda-first serverless patterns.
  • Choose Azure when you need enterprise messaging features (transactions, sessions, scheduled delivery), AMQP interoperability, or Kafka-compatible streaming without managing Kafka.
  • Choose GCP when you need globally distributed messaging with automatic replication, the simplest operational model (Pub/Sub handles both queuing and streaming), or server-side schema enforcement.
  • Choose multi-cloud when workloads span providers and you need cross-cloud event flow. Use Confluent Cloud (Kafka) or CloudEvents-based HTTP bridges for cross-cloud messaging.

Start Simple, Evolve as Needed

Do not over-architect your messaging layer from the start. Begin with a queue service for work distribution and add event bus or streaming services as event-driven patterns emerge. Most applications start with SQS, Service Bus, or Pub/Sub and evolve to EventBridge, Event Grid, or Eventarc as they mature. Streaming services should only be introduced when you have genuine high-throughput or replay requirements.


Key Takeaways

  1. AWS offers specialized services (SQS, SNS, EventBridge) for different messaging patterns.
  2. Azure provides Service Bus for enterprise messaging, Event Grid for events, and Event Hubs for streaming.
  3. GCP Pub/Sub is a unified service covering queues, pub/sub, and lightweight streaming.
  4. Dead-letter handling is supported across all providers but configured differently.
  5. Event Hubs and Amazon Kinesis serve high-throughput streaming; Pub/Sub can serve as both queue and stream.
  6. Cross-cloud messaging requires bridge patterns using webhooks, Lambda/Functions, or third-party tools.

Frequently Asked Questions

What is the GCP equivalent of AWS SQS?
GCP Pub/Sub with pull subscriptions serves a similar role to SQS. A Pub/Sub topic with a single pull subscription behaves like a queue. However, Pub/Sub is more feature-rich than SQS, supporting both pub/sub and queue patterns in a single service.
How do I choose between queues, events, and streams?
Queues (SQS, Service Bus): point-to-point work distribution with guaranteed processing. Events (EventBridge, Event Grid, Eventarc): reactive notifications with content-based routing. Streams (Kinesis, Event Hubs, Pub/Sub): high-throughput ordered data pipelines with time-based retention.
Can I send messages between clouds?
Not natively. Cross-cloud messaging requires bridge patterns: (1) HTTP webhooks between event systems, (2) Cloud functions that consume from one service and publish to another, (3) Third-party tools like Confluent Cloud (Kafka) or Solace that span multiple clouds.
Which messaging service is cheapest?
For low-volume: Event Grid ($0.60/million events) and SNS ($0.50/million notifications) are cheapest. For high-volume: Pub/Sub ($40/TiB) and SQS ($0.40/million requests) offer good value. Event Hubs and Kinesis charge per throughput unit/shard, which can be expensive at scale.
What about exactly-once delivery?
SQS FIFO and Pub/Sub support exactly-once delivery. Service Bus supports it via sessions and transactions. Event Hubs and Kinesis use checkpointing (effectively at-least-once with idempotent processing). For most distributed systems, idempotent consumers are recommended regardless.
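
An idempotent consumer can be as simple as remembering processed message IDs; in production the set would live in Redis or a database rather than process memory, a detail omitted in this sketch.

```typescript
// Idempotent consumer: an at-least-once redelivery becomes a no-op
// instead of a duplicated side effect.
class IdempotentConsumer {
  private processed = new Set<string>();
  public total = 0; // count of messages actually applied

  // Returns true if the message was applied, false if skipped as a duplicate.
  handle(messageId: string, apply: () => void): boolean {
    if (this.processed.has(messageId)) return false; // duplicate delivery
    apply(); // run the side effect exactly once per message ID
    this.processed.add(messageId);
    this.total++;
    return true;
  }
}
```

Note the ID should be a stable business key (order ID, event ID) rather than the broker's delivery ID, since redeliveries can carry fresh delivery IDs.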

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.