
OCI Queue Service Guide

Build reliable messaging with OCI Queue Service: queues, message lifecycle, dead-letter queues, visibility timeout, channels, and consumer patterns.

CloudToolStack Team · 22 min read · Published Mar 14, 2026

Prerequisites

  • Understanding of message queue concepts and producer-consumer patterns
  • OCI account with Queue Service permissions

Introduction to OCI Queue Service

The OCI Queue Service is a fully managed, serverless message queuing service that enables reliable asynchronous communication between distributed application components. It provides at-least-once message delivery, configurable visibility timeouts, dead-letter queues, and long polling, making it suitable for workload decoupling, task distribution, and event buffering. The Queue Service is conceptually similar to AWS SQS and Azure Queue Storage, providing a simple producer-consumer messaging pattern.

Unlike the OCI Streaming service (which provides Kafka-compatible, ordered, replayable event streams), the Queue Service is designed for point-to-point messaging where each message is consumed and deleted by a single consumer rather than replayed. Delivery is at-least-once, so consumers should still handle occasional duplicates. This makes it ideal for task queues, job scheduling, order processing, and any workflow where messages represent independent units of work.

This guide covers queue creation and configuration, message publishing and consumption, visibility timeouts, dead-letter queues, channels for message routing, monitoring, and production patterns for building reliable queue-based architectures on OCI.

Queue Service Pricing

OCI Queue Service pricing is based on the number of API requests (put, get, update, delete) per month. The first 1 million requests per month are free. Beyond that, pricing is approximately $0.40 per million requests. There are no charges for queue creation, message retention, or idle queues, making it very cost-effective for most workloads.
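The pricing model above is simple enough to sketch as a quick back-of-the-envelope calculation. The free-tier size and per-million rate below are the figures quoted in this guide; always check the official price list before budgeting.

```python
def monthly_queue_cost(requests: int, free_tier: int = 1_000_000,
                       price_per_million: float = 0.40) -> float:
    """Estimate monthly OCI Queue cost from total API requests (put/get/update/delete)."""
    billable = max(0, requests - free_tier)
    return billable / 1_000_000 * price_per_million

# 50 million requests: 49M billable at ~$0.40/M ≈ $19.60
print(round(monthly_queue_cost(50_000_000), 2))

# Under the free tier: nothing to pay
print(monthly_queue_cost(800_000))
```

Note that every put, get, update, and delete counts as a request, so a chatty short-polling consumer can dominate this number even on a low-traffic queue.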

Creating and Configuring Queues

A queue is the fundamental resource in the Queue Service. Each queue has configurable properties including message retention period, visibility timeout, maximum message size, and dead-letter queue settings. These properties control message lifecycle behavior and should be tuned based on your application requirements.

Retention Period: How long messages remain in the queue before being automatically deleted (default 7 days, maximum 7 days). Set this based on your acceptable data loss window.

Visibility Timeout: After a consumer receives a message, it becomes invisible to other consumers for this duration (default 30 seconds, maximum 12 hours). If the consumer does not delete the message within this window, it becomes visible again and can be picked up by another consumer.

Maximum Message Size: The maximum size of a single message in bytes (maximum 512 KB). For larger payloads, store the data in Object Storage and pass the reference URL in the queue message.

bash
# Create a queue
oci queue queue create \
  --compartment-id $C \
  --display-name "order-processing" \
  --retention-in-seconds 604800 \
  --visibility-in-seconds 120 \
  --timeout-in-seconds 30 \
  --dead-letter-queue-delivery-count 5 \
  --wait-for-state ACTIVE

# Create a queue with custom settings
oci queue queue create \
  --compartment-id $C \
  --display-name "image-processing" \
  --retention-in-seconds 259200 \
  --visibility-in-seconds 600 \
  --timeout-in-seconds 20 \
  --dead-letter-queue-delivery-count 3 \
  --channel-consumption-limit 10 \
  --wait-for-state ACTIVE

# List all queues
oci queue queue list \
  --compartment-id $C \
  --query 'data.items[].{"display-name":"display-name", id:id, "lifecycle-state":"lifecycle-state"}' \
  --output table

# Get queue details
oci queue queue get \
  --queue-id <queue-ocid> \
  --query 'data.{"display-name":"display-name", "messages-endpoint":"messages-endpoint", "retention-in-seconds":"retention-in-seconds", "visibility-in-seconds":"visibility-in-seconds"}'

# Update queue settings
oci queue queue update \
  --queue-id <queue-ocid> \
  --visibility-in-seconds 300 \
  --dead-letter-queue-delivery-count 10

Sending and Receiving Messages

Producers send messages to a queue using the PutMessages API. Each message contains a string payload (typically JSON) and an optional channel identifier for routing. Messages are stored durably and replicated across fault domains for high availability.

Consumers retrieve messages using the GetMessages API, which supports long polling. Long polling keeps the connection open for a configurable duration (up to 30 seconds), returning immediately when messages are available or after the timeout expires. This is more efficient than frequent short polling because it reduces the number of empty API calls.

bash
# Send a message to a queue
oci queue messages put-messages \
  --queue-id <queue-ocid> \
  --messages '[{"content": "{\"orderId\": \"ord-001\", \"action\": \"process\", \"amount\": 99.99}"}]'

# Send multiple messages in a batch
oci queue messages put-messages \
  --queue-id <queue-ocid> \
  --messages '[
    {"content": "{\"orderId\": \"ord-002\", \"action\": \"process\"}"},
    {"content": "{\"orderId\": \"ord-003\", \"action\": \"process\"}"},
    {"content": "{\"orderId\": \"ord-004\", \"action\": \"process\"}"}
  ]'

# Send a message with a channel
oci queue messages put-messages \
  --queue-id <queue-ocid> \
  --messages '[{"content": "{\"imageId\": \"img-001\", \"operation\": \"resize\"}", "metadata": {"channelId": "high-priority"}}]'

# Receive messages (with long polling)
oci queue messages get-messages \
  --queue-id <queue-ocid> \
  --limit 10 \
  --timeout-in-seconds 20 \
  --visibility-in-seconds 120

# Delete a message after processing (acknowledge)
oci queue messages delete-message \
  --queue-id <queue-ocid> \
  --message-receipt <receipt-value>

# Extend visibility timeout for a message (need more processing time)
oci queue messages update-message \
  --queue-id <queue-ocid> \
  --message-receipt <receipt-value> \
  --visibility-in-seconds 300

Use Long Polling to Reduce Costs

Always use long polling (set timeout-in-seconds to 20-30 seconds) when receiving messages. Without long polling, each GetMessages call returns immediately, even if there are no messages. This generates many empty API calls that count toward your monthly usage. Long polling reduces API calls by 90% or more for low-traffic queues.
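The savings claim above is easy to verify with arithmetic. The sketch below assumes a single idle consumer polling continuously around the clock with no messages arriving; real consumers sleep between polls, but the ratio holds.

```python
def polls_per_day(poll_interval_s: float) -> int:
    """Number of GetMessages calls an idle consumer issues in 24 hours."""
    return int(86_400 / poll_interval_s)

short = polls_per_day(1)    # tight short-polling loop
long_ = polls_per_day(20)   # 20 s long polling
print(short, long_, round(1 - long_ / short, 2))  # → 86400 4320 0.95
```

With a 20-second long poll, an idle consumer goes from 86,400 empty calls per day to 4,320, a 95% reduction, and every one of those calls counts toward the monthly request total.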

Visibility Timeout and Message Lifecycle

The visibility timeout is a critical concept in queue-based messaging. When a consumer receives a message, the message becomes invisible to other consumers for the duration of the visibility timeout. During this window, the consumer processes the message and deletes it from the queue to acknowledge successful processing.

If the consumer fails to delete the message within the visibility timeout (due to a crash, timeout, or processing error), the message becomes visible again and can be picked up by another consumer. This provides automatic retry behavior for failed message processing.

Setting the right visibility timeout is important. Too short, and messages will become visible again before processing completes, causing duplicate processing. Too long, and failed messages will wait unnecessarily before being retried.

bash
# Consumer processing loop (pseudocode):
#
# while True:
#     messages = get_messages(queue_id, timeout=20, visibility=120)
#     for message in messages:
#         try:
#             # Process the message
#             result = process_order(json.loads(message.content))
#
#             # If processing takes longer than expected, extend visibility
#             if processing_time > 60:
#                 update_message(queue_id, message.receipt, visibility=300)
#
#             # Delete the message to acknowledge successful processing
#             delete_message(queue_id, message.receipt)
#
#         except TemporaryError:
#             # Don't delete - message will become visible again after timeout
#             log.warning(f"Temporary error processing {message.id}, will retry")
#
#         except PermanentError:
#             # Delete to prevent infinite retries
#             # Or let it go to DLQ after max delivery count
#             log.error(f"Permanent error processing {message.id}")

# Check queue statistics
oci queue queue get-stats \
  --queue-id <queue-ocid> \
  --query 'data.{queue:queue, dlq:dlq}'
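The pseudocode above can be made concrete. The sketch below uses a hypothetical in-memory queue in place of the OCI SDK (the names `get_messages`, `delete_message`, and `process_order` are stand-ins, not real SDK calls); the control flow mirrors the receive-process-ack loop described above.

```python
import json

# Hypothetical in-memory queue standing in for the OCI SDK client; real code
# would call the SDK's GetMessages / DeleteMessage operations instead.
queue = [{"id": 1, "receipt": "r1", "content": json.dumps({"orderId": "ord-001"})}]
deleted = []

class TemporaryError(Exception):
    pass

def get_messages(limit=10):
    """Stand-in for long-polling GetMessages: pop up to `limit` messages."""
    batch, queue[:] = queue[:limit], queue[limit:]
    return batch

def delete_message(receipt):
    """Stand-in for DeleteMessage, the acknowledgement."""
    deleted.append(receipt)

def process_order(order):
    """Hypothetical business logic."""
    return f"processed {order['orderId']}"

for message in get_messages():
    try:
        process_order(json.loads(message["content"]))
        delete_message(message["receipt"])  # ack only after success
    except TemporaryError:
        # Leave undeleted: the message reappears after the visibility timeout.
        pass

print(deleted)  # → ['r1']
```

The key invariant is that the delete (acknowledgement) happens only after processing succeeds; crashing mid-processing simply lets the visibility timeout expire and another consumer retry.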

Dead-Letter Queues (DLQ)

A dead-letter queue automatically captures messages that fail processing after a specified number of delivery attempts. When a message is received and not deleted (acknowledged) for the configured dead-letter-queue-delivery-count number of times, it is automatically moved to the queue's associated DLQ.

DLQs are essential for preventing poison messages from blocking queue processing. Without a DLQ, a message that consistently causes processing failures would be retried indefinitely, preventing other messages from being processed. With a DLQ, failed messages are isolated for later investigation and manual reprocessing.

bash
# Configure DLQ delivery count when creating a queue
oci queue queue create \
  --compartment-id $C \
  --display-name "payment-processing" \
  --retention-in-seconds 604800 \
  --visibility-in-seconds 120 \
  --dead-letter-queue-delivery-count 5 \
  --wait-for-state ACTIVE

# The DLQ is automatically created as a sub-resource of the queue
# DLQ messages can be retrieved using the same GetMessages API
# with the DLQ's queue ID

# Get DLQ statistics
oci queue queue get-stats \
  --queue-id <queue-ocid> \
  --query 'data.dlq'

# Process DLQ messages manually
oci queue messages get-messages \
  --queue-id <dlq-queue-ocid> \
  --limit 10 \
  --timeout-in-seconds 5

# Reprocess a DLQ message by sending it back to the main queue
# 1. Get the message from DLQ
# 2. Put it back into the main queue
# 3. Delete from DLQ

# Purge all messages from a queue (careful!)
oci queue queue purge \
  --queue-id <queue-ocid> \
  --purge-type "NORMAL" \
  --force
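The three-step redrive described in the comments above (get from DLQ, put to main queue, delete from DLQ) can be sketched as follows. The in-memory lists are stand-ins for the two queues; a real redrive would issue the corresponding GetMessages, PutMessages, and DeleteMessage calls in the same order.

```python
# Hypothetical in-memory stand-ins for the main queue and its DLQ.
main_queue = []
dlq = [{"receipt": "d1", "content": '{"orderId": "ord-009"}'}]

def redrive(dlq, main_queue):
    """Move every DLQ message back to the main queue, acking only after re-enqueue."""
    moved = 0
    while dlq:
        msg = dlq[0]
        main_queue.append({"content": msg["content"]})  # 2. put back on main queue
        dlq.pop(0)                                      # 3. delete from DLQ (ack)
        moved += 1
    return moved

print(redrive(dlq, main_queue), len(main_queue))  # → 1 1
```

Ordering matters: re-enqueue before deleting from the DLQ, so a crash mid-redrive can only duplicate a message (which an idempotent consumer absorbs), never lose one.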

Monitor Your Dead-Letter Queue

A growing DLQ indicates a systemic issue with message processing. Create an alarm that triggers when DLQ message count exceeds zero. Investigate DLQ messages promptly to identify and fix the root cause, whether it is a bug in your consumer code, a downstream service outage, or malformed messages. Do not let DLQ messages accumulate without investigation.

Channels for Message Routing

Channels provide a way to logically partition messages within a single queue. Producers tag messages with a channel identifier, and consumers can filter messages by channel when calling GetMessages. This enables priority-based processing, message categorization, and multi-tenant queuing on a single queue resource.

For example, an image processing queue might use channels like "high-priority", "standard", and "bulk". A high-priority consumer processes only high-priority messages, while a standard consumer handles the rest. This avoids creating separate queues for each priority level.

bash
# Send messages with different channels
oci queue messages put-messages \
  --queue-id <queue-ocid> \
  --messages '[
    {"content": "{\"task\": \"urgent-resize\"}", "metadata": {"channelId": "high-priority"}},
    {"content": "{\"task\": \"thumbnail\"}", "metadata": {"channelId": "standard"}},
    {"content": "{\"task\": \"batch-convert\"}", "metadata": {"channelId": "bulk"}}
  ]'

# Consume messages from a specific channel
oci queue messages get-messages \
  --queue-id <queue-ocid> \
  --channel-filter "high-priority" \
  --limit 10 \
  --timeout-in-seconds 20

# Consume from the standard channel
oci queue messages get-messages \
  --queue-id <queue-ocid> \
  --channel-filter "standard" \
  --limit 10 \
  --timeout-in-seconds 20

# Get messages from any channel (no filter)
oci queue messages get-messages \
  --queue-id <queue-ocid> \
  --limit 10 \
  --timeout-in-seconds 20
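A consumer that prefers higher-priority channels can be sketched as below. The in-memory message list and `next_batch` helper are illustrative stand-ins: a real consumer would issue one channel-filtered GetMessages call per priority tier, falling through to the next tier when the higher one is empty.

```python
# Hypothetical in-memory messages tagged with a channelId, mirroring the
# channel-filter behaviour of GetMessages shown above.
messages = [
    {"channelId": "standard",      "task": "thumbnail"},
    {"channelId": "high-priority", "task": "urgent-resize"},
    {"channelId": "bulk",          "task": "batch-convert"},
]

def next_batch(messages, priority_order=("high-priority", "standard", "bulk"), limit=10):
    """Return up to `limit` messages, draining higher-priority channels first."""
    for channel in priority_order:
        batch = [m for m in messages if m["channelId"] == channel][:limit]
        if batch:
            return batch
    return []

print(next_batch(messages)[0]["task"])  # → urgent-resize
```

Beware of starvation with this scheme: a steady stream of high-priority messages can block the bulk channel indefinitely, so consider reserving some consumers for lower tiers.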

Monitoring and Metrics

The Queue Service emits metrics to OCI Monitoring for queue depth, message age, throughput, and DLQ size. These metrics are essential for capacity planning, performance monitoring, and detecting processing bottlenecks.

bash
# Monitor queue depth (number of visible messages)
oci monitoring metric-data summarize-metrics-data \
  --compartment-id $C \
  --namespace "oci_queue" \
  --query-text 'MessagesInQueue[1m]{resourceId = "<queue-ocid>"}.max()'

# Monitor message age (oldest message in queue)
oci monitoring metric-data summarize-metrics-data \
  --compartment-id $C \
  --namespace "oci_queue" \
  --query-text 'OldestMessageAge[1m]{resourceId = "<queue-ocid>"}.max()'

# Monitor DLQ depth
oci monitoring metric-data summarize-metrics-data \
  --compartment-id $C \
  --namespace "oci_queue" \
  --query-text 'DlqMessagesInQueue[1m]{resourceId = "<queue-ocid>"}.max()'

# Monitor throughput
oci monitoring metric-data summarize-metrics-data \
  --compartment-id $C \
  --namespace "oci_queue" \
  --query-text 'MessagesReceived[5m]{resourceId = "<queue-ocid>"}.sum()'

# Create alarm for growing queue depth
oci monitoring alarm create \
  --compartment-id $C \
  --display-name "queue-depth-warning" \
  --metric-compartment-id $C \
  --namespace "oci_queue" \
  --query-text 'MessagesInQueue[5m]{resourceId = "<queue-ocid>"}.max() > 10000' \
  --severity "WARNING" \
  --destinations '["<ops-topic-ocid>"]' \
  --is-enabled true \
  --body "Queue depth exceeds 10,000 messages - consumers may be falling behind"

# Create alarm for DLQ messages
oci monitoring alarm create \
  --compartment-id $C \
  --display-name "dlq-messages-alert" \
  --metric-compartment-id $C \
  --namespace "oci_queue" \
  --query-text 'DlqMessagesInQueue[1m]{resourceId = "<queue-ocid>"}.max() > 0' \
  --severity "CRITICAL" \
  --destinations '["<ops-topic-ocid>"]' \
  --is-enabled true \
  --body "Dead-letter queue has messages - investigate processing failures"

# Delete a queue
oci queue queue delete \
  --queue-id <queue-ocid> \
  --force

Production Best Practices

Building reliable queue-based architectures requires attention to idempotency, error handling, and scaling patterns:

Idempotent Consumers: The Queue Service provides at-least-once delivery, meaning messages may be delivered more than once in rare cases. Design your consumers to be idempotent by tracking processed message IDs or using database upserts instead of inserts.
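The message-ID-tracking approach can be sketched as follows. The in-memory set is a stand-in; in production the seen-IDs would live in a durable store (such as a database table with a unique constraint) so deduplication survives consumer restarts.

```python
processed_ids = set()  # stand-in for a durable store of seen message IDs

def handle_once(message_id: str, payload: dict) -> bool:
    """Process a message only if its id has not been seen; safe under redelivery."""
    if message_id in processed_ids:
        return False  # duplicate delivery: skip, work already done
    # ... real side effects (e.g. a database upsert) would go here ...
    processed_ids.add(message_id)
    return True

print(handle_once("msg-1", {}), handle_once("msg-1", {}))  # → True False
```

The same effect can often be had for free by making the side effect itself idempotent, for example an upsert keyed on the order ID instead of a plain insert.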

Right-Size Visibility Timeout: Set the visibility timeout to at least 2x the expected processing time for each message. This provides a buffer for slower-than-expected processing while minimizing the delay before retrying genuinely failed messages.

Batch Operations: Use batch put and batch get to reduce API call count and improve throughput. The service supports up to 20 messages per batch operation.
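Chunking a message list to respect the 20-message batch cap stated above is a one-liner; the sketch below shows the splitting logic only, with the actual PutMessages call left out.

```python
def batches(messages, batch_size=20):
    """Split messages into PutMessages-sized batches (the service caps a batch at 20)."""
    return [messages[i:i + batch_size] for i in range(0, len(messages), batch_size)]

chunks = batches([{"content": str(n)} for n in range(45)])
print(len(chunks), [len(c) for c in chunks])  # → 3 [20, 20, 5]
```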

Graceful Consumer Shutdown: When shutting down a consumer for deployment or maintenance, stop accepting new messages, finish processing in-flight messages, and delete acknowledged messages before terminating. This prevents messages from being unnecessarily retried.

Consumer Scaling: Scale the number of consumers based on queue depth. Monitor the MessagesInQueue metric and add consumers when the queue depth grows, remove consumers when it shrinks. OCI Functions can serve as auto-scaling consumers triggered by queue depth alarms.

Message Size Management: Keep message payloads small (under 10 KB) for optimal performance. For large payloads, store the data in Object Storage and include only the Object Storage URL in the queue message. This reduces queue costs and improves throughput.
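The claim-check pattern described above can be sketched as below. The in-memory dict stands in for an Object Storage bucket, and the bucket URL and key scheme are illustrative assumptions, not real endpoints.

```python
import json

MAX_QUEUE_PAYLOAD = 512 * 1024  # the 512 KB queue message limit

# In-memory stand-in for an Object Storage bucket; real code would use the OCI SDK.
object_store = {}

def prepare_message(payload: dict,
                    bucket_url: str = "https://objectstorage.example/b/claims") -> str:
    """Return a queue-ready body, off-loading oversized payloads (claim-check pattern)."""
    body = json.dumps(payload)
    if len(body.encode()) <= MAX_QUEUE_PAYLOAD:
        return body  # small enough: send the payload itself
    key = f"claim-{len(object_store)}"
    object_store[key] = body  # upload the large payload
    return json.dumps({"claimCheck": f"{bucket_url}/{key}"})  # queue only the reference

small = prepare_message({"orderId": "ord-001"})
large = prepare_message({"blob": "x" * (600 * 1024)})
print("claimCheck" in small, "claimCheck" in large)  # → False True
```

The consumer checks for a `claimCheck` field, fetches the object when present, and should also delete the object once processing succeeds to avoid orphaned claims.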


Key Takeaways

  1. Queue Service provides at-least-once delivery with configurable visibility timeouts for retry logic.
  2. Dead-letter queues automatically capture messages that fail processing after a configurable number of attempts.
  3. Channels enable logical message partitioning for priority-based processing on a single queue.
  4. Long polling reduces API call costs by 90% or more compared to frequent short polling.

Frequently Asked Questions

How does OCI Queue compare to AWS SQS?
OCI Queue Service is functionally similar to AWS SQS Standard queues, providing at-least-once delivery with visibility timeouts and dead-letter queues. OCI Queue adds channels for message routing within a single queue (SQS requires separate queues). OCI includes 1 million free requests per month. OCI Queue does not currently offer a FIFO (exactly-once, ordered) mode like SQS FIFO.
What is the maximum message size?
OCI Queue supports messages up to 512 KB in size. For larger payloads, store the data in OCI Object Storage and include only the Object Storage URL in the queue message. This claim-check pattern reduces queue costs, improves throughput, and avoids message size limitations.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.