OCI Connector Hub Guide
Move data between OCI services with Connector Hub: service connectors, source/target patterns, log filtering, Function transformations, and monitoring.
Prerequisites
- Understanding of OCI Logging and Monitoring services
- OCI account with Connector Hub and target service permissions
Introduction to OCI Connector Hub
OCI Connector Hub (also known as Service Connector Hub) is a fully managed service for moving data between OCI services. It provides a declarative, no-code approach to building data pipelines that connect sources to targets with optional transformation steps. Instead of writing custom integration code to move logs to object storage, metrics to streaming, or events to functions, you create a service connector that handles the data movement automatically with built-in error handling, batching, and retry logic.
Connector Hub supports a variety of source-target combinations including Logging to Object Storage, Logging to Streaming, Monitoring to Notifications, Streaming to Functions, and many more. It handles the operational complexity of data movement at scale, including backpressure management, dead-letter handling, and automatic scaling.
This guide covers service connector architecture, supported source and target patterns, building common data pipelines, configuring filters and transformations, monitoring connector health, and production best practices for data integration on OCI.
Connector Hub Pricing
OCI Connector Hub is free to use. There are no charges for creating or running service connectors. You only pay for the source and target services that the connector interacts with (e.g., Object Storage for stored data, Streaming for ingested messages, Functions for invocations). This makes Connector Hub an extremely cost-effective way to build data integration pipelines.
Service Connector Architecture
A service connector defines a unidirectional data pipeline with three components:
Source: The OCI service that produces data. Supported sources include Logging (log events), Monitoring (metric data points), Streaming (stream messages), and Queue (queue messages). The source determines what data enters the pipeline.
Task (Optional): A transformation step that processes data in transit. Tasks can filter log entries, enrich data with additional context, or invoke an OCI Function for custom transformation logic. Tasks are optional and can be chained.
Target: The OCI service that receives the processed data. Supported targets include Object Storage (for archival), Streaming (for event distribution), Functions (for custom processing), Notifications (for alerting), Monitoring (for custom metrics), and Logging Analytics (for analysis).
# Create a service connector: Logging to Object Storage
oci sch service-connector create \
--compartment-id $C \
--display-name "logs-to-object-storage" \
--source '{"kind": "logging", "logSources": [{"compartmentId": "<compartment-ocid>", "logGroupId": "<log-group-ocid>"}]}' \
--target '{"kind": "objectStorage", "bucketName": "log-archive", "namespace": "<namespace>", "objectNamePrefix": "logs/", "batchRolloverSizeInMBs": 100, "batchRolloverTimeInMs": 420000}' \
--wait-for-state ACTIVE
# Create: Logging to Streaming
oci sch service-connector create \
--compartment-id $C \
--display-name "logs-to-streaming" \
--source '{"kind": "logging", "logSources": [{"compartmentId": "<compartment-ocid>", "logGroupId": "<log-group-ocid>"}]}' \
--target '{"kind": "streaming", "streamId": "<stream-ocid>"}' \
--wait-for-state ACTIVE
# Create: Monitoring to Notifications (metric-based alerting)
oci sch service-connector create \
--compartment-id $C \
--display-name "metrics-to-notifications" \
--source '{"kind": "monitoring", "monitoringSources": [{"compartmentId": "<compartment-ocid>", "namespaceDetails": {"kind": "selected", "namespaces": [{"namespace": "oci_computeagent", "metrics": {"kind": "selected", "metricNames": ["CpuUtilization"]}}]}}]}' \
--target '{"kind": "notifications", "topicId": "<topic-ocid>"}' \
--wait-for-state ACTIVE
# List service connectors
oci sch service-connector list \
--compartment-id $C \
--query 'data.items[].{"display-name":"display-name", "lifecycle-state":"lifecycle-state"}' \
--output table
Common Source-Target Patterns
Connector Hub supports numerous source-target combinations. Here are the most common patterns used in production OCI environments:
| Pattern | Source | Target | Use Case |
|---|---|---|---|
| Log Archival | Logging | Object Storage | Long-term log retention and compliance |
| Log Streaming | Logging | Streaming | Real-time log processing and analysis |
| Log Analytics | Logging | Logging Analytics | Advanced log search and pattern detection |
| Event Processing | Streaming | Functions | Serverless event processing pipelines |
| Metric Alerting | Monitoring | Notifications | Custom metric-based alerting |
| Log Processing | Logging | Functions | Custom log enrichment and transformation |
| Queue Processing | Queue | Functions | Serverless message processing |
# Pattern: Streaming to Functions (serverless event processing)
oci sch service-connector create \
--compartment-id $C \
--display-name "stream-to-function" \
--source '{"kind": "streaming", "streamId": "<stream-ocid>", "cursor": {"kind": "TRIM_HORIZON"}}' \
--target '{"kind": "functions", "functionId": "<function-ocid>", "batchSizeInKbs": 5120, "batchSizeInNum": 100, "batchTimeInSec": 60}' \
--wait-for-state ACTIVE
# Pattern: Logging to Functions (custom log processing)
oci sch service-connector create \
--compartment-id $C \
--display-name "log-enrichment" \
--source '{"kind": "logging", "logSources": [{"compartmentId": "<compartment-ocid>", "logGroupId": "<log-group-ocid>", "logId": "<log-ocid>"}]}' \
--target '{"kind": "functions", "functionId": "<enrichment-function-ocid>"}' \
--wait-for-state ACTIVE
# Pattern: Logging to Logging Analytics
oci sch service-connector create \
--compartment-id $C \
--display-name "logs-to-analytics" \
--source '{"kind": "logging", "logSources": [{"compartmentId": "<compartment-ocid>", "logGroupId": "<log-group-ocid>"}]}' \
--target '{"kind": "loggingAnalytics", "logGroupId": "<la-log-group-ocid>"}' \
--wait-for-state ACTIVE
Filtering and Log Queries
Connector Hub supports log filtering using a query-based approach. You can filter log entries based on log content, source, type, and custom fields before they reach the target. This reduces the volume of data stored or processed and ensures that only relevant data flows through the pipeline.
Filters use the OCI Logging query syntax, which supports field matching, pattern matching, and logical operators. Filters are evaluated before data reaches the task or target, so filtering reduces both processing cost and storage consumption.
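To build intuition for what a logRule condition does, here is a minimal Python sketch of level-based filtering. This is illustrative only: the real evaluation happens inside Connector Hub using the Logging query syntax, and the record shape here is an assumption.

```python
# Illustrative sketch: mimics how a condition like
# "data.level = 'ERROR' OR data.level = 'CRITICAL'" selects entries
# before they reach the target. Field names are assumptions.

def matches_filter(entry, levels=("ERROR", "CRITICAL")):
    """Return True if the entry's data.level is in the allowed set."""
    return entry.get("data", {}).get("level") in levels

def apply_filter(entries):
    """Keep only the entries that pass the filter, as the connector would."""
    return [e for e in entries if matches_filter(e)]

logs = [
    {"data": {"level": "INFO", "message": "health check ok"}},
    {"data": {"level": "ERROR", "message": "upstream timeout"}},
    {"data": {"level": "CRITICAL", "message": "disk full"}},
]
print(len(apply_filter(logs)))  # 2 entries survive the filter
```

The key point is that rejected entries never reach the task or target, so they incur no downstream storage or processing cost.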
# Create a connector with log filtering
oci sch service-connector create \
--compartment-id $C \
--display-name "error-logs-only" \
--source '{"kind": "logging", "logSources": [{"compartmentId": "<compartment-ocid>", "logGroupId": "<log-group-ocid>"}]}' \
--target '{"kind": "objectStorage", "bucketName": "error-logs", "namespace": "<namespace>"}' \
--tasks '[{"kind": "logRule", "condition": "data.level = '\''ERROR'\'' OR data.level = '\''CRITICAL'\''"}]' \
--wait-for-state ACTIVE
# Filter for specific service logs
oci sch service-connector create \
--compartment-id $C \
--display-name "lb-access-logs" \
--source '{"kind": "logging", "logSources": [{"compartmentId": "<compartment-ocid>", "logGroupId": "<log-group-ocid>"}]}' \
--target '{"kind": "objectStorage", "bucketName": "lb-logs", "namespace": "<namespace>"}' \
--tasks '[{"kind": "logRule", "condition": "source = '\''oci_lbaas'\''" }]' \
--wait-for-state ACTIVE
# Filter with multiple conditions
# "condition": "data.statusCode >= 400 AND source = 'apigateway'"
# "condition": "data.action = 'REJECT' OR data.action = 'DROP'"
# "condition": "search(data.message, 'timeout') OR search(data.message, 'connection refused')"Filter Before Storing
Always apply the most restrictive filter possible at the connector level rather than filtering at the target. For example, if you only need error logs in Object Storage, filter for ERROR level at the connector rather than storing all logs and filtering later. This significantly reduces Object Storage costs and makes downstream analysis faster because there is less data to search through.
Function-Based Transformations
For complex data transformations that go beyond simple filtering, you can use OCI Functions as a task step in the connector pipeline. The function receives batches of records, performs custom processing (enrichment, format conversion, deduplication), and returns the transformed records for delivery to the target.
# Create a connector with a Function transformation task
oci sch service-connector create \
--compartment-id $C \
--display-name "enriched-logs" \
--source '{"kind": "logging", "logSources": [{"compartmentId": "<compartment-ocid>", "logGroupId": "<log-group-ocid>"}]}' \
--tasks '[{"kind": "function", "functionId": "<enrichment-function-ocid>", "batchSizeInKbs": 5120, "batchTimeInSec": 60}]' \
--target '{"kind": "objectStorage", "bucketName": "enriched-logs", "namespace": "<namespace>"}' \
--wait-for-state ACTIVE
# Example Function for log enrichment (Python). The lookup_geo,
# lookup_cost_center, and normalize_timestamp helpers are placeholders
# for your own enrichment logic.
# import io, json
# from fdk import response
#
# def handler(ctx, data: io.BytesIO = None):
#     records = json.loads(data.getvalue())
#     enriched = []
#     for record in records:
#         # Add geolocation based on source IP
#         record["geo"] = lookup_geo(record.get("sourceAddress"))
#         # Add cost center from resource tags
#         record["costCenter"] = lookup_cost_center(record.get("resourceId"))
#         # Normalize timestamp format
#         record["normalizedTime"] = normalize_timestamp(record.get("time"))
#         enriched.append(record)
#     return response.Response(ctx, status_code=200,
#                              response_data=json.dumps(enriched))
Monitoring Connector Health
Connector Hub emits metrics to OCI Monitoring for throughput, latency, errors, and backlog. Monitoring these metrics is essential for ensuring data flows reliably through your pipelines and for detecting issues before data loss occurs.
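As a mental model for the alarm queries that follow, an expression like `Errors[5m].sum() > 0` reduces a window of datapoints to a single value and compares it to a threshold. A rough Python sketch (the datapoint values are made up):

```python
# Illustrative sketch of how a sum-over-window alarm condition
# such as Errors[5m]{...}.sum() > 0 evaluates. Values are made up.

def alarm_fires(datapoints, threshold=0):
    """Sum the datapoints in the window and compare to the threshold."""
    return sum(datapoints) > threshold

error_counts = [0, 0, 3, 1, 0]  # per-minute Errors datapoints in a 5m window
print(alarm_fires(error_counts))  # True: 4 errors in the window
```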
# Monitor connector throughput
oci monitoring metric-data summarize-metrics-data \
--compartment-id $C \
--namespace "oci_service_connector_hub" \
--query-text 'BytesProcessed[5m]{resourceId = "<connector-ocid>"}.sum()'
# Monitor error count
oci monitoring metric-data summarize-metrics-data \
--compartment-id $C \
--namespace "oci_service_connector_hub" \
--query-text 'Errors[5m]{resourceId = "<connector-ocid>"}.sum()'
# Monitor target delivery latency
oci monitoring metric-data summarize-metrics-data \
--compartment-id $C \
--namespace "oci_service_connector_hub" \
--query-text 'TargetResponseTime[5m]{resourceId = "<connector-ocid>"}.percentile(0.95)'
# Create alarm for connector errors
oci monitoring alarm create \
--compartment-id $C \
--display-name "connector-errors" \
--metric-compartment-id $C \
--namespace "oci_service_connector_hub" \
--query-text 'Errors[5m]{resourceId = "<connector-ocid>"}.sum() > 0' \
--severity "WARNING" \
--destinations '["<ops-topic-ocid>"]' \
--is-enabled true \
--body "Service connector experiencing errors - data may not be delivered"
# Create alarm for connector backlog
oci monitoring alarm create \
--compartment-id $C \
--display-name "connector-backlog" \
--metric-compartment-id $C \
--namespace "oci_service_connector_hub" \
--query-text 'Backlog[5m]{resourceId = "<connector-ocid>"}.max() > 1000' \
--severity "WARNING" \
--destinations '["<ops-topic-ocid>"]' \
--is-enabled true \
--body "Connector backlog growing - target may be slow or unavailable"
# Get connector lifecycle state
oci sch service-connector get \
--service-connector-id <connector-ocid> \
--query 'data.{"lifecycle-state":"lifecycle-state", "lifecycle-details":"lifecycle-details"}'
# Deactivate a connector
oci sch service-connector deactivate \
--service-connector-id <connector-ocid>
# Activate a connector
oci sch service-connector activate \
--service-connector-id <connector-ocid>
# Delete a connector
oci sch service-connector delete \
--service-connector-id <connector-ocid> \
--force
Production Best Practices
Building reliable data pipelines with Connector Hub requires attention to data integrity, error handling, and operational monitoring:
IAM Policies: Connector Hub requires IAM policies that grant the service permission to read from sources and write to targets. Create a dedicated policy for each connector that follows least privilege. The required policy format is:
Allow any-user to use <target-resource> in compartment <name> where all {request.principal.type = 'serviceconnector'}
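For example, a connector writing to an Object Storage bucket typically needs a statement along these lines (compartment, bucket, and OCID values are placeholders; verify the exact verbs and conditions against the policy reference for your target service):

```
Allow any-user to manage objects in compartment ops where all {request.principal.type = 'serviceconnector', request.principal.compartment.id = '<connector-compartment-ocid>', target.bucket.name = 'log-archive'}
```

Scoping the condition to the connector's compartment and the specific bucket keeps the grant as narrow as possible.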
Batching Configuration: For Function and Object Storage targets, configure appropriate batch sizes and timeouts. Larger batches are more efficient but increase latency. For real-time processing, use smaller batches with shorter timeouts. For archival, use larger batches (100 MB+) to create fewer, larger files.
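The rollover semantics can be summarized as "flush when either limit is hit first." A small Python sketch, using the batchRolloverSizeInMBs=100 and batchRolloverTimeInMs=420000 values from the examples above:

```python
# Sketch of the batch-rollover decision for an Object Storage target:
# a batch is flushed when it reaches the size limit OR the time limit,
# whichever comes first.

def should_flush(batch_bytes, batch_age_ms,
                 max_bytes=100 * 1024 * 1024, max_age_ms=420_000):
    """Return True when either rollover limit has been reached."""
    return batch_bytes >= max_bytes or batch_age_ms >= max_age_ms

print(should_flush(batch_bytes=50 * 1024 * 1024, batch_age_ms=60_000))   # False: under both limits
print(should_flush(batch_bytes=100 * 1024 * 1024, batch_age_ms=60_000))  # True: size limit reached
print(should_flush(batch_bytes=1024, batch_age_ms=420_000))              # True: time limit reached
```

Tuning is a latency/efficiency trade: lowering max_age_ms delivers data sooner but produces more, smaller objects.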
Error Handling: Monitor the Errors metric for every connector. When errors occur, check the connector lifecycle details for error messages. Common issues include IAM permission errors, target capacity limits, and network connectivity problems. Connectors automatically retry transient errors with exponential backoff.
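Exponential backoff with jitter is the general strategy described above. Connector Hub does not publish its exact retry schedule, so the numbers here are assumptions purely for illustration:

```python
import random

# Illustrative sketch of retry with exponential backoff and full jitter.
# The base delay and cap are assumptions, not Connector Hub's actual values.

def backoff_delays(attempts, base=1.0, cap=60.0):
    """Delay before each retry: base * 2^n, capped, with full jitter."""
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

delays = backoff_delays(5)
print(delays)  # five random delays whose upper bound doubles each attempt, capped at 60s
```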
Data Retention: When archiving logs to Object Storage, configure lifecycle rules on the target bucket to transition old data to Infrequent Access or Archive tiers. This significantly reduces long-term storage costs while maintaining compliance with retention requirements.
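As a sketch, the lifecycle rules for the target bucket could be expressed as an --items payload for `oci os object-lifecycle-policy put`. The rule names, prefix, and retention periods below are assumptions; verify the schema against the Object Storage CLI reference before using it:

```json
[
  {
    "name": "archive-old-logs",
    "action": "ARCHIVE",
    "isEnabled": true,
    "timeAmount": 90,
    "timeUnit": "DAYS",
    "objectNameFilter": {"inclusionPrefixes": ["logs/"]}
  },
  {
    "name": "delete-expired-logs",
    "action": "DELETE",
    "isEnabled": true,
    "timeAmount": 2555,
    "timeUnit": "DAYS",
    "objectNameFilter": {"inclusionPrefixes": ["logs/"]}
  }
]
```

Scoping the rules to the connector's objectNamePrefix ensures unrelated objects in the bucket are untouched.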
Testing: Test new connectors in a development compartment before deploying to production. Verify that data flows correctly, filters work as expected, and target data is in the expected format. Use the connector's deactivate/activate feature to temporarily pause pipelines during testing.
Key Takeaways
1. Connector Hub is completely free, with charges only for the source and target services used.
2. Common patterns include log archival to Object Storage, log streaming, and metric-based alerting.
3. Log filtering reduces storage costs by forwarding only relevant entries to targets.
4. Function-based tasks enable custom data enrichment and transformation in the pipeline.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.