OCI Logging Analytics
Ingest, parse, search, and visualize logs with OCI Logging Analytics: sources, parsers, queries, dashboards, and alerts.
Prerequisites
- OCI account with Logging Analytics permissions
- Basic understanding of log management concepts
Introduction to OCI Logging Analytics
OCI Logging Analytics is a cloud-native log management and analytics service that enables you to index, enrich, search, analyze, and visualize log data from any source. Unlike basic log storage services, Logging Analytics provides machine learning-powered log parsing, automatic field extraction, anomaly detection, and a powerful query language for deep log analysis. It is the OCI equivalent of Splunk or Elastic Stack, fully managed and integrated with the OCI ecosystem.
The service handles the entire log lifecycle: collection from hundreds of source types (operating systems, databases, applications, cloud services), parsing and enrichment with metadata, indexing for fast search, analysis through an expressive query language, visualization with dashboards and saved searches, and retention management with configurable policies. Logging Analytics processes billions of log records daily across Oracle's customer base.
This guide covers the complete Logging Analytics workflow: onboarding and configuring the service, setting up log sources and parsers, collecting logs from various sources, writing queries with the Log Explorer, building dashboards, configuring alerts, and optimizing costs with storage tiers and retention policies.
Free Tier and Pricing
OCI Logging Analytics offers 10 GB of free log ingestion per month as part of the Always Free tier. Beyond that, pricing is based on data ingestion volume (per GB ingested) and active storage (per GB stored per month). There is no charge for queries, dashboards, alerts, or the number of users. Archive storage (for long-term retention) is significantly cheaper than active storage. This makes Logging Analytics cost-effective for both small and large deployments.
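To reason about the pricing model, a back-of-envelope cost estimate can be sketched in a few lines. Only the 10 GB/month free ingestion allowance comes from the text above; the per-GB rates below are hypothetical placeholders, so substitute current list prices for your region.

```python
# Rough monthly cost model for Logging Analytics.
# Only the free-tier allowance (10 GB/month of ingestion) is taken
# from this guide; the per-GB rates are PLACEHOLDERS, not real prices.
FREE_INGEST_GB = 10

def monthly_cost(ingest_gb, active_gb, archive_gb,
                 ingest_rate=0.30, active_rate=0.03, archive_rate=0.003):
    """Estimated monthly cost; all rates are hypothetical per-GB prices."""
    billable_ingest = max(0, ingest_gb - FREE_INGEST_GB)
    return (billable_ingest * ingest_rate
            + active_gb * active_rate
            + archive_gb * archive_rate)

# A source ingesting 50 GB/month, with 150 GB active and 500 GB archived:
print(round(monthly_cost(ingest_gb=50, active_gb=150, archive_gb=500), 2))
```

The key structural point the model captures is that queries, dashboards, and users are free: cost scales only with ingestion volume and stored gigabytes, with archive roughly an order of magnitude cheaper than active storage.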
Onboarding and Initial Setup
Before you can use Logging Analytics, you must onboard the service in your tenancy and configure the necessary IAM policies. Onboarding creates the namespace and enables the log analytics features in your region.
# Check if Logging Analytics is already onboarded
oci log-analytics namespace get \
--namespace-name <tenancy-namespace>
# Onboard Logging Analytics (one-time setup)
oci log-analytics namespace onboard \
--namespace-name <tenancy-namespace>
# Create IAM policies for Logging Analytics
oci iam policy create \
--compartment-id <tenancy-ocid> \
--name LogAnalyticsPolicy \
--description "Policies for OCI Logging Analytics" \
--statements '[
"Allow group LogAnalyticsAdmins to manage log-analytics-family in tenancy",
"Allow group LogAnalyticsUsers to read log-analytics-family in tenancy",
"Allow group LogAnalyticsUsers to use log-analytics-log-group in tenancy",
"Allow group LogAnalyticsUsers to manage log-analytics-entity in compartment monitoring",
"Allow service loganalytics to read objects in tenancy",
"Allow service loganalytics to read compartments in tenancy",
"Allow service loganalytics to read instances in tenancy"
]'
# Create a Log Group (container for log sources)
oci log-analytics log-group create \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--display-name "production-logs" \
--description "Log group for production workloads"
# List available log sources (pre-built parsers)
oci log-analytics source list \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--is-system true \
--query 'data.items[?contains(name, `Linux`)].{"Name": name, "Type": "source-type", "Description": description}' \
--output table
Log Sources and Parsers
Logging Analytics understands log formats through log sources and parsers. A log source defines where logs come from (a file path, an API, a database query) and which parser to use. A parser defines how to extract structured fields from raw log text. OCI provides over 300 pre-built sources and parsers for common software like Linux syslog, Apache, Nginx, Oracle Database, MySQL, Kubernetes, and OCI services. You can also create custom parsers for application-specific log formats.
Pre-Built Log Sources
| Category | Sources | Example Log Types |
|---|---|---|
| Operating System | Linux Syslog, Windows Event Log, Audit Log | /var/log/messages, /var/log/secure |
| Web Servers | Apache, Nginx, IIS | Access logs, error logs |
| Databases | Oracle DB, MySQL, PostgreSQL | Alert logs, slow query logs |
| Containers | Kubernetes, Docker | Pod logs, container events |
| OCI Services | Audit, VCN Flow Logs, LB Access | API audit, network flow data |
| Security | Cloud Guard, WAF, Firewall | Threat detections, blocked requests |
Creating a Custom Parser
# Create a custom parser for an application log format
# Example log: 2026-03-14T10:30:15.123Z [INFO] OrderService - Order ORD-001 created for customer CUST-42 total=299.99
oci log-analytics parser create \
--namespace-name <tenancy-namespace> \
--name "ecommerce-app-parser" \
--type REGEX \
--description "Parser for ecommerce application logs" \
--is-enabled true \
--field-maps '[
{"field-name": "timestamp", "regex-field-name": "timestamp"},
{"field-name": "log_level", "regex-field-name": "level"},
{"field-name": "service_name", "regex-field-name": "service"},
{"field-name": "order_id", "regex-field-name": "order_id"},
{"field-name": "customer_id", "regex-field-name": "customer_id"},
{"field-name": "order_total", "regex-field-name": "total"}
]' \
--content '(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)\s+\[(?<level>\w+)\]\s+(?<service>\w+)\s+-\s+Order\s+(?<order_id>[\w-]+)\s+created\s+for\s+customer\s+(?<customer_id>[\w-]+)\s+total=(?<total>[\d.]+)'
# Create a custom log source using the parser
oci log-analytics source create \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--name "ecommerce-app-logs" \
--source-type LOG \
--description "Ecommerce application logs" \
--parsers '[{
"parserName": "ecommerce-app-parser",
"isDefault": true
}]' \
--log-paths '["/var/log/ecommerce/app.log", "/var/log/ecommerce/app.*.log"]'
# Test the parser against sample log data
oci log-analytics parser test \
--namespace-name <tenancy-namespace> \
--content '2026-03-14T10:30:15.123Z [INFO] OrderService - Order ORD-001 created for customer CUST-42 total=299.99' \
--type REGEX \
--parser-name "ecommerce-app-parser"
Parser Development Workflow
Use the Log Explorer's built-in parser testing tool when developing custom parsers. Paste sample log lines and iterate on your regex pattern until all fields are correctly extracted. Start with a simple pattern that captures the most common log format, then add alternative patterns for variations like error messages or multi-line stack traces. OCI Logging Analytics supports multi-line log records through header and footer patterns.
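The iterate-on-sample-lines loop can also be prototyped locally before touching the OCI console. This sketch tests the regex from the parser definition above against the sample log line; note that Python named groups use `(?P<name>...)` where the OCI parser definition uses `(?<name>...)`.

```python
import re

# Local prototype of the ecommerce-app-parser regex from above.
# Python requires (?P<name>...) for named capture groups.
PATTERN = re.compile(
    r'(?P<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z)\s+'
    r'\[(?P<level>\w+)\]\s+(?P<service>\w+)\s+-\s+'
    r'Order\s+(?P<order_id>[\w-]+)\s+created\s+for\s+'
    r'customer\s+(?P<customer_id>[\w-]+)\s+total=(?P<total>[\d.]+)'
)

sample = ('2026-03-14T10:30:15.123Z [INFO] OrderService - '
          'Order ORD-001 created for customer CUST-42 total=299.99')

m = PATTERN.match(sample)
assert m is not None, "pattern failed to match sample line"
print(m.group('order_id'), m.group('customer_id'), m.group('total'))
# ORD-001 CUST-42 299.99
```

Once every field extracts correctly against a representative set of samples, translate the pattern back into the parser definition's `(?<name>...)` syntax.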
Log Collection Methods
OCI Logging Analytics supports multiple collection methods to handle different log sources. The Management Agent collects logs from compute instances and on-premises servers. The OCI Logging service connector pipes OCI service logs directly to Logging Analytics. Object Storage collection ingests logs from uploaded files. The REST API enables custom integrations from any source.
Management Agent Collection
# Install the management agent on a compute instance
# Download the agent RPM from OCI Console > Observability > Management Agents
# On the instance:
sudo rpm -ivh oracle.mgmt_agent.rpm
# Configure the agent with your tenancy details
sudo /opt/oracle/mgmt_agent/agent_inst/bin/setup.sh opts=input.rsp
# Create an entity (represents the source system)
oci log-analytics entity create \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--name "web-server-01" \
--entity-type-name "Host (Linux)" \
--management-agent-id <agent-ocid> \
--properties '{"Host Name": "web-server-01", "Operating System": "Oracle Linux 8"}'
# Associate log sources with the entity
oci log-analytics entity-source associate \
--namespace-name <tenancy-namespace> \
--entity-id <entity-ocid> \
--source-names '["Linux Syslog Logs", "Linux Secure Logs", "ecommerce-app-logs"]'
# Verify the association
oci log-analytics entity-source list \
--namespace-name <tenancy-namespace> \
--entity-id <entity-ocid> \
--query 'data.items[].{"Source": name, "State": "lifecycle-state"}' \
--output table
Service Connector Collection
# Create a Service Connector to pipe OCI Audit logs to Logging Analytics
oci sch service-connector create \
--compartment-id <compartment-ocid> \
--display-name "audit-to-log-analytics" \
--description "Send OCI Audit logs to Logging Analytics" \
--source '{
"kind": "logging",
"logSources": [{
"compartmentId": "<compartment-ocid>",
"logGroupId": "_Audit",
"logId": ""
}]
}' \
--target '{
"kind": "loggingAnalytics",
"logGroupId": "<la-log-group-ocid>",
"logSourceIdentifier": "OCI Audit Logs"
}'
# Create a connector for VCN Flow Logs
oci sch service-connector create \
--compartment-id <compartment-ocid> \
--display-name "flow-logs-to-log-analytics" \
--description "Send VCN flow logs to Logging Analytics" \
--source '{
"kind": "logging",
"logSources": [{
"compartmentId": "<compartment-ocid>",
"logGroupId": "<vcn-flow-log-group-ocid>",
"logId": "<flow-log-ocid>"
}]
}' \
--target '{
"kind": "loggingAnalytics",
"logGroupId": "<la-log-group-ocid>",
"logSourceIdentifier": "OCI VCN Flow Unified Schema Logs"
}'
Querying Logs with Log Explorer
The Log Explorer provides a powerful query interface for searching, filtering, and analyzing log data. The query language supports text search, field-based filtering, aggregation functions, time-based analysis, statistical functions, and piped transformations. Queries can process millions of log records in seconds and return results as tables, time series, charts, or geographic maps.
-- Basic search for error messages
'Error' or 'Exception' | where 'Log Source' = 'ecommerce-app-logs'
-- Search with time filter and field extraction
* | where 'Log Source' = 'Linux Syslog Logs'
and 'Entity Name' = 'web-server-01'
and 'Time' >= '2026-03-14T00:00:00Z'
| fields 'Time', 'Log Level', 'Message'
-- Count errors by service over time
* | where 'Log Level' = 'ERROR'
| timestats count by 'Service Name' span = 1h
-- Top 10 most frequent error messages
* | where 'Log Level' = 'ERROR'
| stats count by 'Message'
| sort -count
| head 10
-- Analyze order creation patterns
* | where 'Log Source' = 'ecommerce-app-logs'
and Message like '%Order%created%'
| extract field=Message 'total=(?P<amount>[\d.]+)'
| stats count as order_count,
avg(amount) as avg_order_value,
sum(amount) as total_revenue,
max(amount) as max_order
by 'Customer ID'
| sort -total_revenue
| head 20
-- Detect anomalous error rates
* | where 'Log Level' = 'ERROR'
| timestats count as error_count span = 5m
| addfields avg(error_count) as avg_errors,
stddev(error_count) as stddev_errors
| where error_count > avg_errors + (3 * stddev_errors)
-- Join logs from multiple sources for correlation
'HTTP 500' | where 'Log Source' = 'OCI Load Balancer Access Logs'
| link 'Client IP'
| where 'Log Source' = 'ecommerce-app-logs'
| fields 'Time', 'Client IP', 'Request URL', 'Error Message'
-- Analyze VCN flow logs for security investigation
* | where 'Log Source' = 'OCI VCN Flow Unified Schema Logs'
and 'Destination Port' in (22, 3389)
and 'Action' = 'ACCEPT'
| stats count as connections,
distinctcount('Source IP') as unique_sources
by 'Destination IP', 'Destination Port'
| sort -connections
Query Performance Tips
For large log volumes, always include a time filter and a log source filter in your queries to narrow the search scope. Use where clauses early in the query pipeline to reduce the data processed by subsequent operations. Avoid using wildcards at the beginning of search terms (like *error) as they prevent index usage. Use timestats instead of stats when you need time-bucketed aggregations for charting.
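The three-sigma rule behind the anomaly-detection query earlier in this section can be illustrated locally. This sketch flags any time bucket whose error count exceeds the mean plus three standard deviations across all buckets, mirroring what the `timestats`/`addfields` pipeline computes server-side.

```python
import statistics

# Flag 5-minute buckets whose error count exceeds
# mean + sigmas * population standard deviation.
def anomalous_buckets(counts, sigmas=3):
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts)
    threshold = mean + sigmas * sd
    return [i for i, c in enumerate(counts) if c > threshold]

counts = [5] * 19 + [100]          # 19 quiet buckets plus one spike
print(anomalous_buckets(counts))   # [19]
```

One caveat the sketch makes visible: because the spike itself inflates the mean and standard deviation, a three-sigma threshold only catches outliers that are large relative to the whole window, so short windows with few buckets can miss moderate spikes.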
Dashboards and Visualizations
Logging Analytics dashboards provide real-time visibility into your log data through customizable widgets. Each widget is powered by a saved search query and can display data as tables, line charts, bar charts, pie charts, donut charts, maps, or single-value indicators. Dashboards refresh automatically and can be shared with other users in your tenancy.
# Create a dashboard
oci management-dashboard dashboard create \
--compartment-id <compartment-ocid> \
--display-name "Production Operations" \
--description "Operational dashboard for production workloads" \
--provider-name "Logging Analytics" \
--provider-version "3.0.0" \
--is-oob-dashboard false \
--dashboard-type "saved-searches"
# Create a saved search (widget data source)
oci log-analytics saved-search create \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--display-name "Error Rate by Service" \
--description "Hourly error rate grouped by service" \
--query "* | where 'Log Level' = 'ERROR' | timestats count by 'Service Name' span = 1h" \
--type WIDGET \
--widget-type LINE_CHART
# Create a saved search for top errors
oci log-analytics saved-search create \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--display-name "Top Error Messages" \
--description "Most frequent error messages in the last 24 hours" \
--query "* | where 'Log Level' = 'ERROR' | stats count by 'Message' | sort -count | head 10" \
--type WIDGET \
--widget-type TABLE
# List saved searches
oci log-analytics saved-search list \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--query 'data.items[].{"Name": "display-name", "Type": type, "Widget": "widget-type"}' \
--output table
Alerts and Notifications
Logging Analytics integrates with OCI Monitoring and Notifications to alert you when specific log patterns are detected. You can create detection rules that continuously evaluate log queries and trigger alerts when conditions are met. This enables proactive incident response: get alerted when error rates spike, when specific security events occur, or when critical application failures appear in logs.
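Conceptually, the alarm condition in the examples below is a sliding-window count: fire when more than a threshold number of ERROR records land in a five-minute window. A minimal local sketch of that evaluation logic (timestamps as epoch seconds; window and threshold mirror the alarm example):

```python
# Sliding-window alert check: fire when the number of error events
# within the last window_s seconds exceeds the threshold.
def should_alert(error_timestamps, now, window_s=300, threshold=100):
    """Return True if errors in (now - window_s, now] exceed threshold."""
    recent = [t for t in error_timestamps if now - window_s <= t <= now]
    return len(recent) > threshold

# 101 errors between t=1000 and t=1100, evaluated at t=1200:
print(should_alert(list(range(1000, 1101)), now=1200))  # True
```

In production this evaluation is done for you: the scheduled task re-runs the detection query on a fixed interval, and the Monitoring alarm compares the resulting metric against the threshold.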
# Create a scheduled query that triggers an alarm
# First, create a scheduled task to run the detection query
oci log-analytics scheduled-task create \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--display-name "High Error Rate Detection" \
--task-type SAVED_SEARCH \
--schedules '[{
"type": "FIXED_FREQUENCY",
"misFirePolicy": "RETRY_ONCE",
"recurringInterval": "PT5M",
"repeatCount": -1
}]' \
--saved-search-id <saved-search-ocid>
# Create a monitoring alarm for Logging Analytics metrics
oci monitoring alarm create \
--compartment-id <compartment-ocid> \
--display-name "Critical Error Alert" \
--namespace oci_logging_analytics \
--query 'LogRecordCount[5m]{logSourceName = "ecommerce-app-logs", logLevel = "ERROR"}.sum() > 100' \
--severity CRITICAL \
--is-enabled true \
--destinations '["<notification-topic-ocid>"]' \
--body "More than 100 errors in 5 minutes from ecommerce application" \
--pending-duration "PT5M" \
--repeat-notification-duration "PT15M"
# Create a metric alarm for ingestion rate anomalies
oci monitoring alarm create \
--compartment-id <compartment-ocid> \
--display-name "Log Ingestion Drop" \
--namespace oci_logging_analytics \
--query 'LogRecordCount[15m]{logSourceName = "Linux Syslog Logs"}.sum() < 10' \
--severity WARNING \
--is-enabled true \
--destinations '["<notification-topic-ocid>"]' \
--body "Linux syslog ingestion has dropped below 10 records in 15 minutes - possible agent failure"
Storage Management and Cost Optimization
Log data grows continuously, and managing storage costs is critical for any log analytics deployment. OCI Logging Analytics provides two storage tiers: active storage for frequently queried data and archive storage for long-term retention. Active storage supports full query capabilities, while archive storage is cheaper but requires recalling data before it can be queried.
# View current storage usage
oci log-analytics storage get \
--namespace-name <tenancy-namespace> \
--query 'data.{"Active (bytes)": "active-data-size-in-bytes", "Archival config": "archiving-configuration", "Recalled (bytes)": "recall-billable-size-in-bytes"}'
# Set up archival policy (move logs older than 90 days to archive)
oci log-analytics storage update \
--namespace-name <tenancy-namespace> \
--archiving-configuration '{
"activeStorageDuration": "P90D",
"archivalState": "ARCHIVING"
}'
# Recall archived data for investigation
oci log-analytics storage recall \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--time-start "2025-12-01T00:00:00Z" \
--time-end "2025-12-31T23:59:59Z" \
--log-sets '["production-logs"]'
# Purge old data to free storage
oci log-analytics storage purge \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--time-start "2025-01-01T00:00:00Z" \
--time-end "2025-06-30T23:59:59Z" \
--purge-query-string "'Log Source' = 'debug-logs'"
# View ingestion usage by source
oci log-analytics source usage list \
--namespace-name <tenancy-namespace> \
--compartment-id <compartment-ocid> \
--query 'data.items[].{"Source": "source-name", "Usage (bytes)": "data-ingested-in-bytes"}' \
--output table
Cost Optimization Strategies
Reduce Logging Analytics costs by filtering out verbose debug logs at the source (only collect INFO and above in production), using shorter active storage retention for high-volume sources, archiving compliance-required logs rather than keeping them in active storage, and using the Service Connector Hub to pre-filter logs before ingestion. Monitor your daily ingestion volume in the Logging Analytics console and set budget alerts to catch unexpected spikes from new log sources or misconfigured applications.
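The first strategy, filtering verbose logs at the source, is simple to implement in the application or shipper itself. A minimal sketch of a severity gate (level names follow common logging conventions; adapt the map to your application's format):

```python
# Source-side severity filter: records below the minimum level are
# dropped before shipping, so they never count against ingestion.
SEVERITY = {"DEBUG": 10, "INFO": 20, "WARN": 30, "ERROR": 40, "FATAL": 50}

def keep(record_level, minimum="INFO"):
    """Return True if the record's level meets the minimum severity."""
    return SEVERITY.get(record_level.upper(), 0) >= SEVERITY[minimum]

records = ["DEBUG", "INFO", "DEBUG", "ERROR", "WARN"]
print([lvl for lvl in records if keep(lvl)])   # ['INFO', 'ERROR', 'WARN']
```

Dropping DEBUG records at the source is usually the single largest lever, since debug logging often dominates volume while contributing little to production investigations.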
Best Practices Summary
Effective log analytics requires thoughtful source configuration, query design, and operational practices. Structure your log groups by environment (production, staging, development) and application domain. Use consistent log formats across your applications to enable cross-service correlation. Build dashboards that answer specific operational questions rather than displaying raw data.
Configure alerts for actionable conditions only: alert on error rates, not individual errors. Use severity levels appropriately: critical for customer-facing impact, warning for degradation, and info for operational awareness. Review and tune alert thresholds monthly to reduce alert fatigue. Archive logs based on compliance and investigation requirements, not just storage cost.
Key Takeaways
- Logging Analytics provides ML-powered log parsing with 300+ pre-built sources and parsers.
- The query language supports text search, field filtering, aggregation, and statistical functions.
- Active and archive storage tiers enable cost-effective log retention management.
- Service Connector Hub pipes OCI service logs directly to Logging Analytics without custom code.
Written by CloudToolStack Team