
OCI Load Balancer Deep Dive

Master OCI flexible and network load balancers with health checks, SSL termination, backend sets, path-based routing, and session persistence.

CloudToolStack Team · 25 min read · Published Mar 14, 2026

Prerequisites

  • Understanding of OCI VCN networking and subnets
  • OCI account with load balancer permissions

Introduction to OCI Load Balancing

Oracle Cloud Infrastructure provides two distinct load balancing services designed to distribute traffic across multiple backend servers: the OCI Load Balancer (a flexible, layer-7 application load balancer) and the Network Load Balancer (a high-performance, layer-4 pass-through load balancer). Together, these services handle everything from simple HTTP request routing to ultra-low-latency TCP/UDP distribution across backend instances, container workloads, and on-premises servers.

Unlike cloud providers that bundle load balancing into a single service with multiple SKUs, OCI explicitly separates the two types, each with its own API, feature set, and pricing model. The flexible Load Balancer supports SSL termination, path-based routing, cookie-based session persistence, and Web Application Firewall integration. The Network Load Balancer provides connection-level distribution with source IP preservation, making it ideal for non-HTTP protocols, gaming servers, and IoT workloads.

This guide covers both services end-to-end: provisioning, backend set configuration, health check tuning, SSL certificate management, listener rules, and production best practices. Every section includes OCI CLI commands alongside console instructions so you can automate your load balancer deployments from day one.

Always Free Load Balancer

OCI's Always Free tier includes one flexible load balancer with 10 Mbps bandwidth. This is a permanent allocation that never expires, making it ideal for personal projects, development environments, and learning. The free load balancer supports all features including SSL termination, path-based routing, and health checks.

Flexible Load Balancer Architecture

The OCI flexible Load Balancer operates at layer 7 of the OSI model, inspecting HTTP/HTTPS headers and content to make intelligent routing decisions. It sits in a subnet within your VCN and distributes incoming traffic to backend servers defined in backend sets. Each load balancer can have multiple listeners, each listening on a different port and protocol combination.

The architecture consists of four key components:

Listeners define the port and protocol that the load balancer monitors for incoming traffic. A single load balancer can have multiple listeners, for example one on port 80 for HTTP and another on port 443 for HTTPS. Each listener is associated with a default backend set and can have routing policies and rule sets.

Backend Sets are logical groupings of backend servers (instances, IPs, or containers) that receive traffic from a listener. Each backend set has its own health check configuration, load balancing policy (round robin, least connections, or IP hash), and session persistence settings.

Backends are the individual servers within a backend set. Each backend is defined by an IP address and port, along with a weight for weighted round-robin distribution and drain/offline status for maintenance operations.

Health Checks verify that backends are healthy and able to receive traffic. The load balancer automatically removes unhealthy backends from rotation and re-adds them when they recover.

bash
# Create a flexible load balancer (public)
oci lb load-balancer create \
  --compartment-id $C \
  --display-name "web-lb" \
  --shape-name "flexible" \
  --shape-details '{"minimumBandwidthInMbps": 10, "maximumBandwidthInMbps": 100}' \
  --subnet-ids '["<public-subnet-ocid>"]' \
  --is-private false \
  --wait-for-state SUCCEEDED

# Create a private (internal) load balancer
oci lb load-balancer create \
  --compartment-id $C \
  --display-name "internal-api-lb" \
  --shape-name "flexible" \
  --shape-details '{"minimumBandwidthInMbps": 10, "maximumBandwidthInMbps": 400}' \
  --subnet-ids '["<private-subnet-ocid>"]' \
  --is-private true \
  --wait-for-state SUCCEEDED

# List all load balancers
oci lb load-balancer list \
  --compartment-id $C \
  --query 'data[].{"display-name":"display-name", "lifecycle-state":"lifecycle-state", "ip-address":"ip-addresses"[0]."ip-address"}' \
  --output table

Backend Sets and Load Balancing Policies

A backend set defines how traffic is distributed among a group of backend servers. OCI supports three load balancing policies, each suited to different workload characteristics:

Round Robin: Distributes requests sequentially across all healthy backends. This is the simplest policy and works well when all backends have similar capacity. Weighted round robin allows you to assign different weights to backends, sending proportionally more traffic to higher-capacity servers.

Least Connections: Routes each new request to the backend with the fewest active connections. This policy is ideal for workloads where request processing times vary significantly, as it naturally balances load based on actual server utilization rather than simple rotation.

IP Hash: Uses a hash of the source IP address to determine which backend receives the request. This ensures that requests from the same client IP always go to the same backend, providing a form of session persistence without cookies. However, it can lead to uneven distribution if traffic comes from a small number of source IPs (e.g., behind a corporate NAT).
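As a rough mental model (not OCI's implementation), the three policies can be sketched in a few lines of Python; the backend names and weights here are purely illustrative:

```python
import hashlib
from itertools import cycle

class Backend:
    def __init__(self, name, weight=1):
        self.name = name
        self.weight = weight
        self.active_connections = 0

def round_robin(backends):
    """Rotate through backends, repeating each one `weight`
    times per cycle (weighted round robin)."""
    expanded = [b for b in backends for _ in range(b.weight)]
    return cycle(expanded)

def least_connections(backends):
    """Pick the backend with the fewest active connections."""
    return min(backends, key=lambda b: b.active_connections)

def ip_hash(backends, client_ip):
    """Map a client IP to a stable backend via a hash of the
    source address, giving cookie-free affinity."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```

Note how `ip_hash` depends only on the client address: a small number of source IPs (e.g., one corporate NAT) collapses onto few backends, which is exactly the uneven-distribution caveat above.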

bash
# Create a backend set with round-robin policy
oci lb backend-set create \
  --load-balancer-id <lb-ocid> \
  --name "web-backend-set" \
  --policy "ROUND_ROBIN" \
  --health-checker-protocol "HTTP" \
  --health-checker-port 80 \
  --health-checker-url-path "/health" \
  --health-checker-interval-in-ms 10000 \
  --health-checker-timeout-in-ms 3000 \
  --health-checker-retries 3 \
  --wait-for-state SUCCEEDED

# Create a backend set with least connections
oci lb backend-set create \
  --load-balancer-id <lb-ocid> \
  --name "api-backend-set" \
  --policy "LEAST_CONNECTIONS" \
  --health-checker-protocol "HTTP" \
  --health-checker-port 8080 \
  --health-checker-url-path "/api/health" \
  --health-checker-interval-in-ms 5000 \
  --health-checker-timeout-in-ms 2000 \
  --health-checker-retries 3 \
  --wait-for-state SUCCEEDED

# Add backends to a backend set
oci lb backend create \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --ip-address "10.0.1.10" \
  --port 80 \
  --weight 1 \
  --wait-for-state SUCCEEDED

oci lb backend create \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --ip-address "10.0.1.11" \
  --port 80 \
  --weight 1 \
  --wait-for-state SUCCEEDED

# Get the health status of an individual backend
oci lb backend-health get \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --backend-name "10.0.1.10:80"

Use Weighted Round Robin for Canary Deployments

You can implement canary deployments using weighted round robin. Add the new version as a backend with weight 1 while existing backends have weight 9. This sends roughly 10% of traffic to the canary. Gradually increase the canary weight as you gain confidence. If issues arise, set the canary backend to "drain" mode to gracefully remove it from rotation.
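To sanity-check a canary rollout, the expected traffic split under weighted round robin is simply each backend's weight divided by the total. A quick sketch (backend names are illustrative):

```python
def traffic_share(weights):
    """Expected share of requests per backend under weighted
    round robin: weight / sum of all weights."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Stable backend at weight 9, canary at weight 1 -> canary gets ~10%
shares = traffic_share({"stable": 9, "canary": 1})
```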

Health Checks and Backend Monitoring

Health checks are the mechanism by which the load balancer determines whether a backend is capable of receiving traffic. OCI supports both HTTP and TCP health checks, each with configurable intervals, timeouts, and retry thresholds. A well-configured health check is critical for application availability because it determines how quickly unhealthy backends are detected and removed from the pool.

For HTTP health checks, you specify a URL path that the load balancer requests periodically. The backend must return a response within the configured timeout, and the HTTP status code must match the expected return code (default 200). If the backend fails the configured number of retries, it is marked as unhealthy and removed from rotation.

For TCP health checks, the load balancer attempts to establish a TCP connection to the backend on the specified port. If the connection succeeds within the timeout, the backend is considered healthy. TCP health checks are simpler but less accurate because they only verify network connectivity, not application-level health.
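Conceptually, the retry threshold behaves like a small state machine: enough consecutive failures mark the backend unhealthy. This sketch assumes a single passing check restores health immediately, which is a simplification, not OCI's documented recovery logic:

```python
class BackendHealth:
    """Track backend health the way a checker with a retry
    threshold does: `retries` consecutive failures mark it
    unhealthy; a passing check resets the failure count
    (simplified recovery model)."""
    def __init__(self, retries=3):
        self.retries = retries
        self.failures = 0
        self.healthy = True

    def record(self, check_passed):
        if check_passed:
            self.failures = 0
            self.healthy = True
        else:
            self.failures += 1
            if self.failures >= self.retries:
                self.healthy = False
        return self.healthy
```

With a 10-second interval and 3 retries, worst-case detection time is roughly interval × retries = 30 seconds, which is why the API backend set above uses a tighter 5-second interval.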

bash
# Update health check configuration for a backend set
oci lb health-checker update \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --protocol "HTTP" \
  --port 80 \
  --url-path "/health" \
  --interval-in-ms 10000 \
  --timeout-in-ms 3000 \
  --retries 3 \
  --return-code 200 \
  --wait-for-state SUCCEEDED

# Check overall backend set health
oci lb backend-set-health get \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set"

# Check individual backend health
oci lb backend-health get \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --backend-name "10.0.1.10:80"

# Set a backend to drain mode for maintenance
oci lb backend update \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --backend-name "10.0.1.10:80" \
  --drain true \
  --weight 1 \
  --backup false \
  --offline false \
  --wait-for-state SUCCEEDED

Health Check Endpoint Best Practices

Your health check endpoint should verify application readiness, not just return a 200 status. A good health check verifies database connectivity, cache availability, and critical dependency status. However, avoid making health checks too heavy or slow, as this can cause false positives. Keep health check responses under 100ms and use a dedicated lightweight endpoint like /health or /ping.
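A health endpoint that probes dependencies but stays cheap might look like the following sketch; the probe callables and the JSON response shape are assumptions for illustration, not an OCI requirement:

```python
import json
import time

def health_response(check_db, check_cache):
    """Build a /health response from lightweight dependency
    probes. Each probe is a zero-argument callable returning
    True/False; any failure yields 503 so the load balancer
    pulls the backend from rotation."""
    start = time.monotonic()
    checks = {"database": check_db(), "cache": check_cache()}
    status = 200 if all(checks.values()) else 503
    body = json.dumps({
        "status": "ok" if status == 200 else "degraded",
        "checks": checks,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    })
    return status, body
```

Keep the probes themselves cached or time-bounded so the endpoint stays well under the health checker's timeout.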

SSL Termination and Certificate Management

SSL termination offloads the CPU-intensive work of encrypting and decrypting HTTPS traffic from your backend servers to the load balancer. This simplifies certificate management, improves backend performance, and centralizes your TLS configuration. OCI Load Balancer supports TLS 1.2 and TLS 1.3, with configurable cipher suites for compliance requirements.

You can provide SSL certificates in three ways: upload your own certificates directly to the load balancer, use certificates managed by the OCI Certificates service, or integrate with Let's Encrypt for automatic renewal. OCI Certificates can automatically rotate certificates before expiration, eliminating the risk of service outages due to expired certificates.

bash
# Create an SSL certificate bundle for the load balancer
oci lb certificate create \
  --load-balancer-id <lb-ocid> \
  --certificate-name "my-app-cert" \
  --public-certificate-file /path/to/cert.pem \
  --private-key-file /path/to/key.pem \
  --ca-certificate-file /path/to/ca-chain.pem \
  --wait-for-state SUCCEEDED

# Create an HTTPS listener with SSL termination
oci lb listener create \
  --load-balancer-id <lb-ocid> \
  --name "https-listener" \
  --default-backend-set-name "web-backend-set" \
  --port 443 \
  --protocol "HTTP" \
  --ssl-certificate-name "my-app-cert" \
  --wait-for-state SUCCEEDED

# Create a rule set to redirect HTTP to HTTPS
oci lb rule-set create \
  --load-balancer-id <lb-ocid> \
  --name "http-to-https-redirect" \
  --items '[{
    "action": "REDIRECT",
    "conditions": [{"attributeName": "PATH", "attributeValue": "/", "operator": "FORCE_LONGEST_PREFIX_MATCH"}],
    "redirectUri": {"protocol": "HTTPS", "host": "{host}", "port": 443, "path": "{path}", "query": "{query}"},
    "responseCode": 301
  }]' \
  --wait-for-state SUCCEEDED

# Create an HTTP listener that applies the redirect rule set
oci lb listener create \
  --load-balancer-id <lb-ocid> \
  --name "http-redirect" \
  --default-backend-set-name "web-backend-set" \
  --port 80 \
  --protocol "HTTP" \
  --rule-set-names '["http-to-https-redirect"]' \
  --wait-for-state SUCCEEDED

# List certificates on a load balancer
oci lb certificate list \
  --load-balancer-id <lb-ocid> \
  --query 'data[].{"certificate-name":"certificate-name"}' \
  --output table

Network Load Balancer (NLB)

The OCI Network Load Balancer operates at layer 4, forwarding TCP, UDP, and ICMP traffic without inspecting application-layer content. It provides significantly higher throughput and lower latency than the flexible Load Balancer because it passes packets directly to backends without protocol-level processing.

Key advantages of the Network Load Balancer include source IP preservation (the backend sees the actual client IP, not the load balancer IP), support for non-HTTP protocols, and the ability to handle millions of connections per second. It is the right choice for database connections, gaming servers, SIP/VoIP traffic, IoT device communication, and any workload that requires raw TCP/UDP forwarding.

The NLB supports both public and private configurations. Public NLBs receive traffic from the internet through an assigned public IP address. Private NLBs distribute traffic within a VCN, typically for internal service-to-service communication or as a frontend for private backend services.
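The NLB's FIVE_TUPLE policy hashes the connection's source/destination addresses, ports, and protocol, so every packet of a flow lands on the same backend without any payload inspection. A simplified illustration (not OCI's actual hash function):

```python
import hashlib

def five_tuple_hash(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Pick a backend from the connection 5-tuple, as a
    layer-4 FIVE_TUPLE policy does: the same flow always
    maps to the same backend."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]
```

Because only packet headers feed the hash, this works identically for MySQL traffic, UDP game packets, or SIP, which is why no HTTP-level features (cookies, path routing) are available at this layer.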

bash
# Create a public Network Load Balancer
oci nlb network-load-balancer create \
  --compartment-id $C \
  --display-name "tcp-nlb" \
  --subnet-id <public-subnet-ocid> \
  --is-private false \
  --is-preserve-source-destination true \
  --wait-for-state SUCCEEDED

# Create a backend set for the NLB
oci nlb backend-set create \
  --network-load-balancer-id <nlb-ocid> \
  --name "tcp-backend-set" \
  --policy "FIVE_TUPLE" \
  --health-checker '{"protocol": "TCP", "port": 3306, "intervalInMillis": 10000, "timeoutInMillis": 3000, "retries": 3}' \
  --is-preserve-source true \
  --wait-for-state SUCCEEDED

# Add backends
oci nlb backend create \
  --network-load-balancer-id <nlb-ocid> \
  --backend-set-name "tcp-backend-set" \
  --port 3306 \
  --target-id <instance-ocid> \
  --weight 1 \
  --wait-for-state SUCCEEDED

# Create a TCP listener
oci nlb listener create \
  --network-load-balancer-id <nlb-ocid> \
  --name "tcp-listener" \
  --default-backend-set-name "tcp-backend-set" \
  --port 3306 \
  --protocol "TCP" \
  --wait-for-state SUCCEEDED

# List NLBs
oci nlb network-load-balancer list \
  --compartment-id $C \
  --query 'data.items[].{"display-name":"display-name", "lifecycle-state":"lifecycle-state"}' \
  --output table

Path-Based Routing and Virtual Hostnames

The flexible Load Balancer supports advanced traffic routing using path-based routing policies and virtual hostname listeners. These features allow a single load balancer to serve multiple applications or microservices, each with its own backend set, based on the URL path or the hostname in the HTTP Host header.

Path-based routing inspects the URL path of incoming requests and routes them to different backend sets based on pattern matching. For example, requests to /api/* can go to an API backend set while /static/* goes to a content server backend set. This is particularly useful in microservice architectures where different services handle different URL paths.

Virtual hostnames allow a single load balancer to handle traffic for multiple domain names. Each hostname can be associated with a different listener configuration, including different SSL certificates. This reduces cost by consolidating multiple applications behind a single load balancer.
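The routing decision can be pictured as hostname match first, then longest-prefix path match. This is a simplified model of that decision order, not OCI's routing engine; the backend set names are illustrative:

```python
def route(host, path, hostname_map, path_routes):
    """Resolve a request to a backend set: exact virtual-hostname
    match first, then longest matching path prefix."""
    if host in hostname_map:
        return hostname_map[host]
    best = None
    for prefix in path_routes:
        if path.startswith(prefix) and (best is None or len(prefix) > len(best)):
            best = prefix
    return path_routes[best] if best else None
```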

bash
# Create a path route set for microservice routing
oci lb path-route-set create \
  --load-balancer-id <lb-ocid> \
  --name "microservice-routes" \
  --path-routes '[
    {"backendSetName": "api-backend-set", "path": "/api", "pathMatchType": {"matchType": "PREFIX_MATCH"}},
    {"backendSetName": "web-backend-set", "path": "/", "pathMatchType": {"matchType": "PREFIX_MATCH"}},
    {"backendSetName": "admin-backend-set", "path": "/admin", "pathMatchType": {"matchType": "PREFIX_MATCH"}}
  ]' \
  --wait-for-state SUCCEEDED

# Update a listener to use path routing
oci lb listener update \
  --load-balancer-id <lb-ocid> \
  --listener-name "https-listener" \
  --default-backend-set-name "web-backend-set" \
  --port 443 \
  --protocol "HTTP" \
  --path-route-set-name "microservice-routes" \
  --wait-for-state SUCCEEDED

# Create a virtual hostname
oci lb hostname create \
  --load-balancer-id <lb-ocid> \
  --name "app1-hostname" \
  --hostname "app1.example.com" \
  --wait-for-state SUCCEEDED

oci lb hostname create \
  --load-balancer-id <lb-ocid> \
  --name "app2-hostname" \
  --hostname "app2.example.com" \
  --wait-for-state SUCCEEDED

# Associate hostnames with a listener
oci lb listener update \
  --load-balancer-id <lb-ocid> \
  --listener-name "https-listener" \
  --default-backend-set-name "web-backend-set" \
  --port 443 \
  --protocol "HTTP" \
  --hostname-names '["app1-hostname", "app2-hostname"]' \
  --wait-for-state SUCCEEDED

Session Persistence and Cookies

Session persistence (also called sticky sessions) ensures that all requests from a specific client are routed to the same backend server for the duration of a session. This is essential for applications that store session state on the server, such as shopping carts, user authentication state, or in-memory caches tied to a specific user.

OCI Load Balancer supports two types of session persistence:

Application Cookie Stickiness: The load balancer uses an existing cookie set by your application to determine which backend should handle the request. You specify the cookie name, and the load balancer ensures all requests with the same cookie value go to the same backend.

Load Balancer Cookie Stickiness: The load balancer generates and manages its own cookie to track client-to-backend affinity. The cookie name, path, domain, and expiration are configurable. This approach does not require any application changes.
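The LB-cookie flow works roughly like this: hand out a Set-Cookie naming the chosen backend on first contact, honor it on later requests, and fall back to rotation if that backend disappears. A simplified model (the cookie name reuses the example below; the mechanics here are illustrative, not OCI's internals):

```python
import itertools

class StickyBalancer:
    """Round robin with LB-generated cookie affinity: the first
    request gets a backend and a Set-Cookie; later requests
    carrying the cookie return to the same backend, falling
    back to rotation if that backend is gone."""
    def __init__(self, backends, cookie_name="X-Oracle-LB-Session"):
        self.backends = backends
        self.cookie_name = cookie_name
        self._rr = itertools.cycle(backends)

    def handle(self, cookies):
        backend = cookies.get(self.cookie_name)
        if backend not in self.backends:  # new client or vanished backend
            backend = next(self._rr)
        return backend, {self.cookie_name: backend}
```

The fallback branch corresponds to `disableFallback: false`; with fallback disabled, a request bound to a dead backend would instead receive an error.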

bash
# Configure load balancer cookie stickiness on a backend set
oci lb backend-set update \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --policy "ROUND_ROBIN" \
  --health-checker-protocol "HTTP" \
  --health-checker-port 80 \
  --health-checker-url-path "/health" \
  --lb-cookie-session-persistence-configuration '{"cookieName": "X-Oracle-LB-Session", "disableFallback": false}' \
  --wait-for-state SUCCEEDED

# Configure application cookie stickiness
oci lb backend-set update \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "app-backend-set" \
  --policy "ROUND_ROBIN" \
  --health-checker-protocol "HTTP" \
  --health-checker-port 8080 \
  --health-checker-url-path "/health" \
  --session-persistence-configuration '{"cookieName": "JSESSIONID", "disableFallback": true}' \
  --wait-for-state SUCCEEDED

Avoid Session Persistence When Possible

While session persistence solves the problem of server-side session state, it can lead to uneven load distribution, especially during scale-up or scale-down events. Modern best practice is to externalize session state to a shared store like Redis, Memcached, or a database, making your backends truly stateless. Stateless backends can be freely load balanced with round robin or least connections for optimal distribution.
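The stateless pattern is easy to picture with a shared store stand-in (a dict here in place of Redis or Memcached; names are illustrative): once state lives outside the backend, any server can pick up any session.

```python
class SharedSessionStore:
    """Stand-in for an external session store (e.g. Redis).
    Any backend can read a session written by any other, so
    requests no longer need to stick to one server."""
    def __init__(self):
        self._data = {}

    def save(self, session_id, state):
        self._data[session_id] = state

    def load(self, session_id):
        return self._data.get(session_id, {})

def handle_request(backend_name, session_id, store):
    """Any backend can serve any session once state is external."""
    state = store.load(session_id)
    state["last_backend"] = backend_name
    store.save(session_id, state)
    return state
```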

Monitoring and Troubleshooting

OCI provides comprehensive monitoring for load balancers through the Monitoring service. Key metrics include request count, response time, backend health status, active connections, and error rates. These metrics are available in the OCI Console, through the Monitoring API, and can be used to trigger alarms for automated incident response.

bash
# Query load balancer request metrics
oci monitoring metric-data summarize-metrics-data \
  --compartment-id $C \
  --namespace "oci_lbaas" \
  --query-text 'HttpRequests[5m]{resourceId = "<lb-ocid>"}.sum()'

# Query backend response time
oci monitoring metric-data summarize-metrics-data \
  --compartment-id $C \
  --namespace "oci_lbaas" \
  --query-text 'BackendTimeFirstByte[5m]{resourceId = "<lb-ocid>"}.percentile(0.95)'

# Query unhealthy backend count
oci monitoring metric-data summarize-metrics-data \
  --compartment-id $C \
  --namespace "oci_lbaas" \
  --query-text 'UnHealthyBackendServers[1m]{resourceId = "<lb-ocid>"}.max()'

# Create an alarm for unhealthy backends
oci monitoring alarm create \
  --compartment-id $C \
  --display-name "lb-unhealthy-backends" \
  --metric-compartment-id $C \
  --namespace "oci_lbaas" \
  --query-text 'UnHealthyBackendServers[1m]{resourceId = "<lb-ocid>"}.max() > 0' \
  --severity "CRITICAL" \
  --destinations '["<topic-ocid>"]' \
  --is-enabled true \
  --body "One or more backends are unhealthy"

# Load balancer access and error logs are managed through the OCI Logging service
oci logging log list \
  --log-group-id <log-group-ocid> \
  --output table

Production Best Practices

Deploying load balancers in production requires careful attention to high availability, security, and performance. Here are the essential best practices for OCI load balancer deployments:

High Availability: Place your load balancer in a regional subnet that spans all availability domains. OCI automatically provisions load balancer nodes across ADs for fault tolerance. For backends, distribute instances across multiple fault domains and ADs to survive hardware and data center failures.

Security: Use NSGs (Network Security Groups) instead of security lists to control traffic to and from the load balancer. Restrict backend security groups to only accept traffic from the load balancer's IP addresses or NSG. Enable WAF integration for public-facing load balancers to protect against OWASP Top 10 threats.

Bandwidth Planning: Start with a low minimum bandwidth (10 Mbps) and let the flexible shape scale toward its configured maximum to absorb traffic spikes. Monitor the load balancer's bandwidth metrics (PeakBandwidth in the oci_lbaas namespace) and set alarms at roughly 80% of your configured maximum. The flexible shape supports dynamic scaling from 10 Mbps to 8,000 Mbps without downtime.

Connection Draining: Always put a backend into drain mode before removing it for maintenance. Draining lets in-flight connections complete while preventing new connections from being routed to the backend. Monitor the backend's active connections and take it offline only once they have drained, so long-running requests finish gracefully.

Logging: Enable both access logs and error logs for troubleshooting and compliance. Access logs capture request details (client IP, URL, response code, latency), while error logs capture load balancer errors and backend failures. Send logs to OCI Logging for centralized analysis and retention.

bash
# Enable access logging via the OCI Logging service
oci logging log create \
  --log-group-id <log-group-ocid> \
  --display-name "lb-access-log" \
  --log-type SERVICE \
  --is-enabled true \
  --configuration '{"source": {"sourceType": "OCISERVICE", "service": "loadbalancer", "resource": "<lb-ocid>", "category": "access"}}'

# Enable error logging the same way with category "error"
oci logging log create \
  --log-group-id <log-group-ocid> \
  --display-name "lb-error-log" \
  --log-type SERVICE \
  --is-enabled true \
  --configuration '{"source": {"sourceType": "OCISERVICE", "service": "loadbalancer", "resource": "<lb-ocid>", "category": "error"}}'

# Configure connection draining
oci lb backend update \
  --load-balancer-id <lb-ocid> \
  --backend-set-name "web-backend-set" \
  --backend-name "10.0.1.10:80" \
  --drain true \
  --weight 1 \
  --backup false \
  --offline false \
  --wait-for-state SUCCEEDED

# Scale bandwidth dynamically
oci lb load-balancer update \
  --load-balancer-id <lb-ocid> \
  --shape-details '{"minimumBandwidthInMbps": 100, "maximumBandwidthInMbps": 1000}' \
  --wait-for-state SUCCEEDED

Choosing Between Flexible LB and Network LB

| Feature | Flexible Load Balancer | Network Load Balancer |
| --- | --- | --- |
| OSI layer | Layer 7 (application) | Layer 4 (transport) |
| Protocols | HTTP, HTTPS, HTTP/2 | TCP, UDP, ICMP |
| SSL termination | Yes | No (pass-through) |
| Path-based routing | Yes | No |
| Source IP preservation | Via X-Forwarded-For header | Native (direct pass-through) |
| WAF integration | Yes | No |
| Session persistence | Cookie-based | 5-tuple hash |
| Always Free | Yes (10 Mbps) | No |
| Best for | Web apps, APIs, microservices | Databases, gaming, IoT, non-HTTP |

Use the flexible Load Balancer for HTTP/HTTPS workloads that benefit from content-based routing, SSL termination, and WAF protection. Use the Network Load Balancer for non-HTTP protocols, workloads requiring source IP preservation, and scenarios demanding the lowest possible latency.

Related guides: OCI VCN Networking Deep Dive · OCI WAF Guide · OCI Monitoring & Alarms Guide

Key Takeaways

  1. OCI provides two load balancers: flexible (L7 HTTP) and network (L4 TCP/UDP) for different workloads.
  2. The Always Free tier includes one flexible load balancer with 10 Mbps bandwidth.
  3. Path-based routing and virtual hostnames enable hosting multiple applications behind a single load balancer.
  4. Health checks are the key mechanism for automatic backend failover and recovery.

Frequently Asked Questions

What is the difference between OCI flexible and network load balancers?
The flexible Load Balancer operates at layer 7, supporting HTTP/HTTPS with SSL termination, path-based routing, cookie-based session persistence, and WAF integration. The Network Load Balancer operates at layer 4, forwarding TCP/UDP traffic with source IP preservation and higher throughput. Use flexible LB for web apps and network LB for databases, gaming, and non-HTTP protocols.
Does OCI offer a free load balancer?
Yes, OCI's Always Free tier includes one flexible load balancer with 10 Mbps bandwidth. This includes all features like SSL termination, health checks, and path-based routing. The free load balancer is sufficient for development, testing, and lightweight production workloads.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. Oracle, AWS, Azure, and GCP are trademarks of their respective owners.