
OCI Container Instances

Run serverless containers on OCI without Kubernetes: multi-container pods, health checks, volumes, and load balancing.

CloudToolStack Team · 20 min read · Published Mar 14, 2026

Prerequisites

  • Basic understanding of Docker containers
  • OCI account with container instances permissions

Introduction to OCI Container Instances

OCI Container Instances is a serverless compute service that lets you run containers without managing any underlying infrastructure. Unlike Oracle Container Engine for Kubernetes (OKE), which requires provisioning and managing a Kubernetes cluster, Container Instances lets you launch one or more containers with a single API call. There is no cluster to create, no nodes to patch, no control plane to manage, and no Kubernetes expertise required.

Container Instances are ideal for workloads that need the packaging benefits of containers (consistent environments, dependency isolation, easy CI/CD integration) without the orchestration complexity of Kubernetes. Common use cases include batch processing jobs, CI/CD build agents, development and testing environments, data processing pipelines, web applications, API backends, and one-off tasks that need to run in a container and exit.

This guide covers everything from creating your first container instance to advanced configurations like multi-container pods, volume mounts, environment variable injection, resource limits, graceful shutdown, and integration with other OCI services. You will learn both the OCI CLI and Terraform approaches to managing container instances at scale.

Container Instances vs. OKE

Choose Container Instances when you want to run containers without any cluster management overhead. Choose OKE when you need Kubernetes features like service discovery, rolling deployments, auto-scaling, pod affinity rules, custom operators, or multi-service communication via Kubernetes networking. Container Instances charge only for the vCPU and memory consumed while the instance is running. There is no charge for the container runtime or orchestration layer.

Creating Your First Container Instance

A container instance runs one or more containers on a managed compute host. You specify the container image, CPU and memory requirements, networking configuration, and optional environment variables and volume mounts. The service pulls the image, starts the container, and provides you with logs and status information.

Required IAM Policies

bash
# Allow users to manage container instances
oci iam policy create \
  --compartment-id <tenancy-ocid> \
  --name ContainerInstancesPolicy \
  --description "Allow developers to use Container Instances" \
  --statements '[
    "Allow group Developers to manage container-instances in compartment containers",
    "Allow group Developers to manage repos in compartment containers",
    "Allow group Developers to use virtual-network-family in compartment containers",
    "Allow group Developers to read all-artifacts in compartment containers",
    "Allow group Developers to manage log-groups in compartment containers",
    "Allow any-user to read repos in tenancy where request.principal.type = \u0027containerinstance\u0027",
    "Allow any-user to use virtual-network-family in compartment containers where request.principal.type = \u0027containerinstance\u0027"
  ]'

Launching a Simple Container

bash
# Create a container instance running nginx
oci container-instances container-instance create \
  --compartment-id <compartment-ocid> \
  --availability-domain "AD-1" \
  --display-name "web-server" \
  --shape "CI.Standard.E4.Flex" \
  --shape-config '{"ocpus": 1, "memoryInGBs": 4}' \
  --containers '[{
    "displayName": "nginx",
    "imageUrl": "docker.io/library/nginx:latest",
    "resourceConfig": {
      "vcpusLimit": 1.0,
      "memoryLimitInGBs": 4.0
    }
  }]' \
  --vnics '[{
    "subnetId": "<subnet-ocid>",
    "isPublicIpAssigned": true,
    "nsgIds": ["<nsg-ocid>"]
  }]'

# Check the container instance status
oci container-instances container-instance get \
  --container-instance-id <instance-ocid> \
  --query 'data.{"Name": "display-name", "State": "lifecycle-state", "Shape": shape, "IP": "vnics"[0]."private-ip", "Containers": containers[].{"Name": "display-name", "State": "lifecycle-state"}}' \
  --output json

# List all container instances
oci container-instances container-instance list \
  --compartment-id <compartment-ocid> \
  --query 'data.items[].{"Name": "display-name", "State": "lifecycle-state", "Shape": shape, "Created": "time-created"}' \
  --output table

# View container logs
oci container-instances container get \
  --container-id <container-ocid> \
  --query 'data.{"Name": "display-name", "State": "lifecycle-state", "Exit Code": "exit-code"}'

# Retrieve container instance logs (via OCI Logging)
oci logging-search search-logs \
  --search-query 'search "<log-group-ocid>" | where containerInstanceId = "<instance-ocid>" | sort by datetime desc | limit 100' \
  --time-start "2026-03-14T00:00:00Z" \
  --time-end "2026-03-14T23:59:59Z"

Container Configuration Deep Dive

Container instances support many of the same configuration options as Docker and Kubernetes pods. You can set environment variables, define health checks, configure resource limits, specify startup commands and arguments, and mount volumes. Understanding these options lets you run production-grade workloads without a container orchestrator.

Environment Variables and Secrets

bash
# Create a container instance with environment variables
oci container-instances container-instance create \
  --compartment-id <compartment-ocid> \
  --availability-domain "AD-1" \
  --display-name "api-server" \
  --shape "CI.Standard.E4.Flex" \
  --shape-config '{"ocpus": 2, "memoryInGBs": 8}' \
  --containers '[{
    "displayName": "api",
    "imageUrl": "<region>.ocir.io/<namespace>/ecommerce-api:v1.2.3",
    "environmentVariables": {
      "DATABASE_URL": "jdbc:oracle:thin:@adb.us-ashburn-1.oraclecloud.com:1522/finance_high",
      "REDIS_HOST": "10.0.1.50",
      "LOG_LEVEL": "info",
      "NODE_ENV": "production"
    },
    "command": ["/bin/sh"],
    "arguments": ["-c", "java -jar /app/api.jar --server.port=8080"],
    "resourceConfig": {
      "vcpusLimit": 2.0,
      "memoryLimitInGBs": 8.0
    },
    "healthChecks": [{
      "healthCheckType": "HTTP",
      "name": "readiness",
      "port": 8080,
      "path": "/health/ready",
      "intervalInSeconds": 10,
      "timeoutInSeconds": 5,
      "successThreshold": 1,
      "failureThreshold": 3
    }],
    "workingDirectory": "/app"
  }]' \
  --vnics '[{
    "subnetId": "<subnet-ocid>",
    "isPublicIpAssigned": false,
    "nsgIds": ["<nsg-ocid>"]
  }]'

Secrets Management

Never pass sensitive values like database passwords or API keys as plain environment variables in the container instance configuration. Instead, use OCI Vault to store secrets and retrieve them from within the container using the OCI SDK with instance principal authentication. Alternatively, use OCI Container Instances' integration with OCI Vault to inject secrets as environment variables at launch time, which keeps them out of the instance metadata.
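As a sketch of the in-container retrieval path: the secret bundle content returned by OCI Vault is base64-encoded and must be decoded before use. The secret OCID and value below are hypothetical, and the auth mode shown is an assumption; check which principal type your runtime exposes before relying on it.

```shell
# Inside the container, the secret would be fetched with the OCI CLI using
# the instance's own principal (hypothetical OCID; requires an IAM policy
# granting the instance's dynamic group read access to the secret), e.g.:
#
#   oci secrets secret-bundle get \
#     --secret-id "ocid1.vaultsecret.oc1..example" \
#     --auth instance_principal \
#     --query 'data."secret-bundle-content".content' \
#     --raw-output
#
# The CLI returns the secret content base64-encoded. A stand-in value is
# used here so the decode step can be shown end to end:
ENCODED_SECRET="czNjcjN0LXBhc3N3b3Jk"
DB_PASSWORD=$(printf '%s' "$ENCODED_SECRET" | base64 -d)
echo "$DB_PASSWORD"   # -> s3cr3t-password
```

Decoding in the application (rather than storing the decoded value in instance metadata) keeps the plaintext secret out of the control plane entirely.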

Multi-Container Instances

A single container instance can run multiple containers that share the same network namespace and can communicate via localhost. This is similar to a Kubernetes pod with sidecar containers. Common patterns include a main application container with a logging sidecar, a web server with a reverse proxy, or an application with a service mesh proxy.

bash
# Multi-container instance: app + envoy proxy + log collector
oci container-instances container-instance create \
  --compartment-id <compartment-ocid> \
  --availability-domain "AD-1" \
  --display-name "api-with-sidecar" \
  --shape "CI.Standard.E4.Flex" \
  --shape-config '{"ocpus": 4, "memoryInGBs": 16}' \
  --containers '[
    {
      "displayName": "api",
      "imageUrl": "<region>.ocir.io/<namespace>/api:v2.0",
      "resourceConfig": {"vcpusLimit": 2.0, "memoryLimitInGBs": 8.0},
      "environmentVariables": {
        "PORT": "8080",
        "PROXY_PORT": "9901"
      },
      "healthChecks": [{
        "healthCheckType": "HTTP",
        "name": "api-health",
        "port": 8080,
        "path": "/health",
        "intervalInSeconds": 10,
        "timeoutInSeconds": 5,
        "successThreshold": 1,
        "failureThreshold": 3
      }]
    },
    {
      "displayName": "envoy-proxy",
      "imageUrl": "docker.io/envoyproxy/envoy:v1.28-latest",
      "resourceConfig": {"vcpusLimit": 1.0, "memoryLimitInGBs": 2.0},
      "arguments": ["-c", "/etc/envoy/envoy.yaml", "--service-cluster", "api"],
      "healthChecks": [{
        "healthCheckType": "TCP",
        "name": "envoy-health",
        "port": 9901,
        "intervalInSeconds": 10,
        "timeoutInSeconds": 3,
        "successThreshold": 1,
        "failureThreshold": 3
      }]
    },
    {
      "displayName": "fluentbit",
      "imageUrl": "docker.io/fluent/fluent-bit:latest",
      "resourceConfig": {"vcpusLimit": 0.5, "memoryLimitInGBs": 1.0},
      "environmentVariables": {
        "LOG_DESTINATION": "oci://logs@<namespace>/api-logs/"
      }
    }
  ]' \
  --vnics '[{
    "subnetId": "<subnet-ocid>",
    "isPublicIpAssigned": false,
    "nsgIds": ["<nsg-ocid>"]
  }]'

Volume Mounts and Storage

Container instances support ephemeral volumes for temporary storage and configmap-style volumes for injecting configuration files. Ephemeral volumes persist for the lifetime of the container instance but are lost when the instance is terminated. For persistent storage across container instance restarts, use OCI Object Storage or OCI File Storage accessed from within the container.
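The `data` fields in CONFIGFILE volumes are base64-encoded file contents. A quick way to produce and verify them locally before assembling the `--volumes` JSON:

```shell
# Build the base64 "data" value for a CONFIGFILE volume from a local file
cat > app.yaml <<'EOF'
server:
  port: 8080
  host: 0.0.0.0
EOF

# Strip newlines so the value is a single JSON-safe string
CONFIG_B64=$(base64 < app.yaml | tr -d '\n')
echo "$CONFIG_B64"

# Round-trip check: decoding must reproduce the original file exactly
printf '%s' "$CONFIG_B64" | base64 -d | diff - app.yaml && echo "round-trip OK"
```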

bash
# Container instance with volume mounts
oci container-instances container-instance create \
  --compartment-id <compartment-ocid> \
  --availability-domain "AD-1" \
  --display-name "app-with-volumes" \
  --shape "CI.Standard.E4.Flex" \
  --shape-config '{"ocpus": 2, "memoryInGBs": 8}' \
  --volumes '[
    {
      "name": "config-volume",
      "volumeType": "CONFIGFILE",
      "configs": [
        {
          "fileName": "app.yaml",
          "data": "c2VydmVyOgogIHBvcnQ6IDgwODAKICBob3N0OiAwLjAuMC4wCmRhdGFiYXNlOgogIHBvb2xfc2l6ZTogMTAK"
        },
        {
          "fileName": "logging.yaml",
          "data": "bGV2ZWw6IGluZm8KZm9ybWF0OiBqc29uCm91dHB1dDogc3Rkb3V0Cg=="
        }
      ]
    },
    {
      "name": "tmp-volume",
      "volumeType": "EMPTYDIR",
      "backingStore": "EPHEMERAL_STORAGE"
    }
  ]' \
  --containers '[{
    "displayName": "app",
    "imageUrl": "<region>.ocir.io/<namespace>/myapp:latest",
    "volumeMounts": [
      {
        "mountPath": "/etc/app/config",
        "volumeName": "config-volume",
        "isReadOnly": true
      },
      {
        "mountPath": "/tmp/app-data",
        "volumeName": "tmp-volume",
        "isReadOnly": false
      }
    ],
    "resourceConfig": {"vcpusLimit": 2.0, "memoryLimitInGBs": 8.0}
  }]' \
  --vnics '[{
    "subnetId": "<subnet-ocid>",
    "isPublicIpAssigned": false
  }]'

Using Container Images from OCIR

The OCI Container Registry (OCIR) is the recommended registry for container images used with Container Instances. OCIR provides private repositories with IAM-based access control, vulnerability scanning, image signing, and the lowest latency for image pulls within OCI. You can also pull images from Docker Hub, GitHub Container Registry, and other public registries.

bash
# Log in to OCIR (--password-stdin keeps the token out of process listings)
printf '%s' '<auth-token>' | docker login <region-key>.ocir.io \
  -u '<tenancy-namespace>/oracleidentitycloudservice/<email>' \
  --password-stdin

# Build and push an image to OCIR
docker build -t <region-key>.ocir.io/<namespace>/myapp:v1.0 .
docker push <region-key>.ocir.io/<namespace>/myapp:v1.0

# Create a container repository (if it does not auto-create)
oci artifacts container repository create \
  --compartment-id <compartment-ocid> \
  --display-name "myapp" \
  --is-public false \
  --is-immutable false

# List images in a repository
oci artifacts container image list \
  --compartment-id <compartment-ocid> \
  --repository-name "myapp" \
  --query 'data.items[].{"Digest": digest, "Version": version, "Created": "time-created", "Size (MB)": "size-in-bytes"}' \
  --output table

# List image signatures for a repository
oci artifacts container image-signature list \
  --compartment-id <compartment-ocid> \
  --repository-name "myapp"

# Scan an image for vulnerabilities
oci vulnerability-scanning container-scan-recipe create \
  --compartment-id <compartment-ocid> \
  --display-name "standard-scan" \
  --scan-settings '{
    "scanLevel": "STANDARD"
  }'

Image Pull Performance

Container Instances pull images from OCIR over the OCI backbone network when using a service gateway, providing consistently fast image pulls regardless of image size. For large images (over 1 GB), this is significantly faster than pulling from external registries. Keep images small by using multi-stage Docker builds and distroless or Alpine base images. A smaller image means faster startup time for your container instance.

Terraform for Container Instances

For infrastructure-as-code management of container instances, use the OCI Terraform provider. Terraform lets you define container instances declaratively, version your configurations, and manage updates through plan and apply cycles.

hcl
# Terraform configuration for a container instance
resource "oci_container_instances_container_instance" "api" {
  compartment_id      = var.compartment_id
  availability_domain = data.oci_identity_availability_domains.ads.availability_domains[0].name
  display_name        = "api-server"

  shape = "CI.Standard.E4.Flex"
  shape_config {
    ocpus         = 2
    memory_in_gbs = 8
  }

  containers {
    display_name = "api"
    image_url    = "${var.ocir_url}/${var.namespace}/api:${var.image_tag}"

    environment_variables = {
      DATABASE_URL = var.database_url
      LOG_LEVEL    = "info"
      PORT         = "8080"
    }

    resource_config {
      vcpus_limit         = 2.0
      memory_limit_in_gbs = 8.0
    }

    health_checks {
      health_check_type   = "HTTP"
      name                = "readiness"
      port                = 8080
      path                = "/health"
      interval_in_seconds = 10
      timeout_in_seconds  = 5
      success_threshold   = 1
      failure_threshold   = 3
    }
  }

  containers {
    display_name = "log-agent"
    image_url    = "docker.io/fluent/fluent-bit:latest"

    resource_config {
      vcpus_limit         = 0.5
      memory_limit_in_gbs = 1.0
    }
  }

  vnics {
    subnet_id             = var.subnet_id
    is_public_ip_assigned = false
    nsg_ids               = [var.nsg_id]
  }

  graceful_shutdown_timeout_in_seconds = 30

  freeform_tags = {
    Environment = var.environment
    Application = "ecommerce-api"
    ManagedBy   = "terraform"
  }
}

output "container_instance_id" {
  value = oci_container_instances_container_instance.api.id
}

output "private_ip" {
  value = oci_container_instances_container_instance.api.vnics[0].private_ip
}

Networking and Load Balancing

Container instances run within your VCN and get private IP addresses on the subnet you specify. For public-facing workloads, you can either assign a public IP to the container instance VNIC or, preferably, place container instances behind an OCI Load Balancer or Network Load Balancer. Load balancers provide health checking, TLS termination, and traffic distribution across multiple container instances.

bash
# Create multiple container instances for a load-balanced service
for i in 1 2 3; do
  oci container-instances container-instance create \
    --compartment-id <compartment-ocid> \
    --availability-domain "AD-$i" \
    --display-name "api-server-$i" \
    --shape "CI.Standard.E4.Flex" \
    --shape-config '{"ocpus": 2, "memoryInGBs": 8}' \
    --containers '[{
      "displayName": "api",
      "imageUrl": "<region>.ocir.io/<namespace>/api:v1.0",
      "resourceConfig": {"vcpusLimit": 2.0, "memoryLimitInGBs": 8.0},
      "healthChecks": [{
        "healthCheckType": "HTTP",
        "name": "health",
        "port": 8080,
        "path": "/health",
        "intervalInSeconds": 10,
        "timeoutInSeconds": 5,
        "successThreshold": 1,
        "failureThreshold": 3
      }]
    }]' \
    --vnics "[{
      \"subnetId\": \"<private-subnet-ocid>\",
      \"isPublicIpAssigned\": false,
      \"nsgIds\": [\"<nsg-ocid>\"]
    }]"
done

# Create a load balancer backend set
oci lb backend-set create \
  --load-balancer-id <lb-ocid> \
  --name "api-backend-set" \
  --policy "ROUND_ROBIN" \
  --health-checker-protocol HTTP \
  --health-checker-port 8080 \
  --health-checker-url-path "/health" \
  --health-checker-interval-ms 10000

# Add container instances as backends
# (Use the private IP of each container instance)
for ip in "10.0.1.10" "10.0.1.11" "10.0.1.12"; do
  oci lb backend create \
    --load-balancer-id <lb-ocid> \
    --backend-set-name "api-backend-set" \
    --ip-address "$ip" \
    --port 8080
done

Lifecycle Management and Best Practices

Container instances have a straightforward lifecycle: creating, active, updating, inactive, deleting, deleted, and failed. An instance can be stopped (moving it to inactive), started, and restarted via the `container-instance stop`, `start`, and `restart` CLI commands, but most of its configuration, including container images, shape, and VNICs, is immutable after creation. To change the configuration, create a new instance with the updated settings and delete the old one. This immutable infrastructure approach aligns well with container best practices.

Container Instance Shape Options

Shape               | Architecture | Max OCPUs | Max Memory | Best For
--------------------|--------------|-----------|------------|---------------------------------------
CI.Standard.E4.Flex | AMD x86_64   | 64        | 1024 GB    | General-purpose workloads
CI.Standard.E3.Flex | AMD x86_64   | 64        | 1024 GB    | Cost-optimized general workloads
CI.Standard.A1.Flex | ARM (Ampere) | 80        | 512 GB     | ARM-compatible workloads at lower cost

ARM Containers for Cost Savings

The CI.Standard.A1.Flex shape (Ampere ARM) is significantly cheaper than x86 shapes and included in the OCI Always Free tier (up to 4 OCPUs and 24 GB memory). If your container image supports ARM64 architecture (multi-arch builds), use the A1 shape to reduce costs by up to 50%. Most modern base images (nginx, node, python, golang) provide ARM64 variants.
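When shipping multi-arch images, a small runtime check in the entrypoint confirms which architecture variant was actually pulled. This is a minimal sketch; the shape names in the messages come from the table above:

```shell
# Report the CPU architecture the container is running on
ARCH=$(uname -m)
case "$ARCH" in
  aarch64 | arm64) echo "ARM64 image variant (CI.Standard.A1.Flex)" ;;
  x86_64)          echo "x86_64 image variant (CI.Standard.E3/E4.Flex)" ;;
  *)               echo "unexpected architecture: $ARCH" ;;
esac
```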

Key best practices for production container instances include using health checks to detect container failures, setting resource limits to prevent resource starvation in multi-container instances, configuring graceful shutdown timeouts to allow in-flight requests to complete, using private subnets with load balancers for public-facing services, implementing logging via OCI Logging integration, tagging instances for cost tracking and automation, and using Terraform or Resource Manager for reproducible deployments.
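The graceful-shutdown practice can be sketched as a SIGTERM trap in the container's entrypoint: when shutdown begins, the runtime typically sends SIGTERM and waits up to the configured graceful_shutdown_timeout_in_seconds before killing the container. A self-contained demo, with a sleep loop standing in for a real server:

```shell
# Entrypoint pattern: trap SIGTERM so in-flight work can drain before exit
worker() {
  trap 'echo "SIGTERM received, draining"; exit 0' TERM
  echo "worker started"
  while :; do sleep 1; done
}

# Demo: run the worker, then simulate the runtime's shutdown signal
worker &
WORKER_PID=$!
sleep 1
kill -TERM "$WORKER_PID"
wait "$WORKER_PID"
echo "worker exit status: $?"
```

In a real image, the drain step would stop accepting new requests and flush any buffers; work that cannot finish inside the timeout window should be made idempotent so a replacement instance can resume it.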


Key Takeaways

  1. Container Instances run containers without cluster management, charging only for vCPU and memory used.
  2. Multi-container instances share a network namespace and can communicate via localhost.
  3. ARM (Ampere A1) shapes provide significant cost savings for compatible workloads.
  4. Health checks, volume mounts, and environment variables provide production-grade configuration.

Frequently Asked Questions

When should I use Container Instances vs. OKE?
Use Container Instances when you want to run containers without any cluster management overhead. Choose OKE when you need Kubernetes features like service discovery, rolling deployments, auto-scaling, pod affinity, or multi-service networking. Container Instances are ideal for batch processing, CI/CD agents, development environments, and simple web applications.
Can Container Instances use GPU shapes?
Currently, Container Instances support CPU-based shapes (E4 Flex, E3 Flex, A1 Flex). For GPU workloads, use OKE with GPU node pools or OCI Compute instances with Docker installed. Container Instances are best suited for general-purpose workloads that do not require GPU acceleration.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.