
OKE: Kubernetes on Oracle Cloud

Deploy and manage OKE clusters with managed and virtual node pools, networking modes, and add-ons.

CloudToolStack Team · 22 min read · Published Mar 14, 2026

Prerequisites

  • Basic Kubernetes concepts (pods, services, deployments)
  • OCI account with container engine permissions

OKE: Oracle Container Engine for Kubernetes

Oracle Container Engine for Kubernetes (OKE) is OCI's managed Kubernetes service for deploying, managing, and scaling containerized applications. OKE runs upstream CNCF-conformant Kubernetes, so your existing Kubernetes manifests, Helm charts, and tools work without modification. What sets OKE apart is its deep integration with OCI services, support for virtual nodes (serverless Kubernetes), and a free Kubernetes control plane — you only pay for the worker nodes.

OKE supports multiple cluster types: managed node pools (you manage the worker node VMs), virtual node pools (OCI manages the infrastructure, similar to AWS Fargate), and enhanced clusters with additional features like cluster add-on management and workload identity. This guide covers cluster creation, node pool configuration, networking modes, container image management with OCI Container Registry, and production best practices.

Free Kubernetes Control Plane

OKE does not charge for the Kubernetes control plane. You only pay for the worker node compute instances, block volumes, load balancers, and other infrastructure resources your cluster uses. This makes OKE one of the most cost-effective managed Kubernetes services available. Combined with Always Free ARM-based worker nodes, you can run a Kubernetes cluster at zero cost for learning and development.

Cluster Types

OKE offers two cluster types: Basic and Enhanced. Basic clusters include core Kubernetes functionality at no additional cost. Enhanced clusters add features like cluster add-on management, workload identity, virtual nodes, and Kubernetes network policies enforcement.

| Feature | Basic Cluster | Enhanced Cluster |
| --- | --- | --- |
| Control plane cost | Free | Free |
| Managed nodes | Yes | Yes |
| Virtual nodes | No | Yes |
| Cluster add-ons | Manual | Managed by OCI |
| Workload identity | No | Yes (OCI IAM for pods) |
| Network policies | Manual Calico install | Built-in enforcement |
| Kubernetes versions | Current - 2 | Current - 2 |

Creating a Cluster

You can create OKE clusters using the OCI Console, CLI, Terraform, or the Resource Manager. The Console offers a "Quick Create" workflow that provisions a VCN, subnets, security lists, and node pool in a single step. For production environments, use the "Custom Create" workflow or Terraform for full control over networking and configuration.

bash
# Create an enhanced OKE cluster
oci ce cluster create \
  --compartment-id $C \
  --name "production-cluster" \
  --kubernetes-version "v1.30.1" \
  --vcn-id <vcn-ocid> \
  --type "ENHANCED_CLUSTER" \
  --endpoint-config '{"subnetId": "<k8s-endpoint-subnet-ocid>", "isPublicIpEnabled": true}' \
  --cluster-pod-network-options '[{"cniType": "OCI_VCN_IP_NATIVE"}]' \
  --service-lb-subnet-ids '["<lb-subnet-ocid>"]'

# Wait for cluster to become ACTIVE
oci ce cluster get \
  --cluster-id <cluster-ocid> \
  --query 'data."lifecycle-state"'

# Set up kubeconfig to access the cluster
oci ce cluster create-kubeconfig \
  --cluster-id <cluster-ocid> \
  --file $HOME/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0 \
  --kube-endpoint PUBLIC_ENDPOINT

# Verify cluster access
kubectl get nodes
kubectl cluster-info

Node Pools

Node pools define the worker node configuration for your cluster. Each node pool specifies the compute shape, image, placement (availability domains and fault domains), node count, and labels/taints. You can create multiple node pools with different configurations to support different workload types (e.g., general-purpose nodes and GPU nodes in the same cluster).

bash
# Create a managed node pool with flex shapes
oci ce node-pool create \
  --compartment-id $C \
  --cluster-id <cluster-ocid> \
  --name "general-pool" \
  --kubernetes-version "v1.30.1" \
  --node-shape "VM.Standard.E4.Flex" \
  --node-shape-config '{"ocpus": 2, "memoryInGBs": 16}' \
  --node-config-details '{
    "size": 3,
    "placementConfigs": [
      {"availabilityDomain": "AD-1", "subnetId": "<worker-subnet-ocid>"},
      {"availabilityDomain": "AD-2", "subnetId": "<worker-subnet-ocid>"},
      {"availabilityDomain": "AD-3", "subnetId": "<worker-subnet-ocid>"}
    ]
  }' \
  --node-source-details '{"sourceType": "IMAGE", "imageId": "<oke-image-ocid>"}'

# Create an ARM-based node pool (cost-effective)
oci ce node-pool create \
  --compartment-id $C \
  --cluster-id <cluster-ocid> \
  --name "arm-pool" \
  --kubernetes-version "v1.30.1" \
  --node-shape "VM.Standard.A1.Flex" \
  --node-shape-config '{"ocpus": 4, "memoryInGBs": 24}' \
  --node-config-details '{
    "size": 2,
    "placementConfigs": [
      {"availabilityDomain": "AD-1", "subnetId": "<worker-subnet-ocid>"}
    ]
  }'

# Scale a node pool
oci ce node-pool update \
  --node-pool-id <pool-ocid> \
  --node-config-details '{"size": 5}'

# List node pools in a cluster
oci ce node-pool list \
  --compartment-id $C \
  --cluster-id <cluster-ocid> \
  --query 'data[].{name:name, shape:"node-shape", size:"node-config-details".size, state:"lifecycle-state"}' \
  --output table
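The prose above mentions labels and taints, but the examples do not show them. A minimal sketch of a labeled node pool for dedicated workloads; the pool name, GPU shape, and label key here are illustrative, not from the original:

```shell
# Create a node pool with an initial Kubernetes label (illustrative names)
oci ce node-pool create \
  --compartment-id $C \
  --cluster-id <cluster-ocid> \
  --name "gpu-pool" \
  --kubernetes-version "v1.30.1" \
  --node-shape "VM.GPU.A10.1" \
  --initial-node-labels '[{"key": "workload-type", "value": "gpu"}]' \
  --node-config-details '{"size": 1, "placementConfigs": [{"availabilityDomain": "AD-1", "subnetId": "<worker-subnet-ocid>"}]}'

# Pods then target the pool with a nodeSelector in their spec:
#   nodeSelector:
#     workload-type: gpu
```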

Use ARM Nodes for Cost Savings

ARM-based nodes using VM.Standard.A1.Flex shapes offer approximately 50% cost savings compared to x86 shapes with equivalent compute capacity. Most containerized workloads (especially those built with Go, Java, Node.js, or Python) run natively on ARM with no code changes. Build multi-architecture container images using docker buildx to support both ARM and x86 nodes in the same cluster.
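The multi-architecture build mentioned above can be sketched with docker buildx; the registry path and tag mirror the OCIR examples later in this guide and are illustrative:

```shell
# One-time: create and select a buildx builder that supports multi-arch
docker buildx create --use --name multiarch

# Build for both x86 and ARM, pushing a single multi-arch tag
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t iad.ocir.io/<namespace>/myapp:v1.0 \
  --push .
```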

Virtual Nodes (Serverless)

Virtual nodes provide a serverless Kubernetes experience where OCI manages the underlying infrastructure. You do not provision, manage, or pay for worker node VMs. Instead, each pod runs in its own isolated environment with dedicated compute resources. Virtual nodes are ideal for workloads with variable resource requirements or when you want to eliminate node management overhead entirely.

bash
# Create a virtual node pool (Enhanced cluster required)
oci ce virtual-node-pool create \
  --compartment-id $C \
  --cluster-id <cluster-ocid> \
  --display-name "virtual-pool" \
  --kubernetes-version "v1.30.1" \
  --pod-configuration '{
    "subnetId": "<pod-subnet-ocid>",
    "nsgIds": ["<pod-nsg-ocid>"],
    "shape": "Pod.Standard.E4.Flex"
  }' \
  --placement-configurations '[{
    "availabilityDomain": "AD-1",
    "subnetId": "<worker-subnet-ocid>",
    "faultDomain": ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2"]
  }]' \
  --size 3

# Pods scheduled on virtual nodes look like regular pods
kubectl get pods -o wide
# NODE column shows "virtual-node-pool-xxx" instead of instance names

# Virtual node limitations:
# - DaemonSets are not supported
# - hostNetwork, hostPort, hostPath are not available
# - Privileged containers are not supported
# - Some CSI drivers may not work

Networking Modes

OKE supports two Container Network Interface (CNI) options that determine how pods receive IP addresses and communicate. The choice of CNI affects networking performance, scalability, and integration with OCI networking features.

Flannel Overlay vs VCN-Native Pod Networking

| Feature | Flannel Overlay | VCN-Native Pod Networking |
| --- | --- | --- |
| Pod IP source | Overlay network (not VCN IPs) | VCN subnet IPs (directly routable) |
| Pod-to-pod routing | Encapsulation (VXLAN) | Native VCN routing |
| Network policies | Requires Calico installation | Built-in (Enhanced clusters) |
| NSG support for pods | No (NSGs apply to nodes only) | Yes (NSGs apply to individual pods) |
| Performance | Slight overhead from encapsulation | Native VCN performance |
| IP planning | Simpler (pods use overlay IPs) | Requires larger subnet CIDR for pods |
| Recommended for | Dev/test, simpler setups | Production, security-sensitive workloads |
bash
# VCN-Native Pod Networking requires:
# 1. A separate pod subnet with sufficient CIDR (e.g., /16 for large clusters)
# 2. Enhanced cluster type
# 3. Proper route table and security rules for pod subnet

# Example subnet layout for VCN-Native networking:
# VCN: 10.0.0.0/16
# K8s API endpoint subnet: 10.0.0.0/28
# Worker node subnet: 10.0.1.0/24 (up to 254 nodes)
# Pod subnet: 10.0.32.0/19 (up to 8,190 pod IPs)
# Service LB subnet: 10.0.2.0/24 (for Kubernetes LoadBalancer services)

# Security rules needed for pod subnet:
# - Allow pod-to-pod communication within the pod subnet
# - Allow pod-to-node communication
# - Allow pod-to-Kubernetes-API communication
# - Allow pod egress to internet (through NAT GW) or OCI services (through SGW)

OCI Container Registry (OCIR)

OCI Container Registry (OCIR) is a managed Docker registry service for storing and sharing container images. OCIR integrates natively with OKE, supports image scanning for vulnerabilities, and can replicate images across OCI regions. It supports Docker V2 and OCI image formats.

bash
# Log in to OCIR
docker login <region-code>.ocir.io
# Username: <tenancy-namespace>/<username>
# Password: <auth-token>
# Example: docker login iad.ocir.io

# Tag and push an image
docker tag myapp:latest iad.ocir.io/<namespace>/myapp:v1.0
docker push iad.ocir.io/<namespace>/myapp:v1.0

# Create a Kubernetes secret for OCIR access
kubectl create secret docker-registry ocir-secret \
  --docker-server=iad.ocir.io \
  --docker-username='<namespace>/<username>' \
  --docker-password='<auth-token>' \
  --docker-email='user@example.com'

# Reference the secret in your pod spec
# imagePullSecrets:
#   - name: ocir-secret

# List repositories
oci artifacts container repository list \
  --compartment-id $C \
  --query 'data.items[].{name:"display-name", "image-count":"image-count", "is-public":"is-public"}' \
  --output table

# List images in a repository
oci artifacts container image list \
  --compartment-id $C \
  --repository-name "myapp" \
  --query 'data.items[].{digest:digest, version:version, "time-created":"time-created"}' \
  --output table

OCIR Authentication

OCIR uses OCI auth tokens for Docker authentication, not your OCI Console password. Generate an auth token in the OCI Console under Profile > User Settings > Auth Tokens. Each user can have a maximum of two auth tokens. The username format is <tenancy-namespace>/<username> for local users or <tenancy-namespace>/oracleidentitycloudservice/<username> for federated users through Identity Domains.
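Auth tokens can also be generated from the CLI rather than the Console. A sketch, assuming you have the user OCID; the description string is illustrative:

```shell
# Create an auth token (the token value is returned once; store it safely)
oci iam auth-token create \
  --user-id <user-ocid> \
  --description "ocir-login"

# Use the returned token as the Docker password:
# docker login iad.ocir.io -u '<tenancy-namespace>/<username>'
```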

Load Balancers and Ingress

OKE integrates with OCI Load Balancer and OCI Network Load Balancer for exposing Kubernetes services. When you create a Kubernetes Service of type LoadBalancer, OKE automatically provisions an OCI load balancer and configures the appropriate backend sets, listeners, and health checks.

yaml
# Kubernetes Service with OCI Load Balancer
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    # Use a flexible load balancer shape (10 Mbps free tier)
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
    # Use an internal (private) load balancer
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
    # Specify the load balancer subnet
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet..."
    # Specify NSGs for the load balancer
    oci.oraclecloud.com/oci-network-security-groups: "ocid1.networksecuritygroup..."
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Use OCI Network Load Balancer (Layer 4, higher performance)
apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/subnet: "ocid1.subnet..."
spec:
  type: LoadBalancer
  selector:
    app: tcp-app
  ports:
    - port: 9000
      targetPort: 9000

Cluster Add-Ons

Enhanced clusters support managed cluster add-ons that OCI automatically installs, configures, and updates. Add-ons include essential Kubernetes components like CoreDNS, kube-proxy, and the OCI CSI driver, as well as optional components for monitoring and certificate management.

bash
# List available add-ons
oci ce addon-option list \
  --kubernetes-version "v1.30.1" \
  --query 'data[].{name:name, description:description, "is-essential":"is-essential"}' \
  --output table

# Install the Kubernetes Dashboard add-on
oci ce cluster install-addon \
  --cluster-id <cluster-ocid> \
  --addon-name "KubernetesDashboard"

# Install the Cert-Manager add-on
oci ce cluster install-addon \
  --cluster-id <cluster-ocid> \
  --addon-name "CertManager"

# List installed add-ons
oci ce addon list \
  --cluster-id <cluster-ocid> \
  --query 'data[].{name:name, version:version, state:"lifecycle-state"}' \
  --output table

Persistent Storage with OCI Block Volumes

OKE integrates with OCI Block Volumes for persistent storage through the Container Storage Interface (CSI) driver. The CSI driver is pre-installed on OKE clusters and supports dynamic provisioning, volume expansion, and snapshot operations.

yaml
# StorageClass for OCI Block Volumes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: oci-bv-high-perf
provisioner: blockvolume.csi.oraclecloud.com
parameters:
  vpusPerGB: "20"  # Higher Performance tier
  attachment-type: "paravirtualized"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: oci-bv-high-perf
  resources:
    requests:
      storage: 100Gi
---
# Pod using the PVC
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: iad.ocir.io/namespace/myapp:v1.0
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
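The section above mentions volume expansion and snapshots, but the manifests do not show them. A hedged sketch: it assumes the external-snapshotter CRDs and a VolumeSnapshotClass for the OCI CSI driver are present in the cluster, and the class and snapshot names are illustrative:

```yaml
# Expand the PVC in place (works because the StorageClass sets
# allowVolumeExpansion: true):
#   kubectl patch pvc app-data \
#     -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: oci-bv-snapshot-class  # illustrative class name
  source:
    persistentVolumeClaimName: app-data
```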

Monitoring and Logging

OKE integrates with OCI Monitoring and OCI Logging for cluster observability. Cluster lifecycle and API operations are recorded by the OCI Audit service. For workload monitoring, deploy the Kubernetes Metrics Server for basic resource metrics, or Prometheus via the open-source Prometheus operator for full-featured monitoring.

bash
# OKE cluster lifecycle and API operations are recorded by the OCI Audit service
oci audit event list \
  --compartment-id $C \
  --start-time 2026-03-14T00:00:00Z \
  --end-time 2026-03-15T00:00:00Z

# Deploy the Kubernetes Metrics Server (enables kubectl top)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# View cluster events
kubectl get events --sort-by=.metadata.creationTimestamp

# Check node resource utilization
kubectl top nodes
kubectl top pods --all-namespaces

# View pod logs
kubectl logs -f deployment/myapp -n production

Use Workload Identity for Pod Authentication

Enhanced clusters support OCI Workload Identity, which allows Kubernetes pods to authenticate to OCI services using the Kubernetes service account identity instead of storing OCI credentials in secrets. This eliminates the need to manage and rotate credentials for your applications. Configure a workload identity by creating an OCI IAM policy that maps a Kubernetes service account to OCI permissions.
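The mapping from a Kubernetes service account to OCI permissions is expressed as an IAM policy. A sketch of the documented policy shape; the compartment, namespace, and service account names are illustrative:

```text
Allow any-user to read secret-family in compartment app-compartment where all {
  request.principal.type = 'workload',
  request.principal.cluster_id = '<cluster-ocid>',
  request.principal.namespace = 'production',
  request.principal.service_account = 'myapp-sa'
}
```

Pods then authenticate through the OCI SDK's OKE workload identity principal support, with no credentials stored in Kubernetes secrets.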

OKE Best Practices

| Area | Recommendation |
| --- | --- |
| Networking | Use VCN-Native pod networking for production. Size pod subnet CIDRs generously. |
| Security | Use private API endpoints. Enable network policies. Use workload identity. |
| Cost | Use ARM nodes where possible. Use cluster autoscaler. Consider virtual nodes for burst. |
| Reliability | Spread nodes across ADs and fault domains. Use pod disruption budgets. |
| Operations | Use Enhanced clusters. Enable control plane logging. Automate with Terraform. |
| Images | Use OCIR with vulnerability scanning. Build multi-arch images for ARM+x86. |
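The reliability recommendation above includes pod disruption budgets; a minimal sketch, assuming a workload labeled app: web like the load balancer example earlier:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2        # keep at least 2 replicas up during node drains and upgrades
  selector:
    matchLabels:
      app: web
```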
Related guides: OCI VCN Networking Deep Dive · OCI Compute Shapes & Instances · Terraform on OCI

Key Takeaways

  1. OKE control plane is free — you only pay for worker node compute and storage.
  2. Virtual nodes provide serverless Kubernetes pods without managing node infrastructure.
  3. OCI VCN-native pod networking assigns VCN IP addresses directly to pods.
  4. OKE supports both AMD and Ampere ARM node pools for cost optimization.

Frequently Asked Questions

Is the OKE control plane free?
Yes, OKE does not charge for the Kubernetes control plane. You only pay for the worker node compute instances, block storage volumes, and load balancers used by your cluster. This makes OKE one of the most cost-effective managed Kubernetes options among major cloud providers.
What is the difference between managed and virtual nodes?
Managed nodes are standard OCI compute instances that you provision and manage as part of node pools. Virtual nodes are serverless — OKE automatically provisions and manages the underlying infrastructure, and you pay per pod based on the shape (CPU/memory) requested. Virtual nodes are ideal for bursty workloads and eliminating node management overhead.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. Oracle, OCI, and Kubernetes are trademarks of their respective owners.