
OCI Compute Shapes & Instances

Navigate OCI flex shapes, Ampere ARM instances, bare metal options, and preemptible capacity.

CloudToolStack Team · 18 min read · Published Mar 14, 2026

Prerequisites

  • Basic understanding of virtual machine concepts
  • OCI account with compute permissions

OCI Compute: Shapes and Instances

OCI Compute is one of the most compelling parts of Oracle Cloud Infrastructure, offering instance types that range from Always Free micro instances to massive bare-metal servers with hundreds of cores. What makes OCI compute distinctive is its flex shapes, which let you choose exactly how many OCPUs and how much memory you need rather than picking from a fixed menu of sizes. Combined with OCI's off-box network virtualization (which moves network processing off the host server, reducing noisy-neighbor effects) and competitive pricing, OCI compute delivers excellent value for both development and production workloads.

This guide covers the full spectrum of OCI compute options: VM shapes, bare-metal shapes, dedicated hosts, flex shapes, ARM-based Ampere instances, GPU shapes, preemptible instances, capacity reservations, boot volumes, and instance configuration best practices.

OCPU vs vCPU

OCI uses the OCPU (Oracle Compute Unit) instead of the vCPU. One OCPU equals one physical CPU core with hyper-threading enabled, so on x86 shapes 1 OCPU provides 2 vCPUs' worth of processing power. (Arm-based A1 shapes have no simultaneous multithreading, so there 1 OCPU corresponds to 1 vCPU.) When comparing OCI pricing to AWS or Azure, double the OCPU count of x86 shapes to get the equivalent vCPU count. For example, a VM.Standard.E4.Flex with 4 OCPUs is equivalent to an 8-vCPU instance on other clouds.
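The conversion is simple enough to capture in a helper. This is a sketch for price comparisons only; `ocpus_to_vcpus` is a hypothetical function name, and it assumes x86 shapes expose two hardware threads per OCPU while Arm A1 shapes map one OCPU to one vCPU:

```python
def ocpus_to_vcpus(ocpus: float, arm: bool = False) -> float:
    """Convert an OCI OCPU count to the equivalent vCPU count.

    Assumption: x86 shapes (E4/E5, Standard3) have SMT, so 1 OCPU = 2 vCPUs;
    Arm A1 shapes have no SMT, so 1 OCPU = 1 vCPU.
    """
    return ocpus if arm else ocpus * 2

# A 4-OCPU E4.Flex matches an 8-vCPU x86 instance on other clouds:
print(ocpus_to_vcpus(4))            # 8
# A 4-OCPU A1.Flex corresponds to 4 Arm vCPUs:
print(ocpus_to_vcpus(4, arm=True))  # 4
```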

Shape Families

OCI organizes compute shapes into families based on processor type, use case, and generation. Each family has specific characteristics that make it suitable for different workloads. Understanding the shape families helps you choose the right instance for your needs.

Current Shape Families

| Family | Processor | Use Case | Key Feature |
| --- | --- | --- | --- |
| VM.Standard.E4/E5.Flex | AMD EPYC (Milan/Genoa) | General purpose | Flexible OCPU/memory, excellent value |
| VM.Standard3.Flex | Intel Xeon (Ice Lake) | General purpose | Intel-specific workloads, AVX-512 |
| VM.Standard.A1.Flex | ARM Ampere Altra | Cloud-native, web, containers | Best price-performance, Always Free eligible |
| VM.Standard.E2.1.Micro | AMD EPYC | Always Free tier | 1/8 OCPU, 1 GB RAM, free forever |
| VM.Optimized3.Flex | Intel Xeon (Ice Lake) | HPC, high-frequency | Higher clock speed, low latency |
| VM.GPU.A10/A100 | NVIDIA A10/A100 | ML/AI, rendering | GPU acceleration, CUDA support |
| VM.DenseIO.E4/E5.Flex | AMD EPYC | Databases, big data | Local NVMe SSD storage |
| BM.Standard.E4/E5 | AMD EPYC | Performance-critical workloads | Bare metal, full server dedicated |
bash
# List all available shapes in a compartment
oci compute shape list \
  --compartment-id $C \
  --query 'data[].{shape:shape, ocpus:ocpus, memory:"memory-in-gbs", gpus:"gpus", "local-disks":"local-disks"}' \
  --output table

# List only flex shapes
oci compute shape list \
  --compartment-id $C \
  --query 'data[?contains(shape, `Flex`)].{shape:shape, "max-ocpus":"ocpu-options"."max", "max-memory":"memory-options"."max-in-gbs"}' \
  --output table

# Check shape availability in each AD
oci compute shape list \
  --compartment-id $C \
  --availability-domain <ad-name> \
  --query 'data[].shape' --output table

Flex Shapes Deep Dive

Flex shapes are OCI's most popular compute option because they let you customize the OCPU and memory independently. Instead of choosing between a 2-vCPU/8-GB instance and a 4-vCPU/16-GB instance, you can create a 3-OCPU/48-GB instance if that is what your workload needs. This eliminates the waste of over-provisioning and reduces costs.

Each flex shape has a minimum and maximum OCPU range, and memory can be configured between 1 GB and 64 GB per OCPU (depending on the shape). The memory ratio is configurable in 1-GB increments, giving you precise control over your resource allocation.
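Before launching, it can help to sanity-check a proposed OCPU/memory combination against the shape's limits. This is a minimal sketch assuming E4.Flex-style limits (up to 64 OCPUs, 1-64 GB per OCPU); the real bounds come from each shape's `ocpu-options` and `memory-options` in the shape list output:

```python
def validate_flex_config(ocpus: int, memory_gb: int,
                         max_ocpus: int = 64,
                         min_gb_per_ocpu: int = 1,
                         max_gb_per_ocpu: int = 64) -> bool:
    """Check a flex-shape config against assumed E4.Flex-style limits."""
    if not 1 <= ocpus <= max_ocpus:
        return False
    # Memory must land between the per-OCPU minimum and maximum.
    return min_gb_per_ocpu * ocpus <= memory_gb <= max_gb_per_ocpu * ocpus

print(validate_flex_config(3, 48))   # True: 3 OCPUs at 16 GB/OCPU
print(validate_flex_config(2, 200))  # False: exceeds 64 GB/OCPU
```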

bash
# Launch a flex shape instance with custom OCPU and memory
oci compute instance launch \
  --compartment-id $C \
  --availability-domain $AD \
  --shape "VM.Standard.E4.Flex" \
  --shape-config '{"ocpus": 4, "memoryInGBs": 32}' \
  --image-id <image-ocid> \
  --subnet-id <subnet-ocid> \
  --display-name "app-server" \
  --assign-public-ip false

# Launch an ARM Ampere A1 instance (Always Free eligible)
oci compute instance launch \
  --compartment-id $C \
  --availability-domain $AD \
  --shape "VM.Standard.A1.Flex" \
  --shape-config '{"ocpus": 2, "memoryInGBs": 12}' \
  --image-id <arm-image-ocid> \
  --subnet-id <subnet-ocid> \
  --display-name "arm-server" \
  --assign-public-ip true

# Resize an existing instance (change shape or OCPU/memory)
# Note: The instance must be stopped first for most shape changes
oci compute instance action \
  --instance-id <instance-ocid> \
  --action STOP

oci compute instance update \
  --instance-id <instance-ocid> \
  --shape "VM.Standard.E4.Flex" \
  --shape-config '{"ocpus": 8, "memoryInGBs": 64}'

oci compute instance action \
  --instance-id <instance-ocid> \
  --action START

ARM Ampere A1 for Best Value

The VM.Standard.A1.Flex shape based on Ampere Altra processors offers the best price-performance ratio on OCI. ARM instances cost approximately 50% less than equivalent x86 instances while delivering comparable or better performance for most workloads. Many popular applications (Nginx, Node.js, Python, Java, Docker, Kubernetes) run natively on ARM. The Always Free tier includes 4 A1 OCPUs and 24 GB of RAM, which you can configure as one large instance or up to four smaller ones.
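A quick way to plan how to carve up the free A1 allotment is to check a proposed split against the three limits (total OCPUs, total RAM, instance count). A small sketch, with the limits taken from the Always Free figures above:

```python
# Always Free A1 allotment: 4 OCPUs, 24 GB RAM, split across up to
# 4 instances (figures as described above; verify in the OCI docs).
FREE_OCPUS, FREE_GB, FREE_INSTANCES = 4, 24, 4

def fits_always_free(instances: list[tuple[int, int]]) -> bool:
    """instances: list of (ocpus, memory_gb) pairs for proposed A1 VMs."""
    return (len(instances) <= FREE_INSTANCES
            and sum(o for o, _ in instances) <= FREE_OCPUS
            and sum(m for _, m in instances) <= FREE_GB)

print(fits_always_free([(4, 24)]))           # True: one large instance
print(fits_always_free([(2, 12), (2, 12)]))  # True: two medium instances
print(fits_always_free([(2, 12), (4, 16)]))  # False: 6 OCPUs total
```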

Bare Metal Instances

Bare-metal instances give you an entire physical server with no hypervisor overhead. The server's CPU, memory, and local storage are exclusively yours. Bare-metal instances are ideal for workloads that require maximum performance, specific hardware access (like SR-IOV networking), license compliance (Oracle Database licensing is based on physical cores), or security isolation requirements.

bash
# Launch a bare-metal instance
oci compute instance launch \
  --compartment-id $C \
  --availability-domain $AD \
  --shape "BM.Standard.E4.128" \
  --image-id <image-ocid> \
  --subnet-id <subnet-ocid> \
  --display-name "bare-metal-db" \
  --assign-public-ip false

# BM.Standard.E4.128 specs:
# - 128 OCPUs (256 vCPUs)
# - 2048 GB RAM
# - 50 Gbps network bandwidth
# - Dedicated physical server

# List bare-metal shapes
oci compute shape list \
  --compartment-id $C \
  --query 'data[?starts_with(shape, `BM.`)].{shape:shape, ocpus:ocpus, memory:"memory-in-gbs", bandwidth:"networking-bandwidth-in-gbps"}' \
  --output table

Preemptible Instances

Preemptible instances are regular compute instances that OCI can reclaim when it needs the capacity back. In exchange for this possibility, they cost 50% less than on-demand instances. Preemptible capacity is ideal for fault-tolerant workloads such as batch processing, CI/CD, data analysis, and stateless web tiers behind a load balancer.

bash
# Launch a preemptible instance
oci compute instance launch \
  --compartment-id $C \
  --availability-domain $AD \
  --shape "VM.Standard.E4.Flex" \
  --shape-config '{"ocpus": 4, "memoryInGBs": 32}' \
  --image-id <image-ocid> \
  --subnet-id <subnet-ocid> \
  --display-name "batch-worker" \
  --preemptible-instance-config '{"preemptionAction": {"type": "TERMINATE", "preserveBootVolume": false}}'

# Preemptible instances:
# - Can be terminated with 30 seconds notice
# - Not covered by SLAs
# - Cannot be stopped/started (only terminated)
# - Available in the same shapes as regular instances
# - Great for: CI/CD runners, batch jobs, dev environments

Preemptible Instance Limitations

Preemptible instances can be terminated at any time with only 30 seconds of notice. Design your workloads to handle termination gracefully: use checkpointing for long batch jobs, run stateless application tiers with multiple instances behind a load balancer, and never use preemptible instances for databases or single-instance production workloads. OCI sends a termination notification to the instance metadata endpoint 30 seconds before termination.
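A worker can watch for that notification and checkpoint before the deadline. The sketch below separates the testable parsing logic from the metadata fetch; the `preemptionNotice` key is a hypothetical marker (check the OCI docs for the actual field), while the IMDS v2 URL and `Authorization: Bearer Oracle` header are the documented way to read instance metadata:

```python
import json

def parse_preemption(metadata_json: str) -> bool:
    """True if the instance metadata document signals pending preemption.

    Assumption: 'preemptionNotice' is an illustrative key name; verify the
    real field exposed for preemptible instances in the OCI documentation.
    """
    try:
        doc = json.loads(metadata_json)
    except ValueError:
        return False
    return "preemptionNotice" in doc

def fetch_metadata() -> str:
    """Fetch the IMDS v2 instance document (works only on an OCI instance)."""
    import urllib.request
    req = urllib.request.Request(
        "http://169.254.169.254/opc/v2/instance/",
        headers={"Authorization": "Bearer Oracle"})
    return urllib.request.urlopen(req, timeout=2).read().decode()

# A batch worker would poll fetch_metadata() every few seconds and
# checkpoint + exit once parse_preemption(...) returns True.
print(parse_preemption('{"preemptionNotice": {"action": "TERMINATE"}}'))  # True
```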

Boot Volumes and Block Storage

Every compute instance has a boot volume that contains the operating system and is backed by OCI Block Volume storage. Boot volumes are durable, network-attached storage that persists independently of the instance lifecycle (unless you explicitly choose to delete it on termination). You can also attach additional block volumes for data storage.

Block volumes offer different performance tiers: Lower Cost (2 IOPS/GB), Balanced (60 IOPS/GB), Higher Performance (75 IOPS/GB), and Ultra High Performance (up to 225 IOPS/GB). The default boot volume is about 47 GB, and a volume can be grown to as much as 32 TB.

bash
# Create a block volume
oci bv volume create \
  --compartment-id $C \
  --availability-domain $AD \
  --display-name "data-volume" \
  --size-in-gbs 200 \
  --vpus-per-gb 20

# VPUs per GB determines performance tier:
# 0  = Lower Cost (2 IOPS/GB, 240 KB/s/GB)
# 10 = Balanced (60 IOPS/GB, 480 KB/s/GB)  [default]
# 20 = Higher Performance (75 IOPS/GB, 600 KB/s/GB)
# 30+ = Ultra High Performance (varies)

# Attach the volume to an instance
oci compute volume-attachment attach \
  --instance-id <instance-ocid> \
  --volume-id <volume-ocid> \
  --type "paravirtualized" \
  --display-name "data-vol-attachment"

# After attaching, configure the volume inside the instance
# SSH into the instance and run:
# sudo iscsiadm -m node -o new -T <iqn> -p <ip>:3260
# sudo iscsiadm -m node -o update -T <iqn> -n node.startup -v automatic
# sudo iscsiadm -m node -T <iqn> -p <ip>:3260 -l
# For paravirtualized: the disk appears automatically as /dev/sdb

# Create a backup of a boot volume
oci bv boot-volume-backup create \
  --boot-volume-id <boot-volume-ocid> \
  --display-name "pre-upgrade-backup" \
  --type "FULL"

# List boot volume backups
oci bv boot-volume-backup list \
  --compartment-id $C \
  --query 'data[].{name:"display-name", "size-gb":"size-in-gbs", state:"lifecycle-state", type:type}' \
  --output table
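The per-GB rates in the VPU comments above can be turned into a quick estimator. This is a simplification: real volumes also have per-volume IOPS and throughput caps that vary by tier and attachment type, which this sketch does not model:

```python
# Fixed-tier rates per GB, as listed in the comments above:
# vpus_per_gb -> (IOPS per GB, KB/s per GB)
RATES = {
    0: (2, 240),    # Lower Cost
    10: (60, 480),  # Balanced (default)
    20: (75, 600),  # Higher Performance
}

def volume_performance(size_gb: int, vpus_per_gb: int) -> tuple[int, int]:
    """Return (IOPS, MB/s) for a volume, ignoring per-volume caps."""
    iops_per_gb, kbps_per_gb = RATES[vpus_per_gb]
    return size_gb * iops_per_gb, size_gb * kbps_per_gb // 1024

# The 200-GB Higher Performance volume created above:
print(volume_performance(200, 20))  # (15000, 117) -> 15k IOPS, ~117 MB/s
```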

Instance Configurations and Pools

Instance configurations are templates that capture the settings of a compute instance, including shape, image, networking, and metadata. Instance pools use these configurations to create and manage groups of identical instances, similar to AWS Auto Scaling Groups. Instance pools can automatically replace unhealthy instances and integrate with load balancers for horizontal scaling.

bash
# Create an instance configuration from an existing instance
oci compute-management instance-configuration create \
  --compartment-id $C \
  --display-name "web-server-config" \
  --instance-details '{
    "instanceType": "compute",
    "launchDetails": {
      "compartmentId": "'$C'",
      "shape": "VM.Standard.E4.Flex",
      "shapeConfig": {"ocpus": 2, "memoryInGBs": 16},
      "sourceDetails": {
        "sourceType": "image",
        "imageId": "<image-ocid>"
      },
      "createVnicDetails": {
        "subnetId": "<subnet-ocid>",
        "assignPublicIp": false
      },
      "metadata": {
        "ssh_authorized_keys": "<your-ssh-public-key>"
      }
    }
  }'

# Create an instance pool
oci compute-management instance-pool create \
  --compartment-id $C \
  --instance-configuration-id <config-ocid> \
  --placement-configurations '[{
    "availabilityDomain": "'$AD'",
    "primarySubnetId": "<subnet-ocid>"
  }]' \
  --size 3 \
  --display-name "web-server-pool"

# Scale the instance pool
oci compute-management instance-pool update \
  --instance-pool-id <pool-ocid> \
  --size 5

# Attach a load balancer to the pool
oci compute-management instance-pool update \
  --instance-pool-id <pool-ocid> \
  --load-balancers '[{
    "loadBalancerId": "<lb-ocid>",
    "backendSetName": "web-backend",
    "port": 80,
    "vnicSelection": "PrimaryVnic"
  }]'

Autoscaling

OCI Autoscaling automatically adjusts the number of instances in a pool based on metrics or schedules. Metric-based autoscaling monitors CPU or memory utilization and adds or removes instances to maintain a target utilization range. Schedule-based autoscaling changes the pool size at specific times, useful for predictable traffic patterns.

bash
# Create a metric-based autoscaling configuration
oci autoscaling configuration create \
  --compartment-id $C \
  --display-name "web-autoscaling" \
  --resource '{"type": "instancePool", "id": "<pool-ocid>"}' \
  --policies '[{
    "policyType": "threshold",
    "displayName": "cpu-based-scaling",
    "capacity": {
      "initial": 3,
      "min": 2,
      "max": 10
    },
    "rules": [
      {
        "displayName": "scale-out-rule",
        "action": {"type": "CHANGE_COUNT_BY", "value": 2},
        "metric": {
          "metricType": "CPU_UTILIZATION",
          "threshold": {"operator": "GT", "value": 70}
        }
      },
      {
        "displayName": "scale-in-rule",
        "action": {"type": "CHANGE_COUNT_BY", "value": -1},
        "metric": {
          "metricType": "CPU_UTILIZATION",
          "threshold": {"operator": "LT", "value": 30}
        }
      }
    ]
  }]' \
  --cool-down-in-seconds 300
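The threshold policy above can be simulated in a few lines to see how pool size evolves: scale out by 2 when CPU exceeds 70%, scale in by 1 when it drops below 30%, always clamped to the min/max capacity. (The real service also applies the 300-second cooldown between actions, which this sketch omits.)

```python
MIN_SIZE, MAX_SIZE = 2, 10  # capacity bounds from the policy above

def next_pool_size(current: int, cpu_pct: float) -> int:
    """Apply one evaluation of the threshold rules, clamped to bounds."""
    if cpu_pct > 70:
        current += 2      # scale-out-rule: CHANGE_COUNT_BY +2
    elif cpu_pct < 30:
        current -= 1      # scale-in-rule: CHANGE_COUNT_BY -1
    return max(MIN_SIZE, min(MAX_SIZE, current))

size = 3  # initial capacity
for cpu in [85, 90, 40, 20, 20]:
    size = next_pool_size(size, cpu)
    print(cpu, "->", size)  # sizes: 5, 7, 7, 6, 5
```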

Capacity Reservations

Capacity reservations guarantee that compute capacity is available when you need it. Unlike preemptible instances, reserved capacity ensures you can launch instances of a specific shape in a specific availability domain at any time. This is particularly important for disaster recovery scenarios where you need to guarantee capacity during a failover event.

bash
# Create a capacity reservation
oci compute capacity-reservation create \
  --compartment-id $C \
  --availability-domain $AD \
  --display-name "prod-reservation" \
  --instance-reservation-configs '[{
    "instanceShape": "VM.Standard.E4.Flex",
    "instanceShapeConfig": {"ocpus": 4, "memoryInGBs": 32},
    "reservedCount": 5
  }]'

# Launch an instance using the reservation
oci compute instance launch \
  --compartment-id $C \
  --availability-domain $AD \
  --shape "VM.Standard.E4.Flex" \
  --shape-config '{"ocpus": 4, "memoryInGBs": 32}' \
  --capacity-reservation-id <reservation-ocid> \
  --image-id <image-ocid> \
  --subnet-id <subnet-ocid> \
  --display-name "reserved-instance"

# List capacity reservations
oci compute capacity-reservation list \
  --compartment-id $C \
  --query 'data[].{name:"display-name", state:"lifecycle-state", reserved:"reserved-instance-count", used:"used-instance-count"}' \
  --output table

Custom Images and Platform Images

OCI provides platform images for Oracle Linux, Ubuntu, CentOS, Windows Server, and other operating systems. You can also create custom images from running instances or import images from other environments. Custom images capture the boot volume contents and can be used to launch identical instances quickly, making them useful for golden image pipelines.

bash
# List available platform images
oci compute image list \
  --compartment-id $C \
  --operating-system "Oracle Linux" \
  --query 'data[].{name:"display-name", os:"operating-system", version:"operating-system-version", created:"time-created"}' \
  --output table

# Create a custom image from a running instance
oci compute image create \
  --compartment-id $C \
  --instance-id <instance-ocid> \
  --display-name "golden-image-v1.0"

# Import an image from Object Storage
oci compute image import from-object \
  --compartment-id $C \
  --namespace <namespace> \
  --bucket-name "images" \
  --name "exported-vm.qcow2" \
  --source-image-type QCOW2 \
  --display-name "imported-image"

# Export an image to Object Storage
oci compute image export to-object \
  --image-id <image-ocid> \
  --namespace <namespace> \
  --bucket-name "images" \
  --name "my-image-export.qcow2"

Use Cloud-Init for Instance Configuration

Instead of creating custom images for every configuration change, use cloud-init user data to configure instances at launch time. Cloud-init scripts run on first boot and can install packages, configure services, pull application code, and join clusters automatically. Combine a base platform image with cloud-init for a flexible deployment model. Pass cloud-init data using the --user-data-file parameter when launching instances.
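A minimal `#cloud-config` sketch of the pattern: the package name, service, and file path here are illustrative, not prescriptive.

```yaml
#cloud-config
# Illustrative first-boot configuration for a web server instance.
package_update: true
packages:
  - nginx
write_files:
  - path: /usr/share/nginx/html/index.html
    content: |
      configured by cloud-init
runcmd:
  - systemctl enable --now nginx
```

Save this as a file (for example `user-data.yaml`) and pass it with `--user-data-file user-data.yaml` at launch; the CLI handles the base64 encoding that the instance metadata expects.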

Compute Pricing and Optimization

| Strategy | Savings | Best For |
| --- | --- | --- |
| Always Free Tier | 100% | Learning, personal projects, light workloads |
| ARM Ampere A1 | ~50% vs x86 | Cloud-native apps, containers, web servers |
| Preemptible Instances | ~50% | Batch processing, CI/CD, fault-tolerant workloads |
| Flex Shapes (right-sizing) | Varies | All workloads (only pay for what you need) |
| Reserved Capacity | Guaranteed availability | Production, DR, mission-critical workloads |
| Universal Credits | Up to 60% | Committed spend across all OCI services |

Key Takeaways

  1. Flex shapes let you specify exact OCPU and memory ratios for cost optimization.
  2. Ampere A1 ARM instances offer the best price-performance for compatible workloads.
  3. OCI bare metal instances provide dedicated physical servers with no hypervisor overhead.
  4. Preemptible instances cost 50% less but can be reclaimed with 30 seconds' notice.

Frequently Asked Questions

What is an OCPU in OCI?
An OCPU (Oracle Compute Unit) represents one physical CPU core with hyper-threading, equivalent to 2 vCPUs on other cloud platforms. When OCI lists a shape as having 1 OCPU, it provides the same compute capacity as 2 vCPUs on AWS or Azure. This is important when comparing pricing across providers.
Should I use AMD, Intel, or Ampere ARM shapes?
Ampere A1 (ARM) shapes offer the best price-performance and are ideal for web servers, containers, and CI/CD workloads. AMD E4/E5 shapes provide a good balance of price and x86 compatibility. Intel shapes are best for workloads requiring specific Intel features like AVX-512 or SGX enclaves.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.