
Container Orchestration Across Clouds

Compare ECS vs Container Apps vs Cloud Run for serverless containers and EKS vs AKS vs GKE for Kubernetes.

CloudToolStack Team · 23 min read · Published Mar 14, 2026

Prerequisites

  • Basic understanding of containers and Docker
  • Familiarity with at least one cloud container service

Container Orchestration Landscape

Every major cloud provider offers managed container services at two levels: managed Kubernetes (for full orchestration) and managed container runtimes (for simpler workloads without cluster management). Choosing between these options depends on your operational maturity, workload complexity, team expertise, and cost constraints.

This guide compares the serverless container services across AWS (ECS Fargate, App Runner), Azure (Container Apps, Container Instances), GCP (Cloud Run), and OCI (Container Instances), as well as the managed Kubernetes offerings (EKS, AKS, GKE, OKE) for workloads that need full Kubernetes capabilities.

Serverless Containers vs Kubernetes

Serverless containers (Fargate, Cloud Run, Container Apps) abstract away all infrastructure: no nodes, no clusters, no scaling configuration. You deploy a container image and the platform handles everything. Managed Kubernetes (EKS, AKS, GKE) gives you the full Kubernetes API with the control plane managed for you, but you still manage node pools, networking policies, and cluster upgrades. Use serverless containers when simplicity is the priority; use Kubernetes when you need custom operators, service mesh, or workload portability.

Serverless Container Services Compared

| Feature | AWS Fargate / App Runner | Azure Container Apps | GCP Cloud Run |
| --- | --- | --- | --- |
| Scale to zero | App Runner: Yes. Fargate: No | Yes | Yes |
| Max instances | App Runner: 25. Fargate: per-service | 300 per app | 1000 per service |
| GPU support | Fargate: No. ECS on EC2: Yes | Yes (Dedicated plan) | Yes (preview) |
| Sidecar support | Fargate: Yes. App Runner: No | Yes (built-in Dapr) | Yes (multi-container) |
| Event-driven scaling | Fargate: SQS, Kinesis. App Runner: HTTP | KEDA (30+ scalers) | HTTP, Pub/Sub, Eventarc |
| Traffic splitting | App Runner: No. ALB: Yes | Yes (revision-based) | Yes (revision-based) |
| VNet/VPC integration | Yes (VPC connectors) | Yes (VNet integration) | Yes (VPC connectors) |
| Minimum pricing | Fargate: ~$0.04/hr. App Runner: $0.007/hr paused | $0 (scale to zero) | $0 (scale to zero) |

Deploying to Each Platform

AWS ECS with Fargate

bash
# Create a Fargate service
aws ecs create-cluster --cluster-name myapp-cluster
aws ecs register-task-definition --cli-input-json file://task-def.json

aws ecs create-service \
  --cluster myapp-cluster \
  --service-name api-service \
  --task-definition myapp-api:1 \
  --desired-count 3 \
  --launch-type FARGATE \
  --network-configuration '{
    "awsvpcConfiguration": {
      "subnets": ["subnet-abc123", "subnet-def456"],
      "securityGroups": ["sg-abc123"],
      "assignPublicIp": "DISABLED"
    }
  }' \
  --load-balancers '[{
    "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/myapp/abc123",
    "containerName": "api",
    "containerPort": 8080
  }]'
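Fargate has no built-in scale-to-zero, but the service's desired count can track load through Application Auto Scaling. A minimal sketch, reusing the cluster and service names from the example above; the 60% CPU target and the 2–10 task range are illustrative choices, not recommendations:

```bash
# Register the ECS service's DesiredCount as a scalable target
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/myapp-cluster/api-service \
  --min-capacity 2 \
  --max-capacity 10

# Target-tracking policy: keep average CPU utilization near 60%
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/myapp-cluster/api-service \
  --policy-name api-cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    }
  }'
```

Target tracking adds and removes tasks automatically; compare this with the scale rules that Container Apps and Cloud Run expose directly at deploy time.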

Azure Container Apps

bash
# Deploy to Container Apps
az containerapp create \
  --name api-backend \
  --resource-group myapp-rg \
  --environment myapp-env \
  --image myregistry.azurecr.io/api:v1 \
  --target-port 8080 \
  --ingress external \
  --min-replicas 0 \
  --max-replicas 50 \
  --cpu 1.0 --memory 2.0Gi \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50
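The HTTP rule above is only one of the KEDA scalers Container Apps supports. As a hedged sketch, the same app could instead scale on Azure Storage queue depth; the storage account (`myappstorage`), queue name (`orders`), and secret reference (`queue-connection-secret`) are hypothetical names for illustration:

```bash
# Replace the HTTP rule with a KEDA azure-queue scaler (assumed resource names)
az containerapp update \
  --name api-backend \
  --resource-group myapp-rg \
  --scale-rule-name queue-rule \
  --scale-rule-type azure-queue \
  --scale-rule-metadata "accountName=myappstorage" "queueName=orders" "queueLength=20" \
  --scale-rule-auth "connection=queue-connection-secret"
```

The `connection` trigger parameter resolves to a Container Apps secret holding the storage connection string, which must be created beforehand.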

GCP Cloud Run

bash
# Deploy to Cloud Run
gcloud run deploy api-backend \
  --image us-docker.pkg.dev/PROJECT/repo/api:v1 \
  --platform managed \
  --region us-central1 \
  --port 8080 \
  --min-instances 0 \
  --max-instances 100 \
  --cpu 1 --memory 2Gi \
  --concurrency 80 \
  --allow-unauthenticated \
  --set-env-vars "APP_ENV=production"
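Cloud Run's revision-based traffic splitting (noted in the comparison table) enables canary rollouts from the same CLI. A sketch continuing the deployment above; the `v2` tag and the 10% split are illustrative:

```bash
# Deploy a new revision without sending it any traffic
gcloud run deploy api-backend \
  --image us-docker.pkg.dev/PROJECT/repo/api:v2 \
  --region us-central1 \
  --no-traffic \
  --tag canary

# Shift 10% of traffic to the tagged canary revision
gcloud run services update-traffic api-backend \
  --region us-central1 \
  --to-tags canary=10
```

The `canary` tag also gets its own stable URL, so the new revision can be smoke-tested directly before receiving production traffic.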

Managed Kubernetes Compared

| Feature | AWS EKS | Azure AKS | GCP GKE |
| --- | --- | --- | --- |
| Control plane cost | $0.10/hr ($73/mo) | Free | Free (Standard), $0.10/hr (Enterprise) |
| Auto-upgrade | Optional | Optional | Default (release channels) |
| Node autoscaling | Cluster Autoscaler / Karpenter | Cluster Autoscaler / KEDA | Cluster Autoscaler / Autopilot |
| Serverless nodes | Fargate profiles | Virtual nodes (ACI) | Autopilot (fully managed) |
| Service mesh | App Mesh / Istio add-on | Istio add-on / Open Service Mesh | Anthos Service Mesh (managed Istio) |
| Multi-cluster | Manual or Anthos | Azure Fleet Manager | GKE Fleet / Anthos |
| GPU support | NVIDIA GPU AMIs | GPU node pools | GPU node pools + TPU |

Portable Deployment with Terraform

hcl
# EKS cluster
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "myapp-cluster"
  cluster_version = "1.29"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    general = {
      instance_types = ["m5.xlarge"]
      min_size       = 2
      max_size       = 10
      desired_size   = 3
    }
  }
}

# AKS cluster
resource "azurerm_kubernetes_cluster" "main" {
  name                = "myapp-cluster"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = "myapp"

  default_node_pool {
    name       = "general"
    node_count = 3
    vm_size    = "Standard_D4s_v5"
    min_count  = 2
    max_count  = 10
    auto_scaling_enabled = true
  }

  identity {
    type = "SystemAssigned"
  }
}

# GKE Autopilot cluster
resource "google_container_cluster" "main" {
  name     = "myapp-cluster"
  location = "us-central1"

  enable_autopilot = true

  network    = google_compute_network.main.name
  subnetwork = google_compute_subnetwork.main.name

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }
}
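Once Terraform has provisioned the clusters, each cloud's CLI can write kubeconfig entries so a single `kubectl` works against all three. A sketch assuming the cluster names from the Terraform above, an `us-east-1` EKS region, and an Azure resource group named `myapp-rg`:

```bash
# EKS: write kubeconfig for the new cluster (region is an assumption)
aws eks update-kubeconfig --name myapp-cluster --region us-east-1

# AKS: merge credentials into ~/.kube/config (resource group is an assumption)
az aks get-credentials --resource-group myapp-rg --name myapp-cluster

# GKE: fetch credentials for the Autopilot cluster
gcloud container clusters get-credentials myapp-cluster --region us-central1

# Verify all three contexts are registered
kubectl config get-contexts
```

With all three contexts registered, `kubectl config use-context` switches clouds without changing tooling, which is the practical payoff of standardizing on Kubernetes.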

Container Image Registries

| Feature | AWS ECR | Azure ACR | GCP Artifact Registry |
| --- | --- | --- | --- |
| Geo-replication | Cross-region replication | Premium: geo-replication | Multi-region repositories |
| Vulnerability scanning | Inspector integration | Microsoft Defender | Container Analysis |
| Image signing | Notation/Cosign | Notation | Binary Authorization |
| Lifecycle policies | Yes | Yes (purge tasks) | Yes (cleanup policies) |
| Formats | OCI, Docker | OCI, Docker, Helm, OPA | OCI, Docker, Maven, npm, Python |
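Because all three registries accept OCI images, the same local build can be tagged and pushed everywhere. A sketch with placeholder identifiers: the AWS account ID (`123456789012`), registry name (`myregistry`), and GCP `PROJECT`/`repo` path are assumptions to be replaced with real values:

```bash
# Authenticate Docker to each registry (identifiers are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
az acr login --name myregistry
gcloud auth configure-docker us-docker.pkg.dev

# Tag the same local image for each registry
docker tag api:v1 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:v1
docker tag api:v1 myregistry.azurecr.io/api:v1
docker tag api:v1 us-docker.pkg.dev/PROJECT/repo/api:v1

# Push to all three
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:v1
docker push myregistry.azurecr.io/api:v1
docker push us-docker.pkg.dev/PROJECT/repo/api:v1
```

One image, three registries: this is the portability the "Container runtime" recommendation below relies on.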

Multi-Cloud Container Strategy

When running containers across multiple clouds, standardize on these areas to reduce operational complexity and enable workload portability.

Strategy Recommendations

| Area | Recommendation |
| --- | --- |
| Container runtime | OCI-compliant images work everywhere; avoid cloud-specific base images |
| Orchestration | Kubernetes for portability; serverless containers for simplicity per-cloud |
| CI/CD | Cloud-agnostic tools (GitHub Actions, GitLab CI) with per-cloud deployment steps |
| Configuration | Environment variables over cloud-specific config services |
| Secrets | External Secrets Operator for Kubernetes; env vars for serverless |
| Observability | OpenTelemetry for vendor-neutral telemetry |

Start Simple, Add Complexity

If you are running containers in a single cloud, start with the simplest option (Cloud Run, Container Apps, or App Runner). Only move to managed Kubernetes when you need features like custom operators, init containers, DaemonSets, or StatefulSets. The operational cost of Kubernetes is significant; do not adopt it unless you need it.


Key Takeaways

  1. Cloud Run and Container Apps offer scale-to-zero; Fargate does not scale to zero.
  2. GKE Autopilot provides the closest to serverless Kubernetes with fully managed nodes.
  3. AKS has a free control plane; EKS charges $0.10/hr; GKE Standard is free.
  4. Use OCI-compliant images for portability across all cloud registries.

Frequently Asked Questions

Which serverless container service is cheapest?
Cloud Run and Container Apps are cheapest for bursty workloads because they scale to zero. For steady-state, all three are comparable. Benchmark with your specific workload pattern.
Should I use Kubernetes or serverless containers?
Start with serverless containers for simplicity. Move to Kubernetes only when you need custom operators, DaemonSets, StatefulSets, or service mesh.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.