Container Orchestration Across Clouds
A comparison of ECS, Container Apps, and Cloud Run for serverless containers, and of EKS, AKS, and GKE for managed Kubernetes.
Prerequisites
- Basic understanding of containers and Docker
- Familiarity with at least one cloud container service
Container Orchestration Landscape
Every major cloud provider offers managed container services at two levels: managed Kubernetes (for full orchestration) and managed container runtimes (for simpler workloads without cluster management). Choosing between these options depends on your operational maturity, workload complexity, team expertise, and cost constraints.
This guide compares the serverless container services across AWS (ECS Fargate, App Runner), Azure (Container Apps, Container Instances), GCP (Cloud Run), and OCI (Container Instances), as well as the managed Kubernetes offerings (EKS, AKS, GKE, OKE) for workloads that need full Kubernetes capabilities.
Serverless Containers vs Kubernetes
Serverless containers (Fargate, Cloud Run, Container Apps) abstract away all infrastructure: no nodes, no clusters, no scaling configuration. You deploy a container image and the platform handles everything. Managed Kubernetes (EKS, AKS, GKE) gives you the full Kubernetes API with the control plane managed for you, but you still manage node pools, networking policies, and cluster upgrades. Use serverless containers when simplicity is the priority; use Kubernetes when you need custom operators, service mesh, or workload portability.
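To make the abstraction gap concrete, here is what a minimal deployment of the same image looks like under each model. These are illustrative commands only; the image name and project are placeholders.

```shell
# Serverless: one command, no cluster to manage (Cloud Run shown)
gcloud run deploy api --image gcr.io/my-project/api:v1 --region us-central1

# Kubernetes: you still own the cluster, scaling config, and exposure
kubectl create deployment api --image=gcr.io/my-project/api:v1
kubectl expose deployment api --port=80 --target-port=8080 --type=LoadBalancer
kubectl autoscale deployment api --min=2 --max=10 --cpu-percent=70
```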
Serverless Container Services Compared
| Feature | AWS Fargate / App Runner | Azure Container Apps | GCP Cloud Run |
|---|---|---|---|
| Scale to zero | App Runner: Yes. Fargate: No | Yes | Yes |
| Max instances | App Runner: 25. Fargate: per-service | 300 per app | 1000 per service |
| GPU support | Fargate: No. ECS on EC2: Yes | Yes (Dedicated plan) | Yes (preview) |
| Sidecar support | Fargate: Yes. App Runner: No | Yes (built-in Dapr) | Yes (multi-container) |
| Event-driven scaling | Fargate: SQS, Kinesis. App Runner: HTTP | KEDA (30+ scalers) | HTTP, Pub/Sub, Eventarc |
| Traffic splitting | App Runner: No. ALB: Yes | Yes (revision-based) | Yes (revision-based) |
| VNet/VPC integration | Yes (VPC connectors) | Yes (VNet integration) | Yes (VPC connectors) |
| Minimum pricing | Fargate: ~$0.04/hr. App Runner: $0.007/hr paused | $0 (scale to zero) | $0 (scale to zero) |
Deploying to Each Platform
AWS ECS with Fargate
# Create a Fargate service
aws ecs create-cluster --cluster-name myapp-cluster
aws ecs register-task-definition --cli-input-json file://task-def.json
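The `task-def.json` passed to `register-task-definition` is not shown in this guide; a minimal Fargate-compatible sketch might look like the following. The family name, image URI, and execution role ARN are placeholders; the container name and port match the service definition below.

```json
{
  "family": "myapp-api",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-api:v1",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```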
aws ecs create-service \
--cluster myapp-cluster \
--service-name api-service \
--task-definition myapp-api:1 \
--desired-count 3 \
--launch-type FARGATE \
--network-configuration '{
"awsvpcConfiguration": {
"subnets": ["subnet-abc123", "subnet-def456"],
"securityGroups": ["sg-abc123"],
"assignPublicIp": "DISABLED"
}
}' \
--load-balancers '[{
"targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/myapp/abc123",
"containerName": "api",
"containerPort": 8080
}]'
Azure Container Apps
# Deploy to Container Apps
az containerapp create \
--name api-backend \
--resource-group myapp-rg \
--environment myapp-env \
--image myregistry.azurecr.io/api:v1 \
--target-port 8080 \
--ingress external \
--min-replicas 0 \
--max-replicas 50 \
--cpu 1.0 --memory 2.0Gi \
--scale-rule-name http-rule \
--scale-rule-type http \
--scale-rule-http-concurrency 50
GCP Cloud Run
# Deploy to Cloud Run
gcloud run deploy api-backend \
--image us-docker.pkg.dev/PROJECT/repo/api:v1 \
--platform managed \
--region us-central1 \
--port 8080 \
--min-instances 0 \
--max-instances 100 \
--cpu 1 --memory 2Gi \
--concurrency 80 \
--allow-unauthenticated \
--set-env-vars "APP_ENV=production"
Managed Kubernetes Compared
| Feature | AWS EKS | Azure AKS | GCP GKE |
|---|---|---|---|
| Control plane cost | $0.10/hr ($73/mo) | Free tier: $0; Standard tier: $0.10/hr | $0.10/hr (free credit covers one zonal or Autopilot cluster) |
| Auto-upgrade | Optional | Optional | Default (release channels) |
| Node autoscaling | Cluster Autoscaler / Karpenter | Cluster Autoscaler / KEDA | Cluster Autoscaler / Autopilot |
| Serverless nodes | Fargate profiles | Virtual nodes (ACI) | Autopilot (fully managed) |
| Service mesh | App Mesh / Istio add-on | Istio add-on / Open Service Mesh | Anthos Service Mesh (managed Istio) |
| Multi-cluster | Manual or Anthos | Azure Fleet Manager | GKE Fleet / Anthos |
| GPU support | NVIDIA GPU AMIs | GPU node pools | GPU node pools + TPU |
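The EKS control-plane fee is easy to quantify. A quick shell sketch, assuming a 730-hour month:

```shell
# EKS charges $0.10/hr per cluster for the control plane
hours_per_month=730
fee_cents_per_hour=10
monthly_cost=$(( hours_per_month * fee_cents_per_hour / 100 ))
echo "Control plane: \$${monthly_cost}/month per cluster"
```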
Portable Deployment with Terraform
# EKS cluster
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
cluster_name = "myapp-cluster"
cluster_version = "1.29"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_groups = {
general = {
instance_types = ["m5.xlarge"]
min_size = 2
max_size = 10
desired_size = 3
}
}
}
# AKS cluster
resource "azurerm_kubernetes_cluster" "main" {
name = "myapp-cluster"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
dns_prefix = "myapp"
default_node_pool {
name = "general"
node_count = 3
vm_size = "Standard_D4s_v5"
min_count = 2
max_count = 10
auto_scaling_enabled = true
}
identity {
type = "SystemAssigned"
}
}
# GKE Autopilot cluster
resource "google_container_cluster" "main" {
name = "myapp-cluster"
location = "us-central1"
enable_autopilot = true
network = google_compute_network.main.name
subnetwork = google_compute_subnetwork.main.name
ip_allocation_policy {
cluster_secondary_range_name = "pods"
services_secondary_range_name = "services"
}
}
Container Image Registries
| Feature | AWS ECR | Azure ACR | GCP Artifact Registry |
|---|---|---|---|
| Geo-replication | Cross-region replication | Premium: geo-replication | Multi-region repositories |
| Vulnerability scanning | Inspector integration | Microsoft Defender | Container Analysis |
| Image signing | Notation/Cosign | Notation | Binary Authorization |
| Lifecycle policies | Yes | Yes (purge tasks) | Yes (cleanup policies) |
| Formats | OCI, Docker | OCI, Docker, Helm, OPA | OCI, Docker, Maven, npm, Python |
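Because all three registries accept standard OCI pushes, the same image can be published everywhere with plain `docker` commands. The registry hosts, account IDs, and repository names below are placeholders, and each registry needs its own `docker login` or credential helper configured first.

```shell
# Build once, then tag and push to each registry with standard tooling
docker build -t myapp-api:v1 .

# AWS ECR
docker tag myapp-api:v1 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-api:v1
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-api:v1

# Azure ACR
docker tag myapp-api:v1 myregistry.azurecr.io/myapp-api:v1
docker push myregistry.azurecr.io/myapp-api:v1

# GCP Artifact Registry
docker tag myapp-api:v1 us-docker.pkg.dev/my-project/repo/myapp-api:v1
docker push us-docker.pkg.dev/my-project/repo/myapp-api:v1
```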
Multi-Cloud Container Strategy
When running containers across multiple clouds, standardize on these areas to reduce operational complexity and enable workload portability.
Strategy Recommendations
| Area | Recommendation |
|---|---|
| Container runtime | OCI-compliant images work everywhere; avoid cloud-specific base images |
| Orchestration | Kubernetes for portability; serverless containers for simplicity per-cloud |
| CI/CD | Cloud-agnostic tools (GitHub Actions, GitLab CI) with per-cloud deployment steps |
| Configuration | Environment variables over cloud-specific config services |
| Secrets | External Secrets Operator for Kubernetes; env vars for serverless |
| Observability | OpenTelemetry for vendor-neutral telemetry |
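The observability recommendation works because the OpenTelemetry SDK is configured through the same standard environment variables on every cloud. A minimal sketch; the collector endpoint is a placeholder:

```shell
# Standard OpenTelemetry SDK configuration via environment variables
export OTEL_SERVICE_NAME="api-backend"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production"
```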
Start Simple, Add Complexity
If you are running containers in a single cloud, start with the simplest option (Cloud Run, Container Apps, or App Runner). Only move to managed Kubernetes when you need features like custom operators, init containers, DaemonSets, or StatefulSets. The operational cost of Kubernetes is significant; do not adopt it unless you need it.
Key Takeaways
- Cloud Run and Container Apps offer scale to zero; Fargate does not.
- GKE Autopilot is the closest thing to serverless Kubernetes, with fully managed nodes.
- AKS offers a free control-plane tier; EKS charges $0.10/hr; GKE charges $0.10/hr with a free credit for one zonal or Autopilot cluster.
- Use OCI-compliant images for portability across all cloud registries.
Frequently Asked Questions
Which serverless container service is cheapest?
For idle or spiky workloads, the scale-to-zero services (Cloud Run and Container Apps) are cheapest, since you pay nothing when no requests arrive. Fargate bills for every running task (roughly $0.04/hr minimum), so it costs more for low-traffic services but can be competitive for steady, always-on workloads.
Should I use Kubernetes or serverless containers?
Default to serverless containers. Move to managed Kubernetes only when you need capabilities they lack, such as custom operators, DaemonSets, StatefulSets, service mesh, or multi-cloud workload portability.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.