Azure Container Apps Guide
Deploy containers with Azure Container Apps: revisions, KEDA scaling, Dapr, jobs, and custom domains.
Prerequisites
- Basic understanding of containers and Docker
- Familiarity with Azure Container Registry
What Are Azure Container Apps?
Azure Container Apps is a fully managed serverless container platform that enables you to run microservices and containerized applications without managing infrastructure. Built on Kubernetes and powered by open-source technologies like KEDA, Dapr, and Envoy, Container Apps provides the benefits of Kubernetes without the complexity of managing clusters, node pools, or control planes.
Container Apps sits between Azure App Service (PaaS, limited container support) and Azure Kubernetes Service (full Kubernetes, high complexity). It is ideal for microservices, API backends, event-driven processing, and background jobs that need container flexibility without Kubernetes operational overhead. You pay only for the resources your containers consume, with scale-to-zero support for cost optimization.
This guide covers Container Apps architecture, revisions and traffic splitting, autoscaling with KEDA, Dapr integration for microservices, background jobs, custom domains with TLS, and production deployment patterns.
Consumption vs Dedicated Plans
Container Apps offers two workload profile types: Consumption (serverless, scale-to-zero, pay-per-use) and Dedicated (reserved compute, predictable pricing, GPU support). Use Consumption for most workloads; use Dedicated when you need guaranteed capacity, GPU access, or compliance with data residency requirements that prohibit shared infrastructure.
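As a sketch of the Dedicated path, you can add a dedicated workload profile to an existing environment and pin an app to it. The profile name, node counts, and app name below are illustrative; verify the exact flags for your CLI version with `az containerapp env workload-profile add -h`.

```shell
# Add a Dedicated workload profile to the environment (D4 = 4 vCPU / 16 GiB)
az containerapp env workload-profile add \
  --name myapp-env \
  --resource-group myapp-rg \
  --workload-profile-name dedicated-d4 \
  --workload-profile-type D4 \
  --min-nodes 1 \
  --max-nodes 3

# Deploy an app onto that profile instead of the Consumption plan
az containerapp create \
  --name steady-api \
  --resource-group myapp-rg \
  --environment myapp-env \
  --image myregistry.azurecr.io/api-backend:v1.2.3 \
  --workload-profile-name dedicated-d4 \
  --cpu 2.0 \
  --memory 4.0Gi
```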
Creating Your First Container App
# Create a Container Apps environment
az containerapp env create \
--name myapp-env \
--resource-group myapp-rg \
--location eastus \
--logs-workspace-id "LOG_ANALYTICS_WORKSPACE_ID"
# Deploy a container app from a public image
az containerapp create \
--name api-backend \
--resource-group myapp-rg \
--environment myapp-env \
--image mcr.microsoft.com/k8se/quickstart:latest \
--target-port 80 \
--ingress external \
--min-replicas 1 \
--max-replicas 10 \
--cpu 0.5 \
--memory 1.0Gi \
--env-vars "APP_ENV=production" "LOG_LEVEL=info"
# Deploy from Azure Container Registry
az containerapp create \
--name api-backend \
--resource-group myapp-rg \
--environment myapp-env \
--image myregistry.azurecr.io/api-backend:v1.2.3 \
--registry-server myregistry.azurecr.io \
--registry-identity system \
--target-port 8080 \
--ingress external \
--min-replicas 2 \
--max-replicas 50 \
--cpu 1.0 \
--memory 2.0Gi
# Get the app URL
az containerapp show \
--name api-backend \
--resource-group myapp-rg \
--query 'properties.configuration.ingress.fqdn' -o tsv
Revisions and Traffic Splitting
Every deployment to a Container App creates a new revision. Revisions are immutable snapshots of your container app configuration and image. You can run multiple revisions simultaneously and split traffic between them for blue/green deployments, canary releases, and A/B testing.
# Enable multiple revision mode
az containerapp revision set-mode \
--name api-backend \
--resource-group myapp-rg \
--mode multiple
# Deploy a new revision (canary)
az containerapp update \
--name api-backend \
--resource-group myapp-rg \
--image myregistry.azurecr.io/api-backend:v2.0.0 \
--revision-suffix v2
# Split traffic: 90% to v1, 10% to v2 (canary)
az containerapp ingress traffic set \
--name api-backend \
--resource-group myapp-rg \
--revision-weight "api-backend--v1=90" "api-backend--v2=10"
# Gradually increase v2 traffic
az containerapp ingress traffic set \
--name api-backend \
--resource-group myapp-rg \
--revision-weight "api-backend--v1=50" "api-backend--v2=50"
# Complete the rollout (100% to v2)
az containerapp ingress traffic set \
--name api-backend \
--resource-group myapp-rg \
--revision-weight "api-backend--v2=100"
# List revisions
az containerapp revision list \
--name api-backend \
--resource-group myapp-rg \
--query '[].{Name:name, Active:properties.active, Traffic:properties.trafficWeight, Created:properties.createdTime}' \
--output table
# Deactivate old revision (save resources)
az containerapp revision deactivate \
--name api-backend \
--resource-group myapp-rg \
--revision api-backend--v1
Label-Based Routing
You can assign labels to revisions (e.g., "stable", "canary", "blue", "green") and route traffic to labels instead of specific revision names. Labels create dedicated URLs like canary---api-backend.kindflower-abc123.eastus.azurecontainerapps.io that always point to the labeled revision, which is useful for testing a revision before promoting it.
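A minimal sketch of label-based routing (label and revision names are illustrative, continuing the canary example above):

```shell
# Attach labels to revisions
az containerapp revision label add \
  --name api-backend \
  --resource-group myapp-rg \
  --label stable \
  --revision api-backend--v1

az containerapp revision label add \
  --name api-backend \
  --resource-group myapp-rg \
  --label canary \
  --revision api-backend--v2

# Split traffic by label instead of revision name
az containerapp ingress traffic set \
  --name api-backend \
  --resource-group myapp-rg \
  --label-weight "stable=90" "canary=10"
```

Because the labels stay fixed while revisions rotate underneath them, promoting a release becomes a matter of moving a label rather than editing every traffic rule.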
Autoscaling with KEDA
Container Apps uses KEDA (Kubernetes Event-Driven Autoscaling) for scaling. KEDA supports dozens of scale triggers beyond HTTP: Azure Queue Storage, Service Bus, Kafka, Cron, PostgreSQL, and custom metrics. This makes Container Apps ideal for event-driven workloads that need to scale based on queue depth, message backlog, or scheduled times.
# Scale based on HTTP requests (default)
az containerapp update \
--name api-backend \
--resource-group myapp-rg \
--min-replicas 2 \
--max-replicas 100 \
--scale-rule-name http-rule \
--scale-rule-type http \
--scale-rule-http-concurrency 50
# Scale based on Azure Queue Storage depth
az containerapp update \
--name queue-processor \
--resource-group myapp-rg \
--min-replicas 0 \
--max-replicas 30 \
--scale-rule-name queue-rule \
--scale-rule-type azure-queue \
--scale-rule-metadata "queueName=orders" "queueLength=10" "accountName=mystorageaccount" \
--scale-rule-auth "connection=storage-connection-string"
# Scale based on Service Bus queue
az containerapp update \
--name sb-processor \
--resource-group myapp-rg \
--min-replicas 0 \
--max-replicas 50 \
--scale-rule-name servicebus-rule \
--scale-rule-type azure-servicebus \
--scale-rule-metadata "queueName=orders" "messageCount=5" \
--scale-rule-auth "connection=sb-connection-string"
# Scale based on cron schedule
az containerapp update \
--name batch-processor \
--resource-group myapp-rg \
--scale-rule-name business-hours \
--scale-rule-type cron \
--scale-rule-metadata "timezone=America/New_York" "start=0 8 * * 1-5" "end=0 18 * * 1-5" "desiredReplicas=10"
Dapr Integration
Dapr (Distributed Application Runtime) is built into Container Apps and provides building blocks for microservices: service-to-service invocation, state management, pub/sub messaging, bindings, secrets, and observability. Enabling Dapr on a Container App automatically deploys a Dapr sidecar alongside your container.
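Once the sidecar is running, your application talks to Dapr over localhost using its standard HTTP API. A sketch (port 3500 is Dapr's default sidecar HTTP port; the app IDs, component names, and keys match the examples in this section but are otherwise illustrative):

```shell
# Service-to-service invocation: call order-service's /orders endpoint via the sidecar
curl http://localhost:3500/v1.0/invoke/order-service/method/orders

# Save state through the "statestore" component
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{"key": "order-123", "value": {"status": "pending"}}]'

# Read it back
curl http://localhost:3500/v1.0/state/statestore/order-123

# Publish a message through the "pubsub" component to the "orders" topic
curl -X POST http://localhost:3500/v1.0/publish/pubsub/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": "order-123"}'
```

The same calls work unchanged if you later swap the Cosmos DB state store or Service Bus pub/sub for another backend; only the component YAML changes.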
# Enable Dapr on a container app
az containerapp update \
--name api-backend \
--resource-group myapp-rg \
--dapr-enabled true \
--dapr-app-id api-backend \
--dapr-app-port 8080 \
--dapr-app-protocol http
# Configure a Dapr state store component
az containerapp env dapr-component set \
--name myapp-env \
--resource-group myapp-rg \
--dapr-component-name statestore \
--yaml - <<'EOF'
componentType: state.azure.cosmosdb
version: v1
metadata:
- name: url
  value: "https://mycosmosdb.documents.azure.com:443/"
- name: database
  value: "daprstate"
- name: collection
  value: "state"
- name: masterKey
  secretRef: cosmosdb-key
secrets:
- name: cosmosdb-key
  value: "COSMOS_DB_KEY_HERE"
scopes:
- api-backend
- order-service
EOF
# Configure Dapr pub/sub component
az containerapp env dapr-component set \
--name myapp-env \
--resource-group myapp-rg \
--dapr-component-name pubsub \
--yaml - <<'EOF'
componentType: pubsub.azure.servicebus.topics
version: v1
metadata:
- name: connectionString
  secretRef: sb-connection
secrets:
- name: sb-connection
  value: "Endpoint=sb://myapp.servicebus.windows.net/;SharedAccessKeyName=..."
scopes:
- api-backend
- order-service
- notification-service
EOF
Container Apps Jobs
Container Apps Jobs run containerized tasks to completion (as opposed to continuously running apps). Jobs support three trigger types: manual (on-demand), scheduled (cron), and event-driven (KEDA). Jobs are ideal for batch processing, data migrations, report generation, and machine learning training tasks.
# Create a scheduled job (cron)
az containerapp job create \
--name daily-report \
--resource-group myapp-rg \
--environment myapp-env \
--image myregistry.azurecr.io/report-generator:latest \
--registry-server myregistry.azurecr.io \
--trigger-type Schedule \
--cron-expression "0 6 * * *" \
--replica-timeout 3600 \
--replica-retry-limit 2 \
--parallelism 1 \
--replica-completion-count 1 \
--cpu 2.0 \
--memory 4.0Gi \
--env-vars "REPORT_TYPE=daily"
# Create an event-driven job (scale based on queue)
az containerapp job create \
--name queue-job \
--resource-group myapp-rg \
--environment myapp-env \
--image myregistry.azurecr.io/queue-processor:latest \
--trigger-type Event \
--min-executions 0 \
--max-executions 10 \
--scale-rule-name queue-trigger \
--scale-rule-type azure-queue \
--scale-rule-metadata "queueName=jobs" "queueLength=1" \
--scale-rule-auth "connection=storage-conn"
# Manually start a job execution
az containerapp job start \
--name daily-report \
--resource-group myapp-rg
# View job execution history
az containerapp job execution list \
--name daily-report \
--resource-group myapp-rg \
--query '[].{Name:name, Status:properties.status, Started:properties.startTime}' \
--output table
Custom Domains and TLS
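Before adding a hostname, DNS must both point at the app and prove domain ownership. A sketch of the records involved (the query paths are standard Container Apps properties; how you create the records depends on your DNS provider):

```shell
# Get the app's FQDN -- this is the CNAME target
az containerapp show \
  --name api-backend \
  --resource-group myapp-rg \
  --query 'properties.configuration.ingress.fqdn' -o tsv

# Get the domain verification ID -- this is the TXT record value
az containerapp show \
  --name api-backend \
  --resource-group myapp-rg \
  --query 'properties.customDomainVerificationId' -o tsv

# Then create, in your DNS zone:
#   CNAME  api.mycompany.com        -> <app FQDN from above>
#   TXT    asuid.api.mycompany.com  -> <verification ID from above>
```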
# Add a custom domain with managed certificate
az containerapp hostname add \
--name api-backend \
--resource-group myapp-rg \
--hostname api.mycompany.com
# Bind a managed certificate (free, auto-renewed)
az containerapp hostname bind \
--name api-backend \
--resource-group myapp-rg \
--hostname api.mycompany.com \
--environment myapp-env \
--validation-method CNAME
# Or use a custom certificate
az containerapp env certificate upload \
--name myapp-env \
--resource-group myapp-rg \
--certificate-file ./cert.pfx \
--password "cert-password"
az containerapp hostname bind \
--name api-backend \
--resource-group myapp-rg \
--hostname api.mycompany.com \
--certificate CERT_ID
Terraform Deployment
resource "azurerm_container_app_environment" "main" {
  name                       = "myapp-env"
  location                   = azurerm_resource_group.main.location
  resource_group_name        = azurerm_resource_group.main.name
  log_analytics_workspace_id = azurerm_log_analytics_workspace.main.id

  tags = {
    Environment = "production"
  }
}

resource "azurerm_container_app" "api" {
  name                         = "api-backend"
  container_app_environment_id = azurerm_container_app_environment.main.id
  resource_group_name          = azurerm_resource_group.main.name
  revision_mode                = "Multiple"

  template {
    min_replicas = 2
    max_replicas = 50

    container {
      name   = "api"
      image  = "myregistry.azurecr.io/api-backend:v1.0.0"
      cpu    = 1.0
      memory = "2Gi"

      env {
        name  = "APP_ENV"
        value = "production"
      }

      env {
        name        = "DB_CONNECTION"
        secret_name = "db-connection"
      }

      liveness_probe {
        transport = "HTTP"
        path      = "/health"
        port      = 8080
      }

      readiness_probe {
        transport = "HTTP"
        path      = "/ready"
        port      = 8080
      }
    }
  }

  ingress {
    external_enabled = true
    target_port      = 8080

    traffic_weight {
      latest_revision = true
      percentage      = 100
    }
  }

  dapr {
    app_id   = "api-backend"
    app_port = 8080
  }

  secret {
    name  = "db-connection"
    value = var.db_connection_string
  }
}
Monitoring and Observability
# View container app logs
az containerapp logs show \
--name api-backend \
--resource-group myapp-rg \
--type console \
--follow
# View system logs (scaling events, errors)
az containerapp logs show \
--name api-backend \
--resource-group myapp-rg \
--type system
# Query logs with Log Analytics (KQL)
# ContainerAppConsoleLogs_CL
# | where ContainerAppName_s == "api-backend"
# | where Log_s contains "error"
# | project TimeGenerated, Log_s
# | order by TimeGenerated desc
# | take 50
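Beyond logs, platform metrics are exposed through Azure Monitor. A sketch (Requests and Replicas are standard Container Apps metric names, but check `az monitor metrics list-definitions` against your resource for the full set):

```shell
# Resolve the app's resource ID
APP_ID=$(az containerapp show \
  --name api-backend \
  --resource-group myapp-rg \
  --query id -o tsv)

# Request rate, sampled per minute
az monitor metrics list \
  --resource "$APP_ID" \
  --metric Requests \
  --interval PT1M

# Peak replica count, to see scaling behavior
az monitor metrics list \
  --resource "$APP_ID" \
  --metric Replicas \
  --aggregation Maximum
```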
# Check replica count and scaling
az containerapp replica list \
--name api-backend \
--resource-group myapp-rg \
--query '[].{Name:name, Created:properties.createdTime, Running:properties.runningState}'
Container Apps vs AKS Decision
Use Container Apps when you want serverless containers with automatic scaling, no cluster management, and built-in Dapr/KEDA support. Use AKS when you need full Kubernetes API access, custom operators, a service mesh (Istio/Linkerd), specific node configurations, or GPU workloads with fine-grained control. Container Apps is the simpler choice for the large majority of containerized workloads.
Key Takeaways
- Container Apps provides serverless containers with scale-to-zero and built-in KEDA/Dapr support.
- Multiple revision mode enables traffic splitting for blue/green and canary deployments.
- KEDA supports 30+ scale triggers beyond HTTP: queues, Service Bus, Kafka, cron, and custom metrics.
- Container Apps Jobs run tasks to completion with manual, scheduled, or event-driven triggers.
Frequently Asked Questions
When should I use Container Apps vs AKS?
Use Container Apps when you want serverless containers without managing clusters, node pools, or control planes. Choose AKS when you need the full Kubernetes API, custom operators, a service mesh, or fine-grained node control.
Can Container Apps scale to zero?
Yes. On the Consumption plan, set --min-replicas 0 and the app scales to zero when idle, so you pay nothing while no replicas are running; HTTP traffic or a KEDA event trigger scales it back up on demand.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.