
AKS vs App Service Decision

Choose between Azure Kubernetes Service and App Service based on complexity and requirements.

CloudToolStack Team · 24 min read · Published Feb 22, 2026

AKS vs App Service: Choosing the Right Azure Compute Platform

One of the most consequential architectural decisions in Azure is whether to deploy containerized workloads on Azure Kubernetes Service (AKS) or Azure App Service. Both platforms run containers, both integrate with CI/CD pipelines, and both support auto-scaling. Yet they represent fundamentally different philosophies: App Service abstracts infrastructure so you can focus on code, while AKS gives you the full power of Kubernetes orchestration at the cost of operational complexity. Choosing poorly can lock your team into years of unnecessary toil or, conversely, prevent you from scaling when the business demands it.

This guide provides a comprehensive, structured comparison across every dimension that matters: architecture, scaling, networking, security, cost, CI/CD, monitoring, and team readiness. We also cover Azure Container Apps (ACA) as the increasingly popular middle ground, and provide a decision framework you can apply to your own workloads.

Platform Architecture Overview

Understanding the architectural foundations of each platform helps explain their strengths and limitations. The two services sit at different points on the abstraction spectrum.

Azure App Service Architecture

App Service runs on a fully managed PaaS layer. Microsoft handles OS patching, runtime updates, load balancing, and infrastructure health. You deploy your code (or a single container) into an App Service Plan, which represents a pool of dedicated compute resources.

  • App Service Plan: Defines the VM SKU, region, and number of instances. Multiple web apps can share a single plan (multi-tenant on the same VMs).
  • Deployment unit: One application per App Service instance. While sidecar support is emerging (preview), the model is fundamentally single-container.
  • Built-in features: TLS termination, custom domains, authentication (Easy Auth), deployment slots, WebSocket support, and diagnostics are included without additional configuration.
  • Runtimes: Native support for .NET, Node.js, Python, Java, PHP, Ruby, and Go. Custom containers are also supported on Linux plans.

Azure Kubernetes Service Architecture

AKS provides a managed Kubernetes control plane (API server, etcd, scheduler, controller manager) while you manage the worker nodes that run your workloads. Microsoft handles control plane availability, Kubernetes version upgrades (you still initiate them), and certificate management.

  • Node pools: Collections of VMs with the same size and configuration. You can have multiple node pools with different VM sizes for different workloads (system vs user, CPU vs GPU).
  • Deployment unit: Pods containing one or more containers. Full Kubernetes primitives are available: Deployments, StatefulSets, DaemonSets, Jobs, CronJobs.
  • Networking: Azure CNI or kubenet networking. Full support for Network Policies, ingress controllers, service meshes, and load balancers.
  • Extensibility: Custom Resource Definitions (CRDs), operators, admission webhooks, and the entire CNCF ecosystem.

AKS Pricing Tiers

AKS offers three control plane tiers: Free (no SLA, suitable for dev/test), Standard ($73/month per cluster, 99.95% API server SLA), and Premium ($146/month per cluster, adds long-term support versions and advanced cluster management). You always pay separately for the worker node VMs, storage, and networking.
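To see how the control plane tier combines with node costs, a quick back-of-the-envelope calculation shows the monthly floor for a minimal Standard-tier cluster. The prices below are illustrative placeholders, not current Azure list prices:

```shell
#!/usr/bin/env bash
# Rough monthly cost floor for a small AKS cluster.
# Assumed (illustrative): Standard tier ~$73/mo per cluster, D2s_v5 node ~$70/mo each.
control_plane=73
node_price=70
node_count=2

total=$(( control_plane + node_price * node_count ))
echo "Estimated monthly floor: \$${total}"   # 73 + 2*70 = 213
```

This is the floor only; disks, load balancers, egress, and monitoring come on top, as discussed in the cost section below.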

High-Level Comparison

The following table summarizes the key differences across the most important dimensions. Each dimension is explored in detail in subsequent sections.

| Dimension | Azure App Service | Azure Kubernetes Service |
| --- | --- | --- |
| Abstraction level | PaaS (fully managed platform) | CaaS (managed Kubernetes control plane) |
| Container support | Single container or Docker Compose (limited) | Full Kubernetes orchestration (pods, deployments, services, operators) |
| Scaling ceiling | Auto-scale up to 30 instances (100 with ASE v3) | Cluster autoscaler + HPA/VPA/KEDA (thousands of pods) |
| Scale-to-zero | No (minimum 1 instance always running) | Yes (with KEDA; non-system node pools can scale to 0) |
| Networking | VNet integration, Private Endpoints, IP restrictions | Full VNet integration, Network Policies, service mesh, custom ingress |
| Operational overhead | Low: Microsoft manages the platform | Medium-high: you manage nodes, upgrades, networking, security |
| Multi-service apps | One app per instance (sidecar support in preview) | Multiple containers per pod, full microservices with service discovery |
| CI/CD | Deployment slots, GitHub Actions, Azure DevOps | Helm, Kustomize, Flux, ArgoCD, GitOps, custom pipelines |
| Stateful workloads | Not designed for stateful apps | StatefulSets, Persistent Volumes, CSI drivers |
| Cost model | Per App Service Plan (fixed compute, shared by apps) | Per node VM + load balancers + disks + add-ons |
| Minimum cost (production) | ~$55/mo (B1 plan) | ~$213/mo (2-node D2s_v5 + Standard tier) |
| Learning curve | Low (deploy and go) | Steep (Kubernetes ecosystem knowledge required) |
| Workload portability | Azure-specific (some Docker portability) | High (Kubernetes runs on any cloud or on-premises) |

When to Choose App Service

App Service excels when you want to minimize infrastructure concerns and focus on delivering application value. It is the right choice in the following scenarios.

Web Applications and APIs

Standard HTTP workloads, such as REST APIs, server-rendered web applications, single-page app backends, and GraphQL endpoints, are App Service's sweet spot. You get TLS termination, custom domains, WebSocket support, and health checks out of the box without configuring ingress controllers or load balancers.

Small to Medium Teams Without Kubernetes Expertise

If your team does not include dedicated platform engineers with Kubernetes experience, App Service eliminates an entire category of operational burden. There are no nodes to patch, no control plane upgrades to schedule, no RBAC policies to configure at the cluster level, and no risk of misconfigured Network Policies breaking inter-service communication.

Monoliths and Simple Architectures

Applications with a single deployable unit or a small number of services (2–4) are straightforward to manage on App Service. Each service gets its own web app within the same App Service Plan, keeping costs low while maintaining isolation.

Rapid Prototyping and MVPs

Going from code to a production URL in minutes with deployment slots and integrated CI/CD is App Service's superpower. For startups and teams validating product-market fit, the speed advantage is significant.

Deploy a containerized app to App Service
# Create an App Service Plan (Linux, Premium v3)
az appservice plan create \
  --name my-api-plan \
  --resource-group myRG \
  --is-linux \
  --sku P1v3

# Deploy a container from Azure Container Registry
az webapp create \
  --name my-api-app \
  --resource-group myRG \
  --plan my-api-plan \
  --container-image-name myacr.azurecr.io/my-api:v1.2.3

# Enable managed identity for ACR pull
az webapp identity assign --name my-api-app --resource-group myRG
az role assignment create \
  --assignee $(az webapp identity show --name my-api-app --resource-group myRG --query principalId -o tsv) \
  --role AcrPull \
  --scope $(az acr show --name myacr --query id -o tsv)

# Enable VNet integration for backend access
az webapp vnet-integration add \
  --name my-api-app \
  --resource-group myRG \
  --vnet myVNet \
  --subnet app-service-subnet

# Configure health check endpoint
az webapp config set \
  --name my-api-app \
  --resource-group myRG \
  --generic-configurations '{"healthCheckPath": "/health"}'

App Service Plan Tiers Comparison

Choosing the right App Service Plan tier affects performance, features, and cost. Here is a comparison of the production-relevant tiers.

| Tier | vCPU / RAM | Max Instances | Key Features | Starting Price |
| --- | --- | --- | --- | --- |
| Basic (B1-B3) | 1-4 vCPU / 1.75-7 GB | 3 | Custom domains, TLS | ~$55/mo |
| Standard (S1-S3) | 1-4 vCPU / 1.75-7 GB | 10 | Auto-scale, deployment slots, backups | ~$73/mo |
| Premium v3 (P1v3-P3v3) | 2-8 vCPU / 8-32 GB | 30 | Enhanced compute, VNet integration, zone redundancy | ~$138/mo |
| Premium v3 memory-opt (P1mv3-P5mv3) | 2-32 vCPU / 16-256 GB | 30 | High-memory workloads | ~$230/mo |
| Isolated v2 (I1v2-I3v2) | 2-8 vCPU / 8-32 GB | 100 | Dedicated hardware, ASE, highest isolation | ~$416/mo |
Configure App Service auto-scaling rules
# Create auto-scale settings for an App Service Plan
az monitor autoscale create \
  --resource-group myRG \
  --resource my-api-plan \
  --resource-type Microsoft.Web/serverfarms \
  --name my-api-autoscale \
  --min-count 2 \
  --max-count 10 \
  --count 2

# Scale out when average CPU > 70% for 10 minutes
az monitor autoscale rule create \
  --resource-group myRG \
  --autoscale-name my-api-autoscale \
  --condition "CpuPercentage > 70 avg 10m" \
  --scale out 2

# Scale in when average CPU < 30% for 15 minutes
az monitor autoscale rule create \
  --resource-group myRG \
  --autoscale-name my-api-autoscale \
  --condition "CpuPercentage < 30 avg 15m" \
  --scale in 1

# Add a schedule-based rule for business hours
az monitor autoscale profile create \
  --resource-group myRG \
  --autoscale-name my-api-autoscale \
  --name business-hours \
  --min-count 4 \
  --max-count 10 \
  --count 4 \
  --recurrence week Mon Tue Wed Thu Fri \
  --start "08:00" --end "18:00" \
  --timezone "Eastern Standard Time"

Deployment Slots Are a Superpower

App Service deployment slots provide zero-downtime deployments with instant rollback. Deploy to a staging slot, validate with slot-specific app settings, then swap into production. The swap operation redirects traffic at the load balancer level, with no DNS propagation delay, no cold starts. This alone makes App Service compelling for teams that deploy frequently.

Blue-green deployment with deployment slots
# Create a staging slot
az webapp deployment slot create \
  --name my-api-app \
  --resource-group myRG \
  --slot staging

# Deploy new version to staging
az webapp config container set \
  --name my-api-app \
  --resource-group myRG \
  --slot staging \
  --container-image-name myacr.azurecr.io/my-api:v1.3.0

# Test the staging slot
curl https://my-api-app-staging.azurewebsites.net/health

# Route 10% of production traffic to staging (canary)
az webapp traffic-routing set \
  --name my-api-app \
  --resource-group myRG \
  --distribution staging=10

# After validation, swap staging to production
az webapp deployment slot swap \
  --name my-api-app \
  --resource-group myRG \
  --slot staging \
  --target-slot production

# If issues are detected, run the same swap again to revert immediately
# (a slot swap is symmetric: swapping staging and production a second
# time restores the previous production version)
az webapp deployment slot swap \
  --name my-api-app \
  --resource-group myRG \
  --slot staging \
  --target-slot production

When to Choose AKS

AKS is the right choice when you need the full power and flexibility of Kubernetes orchestration. The additional operational complexity is justified in these scenarios.

Microservices Architectures (5+ Services)

When your application consists of five or more independently deployable services that need service discovery, load balancing, inter-service communication, and independent scaling, Kubernetes's built-in primitives (Services, Ingress, NetworkPolicy) provide the foundation. App Service requires external solutions (API Management, Front Door) to achieve similar service mesh capabilities.

Multi-Container and Sidecar Workloads

Applications requiring sidecar containers (service mesh proxies like Envoy, log collectors like Fluentd, security agents), init containers (database migrations, config loading), or complex pod configurations are native to Kubernetes. The pod abstraction allows tightly coupled containers to share networking, storage, and lifecycle management.
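As a minimal sketch of the pod abstraction described above (image names and the migration command are illustrative), an init container can run database migrations before the app starts while a logging sidecar shares the pod's volumes:

```yaml
# Illustrative pod spec: the init container runs to completion before the
# app container starts; the sidecar reads logs from a shared emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-api-with-sidecar
spec:
  initContainers:
    - name: db-migrate
      image: myacr.azurecr.io/my-api-migrations:v1.2.3   # hypothetical image
      command: ["./migrate", "--up"]                     # hypothetical command
  containers:
    - name: my-api
      image: myacr.azurecr.io/my-api:v1.2.3
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-shipper
      image: fluent/fluent-bit:2.2     # ships logs from the shared volume
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}
```

In practice you would embed this pod template in a Deployment rather than creating a bare Pod.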

Advanced Scaling Requirements

Kubernetes offers fine-grained scaling beyond simple CPU/memory metrics. The Horizontal Pod Autoscaler (HPA) can scale based on custom metrics from Prometheus. The Vertical Pod Autoscaler (VPA) automatically right-sizes container resource requests. KEDA (Kubernetes Event-Driven Autoscaling) enables scaling based on queue depth, event stream lag, cron schedules, and dozens of other triggers. The Cluster Autoscaler adds or removes nodes based on pending pods, and you can scale user node pools to zero during off-hours.
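A KEDA ScaledObject illustrates the queue-depth scaling mentioned above. This is a sketch: it assumes KEDA is installed in the cluster and that a `TriggerAuthentication` named `sb-auth` holds the Service Bus credentials; the queue and workload names are placeholders.

```yaml
# Illustrative KEDA ScaledObject: scale a worker Deployment on Azure
# Service Bus queue depth, down to zero replicas when the queue is empty.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-worker-scaler
  namespace: production
spec:
  scaleTargetRef:
    name: my-worker            # the Deployment to scale
  minReplicaCount: 0           # scale to zero when idle
  maxReplicaCount: 30
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: work-items
        messageCount: "10"     # target messages per replica
      authenticationRef:
        name: sb-auth          # assumed TriggerAuthentication resource
```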

Multi-Cloud or Hybrid Strategy

If your organization requires workload portability across cloud providers or needs to run the same platform on-premises and in Azure, Kubernetes provides the common abstraction layer. Your Helm charts, manifests, and CI/CD pipelines work across AKS, EKS, GKE, and on-premises clusters with minimal changes.

Stateful Workloads

Applications that require persistent storage (databases, message queues, caches running as containers) benefit from Kubernetes StatefulSets, Persistent Volume Claims, and CSI drivers. AKS integrates with Azure Disks and Azure Files for persistent storage, and supports third-party storage solutions through CSI.
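A StatefulSet with a volume claim template shows the pattern; this is a sketch with placeholder names, using the built-in `managed-csi` storage class that provisions an Azure Disk per replica:

```yaml
# Illustrative StatefulSet: each replica gets a stable identity and its own
# Azure Disk, provisioned through the Azure Disk CSI driver.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-queue
  namespace: production
spec:
  serviceName: my-queue-headless   # assumed headless Service for stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: my-queue
  template:
    metadata:
      labels:
        app: my-queue
    spec:
      containers:
        - name: my-queue
          image: myacr.azurecr.io/my-queue:v1.0.0   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/queue
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-csi   # built-in AKS Azure Disk class
        resources:
          requests:
            storage: 64Gi
```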

Create a production AKS cluster with best practices
# Create an AKS cluster with production settings
az aks create \
  --name my-aks-cluster \
  --resource-group myRG \
  --location eastus2 \
  --node-count 3 \
  --node-vm-size Standard_D4s_v5 \
  --enable-managed-identity \
  --attach-acr myacr \
  --network-plugin azure \
  --network-policy calico \
  --vnet-subnet-id /subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/aks-subnet \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 \
  --zones 1 2 3 \
  --tier standard \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 10 \
  --enable-defender \
  --enable-workload-identity \
  --enable-oidc-issuer \
  --auto-upgrade-channel stable

# Add a spot node pool for batch/non-critical workloads
az aks nodepool add \
  --cluster-name my-aks-cluster \
  --resource-group myRG \
  --name spotnodes \
  --node-vm-size Standard_D4s_v5 \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1 \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 20 \
  --labels workload-type=batch \
  --node-taints "kubernetes.azure.com/scalesetpriority=spot:NoSchedule"

# Add a GPU node pool for ML inference
az aks nodepool add \
  --cluster-name my-aks-cluster \
  --resource-group myRG \
  --name gpunodes \
  --node-vm-size Standard_NC6s_v3 \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 4 \
  --labels workload-type=gpu \
  --node-taints "nvidia.com/gpu=present:NoSchedule"

# Get credentials
az aks get-credentials --name my-aks-cluster --resource-group myRG
Kubernetes deployment manifest with production settings
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  namespace: production
  labels:
    app: my-api
    version: v1.2.3
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
        version: v1.2.3
        azure.workload.identity/use: "true"
    spec:
      serviceAccountName: my-api-sa
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-api
      containers:
        - name: my-api
          image: myacr.azurecr.io/my-api:v1.2.3
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: "2"
              memory: 1Gi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          startupProbe:
            httpGet:
              path: /healthz
              port: 8080
            failureThreshold: 30
            periodSeconds: 10
          env:
            - name: AZURE_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: my-api-secrets
                  key: client-id
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: production
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: my-api
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80

The True Cost of Kubernetes

AKS node VMs may appear cheaper than App Service on paper, but you must account for the full cost: worker node VMs, managed disks, load balancers, egress traffic, Container Registry, monitoring (Container Insights), and most importantly the engineering time to operate the cluster. A realistic estimate adds 30–50% on top of raw compute costs for operational overhead. Budget for at least one platform engineer per 5–10 AKS clusters in production.

The Middle Ground: Azure Container Apps

Azure Container Apps (ACA) is a serverless container platform built on Kubernetes (powered by AKS and KEDA under the hood) that abstracts away all cluster management. It fills the increasingly common gap between App Service and AKS for teams that need more than PaaS but less than full Kubernetes. ACA is rapidly becoming the default recommendation for new containerized workloads that don't have a specific reason to use AKS.

Container Apps Feature Comparison

| Feature | App Service | Container Apps | AKS |
| --- | --- | --- | --- |
| Scale to zero | No | Yes | Yes (with KEDA, node pool to 0) |
| Multiple containers | Limited (sidecar preview) | Sidecars and init containers | Full pod spec |
| Dapr integration | No | Built-in (one click) | Manual install and management |
| Service discovery | External (API Management, etc.) | Built-in (DNS within environment) | CoreDNS, full Kubernetes Services |
| Service mesh | No | Envoy-based (built-in) | Istio, Linkerd, or Envoy (manual) |
| Event-driven scaling | Limited (CPU/memory only) | KEDA (50+ scale triggers) | KEDA (manual install) |
| Kubernetes API access | No | No (abstracted) | Full kubectl, helm, CRDs |
| Custom operators/CRDs | No | No | Yes |
| GPU workloads | No | Yes (dedicated plan) | Yes (GPU node pools) |
| Deployment slots | Yes (built-in) | Revisions with traffic splitting | Manual (Helm, Argo Rollouts) |
| VNet integration | Outbound only (or ASE for inbound) | Full (environment-level) | Full (cluster-level) |
| Operational burden | Lowest | Low | Highest |
Deploy to Azure Container Apps
# Create a Container Apps Environment
az containerapp env create \
  --name my-env \
  --resource-group myRG \
  --location eastus2 \
  --infrastructure-subnet-resource-id /subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/aca-subnet \
  --internal-only false

# Deploy a container app with auto-scaling
az containerapp create \
  --name my-api \
  --resource-group myRG \
  --environment my-env \
  --image myacr.azurecr.io/my-api:v1.2.3 \
  --registry-server myacr.azurecr.io \
  --registry-identity system \
  --target-port 8080 \
  --ingress external \
  --cpu 1.0 \
  --memory 2.0Gi \
  --min-replicas 2 \
  --max-replicas 10 \
  --scale-rule-name http-scaling \
  --scale-rule-type http \
  --scale-rule-http-concurrency 100 \
  --env-vars "ENV=production"

# Deploy a second service with internal ingress (service-to-service)
az containerapp create \
  --name my-worker \
  --resource-group myRG \
  --environment my-env \
  --image myacr.azurecr.io/my-worker:v1.0.0 \
  --registry-server myacr.azurecr.io \
  --registry-identity system \
  --target-port 8080 \
  --ingress internal \
  --cpu 0.5 \
  --memory 1.0Gi \
  --min-replicas 0 \
  --max-replicas 30 \
  --scale-rule-name queue-scaling \
  --scale-rule-type azure-queue \
  --scale-rule-metadata "queueName=work-items" "queueLength=10" \
  --scale-rule-auth "connection=queue-connection-string" \
  --secrets "queue-connection-string=<connection-string>"

# Traffic splitting between revisions (canary deployment)
az containerapp ingress traffic set \
  --name my-api \
  --resource-group myRG \
  --revision-weight my-api--v1=90 my-api--v2=10

Container Apps as the Default Starting Point

For new containerized microservices projects, Container Apps is increasingly the best starting point. It provides KEDA-based event-driven scaling, built-in Dapr for service-to-service communication, revision-based traffic splitting, and scale-to-zero, all without managing Kubernetes infrastructure. Only graduate to AKS if you hit Container Apps limitations (custom operators, CRDs, DaemonSets, or compliance requirements needing full cluster control).

Networking Comparison

Network architecture differs significantly between the platforms and is often a deciding factor for enterprise workloads with strict security requirements.

App Service Networking

App Service provides two networking modes. VNet Integration enables outbound connectivity from your app to resources in a VNet (databases, caches, storage via Private Endpoints). Private Endpoints enable inbound connectivity, making the app accessible only from within the VNet. The combination provides full network isolation without App Service Environment (ASE).
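The inbound side can be sketched with a Private Endpoint. The commands below are a configuration sketch, not a tested script; the app, VNet, and subnet names are the placeholders used elsewhere in this guide:

```shell
# Illustrative: restrict inbound access to the app via a Private Endpoint
# in the VNet (combine with the VNet integration shown earlier for outbound).
az network private-endpoint create \
  --name my-api-pe \
  --resource-group myRG \
  --vnet-name myVNet \
  --subnet pe-subnet \
  --private-connection-resource-id $(az webapp show --name my-api-app --resource-group myRG --query id -o tsv) \
  --group-id sites \
  --connection-name my-api-pe-conn
```

You also need a Private DNS zone (`privatelink.azurewebsites.net`) linked to the VNet so the app's hostname resolves to the private IP.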

AKS Networking

AKS offers two primary CNI plugins. Azure CNI assigns VNet IP addresses directly to pods, providing native VNet integration but consuming more IP addresses. Azure CNI Overlay uses a private CIDR for pods while maintaining VNet integration for nodes, reducing IP address consumption significantly. Network Policies (Calico or Azure NPM) provide pod-level microsegmentation. Ingress controllers (NGINX, Application Gateway Ingress Controller, or Contour) handle inbound traffic routing.
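An Ingress resource shows how inbound routing is declared on AKS. This sketch assumes an ingress-nginx controller is already installed and that cert-manager issues the TLS secret; the hostname and issuer name are placeholders:

```yaml
# Illustrative Ingress for the NGINX ingress controller: TLS termination
# and path-based routing to the my-api Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer
spec:
  ingressClassName: nginx
  tls:
    - hosts: [api.example.com]
      secretName: my-api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api
                port:
                  number: 80
```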

AKS Network Policy for microsegmentation
# Only allow traffic from the frontend namespace to the api namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend-only
  namespace: api
spec:
  podSelector:
    matchLabels:
      app: my-api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
        - podSelector:
            matchLabels:
              app: web-frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: database
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS resolution
    - to: []
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
| Networking Feature | App Service | Container Apps | AKS |
| --- | --- | --- | --- |
| VNet integration | Outbound (regional) + Private Endpoints (inbound) | Full (environment-level) | Full (cluster-level, pods get VNet IPs) |
| Network policies | IP restrictions, service tags | Environment-level isolation | Kubernetes Network Policies (Calico, Azure NPM) |
| Custom DNS | Via VNet integration | Via VNet integration | CoreDNS customization, external-dns |
| Load balancing | Built-in (platform-managed) | Envoy (platform-managed) | Azure Load Balancer, NGINX, AGIC, custom ingress |
| Service mesh | Not supported | Built-in (Envoy-based) | Istio (AKS add-on), Linkerd, Consul |
| Egress control | NAT Gateway, VNet route tables | NAT Gateway, UDR support | Azure Firewall, NAT Gateway, egress lockdown |

CI/CD and Deployment Strategies

The deployment model differs significantly between platforms. App Service provides built-in deployment capabilities while AKS relies on the Kubernetes ecosystem for deployment orchestration.

App Service CI/CD

  • Deployment slots: Built-in blue-green and canary deployments with traffic routing percentage control.
  • GitHub Actions / Azure DevOps: First-party integration with publish profiles or service connections.
  • Container webhooks: ACR can trigger redeployment on image push.
  • ZIP deploy: Push code directly without containerization.
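A workflow combining these capabilities might look like the sketch below, which mirrors the AKS workflow later in this guide: build and push to ACR, deploy to the staging slot, then swap. Secrets, app, and registry names are placeholders.

```yaml
# Illustrative GitHub Actions workflow for slot-based App Service deployment.
name: Deploy to App Service
on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Azure Login (OIDC)
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Build and push image
        run: |
          az acr login --name myacr
          docker build -t myacr.azurecr.io/my-api:${{ github.sha }} .
          docker push myacr.azurecr.io/my-api:${{ github.sha }}

      - name: Deploy to staging slot
        uses: azure/webapps-deploy@v3
        with:
          app-name: my-api-app
          slot-name: staging
          images: myacr.azurecr.io/my-api:${{ github.sha }}

      - name: Swap staging into production
        run: |
          az webapp deployment slot swap \
            --name my-api-app --resource-group myRG \
            --slot staging --target-slot production
```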

AKS CI/CD

  • GitOps (Flux / ArgoCD): Declarative, pull-based deployments that reconcile cluster state with Git. AKS has a first-party Flux extension.
  • Helm: Package manager for Kubernetes that handles templated manifests, versioned releases, and rollbacks.
  • Kustomize: Overlay-based manifest management without templates.
  • Progressive delivery: Argo Rollouts or Flagger for automated canary deployments with metric-based promotion.
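The progressive delivery model can be sketched with an Argo Rollouts `Rollout`, which replaces a Deployment and shifts traffic in steps. This assumes the Argo Rollouts controller is installed; replica counts, weights, and pauses are illustrative.

```yaml
# Illustrative Argo Rollouts canary: send 10% of traffic to the new
# version, pause for observation, then step up before full promotion.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-api
  namespace: production
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: myacr.azurecr.io/my-api:v1.3.0
  strategy:
    canary:
      steps:
        - setWeight: 10
        - pause: {duration: 10m}
        - setWeight: 50
        - pause: {duration: 10m}   # then the rollout promotes to 100%
```

With Flagger or an analysis template, the pauses can be replaced by automated metric checks that abort the rollout on regressions.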
GitHub Actions workflow for AKS deployment with Helm
name: Deploy to AKS
on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Azure Login (OIDC)
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Build and push image
        run: |
          az acr login --name myacr
          docker build -t myacr.azurecr.io/my-api:${{ github.sha }} .
          docker push myacr.azurecr.io/my-api:${{ github.sha }}

      - name: Set AKS context
        uses: azure/aks-set-context@v4
        with:
          cluster-name: my-aks-cluster
          resource-group: myRG

      - name: Deploy with Helm
        run: |
          helm upgrade --install my-api ./charts/my-api \
            --namespace production \
            --create-namespace \
            --set image.repository=myacr.azurecr.io/my-api \
            --set image.tag=${{ github.sha }} \
            --set replicas=3 \
            --wait \
            --timeout 10m

      - name: Verify deployment
        run: |
          kubectl rollout status deployment/my-api -n production --timeout=300s
          kubectl get pods -n production -l app=my-api
GitOps with Flux for AKS
# Enable Flux extension on AKS cluster
az k8s-extension create \
  --cluster-name my-aks-cluster \
  --resource-group myRG \
  --cluster-type managedClusters \
  --extension-type microsoft.flux \
  --name flux

# Create a Flux configuration pointing to your Git repo
az k8s-configuration flux create \
  --cluster-name my-aks-cluster \
  --resource-group myRG \
  --cluster-type managedClusters \
  --name my-app-config \
  --namespace flux-system \
  --scope cluster \
  --url https://github.com/myorg/k8s-manifests \
  --branch main \
  --kustomization name=production path=./clusters/production/my-api prune=true

Security Comparison

Security posture differs significantly between the platforms. App Service provides more security out of the box, while AKS provides more security controls but requires explicit configuration.

| Security Feature | App Service | AKS |
| --- | --- | --- |
| Identity | System/user-assigned managed identity (one click) | Workload Identity Federation (requires OIDC issuer + service account mapping) |
| Secret management | Key Vault references in app settings (zero-code) | CSI Secrets Store driver + Key Vault provider |
| Authentication | Easy Auth built-in (Azure AD, social providers) | Manual (ingress-level auth, OAuth2 proxy, or app-level) |
| TLS | Managed certificates (free), custom certs, auto-renewal | cert-manager + Let's Encrypt, or Azure Key Vault |
| OS patching | Microsoft-managed (transparent) | Node image upgrades (you schedule and initiate) |
| Container scanning | Defender for App Service | Defender for Containers (runtime + registry scanning) |
| Network segmentation | IP restrictions, service tags, Private Endpoints | Network Policies, private cluster, authorized IP ranges |
| Compliance | ASE for dedicated hardware isolation | Confidential computing (SGX nodes), Azure Policy for AKS |
AKS security configuration with Workload Identity and CSI Secrets Store
# Enable CSI Secrets Store driver
az aks enable-addons \
  --addons azure-keyvault-secrets-provider \
  --name my-aks-cluster \
  --resource-group myRG

# Create a user-assigned managed identity for the workload
az identity create \
  --name my-api-identity \
  --resource-group myRG

# Grant the identity access to Key Vault
az role assignment create \
  --assignee $(az identity show --name my-api-identity -g myRG --query clientId -o tsv) \
  --role "Key Vault Secrets User" \
  --scope $(az keyvault show --name myapp-prod-kv --query id -o tsv)

# Create federated credential for Workload Identity
az identity federated-credential create \
  --name my-api-fed-cred \
  --identity-name my-api-identity \
  --resource-group myRG \
  --issuer $(az aks show --name my-aks-cluster -g myRG --query oidcIssuerProfile.issuerUrl -o tsv) \
  --subject system:serviceaccount:production:my-api-sa \
  --audiences api://AzureADTokenExchange

# Create the Kubernetes service account
kubectl create namespace production
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-api-sa
  namespace: production
  annotations:
    azure.workload.identity/client-id: $(az identity show --name my-api-identity -g myRG --query clientId -o tsv)
  labels:
    azure.workload.identity/use: "true"
EOF

AKS Security is Not Optional Configuration

Unlike App Service where security features are enabled by default, AKS requires deliberate security configuration. A default AKS cluster has no Network Policies (all pod-to-pod traffic is allowed), no pod security standards enforced, public API server access, and no secret encryption at rest. You must explicitly enable these controls. Use Azure Policy for AKS to enforce security baselines across all clusters.


Monitoring and Observability

Both platforms integrate with Azure Monitor, but the monitoring stack differs in complexity and depth.

App Service Monitoring

App Service provides built-in monitoring with minimal configuration: application logs, HTTP request logs, deployment logs, and platform diagnostics. Enable Application Insights for APM (distributed tracing, live metrics, smart detection). The integration requires adding a NuGet package or enabling auto-instrumentation (codeless for .NET and Java).

AKS Monitoring

AKS monitoring requires Container Insights (Azure Monitor agent for containers), which collects node metrics, pod metrics, container logs, and Kubernetes events. For full observability, you typically add Prometheus (for custom metrics), Grafana (for dashboards), and either Application Insights or OpenTelemetry for distributed tracing. Azure Managed Prometheus and Azure Managed Grafana simplify this stack.

Enable comprehensive AKS monitoring
# Enable Container Insights
az aks enable-addons \
  --addons monitoring \
  --name my-aks-cluster \
  --resource-group myRG \
  --workspace-resource-id /subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myLogAnalytics

# Enable Azure Managed Prometheus
az aks update \
  --name my-aks-cluster \
  --resource-group myRG \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id /subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.Monitor/accounts/myPrometheus

# Create an Azure Managed Grafana instance
az grafana create \
  --name my-grafana \
  --resource-group myRG \
  --location eastus2

# Link Grafana to Prometheus data source
az grafana data-source create \
  --name my-grafana \
  --resource-group myRG \
  --definition '{
    "name": "Azure Managed Prometheus",
    "type": "prometheus",
    "access": "proxy",
    "url": "https://myprometheus.eastus2.prometheus.monitor.azure.com"
  }'

# KQL query (Container Insights): find pods with high restart counts
# KubePodInventory
# | where TimeGenerated > ago(1h)
# | summarize Restarts = max(ContainerRestartCount) by Name, Namespace
# | where Restarts > 5
# | order by Restarts desc

Cost Comparison Deep Dive

Cost is often the deciding factor, but comparing the three platforms requires looking beyond raw compute prices to total cost of ownership (TCO), which includes engineering time, add-on services, and operational overhead.

Scenario 1: Simple Web API (3 Instances, 2 vCPU / 4 GB Each)

| Platform | Configuration | Compute/mo | Add-ons/mo | Total/mo |
| --- | --- | --- | --- | --- |
| App Service P1v3 | 3 instances, auto-scale 1–5 | ~$300 | ~$30 (monitoring) | ~$330 |
| Container Apps | 3 replicas, 2 vCPU / 4 GB each | ~$250 | ~$30 (monitoring) | ~$280 |
| AKS | 3x D2s_v5 + Standard tier | ~$280 | ~$143 (tier + LB + monitoring) | ~$423 |

Scenario 2: Microservices (10 Services, Variable Load)

| Platform | Configuration | Compute/mo | Add-ons/mo | Total/mo |
| --- | --- | --- | --- | --- |
| App Service | 2x P2v3 plans (shared), 10 apps | ~$1,100 | ~$80 (APIM, monitoring) | ~$1,180 |
| Container Apps | 10 apps, scale-to-zero on 4 low-traffic | ~$700 | ~$50 (monitoring) | ~$750 |
| AKS | 5x D4s_v5 nodes, shared cluster | ~$900 | ~$200 (tier, LB, monitoring, ingress) | ~$1,100 |

Scenario 3: Large Platform (50+ Services, Multi-Team)

| Platform | Configuration | Compute/mo | Add-ons/mo | Total/mo |
| --- | --- | --- | --- | --- |
| App Service | Multiple ASP plans, complex management | ~$8,000 | ~$500 | ~$8,500 |
| Container Apps | 50 apps across environments | ~$5,500 | ~$300 | ~$5,800 |
| AKS | 2 clusters, 15 nodes each, namespaces per team | ~$5,000 | ~$800 | ~$5,800 + 1–2 platform engineers |

Engineering Cost Is the Hidden Variable

At scale (50+ services), AKS and Container Apps reach similar compute costs, but AKS requires dedicated platform engineering staff. A senior platform engineer costs $150,000–$200,000/year. If AKS requires two platform engineers that App Service or Container Apps would not, that's $25,000–$33,000/month in additional cost that does not appear on the Azure bill. Factor this into your TCO calculation.
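The arithmetic is worth making explicit. A back-of-the-envelope sketch (all dollar figures are illustrative estimates drawn from the scenario above, not Azure list prices):

```shell
# Rough monthly TCO for the 50+ service scenario.
# Figures are illustrative, not Azure list prices.
aks_infra=5800           # AKS compute + add-ons per month ($)
aca_infra=5800           # Container Apps compute + add-ons per month ($)
engineers=2              # extra platform engineers AKS requires
salary=180000            # fully loaded annual cost per engineer ($)

# Convert the staffing cost to a monthly figure and add it to AKS
eng_monthly=$(( engineers * salary / 12 ))
aks_tco=$(( aks_infra + eng_monthly ))

echo "Extra engineering cost/mo: \$$eng_monthly"
echo "AKS TCO/mo: \$$aks_tco vs Container Apps \$$aca_infra"
```

At these assumed figures the platforms look identical on the Azure bill, yet AKS costs roughly 6x more in total once staffing is included.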

Migration Paths

Choosing a platform is not a permanent decision. Azure provides migration paths between all three platforms, though the effort varies.

App Service to Container Apps

This is the smoothest migration path. If your App Service app is already containerized, you can deploy the same image to Container Apps with minimal changes. The main adjustments are configuring scaling rules (KEDA triggers replace App Service auto-scale rules) and updating networking (Container Apps Environment replaces VNet integration and Private Endpoints).
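As an illustrative sketch of that redeployment (names such as my-env, my-api, and the queue settings are placeholders), the same image can be created as a Container App with a KEDA queue-depth scale rule replacing the old auto-scale rules:

```shell
# Deploy the existing container image to a Container Apps environment,
# scaling 0-10 replicas on Azure Storage queue depth
az containerapp create \
  --name my-api \
  --resource-group myRG \
  --environment my-env \
  --image myacr.azurecr.io/my-api:v1.2.3 \
  --ingress external \
  --target-port 8080 \
  --min-replicas 0 \
  --max-replicas 10 \
  --scale-rule-name queue-depth \
  --scale-rule-type azure-queue \
  --scale-rule-metadata "queueName=orders" "queueLength=50" \
  --scale-rule-auth "connection=queue-connection-secret"
```

The scale rule metadata maps directly onto the corresponding KEDA scaler's configuration, so translating existing KEDA knowledge (or App Service auto-scale intent) is usually mechanical.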

App Service to AKS

This requires containerizing your application (if not already), creating Kubernetes manifests or Helm charts, configuring ingress controllers, setting up CI/CD pipelines, and implementing monitoring. Plan for 2–4 weeks per service for the migration and operational readiness.
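A minimal sketch of the mechanical steps (assuming an ACR named myacr, an existing cluster, and a deployment.yaml you author as part of the migration):

```shell
# Build the image in ACR directly from source (no local Docker needed)
az acr build --registry myacr --image my-api:v1 .

# Fetch cluster credentials and apply the new Kubernetes manifests
az aks get-credentials --name my-aks-cluster --resource-group myRG
kubectl apply -f deployment.yaml

# Verify the rollout completed
kubectl rollout status deployment/my-api
```

The commands are the easy part; most of the 2–4 weeks goes into writing production-grade manifests (probes, resource limits, ingress, network policies) and building the operational muscle around them.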

AKS to Container Apps

If you find AKS operational overhead unsustainable, migrating to Container Apps is feasible for workloads that don't require Kubernetes-specific features (CRDs, custom operators, DaemonSets). Your container images work as-is; the main effort is translating Kubernetes manifests to Container Apps configuration and replacing Kubernetes-native tooling (Helm, kubectl) with ACA CLI commands or Bicep templates.

Bicep template for Container Apps migration from AKS
// Migrating a Kubernetes deployment to Container Apps with Bicep
param location string = resourceGroup().location
param acrName string = 'myacr'
param imageName string = 'my-api'
param imageTag string = 'v1.2.3'

resource env 'Microsoft.App/managedEnvironments@2024-03-01' = {
  name: 'my-env'
  location: location
  properties: {
    vnetConfiguration: {
      infrastructureSubnetId: '/subscriptions/<sub>/resourceGroups/myRG/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/aca-subnet'
    }
    zoneRedundant: true
  }
}

resource app 'Microsoft.App/containerApps@2024-03-01' = {
  name: 'my-api'
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    managedEnvironmentId: env.id
    configuration: {
      activeRevisionsMode: 'Multiple'
      ingress: {
        external: true
        targetPort: 8080
        transport: 'http'
        traffic: [
          {
            revisionName: 'my-api--v1'
            weight: 90
          }
          {
            latestRevision: true
            weight: 10
          }
        ]
      }
      registries: [
        {
          server: '${acrName}.azurecr.io'
          identity: 'system'
        }
      ]
      secrets: [
        {
          name: 'db-connection'
          keyVaultUrl: 'https://myapp-prod-kv.vault.azure.net/secrets/db-connection-string'
          identity: 'system'
        }
      ]
    }
    template: {
      containers: [
        {
          name: 'my-api'
          image: '${acrName}.azurecr.io/${imageName}:${imageTag}'
          resources: {
            cpu: json('1.0')
            memory: '2Gi'
          }
          probes: [
            {
              type: 'Liveness'
              httpGet: {
                path: '/healthz'
                port: 8080
              }
              periodSeconds: 20
            }
            {
              type: 'Readiness'
              httpGet: {
                path: '/ready'
                port: 8080
              }
              periodSeconds: 10
            }
          ]
          env: [
            {
              name: 'DB_CONNECTION'
              secretRef: 'db-connection'
            }
          ]
        }
      ]
      scale: {
        minReplicas: 2
        maxReplicas: 10
        rules: [
          {
            name: 'http-scaling'
            http: {
              metadata: {
                concurrentRequests: '100'
              }
            }
          }
        ]
      }
    }
  }
}
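A template like the one above deploys with a standard resource-group deployment (the file name and parameter value here are assumptions):

```shell
# Deploy the Container Apps Bicep template to an existing resource group
az deployment group create \
  --resource-group myRG \
  --template-file containerapp.bicep \
  --parameters imageTag=v1.2.3
```

Parameterizing the image tag keeps the template reusable across releases: CI/CD only needs to pass a new tag rather than edit the template.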

Decision Framework

Use this structured framework to choose between App Service, Container Apps, and AKS. Work through the questions in order; the first step where you answer “Yes” determines your platform.

Step 1: Do You Need Kubernetes-Specific Features?

If your answer is Yes to any of these, choose AKS:

  • Custom Resource Definitions (CRDs) or operators
  • DaemonSets for node-level agents
  • StatefulSets with specific scheduling requirements
  • Direct Kubernetes API access for tooling
  • Service mesh with full Envoy/Istio control
  • Existing Kubernetes manifests and operational tooling to reuse
  • Multi-cloud or on-premises Kubernetes strategy

Step 2: Do You Need Container-Level Control?

If your answer is Yes to any of these, choose Container Apps:

  • Sidecar containers (still preview-only in App Service)
  • Event-driven scaling (queue depth, event streams, cron)
  • Scale-to-zero for cost optimization
  • Dapr for microservices communication
  • Multiple services needing service discovery
  • Traffic splitting between revisions

Step 3: Is It a Standard Web App or API?

If your answer is Yes to these, choose App Service:

  • Standard HTTP request/response workload
  • Team prefers code-first deployment (not containers)
  • Deployment slots with zero-downtime swap are sufficient
  • Built-in authentication (Easy Auth) is valuable
  • Minimal operational overhead is a priority

The Decision Is Not Permanent

Your choice between these platforms is not a one-way door. All three can run the same container images. The migration effort is primarily in deployment configuration, networking setup, and CI/CD pipeline adjustments, not application code changes. Start with the simplest platform that meets your current requirements and migrate when you outgrow it. Over-engineering with AKS from day one is more costly than migrating from App Service to Container Apps later.

Operational Day-2 Comparison

Beyond the initial deployment, the ongoing operational burden differs dramatically between platforms. This is where the true cost difference manifests.

App Service Day-2 Operations

  • OS and runtime patching: Fully managed by Microsoft, transparent to your application.
  • Scaling: Configure auto-scale rules once; the platform handles the rest.
  • Certificate management: Free managed certificates with auto-renewal, or custom certificates stored in Key Vault.
  • Troubleshooting: Built-in diagnostics, Kudu console, log streaming, and App Service Diagnostics (AI-powered).
  • Typical time spent: 1–2 hours per week per application.
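As a sketch of that configure-once scaling model (plan and rule names are placeholders), CPU-based auto-scale rules can be set up from the CLI:

```shell
# Create an autoscale profile on the App Service plan (2-5 instances)
az monitor autoscale create \
  --resource-group myRG \
  --resource my-plan \
  --resource-type Microsoft.Web/serverfarms \
  --name my-plan-autoscale \
  --min-count 2 --max-count 5 --count 2

# Scale out by one instance when average CPU exceeds 70% over 5 minutes
az monitor autoscale rule create \
  --resource-group myRG \
  --autoscale-name my-plan-autoscale \
  --condition "CpuPercentage > 70 avg 5m" \
  --scale out 1

# Scale back in when average CPU drops below 30%
az monitor autoscale rule create \
  --resource-group myRG \
  --autoscale-name my-plan-autoscale \
  --condition "CpuPercentage < 30 avg 5m" \
  --scale in 1
```

Always pair a scale-out rule with a scale-in rule; without the second rule the plan ratchets up to max-count and stays there.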

AKS Day-2 Operations

  • Kubernetes version upgrades: New minor versions ship every 3–4 months. You must test each upgrade against your workloads and apply it to the control plane and node pools separately.
  • Node image upgrades: Weekly security patches for node OS images. Can be automated with the node-image auto-upgrade channel.
  • Certificate management: cert-manager installation, Let's Encrypt configuration, certificate renewal monitoring.
  • Cluster capacity planning: Monitor node utilization, adjust node pool sizes, handle node pool scaling events.
  • Add-on management: Keep ingress controllers, CSI drivers, policy agents, and monitoring agents updated.
  • Troubleshooting: kubectl debugging, pod log analysis, network policy verification, resource quota management.
  • Typical time spent: 8–16 hours per week per cluster.

AKS day-2 operational commands
# Check available Kubernetes upgrades
az aks get-upgrades --name my-aks-cluster --resource-group myRG --output table

# Upgrade control plane to a new version
az aks upgrade \
  --name my-aks-cluster \
  --resource-group myRG \
  --kubernetes-version 1.29.2 \
  --control-plane-only

# Upgrade node pools separately (rolling update)
az aks nodepool upgrade \
  --cluster-name my-aks-cluster \
  --resource-group myRG \
  --name nodepool1 \
  --kubernetes-version 1.29.2 \
  --max-surge 33%

# Check node pool health
kubectl get nodes -o wide
kubectl top nodes

# Find pods with high resource usage
kubectl top pods --all-namespaces --sort-by=cpu | head -20

# Check for failed pods across all namespaces
kubectl get pods --all-namespaces --field-selector=status.phase=Failed

# View recent events (useful for debugging)
kubectl get events --all-namespaces --sort-by='.lastTimestamp' | tail -30

# Drain a node for maintenance
kubectl drain node-name --ignore-daemonsets --delete-emptydir-data

Real-World Architecture Patterns

Many organizations use multiple Azure compute platforms together, leveraging each one's strengths. Here are common patterns seen in production environments.

Pattern 1: App Service Frontend + AKS Backend

Use App Service for customer-facing web applications (leveraging deployment slots, Easy Auth, and managed TLS) while running complex backend microservices on AKS. Communication flows through Private Endpoints or VNet integration.

Pattern 2: Container Apps for Event Processing + App Service for APIs

Use Container Apps with KEDA for event-driven processing (queue consumers, stream processors) that benefit from scale-to-zero, while keeping synchronous APIs on App Service for simplicity. This pattern optimizes cost for variable-load event processing while maintaining operational simplicity for the API tier.

Pattern 3: AKS as Internal Platform + App Service for Simple Services

Large organizations often run a centralized AKS cluster managed by a platform team, with application teams deploying via GitOps. Simpler services or quick prototypes that don't justify the platform onboarding process run on App Service independently.

You Do Not Have to Choose Just One

The best architectures often use multiple compute platforms. A common anti-pattern is forcing all workloads onto AKS “for consistency” when many of them would be simpler and cheaper on App Service or Container Apps. Let each workload's requirements drive the platform choice, not organizational inertia.


Key Takeaways

  1. App Service is a PaaS with minimal ops overhead, ideal for standard web apps and APIs.
  2. AKS gives full Kubernetes control. Choose it when you need custom orchestration or portability.
  3. App Service supports containers via Web App for Containers, a middle-ground option.
  4. AKS costs include node VMs plus a $0.10/hour control-plane charge on the Standard tier; App Service pricing is plan-based.
  5. Choose App Service for simplicity; AKS for complex microservice architectures and multi-cloud portability.
  6. Both support auto-scaling, CI/CD, VNet integration, and managed SSL certificates.

Frequently Asked Questions

What is the main difference between AKS and App Service?
App Service is a managed PaaS where you deploy code or containers and Azure manages the infrastructure. AKS is managed Kubernetes where you manage pods, services, ingress, and cluster configuration. App Service is simpler; AKS offers more control.
Is AKS cheaper than App Service?
It depends on scale. App Service plans start cheaper for small apps. AKS can be more cost-effective at scale because you control node sizing and density. AKS charges $0.10/hour per cluster for the Standard tier control plane (the Free tier has no control-plane charge) plus VM costs. Compare total cost for your specific workload.
Can I run containers on App Service?
Yes. Azure App Service supports custom Docker containers via Web App for Containers. You get the simplicity of App Service with container flexibility. This is a great middle ground if you need containers but not full Kubernetes orchestration.
When should I choose AKS over App Service?
Choose AKS when you need: Kubernetes ecosystem tools (Helm, operators, service mesh), multi-cloud portability, fine-grained resource control, complex microservice topologies, or your team already has Kubernetes expertise.
Can I migrate from App Service to AKS later?
Yes, especially if your app is already containerized. You would need to create Kubernetes manifests, set up ingress, and update CI/CD pipelines. Containerized App Service apps require less effort to migrate than code-deployed apps.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.