
Functions Hosting Plans

Compare Azure Functions Consumption, Premium, and Dedicated plans for different workloads.

CloudToolStack Team · 22 min read · Published Feb 22, 2026

Prerequisites

  • Azure subscription
  • Basic understanding of serverless concepts

Azure Functions Hosting Plans Compared

Azure Functions is Microsoft's serverless compute platform that lets you run event-driven code without managing infrastructure. However, “serverless” doesn't mean there are no servers; it means you don't manage them. The hosting plan you choose fundamentally affects cold start behavior, scaling limits, networking capabilities, pricing model, and available features.

Choosing the wrong hosting plan is one of the most common mistakes teams make with Azure Functions. A startup that picks the Premium plan for a low-traffic webhook wastes hundreds of dollars per month. An enterprise that picks the Consumption plan for a latency-sensitive API serving production traffic deals with cold starts that frustrate users. This guide provides the detailed comparison and decision framework you need to make the right choice.

Hosting Plan Overview

Azure Functions offers four hosting options, each with fundamentally different trade-offs around cost, performance, and capabilities:

| Feature | Consumption | Flex Consumption | Premium (EP) | Dedicated (App Service) |
| --- | --- | --- | --- | --- |
| Scale to zero | Yes | Yes (configurable) | No (1+ warm instance) | No |
| Max instances | 200 | 1,000 | 100 | 10-30 (plan dependent) |
| Max execution time | 10 min (default 5) | Unlimited* | Unlimited* | Unlimited* |
| Cold start | Yes (1-10 seconds) | Reduced (configurable) | No (pre-warmed) | No |
| VNet integration | No | Yes | Yes | Yes |
| Private endpoints (inbound) | No | Yes | Yes | Yes |
| Instance memory | 1.5 GB (fixed) | 2 GB or 4 GB | 3.5-14 GB (SKU dependent) | Plan dependent |
| Minimum cost | Free tier available | Pay-per-use | ~$175/mo (1x EP1) | App Service plan cost |
| Billing model | Per execution + GB-s | Per execution + GB-s | Per instance + executions | Fixed plan cost |

*Unlimited execution time requires specific host.json configuration. Timer-triggered functions on Consumption have a 10-minute maximum.

Free Consumption Grant

The Consumption plan includes a generous free monthly grant per subscription: 1 million executions and 400,000 GB-seconds of compute time. For low-traffic APIs, scheduled tasks, webhooks, and prototypes, you can run Azure Functions entirely free. This makes Consumption the best plan for experimentation and proof-of-concept development.

Consumption Plan: True Serverless

The Consumption plan is the original serverless offering and remains the best choice for workloads with sporadic, unpredictable traffic patterns. Azure automatically allocates compute resources when your function is triggered, scales out based on demand, and bills only for actual execution time. When no functions are executing, you pay nothing.

Cold Start Deep Dive

Cold starts are the most discussed limitation of the Consumption plan. When a function hasn't been invoked recently, Azure needs to allocate a worker, load the runtime, restore dependencies, and initialize your code. This process adds latency to the first invocation.
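One way to observe cold starts from inside your own code is to compare the module-load time against each invocation; module-scope code re-runs only when a new instance is allocated, so a fresh timestamp signals a cold start. This is an illustrative sketch, not an official diagnostic, and the handler shape is simplified from a real Azure Functions handler:

```javascript
// Module-scope code runs once per instance allocation (i.e., per cold start).
const loadedAt = Date.now();
let invocationCount = 0;

function handler() {
  invocationCount += 1;
  return {
    // True only on the first call handled by this instance.
    coldStart: invocationCount === 1,
    // How long this instance has been alive; small values right after a cold start.
    instanceAgeMs: Date.now() - loadedAt,
  };
}
```

Logging these two fields to Application Insights makes it easy to count how often users actually hit a cold instance before deciding whether a plan upgrade is worth the money.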

| Language | Typical Cold Start | Factors That Increase Cold Start |
| --- | --- | --- |
| Node.js | 1-3 seconds | Large node_modules, many dependencies |
| Python | 1-5 seconds | Large packages (pandas, numpy), virtual environments |
| .NET (in-process) | 2-5 seconds | Large assemblies, startup initialization |
| .NET (isolated) | 3-7 seconds | Additional process startup overhead |
| Java | 5-15 seconds | JVM startup, Spring Boot initialization, classpath scanning |
| PowerShell | 3-10 seconds | Module imports, profile loading |

Minimizing Cold Starts on Consumption

While cold starts cannot be eliminated on Consumption, several strategies can reduce them:

  • Minimize dependencies: Every package your function loads adds to cold start time. Remove unused dependencies and prefer lightweight libraries.
  • Use the latest runtime version: Microsoft continuously improves cold start performance in newer runtime versions. Always target the latest Functions runtime.
  • Lazy initialization: Defer heavy initialization (database connections, client creation) to the first actual invocation rather than module load time.
  • Keep functions warm with a timer: A timer-triggered function that runs every 5 minutes keeps the instance allocated. However, this adds cost and is essentially a hack; consider Flex Consumption or Premium if you need consistent warm starts.
host.json: Optimized Consumption plan configuration
{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Dependency"
      }
    },
    "logLevel": {
      "default": "Warning",
      "Host.Results": "Information",
      "Function": "Information"
    }
  },
  "extensions": {
    "http": {
      "routePrefix": "api",
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100
    },
    "queues": {
      "maxPollingInterval": "00:00:02",
      "visibilityTimeout": "00:00:30",
      "batchSize": 16,
      "maxDequeueCount": 5,
      "newBatchThreshold": 8
    }
  }
}
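The lazy-initialization advice above can be sketched as follows. Names are illustrative, and the handler is simplified from a real HTTP-triggered function; the point is that the expensive client is created on the first invocation rather than at module load, so it adds nothing to cold start while still being reused on warm invocations:

```javascript
// Shared across invocations on a warm instance, but NOT created at module load.
let client = null;

function getClient() {
  if (client === null) {
    // Stand-in for expensive setup: a DB connection, an SDK client, etc.
    client = { createdAt: Date.now(), calls: 0 };
  }
  return client;
}

// Simplified stand-in for the body of an HTTP-triggered function.
function handler() {
  const c = getClient(); // created once, reused afterwards
  c.calls += 1;
  return c.calls;
}
```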

Consumption Plan Limitations

Beyond cold starts, the Consumption plan has several hard limitations: 1.5 GB memory per instance (not configurable), no VNet integration (cannot access private endpoints or resources behind firewalls), 10-minute maximum execution time, and a 200-instance scaling limit. If any of these are blockers, Flex Consumption or Premium is required.

Flex Consumption Plan: The Best of Both Worlds

Flex Consumption is the newest hosting option and represents a significant evolution of the serverless model. It addresses nearly every limitation of the original Consumption plan while retaining pay-per-use pricing. For most new serverless projects, Flex Consumption should be the default choice.

Key Advantages Over Standard Consumption

  • VNet integration: Full virtual network support for accessing private resources, a critical enterprise capability that standard Consumption lacks.
  • Always-ready instances: Configure a minimum number of pre-provisioned instances to eliminate cold starts for critical functions, while still scaling to zero for non-critical ones. You only pay for always-ready instances when they are provisioned.
  • Higher scale limits: Scales up to 1,000 instances compared to 200 for standard Consumption and 100 for Premium.
  • Configurable instance memory: Choose between 2,048 MB and 4,096 MB instance sizes depending on workload needs.
  • Private endpoint support: Inbound private endpoints are supported, enabling fully private function apps accessible only from within your VNet.
  • Faster scaling: Improved concurrency management and faster instance allocation compared to standard Consumption.
Terminal: Create a Flex Consumption Function App
# Create a Flex Consumption function app with VNet integration
az functionapp create \
  --name my-flex-func \
  --resource-group myRG \
  --storage-account mystorageaccount \
  --runtime node \
  --runtime-version 20 \
  --functions-version 4 \
  --flexconsumption-location "East US 2" \
  --instance-memory 2048

# Configure always-ready instances for the HTTP trigger group
az functionapp scale config set \
  --name my-flex-func \
  --resource-group myRG \
  --always-ready-instances http=1

# Set maximum instance count
az functionapp scale config set \
  --name my-flex-func \
  --resource-group myRG \
  --maximum-instance-count 100

# Enable VNet integration
az functionapp vnet-integration add \
  --name my-flex-func \
  --resource-group myRG \
  --vnet my-vnet \
  --subnet functions-subnet

Flex Consumption for New Projects

For new serverless projects that need VNet integration, Flex Consumption should be your default choice. It provides pay-per-use pricing like Consumption while adding VNet support, configurable instance sizes, faster scaling, and always-ready instances; these features previously required the significantly more expensive Premium plan. The only reasons to prefer standard Consumption over Flex are that Flex is not yet available in your region or that you need to stay within the free grant.

Premium Plan: Enterprise Serverless

The Premium plan (Elastic Premium, EP1/EP2/EP3) is designed for production workloads that need predictable performance with zero cold starts, VNet integration, and longer execution times. It maintains at least one pre-warmed instance at all times, ensuring that the first invocation is always fast.

Premium SKU Comparison

| SKU | vCPU | Memory | Approx. Monthly Cost | Best For |
| --- | --- | --- | --- | --- |
| EP1 | 1 | 3.5 GB | ~$175 | Low to moderate workloads, APIs |
| EP2 | 2 | 7 GB | ~$350 | Medium workloads, memory-intensive functions |
| EP3 | 4 | 14 GB | ~$700 | High-compute workloads, ML inference |
Terminal: Create and configure a Premium Function App
# Create an Elastic Premium plan
az functionapp plan create \
  --name my-premium-plan \
  --resource-group myRG \
  --location eastus2 \
  --sku EP1 \
  --min-instances 1 \
  --max-burst 20

# Create the Function App on the Premium plan
az functionapp create \
  --name my-premium-func \
  --resource-group myRG \
  --plan my-premium-plan \
  --runtime node \
  --runtime-version 20 \
  --storage-account mystorageaccount \
  --functions-version 4

# Enable VNet integration for accessing private resources
az functionapp vnet-integration add \
  --name my-premium-func \
  --resource-group myRG \
  --vnet my-vnet \
  --subnet functions-subnet

# Configure deployment slots for zero-downtime updates
az functionapp deployment slot create \
  --name my-premium-func \
  --resource-group myRG \
  --slot staging

# Swap staging to production
az functionapp deployment slot swap \
  --name my-premium-func \
  --resource-group myRG \
  --slot staging \
  --target-slot production

Pre-Warmed vs Minimum Instances

Premium plans have two scaling controls that are often confused:

  • Minimum instances (plan level): The minimum number of instances that are always allocated and billed. Set this to at least 1 to eliminate cold starts. Each minimum instance is fully billed at the EP SKU rate.
  • Pre-warmed instances (app level): Additional instances kept warm beyond the minimum, ready to handle burst traffic. These are billed at a reduced rate (~50% of a fully active instance). Set this to the number of additional instances you typically need during traffic spikes.

Premium Plan Cost Optimization

The Premium plan's cost is dominated by the minimum instance count. If you set --min-instances 3 on an EP2 plan, you pay approximately $1,050/month even with zero traffic. Set the minimum to 1 for most workloads and let Azure scale out additional instances on demand. Monitor your scaling patterns and raise the minimum only if you consistently need more baseline capacity.
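The baseline math is simple multiplication. A hypothetical helper using the article's approximate per-instance rates (not official pricing; check the Azure price sheet for your region):

```javascript
// Approximate monthly cost per always-allocated instance, by SKU.
// These are the article's rough figures, not official Azure pricing.
const monthlyRate = { EP1: 175, EP2: 350, EP3: 700 };

// Baseline cost you pay even at zero traffic: SKU rate × minimum instances.
function baselineMonthlyCost(sku, minInstances) {
  return monthlyRate[sku] * minInstances;
}

console.log(baselineMonthlyCost("EP2", 3)); // ~1050: three EP2 instances at zero traffic
console.log(baselineMonthlyCost("EP1", 1)); // ~175: the usual starting point
```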

Dedicated (App Service) Plan

Running Functions on a Dedicated App Service plan means your functions share compute with other App Service apps on the same plan. This is practical when you already have underutilized App Service plans, need App Service Environment (ASE) for full network isolation, or prefer predictable, fixed-cost billing over usage-based pricing.

When Dedicated Makes Sense

  • Existing underutilized App Service plans: If you have an App Service plan running web apps at 20% utilization, adding Functions to the same plan costs nothing extra and maximizes your investment.
  • Long-running functions: Dedicated plans have no execution time limit, making them suitable for functions that process for minutes or hours (video encoding, data migration, ETL pipelines).
  • App Service Environment (ASE) requirements: For single-tenant, fully isolated compute environments required by compliance standards, ASEv3 with Dedicated plans provides the highest level of network isolation.
  • Predictable billing: You pay for the App Service plan regardless of function execution count, making costs completely predictable for budgeting.

Avoid Free/Shared Tiers for Functions

While technically possible, running Azure Functions on Free or Shared App Service plan tiers results in significant limitations: no “Always On” setting (functions time out and stop responding), limited CPU allocation, no custom domains with SSL, and no scaling. Use at least a Basic B1 plan or higher for Dedicated function hosting. For production workloads, Premium v3 tiers offer the best price-to-performance ratio.

Triggers, Bindings, and Plan Compatibility

Azure Functions supports dozens of trigger types that connect your code to events from various Azure services and third-party systems. While most triggers work on all plans, some have plan-specific behaviors or limitations.

| Trigger Type | Consumption | Flex Consumption | Premium | Notes |
| --- | --- | --- | --- | --- |
| HTTP | Full support | Full support | Full support | Cold starts affect latency on Consumption |
| Timer | 10 min max execution | Full support | Full support | Timer must run on exactly one instance |
| Queue Storage | Full support | Full support | Full support | Batch size affects throughput |
| Service Bus | Full support | Full support | Full support | Use managed identity for auth |
| Event Hub | Full support | Full support | Full support | Max instances = number of partitions |
| Cosmos DB | Full support | Full support | Full support | Requires change feed; use lease container |
| Kafka | Not supported | Full support | Full support | Requires VNet for private Kafka clusters |
| Durable Functions | Full (with limits) | Full support | Full support | Fan-out limited by instance count |

Durable Functions and Orchestration

Durable Functions is an extension of Azure Functions that lets you write stateful workflows in code. It provides orchestration patterns like chaining, fan-out/fan-in, human interaction, and monitoring without external state stores.

Durable Functions fan-out pattern (JavaScript)
const df = require("durable-functions");

// Orchestrator function - coordinates the workflow
df.app.orchestration("processOrderOrchestrator", function* (context) {
  const orderId = context.df.getInput();

  // Step 1: Validate the order
  const validation = yield context.df.callActivity("validateOrder", orderId);
  if (!validation.isValid) {
    return { status: "rejected", reason: validation.reason };
  }

  // Step 2: Fan-out - process items in parallel
  const items = validation.items;
  const parallelTasks = items.map((item) =>
    context.df.callActivity("processItem", { orderId, item })
  );
  const results = yield context.df.Task.all(parallelTasks);

  // Step 3: Calculate totals after all items are processed
  const total = yield context.df.callActivity("calculateTotal", results);

  // Step 4: Charge payment
  const payment = yield context.df.callActivity("chargePayment", {
    orderId,
    amount: total,
  });

  return { status: "completed", orderId, total, paymentId: payment.id };
});

// Activity functions - the actual work
df.app.activity("validateOrder", { handler: async (orderId) => {
  // Validate against database
  return { isValid: true, items: ["item1", "item2", "item3"] };
}});

df.app.activity("processItem", { handler: async ({ orderId, item }) => {
  // Process each item (inventory check, reservation, etc.)
  return { item, processed: true, price: 29.99 };
}});

Durable Functions Task Hub

Durable Functions uses Azure Storage (queues and tables) as its backend state store by default. For high-throughput scenarios, consider using the Netherite storage provider (based on Event Hubs and Azure Page Blobs) or the MSSQL storage provider. The storage backend choice significantly affects orchestration throughput and latency. Configure the task hub name in host.json to isolate different function apps sharing the same storage account.
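As a sketch, isolating a task hub is a one-line host.json setting (the hub name here is illustrative; hub names must be alphanumeric):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "OrdersTaskHubProd"
    }
  }
}
```

Giving each function app its own hub name lets multiple apps share one storage account without their orchestration state colliding.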

Monitoring and Observability

Azure Functions integrates with Application Insights for comprehensive monitoring. Proper instrumentation is critical for understanding performance, identifying failures, and optimizing costs.

Terminal: Configure monitoring
# Create Application Insights
az monitor app-insights component create \
  --app my-func-insights \
  --resource-group myRG \
  --location eastus2

# Connect Function App to Application Insights
az functionapp config appsettings set \
  --name my-premium-func \
  --resource-group myRG \
  --settings \
    APPLICATIONINSIGHTS_CONNECTION_STRING=$(az monitor app-insights component show --app my-func-insights -g myRG --query connectionString -o tsv) \
    FUNCTIONS_WORKER_RUNTIME=node \
    WEBSITE_RUN_FROM_PACKAGE=1

# View function execution logs
az monitor app-insights query \
  --app my-func-insights \
  --resource-group myRG \
  --analytics-query "requests | where timestamp > ago(1h) | summarize count(), avg(duration) by name | order by count_ desc"

Key Metrics to Monitor

| Metric | What It Tells You | Alert Threshold |
| --- | --- | --- |
| Function execution count | Volume of invocations | Sudden drops may indicate trigger failure |
| Function execution duration | P50/P95/P99 latency | P95 exceeding SLO target |
| Function failures | Error rate | Error rate above 1-5% |
| Active instance count | Current scaling level | Approaching plan maximum |
| Health check status | Instance availability | Unhealthy instances detected |

Cost Comparison Scenarios

Understanding cost at different traffic levels helps you choose the most economical plan. These scenarios compare monthly costs for identical workloads across plans.

Scenario 1: Low-Traffic API (10K requests/day)

| Plan | Monthly Cost | Notes |
| --- | --- | --- |
| Consumption | ~$0 (within free grant) | Cold starts on first request after idle |
| Flex Consumption | ~$5-15 | No cold starts if always-ready configured |
| Premium EP1 | ~$175 | Massive overkill for this traffic level |

Scenario 2: High-Traffic API (1M requests/day, 200ms avg)

| Plan | Monthly Cost | Notes |
| --- | --- | --- |
| Consumption | ~$80-120 | May hit scaling limits during spikes |
| Flex Consumption | ~$100-150 | Better scaling, VNet support |
| Premium EP1 (min 2) | ~$350-500 | Predictable performance, no cold starts |
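The scenario figures above can be rough-checked with a small estimator. The per-million-execution and per-GB-s rates below are assumptions based on published Consumption pricing at the time of writing; verify them against the current Azure price sheet before budgeting:

```javascript
// Rough Consumption plan cost estimator. Assumed rates: $0.20 per million
// executions and $0.000016 per GB-s, after the monthly free grant of
// 1M executions and 400,000 GB-s. Verify against current Azure pricing.
function estimateConsumptionCost({ executionsPerMonth, memoryGb, avgDurationS }) {
  const gbs = executionsPerMonth * memoryGb * avgDurationS;
  const billableExecutions = Math.max(0, executionsPerMonth - 1_000_000);
  const billableGbs = Math.max(0, gbs - 400_000);
  return (billableExecutions / 1_000_000) * 0.2 + billableGbs * 0.000016;
}

// A workload inside the free grant costs nothing:
console.log(estimateConsumptionCost({
  executionsPerMonth: 500_000, memoryGb: 0.25, avgDurationS: 1,
})); // 0
```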

Execution Billing Details

Consumption and Flex Consumption plans bill based on the number of executions and the resource consumption measured in GB-seconds (memory allocated multiplied by execution duration). For example, a function using 256 MB of memory running for 1 second consumes 0.25 GB-s. The Consumption free grant of 400,000 GB-s per month covers approximately 1.6 million such executions. Monitor your actual GB-s consumption in Application Insights to predict costs accurately.
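The GB-s arithmetic from this paragraph, as a one-liner you can reuse:

```javascript
// GB-seconds = memory allocated (in GB) × execution duration (in seconds).
function gbSeconds(memoryMb, durationSeconds) {
  return (memoryMb / 1024) * durationSeconds;
}

const perExecution = gbSeconds(256, 1);                 // 0.25 GB-s per call
const executionsCovered = 400_000 / perExecution;       // 1,600,000 free executions
```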

Security and Networking Best Practices

Securing Azure Functions requires attention to authentication, network isolation, and secret management.

Authentication Options

  • Function keys: Built-in API keys (function-level and host-level). Simple but not recommended for production; keys are stored in the function app and can be leaked.
  • Azure AD authentication (EasyAuth): Built-in authentication middleware that validates Azure AD tokens. No code changes required. Configure in the portal or via CLI.
  • Custom authentication: Implement your own JWT validation in code for maximum flexibility (multiple identity providers, custom claims).
  • API Management: Place Azure API Management in front of Functions for rate limiting, API versioning, subscription keys, and OAuth 2.0 validation.
Terminal: Secure function app networking
# Restrict function app to VNet-only access (Premium/Flex required)
az functionapp config access-restriction add \
  --name my-premium-func \
  --resource-group myRG \
  --rule-name "AllowVNet" \
  --priority 100 \
  --vnet-name my-vnet \
  --subnet functions-subnet

# Enable managed identity for Key Vault access
az functionapp identity assign \
  --name my-premium-func \
  --resource-group myRG

# Set Key Vault references for secrets (no secrets in app settings)
az functionapp config appsettings set \
  --name my-premium-func \
  --resource-group myRG \
  --settings \
    "DB_CONNECTION=@Microsoft.KeyVault(VaultName=myapp-kv;SecretName=db-connection)" \
    "API_KEY=@Microsoft.KeyVault(VaultName=myapp-kv;SecretName=external-api-key)"

# Enable Azure AD authentication
az webapp auth update \
  --name my-premium-func \
  --resource-group myRG \
  --enabled true \
  --action LoginWithAzureActiveDirectory
Key Vault references like these keep secrets out of app settings entirely; grant the function app's managed identity read access to the vault's secrets so the references resolve at runtime.

Decision Guide

Use this framework to select the right hosting plan for your workload:

  1. Is the workload sporadic with low traffic and no VNet requirement? → Use Consumption. The free grant and scale-to-zero make it the cheapest option for lightweight workloads, prototypes, and scheduled tasks.
  2. Do you need VNet access with pay-per-use pricing? → Use Flex Consumption. It bridges the gap between Consumption and Premium at a fraction of the cost.
  3. Is latency critical with no cold starts allowed? → Use Premium with pre-warmed instances. The always-on minimum instance ensures sub-second response times for every request.
  4. Do you have existing App Service plans with spare capacity? → Use Dedicated. Consolidate functions onto existing plans to maximize utilization without additional cost.
  5. Need full network isolation (ASE)? → Use Dedicated with App Service Environment v3 for single-tenant, fully isolated compute.
  6. Running event-driven microservices with KEDA? → Consider Azure Container Apps as an alternative that provides Kubernetes-based scaling with built-in KEDA support.

Key Takeaways

  1. Consumption plan is pay-per-execution and scales to zero, making it best for sporadic workloads.
  2. Premium plan eliminates cold starts with pre-warmed instances and supports VNet integration.
  3. Dedicated (App Service) plan uses existing App Service infrastructure at fixed cost.
  4. Flex Consumption combines serverless scaling with always-ready instances.
  5. Consumption plan has a 5-minute default timeout (max 10); Premium supports up to 60 minutes.
  6. Choose based on latency requirements, execution duration, networking needs, and budget.

Frequently Asked Questions

What is the Azure Functions Consumption plan?
The Consumption plan is a true serverless option where you pay only for execution time and memory. Functions scale automatically and scale to zero when idle. It has a cold start penalty, a 5-minute default timeout, and 1.5 GB memory limit.
How do I avoid cold starts in Azure Functions?
Use the Premium plan which keeps pre-warmed instances ready. Alternatively, use always-ready instances in Flex Consumption. The Consumption plan always has cold starts because instances are deallocated when idle.
When should I use the Premium plan over Consumption?
Use Premium when you need: no cold starts, VNet integration, longer execution times (up to 60 min), more powerful instances, or deterministic performance. Premium costs more but is essential for production APIs and latency-sensitive workloads.
Can Azure Functions connect to resources in a VNet?
Yes, but only on Premium, Dedicated, and Flex Consumption plans. The Consumption plan does not support VNet integration. Premium plan supports both outbound VNet integration and private endpoint inbound connectivity.
What is the maximum execution time for Azure Functions?
Consumption: 10 minutes max (5 min default). Premium: 60 minutes (30 min default). Dedicated: unlimited (no enforced timeout). For long-running orchestrations, use Durable Functions regardless of plan.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.