
Cost Optimization Strategies

Practical strategies for reducing AWS costs with Reserved Instances, Savings Plans, and right-sizing.

CloudToolStack Team · 26 min read · Published Feb 22, 2026

Prerequisites

  • AWS account with billing access
  • Familiarity with core AWS services (EC2, S3, RDS)
  • Understanding of AWS pricing models

The Cloud Cost Challenge

Cloud costs are the new operational challenge for engineering teams. Unlike on-premises infrastructure with fixed capital costs, cloud spending is variable and can grow rapidly without governance. Industry surveys consistently estimate that 30-35% of cloud spend is wasted on idle, over-provisioned, or forgotten resources. This guide covers practical strategies to identify waste, optimize spending, and build a culture of cost awareness across your organization.

Cost optimization is not about spending less; it is about spending wisely. The goal is to eliminate waste while ensuring your applications have the resources they need to perform well and scale reliably. A well-optimized cloud environment can deliver the same performance at 40-60% less cost than an unoptimized one, freeing budget for new features and innovation.

This guide covers the complete cost optimization lifecycle: gaining visibility into your spending, implementing tagging for cost allocation, optimizing compute, storage, data transfer, and database costs, and building automated governance to prevent waste from accumulating.

Start with Visibility

You cannot optimize what you cannot see. Before implementing any savings strategy, enable Cost Explorer, set up AWS Budgets, activate Cost Anomaly Detection, and implement a comprehensive tagging strategy. These tools form the foundation of cloud financial management and should be configured within the first week of any new AWS account or organization.

Tagging Strategy for Cost Allocation

Tags are the primary mechanism for attributing cloud costs to teams, projects, and environments. Without consistent tagging, your cost reports show an undifferentiated blob of spending that no team owns or is accountable for. Define a mandatory tagging policy and enforce it with AWS Config rules, SCPs, or CDK Nag checks.

Recommended Tag Schema

Tag Key | Purpose | Example Values | Required
Environment | Separate dev/staging/prod costs | dev, staging, prod | Yes
Team | Team-level cost allocation | platform, data-eng, frontend, ml | Yes
Project | Project or service-level tracking | search-api, recommendation-engine | Yes
CostCenter | Finance department mapping | CC-1234, CC-5678 | Yes (enterprise)
ManagedBy | IaC vs manual tracking | terraform, cdk, manual, cloudformation | Recommended
DataClassification | Security and compliance | public, internal, confidential, restricted | Recommended
enforce-tagging.json
{
  "Type": "AWS::Config::ConfigRule",
  "Properties": {
    "ConfigRuleName": "required-tags",
    "Source": {
      "Owner": "AWS",
      "SourceIdentifier": "REQUIRED_TAGS"
    },
    "InputParameters": {
      "tag1Key": "Environment",
      "tag2Key": "Team",
      "tag3Key": "Project"
    },
    "Scope": {
      "ComplianceResourceTypes": [
        "AWS::EC2::Instance",
        "AWS::RDS::DBInstance",
        "AWS::S3::Bucket",
        "AWS::Lambda::Function",
        "AWS::ECS::Service",
        "AWS::ElasticLoadBalancingV2::LoadBalancer",
        "AWS::DynamoDB::Table"
      ]
    }
  }
}

Tag Enforcement Strategies

  • AWS Config required-tags rule: Detects untagged resources and marks them non-compliant. Can trigger auto-remediation to notify or tag resources.
  • SCP-based enforcement: Deny resource creation without required tags. More aggressive but ensures 100% tag compliance from day one.
  • CDK/Terraform enforcement: Use CDK Aspects or Terraform sentinel policies to require tags at the IaC level, catching missing tags before deployment.
  • AWS Tag Editor: Bulk-tag existing resources across all regions from a single interface. Useful for retroactively tagging untagged resources.
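For the SCP route, a deny statement on untagged instance launches looks roughly like this. This is a sketch of the common pattern covering only `ec2:RunInstances`; extend the action and resource lists to other services before relying on it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRunInstancesWithoutTeamTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": { "aws:RequestTag/Team": "true" }
      }
    }
  ]
}
```

Condition keys within one statement are ANDed, so enforcing several mandatory tags takes one deny statement per tag key rather than multiple keys in a single Null block.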

Activate Tags for Cost Allocation

Creating tags on resources is not enough. You must also activate each tag key as a cost allocation tag in the Billing console. Only activated tags appear in Cost Explorer and Cost and Usage Reports. Go to Billing → Cost Allocation Tags and activate your mandatory tag keys. Allow 24 hours for newly activated tags to appear in cost reports.

Related: AWS IAM Best Practices: SCPs for Governance

Compute Optimization

Compute typically accounts for 60-70% of AWS spending. The three highest-impact strategies are right-sizing, commitment discounts (Savings Plans), and Spot Instances. Applied together, these strategies can reduce compute costs by 50-70%.
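As a rough illustration of how the layers compound, note that each discount applies to the spend left after the previous one, so the percentages multiply rather than add. The figures below are illustrative midpoints, not guarantees; the result depends entirely on your workload mix:

```shell
# Illustrative only: layered compute savings compound multiplicatively.
rightsizing=0.25     # shrink over-provisioned instances (~25%)
sp_discount=0.30     # 1-yr Compute Savings Plan on the remaining baseline
spot_share=0.30      # share of remaining spend moved to Spot
spot_discount=0.60   # typical Spot savings vs On-Demand

total_pct=$(awk -v r="$rightsizing" -v s="$sp_discount" \
            -v sh="$spot_share" -v sd="$spot_discount" 'BEGIN {
  remaining = (1 - r) * (1 - s)                # after right-sizing + SP
  saved = (1 - remaining) + remaining * sh * sd
  printf "%.0f", saved * 100
}')
echo "Estimated combined savings: ${total_pct}%"   # prints 57%
```

With these inputs the three strategies land at roughly 57%, squarely inside the 50-70% range quoted above.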

Right-Sizing

AWS Compute Optimizer analyzes 14 days of CloudWatch metrics and recommends optimal instance types. Focus on instances with average CPU utilization below 40% and maximum CPU below 80%. These are strong candidates for downsizing. Install the CloudWatch agent to collect memory metrics, which Compute Optimizer uses for more accurate recommendations.

right-sizing-analysis.sh
# Get Compute Optimizer recommendations
aws compute-optimizer get-ec2-instance-recommendations \
  --filters "name=Finding,values=OVER_PROVISIONED" \
  --query 'instanceRecommendations[].{
    Id: instanceArn,
    Current: currentInstanceType,
    Recommended: recommendationOptions[0].instanceType,
    Savings: recommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.value,
    Currency: recommendationOptions[0].savingsOpportunity.estimatedMonthlySavings.currency,
    Risk: recommendationOptions[0].migrationEffort
  }' \
  --output table

# Enable Compute Optimizer (do this once per account)
aws compute-optimizer update-enrollment-status \
  --status Active \
  --include-member-accounts

# Identify idle instances (CPU < 5% for 14 days)
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time $(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%SZ) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --period 86400 \
  --statistics Average Maximum \
  --output table
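The thresholds described above (average CPU below 40%, maximum below 80%) can be wrapped in a small helper for scripting against that CloudWatch output. This is a sketch; the thresholds come from this guide, not from any AWS default:

```shell
# Returns success (0) when a 14-day CPU profile makes the instance a
# downsizing candidate per the thresholds above: avg < 40% AND max < 80%.
is_downsize_candidate() {
  local avg_cpu=$1 max_cpu=$2
  awk -v a="$avg_cpu" -v m="$max_cpu" 'BEGIN { exit !(a < 40 && m < 80) }'
}

# Low, steady utilization: safe to size down
if is_downsize_candidate 12.5 55.0; then
  echo "candidate for downsizing"
fi

# A high peak disqualifies even a low average: keep the headroom
if ! is_downsize_candidate 35.0 95.0; then
  echo "keep current size (CPU spikes)"
fi
```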

Savings Plans and Reserved Instances

For steady-state workloads, commitment-based pricing delivers 30-72% savings over On-Demand pricing. The key is choosing the right commitment type and coverage level.

Commitment Type | Flexibility | 1-Year Savings | 3-Year Savings | Recommendation
Compute Savings Plan | Any family, size, region, OS, or Fargate | ~30% | ~55% | Best default choice
EC2 Instance Savings Plan | Any size within a family + region | ~35% | ~60% | When you know the family
Standard RI | Specific instance type + region | ~40% | ~62% | Being replaced by SPs
Convertible RI | Can exchange for different type | ~33% | ~54% | Prefer Compute SP instead

Savings Plan Purchasing Strategy

Start by committing to 70-80% of your steady-state baseline. Use Cost Explorer's Savings Plans recommendations to identify the optimal hourly commitment. Purchase Compute Savings Plans first for maximum flexibility, then layer EC2 Instance Savings Plans for frequently used instance families. You can always add more commitments later, but you cannot reduce them once purchased. Avoid 3-year commitments until you have at least 6 months of stable usage data.
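One subtlety worth spelling out: Savings Plans commitments are stated in discounted dollars per hour, so covering 75% of a $10/hour On-Demand baseline means committing less than $7.50/hour. A back-of-envelope sizing with illustrative numbers:

```shell
# Illustrative commitment sizing per the 70-80% guidance above.
baseline_hourly=10.00   # steady-state On-Demand compute spend, $/hr
coverage=0.75           # start between 0.70 and 0.80 of baseline
sp_discount=0.30        # ~30% for a 1-yr, no-upfront Compute SP

# Commitments are expressed in discounted dollars, so apply the
# discount to the covered share of the On-Demand baseline.
commitment=$(awk -v b="$baseline_hourly" -v c="$coverage" -v d="$sp_discount" \
  'BEGIN { printf "%.2f", b * c * (1 - d) }')
echo "Suggested starting commitment: \$${commitment}/hour"   # $5.25/hour
```

Cost Explorer's recommendation engine does this calculation against your actual usage history, so treat the sketch as a sanity check on its output rather than a replacement for it.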

Spot Instances

Spot Instances offer up to 90% savings for fault-tolerant workloads. AWS can reclaim Spot Instances with a 2-minute warning, so your application must handle interruptions gracefully. Ideal workloads include batch processing, CI/CD runners, data analysis, test environments, and any containerized workload behind a load balancer.

spot-fleet-config.json
{
  "SpotFleetRequestConfig": {
    "AllocationStrategy": "capacityOptimized",
    "TargetCapacity": 10,
    "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",
    "TerminateInstancesWithExpiration": true,
    "LaunchTemplateConfigs": [
      {
        "LaunchTemplateSpecification": {
          "LaunchTemplateId": "lt-0123456789abcdef0",
          "Version": "$Latest"
        },
        "Overrides": [
          { "InstanceType": "m7g.large", "WeightedCapacity": 1 },
          { "InstanceType": "m6g.large", "WeightedCapacity": 1 },
          { "InstanceType": "m7i.large", "WeightedCapacity": 1 },
          { "InstanceType": "c7g.large", "WeightedCapacity": 1 },
          { "InstanceType": "c6g.large", "WeightedCapacity": 1 },
          { "InstanceType": "r7g.medium", "WeightedCapacity": 1 }
        ]
      }
    ]
  }
}
Related: EC2 Instance Types: Right-Sizing and Pricing Models

Data Transfer Cost Reduction

Data transfer is the hidden cost that surprises most teams. Inbound traffic is free, but outbound and cross-AZ traffic adds up quickly, sometimes accounting for 15-25% of the total AWS bill. Understanding where data transfer charges come from is the first step to reducing them.

Common Data Transfer Cost Sources

Traffic Path | Cost | Optimization Strategy
NAT Gateway processing | $0.045/GB | VPC endpoints for S3/DynamoDB (free gateway endpoints)
Cross-AZ traffic | $0.01/GB each way | AZ-aware routing, co-locate communicating services
Internet outbound | $0.09/GB (first 10 TB) | CloudFront (cheaper), compression, caching
Cross-region replication | $0.02/GB (varies) | Compress before replicating, replicate only what is needed
VPC interface endpoints | $0.01/GB + $0.01/hr/AZ | Still cheaper than NAT GW for high-volume AWS service calls

NAT Gateway Is Expensive

A single NAT Gateway costs about $32/month just to exist, plus $0.045 per GB of data processed. If your Lambda functions, ECS tasks, or EC2 instances access S3 through a NAT Gateway, you could be paying 10x more than necessary. Always deploy the free S3 and DynamoDB gateway endpoints. For other AWS services, calculate whether interface endpoint costs ($7.20/month per AZ plus $0.01/GB) are less than NAT Gateway data processing charges. A workload sending 500 GB/month to an AWS service pays $22.50 in NAT data processing vs $5 in endpoint processing, about $12.20 total once the endpoint's hourly charge is included, still roughly half the cost.
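The comparison works out as follows. Rates are the us-east-1 list prices quoted in this section; check current pricing before deciding:

```shell
# Monthly cost of routing AWS-service traffic through a NAT Gateway vs
# a VPC interface endpoint, using the per-GB rates from the table above.
gb_per_month=500
azs=1   # interface endpoints bill hourly per AZ

# NAT Gateway data processing only ($0.045/GB); the ~$32/month hourly
# charge is excluded since the NAT GW often exists for other traffic.
nat_cost=$(awk -v g="$gb_per_month" 'BEGIN { printf "%.2f", g * 0.045 }')

# Interface endpoint: $0.01/GB processing + ~$7.20/month per AZ
ep_cost=$(awk -v g="$gb_per_month" -v a="$azs" \
  'BEGIN { printf "%.2f", g * 0.01 + a * 7.20 }')

echo "NAT Gateway:        \$${nat_cost}/month"   # $22.50/month
echo "Interface endpoint: \$${ep_cost}/month"    # $12.20/month
```

The break-even shifts with volume: at low GB counts the endpoint's fixed hourly charge dominates, so run the numbers per service before adding interface endpoints everywhere.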

identify-nat-costs.sh
# Identify top data transfer costs using VPC Flow Logs
# First, query Flow Logs in Athena for NAT Gateway traffic

# Find top destinations from NAT Gateway
# (replace with your NAT Gateway ENI ID)
# SELECT dstaddr, SUM(bytes) as total_bytes
# FROM vpc_flow_logs
# WHERE interface_id = 'eni-nat-gateway-id'
# AND action = 'ACCEPT'
# GROUP BY dstaddr
# ORDER BY total_bytes DESC
# LIMIT 20;

# Check if S3 gateway endpoint exists
aws ec2 describe-vpc-endpoints \
  --filters "Name=service-name,Values=com.amazonaws.us-east-1.s3" \
  --query 'VpcEndpoints[].{Id: VpcEndpointId, State: State, Type: VpcEndpointType}'

# Create free S3 gateway endpoint if missing
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-private-a rtb-private-b \
  --vpc-endpoint-type Gateway

# Create free DynamoDB gateway endpoint
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-east-1.dynamodb \
  --route-table-ids rtb-private-a rtb-private-b \
  --vpc-endpoint-type Gateway
Related: VPC Architecture Patterns: VPC Endpoints and NAT Gateway Optimization · AWS Networking Deep Dive: Data Transfer Cost Matrix

Storage and Database Optimization

Storage costs accumulate silently over time. Unlike compute costs that fluctuate with usage, storage costs only grow unless actively managed. Implement these practices to keep storage costs under control.

S3 Cost Optimization

  • Lifecycle policies: Transition data to cheaper storage classes automatically. Logs over 30 days old should move to Standard-IA; over 90 days to Glacier Instant.
  • Intelligent-Tiering: For data with unpredictable access, let S3 automatically optimize storage class based on usage.
  • Abort incomplete multipart uploads: Abandoned multipart uploads consume storage but are invisible. Add a lifecycle rule to abort after 7 days.
  • Manage non-current versions: Versioned buckets accumulate old versions. Transition non-current versions to cheaper tiers and expire after a retention period.
  • S3 Storage Lens: Identify optimization opportunities across all buckets and accounts.
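The first four bullets can be combined into a single lifecycle configuration. This is a sketch for a hypothetical logs prefix; the day thresholds and the 365-day noncurrent-version expiry are example values, not recommendations for every workload:

```json
{
  "Rules": [
    {
      "ID": "tier-and-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER_IR" }
      ],
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 30, "StorageClass": "STANDARD_IA" }
      ],
      "NoncurrentVersionExpiration": { "NoncurrentDays": 365 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```

Apply it with `aws s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration file://lifecycle.json`.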

EBS Volume Optimization

  • Delete unattached volumes: Unattached EBS volumes continue to incur charges. Automate detection and deletion of volumes unattached for more than 7 days.
  • Use gp3 instead of gp2: gp3 is 20% cheaper than gp2 at baseline and allows independent IOPS and throughput configuration.
  • Right-size volumes: EBS volumes are often over-provisioned. Check actual disk usage and resize down.
  • Manage snapshots: Old snapshots accumulate over time. Implement retention policies with AWS Backup.
storage-cleanup.sh
# Find unattached EBS volumes and estimate waste
aws ec2 describe-volumes \
  --filters "Name=status,Values=available" \
  --query 'Volumes[].{
    VolumeId: VolumeId,
    SizeGB: Size,
    Type: VolumeType,
    Created: CreateTime,
    AZ: AvailabilityZone
  }' \
  --output table

# Calculate total waste from unattached volumes
aws ec2 describe-volumes \
  --filters "Name=status,Values=available" \
  --query 'sum(Volumes[].Size)' \
  --output text
# Multiply by $0.08/GB-month (gp3) for estimated monthly waste

# Migrate gp2 volumes to gp3 (20% savings)
for vol_id in $(aws ec2 describe-volumes \
  --filters "Name=volume-type,Values=gp2" \
  --query 'Volumes[].VolumeId' \
  --output text); do
  echo "Migrating $vol_id from gp2 to gp3"
  aws ec2 modify-volume \
    --volume-id $vol_id \
    --volume-type gp3
done

# Find old EBS snapshots (older than 90 days)
aws ec2 describe-snapshots \
  --owner-ids self \
  --query "Snapshots[?StartTime<='$(date -u -d '90 days ago' +%Y-%m-%dT%H:%M:%SZ)'].{
    Id: SnapshotId,
    Size: VolumeSize,
    Date: StartTime,
    Desc: Description
  }" \
  --output table

Database Cost Optimization

Strategy | Service | Estimated Savings
Right-size with Performance Insights | RDS, Aurora | 20-40%
Aurora Serverless v2 for variable workloads | Aurora | 30-60% vs provisioned
On-demand to provisioned billing | DynamoDB | 40-50% for stable workloads
Reserved Instances for databases | RDS, ElastiCache, OpenSearch | 30-60%
Delete unused read replicas | RDS, Aurora | Variable (full instance cost)
Stop dev/test databases after hours | RDS | 65% (16 hrs off per day)
Related: S3 Storage Classes: Complete Storage Cost Optimization · DynamoDB Data Modeling: Capacity Planning and Cost

Serverless Cost Optimization

Serverless services (Lambda, Fargate, API Gateway, SQS) have different cost profiles than traditional compute. They charge per request or per second of execution, making them extremely cost-effective for variable workloads but potentially expensive at high steady-state volumes.

Lambda Cost Reduction

  • Right-size memory: Use Lambda Power Tuning to find the optimal memory-cost configuration. Over-provisioned memory wastes money; under-provisioned memory increases duration.
  • Use ARM64 (Graviton): Lambda supports ARM64 with 20% lower pricing and often better performance than x86.
  • Reduce execution time: Optimize code, use connection pooling, cache frequently accessed data. Each 100ms reduction saves money at scale.
  • Batch processing with SQS: Process SQS messages in batches of 10 instead of invoking Lambda once per message.
lambda-cost-optimization.sh
# Find the most expensive Lambda functions
aws cloudwatch get-metric-data \
  --metric-data-queries '[
    {
      "Id": "invocations",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Lambda",
          "MetricName": "Invocations",
          "Dimensions": [
            {"Name": "FunctionName", "Value": "my-function"}
          ]
        },
        "Period": 86400,
        "Stat": "Sum"
      }
    },
    {
      "Id": "duration",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Lambda",
          "MetricName": "Duration",
          "Dimensions": [
            {"Name": "FunctionName", "Value": "my-function"}
          ]
        },
        "Period": 86400,
        "Stat": "Average"
      }
    }
  ]' \
  --start-time $(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%SZ) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ)

# Switch Lambda to ARM64 (Graviton) for 20% cost reduction
aws lambda update-function-configuration \
  --function-name my-function \
  --architectures arm64
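To see what the ARM64 switch is worth, here is a back-of-envelope monthly cost comparison. The per-GB-second and per-request rates are us-east-1 list prices at the time of writing; verify against the current Lambda pricing page, and note the workload figures are made up for illustration:

```shell
# Monthly Lambda cost, x86_64 vs arm64 (Graviton), free tier excluded.
invocations=10000000   # 10M invocations/month (illustrative)
duration_ms=200
memory_gb=0.5          # 512 MB

gb_seconds=$(awk -v i="$invocations" -v d="$duration_ms" -v m="$memory_gb" \
  'BEGIN { printf "%.0f", i * (d / 1000) * m }')

# Rates: $0.20 per 1M requests; $0.0000166667/GB-s on x86_64,
# $0.0000133334/GB-s on arm64 (~20% lower)
x86=$(awk -v g="$gb_seconds" -v i="$invocations" \
  'BEGIN { printf "%.2f", g * 0.0000166667 + i / 1000000 * 0.20 }')
arm=$(awk -v g="$gb_seconds" -v i="$invocations" \
  'BEGIN { printf "%.2f", g * 0.0000133334 + i / 1000000 * 0.20 }')

echo "x86_64: \$${x86}/month   arm64: \$${arm}/month"
```

Only the compute portion gets the 20% discount; the flat request charge is identical, so the total saving for this profile is closer to 18% than 20%.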
Related: Lambda Performance Tuning: Memory Optimization and Cost

Non-Production Environment Savings

Non-production environments (development, staging, QA) often run 24/7 but are only actively used during business hours. Scheduling these environments to shut down outside business hours saves roughly 70% of their compute costs, since they run for only about 50 of the week's 168 hours.

Instance Scheduler

AWS Instance Scheduler is a solution that starts and stops EC2 instances and RDS databases on a schedule using tags. Tag your non-production resources with the schedule name, and the scheduler handles the rest.

scheduled-scaling.sh
# Tag instances for scheduled start/stop
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Schedule,Value=office-hours

# Auto Scaling scheduled action: scale to zero at night
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name dev-web-servers \
  --scheduled-action-name scale-down-night \
  --recurrence "0 20 * * MON-FRI" \
  --desired-capacity 0 \
  --min-size 0

aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name dev-web-servers \
  --scheduled-action-name scale-up-morning \
  --recurrence "0 8 * * MON-FRI" \
  --desired-capacity 2 \
  --min-size 2

# Stop RDS instances outside business hours
# (RDS instances stopped for 7+ days auto-restart; use Lambda to re-stop)
aws rds stop-db-instance --db-instance-identifier dev-database

Weekend Savings Are Significant

Non-production environments running only during business hours (8 AM to 6 PM, Monday through Friday) consume about 50 hours per week instead of 168. That is a 70% reduction in compute hours. For an organization spending $50,000/month on non-production compute, scheduling alone can save roughly $35,000/month (about $420,000/year) with minimal effort.
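The arithmetic is easy to sanity-check. Assuming a 10-hour weekday window (8 AM to 6 PM, Monday through Friday), so roughly 50 of the week's 168 hours:

```shell
# Savings from business-hours-only scheduling of non-prod environments.
hours_on=50          # 10 hrs/day x 5 days
hours_total=168      # hours in a week
monthly_spend=50000  # non-production compute, $/month (illustrative)

pct=$(awk -v on="$hours_on" -v t="$hours_total" \
  'BEGIN { printf "%.0f", (t - on) / t * 100 }')
saved=$(awk -v on="$hours_on" -v t="$hours_total" -v s="$monthly_spend" \
  'BEGIN { printf "%.0f", s * (t - on) / t }')

echo "Reduction: ${pct}%   Monthly savings: \$${saved}"
```

This assumes costs scale linearly with running hours, which holds for On-Demand compute but not for storage, reserved capacity, or always-on services attached to those environments.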

Automation and Governance

Manual cost optimization does not scale. Teams get busy, reviews get skipped, and waste accumulates. Build automated guardrails and reporting systems that continuously optimize costs without requiring manual intervention.

AWS Budgets

Set monthly budgets with 80% and 100% alerts. Create budgets per team using cost allocation tags. Budget alerts should go to both the spending team and a central FinOps team to ensure accountability.

Cost Anomaly Detection

Cost Anomaly Detection uses machine learning to identify unusual spending patterns. Configure monitors for each AWS service and each linked account. It automatically detects spikes like runaway Lambda functions, accidentally provisioned large instances, or unexpected data transfer charges.

cost-governance-setup.sh
# Create monthly budget with alerts
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{
    "BudgetName": "TeamPlatform-Monthly",
    "BudgetLimit": {"Amount": "15000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
    "CostFilters": {
      "TagKeyValue": ["user:Team$platform"]
    }
  }' \
  --notifications-with-subscribers '[
    {
      "Notification": {
        "NotificationType": "ACTUAL",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80
      },
      "Subscribers": [
        {"SubscriptionType": "EMAIL", "Address": "platform-team@example.com"},
        {"SubscriptionType": "SNS", "Address": "arn:aws:sns:us-east-1:123456789012:budget-alerts"}
      ]
    }
  ]'

# Enable Cost Anomaly Detection
aws ce create-anomaly-monitor \
  --anomaly-monitor '{
    "MonitorName": "ServiceMonitor",
    "MonitorType": "DIMENSIONAL",
    "MonitorDimension": "SERVICE"
  }'

aws ce create-anomaly-subscription \
  --anomaly-subscription '{
    "SubscriptionName": "DailyAlerts",
    "MonitorArnList": ["arn:aws:ce::123456789012:anomalymonitor/monitor-id"],
    "Subscribers": [
      {"Type": "EMAIL", "Address": "finops@example.com"}
    ],
    "Frequency": "DAILY",
    "ThresholdExpression": {
      "Dimensions": {
        "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
        "Values": ["100"],
        "MatchOptions": ["GREATER_THAN_OR_EQUAL"]
      }
    }
  }'

Automated Resource Cleanup

Resource Type | Cleanup Rule | Automation Method
Unattached EBS volumes | Delete after 7 days unattached | Lambda + CloudWatch Events
Old EBS snapshots | Delete snapshots older than retention period | AWS Backup or Lambda
Unused Elastic IPs | Release unassociated EIPs | AWS Config auto-remediation
Idle load balancers | Delete ALBs/NLBs with zero targets | Lambda + CloudWatch metrics
Stale AMIs | Deregister AMIs older than 90 days | Lambda scheduled cleanup
Incomplete multipart uploads | Abort after 7 days | S3 lifecycle rule

Cost Optimization Maturity Model

Organizations typically progress through stages of cost optimization maturity. Understanding where you are helps you prioritize the next set of improvements.

Stage | Characteristics | Key Actions
1. Visibility | No cost awareness, no tagging, surprise bills | Enable Cost Explorer, set budgets, implement tagging
2. Basic Optimization | Tags in place, some right-sizing, no commitments | Right-size instances, buy first Savings Plans, add VPC endpoints
3. Active Management | Regular reviews, Savings Plans in place, scheduled environments | Spot for fault-tolerant workloads, storage lifecycle, database RIs
4. Automated Governance | Automated cleanup, anomaly detection, team accountability | FinOps team, showback/chargeback, architectural reviews for cost
5. Optimized Culture | Cost is a first-class metric, teams own their costs | Cost per unit metrics, architectural cost trade-offs, continuous improvement

Key Takeaways

  • Implement tagging for cost allocation before anything else; you cannot optimize what you cannot attribute.
  • Use Compute Savings Plans for baseline coverage, starting at 70-80% of steady state.
  • Use Spot Instances for fault-tolerant workloads to save up to 90%.
  • Eliminate NAT Gateway costs with VPC endpoints for AWS services.
  • Clean up unused resources (unattached EBS volumes, old snapshots, idle load balancers) automatically.
  • Schedule non-production environments for 65%+ savings on dev/staging compute.
  • Set AWS Budgets and enable Cost Anomaly Detection for every account.
  • Remember that the biggest savings come from architectural decisions (serverless vs provisioned, managed vs self-hosted, region selection), not just instance-level optimizations.
  • Build cost awareness into your engineering culture and make it a regular part of architectural reviews.

Related: AWS Well-Architected Framework: Cost Optimization Pillar · ECS vs EKS Decision: Container Cost Optimization · Multi-Cloud Storage Cheat Sheet: Cost Comparison Across Providers

Key Takeaways

  1. Right-sizing is the highest-impact optimization. Use Compute Optimizer and Trusted Advisor.
  2. Savings Plans offer up to 72% savings with flexible commitment across EC2, Lambda, and Fargate.
  3. Reserved Instances provide up to 75% discount for predictable, steady-state workloads.
  4. Spot Instances save up to 90% for fault-tolerant, interruptible workloads.
  5. S3 lifecycle policies and Intelligent-Tiering automate storage cost optimization.
  6. Set up AWS Budgets with alerts to catch unexpected cost increases early.

Frequently Asked Questions

What is the difference between Reserved Instances and Savings Plans?
Reserved Instances commit to a specific instance type in a specific region. Savings Plans commit to a dollar amount of compute usage per hour and are more flexible, applying across instance types, regions (Compute SP), and services including Lambda and Fargate.
How do I identify wasted AWS spending?
Use AWS Cost Explorer for spending trends, Trusted Advisor for idle resources, Compute Optimizer for right-sizing, and Cost Anomaly Detection for unexpected spikes. Check for unused EBS volumes, idle load balancers, and unattached Elastic IPs.
Should I choose 1-year or 3-year commitment terms?
3-year terms offer deeper discounts (up to 75% vs 40% for 1-year) but lock you in longer. Use 3-year for stable baseline workloads. Use 1-year or no commitment for workloads likely to change. Start conservatively and increase commitment over time.
What are the best practices for S3 cost optimization?
Enable S3 Intelligent-Tiering for unknown access patterns. Set lifecycle rules to transition to Standard-IA after 30 days and Glacier after 90 days. Delete incomplete multipart uploads. Use S3 Storage Lens for visibility across buckets.
How much can I save with Spot Instances?
Spot Instances offer up to 90% savings compared to On-Demand. Average savings are 60-70% depending on instance type and region. Use Spot for batch processing, CI/CD, containers, and big data. Combine with On-Demand and Savings Plans for resilience.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.