Top 10 AWS Cost Mistakes (And How to Fix Them)
Common billing surprises from NAT Gateways, idle resources, oversized instances, and missed savings plans — with concrete fixes.
Why AWS Bills Surprise Everyone
AWS has over 200 services, and most of them have their own pricing dimensions: per-hour, per-request, per-GB, per-GB-month, per-million-API-calls, per-provisioned-IOPS, and dozens more. This granularity gives you the ability to optimize costs precisely, but it also means that costs can spiral in unexpected ways. A single misconfigured resource can add thousands of dollars to your monthly bill before anyone notices.
After working with hundreds of AWS accounts across startups and enterprises, certain patterns emerge repeatedly. These are the ten most common cost mistakes teams make, along with the concrete steps to fix each one. Most of them can be addressed in an afternoon, and collectively they often reduce AWS spending by 30 to 50 percent.
1. Leaving NAT Gateways Running 24/7
NAT Gateways are one of the most expensive networking components in AWS, and they catch teams off guard because the costs are split across two dimensions. Each NAT Gateway costs approximately $0.045 per hour, about $32 per month just to exist. On top of that, every gigabyte of data processed through the NAT Gateway costs another $0.045. For a moderately busy application pushing 1 TB of outbound traffic per month, a single NAT Gateway costs around $77 per month. Deploy one per availability zone across three AZs handling similar traffic, and you are paying roughly $231 per month before the compute workload itself enters the bill.
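The arithmetic above can be sketched as a quick calculation. The rates are the ones quoted in this section for a 30-day month; verify current pricing for your region before relying on the numbers.

```python
# Sketch of the NAT Gateway cost math; rates are the figures quoted
# in this section and may differ by region or change over time.
HOURLY_RATE = 0.045       # $ per NAT Gateway hour
PER_GB_RATE = 0.045       # $ per GB processed
HOURS_PER_MONTH = 720     # 30-day month, matching the ~$32 idle figure

def nat_gateway_monthly_cost(gb_processed: float, gateways: int = 1) -> float:
    """Monthly cost: fixed hourly charge per gateway plus data-processing charge."""
    idle = HOURLY_RATE * HOURS_PER_MONTH      # ~$32.40 per gateway, traffic or not
    traffic = PER_GB_RATE * gb_processed      # $0.045 per GB processed
    return gateways * idle + traffic

# One gateway handling ~1,000 GB of outbound traffic per month:
print(round(nat_gateway_monthly_cost(1000), 2))   # ≈ 77.4
```

Note that the idle charge accrues per gateway, which is why a three-AZ deployment roughly triples the bill even before traffic grows.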
The fix depends on your traffic patterns. For development and staging environments, consider using a single NAT Gateway in one AZ rather than one per AZ. For workloads that primarily access AWS services, use VPC endpoints instead. Gateway endpoints for S3 and DynamoDB are free, and interface endpoints for other services cost far less than routing traffic through a NAT Gateway. For batch workloads that run on a schedule, consider placing them in public subnets with security groups rather than private subnets with NAT Gateways.
2. Oversized EC2 Instances
The default behavior in most organizations is to provision instances that are larger than needed. Developers pick an instance type during initial setup, the application works, and nobody revisits the sizing decision. AWS Compute Optimizer analyzes your instance utilization metrics and recommends right-sized instance types, but many teams either ignore these recommendations or are not aware the tool exists.
Start by enabling Compute Optimizer across all your accounts. Look for instances where average CPU utilization is below 20 percent and maximum utilization never exceeds 60 percent. These instances are almost certainly oversized. Downsizing from an m6i.xlarge (4 vCPUs, 16 GB RAM, about $139/month on-demand) to an m6i.large (2 vCPUs, 8 GB RAM, about $70/month) saves 50 percent with no architectural changes. For burstable workloads like development servers, bastion hosts, and small web applications, T3 or T4g instances with unlimited credits often cost 60 to 70 percent less than fixed-performance instances.
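The screening thresholds above can be expressed as a simple filter. This is an illustrative sketch over hypothetical instance records; in practice you would pull the CPU metrics from CloudWatch or read them out of Compute Optimizer's recommendations.

```python
# Hedged sketch: flag oversized instances using the thresholds above
# (average CPU below 20%, maximum CPU never above 60%). The fleet data
# here is invented for illustration.
def rightsizing_candidates(instances):
    """Return instance IDs whose CPU metrics suggest they are oversized."""
    return [
        i["id"]
        for i in instances
        if i["avg_cpu"] < 20.0 and i["max_cpu"] < 60.0
    ]

fleet = [
    {"id": "i-0aaa", "avg_cpu": 8.5,  "max_cpu": 41.0},   # oversized
    {"id": "i-0bbb", "avg_cpu": 55.0, "max_cpu": 92.0},   # sized correctly
    {"id": "i-0ccc", "avg_cpu": 12.0, "max_cpu": 75.0},   # spiky: leave alone
]
print(rightsizing_candidates(fleet))   # ['i-0aaa']
```

The max-CPU condition matters: an instance that idles on average but spikes hard (like `i-0ccc`) may be sized correctly for its bursts.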
3. Ignoring Savings Plans and Reserved Instances
On-demand pricing is the most expensive way to run workloads on AWS. If you have any workloads that run continuously, a Compute Savings Plan can reduce costs by 30 to 66 percent depending on the commitment term and payment option. Unlike the older Reserved Instances model, Compute Savings Plans are flexible: they apply automatically to any EC2 instance family, Lambda, and Fargate usage in any region.
The mistake most teams make is all-or-nothing thinking. You do not need to commit 100 percent of your compute spend to Savings Plans. Start by looking at your minimum baseline usage over the past six months, the compute you run every single day. Commit to 60 to 70 percent of that baseline with a Savings Plan, and let the remainder run on-demand. This gives you significant savings with low risk. You can layer additional coverage over time as you gain confidence in your usage patterns.
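The baseline-commitment approach above can be sketched as follows. The 65 percent fraction and the spend history are illustrative assumptions; Savings Plans commitments are expressed as a dollars-per-hour figure.

```python
# Sketch of the partial-commitment strategy above: commit to ~65% of the
# minimum daily compute spend observed over the lookback window.
# The history values below are invented for illustration.
def savings_plan_commitment(daily_spend: list[float], fraction: float = 0.65) -> float:
    """Hourly commitment covering `fraction` of the observed daily baseline."""
    baseline_per_day = min(daily_spend)   # spend you incur every single day
    return round(baseline_per_day * fraction / 24, 2)   # commitments are per hour

# Six illustrative months of minimum daily compute spend (baseline = $240/day):
history = [312.0, 240.0, 280.5, 265.0, 301.0, 255.0]
print(savings_plan_commitment(history))   # 6.5
```

Committing below the baseline means the plan stays fully utilized even if usage dips, which is what keeps the risk low.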
4. Unattached EBS Volumes and Old Snapshots
When you terminate an EC2 instance, its root EBS volume is usually deleted automatically. But additional volumes attached to the instance often persist, continuing to accrue storage charges at $0.08 per GB-month for gp3 (more for older gp2 and provisioned-IOPS volume types). Over time, these orphaned volumes accumulate silently. Similarly, EBS snapshots taken for backup purposes are often never cleaned up. Each snapshot is charged at $0.05 per GB-month, and they add up quickly across many accounts.
Audit your EBS volumes regularly. Filter for volumes with a state of "available," which means they are not attached to any instance. Unless they contain data you specifically need to retain, delete them. For snapshots, implement a lifecycle policy using AWS Data Lifecycle Manager to automatically delete snapshots older than your retention period. A common pattern is keeping daily snapshots for seven days, weekly snapshots for four weeks, and monthly snapshots for one year.
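The audit logic above can be sketched as two small filters. The records here are simplified, illustrative versions of what the EC2 DescribeVolumes and DescribeSnapshots APIs return; field names are trimmed for readability.

```python
from datetime import datetime, timedelta, timezone

# Hedged sketch of the EBS audit above, run against sample records
# shaped loosely like EC2 API responses (simplified for illustration).
def orphaned_volumes(volumes):
    """Volumes in the 'available' state are attached to nothing."""
    return [v["VolumeId"] for v in volumes if v["State"] == "available"]

def stale_snapshots(snapshots, retention_days=30, now=None):
    """Snapshots older than the retention period are deletion candidates."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
vols = [{"VolumeId": "vol-1", "State": "in-use"},
        {"VolumeId": "vol-2", "State": "available"}]
snaps = [{"SnapshotId": "snap-1", "StartTime": datetime(2025, 1, 1, tzinfo=timezone.utc)},
         {"SnapshotId": "snap-2", "StartTime": datetime(2025, 5, 20, tzinfo=timezone.utc)}]
print(orphaned_volumes(vols), stale_snapshots(snaps, now=now))
```

In practice you would let Data Lifecycle Manager enforce the snapshot retention policy rather than deleting by hand; the filter above is only the decision rule.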
5. Data Transfer Costs Between AZs and Regions
Data transfer within a single availability zone is free, but data transfer between AZs costs $0.01 per GB in each direction, billed on both the sending and receiving sides, so effectively $0.02 per GB transferred. This seems trivial until you have microservices communicating across AZs at high volume. A service that sends 10 TB per month between AZs generates $200 in data transfer charges alone.
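The inter-AZ arithmetic can be sketched directly, using the rate quoted above ($0.01 per GB in each direction, $0.02 per GB in total):

```python
# Sketch of the cross-AZ transfer cost above. The rate is the figure
# quoted in this section; verify current pricing for your region.
CROSS_AZ_RATE_PER_GB = 0.02   # $0.01 out + $0.01 in, per GB transferred

def cross_az_monthly_cost(tb_per_month: float) -> float:
    """Monthly cross-AZ transfer charge, using decimal TB as in the example."""
    gb = tb_per_month * 1000
    return gb * CROSS_AZ_RATE_PER_GB

print(cross_az_monthly_cost(10))
```
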
Cross-region data transfer is even more expensive, ranging from $0.02 to $0.08 per GB depending on the regions involved. If you replicate databases or synchronize data across regions for disaster recovery, these costs can become substantial. Consider whether you truly need cross-region replication for all data, or whether replicating only critical datasets would suffice. For inter-AZ traffic, co-locate services that communicate frequently in the same AZ, and use caching layers to reduce the volume of cross-AZ calls.
6. Paying for Idle Load Balancers
Application Load Balancers (ALBs) carry a baseline charge of about $0.0225 per hour, roughly $16.50 per month even with zero traffic (capacity-unit charges for actual traffic come on top of that). Many teams create load balancers for development environments, feature branches, or temporary testing and never clean them up. Ten idle ALBs cost $165 per month for doing nothing.
Audit your load balancers monthly and delete any that are not actively receiving traffic. For development environments, consider sharing a single ALB with path-based or host-based routing rules rather than creating one ALB per application. If you use infrastructure-as-code, make sure your teardown process includes load balancer deletion.
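The monthly audit above can be sketched as a filter over per-ALB request counts. The fleet data is invented for illustration; in practice you would read the ALB `RequestCount` metric from CloudWatch over the last 30 days.

```python
# Hedged sketch: flag load balancers with no recent traffic, using
# illustrative 30-day request counts.
ALB_MONTHLY_BASE = 0.0225 * 730   # baseline hourly charge over a ~730-hour month

def idle_albs(albs, threshold=0):
    """Names of ALBs whose 30-day request count is at or below the threshold."""
    return [a["name"] for a in albs if a["requests_30d"] <= threshold]

fleet = [{"name": "prod-web", "requests_30d": 9_400_000},
         {"name": "feature-x-test", "requests_30d": 0},
         {"name": "old-staging", "requests_30d": 0}]
candidates = idle_albs(fleet)
print(candidates, round(len(candidates) * ALB_MONTHLY_BASE, 2))
```

A small nonzero threshold is often more useful than zero, since health checks alone can generate a trickle of requests on an otherwise dead load balancer.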
7. CloudWatch Logs Retention Set to Never Expire
The default retention setting for CloudWatch Log Groups is "Never expire," meaning logs accumulate indefinitely. CloudWatch Logs storage costs $0.03 per GB per month. For a busy application generating 100 GB of logs per month, that is $3 in the first month, $6 in the second, $9 in the third, and so on. After a year, you are paying $36 per month for log storage, and most of those logs are never queried.
Set explicit retention periods on all log groups. For most applications, 30 days of recent logs in CloudWatch is sufficient for debugging. If you need longer retention for compliance, export logs to S3 (via an export task or a Kinesis Data Firehose subscription filter), where storage costs $0.023 per GB-month for Standard or $0.0036 per GB-month for Glacier Flexible Retrieval. This single change can reduce CloudWatch costs by 80 to 90 percent.
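The growth curve described above is worth making concrete: with "Never expire," the storage bill in any given month is proportional to every month of logs accumulated so far.

```python
# Sketch of the "never expire" growth curve above: 100 GB/month of new
# logs at $0.03 per GB-month, with nothing ever deleted. Rates are the
# figures quoted in this section.
def cloudwatch_storage_cost(month: int, gb_per_month: float = 100, rate: float = 0.03) -> float:
    """Storage bill in the given month (1-indexed) if logs never expire."""
    return month * gb_per_month * rate

print([cloudwatch_storage_cost(m) for m in (1, 2, 3, 12)])   # [3.0, 6.0, 9.0, 36.0]
```

With a 30-day retention policy, the same workload would stay pinned at roughly the month-one cost indefinitely.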
8. Not Using S3 Intelligent-Tiering
Many teams store all S3 objects in the Standard storage class by default and never configure lifecycle policies. S3 Intelligent-Tiering automatically moves objects between access tiers based on usage patterns, with no retrieval charges and no operational overhead. Objects that are not accessed for 30 days move to Infrequent Access, saving 40 percent. After 90 days without access, they move to Archive Instant Access, saving 68 percent.
For buckets where access patterns are unpredictable, enable Intelligent-Tiering as the default storage class. There is a small monitoring fee of $0.0025 per 1,000 objects, but the savings typically outweigh this cost by an order of magnitude. For buckets where you know objects will rarely be accessed after creation, such as log archives or backup data, configure lifecycle rules to transition directly to Glacier or Glacier Deep Archive after a set period.
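The economics above can be sketched with the tier discounts quoted in this section (40 percent for Infrequent Access, 68 percent for Archive Instant Access) against Standard at about $0.023 per GB-month. The tier distribution and object count below are invented for illustration.

```python
# Hedged sketch of the Intelligent-Tiering trade-off above: tier savings
# versus the per-object monitoring fee. All rates are the figures quoted
# in this section; the data distribution is hypothetical.
STANDARD_RATE = 0.023                 # $ per GB-month, S3 Standard
MONITORING_PER_1000_OBJECTS = 0.0025  # $ per month

def intelligent_tiering_cost(gb_by_tier, object_count):
    """Monthly cost for data spread across the automatic access tiers."""
    rates = {"frequent": STANDARD_RATE,                   # same as Standard
             "infrequent": STANDARD_RATE * (1 - 0.40),    # 40% discount
             "archive_instant": STANDARD_RATE * (1 - 0.68)}  # 68% discount
    storage = sum(rates[tier] * gb for tier, gb in gb_by_tier.items())
    monitoring = object_count / 1000 * MONITORING_PER_1000_OBJECTS
    return round(storage + monitoring, 2)

# 10 TB of mostly cold data versus the same data left entirely in Standard:
tiered = intelligent_tiering_cost(
    {"frequent": 1000, "infrequent": 3000, "archive_instant": 6000}, 1_000_000)
standard = round(STANDARD_RATE * 10_000, 2)
print(tiered, standard)
```

Even with a million objects, the monitoring fee ($2.50/month here) is dwarfed by the storage savings, which is the "order of magnitude" point made above.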
9. Running RDS Multi-AZ in Development
RDS Multi-AZ deployments maintain a synchronous standby replica in a different availability zone for high availability. This effectively doubles your database cost. In production, Multi-AZ is almost always the right choice because it provides automated failover with minimal downtime. But in development, staging, and testing environments, high availability is rarely necessary.
Use Single-AZ deployments for all non-production databases. If your infrastructure-as-code templates default to Multi-AZ, add an environment variable or parameter that sets the deployment type based on the environment. Also review RDS instance sizing in non-production environments. A db.t4g.medium instance at around $50 per month is sufficient for most development databases, compared to a db.r6g.xlarge at $350 per month that might be copied from the production configuration.
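The savings from the two changes above compound, since Multi-AZ roughly doubles whatever instance cost you start from. A sketch using the approximate prices quoted in this section:

```python
# Sketch of the non-production RDS savings above. Prices are the rough
# monthly figures quoted in this section, not authoritative rates.
def rds_monthly_cost(instance_monthly: float, multi_az: bool) -> float:
    """Multi-AZ maintains a synchronous standby, roughly doubling the cost."""
    return instance_monthly * (2 if multi_az else 1)

copied_from_prod = rds_monthly_cost(350, multi_az=True)   # db.r6g.xlarge, Multi-AZ
right_sized_dev = rds_monthly_cost(50, multi_az=False)    # db.t4g.medium, Single-AZ
print(copied_from_prod, right_sized_dev)   # 700 50
```

A dev database copied straight from the production template can thus cost an order of magnitude more than one sized for its actual job.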
10. Forgetting About Elastic IP Charges
Historically, Elastic IPs were free while attached to a running EC2 instance, and an unattached Elastic IP cost $0.005 per hour, about $3.60 per month. Since February 2024, AWS charges $0.005 per hour for every public IPv4 address, including Elastic IPs that are attached, so the old "free while in use" assumption no longer holds, and idle allocations are pure waste. When instances are stopped but Elastic IPs remain allocated, the charges continue. Across a large organization with many accounts, dozens of orphaned Elastic IPs can accumulate.
Release any Elastic IPs that are not attached to running instances. If you need static IP addresses for allow-listing or DNS records, consider using Global Accelerator static IPs or Network Load Balancer static IPs, which do not have the same orphaning problem. Run a periodic audit across all regions and accounts to catch unattached EIPs.
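The periodic audit above can be sketched as a filter over address records. These are simplified, illustrative versions of what the EC2 DescribeAddresses API returns: an address with no association is allocated but attached to nothing.

```python
# Hedged sketch of the EIP audit above, over records shaped loosely like
# the EC2 DescribeAddresses response (simplified for illustration).
UNATTACHED_EIP_MONTHLY = 0.005 * 720   # ~$3.60 at the quoted rate, 30-day month

def unattached_eips(addresses):
    """Public IPs that are allocated but not associated with anything."""
    return [a["PublicIp"] for a in addresses if "AssociationId" not in a]

addresses = [
    {"PublicIp": "203.0.113.10", "AssociationId": "eipassoc-1"},
    {"PublicIp": "203.0.113.11"},                 # orphaned
    {"PublicIp": "203.0.113.12"},                 # orphaned
]
orphans = unattached_eips(addresses)
print(orphans, round(len(orphans) * UNATTACHED_EIP_MONTHLY, 2))   # ... 7.2
```

Remember to run the check in every region you operate in, since Elastic IPs are allocated per region and the forgotten ones tend to live in regions nobody looks at.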
Building a Cost-Conscious Culture
Fixing these ten issues is a one-time effort, but keeping costs under control requires ongoing discipline. Enable AWS Cost Anomaly Detection to get alerts when spending patterns change unexpectedly. Set up AWS Budgets with threshold alerts at 50, 80, and 100 percent of your expected monthly spend. Tag all resources consistently so you can attribute costs to teams, projects, and environments. And most importantly, make cost visibility part of your engineering culture. When every engineer can see the cost impact of their architecture decisions, they naturally make more cost-effective choices.
Quick wins
Start with three actions today: enable Compute Optimizer, set CloudWatch log retention to 30 days on all log groups, and delete unattached EBS volumes. These three changes alone typically save 15 to 25 percent on AWS bills with minimal effort and zero risk.
Written by Jeff Monfield
Cloud architect and founder of CloudToolStack. Building free tools and writing practical guides to help engineers navigate AWS, Azure, GCP, and OCI.
Disclaimer: This article is for informational purposes. Cloud services and pricing change frequently; always verify with official provider documentation. AWS, Azure, GCP, and OCI are trademarks of their respective owners.