S3 Storage Classes Explained
Understand S3 storage classes, when to use each, and how to optimize costs with lifecycle rules.
Prerequisites
- AWS account with S3 access
- Basic understanding of object storage
Introduction to S3 Storage Classes
Amazon S3 offers multiple storage classes designed for different data access patterns and cost requirements. Choosing the right storage class can reduce your S3 costs by up to 95% compared to storing everything in S3 Standard. The key is understanding your data access frequency, retrieval time requirements, and durability needs for each dataset in your environment.
All S3 storage classes provide 99.999999999% (11 nines) of durability, meaning your data is incredibly safe regardless of which class you choose. AWS achieves this by automatically replicating objects across a minimum of three Availability Zones (with the exception of One Zone-IA, which uses a single AZ). The differences between storage classes lie in availability SLAs, retrieval costs, minimum storage durations, and per-GB pricing.
This guide provides a comprehensive comparison of every S3 storage class, explains when to use each one, and shows you how to implement lifecycle policies and Intelligent-Tiering to automate cost optimization. Whether you are managing application assets, data lake files, compliance archives, or backup repositories, understanding S3 storage classes is essential for controlling your AWS bill.
S3 Is Often the Largest Storage Cost
For many organizations, S3 is the single largest line item in their AWS storage bill. Data accumulates over time, and without lifecycle policies, every byte remains in S3 Standard indefinitely. A company storing 100 TB in S3 Standard pays approximately $2,300/month. Moving 80% of that data to Glacier Deep Archive reduces the cost to roughly $540/month, a 77% savings with no impact on actively used data.
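The arithmetic behind that claim is straightforward. A quick sketch using the per-GB prices listed in the comparison table below (request and retrieval fees excluded for simplicity):

```python
# Sanity-check the 100 TB example (us-east-1 list prices)
STANDARD = 0.023        # $/GB/month
DEEP_ARCHIVE = 0.00099  # $/GB/month

total_gb = 100_000  # 100 TB (decimal)
all_standard = total_gb * STANDARD
mixed = total_gb * 0.2 * STANDARD + total_gb * 0.8 * DEEP_ARCHIVE
savings_pct = (all_standard - mixed) / all_standard * 100

print(f"All in Standard:     ${all_standard:,.0f}/month")
print(f"80% in Deep Archive: ${mixed:,.0f}/month ({savings_pct:.0f}% savings)")
```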
Storage Class Comparison
The following table compares all S3 storage classes across key dimensions to help you make the right choice for each workload. Prices are for the us-east-1 region and may vary slightly in other regions.
| Storage Class | Storage Cost (per GB/mo) | Retrieval Fee | Min Duration | Min Object Size | Designed Availability |
|---|---|---|---|---|---|
| S3 Standard | $0.023 | None | None | None | 99.99% |
| S3 Intelligent-Tiering | $0.023 (frequent) / $0.0125 (infrequent) | None | None | None | 99.9% |
| S3 Standard-IA | $0.0125 | $0.01/GB | 30 days | 128 KB | 99.9% |
| S3 One Zone-IA | $0.01 | $0.01/GB | 30 days | 128 KB | 99.5% |
| S3 Glacier Instant Retrieval | $0.004 | $0.03/GB | 90 days | 128 KB | 99.9% |
| S3 Glacier Flexible Retrieval | $0.0036 | $0.01/GB + time-based | 90 days | 40 KB | 99.99% |
| S3 Glacier Deep Archive | $0.00099 | $0.02/GB + 12-48hrs | 180 days | 40 KB | 99.99% |
Minimum Storage Duration Charges
If you delete or transition an object before the minimum storage duration, you are still charged for the full minimum. For example, deleting a Glacier Deep Archive object after 30 days means you still pay for 180 days of storage. This makes it critical to choose the right storage class upfront and avoid unnecessary transitions for short-lived objects. Standard and Intelligent-Tiering have no minimum duration, making them safe choices for data with unpredictable lifetimes.
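The billing rule is simple: you pay for the greater of the object's actual lifetime and the class minimum. A minimal sketch:

```python
# Early-deletion charge: storage is billed for at least the class minimum,
# regardless of when the object is deleted or transitioned.
MIN_DURATION_DAYS = {
    "STANDARD": 0,
    "INTELLIGENT_TIERING": 0,
    "STANDARD_IA": 30,
    "ONEZONE_IA": 30,
    "GLACIER_IR": 90,
    "GLACIER": 90,          # Glacier Flexible Retrieval
    "DEEP_ARCHIVE": 180,
}

def billed_days(storage_class: str, actual_days: int) -> int:
    """Days of storage charged for an object kept for actual_days."""
    return max(actual_days, MIN_DURATION_DAYS[storage_class])

# Deleting a Deep Archive object after 30 days still bills 180 days:
print(billed_days("DEEP_ARCHIVE", 30))   # 180
print(billed_days("STANDARD", 30))       # 30
```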
S3 Standard: The Default Tier
S3 Standard is the default storage class and the best choice for frequently accessed data with unpredictable access patterns. It provides low latency (typically single-digit millisecond access), no retrieval fees, and no minimum storage duration. Use S3 Standard for any data that is actively being accessed or processed.
Ideal Use Cases for S3 Standard
- Application assets: Images, CSS, JavaScript, and other files served to users through CloudFront
- Recently uploaded content: User uploads, incoming data feeds, and newly ingested files before access patterns are established
- Active data processing: Input and output files for ETL pipelines, machine learning training data, and analytics workloads
- Frequently accessed logs: Application logs, access logs, and audit trails from the past 7-30 days
- Data lake hot tier: The most frequently queried partition of your data lake (typically the most recent data)
Request Pricing Matters
While S3 Standard has no retrieval fees, request costs can add up for workloads with high request volumes. PUT, COPY, POST, and LIST requests cost $0.005 per 1,000 requests. GET and SELECT requests cost $0.0004 per 1,000 requests. For data lakes with millions of small objects, request costs can exceed storage costs.
# Check S3 request metrics in CloudWatch
aws cloudwatch get-metric-statistics \
--namespace AWS/S3 \
--metric-name AllRequests \
--dimensions Name=BucketName,Value=my-data-bucket \
Name=FilterId,Value=EntireBucket \
--start-time 2024-01-01T00:00:00Z \
--end-time 2024-01-31T23:59:59Z \
--period 86400 \
--statistics Sum \
--output table
# Enable S3 Storage Lens for comprehensive analytics
aws s3control put-storage-lens-configuration \
--account-id 123456789012 \
--config-id organization-lens \
--storage-lens-configuration '{
"Id": "organization-lens",
"IsEnabled": true,
"AccountLevel": {
"BucketLevel": {
"ActivityMetrics": {"IsEnabled": true},
"PrefixLevel": {
"StorageMetrics": {
"IsEnabled": true,
"SelectionCriteria": {
"MaxDepth": 3,
"MinStorageBytesPercentage": 1.0
}
}
}
}
}
}'
S3 Intelligent-Tiering: The Automated Optimizer
S3 Intelligent-Tiering deserves special attention because it eliminates the guesswork in choosing storage classes. It automatically moves objects between tiers based on access patterns, with no retrieval fees and no performance impact. For data with unknown or changing access patterns, Intelligent-Tiering is often the best choice because it guarantees you are never overpaying for storage.
Intelligent-Tiering has a small monthly monitoring and automation fee of $0.0025 per 1,000 objects. For objects larger than 128 KB, this is almost always offset by the savings from automatic tiering. For buckets with millions of small objects, calculate whether the monitoring fee exceeds the potential savings.
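A rough back-of-envelope for that trade-off, assuming an object spends the whole month in the Infrequent Access tier (prices from the comparison table above):

```python
# Is the monitoring fee worth it? Compare the per-object fee against the
# per-object savings of sitting in the Infrequent Access tier for a month.
MONITORING_FEE = 0.0025 / 1000   # $/object/month
FREQUENT = 0.023                 # $/GB/month (same as Standard)
INFREQUENT = 0.0125              # $/GB/month

def net_monthly_savings(object_size_kb: float) -> float:
    """Net $/object/month, assuming the object is in the Infrequent tier."""
    size_gb = object_size_kb / (1024 * 1024)
    return (FREQUENT - INFREQUENT) * size_gb - MONITORING_FEE

# Objects below this size cost more to monitor than the tiering saves
break_even_kb = MONITORING_FEE / (FREQUENT - INFREQUENT) * 1024 * 1024
print(f"Break-even object size: ~{break_even_kb:.0f} KB")
```

By this rough estimate the fee only pays for itself above roughly 250 KB per object, which is consistent with AWS excluding objects smaller than 128 KB from monitoring and automatic tiering.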
Intelligent-Tiering Access Tiers
| Tier | Activation | Access Trigger | Approximate Savings |
|---|---|---|---|
| Frequent Access | Automatic (default) | Object accessed recently | 0% (same as Standard) |
| Infrequent Access | Automatic | Not accessed for 30 days | ~40% |
| Archive Instant Access | Automatic | Not accessed for 90 days | ~68% |
| Archive Access | Opt-in configuration | Not accessed for 90+ days | ~71% |
| Deep Archive Access | Opt-in configuration | Not accessed for 180+ days | ~95% |
The Archive Instant Access tier provides millisecond retrieval at the same performance as S3 Standard. The Archive Access and Deep Archive Access tiers require opt-in and have longer retrieval times (minutes to hours), similar to Glacier. Enable these optional tiers only if you are confident that infrequently accessed data can tolerate retrieval delays.
# Enable archive tiers for Intelligent-Tiering
aws s3api put-bucket-intelligent-tiering-configuration \
--bucket my-data-lake-bucket \
--id ArchiveConfig \
--intelligent-tiering-configuration '{
"Id": "ArchiveConfig",
"Status": "Enabled",
"Tierings": [
{
"AccessTier": "ARCHIVE_ACCESS",
"Days": 90
},
{
"AccessTier": "DEEP_ARCHIVE_ACCESS",
"Days": 180
}
]
}'
# Verify the configuration
aws s3api get-bucket-intelligent-tiering-configuration \
--bucket my-data-lake-bucket \
--id ArchiveConfig
Intelligent-Tiering for Data Lakes
If you operate a data lake with unpredictable query patterns, Intelligent-Tiering is the ideal default storage class. Set it as the default for all uploads, enable the archive access tiers, and let AWS automatically optimize costs. Unlike lifecycle policies that move data based on age alone, Intelligent-Tiering responds to actual usage, ensuring frequently queried old data stays in the fast tier while unused recent data moves to cheaper tiers.
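To make Intelligent-Tiering the landing class for new uploads, specify it at write time. A sketch using the AWS CLI (bucket and prefix names are illustrative):

```shell
# Upload new data directly into Intelligent-Tiering
aws s3 cp ./incoming/ s3://my-data-lake-bucket/raw/ \
  --recursive \
  --storage-class INTELLIGENT_TIERING

# Move existing objects by copying in place with a new storage class
aws s3 cp s3://my-data-lake-bucket/raw/ s3://my-data-lake-bucket/raw/ \
  --recursive \
  --storage-class INTELLIGENT_TIERING
```

Copy-in-place creates new objects and incurs PUT request charges, so for large datasets a lifecycle transition rule is usually the cheaper way to move existing data.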
Infrequent Access Tiers
The Infrequent Access (IA) tiers, S3 Standard-IA and S3 One Zone-IA, offer reduced storage costs in exchange for a per-GB retrieval fee. They are designed for data accessed less than once per month but requiring millisecond retrieval when needed. The storage discount is approximately 45% compared to S3 Standard, but retrieval adds $0.01/GB each time the data is read.
Standard-IA vs One Zone-IA
| Feature | Standard-IA | One Zone-IA |
|---|---|---|
| Storage cost | $0.0125/GB/month | $0.01/GB/month (20% cheaper) |
| Availability zones | 3+ AZs | 1 AZ only |
| Designed availability | 99.9% | 99.5% |
| Durability | 99.999999999% | 99.999999999% (within the AZ) |
| AZ destruction risk | Protected (multi-AZ) | Data lost if AZ is destroyed |
| Best for | DR copies, backups, older logs | Recreatable data, thumbnails, transcoded media |
One Zone-IA Data Loss Risk
S3 One Zone-IA stores data in a single Availability Zone. If that AZ is destroyed (physical disaster), your data is lost. Only use One Zone-IA for data that can be recreated from another source, such as thumbnails generated from originals, transcoded media, or cross-region replicas where the primary copy exists in another region. Never store your only copy of critical data in One Zone-IA.
Break-Even Analysis
Standard-IA saves money only if retrieval costs do not exceed the storage savings. For 1 GB stored for 30 days, S3 Standard costs $0.023 and Standard-IA costs $0.0125 plus $0.01 per retrieval. The break-even point is approximately 1.05 retrievals per month. If you access an object more than once per month on average, S3 Standard is cheaper. If you access it less than once per month, Standard-IA saves money.
# S3 Standard vs Standard-IA break-even analysis
# When does IA become more expensive than Standard?
standard_storage_per_gb = 0.023 # per month
ia_storage_per_gb = 0.0125 # per month
ia_retrieval_per_gb = 0.01 # per retrieval
# Monthly cost comparison for 100 GB
data_size_gb = 100
for retrievals_per_month in [0.1, 0.5, 1.0, 2.0, 5.0]:
standard_cost = data_size_gb * standard_storage_per_gb
ia_cost = (data_size_gb * ia_storage_per_gb) + \
(data_size_gb * ia_retrieval_per_gb * retrievals_per_month)
savings = standard_cost - ia_cost
print(f"Retrievals/month: {retrievals_per_month:.1f} | "
f"Standard: ${standard_cost:.2f} | "
f"IA: ${ia_cost:.2f} | "
f"Savings: ${savings:.2f} ({'IA wins' if savings > 0 else 'Standard wins'})")
# Output:
# Retrievals/month: 0.1 | Standard: $2.30 | IA: $1.35 | Savings: $0.95 (IA wins)
# Retrievals/month: 0.5 | Standard: $2.30 | IA: $1.75 | Savings: $0.55 (IA wins)
# Retrievals/month: 1.0 | Standard: $2.30 | IA: $2.25 | Savings: $0.05 (IA wins)
# Retrievals/month: 2.0 | Standard: $2.30 | IA: $3.25 | Savings: $-0.95 (Standard wins)
# Retrievals/month: 5.0 | Standard: $2.30 | IA: $6.25 | Savings: $-3.95 (Standard wins)
Glacier Storage Classes: Long-Term Archives
The S3 Glacier family provides deep discounts for archive and long-term storage. There are three Glacier storage classes, each optimized for different retrieval speed and cost requirements. Glacier is not a separate service. Since 2021, all Glacier storage classes are managed directly through the S3 API, making lifecycle transitions seamless.
Glacier Instant Retrieval
Glacier Instant Retrieval provides the lowest cost for data accessed approximately once per quarter with millisecond retrieval (same performance as S3 Standard). At $0.004/GB/month, it is 82% cheaper than S3 Standard. The trade-off is a higher retrieval fee ($0.03/GB) and a 90-day minimum storage duration. This class is ideal for medical images, media archives, regulatory documents, and any data that is rarely accessed but must be available immediately when needed.
Glacier Flexible Retrieval
Glacier Flexible Retrieval (formerly S3 Glacier) offers three retrieval speeds:
- Expedited: 1-5 minutes ($0.03/GB + $10 per 1,000 requests)
- Standard: 3-5 hours ($0.01/GB + $0.05 per 1,000 requests)
- Bulk: 5-12 hours ($0.0025/GB, free requests)
Use Glacier Flexible when you can tolerate retrieval delays of minutes to hours. Common use cases include disaster recovery backups that you hope to never restore, historical data for occasional analysis, and tape replacement for organizations migrating from physical tape libraries.
Glacier Deep Archive
Glacier Deep Archive is the cheapest storage in AWS at under $1/TB/month ($0.00099/GB). It is designed for data retained for 7-10+ years for compliance or regulatory reasons. Retrieval takes 12 hours for standard and 48 hours for bulk. Use it for compliance archives (financial records, healthcare data), long-term backups that serve as a last-resort recovery mechanism, and any data you are required to retain but almost never access.
| Glacier Class | Retrieval Speed | Storage Cost | Best For |
|---|---|---|---|
| Glacier Instant | Milliseconds | $0.004/GB/mo | Quarterly access, instant retrieval needed |
| Glacier Flexible | 1 min to 12 hours | $0.0036/GB/mo | DR backups, historical data, tape replacement |
| Glacier Deep Archive | 12-48 hours | $0.00099/GB/mo | Compliance archives, 7-10+ year retention |
Glacier Restore Process
Restoring objects from Glacier Flexible or Deep Archive is a two-step process. First, you initiate a restore request, which creates a temporary copy in S3 Standard. Then, once the restore completes (based on the retrieval tier), you can access the temporary copy for a specified number of days. After the restore period expires, the temporary copy is deleted and you must initiate another restore to access the data again.
# Initiate a restore from Glacier Deep Archive (standard tier, 12 hours)
aws s3api restore-object \
--bucket my-archive-bucket \
--key compliance/2018/financial-records.zip \
--restore-request '{
"Days": 7,
"GlacierJobParameters": {
"Tier": "Standard"
}
}'
# Check restore status
aws s3api head-object \
--bucket my-archive-bucket \
--key compliance/2018/financial-records.zip \
--query 'Restore'
# Bulk restore all objects in a prefix (cost-effective for large restores)
aws s3api list-objects-v2 \
--bucket my-archive-bucket \
--prefix compliance/2018/ \
--query 'Contents[].Key' \
--output text | while read key; do
aws s3api restore-object \
--bucket my-archive-bucket \
--key "$key" \
--restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'
done
Lifecycle Policies: Automated Cost Optimization
S3 Lifecycle policies automatically transition objects between storage classes or delete them based on age. This is the primary mechanism for optimizing storage costs without manual intervention. Every S3 bucket with data that ages out of active use should have a lifecycle policy configured from day one.
Lifecycle Policy Design
A well-designed lifecycle policy mirrors your data's lifecycle: actively used data stays in Standard, aging data transitions to cheaper tiers, and expired data is automatically deleted. You can apply different rules to different prefixes or tags to handle diverse data types within the same bucket.
{
"Rules": [
{
"ID": "OptimizeLogStorage",
"Status": "Enabled",
"Filter": {
"Prefix": "logs/"
},
"Transitions": [
{
"Days": 30,
"StorageClass": "STANDARD_IA"
},
{
"Days": 90,
"StorageClass": "GLACIER_IR"
},
{
"Days": 365,
"StorageClass": "DEEP_ARCHIVE"
}
],
"Expiration": {
"Days": 2555
}
},
{
"ID": "CleanupIncompleteUploads",
"Status": "Enabled",
"Filter": {},
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
},
{
"ID": "TransitionDataLake",
"Status": "Enabled",
"Filter": {
"Prefix": "data-lake/"
},
"Transitions": [
{
"Days": 90,
"StorageClass": "INTELLIGENT_TIERING"
}
]
},
{
"ID": "ExpireVersions",
"Status": "Enabled",
"Filter": {},
"NoncurrentVersionTransitions": [
{
"NoncurrentDays": 30,
"StorageClass": "STANDARD_IA"
},
{
"NoncurrentDays": 90,
"StorageClass": "GLACIER_IR"
}
],
"NoncurrentVersionExpiration": {
"NoncurrentDays": 365
}
}
]
}
Always Abort Incomplete Multipart Uploads
Incomplete multipart uploads accumulate storage charges but are invisible to `aws s3 ls` because they are not completed objects. Always include a rule to abort incomplete multipart uploads after 7 days. This is a free and often overlooked optimization that can reclaim significant storage in buckets used for large file uploads.
Lifecycle Transition Rules
Not all transitions are permitted. S3 enforces a waterfall model where objects can only move to cheaper (colder) storage classes. The valid transition path is:
- S3 Standard → Standard-IA → Intelligent-Tiering → One Zone-IA → Glacier Instant → Glacier Flexible → Deep Archive
- You cannot transition backward (for example, from Glacier to Standard-IA)
- Minimum 30 days in a class before transitioning to the next (for most transitions)
- Standard-IA and One Zone-IA have 128 KB minimum object size, and smaller objects are charged as 128 KB
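A lifecycle policy like the JSON example earlier is applied with a single call (bucket name illustrative). Note that each call replaces the bucket's entire lifecycle configuration, so include all rules in one document:

```shell
# Apply the lifecycle policy from a local JSON file
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-data-bucket \
  --lifecycle-configuration file://lifecycle.json

# Confirm the active rules
aws s3api get-bucket-lifecycle-configuration \
  --bucket my-data-bucket
```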
S3 Storage Lens: Organization-Wide Visibility
S3 Storage Lens provides organization-wide visibility into storage usage, activity, and cost-efficiency. It aggregates metrics across all your S3 buckets and accounts, providing a single dashboard to identify optimization opportunities.
Key Storage Lens Metrics
- Storage class distribution: Shows what percentage of your data is in each storage class, highlighting opportunities to transition data to cheaper tiers
- Buckets without lifecycle policies: Identifies buckets that may be accumulating data without any cost optimization
- Incomplete multipart uploads: Shows storage consumed by abandoned multipart uploads
- Non-current version bytes: Highlights versioned buckets where old versions may be consuming excessive storage
- Average object size: Small objects (under 128 KB) are charged at minimum size in IA/Glacier tiers, making transitions potentially wasteful
# Create an S3 Storage Lens dashboard with advanced metrics
aws s3control put-storage-lens-configuration \
--account-id 123456789012 \
--config-id my-dashboard \
--storage-lens-configuration '{
"Id": "my-dashboard",
"IsEnabled": true,
"AccountLevel": {
"ActivityMetrics": {"IsEnabled": true},
"BucketLevel": {
"ActivityMetrics": {"IsEnabled": true},
"AdvancedCostOptimizationMetrics": {"IsEnabled": true},
"AdvancedDataProtectionMetrics": {"IsEnabled": true},
"DetailedStatusCodesMetrics": {"IsEnabled": true}
}
},
"DataExport": {
"S3BucketDestination": {
"Format": "Parquet",
"OutputSchemaVersion": "V_1",
"AccountId": "123456789012",
"Arn": "arn:aws:s3:::storage-lens-export-bucket",
"Prefix": "storage-lens/",
"Encryption": {
"SSEKMS": {
"KeyId": "arn:aws:kms:us-east-1:123456789012:key/key-id"
}
}
}
}
}'
Versioning and Storage Cost Implications
S3 versioning retains all versions of an object, including all writes and deletes. While essential for data protection and compliance, versioning can dramatically increase storage costs if not managed properly. Each version is a complete copy of the object, and all versions consume storage and incur charges.
Managing Version Costs
- Non-current version lifecycle rules: Transition old versions to cheaper storage classes after 30 days, and delete them after a retention period (e.g., 90-365 days)
- Delete markers: When you delete a versioned object, S3 creates a delete marker. These accumulate over time. Use the `ExpiredObjectDeleteMarker` lifecycle rule to clean them up
- Object Lock for compliance: Use S3 Object Lock instead of relying solely on versioning for compliance retention. Object Lock prevents deletion even with admin access
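Delete-marker cleanup is a one-rule lifecycle configuration. A sketch (bucket name illustrative; remember that this call replaces the bucket's entire lifecycle configuration, so merge it with any existing rules):

```shell
# Lifecycle rule that removes delete markers with no remaining versions
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-versioned-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "CleanupDeleteMarkers",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": {
        "ExpiredObjectDeleteMarker": true
      }
    }]
  }'
```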
# Check total storage including all versions
aws s3api list-object-versions \
--bucket my-bucket \
--prefix important-data/ \
--query '{
CurrentVersions: length(Versions[?IsLatest==`true`]),
NonCurrentVersions: length(Versions[?IsLatest==`false`]),
DeleteMarkers: length(DeleteMarkers)
}'
# Enable versioning with MFA Delete for critical buckets
aws s3api put-bucket-versioning \
--bucket compliance-bucket \
--versioning-configuration '{
"Status": "Enabled",
"MFADelete": "Enabled"
}' \
--mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
Cross-Region and Same-Region Replication
S3 Replication automatically copies objects between buckets, either within the same region (SRR) or across regions (CRR). Replication is essential for disaster recovery, compliance requirements that mandate geographically separate copies, and reducing latency for globally distributed applications.
Replication Cost Considerations
| Cost Component | Same-Region Replication | Cross-Region Replication |
|---|---|---|
| Replication requests | Standard PUT pricing | Standard PUT pricing |
| Data transfer | Free (same region) | $0.02/GB (varies by region pair) |
| Destination storage | Based on destination storage class | Based on destination storage class |
| S3 Replication Time Control | $0.015/GB (opt-in for 15-min SLA) | $0.015/GB (opt-in for 15-min SLA) |
Replicate to a Cheaper Storage Class
When configuring replication, you can specify a different storage class for the destination. A common pattern is to replicate from S3 Standard in the primary region to S3 Standard-IA or Glacier Instant Retrieval in the DR region. This provides a disaster recovery copy at a fraction of the cost of replicating to S3 Standard.
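A sketch of that pattern, assuming versioning is already enabled on both buckets and a replication IAM role exists (role and bucket names are illustrative):

```shell
# Replicate to Standard-IA in the DR region
aws s3api put-bucket-replication \
  --bucket my-primary-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [{
      "ID": "ReplicateToDR",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {
        "Bucket": "arn:aws:s3:::my-dr-bucket-us-west-2",
        "StorageClass": "STANDARD_IA"
      }
    }]
  }'
```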
S3 Security and Access Controls
Choosing the right storage class is only part of S3 management. Securing your S3 buckets is equally critical. S3 offers multiple layers of access control that work together to protect your data.
- S3 Block Public Access: Enable at the account level to prevent any bucket from being made public. This is the single most important S3 security setting.
- Bucket policies: Define who can access the bucket and what actions they can perform. Use conditions like `aws:PrincipalOrgID` to restrict access to your AWS Organization.
- Encryption: Enable default encryption with SSE-S3 (free, AWS managed), SSE-KMS (customer managed key), or SSE-C (customer provided key). As of January 2023, all new objects are encrypted with SSE-S3 by default.
- Access Points: Create named network endpoints with dedicated access policies, simplifying access management for shared data lakes.
# Enable S3 Block Public Access at the account level
aws s3control put-public-access-block \
--account-id 123456789012 \
--public-access-block-configuration '{
"BlockPublicAcls": true,
"IgnorePublicAcls": true,
"BlockPublicPolicy": true,
"RestrictPublicBuckets": true
}'
# Enable default encryption with KMS
aws s3api put-bucket-encryption \
--bucket my-sensitive-bucket \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "aws:kms",
"KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/key-id"
},
"BucketKeyEnabled": true
}]
}'
# Enable S3 Bucket Key to reduce KMS costs by 99%
# (Already included above with BucketKeyEnabled: true)Cost Optimization Decision Framework
Use this decision tree to choose the right storage class for each dataset:
- Is access frequency predictable? No → Use Intelligent-Tiering with archive tiers enabled
- Accessed multiple times per month? Yes → S3 Standard
- Accessed less than once per month but need instant access? Yes → Standard-IA (multi-AZ) or Glacier Instant Retrieval (if less than quarterly)
- Can you tolerate minutes to hours of retrieval time? Yes → Glacier Flexible Retrieval
- Is this compliance or archival data rarely or never accessed? Yes → Glacier Deep Archive
- Is the data easily reproducible from another source? Yes → Consider One Zone-IA for additional 20% savings
- Many small files under 128 KB? Stay in S3 Standard or Intelligent-Tiering to avoid minimum object size charges in IA/Glacier tiers
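The decision tree above can be sketched as a function. This is a hypothetical helper for illustration, not an AWS API; the thresholds encode the guidance above and the storage-class names match the S3 API constants:

```python
def choose_storage_class(
    predictable_access: bool,
    accesses_per_month: float,
    needs_instant_access: bool,
    reproducible: bool = False,
    typical_object_kb: float = 1024,
) -> str:
    """Encode the decision tree above; returns an S3 storage-class name."""
    if typical_object_kb < 128:
        return "STANDARD"              # avoid IA/Glacier min-size charges
    if not predictable_access:
        return "INTELLIGENT_TIERING"   # enable archive tiers separately
    if accesses_per_month >= 1:
        return "STANDARD"
    if needs_instant_access:
        if reproducible:
            return "ONEZONE_IA"
        # Roughly quarterly or rarer favors Glacier Instant Retrieval
        return "STANDARD_IA" if accesses_per_month >= 1 / 3 else "GLACIER_IR"
    # Tolerates minutes-to-hours retrieval: archival tiers
    return "GLACIER" if accesses_per_month > 0 else "DEEP_ARCHIVE"

print(choose_storage_class(True, 0.1, True))   # GLACIER_IR
print(choose_storage_class(False, 5, True))    # INTELLIGENT_TIERING
print(choose_storage_class(True, 0, False))    # DEEP_ARCHIVE
```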
Annual Cost Comparison for 100 TB
| Storage Class | Monthly Cost (100 TB) | Annual Cost | Savings vs Standard |
|---|---|---|---|
| S3 Standard | $2,355 | $28,262 | N/A |
| Intelligent-Tiering (mixed) | $1,500 (estimated) | $18,000 | 36% |
| Standard-IA | $1,280 | $15,360 | 46% |
| Glacier Instant | $410 | $4,915 | 83% |
| Glacier Deep Archive | $101 | $1,217 | 96% |
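The table's figures can be reproduced directly from the per-GB prices (using binary units, 100 TB = 102,400 GB):

```python
# Reproduce the 100 TB cost table (us-east-1 per-GB monthly prices)
PRICES = {
    "S3 Standard": 0.023,
    "Standard-IA": 0.0125,
    "Glacier Instant": 0.004,
    "Glacier Deep Archive": 0.00099,
}
GB = 100 * 1024  # 100 TB

for name, per_gb in PRICES.items():
    monthly = GB * per_gb
    annual = monthly * 12
    savings = (1 - per_gb / PRICES["S3 Standard"]) * 100
    print(f"{name:22s} ${monthly:8,.0f}/mo  ${annual:9,.0f}/yr  {savings:3.0f}% savings")
```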
Key Takeaways
Do not store everything in S3 Standard by default. Implement lifecycle policies from day one on every bucket. Use Intelligent-Tiering for unpredictable workloads; it is the “set and forget” option that guarantees you are never overpaying. Glacier Deep Archive is under $1/TB/month for true cold storage. Always abort incomplete multipart uploads to avoid hidden charges. Manage non-current versions with lifecycle rules to prevent version explosion. Enable S3 Storage Lens for organization-wide visibility. The right storage class strategy can cut your S3 bill by 60-96% while maintaining the exact access patterns your applications need.
- S3 offers seven storage classes optimized for different access patterns and costs.
- S3 Intelligent-Tiering automatically moves objects between tiers based on access.
- Lifecycle rules automate transitions and expirations to reduce costs.
- Glacier and Deep Archive provide the lowest cost for long-term archival.
- S3 Standard is the default and best for frequently accessed data.
- Use S3 Storage Lens and analytics to identify optimization opportunities.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.