
S3 Bucket Security Hardening: The Definitive Checklist for 2026

Complete S3 hardening guide covering Block Public Access, bucket policies, SSE-S3 vs SSE-KMS vs SSE-C, access logging, versioning, MFA Delete, Object Lock, and AWS Config rules for continuous compliance.

CloudToolStack Team · March 5, 2026 · 15 min read

S3 Is the Most Breached AWS Service

S3 is simultaneously the most useful and the most dangerous service in AWS. It stores everything -- application data, database backups, log archives, machine learning datasets, compliance records, customer PII. It is also the service behind the majority of cloud data breaches. Capital One (100 million customer records), Twitch (125 GB source code), Facebook (540 million records), US military (sensitive intelligence data) -- the list of organizations that have leaked data from misconfigured S3 buckets reads like a Fortune 500 directory.

The root cause is almost always the same: overly permissive access. A bucket policy that grants public read access. An ACL that allows authenticated AWS users (meaning any AWS account, not just yours) to list objects. A presigned URL with a 7-day expiration that gets indexed by search engines. An IAM role with s3:* permissions attached to an internet-facing application.

AWS has added layer after layer of security controls in response -- S3 Block Public Access, bucket ownership controls, access points, Object Lock, default encryption, and more. But each layer is optional, and many teams do not enable them because they either do not know they exist or assume the defaults are secure (they are not -- or at least, they were not until recently). This guide is the definitive checklist for hardening S3 buckets in 2026, covering every control that matters with specific implementation guidance.

S3 Block Public Access: The First Line of Defense

S3 Block Public Access (BPA) is the single most important S3 security control. When enabled at the account level, it overrides any bucket policy or ACL that grants public access. Even if a developer creates a bucket with a public-read ACL, BPA prevents that access from taking effect. It is a four-toggle control:

  • BlockPublicAcls: Rejects any PUT request that includes a public ACL. Prevents new public ACLs from being created.
  • IgnorePublicAcls: Ignores all public ACLs on existing buckets and objects. Even if a public ACL exists, S3 will not honor it.
  • BlockPublicPolicy: Rejects any bucket policy that grants public access. Uses AWS's public access evaluation logic.
  • RestrictPublicBuckets: Restricts access to buckets with public policies to AWS service principals and authorized users only. Cross-account access via public policies is blocked.

Enable all four toggles at the account level. Yes, all four. If you have a legitimate need for a public bucket (hosting static website assets, for example), use CloudFront with Origin Access Control instead of making the bucket public. The only exception is if you are hosting a public dataset or a public software distribution -- and even then, consider CloudFront.

As of April 2023, new AWS accounts have S3 Block Public Access enabled by default. But accounts created before that date may not have it enabled, and it can be disabled at any time by anyone with s3:PutAccountPublicAccessBlock permissions. Verify it is enabled and add an SCP to prevent disabling it.

Check Account-Level BPA Now

Run this CLI command to check your account-level S3 Block Public Access settings: aws s3control get-public-access-block --account-id YOUR_ACCOUNT_ID. If any of the four settings is false, enable it immediately. This is the single highest-impact security action you can take on AWS.
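As a sanity check, here is a minimal Python sketch (stdlib only) that parses the JSON printed by that CLI command and reports which of the four toggles are off. The sample response is hypothetical:

```python
import json

# The four account-level Block Public Access toggles, all of which should be True.
REQUIRED_TOGGLES = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def bpa_gaps(cli_output: str) -> list[str]:
    """Return the names of any toggles that are missing or False.

    `cli_output` is the JSON printed by
    `aws s3control get-public-access-block --account-id <id>`.
    """
    config = json.loads(cli_output)["PublicAccessBlockConfiguration"]
    return [t for t in REQUIRED_TOGGLES if not config.get(t, False)]

# Hypothetical response with one toggle left off:
response = '''{"PublicAccessBlockConfiguration": {
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": false,
    "RestrictPublicBuckets": true}}'''
print(bpa_gaps(response))  # ['BlockPublicPolicy']
```

Run it against each account in your organization; a non-empty result is an immediate remediation item.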

Bucket Policies: Least Privilege Access

S3 bucket policies define who can do what with your bucket and its objects. The default policy is empty, meaning only the bucket owner (the AWS account) has access. Every access grant beyond that should be deliberate, documented, and as narrow as possible.


Essential Bucket Policy Patterns

  • Deny unencrypted uploads: Add a deny statement for s3:PutObject where the s3:x-amz-server-side-encryption condition key is not present or not equal to your required encryption algorithm. This ensures every object is encrypted at rest, even if default encryption is somehow disabled.
  • Deny non-HTTPS access: Add a deny statement for all S3 actions where aws:SecureTransport is false. This ensures all data in transit is encrypted. S3 still accepts HTTP requests by default -- this policy blocks them.
  • Restrict to specific VPC endpoints: For internal buckets, add a deny statement for all actions where aws:SourceVpce is not your VPC endpoint ID. This ensures the bucket can only be accessed from within your VPC, even if credentials are compromised and used from outside.
  • Restrict to specific IAM roles: Instead of granting broad account access, restrict to specific IAM role ARNs. Use aws:PrincipalArn condition to allow only the application role, the deployment role, and the break-glass admin role.
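To make the first two patterns concrete, here is a hedged Python sketch that assembles the deny statements as policy JSON. The bucket ARN is a placeholder, and the example assumes SSE-KMS is the required algorithm; adjust the condition value to AES256 if you standardize on SSE-S3:

```python
import json

def baseline_deny_statements(bucket_arn: str) -> list[dict]:
    """Two deny statements from the patterns above: block non-HTTPS
    requests and block uploads that skip SSE-KMS encryption."""
    return [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [bucket_arn, f"{bucket_arn}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            # Object-level action, so the condition applies per upload.
            "Resource": f"{bucket_arn}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            },
        },
    ]

policy = {
    "Version": "2012-10-17",
    # Hypothetical bucket name for illustration:
    "Statement": baseline_deny_statements("arn:aws:s3:::example-data-bucket"),
}
print(json.dumps(policy, indent=2))
```

Attach the output with put-bucket-policy (or your IaC tool of choice) after reviewing it against your own requirements.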

Common Bucket Policy Mistakes

The most dangerous bucket policy mistake is using Principal: "*" without a sufficiently restrictive condition. Principal "*" means anyone -- any AWS account, any IAM user, any anonymous user. If your condition is weak or missing, you have a public bucket. Always pair Principal "*" with at least one of: aws:PrincipalOrgID (restricts to your AWS Organization), aws:SourceVpce (restricts to your VPC), or aws:PrincipalArn (restricts to specific roles).

Another common mistake is forgetting to deny s3:DeleteBucket and s3:PutBucketPolicy for non-admin roles. If an attacker compromises an application role that can modify the bucket policy, they can grant themselves full access or delete the bucket entirely. Restrict bucket policy modifications to a dedicated infrastructure role.
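A simple lint for the Principal "*" mistake can be scripted. This is a hypothetical sketch, not a full IAM policy evaluator -- it only checks for the three restrictive condition keys named above (compared case-insensitively, since IAM condition key names are not case-sensitive):

```python
SAFE_CONDITION_KEYS = {"aws:principalorgid", "aws:sourcevpce", "aws:principalarn"}

def risky_statements(policy: dict) -> list[str]:
    """Return the Sids of Allow statements that use Principal "*" without
    any of the restrictive condition keys discussed above."""
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") != "Allow" or not wildcard:
            continue
        # Collect every condition key across all operators in the statement.
        keys = {
            key.lower()
            for operator in stmt.get("Condition", {}).values()
            for key in operator
        }
        if not keys & SAFE_CONDITION_KEYS:
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged

# Hypothetical policy with an unconditioned public-read statement:
leaky = {"Statement": [{
    "Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
    "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example/*"}]}
print(risky_statements(leaky))  # ['PublicRead']
```

Running this across every bucket policy in an account (via get-bucket-policy) is a cheap weekly audit.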

Encryption: SSE-S3 vs SSE-KMS vs SSE-C

S3 offers three server-side encryption options. Choosing the right one depends on your compliance requirements, key management needs, and cost tolerance.

SSE-S3 (AES-256)

AWS manages the encryption keys entirely. Every object is encrypted with a unique key, which is itself encrypted with a root key that AWS rotates regularly. You have no control over the keys, no visibility into key usage, and no ability to audit key access. SSE-S3 is free and has zero operational overhead. Since January 2023, SSE-S3 is enabled by default on all new buckets.

SSE-S3 satisfies the encryption-at-rest requirement for most compliance frameworks, including SOC 2 and basic HIPAA requirements. It does not satisfy requirements that mandate customer-managed keys (some PCI DSS interpretations, FedRAMP High, some financial regulations).

SSE-KMS (AWS KMS)

AWS KMS manages the encryption keys, but you control the key policy. You can audit every use of the key via CloudTrail, restrict which IAM roles can encrypt and decrypt data, set key rotation schedules, and revoke access by disabling or deleting the key. SSE-KMS provides an additional layer of access control beyond S3 bucket policies -- even if someone has s3:GetObject permission, they also need kms:Decrypt permission on the KMS key.

SSE-KMS costs $1/month per key plus $0.03 per 10,000 API calls (encrypt/decrypt). For buckets with high read throughput, KMS costs can be significant. A bucket serving 1 million objects per day generates 1 million kms:Decrypt calls -- $3/day or $90/month just for decryption. Use S3 Bucket Keys to reduce KMS API calls by up to 99%: instead of calling KMS for every object, S3 generates a bucket-level data key from KMS and uses it for multiple objects.
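The arithmetic above generalizes into a quick back-of-the-envelope estimator. This sketch uses the request pricing cited in this section; the bucket-key reduction fraction is your assumption to supply:

```python
KMS_COST_PER_10K_REQUESTS = 0.03  # USD, per the pricing cited above

def monthly_decrypt_cost(objects_read_per_day: int, days: int = 30,
                         bucket_key_reduction: float = 0.0) -> float:
    """Approximate monthly kms:Decrypt spend. `bucket_key_reduction` is the
    fraction of KMS calls eliminated by S3 Bucket Keys (up to ~0.99)."""
    calls = objects_read_per_day * days * (1 - bucket_key_reduction)
    return calls / 10_000 * KMS_COST_PER_10K_REQUESTS

print(monthly_decrypt_cost(1_000_000))                           # ~90.0 USD
print(monthly_decrypt_cost(1_000_000, bucket_key_reduction=0.99))  # ~0.90 USD
```

The two calls reproduce the $90/month figure above and show why Bucket Keys are effectively mandatory for high-read SSE-KMS buckets.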

SSE-C (Customer-Provided Keys)

You provide the encryption key with every PUT and GET request. AWS uses the key to encrypt/decrypt but does not store it. You are entirely responsible for key management -- if you lose the key, you lose the data. SSE-C is used when regulatory requirements mandate that the cloud provider never has access to encryption keys. It is operationally complex and rarely the right choice for most applications.

Encryption Recommendation

Use SSE-S3 as the default for non-sensitive data. Use SSE-KMS with bucket keys for sensitive data, compliance-regulated data, and any data that requires key access auditing. Avoid SSE-C unless you have a specific regulatory requirement that mandates it. Always enable S3 Bucket Keys when using SSE-KMS to reduce costs.
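For reference, the recommended SSE-KMS default looks like the following configuration payload (the shape accepted by the PutBucketEncryption API, e.g. via aws s3api put-bucket-encryption). The key ARN is a placeholder:

```python
# Default-encryption configuration for SSE-KMS with Bucket Keys enabled.
sse_kms_default = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                # Placeholder ARN -- substitute your customer-managed key.
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example",
            },
            # Cuts per-object KMS API calls, per the cost discussion above.
            "BucketKeyEnabled": True,
        }
    ]
}
```

For the SSE-S3 default, the same structure applies with SSEAlgorithm set to "AES256" and no key ID.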

Access Logging

S3 server access logging records every request made to your bucket: who made the request, what operation, what object, what time, what response code, and what error code (if any). These logs are essential for security investigation, compliance auditing, and understanding access patterns.

Enable server access logging for every bucket that contains sensitive data. Logs are delivered to a target bucket (not the same bucket -- this creates an infinite loop). Use a dedicated logging bucket with the following configuration: BPA enabled, SSE-KMS encryption, lifecycle policy to transition to Glacier after 90 days and delete after 365 days (or longer for compliance), and Object Lock if you need immutable logs for compliance.

For more structured and queryable logging, use CloudTrail data events for S3. CloudTrail S3 data events record the same information as server access logs but in CloudTrail's structured JSON format, which integrates with CloudWatch Logs, Athena, and SIEM tools. CloudTrail data events cost $0.10 per 100,000 events -- for high-traffic buckets, this adds up. Use S3 server access logs for cost-effective logging of all buckets, and CloudTrail data events for buckets that require real-time alerting or SIEM integration.

Versioning and MFA Delete

S3 versioning keeps every version of every object, including deleted objects (a delete marker is created instead of removing the object). Versioning protects against accidental deletion, accidental overwrites, and ransomware (attackers cannot encrypt your data by overwriting objects if you can restore previous versions).

MFA Delete adds an additional layer: it requires MFA authentication to permanently delete object versions or to disable versioning on the bucket. Without MFA Delete, anyone with s3:DeleteObject or s3:PutBucketVersioning permissions can permanently destroy data. With MFA Delete, even a compromised IAM role cannot permanently delete objects without the physical MFA device.

Enable versioning on every bucket. Enable MFA Delete on buckets containing data that must not be permanently deleted (backups, compliance records, audit logs). Note: MFA Delete can only be enabled by the root account using the AWS CLI -- it cannot be enabled via the console, CloudFormation, or Terraform. This is intentional -- it ensures that only someone with root account access and a physical MFA device can enable or disable this protection.

Object Lock: Immutable Storage

S3 Object Lock prevents objects from being deleted or overwritten for a specified retention period. It is the compliance-grade version of versioning + MFA Delete. Object Lock supports two modes:

  • Governance mode: Users with special permissions (s3:BypassGovernanceRetention) can override the lock and delete objects. Useful for data that should generally be retained but may need to be deleted in exceptional circumstances.
  • Compliance mode: Nobody can delete the object until the retention period expires -- not the root account, not AWS support, nobody. Once set, the retention period cannot be shortened. This is the nuclear option for regulatory compliance (SEC Rule 17a-4, FINRA, HIPAA).

Object Lock historically could only be enabled when creating a bucket; AWS now also supports enabling it on existing versioned buckets, though versioning must be on first. Legal holds are a separate Object Lock feature that prevents deletion indefinitely (no expiration date) until explicitly removed. Use legal holds for data under litigation hold.

Compliance Mode Is Irreversible

Once you set a compliance mode retention period, you cannot shorten it, disable it, or delete the object before the period expires. If you set a 7-year retention and realize the data should have been deleted, you will pay storage costs for 7 years with no way to delete it. Test thoroughly with governance mode before deploying compliance mode.
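Following that advice, a sensible starting point is a governance-mode default retention. This sketch shows the shape of the configuration passed to the PutObjectLockConfiguration API; the retention period is an example value:

```python
# Bucket-level Object Lock default: governance mode, 1-year retention.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            # Start with GOVERNANCE; switch to COMPLIANCE only after
            # thorough testing, since compliance mode is irreversible.
            "Mode": "GOVERNANCE",
            "Days": 365,
        }
    },
}
```

Individual objects can still carry longer per-object retention or legal holds on top of this bucket default.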

S3 Access Points

S3 Access Points simplify managing access to shared buckets. Instead of one monolithic bucket policy that tries to accommodate every application, each application gets its own access point with its own access policy. Access points can be restricted to a specific VPC, ensuring that the application can only access the bucket from within the VPC.

Use access points when multiple applications or teams share a bucket. Instead of adding conditions to the bucket policy for each new consumer (creating an increasingly complex and error-prone policy), create a new access point for each consumer with a clean, simple policy. The bucket policy delegates to access points, and each access point policy is self-contained.
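A per-consumer access point policy can stay very small. This is a hypothetical sketch of a read/write policy scoped to a single role; the access point object ARN format (".../object/*") is the part worth noting:

```python
def access_point_policy(account_id: str, region: str,
                        ap_name: str, role_arn: str) -> dict:
    """One self-contained policy per consumer, as described above."""
    ap_arn = f"arn:aws:s3:{region}:{account_id}:accesspoint/{ap_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": role_arn},
                "Action": ["s3:GetObject", "s3:PutObject"],
                # Access point object ARNs use the /object/ prefix.
                "Resource": f"{ap_arn}/object/*",
            }
        ],
    }

# Hypothetical account, access point, and role names:
policy = access_point_policy(
    "111122223333", "us-east-1", "analytics",
    "arn:aws:iam::111122223333:role/analytics-app",
)
```

Adding a new consumer then means creating a new access point with its own small policy, not editing a shared one.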

Lifecycle Rules: Not Just for Cost Optimization

S3 lifecycle rules are typically discussed in the context of cost optimization (transitioning old data to cheaper storage classes), but they also serve a security function. A lifecycle rule that deletes objects after a retention period ensures that sensitive data does not persist indefinitely. A rule that deletes non-current versions after 30 days limits the window in which old data can be accessed. A rule that aborts incomplete multipart uploads after 7 days prevents orphaned upload parts from consuming storage and potentially containing sensitive data.


Every bucket should have at minimum: a rule to abort incomplete multipart uploads after 7 days and a rule to delete non-current versions after a defined retention period (30-90 days for most workloads). For compliance-regulated data, set lifecycle rules that match your data retention policy -- keep data for exactly as long as required, then delete it. Retaining data longer than required increases your exposure in the event of a breach.
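The two minimum rules above translate into a lifecycle configuration like the following sketch (the shape accepted by PutBucketLifecycleConfiguration); the 30-day noncurrent-version window is an example, not a mandate:

```python
# Minimum lifecycle configuration for every bucket, per the text above.
baseline_lifecycle = {
    "Rules": [
        {
            "ID": "abort-stale-multipart",
            "Status": "Enabled",
            "Filter": {},  # empty filter = applies to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
        {
            "ID": "expire-noncurrent-versions",
            "Status": "Enabled",
            "Filter": {},
            # Limits how long superseded object versions linger.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        },
    ]
}
```

Compliance-regulated buckets would extend this with transition and expiration rules matched to the documented retention policy.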

AWS Config Rules for Continuous Compliance

Deploy these AWS Config rules to continuously monitor S3 security:

  • s3-bucket-server-side-encryption-enabled: Verifies default encryption is configured on all buckets.
  • s3-bucket-ssl-requests-only: Verifies bucket policies deny non-HTTPS requests.
  • s3-bucket-public-read-prohibited: Detects buckets with public read access via ACLs or policies.
  • s3-bucket-public-write-prohibited: Detects buckets with public write access.
  • s3-bucket-logging-enabled: Verifies server access logging is enabled.
  • s3-bucket-versioning-enabled: Verifies versioning is enabled.
  • s3-account-level-public-access-blocks-periodic: Verifies account-level S3 Block Public Access is enabled.
  • s3-bucket-default-lock-enabled: Verifies Object Lock is enabled (for compliance-critical buckets).
  • s3-bucket-replication-enabled: Verifies cross-region replication is configured (for disaster recovery).

Set up SNS notifications for non-compliant evaluations. Critical findings (public access, missing encryption) should page your security team. Informational findings (missing tags, missing lifecycle rules) should create tickets for remediation within a defined SLA.
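The paging-versus-ticketing split can live in a small routing table consumed by whatever handles your Config SNS notifications. The severity assignments here are one hypothetical reading of the guidance above; tune them to your own risk model:

```python
# Hypothetical severity routing for the Config rules listed above:
# "page" findings go to the on-call security channel, everything
# else becomes a remediation ticket.
SEVERITY = {
    "s3-bucket-public-read-prohibited": "page",
    "s3-bucket-public-write-prohibited": "page",
    "s3-account-level-public-access-blocks-periodic": "page",
    "s3-bucket-server-side-encryption-enabled": "page",
    "s3-bucket-ssl-requests-only": "ticket",
    "s3-bucket-logging-enabled": "ticket",
    "s3-bucket-versioning-enabled": "ticket",
    "s3-bucket-default-lock-enabled": "ticket",
    "s3-bucket-replication-enabled": "ticket",
}

def route(config_rule_name: str) -> str:
    """Default unknown rules to a ticket rather than dropping them."""
    return SEVERITY.get(config_rule_name, "ticket")

print(route("s3-bucket-public-read-prohibited"))  # page
```

Keeping the table in code (rather than scattered across notification filters) makes the SLA policy reviewable in one place.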

Real Breach Case Studies

Capital One (2019)

An attacker exploited a misconfigured WAF to access EC2 instance metadata, obtained IAM role credentials, and used those credentials to list and download S3 objects containing 100 million customer records. The bucket was not public -- the breach was via compromised IAM credentials. Lessons: restrict IAM role permissions to specific bucket ARNs (not s3:*), enable CloudTrail S3 data events for sensitive buckets, monitor for anomalous API call patterns.

Twitch (2021)

An internal server misconfiguration exposed 125 GB of data including source code, creator payout data, and internal tools. The data was in S3 and was accessed via overly permissive internal access controls. Lessons: even internal buckets need access controls, use S3 access points to limit each application to only the objects it needs, enable versioning to detect unauthorized modifications.

Pegasus Airlines (2022)

An open S3 bucket containing 6.5 TB of data including flight charts, navigation materials, crew PII, and source code was discovered by security researchers. The bucket had no access controls. Lessons: enable account-level S3 Block Public Access (this alone would have prevented the breach), deploy AWS Config rules to detect public buckets, audit bucket policies regularly.

The Complete Hardening Checklist

Here is the definitive S3 security checklist for 2026, ordered by impact:

  1. Enable S3 Block Public Access at the account level (all four toggles).
  2. Add an SCP to prevent disabling account-level BPA.
  3. Enable default encryption (SSE-S3 minimum, SSE-KMS for sensitive data).
  4. Add bucket policy statements to deny non-HTTPS access.
  5. Add bucket policy statements to deny unencrypted uploads.
  6. Enable versioning on all buckets.
  7. Enable MFA Delete on critical buckets (backups, audit logs, compliance data).
  8. Enable Object Lock (governance or compliance mode) for regulatory-required data.
  9. Enable server access logging (target bucket with encryption and lifecycle rules).
  10. Enable CloudTrail S3 data events for sensitive buckets.
  11. Deploy AWS Config rules for continuous monitoring.
  12. Configure lifecycle rules: abort incomplete uploads, expire non-current versions, transition to cheaper tiers.
  13. Use S3 Access Points for shared buckets instead of complex bucket policies.
  14. Restrict bucket policies to specific VPC endpoints for internal buckets.
  15. Review and restrict IAM policies: no s3:* permissions, specific bucket ARNs, condition keys.
  16. Enable S3 Inventory to maintain a complete catalog of objects and their encryption/access status.

This is not a one-time exercise. S3 security requires continuous monitoring because new buckets are created, IAM policies change, and new applications are deployed. The combination of preventive controls (BPA, SCPs, bucket policies), detective controls (Config rules, CloudTrail), and response automation (SNS alerts, SSM remediation) creates a defense-in-depth posture that catches issues before they become breaches.


Written by CloudToolStack Team

Cloud architects with 15+ years of production experience across AWS, Azure, GCP, and OCI. We build free tools and write practical guides to help engineers navigate multi-cloud infrastructure.

Disclaimer: This article is for informational purposes. Cloud services and pricing change frequently; always verify with official provider documentation. AWS, Azure, GCP, and OCI are trademarks of their respective owners.