
S3 403 AccessDenied: The Complete Troubleshooting Guide

2026-04-12 · 8 min read

Your application is returning this error when trying to read from S3:

An error occurred (AccessDenied) when calling the GetObject operation:
Access Denied

Or your CI/CD pipeline fails with:

fatal error: An error occurred (403) when calling the HeadObject operation:
Forbidden

The S3 403 AccessDenied error is uniquely frustrating because S3 has more access control layers than any other AWS service. Between IAM policies, bucket policies, ACLs, Block Public Access settings, VPC endpoint policies, S3 Object Ownership, and KMS encryption, there are over a dozen distinct reasons you might get a 403. And the error message is always the same unhelpful "Access Denied" with no indication of which layer blocked you.

I have debugged hundreds of S3 access issues for clients. Here are the root causes in order of how frequently I encounter them, along with the exact commands to diagnose and fix each one.

Root Cause 1: S3 Block Public Access Is Enabled

This is the number one cause I see when teams try to make objects publicly accessible. Since April 2023, all new S3 buckets have Block Public Access enabled by default at both the account and bucket level. This overrides any bucket policy or ACL that attempts to grant public access.

Diagnosis

Check both account-level and bucket-level Block Public Access settings:

# Account-level settings
aws s3control get-public-access-block \
  --account-id 123456789012

# Bucket-level settings
aws s3api get-public-access-block \
  --bucket my-bucket

The output shows four settings. Any one of them can block access:

{
  "PublicAccessBlockConfiguration": {
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
  }
}

Solution

If you intentionally need public access (for example, a static website hosting bucket), disable the relevant settings at the bucket level:

aws s3api put-public-access-block \
  --bucket my-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

However, for most use cases, you should keep Block Public Access enabled and use presigned URLs or CloudFront with Origin Access Control instead.
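Presigned URLs are easy to generate: the CLI can mint one with aws s3 presign s3://my-bucket/report.pdf --expires-in 3600. Under the hood this is SigV4 query-string signing. The sketch below is a stdlib-only Python illustration of that signing flow, with dummy credentials, a hypothetical object name, and no session-token handling; in real code, use the SDK's built-in presigner instead:

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_get(bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Illustrative SigV4 query-string presigned GET URL for an S3 object."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{k}={urllib.parse.quote(v, safe='')}" for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET",
        "/" + urllib.parse.quote(key),   # object keys may contain slashes
        query,
        f"host:{host}\n",                # canonical headers block ends with \n
        "host",                          # signed headers
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    def sign(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

    # Signing key derivation chain: date -> region -> service -> terminator
    signing_key = sign(sign(sign(sign(
        ("AWS4" + secret_key).encode(), datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{urllib.parse.quote(key)}?{query}&X-Amz-Signature={signature}"
```

A presigned URL grants temporary access as the signing principal, so it works without touching Block Public Access at all.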

Root Cause 2: Bucket Policy Explicit Deny

An explicit Deny statement in a bucket policy overrides all Allow statements — in both the bucket policy and IAM policies. This is a fundamental IAM evaluation principle that trips up many teams.
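The evaluation order can be sketched in a few lines of Python. This is a deliberately simplified model that ignores wildcards, conditions, and the interplay of policy types, but it captures the precedence rule:

```python
def evaluate(statements, action, resource):
    """Simplified IAM decision logic: an explicit Deny always wins,
    then at least one Allow is required, otherwise implicit deny."""
    matching = [s for s in statements
                if action in s["Action"] and resource in s["Resource"]]
    if any(s["Effect"] == "Deny" for s in matching):
        return "DENY"   # explicit deny overrides every Allow
    if any(s["Effect"] == "Allow" for s in matching):
        return "ALLOW"
    return "DENY"       # implicit deny: nothing matched
```

This is why piling on more Allow statements never fixes a 403 caused by a Deny; the Deny itself has to be narrowed.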

Diagnosis

Retrieve and inspect the bucket policy:

aws s3api get-bucket-policy \
  --bucket my-bucket \
  --output text | python3 -m json.tool

Look for any Deny statements. A common pattern that causes issues is an IP restriction policy:

{
  "Sid": "DenyNonOfficeIPs",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my-bucket",
    "arn:aws:s3:::my-bucket/*"
  ],
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": ["203.0.113.0/24"]
    }
  }
}

This policy denies all S3 actions from any IP outside the specified range — including AWS services, Lambda functions, and CI/CD pipelines that do not originate from those IPs.

Solution

Refine the Deny statement so that trusted principals, such as the Lambda execution role and the CI/CD role, are exempt from the IP restriction:

{
  "Sid": "DenyNonOfficeIPsExceptAWSServices",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": ["203.0.113.0/24"]
    },
    "StringNotLike": {
      "aws:PrincipalArn": [
        "arn:aws:iam::123456789012:role/LambdaRole",
        "arn:aws:iam::123456789012:role/CICDRole"
      ]
    }
  }
}

Root Cause 3: IAM Policy Missing S3 Permissions

The calling IAM principal (user or role) might not have the required S3 permissions. This is straightforward but worth checking systematically.

Diagnosis

Simulate the S3 action against the caller's IAM policies:

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/MyAppRole \
  --action-names s3:GetObject s3:PutObject s3:ListBucket \
  --resource-arns \
    arn:aws:s3:::my-bucket \
    arn:aws:s3:::my-bucket/* \
  --query 'EvaluationResults[*].{Action:EvalActionName,Decision:EvalDecision,Matched:MatchedStatements[0].SourcePolicyId}'

Note that s3:ListBucket applies to the bucket ARN (arn:aws:s3:::my-bucket), while s3:GetObject applies to the object ARN (arn:aws:s3:::my-bucket/*). Mixing these up is a common mistake.
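The split can be made explicit with a tiny, purely illustrative helper (it covers only a handful of actions, not the full S3 action list):

```python
# Bucket-level actions attach to the bucket ARN; object-level actions need "<bucket-arn>/*".
BUCKET_LEVEL_ACTIONS = {"s3:ListBucket", "s3:ListBucketVersions", "s3:GetBucketLocation"}

def resource_arn_for(action, bucket):
    """Return the resource ARN an IAM statement needs for a given S3 action."""
    arn = f"arn:aws:s3:::{bucket}"
    return arn if action in BUCKET_LEVEL_ACTIONS else arn + "/*"
```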

Solution

Ensure the IAM policy uses the correct resource ARN format:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3Read",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Sid": "AllowS3List",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}

Root Cause 4: VPC Endpoint Policy Restricting S3 Access

If your application runs in a VPC with an S3 VPC endpoint (gateway endpoint), the endpoint has its own policy that can restrict which buckets are accessible. The default policy allows all S3 actions, but many organizations apply restrictive endpoint policies.

Diagnosis

Find the VPC endpoint and check its policy:

# Find S3 gateway endpoints
aws ec2 describe-vpc-endpoints \
  --filters "Name=service-name,Values=com.amazonaws.us-east-1.s3" \
  --query 'VpcEndpoints[*].{
    ID:VpcEndpointId,
    VpcId:VpcId,
    State:State,
    RouteTableIds:RouteTableIds
  }'

# Get the endpoint policy
aws ec2 describe-vpc-endpoints \
  --vpc-endpoint-ids vpce-0abc123 \
  --query 'VpcEndpoints[0].PolicyDocument' \
  --output text | python3 -m json.tool

Solution

Update the endpoint policy to allow access to the required buckets:

aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id vpce-0abc123 \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowSpecificBuckets",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::my-bucket",
          "arn:aws:s3:::my-bucket/*",
          "arn:aws:s3:::my-other-bucket",
          "arn:aws:s3:::my-other-bucket/*"
        ]
      }
    ]
  }'

Root Cause 5: Cross-Account Access Misconfiguration

Accessing an S3 bucket in another AWS account requires the bucket policy to grant access to the calling account or principal. IAM policies in the caller's account are necessary but not sufficient.

Diagnosis

Check the bucket owner account. If the bucket is in a different account, you need cross-account access:

# This shows the bucket owner's canonical ID
aws s3api get-bucket-acl --bucket my-bucket \
  --query 'Owner.ID'

Solution

In the bucket owner's account, add a bucket policy granting access:

{
  "Sid": "CrossAccountRead",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::222222222222:role/ExternalAppRole"
  },
  "Action": [
    "s3:GetObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::my-bucket",
    "arn:aws:s3:::my-bucket/*"
  ]
}

In the calling account, the IAM role also needs the same S3 permissions in its IAM policy.
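The two-sided requirement can be summed up as a short decision helper (illustrative only; it ignores explicit denies and resource-based exceptions):

```python
def diagnose_cross_account(iam_allows, bucket_policy_allows):
    """Cross-account S3 access requires BOTH the caller's IAM policy
    and the bucket policy to allow the action."""
    if iam_allows and bucket_policy_allows:
        return "allowed"
    if not iam_allows and not bucket_policy_allows:
        return "add permissions on both sides"
    return ("fix the bucket policy" if not bucket_policy_allows
            else "fix the caller's IAM policy")
```

In the same-account case an allow from either side is sufficient, which is exactly why setups that worked in one account break when the bucket moves to another.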

Root Cause 6: S3 Object Ownership and ACL Issues

Since April 2023, new buckets default to BucketOwnerEnforced for Object Ownership, which disables ACLs entirely. If your application relies on ACLs (for example, setting bucket-owner-full-control during cross-account uploads), this setting can cause 403 errors.

Diagnosis

Check the Object Ownership setting:

aws s3api get-bucket-ownership-controls \
  --bucket my-bucket

If it returns BucketOwnerEnforced, ACLs are disabled. Any API call that sets an ACL other than the bucket-owner-full-control canned ACL will be rejected (AWS returns an AccessControlListNotSupported error, which many tools surface as a generic request failure).

Solution

If you need ACLs, change the Object Ownership setting:

aws s3api put-bucket-ownership-controls \
  --bucket my-bucket \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'

However, the better approach is to remove ACL headers from your application and use bucket policies for access control instead.

Root Cause 7: SSE-KMS Encryption Requiring kms:Decrypt

If the bucket uses SSE-KMS encryption, reading objects requires both s3:GetObject and kms:Decrypt permissions on the KMS key. This catches many teams off guard because the S3 error does not mention KMS at all.

Diagnosis

Check the bucket's default encryption:

aws s3api get-bucket-encryption \
  --bucket my-bucket \
  --query 'ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault'

If SSEAlgorithm is aws:kms, you need KMS permissions in addition to S3 permissions.
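As an illustrative rule of thumb in code (covering only the read path):

```python
def required_read_permissions(sse_algorithm):
    """Permissions needed to GetObject, by the bucket's default encryption.
    Illustrative only; writes to SSE-KMS buckets also need kms:GenerateDataKey."""
    perms = ["s3:GetObject"]
    if sse_algorithm == "aws:kms":
        perms.append("kms:Decrypt")  # granted on the KMS key ARN, not the bucket
    return perms
```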

Solution

Add KMS permissions to the IAM policy:

{
  "Sid": "AllowKMSForS3",
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:us-east-1:123456789012:key/mrk-abc123"
}

Systematic Debugging Checklist

When you encounter an S3 403, run through this diagnostic sequence:

# 1. Check Block Public Access (if expecting public access)
aws s3api get-public-access-block --bucket BUCKET 2>&1

# 2. Check bucket policy for Deny statements
aws s3api get-bucket-policy --bucket BUCKET --output text 2>&1

# 3. Simulate IAM permissions
aws iam simulate-principal-policy \
  --policy-source-arn ROLE_ARN \
  --action-names s3:GetObject \
  --resource-arns arn:aws:s3:::BUCKET/*

# 4. Check Object Ownership
aws s3api get-bucket-ownership-controls --bucket BUCKET 2>&1

# 5. Check bucket encryption (for SSE-KMS)
aws s3api get-bucket-encryption --bucket BUCKET 2>&1

# 6. Check VPC endpoint policies (if in VPC)
aws ec2 describe-vpc-endpoints \
  --filters "Name=service-name,Values=com.amazonaws.REGION.s3"

# 7. Check bucket ACL
aws s3api get-bucket-acl --bucket BUCKET

Prevention Best Practices

Use CloudTrail Data Events

Enable CloudTrail data events for S3 to log every object-level API call. This lets you see exactly which principal called which action and what the response was:

aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[{
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [{
      "Type": "AWS::S3::Object",
      "Values": ["arn:aws:s3:::my-bucket/"]
    }]
  }]'
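Once data events are flowing, the delivered log files (gzipped JSON with a top-level Records array) can be filtered for denied calls. A minimal sketch, using field names as they appear in CloudTrail records:

```python
def access_denied_events(records):
    """Yield a summary of each denied S3 call from a CloudTrail Records list."""
    for event in records:
        if event.get("errorCode") in ("AccessDenied", "Forbidden"):
            yield {
                "who": event.get("userIdentity", {}).get("arn"),
                "action": event.get("eventName"),
                "bucket": event.get("requestParameters", {}).get("bucketName"),
                "message": event.get("errorMessage"),
            }
```

The userIdentity.arn field answers the question the S3 error message never does: which principal was actually denied.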

Use IAM Access Analyzer

IAM Access Analyzer can identify S3 buckets with unintended public or cross-account access. Enable it in your account:

aws accessanalyzer create-analyzer \
  --analyzer-name s3-access-review \
  --type ACCOUNT

Standardize on Bucket Policies, Not ACLs

Set BucketOwnerEnforced on all buckets and use bucket policies exclusively. This eliminates an entire class of access issues:

aws s3api put-bucket-ownership-controls \
  --bucket my-bucket \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'

Need Help With S3 Access Issues?

S3 access control is deceptively complex. We have seen organizations lose days debugging 403 errors that turned out to be a single misconfigured VPC endpoint policy or an overlooked Block Public Access setting. If your team is spending too much time fighting access denied errors, we can review your S3 access architecture and implement patterns that prevent these issues from recurring.

Get in touch for a free AWS consultation
