Security

AWS AccessDeniedException: How to Debug IAM Policy Misconfigurations

2026-03-22 · 10 min read

You are deploying a new feature and everything works in your development account. You push to staging and suddenly every API call fails with this:

An error occurred (AccessDeniedException) when calling the PutObject operation:
User: arn:aws:iam::123456789012:user/deploy-user is not authorized to perform:
s3:PutObject on resource: arn:aws:s3:::my-bucket/uploads/file.txt

Or maybe it is even less helpful:

An error occurred (AccessDenied) when calling the AssumeRole operation:
User: arn:aws:sts::123456789012:assumed-role/lambda-role/my-function is not
authorized to perform: sts:AssumeRole on resource:
arn:aws:iam::987654321098:role/cross-account-role

The AccessDeniedException is the single most common error in AWS. Every engineer who works with AWS encounters it regularly, yet it remains one of the most frustrating to debug because the error message rarely tells you the full story. There are at least six different reasons why an IAM request can be denied, and finding the right one requires a systematic approach.

Here is the exact process I use when a client calls with an access denied issue. It works every time.

Step 1: Confirm Who You Are

Before debugging policies, verify which identity is actually making the request. This catches a surprising number of issues — wrong profile, expired credentials, or an assumed role you did not expect.

aws sts get-caller-identity

The output tells you everything:

{
    "UserId": "AIDAEXAMPLEID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/deploy-user"
}

If you expected to be using an IAM role but the output shows an IAM user, that is your first clue. If the account number is wrong, you are in the wrong account entirely. I have seen production outages caused by nothing more than a stale AWS_SESSION_TOKEN left in the environment, silently overriding the profile the engineer thought they were using.

Check your credential chain:

# See which credentials are being used and from where
aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************ABCD              env
secret_key     ****************1234              env
    region                us-east-1      config-file    ~/.aws/config

If the Type column shows env, environment variables are overriding your profile settings. This is one of the most common gotchas when switching between accounts.
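To see why those env rows win, here is a simplified Python model of the precedence. This is a sketch, not the SDK's real resolver, which also covers SSO, instance metadata, container credentials, and more:

```python
def resolve_credential_source(env, profile_config):
    """Simplified model of the AWS CLI credential chain:
    environment variables take precedence over profile settings."""
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "env"  # matches the 'env' Type column in aws configure list
    if profile_config.get("aws_access_key_id"):
        return "shared-credentials-file"
    return None  # real chain would continue: SSO, IMDS, ECS metadata, ...

# Even with a profile configured, the env vars win:
source = resolve_credential_source(
    {"AWS_ACCESS_KEY_ID": "AKIA...", "AWS_SECRET_ACCESS_KEY": "abcd"},
    {"aws_access_key_id": "AKIAOTHERKEY"},
)
print(source)  # env
```

This is why `unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN` is often the fastest fix when switching accounts.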

Step 2: Find the Exact Denial in CloudTrail

The error message you see in your terminal is a summary. CloudTrail has the full story, including which policy denied the request and why.

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=PutObject \
  --start-time "2026-03-22T10:00:00Z" \
  --end-time "2026-03-22T11:00:00Z" \
  --query 'Events[?contains(CloudTrailEvent, `AccessDenied`)].CloudTrailEvent' \
  --output text | jq '.'

Look for the errorCode and errorMessage fields in the CloudTrail event. For many services, AWS now appends the responsible policy type to the error message itself (for example, "with an explicit deny in a service control policy"), which tells you which layer caused the denial.

If you have CloudTrail Lake enabled, this query is even more powerful:

SELECT eventTime, eventName, errorCode, errorMessage,
       userIdentity.arn, requestParameters
FROM cloudtrail_events
WHERE errorCode = 'AccessDenied'
  AND eventTime > '2026-03-22 10:00:00'
ORDER BY eventTime DESC
LIMIT 20
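If you pull events with boto3 rather than the CLI, the same filtering is easy to do client-side. A small sketch, with made-up sample records; the CloudTrailEvent field of each lookup result is a JSON-encoded string, which is why the decode step is needed:

```python
import json

def access_denied_events(events):
    """Filter CloudTrail lookup_events results down to denials.
    Each item's 'CloudTrailEvent' field is a JSON-encoded string."""
    denied = []
    for item in events:
        record = json.loads(item["CloudTrailEvent"])
        if record.get("errorCode") in ("AccessDenied", "AccessDeniedException"):
            denied.append(record)
    return denied

# Hypothetical sample records for illustration:
sample = [
    {"CloudTrailEvent": json.dumps({"eventName": "PutObject",
                                    "errorCode": "AccessDenied",
                                    "errorMessage": "User ... is not authorized"})},
    {"CloudTrailEvent": json.dumps({"eventName": "GetObject"})},  # succeeded
]
print(len(access_denied_events(sample)))  # 1
```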

Root Cause 1: Missing Actions in the IAM Policy

This is the most straightforward cause. The policy simply does not grant the action being attempted. Here is an example of a policy that looks correct but fails:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}

This policy grants s3:GetObject and s3:ListBucket, but if the application also needs to upload files, s3:PutObject is missing. The fix is straightforward — add the missing action.

But there is a subtlety here. The s3:ListBucket action requires the bucket ARN without the /* suffix, while s3:GetObject requires it with the suffix. This is one of the most common IAM mistakes:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
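The reason the actions must be split comes down to plain pattern matching: the object-level pattern never matches the bare bucket ARN. Python's fnmatch behaves closely enough to IAM's * wildcard for this purpose (both match across / characters) to demonstrate it:

```python
from fnmatch import fnmatchcase

bucket_arn = "arn:aws:s3:::my-bucket"
object_arn = "arn:aws:s3:::my-bucket/uploads/file.txt"

# The object-level pattern matches objects, but NOT the bucket itself...
print(fnmatchcase(object_arn, "arn:aws:s3:::my-bucket/*"))  # True
print(fnmatchcase(bucket_arn, "arn:aws:s3:::my-bucket/*"))  # False

# ...which is why s3:ListBucket (a bucket-level action) needs its own
# statement with the bare bucket ARN as the resource.
print(fnmatchcase(bucket_arn, "arn:aws:s3:::my-bucket"))    # True
```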

Use the IAM Policy Simulator to test before deploying:

aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/deploy-user \
  --action-names s3:PutObject \
  --resource-arns arn:aws:s3:::my-bucket/uploads/file.txt \
  --query 'EvaluationResults[*].[EvalActionName, EvalDecision]' \
  --output table

This gives you a clear allowed or implicitDeny result without making an actual API call.

Root Cause 2: Wrong Resource ARN Format

Resource ARNs in IAM policies must match exactly. A missing wildcard, a wrong region, or an incorrect account number will silently deny access. Here are mistakes I see regularly:

// WRONG: Missing /* suffix for object-level actions
"Resource": "arn:aws:s3:::my-bucket"

// CORRECT: Includes /* for object-level actions
"Resource": "arn:aws:s3:::my-bucket/*"

// WRONG: S3 buckets do not have a region or account in the ARN
"Resource": "arn:aws:s3:us-east-1:123456789012:my-bucket/*"

// CORRECT: S3 bucket ARNs have empty region and account fields
"Resource": "arn:aws:s3:::my-bucket/*"

// WRONG: Missing path separator for DynamoDB tables
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:my-table"

// CORRECT: DynamoDB tables use table/ prefix
"Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table"

When in doubt, check the actual resource ARN:

# Get the exact ARN of a DynamoDB table
aws dynamodb describe-table \
  --table-name my-table \
  --query 'Table.TableArn' \
  --output text

# Get the exact ARN of a Lambda function
aws lambda get-function \
  --function-name my-function \
  --query 'Configuration.FunctionArn' \
  --output text
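An ARN always has six colon-separated fields (arn:partition:service:region:account-id:resource), and for S3 the region and account fields are intentionally empty. A minimal parser is handy for sanity checks; this is a sketch, with the split bounded because the resource field may itself contain colons:

```python
def parse_arn(arn):
    """Split an ARN into its six fields. The resource field may itself
    contain colons (e.g. Lambda's function:name), so limit the split to 5."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not a valid ARN: {arn}")
    keys = ("prefix", "partition", "service", "region", "account", "resource")
    return dict(zip(keys, parts))

s3 = parse_arn("arn:aws:s3:::my-bucket/uploads/file.txt")
ddb = parse_arn("arn:aws:dynamodb:us-east-1:123456789012:table/my-table")
print(repr(s3["region"]), repr(s3["account"]))  # '' '' (both empty for S3)
print(ddb["resource"])                          # table/my-table
```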

Root Cause 3: Explicit Deny Overriding Allow

IAM policy evaluation follows a strict hierarchy: explicit deny always wins. If any policy attached to the principal contains an explicit "Effect": "Deny" that matches the request, it overrides all allow statements everywhere.

This is commonly introduced through permission boundaries or organization SCPs without the team realizing it. Check for deny statements:

# List all policies attached to a user
aws iam list-attached-user-policies \
  --user-name deploy-user \
  --query 'AttachedPolicies[*].[PolicyName, PolicyArn]' \
  --output table

# List inline policies
aws iam list-user-policies \
  --user-name deploy-user

# Check group policies too
aws iam list-groups-for-user \
  --user-name deploy-user \
  --query 'Groups[*].GroupName' \
  --output text

Then review each policy for deny statements:

# Get a specific policy version
aws iam get-policy-version \
  --policy-arn arn:aws:iam::123456789012:policy/my-policy \
  --version-id v1 \
  --query 'PolicyVersion.Document' \
  --output json | jq '.Statement[] | select(.Effect == "Deny")'

A common pattern that causes confusion is a deny-all policy with exceptions. For example, this policy denies everything outside a specific region:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-central-1"]
                }
            }
        }
    ]
}

If your service runs in us-west-2, every call is denied regardless of what your allow policies say.
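You can model why that deny fires with a couple of lines. This is a toy version of just this one condition, not the real evaluation engine:

```python
def denied_by_region_policy(requested_region, allowed_regions):
    """Model the StringNotEquals / aws:RequestedRegion deny above:
    the Deny statement matches whenever the region is NOT in the list."""
    return requested_region not in allowed_regions

allowed = ["us-east-1", "eu-central-1"]
print(denied_by_region_policy("us-west-2", allowed))  # True: call is denied
print(denied_by_region_policy("us-east-1", allowed))  # False: deny does not match
```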

Root Cause 4: Service Control Policies (SCPs)

If you are in an AWS Organization, SCPs applied at the organizational unit or account level cap the maximum permissions available to every identity in the account. Even the account's root user cannot override an SCP deny. (The organization's management account itself is exempt from SCPs.)

# Check if the account is in an organization
aws organizations describe-organization 2>/dev/null

# List SCPs affecting the current account
aws organizations list-policies-for-target \
  --target-id 123456789012 \
  --filter SERVICE_CONTROL_POLICY \
  --query 'Policies[*].[Name, Id]' \
  --output table

To see the actual SCP content:

aws organizations describe-policy \
  --policy-id p-abc123def4 \
  --query 'Policy.Content' \
  --output text | jq '.'

SCPs are deny-by-default: if no SCP in the path from the organization root to the account allows an action, it is implicitly denied even if the IAM policy grants it. (The default FullAWSAccess SCP allows everything, so this only bites once your organization restricts it.) A common SCP pattern blocks specific actions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "organizations:LeaveOrganization",
                "ec2:RunInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/AdminRole"
                }
            }
        }
    ]
}

Root Cause 5: Permission Boundaries

Permission boundaries are an advanced IAM feature that sets the maximum permissions an IAM entity can have. The effective permissions are the intersection of the identity policy and the permission boundary — not the union.

# Check if a user has a permission boundary
aws iam get-user \
  --user-name deploy-user \
  --query 'User.PermissionsBoundary' \
  --output json

# Check if a role has a permission boundary
aws iam get-role \
  --role-name my-lambda-role \
  --query 'Role.PermissionsBoundary' \
  --output json

If the permission boundary does not include the action you need, adding it to the identity policy will not help. You must also add it to the boundary. This is a common source of confusion when teams grant permissions to a role but forget to update the boundary.
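The intersection rule is easy to model if you treat each policy as a set of allowed actions. This is a deliberate simplification that ignores resources and conditions:

```python
identity_policy = {"s3:GetObject", "s3:PutObject", "dynamodb:GetItem"}
permission_boundary = {"s3:GetObject", "s3:ListBucket"}

# Effective permissions are the intersection, not the union:
effective = identity_policy & permission_boundary
print(sorted(effective))  # ['s3:GetObject']

# s3:PutObject is in the identity policy but NOT the boundary,
# so the request is still denied:
print("s3:PutObject" in effective)  # False
```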

Root Cause 6: Session Policies and AssumeRole Restrictions

When you assume a role, you can optionally pass a session policy that further restricts the role's permissions. Also, the role's trust policy must explicitly allow the principal to assume it.

Check the trust policy:

aws iam get-role \
  --role-name cross-account-role \
  --query 'Role.AssumeRolePolicyDocument' \
  --output json | jq '.'

A common trust policy mistake is not including the sts:ExternalId condition when it is required, or specifying the wrong principal:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "my-external-id"
                }
            }
        }
    ]
}

If you call AssumeRole without the --external-id parameter, it fails. If you specify the wrong account in the Principal, it fails. Both produce the same generic AccessDenied error.
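Both failure modes are easy to model with a toy checker. This is a hypothetical helper, not the STS API; real evaluation also involves other conditions, session tags, and more:

```python
def can_assume(trust, caller_account, external_id=None):
    """Model the two AssumeRole failure modes above: wrong principal
    account, or missing/incorrect sts:ExternalId."""
    if caller_account != trust["principal_account"]:
        return False  # Principal mismatch
    required = trust.get("required_external_id")
    if required is not None and external_id != required:
        return False  # ExternalId missing or wrong
    return True

trust = {"principal_account": "123456789012",
         "required_external_id": "my-external-id"}

print(can_assume(trust, "123456789012"))                    # False: no external id
print(can_assume(trust, "987654321098", "my-external-id"))  # False: wrong account
print(can_assume(trust, "123456789012", "my-external-id"))  # True
```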

The IAM Policy Evaluation Logic

Understanding the evaluation order is critical. AWS evaluates policies in this order:

  1. Explicit deny — Any deny in any policy immediately blocks the request
  2. Organization SCPs — Must allow the action (if applicable)
  3. Resource-based policies — Can grant cross-account access independently
  4. Permission boundaries — Must allow the action (if set)
  5. Session policies — Must allow the action (if set)
  6. Identity-based policies — Must allow the action

If any layer does not allow the request (or explicitly denies it), the request is denied. This is why you can have a perfectly correct IAM policy and still get AccessDenied — the denial is coming from a different layer entirely.
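The layered logic can be sketched as a single function. This is again a simplification: in particular, resource-based policies can grant access on their own in some same-account cases, which this model ignores:

```python
def evaluate(action, layers):
    """Each layer is a dict with 'denies' and 'allows' action sets,
    plus 'applicable' (whether the layer is in play for this request)."""
    # 1. Any explicit deny in any applicable layer wins immediately.
    for layer in layers:
        if layer["applicable"] and action in layer["denies"]:
            return "explicitDeny"
    # 2. Otherwise every applicable layer must allow the action.
    for layer in layers:
        if layer["applicable"] and action not in layer["allows"]:
            return "implicitDeny"
    return "allowed"

layers = [
    {"applicable": True,  "denies": set(), "allows": {"s3:PutObject"}},  # SCP
    {"applicable": False, "denies": set(), "allows": set()},  # boundary (none set)
    {"applicable": True,  "denies": set(), "allows": {"s3:PutObject"}},  # identity
]
print(evaluate("s3:PutObject", layers))  # allowed
```

Note how the boundary layer is skipped entirely when no boundary is set, but becomes a mandatory gate the moment one is attached.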

Prevention Best Practices

After debugging hundreds of IAM issues, here are the practices I recommend to every client:

  1. Use IAM Access Analyzer to validate policies before deployment:
aws accessanalyzer validate-policy \
  --policy-document file://policy.json \
  --policy-type IDENTITY_POLICY \
  --query 'findings[*].[findingType, issueCode, learnMoreLink]' \
  --output table
  2. Enable CloudTrail in all regions and all accounts. You cannot debug what you cannot see.

  3. Use least privilege with gradual expansion. Start with minimal permissions and add as needed, rather than starting broad and trying to restrict later.

  4. Tag-based access control reduces ARN-matching errors. Instead of listing specific resource ARNs, use conditions based on tags:

{
    "Effect": "Allow",
    "Action": "ec2:*",
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:ResourceTag/Environment": "production"
        }
    }
}

  5. Test with the IAM Policy Simulator before every deployment. Automate this as part of your CI/CD pipeline.

  6. Use AWS CloudFormation or CDK to manage IAM policies. Manual console changes are the top source of misconfigurations we see in client audits.

When to Call for Help

IAM misconfigurations are rarely isolated problems. They often reveal deeper architectural issues — overly complex permission models, missing automation, or inconsistent practices across teams. If your team spends more than a few hours per week debugging access issues, it is time to step back and review your IAM strategy holistically.

We help AWS teams design clean, maintainable IAM architectures that minimize these errors. If you are fighting AccessDeniedException more than you should be, get in touch for a free consultation — we will review your IAM setup and show you exactly where the problems are hiding.

Need help with your AWS infrastructure?

Book a free 30-minute consultation to discuss your challenges.