Security

STS AssumeRole AccessDenied: Fixing Cross-Account Role Trust Policies

2026-05-17 · 9 min read

You are setting up cross-account access for your CI/CD pipeline. The deploy role exists in the target account, your pipeline has the right role ARN, and yet every deployment fails with this:

An error occurred (AccessDenied) when calling the AssumeRole operation:
User: arn:aws:iam::111111111111:role/pipeline-role is not authorized to perform:
sts:AssumeRole on resource: arn:aws:iam::222222222222:role/deploy-role

Or sometimes you get this slightly different variant:

An error occurred (AccessDenied) when calling the AssumeRole operation:
User: arn:aws:sts::111111111111:assumed-role/pipeline-role/session-name is not
authorized to perform: sts:AssumeRole on resource:
arn:aws:iam::222222222222:role/deploy-role with an explicit deny in a
resource-based policy

The STS AssumeRole error is one of the most frustrating in AWS because it sits at the intersection of two IAM policies in two different accounts. Both sides need to agree, and the error message does not tell you which side is rejecting the request. In my experience consulting with teams on cross-account architectures, this error accounts for about a third of all escalations during initial multi-account setups.

Here is how to systematically diagnose and fix it.

Step 1: Confirm Your Identity

Before anything else, verify who the caller actually is. Credential chain issues cause more AssumeRole failures than most people realize.

aws sts get-caller-identity
{
    "UserId": "AROAEXAMPLEID:session-name",
    "Account": "111111111111",
    "Arn": "arn:aws:sts::111111111111:assumed-role/pipeline-role/session-name"
}

If the account number or ARN does not match what you expected, you are using the wrong credentials. Check for stale environment variables or a misconfigured profile:

aws configure list
env | grep AWS_

Credential variables in your environment take precedence over your configured profile, so an expired AWS_SESSION_TOKEN will cause confusing failures even when the profile itself is correct. Clear it with unset AWS_SESSION_TOKEN if needed.
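When in doubt, clear the full set of credential variables rather than just the session token, since a leftover access key pair causes the same class of failure:

```shell
# Clear every credential variable, not just the session token; environment
# credentials always take precedence over ~/.aws/credentials.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Confirm nothing credential-related is left over
env | grep AWS_ || echo "no AWS_ credential variables set"
```

After this, aws sts get-caller-identity should reflect your profile again.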

Step 2: Test the AssumeRole Call

Run the assume-role call explicitly from the CLI so you can see the exact error:

aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/deploy-role \
  --role-session-name test-session \
  --duration-seconds 900

If an external ID is required, include it:

aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/deploy-role \
  --role-session-name test-session \
  --external-id my-external-id-123 \
  --duration-seconds 900

If this succeeds, your problem is in your application code or pipeline configuration. If it fails, continue diagnosing.

Root Cause 1: Trust Policy Does Not Allow the Calling Principal

This is the most common cause. The trust policy on the target role must explicitly list the calling principal. Here is a common mistake — the trust policy allows a specific IAM user but the caller is actually an IAM role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/deploy-user"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

If your caller is arn:aws:iam::111111111111:role/pipeline-role, this trust policy will deny the request. Fix it by specifying the correct principal:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/pipeline-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

To inspect the current trust policy:

aws iam get-role \
  --role-name deploy-role \
  --query 'Role.AssumeRolePolicyDocument' \
  --output json \
  --profile target-account

You can also allow the entire account to assume the role and then control access on the caller side:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

This is a valid approach but it means any principal in account 111111111111 with sts:AssumeRole permission can assume this role. Use it when you want centralized control in the source account.
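As a mental model, the principal check boils down to: does the caller's ARN, or its account root, appear in an Allow statement of the trust policy? The sketch below is a simplification for reasoning about this step only; the real IAM engine also evaluates conditions, explicit denies, SCPs, and the caller-side sts:AssumeRole permission:

```python
def principal_allows(trust_policy: dict, caller_arn: str) -> bool:
    """Simplified check: is the caller ARN, or its account root,
    listed as an allowed AWS principal in any Allow statement?"""
    account_id = caller_arn.split(":")[4]
    root_arn = f"arn:aws:iam::{account_id}:root"
    for stmt in trust_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        if caller_arn in principals or root_arn in principals:
            return True
    return False

trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
    }],
}
print(principal_allows(trust, "arn:aws:iam::111111111111:role/pipeline-role"))  # True
print(principal_allows(trust, "arn:aws:iam::333333333333:role/pipeline-role"))  # False
```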

Root Cause 2: Caller Missing sts:AssumeRole Permission

Both sides must agree. Even if the trust policy allows your principal, the caller also needs an IAM policy granting sts:AssumeRole on the target role ARN. Check the caller's policies:

aws iam list-attached-role-policies --role-name pipeline-role
aws iam list-role-policies --role-name pipeline-role

Then inspect each policy:

aws iam get-role-policy \
  --role-name pipeline-role \
  --policy-name AssumeRolePolicy

The caller needs at minimum:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::222222222222:role/deploy-role"
    }
  ]
}

A common mistake is restricting the Resource to the wrong ARN or using a wildcard that does not match. If you use arn:aws:iam::222222222222:role/deploy-*, make sure the role name actually starts with deploy-.
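For the simple * wildcards used in policy ARNs, Python's fnmatch is a close enough stand-in to sanity-check a pattern offline before touching IAM (it is not the real IAM matcher, but it agrees for patterns like these):

```python
from fnmatch import fnmatch

# Offline sanity check: does the Resource pattern in the caller's policy
# actually match the role you are trying to assume?
pattern = "arn:aws:iam::222222222222:role/deploy-*"

print(fnmatch("arn:aws:iam::222222222222:role/deploy-role", pattern))       # True
print(fnmatch("arn:aws:iam::222222222222:role/prod-deploy-role", pattern))  # False
```

The second case is the classic trap: the wildcard anchors at the start of the role name, so a prefix like prod- silently breaks the match.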

Root Cause 3: ExternalId Mismatch

When a trust policy requires an external ID, omitting it or providing the wrong value results in AccessDenied. This is the confused deputy prevention mechanism, primarily used when granting access to third-party services.

The trust policy looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "unique-external-id-abc123"
        }
      }
    }
  ]
}

If you do not pass --external-id unique-external-id-abc123 in your AssumeRole call, it will fail. The error message does not indicate that an external ID was expected — it just says AccessDenied.

To check if the trust policy requires an external ID, inspect it and look for the sts:ExternalId condition:

aws iam get-role \
  --role-name deploy-role \
  --query 'Role.AssumeRolePolicyDocument.Statement[*].Condition' \
  --profile target-account
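If you already have the trust policy document as JSON (for example, from the get-role call above), a quick offline scan tells you whether an external ID is expected. This is a hypothetical helper, not an AWS API:

```python
def required_external_ids(trust_policy: dict) -> list:
    """Collect any sts:ExternalId values required by the trust policy's
    condition blocks, regardless of the condition operator used."""
    ids = []
    for stmt in trust_policy.get("Statement", []):
        for operator, keys in stmt.get("Condition", {}).items():
            for key, value in keys.items():
                if key.lower() == "sts:externalid":
                    ids.extend(value if isinstance(value, list) else [value])
    return ids

trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "unique-external-id-abc123"}},
    }],
}
print(required_external_ids(trust))  # ['unique-external-id-abc123']
```

A non-empty result means every AssumeRole call must pass --external-id with one of those values.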

Root Cause 4: MFA Required but Not Provided

Trust policies can require MFA for additional security. This is common for sensitive roles like production admin access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:user/admin-user"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "Bool": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}

To assume a role with MFA, you must provide the MFA serial number and a current token code:

aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/admin-role \
  --role-session-name admin-session \
  --serial-number arn:aws:iam::111111111111:mfa/admin-user \
  --token-code 123456

Note that service roles, Lambda execution roles, and EC2 instance profiles cannot satisfy MFA conditions. If your trust policy requires MFA but the caller is a service, the call will always fail.

Root Cause 5: Role Chaining Duration Limit

When you assume role A and then use those credentials to assume role B, that is role chaining. AWS enforces a hard limit of one hour for chained role sessions, regardless of the --duration-seconds you request.

# First hop: allowed up to the role's maximum session duration (configurable up to 12 hours)
aws sts assume-role \
  --role-arn arn:aws:iam::222222222222:role/intermediate-role \
  --role-session-name hop1 \
  --duration-seconds 3600

# Second assume using hop1 credentials: max 1 hour regardless of what you request
aws sts assume-role \
  --role-arn arn:aws:iam::333333333333:role/final-role \
  --role-session-name hop2 \
  --duration-seconds 3600

If you request a duration longer than 3600 seconds in a chained session, the call is rejected with a validation error rather than AccessDenied. The fix is either to avoid chaining or to request no more than 3600 seconds on any chained hop.
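A small pre-flight check in deployment tooling can catch both duration limits before the API call is ever made. This is a hypothetical helper; the 3600-second cap for chained sessions is the documented hard limit:

```python
def validate_duration(requested: int, max_session_duration: int, chained: bool) -> None:
    """Fail fast if a requested AssumeRole session duration cannot succeed."""
    CHAIN_LIMIT = 3600  # hard AWS limit for role-chained sessions
    if chained and requested > CHAIN_LIMIT:
        raise ValueError(
            f"chained sessions are capped at {CHAIN_LIMIT}s, requested {requested}s")
    if requested > max_session_duration:
        raise ValueError(
            f"role MaxSessionDuration is {max_session_duration}s, requested {requested}s")

validate_duration(900, 3600, chained=True)        # fine
# validate_duration(7200, 43200, chained=True)    # raises ValueError: chain limit
# validate_duration(3600, 900, chained=False)     # raises ValueError: role maximum
```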

Also check the role's maximum session duration setting:

aws iam get-role \
  --role-name deploy-role \
  --query 'Role.MaxSessionDuration'

If you request 3600 seconds but the role's maximum is set to 900 seconds, the call fails. Update it if needed:

aws iam update-role \
  --role-name deploy-role \
  --max-session-duration 3600

Root Cause 6: Restrictive Condition Keys in Trust Policy

Organizations often add condition keys to trust policies for tighter security. These conditions silently deny access when not met:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-abc123def4"
        },
        "ArnLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/pipeline-*"
        }
      }
    }
  ]
}

This trust policy allows any role matching pipeline-* from any account in the organization o-abc123def4. If your calling role is named deploy-role instead of pipeline-deploy, it will be denied.

Check your organization ID:

aws organizations describe-organization \
  --query 'Organization.Id'

And verify the caller's ARN matches any ArnLike or StringLike conditions.
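You can dry-run the two conditions from the example trust policy offline. This is a simplified sketch of just those two checks (the real evaluation also handles multi-value keys, explicit denies, and many more operators); note that both conditions must hold, since conditions in a statement AND together:

```python
from fnmatch import fnmatch

def conditions_match(caller_arn: str, caller_org_id: str,
                     required_org_id: str, arn_pattern: str) -> bool:
    """Evaluate the aws:PrincipalOrgID (StringEquals) and
    aws:PrincipalArn (ArnLike) conditions from the example policy."""
    return caller_org_id == required_org_id and fnmatch(caller_arn, arn_pattern)

print(conditions_match(
    "arn:aws:iam::111111111111:role/pipeline-deploy", "o-abc123def4",
    "o-abc123def4", "arn:aws:iam::*:role/pipeline-*"))  # True
print(conditions_match(
    "arn:aws:iam::111111111111:role/deploy-role", "o-abc123def4",
    "o-abc123def4", "arn:aws:iam::*:role/pipeline-*"))  # False
```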

Root Cause 7: SCPs or Permission Boundaries Blocking sts:AssumeRole

Service Control Policies (SCPs) at the organization level can restrict sts:AssumeRole even if the IAM policies are correct. If your organization has an SCP denying cross-account access except to approved roles, the call fails with AccessDenied.

Similarly, if the calling role has a permissions boundary, that boundary must also allow sts:AssumeRole.

To check the permissions boundary:

aws iam get-role \
  --role-name pipeline-role \
  --query 'Role.PermissionsBoundary'

SCPs are harder to inspect without organization admin access:

aws organizations list-policies \
  --filter SERVICE_CONTROL_POLICY \
  --profile org-admin

Prevention and Best Practices

After fixing the immediate issue, put these practices in place to prevent future AssumeRole failures:

Use aws:PrincipalOrgID instead of account IDs. This allows any account in your organization without maintaining a list of account numbers:

{
  "Condition": {
    "StringEquals": {
      "aws:PrincipalOrgID": "o-abc123def4"
    }
  }
}

Always use ExternalId for third-party access. Never skip this step when a vendor asks you to create a cross-account role. It prevents the confused deputy attack.

Test cross-account access in CI before relying on it. Add a step to your pipeline that runs aws sts assume-role and verifies the result before proceeding to deployment.

Use IAM Access Analyzer to validate trust policies. Access Analyzer flags overly permissive trust policies and identifies external access:

aws accessanalyzer list-findings \
  --analyzer-arn arn:aws:access-analyzer:us-east-1:222222222222:analyzer/my-analyzer \
  --filter '{"resourceType": {"eq": ["AWS::IAM::Role"]}}'

Document role chains. If your architecture requires role chaining (hub-and-spoke access patterns), document the chain and ensure no hop requires more than one hour.

Avoid wildcard principals in trust policies. Using "AWS": "*" in the principal without conditions allows any AWS account to attempt assuming the role. Always pair it with aws:PrincipalOrgID or a specific account condition.

When to Call for Help

If you have verified all of the above and the error persists, the cause may be a race condition during role creation (eventual consistency means a newly created role may not be assumable for a few seconds), a cross-region STS endpoint issue, or a complex SCP interaction that is difficult to trace without organization-level visibility.
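For the eventual-consistency case specifically, a retry with backoff usually resolves it. The helper below is a generic sketch; the commented usage assumes a boto3 STS client named sts, which is not shown:

```python
import time

def retry_with_backoff(fn, attempts: int = 5, base_delay: float = 1.0):
    """Retry fn() with exponential backoff. Useful when assuming a role
    immediately after creating it, since IAM is eventually consistent."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:  # in real code, catch the specific client error
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Usage sketch (assumes a boto3 STS client named `sts`):
# creds = retry_with_backoff(lambda: sts.assume_role(
#     RoleArn="arn:aws:iam::222222222222:role/deploy-role",
#     RoleSessionName="post-create-check"))
```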

We help teams design and debug cross-account access architectures regularly. If your multi-account setup is blocking deployments or you want a security review of your trust policies, get in touch for a free consultation. We will map your role trust chain and identify every gap.

Need help with your AWS infrastructure?

Book a free 30-minute consultation to discuss your challenges.