Secrets Manager ResourceNotFoundException: Finding and Fixing Missing Secrets
2026-05-10 · 9 min read
Your application just crashed on startup with this error:
ResourceNotFoundException: Secrets Manager can't find the specified secret.
(Service: AWSSecretsManager; Status Code: 400; Error Code: ResourceNotFoundException;
Request ID: a1b2c3d4-5678-90ab-cdef-EXAMPLE)
Or your CloudFormation stack is stuck because a dynamic reference to a secret is failing:
Template error: instance of Fn::Sub references invalid resource attribute
'SecretString' for resource 'MyDatabaseSecret'
The ResourceNotFoundException from Secrets Manager is one of those errors that sounds straightforward but has at least half a dozen possible causes. The secret might exist but be in the wrong region. It might have been deleted and is sitting in a pending deletion window. The IAM permissions might be blocking access in a way that masquerades as "not found." Let me walk through each cause and the exact commands to diagnose them.
Step 1: Verify the Secret Exists
Start by searching for the secret. If you know the exact name:
aws secretsmanager describe-secret \
--secret-id my-app/database/credentials \
--query '{Name:Name,ARN:ARN,DeletedDate:DeletedDate,RotationEnabled:RotationEnabled}' \
--output table
If this returns ResourceNotFoundException, the secret does not exist with that name in this region. Try listing all secrets to find it:
aws secretsmanager list-secrets \
--query 'SecretList[*].{Name:Name,ARN:ARN,DeletedDate:DeletedDate}' \
--output table
Look carefully at the output. Common naming mismatches I see in the field:
- my-app/database/credentials vs my-app/Database/Credentials (case sensitivity)
- prod/my-app/db vs my-app/prod/db (path segment order)
- my-app-database-credentials vs my-app/database/credentials (hyphens vs slashes)
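These near-miss patterns are easy to check programmatically. A minimal sketch, where the helper name and the specific set of rules are my own and not any AWS API:

```python
def likely_match(requested, actual):
    """Classify the near-miss between a requested and an actual secret name.

    Returns a short label describing the mismatch, or None if the names
    are unrelated. Rules are illustrative, not exhaustive.
    """
    if requested == actual:
        return "exact"
    if requested.lower() == actual.lower():
        return "case mismatch"
    # Normalize hyphens to slashes on both sides to catch separator mix-ups
    if requested.replace("-", "/") == actual.replace("-", "/"):
        return "separator mismatch"
    # Same path segments in a different order
    if sorted(requested.lower().split("/")) == sorted(actual.lower().split("/")):
        return "segment order mismatch"
    return None
```

Run it against the output of list-secrets to surface the closest candidates instead of eyeballing a long table.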
Root Cause 1: Wrong Region
This is the number one cause of ResourceNotFoundException that I encounter. Secrets Manager is a regional service. A secret created in us-east-1 does not exist in eu-west-1. If your application is configured with the wrong region, or if you deployed to a new region without replicating the secrets, you will get this error.
Check which region your SDK or CLI is targeting:
# Check your current CLI region
aws configure get region
# List secrets in a specific region
aws secretsmanager list-secrets \
--region us-east-1 \
--query 'SecretList[?contains(Name, `my-app`)].{Name:Name,ARN:ARN}' \
--output table
If you find the secret in a different region, you have three options:
- Fix the application configuration to use the correct region
- Replicate the secret to the region where the application runs:
aws secretsmanager replicate-secret-to-regions \
--secret-id my-app/database/credentials \
--add-replica-regions Region=eu-west-1
- Create a new secret in the target region (appropriate if the secret values differ per region, like database endpoints)
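If you would rather search every candidate region in one pass, a sketch along these lines works. It assumes a client factory such as lambda r: boto3.client("secretsmanager", region_name=r), and deliberately re-raises anything other than a not-found error so a permissions problem is not silently skipped. The function name and shape are illustrative:

```python
def find_secret_region(secret_name, regions, client_factory,
                       not_found=("ResourceNotFoundException",)):
    """Return the first region whose DescribeSecret call succeeds, else None.

    client_factory(region) must return an object with a describe_secret
    method (a boto3 Secrets Manager client in real use).
    """
    for region in regions:
        client = client_factory(region)
        try:
            client.describe_secret(SecretId=secret_name)
            return region
        except Exception as err:
            # botocore's ClientError carries the service error code here
            code = getattr(err, "response", {}).get("Error", {}).get("Code", "")
            if code not in not_found:
                raise  # surface auth/network failures instead of hiding them
    return None
```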
Root Cause 2: Secret Pending Deletion
When you delete a secret in Secrets Manager, it is not immediately destroyed. It enters a pending-deletion state with a recovery window of 7 to 30 days. During this window the secret still exists, but it is hidden from list-secrets by default, and any attempt to read its value fails (GetSecretValue returns InvalidRequestException noting the secret is scheduled for deletion), which application error handling often surfaces as a generic not-found.
Check if the secret is pending deletion:
aws secretsmanager describe-secret \
--secret-id my-app/database/credentials \
--query '{Name:Name,DeletedDate:DeletedDate,DeletionDate:DeletionDate}' \
--output table
If DeletedDate is set, the secret is scheduled for deletion. You can restore it:
aws secretsmanager restore-secret \
--secret-id my-app/database/credentials
This immediately restores the secret and its value. The restore command works anytime during the recovery window.
To see all secrets pending deletion:
aws secretsmanager list-secrets \
--include-planned-deletion \
--query 'SecretList[?DeletedDate!=`null`].{Name:Name,DeletedDate:DeletedDate}' \
--output table
A common scenario: someone runs terraform destroy or deletes a CloudFormation stack, which deletes the secrets. Then the application in another stack or service still references them. The secret exists but is inaccessible.
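In code, detecting that state is a one-line filter over the list-secrets response. The helper is a sketch; the SecretList shape matches the CLI output above:

```python
def secrets_pending_deletion(secret_list):
    """Return the names of secrets that have a DeletedDate set,
    i.e. secrets scheduled for deletion but still recoverable."""
    return [s["Name"] for s in secret_list if s.get("DeletedDate") is not None]
```

Wire it into a nightly check and you catch "terraform destroy deleted the secrets" before the dependent application does.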
Root Cause 3: IAM Permissions Masquerading as Not Found
This is the most deceptive cause. If the calling IAM principal lacks secretsmanager:GetSecretValue, Secrets Manager returns AccessDeniedException rather than ResourceNotFoundException. But many applications catch all Secrets Manager errors and log a generic "secret not found" message, and a principal without secretsmanager:DescribeSecret or secretsmanager:ListSecrets cannot even confirm the secret exists, so a permissions problem can easily masquerade as a missing secret.
More importantly, if a resource policy on the secret explicitly denies access, the error message can be misleading. Some organizations use resource policies to restrict secrets to specific VPCs or principals, and the denial looks like a not-found error in application logs.
Check the IAM permissions of the calling role:
# Simulate the API call to check permissions
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::123456789012:role/my-app-role \
--action-names secretsmanager:GetSecretValue \
--resource-arns arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app/database/credentials-AbCdEf \
--query 'EvaluationResults[0].{Action:EvalActionName,Decision:EvalDecision}' \
--output table
Check if the secret has a resource policy:
aws secretsmanager get-resource-policy \
--secret-id my-app/database/credentials \
--query 'ResourcePolicy' \
--output text | python3 -m json.tool
A restrictive resource policy looks like this — and it will block access for anyone not in the allowed list:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": "*",
"Action": "secretsmanager:GetSecretValue",
"Resource": "*",
"Condition": {
"StringNotEquals": {
"aws:PrincipalArn": [
"arn:aws:iam::123456789012:role/my-app-role",
"arn:aws:iam::123456789012:role/admin-role"
]
}
}
}
]
}
The Fix: Grant the Correct Permissions
The IAM policy for the application role should include:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app/*"
}
]
}
Note the wildcard at the end of the resource ARN. Secrets Manager appends a random 6-character suffix to the secret ARN, so my-app/database/credentials becomes my-app/database/credentials-AbCdEf. If you specify the exact ARN without the suffix, the policy will not match.
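To see why the wildcard matters, here is a deliberately simplified model of IAM resource matching. Real IAM policy evaluation is far more involved; this sketch handles only a trailing wildcard, which is enough to demonstrate the suffix problem:

```python
def resource_matches(policy_resource, secret_arn):
    """Simplified IAM-style resource match: exact string, or a trailing '*'
    wildcard. Real IAM supports more patterns; this is illustrative only."""
    if policy_resource.endswith("*"):
        return secret_arn.startswith(policy_resource[:-1])
    return policy_resource == secret_arn
```

With the random suffix, only the wildcarded resource matches the real ARN; the exact name without the suffix never will.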
Root Cause 4: Secret Version Staging Labels
Secrets Manager supports version staging labels like AWSCURRENT, AWSPREVIOUS, and custom labels. If your application requests a specific version that does not exist, you get ResourceNotFoundException.
Check the available versions:
aws secretsmanager describe-secret \
--secret-id my-app/database/credentials \
--query 'VersionIdsToStages' \
--output json
This returns something like:
{
"a1b2c3d4-5678-90ab-cdef-111111111111": ["AWSCURRENT"],
"a1b2c3d4-5678-90ab-cdef-222222222222": ["AWSPREVIOUS"]
}
If your application requests a staging label like AWSCURRENT and the secret has no versions with that label (which can happen during a failed rotation), the call will fail.
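A quick programmatic check over the VersionIdsToStages map shown above; the helper name is my own:

```python
def stage_present(version_map, stage="AWSCURRENT"):
    """True if any version in a VersionIdsToStages map carries the
    given staging label."""
    return any(stage in stages for stages in version_map.values())
```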
Try retrieving the secret with the default label:
aws secretsmanager get-secret-value \
--secret-id my-app/database/credentials \
--version-stage AWSCURRENT \
--query '{Name:Name,VersionId:VersionId,CreatedDate:CreatedDate}' \
--output table
Root Cause 5: Rotation Lambda Failing Silently
If secret rotation is configured but the rotation Lambda is failing, the secret can end up in an inconsistent state. The rotation process creates a new version with the AWSPENDING label, but if the Lambda fails before promoting it to AWSCURRENT, the secret may become inaccessible.
Check the rotation configuration and status:
aws secretsmanager describe-secret \
--secret-id my-app/database/credentials \
--query '{RotationEnabled:RotationEnabled,RotationLambdaARN:RotationLambdaARN,LastRotatedDate:LastRotatedDate,LastChangedDate:LastChangedDate}' \
--output table
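A heuristic over those same fields can flag a stuck rotation in monitoring. The 35-day threshold is an assumption you would tune to your rotation schedule; the function and its name are a sketch, not an AWS API:

```python
from datetime import datetime, timedelta, timezone

def rotation_looks_stuck(desc, max_age_days=35):
    """Heuristic: rotation is enabled but no rotation has completed
    within the expected window. `desc` is a DescribeSecret response;
    LastRotatedDate is a timezone-aware datetime when present."""
    if not desc.get("RotationEnabled"):
        return False
    last = desc.get("LastRotatedDate")
    if last is None:
        return True  # rotation configured but never completed successfully
    return datetime.now(timezone.utc) - last > timedelta(days=max_age_days)
```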
Check the rotation Lambda's CloudWatch logs for errors:
# Get the Lambda function name from the ARN
aws secretsmanager describe-secret \
--secret-id my-app/database/credentials \
--query 'RotationLambdaARN' \
--output text
# Check recent Lambda invocations for errors
# (date -d is GNU date syntax; on macOS use: date -v-24H +%s000)
aws logs filter-log-events \
--log-group-name /aws/lambda/my-app-secret-rotation \
--start-time $(date -d '24 hours ago' +%s000) \
--filter-pattern "ERROR" \
--query 'events[*].{Time:timestamp,Message:message}' \
--output table
If rotation is stuck, you can cancel it and manually update the secret:
aws secretsmanager cancel-rotate-secret \
--secret-id my-app/database/credentials
Root Cause 6: CloudFormation Dynamic References to Non-Existent Secrets
CloudFormation supports dynamic references to Secrets Manager with the syntax {{resolve:secretsmanager:secret-id}}, optionally extended as {{resolve:secretsmanager:secret-id:SecretString:json-key}} to pull a single JSON field. If the referenced secret does not exist when the stack is created or updated, the stack operation fails.
Common issues:
- The secret is created in a different stack that has not been deployed yet
- The secret name in the template does not match the actual secret name
- The secret was deleted outside of CloudFormation
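For reference, a dynamic reference in a template looks like this (the resource, secret name, and JSON keys are illustrative, and other required DBInstance properties are omitted):

```yaml
Resources:
  MyDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      # Both references fail the stack operation if the secret is missing
      MasterUsername: '{{resolve:secretsmanager:my-app/database/credentials:SecretString:username}}'
      MasterUserPassword: '{{resolve:secretsmanager:my-app/database/credentials:SecretString:password}}'
```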
Verify the secret name matches what CloudFormation expects:
# Check the template for secret references
aws cloudformation get-template \
--stack-name my-app-stack \
--query 'TemplateBody' \
--output text | grep -i "secretsmanager"
Step-by-Step Diagnosis Workflow
When you encounter ResourceNotFoundException, run through this checklist in order:
- Verify the secret name — check for typos, case sensitivity, path segment order
- Check the region — is the secret in the same region as the application?
- Check for pending deletion — use describe-secret to see if DeletedDate is set
- Test IAM permissions — use simulate-principal-policy or try get-secret-value with an admin role
- Check the resource policy — a deny policy can masquerade as not found
- Verify version staging labels — ensure AWSCURRENT exists
- Check rotation status — a failed rotation can leave the secret inaccessible
# Quick diagnostic script — run all checks at once
SECRET_ID="my-app/database/credentials"
REGION="us-east-1"
echo "=== Describe Secret ==="
aws secretsmanager describe-secret \
--secret-id $SECRET_ID \
--region $REGION 2>&1
echo "=== Resource Policy ==="
aws secretsmanager get-resource-policy \
--secret-id $SECRET_ID \
--region $REGION 2>&1
echo "=== Get Secret Value ==="
aws secretsmanager get-secret-value \
--secret-id $SECRET_ID \
--region $REGION \
--query '{VersionId:VersionId,CreatedDate:CreatedDate}' 2>&1
Prevention Best Practices
- Use consistent naming conventions. Adopt a standard like {environment}/{application}/{secret-type} and enforce it with an IAM policy that only allows creating secrets matching the pattern.
- Enable CloudTrail logging for Secrets Manager API calls. This lets you trace who deleted a secret and when:
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=DeleteSecret \
--start-time 2026-05-01T00:00:00Z \
--query 'Events[*].{Time:EventTime,User:Username,Resource:Resources[0].ResourceName}' \
--output table
- Set the deletion recovery window to 30 days (the maximum) in production to give yourself time to recover:
aws secretsmanager delete-secret \
--secret-id my-app/database/credentials \
--recovery-window-in-days 30
- Use cross-region replication for secrets that are needed in multiple regions, rather than creating duplicate secrets manually.
- Test secret access in CI/CD. Add a pre-deployment step that verifies the application can read all required secrets before rolling out new code.
- Use infrastructure as code to manage secrets and their resource policies together. This prevents drift between what the application expects and what actually exists.
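The CI/CD pre-deployment check and the naming convention can be combined into one preflight step. A sketch: the regex encodes a hypothetical {environment}/{application}/{secret-type} convention, and existing_names would come from a real list-secrets call in practice:

```python
import re

# Hypothetical naming convention: {environment}/{application}/{secret-type}
NAME_PATTERN = re.compile(r"^(dev|staging|prod)/[a-z0-9-]+/[a-z0-9-]+$")

def preflight(required, existing_names):
    """Return a list of problems: names violating the convention and
    secrets the application needs that do not exist."""
    problems = []
    for name in required:
        if not NAME_PATTERN.fullmatch(name):
            problems.append(f"bad name: {name}")
        if name not in existing_names:
            problems.append(f"missing: {name}")
    return problems
```

Fail the pipeline if the returned list is non-empty, and the ResourceNotFoundException surfaces in CI instead of in production.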
Need Help with Secrets Management?
Secrets Manager misconfigurations can block deployments and create security blind spots. If you are struggling with rotation failures, cross-account secret access, or building a consistent secrets management strategy across your AWS accounts, we can help. Contact us for a free AWS consultation — we will review your secrets architecture and identify gaps before they cause production outages.