What to Do When a Secret Leaks to an AI Tool: Incident Response Playbook
An API key, database password, or customer PII was sent to ChatGPT or Copilot. Here's a step-by-step incident response plan to contain the damage.
It happened. A developer pasted production credentials into ChatGPT, or your scanning tool flagged a secret that made it through before you had monitoring in place. Don't panic — but do act fast. Here's your incident response playbook.
Immediate Actions (First 15 Minutes)
1. Identify What Was Exposed
Determine exactly what was sent:
- API keys/tokens — which service? Production or staging?
- Database credentials — which database? What access level?
- Customer PII — how many records? What data types (names, SSNs, financial)?
- Internal URLs/endpoints — do they reveal attack surface?
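Triage goes faster if you can classify the leaked material automatically. A minimal sketch, assuming a small set of well-known credential formats (the patterns below are illustrative; extend them for your own stack):

```python
import re

# Illustrative patterns for common credential formats; not exhaustive.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_live_key": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
    "github_token": re.compile(r"\bghp_[0-9a-zA-Z]{36}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def classify_exposure(text: str) -> list[str]:
    """Return the credential types found in a leaked prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

print(classify_exposure("here is my key AKIAABCDEFGHIJKLMNOP"))
# -> ['aws_access_key']
```

Knowing up front whether you are dealing with an AWS key, a payment key, or a private key determines which rotation runbook you reach for next.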
2. Rotate the Credential Immediately
Don't wait. Rotate the exposed credential right now:
# AWS — rotate access key (create the new key, switch your apps to it, then delete the exposed one)
aws iam create-access-key --user-name affected-user
aws iam delete-access-key --user-name affected-user --access-key-id AKIA_EXPOSED_KEY
# GitHub — revoke and recreate token
# Go to Settings → Developer settings → Personal access tokens → Revoke
# Database — change password
ALTER USER app_user WITH PASSWORD 'new_secure_password_here';
# Stripe — roll API key
# Dashboard → Developers → API keys → Roll key
Key principle: assume the credential is compromised. Even if the AI provider has a zero-retention policy, you can't verify that the data wasn't logged, cached, or intercepted in transit.
3. Check for Unauthorized Usage
Look for signs that the exposed credential was already exploited:
- AWS CloudTrail — check for API calls from unfamiliar IPs
- Database audit logs — look for unusual queries or data exports
- Application logs — check for unauthorized API requests
- Billing dashboards — unexpected charges on Stripe, AWS, or other services
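The log checks above can be partially scripted. A sketch, assuming a simple space-separated log format of `timestamp ip key_id path` (the format, the known-IP set, and the key ID are all assumptions; adapt them to your real logs):

```python
# Your legitimate callers and the credential under investigation
# (both values are placeholders for this sketch).
KNOWN_IPS = {"10.0.0.5", "10.0.0.6"}
EXPOSED_KEY_ID = "AKIA_EXPOSED_KEY"

def suspicious_requests(log_lines):
    """Flag requests that used the exposed key from an unrecognized IP."""
    hits = []
    for line in log_lines:
        ts, ip, key_id, path = line.split()
        if key_id == EXPOSED_KEY_ID and ip not in KNOWN_IPS:
            hits.append((ts, ip, path))
    return hits

logs = [
    "2024-05-01T10:00:00 10.0.0.5 AKIA_EXPOSED_KEY /v1/orders",
    "2024-05-01T10:02:00 203.0.113.9 AKIA_EXPOSED_KEY /v1/export",
]
print(suspicious_requests(logs))  # flags the 203.0.113.9 request
```

Any hit here changes the incident from "potential exposure" to "confirmed unauthorized use", which escalates the regulatory assessment in step 6.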
Investigation (First Hour)
4. Determine the Scope
Answer these questions:
- When was the credential first exposed? (Check AI tool history, scanning logs)
- Which AI tool was it sent to? (ChatGPT, Claude, Cursor, Copilot?)
- What was the provider's data retention at the time? (Check their current policy)
- What else was in the same prompt? (Often, multiple secrets are exposed in one paste)
5. Review the AI Provider's Data Policy
Contact the AI provider if the exposure is severe:
- OpenAI — data submitted via API is not used for training by default, but may be retained for up to 30 days for abuse monitoring (verify the current policy)
- Anthropic — retention policies differ between the API and the web interface; check which was used and review the current policy
- Google — Gemini API data retention varies by plan
- GitHub Copilot — prompts are processed in real-time; check if telemetry was enabled
Some providers have processes for requesting data deletion. Use them if available.
6. Assess Regulatory Impact
If PII was exposed, determine your notification obligations:
- GDPR — notification to the supervisory authority within 72 hours of becoming aware of the breach, if personal data was compromised
- HIPAA — breach notification required if PHI was disclosed to an unauthorized party
- CCPA — notification required if California residents' data was involved
- State breach laws — many US states have their own notification timelines
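Notification clocks start at discovery, not at exposure, so record the discovery time immediately. A minimal sketch of the GDPR deadline arithmetic (the function name is illustrative):

```python
from datetime import datetime, timedelta

def gdpr_notification_deadline(discovered_at: datetime) -> datetime:
    """GDPR Art. 33: notify the supervisory authority within 72 hours
    of becoming aware of a personal-data breach."""
    return discovered_at + timedelta(hours=72)

discovered = datetime(2024, 5, 1, 9, 30)
print(gdpr_notification_deadline(discovered))  # 2024-05-04 09:30:00
```

Other regimes (HIPAA, state laws) use different windows; consult counsel rather than hardcoding a single deadline.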
Remediation (First Day)
7. Deploy Prevention Controls
If you don't already have automated scanning, deploy it now:
- Install AxSentinel — 10MB binary, runs locally
- Configure block mode — stop requests containing secrets from reaching AI providers
- Deploy browser extension — covers ChatGPT, Claude, Gemini web interfaces
- Deploy IDE extension — covers Cursor, VS Code, Copilot
- Set up the compliance dashboard — monitor all developers centrally
8. Audit Other Developers
One developer's leak often indicates a systemic problem. Check:
- Are other developers using the same AI tools without scanning?
- Is the leaked credential shared across the team?
- Are there other hardcoded credentials in the codebase?
# Quick scan for hardcoded secrets in your repo
grep -rn "sk_live\|AKIA\|password.*=\|token.*=\|secret.*=" \
  --include="*.py" --include="*.js" --include="*.ts" .
9. Document the Incident
For compliance and post-mortem:
- Timeline of events
- What was exposed and for how long
- Actions taken (rotation, investigation, remediation)
- Root cause (no scanning tool, ignored alert, new AI tool not covered)
- Preventive measures deployed
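The checklist above can be captured as a structured record so every incident is documented the same way. A sketch, with illustrative field names (match them to your own compliance template):

```python
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    # Field names are illustrative; align with your compliance template.
    exposed_item: str
    exposed_at: str
    detected_at: str
    root_cause: str
    actions_taken: list = field(default_factory=list)
    preventive_measures: list = field(default_factory=list)

    def to_markdown(self) -> str:
        return "\n".join([
            f"# Incident: {self.exposed_item}",
            f"- Exposed: {self.exposed_at}",
            f"- Detected: {self.detected_at}",
            f"- Root cause: {self.root_cause}",
            "- Actions: " + "; ".join(self.actions_taken),
            "- Prevention: " + "; ".join(self.preventive_measures),
        ])
```

A consistent record makes the post-mortem in step 10, and any auditor conversation later, far easier.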
Post-Incident (First Week)
10. Conduct a Post-Mortem
Blameless post-mortem focused on systemic improvements:
- Why was a production credential in the developer's local environment?
- Why wasn't the AI prompt scanned before submission?
- How can we prevent this class of incident?
11. Update Your Security Posture
Common improvements:
- Separate credentials — staging vs production, with limited blast radius
- Vault-based secrets — no more hardcoded credentials (HashiCorp Vault, AWS Secrets Manager)
- Mandatory scanning — AxSentinel in block mode for all developers
- Regular rotation — automate credential rotation on a schedule
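Scheduled rotation starts with knowing which credentials have aged out. A minimal sketch of that check, assuming a 90-day policy and an inventory mapping key IDs to creation times (e.g. assembled from `aws iam list-access-keys` output):

```python
from datetime import datetime, timedelta

MAX_KEY_AGE = timedelta(days=90)  # illustrative rotation policy

def keys_due_for_rotation(keys: dict, now: datetime) -> list[str]:
    """Return key IDs older than the rotation policy allows.
    `keys` maps key ID -> creation time."""
    return [kid for kid, created in keys.items()
            if now - created > MAX_KEY_AGE]

inventory = {
    "AKIA_OLD": datetime(2024, 1, 1),
    "AKIA_NEW": datetime(2024, 4, 20),
}
print(keys_due_for_rotation(inventory, datetime(2024, 5, 1)))
# -> ['AKIA_OLD']
```

Run a check like this on a schedule and alert on any hit, so rotation happens before an incident forces it.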
Prevention Is Cheaper Than Response
The average cost of responding to a credential leak — rotation, investigation, potential breach notification, and remediation — is measured in hours of engineering time and potential regulatory fines. The cost of prevention is a 10MB binary and a few minutes of setup.