5 Ways Developers Accidentally Leak API Keys to LLMs
API keys end up in AI prompts more often than you'd think. It's one of the most common and least discussed security risks in modern development. Here are the five ways developers accidentally expose credentials to LLMs, and a practical prevention strategy.
1. Copy-Pasting .env Files
The most straightforward leak. A developer debugging an environment issue pastes their .env file into ChatGPT:
OPENAI_API_KEY=sk-proj-...
DATABASE_URL=postgres://admin:password@prod.db.com/main
STRIPE_KEY=sk_live_...

Why it happens: The developer is focused on the config format, not the values.
Prevention: Scan clipboard/prompt content for known key patterns before submission.
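A minimal sketch of that pre-submission check, using a few well-known key prefixes. The pattern list and function name are illustrative; a production scanner would maintain a much larger, regularly updated ruleset.

```python
import re

# Illustrative patterns for well-known key formats (not exhaustive).
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),              # OpenAI-style secret keys
    re.compile(r"sk_live_[A-Za-z0-9]{16,}"),           # Stripe live keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key IDs
    re.compile(r"(?i)[a-z_]*api[_-]?key\s*=\s*\S+"),   # KEY=value assignments
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that matches a known secret pattern."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

prompt = "Why won't this load?\nOPENAI_API_KEY=sk-proj-abc123def456ghi789jkl"
if find_secrets(prompt):
    print("Blocked: prompt contains credential-like strings")
```

The check runs before the prompt leaves the machine, so a hit can block submission or trigger redaction rather than relying on the developer to notice.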
2. Stack Traces with Connection Strings
Error messages often include connection details:
Error: connect ECONNREFUSED
    at TCPConnectWrap.afterConnect
Connection string: mongodb://app_user:Pr0dP@ss!@10.0.1.50:27017/production

Why it happens: Developers paste full stack traces for debugging help.
Prevention: Scan for connection string patterns (postgres://, mongodb://, redis://) in prompt content.
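One way to sketch that check: match the URL schemes, then only flag a hit when the URL actually embeds a password. The scheme list and function name here are illustrative.

```python
import re
from urllib.parse import urlsplit

# Schemes whose URLs routinely embed credentials (illustrative list).
DB_SCHEMES = ("postgres", "postgresql", "mongodb", "redis", "mysql", "amqp")
URL_RE = re.compile(r"\b(?:%s)://\S+" % "|".join(DB_SCHEMES))

def find_connection_strings(text: str) -> list[str]:
    """Return DB-style URLs that carry an embedded username:password pair."""
    leaks = []
    for match in URL_RE.finditer(text):
        parts = urlsplit(match.group(0))
        if parts.password:  # credentials embedded in the netloc
            leaks.append(match.group(0))
    return leaks
```

Checking `parts.password` keeps the false-positive rate down: a paste containing `redis://localhost:6379/0` is harmless, while `mongodb://app_user:Pr0dP@ss!@...` is not.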
3. Git Diffs with Hardcoded Credentials
Asking an AI to review a PR diff that includes hardcoded keys:
+ const config = {
+   apiKey: "company_prod_ak_8f3j2k4l5m6n7o8p",
+   endpoint: "https://internal-api.company.com",
+ };

Why it happens: The developer wants a code review and includes the full diff without sanitizing it first.
Prevention: ML-based or entropy-based detection catches custom API key formats that fixed regex lists miss.
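A full ML classifier is beyond a snippet, but a common building block for catching unknown key formats is an entropy heuristic: random key material has noticeably higher Shannon entropy than ordinary identifiers. A sketch, where the length cutoff and threshold are illustrative tunables:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in s."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def flag_high_entropy_tokens(text: str, min_len: int = 16,
                             threshold: float = 3.5) -> list[str]:
    """Flag long tokens whose entropy suggests random key material.

    The threshold is a heuristic to tune, not a universal constant.
    """
    tokens = re.findall(r"[A-Za-z0-9_]{%d,}" % min_len, text)
    return [t for t in tokens if shannon_entropy(t) > threshold]
```

On the diff above, `company_prod_ak_8f3j2k4l5m6n7o8p` scores well above the threshold, while a long ordinary identifier like `customer_account_name` falls below it.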
4. Configuration Files with Secrets
Terraform files, Kubernetes manifests, Docker Compose files — all commonly contain embedded credentials:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
data:
  password: UEBzc3cwcmQxMjM=  # base64 of P@ssw0rd123

Why it happens: Infrastructure-as-code files are code, and developers treat them the same way.
Prevention: Scan for base64-encoded strings in security-sensitive contexts.
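A sketch of that check: find base64-shaped tokens, then confirm they actually decode to printable text, which is what encoded passwords and keys usually look like. The length cutoff and helper names are illustrative, and a real scanner would add context rules to cut false positives.

```python
import base64
import binascii
import re

# Base64-shaped tokens of meaningful length, with optional padding.
B64_RE = re.compile(r"\b[A-Za-z0-9+/]{12,}={0,2}")

def decodes_to_text(candidate: str) -> bool:
    """True if candidate is valid base64 that decodes to printable ASCII."""
    if len(candidate) % 4 != 0:
        return False
    try:
        decoded = base64.b64decode(candidate, validate=True)
    except (binascii.Error, ValueError):
        return False
    return all(32 <= b < 127 for b in decoded)

def find_base64_secrets(yaml_text: str) -> list[str]:
    """Return base64 tokens that decode cleanly to printable text."""
    return [m.group(0) for m in B64_RE.finditer(yaml_text)
            if decodes_to_text(m.group(0))]
```

Run against the Secret manifest above, this flags `UEBzc3cwcmQxMjM=` because it decodes to the printable string `P@ssw0rd123`.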
5. Log Output with Bearer Tokens
Pasting server logs that contain authorization headers:
[2026-03-01 14:23:15] POST /api/v2/users
Headers: Authorization: Bearer eyJhbGciOiJSUzI1NiIs...
Body: {"name": "John Smith", "email": "john@acme.com"}

This leaks both the bearer token AND customer PII in a single paste.
Why it happens: Developers debugging API issues need to see the full request context.
Prevention: Scan for JWT tokens, Bearer tokens, and common PII patterns in the same pass.
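A single-pass sketch of that combined scan. Each pattern below is an illustrative example, not an exhaustive token or PII ruleset:

```python
import re

# One pass over the paste, grouping findings by category.
SCAN_PATTERNS = {
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
    "bearer": re.compile(r"(?i)\bbearer\s+[A-Za-z0-9._~+/-]+=*"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_paste(text: str) -> dict[str, list[str]]:
    """Return findings grouped by category from a single scan."""
    return {name: pat.findall(text) for name, pat in SCAN_PATTERNS.items()}
```

Grouping credentials and PII into one pass matters here because, as the log above shows, a single paste often leaks both at once.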
The Common Thread
In every case, the developer isn't trying to leak secrets. They're trying to solve a problem and the secret is embedded in the context they need to share. Manual vigilance doesn't scale — you need automated scanning.
A Practical Prevention Stack
- Pre-commit hooks — catch secrets before they enter version control (GitGuardian, Gitleaks)
- AI prompt scanning — catch secrets before they reach AI providers (AxSentinel)
- Secret rotation — minimize blast radius when leaks happen (Vault, AWS Secrets Manager)
- Monitoring — know when secrets are exposed (CloudTrail, audit logs)
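The first layer can often be wired up in minutes. A sketch of a `.pre-commit-config.yaml` using the Gitleaks hook (the `rev` shown is illustrative; pin it to the latest release):

```yaml
# .pre-commit-config.yaml — run Gitleaks against staged changes
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # illustrative; pin to a current release tag
    hooks:
      - id: gitleaks
```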
AxSentinel handles layer 2 — the AI prompt scanning layer that traditional secret scanners don't cover.