How to Secure VS Code, Cursor, and Copilot: A Developer's Guide
Step-by-step guide to securing AI coding assistants in your IDE. Covers VS Code, Cursor, GitHub Copilot, and Claude Code with practical configurations and scanning setup.
AI coding assistants have become essential developer tools. GitHub reports that 92% of US developers use AI coding tools, and Cursor now has over 1 million monthly active users. But every prompt you send to these tools is an API call to a third-party service — and every API call is a potential data leak.
This guide covers how to secure the most popular AI coding tools without disrupting your workflow.
The Risk: What Leaves Your IDE
When you use an AI coding assistant, the following data is typically sent to the provider's API:
- The current file (or selected code block)
- Surrounding context (nearby files, imports, function signatures)
- Your prompt/question
- File paths and project structure (in some tools)
This means secrets, PII, and credentials that exist anywhere in your open files or project context can end up in an API request.
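To make this concrete, here is a toy sketch of how a request payload gets assembled (illustrative only; `build_context` is not any real tool's API, and the key is fake):

```python
# Illustrative only: a toy model of the context an assistant bundles per request.
def build_context(current_file, neighbors, prompt):
    """Concatenate the pieces a typical completion request contains."""
    parts = [current_file, *neighbors.values(), prompt]
    return "\n".join(parts)

# A hardcoded key in a *neighboring* open file still lands in the payload,
# even though the file you are editing is clean.
payload = build_context(
    current_file="def handler(event):\n    return process(event)",
    neighbors={"config.py": 'API_KEY = "sk-test-1234567890abcdef"'},  # fake key
    prompt="Why does handler() fail on empty events?",
)
print("sk-test-1234567890abcdef" in payload)  # True: the secret is in the outbound request
```

The file you asked about contained no secret; the surrounding context did, and both travel together.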
Securing GitHub Copilot
What Copilot Sends
Copilot sends the current file content, neighboring file tabs, and project context to GitHub's API (powered by OpenAI models). With Copilot Chat, your explicit questions and any code you reference are also sent.
Configuration
```json
// .vscode/settings.json
{
  "github.copilot.enable": {
    "*": true,
    "env": false,
    "plaintext": false
  }
}
```

Disable Copilot for sensitive files:

- `.env` files — contain secrets by definition
- `plaintext` — unstructured text often contains PII
- Configuration files with credentials
Organization Settings
If you manage a GitHub organization:
- Go to Organization Settings → Copilot → Policies
- Disable "Suggestions matching public code" to reduce IP risk
- Enable "Exclude specified files" and add patterns for sensitive paths
- Review Copilot's data retention settings
Limitation
File-level disabling only works for files you know contain secrets. It doesn't catch secrets embedded in regular code files — a hardcoded API key in api_client.py will still be sent to Copilot.
Securing Cursor
What Cursor Sends
Cursor sends your code context to Anthropic (Claude) or OpenAI models. In Composer mode, it may send entire file contents. Cursor also offers a "Privacy Mode" that disables training on your data.
Configuration
1. Enable Privacy Mode:

   Settings → General → Privacy Mode → Enable

   This ensures your code isn't used for model training.

2. Configure the API proxy:

   Cursor supports custom API base URLs, which lets you route all AI traffic through a local scanner:

   Settings → Models → API Base URL
   Set to: http://localhost:8990/v1

   This routes all Cursor AI traffic through a local proxy that scans for secrets and PII before forwarding to the AI provider.

3. Exclude sensitive directories with a `.cursorignore` file (similar to `.gitignore`):

   ```
   .env*
   secrets/
   credentials/
   *.pem
   *.key
   ```

Securing VS Code with AI Extensions
VS Code supports dozens of AI extensions — Copilot, Continue, Cody, and more. Each one has its own API integration and data handling.
Universal Protection: The HTTP Proxy Approach
Instead of configuring each extension individually, use a local HTTP proxy that intercepts all AI API traffic from your IDE:
```bash
# Start the scanner proxy
axsentinel --proxy --port 8990
```

Then configure VS Code to route AI traffic through it by adding to settings.json:

```json
"http.proxy": "http://localhost:8990"
```

This approach works for every AI extension without per-extension configuration.
Per-Extension Settings
For Copilot specifically:
```json
{
  "github.copilot.advanced": {
    "debug.testOverrideProxyUrl": "http://localhost:8990"
  }
}
```

Securing Claude Code (CLI)
Claude Code is a command-line AI assistant that has deep access to your filesystem and can read, write, and execute code.
Configuration
```bash
# Route Claude Code through the scanner proxy
export ANTHROPIC_BASE_URL=http://localhost:8990

# Now all Claude Code API calls pass through local scanning
claude
```

What to Watch For
Claude Code can:
- Read any file in your project directory
- Execute shell commands
- Access environment variables
This means it can inadvertently include .env contents, SSH keys, or database credentials in its API calls. A local scanner proxy catches these before they leave your machine.
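A complementary mitigation is to redact values before env-style content ever enters a prompt. The `redact_env` helper below is a hypothetical sketch, not part of any tool above; it masks the value of every `KEY=value` line while keeping the keys readable for debugging:

```python
import re

ENV_LINE = re.compile(r"^(?P<key>[A-Z][A-Z0-9_]*)=(?P<value>.+)$", re.MULTILINE)

def redact_env(text):
    """Replace the value of every KEY=value line with a placeholder."""
    return ENV_LINE.sub(lambda m: f"{m.group('key')}=<redacted>", text)

dotenv = "DATABASE_URL=postgres://admin:hunter2@db.internal/prod\nDEBUG=true"
print(redact_env(dotenv))
# DATABASE_URL=<redacted>
# DEBUG=<redacted>
```

Keeping the keys visible is deliberate: the AI can still reason about which variables exist without ever seeing their values.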
Securing Browser-Based AI Chat
ChatGPT, Claude, Gemini
When you use AI chat in the browser, you're manually pasting content. This is actually the highest-risk scenario because:
- You're choosing what to paste (no automated context filtering)
- You might paste entire files, logs, or database outputs
- There's no IDE-level file exclusion
Browser Extension Protection
Install a browser extension that scans input fields on AI chat sites before submission:
1. Install the AxSentinel Chrome extension
2. It automatically monitors:
- chat.openai.com (ChatGPT)
- claude.ai (Claude)
- gemini.google.com (Gemini)
- Any site using common AI chat patterns
3. When you type or paste sensitive data, it warns before submission

Best Practices Across All Tools
1. Never Paste Raw Credentials
Before pasting code into any AI tool, check for:
- API keys and tokens
- Database connection strings
- Passwords and secrets in config files
- Customer PII in test data
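Some of these checks can be automated before you paste. The heuristics below are illustrative and will need tuning for your stack; they catch obvious cases, not everything:

```python
import re

# Hedged heuristics for the checklist above; extend per your stack.
CHECKS = {
    "API key/token": re.compile(r"\b(sk-|ghp_|xox[bp]-)[A-Za-z0-9_-]{10,}"),
    "connection string": re.compile(r"\b\w+://[^\s:@/]+:[^\s@/]+@[\w.-]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "password assignment": re.compile(r"(?i)password\s*[=:]\s*\S+"),
}

def pre_paste_check(snippet):
    """Return the checklist items the snippet appears to violate."""
    return [name for name, pattern in CHECKS.items() if pattern.search(snippet)]

print(pre_paste_check('db = "postgres://app:s3cret@db.prod.internal/main"'))
# ['connection string']
```

A hit does not always mean a real secret, but it is cheap to check before the data leaves your clipboard.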
2. Use Test Data
Replace real data with synthetic equivalents:
```python
# Instead of:
user = {"name": "John Smith", "email": "john@acme.com", "ssn": "123-45-6789"}

# Use:
user = {"name": "Test User", "email": "test@example.com", "ssn": "000-00-0000"}
```

3. Review .gitignore Alignment
Files excluded from git should also be excluded from AI assistants. If it's secret enough to .gitignore, it's secret enough to not send to an AI API.
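One way to keep the two in sync is to generate your AI ignore file from `.gitignore` plus a baseline of always-sensitive patterns. The `sync_ai_ignore` helper is an illustrative sketch, not a standard tool:

```python
from pathlib import Path

# Patterns worth excluding from AI context even if .gitignore lacks them.
ALWAYS_EXCLUDE = [".env*", "*.pem", "*.key", "secrets/", "credentials/"]

def sync_ai_ignore(repo):
    """Build a .cursorignore body from .gitignore plus a baseline of secret paths."""
    gitignore = Path(repo) / ".gitignore"
    patterns = []
    if gitignore.exists():
        for line in gitignore.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):  # skip blanks and comments
                patterns.append(line)
    merged = dict.fromkeys(patterns + ALWAYS_EXCLUDE)  # de-dupe, keep order
    return "\n".join(merged) + "\n"

# Example usage:
# Path(".cursorignore").write_text(sync_ai_ignore("."))
```

Running this in a pre-commit hook or CI keeps the AI exclusions from drifting as `.gitignore` evolves.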
4. Set Up Organization-Wide Scanning
For teams, deploy the scanner across all developer workstations:
```bash
# Download and install
curl -sL https://ax-sentinel.com/install.sh | bash

# Start proxy with telemetry to your org dashboard
axsentinel --proxy --port 8990 \
  --client-token axc_your_org_token \
  --org-id your_org_id \
  --user-id your_user_id
```

This gives your security team visibility into AI data leak risks across the organization while keeping all scanning local to each developer's machine.
5. Monitor Detection Trends
After deployment, review your dashboard weekly:
- Which AI providers are being used?
- What types of secrets are most commonly detected?
- Which team members need additional security training?
- Are detection counts trending down over time? (they should be)
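If your scanner can export detection events, the weekly review can be scripted. This sketch assumes a hypothetical export format of (day, detector, user) tuples; adapt the parsing to whatever your dashboard actually emits:

```python
from collections import Counter
from datetime import date

# Hypothetical export: one (day, detector, user) tuple per detection event.
events = [
    (date(2024, 6, 3), "aws_key", "alice"),
    (date(2024, 6, 3), "db_url", "bob"),
    (date(2024, 6, 10), "aws_key", "alice"),
]

def weekly_counts(events):
    """Group detections by ISO week so you can see whether the trend is falling."""
    counts = Counter()
    for day, _detector, _user in events:
        year, week, _ = day.isocalendar()
        counts[(year, week)] += 1
    return dict(sorted(counts.items()))

def top_detectors(events, n=3):
    """The secret types your team leaks most often; targets for training."""
    return Counter(detector for _, detector, _ in events).most_common(n)
```

Comparing `weekly_counts` period over period answers the trend question directly, and `top_detectors` tells you what the next training session should cover.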
Quick Reference: Security Settings by Tool
| Tool | Setting | Where |
|---|---|---|
| GitHub Copilot | Disable for .env files | VS Code settings.json |
| Cursor | Enable Privacy Mode | Settings → General |
| Cursor | Custom API base URL | Settings → Models |
| Claude Code | ANTHROPIC_BASE_URL | Environment variable |
| VS Code (any extension) | HTTP proxy | settings.json → http.proxy |
| ChatGPT/Claude/Gemini | Browser extension | Chrome Web Store |
The Bottom Line
You don't have to choose between AI productivity and data security. The key is adding a scanning layer between your tools and the AI APIs they call. Whether that's a proxy, a browser extension, or an IDE plugin — the important thing is that every prompt gets scanned before it leaves your machine.