Cursor sends your code to AI models. AxSentinel makes sure your secrets don't go with it.
Cursor's AI features send code context to OpenAI and Anthropic APIs. If your codebase contains hardcoded credentials, .env values, or customer data in test fixtures, that data leaves your machine every time Cursor generates or edits code. Cursor's privacy mode disables some features entirely; AxSentinel lets you keep full AI functionality while blocking sensitive data.
AxSentinel sits between Cursor and the AI API as a local proxy. Every request is scanned for PII and secrets in milliseconds. Clean requests pass through instantly. Requests containing sensitive data are blocked or redacted before they ever leave your network.
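The per-request decision can be sketched as follows. This is an illustrative model only, assuming a purely regex-based rule set; the pattern list and function names are hypothetical, not AxSentinel's actual API, and a real rule set would be far larger:

```python
import re

# Illustrative patterns only; a production scanner ships many more rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
]

def contains_secret(body: str) -> bool:
    """True if the outgoing request body matches any secret pattern."""
    return any(p.search(body) for p in SECRET_PATTERNS)

def handle_request(body: str) -> str:
    # Clean requests pass through; flagged requests never leave the machine.
    return "blocked" if contains_secret(body) else "forwarded"
```

Because each pattern is a compiled regex applied to the request body, a clean request costs only a handful of string scans before it is forwarded.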
Cursor supports VS Code extensions. Install AxSentinel from Open VSX (Cursor's extension registry) or sideload the .vsix file.
```
cursor --install-extension AxDevs.ax-sentinel
```

If you prefer the standalone proxy, start it and point Cursor at it. Go to Cursor Settings → Models → OpenAI API Base URL and set it to:
```
http://localhost:8990/v1
```

Open a file containing a test secret (e.g., AKIA1234567890ABCDEF) and trigger a Cursor AI action. AxSentinel will block the request and show a notification.
Install the extension and scanning starts automatically. No proxy setup needed.
Tab completion, Cmd+K edits, chat, and multi-file edits are all scanned.
Regex scanning adds ~0.1 ms per request; ML scanning adds ~5 ms. Neither is noticeable in practice.
Block (reject request), Redact (strip sensitive data and forward), or Prompt (ask you each time).
Free tier includes regex scanning for unlimited developers. Pro adds ML-powered detection and the compliance dashboard.