Prompt Guard is a lightweight HTTPS MITM proxy that protects sensitive data when you use AI coding assistants such as GitHub Copilot and ChatGPT. It sits between your tools and third-party AI APIs, inspecting each prompt in real time before it is forwarded, and flags potential leaks of API keys, passwords, Social Security Numbers, and internal IP addresses — so sensitive information does not leave your machine unnoticed, and no essential functionality is blocked.
## Features
- HTTPS MITM Proxy: Transparently intercepts TLS traffic using a locally generated CA certificate.
- Real-time Prompt Inspection: Executes customizable rules on every prompt before forwarding them to AI services.
- Web Dashboard: Provides a live feed displaying flagged prompts along with matched snippets for immediate visibility.
- Built-in Rules: Comes with 12 default rules that detect sensitive data, including credentials and personally identifiable information (PII).
- Zero Prompt Blocking: Operates in an inspect-only mode, ensuring that no prompts are inadvertently blocked or dropped.
- SQLite Persistence: Maintains a comprehensive audit log even after restarts, allowing for full accountability.
- Single Binary: Packaged as a single executable with no runtime dependencies, simplifying deployment.
## Supported Services
Prompt Guard intercepts and inspects prompts bound for the following services:
| Service | Host |
|---|---|
| GitHub Copilot | *.githubcopilot.com |
| OpenAI | api.openai.com |
| Anthropic | api.anthropic.com |
Other HTTPS traffic is tunneled directly without alteration.
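The split between intercepted and tunneled traffic comes down to a host check on each CONNECT request. The sketch below shows one way this could look in Go; the host lists mirror the table above, but the matching logic itself is an illustrative assumption, not Prompt Guard's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// Hosts from the table above. Exact hosts are compared directly;
// the wildcard entry (*.githubcopilot.com) becomes a suffix match.
var targetExact = []string{"api.openai.com", "api.anthropic.com"}
var targetSuffixes = []string{".githubcopilot.com"}

// shouldIntercept decides whether a CONNECT request is MITM'd and
// inspected, or tunneled through untouched.
func shouldIntercept(host string) bool {
	for _, h := range targetExact {
		if host == h {
			return true
		}
	}
	for _, suffix := range targetSuffixes {
		if strings.HasSuffix(host, suffix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldIntercept("api.openai.com"))                  // true  → MITM + inspect
	fmt.Println(shouldIntercept("copilot-proxy.githubcopilot.com")) // true  → MITM + inspect
	fmt.Println(shouldIntercept("example.com"))                     // false → blind tunnel
}
```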
## Built-in Rules
Prompt Guard ships with the following built-in rules for flagging potentially sensitive data:
| Rule | Severity |
|---|---|
| AWS Access Key (AKIA…) | High |
| AWS Secret Key | High |
| OpenAI API Key (sk-…) | High |
| Anthropic API Key (sk-ant-…) | High |
| GitHub Token (ghp_, gho_, …) | High |
| Private Key (PEM block) | High |
| Social Security Number | High |
| Credit Card Number | High |
| JWT Token | Medium |
| Generic Secret / Password assignment | Medium |
| Email Address | Low |
| Internal IP Address (RFC-1918) | Low |
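Rules like these are typically regular expressions paired with a name and severity. The sketch below shows three of the rules from the table in that style; the patterns are simplified illustrations, not the exact expressions Prompt Guard ships with:

```go
package main

import (
	"fmt"
	"regexp"
)

// Rule pairs a name and severity with a detection pattern.
type Rule struct {
	Name     string
	Severity string
	Pattern  *regexp.Regexp
}

// Simplified sketches of three built-in rules (illustrative only).
var rules = []Rule{
	{"AWS Access Key", "High", regexp.MustCompile(`\bAKIA[0-9A-Z]{16}\b`)},
	{"OpenAI API Key", "High", regexp.MustCompile(`\bsk-[A-Za-z0-9_-]{20,}\b`)},
	{"Internal IP Address", "Low", regexp.MustCompile(
		`\b(?:10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3})\b`)},
}

// scan runs every rule over a prompt and returns one line per match,
// including the matched snippet (as the dashboard displays it).
func scan(prompt string) []string {
	var hits []string
	for _, r := range rules {
		if m := r.Pattern.FindString(prompt); m != "" {
			hits = append(hits, fmt.Sprintf("%s (%s): %s", r.Name, r.Severity, m))
		}
	}
	return hits
}

func main() {
	for _, h := range scan("my key is AKIAIOSFODNN7EXAMPLE on host 10.0.0.5") {
		fmt.Println(h)
	}
}
```

Note that in keeping with the inspect-only design, a match only records a finding — it never blocks or rewrites the prompt.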
## Architecture
Prompt Guard operates as follows:
```
Your app (VS Code, curl, etc.)
  → HTTP_PROXY / HTTPS_PROXY
  → prompt-guard proxy (:8080)
      ├── Non-target hosts → blind tunnel (unchanged)
      └── Target hosts (OpenAI, Anthropic, Copilot)
            → TLS MITM (local CA cert)
            → parse JSON body → extract prompt text
            → run rules → if match: store in SQLite
            → forward to real API → return response
  → web dashboard (:7778) reads SQLite
```
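The "parse JSON body → extract prompt text" step might look like the following sketch, assuming an OpenAI-style request body with a `messages` array or a legacy `prompt` field. The struct and field names reflect that API shape, not Prompt Guard's actual parser:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// chatRequest models the subset of an OpenAI-style request body
// that carries user-visible prompt text (assumed shape).
type chatRequest struct {
	Messages []struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"messages"`
	Prompt string `json:"prompt"` // legacy completions-style field
}

// extractPrompt flattens all prompt text in a request body into a
// single string so the rules can be run over it in one pass.
func extractPrompt(body []byte) (string, error) {
	var req chatRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return "", err
	}
	var parts []string
	if req.Prompt != "" {
		parts = append(parts, req.Prompt)
	}
	for _, m := range req.Messages {
		parts = append(parts, m.Content)
	}
	return strings.Join(parts, "\n"), nil
}

func main() {
	body := []byte(`{"messages":[{"role":"user","content":"fix this: password = hunter2"}]}`)
	text, err := extractPrompt(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(text) // fix this: password = hunter2
}
```

Whatever the rules find, the original body is forwarded to the real API byte-for-byte, so clients see no difference in behavior.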
Beyond flagging sensitive data, Prompt Guard maintains a persistent audit trail of interactions with AI services, making it a practical tool for developers concerned with data safety and privacy.