Prompt Guard
Prompt Guard is a lightweight HTTPS MITM proxy that secures the use of AI coding assistants and APIs by intercepting requests and blocking or redacting sensitive data. It ensures that sensitive information, such as API keys, passwords, and other personal identifiers, does not leave the user's machine unprotected.
Overview
Prompts sent to AI tools like GitHub Copilot, ChatGPT, and Claude can inadvertently carry sensitive information from the developer's environment. Prompt Guard monitors and inspects each prompt sent to these services, blocking or redacting sensitive information in real time before it is transmitted to third-party servers.
Key Features
- Transparent HTTPS MITM Proxy: Intercepts requests seamlessly; optional CA certificate required only for browser inspection.
- Real-Time Inspection: Evaluates every prompt instantaneously before forwarding, allowing for dynamic control over data sent to AI models.
- Block and Redact Modes:
  - Block Mode: Prevents the AI from receiving any sensitive data by rejecting requests containing it, leaving no trace in the model's context.
  - Redact Mode: Replaces sensitive information with [REDACTED], allowing the AI to respond without compromising confidentiality.
- Web Dashboard: Provides a user-friendly interface to view intercepted prompts, status updates, token usage, session ID, and client identity.
- Built-In Rules: Comes pre-configured with 14 rules to monitor and handle various sensitive information types such as AWS keys, API tokens, personal identification numbers, and more.
- Custom Rule Editing: Modify existing rules or create new ones directly from the dashboard, with changes applied instantly.
- Persistence with SQLite: Maintains an audit log across sessions, ensuring traceability of all requests.
- Agent Mode: Keeps long-running tasks uninterrupted by silently redacting sensitive data instead of blocking requests.
Supported AI Services
Prompt Guard can intercept prompts directed to major AI coding assistants and APIs including:
- GitHub Copilot (*.githubcopilot.com)
- OpenAI (api.openai.com)
- Anthropic (api.anthropic.com)
Customization
Administrators can modify the built-in rules by accessing the ~/.prompt-guard/rules.json configuration file. This file can be edited to customize rule behavior, switch between blocking and tracking, and add bespoke patterns for sensitive data detection.
Deployment
Prompt Guard is built for cross-platform compatibility, requires Go 1.21 or higher, and runs on macOS, Linux, and Windows. The proxy can be built and started with just a few commands:
git clone https://github.com/chaudharydeepak/prompt-guard
cd prompt-guard
go build -o prompt-guard .
./prompt-guard
Prompt Guard addresses growing concerns about data security in AI-assisted coding, making it a valuable safeguard for developers who prioritize the confidentiality of their sensitive information.