Prompt Guard
Securely intercept prompts for AI tools and prevent data leaks.
Pitch

Prompt Guard is a lightweight HTTPS MITM proxy designed to protect sensitive data when using AI coding assistants like GitHub Copilot and ChatGPT. By inspecting prompts in real time, it flags potential leaks of API keys, passwords, and other sensitive information, ensuring user security without blocking essential functionality.

Description

Prompt Guard is a lightweight HTTPS MITM proxy designed to enhance data privacy when interacting with AI coding assistants and APIs. It acts as an intermediary that inspects prompts before they are sent to third-party servers, ensuring that sensitive information such as API keys, passwords, Social Security Numbers, and internal IP addresses does not leave your machine unnoticed.

Features

  • HTTPS MITM Proxy: Utilizes a transparent interception mechanism through a local CA certificate.
  • Real-time Prompt Inspection: Runs customizable rules on every prompt before it is forwarded to an AI service.
  • Web Dashboard: Provides a live feed displaying flagged prompts along with matched snippets for immediate visibility.
  • Built-in Rules: Comes with 12 default rules that detect sensitive data, including credentials and personally identifiable information (PII).
  • Zero Prompt Blocking: Operates in an inspect-only mode, ensuring that no prompts are inadvertently blocked or dropped.
  • SQLite Persistence: Maintains a comprehensive audit log even after restarts, allowing for full accountability.
  • Single Binary: Packaged as a single executable with no runtime dependencies, simplifying deployment.

Supported Services

Prompt Guard intercepts prompts sent to the following services:

Service          Host
GitHub Copilot   *.githubcopilot.com
OpenAI           api.openai.com
Anthropic        api.anthropic.com

Other HTTPS traffic is tunneled directly without alteration.
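Pointing a client at the proxy uses the standard proxy environment variables. A minimal sketch, assuming the proxy listens on port 8080 (as shown in the architecture diagram) and that the local CA certificate has been exported to a file; the certificate filename here is illustrative, not a path Prompt Guard guarantees:

```shell
# Route this shell session through the local Prompt Guard proxy
# (port 8080 per the architecture diagram; adjust if configured differently).
export HTTP_PROXY=http://127.0.0.1:8080
export HTTPS_PROXY=http://127.0.0.1:8080

# For MITM'd target hosts, the client must also trust the local CA
# certificate. The path below is an assumption for illustration.
curl --cacert prompt-guard-ca.pem https://api.openai.com/v1/models
```

Non-target hosts pass through the same proxy setting unchanged, since the proxy tunnels them blindly.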

Built-in Rules

The tool ships with built-in rules that flag potentially sensitive data:

Rule                                   Severity
AWS Access Key (AKIA…)                 High
AWS Secret Key                         High
OpenAI API Key (sk-…)                  High
Anthropic API Key (sk-ant-…)           High
GitHub Token (ghp_, gho_, …)           High
Private Key (PEM block)                High
Social Security Number                 High
Credit Card Number                     High
JWT Token                              Medium
Generic Secret / Password assignment   Medium
Email Address                          Low
Internal IP Address (RFC-1918)         Low
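Rules of this kind are typically regular expressions tagged with a severity. A minimal sketch of the idea in Go; the patterns below are illustrative guesses, not Prompt Guard's actual built-in expressions:

```go
package main

import (
	"fmt"
	"regexp"
)

// rule pairs a name and severity with a detection pattern.
// Patterns here are rough approximations for illustration only.
type rule struct {
	name     string
	severity string
	pattern  *regexp.Regexp
}

var rules = []rule{
	{"AWS Access Key", "High", regexp.MustCompile(`\bAKIA[0-9A-Z]{16}\b`)},
	{"OpenAI API Key", "High", regexp.MustCompile(`\bsk-[A-Za-z0-9_-]{20,}\b`)},
	{"Internal IP Address (RFC-1918)", "Low",
		regexp.MustCompile(`\b(10\.\d{1,3}|192\.168|172\.(1[6-9]|2\d|3[01]))\.\d{1,3}\.\d{1,3}\b`)},
}

// scan returns a label for every rule whose pattern matches the prompt.
// In inspect-only mode, matches are recorded but the prompt still goes out.
func scan(prompt string) []string {
	var hits []string
	for _, r := range rules {
		if r.pattern.MatchString(prompt) {
			hits = append(hits, fmt.Sprintf("%s (%s)", r.name, r.severity))
		}
	}
	return hits
}

func main() {
	// AKIAIOSFODNN7EXAMPLE is AWS's documented example key ID.
	fmt.Println(scan("deploy with key AKIAIOSFODNN7EXAMPLE to 10.0.0.5"))
}
```

Note that in a real rule set, pattern ordering and overlap matter (an `sk-ant-…` key also starts with `sk-`), which is one reason severity and rule identity are kept separate from the match itself.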

Architecture

Prompt Guard operates as follows:

Your app (VS Code, curl, etc.)
  → HTTP_PROXY / HTTPS_PROXY
    → prompt-guard proxy (:8080)
      ├── Non-target hosts → blind tunnel (unchanged)
      └── Target hosts (OpenAI, Anthropic, Copilot)
            → TLS MITM (local CA cert)
              → parse JSON body → extract prompt text
                → run rules → if match: store in SQLite
                  → forward to real API → return response
                    → web dashboard (:7778) reads SQLite
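The "parse JSON body → extract prompt text" step can be sketched for an OpenAI-style chat payload. The field names follow the public Chat Completions request format, not Prompt Guard's source; other target services would need their own body shapes:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// chatBody mirrors the subset of an OpenAI-style request body that
// carries prompt text.
type chatBody struct {
	Messages []struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"messages"`
}

// extractPrompt joins all message contents into one string so that
// the rule engine can scan a single piece of text.
func extractPrompt(body []byte) (string, error) {
	var b chatBody
	if err := json.Unmarshal(body, &b); err != nil {
		return "", err
	}
	var parts []string
	for _, m := range b.Messages {
		parts = append(parts, m.Content)
	}
	return strings.Join(parts, "\n"), nil
}

func main() {
	raw := []byte(`{"model":"gpt-4","messages":[{"role":"user","content":"summarize this log"}]}`)
	prompt, _ := extractPrompt(raw)
	fmt.Println(prompt) // prints "summarize this log"
}
```

Because the proxy only reads the body and then forwards the original bytes, extraction failures can safely degrade to "forward without inspection", consistent with the zero-blocking design.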

Prompt Guard not only safeguards sensitive information but also provides a streamlined way to maintain oversight of interactions with AI services, making it a valuable tool for developers concerned with data safety and privacy.
