SafeDOM.ai is a privacy-first tool designed to help build AI context from actual UI while preventing the accidental leakage of sensitive information. With automatic redaction of PII and safe reinjection of values, it ensures compliance with data privacy standards while integrating AI seamlessly into user interactions.
SafeDOM.ai is a privacy-focused tool that converts the Document Object Model (DOM) into AI-ready context while protecting sensitive information. It lets you annotate HTML elements with `data-ai` attributes and automatically redacts Personally Identifiable Information (PII) such as emails, phone numbers, credit card details, and Social Security Numbers (SSNs). By minimizing the risk of exposing sensitive data to AI models, SafeDOM.ai prioritizes user privacy and data security.
Key Features
- DOM Annotations: Leverage `data-ai` attributes to specify which parts of the DOM should be included, excluded, or redacted when the data is sent to AI providers.
- Automatic PII Redaction: Before transmitting data to AI systems, SafeDOM.ai automatically redacts sensitive information, helping to enforce a privacy-first approach.
- Structured Outputs: The library generates structured outputs such as `fields`, a combined `rawText`, and a detailed list of `redactions`, providing useful data for AI prompts while maintaining compliance with privacy regulations.
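To make the structured output concrete, here is a minimal sketch of what such a context object could contain after redaction, using the support-ticket example from the Quick Overview below. The placeholder format and the keys inside each redaction entry are illustrative assumptions, not the library's documented API.

```js
// Illustrative only: a plausible shape for the structured output described above.
// The placeholder format and the keys inside each redaction entry are assumptions.
const exampleCtx = {
  fields: {
    subject: "Login blocked",
    customer: "Contact: [EMAIL_1], phone [PHONE_1]",
  },
  rawText: "subject: Login blocked\ncustomer: Contact: [EMAIL_1], phone [PHONE_1]",
  redactions: [
    { placeholder: "[EMAIL_1]", value: "alice@example.com", type: "email" },
    { placeholder: "[PHONE_1]", value: "+1 212-555-7890", type: "phone" },
  ],
};
```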
Quick Overview
- Annotate Your HTML: Utilize `data-ai` attributes to control the data flow and specify the desired redaction rules directly in your HTML structure. Example:

  ```html
  <div id="ticket-root">
    <h2 data-ai="include" data-ai-label="subject">Login blocked</h2>
    <p data-ai="redact:email phone" data-ai-label="customer">
      Contact: alice@example.com, phone +1 212-555-7890
    </p>
    <p data-ai="exclude">Internal notes never leave the browser.</p>
  </div>
  ```

- Build AI Context: Build the context in your JavaScript code, applying any configuration needed to maintain compliance with data protection standards. Example:

  ```js
  import { buildAiContext } from "safedom-ai";

  const ctx = buildAiContext("#ticket-root", { labeledOnly: true, region: "eu" });
  ```

- Send Data to AI Provider: After building the context with the necessary parameters, send it to your chosen AI provider, ensuring that your data handling policies are adhered to. Example:

  ```js
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: ctx.rawText }],
  });
  ```

- Reinject Data Safely: Use the backend helpers to replace the placeholders with the original values after receiving the model's response, keeping sensitive information secure; a conceptual sketch of this step follows the list. Example:

  ```js
  import { reinjectPlaceholders } from "@safedom/ai-node";

  const finalAnswer = reinjectPlaceholders(modelResponse, ctx.redactions);
  ```
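For intuition, the reinjection step can be viewed as a straightforward substitution: every placeholder the model echoes back is swapped for the original value recorded at redaction time. The sketch below is a conceptual illustration under the assumed redaction-entry shape from the earlier example, not the actual `@safedom/ai-node` implementation.

```js
// Conceptual sketch of placeholder reinjection, not the @safedom/ai-node
// implementation. Assumes each redaction entry records the placeholder sent to
// the model and the original value it replaced.
function reinjectPlaceholdersSketch(modelResponse, redactions) {
  let text = modelResponse;
  for (const { placeholder, value } of redactions) {
    // Replace every occurrence of the placeholder with the original value.
    text = text.split(placeholder).join(value);
  }
  return text;
}
```

Because the substitution happens on the backend, the original values never need to be sent to the AI provider at all.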
Importance
As reliance on AI systems grows, sensitive information often ends up in AI prompts unintentionally. SafeDOM.ai helps enforce a privacy-by-design approach, providing heuristics for redacting PII and supporting compliance with privacy regulations prevalent in both the EU and the US. By combining robust functionality with a focus on user safety, the tool adds a crucial layer of privacy protection for applications that interact with AI systems.
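To illustrate the kind of redaction heuristics involved, a basic pass for emails and US-style phone numbers can be written as pattern matching plus placeholder substitution. The patterns, PII coverage, and placeholder format below are assumptions for the sketch, not SafeDOM.ai's actual rules.

```js
// Illustrative redaction heuristics only; the real library's patterns, PII
// coverage, and placeholder format may differ.
const PII_PATTERNS = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  phone: /\+?\d{1,2}[\s.-]?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}/g,
};

function redactText(text) {
  const redactions = [];
  let redacted = text;
  for (const [type, pattern] of Object.entries(PII_PATTERNS)) {
    let counter = 0;
    redacted = redacted.replace(pattern, (match) => {
      counter += 1;
      const placeholder = `[${type.toUpperCase()}_${counter}]`;
      // Record what was removed so it can be reinjected later.
      redactions.push({ placeholder, value: match, type });
      return placeholder;
    });
  }
  return { text: redacted, redactions };
}
```

Applied to the customer line from the earlier example, this would produce `Contact: [EMAIL_1], phone [PHONE_1]` along with two redaction entries that the reinjection step can later restore.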
For a live demonstration, visit the SafeDOM.ai Demo. The library is actively maintained, with ample contribution opportunities for developers concerned about privacy and data protection.