Anchor Engine is ground truth for business and personal AI. A lightweight, local-first memory layer that lets LLMs retrieve answers from your actual data—not hallucinations. Every response is traceable, every policy enforced. Runs in <3GB RAM. No cloud, no drift, no guessing. Your AI's anchor to reality.
Anchor Engine (Node.js)
Anchor Engine is a privacy-first context engine for human-facing interactions with large language models (LLMs). Built around the STAR Algorithm (Semantic Temporal Associative Retrieval), it provides a semantic memory and search API that keeps knowledge management sovereign and user data on the user's machine.
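The README does not spell out STAR's internals, but its name suggests a score that blends semantic, temporal, and associative signals. The sketch below is an illustration of that idea only: the function names, weights, and decay formula are assumptions, not the engine's actual implementation.

```javascript
// Illustrative STAR-style scoring: blend semantic similarity,
// temporal recency, and associative (co-occurrence) strength.
// All weights and formulas here are assumptions for illustration.

// Cosine similarity between two equal-length embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Exponential decay: newer memories score higher.
function recency(ageDays, halfLifeDays = 30) {
  return Math.pow(0.5, ageDays / halfLifeDays);
}

// Weighted blend of the three signals. `memory.linkStrength` stands in
// for an associative score (0..1) from a co-occurrence graph.
function starScore(queryVec, memory, now, weights = { sem: 0.6, tem: 0.2, assoc: 0.2 }) {
  const ageDays = (now - memory.timestamp) / 86_400_000;
  return (
    weights.sem * cosine(queryVec, memory.embedding) +
    weights.tem * recency(ageDays) +
    weights.assoc * memory.linkStrength
  );
}
```

A memory that is semantically close, recent, and strongly linked scores near 1 under this blend; staleness alone lowers the score without zeroing it out, which is what lets old-but-relevant context resurface.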
Why Anchor Engine Exists
Long chat sessions eventually exceed an LLM's context window, and the valuable history accumulated across many interactions gets lost. Anchor Engine was built to overcome this by:
- Allowing users to extract targeted context from previous chats.
- Enabling the continuation of interactions with the assistance of retained knowledge.
- Facilitating an efficient re-engagement model with LLMs, thus preserving the depth of prior exchanges.
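The steps above amount to selecting the best prior context that fits a fixed token budget, then feeding it back into a new session. The sketch below is a hedged illustration of that selection step; the field names (`score`, `tokens`, `timestamp`) are hypothetical and do not reflect Anchor Engine's real schema.

```javascript
// Hedged sketch: pick the highest-scoring memory chunks that fit a
// token budget, then emit them in chronological order so the
// reconstructed context reads as a narrative.
function buildContext(chunks, budgetTokens) {
  const picked = [];
  let used = 0;
  // Greedy by score, skipping chunks that would blow the budget.
  for (const c of [...chunks].sort((a, b) => b.score - a.score)) {
    if (used + c.tokens > budgetTokens) continue;
    picked.push(c);
    used += c.tokens;
  }
  // Restore chronological order for readability.
  picked.sort((a, b) => a.timestamp - b.timestamp);
  return picked.map(c => c.text).join('\n\n');
}
```

The resulting string can be prepended to a fresh prompt, which is the "efficient re-engagement" the list describes: the model sees a dense digest instead of the raw transcript.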
The engine handles large corpora: it can ingest millions of tokens quickly and return condensed, relevant context within minutes.
Features and Innovations
- PGlite-First Architecture: A shared WASM database layer that runs identically on ARM64 Windows, x64 Windows, Linux, and macOS, with no native builds required.
- Enhanced User Experience: Browse results chronologically or by relevance, and filter them through an intuitive interface.
- Data Compression and Contextual Recall: Compresses large corpora into focused narratives, deduplicating at the concept level rather than only the text level.
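Concept-level deduplication means two differently worded snippets about the same idea collapse into one. The engine's actual method is not documented here; the sketch below uses a crude normalization key as a stand-in for real semantic clustering, purely to illustrate the difference from exact-text dedup.

```javascript
// Hedged sketch of concept-level deduplication: collapse snippets that
// share a normalized key, keeping the first representative of each.
// A real implementation would cluster by embedding similarity; this
// word-bag key is only a readable approximation of that idea.
function dedupeConcepts(snippets) {
  const seen = new Map();
  for (const s of snippets) {
    // Lowercase, strip punctuation, and sort the words so rephrasings
    // built from the same vocabulary map to the same key.
    const key = s.toLowerCase().replace(/[^\w\s]/g, '').split(/\s+/).sort().join(' ');
    if (!seen.has(key)) seen.set(key, s);
  }
  return [...seen.values()];
}
```

Exact-text dedup would keep both "semantic retrieval" and "Retrieval, semantic!"; a concept-level pass keeps one.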
Core Components
- Database: PGlite, a WASM build of PostgreSQL that runs in-process with no external database server, keeping the engine lightweight and efficient.
- Performance Metrics:
- Ingest 25M tokens in under 15 minutes.
- Average search latency of under 200ms.
- Capable of handling bulk ingestion 10-50x faster than traditional methods.
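A 10-50x bulk-ingestion speedup is consistent with batching many rows per statement instead of issuing one insert per row, since round trips dominate ingestion time. The sketch below illustrates that mechanism as an assumption about the approach, not the engine's actual code; `execute` is a stub standing in for a database call.

```javascript
// Hedged sketch: row-at-a-time vs. batched multi-row inserts.
// The table name and single-column shape are placeholders; the point
// is the number of statements issued, not the schema.
function ingestNaive(rows, execute) {
  for (const row of rows) execute('INSERT INTO chunks VALUES ($1)', [row]);
}

function ingestBatched(rows, execute, batchSize = 500) {
  for (let i = 0; i < rows.length; i += batchSize) {
    const batch = rows.slice(i, i + batchSize);
    // One multi-row VALUES list per batch: ($1), ($2), ... ($n)
    const placeholders = batch.map((_, j) => `($${j + 1})`).join(', ');
    execute(`INSERT INTO chunks VALUES ${placeholders}`, batch);
  }
}
```

With 25,000 rows and a batch size of 500, the batched path issues 50 statements instead of 25,000, which is where speedups of this order typically come from.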
Agent Harness Integration
Anchor Engine integrates with common agent frameworks, letting users plug its context retrieval into the applications of their choice.
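The README names no specific framework API, so the sketch below wraps retrieval in a framework-agnostic tool shape (`name`, `description`, `invoke`) that most harnesses can adapt. `searchAnchor` is a hypothetical stand-in for the engine's search call, not a documented function.

```javascript
// Hedged sketch: expose context retrieval as a generic agent tool.
// `searchAnchor(query, { limit })` is a placeholder for whatever
// search entry point your integration actually uses.
function makeMemoryTool(searchAnchor) {
  return {
    name: 'anchor_memory_search',
    description: 'Retrieve relevant context from prior chat history.',
    async invoke({ query, limit = 5 }) {
      const hits = await searchAnchor(query, { limit });
      // Join hits with a visible separator so the agent can tell
      // individual memories apart in the tool result.
      return hits.map(h => h.text).join('\n---\n');
    },
  };
}
```

Mapping this shape onto a given framework's tool interface is usually a few lines of adapter code.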
Security and Privacy
Anchor Engine is local-first: all data stays on the user's machine, with no dependency on cloud services or external storage. The engine is open source, so its data handling can be audited end to end.
Documentation and Resources
For details on the STAR Algorithm, operational standards, and practical usage, see the documentation in this repository.
Conclusion
Anchor Engine is for anyone who wants better LLM outputs without giving up control of their data. By keeping knowledge management local and context retrieval fast, it supports sustained, effective work with AI models.
Seriously impressive:
"By the time Anchor Engine was operational, I had accumulated 40 chat sessions, ~18M tokens. My current corpus is ~28M tokens. Anchor Engine digests all of it in about 5 minutes.
Now I make a query with a few choice entities and some fluff for serendipitous connections. The engine compresses those 28M tokens into 100k+ chars of non-duplicated, narrative context—concepts deduplicated, not just text. My LLM remembers July 2025 like it was yesterday."