VERITAS OS turns any LLM into a constitutionally governed proto-AGI. Every decision is forced through a rigid pipeline: Evidence → Critique → Debate → Planner → ValueCore → FUJI Safety Gate → SHA-256 hash-chained TrustLog.

- 100% local · no cloud · no telemetry
- Doctor Dashboard auto-detects misbehavior
- Built for AGI safety research and auditable autonomous agents
- Open-source · runs on your laptop
VERITAS OS is a proto-AGI framework that takes large language models (LLMs) beyond the traditional chatbot role, wrapping them in a robust, reliable, and auditable decision-making operating system. The framework exposes its structured decision pipeline through a single /v1/decide API call, which guides the LLM through a deterministic sequence of stages: Evidence, Critique, Debate, Planner, ValueCore, FUJI, and TrustLog.
Key Features
- Integrated Decision Framework: The platform functions as a complete Decision/Agent OS, adding safety, memory, value functions, and world-state management around the LLM, which serves as the reasoning engine.
- Structured Outputs: Each call to `/v1/decide` returns a detailed JSON payload including the following fields (a request/response sketch follows this list):
  - `chosen`: The selected action along with its rationale and uncertainty levels.
  - `alternatives[]`: An array of other options considered during the decision-making process.
  - `evidence[]`: References to the evidence used in formulating the decision.
  - `critique[]`: Self-critiques performed on the candidate plan.
  - `debate[]`: Multi-perspective internal debates that assess different viewpoints on the decision.
  - `telos_score`: An indicator of alignment with the current value function.
  - `fuji`: A safety and ethics assessment status (`allow`, `modify`, `block`, or `abstain`).
  - `trust_log`: A hash-chained log of entries for auditing purposes.
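To make this concrete, here is a minimal sketch of a `/v1/decide` call using Python's `requests` library. The base URL, request-body shape, and field values are illustrative assumptions; only the endpoint path, the X-API-Key header, and the response fields come from the description above.

```python
import requests

# Hypothetical request body; the exact Context schema is an assumption
# based on the fields described in this README.
payload = {
    "context": {
        "query": "Should the agent refactor its planning module?",
        "goals": ["improve plan quality"],
        "constraints": ["no external network access"],
        "preferences": {"risk_tolerance": "low"},
    }
}

resp = requests.post(
    "http://localhost:8000/v1/decide",      # assumed local base URL
    headers={"X-API-Key": "YOUR_API_KEY"},  # required on all endpoints
    json=payload,
    timeout=60,
)
resp.raise_for_status()
decision = resp.json()

# Fields documented above: chosen, alternatives[], evidence[], critique[],
# debate[], telos_score, fuji, trust_log.
print(decision["chosen"])
print(decision["fuji"])         # allow / modify / block / abstain
print(decision["telos_score"])
```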
API Interaction
VERITAS OS is exposed as a local FastAPI service described by an OpenAPI 3.1 schema, so it can be explored and called from tools like Swagger Editor/Studio. A valid X-API-Key header is required on every endpoint to secure interactions with the API.
| Method | Path | Description |
|---|---|---|
| GET | /health | Server health check |
| POST | /v1/decide | Initiates the full decision loop |
| POST | /v1/fuji/validate | Validates the safety/ethics of a proposed action |
| POST | /v1/memory/put | Stores a key/value pair in persistent memory |
| GET | /v1/memory/get | Retrieves a value from persistent memory |
| GET | /v1/logs/trust/{request_id} | Accesses immutable hash-chained trust log entries |
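A rough usage sketch against the memory and trust-log endpoints follows. Only the paths and the X-API-Key header are taken from the table; the request and response shapes, the base URL, and the query-parameter convention are assumptions.

```python
import requests

BASE = "http://localhost:8000"           # assumed local base URL
HEADERS = {"X-API-Key": "YOUR_API_KEY"}  # required on all endpoints

# Store a key/value pair in persistent memory.
# The body shape {"key": ..., "value": ...} is an assumption.
requests.post(
    f"{BASE}/v1/memory/put",
    headers=HEADERS,
    json={"key": "experiment_42", "value": {"status": "running"}},
    timeout=10,
).raise_for_status()

# Retrieve it; passing the key as a query parameter is likewise assumed.
value = requests.get(
    f"{BASE}/v1/memory/get",
    headers=HEADERS,
    params={"key": "experiment_42"},
    timeout=10,
).json()

# Fetch the hash-chained trust log for a previous decision.
request_id = "REQUEST_ID_FROM_A_DECIDE_CALL"  # returned by /v1/decide
log = requests.get(
    f"{BASE}/v1/logs/trust/{request_id}",
    headers=HEADERS,
    timeout=10,
).json()
print(value, log)
```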
Contextual Decision Making
The framework supports AGI-style tasks through a contextual schema for meta-decisions. A Context object encapsulates user queries, goals, constraints, and preferences, allowing the system to make high-level decisions for AGI research, self-improvement cycles, or experimental plans while demonstrating rigorous safety adherence.
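A sketch of what such a Context object might look like for a meta-decision is shown below. The field names mirror the description above; the nesting and value types are assumptions.

```python
# Hypothetical meta-decision Context; field names follow the description
# above, while the nesting and value types are assumptions.
context = {
    "query": "Plan the next self-improvement cycle for the agent.",
    "goals": [
        "reduce hallucinated citations",
        "keep all changes auditable via the TrustLog",
    ],
    "constraints": [
        "no modifications to the FUJI safety gate",
        "stay within the local, no-cloud deployment",
    ],
    "preferences": {"horizon": "one_week", "risk_tolerance": "low"},
}
```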
Core Architecture
The breadth of VERITAS OS's features derives from its modular architecture. The following components are noteworthy:
- Core OS Layer: Manages overall orchestration, pipeline execution, planning, and decision outputs.
- Safety and Ethics: Implements rigorous safety measures and compliance checks via the FUJI Gate, assessing decisions against ethical standards.
- MemoryOS: Manages long-term memory storage and retrieval, ensuring relevant historical data informs current decisions.
- Logging and Auditing: Ensures comprehensive logging of decisions and actions to maintain trust and transparency throughout the decision-making process (a sketch of the hash-chaining idea follows this list).
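The hash chaining behind the TrustLog can be illustrated in a few lines of Python. This is a conceptual sketch, not VERITAS OS's actual implementation: each entry stores the SHA-256 hash of the previous entry, so tampering with any earlier record invalidates every hash after it.

```python
import hashlib
import json

def append_entry(chain: list[dict], payload: dict) -> None:
    """Append a log entry whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis entry
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    chain.append({
        "prev": prev_hash,
        "payload": payload,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev_hash, "payload": entry["payload"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain: list[dict] = []
append_entry(chain, {"stage": "fuji", "status": "allow"})
append_entry(chain, {"stage": "planner", "action": "chosen"})
assert verify(chain)
chain[0]["payload"]["status"] = "block"  # tamper with an early entry...
assert not verify(chain)                 # ...and verification fails
```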
Conclusion
VERITAS OS is a framework aimed at researchers and developers working on AGI and AI safety. By making decision-making safe, repeatable, and auditable, it lays the groundwork for robust autonomous agents capable of self-improvement and ethical operation.
Modern LLM agents still behave like black boxes: great at generating answers, but opaque about why a decision was made.
With VERITAS OS I’m trying to solve exactly this: a deterministic, auditable decision pipeline where every reasoning step, safety check, and world-state update is logged and inspectable.
The goal is not "another agent framework", but a transparent alternative where reasoning is traceable, safe, and reproducible: something I always wished existing LLM agents had.
Feedback or suggestions from AGI / agent researchers would be incredibly helpful.