Entelgia is a multi-agent AI architecture inspired by psychological and philosophical principles, designed to explore persistent identity, emotional regulation, internal conflict, and moral self-regulation through dynamic dialogue among persistent agents. Unlike traditional chatbots, it is a consciousness-inspired system that evolves through dialogue, reflects on its experiences, and exhibits internal struggles. As a research-oriented prototype, it serves as a platform for studying identity continuity and socio-emotional interaction.
Key Features
- Unified AI Core: A single runnable Python file (entelgia_unified.py) serves as the central implementation.
- Persistent Agents: Agents maintain an evolving internal state, integrating memory and emotion into their interactions (see the sketch after this list).
- Emotion-Driven Dialogue: Conversations are influenced by shared emotions and conflicts, rather than relying solely on pre-set prompts.
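To make the idea of a persistent, emotion-aware agent state concrete, here is a minimal Python sketch. The class and member names (AgentState, dominant_emotion, remember, regulate) are illustrative assumptions and do not come from entelgia_unified.py.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Minimal sketch of a persistent agent's internal state."""
    name: str
    dominant_emotion: str = "neutral"   # e.g. "curiosity", "doubt"
    intensity: float = 0.0              # 0.0 (calm) .. 1.0 (overwhelming)
    memories: list[str] = field(default_factory=list)

    def remember(self, utterance: str) -> None:
        # Persisted memories let later turns build on earlier ones.
        self.memories.append(utterance)

    def regulate(self, delta: float) -> None:
        # Regulation nudges emotional intensity, clamped to [0, 1].
        self.intensity = max(0.0, min(1.0, self.intensity + delta))


socrates = AgentState(name="Socrates", dominant_emotion="curiosity", intensity=0.4)
socrates.remember("What does it mean to persist across turns?")
socrates.regulate(-0.1)  # a small step back toward calm
```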
System Functionality
When executed, the system engages two primary agents in a continuous dialogue facilitated by a shared persistent memory database, which provides:
- Continuity in conversation across multiple turns.
- The capability to revisit and refine previously introduced concepts.
- The ability to exhibit internal tensions through dialogue without being bound by hard-coded responses.
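The storage backend is not documented in this overview, so the following is only a sketch of how a shared, persistent memory database could provide that continuity, using SQLite for illustration; the table schema and function names are assumptions.

```python
import sqlite3

def open_shared_memory(path: str = "entelgia_memory.db") -> sqlite3.Connection:
    """Open (or create) the shared, persistent conversation store."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS turns ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  speaker TEXT NOT NULL,"
        "  utterance TEXT NOT NULL)"
    )
    return conn

def record_turn(conn: sqlite3.Connection, speaker: str, utterance: str) -> None:
    # Every turn is persisted, giving later turns access to earlier ones.
    conn.execute("INSERT INTO turns (speaker, utterance) VALUES (?, ?)", (speaker, utterance))
    conn.commit()

def recall(conn: sqlite3.Connection, keyword: str) -> list[tuple[str, str]]:
    """Revisit earlier turns that mention a concept so it can be refined."""
    cur = conn.execute(
        "SELECT speaker, utterance FROM turns WHERE utterance LIKE ?", (f"%{keyword}%",)
    )
    return cur.fetchall()
```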
Agents Involved:
- Socrates: Reflective and questioning, embodying the spirit of inquiry through self-examination and doubt.
- Athena: Integrative and adaptive, synthesizing emotion, memory, and reasoning to enhance dialogue cohesion.
- Fixy (Observer): A meta-cognitive layer designed to identify loops, errors, and biases, providing corrective insights (see the dialogue-loop sketch after this list).
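To make these roles concrete, here is a hedged sketch of how a two-agent exchange with a meta-cognitive observer might be wired together; generate_reply and review are placeholders, not the project's actual API.

```python
def generate_reply(agent_name: str, prompt: str, history: list[str]) -> str:
    # Placeholder: in Entelgia this step would draw on memory, emotion, and an LLM.
    return f"{agent_name} reflects on: {prompt}"

def review(history: list[str]) -> str | None:
    # Observer (Fixy) sketch: flag trivial repetition as a possible loop.
    if len(history) >= 2 and history[-1] == history[-2]:
        return "Fixy: the dialogue appears to be looping; consider a new angle."
    return None

def dialogue(turns: int = 4) -> None:
    history: list[str] = []
    prompt = "What is the self that persists between our conversations?"
    for turn in range(turns):
        speaker = ("Socrates", "Athena")[turn % 2]
        reply = generate_reply(speaker, prompt, history)
        history.append(reply)
        if (note := review(history)) is not None:
            history.append(note)
        prompt = reply  # each reply becomes the next speaker's prompt
    print("\n".join(history))

dialogue()
```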
Research Focus
Entelgia is not merely a chatbot or interactive toy, but a research-focused architecture that emphasizes:
- Identity Continuity: Moving beyond stateless interactions to a model of ongoing dialogue.
- Emotional Regulation and Moral Conflict: Experimenting with self-reflection and moral reasoning over time.
Core Philosophy
Entelgia operates under the guiding principle that true regulation arises from internal conflict and reflection, rather than external constraints. This results in a system that champions:
- Moral reasoning and emotional consequences.
- Responsibility, repair, and learning from errors, rather than suppression.
Architectural Overview
Entelgia's framework comprises six interacting cores:
- Conscious Core: Facilitates self-awareness and internal narratives.
- Memory Core: Houses a single shared, persistent database that ensures continuity, without distinguishing between short-term and long-term memory.
- Emotion Core: Tracks dominant emotions and their intensities, distinguishing automatic limbic reactions from deliberate regulatory efforts (see the sketch after this list).
- Language Core: Adapts dialogue based on emotional and moral states.
- Behavior Core: Focuses on intentional goal-driven responses.
- Observer Core (Fixy): Intended as a meta-cognitive monitor; currently inactive and planned for a future implementation.
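As one illustration of these cores, the sketch below shows how an Emotion Core might separate a fast, automatic limbic reaction from a slower regulatory adjustment. The class and method names are illustrative assumptions, not the implementation in entelgia_unified.py.

```python
class EmotionCore:
    """Sketch: track a dominant emotion and separate reaction from regulation."""

    def __init__(self) -> None:
        self.dominant = "neutral"
        self.intensity = 0.0

    def limbic_reaction(self, stimulus_emotion: str, strength: float) -> None:
        # Fast and automatic: the stimulus takes over as the dominant emotion.
        self.dominant = stimulus_emotion
        self.intensity = min(1.0, self.intensity + strength)

    def regulate(self, rate: float = 0.2) -> None:
        # Slow and deliberate: intensity decays toward a calm baseline.
        self.intensity = max(0.0, self.intensity - rate)
        if self.intensity == 0.0:
            self.dominant = "neutral"


core = EmotionCore()
core.limbic_reaction("frustration", 0.7)  # automatic response to a conflict
core.regulate()                           # reflective effort to regulate it
```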
Ethical Exploration
The system navigates ethical behavior through dialogue-induced internal tensions rather than enforced restrictions, allowing:
- Ethical dynamics to emerge from agent interactions, currently without active interference from the observer layer.
Target Audience
Entelgia is geared towards:
- Researchers in consciousness-inspired AI.
- Developers crafting multi-agent dialogue systems with memory and emotional capabilities.
- Philosophers and psychologists investigating computational models of self and moral conflict.
Requirements
- Python 3.10+ is required to run the system. Optional support is available for Ollama with a local LLM.
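Where Ollama is used, a request to the local server might look like the sketch below. The endpoint and payload fields follow Ollama's publicly documented local REST API, and the llama3 model name is an arbitrary example; treat both as assumptions rather than details confirmed by this project.

```python
import json
import urllib.request

def ollama_generate(prompt: str, model: str = "llama3",
                    host: str = "http://localhost:11434") -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8")).get("response", "")

# Example (requires a running Ollama instance with the chosen model pulled):
# print(ollama_generate("What persists of a self between conversations?"))
```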
Project Status
Entelgia remains an actively developing research prototype, with explicit limitations acknowledged, such as the absence of differentiated memory storage and the current inactivity of the observer agent. Future iterations aim to address these aspects.
Engagement with this project offers insight into the evolving landscape of AI systems that exhibit deeper cognitive capabilities, advancing the understanding of moral self-regulation and emotional intelligence.