LiteralAI is a compiler for Python that treats LLM prompts as code, transforming them into source code. Unlike AI-enabled IDEs, where generated code can be fleeting and hard to manage, LiteralAI stores prompts alongside the code they produce and checks them into version control like any other source file. Functions and classes are then generated or updated based on their associated documentation.
How It Works
LiteralAI generates functions and classes from their docstrings and initial comments. When run, it scans your project and regenerates any function or class whose signature, docstring, or comments have changed. To detect such changes it keeps the workspace clean, storing only one piece of metadata as an in-line comment: a hash of the function's signature, docstring, and comments.
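The change-detection hash could be computed roughly as follows. This is a sketch based only on the marker comment shown in the example below; the fields included and the hashing scheme are assumptions, not LiteralAI's actual implementation:

```python
import hashlib
import json

def code_hash(signature: str, docstring: str, comments: list) -> str:
    """Hash the parts of a definition that should trigger regeneration.

    Hypothetical sketch; LiteralAI's real scheme may differ.
    """
    payload = json.dumps(
        {"signature": signature, "docstring": docstring, "comments": comments},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Produce a marker comment in the same shape as the one LiteralAI stores.
marker = "# LITERALAI: " + json.dumps(
    {"codeid": code_hash("def add_two(a, b):",
                         "Add two numbers together, return the result", [])}
)
print(marker)
```

Any edit to the signature, docstring, or comments changes the hash, which is how a regeneration pass can tell that the stored implementation is stale.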
Example
Consider the following function definitions:
def add_two(a, b):
    """Add two numbers together, return the result"""

def manual(x):
    "A manual function"
    return x + 1
Upon running literalai ., the code automatically gets updated to:
def add_two(a, b):
    """Add two numbers together, return the result"""
    # LITERALAI: {"codeid": "4a5c8e754c305b36907466707fbbcdc9883ba6499cddd35fb4e1923f7af4e2e4"}
    result = a + b
    return result

def manual(x):
    "A manual function"
    return x + 1
In this example, the manual function remains intact, demonstrating how LiteralAI respects the integrity of manually defined code while still offering smart updates when necessary.
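One way to see why manual is left alone: a function whose body is only a docstring has no implementation yet and is a candidate for generation, while a function with a real body is manual code. The check below sketches that heuristic; it is a simplification, since the actual tool also compares the stored LITERALAI hash:

```python
import ast

def needs_generation(source: str) -> list:
    """Return names of functions whose body is only a docstring.

    Simplified heuristic sketch; LiteralAI's real logic also checks
    the stored LITERALAI hash against the current one.
    """
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            body = node.body
            only_doc = (
                len(body) == 1
                and isinstance(body[0], ast.Expr)
                and isinstance(body[0].value, ast.Constant)
                and isinstance(body[0].value.value, str)
            )
            if only_doc:
                names.append(node.name)
    return names

src = '''
def add_two(a, b):
    """Add two numbers together, return the result"""

def manual(x):
    "A manual function"
    return x + 1
'''
print(needs_generation(src))  # ['add_two']
```

Here add_two is flagged for generation while manual, which already has a body, is skipped.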
Class Functionality
LiteralAI applies the same logic to classes, generating methods from the class's docstring and initial comments. When a class is processed, any method whose signature or comments have not been manually changed is replaced with a new definition that reflects the latest docstring updates.
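For illustration, a class might start as nothing but a docstring, with the methods filled in on the next run. The Stack class and its generated methods below are entirely hypothetical output, written to show the shape of the workflow, not actual LiteralAI output:

```python
# Before processing, the author writes only the class and its docstring:
#
#   class Stack:
#       """A LIFO stack supporting push, pop, and peek."""
#
# After running the tool, the methods might be generated like this
# (hypothetical output; the real generation is driven by the LLM):
class Stack:
    """A LIFO stack supporting push, pop, and peek."""

    def __init__(self):
        self._items = []

    def push(self, item):
        """Add an item to the top of the stack."""
        self._items.append(item)

    def pop(self):
        """Remove and return the top item."""
        return self._items.pop()

    def peek(self):
        """Return the top item without removing it."""
        return self._items[-1]

s = Stack()
s.push(1)
s.push(2)
print(s.peek())  # 2
```

If the author later edits the class docstring, the method bodies would be regenerated to match, as long as they have not been manually modified.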
Configuration and Models
The configuration for LiteralAI, such as model names and system prompts, is managed via YAML files named literalai.yml. These configuration files, found from the project root to the source files, are merged to create a comprehensive setup, customizable for both functions and classes. Key options include:
- model: A string specifying a model compatible with litellm, like openai/gpt-4.
- prompt: A Jinja2 template for the system prompt, with access to the function or class signature.
Sample Configuration
Example of a literalai.yml file:
base:
  model: "openai/gpt-4"

FunctionDef:
  prompt: |
    Generate the python source code for a function with the following signature, docstring, and initial comments.

    {{signature}}

    # IMPORTANT
    * Write the full function implementation.
    * Provide only valid python for a single function as output.
    * Do NOT add any initial description, argument or similar

ClassDef:
  prompt: |
    Below is the python source code for a class and some of its methods (without implementations). Given the docstring and initial comments of the class, define any missing method signatures and provide their docstrings.

    {{signature}}

    # IMPORTANT
    * Write the full class specification.
    * Provide only valid skeleton python for a single class as output.
    * Do NOT add any initial description or similar
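Rendering the {{signature}} placeholder can be sketched with a minimal stand-in for Jinja2. LiteralAI uses real Jinja2 templates; this substitution-only version just shows what ends up in the system prompt sent to the model:

```python
PROMPT_TEMPLATE = """\
Generate the python source code for a function with the following signature, docstring, and initial comments.
{{signature}}
# IMPORTANT
* Write the full function implementation.
* Provide only valid python for a single function as output.
* Do NOT add any initial description, argument or similar
"""

def render(template: str, signature: str) -> str:
    # Minimal stand-in for Jinja2's variable substitution.
    return template.replace("{{signature}}", signature)

sig = 'def add_two(a, b):\n    """Add two numbers together, return the result"""'
print(render(PROMPT_TEMPLATE, sig))
```

The rendered prompt contains the function's signature and docstring in place of the placeholder, which is all the model needs to produce the implementation.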
With LiteralAI, Python developers can elevate their coding practices, ensuring that their code remains clean, up-to-date, and deeply integrated with prompt configurations.