llmSHAP is a multi-threaded explainability framework that uses Shapley values to attribute the outputs of Large Language Models (LLMs) to the individual parts of their input. Aimed at researchers and developers who need transparent, interpretable AI, it turns model decisions into intuitive attributions and visualizations, and its modular architecture makes it straightforward to integrate and adapt to different application needs.
Key Features
- Multi-Threading: Supports optional multi-threading for enhanced performance in computations.
- Exact Shapley Computation: Offers full enumeration of all feature coalitions for exact Shapley values, unlike frameworks that rely on sampling (a minimal sketch of this approach appears after this list).
- Modular Design: Allows for easy customization and scalability with its pluggable components.
- Contextual Attribution: Supports permanent context pinning, ensuring vital features are consistently included in analyses.
- Pluggable Similarity Metrics: Users can swap in different similarity metrics, such as TF-IDF or embedding models, to tailor how attributions are scored (also illustrated in the sketch below).
- User-Friendly Documentation: Extensive documentation is available at llmSHAP Docs along with a hands-on tutorial to facilitate quick onboarding and usage.
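
To make the multi-threading, full-enumeration, and pluggable-similarity points concrete, here is a minimal, self-contained sketch of what exact Shapley attribution over word-level features can look like. It is not llmSHAP's implementation: the word splitting, the TF-IDF value function, the thread-pool evaluation, and the identity-model toy run at the end are illustrative assumptions.

from concurrent.futures import ThreadPoolExecutor
from itertools import combinations
from math import factorial

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def tfidf_similarity(a: str, b: str) -> float:
    # Pluggable similarity metric: cosine similarity of TF-IDF vectors.
    tfidf = TfidfVectorizer().fit_transform([a, b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


def exact_shapley(features, model, similarity=tfidf_similarity, num_threads=8):
    # Exact Shapley values via full enumeration of all 2^n coalitions.
    # A coalition's value is the similarity of its output to the full-prompt output.
    n = len(features)
    full_output = model(" ".join(features))
    coalitions = [frozenset(c)
                  for size in range(n + 1)
                  for c in combinations(range(n), size)]

    def value(coalition):
        if not coalition:
            return 0.0
        prompt = " ".join(features[i] for i in sorted(coalition))
        return similarity(model(prompt), full_output)

    # Model calls are I/O-bound, so evaluating coalitions on a thread pool helps.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        values = dict(zip(coalitions, pool.map(value, coalitions)))

    shapley = [0.0] * n
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(rest, size):
                weight = factorial(size) * factorial(n - 1 - size) / factorial(n)
                shapley[i] += weight * (values[frozenset(subset) | {i}]
                                        - values[frozenset(subset)])
    return dict(zip(features, shapley))


# Toy run with an identity "model"; in practice the model would wrap an LLM call.
print(exact_shapley("What city hosts the Eiffel Tower".split(), model=lambda p: p))

Replacing tfidf_similarity with, say, an embedding-based cosine similarity changes how coalition outputs are scored without touching the enumeration itself, which is the kind of pluggability the feature list above refers to.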
Example Usage
The framework enables straightforward integration with Python-based applications. Below is a basic example of how to use llmSHAP:
from llmSHAP import DataHandler, BasicPromptCodec, ShapleyAttribution
from llmSHAP.llm import OpenAIInterface

data = "In what city is the Eiffel Tower?"
# permanent_keys pins these features so they are included in every coalition.
handler = DataHandler(data, permanent_keys={0, 3, 4})

result = ShapleyAttribution(
    model=OpenAIInterface("gpt-4o-mini"),
    data_handler=handler,
    prompt_codec=BasicPromptCodec(system="Answer the question briefly."),
    use_cache=True,
    num_threads=16,  # optional multi-threading (see Key Features)
).attribution()

print("\n\n### OUTPUT ###")
print(result.output)
print("\n\n### ATTRIBUTION ###")
print(result.attribution)
print("\n\n### HEATMAP ###")
print(result.render())
Comparison to TokenSHAP
Compared to TokenSHAP, llmSHAP adds optional multi-threading, exact Shapley values computed by full coalition enumeration rather than sampling, and a more modular, pluggable architecture.
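For context on the sampling approaches mentioned above, the sketch below shows how Shapley values are typically estimated from random feature permutations instead of full enumeration. It is a generic illustration rather than TokenSHAP's code, and the value function and permutation count are assumptions.

import random

def monte_carlo_shapley(features, value, num_permutations=200):
    # Estimate Shapley values by averaging marginal contributions over
    # randomly ordered feature insertions, instead of enumerating all coalitions.
    n = len(features)
    estimates = [0.0] * n
    for _ in range(num_permutations):
        order = random.sample(range(n), n)  # one random permutation of the features
        coalition = set()
        prev = value(coalition)
        for i in order:
            coalition.add(i)
            curr = value(coalition)
            estimates[i] += curr - prev     # marginal contribution of feature i
            prev = curr
    return {features[i]: estimates[i] / num_permutations for i in range(n)}

Sampling bounds the number of model calls for long inputs, whereas full enumeration is exact but requires evaluating all 2^n coalitions, which is where llmSHAP's caching and optional multi-threading come in.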
Conclusion
Through its robust features and user-focused design, llmSHAP stands as a valuable tool for researchers and developers seeking to enhance the interpretability of AI systems. Explore the full potential of AI explainability with llmSHAP.