llm-costs is a command-line tool for estimating and comparing the API costs of more than 20 LLM models directly from your terminal. It supports models from leading providers such as Anthropic, OpenAI, and Google, and ships with auto-updating pricing data, so you can find the most cost-effective option for a prompt without guesswork.
Key Features
- Instant Cost Comparison: Get cost estimates for numerous models with a single command. The tool integrates seamlessly into your terminal workflow, with no API keys or configuration required.
- Comprehensive Model Support: Compare pricing across a wide range of models, including Claude, GPT, and Gemini, among others. A detailed view breaks out input and output costs separately, providing transparency in pricing.
Usage Examples
Cost estimation can be performed for different scenarios:
Compare Costs Across Models
```shell
llm-costs "Explain quantum computing" --compare
```
This command outputs a cleanly formatted table showing costs for various models, making it easy to identify the cheapest option.
Estimate for a Specific Model
```shell
llm-costs "Explain quantum computing in simple terms" --model claude-sonnet-4-5
```
This command will provide a total cost estimate for the specified model, breaking down input and output token costs.
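The arithmetic behind such an estimate is straightforward: tokens multiplied by per-token price, summed over input and output. A minimal sketch, with hypothetical placeholder prices (USD per million tokens) rather than llm-costs' actual pricing tables:

```python
# Illustrative per-model cost estimation. The prices below are
# placeholders, NOT the tool's real pricing data.
PRICING = {
    "claude-sonnet-4-5": {"input": 3.00, "output": 15.00},
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost: token counts times per-million-token prices."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

cost = estimate_cost("claude-sonnet-4-5", input_tokens=12, output_tokens=500)
print(f"${cost:.6f}")  # → $0.007536
```

Because output length is unknown in advance, a real estimator has to assume some expected output size; the breakdown into input and output components makes that assumption visible.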
Batch Processing
Handle multiple prompts at once with batch processing capabilities:
```shell
llm-costs batch prompts.jsonl --model gpt-4o
```
This is useful for projects requiring bulk cost analysis.
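Batch estimation amounts to summing per-prompt estimates over the file. A sketch under two assumptions: the JSONL file has one object per line with a `prompt` field, and tokens are approximated at roughly four characters each (both illustrative, not llm-costs' documented internals):

```python
# Sketch of batch cost estimation over a JSONL file.
# Assumes {"prompt": "..."} records; price and token heuristic
# are hypothetical, not llm-costs' actual behavior.
import json

PRICE_PER_M_INPUT = 2.50  # placeholder: USD per million input tokens

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def batch_cost(path: str) -> float:
    """Sum estimated input-token cost across every prompt in the file."""
    total_tokens = 0
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            total_tokens += estimate_tokens(record["prompt"])
    return total_tokens * PRICE_PER_M_INPUT / 1_000_000
```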
Supported Models
llm-costs includes support for over 20 models, ensuring comprehensive coverage:
- Anthropic: Claude Haiku, Claude Sonnet, Claude Opus
- OpenAI: GPT-4, GPT-4o, and more
- Google: Gemini 2.5 Pro, Gemini 2.5 Flash
- DeepSeek: DeepSeek-R1, DeepSeek-V3
- Mistral: Mistral Large, Mistral Medium
- Cohere: Command R+, Command R
- Groq: Llama-3.3-70B
No API Calls Required
llm-costs operates locally without making API calls, providing a fast and secure way to estimate expenses without compromising your data.
Why Choose llm-costs?
- Zero Setup: Start using it immediately without any configurations.
- Terminal Native: Seamlessly integrates into your shell for efficient usage.
- Regular Updates: Pricing is updated automatically to reflect current costs from major LLM providers.
Roadmap
Planned enhancements include CI/CD integration and more advanced features to extend llm-costs into additional workflows.
FAQ
Common questions concern the accuracy of token estimates and the set of supported features. Token counts are approximated with heuristics rather than provider tokenizers, so estimates are close but not exact, and pricing data is regularly maintained to stay current.
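To illustrate why heuristic counts are approximate, here are two common rules of thumb; which (if either) llm-costs actually uses is not specified, so treat both as illustrative:

```python
# Two simple offline token-count heuristics. llm-costs' actual
# method is not documented here; these are illustrative only.
def tokens_by_chars(text: str) -> int:
    return max(1, len(text) // 4)  # ~4 characters per token (typical English)

def tokens_by_words(text: str) -> int:
    return max(1, round(len(text.split()) / 0.75))  # ~0.75 words per token

prompt = "Explain quantum computing in simple terms"
print(tokens_by_chars(prompt), tokens_by_words(prompt))  # prints: 10 8
```

The two heuristics disagree even on a short prompt, which is why heuristic-based estimates should be read as ballpark figures rather than exact billing predictions.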
llm-costs makes the costs of working with LLM APIs easier to understand and manage, a useful resource for developers and researchers alike.