Are you tired of juggling different APIs for OpenAI, Anthropic, and Google’s large language models? A year after its initial release, the LLM API Adapter has evolved into a robust, SDK-free solution designed to unify your LLM development experience. Now in version 0.2.2, this powerful tool simplifies how developers interact with various language models, offering a consistent and efficient interface.

What’s New in LLM API Adapter v0.2.2?

The latest iteration of the Universal LLM API Adapter brings significant enhancements, making it an indispensable tool for AI developers:

  • Completely SDK-Free: Say goodbye to external dependencies. The adapter now communicates directly with provider APIs, offering a leaner and more flexible integration.
  • Unified chat() Interface: Experience unparalleled consistency with a single chat() interface across all supported models, including those from OpenAI, Anthropic, and Google. This means less code rewriting and more focus on your application logic.
  • Transparent Token & Cost Tracking: Gain complete visibility into your LLM usage. The adapter automatically tracks tokens and calculates costs for every request, eliminating the need for manual calculations and helping you manage your budget effectively.
  • Enhanced Resilience: Built with a consistent error taxonomy, the adapter provides clear and actionable insights into issues such as authentication failures, rate limits, timeouts, and token limits across all providers.
  • Rigorously Tested: With an impressive 98% unit test coverage, you can rely on the stability and accuracy of the LLM API Adapter for your critical applications.
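To illustrate the idea behind a consistent error taxonomy (the class names below are hypothetical, not the adapter's actual exports), a provider-agnostic layer typically maps each provider's failure signals onto one shared set of exceptions:

```python
# Hypothetical sketch of a unified error taxonomy; the adapter's real
# exception classes and mapping logic may differ.

class LLMAPIError(Exception):
    """Base class for all provider errors."""

class AuthenticationError(LLMAPIError):
    """Invalid or missing credentials."""

class RateLimitError(LLMAPIError):
    """The provider asked us to slow down."""

class RequestTimeoutError(LLMAPIError):
    """The provider did not answer in time."""

# Each provider reports these failures differently; a mapping table
# normalizes HTTP status codes into the shared taxonomy.
STATUS_TO_ERROR = {
    401: AuthenticationError,
    403: AuthenticationError,
    408: RequestTimeoutError,
    429: RateLimitError,
}

def raise_for_status(provider: str, status_code: int, message: str) -> None:
    """Translate a provider HTTP status into a unified exception."""
    error_cls = STATUS_TO_ERROR.get(status_code, LLMAPIError)
    raise error_cls(f"[{provider}] {message}")
```

With a scheme like this, calling code can catch, say, a rate-limit error once, regardless of which provider produced it.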

Effortless LLM Integration and Management

Integrating with any LLM is now simpler than ever. Here’s a glimpse of how easily you can interact with different models:

from llm_api_adapter.universal_adapter import UniversalLLMAPIAdapter

# Chat with an OpenAI model
# (Assuming 'openai_api_key', 'anthropic_api_key', and 'google_api_key' are defined)
openai_adapter = UniversalLLMAPIAdapter(
    organization="openai",
    model="gpt-5",
    api_key=openai_api_key,
)
response = openai_adapter.chat([
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "Explain how LLM adapters work."},
])
print(response.content)

# Seamlessly switch to Anthropic or Google models
anthropic_adapter = UniversalLLMAPIAdapter(
    organization="anthropic", model="claude-sonnet-4-5", api_key=anthropic_api_key
)
google_adapter = UniversalLLMAPIAdapter(
    organization="google", model="gemini-2.5-pro", api_key=google_api_key
)
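Because every adapter exposes the same chat() call, patterns like provider fallback reduce to a short loop. The sketch below uses stub classes so it runs on its own; in real code they would be UniversalLLMAPIAdapter instances:

```python
# Illustrative fallback over adapters sharing a chat() interface.
# StubAdapter stands in for real UniversalLLMAPIAdapter instances.

class StubAdapter:
    def __init__(self, name: str, fails: bool = False):
        self.name = name
        self.fails = fails

    def chat(self, messages):
        if self.fails:
            raise RuntimeError(f"{self.name} is unavailable")
        return f"answer from {self.name}"

def chat_with_fallback(adapters, messages):
    """Try each adapter in order; return the first successful answer."""
    last_error = None
    for adapter in adapters:
        try:
            return adapter.chat(messages)
        except Exception as exc:
            last_error = exc
    raise last_error

adapters = [StubAdapter("openai", fails=True), StubAdapter("anthropic")]
print(chat_with_fallback(adapters, [{"role": "user", "content": "Hi"}]))
# prints: answer from anthropic
```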

The adapter also provides detailed token and cost breakdowns with every response, empowering you with precise resource management:

# Example of token and cost tracking
# (Assuming 'chat_params' and 'google_api_key' are defined)
google = UniversalLLMAPIAdapter(
    organization="google",
    model="gemini-2.5-pro",
    api_key=google_api_key
)
response = google.chat(**chat_params)

print(response.usage.input_tokens, "tokens", f"({response.cost_input} {response.currency})")
print(response.usage.output_tokens, "tokens", f"({response.cost_output} {response.currency})")
print(response.usage.total_tokens, "tokens", f"({response.cost_total} {response.currency})")

This output might look like:

512 tokens (0.00025 USD)
137 tokens (0.00010 USD)
649 tokens (0.00035 USD)
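Under the hood, a breakdown like this is simple arithmetic: token counts multiplied by per-token rates. The rates below are invented for illustration and are not any provider's real pricing:

```python
# Illustrative cost calculation; the rates are made up, not real pricing.
INPUT_RATE_PER_MILLION = 0.50   # USD per 1M input tokens (example value)
OUTPUT_RATE_PER_MILLION = 1.50  # USD per 1M output tokens (example value)

def cost(tokens: int, rate_per_million: float) -> float:
    """Cost in USD for a token count at a per-million-token rate."""
    return tokens * rate_per_million / 1_000_000

input_cost = cost(512, INPUT_RATE_PER_MILLION)
output_cost = cost(137, OUTPUT_RATE_PER_MILLION)
print(f"{input_cost:.6f} USD input, {output_cost:.6f} USD output, "
      f"{input_cost + output_cost:.6f} USD total")
```

The adapter saves you from maintaining such rate tables yourself: the per-provider pricing is built in and applied to every response.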

Why the Universal LLM API Adapter?

The need for a unified interface became clear when developers repeatedly faced the challenge of adapting code for each LLM provider’s unique SDKs, parameter names, and error handling. The Universal LLM API Adapter solves this by providing one consistent experience, abstracting away these complexities and allowing you to focus on innovation.

Get Involved!

Ready to streamline your LLM development workflow? Install the adapter today:

pip install llm-api-adapter

Explore the documentation and examples on GitHub: github.com/Inozem/llm_api_adapter

Your feedback and stars ⭐ on GitHub are invaluable as we continue to enhance this project!
