
LLMProvider

LLMProvider wraps litellm to provide a single call_api() interface across 100+ LLM providers. It is the LLM used by ValidationFramework to generate responses, and is also used internally by AccuracyAgent, RelevancyAgent, and BiasAgent as the judge model.

Constructor

LLMProvider(
    provider: str,
    model: str,
    key: str,
)
| Parameter | Type | Description |
| --- | --- | --- |
| provider | str | Provider name as recognised by litellm (e.g. "anthropic", "openai", "gemini"). |
| model | str | Model identifier (e.g. "claude-haiku-4-5-20251001", "gpt-4o"). |
| key | str | API key for the provider. |

Internally, litellm receives the model as "{provider}/{model}" — e.g. "anthropic/claude-haiku-4-5-20251001".

Methods

call_api(query)

def call_api(query: str) -> str

Sends a single user message and returns the response text.
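Under the hood, call_api presumably builds a single-turn chat payload and passes the combined "{provider}/{model}" string to litellm.completion. The helper below is a hypothetical sketch of that argument construction only (the function name build_litellm_args is ours, not part of the library, and litellm is not needed to run it):

```python
def build_litellm_args(provider: str, model: str, query: str) -> dict:
    """Sketch of the arguments LLMProvider likely hands to litellm.completion.

    The "{provider}/{model}" model string matches the mapping described above;
    the rest is an illustrative assumption about the internals.
    """
    return {
        "model": f"{provider}/{model}",                    # e.g. "anthropic/claude-haiku-4-5-20251001"
        "messages": [{"role": "user", "content": query}],  # single user message, per call_api(query)
    }
```

With litellm installed, the equivalent direct call would be litellm.completion(**build_litellm_args(...)), whose response follows the OpenAI-compatible shape with the text at response.choices[0].message.content.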

Examples

Basic usage

from llm_validation_framework import LLMProvider
from llm_validation_framework.config_loader import load_api_key

api_key = load_api_key(provider="ANTHROPIC")
llm = LLMProvider(provider="anthropic", model="claude-haiku-4-5-20251001", key=api_key)

response = llm.call_api("What is the capital of France?")
print(response)  # "The capital of France is Paris."

Using OpenAI

import os
llm = LLMProvider(
    provider="openai",
    model="gpt-4o-mini",
    key=os.environ["OPENAI_API_KEY"],
)

Provider list

Any provider supported by litellm works. See the full list at docs.litellm.ai/docs/providers. Common options:

| Provider | provider= | Example model= |
| --- | --- | --- |
| Anthropic | "anthropic" | "claude-haiku-4-5-20251001" |
| OpenAI | "openai" | "gpt-4o-mini" |
| Google | "gemini" | "gemini-2.0-flash" |

DeepEvalLLMProvider

DeepEvalLLMProvider is an internal adapter class that wraps LLMProvider to make it compatible with deepeval’s GEval metric. You do not need to use it directly — AccuracyAgent, RelevancyAgent, and BiasAgent create it automatically.

class DeepEvalLLMProvider(DeepEvalBaseLLM):
    def __init__(self, llm_provider: LLMProvider): ...
    def generate(self, prompt: str) -> str: ...
    async def a_generate(self, prompt: str) -> str: ...
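In essence, the adapter routes deepeval's generate()/a_generate() calls into LLMProvider.call_api(). The sketch below illustrates that delegation pattern with a stub provider and without the DeepEvalBaseLLM base class, so it runs with no deepeval install or API key; the real class implements whatever additional methods the base class requires:

```python
import asyncio

class StubProvider:
    """Stand-in for LLMProvider so the sketch needs no API key."""
    def call_api(self, query: str) -> str:
        return f"response to: {query}"

class AdapterSketch:
    """Illustrates the delegation DeepEvalLLMProvider performs.

    The real adapter also subclasses deepeval's DeepEvalBaseLLM; that base
    class is omitted here to keep the example self-contained.
    """
    def __init__(self, llm_provider):
        self.llm_provider = llm_provider

    def generate(self, prompt: str) -> str:
        # Sync path: forward the judge prompt straight to call_api().
        return self.llm_provider.call_api(prompt)

    async def a_generate(self, prompt: str) -> str:
        # Async path funnels into the same synchronous call.
        return self.generate(prompt)

adapter = AdapterSketch(StubProvider())
print(adapter.generate("score this"))                     # response to: score this
print(asyncio.run(adapter.a_generate("score this too")))  # response to: score this too
```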