dllmforge.llamaindex_api
Create LLM objects and make API calls using llama_index, covering both Azure and non-Azure models. OpenAI and Mistral models are used in the examples. An overview of available llama_index LLMs: https://docs.llamaindex.ai/en/stable/module_guides/models/llms/modules/
Classes

LlamaIndexAPI – Class to interact with various LLM providers using LlamaIndex.
- class dllmforge.llamaindex_api.LlamaIndexAPI(model_provider: str = 'azure-openai', temperature: float = 0.0, api_key=None, api_base=None, api_version=None, deployment_name=None, model_name=None)
Class to interact with various LLM providers using LlamaIndex.
Initialize the LlamaIndex API client with the specified configuration.
- Parameters:
model_provider (str) – Provider of the model to use. Options are:
“azure-openai”: Use Azure OpenAI
“openai”: Use OpenAI
“mistral”: Use Mistral
temperature (float) – Temperature setting for the model (0.0 to 1.0)
api_key (str) – API key for the provider
api_base (str) – API base URL (for Azure)
api_version (str) – API version (for Azure)
deployment_name (str) – Deployment name (for Azure)
model_name (str) – Model name (for OpenAI/Mistral)
- __init__(model_provider: str = 'azure-openai', temperature: float = 0.0, api_key=None, api_base=None, api_version=None, deployment_name=None, model_name=None)
Initialize the LlamaIndex API client with the specified configuration.
- Parameters:
model_provider (str) – Provider of the model to use. Options are:
“azure-openai”: Use Azure OpenAI
“openai”: Use OpenAI
“mistral”: Use Mistral
temperature (float) – Temperature setting for the model (0.0 to 1.0)
api_key (str) – API key for the provider
api_base (str) – API base URL (for Azure)
api_version (str) – API version (for Azure)
deployment_name (str) – Deployment name (for Azure)
model_name (str) – Model name (for OpenAI/Mistral)
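A construction sketch for the Azure provider, assuming dllmforge is installed and credentials are supplied via the environment. The environment-variable names below are illustrative assumptions, not defined by this module:

```python
import os

# Gather Azure OpenAI settings from the environment. The variable names
# here are illustrative assumptions, not defined by dllmforge.
azure_kwargs = {
    "model_provider": "azure-openai",
    "temperature": 0.0,
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
    "api_base": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "api_version": os.environ.get("AZURE_OPENAI_API_VERSION"),
    "deployment_name": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
}

# With credentials in place:
# from dllmforge.llamaindex_api import LlamaIndexAPI
# llm = LlamaIndexAPI(**azure_kwargs)
```

For the non-Azure providers, the Azure-only keys (api_base, api_version, deployment_name) are omitted and model_name is passed instead.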
- send_test_message(prompt='Hello, how are you?')
Send a test message to the model and get a response.
- Parameters:
prompt (str) – The prompt string to send.
- Returns:
Dictionary containing the response and metadata.
- Return type:
dict
- chat_completion(messages, temperature=None, max_tokens=None)
Get a chat completion from the model.
- Parameters:
messages (list) – List of message dicts or (role, content) tuples
temperature (float) – Optional temperature override
max_tokens (int) – Optional max tokens override
- Returns:
Dictionary containing the response and metadata.
- Return type:
dict
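Since messages may be passed either as dicts or as (role, content) tuples, the sketch below shows both shapes and a small normalizing helper. The helper is illustrative only, not part of the dllmforge API, and the final call assumes a configured client:

```python
# Messages may be given as dicts or as (role, content) tuples, per the
# docstring above. This helper (an illustrative sketch, not part of the
# dllmforge API) normalizes both forms to the dict shape.
def normalize_messages(messages):
    return [
        m if isinstance(m, dict) else {"role": m[0], "content": m[1]}
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    ("user", "Summarize LlamaIndex in one sentence."),
]

# With a configured client `llm` (see the constructor above):
# result = llm.chat_completion(normalize_messages(messages),
#                              temperature=0.2, max_tokens=256)
# result is a dict containing the response and metadata.
```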