dllmforge.langchain_api

Create LLM objects and make API calls via Langchain, including Azure and non-Azure models. OpenAI and Mistral models are used in the examples. An overview of available Langchain chat models: https://python.langchain.com/docs/integrations/chat/

Classes

LangchainAPI([model_provider, temperature, ...])

Class to interact with various LLM providers using Langchain.

class dllmforge.langchain_api.LangchainAPI(model_provider: str = 'azure-openai', temperature: float = 0.1, api_key=None, api_base=None, api_version=None, deployment_name=None, model_name=None)[source]

Class to interact with various LLM providers using Langchain.

Initialize the Langchain API client with specified configuration.

Parameters:
  • model_provider (str) – Provider of the model to use. Options are:
      - “azure-openai”: Use Azure OpenAI
      - “openai”: Use OpenAI
      - “mistral”: Use Mistral

  • temperature (float) – Temperature setting for the model (0.0 to 1.0)

  • api_key (str) – API key for the provider

  • api_base (str) – API base URL (for Azure)

  • api_version (str) – API version (for Azure)

  • deployment_name (str) – Deployment name (for Azure)

  • model_name (str) – Model name (for OpenAI/Mistral)

__init__(model_provider: str = 'azure-openai', temperature: float = 0.1, api_key=None, api_base=None, api_version=None, deployment_name=None, model_name=None)[source]

Initialize the Langchain API client with specified configuration.

Parameters:
  • model_provider (str) – Provider of the model to use. Options are:
      - “azure-openai”: Use Azure OpenAI
      - “openai”: Use OpenAI
      - “mistral”: Use Mistral

  • temperature (float) – Temperature setting for the model (0.0 to 1.0)

  • api_key (str) – API key for the provider

  • api_base (str) – API base URL (for Azure)

  • api_version (str) – API version (for Azure)

  • deployment_name (str) – Deployment name (for Azure)

  • model_name (str) – Model name (for OpenAI/Mistral)
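A minimal instantiation sketch, assuming dllmforge is installed and credentials are configured. The environment variable names, the API version string, and the deployment/model names below are illustrative assumptions, not part of dllmforge:

```python
import os

# Providers documented above (assumption: this tuple mirrors the docs, it is
# not an attribute exported by dllmforge).
SUPPORTED_PROVIDERS = ("azure-openai", "openai", "mistral")

try:
    from dllmforge.langchain_api import LangchainAPI

    # Azure OpenAI needs api_base, api_version and deployment_name.
    azure_client = LangchainAPI(
        model_provider="azure-openai",
        temperature=0.1,
        api_key=os.environ["AZURE_OPENAI_API_KEY"],      # assumed env var name
        api_base=os.environ["AZURE_OPENAI_ENDPOINT"],    # assumed env var name
        api_version="2024-02-01",                        # assumed version
        deployment_name="my-gpt4-deployment",            # placeholder
    )

    # Non-Azure providers take model_name instead of deployment details.
    mistral_client = LangchainAPI(
        model_provider="mistral",
        api_key=os.environ["MISTRAL_API_KEY"],           # assumed env var name
        model_name="mistral-large-latest",               # assumed model name
    )
except (ImportError, KeyError):
    pass  # dllmforge not installed or credentials not set in this environment
```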

check_server_status()[source]

Check if the LLM service is accessible.

send_test_message(prompt='Hello, how are you?')[source]

Send a test message to the model and get a response.

Parameters:

prompt (str) – The prompt string to send.

Returns:

Dictionary containing the response and metadata.

Return type:

dict
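A short smoke-test sketch combining check_server_status() and send_test_message(), assuming dllmforge is installed and a valid key is supplied. The model name and key placeholder are assumptions:

```python
prompt = "Hello, how are you?"  # the documented default prompt

try:
    from dllmforge.langchain_api import LangchainAPI

    client = LangchainAPI(
        model_provider="openai",
        api_key="sk-...",           # placeholder; use a real key
        model_name="gpt-4o-mini",   # assumed model name
    )
    client.check_server_status()        # verify the service is reachable first
    result = client.send_test_message(prompt)
    print(result)                       # dict with the response and metadata
except ImportError:
    pass  # dllmforge not installed in this environment
```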

chat_completion(messages, temperature=None, max_tokens=None)[source]

Get a chat completion from the model.

Parameters:
  • messages (list) – List of message tuples (role, content)

  • temperature (float) – Optional temperature override

  • max_tokens (int) – Optional max tokens override

Returns:

Dictionary containing the response and metadata.

Return type:

dict
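A sketch of the (role, content) message format chat_completion() expects, with per-call temperature and max_tokens overrides. The provider settings are placeholders, and the exact keys of the returned dict are not assumed here:

```python
# Message tuples are (role, content) pairs, as documented above.
messages = [
    ("system", "You are a concise technical assistant."),
    ("user", "Explain temperature in LLM sampling in two sentences."),
]

try:
    from dllmforge.langchain_api import LangchainAPI

    client = LangchainAPI(
        model_provider="openai",
        api_key="sk-...",         # placeholder
        model_name="gpt-4o",      # assumed model name
    )
    # Per-call overrides take precedence over the temperature set at init.
    result = client.chat_completion(messages, temperature=0.0, max_tokens=200)
    print(result)  # dict with the response and metadata
except ImportError:
    pass  # dllmforge not installed in this environment
```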

ask_with_retriever(question: str, retriever, **kwargs)[source]

Ask a question using the retriever to get context.

Parameters:
  • question (str) – The question to ask.

  • retriever – A RAG retriever object that can retrieve context relevant to the question.

  • **kwargs – Additional keyword arguments to pass to the LLM (e.g., temperature, max_tokens).

Returns:

The response from the LLM.
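A usage sketch with a minimal stand-in retriever. The interface ask_with_retriever() expects from the retriever (e.g. invoke vs. get_relevant_documents) is an assumption here, so the stand-in implements both; in practice you would pass a real Langchain retriever (e.g. from a vector store):

```python
class ListRetriever:
    """Minimal duck-typed stand-in for a RAG retriever: returns fixed
    documents regardless of the query. Illustrative only."""

    def __init__(self, docs):
        self.docs = docs

    def invoke(self, query):
        return self.docs

    def get_relevant_documents(self, query):
        return self.docs


retriever = ListRetriever(
    ["dllmforge wraps Langchain chat models behind a single class."]
)

try:
    from dllmforge.langchain_api import LangchainAPI

    client = LangchainAPI(model_provider="azure-openai")
    # Extra keyword arguments are passed through to the LLM.
    answer = client.ask_with_retriever(
        "What does dllmforge do?", retriever, temperature=0.0
    )
    print(answer)
except ImportError:
    pass  # dllmforge not installed in this environment
```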