Platforms

Platforms allow you to connect to LLMs from different providers through a single API. Railtracks supports connecting to the following major LLM platforms:

  • Azure AI Foundry
  • Ollama
  • HuggingFace
  • Portkey

The code is the same as for LLM Providers, with the provider class name replaced by the platform's class name.

Quick Start Examples

import railtracks as rt
# make sure to configure your environment variables for Azure AI

model = rt.llm.AzureAILLM("azure_ai/deepseek-r1")

import railtracks as rt
# make sure to configure your environment variables for Ollama

model = rt.llm.OllamaLLM("deepseek-r1:8b")

Tool Calling Support

For HuggingFace serverless inference models, make sure that the model you are using supports tool calling. We DO NOT check for tool calling support in HuggingFace models. If you use a model that does not support tool calling, it will default to regular chat, even if the tool_nodes parameter is provided.
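The fallback behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not railtracks internals; the helper name and request shape are invented for clarity.

```python
# Hypothetical sketch: if the model lacks tool-calling support, the
# tool_nodes argument is silently ignored and a plain chat request is
# built instead. All names here are illustrative only.
def build_request(model_supports_tools: bool, messages, tool_nodes=None):
    request = {"messages": messages}
    if tool_nodes and model_supports_tools:
        request["tools"] = tool_nodes
    return request

# A model without tool support silently falls back to plain chat:
print(build_request(False, ["hi"], tool_nodes=["search"]))
# {'messages': ['hi']}
```

The key point is that no error is raised: the tools are simply dropped, so verify tool-calling support before relying on it.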

For HuggingFace, model_name must be in one of the following formats:

  • huggingface/<provider>/<hf_org_or_user>/<hf_model>
  • <provider>/<hf_org_or_user>/<hf_model>
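The format above can be checked with a small helper. This is a minimal sketch, not part of railtracks; the function name is hypothetical.

```python
# Split a HuggingFace model string of the form
# "<provider>/<hf_org_or_user>/<hf_model>", with an optional leading
# "huggingface/" prefix, into its three parts.
def parse_hf_model_name(model_name: str):
    parts = model_name.split("/")
    if parts and parts[0] == "huggingface":
        parts = parts[1:]  # drop the optional prefix
    if len(parts) != 3:
        raise ValueError(
            "expected '<provider>/<hf_org_or_user>/<hf_model>', got: " + model_name
        )
    provider, org, model = parts
    return provider, org, model

print(parse_hf_model_name("huggingface/together_ai/meta-llama/Llama-3.3-70B-Instruct"))
# ('together_ai', 'meta-llama', 'Llama-3.3-70B-Instruct')
```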

Here are a few example models that you can use:

# support tool calling
rt.llm.HuggingFaceLLM("together_ai/meta-llama/Llama-3.3-70B-Instruct")
rt.llm.HuggingFaceLLM("sambanova/meta-llama/Llama-3.3-70B-Instruct")

# does not support tool calling
rt.llm.HuggingFaceLLM("featherless-ai/mistralai/Mistral-7B-Instruct-v0.2")

import railtracks as rt
# make sure to configure your environment variables for HuggingFace

model = rt.llm.HuggingFaceLLM("together/deepseek-ai/DeepSeek-R1")

import railtracks as rt

# pass in the model name, API endpoint, and API key to connect to any OpenAI-compatible LLM server
model = rt.llm.OpenAICompatibleProvider("<your model name>", api_base="<base api url>", api_key="<api key>")