Guardrails Quickstart
Guardrails are the policy layer around an LLM call in Railtracks. They let you inspect what goes into the model and what comes out, and they can allow, transform, or block the interaction based on your own rules.
In practice, you attach guardrails with agent_node(..., guardrails=Guard(...)), then provide one or more rails for the phases you want to control. This quickstart focuses on a small input guard with a real LLM so you can see both outcomes clearly: one request passes through to the model, and one is blocked before inference.
Minimal setup
import railtracks as rt
from railtracks.guardrails import (
    Guard,
    GuardrailBlockedError,
    GuardrailDecision,
    InputGuard,
    LLMGuardrailEvent,
)

class BlockSensitiveRequests(InputGuard):
    def __call__(self, event: LLMGuardrailEvent) -> GuardrailDecision:
        """Check the latest user message and block requests that mention passwords."""
        latest_message = event.messages[-1]
        content = str(latest_message.content).lower()
        if "password" in content:
            return GuardrailDecision.block(
                reason="Requests for passwords are not allowed.",
                user_facing_message="Ask for something else instead.",
            )
        return GuardrailDecision.allow()

Agent = rt.agent_node(
    name="guardrails-quickstart-agent",
    llm=rt.llm.GeminiLLM("gemini-2.5-flash"),
    system_message="You are a concise assistant.",
    guardrails=Guard(input=[BlockSensitiveRequests()]),
)

flow = rt.Flow("Guardrails Quickstart", entry_point=Agent)
No API key set?
Make sure your provider API key is available in your environment or .env file.
Railtracks supports multiple providers. See Supported Providers.
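For example, if you are using the Gemini model from the setup above, you would export your key before running the script. The variable name `GEMINI_API_KEY` is an assumption here; the exact name is provider-specific, so check your provider's documentation.

```shell
# Assumed variable name for Google Gemini; consult your provider's docs.
export GEMINI_API_KEY="your-key-here"
```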
Passing request
This request does not trigger the guardrail, so it passes through and reaches the LLM normally.
Blocked request
This request contains the blocked keyword, so Railtracks raises GuardrailBlockedError instead of calling the model.
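The block path can be sketched the same way. Here `BlockedError` and `guard_then_call` are local stand-ins for railtracks' `GuardrailBlockedError` and the flow's pre-inference check, not the library's actual API; they only illustrate the shape of the control flow, where the guard raises before any model call happens:

```python
class BlockedError(Exception):
    """Stand-in for railtracks' GuardrailBlockedError."""

def guard_then_call(message: str) -> str:
    # Run the guard first; only reach the "model" if it allows the request.
    if "password" in message.lower():
        raise BlockedError("Requests for passwords are not allowed.")
    return "model called"

try:
    guard_then_call("What is my neighbour's wifi password?")
except BlockedError as err:
    print(f"blocked: {err}")  # → blocked: Requests for passwords are not allowed.
```

In the real framework you would wrap your flow invocation in a try/except on `GuardrailBlockedError` in the same way, so blocked requests surface as a catchable error rather than a model response.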