# Prompts and Context
## Basic Example
```python
import railtracks as rt

# Define a prompt with placeholders
system_message = "You are a {role} assistant specialized in {domain}."

# Create an LLM node with this prompt
assistant = rt.agent_node(
    name="Assistant",
    system_message=system_message,
    llm=rt.llm.OpenAILLM("gpt-4o"),
)

# Run with context values
assistant_flow = rt.Flow("assistant-flow", entry_point=assistant)
response = assistant_flow.update_context(
    {"role": "technical", "domain": "Python programming"}
).invoke("Help me understand decorators.")
```
In this example, the system message will be expanded to: "You are a technical assistant specialized in Python programming."
## Enabling and Disabling Context Injection
Context injection is enabled by default but can be disabled if needed:
```python
# Disable context injection for a specific run
flow = rt.Flow(
    "assistant-flow",
    entry_point=assistant,
    prompt_injection=False,
)

# ...or disable it globally
rt.set_config(prompt_injection=False)
```
This is useful for prompts that should not change based on the context, or that contain literal characters you don't want treated as placeholders.
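For instance, here is a minimal sketch (the JSON prompt and flow name are hypothetical) of a prompt whose braces must stay literal:

```python
# Hypothetical example: the prompt contains literal JSON braces that
# must not be interpreted as placeholders, so injection is disabled.
json_prompt = 'Always answer with JSON of the form {"answer": "<text>"}.'

json_assistant = rt.agent_node(
    name="JSON Assistant",
    system_message=json_prompt,
    llm=rt.llm.OpenAILLM("gpt-4o"),
)

flow = rt.Flow("json-flow", entry_point=json_assistant, prompt_injection=False)
```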
## Message-Level Control
Context injection can be controlled at the message level using the `inject_prompt` parameter:
```python
# This message will have context injection applied
system_msg = rt.llm.SystemMessage(content="You are a {role}.", inject_prompt=True)

# This message will not have context injection applied
user_msg = rt.llm.UserMessage(content="Tell me about {topic}.", inject_prompt=False)
```
This is useful when only some messages should have context injected. For example, a math assistant might inject context into the system message but not into user messages, which may contain LaTeX with literal `{` and `}` characters; setting `inject_prompt=False` on the user message prevents those braces from being treated as placeholders.
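A minimal sketch of that case (the tutor prompt and LaTeX question are illustrative):

```python
# Context is injected into the system message only; the user message
# keeps its literal LaTeX braces untouched.
system_msg = rt.llm.SystemMessage(
    content="You are a {level} math tutor.", inject_prompt=True
)
user_msg = rt.llm.UserMessage(
    content=r"Simplify \frac{x^2 - 1}{x - 1}.", inject_prompt=False
)
```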
## Escaping Placeholders
If you need to include literal curly braces in your prompt without triggering context injection, you can escape them by doubling the braces:
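```python
# A minimal sketch of the doubling rule: doubled braces pass through as
# literal braces, while single braces remain placeholders.
system_message = (
    "You are a {role} assistant. "
    "Always wrap variable names in {{curly braces}}."
)
# With {"role": "technical"} in context, this expands to:
# "You are a technical assistant. Always wrap variable names in {curly braces}."
```

Only the single-braced `{role}` placeholder is substituted; each doubled pair renders as a single literal brace.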
## Debugging Prompts
If your prompts aren't producing the expected results:
- Check context values: Ensure the context contains the expected values for your placeholders
- Verify prompt injection is enabled: Check that `prompt_injection=True` is set in your session configuration
- Look for syntax errors: Ensure your placeholders use the correct format: `{variable_name}`
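When in doubt, you can also dry-run a template against your context with plain Python string formatting. A minimal sketch, assuming railtracks placeholders follow `str.format` semantics:

```python
template = "You are a {role} assistant specialized in {domain}."
context = {"role": "technical"}  # "domain" is deliberately missing

try:
    print(template.format(**context))
except KeyError as missing:
    # A missing context value surfaces as a KeyError naming the placeholder
    print(f"Missing context value for placeholder: {missing}")
```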
## Example: Reusable Prompt Templates
You can create reusable prompt templates that adapt to different scenarios:
```python
import railtracks as rt
from railtracks.llm import OpenAILLM

# Define a template with multiple placeholders
template = """You are a {assistant_type} assistant.
Your task is to help the user with {task_type} tasks.
Use a {tone} tone in your responses.
The user's name is {user_name}."""

# Create an LLM node with this template
assistant = rt.agent_node(
    name="Dynamic Assistant",
    system_message=template,
    llm=OpenAILLM("gpt-4o"),
)

# Different contexts for different scenarios
customer_support_context = {
    "assistant_type": "customer support",
    "task_type": "troubleshooting",
    "tone": "friendly and helpful",
    "user_name": "Alex",
}

technical_expert_context = {
    "assistant_type": "technical expert",
    "task_type": "programming",
    "tone": "professional",
    "user_name": "Taylor",
}

# Run the same flow with different contexts
assistant_flow = rt.Flow("assistant-flow", entry_point=assistant)

customer_support_flow = assistant_flow.update_context(customer_support_context)
response1 = customer_support_flow.invoke("My internet is not working. Can you help?")

technical_expert_flow = assistant_flow.update_context(technical_expert_context)
response2 = technical_expert_flow.invoke("How do I implement a binary tree?")
```
## Benefits of Context Injection
Using context injection provides several advantages:
- Reduced token usage: Avoid passing the same context information repeatedly
- Improved maintainability: Update prompts in one place
- Dynamic adaptation: Adjust prompts based on runtime conditions
- Separation of concerns: Keep prompt templates separate from variable data
- Reusability: Use the same prompt template with different contexts