How to use placeholders for variables in my system prompt without "raise KeyError" #26369
Replies: 1 comment 7 replies
Hey there, @humphreyde! I'm here to help you with your issue. Let's tackle this together!

To use placeholders for variables in your system prompt without raising a `KeyError`, you can mark them as optional. Here is an example of how to use the `MessagesPlaceholder` class:

```python
from langchain_core.prompts import MessagesPlaceholder

# Create a MessagesPlaceholder with the variable name "chat_history" and make it optional
prompt = MessagesPlaceholder("chat_history", optional=True)

# Format messages without providing the "chat_history" variable
formatted_messages = prompt.format_messages()

# This will return an empty list instead of raising a KeyError
print(formatted_messages)  # Output: []

# Format messages with the "chat_history" variable provided
formatted_messages = prompt.format_messages(
    chat_history=[
        ("system", "You are an AI assistant."),
        ("human", "Hello!"),
    ]
)

# This will return the formatted messages
print(formatted_messages)
# Output: [
#     SystemMessage(content="You are an AI assistant."),
#     HumanMessage(content="Hello!"),
# ]
```

By setting `optional=True`, the placeholder simply returns an empty list when the variable is not supplied. Additionally, you can use partial variables to pre-fill some of the placeholders, as shown in the following example:

```python
from langchain_core.prompts import SystemMessagePromptTemplate

graph_creator_content = """
Your instructions are:
{instructions}
History:
{chat_history}
"""

# Normally this would be a string; an empty dict renders as "{}" in the output
json_prompt_instructions = {}

graph_analyst_template = SystemMessagePromptTemplate.from_template(
    template=graph_creator_content,
    input_variables=["chat_history"],
    partial_variables={"instructions": json_prompt_instructions},
)

formatted_message = graph_analyst_template.format(chat_history="history")
print(formatted_message)
```
The formatted message looks like:

```
SystemMessage(content="\n Your instructions are:\n {}\n History:\n history\n ")
```

This approach allows you to handle placeholders more flexibly and avoid errors when some variables are not provided. To integrate this with `RunnableWithMessageHistory`, you can structure your code as follows:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_openai import ChatOpenAI

# Define the prompt template with a MessagesPlaceholder
system_prompt = (
    "examples: {example}"
    "\n\n"
    "{context}"
)
qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
    ]
)

# Define the model and chain
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = qa_prompt | llm

# Define the function to get session history
store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

# Wrap the chain with RunnableWithMessageHistory.
# Note: no output_messages_key is needed here, because the chain returns a
# message directly rather than a dict with an "answer" key.
conversational_rag_chain = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

# Define the config with session_id
config = {"configurable": {"session_id": "abc5"}}

# Every non-optional placeholder ({example}, {context}, {input}) must be
# supplied at invoke time, otherwise a KeyError is raised.
response = conversational_rag_chain.invoke(
    {
        "input": "Hi! I'm Jim",
        "example": "Q: hello -> A: hi",
        "context": "Be friendly.",
    },
    config=config,
)
print(response.content)
```

By following this approach, you can effectively manage placeholders and avoid `KeyError` exceptions when some variables are not provided.
Checked other resources
Commit to Help
Example Code
Description
{example} is just an example.
As you can see, if there are many placeholders like "{x}", "{xx}", ... in the system prompt, a KeyError will be raised. How can I handle this problem?
Thanks!
P.S. I want to use RunnableWithMessageHistory rather than FewShotPromptTemplate.
System Info
System Information
Package Information
Optional packages not installed
Other Dependencies