How to Use and Update Complex Pydantic Models as Input in LangChain with Non-Linear Chains? #28539
-
Hey there, @JamesLarkinWhite! I'm here to help you with any bugs, questions, or contributions you might have. Let's work together to solve this! To handle a complex Pydantic model in LangChain with non-linear chains, the basic idea is to pass each stage only the parts of the model it needs and then merge that stage's output back into the model:
Here's a simplified example of how you might structure this:

```python
from typing import List, Optional
from pydantic import BaseModel
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda  # note: not in langchain.chains

class Message(BaseModel):
    role: str
    content: str

class QuestionContext(BaseModel):
    current_description: str
    current_comments: Optional[str] = None
    current_question: str
    current_context: str
    category: Optional[str] = None

class InputData(BaseModel):
    messages: List[Message]
    question_context: QuestionContext

# Define your chains with specific prompts (`llm` is assumed to be a model instance defined elsewhere)
main_chain_prompt = PromptTemplate(
    template="Category: {category}\nQuestion: {question}\nContext: {context}\n\nPlease provide a response based on the given category and context.",
    input_variables=["category", "question", "context"],
)
main_chain = LLMChain(llm=llm, prompt=main_chain_prompt)

# A second chain that assigns a category when none is set yet (adapt the prompt to your needs)
tagging_prompt = PromptTemplate(
    template="Assign a single category to the following question:\n{question}",
    input_variables=["question"],
)
tagging_chain = LLMChain(llm=llm, prompt=tagging_prompt)

def extract_input_data(input_data: InputData) -> dict:
    # Pull out only the fields the downstream chain needs
    return {
        "category": input_data.question_context.category,
        "question": input_data.question_context.current_question,
        "context": input_data.question_context.current_context,
    }

def update_input_data(input_data: InputData, result: dict) -> InputData:
    # Merge the chain's output back into the Pydantic model
    input_data.question_context.category = result.get("category", input_data.question_context.category)
    # Add more updates as needed, e.g. parse a category out of result["text"] after the tagging route
    return input_data

def router(input_data: InputData):
    # Pick a chain based on the current state of the model
    if input_data.question_context.category:
        return main_chain
    return tagging_chain

# Route, run the selected chain on the extracted fields, then merge the result back into the model
full_chain = (
    RunnableLambda(lambda data: {"input_data": data, "chain": router(data)})
    | RunnableLambda(lambda step: {
        "input_data": step["input_data"],
        "result": step["chain"].invoke(extract_input_data(step["input_data"])),
    })
    | RunnableLambda(lambda step: update_input_data(step["input_data"], step["result"]))
)

# Example usage
input_data = InputData(
    messages=[Message(role="user", content="Hello")],
    question_context=QuestionContext(
        current_description="Issue with blue balloon",
        current_comments=None,
        current_question="What could be the problem?",
        current_context="my blue balloon is red",
        category=None,
    ),
)

# Run the full chain; the output is the updated InputData instance
result = full_chain.invoke(input_data)
```

In this setup:

- `router` decides which chain to run based on whether a category has already been set.
- `extract_input_data` passes the selected chain only the fields it actually needs.
- `update_input_data` merges the chain's output back into the `InputData` instance, so the output of `full_chain` is the updated model.
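As a side note, LangChain also ships a dedicated routing primitive, `RunnableBranch`, which could replace the hand-rolled `router` above. Here is a minimal sketch, reusing `main_chain`, `tagging_chain`, and `extract_input_data` from the example (so the same assumptions about `llm` apply):

```python
from langchain_core.runnables import RunnableBranch, RunnableLambda

# Each branch is a (condition, runnable) pair; the final argument is the default.
# Both branches receive the InputData instance and run the matching chain on
# just the fields it needs.
routed = RunnableBranch(
    (
        lambda d: d.question_context.category is not None,
        RunnableLambda(lambda d: main_chain.invoke(extract_input_data(d))),
    ),
    RunnableLambda(lambda d: tagging_chain.invoke(extract_input_data(d))),
)
# The output is the selected chain's result dict, which you can still merge
# back into the model with update_input_data as before.
```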
Either way, this approach allows you to maintain and update the state of your Pydantic model throughout the chain execution.
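If you later want to bolt on more stages, for example a RAG step, while keeping all the information in one place, one pattern, just a sketch, is to write every stage as a function that takes and returns the full `InputData` and compose the stages with `|`. The `retriever` below is a hypothetical vector-store retriever you would create yourself; it is not part of the example above.

```python
from langchain_core.runnables import RunnableLambda

def rag_stage(input_data: InputData) -> InputData:
    # Hypothetical second stage: fetch documents for the current question and
    # write them back onto the model so that later stages can use them.
    docs = retriever.invoke(input_data.question_context.current_question)  # assumes `retriever` exists
    input_data.question_context.current_context = "\n\n".join(
        doc.page_content for doc in docs
    )
    return input_data

# full_chain already returns the updated InputData, so stages compose cleanly
# and the model's state is carried through the whole pipeline.
pipeline = full_chain | RunnableLambda(rag_stage)
updated = pipeline.invoke(input_data)
```

Keeping a single `InputData -> InputData` signature for every stage is what makes the state persist, however the stages are wired together (sequentially, behind a router, or in a fork).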
-
I'm working with LangChain and have a complex pydantic model that encapsulates all the necessary information for my process. I need to pass only parts of this object to different stages of a chain and update the object with the outputs of these stages so that the data is persisted throughout the process. Additionally, my chain is non-linear, involving routers and forks.
Here's a simplified version of my pydantic model and the chain setup that obviously does not work, but it should help you understand my reasoning:
In reality, the result would be used to update input_data so that I have all the information in one place.
I am not sure how I could then use another sequential chain that does RAG, for example, and persist the InputData object throughout the whole chain.
Is this even a valid approach?