Replies: 4 comments 6 replies
-
Hey @Eskiii, great to see you diving deeper into the LangChain world! Facing a new challenge, I see? Let's tackle this one together. To limit the number of iterations for specific loops within your graph, you can create custom chain classes that inherit from `SequentialChain` and track an iteration counter.
Here's an example to illustrate this approach:
```python
from langchain.chains import SequentialChain

class LimitedIterationChain(SequentialChain):
    def __init__(self, chains, max_iterations):
        super().__init__(chains=chains)
        self.max_iterations = max_iterations
        self.current_iteration = 0

    def __call__(self, *args, **kwargs):
        while self.current_iteration < self.max_iterations:
            super().__call__(*args, **kwargs)
            self.current_iteration += 1
        self.current_iteration = 0  # Reset for potential reuse

# Example usage with your specific loops
sequence_1_chain = LimitedIterationChain(components_sequence_1, max_iterations=3)
sequence_2_chain = LimitedIterationChain(components_sequence_2, max_iterations=5)
```
This method requires you to adjust the logic based on your specific implementation details and how components are called within each iteration.
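The snippet above depends on LangChain internals and is only a sketch. The same idea can be shown without any dependency: a wrapper that calls an inner callable at most `max_iterations` times per run. The class and names below are hypothetical, purely to illustrate the pattern:

```python
class LimitedIterationRunner:
    """Call `fn` repeatedly, at most `max_iterations` times per run()."""

    def __init__(self, fn, max_iterations):
        self.fn = fn
        self.max_iterations = max_iterations

    def run(self, *args, **kwargs):
        # Collect one result per iteration; the range() bound is what
        # enforces the iteration cap.
        return [self.fn(*args, **kwargs) for _ in range(self.max_iterations)]

runner = LimitedIterationRunner(lambda x: x * 2, max_iterations=3)
results = runner.run(5)  # three calls, then stop
```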
-
Hi @Eskiii
You can pass a recursion limit in the invoke config: `chain.invoke(..., {"recursion_limit": 15})`
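Note that `recursion_limit` caps the *total* number of super-steps for the whole graph, not one specific loop; when the limit is hit, LangGraph raises a `GraphRecursionError`. A dependency-free sketch of that behavior (the `run_graph` helper and its signature are assumptions for illustration, not the LangGraph API):

```python
class GraphRecursionError(RuntimeError):
    """Raised when the graph runs more steps than the recursion limit."""

def run_graph(start, nodes, edges, recursion_limit=25):
    """Follow `edges` from node to node, running each node on the state.

    Stops normally when an edge maps to None; raises once the total
    number of executed steps reaches `recursion_limit`.
    """
    state = {"steps": []}
    current = start
    for _ in range(recursion_limit):
        state["steps"].append(current)
        nodes[current](state)          # run the node
        current = edges.get(current)   # follow the (static) edge
        if current is None:
            return state
    raise GraphRecursionError(f"Recursion limit of {recursion_limit} reached")
```

With a terminating edge map the run finishes; with a cycle (e.g. `{"a": "b", "b": "a"}`) it raises after `recursion_limit` steps, which is exactly the safety net `{"recursion_limit": 15}` provides.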
-
I am using a defaultdict to keep track of the number of times each specific step runs:
```python
from collections import defaultdict

class CustomGraphState(GraphState):
    ...

workflow = StateGraph(CustomGraphState)
```
Is it helpful?
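The comment above is only a fragment, but the per-node counting idea can be sketched standalone. The names (`should_continue`, `counts`) are hypothetical; the point is that each node bumps its own counter so a conditional edge can stop one specific loop once its limit is hit:

```python
from collections import defaultdict

# Per-node visit counters; defaultdict(int) starts every node at 0.
counts = defaultdict(int)

def should_continue(node, max_visits):
    """Record one visit to `node`; return True while under the cap."""
    counts[node] += 1
    return counts[node] <= max_visits

# Simulate a loop that revisits "retrieve" until its cap of 3 is hit.
steps = []
while should_continue("retrieve", max_visits=3):
    steps.append("retrieve")
```

In a real graph, the counter would live inside the state (as the `CustomGraphState` above suggests) so each run starts fresh.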
-
You can do something like the below: have revision_number and max_revisions in your graph state.
### graph state
```python
from typing import List
from typing_extensions import TypedDict

class GraphState(TypedDict):
    """
    Represents the state of our graph.

    Attributes:
        question: question
        generation: LLM generation
        documents: list of documents
        revision_number: number of query revisions so far
        max_revisions: maximum number of revisions allowed
    """

    question: str
    generation: str
    documents: List[str]
    revision_number: int
    max_revisions: int
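One detail the snippets in this answer leave implicit: something has to *increment* `revision_number`, or the cap is never reached. In the self-RAG layout that naturally belongs in the `transform_query` node, which returns a state update. A minimal sketch, assuming plain-dict state (the node body here is hypothetical):

```python
def transform_query(state):
    """Rewrite the query (elided) and bump the revision counter.

    Returning a partial dict is how LangGraph nodes update state keys.
    """
    # ... produce a rewritten state["question"] here ...
    return {"revision_number": state["revision_number"] + 1}

state = {"question": "q", "revision_number": 0, "max_revisions": 3}
update = transform_query(state)
```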
and update your conditional edges for the revisions:
```python
def decide_to_generate(self, state: GraphState):
    """
    Determines whether to generate an answer, or re-generate a question.

    Args:
        state (dict): The current graph state

    Returns:
        str: Binary decision for next node to call
    """
    print("---ASSESS GRADED DOCUMENTS---")
    filtered_documents = state["documents"]

    if not filtered_documents and state["revision_number"] < state["max_revisions"]:
        # All documents were filtered out by check_relevance and we
        # still have revisions left, so re-generate the query
        print(
            "---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION, TRANSFORM QUERY---"
        )
        return "transform_query"
    else:
        # Either we have relevant documents, or the revision cap is
        # reached, so generate an answer
        print("---DECISION: GENERATE---")
        return "generate"

def grade_generation_v_documents_and_question(self, state: GraphState):
    """
    Determines whether the generation is grounded in the documents and answers the question.

    Args:
        state (dict): The current graph state

    Returns:
        str: Decision for next node to call
    """
    print("---CHECK HALLUCINATIONS---")
    question = state["question"]
    documents = state["documents"]
    generation = state["generation"]

    score = self.hallucination_grader.invoke(
        {"documents": documents, "generation": generation}
    )
    grade = score["score"]

    # Check hallucination
    if grade == "yes":
        print("---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---")
        # Check question-answering
        print("---GRADE GENERATION vs QUESTION---")
        score = self.answer_grader.invoke(
            {"question": question, "generation": generation}
        )
        grade = score["score"]
        if grade == "yes":
            print("---DECISION: GENERATION ADDRESSES QUESTION---")
            return "useful"
        else:
            print("---DECISION: GENERATION DOES NOT ADDRESS QUESTION---")
            return "not useful"
    else:
        if state["revision_number"] >= state["max_revisions"]:
            print(
                "---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS AND THE REVISION COUNT EXCEEDED THE MAX REVISIONS, STOP---"
            )
            return "stop"
        else:
            print("---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---")
            return "not supported"
```
Compile graph like:
```python
workflow = StateGraph(GraphState)

# Define the nodes
workflow.add_node("retrieve", self.retrieve)              # retrieve
workflow.add_node("grade_documents", self.grade_documents) # grade documents
workflow.add_node("generate", self.generate)              # generate
workflow.add_node("transform_query", self.transform_query) # transform query

# Build graph
workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_conditional_edges(
    "grade_documents",
    self.decide_to_generate,
    {
        "transform_query": "transform_query",
        "generate": "generate",
    },
)
workflow.add_edge("transform_query", "retrieve")
workflow.add_conditional_edges(
    "generate",
    self.grade_generation_v_documents_and_question,
    {
        "not supported": "generate",
        "useful": END,
        "not useful": "transform_query",
        "stop": END,
    },
)

# Compile
self.app = workflow.compile()

# Display the graph
display(Image(self.app.get_graph(xray=True).draw_mermaid_png()))
```
Hope this helps!
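The revision gate in `decide_to_generate` above can be checked standalone, without LangGraph. This dependency-free re-statement of just the branch logic (a sketch, not the full method) shows the three cases: retry while under the cap, fall through to generate once the cap is hit, and generate immediately when relevant documents exist:

```python
def decide_to_generate(state):
    """Route to transform_query only while revisions remain and no
    documents survived grading; otherwise route to generate."""
    if not state["documents"] and state["revision_number"] < state["max_revisions"]:
        return "transform_query"
    return "generate"

# No documents, revisions remain -> keep looping via transform_query
decide_to_generate({"documents": [], "revision_number": 0, "max_revisions": 3})
# No documents, cap reached -> stop looping, generate anyway
decide_to_generate({"documents": [], "revision_number": 3, "max_revisions": 3})
```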
-
Description
In the self-RAG graph (https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_self_rag.ipynb), which contains three loops, I wonder how to limit the maximum number of times one of the loops runs.
For example, limit the loop retrieve - grade_documents - decide_to_generate - retrieve to 3 iterations, and limit the other loop generate - grade_generation_v_documents_and_question - generate to 5.
Looking forward to a reply, thank you!
System Info
langchain-experimental == 0.0.49
langchain-openai == 0.1.1
langchain == 0.1.13
langchain-community == 0.0.29
langchain-core == 0.1.34
langchain-text-splitters == 0.0.1
python == 3.10.12