Replies: 1 comment 2 replies
-
To control this precisely, compose the steps that must run one after another into a RunnableSequence (or chain them with |) and keep only the genuinely independent steps inside the RunnableParallel:

from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnableSequence
from langchain_openai import ChatOpenAI

model = ChatOpenAI()

# Define the individual Runnables
event_chain = ChatPromptTemplate.from_template("extract event from {text}") | model | StrOutputParser()
org_chain = ChatPromptTemplate.from_template("extract organization from {text}") | model | StrOutputParser()
human_chain = ChatPromptTemplate.from_template("extract human from {text}") | model | StrOutputParser()

# Combine the steps that must run serially into a single RunnableSequence
# (equivalent to event_chain | org_chain | human_chain)
serial_chain = RunnableSequence(event_chain, org_chain, human_chain)

# Combine the serial and parallel parts.
# contextual_rerank_retriever, qa_bot and prompt are assumed to be defined in your existing pipeline.
qa_bot_with_docs = RunnableParallel(
    {
        "context": itemgetter("question") | contextual_rerank_retriever,
        "question": itemgetter("question"),
        # the serial part: its three steps run one after another
        "serial_chain": itemgetter("question") | serial_chain,
    }
).assign(
    answer=qa_bot,
    prompt=prompt,
)

# Run it
qa_bot_with_docs.invoke({"question": "What is the event?"})

In this example, event_chain, org_chain and human_chain execute strictly one after another inside serial_chain, while the three entries of the RunnableParallel (context, question, serial_chain) still run concurrently with each other, as do the answer and prompt keys added by .assign.
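If the goal is specifically to control what .assign does: as far as I can tell, keyword arguments passed to a single .assign(...) are wrapped in one RunnableParallel and therefore run concurrently, whereas chaining several .assign() calls runs them one after another (each later step can also read the earlier results from the dict). Another option, if I read the config handling correctly, is to leave the graph alone and cap the thread pool with max_concurrency, although that throttles the whole parallel block rather than selecting specific chains. A minimal sketch of both, using illustrative extraction chains and example input (not taken from the original code):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import ChatOpenAI

model = ChatOpenAI()
event_chain = ChatPromptTemplate.from_template("extract event from {text}") | model | StrOutputParser()
org_chain = ChatPromptTemplate.from_template("extract organization from {text}") | model | StrOutputParser()
human_chain = ChatPromptTemplate.from_template("extract human from {text}") | model | StrOutputParser()

# Chained .assign() calls run sequentially; putting all three keys into a single
# .assign(event=..., org=..., human=...) would run them concurrently instead.
sequential_extractor = (
    RunnablePassthrough.assign(event=event_chain)
    .assign(org=org_chain)
    .assign(human=human_chain)
)
sequential_extractor.invoke({"text": "Acme Corp. hosted its annual developer conference."})

# Alternative: keep the RunnableParallel but limit the executor to a single worker,
# so its branches are processed one at a time for this call.
parallel_extractor = RunnableParallel(event=event_chain, org=org_chain, human=human_chain)
parallel_extractor.invoke(
    {"text": "Acme Corp. hosted its annual developer conference."},
    config={"max_concurrency": 1},
)

With max_concurrency there is no per-chain choice, so for precise control over which chains run serially, restructuring with RunnableSequence or chained .assign() as above seems to be the more reliable route.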
-
Description
Can event, org and human be set to run serially instead? Running them in parallel makes each individual chain take longer, which hurts the user experience.
How do you control whether the runnables in RunnableParallel.assign execute in parallel or serially? Ideally it should be possible to precisely control which specific chains run in parallel and which run serially.