How do I chain multiple prompts? #8383
Answer generated by a 🤖

I understand that you're looking to optimize your code by chaining multiple prompts in LangChain. You want to upload your podcast transcript once and then generate a summary, a social media post for your guest, and a social media post for yourself, without having to upload the document multiple times. You're on the right track: LangChain lets you combine several `LLMChain`s with the `SequentialChain` class, where each chain reads the shared `text` input and writes its result to its own output key. Here's how you can modify your code to achieve this:

```python
from langchain.chains import SequentialChain, LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Define your prompt templates
summary_template = """Write a summary of the following podcast text:
{text}
SUMMARY:"""

guest_template = """Write a summary of the following podcast text as if you are the guest(s) posting on social media.
{text}
SUMMARY:"""

host_template = """Write a summary of the following podcast text as if you are the host posting on social media.
{text}
SUMMARY:"""

# Create PromptTemplates
SUMMARY_PROMPT = PromptTemplate(template=summary_template, input_variables=["text"])
GUEST_PROMPT = PromptTemplate(template=guest_template, input_variables=["text"])
HOST_PROMPT = PromptTemplate(template=host_template, input_variables=["text"])

# LLM to use in each stage
llm = OpenAI()

# Create an LLMChain for each stage, each with its own output key
summary_llm_chain = LLMChain(llm=llm, prompt=SUMMARY_PROMPT, output_key="summary")
guest_llm_chain = LLMChain(llm=llm, prompt=GUEST_PROMPT, output_key="guest_post")
host_llm_chain = LLMChain(llm=llm, prompt=HOST_PROMPT, output_key="host_post")

# Create a SequentialChain that exposes all three outputs
chain = SequentialChain(
    chains=[summary_llm_chain, guest_llm_chain, host_llm_chain],
    input_variables=["text"],
    output_variables=["summary", "guest_post", "host_post"],
)

# Run the chain ("transcript" is your podcast transcript string)
results = chain({"text": transcript})

# Print the results
print(results)
```

In this code, every `LLMChain` reuses the same `text` input, so the transcript is supplied once and the `SequentialChain` returns all three results from a single call. This is just one potential solution to your problem. There might be other ways to achieve the same result depending on your specific needs and the complexity of your project. If you find this solution helpful and think it might be useful to others, I encourage you to make a pull request to update the LangChain documentation. This way, other users facing the same issue can benefit from your experience.

Sources
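The pattern above is "load once, prompt many times": one transcript fanned out to several prompt templates, each result stored under its own key. As a minimal sketch of just that pattern, here is a self-contained plain-Python version with a stub in place of the model, so it runs without an API key (`stub_llm` and `run_pipeline` are illustrative names, not LangChain API):

```python
def stub_llm(prompt: str) -> str:
    # Stand-in for a real model call: just echo the first line of the prompt.
    return prompt.splitlines()[0]

def run_pipeline(text: str, templates: dict) -> dict:
    # Fill each template with the same transcript and collect the outputs
    # under their own keys, mirroring per-chain output keys.
    return {key: stub_llm(template.format(text=text))
            for key, template in templates.items()}

templates = {
    "summary": "Write a summary of the following podcast text:\n{text}",
    "guest_post": "Write a guest social media post for:\n{text}",
    "host_post": "Write a host social media post for:\n{text}",
}

results = run_pipeline("Episode 42 transcript...", templates)
print(sorted(results))  # → ['guest_post', 'host_post', 'summary']
```

The transcript string is built once and only formatted into each prompt, which is the efficiency the question asks about: no repeated document upload, one dictionary of named outputs.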
This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
Answer generated by a 🤖

Hello,

To define the `summary_chain`, `guest_post_chain`, and `self_post_chain`, you can compose smaller chains with the `SequentialChain` class, which passes the output variables of one chain along as input variables to the next. Here is an example of how you might define these chains (`FakeChain` is a stand-in chain like the one used in LangChain's unit tests; in practice you would use real chains such as `LLMChain`):

```python
from langchain.chains import SequentialChain

# Define the individual chains (FakeChain is an illustrative placeholder)
summary_chain_1 = FakeChain(input_variables=["transcript"], output_variables=["summary"])
summary_chain_2 = FakeChain(input_variables=["summary"], output_variables=["final_summary"])
guest_post_chain_1 = FakeChain(input_variables=["transcript"], output_variables=["guest_post"])
guest_post_chain_2 = FakeChain(input_variables=["guest_post"], output_variables=["final_guest_post"])
self_post_chain_1 = FakeChain(input_variables=["transcript"], output_variables=["self_post"])
self_post_chain_2 = FakeChain(input_variables=["self_post"], output_variables=["final_self_post"])

# Define the SequentialChains
summary_chain = SequentialChain(chains=[summary_chain_1, summary_chain_2], input_variables=["transcript"])
guest_post_chain = SequentialChain(chains=[guest_post_chain_1, guest_post_chain_2], input_variables=["transcript"])
self_post_chain = SequentialChain(chains=[self_post_chain_1, self_post_chain_2], input_variables=["transcript"])
```

In this example, each `SequentialChain` is made up of two individual chains. The first chain takes the transcript as input and generates a summary, guest post, or self post; the second chain takes that output and further processes it to produce the final version. To feed in a markdown file named `transcript.md`, read its contents and pass the text as input to each `SequentialChain`:
```python
# Read the contents of the markdown file
with open('transcript.md', 'r') as file:
    transcript = file.read()

# Pass the transcript as an argument to each SequentialChain
summary_output = summary_chain({"transcript": transcript})
guest_post_output = guest_post_chain({"transcript": transcript})
self_post_output = self_post_chain({"transcript": transcript})
```

Please note that this is a simplified example and the actual implementation may vary depending on the specifics of your application and the LangChain framework. For more information, you can refer to the LangChain documentation and the unit tests for the `SequentialChain` class.
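To make the variable plumbing above concrete, here is a tiny self-contained mock of the idea in plain Python (not LangChain's actual classes): each stage declares its input and output variables, and a runner threads a growing dictionary of known variables through the stages, which is how a first stage's `summary` reaches the second stage:

```python
class FakeChain:
    """Toy chain: declares input/output variables and fakes a transformation."""
    def __init__(self, input_variables, output_variables):
        self.input_variables = input_variables
        self.output_variables = output_variables

    def __call__(self, known):
        # Produce a dummy value derived from this chain's declared inputs.
        joined = "+".join(known[v] for v in self.input_variables)
        return {out: f"{out}({joined})" for out in self.output_variables}

def run_sequential(chains, inputs):
    # Thread a dict of known variables through each stage in order,
    # adding each stage's outputs to the pool of available inputs.
    known = dict(inputs)
    for chain in chains:
        known.update(chain(known))
    return known

stage_1 = FakeChain(["transcript"], ["summary"])
stage_2 = FakeChain(["summary"], ["final_summary"])
result = run_sequential([stage_1, stage_2], {"transcript": "raw text"})
print(result["final_summary"])  # → final_summary(summary(raw text))
```

The runner never re-reads the transcript; it stays in the shared dictionary while each stage contributes new keys, which is the behavior `SequentialChain` provides for real chains.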
I hope this helps! If you have any further questions, feel free to ask.

Sources
I am the host of a podcast. After a recording, I generate a transcript in markdown. I then use that markdown to develop a summary, a social media post for my guest, and a social media post for myself.
I can do all this, but my code is inefficient because I make multiple calls, uploading the document each time with a new prompt template. Langchain should be able to do that for me.
I stripped out the detailed question for brevity, but in short: I create a prompt_template for the summary, and then yet another one for the guest.
With my limited experience in Langchain, I think my code is uploading the document and making a separate call for each prompt template. What I want is to upload the transcript once and run all of my prompts against it. Is it possible to do that, or is what I am using the most efficient?
FYI: I searched the questions here and tried to find similar answers, but I feel like searching for "chain," "link," or "multiple" provides too many results that are not necessarily the answer I am looking for. I'm sorry if this has been asked and answered.
Thanks!