Replies: 4 comments
-
Hey @MadDonkey! I'm Dosu, your friendly bot here to help you squash bugs, answer questions, and even guide you to become a contributor. I'm here to assist you while we wait for a human maintainer. Taking a look at your issue on multi-user conversations now and will get back to you shortly with a full answer! 🚀
-
Hi @MadDonkey, this is covered in the message history guide: https://python.langchain.com/v0.1/docs/expression_language/how_to/message_history/
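That page centers on `RunnableWithMessageHistory`, which keys a separate history to each `session_id`. A minimal, self-contained sketch of the pattern (the in-memory store and the choice of `ChatOpenAI` as the model are assumptions; any chat model and any `BaseChatMessageHistory` implementation work):

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | ChatOpenAI()

# In-memory store of per-session histories; swap for persistent storage in production
store: dict = {}

def get_session_history(session_id: str) -> BaseChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

# Each session_id gets its own isolated conversation
chain_with_history.invoke(
    {"input": "What is 1+1?"},
    config={"configurable": {"session_id": "user_1"}},
)
```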
-
To implement a multi-user chatbot using LangChain and an LLM, you need to manage the context for each user independently. LangChain does not automatically separate user contexts, so you will need to implement this in your code. Below is a modified version of your code that supports multi-user conversations by maintaining a separate chat history for each user:

```python
from typing import Dict

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.memory import ChatMessageHistory
from langchain_chroma import Chroma
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableBranch, RunnablePassthrough
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

chat_model_api_key = "my api key"
chat_model_deployment = "model_deployment_name"
chat_model_endpoint = "model_endpoint"
chat_model_api_version = "api_version"
embed_model_deployment = "embed_model_deployment_name"
embed_model_endpoint = "embed_model_endpoint"
embed_model_api_key = "embed_model_api_key"
embed_model_api_version = "embed_model_api_version"

# Load the source page and split it into chunks for retrieval
loader = WebBaseLoader("https://baike.baidu.com/item/%E5%AF%8C%E5%9D%9A%E4%B9%89%E5%8D%9A/188127?fr=ge_ala")
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

vectorstore = Chroma.from_documents(
    documents=all_splits,
    embedding=AzureOpenAIEmbeddings(
        azure_deployment=embed_model_deployment,
        api_key=embed_model_api_key,
        azure_endpoint=embed_model_endpoint,
        api_version=embed_model_api_version,
    ),
)

# k is the number of chunks to retrieve
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

chat = AzureChatOpenAI(
    openai_api_version=chat_model_api_version,
    azure_endpoint=chat_model_endpoint,
    azure_deployment=chat_model_deployment,
    api_key=chat_model_api_key,
    temperature=0,
)

question_answering_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Answer the user's questions based on the below context:\n\n{context}",
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

query_transform_prompt = ChatPromptTemplate.from_messages(
    [
        MessagesPlaceholder(variable_name="messages"),
        (
            "user",
            "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
        ),
    ]
)

query_transforming_retriever_chain = RunnableBranch(
    (
        # If there is only one message, pass its content straight to the retriever
        lambda x: len(x.get("messages", [])) == 1,
        (lambda x: x["messages"][-1].content) | retriever,
    ),
    # Otherwise, have the LLM rewrite the conversation into a search query first
    query_transform_prompt | chat | StrOutputParser() | retriever,
).with_config(run_name="chat_retriever_chain")

document_chain = create_stuff_documents_chain(chat, question_answering_prompt)

conversational_retrieval_chain = RunnablePassthrough.assign(
    context=query_transforming_retriever_chain,
).assign(
    answer=document_chain,
)

# Dictionary to store chat histories for each user
user_chat_histories: Dict[str, ChatMessageHistory] = {}

def get_user_chat_history(user_id: str) -> ChatMessageHistory:
    if user_id not in user_chat_histories:
        user_chat_histories[user_id] = ChatMessageHistory()
    return user_chat_histories[user_id]

def handle_user_message(user_id: str, message: str) -> str:
    chat_history = get_user_chat_history(user_id)
    chat_history.add_user_message(message)
    response = conversational_retrieval_chain.invoke(
        {"messages": chat_history.messages}
    )
    chat_history.add_ai_message(response["answer"])
    return response["answer"]

# Example usage
user_id_1 = "user_1"
user_id_2 = "user_2"
message_1 = "What is 1+1?"
message_2 = "How many people live in the world?"
response_1 = handle_user_message(user_id_1, message_1)
response_2 = handle_user_message(user_id_2, message_2)
print(f"Response to user 1: {response_1}")
print(f"Response to user 2: {response_2}")
```

Explanation:
By maintaining separate chat histories for each user, you ensure that the context for each user is preserved independently, preventing any mix-up of responses between different users.
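Note that the in-memory dictionary above lives in a single process and is lost on restart. If you deploy the bot behind multiple workers, one common option (a sketch, not the only approach) is to back each user's history with external storage, for example Redis via `RedisChatMessageHistory` from `langchain_community`; the URL below is a placeholder:

```python
from langchain_community.chat_message_histories import RedisChatMessageHistory

REDIS_URL = "redis://localhost:6379/0"  # placeholder; point at your Redis instance

def get_user_chat_history(user_id: str) -> RedisChatMessageHistory:
    # Drop-in replacement for the dict-based version: each user's messages
    # are stored under their own Redis key, so histories survive restarts
    # and are shared across worker processes.
    return RedisChatMessageHistory(session_id=user_id, url=REDIS_URL)
```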
-
@dosu, wouldn't it make sense to use the chat history from all users and add a note in the system prompt telling the model to generate a response for the user with ID 'user_1'? user_2 could also be interacting with user_1, and their messages could be important as part of the context.
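A rough sketch of that idea, building on the chain from the earlier reply (the `handle_room_message` helper and the speaker-prefix convention are illustrative, not a LangChain API; `chat` and `query_transforming_retriever_chain` are the objects defined above):

```python
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.memory import ChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough

# System prompt names the user being answered; other users' messages are context
shared_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Answer the question from user '{user_id}'. Messages from other "
     "participants are part of the conversation context.\n\nContext:\n{context}"),
    MessagesPlaceholder(variable_name="messages"),
])
shared_document_chain = create_stuff_documents_chain(chat, shared_prompt)
shared_chain = RunnablePassthrough.assign(
    context=query_transforming_retriever_chain,
).assign(answer=shared_document_chain)

# One history shared by the whole conversation instead of one per user
room_history = ChatMessageHistory()

def handle_room_message(user_id: str, message: str) -> str:
    # Prefix each message with the speaker's id so the model can tell users apart
    room_history.add_message(HumanMessage(content=f"[{user_id}] {message}"))
    response = shared_chain.invoke(
        {"messages": room_history.messages, "user_id": user_id}
    )
    room_history.add_ai_message(response["answer"])
    return response["answer"]
```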
-
Checked other resources
Commit to Help
Example Code
Description
I'm trying to use LangChain and an LLM to develop a chatbot. The demo code is above. Now I want to implement a chatbot that supports multi-user conversations. If multiple sessions are initiated at the same time, do I need to maintain each user's context in my code to ensure the bot answers correctly? For example, if two users ask questions at the same time (user A asks what 1+1 is, and user B asks how many people live in the world), how can the chatbot guarantee that user B's answer will not be returned to user A? Do I need to maintain the user context on my own, or can LangChain or the model handle it by itself?
System Info
langchain==0.1.9
langchain-chroma==0.1.1
langchain-community==0.0.37
langchain-core==0.1.52
langchain-openai==0.1.6
langchain-text-splitters==0.0.2
Platform: mac
Python: 3.9.6