Retrieval option in LangChain 0.3 #26667
Replies: 1 comment
-
Hey @ZainZia0341! 👋 I'm here to help you out with your question. I found some similar discussions and solutions that might help with your issue:
To correctly pass the retrieved documents from Chroma to the chain in LangChain 0.3 when using the
By following these steps, you should be able to correctly pass the retrieved documents from Chroma to the chain in LangChain 0.3 while using the
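In outline, the pattern being referred to looks like the following (a minimal sketch, assuming an existing Chroma retriever and llm as in the question below; the prompt wording and variable names are illustrative):

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough

# Prompt that receives the formatted documents through the {context} variable
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Answer the user's questions based only on this context:\n\n{context}"),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

# create_stuff_documents_chain formats a list of Documents into {context}
document_chain = create_stuff_documents_chain(llm, prompt)

# Pull the latest user message out of the input, feed it to the retriever,
# and assign the retrieved documents to "context" before answering
retrieval_chain = RunnablePassthrough.assign(
    context=lambda params: retriever.invoke(params["messages"][-1].content),
).assign(answer=document_chain)

Invoking retrieval_chain with {"messages": [HumanMessage(content="...")]} then returns a dict whose "answer" key holds the model's reply.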
-
I am creating a simple chatbot to test the MongoDBChatMessageHistory module with LangChain 0.3. I am using Chroma as the vector database, but I am facing an issue with passing the retrieved documents from Chroma to the chain.
I have tried multiple approaches, but it seems the knowledge base is not being accessed correctly. Could someone help me identify where I might be going wrong? Any relevant resources or links would also be greatly appreciated.
Below is my code:
import streamlit as st
from langchain_chroma import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory
from dotenv import load_dotenv
import bs4
import os
# Load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Initialize the OpenAI LLM
llm = ChatOpenAI(model="gpt-4o-mini", api_key=OPENAI_API_KEY)
# Load, chunk, and index the contents of the blog
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()
# Initialize the vector store (Chroma)
PERSIST_DIR = './chroma_db'
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings(), persist_directory=PERSIST_DIR)
retriever = vectorstore.as_retriever()
# Prompt template
SYSTEM_TEMPLATE = """Answer the user's questions based on the below context.
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

{context}
"""
question_answering_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", SYSTEM_TEMPLATE),
        MessagesPlaceholder(variable_name="messages"),
    ]
)
from langchain.chains.combine_documents import create_stuff_documents_chain
document_chain = create_stuff_documents_chain(llm, question_answering_prompt)
document_chain.invoke(
    {
        "context": docs,
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?")
        ],
    }
)
from typing import Dict
from langchain_core.runnables import RunnablePassthrough
def parse_retriever_input(params: Dict):
    return params["messages"][-1].content
retrieval_chain = RunnablePassthrough.assign(
    context=parse_retriever_input | retriever,
).assign(
    answer=document_chain,
)
# MongoDB setup for chat history (assuming MongoDB is properly configured)
def get_chat_history(session_id):
    return MongoDBChatMessageHistory(
        session_id=session_id,
        connection_string="",
        database_name="chatbot_db",
        collection_name="conversations",
    )
# Streamlit frontend
st.title("Conversational AI with RAG and MongoDB")
st.write("Ask any question and let the AI respond with knowledge from the documents and history.")
# Set session ID for user history
session_id = st.text_input("Session ID", "1234")
# Get question from user
user_question = st.text_input("Your question:")
if st.button("Submit"):
    if user_question:
        # Retrieve relevant documents
        docs = retriever.invoke(user_question)
        chain = question_answering_prompt | llm
        # Run the question-answering pipeline
        document_chain = RunnableWithMessageHistory(
            chain,
            get_chat_history,
            input_messages_key="messages",
            history_messages_key="history",
        )
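For reference, a chain wrapped in RunnableWithMessageHistory is invoked with a session_id in the config, roughly as sketched below; the way {context} is filled here (joining the retrieved pages into one string) is an assumption for illustration, not a confirmed fix:

# Sketch of invoking the history-wrapped chain defined above; joining the
# retrieved documents into a single string for {context} is an assumption
context_text = "\n\n".join(doc.page_content for doc in docs)
response = document_chain.invoke(
    {
        "messages": [HumanMessage(content=user_question)],
        "context": context_text,
    },
    # RunnableWithMessageHistory looks up the stored history by session_id
    config={"configurable": {"session_id": session_id}},
)
st.write(response.content)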