Replies: 7 comments
-
I support this idea. The current chains do not support combining conversational/chat models with question answering.
-
I support this idea too. I have been struggling for the last few days to achieve this kind of setup so that I can use memory on top of my documentation.
-
Yes please, I need it too.
-
I was able to make this happen. I'm using Redis for the history, but you could use any other chat_history backend. I'm also using a FAISS vector store to keep the retrieved context small so it doesn't get in the way. Hope it helps:

```python
# https://python.langchain.com/en/latest/modules/memory/examples/adding_memory_chain_multiple_inputs.html
# llm, redisUrl, local_folder and query are assumed to be defined elsewhere.
from langchain.chains.question_answering import load_qa_chain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory, RedisChatMessageHistory
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

template = """
- You are a chatbot named Botty having a conversation with a human.
- Your name is Botty.
- You have to be friendly.
- You are talking to a human.
- You NEVER answer that you are an AI language model.
- If you don't know the answer, truthfully say you don't have an answer. Don't try to make up an answer.
- Always reply in the same language you are being asked.
Given the following extracted parts of a long document and a question, create a final answer.
{context}
{chat_history}
Human: {question}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "question", "context"],
    template=template,
)

# One history per client session; Redis drops it after 600 seconds (ttl).
session_id = "UNIQUE FOR CLIENT SESSION OR SIMPLY CLIENT"
message_history = RedisChatMessageHistory(url=redisUrl, ttl=600, session_id=session_id)
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=message_history, input_key="question"
)

agent_chain = load_qa_chain(llm, chain_type="stuff", memory=memory, prompt=prompt)

search_index = FAISS.load_local(local_folder, OpenAIEmbeddings())
response = agent_chain(
    {
        # k=1 keeps the stuffed context minimal.
        "input_documents": search_index.similarity_search(query, k=1),
        "question": query,
    },
    return_only_outputs=True,
)
```
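A follow-up call reuses the same chain; the Redis-backed memory injects the first exchange into {chat_history}. A minimal sketch, with query2 as a hypothetical second question from the same session:

```python
query2 = "Can you expand on your last answer?"  # hypothetical follow-up
response2 = agent_chain(
    {
        "input_documents": search_index.similarity_search(query2, k=1),
        "question": query2,
    },
    return_only_outputs=True,
)
```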
-
This example seems to be similar to what you want to do.
-
I believe the following code demonstrates what you're asking for:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import Pinecone
import pinecone

from templates.qa_prompt import QA_PROMPT
from templates.condense_prompt import CONDENSE_PROMPT


def query(openai_api_key, pinecone_api_key, pinecone_environment, pinecone_index, pinecone_namespace):
    embeddings = OpenAIEmbeddings(model='text-embedding-ada-002', openai_api_key=openai_api_key)
    pinecone.init(api_key=pinecone_api_key, environment=pinecone_environment)
    vectorstore = Pinecone.from_existing_index(
        index_name=pinecone_index, embedding=embeddings, text_key='text', namespace=pinecone_namespace
    )
    model = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0, openai_api_key=openai_api_key)
    retriever = vectorstore.as_retriever()
    # The prompts belong on the chain, not the retriever: CONDENSE_PROMPT rewrites
    # the follow-up question, QA_PROMPT answers over the retrieved documents.
    qa = ConversationalRetrievalChain.from_llm(
        llm=model,
        retriever=retriever,
        condense_question_prompt=CONDENSE_PROMPT,
        combine_docs_chain_kwargs={"prompt": QA_PROMPT},
        return_source_documents=True,
    )
    return qa
```
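To use it, you pass the running history in with every call, since this version attaches no memory object to the chain. A minimal usage sketch (the question string is a placeholder):

```python
qa = query(openai_api_key, pinecone_api_key, pinecone_environment,
           pinecone_index, pinecone_namespace)

chat_history = []  # list of (question, answer) tuples, maintained by the caller
question = "What does the document cover?"  # placeholder question
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))

print(result["answer"])
print(result["source_documents"])
```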
-
I think customizing the retriever is the way to go: #8623

```python
from typing import List

from pydantic import Field

from langchain.chains import ConversationalRetrievalChain
from langchain.schema import Document
from langchain.vectorstores.base import VectorStoreRetriever


class CustomRetriever(VectorStoreRetriever):
    vectorstore: VectorStoreRetriever
    search_type: str = "similarity"
    search_kwargs: dict = Field(default_factory=dict)

    def get_relevant_documents(self, query: str) -> List[Document]:
        # Delegate to the wrapped retriever, then inspect (or post-process) the hits.
        results = self.vectorstore.get_relevant_documents(query=query)
        for r in results:
            print(r)
        return results


# llm, vectordb, memory and PROMPT are assumed to be defined elsewhere.
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=CustomRetriever(
        vectorstore=vectordb.as_retriever(search_type="similarity", search_kwargs={"k": 5})
    ),
    return_source_documents=True,
    memory=memory,
    combine_docs_chain_kwargs={"prompt": PROMPT},
)
```
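One gotcha with this setup (an assumption about the memory object, which isn't shown above): with return_source_documents=True the chain produces two outputs, so a ConversationBufferMemory has to be told which one to store:

```python
from langchain.memory import ConversationBufferMemory

# The chain returns both "answer" and "source_documents"; without
# output_key="answer", saving to memory fails because there is more
# than one output key to choose from.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",
)
```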
-
I want to pass documents the way we do with load_qa_with_sources_chain, but I also want memory. I tried doing the same thing with a conversation chain, but I don't see a way to pass documents along with it. Any advice? The last option I know of would be to write my own custom chain that accepts sources and also preserves memory.
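For reference, the direction I'm currently leaning is below. It's only a sketch: it assumes load_qa_with_sources_chain forwards the memory kwarg to the underlying documents chain the same way load_qa_chain does in the Redis example above, which I haven't verified end-to-end.

```python
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

# The sources variant stuffs documents into {summaries}; the prompt also
# has to expose {chat_history} so the memory has somewhere to land.
template = """Given the following extracted parts of a long document and a question,
create a final answer with references ("SOURCES").

{summaries}
{chat_history}
Question: {question}
Answer:"""
prompt = PromptTemplate(
    input_variables=["summaries", "chat_history", "question"], template=template
)

memory = ConversationBufferMemory(memory_key="chat_history", input_key="question")
chain = load_qa_with_sources_chain(llm, chain_type="stuff", memory=memory, prompt=prompt)

# docs, llm and query are assumed to be defined elsewhere.
result = chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```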