How do I add chat history to my RetrievalQA chain? #17850
Replies: 4 comments 2 replies
-
🤖 Hey @shraddhaa26, great to see you back with another interesting question! Hope you've been doing well. Based on the information you've provided, it seems like you're trying to add chat history to a RetrievalQA chain. However, if you want to pass the chat history directly to the language model, you can build it as a list of BaseMessage instances and hand it to the model's _generate method:

from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, SystemMessage

# Create BaseMessage instances for each message in the chat history
chat_history = [
    HumanMessage(content="Hello"),
    AIMessage(content="Hello, how can I assist you today?"),
    SystemMessage(content="Please provide more information."),
    # Add more messages as needed
]

# Pass the chat history to the _generate method
output = llm._generate(chat_history)

In this code, you create a BaseMessage instance for each message in the chat history and pass the whole list to the _generate method. For more details, you can refer to the relevant files in the LangChain repository.
Please note that this is a simplified example and the actual implementation would depend on the specifics of your application and how you want to process the chat history. I hope this helps! If you have any further questions, feel free to ask.
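For the retrieval side of the question, a common pattern is ConversationalRetrievalChain, which accepts the chat history as an explicit input. This is only a sketch, assuming llm and retriever already exist in your code:

from langchain.chains import ConversationalRetrievalChain

# Sketch only: `llm` and `retriever` are assumed to be defined elsewhere
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever)

chat_history = []  # list of (question, answer) tuples
question = "What is the document about?"
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))

Each call condenses the new question together with the chat history into a standalone question before retrieval, so the history influences both the retrieved context and the final answer.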
-
🤖 Based on the context provided, it seems you want to supply a custom chat history to the chain. To include a custom chat history, you would need to modify the _enter_history method of RunnableWithMessageHistory. Here's a general idea of how you might modify that method:

from typing import Any, List

from langchain_core.messages import BaseMessage
from langchain_core.runnables import RunnableConfig

def _enter_history(self, input: Any, config: RunnableConfig) -> List[BaseMessage]:
    # Instead of getting the history from the config, we'll use the input
    # Assume that the input is a dictionary with a 'chat_history' key
    chat_history = input.get('chat_history', [])
    # Check if the chat history is a list of BaseMessage instances
    if not all(isinstance(msg, BaseMessage) for msg in chat_history):
        raise ValueError("Chat history must be a list of BaseMessage instances")
    # If the history_messages_key is set, return only the historic messages
    if self.history_messages_key:
        return chat_history
    # Otherwise, return all messages
    else:
        input_val = (
            input if not self.input_messages_key else input[self.input_messages_key]
        )
        return chat_history + self._get_input_messages(input_val)

This modification assumes that the input to the method is a dictionary containing a 'chat_history' key. Please note that this is a general idea and might need to be adjusted based on the specifics of your setup. I hope this helps! If you have any other questions or need further clarification, feel free to ask.
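As an alternative to overriding internals, the stock RunnableWithMessageHistory wrapper may already cover this use case. A minimal sketch, assuming you have an existing runnable called chain that takes a "question" input and a "chat_history" placeholder (both key names are illustrative):

from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

# Keep one in-memory history per session id
store = {}

def get_session_history(session_id: str) -> ChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,  # hypothetical existing runnable
    get_session_history,
    input_messages_key="question",
    history_messages_key="chat_history",
)

# The wrapper loads and saves history for the session id given in the config
chain_with_history.invoke(
    {"question": "What did I ask you before?"},
    config={"configurable": {"session_id": "abc123"}},
)

With this pattern the wrapper, rather than your own code, injects the stored messages under the chat_history key on every call.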
-
I have the same problem as you. Do you have any ideas for that?
-
Use
-
Checked other resources
Commit to Help
Example Code
Description
I'm trying to add chat_history to my retrieval chain, but I haven't been able to get it working.
System Info
# Imports inferred from the snippet (not shown in the original post)
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM
from langchain_community.llms import HuggingFacePipeline

DEVICE = "cuda:0"  # assumed; set to the device you want to load the model on

model_name_or_path = "TheBloke/Llama-2-13B-chat-GPTQ"
model_basename = "model"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    revision="gptq-4bit-128g-actorder_True",
    model_basename=model_basename,
    use_safetensors=True,
    trust_remote_code=True,
    inject_fused_attention=False,
    device=DEVICE,
    quantize_config=None,
)
DEFAULT_SYSTEM_PROMPT = """
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
""".strip()
def generate_prompt(prompt: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
    return f"""
[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
""".strip()
# `text_pipeline` was not defined in the snippet; a typical definition (assumed) is:
text_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,  # adjust generation parameters as needed
)

llm = HuggingFacePipeline(pipeline=text_pipeline, model_kwargs={"temperature": 0})
SYSTEM_PROMPT = "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer."
template = generate_prompt(
    """
{context}
Question: {question}
""",
    system_prompt=SYSTEM_PROMPT,
)
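To connect the pieces above into a chain that carries chat history, one option is ConversationalRetrievalChain with a ConversationBufferMemory. This is a sketch, assuming a vector store over your documents (e.g. FAISS or Chroma) already exists as vectorstore:

from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Prompt built from the template defined above (uses {context} and {question})
prompt = PromptTemplate(template=template, input_variables=["context", "question"])

# Memory that accumulates the conversation under the "chat_history" key
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# `vectorstore` is assumed to be an existing vector store over your documents
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": prompt},
)

result = qa_chain({"question": "What is the document about?"})
print(result["answer"])

Because the memory is attached to the chain, each call automatically loads the previous turns into chat_history and stores the new question/answer pair afterwards.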