docs/tutorials/rag/ #28762
Replies: 13 comments 8 replies
-
If you're using Chroma during the query analysis step, don't forget to change the filter in `retrieve`: instead of the callable filter used with `InMemoryVectorStore`, Chroma expects a metadata dictionary.
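A sketch of what that change might look like, assuming the tutorial's `retrieve` step and that documents carry a `section` metadata key (the in-memory store filters with a callable, while Chroma's `similarity_search` takes a metadata dictionary):

```python
def retrieve(state: State):
    query = state["query"]
    retrieved_docs = vector_store.similarity_search(
        query["query"],
        # Chroma expects a metadata dict here, not the callable
        # used with InMemoryVectorStore in the tutorial:
        filter={"section": query["section"]},
    )
    return {"context": retrieved_docs}
```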
-
You have to run …
-
On Windows 11 running Python 3.13.1, I needed to change the `Search` schema to a Pydantic `BaseModel` to get `analyze_query` to work; I kept getting `ValueError: no signature found for builtin type <class 'dict'>` when running `query = structured_llm.invoke(state["question"])`. Updated code as follows, and it works. The answer is slightly different, but by adjusting the query I was able to come close:

```python
from typing import Literal

from pydantic import BaseModel


# Define the Search schema using Pydantic instead of TypedDict
# (fields mirror the tutorial's TypedDict version):
class Search(BaseModel):
    query: str
    section: Literal["beginning", "middle", "end"]
```

With that change, `for step in graph.stream(...)` runs as in the tutorial. If anyone knows how to make it work with TypedDict vs. BaseModel, I would be interested in the answer; otherwise, I hope this helps someone else if they get stuck. Cheers!
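For anyone who wants to keep the `TypedDict` version: one commonly suggested fix (an assumption here, not verified on this exact setup) is to import `TypedDict` from `typing_extensions` rather than `typing`, and to upgrade `pydantic` and `typing_extensions`, since newer Python releases changed `TypedDict` internals:

```python
from typing import Annotated, Literal

# Pydantic cannot introspect typing.TypedDict on some Python versions;
# typing_extensions.TypedDict is the portable choice.
from typing_extensions import TypedDict


class Search(TypedDict):
    """Search query."""

    query: Annotated[str, ..., "Search query to run."]
    section: Annotated[Literal["beginning", "middle", "end"], ..., "Section to query."]
```

Also note that after switching to `BaseModel`, dictionary-style access such as `query["section"]` in the tutorial's `retrieve` step must become attribute access (`query.section`).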
-
The documentation is not up-to-date:

Traceback (most recent call last):
-
Using an NVIDIA chat model did not work for me: the returned message indicated that `model_provider='nvidia'` is not supported for `init_chat_model()` on the latest version, even though the online documentation states that it is.
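As a possible workaround (untested; assumes the `langchain-nvidia-ai-endpoints` package is installed and `NVIDIA_API_KEY` is set in the environment), the provider's chat class can be instantiated directly instead of going through `init_chat_model()`:

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Drop-in replacement for the tutorial's `llm`; the model name is only an example.
llm = ChatNVIDIA(model="meta/llama-3.1-70b-instruct")
```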
-
```python
for step in graph.stream({"question": "What is Task Decomposition?"}, stream_mode="updates"):
    print(step)
```
-
```python
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
)
```
-
When running this code:
I get the following error:
Could I please get some help with this, as I am quite lost as to what is going wrong.
-
Do we have to use LangSmith in this tutorial? It seems beside the point.
-
Earlier there was a conversational retrieval chain, which offered map-reduce LLM calls. Is there a tutorial that implements RAG with map-reduce and doesn't use LangSmith?
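A rough sketch of map-reduce answering over retrieved documents, with no LangSmith involved. It assumes `llm` is any LangChain chat model and `docs` is a list of retrieved `Document` objects; the prompts and the `map_reduce_answer` helper are illustrative, not from the tutorial:

```python
from langchain_core.prompts import ChatPromptTemplate

map_prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this excerpt:\n\n{context}\n\nQuestion: {question}"
)
reduce_prompt = ChatPromptTemplate.from_template(
    "Combine these partial answers into one final answer:\n\n{answers}\n\nQuestion: {question}"
)


def map_reduce_answer(llm, docs, question: str) -> str:
    # Map: answer the question against each document independently.
    partial = [
        llm.invoke(
            map_prompt.invoke({"context": doc.page_content, "question": question})
        ).content
        for doc in docs
    ]
    # Reduce: merge the per-document answers into a single response.
    merged = llm.invoke(
        reduce_prompt.invoke({"answers": "\n\n".join(partial), "question": question})
    )
    return merged.content
```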
-
It seems the line `response = llm.invoke(messages)` does not return the correct class. When I run this block of code (also in the tutorial), I get the correct type for `example_messages`:

```python
from langchain import hub

prompt = hub.pull("rlm/rag-prompt")

example_messages = prompt.invoke(
    {"context": "(context goes here)", "question": "(question goes here)"}
).to_messages()

assert len(example_messages) == 1
```

But when I run the tutorial's `generate` step:

```python
def generate(state: State):
    ...
```

it fails. This fits with the error trace I got: `'str' object has no attribute 'content'`. Some help would be appreciated.
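A likely cause (an assumption, since the commenter's model setup isn't shown): a plain completion-style LLM returns a `str` from `.invoke()`, while the tutorial's `generate` step expects a chat model, whose `.invoke()` returns an `AIMessage` with a `.content` attribute. A quick check, with the model name only as an example:

```python
from langchain.chat_models import init_chat_model

# Chat models return message objects; legacy completion-style LLMs return bare strings.
llm = init_chat_model("gpt-4o-mini", model_provider="openai")

response = llm.invoke("hello")
print(type(response).__name__)  # AIMessage for a chat model
print(response.content)         # .content exists only on message objects
```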
-
If you are interested in being able to type your question directly in the terminal, you can do so as follows. Before the chatbot template, I wrote `question = input("ask your question: ")`. Then, where I would normally type my question in the code, I wrote `response = graph.invoke({"question": question})`. So when I run my code, I can ask whatever question I want.
-
How can the model decide the "section" in the `analyze_query` function? Does it look into the documents, or does it randomly pick from those three sections? It seems like it picks randomly.
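For context: in the tutorial, the query-analysis step only sees the question text, never the documents, so `section` is inferred from the wording of the question; structured output merely constrains it to the three allowed values (and the model effectively guesses when the question gives no hint). A small illustration, assuming the tutorial's `llm` and `Search` schema:

```python
from typing import Annotated, Literal

from typing_extensions import TypedDict


class Search(TypedDict):
    """Search query."""

    query: Annotated[str, ..., "Search query to run."]
    section: Annotated[Literal["beginning", "middle", "end"], ..., "Section to query."]


structured_llm = llm.with_structured_output(Search)
result = structured_llm.invoke("What does the end of the post say about agents?")
print(result)  # e.g. {'query': 'agents', 'section': 'end'} -- model-dependent
```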
-
docs/tutorials/rag/
One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. These applications use a technique known as Retrieval Augmented Generation, or RAG.
https://python.langchain.com/docs/tutorials/rag/