Replies: 2 comments
-
The issue of receiving empty responses with a block reason of "OTHER" in LangChain's RAG setup when using the ChatGoogleGenerativeAI model can often be addressed by adjusting the safety settings passed to the model. Here is an example of how to adjust the safety settings:

from langchain_google_genai import (
    ChatGoogleGenerativeAI,
    HarmBlockThreshold,
    HarmCategory,
)

llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    google_api_key=google_api_key,
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    },
)

For an enumeration of the categories and thresholds available, see Google's safety setting types [1].
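To use this configuration in a RAG setup, pass the configured model into your chain as usual. Below is a minimal sketch of wiring it into a retrieval chain with LCEL, assuming retriever is your existing vector-store retriever and the prompt wording is only a placeholder:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

def format_docs(docs):
    # Join the retrieved documents into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm  # the ChatGoogleGenerativeAI instance configured above
    | StrOutputParser()
)

answer = rag_chain.invoke("your question here")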
Additionally, the LangChain library includes a retry mechanism for handling API exceptions, which might be useful if the issue is related to transient errors from the Google PaLM API. The retry mechanism is implemented using the tenacity library. Here is how the retry helper is implemented inside langchain_community:

from typing import Any

from langchain_community.chat_models import ChatGooglePalm

# _create_retry_decorator is an internal helper defined alongside this
# function in langchain_community's Google PaLM chat model module.
def chat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:
    """Use tenacity to retry the completion call."""
    retry_decorator = _create_retry_decorator()

    @retry_decorator
    def _chat_with_retry(**kwargs: Any) -> Any:
        return llm.client.chat(**kwargs)

    return _chat_with_retry(**kwargs)

Therefore, the issue is likely related to the safety settings configuration in your implementation, but it could also be influenced by transient API errors that can be mitigated using the retry mechanism provided by LangChain [2][3].
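Note that chat_with_retry is internal to the older ChatGooglePalm integration rather than the Gemini model used above. As an alternative sketch (not from the original reply), any LangChain Runnable, including the ChatGoogleGenerativeAI instance, can be wrapped with the generic retry helper from langchain-core:

# Retry transient failures up to three times with exponential backoff
# before giving up; `llm` is the model configured earlier.
llm_with_retry = llm.with_retry(stop_after_attempt=3)

response = llm_with_retry.invoke("Hello")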
-
https://www.reddit.com/r/SillyTavernAI/comments/1jorzqg/gemini_25_pro_experimental_giving_you_headache/
-
Description
I tried to build a RAG pipeline with LangChain for question answering. I read the following references:
#15095
https://python.langchain.com/docs/tutorials/rag/
but I found that it still returns an empty result ('') even with the safety settings configured.
There are two situations: first, a SEXUALLY_EXPLICIT question with normal docs; second, a normal question where the retriever returns SEXUALLY_EXPLICIT docs. Both return empty results with block_reason: OTHER.
I want to know whether this is a problem with Google, a problem with LangChain, or a problem with my own code.
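One way to observe the empty result and look for the block reason (an illustrative sketch, not the exact code used here; the keys inside response_metadata can vary between langchain-google-genai versions):

result = llm.invoke("a question that trips the safety filter")
print(repr(result.content))        # comes back as an empty string when blocked
print(result.response_metadata)    # check here for the block/finish reason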
System Info
langchain 0.3.1
langchain-community 0.3.1
langchain-core 0.3.6
langchain-experimental 0.3.2
langchain-google-genai 2.0.0
google-ai-generativelanguage 0.6.5
google-generativeai 0.7.0