docs/integrations/caches/redis_llm_caching/ #29366
- I'm working on a project where I am using LangChain's `RedisSemanticCache` to cache LLM responses. Here's how I'm currently setting up the cache:

  ```python
  semantic_cache = RedisSemanticCache(
      redis_url=REDIS_URL,
      embeddings=embeddings,
      distance_threshold=0.3,
      ttl=REDIS_TTL,
      prefix=hashlib.sha256(question.encode('utf-8')).hexdigest(),
      name=hashlib.sha256(question.encode('utf-8')).hexdigest(),
  )
  ```

  The issue arises because when I use a SHA-256 hash of each question as the `prefix` and `name`, every distinct question is written to its own cache index, so semantically similar questions can never match an existing entry. What I want to achieve: cache hits for questions that are semantically similar, not only for exact repeats. However, when I try to use a static cache key (e.g., always using the same cache key), I face another problem: …

  Questions: …
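The usual pattern is to give the semantic cache one fixed `name`/`prefix`, so that all questions are embedded into a single index; that shared index is what lets a new question match a previously cached, semantically similar one. A minimal sketch of that setup, assuming the langchain-redis package, an OpenAI embeddings model, and placeholder connection values (the fixed index name `llm_semantic_cache` is hypothetical, not from the original post):

```python
from langchain_core.globals import set_llm_cache
from langchain_openai import OpenAIEmbeddings
from langchain_redis import RedisSemanticCache

REDIS_URL = "redis://localhost:6379"  # placeholder connection string
REDIS_TTL = 3600                      # placeholder TTL in seconds

# One fixed name/prefix: every entry lands in the same index, so lookups
# are resolved by embedding distance rather than by an exact key.
semantic_cache = RedisSemanticCache(
    redis_url=REDIS_URL,
    embeddings=OpenAIEmbeddings(),
    distance_threshold=0.3,  # lower = stricter matching, fewer false hits
    ttl=REDIS_TTL,
    name="llm_semantic_cache",    # hypothetical fixed index name
    prefix="llm_semantic_cache",  # keep all keys under one namespace
)

# Register the cache globally; subsequent LLM calls consult it first.
set_llm_cache(semantic_cache)
```

With a single shared index, `distance_threshold` becomes the knob that trades recall against false positives: set it too high and unrelated questions start returning each other's cached answers, which is the usual pitfall of the "static cache key" approach.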
- docs/integrations/caches/redis_llm_caching/

  This notebook demonstrates how to use the `RedisCache` and `RedisSemanticCache` classes from the langchain-redis package to implement caching for LLM responses.

  https://python.langchain.com/docs/integrations/caches/redis_llm_caching/
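For reference, the exact-match variant from the same package caches on the literal prompt string rather than on embedding distance, so it only hits when a prompt repeats byte-for-byte. A short sketch, assuming a Redis server at the default local URL:

```python
from langchain_core.globals import set_llm_cache
from langchain_redis import RedisCache

# Exact-match cache: the prompt text itself is the key, so only
# identical prompts are served from the cache.
cache = RedisCache(redis_url="redis://localhost:6379", ttl=60)  # ttl in seconds
set_llm_cache(cache)
```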