In our agentic AI system, we want to implement a caching mechanism that stores answers only when they meet a specific condition. The condition is based on client feedback: an answer is cached only if the client explicitly likes or approves it. This ensures that only high-quality, user-approved responses are reused in future interactions; by gating the cache on feedback, we avoid storing incorrect or low-value answers. The cache is checked before running the agents, so if a similar query appears again, the approved answer can be returned immediately. This approach reduces computational cost, improves response speed, and increases consistency across sessions. Ultimately, the goal is to make the system more efficient while ensuring that cached answers remain reliable and trustworthy.
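The feedback gate described above can be sketched as a small cache wrapper. This is a minimal, dependency-free sketch, not a production implementation: the class name, the normalization scheme, and the in-memory dict store are all illustrative assumptions (a real deployment would likely use semantic similarity for "similar query" matching and a persistent store).

```python
import hashlib


class FeedbackGatedCache:
    """Cache that persists an answer only after the client approves it.

    Illustrative sketch: uses exact-match keys over normalized text;
    a real system would likely match similar queries via embeddings.
    """

    def __init__(self):
        self._store = {}  # key -> approved answer (in-memory for the sketch)

    def _key(self, query: str) -> str:
        # Normalize whitespace and case so trivially different
        # phrasings of the same query share one cache key.
        normalized = " ".join(query.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def lookup(self, query: str):
        # Called before running the agents; returns None on a miss.
        return self._store.get(self._key(query))

    def store_if_approved(self, query: str, answer: str, approved: bool) -> bool:
        # The feedback gate: only client-approved answers are cached.
        if approved:
            self._store[self._key(query)] = answer
            return True
        return False
```

Usage follows the flow in the paragraph above: call `lookup` first; on a miss, run the agents, collect the client's feedback, and call `store_if_approved` with the result.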
Note: This system involves multiple agents, where each node in the workflow is an agent, and the orchestration is managed using LangGraph.
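In a LangGraph-style workflow, the cache check would naturally sit at the entry of the graph, with a conditional edge that short-circuits to the end on a hit and falls through to the agent nodes on a miss. The sketch below shows that routing logic in plain Python so it stays self-contained; the function names and state shape are illustrative assumptions, not LangGraph's actual API.

```python
def run_workflow(query, cache, agent_pipeline):
    """Cache-check node runs first; on a hit, the agent nodes are skipped.

    `cache` is any mapping of query -> approved answer;
    `agent_pipeline` stands in for the chain of agent nodes.
    """
    cached = cache.get(query)
    if cached is not None:
        # Conditional edge: hit -> return immediately, no agents run.
        return {"answer": cached, "from_cache": True}
    # Miss -> fall through to the orchestrated agent nodes.
    answer = agent_pipeline(query)
    return {"answer": answer, "from_cache": False}
```

In actual LangGraph terms this corresponds to making the cache check the graph's entry node and using a conditional edge to route either to `END` or to the first agent node.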