Send Notes to Prompt #316
Replies: 3 comments 1 reply
-
Hi @FamedBear16, Chat mode doesn't use embeddings; it plugs the notes directly into the prompt. I'm not sure what you mean by "QA mode is RAG with embeddings." In the future, it will be possible to reference note titles directly in chat with [[note title]]. With that, there will be three ways to reference notes in the Copilot plugin.
Here's a video I made showing how to use custom prompt templating: https://youtu.be/VPNlXeCsH74?si=sp3-qmGdSR493wHC. At the very end I also touch on the "Send Notes to Prompt" button.
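The difference between the two approaches described above can be sketched roughly as follows. This is an illustrative toy, not the plugin's actual implementation: `embed()` and `similarity()` here are word-overlap stand-ins, whereas real RAG uses dense vector embeddings and cosine similarity.

```python
def embed(text: str) -> set[str]:
    # Toy "embedding": the set of lowercase words in the text.
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap as a stand-in for cosine similarity.
    return len(a & b) / len(a | b) if a | b else 0.0

def build_chat_prompt(question: str, notes: list[str]) -> str:
    # Chat mode: every selected note is pasted verbatim into the prompt.
    context = "\n\n".join(notes)
    return f"Context notes:\n{context}\n\nQuestion: {question}"

def build_rag_prompt(question: str, notes: list[str], top_k: int = 3) -> str:
    # QA/RAG mode: only the notes most similar to the question are
    # retrieved and sent, keeping the prompt far smaller than the vault.
    q = embed(question)
    ranked = sorted(notes, key=lambda n: similarity(q, embed(n)), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    return f"Context notes:\n{context}\n\nQuestion: {question}"
```

With direct prompting, the prompt grows with the number of notes; with retrieval, it stays bounded by `top_k`, which is why RAG scales to large vaults.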
-
Hi @logancyang, this is my use case; I hope you can help me see whether Copilot fits it. I have a folder with many notes; this is my "ZKBox" folder. I would like Copilot to index all my ZKBox notes so that I can run QA over them with a RAG model. I have tried Chat mode, which does not use a RAG model. It would also be helpful if you could add a filter that excludes notes with a given tag when creating a context or a custom prompt. How would you suggest using Copilot to query my ZKBox? Many thanks, Pierpaolo
-
Hi @FamedBear16, thanks for the details! Vault QA (BETA) mode is coming soon in v2.5.0; I think it will be most relevant for your use case. The first iteration will index the whole vault without exclude filters. Exclude filters for Chat mode context and Vault QA mode context can certainly be added. I will need people to test the new Vault QA mode and provide feedback. Since this is completely client-side, I do not have access to individual users' search logs, so it will be hard for me to debug individual cases. Users like you who can share feedback with me will be crucial for future iterations. Stay tuned!
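A tag-based exclude filter like the one requested could work roughly like this before indexing. This is a hypothetical sketch: `filter_notes` and the inline `#tag` convention are assumptions for illustration, not the plugin's actual API.

```python
def filter_notes(notes: dict[str, str], exclude_tags: set[str]) -> dict[str, str]:
    # Keep only notes (title -> body) whose body contains none of the
    # excluded #tags, so they never reach the index or the API.
    def has_excluded_tag(body: str) -> bool:
        return any(f"#{tag}" in body for tag in exclude_tags)
    return {title: body for title, body in notes.items()
            if not has_excluded_tag(body)}
```

Only the notes surviving the filter would then be embedded and indexed, so anything tagged, say, `#private` stays out of both the index and any request.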
-
Can anybody tell me how this works?
I am sending all my Zettelkasten notes to Chat mode.
Does this mode use embeddings to cut down on the amount of data sent to OpenAI?
Or will it send OpenAI the text of all my notes?
I gave it a try, and with GPT-4 Turbo I get this error: "LangChain Error: limit exceeded".
I am sure all my notes total less than 128k tokens.
Would it be possible in the future to say something like:
"For all my notes in @cards, can you answer this question?" (so that only the relevant embeddings are sent?)
Thanks
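As a quick self-check on whether a set of notes fits in GPT-4 Turbo's window, a back-of-the-envelope estimate like the following can catch obvious overruns before hitting the API. The 4-characters-per-token rule of thumb is a rough assumption; exact counts would need a tokenizer such as OpenAI's tiktoken library.

```python
CONTEXT_LIMIT = 128_000  # GPT-4 Turbo's advertised context window, in tokens

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    return max(1, len(text) // 4)

def fits_in_context(notes: list[str], reserve: int = 4_000) -> bool:
    # Reserve some headroom for the question and the model's reply.
    total = sum(estimate_tokens(n) for n in notes)
    return total + reserve <= CONTEXT_LIMIT
```

Note that the hard limit applies to the whole request (notes plus question plus reply), so even a vault just under 128k tokens can still trip a limit error.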