See how important each token of the context was for the LLM response #8753
LiquidGunay started this conversation in Ideas
I think the ability to get something like an average attention score for each token of the context would be really useful for seeing which parts of the context the LLM "focused" on most when generating its response. This would be fairly useful for RAG and QA applications.
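As a rough illustration of the idea (not tied to any particular inference library), the per-token score could be computed by averaging the attention weights each context token receives across heads and query positions. The shapes and the random Q/K below are toy placeholders standing in for a real model's attention tensors:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy shapes: 4 heads, 6 context tokens, head dimension 8.
n_heads, n_tokens, d_head = 4, 6, 8
Q = rng.normal(size=(n_heads, n_tokens, d_head))
K = rng.normal(size=(n_heads, n_tokens, d_head))

# Scaled dot-product attention weights: (head, query, key).
attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head), axis=-1)

# Average the attention each context token receives, over heads and
# query positions -> one importance score per context token.
token_importance = attn.mean(axis=(0, 1))

print(token_importance)  # one score per token; scores sum to 1
```

In a real model the same reduction would be applied to the attention maps returned by the forward pass (averaged over layers as well, or restricted to the generated tokens' query positions), and the resulting scores mapped back onto the context tokens.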