Hi, thanks for this great repository!
I’d like to recommend our recent paper for inclusion:
Title: Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models
Link: https://arxiv.org/pdf/2502.10835
Abstract:
In this work, we design and use mechanistic interpretability techniques to analyze why LLMs struggle to perform latent multi-hop reasoning. To address this problem, we propose Back Attention, a mechanism that enables large language models to explicitly revisit prior intermediate steps when conducting multi-hop reasoning.
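For anyone skimming the thread, here is a minimal, heavily hedged PyTorch sketch of the general idea as the abstract describes it: letting the model attend back over cached hidden states from prior intermediate steps. The class name, tensor shapes, and residual fusion here are illustrative assumptions of mine, not the paper's actual Back Attention implementation; see the linked PDF for the real mechanism.

```python
import torch
import torch.nn as nn

class BackAttentionSketch(nn.Module):
    """Illustrative sketch only (not the paper's exact method):
    attend over cached hidden states from earlier reasoning steps."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, current: torch.Tensor, past_steps: torch.Tensor) -> torch.Tensor:
        # current:    (batch, seq, d_model) hidden states at the current step
        # past_steps: (batch, mem, d_model) cached states from prior hops
        revisited, _ = self.attn(query=current, key=past_steps, value=past_steps)
        # Residual fusion so revisited information augments the current states
        return current + revisited

# Usage with hypothetical shapes
x = torch.randn(2, 5, 512)      # current hidden states
mem = torch.randn(2, 12, 512)   # cached intermediate-step states
out = BackAttentionSketch(512)(x, mem)
print(out.shape)                # torch.Size([2, 5, 512])
```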
This work might be a good fit under Part 8: Explainability.
Thanks for considering!