paper recommendation for Part 8: Explainability #17

@zepingyu0512

Description

Hi, thanks for this great repository!

I’d like to recommend our recent paper for inclusion:

Title: Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large Language Models

Link: https://arxiv.org/pdf/2502.10835

Abstract:
In this work, we design and apply mechanistic interpretability techniques to analyze why LLMs struggle to perform latent multi-hop reasoning. To address this problem, we propose Back Attention, a mechanism that enables large language models to explicitly revisit prior intermediate steps when conducting multi-hop reasoning.

This work might be a good fit under the Part 8: Explainability section.

Thanks for considering!
