
Conversation

@nil0x9 (Contributor) commented Oct 31, 2025

Currently, running the code on non-CUDA devices (e.g., Ascend NPUs) triggers the following warning, even though NPU fused attention is actually used:

[XTuner][2025-10-25 18:11:49][WARNING] flash-attn is not installed, using `flex_attention` instead.
[XTuner][2025-10-25 18:11:49][WARNING] flash-attn is not installed, using `flex_attention` instead.
[XTuner][2025-10-25 18:11:49][WARNING] flash-attn is not installed, using `flex_attention` instead.
[XTuner][2025-10-25 18:11:49][WARNING] flash-attn is not installed, using `flex_attention` instead.
[XTuner][2025-10-25 18:11:50][WARNING] flash-attn is not installed, using `flex_attention` instead.

This warning is triggered by config initialization that runs when importing methods from xtuner.v1.model (see here). A sketch of the kind of device-aware guard this PR is after follows below.
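
As a minimal sketch (not XTuner's actual code), the fallback warning can be gated on whether CUDA is the active backend, so a missing flash-attn on an NPU run stays silent. `importlib.util.find_spec` and `torch.cuda.is_available` are real APIs; the function name and the returned implementation strings are hypothetical.

def resolve_attn_impl() -> str:
    """Pick an attention implementation without warning on non-CUDA devices."""
    import importlib.util
    import logging

    import torch

    logger = logging.getLogger("XTuner")

    if not torch.cuda.is_available():
        # Non-CUDA backends (e.g. Ascend NPUs) use their own fused attention,
        # so a missing flash-attn is expected and not worth a warning.
        return "fused_attention"
    if importlib.util.find_spec("flash_attn") is None:
        logger.warning("flash-attn is not installed, using `flex_attention` instead.")
        return "flex_attention"
    return "flash_attention"

With a guard like this, the warning is only emitted when flash-attn is genuinely expected (a CUDA run) but not installed.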
