kleidiai: add support for get_rows #14676


Open · wants to merge 3 commits into master

Conversation

@chaxu01 (Collaborator) commented Jul 14, 2025

This patch adds support for KleidiAI acceleration of the Q4_0 matrix multiplication operation in cases where the weight tensor is shared with the get_rows operator. A typical use case is in whisper.cpp, where such weight sharing occurs between get_rows and matmul.
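For context, get_rows gathers whole rows from a weight tensor by index, which is why the same tensor can feed both the lookup and the matmul. A minimal sketch of the gather semantics over plain float rows (names and signature are illustrative, not ggml's API):

```c
#include <stddef.h>
#include <string.h>

// Illustrative gather: dst row i is a copy of src row ids[i].
// src is a row-major matrix with n_cols floats per row.
static void get_rows_f32(const float *src, size_t n_cols,
                         const int *ids, size_t n_ids, float *dst) {
    for (size_t i = 0; i < n_ids; i++) {
        memcpy(dst + i * n_cols,
               src + (size_t)ids[i] * n_cols,
               n_cols * sizeof(float));
    }
}
```

In the PR's actual case the shared weights are Q4_0-quantized rather than f32, so the KleidiAI path must also dequantize the gathered rows, but the indexing pattern is the same.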

@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Jul 14, 2025
Comment on lines 34 to 39
static inline float compute_fp16_to_fp32(ggml_fp16_t h) {
static_assert(sizeof(ggml_fp16_t) == sizeof(__fp16), "ggml_fp16_t and __fp16 must be the same size");
__fp16 tmp;
memcpy(&tmp, &h, sizeof(ggml_fp16_t));
return (float)tmp;
}
Member

Can't we use ggml_fp16_to_fp32() instead of introducing this function?

Collaborator Author

Yes, good point — I'll update the patch to use ggml_fp16_to_fp32() instead.

Member

Since this is in the CPU backend, it could also use the potentially more efficient ggml_cpu_fp16_to_fp32.
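For reference, a portable software decode of an IEEE-754 binary16 value looks like the sketch below. This is illustrative only: ggml's real helpers may instead use hardware conversion (such as the `__fp16` cast in the quoted snippet, or F16C on x86), which is why `ggml_cpu_fp16_to_fp32` can be faster than a generic path.

```c
#include <stdint.h>
#include <string.h>

// Software IEEE-754 binary16 -> binary32 decode (illustrative sketch).
static float fp16_to_fp32_soft(uint16_t h) {
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant = h & 0x3FFu;
    uint32_t bits;

    if (exp == 0) {
        if (mant == 0) {
            bits = sign; // signed zero
        } else {
            // subnormal: normalize, then rebias the exponent
            int shift = 0;
            while ((mant & 0x400u) == 0) { mant <<= 1; shift++; }
            mant &= 0x3FFu;
            bits = sign | ((uint32_t)(113 - shift) << 23) | (mant << 13);
        }
    } else if (exp == 0x1F) {
        bits = sign | 0x7F800000u | (mant << 13); // inf / NaN
    } else {
        bits = sign | ((exp + 112) << 23) | (mant << 13); // rebias 15 -> 127
    }

    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}
```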

@ggerganov (Member) left a comment

General advice is to try to keep the implementation more generic - it seems to focus a lot on Q4_0. Adding more asserts for the current underlying assumptions will help long term in case we add support for other types.
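Documenting the Q4_0-only assumption with an explicit assert could look like the sketch below. The enum and function are hypothetical stand-ins (ggml's own code would use GGML_ASSERT and ggml_type); the point is that a new weight type hitting this path fails loudly instead of silently producing wrong results.

```c
#include <assert.h>

// Hypothetical stand-in for ggml's type enum, for illustration only.
enum wtype { WTYPE_Q4_0, WTYPE_Q8_0 };

static int can_use_kleidiai_get_rows(enum wtype t) {
    // Make the current assumption explicit so future type additions
    // trip this check rather than taking an untested fast path.
    assert(t == WTYPE_Q4_0 && "kleidiai get_rows path currently assumes Q4_0");
    return 1;
}
```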

Another important thing we should improve soon is to add support for testing extra buffer types in test-backend-ops (see ggml-org/whisper.cpp#3223 (comment)). Without such tests it is very difficult to verify that these changes do not break something.

@chaxu01 (Collaborator Author) commented Jul 16, 2025

I've updated the patch to address all review comments. However, I noticed that three CI tests are currently failing due to what appear to be unrelated infrastructure issues.

@ggerganov (Member)

The build failures are unrelated.
