Labels
new feature (New feature or request)
Description
Feature Description
Currently, on the Windows platform, CUDA and Vulkan are supported. Is there a roadmap for supporting SYCL? I think it would improve performance, especially on Intel GPUs.
Building llama.cpp with SYCL is documented here:
https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/SYCL.md#windows
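For reference, a rough sketch of what the SYCL build on Windows looks like, paraphrased from memory of the linked llama.cpp docs (run from an Intel oneAPI command prompt after installing the oneAPI Base Toolkit; exact flags may have changed, so treat the upstream doc as authoritative):

```shell
# Sketch only: assumes Intel oneAPI Base Toolkit is installed and the
# oneAPI environment (icx compiler, SYCL runtime) is active in this shell.
cmake -B build -G "Ninja" ^
    -DGGML_SYCL=ON ^
    -DCMAKE_C_COMPILER=cl ^
    -DCMAKE_CXX_COMPILER=icx ^
    -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j
```

A `win-x64-sycl` backend would presumably need an equivalent of these steps wired into the project's native-binary build pipeline.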
The Solution
`win-x64-sycl`
Considered Alternatives
`win-x64-sycl`
Additional Context
No response
Related Features to This Feature Request
- Metal support
- CUDA support
- Vulkan support
- Grammar
- Function calling
Are you willing to resolve this issue by submitting a Pull Request?
Yes, I have the time, but I don't know how to start. I would need guidance.