1 file changed: +2 −2 lines changed
Wheels for [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) compiled with cuBLAS support.

Requirements:
- - Windows x64, Linux x64, or MacOS 11.7+
+ - Windows x64, Linux x64, or MacOS 11.0+
- CUDA 11.6 - 12.2
- CPython 3.8 - 3.11
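The CPython range above determines whether a prebuilt wheel exists for a given interpreter. As a minimal sketch of checking that range before attempting an install, assuming a hypothetical helper name (`supported_python` is not part of llama-cpp-python):

```python
import sys

def supported_python(version=sys.version_info):
    """Return True if the version falls in the wheel range CPython 3.8 - 3.11.

    `version` is any (major, minor, ...) sequence; defaults to the
    running interpreter. Hypothetical helper for illustration only.
    """
    major, minor = version[0], version[1]
    return (3, 8) <= (major, minor) <= (3, 11)
```

For example, `supported_python((3, 12, 0))` is False, since wheels are only published up through CPython 3.11.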
llama.cpp, and llama-cpp-python by extension, has migrated to using the new GGUF format and has dropped support for GGML. This applies to version 0.1.79+.

ROCm builds for AMD GPUs: https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/tag/rocm
- Metal builds for MacOS 11.7+: https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/tag/metal
+ Metal builds for MacOS 11.0+: https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/tag/metal

Installation instructions:
---