
Commit 96baad9

Update README.md
1 parent af001a5 commit 96baad9

File tree

1 file changed: +2 -2 lines changed


README.md

Lines changed: 2 additions & 2 deletions
@@ -2,15 +2,15 @@
 Wheels for [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) compiled with cuBLAS support.
 
 Requirements:
-- Windows x64, Linux x64, or MacOS 11.7+
+- Windows x64, Linux x64, or MacOS 11.0+
 - CUDA 11.6 - 12.2
 - CPython 3.8 - 3.11
 
 llama.cpp, and llama-cpp-python by extension, has migrated to using the new GGUF format and has dropped support for GGML.
 This applies to version 0.1.79+.
 
 ROCm builds for AMD GPUs: https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/tag/rocm
-Metal builds for MacOS 11.7+: https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/tag/metal
+Metal builds for MacOS 11.0+: https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels/releases/tag/metal
 
 Installation instructions:
 ---
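The installation instructions referenced at the end of the hunk are not shown in this diff; they presumably point pip at a prebuilt-wheel index rather than building from source. The sketch below only illustrates that general pattern. The index URL and the AVX2/cu117 path segments are assumptions, not part of this commit; consult the repository's actual instructions and the release pages linked above for the exact command.

```sh
# Hypothetical sketch: install a prebuilt cuBLAS wheel of llama-cpp-python
# from an extra pip index. The URL and AVX2/cu117 path are assumed, not
# taken from this commit.
python -m pip install llama-cpp-python \
  --prefer-binary \
  --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu117
```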
