ERROR: llama_cpp_python-0.2.19-cp311-cp311-macosx_12_0_x86_64.whl is not a supported wheel on this platform. #24
What version of Python are you using?
@jllllll I have some users unable to install on macOS with M1. Can't figure out why 😂
This is the error we get:
@jllllll Is using this the right way for Metal CPUs? `UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu`
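(Editorial note, not from the thread: a quick stdlib-only sketch to check what platform the local interpreter actually reports, which is what pip compares a wheel filename's platform tag against.)

```python
import platform
import sysconfig

# The platform tag in a wheel filename (e.g. macosx_12_0_x86_64) must be
# compatible with what the local interpreter reports here.
print(sysconfig.get_platform())  # e.g. 'macosx-12.0-arm64' or 'linux-x86_64'
print(platform.machine())        # 'arm64' on Apple Silicon macOS, 'aarch64' on Linux
```

If these values do not correspond to the wheel's tag, pip will report "not a supported wheel on this platform".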
@gaby Try this command:
@jllllll I had to use …
@jllllll Still didn't work. This is with Python 3.11 on an M2 Pro. `lscpu` output:

```
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          6
On-line CPU(s) list:             0-5
Vendor ID:                       0x00
Model name:                      -
Model:                           0
Thread(s) per core:              1
Core(s) per cluster:             6
Socket(s):                       -
Cluster(s):                      1
Stepping:                        0x0
BogoMIPS:                        48.00
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
```
```
++ dpkg --print-architecture
Recommended install command for llama-cpp-python: UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu
+ pip_command='UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'
+ echo 'Recommended install command for llama-cpp-python: UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'
+ eval 'UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'
++ UNAME_M=arm64
++ python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu
Looking in indexes: https://pypi.org/simple, https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu
ERROR: Could not find a version that satisfies the requirement llama-cpp-python==0.2.19 (from versions: none)
ERROR: No matching distribution found for llama-cpp-python==0.2.19

[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: pip install --upgrade pip
+ echo 'Failed to install llama-cpp-python'
+ exit 1
Failed to install llama-cpp-python
```
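(Editorial note: the "not a supported wheel" / "no matching distribution" failures above can be reproduced offline. This sketch assumes the third-party `packaging` library, which pip itself builds on, and checks whether the failing wheel's tag intersects the tags the current interpreter accepts.)

```python
from packaging.tags import parse_tag, sys_tags

# Tag taken from the failing wheel's filename in the issue title.
wheel_tags = parse_tag("cp311-cp311-macosx_12_0_x86_64")
supported = set(sys_tags())

# On an Apple Silicon Mac (arm64), or inside a Linux/aarch64 container,
# the intersection is empty, so pip rejects the wheel.
print("compatible:", bool(wheel_tags & supported))
```

The same check explains the second error: if no wheel on the index carries a compatible tag and source builds are disabled via `--only-binary=:all:`, pip reports "from versions: none".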
Seems it works fine with macOS x86; I was able to create a test CI to prove it: https://github.com/gaby/testbench/actions/runs/7013936843/job/19080863382 Seems to be related to arm64/M1/M2.
@gaby Your Mac is recognizing itself as `aarch64`.
@jllllll According to Google, several users suggested building with https://github.com/pypa/cibuildwheel
Not really sure how that would help. The true problem is that Python/pip pointlessly differentiates between `arm64` and `aarch64`.

Uploading new wheels is a trivial problem to fix, as I just need to write a simple script to download all the macOS wheels and upload a version that is renamed to use `aarch64`.
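(Editorial note: the renaming idea above can be sketched as follows. The filename in the usage example is illustrative, and a production script would also rewrite the tag recorded in the wheel's internal `WHEEL` metadata; this sketch only copies the file under the alternate name.)

```python
import shutil
from pathlib import Path

def alias_arm64_wheel(wheel_path: Path) -> Path:
    """Copy a macOS arm64 wheel under an aarch64-tagged filename."""
    name = wheel_path.name
    if not (name.endswith(".whl") and "arm64" in name):
        raise ValueError(f"not a macOS arm64 wheel: {name}")
    # e.g. ...-macosx_12_0_arm64.whl -> ...-macosx_12_0_aarch64.whl
    alias = wheel_path.with_name(name.replace("arm64", "aarch64"))
    shutil.copyfile(wheel_path, alias)
    return alias
```

Usage would look like `alias_arm64_wheel(Path("llama_cpp_python-0.2.19-cp311-cp311-macosx_12_0_arm64.whl"))`, producing a sibling file tagged `aarch64` that pip on a machine reporting `aarch64` would accept.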
I see what you mean, cool! The main difference I read about manylinux is that those wheels work for both x86 and arm64. I played with cibuildwheel:

```yaml
name: CI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build_wheels_macos:
    name: Build wheels for macos
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: 'abetlen/llama-cpp-python'
          ref: 'v0.2.20'
          submodules: 'recursive'

      - name: Build wheels
        uses: pypa/cibuildwheel@v2.16.2
        env:
          CIBW_ARCHS_MACOS: x86_64 universal2
          CIBW_PROJECT_REQUIRES_PYTHON: ">=3.10"
          CIBW_BEFORE_ALL: 'export CMAKE_ARGS="-DLLAMA_NATIVE=off -DLLAMA_METAL=on"'
          CIBW_BEFORE_BUILD: 'pip install build wheel cmake'

      - name: List wheels
        run: ls -la ./wheelhouse/*.whl
```
@gaby I had originally built the macOS wheels with the `arm64` platform tag. I have finished building and uploading the renamed `aarch64` wheels.
@jllllll Thanks for your help, it turns out my issue is related to running in Docker on macOS. When running in Docker, the platform is linux/arm64, not macOS, which is why it can't find any compatible wheels 😂
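(Editorial note: the Docker detail above is easy to confirm from inside the container with a stdlib one-liner.)

```python
import platform

# On the macOS host: Darwin/arm64, so pip wants macosx_*_arm64 wheels.
# Inside a Docker container on the same machine: Linux/aarch64, so pip wants
# manylinux/musllinux aarch64 wheels, and macOS wheels can never match.
print(f"{platform.system()}/{platform.machine()}")
```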
Getting an error when running the command below:

`pip install -r requirements_apple_intel.txt`

Could you please help me fix this error?