
Llama-cpp-python installation issues on MacOS with M2 #956

Open
@erlebach

Description


I am following the instructions found in the file macOS.md. Everything worked fine until I got to the line:

pip install 'llama-cpp-python[server]'

which also installed correctly. First, the [server] notation is new to me. Is this some kind of standard?
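(Answering my own side question in case it helps others: as far as I can tell, the bracket syntax is a standard pip "extras" specifier from PEP 508, not something specific to this project. The exact packages pulled in by the server extra depend on the release; the ones named in the comment below are just examples.)

```shell
# Base package only:
pip install 'llama-cpp-python'

# Base package plus the optional dependencies the project groups under the
# "server" extra (e.g. fastapi, uvicorn -- exact contents vary by release):
pip install 'llama-cpp-python[server]'
```

The quotes matter in zsh (the macOS default shell), because unquoted square brackets are treated as a glob pattern.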

Second, I followed the instructions and typed:

export MODEL=mistral-7b-instruct-v0.1.Q3_K_M.gguf
python3 -m llama_cpp.server --model $MODEL  --n_gpu_layers 1

and get the message:

Traceback (most recent call last):
  File "/Users/erlebach/miniconda3/envs/llama/lib/python3.9/runpy.py", line 188, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/Users/erlebach/miniconda3/envs/llama/lib/python3.9/runpy.py", line 111, in _get_module_details
    __import__(pkg_name)
  File "/Users/erlebach/src/2023/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/Users/erlebach/src/2023/llama-cpp-python/llama_cpp/llama_cpp.py", line 82, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "/Users/erlebach/src/2023/llama-cpp-python/llama_cpp/llama_cpp.py", line 73, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found

This error seems to be unrelated to the model. What could be the problem? Thanks.
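For context, the loader that raises this error does roughly the following (a simplified sketch, not the library's actual code; the function name and search paths here are illustrative): it looks for a compiled libllama shared library next to the installed package and loads it with ctypes. So if the C library was never built (for example, in a source checkout where the build step did not run), there is nothing to find, regardless of which model file is used.

```python
import ctypes
import pathlib
import sys

def find_llama_library(search_dirs):
    """Illustrative sketch of what a ctypes-based loader does:
    search a few directories for the compiled llama shared library
    and load the first match. Real search paths differ by install."""
    # Pick the platform-appropriate shared-library suffix.
    suffix = {"darwin": ".dylib", "win32": ".dll"}.get(sys.platform, ".so")
    for d in search_dirs:
        for name in (f"libllama{suffix}", f"llama{suffix}"):
            candidate = pathlib.Path(d) / name
            if candidate.exists():
                return ctypes.CDLL(str(candidate))
    # No compiled library anywhere on the search path -> the error above.
    raise FileNotFoundError("Shared library with base name 'llama' not found")
```

Note that the failure happens at import time, before the --model argument is even looked at, which matches the suspicion that the model file is unrelated.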

(Could the error be due to the fact that I am running a Q3_K_M model rather than a Q4_K_M model? Why should that matter?)
Thanks!

Gordon

Metadata

    Labels

    bug (Something isn't working), build
