Build with llama support does not build llama support #5890

@mooleshacat

Description

LocalAI version:

Master branch

root@AI0:~/localai# ./LocalAI --version
LocalAI version  ()

Environment, CPU architecture, OS, and Version:

Linux AI0 6.1.0-37-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.140-1 (2025-05-22) x86_64 GNU/Linux
AMD 5950X, 32 GB RAM, 2x RTX 3070, 1x RTX 2070 Super, 20 TB HDD (for now)

Describe the bug

Building from source by following the documented instructions for llama support produces a LocalAI binary with no working llama backend: as soon as a model is loaded, every request fails with "grpc process not found: .../backend-assets/grpc/llama-cpp" (full error output under Additional context below).

To Reproduce

  • Follow the build instructions for llama support (roughly the steps sketched below)
  • Start LocalAI and send it a test request
  • Get an error saying there is no llama support
  • Get frustrated
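
For reference, this is roughly what I ran. It is my reading of the README/Makefile, so the exact variable names (BUILD_TYPE=cublas for the NVIDIA cards) and the name of the output binary may differ on other setups:

    git clone https://github.com/mudler/LocalAI
    cd LocalAI
    # cublas for the NVIDIA cards; variable name as I understood it from the docs
    make BUILD_TYPE=cublas build
    # resulting binary (name may differ depending on setup)
    ./local-ai --version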

Expected behavior

LocalAI builds with llama support enabled, so that models can actually be loaded through the llama backend.

Logs

(See the full error output under Additional context below.)
Additional context

I have been informed by an AI assistant that the build is supposed to produce a separate binary that runs the llama backend, but when I compile, that binary is missing, which is what causes the llama support failure.

The assistant suggested it may need to be built with llama embedded, because the llama backend binary is not included (even though it is required for llama support). A quick check I'm using to see whether the backend binary was actually produced is sketched below.
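
The paths here are taken from the error message further down; if I understand the layout correctly, the first directory is where the build places the backend binaries and the second is where LocalAI extracts them at runtime:

    # in the LocalAI source tree, after the build finishes
    ls -l backend-assets/grpc/
    # where LocalAI extracts the backends when it starts (path from the error below)
    ls -l /tmp/localai/backend_data/backend-assets/grpc/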

I am trying the embedded route, but I have been compiling and recompiling for the past 48 hours without success.

The llama build commands and instructions should cover EVERYTHING needed for llama support, including the separate backend binary that currently isn't produced. Instructions for the embedded-llama build should sit right beside them, but embedding shouldn't even be necessary if the docs stated the requirement (the llama server binary) and explained how to install or build it.

Then embedding would only need to be mentioned for people who specifically want it to simplify the build and installation.
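
For what it's worth, this is my best guess at a fuller build invocation, based only on the GO_TAGS hint in the error below; the variable names and values are assumptions on my part, not something I found in the instructions I followed:

    make clean
    # GO_TAGS taken from the hint in the error message; cublas again for the NVIDIA cards
    make BUILD_TYPE=cublas GO_TAGS="stablediffusion tts" build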

I am still unable to get llama support enabled. The documentation needs updating, the source for the required binary needs to be included and built, and instructions for doing that should be added.

{"error":{"code":500,"message":"could not load model - all backends returned error: 10 errors occurred:\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/llama-ggml. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/gpt4all. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* could not load model: rpc error: code = Unknown desc = failed loading model\n\t* could not load model: rpc error: code = Unavailable desc = error reading from server: EOF\n\t* could not load model: rpc error: code = Unknown desc = unable to load model\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/stablediffusion. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/tinydream. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\t* grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/piper. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n\n","type":""}}root@AI0:~/localai#

P.S. The Docker build fails to connect to the GPUs, which is why I'm forced to compile and install manually.
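
For completeness, this is the kind of check I'm using to see whether containers can reach the GPUs at all; it requires the NVIDIA Container Toolkit on the host, and the CUDA image tag here is just what I had on hand, nothing LocalAI-specific:

    # if this fails, the problem is the host Docker/GPU setup rather than LocalAI itself
    docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi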
