Replies: 1 comment
This is duplicated.
I tried to load an auto-gptq model with the latest LocalAI v2.10.0 Docker image. I rebuilt the image with the Dockerfile below.
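A minimal sketch of such a Dockerfile, assuming the rebuild just layers the auto-gptq Python package onto the published CUDA-enabled LocalAI image; the base image tag and the package pin are assumptions:

```dockerfile
# Sketch only: assumes the rebuild adds auto-gptq on top of the official
# CUDA-enabled LocalAI image. Base tag and package versions are assumptions.
FROM quay.io/go-skynet/local-ai:v2.10.0-cublas-cuda12
RUN pip install auto-gptq
```

This would be built and tagged to match the image name used in the run command below, e.g. `docker build -t localai:v2.10.0-autogptq-5 .`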
I have downloaded the model internlm/internlm-xcomposer2-vl-7b-4bit from HuggingFace and created a model config file for LocalAI for it.
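A minimal sketch of that config (the file referenced as /opt/models/intern-vl.yml in the run command below), assuming LocalAI's standard model YAML and the autogptq backend; the model name and exact fields are assumptions:

```yaml
# Sketch only: name and field values are assumptions inferred from
# the docker command below, not the original config file.
name: intern-vl
backend: autogptq
parameters:
  model: internlm/internlm-xcomposer2-vl-7b-4bit
```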
Then I started the container with:

```sh
docker run --gpus all -p 8080:8080 \
  -v $PWD/models:/opt/models \
  -e DEBUG=true \
  -e MODELS_PATH=/opt/models \
  -e CLIP_VISION_MODEL=/opt/models/clip-vit-large-patch14-336 \
  -e HF_HOME=/opt/models \
  -e TRANSFORMERS_OFFLINE=1 \
  localai:v2.10.0-autogptq-5 --config-file /opt/models/intern-vl.yml
```
The service seems to start successfully, but when I call the vision API it always returns a 500 error.
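The request that triggers the error looks roughly like this (a sketch, assuming LocalAI's OpenAI-compatible chat endpoint; the model name matches the YAML sketch above, and the prompt and image URL are placeholders):

```sh
# Sketch only: endpoint follows LocalAI's OpenAI-compatible API;
# model name, prompt, and image URL are placeholders.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "intern-vl",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}}
      ]
    }]
  }'
```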
Can anyone tell whether this is a bug in auto-gptq or LocalAI, or a mistake in my configuration?