Issue ggml_vulkan: device Vulkan0 does not support 16-bit storage. #3035


Open
codingWiz-rick opened this issue Apr 12, 2025 · 1 comment

@codingWiz-rick

Hi,
I'm getting an error while using whisper.cpp with Vulkan on an Adreno (TM) 610. Here is the error:

whisper_init_from_file_with_params_no_state: loading model from '/data/data/com.codewiz.ailyrics/files/home/whisper.cpp/models/ggml-tiny.bin'
whisper_init_with_params_no_state: use gpu    = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw        = 0
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Adreno (TM) 610 (Qualcomm Technologies Inc. Adreno Vulkan Driver) | uma: 1 | fp16: 0 | warp size: 64 | shared memory: 16384 | int dot: 0 | matrix cores: none
whisper_init_with_params_no_state: devices    = 2
whisper_init_with_params_no_state: backends   = 2
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51865
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 384
whisper_model_load: n_audio_head  = 6
whisper_model_load: n_audio_layer = 4
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 384
whisper_model_load: n_text_head   = 6
whisper_model_load: n_text_layer  = 4
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 1 (tiny)
whisper_model_load: adding 1608 extra tokens
whisper_model_load: n_langs       = 99
ggml_vulkan: device Vulkan0 does not support 16-bit storage.
libc++abi: terminating due to uncaught exception of type std::runtime_error: Unsupported device

Can you please tell me how to solve this error?
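Not a fix, but a possible way to confirm the diagnosis and keep working in the meantime: the crash happens because the driver does not report the `VK_KHR_16bit_storage` features that the ggml Vulkan backend requires. A sketch of checking this and falling back to CPU (assumes the Vulkan SDK's `vulkaninfo` tool is installed; `-ng`/`--no-gpu` is whisper-cli's CPU-only flag in recent builds):

```shell
# Check whether the driver exposes 16-bit storage; the backend needs
# these feature bits (from VK_KHR_16bit_storage / Vulkan 1.1) to be true:
vulkaninfo | grep -i "16BitAccess"
# e.g. storageBuffer16BitAccess, uniformAndStorageBuffer16BitAccess

# Workaround 1: keep the Vulkan build but force CPU-only inference:
./build/bin/whisper-cli -m ./models/ggml-tiny.bin -f sample.wav --no-gpu

# Workaround 2: rebuild without the Vulkan backend entirely:
cmake -B build -DGGML_VULKAN=OFF && cmake --build build -j
```

If `vulkaninfo` shows those features as false, the limitation is in the driver, not in whisper.cpp.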

@geobra

geobra commented Apr 13, 2025

I have the same issue on a OnePlus 6, running PostmarketOS.

I activated Vulkan support at build time via CMake.

This is the output on my device:

oneplus-enchilada:~/Documents/whisper.cpp$ time ./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ~/.local/share/org.gnome.SoundRecorder/Recording3.vob -t 8
whisper_init_from_file_with_params_no_state: loading model from './models/ggml-base.en.bin'
whisper_init_with_params_no_state: use gpu    = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw        = 0
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = Turnip Adreno (TM) 630 (turnip Mesa driver) | uma: 1 | fp16: 0 | warp size: 128 | shared memory: 32768 | int dot: 0 | matrix cores: none
whisper_init_with_params_no_state: devices    = 2
whisper_init_with_params_no_state: backends   = 2
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 2 (base)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: n_langs       = 99
ggml_vulkan: device Vulkan0 does not support 16-bit storage.
terminate called after throwing an instance of 'std::runtime_error'
  what():  Unsupported device
Command terminated by signal 6

It runs fine (but slowly) with CPU-only support.

I am happy to test and try out some code changes if someone points me in the right direction.
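For anyone digging into this: the abort comes from the backend's device-capability check. A minimal C sketch (not the actual ggml-vulkan code) of how the missing capability can be queried, using the core Vulkan 1.1 `VkPhysicalDevice16BitStorageFeatures` struct; build with `-lvulkan` on a machine with a Vulkan driver:

```c
// Query whether the first Vulkan device supports 16-bit storage buffers,
// the feature whose absence triggers "does not support 16-bit storage".
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    uint32_t count = 1;
    VkPhysicalDevice dev;
    if (vkEnumeratePhysicalDevices(inst, &count, &dev) < 0 || count == 0) {
        fprintf(stderr, "no Vulkan devices found\n");
        return 1;
    }

    // Chain the 16-bit storage feature struct into the features2 query.
    VkPhysicalDevice16BitStorageFeatures feat16 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_16BIT_STORAGE_FEATURES };
    VkPhysicalDeviceFeatures2 feats = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
        .pNext = &feat16 };
    vkGetPhysicalDeviceFeatures2(dev, &feats);

    printf("storageBuffer16BitAccess:           %u\n",
           feat16.storageBuffer16BitAccess);
    printf("uniformAndStorageBuffer16BitAccess: %u\n",
           feat16.uniformAndStorageBuffer16BitAccess);

    vkDestroyInstance(inst, NULL);
    return 0;
}
```

If these print 0 on the Adreno 610 / Turnip Adreno 630, the driver genuinely lacks the feature, and a code-side fix would mean adding non-fp16 storage fallback paths to the backend rather than relaxing the check.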
