Description
Prerequisites
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
When running a llama-cpp-python server with a configuration in which flash attention is enabled, I would expect a request to the chat-completion endpoint to be processed without errors and to return a response, as is the case when flash attention is disabled or when using llama.cpp's server directly with the same configuration.
Current Behavior
After installing llama-cpp-python[server] (versions 0.2.88 and 0.2.89 were tested) and running a server with a configuration in which flash attention is enabled, e.g.
{
"host": "0.0.0.0",
"port": 8080,
"models": [
{
"model": "/mnt/machine_learning/text_generation/models/text_generation_models/mradermacher_Meta-Llama-3.1-8B-Instruct-norefusal-i1-GGUF/Meta-Llama-3.1-8B-Instruct-norefusal.i1-Q6_K.gguf",
"model_alias": "llama3.1-8B-norefusal-i1",
"chat_format": "chatml",
"n_gpu_layers": 33,
"flash_attn": true,
"n_ctx": 32764
}
]
}
the server starts up correctly but any request to the chat-completion endpoint fails with a CUDA- and flash-attention-related error.
The error does not appear when:
- flash attention is not enabled
- the llama.cpp server is used directly (https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md)
- other GPUs are used
See "Steps to Reproduce" and "Failure Logs" for more information.
Environment and Context
Physical (or virtual) hardware you are using
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7402P 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2800,0000
CPU min MHz: 1500,0000
BogoMIPS: 5600.42
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization features:
Virtualization: AMD-V
Caches (sum of all):
L1d: 768 KiB (24 instances)
L1i: 768 KiB (24 instances)
L2: 12 MiB (24 instances)
L3: 128 MiB (8 instances)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Mitigation; untrained return thunk; SMT disabled
Spec rstack overflow: Mitigation; SMT disabled
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Not affected
Additional information on the GPUs
$ nvidia-smi
Mon Aug 26 15:52:14 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.28.03 Driver Version: 560.28.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Tesla P100-PCIE-16GB Off | 00000000:43:00.0 Off | 0 |
| N/A 37C P0 25W / 250W | 5MiB / 16384MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 Tesla P100-PCIE-16GB Off | 00000000:C3:00.0 Off | 0 |
| N/A 34C P0 26W / 250W | 5MiB / 16384MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1602 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1602 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------------------+
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
Operating System
$ uname -a
Linux linux-G292-Z20 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
SDK version
$ python3 --version
Python 3.10.10
$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ g++ --version
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Additional environment info
llama-cpp-python$ git log | head -1
commit 259ee151da9a569f58f6d4979e97cfd5d5bc3ecd
llama-cpp-python$ python3 --version
Python 3.10.10
llama-cpp-python$ pip list | egrep "uvicorn|fastapi|sse-starlette|numpy"
fastapi 0.112.2
numpy 2.1.0
sse-starlette 2.1.3
uvicorn 0.30.6
llama-cpp-python/vendor/llama.cpp$ git log | head -3
commit 259ee151da9a569f58f6d4979e97cfd5d5bc3ecd
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
Other potentially relevant packages in the virtual environment
llama_cpp_python==0.2.89
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.6.20
nvidia-nvtx-cu12==12.1.105
torch==2.2.0
torchaudio==2.2.0
torchvision==0.17.0
Failure Information (for bugs)
The failure seems to be linked to using llama-cpp-python with an Nvidia Tesla P100 GPU while flash attention is enabled. The trace (see section "Failure Logs") seems to confirm this.
- Using the exact same setup but with flash attention disabled works fine.
- Using the llama.cpp server directly with Nvidia Tesla P100s and flash attention enabled works fine:
llama.cpp$ ./llama-server -m /mnt/machine_learning/text_generation/models/text_generation_models/mradermacher_Meta-Llama-3.1-8B-Instruct-norefusal-i1-GGUF/Meta-Llama-3.1-8B-Instruct-norefusal.i1-Q6_K.gguf -c 32764 -ngl 33 --chat-template chatml -fa
- Using other GPUs with the same setup works fine (tested with an Nvidia RTX 3060, a Quadro P3200 and Tesla P4s).
The same issue appeared with llama_cpp_python version 0.2.88 as well as a few prior versions.
Since I had not used P100s with flash attention before that, I cannot say whether the setup worked at some point.
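Given that the failure log below reports the flash-attention kernel was compiled only for CUDA archs 520, 610, 700 and 750 (i.e. 5.2, 6.1, 7.0, 7.5), while the P100 is compute capability 6.0, one untested idea on my side would be to rebuild the wheel with the relevant architectures pinned explicitly via the standard CMAKE_CUDA_ARCHITECTURES variable, e.g.
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=60;61;70;75" pip install --force-reinstall --no-cache-dir llama-cpp-python[server]
I have not verified whether this changes the behavior; I mention it only as a guess based on the error message.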
Steps to Reproduce
- Use one or two Nvidia Tesla P100s with CUDA 12.1
- Install llama-cpp-python via
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python[server]
- Create a server configuration (e.g. with the name "cfg.json"):
{
"host": "0.0.0.0",
"port": 8080,
"models": [
{
"model": "/mnt/machine_learning/text_generation/models/text_generation_models/mradermacher_Meta-Llama-3.1-8B-Instruct-norefusal-i1-GGUF/Meta-Llama-3.1-8B-Instruct-norefusal.i1-Q6_K.gguf",
"model_alias": "llama3.1-8B-norefusal-i1",
"chat_format": "chatml",
"n_gpu_layers": 33,
"flash_attn": true,
"n_ctx": 32764
}
]
}
- Start the server with the configuration file, e.g. using
python3 -m llama_cpp.server --config_file cfg.json
- Send a chat-completion request to the server, e.g. via the Swagger docs at http://0.0.0.0:8080/docs or via curl (an example request is shown below)
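For reference, a minimal example request against the OpenAI-compatible chat-completion route (the model alias matches the example configuration above; adjust host, port and payload as needed):
curl -X POST http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "llama3.1-8B-norefusal-i1", "messages": [{"role": "user", "content": "Hello!"}]}'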
Failure Logs
The log of the llama-cpp-python server starting up correctly:
llama_model_loader: loaded meta data with 40 key-value pairs and 292 tensors from /media/linux/Workspaces 12TB/Resources/machine_learning/text_generation/models/text_generation_models/mradermacher_Meta-Llama-3.1-8B-Instruct-i1-GGUF/Meta-Llama-3.1-8B-Instruct.i1-Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 8B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 32
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 4096
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 18
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {% set loop_messages = messages %}{% ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - kv 29: general.url str = https://huggingface.co/mradermacher/M...
llama_model_loader: - kv 30: mradermacher.quantize_version str = 2
llama_model_loader: - kv 31: mradermacher.quantized_by str = mradermacher
llama_model_loader: - kv 32: mradermacher.quantized_at str = 2024-07-28T07:06:44+02:00
llama_model_loader: - kv 33: mradermacher.quantized_on str = db1
llama_model_loader: - kv 34: general.source.url str = https://huggingface.co/meta-llama/Met...
llama_model_loader: - kv 35: mradermacher.convert_type str = hf
llama_model_loader: - kv 36: quantize.imatrix.file str = Meta-Llama-3.1-8B-Instruct-i1-GGUF/im...
llama_model_loader: - kv 37: quantize.imatrix.dataset str = imatrix-training-full-3
llama_model_loader: - kv 38: quantize.imatrix.entries_count i32 = 224
llama_model_loader: - kv 39: quantize.imatrix.chunks_count i32 = 314
llama_model_loader: - type f32: 66 tensors
llama_model_loader: - type q6_K: 226 tensors
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q6_K
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 6.14 GiB (6.56 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 CUDA devices:
Device 0: Tesla P100-PCIE-16GB, compute capability 6.0, VMM: yes
Device 1: Tesla P100-PCIE-16GB, compute capability 6.0, VMM: yes
Device 2: Tesla P4, compute capability 6.1, VMM: yes
Device 3: Tesla P4, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size = 0.68 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 410.98 MiB
llm_load_tensors: CUDA0 buffer size = 2047.88 MiB
llm_load_tensors: CUDA1 buffer size = 1877.22 MiB
llm_load_tensors: CUDA2 buffer size = 853.28 MiB
llm_load_tensors: CUDA3 buffer size = 1093.62 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 65536
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 1
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 3072.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 2816.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 1280.00 MiB
llama_kv_cache_init: CUDA3 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 8192.00 MiB, K (f16): 4096.00 MiB, V (f16): 4096.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.49 MiB
llama_new_context_with_model: pipeline parallelism enabled (n_copies=4)
llama_new_context_with_model: CUDA0 compute buffer size = 688.01 MiB
llama_new_context_with_model: CUDA1 compute buffer size = 368.01 MiB
llama_new_context_with_model: CUDA2 compute buffer size = 368.01 MiB
llama_new_context_with_model: CUDA3 compute buffer size = 546.52 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 520.02 MiB
llama_new_context_with_model: graph nodes = 903
llama_new_context_with_model: graph splits = 5
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Model metadata: {'quantize.imatrix.dataset': 'imatrix-training-full-3', 'quantize.imatrix.entries_count': '224', 'llama.attention.head_count_kv': '8', 'mradermacher.convert_type': 'hf', 'general.source.url': 'https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct', 'llama.feed_forward_length': '14336', 'general.license': 'llama3.1', 'general.size_label': '8B', 'general.type': 'model', 'mradermacher.quantized_on': 'db1', 'quantize.imatrix.chunks_count': '314', 'llama.context_length': '131072', 'llama.embedding_length': '4096', 'mradermacher.quantized_at': '2024-07-28T07:06:44+02:00', 'llama.block_count': '32', 'llama.attention.head_count': '32', 'general.name': 'Meta Llama 3.1 8B Instruct', 'tokenizer.ggml.bos_token_id': '128000', 'general.basename': 'Meta-Llama-3.1', 'general.architecture': 'llama', 'general.url': 'https://huggingface.co/mradermacher/Meta-Llama-3.1-8B-Instruct-i1-GGUF', 'llama.rope.freq_base': '500000.000000', 'mradermacher.quantized_by': 'mradermacher', 'general.finetune': 'Instruct', 'general.file_type': '18', 'tokenizer.ggml.pre': 'llama-bpe', 'llama.vocab_size': '128256', 'quantize.imatrix.file': 'Meta-Llama-3.1-8B-Instruct-i1-GGUF/imatrix.dat', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.model': 'gpt2', 'general.quantization_version': '2', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'tokenizer.ggml.eos_token_id': '128009', 'tokenizer.chat_template': "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}", 'mradermacher.quantize_version': '2'}
Available chat formats from metadata: chat_template.default
INFO: Started server process [4141]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO: 127.0.0.1:36044 - "GET /v1/models HTTP/1.1" 200 OK
When flash-attention is enabled in the server configuration, a request to the chat-completion endpoint results in the following error log:
/tmp/pip-install-8p5atw7a/llama-cpp-python_cfac194396b14d2b935656e0a44f89f1/vendor/llama.cpp/ggml/src/ggml-cuda/fattn-tile-f16.cu:269: ERROR: CUDA kernel flash_attn_tile_ext_f16 has no device code compatible with CUDA arch 520. ggml-cuda.cu was compiled for: 520,610,700,750
/tmp/pip-install-8p5atw7a/llama-cpp-python_cfac194396b14d2b935656e0a44f89f1/vendor/llama.cpp/ggml/src/ggml-cuda/fattn-tile-f16.cu:269: ERROR: CUDA kernel flash_attn_tile_ext_f16 has no device code compatible with CUDA arch 520. ggml-cuda.cu was compiled for: 520,610,700,750
/tmp/pip-install-8p5atw7a/llama-cpp-python_cfac194396b14d2b935656e0a44f89f1/vendor/llama.cpp/ggml/src/ggml-cuda/fattn-tile-f16.cu:269: ERROR: CUDA kernel flash_attn_tile_ext_f16 has no device code compatible with CUDA arch 520. ggml-cuda.cu was compiled for: 520,610,700,750
/tmp/pip-install-8p5atw7a/llama-cpp-python_cfac194396b14d2b935656e0a44f89f1/vendor/llama.cpp/ggml/src/ggml-cuda/fattn-tile-f16.cu:269: ERROR: CUDA kernel flash_attn_tile_ext_f16 has no device code compatible with CUDA arch 520. ggml-cuda.cu was compiled for: 520,610,700,750
...
/tmp/pip-install-8p5atw7a/llama-cpp-python_cfac194396b14d2b935656e0a44f89f1/vendor/llama.cpp/ggml/src/ggml-cuda/fattn-tile-f16.cu:269: ERROR: CUDA kernel flash_attn_tile_ext_f16 has no device code compatible with CUDA arch 520. ggml-cuda.cu was compiled for: 520,610,700,750
ggml_cuda_compute_forward: SILU failed
CUDA error: unspecified launch failure
current device: 0, in function ggml_cuda_compute_forward at /tmp/pip-install-8p5atw7a/llama-cpp-python_cfac194396b14d2b935656e0a44f89f1/vendor/llama.cpp/ggml/src/ggml-cuda.cu:2313
err
/tmp/pip-install-8p5atw7a/llama-cpp-python_cfac194396b14d2b935656e0a44f89f1/vendor/llama.cpp/ggml/src/ggml-cuda.cu:101: CUDA error
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
Please note that the "..." above is for shortening; the repeated error message appears 255 times in total.
Also note that much of the above information is hidden under spoilers for a better overview. If removing those spoilers is preferred, please let me know.
Thank you for your time!