Misc. bug: Regression in unified KV cache appears after llama.cpp release b5912 in b5913 #14847

@akarasulu

Name and Version

Working b5912:
llama-cli --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6, VMM: yes
version: 1 (ab14019)
built with cc (Debian 12.2.0-14+deb12u1) 12.2.0 for x86_64-linux-gnu

Broken b5913:

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

libllama (core library)

Command line

Here's the test_chat.py test program:

from llama_cpp import Llama

llm = Llama(
    model_path="/opt/models/gguf/tinyllama.gguf",
    n_gpu_layers=22,
    n_ctx=2048,
    chat_format="chatml",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ],
    max_tokens=20,
)

print("Response:", response["choices"][0]["message"]["content"])

Here's the Ansible task that runs it:

- name: Execute test_chat.py and validate output
  shell: |
    {{ llama_cpp_python_path }} /tmp/test_chat.py
  args:
    chdir: "{{ ansible_env.HOME }}"
    executable: /bin/bash
  environment:
    LLAMA_CPP_LIB_PATH: "{{ effective_libllama_path | dirname }}"
  register: chat_output
  failed_when: "'Response' not in chat_output.stdout"

llama-cli still works; only the Python launch through the ctypes ABI crashes. The working CLI invocation:

print "" | llama-cli -m /opt/models/gguf/tinyllama.gguf \
    --n-gpu-layers 22 --n-predict 20 --prompt "What is the capital of France?" \
    --interactive-first

Problem description & steps to reproduce

Running llama-cpp-python against llama.cpp built at b5913 (the first release after b5912) results in:

llama.cpp/src/llama-kv-cache-unified.cpp:222: GGML_ASSERT(seq_id >= 0 && (size_t) seq_id < seq_to_stream.size()) failed

This appears to be a regression in sequence-ID handling or unified KV-cache logic that affects external bindings, consistent with the heavy KV-cache rework in b5913 that prepares the K/V buffers for separation.
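
To make the failing condition concrete, here is a minimal sketch (Python, purely illustrative; the actual check is the C++ GGML_ASSERT quoted above) of what the assert enforces: the unified cache keeps a per-sequence stream table sized by n_seq_max, and any seq_id outside that range aborts the process.

# Illustrative only -- restates the bounds check from llama-kv-cache-unified.cpp:222.
n_seq_max = 1                            # matches the "1/ 1 seqs" line in the b5913 log below
seq_to_stream = list(range(n_seq_max))   # sequence id -> KV stream index

def stream_for(seq_id: int) -> int:
    # mirrors GGML_ASSERT(seq_id >= 0 && (size_t) seq_id < seq_to_stream.size())
    assert 0 <= seq_id < len(seq_to_stream), f"seq_id {seq_id} out of range"
    return seq_to_stream[seq_id]

stream_for(0)    # fine
# stream_for(1)  # fails, analogous to a binding passing an out-of-range seq_id

With n_seq_max = 1 only seq_id 0 is valid, so if the binding hands this new path anything else (or an uninitialized value), the assert fires, while llama-cli, which builds its batches internally, is unaffected.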

NOTE: llama-cli runs successfully, but llama-cpp-python against the same llama.cpp build and the same model hits the failure.

Environment

  • Model: tinyllama-1.1b-chat-v1.0.gguf (GGUF v3, q4_K)
  • Python binding: llama-cpp-python==0.2.56
  • Python: 3.12.9 via pyenv
  • GPU: NVIDIA RTX 3060 Laptop (CUDA, driver OK)
  • System: Debian 12 (llama.cpp compiled with CUDA backend)

Steps to Reproduce

Compile llama.cpp at b5913, then run test_chat.yml and the direct llama-cli command below, using https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf as the model.

Run the failing test (via the test_chat.yml playbook, or the script directly) after installing the latest llama-cpp-python:

LLAMA_CPP_LIB_PATH=/usr/local/lib python /tmp/test_chat.py
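
As an optional sanity check (not part of the original playbook; assumes a Linux /proc filesystem), the snippet below confirms which libllama the Python process actually mapped, ruling out a stale copy of the library shadowing the freshly built one:

import llama_cpp  # resolves libllama at import time (honoring LLAMA_CPP_LIB_PATH as set above)

# List every libllama mapping in the current process to verify the
# /usr/local/lib build is the one actually in use.
with open("/proc/self/maps") as maps:
    libs = sorted({line.split()[-1] for line in maps if "libllama" in line})
print("libllama mapped from:", libs)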

Run the succeeding llama-cli command directly:

print "" | llama-cli -m /opt/models/gguf/tinyllama.gguf \
    --n-gpu-layers 22 --n-predict 20 --prompt "What is the capital of France?" \
    --interactive-first

The CLI still works; only the Python path crashes.

First Bad Commit

225e7a1

Relevant log output

Broken Version (b5913) Output

Broken Python Launch (via test_chat.py)

Here is the Ansible log output; grep it for GGML_ASSERT:

TASK [llama_cpp_python : Execute test_chat.py and validate output] *******************************************************************
fatal: [uefi]: FAILED! => changed=true
  cmd: |-
    /home/aok/.pyenv/shims/python3 /tmp/test_chat.py
  delta: '0:00:00.579927'
  end: '2025-07-24 08:28:18.568691'
  failed_when_result: true
  msg: non-zero return code
  rc: -6
  start: '2025-07-24 08:28:17.988764'
  stderr: |-
    ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
    ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
    ggml_cuda_init: found 1 CUDA devices:
      Device 0: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6, VMM: yes
    llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3060 Laptop GPU) - 5823 MiB free
    llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /opt/models/gguf/tinyllama.gguf (version GGUF V3 (latest))
    llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
    llama_model_loader: - kv   0:                       general.architecture str              = llama
    llama_model_loader: - kv   1:                               general.name str              = tinyllama_tinyllama-1.1b-chat-v1.0
    llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
    llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
    llama_model_loader: - kv   4:                          llama.block_count u32              = 22
    llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
    llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
    llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
    llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
    llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
    llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
    llama_model_loader: - kv  11:                          general.file_type u32              = 15
    llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
    llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
    llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
    llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
    llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
    llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
    llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
    llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
    llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
    llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
    llama_model_loader: - kv  22:               general.quantization_version u32              = 2
    llama_model_loader: - type  f32:   45 tensors
    llama_model_loader: - type q4_K:  135 tensors
    llama_model_loader: - type q6_K:   21 tensors
    print_info: file format = GGUF V3 (latest)
    print_info: file type   = Q4_K - Medium
    print_info: file size   = 636.18 MiB (4.85 BPW)
    init_tokenizer: initializing tokenizer for type 1
    load: control token:      2 '</s>' is not marked as EOG
    load: control token:      1 '<s>' is not marked as EOG
    load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
    load: special tokens cache size = 3
    load: token to piece cache size = 0.1684 MB
    print_info: arch             = llama
    print_info: vocab_only       = 0
    print_info: n_ctx_train      = 2048
    print_info: n_embd           = 2048
    print_info: n_layer          = 22
    print_info: n_head           = 32
    print_info: n_head_kv        = 4
    print_info: n_rot            = 64
    print_info: n_swa            = 0
    print_info: is_swa_any       = 0
    print_info: n_embd_head_k    = 64
    print_info: n_embd_head_v    = 64
    print_info: n_gqa            = 8
    print_info: n_embd_k_gqa     = 256
    print_info: n_embd_v_gqa     = 256
    print_info: f_norm_eps       = 0.0e+00
    print_info: f_norm_rms_eps   = 1.0e-05
    print_info: f_clamp_kqv      = 0.0e+00
    print_info: f_max_alibi_bias = 0.0e+00
    print_info: f_logit_scale    = 0.0e+00
    print_info: f_attn_scale     = 0.0e+00
    print_info: n_ff             = 5632
    print_info: n_expert         = 0
    print_info: n_expert_used    = 0
    print_info: causal attn      = 1
    print_info: pooling type     = 0
    print_info: rope type        = 0
    print_info: rope scaling     = linear
    print_info: freq_base_train  = 10000.0
    print_info: freq_scale_train = 1
    print_info: n_ctx_orig_yarn  = 2048
    print_info: rope_finetuned   = unknown
    print_info: model type       = 1B
    print_info: model params     = 1.10 B
    print_info: general.name     = tinyllama_tinyllama-1.1b-chat-v1.0
    print_info: vocab type       = SPM
    print_info: n_vocab          = 32000
    print_info: n_merges         = 0
    print_info: BOS token        = 1 '<s>'
    print_info: EOS token        = 2 '</s>'
    print_info: UNK token        = 0 '<unk>'
    print_info: PAD token        = 2 '</s>'
    print_info: LF token         = 13 '<0x0A>'
    print_info: EOG token        = 2 '</s>'
    print_info: max token length = 48
    load_tensors: loading model tensors, this can take a while... (mmap = true)
    load_tensors: layer   0 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   1 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   2 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   3 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   4 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   5 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   6 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   7 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   8 assigned to device CUDA0, is_swa = 0
    load_tensors: layer   9 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  10 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  11 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  12 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  13 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  14 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  15 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  16 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  17 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  18 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  19 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  20 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  21 assigned to device CUDA0, is_swa = 0
    load_tensors: layer  22 assigned to device CPU, is_swa = 0
    load_tensors: tensor 'token_embd.weight' (q4_K) (and 2 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
    load_tensors: offloading 22 repeating layers to GPU
    load_tensors: offloaded 22/23 layers to GPU
    load_tensors:        CUDA0 model buffer size =   549.74 MiB
    load_tensors:   CPU_Mapped model buffer size =   636.18 MiB
    ....................................................................................
    llama_context: constructing llama_context
    llama_context: non-unified KV cache requires ggml_set_rows() - forcing unified KV cache
    llama_context: n_seq_max     = 1
    llama_context: n_ctx         = 2048
    llama_context: n_ctx_per_seq = 2048
    llama_context: n_batch       = 512
    llama_context: n_ubatch      = 512
    llama_context: causal_attn   = 1
    llama_context: flash_attn    = 0
    llama_context: kv_unified    = true
    llama_context: freq_base     = 10000.0
    llama_context: freq_scale    = 1
    set_abort_callback: call
    llama_context:        CPU  output buffer size =     0.12 MiB
    create_memory: n_ctx = 2048 (padded)
    llama_kv_cache_unified: layer   0: dev = CUDA0
    llama_kv_cache_unified: layer   1: dev = CUDA0
    llama_kv_cache_unified: layer   2: dev = CUDA0
    llama_kv_cache_unified: layer   3: dev = CUDA0
    llama_kv_cache_unified: layer   4: dev = CUDA0
    llama_kv_cache_unified: layer   5: dev = CUDA0
    llama_kv_cache_unified: layer   6: dev = CUDA0
    llama_kv_cache_unified: layer   7: dev = CUDA0
    llama_kv_cache_unified: layer   8: dev = CUDA0
    llama_kv_cache_unified: layer   9: dev = CUDA0
    llama_kv_cache_unified: layer  10: dev = CUDA0
    llama_kv_cache_unified: layer  11: dev = CUDA0
    llama_kv_cache_unified: layer  12: dev = CUDA0
    llama_kv_cache_unified: layer  13: dev = CUDA0
    llama_kv_cache_unified: layer  14: dev = CUDA0
    llama_kv_cache_unified: layer  15: dev = CUDA0
    llama_kv_cache_unified: layer  16: dev = CUDA0
    llama_kv_cache_unified: layer  17: dev = CUDA0
    llama_kv_cache_unified: layer  18: dev = CUDA0
    llama_kv_cache_unified: layer  19: dev = CUDA0
    llama_kv_cache_unified: layer  20: dev = CUDA0
    llama_kv_cache_unified: layer  21: dev = CUDA0
    llama_kv_cache_unified:      CUDA0 KV buffer size =    44.00 MiB
    llama_kv_cache_unified: size =   44.00 MiB (  2048 cells,  22 layers,  1/ 1 seqs), K (f16):   22.00 MiB, V (f16):   22.00 MiB
    llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
    llama_context: enumerating backends
    llama_context: backend_ptrs.size() = 2
    llama_context: max_nodes = 65536
    llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0
    graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
    graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
    graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
    llama_context:      CUDA0 compute buffer size =   148.00 MiB
    llama_context:  CUDA_Host compute buffer size =     8.01 MiB
    llama_context: graph nodes  = 820
    llama_context: graph splits = 4 (with bs=512), 3 (with bs=1)
    CUDA : ARCHS = 860 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
    Model metadata: {'tokenizer.chat_template': "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n'  + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}", 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.architecture': 'llama', 'llama.rope.freq_base': '10000.000000', 'llama.context_length': '2048', 'general.name': 'tinyllama_tinyllama-1.1b-chat-v1.0', 'llama.embedding_length': '2048', 'llama.feed_forward_length': '5632', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '64', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '22', 'llama.attention.head_count_kv': '4', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '15'}
    Available chat formats from metadata: chat_template.default
    /tmp/llama.cpp/src/llama-kv-cache-unified.cpp:222: GGML_ASSERT(seq_id >= 0 && (size_t) seq_id < seq_to_stream.size()) failed
    /usr/local/lib/libggml-base.so(+0x14ee8)[0x7fe0a04dbee8]
    /usr/local/lib/libggml-base.so(ggml_print_backtrace+0x1e4)[0x7fe0a04dc2b4]
    /usr/local/lib/libggml-base.so(ggml_abort+0x11e)[0x7fe0a04dc43e]
    /usr/local/lib/libllama.so(+0xbd573)[0x7fe0a0627573]
    /lib/x86_64-linux-gnu/libffi.so.8(+0x6f7a)[0x7fe0a09d9f7a]
    /lib/x86_64-linux-gnu/libffi.so.8(+0x640e)[0x7fe0a09d940e]
    /lib/x86_64-linux-gnu/libffi.so.8(ffi_call+0xcd)[0x7fe0a09d9b0d]
    /home/aok/.pyenv/versions/3.12.9/lib/python3.12/lib-dynload/_ctypes.cpython-312-x86_64-linux-gnu.so(+0x11bfe)[0x7fe0a09f0bfe]
    /home/aok/.pyenv/versions/3.12.9/lib/python3.12/lib-dynload/_ctypes.cpython-312-x86_64-linux-gnu.so(+0xbac6)[0x7fe0a09eaac6]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyObject_MakeTpCall+0x7c)[0x7fe0a137078c]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyEval_EvalFrameDefault+0x3c1c)[0x7fe0a130fa1c]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x18cc5d)[0x7fe0a138cc5d]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyEval_EvalFrameDefault+0x6d75)[0x7fe0a1312b75]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x18cc5d)[0x7fe0a138cc5d]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x27af09)[0x7fe0a147af09]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x1c6ae8)[0x7fe0a13c6ae8]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(PyObject_Vectorcall+0x4f)[0x7fe0a1370aaf]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyEval_EvalFrameDefault+0x3c1c)[0x7fe0a130fa1c]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x173dca)[0x7fe0a1373dca]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyObject_Call+0x119)[0x7fe0a1372799]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyEval_EvalFrameDefault+0x85c)[0x7fe0a130c65c]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(PyEval_EvalCode+0x207)[0x7fe0a1480727]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x2d8b76)[0x7fe0a14d8b76]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x2d8c89)[0x7fe0a14d8c89]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyRun_SimpleFileObject+0x16c)[0x7fe0a14dba4c]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(_PyRun_AnyFileObject+0x3c)[0x7fe0a14dc02c]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(+0x301c2a)[0x7fe0a1501c2a]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(Py_RunMain+0x16)[0x7fe0a1502076]
    /home/aok/.pyenv/versions/3.12.9/lib/libpython3.12.so.1.0(Py_BytesMain+0x47)[0x7fe0a15021e7]
    /lib/x86_64-linux-gnu/libc.so.6(+0x2724a)[0x7fe0a104624a]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7fe0a1046305]
    /home/aok/.pyenv/versions/3.12.9/bin/python3(_start+0x21)[0x562ae480b081]
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

PLAY RECAP ***************************************************************************************************************************
uefi                       : ok=18   changed=4    unreachable=0    failed=1    skipped=5    rescued=0    ignored=0

You can also run this manually using the test_chat.py script shown above.

Direct llama-cli Output

The CLI still works; only the Python path crashes.

print "" | llama-cli -m /opt/models/gguf/tinyllama.gguf \
    --n-gpu-layers 22 --n-predict 20 --prompt "What is the capital of France?" \
    --interactive-first
Warning: unknown mime-type for "" -- using "application/octet-stream"
Error: no such file ""
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6, VMM: yes
build: 1 (225e7a1) with cc (Debian 12.2.0-14+deb12u1) 12.2.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3060 Laptop GPU) - 5823 MiB free
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /opt/models/gguf/tinyllama.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = tinyllama_tinyllama-1.1b-chat-v1.0
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 22
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_K:  135 tensors
llama_model_loader: - type q6_K:   21 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 636.18 MiB (4.85 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 2048
print_info: n_embd           = 2048
print_info: n_layer          = 22
print_info: n_head           = 32
print_info: n_head_kv        = 4
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 256
print_info: n_embd_v_gqa     = 256
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 5632
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 2048
print_info: rope_finetuned   = unknown
print_info: model type       = 1B
print_info: model params     = 1.10 B
print_info: general.name     = tinyllama_tinyllama-1.1b-chat-v1.0
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 2 '</s>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 22 repeating layers to GPU
load_tensors: offloaded 22/23 layers to GPU
load_tensors:        CUDA0 model buffer size =   549.74 MiB
load_tensors:   CPU_Mapped model buffer size =   636.18 MiB
....................................................................................
llama_context: constructing llama_context
llama_context: non-unified KV cache requires ggml_set_rows() - forcing unified KV cache
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: kv_unified    = true
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) > n_ctx_train (2048) -- possible training context overflow
llama_context:        CPU  output buffer size =     0.12 MiB
llama_kv_cache_unified:      CUDA0 KV buffer size =    88.00 MiB
llama_kv_cache_unified: size =   88.00 MiB (  4096 cells,  22 layers,  1/ 1 seqs), K (f16):   44.00 MiB, V (f16):   44.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:      CUDA0 compute buffer size =   280.00 MiB
llama_context:  CUDA_Host compute buffer size =    12.01 MiB
llama_context: graph nodes  = 820
llama_context: graph splits = 4 (with bs=512), 3 (with bs=1)
common_init_from_params: added </s> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 8
main: model was trained on only 2048 context tokens (4096 specified)
main: chat template is available, enabling conversation mode (disable it with -no-cnv)
*** User-specified prompt will pre-start conversation, did you mean to set --system-prompt (-sys) instead?
main: chat template example:
<|system|>
You are a helpful assistant<|user|>
Hello<|assistant|>
Hi there<|user|>
How are you?<|assistant|>

system_info: n_threads = 8 (n_threads_batch = 8) / 8 | CUDA : ARCHS = 860 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |

main: interactive mode on.
sampler seed: 324771313
sampler params:
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
        top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
generate: n_ctx = 4096, n_batch = 2048, n_predict = 20, n_keep = 1

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.
 - Not using system message. To change it, set a different value via -sys PROMPT

 <|user|>
What is the capital of France?<|assistant|>
Yes, the capital of France is Paris.

> EOF by user


llama_perf_sampler_print:    sampling time =       0.42 ms /    31 runs   (    0.01 ms per token, 74698.80 tokens per second)
llama_perf_context_print:        load time =     274.57 ms
llama_perf_context_print: prompt eval time =      62.62 ms /    20 tokens (    3.13 ms per token,   319.38 tokens per second)
llama_perf_context_print:        eval time =      72.10 ms /    10 runs   (    7.21 ms per token,   138.69 tokens per second)
llama_perf_context_print:       total time =     138.79 ms /    30 tokens

The regression was introduced between b5912 and b5913 and appears to affect:

  • Sequence-to-stream mapping
  • Use of llama_seq_id
  • Buffer allocation or graph scheduling inside unified KV cache

Request

Could you help confirm whether this is a known issue or a new regression in sequence handling? Or is it an ABI change that llama-cpp-python needs to be updated for?

Thanks in advance.

Working Version (b5912) Outputs

Both python test_chat.py and llama-cli succeed.

Working Python Launch (via test_chat.py)

LLAMA_CPP_LIB_PATH=/usr/local/lib python /tmp/test_chat.py
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6, VMM: yes
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3060 Laptop GPU) - 5823 MiB free
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /opt/models/gguf/tinyllama.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = tinyllama_tinyllama-1.1b-chat-v1.0
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 22
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_K:  135 tensors
llama_model_loader: - type q6_K:   21 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 636.18 MiB (4.85 BPW)
init_tokenizer: initializing tokenizer for type 1
load: control token:      2 '</s>' is not marked as EOG
load: control token:      1 '<s>' is not marked as EOG
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 2048
print_info: n_embd           = 2048
print_info: n_layer          = 22
print_info: n_head           = 32
print_info: n_head_kv        = 4
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 256
print_info: n_embd_v_gqa     = 256
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 5632
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 2048
print_info: rope_finetuned   = unknown
print_info: model type       = 1B
print_info: model params     = 1.10 B
print_info: general.name     = tinyllama_tinyllama-1.1b-chat-v1.0
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 2 '</s>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: layer   0 assigned to device CUDA0, is_swa = 0
load_tensors: layer   1 assigned to device CUDA0, is_swa = 0
load_tensors: layer   2 assigned to device CUDA0, is_swa = 0
load_tensors: layer   3 assigned to device CUDA0, is_swa = 0
load_tensors: layer   4 assigned to device CUDA0, is_swa = 0
load_tensors: layer   5 assigned to device CUDA0, is_swa = 0
load_tensors: layer   6 assigned to device CUDA0, is_swa = 0
load_tensors: layer   7 assigned to device CUDA0, is_swa = 0
load_tensors: layer   8 assigned to device CUDA0, is_swa = 0
load_tensors: layer   9 assigned to device CUDA0, is_swa = 0
load_tensors: layer  10 assigned to device CUDA0, is_swa = 0
load_tensors: layer  11 assigned to device CUDA0, is_swa = 0
load_tensors: layer  12 assigned to device CUDA0, is_swa = 0
load_tensors: layer  13 assigned to device CUDA0, is_swa = 0
load_tensors: layer  14 assigned to device CUDA0, is_swa = 0
load_tensors: layer  15 assigned to device CUDA0, is_swa = 0
load_tensors: layer  16 assigned to device CUDA0, is_swa = 0
load_tensors: layer  17 assigned to device CUDA0, is_swa = 0
load_tensors: layer  18 assigned to device CUDA0, is_swa = 0
load_tensors: layer  19 assigned to device CUDA0, is_swa = 0
load_tensors: layer  20 assigned to device CUDA0, is_swa = 0
load_tensors: layer  21 assigned to device CUDA0, is_swa = 0
load_tensors: layer  22 assigned to device CPU, is_swa = 0
load_tensors: tensor 'token_embd.weight' (q4_K) (and 2 others) cannot be used with preferred buffer type CUDA_Host, using CPU instead
load_tensors: offloading 22 repeating layers to GPU
load_tensors: offloaded 22/23 layers to GPU
load_tensors:        CUDA0 model buffer size =   549.74 MiB
load_tensors:   CPU_Mapped model buffer size =   636.18 MiB
....................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 2048
llama_context: n_ctx_per_seq = 2048
llama_context: n_batch       = 512
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
set_abort_callback: call
llama_context:        CPU  output buffer size =     0.12 MiB
create_memory: n_ctx = 2048 (padded)
llama_kv_cache_unified: layer   0: dev = CUDA0
llama_kv_cache_unified: layer   1: dev = CUDA0
llama_kv_cache_unified: layer   2: dev = CUDA0
llama_kv_cache_unified: layer   3: dev = CUDA0
llama_kv_cache_unified: layer   4: dev = CUDA0
llama_kv_cache_unified: layer   5: dev = CUDA0
llama_kv_cache_unified: layer   6: dev = CUDA0
llama_kv_cache_unified: layer   7: dev = CUDA0
llama_kv_cache_unified: layer   8: dev = CUDA0
llama_kv_cache_unified: layer   9: dev = CUDA0
llama_kv_cache_unified: layer  10: dev = CUDA0
llama_kv_cache_unified: layer  11: dev = CUDA0
llama_kv_cache_unified: layer  12: dev = CUDA0
llama_kv_cache_unified: layer  13: dev = CUDA0
llama_kv_cache_unified: layer  14: dev = CUDA0
llama_kv_cache_unified: layer  15: dev = CUDA0
llama_kv_cache_unified: layer  16: dev = CUDA0
llama_kv_cache_unified: layer  17: dev = CUDA0
llama_kv_cache_unified: layer  18: dev = CUDA0
llama_kv_cache_unified: layer  19: dev = CUDA0
llama_kv_cache_unified: layer  20: dev = CUDA0
llama_kv_cache_unified: layer  21: dev = CUDA0
llama_kv_cache_unified:      CUDA0 KV buffer size =    44.00 MiB
llama_kv_cache_unified: size =   44.00 MiB (  2048 cells,  22 layers,  1 seqs), K (f16):   22.00 MiB, V (f16):   22.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
llama_context: max_nodes = 65536
llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
graph_reserve: reserving a graph for ubatch with n_tokens =    1, n_seqs =  1, n_outputs =    1
graph_reserve: reserving a graph for ubatch with n_tokens =  512, n_seqs =  1, n_outputs =  512
llama_context:      CUDA0 compute buffer size =   148.00 MiB
llama_context:  CUDA_Host compute buffer size =     8.01 MiB
llama_context: graph nodes  = 798
llama_context: graph splits = 4 (with bs=512), 3 (with bs=1)
CUDA : ARCHS = 860 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
Model metadata: {'tokenizer.chat_template': "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n'  + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}", 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.architecture': 'llama', 'llama.rope.freq_base': '10000.000000', 'llama.context_length': '2048', 'general.name': 'tinyllama_tinyllama-1.1b-chat-v1.0', 'llama.embedding_length': '2048', 'llama.feed_forward_length': '5632', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '64', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '22', 'llama.attention.head_count_kv': '4', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '15'}
Available chat formats from metadata: chat_template.default
llama_perf_context_print:        load time =      66.63 ms
llama_perf_context_print: prompt eval time =      66.42 ms /    57 tokens (    1.17 ms per token,   858.21 tokens per second)
llama_perf_context_print:        eval time =      55.81 ms /     7 runs   (    7.97 ms per token,   125.43 tokens per second)
llama_perf_context_print:       total time =     125.24 ms /    64 tokens
Response: The capital of France is Paris.

Working llama-cli Launch

print "" | llama-cli -m /opt/models/gguf/tinyllama.gguf \
    --n-gpu-layers 22 --n-predict 20 --prompt "What is the capital of France?" \
    --interactive-first
Warning: unknown mime-type for "" -- using "application/octet-stream"
Error: no such file ""
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060 Laptop GPU, compute capability 8.6, VMM: yes
build: 1 (ab14019) with cc (Debian 12.2.0-14+deb12u1) 12.2.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 3060 Laptop GPU) - 5823 MiB free
llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /opt/models/gguf/tinyllama.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = tinyllama_tinyllama-1.1b-chat-v1.0
llama_model_loader: - kv   2:                       llama.context_length u32              = 2048
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   4:                          llama.block_count u32              = 22
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5632
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 64
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  11:                          general.file_type u32              = 15
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  14:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  15:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  16:                      tokenizer.ggml.merges arr[str,61249]   = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  21:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  22:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_K:  135 tensors
llama_model_loader: - type q6_K:   21 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 636.18 MiB (4.85 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 2048
print_info: n_embd           = 2048
print_info: n_layer          = 22
print_info: n_head           = 32
print_info: n_head_kv        = 4
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 8
print_info: n_embd_k_gqa     = 256
print_info: n_embd_v_gqa     = 256
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 5632
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 2048
print_info: rope_finetuned   = unknown
print_info: model type       = 1B
print_info: model params     = 1.10 B
print_info: general.name     = tinyllama_tinyllama-1.1b-chat-v1.0
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: PAD token        = 2 '</s>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 22 repeating layers to GPU
load_tensors: offloaded 22/23 layers to GPU
load_tensors:        CUDA0 model buffer size =   549.74 MiB
load_tensors:   CPU_Mapped model buffer size =   636.18 MiB
....................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context: n_ctx_per_seq (4096) > n_ctx_train (2048) -- possible training context overflow
llama_context:        CPU  output buffer size =     0.12 MiB
llama_kv_cache_unified:      CUDA0 KV buffer size =    88.00 MiB
llama_kv_cache_unified: size =   88.00 MiB (  4096 cells,  22 layers,  1 seqs), K (f16):   44.00 MiB, V (f16):   44.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:      CUDA0 compute buffer size =   280.00 MiB
llama_context:  CUDA_Host compute buffer size =    12.01 MiB
llama_context: graph nodes  = 798
llama_context: graph splits = 4 (with bs=512), 3 (with bs=1)
common_init_from_params: added </s> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 8
main: model was trained on only 2048 context tokens (4096 specified)
main: chat template is available, enabling conversation mode (disable it with -no-cnv)
*** User-specified prompt will pre-start conversation, did you mean to set --system-prompt (-sys) instead?
main: chat template example:
<|system|>
You are a helpful assistant<|user|>
Hello<|assistant|>
Hi there<|user|>
How are you?<|assistant|>

system_info: n_threads = 8 (n_threads_batch = 8) / 8 | CUDA : ARCHS = 860 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |

main: interactive mode on.
sampler seed: 512675149
sampler params:
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
        top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
generate: n_ctx = 4096, n_batch = 2048, n_predict = 20, n_keep = 1

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.
 - Not using system message. To change it, set a different value via -sys PROMPT

 <|user|>
What is the capital of France?<|assistant|>
The capital of France is Paris.

> EOF by user


llama_perf_sampler_print:    sampling time =       0.35 ms /    29 runs   (    0.01 ms per token, 83094.56 tokens per second)
llama_perf_context_print:        load time =     315.03 ms
llama_perf_context_print: prompt eval time =      68.44 ms /    20 tokens (    3.42 ms per token,   292.24 tokens per second)
llama_perf_context_print:        eval time =      55.23 ms /     8 runs   (    6.90 ms per token,   144.84 tokens per second)
llama_perf_context_print:       total time =     128.91 ms /    28 tokens
