
Commit 66a2b1e

CISC authored and qnixsynapse committed
llama : return mistral-v7-tekken as default template only (ggml-org#14390)
1 parent 57cd396 commit 66a2b1e

File tree: 1 file changed (+1 −1 lines changed)

src/llama-model.cpp

Lines changed: 1 addition & 1 deletion
@@ -14377,7 +14377,7 @@ const char * llama_model_chat_template(const llama_model * model, const char * name) {
     // do not extend this list unless absolutely necessary
     // Mistral-Small-2503 does not have built-in chat template
     llama_vocab_pre_type pre_type = model->vocab.get_pre_type();
-    if (pre_type == LLAMA_VOCAB_PRE_TYPE_TEKKEN && model->layers.size() == 40) {
+    if (!name && pre_type == LLAMA_VOCAB_PRE_TYPE_TEKKEN && model->layers.size() == 40) {
         return "mistral-v7-tekken";
     }

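With the added `!name` guard, the built-in Tekken fallback applies only when the caller asks for the model's default chat template (name == NULL); a lookup for a specific named template no longer silently falls back to "mistral-v7-tekken" and instead returns NULL when that template is absent from the GGUF metadata. A minimal caller-side sketch, assuming the public llama.h API; the GGUF file path and the "tool_use" template name are placeholders, not part of this commit:

    #include "llama.h"
    #include <cstdio>

    int main() {
        llama_backend_init();

        llama_model_params mparams = llama_model_default_params();
        llama_model * model = llama_model_load_from_file("Mistral-Small-3.1-24B-Instruct.gguf", mparams);
        if (model == NULL) {
            return 1;
        }

        // name == NULL: request the model's default template; only on this path
        // does the Tekken/40-layer fallback return "mistral-v7-tekken".
        const char * def_tmpl = llama_model_chat_template(model, /*name =*/ NULL);
        printf("default template: %s\n", def_tmpl ? def_tmpl : "(none)");

        // Named lookup: after this change it does not fall back, so a template
        // that is not stored in the GGUF metadata yields NULL.
        const char * named_tmpl = llama_model_chat_template(model, "tool_use");
        printf("named template:   %s\n", named_tmpl ? named_tmpl : "(none)");

        llama_model_free(model);
        llama_backend_free();
        return 0;
    }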
0 commit comments