
Commit 2fdf23d

change code-llama to codellama (#400)
* change code-llama to codellama
* use both code-llama and codellama temporarily
1 parent: e189837, commit: 2fdf23d

1 file changed: 4 additions, 0 deletions

model-engine/model_engine_server/domain/use_cases/llm_model_endpoint_use_cases.py

Lines changed: 4 additions & 0 deletions
@@ -190,6 +190,10 @@
     # Based on config here: https://huggingface.co/TIGER-Lab/MAmmoTH-Coder-7B/blob/main/config.json#L12
     # Can also see 13B, 34B there too
     "code-llama": {"max_model_len": 16384, "max_num_batched_tokens": 16384},
+    "codellama": {
+        "max_model_len": 16384,
+        "max_num_batched_tokens": 16384,
+    },  # setting both for backwards compatibility, will phase code-llama out in a future pr
     # Based on config here: https://huggingface.co/codellama/CodeLlama-7b-hf/blob/main/config.json#L12
     # Can also see 13B, 34B there too
     "llama-2": {"max_model_len": None, "max_num_batched_tokens": 4096},
