
Commit 9267f1f

danbev authored and arthw committed
llama : fix typo in llama_tensor_get_type comment [no ci] (ggml-org#8937)
1 parent 41667ae

File tree

1 file changed (+1, -1 lines)


src/llama.cpp

Lines changed: 1 addition & 1 deletion
@@ -15308,7 +15308,7 @@ static ggml_type llama_tensor_get_type(quantize_state_internal & qs, ggml_type n
     const int n_expert = std::max(1, (int)qs.model.hparams.n_expert);
     auto layer_info = [n_expert] (int i_layer, int n_layer, const char * name) {
         if (n_expert > 1) {
-            // Believe it or not, "experts" in the FFN of Mixtral-8x7B are not consecutive, but iccasionally randomly
+            // Believe it or not, "experts" in the FFN of Mixtral-8x7B are not consecutive, but occasionally randomly
             // sprinkled in the model. Hence, simply dividing i_ffn_down by n_expert does not work
             // for getting the current layer as I initially thought, and we need to resort to parsing the
             // tensor name.

0 commit comments