Commit 7ad0c07

rahul-tuli and claude committed
fix: Add speculators weight remapping to llama_eagle model
- Added speculators_name_map to handle fusion_fc -> fc weight remapping
- Also handles transformer.* -> model.layers.0.* prefix remapping
- Fixes KeyError for fusion_fc.weight when loading speculators Eagle models
- Similar to the remapping already added to the eagle.py model

Signed-off-by: rtuli@redhat.com

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Rahul Tuli <rtuli@redhat.com>
1 parent 81c9904 commit 7ad0c07

File tree

1 file changed: +17 -0 lines changed

vllm/model_executor/models/llama_eagle.py

Lines changed: 17 additions & 0 deletions
```diff
@@ -103,7 +103,24 @@ def load_weights(self, weights: Iterable[tuple[str,
         ]
         params_dict = dict(self.named_parameters())
         loaded_params: set[str] = set()
+
+        # Support for speculators format weights
+        speculators_name_map = {
+            "fusion_fc.weight": "fc.weight",
+            "fusion_fc.bias": "fc.bias",
+            "embedding_layernorm.weight": "enorm.weight",
+            "pre_lm_head_layernorm.weight": "hnorm.weight",
+        }
+
         for name, loaded_weight in weights:
+            # Handle speculators format weight names
+            if name in speculators_name_map:
+                name = speculators_name_map[name]
+            elif name.startswith("transformer."):
+                # transformer.* -> model.layers.0.*
+                suffix = name[len("transformer."):]
+                name = f"model.layers.0.{suffix}"
+
             for param_name, weight_name, shard_id in stacked_params_mapping:
                 if weight_name not in name:
                     continue
```
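The remapping logic in the diff can be exercised in isolation. Below is a minimal sketch of the name translation as a standalone function; the function name `remap_speculators_name` is an illustrative helper introduced here, not part of the vLLM codebase, while the mapping keys and the `transformer.` prefix rule are taken directly from the diff above.

```python
# Mapping from speculators-format weight names to the vLLM Eagle layout,
# copied from the diff above.
speculators_name_map = {
    "fusion_fc.weight": "fc.weight",
    "fusion_fc.bias": "fc.bias",
    "embedding_layernorm.weight": "enorm.weight",
    "pre_lm_head_layernorm.weight": "hnorm.weight",
}


def remap_speculators_name(name: str) -> str:
    """Translate one speculators-format weight name (hypothetical helper)."""
    # Direct renames, e.g. fusion_fc.weight -> fc.weight
    if name in speculators_name_map:
        return speculators_name_map[name]
    # Prefix rename: transformer.* -> model.layers.0.*
    # (the Eagle draft model has a single decoder layer)
    if name.startswith("transformer."):
        return "model.layers.0." + name[len("transformer."):]
    # All other names pass through unchanged
    return name
```

Without this translation, a speculators checkpoint's `fusion_fc.weight` has no match in `params_dict`, which is the KeyError the commit message describes.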

0 commit comments