Commit 2c41df7
Forbid extra argument for modifiers (#1614)
SUMMARY: Fixes [issue 1226](#1226) and [issue 1225](#1225).

TEST PLAN: Tested locally; example error message:

```
pydantic_core._pydantic_core.ValidationError: 1 validation error for GPTQModifier
group_0
  Extra inputs are not permitted [type=extra_forbidden, input_value={'weights': {'num_bits': ... 'targets': ['Linear']}}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.11/v/extra_forbidden
```

Signed-off-by: shanjiaz <zsjwpianpian@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
1 parent 090baff commit 2c41df7

File tree: 12 files changed, +16 −13 lines

examples/finetuning/example_alternating_recipe.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -4,7 +4,7 @@ initial_sparsity_stage:
   SparseGPTModifier:
     sparsity: 0.5
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     mask_structure: "0:0"
     targets: ["Linear"]
     ignore: ["re:.*lm_head"]
@@ -20,7 +20,7 @@ next_sparsity_stage:
   SparseGPTModifier:
     sparsity: 0.7
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     mask_structure: "0:0"
     targets: ["Linear"]
     ignore: ["re:.*lm_head"]
```

src/llmcompressor/modifiers/modifier.py

Lines changed: 4 additions & 0 deletions

```diff
@@ -1,6 +1,8 @@
 from abc import abstractmethod
 from typing import Optional
 
+from pydantic import ConfigDict
+
 from llmcompressor.core.events import Event, EventType
 from llmcompressor.core.state import State
 from llmcompressor.modifiers.interface import ModifierInterface
@@ -30,6 +32,8 @@ class Modifier(ModifierInterface, HooksMixin):
     :param update: The update step for the modifier
     """
 
+    model_config = ConfigDict(extra="forbid")
+
     index: Optional[int] = None
     group: Optional[str] = None
     start: Optional[float] = None
```
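The `model_config = ConfigDict(extra="forbid")` line is what produces the strict behavior: on a pydantic v2 `BaseModel`, it makes any unrecognized field raise a `ValidationError` instead of being silently ignored, which is how stale recipe keys like `percdamp` now fail loudly. A minimal sketch of the mechanism (the `DemoModifier` class and its fields are illustrative, not the actual llmcompressor API):

```python
from pydantic import BaseModel, ConfigDict, ValidationError


class DemoModifier(BaseModel):
    # Same setting as the commit: reject unknown keyword arguments.
    model_config = ConfigDict(extra="forbid")

    sparsity: float = 0.0
    block_size: int = 128


# Valid construction works as before.
ok = DemoModifier(sparsity=0.5)

# A removed/misspelled field (e.g. the old `percdamp`) now raises
# a ValidationError with error type "extra_forbidden".
try:
    DemoModifier(sparsity=0.5, percdamp=0.01)
except ValidationError as err:
    print(err.errors()[0]["type"])  # extra_forbidden
```

Without `extra="forbid"`, pydantic's default (`extra="ignore"`) would drop the unknown key silently, so a typo'd hyperparameter would simply fall back to its default value.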

tests/llmcompressor/pytorch/modifiers/pruning/constant/test_pytorch.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -145,8 +145,8 @@ def test_constant_pruning_pytorch_is_registered():
     from llmcompressor.modifiers.pruning.constant import ConstantPruningModifier
 
     kwargs = dict(
-        start_epoch=5.0,
-        end_epoch=15.0,
+        start=5.0,
+        end=15.0,
         targets="__ALL_PRUNABLE__",
     )
     setup_modifier_factory()
```

tests/llmcompressor/transformers/finetune/test_alternate_recipe.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@ test_oneshot_stage:
   SparseGPTModifier:
     sparsity: 0.7
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     mask_structure: "0:0"
     targets: ["Linear"]
     ignore: ["re:.*lm_head"]
```

tests/llmcompressor/transformers/obcq/recipes/additional_sparsity.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@ test_stage:
   SparseGPTModifier:
     sparsity: 0.7
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     mask_structure: "0:0"
     targets: ["re:.*model.layers.0$"]
     preserve_sparsity_mask: True
```

tests/llmcompressor/transformers/obcq/recipes/additional_sparsity_with_quant.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@ test_stage:
   SparseGPTModifier:
     sparsity: 0.7
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     mask_structure: "0:0"
     targets: [
       "re:.*model.layers.0$",
```

tests/llmcompressor/transformers/obcq/recipes/quant.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -4,7 +4,7 @@ test_stage:
     smoothing_strength: 0.6
   GPTQModifier:
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     config_groups:
       group_0:
         weights:
```

tests/llmcompressor/transformers/obcq/recipes/quant_and_sparse.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -12,6 +12,6 @@ test_stage:
   SparseGPTModifier:
     sparsity: 0.5
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     mask_structure: "0:0"
     targets: ["re:.*model.layers.0$"]
```

tests/llmcompressor/transformers/obcq/recipes/sparse.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,6 +3,6 @@ test_stage:
   SparseGPTModifier:
     sparsity: 0.3
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     targets: ["model.layers.0", "model.layers.1"]
     mask_structure: "0:0"
```

tests/llmcompressor/transformers/obcq/recipes/sparse_with_mask_structure.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@ test_stage:
   SparseGPTModifier:
     sparsity: 0.5
     block_size: 128
-    percdamp: 0.01
+    dampening_frac: 0.01
     mask_structure: "2:4"
     targets: [
       "re:.*model.layers.0$",
```
