
Commit c6c6bf2

Set strict export explicitly for API change
1 parent 2898903 commit c6c6bf2
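
The change is the same in every file: each call to `torch.export.export` (or `torch.export.export_for_training`) gains an explicit `strict=True`, so the tutorials keep their current capture behavior even if a future release changes the parameter's default. The same defensive pattern can be sketched in plain Python with no torch dependency; `call_with_explicit_strict` and `fake_export` below are illustrative stand-ins, not real torch APIs:

```python
import inspect

def call_with_explicit_strict(export_fn, model, example_inputs, **kwargs):
    # Pin strict=True whenever the export function accepts it, so the
    # result does not silently change if the default flips in a release.
    # (Hypothetical helper for illustration; not part of torch.)
    params = inspect.signature(export_fn).parameters
    if "strict" in params:
        kwargs.setdefault("strict", True)
    return export_fn(model, example_inputs, **kwargs)

# Stand-in for torch.export.export, used only to demonstrate the wrapper.
def fake_export(model, example_inputs, strict=False):
    return {"model": model, "inputs": example_inputs, "strict": strict}

result = call_with_explicit_strict(fake_export, "m", (1, 2))
```

A caller-supplied `strict=False` still wins, since `setdefault` only fills the key when it is absent.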

File tree

6 files changed, 12 insertions(+), 10 deletions(-)


docs/source/quick_start.rst

Lines changed: 1 addition & 1 deletion
@@ -156,7 +156,7 @@ API Example::
     model(image)

     # Step 1. program capture
-    m = export(m, *example_inputs).module()
+    m = export(m, *example_inputs, strict=True).module()
     # we get a model with aten ops

     # Step 2. quantization

docs/source/tutorials_source/pt2e_quant_openvino.rst

Lines changed: 1 addition & 1 deletion
@@ -107,7 +107,7 @@ We will start by performing the necessary imports, capturing the FX Graph from t

     # Capture the FX Graph to be quantized
     with torch.no_grad(), nncf.torch.disable_patching():
-        exported_model = torch.export.export(model, example_inputs).module()
+        exported_model = torch.export.export(model, example_inputs, strict=True).module()


docs/source/tutorials_source/pt2e_quant_ptq.rst

Lines changed: 3 additions & 3 deletions
@@ -66,7 +66,7 @@ The PyTorch 2 export quantization API looks like this:
     # Step 1. program capture
     # This is available for pytorch 2.6+, for more details on lower pytorch versions
     # please check `Export the model with torch.export` section
-    m = torch.export.export(m, example_inputs).module()
+    m = torch.export.export(m, example_inputs, strict=True).module()
     # we get a model with aten ops

@@ -350,7 +350,7 @@ Here is how you can use ``torch.export`` to export the model:

     example_inputs = (torch.rand(2, 3, 224, 224),)
     # for pytorch 2.6+
-    exported_model = torch.export.export(model_to_quantize, example_inputs).module()
+    exported_model = torch.export.export(model_to_quantize, example_inputs, strict=True).module()

     # for pytorch 2.5 and before
     # from torch._export import capture_pre_autograd_graph
@@ -362,7 +362,7 @@ Here is how you can use ``torch.export`` to export the model:
         {0: torch.export.Dim("dim")} if i == 0 else None
         for i in range(len(example_inputs))
     )
-    exported_model = torch.export.export_for_training(model_to_quantize, example_inputs, dynamic_shapes=dynamic_shapes).module()
+    exported_model = torch.export.export_for_training(model_to_quantize, example_inputs, dynamic_shapes=dynamic_shapes, strict=True).module()

     # for pytorch 2.5 and before
     # dynamic_shape API may vary as well

docs/source/tutorials_source/pt2e_quant_qat.rst

Lines changed: 3 additions & 3 deletions
@@ -38,7 +38,7 @@ to the post training quantization (PTQ) flow for the most part:
     # Step 1. program capture
     # This is available for pytorch 2.6+, for more details on lower pytorch versions
     # please check `Export the model with torch.export` section
-    m = torch.export.export(m, example_inputs).module()
+    m = torch.export.export(m, example_inputs, strict=True).module()
     # we get a model with aten ops

     # Step 2. quantization-aware training
@@ -273,7 +273,7 @@ Here is how you can use ``torch.export`` to export the model:

     example_inputs = (torch.rand(2, 3, 224, 224),)
     # for pytorch 2.6+
-    exported_model = torch.export.export(float_model, example_inputs).module()
+    exported_model = torch.export.export(float_model, example_inputs, strict=True).module()
     # for pytorch 2.5 and before
     # from torch._export import capture_pre_autograd_graph
     # exported_model = capture_pre_autograd_graph(model_to_quantize, example_inputs)
@@ -288,7 +288,7 @@ Here is how you can use ``torch.export`` to export the model:
         {0: torch.export.Dim("dim")} if i == 0 else None
         for i in range(len(example_inputs))
     )
-    exported_model = torch.export.export(float_model, example_inputs, dynamic_shapes=dynamic_shapes).module()
+    exported_model = torch.export.export(float_model, example_inputs, dynamic_shapes=dynamic_shapes, strict=True).module()

     # for pytorch 2.5 and before
     # dynamic_shape API may vary as well
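
Both the PTQ and QAT diffs build the `dynamic_shapes` argument the same way: a per-input tuple in which only the first input's batch dimension (dim 0) is marked dynamic and every other input is static (`None`). The comprehension generalizes to any number of example inputs. A torch-free sketch of just the shape-spec construction, where `DIM` is a stand-in for `torch.export.Dim("dim")`:

```python
# Stand-in for torch.export.Dim("dim"); only the spec structure matters here.
DIM = object()

example_inputs = ("input_a", "input_b", "input_c")

# Mark dim 0 of the first input as dynamic; all other inputs stay static.
dynamic_shapes = tuple(
    {0: DIM} if i == 0 else None
    for i in range(len(example_inputs))
)
```

The spec mirrors `example_inputs` positionally, which is why the tutorials derive it with `range(len(example_inputs))` rather than hard-coding its length.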

docs/source/tutorials_source/pt2e_quant_x86_inductor.rst

Lines changed: 3 additions & 2 deletions
@@ -104,7 +104,8 @@ We will start by performing the necessary imports, capturing the FX Graph from t
     # Note: requires torch >= 2.6
     exported_model = export(
         model,
-        example_inputs
+        example_inputs,
+        strict=True
     )

@@ -266,7 +267,7 @@ The PyTorch 2 Export QAT flow is largely similar to the PTQ flow:
     # Step 1. program capture
     # NOTE: this API will be updated to torch.export API in the future, but the captured
     # result shoud mostly stay the same
-    exported_model = export(m, example_inputs)
+    exported_model = export(m, example_inputs, strict=True)
     # we get a model with aten ops

     # Step 2. quantization-aware training

docs/source/tutorials_source/pt2e_quant_xpu_inductor.rst

Lines changed: 1 addition & 0 deletions
@@ -85,6 +85,7 @@ We will start by performing the necessary imports, capturing the FX Graph from t
     exported_model = export(
         model,
         example_inputs,
+        strict=True
     ).module()

