docs/source/FX.md: 2 additions & 2 deletions
@@ -2,7 +2,7 @@ FX
 ====
 1.[Introduction](#introduction)
 2.[FX Mode Support Matrix in Neural Compressor](#fx-mode-support-matrix-in-neural-compressor)
-3.[Get Start](#get-start)
+3.[Get Started](#get-started)

 3.1. [Post Training Static Quantization](#post-training-static-quantization)
@@ -34,7 +34,7 @@ For detailed description, please refer to [PyTorch FX](https://pytorch.org/docs/
 |Quantization-Aware Training |✔|


-## Get Start
+## Get Started

 **Note:** The "backend" field indicates the backend specified by the user in the configuration, and the "default" value means the model will be quantized with the FX backend for PyTorch models.
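As an illustration of the note above, here is a minimal sketch of post-training static quantization with the FX backend, assuming the Neural Compressor 2.x `PostTrainingQuantConfig`/`fit` API; the model and calibration data below are placeholders, not part of this change:

```python
# Minimal sketch, not taken from this PR: assumes the 2.x API where
# backend="default" selects FX graph mode quantization for eager PyTorch models.
import torch
from torch.utils.data import DataLoader, TensorDataset
import torchvision
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

model = torchvision.models.resnet18(weights=None)  # any eager-mode PyTorch model

# Placeholder calibration data; a real workload would use a representative dataset.
calib_data = TensorDataset(torch.randn(8, 3, 224, 224), torch.zeros(8, dtype=torch.long))
calib_loader = DataLoader(calib_data, batch_size=1)

conf = PostTrainingQuantConfig(backend="default")   # "default" -> FX backend
q_model = fit(model=model, conf=conf, calib_dataloader=calib_loader)
```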
 The adaptor only provides the framework API for the tuning strategy, so the complete working flow can be found in [tuning strategy working flow](./tuning_strategies.md).

-## Get Start with Adaptor API
+## Get Started with Adaptor API

 Neural Compressor supports a new adaptor extension by
 implementing a subclass of the `Adaptor` class in the neural_compressor.adaptor package
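To make the extension mechanism mentioned above concrete, the following is a rough sketch of a custom adaptor, assuming the `adaptor_registry` decorator and the base `Adaptor` class live in `neural_compressor.adaptor.adaptor`; the method names and signatures are illustrative rather than a verified contract:

```python
# Rough sketch only: the registry decorator location and the method signatures
# are assumptions for illustration, not confirmed by this diff.
from neural_compressor.adaptor.adaptor import Adaptor, adaptor_registry


@adaptor_registry
class MyBackendAdaptor(Adaptor):
    """Adaptor for a hypothetical 'my_backend' framework."""

    def __init__(self, framework_specific_info):
        super().__init__(framework_specific_info)

    def quantize(self, tune_cfg, model, dataloader, q_func=None):
        # Apply the strategy's tuning config and return a quantized model.
        raise NotImplementedError

    def evaluate(self, model, dataloader, postprocess=None, metric=None,
                 measurer=None, iteration=-1, tensorboard=False):
        # Run inference and return the metric value used by the tuning strategy.
        raise NotImplementedError

    def query_fw_capability(self, model):
        # Report which ops and data types this backend can quantize.
        raise NotImplementedError
```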
 Intel® Neural Compressor has validated quantization for 10K+ models from popular model hubs (e.g., HuggingFace Transformers, Torchvision, TensorFlow Model Hub, ONNX Model Zoo).
 Over 30 pruning, knowledge distillation, and model export samples are also available.