
Commit a0b0707

Docs enhancement (#1032)
Signed-off-by: chensuyue <suyue.chen@intel.com>
1 parent 905eda3

File tree: 38 files changed, +2387 / -117 lines

.azure-pipelines/code-scan-neural-insights.yaml
Lines changed: 0 additions & 1 deletion

```diff
@@ -11,7 +11,6 @@ pr:
       - neural_insights
       - setup.py
       - .azure-pipelines/code-scan-neural-insights.yml
-      - .azure-pipelines/scripts/codeScan

 pool:
   vmImage: "ubuntu-latest"
```

.azure-pipelines/code-scan-neural-solution.yaml
Lines changed: 0 additions & 1 deletion

```diff
@@ -11,7 +11,6 @@ pr:
       - neural_solution
       - setup.py
       - .azure-pipelines/code-scan-neural-solution.yml
-      - .azure-pipelines/scripts/codeScan

 pool:
   vmImage: "ubuntu-latest"
```
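Both pipeline edits drop the `.azure-pipelines/scripts/codeScan` entry from the PR path filter. For orientation, the resulting trigger section would look roughly like the sketch below; every key outside the quoted hunk (`pr.paths.include` nesting aside) is an assumption, not taken from this commit.

```yaml
# Hypothetical post-commit shape of the PR trigger in code-scan-neural-solution.yaml.
# Only the include list is confirmed by the diff above; the rest is illustrative.
pr:
  paths:
    include:
      - neural_solution
      - setup.py
      - .azure-pipelines/code-scan-neural-solution.yml

pool:
  vmImage: "ubuntu-latest"
```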

.azure-pipelines/scripts/codeScan/pyspelling/inc_dict.txt
Lines changed: 1 addition & 0 deletions

```diff
@@ -2490,6 +2490,7 @@ Thalaiyasingam
 Torr
 QOperator
 MixedPrecisionConfig
+mixedprecision
 contrib
 ONNXConfig
 Arial
```

docker/README.md
Lines changed: 3 additions & 3 deletions

````diff
@@ -1,6 +1,6 @@
-## build `Neural Compressor(INC)` Containers:
+## Build Intel Neural Compressor Containers:

-### To build the the `Pip` based deployment container:
+### To build the `Pip` based deployment container:
 Please note that `INC_VER` must be set to a valid version published here:
 https://pypi.org/project/neural-compressor/#history

@@ -12,7 +12,7 @@ $ IMAGE_TAG=${INC_VER}
 $ docker build --build-arg PYTHON=${PYTHON} --build-arg INC_VER=${INC_VER} -f Dockerfile -t ${IMAGE_NAME}:${IMAGE_TAG} .
 ```

-### To build the the `Pip` based development container:
+### To build the `Pip` based development container:
 Please note that `INC_BRANCH` must be a set to a valid branch name otherwise, Docker build fails.
 If `${INC_BRANCH}-devel` does not meet Docker tagging requirements described here:
 https://docs.docker.com/engine/reference/commandline/tag/
````
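The `docker build` command quoted in the hunk above is driven by four shell variables. A minimal sketch of setting them and composing the invocation follows; the Python and INC versions and the image name here are placeholders I chose for illustration, not values from this commit, and the command is printed rather than executed.

```shell
# Placeholder values -- INC_VER must be a real version listed at
# https://pypi.org/project/neural-compressor/#history
PYTHON=python3.10
INC_VER=2.1
IMAGE_NAME=neural-compressor
IMAGE_TAG=${INC_VER}

# Print the build invocation instead of running it, since this is a sketch.
echo "docker build --build-arg PYTHON=${PYTHON} --build-arg INC_VER=${INC_VER} -f Dockerfile -t ${IMAGE_NAME}:${IMAGE_TAG} ."
```

Tagging the image with the package version (`IMAGE_TAG=${INC_VER}`) keeps container tags traceable back to the PyPI release they wrap.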

docs/source/FX.md
Lines changed: 2 additions & 2 deletions

```diff
@@ -2,7 +2,7 @@ FX
 ====
 1. [Introduction](#introduction)
 2. [FX Mode Support Matrix in Neural Compressor](#fx-mode-support-matrix-in-neural-compressor)
-3. [Get Start](#get-start)
+3. [Get Started](#get-started)

    3.1. [Post Training Static Quantization](#post-training-static-quantization)

@@ -34,7 +34,7 @@ For detailed description, please refer to [PyTorch FX](https://pytorch.org/docs/
 |Quantization-Aware Training |&#10004; |


-## Get Start
+## Get Started

 **Note:** "backend" field indicates the backend used by the user in configure. And the "default" value means it will quantization model with fx backend for PyTorch model.
```

docs/source/adaptor.md
Lines changed: 2 additions & 2 deletions

```diff
@@ -3,7 +3,7 @@ Adaptor
 1. [Introduction](#introduction)
 2. [Adaptor Support Matrix](#adaptor-support-matrix)
 3. [Working Flow](#working-flow)
-4. [Get Start with Adaptor API](#get-start-with-adaptor-api)
+4. [Get Started with Adaptor API](#get-start-with-adaptor-api)

    4.1 [Query API](#query-api)

@@ -33,7 +33,7 @@ tuning strategy and vanilla framework quantization APIs.
 ## Working Flow
 Adaptor only provide framework API for tuning strategy. So we can find complete working flow in [tuning strategy working flow](./tuning_strategies.md).

-## Get Start with Adaptor API
+## Get Started with Adaptor API

 Neural Compressor supports a new adaptor extension by
 implementing a subclass `Adaptor` class in the neural_compressor.adaptor package
```

docs/source/dataloader.md
Lines changed: 2 additions & 2 deletions

```diff
@@ -5,7 +5,7 @@ DataLoader

 2. [Supported Framework Dataloader Matrix](#supported-framework-dataloader-matrix)

-3. [Get Start with Dataloader](#get-start-with-dataloader)
+3. [Get Started with Dataloader](#get-start-with-dataloader)

    3.1 [Use Intel® Neural Compressor DataLoader API](#use-intel®-neural-compressor-dataloader-api)

@@ -37,7 +37,7 @@ Of cause, users can also use frameworks own dataloader in Neural Compressor.
 | PyTorch | &#10004; |
 | ONNX Runtime | &#10004; |

-## Get Start with DataLoader
+## Get Started with DataLoader

 ### Use Intel® Neural Compressor DataLoader API
```

docs/source/get_started.md
Lines changed: 0 additions & 12 deletions

````diff
@@ -2,10 +2,6 @@

 1. [Quick Samples](#quick-samples)

-   1.1 [Quantization with Python API](#quantization-with-python-api)
-
-   1.2 [Quantization with JupyterLab Extension](#quantization-with-jupyterlab-extension)
-
 2. [Validated Models](#validated-models)

 ## Quick Samples
@@ -35,14 +31,6 @@ q_model = fit(
     eval_dataloader=dataloader)
 ```

-### Quantization with [JupyterLab Extension](/neural_coder/extensions/neural_compressor_ext_lab/README.md)
-
-Search for ```jupyter-lab-neural-compressor``` in the Extension Manager in JupyterLab and install with one click:
-
-<a target="_blank" href="/neural_coder/extensions/screenshots/extmanager.png">
-<img src="/neural_coder/extensions/screenshots/extmanager.png" alt="Extension" width="35%" height="35%">
-</a>
-
 ## Validated Models
 Intel® Neural Compressor validated the quantization for 10K+ models from popular model hubs (e.g., HuggingFace Transformers, Torchvision, TensorFlow Model Hub, ONNX Model Zoo).
 Over 30 pruning, knowledge distillation and model export samples are also available.
````
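The hunk's context lines quote the Python-API quick sample that survives this deletion, the `fit` post-training quantization flow. A self-contained sketch of that flow is below; the import is guarded because `neural_compressor` may not be installed, and the `float_model`/`dataloader` arguments are placeholders the caller must supply.

```python
# Sketch of the post-training quantization flow referenced in get_started.md.
# Guarded import: neural_compressor may be absent in this environment.
try:
    from neural_compressor.config import PostTrainingQuantConfig
    from neural_compressor.quantization import fit
    HAVE_INC = True
except ImportError:
    HAVE_INC = False


def quantize(float_model, dataloader):
    """Quantize float_model, reusing dataloader for calibration and evaluation."""
    if not HAVE_INC:
        raise RuntimeError("neural-compressor is not installed")
    return fit(
        model=float_model,
        conf=PostTrainingQuantConfig(),  # default post-training static config
        calib_dataloader=dataloader,
        eval_dataloader=dataloader,
    )
```

Passing the same dataloader for calibration and evaluation, as the quoted sample does, is a convenience for quick trials; production runs would typically use separate splits.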

docs/source/installation_guide.md
Lines changed: 15 additions & 15 deletions

```diff
@@ -30,7 +30,7 @@ You can install Neural Compressor using one of three options: Install single com

 The following prerequisites and requirements must be satisfied for a successful installation:

-- Python version: 3.7 or 3.8 or 3.9 or 3.10
+- Python version: 3.7 or 3.8 or 3.9 or 3.10 or 3.11

 > Notes:
 > - If you get some build issues, please check [frequently asked questions](faq.md) at first.
@@ -82,7 +82,7 @@ The AI Kit is distributed through many common channels, including from Intel's w

 The following prerequisites and requirements must be satisfied for a successful installation:

-- Python version: 3.7 or 3.8 or 3.9 or 3.10
+- Python version: 3.7 or 3.8 or 3.9 or 3.10 or 3.11

 ### Install from Binary

@@ -127,7 +127,7 @@ The following prerequisites and requirements must be satisfied for a successful
 ### Validated Software Environment

 * OS version: CentOS 8.4, Ubuntu 22.04
-* Python version: 3.7, 3.8, 3.9, 3.10
+* Python version: 3.7, 3.8, 3.9, 3.10, 3.11

 <table class="docutils">
 <thead>
@@ -148,20 +148,20 @@ The following prerequisites and requirements must be satisfied for a successful
 <td class="tg-7zrl"><a href=https://github.com/tensorflow/tensorflow/tree/v2.12.0>2.12.0</a><br>
 <a href=https://github.com/tensorflow/tensorflow/tree/v2.11.0>2.11.0</a><br>
 <a href=https://github.com/tensorflow/tensorflow/tree/v2.10.1>2.10.1</a><br></td>
-<td class="tg-7zrl"><a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.11.0>2.11.0</a><br>
-<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.10.0>2.10.0</a><br>
-<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.9.1>2.9.1</a><br></td>
-<td class="tg-7zrl"><a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v1.1.0>1.1.0</a><br>
-<a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v1.0.0>1.0.0</a></td>
-<td class="tg-7zrl"><a href=https://download.pytorch.org/whl/torch_stable.html>2.0.0+cpu</a><br>
-<a href=https://download.pytorch.org/whl/torch_stable.html>1.13.0+cpu</a><br>
+<td class="tg-7zrl"><a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.12.0>2.12.0</a><br>
+<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.11.0>2.11.0</a><br>
+<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.10.0>2.10.0</a><br></td>
+<td class="tg-7zrl"><a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v1.2.0>1.2.0</a><br>
+<a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v1.1.0>1.1.0</a></td>
+<td class="tg-7zrl"><a href=https://download.pytorch.org/whl/torch_stable.html>2.0.1+cpu</a><br>
+<a href=https://download.pytorch.org/whl/torch_stable.html>1.13.1+cpu</a><br>
 <a href=https://download.pytorch.org/whl/torch_stable.html>1.12.1+cpu</a><br></td>
-<td class="tg-7zrl"><a href=https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.0+cpu>2.0.0+cpu</a><br>
-<a href=https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.0+cpu>1.13.0+cpu</a><br>
+<td class="tg-7zrl"><a href=https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100+cpu>2.0.1+cpu</a><br>
+<a href=https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu>1.13.1+cpu</a><br>
 <a href=https://github.com/intel/intel-extension-for-pytorch/tree/v1.12.100>1.12.1+cpu</a><br></td>
-<td class="tg-7zrl"><a href=https://github.com/microsoft/onnxruntime/tree/v1.14.1>1.14.1</a><br>
-<a href=https://github.com/microsoft/onnxruntime/tree/v1.13.1>1.13.1</a><br>
-<a href=https://github.com/microsoft/onnxruntime/tree/v1.12.1>1.12.1</a><br></td>
+<td class="tg-7zrl"><a href=https://github.com/microsoft/onnxruntime/tree/v1.15.0>1.15.0</a><br>
+<a href=https://github.com/microsoft/onnxruntime/tree/v1.14.1>1.14.1</a><br>
+<a href=https://github.com/microsoft/onnxruntime/tree/v1.13.1>1.13.1</a><br></td>
 <td class="tg-7zrl"><a href=https://github.com/apache/incubator-mxnet/tree/1.9.1>1.9.1</a><br></td>
 </tr>
 </tbody>
```
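The hunks above extend the validated Python range to 3.7 through 3.11. A small pre-install check along those lines can be sketched as follows; the check itself is an illustration I am adding, not part of the commit, and the install command is left commented.

```shell
# Report whether the local interpreter falls in the validated 3.7-3.11 range
# documented by this commit, before installing from PyPI.
python3 - <<'EOF'
import sys
major, minor = sys.version_info[:2]
ok = (3, 7) <= (major, minor) <= (3, 11)
print(("validated" if ok else "not validated"), f"Python {major}.{minor}")
EOF
# pip install neural-compressor   # uncomment to install the binary release
```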

docs/source/metric.md
Lines changed: 3 additions & 3 deletions

````diff
@@ -11,7 +11,7 @@ Metrics

    2.4. [ONNXRT](#onnxrt)

-3. [Get Start with Metric](#get-start-with-metric)
+3. [Get Started with Metric](#get-start-with-metric)

    3.1. [Use Intel® Neural Compressor Metric API](#use-intel®-neural-compressor-metric-api)

@@ -88,11 +88,11 @@ Neural Compressor supports some built-in metrics that are popularly used in indu


-## Get Start with Metric
+## Get Started with Metric

 ### Use Intel® Neural Compressor Metric API

-Users can specify an Neural Compressor built-in metric such as shown below:
+Users can specify a Neural Compressor built-in metric such as shown below:

 ```python
 from neural_compressor import Metric
````
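The hunk's trailing context quotes the start of the built-in metric sample in metric.md. A guarded sketch of that usage is below; the `topk`/`k=1` metric spec follows the repo's docs, and the import guard is mine so the snippet degrades cleanly where `neural_compressor` is absent.

```python
# Sketch of specifying a built-in Neural Compressor metric (top-1 accuracy).
# Guarded import: neural_compressor may not be installed in this environment.
try:
    from neural_compressor import Metric
    top1 = Metric(name="topk", k=1)  # built-in top-k metric with k=1
    HAVE_INC = True
except ImportError:
    HAVE_INC = False
```

The resulting `Metric` object is what the tuning APIs accept as an evaluation metric in place of a user-written accuracy function.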
