</div>
---
<div align="left">
Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, is an open-source Python library that runs on Intel CPUs and GPUs. It delivers unified interfaces across multiple deep-learning frameworks for popular network compression technologies such as quantization, pruning, and knowledge distillation. The tool supports automatic accuracy-driven tuning strategies to help users quickly find the best quantized model, implements several weight-pruning algorithms to generate pruned models that meet a predefined sparsity goal, and supports knowledge distillation from a teacher model to a student model.
Intel® Neural Compressor is a critical AI software component in the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
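
To make the accuracy-driven flow concrete, here is a minimal sketch assuming the 1.x `neural_compressor.experimental` API; `conf.yaml`, the model path, and the output directory are hypothetical placeholders, and the exact interface differs between releases.

```python
# A minimal sketch of accuracy-driven post-training quantization, assuming the
# 1.x experimental API and a user-written conf.yaml that defines the tuning
# strategy, accuracy criterion, and calibration data.
from neural_compressor.experimental import Quantization

quantizer = Quantization("conf.yaml")   # hypothetical config file
quantizer.model = "./fp32_model"        # hypothetical path to the FP32 framework model
q_model = quantizer.fit()               # accuracy-driven tuning; returns the best quantized model found
q_model.save("./int8_model")            # hypothetical output directory
```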
> **Note:**
> GPU support is under development.
## Installation
#### Prerequisites
Python version: 3.7, 3.8, 3.9, or 3.10
#### Install on Linux
- Release binary install
```Shell
# install stable basic version from pip
pip install neural-compressor
```

More installation methods can be found at [Installation Guide](./docs/installation_guide.md). Please check out our [FAQ](./docs/faq.md) for more details.
### Quantization with [Auto-coding API](./neural_coder/docs/AutoQuant.md) (Experimental)
```python
from neural_coder import auto_quant

# Hypothetical example arguments; see the Auto-coding API doc linked above for the full interface.
auto_quant(
    code="run_glue.py",  # hypothetical script to quantize
    args="--model_name_or_path albert-base-v2 --task_name sst2 --do_eval",  # hypothetical launch args
)
```

## System Requirements
Intel® Neural Compressor supports systems based on [Intel 64 architecture or compatible processors](https://en.wikipedia.org/wiki/X86-64), and is specifically optimized for the following CPUs:
* Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper Lake, and Ice Lake)
* Future Intel Xeon Scalable processor (code name Sapphire Rapids)
</table>
> **Note:**
> Set the environment variable `TF_ENABLE_ONEDNN_OPTS=1` to enable oneDNN optimizations if you are using TensorFlow v2.6 to v2.8. oneDNN is enabled by default starting with TensorFlow v2.9.
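
For example, in a Linux shell:

```Shell
# enable oneDNN optimizations for TensorFlow v2.6 to v2.8
export TF_ENABLE_ONEDNN_OPTS=1
```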
### Validated Models
Intel® Neural Compressor has validated 420+ [examples](./examples) for quantization, with a geomean performance speedup of 2.2x (up to 4.2x) on VNNI while minimizing accuracy loss. Over 30 pruning and knowledge distillation samples are also available. More details on the validated models are available [here](docs/validated_model_list.md).
* [Accelerate AI Inference without Sacrificing Accuracy](https://www.intel.com/content/www/us/en/developer/videos/accelerate-inference-without-sacrificing-accuracy.html#gs.9yottx)
* [Accelerate Deep Learning with Intel® Extension for TensorFlow*](https://www.intel.com/content/www/us/en/developer/videos/accelerate-deep-learning-with-intel-tensorflow.html#gs.9yrw90)
> Please check out our [full publication list](docs/publication_list.md).