README.md: 5 additions & 5 deletions

Intel® Neural Compressor
===========================
<h3> An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)</h3>
```Shell
pip install neural-compressor
```
> **Note**:
> More installation methods can be found at [Installation Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/installation_guide.md). Please check out our [FAQ](https://github.com/intel/neural-compressor/blob/master/docs/source/faq.md) for more details.
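After installing, a quick sanity check can be sketched with the standard library alone (the helper name `is_installed` is illustrative, not part of Neural Compressor):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the named module is importable in this environment."""
    return importlib.util.find_spec(module_name) is not None

# The pip package `neural-compressor` installs the `neural_compressor` module,
# so this should print True once the install above has succeeded:
print(is_installed("neural_compressor"))
```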

## Getting Started
</tbody>
</table>

> **Note**:
> More documentation can be found at [User Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/user_guide.md).

## Selected Publications/Events
* NeurIPS'2022: [Fast Distilbert on CPUs](https://arxiv.org/abs/2211.07715) (Oct 2022)
* NeurIPS'2022: [QuaLA-MiniLM: a Quantized Length Adaptive MiniLM](https://arxiv.org/abs/2210.17114) (Oct 2022)
docs/source/installation_guide.md

You can install Neural Compressor using one of three options: install a single component from binary or source, or get the Intel-optimized framework together with the library by installing the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).

The following prerequisites and requirements must be satisfied for a successful installation:
- Python version: 3.8 or 3.9 or 3.10 or 3.11

> Notes:
> - If you run into build issues, please check the [frequently asked questions](faq.md) first.
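The version prerequisite above can be checked programmatically; a minimal sketch, assuming only the stated 3.8–3.11 range (the helper name `is_supported_python` is hypothetical, not a Neural Compressor API):

```python
import sys

def is_supported_python(version=sys.version_info):
    """Return True if the Python version falls in the supported 3.8-3.11 range."""
    major, minor = version[0], version[1]
    return (3, 8) <= (major, minor) <= (3, 11)

# Check the running interpreter before attempting installation.
print(is_supported_python())
```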

### Install from Binary
- Install from Pypi
```Shell
# install stable basic version from pypi
pip install neural-compressor
```
```Shell
# [Experimental] install stable basic + PyTorch framework extension API from pypi
pip install neural-compressor[pt]
```
```Shell
# [Experimental] install stable basic + TensorFlow framework extension API from pypi
pip install neural-compressor[tf]
```
|-|-|
|[Download AI Kit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit/)|[AI Kit Get Started Guide](https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top.html)|
examples/README.md: 0 additions & 2 deletions
Examples
==========
Intel® Neural Compressor validated examples with multiple compression techniques, including quantization, pruning, knowledge distillation and orchestration. Part of the validated cases can be found in the example tables, and the release data is available [here](../docs/source/validated_model_list.md).
# Quick Get Started Notebook Examples
* [Quick Get Started Notebook of Intel® Neural Compressor for ONNXRuntime](/examples/notebook/onnxruntime/Quick_Started_Notebook_of_INC_for_ONNXRuntime.ipynb)
* [BERT Mini SST2 performance boost with INC](/examples/notebook/bert_mini_distillation): train a BERT-Mini model on the SST-2 dataset through distillation, and leverage quantization to accelerate inference while maintaining accuracy using Intel® Neural Compressor.
* [Performance of FP32 Vs. INT8 ResNet50 Model](/examples/notebook/perf_fp32_int8_tf): directly compare existing FP32 and INT8 ResNet50 models.
* [Intel® Neural Compressor Sample for PyTorch*](/examples/notebook/pytorch/alexnet_fashion_mnist): an end-to-end pipeline to build a CNN model with PyTorch to recognize fashion images, and speed up the model with Intel® Neural Compressor.
* [Intel® Neural Compressor Sample for TensorFlow*](/examples/notebook/tensorflow/alexnet_mnist): an end-to-end pipeline to build a CNN model with TensorFlow to recognize handwritten digits, and speed up the model with Intel® Neural Compressor.