
Commit eb615ed

update validated fw version and installation method (#1463)
Signed-off-by: chensuyue <suyue.chen@intel.com>
1 parent 098401d · commit eb615ed

File tree

6 files changed: +49, −10003 lines


README.md

Lines changed: 5 additions & 5 deletions
@@ -4,8 +4,8 @@ Intel® Neural Compressor
 ===========================
 <h3> An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)</h3>
 
-[![python](https://img.shields.io/badge/python-3.7%2B-blue)](https://github.com/intel/neural-compressor)
-[![version](https://img.shields.io/badge/release-2.3-green)](https://github.com/intel/neural-compressor/releases)
+[![python](https://img.shields.io/badge/python-3.8%2B-blue)](https://github.com/intel/neural-compressor)
+[![version](https://img.shields.io/badge/release-2.4-green)](https://github.com/intel/neural-compressor/releases)
 [![license](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/intel/neural-compressor/blob/master/LICENSE)
 [![coverage](https://img.shields.io/badge/coverage-85%25-green)](https://github.com/intel/neural-compressor)
 [![Downloads](https://static.pepy.tech/personalized-badge/neural-compressor?period=total&units=international_system&left_color=grey&right_color=green&left_text=downloads)](https://pepy.tech/project/neural-compressor)
@@ -31,7 +31,7 @@ In particular, the tool provides the key features, typical examples, and open co
 ```Shell
 pip install neural-compressor
 ```
-> [!NOTE]
+> **Note**:
 > More installation methods can be found at [Installation Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/installation_guide.md). Please check out our [FAQ](https://github.com/intel/neural-compressor/blob/master/docs/source/faq.md) for more details.
 
 ## Getting Started
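The hunk above keeps the basic `pip install neural-compressor` command. As a quick post-install sanity check, here is a stdlib-only sketch (the distribution name is taken from the README; `installed_version` is an illustrative helper, not an INC API):

```python
# Minimal post-install check using only the standard library (Python 3.8+).
# "neural-compressor" is the PyPI distribution name used in the README above.
from importlib import metadata


def installed_version(dist="neural-compressor"):
    """Return the installed version string for `dist`, or None if absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None


if __name__ == "__main__":
    print(installed_version() or "neural-compressor is not installed")
```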
@@ -139,7 +139,7 @@ q_model = fit(
 </tbody>
 </table>
 
-> [!NOTE]
+> **Note**:
 > More documentations can be found at [User Guide](https://github.com/intel/neural-compressor/blob/master/docs/source/user_guide.md).
 
 ## Selected Publications/Events
@@ -150,7 +150,7 @@ q_model = fit(
 * NeurIPS'2022: [Fast Distilbert on CPUs](https://arxiv.org/abs/2211.07715) (Oct 2022)
 * NeurIPS'2022: [QuaLA-MiniLM: a Quantized Length Adaptive MiniLM](https://arxiv.org/abs/2210.17114) (Oct 2022)
 
-> [!NOTE]
+> **Note**:
 > View [Full Publication List](https://github.com/intel/neural-compressor/blob/master/docs/source/publication_list.md).
 
 ## Additional Content

docs/source/installation_guide.md

Lines changed: 44 additions & 68 deletions
@@ -1,6 +1,6 @@
 # Installation
 
-1. [Linux Installation](#linux-installation)
+1. [Installation](#installation)
 
 1.1. [Prerequisites](#prerequisites)
 
@@ -10,38 +10,39 @@
 
 1.4. [Install from AI Kit](#install-from-ai-kit)
 
-2. [Windows Installation](#windows-installation)
+2. [System Requirements](#system-requirements)
 
-2.1. [Prerequisites](#prerequisites-1)
+2.1. [Validated Hardware Environment](#validated-hardware-environment)
 
-2.2. [Install from Binary](#install-from-binary-1)
+2.2. [Validated Software Environment](#validated-software-environment)
 
-2.3. [Install from Source](#install-from-source-1)
-
-3. [System Requirements](#system-requirements)
-
-3.1. [Validated Hardware Environment](#validated-hardware-environment)
-
-3.2. [Validated Software Environment](#validated-software-environment)
-
-## Linux Installation
+## Installation
 ### Prerequisites
 You can install Neural Compressor using one of three options: Install single component from binary or source, or get the Intel-optimized framework together with the library by installing the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
 
 The following prerequisites and requirements must be satisfied for a successful installation:
 
-- Python version: 3.7 or 3.8 or 3.9 or 3.10 or 3.11
+- Python version: 3.8 or 3.9 or 3.10 or 3.11
 
 > Notes:
 > - If you get some build issues, please check [frequently asked questions](faq.md) at first.
 
 ### Install from Binary
-
+- Install from Pypi
 ```Shell
 # install stable basic version from pypi
 pip install neural-compressor
 ```
+```Shell
+# [Experimental] install stable basic + PyTorch framework extension API from pypi
+pip install neural-compressor[pt]
+```
+```Shell
+# [Experimental] install stable basic + TensorFlow framework extension API from pypi
+pip install neural-compressor[tf]
+```
 
+- Install from test Pypi
 ```Shell
 # install nightly version
 git clone https://github.com/intel/neural-compressor.git
@@ -51,8 +52,15 @@ The following prerequisites and requirements must be satisfied for a successful
 pip install -i https://test.pypi.org/simple/ neural-compressor
 ```
 
+- Install from Conda
 ```Shell
-# install stable basic version from from conda
+# install on Linux OS
+conda install opencv-python-headless -c fastai
+conda install neural-compressor -c conda-forge -c intel
+```
+```Shell
+# install on Windows OS
+conda install pycocotools -c esri
 conda install opencv-python-headless -c fastai
 conda install neural-compressor -c conda-forge -c intel
 ```
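The conda hunk above splits the commands by OS: Windows additionally needs `pycocotools` from the `esri` channel before the shared steps. A hedged sketch that assembles the right command list for a given platform — the command strings are copied from this guide, and `conda_commands` is an illustrative helper, not part of INC:

```python
import platform

# Shared conda install commands from the guide above; Windows prepends the
# pycocotools step from the esri channel.
_COMMON = [
    "conda install opencv-python-headless -c fastai",
    "conda install neural-compressor -c conda-forge -c intel",
]


def conda_commands(system=None):
    """Return the conda commands for `system` (defaults to the current OS)."""
    system = system or platform.system()
    if system == "Windows":
        return ["conda install pycocotools -c esri"] + _COMMON
    return list(_COMMON)
```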
@@ -77,38 +85,6 @@ The AI Kit is distributed through many common channels, including from Intel's w
 |-|-|
 |[Download AI Kit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit/) |[AI Kit Get Started Guide](https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top.html) |
 
-## Windows Installation
-
-### Prerequisites
-
-The following prerequisites and requirements must be satisfied for a successful installation:
-
-- Python version: 3.7 or 3.8 or 3.9 or 3.10 or 3.11
-
-### Install from Binary
-
-```Shell
-# install stable basic version from pypi
-pip install neural-compressor
-```
-
-```Shell
-# install stable basic version from from conda
-conda install pycocotools -c esri
-conda install opencv-python-headless -c fastai
-conda install neural-compressor -c conda-forge -c intel
-```
-
-### Install from Source
-
-```Shell
-git clone https://github.com/intel/neural-compressor.git
-cd neural-compressor
-pip install -r requirements.txt
-# build with basic functionality
-python setup.py install
-```
-
 ## System Requirements
 
 ### Validated Hardware Environment
@@ -128,8 +104,8 @@ The following prerequisites and requirements must be satisfied for a successful
 
 ### Validated Software Environment
 
-* OS version: CentOS 8.4, Ubuntu 22.04
-* Python version: 3.7, 3.8, 3.9, 3.10, 3.11
+* OS version: CentOS 8.4, Ubuntu 22.04, MacOS Ventura 13.5
+* Python version: 3.8, 3.9, 3.10, 3.11
 
 <table class="docutils">
 <thead>
@@ -147,24 +123,24 @@ The following prerequisites and requirements must be satisfied for a successful
 <tbody>
 <tr align="center">
 <th>Version</th>
-<td class="tg-7zrl"> <a href=https://github.com/tensorflow/tensorflow/tree/v2.13.0>2.13.0</a><br>
-<a href=https://github.com/tensorflow/tensorflow/tree/v2.12.1>2.12.1</a><br>
-<a href=https://github.com/tensorflow/tensorflow/tree/v2.11.1>2.11.1</a><br></td>
-<td class="tg-7zrl"> <a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.13.0>2.13.0</a><br>
-<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.12.0>2.12.0</a><br>
-<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.11.0>2.11.0</a><br></td>
-<td class="tg-7zrl"> <a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v2.13.0.0>v2.13.0.0</a><br>
-<a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v1.2.0>1.2.0</a><br>
-<a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v1.1.0>1.1.0</a></td>
-<td class="tg-7zrl"><a href=https://github.com/pytorch/pytorch/tree/v2.0.1>2.0.1+cpu</a><br>
-<a href=https://github.com/pytorch/pytorch/tree/v1.13.1>1.13.1+cpu</a><br>
-<a href=https://github.com/pytorch/pytorch/tree/v1.12.1>1.12.1+cpu</a><br></td>
-<td class="tg-7zrl"><a href=https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100+cpu>2.0.1+cpu</a><br>
-<a href=https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100+cpu>1.13.1+cpu</a><br>
-<a href=https://github.com/intel/intel-extension-for-pytorch/tree/v1.12.100>1.12.1+cpu</a><br></td>
-<td class="tg-7zrl"><a href=https://github.com/microsoft/onnxruntime/tree/v1.15.1>1.15.1</a><br>
-<a href=https://github.com/microsoft/onnxruntime/tree/v1.14.1>1.14.1</a><br>
-<a href=https://github.com/microsoft/onnxruntime/tree/v1.13.1>1.13.1</a><br></td>
+<td class="tg-7zrl"> <a href=https://github.com/tensorflow/tensorflow/tree/v2.15.0>2.15.0</a><br>
+<a href=https://github.com/tensorflow/tensorflow/tree/v2.14.1>2.14.1</a><br>
+<a href=https://github.com/tensorflow/tensorflow/tree/v2.13.0>2.13.0</a><br></td>
+<td class="tg-7zrl"> <a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.14.0>2.14.0</a><br>
+<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.13.0>2.13.0</a><br>
+<a href=https://github.com/Intel-tensorflow/tensorflow/tree/v2.12.0>2.12.0</a><br></td>
+<td class="tg-7zrl"> <a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v2.14.0.1>2.14.0.1</a><br>
+<a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v2.13.0.0>2.13.0.0</a><br>
+<a href=https://github.com/intel/intel-extension-for-tensorflow/tree/v1.2.0>1.2.0</a><br></td>
+<td class="tg-7zrl"><a href=https://github.com/pytorch/pytorch/tree/v2.1.0>2.1.0+cpu</a><br>
+<a href=https://github.com/pytorch/pytorch/tree/v2.0.1>2.0.1+cpu</a><br>
+<a href=https://github.com/pytorch/pytorch/tree/v1.13.1>1.13.1+cpu</a><br></td>
+<td class="tg-7zrl"><a href=https://github.com/intel/intel-extension-for-pytorch/tree/v2.1.0%2Bcpu>2.1.0+cpu</a><br>
+<a href=https://github.com/intel/intel-extension-for-pytorch/tree/v2.0.100%2Bcpu>2.0.1+cpu</a><br>
+<a href=https://github.com/intel/intel-extension-for-pytorch/tree/v1.13.100%2Bcpu>1.13.1+cpu</a><br></td>
+<td class="tg-7zrl"><a href=https://github.com/microsoft/onnxruntime/tree/v1.16.3>1.16.3</a><br>
+<a href=https://github.com/microsoft/onnxruntime/tree/v1.15.1>1.15.1</a><br>
+<a href=https://github.com/microsoft/onnxruntime/tree/v1.14.1>1.14.1</a><br></td>
 <td class="tg-7zrl"><a href=https://github.com/apache/incubator-mxnet/tree/1.9.1>1.9.1</a><br></td>
 </tr>
 </tbody>
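The validated environment in the updated table can be checked mechanically. A hedged stdlib-only sketch — the validated sets are copied from this page, and `parse_version` is an illustrative helper (it only strips a `+cpu` local tag, not a full PEP 440 parser):

```python
import sys

# Validated versions copied from the updated table and bullet list above.
VALIDATED_PYTHON = {(3, 8), (3, 9), (3, 10), (3, 11)}
VALIDATED_ONNXRUNTIME = {"1.16.3", "1.15.1", "1.14.1"}


def parse_version(version):
    """Turn '2.1.0+cpu' into (2, 1, 0), dropping the local '+cpu' tag."""
    return tuple(int(part) for part in version.split("+")[0].split("."))


def python_is_validated(info=None):
    """Check the running (or a given) interpreter against the validated set."""
    info = info or sys.version_info
    return (info[0], info[1]) in VALIDATED_PYTHON
```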

examples/README.md

Lines changed: 0 additions & 2 deletions
@@ -1,7 +1,6 @@
 Examples
 ==========
 Intel® Neural Compressor validated examples with multiple compression techniques, including quantization, pruning, knowledge distillation and orchestration. Part of the validated cases can be found in the example tables, and the release data is available [here](../docs/source/validated_model_list.md).
-> Note: The example marked with `*` means it still use 1.x API.
 
 # Quick Get Started Notebook Examples
 * [Quick Get Started Notebook of Intel® Neural Compressor for ONNXRuntime](/examples/notebook/onnxruntime/Quick_Started_Notebook_of_INC_for_ONNXRuntime.ipynb)
@@ -1508,7 +1507,6 @@ Intel® Neural Compressor validated examples with multiple compression technique
 
 # Notebook Examples
 
-* *[BERT Mini SST2 performance boost with INC](/examples/notebook/bert_mini_distillation): train a BERT-Mini model on SST-2 dataset through distillation, and leverage quantization to accelerate the inference while maintaining the accuracy using Intel® Neural Compressor.
 * [Performance of FP32 Vs. INT8 ResNet50 Model](/examples/notebook/perf_fp32_int8_tf): compare existed FP32 & INT8 ResNet50 model directly.
 * [Intel® Neural Compressor Sample for PyTorch*](/examples/notebook/pytorch/alexnet_fashion_mnist): an End-To-End pipeline to build up a CNN model by PyTorch to recognize fashion image and speed up AI model by Intel® Neural Compressor.
 * [Intel® Neural Compressor Sample for TensorFlow*](/examples/notebook/tensorflow/alexnet_mnist): an End-To-End pipeline to build up a CNN model by TensorFlow to recognize handwriting number and speed up AI model by Intel® Neural Compressor.

0 commit comments