Commit be70558

fix IO format (#1310)
1 parent 0bf02a6 commit be70558

3 files changed: +27 −18 lines

Makefile

Lines changed: 2 additions & 0 deletions

@@ -21,6 +21,8 @@ html:
 	mkdir "$(BUILDDIR)/html/docs/imgs"
 	cp docs/imgs/architecture.png "$(BUILDDIR)/html/docs/imgs/architecture.png"
 	cp docs/imgs/workflow.png "$(BUILDDIR)/html/docs/imgs/workflow.png"
+	cp docs/imgs/INC_GUI.gif "$(BUILDDIR)/html/docs/imgs/INC_GUI.gif"
+	cp docs/imgs/release_data.png "$(BUILDDIR)/html/docs/imgs/release_data.png"
 
 
 # Catch-all target: route all unknown targets to Sphinx using the new
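The two new copy steps above ship extra images into the Sphinx HTML output tree during `make html`. A minimal sketch of what they do, run in a throwaway directory with placeholder files (the `build/` output path here is a stand-in for `$(BUILDDIR)/html`):

```shell
# Sketch of the two copy steps added above, using placeholder files in a
# temporary directory; the real target copies actual images from docs/imgs.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p docs/imgs build/html/docs/imgs
touch docs/imgs/INC_GUI.gif docs/imgs/release_data.png
cp docs/imgs/INC_GUI.gif "build/html/docs/imgs/INC_GUI.gif"
cp docs/imgs/release_data.png "build/html/docs/imgs/release_data.png"
ls build/html/docs/imgs   # lists INC_GUI.gif and release_data.png
```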

README.md

Lines changed: 20 additions & 18 deletions

@@ -12,9 +12,10 @@ Intel® Neural Compressor
 </div>
 
 ---
+<div align="left">
 
-Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, an open-source Python library running on Intel CPUs and GPUs, which delivers unified interfaces across multiple deep learning frameworks for popular network compression technologies, such as quantization, pruning, knowledge distillation. This tool supports automatic accuracy-driven tuning strategies to help user quickly find out the best quantized model. It also implements different weight pruning algorithms to generate pruned model with predefined sparsity goal and supports knowledge distillation to distill the knowledge from the teacher model to the student model.
-Intel® Neural Compressor has been one of the critical AI software components in [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
+Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, is an open-source Python library that runs on Intel CPUs and GPUs. It delivers unified interfaces across multiple deep-learning frameworks for popular network compression technologies such as quantization, pruning, and knowledge distillation. The tool supports automatic accuracy-driven tuning strategies to help users quickly find the best quantized model, implements several weight-pruning algorithms to generate a pruned model with a predefined sparsity goal, and supports knowledge distillation to transfer knowledge from a teacher model to a student model.
+Intel® Neural Compressor is a critical AI software component in the [Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html).
 
 > **Note:**
 > GPU support is under development.
@@ -23,11 +24,11 @@ Intel® Neural Compressor has been one of the critical AI software components in
 
 ## Installation
 
-**Prerequisites**
+#### Prerequisites
 
-- Python version: 3.7 or 3.8 or 3.9 or 3.10
+Python version: 3.7, 3.8, 3.9, or 3.10
 
-**Install on Linux**
+#### Install on Linux
 - Release binary install
 ```Shell
 # install stable basic version from pip
@@ -48,7 +49,7 @@ Intel® Neural Compressor has been one of the critical AI software components in
 More installation methods can be found at [Installation Guide](./docs/installation_guide.md). Please check out our [FAQ](./docs/faq.md) for more details.
 
 ## Getting Started
-* Quantization with Python API
+### Quantization with Python API
 
 ```shell
 # A TensorFlow Example
@@ -66,7 +67,7 @@ dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
 quantizer.calib_dataloader = common.DataLoader(dataset)
 quantizer.fit()
 ```
-* Quantization with [GUI](./docs/bench.md)
+### Quantization with [GUI](./docs/bench.md)
 ```shell
 # An ONNX Example
 pip install onnx==1.12.0 onnxruntime==1.12.1 onnxruntime-extensions
@@ -79,7 +80,7 @@ inc_bench
 <img src="./docs/imgs/INC_GUI.gif" alt="Architecture">
 </a>
 
-* Quantization with [Auto-coding API](./neural_coder/docs/AutoQuant.md) (Experimental)
+### Quantization with [Auto-coding API](./neural_coder/docs/AutoQuant.md) (Experimental)
 
 ```python
 from neural_coder import auto_quant
@@ -95,7 +96,7 @@ auto_quant(
 
 ## System Requirements
 
-Intel® Neural Compressor supports systems based on [Intel 64 architecture or compatible processors](https://en.wikipedia.org/wiki/X86-64), specially optimized for the following CPUs:
+Intel® Neural Compressor supports systems based on [Intel 64 architecture or compatible processors](https://en.wikipedia.org/wiki/X86-64), and is specifically optimized for the following CPUs:
 
 * Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper Lake, and Icelake)
 * Future Intel Xeon Scalable processor (code name Sapphire Rapids)
@@ -143,15 +144,16 @@ Intel® Neural Compressor supports systems based on [Intel 64 architecture or co
 </table>
 
 > **Note:**
-> Please set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable oneDNN optimizations if you are using TensorFlow from v2.6 to v2.8. oneDNN has been fully default from TensorFlow v2.9.
+> Set the environment variable ``TF_ENABLE_ONEDNN_OPTS=1`` to enable oneDNN optimizations if you are using TensorFlow v2.6 to v2.8. oneDNN is enabled by default from TensorFlow v2.9 onward.
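The variable only takes effect if it is set before TensorFlow is imported, since TensorFlow reads it at import time. A minimal sketch of the mechanics (no TensorFlow import is needed to demonstrate the environment step itself):

```python
import os

# Must be set before `import tensorflow` for TF v2.6-v2.8; from TF v2.9
# the oneDNN optimizations are on by default, per the note above.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

print(os.environ["TF_ENABLE_ONEDNN_OPTS"])  # → 1
```

Setting it in the shell before launching Python (`export TF_ENABLE_ONEDNN_OPTS=1`) works the same way.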
 
 ### Validated Models
-Intel® Neural Compressor validated 420+ [examples](./examples) for quantization with performance speedup geomean 2.2x and up to 4.2x on VNNI while minimizing the accuracy loss. And also provided 30+ pruning and knowledge distillation samples.
-More details for validated models are available [here](docs/validated_model_list.md).
+Intel® Neural Compressor validated 420+ [examples](./examples) for quantization, with a geomean performance speedup of 2.2x (up to 4.2x on VNNI) while minimizing accuracy loss. Over 30 pruning and knowledge distillation samples are also available. More details for validated models are available [here](docs/validated_model_list.md).
 
-<a target="_blank" href="./docs/imgs/release_data.png">
-  <img src="./docs/imgs/release_data.png" alt="Architecture" width=800 height=600>
-</a>
+<div style="width: 77%; margin-bottom: 2%;">
+  <a target="_blank" href="./docs/imgs/release_data.png">
+    <img src="./docs/imgs/release_data.png" alt="Architecture" width=800 height=600>
+  </a>
+</div>
 
 ## Documentation
 
@@ -231,7 +233,7 @@ More details for validated models are available [here](docs/validated_model_list
 * [Accelerate AI Inference without Sacrificing Accuracy](https://www.intel.com/content/www/us/en/developer/videos/accelerate-inference-without-sacrificing-accuracy.html#gs.9yottx)
 * [Accelerate Deep Learning with Intel® Extension for TensorFlow*](https://www.intel.com/content/www/us/en/developer/videos/accelerate-deep-learning-with-intel-tensorflow.html#gs.9yrw90)
 
-> Please check out our [full publication list](docs/publication_list.md).
+> View our [full publication list](docs/publication_list.md).
 
 ## Additional Content
 
@@ -241,6 +243,6 @@ More details for validated models are available [here](docs/validated_model_list
 * [Security Policy](docs/security_policy.md)
 * [Intel® Neural Compressor Website](https://intel.github.io/neural-compressor)
 
-## Hiring :star:
+## Hiring
 
-We are actively hiring. Please send your resume to inc.maintainers@intel.com if you have interests in model compression techniques.
+We are actively hiring. Send your resume to inc.maintainers@intel.com if you are interested in model compression techniques.

_static/custom.css

Lines changed: 5 additions & 0 deletions

@@ -11,3 +11,8 @@
 .rst-content tt.literal, .rst-content code.literal {
     color: #000000;
 }
+
+table.docutils th {
+    text-align: center;
+    vertical-align: middle;
+}
