
Commit 55da45e

committed: doc update to use APIs in lpot.experimental rather than lpot

1 parent 9f9ef47 · commit 55da45e

77 files changed: +190 −173 lines changed

README.md

Lines changed: 2 additions & 0 deletions

````diff
@@ -482,6 +482,8 @@ The MSE tuning strategy does not work with the PyTorch adaptor layer. This strat
 
 [LPOT v1.2](https://github.com/intel/lpot/tree/v1.2) introduces incompatible changes in user-facing APIs. Please refer to [incompatible changes](./docs/incompatible_changes.md) to see which incompatible changes were made in v1.2.
 
+[LPOT v1.2.1](https://github.com/intel/lpot/tree/v1.2.1) solves the backward-compatibility issues introduced in v1.2 by moving the new user-facing APIs to the lpot.experimental package and keeping the old ones as-is. Please refer to [introduction](./docs/introduction.md) for the details of the user-facing APIs.
+
 # Support
 
 Submit your questions, feature requests, and bug reports to the
````

docs/benchmark.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -21,7 +21,7 @@ evaluation: # optional. required if use
       size: 256
     CenterCrop:
       size: 224
-    ToTensor:
+    ToTensor: {}
     Normalize:
       mean: [0.485, 0.456, 0.406]
       std: [0.229, 0.224, 0.225]
@@ -39,7 +39,7 @@ evaluation: # optional. required if use
       size: 256
     CenterCrop:
       size: 224
-    ToTensor:
+    ToTensor: {}
     Normalize:
       mean: [0.485, 0.456, 0.406]
 ```
@@ -52,7 +52,7 @@ In this case, you should config your dataloader and lpot will construct an evalu
 
 ```python
 dataset = Dataset()  # dataset class that implements the __getitem__ or __iter__ method
-from lpot import Benchmark, common
+from lpot.experimental import Benchmark, common
 evaluator = Benchmark(config.yaml)
 evaluator.dataloader = common.DataLoader(dataset, batch_size=batch_size)
 # user can also register postprocess and metric; this is optional
````
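
The documented snippet stops where the hunk does. For orientation, below is a minimal end-to-end sketch of the experimental benchmark flow. The `RandomDataset` class and model path are illustrative, and the `model` attribute is an assumption here, mirroring the property this commit adds to `Benchmark` in docs/introduction.md; only the `Benchmark`, `common.DataLoader`, and `dataloader` lines come from the snippet above.

```python
# A sketch only, not the documented example: wiring a custom dataset into
# the experimental Benchmark API shown in the diff above.
import numpy as np
from lpot.experimental import Benchmark, common

class RandomDataset(object):
    """Illustrative dataset yielding (sample, label) pairs."""
    def __init__(self, length=64):
        self.length = length

    def __getitem__(self, idx):
        # a single sample plus a dummy label (0 for the label-free case)
        return np.random.rand(224, 224, 3).astype('float32'), 0

    def __len__(self):
        return self.length

evaluator = Benchmark('config.yaml')               # yaml controls warmup, iterations, etc.
evaluator.dataloader = common.DataLoader(RandomDataset(), batch_size=32)
evaluator.model = common.Model('/path/to/model')   # assumed attribute; see docs/introduction.md
results = evaluator()                              # runs the benchmark configured in yaml
```
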

docs/introduction.md

Lines changed: 47 additions & 23 deletions

````diff
@@ -8,10 +8,25 @@ Intel® Low Precision Optimization Tool is an open-source Python library designe
 
 The API is intended to unify low-precision quantization interfaces across multiple DL frameworks for the best out-of-the-box experience.
 
-The API consists of the below components:
+> **NOTE**
+>
+> LPOT keeps improving the user-facing APIs for a better user experience.
+>
+> There are now two sets of user-facing APIs. One is the default set, supported since LPOT v1.0 for backward compatibility; the other is the new set of APIs in the lpot.experimental package.
+> We recommend users adopt the APIs in lpot.experimental. All examples have been updated to use these experimental APIs.
+>
+> The major differences between the default user-facing APIs and the experimental APIs are:
+> 1. The experimental APIs abstract the `lpot.experimental.common.Model` concept to cover cases in which the weight and graph files are stored separately.
+> 2. The experimental APIs unify the calling style of the `Quantization`, `Pruning`, and `Benchmark` classes by setting the model, calibration dataloader, evaluation dataloader, and metric through class attributes rather than passing them as function inputs.
+> 3. The experimental APIs refine the LPOT built-in transforms/datasets/metrics by unifying the APIs across different framework backends.
+
+## Experimental user-facing APIs
+
+The experimental user-facing APIs consist of the below components:
 
 ### quantization-related APIs
 ```python
+# lpot.experimental.Quantization
 class Quantization(object):
     def __init__(self, conf_fname):
         ...
````
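
Difference 2 in the note above is easiest to see side by side. The sketch below contrasts the two calling styles using the exact shapes this commit applies in docs/tutorial.md; only the juxtaposition is editorial, and the model path is the tutorial's.

```python
# Default (v1.0/v1.1-style) APIs: objects are passed as function inputs.
import lpot
quantizer = lpot.Quantization('./conf.yaml')
model = quantizer.model('./mobilenet_v1_1.0_224_frozen.pb')
q_model = quantizer(model)

# Experimental (lpot.experimental) APIs: the same objects are set as class attributes.
from lpot.experimental import Quantization, common
quantizer = Quantization('./conf.yaml')
quantizer.model = common.Model('./mobilenet_v1_1.0_224_frozen.pb')
q_model = quantizer()
```
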
````diff
@@ -48,7 +63,7 @@ class Quantization(object):
         ...
 
 ```
-The `conf_fname` parameter used in the class initialization is the path to the user yaml configuration file. This is a yaml file that is used to control the entire tuning behavior.
+The `conf_fname` parameter used in the class initialization is the path to the user yaml configuration file. This is a yaml file that is used to control the entire tuning behavior on the model.
 
 > **LPOT User YAML Syntax**
 >
@@ -58,14 +73,15 @@ The `conf_fname` parameter used in the class initialization is the path to user
 
 ```python
 # Typical launcher code
-from lpot import Quantization, common
+from lpot.experimental import Quantization, common
 
 # optional if an LPOT built-in dataset can be used as model input in yaml
 class dataset(object):
     def __init__(self, *args):
         ...
 
     def __getitem__(self, idx):
+        # return a single (sample, label) tuple without collation; the label should be 0 for the label-free case
         ...
 
     def __len__(self):
@@ -77,9 +93,13 @@ class custom_metric(object):
         ...
 
     def update(self, predict, label):
+        # metric update per mini-batch
         ...
 
     def result(self):
+        # final metric calculation, invoked only once after all mini-batches are evaluated
+        # returns a scalar to lpot for accuracy-driven tuning
+        # the scalar is higher-is-better by default; if not, set tuning.accuracy_criterion.higher_is_better to false in yaml
         ...
 
 quantizer = Quantization(conf.yaml)
````
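
To make the `update()`/`result()` contract annotated above concrete, here is a minimal accuracy metric that follows those comments. The class name is illustrative; the shape mirrors the `MyMetric` class this commit updates in docs/tutorial.md.

```python
import numpy as np

class AccuracyMetric(object):
    """Minimal custom-metric sketch: accumulate per mini-batch, return one scalar."""
    def __init__(self, *args):
        self.correct = 0
        self.total = 0

    def update(self, predict, label):
        # metric update per mini-batch
        pred = np.argmax(np.asarray(predict), axis=-1)
        label = np.asarray(label)
        self.correct += int((pred == label).sum())
        self.total += label.size

    def result(self):
        # invoked once after all mini-batches; higher-is-better by default
        return self.correct / self.total
```

It would then be registered through `quantizer.metric = common.Metric(AccuracyMetric)`, the same pattern this commit uses in docs/tutorial.md.
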
````diff
@@ -96,7 +116,7 @@ q_model = quantizer()
 q_model.save('/path/to/output/dir')
 ```
 
-The `model` attribute in the `Quantization` class is an abstraction of model formats across different frameworks. LPOT supports passing the path of a `keras model`, `frozen pb`, `checkpoint`, `saved model`, `torch.nn.model`, `mxnet.symbol.Symbol`, `gluon.HybridBlock`, or `onnx model` to instantiate a `lpot.common.Model()` class and set it to `quantizer.model`.
+The `model` attribute in the `Quantization` class is an abstraction of model formats across different frameworks. LPOT supports passing the path of a `keras model`, `frozen pb`, `checkpoint`, `saved model`, `torch.nn.model`, `mxnet.symbol.Symbol`, `gluon.HybridBlock`, or `onnx model` to instantiate a `lpot.experimental.common.Model()` class and set it to `quantizer.model`.
 
 The `calib_dataloader` and `eval_dataloader` attributes in the `Quantization` class are used to set up the calibration and evaluation dataloaders by code. They are optional to set if the user sets the corresponding fields in yaml.
 
@@ -130,33 +150,14 @@ class Pruning(object):
     def __call__(self):
         ...
 
-    @property
-    def calib_dataloader(self):
-        ...
-
-    @property
-    def eval_dataloader(self):
-        ...
-
     @property
     def model(self):
         ...
 
-    @property
-    def metric(self):
-        ...
-
-    @property
-    def postprocess(self, user_postprocess):
-        ...
-
     @property
     def q_func(self):
         ...
 
-    @property
-    def eval_func(self):
-        ...
 ```
 
 This API is used to do sparsity pruning. Currently it is a proof of concept; LPOT only supports `magnitude pruning` on PyTorch.
````
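
No pruning launcher appears in this diff, so the sketch below is for orientation only: a launcher consistent with the properties kept above. The conf path and model are illustrative, and the attribute-style wiring is inferred from the class definition rather than taken from a documented example.

```python
# Hypothetical magnitude-pruning launcher (PyTorch, per the PoC note above).
import torchvision
from lpot.experimental import Pruning, common

prune = Pruning('./prune_conf.yaml')                        # illustrative yaml path
prune.model = common.Model(torchvision.models.resnet18())
# the q_func property kept above presumably takes a user training function; omitted here
pruned_model = prune()
```
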
````diff
@@ -171,9 +172,32 @@ class Benchmark(object):
 
     def __call__(self):
         ...
+
+    @property
+    def model(self):
+        ...
+
+    @property
+    def metric(self):
+        ...
+
+    @property
+    def b_dataloader(self):
+        ...
+
+    @property
+    def postprocess(self, user_postprocess):
+        ...
 ```
 
 This API is used to measure the model performance and accuracy.
 
 For how to use this API, please refer to the [Benchmark Document](./benchmark.md).
 
+## Default user-facing APIs
+
+The default user-facing APIs remain for backward compatibility with the v1.0 release. Users can refer to the [v1.1 API](https://github.com/intel/lpot/blob/v1.1/docs/introduction.md) to understand how the default user-facing APIs work.
+
+A [HelloWorld example](../examples/helloworld/tf_example6) using the default user-facing APIs is provided for user reference.
+
+Full examples using the default user-facing APIs can be found [here](https://github.com/intel/lpot/tree/v1.1/examples).
````

docs/tuning_strategies.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -49,7 +49,7 @@ quantization: # optional. tuning constrai
     scale_propagation_concat: True            # optional. default value is True.
     first_conv_or_matmul_quantization: True   # optional. default value is True.
   calibration:
-    sampling_size: 1000, 2000                 # optional. default value is the size of the whole dataset. used to set how much of the calibration dataset is used. exclusive with the iterations field.
+    sampling_size: 1000, 2000                 # optional. default value is 100. used to set how many samples should be used in calibration.
     dataloader:                               # optional. if not specified, the user needs to construct a q_dataloader in code for lpot.Quantization.
       dataset:
         TFRecordDataset:
````

docs/tutorial.md

Lines changed: 14 additions & 15 deletions

````diff
@@ -27,18 +27,18 @@ LPOT has added built-in supports on popular dataloader/dataset and metric to eas
 
 LPOT also supports registering a custom dataset and a custom metric by code.
 
-As for the model, LPOT abstracts a common API, named [lpot.common.Model](../lpot/common/model.py), to cover the case in which the model, weights, and other necessary info are stored separately. Please refer to [model](./model.md) to learn how to use it.
+As for the model, LPOT abstracts a common API, named [lpot.experimental.common.Model](../lpot/experimental/common/model.py), to cover the case in which the model, weights, and other necessary info are stored separately. Please refer to [model](./model.md) to learn how to use it.
 
 Postprocess is treated as a special transform by LPOT and is only needed when the model output mismatches the expected input of LPOT built-in metrics. If the user is using a custom metric, no postprocess is needed, as the custom metric implementation must ensure it can handle the model output correctly; in that case, the postprocess logic becomes part of the custom metric implementation.
 
 Below is an example of how to enable LPOT on TensorFlow mobilenet_v1 with a built-in dataloader, dataset, and metric.
 
 ```python
 # main.py
-import lpot
-quantizer = lpot.Quantization('./conf.yaml')
-model = quantizer.model("./mobilenet_v1_1.0_224_frozen.pb")
-quantized_model = quantizer(model)
+from lpot.experimental import Quantization, common
+quantizer = Quantization('./conf.yaml')
+quantizer.model = common.Model("./mobilenet_v1_1.0_224_frozen.pb")
+quantized_model = quantizer()
 ```
 
 ```yaml
@@ -82,8 +82,7 @@ If user wants to use a dataset or metric which does not support by LPOT built-in
 
 ```python
 # main.py
-import lpot
-from lpot.metric import BaseMetric
+from lpot.experimental import Quantization, common
 
 class Dataset(object):
     def __init__(self):
@@ -97,7 +96,7 @@ class Dataset(object):
         return len(self.test_images)
 
 # Define a customized Metric function
-class MyMetric(BaseMetric):
+class MyMetric(object):
     def __init__(self, *args):
         self.pred_list = []
         self.label_list = []
@@ -116,19 +115,19 @@ class MyMetric(BaseMetric):
         return correct_num / self.samples
 
 # Quantize with customized dataloader and metric
-quantizer = lpot.Quantization('./conf.yaml')
+quantizer = Quantization('./conf.yaml')
 dataset = Dataset()
-quantizer.metric = lpot.common.Metric(MyMetric)
-quantizer.calib_dataloader = lpot.common.DataLoader(dataset, batch_size=1)
-quantizer.eval_dataloader = lpot.common.DataLoader(dataset, batch_size=1)
-quantizer.model = lpot.common.Model('../models/simple_model')
+quantizer.metric = common.Metric(MyMetric)
+quantizer.calib_dataloader = common.DataLoader(dataset, batch_size=1)
+quantizer.eval_dataloader = common.DataLoader(dataset, batch_size=1)
+quantizer.model = common.Model('../models/simple_model')
 q_model = quantizer()
 ```
 Note:
 
-In the customized dataset, the `__getitem__()` interface must be implemented. It returns the (image, label) pair.
+In the customized dataset, the `__getitem__()` interface must be implemented and must return a single sample and label. In this example, it returns the (image, label) pair. The user could return (image, 0) for the label-free case.
 
-In the customized metric, the update() function records the prediction result of each mini-batch, and the result() function is invoked by LPOT at the end of evaluation to return a higher-is-better scalar that reflects model accuracy.
+In the customized metric, the update() function records the prediction result of each mini-batch, and the result() function is invoked by LPOT at the end of evaluation to return a scalar that reflects model accuracy. By default, this scalar is higher-is-better. If the scalar returned from the customized metric is a lower-is-better value, `tuning.accuracy_criterion.higher_is_better` in yaml should be set to `False`.
 
 ```yaml
 # conf.yaml
````
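
As a small illustration of the label-free note above, a dataset can simply pair each image with a dummy 0 label. This class is an editorial sketch, not part of the tutorial.

```python
class LabelFreeDataset(object):
    """Sketch of a label-free dataset: __getitem__ returns (image, 0)."""
    def __init__(self, images):
        self.images = images            # e.g. a list of preprocessed numpy arrays

    def __getitem__(self, idx):
        return self.images[idx], 0      # dummy label for the label-free case

    def __len__(self):
        return len(self.images)
```
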

examples/helloworld/tf_example1/README.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -10,12 +10,12 @@ The configuration will create a dataloader of Imagenet and it will do Bilinear r
 ```yaml
 quantization:                            # optional. tuning constraints on model-wise for advanced user to reduce tuning space.
   calibration:
-    sampling_size: 20                    # optional. default value is the size of the whole dataset. used to set how much of the calibration dataset is used. exclusive with the iterations field.
+    sampling_size: 20                    # optional. default value is 100. used to set how many samples should be used in calibration.
     dataloader:
       batch_size: 1
       dataset:
         ImageRecord:
-          root: <DATASET>/TF_imagenet/val/  # NOTE: modify to calibration dataset location if needed
+          root: <DATASET>/TF_imagenet/val/            # NOTE: modify to calibration dataset location if needed
       transform:
         ParseDecodeImagenet:
         BilinearImagenet:
@@ -35,7 +35,7 @@ evaluation: # optional. required if use
       batch_size: 32
       dataset:
         ImageRecord:
-          root: <DATASET>/TF_imagenet/val/  # NOTE: modify to evaluation dataset location if needed
+          root: <DATASET>/TF_imagenet/val/            # NOTE: modify to evaluation dataset location if needed
       transform:
         ParseDecodeImagenet:
         BilinearImagenet:
````

examples/helloworld/tf_example1/conf.yaml

Lines changed: 1 addition & 1 deletion

````diff
@@ -19,7 +19,7 @@ model: # mandatory. lpot uses this
 
 quantization:                            # optional. tuning constraints on model-wise for advanced user to reduce tuning space.
   calibration:
-    sampling_size: 20                    # optional. default value is the size of the whole dataset. used to set how much of the calibration dataset is used. exclusive with the iterations field.
+    sampling_size: 20                    # optional. default value is 100. used to set how many samples should be used in calibration.
     dataloader:
       batch_size: 1
       dataset:
````

examples/helloworld/tf_example2/README.md

Lines changed: 1 addition & 2 deletions

````diff
@@ -55,8 +55,7 @@ class Dataset(object):
 ### 3. Define a customized metric
 This customized metric will calculate accuracy.
 ```python
-from lpot.metric import Metric
-class MyMetric(Metric):
+class MyMetric(object):
     def __init__(self, *args):
         self.pred_list = []
         self.label_list = []
````

examples/helloworld/tf_example3/README.md

Lines changed: 4 additions & 17 deletions

````diff
@@ -18,12 +18,12 @@ The configuration will help user to create a dataloader of Imagenet and it will
 ```yaml
 quantization:                            # optional. tuning constraints on model-wise for advanced user to reduce tuning space.
   calibration:
-    sampling_size: 20, 50                # optional. default value is the size of the whole dataset. used to set how much of the calibration dataset is used. exclusive with the iterations field.
+    sampling_size: 20, 50                # optional. default value is 100. used to set how many samples should be used in calibration.
     dataloader:
       batch_size: 10
       dataset:
         ImageRecord:
-          root: /path/to/imagenet/  # NOTE: modify to calibration dataset location if needed
+          root: /path/to/imagenet/            # NOTE: modify to calibration dataset location if needed
       transform:
         ParseDecodeImagenet:
         BilinearImagenet:
@@ -39,7 +39,7 @@ evaluation: # optional. required if use
       last_batch: discard
       dataset:
         ImageRecord:
-          root: /path/to/imagenet/  # NOTE: modify to evaluation dataset location if needed
+          root: /path/to/imagenet/            # NOTE: modify to evaluation dataset location if needed
       transform:
         ParseDecodeImagenet:
         BilinearImagenet:
@@ -51,22 +51,9 @@ evaluation: # optional. required if use
 3. Run quantization
 * In order to do quantization for slim models, we need to get the graph from the slim .ckpt first.
 ```python
-  from lpot.experimental import Quantization, common
+from lpot.experimental import Quantization, common
 quantizer = Quantization('./conf.yaml')
 
-# Get graph from slim checkpoint
-from tf_slim.nets import inception
-model_func = inception.inception_v1
-arg_scope = inception.inception_v1_arg_scope()
-kwargs = {'num_classes': 1001}
-inputs_shape = [None, 224, 224, 3]
-images = tf.compat.v1.placeholder(name='input', \
-                                  dtype=tf.float32, shape=inputs_shape)
-
-from lpot.adaptor.tf_utils.util import get_slim_graph
-graph = get_slim_graph('./inception_v1.ckpt', model_func, \
-                       arg_scope, images, **kwargs)
-
 # Do quantization
 quantizer.model = common.Model('./inception_v1.ckpt')
 quantized_model = quantizer()
````

examples/helloworld/tf_example3/conf.yaml

Lines changed: 1 addition & 1 deletion

````diff
@@ -19,7 +19,7 @@ model: # mandatory. lpot uses this
 
 quantization:                            # optional. tuning constraints on model-wise for advanced user to reduce tuning space.
   calibration:
-    sampling_size: 20                    # optional. default value is the size of the whole dataset. used to set how much of the calibration dataset is used. exclusive with the iterations field.
+    sampling_size: 20                    # optional. default value is 100. used to set how many samples should be used in calibration.
     dataloader:
       batch_size: 10
       dataset:
````
