
Commit 05be605

ClarkChin08 authored and ftian1 committed
[fix] fix pruning bug, fix mxnet dataloader bug, fix pytorch save bug, fix benchmark optimization bug, fix get graph def bug
1 parent 4370dbb commit 05be605

File tree: 22 files changed, +325 −43 lines


examples/helloworld/tf_example1/README.md
Lines changed: 2 additions & 2 deletions

@@ -47,8 +47,8 @@ evaluation: # optional. required if use
 3. Run quantization
 We only need to add the following lines for quantization to create an int8 model.
 ```python
-import lpot
-quantizer = lpot.Quantization('./conf.yaml')
+from lpot.experimental import Quantization, common
+quantizer = Quantization('./conf.yaml')
 quantizer.model = common.Model("./mobilenet_v1_1.0_224_frozen.pb")
 quantized_model = quantizer()
 ```
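Taken together, the migrated flow is only a few lines. Below is a minimal sketch of the new experimental API as used above; the final `save` call is an assumption about the returned model object, not something this diff shows.

```python
# Sketch of the migrated flow (new lpot.experimental API).
from lpot.experimental import Quantization, common

quantizer = Quantization('./conf.yaml')  # tuning/evaluation settings come from the YAML
quantizer.model = common.Model("./mobilenet_v1_1.0_224_frozen.pb")  # wrap the FP32 frozen graph
quantized_model = quantizer()  # calibrate, tune, and return the int8 model

# Assumption: the returned model object can be persisted for later serving.
quantized_model.save('./int8_mobilenet')
```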

examples/helloworld/tf_example1/test.py
Lines changed: 4 additions & 3 deletions

@@ -4,9 +4,10 @@
 import numpy as np
 def main():

-    import lpot
-    quantizer = lpot.Quantization('./conf.yaml')
-    quantized_model = quantizer("./mobilenet_v1_1.0_224_frozen.pb")
+    from lpot.experimental import Quantization, common
+    quantizer = Quantization('./conf.yaml')
+    quantizer.model = common.Model("./mobilenet_v1_1.0_224_frozen.pb")
+    quantized_model = quantizer()

 if __name__ == "__main__":
     main()

examples/helloworld/tf_example2/README.md
Lines changed: 1 addition & 3 deletions

@@ -55,7 +55,6 @@ class Dataset(object):
 ### 3. Define a customized metric
 This customized metric will calculate accuracy.
 ```python
-import lpot
 from lpot.metric import Metric
 class MyMetric(Metric):
     def __init__(self, *args):
@@ -84,8 +83,7 @@ class MyMetric(Metric):
 ```
 ### 4. Use the customized data loader and metric for quantization
 ```python
-import lpot
-quantizer = lpot.Quantization('./conf.yaml')
+quantizer = Quantization('./conf.yaml')
 dataset = Dataset()
 quantizer.metric = common.Metric(MyMetric, 'hello_metric')
 quantizer.calib_dataloader = common.DataLoader(dataset, batch_size=1)
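The hunk shows only the head of the metric class. For readers porting their own metrics, here is a minimal sketch of what a complete accuracy metric might look like; the `update`/`reset`/`result` method names follow the convention visible in these examples, and the counting logic is illustrative rather than copied from this repository.

```python
import numpy as np
from lpot.metric import Metric

class MyMetric(Metric):
    """Illustrative top-1 accuracy metric (not the repository's exact code)."""
    def __init__(self, *args):
        self.correct = 0
        self.total = 0

    def update(self, predict, label):
        # Count top-1 hits in one batch of predictions.
        predict = np.argmax(np.asarray(predict), axis=-1)
        label = np.asarray(label).reshape(predict.shape)
        self.correct += int((predict == label).sum())
        self.total += label.size

    def reset(self):
        self.correct = 0
        self.total = 0

    def result(self):
        return self.correct / self.total if self.total else 0.0
```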

examples/helloworld/tf_example2/test.py
Lines changed: 7 additions & 7 deletions

@@ -16,8 +16,7 @@ def __len__(self):
         return len(self.test_images)

 # Define a customized Metric function
-import lpot
-from lpot.experimental import common
+from lpot.experimental import Quantization, common
 from lpot.metric import BaseMetric
 class MyMetric(BaseMetric):
     def __init__(self, *args):
@@ -45,12 +44,13 @@ def result(self):


 # Quantize with customized dataloader and metric
-quantizer = lpot.Quantization('./conf.yaml')
+quantizer = Quantization('./conf.yaml')
 dataset = Dataset()
-quantizer.metric('helll_metric', MyMetric)
-dataloader = quantizer.dataloader(dataset, batch_size=1)
-q_model = quantizer('../models/simple_model', \
-                    q_dataloader=dataloader, eval_dataloader=dataloader)
+quantizer.metric = common.Metric(MyMetric, 'hello_metric')
+quantizer.calib_dataloader = common.DataLoader(dataset, batch_size=1)
+quantizer.eval_dataloader = common.DataLoader(dataset, batch_size=1)
+quantizer.model = common.Model('../models/saved_model')
+q_model = quantizer()

 # Optional, run quantized model
 import tensorflow as tf
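The hunk cuts off right after the TensorFlow import. A sketch of what "run quantized model" plausibly looks like from here: `q_model.graph_def` appears elsewhere in this commit, but the tensor names `input:0` and `output:0` (and the input shape) are hypothetical placeholders that must be replaced with the real names from the saved model.

```python
import numpy as np
import tensorflow as tf

# Sketch: execute the quantized GraphDef returned by quantizer().
# 'input:0', 'output:0', and the feed shape are hypothetical; inspect
# the graph (e.g. print node names) to find the real ones for this model.
graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(q_model.graph_def, name='')

with tf.compat.v1.Session(graph=graph) as sess:
    inp = graph.get_tensor_by_name('input:0')
    out = graph.get_tensor_by_name('output:0')
    preds = sess.run(out, feed_dict={inp: np.random.rand(1, 28, 28).astype(np.float32)})
    print(preds.shape)
```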

examples/helloworld/tf_example3/README.md
Lines changed: 4 additions & 5 deletions

@@ -30,8 +30,8 @@ quantization: # optional. tuning constrai
           height: 224
           width: 224
 ......
-evaluation:   # optional. required if user doesn't provide eval_func in lpot.Quantization.
-  accuracy:   # optional. required if user doesn't provide eval_func in lpot.Quantization.
+evaluation:   # optional. required if user doesn't provide eval_func in Quantization.
+  accuracy:   # optional. required if user doesn't provide eval_func in Quantization.
     metric:
       topk: 1 # built-in metrics are topk, map, f1, allow user to register new metric.
     dataloader:
@@ -51,9 +51,8 @@ evaluation: # optional. required if use
 3. Run quantization
 * In order to do quantization for slim models, we need to get the graph from the slim .ckpt first.
 ```python
-import lpot
-from lpot import common
-quantizer = lpot.Quantization('./conf.yaml')
+from lpot.experimental import Quantization, common
+quantizer = Quantization('./conf.yaml')

 # Get graph from slim checkpoint
 from tf_slim.nets import inception
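The hunk stops at the slim import. Building the inference graph so the checkpoint weights can be matched to it usually follows the standard tf_slim pattern sketched below; the arg scope, input shape, and `num_classes` are the usual ImageNet slim setup, assumed here rather than taken from this diff.

```python
import tensorflow as tf
import tf_slim as slim
from tf_slim.nets import inception

# Sketch: build the inception_v1 inference graph and restore the
# checkpoint (standard tf_slim pattern; shapes/num_classes assumed).
graph = tf.Graph()
with graph.as_default():
    images = tf.compat.v1.placeholder(tf.float32, [None, 224, 224, 3], name='input')
    with slim.arg_scope(inception.inception_v1_arg_scope()):
        logits, end_points = inception.inception_v1(images, num_classes=1001,
                                                    is_training=False)
    saver = tf.compat.v1.train.Saver()

with tf.compat.v1.Session(graph=graph) as sess:
    saver.restore(sess, './inception_v1.ckpt')
```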

examples/helloworld/tf_example3/test.py
Lines changed: 6 additions & 3 deletions

@@ -6,9 +6,12 @@

 def main():

-    import lpot
-    quantizer = lpot.Quantization('./conf.yaml')
-    quantized_model = quantizer('./inception_v1.ckpt')
+    from lpot.experimental import Quantization, common
+    quantizer = Quantization('./conf.yaml')
+
+    # Do quantization
+    quantizer.model = common.Model('./inception_v1.ckpt')
+    quantized_model = quantizer()


 if __name__ == "__main__":

examples/helloworld/tf_example4/README.md
Lines changed: 1 addition & 2 deletions

@@ -11,8 +11,7 @@ This example is used to demonstrate how to quantize a TensorFlow checkpoint and
 2. Run quantization
 We will create a dummy dataloader and only need to add the following lines for quantization to create an int8 model.
 ```python
-import lpot
-quantizer = lpot.Quantization('./conf.yaml')
+quantizer = Quantization('./conf.yaml')
 dataset = quantizer.dataset('dummy', shape=(100, 100, 100, 3), label=True)
 quantizer.model = common.Model('./model/public/rfcn-resnet101-coco-tf/rfcn_resnet101_coco_2018_01_28/')
 quantizer.calib_dataloader = common.DataLoader(dataset)
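If the built-in `'dummy'` dataset ever falls short, an equivalent hand-rolled calibration dataset is straightforward. The sketch below is hypothetical, not part of this example: any object with `__getitem__` and `__len__` can feed `common.DataLoader`.

```python
import numpy as np
from lpot.experimental import Quantization, common

class DummyDataset(object):
    """Hypothetical stand-in for quantizer.dataset('dummy', ...): random
    inputs plus fake labels, just enough to drive calibration."""
    def __init__(self, shape=(100, 100, 100, 3)):
        # shape[0] samples, each of shape shape[1:]
        self.data = np.random.rand(*shape).astype(np.float32)

    def __getitem__(self, index):
        return self.data[index], 0  # (input, dummy label)

    def __len__(self):
        return self.data.shape[0]

quantizer = Quantization('./conf.yaml')
quantizer.model = common.Model('./model/public/rfcn-resnet101-coco-tf/rfcn_resnet101_coco_2018_01_28/')
quantizer.calib_dataloader = common.DataLoader(DummyDataset())
quantized_model = quantizer()
```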

examples/helloworld/tf_example5/README.md
Lines changed: 6 additions & 6 deletions

@@ -8,8 +8,8 @@ wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobil
 2. Update the root of dataset in conf.yaml
 The configuration will create a TopK metric function for evaluation and configure the batch size, instance number and core number for performance measurement.
 ```yaml
-evaluation:   # optional. required if user doesn't provide eval_func in lpot.Quantization.
-  accuracy:   # optional. required if user doesn't provide eval_func in lpot.Quantization.
+evaluation:   # optional. required if user doesn't provide eval_func in Quantization.
+  accuracy:   # optional. required if user doesn't provide eval_func in Quantization.
     metric:
       topk: 1 # built-in metrics are topk, map, f1, allow user to register new metric.
     dataloader:
@@ -45,8 +45,7 @@ evaluation: # optional. required if use
 3. Run quantization
 We only need to add the following lines for quantization to create an int8 model.
 ```python
-import lpot
-quantizer = lpot.Quantization('./conf.yaml')
+quantizer = Quantization('./conf.yaml')
 quantized_model = quantizer('./mobilenet_v1_1.0_224_frozen.pb')
 ```
 * Run quantization and evaluation:
@@ -57,8 +56,9 @@ We only need to add the following lines for quantization to create an int8 model
 4. Run benchmark according to config
 ```python
 # Optional, run benchmark
-from lpot import Benchmark
+from lpot.experimental import Quantization, Benchmark, common
 evaluator = Benchmark('./conf.yaml')
-results = evaluator(quantized_model)
+evaluator.model = common.Model(quantized_model)
+results = evaluator()

 ```

examples/helloworld/tf_example5/test.py
Lines changed: 7 additions & 6 deletions

@@ -4,15 +4,16 @@
 import numpy as np
 def main():

-    import lpot
-    quantizer = lpot.Quantization('./conf.yaml')
-    model_path = "./mobilenet_v1_1.0_224_frozen.pb"
-    quantized_model = quantizer(model_path)
+    from lpot.experimental import Quantization, common
+    quantizer = Quantization('./conf.yaml')
+    quantizer.model = common.Model("./mobilenet_v1_1.0_224_frozen.pb")
+    quantized_model = quantizer()

     # Optional, run benchmark
-    from lpot import Benchmark
+    from lpot.experimental import Benchmark
     evaluator = Benchmark('./conf.yaml')
-    results = evaluator(quantized_model)
+    evaluator.model = common.Model(quantized_model.graph_def)
+    results = evaluator()
     batch_size = 1
     for mode, result in results.items():
         acc, batch_size, result_list = result
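The hunk ends just after unpacking each result. Turning `result_list` into latency and throughput typically looks like the sketch below; treating `result_list` as per-batch wall-clock times is an assumption about what `evaluator()` returns, so verify it against your lpot version.

```python
import numpy as np

# Sketch: summarize Benchmark output, assuming each entry of result_list
# is the wall-clock time of one batch (an assumption, not confirmed here).
for mode, result in results.items():
    acc, batch_size, result_list = result
    latency = np.asarray(result_list).mean() / batch_size  # seconds per image
    print('{}: accuracy={:.4f}, latency={:.3f} ms, throughput={:.1f} images/s'.format(
        mode, acc, latency * 1000.0, 1.0 / latency))
```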
Lines changed: 64 additions & 0 deletions

@@ -0,0 +1,64 @@
+tf_example5 example
+=====================
+This example is used to demonstrate how to config benchmark in yaml for performance measurement.
+
+1. Download the FP32 model
+wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
+
+2. Update the root of dataset in conf.yaml
+The configuration will create a TopK metric function for evaluation and configure the batch size, instance number and core number for performance measurement.
+```yaml
+evaluation:   # optional. required if user doesn't provide eval_func in Quantization.
+  accuracy:   # optional. required if user doesn't provide eval_func in Quantization.
+    metric:
+      topk: 1 # built-in metrics are topk, map, f1, allow user to register new metric.
+    dataloader:
+      batch_size: 32
+      dataset:
+        ImageRecord:
+          root: /path/to/imagenet/ # NOTE: modify to evaluation dataset location if needed
+      transform:
+        ParseDecodeImagenet:
+        BilinearImagenet:
+          height: 224
+          width: 224
+
+  performance: # optional. used to benchmark performance of passing model.
+    configs:
+      cores_per_instance: 4
+      num_of_instance: 7
+    dataloader:
+      batch_size: 1
+      last_batch: discard
+      dataset:
+        ImageRecord:
+          root: /path/to/imagenet/ # NOTE: modify to evaluation dataset location if needed
+      transform:
+        ParseDecodeImagenet:
+        ResizeCropImagenet:
+          height: 224
+          width: 224
+          mean_value: [123.68, 116.78, 103.94]
+
+```
+
+3. Run quantization
+We only need to add the following lines for quantization to create an int8 model.
+```python
+from lpot import Quantization
+quantizer = Quantization('./conf.yaml')
+quantized_model = quantizer('./mobilenet_v1_1.0_224_frozen.pb')
+```
+* Run quantization and evaluation:
+```shell
+python test.py
+```
+
+4. Run benchmark according to config
+```python
+# Optional, run benchmark
+from lpot import Benchmark
+evaluator = Benchmark('./conf.yaml')
+results = evaluator(quantized_model)
+
+```
