
Commit e2b34de

ChongWei905 authored
docs: renew readmes, add ms/step data to forms and remove development docs (#812)
Co-authored-by: ChongWei905 <weichong4@huawei.com>
1 parent 009aaeb commit e2b34de

File tree: 59 files changed (+710, -1228 lines)


README.md

Lines changed: 0 additions & 1 deletion
@@ -217,7 +217,6 @@ We provide the following jupyter notebook tutorials to help users learn to use M
 - [Finetune a pretrained model on custom datasets](docs/en/tutorials/finetune.md)
 - [Customize your model]() //coming soon
 - [Optimizing performance for vision transformer]() //coming soon
-- [Deployment demo](docs/en/tutorials/deployment.md)

 ## Model List

README_CN.md

Lines changed: 1 addition & 1 deletion
@@ -121,7 +121,7 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg'

 ```shell
 # Distributed training
-# Assuming you have 4 GPU or NPU cards
+# Assuming you have 4 NPU cards
 msrun --bind_core=True --worker_num 4 python train.py --distribute \
     --model densenet121 --dataset imagenet --data_dir ./datasets/imagenet
 ```

benchmark_results.md

Lines changed: 101 additions & 99 deletions
Large diffs are not rendered by default.

configs/README.md

Lines changed: 9 additions & 6 deletions
@@ -33,17 +33,20 @@ Please follow the outline structure and **table format** shown in [densenet/READ

 <div align="center">

-| Model        | Context  | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
-|--------------|----------|-----------|-----------|------------|--------|----------|
-| densenet_121 | D910x8-G | 75.64     | 92.84     | 8.06       | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
+| model       | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download |
+| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------ | -------- |
+| densenet121 | 75.67     | 92.77     | 8.06       | 32         | 8     | 47.34   | O2        | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |

 </div>

 Illustration:
 - Model: model name in lower case with _ separator.
-- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode.
 - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. Keep 2 digits after the decimal point.
 - Params (M): # of model parameters in millions (10^6). Keep **2 digits** after the decimal point.
+- Batch Size: Training batch size.
+- Cards: # of cards (devices) used for training.
+- Ms/step: Time used per training step, in milliseconds.
+- Jit_level: JIT level of the MindSpore context; one of O0/O1/O2.
 - Recipe: Training recipe/configuration linked to a yaml config file.
 - Download: URL of the pretrained model weights.
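For reference, the new `jit_level` column refers to the graph-compilation level of the MindSpore context. The sketch below shows how a benchmark run might set it; it assumes the `jit_config` option of `mindspore.set_context` (available from MindSpore 2.3), and nothing in it is mindcv API:

```python
# Illustrative sketch only (assumes MindSpore 2.3+, where set_context
# accepts a jit_config dict); not part of the mindcv training scripts.
import mindspore as ms

# jit_level is one of "O0"/"O1"/"O2"; higher levels enable more
# graph-level optimization and typically lower ms/step.
ms.set_context(mode=ms.GRAPH_MODE, jit_config={"jit_level": "O2"})

# "params (M)" in the tables is the raw parameter count divided by 1e6;
# for a constructed model, one plausible computation is:
#   num_params_m = sum(p.size for p in model.get_parameters()) / 1e6
```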

@@ -62,10 +65,10 @@ Illustration:

 For consistency, it is recommended to provide distributed training commands based on `msrun --bind_core=True --worker_num {num_devices} python train.py`, instead of using shell script such as `distrubuted_train.sh`.

 ```shell
-# standalone training on a gpu or ascend device
+# standalone training on a single NPU device
 python train.py --config configs/densenet/densenet_121_gpu.yaml --data_dir /path/to/dataset --distribute False

-# distributed training on gpu or ascend divices
+# distributed training on NPU devices
 msrun --bind_core=True --worker_num 8 python train.py --config configs/densenet/densenet_121_ascend.yaml --data_dir /path/to/imagenet
 ```
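Since these tables now report `ms/step`, a note on how such a figure can be measured: average the wall-clock time of steady-state steps and exclude the first steps, which are dominated by graph compilation. The helper below is a hypothetical, framework-agnostic sketch, not a mindcv utility:

```python
import time

def measure_ms_per_step(train_step, num_steps=100, warmup=10):
    """Return the mean wall-clock time per step, in milliseconds.

    `train_step` is any zero-argument callable that runs one training
    step; warmup steps are excluded so that graph compilation and cache
    warm-up do not skew the mean.
    """
    for _ in range(warmup):
        train_step()
    start = time.perf_counter()
    for _ in range(num_steps):
        train_step()
    return (time.perf_counter() - start) * 1000.0 / num_steps

# Usage with a stand-in step (replace with a real training step):
print(f"{measure_ms_per_step(lambda: sum(range(100_000))):.2f} ms/step")
```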

configs/bit/README.md

Lines changed: 9 additions & 15 deletions
@@ -17,25 +17,24 @@ too low. 5) With BiT fine-tuning, good performance can be achieved even if there

 Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode

 *coming soon*

-performance tested on ascend 910(8p) with graph mode
+- ascend 910 with graph mode

 <div align="center">

-| Model        | Top-1 (%) | Top-5 (%) | Params(M) | Batch Size | Recipe | Download |
-| ------------ | --------- | --------- | --------- | ---------- | ------ | -------- |
-| bit_resnet50 | 76.81     | 93.17     | 25.55     | 32         | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
+| model        | top-1 (%) | top-5 (%) | params(M) | batch size | cards | ms/step | jit_level | recipe | download |
+| ------------ | --------- | --------- | --------- | ---------- | ----- | ------- | --------- | ------ | -------- |
+| bit_resnet50 | 76.81     | 93.17     | 25.55     | 32         | 8     | 74.52   | O2        | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |

 </div>

 #### Notes
-
-- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode.
 - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.

 ## Quick Start

@@ -44,7 +43,7 @@ performance tested on ascend 910(8p) with graph mode

 #### Installation

-Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.

 #### Dataset Preparation

@@ -57,11 +56,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201

 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run

 ```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple NPU devices
 msrun --bind_core=True --worker_num 8 python train.py --config configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet
 ```

-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.

 For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).

@@ -72,7 +70,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h

 If you want to train or finetune the model on a smaller dataset without distributed training, please run:

 ```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
 python train.py --config configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/dataset --distribute False
 ```

@@ -84,10 +82,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par

 python validate.py -c configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
 ```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.
-
 ## References

 <!--- Guideline: Citation format should follow GB/T 7714. -->

configs/cmt/README.md

Lines changed: 9 additions & 15 deletions
@@ -14,24 +14,23 @@ on ImageNet-1K dataset.

 Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode

 *coming soon*

-performance tested on ascend 910(8p) with graph mode
+- ascend 910 with graph mode

 <div align="center">

-| Model     | Top-1 (%) | Top-5 (%) | Params(M) | Batch Size | Recipe | Download |
-| --------- | --------- | --------- | --------- | ---------- | ------ | -------- |
-| cmt_small | 83.24     | 96.41     | 26.09     | 128        | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
+| model     | top-1 (%) | top-5 (%) | params(M) | batch size | cards | ms/step | jit_level | recipe | download |
+| --------- | --------- | --------- | --------- | ---------- | ----- | ------- | --------- | ------ | -------- |
+| cmt_small | 83.24     | 96.41     | 26.09     | 128        | 8     | 500.64  | O2        | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |

 </div>

 #### Notes
-
-- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode.
 - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.

 ## Quick Start

@@ -40,7 +39,7 @@ performance tested on ascend 910(8p) with graph mode

 #### Installation

-Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.

 #### Dataset Preparation

@@ -53,11 +52,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201

 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run

 ```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple NPU devices
 msrun --bind_core=True --worker_num 8 python train.py --config configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet
 ```

-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.

 For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).

@@ -68,7 +66,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h

 If you want to train or finetune the model on a smaller dataset without distributed training, please run:

 ```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
 python train.py --config configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/dataset --distribute False
 ```

@@ -80,10 +78,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par

 python validate.py -c configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
 ```

-### Deployment
-
-Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).
-
 ## References

 <!--- Guideline: Citation format should follow GB/T 7714. -->

configs/coat/README.md

Lines changed: 9 additions & 14 deletions
@@ -10,23 +10,23 @@ Co-Scale Conv-Attentional Image Transformer (CoaT) is a Transformer-based image

 Our reproduced model performance on ImageNet-1K is reported as follows.

-performance tested on ascend 910*(8p) with graph mode
+- ascend 910* with graph mode

 *coming soon*

-performance tested on ascend 910(8p) with graph mode
+- ascend 910 with graph mode

 <div align="center">

-| Model     | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Weight |
-| --------- | --------- | --------- | ---------- | ---------- | ------ | ------ |
-| coat_tiny | 79.67     | 94.88     | 5.50       | 32         | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |
+| model     | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | weight |
+| --------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------ | ------ |
+| coat_tiny | 79.67     | 94.88     | 5.50       | 32         | 8     | 254.95  | O2        | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |

 </div>

 #### Notes
-- Context: Training context denoted as {device}x{pieces}-{MS mode}, where mindspore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G is for training on 8 pieces of Ascend 910 NPU using graph mode.
 - Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.

@@ -35,7 +35,7 @@ performance tested on ascend 910(8p) with graph mode

 ### Preparation

 #### Installation
-Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
+Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.

 #### Dataset Preparation
 Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation.

@@ -47,12 +47,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201

 It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run

 ```shell
-# distributed training on multiple GPU/Ascend devices
+# distributed training on multiple NPU devices
 msrun --bind_core=True --worker_num 8 python train.py --config configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/imagenet
 ```

-Similarly, you can train the model on multiple GPU devices with the above `msrun` command.

 For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).

@@ -63,7 +62,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h

 If you want to train or finetune the model on a smaller dataset without distributed training, please run:

 ```shell
-# standalone training on a CPU/GPU/Ascend device
+# standalone training on a single NPU device
 python train.py --config configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/dataset --distribute False
 ```

@@ -75,10 +74,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par

 python validate.py -c configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
 ```

-### Deployment
-
-To deploy online inference services with the trained model efficiently, please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).
-
 ## References

 [1] Han D, Yun S, Heo B, et al. Rethinking channel dimensions for efficient model design[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 732-741.
