Commit 7de107b

Add docstrings and expose some useful properties for all models (#48)
* Add useful properties to `BaseModel` and docstring for some models
* Update README
* Update README
* Add docstrings
* Update version
* Update README
1 parent e3a9574 commit 7de107b

24 files changed: +1182 -151 lines

README.md

Lines changed: 79 additions & 51 deletions
@@ -27,8 +27,9 @@
 
 ## Latest Updates
 
-2024/05/29:
+2024/06/02:
 
+- Add docstrings for all `kimm` models.
 - Merge reparameterizable layers into 1 `ReparameterizableConv2D`
 - Add `GhostNetV3*` from [huawei-noah/Efficient-AI-Backbones](https://github.com/huawei-noah/Efficient-AI-Backbones)

@@ -49,56 +50,80 @@
 - `kimm.models.*.available_feature_keys`
 - `kimm.models.*(...)`
 - `kimm.models.*(..., feature_extractor=True, feature_keys=[...])`
-- `kimm.utils.get_reparameterized_model`
-- `kimm.export.export_tflite`
-- `kimm.export.export_onnx`
 
 ```python
 import keras
 import kimm
-import numpy as np
 
 # List available models
 print(kimm.list_models("mobileone", weights="imagenet"))
 # ['MobileOneS0', 'MobileOneS1', 'MobileOneS2', 'MobileOneS3']
 
 # Initialize model with pretrained ImageNet weights
-x = keras.random.uniform([1, 224, 224, 3])
+# Note: all `kimm` models expect inputs in the value range of [0, 255] by
+# default if `include_preprocessing=True`
+x = keras.random.uniform([1, 224, 224, 3]) * 255.0
 model = kimm.models.MobileOneS0()
 y = model.predict(x)
 print(y.shape)
 # (1, 1000)
 
-# Get reparameterized model by kimm.utils.get_reparameterized_model
-reparameterized_model = kimm.utils.get_reparameterized_model(model)
-y2 = reparameterized_model.predict(x)
-np.testing.assert_allclose(
-    keras.ops.convert_to_numpy(y), keras.ops.convert_to_numpy(y2), atol=1e-5
-)
-
-# Export model to tflite format
-kimm.export.export_tflite(reparameterized_model, 224, "model.tflite")
-
-# Export model to onnx format (note: must be "channels_first" format)
-# kimm.export.export_onnx(reparameterized_model, 224, "model.onnx")
+# Print some basic information about the model
+print(model)
+# <MobileOneS0 name=MobileOneS0, input_shape=(None, None, None, 3),
+# default_size=224, preprocessing_mode="imagenet", feature_extractor=False,
+# feature_keys=None>
+# This information can also be accessed through properties
+print(model.input_shape, model.default_size, model.preprocessing_mode)
 
 # List available feature keys of the model class
 print(kimm.models.MobileOneS0.available_feature_keys)
 # ['STEM_S2', 'BLOCK0_S4', 'BLOCK1_S8', 'BLOCK2_S16', 'BLOCK3_S32']
 
 # Enable feature extraction by setting `feature_extractor=True`
 # `feature_keys` can be optionally specified
-model = kimm.models.MobileOneS0(
+feature_extractor = kimm.models.MobileOneS0(
     feature_extractor=True, feature_keys=["BLOCK2_S16", "BLOCK3_S32"]
 )
-features = model.predict(x)
+features = feature_extractor.predict(x)
 for feature_name, feature in features.items():
     print(feature_name, feature.shape)
-# BLOCK2_S16 (1, 14, 14, 256)
-# BLOCK3_S32 (1, 7, 7, 1024)
-# TOP (1, 1000)
+# BLOCK2_S16 (1, 14, 14, 256), BLOCK3_S32 (1, 7, 7, 1024), ...
+```
 
+> [!NOTE]
+> All models in `kimm` expect inputs in the value range of [0, 255] by default if `include_preprocessing=True`.
+> Some models only accept static inputs. You should explicitly specify the input shape for these models with `input_shape=[*, *, 3]`.
+
+## Advanced Usage
+
+- `kimm.utils.get_reparameterized_model`
+- `kimm.export.export_tflite`
+- `kimm.export.export_onnx`
+
+```python
+import keras
+import kimm
+import numpy as np
+
+# Initialize a reparameterizable model
+x = keras.random.uniform([1, 224, 224, 3]) * 255.0
+model = kimm.models.MobileOneS0()
+y = model.predict(x)
+
+# Get the reparameterized model with kimm.utils.get_reparameterized_model
+reparameterized_model = kimm.utils.get_reparameterized_model(model)
+y2 = reparameterized_model.predict(x)
+np.testing.assert_allclose(
+    keras.ops.convert_to_numpy(y), keras.ops.convert_to_numpy(y2), atol=1e-3
+)
+
+# Export model to tflite format
+kimm.export.export_tflite(reparameterized_model, 224, "model.tflite")
+
+# Export model to onnx format
+# Note: the model must be in "channels_first" format before exporting
+# kimm.export.export_onnx(reparameterized_model, 224, "model.onnx")
 ```
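As background on what `get_reparameterized_model` does conceptually: reparameterization folds training-time branches into a single inference-time operator, and the classic building block is folding a BatchNorm's statistics into the preceding layer's weights and bias. The sketch below illustrates that folding for a plain linear layer in NumPy; it is a standalone illustration of the math, not kimm's actual implementation, and `fold_batchnorm` is a made-up name for this example:

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma * (wx + b - mean) / sqrt(var + eps) + beta
    into a single affine op y = w_f @ x + b_f."""
    scale = gamma / np.sqrt(var + eps)   # per-output-channel scale
    w_f = w * scale[:, None]             # scale each output row of w
    b_f = (b - mean) * scale + beta      # fold the shift into the bias
    return w_f, b_f

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16))             # weights: 8 outputs, 16 inputs
b = rng.normal(size=8)
gamma, beta = rng.normal(size=8), rng.normal(size=8)
mean, var = rng.normal(size=8), rng.uniform(0.5, 2.0, size=8)
x = rng.normal(size=16)

# Unfolded: linear layer followed by batchnorm (inference mode)
y_ref = gamma * ((w @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
# Folded: a single linear layer producing the same output
w_f, b_f = fold_batchnorm(w, b, gamma, beta, mean, var)
np.testing.assert_allclose(w_f @ x + b_f, y_ref, atol=1e-8)
```

Reparameterized architectures such as MobileOne and RepVGG extend this idea to multi-branch convolutions, which is why the `atol` tolerance in the snippet above is needed when comparing real model outputs.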
 
 ## Installation
@@ -107,6 +132,9 @@ for feature_name, feature in features.items():
 pip install keras kimm -U
 ```
 
+> [!IMPORTANT]
+> Make sure you have installed a supported backend for Keras.
+
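To make the backend note above concrete: Keras 3 selects its backend from the `KERAS_BACKEND` environment variable, read once when `keras` is first imported, so it must be set beforehand. A minimal sketch; `"jax"` here is just one of the supported values:

```python
import os

# Keras 3 reads KERAS_BACKEND at import time; set it before importing keras.
# Supported values include "tensorflow", "jax", and "torch".
os.environ["KERAS_BACKEND"] = "jax"

print(os.environ["KERAS_BACKEND"])
# jax
```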
 ## Quickstart
 
 ### Image classification using the model pretrained on ImageNet
@@ -152,34 +180,34 @@ Reference: [Grad-CAM class activation visualization (keras.io)](https://keras.io
 
 ## Model Zoo
 
-|Model|Paper|Weights are ported from|API|
+|Model|Paper|Weights are ported from|API (`kimm.models.*`)|
 |-|-|-|-|
-|ConvMixer|[ICLR 2022 Submission](https://arxiv.org/abs/2201.09792)|`timm`|`kimm.models.ConvMixer*`|
-|ConvNeXt|[CVPR 2022](https://arxiv.org/abs/2201.03545)|`timm`|`kimm.models.ConvNeXt*`|
-|DenseNet|[CVPR 2017](https://arxiv.org/abs/1608.06993)|`timm`|`kimm.models.DenseNet*`|
-|EfficientNet|[ICML 2019](https://arxiv.org/abs/1905.11946)|`timm`|`kimm.models.EfficientNet*`|
-|EfficientNetLite|[ICML 2019](https://arxiv.org/abs/1905.11946)|`timm`|`kimm.models.EfficientNetLite*`|
-|EfficientNetV2|[ICML 2021](https://arxiv.org/abs/2104.00298)|`timm`|`kimm.models.EfficientNetV2*`|
-|GhostNet|[CVPR 2020](https://arxiv.org/abs/1911.11907)|`timm`|`kimm.models.GhostNet*`|
-|GhostNetV2|[NeurIPS 2022](https://arxiv.org/abs/2211.12905)|`timm`|`kimm.models.GhostNetV2*`|
-|GhostNetV3|[arXiv 2024](https://arxiv.org/abs/2404.11202)|`github`|`kimm.models.GhostNetV3*`|
-|HGNet||`timm`|`kimm.models.HGNet*`|
-|HGNetV2||`timm`|`kimm.models.HGNetV2*`|
-|InceptionNeXt|[arXiv 2023](https://arxiv.org/abs/2303.16900)|`timm`|`kimm.models.InceptionNeXt*`|
-|InceptionV3|[CVPR 2016](https://arxiv.org/abs/1512.00567)|`timm`|`kimm.models.InceptionV3`|
-|LCNet|[arXiv 2021](https://arxiv.org/abs/2109.15099)|`timm`|`kimm.models.LCNet*`|
-|MobileNetV2|[CVPR 2018](https://arxiv.org/abs/1801.04381)|`timm`|`kimm.models.MobileNetV2*`|
-|MobileNetV3|[ICCV 2019](https://arxiv.org/abs/1905.02244)|`timm`|`kimm.models.MobileNetV3*`|
-|MobileOne|[CVPR 2023](https://arxiv.org/abs/2206.04040)|`timm`|`kimm.models.MobileOne*`|
-|MobileViT|[ICLR 2022](https://arxiv.org/abs/2110.02178)|`timm`|`kimm.models.MobileViT*`|
-|MobileViTV2|[arXiv 2022](https://arxiv.org/abs/2206.02680)|`timm`|`kimm.models.MobileViTV2*`|
-|RegNet|[CVPR 2020](https://arxiv.org/abs/2003.13678)|`timm`|`kimm.models.RegNet*`|
-|RepVGG|[CVPR 2021](https://arxiv.org/abs/2101.03697)|`timm`|`kimm.models.RepVGG*`|
-|ResNet|[CVPR 2015](https://arxiv.org/abs/1512.03385)|`timm`|`kimm.models.ResNet*`|
-|TinyNet|[NeurIPS 2020](https://arxiv.org/abs/2010.14819)|`timm`|`kimm.models.TinyNet*`|
-|VGG|[ICLR 2015](https://arxiv.org/abs/1409.1556)|`timm`|`kimm.models.VGG*`|
-|ViT|[ICLR 2021](https://arxiv.org/abs/2010.11929)|`timm`|`kimm.models.VisionTransformer*`|
-|Xception|[CVPR 2017](https://arxiv.org/abs/1610.02357)|`keras`|`kimm.models.Xception`|
+|ConvMixer|[ICLR 2022 Submission](https://arxiv.org/abs/2201.09792)|`timm`|`ConvMixer*`|
+|ConvNeXt|[CVPR 2022](https://arxiv.org/abs/2201.03545)|`timm`|`ConvNeXt*`|
+|DenseNet|[CVPR 2017](https://arxiv.org/abs/1608.06993)|`timm`|`DenseNet*`|
+|EfficientNet|[ICML 2019](https://arxiv.org/abs/1905.11946)|`timm`|`EfficientNet*`|
+|EfficientNetLite|[ICML 2019](https://arxiv.org/abs/1905.11946)|`timm`|`EfficientNetLite*`|
+|EfficientNetV2|[ICML 2021](https://arxiv.org/abs/2104.00298)|`timm`|`EfficientNetV2*`|
+|GhostNet|[CVPR 2020](https://arxiv.org/abs/1911.11907)|`timm`|`GhostNet*`|
+|GhostNetV2|[NeurIPS 2022](https://arxiv.org/abs/2211.12905)|`timm`|`GhostNetV2*`|
+|GhostNetV3|[arXiv 2024](https://arxiv.org/abs/2404.11202)|`github`|`GhostNetV3*`|
+|HGNet||`timm`|`HGNet*`|
+|HGNetV2||`timm`|`HGNetV2*`|
+|InceptionNeXt|[CVPR 2024](https://arxiv.org/abs/2303.16900)|`timm`|`InceptionNeXt*`|
+|InceptionV3|[CVPR 2016](https://arxiv.org/abs/1512.00567)|`timm`|`InceptionV3`|
+|LCNet|[arXiv 2021](https://arxiv.org/abs/2109.15099)|`timm`|`LCNet*`|
+|MobileNetV2|[CVPR 2018](https://arxiv.org/abs/1801.04381)|`timm`|`MobileNetV2*`|
+|MobileNetV3|[ICCV 2019](https://arxiv.org/abs/1905.02244)|`timm`|`MobileNetV3*`|
+|MobileOne|[CVPR 2023](https://arxiv.org/abs/2206.04040)|`timm`|`MobileOne*`|
+|MobileViT|[ICLR 2022](https://arxiv.org/abs/2110.02178)|`timm`|`MobileViT*`|
+|MobileViTV2|[arXiv 2022](https://arxiv.org/abs/2206.02680)|`timm`|`MobileViTV2*`|
+|RegNet|[CVPR 2020](https://arxiv.org/abs/2003.13678)|`timm`|`RegNet*`|
+|RepVGG|[CVPR 2021](https://arxiv.org/abs/2101.03697)|`timm`|`RepVGG*`|
+|ResNet|[CVPR 2015](https://arxiv.org/abs/1512.03385)|`timm`|`ResNet*`|
+|TinyNet|[NeurIPS 2020](https://arxiv.org/abs/2010.14819)|`timm`|`TinyNet*`|
+|VGG|[ICLR 2015](https://arxiv.org/abs/1409.1556)|`timm`|`VGG*`|
+|ViT|[ICLR 2021](https://arxiv.org/abs/2010.11929)|`timm`|`VisionTransformer*`|
+|Xception|[CVPR 2017](https://arxiv.org/abs/1610.02357)|`keras`|`Xception`|
 
 The export scripts can be found in `tools/convert_*.py`.

kimm/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -13,4 +13,4 @@
 from kimm._src.utils.model_registry import list_models
 from kimm._src.version import version
 
-__version__ = "0.2.2"
+__version__ = "0.2.3"
