Commit baea8fd: Update README
1 parent 276a1f5

1 file changed: README.md (21 additions, 7 deletions)

@@ -13,13 +13,23 @@

# Keras Image Models

+- [Introduction](#introduction)
+- [Installation](#installation)
+- [Quickstart](#quickstart)
+- [Image classification using the model pretrained on ImageNet](#image-classification-using-the-model-pretrained-on-imagenet)
+- [An end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset](#an-end-to-end-example-fine-tuning-an-image-classification-model-on-a-cats-vs-dogs-dataset)
+- [Grad-CAM](#grad-cam)
+- [Model Zoo](#model-zoo)
+- [License](#license)
+- [Acknowledgements](#acknowledgements)
+
## Introduction

**K**eras **Im**age **M**odels (`kimm`) is a collection of image models, blocks and layers written in Keras 3. The goal is to offer SOTA models with pretrained weights in a user-friendly manner.

KIMM is:

-🚀 A model zoo where almost all models come with pre-trained weights on ImageNet.
+🚀 A model zoo where almost all models come with **pre-trained weights on ImageNet**.

> [!NOTE]
> The accuracy of the converted models can be found at [results-imagenet.csv (timm)](https://github.com/huggingface/pytorch-image-models/blob/main/results/results-imagenet.csv) and [https://keras.io/api/applications/ (keras)](https://keras.io/api/applications/),
@@ -42,7 +52,7 @@ model = kimm.models.RegNetY002(
)
```

-🔥 Integrated with feature extraction capability.
+🔥 Integrated with **feature extraction** capability.

```python
model = kimm.models.ConvNeXtAtto(feature_extractor=True)
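
The feature-extraction example is split across the surrounding hunks; as a rough, self-contained sketch of the same usage, where the dummy input shape is an assumption and the dict-of-features return value is inferred from the `for k, v in y.items():` context line below:

```python
import numpy as np

import kimm

# Build the model in feature-extractor mode, as shown in the diff above.
model = kimm.models.ConvNeXtAtto(feature_extractor=True)

# Dummy batch: one random 224x224 RGB image, channels_last (assumed input size).
x = np.random.uniform(size=(1, 224, 224, 3)).astype("float32")

# Assumption: in this mode predict() returns a dict mapping feature names to
# arrays, which is what the `for k, v in y.items():` loop iterates over.
y = model.predict(x)
for k, v in y.items():
    print(k, v.shape)
```
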
@@ -58,28 +68,32 @@ for k, v in y.items():
```python
# tensorflow backend
keras.backend.set_image_data_format("channels_last")
-model = kimm.models.MobileNet050V3Small()
+model = kimm.models.MobileNetV3W050Small()
kimm.export.export_tflite(model, [224, 224, 3], "model.tflite")
```

```python
# torch backend
keras.backend.set_image_data_format("channels_first")
-model = kimm.models.MobileNet050V3Small()
+model = kimm.models.MobileNetV3W050Small()
kimm.export.export_onnx(model, [3, 224, 224], "model.onnx")
```

> [!IMPORTANT]
> `kimm.export.export_tflite` is currently restricted to `tensorflow` backend and `channels_last`.
> `kimm.export.export_onnx` is currently restricted to `torch` backend and `channels_first`.
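
Not part of kimm itself, but as a quick sanity check that the exported file loads and runs, one might use `onnxruntime` (an external dependency; the CPU provider and the single-output assumption below are mine, not from the README):

```python
import numpy as np
import onnxruntime as ort  # external dependency, not part of kimm

# Load the file written by kimm.export.export_onnx above.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# channels_first dummy input matching the [3, 224, 224] export shape.
x = np.random.uniform(size=(1, 3, 224, 224)).astype(np.float32)
outputs = sess.run(None, {input_name: x})
print(outputs[0].shape)  # e.g. (1, 1000) for an ImageNet classifier
```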

-🔧 Supporting the reparameterization technique.
+🔧 Supporting the **reparameterization** technique.

```python
model = kimm.models.RepVGGA0()
reparameterized_model = kimm.utils.get_reparameterized_model(model)
# or
# reparameterized_model = model.get_reparameterized_model()
+model.summary()
+# Total params: 9,132,616 (34.84 MB)
+reparameterized_model.summary()
+# Total params: 8,309,384 (31.70 MB)
y1 = model.predict(x)
y2 = reparameterized_model.predict(x)
np.testing.assert_allclose(y1, y2, atol=1e-5)
@@ -88,12 +102,12 @@ np.testing.assert_allclose(y1, y2, atol=1e-5)
## Installation

```bash
-pip install keras kimm
+pip install keras kimm -U
```

## Quickstart

-### Image Classification Using the Model Pretrained on ImageNet
+### Image classification using the model pretrained on ImageNet

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/14WxYgVjlwCIO9MwqPYW-dskbTL2UHsVN?usp=sharing)
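
The linked Colab notebook carries the full walkthrough; as a rough, self-contained sketch of the flow this section names, where the model choice, the explicit `weights="imagenet"` argument, the 224x224 input size, built-in input scaling, and the use of `decode_predictions` are all assumptions rather than content taken from the notebook:

```python
import keras
import numpy as np

import kimm

# Any model from the zoo; weights="imagenet" is spelled out here rather than
# relying on a default (an assumption about the constructor signature).
model = kimm.models.ConvNeXtAtto(weights="imagenet")

# Placeholder path; any RGB image works.
image = keras.utils.load_img("path/to/image.jpg", target_size=(224, 224))
x = np.expand_dims(keras.utils.img_to_array(image), axis=0)

# Assumption: input scaling is handled inside the model; if not, apply the
# preprocessing documented for the chosen architecture before predicting.
preds = model.predict(x)
print(keras.applications.imagenet_utils.decode_predictions(preds, top=3))
```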