
🎨 On the road to 0.3.0: Adding shift evaluation & more #117


Merged
73 commits merged on Oct 22, 2024
b61dcd1
:bug: Always use TUTrainer
o-laurent Oct 3, 2024
f91c3d9
:computer: Sync all epoch level metric logs
o-laurent Oct 3, 2024
5888b75
:shirt: AURC & AUGRC in %
o-laurent Oct 3, 2024
034bb41
:bug: Add correct classes of CamVid
alafage Oct 4, 2024
031c041
:shirt: Improve datamodules
o-laurent Oct 4, 2024
47eaac3
:bug: Fix Camvid img size
o-laurent Oct 4, 2024
372a702
:sparkles: Add CamVid class grouping & deeplab config
o-laurent Oct 4, 2024
ab61752
:shirt: Finish standardizing WideResNet defaults
o-laurent Oct 4, 2024
b07a954
:sparkles: Add Focal Loss
o-laurent Oct 4, 2024
34b79d6
:racehorse: Avoid computing some metrics twice
o-laurent Oct 4, 2024
12db3a3
:sparkles: Show RMSE at the end of the validation in regression
o-laurent Oct 4, 2024
6f4b9dd
:shirt: Start transforms by ToTensor
o-laurent Oct 4, 2024
b9ab4ea
:shirt: Cutout full torch implementation
o-laurent Oct 4, 2024
3469031
:white_check_mark: Update tests
o-laurent Oct 4, 2024
4c2b858
:zap: Update dependencies
o-laurent Oct 7, 2024
a96eba9
:heavy_minus_sign: Make glest optional
o-laurent Oct 7, 2024
00ca077
:heavy_minus_sign: Make scipy and cv2 optional
o-laurent Oct 7, 2024
3bf8069
:bug: Small img transforms fixes
o-laurent Oct 7, 2024
58ec315
:sparkles: Add shift transforms
o-laurent Oct 7, 2024
d25fef0
:heavy_plus_sign: Add Wand as optional dependency
o-laurent Oct 7, 2024
75ff336
:bug: Fix RNGs
o-laurent Oct 7, 2024
c1c0272
:hammer: Add other implementations for PackedLinear
alafage Oct 8, 2024
95c1a54
:sparkles: Start adding --eval-shift
o-laurent Oct 8, 2024
74b72c2
Merge branch 'dev' of github.com:ENSTA-U2IS-AI/torch-uncertainty into…
o-laurent Oct 8, 2024
327e092
:fire: Update TinyImagenet exp
o-laurent Oct 9, 2024
539cf60
:shirt: Small tutorial change
o-laurent Oct 9, 2024
2110367
:bug: Make parsers inherit methods
o-laurent Oct 9, 2024
9df886a
:shirt: Rename severity to shift severity
o-laurent Oct 9, 2024
585a0b1
:construction: Add that eval shift is implemented in classification only
o-laurent Oct 9, 2024
bbd9081
:shirt: Print shift severity
o-laurent Oct 9, 2024
38c9d87
:fire: Remove useless corruption tests
o-laurent Oct 9, 2024
4c592b6
:white_check_mark: Fix test
o-laurent Oct 9, 2024
5a05d4b
:books: Fix bad link to tutorials in Quickstart page.
alafage Oct 10, 2024
51f9cd8
:shirt: Display mAcc in percentage
alafage Oct 11, 2024
8301d7f
:hammer: Add color_palette property to segmentation datasets
alafage Oct 11, 2024
daec73d
:sparkles: Plot segmentation results in logger
alafage Oct 11, 2024
ca6eafe
:wrench: Update config file for segformer on CamVid
alafage Oct 11, 2024
7e7933a
:bug: Fix DummySegmentationDataModule
alafage Oct 11, 2024
3435604
:shirt: Add TU trainer, CLI & datamodule to root init
o-laurent Oct 17, 2024
8ce51dc
:fire: Simplify imports
o-laurent Oct 17, 2024
7a69549
:shirt: Fix doc warning
o-laurent Oct 21, 2024
6d4400f
:bug: Fix most corruption transforms
o-laurent Oct 21, 2024
43bab57
:white_check_mark: Fix the remaining use of List
o-laurent Oct 21, 2024
22de687
:sparkles: Add first version of the corrupted ds. wrapper
o-laurent Oct 21, 2024
099d7d9
:bug: Fix Elastic transform
o-laurent Oct 22, 2024
c96aa44
:heavy_plus_sign: Add seaborn as dependency
alafage Oct 22, 2024
a9bfa52
:hammer: Improve calibration error plot function
alafage Oct 22, 2024
0c8519f
:arrow_up: Upgrade glest
alafage Oct 22, 2024
3ebc3e2
Merge branch 'dev' of github.com:ENSTA-U2IS-AI/torch-uncertainty into…
o-laurent Oct 22, 2024
063ba49
:bug: Fix tests for the calibration error plot method
alafage Oct 22, 2024
1181ab3
:heavy_check_mark: Add corruption tests
o-laurent Oct 22, 2024
7a7ddcf
Merge branch 'dev' of github.com:ENSTA-U2IS-AI/torch-uncertainty into…
o-laurent Oct 22, 2024
972a01a
:bug: Fix & add Zoomblur
o-laurent Oct 22, 2024
8e5f2e8
:heavy_check_mark: Add focal loss tests
o-laurent Oct 22, 2024
db17969
:heavy_check_mark: Add eval shift tests
o-laurent Oct 22, 2024
c22c258
Merge branch 'dev' of github.com:ENSTA-U2IS-AI/torch-uncertainty into…
o-laurent Oct 22, 2024
ce93a5c
:heavy_check_mark: Improve datamodule coverage
o-laurent Oct 22, 2024
42d2c90
:bug: Fix PackedLinear implementation feature
alafage Oct 22, 2024
0026c80
:heavy_check_mark: Test `bias=False` in PackedLinear
alafage Oct 22, 2024
81f3fb7
Merge branch 'dev' of github.com:ENSTA-U2IS-AI/torch-uncertainty into…
alafage Oct 22, 2024
48df99f
Merge branch 'dev' of github.com:ENSTA-U2IS-AI/torch-uncertainty into…
o-laurent Oct 22, 2024
4ed1c29
:bug: Use fake ds for MNIST OOD
o-laurent Oct 22, 2024
dfc7a0f
:heavy_check_mark: Test `basic_augment=False` in relevant datamodules
alafage Oct 22, 2024
d42efbc
:heavy_check_mark: Test `group_classes=False` in CamVid
alafage Oct 22, 2024
b9d7e52
:ok_hand: Take comments into account
o-laurent Oct 22, 2024
56a03ca
:heavy_check_mark: Test implementation failure cases in PackedLinear
alafage Oct 22, 2024
37dba26
Merge branch 'dev' of github.com:ENSTA-U2IS-AI/torch-uncertainty into…
o-laurent Oct 22, 2024
f0d84a4
:zap: Update cache action to v4
o-laurent Oct 22, 2024
405043e
:wrench: Try a fix for the docs
o-laurent Oct 22, 2024
4697d3f
:bug: Make wand really optional
o-laurent Oct 22, 2024
cd041ff
:zap: Update version for release
o-laurent Oct 22, 2024
1379b26
:bug: Make scipy really optional
o-laurent Oct 22, 2024
f38d7be
:wrench: Add supported python version flags
o-laurent Oct 22, 2024
1 change: 1 addition & 0 deletions .gitignore
@@ -3,6 +3,7 @@
data/
logs/
lightning_logs/
auto_tutorials_source/*.png
docs/*/generated/
docs/*/auto_tutorials/
*.pth
10 changes: 5 additions & 5 deletions auto_tutorials_source/tutorial_bayesian.py
@@ -27,7 +27,7 @@

To train a BNN using TorchUncertainty, we have to load the following modules:

- the Trainer from Lightning
- our TUTrainer
- the model: bayesian_lenet, which lies in the torch_uncertainty.model
- the classification training routine from torch_uncertainty.routines
- the Bayesian objective: the ELBOLoss, which lies in the torch_uncertainty.losses file
@@ -39,9 +39,9 @@
# %%
from pathlib import Path

from lightning.pytorch import Trainer
from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
@@ -65,12 +65,12 @@ def optim_lenet(model: nn.Module):
# 3. Creating the necessary variables
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# In the following, we define the Lightning trainer, the root of the datasets and the logs.
# In the following, we instantiate our trainer, define the root of the datasets and the logs.
# We also create the datamodule that handles the MNIST dataset, dataloaders and transforms.
# Please note that the datamodules can also handle OOD detection by setting the eval_ood
# parameter to True. Finally, we create the model using the blueprint from torch_uncertainty.models.

trainer = Trainer(accelerator="cpu", enable_progress_bar=False, max_epochs=1)
trainer = TUTrainer(accelerator="cpu", enable_progress_bar=False, max_epochs=1)

# datamodule
root = Path("data")
@@ -111,7 +111,7 @@ def optim_lenet(model: nn.Module):
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Now that we have prepared all of this, we just have to gather everything in
# the main function and to train the model using the Lightning Trainer.
# the main function and to train the model using our wrapper of the Lightning Trainer.
# Specifically, it needs the routine, which includes the model as well as the
# training/eval logic, and the datamodule.
# The dataset will be downloaded automatically in the root/data folder, and the
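For readers skimming this diff, here is a minimal sketch of the workflow the tutorial now builds, assembled from the signatures visible above; the Adam recipe stands in for the tutorial's optim_lenet helper, and the hyperparameters are illustrative.

from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.models.lenet import bayesian_lenet
from torch_uncertainty.routines import ClassificationRoutine

datamodule = MNISTDataModule(root="data", batch_size=128)
model = bayesian_lenet(in_channels=1, num_classes=10)
# The ELBO objective combines an inner likelihood loss with a KL regularizer.
loss = ELBOLoss(model=model, inner_loss=nn.CrossEntropyLoss(), kl_weight=1e-5, num_samples=3)
routine = ClassificationRoutine(
    model=model,
    num_classes=10,
    loss=loss,
    optim_recipe=optim.Adam(model.parameters(), lr=1e-3),
)
trainer = TUTrainer(accelerator="cpu", max_epochs=1, enable_progress_bar=False)
trainer.fit(routine, datamodule)
trainer.test(routine, datamodule)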
80 changes: 64 additions & 16 deletions auto_tutorials_source/tutorial_corruption.py
@@ -12,23 +12,35 @@
torchvision and matplotlib.
"""
# %%
from torchvision.datasets import CIFAR10
from torchvision.transforms import Compose, ToTensor, Resize
from torchvision.transforms import Compose, ToTensor, Resize, CenterCrop

import matplotlib.pyplot as plt
from PIL import Image
from urllib import request

ds = CIFAR10("./data", train=False, download=True)
urls = [
"https://upload.wikimedia.org/wikipedia/commons/d/d9/Carduelis_tristis_-Michigan%2C_USA_-male-8.jpg",
"https://upload.wikimedia.org/wikipedia/commons/5/5d/Border_Collie_Blanca_y_Negra_Hembra_%28Belen%2C_Border_Collie_Los_Baganes%29.png",
"https://upload.wikimedia.org/wikipedia/commons/f/f8/Birmakatze_Seal-Point.jpg",
"https://upload.wikimedia.org/wikipedia/commons/a/a9/Garranos_fight.jpg",
"https://upload.wikimedia.org/wikipedia/commons/8/8b/Cottontail_Rabbit.jpg",
]

def download_img(url, i):
request.urlretrieve(url, f"tmp_{i}.png")
return Image.open(f"tmp_{i}.png").convert('RGB')

images_ds = [download_img(url, i) for i, url in enumerate(urls)]


def get_images(main_corruption, index: int = 0):
"""Create an image showing the 6 levels of corruption of a given transform."""
images = []
for severity in range(6):
ds_transforms = Compose(
[ToTensor(), main_corruption(severity), Resize(256, antialias=True)]
transforms = Compose(
[Resize(256, antialias=True), CenterCrop(256), ToTensor(), main_corruption(severity), CenterCrop(224)]
)
ds = CIFAR10("./data", train=False, download=False, transform=ds_transforms)
images.append(ds[index][0].permute(1, 2, 0).numpy())
images.append(transforms(images_ds[index]).permute(1, 2, 0).numpy())
return images


@@ -65,49 +65,85 @@ def show_images(transforms):
GaussianNoise,
ShotNoise,
ImpulseNoise,
SpeckleNoise,
)

show_images(
[
GaussianNoise,
ShotNoise,
ImpulseNoise,
SpeckleNoise,
]
)

# %%
# 2. Blur Corruptions
# ~~~~~~~~~~~~~~~~~~~~
from torch_uncertainty.transforms.corruption import (
GaussianBlur,
MotionBlur,
GlassBlur,
DefocusBlur,
ZoomBlur,
)

show_images(
[
GaussianBlur,
GlassBlur,
MotionBlur,
DefocusBlur,
ZoomBlur,
]
)

# %%
# 3. Other Corruptions
# ~~~~~~~~~~~~~~~~~~~~
# 3. Weather Corruptions
# ~~~~~~~~~~~~~~~~~~~~~~
from torch_uncertainty.transforms.corruption import (
JPEGCompression,
Pixelate,
Frost,
Snow,
Fog,
)

show_images(
[
Fog,
Frost,
Snow,
]
)

# %%
# 4. Other Corruptions
# ~~~~~~~~~~~~~~~~~~~~~

from torch_uncertainty.transforms.corruption import (
    Brightness,
    Contrast,
    Elastic,
    JPEGCompression,
    Pixelate,
)

show_images(
[
Brightness,
Contrast,
JPEGCompression,
Pixelate,
Frost,
Elastic,
]
)

# %%
# 5. Unused Corruptions
# ~~~~~~~~~~~~~~~~~~~~~

# The following corruptions are not used in the paper Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.

from torch_uncertainty.transforms.corruption import (
GaussianBlur,
SpeckleNoise,
Saturation,
)

show_images(
[
GaussianBlur,
SpeckleNoise,
Saturation,
]
)

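Outside the tutorial's plotting helper, a single corruption can also be applied directly to an image tensor. A minimal sketch, assuming the severity is passed positionally as in the main_corruption(severity) calls above:

import torch

from torch_uncertainty.transforms.corruption import GaussianNoise

img = torch.rand(3, 224, 224)  # stand-in for a ToTensor()-converted RGB image in [0, 1]
corrupted = GaussianNoise(3)(img)  # severity ranges from 0 (identity) to 5 (strongest)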
6 changes: 3 additions & 3 deletions auto_tutorials_source/tutorial_der_cubic.py
@@ -21,7 +21,7 @@

To train a MLP with the DER loss function using TorchUncertainty, we have to load the following modules:

- the Trainer from Lightning
- our TUTrainer
- the model: mlp from torch_uncertainty.models.mlp
- the regression training routine from torch_uncertainty.routines
- the evidential objective: the DERLoss from torch_uncertainty.losses. This loss contains the classic NLL loss and a regularization term.
@@ -31,10 +31,10 @@
"""
# %%
import torch
from lightning.pytorch import Trainer
from lightning import LightningDataModule
from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.models.mlp import mlp
from torch_uncertainty.datasets.regression.toy import Cubic
from torch_uncertainty.losses import DERLoss
@@ -67,7 +67,7 @@ def optim_regression(
# Please note that this MLP finishes with a NormalInverseGammaLayer that interprets the outputs of the model
# as the parameters of a Normal Inverse Gamma distribution.

trainer = Trainer(accelerator="cpu", max_epochs=50) #, enable_progress_bar=False)
trainer = TUTrainer(accelerator="cpu", max_epochs=50) #, enable_progress_bar=False)

# dataset
train_ds = Cubic(num_samples=1000)
9 changes: 4 additions & 5 deletions auto_tutorials_source/tutorial_evidential_classification.py
@@ -16,7 +16,7 @@

To train a LeNet with the DEC loss function using TorchUncertainty, we have to load the following utilities from TorchUncertainty:

- the Trainer from Lightning
- our wrapper of the Lightning Trainer
- the model: LeNet, which lies in torch_uncertainty.models
- the classification training routine in the torch_uncertainty.routines
- the evidential objective: the DECLoss from torch_uncertainty.losses
@@ -28,9 +28,9 @@
from pathlib import Path

import torch
from lightning.pytorch import Trainer
from torch import nn, optim

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.losses import DECLoss
from torch_uncertainty.models.lenet import lenet
@@ -53,10 +53,9 @@ def optim_lenet(model: nn.Module) -> dict:
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# In the following, we need to define the root of the logs.
# fake-parse the arguments needed for using the PyTorch Lightning Trainer. We
# also use the same MNIST classification example as that used in the
# We use the same MNIST classification example as that used in the
# original DEC paper. We only train for 3 epochs for the sake of time.
trainer = Trainer(accelerator="cpu", max_epochs=3, enable_progress_bar=False)
trainer = TUTrainer(accelerator="cpu", max_epochs=3, enable_progress_bar=False)

# datamodule
root = Path() / "data"
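A hedged sketch of the DEC objective on dummy outputs; the default constructor and the (outputs, targets) call convention are assumptions inferred from the tutorial, not a documented contract.

import torch

from torch_uncertainty.losses import DECLoss

criterion = DECLoss()  # annealing/regularization arguments omitted (assumed optional)
evidence = torch.relu(torch.randn(8, 10))  # non-negative evidence for 8 samples, 10 classes
targets = torch.randint(0, 10, (8,))
loss = criterion(evidence, targets)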
4 changes: 2 additions & 2 deletions auto_tutorials_source/tutorial_from_de_to_pe.py
@@ -149,7 +149,7 @@ def optim_recipe(model, lr_mult: float = 1.0):


from torch_uncertainty.routines import ClassificationRoutine
from torch_uncertainty.utils import TUTrainer
from torch_uncertainty import TUTrainer

# Create the trainer that will handle the training
trainer = TUTrainer(accelerator="cpu", max_epochs=max_epochs)
@@ -242,7 +242,7 @@ def optim_recipe(model, lr_mult: float = 1.0):
# We have put the pre-trained models on Hugging Face; you can download them with the utility function
# "hf_hub_download" imported just below. These models are trained for 75 epochs and are therefore not
# comparable to the all the other models trained in this notebook. The pretrained models can be seen
# on `HuggingFace <https://huggingface.co/ENSTA-U2IS/tutorial-models>`_ and TorchUncertainty's are `here <https://huggingface.co/torch-uncertainty>`_.
# on `HuggingFace <https://huggingface.co/ENSTA-U2IS/tutorial-models>`_ and TorchUncertainty's are `there <https://huggingface.co/torch-uncertainty>`_.

from torch_uncertainty.utils.hub import hf_hub_download

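A sketch of fetching one of those checkpoints, assuming torch_uncertainty.utils.hub.hf_hub_download mirrors the huggingface_hub function of the same name; the filename below is purely hypothetical.

from torch_uncertainty.utils.hub import hf_hub_download

# Hypothetical filename -- check the Hugging Face repository for the real checkpoint names.
ckpt_path = hf_hub_download(repo_id="ENSTA-U2IS/tutorial-models", filename="model.ckpt")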
6 changes: 3 additions & 3 deletions auto_tutorials_source/tutorial_mc_batch_norm.py
@@ -13,7 +13,7 @@

First, we have to load the following utilities from TorchUncertainty:

- the Trainer from Lightning
- our TUTrainer
- the datamodule handling dataloaders: MNISTDataModule from torch_uncertainty.datamodules
- the model: LeNet, which lies in torch_uncertainty.models
- the MC Batch Normalization wrapper: mc_batch_norm, which lies in torch_uncertainty.post_processing
@@ -25,9 +25,9 @@
# %%
from pathlib import Path

from lightning import Trainer
from torch import nn

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.models.lenet import lenet
from torch_uncertainty.optim_recipes import optim_cifar10_resnet18
@@ -41,7 +41,7 @@
# logs. We also create the datamodule that handles the MNIST dataset
# dataloaders and transforms.

trainer = Trainer(accelerator="cpu", max_epochs=2, enable_progress_bar=False)
trainer = TUTrainer(accelerator="cpu", max_epochs=2, enable_progress_bar=False)

# datamodule
root = Path("data")
2 changes: 1 addition & 1 deletion auto_tutorials_source/tutorial_mc_dropout.py
@@ -31,7 +31,7 @@
# %%
from pathlib import Path

from torch_uncertainty.utils import TUTrainer
from torch_uncertainty import TUTrainer
from torch import nn

from torch_uncertainty.datamodules import MNISTDataModule
6 changes: 6 additions & 0 deletions docs/source/api.rst
@@ -320,6 +320,12 @@ Losses
ELBOLoss
BetaNLL
DECLoss
DERLoss
FocalLoss
ConflictualLoss
ConfidencePenaltyLoss
KLDiv

Post-Processing Methods
-----------------------
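Among the entries added to this list, FocalLoss is new in this PR. A hedged usage sketch, assuming the standard focal-loss parametrisation (Lin et al., 2017) with a gamma keyword; check the generated API page for the exact signature.

import torch

from torch_uncertainty.losses import FocalLoss

criterion = FocalLoss(gamma=2.0)  # gamma > 0 down-weights well-classified examples
logits = torch.randn(8, 10)  # batch of 8 samples, 10 classes
targets = torch.randint(0, 10, (8,))
loss = criterion(logits, targets)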
2 changes: 1 addition & 1 deletion docs/source/cli_guide.rst
@@ -22,7 +22,7 @@ Let's see how to implement the CLI, by checking out the ``experiments/classifica

from torch_uncertainty.baselines.classification import ResNetBaseline
from torch_uncertainty.datamodules import CIFAR10DataModule
from torch_uncertainty.utils import TULightningCLI
from torch_uncertainty import TULightningCLI


class ResNetCLI(TULightningCLI):
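With the import moved to the package root, a complete CLI entry point stays short. This sketch mirrors the experiments/classification/cifar10/resnet.py file shown later in this diff:

import torch
from lightning.pytorch.cli import LightningArgumentParser

from torch_uncertainty import TULightningCLI
from torch_uncertainty.baselines.classification import ResNetBaseline
from torch_uncertainty.datamodules import CIFAR10DataModule


class ResNetCLI(TULightningCLI):
    def add_arguments_to_parser(self, parser: LightningArgumentParser) -> None:
        super().add_arguments_to_parser(parser)
        parser.add_optimizer_args(torch.optim.SGD)
        parser.add_lr_scheduler_args(torch.optim.lr_scheduler.MultiStepLR)


if __name__ == "__main__":
    # Subcommands (fit, test, ...) and --config handling come from LightningCLI.
    ResNetCLI(ResNetBaseline, CIFAR10DataModule)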
6 changes: 3 additions & 3 deletions docs/source/quickstart.rst
@@ -86,16 +86,16 @@ CIFAR10 datamodule.
.. code:: python

from torch_uncertainty.datamodules import CIFAR10DataModule
from lightning.pytorch import Trainer
from torch_uncertainty import TUTrainer

dm = CIFAR10DataModule(root="data", batch_size=32)
trainer = Trainer(gpus=1, max_epochs=100)
trainer = TUTrainer(accelerator="gpu", devices=1, max_epochs=100)
trainer.fit(routine, dm)
trainer.test(routine, dm)

Here it is: you have trained your first model with TorchUncertainty! As a result, you will get access to various metrics
measuring the ability of your model to handle uncertainty. You can find other examples of training with Lightning Trainers by
looking at the `Tutorials <tutorials.html#layers>`_.
looking at the `Tutorials <auto_tutorials/index.html>`_.

More metrics
^^^^^^^^^^^^
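Since shift evaluation is the headline feature of this PR, here is a hedged sketch of enabling it from the datamodule; the eval_shift and shift_severity names are taken from the commit messages above and may differ slightly in the released API.

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import CIFAR10DataModule

# eval_shift adds a test pass on the corrupted split (CIFAR-10-C) at the given severity.
dm = CIFAR10DataModule(root="data", batch_size=32, eval_shift=True, shift_severity=3)
trainer = TUTrainer(accelerator="gpu", devices=1, max_epochs=100)
trainer.fit(routine, dm)   # `routine` defined as in the quickstart snippet above
trainer.test(routine, dm)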
10 changes: 10 additions & 0 deletions docs/source/references.rst
@@ -246,6 +246,16 @@ For Laplace Approximation, consider citing:
Losses
------

Focal Loss
^^^^^^^^^^

For the focal loss, consider citing:

**Focal Loss for Dense Object Detection**

* Authors: *Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár*
* Paper: `TPAMI 2020 <https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8417976>`__.

Conflictual Loss
^^^^^^^^^^^^^^^^

3 changes: 2 additions & 1 deletion experiments/classification/cifar10/resnet.py
@@ -1,13 +1,14 @@
import torch
from lightning.pytorch.cli import LightningArgumentParser

from torch_uncertainty import TULightningCLI
from torch_uncertainty.baselines.classification import ResNetBaseline
from torch_uncertainty.datamodules import CIFAR10DataModule
from torch_uncertainty.utils import TULightningCLI


class ResNetCLI(TULightningCLI):
def add_arguments_to_parser(self, parser: LightningArgumentParser) -> None:
super().add_arguments_to_parser(parser)
parser.add_optimizer_args(torch.optim.SGD)
parser.add_lr_scheduler_args(torch.optim.lr_scheduler.MultiStepLR)

3 changes: 2 additions & 1 deletion experiments/classification/cifar10/vgg.py
@@ -1,13 +1,14 @@
import torch
from lightning.pytorch.cli import LightningArgumentParser

from torch_uncertainty import TULightningCLI
from torch_uncertainty.baselines.classification import VGGBaseline
from torch_uncertainty.datamodules import CIFAR10DataModule
from torch_uncertainty.utils import TULightningCLI


class ResNetCLI(TULightningCLI):
def add_arguments_to_parser(self, parser: LightningArgumentParser) -> None:
super().add_arguments_to_parser(parser)
parser.add_optimizer_args(torch.optim.Adam)
parser.add_lr_scheduler_args(torch.optim.lr_scheduler.MultiStepLR)
