
🔨 Rework PPs, Use transformsV2, & Fix corruptions #145


Merged: 53 commits into main on Mar 19, 2025
Conversation

@o-laurent (Contributor) commented Mar 15, 2025

  • Improve the corruption transforms
  • Rework the post-processing methods
  • Use transformsV2 (a sketch combining this item and the corruption transforms follows this list)
  • Fix the security issue
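
For context, here is what the two transform-related items could look like in practice: an ImageNet-C-style corruption implemented as a module that composes with the torchvision `transforms.v2` pipeline. This is an illustrative sketch, not the code merged in this PR; `GaussianNoiseCorruption` and its severity values are assumptions in the spirit of Hendrycks & Dietterich's common corruptions.

```python
import torch
from torch import Tensor, nn
from torchvision.transforms import v2

class GaussianNoiseCorruption(nn.Module):
    """Hypothetical corruption transform with a severity level in [1, 5]."""

    def __init__(self, severity: int = 1) -> None:
        super().__init__()
        # per-severity noise std, following the common-corruptions convention
        self.std = [0.08, 0.12, 0.18, 0.26, 0.38][severity - 1]

    def forward(self, img: Tensor) -> Tensor:
        # expects a float image in [0, 1]; clamping keeps the output a valid image
        return torch.clamp(img + torch.randn_like(img) * self.std, 0.0, 1.0)

# corruptions compose directly with the transforms v2 API
transform = v2.Compose([
    v2.ToImage(),                           # PIL/ndarray -> tensor image
    v2.ToDtype(torch.float32, scale=True),  # uint8 [0, 255] -> float [0, 1]
    GaussianNoiseCorruption(severity=3),
])
```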

@o-laurent self-assigned this on Mar 15, 2025

codecov bot commented Mar 17, 2025

Codecov Report

Attention: Patch coverage is 99.27007% with 2 lines in your changes missing coverage. Please review.

Project coverage is 99.15%. Comparing base (b3fe75e) to head (de6d239).
Report is 58 commits behind head on main.

✅ All tests successful. No failed tests found.

Files with missing lines                      Patch %   Lines
torch_uncertainty/routines/segmentation.py    66.66%    1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #145      +/-   ##
==========================================
- Coverage   99.19%   99.15%   -0.05%     
==========================================
  Files         144      144              
  Lines        7192     7307     +115     
  Branches      925      942      +17     
==========================================
+ Hits         7134     7245     +111     
- Misses         26       28       +2     
- Partials       32       34       +2     
Flag      Coverage Δ
cpu       ?
pytest    99.15% <99.27%> (-0.05%) ⬇️

Flags with carried forward coverage won't be shown.


@o-laurent requested a review from alafage on Mar 17, 2025, at 14:05
@o-laurent changed the title from "🔨 Use transformsV2 & Fix corruptions" to "🔨 Rework PPs, Use transformsV2, & Fix corruptions" on Mar 17, 2025
@o-laurent marked this pull request as ready for review on Mar 17, 2025, at 18:42
@o-laurent (Contributor, Author) commented:
@alafage we need to state that the calibration dataloader's parameters will be those of the validation (or test) set, notably for MC-BatchNorm; otherwise users cannot tell how to set this method's parameters. Ideally, we should find a workaround, because it makes no sense to define the MCBatchNorm parameters in the datamodule; maybe temporarily overwrite the dataloader's batch_size in fit? A sketch of that idea follows below.
Moreover, we should warn the user when the test dataset is used as a calibration set, and ideally provide a more principled way to split the sets from the start.
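
A minimal sketch of that workaround, assuming a Lightning-style datamodule; the helper name `calibration_dataloader` and the `mc_batch_size` parameter are hypothetical, not the torch-uncertainty API:

```python
from torch.utils.data import DataLoader

def calibration_dataloader(datamodule, mc_batch_size: int) -> DataLoader:
    """Rebuild the validation dataloader with the batch size MC-BatchNorm needs,
    so it does not have to be configured in the datamodule itself."""
    val_loader = datamodule.val_dataloader()
    return DataLoader(
        val_loader.dataset,
        batch_size=mc_batch_size,            # temporary override, used only in fit
        shuffle=True,                        # fresh batch statistics per stochastic pass
        num_workers=val_loader.num_workers,  # keep the validation loader's settings
    )
```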

@o-laurent (Contributor, Author) commented Mar 18, 2025

I've made the changes accordingly. Tell me what you think about them when you have time, @alafage. Basically, I believe that the post-processing set (no longer only a calibration set) should be handled at the datamodule level instead of in the routine; a rough sketch of that idea follows.
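
As an illustration of handling the post-processing set at the datamodule level, here is a toy sketch; all names, including `postprocess_dataloader` and `postprocess_set`, are hypothetical rather than the merged API:

```python
import torch
from lightning.pytorch import LightningDataModule
from torch.utils.data import DataLoader, TensorDataset, random_split

class ToyDataModule(LightningDataModule):
    """Illustrative datamodule that owns the post-processing split."""

    def setup(self, stage=None):
        # dummy tensors standing in for a real validation set
        full_val = TensorDataset(
            torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,))
        )
        # carve the post-processing set out of validation, not out of test
        n_pp = len(full_val) // 10
        self.val_set, self.postprocess_set = random_split(
            full_val, [len(full_val) - n_pp, n_pp]
        )

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=128)

    def postprocess_dataloader(self):
        # consumed by post-processing methods (e.g., temperature scaling, MC-BatchNorm)
        return DataLoader(self.postprocess_set, batch_size=128)
```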

@@ -26,9 +26,9 @@ This package provides a multi-level API, including:

- easy-to-use :zap: lightning **uncertainty-aware** training & evaluation routines for **4 tasks**: classification, probabilistic and pointwise regression, and segmentation.
- ready-to-train baselines on research datasets, such as ImageNet and CIFAR
- [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR ( :construction: work in progress :construction: ).
Review comment (Contributor):
So we don’t have pretrained weights?

@o-laurent (Contributor, Author) replied on Mar 18, 2025:

We have some, but I don't think we can sell our library on this for now. It would be good to increase the number of pretrained models for the submission.

@alafage merged commit 7328661 into main on Mar 19, 2025
4 checks passed