🔨 Rework PPs, Use transformsV2, & Fix corruptions #145
Conversation
🔨 make batch size and other parameters configurable in scaler fit method
use ImageNet-C parameters
…``einops.repeat``
…ty into corruption
🔨 Improve the corruption transforms
Codecov Report
Attention: Patch coverage is
✅ All tests successful. No failed tests found.
Additional details and impacted files
@@ Coverage Diff @@
## main #145 +/- ##
==========================================
- Coverage 99.19% 99.15% -0.05%
==========================================
Files 144 144
Lines 7192 7307 +115
Branches 925 942 +17
==========================================
+ Hits 7134 7245 +111
- Misses 26 28 +2
- Partials 32 34 +2
Post-processing ``fit()`` update
@alafage we need to state that the calibration dataloader parameters will be those of the validation (or test) set, notably for MC-BatchNorm; otherwise, users wouldn't know how to set this method's parameters. Ideally, we should find a workaround, because it makes no sense to define the MC-BatchNorm parameters in the datamodule; maybe temporarily overwrite the dataloader's batch_size in fit?
I've made changes accordingly. Tell me what you think about them when you have time, @alafage. Basically, I believe that the post-processing set (no longer only a calibration set) should be handled at the datamodule level instead of in the routine.
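The workaround discussed above, rebuilding the calibration loader with a batch size chosen by the post-processing method rather than by the datamodule, can be sketched as follows. The helper name and its use are hypothetical, not torch-uncertainty's actual API:

```python
from torch.utils.data import DataLoader


def with_batch_size(loader: DataLoader, batch_size: int) -> DataLoader:
    """Return a new DataLoader over the same dataset with a different batch_size.

    Hypothetical helper: a fit() method could call this to override the
    validation loader's batch size instead of reading it from the datamodule.
    """
    return DataLoader(
        loader.dataset,
        batch_size=batch_size,
        shuffle=False,  # calibration passes should be deterministic
        num_workers=loader.num_workers,
        pin_memory=loader.pin_memory,
    )
```

A fit() method could then do something like ``calib_loader = with_batch_size(val_loader, batch_size=128)`` (names illustrative), keeping the MC-BatchNorm batch size a parameter of the post-processing method rather than of the datamodule.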
@@ -26,9 +26,9 @@ This package provides a multi-level API, including:
- easy-to-use :zap: lightning **uncertainty-aware** training & evaluation routines for **4 tasks**: classification, probabilistic and pointwise regression, and segmentation.
- ready-to-train baselines on research datasets, such as ImageNet and CIFAR
- [pretrained weights](https://huggingface.co/torch-uncertainty) for these baselines on ImageNet and CIFAR (:construction: work in progress :construction:).
So we don’t have pretrained weights?
We have some, but I don't think we can sell our library on this for now. Maybe it would be good to increase our number of pre-trained models for the submission.