✨ LeNet BatchEnsemble and Deep Ensemble + minor bugfixes #137
Conversation
Hello again @tonyzamyatin, We have long wanted to do a BatchEnsemble wrapper, so thank you for this. Before I look at your PR in detail, don't hesitate to enable the pre-commits as detailed in the contributing page. By the way, you could directly enable them in your dockerfile in #134?
Hi @o-laurent, currently I have a fork where I play around, and if I implement something new or fix a bug I make a new branch and create a PR. I have the pre-commit checks enabled on my fork and manually run the hooks before I make commits. Are there any problems, or ways to automate this? I'm asking since you brought it up. And yeah, I can add that to the Dockerfile👌
No worries, I'm saying this to avoid failing GitHub workflows due to linting issues as in test #1601. If the docker image is made for contributions, then I think it makes sense to enforce that the pre-commit hooks are installed. When installed, they should run automatically at commit time - and not manually - unless you use the
Yeah, you're right. It definitely makes sense to enable pre-commit hooks in the Dockerfile. Hmmm, when I commit something onto the fork the pre-commit hook doesn't automatically run, only the pre-commit checks.🧐
What do you mean by pre-commit checks (vs. pre-commit hooks)? Normally, pre-commit should make committing impossible when any of the hooks fail, in this case "ruff-check".
Sorry for being unclear. The commits get rejected if the pre-commit hook doesn't run successfully, but the code isn't automatically formatted on commit. I have to run the pre-commit hook manually and then commit again.
On my side, I just have to add the modified files and commit again. Anyway, this doesn't explain how you managed to push code that made our test fail 😄 but it's no big deal at all.
Exactly, I do the same. Don't know how this happens lol 😂
Although, usually it's some kind of test for a transform which is not implemented that makes the pipeline fail. This time I maybe forgot to lint🌚 I'll have to double-check whether the pre-commit hooks are active in the Docker container by default.
Damn, I didn't install the pre-commits on my clone of your fork and now I'm pushing failing code too 🤣
HAHAHHA🤣
Codecov Report
Attention: Patch coverage is
✅ All tests successful. No failed tests found.
Additional details and impacted files

```diff
@@           Coverage Diff            @@
##              dev     #137    +/-   ##
==========================================
- Coverage   99.15%   99.15%   -0.01%
==========================================
  Files         143      144       +1
  Lines        7136     7192      +56
  Branches      910      923      +13
==========================================
+ Hits         7076     7131      +55
- Misses         27       28       +1
  Partials       33       33
==========================================
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Thank you for your PR! Here are a few quick comments on some details.
Hi @tonyzamyatin, Nice PR, thank you! I don't know if you saw my answer on Discord, but there are some points that we need to address before merging these changes to dev.

First, we would like to keep the possibility of not repeating the batch during training to fit the original paper better. In this case, we do not need to use

Secondly, @o-laurent and I agree that the current BatchEnsemble wrapper can be misleading. It does not make a BatchEnsemble but overrides the model's forward to repeat the inputs first. It would be clearer if we had a wrapper taking any

I will make these changes directly on your fork if that's okay.
Hi @alafage, That makes total sense!
For this, I think we could simply pass a boolean flag via the
This also makes a lot of sense, but note that this introduces additional complexity, could lead to silent failures if there is no such implementation, and could cause unintended behaviour (e.g. using a pretrained model with the wrapper). Maybe it's more explicit to implement BatchEnsemble constructors similar to

In any case, I believe it makes sense to add a

What do you think?
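The silent-failure concern can be made concrete: a layer-conversion helper should count what it actually replaced and fail loudly when nothing matched. Below is a minimal, hypothetical sketch; the names `convert_layers` and `FakeBatchLinear` are made up for illustration and are not torch-uncertainty's actual API (a real `BatchLinear` adds per-estimator rank-1 fast weights rather than reusing `nn.Linear`):

```python
import torch
from torch import nn


class FakeBatchLinear(nn.Linear):
    """Illustrative stand-in for a BatchEnsemble linear layer.

    A real BatchEnsemble layer would add rank-1 fast weights (r_i, s_i)
    per estimator; here we only tag the layer with the estimator count
    to keep the sketch runnable.
    """

    def __init__(self, in_features: int, out_features: int, num_estimators: int):
        super().__init__(in_features, out_features)
        self.num_estimators = num_estimators


def convert_layers(module: nn.Module, num_estimators: int) -> int:
    """Recursively swap every nn.Linear for FakeBatchLinear.

    Returns the number of layers converted so the caller can detect the
    "silent failure" case where nothing was actually replaced.
    """
    converted = 0
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(
                module,
                name,
                FakeBatchLinear(child.in_features, child.out_features, num_estimators),
            )
            converted += 1
        else:
            converted += convert_layers(child, num_estimators)
    return converted


model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
num_converted = convert_layers(model, num_estimators=4)
if num_converted == 0:
    raise ValueError("no convertible layers found - conversion would be a silent no-op")
```

Handling `nn.Conv2d` (mapped to a `BatchConv2d` equivalent) would follow the same recursive pattern; the explicit count-and-raise is what turns the silent no-op into a visible error.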
All feedback implemented apart from this detail, but I can also make this change if you want :)
- the input is now repeated during training depending on the `repeat_training_inputs` argument
- with `convert_layers==True`, all `nn.Linear` and `nn.Conv2d` layers are replaced by `BatchLinear` and `BatchConv2d`
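To illustrate the `repeat_training_inputs` behaviour, here is a minimal, hypothetical sketch (the class name `RepeatWrapper` is invented for illustration, not torch-uncertainty's API): at evaluation time the batch is always repeated `num_estimators` times so every estimator sees every input, while during training repetition is optional, matching the original BatchEnsemble paper where each estimator may instead train on a different sub-batch.

```python
import torch
from torch import nn


class RepeatWrapper(nn.Module):
    """Hypothetical sketch of an input-repeating BatchEnsemble wrapper."""

    def __init__(self, model: nn.Module, num_estimators: int,
                 repeat_training_inputs: bool = False):
        super().__init__()
        self.model = model
        self.num_estimators = num_estimators
        self.repeat_training_inputs = repeat_training_inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Always repeat at eval time; during training only if requested,
        # so each estimator can instead see a different sub-batch.
        if not self.training or self.repeat_training_inputs:
            x = x.repeat(self.num_estimators, *([1] * (x.dim() - 1)))
        return self.model(x)


wrapper = RepeatWrapper(nn.Identity(), num_estimators=4)
```

With a batch of shape `(2, 3)`, `wrapper.eval()` yields an output of shape `(8, 3)`, while `wrapper.train()` (with the flag left at `False`) keeps it at `(2, 3)`.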
I added a boolean

Concerning the second point, although it adds more complexity, I have added another boolean parameter to the class:

Although I've written tests to validate my modifications, feel free to check that they do not break anything for your experiments. Cheers!
Seems perfect! Thanks @tonyzamyatin and @alafage!
This PR introduces the following improvements and fixes:
- The input is repeated `num_estimator` times, eliminating the need to hardcode this for each model.