Conversation


@janfb janfb commented Oct 13, 2025

I suggest we adopt an Enhancement Proposal (EP) workflow to better organize and discuss larger refactorings and API changes. EPs should live on the docs website and have a corresponding GH Discussions entry for public discussion.
This will help us plan and reach consensus on major changes (e.g., roadmap toward sbi 1.0, deprecations, architectural refactors), while keeping the process transparent and easy to follow.

I created a corresponding entry on the website with an explanation of the workflow, as well as an example, EP-01, for the planned training infrastructure refactoring (numfocus/small-development-grant-proposals#60).

In general, I see this as a first step towards a more transparent and professional governance model.

This is all open for discussion; let's go through it in next week's org meeting.

@janfb janfb requested a review from michaeldeistler October 13, 2025 18:57

codecov bot commented Oct 13, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 84.59%. Comparing base (4d1dc5f) to head (634a980).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1674      +/-   ##
==========================================
- Coverage   87.77%   84.59%   -3.19%     
==========================================
  Files         134      137       +3     
  Lines       11126    11417     +291     
==========================================
- Hits         9766     9658     -108     
- Misses       1360     1759     +399     
| Flag | Coverage Δ |
|---|---|
| unittests | 84.59% <ø> (-3.19%) ⬇️ |

Flags with carried forward coverage won't be shown.
see 43 files with indirect coverage changes


@michaeldeistler michaeldeistler left a comment


Thanks!

The general idea of EPs is great, I really like it! We should also advertise this in contributing.md (or even require it for "external" PRs).

I do have doubts about the new abstractions though. For example, we currently have EarlyStopping. In the future, we would probably also have LRScheduler, ClippingMethod, and possibly more. Should we really aim to build the ultimate training loop? I think this has a few downsides:

  • all of these features have to be well-documented (otherwise they will forever go unused)
  • they create more abstractions, which will make it harder for new maintainers
  • more testing...

The simple alternative would be to point people to the flexible training loop. Everybody can use an LLM to write, for example, a learning rate scheduler. Of course, this requires a bit more work for users, but getting to know yet another sbi abstraction (which includes reading and understanding documentation, testing the feature, making sure it works, ...) is also non-zero work for users.
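To make that concrete, here is a rough sketch of what "bring your own scheduler" could look like in a hand-written loop. The names `density_estimator`, `train_loader`, and `val_loader` are placeholders for whatever the flexible training loop hands you, and `density_estimator.loss(theta, x)` stands in for its per-batch loss; the only part that matters for the argument is the plain PyTorch scheduler.

```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Placeholders obtained from the flexible training loop / your own setup:
# density_estimator, train_loader, val_loader = ...

optimizer = torch.optim.Adam(density_estimator.parameters(), lr=5e-4)
scheduler = ReduceLROnPlateau(optimizer, factor=0.5, patience=10)

for epoch in range(1000):
    for theta, x in train_loader:
        optimizer.zero_grad()
        loss = density_estimator.loss(theta, x).mean()  # stand-in for the per-batch loss
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        val_loss = torch.stack(
            [density_estimator.loss(theta, x).mean() for theta, x in val_loader]
        ).mean()
    scheduler.step(val_loss)  # user-chosen LR schedule, no extra sbi abstraction
```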


### For Maintainers

- **Reduced code duplication**: Extract shared training logic into reusable components

I am fine with doing this for this PR, but in general, I don't think that code duplication is always a bad thing. Introducing "reusable components" adds cognitive load and will make the package harder to maintain in the future.


- **Seamless experiment tracking**: Support for TensorBoard, WandB, MLflow, and stdout
without changing existing code
- **Multiple early stopping strategies**: Patience-based, plateau detection, and custom

Is there any evidence that patience-based stopping is not working well? I would only add more methods if it is really low-effort to implement and maintain them, or if they are really important for performance.

learning_rate=5e-4,
max_epochs=1000,
device="cuda"
)

I don't really see the point of this. All of our train config args are ints, floats, or bools. Why do we need this additional abstraction?
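For comparison, the status quo is just keyword arguments of ints and floats on `train()`. The argument names below are from memory of the current sbi API, so treat them as approximate; the toy data is only there to make the snippet self-contained.

```python
import torch
from sbi.inference import NPE
from sbi.utils import BoxUniform

# Toy prior and simulations, only to make the example runnable.
prior = BoxUniform(low=-torch.ones(2), high=torch.ones(2))
theta = prior.sample((1000,))
x = theta + 0.1 * torch.randn_like(theta)

inference = NPE(prior)
inference.append_simulations(theta, x)
density_estimator = inference.train(
    training_batch_size=200,  # plain int
    learning_rate=5e-4,       # plain float
    stop_after_epochs=20,     # plain int: the existing patience-based stopping
    max_num_epochs=1000,
)
```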

)

# Method-specific loss configuration
loss_args = LossArgsNPE(exclude_invalid_x=True)

I would not expect exclude_invalid_x to live in LossArgsNPE.
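To illustrate (plain torch, not a proposal for where it should live): invalid simulations can be filtered as a data-handling step before they ever reach the loss, with `theta`, `x`, and `inference` as in the example above.

```python
import torch

# Hypothetical data-level filter: drop simulations with non-finite outputs
# before appending them, so the loss never has to know about invalid x.
is_valid = torch.isfinite(x).all(dim=1)
inference.append_simulations(theta[is_valid], x[is_valid])
```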

early_stop = EarlyStopping.validation_loss(patience=20, min_delta=1e-4)

# Stop when learning rate drops too low
early_stop = EarlyStopping.lr_threshold(min_lr=1e-6)

Why is this not a part of train_config?
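For illustration, both stopping criteria could live in the flat training config as plain numbers (the names below are made up, not the proposed API):

```python
# Hypothetical flat-config version of the two early-stopping variants above.
train_config = dict(
    learning_rate=5e-4,
    max_epochs=1000,
    stop_after_epochs=20,  # patience-based early stopping
    stop_min_delta=1e-4,   # minimum improvement that resets the patience counter
    stop_min_lr=1e-6,      # stop once the learning rate drops below this
)
```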
