Conversation

@krammnic (Collaborator) commented Jan 22, 2025

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings

I was able to run forward with an external loss and concatenated_forward.

pytorch-bot bot commented Jan 22, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2292

Note: Links to docs will display an error until the docs builds have been completed.

❌ 7 New Failures, 4 Cancelled Jobs

As of commit 2bf00ba with merge base 8c9235e:

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label on Jan 22, 2025
@krammnic (Collaborator, Author)

@SalmanMohammadi Hey! Can you take a look please?

utils.log_rank_zero(log, "Loss is initialized.")

try:
    self._forward = config.instantiate(cfg.forward)
A Collaborator left a review comment on the lines above:

What's the motivation for this change?

@RdoubleA (Collaborator) left a comment

I am a bit confused about why a custom forward class needs to be created. Can't everything be contained in the custom loss module, as long as it follows a certain contract?
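
For illustration, such a "contract" could be as simple as a fixed call signature that every preference loss implements; the sketch below is hypothetical and not torchtune's actual API:

```python
# Hypothetical sketch of a "contract" for custom preference losses.
# The names and signature are illustrative only, not part of torchtune.
from typing import Optional, Protocol, Tuple

import torch


class PreferenceLossContract(Protocol):
    def __call__(
        self,
        policy_chosen_logps: torch.Tensor,
        policy_rejected_logps: torch.Tensor,
        reference_chosen_logps: Optional[torch.Tensor] = None,
        reference_rejected_logps: Optional[torch.Tensor] = None,
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Return (loss, chosen_rewards, rejected_rewards)."""
        ...
```

Any loss honoring this signature could then be selected from the config without touching the recipe.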

@krammnic (Collaborator, Author)

I am a bit confused about why a custom forward class needs to be created. Can't everything be contained in the custom loss module, as long as it follows a certain contract?

Do you want it all in one class?

@SalmanMohammadi (Contributor)

Hey @krammnic. Thanks for opening this.

What's the motivation for this change?

@RdoubleA to try and provide some context (@krammnic feel free to correct me here since this is your PR) - the core issue is that our DPO recipe(s) as they are today are not easily extensible to other DPO-esque losses. As the recipes stand now, adding new losses would start to harm overall code quality. This is why we removed SimPO in #2063 - it was a "reference-free" loss, which meant the logic for calculating the loss was slightly different.

If a user wishes to add SimPO back in today, they'd have to duplicate the recipe file and re-apply the same changes we removed. If you'd like to estimate the space of possible DPO-style losses and the degree of branching logic they require, I'd recommend taking a look at TRL's code.

As it is, I'm not sure this PR is heading in the right direction. In the SimPO example you've provided, we're still making the extra forward pass in the recipe to obtain the reference logprobs and passing these into the instantiated SimPOLoss, which would error out.

I think it would be helpful to enumerate the different cases we would need to handle to support the most commonly used DPO-style losses, which would allow us to generalize in the right place. I would need to spend a little more time to come up with a suitable design, but right now I'm leaning towards pulling most of the logic for forward passes and loss calculation (e.g. from here to here) out of the recipe, and into the losses. I'd rather duplicate this logic for losses that are very similar than over-generalize.

I'd love to hear your thoughts @krammnic (and @RdoubleA and anyone else who has stuck with my ramblings).
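
To make "enumerate the different cases" a bit more concrete, one hypothetical way to capture the branching dimensions is as declared properties the recipe can check, rather than branching on concrete loss classes (illustrative names only, not torchtune APIs):

```python
# Illustrative only: properties a recipe could branch on instead of
# hard-coding per-loss logic. Not part of torchtune.
from dataclasses import dataclass


@dataclass(frozen=True)
class PreferenceLossTraits:
    # Does the loss need reference-model logprobs (e.g. DPO, IPO),
    # or is it reference-free (e.g. SimPO, ORPO)?
    needs_reference_logprobs: bool
    # Does the loss expect length-normalized (average) sequence logprobs,
    # as SimPO does, rather than summed logprobs?
    length_normalized_logps: bool


DPO_TRAITS = PreferenceLossTraits(
    needs_reference_logprobs=True, length_normalized_logps=False
)
SIMPO_TRAITS = PreferenceLossTraits(
    needs_reference_logprobs=False, length_normalized_logps=True
)
```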

@krammnic (Collaborator, Author)

We can handle the case involving reference logprobs by adding a parameter to the recipe for that possibility. Speaking about the overall design, I actually could not find a better alternative that keeps the recipe adaptable while preserving code quality.

@krammnic (Collaborator, Author) commented Jan 29, 2025

@SalmanMohammadi @RdoubleA Hey! What do I need to do to proceed on this? Currently I mainly use a slightly modified torchtune from this PR, as the feature is pretty important for me.

@krammnic (Collaborator, Author) commented Feb 9, 2025

@SalmanMohammadi I see that full DPO was merged. Let me update this PR to align with it and with the new design we discussed.

@krammnic (Collaborator, Author) commented Feb 9, 2025

@SalmanMohammadi Pushed the design that we discussed. If it looks OK, I will then run some tests with each recipe.

@krammnic (Collaborator, Author)

@ebsmothers As Salman is unavailable, after a short discussion with Felipe we concluded that you could review this relatively small but important feature. Would love to hear some comments!

@ebsmothers (Contributor) left a comment

@krammnic admittedly I am a bit out of the loop on these changes so pardon my ignorance here. But personally I don't think that model should be an argument to the forward method of a loss module. This is introducing some coupling that shouldn't exist -- namely any loss should be usable as a standalone component.

Again I am probably missing some important background here, if you can share more details on what you're trying to support I'm happy to brainstorm some alternative paths here.

@krammnic (Collaborator, Author)

@krammnic admittedly I am a bit out of the loop on these changes so pardon my ignorance here. But personally I don't think that model should be an argument to the forward method of a loss module. This is introducing some coupling that shouldn't exist -- namely any loss should be usable as a standalone component.

Again I am probably missing some important background here, if you can share more details on what you're trying to support I'm happy to brainstorm some alternative paths here.

Yep, let me clarify the idea. Currently we support only DPOLoss as a contrastive loss for our recipes. In some previous PRs we dropped support for other contrastive losses because they overcomplicated the recipe. The key point is that we don't want to include every loss directly in the framework, both because the SOTA can change frequently and because the recipe should stay clean. Key observation: to support almost any contrastive loss, a custom forward for the loss plus concatenated_forward in the recipe is enough. Initially I wanted to pass them separately, but then we concluded that the better idea is to put both forwards directly in the loss. The idea of this PR is to support that possibility, and therefore to support any contrastive loss that users may take from TRL or any other source.
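
As a rough illustration of that idea (a hypothetical class, not the code in this PR): a reference-free, SimPO-style loss that owns its own concatenated forward pass, so the recipe only needs to instantiate whatever the config points at and call it.

```python
# Rough, hypothetical sketch (not this PR's code): a reference-free,
# SimPO-style contrastive loss that also owns the concatenated forward pass,
# letting the recipe stay loss-agnostic.
from typing import Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


class MySimPOLoss(nn.Module):
    def __init__(self, beta: float = 2.0, gamma: float = 0.5):
        super().__init__()
        self.beta = beta
        self.gamma = gamma

    @staticmethod
    def _avg_sequence_logps(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Average log-probability of the label tokens, ignoring positions labeled -100.
        mask = labels != -100
        safe_labels = labels.clamp(min=0)
        per_token = torch.gather(
            F.log_softmax(logits, dim=-1), dim=2, index=safe_labels.unsqueeze(-1)
        ).squeeze(-1)
        return (per_token * mask).sum(-1) / mask.sum(-1).clamp(min=1)

    def concatenated_forward(
        self, model: nn.Module, tokens: torch.Tensor, labels: torch.Tensor
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        # `tokens`/`labels` are assumed to be [chosen; rejected] stacked along dim 0,
        # and `model` is assumed to return logits of shape [batch, seq_len, vocab].
        logits = model(tokens)
        logps = self._avg_sequence_logps(logits, labels)
        chosen_logps, rejected_logps = logps.chunk(2, dim=0)
        return chosen_logps, rejected_logps

    def forward(
        self, chosen_logps: torch.Tensor, rejected_logps: torch.Tensor
    ) -> torch.Tensor:
        # SimPO: -log sigmoid(beta * (avg_chosen_logp - avg_rejected_logp) - gamma)
        margin = self.beta * (chosen_logps - rejected_logps) - self.gamma
        return -F.logsigmoid(margin).mean()
```

In a torchtune-style config such a class could presumably be selected with a `_component_` entry pointing at the user's own module, so losses from TRL or elsewhere can be wrapped without modifying the recipe.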

@krammnic closed this on Feb 23, 2025
@krammnic mentioned this pull request on Feb 23, 2025

Labels

CLA Signed - This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.

5 participants