Wensun/apo #96

Open
wensun wants to merge 19 commits into main from wensun/apo
Changes from 9 commits

Commits (19)
5a95cab  apo initial (wensun)
d89cd64  . (wensun)
1d925d2  vllm fix (jdchang1)
e9dbad9  critic free (jdchang1)
934e864  cleanup (jdchang1)
b7cb34e  local run (jdchang1)
28014a9  added timeout in rlvr utils (wensun)
ed4e554  added timeout in rlvr utils (wensun)
de6400c  . (wensun)
6b4c385  fix some comments issues in local yaml (wensun)
b2ae469  . (wensun)
4b51437  Update compose_rl/algorithms/online/callback.py (jdchang1)
d4ffc59  clean comments (jdchang1)
1fd7e2d  comment cleanup (jdchang1)
c769975  cleanup of callback (jdchang1)
0d275ec  undo rlvr update (jdchang1)
d6c24d0  Merge branch 'main' into wensun/apo (jdchang1)
202b0a2  fix (jdchang1)
fc2f835  Merge branch 'wensun/apo' of https://github.com/databricks/compose-rl… (jdchang1)
compose_rl/algorithms/online/callback.py

@@ -41,6 +41,7 @@
     ComposerMPTPolicyLM,
 )
 from compose_rl.algorithms.online.model_methods import (
+    ALGORITHM_TYPE,
     OnPolicyEnum,
 )
 from compose_rl.algorithms.online.reward_manager import (

@@ -589,6 +590,7 @@ def iteration_start(self, state: State, logger: Logger):
         del logger  # unused

         batch = self._get_next_iter_prompts()
         batch = state.device.batch_to_device(batch)

         if self.vllm_engines is not None:

@@ -648,7 +650,7 @@ def _get_next_iter_prompts(self):
         # Explode the batch into multiple batches for each generation
         for _ in range(self.generations_per_prompt):
             # For keys that do not require additional processing
-            if key in ['prompt_len', 'verified_answer', 'prompt_id']:
+            if key in ['prompt_len', 'verified_answer', 'prompt_id', 'vstar']:
                 curr_values.append(batch[key])
                 continue

@@ -678,6 +680,8 @@ def _get_next_iter_prompts(self):
         else:
             if key == 'verified_answer':
                 ret_batch[key] = list(flatten(curr_values))
+            elif key == 'vstar':
+                ret_batch[key] = list(flatten(curr_values))
             else:
                 # this is an edge case that we will not hit currently, but just handling it as needed
                 ret_batch[key] = curr_values
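The two _get_next_iter_prompts hunks above route the new vstar key through the same pass-through and flatten path as verified_answer. As a rough illustration of that explode-and-flatten pattern, here is a standalone sketch; the toy data, the helper name explode_prompts, and the exact repetition order are assumptions for illustration, not the callback's real implementation.

import torch

def explode_prompts(batch: dict, generations_per_prompt: int) -> dict:
    """Repeat every prompt-level entry once per generation (toy version)."""
    ret_batch = {}
    for key, value in batch.items():
        curr_values = [value for _ in range(generations_per_prompt)]
        if isinstance(value, torch.Tensor):
            # Tensor keys are concatenated along the batch dimension
            ret_batch[key] = torch.cat(curr_values, dim=0)
        elif key in ('verified_answer', 'vstar'):
            # List-valued keys are flattened into one flat list,
            # mirroring the list(flatten(curr_values)) calls in the diff
            ret_batch[key] = [item for sub in curr_values for item in sub]
        else:
            # Edge case: keep the nested structure as-is
            ret_batch[key] = curr_values
    return ret_batch

# Example: 2 prompts exploded into 3 generations per prompt
batch = {
    'prompt_id': torch.tensor([0, 1]),
    'verified_answer': ['42', 'yes'],
    'vstar': [0.5, 1.0],
}
exploded = explode_prompts(batch, generations_per_prompt=3)
assert exploded['prompt_id'].shape[0] == 6
assert len(exploded['vstar']) == 6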
@@ -870,109 +874,135 @@ def _resolve_outputs(
         env_outs['right_padded_attn_mask'] = torch.logical_not(
             torch.eq(env_outs['obs'], self.pad_token_idx),  # type: ignore
         )
+        if self.actor_critic.loss_type not in ALGORITHM_TYPE.REGRESSION:
+            # Now that rewards are resolved, we can compute advantages
+            if self.actor_critic.loss_type == OnPolicyEnum.PPO:
+                env_outs['advantages'] = compute_advantages(
+                    rewards=env_outs['rewards'],
+                    values=env_outs['values'],
+                    gamma=self.gamma,
+                    lambda_gae=self.lambda_gae,
+                )
+            elif self.actor_critic.loss_type == OnPolicyEnum.GRPO:
+                # compute GRPO advantages
+                prompt_id = env_outs['prompt_id']
+                rewards = env_outs['rewards']
+
+                # Flatten the rewards by summing on sequence length/action_mask
+                flat_rewards = masked_sum(
+                    rewards,
+                    env_outs['action_mask'],
+                    dim=-1,
+                )
+
+                # Get unique prompt IDs and their indices
+                unique_prompt_ids, inverse_indices = torch.unique(
+                    prompt_id,
+                    return_inverse=True,
+                )
+
+                # Use scatter to compute means and standard deviations
+                # First, we'll create a tensor to track counts, sums, and sum of squares
+                n_unique = len(unique_prompt_ids)
+                counts = torch.zeros(n_unique, device=prompt_id.device)
+                sums = torch.zeros(n_unique, device=prompt_id.device)
+                sum_squares = torch.zeros(n_unique, device=prompt_id.device)
+
+                # Use scatter_add to accumulate values
+                counts.scatter_add_(
+                    0,
+                    inverse_indices,
+                    torch.ones_like(flat_rewards),
+                )
+                sums.scatter_add_(0, inverse_indices, flat_rewards)
+                sum_squares.scatter_add_(0, inverse_indices, flat_rewards**2)
+
+                # Compute means and standard deviations
+                means = sums / counts
+                variances = (sum_squares / counts) - (means**2)
+                stds = torch.sqrt(variances)
+
+                # Map back to original tensor shape
+                mean_rewards = means[inverse_indices]
+                std_rewards = stds[inverse_indices]
+
+                # Calculate GRPO advantage
+                grpo_advantage = (flat_rewards - mean_rewards)
+                # Only normalize the advantage if flag is set
+                if self.actor_critic.normalize_advantage:
+                    grpo_advantage /= (std_rewards + 1e-4)
+
+                # Create advantages of the same shape as original rewards
+                advantages = torch.zeros_like(rewards)
+                # Copy the flat grpo_advantage according to action_mask
+                expanded_advantages = grpo_advantage.unsqueeze(1).expand_as(
+                    env_outs['action_mask'],
+                )
+                advantages = torch.where(
+                    env_outs['action_mask'].bool(),
+                    expanded_advantages,
+                    advantages,
+                )
+                env_outs['advantages'] = advantages
+            else:
+                raise ValueError(
+                    f'Invalid loss type: {self.actor_critic.loss_type}. ' +
+                    'Valid options are: ppo, grpo.',
+                )
+
+            batch_adv_mean, batch_adv_var = dist_compute_masked_mean_and_var(
+                env_outs['advantages'],
+                env_outs['action_mask'],
+            )
+
+            mean_ift = masked_mean(
+                env_outs['ift_kl'],
+                env_outs['action_mask'],
+            )
+            self.kl_ift.append(mean_ift.cpu())
+
+            iter_batch.update(env_outs)
+
+            iter_batch.update({
+                'max_gen_len':
+                    torch.ones(self.iter_batch_size).to(torch.int32) *
+                    self.max_gen_len,
+                'adv_masked_mean':
+                    torch.ones(self.iter_batch_size) * batch_adv_mean.cpu(),
+                'adv_masked_var':
+                    torch.ones(self.iter_batch_size) * batch_adv_var.cpu(),
+                'ift_kl_scalar':
+                    torch.ones(self.iter_batch_size) * self.kl_ctl.value,
+                'reward_std':
+                    torch.ones(self.iter_batch_size) *
+                    env_outs['rewards'].std().to('cpu'),
+            })
+        else:
+            # APO and REBEL
+            mean_ift = masked_mean(
+                env_outs['ift_kl'],
+                env_outs['action_mask'],
+            )
+            self.kl_ift.append(mean_ift.cpu())
+
+            iter_batch.update(env_outs)
+
+            iter_batch.update({
+                'max_gen_len':
+                    torch.ones(self.iter_batch_size).to(torch.int32) *
+                    self.max_gen_len,
+                'adv_masked_mean':
+                    torch.ones(self.iter_batch_size),
+                'adv_masked_var':
+                    torch.ones(self.iter_batch_size),
+                'ift_kl_scalar':
+                    torch.ones(self.iter_batch_size) * self.kl_ctl.value,
+                'reward_std':
+                    torch.ones(self.iter_batch_size) *
+                    env_outs['rewards'].std().to('cpu'),
+            })

         # Moving minibatches to CPU to not take additional GPU memory
         for k, v in iter_batch.items():
Review comment: Isn't this block of code for both algorithms very similar to each other, except for the …

Reply: done
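For readers skimming the _resolve_outputs hunk, the GRPO branch computes per-prompt reward means and standard deviations with torch.unique(..., return_inverse=True) plus scatter_add_, then subtracts the group mean from each response's summed reward. Below is a minimal self-contained sketch of just that grouping step; the toy tensors and the function name grpo_advantages are illustrative, and in the actual callback the per-token rewards are first reduced with masked_sum over action_mask.

import torch

def grpo_advantages(
    flat_rewards: torch.Tensor,   # (batch,) one scalar reward per sampled response
    prompt_id: torch.Tensor,      # (batch,) id of the prompt each response came from
    normalize: bool = True,
    eps: float = 1e-4,
) -> torch.Tensor:
    """Group-relative advantage: reward minus the mean reward of the same prompt's group."""
    unique_ids, inverse = torch.unique(prompt_id, return_inverse=True)
    n = len(unique_ids)

    # Accumulate per-group counts, sums, and sums of squares with scatter_add_
    counts = torch.zeros(n).scatter_add_(0, inverse, torch.ones_like(flat_rewards))
    sums = torch.zeros(n).scatter_add_(0, inverse, flat_rewards)
    sum_sq = torch.zeros(n).scatter_add_(0, inverse, flat_rewards ** 2)

    means = sums / counts
    stds = torch.sqrt(sum_sq / counts - means ** 2)

    # Map group statistics back to each response and form the advantage
    adv = flat_rewards - means[inverse]
    if normalize:
        adv = adv / (stds[inverse] + eps)
    return adv

# Two prompts, three generations each
rewards = torch.tensor([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
ids = torch.tensor([7, 7, 7, 9, 9, 9])
print(grpo_advantages(rewards, ids))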
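The same hunk then broadcasts each sequence-level advantage back to token positions via unsqueeze(1).expand_as(action_mask) and torch.where, so masked-out (padding) positions keep a zero advantage. A toy illustration of that masking step follows; the shapes and values are invented for the example.

import torch

# One advantage per sequence, and a (batch, seq_len) mask marking generated tokens
seq_advantage = torch.tensor([0.5, -0.25])           # (batch,)
action_mask = torch.tensor([[1, 1, 0], [1, 0, 0]])   # (batch, seq_len)

# Expand the per-sequence value across the sequence dimension,
# then keep it only where the mask is set; other positions stay 0.
expanded = seq_advantage.unsqueeze(1).expand_as(action_mask)
advantages = torch.where(
    action_mask.bool(),
    expanded,
    torch.zeros_like(expanded),
)
print(advantages)
# tensor([[ 0.5000,  0.5000,  0.0000],
#         [-0.2500,  0.0000,  0.0000]])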