fix qat_lora_test #2131
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2131
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 214be63 with merge base 26b2200.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
The link isn't working for me, but feel free to merge with green CI. Thank you for fixing this!
* Llama 3.3 70B (meta-pytorch#2124)
* Llama 3.3 readme updates (meta-pytorch#2125)
* update configs (meta-pytorch#2107)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* Reduce logging output for distributed KD (meta-pytorch#2120)
* Support Early Exit Loss and/or Layer Dropout (meta-pytorch#1076)
  Co-authored-by: ebsmothers <ebs@meta.com>
* Update checkpointing directory (meta-pytorch#2074)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
  Co-authored-by: vancoyendall <vancoykendall@gmail.com>
* pass correct arg (meta-pytorch#2127)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* update configs (meta-pytorch#2128)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* fix qat_lora_test (meta-pytorch#2131)
  Co-authored-by: Felipe Mello <felipemello@fb.com>

---------

Co-authored-by: Philip Bontrager <pbontrager@gmail.com>
Co-authored-by: ebsmothers <ebs@meta.com>
Co-authored-by: Felipe Mello <fmellomascarenhas@gmail.com>
Co-authored-by: Felipe Mello <felipemello@fb.com>
Co-authored-by: Joe Cummings <jrcummings27@gmail.com>
Co-authored-by: Mostafa Elhoushi <m.elhoushi@ieee.org>
Co-authored-by: vancoyendall <vancoykendall@gmail.com>
* Llama 3.3 70B (meta-pytorch#2124)
* Llama 3.3 readme updates (meta-pytorch#2125)
* update configs (meta-pytorch#2107)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* Reduce logging output for distributed KD (meta-pytorch#2120)
* Support Early Exit Loss and/or Layer Dropout (meta-pytorch#1076)
  Co-authored-by: ebsmothers <ebs@meta.com>
* Update checkpointing directory (meta-pytorch#2074)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
  Co-authored-by: vancoyendall <vancoykendall@gmail.com>
* pass correct arg (meta-pytorch#2127)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* update configs (meta-pytorch#2128)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* fix qat_lora_test (meta-pytorch#2131)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* guard ckpt imports (meta-pytorch#2133)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* [bug fix] add parents=True (meta-pytorch#2136)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* [bug fix] re-add model (meta-pytorch#2135)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* Update save sizes into GiB (meta-pytorch#2143)
* [bug fix] remove config download when source is kaggle (meta-pytorch#2144)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* [fix] remove "with_suffix" (meta-pytorch#2146)
  Co-authored-by: Felipe Mello <felipemello@fb.com>
* DoRA fixes (meta-pytorch#2139)
  Co-authored-by: Mircea Mironenco <5738815+mirceamironenco@users.noreply.github.com>
* [Fix] Llama 3.2 Vision decoder_trainable flag fixed (meta-pytorch#2150)
* Small readme, config updates (meta-pytorch#2157)
* Using `FormattedCheckpointFiles` in configs (meta-pytorch#2147)
* Move ``get_world_size_and_rank`` to utils (meta-pytorch#2155)
* Faster intermediate checkpoints with DCP async save in TorchTune (meta-pytorch#2006)
  Co-authored-by: Saurabh Mishra <msaurabh@fb.com>
* torchdata integration - multi-dataset and streaming support (meta-pytorch#1929)
* Allow higher version of lm-eval (meta-pytorch#2165)
* Using `FormattedCheckpointFiles` in configs... round 2 (meta-pytorch#2167)
* [EZ] Fix set_torch_num_threads in multi-node. (meta-pytorch#2164)

---------

Co-authored-by: Philip Bontrager <pbontrager@gmail.com>
Co-authored-by: ebsmothers <ebs@meta.com>
Co-authored-by: Felipe Mello <fmellomascarenhas@gmail.com>
Co-authored-by: Felipe Mello <felipemello@fb.com>
Co-authored-by: Joe Cummings <jrcummings27@gmail.com>
Co-authored-by: Mostafa Elhoushi <m.elhoushi@ieee.org>
Co-authored-by: vancoyendall <vancoykendall@gmail.com>
Co-authored-by: Mircea Mironenco <5738815+mirceamironenco@users.noreply.github.com>
Co-authored-by: salman <salman.mohammadi@outlook.com>
Co-authored-by: Saurabh Mishra <msaurabh@meta.com>
Co-authored-by: Saurabh Mishra <msaurabh@fb.com>
Co-authored-by: Andrew Ho <andrew.kenneth.ho@gmail.com>
Co-authored-by: Eugen Hotaj <eugen_hotaj_91@hotmail.com>
Context
What is the purpose of this PR?