-
👋 Hello @Lingpheng, thank you for sharing your detailed observations and results with Ultralytics 🚀! For users interested in optimizing loss weights, we recommend reviewing the Docs, including Python and CLI usage examples. These resources cover many of the most common questions about custom training and model configuration.

If this is a 🐛 Bug Report, please include a minimum reproducible example to help us investigate further. For custom training ❓ questions like yours, providing as much context as possible, including dataset samples, relevant config files, and complete training logs, will help our team assist you effectively. Also, please review our Tips for Best Training Results to ensure you're following best practices.

Join the Ultralytics community where it suits you best! For real-time chat, visit Discord 🎧. For thoughtful discussions, check out Discourse. Or connect with others on our Subreddit.

**Upgrade**: To make sure you are working with the latest features and fixes, upgrade to the newest release with `pip install -U ultralytics`.

**Environments**: YOLO can be run in any of these verified environments (with dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

**Status**: If this badge is green, all Ultralytics CI tests are passing. CI tests verify correct operation of all YOLO Modes and Tasks across macOS, Windows, and Ubuntu every 24 hours and on every commit.

This is an automated response 🤖. An Ultralytics engineer will review your discussion and assist you soon!
-
Thanks! I see; it's important to balance the three parameters. I'll try using the n model and start experimenting with cls=1.0. Appreciate the help!
-
The loss would be higher because the weights are multiplied with the actual loss value; that's what you see in the log. Higher weights mean higher logged loss values, so the losses from the two runs are not comparable without first normalizing with respect to the weights.
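A minimal sketch of that normalization, using the loss values reported in the question (note this linear rescaling is only an approximation for comparing final logged values; changing the weight also changes training dynamics):

```python
def unweighted_loss(logged_loss: float, weight: float) -> float:
    """Divide out the loss-term weight so runs trained with
    different cls weights can be compared on the same scale."""
    return logged_loss / weight

# Values from the question: final cls loss 0.75 at cls=0.8, 1.5 at cls=1.5.
print(unweighted_loss(0.75, 0.8))  # ~0.94
print(unweighted_loss(1.5, 1.5))   # 1.0
```

On this normalized scale the two runs end up much closer than the raw logged values suggest, which is why the logged losses alone should not be used to pick the better weight.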
-
Hi, I am currently training a model to detect multiple objects. The exact positions of the bounding boxes are not very important to me; I care more about the accurate classification of the different objects. So I tried increasing the cls loss weight while keeping the weights for box and dfl unchanged.
I compared two values for the cls weight: on the left, cls=0.8, and on the right, cls=1.5. All other training settings were the same.
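For reference, a hedged sketch of how two such runs could be launched with the Ultralytics CLI (the dataset and model names here are placeholders, and all other hyperparameters are assumed to be left at their defaults):

```shell
# Run A: lower classification weight
yolo detect train data=data.yaml model=yolov8n.pt cls=0.8

# Run B: higher classification weight, box and dfl unchanged
yolo detect train data=data.yaml model=yolov8n.pt cls=1.5
```
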
Here are the training and validation results:
And here are the corresponding test results:
From these results, it seems that cls=0.8 performs better — the final classification loss is 0.75 when cls=0.8, compared to 1.5 when cls=1.5.
So I have two questions: