-
👋 Hello @zzw725, thank you for reaching out and sharing your experience with Ultralytics 🚀! For new users and troubleshooting, we recommend checking out the Docs, where you'll find Python and CLI usage examples, as well as answers to many frequently asked questions. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us identify and debug the issue. If you're looking for training advice or have custom dataset questions, please include as much detail as possible, such as dataset image examples, configuration files, and training logs, and make sure you're following our Tips for Best Training Results. You're also invited to join the Ultralytics community in whatever way suits you best! For real-time chat, join us on Discord 🎧. For in-depth discussions, try Discourse. Or join our Subreddit to exchange knowledge with the community.
Upgrade: Please ensure you are using the latest version: pip install -U ultralytics
Environments: YOLO can be run in any of the following up-to-date verified environments (with all dependencies, including CUDA/CUDNN, Python, and PyTorch preinstalled):
Status: If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLO Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
This is an automated response to help guide your troubleshooting. An Ultralytics engineer will also review your discussion and assist you soon!
-
OK! Thank you for your guidance.
-
I recall that I changed the presentation format of the dataset images.
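If the presentation format changed between splits, a simple safeguard is to normalize every image to one consistent 3-channel layout before training, so the train, val, and test sets are guaranteed to be treated identically. A minimal sketch (the `to_rgb` helper and the directory layout are illustrative, not part of the Ultralytics API):

```python
from pathlib import Path

from PIL import Image


def to_rgb(src_dir: str, dst_dir: str) -> int:
    """Convert every grayscale PNG in src_dir to 3-channel RGB in dst_dir.

    Returns the number of images converted.
    """
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for img_path in src.glob("*.png"):
        # .convert("RGB") replicates the single gray channel into R, G, and B,
        # so pixel values are unchanged and only the channel layout differs.
        Image.open(img_path).convert("RGB").save(dst / img_path.name)
        count += 1
    return count
```

YOLOv8's default loader reads images as 3-channel arrays anyway, so single-channel files are expanded at load time; the point of converting up front is only to ensure every split goes through the exact same path.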
-
Everyone!
When training YOLOv8 on grayscale images, the training and validation sets performed well, with P, R, and mAP reaching 99.5%, 100%, and 99.9%, respectively. However, the test set performed poorly: of the two categories, category 1 accuracy was 0%, with the model identifying it entirely as category 2. What could be the reason for this, and how can it be resolved?
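Near-perfect train/val metrics combined with a collapsed test set usually point to a problem with the data rather than the model: a preprocessing mismatch between the splits, a label mix-up in the test set, or leakage between train and val. Since the image format of the dataset was changed at some point, a quick first check is to compare raw pixel statistics between the train and test directories; a large gap suggests the splits were preprocessed differently. This is a hedged sketch (the `pixel_stats` helper and the paths are illustrative names, not Ultralytics APIs):

```python
from pathlib import Path

import numpy as np
from PIL import Image


def pixel_stats(img_dir: str) -> tuple[float, float]:
    """Mean and std of grayscale pixel intensities over all PNGs in a directory.

    If train and test directories report very different numbers, the two
    splits were likely preprocessed differently (e.g. a changed image format).
    """
    arrays = [
        np.asarray(Image.open(p).convert("L"), dtype=np.float64).ravel()
        for p in Path(img_dir).glob("*.png")
    ]
    pixels = np.concatenate(arrays)
    return float(pixels.mean()), float(pixels.std())
```

If the statistics match, the next step would be to run validation on the test split directly (e.g. `YOLO("best.pt").val(data="data.yaml", split="test")`) and inspect the confusion matrix it produces, and to spot-check that the test labels for category 1 are correct.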