zh/models/yoloe/ #20395
Replies: 5 comments 1 reply
-
I converted YOLOE to ONNX, but it only supports single-image input. How can I implement visual prompts?
-
👋 Hello, thank you for your interest in Ultralytics 🚀! We recommend checking out the Docs for comprehensive guides, including Python and CLI examples, where many common questions may already be answered. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us investigate the issue efficiently. If you have a custom training ❓ Question, please include detailed information such as dataset samples, training logs, and confirm you are following our Tips for Best Training Results. Join the Ultralytics community in a way that suits you:

Upgrade: To ensure you have the latest features and bug fixes, upgrade with `pip install -U ultralytics`.

Environments: YOLO models can be run in any of these up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status: If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLO Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

This is an automated response 💡. An Ultralytics engineer will also review your discussion and assist you soon!
-
Hello, I have some datasets of bounding boxes, in the form {class, minx, miny, maxx, maxy}.
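A box list in that format can be converted into the `bboxes`/`cls` arrays that YOLOE's visual-prompt interface expects. A minimal sketch (the record values below are hypothetical placeholders, not from any real dataset):

```python
import numpy as np

# Hypothetical records in the {class, minx, miny, maxx, maxy} format described above
records = [
    {"class": 0, "minx": 221.5, "miny": 405.8, "maxx": 344.9, "maxy": 857.5},
    {"class": 1, "minx": 50.0, "miny": 400.0, "maxx": 620.0, "maxy": 800.0},
]

# YOLOE visual prompts take xyxy boxes plus a parallel array of class indices
visual_prompts = dict(
    bboxes=np.array([[r["minx"], r["miny"], r["maxx"], r["maxy"]] for r in records]),
    cls=np.array([r["class"] for r in records]),
)
print(visual_prompts["bboxes"].shape)  # → (2, 4)
```

Per the Ultralytics docs, a dict like this is passed as the `visual_prompts` argument to `model.predict(...)` together with the visual-prompt predictor (`YOLOEVPSegPredictor`).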
-
This is the code I used to test YOLOE inference (with the `set_classes` call restored to match its comment, per the official example):

```python
from ultralytics import YOLOE

# Initialize a YOLOE model
model = YOLOE("yoloe-v8m-seg.pt")  # or select yoloe-11s/m-seg.pt for different sizes

# Set text prompt to detect person and bus. You only need to do this once after you load the model.
names = ["person", "bus"]
model.set_classes(names, model.get_text_pe(names))

# Run detection on the given image
results = model.predict("girl.png")

# Show results
results[0].show()
```

PS C:\Users\SYC\Desktop\projects\YOLOE_test> & D:/anaconda3/envs/RL/python.exe c:/Users/SYC/Desktop/projects/YOLOE_test/test.py

How can I solve this?
-
YOLOE ONNX inference doesn't accept text prompts. Goal: run YOLOE ONNX inference with text prompts. The YOLOE model is exported to ONNX using the given code:
I want to run ONNX inference with a text prompt. I have already preprocessed my text into embeddings using CLIP, as shown below.
But I cannot pass them as an input to the ONNX model during inference, because I checked the input parameters and the model accepts only one input, the image.
This prints:
It shows only one input, which is used for the input image. Where can I plug in the text embedding/prompt?
-
zh/models/yoloe/
YOLOE is a real-time open-vocabulary detection and segmentation model that extends YOLO with text, image, or internal-vocabulary prompts, enabling detection of any object class with state-of-the-art zero-shot performance.
https://docs.ultralytics.com/zh/models/yoloe/