integrations/sony-imx500/ #17496
Replies: 17 comments 44 replies
-
👋 Hello, thank you for your interest in Ultralytics and for exploring integrations with the Sony IMX500! 🚀 For those looking to dive deeper into this topic, the Ultralytics Docs are a great starting point. They contain sections dedicated to various integrations, including the Sony IMX500, with detailed instructions on how to export your YOLO models efficiently.

If you're here to report a 🐛 bug, please provide a minimum reproducible example. This will help us better understand and assist with the issue you're facing. For general usage, upgrading to the latest version of Ultralytics might resolve some issues you encounter. Make sure your environment is updated with the latest package:

```bash
pip install -U ultralytics
```

For interactive discussions, consider joining the Ultralytics community on Discord 🎧, our Discourse forum, or threads on our Subreddit. Our team will assist you soon, but for now, these resources should help you get started! 😊

**Environments**

Feel free to run YOLO in any of these verified environments with preinstalled dependencies.

**Status**

Our continuous integration (CI) ensures all models and integrations, including the Sony IMX500, are tested across platforms. Check our CI status here for the latest updates.
-
I ran into some Python dependency issues when trying to run the first example. I started with Python 3.12; ultralytics asks for TensorFlow 2.12, which seems to clash with the requirements of the converter tools. Downgrading Python to 3.11 is not enough: the TensorFlow version must be higher than 2.12.
I tested on an Ubuntu 22.04 machine and found a solution after some testing. Here is the project file:

```toml
[project]
name = "ultralytics-imx500"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = "~=3.11.0"
dependencies = [
    "imx500-converter[pt]==3.14.3",
    "model-compression-toolkit==2.1.1",
    "sony-custom-layers==0.2.0",
    "tensorflow~=2.14.0",
    "ultralytics>=8.3.31",
]
```

It could also be worth mentioning that the converter installation may ask for a sudo password at some point, since it requires Java.
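Since the pyproject above pins `requires-python = "~=3.11.0"`, a tiny guard at the top of a setup script can catch the wrong interpreter before the dependency resolver produces a confusing error. This helper is my own convenience check, not part of any of these packages:

```python
import sys

def python_ok(version_info):
    """True when the interpreter matches the ~=3.11.0 pin above (any 3.11.x)."""
    return (version_info[0], version_info[1]) == (3, 11)

# Typical use at the top of a setup script:
# if not python_ok(sys.version_info):
#     raise SystemExit("This environment requires Python 3.11.x")
```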
-
Hi there, it doesn't support the yolov8n-obb model, because it shows me:
-
Hello there, are there any tips on IMX exporting for later YOLO models such as YOLO11?
-
There is a problem with the instructions for installing picamera2: they seem to create issues for the IMX500 examples in https://github.com/raspberrypi/picamera2/tree/main/examples/imx500. Has this been tested with the existing IMX500 examples in the picamera2 repo?
The proper picamera2 installation is via apt; see https://github.com/raspberrypi/picamera2
-
There is a problem with ultralytics versions >8.3.34; the instruction:
-
Can we do face recognition (not just detection) with this approach?
-
Great guide! I managed to get my first model trained, converted, and running on the IMX500 on a Raspberry Pi 4 for some custom object detection. I initially trained yolov8s, but when I tried to export it, it did not work. I then trained yolov8n with the same data and it worked fine (I double-checked, and the default yolov8s does not export either). Is there a way to get yolov8s models to export so I can run them on the IMX500 with these tools, or does this only work for yolov8n? I am new to the YOLOv8 models and the IMX500, so I was just wondering, since the s version performed much better than the n version on my dataset. Once packaged up on the Raspberry Pi, the yolov8n model seems to be only about 2 MB in size, so even with the input tensors I would think the device could handle a bigger model, as I believe the IMX500 has 8 MB available for the model and input tensors.
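A rough back-of-the-envelope is consistent with the 8 MB explanation: the IMX500 runs int8-quantized weights (roughly one byte per parameter), and the published parameter counts for yolov8n (~3.2M) and yolov8s (~11.2M) put the s model over the budget before input tensors are even counted. A sketch of the arithmetic, where the 8 MB budget is the figure quoted above rather than something I have verified against Sony's spec:

```python
# Approximate int8 model sizes vs. the ~8 MB IMX500 memory budget.
PARAMS = {"yolov8n": 3.2e6, "yolov8s": 11.2e6}  # published parameter counts
BUDGET_MB = 8.0  # model + input tensors, per the comment above (unverified)

def int8_size_mb(n_params):
    """int8 quantization stores roughly one byte per parameter."""
    return n_params / 1e6

sizes = {name: int8_size_mb(p) for name, p in PARAMS.items()}
# yolov8n comes out around 3.2 MB (fits); yolov8s around 11.2 MB,
# which already exceeds the budget before any input tensors.
```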
-
Are there any plans to support yolov8n-pose, or any other YOLO pose model, in the foreseeable future? I have tried using the above tools with yolov8n-pose and I obtained this result:
-
Hello, I successfully ran the standard yolov8 model, converted to IMX, on my Pi AI Camera. Now I'm wondering where I can modify the detection parameters, such as confidence, resolution, etc. I don't see anything like the config JSON of the models that come preinstalled with the Pi camera dependency.
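While looking for that config, one workaround is to filter detections host-side after inference. A minimal sketch, assuming you have the raw boxes and scores as NumPy arrays; the helper name is mine, not part of picamera2 or ultralytics:

```python
import numpy as np

def filter_by_confidence(boxes, scores, conf=0.5):
    """Keep only detections whose score meets the confidence threshold.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) array.
    """
    keep = scores >= conf
    return boxes[keep], scores[keep]

boxes = np.array([[10, 10, 50, 50], [20, 20, 80, 80]])
scores = np.array([0.9, 0.3])
kept_boxes, kept_scores = filter_by_confidence(boxes, scores, conf=0.5)
```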
-
In the object detection sample, how do you crop and save the detected object, assuming only one is detected?
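A generic way to do the crop, assuming you already have the frame as a NumPy array and a single (x1, y1, x2, y2) box from the demo's detections; the helper name is mine, and saving is then a single `cv2.imwrite` call:

```python
import numpy as np

def crop_detection(frame, box):
    """Crop a detection from an image array given an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = (int(v) for v in box)
    return frame[y1:y2, x1:x2]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
crop = crop_detection(frame, (100.0, 50.0, 300.0, 200.0))
# The crop can then be saved with, e.g., cv2.imwrite("crop.jpg", crop)
```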
-
Hello, is there a journal or paper that discusses the IMX500 for YOLO?
-
Can the Sony IMX500 run YOLO12, and does it run the same way as YOLOv8?
-
Hello,
I got a bunch of warnings during the export (export_log.txt), but the IMX model seemed to work properly in the end. I then packaged it via the IMX500 Packager and tested it using imx500_object_detection_demo.py.
-
Hi, I followed the steps on this page to convert the yolov8n model with the IMX500 converter and tested it with the Raspberry Pi AI Camera, but I get 29.5 ms inference time with a 640x640 input tensor size. I wonder why my inference time is better than your result (58.82 ms). Which version of the IMX500 converter did you use?
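For comparing the two numbers, the latency-to-throughput conversion is just 1000 divided by the per-frame latency in milliseconds, so 29.5 ms is roughly double the frame rate of 58.82 ms:

```python
def fps(latency_ms):
    """Convert a per-frame latency in milliseconds to frames per second."""
    return 1000.0 / latency_ms

# fps(29.5) is about 33.9 FPS; fps(58.82) is about 17.0 FPS.
```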
-
Hello, per the picamera2 docs, the way to create apps with "a camera window embedded into the GUI is through Qt". Is there no other way to embed the video feed in other GUI frameworks?
-
In the official IMX500 documentation provided by Sony (https://developer.sony.com/tinyml-23/developer-tools/documentation/imx500-evaluation-kit-user-guide?version=2023-11-28&progLang=), they describe the process of generating the network.fpk and network_info.txt files used to deploy the model to the IMX500, using the Packager.sh script. However, when I apply the same procedure to the packerOut.zip file I created by following the tutorial at https://docs.ultralytics.com/integrations/sony-imx500, I am unable to run the model on the IMX500.

I also couldn't run your implementation, because it says some packages are not found, which I believe is related to the OS version; currently I use the Pi OS provided by Sony for compatibility reasons. I could run some other deep learning models following the steps in that same evaluation-kit user guide, so I believe there must be something different about how the packerOut.zip file is created. Could you assist in resolving this issue with the conversion process? @glenn-jocher
-
integrations/sony-imx500/
Learn to export Ultralytics YOLOv8 models to Sony's IMX500 format to optimize your models for efficient deployment.
https://docs.ultralytics.com/integrations/sony-imx500/