Fisheye lens undistortion #1169
Hi, can you say how you created the camera configuration? I'm guessing you might have 32-bit ARGB pixels? You can ask for 24-bit RGB instead, and that should actually be more efficient, though you can't display them with the hardware-accelerated preview. In order to get the frames back into the pipeline, the easiest thing is probably just to copy them back over the original image, maybe like this:
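One way this can look, as a rough sketch: use picamera2's pre_callback with MappedArray so the undistorted pixels overwrite the buffer before the encoder sees it. The calibration matrices K and D and the 720p size below are placeholders you would swap for your own fisheye calibration.

```python
import cv2
import numpy as np
from picamera2 import MappedArray, Picamera2
from picamera2.encoders import H264Encoder

picam2 = Picamera2()
# 24-bit RGB is easier to hand to OpenCV than the default 32-bit format.
# (The same callback also works on the default 32-bit frames if you keep those.)
config = picam2.create_video_configuration(main={"size": (1280, 720), "format": "RGB888"})
picam2.configure(config)

# Placeholder fisheye calibration -- replace K and D with your own calibration results.
K = np.array([[600.0, 0.0, 640.0],
              [0.0, 600.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.zeros((4, 1))
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (1280, 720), cv2.CV_16SC2)

def undistort(request):
    # Called for every frame before it is handed on to the encoder(s).
    with MappedArray(request, "main") as m:
        undistorted = cv2.remap(m.array, map1, map2, interpolation=cv2.INTER_LINEAR)
        m.array[:] = undistorted  # copy the result back over the original image

picam2.pre_callback = undistort
picam2.start_recording(H264Encoder(), "undistorted.h264")
```

Because pre_callback runs before the request is passed on, every consumer (encoder, preview, captures) gets the undistorted frame, so there's no need to wrap the encoder itself.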
(untested, but you get the idea!)
I am currently working on a project that requires minimal CPU usage on the rpi5 and a wide field of view. I need multiple outputs, as described in one of the examples. To reduce the overhead, I thought I could combine the H264 encoder with a fisheye undistorter from the cv2 library by putting a wrapper class around the H264 encoder that intercepts the frames before encoding, undistorts them, and then passes them on to the encoder. Unfortunately, the frames I can intercept are in a format I don't quite understand. I also don't really know how to get the undistorted frames back into the pipeline. Has anyone had similar issues, or found an alternative approach to undistorting frames while using the picamera2 library? Thanks in advance.
Wrapperclass_code.txt