Do I need to do a distortion correction after getting aligned frames using D455? #13824
Hi @cvki The RealSense camera hardware applies a Brown-Conrady distortion model at the point of capture, before the data is sent over the USB cable to the computer, so it is not necessary to perform further distortion processing on the image. Most RealSense stream formats have distortion applied to them, except for Y16, which is used for camera calibration. A small number of RealSense users prefer distortion not to be applied and so use OpenCV's undistort function, as you did, to remove it.

The intrinsic values vary depending on which resolution is being used. The default color resolution is 1280x720. The intrinsic values of each individual camera are also unique because of the manufacturing process at the factory. You can find the intrinsics of your own camera at each resolution by launching the rs-enumerate-devices tool in calibration information mode with the command below:
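rs-enumerate-devices -c

If you prefer to read the values programmatically, a minimal pyrealsense2 sketch along the lines below (an illustration only, assuming a single camera is connected and the 1280x720 color profile is available) prints the same color intrinsics:

import pyrealsense2 as rs

# Start a 1280x720 color stream and print its intrinsics.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
profile = pipeline.start(config)
color_profile = profile.get_stream(rs.stream.color).as_video_stream_profile()
intrin = color_profile.get_intrinsics()
print("fx:", intrin.fx, "fy:", intrin.fy, "ppx:", intrin.ppx, "ppy:", intrin.ppy)
print("model:", intrin.model, "coeffs:", intrin.coeffs)
pipeline.stop()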
In regard to scale: as you are using alignment, depth data must be involved in your script. The default depth scale of the camera is 0.001. When RealSense depth data is imported into another tool, the image may be mis-scaled if the tool that the data is imported into is not using the same scale as the original RealSense data. It is possible to reconfigure the scale of RealSense depth data to another value, such as 0.1 or 0.001. #10976 (comment) has information about changing the depth scale with pyrealsense2 code.
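For illustration only, a sketch like the one below reads the current depth scale and changes the depth units option with pyrealsense2 (the value range a given camera and firmware accepts may differ, so treat the 0.0001 value as an example):

import pyrealsense2 as rs

# Read the current depth scale and change the depth units option.
ctx = rs.context()
device = ctx.query_devices()[0]
depth_sensor = device.first_depth_sensor()
print("current depth scale:", depth_sensor.get_depth_scale())  # usually 0.001 (1 mm per unit)
if depth_sensor.supports(rs.option.depth_units):
    depth_sensor.set_option(rs.option.depth_units, 0.0001)     # example: 0.1 mm per unit
    print("new depth scale:", depth_sensor.get_depth_scale())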
@MartyG-RealSense Thank you for your careful answer, it has been very helpful to me.
@MartyG-RealSense By the way, among the three different ways of querying the intrinsics, the values obtained (especially fx and fy) are different:
The intrinsic values that are output can vary between retrieval methods. If in doubt about which set of values is correct then I recommend relying on the ones from
@MartyG-RealSense thanks!
Are you sure that's right? deproject_pixel_to_point undistorts before projecting, so I have been using the color stream on the assumption that no correction has been applied by the time I receive the image.
Hi @Abrahamh08 Line 49 of the SDK's rs-types.h source code file states that the Inverse Brown-Conrady distortion model (which the color stream uses) undistorts the image rather than distorting it.
The PR at #13833 that you commented on will be the best place to get advice about your question, though.
My question is now the same as the question in this issue. The original poster asked whether it is necessary to apply distortion correction to obtain an undistorted image. I am confused by the statement that the "camera model's hardware applies a Brown-Conrady distortion model at the point of capture before the data is sent through the USB cable to the computer", because I believe the image will be distorted and needs to be undistorted, either with the Inverse Brown-Conrady model (as you said) for individual points of interest, or with a distorted-to-undistorted pixel map, which is inefficient because the model gives the inverse direction. In my use case, however, I am interested in the undistorted-to-distorted direction, so that works for me.
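To make the direction of the mapping concrete, here is a rough pyrealsense2 sketch (assuming depth is aligned to color and the chosen pixel is in range) showing that rs2_deproject_pixel_to_point receives the stream intrinsics, so whatever distortion model and coefficients those intrinsics declare are handled inside the call:

import pyrealsense2 as rs

# Deproject a color pixel into a 3D point using depth aligned to color.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)
frames = align.process(pipeline.wait_for_frames())
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
intrin = color_frame.profile.as_video_stream_profile().get_intrinsics()
u, v = 640, 360                               # example pixel, assumed to be in range
depth_m = depth_frame.get_distance(u, v)      # depth in metres at that pixel
point = rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_m)
print("3D point (m):", point)
pipeline.stop()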
It looks like this issue could be related to my #13864.
There is a response on this topic:
That makes sense. The left IR/depth distortion coefficients are zero (I did not check the right IR), so it makes sense that the stream is rectified beforehand.
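As a quick check on your own unit, a small sketch like the one below (assuming a D455 is attached) prints the depth stream's distortion model and coefficients, which should come back as all zeros for the rectified stream:

import pyrealsense2 as rs

# Print the distortion model and coefficients of the depth stream.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
profile = pipeline.start(config)
depth_intrin = profile.get_stream(rs.stream.depth).as_video_stream_profile().get_intrinsics()
print("model:", depth_intrin.model, "coeffs:", depth_intrin.coeffs)
pipeline.stop()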
Does anyone require further assistance with this case, please? Thanks! |
Case closed due to no further comments received. |
Hello,
I obtained aligned color and depth frames using the D455 camera. Do I need to further perform distortion correction on the color images? I tried using OpenCV for undistortion and remapping, and obtained intrinsic parameters for multiple scenes. However, the intrinsic parameters obtained for each scene differ by a scale of about 0.1. Is this normal? Can you give me some advice?
Thank you.
Here is part of my code:
import numpy as np
import cv2
import pyrealsense2 as rs

# Start the pipeline with depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
profile = pipeline.start(config)
frames = pipeline.wait_for_frames()

# Align the depth frame to the color frame
align_to = rs.stream.color
align = rs.align(align_to)
frames = align.process(frames)
frame_num = frames.frame_number
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()

# Process RGB: get the color image and the color stream intrinsics
color_intrin = color_frame.profile.as_video_stream_profile().get_intrinsics()
color_img = np.asanyarray(color_frame.get_data())
img_bgr = cv2.cvtColor(color_img, cv2.COLOR_RGB2BGR)

camera_matrix = np.array([
    [color_intrin.fx, 0, color_intrin.ppx],
    [0, color_intrin.fy, color_intrin.ppy],
    [0, 0, 1]
])
dist_coeffs = np.array(color_intrin.coeffs)

# Undistort directly, then build an undistortion remap and apply it to the result
undistorted_color = cv2.undistort(img_bgr, camera_matrix, dist_coeffs)
new_camera_matrix, roi = cv2.getOptimalNewCameraMatrix(
    camera_matrix, dist_coeffs,
    (undistorted_color.shape[1], undistorted_color.shape[0]), alpha=0)
mapx, mapy = cv2.initUndistortRectifyMap(
    camera_matrix, dist_coeffs, None, new_camera_matrix,
    (undistorted_color.shape[1], undistorted_color.shape[0]), cv2.CV_32FC1)
color_img = cv2.remap(undistorted_color, mapx, mapy, cv2.INTER_LINEAR)
# then I will save color_img