Do I need to do a distortion correction after getting aligned frames using D455? #13824


Closed
cvki opened this issue Mar 10, 2025 · 13 comments

Comments

@cvki

cvki commented Mar 10, 2025

Hello,
I obtained aligned color and depth frames using the D455 camera. Do I need to further undistort the color images? I tried using OpenCV for undistortion and remapping, and obtained intrinsic parameters for multiple scenes. However, the intrinsic parameters obtained for each scene differ by a scale of about 0.1. Is this normal? Could you give me some advice?
Thank you.

Here is part of my code:

align_to = rs.stream.color
align = rs.align(align_to)
frames = align.process(frames)
frame_num = frames.frame_number
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
color_intrin = color_frame.get_profile().as_video_stream_profile().get_intrinsics()

# process RGB
color_img = np.asanyarray(color_frame.get_data())
img_bgr = cv2.cvtColor(color_img, cv2.COLOR_RGB2BGR)
camera_matrix = np.array([
    [color_intrin.fx, 0, color_intrin.ppx],
    [0, color_intrin.fy, color_intrin.ppy],
    [0, 0, 1]
])
dist_coeffs = np.array(color_intrin.coeffs)
undistorted_color = cv2.undistort(img_bgr, camera_matrix, dist_coeffs)

h, w = img_bgr.shape[:2]
new_camera_matrix, roi = cv2.getOptimalNewCameraMatrix(
    camera_matrix, dist_coeffs, (w, h), alpha=0)
mapx, mapy = cv2.initUndistortRectifyMap(
    camera_matrix, dist_coeffs, None, new_camera_matrix, (w, h), cv2.CV_32FC1)

# Note: the maps should be applied to the original (distorted) image;
# remapping the output of cv2.undistort would apply the correction twice.
color_img = cv2.remap(img_bgr, mapx, mapy, cv2.INTER_LINEAR)

# then I save <color_img>

@MartyG-RealSense
Collaborator

Hi @cvki The RealSense camera hardware applies a Brown-Conrady distortion model at the point of capture, before the data is sent through the USB cable to the computer. It is not necessary to perform further distortion processing on the image.

Most RealSense stream formats have distortion applied to them, except for Y16 because it is used for camera calibration.

A small number of RealSense users prefer distortion not to be applied and so may use OpenCV's undistort function like you did to remove the distortion.
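For readers who want to see what the Brown-Conrady model actually does, here is a minimal plain-Python sketch applied to normalized camera coordinates. The `[k1, k2, p1, p2, k3]` coefficient ordering is an assumption based on the OpenCV convention that librealsense's Brown-Conrady coefficients are generally compatible with; it is an illustration, not the SDK's implementation.

```python
def brown_conrady_distort(x, y, coeffs):
    """Apply Brown-Conrady distortion to normalized camera coordinates
    (x, y). coeffs = [k1, k2, p1, p2, k3]: radial terms k1/k2/k3,
    tangential terms p1/p2 (OpenCV-style ordering assumed)."""
    k1, k2, p1, p2, k3 = coeffs
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity, which is why
# a stream reporting zero coefficients needs no undistortion step.
print(brown_conrady_distort(0.3, -0.2, [0, 0, 0, 0, 0]))  # (0.3, -0.2)
```

This also shows why the model is easy to apply in the distorted direction but needs iteration (or a precomputed map) to invert.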

The intrinsic values will vary depending on which resolution is being used, as demonstrated in the image below. The default color resolution is 1280x720.

Image

The intrinsic values for each individual camera are unique due to the manufacturing process at the factory. You can find the intrinsics for your own camera at each resolution by launching the rs-enumerate-devices tool in calibration information mode with the command below:

rs-enumerate-devices -c

In regard to scale: as you are using align processing (rs.align), depth data must be involved in your script. The default depth scale of the camera is 0.001. When RealSense depth data is imported into another tool, the image may be mis-scaled if the tool that the data is imported into is not using the same scale as the original RealSense data.

It is possible to reconfigure the scale of RealSense depth data to another value, such as 0.1 or 0.001. #10976 (comment) has information about changing the depth scale with pyrealsense2 code.
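To make the scale factor concrete, here is a small numpy sketch of the raw-to-meters conversion. The value 0.001 is assumed as the default scale mentioned above; on real hardware it should be queried from the depth sensor rather than hard-coded.

```python
import numpy as np

# Raw depth frames are 16-bit integers; multiplying by the depth scale
# converts them to meters, so a raw value of 1500 at the default scale
# of 0.001 means 1.5 m.
DEPTH_SCALE = 0.001  # assumed default; query it from the device in practice

raw_depth = np.array([[0, 1500], [3000, 65535]], dtype=np.uint16)
depth_m = raw_depth.astype(np.float32) * DEPTH_SCALE
print(depth_m)  # values in meters
```

If another tool assumes a different scale (say 0.1), every depth value is off by a constant factor, which matches the kind of mismatch described above.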

@cvki
Author

cvki commented Mar 11, 2025

@MartyG-RealSense Thank you for your careful answer, it has been very helpful to me.

@cvki
Author

cvki commented Mar 11, 2025

@MartyG-RealSense By the way, among the three different ways of querying the intrinsic parameters, the values obtained (especially fx and fy) differ:

1. Using the command rs-enumerate-devices -c on Ubuntu 20.04:

Image

2. Using the code:

config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, FPS)
pipe_profile = pipeline.start(config)
ret, frames = pipeline.try_wait_for_frames()
color_frame = frames.get_color_frame()
color_profile = color_frame.get_profile()
color_intrinsics = color_profile.as_video_stream_profile().get_intrinsics()

Image

3. Using the code below to align depth to color:

align_to = rs.stream.color
align = rs.align(align_to)
align_frames = align.process(frames)
frames = align_frames
frame_num = frames.frame_number
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
color_intrin = color_frame.get_profile().as_video_stream_profile().get_intrinsics()

Image

@MartyG-RealSense
Collaborator

The intrinsic values that are output can vary between retrieval methods. If in doubt about which set of values is correct, I recommend relying on the ones from rs-enumerate-devices -c.

@cvki
Author

cvki commented Mar 18, 2025

@MartyG-RealSense thanks!

@Abrahamh08

Are you sure that's right? deproject_pixel_to_point undistorts before projecting, so I have been using the color stream under the assumption that no correction has been applied to the image I receive.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 23, 2025

Hi @Abrahamh08 Line 49 of the SDK's rs_types.h source code file states that the Inverse Brown-Conrady distortion model (which the color stream uses) undistorts the image instead of distorting it.

RS2_DISTORTION_INVERSE_BROWN_CONRADY , /**< Equivalent to Brown-Conrady distortion, except undistorts image instead of distorting it */

The PR at #13833 that you commented on will be the best place to get advice about your question though.
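For context on the undistort-then-project behaviour being discussed, the core of deprojection can be illustrated with a minimal pinhole-only sketch (distortion handling omitted; the fx/fy/ppx/ppy values below are made up for the example, not taken from any real device):

```python
def deproject_pixel_to_point(pixel, depth, fx, fy, ppx, ppy):
    """Back-project a pixel plus a depth value (meters) to a 3D point in
    camera space, ignoring distortion. librealsense's
    rs2_deproject_pixel_to_point performs the same back-projection, but
    first corrects the pixel according to the profile's distortion model."""
    u, v = pixel
    x = (u - ppx) / fx  # normalized image coordinate
    y = (v - ppy) / fy
    return [depth * x, depth * y, depth]

# A pixel at the principal point always deprojects onto the optical axis.
print(deproject_pixel_to_point((640, 360), 2.0, 920.0, 920.0, 640.0, 360.0))
# [0.0, 0.0, 2.0]
```

This is why the distortion model attached to the stream profile matters: the same pixel coordinate deprojects to a different ray depending on whether a correction is applied first.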

@Abrahamh08

My question is now the same as the question in this issue. The original poster asked whether it is necessary to apply distortion correction to obtain an undistorted image. I am confused by the statement that the "camera model's hardware applies a Brown-Conrady distortion model at the point of capture before the data is sent through the USB cable to the computer", because I believe the color image is distorted and needs to be undistorted with the Inverse Brown-Conrady model, as you were saying, either per point of interest or via a distorted-to-undistorted pixel map, which is inefficient since the model is the inverse direction. In my use case, however, I am interested in the undistorted-to-distorted direction, so it works for me.

@Zauzolkov

It looks like this issue could be related to my #13864

@Zauzolkov

Zauzolkov commented Mar 27, 2025

@Abrahamh08:

I am confused by what you were saying about the "camera model's hardware applies a Brown-Conrady distortion model at the point of capture before the data is sent through the USB cable to the computer", because I believe the image is going to be distorted

There is a response on this topic:

Whilst the Left and Right IR images are rectified by the (Vision Processor) D4, the RGB image is not (this is not a bug). Nevertheless, the RGB calibration params are used when performing 2D-3D projections and frame alignments.

@Abrahamh08

That makes sense. The Left IR/depth distortion coefficients are zero (I didn't check Right IR), so it follows that the image is rectified beforehand.

@MartyG-RealSense
Collaborator

Does anyone require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
