In close-range underwater interactions, two kinds of object information are needed: 3D shape and surface information.
3D shape can be obtained with sonar, but precise surface information such as texture and color can only be captured by a camera. Underwater (UW), however, three main issues (attenuation, scattering, and turbidity) cause camera measurements to vary with time and place.
To avoid these UW measurement issues, the surface information can be represented by albedo, the fraction of incident light that is diffusely reflected by a body (following NBUV, CVPR 2017).
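The idea can be illustrated with a minimal sketch: under a simplified optical model (assuming a Lambertian surface, a single light source, and exponential attenuation with distance), the observed intensity factors into albedo times a known shading term, so the albedo can be recovered by inversion. The model and symbols below are illustrative assumptions, not the exact formulation used in the project.

```python
import numpy as np

def estimate_albedo(intensity, n_dot_l, light_power, beta, distance):
    """Invert I = rho * (n.l) * E * exp(-beta * d) for the albedo rho.

    Assumed simplified model: Lambertian shading (n.l), light power E,
    and exponential underwater attenuation exp(-beta * d).
    """
    shading = n_dot_l * light_power * np.exp(-beta * distance)
    return intensity / np.maximum(shading, 1e-8)  # guard against divide-by-zero

# Synthetic check: render a pixel with a known albedo, then recover it.
rho_true = 0.6
I = rho_true * 0.9 * 2.0 * np.exp(-0.1 * 1.5)
rho_est = estimate_albedo(I, n_dot_l=0.9, light_power=2.0, beta=0.1, distance=1.5)
```

Because every factor in the shading term is assumed known, the inversion is exact here; in practice noise and model mismatch make single-image inversion fragile.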
With albedo, we can estimate an unknown object's surface texture and use it as a cost function for next-best sensor and light view planning of an automated light-camera system (image below), also following NBUV.
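As a hedged sketch of how such planning could work, a greedy next-best-view step scores each candidate light-camera pose by how many surface points it covers that previous views missed. The pose names and coverage sets below are hypothetical placeholders, not part of the original system.

```python
def next_best_view(candidates, covered):
    """Greedy NBV step: pick the candidate pose with the largest coverage gain.

    candidates: dict mapping pose id -> set of surface-point ids that the
                pose observes well (hypothetical placeholder data).
    covered:    set of surface-point ids already seen by previous views.
    """
    def gain(pose):
        return len(candidates[pose] - covered)
    return max(candidates, key=gain)

# Hypothetical example: pose_b adds the most points not yet covered.
candidates = {"pose_a": {1, 2, 3}, "pose_b": {3, 4, 5, 6}, "pose_c": {1, 6}}
covered = {1, 2}
best = next_best_view(candidates, covered)
```

A real planner would replace the set-coverage gain with an albedo-based cost, but the greedy select-max-gain structure is the same.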
In this project, I implemented albedo estimation to understand the physical components of UW visual environments.
This work was presented at the 20th Korea Robotics Society Annual Conference Poster Session under the title "Preliminary Study on Active 3D Reconstruction for Underwater Close-range Interaction with Lighting-Camera Systems."
- Surface information of target objects is essential for underwater task automation.
- In a simplified optical model of the underwater visual environment, surface information is represented by surface reflectance.
- The functionality of surface reflectance estimation was validated through a pipeline implementation in a simulator.
- Albedo estimation from a single image at an arbitrary pose is insufficient; an optimized combination of poses and multiple images is required.
- Modeling more realistic underwater visual environments, including conditions such as turbidity.
- Target objects with more complex structures and surface textures.
- Research on optimal camera and lighting positions for automated perception, including Next Best View (NBV) and coverage path planning.
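The multi-image requirement noted above can be sketched as a least-squares problem: each pose k contributes one linear equation I_k = rho * s_k per pixel, where s_k is that pose's known shading factor, and stacking the poses gives an estimate of rho that is more robust to noise than inverting a single image. The numbers below are synthetic and illustrative only.

```python
import numpy as np

def albedo_least_squares(intensities, shadings):
    """Estimate rho minimizing sum_k (I_k - rho * s_k)^2, i.e. rho = (s.I)/(s.s)."""
    s = np.asarray(shadings, dtype=float)
    I = np.asarray(intensities, dtype=float)
    return float(s @ I / (s @ s))

# Synthetic multi-pose observations of one pixel with known true albedo.
rho_true = 0.45
s = np.array([0.8, 1.1, 0.6, 0.95])                        # shading per pose
noise = np.array([0.01, -0.02, 0.015, -0.005])             # measurement noise
I = rho_true * s + noise
rho_est = albedo_least_squares(I, s)
```

Averaging over poses in this way suppresses per-image noise, which is one reason an optimized pose combination with multiple images outperforms a single arbitrary view.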