# Digital Twin: Telematic Control and VR Generated by NeRF and 3DGS in Rescue Tasks




3DGS robustness tests[^1]. From left to right and top to bottom: thermal camera, underwater, smoke, flowers (complex geometry). Images inside the Unreal Engine simulation. Source: Author
Deployment of the technology in a simulated rescue situation. Source: Author
In this repository you will find:
- The source code of the nodes and the launcher, with the ROS parameters required for communication between the robot and Unreal Engine (see the sketch after this list).
- The complete Unreal Engine 5.2 project used to develop this work, including the plugins with the modifications described in the report.
- The installation and usage manuals for the tools involved.
- A compilation of the basic ROS concepts and commands required for the project.
- Link: https://oa.upm.es/83722/
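For orientation, a hypothetical invocation of that launcher is sketched below; the package and launch-file names are placeholders, not the actual names in the repository (those are given in the manuals).

```bash
# Hypothetical names: the real package and launch file are documented in the
# repository's manuals. The launcher loads the ROS parameters (for example,
# the host and port Unreal Engine connects to) and starts the nodes.
roslaunch ue_rescue_bridge robot_ue_communication.launch
```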
In addition, the YouTube channel linked below hosts the first tests carried out with instant-ngp, the robustness tests of 3D-model generation in environments considered difficult, and the demonstration video of the technology deployment.
- YouTube channel: https://www.youtube.com/watch?v=sjYeZrFqgSc&t
The list of videos found on the YouTube channel is as follows:
- Demonstration of the complete workflow
- instant-ngp - Yellow Flowers Training
- 3DGS Robustness Tests - Environments with repetitive elements
- 3DGS Robustness Tests - Small objects
- 3DGS Robustness Tests - Elements with complex geometries
- 3DGS Robustness Tests - Scenes with smoke or fog
- 3DGS Robustness Tests - Night scenes
- 3DGS Robustness Tests - Thermal camera recordings
- 3DGS Robustness Tests - Scenes with reflections and refraction
To illustrate the starting point, Figure 0.1 shows a typical training scenario for digital twins (0.1a) and the state of the art in teleoperation of robotic systems with virtual-reality interfaces at the Robotics and Cybernetics Laboratory (ROBCIB) of CAR ETSII - UPM (0.1b); Figure 0.2 shows examples of 3D models of the complex physical environments in which these digital twins are intended to be deployed.


Figures 0.1a and 0.1b: Examples of digital twins in artificial or unrealistic environments. Sources: [^2] and [^3].

Figure 0.2: Examples of 3D models of complex environments generated with 3DGS. Source: [^1]
This research seeks to demonstrate that these 3D reconstruction technologies can be integrated into rescue operations in emergency situations, providing operators with an immersive and detailed visualization of the affected environment.
The accurate and realistic reconstruction of three-dimensional environments from two-dimensional images, in which the digital twins of the deployed robots can operate, also makes it possible to simulate potential catastrophe scenarios.
This not only enables a faster and more effective response but also reduces the risk to rescue teams.
The main contributions of this project are the following:
- An optimal combination of tools for obtaining a 3D model compatible with the simulation of the robots' digital twins.
- An evaluation of the robustness and deployment time of the technology in realistic application scenarios.
- Compatibility with the current research lines of the Robotics and Cybernetics Laboratory (ROBCIB) of CAR ETSII UPM-CSIC, in particular the simulation and telematic control of robots through virtual reality, enabling closer and safer collaboration between operators and robots.
The combination of tools used in this project allows the technology to be deployed in approximately an hour and a half for a simulated rescue situation.
After discarding instant-ngp[^4] and Unity due to incompatibilities, the tools listed below were chosen. Figure 0.3 shows the complete workflow diagram for these tools, and Table 0.1 shows their deployment times.
- FFmpeg[^5], for preprocessing, and COLMAP[^6] extended with hloc[^7], for obtaining the photogrammetry (a command-line sketch of the full pipeline follows this list).
- Splatfacto[^8] (the Nerfstudio[^8] method for 3DGS[^1]) and volinga-model[^9], for generating the 3D model of the scene.
- Volinga Suite[^9], for compatibility with Unreal Engine (UE).
- UE together with the Volinga plugin[^9], for rendering the scene in a virtual-reality environment compatible with the simulation of the robots' digital twins.
- Rosbridge[^10] and ROSIntegration[^11], for compatibility with current research lines that use the ROS standard (see the launch sketch below).
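To make the workflow concrete, the following is a minimal command-line sketch of the pipeline, assuming a single capture video named `scene.mp4` and default settings; flags and method names may differ between tool versions, and the Volinga step in particular is an assumed usage that should be checked against the Volinga documentation.

```bash
# 1) Preprocessing: extract frames from the capture video with FFmpeg.
#    The 2 fps sampling rate is illustrative; choose one that covers the scene.
mkdir -p frames
ffmpeg -i scene.mp4 -vf fps=2 frames/%04d.jpg

# 2) Photogrammetry: Nerfstudio's ns-process-data wraps COLMAP and can use
#    hloc as the structure-from-motion backend.
ns-process-data images --data frames/ --output-dir processed/ --sfm-tool hloc

# 3) Scene model: train the Splatfacto (3DGS) method on the processed data.
ns-train splatfacto --data processed/

# 4) Volinga: volinga-model registers an additional Nerfstudio method whose
#    output can be imported into Volinga Suite and the UE plugin (assumed usage).
ns-train volinga --data processed/
```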
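On the ROS side, ROSIntegration's documented setup relies on a rosbridge TCP endpoint in BSON-only mode; the sketch below shows that launch plus an illustrative topic check (the topic name is not taken from this repository).

```bash
# Start rosbridge in BSON-only TCP mode, as required by ROSIntegration
# (9090 is the rosbridge default port).
roslaunch rosbridge_server rosbridge_tcp.launch bson_only_mode:=True

# Illustrative check that messages are flowing; the topic name below is
# hypothetical, not taken from this repository.
rostopic echo /unreal/robot_pose
```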

Figure 0.3: Complete workflow. Source: Author

Table 0.1: Deployment times for the complete workflow. Source: Author
The appearance of the deployed technology during the demonstration can be seen in the images in Figure 0.4.
Footnotes

[^1]: Kerbl, B., Kopanas, G., Leimkühler, T., and Drettakis, G. (Jul. 2023). 3D Gaussian Splatting for Real-Time Radiance Field Rendering. In: ACM Transactions on Graphics 42.4. URL: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
[^2]: Rudin, N., Hoeller, D., Reist, P., and Hutter, M. (2021). Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning. In: ArXiv abs/2109.11978. URL: https://api.semanticscholar.org/CorpusID:237635100
[^3]: Cruz Ulloa, C. (Mar. 2024). Quadrupedal Robots in Search and Rescue: Perception and Teleoperation. DOI: 10.20868/UPM.thesis.81769. URL: https://oa.upm.es/81769/
[^4]: Müller, T., Evans, A., Schied, C., and Keller, A. (Jul. 2022). Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. In: ACM Trans. Graph. 41.4, 102:1-102:15. DOI: 10.1145/3528223.3530127. URL: https://doi.org/10.1145/3528223.3530127 and https://nvlabs.github.io/instant-ngp/
[^5]: Tomar, S. (2006). Converting video formats with FFmpeg. In: Linux Journal 2006.146, p. 10. URL: https://www.ffmpeg.org/
[^6]: Schönberger, J. L., Zheng, E., Pollefeys, M., and Frahm, J.-M. (2016). Pixelwise View Selection for Unstructured Multi-View Stereo. In: European Conference on Computer Vision (ECCV). URL: https://colmap.github.io/
[^7]: Sarlin, P.-E., Cadena, C., Siegwart, R., and Dymczyk, M. (2019). From Coarse to Fine: Robust Hierarchical Localization at Large Scale. In: CVPR. URL: https://github.com/cvg/Hierarchical-Localization
[^8]: Tancik, M. et al. (2023). Nerfstudio: A Modular Framework for Neural Radiance Field Development. In: ACM SIGGRAPH 2023 Conference Proceedings. SIGGRAPH '23. URL: https://docs.nerf.studio/
[^9]: Volinga development team (2023). Volinga Suite, volinga-model and Volinga plugin. URL: https://volinga.ai/ and https://github.com/Volinga/volinga-model
[^10]: Crick, C., Jay, G., Osentoski, S., Pitzer, B., and Jenkins, O. C. (2011). ROSbridge: ROS for Non-ROS Users. In: Proceedings of the Robotics: Science and Systems Conference (RSS). URL: http://rosbridge.org
[^11]: Schmeisser, M. and Suero, M. (2018). ROSIntegration: Connecting Unreal Engine with ROS for Realistic Simulations. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). URL: https://github.com/code-iai/ROSIntegration