-
Hello @StBaki, Thank you for the kind words! Let me start with some high-level comments:
The solution to almost all of these is to use a Path Replay Backpropagation / Radiative Backpropagation-style algorithm, as implemented in the Instant Radio Maps repository: https://github.com/NVlabs/instant-rm/blob/5c1803ed11cbd9725e9a19743ab8327b702107ae/instant_rm/tracer_prb.py
For the problem of merged shapes, the upcoming function-freezing feature of Dr.Jit should help eliminate a lot of the overhead; however, there is no timeline for when this will be released and made available in Sionna.
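To illustrate the core idea, here is a toy path-replay sketch in plain Python (not the actual instant-rm implementation): the forward pass computes the loss without ever building an autodiff graph, and the backward pass re-traces the same paths with the same random seed, accumulating the parameter gradient one factor at a time, so peak memory stays independent of the number of bounces.

```python
import random

def trace(theta, seed, n_paths=1000, n_bounces=5, adjoint=None):
    """Primal pass if adjoint is None, otherwise the gradient (replay) pass."""
    rng = random.Random(seed)            # identical random stream in both passes
    loss, grad = 0.0, 0.0
    for _ in range(n_paths):
        # Per-bounce "reflection coefficients" of this toy path
        coeffs = [theta * rng.random() for _ in range(n_bounces)]
        throughput = 1.0
        for c in coeffs:
            throughput *= c              # product of per-bounce coefficients
        loss += throughput
        if adjoint is not None:
            # Product rule, one factor at a time: d(c)/d(theta) = c / theta,
            # and the product of the remaining factors is throughput / c
            for c in coeffs:
                grad += adjoint * (throughput / c) * (c / theta)
    return loss, grad

loss, _ = trace(theta=0.5, seed=42)               # forward: no AD graph stored
_, grad = trace(theta=0.5, seed=42, adjoint=1.0)  # replay: same paths, gradient
```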
-
Dear @merlinND,
Thank you very much for your prompt reply and recommendations! Your comments align with our observations of increased time and memory usage, which we understand are intrinsic to the calibration process in the current version.
We therefore experimented with instant-rm, but observed similar behaviour. Specifically, running a radio map simulation with MapTracer in our scene takes around 2 seconds on our GPU. This is the map generation time alone, without backpropagation, and is higher than in Sionna, which takes 0.07 seconds in symbolic mode and 1-2 seconds in the Material-level calibration. When using PathlossMapRBPTracer, one pass takes around 8 seconds, which again exceeds the time Sionna requires per optimization iteration (~2 s for the map plus ~1.5 s for backpropagation). Consequently, we find that continuing our experiments in Sionna might be more suitable.
Note: To verify that instant-rm was configured properly, we also ran simulations with the provided etoile scene, where the observed iteration times were ~0.06 s (radio map only) and ~0.2 s (radio map plus backpropagation) for MapTracer and PathlossMapRBPTracer, respectively. We therefore believe that the increased times in instant-rm are attributable to the size and complexity of our scene.
Regarding the textured radio materials: as we did not find a straightforward way to distinguish materials from objects once objects are merged, are there any implementations you could point us to?
Thank you very much in advance!
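As a side note on methodology: since Dr.Jit evaluates lazily, wall-clock timings are only meaningful if evaluation is forced and the device is synchronized before reading the clock. We used a harness along the lines of the following sketch, where `run_tracer` is a placeholder for a MapTracer / PathlossMapRBPTracer invocation, not an actual instant-rm API:

```python
import time
import drjit as dr

def benchmark(run_tracer, n_warmup=2, n_runs=5):
    """Average wall-clock time per run of a Dr.Jit-based tracer callable."""
    for _ in range(n_warmup):
        dr.eval(run_tracer())       # warm-up: JIT compilation and caches
    dr.sync_thread()                # wait for pending device work
    t0 = time.perf_counter()
    for _ in range(n_runs):
        dr.eval(run_tracer())       # force the kernels to actually execute
    dr.sync_thread()                # ensure the device has finished
    return (time.perf_counter() - t0) / n_runs
```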
-
Dear Sionna Team,
First, we would like to thank you for your contribution to the community and for sharing such a valuable tool.
In the context of a research project, we are using Sionna for the calibration of radio coverage maps in urban environments, relying on real-world measurements. Based on this, we would like to share our experience and ask some questions to make sure that we are using Sionna optimally.
Geographical scene:
Our scene covers an area of 2.5 sq. km and contains ~8000 buildings. Our coverage maps use a resolution of 5 meters.
Calibration setup:
In all our experiments, we set loop_mode = "evaluated" (to enable the calibration) and use a trainable version of the tr38901 antenna model (including trainable gain, horizontal and vertical beamwidth, and steering angle). Radio map simulations (with trainable antenna parameters only) take ~1 second per iteration, plus ~0.5 seconds for backpropagation using dr.backward, which is suitable for our use case. We note that with loop_mode = "symbolic", i.e., no calibration, the simulation takes ~0.04 and 0.6 seconds on GPU and CPU, respectively.
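For reference, our gradient step follows the standard Dr.Jit pattern sketched below. The stand-in expression replaces the actual radio-map computation, `gain_db` is an illustrative parameter name (not the real Sionna RT attribute), and `llvm_ad_rgb` is simply an autodiff-capable Mitsuba variant:

```python
import drjit as dr
import mitsuba as mi

mi.set_variant("llvm_ad_rgb")   # any *_ad_* variant enables autodiff

gain_db = mi.Float(8.0)         # illustrative trainable antenna parameter
dr.enable_grad(gain_db)         # track derivatives through the computation

# Stand-in for the differentiable radio-map computation: any Dr.Jit
# expression of the trainable parameter behaves the same way under AD.
rm = dr.power(10.0, gain_db / 10.0) * mi.Float(0.1, 0.2, 0.3)
measured = mi.Float(0.5, 1.0, 1.5)

loss = dr.mean(dr.square(rm - measured))
dr.backward(loss)               # reverse-mode AD through the simulation
print(dr.grad(gain_db))         # d(loss)/d(gain_db)
```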
However, we are unsure how best to implement and include material-parameter optimization in the calibration in our setup. Specifically, we have tested the following two approaches:
1) Material-level calibration: we jointly calibrate all objects sharing the same material, using materials_merged = True and assigning two trainable parameters (permittivity and conductivity) to each material type. In total, our scene contains 7 distinct materials (i.e., 14 trainable parameters), and each simulation takes ~1.5 seconds, plus ~0.5 seconds for backpropagation, which is still suitable for our use case.
2) Building-level calibration: to further improve accuracy, we tried assigning distinct trainable parameters (conductivity and permittivity) to each building in our scene, using materials_merged = False. However, simulation times become considerably longer (>15 minutes), exceeding the maximum time budget of our use case. Alternatively, we added trainable materials only to the buildings in proximity to locations with measurements, resulting in around 3,000 buildings whose parameters we optimized (using merge_shapes_exclude_regex and predefined flags that we associated with these buildings). Even so, a single ray-tracing simulation takes ~700 seconds per iteration in our scene and backpropagation takes around 1,000 seconds*, which still exceeds our time budget. Based on these experiments, we have decided to discard this option due to its scalability challenges in our large-scale scene.
*Note that all reported times are from executions on a 112-core CPU: with loop_mode set to "evaluated" in our calibration experiments, the simulations did not fit in our GPU (24 GB Nvidia RTX A5000), while "regular" simulations with loop_mode set to "symbolic" run on the GPU without problems.
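Our scene-loading step looks roughly like the sketch below; the building-name prefix `calib-` is hypothetical, chosen only to illustrate how we keep the measurement-adjacent buildings out of the merged geometry:

```python
from sionna.rt import load_scene

# Buildings that should keep their own (trainable) material are assumed to
# share a common name prefix in the scene file; everything else gets merged.
scene = load_scene("my_city.xml",
                   merge_shapes=True,                       # merge the rest
                   merge_shapes_exclude_regex=r"calib-.*")  # keep these apart
```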
Results:
We summarize below our best results so far:
Questions:
In the context of optimizing material parameters during the calibration process, do you expect Sionna RT to scale effectively to our scene's size and complexity when fine-tuning materials at a more granular level, i.e., beyond jointly optimizing parameters for objects that share the same initial material (our "Material-level calibration" method)? Furthermore, is it reasonable for backpropagation with dr.backward to be that time-consuming? We would appreciate it if you could review the implementation details of our "Building-level calibration", summarized in the list below (a minimal sketch of the loop follows the list), and help us identify potential code optimizations that could make it more scalable and reduce execution time in our scene.
• Adam optimizer with lr=0.1
• Set trainable properties only for the 3000 closest buildings, using merge_shapes_exclude_regex and custom flags.
• Compute the radio map by launching 5×10^7 rays (LoS and reflection enabled; refraction and diffuse scattering turned off), tracing up to 5 consecutive reflections
• Estimate the MSE over the radio map points for which (a) we have measured data and (b) sionna.rt provides coverage
• Backpropagate with dr.backward and apply the optimizer update with opt.step()
• Set object materials in the scene to opt[parameters]
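Putting these steps together, our loop is roughly the sketch below. The RadioMapSolver argument names and the material attribute names are written from memory and may differ from the exact Sionna RT API, and `measured`, `mask`, and `n_meas` (the measured values, the measured-and-covered cell selector, and the measured-cell count) are assumed to be precomputed:

```python
import drjit as dr
import mitsuba as mi
from sionna.rt import load_scene, RadioMapSolver

scene = load_scene("my_city.xml", merge_shapes=True,
                   merge_shapes_exclude_regex=r"calib-.*")
solver = RadioMapSolver()

# Register one (permittivity, conductivity) pair per radio material with Adam;
# the attribute names below are assumptions, check the RadioMaterial API.
opt = mi.ad.Adam(lr=0.1)
for name, mat in scene.radio_materials.items():
    opt[f"{name}.eta"] = mat.relative_permittivity
    opt[f"{name}.sigma"] = mat.conductivity

for it in range(100):
    # Write the optimizer's current values back into the scene materials
    for name, mat in scene.radio_materials.items():
        mat.relative_permittivity = opt[f"{name}.eta"]
        mat.conductivity = opt[f"{name}.sigma"]

    # 5e7 rays, up to 5 reflections, LoS + specular reflection only
    rm = solver(scene, samples_per_tx=int(5e7), max_depth=5,
                los=True, specular_reflection=True,
                refraction=False, diffuse_reflection=False)

    # MSE restricted to cells that are both measured and covered
    pg = rm.path_gain.array                     # flattened map values
    err = dr.select(mask, pg - measured, 0.0)
    loss = dr.sum(dr.square(err)) / n_meas

    dr.backward(loss)   # gradients of the loss w.r.t. all opt parameters
    opt.step()          # Adam update
```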
Are there any other best practices you would recommend for further optimizing radio coverage maps in our large-scale urban scene? For example, is there an implementation of neural materials in the new Sionna version that builds on Mitsuba and Dr.Jit instead of TensorFlow?
We would greatly appreciate it if you could guide us through the two questions above, as this can help us make sure that we are using an optimal Sionna-based implementation for our research project. Thank you very much in advance for your time and consideration!