Description
Dear @xinggangw
Hi, and thank you for sharing such an impressive piece of work! Dynamic-2D GS is an incredible contribution to the field, and I deeply appreciate the effort you have put into making this research open and accessible.
I have been experimenting with Dynamic-2D GS on the NeRF-DS dataset, which consists of monocular, static-camera inputs. During my experiments, I noticed that the rendered normals and depth maps appear quite blurry, and training sometimes crashes unexpectedly.
I wanted to ask whether you have tried training on the NeRF-DS dataset yourselves. If so, is this quality of normals and depth maps expected, or might there be an issue with my setup?
I would greatly appreciate any insights or suggestions you could provide. Thank you again for your excellent work, and I look forward to your response.
Best regards,
Longxiang-ai
(Results on the "as" scene of the NeRF-DS dataset; panels from left to right: GT, Render, Depth, Depth Normal, Normal)