
PyTorchGeoNodes is a differentiable module for understanding 3D objects using interpretable shape programs.
PyTorchGeoNodes enables differentiable procedural graphs in PyTorch that reimplement the functionality of Geometry Nodes in Blender. More precisely, for the different node types of Geometry Nodes, we implement corresponding node types with the same functionality using PyTorch, and PyTorch3D in the case of geometric operations.
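To illustrate the core idea, here is a minimal, hypothetical sketch (not the actual PyTorchGeoNodes API) of what a differentiable node looks like: a callable that maps parameter tensors to geometry with plain PyTorch ops, so gradients flow back to the shape parameters.

```python
import torch

class ScaleNode:
    """Toy counterpart of a geometry node: scales vertices per axis.

    Hypothetical illustration only; class and argument names are not
    from the PyTorchGeoNodes codebase.
    """
    def __call__(self, vertices: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        # Broadcasting over the last dimension keeps the op differentiable.
        return vertices * scale

verts = torch.tensor([[1.0, 1.0, 1.0], [-1.0, 2.0, 0.5]])
scale = torch.tensor([2.0, 1.0, 1.0], requires_grad=True)  # shape parameter
out = ScaleNode()(verts, scale)
out.sum().backward()  # gradients w.r.t. the shape parameter are available
print(scale.grad)
```

Because every node is built from differentiable tensor operations, whole node graphs can be optimized end-to-end with standard PyTorch optimizers.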
We provide algorithms for fitting the parameters of PyTorchGeoNodes programs / procedural models, designed in Blender, to synthetic scenes and to scenes of the ScanNet dataset. In comparison to traditional CAD model retrieval methods, using shape programs for 3D reconstruction allows for reasoning about the semantic properties of reconstructed objects, easy editing, a low memory footprint, and more.
- June 2025 - Add integration of Gaussian Splatting into PyTorchGeoNodes.
- March 2025 - Add experiments for fitting objects from ScanNet scenes and data preparation scripts.
- January 2025 - Add genetic algorithm for fitting shape parameters to target 3D objects.
- September 2024 - First release that includes a baseline combining coordinate descent and gradient descent for fitting shape parameters to synthetic scenes.
Step (1) From the root directory of this repository, create a new conda environment:
conda env create -f environment.yml
conda activate pytorchgeonodes
Step (2) Adjust the paths in configs/general_config.yaml:
experiments_path_base: '<Base-Path-To-Experiments>'
processed_data_path: '<Path-Where-Processed-Decision-Variables-Are-or-Will-Be-Saved>'
The following script demonstrates how to use PyTorchGeoNodes with the Adam optimizer to fit the shape parameters of a shape program, designed in Blender, to a synthetic scene:
python demo_optimize_pytorch_geometry_nodes.py --experiment_path demo_outputs/demo_optimize_pytorch_geometry_nodes
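The fitting idea behind the demo can be sketched as follows. This is a toy, hypothetical example (the function and variable names are not from the demo script): the "shape program" is just a box whose corners are derived differentiably from its size parameters, and Adam minimizes the distance to a target shape.

```python
import torch

def box_corners(size: torch.Tensor) -> torch.Tensor:
    """Differentiable toy shape program: 8 corners of an axis-aligned box."""
    signs = torch.tensor([[sx, sy, sz] for sx in (-1.0, 1.0)
                          for sy in (-1.0, 1.0) for sz in (-1.0, 1.0)])
    return 0.5 * size * signs  # (8, 3) corner coordinates

target_size = torch.tensor([0.6, 0.5, 0.9])  # ground-truth parameters
target = box_corners(target_size)

size = torch.ones(3, requires_grad=True)     # initial guess
optimizer = torch.optim.Adam([size], lr=0.05)
for _ in range(300):
    optimizer.zero_grad()
    loss = ((box_corners(size) - target) ** 2).mean()
    loss.backward()
    optimizer.step()
print(size.detach())  # should approach target_size
```

In the actual demo, the loss compares rendered or reconstructed geometry against scene observations, but the optimization loop has the same structure.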
Step (1) Generate a synthetic dataset of scenes with chairs.
python generate_synthetic_dataset.py --category chair --num_scenes 10 --dataset_path synthetic_experiments/synthetic_dataset
Step (2) Preprocess shape parameters:
python preprocess_dv_values.py --category chair
Step (3) Run the following command to reconstruct the shape parameters of the chairs using the genetic algorithm. Use --skip_refinement to run without refinement:
python reconstruct_synthetic_objects.py --category chair --dataset_name synthetic_dataset --experiment_path synthetic_experiments/ --method genetic
[Note] The current settings in config/genetic_settings.yaml were selected for accurate inference. By modifying parameters such as population_size, num_offsprings, and num_generations, you can reduce computation time at the cost of accuracy.
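To make the role of these parameters concrete, here is a simplified, hypothetical sketch of a genetic search over shape parameters (much simpler than the configurable algorithm in this repository; the fitness function here is just a toy reconstruction error):

```python
import random

random.seed(0)

def fitness(params):
    """Toy stand-in for reconstruction error against a target shape."""
    target = [0.6, 0.5, 0.9]
    return sum((p - t) ** 2 for p, t in zip(params, target))

# The three settings mentioned above, with illustrative values.
population_size, num_offsprings, num_generations = 20, 10, 40

population = [[random.uniform(0.1, 1.5) for _ in range(3)]
              for _ in range(population_size)]

for _ in range(num_generations):
    population.sort(key=fitness)
    # Keep the fittest individuals as parents (elitism) ...
    parents = population[:population_size - num_offsprings]
    # ... and fill the rest of the population with mutated offspring.
    offspring = [[p + random.gauss(0.0, 0.05) for p in random.choice(parents)]
                 for _ in range(num_offsprings)]
    population = parents + offspring

best = min(population, key=fitness)
print(best, fitness(best))
```

Larger populations and more generations explore the parameter space more thoroughly (better accuracy), while smaller values finish faster.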
Alternatively, you can change --method to run reconstruction using the coordinate descent baseline:
python reconstruct_synthetic_objects.py --category chair --dataset_name synthetic_dataset --experiment_path synthetic_experiments/ --method genetic --disable_prior --disable_post_tree
Step (4) Run the evaluation script:
python evaluate_synthetic_sp_parameters.py --category chair --synthetic_dataset_path synthetic_experiments/synthetic_dataset --experiments_path synthetic_experiments/ --experiment_name synthetic_dataset_genetic --solution_name 0best_0_solution.json
Step (1a) We provide two processed demo scenes to get you started. Note that by downloading this file you agree to the ScanNet terms of use, and to the ScanNet and SCANnotate licenses. The folder structure is as follows:
|-- DEMO_PGN_DATA
|-- scannet_scans # PATH-TO-SCANNET-SCANS
|-- scannotate_dataset # PATH-TO-SCANNOTATE-SCENES
|-- annotations
|-- Scannotate_2d_masks # PATH-TO-SCANNOTATE-MASKS-SCENES
Step (1b) If you want to experiment with more SCANnotate data, you need to preprocess the dataset first. Follow the guide in README.md for setting up ScanNet and SCANnotate, and for preprocessing data.
Step (2) Adjust paths in configs/scannotate_config.yaml:
scannet_processed_path: '<PATH-TO-SCANNET-SCANS>'
scannet_ply_path: '<PATH-TO-SCANNET-SCANS>'
scannotate_dataset_path: '<PATH-TO-SCANNOTATE-SCENES>'
scannotate_masks_path: '<PATH-TO-SCANNOTATE-MASKS-SCENES>'
Step (3) If you have not done so, preprocess shape parameters for OBJ_CAT in [chair, sofa, table]:
python preprocess_dv_values.py --category <OBJ_CAT>
Step (4) Run the following command to reconstruct the shape parameters of the objects using the genetic algorithm, for the selected validation scenes. Use --skip_refinement to run without refinement:
python reconstruct_scannotate_objects.py --category <OBJ_CAT> --experiment_path scannotate_experiments/ --method genetic
[Note] The current settings in config/genetic_settings.yaml were selected for accurate inference. By modifying parameters such as population_size, num_offsprings, and num_generations, you can reduce computation time at the cost of accuracy.
Step (5) Run the evaluation script on all categories:
python evaluate_scannotate_sp_parameters.py --experiments_path scannotate_experiments/ --experiment_name EXP_NAME --solution_name 0best_0_solution.json
Step (0) Switch to the procedural_gs branch and follow the README.md from there:
git checkout procedural_gs
- When designing shape programs with the Geometry Nodes feature of Blender, make sure that you use Blender 4.0. The structure of .blend files changed in newer versions of Blender, which will likely lead to errors when compiling them to PyTorchGeoNodes code.
- When designing shape programs with the Geometry Nodes feature of Blender, check the list of supported nodes first. Adding support for new nodes should not be difficult as long as you understand their specific functionality.
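As a rough illustration of what adding a node could involve, here is a hypothetical sketch (the actual class interface in this repository may differ): a new node type mirrors the computation of its Blender counterpart using differentiable PyTorch operations.

```python
import torch

class MathMultiplyNode:
    """Hypothetical sketch of a node mirroring Blender's Math node
    in 'Multiply' mode; not the repository's actual node interface."""
    def forward(self, value_a: torch.Tensor, value_b: torch.Tensor) -> torch.Tensor:
        # Element-wise multiply, differentiable like all torch ops.
        return value_a * value_b

node = MathMultiplyNode()
result = node.forward(torch.tensor(3.0), torch.tensor(4.0))
print(result)
```

The main work when supporting a new node is matching the semantics of each of its Blender sockets and modes with equivalent tensor operations.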
We introduced PyTorchGeoNodes with the goal of creating a framework for developing differentiable shape programs and enabling their application to tasks in 3D scene understanding. We encourage and welcome contributions and integrations of new functionality into PyTorchGeoNodes.
If you find this code useful, please consider citing our paper:
@inproceedings{stekovic2025pytorchgeonodes,
author = {Stekovic, Sinisa and Artykov, Arslan and Ainetter, Stefan and D'Urso, Mattia and Fraundorfer, Friedrich},
title = {PyTorchGeoNodes: Enabling Differentiable Shape Programs for 3D Shape Reconstruction},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2025}
}