
MeshDiffusion for condyles

Introduction

This is an adaptation of the official implementation of MeshDiffusion (ICLR 2023 Spotlight). Please refer to their project page for more details and interactive demos.

Inference

Conditional Generation

To generate samples, run the following command. The model writes the generated data to .npy files.

python main_diffusion.py --config configs/res64_cond.py --mode=uncond_gen --config.eval.eval_dir=/path/to/output_dir --config.eval.ckpt_path=ckp/conditional_checkpoint.pth --config.eval.gen_class 0

where config.eval.gen_class is the class label to generate.

--mode=cond_gen is for conditional generation from a single view of a mesh positioned in its canonical pose.

--mode=uncond_gen was originally unconditional generation; the code has been changed to condition on classes. (TODO: add a separate mode.)
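
Before converting, it can help to sanity-check the generated samples. Below is a minimal sketch, assuming the output is a NumPy array of grids saved under eval_dir (the exact file name and array shape depend on the config and are assumptions here):

import numpy as np

# Hypothetical path: use the actual file written under --config.eval.eval_dir
samples = np.load("/path/to/output_dir/sample.npy")

# Print shape and value range to confirm the grids look reasonable
print("shape:", samples.shape)   # e.g. (num_samples, channels, 64, 64, 64) -- an assumption
print("range:", samples.min(), samples.max())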

Then, transform the generated grid data into a VTK mesh by running the following:

python nvdiffrec/eval.py --config nvdiffrec/configs/res64.json --out-dir /path/to/output_dir --sample-path sample.npy --deform-scale 3 

where sample.npy is the .npy file generated by the model
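
To verify the converted mesh, here is a minimal sketch using the vtk Python package; it assumes eval.py writes a legacy-format polydata .vtk file, and the file name below is hypothetical:

import vtk

# Hypothetical output file name; use whatever eval.py writes into --out-dir
reader = vtk.vtkPolyDataReader()
reader.SetFileName("/path/to/output_dir/mesh.vtk")
reader.Update()

mesh = reader.GetOutput()
print("points:", mesh.GetNumberOfPoints())
print("cells:", mesh.GetNumberOfCells())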

Training

Preprocessing

Create a list of the paths of all ground-truth meshes in a JSON or CSV file (meshes.json or meshes.csv). Create a directory in which to save the processed data (e.g. data).
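
A minimal sketch for building meshes.csv is shown below, assuming one row per mesh with a path column and a class column (the column names, mesh extension, and directory are illustrative; match whatever fit_dmtets.py and convert_vtk2grid_file.py expect):

import csv
from pathlib import Path

mesh_dir = Path("/path/to/ground_truth_meshes")   # hypothetical location of the raw meshes

with open("meshes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["surf", "class"])            # assumed column names
    for path in sorted(mesh_dir.rglob("*.vtk")):  # adjust the extension to your mesh format
        writer.writerow([str(path), 0])           # replace 0 with the class label of each mesh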

Then run the DMTet fitting:

python nvdiffrec/fit_dmtets.py --config nvdiffrec/configs/res64.json --meta-path meshes.csv --out-dir data --index 0 --split-size 100000

where split-size is set to any number larger than the dataset size. For batch fitting with multiple jobs, set split-size to a suitable chunk size and assign a different index to each job (see the sketch below). Tune the resolutions of the first- and second-pass fitting in the config file if necessary.
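
Each job fits the chunk of split-size meshes selected by its index. The sketch below only illustrates how index and split-size pair up by launching the jobs one after another from Python; in practice you would submit each invocation as a separate cluster job. The chunk size and job count are assumptions:

import subprocess

split_size = 500   # assumed chunk size: number of meshes fitted per job
num_jobs = 4       # assumed job count: enough chunks to cover the whole dataset

for index in range(num_jobs):
    subprocess.run([
        "python", "nvdiffrec/fit_dmtets.py",
        "--config", "nvdiffrec/configs/res64.json",
        "--meta-path", "meshes.csv",
        "--out-dir", "data",
        "--index", str(index),
        "--split-size", str(split_size),
    ], check=True)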

Now convert the DMTet dataset (stored as Python dicts) into a dataset of 3D cubic grids:

python data/tets_to_3dgrid.py --resolution 64 --root data --source tets  --target grid --index 0

This reads the DMTets from data/tets and saves the resulting cubic grids in data/grid.

Create a CSV file listing the locations of all DMTet 3D cubic grid files, together with their associated class, for diffusion model training:

python convert_vtk2grid_file.py --grid_dir data/grid/ --in_csv meshes.csv --out data --split 1

--split 1 splits the output CSV file into training, validation and test CSVs.
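
If you want to build or inspect such a split yourself, the sketch below shows one way to cut a CSV of grid paths and classes into train/validation/test files with pandas; the input file name and the 80/10/10 ratios are assumptions, not necessarily what convert_vtk2grid_file.py does:

import pandas as pd

df = pd.read_csv("data/grids.csv")        # hypothetical CSV of grid paths and classes
df = df.sample(frac=1.0, random_state=0)  # shuffle before splitting

n = len(df)
train = df.iloc[: int(0.8 * n)]
val = df.iloc[int(0.8 * n): int(0.9 * n)]
test = df.iloc[int(0.9 * n):]

train.to_csv("data/train.csv", index=False)
val.to_csv("data/val.csv", index=False)
test.to_csv("data/test.csv", index=False)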

Train a diffusion model

python main_diffusion.py --mode=train --config=configs/res64_cond.py \
 --config.data.train_csv train.csv --config.data.val_csv val.csv \
 --config.eval.eval_dir output_dir --config.training.train_dir train_dir

where output_dir is the directory in which to save generated samples and train_dir is the directory in which to save model checkpoints.

Citation

@InProceedings{Liu2023MeshDiffusion,
    title={MeshDiffusion: Score-based Generative 3D Mesh Modeling},
    author={Zhen Liu and Yao Feng and Michael J. Black and Derek Nowrouzezahrai and Liam Paull and Weiyang Liu},
    booktitle={International Conference on Learning Representations},
    year={2023},
    url={https://openreview.net/forum?id=0cpM2ApF9p6}
}

Acknowledgement

This repo is adapted from https://github.com/NVlabs/nvdiffrec, https://github.com/yang-song/score_sde_pytorch and https://github.com/lzzcd001/MeshDiffusion
