
Blended Point Cloud Diffusion for Localized Text-Guided Shape Editing

Etai Sella1, Noam Atia1, Ron Mokady2, Hadar Averbuch-Elor3

1 Tel Aviv University 2 BRIA AI 3 Cornell University

This is the official PyTorch implementation of BlendedPC.

arXiv: https://arxiv.org/abs/2507.15399

Abstract

Natural language offers a highly intuitive interface for enabling localized, fine-grained edits of 3D shapes. However, prior methods struggle to preserve global coherence while locally modifying the input 3D shape.

We introduce an inpainting-based framework for editing shapes represented as point clouds. Our approach leverages foundation 3D diffusion models for localized shape edits, adding structural guidance through partial conditional shapes to preserve global identity. To enhance identity preservation within edited regions, we propose an inference-time coordinate blending algorithm. This algorithm balances reconstruction of the full shape with inpainting over progressive noise levels, enabling seamless blending of original and edited shapes without requiring costly and inaccurate inversion.

Extensive experiments demonstrate that our method outperforms existing techniques across multiple metrics, measuring both fidelity to the original shape and adherence to textual prompts.
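
For intuition, below is a minimal Python sketch of the coordinate blending idea: at each reverse-diffusion step, coordinates outside the edited region are replaced with a re-noised copy of the original shape, so edits stay localized. This is a schematic illustration with placeholder names (denoise_step, add_noise, mask), not the actual implementation; see the paper and run_inference.py for the real procedure.

import torch

def blended_denoise(x_t, original, mask, timesteps, denoise_step, add_noise):
    # x_t: noisy point cloud (N, 3); original: input shape (N, 3)
    # mask: (N, 1) tensor, 1 inside the edited region, 0 elsewhere
    for t in timesteps:                           # progressive noise levels
        x_t = denoise_step(x_t, t)                # one reverse-diffusion step
        x_orig_t = add_noise(original, t)         # re-noise original to level t
        x_t = mask * x_t + (1 - mask) * x_orig_t  # blend coordinates
    return x_t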


Getting Started

Cloning the repository

git clone git@github.com:TAU-VAILab/BlendedPC.git
cd BlendedPC

Setting up the environment

conda create --name blended-pc -y python=3.11
conda activate blended-pc
pip install -e .

Running the Demo

Run one of the following scripts to test our "chair", "lamp", or "table" models:

bash demos/chair_demo.sh 
bash demos/lamp_demo.sh 
bash demos/table_demo.sh 

Model checkpoints are automatically downloaded from the Hugging Face Hub by default.
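
If you prefer to pre-fetch the checkpoints yourself (e.g. for offline machines), you can use huggingface_hub's snapshot_download. The repo id below is a placeholder, not the actual model repo; check the demo scripts for the repo BlendedPC actually uses.

from huggingface_hub import snapshot_download

# Downloads the full model repo to the local HF cache and returns its path.
local_dir = snapshot_download(repo_id="<HF-REPO-ID>")  # placeholder repo id
print(local_dir)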

Expected Outputs:

  • input.png: The original input shape
  • reconstruction.png: Output of the model using the "copy" prompt
  • masked.png: Input shape with masked regions
  • output.png: Final output after editing

Using other shapes from ShapeTalk

Download the ShapeTalk dataset from here.
Then run the script with your desired parameters:

python run_inference.py --prompt <YOUR-PROMPT> --shape_category <SHAPE-CATEGORY> --input_path <INPUT-PATH> --part <SHAPE-PART>

Please refer to the demo scripts above for examples of how to set these arguments; a sample invocation is sketched below.
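
For example (the argument values here are illustrative placeholders, not tested inputs; use the demo scripts as the reference for valid categories, parts, and input formats):

python run_inference.py \
    --prompt "make the legs of the chair thicker" \
    --shape_category chair \
    --input_path data/shapetalk/chair/example.npz \
    --part legs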


Training a Model

Coming soon...


Citation

If you find our work useful, please consider citing:

@misc{sella2025blendedpointclouddiffusion,
      title={Blended Point Cloud Diffusion for Localized Text-guided Shape Editing}, 
      author={Etai Sella and Noam Atia and Ron Mokady and Hadar Averbuch-Elor},
      year={2025},
      eprint={2507.15399},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2507.15399}, 
}

Acknowledgements

We thank the authors of Point-E for their outstanding codebase, which served as a foundation for this project.
