Etai Sella1, Noam Atia1, Ron Mokady2, Hadar Averbuch-Elor3
1 Tel Aviv University 2 BRIA AI 3 Cornell University
This is the official PyTorch implementation of BlendedPC (Blended Point Cloud Diffusion for Localized Text-guided Shape Editing).
Natural language offers a highly intuitive interface for specifying localized, fine-grained edits of 3D shapes. However, prior methods struggle to preserve global coherence while locally modifying the input 3D shape.
We introduce an inpainting-based framework for editing shapes represented as point clouds. Our approach leverages foundation 3D diffusion models for localized shape edits, adding structural guidance through partial conditional shapes to preserve global identity. To enhance identity preservation within edited regions, we propose an inference-time coordinate blending algorithm. This algorithm balances reconstruction of the full shape with inpainting over progressive noise levels, enabling seamless blending of original and edited shapes without requiring costly and inaccurate inversion.
Extensive experiments demonstrate that our method outperforms existing techniques across multiple metrics, measuring both fidelity to the original shape and adherence to textual prompts.
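To make the coordinate blending idea concrete, here is a minimal sketch, assuming a DDIM-style sampler and an illustrative noise predictor model(x_t, t, prompt_emb). None of these names come from this repository, and the full algorithm described in the paper additionally balances full-shape reconstruction against inpainting across noise levels, which this sketch omits.

import torch

def q_sample(x0, t, alphas_cumprod):
    # Forward-diffuse clean coordinates x0 to noise level t.
    a = alphas_cumprod[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)

def blended_edit(model, x_orig, edit_mask, prompt_emb, num_steps=64):
    # x_orig: (N, 3) original point coordinates.
    # edit_mask: (N,) bool, True inside the region to be edited.
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    x_t = torch.randn_like(x_orig)  # start from pure noise
    for t in reversed(range(num_steps)):
        # Predict the noise on the full shape, conditioned on the text prompt.
        eps = model(x_t, t, prompt_emb)
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        # DDIM-style estimate of the clean shape, then step to noise level t-1.
        x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x_t = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
        # Blend: outside the edit region, overwrite the sample with the
        # original coordinates diffused to the same noise level, so the
        # unedited geometry is preserved at every step.
        x_keep = q_sample(x_orig, t - 1, alphas_cumprod) if t > 0 else x_orig
        x_t = torch.where(edit_mask[:, None], x_t, x_keep)
    return x_t

Because the unedited region is overwritten with appropriately noised original coordinates at every step, the sampler never has to invert the original shape into the diffusion latent space, which is how the costly and inaccurate inversion mentioned above is avoided.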
git clone git@github.com:TAU-VAILab/BlendedPC.git
cd BlendedPC
conda create --name blended-pc -y python=3.11
conda activate blended-pc
pip install -e .
Run one of the following scripts to test our "chair", "lamp" or "table" models:
bash demos/chair_demo.sh
bash demos/lamp_demo.sh
bash demos/table_demo.sh
Model checkpoints are automatically downloaded from the Hugging Face Hub by default.
Expected Outputs:
input.png: The original input shape
reconstruction.png: Output of the model using the "copy" prompt
masked.png: Input shape with masked regions
output.png: Final output after editing
Download the ShapeTalk dataset from here.
Then run the script with your desired parameters:
python run_inference.py --prompt <YOUR-PROMPT> --shape_category <SHAPE-CATEGORY> --input_path <INPUT-PATH> --part <SHAPE-PART>
Please refer to the demo scripts above for examples of how to set these arguments.
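For instance, a hypothetical invocation editing the back of a chair might look like the following; the prompt and part name here are made-up placeholders rather than values taken from the repository, and the input path is left unresolved:

python run_inference.py --prompt "make the back taller" --shape_category chair --input_path <PATH-TO-SHAPETALK-SHAPE> --part back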
Coming soon...
If you find our work useful, please consider citing:
@misc{sella2025blendedpointclouddiffusion,
title={Blended Point Cloud Diffusion for Localized Text-guided Shape Editing},
author={Etai Sella and Noam Atia and Ron Mokady and Hadar Averbuch-Elor},
year={2025},
eprint={2507.15399},
archivePrefix={arXiv},
primaryClass={cs.GR},
url={https://arxiv.org/abs/2507.15399},
}
We thank the authors of Point-E for their outstanding codebase, which served as a foundation for this project.