
Point Cloud Occupancy with Dynamic Planes

This repository contains code for reconstructing 3D point clouds from the occupancy predictions of a small, sparse subset of points. The work is based on Lionar, Stefan, et al., "Dynamic Plane Convolutional Occupancy Networks," Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, and uses the FAUST dataset.



Repository Structure

PointCloud3D/
│
├── README.md
├── requirements.txt
├── PointCloud3D.ipynb
├── .gitignore
├── Media/
│    └── images
│
├── Src/
│    ├── dataset.py
│    ├── unet.py
│    └── utils.py
│    
├── Papers/
│    ├── dynamic_plane_conv.pdf
│    └── occupancy_net_mise.pdf
│
└── Slides/
     ├── main.pdf
     └── main.tex

Installation

  1. Clone the repository:
    git clone https://github.com/EugenioBugli/3DPointCloud.git
  2. Install the dependencies:
    pip install -r <Folder>/3DPointCloud/requirements.txt
  3. Run the code directly from the notebook PointCloud3D.ipynb

Dataset

Here is an example of an original point cloud extracted from a registration:


From the previous cloud, we obtain the Noisy Cloud by sampling points from the surface and applying the following augmentations:

  • Rotation
  • Translation
  • Scaling
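The three augmentations above can be sketched as follows. This is a minimal NumPy illustration, not the repository's `Src/dataset.py` implementation; the scaling and translation ranges are assumptions.

```python
import numpy as np

def augment(points, rng):
    """Apply a random rotation, translation, and scaling to an (N, 3) cloud.
    Hypothetical helper -- ranges below are illustrative assumptions."""
    # Random rotation: orthonormalize a Gaussian matrix and force det = +1.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    scale = rng.uniform(0.8, 1.2)           # assumed scaling range
    shift = rng.uniform(-0.1, 0.1, size=3)  # assumed translation range
    return scale * points @ q.T + shift

rng = np.random.default_rng(0)
cloud = rng.normal(size=(3000, 3))                 # stand-in surface samples
noisy = augment(cloud + 0.005 * rng.normal(size=cloud.shape), rng)
```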


The sampled cloud is obtained by sampling points inside the bounding box that contains the original mesh, while the labels are generated by measuring the distance between those points and the surface of the original mesh.
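A sketch of this label generation, using a unit sphere as a stand-in for the FAUST mesh so that the surface distance has a closed form (the real labels would come from point-to-mesh distances instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample 2048 points uniformly inside the mesh's bounding box ([-1, 1]^3 here).
sampled = rng.uniform(-1.0, 1.0, size=(2048, 3))

# Signed distance to the sphere's surface: negative inside, positive outside.
signed_dist = np.linalg.norm(sampled, axis=1) - 1.0

# Occupancy label: 1 if the point lies inside the surface, else 0.
occupancy = (signed_dist < 0).astype(np.float32)
```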


Architecture


The architecture has an Encoder-Decoder structure: the Encoder takes a Noisy Cloud as input, while the Decoder takes a Sampled Cloud.

  • Noisy Cloud: 3000 points sampled over the surface of the starting mesh, with added Gaussian noise.

  • Sampled Cloud: 2048 points sampled inside the bounding box containing the starting mesh.

During training we use Binary Cross-Entropy (BCE) between the predicted occupancy and the ground-truth occupancy, while during inference we use Multiresolution IsoSurface Extraction (MISE) to reconstruct the meshes.
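The training loss can be written out explicitly. Below is a NumPy stand-in for the framework's BCE loss, shown only to make the objective concrete:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy between predicted occupancy probabilities
    and ground-truth labels (a NumPy stand-in for the framework's BCE)."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# A near-perfect prediction yields a loss close to zero.
print(bce_loss(np.array([0.99, 0.01]), np.array([1.0, 0.0])))
```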

Reconstruction

This procedure, used to reconstruct a mesh starting from points sampled inside the original bounding box, follows these steps:


  1. The volumetric space is discretized at an initial resolution, and the occupancy of every grid point (voxel corner) is evaluated with the network.

  2. A voxel is marked as active if two of its adjacent grid points have different occupancy values. These are the voxels that intersect the mesh and will later be processed by the Marching Cubes algorithm.

  3. Every active voxel is subdivided into 8 subvoxels.

  4. The occupancy values of the new grid points are evaluated, and the process returns to step (2).

  5. This is repeated until the final resolution is reached.
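The active-voxel detection at the heart of these steps can be sketched as follows. A unit sphere stands in for the trained network, and the resolution simply doubles each iteration; the real MISE implementation re-evaluates only the subdivided active voxels, which is why its cost grows with the surface area (~res²) rather than the volume (~res³).

```python
import numpy as np

def occ(points):
    """Stand-in occupancy function: inside the unit sphere. In the real
    pipeline the trained network f_theta(p, x) would be queried instead."""
    return np.linalg.norm(points, axis=-1) <= 1.0

def active_voxel_mask(res, lo=-1.5, hi=1.5):
    """Evaluate occupancy on a (res+1)^3 corner grid and mark every voxel
    whose 8 corners disagree, i.e. the voxels crossed by the surface."""
    xs = np.linspace(lo, hi, res + 1)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    o = occ(grid)
    lohi = (slice(None, -1), slice(1, None))
    corners = [o[a, b, c] for a in lohi for b in lohi for c in lohi]
    return np.logical_or.reduce(corners) & ~np.logical_and.reduce(corners)

# Refine from the initial to the final resolution.
res = 4
while res <= 32:
    n_active = int(active_voxel_mask(res).sum())
    print(f"res={res:3d}  active voxels: {n_active} / {res**3}")
    res *= 2
```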

At this final resolution, we apply the Marching Cubes algorithm to extract an approximate isosurface:

$\{ \, p \in \mathbb{R}^3 \ | \ f_{\theta}(p, x) = \tau \, \}$
