3D U-Net for 3D Seismic Data Denoising
Shiqin Zeng, Rafael Orozco, Huseyin Tuna Erdinc
Official PyTorch implementation.
Python libraries: see environment.yaml for the full list of dependencies. Set up the conda environment with:
conda env create -f environment.yaml
conda activate seismic_Denoising
Open test.ipynb and follow the instructions to download the dataset and convert it to .h5 format:
!python data_prep/data_download.py
!python data_prep/data_format.py
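After the two scripts finish, you can optionally sanity-check one of the converted files with h5py. This snippet is not part of the repo; it makes no assumption about the dataset keys inside the file and simply lists whatever the file contains:

```python
import h5py

# List every group/dataset in one converted file, with shapes where available.
# The file name follows the part-numbering convention described below.
with h5py.File("original_image-impeccable-train-data-part1.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```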
Our training script is derived from Deep Learning Semantic Segmentation for High-Resolution Medical Volumes and builds on Accurate and Versatile 3D Segmentation of Plant Tissues at Cellular Resolution. The training loss includes an edge-loss component based on the Laplacian operator, following the paper Multi-Stage Progressive Image Restoration (MPRNet).
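For reference, below is a minimal sketch of what a Laplacian-based edge loss can look like for 3D volumes. The 7-point stencil, the Charbonnier penalty, and all names here are illustrative assumptions, not the repo's exact implementation (MPRNet defines its edge loss on 2D images):

```python
import torch
import torch.nn.functional as F

def laplacian_3d(x):
    """Apply a 7-point 3D Laplacian stencil per channel. x: (N, C, D, H, W)."""
    k = torch.zeros(1, 1, 3, 3, 3, dtype=x.dtype, device=x.device)
    k[0, 0, 1, 1, 1] = 6.0
    for d, h, w in [(0, 1, 1), (2, 1, 1), (1, 0, 1),
                    (1, 2, 1), (1, 1, 0), (1, 1, 2)]:
        k[0, 0, d, h, w] = -1.0
    k = k.repeat(x.shape[1], 1, 1, 1, 1)   # one depthwise kernel per channel
    return F.conv3d(x, k, padding=1, groups=x.shape[1])

def edge_loss(pred, target, eps=1e-3):
    """Charbonnier penalty on the difference of Laplacian responses."""
    diff = laplacian_3d(pred) - laplacian_3d(target)
    return torch.mean(torch.sqrt(diff * diff + eps * eps))

# Example: two random volumes of shape (batch=1, channels=1, 32x64x64).
loss = edge_loss(torch.rand(1, 1, 32, 64, 64), torch.rand(1, 1, 32, 64, 64))
```

The edge term penalizes differences in local curvature between the prediction and the clean volume, which encourages the network to preserve sharp reflector boundaries while denoising.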
We use a single HDF5 file to train for one epoch when testing the code (num_epochs = 1, start = 1, end = 1). The start and end values correspond to the dataset file names. For example, start = 1 and end = 2 means the script will use the files original_image-impeccable-train-data-part1.h5 and original_image-impeccable-train-data-part2.h5. To include all dataset files, set start = 1 and end = 17, which uses all training data from original_image-impeccable-train-data-part1.h5 through original_image-impeccable-train-data-part17.h5.
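The mapping from start and end to file names can be illustrated as follows (a sketch of the naming convention only, not the repo's code):

```python
# How start/end translate to dataset file names.
start, end = 1, 17
files = [f"original_image-impeccable-train-data-part{i}.h5"
         for i in range(start, end + 1)]
print(files[0], "...", files[-1])
```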
You can modify config.yaml to adjust parameters such as batch_size, num_epochs, start, and end. Once you have downloaded all of the .h5 files, set start and end accordingly and train on the full dataset by running:
!python scripts/train_model.py
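If you prefer to script the config change, a hypothetical PyYAML snippet is shown below; the key names come from the parameters listed above, and the values are examples, not the repo's defaults:

```python
import yaml

# Hypothetical: load config.yaml, widen the dataset range, and write it back.
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)
cfg.update({"start": 1, "end": 17})
with open("config.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```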
A pre-trained model is provided in the pretrained_model directory; see test.ipynb for details.
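For a quick look at the checkpoint outside the notebook, something like the following works; the file name under pretrained_model/ is a placeholder (check the directory for the actual name), and test.ipynb shows the intended loading path:

```python
import torch

# Peek at the checkpoint contents. "model.pt" is a placeholder file name.
ckpt = torch.load("pretrained_model/model.pt", map_location="cpu")
if isinstance(ckpt, dict):
    for key in list(ckpt)[:5]:          # first few entries
        val = ckpt[key]
        print(key, getattr(val, "shape", type(val)))
```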