jmaxrdgz/SARATR-X
PyTorch Lightning implementation of the pretraining pipeline from SARATR-X: Toward Building A Foundation Model for SAR Target Recognition, applied to the Capella Space Open Data dataset.

Install & Setup

On Linux/macOS

conda create -n saratrx python=3.8
conda activate saratrx
pip install -r requirements.txt

On macOS, you can additionally check that PyTorch works with the Metal (MPS) backend:

python -c "import torch; print(torch.__version__, torch.backends.mps.is_available())"
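If you want a stronger check than availability alone, a small smoke test like the one below (not part of the repo) confirms tensors actually execute on the MPS device:

import torch

# Run a tiny matmul on the MPS device to confirm the Metal backend executes ops
if torch.backends.mps.is_available():
    x = torch.randn(64, 64, device="mps")
    print((x @ x).sum().item())
else:
    print("MPS not available")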

Download the MAE-HiViT ImageNet pretrained weights and add them to the project (used to initialize training).

# Download MAE-HiViT pretrained weights
wget --no-check-certificate "https://drive.google.com/uc?export=download&id=1VZQz4buhlepZ5akTcEvrA3a_nxsQZ8eQ" -O mae_hivit_base_1600ep.pth

# Move it to the project weights folder
mkdir -p checkpoints/pretrained
mv mae_hivit_base_1600ep.pth checkpoints/pretrained/
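To sanity-check the download before training, you can inspect the checkpoint's contents. The sketch below assumes a standard MAE-style checkpoint that stores its weights under a "model" key, which may differ for this file:

import torch

# Load on CPU and peek at the state dict; MAE releases usually nest weights under "model"
ckpt = torch.load("checkpoints/pretrained/mae_hivit_base_1600ep.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)
print(f"{len(state_dict)} tensors, e.g.:")
for name, tensor in list(state_dict.items())[:5]:
    print(f"  {name}: {tuple(tensor.shape)}")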

Dataset

SAR images must be preprocessed into single-precision tiles (.npy) before training. The following command chips images from a given path into 512×512 chips:

python data/chip_capella.py /path/to/sar_images --chip_size 512
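For reference, the chipping step essentially cuts each image into non-overlapping float32 tiles and saves them as .npy files. A minimal sketch of the idea (the function and file naming here are illustrative, not the actual API of data/chip_capella.py):

import numpy as np
from pathlib import Path

def chip_image(image: np.ndarray, out_dir: Path, chip_size: int = 512) -> None:
    # Cut a 2-D SAR image into non-overlapping single-precision chips saved as .npy
    out_dir.mkdir(parents=True, exist_ok=True)
    h, w = image.shape
    for i in range(0, h - chip_size + 1, chip_size):
        for j in range(0, w - chip_size + 1, chip_size):
            chip = image[i:i + chip_size, j:j + chip_size].astype(np.float32)
            np.save(out_dir / f"chip_{i}_{j}.npy", chip)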

Launch Training

python pretrain.py
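Under the hood, pretrain.py follows the usual PyTorch Lightning pattern of a LightningModule handed to a Trainer. A minimal sketch of that pattern (the arguments and names are illustrative, not the script's actual configuration):

import pytorch_lightning as pl

# Illustrative Trainer setup; the real model and datamodule come from the repo's code
trainer = pl.Trainer(max_epochs=100, accelerator="auto", devices=1)
# trainer.fit(model, datamodule=datamodule)
# model: a LightningModule wrapping MAE-HiViT; datamodule: a loader over the .npy chips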

You can monitor training metrics in TensorBoard with:

tensorboard --logdir=lightning_logs/

A notebook example is provided for running pretraining on Colab's T4 GPU.
