This repository contains the implementation of a Deep Learning-based SAR Image Colorization system developed for Smart India Hackathon 2024. The solution uses a Hybrid Model combining Attention U-Net and Conditional GANs to transform synthetic aperture radar (SAR) grayscale images into high-quality colorized images. This project aligns with the Space Technology theme to enhance the interpretation of SAR imagery for applications like disaster response, environmental monitoring, and climate change analysis.
- Hybrid Model: Combines the spatial feature extraction power of Attention U-Net with the realism of Conditional GANs.
- Advanced Image Processing: Overcomes noise issues, enhances fine-grained details, and generates realistic, visually intuitive outputs.
- High Evaluation Metrics: Achieved an average PSNR of 29.29 dB, SSIM of 0.83, and an average processing time of 1.44 seconds per image.
- Real-Time Feasibility: Designed for integration into satellite systems for real-time SAR image colorization.
- Problem Statement ID: SIH1733
- Title: SAR Image Colorization for Comprehensive Insight using Deep Learning
- Theme: Space Technology
- Category: Software
- Attention U-Net: Ensures precise spatial focus and better feature retention during colorization (a minimal attention-gate sketch follows this list).
- Conditional GANs: Enhance color realism by leveraging adversarial training.
- Mitigated intrinsic speckle noise in SAR images.
- Preserved fine-grained spatial and contextual features.
- Improved color consistency and realism.
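The repository's exact layer definitions live in its training code; purely as an illustration of the attention mechanism the hybrid model relies on, an additive attention gate of the kind used in Attention U-Net can be sketched in TensorFlow/Keras as below. The function name, filter counts, and the assumption that the gating signal is already upsampled to the skip connection's resolution are illustrative, not taken from this repository.

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_gate(skip, gating, inter_channels):
    """Additive attention gate: re-weights an encoder skip connection
    using the coarser decoder (gating) signal.

    Assumes `gating` has already been upsampled to `skip`'s spatial size.
    """
    # Project both feature maps to a common intermediate channel depth.
    theta = layers.Conv2D(inter_channels, 1, padding="same")(skip)
    phi = layers.Conv2D(inter_channels, 1, padding="same")(gating)
    # Combine and squash to a single-channel attention map in [0, 1].
    attn = layers.Activation("relu")(layers.Add()([theta, phi]))
    attn = layers.Conv2D(1, 1, padding="same", activation="sigmoid")(attn)
    # Scale the skip features before they are concatenated in the decoder.
    return layers.Multiply()([skip, attn])
```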
- Preprocessing: Image resizing, normalization, and noise mitigation (a sketch follows this list).
- Model Training: Custom loss functions integrating L1, perceptual, and adversarial losses (also sketched below).
- Post-Processing: Visual refinement and evaluation using PSNR and SSIM metrics.
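For illustration, the preprocessing step might look like the sketch below; the image size and the use of simple mean filtering as a stand-in for speckle mitigation are assumptions, not the repository's actual choices.

```python
import tensorflow as tf

IMG_SIZE = (256, 256)  # illustrative; the repository may use a different size

def preprocess(sar_path):
    """Load, resize, normalize, and lightly smooth a grayscale SAR image."""
    img = tf.image.decode_png(tf.io.read_file(sar_path), channels=1)
    img = tf.image.resize(img, IMG_SIZE) / 255.0  # scale to [0, 1]
    # Crude speckle mitigation via 3x3 mean filtering; a Lee or median
    # filter would be a more typical choice, this is only a stand-in.
    img = tf.nn.avg_pool2d(img[tf.newaxis, ...], ksize=3, strides=1,
                           padding="SAME")[0]
    return img
```

Likewise, a generator loss that combines L1, VGG-based perceptual, and adversarial terms could be sketched as follows; the loss weights and the choice of VGG16 layer are placeholders rather than the values used in `train.py`, and VGG-specific input preprocessing is omitted for brevity.

```python
import tensorflow as tf

# Frozen VGG16 features for the perceptual term (ImageNet weights).
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(vgg.input,
                                   vgg.get_layer("block3_conv3").output)
feature_extractor.trainable = False

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake_output, generated, target,
                   l1_weight=100.0, perc_weight=1.0, adv_weight=1.0):
    # Adversarial term: the generator wants the discriminator to say "real".
    adv = bce(tf.ones_like(disc_fake_output), disc_fake_output)
    # Pixel-wise L1 term keeps the output close to the reference optical image.
    l1 = tf.reduce_mean(tf.abs(target - generated))
    # Perceptual term compares VGG feature maps of the output and the target.
    perc = tf.reduce_mean(tf.square(
        feature_extractor(target) - feature_extractor(generated)))
    return adv_weight * adv + l1_weight * l1 + perc_weight * perc
```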
- 30% Faster Data Interpretation: Speeds up the extraction of insights from SAR data.
- Enhanced Disaster Response: Provides improved visual data for critical decision-making.
- 20% Higher Accuracy: Supports precise monitoring of environmental changes like deforestation and urbanization.
- Supports NASA-ISRO's NISAR mission by reducing SAR data interpretation time by 30%.
- Enhances public engagement with visually compelling SAR data visualizations.
- Average PSNR: 29.29 dB
- Average SSIM: 0.83
- Average Processing Time: 1.44 seconds per image
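These metrics can be computed directly with TensorFlow's built-in `tf.image.psnr` and `tf.image.ssim`; a minimal sketch, assuming colorized and reference images scaled to [0, 1]:

```python
import tensorflow as tf

def evaluate_pair(colorized, reference, max_val=1.0):
    """Average PSNR and SSIM over a batch of images scaled to [0, 1]."""
    psnr = tf.image.psnr(colorized, reference, max_val=max_val)
    ssim = tf.image.ssim(colorized, reference, max_val=max_val)
    return float(tf.reduce_mean(psnr)), float(tf.reduce_mean(ssim))
```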
- Integration with satellite systems for real-time colorization.
- Expansion to multi-spectral analysis for broader applications.
- Python 3.8 or later
- TensorFlow 2.x
- Other dependencies: see `requirements.txt` (to be added).
- Clone the repository:
  `git clone https://github.com/hackoverflow72/Team_HackOverflow_1733`
- Install dependencies:
  `pip install -r requirements.txt`
- Prepare the dataset:
  - Organize SAR grayscale images and the corresponding optical images as specified in the `create_dataset` function.
- Train the model:
  `python train.py`
- Evaluate results:
  - Use the provided pre-trained models or evaluate your own trained models on the test data.
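As a hedged example of this last step, the snippet below loads a saved generator and colorizes a single SAR image; the model path, input size, output range, and file names are assumptions and should be adapted to the repository's actual layout.

```python
import tensorflow as tf

MODEL_PATH = "saved_models/generator"  # hypothetical path
IMG_SIZE = (256, 256)                  # hypothetical input size

generator = tf.keras.models.load_model(MODEL_PATH, compile=False)

def colorize(sar_path, out_path):
    # Load the grayscale SAR image and scale it to [0, 1].
    img = tf.image.decode_png(tf.io.read_file(sar_path), channels=1)
    img = tf.image.resize(img, IMG_SIZE) / 255.0
    # Add a batch dimension, run the generator, then save the RGB result.
    rgb = generator(tf.expand_dims(img, 0), training=False)[0]
    rgb = tf.cast(tf.clip_by_value(rgb, 0.0, 1.0) * 255.0, tf.uint8)
    tf.io.write_file(out_path, tf.io.encode_png(rgb))

colorize("sample_sar.png", "sample_colorized.png")
```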
Team HackOverflow thanks the Smart India Hackathon 2024 organizers for the opportunity to contribute to SAR image enhancement.
- Generating High-Quality Visible Images from SAR Images Using CNNs
- Colorizing Sentinel-1 SAR Images Using Variational Autoencoder
- SAR Image Colorization Using Multidomain CycleGAN
For more information, visit the project GitHub page and view the demo video.