FloodDetectionNet is a deep learning project focused on detecting floods using satellite imagery. This repository implements a U-Net architecture enhanced with an attention mechanism to improve segmentation accuracy. By leveraging state-of-the-art techniques in computer vision, we aim to contribute to disaster management and response efforts.
## Table of Contents

- Features
- Installation
- Usage
- Dataset
- Model Architecture
- Training
- Evaluation
- Results
- Contributing
- License
- Contact
## Features

- Attention Mechanism: Enhances the U-Net architecture by focusing on relevant features.
- Data Augmentation: Improves model robustness through various augmentation techniques.
- Deep Learning Framework: Built on TensorFlow for efficient training and deployment.
- Image Segmentation: Provides pixel-level classification for accurate flood detection.
- Disaster Management: Supports timely and effective responses to flood events.
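As an illustration of the data-augmentation idea above, here is a minimal NumPy sketch that applies a random flip or 90-degree rotation jointly to an image and its mask. The function name and choice of transforms are illustrative; the repository's actual pipeline may use `tf.image` or a dedicated augmentation library.

```python
import numpy as np

def augment(image, mask, rng):
    """Apply a random flip/rotation jointly to an image and its mask.

    Illustrative sketch only: geometric transforms must be applied
    identically to the image and the mask to keep labels aligned.
    """
    if rng.random() < 0.5:                  # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                  # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))             # random 90-degree rotation
    return np.rot90(image, k), np.rot90(mask, k)
```

Because every transform is a pure pixel permutation, the augmented mask stays perfectly aligned with the augmented image.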
## Installation

To set up the FloodDetectionNet project, follow these steps:

1. Clone the repository:

   ```bash
   git clone https://github.com/yagizefekose6/FloodDetectionNet.git
   cd FloodDetectionNet
   ```

2. Create a virtual environment (optional but recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```

3. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```
## Usage

After installation, you can run the model with the following command:

```bash
python main.py --input <path_to_input_image> --output <path_to_output_image>
```

Replace `<path_to_input_image>` with the path to your satellite image and `<path_to_output_image>` with the desired output path for the segmented image.

Pre-trained models are available in the Releases section; download the files you need before running inference.
## Dataset

FloodDetectionNet uses satellite imagery datasets for training and evaluation; any publicly available flood-segmentation dataset with pixel-level labels is suitable. Be sure to preprocess the data as required by the model before training.
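As one example of the kind of preprocessing the model expects, the sketch below scales each band of a satellite tile to the [0, 1] range. The function name and the exact normalization are assumptions; adapt them to whichever dataset you choose.

```python
import numpy as np

def normalize_tile(tile, eps=1e-8):
    """Min-max scale each channel of an (H, W, C) tile to [0, 1].

    Illustrative preprocessing step; real pipelines may instead use
    dataset-wide statistics or sensor-specific calibration.
    """
    lo = tile.min(axis=(0, 1), keepdims=True)   # per-channel minimum
    hi = tile.max(axis=(0, 1), keepdims=True)   # per-channel maximum
    return (tile - lo) / (hi - lo + eps)        # eps avoids divide-by-zero
```

Per-tile min-max scaling is simple but sensitive to outliers; computing statistics over the whole training set is a common alternative.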
## Model Architecture

FloodDetectionNet employs a U-Net architecture enhanced with an attention mechanism. The architecture consists of:
- Encoder: Down-sampling layers that capture context.
- Bottleneck: The deepest layer that captures the most abstract features.
- Decoder: Up-sampling layers that reconstruct the image.
- Attention Gates: Focus on important features, improving segmentation quality.
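The attention-gate computation can be sketched in a few lines of NumPy. This is a simplified, framework-agnostic illustration of additive attention gating (in the style of Attention U-Net); the repository's TensorFlow layers will differ in detail, and all weight names here are hypothetical.

```python
import numpy as np

def attention_gate(x, g, w_x, w_g, w_psi):
    """Additive attention gate, simplified.

    x : (H, W, C)  encoder skip-connection features
    g : (H, W, C)  decoder gating signal (already resized to match x)
    w_x, w_g : (C, F)  1x1-conv weights projecting x and g to F channels
    w_psi   : (F, 1)   1x1-conv weights collapsing to one attention map
    Returns x re-weighted by a per-pixel coefficient in (0, 1).
    """
    theta_x = x @ w_x                          # project skip features
    phi_g = g @ w_g                            # project gating signal
    f = np.maximum(theta_x + phi_g, 0.0)       # additive combine + ReLU
    att = 1.0 / (1.0 + np.exp(-(f @ w_psi)))   # sigmoid -> (H, W, 1)
    return x * att                             # suppress irrelevant regions
```

Because the attention coefficients lie in (0, 1), the gate can only attenuate skip features, letting the decoder focus on flood-relevant regions.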
## Training

To train the model, use the following command:

```bash
python train.py --epochs <number_of_epochs> --batch_size <batch_size>
```

Adjust `<number_of_epochs>` and `<batch_size>` as needed, and monitor the training process through the logs printed to the console.
## Evaluation

After training, evaluate the model using:

```bash
python evaluate.py --model <path_to_trained_model> --test_data <path_to_test_data>
```

This will report metrics such as accuracy, precision, recall, and F1-score.
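For reference, pixel-wise precision, recall, and F1 on a binary flood mask can be computed as below. This is a self-contained sketch; the function name is illustrative and `evaluate.py` may compute these metrics differently.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise precision, recall, and F1 for binary masks.

    pred, truth : boolean arrays of identical shape
    (flood = True, background = False)
    """
    tp = np.sum(pred & truth)    # flood pixels correctly detected
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # missed flood pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Precision and recall matter more than raw accuracy here, since flood pixels are usually a small fraction of each scene.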
## Results

The model achieves promising results in detecting floods. Example outputs and additional figures are available in the `results` folder.
## Contributing

We welcome contributions! If you have suggestions or improvements, please fork the repository and submit a pull request. Make sure to follow the coding standards and include tests for new features.
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Contact

For questions or feedback, feel free to reach out:
- Author: Your Name
- Email: your.email@example.com
For additional resources, check the Releases section for model files and updates.

