A machine learning-powered solution for rapid post-disaster assessment using aerial imagery. This project leverages deep learning models (ResNet50, a custom CNN, and EfficientNet) to classify structural damage caused by hurricanes, and serves real-time predictions through a Streamlit web interface.
- Srinivas Saiteja Tenneti
- Namratha Prakash
- Lakshmi Sreya Rapolu
Hurricanes in the U.S. cause an average of $21.5 billion in damage per event, and more than ten billion-dollar storms struck annually between 2015 and 2020. Accurate and rapid post-hurricane damage assessment is essential for emergency response, insurance processing, and recovery planning.
This project builds an AI system that:
- Detects structural damage from aerial images post-hurricane
- Utilizes transfer learning with ResNet50 and EfficientNet models
- Deploys a Streamlit web app for interactive image uploads and predictions
- Source: University of Washington Disaster Data Science Lab
- Location: Houston, TX (post-Hurricane Harvey)
- Images: 14,000 (7,000 damaged, 7,000 undamaged)
- Splits:
  - Train: 8,000
  - Validation: 2,000
  - Test: 2,000 (also tested on unbalanced and balanced subsets)
- Data normalization & augmentation (`RandomHorizontalFlip`); a preprocessing sketch follows this list
- PCA for feature reduction
- Custom & pre-trained models
- Evaluation metrics: Accuracy, Confusion Matrix, F1-score
- Streamlit-based real-time interface for multi-image upload
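As a sketch of the preprocessing step, assuming PyTorch/torchvision; the ImageNet normalization statistics match the pre-trained backbones, and the directory layout is hypothetical:

```python
import torch
from torchvision import datasets, transforms

# Illustrative preprocessing: resize, augment, normalize.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),        # 224x224 for ResNet50 / EfficientNet
    transforms.RandomHorizontalFlip(),    # the augmentation used in this project
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: data/train/{damage,no_damage}/
train_set = datasets.ImageFolder("data/train", transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
```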
To build a robust hurricane damage classification model, we performed in-depth exploratory data analysis (EDA) to uncover key visual and statistical signals differentiating damaged from undamaged structures.
- No Single Definition of "Damage": Can include debris, discoloration, roof collapse, or minor structural shifts.
- False Visual Triggers: Materials scattered for other reasons can look like damage.
- Intra-Class Variability: Buildings in the same class vary greatly in size, shape, and appearance.
- AI Ambiguity: Damage is often subtle or context-dependent, making detection by machines inherently challenging.
- Format: RGB, 128×128 pixels
- Content: Aerial view of rooftops and structures post-hurricane
Observations:
- Flood patterns with unique texture and tone
- Scattered debris and damaged rooftops
- Human-eye struggle: subtle patterns are not always easily visible
We computed mean grayscale intensity across images in each class:
| Damage | No Damage |
| --- | --- |
| Brighter cores with dark surroundings | More uniform brightness across the image |
| Suggests collapsed or open roof areas | Indicates intact, cleaner structural surfaces |
Pixel-wise standard deviation helps visualize variability:
| Damage | No Damage |
| --- | --- |
| Lower variation across the image | Higher variation near the core structure |
| Uniformity due to debris/flooding | Variation from visible rooftops and shadows |
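The class-wise mean and standard-deviation maps above can be reproduced with a short NumPy sketch; the folder paths and `.jpeg` glob are assumptions about the dataset layout:

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_class_images(folder):
    # Stack every image in a class folder into a (N, 128, 128) grayscale array.
    return np.stack([np.array(Image.open(p).convert("L"), dtype=np.float32)
                     for p in sorted(Path(folder).glob("*.jpeg"))])

damage = load_class_images("data/train/damage")        # hypothetical path
no_damage = load_class_images("data/train/no_damage")  # hypothetical path

for name, imgs in [("Damage", damage), ("No Damage", no_damage)]:
    mean_map = imgs.mean(axis=0)  # per-pixel average brightness for the class
    std_map = imgs.std(axis=0)    # per-pixel variability across the class
    print(f"{name}: mean intensity {mean_map.mean():.1f}, avg std {std_map.mean():.1f}")
```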
Principal Component Analysis (PCA) was used to extract key visual patterns:
| Class | Components to explain 70% of variance |
| --- | --- |
| Damage | 19 |
| No Damage | 56 |
Figure 1: Damage class: 19 principal components
Figure 2: No-damage class: 56 principal components
Insight: Damaged images show more visual consistency (fewer components are needed to explain their variance), making them easier for models to learn from.
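A sketch of the PCA computation, reusing the grayscale class arrays from the earlier sketch; with a float `n_components` and `svd_solver="full"`, scikit-learn picks the component count for a variance target directly:

```python
from sklearn.decomposition import PCA

def components_for_variance(images, target=0.70):
    X = images.reshape(len(images), -1)  # flatten 128x128 tiles to 16,384-dim vectors
    pca = PCA(n_components=target, svd_solver="full").fit(X)
    return pca.n_components_             # smallest count explaining `target` variance

print("Damage:", components_for_variance(damage))        # 19 in our EDA
print("No damage:", components_for_variance(no_damage))  # 56 in our EDA
```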
We compared the mean intensity of the first 1,000 pixels across classes:
- Damage: lower, noisier intensity, possibly due to shadows and debris
- No Damage: higher, smoother intensity, consistent with cleaner rooftops
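This comparison can be plotted with the same class arrays (matplotlib assumed):

```python
import matplotlib.pyplot as plt

for name, imgs in [("Damage", damage), ("No Damage", no_damage)]:
    flat = imgs.reshape(len(imgs), -1)[:, :1000]        # first 1,000 pixels per image
    plt.plot(flat.mean(axis=0), label=name, alpha=0.8)  # class-mean intensity profile
plt.xlabel("Pixel index")
plt.ylabel("Mean grayscale intensity")
plt.legend()
plt.show()
```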

- Aerial tiles are spread across Houston, Beaumont, and Victoria (Texas)
- Damage and no-damage classes cluster by location

- Strong statistical signals in pixel-level data
- Class imbalance handled
- Geographic clustering introduces potential bias
- PCA & intensity trends support model learning
- Input: 128×128 RGB images
- Architecture: 3 convolutional layers + 4 fully connected layers
- Accuracy:
  - Train: 99.48%
  - Validation: 96.25%
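A PyTorch sketch consistent with that description; the channel widths and hidden-layer sizes are assumptions, not the exact trained model:

```python
import torch.nn as nn

class DamageCNN(nn.Module):
    """3 conv layers + 4 FC layers for 128x128 RGB inputs (sizes assumed)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 128 -> 64
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):  # x: (B, 3, 128, 128)
        return self.classifier(self.features(x))
```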
- Input: 224×224 RGB images, ImageNet-normalized
- Accuracy:
  - Validation: 99.50%
  - Test set: 99.61%
- Best model for generalization and deployment
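A minimal transfer-learning sketch under these settings (torchvision ≥ 0.13 weight enums assumed; the hyperparameters are illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained ResNet50 with a new two-class head (damage / no damage).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters
```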
- Accuracy: 99.30%
- Lightweight and fast, though it slightly underperformed ResNet50
- Frozen backbone: 91.7% accuracy; very fast but limited learning
- Fine-tuned (last 2 blocks): 97.95% accuracy; efficient and effective
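A sketch of the two regimes compared above, assuming the ResNet50 backbone and taking "last 2 blocks" to mean its final residual stages (`layer3`, `layer4`):

```python
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Frozen regime: train only the new classification head.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new layers are trainable by default

# Fine-tuned regime: also unfreeze the last two residual stages.
for stage in (model.layer3, model.layer4):
    for p in stage.parameters():
        p.requires_grad = True
```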
Interactive interface for uploading and classifying images.
- Multi-image upload with grid view
- Class predictions (damage / no damage)
- Confidence scores with visual indicators
- Session-wise prediction history
- Optional visualization of transformed model input
- Lightweight and runs locally or on any Streamlit-compatible server
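A minimal sketch of such an interface; the checkpoint filename, class order, and preprocessing pipeline are assumptions:

```python
import streamlit as st
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@st.cache_resource
def load_model():
    # Hypothetical checkpoint of the fine-tuned ResNet50.
    m = models.resnet50()
    m.fc = torch.nn.Linear(m.fc.in_features, 2)
    m.load_state_dict(torch.load("resnet50_damage.pt", map_location="cpu"))
    return m.eval()

st.title("Hurricane Damage Classifier")
model = load_model()
files = st.file_uploader("Upload aerial images", type=["jpg", "jpeg", "png"],
                         accept_multiple_files=True)
for f in files or []:
    img = Image.open(f).convert("RGB")
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    label = ["damage", "no damage"][int(probs.argmax())]  # assumed class order
    st.image(img, caption=f"{label} ({probs.max().item():.1%} confidence)")
```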
ResNet50 confusion matrix (test set):
- True Positives: 7,980
- False Negatives: 20
- True Negatives: 985
- False Positives: 15
- Accuracy: 99.61%
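A quick sanity check that the reported accuracy follows from these counts (a 9,000-image test pool):

```python
tp, fn, tn, fp = 7980, 20, 985, 15
total = tp + fn + tn + fp                       # 9,000 images
accuracy = (tp + tn) / total                    # (7980 + 985) / 9000
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.4f}, f1={f1:.4f}")  # accuracy=0.9961
```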
- Multi-class damage levels (minor/moderate/severe)
- Integrate Grad-CAM for visual attention maps
- Expand to detect other disaster types: fires, floods, and earthquakes
- Incorporate geospatial overlays using GIS libraries
"In the aftermath of a hurricane, every second counts. With AI-driven tools, response teams can act faster and smarter." (Group 8)
For questions, contact any team member via this repository's issue tracker.