Nature Nexus is an advanced forest surveillance system designed to protect natural ecosystems through AI-powered monitoring. It combines multiple detection technologies to identify illegal activities, monitor deforestation, and detect potential threats to forest areas.
The application leverages:
- Satellite Imagery Analysis - Detects deforestation using segmentation models
- Audio Surveillance - Identifies unusual sounds like chainsaws, vehicles, and human activity
- Object Detection - Recognizes trespassers, vehicles, fires, and other threats
## Features

### Deforestation Detection
- Analyzes satellite or aerial imagery to identify deforested areas
- Uses an Attention U-Net segmentation model optimized with ONNX Runtime
- Provides detailed metrics on forest coverage and deforestation levels
- Visualizes results with color-coded overlays
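The reported metrics can be computed directly from the predicted mask. Below is a minimal sketch, assuming the model outputs a binary mask in which `1` marks deforested pixels (a hypothetical convention; the actual logic lives in `prediction_engine.py` and `utils/helpers.py`):

```python
import numpy as np

def coverage_metrics(mask: np.ndarray) -> dict:
    """Compute coverage statistics from a binary segmentation mask.

    Assumes 1 = deforested pixel, 0 = forest pixel (hypothetical convention).
    """
    total = mask.size
    deforested = int(np.count_nonzero(mask))
    forest = total - deforested
    return {
        "forest_coverage_pct": 100.0 * forest / total,
        "deforestation_pct": 100.0 * deforested / total,
    }

# Example with a dummy 4x4 mask in which a quarter of the pixels are deforested
dummy_mask = np.zeros((4, 4), dtype=np.uint8)
dummy_mask[:2, :2] = 1
print(coverage_metrics(dummy_mask))  # {'forest_coverage_pct': 75.0, 'deforestation_pct': 25.0}
```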
### Forest Audio Surveillance
- Detects unusual sounds that may indicate illegal activities
- Classifies various sounds, including:
  - Human sounds: footsteps, coughing, laughing, breathing, etc.
  - Tool sounds: chainsaw, hand saw
  - Vehicle sounds: car horn, engine, siren
  - Other sounds: crackling fire, fireworks
- Supports both uploaded audio files and real-time recording
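How a classified sound is escalated into an alert is a policy choice rather than part of the model; a minimal, hypothetical mapping from predicted class to alert severity could look like the sketch below (class names are illustrative and must match the labels the audio model was trained with):

```python
from typing import Optional

# Hypothetical mapping from predicted sound class to alert severity.
ALERT_LEVELS = {
    "chainsaw": "high",
    "hand_saw": "high",
    "crackling_fire": "high",
    "fireworks": "medium",
    "car_horn": "medium",
    "engine": "medium",
    "siren": "medium",
    "footsteps": "low",
    "coughing": "low",
    "laughing": "low",
    "breathing": "low",
}

def alert_for(sound_class: str, confidence: float, threshold: float = 0.5) -> Optional[str]:
    """Return an alert level if the prediction is confident enough, otherwise None."""
    if confidence < threshold:
        return None
    return ALERT_LEVELS.get(sound_class, "low")

print(alert_for("chainsaw", 0.91))  # -> "high"
```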
### Object Detection
- Identifies potential threats using a YOLOv11 model
- Detects objects including:
  - Humans (trespassers)
  - Vehicles (cars, bikes, buses/trucks)
  - Fire and smoke
- Processes images, videos, and camera feeds
- Alerts on potential threats with confidence scores
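The snippet below sketches one way to run the exported YOLO model and raise alerts for threat classes using the `ultralytics` package. This is an illustrative assumption: the repository's own inference path (`utils/onnx_inference.py`) may call ONNX Runtime directly, and the input file name and class labels here are hypothetical.

```python
from ultralytics import YOLO

# Hypothetical set of class names that should trigger an alert.
THREAT_CLASSES = {"person", "fire", "smoke"}

model = YOLO("models/best_model.onnx")                      # exported YOLO weights
results = model("forest_cam_frame.jpg", conf=0.4, iou=0.5)  # hypothetical input frame

for result in results:
    for box in result.boxes:
        name = result.names[int(box.cls)]
        confidence = float(box.conf)
        if name in THREAT_CLASSES:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            print(f"ALERT: {name} ({confidence:.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```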
## Installation

### Prerequisites
- Python 3.8+
- pip package manager
- Virtual environment (recommended)
### Setup

- Clone the repository

  ```bash
  git clone https://github.com/yourusername/nature-nexus.git
  cd nature-nexus
  ```

- Create and activate a virtual environment (optional but recommended)

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use: venv\Scripts\activate
  ```

- Install the required dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Download the models

  ```bash
  # Create the models directory if it doesn't exist
  mkdir -p models
  ```

  Place the model weights listed under Project Structure (`deforestation_model.onnx`, `best_model.pth`, `best_model.onnx`) into `models/`.
## Running the Application

Launch the Streamlit application:

```bash
streamlit run app.py
```

The application will open in your default web browser at http://localhost:8501.
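All three modules are reached from the Streamlit sidebar (see Usage below). As a rough, hypothetical sketch of how such navigation is typically wired up in Streamlit (the actual layout of `app.py` may differ):

```python
import streamlit as st

st.set_page_config(page_title="Nature Nexus", layout="wide")

# Sidebar navigation between the three monitoring modules.
module = st.sidebar.radio(
    "Select a module",
    ["Deforestation Detection", "Forest Audio Surveillance", "Object Detection"],
)

if module == "Deforestation Detection":
    st.header("Deforestation Detection")
    image_file = st.file_uploader("Upload satellite or aerial imagery", type=["jpg", "png", "tif"])
elif module == "Forest Audio Surveillance":
    st.header("Forest Audio Surveillance")
    audio_file = st.file_uploader("Upload an audio clip", type=["wav", "mp3", "ogg"])
else:
    st.header("Object Detection")
    confidence = st.sidebar.slider("Confidence threshold", 0.0, 1.0, 0.4)
```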
## Model Details

### Deforestation Detection Model
- Architecture: Attention U-Net
- Input: Satellite/aerial imagery (RGB)
- Output: Binary segmentation mask (forest vs. deforested)
- Optimization: ONNX Runtime for faster inference
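A minimal sketch of running the exported model with ONNX Runtime is shown below; the input resolution, normalization, and output thresholding are assumptions for illustration, and the real pre/post-processing lives in `utils/preprocess.py` and `prediction_engine.py`.

```python
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("models/deforestation_model.onnx")
input_name = session.get_inputs()[0].name

# Assumed preprocessing: resize to 256x256, scale to [0, 1], NCHW layout.
image = Image.open("tile.png").convert("RGB").resize((256, 256))
x = np.asarray(image, dtype=np.float32) / 255.0
x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]       # shape (1, 3, 256, 256)

# Run inference and threshold the (assumed sigmoid) output into a binary mask.
outputs = session.run(None, {input_name: x})
mask = (outputs[0].squeeze() > 0.5).astype(np.uint8)  # 1 = deforested (assumed)
print("Deforested fraction:", float(mask.mean()))
```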
### Audio Classification Model
- Architecture: Convolutional Neural Network (CNN)
- Input: Audio spectrograms
- Output: 14 sound classes with confidence scores
- Features: Mel-spectrogram analysis
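The mel-spectrogram front end can be reproduced with `librosa`; the sample rate, number of mel bands, and clip length below are illustrative assumptions rather than the exact values used in `utils/audio_processing.py`.

```python
import librosa
import numpy as np

def audio_to_melspec(path: str, sr: int = 22050, n_mels: int = 64, duration: float = 5.0) -> np.ndarray:
    """Load an audio clip and convert it to a log-scaled, normalized mel-spectrogram.

    Parameter values are illustrative; match them to the training pipeline.
    """
    y, sr = librosa.load(path, sr=sr, duration=duration)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Normalize to [0, 1] so the spectrogram can be fed to the CNN like an image.
    return (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min() + 1e-8)

spec = audio_to_melspec("clip.wav")  # hypothetical input file
print(spec.shape)                    # (n_mels, time_frames)
```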
### Object Detection Model
- Architecture: YOLOv11
- Input: Images/video frames
- Output: Bounding boxes, class labels, confidence scores
- Classes: Humans, vehicles, fire, smoke, etc.
## Project Structure

```
nature-nexus/
│
├── app.py                   # Main Streamlit application
├── prediction_engine.py     # Deforestation model interface
│
├── utils/
│   ├── audio_model.py       # Audio classification model
│   ├── audio_processing.py  # Audio preprocessing utilities
│   ├── helpers.py           # Helper functions for visualization
│   ├── model.py             # U-Net model definition
│   ├── onnx_converter.py    # Converts PyTorch models to ONNX
│   ├── onnx_inference.py    # YOLO object detection inference
│   └── preprocess.py        # Image preprocessing utilities
│
└── models/                  # Model weights (not included in the repo)
    ├── deforestation_model.onnx
    ├── best_model.pth       # Audio model
    └── best_model.onnx      # YOLO model
```
- Select "Deforestation Detection" from the sidebar
- Upload satellite or aerial imagery of forest areas
- View segmentation results showing forest vs. deforested areas
- Analyze metrics including forest coverage and deforestation level
- Select "Forest Audio Surveillance" from the sidebar
- Choose between uploading audio files or recording live audio
- Submit the audio for analysis
- View detected sound classification and potential alerts
- Select "Object Detection" from the sidebar
- Choose between image, video, or camera feed
- Adjust confidence and IoU thresholds as needed
- Upload or capture input for processing
- View detection results with bounding boxes and confidence scores
## Training Custom Models

To train custom models for your specific forest environment:

### Deforestation Model

```bash
# Convert a trained PyTorch model to ONNX
python -m utils.onnx_converter models/your_pytorch_model.pth models/deforestation_model.onnx [input_size]
```
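For reference, such a conversion generally boils down to `torch.onnx.export`. The sketch below assumes the checkpoint stores a `state_dict` for the U-Net defined in `utils/model.py` and a square RGB input; the class name `AttentionUNet` is hypothetical.

```python
import torch
from utils.model import AttentionUNet  # hypothetical class name from utils/model.py

input_size = 256                                   # must match the training resolution
model = AttentionUNet()                            # instantiate with your training configuration
model.load_state_dict(torch.load("models/your_pytorch_model.pth", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, input_size, input_size)  # dummy RGB input used for tracing
torch.onnx.export(
    model,
    dummy,
    "models/deforestation_model.onnx",
    input_names=["input"],
    output_names=["mask"],
    opset_version=17,
    dynamic_axes={"input": {0: "batch"}, "mask": {0: "batch"}},
)
```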
### Audio Model

Train on your custom audio dataset and replace the model file at `models/best_model.pth`.

### Object Detection Model

Train on your custom object dataset and replace the model file at `models/best_model.onnx`.
## Troubleshooting

- Models not loading: Ensure all model files exist in the `models/` directory
- CUDA errors: If using a GPU, verify that CUDA and cuDNN are correctly installed
- Audio processing issues: Check audio format compatibility (WAV, MP3, OGG)
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under the MIT License; see the LICENSE file for details.