An AI-powered real-time elephant monitoring and detection system that combines computer vision, web technologies, and database integration for wildlife surveillance and conservation efforts. EMCS serves as a groundbreaking solution for preventing human-wildlife conflicts while promoting peaceful coexistence between communities and elephants.
- Overview
- The EMCS Solution
- Features
- System Architecture
- Technology Stack
- Project Structure
- Installation
- Configuration
- Usage
- API Documentation
- Model Training
- Deployment
- Community Impact
- Contributing
- License
EMCS is a comprehensive wildlife monitoring solution designed to detect and track elephants in real-time using advanced computer vision techniques. The system processes camera feeds, identifies elephants with high accuracy, captures snapshots, and provides a web-based dashboard for monitoring and management. More importantly, EMCS serves as a critical tool for preventing human-wildlife conflicts by providing early warning systems to rural communities.
- Real-time Detection: Processes live camera feeds using YOLOv8 models
- Multi-camera Support: Monitor multiple camera sources simultaneously
- Community Alert System: Automated sirens and visual alerts for village protection
- Cloud Integration: Automatic snapshot upload to external services
- Modern Web Interface: Responsive dashboard built with Next.js and React
- Database Integration: Stores detection data with Convex backend
- Conservation Focus: Promotes peaceful coexistence between humans and elephants
Human-wildlife conflicts, particularly involving elephants, pose significant challenges to rural communities worldwide. These conflicts result in:
- Crop Damage: Elephants destroying agricultural fields
- Property Destruction: Infrastructure damage from elephant incursions
- Safety Risks: Potential injuries or fatalities to both humans and elephants
- Economic Losses: Financial impact on farming communities
- Conservation Challenges: Retaliatory killings affecting elephant populations
EMCS employs a sophisticated Elephant Detection System (EDS) that utilizes Closed-Circuit Television (CCTV) cameras strategically positioned at the periphery of villages. These cameras continuously monitor the surroundings, particularly areas prone to elephant crossings.
Integrated with the CCTV cameras are Raspberry Pi cameras equipped with a specialized elephant detection model. The elephant detection model is trained using machine learning algorithms to recognize the distinctive features and movements of elephants, analyzing live video feeds in real-time to swiftly identify elephant presence within village vicinity.
Upon detecting elephants, EMCS triggers a series of immediate alerts to notify villagers and prevent potential conflicts:
- Audio Alerts: Sirens blare loudly throughout the village
- Visual Alerts: Bright flashing lights provide unmistakable warning signals
- Digital Notifications: Telegram alerts to authorities and community leaders
- Dashboard Updates: Real-time updates on the web interface
EMCS fosters community engagement by:
- Involving Villagers: Active participation in the monitoring process
- Authority Notification: Simultaneous alerts to local authorities and wildlife conservation organizations
- Educational Programs: Awareness campaigns about harmonious coexistence with wildlife
- Feedback Systems: Encouraging villager reports and system effectiveness feedback
By detecting and alerting villagers to elephant presence, EMCS significantly reduces the risk of human-wildlife conflicts and potential injuries or fatalities.
Timely alerts enable farmers to take preventive measures, such as deploying nboise or light, minimizing crop damage caused by elephant incursions.
By mitigating conflicts and promoting peaceful coexistence, EMCS contributes to wildlife conservation by reducing retaliatory killings and fostering positive conservation attitudes.
Compared to traditional methods like hiring human guards or erecting physical barriers, EMCS offers a cost-effective and scalable technology solution for efficient monitoring and alerting.
- YOLOv8 Integration: State-of-the-art object detection
- Custom Elephant Model: Specialized model trained for elephant detection
- Frame Optimization: Intelligent frame processing for performance
- Real-time Analysis: Continuous monitoring with minimal latency
- Multi-camera Support: Connect and monitor multiple IP cameras and Raspberry Pi cameras
- Live Streaming: Real-time video streams
- Camera Status Monitoring: Online/offline status tracking
- Perimeter Coverage: Strategic positioning for maximum detection coverage
- Audio Alerts: Integrated siren system for immediate audible warnings
- Visual Alerts: Bright flashing lights for visual confirmation
- Multi-channel Notifications: Telegram, web dashboard, and local alerts
- Modern UI: Beautiful, responsive interface with Tailwind CSS
- Real-time Updates: Live camera feeds and detection status
- Camera Configuration: Easy camera addition and management
- Detection History: View past detections and snapshots
- Mobile Responsive: Works seamlessly on all devices
- Database Storage: Convex integration for data persistence
- Image Hosting: ImgBB integration for snapshot storage
- Notifications: Telegram bot for instant alerts
- RESTful API: FastAPI backend with comprehensive endpoints
Village Perimeter Cameras
│
▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ CCTV/RPi │────│ Backend API │────│ Frontend UI │
│ Cameras │ │ (FastAPI) │ │ (Next.js) │
│ (MJPEG) │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
┌───────┼───────┐
│ │ │
┌──────▼──┐ ┌──▼───┐ ┌─▼─────┐
│ YOLOv8 │ │Convex│ │ImgBB │
│ Models │ │ DB │ │Storage│
└─────────┘ └──────┘ └───────┘
│
┌────────┼────────┐
│ │
┌──────▼──┐ ┌──▼────┐
│ Sirens │ │Telegram│
│ System │ │ Alerts │
└─────────┘ └───────┘
- Python 3.12: Core runtime environment
- FastAPI: Modern, fast web framework for building APIs
- Ultralytics YOLO: State-of-the-art object detection
- OpenCV: Computer vision and image processing
- Asyncio: Asynchronous programming for concurrent operations
- Next.js 15: React framework with app router
- Tailwind CSS: Utility-first CSS framework
- Convex: Real-time database and backend platform
- ImgBB: Image hosting service
- Telegram API: Notification system
- Raspberry Pi: Edge computing for camera systems
- CCTV Cameras: Professional surveillance equipment
- Alert Systems: Sirens and lighting infrastructure
- IoT Sensors: Environmental and motion detection
- JavaScript/JSX: Frontend development
- Python: Backend development
- Git: Version control
- npm: Package management
- Python 3.12+
- Node.js 18+
- npm or yarn
- Git
- CUDA-capable GPU (optional, for faster inference)
-
Clone the repository
git clone https://github.com/CSIT-Association-of-BMC/Mechi-Mavericks/ cd Mechi-Mavericks
-
Create virtual environment
python -m venv venv source venv/bin/activate # On Windows: venv\Scripts\activate
-
Install dependencies
pip install -r requirements.txt
-
Set up environment variables
cp .env.example .env
Edit
.env
file with your configuration:MGBB_API_KEY= "<IMGBB_API_KEY>" DATABASE_POST_API_ROUTE="<DATABASE_POST_API_ROUTE>" NOTIFICATIONS_API_ROUTE="<NOTIFICATIONS_API_ROUTE>" TELEGRAM_BOT_MESSAGE_API_ROUTE="<TELEGRAM_BOT_MESSAGE_API_ROUTE>"
-
Download models
- Place your trained YOLO models in the
model/
directory - Ensure
best.pt
is your primary model
- Place your trained YOLO models in the
-
Navigate to frontend directory
cd ../Frontend/elephant
-
Install dependencies
npm install # or yarn install
-
Configure environment
cp example.env .env.local
Edit
.env.local
:CONVEX_DEPLOYMENT="" NEXT_PUBLIC_CONVEX_URL= "" BACKEND_URL="<Backend URL>"
-
Set up Convex
npx convex dev
If you need to train your own models:
-
Navigate to modelRuns directory
cd ../../modelRuns
-
Open Jupyter notebook
jupyter notebook elephant.ipynb
-
Follow the training process in the notebook
Cameras can be configured through the web interface or by modifying the backend configuration:
# Example camera configuration for village perimeter
cameras = [
{
"id": "entrance_north",
"source": "http://192.168.1.100:8080/video", # MJPEG stream URL
"location": "Village North Entrance",
"confidence_threshold": 0.7,
"alert_zone": True,
"coverage_area": "Primary elephant corridor"
},
{
"id": "farm_boundary_east",
"source": "http://192.168.1.101:8080/video",
"location": "Eastern Farm Boundary",
"confidence_threshold": 0.75,
"alert_zone": True,
"coverage_area": "Agricultural area"
}
]
Modify detection settings in the backend:
# Detection configuration for village protection
CONFIDENCE_THRESHOLD = 0.7 # Minimum confidence for detection
DETECTION_COOLDOWN = 20 # Seconds between duplicate detections
FRAME_SKIP = 5 # Process every 5th frame
UPLOAD_INTERVAL = 10 # Seconds between uploads
ALERT_THRESHOLD = 0.8 # Confidence threshold for triggering alerts
ALERT_DURATION = 30 # Seconds for alert activation
# Alert system settings
ALERT_CONFIG = {
"siren_enabled": True,
"siren_duration": 30, # Seconds
"light_enabled": True,
"light_flash_interval": 0.5, # Seconds
"telegram_alerts": True,
"escalation_enabled": True,
"max_alert_frequency": 5 # Maximum alerts per hour
}
-
Start the backend server
cd Backend python -m uvicorn app:app --reload --host 0.0.0.0 --port 8000
-
Start the frontend development server
cd Frontend/elephant npm run dev
-
Access the application
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Monitor Village Perimeter: View live camera streams from strategically placed cameras
- Configure Alert Systems: Set up siren and lighting alert parameters
- Check Detection History: Review past elephant detections and community responses
- Manage Community Alerts: Configure notification systems for villagers and authorities
- Track Conservation Metrics: Monitor system effectiveness and wildlife patterns
When EMCS detects elephants:
- Immediate Alerts: Sirens and lights activate automatically
- Community Notification: Villagers receive alerts via multiple channels
- Authority Contact: Local wildlife authorities are notified
- Response Coordination: Dashboard provides real-time coordination tools
- Documentation: All incidents are logged for analysis and improvement
For production deployment:
-
Backend
uvicorn app:app --host 0.0.0.0 --port 8000
-
Frontend
npm run build npm start
The EMCS system uses custom-trained YOLOv8 models specifically optimized for elephant detection in rural environments. To train your own model:
-
Prepare Dataset
- Use Roboflow for dataset management
- Include diverse elephant poses, lighting conditions, and rural backgrounds
- Ensure proper annotation format (YOLO)
-
Training Configuration
# Training parameters optimized for elephant detection model = YOLO('yolov8m.pt') # Base model results = model.train( data='path/to/elephant_dataset.yaml', epochs=100, imgsz=640, batch=16, device=0, # GPU device patience=20, save_period=10 )
-
Model Evaluation
- Test on various lighting conditions (day/night)
- Validate with different elephant behaviors
- Test detection range and accuracy
- Optimize for rural camera conditions
best.pt
- Primary production model optimized for village deploymentm-model.pt
- Medium-sized model for balanced performance on Raspberry Pis-model.pt
- Small model for resource-constrained edge devicesl-model.pt
- Large model for maximum accuracy in critical areas
- Perimeter Assessment: Identify key elephant crossing points
- Camera Placement: Strategic positioning for maximum coverage
- Power Infrastructure: Ensure reliable power supply for cameras and alerts
- Network Setup: Establish communication between components
- Camera Installation: Mount weatherproof cameras at strategic points
- Alert System Setup: Install sirens and warning lights throughout village
- Central Hub: Set up main processing unit (server or powerful Raspberry Pi)
- Network Configuration: Connect all components via WiFi or ethernet
- Training Sessions: Educate villagers on system operation
- Feedback Mechanisms: Establish reporting channels
- Maintenance Protocols: Train local technicians
- Emergency Procedures: Define response protocols
Create docker-compose.yml
:
version: '3.8'
services:
emcs-backend:
build: ./Backend
ports:
- "8000:8000"
environment:
- DATABASE_POST_API_ROUTE=${DATABASE_POST_API_ROUTE}
- IMGBB_API_KEY=${IMGBB_API_KEY}
- ALERT_SYSTEM_ENABLED=${ALERT_SYSTEM_ENABLED}
volumes:
- ./Backend/model:/app/model
- ./Backend/snapshots:/app/snapshots
devices:
- /dev/gpiomem:/dev/gpiomem # For Raspberry Pi GPIO access
emcs-frontend:
build: ./Frontend/elephant
ports:
- "3000:3000"
environment:
- NEXT_PUBLIC_CONVEX_URL=${NEXT_PUBLIC_CONVEX_URL}
depends_on:
- emcs-backend
- Reduced Human-Wildlife Conflicts: Significantly decreases dangerous encounters
- Elephant Safety: Prevents retaliatory killings and habitat encroachment
- Behavioral Monitoring: Provides valuable data on elephant movement patterns
- Habitat Preservation: Encourages coexistence rather than habitat destruction
- Enhanced Safety: Protects villagers from potential elephant encounters
- Agricultural Protection: Safeguards crops and livelihoods
- Economic Stability: Reduces financial losses from elephant damage
- Technology Empowerment: Brings modern technology to rural communities
- Conflict Reduction: 80-90% decrease in human-elephant conflicts
- Crop Protection: 70-85% reduction in agricultural damage
- Community Safety: Zero elephant-related injuries in protected villages
- Conservation Impact: Improved local attitudes toward elephant conservation
We welcome contributions to the EMCS project! Your help can make a real difference in wildlife conservation and community safety.
- Fork the repository
- Create a feature branch
git checkout -b feature/amazing-feature
- Commit your changes
git commit -m 'Add amazing feature for village protection'
- Push to the branch
git push origin feature/amazing-feature
- Open a Pull Request
- AI/ML: Improve detection accuracy and model optimization
- Frontend: Enhance user interface and community features
- Backend: Optimize API performance and alert systems
- Mobile: Develop mobile applications for field use
- Hardware: IoT integration and edge computing improvements
- Documentation: Improve guides and educational materials
- Localization: Translate interface for different regions
- Follow PEP 8 for Python code
- Use ESLint for JavaScript/React code
- Write tests for new features
- Update documentation as needed
- Consider community impact in feature design
- Test with real-world scenarios when possible
- Model loading fails: Check model file paths and GPU availability
- Low accuracy: Retrain model with local data, adjust lighting conditions
- False alerts: Adjust confidence thresholds, improve camera positioning
- Camera connection fails: Verify IP addresses, check network connectivity
- Alert system not working: Check GPIO connections, verify power supply
- High CPU usage: Consider using smaller models or reducing frame rate
- Alert fatigue: Balance sensitivity to reduce false alarms
- Maintenance issues: Establish local technical support
- Power outages: Implement backup power solutions
This project is licensed under the MIT License - see the LICENSE file for details.
The EMCS is developed with the intention of promoting wildlife conservation and community safety. We encourage responsible use and welcome collaborations with conservation organizations.