Welcome! This guide will help you build and deploy an AI-powered flood intelligence and disaster response system using NVIDIA NIM and h2oGPTe.
You'll build an intelligent system that combines:
- Real-time flood monitoring from USGS and NOAA data sources
- AI-powered risk assessment using NVIDIA's latest language models
- Multi-agent coordination with 5 specialized AI agents
- Predictive analytics for flood forecasting
- Interactive dashboard for monitoring and alerts
By the end of this guide, you'll have a fully functional flood intelligence system running with live data.
- ✅ NVIDIA API Key - Get your free key from build.nvidia.com
- 🔹 NGC API Key - For running a local NVIDIA NIM model (requires GPU)
  - Get it from the NGC Catalog
  - Only needed if you have an NVIDIA GPU available
- 🔹 H2OGPTE Access - For advanced AutoML features
  - Get access at h2o.ai
  - The system works without this, but some features will be limited

Note: If you don't have H2OGPTE or NGC keys, that's okay! The system will work with just the NVIDIA API key.
- In your Jupyter environment, navigate to the notebook: Building_Flood_Intelligence_Agents.ipynb
- Open the notebook - you'll see it's organized into clear sections
- You'll follow the notebook from top to bottom, running cells as you go

Important: Read the instructions in each section before running cells!
The notebook guides you through everything step-by-step. Here's what to expect:
This section sets up your environment and deploys the application.
What you'll do:
1. Install Python Dependencies (Cell 5)
   - Run the cell to install the required libraries
   - ⚠️ Important: Restart your kernel after this step. Don't run this cell again after restarting.
2. Collect API Keys (Cells 7-10)
   - The notebook will prompt you to enter your API keys
   - Your inputs are hidden for security
   - Required: NVIDIA API Key
   - Optional: NGC API Key, H2OGPTE credentials
   - Just press Enter to skip optional keys
3. Generate Configuration File (Cells 12-13)
   - Run the cells to create your configuration
   - The notebook shows a summary of what was configured
   - A file called flood_intelligence.env is created automatically (see the sketch after these steps for the general pattern)
4. Pull Docker Images (Cells 17-18, optionally 23-27)
   - This downloads the application containers
   - Takes 5-10 minutes depending on your connection
   - You'll see progress bars
   - Optional: If you have a GPU, you can pull the NIM LLM image (cells 23-27)
5. Deploy the Application (Cell 30 or 32)
   - Run the docker compose command
   - With GPU: Use cell 30 to deploy with the local NIM LLM
   - Without GPU: Use cell 32 for the standard deployment
   - Wait 2-3 minutes for the services to start
6. Verify Deployment (Cell 34)
   - Check that all containers show "healthy" status
   - If not healthy, wait another minute and check again

✅ Checkpoint: Once all containers are healthy, your system is deployed!
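For orientation, here is a minimal sketch of what the key-collection and configuration cells (Steps 2-3) do under the hood: prompt for keys with hidden input, then write the provided values to a .env file. This is not the notebook's exact code - the variable and entry names (e.g. NVIDIA_API_KEY, H2OGPTE_API_KEY) are illustrative:

```python
# A minimal sketch (not the notebook's exact code) of collecting API keys
# with hidden input and writing them to a .env configuration file.
from getpass import getpass
from pathlib import Path

nvidia_api_key = getpass("NVIDIA API Key (required): ")
ngc_api_key = getpass("NGC API Key (optional, press Enter to skip): ")
h2ogpte_api_key = getpass("H2OGPTE API Key (optional, press Enter to skip): ")

# Only write the keys that were actually provided.
config = {
    "NVIDIA_API_KEY": nvidia_api_key,
    "NGC_API_KEY": ngc_api_key,
    "H2OGPTE_API_KEY": h2ogpte_api_key,
}
lines = [f"{name}={value}" for name, value in config.items() if value]
Path("flood_intelligence.env").write_text("\n".join(lines) + "\n")
print(f"Wrote {len(lines)} settings to flood_intelligence.env")
```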
Learn how NVIDIA's language models power the flood intelligence system:
- Test different NVIDIA models
- See streaming responses in action
- Compare model performance
- Try the LLM-as-Judge evaluation feature
What you'll do: Run the cells to see AI models analyzing flood scenarios in real-time.
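If you want to see the pattern behind these cells, here is a small standalone sketch of a streaming call to a hosted NVIDIA NIM endpoint via its OpenAI-compatible API. It assumes the openai Python package and a NVIDIA_API_KEY environment variable; the model name is only an example - substitute any model listed on build.nvidia.com:

```python
# A minimal sketch of a streaming chat completion against a hosted NVIDIA NIM endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.environ["NVIDIA_API_KEY"],
)

stream = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",  # example model; pick any from build.nvidia.com
    messages=[{
        "role": "user",
        "content": "A river gauge reads 2 ft above flood stage and is still rising. "
                   "Summarize the flood risk in two sentences.",
    }],
    stream=True,
)

# Print tokens as they arrive.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```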
Explore advanced AutoML capabilities (if you configured H2OGPTE):
- Get AI guidance on building ML models
- Learn feature engineering techniques
- Understand model training best practices
Note: This section is skipped if you don't have H2OGPTE credentials - that's okay!
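If you do have H2OGPTE credentials and are curious what a direct call looks like outside the notebook, here is a rough sketch using the h2ogpte Python client. The environment variable names are assumptions, and the client's method names can differ between versions:

```python
# A minimal sketch (assuming the h2ogpte Python client; APIs vary by version)
# of asking h2oGPTe for AutoML / feature-engineering guidance.
import os
from h2ogpte import H2OGPTE

client = H2OGPTE(
    address=os.environ["H2OGPTE_URL"],      # e.g. your h2oGPTe server URL
    api_key=os.environ["H2OGPTE_API_KEY"],
)

chat_session_id = client.create_chat_session()
with client.connect(chat_session_id) as session:
    reply = session.query(
        "What features would you engineer from hourly river gauge height and "
        "rainfall data to predict flood risk 24 hours ahead?",
        timeout=120,
    )
    print(reply.content)
```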
Interact with the 5 specialized AI agents:
- Data Collector: Pulls real-time flood data
- Risk Analyzer: Calculates flood risk scores
- Emergency Responder: Manages alerts and evacuations
- AI Predictor: Generates flood forecasts
- H2OGPTE ML Agent: Trains and optimizes models (optional)
What you'll do:
- View agent status and insights
- Run agent workflows
- See how agents coordinate to analyze flood risk
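The agents encapsulate their own logic, but purely to illustrate the kind of calculation the Risk Analyzer performs, here is a hypothetical scoring heuristic (not the system's actual formula):

```python
# A purely illustrative risk-score heuristic - NOT the Risk Analyzer's actual formula.
# It maps a gauge reading against the site's flood stage and recent trend to a 0-100 score.
def flood_risk_score(gauge_height_ft: float, flood_stage_ft: float, rise_ft_per_hr: float) -> float:
    """Return a 0-100 flood risk score for one monitoring station."""
    # How close the river is to (or above) flood stage, as a fraction.
    stage_ratio = gauge_height_ft / flood_stage_ft
    base = min(stage_ratio, 1.5) / 1.5 * 70          # up to 70 points from the current level
    trend = max(min(rise_ft_per_hr, 1.0), 0.0) * 30  # up to 30 points from a rising trend
    return round(min(base + trend, 100.0), 1)

# Example: 1 ft below flood stage and rising 0.4 ft/hr -> roughly a mid-range score.
print(flood_risk_score(gauge_height_ft=11.0, flood_stage_ft=12.0, rise_ft_per_hr=0.4))
```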
Work with live data from government agencies:
- USGS water monitoring stations
- NOAA flood alerts
- Weather forecasts
What you'll do:
- Refresh live data from monitoring stations
- View watershed data in tables
- See risk scores and trends
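To make the data sources concrete, here is a small standalone sketch of pulling one station's latest gauge height from the USGS Instantaneous Values service and listing active NOAA/NWS flood warnings. It calls the public APIs directly rather than the notebook's Data Collector agent; the site code and User-Agent string are placeholders:

```python
# A minimal sketch of the kind of live-data pulls this section performs.
import requests

# USGS Instantaneous Values service - parameter 00065 is gage height in feet.
usgs = requests.get(
    "https://waterservices.usgs.gov/nwis/iv/",
    params={"format": "json", "sites": "01646500", "parameterCd": "00065"},
    timeout=30,
).json()
series = usgs["value"]["timeSeries"][0]
latest = series["values"][0]["value"][-1]
print(series["sourceInfo"]["siteName"], "-", latest["value"], "ft at", latest["dateTime"])

# NOAA/NWS active flood warnings (the API asks for a descriptive User-Agent header).
nws = requests.get(
    "https://api.weather.gov/alerts/active",
    params={"event": "Flood Warning"},
    headers={"User-Agent": "flood-intelligence-demo (you@example.com)"},
    timeout=30,
).json()
for alert in nws.get("features", [])[:5]:
    print(alert["properties"]["headline"])
```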
Once deployment is complete (Section 1), you can access the interactive dashboard:
- Go to the "Access" section of your deployed instance on Brev.
- At the bottom of the page, find "Using Ports" section.
- If the cloud provider allows forwarding ports, there will be a clickable link similar to x.x.x.x:8090 under the "TCP/UDP Ports" section.
- Open that URL in your browser
- You'll see the Flood Intelligence Dashboard with:
- Real-time flood monitoring
- Interactive watershed maps
- Agent status and insights
- Alert management
- Data visualizations
Problem: After deploying, containers don't show "(healthy)" status
Solutions:
- Wait a few minutes - services take time to initialize (in some cases this can take 20-30 minutes)
- Run this cell again to check status:
!docker ps -a
- If the containers are still not healthy, check the logs:
!docker logs flood-intelligence-web
Problem: Port 8090 doesn't load or shows an error
Solutions:
- Verify containers are running and healthy (see above)
- Wait 2-3 minutes after deployment
- Try refreshing your browser
- Check that the deployment step (cell 30 or 32) completed without errors
Problem: Cells show "API key required" errors
Solutions:
- Make sure you ran all cells in Section 1, Step 2 (collecting keys)
- Verify you entered the keys correctly (check for extra spaces)
- Re-run the key collection cells if needed
- After fixing keys, restart the deployment:
!docker compose -f ../deployment/nvidia-launchable/docker-compose.yml --env-file ./flood_intelligence.env down
!docker compose -f ../deployment/nvidia-launchable/docker-compose.yml --env-file ./flood_intelligence.env up -d
Problem: Containers crash or system becomes slow
Solutions:
- If you deployed with local NIM (cell 30), try without it (cell 32 instead)
- Close other applications to free up memory
- Restart the containers:
!docker compose --env-file ./flood_intelligence.env restart
Problem: "Kernel died" or cells won't run
Solutions:
- Restart the kernel from the Kernel menu
- Don't re-run the dependency installation (cell 5) after restarting
- Re-run cells from Section 1, Step 2 onwards
Once your system is running, you can:
- Explore the Agents - Run the examples in Section 4 to see agents in action
- Monitor Live Data - Use Section 5 to refresh and view real-time watershed data
- Customize - Modify the prompts and queries to test different scenarios
- Add More Data - The system can monitor any watershed with USGS data
- Train Models - If you have H2OGPTE, try the AutoML features in Section 3
- NVIDIA NIM: build.nvidia.com
- USGS Water Data: waterdata.usgs.gov
- NOAA Flood Alerts: weather.gov
- H2O.ai: h2o.ai
Check container status (all containers should show "Up" and "(healthy)"):
!docker ps -a

View logs:
!docker logs flood-intelligence-web
!docker logs flood-intelligence-redis

Restart the services:
!docker compose --env-file ./flood_intelligence.env restart

Stop everything:
!docker compose --env-file ./flood_intelligence.env down

Start everything again:
!docker compose --env-file ./flood_intelligence.env up -d

If you encounter issues:
- Check the Troubleshooting section above
- Review the error messages in failed cells
- Check container logs with the commands above
- Make sure you completed all steps in Section 1 (Setup)
This flood intelligence system is built with:
- NVIDIA NIM - High-performance AI inference
- h2oGPTe - Enterprise AI and AutoML
- FastMCP - Multi-agent coordination
- Real-time APIs - USGS, NOAA, weather data
It demonstrates how AI can be used for disaster response and public safety.
🌊 Ready to start? Open the notebook and begin with Section 1!
Built with ❤️ for AI for Good using H2O.ai and NVIDIA NIM