The PREDICTIVE ANALYTIC IN COMPLEX ENVIRONMENT (Disaster Management) Database System speeds up response, preparedness, and mitigation for natural disasters. By centralizing disaster and weather data in real time, it applies predictive models to improve decision-making and help rescue teams and government agencies manage disasters more effectively.
- PHASE 2: Business Process Modeling
- PHASE 3: Logical Data Model Design
- PHASE 4: Creating and Naming the Database
- PHASE 5: Table Implementation and Data Insertion
- PHASE 6: Interaction with the Database and Transactions
The system shall:
- Process Real-Time Data: Enable efficient alerts and disaster mitigation.
- Enhance Preparedness: Provide actionable insights to emergency teams and government bodies.
- Support International Collaboration: Share data across borders for harmonized disaster management approaches.
- More accurate disaster predictions.
- Better coordination of response and disaster management.
- Evidence-based decision-making in disaster situations.
- Users
- Roles: Disaster planners, data analysts, and on-site responders use the system to process disaster-related data.
- Engagement: Handle information on disaster risks, occurrences, and predictions.
- Departments
- Responsibilities: Government institutions, relief agencies, and meteorological stations provide and access data.
- Engagement: Facilitate situation analysis and real-time response strategy development.
- Information Systems
- Responsibilities: Integrate predictive models and databases to analyze and forecast disasters.
- Engagement: Process data from various sources to issue warnings and recommendations.
- External Data Sources
- Responsibilities: Supply satellite imagery, weather alerts, and other relevant data.
- Engagement: Aggregate and supply external data for system use.
Process Modeling Using a Swimlane Diagram
The swimlane diagram details the main processes involved:
- Swimlane 1: Users identify disaster events, request data, and appraise model output.
- Swimlane 2: Departments supply data, ensure data integrity, and plan responses.
- Swimlane 3: Information Systems collect, store, and process data to generate predictions.
- Swimlane 4: External Data Sources provide satellite images and weather forecasts in raw form.
The following BPMN diagram visualizes disaster management processes:
- Start Event: Users start disaster reporting.
- Data Collection: Disaster-related data is obtained from external sources.
- Data Storage and Processing: The systems store and analyze the data for forecasts.
- Data Quality Control: Check the accuracy of data and conformance to policy.
- Response Predictions: Departments and users act on the predictions generated by the system.
- Preparedness Strategies: Departments disseminate warnings and execute strategies.
- End Event: Data updates enhance future predictions.
Reference link: https://demo.bpmn.io/new
- Disaster: Tracks disaster details (e.g., type, magnitude, location).
- Location: Stores geographical data (e.g., country, state, city).
- Prediction: Logs disaster predictions (e.g., type, risk level).
- Weather Condition: Records weather-related data (e.g., temperature, rainfall).
- Preparedness Measure: Describes strategies for mitigating and responding to disasters.
- Historical Disaster Data: Maintains past disaster history records.
The ER diagram will map the following types of relationships:
- A one-to-many relationship from locations to disasters (one location can experience many disasters).
- A many-to-one relationship from predictions to locations (each prediction targets a single location).
- A many-to-many relationship between historical disaster data, disasters, and locations, resolved with a bridge table as sketched below.
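As a minimal sketch of how the many-to-many relationship resolves in SQL, Historical_Disaster_Data can act as a bridge table carrying foreign keys to both Disaster and Location; the column names here are assumptions for illustration, since the actual schema is defined in Phase 5:
CREATE TABLE Historical_Disaster_Data (
    History_ID INT PRIMARY KEY,
    Disaster_ID INT,       -- assumed FK to Disaster
    Location_ID INT,       -- assumed FK to Location
    Recorded_Date DATE,    -- hypothetical attribute
    CONSTRAINT fk_hist_disaster FOREIGN KEY (Disaster_ID) REFERENCES Disaster (Disaster_ID),
    CONSTRAINT fk_hist_location FOREIGN KEY (Location_ID) REFERENCES Location (Location_ID)
);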
- Username: wed_hawks
- Password: hawks
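Assuming an Oracle setup where the project schema is owned by a dedicated user, the account above might be created like this (the granted privileges are illustrative, not the project's exact configuration):
CREATE USER wed_hawks IDENTIFIED BY hawks;
GRANT CONNECT, RESOURCE TO wed_hawks;           -- basic login and object-creation privileges
ALTER USER wed_hawks QUOTA UNLIMITED ON USERS;  -- allow storage in the USERS tablespace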
- Tables Created: Location, Disaster, Prediction, Weather_Condition, Preparedness_Measure, Historical_Disaster_Data.
- Relationships: Tables are related with foreign keys to maintain referential integrity.
- Constraints: Primary keys, foreign keys, and unique keys are imposed.
SQL commands to create tables look like this:
CREATE TABLE Location (
Location_ID INT PRIMARY KEY,
Country VARCHAR(50),
State VARCHAR(50),
City VARCHAR(50)
);
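The dependent tables follow the same pattern and add foreign keys for referential integrity. As a sketch, Disaster could be defined as follows, with columns inferred from the insert statements below and the Magnitude precision assumed:
CREATE TABLE Disaster (
    Disaster_ID INT PRIMARY KEY,
    Disaster_Type VARCHAR(50),
    Disaster_Date DATE,
    Magnitude NUMBER(4, 1),  -- assumed precision, e.g. 7.8
    Location_ID INT,
    CONSTRAINT fk_disaster_location FOREIGN KEY (Location_ID) REFERENCES Location (Location_ID)
);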
Inserting Data
Sample data:
INSERT INTO Location (Location_ID, Country, State, City)
VALUES (1, 'Rwanda', 'Northern Province', 'Musanze');
INSERT INTO Disaster (Disaster_ID, Disaster_Type, Disaster_Date, Magnitude, Location_ID)
VALUES (1, 'Earthquake', TO_DATE('2024-11-15', 'YYYY-MM-DD'), 7.8, 1);
Examples of join operations:
- Cross Join:
SELECT Disaster.Disaster_Type, Location.Country FROM Disaster CROSS JOIN Location;
- Inner Join:
SELECT Disaster.Disaster_Type, Location.City FROM Disaster
INNER JOIN Location ON Disaster.Location_ID = Location.Location_ID;
- Outer Join:
SELECT Prediction.Predicted_Date, Location.Country FROM Prediction
LEFT OUTER JOIN Location ON Prediction.Predicted_Location_ID = Location.Location_ID;
- Insert with Commit (in Oracle, a transaction starts implicitly with the first DML statement, so no explicit BEGIN is needed):
INSERT INTO Disaster (Disaster_ID, Disaster_Type) VALUES (2, 'Flood');
INSERT INTO Preparedness_Measure (Measure_ID, Disaster_ID) VALUES (1, 2);
COMMIT;
- Rollback on Error:
INSERT INTO Prediction (Prediction_ID, Disaster_Type) VALUES (3, 'Hurricane');
ROLLBACK;
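For finer-grained control, a savepoint lets part of a transaction be undone while keeping earlier work; this is a sketch with arbitrarily chosen IDs:
INSERT INTO Disaster (Disaster_ID, Disaster_Type) VALUES (4, 'Drought');
SAVEPOINT after_disaster;
INSERT INTO Preparedness_Measure (Measure_ID, Disaster_ID) VALUES (2, 4);
ROLLBACK TO SAVEPOINT after_disaster;  -- undoes only the second insert
COMMIT;                                -- keeps the Disaster row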
This project includes advanced database programming features, such as:
- Triggers: Ensure data integrity and automate workflows (a trigger sketch follows this list).
- Cursors: Handle row-by-row data processing (a cursor sketch appears after the considerations below).
- Functions and Packages: Encapsulate reusable logic for efficient operations.
- Auditing Mechanisms: Track changes and enforce security measures.
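As a sketch of the trigger idea, the following PL/SQL trigger would copy every newly inserted Disaster row into an audit table; the Disaster_Audit table and its columns are hypothetical, not part of the project schema:
-- Hypothetical audit table for this sketch
CREATE TABLE Disaster_Audit (
    Disaster_ID INT,
    Audited_At TIMESTAMP DEFAULT SYSTIMESTAMP
);
CREATE OR REPLACE TRIGGER trg_disaster_audit
AFTER INSERT ON Disaster
FOR EACH ROW
BEGIN
    -- record which disaster row was inserted; Audited_At defaults to now
    INSERT INTO Disaster_Audit (Disaster_ID) VALUES (:NEW.Disaster_ID);
END;
/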
- Performance Impact: Triggers and cursors can affect performance if not optimized.
- Complexity: Auditing requires careful setup to avoid excessive logging.
- Data Consistency: Ensure proper testing when implementing packages and functions.
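And a minimal cursor sketch, iterating row by row over the disasters recorded for one location (the loop body is a placeholder for real processing):
DECLARE
    CURSOR c_disasters IS
        SELECT Disaster_Type, Magnitude FROM Disaster WHERE Location_ID = 1;
BEGIN
    FOR rec IN c_disasters LOOP
        -- row-by-row processing happens here
        DBMS_OUTPUT.PUT_LINE(rec.Disaster_Type || ' (magnitude ' || rec.Magnitude || ')');
    END LOOP;
END;
/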
For detailed documentation, see the Scope and Limitations Documentation.