mfmarcoferrero/tropical-cyclones-xai
Interpretable Machine Learning to Detect Tropical Cyclones

Welcome to the repository for the POLIMI Master's thesis titled "An Evaluation of Data-Driven Interpretable Methods to Detect Tropical Cyclones" by Marco Ferrero. This repository contains all the necessary materials and code to understand and reproduce the work presented in the thesis.

Repository Structure

  • docs: This directory contains the official thesis document and the executive summary. These documents provide an in-depth explanation of the methodologies used, results obtained, and conclusions drawn from the study.
  • notebooks: This directory includes all the Jupyter notebooks used throughout the project. These notebooks cover data preprocessing, model training, evaluation, and interpretation. Each notebook is well-documented to guide you through the steps taken in the analysis.
  • datasets: This directory houses the datasets used in the implementation. The data has been preprocessed and organized for ease of use in the notebooks.
  • models: This directory contains all the trained models produced and evaluated during the project. You can find both the final models and intermediate versions to understand the progression and improvements made.

Project Overview

The goal of this project is to evaluate various data-driven interpretable methods for detecting tropical cyclones. Interpretable machine learning models are crucial for understanding and trusting model predictions, especially in critical applications like cyclone forecasting.

Key Components

  • Data Preprocessing: Detailed steps to clean and prepare the data for modeling.
  • Model Training: Implementation of various machine learning models with a focus on interpretability.
  • Model Evaluation: Comprehensive evaluation of model performance using standard metrics.
  • Interpretability: Techniques used to interpret and explain model predictions.

Black-Box Models Evaluated

  • Gradient Boosting Decision Trees (XGBoost)
  • LSTM Networks
  • LSTM Autoencoders + XGBoost

The predictions of these black-box models are explained post hoc with LIME (Local Interpretable Model-agnostic Explanations), a model-agnostic technique that fits a simple local surrogate around each prediction.
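To illustrate the black-box-plus-explanation workflow, here is a minimal sketch of the LIME idea: perturb an instance, query the black-box model, and fit a distance-weighted linear surrogate whose coefficients act as local feature importances. The data is synthetic, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost; the function name and all parameters are illustrative, not taken from the thesis code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# Synthetic tabular data standing in for cyclone predictors (hypothetical).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Black-box model: gradient boosting (stand-in for XGBoost).
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=1000, width=1.0, seed=0):
    """Fit a weighted linear surrogate around instance x (LIME's core idea)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples.
    probs = model.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x (RBF kernel).
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / width**2)
    # 4. Fit an interpretable linear model locally.
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_  # per-feature local importances

coefs = lime_style_explanation(model, X[0])
print(coefs)
```

The production `lime` package adds refinements (feature discretization, surrogate feature selection), but the weighted-local-surrogate loop above is the mechanism being evaluated.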

White-Box Models

  • Decision Trees
  • Bayesian Rule Lists
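In contrast to the black-box models above, white-box models are interpretable by construction. As a hedged sketch (synthetic data, illustrative feature names, not the thesis's actual pipeline), a shallow decision tree can be printed as a set of human-readable rules:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical tabular features standing in for cyclone predictors.
X, y = make_classification(n_samples=400, n_features=5, random_state=1)
feature_names = [f"feat_{i}" for i in range(5)]

# Capping the depth keeps every prediction traceable to a short rule path.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)
rules = export_text(tree, feature_names=feature_names)
print(rules)
```

Bayesian rule lists follow a similar spirit: the fitted model is an ordered list of if-then rules, so the explanation is the model itself rather than a post-hoc approximation.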

Results and Findings

The detailed results and findings of the study can be found in the thesis document located in the docs directory. The executive summary provides a high-level overview of the key insights and conclusions.

Contact

For any questions or further information, please contact:
