Weighted Shapley Values and Weighted Confidence Intervals for Multiple Machine Learning Models and Stacked Ensembles
Updated Mar 24, 2025 · R
In this repository you will find explainability analyses of machine learning models.
Code for EACL Workshop paper Can BERT eat RuCoLA? Topological Data Analysis to Explain
📊🛰️ Data processing scripts, ML models, and Explainable AI results created as part of my Master's Thesis @ Johns Hopkins
Code for my thesis about SHAP. Implementation of Decision Tree, SVM, and BERT models on two datasets: IMDb and Argument Mining
Measuring galaxy environmental distance scales with GNNs and explainable ML models
This repository contains an interpretable/explainable ML model for liquefaction potential assessment of soils, developed using XGBoost and SHAP.
Getting explanations for predictions made by black box models.
Determining Feature Importance by Integrating Random Forest and SHAP in Python
Predicting NBA game outcomes using schedule-related information. This is an example of supervised learning where an XGBoost model was trained on 20 seasons' worth of NBA games and uses SHAP values for model explainability.
Implementation of the algorithm described in the paper "An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data"
Gradient-boosted regression and decision tree models on behavioural animal data (PLOS Computational Biology, doi: https://doi.org/10.1371/journal.pcbi.1011985)
XGB - SHAP XAI
Android malware detection using machine learning.
Holistic Multimodel Domain Analysis: A New Paradigm for Robust, Transparent, and Reliable Exploratory Machine Learning that Considers Cross-Model Variability in Feature Importance Assessment
An Analysis of Lassa Fever Outbreaks in Nigeria using Machine Learning Models and Shapley Values
The goal of SHAP is to explain the prediction of an instance x by computing the contribution of each feature to the prediction. The SHAP explanation method computes Shapley values from coalitional game theory. The feature values of a data instance act as players in a coalition.
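The coalitional-game view described above can be sketched in a few lines of plain Python. The snippet below is a minimal illustration, not the SHAP library's actual algorithm: it uses a hypothetical toy linear model, treats the three features as players, and computes each feature's exact Shapley value by averaging its marginal contribution over all orderings, with "absent" features held at a background reference value.

```python
from itertools import permutations

# Hypothetical toy model standing in for a black-box predictor.
def model(x):
    return 3.0 * x[0] + 2.0 * x[1] - 1.0 * x[2]

background = [0.0, 0.0, 0.0]  # reference values for "absent" players
instance = [1.0, 1.0, 1.0]    # the instance x being explained

def value(coalition):
    # Model output when only the features in `coalition` take their
    # instance values; all other features stay at the background.
    x = [instance[i] if i in coalition else background[i]
         for i in range(len(instance))]
    return model(x)

def shapley_values():
    n = len(instance)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        coalition = set()
        for i in order:
            before = value(coalition)
            coalition.add(i)
            # Marginal contribution of feature i in this ordering.
            phi[i] += value(coalition) - before
    return [p / len(orders) for p in phi]

phi = shapley_values()
print(phi)  # per-feature contributions to f(x) - f(background)
```

For a linear model each feature's Shapley value reduces to its coefficient times its deviation from the background, and the contributions satisfy the efficiency property: they sum to `model(instance) - model(background)`. Real SHAP implementations approximate this sum, since exact enumeration is exponential in the number of features.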
Use machine learning to find out what drives sales and predict sales
XAI analytics to understand the working of SHAP values