Membership Inference Attacks and Differentially Private Training
This repo contains a notebook that demonstrates how Membership Inference Attacks (MIA) work and highlights how differentially private training mitigates the success of such attacks. We use the Medical MNIST dataset from link. Our goal is to show that overfitting creates a privacy risk for the individuals whose data appears in the training set.
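As an illustration of the idea (not the notebook's exact code), the sketch below shows one of the simplest membership inference attacks: thresholding per-example loss. An overfit model tends to assign much lower loss to examples it was trained on, so low loss can be read as evidence of membership. The model and data loaders are assumed placeholders; only torch is required.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def per_example_losses(model, loader, device="cpu"):
    """Collect per-example cross-entropy losses over a data loader."""
    model.eval()
    losses = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        logits = model(x)
        # reduction="none" keeps one loss value per example
        losses.append(F.cross_entropy(logits, y, reduction="none").cpu())
    return torch.cat(losses)


def loss_threshold_attack(model, member_loader, nonmember_loader, device="cpu"):
    """Predict membership by thresholding loss at its median over both sets."""
    member_losses = per_example_losses(model, member_loader, device)
    nonmember_losses = per_example_losses(model, nonmember_loader, device)

    all_losses = torch.cat([member_losses, nonmember_losses])
    threshold = all_losses.median()

    # Guess "member" whenever the loss is below the threshold
    preds = all_losses < threshold
    labels = torch.cat([torch.ones_like(member_losses),
                        torch.zeros_like(nonmember_losses)]).bool()

    # ~0.5 means random guessing; substantially higher indicates a privacy leak
    return (preds == labels).float().mean().item()


# Hypothetical usage, assuming a trained classifier and two loaders:
# attack_acc = loss_threshold_attack(model, train_loader, test_loader)
```

Differentially private training (e.g. DP-SGD, which clips per-sample gradients and adds noise) limits how much any single training example can influence the model, shrinking the loss gap between members and non-members and thereby reducing this attack's accuracy toward chance.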