This is the official implementation of our paper *Equitable Federated Learning with NCA*, accepted at MICCAI 2025.
Nick Lemke*, Mirko Konstantin*, Henry John Krumb, John Kalkhof, Jonathan Stieber, Anirban Mukhopadhyay
* Equal Contribution
- Set up a conda environment with `conda create -n <your_conda_env> python=3.10`.
- Install torch with your preferred CUDA version, e.g. `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`.
- Install the other dependencies via `pip install -r requirements.txt`.
- (Optional) Log into wandb via `wandb login`.
- Specify your paths in `utils/root_path.py` (see the sketch below).
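The contents of `utils/root_path.py` are up to you, so here is a minimal sketch of what it might look like. The variable names and paths below are placeholders for illustration only; use whatever names the code in this repository actually imports from that module.

```python
# utils/root_path.py -- illustrative sketch only.
# DATA_ROOT and OUTPUT_ROOT are placeholder names; adapt them to the
# variables that the rest of the code base actually expects.
from pathlib import Path

# Directory containing the downloaded ultrasound / X-ray data (placeholder path).
DATA_ROOT = Path("/path/to/datasets")

# Directory where checkpoints, logs, and evaluation results are written (placeholder path).
OUTPUT_ROOT = Path("/path/to/outputs")
```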
The datasets used in our publication can be downloaded from the following sources:
| Anatomy | Dataset | Link |
|---|---|---|
| Ultrasound | Fetal Abdominal Structures Segmentation | https://data.mendeley.com/datasets/4gcpm9dsc3/1 |
| X-ray | MIMIC-III | https://physionet.org/content/mimiciii |
Training can be started using `main_us.py`. This script also contains all the parameters you can specify. Training on a single node (without federation) can be simulated using `train_us_single.py`.
Evaluation is done with the `eval_us.py` script. Select the correct experiment by passing the same parameters that were used for the corresponding training run.
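To make the "same parameters" requirement concrete, the sketch below runs training and then evaluation with one shared argument list via `subprocess`. The flag names are hypothetical placeholders, not the actual arguments of `main_us.py`/`eval_us.py`; check the argument parsers in those scripts for the real options.

```python
# Illustrative launcher: train, then evaluate with identical parameters.
# The flags below are hypothetical placeholders; replace them with the
# arguments actually defined in main_us.py and eval_us.py.
import subprocess

shared_args = ["--dataset", "ultrasound", "--seed", "0"]  # hypothetical shared parameters

subprocess.run(["python", "main_us.py", *shared_args], check=True)  # federated training
subprocess.run(["python", "eval_us.py", *shared_args], check=True)  # evaluation of that run
```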
In this section, we provide the exact Dice results (mean ± standard deviation) obtained in our experiments, as visualized in Fig. 3 of our paper.
| Method | Dice | Transmission Cost in MiB |
|---|---|---|
| FedNCA | 78.89 ± 9.84 | 0.06 |
| Fed UNet | 77.52 ± 9.57 | 521.30 |
| UNet (4 bit) | 35.63 ± 17.34 | 73.32 |
| UNet (top-25) | 78.26 ± 9.84 | 390.97 |
| UNet (top-01) | 78.08 ± 10.93 | 265.87 |
| Fed TransUNet | 76.81 ± 18.70 | 699.78 |
| TransUNet (4 bit) | 57.62 ± 14.75 | 98.44 |
| TransUNet (top-25) | 76.75 ± 18.94 | 524.83 |
| TransUNet (top-01) | 76.09 ± 19.02 | 356.89 |
| Method | Dice | Transmission Cost in MiB |
|---|---|---|
| FedNCA | 74.73 ± 25.27 | 0.06 |
| Fed UNet | 74.88 ± 24.45 | 129.24 |
| UNet (4 bit) | 59.73 ± 28.93 | 18.18 |
| UNet (top-25) | 73.59 ± 24.63 | 96.94 |
| UNet (top-01) | 73.21 ± 24.25 | 65.92 |
| Fed TransUNet | 64.78 ± 22.45 | 699.28 |
| TransUNet (4 bit) | 11.66 ± 11.57 | 98.36 |
| TransUNet (top-25) | 63.41 ± 23.59 | 524.46 |
| TransUNet (top-01) | 0.00 ± 0.00 | 356.64 |
| Figure Idx | Script | Contents | Notes |
|---|---|---|---|
| 3 | `train_results.ipynb` | Dice scores | You need to specify the transmission cost manually; it can be found in the log automatically produced by NVIDIA FLARE. |
| 4 | `training_time.ipynb` | Training times on mobile hardware | The measured runtimes must be entered manually to create the figure. Hard to reproduce, as we have not published the smartphone app yet. |
| 5 | `measure_encrypt.ipynb` | Runtime of encryption/decryption algorithms | Use `measure_encrypt.py` to measure the runtimes before visualizing (see the generic timing sketch below). |
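The timing pattern below is only a generic illustration of how such encryption/decryption runtimes can be measured; it is not the implementation of `measure_encrypt.py` and makes no assumption about which encryption scheme the repository benchmarks.

```python
# Generic wall-clock timing helper (illustration only; measure_encrypt.py
# defines the actual algorithms and measurement protocol used for Fig. 5).
import statistics
import time

def time_fn(fn, *args, repeats=100):
    """Return (mean, stdev) of fn(*args) wall-clock runtime in milliseconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.mean(samples), statistics.stdev(samples)

# Usage sketch: time_fn(encrypt, plaintext) and time_fn(decrypt, ciphertext)
# for whichever encryption/decryption pair is being benchmarked.
```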
Throughout the text, we mention the number of parameters of different models. You can get the exact parameter counts using the `num_parameters.ipynb` notebook.
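As a back-of-envelope link between parameter counts and the transmission costs reported above, the snippet below converts a parameter count into MiB assuming each parameter is transmitted as a 32-bit float (4 bytes). This is only an illustration; the numbers in the tables additionally depend on the number of communication rounds and any compression or quantization applied.

```python
# Back-of-envelope conversion: number of parameters -> payload size in MiB,
# assuming 4 bytes (float32) per parameter. Illustration only; the reported
# transmission costs also depend on rounds, compression, and quantization.
def params_to_mib(num_params: int, bytes_per_param: float = 4.0) -> float:
    return num_params * bytes_per_param / (1024 ** 2)

# Example: ~15,000 parameters correspond to roughly 0.06 MiB per upload.
print(f"{params_to_mib(15_000):.2f} MiB")
```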
If you use FedNCA in your research, please include the following BibTeX entry.
@inproceedings{lemke2025equitable,
title={Equitable Federated Learning with NCA},
author={Lemke, Nick and Konstantin, Mirko and Krumb, Henry John and Kalkhof, John and Stieber, Jonathan and Mukhopadhyay, Anirban},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={168--177},
year={2025},
organization={Springer}
}
