If our project helps you, please give us a star ⭐ and cite our paper!
The required packages and versions:

```
torch~=1.13.1
rich~=13.3.5
torchvision~=0.14.1
torchaudio~=0.13.1
numpy~=1.24.3
visdom~=0.2.4
Pillow~=9.4.0
scipy~=1.10.1
pandas~=2.0.1
matplotlib~=3.7.1
faiss-cpu~=1.7.4
scikit-learn~=1.2.2
pynvml~=11.5.0
```
We are grateful to the following awesome project: the code is built on top of FL-bench.
To run the code, first download and split the datasets using `data/generate_data.py`.
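How `data/generate_data.py` splits the data is repo-specific, but a common non-IID partitioning scheme in federated-learning benchmarks is Dirichlet label partitioning. The sketch below is purely illustrative; the function name `dirichlet_partition`, the `alpha` parameter, and the scheme itself are assumptions rather than this repo's actual implementation:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices into non-IID client shards: for each class,
    draw client proportions from Dirichlet(alpha); smaller alpha = more skew."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))
        # cut points into the shuffled per-class index array
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices

# e.g. 10 classes x 10 samples split across 5 clients
labels = np.repeat(np.arange(10), 10)
shards = dirichlet_partition(labels, num_clients=5, alpha=0.5)
assert sum(len(s) for s in shards) == len(labels)
```

Every sample lands in exactly one shard; lowering `alpha` concentrates each class on fewer clients.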
Then:

- Generate CLIP embeddings:

  ```
  python datapreprocess/preprocess.py -d [dataset]
  ```

- Generate the client index and sample orders:

  ```
  python datapreprocess/train.py -d [dataset] -lr [learning rate] -e [epochs] -tp [global/local]
  python datapreprocess/generate_sample_order.py
  ```

  Please configure `datapreprocess/generate_sample_order.py` before running it.
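The sample orders produced by `datapreprocess/generate_sample_order.py` depend on the repo's own criterion. As a hedged illustration of what an embedding-based greedy ordering can look like, the herding-style sketch below picks, at each step, the remaining sample whose addition keeps the running mean of the selected embeddings closest to the global mean; both the criterion and the name `greedy_mean_order` are assumptions, not the repo's actual algorithm:

```python
import numpy as np

def greedy_mean_order(embeddings):
    """Herding-style greedy order: at each step pick the remaining sample
    whose addition keeps the running mean closest to the global mean."""
    embeddings = np.asarray(embeddings, dtype=float)
    target = embeddings.mean(axis=0)
    selected_sum = np.zeros_like(target)
    remaining = list(range(len(embeddings)))
    order = []
    for step in range(1, len(embeddings) + 1):
        # distance of each candidate's running mean to the global mean
        dists = [np.linalg.norm((selected_sum + embeddings[i]) / step - target)
                 for i in remaining]
        chosen = remaining.pop(int(np.argmin(dists)))
        selected_sum += embeddings[chosen]
        order.append(chosen)
    return order
```

The returned order is a permutation of the sample indices and could be pickled in the same spirit as the repo's `.pkl` sample-order files.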
- Run the experiments using scripts like the following:

  ```
  python fedavg.py -d cifar10 -m res18 -ge 100 -tg 1 -le 5 -mom 0.0 -bs 32 -wd 5e-5 --seed 42 --MWU_aggregate 1 --MWU_tau 1.0 --local_reg 1 --sample_order_path ../../datapreprocess/sample_orders/cifar10/global/transformer/e100_lr0.01_div_1/greedy/10-mean-mean-1.0.pkl --client_index_path cifar10/global/transformer/e500_lr0.01_div_1 -lr 1e-2 --local_reg_weight 5.0 --MWU_momentum 0.9;
  ```
More examples can be found in `src/server/scripts`.
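The `--sample_order_path` flag above points at a pickled sample order. Assuming the `.pkl` simply stores a sequence of integer sample indices (an assumption about the file format; the helper name `load_sample_order` is illustrative), a loader might look like:

```python
import pickle

def load_sample_order(order_path, local_indices):
    """Load a pickled sample order (assumed: a sequence of int indices)
    and sanity-check it against the client's local index list."""
    with open(order_path, "rb") as f:
        order = list(pickle.load(f))
    assert sorted(order) == sorted(local_indices), \
        "sample order must be a permutation of the local indices"
    return order
```

The permutation check guards against pairing a sample-order file with the wrong dataset split.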
If you find this repository helpful for your project, please consider citing:
```bibtex
@article{guo2024client2vec,
  title={Client2Vec: Improving Federated Learning by Distribution Shifts Aware Client Indexing},
  author={Guo, Yongxin and Wang, Lin and Tang, Xiaoying and Lin, Tao},
  journal={arXiv preprint arXiv:2405.16233},
  year={2024}
}
```