- Running Experiments -- Run the commands to reproduce the results
- Quickstart -- Follow the instructions and get a result!
- C4 notation -- Context, Container, Component, Code scheme.
- Federated Method Explained -- Learn the basics and write your own method
- Config Explained -- See the available configuration options
- Attacks -- Learn the basics and write a custom attack
- Install dependencies
```shell
python -m venv venv
source venv/bin/activate
pip install -e .
```
🔄 Standard Federated Averaging on CIFAR-10
```shell
python src/train.py \
    training_params.batch_size=32 \
    federated_params.print_client_metrics=False \
    training_params.device_ids=[0] \
    > test_run_fedavg_cifar.txt
```
`training_params.device_ids` controls which GPU(s) to use (if the machine has several). You can specify multiple ids; the training will then be evenly distributed across the specified devices. Additionally, `manager.batch_size` client processes will be created. To forcefully terminate the training, kill any of these processes.
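For illustration, spreading client processes evenly across several `device_ids` can be sketched as a round-robin mapping (the `assign_devices` helper below is hypothetical, not the repository's actual API):

```python
# Hypothetical sketch: round-robin assignment of client processes to GPU ids.
# Illustrates "evenly distributed across the specified devices"; the repo's
# real scheduling logic may differ.
def assign_devices(num_clients, device_ids):
    """Map each client index to a GPU id in round-robin order."""
    return {client: device_ids[client % len(device_ids)]
            for client in range(num_clients)}

# e.g. 6 client processes over device_ids=[0, 1]
print(assign_devices(6, [0, 1]))  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```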
Dirichlet Partition with `alpha=0.1` (strong heterogeneity)
```shell
python src/train.py \
    training_params.batch_size=32 \
    federated_params.print_client_metrics=False \
    train_dataset.alpha=0.1 \
    federated_params.amount_of_clients=100 \
    > test_run_fedavg_cifar_dirichlet_strong_heterogeneity_100_clients.txt
```
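A Dirichlet label partition of this kind can be sketched as follows (a minimal NumPy illustration, not the repository's actual partitioning code). Each class's samples are split across clients with proportions drawn from Dirichlet(alpha); a small `alpha` such as 0.1 concentrates each class on a few clients, producing strong label heterogeneity:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients, per class, with Dirichlet shares."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        rng.shuffle(idx)
        # One Dirichlet draw per class: how much of this class each client gets.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

labels = np.repeat(np.arange(10), 500)  # toy stand-in for CIFAR-10 labels
parts = dirichlet_partition(labels, num_clients=100, alpha=0.1)
print(len(parts), sum(len(p) for p in parts))  # 100 5000
```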
Uniform Distribution (`alpha=1000`)

Same command as above, with the following overrides:

```shell
train_dataset.alpha=1000 \
federated_params.amount_of_clients=42 \
```
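To see why a large `alpha` approximates a uniform split, compare client share vectors drawn at the two extremes (a standalone NumPy check, independent of the training code):

```python
import numpy as np

rng = np.random.default_rng(0)
# alpha=0.1: a few clients grab most of the data (strong heterogeneity)
hetero = rng.dirichlet(0.1 * np.ones(42))
# alpha=1000: every client's share is close to 1/42 (near-uniform)
uniform = rng.dirichlet(1000 * np.ones(42))
print(hetero.max(), uniform.max())  # largest client share in each regime
```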
FedAvg with Label Flipping Attack
```shell
python src/train.py \
    training_params.batch_size=32 \
    federated_params.print_client_metrics=False \
    federated_params.clients_attack_types=label_flip \
    federated_params.prop_attack_clients=0.5 \
    federated_params.attack_scheme=constant \
    federated_params.prop_attack_rounds=1.0 \
    > test_run_fedavg_cifar_label_flip_half_byzantines.txt
```
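A label-flipping attack of this kind can be sketched as follows (an illustrative transform assuming 10 CIFAR-10 classes and the common `num_classes - 1 - y` flip; the repository's actual attack implementation may differ):

```python
# Illustrative label-flipping transform for a Byzantine client.
# Assumption: 10 classes and the deterministic flip y -> num_classes - 1 - y;
# this is one common scheme, not necessarily the one this repo uses.
def flip_labels(labels, num_classes=10):
    """Return the labels a malicious client would report for its samples."""
    return [num_classes - 1 - y for y in labels]

print(flip_labels([0, 3, 9]))  # [9, 6, 0]
```

With `prop_attack_clients=0.5` and `prop_attack_rounds=1.0`, half of the clients would apply such a transform in every round.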