FedProx/flearn/trainers/fedbase.py, line 17 (at commit d2a4501):

```python
self.clients = self.setup_clients(dataset, self.client_model)
```
Please take a look at this line. It seems that all clients use the same ML model object for local training. In other words, there are no local models; there is a single global model that is trained sequentially on each client.
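To make the failure mode concrete, here is a minimal, self-contained toy sketch (ToyModel, Client, and their methods are hypothetical illustrations, not code from this repository) of what sharing one model object implies:

```python
import numpy as np

class ToyModel:
    """Hypothetical stand-in for the single model built in fedbase.py."""
    def __init__(self):
        self.weights = np.zeros(3)

class Client:
    def __init__(self, model):
        # Shared reference, mirroring setup_clients(dataset, self.client_model):
        # every client receives the *same* model object.
        self.model = model

    def get_params(self):
        return self.model.weights

    def solve_inner(self):
        # Stand-in for local training: mutate the shared weights in place.
        self.model.weights = self.model.weights + 1.0

shared_model = ToyModel()
clients = [Client(shared_model) for _ in range(3)]

clients[0].solve_inner()
print(clients[1].get_params())  # [1. 1. 1.] -- client 1 already sees client 0's update
```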
This shared-model behavior can be verified in the actual code with the following snippet (I tested it in flearn/trainers/fedavg.py):
```python
from time import sleep

csolns = []  # buffer for receiving client solutions
lastc = None
for idx, c in enumerate(active_clients.tolist()):  # simply drop the slow devices
    print(i, idx)
    if lastc is not None:
        for j in range(len(lastc)):
            print('Are the parameters of the current client (before training) '
                  'the same as the parameters of the previous client (after training)?: %s'
                  % (c.get_params()[j] == lastc[j]).all())
        sleep(1)
    else:
        print('The first client.')
    # communicate the latest model
    c.set_params(self.latest_model)
    # solve minimization locally
    soln, stats = c.solve_inner(num_epochs=self.num_epochs, batch_size=self.batch_size)
    lastc = c.get_params()
    # gather solutions from client
    csolns.append(soln)
    # track communication cost
    self.metrics.update(rnd=i, cid=c.id, stats=stats)
# update models
self.latest_model = self.aggregate(csolns)
```
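If the clients really do share one model object, every comparison above should print True: the parameters a client holds before `set_params` is called are exactly the parameters left behind by the previous client's training.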
In my opinion, this is not the behavior expected of federated learning.
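For comparison, here is a framework-agnostic sketch (hypothetical names throughout, not this repository's API) of what I would expect instead: each client owns an independent model, and the server broadcasts parameter values rather than a shared object:

```python
import numpy as np

class Client:
    """Hypothetical client that owns its model state outright."""
    def __init__(self):
        self.weights = np.zeros(3)  # independent per-client parameters

    def set_params(self, w):
        # Copy in the broadcast values; never alias server-side state.
        self.weights = np.array(w, copy=True)

    def solve_inner(self):
        self.weights += 1.0  # stand-in for local training
        return self.weights.copy()

latest_model = np.zeros(3)
clients = [Client() for _ in range(3)]

csolns = []
for c in clients:
    c.set_params(latest_model)      # every client starts from the same global weights
    csolns.append(c.solve_inner())  # but trains its own copy
latest_model = np.mean(csolns, axis=0)  # FedAvg-style aggregation

# Clients' parameters are unaffected by one another's training:
print(clients[0].weights, clients[1].weights)
```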