*** The extension of this work can be found in KubeFlower-Operator ***
Kubeflower is a project for exploiting the benefits of cloud-native and container-based technologies in the development, deployment, and workload management of Federated Learning (FL) pipelines. We use the open-source framework Flower for FL workload control; Flower has been widely adopted in industry and academia. To increase computational elasticity and efficiency when deploying FL, we use the container orchestration system Kubernetes (K8s). We rely on concepts such as FL servers, FL clients, K8s clusters, K8s deployments, K8s pods, and K8s services. If you are not familiar with this terminology, please see the following resources: Federated Learning, Kubernetes.
- Single and multi-node implementation.
- High availability through clustering and distributed state management.
- Scalability through clustering of network device control.
- CLI for debugging.
- Applicable to real-world scenarios.
- Extendable.
- Cross-platform (Linux, macOS, Windows).
For this proof-of-concept, a K8s cluster is deployed locally using minikube. The following tools are required and should be installed beforehand: Docker, minikube, and kubectl.
- Clone the repository and go to the folder that contains Kubeflower:

```bash
git clone git@github.com:hpn-bristol/kubeFlower.git
cd kubeFlower
```

- Deploy a K8s cluster in minikube:

```bash
minikube start
```

- Point your terminal to the Docker daemon inside minikube (this requires the cluster to be running):

```bash
eval $(minikube docker-env)
```

- Check the minikube Docker images:

```bash
minikube image list
```
You will find a list of the standard K8s Docker images used for cluster management, for example `k8s.gcr.io/kube-scheduler` and `k8s.gcr.io/kube-controller-manager`.
- Build the Docker image from this repo (Dockerfile) with the required packages (requirements.txt). This image is based on python:3.9.8-slim-bullseye:

```bash
minikube image build -t kubeflower .
```

Here `-t kubeflower` names the image and `.` sets the build context to the current folder.
- Check that your image has been successfully added to the minikube Docker daemon:

```bash
minikube image list
```

Verify that `kubeflower:latest` is in the list, where `latest` is the tag assigned to the Docker image by default.
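The repository's Dockerfile is not reproduced here; a minimal sketch of what such an image could look like, assuming the Flower code lives under `src/` (only the base image and requirements.txt are stated in this README, the rest is illustrative):

```dockerfile
# Hypothetical sketch; see the repository's Dockerfile for the real definition
FROM python:3.9.8-slim-bullseye

WORKDIR /app

# Install the Python dependencies listed in the repo
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Flower server/client code
COPY src/ ./src/
```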
Now you are ready to deploy the FL pipeline using K8s. We will use K8s deployments to create K8s pods that communicate through a K8s service. Each pod represents an FL actor, with a main pod acting as the FL server. The proposed architecture is depicted in the figure.
The Docker image `kubeflower` is used to deploy the containers with Flower's pipeline and other dependencies. These containers are deployed in pods. The FL server pod exposes port 8080 for the gRPC communication implemented by Flower. Instead of using a predefined IP for the server, we use a K8s `ClusterIP` service, which allows the FL server pod to be located even if it restarts and changes its IP. The service exposes port 30051, which can be targeted by the FL client pods through `http:service-server:30051`. For the FL setup, we use the FL PyTorch implementation of Flower. This simple example can be found here.
To deploy this architecture you need to:
- Deploy the `service-server` K8s service. From the root folder run:

```bash
kubectl apply -f descriptors/serverService.yaml
```

We are using `ClusterIP`, but it can be replaced with a `NodePort` or `LoadBalancer` if specific communications are required.
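A service descriptor consistent with the description above might look as follows. This is a sketch, not the repository's `serverService.yaml`; the name, selector, and ports follow the values mentioned in this README:

```yaml
# Illustrative sketch of a ClusterIP service for the FL server
apiVersion: v1
kind: Service
metadata:
  name: service-server
spec:
  type: ClusterIP
  selector:
    app: flower-server      # must match the server deployment's labels
  ports:
    - port: 30051           # port exposed by the service
      targetPort: 8080      # gRPC port opened by the Flower server pod
```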
- Deploy the FL server pod through the K8s deployment:

```bash
kubectl apply -f descriptors/serverDeploy.yaml
```

By default, the server starts a run of 5 rounds when 2 clients are available. To change these values, edit the `serverDeploy.yaml` file and pass different values as arguments in the line `args: ["python ./src/server.py"]`. Possible values are: `--clients`, `--min`, `--rounds`.
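The server-side argument handling can be sketched as follows. This is a hypothetical illustration, not the code in `src/server.py`; the flag names come from the README, and the defaults shown (5 rounds, 2 clients) follow the behaviour described above:

```python
import argparse

def parse_server_args(argv=None):
    """Parse the FL server flags mentioned in the README.

    --clients: number of expected FL clients (assumed meaning)
    --min:     minimum clients required before training starts (assumed meaning)
    --rounds:  number of FL rounds to run
    """
    parser = argparse.ArgumentParser(description="Flower FL server")
    parser.add_argument("--clients", type=int, default=2)
    parser.add_argument("--min", type=int, default=2)
    parser.add_argument("--rounds", type=int, default=5)
    return parser.parse_args(argv)

# With no CLI flags, the defaults from the README apply
args = parse_server_args([])
```

In the deployment descriptor, these flags would be appended to the `args:` line, e.g. `args: ["python ./src/server.py --rounds 10"]`.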
- Check the SELECTOR for both the service and the deployment; they should match `app=flower-server`:

```bash
kubectl get all -owide
```
- Deploy the FL clients using the clientDeploy.yaml descriptor:

```bash
kubectl apply -f descriptors/clientDeploy.yaml
```

By default, this descriptor deploys 2 clients. To increase the number of clients, edit the `replicas: 2` value in the .yaml file.
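The relevant part of such a client descriptor might look like the fragment below. This is an illustrative sketch, not the repository's `clientDeploy.yaml`; the `replicas: 2` value and the `kubeflower:latest` image come from this README, while the label and the `args:` line are assumptions:

```yaml
# Illustrative fragment of a client deployment; see descriptors/clientDeploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flower-client
spec:
  replicas: 2              # increase to deploy more FL clients
  selector:
    matchLabels:
      app: flower-client   # assumed label
  template:
    metadata:
      labels:
        app: flower-client
    spec:
      containers:
        - name: flower-client
          image: kubeflower:latest
          args: ["python ./src/client.py"]   # hypothetical entry point
```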
- Monitor the training process:

```bash
kubectl get all
```

Get the pod IDs, then check the logs of the `flower-server` pod:

```bash
kubectl logs flower-server-64f78b8c5c-kwf89 -f
```

Open a new terminal and check the logs of the `flower-client` pods. Repeat the process for the different clients if required:

```bash
kubectl logs flower-client-7c69c8c776-cjw6r -f
```
- After the FL process has finished, kill the pods and services, and stop the K8s cluster on minikube:

```bash
kubectl delete deploy flower-client flower-server
kubectl delete service service-server
minikube stop
```
This is a simple implementation of container-based FL using Flower and K8s for orchestration. For further discussions/ideas/projects, please contact the developers at the Smart Internet Lab.