The controller:
- Monitors the creation, update, and deletion of PredictiveAutoscaler custom resources
- On a schedule (every 5 minutes by default), collects current pod counts
- Calculates required pods using the predictive formula
- Updates the target HPA's minReplicas
- Updates the historical data with the weighted formula
- Stores this data persistently for future predictions
This implementation provides a clean separation of code and configuration, making it easier to develop, test, and maintain the controller.
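As a rough illustration of this loop, the sketch below shows one possible shape in Python. It is not the repository's controller code: the CRD group, the spec field targetHPA, and the helper functions predict_required_pods / update_history are placeholders standing in for the real predictive and weighted formulas, and the real controller also reacts to create/update/delete events on the custom resources rather than only polling. The history-file path follows the naming seen in the seeding step later (/data/<namespace>_<name>_history.json).

```python
# Illustrative sketch only -- not the repository's controller implementation.
import json
import time

from kubernetes import client, config

INTERVAL_SECONDS = 300  # default schedule: reconcile every 5 minutes


def predict_required_pods(history, current_pods):
    # Placeholder for the controller's predictive formula.
    return max(current_pods, 1)


def update_history(history, current_pods):
    # Placeholder for the controller's weighted history-update formula.
    return history


def reconcile(pa, autoscaling):
    """One pass over a single PredictiveAutoscaler resource."""
    name = pa["metadata"]["name"]
    ns = pa["metadata"]["namespace"]
    hpa_name = pa["spec"]["targetHPA"]  # illustrative field name

    # 1. Collect the current pod count from the target HPA's status.
    hpa = autoscaling.read_namespaced_horizontal_pod_autoscaler(hpa_name, ns)
    current_pods = hpa.status.current_replicas or 0

    # 2. Load persisted history (path convention inferred from the seeding
    #    step later: /data/<namespace>_<name>_history.json) and predict.
    history_path = f"/data/{ns}_{name}_history.json"
    with open(history_path) as f:
        history = json.load(f)
    required = predict_required_pods(history, current_pods)

    # 3. Update the target HPA's minReplicas to the predicted value.
    autoscaling.patch_namespaced_horizontal_pod_autoscaler(
        hpa_name, ns, {"spec": {"minReplicas": required}})

    # 4. Fold the new observation into history and persist it for next time.
    with open(history_path, "w") as f:
        json.dump(update_history(history, current_pods), f)


def main():
    config.load_incluster_config()
    crds = client.CustomObjectsApi()
    autoscaling = client.AutoscalingV1Api()
    while True:
        pas = crds.list_cluster_custom_object(
            "autoscaling.example.com", "v1", "predictiveautoscalers")  # group is a placeholder
        for pa in pas["items"]:
            reconcile(pa, autoscaling)
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```

The structural points that matter are the 5-minute cadence, the minReplicas patch on the target HPA, and the persisted history file.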
Navigate to the controller directory:
cd CSCI555/Scaler/controller
Build the Docker image
docker build -t predictive-autoscaler:latest .
Tag and push to your container registry
docker tag predictive-autoscaler:latest anirudhr120100/csci555-predictive-autoscaler:latest
docker push anirudhr120100/csci555-predictive-autoscaler:latest
Edit Scaler/deploy/controller-deployment.yaml so the container image points to the image you just pushed.
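The only change that should be needed is the container image line; the rest of the manifest stays as it is in the repo. Roughly (the container name here is illustrative, follow the actual manifest):

```yaml
# Scaler/deploy/controller-deployment.yaml (excerpt -- only the image line should change)
spec:
  template:
    spec:
      containers:
        - name: controller   # use the name already defined in the manifest
          image: anirudhr120100/csci555-predictive-autoscaler:latest   # the tag you pushed above
```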
kubectl apply -f Scaler/crd/predictive-autoscaler-crd.yaml
kubectl apply -f Scaler/deploy/rbac.yaml
kubectl apply -f Scaler/deploy/controller-deployment.yaml
kubectl get pods -l app=predictive-autoscaler-controller
kubectl apply -f Scaler/deploy/predictive-autoscaler-instance.yaml
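For orientation, an instance looks roughly like the sketch below. The spec field names and the API group are illustrative guesses, not the actual schema; the authoritative fields are in Scaler/crd/predictive-autoscaler-crd.yaml and Scaler/deploy/predictive-autoscaler-instance.yaml. The name and namespace shown match the history file used later (default_simpleweb-predictor_history.json).

```yaml
# Hypothetical sketch only -- consult the repo's CRD and instance manifest for the real field names
apiVersion: <crd-group>/v1          # group as defined in the CRD
kind: PredictiveAutoscaler
metadata:
  name: simpleweb-predictor
  namespace: default
spec:
  targetHPA: simpleweb-hpa          # illustrative: HPA whose minReplicas the controller manages
  intervalSeconds: 300              # illustrative: reconcile every 5 minutes (the default)
```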
kubectl get predictiveautoscalers
or, using the short name:
kubectl get pa
Seed the controller with sample historical data (replace xxx with the actual controller pod name):
kubectl exec -it predictive-autoscaler-controller-xxx -- mkdir -p /data
kubectl cp Scaler/test-data/realistic-traffic.json predictive-autoscaler-controller-xxx:/data/default_simpleweb-predictor_history.json
kubectl logs -f deployment/predictive-autoscaler-controller
kubectl get hpa
kubectl get predictiveautoscalers
CloudLab has pre-existing profiles. Select the K8s profile:
- Click Experiments -> Start Experiment
- Change the profile to the K8s profile in the profile-selection window
- Click Confirm, then Next to go to the Parameterize page
- Make edits to the parameters (optional)
- Click Next to go to the Finalize page, then assign a name and a cluster location
- Click Next to go to the Schedule page
- Pick a time to deploy and click Next
Once the cluster starts, click "extend" to extend the cluster expiration by 7 days.
Click on a node in the node graph, then click "Shell" in the pop-up menu to open a terminal.
Dependencies:
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
Add Docker's GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Add the Docker apt repository:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
Update the package index and install Docker:
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
Enable and start Docker:
sudo systemctl enable docker
sudo systemctl start docker
Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
Install Git and clone the Scaler repository:
sudo apt install -y git
git clone https://github.com/CSCI555-Spring25/Scaler.git
- Navigate to the "webserver" folder
- Build the Docker image:
docker build -t simpleweb:latest .
- Find the IP of the node/registry using the following command:
docker ps | grep "registry:2"
- If the local registry is not running, start it with the following command, then re-run the previous command to find its IP:
docker run -d -p 5000:5000 --name registry registry:2
- Once the registry IP is known (for example, 10.10.1.1), tag and push the Docker image to the registry:
docker tag simpleweb:latest 10.10.1.1:5000/simpleweb:latest
docker push 10.10.1.1:5000/simpleweb:latest
- If the registry address is not 10.10.1.1:5000, edit the container image field (spec.template.spec.containers.image) in echo-server.yaml to use the correct address; see the snippet after these steps.
- Navigate to the root directory of the Scaler GitHub repo
- Start the webserver with Kubernetes:
kubectl apply -f echo-server.yaml
- Check the status of the running pods:
kubectl get pods
- Expose the deployment as a LoadBalancer service:
kubectl expose deployment simpleweb-deployment --port=80 --type=LoadBalancer
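For the image-field edit mentioned above, the relevant part of echo-server.yaml looks roughly like this; the container name is illustrative, and only the image line should need to change:

```yaml
# echo-server.yaml (excerpt) -- point the image at your local registry
spec:
  template:
    spec:
      containers:
        - name: simpleweb                         # use the name already defined in the manifest
          image: 10.10.1.1:5000/simpleweb:latest  # replace 10.10.1.1:5000 with your registry address
```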
If kubectl returns an error when listing nodes, make sure the kubeconfig is set up for your user on CloudLab. Use the following commands to copy the admin config into your user's kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config