There are a number of options available to us under the "extenders" configuration object. Some of these fields, such as urlPrefix, filterVerb and prioritizeVerb, are necessary to point the Kubernetes scheduler to our scheduling service, while others deal with the configuration of mutual TLS. The remaining fields tune the behavior of the scheduler: managedResources specifies which pods should be scheduled using this service, in this case pods which request the dummy resource telemetry/scheduling; ignorable tells the scheduler what to do if it can't reach our extender; and weight sets the relative influence our extender has on prioritization decisions.
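As a sketch, an extenders entry covering the fields above might look like the following. The service URL, port and certificate paths are illustrative assumptions, and the exact API version depends on your cluster version:

```yaml
# Illustrative KubeSchedulerConfiguration with an "extenders" entry; the
# service URL and certificate paths are assumptions, not real project values.
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
extenders:
  - urlPrefix: "https://tas-service.default.svc.cluster.local:9001"
    filterVerb: "scheduler/filter"
    prioritizeVerb: "scheduler/prioritize"
    weight: 1
    enableHTTPS: true
    ignorable: true
    managedResources:
      - name: "telemetry/scheduling"
        ignoredByScheduler: true
    tlsConfig:
      insecure: false
      certFile: "/host/certs/client.crt"
      keyFile: "/host/certs/client.key"
```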
With a configuration like the one above included in the Kubernetes scheduler configuration, the identified webhook becomes part of the scheduling process.
To read more about scheduler extenders see the [official docs](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/scheduling/scheduler_extender.md).
## Adding a new extender to Platform Aware Scheduling
Platform Aware Scheduling is a single repo designed to host multiple hardware-enabling Kubernetes Scheduler Extenders. A new scheduler extender can be added with an issue and pull request.
Note: a shell script that shows these steps can be found [here](deploy/extender-configuration). This script should be seen as a guide only, and will not work on most Kubernetes installations.
The extender configuration files can be found under deploy/extender-configuration.
TAS Scheduler Extender needs to be registered with the Kubernetes Scheduler. In order to do this a configuration file should be created like the one below:
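A minimal sketch of such a configuration file follows. The service URL is an illustrative assumption; the actual file ships in the deploy folder:

```yaml
# Minimal sketch only; the real file is in deploy/extender-configuration/ and
# the service URL here is an assumption.
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
extenders:
  - urlPrefix: "https://tas-service.default.svc.cluster.local:9001"
    filterVerb: "scheduler/filter"
    prioritizeVerb: "scheduler/prioritize"
    enableHTTPS: true
    managedResources:
      - name: "telemetry/scheduling"
        ignoredByScheduler: true
```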
This file can be found [in the deploy folder](deploy/extender-configuration/scheduler-config.yaml). The API version of the file is updated by executing a [shell script](deploy/extender-configuration/configure-scheduler.sh).
Note that Kubernetes, from version 1.22 onwards, no longer accepts a scheduling policy passed as a flag to the kube-scheduler. The shell script will make sure the scheduler is set up according to its version: scheduling by policy or by configuration file.
If the scheduler is running as a service, these can be added as flags to the binary. If the scheduler is running as a container, as in kubeadm, these args can be passed in the deployment file.
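For a kubeadm cluster, passing the configuration to the scheduler can be sketched as an excerpt of the static pod manifest. The file path here is an assumption; only the added flag and the volume making the file visible to the container are shown:

```yaml
# Illustrative excerpt of /etc/kubernetes/manifests/kube-scheduler.yaml;
# the config file path is an assumption.
spec:
  containers:
  - name: kube-scheduler
    command:
    - kube-scheduler
    - --config=/etc/kubernetes/scheduler-config.yaml
    volumeMounts:
    - name: scheduler-config
      mountPath: /etc/kubernetes/scheduler-config.yaml
      readOnly: true
  volumes:
  - name: scheduler-config
    hostPath:
      path: /etc/kubernetes/scheduler-config.yaml
      type: File
```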
Note: For kubeadm set-ups some additional steps may be needed.
1) Add the ability to get configmaps to the kubeadm scheduler. (A cluster role binding for this is at deploy/extender-configuration/configmap-getter.yaml)
After these steps the scheduler extender should be registered with the Kubernetes Scheduler.
#### Deploy TAS
Telemetry Aware Scheduling uses go modules. It requires Go 1.16+ with modules enabled in order to build. TAS has been tested with Kubernetes 1.20+. TAS was tested on Intel® Server Board S2600WF-Based Systems (Wolf Pass).
A yaml file for TAS is contained in the deploy folder along with its service and RBAC roles and permissions.
**Note:** If run without the unsafe flag ([described in the table below](#tas-scheduler-extender)) a secret called extender-secret will need to be created with the cert and key for the TLS endpoint.
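One way to sketch that secret is as a manifest. The data values are placeholders; equivalently, `kubectl create secret tls extender-secret --cert=<cert> --key=<key>` generates the same object from PEM files:

```yaml
# Sketch of the extender-secret; tls.crt and tls.key must hold the
# base64-encoded PEM certificate and private key for the TLS endpoint.
apiVersion: v1
kind: Secret
metadata:
  name: extender-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```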