
Commit 25a646e

committed
Documentation updates
In this commit:
- we add the supported descheduler version to the TAS demo
- we remove the 'pre-production' reference(s) related to TAS from the documentation
- other fixes and clarifications were added to the documentation text
1 parent 56c4573 commit 25a646e

3 files changed: +7 -6 lines changed

telemetry-aware-scheduling/README.md

Lines changed: 4 additions & 3 deletions
@@ -4,8 +4,6 @@ Telemetry Aware Scheduling (TAS) makes telemetry data available to scheduling an
 For example - a pod that requires certain cache characteristics can be scheduled on output from Intel® RDT metrics. Likewise a combination of RDT, RAS and other platform metrics can be used to provide a signal for the overall health of a node and be used to proactively ensure workload resiliency.
 
 
-**This software is a pre-production alpha version and should not be deployed to production servers.**
-
 
 ## Introduction
 

@@ -82,7 +80,10 @@ Note: For Kubeadm set ups some additional steps may be needed.
 After these steps the scheduler extender should be registered with the Kubernetes Scheduler.
 
 #### Deploy TAS
-Telemetry Aware Scheduling uses go modules. It requires Go 1.16+ with modules enabled in order to build. TAS has been tested with Kubernetes 1.20+. TAS was tested on Intel® Server Board S2600WF-Based Systems (Wolf Pass).
+Telemetry Aware Scheduling uses go modules. It requires Go 1.16+ with modules enabled in order to build.
+The current version of TAS has been tested with the most recent Kubernetes version available at its release date. It maintains support for the three most recent K8s versions.
+TAS was tested on Intel® Server Board S2600WF- and S2600WT-based systems.
+
 A yaml file for TAS is contained in the deploy folder along with its service and RBAC roles and permissions.
 
 A secret called extender-secret will need to be created with the cert and key for the TLS endpoint. TAS will not deploy if there is no secret available with the given deployment file.
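As a sketch of how these steps fit together (not part of this commit; the make target, certificate paths, and namespace below are assumptions for illustration), building and deploying TAS with its secret might look like:

````
# Build TAS with Go modules (assumes Go 1.16+ and a build target in the
# repository Makefile - adjust to your checkout).
cd telemetry-aware-scheduling
make build

# Create the TLS secret the deployment expects. Certificate and key paths
# here are placeholders - substitute the cert/key for your extender endpoint.
kubectl create secret tls extender-secret \
    --cert /etc/kubernetes/pki/ca.crt \
    --key /etc/kubernetes/pki/ca.key \
    --namespace default

# Apply the TAS manifests (deployment, service, RBAC roles and permissions)
# from the deploy folder referenced above.
kubectl apply -f deploy/
````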

telemetry-aware-scheduling/docs/health-metric-example.md

Lines changed: 2 additions & 2 deletions
@@ -13,13 +13,13 @@ A video of that closed loop in action is available [here](https://networkbuilder
 ## Assumptions
 This guide requires TAS be running as described in the README, and that the [custom metrics pipeline](custom-metrics.md) is supplying it with up to date metrics. Also required is a multinode Kubernetes set-up with user access to all three machines in the cluster.
 
-There should be text file (with extension .prom) at /tmp/node-metrics/metrics.prom that contains health metrics in [the Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-format-details).
+There should be a text file (with extension .prom) in the /tmp/node-metrics/ folder that contains health metrics in [the Prometheus text format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-format-details).
 
 If the helm charts were used to install the metrics pipeline this directory will already be created. If another method of setting up Node Exporter was followed the [textfile collector](https://github.com/prometheus/node_exporter#textfile-collector) will need to be enabled in Node Exporter's configuration.
 
 
 ## Setting the health metric
-With our health metric the file at /tmp/node-metrics/test.prom should look like:
+With our health metric the file at /tmp/node-metrics/text.prom should look like:
 
 ````node_health_metric 0````
 
telemetry-aware-scheduling/docs/strategy-labeling-example.md

Lines changed: 1 addition & 1 deletion
@@ -258,7 +258,7 @@ Once the metric changes for a given node, and it returns to a schedulable condit
 
 
 ### Descheduler
-[Kubernetes Descheduler](https://github.com/kubernetes-sigs/descheduler) allows control of pod evictions in the cluster after being bound to a node. Descheduler, based on its policy, finds pods that can be moved and evicted. There are many ways to install and run the K8s [Descheduler](https://github.com/kubernetes-sigs/descheduler#quick-start). Here, we have executed it as a [deployment](https://github.com/kubernetes-sigs/descheduler#run-as-a-deployment).
+[Kubernetes Descheduler](https://github.com/kubernetes-sigs/descheduler) allows control of pod evictions in the cluster after pods have been bound to a node. Descheduler, based on its policy, finds pods that can be moved and evicted. There are many ways to install and run the K8s [Descheduler](https://github.com/kubernetes-sigs/descheduler#quick-start). Here, we have executed it as a [deployment](https://github.com/kubernetes-sigs/descheduler#run-as-a-deployment) using descheduler:v0.23.1.
 In a shell terminal, deploy the Descheduler files:
 
 ````
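# Not part of this commit - a sketch of one way to deploy it, assuming the
# quick-start manifest paths from the descheduler repository at v0.23.1:
kubectl apply -f kubernetes/base/rbac.yaml
kubectl apply -f kubernetes/base/configmap.yaml
kubectl apply -f kubernetes/deployment/deployment.yaml
````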
