Add 3 labels to all resources, add e2e test that validates the existence of the three labels #142
base: main
Conversation
2. Add a test that validates retrieval of the operator controller pod by the label.
Hi @Shai1-Levi. Thanks for your PR. I'm waiting for a medik8s member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
2. Add a test that validates retrieval of the operator controller pod by the label.
in the past at least, label additions broke updates... [1] /test 4.17-openshift-e2e
update still fails 🤷🏼♂️
- Add recommended app.kubernetes.io/name label to everything. With these changes we prevent services from selecting pods of other operators, because they all have the control-plane: controller-manager label by default. Because deployment updates with modified labels fail, also rename the deployment by modifying the namePrefix (causes bundle file renames as well).
- Add a test that validates retrieval of the operator controller pod by the label.
/test 4.17-openshift-e2e
@Shai1-Levi: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
- Add recommended app.kubernetes.io/name label to everything. With these changes we prevent services from selecting pods of other operators, because they all have the control-plane: controller-manager label by default. Because deployment updates with modified labels fail, also rename the deployment by modifying the namePrefix (causes bundle file renames as well).
- Add a test that validates retrieval of the operator controller pod by the label.
/test 4.17-openshift-e2e
operator update succeeded
/ok-to-test
- Add recommended app.kubernetes.io/name label to everything. With these changes we prevent services from selecting pods of other operators, because they all have the control-plane: controller-manager label by default. Because deployment updates with modified labels fail, also rename the deployment by modifying the namePrefix (causes bundle file renames as well).
- Add a test that validates retrieval of the operator controller pod by the label.
- Add the default container label, to make it easier to get the desired logs.
Hey, /test 4.17-openshift-e2e
bundle/manifests/node-maintenance-operator.clusterserviceversion.yaml
Outdated
- Add recommended app.kubernetes.io/name label to everything. With these changes we prevent services from selecting pods of other operators, because they all have the control-plane: controller-manager label by default. Because deployment updates with modified labels fail, also rename the deployment by modifying the namePrefix (causes bundle file renames as well).
- Add a test that validates retrieval of the operator controller pod by the label.
- Add the default container label, to make it easier to get the desired logs.
/retest
/retest
setup failed... /retest
@razo7 since when is the e2e test so flaky? :/
/retest
test/e2e/node_maintenance_test.go
Outdated
@@ -419,6 +422,13 @@ func getOperatorPod() *corev1.Pod {
	return &pods.Items[0]
}

func getOperatorLabel() bool {
	pods, err := KubeClient.CoreV1().Pods(operatorNsName).List(context.Background(), metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=node-maintenance-operator"})
IIUC the PR introduces 3 new labels while this function looks for the existence of only one of them. See also K8s recommended labels.
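For illustration, a minimal sketch of a helper that requires all three labels in a single selector. The app.kubernetes.io/component and app.kubernetes.io/part-of keys and their values are assumptions (the exact label set is not spelled out in this thread); KubeClient and operatorNsName are the helpers already used in this test file, and the Gomega, metav1 and context imports match the ones the file already has.

// Sketch only: list the operator pod with a selector that requires all three
// labels at once, instead of only app.kubernetes.io/name.
// The component and part-of keys/values are assumptions, not confirmed by this PR.
func getOperatorPodByAllLabels() *corev1.Pod {
	selector := "app.kubernetes.io/name=node-maintenance-operator," +
		"app.kubernetes.io/component=controller-manager," +
		"app.kubernetes.io/part-of=node-maintenance-operator"
	pods, err := KubeClient.CoreV1().Pods(operatorNsName).List(
		context.Background(), metav1.ListOptions{LabelSelector: selector})
	ExpectWithOffset(1, err).ToNot(HaveOccurred())
	ExpectWithOffset(1, pods.Items).ToNot(BeEmpty(), "no operator pod found with all expected labels")
	return &pods.Items[0]
}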
Thanks, I will add the new labels to the e2e test.
Regarding the k8s recommended labels, I went over this doc and I think all the relevant labels were added. Thank you for introducing me to these k8s conventions.
…labels existence
- Add recommended app.kubernetes.io/name label to everything. With these changes we prevent services from selecting pods of other operators, because they all have the control-plane: controller-manager label by default. Because deployment updates with modified labels fail, also rename the deployment by modifying the namePrefix (causes bundle file renames as well).
- Add tests that validate retrieval of the operator controller pod by label.
- Add the default container label, to make it easier to get the desired logs.
/retest
test/e2e/node_maintenance_test.go
Outdated
	ExpectWithOffset(2, len(pods.Items)).ToNot(BeZero(), "no operator pod found")
	return true
}

func validateOperatorCustomLabelComponent() bool {
not that relevant for test code, but for production code this would be very inefficient. You could get the pod with one label, and then check if it has the other labels as well, instead of looking for the pod 3 times
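As a hedged sketch of that suggestion (the extra label keys are again assumptions, and KubeClient and operatorNsName come from the existing suite): list the pod once, then check the remaining labels on the object you already have.

// Sketch only: one API call, then plain map lookups on the returned pod.
// The component and part-of keys are assumptions, not confirmed by this PR.
func validateOperatorLabels() {
	pods, err := KubeClient.CoreV1().Pods(operatorNsName).List(
		context.Background(),
		metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=node-maintenance-operator"})
	ExpectWithOffset(1, err).ToNot(HaveOccurred())
	ExpectWithOffset(1, pods.Items).ToNot(BeEmpty(), "no operator pod found")

	pod := pods.Items[0]
	// Check the remaining labels without listing again.
	for _, key := range []string{"app.kubernetes.io/component", "app.kubernetes.io/part-of"} {
		ExpectWithOffset(1, pod.Labels).To(HaveKey(key), "missing label %s on pod %s", key, pod.Name)
	}
}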
/hold since this is not urgent, let's first try to fix the e2e test, these retests are a waste of resources...
New changes are detected. LGTM label has been removed.
@@ -179,6 +179,9 @@ var _ = Describe("Starting Maintenance", func() {
	// it should be caused by the test deployment's termination graceperiod > drain timeout
	Expect(getOperatorLogs()).To(ContainSubstring(nodemaintenance.FixedDurationReconcileLog))

	// validate that the operator controller pod has the app.kubernetes.io/name label
	Expect(validateOperatorCustomLabels()).To(BeTrue(), "operator custom label validation failed")
the return value and checking it isn't needed IMHO, everything is checked inside the function already?
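In other words, something like this sketch: the helper asserts internally via Gomega (getOperatorPod is the existing helper in this file), so the call site inside the Describe block needs no extra Expect.

// Sketch only: no boolean return value; a failed expectation inside the helper
// already fails the test.
func validateOperatorCustomLabels() {
	pod := getOperatorPod() // existing helper in this test file
	ExpectWithOffset(1, pod.Labels).To(HaveKey("app.kubernetes.io/name"),
		"operator pod is missing the app.kubernetes.io/name label")
}

// call site inside the existing Describe block, no Expect(...) wrapper needed:
//
//	validateOperatorCustomLabels()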
	podName := podOperator.ObjectMeta.Name

	// Get the Pod
	pod, err := KubeClient.CoreV1().Pods(operatorNsName).Get(context.TODO(), podName, metav1.GetOptions{})
you have the pod already, why get it here again?
	// Check if each label is present
	for _, key := range requiredKeys {
		_, exists := pod.Labels[key]
		ExpectWithOffset(1, exists).Should(BeTrue(), fmt.Sprintf("Missing required label '%s' in pod '%s'", key, podName))
a) you don't check label values anymore
b) hint: ExpectWithOffset(1, pod.Labels).To(HaveKeyWithValue(key, value))
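A sketch of that hint applied to the loop quoted above: iterate over expected key/value pairs and check both in one matcher. The concrete values are illustrative assumptions; pod and podName are the variables from the snippet above.

// Sketch only: check keys and values in one pass; the values below are
// assumptions for illustration, not taken from the PR.
requiredLabels := map[string]string{
	"app.kubernetes.io/name":      "node-maintenance-operator",
	"app.kubernetes.io/component": "controller-manager",
	"app.kubernetes.io/part-of":   "node-maintenance-operator",
}
for key, value := range requiredLabels {
	ExpectWithOffset(1, pod.Labels).To(HaveKeyWithValue(key, value),
		"pod %s is missing label %s=%s", podName, key, value)
}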
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: Shai1-Levi. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@Shai1-Levi: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Why we need this PR
To meet the k8s custom label standardization:
Add 3 custom labels to all resources, and add an e2e test that validates the existence of the name label.
Changes made
- Add the recommended app.kubernetes.io/name label to everything. With these changes we prevent services from selecting pods of other operators, because they all have the control-plane: controller-manager label by default (see the sketch after this list).
- Because deployment updates with modified labels fail, also rename the deployment by modifying the namePrefix (this causes bundle file renames as well).
- Add a test that validates retrieval of the operator controller pod by the name label.
- Add the default container label, to make it easier to get the desired logs.
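A hedged sketch of how an e2e check could confirm the relabelling at the Deployment level, assuming KubeClient is the standard client-go clientset used elsewhere in this suite and that the label value mirrors the operator name (both are assumptions for illustration):

// Sketch only: the renamed controller Deployment should be selectable by the
// new app.kubernetes.io/name label, and its pod template should carry it too,
// so pods are no longer found only via the shared control-plane label.
func validateOperatorDeploymentLabels() {
	deployments, err := KubeClient.AppsV1().Deployments(operatorNsName).List(
		context.Background(),
		metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=node-maintenance-operator"})
	ExpectWithOffset(1, err).ToNot(HaveOccurred())
	ExpectWithOffset(1, deployments.Items).To(HaveLen(1),
		"expected exactly one operator deployment carrying the name label")

	// Deployment selectors are immutable, which is the reason the deployment had
	// to be renamed (namePrefix change) instead of being updated in place.
	template := deployments.Items[0].Spec.Template
	ExpectWithOffset(1, template.Labels).To(HaveKeyWithValue(
		"app.kubernetes.io/name", "node-maintenance-operator"))
}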
Which issue(s) this PR fixes
Test plan