adding new pod-count test to the observability suite #3084

Open
wants to merge 1 commit into
base: main
Conversation

Contributor

@acornett21 acornett21 commented Jul 14, 2025

Motivation

To implement a new test that satisfies the business request in EET-4647.

Changes

  • Exported autodiscover functions
    • CreateLabels
    • FindPodsByLabels
  • Added function testComparePodCount to tests/observability/suite.go
    • Included the above as a test identifier
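
The before/after comparison that testComparePodCount performs can be sketched minimally like this; the `Pod` type and `MissingPods` helper below are simplified stand-ins for illustration, not certsuite's actual `provider` types:

```go
package main

import "fmt"

// Pod is a simplified stand-in for the suite's pod type; only the
// fields needed for the comparison are kept.
type Pod struct {
	Namespace string
	Name      string
}

// MissingPods returns the namespace/name keys present in the before
// snapshot but absent from the after snapshot.
func MissingPods(before, after []Pod) []string {
	seen := make(map[string]bool, len(after))
	for _, p := range after {
		seen[p.Namespace+"/"+p.Name] = true
	}
	var missing []string
	for _, p := range before {
		if key := p.Namespace + "/" + p.Name; !seen[key] {
			missing = append(missing, key)
		}
	}
	return missing
}

func main() {
	before := []Pod{{"ns1", "web-0"}, {"ns1", "web-1"}}
	after := []Pod{{"ns1", "web-0"}}
	fmt.Println(MissingPods(before, after)) // prints [ns1/web-1]
}
```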

Notes/Questions

  • Do we want to see if there are extra pods running after the fact?
  • If this is the direction we want to go for this, tests can be added to this PR.

Assisted-by: Cursor

@dcibot
Collaborator

dcibot commented Jul 14, 2025

@acornett21 acornett21 force-pushed the pod_count branch 2 times, most recently from 74f38a5 to dc497e7 on July 14, 2025 18:55

dcibot commented Jul 14, 2025

@sebrandon1
Member

Looks like your new test needs a corresponding Impact Statement:

    2025/07/14 21:22:42 ERROR Test case observability-pod-count is missing an impact statement in the ImpactMap

@acornett21
Contributor Author

> Looks like your new test needs a corresponding Impact Statement:
> 2025/07/14 21:22:42 ERROR Test case observability-pod-count is missing an impact statement in the ImpactMap

@sebrandon1 I just added this and re-generated the catalog. PTAL.


dcibot commented Jul 14, 2025

func testComparePodCount(check *checksdb.Check, env *provider.TestEnvironment) {
	oc := clientsholder.GetClientsHolder()

	originalPods := env.Pods
Member

Instead of comparing pods directly, why not compare the status of the deployment and statefulset (ready replicas, for instance)? So in this test, you could use the <statefulset/deployment name>-replica-/ to add a stable reference to pods in your results list. Otherwise, we would have a lot of false positives when the pod-recreation test is triggered, as a different uuid is appended to the pods after they are deleted and recreated.
For the orphan pods (the pods with owner references that are not a statefulset or replicaset), if any, we could compare them as already described here. See owner reference example: testPodsOwnerReference
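
A dependency-free sketch of that suggestion, comparing ready-replica counts per deployment name; `DeploymentStatus` and `DegradedDeployments` below are simplified stand-ins for illustration (real code would read these fields from the Kubernetes API), not certsuite's actual types:

```go
package main

import "fmt"

// DeploymentStatus is a simplified stand-in for a Deployment's status;
// only the fields this sketch needs are kept.
type DeploymentStatus struct {
	Name          string
	ReadyReplicas int32
	Replicas      int32
}

// DegradedDeployments reports deployments whose ready-replica count
// dropped between two snapshots. Deployment names are stable across
// pod re-creation, so this sidesteps the pod-name uuid problem.
func DegradedDeployments(before, after []DeploymentStatus) []string {
	readyBefore := make(map[string]int32, len(before))
	for _, d := range before {
		readyBefore[d.Name] = d.ReadyReplicas
	}
	var degraded []string
	for _, d := range after {
		if prev, ok := readyBefore[d.Name]; ok && d.ReadyReplicas < prev {
			degraded = append(degraded,
				fmt.Sprintf("%s: ready %d -> %d", d.Name, prev, d.ReadyReplicas))
		}
	}
	return degraded
}

func main() {
	before := []DeploymentStatus{{"web", 3, 3}}
	after := []DeploymentStatus{{"web", 1, 3}}
	fmt.Println(DegradedDeployments(before, after)) // prints [web: ready 3 -> 1]
}
```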

Contributor Author

I wrote the test the way it was described in the jira; if we want something other than pods before and pods after, then is it really a pod comparison test?

I'm happy to write whatever, but it's odd that this question came up. If we are unsure of what we really want, is what we are trying to accomplish even needed?

Member

> I wrote the test the way it was described in the jira; if we want something other than pods before and pods after, then is it really a pod comparison test?

The jira describes counting the number of ready and non-ready pods before and after running the suite. See below:

> Add a new test case that checks and collects the number of ready and non ready pods before and after running the certsuite to have an idea of any changes that have occurred while running the suite. This is a quick gauge to check the stability of the workload while running the suite.

I like the idea of adding more details in terms of which pods became not ready (or ready) after running the suite, but in my understanding this would work only if the pod names are stable after re-creation. Most pod names that belong to deployments/statefulsets will be different after pod re-creation because of the randomized identifier added at the end. For instance, in the self-node-remediation-ds-g26dh pod name, "-g26dh" will change when this pod is re-created.
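
One way around the randomized suffix would be to key pods by their owner's name instead of the full pod name. This is a minimal sketch of that idea; `StableKey` and `CountByOwner` are hypothetical helpers for illustration, and the owner name is assumed to come from the pod's owner references:

```go
package main

import (
	"fmt"
	"strings"
)

// StableKey maps a pod name to an identifier that survives re-creation
// by preferring the owner (deployment/statefulset/daemonset) name over
// the full pod name with its randomized suffix. Orphan pods fall back
// to their own name, which does not change.
func StableKey(podName, ownerName string) string {
	if ownerName != "" && strings.HasPrefix(podName, ownerName+"-") {
		return ownerName
	}
	return podName
}

// CountByOwner groups a podName -> ownerName snapshot under stable
// keys so that before/after snapshots can be compared per owner.
func CountByOwner(pods map[string]string) map[string]int {
	counts := make(map[string]int, len(pods))
	for pod, owner := range pods {
		counts[StableKey(pod, owner)]++
	}
	return counts
}

func main() {
	// The "-g26dh" suffix changes on re-creation; the owner name does not.
	fmt.Println(StableKey("self-node-remediation-ds-g26dh", "self-node-remediation-ds"))
}
```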

Contributor Author

I get what you are saying, but if a pod gets a new uuid, doesn't that mean that it's not stable, and should be reported?

> This is a quick gauge to check the stability of the workload while running the suite.

It was already stated that this test will have false positives/negatives, which is why it's not mandatory anywhere (which IMO is odd, since other certifications don't work this way, but that's another story).

Member

@edcdavid edcdavid Jul 16, 2025

> I get what you are saying, but if a pod gets a new uuid, doesn't that mean that it's not stable, and should be reported?

Yep, it could be indicative of instability, especially if the pod keeps crashing with CrashLoopBackOff, but not when it is simply terminated and recreated. We know that we have a lifecycle-pod-recreation test deleting pods and re-creating them, so we expect these uuids to change. Instead, the goal of this test is to catch any degradation of the application by checking, after the suite, whether pods that used to be ready are no longer ready, or pods that used to be not ready are now ready (whether the uuid changed or not).
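
That goal could be sketched as follows; `PodReadiness` and `ReadinessChanges` are hypothetical names for illustration, and the snapshot keys are assumed to have already been stabilized (e.g. by owner name):

```go
package main

import "fmt"

// PodReadiness is one snapshot entry: a stabilized key plus whether
// the pod reported Ready at snapshot time. Assumes one entry per key;
// multiple pods per owner would need aggregation first.
type PodReadiness struct {
	Key   string
	Ready bool
}

// ReadinessChanges returns the keys that flipped between Ready and not
// Ready across two snapshots, regardless of pod-name uuid churn.
func ReadinessChanges(before, after []PodReadiness) (nowNotReady, nowReady []string) {
	was := make(map[string]bool, len(before))
	for _, p := range before {
		was[p.Key] = p.Ready
	}
	for _, p := range after {
		prev, ok := was[p.Key]
		if !ok {
			continue // new pod, not a readiness flip
		}
		switch {
		case prev && !p.Ready:
			nowNotReady = append(nowNotReady, p.Key)
		case !prev && p.Ready:
			nowReady = append(nowReady, p.Key)
		}
	}
	return nowNotReady, nowReady
}

func main() {
	before := []PodReadiness{{"web", true}, {"db", false}}
	after := []PodReadiness{{"web", false}, {"db", true}}
	notReady, ready := ReadinessChanges(before, after)
	fmt.Println(notReady, ready) // prints [web] [db]
}
```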

> It was already stated that this test will have false positives/negatives, which is why it's not mandatory anywhere

Even if not mandatory, in my opinion we should keep the false positives/negatives to a minimum. I feel that just comparing names would not match for most pods, because most pods have an owner that appends this uuid.

Contributor Author

How many tests in the test suite manipulate pods? Can tests be ordered / have priority?

Member

@edcdavid edcdavid Jul 16, 2025

The tests that manipulate pods are identified by this function: GetNotIntrusiveSkipFn. This provides a switch to skip them. There are 4 such tests: lifecycle-crd-scaling, lifecycle-deployment-scaling, lifecycle-statefulset-scaling, lifecycle-pod-recreation.
The tests run in the order they are added in the code; we have not implemented changing the order or priority.
Maybe this test could be re-run after each intrusive test?
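
The skip-function pattern mentioned above can be sketched like this; GetNotIntrusiveSkipFn is the real certsuite helper's name, but the signature and body below are assumptions for illustration only:

```go
package main

import "fmt"

// skipIfIntrusiveDisabled returns a closure that decides at run time
// whether an intrusive test should be skipped, and why. This standalone
// shape is a hypothetical stand-in for certsuite's GetNotIntrusiveSkipFn.
func skipIfIntrusiveDisabled(allowIntrusive bool) func() (bool, string) {
	return func() (bool, string) {
		if !allowIntrusive {
			return true, "intrusive tests are disabled"
		}
		return false, ""
	}
}

func main() {
	skip := skipIfIntrusiveDisabled(false)
	skipped, reason := skip()
	fmt.Println(skipped, reason) // prints true intrusive tests are disabled
}
```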


dcibot commented Jul 16, 2025

from change #3084:

  • ERROR no DCI job found

@tonyskapunk
Collaborator

/dci-rerun


dcibot commented Jul 16, 2025

Signed-off-by: Adam D. Cornett <adc@redhat.com>

dcibot commented Jul 23, 2025
