K8SPSMDB-1308: Improve physical restore logs #1915


Open: wants to merge 15 commits into main

Conversation

egegunes
Contributor

@egegunes egegunes commented May 12, 2025

K8SPSMDB-1308

CHANGE DESCRIPTION

Problem:
Physical restore logs are lost after the restore finishes.

Solution:
After the restore finishes, logs can be found in /data/db/pbm-restore-logs.

Warning

PBM deletes /data/db/pbm-restore-logs when you run a new physical restore,
so only the logs of the most recent restore are available.
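To inspect the preserved logs, one option is to exec into the mongod container. A minimal sketch, assuming default names: the pod name `my-cluster-name-rs0-0` and namespace `psmdb` are placeholders, not values from this PR.

```shell
# Hypothetical helper: builds the kubectl command that dumps the
# preserved restore logs. Pod and namespace names are placeholders.
restore_logs_cmd() {
	local pod=${1:-my-cluster-name-rs0-0}
	local ns=${2:-psmdb}
	echo "kubectl exec -n ${ns} ${pod} -c mongod -- sh -c 'cat /data/db/pbm-restore-logs/*'"
}

# Print the command instead of running it, so it can be reviewed first.
restore_logs_cmd
```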

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support oldest and newest supported MongoDB version?
  • Does the change support oldest and newest supported Kubernetes version?

Comment on lines +34 to +35
echo "Still in progress at $(date)"
sleep 10
Contributor


[shfmt] reported by reviewdog 🐶

Suggested change (whitespace only):
echo "Still in progress at $(date)"
sleep 10
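For context, echo/sleep lines like these typically sit inside a polling loop in e2e test helpers. A minimal sketch; the function name `wait_until` and its parameters are illustrative, not identifiers from this repository.

```shell
# Illustrative polling loop; wait_until and its arguments are
# hypothetical, not taken from this PR.
wait_until() {
	local check_cmd=$1       # command that succeeds once the work is done
	local interval=${2:-10}  # seconds between checks
	until $check_cmd; do
		echo "Still in progress at $(date)"
		sleep "$interval"
	done
	echo "Finished at $(date)"
}
```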

@egegunes egegunes marked this pull request as ready for review May 14, 2025 09:20
@egegunes egegunes added this to the v1.21.0 milestone May 14, 2025
gkech
gkech previously approved these changes May 20, 2025
hors
hors previously approved these changes May 20, 2025
jvpasinatto and others added 12 commits May 21, 2025 11:39
* K8SPSMDB-1265: Update versions for 1.20.0 release

* Update test dependencies versions

* Update cr images
…estore (#1911)

Use kubectl wait instead of regular loop in `wait_restore()`
Add retry for `demand-backup-sharded` test backup presence in minio storage
Delete backups during test cleanup before removing finalizers from objects.
* K8SPSMDB-1265: Print operator version details
* K8SPSMDB-1268 pmm3 support

* fix imports

* assert the env var lengths

* add server host

* cover more inits in test

* add custom params

* improve test

* remove spammy logs

* e2e tests

* fix mounts for pmm3 container

* update secret with new token comment

* wrapup e2e test

* drop unused env vars

* ensure that pmm3 test is fully functional

* bonus: improve the custom name e2e verification

* add small assertion to ensure that disabled pmm and nil secret return no container

* make custom cluster name configurable in cr for the e2e test

* fix linter

* cr: package rename to config

* add some more test cases
* K8SPSMDB-1216 update to db.hello

* update rs-shard-migration test

---------

Co-authored-by: Viacheslav Sarzhan <slava.sarzhan@percona.com>
@JNKPercona
Collaborator

Test name Status
arbiter passed
balancer passed
cross-site-sharded passed
custom-replset-name passed
custom-tls passed
custom-users-roles passed
custom-users-roles-sharded passed
data-at-rest-encryption failure
data-sharded passed
demand-backup failure
demand-backup-eks-credentials-irsa passed
demand-backup-fs passed
demand-backup-incremental passed
demand-backup-incremental-sharded passed
demand-backup-physical passed
demand-backup-physical-sharded failure
demand-backup-sharded passed
expose-sharded passed
finalizer passed
ignore-labels-annotations passed
init-deploy passed
ldap passed
ldap-tls passed
limits passed
liveness passed
mongod-major-upgrade passed
mongod-major-upgrade-sharded passed
monitoring-2-0 passed
monitoring-pmm3 passed
multi-cluster-service passed
multi-storage failure
non-voting passed
one-pod passed
operator-self-healing-chaos passed
pitr failure
pitr-physical failure
pitr-sharded passed
pitr-physical-backup-source passed
preinit-updates passed
pvc-resize passed
recover-no-primary passed
replset-overrides failure
rs-shard-migration passed
scaling passed
scheduled-backup passed
security-context passed
self-healing-chaos passed
service-per-pod passed
serviceless-external-nodes passed
smart-update passed
split-horizon passed
stable-resource-version passed
storage passed
tls-issue-cert-manager passed
upgrade passed
upgrade-consistency passed
upgrade-consistency-sharded-tls failure
upgrade-sharded passed
users passed
version-service passed
We ran 60 out of 60 tests (52 passed, 8 failed).

commit: 2d82912
image: perconalab/percona-server-mongodb-operator:PR-1915-2d829121

7 participants