Releases: percona/percona-server-mongodb-operator

v1.20.0

19 May 18:26

Release Highlights

This release of Percona Operator for MongoDB includes the following new features and improvements:

Point-in-time recovery from any backup storage

The Operator now natively supports multiple backup storages, inheriting this feature from Percona Backup for MongoDB (PBM). This enables you to make a point-in-time recovery from a backup stored on any storage, with PBM and the Operator maintaining data consistency for you. You also no longer have to wait until the Operator reconfigures the cluster after you select a different storage for a backup or a restore. As a result, the overall performance of your backup flow improves.

Improved RTO with added support for incremental physical backups (tech preview)

Using incremental physical backups in the Operator, you can now back up only the changes made since the previous backup. Because increments are smaller than a full backup, backups complete faster and you also save on storage and data transfer costs. Combining incremental backups with point-in-time recovery improves your recovery time objective (RTO).

You need a base backup to start the incremental backup chain, and you must make the whole chain on the same storage. Also, note that the percona.com/delete-backup finalizer and the .spec.backup.tasks.[].keep option apply to the base incremental backup but are ignored for subsequent incremental backups.
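
For illustration, here is a minimal sketch of an on-demand backup manifest that could start an incremental chain. The type values incremental-base and incremental, the cluster name my-cluster-name, and the storage name s3-us-west are assumptions for this example; check the Custom Resource reference for the exact fields:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup-incremental-base
spec:
  clusterName: my-cluster-name
  # all backups in the chain must use the same storage
  storageName: s3-us-west
  # assumed value for the first (base) backup; subsequent backups in the chain would use type: incremental
  type: incremental-base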

Improved monitoring for clusters in multi-region or multi-namespace deployments in PMM

Now you can define a custom name for your clusters deployed in different data centers. This name helps Percona Monitoring and Management (PMM) Server correctly recognize these clusters as connected and monitor them as one deployment. Similarly, PMM Server identifies clusters deployed with the same names in different namespaces as separate ones and correctly displays their performance metrics on dashboards.

To assign a custom name, define this configuration in the Custom Resource manifest for your cluster:

spec:
  pmm:
    customClusterName: mongo-cluster

Changelog

New Features

  • K8SPSMDB-1237 - Added support for incremental physical backups

  • K8SPSMDB-1329 - Allowed setting the loadBalancerClass for Services, so that you can use a custom load balancer implementation rather than the cloud provider's default one (see the sketch after this list)
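
A minimal sketch of how this might look in the Custom Resource, assuming the option is exposed as loadBalancerClass under the expose subsection for mongos (or replsets); the class value service.k8s.aws/nlb is only an example and depends on the load balancer controller you run:

sharding:
  mongos:
    expose:
      type: LoadBalancer
      loadBalancerClass: service.k8s.aws/nlb   # assumed placement; see deploy/cr.yaml for the exact option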

Improvements

  • K8SPSMDB-621 - Set PBM_MONGODB_URI env variable in PBM container to avoid defining it for every shell session and improve setup automation (Thank you Damiano Albani for reporting this issue)

  • K8SPSMDB-1219 - Improved the support of multiple storages for backups by using the Multi Storage support functionality in PBM. This enables users to make point-in-time recovery from any storage

  • K8SPSMDB-1223 - Improved the PBM_MONGODB_URI connection string construction by enabling every pbm-agent to connect to the local MongoDB instance directly

  • K8SPSMDB-1226 - Documented how to pass custom configuration for PBM

  • K8SPSMDB-1234 - Added the ability to use non-default ports (other than 27017) for MongoDB cluster components: mongod, mongos, and configsvrReplSet Pods

  • K8SPSMDB-1236 - Added a check that a username is unique when it is defined via the Custom Resource manifest

  • K8SPSMDB-1253 - Made SmartUpdate the default update strategy

  • K8SPSMDB-1276 - Added logic to the getMongoUri function to compare the content of the existing TLS and CA certificate files with the secret data. Files are only overwritten if the data has changed, preventing redundant writes and ensuring smoother operations during backup checks. (Thank you Anton Averianov for reporting and contributing to this issue)

  • K8SPSMDB-1316 - Added the ability to define a custom cluster name for pmm-admin component

  • K8SPSMDB-1325 - Added the directShardOperations role for a mongo user used for monitoring MongoDB 8 and above

  • K8SPSMDB-1337 - Added imagePullSecrets for PMM and backup images

Bugs Fixed

  • K8SPSMDB-1197 - Fixed the healthcheck log rotation routine so that it deletes log files created one day earlier.

  • K8SPSMDB-1231 - Fixed the issue where a single-node cluster temporarily reported the Error state during initial provisioning; the No mongod containers in running state error is now ignored.

  • K8SPSMDB-1239 - Fixed the issue with cron jobs running simultaneously

  • K8SPSMDB-1245 - Improved Telemetry for cluster-wide deployments to handle both an empty value and a comma-separated list of namespaces

  • K8SPSMDB-1256 - Fixed the issue with PBM failing with the length of read message too large error by verifying the existence of TLS files when constructing the PBM_MONGODB_URI connection string

  • K8SPSMDB-1263 - Fixed the issue where the Operator lost connection to mongod Pods during backup and threw an error; the Operator now retries the connection and proceeds with the backup

  • K8SPSMDB-1274 - Disabled the balancer before a logical restore to meet the PBM restore requirements

  • K8SPSMDB-1275 - Fixed the issue with the Operator failing when the getLastErrorModes write concern value is set for a replica set by using the data type for a value that matches MongoDB behavior (Thank you user clrxbl for reporting and contributing to this issue)

  • K8SPSMDB-1294 - Fixed the API mismatch error with the multi-cluster Services (MCS) enabled in the Operator by using the DiscoveryClient.ServerPreferredResources method to align with the kubectl behavior.

  • K8SPSMDB-1302 - Fixed the issue with the Operator being stuck during physical restore when the update strategy is set to SmartUpdate

  • K8SPSMDB-1306 - Fixed the issue where the Operator panicked if a user configured PBM priorities without timeouts

  • K8SPSMDB-1347 - Fixed the issue where the Operator threw errors when auto-generating passwords for multiple users by properly updating the Secret after each password generation

Upgrade considerations

The added support for multiple backup storages requires specifying the main storage. If you use a single storage, it will automatically be marked as main in the Custom Resource manifest during the upgrade. If you use multiple storages, you must define one of them as the main storage when you upgrade to version 1.20.0. The following command shows how to set the s3-us-west storage as the main one:

$ kubectl patch psmdb my-cluster-name --type=merge --patch '{
    "spec": {
      "crVersion": "1.20.0",
      "image": "percona/percona-server-mongodb:7.0.18-11",
      "backup": {
        "image": "percona/percona-backup-mongodb:2.9.1",
        "storages": {
          "s3-us-west": {
            "main": true
          }
        }
      },
      "pmm": {
        "image": "percona/pmm-client:2.44.1"
      }
    }
  }'

Supported software

The Operator was developed and tested with the following software:

  • Percona Server for MongoDB 6.0.21-18, 7.0.18-11, and 8.0.8-3.
  • Percona Backup for MongoDB 2.9.1.

Other options may also work but have not been tested.

Supported platforms

Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.20.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.

v1.19.1

20 Feb 15:28

Bugs Fixed

  • K8SPSMDB-1274: Revert to disabling MongoDB balancer during restores to follow requirements of Percona Backup for MongoDB 2.8.0.

Known limitations

  • PBM-1493: Operator versions 1.19.0 and 1.19.1 have a recommended MongoDB version set to 7.0 because point-in-time recovery may fail on MongoDB 8.0 if sharding is enabled and the Operator version is 1.19.x. Therefore, upgrading to Operator 1.19.0/1.19.1 is not recommended for sharded MongoDB 8.0 clusters.

Supported Platforms

The Operator was developed and tested with Percona Server for MongoDB 6.0.19-16, 7.0.15-9, and 8.0.4-1. Other options may also work but have not been tested. The Operator also uses Percona Backup for MongoDB 2.8.0.

Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.19.1:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.

v1.19.0

21 Jan 18:52

Release Highlights

Using remote file server for backups (tech preview)

The new filesystem backup storage type was added in this release, in addition to the already existing s3 and azure types.
It allows you to mount a remote file server to a local directory and have Percona Backup for MongoDB use this directory as a storage for backups. The approach is based on the common Network File System (NFS) protocol and is useful in network-restricted environments without S3-compatible storage, or in cases with a non-standard storage service that supports NFS access.

To use an NFS-capable remote file server as a backup storage, you need to mount the remote storage as a sidecar volume in the replsets section of the Custom Resource (and also in configsvrReplSet in case of a sharded cluster):

replsets:
  ...
  sidecarVolumes:
  - name: backup-nfs
    nfs:
      server: "nfs-service.storage.svc.cluster.local"
      path: "/psmdb-some-name-rs0"
  ...

Finally, this new storage needs to be configured in the same Custom Resource as a normal storage for backups:

backup:
  ...
  storages:
    backup-nfs:
      filesystem:
        path: /mnt/nfs/
      type: filesystem
  ...
  volumeMounts:
  - mountPath: /mnt/nfs/
    name: backup-nfs

See more in our documentation about this storage type.

Generated passwords for custom MongoDB users

A new improvement to the declarative management of custom MongoDB users brings automatic generation of user passwords. When you specify a new user in the deploy/cr.yaml configuration file, you can omit the reference to an existing Secret with the user's password, and the Operator will generate it automatically:

...
users:
  - name: my-user
    db: admin
    roles:
      - name: clusterAdmin
        db: admin
      - name: userAdminAnyDatabase
        db: admin

Find more details on this automatically created Secret in our documentation.

Percona Server for MongoDB 8.0 support

Percona Server for MongoDB 8.0 is now supported by the Operator in addition to the 6.0 and 7.0 versions. The appropriate images are now included in the list of Percona-certified images. See this blog post for details about the latest MongoDB 8.0 features and its reliability and performance improvements.

New Features

Improvements

Bugs Fixed

  • K8SPSMDB-1215: Fix a bug where ExternalTrafficPolicy was incorrectly set for LoadBalancer and NodePort services (Thanks to Anton Averianov for contributing)
  • K8SPSMDB-675: Fix a bug where disabling sharding failed on a running cluster with enabled backups
  • K8SPSMDB-754: Fix a bug where some error messages had “INFO” log level and therefore were not seen in logs with the “ERROR” log level turned on
  • K8SPSMDB-1088: Fix a bug which caused the Operator to start two backup operations if the user patched the backup object while its state was empty or Waiting
  • K8SPSMDB-1156: Fix a bug that prevented the Operator with enabled backups from recovering from invalid TLS configurations (Thanks to KOS for reporting)
  • K8SPSMDB-1172: Fix a bug where a backup user's password with special characters caused Percona Backup for MongoDB to fail
  • K8SPSMDB-1212: Stop disabling balancer during restores, because it is not required for Percona Backup for MongoDB 2.x

Deprecation, Rename and Removal

  • The psmdbCluster option from the deploy/backup/backup.yaml manifest used for on-demand backups, deprecated since the Operator version 1.12.0 in favor of the clusterName option, has been removed and is no longer supported (see the sketch after this list).
  • Percona Server for MongoDB 5.0 has reached its end of life and is no longer supported by the Operator
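
A minimal sketch of an on-demand backup manifest using the current option; the cluster and storage names are placeholders for this example:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup1
spec:
  clusterName: my-cluster-name   # replaces the removed psmdbCluster option
  storageName: s3-us-west        # placeholder storage name defined in the cluster Custom Resource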

Supported Platforms

The Operator was developed and tested with Percona Server for MongoDB 6.0.19-16, 7.0.15-9, and 8.0.4-1. Other options may also work but have not been tested. The Operator also uses Percona Backup for MongoDB 2.8.0.

Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.19.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.

v1.18.0

15 Nov 07:52
Compare
Choose a tag to compare

Release Highlights

Enhancements of the declarative user management

The declarative management of custom MongoDB users was improved compared to its initial implementation in the previous release: the Operator now tracks and syncs user-related changes between the Custom Resource and the database. Also, starting from this release you can create custom MongoDB roles on various databases, just like users, in the deploy/cr.yaml manifest:

...
roles:
  - name: clusterAdmin
    db: admin
  - name: userAdminAnyDatabase
    db: admin

See the documentation to find more details about this feature.

Support for selective restores

Percona Backup for MongoDB 2.0.0 introduced new functionality that allows partial restores, which means selectively restoring only the desired subset of data. Now the Operator also supports this feature, allowing you to restore a specific database or a collection from a backup. You can achieve this by using an additional selective section in the PerconaServerMongoDBRestore Custom Resource:

spec:
  selective:
    withUsersAndRoles: true
    namespaces:
    - "db.collection"

You can find more on selective restores and their limitations in our documentation.
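
For reference, here is a sketch of a complete restore manifest combining these fields; the cluster and backup names are placeholders for this example:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore-selective
spec:
  clusterName: my-cluster-name   # placeholder name of the cluster to restore
  backupName: backup1            # placeholder name of an existing backup object
  selective:
    withUsersAndRoles: true
    namespaces:
    - "db.collection"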

Splitting the replica set of the database cluster over multiple Kubernetes clusters

Recent improvements in cross-site replication made it possible to keep the replica set of the database cluster in different data centers. The Operator itself cannot deploy MongoDB replicas to other data centers, but this can still be achieved with a number of Operator deployments equal to the size of your replica set: one Operator to control the replica set via cross-site replication, and at least two Operators to bootstrap the unmanaged clusters with the other MongoDB replica set instances. Splitting the replica set of the database cluster over multiple Kubernetes clusters can be useful for building a fault-tolerant system in which all replicas are in different data centers.
You can find more about configuring such a multi-datacenter MongoDB cluster and the limitations of this solution on the dedicated documentation page.

New Features

  • K8SPSMDB-894: It is now possible to restore a subset of data (a specific database or a collection) from a backup, which is useful to reduce time on restore operations when fixing a corrupted data fragment
  • K8SPSMDB-1113: The new percona.com/delete-pitr-chunks finalizer allows the deletion of PITR log files from the backup storage when deleting a cluster, so that leftover data does not continue to take up space in the cloud (see the sketch after this list)
  • K8SPSMDB-1124 and K8SPSMDB-1146: Declarative user management now covers creating and managing user roles, and syncs user-related changes between the Custom Resource and the database
  • K8SPSMDB-1140 and K8SPSMDB-1141: Multi-datacenter cluster deployment is now possible
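
A minimal sketch of enabling the new finalizer on the cluster Custom Resource; the cluster name is a placeholder:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
  finalizers:
    - percona.com/delete-psmdb-pods-in-order
    - percona.com/delete-pitr-chunks   # deletes PITR log files from the backup storage when the cluster is deleted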

Improvements

  • K8SPSMDB-739: A number of Service exposure options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed for unification with other Percona Operators
  • K8SPSMDB-1002: New Custom Resource options under the replsets.primaryPreferTagSelector subsection allow providing Primary instance selection preferences based on a specific zone and region, which may be especially useful within the planned zone switchover process (Thanks to sergelogvinov for contribution)
  • K8SPSMDB-1096: Restore logs were improved to contain pbm-agent logs in mongod containers, which is useful to debug failures in the backup restoration process
  • K8SPSMDB-1135: Split-horizon DNS for external (unmanaged) nodes is now configurable via the replsets.externalNodes subsection in the Custom Resource
  • K8SPSMDB-1152: Starting from now, the Operator uses multi-architecture images of Percona Server for MongoDB and Percona Backup for MongoDB, making it easier to deploy a cluster on ARM
  • K8SPSMDB-1160: The PVC resize feature introduced in the previous release can now be enabled or disabled via the enableVolumeExpansion Custom Resource option (false by default), which protects the cluster from a storage resize triggered by mistake
  • K8SPSMDB-1132: A new secrets.keyFile Custom Resource option allows configuring a custom name for the Secret with the MongoDB internal auth key file

Bugs Fixed

  • K8SPSMDB-912: Fix a bug where the full backup connection string, including the password, was visible in logs in case of Percona Backup for MongoDB errors
  • K8SPSMDB-1047: Fix a bug where the Operator was changing writeConcernMajorityJournalDefault to “true” during replica set reconfiguring, ignoring the value set by the user
  • K8SPSMDB-1168: Fix a bug where successful backups could obtain a failed state if the Operator was configured with watchAllNamespaces: true and MongoDB clusters had the same name across multiple namespaces (Thanks to Markus Küffner for contribution)
  • K8SPSMDB-1170: Fix a bug that prevented deletion of a cluster with the active percona.com/delete-psmdb-pods-in-order finalizer when the cluster was in the error state (e.g. when the mongo replset failed to reconcile)
  • K8SPSMDB-1184: Fix a bug where the Operator failed to reconcile when using the container security context with readOnlyRootFilesystem set to true (Thanks to applejag for contribution)

Deprecation, Rename and Removal

  • The new enableVolumeExpansion Custom Resource option allows users to disable the automated storage scaling with Volume Expansion capability. The default value of this option is false, which means that the automated scaling is turned off by default.

  • A number of Service exposure Custom Resource options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed to provide a unified experience with other Percona Operators:

    • expose.serviceAnnotations option renamed to expose.annotations
    • expose.serviceLabels option renamed to expose.labels
    • expose.exposeType option renamed to expose.type
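
The renamed options in context, as a minimal sketch; the replica set name and the annotation and label values are illustrative only:

replsets:
- name: rs0
  expose:
    enabled: true
    type: LoadBalancer            # formerly expose.exposeType
    annotations:                  # formerly expose.serviceAnnotations
      example.com/team: dba
    labels:                       # formerly expose.serviceLabels
      tier: database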

Supported Platforms

The Operator was developed and tested with Percona Server for MongoDB 5.0.29-25, 6.0.18-15, and 7.0.14-8. Other options may also work but have not been tested. The Operator also uses Percona Backup for MongoDB 2.7.0.

The following platforms were tested and are officially supported by the Operator 1.18.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.

v1.17.0

09 Sep 15:15

Release Highlights

Declarative user management (technical preview)

Before the Operator version 1.17.0, custom MongoDB users had to be created manually. Now the declarative creation of custom MongoDB users is supported via the users subsection in the Custom Resource. You can specify a new user in the deploy/cr.yaml manifest, setting the user's login name and database, passwordSecretRef (a reference to a key in a Secret resource containing the user's password), as well as MongoDB roles on various databases which should be assigned to this user:

...
users:
- name: my-user
  db: admin
  passwordSecretRef: 
    name: my-user-password
    key: my-user-password-key
  roles:
    - name: clusterAdmin
      db: admin
    - name: userAdminAnyDatabase
      db: admin

See documentation to find more details about this feature with additional explanations and the list of current limitations.

Liveness check improvements

Several logging improvements were made related to the liveness checks, to provide more information for debugging and to make these logs persist on failures for further examination.

Liveness check logs are stored in the /data/db/mongod-data/logs/mongodb-healthcheck.log file, which can be accessed in the corresponding Pod if needed. Starting from now, the liveness check generates more log messages, and the default log level is set to DEBUG.

Each time the health check fails, the current log is saved to a gzip compressed file named mongodb-healthcheck-<timestamp>.log.gz, and the mongodb-healthcheck.log log file is reset.
Logs older than 24 hours are automatically deleted.
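
To inspect these logs, you can read the file from the mongod container of a database Pod. The Pod name below is a placeholder for a cluster named my-cluster-name with a replica set rs0:

$ kubectl exec my-cluster-name-rs0-0 -c mongod -- \
    tail -n 50 /data/db/mongod-data/logs/mongodb-healthcheck.log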

New Features

  • K8SPSMDB-253: It is now possible to create and manage users via the Custom Resource

Improvements

  • K8SPSMDB-899: Add Labels for all Kubernetes objects created by the Operator (backups/restores, Secrets, Volumes, etc.) to make them clearly distinguishable
  • K8SPSMDB-919: The Operator now checks if the needed Secrets exist and connects to the storage to check the validity of credentials and the existence of a backup before starting the restore process
  • K8SPSMDB-934: Liveness checks now provide more debug information and keep separate log archives for each failure with a 24-hour retention
  • K8SPSMDB-1057: Finalizers were renamed to contain fully qualified domain names (FQDNs), avoiding potential conflicts with other finalizer names in the same Kubernetes environment
  • K8SPSMDB-1108: The new Custom Resource option allows setting a custom containerSecurityContext for PMM containers
  • K8SPSMDB-994: Remove a limitation where it wasn't possible to create a new cluster with splitHorizon enabled; previously it could only be enabled later on the running cluster

Bugs Fixed

  • K8SPSMDB-925: Fix a bug where the Operator generated “failed to start balancer” and “failed to get mongos connection” log messages when using Mongos with servicePerPod and LoadBalancer services, while the cluster was operating properly
  • K8SPSMDB-1105: The memory requests and limits for backups were increased in the deploy/cr.yaml configuration file example to reflect the Percona Backup for MongoDB minimal pbm-agent requirement of 1 GB RAM needed for stable operation
  • K8SPSMDB-1074: Fix a bug where the MongoDB cluster could not fail over when all Pods were down and the exposeType Custom Resource option was set to either NodePort or LoadBalancer
  • K8SPSMDB-1089: Fix a bug where it was impossible to delete a cluster in the error state with finalizers present
  • K8SPSMDB-1092: Fix a bug where Percona Backup for MongoDB log messages during physical restore were not accessible with the kubectl logs command
  • K8SPSMDB-1094: Fix a bug where it wasn’t possible to create a new cluster with upgradeOptions.setFCV Custom Resource option set to true
  • K8SPSMDB-1110: Fix a bug where nil Custom Resource annotations were causing the Operator panic

Deprecation, Rename and Removal

Finalizers were renamed to contain fully qualified domain names to comply with the Kubernetes standards.

  • PerconaServerMongoDB Custom Resource:
    • delete-psmdb-pods-in-order finalizer renamed to percona.com/delete-psmdb-pods-in-order
    • delete-psmdb-pvc finalizer renamed to percona.com/delete-psmdb-pvc
  • PerconaServerMongoDBBackup Custom Resource:
    • delete-backup finalizer renamed to percona.com/delete-backup

Supported Platforms

The Operator was developed and tested with Percona Server for MongoDB 5.0.28-24, 6.0.16-13, and 7.0.12-7. Other options may also work but have not been tested. The Operator also uses Percona Backup for MongoDB 2.5.0.

The following platforms were tested and are officially supported by the Operator 1.17.0:

This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.