Release Highlights
This release of Percona Operator for MongoDB includes the following new features and improvements:
Point-in-time recovery from any backup storage
The Operator now natively supports multiple backup storages, inheriting this feature from Percona Backup for MongoDB (PBM). This enables you to make a point-in-time recovery from any backup stored on any storage, with PBM and the Operator maintaining data consistency for you. You no longer have to wait until the Operator reconfigures the cluster after you select a different storage for a backup or a restore. As a result, the overall performance of your backup flow improves.
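For example, a point-in-time restore to a specific moment can be requested with a restore resource similar to the following minimal sketch. The cluster name, backup name, and target date are placeholders; adjust them to your environment and see the Operator documentation for the full set of options:

apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: my-cluster-name   # cluster to restore into (placeholder)
  backupName: backup1            # any existing backup on any configured storage (placeholder)
  pitr:
    type: date
    date: "2025-05-20 10:15:00"  # restore to this point in time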
Improved RTO with the added support for incremental physical backups (tech preview)
Using incremental physical backups in the Operator, you can now back up only the changes that happened since the previous backup. Since increments are smaller than a full backup, backups complete faster and you also save on storage and data transfer costs. Using incremental backups together with point-in-time recovery improves your recovery time objective (RTO).
You do need a base backup to start the incremental backup chain, and you must make the whole chain from the same storage. Also, note that the percona.com/delete-backup finalizer and the .spec.backup.tasks.[].keep option apply to the incremental base backup but are ignored for subsequent incremental backups.
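As an illustration, a backup schedule with a weekly base backup and daily increments might look like the following sketch in the Custom Resource manifest. The task names, schedules, and storage name are placeholders, and the incremental-base and incremental type values are assumptions based on the description above; check the Operator documentation for the exact syntax:

spec:
  backup:
    tasks:
      - name: weekly-incremental-base   # starts the incremental chain (placeholder name)
        enabled: true
        schedule: "0 1 * * 0"
        storageName: s3-us-west         # the whole chain must use the same storage
        type: incremental-base          # assumed value for the base backup
        keep: 4                         # keep applies to the base backup only
      - name: daily-incremental         # subsequent increments (placeholder name)
        enabled: true
        schedule: "0 1 * * 1-6"
        storageName: s3-us-west
        type: incremental               # assumed value for subsequent increments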
Improved monitoring for clusters in multi-region or multi-namespace deployments in PMM
Now you can define a custom name for your clusters deployed in different data centers. This name helps Percona Monitoring and Management (PMM) Server to correctly recognize the clusters as connected and monitor them as one deployment. At the same time, PMM Server identifies clusters deployed with the same names in different namespaces as separate ones and correctly displays performance metrics for you on dashboards.
To assign a custom name, define this configuration in the Custom Resource manifest for your cluster:
spec:
  pmm:
    customClusterName: mongo-cluster
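If the cluster is already running, you can apply the same setting with a patch, similar to the command used in the upgrade instructions below; the resource name my-cluster-name and the cluster name mongo-cluster are placeholders:

$ kubectl patch psmdb my-cluster-name --type=merge --patch '{
  "spec": {
    "pmm": {
      "customClusterName": "mongo-cluster"
    }
  }
}'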
Changelog
New Features
- K8SPSMDB-1237 - Added support for incremental physical backups
- K8SPSMDB-1329 - Allowed setting the loadBalancerClass field for LoadBalancer-type Services to use a custom load balancer implementation rather than the cloud provider's default one (see the sketch below)
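As a rough illustration for K8SPSMDB-1329, exposing a replica set through a custom load balancer class could look like the sketch below. The exact field placement is an assumption based on the feature description, and service.k8s.aws/nlb is just an example class registered by the AWS Load Balancer Controller; consult the Custom Resource reference for the authoritative syntax:

spec:
  replsets:
    - name: rs0
      expose:
        enabled: true
        type: LoadBalancer                        # expose replica set members via LoadBalancer Services
        loadBalancerClass: service.k8s.aws/nlb    # assumed field; selects the custom load balancer implementation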
Improvements
- K8SPSMDB-621 - Set the PBM_MONGODB_URI environment variable in the PBM container to avoid defining it for every shell session and to improve setup automation (Thank you Damiano Albani for reporting this issue)
- K8SPSMDB-1219 - Improved the support of multiple backup storages by using the Multi Storage support functionality in PBM. This enables users to make a point-in-time recovery from any storage
- K8SPSMDB-1223 - Improved the MONGODB_PBM_URI connection string construction by enabling every pbm-agent to connect to the local MongoDB directly
- K8SPSMDB-1226 - Documented how to pass custom configuration for PBM
- K8SPSMDB-1234 - Added the ability to use non-default ports (instead of 27017) for MongoDB cluster components: mongod, mongos, and configsvrReplSet Pods
- K8SPSMDB-1236 - Added a check that a username is unique when defining it via the Custom Resource manifest
- K8SPSMDB-1253 - Made SmartUpdate the default update strategy
- K8SPSMDB-1276 - Added logic to the getMongoUri function to compare the content of the existing TLS and CA certificate files with the Secret data. Files are only overwritten if the data has changed, preventing redundant writes and ensuring smoother operations during backup checks (Thank you Anton Averianov for reporting and contributing to this issue)
- K8SPSMDB-1316 - Added the ability to define a custom cluster name for the pmm-admin component
- K8SPSMDB-1325 - Added the directShardOperations role for the mongo user used for monitoring MongoDB 8 and above
- K8SPSMDB-1337 - Added imagePullSecrets for PMM and backup images
Bugs Fixed
- K8SPSMDB-1197 - Fixed the healthcheck log rotation routine to delete log files created one day earlier
- K8SPSMDB-1231 - Fixed the issue with a single-node cluster temporarily reporting the Error state during initial provisioning by ignoring the No mongod containers in running state error
- K8SPSMDB-1239 - Fixed the issue with cron jobs running simultaneously
- K8SPSMDB-1245 - Improved Telemetry for cluster-wide deployments to handle both an empty value and a comma-separated list of namespaces
- K8SPSMDB-1256 - Fixed the issue with PBM failing with the length of read message too large error by verifying the existence of TLS files when constructing the PBM_MONGODB_URI connection string
- K8SPSMDB-1263 - Fixed the issue with the Operator losing connection to mongod Pods during backup and throwing an error by retrying the connection and proceeding with the backup
- K8SPSMDB-1274 - Disabled the balancer before a logical restore to meet the PBM restore requirements
- K8SPSMDB-1275 - Fixed the issue with the Operator failing when the getLastErrorModes write concern value is set for a replica set by using a data type for the value that matches MongoDB behavior (Thank you user clrxbl for reporting and contributing to this issue)
- K8SPSMDB-1294 - Fixed the API mismatch error with multi-cluster Services (MCS) enabled in the Operator by using the DiscoveryClient.ServerPreferredResources method to align with the kubectl behavior
- K8SPSMDB-1302 - Fixed the issue with the Operator being stuck during a physical restore when the update strategy is set to SmartUpdate
- K8SPSMDB-1306 - Fixed the issue with the Operator panicking when a user configures PBM priorities without timeouts
- K8SPSMDB-1347 - Fixed the issue with the Operator throwing errors when auto-generating passwords for multiple users by properly updating the Secret after password generation
Upgrade considerations
The added support for multiple backup storages requires specifying the main storage. If you use a single storage, it will automatically be marked as main in the Custom Resource manifest during the upgrade. If you use multiple storages, you must define one of them as the main storage when you upgrade to version 1.20.0. The following command shows how to set the s3-us-west storage as the main one:
$ kubectl patch psmdb my-cluster-name --type=merge --patch '{
  "spec": {
    "crVersion": "1.20.0",
    "image": "percona/percona-server-mongodb:7.0.18-11",
    "backup": {
      "image": "percona/percona-backup-mongodb:2.9.1",
      "storages": {
        "s3-us-west": {
          "main": true
        }
      }
    },
    "pmm": {
      "image": "percona/pmm-client:2.44.1"
    }
  }
}'
Supported software
The Operator was developed and tested with the following software:
- Percona Server for MongoDB 6.0.21-18, 7.0.18-11, and 8.0.8-3.
- Percona Backup for MongoDB 2.9.1.
Other options may also work but have not been tested.
Supported platforms
Percona Operators are designed for compatibility with all CNCF-certified Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.20.0:
- Google Kubernetes Engine (GKE) 1.30-1.32
- Amazon Elastic Container Service for Kubernetes (EKS) 1.30-1.32
- OpenShift Container Platform 4.14 - 4.18
- Azure Kubernetes Service (AKS) 1.30-1.32
- Minikube 1.35.0 based on Kubernetes 1.32.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.