Commit eecaee9

Merge pull request #77574 from lahinson/osdocs-10910-etcd-automated-backups
[OSDOCS-10910]: Fixing issues in automated backup etcd docs
2 parents fdc7c6e + 357e10e commit eecaee9

4 files changed, +457 -298 lines changed
backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc

Lines changed: 2 additions & 0 deletions
@@ -29,3 +29,5 @@ include::modules/backup-etcd.adoc[leveloffset=+1]
 
 // Creating automated etcd backups
 include::modules/etcd-creating-automated-backups.adoc[leveloffset=+1]
+include::modules/creating-single-etcd-backup.adoc[leveloffset=+1]
+include::modules/creating-recurring-etcd-backups.adoc[leveloffset=+1]
modules/creating-recurring-etcd-backups.adoc

Lines changed: 258 additions & 0 deletions
@@ -0,0 +1,258 @@
[id="creating-recurring-etcd-backups_{context}"]
== Creating recurring etcd backups

Follow these steps to create automated recurring backups of etcd.

Use dynamically-provisioned storage to keep the created etcd backup data in a safe, external location if possible. If dynamically-provisioned storage is not available, consider storing the backup data on an NFS share to make backup recovery more accessible.
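
If you choose an NFS share, a `PersistentVolume` backed by that share might look similar to the following sketch. The `nfs.server` address, the `nfs.path` value, and the PV name are placeholders for your environment, and the storage class name is assumed to match the PVC that you create later in this procedure:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-backup-nfs-pv # placeholder name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: etcd-backup-local-storage # must match the storageClassName of the backup PVC
  nfs:
    server: <nfs_server_address> # placeholder
    path: /exports/etcd-backups  # placeholder
----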

.Prerequisites

* You have access to the cluster as a user with the `cluster-admin` role.
* You have access to the OpenShift CLI (`oc`).

.Procedure

. If dynamically-provisioned storage is available, complete the following steps to create automated recurring backups:

.. Create a persistent volume claim (PVC) file named `etcd-backup-pvc.yaml` with contents such as the following example:
+
[source,yaml]
----
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
  namespace: openshift-etcd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi <1>
  volumeMode: Filesystem
  storageClassName: etcd-backup-local-storage
----
<1> The amount of storage available to the PVC. Adjust this value for your requirements.
+
[NOTE]
====
Each of the following providers requires changes to the `accessModes` and `storageClassName` keys:

[cols="1,1,1"]
|===
|Provider|`accessModes` value|`storageClassName` value

|AWS with the `versioned-installer-efc_operator-ci` profile
|`- ReadWriteMany`
|`efs-sc`

|Google Cloud Platform
|`- ReadWriteMany`
|`filestore-csi`

|Microsoft Azure
|`- ReadWriteMany`
|`azurefile-csi`
|===
====
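+
For example, on Microsoft Azure the same PVC might be written as in the following sketch, which only applies the `accessModes` and `storageClassName` substitutions from the preceding table and leaves all other values unchanged:
+
[source,yaml]
----
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
  namespace: openshift-etcd
spec:
  accessModes:
  - ReadWriteMany            # substitution from the table
  resources:
    requests:
      storage: 200Gi
  volumeMode: Filesystem
  storageClassName: azurefile-csi # substitution from the table
----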

.. Apply the PVC by running the following command:
+
[source,terminal]
----
$ oc apply -f etcd-backup-pvc.yaml
----

.. Verify the creation of the PVC by running the following command:
+
[source,terminal]
----
$ oc get pvc
----
+
.Example output
[source,terminal]
----
NAME              STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
etcd-backup-pvc   Bound                                                      51s
----
+
[NOTE]
====
Dynamic PVCs stay in the `Pending` state until they are mounted.
====

. If dynamically-provisioned storage is unavailable, create a local storage PVC by completing the following steps:
+
[WARNING]
====
If you delete or otherwise lose access to the node that contains the stored backup data, you can lose data.
====

.. Create a `StorageClass` CR file named `etcd-backup-local-storage.yaml` with the following contents:
+
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-backup-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
----

.. Apply the `StorageClass` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f etcd-backup-local-storage.yaml
----

.. Create a PV file named `etcd-backup-pv-fs.yaml` from the applied `StorageClass` with contents such as the following example:
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-backup-pv-fs
spec:
  capacity:
    storage: 100Gi <1>
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: etcd-backup-local-storage
  local:
    path: /mnt/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <example_master_node> <2>
----
<1> The amount of storage available to the PV. Adjust this value for your requirements.
<2> Replace this value with the name of the master node to attach this PV to.
+
[TIP]
====
Run the following command to list the available nodes:

[source,terminal]
----
$ oc get nodes
----
====

.. Verify the creation of the PV by running the following command:
+
[source,terminal]
----
$ oc get pv
----
+
.Example output
[source,terminal]
----
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS                REASON   AGE
etcd-backup-pv-fs   100Gi      RWX            Delete           Available           etcd-backup-local-storage            10s
----

.. Create a PVC file named `etcd-backup-pvc.yaml` with contents such as the following example:
+
[source,yaml]
----
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi <1>
  storageClassName: etcd-backup-local-storage
----
<1> The amount of storage available to the PVC. Adjust this value for your requirements.

.. Apply the PVC by running the following command:
+
[source,terminal]
----
$ oc apply -f etcd-backup-pvc.yaml
----

. Create a custom resource definition (CRD) file named `etcd-recurring-backups.yaml`. The contents of the created CRD define the schedule and retention type of automated backups.
+
For the default retention type of `RetentionNumber` with 15 retained backups, use contents such as the following example:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1alpha1
kind: Backup
metadata:
  name: etcd-recurring-backup
spec:
  etcd:
    schedule: "20 4 * * *" <1>
    timeZone: "UTC"
    pvcName: etcd-backup-pvc
----
<1> The `CronTab` schedule for recurring backups. Adjust this value for your needs.
+
To use retention based on the maximum number of backups, add the following key-value pairs to the `etcd` key:
+
[source,yaml]
----
spec:
  etcd:
    retentionPolicy:
      retentionType: RetentionNumber <1>
      retentionNumber:
        maxNumberOfBackups: 5 <2>
----
<1> The retention type. Defaults to `RetentionNumber` if unspecified.
<2> The maximum number of backups to retain. Adjust this value for your needs. Defaults to 15 backups if unspecified.
+
[WARNING]
====
A known issue causes the number of retained backups to be one greater than the configured value.
====
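+
For reference, a complete `Backup` CR that combines the schedule shown earlier with number-based retention might look like the following sketch; it only merges the fragments above and does not introduce new fields:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1alpha1
kind: Backup
metadata:
  name: etcd-recurring-backup
spec:
  etcd:
    schedule: "20 4 * * *"
    timeZone: "UTC"
    retentionPolicy:
      retentionType: RetentionNumber
      retentionNumber:
        maxNumberOfBackups: 5
    pvcName: etcd-backup-pvc
----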
+
For retention based on the file size of backups, use the following:
+
[source,yaml]
----
spec:
  etcd:
    retentionPolicy:
      retentionType: RetentionSize
      retentionSize:
        maxSizeOfBackupsGb: 20 <1>
----
<1> The maximum file size of the retained backups in gigabytes. Adjust this value for your needs. Defaults to 10 GB if unspecified.
+
[WARNING]
====
A known issue causes the maximum size of retained backups to be up to 10 GB greater than the configured value.
====

. Create the cron job defined by the CRD by running the following command:
+
[source,terminal]
----
$ oc create -f etcd-recurring-backups.yaml
----

. To find the created cron job, run the following command:
+
[source,terminal]
----
$ oc get cronjob -n openshift-etcd
----
