docs/main/concepts/data-engines/local-storage.md (9 additions, 7 deletions)
@@ -13,15 +13,17 @@ OpenEBS provides Dynamic PV provisioners for [Kubernetes Local Volumes](https://
As the local volume is accessible only from a single node, local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume will also become inaccessible and a Pod using it will not be able to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.
## When Should You Use OpenEBS Local Storage?

Use when:

- High performance is needed by applications that manage their own replication, data protection, and other features such as snapshots and clones.
- Local disks need to be managed dynamically and monitored for early warning of impending failure.

## When Should You Avoid OpenEBS Local Storage?

Avoid when:

- Applications expect replication from the storage layer.
- The volume size needs to be changed dynamically and the underlying disk is not resizable.
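For reference, a minimal Local PV (hostpath) StorageClass looks like the sketch below. The class name and `BasePath` shown here are common defaults rather than values taken from this page; adjust them to your environment.

```text
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath        # example name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```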
docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/configuration/rs-storage-class-parameters.md (0 additions, 291 deletions)
@@ -56,297 +56,6 @@ The `agents.core.capacity.thin` spec present in the Replicated PV Mayastor helm
The parameter `allowVolumeExpansion` enables the expansion of PVs when using Persistent Volume Claims (PVCs). You must set the `allowVolumeExpansion` parameter to `true` in the StorageClass to enable the expansion of a volume. In order to expand volumes where volume expansion is enabled, edit the size of the PVC. Refer to the [Resize documentation](../replicated-pv-mayastor/advanced-operations/resize.md) for more details.
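As a minimal sketch (the class and PVC names below are placeholders, not values from this guide), expansion is enabled in the StorageClass and then triggered by editing the PVC's requested size:

```text
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-expandable     # example name
parameters:
  protocol: nvmf
  repl: "2"
provisioner: io.openebs.csi-mayastor
allowVolumeExpansion: true
```

```text
# Grow an existing PVC bound to this class, for example to 20Gi
kubectl patch pvc example-pvc --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```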
## Topology Parameters

The topology parameters defined in the storage class help in determining the placement of volume replicas across different nodes/pools of the cluster. A brief explanation of each parameter is as follows.

:::note
Only one type of topology parameter is supported per storage class.
:::
### "nodeAffinityTopologyLabel"
68
-
69
-
The parameter `nodeAffinityTopologyLabel` will allow the placement of replicas on the node that exactly matches the labels defined in the storage class.
70
-
For the case shown below, the volume replicas will be provisioned on `worker-node-1` and `worker-node-3` only as they match the labels specified under `nodeAffinityTopologyLabel` in storage class which is equal to zone=us-west-1.
71
-
72
-
**Command**
73
-
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  nodeAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```
Apply the labels to the nodes using the below command:
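A representative labeling step is sketched below; the node names and the `zone` value follow the example above, so adjust them to your cluster:

```text
kubectl label node worker-node-1 zone=us-west-1
kubectl label node worker-node-3 zone=us-west-1
```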
### "nodeHasTopologyKey"

The parameter `nodeHasTopologyKey` allows the placement of replicas on nodes having a label whose key matches the key specified in the storage class.
**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  nodeHasTopologykey: |
    rack
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```
Apply the labels on the node using the below command:
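A representative labeling step is sketched below; the node names and `rack` values mirror the listing that follows, so adjust them to your cluster:

```text
kubectl label node worker-node-1 rack=1
kubectl label node worker-node-2 rack=2
kubectl label node worker-node-3 rack=2
```

The applied labels can then be verified as shown below: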
```text
# kubectl mayastor get nodes -n openebs --show-labels
ID             GRPC ENDPOINT        STATUS  LABELS
worker-node-1  65.108.91.181:10124  Online  rack=1
worker-node-2  65.21.4.103:10124    Online  rack=2
worker-node-3  37.27.13.10:10124    Online  rack=2
```
In this case, the volume replicas will be provisioned on any two of the three nodes, i.e.

- `worker-node-1` and `worker-node-2`, or
- `worker-node-1` and `worker-node-3`, or
- `worker-node-2` and `worker-node-3`

as the storage class has `rack` as the value for `nodeHasTopologyKey`, which matches the label key of the nodes.
### "nodeSpreadTopologyKey"

The parameter `nodeSpreadTopologyKey` allows the placement of replicas on nodes that have label keys identical to the key specified in the storage class but with different values.

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  nodeSpreadTopologyKey: |
    zone
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```
Apply the labels to the nodes using the below command:
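A representative labeling step is sketched below; the `zone` values are assumptions chosen to match the placement described next (two nodes share a zone, one differs), so adjust them to your cluster:

```text
kubectl label node worker-node-1 zone=us-west-1
kubectl label node worker-node-2 zone=us-east-1
kubectl label node worker-node-3 zone=us-west-1
```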
In this case, the volume replicas will be provisioned on one of the below pairs of nodes, i.e.

- `worker-node-1` and `worker-node-2`, or
- `worker-node-2` and `worker-node-3`

as the storage class has `zone` as the value for `nodeSpreadTopologyKey`, which matches the label key of the nodes but with different values.
### "poolAffinityTopologyLabel"
197
-
198
-
The parameter `poolAffinityTopologyLabel` will allow the placement of replicas on the pool that exactly match the labels defined in the storage class.
199
-
200
-
**Command**
201
-
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  poolAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```
Apply the labels to the pools using the below command:
**Command**
```text
cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-0
  namespace: mayastor
spec:
  node: worker-node-0
  disks: ["/dev/sdb"]
  topology:
    labelled:
      zone: us-west-1
---
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-1
  namespace: mayastor
spec:
  node: worker-node-1
  disks: ["/dev/sdb"]
  topology:
    labelled:
      zone: us-east-1
---
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-2
  namespace: mayastor
spec:
  node: worker-node-2
  disks: ["/dev/sdb"]
  topology:
    labelled:
      zone: us-west-1
EOF
```
**Command (Get filtered pools based on labels)**
```text
kubectl mayastor get pools -n openebs --selector zone=us-west-1
ID              DISKS                                                      MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-0  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8   true     worker-node-0  Online   10GiB     0 B        10GiB      0 B
pool-on-node-2  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8   true     worker-node-2  Online   10GiB     0 B        10GiB      0 B

kubectl mayastor get pools -n openebs --selector zone=us-east-1
ID              DISKS                                                      MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-1  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8   true     worker-node-1  Online   10GiB     0 B        10GiB      0 B
```
For the case shown above, the volume replicas will be provisioned only on `pool-on-node-0` and `pool-on-node-2`, as they match the labels specified under `poolAffinityTopologyLabel` in the storage class, which is `zone=us-west-1`.
### "poolHasTopologyKey"
278
-
279
-
The parameter `poolHasTopologyKey` will allow the placement of replicas on the pool that has label keys same as the keys passed in the storage class.
280
-
281
-
**Command**
282
-
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  poolHasTopologykey: |
    zone
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```
**Command (Get filtered pools based on labels)**
```text
kubectl mayastor get pools -n openebs --selector zone=us-west-1
ID              DISKS                                                      MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-0  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8   true     worker-node-0  Online   10GiB     0 B        10GiB      0 B
pool-on-node-2  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8   true     worker-node-2  Online   10GiB     0 B        10GiB      0 B

kubectl mayastor get pools -n openebs --selector zone=us-east-1
ID              DISKS                                                      MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-1  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8   true     worker-node-1  Online   10GiB     0 B        10GiB      0 B
```
In this case, the volume replicas will be provisioned on any two of the three pools, i.e.

- `pool-on-node-0` and `pool-on-node-1`, or
- `pool-on-node-0` and `pool-on-node-2`, or
- `pool-on-node-1` and `pool-on-node-2`

as the storage class has `zone` as the value for `poolHasTopologyKey`, which matches the label key of the pools.
### "stsAffinityGroup"

`stsAffinityGroup` represents a collection of volumes that belong to instances of a Kubernetes StatefulSet. When a StatefulSet is deployed, each instance within the StatefulSet creates its own individual volume, which collectively forms the `stsAffinityGroup`. Each volume within the `stsAffinityGroup` corresponds to a pod of the StatefulSet.

This feature enforces the following rules to ensure the proper placement and distribution of replicas and targets so that there is no single point of failure affecting multiple instances of the StatefulSet.
1. Anti-affinity among single-replica volumes:
   This rule ensures that replicas of different volumes are distributed in such a way that there is no single point of failure, by avoiding the colocation of replicas from different volumes on the same node.

2. Anti-affinity among multi-replica volumes:
   If the affinity group volumes have multiple replicas, they already have some level of redundancy. This feature ensures that, in such cases, the replicas are distributed optimally for the `stsAffinityGroup` volumes.

3. Anti-affinity among targets:
   The [High Availability](../replicated-pv-mayastor/advanced-operations/HA.md) feature ensures that there is no single point of failure for the targets. The `stsAffinityGroup` ensures that, in such cases, the targets are distributed optimally for the `stsAffinityGroup` volumes.
By default, the `stsAffinityGroup` feature is disabled. To enable it, modify the storage class YAML by setting the `parameters.stsAffinityGroup` parameter to true.
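A minimal sketch of such a storage class is given below; the class name is a placeholder and the remaining parameters follow the earlier examples in this section:

```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-sts            # example name
parameters:
  protocol: nvmf
  repl: "1"
  stsAffinityGroup: "true"
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```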
#### Known Limitation
For multi-replica volumes that are part of a `stsAffinityGroup`, scaling down is permitted only up to two replicas. Reducing the replica count below two is not supported.
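For example, assuming the kubectl-mayastor plugin's `scale volume` subcommand is available in your environment, a volume in such a group could be scaled down to two replicas (the volume UUID below is a placeholder), but not to one:

```text
kubectl mayastor scale volume 0c08667c-8b59-4d11-9192-b54e27e0ce0f 2
```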
### "cloneFsIdAsVolumeId"

`cloneFsIdAsVolumeId` is a setting for volume clones/restores with two options: `true` and `false`. By default, it is set to `false`.
- When set to `true`, the created clone/restore's filesystem `uuid` will be set to the restore volume's `uuid`. This is important because some file systems, like XFS, do not allow duplicate filesystem `uuid` on the same machine by default.
- When set to `false`, the created clone/restore's filesystem `uuid` will be the same as the original volume `uuid`, but it will be mounted using the `nouuid` flag to bypass duplicate `uuid` validation.
:::note
This option needs to be set to `true` when using a `btrfs` filesystem if the application using the restored volume is scheduled concurrently on the same node where the original volume is mounted.
:::
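A minimal sketch of a storage class with this option enabled is shown below; the class name is a placeholder, `fsType: xfs` is included only for illustration, and the remaining parameters follow the earlier examples:

```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-clone-fsid     # example name
parameters:
  protocol: nvmf
  repl: "2"
  fsType: xfs
  cloneFsIdAsVolumeId: "true"
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```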