
Commit 7a92133

docs: Update OSS 4.3 Docs
Signed-off-by: Bala Harish <161304963+balaharish7@users.noreply.github.com>
1 parent ff194f8 commit 7a92133

File tree: 2 files changed, +9 -298 lines


docs/main/concepts/data-engines/local-storage.md

Lines changed: 9 additions & 7 deletions
@@ -13,15 +13,17 @@ OpenEBS provides Dynamic PV provisioners for [Kubernetes Local Volumes](https://
 
 As the local volume is accessible only from a single node, local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume will also become inaccessible and a Pod using it will not be able to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.
 
-## When Should You Use or Avoid OpenEBS Local Storage?
+## When Should You Use OpenEBS Local Storage?
 
-- Use when:
-- High performance is needed by those applications that manage their own replication, data protection, and other features such as snapshots and clones.
-- When local disks need to be managed dynamically and monitored for impending notice of them going bad.
+Use when:
+- High performance is needed by those applications that manage their own replication, data protection, and other features such as snapshots and clones.
+- When local disks need to be managed dynamically and monitored for impending notice of them going bad.
 
-- Avoid when:
-- When applications expect replication from storage.
-- When the volume size needs to be changed dynamically and the underlying disk is not resizable.
+## When Should You Avoid OpenEBS Local Storage?
+
+Avoid when:
+- When applications expect replication from storage.
+- When the volume size needs to be changed dynamically and the underlying disk is not resizable.
 
 ## Use Cases
 
docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/configuration/rs-storage-class-parameters.md

Lines changed: 0 additions & 291 deletions
@@ -56,297 +56,6 @@ The `agents.core.capacity.thin` spec present in the Replicated PV Mayastor helm

The parameter `allowVolumeExpansion` enables the expansion of PVs when using Persistent Volume Claims (PVCs). You must set the `allowVolumeExpansion` parameter to `true` in the StorageClass to enable the expansion of a volume. To expand volumes where volume expansion is enabled, edit the size of the PVC. Refer to the [Resize documentation](../replicated-pv-mayastor/advanced-operations/resize.md) for more details.
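
For illustration, a sketch of a StorageClass with expansion enabled, followed by a PVC resize via `kubectl patch`. The class name `mayastor-expandable` and the PVC name `my-app-data` are hypothetical; the provisioner and parameters are reused from the examples elsewhere in this document.

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-expandable   # hypothetical name, for illustration only
parameters:
  protocol: nvmf
  repl: "2"
provisioner: io.openebs.csi-mayastor
allowVolumeExpansion: true
EOF

# Expand an existing volume by editing the PVC size (PVC name is hypothetical).
kubectl patch pvc my-app-data --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```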

## Topology Parameters

The topology parameters defined in the storage class help determine the placement of volume replicas across the nodes and pools of the cluster. A brief explanation of each parameter follows.

:::note
Only one type of topology parameter is supported per storage class.
:::

### "nodeAffinityTopologyLabel"

The parameter `nodeAffinityTopologyLabel` allows the placement of replicas on the nodes that exactly match the labels defined in the storage class.
For the case shown below, the volume replicas will be provisioned only on `worker-node-1` and `worker-node-3`, as they match the label specified under `nodeAffinityTopologyLabel` in the storage class, which is zone=us-west-1.

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  nodeAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```

Apply the labels to the nodes using the below command:

**Command**
```text
kubectl mayastor label node worker-node-1 zone=us-west-1
kubectl mayastor label node worker-node-2 zone=eu-east-1
kubectl mayastor label node worker-node-3 zone=us-west-1
```

**Command (Get nodes)**
```text
kubectl mayastor get nodes -n openebs --show-labels
ID             GRPC ENDPOINT        STATUS  LABELS
worker-node-1  65.108.91.181:10124  Online  zone=us-west-1
worker-node-2  65.21.4.103:10124    Online  zone=eu-east-1
worker-node-3  37.27.13.10:10124    Online  zone=us-west-1
```

### "nodeHasTopologyKey"

The parameter `nodeHasTopologyKey` allows the placement of replicas on the nodes that have a label whose key matches the key specified in the storage class.

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  nodeHasTopologykey: |
    rack
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```

Apply the labels on the nodes using the below commands:

**Command**
```text
kubectl mayastor label node worker-node-1 rack=1
kubectl mayastor label node worker-node-2 rack=2
kubectl mayastor label node worker-node-3 rack=2

kubectl mayastor get nodes -n openebs --show-labels
ID             GRPC ENDPOINT        STATUS  LABELS
worker-node-1  65.108.91.181:10124  Online  rack=1
worker-node-2  65.21.4.103:10124    Online  rack=2
worker-node-3  37.27.13.10:10124    Online  rack=2
```

In this case, the volume replicas will be provisioned on any two of the three nodes, that is:
- `worker-node-1` and `worker-node-2`, or
- `worker-node-1` and `worker-node-3`, or
- `worker-node-2` and `worker-node-3`

because the storage class specifies `rack` as the value of `nodeHasTopologyKey`, which matches a label key present on every node.

### "nodeSpreadTopologyKey"

The parameter `nodeSpreadTopologyKey` allows the placement of replicas on nodes whose label keys are identical to the key specified in the storage class but whose values differ.

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  nodeSpreadTopologyKey: |
    zone
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```

Apply the labels to the nodes using the below command:

**Command**
```text
kubectl mayastor label node worker-node-1 zone=us-west-1
kubectl mayastor label node worker-node-2 zone=eu-east-1
kubectl mayastor label node worker-node-3 zone=us-west-1
```

**Command (Get nodes)**
```text
kubectl mayastor get nodes -n openebs --show-labels
ID             GRPC ENDPOINT        STATUS  LABELS
worker-node-1  65.108.91.181:10124  Online  zone=us-west-1
worker-node-2  65.21.4.103:10124    Online  zone=eu-east-1
worker-node-3  37.27.13.10:10124    Online  zone=us-west-1
```

In this case, the volume replicas will be provisioned on one of the following node pairs:
- `worker-node-1` and `worker-node-2`, or
- `worker-node-2` and `worker-node-3`

because the storage class specifies `zone` as the value of `nodeSpreadTopologyKey`, which matches the label key on the nodes while the label values differ.

### "poolAffinityTopologyLabel"

The parameter `poolAffinityTopologyLabel` allows the placement of replicas on the pools that exactly match the labels defined in the storage class.

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  poolAffinityTopologyLabel: |
    zone: us-west-1
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```

Apply the labels to the pools using the below command:

**Command**
```text
cat <<EOF | kubectl create -f -
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-0
  namespace: mayastor
spec:
  node: worker-node-0
  disks: ["/dev/sdb"]
  topology:
    labelled:
      zone: us-west-1
---
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-1
  namespace: mayastor
spec:
  node: worker-node-1
  disks: ["/dev/sdb"]
  topology:
    labelled:
      zone: us-east-1
---
apiVersion: "openebs.io/v1beta2"
kind: DiskPool
metadata:
  name: pool-on-node-2
  namespace: mayastor
spec:
  node: worker-node-2
  disks: ["/dev/sdb"]
  topology:
    labelled:
      zone: us-west-1
EOF
```

**Command (Get filtered pools based on labels)**
```text
kubectl mayastor get pools -n openebs --selector zone=us-west-1
ID              DISKS                                                     MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-0  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-0  Online  10GiB     0 B        10GiB      0 B
pool-on-node-2  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-2  Online  10GiB     0 B        10GiB      0 B

kubectl mayastor get pools -n openebs --selector zone=us-east-1
ID              DISKS                                                     MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-1  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-1  Online  10GiB     0 B        10GiB      0 B
```

For the case shown above, the volume replicas will be provisioned only on `pool-on-node-0` and `pool-on-node-2`, as they match the label specified under `poolAffinityTopologyLabel` in the storage class, which is zone=us-west-1.

### "poolHasTopologyKey"

The parameter `poolHasTopologyKey` allows the placement of replicas on the pools whose label keys match the key specified in the storage class.

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-1
parameters:
  ioTimeout: "30"
  protocol: nvmf
  repl: "2"
  poolHasTopologykey: |
    zone
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```

**Command (Get filtered pools based on labels)**
```text
kubectl mayastor get pools -n openebs --selector zone=us-west-1
ID              DISKS                                                     MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-0  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-0  Online  10GiB     0 B        10GiB      0 B
pool-on-node-2  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-2  Online  10GiB     0 B        10GiB      0 B

kubectl mayastor get pools -n openebs --selector zone=us-east-1
ID              DISKS                                                     MANAGED  NODE           STATUS  CAPACITY  ALLOCATED  AVAILABLE  COMMITTED
pool-on-node-1  aio:///dev/sdb?uuid=b7779970-793c-4dfa-b8d7-03d5b50a45b8  true     worker-node-1  Online  10GiB     0 B        10GiB      0 B
```

In this case, the volume replicas will be provisioned on any two of the three pools, that is:
- `pool-on-node-0` and `pool-on-node-1`, or
- `pool-on-node-0` and `pool-on-node-2`, or
- `pool-on-node-1` and `pool-on-node-2`

because the storage class specifies `zone` as the value of `poolHasTopologyKey`, which matches a label key present on every pool.

### "stsAffinityGroup"

`stsAffinityGroup` represents a collection of volumes that belong to the instances of a Kubernetes StatefulSet. When a StatefulSet is deployed, each instance within the StatefulSet creates its own individual volume, and these volumes collectively form the `stsAffinityGroup`. Each volume within the `stsAffinityGroup` corresponds to a pod of the StatefulSet.

This feature enforces the following rules to ensure proper placement and distribution of replicas and targets, so that no single point of failure affects multiple instances of the StatefulSet.

1. Anti-affinity among single-replica volumes:
   This rule ensures that replicas of different volumes are distributed in such a way that there is no single point of failure, by avoiding the colocation of replicas from different volumes on the same node.

2. Anti-affinity among multi-replica volumes:
   If the affinity group volumes have multiple replicas, they already have some level of redundancy. This feature ensures that, in such cases, the replicas are distributed optimally for the stsAffinityGroup volumes.

3. Anti-affinity among targets:
   The [High Availability](../replicated-pv-mayastor/advanced-operations/HA.md) feature ensures that there is no single point of failure for the targets.
   The `stsAffinityGroup` ensures that, in such cases, the targets are distributed optimally for the stsAffinityGroup volumes.

By default, the `stsAffinityGroup` feature is disabled. To enable it, modify the storage class YAML by setting the `parameters.stsAffinityGroup` parameter to true (see the sketch below).

#### Known Limitation
For multi-replica volumes that are part of a `stsAffinityGroup`, scaling down is permitted only up to two replicas. Reducing the replica count below two is not supported.

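
As an illustrative sketch only (the class name `mayastor-sts` is hypothetical; the parameter and provisioner names are taken from the examples above), a StorageClass enabling this feature could look like:

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-sts          # hypothetical name, for illustration only
parameters:
  protocol: nvmf
  repl: "2"
  stsAffinityGroup: "true"    # enables the placement rules described above
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```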
### "cloneFsIdAsVolumeId"

`cloneFsIdAsVolumeId` is a setting for volume clones/restores with two options: `true` and `false`. By default, it is set to `false`.
- When set to `true`, the created clone/restore's filesystem `uuid` is set to the restore volume's `uuid`. This is important because some filesystems, such as XFS, do not allow duplicate filesystem `uuid`s on the same machine by default.
- When set to `false`, the created clone/restore's filesystem `uuid` is the same as the original volume's `uuid`, but it is mounted using the `nouuid` flag to bypass duplicate `uuid` validation.

:::note
This option needs to be set to `true` when using a `btrfs` filesystem if the application using the restored volume is scheduled concurrently on the same node where the original volume is mounted.
:::

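
Similarly, as a sketch (the class name `mayastor-clone` is hypothetical; the parameter name is as described above), a StorageClass that sets this option could look like:

**Command**
```text
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mayastor-clone         # hypothetical name, for illustration only
parameters:
  protocol: nvmf
  repl: "2"
  cloneFsIdAsVolumeId: "true"  # clones/restores get their own filesystem uuid, as described above
provisioner: io.openebs.csi-mayastor
volumeBindingMode: Immediate
EOF
```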
## See Also

- [Installation](../../../quickstart-guide/installation.md)
