Commit c1a7cf9

doc(provision, faq): add readme and faq (#11)

Signed-off-by: Akhil Mohan <akhil.mohan@mayadata.io>

1 parent 4196dfa

File tree

4 files changed: +348 additions, -2 deletions

README.md

Lines changed: 157 additions & 2 deletions

# OpenEBS Local Device CSI Driver

CSI Driver for using Local Block Devices

## Project Status

Currently, the Device-LocalPV CSI Driver is in pre-alpha.
## Usage

### Prerequisites

Before installing the device CSI driver, make sure your Kubernetes cluster meets the following prerequisites:

1. Disks are available on the node with a single 10 MB partition, whose partition name is used to identify the disk.
2. You have access to install RBAC components into the kube-system namespace. The OpenEBS Device driver components are installed in the kube-system namespace so that they can be flagged as system-critical components.
### Supported System

K8S: 1.18+

OS: Ubuntu
### Setup

Find the disk that you want to use for Device LocalPV. For testing, a loopback device can be used:

```
truncate -s 1024G /tmp/disk.img
sudo losetup -f /tmp/disk.img --show
```
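As a quick local sanity check of the backing-file step, the sketch below creates a separate sparse file (the `/tmp/test-disk.img` path and 1 GiB size are just for illustration) and confirms its apparent size; the `losetup` and `parted` steps themselves still require root:

```
# Create a sparse backing file; no real disk blocks are allocated yet.
IMG=/tmp/test-disk.img
truncate -s 1G "$IMG"
# The apparent size is 1 GiB (1073741824 bytes) even though the file is sparse.
stat -c '%s' "$IMG"
```

Because the file is sparse, it consumes almost no actual disk space until data is written to the volumes carved out of it.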
Create the meta partition on the loop device, which will be used for provisioning volumes:

```
sudo parted /dev/loop9 mklabel gpt
sudo parted /dev/loop9 mkpart test-device 1MiB 10MiB
```
### Installation

Deploy the Operator yaml:

```
kubectl apply -f https://raw.githubusercontent.com/openebs/device-localpv/master/deploy/device-operator.yaml
```
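Once the operator yaml is applied, it is worth confirming that the driver components are running before moving on. A healthy install looks roughly like the following (pod name suffixes, ready counts, and ages will differ on your cluster):

```
$ kubectl get pods -n kube-system -l role=openebs-device
NAME                          READY   STATUS    RESTARTS   AGE
openebs-device-controller-0   4/4     Running   0          2m
openebs-device-node-xxxxx     2/2     Running   0          2m
```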
### Deployment

#### 1. Create a StorageClass

```
$ cat sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
```

Check the doc on [storageclasses](docs/storageclasses.md) for all the supported parameters for Device LocalPV.
##### Device Availability

If the device with the meta partition is available only on certain nodes, use topology to list the nodes where the devices are available. As shown in the storage class below, we can use `allowedTopologies` to describe device availability on nodes:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: kubernetes.io/hostname
    values:
    - device-node1
    - device-node2
```

The above storage class states that the device with meta partition "test-device" is available only on nodes device-node1 and device-node2. The Device CSI driver will create volumes on those nodes only.
#### 2. Create the PVC

```
$ cat pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-devicepv
spec:
  storageClassName: openebs-device-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```

Create a PVC using the storage class created for the Device driver.
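Because the storage class uses `volumeBindingMode: WaitForFirstConsumer`, the claim is expected to stay Pending until a pod consuming it is scheduled; this is normal Kubernetes behavior, not an error (output illustrative):

```
$ kubectl apply -f pvc.yaml
persistentvolumeclaim/csi-devicepv created
$ kubectl get pvc csi-devicepv
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
csi-devicepv   Pending                                      openebs-device-sc   5s
```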
#### 3. Deploy the application

Create the deployment yaml using the PVC backed by the Device driver storage:

```
$ cat fio.yaml

apiVersion: v1
kind: Pod
metadata:
  name: fio
spec:
  restartPolicy: Never
  containers:
  - name: perfrunner
    image: openebs/tests-fio
    command: ["/bin/bash"]
    args: ["-c", "while true ;do sleep 50; done"]
    volumeMounts:
    - mountPath: /datadir
      name: fio-vol
    tty: true
  volumes:
  - name: fio-vol
    persistentVolumeClaim:
      claimName: csi-devicepv
```

After deploying the application, we can go to the node and see that a partition has been created and is being used as a volume by the application for reading and writing data.
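One way to inspect this from the node (assuming the loop device from the setup steps; device names will differ for real disks):

```
$ sudo parted /dev/loop9 print   # the volume shows up as a new partition
$ lsblk /dev/loop9               # partitions appear as loop9p1, loop9p2, ...
```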
#### 4. Deprovisioning

To deprovision the volume, delete the application that is using the volume, and then delete the PV. As part of the PV deletion, the partition is wiped and deleted from the device.

```
$ kubectl delete -f fio.yaml
pod "fio" deleted
$ kubectl delete -f pvc.yaml
persistentvolumeclaim "csi-devicepv" deleted
```

deploy/sample/fio-block.yaml

Lines changed: 51 additions & 0 deletions

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-claim
spec:
  volumeMode: Block
  storageClassName: openebs-device-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fiob
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fiob
  template:
    metadata:
      labels:
        name: fiob
    spec:
      containers:
      - resources: {}
        name: perfrunner
        image: openebs/tests-fio
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "while true ;do sleep 50; done"]
        volumeDevices:
        - devicePath: /dev/xvda
          name: storage
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: block-claim
```
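Since this sample sets `volumeMode: Block`, the volume is attached to the container as a raw block device at the given `devicePath` instead of being formatted and mounted. A quick way to confirm this from inside a pod of the deployment (the pod name suffix is generated, so look it up first):

```
$ kubectl get pods -l name=fiob
$ kubectl exec -it <fiob-pod-name> -- ls -l /dev/xvda
```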

docs/faq.md

Lines changed: 81 additions & 0 deletions
### 1. How to add a custom topology key

To add a custom topology key, we can label all the nodes with the required key and value:

```sh
$ kubectl label node k8s-node-1 openebs.io/rack=rack1
node/k8s-node-1 labeled

$ kubectl get nodes k8s-node-1 --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
k8s-node-1   Ready    worker   16d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true,openebs.io/rack=rack1
```
It is recommended to label all the nodes with the same key; they can have different values for that key, but the key should be present on all the worker nodes.

Once we have labeled the nodes, we can install the device driver. The driver will pick up the node labels and add them as supported topology keys. If the driver is already installed and you want to add new topology information, label the nodes with the topology information and then restart the Device-LocalPV CSI driver daemon set (openebs-device-node) so that the driver can pick up the labels and add them as supported topology keys. That is, restart the pod in the kube-system namespace named openebs-device-node-[xxxxx], which is the node agent pod for the Device-LocalPV driver.

Note that a restart of the Device-LocalPV CSI driver daemon set is a must if we are going to use WaitForFirstConsumer as the volumeBindingMode in the storage class. With the Immediate volume binding mode, restarting the daemon set is not strictly required, irrespective of whether the nodes were labeled before or after installing the driver. However, it is recommended to restart the daemon set if the nodes are labeled after installation.
```sh
$ kubectl get pods -n kube-system -l role=openebs-device

NAME                          READY   STATUS    RESTARTS   AGE
openebs-device-controller-0   4/4     Running   0          5h28m
openebs-device-node-4d94n     2/2     Running   0          5h28m
openebs-device-node-gssh8     2/2     Running   0          5h28m
openebs-device-node-twmx8     2/2     Running   0          5h28m
```
We can verify that the key has been registered successfully with the Device-LocalPV CSI driver by checking the CSI node object yaml:

```yaml
$ kubectl get csinodes k8s-node-1 -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  creationTimestamp: "2020-04-13T14:49:59Z"
  name: k8s-node-1
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: k8s-node-1
    uid: fe268f4b-d9a9-490a-a999-8cde20c4dadb
  resourceVersion: "4586341"
  selfLink: /apis/storage.k8s.io/v1/csinodes/k8s-node-1
  uid: 522c2110-9d75-4bca-9879-098eb8b44e5d
spec:
  drivers:
  - name: device.csi.openebs.io
    nodeID: k8s-node-1
    topologyKeys:
    - beta.kubernetes.io/arch
    - beta.kubernetes.io/os
    - kubernetes.io/arch
    - kubernetes.io/hostname
    - kubernetes.io/os
    - node-role.kubernetes.io/worker
    - openebs.io/rack
```
We can see that "openebs.io/rack" is listed as a topology key. Now we can create a storageclass using that topology key:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack
    values:
    - rack1
```

The Device-LocalPV CSI driver will schedule the PV to the nodes where the label "openebs.io/rack" is set to "rack1".

Note that if the storageclass is using Immediate binding mode and no topology key is mentioned, then all the nodes should be labeled using the same key; that is, the same key should be present on all nodes, though nodes can have different values for that key. If some nodes are labeled with different keys, the DevicePV default scheduler cannot effectively do volume-capacity-based scheduling. In that case, the CSI provisioner will pick keys from a random node, prepare the preferred topology list using the nodes that have those keys defined, and the DevicePV scheduler will schedule the PV among those nodes only.

docs/storageclasses.md

Lines changed: 59 additions & 0 deletions
## Parameters

### StorageClass With Custom Node Labels

There can be a use case where a certain kind of device is present only on certain nodes, and we want a particular type of application to use that device. We can create a storage class with `allowedTopologies` and list all the nodes where that device type is present:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/nodename
    values:
    - node-1
    - node-2
```
Here we have a device with meta partition name "test-device" created on the NVMe disks, and we want to use these high-performing devices for applications that need higher IOPS. We can use the above StorageClass to create the PVC and deploy the application using it.

The above StorageClass works fine if the number of nodes is small, but if the number of nodes is huge, it is cumbersome to list all of them like this. In that case, we can label all the similar nodes with the same key-value pair and use that label to create the StorageClass.

```
user@k8s-master:~ $ kubectl label node k8s-node-2 openebs.io/devname=nvme
node/k8s-node-2 labeled
user@k8s-master:~ $ kubectl label node k8s-node-1 openebs.io/devname=nvme
node/k8s-node-1 labeled
```
Now, restart the Device-LocalPV driver (if already deployed; otherwise ignore this step) so that it can pick up the new node label as a supported topology. Check the [faq](./faq.md#1-how-to-add-custom-topology-key) for more details.

```
$ kubectl delete po -n kube-system -l role=openebs-device
```
Now, we can create the StorageClass like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-device-sc
allowVolumeExpansion: true
parameters:
  devname: "test-device"
provisioner: device.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/devname
    values:
    - nvme
```

Here, the volumes will be provisioned on the nodes which have the label "openebs.io/devname" set to "nvme".
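For completeness, a PVC consuming this class could look like the following (the claim name `nvme-claim` is hypothetical, not from the repo):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nvme-claim
spec:
  storageClassName: nvme-device-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```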
