CONTRIBUTING.md: 1 addition, 1 deletion
@@ -34,7 +34,7 @@ OpenEBS is an Apache 2.0 Licensed project and all your commits should be signed
We use the Developer Certificate of Origin (DCO) as an additional safeguard for the OpenEBS project. This is a well established and widely used mechanism to assure contributors have confirmed their right to license their contribution under the project's license. Please read [developer-certificate-of-origin](https://github.com/openebs/openebs/blob/HEAD/contribute/developer-certificate-of-origin).
- Please certify it by just adding a line to every git commit message. Any PR with Commits which does not have DCO Signoff will not be accepted:
+ Please certify it by just adding a line to every git commit message. Any PR with commits that do not have DCO Signoff will not be accepted:
```
Signed-off-by: Random J Developer <random@developer.example.org>
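For reference, the sign-off line described above is normally added by git itself rather than typed by hand. A minimal sketch (the commit message is a placeholder; the trailer is built from your configured `user.name` and `user.email`):

```sh
# Create a commit with the Signed-off-by trailer appended automatically
git commit -s -m "docs: fix typos in FAQ"

# Add a sign-off to the most recent commit if a PR fails the DCO check
git commit --amend --signoff --no-edit
```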
docs/main/concepts/architecture.md: 4 additions, 4 deletions
@@ -25,11 +25,11 @@ The data engines are at the core of OpenEBS and are responsible for performing t
The data engines are responsible for:
- Aggregating the capacity available in the block devices allocated to them and then carving out volumes for applications.
- - Provide standard system or network transport interfaces (NVMe) for connecting to local or remote volumes
- - Provide volume services like - synchronous replication, compression, encryption, maintaining snapshots, access to the incremental or full snapshots of data and so forth
- - Provide strong consistency while persisting the data to the underlying storage devices
+ - Provide standard system or network transport interfaces (NVMe) for connecting to local or remote volumes.
+ - Provide volume services like synchronous replication, compression, encryption, maintaining snapshots, access to the incremental or full snapshots of data, and so forth.
+ - Provide strong consistency while persisting the data to the underlying storage devices.
- OpenEBS follow a micro-services model to implement the data engine where the functionality is further decomposed into different layers, allowing for flexibility to interchange the layers and make data engines future-ready for changes coming in the application and data center technologies.
+ OpenEBS follows a micro-services model to implement the data engine where the functionality is further decomposed into different layers, allowing for flexibility to interchange the layers and make data engines future-ready for changes coming in the application and data center technologies.
The OpenEBS Data Engines comprise of the following layers:
docs/main/faqs/faqs.md: 10 additions, 10 deletions
@@ -95,9 +95,9 @@ env:
value: "openebs.io/rack"
```
- It is recommended is to label all the nodes with the same key, they can have different values for the given keys, but all keys should be present on all the worker node.
+ It is recommended to label all the nodes with the same key; they can have different values for the given keys, but all keys should be present on all the worker nodes.
- Once we have labeled the node, we can install the lvm driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the Local PV LVM CSI driver daemon sets (openebs-lvm-node).
+ Once we have labeled the node, we can install the LVM driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the Local PV LVM CSI driver daemon sets (openebs-lvm-node).
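As an illustration of the labeling step described above, the key from `ALLOWED_TOPOLOGIES` is applied to every worker node; the node names and rack values here are hypothetical:

```sh
# Every worker node gets the same key; the values may differ per node
kubectl label node worker-1 openebs.io/rack=rack1
kubectl label node worker-2 openebs.io/rack=rack2
kubectl label node worker-3 openebs.io/rack=rack2
```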
```sh
@@ -266,9 +266,9 @@ env:
- name: ALLOWED_TOPOLOGIES
value: "openebs.io/rack"
```
- It is recommended is to label all the nodes with the same key, they can have different values for the given keys, but all keys should be present on all the worker node.
+ It is recommended to label all the nodes with the same key; they can have different values for the given keys, but all keys should be present on all the worker nodes.
- Once we have labeled the node, we can install the zfs driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LocalPV ZFS CSI driver daemon sets (openebs-zfs-node).
+ Once we have labeled the node, we can install the ZFS driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LocalPV ZFS CSI driver daemon sets (openebs-zfs-node).
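A sketch of adding a topology key after the driver is already installed, assuming the legacy kube-system install shown in the command below; the edit opens the `openebs-zfs-node` daemonset so the new key can be appended to `ALLOWED_TOPOLOGIES`:

```sh
# Edit the Local PV ZFS node daemonset and extend the ALLOWED_TOPOLOGIES env value
kubectl edit daemonset openebs-zfs-node -n kube-system
```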
```sh
$ kubectl get pods -n kube-system -l role=openebs-zfs
@@ -334,7 +334,7 @@ If storageclass is using immediate binding mode and storageclass `allowedTopolog
[Go to top](#top)
- ### Why the ZFS volume size is different than the reqeusted size in PVC?
+ ### Why is the ZFS volume size different than the requested size in PVC?
:::note
The size will be rounded off to the nearest Mi or Gi unit. M/G notation uses 1000 base and Mi/Gi notation uses 1024 base, so 1M will be 1000 * 1000 byte and 1Mi will be 1024 * 1024.
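To make the base-1000 versus base-1024 difference concrete, a quick check with GNU coreutils (illustrative only):

```sh
# SI suffixes are decimal (base 1000), IEC suffixes are binary (base 1024)
numfmt --from=si 1M      # 1000000
numfmt --from=iec-i 1Mi  # 1048576
```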
@@ -392,7 +392,7 @@ PVC size as zero in not a valid capacity. The minimum allocatable size for the L
### How to migrate PVs to the new node in case old node is not accessible?
- The Local PV ZFS driver will set affinity on the PV to make the volume stick to the node so that pod gets scheduled to that node only where the volume is present. Now, the problem here is, when that node is not accesible due to some reason and we move the disks to a new node and import the pool there, the pods will not be scheduled to this node as k8s scheduler will be looking for that node only to schedule the pod.
+ The Local PV ZFS driver will set affinity on the PV to make the volume stick to the node so that the pod gets scheduled to that node only where the volume is present. Now, the problem here is when that node is not accessible due to some reason and we move the disks to a new node and import the pool there, the pods will not be scheduled to this node as the K8s scheduler will be looking for that node only to schedule the pod.
From release 1.7.0 of the Local PV ZFS, the driver has the ability to use the user defined affinity for creating the PV. While deploying the Local PV ZFS driver, first we should label all the nodes using the key `openebs.io/nodeid` with some unique value.
Now, the Driver will use `openebs.io/nodeid` as the key and the corresponding value to set the affinity on the PV and k8s scheduler will consider this affinity label while scheduling the pods.
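A sketch of that labeling step, with hypothetical node names and identifier values; `openebs.io/nodeid` is the key the driver reads, and the values only need to be stable and unique per node:

```sh
# Label each node with a unique identifier before deploying the driver
kubectl label node node-1 openebs.io/nodeid=custom-id-1
kubectl label node node-2 openebs.io/nodeid=custom-id-2

# A replacement node that receives node-1's disks can later take over its identifier
# (after the label has been removed from the failed node)
kubectl label node node-replacement openebs.io/nodeid=custom-id-1
```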
- When a node is not accesible, follow the steps below:
+ When a node is not accessible, follow the steps below:
1. Remove the old node from the cluster or we can just remove the above node label from the node which we want to remove.
2. Add a new node in the cluster
@@ -426,7 +426,7 @@ Once the above steps are done, the pod should be able to run on this new node wi
### How is data protected in Replicated Storage? What happens when a host, client workload, or a data center fails?
- The OpenEBS Replicated Storage (a.k.a Replicated Engine or Mayastor) ensures resilience with built-in highly available architecture. It supports on-demand switch over of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the congigured replication factor to ensure no single point of failure.
+ The OpenEBS Replicated Storage (a.k.a Replicated Engine or Mayastor) ensures resilience with built-in highly available architecture. It supports on-demand switchover of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the configured replication factor to ensure no single point of failure.
Faulted replicas are automatically rebuilt in the background without IO disruption to maintain the replication factor.
[Go to top](#top)
@@ -568,7 +568,7 @@ Replicated Storage, as any other solution leveraging TCP for network transport,
### Why do Replicated Storage's IO engine pods show high levels of CPU utilization when there is little or no I/O being processed?
- Replicated Storage has been designed so as to be able to leverage the peformance capabilities of contemporary high-end solid-state storage devices. A significant aspect of this is the selection of a polling based I/O service queue, rather than an interrupt driven one. This minimizes the latency introduced into the data path but at the cost of additional CPU utilization by the "reactor" - the poller operating at the heart of the Replicated Storage's IO engine pod. When the IO engine pod is deployed on a cluster, it is expected that these daemonset instances will make full utilization of their CPU allocation, even when there is no I/O load on the cluster. This is simply the poller continuing to operate at full speed, waiting for I/O. For the same reason, it is recommended that when configuring the CPU resource limits for the IO engine daemonset, only full, not fractional, CPU limits are set; fractional allocations will also incur additional latency, resulting in a reduction in overall performance potential. The extent to which this performance degradation is noticeable in practice will depend on the performance of the underlying storage in use, as well as whatvever other bottlenecks/constraints may be present in the system as cofigured.
+ Replicated Storage has been designed so as to be able to leverage the performance capabilities of contemporary high-end solid-state storage devices. A significant aspect of this is the selection of a polling-based I/O service queue, rather than an interrupt-driven one. This minimizes the latency introduced into the data path but at the cost of additional CPU utilization by the "reactor" - the poller operating at the heart of the Replicated Storage's IO engine pod. When the IO engine pod is deployed on a cluster, it is expected that these daemonset instances will make full utilization of their CPU allocation, even when there is no I/O load on the cluster. This is simply the poller continuing to operate at full speed, waiting for I/O. For the same reason, it is recommended that when configuring the CPU resource limits for the IO engine daemonset, only full, not fractional, CPU limits are set; fractional allocations will also incur additional latency, resulting in a reduction in overall performance potential. The extent to which this performance degradation is noticeable in practice will depend on the performance of the underlying storage in use, as well as whatever other bottlenecks/constraints may be present in the system as configured.
[Go to top](#top)
@@ -592,7 +592,7 @@ The PV garbage collector deploys a watcher component, which subscribes to the Ku
### How to disable cow for btrfs filesystem?
- To disbale cow for `btrfs` filesystem, use `nodatacow` as a mountOption in the storage class which would be used to provision the volume.
+ To disable cow for `btrfs` filesystem, use `nodatacow` as a mountOption in the storage class which would be used to provision the volume.
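A sketch of such a storage class, with a hypothetical name and a placeholder provisioner (match the provisioner and parameters to the Local PV driver you actually run); `mountOptions` is the standard StorageClass field that carries the flag:

```sh
# Illustrative only: a StorageClass that mounts btrfs volumes with nodatacow
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-btrfs-nodatacow     # hypothetical name
provisioner: local.csi.openebs.io   # placeholder; use your driver's provisioner
parameters:
  fsType: "btrfs"
mountOptions:
  - nodatacow
EOF
```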
docs/main/introduction-to-openebs/introduction-to-openebs.md: 2 additions, 2 deletions
@@ -28,7 +28,7 @@ The [OpenEBS Adoption stories](https://github.com/openebs/community/blob/develop
## What does OpenEBS do?
- OpenEBS manages the storage available on each of the Kubernetes nodes and uses that storage to provide [Local](#local-volumes) or[Replicated](#replicated-volumes) Persistent Volumes to Stateful workloads.
+ OpenEBS manages the storage available on each of the Kubernetes nodes and uses that storage to provide [Local](#local-volumes) or[Replicated](#replicated-volumes) Persistent Volumes to Stateful workloads.
@@ -74,7 +74,7 @@ Installing OpenEBS in your cluster is as simple as running a few `kubectl` or `h
## Community Support via Slack
- OpenEBS has a vibrant community that can help you get started. If you have further questions and want to learn more about OpenEBS, join the [OpenEBS community on Kubernetes Slack](https://kubernetes.slack.com). If you are already signed up, head to our discussions at[#openebs](https://kubernetes.slack.com/messages/openebs/) channel.
+ OpenEBS has a vibrant community that can help you get started. If you have further questions and want to learn more about OpenEBS, join the [OpenEBS community on Kubernetes Slack](https://kubernetes.slack.com). If you are already signed up, head to our discussions at[#openebs](https://kubernetes.slack.com/messages/openebs/) channel.
docs/main/stateful-applications/cassandra.md: 2 additions, 2 deletions
@@ -11,13 +11,13 @@ description: Instructions to run a Kudo operator based Cassandra StatefulSets wi
- This tutorial provides detailed instructions to run a Kudo operatorbased Cassandra StatefulSets with OpenEBS storage and perform some simple database operations to verify the successful deployment and it's performance benchmark.
+ This tutorial provides detailed instructions to run a Kudo operator-based Cassandra StatefulSets with OpenEBS storage and perform some simple database operations to verify the successful deployment and its performance benchmark.
## Introduction
Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle a large amounts of data across nodes, providing high availability with no single point of failure. It uses asynchronous masterless replication allowing low latency operations for all clients.
- OpenEBS is the most popular Open Source Container Attached Solution available for Kubernetes and is favored by many organizations for its simplicity and ease of management and it's highly flexible deployment options to meet the storage needs of any given stateful application.
+ OpenEBS is the most popular Open Source Container Attached Solution available for Kubernetes and is favored by many organizations for its simplicity and ease of management and its highly flexible deployment options to meet the storage needs of any given stateful application.
Depending on the performance and high availability requirements of Cassandra, you can select to run Cassandra with the following deployment options:
docs/main/stateful-applications/mongodb.md: 2 additions, 2 deletions
@@ -33,15 +33,15 @@ MongoDB is a cross-platform document-oriented database. Classified as a NoSQL da
1.**Install OpenEBS**
- If OpenEBS is not installed in your K8s cluster, this can done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
+ If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
2.**Configure cStor Pool**
After OpenEBS installation, cStor pool has to be configured. If cStor Pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools). During cStor Pool creation, make sure that the maxPools parameter is set to >=3. Sample YAML named **openebs-config.yaml** for configuring cStor Pool is provided in the Configuration details below. If cStor pool is already configured, go to the next step.
4.**Create Storage Class**
- You must configure a StorageClass to provision cStor volume on given cStor pool. StorageClass is the interface through which most of the OpenEBS storage policies are defined. In this solution we are using a StorageClass to consume the cStor Pool which is created using external disks attached on the Nodes. In this solution, MongoDB is installed as a Deployment. So it requires replication at the storage level. So cStor volume `replicaCount` is 3. Sample YAML named **openebs-sc-disk.yaml** to consume cStor pool with cStove volume replica count as 3 is provided in the configuration details below.
+ You must configure a StorageClass to provision cStor volume on given cStor pool. StorageClass is the interface through which most of the OpenEBS storage policies are defined. In this solution we are using a StorageClass to consume the cStor Pool which is created using external disks attached on the Nodes. In this solution, MongoDB is installed as a Deployment. So it requires replication at the storage level. So cStor volume `replicaCount` is 3. Sample YAML named **openebs-sc-disk.yaml** to consume cStor pool with cStor volume replica count as 3 is provided in the configuration details below.
docs/main/stateful-applications/mysql.md: 3 additions, 3 deletions
@@ -21,15 +21,15 @@ Use OpenEBS and MySQL containers to quickly launch an RDS like service, where da
[](../assets/mysql-deployment.svg)
- As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration work fine when the nodes (hence the cStor pool) is deployed across Kubernetes zones.
+ As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration works fine when the nodes (hence the cStor pool) are deployed across Kubernetes zones.
## Configuration workflow
1.**Install OpenEBS**
- If OpenEBS is not installed in your K8s cluster, this can done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
+ If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
- 2.**Configure cStor Pool** : After OpenEBS installation, cStor pool has to be configured. As MySQL is a deployment, it need high availability at storage level. OpenEBS cStor volume has to be configured with 3 replica. During cStor Pool creation, make sure that the maxPools parameter is set to >=3. If cStor Pool is already configured as required go to Step 4 to create MySQL StorageClass.
+ 2.**Configure cStor Pool** : After OpenEBS installation, cStor pool has to be configured. As MySQL is a deployment, it needs high availability at storage level. OpenEBS cStor volume has to be configured with 3 replica. During cStor Pool creation, make sure that the maxPools parameter is set to >=3. If cStor Pool is already configured as required go to Step 4 to create MySQL StorageClass.