
Commit f052861

Fix typos and linguistic errors in documentation

Signed-off-by: Sebastien Dionne <survivant00@gmail.com>

1 parent f7ea758 commit f052861

File tree: 9 files changed (+28 −28 lines)


CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ OpenEBS is an Apache 2.0 Licensed project and all your commits should be signed
 
 We use the Developer Certificate of Origin (DCO) as an additional safeguard for the OpenEBS project. This is a well established and widely used mechanism to assure contributors have confirmed their right to license their contribution under the project's license. Please read [developer-certificate-of-origin](https://github.com/openebs/openebs/blob/HEAD/contribute/developer-certificate-of-origin).
 
-Please certify it by just adding a line to every git commit message. Any PR with Commits which does not have DCO Signoff will not be accepted:
+Please certify it by just adding a line to every git commit message. Any PR with commits that do not have DCO Signoff will not be accepted:
 
 ```
 Signed-off-by: Random J Developer <random@developer.example.org>
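
For illustration, the sign-off line the changed sentence refers to can be appended automatically by git (the commit message here is just an example):

```sh
# -s appends "Signed-off-by: Your Name <your@email>" taken from your git config
git commit -s -m "Fix typos and linguistic errors in documentation"
```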

docs/main/concepts/architecture.md

Lines changed: 4 additions & 4 deletions
@@ -25,11 +25,11 @@ The data engines are at the core of OpenEBS and are responsible for performing t
 
 The data engines are responsible for:
 - Aggregating the capacity available in the block devices allocated to them and then carving out volumes for applications.
-- Provide standard system or network transport interfaces (NVMe) for connecting to local or remote volumes
-- Provide volume services like - synchronous replication, compression, encryption, maintaining snapshots, access to the incremental or full snapshots of data and so forth
-- Provide strong consistency while persisting the data to the underlying storage devices
+- Provide standard system or network transport interfaces (NVMe) for connecting to local or remote volumes.
+- Provide volume services like synchronous replication, compression, encryption, maintaining snapshots, access to the incremental or full snapshots of data, and so forth.
+- Provide strong consistency while persisting the data to the underlying storage devices.
 
-OpenEBS follow a micro-services model to implement the data engine where the functionality is further decomposed into different layers, allowing for flexibility to interchange the layers and make data engines future-ready for changes coming in the application and data center technologies.
+OpenEBS follows a micro-services model to implement the data engine where the functionality is further decomposed into different layers, allowing for flexibility to interchange the layers and make data engines future-ready for changes coming in the application and data center technologies.
 
 The OpenEBS Data Engines comprise of the following layers:

docs/main/faqs/faqs.md

Lines changed: 10 additions & 10 deletions
@@ -95,9 +95,9 @@ env:
   value: "openebs.io/rack"
 
 ```
-It is recommended is to label all the nodes with the same key, they can have different values for the given keys, but all keys should be present on all the worker node.
+It is recommended to label all the nodes with the same key; they can have different values for the given keys, but all keys should be present on all the worker nodes.
 
-Once we have labeled the node, we can install the lvm driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the Local PV LVM CSI driver daemon sets (openebs-lvm-node).
+Once we have labeled the node, we can install the LVM driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the Local PV LVM CSI driver daemon sets (openebs-lvm-node).
 
 ```sh
@@ -266,9 +266,9 @@ env:
 - name: ALLOWED_TOPOLOGIES
   value: "openebs.io/rack"
 ```
-It is recommended is to label all the nodes with the same key, they can have different values for the given keys, but all keys should be present on all the worker node.
+It is recommended to label all the nodes with the same key; they can have different values for the given keys, but all keys should be present on all the worker nodes.
 
-Once we have labeled the node, we can install the zfs driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LocalPV ZFS CSI driver daemon sets (openebs-zfs-node).
+Once we have labeled the node, we can install the ZFS driver. The driver will pick the keys from env "ALLOWED_TOPOLOGIES" and add that as the supported topology key. If the driver is already installed and you want to add a new topology information, you can edit the LocalPV ZFS CSI driver daemon sets (openebs-zfs-node).
 
 ```sh
 $ kubectl get pods -n kube-system -l role=openebs-zfs
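
For illustration of the labeling that the two hunks above describe, the commands might look like this (node names and rack values are hypothetical; the key must match the one set in `ALLOWED_TOPOLOGIES`):

```sh
# Every worker node gets the same key; the values may differ per node.
kubectl label node worker-1 openebs.io/rack=rack-1
kubectl label node worker-2 openebs.io/rack=rack-2
kubectl label node worker-3 openebs.io/rack=rack-2
```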
@@ -334,7 +334,7 @@ If storageclass is using immediate binding mode and storageclass `allowedTopolog
 
 [Go to top](#top)
 
-### Why the ZFS volume size is different than the reqeusted size in PVC?
+### Why is the ZFS volume size different than the requested size in PVC?
 
 :::note
 The size will be rounded off to the nearest Mi or Gi unit. M/G notation uses 1000 base and Mi/Gi notation uses 1024 base, so 1M will be 1000 * 1000 byte and 1Mi will be 1024 * 1024.
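
As a worked example of the note above: 1G (base 1000) is 1,000,000,000 bytes, while 1Gi (base 1024) is 1,073,741,824 bytes, so a 1G request is not a whole number of Gi and, per the note, is provisioned as 1Gi:

```sh
echo $((1000 * 1000 * 1000))   # 1000000000 bytes -> requested as 1G
echo $((1024 * 1024 * 1024))   # 1073741824 bytes -> provisioned as 1Gi
```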
@@ -392,7 +392,7 @@ PVC size as zero in not a valid capacity. The minimum allocatable size for the L
 
 ### How to migrate PVs to the new node in case old node is not accessible?
 
-The Local PV ZFS driver will set affinity on the PV to make the volume stick to the node so that pod gets scheduled to that node only where the volume is present. Now, the problem here is, when that node is not accesible due to some reason and we move the disks to a new node and import the pool there, the pods will not be scheduled to this node as k8s scheduler will be looking for that node only to schedule the pod.
+The Local PV ZFS driver will set affinity on the PV to make the volume stick to the node so that the pod gets scheduled to that node only where the volume is present. Now, the problem here is when that node is not accessible due to some reason and we move the disks to a new node and import the pool there, the pods will not be scheduled to this node as the K8s scheduler will be looking for that node only to schedule the pod.
 
 From release 1.7.0 of the Local PV ZFS, the driver has the ability to use the user defined affinity for creating the PV. While deploying the Local PV ZFS driver, first we should label all the nodes using the key `openebs.io/nodeid` with some unique value.
 ```
@@ -408,7 +408,7 @@ $ kubectl label node node-3 openebs.io/nodeid=custom-value-3
 
 Now, the Driver will use `openebs.io/nodeid` as the key and the corresponding value to set the affinity on the PV and k8s scheduler will consider this affinity label while scheduling the pods.
 
-When a node is not accesible, follow the steps below:
+When a node is not accessible, follow the steps below:
 
 1. Remove the old node from the cluster or we can just remove the above node label from the node which we want to remove.
 2. Add a new node in the cluster
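
A sketch of the relabeling involved in these steps (node names are hypothetical; `custom-value-3` reuses the value from the hunk header above):

```sh
# Step 1 (alternative): drop the label from the node being removed
kubectl label node node-3 openebs.io/nodeid-
# After adding the new node and importing the pool there, give it the same value
# the PV affinity was created with
kubectl label node node-3-replacement openebs.io/nodeid=custom-value-3
```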
@@ -426,7 +426,7 @@ Once the above steps are done, the pod should be able to run on this new node wi
 
 ### How is data protected in Replicated Storage? What happens when a host, client workload, or a data center fails?
 
-The OpenEBS Replicated Storage (a.k.a Replicated Engine or Mayastor) ensures resilience with built-in highly available architecture. It supports on-demand switch over of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the congigured replication factor to ensure no single point of failure.
+The OpenEBS Replicated Storage (a.k.a Replicated Engine or Mayastor) ensures resilience with built-in highly available architecture. It supports on-demand switchover of the NVMe controller to ensure IO continuity in case of host failure. The data is synchronously replicated as per the configured replication factor to ensure no single point of failure.
 Faulted replicas are automatically rebuilt in the background without IO disruption to maintain the replication factor.
 
 [Go to top](#top)
@@ -568,7 +568,7 @@ Replicated Storage, as any other solution leveraging TCP for network transport,
 
 ### Why do Replicated Storage's IO engine pods show high levels of CPU utilization when there is little or no I/O being processed?
 
-Replicated Storage has been designed so as to be able to leverage the peformance capabilities of contemporary high-end solid-state storage devices. A significant aspect of this is the selection of a polling based I/O service queue, rather than an interrupt driven one. This minimizes the latency introduced into the data path but at the cost of additional CPU utilization by the "reactor" - the poller operating at the heart of the Replicated Storage's IO engine pod. When the IO engine pod is deployed on a cluster, it is expected that these daemonset instances will make full utilization of their CPU allocation, even when there is no I/O load on the cluster. This is simply the poller continuing to operate at full speed, waiting for I/O. For the same reason, it is recommended that when configuring the CPU resource limits for the IO engine daemonset, only full, not fractional, CPU limits are set; fractional allocations will also incur additional latency, resulting in a reduction in overall performance potential. The extent to which this performance degradation is noticeable in practice will depend on the performance of the underlying storage in use, as well as whatvever other bottlenecks/constraints may be present in the system as cofigured.
+Replicated Storage has been designed so as to be able to leverage the performance capabilities of contemporary high-end solid-state storage devices. A significant aspect of this is the selection of a polling-based I/O service queue, rather than an interrupt-driven one. This minimizes the latency introduced into the data path but at the cost of additional CPU utilization by the "reactor" - the poller operating at the heart of the Replicated Storage's IO engine pod. When the IO engine pod is deployed on a cluster, it is expected that these daemonset instances will make full utilization of their CPU allocation, even when there is no I/O load on the cluster. This is simply the poller continuing to operate at full speed, waiting for I/O. For the same reason, it is recommended that when configuring the CPU resource limits for the IO engine daemonset, only full, not fractional, CPU limits are set; fractional allocations will also incur additional latency, resulting in a reduction in overall performance potential. The extent to which this performance degradation is noticeable in practice will depend on the performance of the underlying storage in use, as well as whatever other bottlenecks/constraints may be present in the system as configured.
 
 [Go to top](#top)
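
A minimal sketch of the "full, not fractional, CPU limits" guidance from the changed line; the namespace, daemonset, and container names here are assumptions, not taken from the commit:

```sh
# Strategic-merge patch setting whole-CPU limits on the IO engine container;
# fractional limits (e.g. "2500m") would add latency to the polling reactor.
kubectl -n openebs patch daemonset openebs-io-engine --patch '
spec:
  template:
    spec:
      containers:
      - name: io-engine
        resources:
          limits:
            cpu: "2"
          requests:
            cpu: "2"
'
```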
@@ -592,7 +592,7 @@ The PV garbage collector deploys a watcher component, which subscribes to the Ku
 
 ### How to disable cow for btrfs filesystem?
 
-To disbale cow for `btrfs` filesystem, use `nodatacow` as a mountOption in the storage class which would be used to provision the volume.
+To disable cow for `btrfs` filesystem, use `nodatacow` as a mountOption in the storage class which would be used to provision the volume.
 
 [Go to top](#top)
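
For illustration of the fixed line, a StorageClass carrying the `nodatacow` mount option might look like the sketch below (the name, provisioner, and parameters are placeholders, assuming a Local PV LVM setup):

```sh
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: btrfs-nodatacow
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
  fsType: "btrfs"
mountOptions:
  - nodatacow   # disables copy-on-write on the btrfs filesystem
EOF
```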

docs/main/glossary.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ title: Glossary of Terms
 keywords:
 - Community
 - OpenEBS community
-description: This section lists the abbreviations used thorughout the OpenEBS documentation
+description: This section lists the abbreviations used throughout the OpenEBS documentation
 ---
 
 | Abbreviations | Definition |

docs/main/introduction-to-openebs/introduction-to-openebs.md

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ The [OpenEBS Adoption stories](https://github.com/openebs/community/blob/develop
 
 ## What does OpenEBS do?
 
-OpenEBS manages the storage available on each of the Kubernetes nodes and uses that storage to provide [Local](#local-volumes) or[Replicated](#replicated-volumes) Persistent Volumes to Stateful workloads.
+OpenEBS manages the storage available on each of the Kubernetes nodes and uses that storage to provide [Local](#local-volumes) or [Replicated](#replicated-volumes) Persistent Volumes to Stateful workloads.
 
 ![data-engines-comparision](../assets/data-engines-comparision.svg)
 
@@ -74,7 +74,7 @@ Installing OpenEBS in your cluster is as simple as running a few `kubectl` or `h
 
 ## Community Support via Slack
 
-OpenEBS has a vibrant community that can help you get started. If you have further questions and want to learn more about OpenEBS, join the [OpenEBS community on Kubernetes Slack](https://kubernetes.slack.com). If you are already signed up, head to our discussions at[#openebs](https://kubernetes.slack.com/messages/openebs/) channel.
+OpenEBS has a vibrant community that can help you get started. If you have further questions and want to learn more about OpenEBS, join the [OpenEBS community on Kubernetes Slack](https://kubernetes.slack.com). If you are already signed up, head to our discussions at [#openebs](https://kubernetes.slack.com/messages/openebs/) channel.
 
 ## See Also

docs/main/stateful-applications/cassandra.md

Lines changed: 2 additions & 2 deletions
@@ -11,13 +11,13 @@ description: Instructions to run a Kudo operator based Cassandra StatefulSets wi
 
 ![OpenEBS and Cassandra](../assets/o-cassandra.png)
 
-This tutorial provides detailed instructions to run a Kudo operator based Cassandra StatefulSets with OpenEBS storage and perform some simple database operations to verify the successful deployment and it's performance benchmark.
+This tutorial provides detailed instructions to run a Kudo operator-based Cassandra StatefulSets with OpenEBS storage and perform some simple database operations to verify the successful deployment and its performance benchmark.
 
 ## Introduction
 
 Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle a large amounts of data across nodes, providing high availability with no single point of failure. It uses asynchronous masterless replication allowing low latency operations for all clients.
 
-OpenEBS is the most popular Open Source Container Attached Solution available for Kubernetes and is favored by many organizations for its simplicity and ease of management and it's highly flexible deployment options to meet the storage needs of any given stateful application.
+OpenEBS is the most popular Open Source Container Attached Solution available for Kubernetes and is favored by many organizations for its simplicity and ease of management and its highly flexible deployment options to meet the storage needs of any given stateful application.
 
 Depending on the performance and high availability requirements of Cassandra, you can select to run Cassandra with the following deployment options:

docs/main/stateful-applications/mongodb.md

Lines changed: 2 additions & 2 deletions
@@ -33,15 +33,15 @@ MongoDB is a cross-platform document-oriented database. Classified as a NoSQL da
 
 1. **Install OpenEBS**
 
-   If OpenEBS is not installed in your K8s cluster, this can done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
+   If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
 
 2. **Configure cStor Pool**
 
    After OpenEBS installation, cStor pool has to be configured. If cStor Pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools). During cStor Pool creation, make sure that the maxPools parameter is set to >=3. Sample YAML named **openebs-config.yaml** for configuring cStor Pool is provided in the Configuration details below. If cStor pool is already configured, go to the next step.
 
 4. **Create Storage Class**
 
-   You must configure a StorageClass to provision cStor volume on given cStor pool. StorageClass is the interface through which most of the OpenEBS storage policies are defined. In this solution we are using a StorageClass to consume the cStor Pool which is created using external disks attached on the Nodes. In this solution, MongoDB is installed as a Deployment. So it requires replication at the storage level. So cStor volume `replicaCount` is 3. Sample YAML named **openebs-sc-disk.yaml** to consume cStor pool with cStove volume replica count as 3 is provided in the configuration details below.
+   You must configure a StorageClass to provision cStor volume on given cStor pool. StorageClass is the interface through which most of the OpenEBS storage policies are defined. In this solution we are using a StorageClass to consume the cStor Pool which is created using external disks attached on the Nodes. In this solution, MongoDB is installed as a Deployment. So it requires replication at the storage level. So cStor volume `replicaCount` is 3. Sample YAML named **openebs-sc-disk.yaml** to consume cStor pool with cStor volume replica count as 3 is provided in the configuration details below.
 
 5. **Launch and test MongoDB**
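
A sketch of the **openebs-sc-disk.yaml** that step 4 refers to, in the legacy SPC-based cStor format (the StoragePoolClaim name is a placeholder):

```sh
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-disk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
EOF
```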

docs/main/stateful-applications/mysql.md

Lines changed: 3 additions & 3 deletions
@@ -21,15 +21,15 @@ Use OpenEBS and MySQL containers to quickly launch an RDS like service, where da
 
 [![OpenEBS and Percona](../assets/mysql-deployment.svg)](../assets/mysql-deployment.svg)
 
-As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration work fine when the nodes (hence the cStor pool) is deployed across Kubernetes zones.
+As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration works fine when the nodes (hence the cStor pool) are deployed across Kubernetes zones.
 
 ## Configuration workflow
 
 1. **Install OpenEBS**
 
-   If OpenEBS is not installed in your K8s cluster, this can done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
+   If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.
 
-2. **Configure cStor Pool** : After OpenEBS installation, cStor pool has to be configured. As MySQL is a deployment, it need high availability at storage level. OpenEBS cStor volume has to be configured with 3 replica. During cStor Pool creation, make sure that the maxPools parameter is set to >=3. If cStor Pool is already configured as required go to Step 4 to create MySQL StorageClass.
+2. **Configure cStor Pool** : After OpenEBS installation, cStor pool has to be configured. As MySQL is a deployment, it needs high availability at storage level. OpenEBS cStor volume has to be configured with 3 replica. During cStor Pool creation, make sure that the maxPools parameter is set to >=3. If cStor Pool is already configured as required go to Step 4 to create MySQL StorageClass.
 
 4. **Create Storage Class**
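
A sketch of the legacy StoragePoolClaim that step 2's `maxPools` guidance refers to (names and poolType are placeholders for the legacy SPC format):

```sh
kubectl apply -f - <<EOF
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk
spec:
  name: cstor-disk
  type: disk
  maxPools: 3        # >= 3 so the three volume replicas land on distinct pools
  poolSpec:
    poolType: striped
EOF
```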

docs/main/troubleshooting/troubleshooting-local-storage.md

Lines changed: 3 additions & 3 deletions
@@ -179,7 +179,7 @@ Setup the cluster using RKE with openSUSE CaaS MicroOS using CNI Plugin Cilium.
 
 **Troubleshooting**
 
-Check journalctl logs of each nodes and check if similar logs are observed. In the following log snippets, showing the corresponding logs of 3 nodes.
+Check journalctl logs of each node and check if similar logs are observed. In the following log snippets, showing the corresponding logs of 3 nodes.
 
 Node1:
 
@@ -234,7 +234,7 @@ There are 2 possible solutions.
 
 Approach1:
 
-Do the following on each nodes to stop the transactional update.
+Do the following on each node to stop the transactional update.
 
 ```
 systemctl disable --now rebootmgr.service
@@ -245,7 +245,7 @@ This is the preferred approach.
 
 Approach2:
 
-Set the reboot timer schedule at different time i.e. staggered at various interval of the day, so that only one nodes get rebooted at a time.
+Set the reboot timer schedule at different times i.e. staggered at various intervals of the day, so that only one node gets rebooted at a time.
 
 ### How to fetch the OpenEBS Dynamic Local Provisioner logs?
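
A sketch of Approach2, assuming rebootmgr's `rebootmgrctl` CLI is used to stagger the maintenance windows (times and durations are hypothetical; each command runs on its respective node):

```sh
# on node-1: reboot window starts Thursday 02:00, one hour long
rebootmgrctl set-window "Thu *-*-* 02:00" 1h
# on node-2: reboot window starts Thursday 04:00, one hour long
rebootmgrctl set-window "Thu *-*-* 04:00" 1h
```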
