Commit d6d791e

Merge pull request #13994 from huffmanca/OSDOCS-318-2
OSDOCS-318: Following up on peer review changes.
2 parents 2b0fad4 + 290d7c9 commit d6d791e

8 files changed (+49, -47 lines)

_topic_map.yml

Lines changed: 1 addition & 1 deletion
@@ -298,7 +298,7 @@ Name: Storage
 Dir: storage
 Distros: openshift-*
 Topics:
-- Name: Persistent storage
+- Name: Understanding persistent storage
   File: understanding-persistent-storage
 - Name: Configuring persistent storage
   Dir: persistent-storage

modules/storage-persistent-storage-block-volume-examples.adoc

Lines changed: 1 addition & 1 deletion
@@ -135,5 +135,5 @@ match the name of the PVC as expected.

 [IMPORTANT]
 ====
-Unspecified values result in the default value of *Filesystem*.
+Unspecified values result in the default value of `Filesystem`.
 ====

modules/storage-persistent-storage-block-volume.adoc

Lines changed: 3 additions & 3 deletions
@@ -12,10 +12,10 @@ You can statically provision raw block volumes by including API fields
 in your PV and PVC specifications. This functionality is only available for
 manually provisioned PVs.

-To use block volume, you must first enable the `BlockVolume` feature gate.
-To enable the feature gates for master(s), add `feature-gates` to
+To use a block volume, you must first enable the `BlockVolume` feature
+gate. To enable the feature gates for master(s), add `feature-gates` to
 `apiServerArguments` and `controllerArguments`. To enable the feature
-gates fornode(s), add `feature-gates` to `kubeletArguments`. For example:
+gates for node(s), add `feature-gates` to `kubeletArguments`. For example:

 ----
 kubeletArguments:
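The example in this hunk is truncated after `kubeletArguments:`. In the OpenShift 3.x node and master configuration format, the stanza typically continues along these lines (a sketch; only the `BlockVolume` gate name comes from the module, and the file names are assumptions):

```yaml
# Sketch of a node-config.yaml fragment (file name assumed): enables the
# BlockVolume feature gate for the kubelet, per the module's instructions.
kubeletArguments:
  feature-gates:
  - BlockVolume=true

# Corresponding master-config.yaml fragment (file name assumed): the same
# gate added to apiServerArguments and controllerArguments.
apiServerArguments:
  feature-gates:
  - BlockVolume=true
controllerArguments:
  feature-gates:
  - BlockVolume=true
```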

modules/storage-persistent-storage-lifecycle.adoc

Lines changed: 13 additions & 11 deletions
@@ -34,23 +34,23 @@ with manually provisioned PVs. To minimize the excess, {product-title}
 binds to the smallest PV that matches all other criteria.

 Claims remain unbound indefinitely if a matching volume does not exist or
-cannot be created with any available provisioner servicing a storage
+can not be created with any available provisioner servicing a storage
 class. Claims are bound as matching volumes become available. For example,
 a cluster with many manually provisioned 50Gi volumes would not match a
 PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the
 cluster.

 [[using]]
-== Use pods and claimed PVs
+== Use Pods and claimed PVs

 Pods use claims as volumes. The cluster inspects the claim to find the bound
-volume and mounts that volume for a pod. For those volumes that support
+volume and mounts that volume for a Pod. For those volumes that support
 multiple access modes, you must specify which mode applies when you use
-the claim as a volume in a pod.
+the claim as a volume in a Pod.

 Once you have a claim and that claim is bound, the bound PV belongs to you
-for as long as you need it. You can schedule pods and access claimed
-PVs by including `persistentVolumeClaim` in the pod's volumes block.
+for as long as you need it. You can schedule Pods and access claimed
+PVs by including `persistentVolumeClaim` in the Pod's volumes block.

 ifdef::openshift-origin,openshift-enterprise[]

@@ -77,11 +77,13 @@ The reclaim policy of a `PersistentVolume` tells the cluster what to do with
 the volume after it is released. Volumes reclaim policy can either be
 `Retain`, `Recycle`, or `Delete`.

-`Retain` reclaim policy allows manual reclamation of the resource for
-those volume plug-ins that support it. `Delete` reclaim policy deletes
-both the `PersistentVolume` object from {product-title} and the associated
-storage asset in external infrastructure, such as AWS EBS, GCE PD, or
-Cinder volume.
+* `Retain` reclaim policy allows manual reclamation of the resource for
+those volume plug-ins that support it.
+* `Recycle` reclaim policy recycles the volume back into the pool of
+unbound persistent volumes once it is released from its claim.
+* `Delete` reclaim policy deletes both the `PersistentVolume` object
+from {product-title} and the associated storage asset in external
+infrastructure, such as AWS EBS, GCE PD, or Cinder volume.

 [NOTE]
 ====
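The reclaim policy in the bulleted list above is set per PV via the `persistentVolumeReclaimPolicy` field. A minimal sketch of where the field lives (hypothetical NFS-backed PV; all names and values are illustrative):

```yaml
# Hypothetical PV manifest showing the reclaim policy field.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001            # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Retain, Recycle, or Delete
  nfs:                                    # illustrative backing store
    path: /exports/pv0001
    server: 172.17.0.2
```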

modules/storage-persistent-storage-overview.adoc

Lines changed: 8 additions & 8 deletions
@@ -3,7 +3,7 @@
 // storage/understanding-persistent-storage.adoc[leveloffset=+1]

 [id=persistent-storage-overview-{context}]
-= Persistent Storage Overview
+= Persistent storage overview

 Managing storage is a distinct problem from managing compute resources.
 {product-title} uses the Kubernetes persistent volume (PV) framework to
@@ -14,15 +14,15 @@ without having specific knowledge of the underlying storage infrastructure.
 PVCs are specific to a project, and are created and used by developers as
 a means to use a PV. PV resources on their own are not scoped to any
 single project; they can be shared across the entire {product-title}
-cluster and claimed from any project. After a PV is bound to a PVC
-that PV cannot then be bound to additional PVCs. This has the effect of
+cluster and claimed from any project. After a PV is bound to a PVC,
+that PV can not then be bound to additional PVCs. This has the effect of
 scoping a bound PV to a single namespace, that of the binding project.

 PVs are defined by a `PersistentVolume` API object, which represents a
 piece of existing, networked storage in the cluster that was provisioned
 by the cluster administrator. It is a resource in the cluster just like a
 node is a cluster resource. PVs are volume plug-ins like `Volumes` but
-have a lifecycle that is independent of any individual pod that uses the
+have a lifecycle that is independent of any individual Pod that uses the
 PV. PV objects capture the details of the implementation of the storage,
 be that NFS, iSCSI, or a cloud-provider-specific storage system.

@@ -33,8 +33,8 @@ storage provider.
 ====

 PVCs are defined by a `PersistentVolumeClaim` API object, which represents a
-request for storage by a developer. It is similar to a pod in that pods
-consume node resources and PVCs consume PV resources. For example, pods
-can request specific levels of resources (e.g., CPU and memory), while
+request for storage by a developer. It is similar to a Pod in that Pods
+consume node resources and PVCs consume PV resources. For example, Pods
+can request specific levels of resources, such as CPU and memory, while
 PVCs can request specific storage capacity and access modes. For example,
-they can be mounted once read/write or many times read-only.
+they can be mounted once read-write or many times read-only.

modules/storage-persistent-storage-pv.adoc

Lines changed: 15 additions & 15 deletions
@@ -3,7 +3,7 @@
 // * storage/understanding-persistent-storage.adoc

 [id='persistent-volumes-{context}']
-= Persistent Volumes
+= Persistent volumes

 Each PV contains a `spec` and `status`, which is the specification and
 status of the volume, for example:
@@ -25,7 +25,7 @@ spec:
 status:
 ...
 ----
-<1> Name of the persistent volume
+<1> Name of the persistent volume.
 <2> The amount of storage available to the volume.
 <3> The access mode, defining the read-write and mount permissions.
 <4> The reclaim policy, indicating how the resource should be handled
@@ -65,7 +65,7 @@ requested. Future attributes may include IOPS, throughput, and so on.
 A `PersistentVolume` can be mounted on a host in any way supported by the
 resource provider. Providers have different capabilities and each PV's
 access modes are set to the specific modes supported by that particular
-volume. For example, NFS can support multiple read/write clients, but a
+volume. For example, NFS can support multiple read-write clients, but a
 specific NFS PV might be exported on the server as read-only. Each PV gets
 its own set of access modes describing that specific PV's capabilities.

@@ -98,7 +98,7 @@ The following table lists the access modes:
 |The volume can be mounted as read-write by a single node.
 |ReadOnlyMany
 |`ROX`
-|The volume can be mounted read-only by many nodes.
+|The volume can be mounted as read-only by many nodes.
 |ReadWriteMany
 |`RWX`
 |The volume can be mounted as read-write by many nodes.
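A claim requests one of the access modes in the table above through `spec.accessModes`. A minimal hypothetical PVC sketch (names illustrative; the 100Gi size echoes the binding example used elsewhere in these modules):

```yaml
# Hypothetical PVC requesting 100Gi with RWO (ReadWriteOnce) access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim           # illustrative name
spec:
  accessModes:
  - ReadWriteOnce         # or ReadOnlyMany / ReadWriteMany, per the table
  resources:
    requests:
      storage: 100Gi
```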
@@ -118,7 +118,7 @@ iSCSI and Fibre Channel volumes do not currently have any fencing
 mechanisms. You must ensure the volumes are only used by one node at a
 time. In certain situations, such as draining a node, the volumes can be
 used simultaneously by two nodes. Before draining the node, first ensure
-the pods that use these volumes are deleted.
+the Pods that use these volumes are deleted.
 ====

 .Supported access modes for PVs
@@ -142,7 +142,7 @@ the pods that use these volumes are deleted.

 [NOTE]
 ====
-Use a recreate deployment strategy for pods that rely on AWS EBS.
+Use a recreate deployment strategy for Pods that rely on AWS EBS.
 // GCE Persistent Disks, or Openstack Cinder PVs.
 ====

@@ -158,28 +158,28 @@ ifdef::openshift-dedicated[]
 * PVs are provisioned with either EBS volumes (AWS) or GCP storage (GCP),
 depending on where the cluster is provisioned.
 * Only RWO access mode is applicable, as EBS volumes and GCE Persistent
-Disks cannot be mounted to multiple nodes.
-* *emptyDir* has the same lifecycle as the pod:
+Disks can not be mounted to multiple nodes.
+* *emptyDir* has the same lifecycle as the Pod:
 ** *emptyDir* volumes survive container crashes/restarts.
-** *emptyDir* volumes are deleted when the pod is deleted.
+** *emptyDir* volumes are deleted when the Pod is deleted.
 endif::[]

 ifdef::openshift-online[]
 * PVs are provisioned with EBS volumes (AWS).
 * Only RWO access mode is applicable, as EBS volumes and GCE Persistent
-Disks cannot be mounted to multiple nodes.
+Disks can not be mounted to multiple nodes.
 * Docker volumes are disabled.
 ** VOLUME directive without a mapped external volume fails to be
 instantiated
 .
 * *emptyDir* is restricted to 512 Mi per project (group) per node.
-** A single pod for a project on a particular node can use up to 512 Mi
+** A single Pod for a project on a particular node can use up to 512 Mi
 of *emptyDir* storage.
-** Multiple pods for a project on a particular node share the 512 Mi of
+** Multiple Pods for a project on a particular node share the 512 Mi of
 *emptyDir* storage.
-* *emptyDir* has the same lifecycle as the pod:
+* *emptyDir* has the same lifecycle as the Pod:
 ** *emptyDir* volumes survive container crashes/restarts.
-** *emptyDir* volumes are deleted when the pod is deleted.
+** *emptyDir* volumes are deleted when the Pod is deleted.
 endif::[]

 [[pv-reclaim-policy]]
@@ -201,7 +201,7 @@ The following table lists the current reclaim policy:

 [WARNING]
 ====
-If you do not want to retain all pods, use dynamic provisioning.
+If you do not want to retain all Pods, use dynamic provisioning.
 ====

 [[pv-phase]]

modules/storage-persistent-storage-pvc.adoc

Lines changed: 7 additions & 7 deletions
@@ -3,7 +3,7 @@
 // * storage/understanding-persistent-storage.adoc

 [id='persistent-volume-claims-{context}']
-= Persistent Volume Claims
+= Persistent volume claims

 Each persistent volume claim (PVC) contains a `spec` and `status`, which
 is the specification and status of the claim, for example:
@@ -55,19 +55,19 @@ specific access modes.
 [[pvc-resources]]
 == Resources

-Claims, such as pods, can request specific quantities of a resource. In
+Claims, such as Pods, can request specific quantities of a resource. In
 this case, the request is for storage. The same resource model applies to
 volumes and claims.

 [[pvc-claims-as-volumes]]
 == Claims as volumes

 Pods access storage by using the claim as a volume. Claims must exist in the
-same namespace as the pod by using the claim. The cluster finds the claim
-in the pod's namespace and uses it to get the `PersistentVolume` backing
-the claim. The volume is mounted to the host and into the pod, for example:
+same namespace as the Pod by using the claim. The cluster finds the claim
+in the Pod's namespace and uses it to get the `PersistentVolume` backing
+the claim. The volume is mounted to the host and into the Pod, for example:

-.Mount volume to the host and into the pod example
+.Mount volume to the host and into the Pod example
 [source,yaml]
 ----
 kind: Pod
@@ -86,6 +86,6 @@ spec:
     persistentVolumeClaim:
       claimName: myclaim <3>
 ----
-<1> Path to mount the volume inside the pod
+<1> Path to mount the volume inside the Pod
 <2> Name of the volume to mount
 <3> Name of the PVC, that exists in the same namespace, to use
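The two fragments of the Pod example shown in this file's hunks can be joined into a full manifest roughly as follows (a sketch; `myclaim` comes from the diff, while the Pod name, container image, volume name, and mount path are hypothetical):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod                      # hypothetical name
spec:
  containers:
  - name: myfrontend               # hypothetical name
    image: nginx                   # hypothetical image
    volumeMounts:
    - mountPath: "/var/www/html"   # <1> path to mount inside the Pod (hypothetical)
      name: mypd                   # <2> name of the volume to mount (hypothetical)
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim           # <3> PVC in the same namespace (from the diff)
```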

storage/understanding-persistent-storage.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 [id='understanding-persistent-storage']
-= Understanding Persistent Storage
+= Understanding persistent storage
 include::modules/common-attributes.adoc[]
 :context: understanding-persistent-storage
 toc::[]
