
Commit 2f187ff

Merge pull request #12994 from ahardin-rh/optimizing-storage
Added optimizing storage content
2 parents 21920c3 + 647ba08 commit 2f187ff

File tree

4 files changed: +223 −0 lines changed

_topic_map.yml

Lines changed: 7 additions & 0 deletions
@@ -120,6 +120,13 @@ Topics:
 - Name: Optimizing compute resources
   File: optimizing-compute-resources
 ---
+Name: Storage
+Dir: storage
+Distros: openshift-*
+Topics:
+- Name: Optimizing storage
+  File: optimizing-storage
+---
 Name: Operators
 Dir: operators
 Distros: openshift-*
modules/available-persistent-storage-options.adoc

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * storage/optimizing-storage.adoc

[id='available-persistent-storage-options_{context}']
= Available persistent storage options

Understand your persistent storage options so that you can optimize your
{product-title} environment.

.Available storage options
[cols="1,4,3",options="header"]
|===
| Storage type | Description | Examples

|Block
a|* Presented to the operating system (OS) as a block device
* Suitable for applications that need full control of storage and operate at a low level on files, bypassing the file system
* Also referred to as a Storage Area Network (SAN)
* Non-shareable, which means that only one client at a time can mount an endpoint of this type
| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV,{gluster-native}/{gluster-external} GlusterFS, Ceph RBD, OpenStack Cinder, AWS EBS, Azure Disk, GCE persistent disk, and VMware vSphere support dynamic persistent volume (PV) provisioning natively in {product-title}.], iSCSI, Fibre Channel, Ceph RBD, OpenStack Cinder, AWS EBS footnoteref:[dynamicPV], Dell/EMC Scale.IO, VMware vSphere Volume, GCE Persistent Disk footnoteref:[dynamicPV], Azure Disk

|File
a|* Presented to the OS as a file system export to be mounted
* Also referred to as Network Attached Storage (NAS)
* Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.
| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.], Azure File, Vendor NFS, Vendor GlusterFS footnoteref:[glusterfs,Vendor GlusterFS, Vendor S3, and Vendor Swift supportability and configurability may vary.], AWS EFS

| Object
a|* Accessible through a REST API endpoint
* Configurable for use in the {product-title} Registry
* Applications must build their drivers into the application and/or container.
| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], Ceph Object Storage (RADOS Gateway), OpenStack Swift, Aliyun OSS, AWS S3, Google Cloud Storage, Azure Blob Storage, Vendor S3 footnoteref:[glusterfs], Vendor Swift footnoteref:[glusterfs]
|===

You can use {gluster-native} GlusterFS (a hyperconverged or cluster-hosted
storage solution) or {gluster-external} GlusterFS (an externally hosted storage
solution) for block, file, and object storage for the {product-title} registry,
logging, and metrics.
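For illustration, the dynamic provisioning mentioned in the table's footnote is driven by a PersistentVolumeClaim bound to a StorageClass. A minimal sketch, assuming a cluster administrator has already created a hypothetical StorageClass named `glusterfs-storage`:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim                  # hypothetical claim name
spec:
  storageClassName: glusterfs-storage  # assumed StorageClass; any dynamic provisioner from the table works
  accessModes:
    - ReadWriteOnce                    # block-type back ends are non-shareable, so a single-writer mode fits
  resources:
    requests:
      storage: 5Gi
----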
modules/recommended-configurable-storage-technology.adoc

Lines changed: 148 additions & 0 deletions
@@ -0,0 +1,148 @@
// Module included in the following assemblies:
//
// * storage/optimizing-storage.adoc

:gluster: GlusterFS
:gluster-native: Containerized GlusterFS
:gluster-external: External GlusterFS
:gluster-install-link: https://docs.gluster.org/en/latest/Install-Guide/Overview/
:gluster-admin-link: https://docs.gluster.org/en/latest/Administrator%20Guide/overview/
:gluster-role-link: https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs
ifdef::openshift-enterprise[]
:gluster: Red Hat Gluster Storage
:gluster-native: converged mode
:gluster-external: independent mode
:gluster-install-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/installation_guide/
:gluster-admin-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/
:cns-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/
endif::[]

[id='recommended-configurable-storage-technology_{context}']
= Recommended configurable storage technology

The following table summarizes the recommended and configurable storage
technologies for the given {product-title} cluster application.

.Recommended and configurable storage technology
[options="header"]
|===
|Storage type |ROX footnoteref:[rox,ReadOnlyMany] |RWX footnoteref:[rwx,ReadWriteMany] |Registry |Scaled registry |Metrics |Logging |Apps

| Block
| Yes footnoteref:[disk,This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.]
| No
| Configurable
| Not configurable
| Recommended
| Recommended
| Recommended

| File
| Yes footnoteref:[disk]
| Yes
| Configurable
| Configurable
| Configurable footnoteref:[metrics-warning,For metrics, it is an anti-pattern to use any shared storage and a single volume (RWX). By default, metrics deploys with one volume per Cassandra replica.]
| Configurable footnoteref:[logging-warning,For logging, using any shared storage would be an anti-pattern. One volume per logging-es is required.]
| Recommended

| Object
| Yes
| Yes
| Recommended
| Recommended
| Not configurable
| Not configurable
| Not configurable footnoteref:[object,Object storage is not consumed through {product-title}'s PVs/persistent volume claims (PVCs). Apps must integrate with the object storage REST API.]
|===

[NOTE]
====
A scaled registry is an {product-title} registry where three or more pod replicas are running.
====
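The ROX and RWX columns map directly to the `accessModes` field of a PV or PVC. A minimal sketch, with hypothetical names, of a claim that requires RWX, which, per the table, rules out block storage:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany        # RWX: only the file and object rows show "Yes" for this mode
  resources:
    requests:
      storage: 10Gi
----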
== Specific application storage recommendations

[IMPORTANT]
====
Testing shows issues with using the NFS server on RHEL as a storage back end for
core services. This includes the OpenShift Container Registry and Quay, Cassandra
for metrics storage, and ElasticSearch for logging storage. Therefore, using NFS
to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues.
Contact the individual NFS implementation vendor for more information on any
testing that might have been completed against these OpenShift core components.
====

=== Registry

In a non-scaled/high-availability (HA) {product-title} registry cluster deployment:

* The preferred storage technology is object storage, followed by block storage. The
storage technology does not need to support RWX access mode.
* The storage technology must ensure read-after-write consistency. All NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS, which uses an object storage interface) is not
recommended for {product-title} registry cluster deployments with production workloads.
* While `hostPath` volumes are configurable for a non-scaled/HA {product-title} registry, they are not recommended for cluster deployment.

=== Scaled registry

In a scaled/HA {product-title} registry cluster deployment:

* The preferred storage technology is object storage. The storage technology must support RWX access mode and must ensure read-after-write consistency.
* File storage and block storage are not recommended for a scaled/HA {product-title} registry cluster deployment with production workloads.
* All NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS, which uses an object storage interface) is
not recommended for {product-title} registry cluster deployments with production workloads.
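As one way to meet the object storage recommendation, the integrated registry can be pointed at an S3 bucket through the `storage` section of its configuration (the docker/distribution driver format). A sketch, assuming a hypothetical bucket named `registry-bucket`; credentials and region are placeholders:

[source,yaml]
----
version: 0.1
storage:
  cache:
    blobdescriptor: inmemory
  s3:                        # docker/distribution S3 storage driver
    accesskey: <access_key>  # placeholder credential
    secretkey: <secret_key>  # placeholder credential
    region: us-east-1
    bucket: registry-bucket  # hypothetical bucket name
    encrypt: true
    secure: true
----

Because every registry replica talks to the same bucket over the REST API, no RWX file volume is needed, which is why object storage scales cleanly here.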
=== Metrics

In an {product-title} hosted metrics cluster deployment:

* The preferred storage technology is block storage.
* It is not recommended to use NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS, which uses a block storage interface over iSCSI) for a hosted metrics cluster deployment with production workloads.

[IMPORTANT]
====
Testing shows issues with using the NFS server on RHEL as a storage back end for
core services. This includes Cassandra for metrics storage.
Therefore, using NFS to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues.
Contact the individual NFS implementation vendor for more information on any
testing that might have been completed against these OpenShift core components.
====
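In OpenShift 3.x, one way to request block-backed, dynamically provisioned volumes for Cassandra is through openshift-ansible inventory variables. A sketch, assuming the `openshift_metrics_*` variables from the openshift-ansible metrics role; the size is a placeholder:

[source,ini]
----
[OSEv3:vars]
openshift_metrics_install_metrics=true
# Dynamically provision one PV per Cassandra replica
# (avoids the shared-volume anti-pattern noted in the table footnote)
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_size=20Gi
----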
=== Logging

In an {product-title} hosted logging cluster deployment:

* The preferred storage technology is block storage.
* It is not recommended to use NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS, which uses a block storage interface over iSCSI) for a hosted logging cluster deployment with production workloads.

[IMPORTANT]
====
Testing shows issues with using the NFS server on RHEL as a storage back end for
core services. This includes ElasticSearch for logging storage.
Therefore, using NFS to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues.
Contact the individual NFS implementation vendor for more information on any
testing that might have been completed against these OpenShift core components.
====
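The same installer-variable approach applies to logging. A sketch, assuming the `openshift_logging_*` variables from the openshift-ansible logging role; the size is a placeholder:

[source,ini]
----
[OSEv3:vars]
openshift_logging_install_logging=true
# Dynamically provision a dedicated PV per Elasticsearch node
# (one volume per logging-es, per the table footnote)
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=50Gi
----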
=== Applications

Application use cases vary from application to application, as described in the following examples:

* Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied
to nodes to support a healthy cluster.
* Application developers are responsible for knowing and understanding the storage
requirements for their application, and how it works with the provided storage
to ensure that issues do not occur when an application scales or interacts
with the storage layer.

=== Other specific application storage recommendations

* {product-title} internal *etcd*: For the best etcd reliability, the lowest consistent latency storage technology is preferable.
* OpenStack Cinder: OpenStack Cinder tends to be adept in ROX access mode use cases.
* Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage; a sketch of this pattern follows this list.
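A minimal sketch of the dedicated-block-storage pattern for databases: a StatefulSet with `volumeClaimTemplates`, so each replica gets its own block-backed PV rather than a shared volume. All names and the image are hypothetical:

[source,yaml]
----
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql                 # hypothetical name
spec:
  serviceName: postgresql
  replicas: 3
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: registry.example.com/postgresql:latest  # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/pgsql/data
  volumeClaimTemplates:            # one dedicated PVC, and therefore one PV, per replica
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]   # RWO is enough because no volume is shared
      resources:
        requests:
          storage: 20Gi
----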

storage/optimizing-storage.adoc

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
[id='optimizing-storage']
= Optimizing storage
include::modules/common-attributes.adoc[]
:gluster: GlusterFS
:gluster-native: Containerized GlusterFS
:gluster-external: External GlusterFS
:gluster-install-link: https://docs.gluster.org/en/latest/Install-Guide/Overview/
:gluster-admin-link: https://docs.gluster.org/en/latest/Administrator%20Guide/overview/
:gluster-role-link: https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs
ifdef::openshift-enterprise[]
:gluster: Red Hat Gluster Storage
:gluster-native: converged mode
:gluster-external: independent mode
:gluster-install-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/installation_guide/
:gluster-admin-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/
:cns-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/
endif::[]
:context: persistent-storage

toc::[]

Optimizing storage helps to minimize storage use across all resources. By
optimizing storage, administrators help ensure that existing storage resources
work efficiently.

include::modules/available-persistent-storage-options.adoc[leveloffset=+1]

include::modules/recommended-configurable-storage-technology.adoc[leveloffset=+1]
