// Module included in the following assemblies:
//
// * storage/optimizing-storage.adoc
:gluster: GlusterFS
:gluster-native: Containerized GlusterFS
:gluster-external: External GlusterFS
:gluster-install-link: https://docs.gluster.org/en/latest/Install-Guide/Overview/
:gluster-admin-link: https://docs.gluster.org/en/latest/Administrator%20Guide/overview/
:gluster-role-link: https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs
ifdef::openshift-enterprise[]
:gluster: Red Hat Gluster Storage
:gluster-native: converged mode
:gluster-external: independent mode
:gluster-install-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/installation_guide/
:gluster-admin-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/
:cns-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/
endif::[]
[id='recommended-configurable-storage-technology_{context}']
= Recommended configurable storage technology

The following table summarizes the recommended and configurable storage
technologies for the given {product-title} cluster application.

.Recommended and configurable storage technology
[options="header"]
|===
|Storage type |ROX footnoteref:[rox,ReadOnlyMany] |RWX footnoteref:[rwx,ReadWriteMany] |Registry |Scaled registry |Metrics |Logging |Apps

| Block
| Yes footnoteref:[disk,"This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk."]
| No
| Configurable
| Not configurable
| Recommended
| Recommended
| Recommended

| File
| Yes footnoteref:[disk]
| Yes
| Configurable
| Configurable
| Configurable footnoteref:[metrics-warning,"For metrics, it is an anti-pattern to use any shared storage and a single volume (RWX). By default, metrics deploys with one volume per Cassandra replica."]
| Configurable footnoteref:[logging-warning,"For logging, using any shared storage would be an anti-pattern. One volume per logging-es is required."]
| Recommended

| Object
| Yes
| Yes
| Recommended
| Recommended
| Not configurable
| Not configurable
| Not configurable footnoteref:[object,Object storage is not consumed through {product-title}'s PVs/persistent volume claims (PVCs). Apps must integrate with the object storage REST API.]
|===

[NOTE]
====
A scaled registry is an {product-title} registry where three or more pod replicas are running.
====
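
The ROX and RWX columns map to the access modes that a persistent volume claim (PVC) requests. As a minimal sketch, the following claim requests shared write access (`ReadWriteMany`), which, per the preceding table, is satisfied by file storage rather than block storage; the claim name, namespace, and size are placeholder values.

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data        # placeholder name
  namespace: my-project    # placeholder namespace
spec:
  accessModes:
  - ReadWriteMany          # RWX; per the table, provided by file storage, not block
  resources:
    requests:
      storage: 10Gi        # placeholder size
----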

== Specific application storage recommendations

[IMPORTANT]
====
Testing shows issues with using an NFS server on RHEL as a storage backend for
core services. This includes the container image registry (the OpenShift Container Registry
and Quay), Cassandra for metrics storage, and Elasticsearch for logging storage. Therefore,
using NFS to back PVs used by core services is not recommended.

Other NFS implementations in the marketplace might not have these issues.
Contact the individual NFS implementation vendor for more information on any
testing that might have been completed against these OpenShift core components.
====

=== Registry

In a non-scaled/non-high-availability (HA) {product-title} registry cluster deployment:

* The preferred storage technology is object storage, followed by block storage. The
storage technology does not need to support RWX access mode (see the example claim
after this list).
* The storage technology must ensure read-after-write consistency. All NAS storage
(excluding {gluster-native}/{gluster-external} GlusterFS as it uses an object storage interface) is not
recommended for a {product-title} registry cluster deployment with production workloads.
* While `hostPath` volumes are configurable for a non-scaled/non-HA {product-title} registry, they are not recommended for cluster deployment.

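As a sketch of the block storage case, the following claim requests only the `ReadWriteOnce` access mode, which is sufficient for a single-replica registry. The claim name and size are placeholders, and the claim still must be attached to the registry's deployment configuration.

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage   # placeholder name
spec:
  accessModes:
  - ReadWriteOnce          # RWX is not required for a non-scaled registry
  resources:
    requests:
      storage: 100Gi       # placeholder size
----
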
=== Scaled registry

In a scaled/HA {product-title} registry cluster deployment:

* The preferred storage technology is object storage (see the configuration sketch after this list). The storage technology must support RWX access mode and must ensure read-after-write consistency.
* File storage and block storage are not recommended for a scaled/HA {product-title} registry cluster deployment with production workloads.
* All NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses an object storage interface) is
not recommended for a {product-title} registry cluster deployment with production workloads.

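As a sketch of what object storage for a scaled registry can look like, the following abbreviated `storage` section of a registry configuration file uses the S3 driver. The bucket, region, and credential values are placeholders, and a complete configuration file contains additional sections that are omitted here.

[source,yaml]
----
storage:
  cache:
    blobdescriptor: inmemory
  s3:
    accesskey: <aws_access_key>   # placeholder credentials
    secretkey: <aws_secret_key>
    region: us-east-1             # placeholder region
    bucket: registry-bucket       # placeholder bucket name
    encrypt: true
    secure: true
  delete:
    enabled: true
----
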
=== Metrics

In an {product-title} hosted metrics cluster deployment:

* The preferred storage technology is block storage (see the storage class sketch at the end of this section).
* NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses a block storage interface from iSCSI) is not recommended for a hosted metrics cluster deployment with production workloads.

[IMPORTANT]
====
Testing shows issues with using an NFS server on RHEL as a storage backend for
core services. This includes Cassandra for metrics storage. Therefore, using NFS
to back PVs used by core services is not recommended.

Other NFS implementations in the marketplace might not have these issues.
Contact the individual NFS implementation vendor for more information on any
testing that might have been completed against these OpenShift core components.
====

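Because metrics deploys with one volume per Cassandra replica, dynamically provisioned block storage keeps each replica on its own dedicated volume. The following storage class is a sketch that assumes an AWS EBS environment; the class name is a placeholder, and other block storage provisioners follow the same pattern.

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: metrics-block                # placeholder name
provisioner: kubernetes.io/aws-ebs   # assumes AWS EBS; substitute the provisioner for your environment
parameters:
  type: gp2                          # general-purpose SSD volumes
reclaimPolicy: Delete
----
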
=== Logging

In an {product-title} hosted logging cluster deployment:

* The preferred storage technology is block storage (see the example claim at the end of this section).
* NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses a block storage interface from iSCSI) is not recommended for a hosted logging cluster deployment with production workloads.

[IMPORTANT]
====
Testing shows issues with using an NFS server on RHEL as a storage backend for
core services. This includes Elasticsearch for logging storage. Therefore, using NFS
to back PVs used by core services is not recommended.

Other NFS implementations in the marketplace might not have these issues.
Contact the individual NFS implementation vendor for more information on any
testing that might have been completed against these OpenShift core components.
====

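Because each logging-es Elasticsearch instance requires its own volume, each instance is backed by a dedicated block-backed claim rather than shared storage. The following claim is a sketch; the claim name, storage class name, and size are placeholders.

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0                # placeholder; one claim per Elasticsearch instance
spec:
  accessModes:
  - ReadWriteOnce                   # dedicated volume per instance, not shared storage
  storageClassName: block-storage   # placeholder; any block-backed storage class
  resources:
    requests:
      storage: 50Gi                 # placeholder size
----
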
=== Applications

Application use cases vary from application to application, as described in the following examples:

* To support a healthy cluster, use storage technologies that support dynamic PV
provisioning, have low mount-time latencies, and are not tied to a specific node
(see the example default storage class after this list).
* Application developers are responsible for knowing and understanding the storage
requirements for their application, and how it works with the provided storage
to ensure that issues do not occur when an application scales or interacts
with the storage layer.

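One way to make dynamic provisioning the default for application claims is to mark a cluster-appropriate block storage class as the default class, so that claims that omit `storageClassName` are still dynamically provisioned. The following is a sketch; the class name is a placeholder, and the provisioner must match the environment.

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                    # placeholder name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # claims without an explicit class use this one
provisioner: kubernetes.io/cinder   # placeholder; use the provisioner that matches the environment
----
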
=== Other specific application storage recommendations

* {product-title} internal *etcd*: For the best etcd reliability, the storage technology with the lowest consistent latency is preferable.
* OpenStack Cinder: OpenStack Cinder tends to be adept in ROX access mode use cases.
* Databases: Databases (RDBMSs, NoSQL DBs, and so on) tend to perform best with dedicated block storage.