Commit 93be00f

Merge pull request #86517 from lpettyjo/OSDOCS-12890
OSDOCS-12890 and 12891#GCP PD support for C3 and N4 instance types
2 parents 5362ed1 + 4bb91a5 commit 93be00f

5 files changed: +311 −3 lines

modules/persistent-storage-csi-drivers-supported.adoc

Lines changed: 7 additions & 2 deletions
@@ -41,7 +41,7 @@ endif::openshift-rosa,openshift-aro[]
|AWS EBS | ✅ | | ✅|
|AWS EFS | | | |
ifndef::openshift-rosa[]
-|Google Compute Platform (GCP) persistent disk (PD)| ✅| ✅ | ✅|
+|Google Compute Platform (GCP) persistent disk (PD)| ✅| ✅^[5]^ | ✅|
|GCP Filestore | ✅ | | ✅|
endif::openshift-rosa[]
ifndef::openshift-dedicated,openshift-rosa[]
@@ -85,6 +85,11 @@ ifndef::openshift-dedicated,openshift-rosa[]
* Azure File cloning and snapshot are Technology Preview features:

:FeatureName: Azure File CSI cloning and snapshot
-include::snippets/technology-preview.adoc[leveloffset=+1]
+include::snippets/technology-preview.adoc[leveloffset=+2]
+
+5.
+
+* Cloning is not supported on hyperdisk-balanced disks with storage pools.
--
endif::openshift-dedicated,openshift-rosa[]
modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc

:_mod-docs-content-type: CONCEPT
[id="persistent-storage-csi-gcp-hyperdisk-limitations_{context}"]
= C3 and N4 instance type limitations

The GCP PD CSI driver support for the C3 instance type for bare metal and the N4 machine series has the following limitations:

* Cloning volumes is not supported when using storage pools.

* For cloning or resizing, the original volume size of hyperdisk-balanced disks must be 6Gi or greater.

* The default storage class is `standard-csi`.
+
[IMPORTANT]
====
You must manually create a storage class.

For information about creating the storage class, see Step 2 in the section _Setting up hyperdisk-balanced disks_.
====

* Clusters with mixed virtual machines (VMs) that use different storage types, for example, N2 and N4, are not supported. This is because hyperdisk-balanced disks are not usable on most legacy VMs. Similarly, regular persistent disks are not usable on N4/C3 VMs.

* A GCP cluster with c3-standard-2, c3-standard-4, n4-standard-2, and n4-standard-4 nodes can erroneously exceed the maximum attachable disk number, which should be 16 (link:https://issues.redhat.com/browse/OCPBUGS-39258[JIRA link]).
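As a quick way to see the attach limit that the scheduler is actually using for a node, you can inspect the `CSINode` object for the GCP PD driver. This is a generic sketch, not part of the documented procedure; `<node_name>` is a placeholder:

[source,terminal]
----
$ oc get csinode <node_name> \
    -o jsonpath='{range .spec.drivers[?(@.name=="pd.csi.storage.gke.io")]}{.allocatable.count}{"\n"}{end}'
----

If the reported count exceeds 16 on the instance types listed above, you are likely hitting the bug tracked in OCPBUGS-39258.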
modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc

:_mod-docs-content-type: CONCEPT
[id="persistent-storage-csi-gcp-hyperdisk-storage-pools-overview_{context}"]
= Storage pools for hyperdisk-balanced disks overview

Hyperdisk storage pools can be used with Compute Engine for large-scale storage. A hyperdisk storage pool is a purchased collection of capacity, throughput, and IOPS, which you can then provision for your applications as needed. You can use hyperdisk storage pools to create and manage disks in pools and use the disks across multiple workloads. By managing disks in aggregate, you can save costs while achieving expected capacity and performance growth. By using only the storage that you need in hyperdisk storage pools, you reduce the complexity of forecasting capacity and reduce management overhead by going from managing hundreds of disks to managing a single storage pool.
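For context, the storage pool itself is created on the GCP side before cluster volumes can be placed in it. A hedged `gcloud` sketch follows; the pool name, zone, and provisioned values are illustrative (they match the `pool-us-east4-c` example used later in this document), and the exact flag names should be verified against the current `gcloud compute storage-pools create` reference:

[source,terminal]
----
$ gcloud compute storage-pools create pool-us-east4-c \
    --zone=us-east4-c \
    --storage-pool-type=hyperdisk-balanced \
    --provisioned-capacity=10TB \
    --provisioned-iops=10000 \
    --provisioned-throughput=1024
----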
modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc

Lines changed: 249 additions & 0 deletions
@@ -0,0 +1,249 @@
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc

:_mod-docs-content-type: PROCEDURE
[id="persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure_{context}"]
= Setting up hyperdisk-balanced disks

.Prerequisites
* Access to the cluster with administrative privileges

.Procedure
To set up hyperdisk-balanced disks:

ifndef::openshift-dedicated[]
. Create a GCP cluster with attached disks provisioned with hyperdisk-balanced disks.
endif::openshift-dedicated[]

ifndef::openshift-dedicated[]
. Create a storage class specifying the hyperdisk-balanced disks during installation:
endif::openshift-dedicated[]

ifndef::openshift-dedicated[]
.. Follow the procedure in the _Installing a cluster on GCP with customizations_ section.
+
For your `install-config.yaml` file, use the following example file:
+
.Example install-config YAML file
[source,yaml]
----
apiVersion: v1
metadata:
  name: ci-op-9976b7t2-8aa6b
sshKey: |
  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
baseDomain: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
platform:
  gcp:
    projectID: XXXXXXXXXXXXXXXXXXXXXX
    region: us-central1
controlPlane:
  architecture: amd64
  name: master
  platform:
    gcp:
      type: n4-standard-4 <1>
      osDisk:
        diskType: hyperdisk-balanced <2>
        diskSizeGB: 200
  replicas: 3
compute:
- architecture: amd64
  name: worker
  replicas: 3
  platform:
    gcp:
      type: n4-standard-4 <1>
      osDisk:
        diskType: hyperdisk-balanced <2>
----
<1> Specifies the node type as n4-standard-4.
<2> Specifies that the node has its root disk backed by the hyperdisk-balanced disk type. All nodes in the cluster should use the same disk type, either hyperdisk-balanced or pd-*.
+
[NOTE]
====
All nodes in the cluster must support hyperdisk-balanced volumes. Clusters with mixed node types are not supported, for example, N2 nodes and N4 nodes that use hyperdisk-balanced disks.
====
endif::openshift-dedicated[]

ifndef::openshift-dedicated[]
.. After step 3 in the _Incorporating the Cloud Credential Operator utility manifests_ section, copy the following manifests into the manifests directory created by the installation program:
+
* `cluster_csi_driver.yaml` - specifies opting out of the default storage class creation
* `storageclass.yaml` - creates a hyperdisk-specific storage class
+
.Example cluster CSI driver YAML file
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: "ClusterCSIDriver"
metadata:
  name: "pd.csi.storage.gke.io"
spec:
  logLevel: Normal
  managementState: Managed
  operatorLogLevel: Normal
  storageClassState: Unmanaged <1>
----
<1> Specifies disabling creation of the default {product-title} storage classes.
+
.Example storage class YAML file
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperdisk-sc <1>
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: pd.csi.storage.gke.io <2>
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  type: hyperdisk-balanced <3>
  replication-type: none
  provisioned-throughput-on-create: "140Mi" <4>
  provisioned-iops-on-create: "3000" <5>
  storage-pools: projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c <6>
allowedTopologies: <7>
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-east4-c
...
----
<1> Specify the name for your storage class. In this example, it is `hyperdisk-sc`.
<2> `pd.csi.storage.gke.io` specifies the GCP CSI provisioner.
<3> Specifies using hyperdisk-balanced disks.
<4> Specifies the throughput value in MiBps using the "Mi" qualifier. For example, if your required throughput is 250 MiBps, specify "250Mi". If you do not specify a value, the capacity is based upon the disk type default.
<5> Specifies the IOPS value without any qualifiers. For example, if you require 7,000 IOPS, specify "7000". If you do not specify a value, the capacity is based upon the disk type default.
<6> If using storage pools, specify a list of specific storage pools that you want to use in the format: `projects/PROJECT_ID/zones/ZONE/storagePools/STORAGE_POOL_NAME`.
<7> If using storage pools, set `allowedTopologies` to restrict the topology of provisioned volumes to where the storage pool exists. In this example, `us-east4-c`.
endif::openshift-dedicated[]

. Create a persistent volume claim (PVC) that uses the hyperdisk-specific storage class using the following example YAML file:
+
.Example PVC YAML file
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: hyperdisk-sc <1>
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2048Gi <2>
----
<1> References the storage pool-specific storage class. In this example, `hyperdisk-sc`.
<2> Target storage capacity of the hyperdisk-balanced volume. In this example, `2048Gi`.

. Create a deployment that uses the PVC that you just created. Using a deployment helps ensure that your application has access to the persistent storage even after pod restarts and rescheduling:

.. Ensure a node pool with the specified machine series is up and running before creating the deployment. Otherwise, the pod fails to schedule.

.. Use the following example YAML file to create the deployment:
+
.Example deployment YAML file
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeSelector:
        cloud.google.com/machine-family: n4 <1>
      containers:
      - name: postgres
        image: postgres:14-alpine
        args: [ "sleep", "3600" ]
        volumeMounts:
        - name: sdk-volume
          mountPath: /usr/share/data/
      volumes:
      - name: sdk-volume
        persistentVolumeClaim:
          claimName: my-pvc <2>
----
<1> Specifies the machine family. In this example, it is `n4`.
<2> Specifies the name of the PVC created in the preceding step. In this example, it is `my-pvc`.

.. Confirm that the deployment was successfully created by running the following command:
+
[source,terminal]
----
$ oc get deployment
----
+
.Example output
[source,terminal]
----
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
postgres   0/1     1            0           42s
----
+
It might take a few minutes for hyperdisk instances to complete provisioning and display a READY status.
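+
Instead of polling `oc get deployment`, you can wait for the rollout with the standard rollout status command. This is an optional convenience, not part of the original steps; the timeout value is illustrative:
+
[source,terminal]
----
$ oc rollout status deployment/postgres --timeout=10m
----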
.. Confirm that PVC `my-pvc` has been successfully bound to a persistent volume (PV) by running the following command:
+
[source,terminal]
----
$ oc get pvc my-pvc
----
+
.Example output
[source,terminal]
----
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
my-pvc   Bound    pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6   2Ti        RWO            hyperdisk-sc   <unset>                 2m24s
----

.. Confirm the expected configuration of your hyperdisk-balanced disk by running the following command:
+
[source,terminal]
----
$ gcloud compute disks list
----
+
.Example output
[source,terminal]
----
NAME                                     LOCATION       LOCATION_SCOPE  SIZE_GB  TYPE                STATUS
instance-20240914-173145-boot            us-central1-a  zone            150      pd-standard         READY
instance-20240914-173145-data-workspace  us-central1-a  zone            100      pd-balanced         READY
c4a-rhel-vm                              us-central1-a  zone            50       hyperdisk-balanced  READY <1>
----
<1> Hyperdisk-balanced disk.

.. If using storage pools, check that the volume is provisioned as specified in your storage class and PVC by running the following command:
+
[source,terminal]
----
$ gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
----
+
.Example output
[source,terminal]
----
NAME                                      STATUS  PROVISIONED_IOPS  PROVISIONED_THROUGHPUT  SIZE_GB
pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6  READY   3000              140                     2048
----

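To see how much of the pool's provisioned capacity, IOPS, and throughput remain after volumes are placed in it, a hedged follow-up sketch (the `describe` subcommand follows the same command group as `list-disks` above; verify it against the current `gcloud compute storage-pools` reference):

[source,terminal]
----
$ gcloud compute storage-pools describe pool-us-east4-c --zone=us-east4-c
----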
storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc

Lines changed: 21 additions & 1 deletion
@@ -18,7 +18,9 @@ To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage
* *GCP PD CSI Driver Operator*: By default, the Operator provides a storage class that you can use to create PVCs. You can disable this default storage class if desired (see xref:../../storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#persistent-storage-csi-sc-manage[Managing the default storage class]). You also have the option to create the GCP PD storage class as described in xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk].

* *GCP PD driver*: The driver enables you to create and mount GCP PD PVs.
+
The GCP PD CSI driver supports the C3 instance type for bare metal and the N4 machine series. The C3 instance type and the N4 machine series support hyperdisk-balanced disks.

ifndef::openshift-dedicated[]
[NOTE]
@@ -27,6 +29,23 @@ ifndef::openshift-dedicated[]
====
endif::openshift-dedicated[]

== C3 instance type for bare metal and N4 machine series

include::modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc[leveloffset=+2]

include::modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc[leveloffset=+2]

To set up storage pools, see xref:../../storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc#persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure_persistent-storage-csi-gcp-pd[Setting up hyperdisk-balanced disks].

include::modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc[leveloffset=+2]

ifndef::openshift-dedicated[]
[id="resources-for-gcp-c3-n4-instances"]
[role="_additional-resources"]
=== Additional resources
* xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installing-gcp-customizations[Installing a cluster on GCP with customizations]
endif::openshift-dedicated[]

include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]

include::modules/persistent-storage-csi-gcp-pd-storage-class-ref.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-gcp-pd-storage-class-ref.adoc[leveloffset=+1]
@@ -39,6 +58,7 @@ include::modules/persistent-storage-byok.adoc[leveloffset=+1]
For information about installing with user-managed encryption for GCP PD, see xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installation-configuration-parameters_installing-gcp-customizations[Installation configuration parameters].
endif::openshift-rosa,openshift-dedicated[]

[id="resources-for-gcp"]
[role="_additional-resources"]
== Additional resources
* xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk]
