:_mod-docs-content-type: ASSEMBLY
[id="updating-cluster-prepare-past-4.18"]
= Preparing to update from {product-title} 4.18 to a newer version
include::_attributes/common-attributes.adoc[]
:context: updating-cluster-prepare-past-4.18

toc::[]

Before you update from {product-title} 4.18 to a newer version, learn about some of the specific concerns around {op-system-base-full} compute machines.

[id="migrating-workloads-to-different-nodes_{context}"]
== Migrating workloads off of package-based {op-system-base} worker nodes

With the introduction of {product-title} 4.19, package-based {op-system-base} worker nodes are no longer supported. If you attempt to update your cluster while these nodes are still running, the update fails.

You can reschedule pods running on {op-system-base} compute nodes to run on your {op-system} nodes instead by using node selectors.

For example, the following `Node` object has a label for its operating system information, in this case {op-system}:

.Sample `Node` object with {op-system} label
[source,yaml,subs="+attributes"]
----
kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-131-14.ec2.internal
  selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal
  uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74
  resourceVersion: '478704'
  creationTimestamp: '2019-06-10T14:46:08Z'
  labels:
    kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/zone: us-east-1a
    node.openshift.io/os_version: '{product-version}'
    node-role.kubernetes.io/worker: ''
    failure-domain.beta.kubernetes.io/region: us-east-1
    node.openshift.io/os_id: rhcos <1>
    beta.kubernetes.io/instance-type: m4.large
    kubernetes.io/hostname: ip-10-0-131-14
    beta.kubernetes.io/arch: amd64
#...
----
<1> The label that identifies the operating system that runs on the node. This value must match the value in the pod `nodeSelector` field.
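
As an optional check before you move workloads, you can list the value of this label on all of your nodes. This is a general `oc get` usage example rather than a required step; the `-L` flag adds the label value as an extra column in the output:

[source,terminal]
----
$ oc get nodes -L node.openshift.io/os_id
----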

Any pod that you want to schedule on the new {op-system} nodes must contain a matching label in its `nodeSelector` field. The following procedure describes how to add the label.

.Procedure

. Mark the {op-system-base} node that is currently running your pods as unschedulable by entering the following command:
+
[source,terminal]
----
$ oc adm cordon <rhel-node>
----
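+
Optionally, confirm that the node is now marked as unschedulable. A cordoned node reports `SchedulingDisabled` in the `STATUS` column:
+
[source,terminal]
----
$ oc get node <rhel-node>
----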

. Add an `rhcos` node selector to a pod:

** To add the node selector to existing and future pods, add the node selector to the controller object for the pods by entering the following command:
+
.Example command adding the `rhcos` node selector to a `Deployment` object
[source,terminal]
----
$ oc patch deployment <my-app> -p '{"spec":{"template":{"spec":{"nodeSelector":{"node.openshift.io/os_id":"rhcos"}}}}}'
----
+
Any existing pods under your `Deployment` controlling object are re-created on your {op-system} nodes. You can verify where the pods are scheduled as shown at the end of this step.

** To add the node selector to a specific, new pod, add the selector to the `Pod` object directly:
+
.Example `Pod` object with `rhcos` label
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: <my-app>
#...
spec:
  nodeSelector:
    node.openshift.io/os_id: rhcos
#...
----
+
The new pod is created on {op-system} nodes, assuming the pod also has a controlling object.
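+
Optionally, you can check which nodes the rescheduled pods are running on by reviewing the `NODE` column of the pod list. This is a general check rather than a required step:
+
[source,terminal]
----
$ oc get pods -o wide
----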

[id="identifying-and-removing-rhel-worker-nodes_{context}"]
== Identifying and removing {op-system-base} worker nodes

With the introduction of {product-title} 4.19, package-based {op-system-base} worker nodes are no longer supported. The following procedure describes how to identify the {op-system-base} nodes that you must remove from the cluster on bare-metal installations. You must complete the following steps to successfully update your cluster.

.Procedure

. Identify nodes in your cluster that are running {op-system-base} by entering the following command:
+
[source,terminal]
----
$ oc get nodes -l node.openshift.io/os_id=rhel
----
+
.Example output
[source,text]
----
NAME                     STATUS   ROLES    AGE   VERSION
rhel-node1.example.com   Ready    worker   7h    v1.31.7
rhel-node2.example.com   Ready    worker   7h    v1.31.7
rhel-node3.example.com   Ready    worker   7h    v1.31.7
----

. xref:../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-deleting-bare-metal_nodes-nodes-working[Continue with the node removal process]. {op-system-base} nodes are not managed by the Machine API and have no compute machine sets associated with them. You must unschedule and drain the node before you manually delete it from the cluster.
+
For more information on this process, see link:https://access.redhat.com/solutions/4976801[How to remove a worker node from Red{nbsp}Hat {product-title} 4 UPI].
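+
For example, a minimal removal sequence for a single node, assuming `<rhel-node>` is one of the nodes returned in the previous step, looks like the following. Review the drain options against your workloads before you run these commands:
+
[source,terminal]
----
$ oc adm cordon <rhel-node>
$ oc adm drain <rhel-node> --ignore-daemonsets --delete-emptydir-data --force
$ oc delete node <rhel-node>
----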

[id="provision-new-rhcos-nodes_{context}"]
== Provisioning new {op-system} worker nodes

If you need additional compute nodes for your workloads, you can provision new ones either before or after you update your cluster. For more information, see the following xref:../../machine_management/index.adoc#overview-of-machine-management[machine management] documentation:

* xref:../../machine_management/manually-scaling-machineset.adoc#manually-scaling-machineset[Manually scaling a compute machine set]
* xref:../../machine_management/applying-autoscaling.adoc#applying-autoscaling[Applying autoscaling to an {product-title} cluster]
* xref:../../machine_management/user_infra/adding-compute-user-infra-general.adoc#adding-compute-user-infra-general[Adding compute machines to clusters with user-provisioned infrastructure manually]

For installer-provisioned infrastructure installations, automatic scaling adds {op-system} nodes by default. For user-provisioned infrastructure installations on bare metal platforms, you can manually xref:../../post_installation_configuration/node-tasks.adoc#post-install-config-adding-fcos-compute[add {op-system} compute nodes to your cluster].
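
For example, on clusters that use the Machine API, one way to add {op-system} compute capacity is to scale an existing compute machine set. The machine set name and replica count here are placeholders; adjust them for your environment:

[source,terminal]
----
$ oc scale machineset <machineset-name> --replicas=2 -n openshift-machine-api
----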