OSDOCS-15521: Fix discrete headings #96685

3 changes: 3 additions & 0 deletions applications/deployments/what-deployments-are.adoc
@@ -51,6 +51,9 @@ include::modules/deployments-kube-deployments.adoc[leveloffset=+1]
include::modules/deployments-deploymentconfigs.adoc[leveloffset=+1]

include::modules/deployments-comparing-deploymentconfigs.adoc[leveloffset=+1]
include::modules/deployment-specific-features.adoc[leveloffset=+2]
include::modules/deploymentconfig-object-specific-features.adoc[leveloffset=+2]

////
Update when converted:
[role="_additional-resources"]
3 changes: 2 additions & 1 deletion applications/pruning-objects.adoc
@@ -39,6 +39,7 @@ include::modules/pruning-images.adoc[leveloffset=+1]
ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
include::modules/pruning-images-manual.adoc[leveloffset=+1]
include::modules/pruning-images-troubleshooting.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
@@ -50,7 +51,7 @@ Registry Operator in {product-title}] for information on how to create a
registry route.
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
// cannot patch resource "configs"
// cannot patch resource "configs"
ifndef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
include::modules/pruning-hard-pruning-registry.adoc[leveloffset=+1]
26 changes: 26 additions & 0 deletions modules/deployment-specific-features.adoc
@@ -0,0 +1,26 @@
// Module included in the following assemblies:
//
// * applications/deployments/what-deployments-are.adoc

:_mod-docs-content-type: CONCEPT
[id="deployment-specific-features_{context}"]
= Deployment-specific features

[id="deployment-specific-features-rollover_{context}"]
== Rollover

The deployment process for `Deployment` objects is driven by a controller loop, in contrast to `DeploymentConfig` objects that use deployer pods for every new rollout. This means that a `Deployment` object can have multiple active replica sets at a time, and eventually the deployment controller scales down all old replica sets and scales up the newest one.

`DeploymentConfig` objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rollouts for `Deployment` objects.
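
For example, while a rollout is in progress, you can list the replica sets that a deployment owns and watch the rollout status to observe old replica sets scaling down as the newest one scales up. The deployment name `<name>` is a placeholder:

[source,terminal]
----
$ oc get rs
$ oc rollout status deployments/<name>
----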

[id="deployment-specific-features-proportional-scaling_{context}"]
== Proportional scaling

Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a `Deployment` object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set.

`DeploymentConfig` objects cannot be scaled when a rollout is ongoing because the `DeploymentConfig` controller would conflict with the deployer process over the size of the new replication controller.
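
For example, you can scale a deployment while a rollout is still in progress, and the deployment controller distributes the additional replicas proportionally across the active replica sets. The deployment name and the replica count are placeholders:

[source,terminal]
----
$ oc scale deployments/<name> --replicas=10
----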

[id="deployment-specific-features-pausing-mid-rollout_{context}"]
== Pausing mid-rollout

Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes.
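
For example, you can pause an ongoing rollout of a `Deployment` object and resume it later. The deployment name `<name>` is a placeholder:

[source,terminal]
----
$ oc rollout pause deployments/<name>
$ oc rollout resume deployments/<name>
----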
33 changes: 33 additions & 0 deletions modules/deploymentconfig-object-specific-features.adoc
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * applications/deployments/what-deployments-are.adoc

:_mod-docs-content-type: CONCEPT
[id="deploymentconfig-object-specific-features_{context}"]
= DeploymentConfig object-specific features

[id="deploymentconfig-object-specific-features-automatic-rollbacks_{context}"]
== Automatic rollbacks

Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure.
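
You can still roll back a `Deployment` object manually. For example, the following command reverts a deployment to its previous revision; the deployment name `<name>` is a placeholder:

[source,terminal]
----
$ oc rollout undo deployments/<name>
----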

[id="deploymentconfig-object-specific-features-triggers_{context}"]
== Triggers

Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout.
If you do not want new rollouts on pod template changes, pause the deployment:

[source,terminal]
----
$ oc rollout pause deployments/<name>
----

[id="deploymentconfig-object-specific-features-lifecycle-hooks_{context}"]
== Lifecycle hooks

Deployments do not yet support any lifecycle hooks.

[id="deploymentconfig-object-specific-features-custom-strategies_{context}"]
== Custom strategies

Deployments do not support user-specified custom deployment strategies.
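
For comparison, a `Deployment` object supports only the built-in strategy types, as in the following illustrative sketch:

[source,yaml]
----
spec:
  strategy:
    type: RollingUpdate # or Recreate; user-defined strategies are not supported
----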
51 changes: 0 additions & 51 deletions modules/deployments-comparing-deploymentconfigs.adoc
@@ -19,54 +19,3 @@ One important difference between `Deployment` and `DeploymentConfig` objects is
For `DeploymentConfig` objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you cannot delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod.

However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs.

[id="delpoyments-specific-features_{context}"]
== Deployment-specific features

[discrete]
==== Rollover

The deployment process for `Deployment` objects is driven by a controller loop, in contrast to `DeploymentConfig` objects that use deployer pods for every new rollout. This means that the `Deployment` object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one.

`DeploymentConfig` objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rapid rollouts for `Deployment` objects.

[discrete]
==== Proportional scaling

Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a `Deployment` object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set.

`DeploymentConfig` objects cannot be scaled when a rollout is ongoing because the controller will have issues with the deployer process about the size of the new replication controller.

[discrete]
==== Pausing mid-rollout

Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes.

[id="delpoymentconfigs-specific-features_{context}"]
== DeploymentConfig object-specific features

[discrete]
==== Automatic rollbacks

Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure.

[discrete]
==== Triggers

Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout.
If you do not want new rollouts on pod template changes, pause the deployment:

[source,terminal]
----
$ oc rollout pause deployments/<name>
----

[discrete]
==== Lifecycle hooks

Deployments do not yet support any lifecycle hooks.

[discrete]
==== Custom strategies

Deployments do not support user-specified custom deployment strategies.
5 changes: 3 additions & 2 deletions modules/deployments-lifecycle-hooks.adoc
@@ -34,8 +34,8 @@ Every hook has a _failure policy_, which defines the action the strategy should

Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the `execNewPod` field.

[discrete]
==== Pod-based lifecycle hook
[id="deployments-lifecycle-hooks-pod-based_{context}"]
== Pod-based lifecycle hook

Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a `DeploymentConfig` object.

@@ -87,6 +87,7 @@ In this example, the `pre` hook will be executed in a new pod using the `openshi

[id="deployments-setting-lifecycle-hooks_{context}"]
== Setting lifecycle hooks
// out of scope for this PR - needs a separate module

You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI.

6 changes: 2 additions & 4 deletions modules/deployments-triggers.adoc
@@ -12,9 +12,8 @@ A `DeploymentConfig` object can contain triggers, which drive the creation of ne
If no triggers are defined on a `DeploymentConfig` object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
====
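
If triggers are defined as an empty field, you can start a rollout manually. For example, the following command starts a new rollout for a `DeploymentConfig` object; `<name>` is a placeholder:

[source,terminal]
----
$ oc rollout latest dc/<name>
----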

[discrete]
[id="deployments-configchange-trigger_{context}"]
=== Config change deployment triggers
== Config change deployment triggers

The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the `DeploymentConfig` object.

@@ -37,9 +36,8 @@ spec:
- type: "ConfigChange"
----

[discrete]
[id="deployments-imagechange-trigger_{context}"]
=== Image change deployment triggers
== Image change deployment triggers

The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed).

5 changes: 3 additions & 2 deletions modules/nodes-pods-plugins-about.adoc
@@ -48,8 +48,9 @@ service DevicePlugin {
}
----

[discrete]
=== Example device plugins
[id="example-device-plugins_{context}"]
== Example device plugins

* link:https://github.com/GoogleCloudPlatform/Container-engine-accelerators/tree/master/cmd/nvidia_gpu[Nvidia GPU device plugin for COS-based operating system]
* link:https://github.com/NVIDIA/k8s-device-plugin[Nvidia official GPU device plugin]
* link:https://github.com/vikaschoudhary16/sfc-device-plugin[Solarflare device plugin]
7 changes: 2 additions & 5 deletions modules/nw-infw-operator-rules-object.adoc
@@ -31,9 +31,8 @@ The fields for the Ingress Node Firewall rules object are described in the foll
|`ingress` allows you to configure the rules that allow outside access to the services on your cluster.
|====

[discrete]
[id="nw-infw-ingress-rules-object_{context}"]
=== Ingress object configuration
== Ingress object configuration

The values for the `ingress` object are defined in the following table:

@@ -65,7 +64,6 @@ Ingress firewall rules are verified using a verification webhook that blocks any
====
|====

[discrete]
[id="nw-ingress-node-firewall-example-cr_{context}"]
== Ingress Node Firewall rules object example

@@ -112,7 +110,6 @@ spec:
----
<1> A <label_name> and a <label_value> must exist on the node and must match the `nodeselector` label and value applied to the nodes you want the `ingressfirewallconfig` CR to run on. The <label_value> can be `true` or `false`. By using `nodeSelector` labels, you can target separate groups of nodes to apply different rules to using the `ingressfirewallconfig` CR.

[discrete]
[id="nw-ingress-node-firewall-zero-trust-example-cr_{context}"]
== Zero trust Ingress Node Firewall rules object example

@@ -154,4 +151,4 @@ spec:
<1> Network-interface cluster
<2> The <label_name> and <label_value> need to match the `nodeSelector` label and value applied to the specific nodes to which you want to apply the `ingressfirewallconfig` CR.
<3> `0.0.0.0/0` set to match any CIDR
<4> `action` set to `Deny`
<4> `action` set to `Deny`
8 changes: 4 additions & 4 deletions modules/olm-dependency-resolution-examples.adoc
@@ -7,8 +7,8 @@

In the following examples, a _provider_ is an Operator which "owns" a CRD or API service.

[discrete]
=== Example: Deprecating dependent APIs
[id="olm-dependency-resolution-examples-deprecating-dependent-APIs_{context}"]
== Example: Deprecating dependent APIs

A and B are APIs (CRDs):

@@ -23,8 +23,8 @@ This results in:

This is a case OLM prevents with its upgrade strategy.

[discrete]
=== Example: Version deadlock
[id="olm-dependency-resolution-examples-version-deadlock_{context}"]
== Example: Version deadlock

A and B are APIs:

3 changes: 1 addition & 2 deletions modules/olm-operatorgroups-intersections.adoc
@@ -14,9 +14,8 @@ A potential issue is that Operator groups with intersecting provided APIs can co
When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces.
====

[discrete]
[id="olm-operatorgroups-intersection-rules_{context}"]
=== Rules for intersection
== Rules for intersection

Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set:

3 changes: 1 addition & 2 deletions modules/olm-operatorgroups-troubleshooting.adoc
@@ -5,9 +5,8 @@
[id="olm-operatorgroups-troubleshooting_{context}"]
= Troubleshooting Operator groups

[discrete]
[id="olm-operatorgroups-troubleshooting-membership_{context}"]
=== Membership
== Membership

* An install plan's namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios:
+
108 changes: 1 addition & 107 deletions modules/pruning-images-manual.adoc
@@ -5,6 +5,7 @@
:_mod-docs-content-type: PROCEDURE
[id="pruning-images-manual_{context}"]
= Manually pruning images
// out of scope for this PR - needs to be split into multiple modules, there shouldn't be multiple procedures in one module

The pruning custom resource enables automatic pruning of images from the {product-registry}. However, administrators can manually prune images that are no longer required by the system because of their age or status, or because they exceed limits. There are two methods to manually prune images:

@@ -278,110 +279,3 @@ or choosing the insecure connection when prompted.
If the registry is secured by a certificate authority different from the one used by {product-title}, it must be specified using the
`--certificate-authority` flag. Otherwise, the `prune` command fails with an error.
====
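
For example, a manual prune run against a registry secured with a custom certificate authority might look like the following; the flag values and the certificate path are illustrative:

[source,terminal]
----
$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --certificate-authority=/path/to/ca.crt --confirm
----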

[id="pruning-images-problems_{context}"]
== Image pruning problems

[discrete]
[id="pruning-images-not-being-pruned_{context}"]
==== Images not being pruned

If your images keep accumulating and the `prune` command removes just a small
portion of what you expect, ensure that you understand the image prune
conditions that must apply for an image to be considered a candidate for
pruning.

Ensure that images you want removed occur at higher positions in each tag
history than your chosen tag revisions threshold. For example, consider an old
and obsolete image named `sha256:abz`. By running the following command in your
namespace, where the image is tagged, the image is tagged three times in a
single image stream named `myapp`:

[source,terminal]
----
$ oc get is -n <namespace> -o go-template='{{range $isi, $is := .items}}{{range $ti, $tag := $is.status.tags}}'\
'{{range $ii, $item := $tag.items}}{{if eq $item.image "sha256:<hash>"}}{{$is.metadata.name}}:{{$tag.tag}} at position {{$ii}} out of {{len $tag.items}}\n'\
'{{end}}{{end}}{{end}}{{end}}'
----

.Example output
[source,terminal]
----
myapp:v2 at position 4 out of 5
myapp:v2.1 at position 2 out of 2
myapp:v2.1-may-2016 at position 0 out of 1
----

When default options are used, the image is never pruned because it occurs at
position `0` in a history of `myapp:v2.1-may-2016` tag. For an image to be
considered for pruning, the administrator must either:

* Specify `--keep-tag-revisions=0` with the `oc adm prune images` command.
+
[WARNING]
====
This action removes all the tags from all the namespaces with underlying images, unless they are younger or they are referenced by objects younger than the specified threshold.
====

* Delete all the `istags` where the position is below the revision threshold,
which means `myapp:v2.1` and `myapp:v2.1-may-2016`.

* Move the image further in the history, either by running new builds pushing to
the same `istag`, or by tagging other image. This is not always
desirable for old release tags.

Tags having a date or time of a particular image's build in their names should
be avoided, unless the image must be preserved for an undefined amount of time.
Such tags tend to have just one image in their history, which prevents
them from ever being pruned.

[discrete]
[id="pruning-images-secure-against-insecure_{context}"]
==== Using a secure connection against insecure registry

If you see a message similar to the following in the output of the `oc adm prune images`
command, then your registry is not secured and the `oc adm prune images`
client attempts to use a secure connection:

[source,terminal]
----
error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client
----

* The recommended solution is to secure the registry. Otherwise, you can force the
client to use an insecure connection by appending `--force-insecure` to the
command; however, this is not recommended.

[discrete]
[id="pruning-images-insecure-against-secure_{context}"]
==== Using an insecure connection against a secured registry

If you see one of the following errors in the output of the `oc adm prune images`
command, it means that your registry is secured using a certificate signed by a
certificate authority other than the one used by `oc adm prune images` client for
connection verification:

[source,terminal]
----
error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response "\x15\x03\x01\x00\x02\x02"]
----

By default, the certificate authority data stored in the user's configuration files is used; the same is true for communication with the master API.

Use the `--certificate-authority` option to provide the right certificate authority for the container image registry server.

[discrete]
[id="pruning-images-wrong-ca_{context}"]
==== Using the wrong certificate authority

The following error means that the certificate authority used to sign the certificate of the secured container image registry is different from the authority used by the client:

[source,terminal]
----
error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority
----

Make sure to provide the right one with the flag `--certificate-authority`.

As a workaround, the `--force-insecure` flag can be added instead. However, this is not recommended.