Testing updated script #96516

Open · wants to merge 1 commit into base: main
4 changes: 2 additions & 2 deletions modules/about-log-collection.adoc
@@ -18,7 +18,7 @@ If you configure the log collector to collect audit logs, it collects them from
The log collector collects the logs from these sources and forwards them internally or externally depending on your {logging} configuration.

[id="about-log-collectors-types_{context}"]
== Log collector types
= Log collector types

link:https://vector.dev/docs/about/what-is-vector/[Vector] is a log collector offered as an alternative to Fluentd for the {logging}.

@@ -41,7 +41,7 @@ spec:
----
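
As a sketch of how the collector type is selected, the following assumes a `ClusterLogging` custom resource; the exact field path (`spec.collection.logs.type` in some releases, `spec.collection.type` in others) depends on your {logging} version.

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: vector # assumed field path; `fluentd` selects the legacy collector
      vector: {}
----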

[id="about-log-collectors-limitations_{context}"]
== Log collection limitations
= Log collection limitations

The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered _best effort_.

2 changes: 1 addition & 1 deletion modules/about-manually-maintained-credentials-upgrade.adoc
@@ -19,7 +19,7 @@ The process to update the cloud provider resources and the `upgradeable-to` anno
====

[id="cco-platform-options_{context}"]
== Cloud credential configuration options and update requirements by platform type
= Cloud credential configuration options and update requirements by platform type

Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements.
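
To see which mode the CCO is configured to use, you can query the `CloudCredential` resource; this is a hedged sketch, and an empty value indicates the default mode.

[source,terminal]
----
$ oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}'
----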

2 changes: 1 addition & 1 deletion modules/about-redhat-openshift-gitops.adoc
@@ -18,7 +18,7 @@ These repositories contain a declarative description of the infrastructure you n
Argo CD reports any configurations that deviate from their specified state. These reports allow administrators to automatically or manually resync configurations to the defined state. Therefore, Argo CD enables you to deliver global custom resources, like the resources that are used to configure {product-title} clusters.

[id="key-features_{context}"]
== Key features
= Key features

{gitops-title} helps you automate the following tasks:

4 changes: 2 additions & 2 deletions modules/adding-node-iso-configs.adoc
@@ -10,7 +10,7 @@ When creating the ISO image, configurations are retrieved from the target cluste
Any configurations for your cluster are applied to the nodes unless you override the configurations in either the `nodes-config.yaml` file or any flags you add to the `oc adm node-image create` command.

[id="adding-node-iso-yaml-config_{context}"]
== YAML file parameters
= YAML file parameters

Configuration parameters that can be specified in the `nodes-config.yaml` file are described in the following table:
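
As a hedged illustration, a minimal `nodes-config.yaml` might define a single host; the host name and MAC address below are placeholders.

[source,yaml]
----
hosts:
- hostname: extra-worker-0 # placeholder node name
  interfaces:
  - name: eth0
    macAddress: 00:ef:44:21:e6:a5 # placeholder MAC address
----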

@@ -89,7 +89,7 @@ You must also set the `--pxe` flag to generate PXE assets instead of an ISO imag


[id="adding-node-iso-flags-config_{context}"]
== Command flag options
= Command flag options

You can use command flags with the `oc adm node-image create` command to configure the nodes you are creating.

6 changes: 3 additions & 3 deletions modules/admin-limit-operations.adoc
@@ -6,7 +6,7 @@
[id="admin-limit-operations_{context}"]
= Limit range operations

== Creating a limit range
= Creating a limit range

Shown here is an example procedure to follow for creating a limit range.

@@ -19,7 +19,7 @@ Shown here is an example procedure to follow for creating a limit range.
$ oc create -f <limit_range_file> -n <project>
----
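
As a sketch of what `<limit_range_file>` might contain, the following defines container-level constraints using the core `LimitRange` API; the values are placeholders, not recommendations.

[source,yaml]
----
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    max:
      cpu: "2" # maximum limit a container may declare
      memory: 1Gi
    min:
      cpu: 100m # minimum request a container must declare
      memory: 4Mi
    default:
      cpu: 300m # limit applied when a container sets none
      memory: 200Mi
    defaultRequest:
      cpu: 200m # request applied when a container sets none
      memory: 100Mi
----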

== View the limit
= View the limit

You can view any limit ranges that are defined in a project by navigating in the web console to the `Quota` page for the project. You can also use the CLI to view limit range details by performing the following steps:

@@ -64,7 +64,7 @@ openshift.io/ImageStream openshift.io/image - 12 -
openshift.io/ImageStream openshift.io/image-tags - 10 - - -
----

== Deleting a limit range
= Deleting a limit range

To remove a limit range, run the following command:
+
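[source,terminal]
----
# Sketch only: assumes the limit range name shown by `oc get limits -n <project>`.
$ oc delete limits <limit_range_name> -n <project>
----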
12 changes: 6 additions & 6 deletions modules/admin-quota-limits.adoc
@@ -108,7 +108,7 @@ spec:

You can specify both core and {product-title} resources in one limit range object.

== Container limits
= Container limits

*Supported Resources:*

@@ -149,7 +149,7 @@ For example, if a container has `cpu: 500` in the `limit` value, and `cpu: 100`
`Default Requests[<resource>]`:: Defaults `container.resources.requests[<resource>]` to specified value if none.
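
As an illustration of how these constraints combine, the following hedged `LimitRange` fragment sets a ratio and defaults for CPU; the values are placeholders.

[source,yaml]
----
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
spec:
  limits:
  - type: Container
    maxLimitRequestRatio:
      cpu: "10" # the limit may be at most 10x the request
    default:
      cpu: 500m # applied as the limit when a container sets none
    defaultRequest:
      cpu: 100m # applied as the request when a container sets none
----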


== Pod limits
= Pod limits

*Supported Resources:*

@@ -177,7 +177,7 @@ Across all containers in a pod, the following must hold true:

|===

== Image limits
= Image limits

*Supported Resources:*

@@ -203,7 +203,7 @@ Per image, the following must hold true if specified:
To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quota. The `REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA` environment variable must be set to `true`. By default, the environment variable is set to `true` for new deployments.
====

== Image stream limits
= Image stream limits

*Supported Resources:*

@@ -233,14 +233,14 @@ Per image stream, the following must hold true if specified:

|===

== Counting of image references
= Counting of image references

The `openshift.io/image-tags` resource represents unique image references. Possible references are an `ImageStreamTag`, an `ImageStreamImage`, or a `DockerImage`. Tags can be created by using the `oc tag` and `oc import-image` commands or by using image streams. No distinction is made between internal and external references. However, each unique reference that is tagged in an image stream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction.

The `openshift.io/images` resource represents unique image names that are recorded in image stream status. It helps to restrict several images that can be pushed to the internal registry. Internal and external references are not distinguished.
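
Both resources are constrained through a `LimitRange` with the `openshift.io/ImageStream` type; the following is a hedged sketch, and the counts are placeholders.

[source,yaml]
----
apiVersion: v1
kind: LimitRange
metadata:
  name: imagestream-limits
spec:
  limits:
  - type: openshift.io/ImageStream
    max:
      openshift.io/image-tags: 10 # unique tag references in the image stream spec
      openshift.io/images: 12 # unique image names in the image stream status
----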


== PersistentVolumeClaim limits
= PersistentVolumeClaim limits

*Supported Resources:*

4 changes: 2 additions & 2 deletions modules/admin-quota-overview.adoc
@@ -121,7 +121,7 @@ $ oc create quota <name> --hard=count/<resource>.<group>=<quota> <1>
<1> `<resource>` is the name of the resource, and `<group>` is the API group, if applicable.
Use the `kubectl api-resources` command for a list of resources and their associated API groups.
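
For instance, a concrete invocation might look like the following sketch; the quota name, resource types, and counts are placeholders.

[source,terminal]
----
$ oc create quota test --hard=count/deployments.apps=2,count/replicasets.apps=4,count/pods=3,count/secrets=4
----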

== Setting resource quota for extended resources
= Setting resource quota for extended resources

Overcommitment of resources is not allowed for extended resources, so you must specify `requests` and `limits` for the same extended resource in a quota. Currently, only quota items with the prefix `requests.` are allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource `nvidia.com/gpu`.

@@ -287,7 +287,7 @@ Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod
+
This `Forbidden` error message occurs because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota.

== Quota scopes
= Quota scopes

Each quota can have an associated set of _scopes_. A quota only measures usage for a resource if it matches the intersection of enumerated scopes.
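
As a hedged sketch, scopes are declared in the `spec.scopes` list of a `ResourceQuota`; this example counts only pods that do not set `activeDeadlineSeconds`, and the pod count is a placeholder.

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-terminating-quota
spec:
  hard:
    pods: "4" # placeholder count
  scopes:
  - NotTerminating # matches pods where spec.activeDeadlineSeconds is nil
----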

16 changes: 8 additions & 8 deletions modules/admin-quota-usage.adoc
@@ -6,7 +6,7 @@
[id="admin-quota-usage_{context}"]
= Admin quota usage

== Quota enforcement
= Quota enforcement

After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until it has calculated updated usage statistics.

@@ -20,14 +20,14 @@ If project modifications exceed a quota usage limit, the server denies the actio
quota constraint violated, and what their currently observed usage stats are in the system.


== Requests compared to limits
= Requests compared to limits

When allocating compute resources by quota, each container can specify request and limit values for CPU, memory, and ephemeral storage. Quotas can restrict any of these values.

If the quota has a value specified for `requests.cpu` or `requests.memory`, then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`, then it requires that every incoming container specify an explicit limit for those resources.
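
As a minimal sketch, a quota that enforces both behaviors might look like this; the values are placeholders.

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "1" # every incoming container must declare a CPU request
    limits.memory: 2Gi # every incoming container must declare a memory limit
----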


== Sample resource quota definitions
= Sample resource quota definitions


.Example core-object-counts.yaml
@@ -186,7 +186,7 @@ spec:
<7> Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to `0`, it means bronze storage class cannot create claims.


== Creating a quota
= Creating a quota

To create a quota, first define the quota in a file. Then use that file to apply it to a project. See the Additional resources section for a link describing this.

@@ -202,7 +202,7 @@ Here is an example using the `core-object-counts.yaml` resource quota definition
$ oc create -f core-object-counts.yaml -n demoproject
----

== Creating object count quotas
= Creating object count quotas

You can create an object count quota for all {product-title} standard namespaced resource types, such as `BuildConfig` and `DeploymentConfig`. An object count quota places a defined quota on all standard namespaced resource types.

@@ -235,7 +235,7 @@ count/secrets 0 4

This example limits the listed resources to the hard limit in each project in the cluster.

== Viewing a quota
= Viewing a quota

You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's `Quota` page.

@@ -273,7 +273,7 @@ services 2 10

ifdef::openshift-origin,openshift-enterprise[]

== Configuring quota synchronization period
= Configuring quota synchronization period

When a set of resources are deleted, the synchronization time frame of resources is determined by the `resource-quota-sync-period` setting in the `/etc/origin/master/master-config.yaml` file.
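
A minimal sketch of where the setting lives, assuming the `kubernetesMasterConfig.controllerArguments` layout used by older releases; verify the key path against your own `master-config.yaml`.

[source,yaml]
----
kubernetesMasterConfig:
  controllerArguments:
    resource-quota-sync-period: # assumed key layout
    - 10s
----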

@@ -312,7 +312,7 @@ endif::[]

ifdef::openshift-origin,openshift-enterprise,openshift-dedicated[]

== Explicit quota to consume a resource
= Explicit quota to consume a resource

If a resource is not managed by quota, a user has no restriction on the amount of resource that can be consumed. For example, if there is no quota on storage related to the gold storage class, the amount of gold storage a project can create is unbounded.
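
As a hedged sketch, explicitly setting such a quota to `0` denies consumption of the resource; the storage class name below is a placeholder.

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gold-storage-quota
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: "0" # denies claims against the gold storage class
----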

4 changes: 2 additions & 2 deletions modules/admission-webhook-types.adoc
@@ -8,7 +8,7 @@
Cluster administrators can call out to webhook servers through the mutating admission plugin or the validating admission plugin in the API server admission chain.

[id="mutating-admission-plug-in_{context}"]
== Mutating admission plugin
= Mutating admission plugin

The mutating admission plugin is invoked during the mutation phase of the admission process, which allows modification of resource content before it is persisted. One example webhook that can be called through the mutating admission plugin is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification.
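
As a minimal sketch of how such a webhook is registered, the following uses the upstream `admissionregistration.k8s.io/v1` API; the webhook name, service, and path are hypothetical.

[source,yaml]
----
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-mutator
webhooks:
- name: pod-mutator.example.com # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: webhook-server # hypothetical service serving the webhook
      namespace: webhook-ns
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE"]
----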

@@ -61,7 +61,7 @@ In {product-title} {product-version}, objects created by users or control loops
====

[id="validating-admission-plug-in_{context}"]
== Validating admission plugin
= Validating admission plugin

A validating admission plugin is invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a webhook which is called by the validating admission plugin, to ensure that all `nodeSelector` fields are constrained by the node selector restrictions on the namespace.

4 changes: 2 additions & 2 deletions modules/agent-configuration-parameters.adoc
@@ -16,7 +16,7 @@ These settings are used for installation only, and cannot be modified after inst
====

[id="agent-configuration-parameters-required_{context}"]
== Required configuration parameters
= Required configuration parameters

Required Agent configuration parameters are described in the following table:

@@ -46,7 +46,7 @@ When you do not provide `metadata.name` through either the `install-config.yaml`
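
As a sketch, a minimal `agent-config.yaml` supplying the required values might look like this; the API version varies by release, and the name and IP address are placeholders.

[source,yaml]
----
apiVersion: v1beta1 # assumed; some releases use v1alpha1
kind: AgentConfig
metadata:
  name: example-agent-config # placeholder cluster name
rendezvousIP: 192.168.111.80 # placeholder rendezvous host IP
----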


[id="agent-configuration-parameters-optional_{context}"]
== Optional configuration parameters
= Optional configuration parameters

Optional Agent configuration parameters are described in the following table:

2 changes: 1 addition & 1 deletion modules/agent-install-dns-none.adoc
@@ -69,7 +69,7 @@ You can use the `dig` command to verify name and reverse name resolution.
====

[id="agent-install-dns-none-example_{context}"]
== Example DNS configuration for platform "none" clusters
= Example DNS configuration for platform "none" clusters

This section provides A and PTR record configuration samples that meet the DNS requirements for deploying {product-title} using the platform `none` option. The samples are not meant to provide advice for choosing one DNS solution over another.

4 changes: 2 additions & 2 deletions modules/agent-install-load-balancing-none.adoc
@@ -7,7 +7,7 @@ Before you install {product-title}, you must provision the API and application I

[NOTE]
====
These requirements do not apply to single-node OpenShift clusters using the platform `none` option.
These requirements do not apply to {sno} clusters using the platform `none` option.
====

[NOTE]
@@ -111,7 +111,7 @@ If you are deploying a three-node cluster with zero compute nodes, the Ingress C
====

[id="agent-install-load-balancing-none-example_{context}"]
== Example load balancer configuration for platform "none" clusters
= Example load balancer configuration for platform "none" clusters

This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform `none` option. The sample is an `/etc/haproxy/haproxy.cfg` configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.

4 changes: 2 additions & 2 deletions modules/agent-install-networking.adoc
@@ -13,7 +13,7 @@ In an environment without a DHCP server, you can define IP addresses statically.
In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds.

[id="agent-install-networking-DHCP_{context}"]
== DHCP
= DHCP

.Preferred method: `install-config.yaml` and `agent-config.yaml`

Expand All @@ -32,7 +32,7 @@ rendezvousIP: 192.168.111.80 <1>
<1> The IP address for the rendezvous host.

[id="agent-install-networking-static_{context}"]
== Static networking
= Static networking

.Preferred method: `install-config.yaml` and `agent-config.yaml`
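
A hedged sketch of a static configuration in `agent-config.yaml`, using NMState-style `networkConfig`; all names and addresses are placeholders.

[source,yaml]
----
hosts:
- hostname: master-0 # placeholder host name
  interfaces:
  - name: eno1
    macAddress: 00:ef:44:21:e6:a5 # placeholder MAC address
  networkConfig:
    interfaces:
    - name: eno1
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.168.111.30 # placeholder static IP
          prefix-length: 23
----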

2 changes: 1 addition & 1 deletion modules/albo-installation.adoc
@@ -158,7 +158,7 @@ aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running
----

[id="aws-load-balancer-operator-validating-the-deployment_{context}"]
== Validating the deployment
= Validating the deployment

. Create a new project:
+
4 changes: 2 additions & 2 deletions modules/albo-prerequisites.adoc
@@ -24,7 +24,7 @@ endif::openshift-rosa-hcp[]
* OC CLI

[id="aws-load-balancer-operator-environment_{context}"]
== AWS Load Balancer Operator environment set up
= AWS Load Balancer Operator environment set up

Optional: You can set up temporary environment variables to streamline your installation commands.

@@ -61,7 +61,7 @@ Cluster name: <cluster_id>, Region: us-east-2, OIDC Endpoint: oidc.op1.openshift
----
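
A hedged sketch of how such variables can be derived from the cluster, assuming the standard `infrastructure` and `authentication` cluster resources; adjust to your environment.

[source,terminal]
----
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ export REGION=$(oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}')
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
----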

[id="aws-vpc-subnets_{context}"]
== AWS VPC and subnets
= AWS VPC and subnets

Before you can install the AWS Load Balancer Operator, you must tag your AWS VPC resources.
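
As a hedged sketch, the cluster ownership tag can be applied with the AWS CLI; the VPC ID and cluster ID are placeholders.

[source,terminal]
----
$ aws ec2 create-tags --resources <vpc_id> \
    --tags Key=kubernetes.io/cluster/<cluster_id>,Value=owned
----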
