diff --git a/docs/vendor/embedded-using.mdx b/docs/vendor/embedded-using.mdx
index 9c46132345..de7f383b39 100644
--- a/docs/vendor/embedded-using.mdx
+++ b/docs/vendor/embedded-using.mdx
@@ -145,7 +145,7 @@ This section describes managing nodes in multi-node clusters created with Embedd
 
 You can optionally define node roles in the Embedded Cluster Config. For multi-node clusters, roles can be useful for the purpose of assigning specific application workloads to nodes. If nodes roles are defined, users access the Admin Console to assign one or more roles to a node when it is joined to the cluster.
 
-For more information, see [roles](/reference/embedded-config#roles) in _Embedded Cluster Config_.
+For more information, see [roles](/reference/embedded-config#roles-beta) in _Embedded Cluster Config_.
 
 ### Adding Nodes
 
@@ -238,4 +238,4 @@ When the containerd options are configured as shown above, the NVIDIA GPU Operat
 If you include the NVIDIA GPU Operator as a Helm extension, remove any existing containerd services that are running on the host (such as those deployed by Docker) before attempting to install the release with Embedded Cluster. If there are any containerd services on the host, the NVIDIA GPU Operator will generate an invalid containerd config, causing the installation to fail. For more information, see [Installation failure when NVIDIA GPU Operator is included as Helm extension](#nvidia) in _Troubleshooting Embedded Cluster_.
 
 This is the result of a known issue with v24.9.x of the NVIDIA GPU Operator. For more information about the known issue, see [container-toolkit does not modify the containerd config correctly when there are multiple instances of the containerd binary](https://github.com/NVIDIA/nvidia-container-toolkit/issues/982) in the nvidia-container-toolkit repository in GitHub.
-:::
\ No newline at end of file
+:::