:_mod-docs-content-type: ASSEMBLY
[id="about-hardware-accelerators"]
= About hardware accelerators
include::_attributes/common-attributes.adoc[]
:context: about-hardware-accelerators

toc::[]

Specialized hardware accelerators play a key role in the emerging generative artificial intelligence and machine learning (AI/ML) industry. Specifically, hardware accelerators are essential to the training and serving of the large language models and other foundation models that power this new technology. Data scientists, data engineers, ML engineers, and developers can take advantage of specialized hardware acceleration for data-intensive transformations and for model development and serving. Much of that ecosystem is open source, with a number of contributing partners and open source foundations.

Red{nbsp}Hat {product-title} provides support for cards and peripheral hardware that add the processing units that make up hardware accelerators:

* Graphics processing units (GPUs)
* Neural processing units (NPUs)
* Application-specific integrated circuits (ASICs)
* Data processing units (DPUs)

image::OCP_HW_Accelerators_4.png[Supported hardware accelerator cards and peripherals]

Specialized hardware accelerators provide a rich set of benefits for AI/ML development:

One platform for all:: A collaborative environment for developers, data engineers, data scientists, and DevOps
Extended capabilities with Operators:: Operators bring AI/ML capabilities to {product-title}
Hybrid-cloud support:: On-premises support for model development, delivery, and deployment
Support for AI/ML workloads:: Model testing, iteration, integration, promotion, and serving into production as services

Red{nbsp}Hat provides an optimized platform to enable these specialized hardware accelerators in {op-system-base-full} and {product-title} platforms at the Linux (kernel and userspace) and Kubernetes layers. To do this, Red{nbsp}Hat combines the proven capabilities of Red{nbsp}Hat OpenShift AI and Red{nbsp}Hat {product-title} in a single enterprise-ready AI application platform.

Hardware Operators use the Operator framework of a Kubernetes cluster to enable the required accelerator resources. You can also deploy the provided device plugin manually or as a daemon set. This plugin registers the GPU with the cluster.
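When deployed as a daemon set, the device plugin runs one copy on every matching node and registers its devices through the kubelet device plugin socket directory. The following is a minimal sketch of such a manifest; the plugin name, labels, and container image are hypothetical placeholders, not a specific vendor's plugin:

[source,yaml]
----
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-gpu-device-plugin   # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: example-gpu-device-plugin
  template:
    metadata:
      labels:
        name: example-gpu-device-plugin
    spec:
      containers:
      - name: device-plugin
        image: registry.example.com/gpu-device-plugin:latest  # hypothetical image
        volumeMounts:
        - name: device-plugin
          # Standard kubelet directory where device plugins register their sockets
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
----

In practice, vendor Operators such as the NVIDIA GPU Operator create and manage an equivalent daemon set for you.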

Certain specialized hardware accelerators are designed to work within disconnected environments where a secure environment must be maintained for development and testing.

include::modules/hardware-accelerators.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2-latest/html/introduction_to_red_hat_openshift_ai/index[Introduction to Red Hat OpenShift AI]
* link:https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html[NVIDIA GPU Operator on Red Hat OpenShift Container Platform]
* link:https://www.amd.com/en/products/accelerators/instinct.html[AMD Instinct Accelerators]
* link:https://www.intel.com/content/www/us/en/products/details/processors/ai-accelerators/gaudi-overview.html[Intel Gaudi AI Accelerators]