
Commit 3384239

vbedida79 committed:
updated readme
Signed-off-by: vbedida79 <veenadhari.bedida@intel.com>
1 parent: 2945f3c

File tree

1 file changed: +1 −14 lines


README.md

Lines changed: 1 addition & 14 deletions
```diff
@@ -1,19 +1,6 @@
 # Intel® Technology Enabling for OpenShift*
 ## Overview
-The Intel Technology Enabling for OpenShift project focuses on Intel’s enterprise AI and cloud native foundation for [Red Hat OpenShift Container Platform (RHOCP)](https://www.redhat.com/en/technologies/cloud-computing/openshift/container-platform) solution enablement and innovation including Intel data center hardware features, Intel technology enhanced AI platform and the referenced AI workloads provisioning for OpenShift.
-
-These Intel Data Center hardware features include:
-- **Intel® Data Center GPU Flex Series**
-- **Intel® Data Center GPU Max Series**
-- **Intel® Software Guard Extensions (Intel® SGX)**
-- **Intel® QuickAssist Technology (Intel® QAT)**
-- **Intel® Data Streaming Accelerator (Intel® DSA)**
-
-Intel AI hardware and optimized software solutions are integrated into Red Hat OpenShift AI for ease of provisioning and configuration. The [Habana AI Operator](https://catalog.redhat.com/software/container-stacks/detail/64342b3bcbfbb9a6588ce8dd?gs&q=habana) is used to provision Intel® Gaudi® accelerators and released on the Red Hat Ecosystem Catalog.
-
-Red Hat Distributed CI* (DCI) based CI/CD pipeline is leveraged to enable and test this E2E solution with each RHOCP release to ensure new features and improvements can be promptly available.
-
-The [Open Platform for Enterprise AI (OPEA)](https://github.com/opea-project) Retrieval-Augmented Generation (RAG) workloads are used to validate and optimize Intel enterprise AI foundation for OpenShift.
+The project delivers a comprehensive full-stack solution for the Intel® Enterprise AI Foundation on the [OpenShift platform](https://www.redhat.com/en/technologies/cloud-computing/openshift/container-platform), applicable across data center, cloud, and edge environments. It utilizes innovative [General Operators technology](https://github.com/intel/intel-technology-enabling-for-openshift/wiki/Intel-Technology-Enabling-for-OpenShift-Architecture-and-Working-Scope#architecture-options) to provision AI accelerators, including the Intel Gaudi Processor, Flex and Max GPUs, and Xeon CPU accelerators such as QAT, SGX, and DSA. Additionally, the project introduces solutions for integrating [Gaudi Software](https://docs.habana.ai/en/latest/index.html) or [OneAPI-based](https://www.intel.com/content/www/us/en/developer/tools/oneapi/overview.html#gs.kgdasr) AI software into OpenShift AI. Key AI workload integrations, such as LLM inferencing, fine-tuning, and post-training for enterprise AI, are under development. The plans also include the GPU network provisioning and full-stack integration with OpenShift.
 
 ![Alt text](/docs/images/Intel-Technology-Enabling-for-OpenShift-Architecture.png)
 
```