
Commit cc9ef36

tests_l2: Update readme for 1.0.0 release
Signed-off-by: vbedida79 <veenadhari.bedida@intel.com>
1 parent fcf0dd1 commit cc9ef36

File tree: 1 file changed

tests/l2/README.md

Lines changed: 52 additions & 73 deletions
@@ -1,79 +1,58 @@
-### L2 overview
-This layer consists of workloads for resource provisioning after Intel Device Plugins Operator is installed and custom resources are created.
-
-#### dGPU testing workload
-The workload used is [clinfo](https://github.com/Oblomov/clinfo), which displays the related information of dGPU card. The OCP buildconfig is leveraged to build clinfo container image and push it to the embedded repository through OCP imagestream.
-The Job pod is scheduled on a node with dGPU card and the resource ```gpu.intel.com/i915``` is registered by the dGPU device plugin.
-Below operations are verified on OCP-4.11 bare-metal cluster.
-To deploy the workload:
-```
-oc apply -f clinfo_build.yaml
-oc apply -f clinfo_job.yaml
-```
-To check the clinfo pod logs:
-```
-oc get pods | grep clinfo
-oc logs <clinfo_pod_name>
-```
-
-A sample result for clinfo detecting dGPU card:
-```
+# Verifying Intel Hardware Feature Provisioning
+## Introduction
+After provisioning Intel hardware features on RHOCP, the respective hardware resources are exposed to the RHOCP cluster, and workload containers can request these resources. The following sample workloads help verify that these resources can be used as expected. The container images for these sample workloads are built and packaged on-premises through [RHOCP BuildConfig](https://docs.openshift.com/container-platform/4.12/cicd/builds/understanding-buildconfigs.html) and pushed to the embedded repository through [RHOCP ImageStream](https://docs.openshift.com/container-platform/4.12/openshift_images/image-streams-manage.html).
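Before deploying the jobs below, it can be useful to confirm that the BuildConfig runs completed and that the resulting images were pushed to the ImageStream. A minimal check with standard `oc` commands could look like this; `<build_name>` is a placeholder for whichever build the manifests create:
```
$ oc get builds
$ oc logs build/<build_name>
$ oc get imagestreams
```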
+## Prerequisites
+* Provisioned RHOCP 4.12 cluster. Follow the steps [here](https://github.com/intel/intel-technology-enabling-for-openshift#provisioning-rhocp-cluster).
+* Provisioned Intel HW features on RHOCP. Follow the steps [here](https://github.com/intel/intel-technology-enabling-for-openshift#provisioning-intel-hardware-features-on-rhocp).
+### Verify Intel® Software Guard Extensions (Intel® SGX) Provisioning
+This [SampleEnclave](https://github.com/intel/linux-sgx/tree/master/SampleCode/SampleEnclave) application workload from the Intel SGX SDK runs an Intel SGX enclave utilizing the EPC resource from the Intel SGX provisioning.
+* Build the container image.
+```$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_build.yaml```
+* Deploy and run the workload.
+```$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/sgx/sgx_job.yaml```
+* Check the results.
+```
+$ oc get pods
+intel-sgx-job-4tnh5 0/1 Completed 0 2m10s
+intel-sgx-workload-1-build 0/1 Completed 0 30s
+```
+```
+$ oc logs intel-sgx-job-4tnh5
+Checksum(0x0x7fffac6f41e0, 100) = 0xfffd4143
+Info: executing thread synchronization, please wait...
+Info: SampleEnclave successfully returned.
+Enter a character before exit ...
+```
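To confirm that the node exposes the Intel SGX resources registered by the SGX device plugin, the node's resources can also be inspected directly; `<node_name>` is a placeholder, and the reported quantities depend on the cluster configuration:
```
$ oc describe node <node_name> | grep sgx.intel.com
  sgx.intel.com/enclave    1    1
  sgx.intel.com/epc        5Mi  5Mi
  sgx.intel.com/provision  0    0
```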
+### Verify Intel® Data Center GPU provisioning
+This workload runs [clinfo](https://github.com/Oblomov/clinfo) utilizing the i915 resource from GPU provisioning and displays the related GPU information.
+* Build the workload container image.
+```$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_build.yaml```
+* Deploy and execute the workload.
+```$ oc apply -f https://raw.githubusercontent.com/intel/intel-technology-enabling-for-openshift/main/tests/l2/dgpu/clinfo_job.yaml```
+* Check the results.
+```
+$ oc get pods
+intel-dgpu-clinfo-1-build 0/1 Completed 0 3m20s
+intel-dgpu-clinfo-56mh2 0/1 Completed 0 35s
+```
+```
+$ oc logs intel-dgpu-clinfo-56mh2
 Platform Name Intel(R) OpenCL HD Graphics
-Number of devices 1
-Device Name Intel(R) Graphics [0x56c1]
+Number of devices 1
+Device Name Intel(R) Data Center GPU Flex Series 140 [0x56c1]
 Device Vendor Intel(R) Corporation
 Device Vendor ID 0x8086
 Device Version OpenCL 3.0 NEO
-Driver Version 22.23.23405
+Device UUID 86800000-c156-0000-0000-000000000000
+Driver UUID 32322e34-332e-3234-3539-352e33350000
+Valid Device LUID No
+Device LUID 80c6-4e56fd7f0000
+Device Node Mask 0
+Device Numeric Version 0xc00000 (3.0.0)
+Driver Version 22.43.24595.35
 Device OpenCL C Version OpenCL C 1.2
-Device Type GPU
-Device Profile FULL_PROFILE
-Device Available Yes
-Compiler Available Yes
-Linker Available Yes
-Max compute units 128
-Max clock frequency 2100MHz
-Device Partition (core)
-Max number of sub-devices 0
-Supported partition types None
-Supported affinity domains (n/a)
-Max work item dimensions 3
-Max work item sizes 1024x1024x1024
-Max work group size 1024
-Preferred work group size multiple 64
-Max sub-groups per work group 128
-Sub-group sizes (Intel) 8, 16, 32
-Preferred / native vector sizes
-```
+Device OpenCL C all versions OpenCL
+```
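Similarly, the GPU resources registered by the GPU device plugin can be checked on the node that ran the clinfo job; the output should include the `gpu.intel.com/i915` resource requested by the job (`<node_name>` is a placeholder, and the reported counts depend on the installed cards):
```
$ oc describe node <node_name> | grep gpu.intel.com
```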

-#### SGX test case
-The test case used is SGX SDK [Sample Enclave App](https://github.com/intel/linux-sgx/tree/master/SampleCode/SampleEnclave), which launches a simple SGX enclave. Similar to dGPU, OCP buildconfig and imagestream are leveraged for the container image.
-The job pod is scheduled on a node enabled with SGX requesting enclave memory resource ```sgx.intel.com/epc```. The resource is created by the SGX device plugin.
-Below operations are verified on OCP 4.11 bare-metal cluster.
-To build the test case:
-```
-oc apply -f sgx_build.yaml
-```
-To deploy the job:
-```
-oc apply -f sgx_job.yaml
-```
-To check the pod logs:
-```
-oc get pods | grep sgx
-oc logs <sgx_pod_name>
-```
-Sample pod result:
-```
-Checksum(0x0x7fffac6f41e0, 100) = 0xfffd4143
-Info: executing thread synchronization, please wait...
-Info: SampleEnclave successfully returned.
-Enter a character before exit ...
-```
-On the node, the updated resources are:
-```
-oc describe <node name> | grep sgx.intel.com
-sgx.intel.com/enclave 1 1
-sgx.intel.com/epc 5Mi 5Mi
-sgx.intel.com/provision 0 0
-```
+## See Also
+For Intel SGX demos on vanilla Kubernetes, refer to [link](https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/demo/sgx-sdk-demo).
+For GPU demos on vanilla Kubernetes, refer to [link](https://github.com/intel/intel-device-plugins-for-kubernetes/tree/main/demo/intel-opencl-icd).
