
[CI, enhancement] add pytorch+gpu testing ci #2494


Open · wants to merge 65 commits into main

Conversation

icfaust
Contributor

@icfaust commented May 26, 2025

Description

This PR introduces a public GPU CI job to sklearnex. It is not fully featured, but it provides the first public GPU testing. Due to issues with n_jobs support (which are being addressed in #2364), run times are extremely long but viable. The GPU is currently used only in the sklearn conformance steps, not in sklearnex/onedal testing, because this CI runs without dpctl installed for GPU offloading. In the future it will extract queues from the data itself in combination with PyTorch, which has had Intel GPU capabilities since PyTorch 2.4 (https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html); this will allow GPU testing in the other steps.

This CI is important for at least three reasons. sklearn tests array_api using the CuPy, PyTorch, and array_api_strict frameworks, and PyTorch is the only GPU data framework without __sycl_usm_array_interface__ that is expected to work for both sklearn and sklearnex. Therefore: 1) it provides an array_api-only GPU testing framework to validate against sklearn conformance; 2) it is likely the first entry point for users who wish to use Intel GPU data natively (due to the size of the PyTorch user base); 3) it validates that sklearnex can function properly on GPU without dpctl installed, removing limitations on Python versions and dependency-stability issues. Note that PyTorch DOES NOT follow the array_api standard; sklearn uses array_api_compat to shoehorn in PyTorch support. There are quirks associated with PyTorch that should be tested by sklearnex. This affects how we design our estimators, as checking for __array_namespace__ is insufficient if we wish to support PyTorch.
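
A minimal sketch of why an __array_namespace__ check alone misses PyTorch (the helper and its fallback behavior are illustrative, not the actual sklearnex code):

```python
import numpy as np


def get_namespace(x):
    """Hypothetical helper: resolve an array-API namespace for `x`."""
    # Arrays following the standard (array_api_strict, dpctl.tensor, ...)
    # expose __array_namespace__ directly.
    if hasattr(x, "__array_namespace__"):
        return x.__array_namespace__()
    # torch.Tensor does NOT implement __array_namespace__; sklearn relies on
    # array_api_compat to wrap it in an array-API-compatible namespace.
    try:
        from array_api_compat import array_namespace
        return array_namespace(x)
    except (ImportError, TypeError):
        # Anything unrecognized falls back to plain numpy semantics.
        return np
```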

Unlike other public runners, it takes the strategy of splitting the build and test steps into separate jobs. The test step runs on a CPU-only runner and on a GPU runner in parallel. For simplicity it does not use a virtual environment such as conda or venv, but it can still reuse all of the previously written infrastructure.

It uses Python 3.12 and sklearn 1.4 for simplicity (i.e. to mirror other GPU testing systems). This will be updated in a follow-up PR as the CI sees more use (likely requiring different deselections).

When successful, a large increase in code coverage should be visible in codecov, since coverage data is also collected from these jobs.

This will be important for validating the upcoming array_api changes in the codebase, which would otherwise be obscured by dpctl.

This required the following changes:

  • A new job, 'Identify oneDAL nightly', is created to remove code duplication in ci.yml; it identifies the oneDAL build to download for all of the GitHub Actions CI runners.
  • Changes to run_sklearn_tests.sh were required to make the GPU deselections work publicly.
  • Renamed 'oneDALNightly/pip' to 'oneDALNightly/venv' to signify that a virtual environment is used instead of the package manager.
  • Patching of assert_all_finite would fail in combination with array_api dispatching; changes are made in daal4py so that DAAL is used only when the input is a numpy array or a dataframe. Because PyTorch uses the size attribute differently (torch.Tensor.size is a method, not an element count), related changes were also needed (see the assert_all_finite sketch after this list).
  • Checking and moving data from GPU to CPU was incorrectly written for array_api, as we previously had no GPU data framework to test against. The device must instead be verified via the __dlpack_device__ attribute, and the data then converted with asarray when __array__ is available, or with from_dlpack when only the __dlpack__ attribute is available. This required exposing some dlpack enums for verification (see the device-check sketch after this list).
  • This PR includes changes from [CI, Enhancement] add external pytest frameworks control #2489, which were needed to limit CI running time; testing will focus on PyTorch and numpy for CPU and GPU.
  • Some torch tests are deselected, in line with the original array_api rollout (ENH: adding array-api-compat and enabling array api conformance tests #2079).
  • test_learning_curve_some_failing_fits_warning[42] is deselected because of an unknown issue with _intercept_ and SVC on GPU (must be investigated).
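
A minimal sketch of the kind of guard described in the assert_all_finite bullet above (the helper names are illustrative, not the actual daal4py code): the DAAL-style fast path is restricted to numpy arrays and dataframes, and element counts are derived from the shape rather than the size attribute, since torch.Tensor.size is a method rather than an integer.

```python
import math

import numpy as np
import pandas as pd
from sklearn.utils import assert_all_finite as _sklearn_assert_all_finite


def _element_count(x):
    # torch.Tensor.size is a bound method (returning the shape), not an
    # integer element count, so derive the count from the shape instead.
    return math.prod(x.shape)


def _assert_all_finite(x):
    """Illustrative dispatcher: take the fast path only for numpy/pandas."""
    if isinstance(x, (np.ndarray, pd.DataFrame)):
        # stand-in for the DAAL-accelerated finiteness check in daal4py
        if not np.isfinite(np.asarray(x, dtype=np.float64)).all():
            raise ValueError("Input contains NaN or infinity.")
    else:
        # array-API inputs (e.g. torch tensors) fall back to stock sklearn,
        # which supports them through array_api_compat.
        _sklearn_assert_all_finite(x)
```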
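
And a rough sketch of the __dlpack_device__-based device check described above (the device-type values come from the DLPack specification; the helper itself is hypothetical):

```python
import numpy as np

# DLPack device-type enum values, per the DLPack specification
_DLPACK_CPU = 1
_DLPACK_ONEAPI = 14  # SYCL / Intel GPU devices


def _as_numpy(x):
    """Hypothetical helper: convert CPU-resident data to numpy."""
    device_type, _device_id = x.__dlpack_device__()
    if device_type == _DLPACK_ONEAPI:
        raise ValueError("data lives on an Intel GPU; move it to the CPU first")
    if device_type != _DLPACK_CPU:
        raise ValueError(f"unsupported DLPack device type: {device_type}")
    # Prefer __array__ when available; otherwise use the DLPack protocol.
    if hasattr(x, "__array__"):
        return np.asarray(x)
    return np.from_dlpack(x)
```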

This will require the following PRs afterwards (by theme):

  • [bugfix, enhancement] Address affinity bug by using threadpoolctl/joblib for n_jobs dispatching #2364: fix issues with thread affinity / Kubernetes pod operation for n_jobs.
  • Introduce PyTorch in onedal/tests/utils/_dataframes_support.py and onedal/tests/utils/_device_selection.py to enable public GPU testing in sklearnex.
  • Rewrite from_data in onedal/utils/_sycl_queue_manager.py to extract queues from __dlpack__ data (special PyTorch interface already in place in pybind11).
  • Introduce a centralized lazy-loading approach for the torch, dpnp, and dpctl.tensor frameworks due to their import times (likely following the strategy laid out in array_api_compat; see the sketch after this list).
  • Update the sklearn version so it no longer simply replicates other CI systems.
  • Fix the issue with SVC and the _intercept_ attribute (test_learning_curve_some_failing_fits_warning[42] sklearn conformance test).
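
As a rough sketch of the centralized lazy-loading idea mentioned above (the helper name is hypothetical, and array_api_compat's actual mechanism may differ):

```python
import importlib

_module_cache = {}


def lazy_import(name):
    """Import `name` on first use and cache the result (None if unavailable)."""
    if name not in _module_cache:
        try:
            _module_cache[name] = importlib.import_module(name)
        except ImportError:
            _module_cache[name] = None
    return _module_cache[name]


def is_torch_tensor(data):
    # torch is only imported the first time this function runs, keeping
    # `import sklearnex` itself fast even when heavy frameworks are installed.
    torch = lazy_import("torch")
    return torch is not None and isinstance(data, torch.Tensor)
```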

No performance benchmarks necessary


The PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for standard requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a docs-only PR doesn't require performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes or created a separate PR with update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least summary table with measured data, if performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended benchmarking suite and provided corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.


codecov bot commented May 26, 2025

Codecov Report

Attention: Patch coverage is 50.00000% with 12 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| onedal/_device_offload.py | 33.33% | 6 Missing and 2 partials ⚠️ |
| onedal/datatypes/table.cpp | 0.00% | 0 Missing and 2 partials ⚠️ |
| sklearnex/utils/validation.py | 66.66% | 2 Missing ⚠️ |
| Flag | Coverage Δ |
|---|---|
| azure | 79.81% <54.54%> (-0.06%) ⬇️ |
| github | 73.61% <50.00%> (+1.99%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|---|---|
| onedal/datatypes/__init__.py | 100.00% <100.00%> (ø) |
| onedal/datatypes/_data_conversion.py | 91.17% <100.00%> (+6.80%) ⬆️ |
| onedal/datatypes/table.cpp | 51.92% <0.00%> (-1.02%) ⬇️ |
| sklearnex/utils/validation.py | 67.94% <66.66%> (-0.55%) ⬇️ |
| onedal/_device_offload.py | 76.00% <33.33%> (-5.04%) ⬇️ |

... and 15 files with indirect coverage changes


@icfaust changed the title [CI,WIP, enhancement] add pytorch+gpu testing ci [CI, enhancement] add pytorch+gpu testing ci Jun 1, 2025
@icfaust marked this pull request as ready for review June 1, 2025 22:31