[enhancement] enable array_api return values from from_table #2441


Draft: wants to merge 38 commits into main

Conversation

icfaust
Contributor

@icfaust icfaust commented Apr 17, 2025

Description

**Dependent on #2430**

Leaning into functional-style development for from_table, it is generalized to work with the expected major frameworks (dpctl, dpnp, and various flavors of array_api arrays). This is done by changing the from_table API to depend not on the keyword arguments sua_iface and sycl_queue, but on a single keyword, like, which can be either an array type or a callable that returns the expected array type. The returned array will preserve the aspects of the input array (shape, dtype, row- or column-major contiguity, etc.) and, in most circumstances, the device. A major exception is sycl_usm CPU USM allocations, which require a manual memcpy to a CPU allocation.
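A minimal sketch of the described like dispatch, using numpy as a stand-in for a native oneDAL table (the fallback behavior and internals here are assumptions for illustration, not the PR's actual implementation):

```python
import numpy as np

def from_table(table, like=None):
    """Sketch: convert `table` to the array type indicated by `like`."""
    data = np.asarray(table)  # stand-in for native table conversion
    if like is None:
        return data
    if callable(like):
        # A callable is treated as a constructor for the output type.
        return like(data)
    # Otherwise mirror the namespace of the example array; the real code
    # would also mirror device placement where possible.
    if hasattr(like, "__array_namespace__"):
        return like.__array_namespace__().asarray(data)
    return np.asarray(data)
```

Passing an array instance mirrors its type; passing a constructor applies it directly.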

Follow-on work must apply the use of like consistently as the roll-out of raw_inputs in #2153 proceeds, because some estimators force values to the return type and others do not. As this is applied inconsistently, it must be fixed for any array_api zero-copy-enabled estimator.

Changes:

  • Switched the oneDAL table __sycl_usm_array_interface__ attribute to raise an AttributeError when a queue is unavailable, rather than setting it to None. The attribute should only exist when a USM allocation is made, which is not the case for CPU data coming out of oneDAL. This check is placed early in the attribute code so that hasattr can be used to test for sycl_usm convertibility.
  • Created a return_type_constructor function to generalize oneDAL table-to-array construction
  • Added and fixed documentation in _data_conversion.py
  • Modified functions to replace the sua_iface and sycl_queue keywords with like
  • Removed the need to import dpnp and dpctl/dpctl.tensor in this file by generalizing via return_type_constructor

NOTES: The initial from_table support is not general enough and is heavily hard-coded for sycl_usm inputs. This changes the interface to mirror the type (and, where possible, the queue) of the data. Getting this integrated will begin the fight against the technical debt introduced with use_raw_inputs. The core of this PR is simple; the changes associated with that technical debt will be brutal.

Incremental algorithms will first collect and store the return-type generator function via return_type_constructor. This is similar to how the queue must be stored and then applied at the finalization step. Serialization/deserialization of this callable isn't possible, so it should be handled in the same way as the queue.
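The incremental pattern described above might look roughly like this (class and method names are hypothetical; the real estimators accumulate oneDAL partial results, not numpy sums):

```python
import numpy as np

def return_type_constructor(like):
    # Simplified stand-in for the PR's helper: returns a callable that
    # converts a finalized table (here: any array-like) to an array.
    return lambda table: np.asarray(table)

class IncrementalEstimator:
    def partial_fit(self, X):
        if not hasattr(self, "_outtype"):
            # Capture the output constructor once, on first batch.
            self._outtype = return_type_constructor(X)
        # Stand-in for accumulating oneDAL partial results.
        self._partial = np.asarray(X).sum(axis=0)
        return self

    def finalize_fit(self):
        # The stored constructor is applied once, at finalization.
        self.result_ = self._outtype(self._partial)
        return self

    def __getstate__(self):
        # The constructor is not serializable; drop it like the queue.
        state = self.__dict__.copy()
        state.pop("_outtype", None)
        return state
```

Dropping the constructor in __getstate__ mirrors the existing treatment of the queue, so pickling still works between batches.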

If we want to type-hint these changes, I need help; I'm not sure how to state an array type in annotations.

I don't think performance benchmarks are necessary, but can do upon request.


PR should start as a draft, then move to the ready-for-review state after CI passes and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a docs-only PR doesn't require performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes or created a separate PR with update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added a respective label(s) to PR if I have a permission for that.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended testing suite if new functionality was introduced in this PR.

@icfaust icfaust requested a review from Alexsandruss as a code owner April 17, 2025 18:55
@icfaust icfaust requested a review from ahuber21 as a code owner April 18, 2025 00:25
@icfaust
Contributor Author

icfaust commented Apr 20, 2025

/intelci: run


codecov bot commented Apr 20, 2025

Codecov Report

Attention: Patch coverage is 71.30435% with 33 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
onedal/datatypes/dlpack/data_conversion.cpp 64.40% 2 Missing and 19 partials ⚠️
onedal/datatypes/table.cpp 44.44% 0 Missing and 5 partials ⚠️
onedal/datatypes/sycl_usm/data_conversion.cpp 40.00% 0 Missing and 3 partials ⚠️
onedal/datatypes/_data_conversion.py 89.47% 0 Missing and 2 partials ⚠️
onedal/linear_model/logistic_regression.py 0.00% 2 Missing ⚠️
Flag Coverage Δ
azure 79.85% <90.47%> (+1.66%) ⬆️
github 72.00% <71.30%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
onedal/cluster/dbscan.py 86.66% <100.00%> (ø)
onedal/common/sycl_interfaces.hpp 100.00% <ø> (ø)
onedal/covariance/covariance.py 82.22% <100.00%> (ø)
onedal/datatypes/__init__.py 100.00% <100.00%> (ø)
onedal/ensemble/forest.py 73.40% <100.00%> (ø)
onedal/linear_model/incremental_linear_model.py 91.81% <100.00%> (+0.22%) ⬆️
onedal/linear_model/linear_model.py 83.94% <100.00%> (ø)
onedal/datatypes/_data_conversion.py 93.10% <89.47%> (-0.65%) ⬇️
onedal/linear_model/logistic_regression.py 29.62% <0.00%> (ø)
onedal/datatypes/sycl_usm/data_conversion.cpp 34.56% <40.00%> (-7.30%) ⬇️
... and 2 more

... and 49 files with indirect coverage changes


@icfaust icfaust changed the title [WIP, enhancement] enable array_api return values from from_table [enhancement] enable array_api return values from from_table Apr 22, 2025
@icfaust
Contributor Author

icfaust commented Apr 22, 2025

/intelci: run


Parameters
----------
- *args : {scalar, numpy array, sycl_usm_ndarray, csr_matrix, or csr_array}
+ *args : scalar, numpy array, sycl_usm_ndarray, array API standard array,
Returns
-------
- tables: {oneDAL homogeneous tables}
+ tables: oneDAL homogeneous tables

Format here would need to be:

name_of_returned_variable : type_of_returned variable
    Description about it.
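Following that convention, a Returns section for to_table might look like the sketch below (the description text is assumed, only the numpydoc shape is the point):

```python
def to_table(*args):
    """Convert input arrays to oneDAL tables (docstring sketch).

    Returns
    -------
    tables : oneDAL homogeneous tables
        Converted tables, one per input array, in input order.
    """
    # Body omitted; this sketch only illustrates the numpydoc format.
    raise NotImplementedError
```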

@icfaust
Contributor Author

icfaust commented Apr 23, 2025

Fixes must be made to spmd testing :(

@icfaust
Contributor Author

icfaust commented Apr 23, 2025

This has implications about array-api-compat. I need to rethink this strategy.

@icfaust icfaust marked this pull request as draft April 23, 2025 15:02
@icfaust
Contributor Author

icfaust commented Apr 23, 2025

The situation is as follows. I didn't look thoroughly enough into the reasons behind array-api-compat and its inclusion in sklearn. Even if we fully support array_api inputs, that will not mean we support PyTorch out of the box. PyTorch support via the array API comes from getting the namespace through array-api-compat, meaning a manual search for __array_namespace__ won't work (which is why sklearn has get_namespace to compensate). Thus we either need to re-invent get_namespace in onedal, acquire array namespaces from sklearn's get_namespace and pass them around like we do for queues (not advised), or think of another way of doing it. The current logic is insufficient for PyTorch support.

@icfaust
Contributor Author

icfaust commented Apr 23, 2025

I'm going to have to modify the ingestion scripts (to_table) for non-contiguous inputs and dtype changes. We have a special-case trigger for dpnp in sycl_usm, and we will have to add something similar for PyTorch...
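The kind of normalization to_table may need could be sketched as follows, assuming (as the comment implies) that oneDAL tables want contiguous floating-point data; the helper name and supported-dtype set are assumptions:

```python
import numpy as np

_SUPPORTED = (np.float32, np.float64)

def normalize_for_table(arr):
    """Sketch: coerce dtype and contiguity before table ingestion."""
    arr = np.asarray(arr)
    if arr.dtype not in _SUPPORTED:
        # dtype promotion, analogous to the dpnp special case mentioned.
        arr = arr.astype(np.float64)
    if not (arr.flags["C_CONTIGUOUS"] or arr.flags["F_CONTIGUOUS"]):
        # Force a C-contiguous copy for non-contiguous views.
        arr = np.ascontiguousarray(arr)
    return arr
```

A strided slice like arr[:, ::2] would take both branches here, which is exactly the copy cost the zero-copy path is trying to avoid when possible.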
