```
Backend: oneAPIBackend()
[ Info: ... downloaded 17.0 B (100% complete, 213496.504 days)
[ Info: ... downloaded 17.0 B (100% complete, 213496.504 days)
[ Info: ... downloaded 17.0 B (100% complete, 213496.504 days)
[ Info: ... downloaded 137.1 MiB (30% complete, 213496.504 days)
[ Info: ... downloaded 182.8 MiB (40% complete, 213496.504 days)
[ Info: ... downloaded 228.5 MiB (50% complete, 213496.504 days)
[ Info: ... downloaded 274.2 MiB (60% complete, 213496.504 days)
[ Info: ... downloaded 319.9 MiB (70% complete, 213496.504 days)
[ Info: ... downloaded 365.5 MiB (80% complete, 213496.504 days)
[ Info: ... downloaded 411.2 MiB (90% complete, 213496.504 days)
[ Info: ... downloaded 456.1 MiB (100% complete, 213496.504 days)
┌ Error: Module compilation failed:
│
│ error: Total size of kernel arguments exceeds limit! Total arguments size: 2056, limit: 2048
```
Thanks to help from ALCF (@kris-rowe), this could be resolved with:

```sh
export IGC_OverrideOCLMaxParamSize=4096
```
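The same override can in principle be applied from inside the host process instead of the shell. A rough sketch in C, under the assumption that IGC only reads the variable when it JIT-compiles the first kernel (so it must be set before that point):

```c
#include <stdlib.h>

int main(void)
{
    /* Assumption: IGC reads this override at JIT-compile time, so setting
       it before any kernel is compiled has the same effect as the shell
       `export` above. */
    setenv("IGC_OverrideOCLMaxParamSize", "4096", /* overwrite = */ 1);

    /* ... initialize the driver and compile/launch kernels afterwards ... */
    return 0;
}
```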
Here is how the corresponding issue was handled in CUDA.jl: https://github.com/JuliaGPU/CUDA.jl/pull/2180/files#diff-ecbfaf5b99ab10dcafff5717c7cc5f856768e4313446fa3fb58b839a25b17cfc.

I guess we should find a way to query the hardware/compiler capabilities.
OpenCL has CL_DEVICE_MAX_PARAMETER_SIZE; I don't see an equivalent in Level Zero.
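For reference, a minimal sketch of that query in plain C (assuming a valid `cl_device_id` is already in hand; error handling elided):

```c
#include <CL/cl.h>

/* Query the maximum total size (in bytes) of the arguments that can be
   passed to a kernel on `dev`. */
static size_t max_parameter_size(cl_device_id dev)
{
    size_t max_param = 0;
    clGetDeviceInfo(dev, CL_DEVICE_MAX_PARAMETER_SIZE,
                    sizeof(max_param), &max_param, NULL);
    return max_param;
}
```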
I think that ze_device_module_properties_t.maxArgumentsSize is the equivalent.
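A sketch of that Level Zero query in C (assuming a valid `ze_device_handle_t` from device enumeration; error handling elided):

```c
#include <level_zero/ze_api.h>

/* Query the maximum total size (in bytes) of kernel arguments
   supported by `dev`. */
static uint32_t max_arguments_size(ze_device_handle_t dev)
{
    ze_device_module_properties_t props = {0};
    props.stype = ZE_STRUCTURE_TYPE_DEVICE_MODULE_PROPERTIES;
    zeDeviceGetModuleProperties(dev, &props);
    return props.maxArgumentsSize;
}
```

A launch-time check could then compare the computed kernel argument size (2056 bytes in the log above) against this value and raise a descriptive error instead of failing inside module compilation.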
Ah yes, my grep-fu was lacking this morning. Should be relatively easy to port that functionality, then.