make KandinskyV22PipelineInpaintCombinedFastTests::test_float16_inference pass on XPU #11308


Merged 1 commit on Apr 14, 2025

Conversation

yao-matrix (Contributor)

Loosen expected_max_diff from 5e-1 to 8e-1 to make KandinskyV22PipelineInpaintCombinedFastTests::test_float16_inference pass on XPU.

Signed-off-by: Matrix Yao <matrix.yao@intel.com>
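For context, the float16 fast tests compare a pipeline's fp32 output against its fp16 output and fail if the largest elementwise difference exceeds `expected_max_diff`. The helper below is a hypothetical sketch of that comparison (not the actual diffusers test code), showing why a looser tolerance like 8e-1 lets a backend with larger half-precision drift pass:

```python
import numpy as np

def assert_max_diff(out_fp32, out_fp16, expected_max_diff=8e-1):
    # Hypothetical helper mirroring the shape of the float16 fast tests:
    # fail if the largest elementwise difference exceeds the tolerance.
    max_diff = float(np.abs(out_fp32 - out_fp16).max())
    assert max_diff < expected_max_diff, (
        f"max diff {max_diff} exceeds tolerance {expected_max_diff}"
    )
    return max_diff

# Simulate fp16 rounding error on a small image-shaped output
rng = np.random.default_rng(0)
out_fp32 = rng.random((1, 64, 64, 3)).astype(np.float32)
out_fp16 = out_fp32.astype(np.float16).astype(np.float32)
assert_max_diff(out_fp32, out_fp16, expected_max_diff=8e-1)
```

On a real XPU run the difference comes from backend-specific fp16 kernels rather than simple rounding, which is why the tolerance for this pipeline is so high.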
yao-matrix (Contributor, Author)

@hlky, please help review, thanks!

hlky (Contributor) left a comment


Thanks @yao-matrix. This pipeline is not commonly used and already requires a high tolerance, so there is no issue increasing it further. If in the future we find a larger spread in required tolerances, e.g. 1e-4 on one backend and 1e-3 on another, we could consider something similar to the expected-slice changes, where expected_max_diff is set per backend type.
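The per-backend idea above could be sketched as a simple lookup keyed on the accelerator type. Everything here is hypothetical (the table, the function name, and the 5e-1/8e-1 values beyond the two discussed in this PR are illustrative, not part of diffusers):

```python
# Hypothetical per-backend tolerance table sketching the suggestion:
# pick expected_max_diff from the device type instead of one global value.
BACKEND_MAX_DIFF = {
    "cuda": 5e-1,  # value this test used before the change
    "xpu": 8e-1,   # loosened value from this PR
    "cpu": 5e-1,
}

def expected_max_diff_for_backend(device_type: str, default: float = 5e-1) -> float:
    # device_type would come from e.g. torch.device(...).type in a real test;
    # unknown backends fall back to the default tolerance.
    return BACKEND_MAX_DIFF.get(device_type, default)
```

A test would then call `expected_max_diff_for_backend("xpu")` and get 8e-1, while CUDA runners keep the tighter 5e-1, mirroring how per-backend expected slices are already selected.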

@hlky hlky merged commit aa541b9 into huggingface:main Apr 14, 2025
8 checks passed
yao-matrix (Contributor, Author)

Sure, I will work with you to implement it once it's needed.

@yao-matrix yao-matrix deleted the issue78 branch April 14, 2025 22:44