Commit 089cd7e

eellison authored and pytorchmergebot committed
Update mixed mm weight-only quant test to work with mixed mm deletion (#1772)

We're deleting the mixed_mm path in pytorch/pytorch#147151; update the test to not check for the mixed_mm kernel. Pull Request resolved: #1772. Approved by: https://github.com/drisspg
1 parent: 09ebb12

File tree: 1 file changed (+0 −2 lines)


test/integration/test_integration.py

Lines changed: 0 additions & 2 deletions
@@ -1243,8 +1243,6 @@ def test_weight_only_quant_force_mixed_mm(self, device, dtype):
         y_wo, (code,) = run_and_get_code(m_c, x)
         sqnr = compute_error(y_ref, y_wo)
         self.assertGreaterEqual(sqnr, 38)
-        if device == "cuda":
-            self.assertTrue("mixed_mm" in code, f"got code: {code}")
 
     @parameterized.expand(COMMON_DEVICE_DTYPE)
     @unittest.skipIf(not torch.cuda.is_available(), "Need CUDA available")
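The test still keeps its accuracy gate via compute_error, which in torchao reports the signal-to-quantization-noise ratio (SQNR) in decibels. A minimal sketch of that check, assuming the standard SQNR formula 20·log10(‖ref‖ / ‖ref − actual‖) and using plain Python lists instead of torch tensors (compute_error_sketch is a hypothetical stand-in, not the torchao function):

```python
import math

def compute_error_sketch(y_ref, y_wo):
    # SQNR in dB: 20 * log10(||ref|| / ||ref - actual||),
    # computed over flat lists of floats.
    signal = math.sqrt(sum(v * v for v in y_ref))
    noise = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_ref, y_wo)))
    return 20 * math.log10(signal / noise)

# A near-exact quantized result yields a high SQNR, so an
# assertGreaterEqual(sqnr, 38)-style check passes comfortably.
sqnr = compute_error_sketch([1.0, 2.0, 3.0], [1.0001, 2.0001, 3.0001])
```

With the kernel-name assertion removed, this numeric threshold is what remains to verify the weight-only quant path is still producing accurate results.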
