
[MLIR][Vector]Add constraints to vector.shape_cast(constant) -> constant #147691


Open · wants to merge 1 commit into main

Conversation

MengmSun

@MengmSun MengmSun commented Jul 9, 2025

We have a case where, after ConvertToLLVMPass, the IR looks like this:

...
%4 = llvm.mlir.constant(dense<0.000000e+00> : vector<192xf8E4M3FN>) : vector<192xi8>
%8 = vector.shape_cast %4 : vector<192xi8> to vector<1x192xi8>
%10 = vector.extract %8[0] : vector<192xi8> from vector<1x192xi8>
...

Our next pass is the Canonicalizer. Several months ago everything worked smoothly, but recently we started hitting this assertion:

mlir::DenseElementsAttr mlir::DenseElementsAttr::reshape(mlir::ShapedType): Assertion `newType.getElementType() == curType.getElementType() && "expected the same element type"' failed.

We found that this is caused by the vector.shape_cast(constant) -> constant fold in ShapeCastOp::fold(), which the Canonicalizer pass applies by calling reshape on the source attribute. That reshape call asserts when the element type of the source attribute differs from the element type of the result type.

So we add a constraint: the fold returns the reshaped attribute only when the element type of the source attribute matches the element type of the result type. This makes our case work as before and does not affect other cases.


github-actions bot commented Jul 9, 2025

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository; in that case, you can instead tag reviewers by name in a comment using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by pinging the PR with a comment saying "Ping". The common courtesy ping rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

@llvmbot
Member

llvmbot commented Jul 9, 2025

@llvm/pr-subscribers-mlir-vector

@llvm/pr-subscribers-mlir

Author: Mengmeng Sun (MengmSun)

Changes

(PR description quoted above.)
Full diff: https://github.com/llvm/llvm-project/pull/147691.diff

2 Files Affected:

  • (modified) mlir/lib/Dialect/Vector/IR/VectorOps.cpp (+6-3)
  • (modified) mlir/test/Dialect/Vector/canonicalize.mlir (+12)
diff --git a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
index 214d2ba7e1b8e..5bbe6704aac48 100644
--- a/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
+++ b/mlir/lib/Dialect/Vector/IR/VectorOps.cpp
@@ -5922,10 +5922,13 @@ OpFoldResult ShapeCastOp::fold(FoldAdaptor adaptor) {
       return bcastOp.getSource();
   }
 
-  // shape_cast(constant) -> constant
+  // shape_cast(constant) -> constant,
+  // if element type of the source and result are the same
   if (auto splatAttr =
-          llvm::dyn_cast_if_present<SplatElementsAttr>(adaptor.getSource()))
-    return splatAttr.reshape(getType());
+          llvm::dyn_cast_if_present<SplatElementsAttr>(adaptor.getSource())) {
+    if (splatAttr.getElementType() == resultType.getElementType())
+      return splatAttr.reshape(getType());
+  }
 
   // shape_cast(poison) -> poison
   if (llvm::dyn_cast_if_present<ub::PoisonAttr>(adaptor.getSource())) {
diff --git a/mlir/test/Dialect/Vector/canonicalize.mlir b/mlir/test/Dialect/Vector/canonicalize.mlir
index 8a9e27378df61..69da8a31d2c9b 100644
--- a/mlir/test/Dialect/Vector/canonicalize.mlir
+++ b/mlir/test/Dialect/Vector/canonicalize.mlir
@@ -1002,6 +1002,18 @@ func.func @fold_broadcast_shapecast(%arg0: vector<4xf32>) -> vector<4xf32> {
 
 // -----
 
+// CHECK-LABEL: func @canonicalize_extract_shapecast_different_element_type
+func.func @canonicalize_extract_shapecast_different_element_type()->vector<12xi8> {
+  %0 = llvm.mlir.constant(dense<0.000000e+00> : vector<12xf8E4M3FN>) : vector<12xi8>
+  // CHECK-NOT: vector.shape_cast
+  %1 = vector.shape_cast %0 : vector<12xi8> to vector<1x12xi8>
+  // CHECK-NOT: vector.extract
+  %2 = vector.extract %1[0] : vector<12xi8> from vector<1x12xi8>
+  return %2 : vector<12xi8>
+}
+
+// -----
+
 // CHECK-LABEL: func @canonicalize_broadcast_shapecast_scalar
 //       CHECK:   vector.broadcast
 //   CHECK-NOT:   vector.shape_cast
