Use COFF image-base-relative jump tables on AMD64 #147625


Open
wants to merge 1 commit into main

Conversation

sivadeilra
Contributor

This changes codegen for jump tables on AMD64 to match the behavior of MSVC, which increases compatibility with Windows-related development tools such as binary analyzers and rewriters (including hot-patching tools).

This changes jump table codegen to use the same format that MSVC uses on AMD64: unsigned 32-bit offsets relative to the COFF image base. LLVM already has support for generating image-base-relative symbol offsets; this PR wires that support up to the jump table codegen.
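
For example, the jump table for a four-case switch is now emitted into .rdata as image-base-relative entries. The sketch below mirrors the expectations in the updated win64-jumptable.ll test; the exact label numbers are illustrative:

    .LJTI0_0:
            .long   .LBB0_2@IMGREL
            .long   .LBB0_3@IMGREL
            .long   .LBB0_4@IMGREL
            .long   .LBB0_5@IMGREL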

This PR:

  • Adds a new jump table entry kind, EK_CoffImgRel32, for jump table entries that are relative to the COFF image base (serialized in MIR as coff-imgrel32; see the sketch after this list).
  • Adds codegen for EK_CoffImgRel32 in both the SelectionDAG and SjLj code paths.
  • Selects EK_CoffImgRel32 when targeting AMD64.
  • Adds a new MO_COFF_IMGREL32 operand flag for X86 operands. (This is very similar to existing operand flags, which handle encodings that are specific to the target environment.) Symbol operands that use MO_COFF_IMGREL32 will be encoded as a 32-bit unsigned offset.
  • Updates tests to verify results when targeting Windows, and verifies that this code is not active when not targeting Windows.
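
As a rough sketch of how the new entry kind round-trips through MIR serialization (the function and block names here are illustrative, not taken from this PR's tests):

    jumpTable:
      kind:            coff-imgrel32
      entries:
        - id:              0
          blocks:          [ '%bb.2', '%bb.3', '%bb.4', '%bb.5' ]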

This PR targets only AMD64; AArch64 (ARM64) support is intended to follow in a future PR. This PR already adds most of the support necessary for AArch64, so the follow-up PR that adds imgrel32 support for AArch64 will be smaller.

The codegen uses a pattern very similar to that of existing LLVM and MSVC output, using a RIP-relative lea to form the table address and a RIP-relative lea to form the image base.
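
Concretely, for the PIC relocation model the updated win64-jumptable.ll test expects a dispatch sequence along these lines:

            movl    %ecx, %eax
            leaq    .LJTI0_0(%rip), %rcx
            movl    (%rcx,%rax,4), %eax
            leaq    __ImageBase(%rip), %rcx
            addq    %rax, %rcx
            jmpq    *%rcx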

Performance testing showed no difference between LLVM-style jump tables (such as EK_LabelDifference32) and MSVC-style jump tables.

@llvmbot
Member

llvmbot commented Jul 9, 2025

@llvm/pr-subscribers-llvm-selectiondag
@llvm/pr-subscribers-debuginfo

@llvm/pr-subscribers-backend-x86

Author: None (sivadeilra)


Full diff: https://github.com/llvm/llvm-project/pull/147625.diff

13 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/MIRYamlMapping.h (+2)
  • (modified) llvm/include/llvm/CodeGen/MachineJumpTableInfo.h (+9-1)
  • (modified) llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp (+8)
  • (modified) llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp (+4)
  • (modified) llvm/lib/CodeGen/MachineFunction.cpp (+21)
  • (modified) llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp (+10-5)
  • (modified) llvm/lib/Target/X86/MCTargetDesc/X86BaseInfo.h (+3)
  • (modified) llvm/lib/Target/X86/X86ISelLowering.cpp (+31)
  • (modified) llvm/lib/Target/X86/X86ISelLoweringCall.cpp (+5)
  • (modified) llvm/lib/Target/X86/X86MCInstLower.cpp (+3)
  • (modified) llvm/test/CodeGen/X86/sjlj-eh.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/win-import-call-optimization-jumptable.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/win64-jumptable.ll (+29-15)
diff --git a/llvm/include/llvm/CodeGen/MIRYamlMapping.h b/llvm/include/llvm/CodeGen/MIRYamlMapping.h
index 119786f045ed9..e07cdc678579e 100644
--- a/llvm/include/llvm/CodeGen/MIRYamlMapping.h
+++ b/llvm/include/llvm/CodeGen/MIRYamlMapping.h
@@ -141,6 +141,8 @@ template <> struct ScalarEnumerationTraits<MachineJumpTableInfo::JTEntryKind> {
                 MachineJumpTableInfo::EK_LabelDifference64);
     IO.enumCase(EntryKind, "inline", MachineJumpTableInfo::EK_Inline);
     IO.enumCase(EntryKind, "custom32", MachineJumpTableInfo::EK_Custom32);
+    IO.enumCase(EntryKind, "coff-imgrel32",
+                MachineJumpTableInfo::EK_CoffImgRel32);
   }
 };
 
diff --git a/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h b/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h
index 1dd2371bd4582..750d37c4eff96 100644
--- a/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h
+++ b/llvm/include/llvm/CodeGen/MachineJumpTableInfo.h
@@ -85,7 +85,12 @@ class MachineJumpTableInfo {
 
     /// EK_Custom32 - Each entry is a 32-bit value that is custom lowered by the
     /// TargetLowering::LowerCustomJumpTableEntry hook.
-    EK_Custom32
+    EK_Custom32,
+
+    // EK_CoffImgRel32 - In PE/COFF (Windows) images, each entry is a 32-bit
+    // unsigned offset that is added to the image base.
+    //       .word LBB123@IMGREL
+    EK_CoffImgRel32,
   };
 
 private:
@@ -100,6 +105,9 @@ class MachineJumpTableInfo {
   LLVM_ABI unsigned getEntrySize(const DataLayout &TD) const;
   /// getEntryAlignment - Return the alignment of each entry in the jump table.
   LLVM_ABI unsigned getEntryAlignment(const DataLayout &TD) const;
+  /// getEntryIsSigned - Return true if the load for the jump table index
+  /// should use signed extension, false if zero extension (unsigned)
+  LLVM_ABI bool getEntryIsSigned() const;
 
   /// createJumpTableIndex - Create a new jump table.
   ///
diff --git a/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp b/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
index 07d9380a02c43..2c949614d9ff4 100644
--- a/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
@@ -3136,6 +3136,14 @@ void AsmPrinter::emitJumpTableEntry(const MachineJumpTableInfo &MJTI,
     Value = MCBinaryExpr::createSub(Value, Base, OutContext);
     break;
   }
+
+  case MachineJumpTableInfo::EK_CoffImgRel32: {
+    // This generates an unsigned 32-bit offset, which is MBB's address minus
+    // the COFF image base.
+    Value = MCSymbolRefExpr::create(
+        MBB->getSymbol(), MCSymbolRefExpr::VK_COFF_IMGREL32, OutContext);
+    break;
+  }
   }
 
   assert(Value && "Unknown entry kind!");
diff --git a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp
index bc74daf983e40..8f36c6b2d8d85 100644
--- a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp
@@ -3564,6 +3564,10 @@ void CodeViewDebug::collectDebugInfoForJumpTables(const MachineFunction *MF,
           std::tie(Base, BaseOffset, Branch, EntrySize) =
               Asm->getCodeViewJumpTableInfo(JumpTableIndex, &BranchMI, Branch);
           break;
+        case MachineJumpTableInfo::EK_CoffImgRel32:
+          EntrySize = JumpTableEntrySize::UInt32;
+          Base = nullptr;
+          break;
         }
 
         const MachineJumpTableEntry &JTE = JTI.getJumpTables()[JumpTableIndex];
diff --git a/llvm/lib/CodeGen/MachineFunction.cpp b/llvm/lib/CodeGen/MachineFunction.cpp
index 38ad582ba923c..e3d52f4125d00 100644
--- a/llvm/lib/CodeGen/MachineFunction.cpp
+++ b/llvm/lib/CodeGen/MachineFunction.cpp
@@ -1328,6 +1328,7 @@ unsigned MachineJumpTableInfo::getEntrySize(const DataLayout &TD) const {
   case MachineJumpTableInfo::EK_GPRel32BlockAddress:
   case MachineJumpTableInfo::EK_LabelDifference32:
   case MachineJumpTableInfo::EK_Custom32:
+  case MachineJumpTableInfo::EK_CoffImgRel32:
     return 4;
   case MachineJumpTableInfo::EK_Inline:
     return 0;
@@ -1348,6 +1349,7 @@ unsigned MachineJumpTableInfo::getEntryAlignment(const DataLayout &TD) const {
     return TD.getABIIntegerTypeAlignment(64).value();
   case MachineJumpTableInfo::EK_GPRel32BlockAddress:
   case MachineJumpTableInfo::EK_LabelDifference32:
+  case MachineJumpTableInfo::EK_CoffImgRel32:
   case MachineJumpTableInfo::EK_Custom32:
     return TD.getABIIntegerTypeAlignment(32).value();
   case MachineJumpTableInfo::EK_Inline:
@@ -1356,6 +1358,25 @@ unsigned MachineJumpTableInfo::getEntryAlignment(const DataLayout &TD) const {
   llvm_unreachable("Unknown jump table encoding!");
 }
 
+/// getEntryIsSigned - Return true if the load for the jump table index
+/// should use signed extension, false if zero extension (unsigned)
+bool MachineJumpTableInfo::getEntryIsSigned() const {
+  switch (getEntryKind()) {
+  case MachineJumpTableInfo::EK_BlockAddress:
+  case MachineJumpTableInfo::EK_GPRel64BlockAddress:
+  case MachineJumpTableInfo::EK_GPRel32BlockAddress:
+  case MachineJumpTableInfo::EK_LabelDifference32:
+  case MachineJumpTableInfo::EK_LabelDifference64:
+  case MachineJumpTableInfo::EK_Inline:
+  case MachineJumpTableInfo::EK_Custom32:
+    return true;
+
+  case MachineJumpTableInfo::EK_CoffImgRel32:
+    return false;
+  }
+  llvm_unreachable("Unknown jump table encoding!");
+}
+
 /// Create a new jump table entry in the jump table info.
 unsigned MachineJumpTableInfo::createJumpTableIndex(
                                const std::vector<MachineBasicBlock*> &DestBBs) {
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
index a48dd0e5fedba..7bb6cadbce378 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
@@ -4133,8 +4133,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     const DataLayout &TD = DAG.getDataLayout();
     EVT PTy = TLI.getPointerTy(TD);
 
-    unsigned EntrySize =
-      DAG.getMachineFunction().getJumpTableInfo()->getEntrySize(TD);
+    MachineJumpTableInfo *MJTI = DAG.getMachineFunction().getJumpTableInfo();
+    unsigned EntrySize = MJTI->getEntrySize(TD);
 
     // For power-of-two jumptable entry sizes convert multiplication to a shift.
     // This transformation needs to be done here since otherwise the MIPS
@@ -4151,10 +4151,15 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
 
     EVT MemVT = EVT::getIntegerVT(*DAG.getContext(), EntrySize * 8);
     SDValue LD = DAG.getExtLoad(
-        ISD::SEXTLOAD, dl, PTy, Chain, Addr,
-        MachinePointerInfo::getJumpTable(DAG.getMachineFunction()), MemVT);
+        MJTI->getEntryIsSigned() ? ISD::SEXTLOAD : ISD::ZEXTLOAD, dl, PTy,
+        Chain, Addr, MachinePointerInfo::getJumpTable(DAG.getMachineFunction()),
+        MemVT);
     Addr = LD;
-    if (TLI.isJumpTableRelative()) {
+    if (MJTI->getEntryKind() == MachineJumpTableInfo::EK_CoffImgRel32) {
+      SDValue ImageBase = DAG.getExternalSymbol(
+          "__ImageBase", TLI.getPointerTy(DAG.getDataLayout()));
+      Addr = DAG.getMemBasePlusOffset(ImageBase, Addr, dl);
+    } else if (TLI.isJumpTableRelative()) {
       // For PIC, the sequence is:
       // BRIND(RelocBase + load(Jumptable + index))
       // RelocBase can be JumpTable, GOT or some sort of global base.
diff --git a/llvm/lib/Target/X86/MCTargetDesc/X86BaseInfo.h b/llvm/lib/Target/X86/MCTargetDesc/X86BaseInfo.h
index 569484704a249..8c20550bf5129 100644
--- a/llvm/lib/Target/X86/MCTargetDesc/X86BaseInfo.h
+++ b/llvm/lib/Target/X86/MCTargetDesc/X86BaseInfo.h
@@ -486,6 +486,9 @@ enum TOF {
   /// reference is actually to the ".refptr.FOO" symbol.  This is used for
   /// stub symbols on windows.
   MO_COFFSTUB,
+  /// MO_COFF_IMGREL32: Indicates that the operand value is unsigned 32-bit
+  /// offset from ImageBase to a symbol (basically .imgrel32).
+  MO_COFF_IMGREL32,
 };
 
 enum : uint64_t {
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index fd617f7062313..8c6de28398ce9 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -37617,6 +37617,37 @@ X86TargetLowering::EmitSjLjDispatchBlock(MachineInstr &MI,
       BuildMI(DispContBB, MIMD, TII->get(X86::JMP64r)).addReg(TReg);
       break;
     }
+    case MachineJumpTableInfo::EK_CoffImgRel32: {
+      Register ImageBaseReg = MRI->createVirtualRegister(&X86::GR64RegClass);
+      Register OReg64 = MRI->createVirtualRegister(&X86::GR64RegClass);
+      Register TReg = MRI->createVirtualRegister(&X86::GR64RegClass);
+
+      // movl (BReg,IReg64,4), OReg
+      // This implicitly zero-extends from uint32 to uint64.
+      BuildMI(DispContBB, MIMD, TII->get(X86::MOV32rm), OReg64)
+          .addReg(BReg)
+          .addImm(4)
+          .addReg(IReg64)
+          .addImm(0)
+          .addReg(0);
+
+      // leaq (__ImageBase,OReg64), ImageBaseReg
+      BuildMI(DispContBB, MIMD, TII->get(X86::LEA64r), ImageBaseReg)
+          .addReg(X86::RIP)
+          .addImm(0)
+          .addReg(0)
+          .addExternalSymbol("__ImageBase", X86II::MO_COFF_IMGREL32)
+          .addReg(0);
+
+      // addq ImageBaseReg, OReg64
+      BuildMI(DispContBB, MIMD, TII->get(X86::ADD64rr), TReg)
+          .addReg(ImageBaseReg)
+          .addReg(OReg64);
+
+      // jmpq *TReg
+      BuildMI(DispContBB, MIMD, TII->get(X86::JMP64r)).addReg(TReg);
+      break;
+    }
     default:
       llvm_unreachable("Unexpected jump table encoding");
     }
diff --git a/llvm/lib/Target/X86/X86ISelLoweringCall.cpp b/llvm/lib/Target/X86/X86ISelLoweringCall.cpp
index cb38a39ff991d..9c2786ae75ddb 100644
--- a/llvm/lib/Target/X86/X86ISelLoweringCall.cpp
+++ b/llvm/lib/Target/X86/X86ISelLoweringCall.cpp
@@ -420,6 +420,11 @@ bool X86TargetLowering::allowsMemoryAccess(LLVMContext &Context,
 /// current function.  The returned value is a member of the
 /// MachineJumpTableInfo::JTEntryKind enum.
 unsigned X86TargetLowering::getJumpTableEncoding() const {
+  // Always use EK_CoffImgRel32 for 64-bit Windows targets.
+  if (Subtarget.isTargetWin64()) {
+    return MachineJumpTableInfo::EK_CoffImgRel32;
+  }
+
   // In GOT pic mode, each entry in the jump table is emitted as a @GOTOFF
   // symbol.
   if (isPositionIndependent() && Subtarget.isPICStyleGOT())
diff --git a/llvm/lib/Target/X86/X86MCInstLower.cpp b/llvm/lib/Target/X86/X86MCInstLower.cpp
index 45d596bb498f6..41ae0c2d21cd6 100644
--- a/llvm/lib/Target/X86/X86MCInstLower.cpp
+++ b/llvm/lib/Target/X86/X86MCInstLower.cpp
@@ -320,6 +320,9 @@ MCOperand X86MCInstLower::LowerSymbolOperand(const MachineOperand &MO,
       Expr = MCSymbolRefExpr::create(Label, Ctx);
     }
     break;
+  case X86II::MO_COFF_IMGREL32:
+    Expr = MCSymbolRefExpr::create(Sym, MCSymbolRefExpr::VK_COFF_IMGREL32, Ctx);
+    break;
   }
 
   if (!Expr)
diff --git a/llvm/test/CodeGen/X86/sjlj-eh.ll b/llvm/test/CodeGen/X86/sjlj-eh.ll
index d2dcb35a4908e..4bc24c574d752 100644
--- a/llvm/test/CodeGen/X86/sjlj-eh.ll
+++ b/llvm/test/CodeGen/X86/sjlj-eh.ll
@@ -117,10 +117,10 @@ try.cont:
 ; CHECK-X64: [[CONT]]:
 ;     *Handlers[UFC.__callsite]
 ; CHECK-X64: leaq .[[TABLE:LJTI[0-9]+_[0-9]+]](%rip), %rcx
-; CHECK-X64: movl (%rcx,%rax,4), %eax
-; CHECK-X64: cltq
-; CHECK-X64: addq %rcx, %rax
-; CHECK-X64: jmpq *%rax
+; CHECK-X64: movl (%rcx,%rax,4), %rax
+; CHECK-X64: leaq __ImageBase@IMGREL(%rip), %rcx
+; CHECK-X64: addq %rax, %rcx
+; CHECK-X64: jmpq *%rcx
 
 ; CHECK-X64-LINUX: .[[RESUME:LBB[0-9]+_[0-9]+]]:
 ;     assert(UFC.__callsite < 1);
diff --git a/llvm/test/CodeGen/X86/win-import-call-optimization-jumptable.ll b/llvm/test/CodeGen/X86/win-import-call-optimization-jumptable.ll
index fe22b251685e6..d7f3bc20f2881 100644
--- a/llvm/test/CodeGen/X86/win-import-call-optimization-jumptable.ll
+++ b/llvm/test/CodeGen/X86/win-import-call-optimization-jumptable.ll
@@ -2,7 +2,7 @@
 
 ; CHECK-LABEL:  uses_rax:
 ; CHECK:        .Limpcall0:
-; CHECK-NEXT:     jmpq    *%rax
+; CHECK-NEXT:     jmpq    *%rcx
 
 define void @uses_rax(i32 %x) {
 entry:
@@ -74,7 +74,7 @@ declare void @g(i32)
 ; CHECK-NEXT:   .asciz  "RetpolineV1"
 ; CHECK-NEXT:   .long   24
 ; CHECK-NEXT:   .secnum .text
-; CHECK-NEXT:   .long   16
+; CHECK-NEXT:   .long   17
 ; CHECK-NEXT:   .secoffset      .Limpcall0
 ; CHECK-NEXT:   .long   17
 ; CHECK-NEXT:   .secoffset      .Limpcall1
diff --git a/llvm/test/CodeGen/X86/win64-jumptable.ll b/llvm/test/CodeGen/X86/win64-jumptable.ll
index 17ef0d333a727..507b924d20a07 100644
--- a/llvm/test/CodeGen/X86/win64-jumptable.ll
+++ b/llvm/test/CodeGen/X86/win64-jumptable.ll
@@ -1,6 +1,6 @@
 ; RUN: llc < %s -relocation-model=static | FileCheck %s
-; RUN: llc < %s -relocation-model=pic | FileCheck %s --check-prefix=PIC
-; RUN: llc < %s -relocation-model=pic -code-model=large | FileCheck %s --check-prefix=PIC
+; RUN: llc < %s -relocation-model=pic | FileCheck %s --check-prefixes=CHECK,PIC
+; RUN: llc < %s -relocation-model=pic -code-model=large | FileCheck %s --check-prefixes=CHECK,LARGE
 
 ; FIXME: Remove '-relocation-model=static' when it is no longer necessary to
 ; trigger the separate .rdata section.
@@ -43,25 +43,39 @@ declare void @g(i32)
 ; CHECK: .text
 ; CHECK: f:
 ; CHECK: .seh_proc f
-; CHECK: jmpq    *.LJTI0_0
+; CHECK: .seh_endprologue
+
+; STATIC: movl .LJTI0_0(,%rax,4), %eax
+; STATIC: leaq __ImageBase(%rax), %rax
+; STATIC: jmpq *%rax
+
+; PIC: movl %ecx, %eax
+; PIC: leaq .LJTI0_0(%rip), %rcx
+; PIC: movl (%rcx,%rax,4), %eax
+; PIC: leaq __ImageBase(%rip), %rcx
+; PIC: addq %rax, %rcx
+; PIC: jmpq *%rcx
+
+; LARGE: movl %ecx, %eax
+; LARGE-NEXT: movabsq $.LJTI0_0, %rcx
+; LARGE-NEXT: movl (%rcx,%rax,4), %eax
+; LARGE-NEXT: movabsq $__ImageBase, %rcx
+; LARGE-NEXT: addq %rax, %rcx
+; LARGE-NEXT: jmpq *%rcx
+
 ; CHECK: .LBB0_{{.*}}: # %sw.bb
 ; CHECK: .LBB0_{{.*}}: # %sw.bb2
 ; CHECK: .LBB0_{{.*}}: # %sw.bb3
 ; CHECK: .LBB0_{{.*}}: # %sw.bb1
-; CHECK: callq g
-; CHECK: jmp g # TAILCALL
+; STATIC: callq g
+; STATIC: jmp g # TAILCALL
 ; CHECK: .section        .rdata,"dr"
-; CHECK: .quad .LBB0_
-; CHECK: .quad .LBB0_
-; CHECK: .quad .LBB0_
-; CHECK: .quad .LBB0_
+; CHECK: .LJTI0_0:
+; CHECK: .long .LBB0_{{[0-9]+}}@IMGREL
+; CHECK: .long .LBB0_{{[0-9]+}}@IMGREL
+; CHECK: .long .LBB0_{{[0-9]+}}@IMGREL
+; CHECK: .long .LBB0_{{[0-9]+}}@IMGREL
 
 ; It's important that we switch back to .text here, not .rdata.
 ; CHECK: .text
 ; CHECK: .seh_endproc
-
-; Windows PIC code should use 32-bit entries
-; PIC: .long .LBB0_2-.LJTI0_0
-; PIC: .long .LBB0_3-.LJTI0_0
-; PIC: .long .LBB0_4-.LJTI0_0
-; PIC: .long .LBB0_5-.LJTI0_0

@sivadeilra changed the title from "Use COFF image-base.relative jump tables on AMD64" to "Use COFF image-base-relative jump tables on AMD64" on Jul 9, 2025
Comment on lines +144 to +145
IO.enumCase(EntryKind, "coff-imgrel32",
MachineJumpTableInfo::EK_CoffImgRel32);
Contributor

Needs MIR support tests in test/CodeGen/MIR.
