[NVPTX] Add NVPTXIncreaseAlignmentPass to improve vectorization #144958

Open · wants to merge 2 commits into main · Changes from 1 commit

1 change: 1 addition & 0 deletions llvm/lib/Target/NVPTX/CMakeLists.txt
@@ -26,6 +26,7 @@ set(NVPTXCodeGen_sources
NVPTXISelLowering.cpp
NVPTXLowerAggrCopies.cpp
NVPTXLowerAlloca.cpp
NVPTXIncreaseAlignment.cpp
NVPTXLowerArgs.cpp
NVPTXLowerUnreachable.cpp
NVPTXMCExpr.cpp
7 changes: 7 additions & 0 deletions llvm/lib/Target/NVPTX/NVPTX.h
@@ -55,6 +55,7 @@ FunctionPass *createNVPTXTagInvariantLoadsPass();
MachineFunctionPass *createNVPTXPeephole();
MachineFunctionPass *createNVPTXProxyRegErasurePass();
MachineFunctionPass *createNVPTXForwardParamsPass();
FunctionPass *createNVPTXIncreaseLocalAlignmentPass();

void initializeNVVMReflectLegacyPassPass(PassRegistry &);
void initializeGenericToNVVMLegacyPassPass(PassRegistry &);
@@ -76,6 +77,7 @@ void initializeNVPTXAAWrapperPassPass(PassRegistry &);
void initializeNVPTXExternalAAWrapperPass(PassRegistry &);
void initializeNVPTXPeepholePass(PassRegistry &);
void initializeNVPTXTagInvariantLoadLegacyPassPass(PassRegistry &);
void initializeNVPTXIncreaseLocalAlignmentLegacyPassPass(PassRegistry &);

struct NVVMIntrRangePass : PassInfoMixin<NVVMIntrRangePass> {
PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM);
@@ -111,6 +113,11 @@ struct NVPTXTagInvariantLoadsPass : PassInfoMixin<NVPTXTagInvariantLoadsPass> {
PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM);
};

struct NVPTXIncreaseLocalAlignmentPass
: PassInfoMixin<NVPTXIncreaseLocalAlignmentPass> {
PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM);
};

namespace NVPTX {
enum DrvInterface {
NVCL,
131 changes: 131 additions & 0 deletions llvm/lib/Target/NVPTX/NVPTXIncreaseAlignment.cpp
@@ -0,0 +1,131 @@
//===-- NVPTXIncreaseAlignment.cpp - Increase alignment for local arrays --===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// A simple pass that looks at local memory arrays that are statically
// sized and sets an appropriate alignment for them. This enables vectorization
// of loads/stores to these arrays if not explicitly specified by the client.
//
// TODO: Ideally we should do a bin-packing of local arrays to maximize
// alignments while minimizing holes.
Comment on lines +13 to +14

Member: If someone finds themselves with so much local memory that the holes matter, proper alignment of locals will likely be lost in the noise of their actual performance issues.

If we provide a knob that lets the user explicitly set the alignment for locals, that would be a sufficient escape hatch for finding an acceptable gaps-vs-alignment trade-off.

Member Author: That's a good point. I agree that improving most programs is more important than handling these edge cases as well as possible. Do you think I should change the default behavior of the pass to the more aggressive alignment-improvement heuristic?

Member: With changes that are likely to affect everyone, the typical approach is to introduce the feature as optional (or enabled only for clear wins), then allow wider testing with more aggressive settings (I can help with that). If the changes are deemed relatively low risk, aggressive defaults plus an escape hatch to disable them also works.

I think in this case we're probably OK with aligning aggressively.

In fact, I think it will accidentally benefit cutlass (NVIDIA/cutlass#2003 (comment)), which has code with known UB: it uses local variables and then performs vector loads/stores on them, assuming they are always aligned. That works in optimized builds, where the locals are optimized away, but fails in debug builds.

//
//===----------------------------------------------------------------------===//

#include "NVPTX.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/Pass.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/MathExtras.h"

using namespace llvm;

static cl::opt<bool>
MaxLocalArrayAlignment("nvptx-use-max-local-array-alignment",
cl::init(false), cl::Hidden,
cl::desc("Use maximum alignment for local memory"));
Comment on lines +32 to +34

Member: We may as well allow the option to specify an exact alignment, instead of a boolean knob with a vague "max" value.

Member Author: The way this knob works now, I don't think it can be expressed in terms of an exact alignment. The option controls whether we conservatively use the maximum "safe" alignment (an alignment that evenly divides the aggregate size, to avoid introducing new holes) or the maximum "useful" alignment (as large an alignment as possible without going past the limits of what we can load/store in a single instruction).

Member: OK. How about an upper limit on the alignment? E.g. the pass may want to align a float3 by 16, but the user wants to avoid/minimize the gaps and limit the alignment to 4 or 8.


static constexpr Align MaxPTXArrayAlignment = Align::Constant<16>();

/// Get the maximum useful alignment for an array. This is more likely to
/// produce holes in the local memory.
///
/// Choose an alignment large enough that the entire array could be loaded with
/// a single vector load (if possible). Cap the alignment at
/// MaxPTXArrayAlignment.
static Align getAggressiveArrayAlignment(const unsigned ArraySize) {
return std::min(MaxPTXArrayAlignment, Align(PowerOf2Ceil(ArraySize)));
}

/// Get the alignment of arrays that reduces the chances of leaving holes when
/// arrays are allocated within a contiguous memory buffer (like shared memory
/// and stack). Holes are still possible before and after the array allocation.
///
/// Choose the largest alignment such that the array size is a multiple of the
/// alignment. If all elements of the buffer are allocated in order of
/// alignment (higher to lower) no holes will be left.
static Align getConservativeArrayAlignment(const unsigned ArraySize) {
return commonAlignment(MaxPTXArrayAlignment, ArraySize);
}

/// Find a better alignment for local arrays
static bool updateAllocaAlignment(const DataLayout &DL, AllocaInst *Alloca) {
// Looking for statically sized local arrays
if (!Alloca->isStaticAlloca())
return false;

// For now, we only support array allocas
Member: Any particular reason it can't be applied to all aggregates with a known in-memory size?

It's not uncommon to have class/struct instances as local variables. E.g. the use of float3 in all sorts of temporary vars is ubiquitous.

Member Author: While it would be correct to apply this to any static alloca, I think the large majority of the useful cases are arrays. I would expect SROA to eliminate nearly all other aggregates such as float3; this pass is generally for cases where we cannot use registers due to dynamic indexing, which is most likely to occur with arrays.

Member: Aggregates can contain arrays, so your reasoning about dynamic indexing applies there, too.

Also, dynamic indexing (and SROA failures) indirectly applies to aggregates as well. E.g. x = condition ? f3.x : f3.y.
Another source of problems is code that takes the address of a local variable and passes it around. Being able to access such a pointer with a high enough alignment is useful, IMO. Think of capturing lambdas: they end up in all sorts of weird use cases and often capture things without the user being aware of the specific details.

Would applying the alignment to aggregates add a lot more complexity? Even if you are correct that most of the useful cases are arrays, I think there's a benefit to applying it uniformly to all aggregates. If anything, it avoids having to explain why we've singled out only arrays for the benefit.

Member Author: That's true. I've removed the check limiting this to arrays, and now we'll try to improve the alignment of any static alloca.

if (!(Alloca->isArrayAllocation() || Alloca->getAllocatedType()->isArrayTy()))
return false;

const auto ArraySize = Alloca->getAllocationSize(DL);
if (!(ArraySize && ArraySize->isFixed()))
return false;

const auto ArraySizeValue = ArraySize->getFixedValue();
const Align PreferredAlignment =
MaxLocalArrayAlignment ? getAggressiveArrayAlignment(ArraySizeValue)
: getConservativeArrayAlignment(ArraySizeValue);

if (PreferredAlignment > Alloca->getAlign()) {
Alloca->setAlignment(PreferredAlignment);
return true;
}

return false;
}

static bool runSetLocalArrayAlignment(Function &F) {
bool Changed = false;
const DataLayout &DL = F.getParent()->getDataLayout();

BasicBlock &EntryBB = F.getEntryBlock();
for (Instruction &I : EntryBB)
if (AllocaInst *Alloca = dyn_cast<AllocaInst>(&I))
Changed |= updateAllocaAlignment(DL, Alloca);

return Changed;
}

namespace {
struct NVPTXIncreaseLocalAlignmentLegacyPass : public FunctionPass {
static char ID;
NVPTXIncreaseLocalAlignmentLegacyPass() : FunctionPass(ID) {}

bool runOnFunction(Function &F) override;
StringRef getPassName() const override {
return "NVPTX Increase Local Alignment";
}
};
} // namespace

char NVPTXIncreaseLocalAlignmentLegacyPass::ID = 0;
INITIALIZE_PASS(NVPTXIncreaseLocalAlignmentLegacyPass,
"nvptx-increase-local-alignment",
"Increase alignment for statically sized alloca arrays", false,
false)

FunctionPass *llvm::createNVPTXIncreaseLocalAlignmentPass() {
return new NVPTXIncreaseLocalAlignmentLegacyPass();
}

bool NVPTXIncreaseLocalAlignmentLegacyPass::runOnFunction(Function &F) {
return runSetLocalArrayAlignment(F);
}

PreservedAnalyses
NVPTXIncreaseLocalAlignmentPass::run(Function &F, FunctionAnalysisManager &AM) {
bool Changed = runSetLocalArrayAlignment(F);

if (!Changed)
return PreservedAnalyses::all();

PreservedAnalyses PA;
PA.preserveSet<CFGAnalyses>();
return PA;
}
1 change: 1 addition & 0 deletions llvm/lib/Target/NVPTX/NVPTXPassRegistry.def
@@ -40,4 +40,5 @@ FUNCTION_PASS("nvvm-intr-range", NVVMIntrRangePass())
FUNCTION_PASS("nvptx-copy-byval-args", NVPTXCopyByValArgsPass())
FUNCTION_PASS("nvptx-lower-args", NVPTXLowerArgsPass(*this))
FUNCTION_PASS("nvptx-tag-invariant-loads", NVPTXTagInvariantLoadsPass())
FUNCTION_PASS("nvptx-increase-local-alignment", NVPTXIncreaseLocalAlignmentPass())
#undef FUNCTION_PASS
2 changes: 2 additions & 0 deletions llvm/lib/Target/NVPTX/NVPTXTargetMachine.cpp
@@ -392,6 +392,8 @@ void NVPTXPassConfig::addIRPasses() {
// but EarlyCSE can do neither of them.
if (getOptLevel() != CodeGenOptLevel::None) {
addEarlyCSEOrGVNPass();
// Increase alignment for local arrays to improve vectorization.
addPass(createNVPTXIncreaseLocalAlignmentPass());
if (!DisableLoadStoreVectorizer)
addPass(createLoadStoreVectorizerPass());
addPass(createSROAPass());
2 changes: 1 addition & 1 deletion llvm/test/CodeGen/NVPTX/call-with-alloca-buffer.ll
@@ -20,7 +20,7 @@ define ptx_kernel void @kernel_func(ptr %a) {
entry:
%buf = alloca [16 x i8], align 4

; CHECK: .local .align 4 .b8 __local_depot0[16]
; CHECK: .local .align 16 .b8 __local_depot0[16]
; CHECK: mov.b64 %SPL

; CHECK: ld.param.b64 %rd[[A_REG:[0-9]+]], [kernel_func_param_0]
85 changes: 85 additions & 0 deletions llvm/test/CodeGen/NVPTX/increase-local-align.ll
@@ -0,0 +1,85 @@
; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
; RUN: opt -S -passes=nvptx-increase-local-alignment < %s | FileCheck %s --check-prefixes=COMMON,DEFAULT
; RUN: opt -S -passes=nvptx-increase-local-alignment -nvptx-use-max-local-array-alignment < %s | FileCheck %s --check-prefixes=COMMON,MAX
target triple = "nvptx64-nvidia-cuda"

define void @test1() {
; COMMON-LABEL: define void @test1() {
; COMMON-NEXT: [[A:%.*]] = alloca i8, align 1
; COMMON-NEXT: ret void
;
%a = alloca i8, align 1
ret void
}

define void @test2() {
; DEFAULT-LABEL: define void @test2() {
; DEFAULT-NEXT: [[A:%.*]] = alloca [63 x i8], align 1
; DEFAULT-NEXT: ret void
;
; MAX-LABEL: define void @test2() {
; MAX-NEXT: [[A:%.*]] = alloca [63 x i8], align 16
; MAX-NEXT: ret void
;
%a = alloca [63 x i8], align 1
ret void
}

define void @test3() {
; COMMON-LABEL: define void @test3() {
; COMMON-NEXT: [[A:%.*]] = alloca [64 x i8], align 16
; COMMON-NEXT: ret void
;
%a = alloca [64 x i8], align 1
ret void
}

define void @test4() {
; DEFAULT-LABEL: define void @test4() {
; DEFAULT-NEXT: [[A:%.*]] = alloca i8, i32 63, align 1
; DEFAULT-NEXT: ret void
;
; MAX-LABEL: define void @test4() {
; MAX-NEXT: [[A:%.*]] = alloca i8, i32 63, align 16
; MAX-NEXT: ret void
;
%a = alloca i8, i32 63, align 1
ret void
}

define void @test5() {
; COMMON-LABEL: define void @test5() {
; COMMON-NEXT: [[A:%.*]] = alloca i8, i32 64, align 16
; COMMON-NEXT: ret void
;
%a = alloca i8, i32 64, align 1
ret void
}

define void @test6() {
; COMMON-LABEL: define void @test6() {
; COMMON-NEXT: [[A:%.*]] = alloca i8, align 32
; COMMON-NEXT: ret void
;
%a = alloca i8, align 32
ret void
}

define void @test7() {
; COMMON-LABEL: define void @test7() {
; COMMON-NEXT: [[A:%.*]] = alloca i32, align 2
; COMMON-NEXT: ret void
;
%a = alloca i32, align 2
ret void
}

define void @test8() {
; COMMON-LABEL: define void @test8() {
; COMMON-NEXT: [[A:%.*]] = alloca [2 x i32], align 8
; COMMON-NEXT: ret void
;
%a = alloca [2 x i32], align 2
ret void
}