[offload][SYCL] Add Module splitting by categories. #131347

Open: wants to merge 14 commits into main
Changes from 6 commits
44 changes: 44 additions & 0 deletions llvm/include/llvm/Transforms/Utils/SplitModuleByCategory.h
@@ -0,0 +1,44 @@
//===-------- SplitModuleByCategory.h - module split ------------*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// Functionality to split a module by categories.
//===----------------------------------------------------------------------===//

#ifndef LLVM_TRANSFORM_UTILS_SPLIT_MODULE_BY_CATEGORY_H
#define LLVM_TRANSFORM_UTILS_SPLIT_MODULE_BY_CATEGORY_H

#include "llvm/ADT/STLFunctionalExtras.h"

#include <memory>
#include <optional>
#include <string>

namespace llvm {

class Module;
class Function;

/// Splits the given module \p M using the given \p FunctionCategorizer.
/// \p FunctionCategorizer returns an integer category for an input Function.
Contributor:

A side note: I think it'd be more helpful (at least putting on my AMD hat) to be able to determine where a global variable goes as well, if we'd like to make this pass generic enough to support all potential targets. The reason is that, for AMDGPU, we probably need to categorize all functions that could potentially reference a global variable in the same module, due to the lowering of LDS (shared) variables.

Member:

Leave this for later.
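A possible shape for such an extension, purely as a sketch of the reviewer's idea (not part of this PR; the GlobalVariableCategorizer parameter is invented for illustration):

// Hypothetical variant that also lets the caller categorize global variables,
// so a target like AMDGPU could keep LDS globals in the same part as every
// function that may reference them.
void splitModuleByCategory(
    std::unique_ptr<Module> M,
    function_ref<std::optional<int>(const Function &F)> FunctionCategorizer,
    function_ref<std::optional<int>(const GlobalVariable &GV)>
        GlobalVariableCategorizer,
    function_ref<void(std::unique_ptr<Module> Part)> Callback);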

/// It may return std::nullopt if a function doesn't have a category.
/// The module's functions are grouped by category. Every such group
/// seeds a call graph containing the group's functions themselves and all
/// reachable functions and globals. A split output is produced from each call
/// graph associated with a category.
Member:

Two comments:

  1. I am unsure why this talks about a call graph and such. The entire interface works with an opaque "oracle" and categories; why that oracle puts functions in the same or different categories might be call-graph related, or it might be because their names have the same prefix. We should not conflate one use with this generic capability. EDIT: Coming back to this after I read the implementation, I believe I understand what is happening. The interface is conflating two things, which is by itself OK. However, given the naming, this is confusing. I believe the easiest fix is to modify the comment and the name a little. As it is now, one could reasonably assume all functions with category X go into the same module, nothing else. That would have been my preferred way of doing this. The difference is that the dependence logic would then be part of the caller, and this would be a stupid splitting interface. I'm OK with keeping it the other way around for now. The interface doesn't splitModuleByCategory, though; it's more like splitModuleTransitiveFromEntryPoints. The comment here needs to define "transitive" and what counts as an entry point.
  2. I am unsure why this uses an optional, and what that means. If std::nullopt means the function is duplicated into every module, or something special like that, I can see how that is useful. However, just going by the comment here, std::nullopt is an option and it is not clear what it means.

Contributor Author:

I am unsure why this uses an optional, and what that means.

In the SYCL case FunctionCategorizer lets us define entry points and group them together by assigning group identifiers (categories). For non-entry functions we need a way to skip them in the selection step of the algorithm. I chose std::nullopt as an indicator that the corresponding function shouldn't be added to any entry group. The function can still be copied if it is transitively used by some entry points.

We could replace std::optional<int> with a plain int and use the value -1 for functions that should not be chosen into any entry group.
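For illustration, a minimal sketch of such a categorizer, assuming entry points are marked with a "sycl-module-id" string attribute (the attribute name and the helper are assumptions, not part of this patch):

#include "llvm/IR/Function.h"
#include <map>
#include <optional>
#include <string>

// Returns a stable integer category per "sycl-module-id" value; non-entry
// functions get std::nullopt and are only copied into a split if an entry
// point transitively uses them.
std::optional<int> categorize(const llvm::Function &F,
                              std::map<std::string, int> &CategoryMap) {
  if (F.isDeclaration() || !F.hasFnAttribute("sycl-module-id"))
    return std::nullopt;
  std::string Id =
      F.getFnAttribute("sycl-module-id").getValueAsString().str();
  auto [It, Inserted] =
      CategoryMap.try_emplace(Id, static_cast<int>(CategoryMap.size()));
  return It->second;
}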

Member (@jdoerfert, Jul 2, 2025):

nullopt is fine, but include the meaning in the comment. EDIT: you did.

///
/// Every split output is passed to \p Callback for further processing.
///
/// Currently, the supported targets are SPIRV, AMDGPU and NVPTX.
Contributor:

Please update this comment. I am not sure the targets themselves are the restriction; I think the restriction is whether the input module has recursive calls or not.

Thanks

Contributor (@shiltian, Jun 9, 2025):

Agreed. Now that we have callbacks it should just work for all targets.

Update:

This is probably because of the isKernel function.

Contributor Author:

This is probably because of the isKernel function.

Yes, and the algorithm was implemented with the assumption that the input is a heterogeneous program, which usually doesn't have recursion.

Member:

which usually doesn't have recursion.

FWIW, "usually" is doing the heavy lifting here. Please do not assume anything about GPU codes that is not required. So, recursion should be assumed to happen.

void splitModuleByCategory(
std::unique_ptr<Module> M,
function_ref<std::optional<int>(const Function &F)> FunctionCategorizer,
function_ref<void(std::unique_ptr<Module> Part)> Callback);
Member:

It might be helpful to pass the category to the callback in addition to the module. But if this is not needed right now, we can do this later.
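A sketch of the suggested variant (not part of this patch, just the shape of the idea):

// Hypothetical overload that also forwards the category of each produced part.
void splitModuleByCategory(
    std::unique_ptr<Module> M,
    function_ref<std::optional<int>(const Function &F)> FunctionCategorizer,
    function_ref<void(int Category, std::unique_ptr<Module> Part)> Callback);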


} // namespace llvm

#endif // LLVM_TRANSFORM_UTILS_SPLIT_MODULE_BY_CATEGORY_H
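For context, a minimal sketch of how a caller might drive this interface; the per-kernel grouping and the helper name splitPerKernel are illustrative assumptions, not something this patch provides:

#include "llvm/Transforms/Utils/SplitModuleByCategory.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include <memory>
#include <optional>
#include <utility>
#include <vector>

using namespace llvm;

// Splits M so that every AMDGPU kernel becomes its own part together with
// everything it transitively uses; all other functions return std::nullopt.
std::vector<std::unique_ptr<Module>> splitPerKernel(std::unique_ptr<Module> M) {
  int NextCategory = 0;
  std::vector<std::unique_ptr<Module>> Parts;
  splitModuleByCategory(
      std::move(M),
      [&](const Function &F) -> std::optional<int> {
        if (F.isDeclaration() ||
            F.getCallingConv() != CallingConv::AMDGPU_KERNEL)
          return std::nullopt; // not an entry point in this example
        return NextCategory++; // each kernel forms its own group
      },
      [&](std::unique_ptr<Module> Part) { Parts.push_back(std::move(Part)); });
  return Parts;
}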
1 change: 1 addition & 0 deletions llvm/lib/Transforms/Utils/CMakeLists.txt
@@ -82,6 +82,7 @@ add_llvm_component_library(LLVMTransformUtils
SimplifyLibCalls.cpp
SizeOpts.cpp
SplitModule.cpp
SplitModuleByCategory.cpp
StripNonLineTableDebugInfo.cpp
SymbolRewriter.cpp
UnifyFunctionExitNodes.cpp
340 changes: 340 additions & 0 deletions llvm/lib/Transforms/Utils/SplitModuleByCategory.cpp
@@ -0,0 +1,340 @@
//===-------- SplitModuleByCategory.cpp - split a module by categories ----===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// See comments in the header.
//===----------------------------------------------------------------------===//

#include "llvm/Transforms/Utils/SplitModuleByCategory.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/Debug.h"
#include "llvm/Transforms/Utils/Cloning.h"

#include <map>
#include <string>
#include <utility>

using namespace llvm;

#define DEBUG_TYPE "split-module-by-category"

namespace {

// A set that contains a group of functions with the same category.
using EntryPointSet = SetVector<const Function *>;

/// Represents a group of functions with one category.
struct EntryPointGroup {
int ID;
EntryPointSet Functions;

EntryPointGroup() = default;
EntryPointGroup(const EntryPointGroup &) = default;
EntryPointGroup &operator=(const EntryPointGroup &) = default;
EntryPointGroup(EntryPointGroup &&) = default;
EntryPointGroup &operator=(EntryPointGroup &&) = default;

EntryPointGroup(int ID, EntryPointSet Functions = EntryPointSet())
: ID(ID), Functions(std::move(Functions)) {}

void clear() { Functions.clear(); }

#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
LLVM_DUMP_METHOD void dump() const {
constexpr size_t INDENT = 4;
dbgs().indent(INDENT) << "ENTRY POINTS"
<< " " << ID << " {\n";
for (const Function *F : Functions)
dbgs().indent(INDENT) << " " << F->getName() << "\n";

dbgs().indent(INDENT) << "}\n";
}
#endif
};

/// Associates an llvm::Module with the information necessary to perform and
/// track the result of code splitting (into new llvm::Module instances):
/// - the group of entry points that belong to the module.
class ModuleDesc {
std::unique_ptr<Module> M;
Contributor:

I assume ModuleDesc "owns" a module after splitting?

Contributor Author:

ModuleDesc owns the initial module as well as the newly created modules.

EntryPointGroup EntryPoints;

public:
ModuleDesc() = delete;
ModuleDesc(const ModuleDesc &) = delete;
ModuleDesc &operator=(const ModuleDesc &) = delete;
ModuleDesc(ModuleDesc &&) = default;
ModuleDesc &operator=(ModuleDesc &&) = default;

ModuleDesc(std::unique_ptr<Module> M,
EntryPointGroup EntryPoints = EntryPointGroup())
: M(std::move(M)), EntryPoints(std::move(EntryPoints)) {
assert(this->M && "Module should be non-null");
}

Module &getModule() { return *M; }
const Module &getModule() const { return *M; }

std::unique_ptr<Module> releaseModule() {
EntryPoints.clear();
return std::move(M);
}

#if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
LLVM_DUMP_METHOD void dump() const {
dbgs() << "ModuleDesc[" << M->getName() << "] {\n";
EntryPoints.dump();
dbgs() << "}\n";
}
#endif
};

bool isKernel(const Function &F) {
return F.getCallingConv() == CallingConv::SPIR_KERNEL ||
F.getCallingConv() == CallingConv::AMDGPU_KERNEL ||
F.getCallingConv() == CallingConv::PTX_Kernel;
}

// Represents "dependency" or "use" graph of global objects (functions and
// global variables) in a module. It is used during device code split to
// understand which global variables and functions (other than entry points)
// should be included into a split module.
//
// Nodes of the graph represent LLVM's GlobalObjects, edges "A" -> "B" represent
// the fact that if "A" is included into a module, then "B" should be included
// as well.
//
// Examples of dependencies which are represented in this graph:
// - Function FA calls function FB
// - Function FA uses global variable GA
// - Global variable GA references (initialized with) function FB
// - Function FA stores address of a function FB somewhere
//
// The following cases are treated as dependencies between global objects:
// 1. Global object A is used by a global object B in any way (store,
Member:

"within by"

Contributor Author:

Could you please explain your point here?

Member:

I find the language weird, or the grammar broken. "used within by a global object" is not something I can parse. "used within a global object" works for me, "used by a global object" works as well.

Contributor Author:

changed that.

// bitcast, phi node, call, etc.): "A" -> "B" edge will be added to the
// graph;
// 2. function A performs an indirect call of a function with signature S and
// there is a function B with signature S. "A" -> "B" edge will be added to
// the graph;
Member:

This is a slippery slope. I'm fine with it for now but in reality this doesn't work. We do allow, and execute, way more than perfect signature matches.

Contributor Author:

I'm curious which cases you mean that are allowed to be executed; it would help me evaluate how to support them.

I recently added a test called complex-indirect-call-chain2.ll which checks this, and it is not clear how I would support that case without signature matching.
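One concrete pattern that executes in practice but is not a perfect signature match (my own example of what such cases can look like, and formally undefined behavior in C++): a call through a function pointer that was cast to a different prototype, which lowers to an indirect call whose call-site FunctionType differs from the callee's.

// The callee's prototype differs from the type used at the call site.
extern "C" int callee(int X) { return X + 1; }

int callThroughCast() {
  using MismatchedFn = int (*)(long);
  // Cast to a mismatching signature; the indirect call still runs in practice,
  // but a strict FunctionType match between call site and callee misses it.
  MismatchedFn FP = reinterpret_cast<MismatchedFn>(&callee);
  return FP(41);
}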

class DependencyGraph {
public:
using GlobalSet = SmallPtrSet<const GlobalValue *, 16>;

DependencyGraph(const Module &M) {
// Group functions by their signature to handle case (2) described above
DenseMap<const FunctionType *, DependencyGraph::GlobalSet>
FuncTypeToFuncsMap;
for (const auto &F : M.functions()) {
// Kernels can't be called (either directly or indirectly).
if (isKernel(F))
continue;

FuncTypeToFuncsMap[F.getFunctionType()].insert(&F);
}

for (const auto &F : M.functions()) {
// case (1), see comment above the class definition
for (const Value *U : F.users())
addUserToGraphRecursively(cast<const User>(U), &F);

// case (2), see comment above the class definition
for (const auto &I : instructions(F)) {
const auto *CI = dyn_cast<CallInst>(&I);
if (!CI || !CI->isIndirectCall()) // Direct calls were handled above
continue;

const FunctionType *Signature = CI->getFunctionType();
const auto &PotentialCallees = FuncTypeToFuncsMap[Signature];
Graph[&F].insert(PotentialCallees.begin(), PotentialCallees.end());
}
}

// Also process every global variable (their handling is a bit simpler).
for (const auto &GV : M.globals())
for (const Value *U : GV.users())
addUserToGraphRecursively(cast<const User>(U), &GV);
}

iterator_range<GlobalSet::const_iterator>
dependencies(const GlobalValue *Val) const {
auto It = Graph.find(Val);
return (It == Graph.end())
? make_range(EmptySet.begin(), EmptySet.end())
: make_range(It->second.begin(), It->second.end());
}

private:
void addUserToGraphRecursively(const User *Root, const GlobalValue *V) {
SmallVector<const User *, 8> WorkList;
WorkList.push_back(Root);

while (!WorkList.empty()) {
const User *U = WorkList.pop_back_val();
if (const auto *I = dyn_cast<const Instruction>(U)) {
const auto *UFunc = I->getFunction();
Graph[UFunc].insert(V);
} else if (isa<const Constant>(U)) {
if (const auto *GV = dyn_cast<const GlobalVariable>(U))
Graph[GV].insert(V);
// U is either a global variable or a constant expression (such as a
// bitcast or GEP). Trace its users further to reach the global objects
// that use it and add them to the graph.
for (const auto *UU : U->users())
WorkList.push_back(UU);
} else
llvm_unreachable("Unhandled type of function user");
Member:

This should trigger on metadata, shouldn't it?

Contributor Author:

From the inheritance hierarchy [1] it doesn't look like we can get metadata from users(). If you want me to check any specific cases, please tell me.

[1] - https://llvm.org/doxygen/classllvm_1_1User.html

}
}

DenseMap<const GlobalValue *, GlobalSet> Graph;
Contributor:

Nit: this const is not necessary. Dropping it would allow using dyn_cast<xxx>, which looks more LLVM-like than dyn_cast<const xxx>.

Contributor Author:

That would require me to use const_cast, like the following:

void addUserToGraphRecursively(const User *Root, const GlobalValue *V) {
  SmallVector<User *, 8> WorkList;
  WorkList.push_back(const_cast<User *>(Root));
  /// ...
}

Would that be ok in LLVM style?

SmallPtrSet<const GlobalValue *, 1> EmptySet;
};

void collectFunctionsAndGlobalVariablesToExtract(
SetVector<const GlobalValue *> &GVs, const Module &M,
const EntryPointGroup &ModuleEntryPoints, const DependencyGraph &DG) {
// We start with module entry points
for (const auto *F : ModuleEntryPoints.Functions)
GVs.insert(F);

// Non-discardable global variables are also included in the initial set
for (const auto &GV : M.globals())
if (!GV.isDiscardableIfUnused())
GVs.insert(&GV);

// GVs is a SetVector; it inserts a value only if it is not already present,
// so this worklist loop visits each global object at most once and terminates
// even in the presence of cyclic dependencies.
size_t Idx = 0;
while (Idx < GVs.size()) {
const GlobalValue *Obj = GVs[Idx++];

for (const GlobalValue *Dep : DG.dependencies(Obj)) {
if (const auto *Func = dyn_cast<const Function>(Dep)) {
if (!Func->isDeclaration())
GVs.insert(Func);
} else
GVs.insert(Dep); // Global variables are added unconditionally
}
}
}

ModuleDesc extractSubModule(const Module &M,
const SetVector<const GlobalValue *> &GVs,
EntryPointGroup ModuleEntryPoints) {
// For each group of entry points collect all dependencies.
ValueToValueMapTy VMap;
// Clone definitions only for needed globals. Others will be added as
// declarations and removed later.
std::unique_ptr<Module> SubM = CloneModule(
M, VMap, [&](const GlobalValue *GV) { return GVs.count(GV); });
// Replace entry points with cloned ones.
EntryPointSet NewEPs;
const EntryPointSet &EPs = ModuleEntryPoints.Functions;
std::for_each(EPs.begin(), EPs.end(), [&](const Function *F) {
NewEPs.insert(cast<Function>(VMap[F]));
});
ModuleEntryPoints.Functions = std::move(NewEPs);
return ModuleDesc{std::move(SubM), std::move(ModuleEntryPoints)};
}

// Produces a copy of the input LLVM IR module M containing only those
// functions and globals that can be reached from the entry points specified
// in the ModuleEntryPoints group, in addition to the entry point functions
// themselves.
ModuleDesc extractCallGraph(const Module &M, EntryPointGroup ModuleEntryPoints,
const DependencyGraph &DG) {
SetVector<const GlobalValue *> GVs;
collectFunctionsAndGlobalVariablesToExtract(GVs, M, ModuleEntryPoints, DG);

ModuleDesc SplitM = extractSubModule(M, GVs, std::move(ModuleEntryPoints));
LLVM_DEBUG(SplitM.dump());
return SplitM;
}

using EntryPointGroupVec = SmallVector<EntryPointGroup>;

/// Module splitter.
/// It takes a module (in the form of a module descriptor, to carry additional
/// info) and a collection of entry point groups. Each group specifies a subset
/// of entry points from the input module that should be included in one split
/// module.
class ModuleSplitter {
private:
ModuleDesc Input;
EntryPointGroupVec Groups;
DependencyGraph DG;

private:
EntryPointGroup drawEntryPointGroup() {
assert(Groups.size() > 0 && "Reached end of entry point groups list.");
EntryPointGroup Group = std::move(Groups.back());
Groups.pop_back();
return Group;
}

public:
ModuleSplitter(ModuleDesc MD, EntryPointGroupVec GroupVec)
: Input(std::move(MD)), Groups(std::move(GroupVec)),
DG(Input.getModule()) {
assert(!Groups.empty() && "Entry points groups collection is empty!");
}

/// Takes the next group of entry points from the input module and produces a
/// split submodule containing these entry points and their dependencies.
ModuleDesc getNextSplit() {
return extractCallGraph(Input.getModule(), drawEntryPointGroup(), DG);
}

/// Checks whether there are still submodules to split.
bool hasMoreSplits() const { return Groups.size() > 0; }
};

EntryPointGroupVec
selectEntryPointGroups(const Module &M,
function_ref<std::optional<int>(const Function &F)> FC) {
// std::map is used here to ensure a stable ordering of entry point groups,
// which is based on their category IDs; this greatly helps LIT tests.
std::map<int, EntryPointSet> EntryPointsMap;

for (const auto &F : M.functions()) {
if (auto Category = FC(F); Category) {
auto It = EntryPointsMap.find(*Category);
if (It == EntryPointsMap.end())
It = EntryPointsMap.emplace(*Category, EntryPointSet()).first;

It->second.insert(&F);
}
}

EntryPointGroupVec Groups;
Groups.reserve(EntryPointsMap.size());
// Convert the map into a vector of entry point groups.
for (auto &[Key, EntryPoints] : EntryPointsMap)
Groups.emplace_back(Key, std::move(EntryPoints));

return Groups;
}

} // namespace

void llvm::splitModuleByCategory(
std::unique_ptr<Module> M,
function_ref<std::optional<int>(const Function &F)> FunctionCategorizer,
function_ref<void(std::unique_ptr<Module> Part)> Callback) {
EntryPointGroupVec Groups = selectEntryPointGroups(*M, FunctionCategorizer);
ModuleDesc MD = std::move(M);
ModuleSplitter Splitter(std::move(MD), std::move(Groups));
while (Splitter.hasMoreSplits()) {
ModuleDesc MD = Splitter.getNextSplit();
Callback(std::move(MD.releaseModule()));
}
}