Description
This is an uber-issue for making the tuner easier to maintain. The current implementation has a few issues that make the tuner library fragile and prone to getting out of sync with the IREE compiler. Specifically, we identified the following issues:
- There are two ways to (re-)configure executable sources:
  a. By updating the lowering config and translation info in-situ. This is used when producing candidate dispatches using executable benchmarks as the source of truth.
  b. By using the transform dialect library script to match root ops and apply compilation info attributes to them. This is used during the model candidate compilation and benchmarking stage.
  As a result, we have duplicate logic to apply configurations found by the constraint solver. The fix is to write a pass that strips existing configuration from executable sources, and then use the transform dialect to re-configure them. This can be done as a separate invocation of `iree-opt`; a rough sketch of this flow follows this list.
- The MLIR processing is mostly string-based. While this allowed us to quickly prototype, it makes the code prone to getting out of sync with the IREE compiler. The lowering config and translation info attributes are considered compiler internals, and there is no stability guarantee as to their exact structure and format. As a result, every time the format changes, we have to update the parsing and printing logic in the tuner to match the new format in the compiler.
  The proposed solution is to expose these key attributes (translation info, compilation info, and MFMA intrinsic info) through python bindings. We already have bindings for the GPU pipeline options, which can serve as a template for future bindings: Reland #18804 iree-org/iree#18840. A sketch of what the tuner code could look like with such bindings is included below.
- Make it easier to identify 'root ops'. We can make the IREE compiler annotate the root linalg ops with a new attribute that the tuner can use to recognize them, without having to duplicate the compiler logic (see the root-op matching sketch below).
- The `Configuration` representation is modeled after the requirements of the `LLVMGPUVectorDistribute` pipeline, so the surrounding code makes implicit assumptions about the problem representation. Instead, we should define an interface that allows us to support multiple compilation pipelines, such that the generated SMT constraints are specific to both the pipeline and the dispatch kind. Further, the constraint generation code should be decoupled from the parsing/printing code, so that projects like TKW can use just the constraint generation and benchmarking infra. A rough interface sketch is shown below.
- Move from two stages of compile-and-benchmark to just one. This made sense for SDXL, where the best isolated dispatch does not necessarily perform best across the whole model, but it may not be necessary or even sufficiently general for other applications. This is related to the `libtuner.TuningClient` class; clients should be able to define their own tuning stages, with libtuner providing the interface to specify the compilation and benchmarking commands (see the client sketch below).
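The sketches below illustrate the directions above; any pass names, flags, attribute names, or signatures that do not already exist in IREE or the tuner are placeholders. First, a minimal sketch of the single re-configuration path: strip the stale configuration in a standalone `iree-opt` invocation (the strip pass from the task list does not exist yet, so its name here is hypothetical), then let the transform dialect tuning spec re-apply a candidate configuration to the matched root ops.

```python
import subprocess
from pathlib import Path


def reconfigure_source(source_mlir: Path, td_spec: Path, output_mlir: Path) -> None:
    stripped = output_mlir.with_suffix(".stripped.mlir")
    # Step 1: strip lowering_config/translation_info left over from the original
    # compilation. "iree-codegen-strip-configuration" is a placeholder name for
    # the pass proposed in this issue.
    subprocess.run(
        [
            "iree-opt",
            str(source_mlir),
            "--pass-pipeline=builtin.module(iree-codegen-strip-configuration)",
            "-o",
            str(stripped),
        ],
        check=True,
    )
    # Step 2: re-apply configuration by matching root ops with the transform
    # dialect library script (the candidate's tuning spec). The exact flag and
    # pass spellings should be checked against the IREE version in use.
    subprocess.run(
        [
            "iree-opt",
            str(stripped),
            f"--iree-codegen-transform-dialect-library={td_spec}",
            "--pass-pipeline=builtin.module(iree-codegen-materialize-user-configs)",
            "-o",
            str(output_mlir),
        ],
        check=True,
    )
```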
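Next, a sketch of what attribute handling in the tuner could look like once the bindings land. Module paths, class names, enum values, and the `get(...)` argument lists below are assumptions based on the linked PRs, not a confirmed API.

```python
from iree.compiler import ir
from iree.compiler.dialects import iree_codegen, iree_gpu

with ir.Context():
    # Refer to an MFMA intrinsic by enum value instead of matching its
    # textual form.
    mma_attr = iree_gpu.MMAAttr.get(iree_gpu.MMAIntrinsic.MFMA_F32_16x16x16_F16)

    # Build translation info for a candidate programmatically rather than by
    # string formatting; the argument list is a guess at the eventual binding.
    pipeline_attr = iree_codegen.DispatchLoweringPassPipelineAttr.get(
        iree_codegen.DispatchLoweringPassPipeline.LLVMGPUVectorDistribute
    )
    translation_info = iree_codegen.TranslationInfoAttr.get(
        pipeline_attr, workgroup_size=[256, 1, 1], subgroup_size=64
    )
    print(translation_info)
```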
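Once root ops carry a marker attribute, identifying them in the tuner reduces to a simple walk over the module. The attribute name `root_op` below is a placeholder for whatever the compiler change ends up using.

```python
from iree.compiler import ir

ROOT_OP_ATTR = "root_op"  # placeholder attribute name


def find_root_ops(module: ir.Module) -> list[ir.Operation]:
    """Collect all ops annotated by the compiler as tuning root ops."""
    root_ops: list[ir.Operation] = []

    def walk(op: ir.Operation) -> None:
        if ROOT_OP_ATTR in op.attributes:
            root_ops.append(op)
        for region in op.regions:
            for block in region.blocks:
                for nested in block.operations:
                    walk(nested.operation)

    walk(module.operation)
    return root_ops
```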
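For multi-pipeline support, one possible shape of the decoupled interface: constraint generation is keyed on the pipeline and the dispatch kind and knows nothing about attribute parsing or printing, so projects like TKW could reuse it with their own front end. All names here are illustrative.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

import z3  # the tuner already expresses candidate constraints as SMT formulas


@dataclass
class DispatchProblem:
    """Pipeline-agnostic description of a dispatch to tune."""
    dispatch_kind: str    # e.g. "contraction" or "convolution"
    dims: dict[str, int]  # problem sizes, e.g. {"M": 2048, "N": 1280, "K": 1280}


class PipelineConstraintGenerator(ABC):
    """One implementation per compilation pipeline (e.g. LLVMGPUVectorDistribute)."""

    @abstractmethod
    def decision_variables(self, problem: DispatchProblem) -> dict[str, z3.ArithRef]:
        """Solver variables, e.g. tile sizes and workgroup/subgroup sizes."""

    @abstractmethod
    def constraints(
        self, problem: DispatchProblem, variables: dict[str, z3.ArithRef]
    ) -> list[z3.BoolRef]:
        """SMT constraints specific to this pipeline and dispatch kind."""
```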
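Finally, a sketch of client-defined tuning stages for `libtuner.TuningClient`; the `TuningStage` type and the method name are illustrative, not part of the current class, and the commands are minimal templates.

```python
from dataclasses import dataclass


@dataclass
class TuningStage:
    name: str
    compile_command: list[str]    # e.g. an iree-compile invocation template
    benchmark_command: list[str]  # e.g. an iree-benchmark-module invocation


class MyTuningClient:  # would derive from libtuner.TuningClient
    def get_stages(self) -> list[TuningStage]:
        # A single compile-and-benchmark stage over the whole model; a client
        # that still wants the SDXL-style dispatch + model flow would return
        # two stages here instead.
        return [
            TuningStage(
                name="model",
                compile_command=["iree-compile", "model.mlir", "-o", "model.vmfb"],
                benchmark_command=["iree-benchmark-module", "--module=model.vmfb"],
            )
        ]
```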
Tasks
- Add an iree-opt pass to strip configuration from executable sources (incl. executable benchmarks) @bangtianliu
- Expose key attributes via python bindings. @kuhar
  - `iree_gpu` Python bindings (`GPUPipelineOptionsAttr`) iree-org/iree#18804
  - [python][tuner] Add bindings for MMAIntrinsic iree-org/iree#19095
  - [python][tuner] Add bindings for lowering config iree-org/iree#19096
  - [python] Simplify iree_gpu dialect bindings tests. NFC. iree-org/iree#19104
  - [Codegen] Update translation_info attribute assembly format. NFC. iree-org/iree#19107
  - [python][tuner] Set up bindings for iree_codegen iree-org/iree#19108
  - [python][tuner] Add bindings for `iree_codegen.translation_info` iree-org/iree#19128
  - [python][tuner] Add bindings for `iree_codegen.compilation_info` iree-org/iree#19129
- Add a utility function to query supported MMA intrinsics and expose it to C API and python @bangtianliu
- Use MLIR types for types in the tuner @kuhar
- Use IREE attributes for MFMA intrinsics in the tuner @bangtianliu
- Use IREE bindings for compilation info (incl. `lowering_config` and `translation_info`) @bangtianliu
- [tuner]: retire data class GPUPipelineOptions, use iree_gpu.PipelineOptionsAttr. #626
- [tuner]: use lowering config binding #629
- [tuner]: add property functions to lowering config python binding iree-org/iree#19376
- [tuner]: use property function from iree lowering config python binding #662
- [tuner]: use translation_info binding #669
- [tuner]: use compilation_info binding #678
- Update the tuner to generate candidate dispatches using the new iree-opt pass and transform dialect tuning specs. @Max191
- Modify IREE to annotate root ops with a new unit attribute @nithinsubbiah
- Update the tuner to identify root ops using the new unit attribute produced by IREE @Max191
- Move constraint generation logic out of the parsing/printing logic in `candidate_gen.py`. @kuhar
- Use only one compile-benchmark stage in `TuningCandidate`. Update the existing example to adapt to this change. @Max191
- Fix duplicate builtin attribute registration issues in MLIR/IREE python bindings gen @makslevental