FlashMLA is an efficient MLA decoding kernel for Hopper GPUs, optimized for variable-length sequence serving.
Currently released:
- BF16, FP16
- Paged kvcache with block size of 64
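Because the kvcache is paged, each sequence's cache lives in fixed 64-token blocks addressed through a block table rather than one contiguous buffer. A minimal sketch of the indexing arithmetic (the function and variable names here are illustrative, not part of the FlashMLA API):

```python
BLOCK_SIZE = 64  # FlashMLA's paged kvcache block size

def kv_location(block_table: list[list[int]], seq_idx: int, token_idx: int) -> tuple[int, int]:
    """Map (sequence, token) to (global block id, offset within that block).

    block_table[seq_idx][i] holds the global id of the i-th 64-token
    block backing that sequence's KV cache.
    """
    block_id = block_table[seq_idx][token_idx // BLOCK_SIZE]
    offset = token_idx % BLOCK_SIZE
    return block_id, offset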
Install:

```bash
python setup.py install
```

Benchmark:

```bash
python tests/test_flash_mla.py
```

FlashMLA achieves up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in computation-bound configurations on H800 SXM5, using CUDA 12.8.
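To get a feel for what the memory-bound figure implies for decoding, note that each decode step has to stream the live KV cache once, so step time is roughly cache size divided by achieved bandwidth. A back-of-envelope sketch (all sizes below are assumptions for illustration, not measured configurations):

```python
# Rough decode-step latency implied by the 3000 GB/s figure.
b, s, h_kv, d = 64, 4096, 1, 576   # assumed batch, cached tokens, KV heads, head dim
bytes_per_elem = 2                 # BF16
cache_bytes = b * s * h_kv * d * bytes_per_elem   # ~0.3 GB
bandwidth = 3000e9                 # peak reported on H800 SXM5

print(f"~{cache_bytes / bandwidth * 1e6:.0f} us per decode step")  # ~101 us
```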
Usage:

```python
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
```
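The snippet above assumes the paged cache, block table, and per-sequence lengths already exist. As a rough sketch of how those inputs might be laid out (the shapes and sizes here are assumptions for illustration, not taken from the repository):

```python
import torch

# Illustrative sizes only.
b, s_q, h_q, h_kv = 4, 1, 128, 1        # batch, query tokens per step, query/KV heads
d, dv = 576, 512                        # assumed QK head dim and value head dim
block_size, max_blocks = 64, 16         # paged kvcache: 64-token blocks per sequence

cache_seqlens = torch.full((b,), 1000, dtype=torch.int32, device="cuda")
block_table = torch.arange(b * max_blocks, dtype=torch.int32, device="cuda").view(b, max_blocks)
q_i = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
kvcache_i = torch.randn(b * max_blocks, block_size, h_kv, d, dtype=torch.bfloat16, device="cuda")
```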
Requirements:

- Hopper GPUs
- CUDA 12.3 and above
    - But we highly recommend 12.8 or above for the best performance.
- PyTorch 2.0 and above
FlashMLA is inspired by the FlashAttention 2&3 and cutlass projects.
Community support:

For MetaX GPUs, visit the official website: MetaX.
The corresponding FlashMLA version can be found at: MetaX-MACA/FlashMLA.
For the Moore Threads GPU, visit the official website: Moore Threads.
The corresponding FlashMLA version is available on GitHub: MooreThreads/MT-flashMLA.
For the Hygon DCU, visit the official website: Hygon Developer.
The corresponding FlashMLA version is available here: OpenDAS/MLAttention.
For the Intellifusion NNP, visit the official website: Intellifusion.
The corresponding FlashMLA version is available on Gitee: Intellifusion/tyllm.
For Iluvatar Corex GPUs, visit the official website: Iluvatar Corex.
The corresponding FlashMLA version is available on GitHub: Deep-Spark/FlashMLA.
For AMD Instinct GPUs, visit the official website: AMD Instinct.
The corresponding FlashMLA version can be found at: AITER/MLA.
Citation:

```bibtex
@misc{flashmla2025,
      title={FlashMLA: Efficient MLA decoding kernels},
      author={Jiashi Li},
      year={2025},
      publisher = {GitHub},
      howpublished = {\url{https://github.com/deepseek-ai/FlashMLA}},
}
```