DeepSeek-FlashMLA


Repository forked manually (with VS Code) from the DeepSeek repository; that is, I copied the repository by hand using VS Code - Insiders.


FlashMLA

FlashMLA is an efficient MLA decoding kernel for Hopper GPUs, optimized for variable-length sequence serving.

Currently released:

  • BF16, FP16
  • Paged kvcache with block size of 64 (illustrated in the sketch below)
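
To make the paged kvcache concrete, here is a minimal sketch of how a caller might lay out the block_table and cache_seqlens tensors it passes to the kernel. The sizes and the identity block layout are assumptions for illustration only; a real serving engine would allocate blocks from a shared pool.

import torch

block_size = 64                 # fixed block size of the paged kvcache
batch_size = 4                  # hypothetical number of concurrent requests
max_blocks = 16                 # hypothetical blocks reserved per request (16 * 64 = 1024 tokens)

# Per-request number of cached tokens (int32, on GPU)
cache_seqlens = torch.tensor([100, 512, 64, 1024], dtype=torch.int32, device="cuda")

# block_table[i, j] = index of the j-th physical block of request i.
# Trivial identity layout, purely for illustration.
block_table = torch.arange(batch_size * max_blocks, dtype=torch.int32,
                           device="cuda").reshape(batch_size, max_blocks)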

Quick start

Install

python setup.py install

Benchmark

python tests/test_flash_mla.py

It achieves up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in compute-bound configurations on an H800 SXM5 with CUDA 12.8.

Usage

from flash_mla import get_mla_metadata, flash_mla_with_kvcache

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
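
For context, the snippet below sketches how the inputs above might be constructed for one decoding step. The shapes (b, s_q, h_q, h_kv, d, dv) and dtypes are assumptions inferred from the call signature, not values prescribed by the project.

import torch
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

b, s_q = 4, 1                   # hypothetical batch size; one query token per step when decoding
h_q, h_kv = 128, 1              # query heads vs. shared latent KV heads (assumed MLA layout)
d, dv = 576, 512                # assumed per-head QK and V dimensions
block_size, max_blocks = 64, 16

cache_seqlens = torch.full((b,), 256, dtype=torch.int32, device="cuda")
block_table = torch.arange(b * max_blocks, dtype=torch.int32,
                           device="cuda").reshape(b, max_blocks)
q = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
kvcache = torch.randn(b * max_blocks, block_size, h_kv, d,
                      dtype=torch.bfloat16, device="cuda")

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)
o, lse = flash_mla_with_kvcache(q, kvcache, block_table, cache_seqlens, dv,
                                tile_scheduler_metadata, num_splits, causal=True)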

Requirements

  • Hopper GPUs
  • CUDA 12.3 and above
    • We highly recommend 12.8 or above for the best performance
  • PyTorch 2.0 and above (a quick runtime check is sketched below)
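
A minimal sketch of verifying these requirements at runtime, assuming only standard PyTorch introspection; Hopper GPUs report CUDA compute capability 9.0.

import torch

assert torch.cuda.is_available(), "a CUDA device is required"
major, minor = torch.cuda.get_device_capability()
assert (major, minor) >= (9, 0), f"Hopper (sm90) GPU required, found sm{major}{minor}"

print("PyTorch:", torch.__version__)   # expect >= 2.0
print("CUDA:", torch.version.cuda)     # expect >= 12.3, ideally >= 12.8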

Acknowledgement

FlashMLA is inspired by the FlashAttention 2&3 and CUTLASS projects.

Community Support

MetaX

For MetaX GPUs, visit the official website: MetaX.

The corresponding FlashMLA version can be found at: MetaX-MACA/FlashMLA

Moore Threads

For the Moore Threads GPU, visit the official website: Moore Threads.

The corresponding FlashMLA version is available on GitHub: MooreThreads/MT-flashMLA.

Hygon DCU

For the Hygon DCU, visit the official website: Hygon Developer.

The corresponding FlashMLA version is available here: OpenDAS/MLAttention.

Intellifusion

For the Intellifusion NNP, visit the official website: Intellifusion.

The corresponding FlashMLA version is available on Gitee: Intellifusion/tyllm.

Iluvatar Corex

For Iluvatar Corex GPUs, visit the official website: Iluvatar Corex.

The corresponding FlashMLA version is available on GitHub: Deep-Spark/FlashMLA

AMD Instinct

For AMD Instinct GPUs, visit the official website: AMD Instinct.

The corresponding FlashMLA version can be found at: AITER/MLA

Citation

@misc{flashmla2025,
      title = {FlashMLA: Efficient MLA decoding kernels},
      author = {Jiashi Li},
      year = {2025},
      publisher = {GitHub},
      howpublished = {\url{https://github.com/deepseek-ai/FlashMLA}},
}

IDEs in which to edit it:

  • Visual Studio Code - Insiders
  • Visual Studio 2022
  • Visual Studio Code
    • the VS Code fork Cursor
    • the VS Code fork VSCodium
  • PyCharm
  • Sublime Text
  • Notepad++
  • Atom
  • Eclipse
  • IntelliJ IDEA
  • NetBeans
  • Android Studio
  • Xcode (Mac only)
  • WebStorm
  • PhpStorm
  • DataGrip
  • RubyMine
  • AppCode
  • CLion
  • Rider
  • GoLand
  • PyCharm Edu (Linux only; nothing changes if it is on another OS)
  • Brackets
  • Vim (for Linux, but also for Apple's macOS)
  • Emacs
  • nano
  • Geany
  • Bluefish
  • Kate
  • Code::Blocks
  • Anjuta
  • KDevelop
  • Lazarus
  • MonoDevelop
  • CodeLite
  • JCreator
  • DrJava
  • Notepadqq
  • Gedit
  • KWrite (exclusively on Linux)
  • SciTE
  • Kile
  • WinEdt
  • LyX
  • TeXShop
  • TeXworks
  • TeXstudio
  • TeXnicCenter
  • TeXmaker
  • TeXpen
  • TeXlipse
  • TeXmacs
  • TeXShade
  • TeX2pag(e)
  • TeX2RTF
  • TeX2HTML (in this case, all TeX will be delayed by 1-8 days)
