
memory-efficient-attention

Here is 1 public repository matching this topic...

Adaptive Contextual Attention Gating (ACAG) — context‑aware, efficient attention mechanism for Transformers. Optimized for long‑context LLMs, few‑shot reasoning, and scalable NLP with PyTorch.

  • Updated Sep 11, 2025
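The ACAG repository's internals are not documented on this page. As general background on what the memory-efficient-attention topic covers: a common technique is to process queries in chunks so the full n×n attention score matrix is never materialized at once, reducing peak memory from O(n²) to O(chunk·n). A minimal NumPy sketch of that idea (illustrative only; the function names are my own and this is not the ACAG implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    # Baseline: materializes the full (n, n) score matrix -> O(n^2) memory.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def chunked_attention(q, k, v, chunk=128):
    # Memory-efficient variant: attends one block of queries at a time,
    # so at most (chunk, n) scores are live -> O(chunk * n) memory.
    n, d = q.shape
    out = np.empty((n, v.shape[-1]))
    for i in range(0, n, chunk):
        scores = q[i:i + chunk] @ k.T / np.sqrt(d)
        out[i:i + chunk] = softmax(scores) @ v
    return out
```

Both functions return identical results; only the peak memory differs. Production implementations (e.g. FlashAttention-style kernels) additionally tile the key/value dimension with an online softmax, which this sketch omits for clarity.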
