The transformer architecture (https://arxiv.org/pdf/1706.03762) has been instrumental in scaling sequence-based neural networks.
The transformer architecture is the fundamental building block of all LLMs, and the trend toward open-sourcing LLMs and reducing their parameter counts is strong. Supporting the transformer architecture and the attention head would therefore be a great addition to hls4ml, with power and latency gains expected (see the sketch of the attention computation below).
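For reference, here is a minimal NumPy sketch of the scaled dot-product attention from the paper cited above, i.e. the core computation of a single attention head that an hls4ml implementation would ultimately need to map to HLS. The function and variable names are illustrative only and are not part of the hls4ml API.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention (softmax(Q K^T / sqrt(d_k)) V).

    q, k: arrays of shape (seq_len, d_k); v: array of shape (seq_len, d_v).
    Returns the attended values with shape (seq_len, d_v).
    """
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                       # (seq_len, seq_len) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ v                                    # weighted sum of value vectors

# Toy example: 4 tokens, 8-dimensional queries/keys/values
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(q, k, v).shape)        # (4, 8)
```

A multi-head attention layer repeats this computation per head on projected inputs and concatenates the results, so the matrix multiplications and the softmax above are the main kernels such hls4ml support would target.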