
Adapters v1.2.0

Released by @calpt on 20 May 2025, 19:43

Blog post: https://adapterhub.ml/blog/2025/05/adapters-for-any-transformer

This version is built for Hugging Face Transformers v4.51.x.

New

Adapter Model Plugin Interface (@calpt via #738; @lenglaender via #797)

The new adapter model interface makes it easy to plug most adapter features into any new or custom Transformer model. Check out our release blog post for details. Also see https://docs.adapterhub.ml/plugin_interface.html.
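For illustration, here is a minimal sketch of plugging a custom decoder-style model into the interface. The `AdapterModelInterface` fields follow the documentation page linked above; the attribute paths (`embed_tokens`, `self_attn`, `mlp.up_proj`, etc.) are assumptions that must match the module names of your actual model:

```python
import adapters
from adapters import AdapterModelInterface
from transformers import AutoModelForCausalLM

# Map adapter hooks onto the module names of the custom model.
# These attribute paths are examples; adjust them to your architecture.
plugin_interface = AdapterModelInterface(
    adapter_methods=["lora", "reft"],
    model_embeddings="embed_tokens",
    model_layers="layers",
    layer_self_attn="self_attn",
    layer_cross_attn=None,
    attn_k_proj="k_proj",
    attn_q_proj="q_proj",
    attn_v_proj="v_proj",
    attn_o_proj="o_proj",
    layer_intermediate_proj="mlp.up_proj",
    layer_output_proj="mlp.down_proj",
)

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
adapters.init(model, interface=plugin_interface)

model.add_adapter("my_adapter", config="lora")
model.set_active_adapters("my_adapter")
```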

Multi-Task composition with MTL-LoRA (@FrLdy via #792)

MTL-LoRA (Yang et al., 2024) is a new adapter composition method leveraging LoRA for multi-task learning. See https://docs.adapterhub.ml/multi_task_methods.html#mtl-lora.
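As a rough usage sketch (the MTLLoRAConfig class, share_parameters, and the MultiTask composition block follow the linked docs page; the model name and task names are made up):

```python
import adapters.composition as ac
from adapters import AutoAdapterModel, MTLLoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

config = MTLLoRAConfig(n_up_projection=3)

# One adapter per task; the task-agnostic parts of the LoRA weights
# are shared across all three adapters.
for task in ["task_a", "task_b", "task_c"]:
    model.add_adapter(task, config)
model.share_parameters(adapter_names=["task_a", "task_b", "task_c"])

# Activate joint multi-task composition; at forward time, per-sample
# task ids select the task-specific up-projections.
model.active_adapters = ac.MultiTask("task_a", "task_b", "task_c")
```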

VeRA - parameter-efficient LoRA variant (@julian-fong via #763)

VeRA (Kopiczko et al., 2024) is a LoRA adapter variant that requires even fewer trainable parameters. See https://docs.adapterhub.ml/methods.html#vera.
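A minimal sketch, assuming the VeraConfig class from the linked docs page (model and adapter names are illustrative):

```python
from adapters import AutoAdapterModel, VeraConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# VeRA keeps a pair of frozen, randomly initialized shared matrices and
# trains only small per-layer scaling vectors, further shrinking the
# trainable parameter count compared to plain LoRA.
config = VeraConfig()
model.add_adapter("vera_adapter", config=config)
model.train_adapter("vera_adapter")
```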

New Models (via new interface)

Several new models are supported out of the box via the new adapter model plugin interface:

  • Gemma 2, Gemma 3
  • ModernBERT
  • Phi 1, Phi 2
  • Qwen 2, Qwen 2.5, Qwen 3

More

  • New init_weights_seed adapter config attribute to initialize adapters with identical weights (@TimoImhof via #786); a usage sketch follows this list
  • Support defining custom forward method args via ForwardContext (@calpt via #789)
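A short sketch of the new seeding attribute, assuming it is accepted by existing config classes such as LoRAConfig (model and adapter names are illustrative):

```python
from adapters import AutoAdapterModel, LoRAConfig

# With a fixed init_weights_seed, separately created adapters start
# from identical weights, e.g. for reproducible ablations.
config = LoRAConfig(init_weights_seed=42)

model_a = AutoAdapterModel.from_pretrained("roberta-base")
model_b = AutoAdapterModel.from_pretrained("roberta-base")
model_a.add_adapter("seeded", config=config)
model_b.add_adapter("seeded", config=config)  # same initial weights as in model_a
```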

Changed