@alien-0119 alien-0119 commented Oct 28, 2025

What does this PR do?

Adds # (feature)
Adds the ModernBERT Decoder model and fast unit tests (UT).

Usage Examples:

  • ModernBertDecoderForCausalLM
import mindspore as ms
from mindone.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "jhu-clsp/ettin-decoder-17m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="np")
inputs = {k: ms.tensor(v) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_length=50,
    num_return_sequences=1,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Generated text: {generated_text}")
# The future of artificial intelligence is a major challenge for the world. In this article we will discuss “AI” in the future.
# AI is a great way to understand the future. It includes the ability to recognize and recognize the world.
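Since the example above passes `do_sample=True` with `temperature=0.7`, each next token is drawn from a temperature-scaled softmax rather than taken greedily. A minimal numpy sketch of that sampling step (the logits vector and function name here are illustrative, not part of the model's API):

```python
import numpy as np

def sample_next_token(logits, temperature=0.7, rng=None):
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    rng = rng or np.random.default_rng(0)
    scaled = logits / temperature
    scaled = scaled - scaled.max()            # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical logits over a 4-token vocabulary.
logits = np.array([2.0, 1.0, 0.1, -1.0])
token_id = sample_next_token(logits)
print(token_id)  # an index drawn from the softmax distribution
```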
  • ModernBertDecoderForSequenceClassification
import mindspore as ms
from mindone.transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer


model_id = "jhu-clsp/ettin-decoder-17m"

tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier_model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=2
)

text = "This movie is really great!"
inputs = tokenizer(text, return_tensors="np")
inputs = {k: ms.tensor(v) for k, v in inputs.items()}

outputs = classifier_model(**inputs)
predictions = ms.mint.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = ms.mint.argmax(predictions, dim=-1)

print(f"Predicted class: {predicted_class.item()}")
# Predicted class: 0
print(f"Prediction probabilities: {predictions}")
# [[0.55634815 0.44365188]]
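The probabilities printed above come from a plain softmax over the two-class logits followed by an argmax. A numpy sketch of the same computation (the logits value is hypothetical, chosen to reproduce roughly the probabilities shown):

```python
import numpy as np

# Hypothetical 2-class logits standing in for outputs.logits above.
logits = np.array([[0.226, 0.0]])

# softmax along the class axis: exponentiate and normalize
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)
predicted = probs.argmax(axis=-1)

print(probs)      # approximately [[0.556 0.444]]
print(predicted)  # [0]
```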

Performance:
Experiments were run on Ascend Atlas 800T A2 machines with MindSpore 2.7.0 in PyNative mode.

model                       precision  weight load (s)  s/step
jhu-clsp/ettin-decoder-17m  fp32       4.582            0.999
jhu-clsp/ettin-decoder-17m  fp16       4.705            0.973
jhu-clsp/ettin-decoder-17m  bf16       4.836            0.979

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline?
  • Did you make sure to update the documentation with your changes? E.g. record bug fixes or new features in What's New. Here are the
    documentation guidelines
  • Did you build and run the code without any errors?
  • Did you report the running environment (NPU type/MS version) and performance in the doc? (better record it for data loading, model inference, or training tasks)
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@xxx

@alien-0119 alien-0119 requested a review from vigo999 as a code owner October 28, 2025 07:58
@gemini-code-assist

Summary of Changes

Hello @alien-0119, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the mindone.transformers library by introducing the ModernBERT Decoder model. This new decoder-only transformer architecture is designed to support tasks such as causal language modeling and sequence classification. The changes encompass the full model implementation, integration into the auto-loading mechanisms, and thorough testing to ensure its robustness and compatibility within the MindSpore framework.

Highlights

  • New Model Introduction: The ModernBERT Decoder model has been added to the mindone.transformers library, expanding the available transformer architectures.
  • Core Architecture Components: This PR includes the full implementation of the ModernBERT Decoder, featuring ModernBertDecoderAttention (supporting both local and global attention with sliding windows), ModernBertDecoderLayer, and the base ModernBertDecoderModel.
  • Task-Specific Models: Specific model heads for causal language modeling (ModernBertDecoderForCausalLM) and sequence classification (ModernBertDecoderForSequenceClassification) are introduced, enabling the decoder for various NLP tasks.
  • Auto-Configuration and Auto-Modeling Integration: The new ModernBERT Decoder models and their configurations are integrated into the mindone.transformers.models.auto system, allowing for seamless loading and usage via auto classes.
  • Comprehensive Testing: Dedicated unit tests have been added for the ModernBERT Decoder models, ensuring functional correctness and numerical precision by comparing MindSpore implementations against PyTorch.
  • Rotary Embedding Refactor: A minor refactoring was applied to the ModernBertRotaryEmbedding class in the existing modernbert model, simplifying its initialization signature.
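The local vs. global attention mentioned in the highlights differs only in the mask each layer applies: global layers use a full causal mask, while local layers additionally restrict each query to a trailing window of keys. A minimal numpy sketch (the window size and sequence length are illustrative, not the model's actual config values):

```python
import numpy as np

def causal_mask(seq_len, window=None):
    """True = may attend. Full causal if window is None, else sliding-window causal."""
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    mask = j <= i                     # causal: never attend to future positions
    if window is not None:
        mask &= (i - j) < window      # local: only the last `window` positions
    return mask

print(causal_mask(5).astype(int))            # lower-triangular ones
print(causal_mask(5, window=2).astype(int))  # band of width 2 along the diagonal
```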

@gemini-code-assist gemini-code-assist bot left a comment
Code Review

This pull request introduces the ModernBERT Decoder, which is a valuable addition. However, there are a few critical issues that need to be addressed before merging. Firstly, there are some copy-paste errors in the auto-model configuration files that incorrectly map modernbert-decoder to MobileBert components. Secondly, a change to the ModernBertRotaryEmbedding constructor signature breaks the existing ModernBertModel as the call site was not updated. Please address these issues to ensure the new model integrates correctly and doesn't introduce regressions.


 class ModernBertRotaryEmbedding(nn.Cell):
-    def __init__(self, config: ModernBertConfig, dim: int, base: float):
+    def __init__(self, config: ModernBertConfig):
critical

While removing the unused dim and base parameters from this __init__ method is a good cleanup, the call to this constructor within ModernBertAttention in this same file has not been updated. This will cause a TypeError when instantiating ModernBertModel.

Please update the call site at mindone/transformers/models/modernbert/modeling_modernbert.py:317 to match the new signature:

self.rotary_emb = ModernBertRotaryEmbedding(config=config)

("mobilevitv2", "MobileViTV2Config"),
("mobilenet_v1", "MobileNetV1Config"),
("mobilenet_v2", "MobileNetV2Config"),
("modernbert-decoder", "MobileBertDecoderConfig"),
high

There seems to be a typo here. The modernbert-decoder model type should be mapped to ModernBertDecoderConfig, not MobileBertDecoderConfig.

Suggested change
-    ("modernbert-decoder", "MobileBertDecoderConfig"),
+    ("modernbert-decoder", "ModernBertDecoderConfig"),

("mobilevitv2", "MobileViTV2"),
("mobilenet_v1", "MobileNetV1"),
("mobilenet_v2", "MobileNetV2"),
("modernbert-decoder", "MobileBERTDecoder"),
high

There seems to be a typo here. The modernbert-decoder model type should be mapped to ModernBertDecoder, not MobileBERTDecoder.

Suggested change
-    ("modernbert-decoder", "MobileBERTDecoder"),
+    ("modernbert-decoder", "ModernBertDecoder"),

@alien-0119 alien-0119 force-pushed the modernbert_decoder_master branch from 895e1cc to fd40c29 Compare October 28, 2025 08:08
@alien-0119 alien-0119 force-pushed the modernbert_decoder_master branch from fd40c29 to f8cff8e Compare October 28, 2025 08:18