
feats(transformers): add longcat_flash model #1443

Open
iugoood wants to merge 1 commit into mindspore-lab:master from iugoood:longcatflash

Conversation

@iugoood
Contributor

@iugoood iugoood commented Nov 26, 2025

Add

1. add longcat_flash model
2. add UT

Notes

MoE model with 560B parameters, not yet validated with real weights.

@iugoood iugoood requested a review from vigo999 as a code owner November 26, 2025 06:22
@gemini-code-assist
Contributor

Summary of Changes

Hello @iugoood, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the mindone.transformers library by introducing the longcat_flash model. This new model is designed for causal language modeling and incorporates advanced features such as Rotary Position Embeddings (RoPE), Mixture-of-Experts (MoE) for efficient scaling, and a dual-sublayer decoder architecture. The changes include the full implementation of the model's components and a robust testing framework to validate its behavior against a reference PyTorch implementation.

Highlights

  • New Model Integration: The longcat_flash model, including LongcatFlashForCausalLM, LongcatFlashModel, and LongcatFlashPreTrainedModel, has been added to the mindone.transformers library.
  • Modular Model Implementation: The core logic for the longcat_flash model, featuring components like LongcatFlashRMSNorm, LongcatFlashRotaryEmbedding, LongcatFlashMoE, and LongcatFlashMLA, is introduced in a new, auto-generated file.
  • Comprehensive Testing: A dedicated test suite for the longcat_flash model has been added, ensuring its functionality and compatibility within the MindSpore framework by comparing outputs with a PyTorch implementation.
  • CausalLM Tester Update: The CausalLMModelTester has been refactored to remove the parent argument from its constructor, simplifying its initialization.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the longcat_flash model. The implementation is comprehensive, but there are a couple of significant performance concerns in the model's implementation that should be addressed, particularly an inefficient loop in the Mixture of Experts (MoE) layer and a suboptimal implementation of rotary position embeddings. Additionally, a bug in the new test file will prevent the tests from running successfully. My review provides specific feedback on these points.


input_mask = None
if self.use_input_mask:
    input_mask = np.tril(np.ones_like(self.batch_size, self.seq_length))
Contributor


critical

The use of np.ones_like here is incorrect. np.ones_like expects an array-like object as its first argument to determine the shape and dtype, but self.batch_size is an integer. This will raise a TypeError. You should use np.ones((self.batch_size, self.seq_length)) instead to create an array of the desired shape.

Suggested change
input_mask = np.tril(np.ones_like(self.batch_size, self.seq_length))
input_mask = np.tril(np.ones((self.batch_size, self.seq_length)))
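To illustrate why the suggested fix works, here is a minimal standalone sketch (the batch_size=2 and seq_length=4 values are hypothetical, chosen only for demonstration):

```python
import numpy as np

batch_size, seq_length = 2, 4  # hypothetical sizes for illustration

# np.ones((batch_size, seq_length)) builds an array of the desired shape;
# np.tril then zeroes out everything above the main diagonal, producing
# the lower-triangular attention mask the tester expects.
input_mask = np.tril(np.ones((batch_size, seq_length)))
print(input_mask)
```

By contrast, np.ones_like(self.batch_size, self.seq_length) passes the integer seq_length where np.ones_like expects a dtype, which raises a TypeError.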

Comment on lines +171 to +181
for expert_idx in range(len(self.experts)):
    expert = self.experts[expert_idx]
    mask = expert_mask[expert_idx]
    token_indices, weight_indices = mindspore.mint.where(mask)

    if token_indices.numel() > 0:
        expert_weights = topk_weights[token_indices, weight_indices]
        expert_input = hidden_states[token_indices]
        expert_output = expert(expert_input)
        weighted_output = expert_output * expert_weights.unsqueeze(-1)
        final_hidden_states.index_add_(0, token_indices, weighted_output)
Contributor


high

The for loop over experts in the moe method is inefficient and will be a significant performance bottleneck, especially for models with a large number of experts. This should be vectorized to process experts in a batch. A common approach is to use batched matrix multiplication or similar techniques. The docstring for this method already calls this out as needing optimization.
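One possible shape for such a vectorization, sketched in plain numpy with single linear layers standing in for the real MLP experts (all names here — expert_w, topk_idx — are illustrative, not from the PR):

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, hidden, num_experts, top_k = 8, 4, 3, 2

hidden_states = rng.standard_normal((num_tokens, hidden))
# Stand-in for the per-expert MLPs: one weight matrix per expert,
# stacked so every expert can be applied in a single contraction.
expert_w = rng.standard_normal((num_experts, hidden, hidden))

topk_idx = rng.integers(0, num_experts, size=(num_tokens, top_k))
topk_weights = rng.random((num_tokens, top_k))

# Loop version (the pattern the review flags as a bottleneck).
out_loop = np.zeros_like(hidden_states)
for e in range(num_experts):
    tok, slot = np.nonzero(topk_idx == e)
    contrib = (hidden_states[tok] @ expert_w[e]) * topk_weights[tok, slot][:, None]
    np.add.at(out_loop, tok, contrib)  # accumulate, handling repeated tokens

# Vectorized version: gather each (token, slot) pair's expert weight,
# contract once with einsum, then sum over the top-k slots.
w = expert_w[topk_idx]                        # (tokens, top_k, hidden, hidden)
y = np.einsum('th,tkhd->tkd', hidden_states, w)
out_vec = (y * topk_weights[..., None]).sum(axis=1)

assert np.allclose(out_loop, out_vec)
```

Note the trade-off: the gathered weight tensor costs O(tokens × top_k × hidden²) memory, so production MoE kernels typically sort tokens by expert and use grouped matmuls instead; the sketch only shows that the per-expert Python loop is not inherent to the computation.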

Comment on lines +268 to +272
b, h, s, d = q.shape
q = q.view(b, h, s, d // 2, 2).transpose(4, 3).reshape(b, h, s, d)

b, h, s, d = k.shape
k = k.view(b, h, s, d // 2, 2).transpose(4, 3).reshape(b, h, s, d)
Contributor


medium

This implementation of applying rotary position embeddings is inefficient due to multiple view, transpose, and reshape operations, as noted in the function's docstring. These operations can be computationally expensive and should be refactored for better performance, for instance by using a more direct computation method.
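For reference, the view/transpose/reshape chain de-interleaves the last dimension (pairs [x0, y0, x1, y1, ...] become halves [x0, x1, ..., y0, y1, ...]); the same result can be obtained with two strided slices and one concatenate. A numpy sketch of the equivalence (shapes are arbitrary; whether the alternative is actually faster in MindSpore depends on the backend and should be benchmarked):

```python
import numpy as np

rng = np.random.default_rng(0)
b, h, s, d = 1, 2, 3, 8
q = rng.standard_normal((b, h, s, d))

# Original pattern: view -> transpose -> reshape to de-interleave pairs.
ref = q.reshape(b, h, s, d // 2, 2).swapaxes(4, 3).reshape(b, h, s, d)

# Alternative: strided slices pull the even and odd positions directly,
# replacing three reshaping ops with a single concatenate.
alt = np.concatenate([q[..., 0::2], q[..., 1::2]], axis=-1)

assert np.allclose(ref, alt)
```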
