
fix[example/hunyunvideo]: HunyuanVideo is compatible with MindSpore 2.6 and 2.7 #1382

Open
iugoood wants to merge 47 commits into mindspore-lab:master from iugoood:hunyunvideo

Conversation

@iugoood
Contributor

@iugoood iugoood commented Oct 20, 2025

What does this PR do?

HunyuanVideo is compatible with MindSpore 2.6 and 2.7

  1. examples/hunyuanvideo: added a GroupNorm class to fix the misalignment of GroupNorm precision types during model training and inference.
  2. Fixed several other bugs.
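The GroupNorm precision fix can be illustrated outside MindSpore with a minimal NumPy sketch (function and variable names here are hypothetical, not the actual implementation in mindone/diffusers/models/normalization.py): the key point is casting the affine weight and bias to the input's dtype, so fp32 parameters don't produce a different output dtype for fp16 inference than for fp32 training.

```python
import numpy as np

def group_norm(x, num_groups, weight, bias, eps=1e-5):
    # x: (N, C, ...) -- normalize each group of channels in float32 for stability
    n, c = x.shape[0], x.shape[1]
    g = x.reshape(n, num_groups, -1).astype(np.float32)
    mean = g.mean(axis=-1, keepdims=True)
    var = g.var(axis=-1, keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    out = g.reshape(x.shape)
    # Cast affine parameters to the input dtype so the output dtype matches
    # x.dtype in both fp16 inference and fp32 training.
    w = weight.astype(x.dtype).reshape(1, c, *([1] * (x.ndim - 2)))
    b = bias.astype(x.dtype).reshape(1, c, *([1] * (x.ndim - 2)))
    return out.astype(x.dtype) * w + b

x = np.random.randn(2, 4, 8).astype(np.float16)  # fp16 activations (inference)
w = np.ones(4, dtype=np.float32)                 # fp32 parameters (training)
b = np.zeros(4, dtype=np.float32)
y = group_norm(x, num_groups=2, weight=w, bias=b)
assert y.dtype == np.float16                     # dtype follows the input
```

In the real class the parameters are MindSpore Parameters and the cast uses Tensor.to(x.dtype); the numerical idea is the same.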

Fixes # (issue)

Adds # (feature)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline?
  • Did you make sure to update the documentation with your changes? E.g. record bug fixes or new features in What's New. Here are the
    documentation guidelines
  • Did you build and run the code without any errors?
  • Did you report the running environment (NPU type/MS version) and performance in the doc? (better record it for data loading, model inference, or training tasks)
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@xxx

@gemini-code-assist
Contributor

Summary of Changes

Hello @iugoood, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on enhancing the compatibility of the HunyuanVideo project with MindSpore versions 2.6 and 2.7. It addresses several bugs, including a critical fix for Group Normalization precision during model operations, corrections in command-line argument parsing for training scripts, and adjustments to how activation functions and text encoder outputs are handled. The overall aim is to improve the stability and functionality of the HunyuanVideo framework within the specified MindSpore environments.

Highlights

  • MindSpore Compatibility: The HunyuanVideo project has been updated to ensure full compatibility with MindSpore versions 2.6 and 2.7.
  • GroupNorm Implementation: A new GroupNorm class has been introduced in mindone/diffusers/models/normalization.py to resolve precision type misalignment issues during model training and inference.
  • Argument Parsing Correction: Command-line argument definitions in the training script (examples/hunyuanvideo/scripts/train.py) were updated to correctly use the -- prefix for validation parameters.
  • Activation Function Instantiation: The unet_causal_3d_blocks.py file was modified to correctly handle the instantiation of activation functions, removing an unnecessary function call.
  • Text Encoder Output Index Fix: An indexing error was corrected in the text encoder's encode method to properly retrieve the last_hidden_state.
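The argument-parsing highlight reflects standard argparse behavior: without the leading `--`, a parameter is a required positional argument rather than an optional flag with a default. A small illustration (the flag name and default value are hypothetical, not the actual ones in examples/hunyuanvideo/scripts/train.py):

```python
import argparse

parser = argparse.ArgumentParser()
# Without the leading "--" this would be a *required positional* argument:
#   parser.add_argument("validation_steps", type=int)
# With the prefix it becomes an optional flag that falls back to a default:
parser.add_argument("--validation_steps", type=int, default=100)

args = parser.parse_args([])  # no flags passed -> default is used
print(args.validation_steps)  # -> 100

args = parser.parse_args(["--validation_steps", "50"])
print(args.validation_steps)  # -> 50
```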
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@iugoood iugoood force-pushed the hunyunvideo branch 2 times, most recently from 147f320 to 29ed7a2, on October 20, 2025 02:18
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces compatibility fixes for HunyuanVideo with MindSpore 2.6 and 2.7. The changes include correcting an indexing issue for hidden states, fixing an activation function call, making command-line arguments optional, and adding a new GroupNorm class to address precision-related misalignments. The changes are generally well-implemented and address the stated goals. My review includes one suggestion to improve the maintainability of the new GroupNorm class.

Comment on lines +758 to +762
```python
if self.affine:
    x = group_norm(x, self.num_groups, self.weight.to(x.dtype), self.bias.to(x.dtype), self.eps)
else:
    x = group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
return x
```
Contributor


medium

The construct method can be made more concise and maintainable by removing the duplicated call to group_norm. You can prepare the weight and bias tensors before a single call.

Suggested change

```python
if self.affine:
    x = group_norm(x, self.num_groups, self.weight.to(x.dtype), self.bias.to(x.dtype), self.eps)
else:
    x = group_norm(x, self.num_groups, self.weight, self.bias, self.eps)
return x
```

```python
weight = self.weight
bias = self.bias
if self.affine:
    weight = weight.to(x.dtype)
    bias = bias.to(x.dtype)
return group_norm(x, self.num_groups, weight, bias, self.eps)
```
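The equivalence of the two forms can be sanity-checked outside MindSpore. In this NumPy sketch a trivial stand-in replaces the real group_norm op and `.astype` stands in for Tensor.to (all names here are hypothetical, for illustration only):

```python
import numpy as np

def group_norm(x, num_groups, weight, bias, eps):
    # Stand-in for the real group_norm op: just the affine transform,
    # enough to compare the two code paths' results and dtypes.
    return x * weight + bias

def branched(x, affine, num_groups, weight, bias, eps=1e-5):
    # Original form: two nearly identical calls.
    if affine:
        return group_norm(x, num_groups, weight.astype(x.dtype), bias.astype(x.dtype), eps)
    return group_norm(x, num_groups, weight, bias, eps)

def single_call(x, affine, num_groups, weight, bias, eps=1e-5):
    # Refactored form: prepare weight/bias once, then a single call.
    if affine:
        weight = weight.astype(x.dtype)
        bias = bias.astype(x.dtype)
    return group_norm(x, num_groups, weight, bias, eps)

x = np.ones((2, 3), dtype=np.float16)
w = np.full(3, 2.0, dtype=np.float32)
b = np.zeros(3, dtype=np.float32)
for affine in (True, False):
    a = branched(x, affine, 1, w, b)
    s = single_call(x, affine, 1, w, b)
    assert a.dtype == s.dtype and np.array_equal(a, s)
```

The refactor keeps the dtype-cast behavior identical while removing the duplicated call, which is the maintainability point of the review comment.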

@iugoood iugoood changed the title from "(bugs):HunyuanVideo is compatible with MindSpore 2.6 and 2.7" to "fix[example/hunyunvideo]:HunyuanVideo is compatible with MindSpore 2.6 and 2.7" on Oct 20, 2025
Comment thread examples/hunyuanvideo/README.md Outdated
Comment thread mindone/diffusers/models/normalization.py Outdated
```python
MAX_VALUE = 1e5


class GroupNorm(nn.Cell):
```
Collaborator


Why not reuse the previous GroupNorm, e.g. the one in mindone.diffusers?

- Add standardized MindSpore/CANN requirements tables to all model READMEs
- Create comprehensive READMEs for 8 new transformers models (aria, bert, got_ocr2, helium, herbert, kimi_vl, qwen, qwen2_audio)
- Update 50+ existing READMEs with consistent requirements format
- Remove center alignment div wrappers and installation sections from transformers READMEs
- Standardize all requirements tables to left-aligned 3-column format
- Update changelog and main README for v0.4.0 release
- Remove div center alignment wrappers from all examples READMEs
- Update requirements tables to left-aligned 3-column format
- Change installation instructions from 'git clone master + pip install -e .' to 'pip install mindone==0.4.0'
- Standardize all examples to use consistent v0.4.0 installation approach
- Update 19 main examples README files with unified format
- Replace all 'pip install -e .' commands with 'pip install mindone==0.4.0'
- Add cd commands before 'pip install -r requirements.txt' to navigate to correct directories
- Update 13 README files with corrected installation instructions
- Ensure consistent v0.4.0 installation approach across all model examples
- Add cd examples/[model_name] before pip install -r requirements.txt in all README files
- Updated 12 additional README files: emu3, opensora_pku, omnigen, wan2_2, hunyuanvideo, step_video_t2v, sparktts, omnigen2, wan2_1, moviegen, hunyuanvideo-i2v, opensora_hpcai
- Ensure users navigate to correct directory before installing model-specific requirements
- Add CogView, CogVideoX, and Flux to diffusers support description
- Make the diffusers entry more informative about specific model support
- Update cogview link to point to correct diffusers path
- Add cogvideox entry with proper diffusers path
- Remove flux entry since no separate example directory exists
- Add flux entry pointing to examples/diffusers/dreambooth (contains flux lora training)
- Add all missing models: canny_edit, lang_sam, mmada, omnigen, omnigen2, sam2, sparktts
- Update repository links to correct original sources
- Complete the model support list for examples folder (excluding diffusers/transformers)
- Remove the first janus entry that had typo 'DeekSeek'
- Keep the correct 'DeepSeek AI official' entry
- Replace all /blob/master/ URLs with /blob/v0.4.0/
- Updated 22 links to point to the v0.4.0 release branch
- Update all model links to point to v0.4.0 branch instead of master
- Add missing models from examples/README.md: canny_edit, lang_sam, mmada, omnigen, omnigen2, sam2, sparktts
- Maintain the same table format with task, model, inference, finetune, pretrain, institute columns
- Updated 30+ links to use v0.4.0 branch
vigo999 and others added 30 commits November 2, 2025 17:58
- Removed models that don't exist in examples folder: magvit, dynamicrafter, venhancer, t2v_turbo, svd, kohya_sd_scripts, story_diffusion, animate diff, video composer, flux, stable diffusion 3, stable diffusion xl, stable diffusion, hunyuan_dit, pixart_sigma, fit, latte, dit, t2i-adapter, ip adapter, mvdream, instantmesh, sv3d, hunyuan3d-1.0
- Kept only models that actually have corresponding directories in examples/
- Flux exists in examples/diffusers/dreambooth/ with training scripts and README
- Added back flux entry that was incorrectly removed
- Changed lang_sam, sam2, and sparktts from full support (✅ ✅ ✅) to inference-only (✅ ✖️ ✖️)
- These models only support inference, not finetuning or pretraining
…t the top

- Moved all inference-only models (✅ ✖️ ✖️) to the top of the table
- Grouped full support models (✅ ✅ ✅) below
- Better organization for easier navigation by capability
- Moved flux to after sparktts (inference-only models)
- Flux now properly positioned in 'inference + finetune' section
- Maintains logical order: inference-only → inference+finetune → full support
- Removed all fire emojis (🔥) from the README model table
- Cleaned up the table appearance for a more professional look
- Updated table header to use more appropriate term 'organization'
- More accurate description for the institute/organization column
- Updated changelog with actual v0.4.0 release content
- Organized changes by model categories (multimodal, video, image, audio, CV)
- Updated statistics to reflect actual release metrics
- Replaced placeholder content with real release information
- Added detailed model additions from mindone/transformers/models and mindone/diffusers/pipelines
- Included bug fixes with PR links from v0.3.0 to v0.4.0
- Added examples/models changes
- Updated statistics to reflect actual changes
- Categorized changes by transformers, diffusers, and examples
- Added mindone.peft v0.15.2 upgrade (mindspore-lab#1194)
- Added Qwen2.5-Omni LoRA finetuning script (mindspore-lab#1218)
- Added PEFT layer fixes (mindspore-lab#1187)
- PEFT is an important parameter-efficient fine-tuning library
- Added version compatibility information at the top
- mindone.diffusers compatible with hf diffusers v0.35.0
- mindone.transformers compatible with hf transformers v4.50
- MindSpore upgraded to require >=2.6.0
- Updated Transformers Models section to mention 280+ models supported
- Updated Diffusers Pipelines section to mention 160+ pipelines supported
- Restructured to highlight major upgrades and comprehensive capabilities
- Better represents the actual scope of v0.4.0 release
- Added detailed categorization of all new transformers models
- Included 50+ specific model additions with PR links
- Organized by model type: Vision, Audio, Text/Multilingual, Multimodal, Architecture
- Added examples and documentation updates
- Comprehensive coverage of v0.4.0 transformers enhancements
- Updated pipeline count to accurately reflect 78 pipeline directories
- Each directory represents a unique pipeline type
- Corrected statistics section with accurate numbers
- Added detailed categorization of new diffusers pipelines
- Included 15+ specific pipeline additions with PR links
- Organized by pipeline type: Video, Image, Audio, Sampling, Testing
- Matches the format used for transformers models summary
- Added comprehensive list of new diffusers model components
- Included video transformers, autoencoders, controlnets, and processing modules
- Listed 15+ new model components added in v0.4.0
- Organized by component type (transformers, autoencoders, controlnets, etc.)
- Changed heading to 'mindone.diffusers update'
- Restructured into 'New pipelines:' and 'Model components:' subsections
- Removed separate 'Diffusers Model Components' section
- Better organized diffusers-related changes under one cohesive section
- Changed 'Transformers Models' to 'mindone.transformers updates' (### level)
- Changed 'New Model Additions:' to 'new models' (#### level)
- Changed 'mindone.diffusers update' to 'mindone.diffusers updates' (### level)
- Changed 'New pipelines:' to 'new pipelines' (#### level)
- Changed 'Model components:' to 'model components' (#### level)
- Updated Examples Models and PEFT headings to ### level
- Consistent hierarchical structure with ### for main sections and #### for subsections
- Changed '**Compatibility Updates:**' to '### Compatibility Updates'
- Consistent with other third level headings in the changelog
- Changed 'PEFT (Parameter-Efficient Fine-Tuning)' to 'mindone.peft'
- Moved mindone.peft section after mindone.diffusers updates
- Changed 'Examples Models' to 'models under examples (mostly with finetune/training scripts)'
- Better logical flow: transformers → diffusers → peft → examples
- Added PR links to model components where specific PRs exist (mindspore-lab#1288, mindspore-lab#1148)
- Added PR links to examples models that have individual PRs (mindspore-lab#1378, mindspore-lab#1233, mindspore-lab#1363, mindspore-lab#1243, mindspore-lab#687, mindspore-lab#1362, mindspore-lab#1227, mindspore-lab#1346, mindspore-lab#1200, mindspore-lab#1369)
- Noted that some components were added as part of broader pipeline implementations
- Improved traceability for specific model additions
- Added PR links for transformer_skyreels_v2 (mindspore-lab#1203), transformer_chroma (mindspore-lab#1157),
  transformer_cosmos (mindspore-lab#1196), transformer_hunyuan_video_framepack (mindspore-lab#1029),
  and consisid_transformer_3d (mindspore-lab#1124)
- Improved traceability for specific model component additions
- Added PR links for autoencoder_kl_cosmos (mindspore-lab#1196), controlnet_sana (mindspore-lab#1145),
  multicontrolnet_union (mindspore-lab#1158), cache_utils (mindspore-lab#1299), auto_model (mindspore-lab#1158),
  lora processing modules (mindspore-lab#1158)
- Added PR links for VAR model (mindspore-lab#905), HunyuanVideo and HunyuanVideo-I2V models (mindspore-lab#1029, mindspore-lab#883),
  Step-Video-T2V model (mindspore-lab#848), CogView4 model (mindspore-lab#874), MMaDA (mindspore-lab#1116)
- Improved traceability for all model components and examples additions
- Updated all (mindspore-lab#1234) references to full URLs ([PR mindspore-lab#1234](https://github.com/mindone/mindone/pull/1234))
- This ensures PR links work properly in local editors, documentation viewers, and when the file is not hosted on GitHub
- Covers transformers models, diffusers pipelines, model components, examples models, and all fixes/changes sections
- Changed all PR links from github.com/mindone/mindone to github.com/mindspore-lab/mindone
- This ensures PR links point to the correct MindSpore Lab repository
- Removed all bullet points under ### Changed section
- Kept the ### Changed header for future accurate updates
- Removed the ### Changed header and all its content
- CHANGELOG now flows directly from examples to Fixed section
- Updated total commits from 52 to 374 (v0.3.0 to v0.4.0)
- Updated files changed from 103 to 4128
- Other statistics remain accurate
4 participants