
npu_megatron_adapt #99

Draft
addsubmuldiv wants to merge 1 commit into modelscope:main from addsubmuldiv:mindspeed_adapt

Conversation

@addsubmuldiv
Collaborator

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Write the detailed information belonging to this PR.

Experiment results

Paste your experiment results here (if needed).

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the framework's compatibility by integrating support for Ascend NPU devices within the Megatron-Core ecosystem. It refines how distributed metrics are aggregated, intelligently configures Megatron models and optimizers based on the detected hardware, and ensures robust handling of dependencies like TransformerEngine. These changes aim to provide a more flexible and performant training experience across diverse hardware platforms.

Highlights

  • NPU (Ascend) Support: Introduced conditional logic and imports to enable Megatron-Core functionality on Ascend NPU devices, leveraging is_torch_npu_available().
  • Distributed Metric Aggregation: Modified the loss calculation to use torch.distributed.all_reduce for NPU devices, which is more stable than all_gather_object in certain NPU/HCCL environments, while retaining the original method for other devices.
  • Dynamic Megatron-Core Configuration: Adapted Megatron-Core model creation, optimizer configuration, and initialization to dynamically apply NPU-specific layer specifications (e.g., get_gpt_layer_local_spec with RMSNorm) and process group handling (e.g., create_gloo_process_groups=False).
  • MindSpeed Integration and Compatibility: Integrated mindspeed.megatron_adaptor.repatch for NPU-specific optimizations, including error handling for known upstream bugs in MindSpeed, and ensured optimizer argument compatibility across different Megatron-Core versions.
  • Layer Normalization Key Handling: Updated the GPT model bridge to correctly map layer normalization weights by dynamically checking for different key names (linear_qkv.layer_norm_weight vs input_layernorm.weight) depending on whether TransformerEngine or local specs are in use.
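The first two highlights describe a conditional aggregation path. A minimal sketch of that pattern is shown below; the function name `aggregate_loss`, the metric shape, and the fallback stub for `is_torch_npu_available` are illustrative assumptions, not the actual twinkle code.

```python
import torch
import torch.distributed as dist

try:
    from transformers.utils import is_torch_npu_available
except ImportError:
    # Fallback stub for environments without transformers installed.
    def is_torch_npu_available():
        return False

def aggregate_loss(loss: torch.Tensor) -> torch.Tensor:
    """Average a per-rank scalar loss across the process group."""
    if not (dist.is_available() and dist.is_initialized()):
        # Single-process case: nothing to aggregate.
        return loss
    if is_torch_npu_available():
        # all_reduce on a dense tensor is more stable than
        # all_gather_object in some NPU/HCCL environments.
        reduced = loss.detach().clone()
        dist.all_reduce(reduced, op=dist.ReduceOp.SUM)
        return reduced / dist.get_world_size()
    # Original path: gather Python objects and average on each rank.
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, loss.item())
    return torch.tensor(sum(gathered) / len(gathered))
```

Keeping both branches preserves the existing behavior on CUDA while restricting the HCCL-friendly path to NPU devices.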


Changelog
  • src/twinkle/metric/loss.py
    • Imported necessary modules for distributed operations and NPU detection.
    • Modified the calculate method to conditionally use torch.distributed.all_reduce for NPU devices for metric aggregation, improving stability.
  • src/twinkle/model/megatron/args.py
    • Imported is_torch_npu_available and get_gpt_layer_local_spec.
    • Implemented NPU-specific logic in create_model to select appropriate GPT layer specifications, including RMSNorm for NPU.
  • src/twinkle/model/megatron/megatron.py
    • Added NPU detection and conditional import of mindspeed.megatron_adaptor.repatch.
    • Adjusted data parallel group creation in __post_init__ for NPU devices without FSDP.
    • Incorporated NPU-specific patching via repatch in __init__, with error handling for MindSpeed bugs.
    • Ensured batch tensors are moved to the local device in forward_step_func.
    • Updated _create_megatron_optimizer to dynamically pass arguments to OptimizerConfig for version compatibility and disabled Gloo process groups for NPU.
    • Configured Megatron-Core initialization to disable Gloo process groups when NPU is detected.
  • src/twinkle/model/megatron/model/gpt_bridge.py
    • Modified _set_layer_attn and _set_layer_mlp to dynamically determine the correct layer normalization key names based on the Megatron-Core layer specification being used.
  • src/twinkle/model/megatron/model/gpt_model.py
    • Refactored the import of TELinear to use a try-except block, providing a fallback dummy class that raises an error if TransformerEngine is required but not installed.
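The last changelog entry describes a guarded-import pattern. A sketch is shown below, assuming the import path used by recent Megatron-Core releases; the exact error message and the `HAVE_TE` flag name are illustrative assumptions.

```python
# Guarded import: fall back to a dummy class that fails loudly only
# when TransformerEngine functionality is actually requested.
try:
    from megatron.core.extensions.transformer_engine import TELinear
    HAVE_TE = True
except ImportError:
    HAVE_TE = False

    class TELinear:  # placeholder when TransformerEngine is absent
        def __init__(self, *args, **kwargs):
            raise ImportError(
                "TransformerEngine is required for TELinear but is not "
                "installed; use the local layer spec instead."
            )
```

This defers the failure from import time to instantiation time, so NPU runs that use the local layer spec never touch the missing dependency.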
Activity
  • No specific activity (comments, reviews, or progress updates) was provided in the pull request context.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces adaptations for running Megatron models on NPUs. The changes include using NPU-specific communication primitives, handling optional dependencies like TransformerEngine, and making the code more robust for different hardware backends. My review focuses on improving code maintainability by reducing redundant function calls to check for NPU availability.

Comment on lines +243 to +245
is_npu = is_torch_npu_available()

if repatch is not None and is_npu:

medium

To avoid repeated calls to is_torch_npu_available(), it's better to call it once in the initializer and store the result in an instance attribute like self.is_npu. This attribute can then be reused in other methods of this class, such as _create_megatron_optimizer and initialize, improving code clarity and maintainability.

Suggested change
-    is_npu = is_torch_npu_available()
-    if repatch is not None and is_npu:
+    self.is_npu = is_torch_npu_available()
+    if repatch is not None and self.is_npu:


# Ensure each model chunk has ddp_config attached (required by Megatron optimizer)
from megatron.core.distributed import DistributedDataParallelConfig
is_npu = is_torch_npu_available()

medium

To maintain consistency and avoid redundant calls, please use the self.is_npu attribute that should be initialized in the constructor.

Suggested change
-    is_npu = is_torch_npu_available()
+    is_npu = self.is_npu

from .args import get_args
self._try_init_process_group()
args = get_args()
is_npu = is_torch_npu_available()

medium

To maintain consistency and avoid redundant calls, please use the self.is_npu attribute that should be initialized in the constructor.

Suggested change
-    is_npu = is_torch_npu_available()
+    is_npu = self.is_npu
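Taken together, the three suggestions amount to one pattern: detect the NPU once in the constructor and reuse the cached attribute. The sketch below illustrates it; the class name `MegatronEngine`, the method bodies, and the stand-in `is_torch_npu_available` (which assumes a non-NPU machine) are hypothetical, not the real twinkle implementation.

```python
def is_torch_npu_available() -> bool:
    # Stand-in for transformers.utils.is_torch_npu_available;
    # assumed False on machines without an Ascend NPU.
    return False

class MegatronEngine:
    def __init__(self):
        # Run the (potentially costly) detection exactly once.
        self.is_npu = is_torch_npu_available()

    def _create_megatron_optimizer(self) -> dict:
        # Reuse the cached flag instead of re-detecting the device.
        return {"create_gloo_process_groups": not self.is_npu}

    def initialize(self) -> dict:
        # Same flag drives Megatron-Core initialization.
        return {"create_gloo_process_groups": not self.is_npu}
```

Centralizing the check also guarantees the optimizer and initialization paths can never disagree about the device type mid-run.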
