
[BIONEMO-2639] Add Evo2 LoRA example to jupyter notebook #1066

Open
balvisio wants to merge 1 commit into main from
dev/ba/BIONEMO-2639-add-evo2-lora-example

Conversation

@balvisio
Collaborator

Description

Added an example of how to fine-tune with LoRA to the Evo2 Jupyter notebook.

Type of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Refactor
  • Documentation update
  • Other (please describe):

CI Pipeline Configuration

Configure CI behavior by applying the relevant labels:

Note

By default, the notebooks validation tests are skipped unless explicitly enabled.

Authorizing CI Runs

We use copy-pr-bot to manage authorization of CI
runs on NVIDIA's compute resources.

  • If a pull request is opened by a trusted user and contains only trusted changes, the pull request's code will
    automatically be copied to a pull-request/ prefixed branch in the source repository (e.g. pull-request/123)
  • If a pull request is opened by an untrusted user or contains untrusted changes, an NVIDIA org member must leave an
    /ok to test comment on the pull request to trigger CI. This will need to be done for each new commit.

Usage

# TODO: Add code snippet

Pre-submit Checklist

  • I have tested these changes locally
  • I have updated the documentation accordingly
  • I have added/updated tests as needed
  • All existing tests pass successfully

@copy-pr-bot

copy-pr-bot bot commented Aug 25, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@dorotat-nv
Collaborator

@balvisio, evo2 finetune + LoRA does not work; see issue #1136.

@cclough

cclough commented Sep 18, 2025

Can we add an inference demo for a fine-tuned model to the demo notebook too? See comment

"difference in accuracy.",
)
ap.add_argument(
"--lora-checkpoint-path",
Collaborator

Consider expanding the help text to give users a clearer idea of what happens when they pass a LoRA checkpoint path, i.e. that it initializes the model transform. For example, you could briefly mention what is restored and how it affects the model.
Also, for consistency and readability, you might want to use the common capitalization "LoRA" in the description.
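A minimal sketch of what the expanded help text could look like. The argument name comes from the diff above; the parser setup and the exact wording of the help string are illustrative suggestions, not the repository's actual code:

```python
import argparse
from pathlib import Path

ap = argparse.ArgumentParser()
ap.add_argument(
    "--lora-checkpoint-path",
    type=Path,
    default=None,
    help=(
        "Path to a LoRA (Low-Rank Adaptation) checkpoint. When provided, "
        "a model transform is initialized that restores the LoRA adapter "
        "weights on top of the base model weights before running inference."
    ),
)

# Parse a sample command line to show the resulting value.
args = ap.parse_args(["--lora-checkpoint-path", "checkpoints/lora"])
print(args.lora_checkpoint_path)
```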

seq_len_interpolation_factor: int | None = None,
lora_checkpoint_path: Path | None = None,
):
"""Inference workflow for Evo2.
Collaborator

Please add the LoRA-specific argument to the docstring.

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
Collaborator

Why is this file under run? Could you please relocate it to
sub-packages/bionemo-evo2/src/bionemo/evo2/models, where it belongs?

),
),
log_every_n_steps=1,
limit_val_batches=10,
Collaborator

Why do we need limit_val_batches in predict?

),
log_every_n_steps=1,
limit_val_batches=10,
num_sanity_val_steps=0,
Collaborator

Same question as above.

)
parser.add_argument("--lora-finetune", action="store_true", help="Use LoRA fine-tuning", default=False)
parser.add_argument("--lora-checkpoint-path", type=Path, default=None, help="LoRA checkpoint path")
parser.add_argument("--lora-checkpoint-path", type=str, default=None, help="LoRA checkpoint path")
Collaborator

In predict.py, the LoRA path is a Path; please keep it consistent. Also, the same comment as above applies here regarding the description of the parameter.
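The two duplicated add_argument calls above could collapse into a single Path-typed argument, matching predict.py. A sketch, with an illustrative help string and parser setup:

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument(
    "--lora-checkpoint-path",
    type=Path,  # Path, as in predict.py, rather than str
    default=None,
    help="Path to a LoRA checkpoint to restore adapter weights from.",
)

# The parsed value arrives as a Path object, consistent with predict.py.
args = parser.parse_args(["--lora-checkpoint-path", "ckpts/lora"])
print(isinstance(args.lora_checkpoint_path, Path))
```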


if args.lora_finetune:
callbacks.append(ModelTransform())
callbacks.append(lora_transform)
Collaborator

In predict.py, the LoRA transform is initialized by

        if lora_checkpoint_path:
             model_transform = Evo2LoRA(peft_ckpt_path=str(lora_checkpoint_path))
             callbacks.append(model_transform)
        else:
             model_transform = None

and there is no need for a lora_finetune parameter. Could we keep it consistent?

Collaborator

IMHO --lora-finetune is not needed; let's remove it and use LoRA whenever the checkpoint is provided. Please specify in the docs that providing a checkpoint enables LoRA.


if args.lora_finetune:
callbacks.append(ModelTransform())
callbacks.append(lora_transform)
Collaborator

Could you please update

with the required changes?

It can be done in a follow-up MR.

@balvisio force-pushed the dev/ba/BIONEMO-2639-add-evo2-lora-example branch from 7b9d3f1 to 5c10500 on October 19, 2025 19:12
@coderabbitai
Contributor

coderabbitai bot commented Oct 19, 2025

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fb6eebf2-a6a0-48bd-9e72-8ae48a6c09f6

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Signed-off-by: Bruno Alvisio <balvisio@nvidia.com>
@balvisio force-pushed the dev/ba/BIONEMO-2639-add-evo2-lora-example branch from 5c10500 to 26e1904 on March 4, 2026 20:57
