[BIONEMO-2639] Add Evo2 LoRA example to jupyter notebook #1066
Conversation
Can we add an inference demo for a fine-tuned model to the demo notebook too? See comment.
| "difference in accuracy.", | ||
| ) | ||
| ap.add_argument( | ||
| "--lora-checkpoint-path", |
Consider expanding the help text to give users a clearer idea of what happens when they pass a LoRA checkpoint path: it initializes a model transform. For example, you could briefly mention what is restored and how it affects the model.
Also, for consistency and readability, you might want to use the common capitalization "LoRA" in the description.
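A minimal sketch of what the expanded help text might look like (the argument name comes from the diff; the exact wording and the restore-behavior description are assumptions about this PR's implementation):

```python
import argparse
from pathlib import Path

ap = argparse.ArgumentParser()
# Hypothetical expanded help text: spells out that passing a path
# initializes a LoRA model transform and restores adapter weights.
ap.add_argument(
    "--lora-checkpoint-path",
    type=Path,
    default=None,
    help=(
        "Path to a LoRA checkpoint. If provided, a LoRA model transform is "
        "initialized and the adapter weights are restored on top of the base "
        "model, so outputs reflect the fine-tuned model rather than the base "
        "model."
    ),
)
```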
    seq_len_interpolation_factor: int | None = None,
    lora_checkpoint_path: Path | None = None,
):
    """Inference workflow for Evo2.
Please add the LoRA-specific argument to the docstring.
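For example, the docstring could gain a LoRA entry along these lines (parameter names are taken from the diff; the descriptions, and the function name `infer`, are assumptions for illustration):

```python
from __future__ import annotations

from pathlib import Path


def infer(
    seq_len_interpolation_factor: int | None = None,
    lora_checkpoint_path: Path | None = None,
):
    """Inference workflow for Evo2.

    Args:
        seq_len_interpolation_factor: Optional factor for sequence-length
            interpolation of positional embeddings (description assumed).
        lora_checkpoint_path: Optional path to a LoRA checkpoint. When set,
            the LoRA adapter weights are restored onto the base model before
            inference runs; when None, the unmodified base model is used.
    """
```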
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
Why is this file under run? Could you please relocate it under
sub-packages/bionemo-evo2/src/bionemo/evo2/models, where it belongs?
        ),
    ),
    log_every_n_steps=1,
    limit_val_batches=10,
Why do we need limit_val_batches in predict?
    ),
    log_every_n_steps=1,
    limit_val_batches=10,
    num_sanity_val_steps=0,
Same question as above.
)
parser.add_argument("--lora-finetune", action="store_true", help="Use LoRA fine-tuning", default=False)
parser.add_argument("--lora-checkpoint-path", type=Path, default=None, help="LoRA checkpoint path")
parser.add_argument("--lora-checkpoint-path", type=str, default=None, help="LoRA checkpoint path")
In predict.py, the LoRA path is a Path; please keep it consistent. Also, the same comment as above applies regarding the description of the parameter.
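A sketch of the consistent version (type=Path to match predict.py; the help wording is an assumption). Note that registering `--lora-checkpoint-path` twice, as in the quoted diff, would raise `argparse.ArgumentError` at startup, so only one definition should remain:

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
# Single definition with type=Path, matching predict.py; the duplicated
# str variant from the diff is dropped (argparse rejects duplicate flags).
parser.add_argument(
    "--lora-checkpoint-path",
    type=Path,
    default=None,
    help=(
        "Path to a LoRA checkpoint; if set, LoRA adapter weights are "
        "restored onto the base model."
    ),
)
args = parser.parse_args(["--lora-checkpoint-path", "results/lora_ckpt"])
```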
if args.lora_finetune:
    callbacks.append(ModelTransform())
    callbacks.append(lora_transform)
In predict.py, the LoRA transform is initialized by:

```python
if lora_checkpoint_path:
    model_transform = Evo2LoRA(peft_ckpt_path=str(lora_checkpoint_path))
    callbacks.append(model_transform)
else:
    model_transform = None
```

and there is no need for a lora_finetune parameter. Could we keep it consistent?
IMHO --lora-finetune is not needed; let's remove it and enable LoRA whenever the checkpoint is provided. Please specify in the docs that providing a checkpoint enables LoRA.
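The same idea applied to the fine-tuning script could look like the sketch below: LoRA is enabled purely from the checkpoint argument, so the separate flag disappears. The `Evo2LoRA` name and `peft_ckpt_path` keyword are taken from the predict.py code quoted earlier in this review; the stub class and `build_callbacks` helper are hypothetical stand-ins, not the PR's actual code.

```python
from __future__ import annotations

from pathlib import Path
from types import SimpleNamespace


# Stand-in for BioNeMo's Evo2LoRA transform (real import path not shown in
# this review); only the constructor signature from predict.py is mimicked.
class Evo2LoRA:
    def __init__(self, peft_ckpt_path: str):
        self.peft_ckpt_path = peft_ckpt_path


def build_callbacks(args):
    """Enable LoRA purely from --lora-checkpoint-path.

    A separate --lora-finetune flag becomes unnecessary: a non-None
    checkpoint path is the signal that LoRA should be applied.
    """
    callbacks = []
    if args.lora_checkpoint_path is not None:
        model_transform = Evo2LoRA(peft_ckpt_path=str(args.lora_checkpoint_path))
        callbacks.append(model_transform)
    else:
        model_transform = None
    return callbacks, model_transform
```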
if args.lora_finetune:
    callbacks.append(ModelTransform())
    callbacks.append(lora_transform)
Could you please update with the required changes? It can be done in a follow-up MR.
Force-pushed 7b9d3f1 to 5c10500.
Signed-off-by: Bruno Alvisio <balvisio@nvidia.com>
Force-pushed 5c10500 to 26e1904.
Description
Added an example of fine-tuning with LoRA to the Evo2 Jupyter notebook.
Type of changes
CI Pipeline Configuration
Configure CI behavior by applying the relevant labels:
Note
By default, the notebooks validation tests are skipped unless explicitly enabled.
Authorizing CI Runs
We use copy-pr-bot to manage authorization of CI
runs on NVIDIA's compute resources.
Commits will automatically be copied to a pull-request/ prefixed branch in the source repository (e.g. pull-request/123).
An authorized user must leave an /ok to test comment on the pull request to trigger CI. This will need to be done for each new commit.
Usage
# TODO: Add code snippet
Pre-submit Checklist