Refactor applications/DynaCell into a reusable benchmark package #404
Draft. alxndrkalinin wants to merge 312 commits into modular-viscy-staging from dynacell-models.
Changes from all commits (312 commits):

- 4e8581e refactor(eval): consolidate save_metrics loop, skip empty DataFrames (alxndrkalinin)
- 1b10b7f refactor(eval): split GT/pred feature computation, add force_recompute (alxndrkalinin)
- ebf769d feat(eval): integrate GT cache into evaluate_predictions (alxndrkalinin)
- 4f43dfe feat(eval): add dynacell precompute-gt CLI (alxndrkalinin)
- f68eca0 docs(eval): document GT cache, precompute-gt CLI, parallel sweeps (alxndrkalinin)
- db70c78 refactor(eval): batch zarr opens per FOV, dedup slug, type kind (alxndrkalinin)
- de4882b refactor(eval): encapsulate cache dirty flag, narrow broad except (alxndrkalinin)
- c822c84 test(eval): add pinned-value regression tests for feature pairing (alxndrkalinin)
- fd030f8 update the model .yml file for unetvit3d
- 60f9ca9 update the training script for unetvit3d on sec61b
- 1690b7f perf(eval): cache ckpt sha256 via sidecar file (alxndrkalinin)
- 7df8f07 feat(cli): strip launcher and benchmark reserved keys in compose (alxndrkalinin)
- a83c4a2 chore(configs): commit benchmark schema and virtual_staining skeleton (alxndrkalinin)
- 8114048 feat(configs): add CellDiff train leaves for er/mito/nucleus/membrane (alxndrkalinin)
- 22bdab9 feat(configs): add CellDiff predict leaves (self-predict) (alxndrkalinin)
- 8e00988 feat(tools): add submit_benchmark_job.py with dry-run and sbatch temp… (alxndrkalinin)
- 13da046 chore(configs): archive Dihan's CellDiff trees under tools/LEGACY (alxndrkalinin)
- 1b7dae8 docs(dynacell): update README with benchmark layout and submit tool (alxndrkalinin)
- 86db6d4 refactor(utils): promote deep_merge to public API (alxndrkalinin)
- ff53b3d fix(tools): address simplify review findings (alxndrkalinin)
- 219b9b0 fix(tools): address code review — pytest pythonpath, flag semantics (alxndrkalinin)
- 7706ae8 fix(tools): decouple preview contract from --dry-run (alxndrkalinin)
- 4a967c0 fix(tools): shlex-quote env values in rendered sbatch (alxndrkalinin)
- 4e64ff3 test(utils): restore test_deep_merge_* underscore separator (alxndrkalinin)
- 5b352cc docs(dynacell): document submit tool flags and preview contract (alxndrkalinin)
- 5e69dc7 docs(eval): note ckpt sha256 sidecar under cache identity (alxndrkalinin)
- 9e94d02 feat(configs): migrate UNetViT3D and FNet3D paper SEC61B leaves to sc… (alxndrkalinin)
- a2361ca refactor(data): rename HCSDataModule preload kwarg to mmap_preload (alxndrkalinin)
- 7096d64 feat(configs): add FNet3D paper-baseline fit leaves for 3 more organe… (alxndrkalinin)
- 6d00854 feat(tools): make sbatch constraint directive optional (alxndrkalinin)
- 16fa6fa fix(configs): align fnet3d_paper leaves with paper-run hardware + max… (alxndrkalinin)
- 6ed2494 fix(configs): bump gpu_any_long mem to 512G to survive mmap preload (alxndrkalinin)
- ffd84d7 update unetvit3d training yml
- 44aa49c fix(configs): narrow 512G mem bump to cell.zarr-backed leaves (alxndrkalinin)
- c9b8f3c fix(configs): drop num_log_steps from unetvit3d overlay (alxndrkalinin)
- e6780bb test(configs): allow checkpoint policy divergence in unetvit3d test (alxndrkalinin)
- 66b4a71 feat(configs): add UNeXt2 SEC61B fit leaf (Run 4 reproduction) (alxndrkalinin)
- be84b25 update the predict_method for unetvit3d
- c9a6e16 feat(dynacell): add denoise_sliding_window with overlap averaging
- 4702d7a feat(configs): set predict_method=iterative for celldiff iPSC confocal
- 8b2332c perf(data): preserve native dtype in mmap_preload, cast on sample read (alxndrkalinin)
- 70765f2 refactor(data): tighten array_key sentinel, drop WHAT-comments in tests (alxndrkalinin)
- 9734d07 refactor(configs): rename runtime_single_gpu to runtime_shared (alxndrkalinin)
- f9b8f1e feat(configs): add topology recipes under dynacell and cytoland (alxndrkalinin)
- 5b7eaae refactor(configs): unify fit/predict trainer recipes, own topology se… (alxndrkalinin)
- 3fdb7cf refactor: trim WHAT-comments and drop unused _strip_reserved in confi… (alxndrkalinin)
- bde233d refactor(tools): drop undocumented stdout echo from --dry-run (alxndrkalinin)
- e0f5c00 refactor(cli): log warning when composed-config parse fails (alxndrkalinin)
- f31205c perf(compose): memoize YAML parsing in load_composed_config (alxndrkalinin)
- 957cf9d ci: add dynacell benchmark-config tests to the test matrix (alxndrkalinin)
- 44c2834 feat(configs): set predict params and fix output paths for CELLDiff i…
- f4af391 feat(configs): add UNetViT3D train and predict configs for iPSC confocal
- 77a7063 Add UNetViT3D mito predict benchmark config
- f29be3f fix(tools): set umask 0002 so benchmark run outputs are group-writable (alxndrkalinin)
- 0618acd refactor(cli): let config read errors propagate in _maybe_compose_config (alxndrkalinin)
- a6d2576 feat(configs): add membrane predict config and switch to shared runti…
- abe35fa style(engine): fix pre-existing ruff E741 and E501 violations (alxndrkalinin)
- fc3cf5f feat(engine): add encoder_only FCMAE load to DynacellUNet (alxndrkalinin)
- 53385bb feat(configs): add FCMAE-family benchmark pair on ER/SEC61B (alxndrkalinin)
- ee86d29 test(engine,configs): cover encoder_only load + scratch≡pretrained in… (alxndrkalinin)
- c954bc6 fix(evaluation): declare feature_extractor schema in eval.yaml (alxndrkalinin)
- 1c72f2f feat(evaluation): add Hydra config groups for target/predict_set/feat… (alxndrkalinin)
- 80d9465 feat(configs): bump fcmae_vscyto3d fit overlay lr to 0.0004 (alxndrkalinin)
- 78231ed feat(dynacell): add eval_gpu extra with cupy-cuda12x + cucim-cu12 (alxndrkalinin)
- b409c0d docs(evaluation): drop redundant cell_segmentation_path override in e… (alxndrkalinin)
- 4dea81a fix(evaluation): fail loud on missing channels, skip empty metric files (alxndrkalinin)
- 817bd6c feat(evaluation): add benchmark eval leaves mirroring predict tree (alxndrkalinin)
- 4f6a6e2 docs(evaluation): add benchmark row to Config groups table (alxndrkalinin)
- 85e31f6 fix(dynacell): run 4-GPU DDP with ntasks=4, not ntasks=1 (alxndrkalinin)
- b74a534 fix(configs): use ddp_find_unused_parameters_true for FCMAE leaves (alxndrkalinin)
- d1d02fd refactor(evaluation): move HPC-bound eval configs out of src/ (alxndrkalinin)
- e16be57 feat(dynacell): enforce ntasks == gpus == nodes × devices at submit (alxndrkalinin)
- 089c6e9 refactor(__main__): locate external configs via pyproject.toml marker (alxndrkalinin)
- 5974a5c feat(configs): add FCMAE scratch/pretrained pair for mito/TOMM20 (alxndrkalinin)
- 8d3ca52 docs(configs): redefine benchmark config schema for unified layout (alxndrkalinin)
- 587b1b4 refactor(configs): unify benchmark tree with train+predict+eval per cell (alxndrkalinin)
- 29cd698 fix(configs): update train/predict leaf base: paths after shared/mode… (alxndrkalinin)
- 5224d8d refactor(__main__,tests): apply simplify review cleanup (alxndrkalinin)
- 2ac99fa docs(eval): refresh README for post-reorg config layout (alxndrkalinin)
- 3cf7dd2 refactor(configs): move shared/ + leaf/ under hidden _internal/ root (alxndrkalinin)
- 23d924d fix(engine): honor predict_overlap and multi-channel sliding windows (alxndrkalinin)
- b3ab22d fix(__main__): keep hydra.searchpath adjacent to positional overrides (alxndrkalinin)
- 5178636 docs(eval): point README intro at _internal/ for HPC-bound groups (alxndrkalinin)
- 94ed736 docs(configs): remove stale benchmark schema doc (alxndrkalinin)
- 26c41d3 fix(dynacell): emit --ntasks-per-node for SLURM/Lightning compat (alxndrkalinin)
- 18bf2ef feat(eval): share DINOv3 + other HF artifacts across team via HF_HOME (alxndrkalinin)
- 010a9e2 feat(eval): pin canonical feature-extractor and gt_cache_dir defaults (alxndrkalinin)
- 8d3af13 refactor(__main__,tests): apply simplify review cleanup (alxndrkalinin)
- f11abba fix(eval): shared HF cache uses HF_HUB_CACHE not HF_HOME (alxndrkalinin)
- f0d8d9b fix(configs): halve FCMAE dataloader workers to stay under 512G cgroup (alxndrkalinin)
- 14f59f1 refactor(configs): group benchmark leaves by train set under <org>/<m… (alxndrkalinin)
- bfc6189 fix(evaluation): independent min-max norm for metrics, fix cache cont…
- 38d47b3 feat(dynacell): manifest-driven dataset_ref resolver for benchmark le… (alxndrkalinin)
- 032e424 docs(dynacell): roadmap + spec for dataset_ref resolver staged rollout (alxndrkalinin)
- 4bb9f09 refactor(dynacell): strict dataset_ref collision + shared manifest-ro… (alxndrkalinin)
- 11836c8 test(dynacell): add nucleus and membrane fixture manifest targets (alxndrkalinin)
- 326b2d0 refactor(dynacell): migrate mito/nucleus/membrane targets to dataset_ref (alxndrkalinin)
- 6273439 test(dynacell): consolidate dataset_ref resolver tests across migrate… (alxndrkalinin)
- 8924ab2 feat(dynacell): add Hydra-side dataset_ref hook module (alxndrkalinin)
- f5a6e56 refactor(dynacell): wire Hydra dataset_ref hook; migrate eval configs (alxndrkalinin)
- a984384 refactor(dynacell): dedup dataset_ref hooks and test setup (alxndrkalinin)
- 46be278 feat(dynacell): add evaluation scripts and CUDA envrc, ignore checkpo…
- 3a27c45 chore(dynacell): bump 4gpu mem 512G -> 1024G after two OOM deaths (alxndrkalinin)
- e9c4cd7 fix(viscy-data): let BatchedConcatDataset tolerate single-sample chil… (alxndrkalinin)
- 8c21ca6 refactor(viscy-data): drop :class: markup, assert lossless batch grou… (alxndrkalinin)
- 4bc2e53 feat(viscy-data): attach ShardedDistributedSampler in BatchedConcatDa… (alxndrkalinin)
- e01089d feat(viscy-utils): add configure_adamw_scheduler helper (alxndrkalinin)
- d296c7f refactor(dynacell): use shared optimizer helper; expose warmup knobs (alxndrkalinin)
- ef545c0 refactor(cytoland): adopt shared optimizer helper and ckpt_path hpara… (alxndrkalinin)
- a519ac9 chore(cytoland): default vscyto3d finetune to bf16-mixed (alxndrkalinin)
- 5950576 refactor(dynacell): split fit overlays into model+trainer vs HCS data (alxndrkalinin)
- c603687 docs(dynacell): add job submission reliability plan (alxndrkalinin)
- b157daa feat(dynacell): add --exclude as optional SBATCH directive (alxndrkalinin)
- b9ebf7b feat(dynacell): add NCCL preflight smoke test to sbatch template (alxndrkalinin)
- 96c5787 feat(viscy-data): add include/exclude_fov_names to HCSDataModule (alxndrkalinin)
- aa68329 feat(cytoland): add A549 infection VSCyto3D warm-start finetune configs (alxndrkalinin)
- ae7296b refactor(dynacell): /simplify cleanup of preflight + exclude impl (alxndrkalinin)
- 9149ed7 perf(viscy-data): store FOV filters as sets for O(1) lookup (alxndrkalinin)
- ab50b7c fix(viscy-data): resolve per-timepoint norm_meta before transforms (alxndrkalinin)
- 9a730f8 feat(dynacell): add FNet3D paper predict configs for ipsc_confocal (alxndrkalinin)
- 8c7bbae feat(cytoland): align A549 infection finetune to dynacell FCMAE recipe (alxndrkalinin)
- 9654e2b feat(dynacell): add first Stage 7 joint train leaf (celldiff, ER/SEC61B) (alxndrkalinin)
- 3072c3e feat(viscy-data): support include/exclude_fov_names with mmap_preload… (alxndrkalinin)
- 26f1a7b docs(dynacell): refresh virtual_staining README for fit-split + joint… (alxndrkalinin)
- 48797b6 fix(dynacell): clarify error message for missing seg_model (alxndrkalinin)
- b65037e fix(dynacell): guard sbatch cleanup against unset SLURM_JOB_ID (alxndrkalinin)
- 9c1ab9c fix(dynacell): single-source dinov3 model name via Hydra group (alxndrkalinin)
- e06a71b fix(dynacell): drop NaN bars from cross-model metric figure (alxndrkalinin)
- 5a2c6fb refactor(dynacell): use Field(default_factory=list) for Pydantic list… (alxndrkalinin)
- b422bc5 fix(viscy-utils): strip top-level _-prefixed keys from composed config (alxndrkalinin)
- 437eb11 docs(dynacell): document anchor convention; pin no-anchor-leak invariant (alxndrkalinin)
- 3434b7e chore(dynacell): declare wandb optional extra; document the runtime gap (alxndrkalinin)
- 4d399d5 feat(dynacell): add canonical joint smoke leaf for celldiff/joint_ips… (alxndrkalinin)
- 453b09d feat(dynacell): add hardware_h200_single_smoke launcher profile (alxndrkalinin)
- 8ff7abf refactor(dynacell): /simplify smoke profile via wall-only overlay (alxndrkalinin)
- 3e3909a feat(dynacell): disable logger at the smoke leaf level (alxndrkalinin)
- 00a2730 fix(viscy-data): drop non-tensor metadata in BatchedConcatDataModule … (alxndrkalinin)
- 5b2327f fix(dynacell): cut smoke leaf batch_size to 1 to fit a single H200 (alxndrkalinin)
- 234819a feat(dynacell): add 4-GPU DDP smoke leaf for joint celldiff SEC61B (alxndrkalinin)
- 48f4878 perf(viscy-utils): bf16-precision SSIM helper for Hopper FCMAE traini… (alxndrkalinin)
- 0b04b24 fix(viscy-data): drop use_thread_workers to fix DDP deadlock (#413) (alxndrkalinin)
- 2e0ee29 feat(dynacell): nucleus + membrane FCMAE_VSCyto3D scratch + pretraine… (alxndrkalinin)
- a198b5e fix(dynacell): override Structure aug-keys for nucleus + membrane FCM… (alxndrkalinin)
- e814147 feat(dynacell): add eval script for FNet3D paper predictions on iPSC …
- 8f92711 feat(dynacell): unblock joint training + add A549 cross-eval (#415) (alxndrkalinin)
- 6c75284 docs(dynacell): final findings + 8-job FCMAE benchmark, open items (alxndrkalinin)
- 4bb4fd9 docs(dynacell): clean up resolved planning docs; refresh A549 roadmap (alxndrkalinin)
- 9af8bdf chore(dynacell): rename ER+Mito FCMAE pretrained outputs to _ws8500 (alxndrkalinin)
- 40445fe chore(dynacell): pin compute-job CWD to repo_root in sbatch template (alxndrkalinin)
- b196723 docs(dynacell): refresh A549 roadmap with 2026-04-24 status (alxndrkalinin)
- bd74574 fix(dynacell): repoint mito A549 cross-eval to 2024_11_21 plate (alxndrkalinin)
- 6516c1a feat(dynacell): add fnet3d_paper Stage 6 A549 predict leaves (alxndrkalinin)
- b41e7df feat(dynacell): add fcmae_vscyto3d Stage 6 A549 predict scaffolding (alxndrkalinin)
- df85d93 feat(dynacell): add FNet3D ER joint single-GPU smoke leaf (alxndrkalinin)
- ad1df84 perf(cytoland): enable mmap+persistent+bf16 for A549 infected finetune (alxndrkalinin)
- 41d39ba chore(cytoland): drop h200 constraint on A549 infected sbatch (alxndrkalinin)
- 2395e1d feat(dynacell): bundle manifest registry as first-class package data (alxndrkalinin)
- ef0e7c8 chore(cytoland): require >=80 GB VRAM for A549 infected sbatch (alxndrkalinin)
- 76940d1 docs(dynacell): note manifest registry drift policy in README (alxndrkalinin)
- 31d72ee feat(dynacell): submit_benchmark_job.py supports optional --dependenc… (alxndrkalinin)
- cbcf7bd feat(dynacell): bundle 4 missing A549 manifests + predict_set fragments (alxndrkalinin)
- 3098cfe feat(dynacell): per-plate A549 predict leaves for ER + MITO across 5 … (alxndrkalinin)
- 2f4dff1 feat(dynacell): submit_benchmark_batch.py — chain N predict leaves in… (alxndrkalinin)
- 694d744 feat(dynacell): compute FID/KID at dataset level across all cells
- ffc15db feat(dynacell): add per-model eval scripts in model subfolders
- b027421 fix(dynacell): use uv run consistently and fix pred path in eval scripts
- b4c66aa refactor(dynacell): consolidate eval scripts into model subfolders
- f051def chore(manifests): point a549-mantis stores at .ozx (alxndrkalinin)
- a327d0d feat(dynacell): add --override + --overwrite to submit_benchmark_batc… (alxndrkalinin)
- 6b8ccc9 feat(dynacell): add --overwrite alias to submit_benchmark_job.py for … (alxndrkalinin)
- b2a17fa feat(dynacell): predict_local_a549.sh — local-GPU per-plate predict b… (alxndrkalinin)
- ab4ee83 fix(dynacell): predict_local_a549.sh dying after first plate (alxndrkalinin)
- b6988c4 fix(dynacell): predict_local_a549.sh - probe unbuffer + catch child f… (alxndrkalinin)
- ac6c159 feat(dynacell): tomm20 fcmae scratch predict leaves pinned to best ckpt (alxndrkalinin)
- 5644906 feat(dynacell): matrix-fill iPSC predict leaves for remaining 7 FCMAE… (alxndrkalinin)
- b0c0f24 chore(dynacell): drop legacy per-date a549_mantis registry + leaves (alxndrkalinin)
- 503774b feat(dynacell): per-condition a549_mantis registry + predict/eval matrix (alxndrkalinin)
- e98b407 feat(dynacell): a549 evaluation runner scripts for fnet3d + unetvit3d (alxndrkalinin)
- 6c92b9f chore(gitignore): ignore plot_related/ directory
- e5e907e feat(dynacell): predict_local_ipsc.sh — local ipsc_confocal predict r… (alxndrkalinin)
- 2441b1c feat(dynacell): ER joint train uses pooled a549 SEC61B_all store (alxndrkalinin)
- 65f3ae5 chore(dynacell): drop legacy predict__a549_mantis leaves for membrane… (alxndrkalinin)
- d985a76 feat(dynacell): nucleus fcmae pretrained predict leaves pinned to bes… (alxndrkalinin)
- 40b6dc9 feat(dynacell): sec61b fcmae scratch predict leaves pinned to best ckpt (alxndrkalinin)
- a7bc0cf feat(dynacell): tomm20 fcmae pretrained predict leaves pinned to best… (alxndrkalinin)
- 0d72bbf chore(security): warn against upgrading past lightning 2.6.1 (alxndrkalinin)
- eef1459 feat(dynacell): joint ipsc_confocal+a549_mantis train leaves for 7 cells (alxndrkalinin)
- 4c53eaf fix(security): pin lightning to direct CDN URL after PyPI quarantine (alxndrkalinin)
- abba46f fix(viscy-utils): HCSPredictionWriter idempotent on multi-timepoint i… (alxndrkalinin)
- 370beb7 feat(dynacell): joint train leaves for fcmae/fnet3d/unext2 (13 cells) (alxndrkalinin)
- 0a1b764 fix(dynacell): fnet3d_paper joint train leaves are single-GPU like iPSC (alxndrkalinin)
- 72d4183 fix(dynacell): align joint train channel typing with iPSC (str not list) (alxndrkalinin)
- afc997d fix(dynacell): celldiff + unetvit3d joint train leaves are single-H20… (alxndrkalinin)
- 2d288f3 fix(dynacell): mem=512G for fnet joint nucleus + membrane (matches iPSC) (alxndrkalinin)
- 25af720 fix(dynacell): bump joint train mem to 512G (preloads two stores) (alxndrkalinin)
- e41c795 feat(dynacell): a549-only train leaves for all 21 organelle×model cells (alxndrkalinin)
- e7f2af6 feat(dynacell): switch celldiff a549_mantis predict configs to iterat…
- a7b7403 feat(dynacell): add unext2 eval runner script
- 8cc7bf5 fix(dynacell): bump joint fnet3d nucl/memb mem 512G->1024G (OOM) (alxndrkalinin)
- 6415f7d fix(dynacell): bump a549 nucl fnet3d mem 512G->1024G (OOM) (alxndrkalinin)
- b48a13c chore(cytoland): repoint A549 infected configs from Lustre to VAST (alxndrkalinin)
- 6ec0d6f fix(viscy-data): mmap_preload reads via BasicIndexer (~6x less RAM) (alxndrkalinin)
- 2a513b4 revert(dynacell): drop fnet3d mem overrides after mmap_preload fix (alxndrkalinin)
- 8c31d20 feat(dynacell): wire iPSC FCMAE membrane best ckpt into predict leaves (alxndrkalinin)
- d97c23b fix(dynacell): set PYTORCH_ALLOC_CONF=expandable_segments:True (alxndrkalinin)
- 7a884b5 fix(dynacell): joint fnet3d batch_size 48->6 (CUDA OOM) (alxndrkalinin)
- c793da1 fix(dynacell): joint fcmae batch_size 32->8 (CUDA OOM risk) (alxndrkalinin)
- 7832af3 fix(dynacell): hardware_4gpu profile now 512G + H100/H200-only (alxndrkalinin)
- 513d3e6 chore(dynacell): point SEC61B fcmae_pretrained predict configs at ep 123 (alxndrkalinin)
- e085ee3 chore(dynacell): point Memb fcmae_scratch predict configs at ep 136 (alxndrkalinin)
- 52b53d5 fix(dynacell): consistent 512G mem across 8 fnet3d a549/joint leaves (alxndrkalinin)
- 4602496 feat(dynacell): submit script for 8 fnet3d a549/joint training jobs (alxndrkalinin)
- 21df26e chore(dynacell): point Nucl fcmae_scratch predict configs at ep 80 (alxndrkalinin)
- 6e935e4 fix(dynacell): predict_local_*.sh fail fast on placeholder ckpt_path (alxndrkalinin)
- 4bbcee8 feat(dynacell): add VSCyto3D eval runner script
- b48fc13 fix(dynacell): joint fcmae batch_size 32->8 for ER + MITO (alxndrkalinin)
- f80466a fix(dynacell): correct A549 UNetViT3D eval script — 4 organelles × 3 …
- ab0e193 feat(dynacell): add A549 CellDiff eval script — 3 variants × 4 organe…
- 10e5c16 fix(viscy-data): support heterogeneous T per FOV in mmap_preload (alxndrkalinin)
- dcfedfd refactor(viscy-data): compute mmap T offsets once per setup_fit (alxndrkalinin)
- 848f89b feat(dynacell): A549 eval scripts for all 5 models — flat mantis_v1 l…
- 5a2a346 fix(viscy-data): skip bs%num_samples check for BatchedConcat children (alxndrkalinin)
- d407687 refactor(viscy-data): tighten joint-divisibility test + comment (alxndrkalinin)
- 4951fc0 feat(dynacell): a549-trained nucleus fnet3d_paper predict configs (alxndrkalinin)
- 16d5482 docs: explain joint vs single-set training batch semantics (alxndrkalinin)
- 397bff2 refactor(dynacell): unify predict_local script across train sets (alxndrkalinin)
- dd80af3 docs(dynacell): add model name convention reference (alxndrkalinin)
- 889c1a5 test(dynacell): fcmae er joint smoke configs (alxndrkalinin)
- f6af4dd chore(dynacell): handoff script for a549-only fcmae resubmits (alxndrkalinin)
- 35b4f04 feat(dynacell): a549-trained membrane fcmae_scratch predict configs (alxndrkalinin)
- 1c393bf feat(dynacell): a549-trained membrane fnet3d_paper predict configs (alxndrkalinin)
- a6331cb feat(dynacell): a549-trained nucleus fcmae_scratch predict configs (alxndrkalinin)
- e5abfe2 refactor(dynacell): unify predict_batch script across train/test sets (alxndrkalinin)
- a7a2ebd feat(dynacell): save single-cell embeddings with FOV/timepoint metadata
- a821459 feat(dynacell): add CellDiff A549 mantis predict configs; reduce batc…
- 2915982 feat(dynacell): a549-trained nucleus fcmae_pretrained predict configs (alxndrkalinin)
- ae10bf7 docs(dynacell): document prediction zarr naming convention (alxndrkalinin)
- 38e08e0 feat(dynacell): a549-trained ER fcmae_pretrained predict configs
- 8981a08 train infected 4gpu (edyoshikun)
- a5d1571 feat(dynacell): joint-trained nucleus fcmae_scratch predict configs (alxndrkalinin)
- 4cc95ea feat(dynacell): joint-trained membrane fcmae_pretrained predict configs (alxndrkalinin)
- 7c663e7 feat(dynacell): joint-trained membrane fcmae_scratch predict configs (alxndrkalinin)
- b64d1d6 feat(dynacell): a549-trained mito fcmae_scratch predict configs (alxndrkalinin)
- e35244c feat(dynacell): a549-trained ER fcmae_scratch predict configs (alxndrkalinin)
- ad65b21 feat(dynacell): joint-trained nucleus fnet3d predict configs (alxndrkalinin)
- ad45beb feat(dynacell): joint-trained membrane fnet3d predict configs (alxndrkalinin)
- 142be0a feat(dynacell): a549-trained mito fnet3d predict configs (alxndrkalinin)
- 68f4d06 feat(dynacell): add joint-trained eval scripts and predict configs fo…
- 78544dd feat(dynacell): joint-trained nucleus fnet3d eval scripts (full metrics) (alxndrkalinin)
- ef58867 feat(dynacell): joint-trained membrane unext2 eval scripts (full metr… (alxndrkalinin)
New file (diff hunk @@ -0,0 +1,3 @@), adding CUDA environment variable exports for the HPC cluster:

    export CUDA_PATH=/hpc/apps/cuda/12.8.0_570.86.10
    export PATH=$CUDA_PATH/bin:$PATH
    export LD_LIBRARY_PATH=$CUDA_PATH/lib64:${LD_LIBRARY_PATH:-}
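These three exports can be sanity-checked on any machine, since `export` only sets variables and does not require the CUDA directories to exist locally. A minimal sketch (the `/tmp/cuda_env_check.sh` scratch path is an illustrative assumption, not a file in the repo):

```shell
# Write the three exported lines to a scratch file, source it, and
# confirm the CUDA dirs were prepended to the search paths.
cat > /tmp/cuda_env_check.sh <<'EOF'
export CUDA_PATH=/hpc/apps/cuda/12.8.0_570.86.10
export PATH=$CUDA_PATH/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_PATH/lib64:${LD_LIBRARY_PATH:-}
EOF
. /tmp/cuda_env_check.sh
case ":$PATH:" in *":$CUDA_PATH/bin:"*) echo "CUDA bin on PATH" ;; esac
case ":$LD_LIBRARY_PATH:" in *":$CUDA_PATH/lib64:"*) echo "CUDA lib64 on LD_LIBRARY_PATH" ;; esac
```

Note the `${LD_LIBRARY_PATH:-}` default expansion: it keeps the last line safe under `set -u` when `LD_LIBRARY_PATH` was previously unset.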