66 commits
28e744b
Add HITL finetuning test data generation infrastructure
davidackerman Feb 9, 2026
e1157b0
Add LoRA wrapper and CorrectionDataset for HITL finetuning
davidackerman Feb 9, 2026
aba789a
Add LoRA training loop and CLI for HITL finetuning
davidackerman Feb 9, 2026
ffd1a01
Fix LoRA wrapper for Sequential models and add comprehensive document…
davidackerman Feb 9, 2026
f322b22
Complete HITL finetuning pipeline with full component validation
davidackerman Feb 9, 2026
0b3b1e8
Fix zarr structure for Neuroglancer/OME-NGFF compatibility
davidackerman Feb 9, 2026
0cf7b1f
Fix dataset to handle mismatched raw/mask sizes
davidackerman Feb 9, 2026
258ddf9
Disable patching to use full-size corrections for training
davidackerman Feb 9, 2026
8c276fa
Fix CLI to handle None patch_shape
davidackerman Feb 9, 2026
e8c484c
Disable spatial augmentation for mismatched input/output sizes
davidackerman Feb 9, 2026
c2f98cf
Add mito correction generator with fly_organelles model
davidackerman Feb 11, 2026
0bc2078
Add channel selection and logging for LoRA finetuning
davidackerman Feb 11, 2026
5f76102
Add diagnostic tools for analyzing finetuning quality
davidackerman Feb 11, 2026
e15ca7e
Document LoRA finetuning workflow with detailed walkthrough
davidackerman Feb 11, 2026
1d1d29f
Fix normalization pipeline and add sparse annotation support
davidackerman Feb 11, 2026
f662c53
Add utility scripts for sparse annotation workflow
davidackerman Feb 11, 2026
77eec72
Add MinIO hosting scripts for Neuroglancer annotations
davidackerman Feb 12, 2026
f048001
Remove unnecessary HTTP and legacy MinIO scripts
davidackerman Feb 12, 2026
b1a06ea
Add finetune annotation crop viewer integration
davidackerman Feb 12, 2026
0646561
Add bidirectional MinIO annotation syncing and improve finetuning wor…
davidackerman Feb 12, 2026
5dd9078
Update finetuning documentation with dashboard workflow
davidackerman Feb 12, 2026
b238457
Improve finetuning: fix gradient accumulation bug, add live log strea…
davidackerman Feb 13, 2026
4086834
Add finetuning job management system and dashboard integration
davidackerman Feb 13, 2026
4682d58
Add auto-serve inference after finetuning and iterative training on s…
davidackerman Feb 14, 2026
675e3e5
Add auto-serve status display and restart training UI to finetune tab
davidackerman Feb 14, 2026
fc407f3
Fix dark mode styling for modals, form controls, labels, and muted text
davidackerman Feb 14, 2026
68549fd
Fix chunk boundary bug in log marker detection for neuroglancer layer…
davidackerman Feb 14, 2026
9dc1467
Filter noisy debug and werkzeug lines from training log stream
davidackerman Feb 14, 2026
1a4a164
Fix restart layer update: run iteration check every monitor cycle
davidackerman Feb 14, 2026
f3e7c9d
Fix training collapse and add MSE loss with label smoothing
davidackerman Feb 14, 2026
d214fd4
Clamp dataloader batch size to dataset size
davidackerman Feb 14, 2026
3aa86e3
Add margin loss, teacher distillation, and expanded training UI controls
davidackerman Feb 14, 2026
8b084f0
Add class balancing option to prevent foreground overprediction
davidackerman Feb 14, 2026
4f3abdd
Fix duplicate log lines by removing redundant file write
davidackerman Feb 14, 2026
1d168db
Fix MinIO startup failure when console port is already in use
davidackerman Feb 15, 2026
2feef59
Add GPU queue selection to finetuning UI
davidackerman Feb 15, 2026
08d3628
Fix delayed log streaming by disabling pipe buffering
davidackerman Feb 15, 2026
e376cbf
Save only LoRA weights in checkpoints instead of full model
davidackerman Feb 15, 2026
e570e03
Improve restart control path and CLI logging behavior
davidackerman Feb 15, 2026
1008505
Speed up finetune dashboard log streaming and restart UX
davidackerman Feb 15, 2026
795fa43
Add model load timing logs and fs visibility probe tool
davidackerman Feb 15, 2026
f897439
Update finetune progress UI from live SSE epoch/loss logs
davidackerman Feb 15, 2026
083dd4a
Update finetune UI, dataset, and job manager behavior
davidackerman Feb 18, 2026
686ca79
Remove dev scripts, docs artifacts, and output binaries
davidackerman Feb 18, 2026
cae19e8
Revert non-finetune changes to match main
davidackerman Feb 18, 2026
e2ead85
Refactor finetune module and restore blueprint architecture
davidackerman Feb 18, 2026
51ebe33
Fix split_dataset_path for paths with nested .zarr segments
davidackerman Feb 18, 2026
ec43fb3
Deduplicate sync logic, job status, and dataset path extraction
davidackerman Feb 18, 2026
73972a5
Merge branch 'main' into finetuning_refactor
davidackerman Feb 25, 2026
e362282
Add target transform classes for converting annotations to training t…
davidackerman Feb 26, 2026
132cc91
Support target_transform in LoRAFinetuner
davidackerman Feb 26, 2026
80d98d7
Add --output-type, --select-channel, --offsets CLI args and target tr…
davidackerman Feb 26, 2026
83d27eb
Wire output_type, select_channel, and offsets through dashboard and j…
davidackerman Feb 26, 2026
d2630ba
Fix formatting in model_spec_affinities example
davidackerman Feb 26, 2026
878b769
Add finetuning guide documentation
davidackerman Feb 26, 2026
23fa4ae
Remove normalize parameter from CorrectionDataset and create_dataloader
davidackerman Feb 26, 2026
c15ed97
Move dashboard state from state.py into Flow singleton in globals.py
davidackerman Feb 26, 2026
cb471f1
Add finetuning screenshots and image references to guide
davidackerman Feb 26, 2026
363cbd7
Move test_target_transforms.py to tests/finetune/
davidackerman Feb 26, 2026
b7c8932
add my_yamls to gitignore
davidackerman Mar 10, 2026
97ad8c5
Use spawn multiprocessing context in DataLoader to fix tensorstore fo…
davidackerman Mar 10, 2026
8fd6c92
Add FP16 probe, OOM recovery, NaN detection, and margin loss sigmoid …
davidackerman Mar 10, 2026
7b2bb66
Add restart status markers and fresh LoRA reset on training restart
davidackerman Mar 10, 2026
aaf8a1c
Track iteration count to prevent stale restart markers from re-trigge…
davidackerman Mar 10, 2026
04c9713
Show restart sub-status updates in dashboard progress text
davidackerman Mar 10, 2026
926a495
Fix annotation label corruption, persist restart params, and improve …
davidackerman Mar 20, 2026
13 changes: 12 additions & 1 deletion .gitignore
@@ -1,3 +1,5 @@
+my_yamls/
+
 # Byte-compiled / optimized / DLL files
 __pycache__/
 *.py[cod]
@@ -162,4 +164,13 @@ cython_debug/
 #.idea/
 
 # Misc
-.vscode/
+.vscode/
+
+# Project-specific
+ignore/
+*.zarr/
+.claude/
+test_corrections.zarr/
+correction_slices/
+corrections/
+output/
10 changes: 7 additions & 3 deletions cellmap_flow/cli/server_cli.py
@@ -23,7 +23,6 @@
 from cellmap_flow.utils.plugin_manager import load_plugins
 
 
-logging.basicConfig()
 logger = logging.getLogger(__name__)
 
 
@@ -47,7 +46,7 @@ def cli(log_level):
         cellmap_flow_server script -s /path/to/script.py -d /path/to/data
         cellmap_flow_server cellmap-model -f /path/to/model -n mymodel -d /path/to/data
     """
-    logging.basicConfig(level=getattr(logging, log_level.upper()))
+    logging.basicConfig(level=getattr(logging, log_level.upper()), force=True)
 
 
 @cli.command(name="list-models")
@@ -82,6 +81,9 @@ def create_dynamic_server_command(cli_name: str, config_class: Type[ModelConfig]
     except:
         type_hints = {}
 
+    # Track used short names to avoid collisions with common options.
+    used_short_names = {"-d", "-p"}
+
     # Create the command function
    def command_func(**kwargs):
        # Separate model config kwargs from server kwargs
@@ -141,7 +143,9 @@ def command_func(**kwargs):
 
     # Add model-specific options based on constructor parameters
     for param_name, param_info in reversed(list(sig.parameters.items())):
-        option_config = create_click_option_from_param(param_name, param_info)
+        option_config = create_click_option_from_param(
+            param_name, param_info, used_short_names
+        )
         if option_config:
             command_func = click.option(
                 *option_config.pop("param_decls"), **option_config
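The `force=True` addition in `server_cli.py` matters because `logging.basicConfig` is silently a no-op whenever the root logger already has handlers — for example after a module-level `logging.basicConfig()` call at import time, like the one this diff deletes. A short standard-library illustration of the difference:

```python
import logging

# Simulate an import-time call (like the module-level logging.basicConfig()
# that this diff removes from server_cli.py).
logging.basicConfig(level=logging.WARNING, force=True)
root = logging.getLogger()
assert root.level == logging.WARNING

# A later plain basicConfig call is silently ignored: the root logger
# already has a handler, so the requested DEBUG level never takes effect.
logging.basicConfig(level=logging.DEBUG)
assert root.level == logging.WARNING

# force=True (Python 3.8+) removes the existing handlers first, so a
# user-supplied --log-level choice actually wins.
logging.basicConfig(level=logging.DEBUG, force=True)
assert root.level == logging.DEBUG
```

Without `force=True`, a CLI's `--log-level` option would appear to work but do nothing whenever any imported module had already configured logging.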
77 changes: 77 additions & 0 deletions cellmap_flow/cli/viewer_cli.py
@@ -0,0 +1,77 @@
"""
Simple CLI for viewing datasets with CellMap Flow without requiring model configs.
"""

import click
import logging
import neuroglancer
from cellmap_flow.dashboard.app import create_and_run_app
from cellmap_flow.globals import g
from cellmap_flow.utils.scale_pyramid import get_raw_layer

logging.basicConfig()
logger = logging.getLogger(__name__)


@click.command()
@click.option(
    "-d",
    "--dataset",
    required=True,
    type=str,
    help="Path to the dataset (zarr or n5)",
)
@click.option(
    "--log-level",
    type=click.Choice(
        ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"], case_sensitive=False
    ),
    default="INFO",
    help="Set the logging level",
)
def main(dataset, log_level):
    """
    Start CellMap Flow viewer with a dataset.

    Example:
        cellmap_flow_viewer -d /path/to/dataset.zarr
    """
    logging.basicConfig(level=getattr(logging, log_level.upper()))

    logger.info(f"Starting CellMap Flow viewer with dataset: {dataset}")

    # Set up neuroglancer server
    neuroglancer.set_server_bind_address("0.0.0.0")

    # Create viewer
    viewer = neuroglancer.Viewer()

    # Set dataset path in globals
    g.dataset_path = dataset
    g.viewer = viewer

    # Add dataset layer to viewer
    with viewer.txn() as s:
        # Set coordinate space
        s.dimensions = neuroglancer.CoordinateSpace(
            names=["z", "y", "x"],
            units="nm",
            scales=[8, 8, 8],
        )

        # Add data layer
        s.layers["data"] = get_raw_layer(dataset)

    # Print viewer URL
    logger.info(f"Neuroglancer viewer URL: {viewer}")
    print(f"\n{'='*80}")
    print(f"Neuroglancer viewer: {viewer}")
    print(f"Dataset: {dataset}")
    print(f"{'='*80}\n")

    # Start the dashboard app
    create_and_run_app(neuroglancer_url=str(viewer), inference_servers=None)


if __name__ == "__main__":
    main()
9 changes: 5 additions & 4 deletions cellmap_flow/dashboard/app.py
@@ -5,15 +5,15 @@
 from flask import Flask
 from flask_cors import CORS
 
-from cellmap_flow.dashboard import state
-from cellmap_flow.dashboard.state import LogHandler
+from cellmap_flow.globals import g, LogHandler
 from cellmap_flow.dashboard.routes.logging_routes import logging_bp
 from cellmap_flow.dashboard.routes.index_page import index_bp
 from cellmap_flow.dashboard.routes.pipeline_builder_page import pipeline_builder_bp
 from cellmap_flow.dashboard.routes.models import models_bp
 from cellmap_flow.dashboard.routes.pipeline import pipeline_bp
 from cellmap_flow.dashboard.routes.blockwise import blockwise_bp
 from cellmap_flow.dashboard.routes.bbx_generator import bbx_bp
+from cellmap_flow.dashboard.routes.finetune_routes import finetune_bp
 
 logger = logging.getLogger(__name__)
 
@@ -37,11 +37,12 @@
 app.register_blueprint(pipeline_bp)
 app.register_blueprint(blockwise_bp)
 app.register_blueprint(bbx_bp)
+app.register_blueprint(finetune_bp)
 
 
 def create_and_run_app(neuroglancer_url=None, inference_servers=None):
-    state.NEUROGLANCER_URL = neuroglancer_url
-    state.INFERENCE_SERVER = inference_servers
+    g.NEUROGLANCER_URL = neuroglancer_url
+    g.INFERENCE_SERVER = inference_servers
     hostname = socket.gethostname()
     port = 0
     logger.warning(f"Host name: {hostname}")
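The `state.NEUROGLANCER_URL` → `g.NEUROGLANCER_URL` change in `app.py` reflects commit c15ed97 ("Move dashboard state from state.py into Flow singleton in globals.py"): module-level globals in `state.py` are replaced by attributes on a shared `g` object. A minimal sketch of that pattern — the attribute names come from the diff, but the real `Flow` class in `globals.py` is not shown on this page, so its actual shape here is an assumption:

```python
class Flow:
    """Process-wide dashboard state; replaces ad-hoc module-level globals.
    Hypothetical reconstruction -- the real class likely carries more state."""

    _instance = None

    def __new__(cls):
        # Classic singleton: every Flow() call returns the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.NEUROGLANCER_URL = None
            cls._instance.INFERENCE_SERVER = None
        return cls._instance


g = Flow()

# Any module importing g sees the same object, so assignments made in
# create_and_run_app() are visible to every registered route blueprint.
g.NEUROGLANCER_URL = "http://localhost:8080/v/abc123"
assert Flow() is g
assert Flow().NEUROGLANCER_URL == "http://localhost:8080/v/abc123"
```

Compared with a `state.py` full of bare module globals, a singleton gives the shared state one importable home and a single place to add defaults, locking, or reset logic.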