Author: Wout Decrop (VLIZ)
Related publication:
Resources:
Projects: iMagine
It was originally developed for FlowCam data, and has also been retrained or adapted in separate branches for other instruments and datasets.

If you want the full repository with Docker, OSCAR, AI4OS, packaged deployment assets, and broader project explanation, see:
+.
+: :==.
% .#.
#:*==* *=
-+**+*####.
+********%%.
+*******#**#+
********#%%####+
.*====+==::=#%%*
-%** --::=-:.
+=#. -:::+.
-+*++: +. +:::*
:+. .+- ==: +::::*
=- == ::-+*+:::::*##-
.+. :+-.-====-:::::+%#.
===*: :++::::-=:++*#=
-#. -+**:::=*++**%##+
.=+-= ##*:**#*%******=
.=**+ =*++#************#-
.++*****++++++++*##+
:+*+#%++++++++*+.
*** :###-
::#**. +**+
.%@+.: --@@@%
:.
Install with Python 3.12 and pip:

```bash
pip install planktonclass
```

For notebook support:

```bash
pip install "planktonclass[notebooks]"
```

Use:

```bash
planktonclass train my_project
```

This is the best choice if you already know where your image folder is and want a direct local workflow.
Use:

```bash
planktonclass api my_project
```

Then open:

- http://127.0.0.1:5000/ui
- http://127.0.0.1:5000/api#/
This is the best choice if you want to interact through the DEEPaaS UI or integrate with an external service.
Use:

```bash
planktonclass notebooks my_project
```

This copies the packaged notebooks into `my_project/notebooks/`. It is the best choice for exploration, augmentation experiments, prediction analysis, and explainability.
`pip install planktonclass` installs the package dependencies used by the notebooks, including TensorFlow, plotting, and reporting libraries.

For local notebook use, install the notebook extra instead:

```bash
pip install "planktonclass[notebooks]"
```

Install the package:

```bash
pip install planktonclass
```

Then create a project:

```bash
planktonclass init my_project
```

Or create a runnable demo project:

```bash
planktonclass init my_project --demo
```

OPTIONAL: Validate the generated config:

```bash
planktonclass validate-config my_project
```

Local training:

```bash
planktonclass train my_project
```

For a quick smoke test on the demo project:

```bash
planktonclass train my_project --quick
```

OPTIONAL: Download a published pretrained model into the project:

```bash
planktonclass pretrained my_project --model FlowCam
```

Available published pretrained model names currently include FlowCam, FlowCyto, and PI10. Only the actual model directory is extracted into `my_project/models`, even when the downloaded archive contains a full exported project tree.
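As a rough illustration of that behaviour (this is not the package's actual code, and it assumes a zip archive), extracting only the `models/` subtree from an exported project could look like:

```python
import zipfile
from pathlib import Path

def extract_models_only(archive_path, dest_dir, keep="models"):
    """Copy only the entries under a `models/` directory out of a zip
    archive, dropping any leading path components above it."""
    dest = Path(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            parts = Path(name).parts
            if keep not in parts:
                continue  # skip everything outside the models/ subtree
            rel = Path(*parts[parts.index(keep):])
            target = dest / rel
            if name.endswith("/"):
                target.mkdir(parents=True, exist_ok=True)
            else:
                target.parent.mkdir(parents=True, exist_ok=True)
                target.write_bytes(zf.read(name))
```

The net effect is that `my_project/models/` receives only model runs, never the rest of the exported tree.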
OPTIONAL: Build an inference Docker image from your trained model run:

```bash
planktonclass docker my_project
```

For the published FlowCam pretrained model, the packaged checkpoint is currently `final_model.h5`. The FlowCyto and PI10 published models are expected to use `best_model.keras`. New training runs created by `planktonclass train` save `best_model.keras` when validation is enabled. If you train without validation, the run saves `final_model.keras` instead.
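Those naming rules can be summarized in a small helper. This function is hypothetical, purely for illustration; the package does not expose it:

```python
def expected_checkpoint(pretrained_name=None, use_validation=True):
    """Checkpoint filename to look for, following the conventions above."""
    if pretrained_name == "FlowCam":
        return "final_model.h5"       # published FlowCam model
    if pretrained_name in ("FlowCyto", "PI10"):
        return "best_model.keras"     # other published models
    # a fresh local training run
    return "best_model.keras" if use_validation else "final_model.keras"
```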
Report generation after training:

```bash
planktonclass report my_project
```

If you leave out `--timestamp`, `planktonclass report` suggests the most recent run, lists the available timestamps, and lets you choose another one by number. It also lets you choose between quick and full mode. `quick` is the default and creates the core figures only; `full` also generates the threshold-based plots in the `results/` subfolders.
```bash
pip install planktonclass
```

Then create a project:

```bash
planktonclass init my_project
```

Local API:

```bash
planktonclass api my_project
```

For local notebook use:

```bash
pip install "planktonclass[notebooks]"
```

Then create a project:

```bash
planktonclass init my_project
```

Copy notebooks into the project:

```bash
planktonclass notebooks my_project
```

In the model-based notebooks (3.0, 3.1, and 3.2), the first variables to check are `TIMESTAMP` and `MODEL_NAME`. They are prefilled for the published pretrained model so the notebooks work out of the box, but when you want to inspect a model from your own training run you should change those two values first.
After `planktonclass init`, your project looks like this:

```
my_project/
  config.yaml
  data/
    images/
    dataset_files/
  models/
  notebooks/
```
The only mandatory input is the image directory:

- `data/images/`, or
- the directory pointed to by `images_directory` in `config.yaml`
If `data/dataset_files/` is empty, training can generate dataset splits automatically from the image-folder structure.

If you provide your own dataset metadata files, the expected files are:

- required for a custom split: `classes.txt`, `train.txt`
- optional: `val.txt`, `test.txt`, `info.txt`, `aphia_ids.txt`

The split files map image paths to integer labels starting at 0.
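As an illustration of that layout (assuming whitespace-separated `path label` lines, which may differ from the exact on-disk format), a minimal reader could be:

```python
from pathlib import Path

def load_split(split_file, classes_file):
    """Map each image path in a split file to its class name.
    Labels are integers starting at 0 that index into classes.txt."""
    classes = Path(classes_file).read_text().splitlines()
    samples = []
    for line in Path(split_file).read_text().splitlines():
        if not line.strip():
            continue  # tolerate blank lines
        path, label = line.rsplit(maxsplit=1)
        samples.append((path, classes[int(label)]))
    return samples
```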
The main user config is a project-local `config.yaml`. It is created by:

```bash
planktonclass init my_project
```

Most users only need to adjust a small number of fields:

- `general.base_directory`
- `general.images_directory`
- `model.modelname`
- `pretrained.use_pretrained`
- `pretrained.name`
- `pretrained.version`
- `training.epochs`
- `training.batch_size`
- `training.use_validation`
- `training.use_test`
- `monitor.use_tensorboard`

Internal-only values such as model-specific preprocessing are now derived automatically and are not meant to be edited by users.
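A hypothetical minimal `config.yaml` touching only those fields might look like this. All values below are illustrative, including the architecture name; check the generated file for the real defaults and the full key set:

```yaml
general:
  base_directory: .             # project root
  images_directory: data/images
model:
  modelname: EfficientNetB0     # illustrative architecture name
pretrained:
  use_pretrained: false
  name: FlowCam
  version: latest
training:
  epochs: 20
  batch_size: 32
  use_validation: true
  use_test: true
monitor:
  use_tensorboard: false
```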
The package installs a `planktonclass` command with these main subcommands:

```
planktonclass init [DIR]
planktonclass init [DIR] --demo
planktonclass validate-config [DIR]
planktonclass train [DIR]
planktonclass report [DIR] [--timestamp TS]
planktonclass api [DIR]
planktonclass docker [DIR]
planktonclass pretrained [DIR]
planktonclass list-models [DIR]
planktonclass notebooks [DIR]
```

The pretrained command accepts a published model name and version, for example:

```bash
planktonclass pretrained my_project --model FlowCyto --version latest
```

The `list-models` command now shows published pretrained models with extra metadata such as architecture, version, and checkpoint name, while local timestamped runs still appear as plain folder names.
Typical local workflow:

```bash
planktonclass init my_project
planktonclass notebooks my_project
planktonclass validate-config my_project
planktonclass train my_project
planktonclass docker my_project
planktonclass report my_project
```

For a faster package smoke test with the demo data:

```bash
planktonclass init my_project --demo
planktonclass train my_project --quick
planktonclass report my_project
```

Start the API with:

```bash
planktonclass init my_project
planktonclass api my_project
```

Then open:

- http://127.0.0.1:5000/ui
- http://127.0.0.1:5000/api#/
You can also start DEEPaaS directly after a repo install:

```powershell
$env:planktonclass_CONFIG = (Resolve-Path .\my_project\config.yaml)
$env:DEEPAAS_V2_MODEL = "planktonclass"
deepaas-run --listen-ip 0.0.0.0
```

Important notes:

- `0.0.0.0` is a bind address, not the browser URL; open `127.0.0.1` in the browser
- for prediction, the browser UI supports file uploads for `image` and `zip`
- for training, `images_directory` is a path field, so it must point to a folder visible to the machine running the API
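For scripted access, the image upload can also be built outside the browser. The sketch below assembles a multipart request with the standard library only; the endpoint path and the `image` field name are assumptions based on DEEPaaS V2 conventions, so check the `/api#/` page for the real route before using it:

```python
import uuid
from urllib import request

def build_image_request(image_bytes, filename,
                        url="http://127.0.0.1:5000/v2/models/planktonclass/predict/"):
    """Build (but do not send) a multipart/form-data POST for the predict
    endpoint. The URL path and the 'image' field name are assumptions."""
    boundary = uuid.uuid4().hex
    body = (
        (f"--{boundary}\r\n"
         f'Content-Disposition: form-data; name="image"; filename="{filename}"\r\n'
         "Content-Type: application/octet-stream\r\n\r\n").encode()
        + image_bytes
        + f"\r\n--{boundary}--\r\n".encode()
    )
    return request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )

# Sending it requires the API to be running:
# with request.urlopen(build_image_request(open("img.png", "rb").read(), "img.png")) as resp:
#     print(resp.read())
```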
Copy the packaged notebooks into your project with:

```bash
planktonclass init my_project
planktonclass notebooks my_project
```

The copied notebooks auto-detect the nearest project `config.yaml`, so they use the paths inside your local project folder rather than the installed package directory. The command also copies `data/data_transformation/start`, `reference_style`, and `end` for the transformation notebook.
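That config lookup presumably walks up from the notebook's directory; here is a minimal sketch of the idea (not the package's actual implementation):

```python
from pathlib import Path

def find_project_config(start=".", name="config.yaml"):
    """Return the nearest config.yaml found when walking from `start`
    up toward the filesystem root, or None if there is none."""
    p = Path(start).resolve()
    for d in (p, *p.parents):
        candidate = d / name
        if candidate.is_file():
            return candidate
    return None
```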
Notebook overview:

- 1.0-Dataset_exploration.ipynb
- 1.1-Image_transformation.ipynb
- 1.2-Image_augmentation.ipynb
- 2.0-Model_training.ipynb
- 3.0-Computing_predictions.ipynb
- 3.1-Prediction_statistics.ipynb
- 3.2-Saliency_maps.ipynb
For 1.1-Image_transformation.ipynb:

- put your new raw images in `data/data_transformation/start/`
- keep one or more reference images in `data/data_transformation/reference_style/`
- the transformed outputs are written to `data/data_transformation/end/`
Each training run creates a timestamped folder under `models/`:

```
models/<timestamp>/
  ckpts/
  conf/
  logs/
  stats/
  dataset_files/
  predictions/
  results/
```
Useful outputs include:

- checkpoints like `best_model.keras`
- `stats.json`
- saved prediction JSON files
- saved test metrics JSON files with top-k accuracy, precision, recall, and F1 summaries
- report images and CSV summaries under `results/`
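For reference, top-k accuracy is the fraction of samples whose true class is among the k highest-scoring predictions. A small generic implementation (not the package's own code):

```python
def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k
    highest-scoring classes. scores: list of per-class score rows;
    labels: list of integer class indices."""
    hits = 0
    for row, label in zip(scores, labels):
        # class indices ranked from highest to lowest score
        ranked = sorted(range(len(row)), key=row.__getitem__, reverse=True)
        if label in ranked[:k]:
            hits += 1
    return hits / len(labels)
```

With k=1 this reduces to plain accuracy; larger k credits near-misses, which is useful for fine-grained plankton classes.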
For a portable inference runtime after training, you can package a selected model run into a Docker image:

```bash
planktonclass docker my_project
```

This builds an image from the local package source and bundles the latest trained timestamp by default. You can choose a specific run or checkpoint with:

```bash
planktonclass docker my_project --timestamp 2026-04-21_120000 --ckpt-name best_model.keras --tag my-plankton-api:latest
```

To generate performance plots after training:

```bash
planktonclass report my_project
```

If you keep the standard project layout created by `planktonclass init`, these commands automatically use `my_project/config.yaml`. Use `--config PATH` only when your config file lives somewhere else.
The full documentation is available here:
Main documentation pages:
For Docker, OSCAR, AI4OS, and the broader deployment-oriented repository, see:
Choose this only if you want to work on the package itself.
```bash
git clone https://github.com/lifewatch/planktonclass
cd phyto-plankton-classification
python -m venv .venv
.venv\Scripts\activate
pip install -U pip
pip install -e .
pip install -e ".[dev]"
python -m pytest
```

If you use this project, please consider citing:
Decrop, W., Lagaisse, R., Mortelmans, J., Muñiz, C., Heredia, I., Calatrava, A., & Deneudt, K. (2025). Automated image classification workflow for phytoplankton monitoring. Frontiers in Marine Science, 12. https://doi.org/10.3389/fmars.2025.1699781