
Self-Supervised Plankton

Contains the code from the paper "Self-Supervised Pretraining for Fine-Grained Plankton Recognition".

Overview

This repository contains code for pretraining a Vision Transformer with a masked autoencoder (MAE) on multiple public plankton datasets.

  • K. He et al., "Masked Autoencoders Are Scalable Vision Learners," CVPR 2022
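
The core of MAE pretraining is per-sample random masking of patch tokens, so the encoder only sees a small visible subset of each image. The sketch below is a minimal PyTorch version of that masking step as described by He et al., not code copied from this repository:

```python
import torch

def random_masking(x, mask_ratio=0.75):
    """Per-sample random masking of patch tokens, MAE-style.

    x: (N, L, D) patch embeddings. Keeps a random (1 - mask_ratio)
    fraction of tokens; returns the kept tokens, a binary mask
    (1 = masked, 0 = kept), and the permutation needed to restore
    the original token order for the decoder.
    """
    N, L, D = x.shape
    len_keep = int(L * (1 - mask_ratio))

    noise = torch.rand(N, L, device=x.device)        # uniform noise per token
    ids_shuffle = torch.argsort(noise, dim=1)        # ascending: lowest noise kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation

    ids_keep = ids_shuffle[:, :len_keep]
    x_masked = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(N, L, device=x.device)
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)        # unshuffle to original order
    return x_masked, mask, ids_restore

# Example: a ViT-B/16 on 224x224 images yields 14x14 = 196 patch tokens.
tokens = torch.randn(4, 196, 768)
visible, mask, ids_restore = random_masking(tokens)
print(visible.shape)  # torch.Size([4, 49, 768]) -- only 25% of tokens kept
```

Because the encoder processes only the kept tokens, a high masking ratio (around 75%) is what makes MAE pretraining computationally cheap.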

Usage

Pretraining datasets

In the paper, multiple public plankton datasets were used for pretraining. The full list with references is shown below.

Table: Summary of the datasets used for pretraining.

| Dataset | Plankton Type | # of Species | # of Images | Link |
|---|---|---|---|---|
| Kaggle-Plankton (Cowen2015) | zooplankton | 121 | 130,000 | Link |
| Lake Zooplankton (kyathanahally2021deep) | zooplankton | 35 | 18,000 | Link |
| SYKE-Plankton-ZooScan_2024 (zooscan2024) | zooplankton | 20 | 24,000 | Link |
| PMID2019 (li2020developing) | phytoplankton | 24 | 14,000 | Link |
| SYKE-Plankton-IFCB_2022 (syke2022) | phytoplankton | 50 | 63,000 | Link |
| UDE Diatoms in the Wild 2024 (Kloster2024) | phytoplankton | 611 | 84,000 | Link |
| DAPlankton (batrakhanov2024daplankton) | phytoplankton | 44 | 112,000 | Link |
| **Total** | | | 443,000 | |
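
For pretraining, the datasets can be pooled into a single unlabeled collection. Below is a minimal sketch using torchvision's `ImageFolder` and `ConcatDataset`; the directory paths are hypothetical placeholders, and the repository's actual data layout and augmentations may differ:

```python
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

# Hypothetical local paths; adjust to wherever the datasets are extracted.
DATASET_ROOTS = [
    "data/kaggle_plankton",
    "data/lake_zooplankton",
    "data/syke_zooscan_2024",
    "data/pmid2019",
    "data/syke_ifcb_2022",
    "data/ude_diatoms_2024",
    "data/daplankton",
]

# MAE pretraining is label-free: ImageFolder still indexes class
# subfolders, but the returned labels are simply ignored.
transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

pretrain_set = ConcatDataset(
    [datasets.ImageFolder(root, transform=transform) for root in DATASET_ROOTS]
)
loader = DataLoader(pretrain_set, batch_size=256, shuffle=True, num_workers=8)
```

Pooling with `ConcatDataset` keeps each source dataset intact on disk while letting a single shuffled loader draw samples across all of them during pretraining.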

Citation

@misc{opensetplankton2025,
    author={Joona Kareinen and Tuomas Eerola and Kaisa Kraft
            and Lasse Lensu and Sanna Suikkanen and Heikki K\"{a}lvi\"{a}inen},
    title={Self-Supervised Pretraining for Fine-Grained Plankton Recognition},
    year={2025}
}
