---
id: lemon-detection-demo
title: Lemon Detection Demo
sidebar_label: Lemon Detection
---

import Link from '@docusaurus/Link';
import SupportBanner from '@site/src/components/SupportBanner';

# Lemon Detection Demo

<video autoPlay loop muted playsInline style={{ maxWidth: '80%', height: 'auto', display: 'block', margin: '0 auto', marginBottom: '5px' }}>
<source src={require('/img/ai/one_ai_plugin/demos/lemons/lemon_demo_web.webm').default} type="video/webm" />
</video>

:::info Try it yourself
To try the AI, simply click on the **Try Demo** button below. If you do not have an account yet, you will be prompted to sign up. Afterwards, the quick start projects overview will open where you can select **Lemon Detection**. After installing ONE WARE Studio, the project will open automatically.
:::

<div className="text--center" style={{ display: 'flex', justifyContent: 'center', gap: '1rem', flexWrap: 'wrap' }}>
<Link className="button button--primary button--lg" href="https://cloud.one-ware.com/quick-start" target="_blank" rel="noopener noreferrer">
Try Demo
</Link>
<Link className="button button--primary button--lg" href="https://onewarecom.sharepoint.com/:u:/s/Public/IQBZ5f23Fx6NQZ1nqsjN5Nj0AY-RdfC_H7AtUY6wEmYa14M?e=BAOBjn" target="_blank" rel="noopener noreferrer">
Dataset
</Link>
</div>

## About this demo

This demo showcases a simple but realistic computer vision task: detecting **ripe** and **unripe lemons** in a production environment.

The interesting part is that the entire dataset was created from a **single 9-second smartphone video**. Using the video import tool in ONE WARE Studio, we automatically split the video into **135 frames**, which then served as the dataset for training.
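ONE WARE Studio's video import tool handles the frame extraction for you; purely for intuition, here is a minimal stand-alone sketch of the underlying idea of sampling evenly spaced frames from a clip (the helper function and the 30 fps figure are illustrative assumptions, not part of the tool):

```python
# Illustrative only: ONE WARE Studio's video import does this automatically.
# Evenly samples `count` frame indices from a clip with `total` decoded frames.
def evenly_spaced_frames(total: int, count: int) -> list[int]:
    if count >= total:
        return list(range(total))
    step = total / count
    return [int(i * step) for i in range(count)]

# A 9-second smartphone clip at 30 fps has ~270 decoded frames;
# keeping 135 of them is equivalent to sampling every second frame (15 fps).
indices = evenly_spaced_frames(270, 135)
print(len(indices), indices[:4])  # 135 [0, 2, 4, 6]
```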

The goal of this demo is not just detection accuracy, but also demonstrating how **dataset creation and labeling can be accelerated dramatically by using the model itself during the annotation process**.

## Project Setup

To reproduce this demo, start a new project by clicking:

**AI → ONE AI Project Generator**

Then configure:

- **Mode:** Detection
- **Task:** Object Detection
- **Template:** Controlled Environment

This template is ideal for industrial setups where the **camera angle, background, and lighting remain relatively stable**.

## Fast Labeling with Model-Assisted Annotation

Instead of labeling all images manually, we used a much faster workflow where the model helps with the annotation.

First, we manually labeled only **about 15 images** with two classes:

- **ripe**
- **unripe**

Then we trained a quick preliminary model for **5 minutes**. During this step it is important to enable:

**"Focus on images with objects"**

![focus](/img/ai/one_ai_plugin/demos/lemons/training.jpg)

Even though the model is still very rough at this point, it already learns the basic visual structure of the lemons.

After exporting this temporary model, we used it to **automatically predict annotations for the remaining frames**. These predictions are not perfect, but they dramatically reduce manual labeling effort because the annotations only need small corrections.

This process can be repeated several times:

1. Label a small batch of images
2. Train a short model
3. Use the model to predict annotations
4. Correct the predictions
5. Train again

Each iteration improves the prediction quality and **reduces manual labeling time significantly**.
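To see why this pays off, here is a purely illustrative back-of-the-envelope simulation of the loop above. The function and all numbers are hypothetical stand-ins for the manual steps in ONE WARE Studio; the point is only that correcting predictions is cheaper than labeling from scratch:

```python
# Hypothetical simulation of the label-bootstrap loop (not a real API):
# each round, the preliminary model pre-annotates the next batch and only
# the fraction it gets wrong still needs manual correction.
def simulated_manual_effort(total_frames=135, seed_batch=15,
                            rounds=3, error_rate=0.4, improvement=0.5):
    effort = seed_batch               # seed images are labeled fully by hand
    remaining = total_frames - seed_batch
    for _ in range(rounds):           # train -> predict -> correct -> repeat
        batch = remaining // rounds
        effort += batch * error_rate  # only wrong predictions need fixing
        error_rate *= improvement     # each retrain predicts better
    return effort

# Far below the 135 image-equivalents that full manual labeling would take:
print(simulated_manual_effort())
```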

<video autoPlay loop muted playsInline style={{ maxWidth: '80%', height: 'auto', display: 'block', margin: '0 auto', marginBottom: '5px' }}>
<source src={require('/img/ai/one_ai_plugin/demos/lemons/annotation.webm').default} type="video/webm" />
</video>

After the dataset was fully annotated, we manually split it into:

- **train**
- **validation**
- **test**

This also allowed us to reuse the exact same dataset for training a YOLO model as a benchmark comparison.
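If you prefer to script the split instead of doing it by hand, a deterministic shuffle-and-slice like the following is one common approach (file names and split fractions are illustrative; this demo used a manual split in ONE WARE Studio):

```python
# Minimal sketch of a reproducible train/validation/test split.
import random

def split_dataset(items, val_frac=0.15, test_frac=0.15, seed=42):
    items = list(items)
    random.Random(seed).shuffle(items)   # deterministic shuffle via fixed seed
    n_val = int(len(items) * val_frac)
    n_test = int(len(items) * test_frac)
    val, test = items[:n_val], items[n_val:n_val + n_test]
    train = items[n_val + n_test:]
    return train, val, test

frames = [f"frame_{i:03d}.jpg" for i in range(135)]
train, val, test = split_dataset(frames)
print(len(train), len(val), len(test))   # 95 20 20
```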

## Data Processing

For prefilters we selected:

**Resize to 25%**

Since the lemons occupy a large part of the frame, downscaling significantly reduces computation while still preserving enough detail for reliable detection.
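Assuming the percentage applies per side, which is how we read this setting, the pixel count drops quadratically, by a factor of 16:

```python
# Back-of-the-envelope effect of the "Resize to 25%" prefilter,
# assuming the scale factor applies to each side of the frame.
def resized(width, height, scale=0.25):
    return int(width * scale), int(height * scale)

w, h = resized(1920, 1080)            # a typical smartphone video frame
print(w, h)                           # 480 270
print((1920 * 1080) // (w * h))       # 16x fewer pixels to process
```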

For augmentations we used the default template settings for:

- Move
- Rotation
- Size
- Color

The **Flip augmentation was disabled** because the production setup has a fixed orientation and flipping would introduce unrealistic samples.

## Advanced Model Settings

To make the benchmark transparent, we switched from the standard capability mode to **Advanced**:

**AI → Capability Mode**

![capability](/img/ai/one_ai_plugin/demos/lemons/capability.jpg)

The following configuration was used:

- **Size Precision Effort:** 65%
- **Position Prediction Resolution:** 65%
- **Allow Overlap:** enabled
- **Surrounding Size Mode:** Relative to Image

Estimated object dimensions:

- **Min Width:** 10
- **Min Height:** 5
- **Max Width:** 30
- **Max Height:** 20

Complexity settings:

- **Same Class Difference:** 15
- **Background Difference:** 20
- **Detect Complexity:** 40

These parameters are not strictly required, but they help ensure a **fair and transparent benchmark setup**.

## Training

After configuring the dataset and model settings, the model was trained using the **default training settings for 15 minutes**.

Once training finished, the model was exported as **ONNX** and used in a Python benchmark script to compare performance against a YOLO model trained on the same dataset.
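The benchmark script itself is not reproduced here, but a timing harness of this general shape is typical for such comparisons. The `infer` callable below is a stand-in so the sketch stays runnable; a real script would wrap a run of an `onnxruntime.InferenceSession` instead:

```python
# Generic inference-timing harness (illustrative, not the actual script).
import time

def benchmark(infer, inputs, warmup=3):
    for x in inputs[:warmup]:            # warm-up runs, excluded from timing
        infer(x)
    start = time.perf_counter()
    for x in inputs:
        infer(x)
    elapsed = time.perf_counter() - start
    avg_ms = elapsed / len(inputs) * 1000
    return avg_ms, 1000.0 / avg_ms       # latency (ms) and throughput (img/s)

avg_ms, ips = benchmark(lambda x: sum(x), [list(range(1000))] * 50)
print(f"{avg_ms:.3f} ms/image, {ips:.1f} images/sec")
```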

## Results

We compared the exported ONE AI model against **YOLO26-fast**, trained with Roboflow using the same dataset and similar augmentations. The YOLO model used an input resolution of **640 × 640** and was trained for approximately **145 epochs**.

| Metric | YOLO26-fast | ONE AI Custom ONNX |
| :--- | :--- | :--- |
| **Average inference time** | 18.39 ms | **8.88 ms** |
| **Throughput** | 54.38 images/sec | **112.66 images/sec** |
| **Precision** | **0.901** | 0.889 |
| **Recall** | 0.547 | **0.894** |
| **F1 score** | 0.681 | **0.891** |
| **mAP@0.50** | 0.476 | **0.578** |
| **True positives** | 191 | **312** |
| **False negatives** | 158 | **37** |

While YOLO achieved slightly higher precision, the ONE AI model achieved **much higher recall and a significantly better F1 score**, meaning it detected far more lemons overall.
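The recall and F1 figures in the table follow directly from the standard definitions and the reported counts, which makes for a quick sanity check:

```python
# Sanity-checking the table: recall = TP / (TP + FN), F1 = 2PR / (P + R).
tp, fn = 312, 37                 # ONE AI true positives / false negatives
precision = 0.889                # from the table
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(round(recall, 3), round(f1, 3))  # 0.894 0.891
```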

The benchmark also showed a major performance advantage on CPU inference. With an average runtime of **8.88 ms per image**, the ONE AI model runs **more than twice as fast** as the YOLO baseline.

This combination of **fast training, assisted labeling, and efficient inference** makes the approach particularly attractive for quickly deploying vision models in production environments.

The YOLO baseline model used in this comparison was trained using Roboflow and evaluated using a benchmark script that runs both models on the same dataset.

Since parts of the YOLO tooling are licensed under AGPL-3.0, the full benchmark script used for this evaluation is publicly available in the repository linked above.

<SupportBanner subject="ONE AI Tutorial Support" />
---
id: brain-tumor-segmentation-demo
title: Brain Tumor Segmentation Demo
sidebar_label: Segmentation (Brain Tumor)
---
import Link from '@docusaurus/Link';


# Brain Tumor Segmentation Demo

<div style={{ display: 'flex', gap: '1rem', flexWrap: 'wrap' }}>
  <img src="/img/ai/one_ai_plugin/demos/brain-tumor/sample_1.jpg" alt="brain_tumor_sample_01" style={{ width: '28%' }} />
  <img src="/img/ai/one_ai_plugin/demos/brain-tumor/sample_2.jpg" alt="brain_tumor_sample_02" style={{ width: '28%' }} />
  <img src="/img/ai/one_ai_plugin/demos/brain-tumor/sample_3.jpg" alt="brain_tumor_sample_03" style={{ width: '28%' }} />
</div>


:::info Try it yourself
To try this demo, click on the **Download Project** button below. If you don’t have an account yet, you will be prompted to sign up. After logging in, the quick start overview will open where you can select the **Brain Tumor Segmentation** project. Once installed, ONE WARE Studio will open automatically.
:::

<div className="text--center" style={{ display: 'flex', justifyContent: 'center', gap: '1rem', flexWrap: 'wrap' }}>
<Link className="button button--primary button--lg" href="oneware://oneai/quick-start/chips" target="_blank" rel="noopener noreferrer">
<p className="m-0 p-0">Download Project</p>
</Link>
</div>

## About this demo
In this demo, we will build our first **segmentation model** using ONE AI. Unlike classification or object detection, segmentation assigns a class label to **every individual pixel** of an image. This makes it particularly well suited for medical imaging tasks, such as identifying tumor regions in MRI scans.


The goal of this tutorial is to demonstrate how a brain tumor segmentation model can be trained using only **50 MRI images**. The focus is not on achieving state-of-the-art medical performance, but on showing how quickly and efficiently a segmentation workflow can be set up and trained with ONE AI.

By using the **Medical** template, we already benefit from meaningful default prefilter and augmentation settings that are well suited for medical imaging data.

## Dataset overview
For this tutorial, we use a small dataset consisting of **50 MRI images of the human brain**.

These images were selected from a larger Roboflow dataset containing many more samples annotated in an object detection format. You can find the original dataset [here](https://universe.roboflow.com/brain-igk9s/sbr1-ogtyy).

Even with a relatively small dataset, segmentation combined with targeted augmentations can extract useful spatial information and allow us to train a functional model.

## Preparing the dataset
To get started, create a new AI Generator by clicking **AI > Open AI Generator**.

Enter a name for your project and select the **Medical** template. This template provides a set of default configurations suitable for medical data such as MRI scans.

![template](/img/ai/one_ai_plugin/demos/brain-tumor/template_selection.jpg)

Click **Create** to generate the `.oneai` configuration file. This file contains all relevant settings, including dataset import, prefilters, augmentations, hardware configuration, model settings, and training parameters.

Since the dataset is not annotated in a segmentation format, it cannot be imported as a standard labeled dataset. For this tutorial, the dataset is already provided in the ONE WARE format.

To load it, simply replace the existing dataset folder in the project directory with the provided dataset. This can be done directly in ONE WARE Studio via drag and drop.

Make sure to switch the mode in the top-right corner from **Detection** to **Segmentation**, and add a label in the labels section. After doing this, the segmentation masks should appear when opening an image.

![mode_selection](/img/ai/one_ai_plugin/demos/brain-tumor/mode_selection.jpg)

If you want to annotate the images yourself, you can import the MRI images using **Import Dataset** or by dragging and dropping them into ONE WARE Studio. Because the images are initially unlabeled, they will be imported as an unlabeled dataset.

For annotation you can use the following tools:

- **Colored Pencil** – used to directly mark tumor regions on the MRI images
- **Eraser** – used to remove incorrect or imprecise annotations
- **Brush Size Control** – adjust the brush size depending on tumor size and image resolution
- **Opacity Control** – modify the opacity of the segmentation mask for better visibility

<video autoPlay loop muted playsInline style={{ maxWidth: '80%', height: 'auto', display: 'block', margin: '0 auto', marginBottom: '5px' }}>
<source src={require('/img/ai/one_ai_plugin/demos/brain-tumor/segmentation_mask.webm').default} type="video/webm" />
</video>
Carefully label the tumor regions on all 50 images. At this stage, **annotation quality is more important than speed**, as the overall segmentation performance strongly depends on precise masks.

In a real medical scenario, this task should ideally be performed by a **domain expert** to ensure annotation accuracy.

## Filters and Augmentations
By selecting the **Medical** template, most prefilter and augmentation settings are already well suited for this task.

One useful adjustment is the **channel filter** in the prefilter section. Since MRI images are grayscale, we can remove the unnecessary color channels (**G and B**). This reduces memory usage and keeps the processing pipeline efficient without losing relevant information.
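The reasoning is easy to verify: in a grayscale image stored as RGB, all three channels are identical, so a single channel carries the full signal at a third of the memory. A tiny stand-alone illustration, using plain nested lists in place of a real image array:

```python
# For grayscale MRI data stored as RGB, R == G == B per pixel,
# so keeping only one channel loses no information.
def keep_red_channel(image):
    return [[pixel[0] for pixel in row] for row in image]

rgb = [[(120, 120, 120), (35, 35, 35)],
       [(200, 200, 200), (0, 0, 0)]]
gray = keep_red_channel(rgb)
print(gray)  # [[120, 35], [200, 0]]
```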

Additional prefilters are generally not recommended, as they may remove subtle but important image details.

The preset augmentations already fit this use case well, so we keep them unchanged. These augmentations simulate small variations in patient positioning and scanner setup while preserving the anatomical structure of the brain.

## Model settings
In medical applications, missing a tumor region is usually more critical than falsely detecting one. Therefore, we bias the model towards **higher recall** by setting the **precision–recall prioritization** to **75**.

We apply the following model settings:

- **Maximum Memory Usage:** **90** (sufficient for testing the model later on CPU)
- **Estimated Surrounding (Min):** **10 / 10**
- **Estimated Surrounding (Max):** **30 / 30**
These values correspond to the approximate size of the smallest and largest tumors in the dataset.
- **Same Class Difference:** **25**
Tumor appearances vary slightly but remain relatively consistent.
- **Background Difference:** **25**
MRI scans were taken from different angles, resulting in moderate background variation.
- **Detection Simplicity:** **75**
Overall, this is a relatively simple segmentation task.

These settings help the model focus on local tumor structures while maintaining sufficient separation from the background.

## Training the model
Once labeling, filtering, augmentation, and model configuration are complete, we can start training.

Click **Sync**, create a new model using the **+** button, and open the training configuration by clicking **Train**.

For this tutorial, we set the **training time** to **15 minutes**. This is sufficient to demonstrate the segmentation workflow and produce meaningful results even with a limited dataset.

During training, you can monitor progress in the **Logs** and **Statistics** sections.

## Testing the model
After training is finished, click **Test** to evaluate the segmentation model.

The testing view allows you to visually inspect:

- The predicted segmentation masks
- Differences between ground truth annotations and model output

This qualitative evaluation is especially important for segmentation tasks, where **spatial accuracy and mask quality** often matter more than a single numerical metric.

import SupportBanner from '@site/src/components/SupportBanner';

<SupportBanner subject="ONE AI Tutorial Support" />