diff --git a/docs/one-ai/02-demos/19-lemons.mdx b/docs/one-ai/02-demos/19-lemons.mdx
new file mode 100644
index 0000000..e4ae852
--- /dev/null
+++ b/docs/one-ai/02-demos/19-lemons.mdx
@@ -0,0 +1,170 @@
+---
+id: lemon-detection-demo
+title: Lemon Detection Demo
+sidebar_label: Lemon Detection
+---
+
+import Link from '@docusaurus/Link';
+import SupportBanner from '@site/src/components/SupportBanner';
+
+# Lemon Detection Demo
+
+
+
+:::info Try it yourself
+To try the AI, click the **Try Demo** button below. If you do not have an account yet, you will be prompted to sign up. Afterwards, the quick-start projects overview opens, where you can select **Lemon Detection**. Once ONE WARE Studio is installed, the project opens automatically.
+:::
+
+
+
+ Try Demo
+
+
+ Dataset
+
+
+
+## About this demo
+
+This demo showcases a simple but realistic computer vision task: detecting **ripe** and **unripe lemons** in a production environment.
+
+The interesting part is that the entire dataset was created from a **single 9-second smartphone video**. Using the video import tool in ONE WARE Studio, the video was automatically split into **135 frames**, which were then used as the dataset for training.
+
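For context, splitting the 9-second clip into 135 frames corresponds to sampling the video at 15 frames per second:

```python
# Frame sampling implied by the demo: a 9 s clip split into 135 frames.
clip_seconds = 9
frame_count = 135
sampling_fps = frame_count / clip_seconds  # 15.0 frames per second
```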
+The goal of this demo is not just detection accuracy, but also to demonstrate how **dataset creation and labeling can be accelerated dramatically by using the model itself during the annotation process**.
+
+## Project Setup
+
+To reproduce this demo, start a new project by clicking:
+
+**AI → ONE AI Project Generator**
+
+Then configure:
+
+- **Mode:** Detection
+- **Task:** Object Detection
+- **Template:** Controlled Environment
+
+This template is ideal for industrial setups where the **camera angle, background, and lighting remain relatively stable**.
+
+## Fast Labeling with Model-Assisted Annotation
+
+Instead of labeling all images manually, we used a much faster workflow in which the model assists with the annotation.
+
+First, we manually labeled only **about 15 images** with two classes:
+
+- **ripe**
+- **unripe**
+
+Then we trained a quick preliminary model for **5 minutes**. During this step it is important to enable:
+
+**"Focus on images with objects"**
+
+
+
+Even though the model is still very rough at this point, it already learns the basic visual structure of the lemons.
+
+After exporting this temporary model, we used it to **automatically predict annotations for the remaining frames**. These predictions are not perfect, but they dramatically reduce manual labeling effort because the annotations only need small corrections.
+
+This process can be repeated several times:
+
+1. Label a small batch of images
+2. Train a short model
+3. Use the model to predict annotations
+4. Correct the predictions
+5. Train again
+
+Each iteration improves the prediction quality and **reduces manual labeling time significantly**.
+
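The iteration above can be sketched in plain Python. Here `label_manually`, `train`, `predict`, and `correct` are hypothetical stand-ins for the Studio's manual labeling, the short training run, the exported model's predictions, and the manual correction pass:

```python
def annotate_iteratively(frames, label_manually, train, predict, correct,
                         seed_size=15, batch_size=40):
    """Model-assisted annotation: hand-label a seed batch, then let each
    preliminary model pre-annotate the next batch for manual correction."""
    labels = {f: label_manually(f) for f in frames[:seed_size]}  # step 1
    while len(labels) < len(frames):
        model = train(labels)                                    # steps 2 and 5
        batch = [f for f in frames if f not in labels][:batch_size]
        for f in batch:
            labels[f] = correct(f, predict(model, f))            # steps 3 and 4
    return labels
```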
+
+
+After the dataset was fully annotated, we manually split it into:
+
+- **train**
+- **validation**
+- **test**
+
+This also allowed us to reuse the exact same dataset for training a YOLO model as a benchmark comparison.
+
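In this demo the split was done by hand in the Studio; a scripted equivalent might look like the sketch below (the 70/15/15 ratios and file names are assumptions, since the demo does not state them):

```python
import random

def split_dataset(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle deterministically and split into (train, validation, test)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_frac)
    n_val = int(len(items) * val_frac)
    return items[n_test + n_val:], items[n_test:n_test + n_val], items[:n_test]
```

A fixed seed keeps the split reproducible, which matters when the same dataset is reused for a benchmark comparison.
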
+## Data Processing
+
+For prefilters we selected:
+
+**Resize to 25%**
+
+Since the lemons occupy a large part of the frame, downscaling significantly reduces computation while still preserving enough detail for reliable detection.
+
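As a quick sanity check, scaling both axes to 25% leaves only 1/16 of the pixels to process. The 1920 × 1080 frame size below is a hypothetical smartphone resolution, not something stated in the demo:

```python
def resized_shape(width, height, scale=0.25):
    """Image shape after the Resize prefilter (each axis scaled independently)."""
    return round(width * scale), round(height * scale)

w, h = 1920, 1080                    # hypothetical input resolution
rw, rh = resized_shape(w, h)         # (480, 270)
pixel_ratio = (rw * rh) / (w * h)    # 0.0625, i.e. 16x fewer pixels
```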
+For augmentations we used the default template settings for:
+
+- Move
+- Rotation
+- Size
+- Color
+
+The **Flip augmentation was disabled** because the production setup has a fixed orientation and flipping would introduce unrealistic samples.
+
+## Advanced Model Settings
+
+To make the benchmark transparent, we switched from the standard capability mode to **Advanced**:
+
+**AI → Capability Mode**
+
+
+
+The following configuration was used:
+
+- **Size Precision Effort:** 65%
+- **Position Prediction Resolution:** 65%
+- **Allow Overlap:** enabled
+- **Surrounding Size Mode:** Relative to Image
+
+Estimated object dimensions:
+
+- **Min Width:** 10
+- **Min Height:** 5
+- **Max Width:** 30
+- **Max Height:** 20
+
+Complexity settings:
+
+- **Same Class Difference:** 15
+- **Background Difference:** 20
+- **Detect Complexity:** 40
+
+These parameters are not strictly required, but they help ensure a **fair and transparent benchmark setup**.
+
+## Training
+
+After configuring the dataset and model settings, the model was trained using the **default training settings for 15 minutes**.
+
+Once training finished, the model was exported as **ONNX** and used in a Python benchmark script to compare performance against a YOLO model trained on the same dataset.
+
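The actual benchmark script is linked in the repository; a minimal timing harness in the same spirit is sketched below. In the real script, `infer` would wrap an ONNX Runtime `InferenceSession.run` call; here it is just an arbitrary callable:

```python
import time

def benchmark(infer, images, warmup=5):
    """Return (average latency in ms, throughput in images/sec) for `infer`."""
    for img in images[:warmup]:          # warm-up runs, excluded from timing
        infer(img)
    start = time.perf_counter()
    for img in images:
        infer(img)
    elapsed = time.perf_counter() - start
    latency_ms = 1000 * elapsed / len(images)
    return latency_ms, 1000 / latency_ms
```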
+## Results
+
+We compared the exported ONE AI model against **YOLO26-fast**, trained with Roboflow using the same dataset and similar augmentations. The YOLO model used an input resolution of **640 × 640** and was trained for approximately **145 epochs**.
+
+| Metric | YOLO26-fast | ONE AI Custom ONNX |
+| :--- | :--- | :--- |
+| **Average inference time** | 18.39 ms | **8.88 ms** |
+| **Throughput** | 54.38 images/sec | **112.66 images/sec** |
+| **Precision** | **0.901** | 0.889 |
+| **Recall** | 0.547 | **0.894** |
+| **F1 score** | 0.681 | **0.891** |
+| **mAP@0.50** | 0.476 | **0.578** |
+| **True positives** | 191 | **312** |
+| **False negatives** | 158 | **37** |
+
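The derived numbers in the table are internally consistent: both models were evaluated on the same 349 ground-truth lemons (TP + FN), recall follows from the raw counts, and F1 is the harmonic mean of precision and recall (the small gap between 1000 / 8.88 and the reported 112.66 images/sec comes from latency rounding):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

recall_yolo = 191 / (191 + 158)            # TP / (TP + FN) ≈ 0.547
recall_oneai = 312 / (312 + 37)            # ≈ 0.894
f1_yolo = f1_score(0.901, recall_yolo)     # ≈ 0.681
f1_oneai = f1_score(0.889, recall_oneai)   # ≈ 0.891
throughput_yolo = 1000 / 18.39             # ≈ 54.38 images/sec
```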
+While YOLO achieved slightly higher precision, the ONE AI model achieved **much higher recall and a significantly better F1 score**, meaning it detected far more lemons overall.
+
+The benchmark also showed a major performance advantage on CPU inference. With an average runtime of **8.88 ms per image**, the ONE AI model runs **more than twice as fast** as the YOLO baseline.
+
+This combination of **fast training, assisted labeling, and efficient inference** makes the approach particularly attractive for quickly deploying vision models in production environments.
+
+The YOLO baseline model in this comparison was trained with Roboflow and evaluated with a benchmark script that runs both models on the same dataset.
+
+Since parts of the YOLO tooling are licensed under AGPL-3.0, the full benchmark script used for this evaluation is publicly available in the repository linked above.
+
+
\ No newline at end of file
diff --git a/static/img/ai/one_ai_plugin/demos/lemons/annotation.webm b/static/img/ai/one_ai_plugin/demos/lemons/annotation.webm
new file mode 100644
index 0000000..f0694f1
Binary files /dev/null and b/static/img/ai/one_ai_plugin/demos/lemons/annotation.webm differ
diff --git a/static/img/ai/one_ai_plugin/demos/lemons/capability.jpg b/static/img/ai/one_ai_plugin/demos/lemons/capability.jpg
new file mode 100644
index 0000000..01bdd58
Binary files /dev/null and b/static/img/ai/one_ai_plugin/demos/lemons/capability.jpg differ
diff --git a/static/img/ai/one_ai_plugin/demos/lemons/detection.jpg b/static/img/ai/one_ai_plugin/demos/lemons/detection.jpg
new file mode 100644
index 0000000..06404d4
Binary files /dev/null and b/static/img/ai/one_ai_plugin/demos/lemons/detection.jpg differ
diff --git a/static/img/ai/one_ai_plugin/demos/lemons/lemon_demo_web.webm b/static/img/ai/one_ai_plugin/demos/lemons/lemon_demo_web.webm
new file mode 100644
index 0000000..f5f08bc
Binary files /dev/null and b/static/img/ai/one_ai_plugin/demos/lemons/lemon_demo_web.webm differ
diff --git a/static/img/ai/one_ai_plugin/demos/lemons/training.jpg b/static/img/ai/one_ai_plugin/demos/lemons/training.jpg
new file mode 100644
index 0000000..ceb6db0
Binary files /dev/null and b/static/img/ai/one_ai_plugin/demos/lemons/training.jpg differ