A real-time plant classification pipeline on NVIDIA Jetson, optimized for fast, accurate inference at the edge.

AgStream - Plant Classification Pipeline For NVIDIA Edge Devices


🌾 Product Description

AgStream is a real-time inference pipeline for crop and weed classification, optimized for NVIDIA Jetson devices. The system demonstrates end-to-end model optimization from PyTorch → ONNX → TensorRT deployment with DeepStream 6.4.

Key Capabilities:

  • Real-time Processing: Classify RTSP video streams with metadata extraction
  • Edge Deployment: Optimized for NVIDIA Jetson platforms
  • Model Optimization: PyTorch → ONNX → TensorRT conversion
  • Agricultural Focus: 83 crop and weed categories from CropAndWeed dataset

🔧 Hardware & Software Requirements

Target Platform

  • Device: NVIDIA Jetson Orin Nano (Developer Kit)
  • JetPack: 6.2 [L4T 36.4.3]
  • DeepStream SDK: 6.4 (Triton multi-arch)
  • CUDA: 12.6
  • TensorRT: 10.3
  • Memory: 8GB RAM
  • OpenCV: 4.8.0 (GPU support depends on build)

📊 Performance Metrics

Evaluation was performed with the PyTorch models on CPU, with inputs resized to 256×256.

Classification Accuracy (CropAndWeed Dataset)

| Model     | 83-Class | Binary (Crop/Weed) | 9-Class | 24-Class | Model Size |
|-----------|----------|--------------------|---------|----------|------------|
| MobileNet | 67.2%    | 85.2%              | 84.6%   | 43.6%    | 28.3 MB    |
| ResNet18  | 67.2%    | 82.1%              | 81.5%   | 41.0%    | 135 MB     |

Inference Latency (CPU)

| Model     | Average Latency | Std Dev |
|-----------|-----------------|---------|
| MobileNet | 55.1 ms         | 26.7 ms |
| ResNet18  | 84.9 ms         | 51.4 ms |

⚠ Accuracy improves with hierarchical classification (fewer classes)
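
Hierarchical classification collapses the 83 fine-grained classes into coarser groups before scoring. A minimal sketch of that aggregation — the class names and the fine→coarse mapping below are illustrative, not the actual CropAndWeed hierarchy:

```python
# Collapse fine-grained class probabilities into coarser groups.
# The mapping here is a made-up stand-in for the dataset's hierarchy.
FINE_TO_COARSE = {
    "maize": "crop",
    "sugar_beet": "crop",
    "chamomile": "weed",
    "thistle": "weed",
}

def coarse_probs(fine_probs: dict) -> dict:
    """Sum fine-class probabilities within each coarse group."""
    out = {}
    for cls, p in fine_probs.items():
        group = FINE_TO_COARSE[cls]
        out[group] = out.get(group, 0.0) + p
    return out

probs = {"maize": 0.40, "sugar_beet": 0.15, "chamomile": 0.30, "thistle": 0.15}
print(coarse_probs(probs))  # crop ≈ 0.55, weed ≈ 0.45
```

Because a coarse group is correct whenever any of its fine members would have been, accuracy rises as the number of groups shrinks.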


🚀 Quick Start

1. Environment Setup

bash scripts/run_dev_jetson.sh

2. Run Pipeline

# Start RTSP server (terminal 1)
python src/rtsp/rtsp_server.py

# Run classification pipeline (terminal 2)
python src/deepstream/pipelines/deepstream_pipeline_cpu.py
# or
python src/deepstream/pipelines/deepstream_pipeline_gpu.py

# Optional: run metadata extraction
python src/deepstream/pipelines/access_metadata.py

🧠 Research and Development

1. Model Conversion & Optimization

# Export PyTorch to ONNX
python scripts/export_to_onnx.py resnet18
python scripts/export_to_onnx.py mobilenet
# TensorRT engine generation is automatic

2. Performance Benchmarking

python src/deepstream/speed_benchmark.py
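
The benchmark reports mean latency and standard deviation, as in the table above. A stdlib-only sketch of that measurement loop, where `infer` is a stand-in for the real model call:

```python
# Time repeated calls to an inference function and report mean / std dev,
# in the spirit of speed_benchmark.py (the real script times the model).
import statistics
import time

def benchmark(infer, n_runs: int = 50):
    """Return (mean_ms, std_ms) over n_runs calls to infer()."""
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(latencies), statistics.stdev(latencies)

# Dummy workload standing in for model inference:
mean_ms, std_ms = benchmark(lambda: time.sleep(0.001))
print(f"avg {mean_ms:.1f} ms ± {std_ms:.1f} ms")
```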

🎯 Pipeline Architecture

RTSP Stream → H.264 Decode → Video Convert → Stream Mux → AI Inference (TensorRT) → OSD Overlay → JPEG Encode → Frame Output

(Pipeline schema diagram)

Processing Details:

  • Input: 256×256 RGB frames from RTSP
  • Normalization: mean=[0.5,0.5,0.5], std=[0.25,0.25,0.25]
  • Batch Size: 1 (real-time)
  • Precision: FP16 (default; configurable)
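
The normalization above maps a [0,1]-scaled pixel through (x − mean) / std. A pure-Python sketch for reference — the actual pipeline applies this on-device during preprocessing:

```python
# Per-channel normalization with mean=0.5, std=0.25, as configured
# for the pipeline's 256x256 RGB input.
MEAN, STD = 0.5, 0.25

def normalize_pixel(value: int) -> float:
    """Map a uint8 pixel (0-255) to the network's input range."""
    scaled = value / 255.0          # [0, 255] -> [0, 1]
    return (scaled - MEAN) / STD    # [0, 1]   -> [-2, 2]

print(normalize_pixel(0), normalize_pixel(255))  # -> -2.0 2.0
```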

📁 Project Structure

  • src/ – Pipeline logic, inference modules, conversion scripts, evaluation
  • models/ – Trained models (PyTorch, ONNX, TensorRT)
  • scripts/ – Execution and export scripts
  • env/ – Environment setup per target (Jetson / CPU)
  • configs/ – Configuration files for pipeline and models
  • assets/ – Test data (images, videos)
  • docs/ – Documentation

🔬 Technical Details

  • Dataset: CropAndWeed (WACV 2023), 83 categories
  • Training: PyTorch
  • Export: PyTorch → ONNX (opset 17)
  • Optimization: ONNX → TensorRT
  • Deployment: DeepStream Python API
  • Container: nvcr.io/nvidia/deepstream-l4t:6.4-triton-multiarch, Python 3.10, OpenCV 4.11 CUDA

Development Focus:

  • Model optimization & performance analysis
  • Edge deployment & real-time inference
  • End-to-end video processing & metadata extraction
  • Benchmarking: latency & throughput

Code Quality:

isort src/ && black src/ && flake8 src/

Model Evaluation:

python src/evaluation/run_evaluation.py
python src/evaluation/run_hierarchical_evaluation.py
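
These scripts report classification accuracy at each label granularity. A minimal top-1 accuracy sketch, with made-up predictions and labels:

```python
# Top-1 accuracy: fraction of predictions matching the ground-truth label.
def top1_accuracy(preds, labels) -> float:
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

print(top1_accuracy([3, 7, 7, 1], [3, 7, 2, 1]))  # -> 0.75
```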

⭐ If you found this project useful, consider giving it a star
