AgStream is a real-time inference pipeline for crop and weed classification, optimized for NVIDIA Jetson devices. The system demonstrates end-to-end model optimization from PyTorch → ONNX → TensorRT deployment with DeepStream 6.4.
Key Capabilities:
- Real-time Processing: Classify RTSP video streams with metadata extraction
- Edge Deployment: Optimized for NVIDIA Jetson platforms
- Model Optimization: PyTorch → ONNX → TensorRT conversion
- Agricultural Focus: 83 crop and weed categories from CropAndWeed dataset
Test Environment:
- Device: NVIDIA Jetson Orin Nano (Developer Kit)
- JetPack: 6.2 [L4T 36.4.3]
- DeepStream SDK: 6.4 (Triton multi-arch)
- CUDA: 12.6
- TensorRT: 10.3
- Memory: 8GB RAM
- OpenCV: 4.8.0 (GPU support depends on build)
Accuracy was evaluated with the PyTorch models on CPU, with inputs resized to 256×256; the table reports classification accuracy at each label granularity.
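The CPU evaluation preprocessing can be sketched as below, assuming Pillow and NumPy are available; the `preprocess` helper and the bilinear resize choice are illustrative, while the mean/std values match the normalization listed under Processing Details.

```python
import numpy as np
from PIL import Image

MEAN, STD = 0.5, 0.25  # per-channel normalization used by the pipeline

def preprocess(img: Image.Image) -> np.ndarray:
    """Resize to 256x256 and normalize to the model's input range."""
    img = img.convert("RGB").resize((256, 256), Image.BILINEAR)
    x = np.asarray(img, dtype=np.float32) / 255.0      # HWC, [0, 1]
    x = (x - MEAN) / STD                               # normalize
    return x.transpose(2, 0, 1)[None]                  # NCHW batch of 1

x = preprocess(Image.new("RGB", (640, 480), (128, 128, 128)))
print(x.shape)  # (1, 3, 256, 256)
```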
| Model | 83-Class | Binary (Crop/Weed) | 9-Class | 24-Class | Model Size |
|---|---|---|---|---|---|
| MobileNet | 67.2% | 85.2% | 84.6% | 43.6% | 28.3 MB |
| ResNet18 | 67.2% | 82.1% | 81.5% | 41.0% | 135 MB |
Inference Latency:
| Model | Average Latency | Std Dev |
|---|---|---|
| MobileNet | 55.1ms | 26.7ms |
| ResNet18 | 84.9ms | 51.4ms |
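The latency figures come from `src/deepstream/speed_benchmark.py`; below is a minimal sketch of this style of measurement, with a dummy workload standing in for model inference. The `benchmark` helper and the warm-up/run counts are illustrative, not the repository's actual implementation.

```python
import statistics
import time

def benchmark(infer, n_warmup=10, n_runs=100):
    """Time repeated single-frame inference; return (mean_ms, std_ms)."""
    for _ in range(n_warmup):          # warm-up: exclude one-off setup costs
        infer()
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        times.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(times), statistics.stdev(times)

mean_ms, std_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_ms:.1f} ms ± {std_ms:.1f} ms")
```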
⚠ Accuracy improves with hierarchical classification (fewer classes)
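Hierarchical evaluation rolls the 83 fine labels up into coarser groups before scoring, so a fine-grained confusion between two weeds still counts as a correct coarse prediction. A minimal sketch with a hypothetical four-label mapping (label names are illustrative, not the repository's actual hierarchy):

```python
# Hypothetical label hierarchy: fine labels roll up to coarse groups.
FINE_TO_COARSE = {
    "maize": "crop",
    "sugar_beet": "crop",
    "chickweed": "weed",
    "thistle": "weed",
}

def coarse_accuracy(preds, targets):
    """Accuracy after mapping fine predictions/targets to coarse labels."""
    hits = sum(
        FINE_TO_COARSE[p] == FINE_TO_COARSE[t] for p, t in zip(preds, targets)
    )
    return hits / len(targets)

# A fine-grained mistake (thistle vs chickweed) is still a coarse-level hit:
acc = coarse_accuracy(["maize", "thistle"], ["maize", "chickweed"])
print(acc)  # 1.0
```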
```bash
bash scripts/run_dev_jetson.sh
```

```bash
# Start RTSP server (terminal 1)
python src/rtsp/rtsp_server.py

# Run classification pipeline (terminal 2)
python src/deepstream/pipelines/deepstream_pipeline_cpu.py
# or
python src/deepstream/pipelines/deepstream_pipeline_gpu.py

# Optional: run metadata extraction
python src/deepstream/pipelines/access_metadata.py
```

```bash
# Export PyTorch to ONNX
python scripts/export_to_onnx.py resnet18
python scripts/export_to_onnx.py mobilenet
# TensorRT engine generation is automatic
```

```bash
# Run the speed benchmark
python src/deepstream/speed_benchmark.py
```

Pipeline:

RTSP Stream → H.264 Decode → Video Convert → Stream Mux → AI Inference (TensorRT) → OSD Overlay → JPEG Encode → Frame Output
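The stage chain above can be sketched as a linear gst-launch-1.0 description string. Element names below are the standard DeepStream plugins, but the RTSP address, config path, and sink location are assumptions, not values from the repository; a real pipeline (as assembled in `src/deepstream/pipelines/`) also links `nvstreammux` through a request pad rather than a plain linear chain.

```python
# Hedged, simplified sketch of the processing stages as a GStreamer
# description string; not a drop-in replacement for the real pipeline code.
RTSP_URI = "rtsp://127.0.0.1:8554/stream"   # assumed local stream address

stages = [
    f"rtspsrc location={RTSP_URI}",          # RTSP stream in
    "rtph264depay",                          # extract H.264 from RTP
    "h264parse",
    "nvv4l2decoder",                         # hardware H.264 decode
    "nvvideoconvert",                        # video convert
    "nvstreammux name=mux batch-size=1 width=256 height=256",  # stream mux
    "nvinfer config-file-path=configs/classifier.txt",  # TensorRT inference
    "nvdsosd",                               # on-screen display overlay
    "nvjpegenc",                             # JPEG encode
    "multifilesink location=frames/%05d.jpg",  # frame output
]
description = " ! ".join(stages)
print(description)
```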
Processing Details:
- Input: 256×256 RGB frames from RTSP
- Normalization: mean=[0.5,0.5,0.5], std=[0.25,0.25,0.25]
- Batch Size: 1 (real-time)
- Precision: FP16 (default; configurable)
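If inference runs through DeepStream's `nvinfer` element, the mean/std above translate into its `net-scale-factor` and `offsets` config keys. The mapping formula is `nvinfer`'s documented preprocessing; the resulting numbers are derived here, not copied from the repository's config:

```python
# nvinfer normalizes as y = net-scale-factor * (pixel - offset), while the
# model expects y = (pixel/255 - mean) / std. Equating the two expressions:
mean, std = 0.5, 0.25

net_scale_factor = 1.0 / (255.0 * std)   # 1 / 63.75
offsets = [255.0 * mean] * 3             # one offset per RGB channel

print(f"net-scale-factor={net_scale_factor:.8f}")
print("offsets=" + ";".join(str(o) for o in offsets))
```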
- `src/` – Pipeline logic, inference modules, conversion scripts, evaluation
- `models/` – Trained models (PyTorch, ONNX, TensorRT)
- `scripts/` – Execution and export scripts
- `env/` – Environment setup per target (Jetson / CPU)
- `configs/` – Configuration files for pipeline and models
- `assets/` – Test data (images, videos)
- `docs/` – Documentation
- Dataset: CropAndWeed (WACV 2023), 83 categories
- Training: PyTorch
- Export: PyTorch → ONNX (opset 17)
- Optimization: ONNX → TensorRT
- Deployment: DeepStream Python API
- Container: nvcr.io/nvidia/deepstream-l4t:6.4-triton-multiarch (Python 3.10, CUDA-enabled OpenCV 4.11)
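Putting the normalization, precision, and model details together, an `nvinfer` classifier configuration for this setup might look like the sketch below. All file paths are hypothetical, not copied from the repository's `configs/`; `network-mode=2` selects the FP16 default noted above.

```ini
[property]
# Illustrative paths, not the repository's actual files
onnx-file=models/mobilenet.onnx
labelfile-path=configs/labels_83.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# network-type: 1 = classifier
network-type=1
# Matches the (x/255 - 0.5) / 0.25 training normalization
net-scale-factor=0.01568627
offsets=127.5;127.5;127.5
# model-color-format: 0 = RGB
model-color-format=0
infer-dims=3;256;256
```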
Development Focus:
- Model optimization & performance analysis
- Edge deployment & real-time inference
- End-to-end video processing & metadata extraction
- Benchmarking: latency & throughput
Code Quality:
```bash
isort src/ && black src/ && flake8 src/
```

Model Evaluation:

```bash
python src/evaluation/run_evaluation.py
python src/evaluation/run_hierarchical_evaluation.py
```

⭐ If you found this project useful, consider giving it a star.

