
False Positives with RTSP 401: Introduce Credential Validation and Deep Stream Verification #5

@855princekumar

Description


Feature Request: Credential-Aware Validation, Deep Check Strategy, and Retention Integration

Background

Currently, RTSP streams that return 401 Unauthorized are treated as healthy (reachable), which is valid from a network-availability perspective. However, this leaves a gap when a user configures incorrect credentials: the system continues to mark the stream as OK without verifying actual video accessibility.


Problem Statement

  • Incorrect credentials are not detected

  • No frame-level validation occurs for protected streams

  • System may report false positives for stream health


Proposed Enhancements

1. Initial Deep Validation on Stream Addition

When a new stream is added:

  • Perform a full connection with frame capture (OpenCV-based) as the first validation step

  • This ensures:

    • Credentials are correct

    • Stream is decodable

    • Endpoint is not just reachable but usable

Expected Behavior:

  • Frame successfully captured → mark stream as valid

  • Frame capture fails → mark as misconfigured (even if RTSP responds)
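The expected behavior above could be sketched roughly as follows. This is a minimal illustration, not StreamPulse's actual API: the function names (`capture_first_frame`, `validate_new_stream`) and the status enum are hypothetical, and the OpenCV capture is injected so the classification logic stays testable without a live camera.

```python
from enum import Enum


class InitialStatus(Enum):
    VALID = "valid"                  # frame successfully captured
    MISCONFIGURED = "misconfigured"  # RTSP may respond, but no usable video


def capture_first_frame(rtsp_url):
    """Open the stream with OpenCV and grab a single frame (None on failure)."""
    import cv2  # deferred import: the classifier below is testable without OpenCV
    cap = cv2.VideoCapture(rtsp_url)
    try:
        ok, frame = cap.read()
    finally:
        cap.release()
    return frame if ok else None


def validate_new_stream(rtsp_url, capture=capture_first_frame):
    """Deep validation on stream addition: a frame proves the stream is usable."""
    frame = capture(rtsp_url)
    return InitialStatus.VALID if frame is not None else InitialStatus.MISCONFIGURED
```

Injecting `capture` also makes it easy to swap in an FFmpeg-based probe later without touching the classification step.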


2. Periodic Deep Validation Strategy

After initial validation:

  • Continue using lightweight RTSP DESCRIBE / HTTP checks

  • Introduce periodic deep validation

Suggested Approach:

  • Perform one frame-based validation every N cycles (e.g., every 10–20 checks)

  • Use round-robin or randomized selection across streams

Purpose:

  • Detect credential changes

  • Identify stream decode failures

  • Catch silent failures (camera alive but not streaming)

  • Handle endpoint/IP changes
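One way the "every N cycles, round-robin across streams" idea could look is sketched below. The class name and interface are illustrative assumptions, not existing StreamPulse code; the caller would invoke `next_deep_target` once per lightweight check cycle and run a frame-based validation only when it returns a stream.

```python
import itertools


class DeepCheckScheduler:
    """Interleave one deep (frame-based) check every N lightweight cycles,
    rotating round-robin across the configured streams."""

    def __init__(self, stream_ids, every_n=10):
        self.every_n = every_n
        self._cycle = itertools.cycle(stream_ids)
        self._count = 0

    def next_deep_target(self):
        """Call once per lightweight check cycle.

        Returns the stream id due for a deep check, or None when this
        cycle should stay lightweight (RTSP DESCRIBE / HTTP only).
        """
        self._count += 1
        if self._count % self.every_n == 0:
            return next(self._cycle)
        return None
```

A randomized variant could replace `itertools.cycle` with `random.choice` to avoid predictable load patterns; the fixed-vs-adaptive frequency question raised below stays open either way.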


3. Context-Aware Handling of 401 Responses

Refine classification logic:

Scenario | Expected Behavior
-- | --
No credentials provided | Treat 401 as reachable (OK)
Credentials provided but invalid | Mark as FAIL after deep validation
Valid credentials | Confirm via successful frame validation

This introduces credential-aware health classification without breaking lightweight monitoring.
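The table above could translate into a small classifier along these lines. The function name, return strings, and `frame_ok` parameter (the outcome of the deep check, `None` if not yet run) are assumptions for illustration:

```python
from typing import Optional


def classify_401(credentials_provided: bool, frame_ok: Optional[bool]) -> str:
    """Credential-aware handling of an RTSP 401 response.

    frame_ok is the result of the OpenCV deep validation:
    True = frame captured, False = capture failed, None = not yet run.
    """
    if not credentials_provided:
        # Camera demands auth we never supplied: reachable, so OK.
        return "OK"
    if frame_ok is None:
        # Credentials configured but unverified: defer to the deep check.
        return "PENDING_DEEP_CHECK"
    # Deep check settles it: valid creds stream frames, invalid ones don't.
    return "OK" if frame_ok else "FAIL"
```

Keeping the lightweight check's verdict provisional (`PENDING_DEEP_CHECK`) until a deep check runs is what avoids breaking the existing lightweight monitoring path.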


4. Retention Policy Alignment (Reference Existing Issue)

A separate issue has already been raised regarding retention policy and database growth.

Summary of current concern:

  • Long-running deployments accumulate large SQLite tables (per stream)

  • Exported CSV becomes heavy when aggregating multiple streams

  • No automatic cleanup or archival mechanism exists

Proposed Alignment:

  • Integrate retention with monitoring lifecycle:

    • Auto-export CSV (daily / weekly / monthly)

    • Apply retention window to SQLite tables (per stream)

  • Keep SQLite as a lean, rolling operational store

  • Use CSV as the long-term analytical dataset (SLA, reporting)

This ensures:

  • Stable database performance

  • Efficient SLA computation

  • Predictable storage usage over time
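A rolling-store rotation along these lines could implement the SQLite-to-CSV handoff. The table schema (`ts`, `status` columns) and function name are illustrative assumptions, not the project's actual schema:

```python
import csv
import sqlite3
import time


def rotate_stream_table(conn, table, csv_path, retention_s):
    """Append rows older than the retention window to a CSV archive,
    then delete them so the SQLite table stays a lean rolling store.

    Returns the number of rows exported.
    """
    cutoff = time.time() - retention_s
    rows = conn.execute(
        f"SELECT ts, status FROM {table} WHERE ts < ?", (cutoff,)
    ).fetchall()
    if rows:
        with open(csv_path, "a", newline="") as f:
            csv.writer(f).writerows(rows)
        conn.execute(f"DELETE FROM {table} WHERE ts < ?", (cutoff,))
        conn.commit()
    return len(rows)
```

Running this per stream table on the daily/weekly/monthly export schedule would keep SQLite bounded while the CSV accumulates the long-term SLA dataset.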


Future Alignment

  • SLA + MQTT integration (currently under testing)

  • Unified binary builds for Windows and Linux (in progress)

  • Docker-based deployments will remain consistent


Summary

These enhancements will improve StreamPulse by:

  • Eliminating false positives caused by incorrect credentials

  • Introducing controlled deep validation without impacting performance

  • Aligning monitoring with long-term data retention strategy

The goal is to maintain the existing lightweight architecture while improving accuracy and operational robustness.


Open Questions

  • Optimal frequency for deep validation (fixed vs adaptive)?

  • Should credential validation be optional or enforced by default?

  • Retention defaults: global vs per-stream configuration?


Metadata

Labels

bug (Something isn't working), enhancement (New feature or request)
