detect/thresh: expose threshold hash bucket depth stats #15110

Closed
jlucovsky wants to merge 4 commits into OISF:main from jlucovsky:8401/3
Conversation

@jlucovsky
Contributor

Continuation of #15098

Add two new stats counters that report collision pressure in the threshold hash table:

  • detect.thresholds.max_bucket_depth — current maximum bucket depth across all buckets
  • detect.thresholds.avg_bucket_depth — average depth of non-empty buckets (total entries / non-empty buckets)

These counters allow operators to determine whether detect.thresholds.hash-size needs tuning. A consistently high
max_bucket_depth indicates hash collisions are degrading lookup performance. A low avg_bucket_depth with a high
max indicates skewed distribution into hot-spot buckets.

Link to ticket: https://redmine.openinfosecfoundation.org/issues/8401

Describe changes:

  • Add infrastructure for context-dependent counters
  • Expose threshold hash bucket counters
  • Document the new counters

Updates

  • Alphabetically sort added schema entries
  • Split counter api change into separate commit

Provide values to any of the below to override the defaults.

  • To use a Suricata-Verify or Suricata-Update pull request,
    link to the pull request in the respective _BRANCH variable.
  • Leave unused overrides blank or remove.

SV_REPO=
SV_BRANCH=OISF/suricata-verify#2985
SU_REPO=
SU_BRANCH=

Add StatsRegisterGlobalCounterWithContext() which passes a caller-
supplied context pointer to the getter function on each poll. This
allows multiple independent subsystems to share a single getter
implementation without requiring per-instance static wrappers.

Refactor counter registration to use a shared StatsFindOrAllocCounter()
helper, eliminating duplication between the standard and context-aware
registration paths. Move type assignment into the helper and fix a
double strrchr() call when computing short_name.

Issue: 8401
Add per-bucket length tracking, a nonempty_buckets atomic, and a
256-slot depth histogram to THashTableContext. The histogram enables
amortized O(1) computation of the current maximum bucket depth — the
value decreases as entries are removed, unlike a high-water mark.

THashBucketInsert/THashBucketRemove helpers consolidate bookkeeping
across all insert and remove sites. The walk-down path that lowers
max_bucket_depth re-checks the vacated histogram slot before the CAS
to avoid underreporting when a concurrent insert repopulates it.

Issue: 8401
Register detect.thresholds.max_bucket_depth and
detect.thresholds.avg_bucket_depth as global counters in
ThresholdRegisterGlobalCounters(), where stats_ctx is guaranteed
to be initialized.

Together, avg_bucket_depth shows overall collision pressure while
max_bucket_depth identifies pathological hot-spot buckets, helping
determine whether the hash function or table size needs tuning.

Update EVE JSON schema with the new fields.

Issue: 8401
Add a note to the threshold hash-size configuration section explaining
how the new avg_bucket_depth and max_bucket_depth counters can guide
hash-size tuning.

Issue: 8401
@suricata-qa

Information: QA ran without warnings.

Pipeline = 30569

@jlucovsky
Contributor Author

Continued in #15113

@jlucovsky jlucovsky closed this Mar 27, 2026