
NecroScope Testing Strategy (v1)

This document defines the testing strategy for NecroScope v1. It describes how tests are organized, which types of tests must exist, how sample log fixtures are used, and how agents should write and run tests. All tests must follow the rules established in AGENTS.md and the project specifications. See README.md "V1 limitations" for current unsupported features.

1. Purpose

The goal of this testing strategy is to ensure:

- Correct and deterministic ingestion of Linux kernel logs
- Stable parsing across diverse log formats and edge cases
- Reliable analysis behavior for detected errors and warnings
- Accurate and predictable output formatting
- High confidence during refactors or when adding new functionality

All tests must be written using pytest unless otherwise specified.

2. Test Categories

NecroScope uses five major categories of tests:

2.1 Unit Tests

Small, isolated tests for individual functions or classes, for example (see the sketch after this list):

- Timestamp normalization
- Kernel version extraction
- Metadata parsing
- Line tokenization
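
A minimal unit-test sketch for timestamp normalization is shown below. The module path `necroscope.ingestion` and the function `normalize_timestamp` are assumptions for illustration, not the actual API; use the real names from the codebase:

```python
# Hypothetical sketch: module path and function name are assumed.
import pytest

from necroscope.ingestion import normalize_timestamp  # assumed location


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("[    0.000000]", 0.0),
        ("[   12.345678]", 12.345678),
    ],
)
def test_normalize_timestamp(raw, expected):
    # dmesg-style "[ seconds.microseconds]" prefixes normalize to floats
    assert normalize_timestamp(raw) == pytest.approx(expected)
```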

2.2 Ingestion Tests

Focused on verifying that the ingestion layer behaves correctly across a wide range of log conditions. These tests should validate (see the sketch after this list):

- Multi-kernel logs
- Malformed or truncated logs
- Logs missing timestamps
- Logs with unusual or non-standard prefixes
- Logs containing stack traces
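
For malformed input, a hedged sketch follows, assuming a hypothetical `ingest_log` entry point that tolerates bad lines rather than raising:

```python
# Hypothetical sketch: ingest_log and its return type are assumed.
from pathlib import Path

from necroscope.ingestion import ingest_log  # assumed location

FIXTURES = Path(__file__).parent / "data"


def test_malformed_log_is_tolerated():
    # Truncated or garbled lines should degrade gracefully, not crash.
    records = ingest_log(FIXTURES / "dmesg_malformed.txt")
    assert isinstance(records, list)
```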

2.3 Analysis Engine Tests

Ensure that the analysis engine correctly (see the sketch after this list):

- Detects kernel errors, warnings, panics, and oopses
- Extracts stack traces
- Maps events to kernel versions or boot boundaries
- Produces consistent summaries
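
A sketch of a WARN-detection test against the existing cut-here fixture; `analyze` and the event attributes (`kind`, `frames`) are assumptions for illustration:

```python
# Hypothetical sketch: analyze() and the event model are assumed.
from pathlib import Path

from necroscope.analysis import analyze  # assumed location

FIXTURES = Path(__file__).parent / "data"


def test_detects_warn_block_with_frames():
    lines = (FIXTURES / "dmesg_warn_cut_here_excerpt.log").read_text().splitlines()
    events = analyze(lines)
    warns = [e for e in events if e.kind == "WARN"]  # attribute name assumed
    assert warns, "expected at least one WARN event"
    assert warns[0].frames, "WARN blocks should carry stacktrace frames"
```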

2.4 Output Formatting Tests

Verify that structured output matches the v1 output specification. Tests should validate (see the sketch after this list):

- Field names and ordering
- Error lists
- Summaries
- Any metadata fields defined in OUTPUT.md
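
A field-order check might look like the sketch below; `format_report` and the field names are placeholders, and the authoritative schema lives in OUTPUT.md:

```python
# Hypothetical sketch: format_report and the field names are assumed;
# consult OUTPUT.md for the real v1 schema.
from necroscope.output import format_report  # assumed location


def test_report_field_order():
    report = format_report(events=[], metadata={})
    # Field names and their ordering must match the v1 spec.
    assert list(report) == ["metadata", "errors", "summary"]  # assumed order
```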

2.5 End-to-End Tests

Full pipeline tests using real sample logs. These tests validate (see the sketch after this list):

- Ingestion → analysis → output
- Overall stability
- No unexpected regressions
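
An end-to-end sketch over the anonymized dmesg fixture, assuming a hypothetical top-level `run_pipeline` entry point and dict-shaped output:

```python
# Hypothetical sketch: run_pipeline and the report keys are assumed.
from pathlib import Path

from necroscope import run_pipeline  # assumed location

FIXTURES = Path(__file__).parent / "data"


def test_end_to_end_anonymized_dmesg():
    # Full pipeline: ingestion -> analysis -> output.
    report = run_pipeline(FIXTURES / "dmesg_anonymized.log")
    # The fixture contains PCI BAR allocation failures, so errors
    # should be present in the final report.
    assert report["errors"]  # key name assumed
```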

3. Sample Log Fixtures

Sample kernel logs are stored under:

```
tests/data/
```

Fixtures should be simple text files and named descriptively, for example:

```
dmesg_simple.txt
dmesg_stacktrace.txt
dmesg_multiboot.txt
dmesg_malformed.txt
```
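
To keep fixture paths consistent across test modules, a shared pytest fixture in conftest.py is one option. This is a suggested pattern, not an existing project utility:

```python
# Suggested conftest.py pattern; not an existing NecroScope utility.
from pathlib import Path

import pytest

DATA_DIR = Path(__file__).parent / "data"


@pytest.fixture
def fixture_path():
    def _resolve(name: str) -> Path:
        path = DATA_DIR / name
        assert path.exists(), f"missing fixture: {name}"
        return path
    return _resolve
```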

3.1 Sanitization Requirements

All fixtures must be fully anonymized. Check for and remove:

- Hostnames
- MAC addresses
- Serial numbers
- Identifiable mount paths
- Custom kernel command lines with sensitive content

Fixtures should be representative of real-world logs but contain no personal or system-identifying information.
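
Sanitization can itself be enforced with a guard test. The sketch below scans every fixture for MAC-address-like tokens; the pattern is illustrative and can be extended to hostnames or serial numbers:

```python
# Illustrative guard test; extend the pattern list as needed.
import re
from pathlib import Path

DATA_DIR = Path(__file__).parent / "data"
MAC_RE = re.compile(r"\b[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}\b")


def test_fixtures_contain_no_mac_addresses():
    for fixture in sorted(DATA_DIR.iterdir()):
        text = fixture.read_text(errors="replace")
        assert not MAC_RE.search(text), f"possible MAC address in {fixture.name}"
```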

3.2 Fixture Documentation

Each fixture should have a short description inside this file indicating:

- What the log demonstrates
- What ingestion is expected to extract
- What errors or warnings should be detected
- Which tests use the fixture

Current fixtures:

dmesg_anonymized.log
- Anonymized real dmesg log with PCI BAR allocation failures
- Expected ingestion: kernel version and command lines detected
- Expected analysis: PCI_RESOURCE_FAILURE blocks with titles

dmesg_warn_cut_here_excerpt.log
- WARN cut-here excerpt used for structured block detection
- Expected analysis: WARN block with stacktrace frames

dmesg_errors_synthetic.log
- Synthetic error-like lines used for deterministic severity tests
- Expected analysis: error-like events without structured blocks

4. Expected Behavior for Agents When Writing Tests

Agents generating tests must:

- Use fixtures whenever possible instead of synthetic logs
- Use synthetic logs only when real fixtures are not practical or would introduce sensitive data
- Follow the ingestion and output specifications
- Avoid introducing new dependencies
- Write deterministic tests (no randomness)
- Avoid guessing expected output; consult the spec
- Request clarification when expected output is ambiguous

5. Running Tests

All tests are executed using pytest:

```
pytest -q
```

Agents must respect any test-runner customizations defined in the repository’s configuration (e.g., commit hooks or CI scripts, if added later).
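
Standard pytest selection options can narrow a run to one module or keyword; the module name below is illustrative, assuming test files are named by category:

```
pytest -q tests/test_ingestion.py   # one test module (name assumed)
pytest -q -k "malformed"            # tests matching a keyword
```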

6. Future Additions

This file will expand as the specification documents (INGESTION.md, OUTPUT.md, DESIGN.md, ARCHITECTURE.md) evolve. Expected additions include:

- Coverage requirements
- Guidelines for testing performance limits
- Integration with CI workflows