This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
@AGENTS.md
- Run core tests: `go test ./... -v`
- Run core tests with race detection: `go test -race -v ./...`
- Run module tests: `for module in modules/*/; do [ -f "$module/go.mod" ] && (cd "$module" && go test ./... -v); done`
- Run module tests with race detection: `for module in modules/*/; do [ -f "$module/go.mod" ] && (cd "$module" && go test -race -v ./...); done`
- Run example tests: `for example in examples/*/; do [ -f "$example/go.mod" ] && (cd "$example" && go test ./... -v); done`
- Run CLI tests: `cd cmd/modcli && go test ./... -v`
- Format code: `go fmt ./...`
- Lint code: `golangci-lint run`
When using the test-runner agent or running comprehensive test verification, always:
- Use race detection: add the `-race` flag to catch race conditions
- Capture full output: don't limit output with `head`/`tail`; analyze complete results
- Look for panic indicators:
- "panic:" strings in output
- "runtime error:" messages
- Stack traces with "runtime.gopanic"
- "WARNING:" messages indicating recovered panics
- Check for systemic failures: Multiple tests failing with same error pattern indicates structural issues
- Verify BDD scenarios: Ensure BDD tests execute logic rather than fail fast with initialization errors
- Calculate pass rates: Provide clear metrics on test health (e.g., "366 passing, 33 failing = 92% pass rate")
- Distinguish infrastructure vs business logic failures: Infrastructure panics/races are critical; business logic test failures are normal development work
- Report improvement trends: Compare current results against previous runs to show progress
- CRITICAL - Zero tolerance for failures: ANY test failure must trigger immediate agent assignment for fixing
- Mandatory escalation: If any failures detected, immediately categorize and assign to specialized agents:
- Race conditions → multi-tenant-specialist or go-module-expert
- Router/service issues → dependency-resolver
- Configuration problems → config-validator
- BDD scenario failures → go-module-expert
- Continuous verification: After any fix, re-run tests to verify resolution and catch any new issues
- Quality gate enforcement: Do not consider work complete until 100% test success is achieved
- BDD-specific requirements: For modules with BDD tests (e.g., reverseproxy), always:
- Run BDD tests specifically: `go test -race -v . -run TestModuleBDD`
- Check the BDD summary line "X scenarios (Y passed, Z failed)": Z must be 0
- Verify no "DATA RACE" warnings appear in output
- If BDD failures found, immediately delegate to go-module-expert or multi-tenant-specialist
- Never accept partial BDD success - all scenarios must pass
- Race condition detection: If "WARNING: DATA RACE" appears anywhere in output:
- This is CRITICAL and must be fixed immediately
- Delegate to multi-tenant-specialist for event/concurrency issues
- Never ignore race conditions as they indicate production safety issues
- Module-specific testing: for each module in `modules/*`, run `cd modules/MODULENAME && go test -race -v ./...`
This is a Go workspace with multiple go.mod files:
- Root: Core framework (application.go, module.go, service registry)
- modules/*/: Each module has its own go.mod (auth, cache, database, etc.)
- examples/*/: Each example has its own go.mod (basic-app, reverse-proxy, etc.)
- cmd/modcli/: CLI tool with its own go.mod
When working in modules or examples, be aware you're in a separate Go module.
The framework uses dependency injection through a service registry:
- Services are registered via the `ProvidesServices()` method
- Services are consumed via the `RequiresServices()` and `Constructor()` pattern
- Both name-based and interface-based service matching are supported
- Struct tags drive validation: `required:"true"`, `default:"value"`, `desc:"description"`
- Custom validation via the `ConfigValidator` interface
- Multi-format support: YAML, JSON, TOML
- Per-application config feeders via `app.SetConfigFeeders()` (preferred over global)
- Context-based tenant propagation via `modular.TenantContext`
- Tenant-aware modules implement the `TenantAwareModule` interface
- Per-tenant configuration isolation
When creating or modifying modules:
- Implement the core `Module` interface
- Use the dependency injection pattern with the service registry
- Follow configuration validation patterns with struct tags
- Write comprehensive tests (unit, integration, BDD where applicable)
- Each module directory has its own go.mod file
- Use `app.SetConfigFeeders()` for test isolation instead of mutating the global `modular.ConfigFeeders`
- Tests can run in parallel when properly isolated
- Each module/example tests independently due to separate go.mod files
Always run this sequence:

```bash
go fmt ./...
golangci-lint run
go test -race -v ./...

# Test modules with race detection
for module in modules/*/; do [ -f "$module/go.mod" ] && (cd "$module" && go test -race -v ./...); done

# Test examples
for example in examples/*/; do [ -f "$example/go.mod" ] && (cd "$example" && go test -race -v ./...); done

# Test CLI
cd cmd/modcli && go test -race -v ./...
```

When tests fail with panics or race conditions:
- Nil map panics: check for uninitialized maps in test contexts; add `make(map[...])` initialization
- Nil pointer dereferences: verify application context and service injection in BDD tests
- Router panics: ensure test routers properly initialize their internal maps
- Race conditions: use the `-race` flag and check for concurrent access to shared data structures
Common fixes:
- Add nil checks before map assignments: `if m == nil { m = make(map[string]string) }`
- Initialize test contexts properly in BDD scenarios
- Use panic recovery for external service calls in tests
- Ensure proper cleanup in test teardown methods