Location: .claude/mcp/meta-prompter/
A prompt evaluation tool available as both an MCP server and a standalone CLI. Grades prompts and returns JSON-only analysis across 8 dimensions (clarity, specificity, context, actionability, safety, testability, hallucination prevention, token efficiency) with weighted global scoring.
Key Features:
- Temperature-controlled evaluation (temperature 0 instead of Claude Code's default of 1)
- Machine-readable JSON output for agentic workflows
- Built-in result viewer (eval-viewer.html)
- Support for multiple AI providers (Anthropic, OpenAI)
See meta-prompter README for setup and usage.
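The weighted global score described above can be sketched as a weighted average over the eight dimensions. This is an illustrative sketch only: the dimension weights and the 0-100 score scale are assumptions, not meta-prompter's actual values (see its README for the real scheme).

```javascript
// Hypothetical weights over the 8 dimensions named in this doc.
// The real meta-prompter weights may differ.
const weights = {
  clarity: 0.15, specificity: 0.15, context: 0.1, actionability: 0.15,
  safety: 0.1, testability: 0.1, hallucinationPrevention: 0.15,
  tokenEfficiency: 0.1,
};

// Weighted average of per-dimension scores (assumed 0-100 each),
// rounded to one decimal place.
function globalScore(scores) {
  let total = 0;
  for (const [dim, w] of Object.entries(weights)) {
    total += w * (scores[dim] ?? 0);
  }
  return Math.round(total * 10) / 10;
}

// Example per-dimension grades, as might appear in the JSON output.
const example = {
  clarity: 90, specificity: 80, context: 70, actionability: 85,
  safety: 95, testability: 60, hallucinationPrevention: 75,
  tokenEfficiency: 88,
};
console.log(globalScore(example)); // 80.8
```

Keeping the output as a flat JSON object of dimension scores plus one global number is what makes the tool easy to consume in agentic workflows.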
Location: .claude/commands/
/debug-partner - AI-assisted debugging partner for systematic troubleshooting
- Evidence-based investigation with code-centric traces
- Progressive clarification and hypothesis testing
- Includes specific file names, function names, and line numbers
- Collaborative approach with verification at each step
Location: .claude/statusline/
Context monitoring script (ctx_monitor.js) that displays real-time usage:
- Context window usage percentage (0-200k tokens) with color coding (green/yellow/red)
- Session ID and token consumption tracking
- Model name and usage statistics
- Automatic synthetic message filtering
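The color-coded usage display above can be sketched as a simple threshold check. This is a minimal sketch, not ctx_monitor.js itself: the 70%/90% thresholds are assumed cutoffs for the green/yellow/red bands, and only the 200k-token window comes from this doc.

```javascript
// 200k-token context window, per the description above.
const CONTEXT_LIMIT = 200_000;

// Map token consumption to a usage percentage and a display color.
// Thresholds (70% -> yellow, 90% -> red) are illustrative assumptions.
function usageStatus(tokensUsed) {
  const pct = (tokensUsed / CONTEXT_LIMIT) * 100;
  if (pct < 70) return { pct, color: 'green' };
  if (pct < 90) return { pct, color: 'yellow' };
  return { pct, color: 'red' };
}

console.log(usageStatus(50_000));  // { pct: 25, color: 'green' }
console.log(usageStatus(185_000)); // { pct: 92.5, color: 'red' }
```

A statusline script would render this with ANSI color codes next to the session ID and model name; the real script also filters synthetic messages out of the token count before computing the percentage.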

