diff --git a/README.md b/README.md index 4aebc7e..c3a7017 100644 --- a/README.md +++ b/README.md @@ -64,7 +64,7 @@ Methodology skills that work in any runtime. Adapted from [obra/superpowers](htt | `skill-conflict-detector` | Detects name shadowing and description-overlap conflicts between installed skills | `detect.py` | | `skill-portability-checker` | Validates OS/binary dependencies in companion scripts; catches non-portable calls | `check.py` | -### OpenClaw-Native (24 skills) +### OpenClaw-Native (28 skills) Skills that require OpenClaw's persistent runtime — cron scheduling, session state, or long-running execution. Not useful in session-based tools. @@ -94,6 +94,10 @@ Skills that require OpenClaw's persistent runtime — cron scheduling, session s | `skill-compatibility-checker` | Checks installed skills against the current OpenClaw version for feature compatibility | — | ✓ | `check.py` | | `heartbeat-governor` | Enforces per-skill execution budgets for cron skills; auto-pauses runaway skills | every hour | ✓ | `governor.py` | | `community-skill-radar` | Scans Reddit for OpenClaw pain points and feature requests; writes prioritized PROPOSALS.md | every 3 days | ✓ | `radar.py` | +| `memory-graph-builder` | Parses MEMORY.md into a knowledge graph; detects duplicates, contradictions, and stale entries; generates compressed digest | daily 10pm | ✓ | `graph.py` | +| `config-encryption-auditor` | Scans config directories for plaintext API keys, tokens, and world-readable permissions | Sundays 9am | ✓ | `audit.py` | +| `tool-description-optimizer` | Scores skill descriptions for trigger quality — clarity, specificity, keyword density — and suggests rewrites | — | ✓ | `optimize.py` | +| `mcp-health-checker` | Monitors MCP server connections for health, latency, and availability; detects stale connections | every 6h | ✓ | `check.py` | ### Community (1 skill) @@ -113,7 +117,7 @@ Stateful skills commit a `STATE_SCHEMA.yaml` defining the shape of their runtime Skills 
marked with a script in the table above ship a small executable alongside their `SKILL.md`: -- **Python scripts** (`run.py`, `audit.py`, `check.py`, `guard.py`, `bridge.py`, `onboard.py`, `sync.py`, `doctor.py`, `loadout.py`, `governor.py`, `detect.py`, `test.py`, `radar.py`) — run directly to manipulate state, generate reports, or trigger actions. No extra dependencies required; `pyyaml` is optional but recommended. +- **Python scripts** (`run.py`, `audit.py`, `check.py`, `guard.py`, `bridge.py`, `onboard.py`, `sync.py`, `doctor.py`, `loadout.py`, `governor.py`, `detect.py`, `test.py`, `radar.py`, `graph.py`, `optimize.py`) — run directly to manipulate state, generate reports, or trigger actions. No extra dependencies required; `pyyaml` is optional but recommended. - **`vet.sh`** — Pure bash scanner; runs on any system with grep. - Each script supports `--help` and prints a human-readable summary. JSON output available where useful (`--format json`). Dry-run mode available on scripts that make changes. - See the `example-state.yaml` in each skill directory for sample state and a commented walkthrough of the skill's cron behaviour. 
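The shared CLI conventions listed above (`--help`, `--format json`, optional dry-run) can be sketched as a minimal argparse skeleton. This is an illustrative pattern only, not any shipped script; the `build_parser` and `report` names and the demo finding shape are this sketch's assumptions:

```python
import argparse
import json


def build_parser() -> argparse.ArgumentParser:
    # Flag conventions shared by the companion scripts (sketch).
    parser = argparse.ArgumentParser(description="Example companion script")
    parser.add_argument("--format", choices=["text", "json"], default="text")
    parser.add_argument("--dry-run", action="store_true",
                        help="report what would change without changing anything")
    return parser


def report(findings: list[dict], fmt: str) -> str:
    # Human-readable summary by default; machine-readable with --format json.
    if fmt == "json":
        return json.dumps({"findings": findings}, indent=2)
    return "\n".join(f"[{f['severity']}] {f['detail']}" for f in findings)
```

A script built on this skeleton would print a one-line-per-finding summary by default and switch to JSON when invoked with `--format json`.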
@@ -122,7 +126,7 @@ Skills marked with a script in the table above ship a small executable alongside ## Security skills at a glance -Five skills address the documented top security risks for OpenClaw agents: +Six skills address the documented top security risks for OpenClaw agents: | Threat | Skill | How | |---|---|---| @@ -131,6 +135,7 @@ Five skills address the documented top security risks for OpenClaw agents: | Agent takes destructive action without confirmation | `dangerous-action-guard` | Pre-execution gate with 5-min expiry window and full audit trail | | Post-install skill tampering or credential injection | `installed-skill-auditor` | Weekly content-hash drift detection; INJECTION / CREDENTIAL / EXFILTRATION checks | | Silent skill loading failures hiding broken skills | `skill-doctor` | 6 diagnostic checks per skill; surfaces every load-time failure before it disappears | +| Plaintext API keys and tokens in config files | `config-encryption-auditor` | Scans for 8 API key patterns + 3 token patterns; auto-fixes permissions; suggests env var migration | --- diff --git a/skills/openclaw-native/config-encryption-auditor/SKILL.md b/skills/openclaw-native/config-encryption-auditor/SKILL.md new file mode 100644 index 0000000..7ee5994 --- /dev/null +++ b/skills/openclaw-native/config-encryption-auditor/SKILL.md @@ -0,0 +1,81 @@ +--- +name: config-encryption-auditor +version: "1.0" +category: openclaw-native +description: Scans OpenClaw config directories for plaintext API keys, tokens, and secrets in unencrypted files — flags exposure risks and suggests encryption or environment variable migration. +stateful: true +cron: "0 9 * * 0" +--- + +# Config Encryption Auditor + +## What it does + +OpenClaw stores configuration in `~/.openclaw/` — API keys, channel tokens, provider credentials. By default, these are plaintext YAML or JSON files readable by any process on your machine. + +OpenLobster solved this with AES-GCM encrypted config files. 
We can't change OpenClaw's config format, but we can audit it — scanning for exposed secrets, flagging unencrypted credential files, and suggesting migrations to environment variables or encrypted vaults. + +## When to invoke + +- Automatically, every Sunday at 9am (cron) +- After initial OpenClaw setup +- Before deploying to shared infrastructure +- After any config change that adds new API keys + +## Checks performed + +| Check | Severity | What it detects | +|---|---|---| +| PLAINTEXT_API_KEY | CRITICAL | API key patterns in config files (sk-, AKIA, ghp_, etc.) | +| PLAINTEXT_TOKEN | HIGH | OAuth tokens, bearer tokens, passwords in config | +| WORLD_READABLE | HIGH | Config files with 644/755 permissions (readable by all users) | +| NO_GITIGNORE | MEDIUM | Config directory not gitignored (risk of committing secrets) | +| ENV_AVAILABLE | INFO | Secret could be migrated to environment variable | + +## How to use + +```bash +python3 audit.py --scan # Full audit +python3 audit.py --scan --critical-only # CRITICAL findings only +python3 audit.py --fix-permissions # chmod 600 on config files +python3 audit.py --suggest-env # Print env var migration guide +python3 audit.py --status # Last audit summary +python3 audit.py --format json +``` + +## Procedure + +**Step 1 — Run the audit** + +```bash +python3 audit.py --scan +``` + +**Step 2 — Fix CRITICAL issues first** + +For each PLAINTEXT_API_KEY finding, migrate the key to an environment variable: + +```bash +# Instead of storing in config.yaml: +# api_key: sk-abc123... +# Use: +export OPENCLAW_API_KEY="sk-abc123..." +``` + +**Step 3 — Fix file permissions** + +```bash +python3 audit.py --fix-permissions +``` + +This sets `chmod 600` on all config files (owner read/write only). + +**Step 4 — Verify gitignore coverage** + +Ensure `~/.openclaw/` or at minimum the config files are in your global `.gitignore`. + +## State + +Audit results and history stored in `~/.openclaw/skill-state/config-encryption-auditor/state.yaml`. 
+ +Fields: `last_audit_at`, `findings`, `files_scanned`, `audit_history`. diff --git a/skills/openclaw-native/config-encryption-auditor/STATE_SCHEMA.yaml b/skills/openclaw-native/config-encryption-auditor/STATE_SCHEMA.yaml new file mode 100644 index 0000000..05bdb54 --- /dev/null +++ b/skills/openclaw-native/config-encryption-auditor/STATE_SCHEMA.yaml @@ -0,0 +1,27 @@ +version: "1.0" +description: Config file audit results — plaintext secrets, permission issues, and migration suggestions. +fields: + last_audit_at: + type: datetime + files_scanned: + type: integer + default: 0 + findings: + type: list + items: + file_path: { type: string } + check: { type: enum, values: [PLAINTEXT_API_KEY, PLAINTEXT_TOKEN, WORLD_READABLE, NO_GITIGNORE, ENV_AVAILABLE] } + severity: { type: enum, values: [CRITICAL, HIGH, MEDIUM, INFO] } + detail: { type: string } + suggestion: { type: string } + detected_at: { type: datetime } + resolved: { type: boolean } + audit_history: + type: list + description: Rolling audit summaries (last 12) + items: + audited_at: { type: datetime } + files_scanned: { type: integer } + critical_count: { type: integer } + high_count: { type: integer } + medium_count: { type: integer } diff --git a/skills/openclaw-native/config-encryption-auditor/audit.py b/skills/openclaw-native/config-encryption-auditor/audit.py new file mode 100755 index 0000000..daf55fe --- /dev/null +++ b/skills/openclaw-native/config-encryption-auditor/audit.py @@ -0,0 +1,300 @@ +#!/usr/bin/env python3 +""" +Config Encryption Auditor for openclaw-superpowers. + +Scans OpenClaw config directories for plaintext API keys, tokens, +and secrets in unencrypted files. 
+ +Usage: + python3 audit.py --scan + python3 audit.py --scan --critical-only + python3 audit.py --fix-permissions + python3 audit.py --suggest-env + python3 audit.py --status + python3 audit.py --format json +""" + +import argparse +import json +import os +import re +import stat +import sys +from datetime import datetime +from pathlib import Path + +try: + import yaml + HAS_YAML = True +except ImportError: + HAS_YAML = False + +OPENCLAW_DIR = Path(os.environ.get("OPENCLAW_HOME", Path.home() / ".openclaw")) +STATE_FILE = OPENCLAW_DIR / "skill-state" / "config-encryption-auditor" / "state.yaml" +MAX_HISTORY = 12 + +# Scan these directories for config files +SCAN_DIRS = [OPENCLAW_DIR] +SCAN_EXTENSIONS = {".yaml", ".yml", ".json", ".toml", ".env", ".conf", ".cfg", ".ini"} + +# ── Secret patterns ─────────────────────────────────────────────────────────── + +API_KEY_PATTERNS = [ + (re.compile(r'sk-[A-Za-z0-9]{20,}'), "OpenAI/Anthropic API key"), + (re.compile(r'AKIA[0-9A-Z]{16}'), "AWS Access Key ID"), + (re.compile(r'(?:ghp|gho|ghu|ghs|ghr)_[A-Za-z0-9]{36}'), "GitHub token"), + (re.compile(r'xoxb-[0-9A-Za-z\-]{50,}'), "Slack bot token"), + (re.compile(r'xoxp-[0-9A-Za-z\-]{50,}'), "Slack user token"), + (re.compile(r'[0-9]+:AA[A-Za-z0-9_-]{33}'), "Telegram bot token"), + (re.compile(r'AIza[0-9A-Za-z_-]{35}'), "Google API key"), + (re.compile(r'sk_live_[0-9a-zA-Z]{24,}'), "Stripe secret key"), +] + +TOKEN_PATTERNS = [ + (re.compile(r'(?:token|secret|password|passwd|pwd|apikey|api_key)\s*[:=]\s*["\']?[A-Za-z0-9_\-\.]{8,}', re.I), + "Generic secret assignment"), + (re.compile(r'Bearer [A-Za-z0-9\-_\.]{20,}'), "Bearer token"), + (re.compile(r'Basic [A-Za-z0-9+/=]{20,}'), "Basic auth credential"), +] + +# Environment variable mapping suggestions +ENV_SUGGESTIONS = { + "anthropic": "OPENCLAW_ANTHROPIC_API_KEY", + "openai": "OPENCLAW_OPENAI_API_KEY", + "slack": "OPENCLAW_SLACK_TOKEN", + "telegram": "OPENCLAW_TELEGRAM_TOKEN", + "discord": "OPENCLAW_DISCORD_TOKEN", + "github": 
"OPENCLAW_GITHUB_TOKEN", + "stripe": "OPENCLAW_STRIPE_KEY", + "aws": "OPENCLAW_AWS_ACCESS_KEY", +} + + +# ── State helpers ───────────────────────────────────────────────────────────── + +def load_state() -> dict: + if not STATE_FILE.exists(): + return {"findings": [], "audit_history": []} + try: + text = STATE_FILE.read_text() + return (yaml.safe_load(text) or {}) if HAS_YAML else {} + except Exception: + return {} + + +def save_state(state: dict) -> None: + STATE_FILE.parent.mkdir(parents=True, exist_ok=True) + if HAS_YAML: + with open(STATE_FILE, "w") as f: + yaml.dump(state, f, default_flow_style=False, allow_unicode=True) + + +# ── Scanning ────────────────────────────────────────────────────────────────── + +def scan_file(filepath: Path) -> list[dict]: + findings = [] + now = datetime.now().isoformat() + rel = str(filepath.relative_to(OPENCLAW_DIR)) if str(filepath).startswith(str(OPENCLAW_DIR)) else str(filepath) + + try: + text = filepath.read_text(errors="replace") + except (PermissionError, OSError): + return findings + + # Check for API keys + for pattern, label in API_KEY_PATTERNS: + if pattern.search(text): + findings.append({ + "file_path": rel, "check": "PLAINTEXT_API_KEY", + "severity": "CRITICAL", + "detail": f"Found {label} pattern in plaintext", + "suggestion": f"Migrate to environment variable or encrypted vault.", + "detected_at": now, "resolved": False, + }) + + # Check for tokens + for pattern, label in TOKEN_PATTERNS: + if pattern.search(text): + findings.append({ + "file_path": rel, "check": "PLAINTEXT_TOKEN", + "severity": "HIGH", + "detail": f"Found {label} pattern in plaintext", + "suggestion": "Use environment variables instead of inline credentials.", + "detected_at": now, "resolved": False, + }) + + # Check file permissions (Unix only) + try: + mode = filepath.stat().st_mode + if mode & stat.S_IROTH or mode & stat.S_IRGRP: + findings.append({ + "file_path": rel, "check": "WORLD_READABLE", + "severity": "HIGH", + "detail": f"File 
permissions {oct(mode)[-3:]} — readable by other users", + "suggestion": "Run: chmod 600 " + str(filepath), + "detected_at": now, "resolved": False, + }) + except (OSError, AttributeError): + pass + + return findings + + +def scan_all(critical_only: bool = False) -> tuple[list, int]: + all_findings = [] + files_scanned = 0 + now = datetime.now().isoformat() + + for scan_dir in SCAN_DIRS: + if not scan_dir.exists(): + continue + for filepath in scan_dir.rglob("*"): + if not filepath.is_file(): + continue + if filepath.suffix not in SCAN_EXTENSIONS: + continue + # Skip skill-state (our own state files) + if "skill-state" in str(filepath): + continue + files_scanned += 1 + findings = scan_file(filepath) + all_findings.extend(findings) + + # Check gitignore + gitignore = Path.home() / ".gitignore" + openclaw_gitignored = False + if gitignore.exists(): + try: + gi_text = gitignore.read_text() + if ".openclaw" in gi_text or "openclaw" in gi_text: + openclaw_gitignored = True + except Exception: + pass + if not openclaw_gitignored: + all_findings.append({ + "file_path": str(OPENCLAW_DIR), "check": "NO_GITIGNORE", + "severity": "MEDIUM", + "detail": "~/.openclaw not found in global .gitignore", + "suggestion": "Add '.openclaw/' to ~/.gitignore to prevent accidental commits.", + "detected_at": now, "resolved": False, + }) + + if critical_only: + all_findings = [f for f in all_findings if f["severity"] == "CRITICAL"] + + return all_findings, files_scanned + + +# ── Commands ────────────────────────────────────────────────────────────────── + +def cmd_scan(state: dict, critical_only: bool, fmt: str) -> None: + findings, files_scanned = scan_all(critical_only) + now = datetime.now().isoformat() + critical = sum(1 for f in findings if f["severity"] == "CRITICAL") + high = sum(1 for f in findings if f["severity"] == "HIGH") + medium = sum(1 for f in findings if f["severity"] == "MEDIUM") + + if fmt == "json": + print(json.dumps({"files_scanned": files_scanned, "findings": 
findings}, indent=2)) + else: + print(f"\nConfig Encryption Audit — {datetime.now().strftime('%Y-%m-%d')}") + print("─" * 50) + print(f" {files_scanned} files scanned | " + f"{critical} CRITICAL | {high} HIGH | {medium} MEDIUM") + print() + if not findings: + print(" ✓ No exposed secrets detected.") + else: + for f in findings: + icon = "✗" if f["severity"] == "CRITICAL" else ("!" if f["severity"] == "HIGH" else "⚠") + print(f" {icon} [{f['severity']}] {f['file_path']}: {f['check']}") + print(f" {f['detail']}") + print(f" → {f['suggestion']}") + print() + + # Persist + history = state.get("audit_history") or [] + history.insert(0, { + "audited_at": now, "files_scanned": files_scanned, + "critical_count": critical, "high_count": high, "medium_count": medium, + }) + state["audit_history"] = history[:MAX_HISTORY] + state["last_audit_at"] = now + state["files_scanned"] = files_scanned + state["findings"] = findings + save_state(state) + sys.exit(1 if critical > 0 else 0) + + +def cmd_fix_permissions(state: dict) -> None: + fixed = 0 + for scan_dir in SCAN_DIRS: + if not scan_dir.exists(): + continue + for filepath in scan_dir.rglob("*"): + if not filepath.is_file() or filepath.suffix not in SCAN_EXTENSIONS: + continue + if "skill-state" in str(filepath): + continue + try: + mode = filepath.stat().st_mode + if mode & stat.S_IROTH or mode & stat.S_IRGRP: + filepath.chmod(0o600) + print(f" ✓ chmod 600: {filepath}") + fixed += 1 + except (OSError, AttributeError): + pass + print(f"\n✓ Fixed permissions on {fixed} files.") + + +def cmd_suggest_env() -> None: + print("\nEnvironment Variable Migration Guide") + print("─" * 48) + print("Replace plaintext credentials in config files with environment variables:\n") + for service, env_var in sorted(ENV_SUGGESTIONS.items()): + print(f" {service:12s} → export {env_var}=\"\"") + print(f"\nAdd these to your shell profile (~/.zshrc, ~/.bashrc) or a .env file.") + print("OpenClaw reads OPENCLAW_* environment variables 
automatically.\n") + + +def cmd_status(state: dict) -> None: + last = state.get("last_audit_at", "never") + print(f"\nConfig Encryption Auditor — Last run: {last}") + history = state.get("audit_history") or [] + if history: + h = history[0] + print(f" {h.get('files_scanned',0)} files | " + f"{h.get('critical_count',0)} CRITICAL | " + f"{h.get('high_count',0)} HIGH | {h.get('medium_count',0)} MEDIUM") + active = [f for f in (state.get("findings") or []) if not f.get("resolved")] + if active: + print(f"\n Unresolved ({len(active)}):") + for f in active[:3]: + print(f" [{f['severity']}] {f['file_path']}: {f['check']}") + print() + + +def main(): + parser = argparse.ArgumentParser(description="Config Encryption Auditor") + group = parser.add_mutually_exclusive_group(required=True) + group.add_argument("--scan", action="store_true") + group.add_argument("--fix-permissions", action="store_true") + group.add_argument("--suggest-env", action="store_true") + group.add_argument("--status", action="store_true") + parser.add_argument("--critical-only", action="store_true") + parser.add_argument("--format", choices=["text", "json"], default="text") + args = parser.parse_args() + + state = load_state() + if args.scan: + cmd_scan(state, args.critical_only, args.format) + elif args.fix_permissions: + cmd_fix_permissions(state) + elif args.suggest_env: + cmd_suggest_env() + elif args.status: + cmd_status(state) + + +if __name__ == "__main__": + main() diff --git a/skills/openclaw-native/config-encryption-auditor/example-state.yaml b/skills/openclaw-native/config-encryption-auditor/example-state.yaml new file mode 100644 index 0000000..d435292 --- /dev/null +++ b/skills/openclaw-native/config-encryption-auditor/example-state.yaml @@ -0,0 +1,76 @@ +# Example runtime state for config-encryption-auditor +last_audit_at: "2026-03-16T09:00:15.332000" +files_scanned: 14 +findings: + - file_path: "config/providers.yaml" + check: PLAINTEXT_API_KEY + severity: CRITICAL + detail: "Found 
OpenAI/Anthropic API key pattern in plaintext" + suggestion: "Migrate to environment variable or encrypted vault." + detected_at: "2026-03-16T09:00:15.000000" + resolved: false + - file_path: "config/integrations.yaml" + check: PLAINTEXT_TOKEN + severity: HIGH + detail: "Found Generic secret assignment pattern in plaintext" + suggestion: "Use environment variables instead of inline credentials." + detected_at: "2026-03-16T09:00:15.100000" + resolved: false + - file_path: "config/providers.yaml" + check: WORLD_READABLE + severity: HIGH + detail: "File permissions 644 — readable by other users" + suggestion: "Run: chmod 600 ~/.openclaw/config/providers.yaml" + detected_at: "2026-03-16T09:00:15.200000" + resolved: false + - file_path: "/Users/you/.openclaw" + check: NO_GITIGNORE + severity: MEDIUM + detail: "~/.openclaw not found in global .gitignore" + suggestion: "Add '.openclaw/' to ~/.gitignore to prevent accidental commits." + detected_at: "2026-03-16T09:00:15.300000" + resolved: false +audit_history: + - audited_at: "2026-03-16T09:00:15.332000" + files_scanned: 14 + critical_count: 1 + high_count: 2 + medium_count: 1 + - audited_at: "2026-03-09T09:00:12.000000" + files_scanned: 12 + critical_count: 0 + high_count: 1 + medium_count: 1 +# ── Walkthrough ────────────────────────────────────────────────────────────── +# Cron runs every Sunday at 9am: python3 audit.py --scan +# +# Config Encryption Audit — 2026-03-16 +# ────────────────────────────────────────────────── +# 14 files scanned | 1 CRITICAL | 2 HIGH | 1 MEDIUM +# +# ✗ [CRITICAL] config/providers.yaml: PLAINTEXT_API_KEY +# Found OpenAI/Anthropic API key pattern in plaintext +# → Migrate to environment variable or encrypted vault. +# +# ! [HIGH] config/integrations.yaml: PLAINTEXT_TOKEN +# Found Generic secret assignment pattern in plaintext +# → Use environment variables instead of inline credentials. +# +# ! 
[HIGH] config/providers.yaml: WORLD_READABLE +# File permissions 644 — readable by other users +# → Run: chmod 600 ~/.openclaw/config/providers.yaml +# +# ⚠ [MEDIUM] /Users/you/.openclaw: NO_GITIGNORE +# ~/.openclaw not found in global .gitignore +# → Add '.openclaw/' to ~/.gitignore to prevent accidental commits. +# +# python3 audit.py --fix-permissions +# ✓ chmod 600: /Users/you/.openclaw/config/providers.yaml +# ✓ Fixed permissions on 1 files. +# +# python3 audit.py --suggest-env +# Environment Variable Migration Guide +# ──────────────────────────────────────────────── +# anthropic → export OPENCLAW_ANTHROPIC_API_KEY="" +# aws → export OPENCLAW_AWS_ACCESS_KEY="" +# ... diff --git a/skills/openclaw-native/mcp-health-checker/SKILL.md b/skills/openclaw-native/mcp-health-checker/SKILL.md new file mode 100644 index 0000000..c66abfe --- /dev/null +++ b/skills/openclaw-native/mcp-health-checker/SKILL.md @@ -0,0 +1,112 @@ +--- +name: mcp-health-checker +version: "1.0" +category: openclaw-native +description: Monitors MCP server connections for health, latency, and availability — detects stale connections, timeouts, and unreachable servers before they cause silent tool failures. +stateful: true +cron: "0 */6 * * *" +--- + +# MCP Health Checker + +## What it does + +MCP (Model Context Protocol) servers are how OpenClaw connects to external tools — but connections go stale silently. A crashed MCP server doesn't throw an error until the agent tries to use it, causing confusing mid-task failures. + +MCP Health Checker proactively monitors all configured MCP connections. It pings servers, measures latency, tracks uptime history, and alerts you before a stale connection causes a problem. + +Inspired by OpenLobster's MCP connection health monitoring and OAuth 2.1+PKCE token refresh tracking. 
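The probe-and-classify loop described above can be sketched as follows for an HTTP-transport server. This is a simplified illustration, not the shipped `check.py`; the `classify` helper and its healthy/degraded thresholds (half the timeout, then the full timeout) are this sketch's assumptions:

```python
import time
import urllib.request


def classify(latency_ms: int, timeout_s: float = 5.0) -> str:
    # Assumed thresholds: healthy below half the timeout, degraded up to
    # the timeout, unreachable beyond it.
    if latency_ms >= timeout_s * 1000:
        return "unreachable"
    if latency_ms >= timeout_s * 500:
        return "degraded"
    return "healthy"


def probe_http(url: str, timeout_s: float = 5.0) -> dict:
    """One health probe against an HTTP-transport MCP server (sketch)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s):
            latency_ms = int((time.monotonic() - start) * 1000)
        return {"status": classify(latency_ms, timeout_s),
                "latency_ms": latency_ms}
    except Exception:
        # Connection refused, DNS failure, or timeout all count as unreachable.
        return {"status": "unreachable",
                "latency_ms": int((time.monotonic() - start) * 1000)}
```

Running such a probe on a schedule and diffing the returned status against the previous record is what turns a one-off ping into the staleness and uptime tracking described above.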
+ +## When to invoke + +- Automatically every 6 hours (cron) — silent background health check +- Manually before starting a task that depends on MCP tools +- When an MCP tool call fails unexpectedly — diagnose the connection +- After restarting MCP servers — verify all connections restored + +## Health checks performed + +| Check | What it tests | Severity on failure | +|---|---|---| +| REACHABLE | Server responds to connection probe | CRITICAL | +| LATENCY | Response time under threshold (default: 5s) | HIGH | +| STALE | Connection age exceeds max (default: 24h) | HIGH | +| TOOL_COUNT | Server exposes expected number of tools | MEDIUM | +| CONFIG_VALID | MCP config entry has required fields | MEDIUM | +| AUTH_EXPIRY | OAuth/API token approaching expiration | HIGH | + +## How to use + +```bash +python3 check.py --ping # Ping all configured MCP servers +python3 check.py --ping --server # Ping a specific server +python3 check.py --ping --timeout 3 # Custom timeout in seconds +python3 check.py --status # Last check summary from state +python3 check.py --history # Show past check results +python3 check.py --config # Validate MCP config entries +python3 check.py --format json # Machine-readable output +``` + +## Cron wakeup behaviour + +Every 6 hours: + +1. Read MCP server configuration from `~/.openclaw/config/` (YAML/JSON) +2. For each configured server: + - Attempt connection probe (TCP or HTTP depending on transport) + - Measure response latency + - Check connection age against staleness threshold + - Verify tool listing matches expected count (if tracked) + - Check auth token expiry (if applicable) +3. Update state with per-server health records +4. Print summary: healthy / degraded / unreachable counts +5. Exit 1 if any CRITICAL findings + +## Procedure + +**Step 1 — Run a health check** + +```bash +python3 check.py --ping +``` + +Review the output. Healthy servers show a green check. Degraded servers show latency warnings. 
Unreachable servers show a critical alert. + +**Step 2 — Diagnose a specific server** + +```bash +python3 check.py --ping --server filesystem +``` + +Detailed output for a single server: latency, last seen, tool count, auth status. + +**Step 3 — Validate configuration** + +```bash +python3 check.py --config +``` + +Checks that all MCP config entries have the required fields (`command`, `args` or `url` depending on transport type). + +**Step 4 — Review history** + +```bash +python3 check.py --history +``` + +Shows uptime trends over the last 20 checks. Spot servers that are intermittently failing. + +## State + +Per-server health records and check history stored in `~/.openclaw/skill-state/mcp-health-checker/state.yaml`. + +Fields: `last_check_at`, `servers` list, `check_history`. + +## Notes + +- Does not modify MCP configuration — read-only monitoring +- Connection probes use the same transport as the MCP server (stdio subprocess spawn or HTTP GET) +- For stdio servers: probes verify the process can start and respond to `initialize` +- For HTTP/SSE servers: probes send a health-check HTTP request +- Latency threshold configurable via `--timeout` (default: 5s) +- Staleness threshold configurable via `--max-age` (default: 24h) diff --git a/skills/openclaw-native/mcp-health-checker/STATE_SCHEMA.yaml b/skills/openclaw-native/mcp-health-checker/STATE_SCHEMA.yaml new file mode 100644 index 0000000..a756c7e --- /dev/null +++ b/skills/openclaw-native/mcp-health-checker/STATE_SCHEMA.yaml @@ -0,0 +1,31 @@ +version: "1.0" +description: MCP server health records, per-server status, and check history. 
+fields: + last_check_at: + type: datetime + servers: + type: list + description: Per-server health status from the most recent check + items: + name: { type: string, description: "Server name from config" } + transport: { type: string, description: "stdio or http" } + status: { type: enum, values: [healthy, degraded, unreachable, unknown] } + latency_ms: { type: integer, description: "Response time in milliseconds" } + last_seen_at: { type: datetime, description: "Last successful probe" } + tool_count: { type: integer, description: "Number of tools exposed" } + findings: + type: list + items: + check: { type: string } + severity: { type: string } + detail: { type: string } + checked_at: { type: datetime } + check_history: + type: list + description: Rolling log of past checks (last 20) + items: + checked_at: { type: datetime } + servers_checked: { type: integer } + healthy: { type: integer } + degraded: { type: integer } + unreachable: { type: integer } diff --git a/skills/openclaw-native/mcp-health-checker/check.py b/skills/openclaw-native/mcp-health-checker/check.py new file mode 100755 index 0000000..57757be --- /dev/null +++ b/skills/openclaw-native/mcp-health-checker/check.py @@ -0,0 +1,514 @@ +#!/usr/bin/env python3 +""" +MCP Health Checker for openclaw-superpowers. + +Monitors MCP server connections for health, latency, and availability. 
+ +Usage: + python3 check.py --ping + python3 check.py --ping --server + python3 check.py --ping --timeout 3 + python3 check.py --config + python3 check.py --status + python3 check.py --history + python3 check.py --format json +""" + +import argparse +import json +import os +import subprocess +import sys +import time +from datetime import datetime, timedelta +from pathlib import Path + +try: + import yaml + HAS_YAML = True +except ImportError: + HAS_YAML = False + +OPENCLAW_DIR = Path(os.environ.get("OPENCLAW_HOME", Path.home() / ".openclaw")) +STATE_FILE = OPENCLAW_DIR / "skill-state" / "mcp-health-checker" / "state.yaml" +MAX_HISTORY = 20 + +# MCP config locations to search +MCP_CONFIG_PATHS = [ + OPENCLAW_DIR / "config" / "mcp.yaml", + OPENCLAW_DIR / "config" / "mcp.json", + OPENCLAW_DIR / "mcp.yaml", + OPENCLAW_DIR / "mcp.json", + Path.home() / ".config" / "openclaw" / "mcp.yaml", + Path.home() / ".config" / "openclaw" / "mcp.json", +] + +DEFAULT_TIMEOUT = 5 # seconds +DEFAULT_MAX_AGE = 24 # hours + + +# ── State helpers ──────────────────────────────────────────────────────────── + +def load_state() -> dict: + if not STATE_FILE.exists(): + return {"servers": [], "check_history": []} + try: + text = STATE_FILE.read_text() + return (yaml.safe_load(text) or {}) if HAS_YAML else {} + except Exception: + return {} + + +def save_state(state: dict) -> None: + STATE_FILE.parent.mkdir(parents=True, exist_ok=True) + if HAS_YAML: + with open(STATE_FILE, "w") as f: + yaml.dump(state, f, default_flow_style=False, allow_unicode=True) + + +# ── MCP config discovery ──────────────────────────────────────────────────── + +def find_mcp_config() -> tuple[Path | None, dict]: + """Find and parse MCP configuration.""" + for config_path in MCP_CONFIG_PATHS: + if not config_path.exists(): + continue + try: + text = config_path.read_text() + if config_path.suffix == ".json": + data = json.loads(text) + elif HAS_YAML: + data = yaml.safe_load(text) or {} + else: + continue + return 
config_path, data + except Exception: + continue + return None, {} + + +def extract_servers(config: dict) -> list[dict]: + """Extract server definitions from MCP config.""" + servers = [] + # Support both flat and nested formats + mcp_servers = config.get("mcpServers") or config.get("servers") or config + if isinstance(mcp_servers, dict): + for name, defn in mcp_servers.items(): + if not isinstance(defn, dict): + continue + transport = "stdio" + if "url" in defn: + transport = "http" + elif "command" in defn: + transport = "stdio" + servers.append({ + "name": name, + "transport": transport, + "command": defn.get("command"), + "args": defn.get("args", []), + "url": defn.get("url"), + "env": defn.get("env", {}), + }) + return servers + + +# ── Health checks ──────────────────────────────────────────────────────────── + +def probe_stdio_server(server: dict, timeout: int) -> dict: + """Probe a stdio MCP server by attempting to start and initialize it.""" + command = server.get("command") + args = server.get("args", []) + if not command: + return { + "status": "unreachable", + "latency_ms": 0, + "findings": [{"check": "CONFIG_VALID", "severity": "MEDIUM", + "detail": "No command specified for stdio server"}], + } + + # Build the initialize JSON-RPC request + init_request = json.dumps({ + "jsonrpc": "2.0", + "id": 1, + "method": "initialize", + "params": { + "protocolVersion": "2024-11-05", + "capabilities": {}, + "clientInfo": {"name": "mcp-health-checker", "version": "1.0"}, + } + }) + "\n" + + start = time.monotonic() + try: + env = os.environ.copy() + env.update(server.get("env", {})) + proc = subprocess.Popen( + [command] + args, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + env=env, + ) + stdout, stderr = proc.communicate( + input=init_request.encode(), + timeout=timeout, + ) + elapsed_ms = int((time.monotonic() - start) * 1000) + + if proc.returncode is not None and proc.returncode != 0 and not stdout: + return { + "status": 
"unreachable", + "latency_ms": elapsed_ms, + "findings": [{"check": "REACHABLE", "severity": "CRITICAL", + "detail": f"Process exited with code {proc.returncode}"}], + } + + # Try to parse response + findings = [] + tool_count = 0 + try: + response = json.loads(stdout.decode().strip().split("\n")[0]) + if "result" in response: + caps = response["result"].get("capabilities", {}) + if "tools" in caps: + tool_count = -1 # Has tools capability but count unknown until list + except (json.JSONDecodeError, IndexError): + findings.append({"check": "REACHABLE", "severity": "HIGH", + "detail": "Server responded but output not valid JSON-RPC"}) + + # Check latency + status = "healthy" + if elapsed_ms > timeout * 1000: + findings.append({"check": "LATENCY", "severity": "HIGH", + "detail": f"Response time {elapsed_ms}ms exceeds {timeout}s threshold"}) + status = "degraded" + elif elapsed_ms > (timeout * 1000) // 2: + findings.append({"check": "LATENCY", "severity": "MEDIUM", + "detail": f"Response time {elapsed_ms}ms approaching threshold"}) + + if findings and status == "healthy": + status = "degraded" + + return { + "status": status, + "latency_ms": elapsed_ms, + "tool_count": tool_count, + "findings": findings, + } + + except subprocess.TimeoutExpired: + elapsed_ms = int((time.monotonic() - start) * 1000) + try: + proc.kill() + proc.wait(timeout=2) + except Exception: + pass + return { + "status": "unreachable", + "latency_ms": elapsed_ms, + "findings": [{"check": "LATENCY", "severity": "CRITICAL", + "detail": f"Server did not respond within {timeout}s"}], + } + except FileNotFoundError: + return { + "status": "unreachable", + "latency_ms": 0, + "findings": [{"check": "REACHABLE", "severity": "CRITICAL", + "detail": f"Command not found: {command}"}], + } + except Exception as e: + return { + "status": "unreachable", + "latency_ms": 0, + "findings": [{"check": "REACHABLE", "severity": "CRITICAL", + "detail": f"Probe failed: {str(e)[:100]}"}], + } + + +def 
probe_http_server(server: dict, timeout: int) -> dict: + """Probe an HTTP/SSE MCP server via HTTP GET.""" + url = server.get("url") + if not url: + return { + "status": "unreachable", + "latency_ms": 0, + "findings": [{"check": "CONFIG_VALID", "severity": "MEDIUM", + "detail": "No URL specified for HTTP server"}], + } + + start = time.monotonic() + try: + import urllib.request + req = urllib.request.Request(url, method="GET") + req.add_header("User-Agent", "mcp-health-checker/1.0") + with urllib.request.urlopen(req, timeout=timeout) as resp: + elapsed_ms = int((time.monotonic() - start) * 1000) + status_code = resp.status + + findings = [] + if status_code >= 400: + findings.append({"check": "REACHABLE", "severity": "CRITICAL", + "detail": f"HTTP {status_code} response"}) + return {"status": "unreachable", "latency_ms": elapsed_ms, "findings": findings} + + if elapsed_ms > timeout * 1000: + findings.append({"check": "LATENCY", "severity": "HIGH", + "detail": f"Response time {elapsed_ms}ms exceeds threshold"}) + + status = "degraded" if findings else "healthy" + return {"status": status, "latency_ms": elapsed_ms, "findings": findings} + + except Exception as e: + elapsed_ms = int((time.monotonic() - start) * 1000) + return { + "status": "unreachable", + "latency_ms": elapsed_ms, + "findings": [{"check": "REACHABLE", "severity": "CRITICAL", + "detail": f"Connection failed: {str(e)[:100]}"}], + } + + +def check_staleness(server_name: str, state: dict, max_age_hours: int) -> list[dict]: + """Check if a server connection is stale based on last seen time.""" + findings = [] + prev_servers = state.get("servers") or [] + for prev in prev_servers: + if prev.get("name") == server_name and prev.get("last_seen_at"): + try: + last_seen = datetime.fromisoformat(prev["last_seen_at"]) + age = datetime.now() - last_seen + if age > timedelta(hours=max_age_hours): + findings.append({ + "check": "STALE", + "severity": "HIGH", + "detail": f"Last successful probe was 
{age.total_seconds()/3600:.1f}h ago " + f"(threshold: {max_age_hours}h)", + }) + except (ValueError, TypeError): + pass + return findings + + +def validate_config_entry(server: dict) -> list[dict]: + """Validate a server config entry has required fields.""" + findings = [] + if server["transport"] == "stdio": + if not server.get("command"): + findings.append({"check": "CONFIG_VALID", "severity": "MEDIUM", + "detail": "Missing 'command' field for stdio server"}) + elif server["transport"] == "http": + if not server.get("url"): + findings.append({"check": "CONFIG_VALID", "severity": "MEDIUM", + "detail": "Missing 'url' field for HTTP server"}) + return findings + + +# ── Commands ───────────────────────────────────────────────────────────────── + +def cmd_ping(state: dict, server_filter: str, timeout: int, max_age: int, fmt: str) -> None: + config_path, config = find_mcp_config() + now = datetime.now().isoformat() + + if not config_path: + print("No MCP configuration found. Searched:") + for p in MCP_CONFIG_PATHS: + print(f" {p}") + print("\nCreate an MCP config to enable health checking.") + sys.exit(1) + + servers = extract_servers(config) + if server_filter: + servers = [s for s in servers if s["name"] == server_filter] + if not servers: + print(f"Error: server '{server_filter}' not found in config.") + sys.exit(1) + + results = [] + healthy = degraded = unreachable = 0 + + for server in servers: + # Probe based on transport + if server["transport"] == "http": + probe = probe_http_server(server, timeout) + else: + probe = probe_stdio_server(server, timeout) + + # Add staleness check + stale_findings = check_staleness(server["name"], state, max_age) + all_findings = probe.get("findings", []) + stale_findings + + # Determine final status + status = probe["status"] + if status == "healthy" and stale_findings: + status = "degraded" + + last_seen = now if status == "healthy" else None + # Preserve previous last_seen if current probe failed + if not last_seen: + for 
prev in (state.get("servers") or []): + if prev.get("name") == server["name"]: + last_seen = prev.get("last_seen_at") + break + + result = { + "name": server["name"], + "transport": server["transport"], + "status": status, + "latency_ms": probe.get("latency_ms", 0), + "last_seen_at": last_seen, + "tool_count": probe.get("tool_count", 0), + "findings": all_findings, + "checked_at": now, + } + results.append(result) + + if status == "healthy": + healthy += 1 + elif status == "degraded": + degraded += 1 + else: + unreachable += 1 + + if fmt == "json": + print(json.dumps({ + "config_path": str(config_path), + "servers_checked": len(results), + "healthy": healthy, "degraded": degraded, "unreachable": unreachable, + "servers": results, + }, indent=2)) + else: + print(f"\nMCP Health Check — {datetime.now().strftime('%Y-%m-%d %H:%M')}") + print("-" * 55) + print(f" Config: {config_path}") + print(f" {len(results)} servers | {healthy} healthy | {degraded} degraded | {unreachable} unreachable") + print() + for r in results: + if r["status"] == "healthy": + icon = "+" + elif r["status"] == "degraded": + icon = "!" 
+ else: + icon = "x" + print(f" {icon} [{r['status'].upper():>11}] {r['name']} ({r['transport']}) — {r['latency_ms']}ms") + for f in r.get("findings", []): + print(f" [{f['severity']}] {f['check']}: {f['detail']}") + print() + + # Persist + state["last_check_at"] = now + state["servers"] = results + history = state.get("check_history") or [] + history.insert(0, { + "checked_at": now, "servers_checked": len(results), + "healthy": healthy, "degraded": degraded, "unreachable": unreachable, + }) + state["check_history"] = history[:MAX_HISTORY] + save_state(state) + + sys.exit(1 if unreachable > 0 else 0) + + +def cmd_config(fmt: str) -> None: + config_path, config = find_mcp_config() + if not config_path: + print("No MCP configuration found.") + sys.exit(1) + + servers = extract_servers(config) + issues = [] + for server in servers: + findings = validate_config_entry(server) + if findings: + issues.append({"server": server["name"], "findings": findings}) + + if fmt == "json": + print(json.dumps({ + "config_path": str(config_path), + "servers": len(servers), + "issues": issues, + }, indent=2)) + else: + print(f"\nMCP Config Validation — {config_path}") + print("-" * 50) + print(f" {len(servers)} servers configured") + print() + if not issues: + print(" All config entries valid.") + else: + for issue in issues: + print(f" ! 
{issue['server']}:") + for f in issue["findings"]: + print(f" [{f['severity']}] {f['detail']}") + print() + for server in servers: + print(f" {server['name']}: transport={server['transport']}", end="") + if server.get("command"): + print(f" cmd={server['command']}", end="") + if server.get("url"): + print(f" url={server['url']}", end="") + print() + + +def cmd_status(state: dict) -> None: + last = state.get("last_check_at", "never") + print(f"\nMCP Health Checker — Last check: {last}") + servers = state.get("servers") or [] + if servers: + healthy = sum(1 for s in servers if s.get("status") == "healthy") + degraded = sum(1 for s in servers if s.get("status") == "degraded") + unreachable = sum(1 for s in servers if s.get("status") == "unreachable") + print(f" {len(servers)} servers | {healthy} healthy | {degraded} degraded | {unreachable} unreachable") + for s in servers: + icon = {"healthy": "+", "degraded": "!", "unreachable": "x"}.get(s.get("status", ""), "?") + print(f" {icon} {s['name']}: {s.get('status', 'unknown')} ({s.get('latency_ms', 0)}ms)") + print() + + +def cmd_history(state: dict, fmt: str) -> None: + history = state.get("check_history") or [] + if fmt == "json": + print(json.dumps({"check_history": history}, indent=2)) + else: + print(f"\nMCP Health Check History") + print("-" * 50) + if not history: + print(" No check history yet.") + else: + for h in history[:10]: + total = h.get("servers_checked", 0) + healthy = h.get("healthy", 0) + degraded = h.get("degraded", 0) + unreachable = h.get("unreachable", 0) + pct = round(healthy / total * 100) if total else 0 + ts = h.get("checked_at", "?")[:16] + bar = "=" * (pct // 10) + "-" * (10 - pct // 10) + print(f" {ts} [{bar}] {pct}% healthy ({healthy}/{total})") + print() + + +def main(): + parser = argparse.ArgumentParser(description="MCP Health Checker") + group = parser.add_mutually_exclusive_group(required=True) + group.add_argument("--ping", action="store_true", help="Ping all configured MCP servers") 
+ group.add_argument("--config", action="store_true", help="Validate MCP config entries") + group.add_argument("--status", action="store_true", help="Last check summary from state") + group.add_argument("--history", action="store_true", help="Show past check results") + parser.add_argument("--server", type=str, metavar="NAME", help="Check a specific server only") + parser.add_argument("--timeout", type=int, default=DEFAULT_TIMEOUT, help="Timeout in seconds (default: 5)") + parser.add_argument("--max-age", type=int, default=DEFAULT_MAX_AGE, help="Max connection age in hours (default: 24)") + parser.add_argument("--format", choices=["text", "json"], default="text") + args = parser.parse_args() + + state = load_state() + if args.ping: + cmd_ping(state, args.server, args.timeout, args.max_age, args.format) + elif args.config: + cmd_config(args.format) + elif args.status: + cmd_status(state) + elif args.history: + cmd_history(state, args.format) + + +if __name__ == "__main__": + main() diff --git a/skills/openclaw-native/mcp-health-checker/example-state.yaml b/skills/openclaw-native/mcp-health-checker/example-state.yaml new file mode 100644 index 0000000..1005202 --- /dev/null +++ b/skills/openclaw-native/mcp-health-checker/example-state.yaml @@ -0,0 +1,86 @@ +# Example runtime state for mcp-health-checker +last_check_at: "2026-03-16T12:00:08.554000" +servers: + - name: filesystem + transport: stdio + status: healthy + latency_ms: 120 + last_seen_at: "2026-03-16T12:00:02.000000" + tool_count: 11 + findings: [] + checked_at: "2026-03-16T12:00:02.000000" + - name: github + transport: stdio + status: healthy + latency_ms: 340 + last_seen_at: "2026-03-16T12:00:04.000000" + tool_count: 18 + findings: [] + checked_at: "2026-03-16T12:00:04.000000" + - name: web-search + transport: http + status: degraded + latency_ms: 4200 + last_seen_at: "2026-03-16T12:00:08.000000" + tool_count: 3 + findings: + - check: LATENCY + severity: MEDIUM + detail: "Response time 4200ms approaching 
threshold" + checked_at: "2026-03-16T12:00:08.000000" + - name: database + transport: stdio + status: unreachable + latency_ms: 0 + last_seen_at: "2026-03-15T06:00:00.000000" + tool_count: 0 + findings: + - check: REACHABLE + severity: CRITICAL + detail: "Command not found: pg-mcp-server" + - check: STALE + severity: HIGH + detail: "Last successful probe was 30.0h ago (threshold: 24h)" + checked_at: "2026-03-16T12:00:08.000000" +check_history: + - checked_at: "2026-03-16T12:00:08.554000" + servers_checked: 4 + healthy: 2 + degraded: 1 + unreachable: 1 + - checked_at: "2026-03-16T06:00:05.000000" + servers_checked: 4 + healthy: 3 + degraded: 1 + unreachable: 0 + - checked_at: "2026-03-16T00:00:04.000000" + servers_checked: 4 + healthy: 4 + degraded: 0 + unreachable: 0 +# ── Walkthrough ────────────────────────────────────────────────────────────── +# Cron runs every 6 hours: python3 check.py --ping +# +# MCP Health Check — 2026-03-16 12:00 +# ─────────────────────────────────────────────────────── +# Config: /Users/you/.openclaw/config/mcp.yaml +# 4 servers | 2 healthy | 1 degraded | 1 unreachable +# +# + [ HEALTHY] filesystem (stdio) — 120ms +# +# + [ HEALTHY] github (stdio) — 340ms +# +# ! 
[ DEGRADED] web-search (http) — 4200ms +# [MEDIUM] LATENCY: Response time 4200ms approaching threshold +# +# x [UNREACHABLE] database (stdio) — 0ms +# [CRITICAL] REACHABLE: Command not found: pg-mcp-server +# [HIGH] STALE: Last successful probe was 30.0h ago (threshold: 24h) +# +# python3 check.py --history +# +# MCP Health Check History +# ────────────────────────────────────────────────── +# 2026-03-16T12:00 [=====-----] 50% healthy (2/4) +# 2026-03-16T06:00 [=======---] 75% healthy (3/4) +# 2026-03-16T00:00 [==========] 100% healthy (4/4) diff --git a/skills/openclaw-native/tool-description-optimizer/SKILL.md b/skills/openclaw-native/tool-description-optimizer/SKILL.md new file mode 100644 index 0000000..75011c6 --- /dev/null +++ b/skills/openclaw-native/tool-description-optimizer/SKILL.md @@ -0,0 +1,109 @@ +--- +name: tool-description-optimizer +version: "1.0" +category: openclaw-native +description: Analyzes skill descriptions for trigger quality — scores clarity, keyword density, and specificity, then suggests rewrites that improve discovery accuracy. +stateful: true +--- + +# Tool Description Optimizer + +## What it does + +A skill's description is its only discovery mechanism. If the description is vague, overlapping, or keyword-poor, the agent won't trigger it — or worse, will trigger the wrong skill. Tool Description Optimizer analyzes every installed skill's description for trigger quality and suggests concrete rewrites. + +Inspired by OpenLobster's tool-description scoring layer, which penalizes vague descriptions and rewards keyword-rich, action-specific ones. 
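
Two of the metrics this skill computes are simple closed-form calculations: uniqueness is Jaccard similarity between token sets, and length is scored on a bell curve centered at 25 words (both are stated in the Notes section). A minimal sketch on the same 0–10 scale, condensed from the helpers in `optimize.py` (here `tokenize` returns a set directly, rather than a list as in the script):

```python
import math
import re


def tokenize(text: str) -> set[str]:
    # Lowercase word tokens; punctuation and case are ignored
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def jaccard(a: set[str], b: set[str]) -> float:
    # Token-set overlap: 0.0 = disjoint, 1.0 = identical
    union = a | b
    return len(a & b) / len(union) if union else 1.0


def length_score(word_count: int, optimal: int = 25, sigma: int = 10) -> float:
    # Gaussian falloff from the optimal word count, scaled to 0-10
    z = (word_count - optimal) / sigma
    return 10.0 * math.exp(-0.5 * z * z)
```

A 25-word description scores the full 10.0 on length while a 5-word one drops to about 1.4; uniqueness is then `10 * (1 - max Jaccard similarity)` against every other installed description, so near-duplicate descriptions score near 0.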
+
+## When to invoke
+
+- After installing new skills — check if descriptions are trigger-ready
+- When a skill isn't firing when expected — diagnose whether the description is the problem
+- Periodically to audit all descriptions for quality drift
+- Before publishing a skill — polish the description for discoverability
+
+## How it works
+
+### Scoring dimensions (5 metrics, 0–10 each)
+
+| Metric | What it measures | Weight |
+|---|---|---|
+| Clarity | Single clear purpose, no ambiguity | 2x |
+| Specificity | Action verbs, concrete nouns vs. vague terms | 2x |
+| Keyword density | Trigger-relevant keywords per sentence | 1.5x |
+| Uniqueness | Low overlap with other installed skill descriptions | 1.5x |
+| Length | Optimal range (15–40 words) — too short = vague, too long = diluted | 1x |
+
+### Quality grades
+
+| Grade | Score range | Meaning |
+|---|---|---|
+| A | 8.0–10.0 | Excellent — high trigger accuracy expected |
+| B | 6.0–7.9 | Good — minor improvements possible |
+| C | 4.0–5.9 | Fair — likely to miss triggers or overlap |
+| D | 2.0–3.9 | Poor — needs rewrite |
+| F | 0.0–1.9 | Failing — will not trigger reliably |
+
+## How to use
+
+```bash
+python3 optimize.py --scan                        # Score all installed skills
+python3 optimize.py --scan --grade C              # Only show skills graded C or below
+python3 optimize.py --skill NAME                  # Deep analysis of a single skill
+python3 optimize.py --suggest NAME                # Generate rewrite suggestions
+python3 optimize.py --compare "desc A" "desc B"   # Compare two descriptions
+python3 optimize.py --status                      # Last scan summary
+python3 optimize.py --format json                 # Machine-readable output
+```
+
+## Procedure
+
+**Step 1 — Run a full scan**
+
+```bash
+python3 optimize.py --scan
+```
+
+Review the scorecard. Focus on skills graded C or below — these are the ones most likely to cause trigger failures.
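
The five metric scores combine into the composite via the weights in the table above (total weight 8.0), and the composite then maps onto the grade bands from the Quality grades table. This mirrors `compute_overall` and `get_grade` in `optimize.py`:

```python
GRADE_THRESHOLDS = [(8.0, "A"), (6.0, "B"), (4.0, "C"), (2.0, "D"), (0.0, "F")]


def compute_overall(clarity: float, specificity: float, keyword_density: float,
                    uniqueness: float, length_score: float) -> float:
    # Weighted mean: clarity/specificity count 2x, keyword density
    # and uniqueness 1.5x, length 1x (total weight 8.0)
    weighted = (clarity * 2.0 + specificity * 2.0
                + keyword_density * 1.5 + uniqueness * 1.5
                + length_score * 1.0)
    return round(weighted / 8.0, 1)


def get_grade(score: float) -> str:
    # First threshold the score clears wins
    for threshold, grade in GRADE_THRESHOLDS:
        if score >= threshold:
            return grade
    return "F"
```

Because length carries the lowest weight, 9s on the first four metrics with a 3 on length still yields 8.2 (grade A): a weak length score alone cannot sink an otherwise strong description.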
+ +**Step 2 — Get rewrite suggestions for low-scoring skills** + +```bash +python3 optimize.py --suggest +``` + +The optimizer generates 2–3 alternative descriptions with predicted score improvements. + +**Step 3 — Compare alternatives** + +```bash +python3 optimize.py --compare "original description" "suggested rewrite" +``` + +Side-by-side scoring shows exactly which metrics improved. + +**Step 4 — Apply the best rewrite** + +Edit the skill's `SKILL.md` frontmatter `description:` field with the chosen rewrite. + +## Vague word penalties + +These words score 0 on specificity — they say nothing actionable: + +`helps`, `manages`, `handles`, `deals with`, `works with`, `does stuff`, `various`, `things`, `general`, `misc`, `utility`, `tool for`, `assistant for` + +## Strong trigger keywords (examples) + +`scans`, `detects`, `validates`, `generates`, `audits`, `monitors`, `checks`, `reports`, `fixes`, `migrates`, `syncs`, `schedules`, `blocks`, `scores`, `diagnoses` + +## State + +Scan results and per-skill scores stored in `~/.openclaw/skill-state/tool-description-optimizer/state.yaml`. + +Fields: `last_scan_at`, `skill_scores` list, `scan_history`. + +## Notes + +- Does not modify any skill files — analysis and suggestions only +- Uniqueness scoring uses Jaccard similarity against all other installed descriptions +- Length scoring uses a bell curve centered at 25 words (optimal) +- Rewrite suggestions are heuristic-based, not LLM-generated — deterministic and fast diff --git a/skills/openclaw-native/tool-description-optimizer/STATE_SCHEMA.yaml b/skills/openclaw-native/tool-description-optimizer/STATE_SCHEMA.yaml new file mode 100644 index 0000000..2a61879 --- /dev/null +++ b/skills/openclaw-native/tool-description-optimizer/STATE_SCHEMA.yaml @@ -0,0 +1,27 @@ +version: "1.0" +description: Tool description quality scores, rewrite suggestions, and scan history. 
+fields: + last_scan_at: + type: datetime + skill_scores: + type: list + description: Per-skill quality scores from the most recent scan + items: + skill_name: { type: string } + description: { type: string } + word_count: { type: integer } + clarity: { type: float, description: "0-10 clarity score" } + specificity: { type: float, description: "0-10 specificity score" } + keyword_density: { type: float, description: "0-10 keyword density score" } + uniqueness: { type: float, description: "0-10 uniqueness vs other skills" } + length_score: { type: float, description: "0-10 length optimality score" } + overall: { type: float, description: "Weighted composite score" } + grade: { type: string, description: "A/B/C/D/F" } + scan_history: + type: list + description: Rolling log of past scans (last 20) + items: + scanned_at: { type: datetime } + skills_scanned: { type: integer } + avg_score: { type: float } + grade_distribution: { type: object, description: "Count per grade: A, B, C, D, F" } diff --git a/skills/openclaw-native/tool-description-optimizer/example-state.yaml b/skills/openclaw-native/tool-description-optimizer/example-state.yaml new file mode 100644 index 0000000..2760fd7 --- /dev/null +++ b/skills/openclaw-native/tool-description-optimizer/example-state.yaml @@ -0,0 +1,94 @@ +# Example runtime state for tool-description-optimizer +last_scan_at: "2026-03-16T14:00:05.221000" +skill_scores: + - skill_name: using-superpowers + description: "Bootstrap — teaches the agent how to find and invoke skills" + word_count: 11 + clarity: 7.2 + specificity: 3.8 + keyword_density: 3.3 + uniqueness: 8.1 + length_score: 4.8 + overall: 5.6 + grade: C + - skill_name: config-encryption-auditor + description: "Scans OpenClaw config directories for plaintext API keys, tokens, and secrets in unencrypted files." 
+ word_count: 15 + clarity: 9.2 + specificity: 8.5 + keyword_density: 8.0 + uniqueness: 9.0 + length_score: 7.5 + overall: 8.5 + grade: A + - skill_name: memory-graph-builder + description: "Parses MEMORY.md into a knowledge graph with typed relationships, detects duplicates and contradictions, and generates a compressed memory digest." + word_count: 22 + clarity: 8.8 + specificity: 7.6 + keyword_density: 7.2 + uniqueness: 9.4 + length_score: 9.5 + overall: 8.5 + grade: A +scan_history: + - scanned_at: "2026-03-16T14:00:05.221000" + skills_scanned: 40 + avg_score: 7.2 + grade_distribution: + A: 18 + B: 14 + C: 6 + D: 2 + F: 0 + - scanned_at: "2026-03-13T14:00:00.000000" + skills_scanned: 36 + avg_score: 6.8 + grade_distribution: + A: 14 + B: 12 + C: 7 + D: 3 + F: 0 +# ── Walkthrough ────────────────────────────────────────────────────────────── +# python3 optimize.py --scan +# +# Tool Description Quality Scan — 2026-03-16 +# ──────────────────────────────────────────────────────────── +# 40 skills scanned | avg score: 7.2 +# Grades: 18xA 14xB 6xC 2xD 0xF +# +# ! [D] 3.8 — some-vague-skill +# clarity=2.0 spec=1.5 kw=1.2 uniq=8.0 len=6.5 +# "A helpful utility tool that manages various things..." +# +# ~ [C] 5.6 — using-superpowers +# clarity=7.2 spec=3.8 kw=3.3 uniq=8.1 len=4.8 +# "Bootstrap — teaches the agent how to find and invoke skills" +# +# python3 optimize.py --suggest using-superpowers +# +# Rewrite Suggestions: using-superpowers +# ────────────────────────────────────────────────── +# Current: "Bootstrap — teaches the agent how to find and invoke skills" +# Score: 5.6 (C) +# +# 1. 
Front-load action verb +# "Teaches the agent how to discover, invoke, and chain installed skills" +# Predicted: 7.4 (B) [+1.8] +# +# python3 optimize.py --compare "A tool that helps manage stuff" "Scans config files for plaintext secrets and suggests env var migration" +# +# Description Comparison +# ────────────────────────────────────────────────── +# A: "A tool that helps manage stuff" +# B: "Scans config files for plaintext secrets and suggests env var migration" +# +# Clarity A=2.0 B=9.5 B +# Specificity A=0.0 B=8.5 B +# Keywords A=0.0 B=7.8 B +# Uniqueness A=7.0 B=7.0 = +# Length A=5.2 B=8.8 B +# OVERALL A=2.8 B=8.4 B +# +# Grade: A=D B=A diff --git a/skills/openclaw-native/tool-description-optimizer/optimize.py b/skills/openclaw-native/tool-description-optimizer/optimize.py new file mode 100755 index 0000000..576ba76 --- /dev/null +++ b/skills/openclaw-native/tool-description-optimizer/optimize.py @@ -0,0 +1,549 @@ +#!/usr/bin/env python3 +""" +Tool Description Optimizer for openclaw-superpowers. + +Scores skill descriptions for trigger quality and suggests rewrites. 
+ +Usage: + python3 optimize.py --scan + python3 optimize.py --scan --grade C + python3 optimize.py --skill + python3 optimize.py --suggest + python3 optimize.py --compare "desc A" "desc B" + python3 optimize.py --status + python3 optimize.py --format json +""" + +import argparse +import json +import math +import os +import re +import sys +from datetime import datetime +from pathlib import Path + +try: + import yaml + HAS_YAML = True +except ImportError: + HAS_YAML = False + +OPENCLAW_DIR = Path(os.environ.get("OPENCLAW_HOME", Path.home() / ".openclaw")) +STATE_FILE = OPENCLAW_DIR / "skill-state" / "tool-description-optimizer" / "state.yaml" +MAX_HISTORY = 20 + +# Skill directories to scan +SKILL_DIRS = [ + Path(__file__).resolve().parent.parent.parent, # repo skills/ root +] + +# ── Scoring constants ──────────────────────────────────────────────────────── + +VAGUE_WORDS = { + "helps", "manages", "handles", "deals", "works", "does", "stuff", + "various", "things", "general", "misc", "miscellaneous", "utility", + "tool", "assistant", "helper", "processor", "handler", "manager", + "simple", "basic", "easy", "nice", "good", "great", +} + +STRONG_VERBS = { + "scans", "detects", "validates", "generates", "audits", "monitors", + "checks", "reports", "fixes", "migrates", "syncs", "schedules", + "blocks", "scores", "diagnoses", "parses", "extracts", "compiles", + "compacts", "deduplicates", "prunes", "enforces", "breaks", "chains", + "writes", "creates", "builds", "searches", "filters", "tracks", + "prevents", "recovers", "resumes", "verifies", "tests", "measures", +} + +STRONG_NOUNS = { + "api", "key", "token", "secret", "credential", "permission", + "cron", "schedule", "context", "memory", "state", "schema", + "skill", "agent", "session", "task", "workflow", "budget", + "injection", "drift", "conflict", "error", "failure", "loop", + "graph", "node", "edge", "digest", "report", "proposal", + "reddit", "github", "slack", "config", "yaml", "json", +} + +OPTIMAL_LENGTH = 25 
# words +LENGTH_SIGMA = 10 # std dev for bell curve + +GRADE_THRESHOLDS = [ + (8.0, "A"), (6.0, "B"), (4.0, "C"), (2.0, "D"), (0.0, "F"), +] + + +# ── State helpers ──────────────────────────────────────────────────────────── + +def load_state() -> dict: + if not STATE_FILE.exists(): + return {"skill_scores": [], "scan_history": []} + try: + text = STATE_FILE.read_text() + return (yaml.safe_load(text) or {}) if HAS_YAML else {} + except Exception: + return {} + + +def save_state(state: dict) -> None: + STATE_FILE.parent.mkdir(parents=True, exist_ok=True) + if HAS_YAML: + with open(STATE_FILE, "w") as f: + yaml.dump(state, f, default_flow_style=False, allow_unicode=True) + + +# ── Skill discovery ────────────────────────────────────────────────────────── + +def discover_skills() -> list[dict]: + """Find all installed skills and extract their descriptions.""" + skills = [] + for skill_root in SKILL_DIRS: + if not skill_root.exists(): + continue + for category_dir in sorted(skill_root.iterdir()): + if not category_dir.is_dir(): + continue + for skill_dir in sorted(category_dir.iterdir()): + skill_md = skill_dir / "SKILL.md" + if not skill_md.exists(): + continue + desc = extract_description(skill_md) + if desc: + skills.append({ + "name": skill_dir.name, + "category": category_dir.name, + "description": desc, + }) + return skills + + +def extract_description(skill_md: Path) -> str: + """Extract description from SKILL.md frontmatter.""" + try: + text = skill_md.read_text() + except (PermissionError, OSError): + return "" + # Parse YAML frontmatter + if not text.startswith("---"): + return "" + end = text.find("---", 3) + if end == -1: + return "" + frontmatter = text[3:end].strip() + for line in frontmatter.split("\n"): + if line.startswith("description:"): + desc = line[len("description:"):].strip().strip("\"'") + return desc + return "" + + +# ── Scoring ────────────────────────────────────────────────────────────────── + +def tokenize(text: str) -> list[str]: + 
return re.findall(r'[a-z0-9]+', text.lower()) + + +def jaccard(a: set, b: set) -> float: + if not a and not b: + return 1.0 + inter = len(a & b) + union = len(a | b) + return inter / union if union > 0 else 0.0 + + +def score_clarity(tokens: list[str]) -> float: + """Score clarity: penalize vague words, reward single clear purpose.""" + if not tokens: + return 0.0 + vague_count = sum(1 for t in tokens if t in VAGUE_WORDS) + vague_ratio = vague_count / len(tokens) + # Penalize heavily for high vague ratio + score = 10.0 * (1.0 - vague_ratio * 2.5) + # Bonus for having a verb early (signals clear purpose) + for t in tokens[:5]: + if t in STRONG_VERBS: + score += 1.0 + break + return max(0.0, min(10.0, score)) + + +def score_specificity(tokens: list[str]) -> float: + """Score specificity: strong verbs and concrete nouns.""" + if not tokens: + return 0.0 + verb_count = sum(1 for t in tokens if t in STRONG_VERBS) + noun_count = sum(1 for t in tokens if t in STRONG_NOUNS) + strong_ratio = (verb_count + noun_count) / len(tokens) + score = min(10.0, strong_ratio * 25.0) + return max(0.0, score) + + +def score_keyword_density(tokens: list[str]) -> float: + """Score keyword density: trigger-relevant terms per token.""" + if not tokens: + return 0.0 + all_keywords = STRONG_VERBS | STRONG_NOUNS + keyword_count = sum(1 for t in tokens if t in all_keywords) + density = keyword_count / len(tokens) + score = min(10.0, density * 30.0) + return max(0.0, score) + + +def score_uniqueness(tokens_set: set, all_other_sets: list[set]) -> float: + """Score uniqueness: low Jaccard similarity to other descriptions.""" + if not all_other_sets: + return 10.0 + max_sim = max(jaccard(tokens_set, other) for other in all_other_sets) + # 0.0 similarity = 10.0 score, 1.0 similarity = 0.0 score + score = 10.0 * (1.0 - max_sim) + return max(0.0, min(10.0, score)) + + +def score_length(word_count: int) -> float: + """Score length: bell curve centered on OPTIMAL_LENGTH.""" + z = (word_count - 
OPTIMAL_LENGTH) / LENGTH_SIGMA + score = 10.0 * math.exp(-0.5 * z * z) + return max(0.0, min(10.0, score)) + + +def compute_overall(clarity, specificity, keyword_density, uniqueness, length_score) -> float: + """Weighted composite score.""" + weighted = ( + clarity * 2.0 + + specificity * 2.0 + + keyword_density * 1.5 + + uniqueness * 1.5 + + length_score * 1.0 + ) + total_weight = 2.0 + 2.0 + 1.5 + 1.5 + 1.0 + return round(weighted / total_weight, 1) + + +def get_grade(score: float) -> str: + for threshold, grade in GRADE_THRESHOLDS: + if score >= threshold: + return grade + return "F" + + +def score_description(desc: str, all_other_descs: list[str]) -> dict: + """Full scoring of a single description.""" + tokens = tokenize(desc) + tokens_set = set(tokens) + other_sets = [set(tokenize(d)) for d in all_other_descs] + word_count = len(desc.split()) + + clarity = round(score_clarity(tokens), 1) + specificity = round(score_specificity(tokens), 1) + keyword_density = round(score_keyword_density(tokens), 1) + uniqueness = round(score_uniqueness(tokens_set, other_sets), 1) + length = round(score_length(word_count), 1) + overall = compute_overall(clarity, specificity, keyword_density, uniqueness, length) + grade = get_grade(overall) + + return { + "word_count": word_count, + "clarity": clarity, + "specificity": specificity, + "keyword_density": keyword_density, + "uniqueness": uniqueness, + "length_score": length, + "overall": overall, + "grade": grade, + } + + +# ── Suggestion engine ──────────────────────────────────────────────────────── + +def suggest_rewrites(name: str, desc: str, all_other_descs: list[str]) -> list[dict]: + """Generate 2-3 rewrite suggestions with predicted improvements.""" + suggestions = [] + tokens = tokenize(desc) + words = desc.split() + + # Strategy 1: Replace vague words with strong verbs + rewrite1_words = [] + replacements = { + "helps": "assists", "manages": "tracks", "handles": "processes", + "deals": "resolves", "works": "operates", 
"does": "executes", + } + changed = False + for w in words: + low = w.lower().rstrip(".,;:") + if low in VAGUE_WORDS and low in replacements: + rewrite1_words.append(replacements[low]) + changed = True + else: + rewrite1_words.append(w) + if changed: + rewrite1 = " ".join(rewrite1_words) + s1 = score_description(rewrite1, all_other_descs) + suggestions.append({ + "strategy": "Replace vague words", + "rewrite": rewrite1, + "predicted_score": s1["overall"], + "predicted_grade": s1["grade"], + }) + + # Strategy 2: Trim to optimal length if too long + if len(words) > 40: + trimmed = " ".join(words[:35]) + if not trimmed.endswith("."): + trimmed += "." + s2 = score_description(trimmed, all_other_descs) + suggestions.append({ + "strategy": "Trim to optimal length", + "rewrite": trimmed, + "predicted_score": s2["overall"], + "predicted_grade": s2["grade"], + }) + + # Strategy 3: Front-load with action verb if none in first 3 words + first_tokens = tokenize(" ".join(words[:3])) + has_verb = any(t in STRONG_VERBS for t in first_tokens) + if not has_verb: + # Try to extract the main verb from the description + for t in tokens: + if t in STRONG_VERBS: + verb = t.capitalize() + "s" + rewrite3 = f"{verb} {desc[0].lower()}{desc[1:]}" + s3 = score_description(rewrite3, all_other_descs) + suggestions.append({ + "strategy": "Front-load action verb", + "rewrite": rewrite3, + "predicted_score": s3["overall"], + "predicted_grade": s3["grade"], + }) + break + + if not suggestions: + suggestions.append({ + "strategy": "No automatic rewrites — description already scores well", + "rewrite": desc, + "predicted_score": score_description(desc, all_other_descs)["overall"], + "predicted_grade": score_description(desc, all_other_descs)["grade"], + }) + + return suggestions + + +# ── Commands ───────────────────────────────────────────────────────────────── + +def cmd_scan(state: dict, grade_filter: str, fmt: str) -> None: + skills = discover_skills() + now = datetime.now().isoformat() + 
all_descs = [s["description"] for s in skills] + results = [] + + for i, skill in enumerate(skills): + other_descs = all_descs[:i] + all_descs[i+1:] + scores = score_description(skill["description"], other_descs) + scores["skill_name"] = skill["name"] + scores["description"] = skill["description"] + results.append(scores) + + # Sort by overall score ascending (worst first) + results.sort(key=lambda r: r["overall"]) + + # Apply grade filter + if grade_filter: + grade_order = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4} + cutoff = grade_order.get(grade_filter.upper(), 2) + results = [r for r in results if grade_order.get(r["grade"], 0) <= cutoff] + + # Grade distribution + dist = {"A": 0, "B": 0, "C": 0, "D": 0, "F": 0} + all_results = [] + for i, skill in enumerate(skills): + other_descs = all_descs[:i] + all_descs[i+1:] + scores = score_description(skill["description"], other_descs) + dist[scores["grade"]] = dist.get(scores["grade"], 0) + 1 + scores["skill_name"] = skill["name"] + scores["description"] = skill["description"] + all_results.append(scores) + + avg_score = round(sum(r["overall"] for r in all_results) / len(all_results), 1) if all_results else 0.0 + + if fmt == "json": + print(json.dumps({"skills_scanned": len(skills), "results": results, "avg_score": avg_score, "grades": dist}, indent=2)) + else: + print(f"\nTool Description Quality Scan — {datetime.now().strftime('%Y-%m-%d')}") + print("-" * 60) + print(f" {len(skills)} skills scanned | avg score: {avg_score}") + print(f" Grades: {dist['A']}xA {dist['B']}xB {dist['C']}xC {dist['D']}xD {dist['F']}xF") + print() + if not results: + print(" All skills above grade threshold.") + else: + for r in results: + icon = {"A": "+", "B": "+", "C": "~", "D": "!", "F": "x"} + print(f" {icon.get(r['grade'], '?')} [{r['grade']}] {r['overall']:>4} — {r['skill_name']}") + print(f" clarity={r['clarity']} spec={r['specificity']} kw={r['keyword_density']} " + f"uniq={r['uniqueness']} len={r['length_score']}") + # Truncate 
description for display + desc = r["description"] + if len(desc) > 80: + desc = desc[:77] + "..." + print(f" \"{desc}\"") + print() + + # Persist + state["last_scan_at"] = now + state["skill_scores"] = all_results + history = state.get("scan_history") or [] + history.insert(0, { + "scanned_at": now, "skills_scanned": len(skills), + "avg_score": avg_score, "grade_distribution": dist, + }) + state["scan_history"] = history[:MAX_HISTORY] + save_state(state) + + +def cmd_skill(state: dict, name: str, fmt: str) -> None: + skills = discover_skills() + target = None + for s in skills: + if s["name"] == name: + target = s + break + if not target: + print(f"Error: skill '{name}' not found.") + sys.exit(1) + + all_descs = [s["description"] for s in skills if s["name"] != name] + scores = score_description(target["description"], all_descs) + + if fmt == "json": + scores["skill_name"] = name + scores["description"] = target["description"] + print(json.dumps(scores, indent=2)) + else: + print(f"\nDeep Analysis: {name}") + print("-" * 50) + print(f" Description: \"{target['description']}\"") + print(f" Word count: {scores['word_count']}") + print() + print(f" Clarity: {scores['clarity']:>4}/10 {'||' * int(scores['clarity'])}") + print(f" Specificity: {scores['specificity']:>4}/10 {'||' * int(scores['specificity'])}") + print(f" Keyword density: {scores['keyword_density']:>4}/10 {'||' * int(scores['keyword_density'])}") + print(f" Uniqueness: {scores['uniqueness']:>4}/10 {'||' * int(scores['uniqueness'])}") + print(f" Length score: {scores['length_score']:>4}/10 {'||' * int(scores['length_score'])}") + print(f" ─────────────────────────") + print(f" Overall: {scores['overall']:>4}/10 Grade: {scores['grade']}") + print() + + # Show vague words found + tokens = tokenize(target["description"]) + vague_found = [t for t in tokens if t in VAGUE_WORDS] + if vague_found: + print(f" Vague words: {', '.join(set(vague_found))}") + + strong_found = [t for t in tokens if t in STRONG_VERBS | 
STRONG_NOUNS] + if strong_found: + print(f" Strong keywords: {', '.join(set(strong_found))}") + print() + + +def cmd_suggest(state: dict, name: str, fmt: str) -> None: + skills = discover_skills() + target = None + for s in skills: + if s["name"] == name: + target = s + break + if not target: + print(f"Error: skill '{name}' not found.") + sys.exit(1) + + all_descs = [s["description"] for s in skills if s["name"] != name] + current = score_description(target["description"], all_descs) + suggestions = suggest_rewrites(name, target["description"], all_descs) + + if fmt == "json": + print(json.dumps({"skill": name, "current_score": current["overall"], + "current_grade": current["grade"], "suggestions": suggestions}, indent=2)) + else: + print(f"\nRewrite Suggestions: {name}") + print("-" * 50) + print(f" Current: \"{target['description']}\"") + print(f" Score: {current['overall']} ({current['grade']})") + print() + for i, s in enumerate(suggestions, 1): + delta = s["predicted_score"] - current["overall"] + arrow = "+" if delta > 0 else "" + print(f" {i}. 
{s['strategy']}") + print(f" \"{s['rewrite']}\"") + print(f" Predicted: {s['predicted_score']} ({s['predicted_grade']}) [{arrow}{delta}]") + print() + + +def cmd_compare(desc_a: str, desc_b: str, fmt: str) -> None: + scores_a = score_description(desc_a, [desc_b]) + scores_b = score_description(desc_b, [desc_a]) + + if fmt == "json": + print(json.dumps({"a": {"description": desc_a, **scores_a}, + "b": {"description": desc_b, **scores_b}}, indent=2)) + else: + print(f"\nDescription Comparison") + print("-" * 50) + print(f" A: \"{desc_a}\"") + print(f" B: \"{desc_b}\"") + print() + metrics = ["clarity", "specificity", "keyword_density", "uniqueness", "length_score", "overall"] + labels = ["Clarity", "Specificity", "Keywords", "Uniqueness", "Length", "OVERALL"] + for label, metric in zip(labels, metrics): + va = scores_a[metric] + vb = scores_b[metric] + winner = "A" if va > vb else ("B" if vb > va else "=") + print(f" {label:12s} A={va:<5} B={vb:<5} {winner}") + print(f"\n Grade: A={scores_a['grade']} B={scores_b['grade']}") + print() + + +def cmd_status(state: dict) -> None: + last = state.get("last_scan_at", "never") + print(f"\nTool Description Optimizer — Last scan: {last}") + history = state.get("scan_history") or [] + if history: + h = history[0] + print(f" {h.get('skills_scanned', 0)} skills | avg score: {h.get('avg_score', 0)}") + dist = h.get("grade_distribution", {}) + print(f" Grades: {dist.get('A',0)}xA {dist.get('B',0)}xB " + f"{dist.get('C',0)}xC {dist.get('D',0)}xD {dist.get('F',0)}xF") + scores = state.get("skill_scores") or [] + low = [s for s in scores if s.get("grade") in ("D", "F")] + if low: + print(f"\n Low-scoring ({len(low)}):") + for s in low[:5]: + print(f" [{s['grade']}] {s['overall']} — {s['skill_name']}") + print() + + +def main(): + parser = argparse.ArgumentParser(description="Tool Description Optimizer") + group = parser.add_mutually_exclusive_group(required=True) + group.add_argument("--scan", action="store_true", help="Score all 
installed skill descriptions") + group.add_argument("--skill", type=str, metavar="NAME", help="Deep analysis of a single skill") + group.add_argument("--suggest", type=str, metavar="NAME", help="Generate rewrite suggestions") + group.add_argument("--compare", nargs=2, metavar=("DESC_A", "DESC_B"), help="Compare two descriptions") + group.add_argument("--status", action="store_true", help="Last scan summary") + parser.add_argument("--grade", type=str, metavar="GRADE", help="Only show skills at or below this grade (A-F)") + parser.add_argument("--format", choices=["text", "json"], default="text") + args = parser.parse_args() + + state = load_state() + if args.scan: + cmd_scan(state, args.grade, args.format) + elif args.skill: + cmd_skill(state, args.skill, args.format) + elif args.suggest: + cmd_suggest(state, args.suggest, args.format) + elif args.compare: + cmd_compare(args.compare[0], args.compare[1], args.format) + elif args.status: + cmd_status(state) + + +if __name__ == "__main__": + main()