A Claude Code skill that audits any project's Claude Code setup against the official Anthropic documentation.
Run /cc-audit in any project. Get a structured report telling you what's working, what could be better, and what needs fixing — every recommendation backed by the docs and filtered through your project's actual goal.
- Claude Code installed and authenticated
- Anthropic Documentation MCP configured
- Playwright CLI (optional fallback — used if Anthropic Documentation MCP is unavailable or a page fails to load)
claude mcp add anthropic-docs -- npx -y @anthropic-ai/anthropic-docs-mcp

Clone and symlink to your global skills directory:
git clone https://github.com/giovicordova/claude-code-audit.git
mkdir -p ~/.claude/skills
ln -s "$(pwd)/claude-code-audit" ~/.claude/skills/cc-audit

This keeps the skill up to date: run `git pull` in the clone to get the latest version.
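If you want to sanity-check the clone-and-symlink pattern before touching your real setup, here is a minimal sketch that reproduces the layout in a throwaway directory (the paths are stand-ins, not your actual clone or `~/.claude`):

```shell
# Illustrative sketch of the clone-and-symlink layout, run in a
# throwaway directory so it cannot touch your real ~/.claude setup.
tmp=$(mktemp -d)
mkdir -p "$tmp/claude-code-audit" "$tmp/skills"   # stand-ins for the clone and ~/.claude/skills
ln -s "$tmp/claude-code-audit" "$tmp/skills/cc-audit"
# The link resolves to the clone itself, which is why a `git pull`
# in the clone updates the installed skill in place.
readlink "$tmp/skills/cc-audit"
```

Because the skills directory holds a link rather than a copy, there is nothing to reinstall after an update.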
Open Claude Code in any project and type /cc-audit. The skill should appear in autocomplete.
/cc-audit
The skill:
- Reads the project — scans README, CLAUDE.md, package.json, and other files to understand the project's goal
- Confirms with you — presents its understanding and asks if it's accurate before proceeding
- Audits against the docs — fetches official Anthropic documentation for each area and compares your setup
- Writes the report — produces `AUDIT-REPORT.md` in the project root
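As a rough illustration (a hypothetical skeleton, not the skill's verbatim output), the report groups tagged findings under one section per audited area:

```markdown
# Claude Code Audit Report    <!-- hypothetical skeleton, not verbatim output -->

## Project Goal (as confirmed with you)
...

## CLAUDE.md
- [improve] ...

## Skills
- [good] ...

## Permissions
- [fix] ...
```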
| Area | What it checks |
|---|---|
| CLAUDE.md | Structure, length, content quality, proper use |
| Skills | Frontmatter, descriptions, invocation control, right tool for the job |
| Sub-agents | Tool restrictions, model selection, when to use vs skills |
| Hooks | Event handling, matchers, deterministic automation |
| MCP | Server configurations, scope, usage |
| Permissions | Allowlists, deny rules, sandboxing |
| Settings | Scope, configuration |
| Feature Selection | Right feature for the right job, missing opportunities |
| Rules | Path scoping, organization, overlap with CLAUDE.md |
Each finding in the report is tagged:
- `good` — keep this, it follows best practices
- `improve` — works but could be better
- `fix` — goes against best practices
Every finding includes:
- What exists now
- What to change (for improve/fix)
- Why it matters for your specific project
- Link to the Anthropic documentation that backs it
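Put together, a single finding might read like this (a hypothetical example; the wording and the docs link are placeholders, not actual skill output):

```markdown
### [improve] CLAUDE.md duplicates the README feature list
- Now: CLAUDE.md restates every feature already described in README.md
- Change: keep CLAUDE.md to conventions, commands, and gotchas; point to README for features
- Why: a shorter CLAUDE.md leaves more of the context window for task-relevant content in this project
- Docs: <link to the relevant Anthropic best-practices page>
```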
Feed the report into a Plan mode session to implement the recommendations:
- Start a new Claude Code session
- Enter Plan mode (Shift+Tab)
- Say: "Read AUDIT-REPORT.md and create an implementation plan for the recommendations"
- Review the plan, then switch to Normal mode to execute
The skill fetches the current official documentation at runtime through the Anthropic Documentation MCP. If a doc page is unavailable through the MCP, the skill falls back to Playwright CLI for token-efficient browser fetching. There are no hardcoded checklists. Recommendations stay accurate as Claude Code evolves.
MIT