diff --git a/.claude/agents/builder.md b/.claude/agents/builder.md
deleted file mode 100644
index a540df1..0000000
--- a/.claude/agents/builder.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-description: Builder agent — implements planned issues using TDD, opens draft PRs
----
-
-You are Builder. You implement. You never mark PRs ready for review. **Run schedule: every hour.**
-
-## Trigger
-
-Find the oldest open issue with label `planned` that does NOT have label `in-progress` or `pr-opened`.
-
-## Protocol
-
-1. Read the issue and the Planner's comment (the implementation plan).
-2. Create branch: `fix/<issue-number>-<3-word-slug>`. Add label `in-progress` to the issue.
-3. **Write the failing tests first. Run them. Confirm they fail for the right reason.** If any test passes before your fix — stop, add `needs-investigation`, do not continue.
-4. Write the minimum production code to make the tests pass.
-5. Run the full quality gate. Fix every issue before proceeding.
-6. Review diff vs plan — all planned files touched, nothing extra.
-7. Open a **DRAFT** pull request. Never mark it ready.
-
-## PR format
-
-```
-Title: fix(<scope>): <summary>
-
-Closes #<issue-number>
-
-## What changed / TDD evidence / Quality gate (paste score) / Out of scope
-```
-
-## Commands
-
-```bash
-# Run tests
-pytest
-
-# Type check
-mypy .
-
-# Lint
-ruff check .
-
-# Quality gate (required, minimum score 70)
-desloppify scan --path .
-```
-
-## Hard rules
-- Never mark a PR ready · Never push to main/master · Never skip failing-test step · One issue per PR
diff --git a/.claude/agents/planner.md b/.claude/agents/planner.md
deleted file mode 100644
index 8cc2f53..0000000
--- a/.claude/agents/planner.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-description: Planning agent — turns scout-filed issues into verified implementation plans
----
-
-You are Planner. You read code and write plans. You never write production code.
**Run schedule: every hour.**
-
-## Trigger
-
-Find the oldest open issue with label `scout-filed` that does NOT have label `planned` or `in-progress`.
-
-## Protocol
-
-1. Read the full issue including all comments.
-2. Read every file mentioned in "Code location" plus their tests and callers.
-3. Draft an implementation plan with these sections:
-   - **Root cause** — 1–2 sentences on what is actually wrong.
-   - **Files to change** — `path/to/file:line_range` with reason for each.
-   - **Tests to write** — specific failing test cases that prove the bug exists.
-   - **Approach** — numbered implementation steps for Builder to follow exactly.
-   - **Risks** — regressions or edge cases to watch.
-   - **Out of scope** — what will NOT be touched.
-
-4. Self-critique against this checklist before posting:
-   - [ ] No duplication — searched for existing implementations first
-   - [ ] Error handling is specific, not catch-all
-   - [ ] Domain terminology correct (no synonym drift from advocacy-domain.md)
-   - [ ] A failing test is specified before any production code
-   - [ ] Every test would be killed by mutation testing
-   - [ ] No premature abstraction (three similar lines is fine)
-
-5. If plan passes: post it as a comment, add label `planned`, remove `scout-filed`.
-6. If root cause is unclear: add label `needs-investigation` only, do not add `planned`.
-
-## Quality gate
-
-Minimum desloppify score for this repo: **70**
-
-```bash
-desloppify scan --path .
-```
-
-## Hard rules
-- Never write code
-- Never push commits
-- Never plan more than one issue per run
-- Never approve a plan that skips the failing-test step
diff --git a/.claude/agents/scout.md b/.claude/agents/scout.md
deleted file mode 100644
index 4877d9c..0000000
--- a/.claude/agents/scout.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-description: QA agent — exercises this tool as real users, files structured GitHub issues for failures
----
-
-You are Scout. You find broken things. You never fix them.
**Run schedule: every 6 hours.**
-
-## Execution
-
-1. Read `scout.personas.yaml` from the repo root.
-2. Set up the environment if required (check CLAUDE.md or README). For privatemode-proxy: `pip install -r requirements-test.txt`.
-3. For each persona, execute every flow in `flows[]` exactly as described:
-   - `run:` → execute the shell command
-   - `call:` → invoke the function or method
-   - `request:` → make the HTTP request
-4. After each flow, check all `assertions`:
-   - `succeeds` — these must complete without error
-   - `fails_gracefully` — these must fail with a clean message, not a traceback
-   - `output_contains` — these strings must appear in output
-   - `output_not_contains` — these strings must NOT appear
-
-## Filing issues
-
-File a GitHub issue immediately for each failure. One issue per failure. File before moving to the next flow.
-
-Issue format:
-```
-Title: [Scout] <one-line failure summary>
-Labels: scout-filed, bug
-
-## Persona
-<persona name>
-
-## Flow
-<flow id>: <flow name>
-
-## Command / call
-<exact command, call, or request>
-
-## Output
-<actual output or traceback>
-
-## Expected behavior
-<what should have happened>
-
-## Code location
-<file:line, if identifiable>
-
-## Acceptance criteria
-- [ ] <specific, checkable criterion>
-- [ ] <specific, checkable criterion>
-```
-
-## Hard rules
-- Never modify code, configuration, or data
-- Never push commits
-- Never close or update issues you did not file in this run
-- One issue per distinct failure
diff --git a/.claude/rules/accessibility.md b/.claude/rules/accessibility.md
deleted file mode 100644
index 0c8fdf9..0000000
--- a/.claude/rules/accessibility.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-paths:
-  - "**/ui/**"
-  - "**/frontend/**"
-  - "**/i18n/**"
-  - "**/l10n/**"
----
-# Accessibility Rules for Animal Advocacy Projects
-
-Advocacy networks span borders, languages, economic conditions, and infrastructure environments. An activist coordinating a rescue in a rural area with intermittent connectivity has fundamentally different needs than a campaign organizer at a well-resourced urban nonprofit.
Accessibility in advocacy software is not about compliance with standards — it is about ensuring the movement's tools work for everyone the movement serves.
-
-## Internationalization from Day One
-
-Design every user-facing component with internationalization from the start — never retrofit it. Advocacy networks operate across linguistic boundaries: a coalition might include Spanish-speaking organizers in the Americas, Mandarin-speaking activists in Asia, and English-speaking legal teams in Europe. Externalize all user-facing strings from the beginning. Support right-to-left text layouts. Handle pluralization rules that differ across languages. Date, time, currency, and number formatting must respect locale. Adding i18n after the fact requires touching every component — the cost grows exponentially with codebase size.
-
-## Low-Bandwidth Optimization
-
-Many activists operate on mobile data in regions with expensive or throttled connections. Optimize aggressively: compress all assets, lazy-load non-critical content, minimize payload sizes, implement efficient data synchronization that transfers only deltas. Set performance budgets and test against them on throttled connections. A tool that requires broadband to function excludes the activists who need it most.
-
-## Offline-First Architecture
-
-Design for disconnected operation as the default, not as an exception. Activists in areas with unreliable connectivity — rural investigation sites, countries with internet shutdowns, disaster response scenarios — need tools that work without a network connection. Local-first data storage with background sync when connectivity is available. Conflict resolution for data modified offline by multiple users. Queue operations during disconnection and replay them on reconnect. The application must be fully functional for core workflows without any network access.
-
-## Low-Literacy Design Patterns
-
-Not all advocacy participants are fluent readers.
Rescue coordinators, sanctuary workers, and community organizers come from diverse educational backgrounds. Design for comprehension: use icons alongside text labels, provide visual workflows instead of text-heavy instructions, support voice input and audio output where possible, use progressive disclosure to avoid overwhelming users with information density. Test interfaces with users who have limited formal literacy.
-
-## Mesh Networking Compatibility
-
-In environments where centralized internet infrastructure is unavailable, compromised, or surveilled, mesh networking enables direct device-to-device communication. Design data synchronization protocols that can operate over mesh networks with high latency, low bandwidth, and intermittent peer availability. This is not a theoretical concern — activists operating in regions with government internet shutdowns depend on mesh-capable tools.
-
-## Graceful Degradation
-
-Every feature must have a degraded mode that functions under constrained conditions. If the encryption library fails to load, the application must refuse to transmit sensitive data rather than transmitting it in plaintext. If the media processing pipeline is unavailable, investigation footage must be stored safely for later processing rather than discarded. If the network connection is lost, the user must see clear status indicators, not silent failures. Degrade capability, never safety.
-
-## Device Seizure Preparation — Application State
-
-When connectivity is lost suddenly — device confiscated, signal jammed, power cut — the application must not leave sensitive data exposed. No temporary files with decrypted investigation content. No in-memory caches that persist to swap files. No crash dumps containing witness identities. No recovery modes that display previously viewed sensitive content without re-authentication. Design the application so that power loss at any moment leaves zero recoverable sensitive state on disk.
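The queue-and-replay behavior described under Offline-First Architecture can be sketched in a few lines of Python. This is an illustrative sketch only; the `OfflineQueue` class and its method names are assumptions for the example, not part of any Open Paws codebase:

```python
import time
from collections import deque


class OfflineQueue:
    """Queue operations while disconnected; replay them in order on reconnect."""

    def __init__(self, send):
        self._send = send        # callable that performs the network call
        self._pending = deque()  # operations captured while offline

    def submit(self, op: dict) -> None:
        """Try to send immediately; on failure, queue for later replay."""
        try:
            self._send(op)
        except ConnectionError:
            self._pending.append({"op": op, "queued_at": time.time()})

    def replay(self) -> int:
        """On reconnect, resend queued operations oldest-first. Returns count sent."""
        sent = 0
        while self._pending:
            entry = self._pending[0]
            self._send(entry["op"])  # raises if still offline; queue is preserved
            self._pending.popleft()
            sent += 1
        return sent
```

Because `replay` re-raises on the first failed send, a connectivity drop mid-replay preserves the remaining queue for the next attempt.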
-
-## Multi-Language Activist Networks Across Borders
-
-Coalition tools must support simultaneous use in multiple languages within the same deployment. A shared coordination platform where each user sees the interface in their language, but shared content (campaign plans, action alerts, investigation summaries) can be viewed in translated or original form.
-Support both machine translation for real-time use and human-reviewed translation for legally sensitive content.
diff --git a/.claude/rules/desloppify.md b/.claude/rules/desloppify.md
index 891e9e1..96b64ed 100644
--- a/.claude/rules/desloppify.md
+++ b/.claude/rules/desloppify.md
@@ -1,23 +1,64 @@
 # Code Quality — desloppify
 
-Run desloppify to systematically identify and fix code quality issues. Install and configure before scanning (requires Python 3.11+):
+Run desloppify to systematically identify and fix code quality issues. Install from the **Open Paws fork** (Python 3.11+):
 
 ```bash
-pip install --upgrade "desloppify[full]"
+# Install from this fork — NEVER from PyPI / upstream
+pip install "git+https://github.com/Open-Paws/desloppify.git#egg=desloppify[full]"
 desloppify update-skill claude
 ```
 
-Add `.desloppify/` to `.gitignore` — it contains local state that should not be committed. Before scanning, exclude directories that should not be analyzed (vendor, build output, generated code, worktrees) with `desloppify exclude <dirs>`. Share questionable candidates with the project owner before excluding.
+**Canonical install command.** This file is the single source of truth for how to install desloppify. Every other file in this stack (`skills/desloppify-playbook/SKILL.md`, `agents/desloppifier.md`, `$OP_CONTEXT_REPO/.claude/rules/desloppify.md`, every `Open-Paws/*/.claude/rules/desloppify.md`) links here rather than restating the command — duplication is how the multi-repo drift that existed pre-2026-04-25 happened. If you're editing an install command anywhere other than this file, stop.
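The fork-only rule above can be made checkable in CI by inspecting the installed package's metadata for the fork URL. The sketch below is hypothetical (no such helper ships with desloppify); `Home-page` and `Project-URL` are, however, standard Python core-metadata fields:

```python
def is_openpaws_fork(metadata: dict) -> bool:
    """Return True if desloppify's package metadata points at the Open Paws fork.

    `metadata` is a dict of core-metadata fields, e.g. assembled from
    `pip show desloppify` output or `importlib.metadata.metadata(...)`.
    Hypothetical helper for a CI guard, not an existing desloppify feature.
    """
    urls = [
        metadata.get("Home-page", ""),
        metadata.get("Download-URL", ""),
        *metadata.get("Project-URL", []),  # may list several labeled URLs
    ]
    return any("github.com/Open-Paws/desloppify" in u for u in urls)
```

A CI job could fail the build when this returns `False`, catching an accidental `pip install desloppify` from PyPI before any scan runs.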
+
+**OP fork only — never upstream.** The git install above pulls from `github.com/Open-Paws/desloppify`, which carries the movement conventions (no-speciesist-language rules, type-safety patterns, gateway response shape discipline, compassionate language enforcement, persona-QA browser testing) that upstream desloppify lacks. `pip install desloppify` from PyPI pulls upstream and is a hard-rule violation per `~/.claude/rules/pipeline-nevers.md`.
+
+Add `.desloppify/` to `.gitignore` — it contains local state that should not be committed. Before scanning, exclude generated / vendor / build dirs:
+
 ```bash
+# Exclude generated directories, then scan
+desloppify exclude node_modules dist
 desloppify scan --path .
-desloppify next
 ```
 
 `--path` is the directory to scan (use `.` for the whole project, or a subdirectory like `src/`). Your goal is to get the strict score as high as possible. The scoring resists gaming — the only way to improve it is to actually make the code better.
 
-**The loop:** run `next`. It is the execution queue from the living plan, not the whole backlog. It tells you what to fix now, which file, and the resolve command to run when done. Fix it, resolve it, run `next` again. This is your main job. Use `desloppify backlog` only when you need to inspect broader open work not currently driving execution.
+## The fix loop
+
+Run `next` → fix → resolve → repeat:
+
+```bash
+desloppify next               # get the top-priority item; shows which file and the resolve command
+# fix the code
+desloppify plan resolve <id>  # mark it done
+desloppify next               # get the next item
+```
+
+It is the execution queue from the living plan, not the whole backlog. It tells you what to fix now, which file, and the resolve command to run when done. Use `desloppify backlog` only when you need to inspect broader open work not currently driving execution. Do not be lazy. Large refactors and small detailed fixes — do both with equal energy. No task is too big or too small.
Fix things properly, not minimally. Use `plan` / `plan queue` to reorder priorities or cluster related issues. Rescan periodically. The scan output includes agent instructions — follow them, do not substitute your own analysis.
+
+## Persona-QA workflow (UI repos with persona-driven testing)
+
+```bash
+desloppify persona-qa --prepare --url https://example.com  # generate agent instructions
+# agent runs browser testing and captures findings in JSON
+desloppify persona-qa --import findings.json               # merge into state
+desloppify persona-qa --status                             # per-persona summary
+desloppify next                                            # persona QA items now appear in the queue
+```
+
+## Baseline Capture Process
+
+**At plan time (STAGE 3):** Capture the desloppify baseline against the branch point:
+
+```bash
+desloppify status --json > .desloppify/baseline.json
+```
+
+Post the baseline JSON as a GitHub issue comment for durable storage (`.desloppify/` is gitignored).
+
+**Recovery if missing:** STAGE 9 uses `git merge-base HEAD main` to recapture against the branch point.
+
+**Score-cannot-regress gate.** STAGE 9 blocks merge if the strict score drops below baseline. Regression requires the `override:allow-score-drop` label (human-only — agents cannot apply it).
diff --git a/.claude/rules/emotional-safety.md b/.claude/rules/emotional-safety.md
deleted file mode 100644
index 622a08f..0000000
--- a/.claude/rules/emotional-safety.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-paths:
-  - "**/content/**"
-  - "**/media/**"
-  - "**/display/**"
-  - "**/upload/**"
----
-# Emotional Safety Rules for Animal Advocacy Projects
-
-Animal advocacy software routinely handles content documenting extreme suffering: factory farm conditions, slaughterhouse footage, animal testing documentation, and witness testimony of abuse. This content is necessary for the movement's work — it drives investigations, legal cases, public campaigns, and policy change. But uncontrolled exposure to this content causes measurable psychological harm.
Every display decision must balance the operational need for access against the human cost of exposure. Emotional safety is not a UX preference — it is a duty of care to the people doing this work.
-
-## Progressive Disclosure of Traumatic Content
-
-NEVER display graphic content by default. Every piece of investigation footage, slaughter documentation, or exploitation imagery must be behind at least one intentional interaction. The default state is always safe: blurred, hidden, or represented by a text description. Users escalate to more graphic content through deliberate choices, never through automatic loading, scrolling, or navigation.
-
-## Configurable Detail Levels
-
-Implement user-controlled detail settings that persist across sessions. At minimum, provide three tiers: (1) text-only descriptions with no imagery, (2) blurred or low-detail representations with contextual descriptions, (3) full-resolution content. Each user chooses their own default level. The system MUST remember this preference and never reset it. Different roles need different defaults: a legal reviewer may need full-resolution evidence access; a campaign coordinator may only need text summaries.
-
-## Content Warnings — Mandatory Before Display
-
-Every piece of content involving animal suffering, investigation footage, or slaughter documentation MUST be preceded by a specific content warning describing what the content contains. Generic warnings like "sensitive content" are insufficient — the warning must indicate whether the content includes: graphic injury, death, distress vocalizations, confined living conditions, or slaughter processes. Users must be able to make an informed decision about whether to view specific content, not just "something sensitive."
-
-## Investigation Footage Handling
-
-Investigation footage is the most operationally important and psychologically dangerous content in the system.
Implementation requirements:
-- NEVER auto-play video or audio from investigations
-- ALWAYS display footage in blurred state by default
-- Require explicit opt-in for full resolution — a deliberate click, not a hover or scroll
-- Provide frame-by-frame navigation for reviewers who need to examine specific moments without watching continuous footage
-- Strip audio by default — distress vocalizations cause acute stress responses; audio should be a separate opt-in from video
-- Support annotation without full-resolution viewing (reviewers can mark timestamps and regions on blurred preview)
-
-## Witness Testimony Display
-
-Before displaying any witness testimony: (1) verify that display consent is current and has not been withdrawn, (2) anonymize by default — display pseudonyms, not legal names, (3) require explicit opt-in to view identifying details, (4) log access to testimony for audit purposes while protecting the identity of who accessed it. When testimony includes descriptions of animal suffering, apply the same progressive disclosure and content warning rules as for visual media.
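The tiered-disclosure rules above (safe default, specific warning, explicit opt-in before full resolution) can be captured in a small policy function. A sketch with assumed names, not an existing implementation:

```python
from dataclasses import dataclass

# Ordered from least to most graphic; mirrors the three tiers described above.
TIERS = ("text_only", "blurred", "full_resolution")


@dataclass
class ViewRequest:
    user_tier: str        # the user's persisted detail preference
    requested_tier: str   # what this view is asking to render
    warning_shown: bool   # a specific content warning was displayed first
    explicit_optin: bool  # a deliberate click, not a hover or scroll


def allowed_rendering(req: ViewRequest) -> str:
    """Return the tier to render: never more graphic than the user's preference,
    and never full resolution without a specific warning plus an explicit opt-in."""
    tier = min(TIERS.index(req.user_tier), TIERS.index(req.requested_tier))
    if TIERS[tier] == "full_resolution" and not (req.warning_shown and req.explicit_optin):
        tier = TIERS.index("blurred")  # degrade capability, never safety
    return TIERS[tier]
```

Keeping the policy in one pure function makes the "never reset the preference, never escalate silently" rules easy to unit-test and mutation-test.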
-
-## Burnout Prevention Patterns
-
-Advocacy software should actively support user wellbeing during extended content review sessions:
-- **Session time awareness** — track continuous exposure time to traumatic content and surface non-intrusive reminders after configurable intervals (default: 30 minutes of active content review)
-- **"Take a break" prompts** — for content reviewers who have been processing investigation footage or testimony for extended periods; these are suggestions, not blocks
-- **Session summaries** — at the end of a content review session, provide a summary of what was reviewed so the reviewer does not need to re-expose themselves to verify completeness
-- **Workload distribution** — when multiple reviewers are available, the system should support distributing traumatic content review across the team rather than concentrating it
-
-## Secondary Trauma Mitigation
-
-Secondary trauma affects not just end users but also **developers** building and testing this software. Design the development workflow to minimize unnecessary exposure: use abstract test data (described references, not actual footage) in automated tests, provide mock data generators that produce realistic metadata without graphic content, and document which test suites involve real content so developers can prepare. The CI/CD pipeline must never display graphic content in test output, logs, or failure reports.
-
-## Opt-In Escalation of Graphic Content
-
-When a user needs to access full-resolution graphic content, require multiple confirmation steps proportional to content severity. A single click is insufficient for the most graphic content. Implement a confirmation dialog that names what the user is about to see, requires an explicit "I understand" acknowledgment, and provides an alternative (text description, blurred summary) alongside the full-access option. This is not friction for friction's sake — it is informed consent applied to content exposure.
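The session-time-awareness pattern is straightforward to implement with an injected clock, which keeps it testable. A sketch only; the class name and API are assumptions for illustration:

```python
class ExposureTracker:
    """Track continuous traumatic-content review time and suggest breaks.

    The default interval follows the 30-minute guideline above; `now` is an
    injected zero-argument clock (e.g. time.monotonic) so tests can control time.
    """

    def __init__(self, now, reminder_after_s: int = 30 * 60):
        self._now = now
        self._reminder_after_s = reminder_after_s
        self._session_start = None  # None means no active exposure session

    def on_content_viewed(self) -> None:
        """Start the continuous-exposure clock on first view."""
        if self._session_start is None:
            self._session_start = self._now()

    def should_suggest_break(self) -> bool:
        """Non-intrusive suggestion only; never blocks the reviewer."""
        if self._session_start is None:
            return False
        return self._now() - self._session_start >= self._reminder_after_s

    def on_break_taken(self) -> None:
        self._session_start = None
```

In a real UI this would drive a dismissible banner, never a modal that interrupts evidence review.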
diff --git a/.claude/rules/geo-seo.md b/.claude/rules/geo-seo.md
deleted file mode 100644
index 5ff43d3..0000000
--- a/.claude/rules/geo-seo.md
+++ /dev/null
@@ -1,616 +0,0 @@
----
-paths:
-  - "**/*.html"
-  - "**/robots.txt"
-  - "**/sitemap.xml"
-  - "**/llms.txt"
-  - "**/head/**"
-  - "**/seo/**"
-  - "**/meta/**"
-  - "**/schema/**"
-  - "**/structured-data/**"
-  - "**/layout.*"
-  - "**/Layout.*"
-  - "**/BaseHead.*"
-  - "**/Head.*"
----
-# SEO + GEO Rules for Animal Advocacy Websites
-
-Websites built for animal advocacy serve two discovery channels: traditional search engines and AI answer systems (ChatGPT, Perplexity, Google AI Overviews, Claude, Gemini, Bing Copilot). The game has shifted from optimizing for keyword matching to optimizing for **intent satisfaction** (does your content completely solve the user's problem?), **entity authority** (does Google recognize your brand as a trusted entity in its knowledge graph?), and **technical excellence** (can crawlers efficiently process your site?). Approximately 60% of searches end without a click — appearing in search results is no longer enough; you need to be the source Google trusts enough to synthesize into AI-generated answers.
-
-**How AI citation works:** Google generates an answer first, then scores content against it using embedding distance. Only 17-32% of AI Overview citations come from pages ranking in the organic top 10 — lower-authority pages can win with the right structure (source: Authoritas AI Overviews study, 2024). Domain Authority correlates with AI citations at only r=0.18; topical authority (r=0.40) and branded web mentions (r=0.664) are the real predictors (source: Kalicube GEO correlation study, 2025). 80% of URLs cited by AI assistants do not rank in Google's top search results for the same queries (source: Semrush AI citation analysis, 2024).
-
----
-
-## HTML Structure
-
-Every page needs exactly one `
<h1>` tag containing the primary topic. Use a logical heading hierarchy (`h1 > h2 > h3`), never skipping levels. Phrase `<h2>` headings as questions when the section answers something — question-based headings improve Featured Snippet selection, People Also Ask appearance, and produce 7× more AI citations for smaller sites. The first paragraph after any heading must directly answer that question in 40-60 words. AI systems pull from the first 30% of content 44% of the time — lead with the answer.
-
-Keep paragraphs to 2-4 sentences (40-60 words). Structure content as self-contained 120-180 word modules — this generates 70% more ChatGPT citations than unstructured prose.
-
-Use semantic HTML correctly: `
<article>`, `<section>`, `