# 🧠 Bias Audit

Decision-Framing Agent Skill for Surfacing Bias Before It Hardens

Reframe before you decide.


English · 中文 · Philosophy · 哲学说明 · Install Guide · 安装指南 · Examples · 案例


## What is Bias Audit?

An open-source Agent Skill that audits the framing of user requests and model outputs before that framing quietly dictates the conclusion. It identifies anchoring, loss aversion, loaded wording, false binaries, and availability-driven urgency, then rewrites the problem into a cleaner decision frame.

Built on Daniel Kahneman's work on heuristics, framing effects, and dual-process judgment. In AI systems, the danger is not only wrong answers. It is answers that feel reasonable because the question itself arrived biased.


## Who Is This For?

**🧭 Decision Makers**
Use it before high-stakes judgments shaped by fear, urgency, or loaded language.

**🏢 Managers & Founders**
Use it when team narratives feel certain before the evidence is stable.

**🧪 Product & Research Teams**
Use it on feature kill/keep, pricing, and prioritization debates.

**⚠️ Review Layers**
Use it before human review adds its own bias on top of the model's.


## How It Works

```mermaid
graph TD
    A["A loaded request arrives"] --> B["Capture original framing"]
    B --> C["Identify bias signals"]
    C --> D["Rewrite neutrally"]
    D --> E["Define evidence and criteria"]
    E --> F["Next decision step"]

    style A fill:#fef3cd,stroke:#d4a843
    style C fill:#e8f4f8,stroke:#5ba4c9
    style E fill:#e8f7ec,stroke:#28a745
    style F fill:#eef2ff,stroke:#4f46e5
```

The key move: shape the problem correctly before spending more effort executing on it.


## Output Contract

Every valid Bias Audit response ends with the same six-part framing audit:

```markdown
## Original Framing
## Bias Signals
## Neutral Reframe
## Missing Evidence
## Decision Criteria
## Recommended Next Step
```
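Because the contract is fixed, it can be checked mechanically. The sketch below is a hypothetical helper — it is not the repository's `scripts/validate-docs.sh` — that verifies all six sections appear, in order, in a response; the sample response is illustrative only:

```shell
#!/usr/bin/env sh
# Hypothetical contract check: confirm all six sections exist and appear
# in order. The embedded sample response is illustrative only.
response='## Original Framing
(how the request arrived)
## Bias Signals
(anchoring, loss aversion, loaded wording, ...)
## Neutral Reframe
(the cleaned-up question)
## Missing Evidence
(what we do not yet know)
## Decision Criteria
(what would settle the decision)
## Recommended Next Step
(the smallest useful action)'

prev=0
ok=yes
for section in 'Original Framing' 'Bias Signals' 'Neutral Reframe' \
               'Missing Evidence' 'Decision Criteria' 'Recommended Next Step'; do
  # Find the first line number where the section heading occurs.
  line=$(printf '%s\n' "$response" | grep -n -F "## $section" | head -1 | cut -d: -f1)
  if [ -z "$line" ] || [ "$line" -le "$prev" ]; then
    ok=no          # missing, or out of order relative to the previous section
  else
    prev=$line
  fi
done
echo "sections_in_order=$ok"
```

A response that drops a section, or reorders one, would print `sections_in_order=no`.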

## Use Cases

### ‘Is this project already doomed?’

The wording pushes the conversation toward abandonment before assessment.

Name the catastrophic framing, separate evidence from exhaustion, and restate the decision as a recoverability assessment.

Full walkthrough

### ‘This candidate feels weak’

The judgment arrives as a vague vibe with a built-in action bias.

Audit the wording, separate impression from evidence, and require explicit criteria before rejection.

Full walkthrough

### ‘We have to cut prices or we will lose the market’

The sentence frames a false binary under loss pressure.

Expose the loss framing, ask what evidence exists for price sensitivity, and reopen the action space.

Full walkthrough

### ‘Should we kill this feature now?’

A few vivid reactions are driving a shutdown question faster than the evidence supports.

Audit ‘obviously’ and ‘hate,’ ask for representative evidence, and separate rollback, iterate, and remove as distinct actions.

Full walkthrough


## What Makes It Different

| Typical Response | Bias Audit |
| --- | --- |
| Accepts the first framing | Audits the framing before reasoning proceeds |
| Treats urgency as evidence | Separates urgency from proof |
| Lets loaded wording steer the answer | Neutralizes wording before judgment |
| Looks only for supporting evidence | Asks what counterevidence is missing |
| Ends in vibes | Ends in explicit decision criteria |

## From Kahneman to AI

- System 1 makes fast stories feel true before they are checked.
- Framing changes judgment even when the underlying facts do not.
- Loss aversion can turn reversible choices into panic decisions.
- A good audit changes the decision frame before it changes the answer.

Read the full philosophical foundation


## Quick Install

One-line install:

```bash
curl -fsSL https://raw.githubusercontent.com/clarkchenkai/bias-audit/main/install/install.sh | bash
```

Or install manually:

```bash
git clone https://github.com/clarkchenkai/bias-audit.git
cp -r bias-audit/bias-audit ~/.your-platform/skills/bias-audit
```
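After copying the skill folder, a quick sanity check confirms the files landed. The path below is an assumption based on Claude Code's default skills directory; adjust `skill_dir` for the platform you actually installed to:

```shell
# Post-install sanity check. The path assumes Claude Code's default skills
# directory; change skill_dir for other platforms (see the table below).
skill_dir="$HOME/.claude/skills/bias-audit"
if [ -f "$skill_dir/SKILL.md" ]; then
  status=installed
else
  status=missing
fi
echo "bias-audit: $status ($skill_dir)"
```

If `SKILL.md` is reported missing, re-check that the inner `bias-audit/` folder (not the repository root) was copied.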

Full installation guide


## Supported Platforms

| Platform | Install Method |
| --- | --- |
| Claude Code | `~/.claude/skills/bias-audit` |
| Cursor | Remote rule or local skills folder |
| OpenAI Codex | `~/.codex/skills/bias-audit` or `.agents/skills/bias-audit` |
| Gemini CLI | `~/.gemini/skills/bias-audit` |
| Google Antigravity | `~/.gemini/antigravity/skills/bias-audit` |
| Amp / Goose / Cline | `~/.agents/skills/bias-audit` |

## Project Structure

```
bias-audit/
├── bias-audit/
│   ├── SKILL.md
│   ├── agents/
│   │   └── openai.yaml
│   └── references/
├── docs/
│   ├── philosophy.md
│   └── philosophy-zh.md
├── install/
│   ├── install.sh
│   ├── README-install.md
│   └── README-install-zh.md
├── examples/
├── examples-zh/
├── scripts/
│   └── validate-docs.sh
├── README.md
├── README-zh.md
└── LICENSE
```

## Contributing

Contributions are welcome, especially in these areas:

- sharper protocols and examples
- stronger mirrored Chinese documentation
- better platform compatibility
- clearer high-risk boundaries

## Changelog

Version history

## License

Released under the MIT License.
