Improve critique strategies (Step 4) #5

@GiggleLiu

Description

Current Critique Strategies

In Step 4, each brainstorm idea is paired with a devil's advocate subagent that tries to kill it with evidence. The critique evaluates ideas on three axes:

| Axis | Challenge |
| --- | --- |
| Novelty | "I found [paper X] very similar. How is this different?" |
| Rigor | "State the core claim as a testable hypothesis." |
| Impact | "If this works perfectly, what improvement? Enough for [venue]?" |

Each devil's advocate also:

  • Searches for prior art (has this been tried?)
  • Identifies the weakest assumption
  • Estimates feasibility (what would it actually take?)
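The three axes and the per-idea tasks above could be represented as data so that each devil's advocate subagent renders its challenge questions from one shared definition. The sketch below assumes a Python implementation; the class and function names (`CritiqueAxis`, `build_challenges`) and the placeholder fields are hypothetical, not part of the current code:

```python
from dataclasses import dataclass

@dataclass
class CritiqueAxis:
    name: str
    challenge_template: str  # placeholders filled in per brainstorm idea

# The three axes currently used in Step 4.
AXES = [
    CritiqueAxis("Novelty", "I found {prior_work} very similar. How is this different?"),
    CritiqueAxis("Rigor", "State the core claim as a testable hypothesis."),
    CritiqueAxis("Impact", "If this works perfectly, what improvement? Enough for {venue}?"),
]

def build_challenges(prior_work: str, venue: str) -> list[str]:
    """Render the challenge question for each axis for one brainstorm idea."""
    # str.format silently ignores unused keyword arguments, so axes
    # without placeholders (e.g. Rigor) pass through unchanged.
    return [a.challenge_template.format(prior_work=prior_work, venue=venue)
            for a in AXES]
```

Keeping the axes in a single list like this would also make contributed axes a one-line addition rather than a change to the subagent logic.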

How to contribute

Adversarial critique is only as strong as the axes it evaluates ideas along. There may be important dimensions of evaluation that are missing, or the existing axes may need sharper definitions. If you've seen research ideas fail for reasons not covered here, please suggest new critique angles!

Some questions to consider:

  • Are there critique dimensions specific to your field (e.g., ethical review, data availability, reproducibility)?
  • Are there common failure modes for research ideas that the current axes don't catch?
  • Should the feasibility estimation be more structured (e.g., time, cost, expertise, infrastructure)?
  • Are there ways to make the critique more constructive — not just killing ideas but suggesting how to salvage them?
  • Should the critique process include domain-specific checklists?
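On the structured-feasibility question above, one possible shape is a small record with the four dimensions mentioned (time, cost, expertise, infrastructure). This is only a sketch of what "more structured" might mean; the field names are assumptions, not an existing schema:

```python
from dataclasses import dataclass

@dataclass
class FeasibilityEstimate:
    time_months: int                     # rough wall-clock estimate
    cost_usd: int                        # compute, data, personnel
    required_expertise: list[str]        # skills the team must have or recruit
    required_infrastructure: list[str]   # hardware, datasets, lab access

    def summary(self) -> str:
        """One-line summary a critique subagent could attach to its verdict."""
        return (f"~{self.time_months} months, ~${self.cost_usd:,}; "
                f"expertise: {', '.join(self.required_expertise)}; "
                f"infrastructure: {', '.join(self.required_infrastructure)}")
```

A structured estimate like this would let critiques be compared across ideas instead of read one prose paragraph at a time.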

Please describe:

  1. The critique axis or strategy name
  2. What it evaluates (what failure mode does it catch?)
  3. The challenge question it would ask
  4. An example of how it would improve the critique
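As an illustration of the four fields above, here is one hypothetical contribution written out as data. The "Reproducibility" axis and its wording are invented purely to show the expected shape of a suggestion:

```python
# A made-up example contribution following the four-field template.
example_contribution = {
    "name": "Reproducibility",
    "evaluates": ("whether the idea could be validated by an independent group; "
                  "catches ideas that hinge on private data or one-off setups"),
    "challenge": ("Could an independent group reproduce the key result from "
                  "publicly available data and code?"),
    "example": ("Flags an otherwise strong idea whose headline experiment "
                "depends on a proprietary dataset the authors cannot release."),
}
```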
