Community guide and AI-agent skill for working with the MiroFish engine.
The upstream MiroFish repository explains the product and how to launch it.
This repository serves a different purpose:
- capture repeatable MiroFish practices in one place;
- turn those practices into a portable `SKILL.md` for AI coding agents;
- document the parts of the engine that matter when you are trying to get better outputs, debug failures, or maintain a support repository around MiroFish.
In short: MiroFish is the engine, mirofish-guide is the operator's playbook and reusable skill layer.
This repository is for people searching for:
MiroFish guide, MiroFish guide skill, MiroFish tutorial, MiroFish setup, MiroFish troubleshooting, MiroFish best practices, MiroFish skill, MiroFish prompt design, MiroFish seed text, MiroFish report debugging, MiroFish evaluation, MiroFish operator workflow, MiroFish proxy compatibility, MiroFish graph build failure, MiroFish runtime forensics, MiroFish report audit.
If MiroFish feels powerful but hard to operate consistently, this repository is meant to close that gap.
This guide is grounded in the current MiroFish pipeline:
- upload source material (`pdf`, `md`, `txt`, `markdown`);
- build a Zep graph from chunked text;
- filter entities for simulation;
- generate OASIS agent profiles;
- generate simulation config with an LLM;
- run Twitter and Reddit simulations in parallel;
- generate a report and interact with the simulated world.
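The stages above can be sketched as a plain orchestration loop. This is an illustrative sketch only: every name in it (`RunState`, `run_pipeline`, the field names) is a hypothetical placeholder, not the engine's actual API.

```python
# Illustrative sketch of the MiroFish pipeline stages described above.
# All names here are hypothetical placeholders, not the real engine API.
from dataclasses import dataclass, field


@dataclass
class RunState:
    sources: list[str]                          # pdf/md/txt/markdown paths
    graph: dict = field(default_factory=dict)   # Zep graph built from chunks
    entities: list[str] = field(default_factory=list)
    profiles: list[dict] = field(default_factory=list)
    config: dict = field(default_factory=dict)
    report: str = ""


def run_pipeline(sources: list[str]) -> RunState:
    state = RunState(sources=sources)
    # build graph from chunked text (stand-in: one chunk per source)
    state.graph = {"chunks": list(sources)}
    # filter entities for simulation (stand-in: keep every chunk)
    state.entities = list(state.graph["chunks"])
    # generate OASIS agent profiles, one per entity
    state.profiles = [{"entity": e} for e in state.entities]
    # LLM-generated simulation config; runs platforms in parallel
    state.config = {"platforms": ["twitter", "reddit"]}
    # report summarizes the run
    state.report = f"{len(state.profiles)} agents simulated"
    return state
```

The point of the sketch is the shape of the data flow: each stage consumes the previous stage's artifact, which is why weak source material propagates all the way to the report.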
The repository focuses on the practices around that pipeline:
- source document and seed-text quality;
- simulation requirement writing;
- interpreting entity and persona generation;
- tuning runs without guessing blindly;
- debugging from generated artifacts and logs;
- recording empirical findings separately from code-confirmed facts;
- scoring run quality with explicit evaluation rubrics;
- comparing routes and models without confusing transport issues with engine issues.
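As a sketch of what an explicit evaluation rubric can look like in practice, here is a minimal weighted scorecard. The dimension names and weights are illustrative assumptions, not the actual rubric in `references/evaluation-rubric.md`:

```python
# Hypothetical scorecard: dimensions and weights are illustrative only.
RUBRIC = {
    "entity_coverage": 0.3,   # did key source entities appear as agents?
    "action_density": 0.3,    # did agents act, or sit idle?
    "report_grounding": 0.4,  # are report claims backed by raw events?
}


def score_run(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    return sum(RUBRIC[dim] * scores[dim] for dim in RUBRIC)
```

Making the weights explicit is the point: it forces disagreements about run quality into arguments about specific dimensions rather than overall impressions.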
mirofish-guide/
|-- SKILL.md
|-- references/
| |-- anti-patterns.md
| |-- evidence-taxonomy.md
| |-- evaluation-rubric.md
| |-- experiment-protocol.md
| |-- graph-build-runbook.md
| |-- glossary.md
| |-- model-proxy-guidance.md
| |-- operator-workflow.md
| |-- report-audit.md
| |-- runtime-forensics.md
| |-- workflow.md
| |-- debugging.md
| |-- seed-templates.md
| `-- experiments.md
|-- agents/
| `-- openai.yaml
|-- assets/
| |-- mirofish-guide-banner.jpg
| |-- mirofish-guide-banner.svg
| `-- mirofish-guide-mark.svg
|-- CONTRIBUTING.md
|-- SECURITY.md
|-- README.md
`-- LICENSE
- `SKILL.md`: the compact skill entry point for Claude Code, OpenClaw, Codex-style workflows, and similar agents.
- `references/workflow.md`: the engine map, stage by stage.
- `references/debugging.md`: artifact-level troubleshooting guide.
- `references/seed-templates.md`: reusable source-material and simulation-requirement templates.
- `references/experiments.md`: empirical run notes and tuning observations.
- `references/evidence-taxonomy.md`: how to label claims and verify report evidence.
- `references/evaluation-rubric.md`: how to score simulation and report quality.
- `references/experiment-protocol.md`: how to run reproducible comparisons.
- `references/graph-build-runbook.md`: concrete checks for graph-stage failures and sparse ontologies.
- `references/operator-workflow.md`: the practical operator loop from seed to audit.
- `references/model-proxy-guidance.md`: route acceptance checks and proxy-specific failure patterns.
- `references/runtime-forensics.md`: how to judge whether a run was substantively alive.
- `references/report-audit.md`: how to verify report claims against raw evidence.
- `references/anti-patterns.md`: mistakes that waste budget or produce misleading runs.
- `references/glossary.md`: compact definitions for MiroFish-specific terms.
Point your agent at SKILL.md. The skill is designed to route the agent into the right reference file instead of dumping the entire guide into context.
Use it when the task is about:
- planning or running MiroFish simulations;
- improving prompts, seed material, or operator workflows;
- debugging graph, entity, profile, config, runtime, or report stages;
- maintaining a repository that documents MiroFish best practices.
Start with `SKILL.md`, then open the relevant reference:
- `references/workflow.md` for architecture and stage mapping;
- `references/debugging.md` for operational problems;
- `references/graph-build-runbook.md` for graph-stage failures;
- `references/seed-templates.md` for reusable input patterns;
- `references/experiments.md` for empirical tradeoffs;
- `references/operator-workflow.md` for the end-to-end operating loop;
- `references/runtime-forensics.md` for deciding whether a run actually produced meaningful behavior;
- `references/report-audit.md` for verifying report claims;
- `references/evaluation-rubric.md` when you need to judge whether a run was actually good.
- Read `references/workflow.md` to locate the stage you are working on.
- Use `references/seed-templates.md` to prepare stronger source material and simulation requirements.
- Use `references/operator-workflow.md` to run a small pilot before scaling.
- If something fails or looks weak, go to `references/debugging.md`.
- If you are comparing models, cost, or run quality, use `references/experiment-protocol.md`, `references/evaluation-rubric.md`, and `references/experiments.md`.
- MiroFish users trying to get better simulation outputs
- contributors maintaining internal or public MiroFish playbooks
- AI-agent users who want a reusable MiroFish skill
- people debugging graph extraction, persona generation, runtime behavior, or report quality
- Treat source material quality as the main quality lever. Weak inputs create weak entities, weak personas, and weak reports.
- Separate code-confirmed behavior from experiment-only observations. Do not present guesses as engine facts.
- Debug from artifacts first. MiroFish writes useful state, config, profile, and report files; use them before patching code.
- Prefer changes in seed material and simulation requirements before invasive engine changes.
- Keep the guide repository sharper than the engine README: shorter claims, more verification, better operator context.
- Treat the report as a summary layer, not the only evidence layer.
This repository should never become a dump of local context or private data.
- do not commit `.env` files, exports, run logs, uploads, or private datasets;
- redact keys, tokens, internal URLs, and user-identifying content from examples;
- keep empirical notes useful without exposing confidential source material;
- follow `SECURITY.md` when adding examples or logs.
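A minimal pre-commit-style redaction check along these lines can catch the obvious leaks before an example or log lands in the repository. The patterns below are illustrative assumptions; adapt them to your own key formats and internal hostnames:

```python
# Minimal redaction check: flags lines that look like secrets before they
# land in a committed example or log. Patterns are illustrative only.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # API-key-like tokens
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
    re.compile(r"https?://[\w.-]*internal[\w./-]*"),      # internal URLs
]


def find_leaks(text: str) -> list[str]:
    """Return the lines in `text` that match any secret pattern."""
    return [
        line
        for line in text.splitlines()
        if any(pattern.search(line) for pattern in SECRET_PATTERNS)
    ]
```

Running a check like this over staged files is cheaper than scrubbing git history after a key has been committed.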
- add code-grounded notes tied to concrete engine files or generated artifacts;
- add new experiment logs with model, rounds, action counts, and constraints;
- add repeatable prompt or seed templates for real MiroFish use cases;
- add scorecards, compatibility notes, and operator checks that reduce wasted runs;
- document version drift when upstream MiroFish behavior changes.
See `CONTRIBUTING.md` for the expected format.
- MiroFish: the engine itself
- OASIS: the underlying simulation framework
- Zep: graph and memory service used by MiroFish
MIT.
