MiroFish is a next-generation AI prediction engine powered by multi-agent technology. From real-world seed information (such as breaking news, policy drafts, or financial signals), it automatically constructs a high-fidelity parallel digital world. Within this space, thousands of intelligent agents, each with an independent personality, long-term memory, and behavioral logic, interact freely and undergo social evolution. You can dynamically inject variables from a "God's-eye view" to precisely simulate future trajectories: rehearse the future in a digital sandbox, and make winning decisions after countless simulations.
You only need to: upload seed materials (data analysis reports or interesting novel stories) and describe your prediction requirements in natural language.
MiroFish will return: a detailed prediction report and a deeply interactive, high-fidelity digital world.
MiroFish is dedicated to creating a swarm intelligence mirror that maps reality. By capturing the collective emergence triggered by individual interactions, we break through the limitations of traditional prediction:
- At the Macro Level: We are a rehearsal laboratory for decision-makers, allowing policies and public relations to be tested at zero risk
- At the Micro Level: We are a creative sandbox for individual users. Whether deducing novel endings or exploring imaginative scenarios, everything can be fun, playful, and accessible
From serious predictions to playful simulations, we let every "what if" see its outcome, making it possible to predict anything.
Visit our online demo environment to experience a prediction simulation on trending public opinion events: mirofish-live-demo
Click the image to watch the full demo video of prediction using the BettaFish-generated "Wuhan University Public Opinion Report"
Click the image to watch MiroFish's deep prediction of the lost ending based on hundreds of thousands of words from the first 80 chapters of "Dream of the Red Chamber"
Financial Prediction, Political News Prediction, and more examples coming soon...
- Graph Building: Real-world seed extraction & Individual/collective memory injection & GraphRAG construction
- Environment Setup: Entity relationship extraction & Persona generation & Environment configuration agent injects simulation parameters
- Start Simulation: Dual-platform parallel simulation & Auto-parse prediction requirements & Dynamic temporal memory updates
- Report Generation: ReportAgent with a rich toolset for deep interaction with the post-simulation environment
- Deep Interaction: Chat with any agent in the simulated world & Interact with ReportAgent
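The five stages above hand artifacts from one to the next. The sketch below is purely illustrative: every name in it (`SimulationState`, `build_graph`, the toy keyword extraction, and so on) is hypothetical and does not reflect MiroFish's actual API; it only shows the shape of the data flow.

```python
from dataclasses import dataclass, field

@dataclass
class SimulationState:
    """Hypothetical container for the pipeline's intermediate artifacts."""
    graph: dict = field(default_factory=dict)     # seed entities + injected memories
    personas: list = field(default_factory=list)  # generated agent personas
    events: list = field(default_factory=list)    # per-round simulation log

def build_graph(seed_text):
    # Stage 1 (Graph Building): extract entities from seed material (toy keyword split)
    return {"entities": seed_text.split()}

def setup_environment(graph):
    # Stage 2 (Environment Setup): derive one persona per extracted entity
    return [f"agent_{e}" for e in graph["entities"]]

def run_simulation(state, rounds):
    # Stage 3 (Start Simulation): each round, every agent acts and the log grows
    for r in range(rounds):
        state.events.append(f"round {r}: {len(state.personas)} agents acted")

def generate_report(state):
    # Stage 4 (Report Generation): summarize the post-simulation environment
    return f"{len(state.events)} rounds simulated with {len(state.personas)} agents"

state = SimulationState()
state.graph = build_graph("policy draft")
state.personas = setup_environment(state.graph)
run_simulation(state, rounds=3)
print(generate_report(state))  # 3 rounds simulated with 2 agents
```

The real stages are far richer (GraphRAG construction, persona generation, temporal memory updates); the intent here is only to show how each stage consumes the previous stage's artifact.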
| Tool | Version | Description | Check Installation |
|---|---|---|---|
| Node.js | 18+ | Frontend runtime, includes npm | node -v |
| Python | ≥3.11, ≤3.12 | Backend runtime | python --version |
| uv | Latest | Python package manager | uv --version |
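The Python version constraint (≥3.11, ≤3.12) can also be checked programmatically before installing. The `python_ok` helper below is hypothetical, not part of MiroFish; it is just a sketch of the range check:

```python
import sys

def python_ok(version):
    """Return True if (major, minor) falls in the supported 3.11-3.12 range."""
    return (3, 11) <= version[:2] <= (3, 12)

print(python_ok((3, 11, 5)))  # True
print(python_ok((3, 13, 0)))  # False

# Check the interpreter actually running this script (result depends on your environment)
print(python_ok(sys.version_info))
```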
```shell
# Copy the example configuration file
cp .env.example .env

# Edit the .env file and fill in the required API keys
```

Required Environment Variables:
```shell
# LLM API Configuration (supports any LLM API in OpenAI SDK format)
# Recommended: Alibaba Qwen-plus model via the Bailian Platform: https://bailian.console.aliyun.com/
# Note: consumption can be high; try simulations with fewer than 40 rounds first
LLM_API_KEY=your_api_key
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus

# Zep Cloud Configuration
# The free monthly quota is sufficient for basic usage: https://app.getzep.com/
ZEP_API_KEY=your_zep_api_key
```
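Since the file is plain KEY=VALUE pairs, any OpenAI-SDK-compatible client can consume these values directly. Below is a minimal sketch of parsing such a block; the `parse_env` helper is hypothetical (in practice a library like python-dotenv handles this), and the commented-out client construction at the end is the standard OpenAI SDK pattern, not MiroFish-specific code:

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines, skipping comments and blank lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# LLM API Configuration
LLM_API_KEY=your_api_key
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus
"""
config = parse_env(sample)
print(config["LLM_MODEL_NAME"])  # qwen-plus

# With these values, an OpenAI-SDK-compatible client would be constructed as:
# from openai import OpenAI
# client = OpenAI(api_key=config["LLM_API_KEY"], base_url=config["LLM_BASE_URL"])
```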
```shell
# One-click install all dependencies (root + frontend + backend)
npm run setup:all
```

Or install step by step:

```shell
# Install Node dependencies (root + frontend)
npm run setup

# Install Python dependencies (backend, auto-creates virtual environment)
npm run setup:backend
```
```shell
# Start both frontend and backend (run from project root)
npm run dev
```

Service URLs:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5001
Start Individually:

```shell
npm run backend   # Start backend only
npm run frontend  # Start frontend only
```
```shell
# 1. Configure environment variables (same as source deployment)
cp .env.example .env

# 2. Pull the image and start
docker compose up -d
```

By default, this reads `.env` from the root directory and maps ports 3000 (frontend) / 5001 (backend).
A mirror address for faster pulling is provided as comments in `docker-compose.yml`; replace it if needed.
The MiroFish team is hiring for full-time and internship positions. If you're interested in multi-agent applications, feel free to send your resume to: mirofish@shanda.com
MiroFish has received strategic support and incubation from Shanda Group!
MiroFish's simulation engine is powered by OASIS. We sincerely thank the CAMEL-AI team for their open-source contributions!






