Releases: ashiras/multi-ai-cli
v0.14.0 - Introduces the new dual-mode CLI architecture for multi-ai-cli.
What’s included
- add startup mode dispatch based on `stdin.isatty()`
- keep the existing stateful Interactive REPL mode intact
- add a new stateless Unix-style Filter mode (`stdin -> AI -> stdout`)
- add a dedicated `filter_mode.py` runner
- support Filter mode with exactly one AI agent plus `-m` and `-r`
- reject interactive-only features in Filter mode such as `-w`, `-e`, `@sequence`, `->`, `||`, and non-AI command adapters
- ensure Filter mode writes only the final AI result to stdout
- route diagnostics and validation errors to stderr
- suppress interactive auto-continue progress output during Filter mode execution
- extract shared reference-file loading for reuse across prompt builders
- add/update validation and checker scripts for the new behavior
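The dispatch decision above can be sketched in a few lines. This is an illustrative stand-in, not the actual multi-ai-cli entry point; the function name and return values are hypothetical:

```python
import sys

def pick_mode(stream=None) -> str:
    """Return 'repl' when attached to a terminal, 'filter' when piped.

    Illustrative sketch of the startup dispatch: a terminal gets the
    stateful Interactive REPL, a pipe gets the stateless Filter mode.
    """
    stream = stream or sys.stdin
    return "repl" if stream.isatty() else "filter"
```

Because `isatty()` is false for pipes and redirected files, `echo "apple" | multi-ai ...` would land in Filter mode without any extra flags.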
Why
This change allows multi-ai-cli to be used in two distinct ways:
- as a rich interactive REPL for multi-step AI workflows
- as a shell-friendly filter for scripting and pipelines
Examples now supported:
```
echo "apple" | multi-ai @gpt -m "Translate this to Japanese only."
cat spec.md | multi-ai @gpt -m "Write a design doc" -r existing_code.py > design.md
```
Notes
This PR intentionally keeps Filter mode minimal and focused:
- single AI agent only
- stateless execution
- stdin as primary input
- shell redirection for output files
It does not expand Filter mode to adapters or orchestration features.
v0.13.0 — Agent/Engine Separation and Read-Only GitHub Adapter
This release introduces two major upgrades to multi-ai-cli:
- Agent/Engine Separation
- Read-only GitHub Adapter
Together, these changes make the CLI significantly more flexible as a multi-agent development environment and expand its external interface surface beyond AI engines.
Highlights
Agent/Engine Separation
multi-ai-cli now separates:
- models (`[MODELS]`)
- physical execution backends (`[ENGINE.*]`)
- logical stateful agents (`[AGENT.*]`)
This makes it possible to define reusable engine backends and bind them to multiple logical agents such as:
`@gpt`, `@gpt.code`, `@claude.review`, `@gemini.chat`, `@local.test`
Key benefits:
- cleaner configuration structure
- reusable backend definitions
- agent-level conversation history separation
- clearer distinction between provider/backend and logical role
- better scalability for multi-agent orchestration
Legacy config format is still supported through a compatibility layer.
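The engine/agent split and per-agent history separation can be sketched as follows. The class and field names here are hypothetical illustrations, not the actual multi-ai-cli classes:

```python
from dataclasses import dataclass, field

@dataclass
class Engine:
    """A reusable physical backend, as in [ENGINE.*]."""
    provider: str   # e.g. "openai"
    model: str      # e.g. "gpt-4o"

@dataclass
class Agent:
    """A logical stateful agent, as in [AGENT.*]."""
    name: str       # e.g. "@gpt.code"
    engine: Engine  # shared backend definition
    history: list = field(default_factory=list)  # per-agent conversation state

# One engine, two logical agents bound to it:
gpt_engine = Engine("openai", "gpt-4o")
code_agent = Agent("@gpt.code", gpt_engine)
chat_agent = Agent("@gpt", gpt_engine)

# Talking to one agent leaves the other's history untouched:
code_agent.history.append(("user", "Implement the parser"))
```

The point of the separation: backends are defined once and reused, while conversation state lives with the logical role.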
New GitHub Adapter (@github.*)
This release adds a new read-only GitHub adapter with native CLI commands for inspecting repositories directly from the terminal.
New commands:
`@github.repo`, `@github.tree`, `@github.file`, `@github.issue`, `@github.issues`
Supported capabilities:
- repository metadata read
- directory/path listing
- file content read
- single issue read
- issue list read
Additional behavior:
- supports private repositories via token authentication
- filters pull requests out of `@github.issues`
- rejects pull requests in `@github.issue` (issue-only in v1)
- works naturally with `-w` for saving outputs into local artifacts
This makes GitHub a first-class external interface in multi-ai-cli, alongside the existing Shell and Figma adapters.
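The pull-request filtering is worth a note: GitHub's REST issues endpoint returns pull requests mixed in with issues, and PR entries are distinguished by the presence of a `pull_request` key. A minimal sketch of the filter (the helper name is hypothetical):

```python
def split_issues(items):
    """Drop pull requests from a GitHub /issues listing.

    The REST API marks PR entries with a 'pull_request' key,
    so plain issues are simply the items without it.
    """
    return [it for it in items if "pull_request" not in it]

sample = [
    {"number": 40, "title": "Filter mode"},
    {"number": 41, "title": "Fix CI", "pull_request": {"url": "..."}},
]
```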
Configuration Changes
New Agent/Engine layout
The recommended config structure is now:
`[MODELS]`, `[RUNTIME]`, `[ENGINE.*]`, `[AGENT.*]`
This replaces the older provider-only mental model with a more explicit architecture.
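Since the config is an INI file, the layout above maps directly onto `configparser` sections. A sketch of how the new sections could be discovered; the keys inside each section are made up for illustration:

```python
import configparser

raw = """
[MODELS]
default = gpt

[ENGINE.openai]
model = gpt-4o

[AGENT.gpt.code]
engine = openai
"""

cfg = configparser.ConfigParser()
cfg.read_string(raw)

# Engines and agents are just section-name prefixes:
engines = [s for s in cfg.sections() if s.startswith("ENGINE.")]
agents = [s for s in cfg.sections() if s.startswith("AGENT.")]
```

Each `[AGENT.*]` section binds a logical agent to one of the reusable `[ENGINE.*]` backends.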
New GitHub config section
Added support for:
```ini
[GITHUB]
token = ...
api_base_url = https://api.github.com
```

Environment variable overrides:
- GITHUB_TOKEN
- GITHUB_API_BASE_URL
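The override precedence (environment variable first, INI value as fallback) amounts to the following; the helper name is hypothetical:

```python
import os

def github_token(ini_value=None):
    """GITHUB_TOKEN from the environment wins over the INI setting."""
    return os.environ.get("GITHUB_TOKEN", ini_value)
```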
Internal Improvements
- command dispatch updated for `@github.*`
- parser validation updated to recognize GitHub adapter commands
- welcome banner/help text updated for the new architecture
- registry reset behavior added for safer re-initialization and testing
- improved separation between adapter commands and agent commands
Example Workflows
Fetch a GitHub issue and use it as AI input:
```
@github.issue --repo ashiras/multi-ai-cli --number 40 -w issue40.md
@gpt.plan "Summarize the issue and propose an implementation plan." -r issue40.md
```

Fetch a file and review it with an AI agent:
```
@github.file --repo ashiras/multi-ai-cli --path src/multi_ai_cli/handlers.py -w handlers.py
@claude.review "Review this file and suggest improvements." -r handlers.py
```
Notes
- GitHub support in this release is read-only
- pull request support is not included in v0.13.0
- current implementation expects a GitHub token for `@github.*` commands
- legacy configuration remains supported, but the new Agent/Engine format is recommended
Thanks
This release marks an important architectural step for multi-ai-cli:
from a simple multi-model CLI into a more explicit multi-agent + multi-adapter orchestration environment.
As always, feedback, bug reports, and ideas are welcome.
🚀 Release v0.12.0: Figma Integration, Shell Orchestration & Auto-Continue
Welcome to Multi-AI CLI v0.12.0! This major update bridges the gap between design workflows, local shell execution, and multi-agent AI orchestration. We've transformed the CLI into a true "strategic hub" for developers and designers alike.
✨ What's New in v0.12.0
🎨 1. Figma Adapter (@figma)
Seamlessly integrate your AI workflow with Figma design data. Bridge the gap between "Design as Code" and AI generation.
- `@figma.pull`: Fetch design files, specific nodes, or pages directly from Figma's REST API. Output as raw or AI-friendly normalized JSON.
- `@figma.push`: Send local content (Markdown, generated JSON specs) directly to a Figma-side plugin bridge, enabling AI-driven content updates.
⚙️ 2. Native Shell Orchestration (@sh)
Integrate directly with your local environment. Run commands, execute scripts, and feed the output right back into your AI sequence.
- Execute Scripts: Run `.py`, `.sh`, `.ts`, and more natively (`@sh -r script.py`).
- Capture Output: Save `stdout`/`stderr` as human-readable text or structured JSON (`-w output.json`).
- Complex Commands: Support for pipes and environment variables using the `--shell` flag.
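Extension-based script dispatch of this kind is typically a small lookup table in front of `subprocess`. A sketch under the assumption that the runner map is keyed by file suffix; the actual mapping in multi-ai-cli may differ:

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical runner map: file suffix -> interpreter command prefix.
RUNNERS = {
    ".py": [sys.executable],
    ".sh": ["bash"],
    ".rb": ["ruby"],
}

def run_script(path: str) -> str:
    """Run a script with the interpreter chosen from its extension
    and capture stdout, in the spirit of `@sh -r script.py`."""
    cmd = RUNNERS[Path(path).suffix] + [path]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout
```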
🔄 3. Automatic Response Continuation
Never miss a word from your AI again.
- If an AI's response hits its maximum token limit (e.g., generating massive code blocks), the CLI now automatically detects it and instructs the AI to seamlessly continue from exactly where it stopped.
- Fully configurable via `multi_ai_cli.ini` (`auto_continue_max_rounds`, `auto_continue_tail_chars`).
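The continuation loop can be sketched as below. This is a simplified stand-in: real truncation detection would inspect the provider's finish reason, while here a `[TRUNCATED]` sentinel marks a cut-off reply, and `ask` stands in for the engine call. The parameter names mirror `auto_continue_max_rounds` and `auto_continue_tail_chars`:

```python
SENTINEL = "[TRUNCATED]"

def auto_continue(ask, first_reply, max_rounds=3, tail_chars=200):
    """While a reply looks truncated, re-prompt with the tail of the
    text so far so the model resumes from where it stopped."""
    text = first_reply
    rounds = 0
    while text.endswith(SENTINEL) and rounds < max_rounds:
        text = text[: -len(SENTINEL)]          # drop the truncation marker
        tail = text[-tail_chars:]              # context for the resume prompt
        text += ask(f"Continue exactly from: ...{tail}")
        rounds += 1
    return text
```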
💻 Example Workflow: The Power of v0.12.0
Combine the new features in @sequence (HAN Syntax) to build incredible pipelines:
```
# 1. Pull design data from Figma
@figma.pull --file "FILE_KEY" --node "NODE_ID" -w component.json
->
# 2. AI generates React code based on the design
@claude "Generate a React component based on this design data." -r component.json -w:code Button.tsx
->
# 3. Run a local linter/formatter on the generated code
@sh "npx prettier --write Button.tsx"
```
🍎 Installation & Upgrade
Download the pre-built binary for macOS / Linux:
Check the assets below to download the executable.
```
chmod +x multi-ai
sudo mv multi-ai /usr/local/bin/
```

For Python Developers (Source):

```
uv tool install multi-ai-cli  # Or your preferred uv run method
```

View the full documentation in the README.
Multi-AI CLI v0.11.0: The Local-Loop Evolution
This major update transforms the CLI from a simple AI client into a robust AI Orchestration Hub. By bridging Local LLMs and the Local Shell, we enable a fully private, autonomous development loop.
🚀 Key Updates:
- `local` Engine: Privacy-first AI integration via Ollama/LM Studio (OpenAI-compatible API).
- `sh` Shell Orchestration: Direct terminal command execution with Runner Map (auto-detects Python, Bash, Ruby, etc.).
- Unified Dispatcher: Success/failure-aware command routing for reliable sequence pipelines.
- Improved I/O: Refined `-w:raw` and `-w:code` behavior for UNIX-like predictability.
Build the future: Think (Local) -> Write -> Execute -> Reflect.
Multi-AI CLI v0.9.1
This merge marks a major update to Multi-AI CLI, bringing it to version v0.9.1. The primary focus of this release is the introduction of complex multi-agent workflow orchestration via the HAN (Human-Agent-Network) Syntax and a complete overhaul of the file I/O system to follow a "predictable" UNIX-like philosophy.
Key Features & Changes
1. Workflow Orchestration (@sequence -e)
The @sequence command now supports an editor mode (-e), enabling users to define and execute sophisticated pipelines using the HAN Syntax.
- Sequential Execution (`->`): Define clear dependencies where one step follows another.
- Parallel Execution (`[ ... || ... ]`): Execute multiple AI tasks concurrently using `ThreadPoolExecutor`.
- Artifact Relay: Files written via `-w` in one step can be immediately consumed as input via `-r` in subsequent steps.
- Cascade Stop: Automatically halts the entire sequence if any individual task or step fails, preventing downstream errors.
- Thread-Safe UI: Uses `threading.Lock` to synchronize terminal output, preventing log "spaghetti" during parallel execution.
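The combination of `ThreadPoolExecutor` for the `||` branches and a shared lock around terminal output can be sketched as follows (task names and the print format are illustrative, not the actual sequence runner):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

print_lock = threading.Lock()  # serializes terminal writes across branches
results = {}

def run_task(name):
    """One branch of a hypothetical [ a || b ] parallel block."""
    result = f"{name}: done"
    with print_lock:           # whole log lines come out atomically
        print(result)
    results[name] = result

# Run both branches concurrently; the with-block waits for completion.
with ThreadPoolExecutor(max_workers=2) as pool:
    for task in ["@gpt", "@claude"]:
        pool.submit(run_task, task)
```

Without the lock, two branches printing at once can interleave mid-line, which is exactly the log "spaghetti" the release notes mention.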
2. Refined File I/O (Raw-by-Default)
The I/O behavior has been redesigned to optimize both documentation and code generation workflows.
- Raw-by-Default (`-w <file>`): Saves the full AI response exactly as received. This ensures the integrity of READMEs, documentation, and context-heavy responses.
- Code Modifier (`-w:code <file>`): An explicit modifier to extract and concatenate fenced code blocks.
- Fallback Logic: If `:code` is specified but no code blocks are found, the system gracefully falls back to saving the full response.
- Multi-Input Support: Multiple `-r` flags can now be used to inject several local files into a single prompt context.
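A line-by-line extraction with the fallback described above might look like this. It is a sketch, not the exact multi-ai-cli parser:

```python
def extract_code(text: str) -> str:
    """Collect the bodies of fenced code blocks, scanning line by line.

    If no fenced blocks are found, fall back to returning the full
    response, matching the -w:code fallback behavior.
    """
    blocks, buf, inside = [], [], False
    for line in text.splitlines():
        if line.startswith("```"):
            if inside:                      # closing fence: flush the block
                blocks.append("\n".join(buf))
                buf = []
            inside = not inside
            continue
        if inside:
            buf.append(line)
    return "\n\n".join(blocks) if blocks else text
```

Because fences are matched per line rather than with a greedy regex, backticks inside a block's content cannot prematurely end it.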
3. Fixed Prompt Construction Priority
To ensure stable inference, the final prompt sent to the AI is now assembled in a strictly defined order:
- A1 (Title/Context)
- Message (`-m`)
- Editor (`-e`)
- Files (`-r`)
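The fixed assembly order amounts to a simple concatenation in that priority. A minimal sketch; the separator and file-header format are assumptions, not the actual prompt layout:

```python
def build_prompt(title=None, message=None, editor=None, files=()):
    """Assemble the final prompt in the fixed order:
    A1 title/context, then -m message, then -e editor text,
    then -r file contents."""
    parts = []
    if title:
        parts.append(title)
    if message:
        parts.append(message)
    if editor:
        parts.append(editor)
    for name, content in files:
        parts.append(f"--- {name} ---\n{content}")
    return "\n\n".join(parts)
```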
4. Bug Fixes (Inception Bug)
- Markdown Parsing: Resolved the "Inception Bug" where the presence of triple backticks (```) inside the generated code would prematurely truncate the output. The parser now uses a robust line-by-line approach compliant with Markdown specifications.
Execution Sample (HAN Syntax)
```
# Step 1: Design
@gemini "Propose a system architecture" -w design.md
->
# Step 2: Parallel Review & Coding
[
@gpt "Implement the code" -r design.md -w:code app.py
||
@claude "Review the specification" -r design.md -w review.txt
]
->
# Step 3: Final Integration
@grok "Provide a final evaluation" -r app.py -r review.txt -w final.md
```
Tech Stack
- Python 3.x
- `concurrent.futures.ThreadPoolExecutor` (Concurrency)
- `threading.Lock` (UI Synchronization)
- `re` (Modifier Parsing)
- `os.fsync` (I/O Consistency)
Multi-AI CLI v0.5.4
Release Notes - v0.5.4 🚀
This release introduces a major architectural overhaul to improve stability, security, and scalability, along with several critical bug fixes.
🌟 Key Changes
🏗️ Architectural Overhaul (AIEngine Class)
- Refactored the engine management to a class-based architecture (`AIEngine`). This ensures robust state management across all providers (Gemini, GPT, Claude, Grok) and simplifies the addition of future models.
🐞 Bug Fixes
- Fixed `@efficient` Parsing: Resolved an issue where specifying the `all` target caused the command to misinterpret it as a filename.
- Gemini Session Reset: Fixed a scoping bug where `@scrub gemini` failed to correctly clear the conversation context.
- System Prompt Duplication: Eliminated an issue in GPT and Grok where system instructions were being duplicated in the conversation history.
🔒 Security & UX Improvements
- Directory Traversal Protection: Implemented strict path resolution to prevent unauthorized access outside of designated work directories.
- Environment Variable Support: API keys can now be loaded via environment variables (e.g., `GEMINI_API_KEY`), prioritizing them over INI file settings.
- HUD Refresh Logic: Improved terminal width detection to ensure the "thinking..." status line is cleared accurately on all screen sizes.
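The directory traversal protection boils down to resolving the user path and rejecting anything outside the work directory. A sketch using `pathlib` (Python 3.9+ for `is_relative_to`); the helper name is hypothetical, not the actual implementation:

```python
from pathlib import Path

def safe_resolve(workdir: str, user_path: str) -> Path:
    """Resolve a user-supplied path strictly inside the work directory.

    Symlinks and '..' segments are collapsed by resolve(), so an
    escape attempt like '../../etc/passwd' is caught after resolution.
    """
    base = Path(workdir).resolve()
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):
        raise ValueError(f"path escapes work directory: {user_path}")
    return target
```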
📦 Installation & Update
- Download the binary for your OS from the Releases page.
- Replace your existing binary:
```
chmod +x multi-ai-macos
sudo mv multi-ai-macos /usr/local/bin/multi-ai
```

- Verify the version: `multi-ai --version`

Multi-AI CLI v0.4.1
- Quad-Engine support (Gemini, GPT, Claude, Grok)
- Real-time HUD logging (chat interface)
- Stealth Mode support
- Bug fixes for code block formatting
Multi-AI CLI First Beta
v0.1.0-beta — Adds Google Generative AI and OpenAI to requirements.