Agent Zero CLI

AI coding agent with security interceptor - your prompts, your machine, your control.

PyPI version Python 3.10+ License: MIT

Quick Demo

Agent Zero CLI Demo

Quick Install

pip install agentzero-cli
a0

Why Agent Zero CLI?

  • Security First - Every command goes through approval. Dangerous patterns blocked automatically.
  • Local LLM Support - Run with Ollama, LM Studio - your data never leaves your machine.
  • Multi-Backend - Switch between Local LLM, OpenRouter, or deterministic mode.
  • Real Tool Execution - Actually runs shell commands, reads/writes files.

Features in Action

Security Interceptor

Dangerous commands like rm -rf / and fork bombs are automatically blocked:

Security Demo

Code Generation

Generate code with syntax highlighting and file save approval:

Code Generation Demo

Risk Explanation

Press [E] to ask AI to explain the risk before approving:

Risk Explanation Demo

Model Switching (LM Studio)

Use any model from your local LM Studio server:

Model Switch Demo

Multi-Backend Support

Automatic fallback chain with Local LLM as the safest option:

Backends Demo


Backend Configuration

Priority  Backend         Data Location  Setup
1         Local LLM       Your machine   LOCAL_LLM_URL=http://localhost:1234/v1
2         Agent Zero API  Your server    AGENTZERO_API_URL=http://...
3         OpenRouter      Cloud          OPENROUTER_API_KEY=sk-or-...
4         Deterministic   Local          No config needed
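The priority order above can be sketched as a simple selection loop. This is an illustrative sketch only, assuming the environment variables from the table; the function name and return values are hypothetical, not Agent Zero CLI's internal API.

```python
import os

def pick_backend() -> str:
    """Return the first configured backend, mirroring the priority table.

    Illustrative sketch: the env-var names come from the table above,
    but the function itself is not the package's actual code.
    """
    if os.environ.get("LOCAL_LLM_URL"):
        return "local_llm"        # priority 1: data stays on your machine
    if os.environ.get("AGENTZERO_API_URL"):
        return "agentzero_api"    # priority 2: your own server
    if os.environ.get("OPENROUTER_API_KEY"):
        return "openrouter"       # priority 3: cloud
    return "deterministic"        # priority 4: no config needed
```

With no configuration at all, the chain bottoms out at the deterministic backend, so the tool always has something to fall back to.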

Local LLM Setup (Recommended)

# LM Studio
export LOCAL_LLM_URL=http://localhost:1234/v1
export LOCAL_LLM_MODEL=mistralai/ministral-3-3b  # optional

# Ollama
export LOCAL_LLM_URL=http://localhost:11434/v1
export LOCAL_LLM_MODEL=llama3.2:3b

# Remote LM Studio (e.g., on another machine)
export LOCAL_LLM_URL=http://192.168.1.100:1234/v1

a0

Your prompts never leave your network.
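Both LM Studio and Ollama expose an OpenAI-compatible /v1 endpoint, which is why a single LOCAL_LLM_URL works for either. A minimal request against such a server can be sketched as below; the helper name is hypothetical and the payload shape assumes the standard /v1/chat/completions contract, not Agent Zero CLI's actual client code.

```python
import json
import os
import urllib.request

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat request for a local /v1 server.

    Hypothetical helper: defaults mirror the Ollama example above.
    """
    base = os.environ.get("LOCAL_LLM_URL", "http://localhost:1234/v1")
    payload = {
        "model": os.environ.get("LOCAL_LLM_MODEL", "llama3.2:3b"),
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Sending it requires a running local server:
# with urllib.request.urlopen(build_chat_request("hello")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

Because the URL points at your own machine or LAN, the prompt in the payload never crosses to a third-party cloud.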

OpenRouter (Cloud)

export OPENROUTER_API_KEY=sk-or-...
a0

Security Modes

Mode      Read-only     Write ops  Dangerous
paranoid  Confirm       Confirm    BLOCKED
balanced  Auto-approve  Confirm    BLOCKED
god_mode  Auto          Auto       BLOCKED

Dangerous patterns are always blocked, regardless of mode:

  • rm -rf /, rm -rf ~
  • Fork bombs :(){ :|:& };:
  • mkfs.*, dd if=/dev/
  • Download and execute (curl | sh)
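The mode matrix and the always-on blocklist combine into a single decision per command. The sketch below is illustrative, assuming a regex blocklist and a hypothetical decide() helper; the real interceptor's patterns and names may differ.

```python
import re

# Illustrative blocklist covering the patterns listed above;
# the actual interceptor's rules may differ.
DANGEROUS_PATTERNS = [
    r"rm\s+-rf\s+(/|~)",            # recursive delete of root or home
    r":\(\)\s*\{\s*:\|:&\s*\};:",   # classic fork bomb
    r"\bmkfs\.",                    # filesystem format
    r"\bdd\s+if=/dev/",             # raw device writes
    r"curl\s+[^|]*\|\s*(sh|bash)",  # download-and-execute
]

READ_ONLY = ("ls", "cat", "grep", "head", "tail", "pwd")  # assumed set

def decide(command: str, mode: str) -> str:
    """Return 'block', 'confirm', or 'auto' for a shell command."""
    if any(re.search(p, command) for p in DANGEROUS_PATTERNS):
        return "block"  # dangerous patterns are blocked in every mode
    if mode == "god_mode":
        return "auto"
    words = command.split()
    if mode == "balanced" and words and words[0] in READ_ONLY:
        return "auto"   # balanced auto-approves read-only ops
    return "confirm"    # paranoid confirms everything non-dangerous
```

Note that the blocklist check runs before the mode check, which is what makes dangerous commands un-approvable even in god_mode.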

Keyboard Shortcuts

Key          Action
F1           Help
F2           Menu
F3           Mini-game
F10          Quit
Enter        Send
Shift+Enter  New line

Development

git clone https://github.com/vizi2000/agentzero-cli
cd agentzero-cli
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

Roadmap

  • TUI Interface (Textual)
  • Security interceptor with blocklist
  • Local LLM support (Ollama, LM Studio)
  • OpenRouter integration
  • Real tool execution
  • Risk explanation
  • PyPI package
  • MCP protocol support
  • VS Code extension

License

MIT License - see LICENSE


About

Developed by Wojciech Wiesner | wojciech@theones.io

TheOnes - Cutting edge tech powered by Neurodivergent minds

Neurodivergent? If you thrive on patterns, hyperfocus, and rapid iteration - let's build together.

GitHub Website News
