Welcome to Code-Interpreter π, an open-source tool that transforms natural language instructions into executable code using OpenAI, Gemini, Groq, Claude, DeepSeek, NVIDIA, Z AI, Browser Use, and HuggingFace models. It executes code safely and supports vision models for image processing.
It supports tasks such as file operations, image editing, video processing, data analysis, and more, and runs on Windows, macOS, and Linux.
Committed to being free and simple: no downloads or tedious setup required.
- Features
- Installation
- Usage
- Examples
- TUI Screenshots
- Settings
- Contributing
- Versioning
- Changelog
- License
- Acknowledgments
## Installation

To install Code-Interpreter, run the following command:

```shell
pip install open-code-interpreter
```

To run the interpreter with Python:

```shell
python interpreter.py -m 'z-ai-glm-5' -md 'code'
```

Make sure you install the required packages before running the interpreter and have your API keys set up in the `.env` file.
To get started with Code-Interpreter, follow these steps:
- Clone the repository:

  ```shell
  git clone https://github.com/haseeb-heaven/code-interpreter.git
  cd code-interpreter
  ```

- Install the required packages:

  ```shell
  pip install -r requirements.txt
  ```

- Copy the example environment file and add the keys you plan to use (on Windows use `copy`, on Linux/macOS use `cp`):

  ```shell
  copy .env.example .env
  ```

Follow the steps below to obtain and set up the API keys for each service:
Obtain the API keys:

- HuggingFace: Visit HuggingFace Tokens and get your Access Token.
- Google Gemini: Visit Google AI Studio and click on the Create API Key button.
- OpenAI: Visit the OpenAI Dashboard, sign up or log in, navigate to the API section in your account dashboard, and click on the Create New Key button.
- Groq AI: Visit the Groq AI Console, sign up or log in, and click on the Create API Key button.
- Anthropic AI: Visit the Anthropic AI Console, sign up or log in, and click on the Create Key button.
- NVIDIA API Catalog: Visit NVIDIA Build, create a key, and use `NVIDIA_API_KEY`.
- Z AI: Visit the Z AI Docs and use `Z_AI_API_KEY`.
- OpenRouter: Visit OpenRouter Keys and use `OPENROUTER_API_KEY`.
- Browser Use: Visit the Browser Use Docs and use `BROWSER_USE_API_KEY`.
Save the API keys:

Create a `.env` file in your project root directory and add the following lines:

```shell
export HUGGINGFACE_API_KEY="Your HuggingFace API Key"
export GEMINI_API_KEY="Your Google Gemini API Key"
export OPENAI_API_KEY="Your OpenAI API Key"
export GROQ_API_KEY="Your Groq API Key"
export ANTHROPIC_API_KEY="Your Anthropic API Key"
export DEEPSEEK_API_KEY="Your DeepSeek API Key"
export NVIDIA_API_KEY="Your NVIDIA API Key"
export Z_AI_API_KEY="Your Z AI API Key"
export OPENROUTER_API_KEY="Your OpenRouter API Key"
export BROWSER_USE_API_KEY="Your Browser Use API Key"
```

This interpreter supports offline models via LM Studio and Ollama. Follow the steps below:
- Download any model from LM Studio, such as Phi-2, Code-Llama, or Mistral.
- In the app, go to the Local Server option and select the model.
- Start the server and copy the URL (LM Studio will provide it).
- For Ollama, run `ollama serve` and copy the URL (Ollama will provide it).
- Open the config file `configs/local-model.json` and paste the URL into the `api_base` field.
- Set the model name to `local-model` and run the interpreter:

```shell
python interpreter.py -md 'code' -m 'local-model'
```

## Features

- Executes generated code from instructions
- Saves and edits code with an advanced editor
- Supports offline models via LM Studio and Ollama
- Command history and mode selection
- Multiple models and languages (Python/JavaScript)
- Code review before execution
- Safe sandbox execution with timeout and security
- Self-repair for failed executions
- Cross-platform (Windows/macOS/Linux)
- Integrates with HuggingFace, OpenAI, Gemini, Groq, Claude, DeepSeek, NVIDIA, Z AI, OpenRouter, Browser Use
- Versatile tasks: file ops, image/video editing, data analysis
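The self-repair feature can be pictured as a bounded retry loop: run the generated code and, on failure, hand the error back to the model for a fix, up to a small retry budget. A hypothetical sketch (all names are illustrative; the project's real implementation differs):

```python
# Hypothetical sketch of bounded self-repair; not the project's actual code.
def run_with_repair(code: str, execute, repair, max_retries: int = 2):
    """execute(code) -> (ok, output); repair(code, error) -> fixed code."""
    for attempt in range(max_retries + 1):
        ok, output = execute(code)
        if ok:
            return output
        if attempt < max_retries:
            code = repair(code, output)  # ask the model to fix the error
    raise RuntimeError(f"still failing after {max_retries} repairs: {output}")
```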
The interpreter displays the current safety mode in the session banner:
- `[SAFE MODE]` - Default mode with safety restrictions enabled (green)
- `[UNSAFE MODE ⚠️]` - Unrestricted mode (red with warning emoji)
The interpreter handles dangerous operations with a single confirmation prompt:
SAFE MODE:

- Dangerous operations are blocked entirely (no confirmation prompt)
- You will see: `Dangerous operation blocked in SAFE MODE.`
- No file deletion or modification operations are allowed

UNSAFE MODE:

- A single prompt for ALL operations (safe or dangerous)
- Safe operations: `Execute the code? (Y/N):`
- Dangerous operations: `⚠️ Dangerous operation. Continue? (Y/N):`
- Operations execute only if you confirm with `Y`

To enable unsafe mode:

```shell
python interpreter.py --unsafe
```

To enable safe mode:

```shell
python interpreter.py --sandbox
```

Warning: Use unsafe mode with caution. Dangerous operations can delete or modify your files.
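The gating described above can be sketched as a small decision function. This is purely illustrative; the interpreter's real dangerous-operation detection and prompts are more extensive than these example patterns:

```python
import re

# Illustrative patterns only; the real interpreter checks far more cases.
DANGEROUS_PATTERNS = [r"\brm\s+-rf\b", r"\bos\.remove\b", r"\bshutil\.rmtree\b"]

def is_dangerous(code: str) -> bool:
    return any(re.search(p, code) for p in DANGEROUS_PATTERNS)

def gate(code: str, safe_mode: bool, confirmed: bool) -> str:
    """Return 'blocked', 'run', or 'skipped' per the SAFE/UNSAFE rules above."""
    if is_dangerous(code) and safe_mode:
        return "blocked"  # Dangerous operation blocked in SAFE MODE.
    # Otherwise a single Y/N confirmation decides (safe or dangerous alike).
    return "run" if confirmed else "skipped"
```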
## Usage

To use Code-Interpreter, use the following command options:

- List of all programming languages:
  - `python` - Python programming language.
  - `javascript` - JavaScript programming language.
- List of all modes:
  - `code` - Generates code from your instructions.
  - `script` - Generates shell scripts from your instructions.
  - `command` - Generates single-line commands from your instructions.
  - `vision` - Generates a description of an image or video.
  - `chat` - Chat with your files and data.

See Models.MD for the complete list of supported models.
```shell
python interpreter.py
```

`python interpreter.py` opens the TUI and uses arrow-key navigation in a real terminal. The TUI falls back to plain-text prompts when stdin is piped or not attached to a terminal.

```shell
python interpreter.py --cli
```

`python interpreter.py --cli` automatically picks the best configured model from your `.env` file if you do not pass `-m`.

```shell
python interpreter.py --tui --sandbox
python interpreter.py --cli --no-sandbox
python interpreter.py --upgrade
```

Validate provider models:

```shell
python scripts/validate_models_cli.py --providers gemini,groq --tier stable --mode chat
python scripts/validate_models_cli.py --providers openai,anthropic,deepseek,huggingface --tier stable --mode chat
python scripts/validate_models_cli.py --providers nvidia,z-ai,browser-use,openrouter --tier stable --mode chat
```

Try the newer providers in chat mode:

```shell
python interpreter.py -m 'nvidia-nemotron' -md 'chat' -dc
python interpreter.py -m 'z-ai-glm-5' -md 'chat' -dc
python interpreter.py -m 'openrouter-free' -md 'chat' -dc
python interpreter.py -m 'openrouter-qwen3-coder' -md 'chat' -dc
python interpreter.py -m 'browser-use-bu-max' -md 'chat' -dc
```

Last verified model baseline: April 5, 2026.
The new TUI flow is designed for fast, keyboard-first setup. Run `python interpreter.py` or `python interpreter.py --tui` to launch the selector UI, then use the arrow keys to choose the mode, model, language, and runtime options.
Choose between code, chat, script, command, and vision before the session starts.
Pick your provider and model directly from the terminal without typing long aliases manually.
After entering the session, generated code and execution output remain inside the terminal flow with the same safer runtime behavior used by the CLI.
You can enable or disable sandbox mode directly from the terminal session. This makes it easy to switch between the safer isolated runtime and unrestricted execution when needed.
When sandbox mode is enabled, commands and generated code run with the same safer execution constraints used by the CLI.
When sandbox mode is disabled, execution runs in unsafe mode without sandbox restrictions, intended only for trusted local workflows.
Here are the available commands:
- `/save` - Save the last code generated.
- `/edit` - Edit the last code generated.
- `/execute` - Execute the last code generated.
- `/mode` - Change the mode of the interpreter.
- `/model` - Change the model of the interpreter.
- `/install` - Install a package from npm or pip.
- `/language` - Change the language of the interpreter.
- `/clear` - Clear the screen.
- `/help` - Display this help message.
- `/list` - List all the models/modes/languages available.
- `/version` - Display the version of the interpreter.
- `/exit` - Exit the interpreter.
- `/fix` - Fix the generated code for errors.
- `/settings` - Open interactive TUI settings when running with `--tui`.
- `/log` - Toggle different modes of logging.
- `/upgrade` - Upgrade the interpreter.
- `/prompt` - Switch between File and Input prompt modes.
- `/debug` - Toggle debug mode for debugging.
- `/sandbox` - Toggle the secure sandbox system.
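Internally, slash commands like these are commonly routed through a name-to-handler table. A hypothetical sketch of that pattern (handler names and outputs are illustrative, not the interpreter's actual code):

```python
# Hypothetical command dispatch table; illustrative only.
COMMANDS = {}

def command(name):
    """Register a handler for a slash command."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("/clear")
def clear():
    return "screen cleared"

@command("/version")
def version():
    return "Code-Interpreter 3.2.2"

def dispatch(line: str):
    """Look up the first token of the input; return None for unknown commands."""
    handler = COMMANDS.get(line.strip().split()[0])
    return handler() if handler else None
```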
## Settings

You can customize the settings of the current model from its `.json` config file. It contains all the necessary parameters, such as `temperature`, `max_tokens`, and more.
To integrate your own API server for OpenAI instead of the default server, follow these steps:
- Navigate to the `configs` directory.
- Open the configuration file for the model you want to modify (e.g., `gpt-3.5-turbo.json` or `gpt-4.json`).
- Add the following key-value pair to the JSON object:

  ```json
  "api_base": "https://my-custom-base.com"
  ```
- Save and close the file.
Now, whenever you select that model, the system will automatically use your custom server.
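To illustrate what that override does, here is a hedged sketch of how a client might resolve its endpoint from the config (the helper, the path, and the default URL are assumptions for illustration, not the project's actual code):

```python
import json

# Illustrative only: resolve the request endpoint from a model config,
# falling back to a default base URL when "api_base" is absent.
DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_endpoint(config: dict, path: str = "/chat/completions") -> str:
    base = config.get("api_base", DEFAULT_API_BASE).rstrip("/")
    return base + path

config = json.loads('{"model": "gpt-4", "api_base": "https://my-custom-base.com"}')
print(resolve_endpoint(config))  # -> https://my-custom-base.com/chat/completions
```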
- Copy an existing `.json` config file and rename it to `configs/hf-model-new.json`.
- Modify the parameters of the model, such as `start_sep` and `end_sep`.
- Set the model name from Hugging Face: `"model": "Model name here"`.
- Use it like this: `python interpreter.py -m 'hf-model-new' -md 'code'`.
- Make sure the `-m 'hf-model-new'` matches the config file name inside the `configs` folder.
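Such a `configs/hf-model-new.json` might look like the following. The exact schema is an assumption based on the fields this README mentions (`model`, `start_sep`, `end_sep`, `temperature`, `max_tokens`, `api_base`); check an existing file in `configs/` for the authoritative shape:

```json
{
  "model": "Model name here",
  "start_sep": "```",
  "end_sep": "```",
  "temperature": 0.1,
  "max_tokens": 2048
}
```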
- Go to the `scripts` directory and run the `config_builder` script.
- For Linux/macOS run `config_builder.sh`; for Windows run `config_builder.bat`.
- Follow the instructions and enter the model name and parameters.
- The script will automatically create the `.json` file for you.
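What the script produces can be approximated in a few lines of Python. This sketch is an assumption about the output format (field names follow this README); the real `config_builder` script may prompt interactively and write additional fields:

```python
import json
from pathlib import Path

# Illustrative approximation of config_builder's output; not the real script.
def build_config(alias: str, model: str, temperature: float = 0.1,
                 max_tokens: int = 2048, config_dir: str = "configs") -> Path:
    Path(config_dir).mkdir(exist_ok=True)
    path = Path(config_dir) / f"{alias}.json"
    path.write_text(json.dumps(
        {"model": model, "temperature": temperature, "max_tokens": max_tokens},
        indent=2))
    return path

created = build_config("hf-model-new", "Model name here")
print(created)  # writes configs/hf-model-new.json
```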
## Contributing

If you're interested in contributing to Code-Interpreter, we'd love to have you! Please fork the repository and submit a pull request. We welcome all contributions and are always eager to hear your feedback and suggestions for improvements.
## Versioning

Current version: 3.2.2

Quick highlights:

- v3.2.2 - Added sandbox security, improved Code Interpreter architecture, fixed execution language routing, restored sandbox toggle compatibility, added subprocess security delegation, and improved safe-mode timeout handling.
- v3.2.1 - Added a mode indicator (`[SAFE MODE]` or `[UNSAFE MODE ⚠️]`) to the session banner, implemented strict safety blocking for dangerous operations in SAFE MODE, and added a single confirmation prompt for operations in UNSAFE MODE.
- v3.1.0 - Added OpenRouter free-model aliases, made `openrouter/free` the default OpenRouter selection, improved simple-task code generation, added fresh TUI screenshots, and prepared release packaging assets.
- v3.0.0 - Added a default execution safety sandbox, a dangerous command/code circuit breaker, bounded ReACT-style repair retries after failures, clearer execution feedback, and polished CLI/TUI runtime output.
- v2.4.1 - Added NVIDIA, Z AI, Browser Use, `.env.example`, and `--cli`/`--tui` startup flows.
- v2.4.0 - 2026 model refresh across OpenAI, Gemini, Anthropic, Groq, and DeepSeek.

Full release history: CHANGELOG.md
## License

This project is licensed under the MIT License. For more details, please refer to the LICENSE file.
Please note the following additional licensing details: This project is a client interface only. All models are provided by their respective third-party providers and subject to their own terms of service.
## Acknowledgments

- We would like to express our gratitude to HuggingFace, Google, META, OpenAI, Groq AI, and Anthropic AI for providing the models.
- A special shout-out to the open-source community. Your continuous support and contributions are invaluable to us.
This project is created and maintained by Haseeb-Heaven.