# GPP

A powerful terminal-based AI coding assistant that uses LM Studio to help you create, read, and edit code files efficiently. Now with support for multiple file operations and global terminal access.
## Features

- **Create** - Generate new code files from natural language prompts
- **Read** - Analyze and explain existing code files
- **Edit** - Modify existing files based on instructions
- **Multiple Files** - Create, read, and edit multiple files in one command
- **Model Management** - List, load, and unload models from LM Studio
- **Multi-language** - Supports C, C++, Java, Python, HTML, CSS, PHP, JavaScript, TypeScript
- **Global Access** - Use `gpp` from any terminal directory after installation
## Prerequisites

- LM Studio running locally on `http://localhost:1234`
- Python 3.6+ with the `requests` library:

```bash
pip3 install requests
```
## Installation

### Quick Install

Linux/macOS:

```bash
chmod +x install.sh
./install.sh
```

Windows (PowerShell): run `.\install.ps1`
Windows (Command Prompt): run `install.bat`

### Manual Install

Linux/macOS:

```bash
# Copy files to ~/.local/bin
mkdir -p ~/.local/bin
cp gpp ~/.local/bin/gpp
cp model_manager.py ~/.local/bin/model_manager.py
chmod +x ~/.local/bin/gpp

# Add to PATH (add to ~/.bashrc, ~/.zshrc, or ~/.profile)
export PATH="$HOME/.local/bin:$PATH"
```

Windows:

- Create a `%USERPROFILE%\.local\bin` directory
- Copy `gpp` and `model_manager.py` to that directory
- Add the directory to your system PATH
- Restart your terminal

### Verify Installation

```bash
./verify_setup.sh
```

This will check:
- ✅ gpp is accessible globally
- ✅ Python 3 is installed
- ✅ requests library is available
- ✅ LM Studio is running and accessible
- ✅ Models are available
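The connectivity part of that check can also be run by hand. The sketch below uses only the Python standard library and assumes LM Studio's OpenAI-compatible `/v1/models` endpoint; it returns the available model IDs, or `None` if the server is unreachable:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def check_lm_studio(base_url="http://localhost:1234"):
    """Return available model IDs, or None if LM Studio is unreachable."""
    try:
        with urlopen(f"{base_url}/v1/models", timeout=3) as resp:
            data = json.load(resp)
        # OpenAI-style listing: {"data": [{"id": "..."}, ...]}
        return [m["id"] for m in data.get("data", [])]
    except (URLError, OSError, ValueError):
        return None

models = check_lm_studio()
print("LM Studio reachable:", models is not None)
```

If this prints `False`, start LM Studio (or check the port) before using `gpp`.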
## Usage

### Basic Commands

```bash
# Create a new file
gpp "binary search function" --create search.py

# Edit an existing file
gpp "add error handling" --edit script.py

# Read and analyze
gpp "what does this do?" --read program.c

# Analyze without a prompt
gpp --read code.java
```

### Multiple Files

```bash
# Create multiple files
gpp "create a web app" --create index.html style.css app.js

# Edit multiple files with the same instruction
gpp "add comments to all functions" --edit util.py helper.py main.py

# Analyze multiple files
gpp "how do these files interact?" --read app.py config.py database.py
```

### Model Management

```bash
# List all available models
gpp --list-models

# Show the currently loaded model
gpp --current-model

# Load a specific model
gpp --load-model "qwen2.5-coder-3b-instruct"

# Unload the current model
gpp --unload-model
```

### Advanced Options

```bash
# Specify the language
gpp "quick sort algorithm" --lang=python --create sort.py

# Adjust generation parameters
gpp "add validation" --edit config.py --temperature=0.5 --max-tokens=800
```

## Supported Languages

| Language | Extensions |
|---|---|
| C | .c, .h |
| C++ | .cpp, .cc, .cxx, .hpp |
| Java | .java |
| Python | .py |
| HTML | .html, .htm |
| CSS | .css |
| PHP | .php |
| JavaScript | .js |
| TypeScript | .ts |
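Mapping a file extension to a language from the table above can be sketched as follows (a hypothetical helper for illustration; gpp's internal detection may differ):

```python
import os

# Extension-to-language table, mirroring the list above
EXTENSION_MAP = {
    ".c": "c", ".h": "c",
    ".cpp": "cpp", ".cc": "cpp", ".cxx": "cpp", ".hpp": "cpp",
    ".java": "java",
    ".py": "python",
    ".html": "html", ".htm": "html",
    ".css": "css",
    ".php": "php",
    ".js": "javascript",
    ".ts": "typescript",
}

def detect_language(filename):
    """Guess the language from the file extension; None if unknown."""
    _, ext = os.path.splitext(filename)
    return EXTENSION_MAP.get(ext.lower())

print(detect_language("sort.py"))  # python
```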
## Examples

Create a complete web project:

```bash
gpp "professional portfolio website with responsive design" \
  --create index.html styles.css script.js
```

Output:
1. index.html : [HTML code with semantic structure]
2. styles.css : [CSS code with responsive design]
3. script.js : [JavaScript for interactivity]
Add documentation to several files:

```bash
gpp "add comprehensive docstrings and type hints" \
  --edit app.py utils.py models.py
```

Analyze how components fit together:

```bash
gpp "explain how these components work together" \
  --read main.py database.py api.py config.py
```

Other examples:

```bash
gpp "convert this algorithm to TypeScript" --lang=typescript
gpp "what are the security implications?" --read api.py
```

## Command Reference

Options:
```
--create FILE(s)          Create one or multiple files
--edit FILE(s)            Edit one or multiple files
--read FILE(s)            Read and analyze files
--lang, -l LANG           Specify programming language
--temperature, -t FLOAT   Generation temperature (0.0-1.0, default: 0.7)
--max-tokens, -m INT      Max tokens to generate (default: 500)

Model Management:
--list-models             List all available models
--current-model           Show currently loaded model
--load-model MODEL        Load a specific model
--unload-model            Unload the current model

-h, --help                Show help message
```
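A CLI with this option surface could be wired up with Python's `argparse` roughly as follows (a hypothetical sketch; gpp's actual parser may differ):

```python
import argparse

def build_parser():
    """Build a parser mirroring the options listed above."""
    p = argparse.ArgumentParser(prog="gpp",
                                description="AI coding assistant for LM Studio")
    p.add_argument("prompt", nargs="?", help="natural-language instruction")
    p.add_argument("--create", nargs="+", metavar="FILE",
                   help="create one or more files")
    p.add_argument("--edit", nargs="+", metavar="FILE",
                   help="edit one or more files")
    p.add_argument("--read", nargs="+", metavar="FILE",
                   help="read and analyze files")
    p.add_argument("--lang", "-l", help="programming language override")
    p.add_argument("--temperature", "-t", type=float, default=0.7)
    p.add_argument("--max-tokens", "-m", type=int, default=500)
    return p

args = build_parser().parse_args(
    ["add validation", "--edit", "config.py", "--temperature=0.5"]
)
print(args.edit, args.temperature)  # ['config.py'] 0.5
```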
## Configuration

The tool uses these defaults:

- API: `http://localhost:1234`
- Model: `qwen2.5-coder-3b-instruct`
- Temperature: `0.7`
- Max tokens: `500`
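With those defaults, the request sent to LM Studio likely resembles a standard OpenAI-style chat completion. The sketch below uses only the standard library; the `/v1/chat/completions` path and payload shape are assumptions based on LM Studio's OpenAI-compatible API, not gpp's actual code:

```python
import json
from urllib.request import Request, urlopen
from urllib.error import URLError

API_BASE_URL = "http://localhost:1234"

def build_payload(prompt, model="qwen2.5-coder-3b-instruct",
                  temperature=0.7, max_tokens=500):
    """Assemble an OpenAI-style chat-completion payload from the defaults."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def generate(prompt):
    """POST the payload; return the reply text, or None on failure."""
    req = Request(
        f"{API_BASE_URL}/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urlopen(req, timeout=30) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
    except (URLError, OSError, KeyError, ValueError):
        return None
```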
To change defaults, edit the constants at the top of the gpp file:
```python
API_BASE_URL = "http://localhost:1234"
MODEL_NAME = "qwen2.5-coder-3b-instruct"
```

## Troubleshooting

Cannot connect to LM Studio:

- Make sure LM Studio is running
- Verify it's accessible at `http://localhost:1234`
- Check your firewall settings
`gpp` command not found:

- Run `gpp --help` from the installation directory
- Verify PATH is set correctly: `echo $PATH`
- Restart your terminal after installation
Model manager errors:

- Ensure `model_manager.py` is in the same directory as `gpp`
- Check file permissions: `chmod +x model_manager.py`
Missing `requests` library:

```bash
pip3 install requests
```

Model won't load:

- Check if another model is already loaded
- Use `gpp --unload-model` first
- Verify sufficient disk space and RAM
## Documentation

- Model Management: see `MODEL_MANAGEMENT.md`
- Implementation Details: see `IMPLEMENTATION_SUMMARY.md`
- Quick Reference: see `MODEL_COMMANDS_QUICK_REFERENCE.txt`
## Contributing

To improve this tool:
- Test with different models and edge cases
- Report bugs and issues
- Suggest new features
## License

This project is provided as-is for use with LM Studio.
## Summary

GPP brings AI-powered code generation to your terminal with:
- ✅ Single and multiple file support
- ✅ Global terminal access from any directory
- ✅ Easy installation and setup
- ✅ Full model management capabilities
- ✅ Multiple language support
- ✅ Simple, intuitive command syntax
Get started now:

```bash
./install.sh
gpp --help
```