Every popular AI commit message tool — opencommit (7.2k ★), aicommits (8.9k ★) — requires Node.js. Python developers deserve a first-class tool in their own ecosystem.
ai-commit-msg is that tool: pip install and go.
- 🐍 Pure Python — `pip install ai-commit-msg`, no Node.js, no npm, no extra runtimes
- 🤖 4 AI backends — OpenAI, Anthropic Claude, DeepSeek, and Ollama (local / private)
- 📋 Conventional Commits — enforces `feat(scope): description` format out of the box
- 📦 Smart diff chunking — automatically summarizes large diffs file-by-file to stay within context limits
- 🎯 Multiple candidates — generates N candidates and lets you interactively pick (or auto-commit)
- ✏️ Jinja2 templates — fully customisable prompt templates with typed variables
- 🪝 Git hook — one-command `prepare-commit-msg` hook integration
- ⚙️ Layered config — global `~/.ai-commit-msg.toml` → project `.ai-commit-msg.toml` → env vars → CLI flags
- 🌐 Multi-language output — generate commit messages in English, Chinese, Japanese, or any language
| Feature | ai-commit-msg | opencommit | aicommits |
|---|---|---|---|
| Runtime | 🐍 Pure Python | Node.js ❌ | Node.js ❌ |
| Install | `pip install` | `npm install -g` | `npm install -g` |
| OpenAI | ✅ | ✅ | ✅ |
| Claude (Anthropic) | ✅ | ✅ | ❌ |
| DeepSeek | ✅ | ❌ | |
| Ollama (local LLM) | ✅ | ❌ | |
| Smart large-diff chunking | ✅ | ❌ | ❌ |
| Multiple candidates + pick | ✅ (interactive) | ❌ | |
| Custom Jinja2 templates | ✅ | ❌ | ❌ |
| Conventional Commits | ✅ | ✅ | ✅ |
| Git hook integration | ✅ | ✅ | ✅ |
| Project-level config | ✅ | ❌ | |
| Multi-language output | ✅ | ✅ | |
Install from PyPI:

```bash
pip install ai-commit-msg
```

Or with pipx:

```bash
pipx install ai-commit-msg
```

From source:

```bash
git clone https://github.com/hidearmoon/ai-commit-msg
cd ai-commit-msg
pip install -e ".[dev]"
```

Verify the install:

```bash
acm --version
```

Quick start:

```bash
# 1. Stage your changes as usual
git add .

# 2. Generate 3 AI candidates and interactively pick one
acm

# 3. Skip the prompt — auto-commit with the top candidate
acm --auto

# 4. Preview candidates without committing
acm --dry-run
```

Set your API key once:

```bash
# Option A: CLI (writes to ~/.ai-commit-msg.toml)
acm config set openai.api_key sk-...

# Option B: environment variable (recommended for CI/CD)
export OPENAI_API_KEY=sk-...
```

Everyday usage:

```bash
# Generate 3 candidates (default) and pick interactively
acm

# Generate 5 candidates
acm -n 5

# Use Claude instead of OpenAI
acm -p claude

# Use a specific model
acm -p openai -m gpt-4-turbo

# Generate in Chinese
acm --lang zh

# Disable Conventional Commits format (free-form)
acm --no-conventional
```

Non-interactive modes:

```bash
# Auto-commit: no interaction, uses first candidate
acm --auto

# Dry-run: print candidates, don't commit
acm --dry-run

# Combine: preview with auto mode (useful in scripts)
acm --dry-run --auto
```

Use a custom prompt template:

```bash
acm --template ./my-prompt.j2
```

Full CLI reference:

```text
acm [OPTIONS] [SUBCOMMAND]

Options:
  -p, --provider [openai|claude|deepseek|ollama]
                        AI provider to use (overrides config)
  -m, --model TEXT      Model name override
  -n, --num INTEGER     Number of candidates to generate (default: 3)
  -c, --conventional / --no-conventional
                        Enforce Conventional Commits format (default: on)
  -t, --template PATH   Path to a custom Jinja2 prompt template
  --lang TEXT           Output language code, e.g. en, zh, ja (default: en)
  --auto                Auto-commit using the first candidate
  --dry-run             Print generated messages without committing
  --version             Show version and exit
  --help                Show this message and exit
```
```bash
acm hook install           # Install prepare-commit-msg hook into current repo
acm hook install --force   # Overwrite existing hook (original is backed up)
acm hook uninstall         # Remove hook (restores backup if one exists)
acm hook status            # Check whether hook is active
```

Once installed, the hook fires automatically on every `git commit` and pre-fills the commit message editor. It skips merge commits, amends, squashes, and cherry-picks.
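Git invokes `prepare-commit-msg` with the message file path, a source keyword, and (for amends and cherry-picks) a commit SHA. The skip decision above can be sketched roughly like this — a simplified illustration, not the tool's actual code:

```python
# Sketch: decide whether the hook should pre-fill an AI commit message.
# Per githooks(5), prepare-commit-msg receives up to three arguments:
# the message file, a source ("merge", "squash", "commit", "message",
# "template", or nothing), and a SHA when an existing commit is reused.

def should_generate(source, sha):
    """Return True if an AI message should be generated for this commit."""
    if source in ("merge", "squash"):
        return False  # merge/squash commits keep their default message
    if source == "commit" and sha:
        return False  # amend or cherry-pick of an existing commit
    return True       # plain `git commit`: generate candidates


print(should_generate(None, None))           # plain commit -> True
print(should_generate("merge", None))        # merge commit -> False
print(should_generate("commit", "3a7f9b2"))  # amend -> False
```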
Manage configuration with `acm config`:

```bash
acm config list                          # Pretty-print all config values
acm config get openai.api_key            # Get a single value
acm config set openai.api_key sk-...     # Set globally (~/.ai-commit-msg.toml)
acm config set openai.model gpt-4o       # Change default model
acm config set default.language zh       # Always generate in Chinese
acm config set default.num_candidates 5  # Always generate 5 candidates
acm config set default.language zh --local   # Project-level override only
```

Precedence (highest first): CLI flags > project `.ai-commit-msg.toml` > `~/.ai-commit-msg.toml` > defaults.
Environment variables override both TOML files.
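The precedence above amounts to a deep merge of configuration layers, applied lowest-priority first so later layers win. A rough sketch of the idea (illustrative only; the real loader also parses TOML files and environment variables):

```python
# Sketch: layered config resolution — later layers override earlier ones.

def deep_merge(base, override):
    """Recursively merge `override` on top of `base`, returning a new dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


defaults = {"default": {"provider": "openai", "num_candidates": 3}}
global_toml = {"default": {"num_candidates": 5}}      # ~/.ai-commit-msg.toml
project_toml = {"default": {"provider": "deepseek"}}  # ./.ai-commit-msg.toml
env_and_flags = {"default": {"language": "zh"}}       # env vars, then CLI flags

config = defaults
for layer in (global_toml, project_toml, env_and_flags):
    config = deep_merge(config, layer)

print(config["default"])
# {'provider': 'deepseek', 'num_candidates': 5, 'language': 'zh'}
```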
Full global config reference (`~/.ai-commit-msg.toml`):

```toml
[default]
provider = "openai"       # openai | claude | deepseek | ollama
model = ""                # leave empty to use each provider's default
num_candidates = 3        # number of candidates to generate
language = "en"           # output language: en | zh | ja | fr | de | ...
conventional = true       # enforce Conventional Commits format
max_diff_tokens = 4000    # smart chunking threshold (tokens, ~4 chars each)
template_path = ""        # path to a custom Jinja2 template (empty = built-in)

[openai]
api_key = "sk-..."
model = "gpt-4o"          # default: gpt-4o
base_url = "https://api.openai.com/v1"

[claude]
api_key = "sk-ant-..."
model = "claude-sonnet-4-6"   # default: claude-sonnet-4-6
base_url = "https://api.anthropic.com"

[deepseek]
api_key = "sk-..."
model = "deepseek-chat"   # default: deepseek-chat
base_url = "https://api.deepseek.com/v1"

[ollama]
model = "llama3"          # default: llama3
base_url = "http://localhost:11434"
```

Place a `.ai-commit-msg.toml` in your project root. It is merged on top of the global config, so you only need to specify what differs:
```toml
# .ai-commit-msg.toml — enforce Chinese commit messages for this repo
[default]
language = "zh"
provider = "deepseek"
```

| Key | Type | Default | Description |
|---|---|---|---|
| `default.provider` | string | `"openai"` | AI backend: openai, claude, deepseek, ollama |
| `default.model` | string | `""` | Global model override (empty = each provider's default) |
| `default.num_candidates` | int | `3` | How many commit message candidates to generate |
| `default.language` | string | `"en"` | Output language code (en, zh, ja, etc.) |
| `default.conventional` | bool | `true` | Enforce Conventional Commits `type(scope): desc` format |
| `default.max_diff_tokens` | int | `4000` | Token budget for diff; larger diffs are auto-chunked |
| `default.template_path` | string | `""` | Absolute or relative path to a custom Jinja2 template |
| `openai.api_key` | string | `""` | OpenAI API key (also: `OPENAI_API_KEY` env var) |
| `openai.model` | string | `"gpt-4o"` | OpenAI model identifier |
| `openai.base_url` | string | `"https://api.openai.com/v1"` | OpenAI-compatible endpoint URL |
| `claude.api_key` | string | `""` | Anthropic API key (also: `ANTHROPIC_API_KEY` env var) |
| `claude.model` | string | `"claude-sonnet-4-6"` | Claude model identifier |
| `claude.base_url` | string | `"https://api.anthropic.com"` | Anthropic API endpoint |
| `deepseek.api_key` | string | `""` | DeepSeek API key (also: `DEEPSEEK_API_KEY` env var) |
| `deepseek.model` | string | `"deepseek-chat"` | DeepSeek model identifier |
| `deepseek.base_url` | string | `"https://api.deepseek.com/v1"` | DeepSeek endpoint (OpenAI-compatible) |
| `ollama.model` | string | `"llama3"` | Ollama model tag |
| `ollama.base_url` | string | `"http://localhost:11434"` | Ollama server URL |
| Variable | Overrides |
|---|---|
| `OPENAI_API_KEY` | `openai.api_key` |
| `ANTHROPIC_API_KEY` | `claude.api_key` |
| `DEEPSEEK_API_KEY` | `deepseek.api_key` |
| `ACM_PROVIDER` | `default.provider` |
| `ACM_MODEL` | `default.model` |
| `ACM_LANGUAGE` | `default.language` |
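Resolution of these variables can be pictured as a lookup table consulted after the TOML files are loaded — a sketch only, with the mapping taken from the table above:

```python
import os

# Sketch: environment variables override values loaded from TOML.
# Mapping mirrors the env var table above: VAR -> (section, key).
ENV_OVERRIDES = {
    "OPENAI_API_KEY": ("openai", "api_key"),
    "ANTHROPIC_API_KEY": ("claude", "api_key"),
    "DEEPSEEK_API_KEY": ("deepseek", "api_key"),
    "ACM_PROVIDER": ("default", "provider"),
    "ACM_MODEL": ("default", "model"),
    "ACM_LANGUAGE": ("default", "language"),
}

def apply_env_overrides(config, environ=os.environ):
    """Overwrite config entries for any override variable that is set."""
    for var, (section, key) in ENV_OVERRIDES.items():
        if var in environ:
            config.setdefault(section, {})[key] = environ[var]
    return config

cfg = apply_env_overrides({"default": {"provider": "openai"}},
                          environ={"ACM_PROVIDER": "ollama"})
print(cfg["default"]["provider"])  # ollama
```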
To use OpenAI:

```bash
acm config set openai.api_key sk-...
acm -p openai                  # uses gpt-4o by default
acm -p openai -m gpt-4-turbo
acm -p openai -m o1-mini
```

Get your key at platform.openai.com/api-keys.
To use Claude:

```bash
acm config set claude.api_key sk-ant-...
acm -p claude                  # uses claude-sonnet-4-6 by default
acm -p claude -m claude-sonnet-4-6
```

Get your key at console.anthropic.com.
DeepSeek's API is OpenAI-compatible and significantly cheaper than OpenAI for most workloads.

```bash
acm config set deepseek.api_key sk-...
acm -p deepseek                # uses deepseek-chat by default
acm -p deepseek -m deepseek-reasoner
```

Get your key at platform.deepseek.com.
Ollama lets you run models entirely on your machine — no API key, no data sent to the cloud.

```bash
# 1. Install Ollama: https://ollama.com

# 2. Start the server and pull a model
ollama serve
ollama pull llama3     # ~4 GB, good quality
ollama pull mistral    # ~4 GB, fast
ollama pull codellama  # optimised for code

# 3. Generate commit messages locally
acm -p ollama -m llama3
acm -p ollama -m mistral
```

To use a remote Ollama instance:

```bash
acm config set ollama.base_url http://my-server:11434
```

The `openai` provider's `base_url` can point to any compatible API (LM Studio, vLLM, Together AI, Groq, etc.):
```bash
acm config set openai.base_url https://api.groq.com/openai/v1
acm config set openai.api_key gsk_...
acm -p openai -m llama-3.3-70b-versatile
```

Create a Jinja2 `.j2` file to fully control the prompt sent to the AI. Available variables:
| Variable | Type | Description |
|---|---|---|
| `{{ diff }}` | string | The (possibly chunked) git diff |
| `{{ files }}` | string | Newline-separated list of staged files |
| `{{ num }}` | int | Number of candidates requested |
| `{{ lang }}` | string | Output language code (e.g. en, zh) |
| `{{ extra_context }}` | string | Free-form context (currently empty; reserved for future use) |
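These variables are filled in with Jinja2's standard rendering API, and candidates are split back out of the model's reply. A minimal sketch of that round trip (illustrative; assumes the `---` separator convention used by the example templates):

```python
from jinja2 import Template

# Sketch: render a prompt with the variables from the table above, then
# split a (hypothetical) model reply into candidates on the "---" separator.
prompt_tmpl = Template(
    "Write {{ num }} commit messages ({{ lang }}) for:\n"
    "{{ diff }}\nFiles: {{ files }}"
)

prompt = prompt_tmpl.render(
    num=3,
    lang="en",
    diff="diff --git a/app.py b/app.py\n+print('hi')",
    files="app.py",
)

# Hypothetical model reply, separated with --- as the templates request:
reply = ("feat: add greeting\n---\n"
         "feat(app): print hi on start\n---\n"
         "chore: add debug print")
candidates = [c.strip() for c in reply.split("---") if c.strip()]

print(candidates[0])    # feat: add greeting
print(len(candidates))  # 3
```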
A minimal example:

```jinja
{# my-prompt.j2 #}
You are a Git commit message expert.
Write {{ num }} short, specific commit messages for this diff.
Rules: imperative mood, ≤72 chars, language={{ lang }}.
Separate with ---

Diff:
{{ diff }}

Files: {{ files }}
```

Use it with:

```bash
acm --template ./my-prompt.j2
```

A team-convention example:

```jinja
{# team-prompt.j2 — enforces JIRA ticket reference style #}
Generate {{ num }} commit messages following our team convention:

[PROJECT-XXXX] <type>: <description>

Where type is one of: feat | fix | refactor | test | docs | chore
Language: {{ lang }}

{{ diff }}
---
Staged: {{ files }}
{% if extra_context %}
Context: {{ extra_context }}
{% endif %}

Output {{ num }} candidates, separated by ---
```

When a staged diff exceeds `max_diff_tokens` (default: 4000 ≈ 16 KB), ai-commit-msg automatically:
- Splits the diff by file
- Allocates a token budget per file
- Includes full diffs for small files; summarises large ones (lines added/removed + preview)
- Combines everything into a single context window-safe prompt
This means it handles massive refactors or dependency bumps gracefully — other tools simply fail or produce useless generic messages.
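The steps above can be sketched as follows — an illustration of the idea, not the tool's actual implementation; the ~4-chars-per-token estimate matches the config comment:

```python
# Sketch: file-by-file diff chunking under a token budget.

def estimate_tokens(text):
    return len(text) // 4  # rough heuristic: ~4 characters per token

def summarize(filename, diff, preview_lines=5):
    """Replace a large per-file diff with added/removed counts + a preview."""
    lines = diff.splitlines()
    added = sum(1 for l in lines if l.startswith("+"))
    removed = sum(1 for l in lines if l.startswith("-"))
    preview = "\n".join(lines[:preview_lines])
    return f"{filename}: +{added}/-{removed} lines (summarized)\n{preview}"

def chunk_diff(file_diffs, max_tokens=4000):
    """Split by file, budget tokens per file, summarize oversized files."""
    budget = max_tokens // max(len(file_diffs), 1)
    parts = []
    for name, diff in file_diffs.items():
        if estimate_tokens(diff) <= budget:
            parts.append(diff)                   # small file: full diff
        else:
            parts.append(summarize(name, diff))  # large file: summary only
    return "\n\n".join(parts)

small = "+one line added"
huge = "\n".join(f"+generated line {i}" for i in range(5000))
prompt_diff = chunk_diff({"small.py": small, "poetry.lock": huge})
print("summarized" in prompt_diff)  # True: the lockfile was compressed
```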
To raise the limit for large-context models:
```toml
# ~/.ai-commit-msg.toml
[default]
max_diff_tokens = 16000   # suitable for Claude or GPT-4o with 128k context
```

An example session:

```text
$ git add src/auth/jwt.py tests/test_auth.py
$ acm

Generating 3 commit message(s) via openai (gpt-4o)…

╭─ Candidate 1 ────────────────────────────────────────────────╮
│ feat(auth): add JWT refresh token rotation                   │
│                                                              │
│ Prevents token fixation attacks by issuing a new refresh     │
│ token on each use. Old tokens are immediately invalidated.   │
╰──────────────────────────────────────────────────────────────╯
╭─ Candidate 2 ────────────────────────────────────────────────╮
│ feat(auth): implement refresh token rotation for security    │
╰──────────────────────────────────────────────────────────────╯
╭─ Candidate 3 ────────────────────────────────────────────────╮
│ security(auth): rotate refresh tokens on every use           │
╰──────────────────────────────────────────────────────────────╯

? Select a commit message:
❯ 1. feat(auth): add JWT refresh token rotation
  2. feat(auth): implement refresh token rotation for security
  3. security(auth): rotate refresh tokens on every use
  ✏ Edit manually
  ✗ Cancel

✓ Committed:
[main 3a7f9b2] feat(auth): add JWT refresh token rotation
```
Development setup:

```bash
git clone https://github.com/hidearmoon/ai-commit-msg
cd ai-commit-msg
pip install -e ".[dev]"
pytest tests/ -v
```

The test suite has 98 tests covering:
- CLI argument parsing and end-to-end flows
- Git diff retrieval and smart chunking logic
- All four AI provider implementations (HTTP fully mocked)
- Prompt template rendering and candidate parsing
- Config loading, deep-merging, env var overrides, and persistence
- Git hook install / uninstall / backup lifecycle
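The provider tests mock HTTP at the client boundary rather than calling real APIs. The general pattern looks like this — a generic sketch with stand-in names; the project's real modules and helpers will differ:

```python
from unittest.mock import patch

# Stand-in provider: in real code _post_chat would perform the HTTP request.
class Provider:
    def _post_chat(self, prompt):
        raise RuntimeError("network disabled in tests")

    def generate(self, prompt, num):
        """Call the API and split the reply into candidates on '---'."""
        reply = self._post_chat(prompt)
        return [c.strip() for c in reply.split("---") if c.strip()][:num]

def test_generate_parses_candidates():
    canned = "feat: add login\n---\nfix: handle empty password"
    # Patch the HTTP layer so no network traffic occurs.
    with patch.object(Provider, "_post_chat", return_value=canned):
        msgs = Provider().generate("prompt", num=2)
    assert msgs == ["feat: add login", "fix: handle empty password"]

test_generate_parses_candidates()
print("ok")
```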
Run with coverage:
```bash
pytest --cov=src --cov-report=term-missing
```

See CONTRIBUTING.md for guidelines.
Short version:
- Fork and create a feature branch
- Write code + tests (`pytest` must pass)
- Open a PR — describe what and why
Issues and feature requests: github.com/hidearmoon/ai-commit-msg/issues
MIT © OpenForge AI
If this tool helps you, a ⭐ Star means the world to us!