Stage your changes, run one command, and get a meaningful commit message generated by a local LLM. No API keys. No cloud. Everything on your machine.
One command. No configuration files. No cloud dependencies. Just good commit messages.
Runs entirely on your machine via Ollama. Your code never leaves your computer. No API keys or subscriptions needed.
Just type commit-msg-ai after staging. It reads the staged diff, generates a message, and asks for confirmation before committing.
Set your preferred model once with commit-msg-ai config. Override per-run with --model.
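A sketch of both options (the model names here are just examples):

```shell
# Persist a default model once
$ commit-msg-ai config model llama3.2

# Override it for a single run with --model
$ commit-msg-ai --model mistral
```

The per-run flag wins over the stored default, so you can experiment with a heavier model without touching your config.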
Messages always use the feat:, fix:, and bc: prefixes. No scopes, no noise. Consistent across your team.
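Generated messages look something like this (hypothetical examples, not real output):

```
feat: add retry logic to the upload client
fix: handle empty staged diff without crashing
```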
Use llama3.2, mistral, qwen2.5-coder, or any model from the Ollama library. Switch models in seconds.
Open source, MIT licensed. No accounts, no telemetry, no vendor lock-in. Just a Python package you install and run.
Three steps. Under 10 seconds.
git add .
commit-msg-ai
Commit? [Y/n] y
Install Ollama, pull a model, install commit-msg-ai. Done.
# macOS
$ brew install ollama
# Linux
$ curl -fsSL https://ollama.com/install.sh | sh
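To confirm the install worked before moving on, you can check that the Ollama CLI is on your path:

```shell
# Prints the installed Ollama version if the CLI is available
$ ollama --version
```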
# See available models
$ ollama list
# Pull one (lightweight, ~2GB)
$ ollama pull llama3.2
Browse all available models at ollama.com/library.
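A quick smoke test before wiring the model into your commits: run it directly with the standard ollama run command (llama3.2 is just the example model from above).

```shell
# Ask the model for a one-line reply to confirm it loads and responds
$ ollama run llama3.2 "Reply with one word: OK"
```

If this hangs or errors, fix the Ollama setup first; commit-msg-ai depends on a working local model.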
$ pip install commit-msg-ai
# Set default model
$ commit-msg-ai config model qwen2.5-coder
# Verify
$ commit-msg-ai config
model = qwen2.5-coder
Install commit-msg-ai and let your local LLM handle the boring stuff.
$ pip install commit-msg-ai