AI-powered terminal tool that critiques commit message quality and helps you write clear, high-signal commits from your shell.
CommitLens analyzes Git history, scores message quality, and suggests well-structured Conventional Commit messages from staged changes.
Commit messages are often:

- vague ("fixed bug")
- noisy ("wip")
- missing context
- inconsistent across teams
CommitLens turns commit history into actionable feedback and helps teams write clearer commits consistently.
- Analyze the last `N` commits from local repositories
- Optionally analyze public remote repositories via `--url`
- AI critique + score (0-10)
- Suggestions for weak commits
- Stats dashboard (average score, vague %, one-word %)
- Reads `git diff --staged`
- Summarizes staged changes
- Suggests a Conventional Commit message
- You always review/edit manually (the tool never runs `git commit`)
- Structured LLM output validation with Pydantic
- Rich terminal UX with progress and panels
- Diff filtering for lockfiles/binary assets
- Large diff truncation for prompt safety
- Lightweight eval harness for scoring behavior
- Minimal test suite for parsing/scoring/git-validation logic
- Python 3.11+
- OpenAI API
- Pydantic
- Rich
- Typer
- python-dotenv
- uv
- Python 3.11+
- Git
- Python package manager (recommended: `uv`, install with `pip install uv`)
- OpenAI API key
```bash
git clone <your-repo-url>
cd commitlens
cp .env.example .env
# add OPENAI_API_KEY to .env
uv sync
uv run python commit_critic.py --analyze --limit 10
```

Alternatively, without `uv`:

```bash
python -m venv .venv && source .venv/bin/activate && pip install .
```

Create `.env`:
```
OPENAI_API_KEY=your_key_here
```

- `OPENAI_API_KEY` is read from environment variables (or a local `.env`) at runtime.
- CommitLens does not write your API key to project files.
- `.env` is git-ignored; only `.env.example` is tracked.
- If a key is exposed, rotate it immediately in your OpenAI dashboard.
- Use least-privilege practices: keep keys local; do not paste keys into commit messages, issues, or logs.
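CommitLens loads the key via python-dotenv; the behavior amounts to roughly this stdlib-only sketch (the helper name is hypothetical, and like `load_dotenv`, it never overrides variables already set in the environment):

```python
import os
from pathlib import Path


def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv (illustrative only)."""
    env = Path(path)
    if not env.exists():
        return
    for line in env.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over .env entries
            os.environ.setdefault(key.strip(), value.strip())


load_env_file()
api_key = os.getenv("OPENAI_API_KEY")
print("key configured" if api_key else "OPENAI_API_KEY missing")
```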
```bash
# Analyze last 50 commits (local repo)
uv run python commit_critic.py --analyze

# Analyze last 50 commits from a remote public repo
uv run python commit_critic.py --analyze --url="https://github.com/steel-dev/steel-browser"

# Interactive commit writer
uv run python commit_critic.py --write
```

Without `uv` (inside an activated virtual environment):

```bash
source .venv/bin/activate
python commit_critic.py --analyze
python commit_critic.py --analyze --url="https://github.com/steel-dev/steel-browser"
python commit_critic.py --write
```

See all options:

```bash
uv run python commit_critic.py --help
```

Example output:
```
────────────────────────────
🚩 COMMITS THAT NEED WORK
────────────────────────────
╭─────────────────────────────────────────────────────────────────────────────╮
│ Commit: "add github action and quick start guide"                           │
│ Score: 3/10                                                                 │
│ Issue: Missing type prefix and scope; message is vague and not capitalized. │
│ Better: ci: add GitHub Action and docs: add quick start guide               │
╰─────────────────────────────────────────────────────────────────────────────╯

────────────────────────────
🌟 WELL-WRITTEN COMMITS
────────────────────────────
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ Commit: "chore: add MIT license and update README with license link                         │
│                                                                                             │
│ - Added a new LICENSE file containing the full MIT License text                             │
│ - Updated README.md to replace placeholder license text with a link to the LICENSE file"    │
│ Score: 9/10                                                                                 │
│ Why it's good: Proper type 'chore' used; clear and descriptive message with useful details. │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────╮
│ Commit: "docs: enhance README with detailed usage, features, examples, and architecture" │
│ Score: 8/10                                                                              │
│ Why it's good: Uses 'docs' type correctly and clearly describes the changes made.        │
╰──────────────────────────────────────────────────────────────────────────────────────────╯

────────────────────────────
📊 YOUR STATS
────────────────────────────
┌──────────────────┬───────────┐
│ Average score    │ 6.8/10    │
│ Vague commits    │ 1 (25.0%) │
│ One-word commits │ 0 (0.0%)  │
└──────────────────┴───────────┘
```
```
Analyzing staged changes... (2 files changed, +22 -1 lines)

╭─────────────────────────────────────────────────────────────────────────────────────────╮
│ SUMMARY:                                                                                │
│ - Add MIT License file with full text                                                   │
│ - Update README to link to the new LICENSE file                                         │
│                                                                                         │
│ SUBJECT:                                                                                │
│ chore: add MIT license and update README with license link                              │
│                                                                                         │
│ BODY:                                                                                   │
│ - Added a new LICENSE file containing the full MIT License text                         │
│ - Updated README.md to replace placeholder license text with a link to the LICENSE file │
╰─────────────────────────────────────────────────────────────────────────────────────────╯

Changes detected:
- Add MIT License file with full text
- Update README to link to the new LICENSE file

Suggested commit message:

╭─────────────────────────────────────────────────────────────────────────────────────────╮
│ chore: add MIT license and update README with license link                              │
│                                                                                         │
│ - Added a new LICENSE file containing the full MIT License text                         │
│ - Updated README.md to replace placeholder license text with a link to the LICENSE file │
╰─────────────────────────────────────────────────────────────────────────────────────────╯

Press Enter to accept, or type your own message ():
```
CommitLens scores each commit from 0 to 10 based on Conventional Commit clarity and specificity.
Score bands:
- 0-4 -> `needs_work`
- 5-7 -> `mid`
- 8-10 -> `well_written`
What the stats mean:
- Average score: arithmetic mean of all analyzed commit scores
- Vague commits: commits with score `< 5` (same threshold as `needs_work`)
- One-word commits: commits whose message contains only one word
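The bands and stats above can be expressed as a short sketch. Function names are illustrative; the real logic lives in `commit_critic/scoring.py`:

```python
from statistics import mean


def bucket(score: int) -> str:
    # Score bands from the scoring guide: 0-4 needs work, 5-7 mid, 8-10 well written
    if score < 5:
        return "needs_work"
    if score <= 7:
        return "mid"
    return "well_written"


def stats(scores: list[int], messages: list[str]) -> dict:
    # Aggregate stats as shown in the dashboard: average, vague %, one-word %
    return {
        "average": round(mean(scores), 1),
        "vague_pct": 100 * sum(s < 5 for s in scores) / len(scores),
        "one_word_pct": 100 * sum(len(m.split()) == 1 for m in messages) / len(messages),
    }


print(bucket(3), bucket(6), bucket(9))  # needs_work mid well_written
print(stats([3, 8, 9, 7], ["wip", "docs: expand README", "chore: add license", "fix: null check"]))
```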
Run eval suite:
```bash
uv run python evals/run_eval.py
```

Eval report includes:

- Bucket accuracy (`needs_work` / `mid` / `well_written`)
- Score tolerance metric (±1) for LLM variance
- Repeatability check: each eval case is run 5 times to measure consistency across runs
Eval bucket definitions:
- `needs_work`: score `< 5`
- `mid`: score `5-7`
- `well_written`: score `>= 8`
How to read eval columns:
- Expected: expected bucket from `evals/commits.json`
- Expected Score: target score from `evals/commits.json`
- Pass %: percentage of runs where the predicted bucket matched the expected bucket
- Score μ: mean predicted score across repeated runs
- Score σ: score standard deviation across repeated runs
- Tolerance %: percentage of runs where `abs(predicted - expected_score) <= 1`
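Given those definitions, the per-case columns can be computed roughly like this (a sketch; the harness's actual field names in `evals/run_eval.py` may differ):

```python
from statistics import mean, pstdev


def eval_case(predicted: list[int], expected_bucket: str, expected_score: int) -> dict:
    """Summarize repeated runs of one eval case into the report columns."""
    def bucket(s: int) -> str:
        return "needs_work" if s < 5 else ("mid" if s <= 7 else "well_written")

    runs = len(predicted)  # the harness repeats each case 5 times
    return {
        "pass_pct": 100 * sum(bucket(s) == expected_bucket for s in predicted) / runs,
        "score_mu": mean(predicted),
        "score_sigma": pstdev(predicted),
        "tolerance_pct": 100 * sum(abs(s - expected_score) <= 1 for s in predicted) / runs,
    }


print(eval_case([8, 9, 8, 7, 8], "well_written", 8))
```

In this example one of the five runs drifts into the `mid` bucket (score 7), so Pass % is 80, while every run is still within ±1 of the target, so Tolerance % is 100.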
Run tests:
```bash
python -m unittest discover -s tests -v
```

Data flow:

```
git commits/diff
  -> prompt construction
  -> OpenAI LLM call
  -> Pydantic validation
  -> scoring + stats
  -> Rich terminal rendering
```
Key modules:
- `commit_critic/app.py`: CLI entry and mode orchestration
- `commit_critic/git_ops.py`: git cloning/log/diff utilities
- `commit_critic/llm_client.py`: LLM prompts, API calls, parsing, validation
- `commit_critic/scoring.py`: thresholds and statistics
- `commit_critic/ui.py`: rich output rendering
- `evals/run_eval.py`: lightweight scoring evaluation harness
- `--limit`: number of commits to analyze (default: 50)
- `--model`: OpenAI model to use (default: `gpt-4.1-mini`). See available OpenAI models
- `--url`: analyze a remote repository
- `--analyze`: analyze commit history mode
- `--write`: interactive commit writer mode
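With Typer (as listed in the stack), the flag surface looks roughly like this hypothetical sketch, not the tool's actual `commit_critic/app.py`:

```python
from typing import Optional

import typer

app = typer.Typer()


@app.command()
def main(
    analyze: bool = typer.Option(False, "--analyze", help="Analyze commit history mode"),
    write: bool = typer.Option(False, "--write", help="Interactive commit writer mode"),
    limit: int = typer.Option(50, "--limit", help="Number of commits to analyze"),
    model: str = typer.Option("gpt-4.1-mini", "--model", help="OpenAI model to use"),
    url: Optional[str] = typer.Option(None, "--url", help="Remote repository URL to analyze"),
) -> None:
    # The real orchestration dispatches to analyze/write modes here
    typer.echo(f"analyze={analyze} write={write} limit={limit}")


if __name__ == "__main__":
    app()
```

Typer derives `--help` output, type coercion, and defaults directly from the function signature, which is why the flag list above maps one-to-one onto option parameters.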
- Uses shallow clone: `git clone --depth 200`
- Clones into a temporary directory for analysis
- Temporary clone directory is deleted after the run
- Best used with `--limit` to keep analysis focused on recent commits
- Public repositories are supported by default; private repositories require pre-configured Git credentials
- OpenAI models are supported
- Remote `--url` analysis supports public Git repositories
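The clone lifecycle described above amounts to roughly the following sketch (function name is hypothetical; the real implementation is in `commit_critic/git_ops.py`):

```python
import subprocess
import tempfile


def clone_and_log(url: str, limit: int = 50) -> list[str]:
    """Shallow-clone a repo into a throwaway directory and return recent commit subjects."""
    # TemporaryDirectory removes the clone automatically when the block exits
    with tempfile.TemporaryDirectory() as tmp:
        subprocess.run(
            ["git", "clone", "--depth", "200", url, tmp],
            check=True, capture_output=True,
        )
        log = subprocess.run(
            ["git", "-C", tmp, "log", f"-{limit}", "--pretty=%s"],
            check=True, capture_output=True, text=True,
        )
        return log.stdout.splitlines()
```

Because the subject lines are extracted before the context manager exits, nothing from the clone persists on disk after the run.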
- Multi-provider support (Anthropic/Gemini)
- Local model support
- Optional caching for repeated analyses
```
.
├── commit_critic/        # Core package logic
│   ├── app.py            # Typer CLI mode orchestration (--analyze / --write)
│   ├── config.py         # .env loading and API key validation
│   ├── git_ops.py        # Git operations (clone/log/diff/repo checks)
│   ├── llm_client.py     # LLM prompts, API calls, response parsing
│   ├── models.py         # Pydantic schemas for critiques/suggestions
│   ├── scoring.py        # Commit bucket logic and aggregate stats
│   ├── ui.py             # Rich terminal rendering
│   └── diff_cleaner.py   # Diff filtering/truncation for prompt safety
├── tests/                # Unit tests (logic and mocked integrations)
├── evals/                # LLM scoring evaluation harness
├── commit_critic.py      # CLI entry point
├── pyproject.toml        # Project metadata and dependencies
├── uv.lock               # Reproducible dependency lockfile
├── .env.example          # Environment variable template
├── README.md             # Documentation
└── LICENSE               # MIT license
```