Faster, Consistent Conventional Commits with a Small Node Script and Gemini
I hit the same wall most Fridays: finish the code, then lose minutes writing a good commit message and PR description. Rush it, and history becomes noisy. Do it well, and it costs time and focus.
So I wrote gcm.js — a small CLI that reads your staged diff (or a specific commit) and proposes a clean conventional commit message, a sensible branch name, and a PR body. It uses Google Gemini 2.5 Flash and handles big diffs with smart summarisation. Below is how to use it, where it helps, and where it doesn't.
What Problem It Solves
- Inconsistent or unclear commit messages.
- Context switching from coding to writing.
- Hard-to-summarise large diffs.
What the Script Does
- Generates a conventional commit message (`type(scope): description`) from your staged diff or an existing commit (`--commit` flag).
- Suggests a branch name based on the change context.
- Creates a PR body explaining “what / why / how to test”.
- Uses Gemini 2.5 Flash for fast, cost‑effective generation, with a custom summarisation step to fit large diffs into token limits.
All of that runs locally via Node.js and requires only one environment variable: `GOOGLE_GEMINI_API_KEY`.
How to Use It
Installation
- Install Node.js 18+ if you don't have it yet.
- Get your API key from Google AI Studio.
- Download the script from this Gist:

```bash
curl -o gcm.js https://gist.githubusercontent.com/cebreus/531fe63c0322fda4b990f178d103e184/raw/gcm.js
chmod +x gcm.js
```
- Move it to your bin (optional but recommended):
```bash
mkdir -p ~/bin
mv gcm.js ~/bin/gcm
```
- Add API key and PATH to your shell config (`.zshrc`, `.bashrc`, or `.bash_profile`):
```bash
echo 'export GOOGLE_GEMINI_API_KEY="YOUR_API_KEY"' >> ~/.zshrc
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.zshrc
echo 'alias gm="gcm"' >> ~/.zshrc
source ~/.zshrc
```
Replace `YOUR_API_KEY` with your actual key and `.zshrc` with your shell config file if needed.
- Try it:

```bash
git add .
gm
```
You’ll see a coloured output with a suggested commit message, branch name, and PR description you can copy or confirm.
Example Output
```text
gm
Diff too large (95,733 chars), creating smart summary...
gemini-2.5-flash | 95,733 → 1,199 chars | estimated input: ~642 tokens

BRANCH:
refactor/llm-provider-arch

COMMIT_MESSAGE:
refactor(clifix): improve LLM provider architecture

- Restructure model service and provider interfaces
- Update Google and local LLM provider implementations
- Refine LLM prompt handling and generation logic
- Adjust `list-llm` and `fix-code` commands to use new services
- Update related documentation for LLM features
- Modify ESLint configuration for codebase consistency

PR_TITLE:
refactor(clifix): improve LLM provider architecture

PR_DESCRIPTION:
This pull request introduces a significant refactor of the LLM (Large Language Model) service architecture within the `clifix` package. The changes aim to enhance the modularity, maintainability, and extensibility of how different LLM providers are integrated and managed.

The core objective was to streamline the interaction with generative models and prepare the codebase for future expansions in provider support and functionality.

Key changes include:

* **Service Architecture**: Restructured core model service and provider interfaces for better separation of concerns and future scalability. This involved updates to `model-service.ts`, `provider.ts`, and `model-providers/types.ts`.
* **Provider Implementations**: Updated the Google and local LLM provider implementations (`google-provider.ts`, `local-provider.ts`) to align with the new architecture.
* **CLI Commands**: Adjusted `list-llm` and `fix-code` commands (`src/commands/list-llm.ts`, `src/fix-code.ts`) to seamlessly integrate with the refined LLM services.
* **Documentation**: Various documentation files (`docs/*.md`, `README.md`, `TODO.md`) have been updated to reflect the latest changes in LLM features and usage.
* **Codebase Standards**: The ESLint configuration (`eslint.config.js`) has been modified to maintain consistent code quality across the refactored modules.

gemini-2.5-flash | actual usage → input: 633 tokens | output: 412 tokens (1,864 chars)
```
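The labelled sections in that output are easy to pull apart programmatically. A minimal sketch of such a parser — `parseSections` is a hypothetical helper, not necessarily how the gist does it:

```javascript
// Split the model's reply into sections keyed by the BRANCH:,
// COMMIT_MESSAGE:, PR_TITLE: and PR_DESCRIPTION: headers.
function parseSections(text) {
  const keys = ["BRANCH", "COMMIT_MESSAGE", "PR_TITLE", "PR_DESCRIPTION"];
  const out = {};
  let current = null;
  for (const line of text.split("\n")) {
    const key = keys.find((k) => line.trim() === k + ":");
    if (key) {
      current = key;       // a header line starts a new section
      out[key] = [];
      continue;
    }
    if (current) out[current].push(line); // everything else is body text
  }
  for (const k of Object.keys(out)) out[k] = out[k].join("\n").trim();
  return out;
}
```

With the sections separated, the commit message can go straight to `git commit -m`, while the PR title and body can be pasted (or piped to `gh pr create`).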
Internal Logic (Conceptual Overview)
The source code in the gist exposes only the essentials, but here’s how it works under the hood:
- Diff collection: detects staged changes; falls back to a commit range if `--commit <sha>` is used.
- Sampling algorithm: trims or proportionally samples large diffs to stay within model token limits.
- AI invocation: sends a formatted prompt with diff metadata to Gemini 2.5 Flash.
- Response parsing: extracts headers, body lines, and optional BREAKING CHANGE notes.
- Formatting helpers: colourised terminal output and validation before commit.
- Error handling: catches missing env variable, invalid SHA, or empty diff.
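The sampling step is the interesting part. One way to implement proportional sampling — a sketch under the assumption that the budget is split evenly across touched files, which may differ from the gist's actual heuristic:

```javascript
// When a diff exceeds a character budget, keep an equal share of each
// file's diff so every touched file is still represented in the prompt.
function sampleDiff(diff, maxChars = 20000) {
  if (diff.length <= maxChars) return diff;
  // Split on per-file headers; the lookahead keeps each "diff --git"
  // line attached to its own chunk.
  const files = diff.split(/(?=^diff --git )/m);
  const budget = Math.floor(maxChars / files.length);
  return files
    .map((f) => (f.length <= budget ? f : f.slice(0, budget) + "\n[truncated]\n"))
    .join("");
}
```

Splitting per file (rather than truncating the tail) matters: a naive `diff.slice(0, maxChars)` would silently drop every file after the cut, and the model would never see them.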
Why It’s Optimised for Google Gemini
The script uses model parameters and token-management logic optimised for Google's Gemini 2.5 Flash. It leverages Gemini's fast-response APIs and cost profile rather than generic OpenAI or Anthropic endpoints.
In practice:
- Requests are quick (typically <1s for small diffs).
- Cost per call is very low.
- Summarisation keeps token usage stable even on large diffs.
While it could theoretically work with another LLM, it's tuned for Google's SDK: it uses the official `@google/generative-ai` client library and the `GOOGLE_GEMINI_API_KEY` environment variable.
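The SDK call itself is short. A minimal sketch, assuming a `buildPrompt`/`suggest` split (these helpers are hypothetical; the gist's internals may differ) — running it for real requires the `@google/generative-ai` package installed and `GOOGLE_GEMINI_API_KEY` set:

```javascript
// Assemble the instruction plus the (possibly summarised) diff.
function buildPrompt(diff) {
  return [
    "From the git diff below, produce BRANCH:, COMMIT_MESSAGE:,",
    "PR_TITLE: and PR_DESCRIPTION: sections.",
    "Follow Conventional Commits: type(scope): description.",
    "",
    "DIFF:",
    diff,
  ].join("\n");
}

// Send the prompt to Gemini 2.5 Flash and return the raw reply text.
async function suggest(diff) {
  const { GoogleGenerativeAI } = require("@google/generative-ai");
  const genAI = new GoogleGenerativeAI(process.env.GOOGLE_GEMINI_API_KEY);
  const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });
  const result = await model.generateContent(buildPrompt(diff));
  return result.response.text();
}
```

Porting to another provider would mean swapping `suggest` for the equivalent client call; the prompt and parsing logic are provider-agnostic.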
When to Use It (and When Not to)
This works well if you:
- Commit frequently and want consistent, readable messages.
- Follow Conventional Commits for release automation or changelog generation.
- Value clean Git history without spending mental energy on formatting.
Use with caution when:
- Working on sensitive codebases (AI may leak internal names into commit text).
- Dealing with massive monorepo diffs (summarisation helps, but manual review is still essential).
Summary
In practice, gcm.js saves 1–3 minutes per commit or PR and produces more consistent messages, branch names, and PR bodies. It doesn't replace human judgement — you still review and approve — but it removes the friction from Git's mechanical parts.
Using it daily keeps your workflow focused on code, not on typing boilerplate. A small improvement that compounds with every commit.