
# commit

Generate a Conventional Commits v1.0.0 message for the currently staged changes via a local LLM.

## Synopsis

```bash
fobis commit [options]
```

## Options

| Option | Short | Description |
|---|---|---|
| `--backend TEXT` | `-b` | LLM backend: `ollama` (default) or `openai` |
| `--url TEXT` | `-u` | Base URL of the LLM server (default: `http://localhost:11434`) |
| `--model TEXT` | `-m` | Model identifier (default: `qwen3-coder:30b-a3b-q4_K_M`) |
| `--max-diff INT` | | Maximum staged-diff characters sent to the model (default: 12 000); truncation is file-boundary-aware: every file's metadata header is preserved |
| `--refine-passes INT` | | Critique-and-rewrite iterations after the initial draft (default: 0 = single pass); 1–3 recommended for small/fast models |
| `--apply` | | Run `git commit` with the generated message after interactive review |
| `--config PATH` | `-c` | Path to a custom FoBiS user config file |
| `--show-config` | | Print the effective LLM configuration and exit |
| `--init-config` | | Create a commented default config file and exit |
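
The `--refine-passes` critique-and-rewrite loop can be sketched as below. The `generate` callable is purely illustrative and stands in for whatever chat call the configured backend makes; none of these names are FoBiS internals:

```python
def refine(generate, diff_prompt: str, passes: int) -> str:
    """Draft a commit message, then run N critique-and-rewrite passes."""
    message = generate(f"Write a Conventional Commits message for:\n{diff_prompt}")
    for _ in range(passes):
        # Each pass costs two extra model calls: one critique, one rewrite.
        critique = generate(f"Critique this commit message:\n{message}")
        message = generate(
            f"Rewrite the message addressing the critique.\n"
            f"Message:\n{message}\nCritique:\n{critique}"
        )
    return message
```

With `passes=0` the first draft is returned unchanged, which is why 0 means "single pass".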

## Description

fobis commit reads the currently staged changes and sends them to a local LLM to generate a well-formed commit message following the Conventional Commits specification.

The prompt includes:

- **Complete file list** — `git diff --cached --name-status`, always authoritative regardless of diff size
- **Stat summary** — `git diff --cached --stat`
- **Staged diff** — up to `max_diff_chars` characters; truncated at file boundaries (metadata headers preserved for all files even when hunks are cut)
- **Recent commit history** — last 15 commits, used as a style reference
- **Branch name** — gives the model deployment context
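
The file-boundary-aware truncation described above can be sketched roughly as follows; the function name and exact cut rules are illustrative, not the actual FoBiS implementation:

```python
def truncate_diff(diff: str, max_chars: int) -> str:
    """Cut a unified diff at file boundaries, keeping every file's
    'diff --git' metadata header even when its hunks are dropped."""
    # Split into per-file sections; each section starts with 'diff --git'.
    sections = ["diff --git" + s for s in diff.split("diff --git") if s]
    out, used = [], 0
    for sec in sections:
        header = sec.split("@@", 1)[0]  # metadata lines before the first hunk
        if used + len(sec) <= max_chars:
            out.append(sec)             # the whole file still fits
            used += len(sec)
        else:
            out.append(header + "[hunks truncated]\n")  # header only
            used += len(header)
    return "".join(out)
```

Because the complete file list is sent separately, the model still sees every changed path even when most hunks are cut.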

The generated message is printed to standard output. Passing `--apply` prompts for confirmation and then runs `git commit -m <message>` automatically.

No network access to external APIs is required — the LLM runs locally via Ollama or any OpenAI-compatible server (LM Studio, vLLM, llama.cpp, etc.).

## Backends

### ollama (default)

Calls the Ollama native streaming chat API at `{url}/api/chat`. Requires Ollama to be running locally.

```bash
# Install Ollama: https://ollama.com
# Pull a model:
ollama pull qwen3-coder:30b-a3b-q4_K_M

# Run fobis commit (Ollama default, no extra flags needed):
fobis commit
```

### openai

Calls any OpenAI-compatible endpoint at `{url}/v1/chat/completions`. Covers:

- **LM Studio** — default URL: `http://localhost:1234`
- **llama.cpp server** — default URL: `http://localhost:8080`
- **vLLM** — default URL: `http://localhost:8000`
- Any cloud proxy exposing the OpenAI API

```bash
# LM Studio example:
fobis commit --backend openai --url http://localhost:1234 --model llama-3.2-3b
```

Note: Ollama also exposes an OpenAI-compatible endpoint at `/v1/chat/completions`, so `--backend openai --url http://localhost:11434` works with Ollama too.
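
The two backends differ only in endpoint path and payload shape. A rough sketch of the request each one builds (illustrative names; not the actual FoBiS code, and no network call is made here):

```python
def build_request(backend: str, url: str, model: str, prompt: str):
    """Return (endpoint, JSON payload) for the chosen backend."""
    messages = [{"role": "user", "content": prompt}]
    if backend == "ollama":
        # Ollama native streaming chat API
        return f"{url}/api/chat", {"model": model,
                                   "messages": messages,
                                   "stream": True}
    # Any OpenAI-compatible server (LM Studio, vLLM, llama.cpp, ...)
    return f"{url}/v1/chat/completions", {"model": model,
                                          "messages": messages}
```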

## User config file

All LLM settings can be persisted in `~/.config/fobis/config.ini` (XDG-aware: respects `$XDG_CONFIG_HOME`). Create the file with commented defaults:

```bash
fobis commit --init-config
```
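
The XDG-aware lookup amounts to the following sketch, assuming standard `$XDG_CONFIG_HOME` semantics (fall back to `~/.config` when the variable is unset):

```python
import os

def config_path() -> str:
    """Resolve the FoBiS config location, honouring $XDG_CONFIG_HOME."""
    base = os.environ.get("XDG_CONFIG_HOME") or os.path.expanduser("~/.config")
    return os.path.join(base, "fobis", "config.ini")
```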

The generated file looks like:

```ini
# FoBiS user configuration
# Location: /home/user/.config/fobis/config.ini
#
# All values shown are the defaults.  Uncomment and edit to override.

[llm]
# LLM backend: "ollama" (native API) or "openai" (any OpenAI-compatible endpoint)
# backend = ollama

# Base URL of the LLM server (no trailing slash)
# url = http://localhost:11434

# Model to use for commit-message generation
# model = qwen3-coder:30b-a3b-q4_K_M

# Maximum staged-diff characters sent to the model
# Truncation is file-boundary-aware: every file's metadata header is preserved.
# The complete file list is always sent separately and is never truncated.
# max_diff_chars = 12000

# Critique-and-rewrite passes after the initial draft (0 = single pass)
# Increase to 1-3 for small/fast models that produce shallow first drafts
# refine_passes = 0
```

Priority (highest first): CLI flags → config file → hardcoded defaults.
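
That precedence can be modelled as a simple dict merge; the key names below mirror the config file, but the function itself is only a sketch of the behaviour:

```python
def effective_settings(cli: dict, file_cfg: dict) -> dict:
    """CLI flags override the config file, which overrides hardcoded defaults."""
    defaults = {
        "backend": "ollama",
        "url": "http://localhost:11434",
        "model": "qwen3-coder:30b-a3b-q4_K_M",
        "max_diff_chars": 12000,
        "refine_passes": 0,
    }
    # Drop CLI flags the user did not pass (None), then let later dicts win.
    cli = {k: v for k, v in cli.items() if v is not None}
    return {**defaults, **file_cfg, **cli}
```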

Inspect effective settings at any time:

```bash
fobis commit --show-config
# Config file : /home/user/.config/fobis/config.ini
#   [llm]
#   backend        = ollama
#   url            = http://localhost:11434
#   model          = qwen3-coder:30b-a3b-q4_K_M
#   max_diff_chars = 12000
#   refine_passes  = 0
```

## Examples

### Generate a message (print only)

```bash
git add fobis/Commit.py fobis/cli/commit.py
fobis commit
```

Output:

```
[ollama:qwen3-coder:30b-a3b-q4_K_M] Generating commit message…

feat(cli): add LLM-assisted commit-message generation

Introduce `fobis commit` to generate Conventional Commits messages for
staged changes via a local LLM. Supports the native Ollama API and any
OpenAI-compatible endpoint (LM Studio, vLLM, llama.cpp).
```

### Generate and commit in one step

```bash
git add fobis/Commit.py
fobis commit --apply
```

After printing the message:

```
Commit with this message? [y/N] y
[develop abc1234] feat(cli): add LLM-assisted commit-message generation
```

### Override model for a single run

```bash
fobis commit --model llama3.2
```

### Use LM Studio

```bash
fobis commit --backend openai --url http://localhost:1234 --model llama-3.2-3b-instruct
```

### Use a custom config file

```bash
fobis commit --config ~/work/fobis-work.ini
```

## See also