Tiny CLI wrapper around `git diff` + the `llm` tool to get fast, actionable PR reviews from multiple models at once.
```shell
review --models "gpt-4o,gemini-2.0-flash,anthropic/claude-3-7-sonnet-latest" origin/master...HEAD
```
Swap `origin/master` for `origin/main` if your default branch is `main`.
- `git` and `bash` (any Linux/macOS shell is fine)
- `llm` CLI (Python-based)
- Optional: `bat` for nice Markdown output
Quick install (recommended in a venv):
```shell
python3 -m venv ~/.llm-env
source ~/.llm-env/bin/activate
pip install --upgrade pip llm

# (Optional) On Debian/Ubuntu for pretty output
sudo apt install -y bat && sudo ln -s /usr/bin/batcat /usr/local/bin/bat || true
```
Place the `review` script somewhere on your `PATH`, e.g. `~/bin/review`, and make it executable:

```shell
chmod +x ~/bin/review
```
The script can call OpenAI, Gemini, and Claude via `llm`. Set keys like this:

```shell
llm keys set openai     # for OpenAI (e.g., gpt-4o)
llm keys set google     # for Gemini (e.g., gemini-2.0-flash)
llm keys set anthropic  # for Claude (e.g., anthropic/claude-3-7-sonnet-latest)
```
Quick test:

```shell
llm -m gpt-4o "hello"
llm -m gemini-2.0-flash "hello"
llm -m anthropic/claude-3-7-sonnet-latest "hello"
```
Compare the current branch to the default branch:

```shell
review --models "gpt-4o,gemini-2.0-flash,anthropic/claude-3-7-sonnet-latest" origin/main...HEAD
```

Review only staged changes:

```shell
review --cached
```

Add extra context (PR description, ticket, etc.):

```shell
git log -1 --pretty=%B | review --context -
```

Set models once per shell:

```shell
export LLM_MODELS="gpt-4o,gemini-2.0-flash,anthropic/claude-3-7-sonnet-latest"
review origin/main...HEAD
```
Each review includes:

- A short readiness rating (★☆☆ / ★★☆ / ★★★)
- Blocking items to fix before human review
- Non-blocking suggestions
- Test plan gaps
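The sections above treat `review` as a black box. As a rough mental model, its core loop is: collect the diff, then send the same diff to every configured model. The sketch below is hypothetical (the function name, defaults, and prompt are invented for illustration; the real script's flags and prompt differ), assuming only `git` and the `llm` CLI:

```shell
# Hypothetical sketch of the core of a review-style wrapper, written as a
# shell function for illustration; the real script's behavior may differ.
review_diff() {
  local models="${LLM_MODELS:-gpt-4o}"   # comma-separated model list
  local range="${1:---cached}"           # git range, or --cached for staged changes
  local diff

  if [ "$range" = "--cached" ]; then
    diff="$(git diff --cached)"
  else
    diff="$(git diff "$range")"
  fi

  if [ -z "$diff" ]; then
    echo "No diff for '$range' - nothing to review." >&2
    return 0
  fi

  # Fan the same diff out to every model, one Markdown section per model.
  local model
  local IFS=','
  for model in $models; do
    echo "## Review from $model"
    printf '%s\n' "$diff" | llm -m "$model" \
      "Review this diff: readiness rating, blocking items, suggestions, test plan gaps."
  done
}
```

Running each model sequentially over the identical diff is what makes the per-model output sections comparable.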
- No output? Probably no diff. Try `review --cached` or confirm your range (e.g., `origin/main...HEAD`).
- Auth error? Re-run the key setup commands above.
- Command not found? Ensure `llm` and `review` are on your `PATH`, or activate your venv.
If you’re running this inside Codex, use the included `codex-startup.sh` script in this repo (from the project root).

This script will:

- Create a Python venv and install `llm` (plus Gemini/Claude plugins) and the `review` CLI.
- Persist your API keys for both the `llm` tool and your shell (adds exports to `~/.bashrc`).
- Generate helper scripts under `tools/`:
  - `tools/pre_pr_review.sh` – runs the multi-model review
  - `tools/auto_fix_from_review.sh` – asks an LLM for a minimal patch based on the review
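To make the auto-fix step concrete, here is a hypothetical sketch of what a helper like `tools/auto_fix_from_review.sh` could do (the generated script may differ in model, prompt, and error handling; `FIX_MODEL` and the function name are invented): feed the saved review to an LLM, ask for a minimal unified diff, and apply it only if it passes a dry run:

```shell
# Hypothetical sketch of the auto-fix step; the model default and prompt are
# illustrative, not the generated script's actual values.
auto_fix() {
  local review_file="${1:-.ai-review.md}"
  local patch

  # Ask the model for a patch, feeding the saved review on stdin.
  patch="$(llm -m "${FIX_MODEL:-gpt-4o}" \
    "Produce a minimal unified diff that fixes the blocking items in this review. Output only the diff." \
    < "$review_file")"

  # Dry-run first; only touch the working tree if git accepts the patch.
  printf '%s\n' "$patch" | git apply --check - &&
    printf '%s\n' "$patch" | git apply -
}
```

The `git apply --check` dry run is the safety valve: if the model emits a malformed or non-applying diff, nothing is changed.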
Set these three secrets in Codex exactly as named:

| Secret Name | Value (example) | Used For |
|---|---|---|
| `OPENAI_API_KEY` | `sk-...` | OpenAI models (e.g., `gpt-4o`) |
| `ANTHROPIC_API_KEY` | `sk-ant-...` | Claude models (e.g., `claude-3-5-sonnet-*`) |
| `GOOGLE_API_KEY` | `AIza...` | Gemini models (e.g., `gemini-2.0-flash`) |
The startup script will run `llm keys set` for each provider and export these in the shell (with fallbacks in `~/.bashrc`).
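For example, a key-persisting helper inside such a startup script could look like the sketch below. This is illustrative, not the actual `codex-startup.sh` code: the function name is invented, and it assumes `llm keys set` accepts a non-interactive `--value` option:

```shell
# Hypothetical helper showing one way to persist a key for both the llm CLI
# and the shell; the actual codex-startup.sh may do this differently.
persist_key() {
  local provider="$1" var_name="$2" value="$3"

  # Store the key for the llm CLI (assumes the --value option is available).
  llm keys set "$provider" --value "$value"

  # Also export it from ~/.bashrc so other tools see it, but only add it once.
  if ! grep -q "export $var_name=" "$HOME/.bashrc" 2>/dev/null; then
    printf 'export %s="%s"\n' "$var_name" "$value" >> "$HOME/.bashrc"
  fi
}
```

The `grep` guard keeps repeated startup runs from appending duplicate export lines to `~/.bashrc`.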
```shell
# Run an initial review
bash tools/pre_pr_review.sh

# (Optional) Try an auto-fix pass based on .ai-review.md
bash tools/auto_fix_from_review.sh .ai-review.md || true

# Run a second review after fixes
bash tools/pre_pr_review.sh

# Open a PR (uses repo's default branch as base)
gh pr create --fill \
  --body-file .ai-review.md \
  --title "chore: codex multi-model review" \
  --base "$(git remote show origin | sed -n '/HEAD branch/s/.*: //p')"
```
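The `--base` value in the `gh pr create` call above comes from parsing `git remote show origin`, whose output contains a line like `HEAD branch: main`; the `sed` expression keeps only the branch name. Wrapped as a tiny helper (the function name is just for illustration):

```shell
# Print the remote's default branch by parsing "git remote show origin",
# whose output contains a line like "  HEAD branch: main".
default_branch() {
  git remote show origin | sed -n '/HEAD branch/s/.*: //p'
}
```

This avoids hard-coding `main` vs `master` when opening the PR.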
With the Codex integration and `codex-startup.sh` in place, you can work entirely in plain English. For example, you could type to Codex:

"Refactor `src/app/example.component.ts` to use async/await instead of promises and run a multi-model review before opening a PR."
When you make a request like this, Codex will:

- Interpret your instruction and make the requested code change(s) in your repo.
- Run `tools/pre_pr_review.sh` – this triggers the multi-model review (`review` CLI) with OpenAI, Gemini, and Claude using the API keys you set in Codex secrets.
- Save the AI review output to `.ai-review.md`.
- Optionally run `tools/auto_fix_from_review.sh` – this feeds `.ai-review.md` to an LLM to propose a minimal patch and apply it.
- Run a second review so you can see whether the fixes resolved the earlier feedback.
- Open a PR using `gh pr create` with the `.ai-review.md` contents as the PR body.
Because `codex-startup.sh` persists your keys and sets everything up automatically, this works in every Codex session without reconfiguring.
You can mix and match tasks. Ask Codex to:
- Make a specific change and run the review
- Just run a review on your current branch
- Apply auto-fixes from the last review
- Open a PR after review passes
All using plain language instructions.