AI-powered code review tool.
Made with ❤️ by @NikitaFilonov
- ✨ About
- 🧪 Live Preview
- 🚀 Quick Start
- ⚙️ CI/CD Integration
- 🐙 GitHub Actions
- 🦊 GitLab CI/CD
- 📚 Documentation
⚠️ Privacy & Responsibility Notice
AI Review is a developer tool that brings AI-powered code review directly into your workflow. It helps teams improve code quality, enforce consistency, and speed up the review process.
✨ Key features:
- Multiple LLM providers – choose between OpenAI, Claude, Gemini, Ollama, or OpenRouter, and switch anytime.
- VCS integration – works out of the box with GitLab, GitHub, Bitbucket, and Gitea.
- Customizable prompts – adapt inline, context, and summary reviews to match your team's coding guidelines.
- Reply modes – AI can now participate in existing review threads, adding follow-up replies in both inline and summary discussions.
- Flexible configuration – supports YAML, JSON, and ENV, with seamless overrides in CI/CD pipelines (see the `.env` sketch after this list).
- AI Review runs fully client-side – it never proxies or inspects your requests.
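For example, the same settings can be expressed as a flat `.env` file, where double underscores map to nested config keys (a minimal sketch; the variable names below mirror the CI examples later in this README):

```env
# .env – equivalent of llm.provider / llm.meta.model / llm.http_client.api_token
LLM__PROVIDER=OPENAI
LLM__META__MODEL=gpt-4o-mini
LLM__HTTP_CLIENT__API_TOKEN=sk-your-key-here   # placeholder token
```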
AI Review runs automatically in your CI/CD pipeline and posts inline comments, summary reviews, and AI-generated replies directly inside your merge requests. This makes reviews faster, more conversational, and still fully under human control.
Curious how AI Review works in practice? Here are five real Pull Requests reviewed entirely by the tool – one per mode:
| Mode | Description | 🐙 GitHub | 🦊 GitLab |
|---|---|---|---|
| 🧩 Inline | Adds line-by-line comments directly in the diff. Focuses on specific code changes. | View on GitHub | View on GitLab |
| 🧠 Context | Performs a broader analysis across multiple files, detecting cross-file issues and inconsistencies. | View on GitHub | View on GitLab |
| 📝 Summary | Posts a concise high-level summary with key highlights, strengths, and major issues. | View on GitHub | View on GitLab |
| 💬 Inline Reply | Generates a context-aware reply to an existing inline comment thread. Can clarify decisions, propose fixes, or provide code suggestions. | View on GitHub | View on GitLab |
| 💬 Summary Reply | Continues the summary-level review discussion, responding to reviewer comments with clarifications, rationale, or actionable next steps. | View on GitHub | View on GitLab |
👉 Each review was generated automatically via GitHub Actions using the corresponding mode:
```bash
ai-review run-inline
ai-review run-context
ai-review run-summary
ai-review run-inline-reply
ai-review run-summary-reply
```
Install via pip:
```bash
pip install xai-review
```
📦 Available on PyPI
Or run directly via Docker:
```bash
docker run --rm -v $(pwd):/app nikitafilonov/ai-review:latest ai-review run-summary
```
🐳 Pull from DockerHub
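The GitLab CI template below runs this same image configured purely through environment variables, so secrets can likewise be forwarded to the container with `-e` (a sketch; the variable names are taken from the CI examples, and a complete run also needs the VCS variables shown there):

```bash
docker run --rm -v $(pwd):/app \
  -e LLM__HTTP_CLIENT__API_TOKEN="$OPENAI_API_KEY" \
  nikitafilonov/ai-review:latest ai-review run-summary
```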
👉 Before running, create a basic configuration file `.ai-review.yaml` in the root of your project:
```yaml
llm:
  provider: OPENAI
  meta:
    model: gpt-4o-mini
    max_tokens: 1200
    temperature: 0.3
  http_client:
    timeout: 120
    api_url: https://api.openai.com/v1
    api_token: ${OPENAI_API_KEY}

vcs:
  provider: GITLAB
  pipeline:
    project_id: "1"
    merge_request_id: "100"
  http_client:
    timeout: 120
    api_url: https://gitlab.com
    api_token: ${GITLAB_API_TOKEN}
```
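The `${OPENAI_API_KEY}` and `${GITLAB_API_TOKEN}` placeholders refer to environment variables, so set them before launching a review (placeholder values shown):

```bash
export OPENAI_API_KEY="sk-..."        # consumed via llm.http_client.api_token
export GITLAB_API_TOKEN="glpat-..."   # consumed via vcs.http_client.api_token

ai-review run
```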
👉 This will:
- Run AI Review against your codebase.
- Generate inline and/or summary comments (depending on the selected mode).
- Use your chosen LLM provider (OpenAI GPT-4o-mini in this example).
Note: Running `ai-review run` executes the full review (inline + summary). To run only one mode, use the dedicated subcommands:
- `ai-review run-inline`
- `ai-review run-context`
- `ai-review run-summary`
- `ai-review run-inline-reply`
- `ai-review run-summary-reply`
AI Review can be configured via `.ai-review.yaml`, `.ai-review.json`, or `.env`. See ./docs/configs for complete, ready-to-use examples.
Key things you can customize:
- LLM provider – OpenAI, Gemini, Claude, Ollama, or OpenRouter
- Model settings – model name, temperature, max tokens
- VCS integration – works out of the box with GitLab, GitHub, Bitbucket, and Gitea
- Review policy – which files to include/exclude, review modes
- Prompts – inline/context/summary prompt templates
👉 Minimal configuration is enough to get started. Use the full reference configs if you want fine-grained control (timeouts, artifacts, logging, etc.).
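If your team prefers JSON, the same minimal config translates directly (a sketch, assuming `.ai-review.json` mirrors the YAML key structure one-to-one):

```json
{
  "llm": {
    "provider": "OPENAI",
    "meta": { "model": "gpt-4o-mini", "max_tokens": 1200, "temperature": 0.3 },
    "http_client": {
      "timeout": 120,
      "api_url": "https://api.openai.com/v1",
      "api_token": "${OPENAI_API_KEY}"
    }
  },
  "vcs": {
    "provider": "GITLAB",
    "pipeline": { "project_id": "1", "merge_request_id": "100" },
    "http_client": {
      "timeout": 120,
      "api_url": "https://gitlab.com",
      "api_token": "${GITLAB_API_TOKEN}"
    }
  }
}
```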
AI Review works out-of-the-box with major CI providers.
Use these snippets to run AI Review automatically on Pull/Merge Requests.
Each integration uses environment variables for LLM and VCS configuration.
For full configuration details (timeouts, artifacts, logging, prompt overrides), see ./docs/configs.
Add a workflow like this (manual trigger from Actions tab):
```yaml
name: AI Review

on:
  workflow_dispatch:
    inputs:
      review-command:
        type: choice
        default: run
        options: [ run, run-inline, run-context, run-summary, run-inline-reply, run-summary-reply ]
      pull-request-number:
        type: string
        required: true

jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: Nikita-Filonov/ai-review@v0.36.0
        with:
          review-command: ${{ inputs.review-command }}
        env:
          # --- LLM configuration ---
          LLM__PROVIDER: "OPENAI"
          LLM__META__MODEL: "gpt-4o-mini"
          LLM__META__MAX_TOKENS: "15000"
          LLM__META__TEMPERATURE: "0.3"
          LLM__HTTP_CLIENT__API_URL: "https://api.openai.com/v1"
          LLM__HTTP_CLIENT__API_TOKEN: ${{ secrets.OPENAI_API_KEY }}

          # --- GitHub integration ---
          VCS__PROVIDER: "GITHUB"
          VCS__PIPELINE__OWNER: ${{ github.repository_owner }}
          VCS__PIPELINE__REPO: ${{ github.event.repository.name }}
          VCS__PIPELINE__PULL_NUMBER: ${{ inputs.pull-request-number }}
          VCS__HTTP_CLIENT__API_URL: "https://api.github.com"
          VCS__HTTP_CLIENT__API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
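Because the workflow uses `workflow_dispatch`, it can also be triggered from the command line with the GitHub CLI (the PR number here is illustrative):

```bash
gh workflow run "AI Review" \
  -f review-command=run-summary \
  -f pull-request-number=123
```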
👉 Full example: ./docs/ci/github.yaml
For GitLab users:
```yaml
ai-review:
  when: manual
  stage: review
  image: nikitafilonov/ai-review:latest
  rules:
    - if: '$CI_MERGE_REQUEST_IID'
  script:
    - ai-review run
  variables:
    # --- LLM configuration ---
    LLM__PROVIDER: "OPENAI"
    LLM__META__MODEL: "gpt-4o-mini"
    LLM__META__MAX_TOKENS: "15000"
    LLM__META__TEMPERATURE: "0.3"
    LLM__HTTP_CLIENT__API_URL: "https://api.openai.com/v1"
    LLM__HTTP_CLIENT__API_TOKEN: "$OPENAI_API_KEY"
    # --- GitLab integration ---
    VCS__PROVIDER: "GITLAB"
    VCS__PIPELINE__PROJECT_ID: "$CI_PROJECT_ID"
    VCS__PIPELINE__MERGE_REQUEST_ID: "$CI_MERGE_REQUEST_IID"
    VCS__HTTP_CLIENT__API_URL: "$CI_SERVER_URL"
    VCS__HTTP_CLIENT__API_TOKEN: "$CI_JOB_TOKEN"
  allow_failure: true # Optional: don't block pipeline if AI review fails
```
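To post only one kind of review from the pipeline, swap the `script` line for one of the dedicated subcommands, e.g.:

```yaml
script:
  - ai-review run-summary   # posts only the high-level summary review
```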
👉 Full example: ./docs/ci/gitlab.yaml
See these folders for reference templates and full configuration options:
- ./docs/ci – CI/CD integration templates (GitHub Actions, GitLab CI)
- ./docs/cli – CLI command reference and usage examples
- ./docs/hooks – hook reference and lifecycle events
- ./docs/configs – full configuration examples (`.yaml`, `.json`, `.env`)
- ./docs/prompts – prompt templates for Python/Go (light & strict modes)
AI Review does not store, log, or transmit your source code to any external service other than the LLM provider explicitly configured in your `.ai-review.yaml`.
All data is sent directly from your CI/CD environment to the selected LLM API endpoint (e.g. OpenAI, Gemini, Claude, OpenRouter). No intermediary servers or storage layers are involved.
If you use Ollama, requests are sent to your local or self-hosted Ollama runtime (by default http://localhost:11434). This allows you to run reviews completely offline, keeping all data strictly inside your infrastructure.
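A minimal sketch of such a local setup, reusing the config schema from the Quick Start (the model name is a hypothetical example, and whether `api_token` may simply be omitted for Ollama is an assumption):

```yaml
llm:
  provider: OLLAMA          # assumed to follow the uppercase OPENAI/GITLAB convention
  meta:
    model: llama3.1         # hypothetical local model name
    max_tokens: 1200
    temperature: 0.3
  http_client:
    timeout: 120
    api_url: http://localhost:11434   # default Ollama endpoint
```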
⚠️ Please ensure you use proper API tokens and avoid exposing corporate or personal secrets. If you accidentally leak private code or credentials due to incorrect configuration (e.g., using a personal key instead of an enterprise one), it is your responsibility – the tool does not retain or share any data by itself.
🔧 AI Review – open-source AI-powered code reviewer