Summary
Add support for using local AI models (via Ollama or LM Studio) to generate commit messages directly in Fork.
Motivation
• Current AI-assisted commit generation typically relies on cloud services (e.g., ChatGPT, Claude).
• Many developers prefer not to send diffs or code externally due to privacy, security, or compliance concerns.
• Local model inference (Ollama, LM Studio) allows AI-powered commit messages while keeping all data on-device.
Proposal
• Allow Fork to connect to a local model provider (Ollama/LM Studio).
• Fork could send the staged diff to the local API and receive a suggested commit message (see the sketch after this list).
• Users can then edit/approve before committing.
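To illustrate the round trip described above, here is a minimal Python sketch of how a client could ask a locally running Ollama server for a commit message. The model name, port, and prompt wording are assumptions for illustration, not part of Fork; LM Studio could be targeted the same way through its OpenAI-compatible local endpoint.

```python
# Minimal sketch (not Fork's actual implementation): suggest a commit message
# from the staged diff using a locally running Ollama server.
# Assumptions: Ollama on its default port 11434 and a model named "llama3"
# already pulled; both are placeholders the user would configure.

import json
import subprocess
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's local HTTP API
MODEL = "llama3"  # hypothetical choice; any locally installed model works

def staged_diff() -> str:
    """Return the currently staged diff, as the client would capture it."""
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout

def suggest_commit_message(diff: str) -> str:
    """Send the diff to the local model and return the suggested message."""
    prompt = (
        "Write a concise, imperative-mood git commit message for this diff:\n\n"
        + diff
    )
    payload = json.dumps(
        {"model": MODEL, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

if __name__ == "__main__":
    print(suggest_commit_message(staged_diff()))
```

The diff never leaves the machine: the only network call is to localhost, which is the core of the proposal.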
Benefits
• Privacy & security (code never leaves the machine).
• Performance (inference on a local, optimized model responds quickly and avoids network round trips).
• Flexibility (developers can pick or fine-tune models).