Privacy-first AI coding assistant that lives entirely in your terminal. No clouds, no compromises.

Local Cursor wraps a local large language model (LLM) served by Ollama in a simple CLI, giving you Copilot-style help without ever sending a byte of your code to external servers.
| Capability | What it means |
| --- | --- |
| File tooling | Read, write, list & search files via natural-language requests |
| Shell commands | Safely run whitelisted commands (`ls`, `grep`, `find`, …) |
| Web search | Optional Exa API integration for up-to-date answers |
| Local LLM | Ships with `qwen3:32b` by default; swap in any Ollama model |
| Extensible | Add new tools, commands, or UI layers without touching model code |
To get started you'll need:

- Python ≥ 3.8
- Ollama running locally (`brew install ollama`, or see the docs)
- (Optional) An Exa API key for web search
```bash
# Clone and enter the repo
git clone https://github.com/towardsai/local-cursor.git
cd local-cursor

# Set up a virtualenv
python -m venv .venv && source .venv/bin/activate

# Install Python deps
pip install -r requirements.txt

# Pull a model & start Ollama
ollama pull qwen3:32b
ollama serve  # keep this terminal running

# (Optional) add your Exa key
echo "EXA_API_KEY=sk-..." > .env

# Fire up the assistant
python main.py --model qwen3:32b
```
Tip → run `python main.py --help` for all CLI flags (debug mode, model override, …).
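Those flags come from a small `click` entrypoint. A minimal sketch of what it can look like; only `--model` is confirmed by the quick start above, the `--debug` flag is an assumption for illustration:

```python
# Hypothetical sketch of the CLI entrypoint; only --model appears in the
# quick start above, --debug is an assumed flag.
import click

@click.command()
@click.option("--model", default="qwen3:32b", help="Ollama model tag to use.")
@click.option("--debug", is_flag=True, help="Print raw model and tool traffic.")
def main(model: str, debug: bool) -> None:
    """Start the Local Cursor assistant against a local Ollama server."""
    click.echo(f"Model: {model} | debug: {debug}")
    # ...hand off to the agent loop from here...

if __name__ == "__main__":
    main()
```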
Under the hood, each request flows through a simple loop:
- Natural-language input from the terminal is sent to the model, along with a system prompt that lists the available tools.
- The local LLM (via Ollama) analyzes the request and responds with either a plain-text answer or a structured tool call.
- If a tool call is issued, the OllamaAgent executes the corresponding function (e.g., read/write a file, run a shell command) and sends the result back to the model.
- This loop continues until the model produces a final answer, which is then printed in your terminal.
All logic is in `main.py`; the heavy lifting is done by the open-source model running locally.
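A stripped-down sketch of that loop, assuming Ollama's OpenAI-compatible endpoint on its default port; the function and variable names here are illustrative, not the repo's exact code:

```python
# Illustrative sketch of the request/tool-call loop; not the repo's exact code.
import json
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API on localhost:11434.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

def run_agent(messages, tools, tool_functions, model="qwen3:32b"):
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        msg = response.choices[0].message
        if not msg.tool_calls:       # plain-text answer: we're done
            return msg.content
        messages.append(msg)         # keep the tool request in context
        for call in msg.tool_calls:  # run each requested tool locally
            result = tool_functions[call.function.name](
                **json.loads(call.function.arguments)
            )
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": str(result),
            })
```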
| Tool | What it does |
| --- | --- |
| `list_files(path=".")` | Show directories and files with human-friendly icons |
| `read_file(path)` | Return the full text of a file |
| `write_file(path, content)` | Create or overwrite a file |
| `find_files(pattern)` | Glob search (e.g. `**/*.py`) |
| `run_command(cmd)` | Execute a whitelisted shell command |
| `web_search(query, num_results=5)` | Query the web via Exa |
Add your own by editing `get_tools_definition()`; the model will "see" them automatically at runtime.
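A new entry follows the OpenAI function-calling schema the client already speaks. For example, here is a hypothetical `count_lines` tool; it is an illustration, not one the repo ships:

```python
# Hypothetical new tool; count_lines is an example, not part of the repo.
def count_lines(path: str) -> int:
    """Count the number of lines in a text file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)

# Entry to append to the list returned by get_tools_definition(),
# in OpenAI function-calling format:
count_lines_tool = {
    "type": "function",
    "function": {
        "name": "count_lines",
        "description": "Count the number of lines in a text file.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File to inspect."}
            },
            "required": ["path"],
        },
    },
}
```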
Local Cursor keeps its runtime lean:
```text
requests        # to make Exa API calls
click           # to build the CLI
colorama        # to format CLI output
openai          # to create an OpenAI client (pointed at Ollama)
python-dotenv   # to load environment variables from our .env file
```

(See `requirements.txt` for exact versions.)
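Of these, `requests` covers the one outbound network call. A sketch of how `web_search` might hit Exa, assuming Exa's documented `/search` endpoint; the wrapper itself is an assumption about `main.py`:

```python
# Sketch of the Exa call; endpoint and fields follow Exa's public API,
# the wrapper function is an assumption about the repo's code.
import os
import requests

def web_search(query: str, num_results: int = 5):
    api_key = os.environ.get("EXA_API_KEY")
    if not api_key:
        raise RuntimeError("Exa API key not configured")
    resp = requests.post(
        "https://api.exa.ai/search",
        headers={"x-api-key": api_key},
        json={"query": query, "numResults": num_results},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]
```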
- Only a safe subset of shell commands is allowed by default; edit `allowed_commands` in `run_command()` to adjust (see the sketch after this list).
- All file paths are resolved inside the current working directory to avoid accidental system-wide access.
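A sketch of what those two guards can look like; `allowed_commands` and `run_command` come from the repo, while the exact checks and the `resolve_path` helper are assumptions:

```python
# Sketch of the two safety guards; resolve_path is a hypothetical helper.
import shlex
import subprocess
from pathlib import Path

allowed_commands = {"ls", "grep", "find", "cat", "pwd"}  # safe default subset

def run_command(cmd: str) -> str:
    program = shlex.split(cmd)[0]
    if program not in allowed_commands:
        return f"command {program!r} not allowed for security reasons"
    done = subprocess.run(
        shlex.split(cmd), capture_output=True, text=True, timeout=30
    )
    return done.stdout or done.stderr

def resolve_path(path: str) -> Path:
    """Refuse paths that escape the current working directory."""
    resolved = (Path.cwd() / path).resolve()
    try:
        resolved.relative_to(Path.cwd())  # raises ValueError if outside cwd
    except ValueError:
        raise ValueError(f"{path!r} is outside the working directory")
    return resolved
```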
Common issues and fixes:

| Symptom | Fix |
| --- | --- |
| `Error: Exa API key not configured` | Add `EXA_API_KEY` to `.env` or disable web search |
| `command … not allowed for security reasons` | Add it to `allowed_commands` (understand the risks first!) |
| High memory usage | Try a smaller Ollama model such as `phi3:mini` |
Contributions are welcome:

- Fork & clone
- Create a virtualenv and install dev dependencies (`pip install -r requirements-dev.txt`)
- Create a PR