A lightweight CLI-based local LLM assistant.
Note: Works only on GNU/Linux.
- Load and process text/code files
- Browse the web via the assistant
- Support for vision and thinking models
- (More features coming soon!)
Note: Make sure to use a model that supports tool calls.
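If you are not sure whether a model supports tool calls, you can probe Ollama's /api/chat endpoint directly. The snippet below is only a minimal sketch and not part of the assistant; the endpoint URL, the example model name, and the dummy tool are all placeholders to adjust.

```python
import json
import urllib.error
import urllib.request

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # assumed default endpoint
MODEL = "some-model"  # replace with the model you want to test

# A dummy tool definition: we only care whether the model emits tool_calls.
payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "What time is it?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Return the current time",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
    "stream": False,
}

req = urllib.request.Request(
    OLLAMA_CHAT_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req) as resp:
        message = json.loads(resp.read())["message"]
    # Tool-capable models usually answer with a tool_calls list instead of plain text.
    print("tool_calls:", message.get("tool_calls"))
except urllib.error.HTTPError as err:
    # Ollama may reject the request outright if the model has no tool support.
    print("Request rejected:", err.read().decode())
```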
Nothing here! (for now)
- Clone the repository:
  git clone <repo-url>
- Create a virtual environment:
  python3 -m venv myenv
- Activate it and install dependencies:
  source myenv/bin/activate
  pip install -r requirements.txt
- Enjoy! 🎉
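Before launching the assistant, it can help to confirm that Ollama is running and reachable. The snippet below is only a quick sanity check, assuming the default endpoint at http://localhost:11434; it is not part of the project.

```python
import json
import urllib.request

# Assumed default Ollama endpoint -- change it if yours differs.
OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"

try:
    with urllib.request.urlopen(OLLAMA_TAGS_URL, timeout=5) as resp:
        models = json.loads(resp.read()).get("models", [])
    print("Ollama is up. Installed models:")
    for m in models:
        print(" -", m["name"])
except OSError as err:
    print("Could not reach Ollama:", err)
```

If no models are listed, pull one first with ollama pull <model_name>.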
See Additional Setup for optional improvements.
Basic configuration is in config.py and includes:
- System Prompt
- Model selection
- Ollama endpoint
- Verbose mode
- Streaming mode
- Cosmetic options...
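The actual option names are defined in config.py; as a rough illustration only (the variable names below are hypothetical, not the file's real ones), the settings might look something like this:

```python
# config.py -- illustrative sketch, the real option names may differ

SYSTEM_PROMPT = "You are a helpful local assistant."  # prompt prepended to every chat
MODEL = "llama3.1"                                    # default model served by Ollama
OLLAMA_ENDPOINT = "http://localhost:11434"            # where the Ollama server listens
VERBOSE = False                                       # print extra debugging information
STREAM = True                                         # stream tokens as they arrive (no markdown rendering)
```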
| Command | Description |
|---|---|
| /bye | Exit the assistant |
| /clear | Clear the context but keep the system prompt |
| /file <absolute/path> | Load a text/code file (.txt, .py, .c, etc.) |
| /help | Show this help message |
| /list | List available models |
| /model <model_name> | Change the current model |
| /regenerate | Regenerate the last assistant message |
| /show_config | Show the current configuration |
| /stream True - False | Enable or disable streaming (markdown is not supported when True) |
| /verbose True - False | Enable or disable verbose mode |
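Internally, slash commands like these are typically caught by a small dispatcher before anything is sent to the model. The sketch below only illustrates that pattern under assumed names (handle_command, and a state object with messages/model/stream attributes); it is not the project's actual code.

```python
def handle_command(line: str, state) -> bool:
    """Return True if the line was a slash command and has been handled."""
    if not line.startswith("/"):
        return False  # regular chat input, forward it to the model

    cmd, *args = line.split()
    if cmd == "/bye":
        raise SystemExit
    elif cmd == "/clear":
        # Drop the conversation but keep the system prompt.
        state.messages = [m for m in state.messages if m["role"] == "system"]
    elif cmd == "/model" and args:
        state.model = args[0]
    elif cmd == "/stream" and args:
        state.stream = args[0].lower() == "true"
    else:
        print("Unknown or incomplete command, try /help")
    return True
```

A real dispatcher would also cover /file, /list, /regenerate, /show_config, /verbose and /help in the same way.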
To make it easier to run:
- Open your bash config:
  nano ~/.bashrc
- Add an alias (change paths accordingly):
  alias ai='cd /path/to/script_folder && source myenv/bin/activate && python main.py'
- Reload bash:
  source ~/.bashrc

Now you can just type ai to start your assistant. UwU