A terminal user interface (TUI) application for managing local Ollama models, written in Rust.
*(Demo video: lazyollama.mp4)*
- List Models: Displays a scrollable list of locally installed Ollama models.
- Run Models: Run any of the locally installed Ollama models.
- Inspect Models: Shows detailed information for the selected model (size, modification date, digest, family, parameters, etc.).
- Delete Models: Deletes the selected model after a confirmation prompt.
- Install Models: Pulls new models from the Ollama registry.
- Environment Variable: Uses the `OLLAMA_HOST` environment variable for the Ollama API endpoint (defaults to `http://localhost:11434`); see the sketch just below.
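A minimal sketch of that endpoint lookup, assuming only the standard library (the helper name `ollama_base_url` is illustrative, not taken from the codebase):

```rust
use std::env;

/// Resolve the Ollama API base URL from OLLAMA_HOST,
/// falling back to the documented default.
fn ollama_base_url() -> String {
    env::var("OLLAMA_HOST").unwrap_or_else(|_| "http://localhost:11434".to_string())
}

fn main() {
    println!("Using Ollama endpoint: {}", ollama_base_url());
}
```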
- Rust toolchain (install from [rustup.rs](https://rustup.rs))
- A running Ollama instance ([ollama.com](https://ollama.com))
- Homebrew (macOS / Linux) ([brew.sh](https://brew.sh))
Install using the official Homebrew tap.
Option 1 (Tap first, then install):

```bash
# Add the custom tap
brew tap webmatze/tap

# Install the tool
brew install lazyollama
```

Option 2 (Direct install):

Homebrew can automatically tap and install in one step if you provide the full formula name:

```bash
brew install webmatze/tap/lazyollama
```
Upgrading:
To upgrade to the latest version:
```bash
# Update Homebrew and all formulas (including lazyollama)
brew update
brew upgrade lazyollama
```
The provided installation script is the simplest way to build and install LazyOllama to a system-wide location:
```bash
# 1. Clone the repository
git clone https://github.com/webmatze/lazyollama.git
cd lazyollama

# 2. Run the installation script
chmod +x install.sh
./install.sh
```
The script will:
- Check for required dependencies
- Build the release version
- Install it to the appropriate location for your OS (typically `/usr/local/bin` on Unix-like systems)
- Set appropriate permissions
If you have Rust installed, you can install directly using Cargo:
```bash
# 1. Clone the repository
git clone https://github.com/webmatze/lazyollama.git
cd lazyollama

# 2. Install using cargo
cargo install --path .
```
This will install the binary to your Cargo bin directory (typically `~/.cargo/bin/`), which should be in your PATH.
If you prefer to manually build and place the binary:
```bash
# 1. Clone the repository
git clone https://github.com/webmatze/lazyollama.git
cd lazyollama

# 2. Build the application
cargo build --release

# 3. Copy the binary to a location in your PATH (optional)
# On Linux/macOS (may require sudo)
sudo cp target/release/lazyollama /usr/local/bin/
```
The executable will be located at `target/release/lazyollama`.
- Linux/macOS: Installation to system directories (like `/usr/local/bin`) typically requires root privileges (sudo).
- Windows: The installation script will attempt to install to an appropriate location, but you may need to adjust your PATH environment variable.
After installation, verify that lazyollama is correctly installed and accessible:
```bash
# Check if the command is available
which lazyollama

# Run lazyollama
lazyollama
```
If the command isn't found, ensure the installation location is in your PATH.
- Run the application:

  ```bash
  lazyollama
  ```

- Set Custom Ollama Host (Optional): If your Ollama instance is running on a different host or port, set the `OLLAMA_HOST` environment variable before running:

  ```bash
  export OLLAMA_HOST="http://your-ollama-host:port"
  lazyollama
  ```
- `q`: Quit the application.
- `↓` / `j`: Move selection down.
- `↑` / `k`: Move selection up.
- `d`: Initiate deletion of the selected model (shows confirmation).
- `y` / `Y`: Confirm deletion (when in confirmation mode).
- `n` / `N` / `Esc`: Cancel deletion (when in confirmation mode).
- `i`: Install/pull new models.
- `Enter`: Run the selected model in Ollama.
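As a rough illustration of how these bindings could be dispatched with crossterm, here is a hedged sketch; the `Action` enum and `map_key` helper are hypothetical, and the confirmation-mode keys (`y`/`n`/`Esc`) would be handled separately by a mode check:

```rust
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

/// Hypothetical action type; the real app's handling may differ.
enum Action {
    Quit,
    MoveDown,
    MoveUp,
    StartDelete,
    Install,
    RunModel,
    None,
}

/// Map a key event to an action, mirroring the normal-mode bindings above.
fn map_key(key: KeyEvent) -> Action {
    match key.code {
        KeyCode::Char('q') => Action::Quit,
        KeyCode::Down | KeyCode::Char('j') => Action::MoveDown,
        KeyCode::Up | KeyCode::Char('k') => Action::MoveUp,
        KeyCode::Char('d') => Action::StartDelete,
        KeyCode::Char('i') => Action::Install,
        KeyCode::Enter => Action::RunModel,
        _ => Action::None,
    }
}

fn main() {
    let action = map_key(KeyEvent::new(KeyCode::Char('j'), KeyModifiers::NONE));
    assert!(matches!(action, Action::MoveDown));
}
```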
This project uses the following main Rust crates:

- `ratatui`: For building the TUI.
- `crossterm`: Terminal manipulation backend for `ratatui`.
- `tokio`: Asynchronous runtime.
- `reqwest`: HTTP client for interacting with the Ollama API.
- `serde`: For serializing/deserializing API data.
- `humansize`: For formatting file sizes.
- `thiserror`: For error handling boilerplate.
- `dotenvy`: (Optional) For loading `.env` files if needed.

See `Cargo.toml` for the full list and specific versions.
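To show how a few of these fit together, here is a hedged sketch that uses `tokio`, `reqwest`, and `serde` to fetch the local model list from Ollama's `GET /api/tags` endpoint; the structs cover only a subset of the real response fields, and this is not the app's actual code:

```rust
use serde::Deserialize;

// Trimmed-down view of the GET /api/tags response; the real payload
// also carries digest, modified_at, details, and more.
#[derive(Deserialize)]
struct TagsResponse {
    models: Vec<ModelEntry>,
}

#[derive(Deserialize)]
struct ModelEntry {
    name: String,
    size: u64,
}

// Needs reqwest with the "json" feature and tokio with "macros" + "rt-multi-thread".
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let host = std::env::var("OLLAMA_HOST")
        .unwrap_or_else(|_| "http://localhost:11434".to_string());
    let tags: TagsResponse = reqwest::get(format!("{host}/api/tags"))
        .await?
        .json()
        .await?;
    for model in tags.models {
        println!("{} ({} bytes)", model.name, model.size);
    }
    Ok(())
}
```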
The application follows a simple event loop architecture:

- Initialization: Sets up the terminal, initializes `AppState`, and fetches the initial list of models from the Ollama API.
- Event Loop:
  - Draws the UI based on the current `AppState`.
  - Checks for user input (keyboard events) and results from background tasks (via channels).
  - Handles input: Updates `AppState` (e.g., changes selection, enters delete mode, quits).
  - Handles background task results (e.g., updates model details).
  - Triggers background tasks (e.g., fetching model details) when necessary.
- Cleanup: Restores the terminal state on exit.
```mermaid
graph TD
    A[User Input] --> B[Event Loop]
    B --> C[AppState]
    C --> D[UI Renderer]
    D --> E[Terminal Display]
    B --> F[Background Tasks]
    F --> G[Ollama API]
    G --> F
    F --> C

    subgraph Event Handler
        B
        C
    end

    subgraph UI Layer
        D
        E
    end

    subgraph API Layer
        F
        G
    end
```
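A hedged sketch of that loop's skeleton, assuming `ratatui` 0.28+ (for `ratatui::init`/`ratatui::restore`), crossterm event polling, and a tokio mpsc channel for background-task results; `AppState` here is a stand-in for the real struct, and the spawned task merely simulates a fetch:

```rust
use std::time::Duration;
use crossterm::event::{self, Event, KeyCode};
use ratatui::widgets::Paragraph;
use tokio::sync::mpsc;

/// Stand-in for the real AppState.
struct AppState {
    status: String,
    should_quit: bool,
}

enum TaskResult {
    ModelDetails(String),
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Initialization: set up the terminal and a channel for background results.
    let mut terminal = ratatui::init();
    let (tx, mut rx) = mpsc::unbounded_channel::<TaskResult>();
    let mut app = AppState { status: "loading...".into(), should_quit: false };

    // Background task: simulate fetching model details from the API.
    tokio::spawn(async move {
        let _ = tx.send(TaskResult::ModelDetails("details go here".into()));
    });

    while !app.should_quit {
        // 1. Draw the UI from the current AppState.
        terminal.draw(|frame| {
            frame.render_widget(Paragraph::new(app.status.clone()), frame.area());
        })?;

        // 2. Poll for keyboard input without blocking the loop.
        if event::poll(Duration::from_millis(100))? {
            if let Event::Key(key) = event::read()? {
                match key.code {
                    KeyCode::Char('q') => app.should_quit = true,
                    _ => {} // other bindings as in the keybindings section
                }
            }
        }

        // 3. Drain background-task results and update state.
        while let Ok(TaskResult::ModelDetails(details)) = rx.try_recv() {
            app.status = details;
        }
    }

    // Cleanup: restore the terminal state.
    ratatui::restore();
    Ok(())
}
```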
- Connection Errors: Ensure your Ollama instance is running and accessible at the specified `OLLAMA_HOST` (or the default `http://localhost:11434`). Check firewalls if necessary.
- API Errors: If the Ollama API returns errors, they should be displayed in the status bar. Refer to the Ollama server logs for more details.
- Rendering Issues: Terminal rendering can vary. Ensure you are using a modern terminal emulator with good Unicode and color support.