```
,-.    ,-.
( O )  (o.o)
`-'    |_)  oliveowl
       "who?"
```
This project was inspired by the Warp AI Terminal and developed with significant assistance from Gemini 2.5, but it turned out to be more of a terminal alternative to bloated, web-based chat UIs.
- v1.0 demo (https://youtu.be/mkkkX1Grqs8)
- v2.0 demo (https://youtu.be/dUvcjOpBu6k)
A simple Bash script to interact with AI models (Google Gemini, OpenRouter, OpenAI, Cerebras, and Ollama) directly from your terminal. It features chat history, Markdown rendering via `bat`, easy command copying via `gum choose`, dynamic model selection, and a loading spinner.
- Supports Google Gemini, OpenRouter, OpenAI, Cerebras, and local Ollama models.
- Interactive chat loop in the terminal.
- Saves chat history in JSON files (`~/.config/oliveowl/history/`).
- Automatic Session Naming: If you start a chat without giving it a name, the AI automatically names the session based on the context of the conversation. The name is suggested after your first prompt and then refined every five prompts. Manually named sessions keep their original name.
- Uses `fzf` to display human-readable session names (instead of filenames) for easy history selection.
- Uses `gum choose` for selecting code blocks to copy.
- Dynamically fetches and presents available models during configuration.
- Includes a loading spinner animation while waiting for AI responses, with retry options on API call failure.
- Allows using `/config` in the initial session prompt or during chat to reconfigure API settings.
- Allows using `/view` during chat to open the current history in your configured editor.
- Renders AI responses as Markdown using `bat`.
- Enhanced History Preview: The `/history` command provides a full-screen `fzf` interface with a live preview of the JSON content of chat history files using `bat`.
- Detects Markdown code blocks (```...```) in AI responses and allows copying their content using `gum choose`.
- Displays "token speed" (words per second) for each AI response, providing insight into response generation performance.
- Configuration stored in `~/.config/oliveowl/`.
- Supports local Ollama instances, allowing you to use models running on your own machine.
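The "token speed" figure is essentially a word count divided by elapsed time. A rough sketch of the idea in shell (illustrative only; the variable names and timing here are not taken from the oliveowl script):

```bash
# Illustrative sketch: derive a words-per-second figure for a response.
response="Hello there, here is a short AI response."

start=$(date +%s.%N)
sleep 0.2                               # stand-in for the real API call
end=$(date +%s.%N)

word_count=$(printf '%s\n' "$response" | wc -w)
speed=$(awk -v w="$word_count" -v s="$start" -v e="$end" \
  'BEGIN { printf "%.2f", w / (e - s) }')
echo "token speed: ${speed} words/sec"
```

Note that `date +%s.%N` requires GNU date; BSD/macOS `date` does not support `%N`.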
You need the following command-line tools installed:
- `bash` (usually the default shell)
- `curl` (for making API requests)
- `jq` (for parsing JSON responses)
- `fzf` (for fuzzy-finding/selection menus)
- `bat` (for syntax highlighting/Markdown rendering)
- `gum` (for multi-line input editing and spinners)
- A clipboard tool: `xclip` (for X11) or `wl-copy` (for Wayland)
- For Ollama users: a running Ollama instance. See Ollama's official website for installation instructions.
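Before installing anything, you can check which of these tools are already present. A small convenience snippet (not part of oliveowl itself):

```bash
# Report which required tools are present and which are missing.
for cmd in bash curl jq fzf bat gum; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok:      $cmd"
  else
    echo "missing: $cmd"
  fi
done

# At least one clipboard tool is needed (xclip for X11, wl-copy for Wayland).
command -v xclip >/dev/null 2>&1 || command -v wl-copy >/dev/null 2>&1 \
  || echo "missing: xclip or wl-copy"
```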
Install them using your system's package manager. For example, on Debian/Ubuntu:
```bash
# Ensure apt can use HTTPS repositories and install GPG tooling
sudo apt update && sudo apt install -y apt-transport-https curl gpg

# Add Charm repo GPG key
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/charm.gpg

# Add Charm repo to sources
echo "deb [signed-by=/etc/apt/trusted.gpg.d/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list

# Install dependencies
sudo apt update && sudo apt install -y curl jq fzf bat gum xclip # or wl-clipboard for wl-copy
```
On Fedora:
```bash
# Add Charm repo GPG key
sudo rpm --import https://repo.charm.sh/yum/gpg.key

# Add Charm repo
sudo dnf config-manager --add-repo https://repo.charm.sh/yum/charm.repo

# Install dependencies
sudo dnf install -y curl jq fzf bat gum xclip # or wl-clipboard for wl-copy
```
For other Unix-like systems (Arch Linux, macOS with Homebrew, etc.), please refer to the documentation for each individual tool (`curl`, `jq`, `fzf`, `bat`, `gum`, `xclip`/`wl-clipboard`) for installation instructions using your preferred package manager or method.
Important: Use the latest version of `gum` from the official Charm repository (as shown above) or another official installation method. Older versions may have bugs (e.g., issues with multi-line input handling). Ensure your package manager is configured to pull the latest version from the Charm source if applicable.
Note: `bat` might be called `batcat` on some systems (like Debian/Ubuntu). If so, you might need to create a symlink (`sudo ln -s /usr/bin/batcat /usr/local/bin/bat`) or adjust the script.
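If you'd rather not create a system-wide symlink, a script can resolve the correct name at runtime. A sketch of that approach (not necessarily how oliveowl handles it):

```bash
# Pick whichever name for bat is available on this system.
if command -v bat >/dev/null 2>&1; then
  BAT_CMD=bat
elif command -v batcat >/dev/null 2>&1; then
  BAT_CMD=batcat
else
  echo "bat/batcat not found; Markdown rendering will be unavailable" >&2
fi
echo "using: ${BAT_CMD:-none}"
```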
Install and run OliveOwl with a single command:
```bash
# Make sure the target directory exists, then download and install OliveOwl
mkdir -p ~/.local/bin
curl -sL https://raw.githubusercontent.com/aptdnfapt/OliveOwl/main/oliveowl -o ~/.local/bin/oliveowl && chmod +x ~/.local/bin/oliveowl

# Make sure ~/.local/bin is in your PATH (if not already)
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc

# Run OliveOwl
oliveowl
```
Alternatively, if you prefer `wget`:
```bash
# Make sure the target directory exists, then download and install OliveOwl
mkdir -p ~/.local/bin
wget https://raw.githubusercontent.com/aptdnfapt/OliveOwl/main/oliveowl -O ~/.local/bin/oliveowl && chmod +x ~/.local/bin/oliveowl

# Make sure ~/.local/bin is in your PATH (if not already)
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc

# Run OliveOwl
oliveowl
```
That's it! OliveOwl is now installed and ready to use.
Note: After running OliveOwl for the first time, use the `/config` command to add your API providers and select a model before you start chatting.
- Create Config Directory (if needed): The script attempts to create `~/.config/oliveowl` on first run, but you can create it manually:

  ```bash
  mkdir -p ~/.config/oliveowl
  ```

- Add API Keys: You can add API keys in two ways:
  - Manual Method: Create or edit the environment file `~/.config/oliveowl/.env` and add your API keys:

    ```bash
    # ~/.config/oliveowl/.env
    GEMINI_API_KEY=YOUR_GEMINI_API_KEY_HERE
    OPENROUTER_API_KEY=YOUR_OPENROUTER_API_KEY_HERE
    OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
    CEREBRAS_API_KEY=YOUR_CEREBRAS_API_KEY_HERE
    # Ollama base URL (optional, defaults to http://localhost:11434 if not set)
    # Example: OLLAMA_BASE_URL=http://my-ollama-server:11434
    OLLAMA_BASE_URL=
    ```

    Replace the placeholders with your actual keys. For Gemini, OpenRouter, OpenAI, and Cerebras, you only need the key for the provider(s) you intend to use. For Ollama, `OLLAMA_BASE_URL` is optional; if left blank or commented out, the script defaults to `http://localhost:11434`. Set it if your Ollama instance runs on a different host or port. Make the file readable only by you: `chmod 600 ~/.config/oliveowl/.env`.
  - Interactive Method: Use the configuration menu during setup or by typing `/config` in chat. Select "Add Provider" to interactively add API keys for any provider without manually editing files.
- Run Initial Config: Run the script with the `--config` flag to access the configuration menu with three options: "Change Model", "Add Provider", and "Change Editor".
  - Change Model: Select your API provider (Gemini, OpenRouter, OpenAI, Cerebras, or Ollama) and model. For Gemini, OpenRouter, OpenAI, and Cerebras, the script attempts to dynamically fetch available models if the API key is configured. For Ollama, the script attempts to fetch models from your local Ollama instance (using `OLLAMA_BASE_URL` if set, or the default `http://localhost:11434`). Ensure your Ollama instance is running and accessible.
  - Add Provider: Interactively add API keys for any provider without manually editing files.
  - Change Editor: Configure your preferred editor for viewing chat history.

  The script uses `fzf` for selection.

  ```bash
  ./oliveowl.sh --config
  # or if added to PATH:
  # oliveowl --config
  ```

  This saves your choices to `~/.config/oliveowl/config`.
- Change System Prompt (Optional): The AI's default behavior and instructions are defined by the `SYSTEM_PROMPT` variable within the `oliveowl` script itself. If you wish to customize the AI's persona or provide specific instructions for all interactions, edit the `SYSTEM_PROMPT` variable directly in the `oliveowl` file. Open the `oliveowl` script in your preferred text editor (vim, nano, or VS Code), locate the `SYSTEM_PROMPT` variable (around line 294), and modify its content. Ensure you maintain the triple-quote (`"""`) syntax for multi-line prompts.
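For reference, a multi-line prompt assigned in the triple-quote style might look like this (the wording below is purely illustrative, not the script's actual default prompt):

```bash
# Hypothetical example of a customized SYSTEM_PROMPT (illustrative text).
# In shell, """...""" parses as an empty string, a quoted multi-line
# string, and another empty string concatenated together.
SYSTEM_PROMPT="""
You are a helpful assistant running inside a terminal.
Keep answers concise and put commands in fenced code blocks.
"""
echo "$SYSTEM_PROMPT"
```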
Run the script:

```bash
./oliveowl.sh
# or if added to PATH:
# oliveowl
```
The script will prompt you to enter a name for a new chat session, or you can type `/history` to load a previous chat, `/config` to reconfigure, or `/exit` to quit.
In-Chat Commands:
- `/exit`: Quit the current chat session.
- `/new`: Start a new chat session (prompts for an optional name).
- `/history`: Use `fzf` to select and load a previous chat session, with a full-screen interface and a live preview of the JSON content using `bat`.
- `/config`: Access the configuration menu to change your API provider or model, add new providers, or change your editor.
- `/view`: Open the current chat history in your configured editor (e.g., `nvim`, `vi`, `nano`).
User Input:
When prompted with `You:`, the script will open a minimal text editor using `gum write`.

- Type your message directly in the editor. Multi-line input and pasting work naturally here.
- Press `Ctrl+D` or `Esc` (depending on the editor mode) to finish and submit your input.
- Press `Ctrl+C` to cancel input.
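Under the hood, collecting a multi-line prompt with gum is as simple as capturing its stdout. A minimal sketch (`--placeholder` is a real `gum write` flag, but the fallback logic here is illustrative, not oliveowl's exact code):

```bash
# Capture multi-line input via gum write, with a plain-read fallback
# when gum or an interactive terminal is unavailable (illustrative).
if command -v gum >/dev/null 2>&1 && [ -t 0 ]; then
  user_input=$(gum write --placeholder "Type your message (Ctrl+D to submit)")
else
  IFS= read -r user_input
fi
echo "You: $user_input"
```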
Code Block Copying:
If the AI includes Markdown code blocks (```...```) in its response, the script detects them after the response is displayed. It then launches `gum choose`, showing a numbered list of the detected blocks (displaying the first line of each). You can select multiple blocks one after another: after a block is copied, it is removed from the list and the prompt reappears so you can copy another. The loop continues until you select the "Stop Copy loop" option or all blocks have been copied.
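Detecting fenced blocks in plain text is straightforward with awk. A minimal sketch of the idea (oliveowl's actual parser may differ):

```bash
# Toggle a flag on each fence line; print only lines inside a fence.
fence='```'
response="Here is a command:
$fence
echo hello
$fence
and some closing text."

blocks=$(printf '%s\n' "$response" \
  | awk -v f="$fence" '$0 ~ "^"f { inside = !inside; next } inside')
echo "$blocks"
# Each extracted block could then be offered through gum choose and
# piped to xclip -selection clipboard (or wl-copy on Wayland).
```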
We welcome your feedback and contributions! If you have suggestions, bug reports, or would like to contribute code, please feel free to open an issue or pull request.