`llm4cli` is a powerful command-line tool that lets you interact with various Large Language Models (LLMs) directly from your terminal. It currently supports Google's Gemini models, with plans to integrate more vendors such as OpenAI and Anthropic in future versions.
- Chat with LLMs directly from the command line
- Supports multiple models from different providers
- Lightweight and easy to use
- No need for complex setups: just run and chat!
For a quick and easy setup, download the latest release for your operating system from the GitHub Releases page.
On Windows:

- Download the `llm.exe` file from the releases page.
- Move the file to a directory such as `C:\tools\llm4cli`.
- Add this directory to your system's PATH variable (see the PowerShell sketch after this list).
- Open a new terminal and test:

```powershell
llm -p "Tell me a joke."
```
On Linux/macOS:

- Download the `llm` binary from the releases page.
- Move it to `/usr/local/bin/` and make it executable:

```bash
chmod +x llm
sudo mv llm /usr/local/bin/
```

- Verify the installation:

```bash
llm -p "Tell me a joke."
```
If you prefer to build from source, follow these steps:
`llm4cli` is built with JBang, which allows Java scripts to run seamlessly without additional setup. You can install JBang using the following instructions:
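For example, the JBang documentation offers a one-line installer for Linux/macOS (see jbang.dev for Windows and package-manager alternatives):

```bash
# Install JBang via its official installer script
curl -Ls https://sh.jbang.dev | bash -s - app setup
```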
Once JBang is installed, you can run `llm4cli` directly:

```bash
jbang --fresh https://github.com/mehdizebhi/llm4cli/blob/main/src/Main.java -p "Hello!"
```
Alternatively, you can clone the repository and run it locally:

```bash
git clone https://github.com/mehdizebhi/llm4cli.git
cd llm4cli/src
jbang Main.java -p "Hello!"
```
You can also install `llm4cli` onto your system path with JBang, without compiling the Java file yourself:

```bash
git clone https://github.com/mehdizebhi/llm4cli.git
cd llm4cli
jbang app install --name llm ./src/Main.java
```
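After the install, the `llm` launcher should be on your path (open a new terminal if it is not picked up immediately):

```bash
llm -p "Hello!"
```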
To build a standalone executable, follow these steps:

- Export a portable JAR with its dependencies:

```bash
jbang export portable Main.java
```

- Build the native executable using GraalVM:

```bash
native-image -cp ./lib/picocli-4.6.3.jar -H:ReflectionConfigurationFiles=reflect-config.json -jar ./Main.jar llm
```
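If the build succeeds, `native-image` writes a binary named `llm` into the current directory, which you can smoke-test directly (assuming `GEMINI_API_KEY` is set, as described below):

```bash
# Run the freshly built native executable
./llm -p "Hello!"
```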
You can use `llm4cli` by passing arguments that select the LLM provider and model. Below are some examples:

```bash
# Use the default provider and model
llm -p "What is the meaning of life?"

# Pick the provider explicitly
llm -p "Tell me a joke." -v google

# Pick both the provider and the model
llm -p "Explain how AI works." -v google -m gemini-2.0-flash
```
Currently, `llm4cli` supports the following models:

- `gemini-1.0-pro`
- `gemini-1.5-pro`
- `gemini-1.5-flash`
- `gemini-2.0-flash`
More models from OpenAI and Anthropic will be added soon.
Before using `llm4cli`, you need to set up an API key for the Gemini models. Set the environment variable as follows:

```bash
export GEMINI_API_KEY="your_api_key"
```
On Windows (PowerShell):

```powershell
$env:GEMINI_API_KEY = "your_api_key"
```
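Both commands set the key only for the current session. To persist it on Linux/macOS, append the export to your shell profile (on Windows, `setx GEMINI_API_KEY "your_api_key"` has the same effect):

```bash
# Persist the key for future shells (adjust for zsh or other shells)
echo 'export GEMINI_API_KEY="your_api_key"' >> ~/.bashrc
```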
You can obtain an API key from the Google AI API Key Registration page.
We welcome contributions! Feel free to submit issues or pull requests to improve `llm4cli`.
`llm4cli` is released under the MIT License.
Happy chatting!