A simple and lightweight CLI tool for interacting with LLMs like Google Gemini directly from the terminal. 🚀

llm4cli - LLM Command Line Interface

llm4cli is a powerful command-line tool that allows you to interact with various Large Language Models (LLMs) directly from your terminal. It currently supports Google's Gemini models, with plans to integrate more vendors like OpenAI and Anthropic in future versions.

Features

  • Chat with LLMs directly from the command line
  • Supports multiple models from different providers
  • Lightweight and easy to use
  • No need for complex setups: just run and chat!

Installation

1. Download Prebuilt Executable (Recommended)

For a quick and easy setup, download the latest release from the GitHub Releases page based on your operating system.

Windows

  1. Download the llm.exe file from the releases page.
  2. Move the file to a directory like C:\tools\llm4cli.
  3. Add this directory to your system's PATH variable.
  4. Open a new terminal and test:
    llm -p "Tell me a joke."
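
Step 3 can also be scripted. A PowerShell sketch, assuming the C:\tools\llm4cli directory from step 2 (this updates the user-level PATH; open a new terminal afterwards for it to take effect):

```powershell
# Append the llm4cli directory to the current user's PATH.
$dir = "C:\tools\llm4cli"
$userPath = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$userPath;$dir", "User")
```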

Linux / macOS

  1. Download the llm binary from the releases page.
  2. Make it executable and move it to /usr/local/bin/:
    chmod +x llm
    sudo mv llm /usr/local/bin/
  3. Verify installation:
    llm -p "Tell me a joke."

2. Build from Source (Alternative)

If you prefer to build from source, follow these steps:

Install JBang (if not already installed)

llm4cli is built with JBang, which runs Java source files directly without a separate build step. If you do not have it yet, follow the installation instructions on the JBang website (jbang.dev).

Run llm4cli

Once JBang is installed, you can directly run llm4cli using:

jbang --fresh https://github.com/mehdizebhi/llm4cli/blob/main/src/Main.java -p "Hello!"

Alternatively, you can clone the repository and run it locally:

git clone https://github.com/mehdizebhi/llm4cli.git
cd llm4cli/src
jbang Main.java -p "Hello!"

You can also install llm4cli onto your system path with JBang, without compiling the Java file yourself:

git clone https://github.com/mehdizebhi/llm4cli.git
cd llm4cli
jbang app install --name llm src/Main.java

Create a Native Executable with GraalVM

To build a standalone executable, follow these steps:

  1. Export the JAR file with dependencies:
    jbang export portable Main.java
  2. Build the native executable using GraalVM:
    native-image -cp lib/picocli-4.6.3.jar -H:ReflectionConfigurationFiles=reflect-config.json -jar Main.jar llm
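
The contents of reflect-config.json depend on which classes picocli inspects reflectively at run time. A minimal sketch, assuming the annotated command class is named Main (adjust to the actual class names in your build):

```json
[
  {
    "name": "Main",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```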

Usage

You can use llm4cli by providing different arguments to specify the LLM provider and model. Below are some examples:

Basic Usage

llm -p "What is the meaning of life?"

Specify a Model Vendor

llm -p "Tell me a joke." -v google

Choose a Specific Model

llm -p "Explain how AI works." -v google -m gemini-2.0-flash

Supported Models

Currently, llm4cli supports the following models:

Google Gemini

  • gemini-1.0-pro
  • gemini-1.5-pro
  • gemini-1.5-flash
  • gemini-2.0-flash

More models from OpenAI and Anthropic will be added soon.
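The models above can all be exercised with the same prompt from one loop. A bash sketch, where run_all_models is a hypothetical helper (not part of llm4cli) and llm is assumed to be on PATH:

```shell
# run_all_models PROMPT: send PROMPT to every supported Gemini model in turn.
run_all_models() {
  local prompt="$1" model
  for model in gemini-1.0-pro gemini-1.5-pro gemini-1.5-flash gemini-2.0-flash; do
    echo "== $model =="
    llm -p "$prompt" -v google -m "$model"
  done
}
```

Call it as, for example, run_all_models "Summarize AI in one sentence." to compare the replies side by side.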

Setting Up Environment Variables

Before using llm4cli, you need to set up the API key for Gemini models. Set the environment variable as follows:

export GEMINI_API_KEY="your_api_key"

On Windows (PowerShell):

$env:GEMINI_API_KEY = "your_api_key"
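
Since llm reads the key from the environment at run time, a small guard can fail fast with a clear message when it is missing. A bash sketch (require_env is a hypothetical helper, not part of llm4cli):

```shell
# require_env NAME: print an error and fail if the environment
# variable NAME is unset or empty (bash syntax).
require_env() {
  if [ -z "${!1:-}" ]; then
    echo "error: $1 is not set" >&2
    return 1
  fi
  return 0
}
```

For example, require_env GEMINI_API_KEY && llm -p "Hello!" only invokes llm when the key is present.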

You can obtain an API key from Google AI Studio.

Contributing

We welcome contributions! Feel free to submit issues or pull requests to improve llm4cli.

License

llm4cli is released under the MIT License.


Happy chatting! 🚀
