neoluigi4123/Ai-Personnal-Assistant

Ai-Personal-Assistant 🖥️✨

A lightweight CLI-based local LLM assistant.

Note: Works only on GNU/Linux.

🌟 Capabilities

  • Load and process text/code files
  • Browse the web via the assistant
  • Support for vision and thinking models
  • (More features coming soon!)

Note: Make sure to use a model that supports tool calls.
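Tool calls are passed to the model as JSON function schemas. Below is a minimal sketch using the `ollama` Python client; the `browse_web` function and its parameters are illustrative, not part of this repository:

```python
# Sketch: defining a tool schema for a tool-call capable model.
# The ollama Python client accepts a list of such schemas via the
# `tools` argument to ollama.chat(). The browse_web function below
# is illustrative, not taken from this repository.

browse_web_tool = {
    "type": "function",
    "function": {
        "name": "browse_web",
        "description": "Fetch a web page and return its text content",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "Page URL to fetch"},
            },
            "required": ["url"],
        },
    },
}

# With a local Ollama server running, the call would look like:
# import ollama
# response = ollama.chat(
#     model="llama3.1",  # any tool-call capable model
#     messages=[{"role": "user", "content": "What's on example.com?"}],
#     tools=[browse_web_tool],
# )
```

If the chosen model does not support tool calls, Ollama returns an error instead of a `tool_calls` entry in the response.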

🗒️ To-Do

Nothing here! (for now)

⚙️ Installation

  1. Clone the repository:
git clone <repo-url>
  2. Create a virtual environment:
python3 -m venv myenv
  3. Activate it and install dependencies:
source myenv/bin/activate
pip install -r requirements.txt
  4. Enjoy! 🎉

See Additional Setup for optional improvements.

🛠️ Configuration

Basic configuration is in config.py and includes:

  • System Prompt
  • Model selection
  • Ollama endpoint
  • Verbose mode
  • Streaming mode
  • Cosmetic options...
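Putting the options above together, a config.py along these lines is plausible; the variable names and defaults here are assumptions, so check the actual file:

```python
# Hypothetical config.py sketch -- variable names and defaults are
# assumptions based on the option list above, not the repository's
# actual contents.

SYSTEM_PROMPT = "You are a helpful personal assistant."
MODEL = "llama3.1"                          # default model selection
OLLAMA_ENDPOINT = "http://localhost:11434"  # local Ollama server
VERBOSE = False                             # verbose mode off by default
STREAM = True                               # stream tokens as they arrive
```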

💬 Commands

| Command | Description |
| --- | --- |
| /bye | Exit the assistant |
| /clear | Clear context but keep system prompt |
| /file <absolute/path> | Load a text/code file (.txt, .py, .c, etc.) |
| /help | Show this help message |
| /list | List available models |
| /model <model_name> | Change the current model |
| /regenerate | Regenerate the last assistant message |
| /show_config | Show current configuration |
| /stream True - False | Enable or disable streaming (markdown not supported when True) |
| /verbose True - False | Enable or disable verbose mode |
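Slash commands like these are typically handled by splitting the input line into a command and an argument; a minimal dispatcher sketch (not the repository's actual implementation):

```python
# Minimal sketch of a slash-command parser, illustrative only --
# not the parsing code used in this repository.

def parse_command(line: str):
    """Split '/model llama3' into ('/model', 'llama3').

    Lines that do not start with '/' are treated as plain chat
    messages and returned unchanged with no command.
    """
    if not line.startswith("/"):
        return None, line
    command, _, arg = line.partition(" ")
    return command, arg.strip()

print(parse_command("/model llama3"))  # → ('/model', 'llama3')
print(parse_command("/bye"))           # → ('/bye', '')
```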

📝 Additional Setup

To make it easier to run:

  1. Open your bash config (no sudo needed for your own ~/.bashrc):
nano ~/.bashrc
  2. Add an alias (change paths accordingly):
alias ai='cd /path/to/script_folder && source myenv/bin/activate && python main.py'
  3. Reload bash:
source ~/.bashrc

Now you can just type ai to start your assistant. UwU
