
⚡ KIRA - Knowledge Integration & Response Agent v1.3

(Banner and badges: Python · LM Studio · Terminal · MIT License)

Developed by ALAN CYRIL SUNNY
If you like this project, please ⭐ star the repository!


🧠 KIRA - Knowledge Integration & Response Agent

A fast, elegant, and responsive terminal-based AI assistant powered by your local liquid/lfm2-1.2b model via LM Studio.

  • 💬 Real-time, token-by-token streamed output
  • 🎨 Beautiful, colorful terminal UI with rich and ASCII banners via pyfiglet
  • 🧠 Multi-turn conversation memory
  • 🧬 Identity-aware (Name: KIRA, Version: 1.3, Creator: Alan Cyril Sunny)
  • ⌨️ Glitchy effects and slow typing for hacker vibes
  • ⚙️ Fully configurable API/model settings
  • 🔒 100% local processing for privacy and control
  • Supports both NVIDIA CUDA and AMD ROCm GPUs for accelerated inference

✨ Features

  • Identity Awareness: KIRA always knows her name, version, and creator.
  • Streamed Output: See responses appear live, token by token.
  • Rich Terminal UI: Styled with rich and ASCII art banners.
  • Conversation Memory: Remembers previous turns for context.
  • Customizable: Easily tweak API/model via environment variables.
  • Local & Private: All data stays on your machine.
  • GPU Support: Works with NVIDIA CUDA and AMD ROCm-supported GPUs.

🛠️ Tech Stack

  • Language: Python 3.8+
  • Terminal UI: rich, pyfiglet
  • AI Model: liquid/lfm2-1.2b (via LM Studio)
  • Model Serving: LM Studio (REST API)
  • API Communication: REST API (HTTP)
  • GPU Acceleration: NVIDIA RTX GPU (CUDA) or AMD GPU (ROCm) for faster inference

💻 Requirements

  • Python 3.8 or higher
  • LM Studio running locally at http://localhost:1234
  • Installed model: liquid/lfm2-1.2b or compatible
  • (Optional) NVIDIA CUDA or AMD ROCm-supported GPU for acceleration

🚀 Installation

  1. (Optional) Create and activate a virtual environment:

    python -m venv .venv
    .venv\Scripts\activate      # Windows
    source .venv/bin/activate   # Linux/macOS
  2. Install the required Python packages:

    pip install requests rich pyfiglet
  3. Start LM Studio with the liquid/lfm2-1.2b model and keep it listening on http://localhost:1234.
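Before launching the chat client, you can confirm the local server is reachable. A minimal sketch, assuming LM Studio's OpenAI-compatible /v1/models endpoint is enabled (this helper script is not part of the repository):

    # Optional sanity check that LM Studio's local server is up (hypothetical helper, not in the repo).
    import requests

    API_URL = "http://localhost:1234/v1"

    try:
        resp = requests.get(f"{API_URL}/models", timeout=5)
        resp.raise_for_status()
        names = [m.get("id") for m in resp.json().get("data", [])]
        print("LM Studio is up. Loaded models:", names)
    except requests.RequestException as exc:
        print("Could not reach LM Studio at", API_URL, "-", exc)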


⚡ Quick Start

With LM Studio running, launch KIRA:

python chatbot.py

You'll see the stylized ASCII banner and can chat live in your terminal.
Type 'exit' to gracefully disconnect.
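Under the hood, KIRA streams tokens from LM Studio's OpenAI-compatible chat endpoint. The sketch below shows the core idea with requests and stream=True; it is illustrative only, not the exact chatbot.py code, which adds the rich UI, memory, and effects on top:

    # Minimal streaming sketch against LM Studio's /v1/chat/completions endpoint (illustrative).
    import json
    import requests

    API_URL = "http://localhost:1234/v1/chat/completions"
    payload = {
        "model": "liquid/lfm2-1.2b",
        "messages": [{"role": "user", "content": "Who are you?"}],
        "stream": True,
    }

    with requests.post(API_URL, json=payload, stream=True, timeout=120) as resp:
        for line in resp.iter_lines():
            # Server-sent events arrive as lines prefixed with "data: ".
            if not line or not line.startswith(b"data: "):
                continue
            chunk = line[len(b"data: "):]
            if chunk == b"[DONE]":
                break
            delta = json.loads(chunk)["choices"][0]["delta"]
            print(delta.get("content", ""), end="", flush=True)
    print()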


📝 Usage

  1. Run the bot in your terminal.
  2. Chat with KIRA in natural language.
  3. Enjoy real-time, styled responses with memory and glitch effects.
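The slow-typing effect is easy to reproduce on its own. A minimal sketch using rich and a short per-character delay (the actual glitch styling in chatbot.py may differ):

    # Illustrative slow-typing effect; chatbot.py's real glitch styling may differ.
    import time
    from rich.console import Console

    console = Console()

    def slow_type(text: str, delay: float = 0.02) -> None:
        """Print text one character at a time for a terminal 'typing' feel."""
        for ch in text:
            console.print(ch, end="", style="bold green")
            time.sleep(delay)
        console.print()  # final newline

    slow_type("KIRA online. How can I help?")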

💡 Example Prompts

  • "Who are you?"
  • "Summarize the latest conversation."
  • "Tell me a programming joke."
  • "What's your version and creator?"

🔧 Environment Variables (Optional)

Variable            Description                        Default
LM_STUDIO_API_URL   LM Studio API endpoint             http://localhost:1234/v1
MODEL_NAME          Model name used for completions    liquid/lfm2-1.2b
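Inside the script these overrides are typically read with os.getenv, falling back to the defaults above. A minimal sketch (assumed wiring, not necessarily the exact chatbot.py code):

    # Picking up the optional overrides; variable names and defaults match the table above.
    import os

    API_URL = os.getenv("LM_STUDIO_API_URL", "http://localhost:1234/v1")
    MODEL_NAME = os.getenv("MODEL_NAME", "liquid/lfm2-1.2b")

Set them in your shell before launching KIRA (set VAR=value on Windows, export VAR=value on Linux/macOS).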

📸 Demo

(Screenshot: KIRA Terminal UI)


📁 Project Structure

📦 K.I.R.A/
 ┣ chatbot.py               # Main chatbot script
 ┗ README.md                # Project README

🧠 Identity Prompt

KIRA is always aware of her identity:

Your name is KIRA.
Your version is 1.3.
Your creator is Alan Cyril Sunny.
You are based on the model liquid/lfm2-1.2b.
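In practice this identity text can be kept as the system message at the head of the conversation history, so it stays in context across turns. A sketch of that structure (an assumption about how chatbot.py is organized, not a verified excerpt):

    # Sketch: identity prompt as a persistent system message plus rolling conversation memory.
    IDENTITY_PROMPT = (
        "Your name is KIRA. Your version is 1.3. "
        "Your creator is Alan Cyril Sunny. "
        "You are based on the model liquid/lfm2-1.2b."
    )

    messages = [{"role": "system", "content": IDENTITY_PROMPT}]

    def remember(role: str, content: str) -> None:
        """Append a turn so later requests carry the full conversation."""
        messages.append({"role": role, "content": content})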

👨‍💻 Developer

Alan Cyril Sunny
📫 alancyrilsunny@protonmail.com


📜 License

MIT License. Free to use, modify, and share.


✨ “May the shadows keep you safe.” — KIRA
