Skip to content

DLVAIA or Deliberately Left Vulnerable AI Application is meant to test your understanding of OWASP Top 10 vulnerabilities for LLMs.

License

Notifications You must be signed in to change notification settings

Iam-M-i-r-z-a/DLVAIA

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

28 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

DLVAIA: Deliberately Left Vulnerable AI Application 🧪

Python Version Ollama


📜 License

This project is licensed under the MIT License. See the LICENSE file for details.


⚠️ Caution

This project is deliberately insecure.

  • Do NOT deploy this application in production.
  • NEVER enter real personal, sensitive, or confidential data.

DLVAIA is intended for educational and research purposes only to explore and demonstrate AI/LLM security vulnerabilities. Misusing it may cause harm or violate privacy policies.


🌟 About This Project

Welcome to DLVAIA (Deliberately Left Vulnerable AI Application)!

This project is a hands-on learning environment designed to help you understand and identify critical security vulnerabilities in Large Language Models (LLMs). Inspired by the OWASP Top 10 for LLMs, DLVAIA provides a practical sandbox to explore common attack vectors and defense mechanisms.

A key advantage of DLVAIA is its 100% local deployability. Due to Ollama, you can run the entire application, including the LLM inference, completely on your own machine. This means:

  • No API Keys Required: You won't need to connect to remote services like Claude, ChatGPT, or Gemini, eliminating dependency on third-party APIs.
  • Enhanced Privacy: Your data stays on your machine, ensuring maximum privacy and control over your interactions.
  • Cost-Free Inference: There are no recurring costs associated with LLM usage, making it ideal for continuous learning and experimentation.

Whether you're a security researcher, a developer building with LLMs, or an enthusiast curious about AI security, DLVAIA offers a unique opportunity to test your understanding in a controlled, private, and educational setting.


🎯 Challenges You'll Explore

DLVAIA specifically features and allows you to experiment some of OWASP Top 10 LLM vulnerabilities:

  • Prompt Injection: Manipulating the model's output through crafted inputs.
  • Insecure Output Handling: Exploiting how the application processes and displays LLM responses.
  • Model Denial of Service: Disrupting the LLM's availability or performance.
  • Supply Chain Vulnerabilities: Identifying risks in the components and integrations used.
  • Data Leakage (Sensitive Data Disclosure): Discovering unintended exposure of confidential information.
  • Model Theft: Exploring methods to extract or misuse proprietary models.
  • Overreliance: Recognizing the dangers of excessive trust in LLM outputs.
  • Excessive Agency: Examining risks when LLMs are granted too much autonomy.

🚀 Getting Started

Follow these steps to set up and run DLVAIA on your local machine.

Prerequisites

Make sure you have git and Python 3.x installed on your system.

Ollama Setup (Local LLM)

DLVAIA leverages Ollama for running local LLMs, ensuring your data stays private and you have full control over the model.

  1. Download Ollama: Visit the official Ollama website and download the installer for your operating system: ➡️ ollama.com/download

  2. Navigate the Models you wanna: Visit the official Ollama models website: ➡️ ollama.com/models

  3. Install an LLM Model: After installing Ollama, open your terminal or command prompt and pull the whatever model you want, but instruct models preferred

    ollama pull <model_name>

    (Note: Remember to update the model name within the project's code accordingly.)

Deploying the Application

With Ollama ready, you can now deploy DLVAIA:

  1. Clone the Repository:

    git clone https://github.com/Iam-M-i-r-z-a/DLVAIA.git
  2. Navigate to the Project Directory:

    cd DLVAIA
  3. Create a Virtual Environment: It's highly recommended to use a virtual environment to manage project dependencies.

    python3 -m venv venv
  4. Activate the Virtual Environment:

    • Linux / macOS:
      source venv/bin/activate
    • Windows (Command Prompt):
      venv\Scripts\activate
    • Windows (PowerShell):
      .\venv\Scripts\Activate.ps1
  5. Install Dependencies: Install all required Python packages using pip.

    pip install -r requirements.txt
  6. Run the Application: Start the Flask application.

    python app.py
  7. Access DLVAIA: Open your web browser and visit the application: http://localhost:5000/


🤝 Contributions & Feedback

Contributions, bug reports, and feature requests are welcome! If you find a new vulnerability pattern or have ideas for improving DLVAIA, please open an issue or submit a pull request.


✉️ Connect with Me


About

DLVAIA or Deliberately Left Vulnerable AI Application is meant to test your understanding of OWASP Top 10 vulnerabilities for LLMs.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published