Skip to content

Hands-on lessons for attacking and defending AI systems, starting with the OWASP Top 10 for LLM Applications.

License

Notifications You must be signed in to change notification settings

citizenjosh/ai-security-training-lab

Repository files navigation

AI Security Training Lab

Welcome to the AI Security Training Lab — a hands-on, real-world environment for learning how to attack and defend artificial intelligence systems.

This lab currently focuses on lessons based on the OWASP Top 10 for Large Language Model (LLM) Applications, with future expansions planned into broader AI security challenges, standards, and frameworks.


📚 Lab Structure

/owasp/llm/01/
    attack.py
    mitigate.py
/owasp/llm/02/
    attack_overfitting.py
    mitigate_overfitting.py
    attack_output_manipulation.py
    mitigate_output_manipulation.py
/owasp/llm/03/
    attack.py
    mitigate.py
/owasp/llm/10/
    attack.py
    mitigate.py

attack.py — Demonstrates the attack technique
mitigate.py — Shows how to defend and recover from the attack


🚀 Quickstart

🔧 Local Setup

  1. Clone the repository:
git clone https://github.com/citizenjosh/ai-security-training-lab.git
cd ai-security-training-lab
  1. Install dependencies:
pip install -r requirements.txt
  1. Configure your environment:
cp .env.example .env
nano .env
  • Add your OpenAI API key (if using OpenAI mode)
  • Set LLM_MODE=openai or LLM_MODE=local
  1. Run a lesson:
python3 owasp/llm/01/attack.py

🐳 Docker Setup

You can run the lab inside a Docker container.

  1. Build the Docker image:
docker build -t ai-security-training-lab .
  1. Run the container:
docker run --env-file .env -it ai-security-training-lab

✅ This ensures consistent environments for classrooms and workshops.


🧠 Dual Mode Operation

Mode Description
openai Connects to OpenAI API (requires API key and quota)
local Runs a local GPT-2 model on your machine (no API key required)

If using local mode, install HuggingFace libraries:

pip install torch transformers

⚠️ Local models like GPT-2 are intentionally vulnerable and may produce hallucinations or ignore safety instructions.


⚠️ API Key & Usage Notice

When using OpenAI mode:

💡 This project does not include free credits or API access. All usage costs are the user's responsibility.


🛠️ Tools Used

Free Tools


🛠️ Contribution Guidelines

Contributions are welcome!

  1. Fork the repository
  2. Create a new branch
  3. Make your changes
  4. Submit a pull request
  5. Follow the Code of Conduct

⚖️ License

This project is licensed under the MIT License.


🔖 Suggested GitHub Topics

ai-security
llm-security
prompt-injection
ethical-hacking
cybersecurity-education
owasp
adversarial-attacks
docker
machine-learning-security

Built and maintained by @citizenjosh 🚀

About

Hands-on lessons for attacking and defending AI systems, starting with the OWASP Top 10 for LLM Applications.

Topics

Resources

License

Code of conduct

Stars

Watchers

Forks

Packages

No packages published