Welcome to the AI Security Training Lab — a hands-on, real-world environment for learning how to attack and defend artificial intelligence systems.
This lab currently focuses on lessons based on the OWASP Top 10 for Large Language Model (LLM) Applications, with future expansions planned into broader AI security challenges, standards, and frameworks.
/owasp/llm/01/
attack.py
mitigate.py
/owasp/llm/02/
attack_overfitting.py
mitigate_overfitting.py
attack_output_manipulation.py
mitigate_output_manipulation.py
/owasp/llm/03/
attack.py
mitigate.py
/owasp/llm/10/
attack.py
mitigate.py
✅ attack.py — Demonstrates the attack technique
✅ mitigate.py — Shows how to defend and recover from the attack
- Clone the repository:
git clone https://github.com/citizenjosh/ai-security-training-lab.git
cd ai-security-training-lab
- Install dependencies:
pip install -r requirements.txt
- Configure your environment:
cp .env.example .env
nano .env
- Add your OpenAI API key (if using OpenAI mode)
- Set LLM_MODE=openai or LLM_MODE=local
- Run a lesson:
python3 owasp/llm/01/attack.py
You can run the lab inside a Docker container.
- Build the Docker image:
docker build -t ai-security-training-lab .
- Run the container:
docker run --env-file .env -it ai-security-training-lab
✅ This ensures consistent environments for classrooms and workshops.
Mode | Description |
---|---|
openai | Connects to OpenAI API (requires API key and quota) |
local | Runs a local GPT-2 model on your machine (no API key required) |
If using local mode, install HuggingFace libraries:
pip install torch transformers
⚠️ Local models like GPT-2 are intentionally vulnerable and may produce hallucinations or ignore safety instructions.
When using OpenAI mode:
- You must have a valid API key in
.env
- Ensure you have sufficient quota
- Check your usage: https://platform.openai.com/account/usage
💡 This project does not include free credits or API access. All usage costs are the user's responsibility.
Contributions are welcome!
- Fork the repository
- Create a new branch
- Make your changes
- Submit a pull request
- Follow the Code of Conduct
This project is licensed under the MIT License.
ai-security
llm-security
prompt-injection
ethical-hacking
cybersecurity-education
owasp
adversarial-attacks
docker
machine-learning-security
Built and maintained by @citizenjosh 🚀