Welcome to the AI/ML/LLM Penetration Testing Toolkit by Mr-Infect — the #1 GitHub resource for AI security, red teaming, and adversarial ML techniques. This repository is dedicated to offensive and defensive security for cutting-edge AI, Machine Learning (ML), and Large Language Models (LLMs) like ChatGPT, Claude, and LLaMA.
✅ Designed for cybersecurity engineers, red teamers, AI/ML researchers, and ethical hackers ✅ focused to :
AI Penetration Testing,Prompt Injection,LLM Security,Red Team AI,AI Ethical Hacking
AI is now integrated across finance, healthcare, legal, defense, and national infrastructure. Penetration testing for AI systems is no longer optional — it is mission-critical.
- 🕵️ Sensitive Data Leaks – PII, trade secrets, source code
- 💀 Prompt Injection Attacks – Jailbreaking, sandbox escapes, plugin abuse
- 🧠 Model Hallucination – Offensive, misleading, or manipulated content
- 🐍 Data/Model Poisoning – Adversarial training manipulation
- 🔌 LLM Plugin Abuse – Uncontrolled API interactions
- 📦 AI Supply Chain Attacks – Dependency poisoning, model tampering
To use this repository effectively:
- 🔬 Understanding of AI/ML lifecycle:
Data > Train > Deploy > Monitor - 🧠 Familiarity with LLMs (e.g. Transformer models, tokenization)
- 🧑💻 Core pentesting skills: XSS, SQLi, RCE, API abuse
- 🐍 Strong Python scripting (most tools and exploits rely on Python)
- AI vs ML vs LLMs: Clear distinctions
- LLM Lifecycle: Problem -> Dataset -> Model -> Training -> Evaluation -> Deployment
- Tokenization & Vectorization: Foundation of how LLMs parse and understand input
- Prompt Injection
- Jailbreaking & Output Overwriting
- Sensitive Information Leakage
- Vector Store Attacks & Retrieval Manipulation
- Model Weight Poisoning
- Data Supply Chain Attacks
- "Ignore previous instructions" payloads
- Unicode, emojis, and language-switching evasion
- Markdown/image/HTML-based payloads
- Plugin and multi-modal attack vectors (image, audio, PDF, API)
| ID | Risk | SEO Keywords |
|---|---|---|
| LLM01 | Prompt Injection | "LLM jailbreak", "prompt override" |
| LLM02 | Sensitive Info Disclosure | "AI data leak", "PII exfiltration" |
| LLM03 | Supply Chain Risk | "dependency poisoning", "model repo hijack" |
| LLM04 | Data/Model Poisoning | "AI training corruption", "malicious dataset" |
| LLM05 | Improper Output Handling | "AI-generated XSS", "model SQLi" |
| LLM06 | Excessive Agency | "plugin abuse", "autonomous API misuse" |
| LLM07 | System Prompt Leakage | "instruction leakage", "LLM prompt reveal" |
| LLM08 | Vector Store Vulnerabilities | "embedding attack", "semantic poisoning" |
| LLM09 | Misinformation | "hallucination", "bias injection" |
| LLM10 | Unbounded Resource Consumption | "LLM DoS", "token flooding" |
| Tool | Description |
|---|---|
| LLM Attacks | Directory of adversarial LLM research |
| PIPE | Prompt Injection Primer for Engineers |
| MITRE ATLAS | MITRE's AI/ML threat knowledge base |
| Awesome GPT Security | Curated LLM threat intelligence tools |
| ChatGPT Red Team Ally | ChatGPT usage for red teaming |
| Lakera Gandalf | Live prompt injection playground |
| AI Immersive Labs | Prompt attack labs with real-time feedback |
| AI Goat | OWASP-style AI pentesting playground |
| L1B3RT45 | Jailbreak prompt collections |
- https://github.com/DummyKitty/Cyber-Security-chatGPT-prompt
- https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Prompt%20Injection
- https://github.com/f/awesome-chatgpt-prompts
- https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516
- https://kai-greshake.de/posts/inject-my-pdf
- https://www.lakera.ai/blog/guide-to-prompt-injection
- https://arxiv.org/abs/2306.05499
- https://www.csoonline.com/article/3613932/how-data-poisoning-attacks-corrupt-machine-learning-models.html
- https://pytorch.org/blog/compromised-nightly-dependency/
Want to improve this repo? Here's how:
# Fork and clone the repo
$ git clone https://github.com/Mr-Infect/AI-penetration-testing
$ cd AI-penetration-testing
# Create a new feature branch
$ git checkout -b feature/my-feature
# Commit, push, and create a pull request
AI Pentesting,Prompt Injection,LLM Security,Mr-Infect AI Hacking,ChatGPT Exploits,Large Language Model Jailbreak,AI Red Team Tools,Adversarial AI Attacks,OpenAI Prompt Security,LLM Ethical Hacking,AI Security Github,AI Offensive Security,LLM OWASP,LLM Top 10,AI Prompt Vulnerability,Token Abuse DoS,ChatGPT Jailbreak,Red Team AI,AI Security Research
- GitHub Profile: https://github.com/Mr-Infect
- Project Link: AI Penetration Testing Repository
⚠️ Disclaimer: This project is intended solely for educational, research, and authorized ethical hacking purposes. Unauthorized use is illegal.
⭐️ Star this repository to help others discover top-tier content on AI/LLM penetration testing and prompt injection!