AARF (Agentic AI Request Forgery) is a newly discovered architectural vulnerability class in Agentic AI orchestration systems using MCP Server, MAS, LangChain, and A2A Protocols.
💥 This is not injection.
💥 This is not jailbreak.
💥 This is blind trust between Planner ➝ Memory ➝ Plugin chains.
AARF lets attackers submit benign prompts that trigger privileged internal actions:
- Exfiltrate data via fallback plugin (e.g., SlackBot, EmailSender)
- Trigger unintended actions (like infrastructure shutdown or security escalations)
- Poison cross-agent memory in MAS setups
- All with no alerts, no SIEM hits, no policy violations
Read the full DEFCON-grade whitepaper (PDF): ➡ AARF_Agentic_AI_Request_Forgery.pdf
- ✔️ Executive Summary for CISOs, AppSec, and Red Teams
- ✔️ Real-world exploitation flows (PT-confirmed on MCP & MAS)
- ✔️ Bypassed controls and defense gaps
- ✔️ Mitigation and detection strategies
- ✔️ OWASP Top 10 proposal candidate
- ✔️ Red Team simulation steps
See /diagrams/
for validated PlantUML and exploitation flows.
See /red-team-simulation/
for step-by-step PT playbooks, detection gaps, and log evasion patterns.
See /OWASP_PR/
for submission-ready OWASP Agentic AI Top 10 item proposal for AARF.
- IDOR (2010)
- SSRF (2012)
- ➡ AARF (2025)
This repo is for research, awareness, and responsible disclosure advocacy. Always get permission before running these tests in live environments.
MIT License
Efi Jeremiah – Red Team Leader & Agentic AI Security Researcher