
GenAI API Pentest Platform

πŸš€ AI-Powered API Security Testing Platform

Features β€’ Installation β€’ Usage β€’ Documentation β€’ Contributing

πŸš€ Overview

The GenAI API Pentest Platform is an API security testing tool that leverages multiple Large Language Models (LLMs) to perform intelligent, context-aware API security assessments. Unlike traditional tools that rely on pattern matching, this platform uses AI to understand business logic, predict vulnerabilities, and generate sophisticated attack scenarios.

✨ Features

Core GenAI Capabilities

  • Multi-LLM Integration: OpenAI, Anthropic, Google, OpenRouter, and local models
  • Semantic Understanding: Comprehends API behavior and business logic
  • Adaptive Testing: Learns and evolves during testing
  • Natural Language Analysis: Understands API documentation and responses
  • Predictive Security: Anticipates vulnerabilities before they're exploited
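Multi-LLM integration typically means aggregating findings across providers and keeping only those that enough models agree on (the Python API later in this README exposes a `consensus_threshold` parameter). A minimal sketch of such voting, with illustrative names rather than the platform's actual internals:

```python
from collections import Counter

def consensus_findings(findings_by_provider, threshold=0.8):
    """Keep findings reported by at least `threshold` of the providers.

    `findings_by_provider` maps provider name -> set of finding IDs.
    Illustrative only; the platform's real aggregation logic is internal.
    """
    n = len(findings_by_provider)
    votes = Counter()
    for findings in findings_by_provider.values():
        votes.update(findings)
    # A finding survives if its vote share meets the consensus threshold
    return {finding for finding, count in votes.items() if count / n >= threshold}
```

With two providers and a threshold of 0.8, only findings reported by both survive: `consensus_findings({'openai': {'sqli', 'idor'}, 'anthropic': {'sqli'}})` keeps `{'sqli'}`.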

Advanced Security Testing

  • Business Logic Flaws: Detects complex multi-step vulnerabilities
  • Behavioral Analysis: Learns normal patterns and identifies anomalies
  • Context-Aware Payloads: Generates attacks specific to your API
  • Exploit Chain Discovery: Finds multi-step attack paths
  • Zero-Day Detection: Discovers novel vulnerability patterns

Supported Formats

  • OpenAPI/Swagger 2.0 & 3.x
  • Postman Collections
  • GraphQL Schemas
  • REST APIs
  • SOAP/WSDL
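Each of these formats has a distinctive signature (a top-level `openapi` or `swagger` key, Postman's `info`/`item` pair, XML for WSDL), so format detection can be done by inspecting the document. A hypothetical helper sketching that idea, not the platform's actual detection code:

```python
import json

def detect_spec_format(text: str) -> str:
    """Best-effort guess of an API spec's format by its signature keys.

    Hypothetical helper; covers only the JSON/XML cases listed above.
    """
    if text.lstrip().startswith("<"):
        return "soap/wsdl"
    try:
        doc = json.loads(text)
    except ValueError:
        return "unknown"
    if "openapi" in doc:
        return f"openapi {doc['openapi']}"   # OpenAPI 3.x
    if "swagger" in doc:
        return f"swagger {doc['swagger']}"   # Swagger 2.0
    if "info" in doc and "item" in doc:
        return "postman collection"
    return "unknown"
```

A fuller implementation would also load YAML specs and recognize GraphQL SDL, but the dispatch pattern is the same.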

πŸ› οΈ Installation

Quick Start

# Clone the repository
git clone https://github.com/gensecai/genai-api-pentest-platform.git
cd genai-api-pentest-platform

# Run the setup script
./scripts/setup.sh

# Configure your API keys
cp .env.example .env
# Edit .env with your API keys

Manual Installation

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install the package
pip install -e .

Docker Installation

# Build the image
docker build -t genai-pentest .

# Run with docker-compose
docker-compose up -d

🚦 Usage

Web Interface

# Start the web server
python -m src.web.app

# Open http://localhost:8000

Command Line

# Basic scan
python -m src.cli scan https://api.example.com/swagger.json

# Advanced scan with specific LLM
python -m src.cli scan api.yaml --provider openai --model gpt-4

# Interactive mode
python -m src.cli init --interactive

# Generate configuration
python -m src.cli init

Python API

import asyncio

from src import GenAIPentest

# Initialize the platform
pentest = GenAIPentest(
    providers=['openai', 'anthropic'],
    consensus_threshold=0.8
)

# Run a scan (scan is a coroutine, so drive it with asyncio)
results = asyncio.run(pentest.scan('https://api.example.com/swagger.json'))

# Get detailed findings
for vuln in results.vulnerabilities:
    print(f"{vuln.severity}: {vuln.title}")
    print(f"AI Explanation: {vuln.ai_analysis}")
πŸ“Š Example Results

vulnerability:
  title: "Complex Authorization Bypass via Business Logic Flaw"
  severity: "CRITICAL"
  confidence: 0.95
  ai_analysis: |
    The AI discovered that by manipulating the order status from 'pending' 
    to 'refunded' before payment processing, an attacker can receive items 
    without payment. This multi-step attack exploits the gap between order 
    state transitions and payment validation.
  exploit_chain:
    - "POST /orders - Create order with high-value items"
    - "PATCH /orders/{id}/status - Change to 'processing'"
    - "DELETE /payments/{id} - Cancel payment reference"
    - "PATCH /orders/{id}/status - Force to 'refunded'"
    - "GET /orders/{id}/items - Access items without payment"
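Each `exploit_chain` entry above is a plain `"METHOD /path - description"` string; a hypothetical consumer (not part of the platform's public API) might split them into structured steps for replay or reporting:

```python
def parse_exploit_chain(steps):
    """Split 'METHOD /path - description' strings into structured steps.

    Matches the exploit_chain format in the example report above.
    """
    parsed = []
    for step in steps:
        request, _, description = step.partition(" - ")
        method, _, path = request.partition(" ")
        parsed.append({"method": method, "path": path, "description": description})
    return parsed
```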

πŸ”§ Configuration

Basic Configuration

# pentest_config.yaml
providers:
  - name: openai
    api_key: ${OPENAI_API_KEY}
    model: gpt-4
  - name: anthropic
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-opus-20240229

target:
  base_url: https://api.example.com
  spec_url: https://api.example.com/swagger.json
  
testing:
  mode: comprehensive
  parallel_requests: 10
  timeout: 30
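The `${OPENAI_API_KEY}` placeholders keep secrets out of the config file and pull them from the environment at load time. A minimal sketch of how that expansion is commonly implemented (the platform may do this differently):

```python
import os
import re

_VAR = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Raises KeyError for unset variables, so a missing API key fails fast
    instead of producing a silently broken config.
    """
    return _VAR.sub(lambda m: os.environ[m.group(1)], text)
```

Run the raw YAML text through `expand_env` before parsing it, so the parsed config already contains the real key values.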

πŸ“š Documentation

🀝 Contributing

We welcome contributions from the community! Please read our Contributing Guide for details on:

  • Code of Conduct
  • Development setup
  • Submission guidelines
  • Issue reporting

πŸ›‘οΈ Security

For security vulnerabilities, please email security@genai-pentest-platform.org instead of using the issue tracker.

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

⚠️ Disclaimer

This tool is for authorized security testing only. Users must:

  • Obtain explicit written permission before testing any API
  • Comply with all applicable laws and regulations
  • Use the tool responsibly and ethically
  • Report vulnerabilities through appropriate channels

The GenAI API Pentest Platform team and contributors are not responsible for misuse or damage caused by this tool.

πŸ™ Acknowledgments

  • OpenAI, Anthropic, and Google for their LLM APIs
  • The global cybersecurity research community
  • All contributors to the GenAI API Pentest Platform project
  • Open source security tools that inspired this platform

Made with ❀️ by the GenSecAI Community in Kolkata
