Neural Network Robustness Verification Tool

A neural network robustness verification tool that combines formal verification methods with adversarial attacks for comprehensive AI security analysis.

🎯 Overview

This tool implements an attack-guided verification strategy that combines:

  • Fast adversarial attacks (FGSM, I-FGSM) for quick falsification
  • Formal verification using α,β-CROWN for mathematical guarantees
  • A performance-optimized implementation that is ready for GPU acceleration

✨ Features

  • α,β-CROWN Integration: Uses the auto-LiRPA library and the α,β-CROWN algorithm, winner of VNN-COMP 2021-2024
  • Attack-Guided Verification: Novel hybrid approach combining speed and rigor
  • Multiple Attack Methods: FGSM, Iterative FGSM with targeted/untargeted variants
  • Comprehensive Analysis: Supports L∞ and L2 norm perturbations (see the norm-ball sketch after this list)
  • Performance Optimized: CPU-efficient implementation ready for GPU acceleration
  • Extensible Architecture: Clean abstractions for adding new verification methods and attacks
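
To make the two norm options concrete, the sketch below shows one common way to keep a perturbation inside an ε-ball for each norm. This is a generic illustration, not code from this repository, and the helper name project_perturbation is hypothetical.

import torch

def project_perturbation(delta: torch.Tensor, epsilon: float, norm: str = "inf") -> torch.Tensor:
    """Project a perturbation onto the epsilon-ball of the given norm (illustrative helper)."""
    if norm == "inf":
        # L-infinity ball: clamp every coordinate to [-epsilon, epsilon]
        return delta.clamp(-epsilon, epsilon)
    if norm == "2":
        # L2 ball: rescale the whole perturbation if its length exceeds epsilon
        length = delta.flatten().norm(p=2).item()
        return delta if length <= epsilon else delta * (epsilon / (length + 1e-12))
    raise ValueError(f"Unsupported norm: {norm}")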

🚀 Quick Start

Prerequisites

  • Python 3.12+
  • PyTorch 2.0+
  • auto-LiRPA (installed during setup; see Installation below)

Installation

# Clone repository
git clone https://github.com/inquisitour/veriphi-verification.git
cd veriphi-verification

# Setup environment
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Install auto-LiRPA
git clone https://github.com/Verified-Intelligence/auto_LiRPA.git
cd auto_LiRPA
pip install -e .
cd ..

# Verify installation
python verify_installation.py

Basic Usage

# Run from src directory
cd src

# Quick robustness check
python -c "
from core import create_core_system
from core.models import create_test_model, create_sample_input

core = create_core_system(use_attacks=True)
model = create_test_model('tiny')
input_sample = create_sample_input('tiny')

result = core.verify_robustness(model, input_sample, epsilon=0.1)
print(f'Verification result: {result.status.value}')
print(f'Time: {result.verification_time:.3f}s')
"

Advanced Usage

# Comprehensive robustness evaluation
from core import create_core_system
from core.models import create_test_model, create_sample_input
import torch

# Initialize system
core = create_core_system(use_attacks=True, device='cpu')
model = create_test_model('linear')  # or 'conv', 'deep'
test_inputs = torch.stack([create_sample_input('linear') for _ in range(5)])

# Evaluate across multiple epsilon values
results = core.evaluate_robustness(
    model, test_inputs, 
    epsilons=[0.01, 0.05, 0.1, 0.2],
    norm="inf"
)

# Display results
for eps, stats in results.items():
    print(f"Ξ΅={eps}: {stats['verification_rate']:.1%} verified")

πŸ›‘οΈ How It Works

Attack-Guided Verification Strategy

  1. Phase 1: Attack Phase (Fast) - Try adversarial attacks first

    • FGSM for single-step attacks
    • I-FGSM for iterative attacks
    • Quick falsification in ~20ms
  2. Phase 2: Formal Verification (Rigorous) - If attacks fail (the full two-phase loop is sketched after this list)

    • α,β-CROWN via auto-LiRPA
    • Mathematical proof of robustness
    • Comprehensive bounds computation
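
The sketch below illustrates the shape of this two-phase loop. It is a simplified illustration under assumed interfaces: the attacks and formal_verify callables are hypothetical stand-ins, not the repository's AttackGuidedEngine.

import time
import torch

def attack_guided_verify(model, x, label, epsilon, attacks, formal_verify, attack_timeout=10.0):
    """Illustrative two-phase loop: cheap attacks first, formal verification as a fallback."""
    start = time.time()

    # Phase 1: try fast attacks; any counterexample immediately falsifies robustness
    for attack in attacks:
        if time.time() - start > attack_timeout:
            break
        x_adv = attack(model, x, label, epsilon)        # hypothetical attack callable
        if model(x_adv).argmax(dim=-1).item() != label:
            return "falsified", x_adv, time.time() - start

    # Phase 2: no counterexample found, fall back to a formal, bound-based proof
    verified = formal_verify(model, x, label, epsilon)  # hypothetical verifier callable
    return ("verified" if verified else "unknown"), None, time.time() - start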

Example Verification Flow

🚀 Starting attack-guided verification
   Property: ε=0.1, norm=L∞

   🗡️ Phase 1: Attack phase (timeout: 10.0s)
      Trying FGSMAttack...
      ○ FGSMAttack failed to find counterexample
      Trying IterativeFGSM...
      ○ IterativeFGSM failed to find counterexample
   ○ Attack phase completed (0.038s) - No counterexamples found

   ⚡ Attacks completed, proceeding with formal verification...
   Computing bounds using method: α-CROWN
   Robustness verified: predicted class maintains highest confidence

✅ Verification result: verified (time: 0.136s)

Supported Verification Methods

  • IBP: Interval Bound Propagation (fastest)
  • CROWN: Convex Relaxation (tighter bounds)
  • α,β-CROWN: State-of-the-art optimization (best performance; see the bounds sketch after this list)
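
The sketch below shows what computing certified bounds with auto-LiRPA typically looks like; the method strings mirror auto-LiRPA's compute_bounds options ("IBP", "CROWN", and "CROWN-Optimized" for α-CROWN). This is a generic auto-LiRPA usage example, not the repository's AlphaBetaCrownEngine.

import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

def certified_logit_bounds(model, x, epsilon, method="CROWN"):
    """Bound the logits over an L-infinity ball of radius epsilon around x (illustrative)."""
    # Wrap the PyTorch model for bound propagation
    bounded_model = BoundedModule(model, torch.empty_like(x))
    # Define the input region: an L-infinity ball of radius epsilon around x
    perturbation = PerturbationLpNorm(norm=float("inf"), eps=epsilon)
    bounded_x = BoundedTensor(x, perturbation)
    # method can be "IBP", "CROWN", or "CROWN-Optimized" (α-CROWN)
    lower, upper = bounded_model.compute_bounds(x=(bounded_x,), method=method)
    return lower, upper

A sufficient (if conservative) robustness check is then that the lower bound of the predicted class exceeds the upper bounds of all other classes.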

📊 Performance

  • Attack Phase: 20-50ms for quick falsification
  • Formal Phase: 100-500ms for mathematical proof
  • Total: Significantly faster than pure formal verification
  • Scalability: Ready for GPU acceleration (10-100x speedup potential)

Benchmark Results

Model Type            Result (ε=0.1)    Method          Time
Tiny (3 classes)      Verified          Attack-guided   136ms
Linear (10 classes)   Falsified         Attack-guided   24ms
Conv (10 classes)     Verified          Formal only     450ms

🔬 Testing

# Run comprehensive tests
python verify_installation.py          # System verification
python core_test_script.py            # Core functionality
python test_attack_guided_verification.py  # Attack system

# Run specific test suites
cd tests
python -m pytest unit/ -v             # Unit tests
python -m pytest integration/ -v      # Integration tests
python -m pytest benchmarks/ -v       # Performance tests
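
As an illustration of what a unit test in tests/unit/ might look like, here is a minimal pytest sketch. It reuses the public API shown in Basic Usage above and assumes src/ is on the Python path; the specific test case and thresholds are hypothetical, not copied from the suite.

from core import create_core_system
from core.models import create_test_model, create_sample_input

def test_tiny_model_at_small_epsilon():
    """A very small perturbation bound on the tiny model should not end in an error state."""
    core = create_core_system(use_attacks=True)
    model = create_test_model('tiny')
    x = create_sample_input('tiny')
    result = core.verify_robustness(model, x, epsilon=0.01)
    # With eps this small we expect a proof or, at worst, an inconclusive result
    assert result.status.value in {"verified", "unknown"}
    assert result.verification_time >= 0.0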

πŸ—οΈ Architecture

src/core/
├── verification/            # Formal verification engines
│   ├── base.py              # Abstract interfaces
│   ├── alpha_beta_crown.py  # α,β-CROWN implementation
│   └── attack_guided.py     # Hybrid verification strategy
├── attacks/                 # Adversarial attack methods
│   ├── base.py              # Attack abstractions
│   └── fgsm.py              # FGSM implementations
├── models/                  # Test neural networks
│   └── test_models.py       # Various architectures
└── __init__.py              # Main VeriphiCore interface
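
The layout above suggests a base interface that new engines plug into. The sketch below is a guess at what such an interface might look like; the actual classes in verification/base.py may differ, and the field names here are assumptions.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional
import torch

@dataclass
class VerificationResult:
    status: str                                    # e.g. "verified", "falsified", "unknown"
    verification_time: float                       # seconds spent on this query
    counterexample: Optional[torch.Tensor] = None  # adversarial input, if one was found

class VerificationEngine(ABC):
    """Interface a new verification method would implement (illustrative)."""

    @abstractmethod
    def verify(self, model: torch.nn.Module, x: torch.Tensor,
               epsilon: float, norm: str = "inf") -> VerificationResult:
        ...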

Key Components

  • VeriphiCore: Main interface for verification system
  • AlphaBetaCrownEngine: Formal verification using auto-LiRPA
  • AttackGuidedEngine: Hybrid strategy combining attacks + formal verification
  • FGSMAttack / IterativeFGSM: Adversarial attack implementations (a generic FGSM sketch follows below)
  • Test Models: Various neural network architectures for testing
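
To illustrate what the FGSM family does, here is a textbook single-step FGSM in PyTorch. It is a generic sketch, not the code in attacks/fgsm.py, and it omits details such as clamping to the valid input range.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon):
    """Untargeted FGSM: step by epsilon in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Moving along sign(grad) maximally increases the loss under an L-infinity budget
    return (x + epsilon * x.grad.sign()).detach()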

Potential Extensions

  • Multi-GPU scaling for large model verification
  • Web dashboard for interactive verification
  • ONNX model support for industry compatibility
  • Additional attack methods (PGD, C&W, AutoAttack)
  • Certification generation for regulatory compliance

📜 Dependencies

Core Dependencies

  • PyTorch >= 2.0.0: Deep learning framework
  • auto-LiRPA >= 0.6.0: Neural network verification library
  • NumPy >= 1.24.0: Numerical computing
  • SciPy >= 1.11.0: Scientific computing

Development Dependencies

  • pytest >= 7.4.0: Testing framework
  • black >= 23.0.0: Code formatting
  • isort >= 5.12.0: Import sorting

Optional Dependencies

  • ONNX >= 1.15.0: Model format support
  • Jupyter >= 1.0.0: Interactive development

🔧 Configuration

The system supports various configuration options:

from core.verification import VerificationConfig
from core.attacks import AttackConfig

# Verification configuration
verify_config = VerificationConfig(
    epsilon=0.1,           # Perturbation bound
    norm="inf",            # L∞ or L2 norm
    bound_method="CROWN",  # IBP, CROWN, or alpha-CROWN
    timeout=300,           # Maximum time in seconds
    optimization_steps=20  # α-CROWN optimization iterations
)

# Attack configuration
attack_config = AttackConfig(
    epsilon=0.1,           # Perturbation bound
    norm="inf",            # L∞ or L2 norm
    max_iterations=20,     # For iterative attacks
    targeted=False,        # Targeted vs untargeted
    early_stopping=True    # Stop when attack succeeds
)

🤝 Contributing

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Install development dependencies (pip install -r requirements-dev.txt)
  4. Make changes and add tests
  5. Run tests (python -m pytest)
  6. Format code (black . && isort .)
  7. Commit changes (git commit -m 'Add amazing feature')
  8. Push to branch (git push origin feature/amazing-feature)
  9. Open Pull Request

📚 Research Background

This implementation is based on cutting-edge research in neural network verification:

  • α,β-CROWN: Wang et al., "α,β-CROWN: An Efficient and Scalable Verifier for Neural Networks" (NeurIPS 2021)
  • auto-LiRPA: Xu et al., "Automatic perturbation analysis for scalable certified robustness and beyond" (NeurIPS 2020)
  • Attack-guided verification: Novel combination of formal methods and adversarial attacks

Citations

If you use this tool in your research, please consider citing:

@software{veriphi_verification_2025,
  title={Neural Network Robustness Verification Tool},
  author={Veriphi Verification Team},
  year={2025},
  url={https://github.com/inquisitour/veriphi-verification}
}

📄 License

MIT License - see LICENSE file for details.

πŸ™ Acknowledgments

  • auto-LiRPA Team: For the excellent verification library and α,β-CROWN implementation
  • α,β-CROWN Authors: For the state-of-the-art verification algorithm
  • VNN-COMP: For driving verification research forward and providing benchmarks
  • PyTorch Team: For the deep learning framework
  • Open Source Community: For making this project possible

📧 Contact

For questions about this implementation or collaboration opportunities:

🔗 Related Projects

