A neural network robustness verification tool that combines formal verification methods with adversarial attacks for comprehensive AI security analysis.
This tool implements an attack-guided verification strategy that combines:
- Fast adversarial attacks (FGSM, I-FGSM) for quick falsification
- Formal verification using α,β-CROWN for mathematical guarantees
- Performance optimization ready for GPU acceleration
- α,β-CROWN Integration: Uses the auto-LiRPA library with the state-of-the-art α,β-CROWN verification algorithm (VNN-COMP winner, 2021-2024)
- Attack-Guided Verification: Novel hybrid approach combining speed and rigor
- Multiple Attack Methods: FGSM, Iterative FGSM with targeted/untargeted variants
- Comprehensive Analysis: Supports L∞ and L2 norm perturbations
- Performance Optimized: CPU-efficient implementation ready for GPU acceleration
- Extensible Architecture: Clean abstractions for adding new verification methods and attacks
- Python 3.12+
- PyTorch 2.0+
- auto-LiRPA (installed from source during setup; see below)
# Clone repository
git clone https://github.com/inquisitour/veriphi-verification.git
cd veriphi-verification
# Setup environment
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Install auto-LiRPA
git clone https://github.com/Verified-Intelligence/auto_LiRPA.git
cd auto_LiRPA
pip install -e .
cd ..
# Verify installation
python verify_installation.py
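If the verification script reports a problem, a quick manual import check (a minimal sketch, not a replacement for verify_installation.py) helps narrow down whether PyTorch or auto-LiRPA is the culprit:
# Optional manual sanity check: both core dependencies should import cleanly
python -c "
import torch, auto_LiRPA
print('PyTorch', torch.__version__)
print('auto_LiRPA imported successfully')
"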
# Run from src directory
cd src
# Quick robustness check
python -c "
from core import create_core_system
from core.models import create_test_model, create_sample_input
core = create_core_system(use_attacks=True)
model = create_test_model('tiny')
input_sample = create_sample_input('tiny')
result = core.verify_robustness(model, input_sample, epsilon=0.1)
print(f'Verification result: {result.status.value}')
print(f'Time: {result.verification_time:.3f}s')
"
# Comprehensive robustness evaluation
from core import create_core_system
from core.models import create_test_model, create_sample_input
import torch
# Initialize system
core = create_core_system(use_attacks=True, device='cpu')
model = create_test_model('linear') # or 'conv', 'deep'
test_inputs = torch.stack([create_sample_input('linear') for _ in range(5)])
# Evaluate across multiple epsilon values
results = core.evaluate_robustness(
    model, test_inputs,
    epsilons=[0.01, 0.05, 0.1, 0.2],
    norm="inf"
)
# Display results
for eps, stats in results.items():
    print(f"ε={eps}: {stats['verification_rate']:.1%} verified")
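The same call covers the L2 threat model; assuming evaluate_robustness accepts the same norm spelling used by the config objects shown later, only the norm argument (and typically larger epsilon values) changes:
# L2 evaluation (assumes norm="2" is the accepted spelling; larger radii are typical for L2)
results_l2 = core.evaluate_robustness(
    model, test_inputs,
    epsilons=[0.1, 0.25, 0.5],
    norm="2"
)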
- Phase 1: Attack Phase (Fast) - Try adversarial attacks first
  - FGSM for single-step attacks
  - I-FGSM for iterative attacks
  - Quick falsification in ~20ms
- Phase 2: Formal Verification (Rigorous) - If attacks fail
  - α,β-CROWN via auto-LiRPA
  - Mathematical proof of robustness
  - Comprehensive bounds computation
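The control flow behind the two phases is simple; the sketch below is illustrative pseudocode only (the function and engine names are hypothetical stand-ins, not the actual attack_guided.py API):
# Illustrative sketch of the attack-guided strategy (hypothetical names)
def attack_guided_verify(model, x, prop, attacks, formal_engine):
    # Phase 1: cheap falsification - any successful attack is a concrete counterexample
    for attack in attacks:                          # e.g. FGSM, then I-FGSM
        adv = attack.run(model, x, prop.epsilon)
        if adv is not None:                         # misclassified point found
            return "falsified", adv                 # done in tens of milliseconds
    # Phase 2: attacks failed, so pay for a mathematical proof via α,β-CROWN bounds
    return formal_engine.verify(model, x, prop), None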
Starting attack-guided verification
  Property: ε=0.1, norm=L∞
Phase 1: Attack phase (timeout: 10.0s)
  Trying FGSMAttack...
  FGSMAttack failed to find counterexample
  Trying IterativeFGSM...
  IterativeFGSM failed to find counterexample
  Attack phase completed (0.038s) - No counterexamples found
Attacks completed, proceeding with formal verification...
  Computing bounds using method: α-CROWN
  Robustness verified: predicted class maintains highest confidence
Verification result: verified (time: 0.136s)
- IBP: Interval Bound Propagation (fastest)
- CROWN: Convex Relaxation (tighter bounds)
- α,β-CROWN: State-of-the-art optimization (best performance)
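These bound methods correspond to auto-LiRPA's compute_bounds interface. The standalone sketch below shows how the library is typically driven on a toy network (α-CROWN is usually requested with the "CROWN-Optimized" method string, but check the auto-LiRPA version you installed):
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Stand-in 2-layer network and a single input (batch dimension included)
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.rand(1, 4)

# Wrap the model and attach an L∞ perturbation of radius ε=0.1 to the input
lirpa_model = BoundedModule(model, torch.empty_like(x))
bounded_x = BoundedTensor(x, PerturbationLpNorm(norm=float("inf"), eps=0.1))

# Tighter methods cost more compute: IBP < CROWN < α-CROWN ("CROWN-Optimized")
for method in ["IBP", "CROWN", "CROWN-Optimized"]:
    lb, ub = lirpa_model.compute_bounds(x=(bounded_x,), method=method)
    print(f"{method}: output lower bounds {lb.squeeze().tolist()}")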
- Attack Phase: 20-50ms for quick falsification
- Formal Phase: 100-500ms for mathematical proof
- Total: Significantly faster than pure formal verification
- Scalability: Ready for GPU acceleration (10-100x speedup potential)
| Model Type          | ε=0.1     | Method        | Time  |
|---------------------|-----------|---------------|-------|
| Tiny (3 classes)    | Verified  | Attack-guided | 136ms |
| Linear (10 classes) | Falsified | Attack-guided | 24ms  |
| Conv (10 classes)   | Verified  | Formal only   | 450ms |
# Run comprehensive tests
python verify_installation.py # System verification
python core_test_script.py # Core functionality
python test_attack_guided_verification.py # Attack system
# Run specific test suites
cd tests
python -m pytest unit/ -v # Unit tests
python -m pytest integration/ -v # Integration tests
python -m pytest benchmarks/ -v # Performance tests
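A new unit test can follow the same pattern as the quick-start snippet. The sketch below assumes pytest can import core from src (as the existing suites do) and only checks properties already demonstrated above:
# tests/unit/test_smoke_verification.py (illustrative sketch)
from core import create_core_system
from core.models import create_test_model, create_sample_input

def test_tiny_model_returns_a_result():
    core = create_core_system(use_attacks=True)
    model = create_test_model('tiny')
    x = create_sample_input('tiny')
    result = core.verify_robustness(model, x, epsilon=0.1)
    assert isinstance(result.status.value, str)   # e.g. 'verified' or 'falsified'
    assert result.verification_time >= 0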
src/core/
├── verification/             # Formal verification engines
│   ├── base.py               # Abstract interfaces
│   ├── alpha_beta_crown.py   # α,β-CROWN implementation
│   └── attack_guided.py      # Hybrid verification strategy
├── attacks/                  # Adversarial attack methods
│   ├── base.py               # Attack abstractions
│   └── fgsm.py               # FGSM implementations
├── models/                   # Test neural networks
│   └── test_models.py        # Various architectures
└── __init__.py               # Main VeriphiCore interface
- VeriphiCore: Main interface for verification system
- AlphaBetaCrownEngine: Formal verification using auto-LiRPA
- AttackGuidedEngine: Hybrid strategy combining attacks + formal verification
- FGSMAttack / IterativeFGSM: Adversarial attack implementations
- Test Models: Various neural network architectures for testing
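New attacks slot in next to the FGSM implementations; the real interface lives in attacks/base.py. The self-contained sketch below shows the kind of PGD-style loop such a class would wrap (the function name and signature here are hypothetical, not the repository's API):
import torch
import torch.nn.functional as F

def pgd_attack(model, x, label, epsilon=0.1, alpha=0.01, steps=20):
    """Illustrative L∞ PGD loop; a repository attack class would wrap logic like this."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the ε-ball around the original input
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon)
        if model(x_adv).argmax(dim=1).item() != label.item():
            return x_adv          # counterexample found - early stopping
    return None                   # attack failed within the budget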
- Multi-GPU scaling for large model verification
- Web dashboard for interactive verification
- ONNX model support for industry compatibility
- Additional attack methods (PGD, C&W, AutoAttack)
- Certification generation for regulatory compliance
- PyTorch >= 2.0.0: Deep learning framework
- auto-LiRPA >= 0.6.0: Neural network verification library
- NumPy >= 1.24.0: Numerical computing
- SciPy >= 1.11.0: Scientific computing
- pytest >= 7.4.0: Testing framework
- black >= 23.0.0: Code formatting
- isort >= 5.12.0: Import sorting
- ONNX >= 1.15.0: Model format support
- Jupyter >= 1.0.0: Interactive development
The system supports various configuration options:
from core.verification import VerificationConfig
from core.attacks import AttackConfig
# Verification configuration
verify_config = VerificationConfig(
    epsilon=0.1,              # Perturbation bound
    norm="inf",               # L∞ or L2 norm
    bound_method="CROWN",     # IBP, CROWN, or alpha-CROWN
    timeout=300,              # Maximum time in seconds
    optimization_steps=20     # α-CROWN optimization iterations
)
# Attack configuration
attack_config = AttackConfig(
    epsilon=0.1,              # Perturbation bound
    norm="inf",               # L∞ or L2 norm
    max_iterations=20,        # For iterative attacks
    targeted=False,           # Targeted vs untargeted
    early_stopping=True       # Stop when attack succeeds
)
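Only fields shown above are used in the presets below; how the engines consume these config objects is defined in core/verification and core/attacks:
# Two illustrative presets built from the same options
fast_config = VerificationConfig(
    epsilon=0.05, norm="inf", bound_method="IBP", timeout=60
)
tight_config = VerificationConfig(
    epsilon=0.05, norm="inf", bound_method="alpha-CROWN",
    timeout=600, optimization_steps=100
)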
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Install development dependencies (`pip install -r requirements-dev.txt`)
- Make changes and add tests
- Run tests (`python -m pytest`)
- Format code (`black . && isort .`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This implementation is based on cutting-edge research in neural network verification:
- α,β-CROWN: Wang et al., "α,β-CROWN: An Efficient and Scalable Verifier for Neural Networks" (NeurIPS 2021)
- auto-LiRPA: Xu et al., "Automatic perturbation analysis for scalable certified robustness and beyond" (NeurIPS 2020)
- Attack-guided verification: Novel combination of formal methods and adversarial attacks
If you use this tool in your research, please consider citing:
@software{veriphi_verification_2025,
  title={Neural Network Robustness Verification Tool},
  author={Veriphi Verification Team},
  year={2025},
  url={https://github.com/inquisitour/veriphi-verification}
}
MIT License - see LICENSE file for details.
- auto-LiRPA Team: For the excellent verification library and α,β-CROWN implementation
- α,β-CROWN Authors: For the state-of-the-art verification algorithm
- VNN-COMP: For driving verification research forward and providing benchmarks
- PyTorch Team: For the deep learning framework
- Open Source Community: For making this project possible
For questions about this implementation or collaboration opportunities:
- Open an issue on GitHub
- Email: deshmukhpratik931@gmail.com
- auto-LiRPA: Core verification library
- α,β-CROWN: Complete verification tool
- VNN-COMP: International verification competition