Modern pytest benchmarking for async code with beautiful terminal output and advanced comparison tools.
- 🎯 Async-First: Designed specifically for benchmarking `async def` functions
- 🔌 Pytest Integration: Seamless integration as a pytest plugin with full pytest-asyncio support
- 🎨 Rich Output: Beautiful terminal reporting powered by Rich!
- 📊 Comprehensive Stats: Min, max, mean, median, std dev, percentiles, and more
- ⚖️ A vs B Comparisons: Compare different implementations side-by-side
- 📈 Multi-Scenario Analysis: Benchmark multiple scenarios with detailed comparison tables
- 🎯 Performance Grading: Automatic performance scoring and analysis
- ⚡ Auto Calibration: Intelligent round and iteration detection
- 🚀 Quick Compare: One-line comparison utilities
- 🏆 Winner Detection: Automatic identification of the best-performing implementation
- 📝 Easy to Use: Simple fixture-based API with native `async`/`await` support
- 🔧 pytest-asyncio Compatible: Works perfectly with pytest-asyncio's event loop management
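A minimal taste of the fixture-based API (the `fetch_data` coroutine and the 50 ms threshold are illustrative; the `await` form assumes pytest-asyncio, as covered below):

```python
import asyncio
import pytest

async def fetch_data():
    await asyncio.sleep(0.005)  # stand-in for real async work
    return {"ok": True}

@pytest.mark.asyncio
async def test_fetch_data_is_fast(async_benchmark):
    result = await async_benchmark(fetch_data)
    assert result['mean'] < 0.05  # mean runtime under 50ms
```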
Already testing async APIs (FastAPI, Quart, aiohttp)? You're all set with the basic installation:
pip install pytest-async-benchmark
# or
uv add pytest-async-benchmark
You'll get the full async/await experience immediately since you already have pytest-asyncio!
Choose your installation based on your needs:
# Full installation with async/await support (recommended)
pip install pytest-async-benchmark[asyncio]
uv add pytest-async-benchmark --optional asyncio
# Basic installation (simple interface)
pip install pytest-async-benchmark
uv add pytest-async-benchmark
Already using `@pytest.mark.asyncio` in your tests? Then the basic installation is all you need:
# If you already have tests like this:
@pytest.mark.asyncio
async def test_my_api():
    # Your existing async test code
    pass

# Then just add pytest-async-benchmark and use:
@pytest.mark.asyncio
async def test_my_api_performance(async_benchmark):
    result = await async_benchmark(my_async_function)
    assert result['mean'] < 0.01
pytest-async-benchmark automatically adapts to your environment, providing two convenient interfaces:
When pytest-asyncio is installed, use the natural async/await syntax:
import asyncio
import pytest

async def slow_async_operation():
    await asyncio.sleep(0.01)  # 10ms
    return "result"

@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=5, iterations=10)
async def test_async_performance(async_benchmark):
    # Use await with pytest-asyncio for best experience
    result = await async_benchmark(slow_async_operation)

    # Your assertions here
    assert result['mean'] < 0.02
For simpler setups, the sync interface works automatically:
import asyncio
import pytest

async def slow_async_operation():
    await asyncio.sleep(0.01)
    return "result"

@pytest.mark.async_benchmark(rounds=5, iterations=10)
def test_sync_performance(async_benchmark):
    # No await needed - sync interface
    result = async_benchmark(slow_async_operation)

    # Your assertions here
    assert result['mean'] < 0.02  # Should complete in under 20ms
pytest-async-benchmark supports two syntax options for configuring benchmarks:
@pytest.mark.async_benchmark(rounds=5, iterations=10)
async def test_with_marker(async_benchmark):
    result = await async_benchmark(slow_async_operation)
    assert result['rounds'] == 5  # From marker

async def test_with_parameters(async_benchmark):
    result = await async_benchmark(slow_async_operation, rounds=5, iterations=10)
    assert result['rounds'] == 5  # From function parameters
pytest-async-benchmark automatically detects your environment and provides the best interface:
When pytest-asyncio is installed, use natural async/await syntax:
# Set in your pyproject.toml for automatic async test detection
[tool.pytest.ini_options]
asyncio_mode = "auto"

# Then use await syntax
@pytest.mark.asyncio
async def test_my_benchmark(async_benchmark):
    result = await async_benchmark(my_async_function)
    # Your assertions here
Benefits of pytest-asyncio integration:
- ✅ Native `async`/`await` syntax support
- ✅ Automatic event loop management
- ✅ No `RuntimeError: cannot be called from a running event loop`
- ✅ Better compatibility with async frameworks like FastAPI, Quart, aiohttp
- ✅ Cleaner test code with standard async patterns
When pytest-asyncio is not available, the simple interface works automatically:
# No pytest-asyncio required
def test_my_benchmark(async_benchmark):
    result = async_benchmark(my_async_function)  # No await needed
    # Your assertions here
Benefits of simple interface:
- ✅ No additional dependencies required
- ✅ Simpler setup for basic use cases
- ✅ Perfect for getting started quickly
- ✅ Automatic event loop management internally
@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
async def test_with_marker(async_benchmark):
    """Use marker for consistent, visible configuration."""
    result = await async_benchmark(my_async_function)
    assert result['rounds'] == 10  # Configuration is explicit and visible

@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
def test_with_marker_sync(async_benchmark):
    """Use marker for consistent, visible configuration - simple style."""
    result = async_benchmark(my_async_function)  # No await needed
    assert result['rounds'] == 10  # Configuration is explicit and visible
# With pytest-asyncio
@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
async def test_with_marker_async(async_benchmark):
    result = await async_benchmark(my_async_function)
    assert result['rounds'] == 10

# Without pytest-asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
def test_with_marker_sync(async_benchmark):
    result = async_benchmark(my_async_function)
    assert result['rounds'] == 10
# With pytest-asyncio
@pytest.mark.asyncio
async def test_with_parameters_async(async_benchmark):
    result = await async_benchmark(
        my_async_function,
        rounds=10,
        iterations=100,
        warmup_rounds=2
    )
    assert result['rounds'] == 10

# Without pytest-asyncio
def test_with_parameters_simple(async_benchmark):
    result = async_benchmark(
        my_async_function,
        rounds=10,
        iterations=100,
        warmup_rounds=2
    )
    assert result['rounds'] == 10
Both interfaces support parameter precedence where function parameters override marker settings:
@pytest.mark.async_benchmark(rounds=5, iterations=50)  # Default config
async def test_with_override(async_benchmark):  # Works with or without @pytest.mark.asyncio
    """Function parameters override marker settings."""
    result = await async_benchmark(  # Use 'await' only with pytest-asyncio
        my_async_function,
        rounds=20  # This overrides marker's rounds=5
        # iterations=50 comes from marker
    )
    assert result['rounds'] == 20      # Function parameter wins
    assert result['iterations'] == 50  # From marker
@pytest.mark.asyncio
async def test_my_async_function(async_benchmark):
    async def my_function():
        # Your async code here
        result = await some_async_operation()
        return result

    # Benchmark with default settings (5 rounds, 1 iteration each)
    stats = await async_benchmark(my_function)

    # Access comprehensive timing statistics
    print(f"Mean execution time: {stats['mean']:.3f}s")
    print(f"Standard deviation: {stats['stddev']:.3f}s")
    print(f"95th percentile: {stats['p95']:.3f}s")
pytest-async-benchmark offers two flexible ways to configure your benchmarks:
Use pytest markers for declarative, visible configuration:
@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
async def test_high_precision_benchmark(async_benchmark):
    """High precision benchmark with marker configuration."""
    result = await async_benchmark(my_async_function)

    # Configuration is visible and consistent
    assert result['rounds'] == 10
    assert result['iterations'] == 100
Benefits:
- ✅ Visible configuration - Parameters are clear at test level
- ✅ IDE support - Better tooling and autocomplete
- ✅ Test discovery - Easy to find all benchmark tests
- ✅ Consistent configs - Same settings across related tests
Use function parameters for dynamic, flexible configuration:
@pytest.mark.asyncio
async def test_dynamic_benchmark(async_benchmark):
    """Dynamic benchmark with runtime configuration."""
    # Configuration can be computed or conditional
    rounds = 20 if is_production else 5

    result = await async_benchmark(
        my_async_function,
        rounds=rounds,
        iterations=50,
        warmup_rounds=1
    )
Benefits:
- ✅ Dynamic configuration - Runtime parameter calculation
- ✅ Conditional logic - Different configs based on environment
- ✅ Per-call customization - Each benchmark call can differ
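Because the configuration is plain function arguments, it also composes with other pytest features. Here is a sketch using `pytest.mark.parametrize` to benchmark several payload sizes with different round counts (the `process_payload` coroutine, the sizes, and the thresholds are illustrative; the `await` form assumes pytest-asyncio):

```python
import asyncio
import pytest

async def process_payload(size):
    await asyncio.sleep(0.001 * size)  # stand-in for size-dependent work
    return size

@pytest.mark.asyncio
@pytest.mark.parametrize("size", [1, 10, 100])
async def test_payload_scaling(async_benchmark, size):
    # Spend more rounds on cheap cases, fewer on the expensive one.
    rounds = 10 if size < 100 else 3
    result = await async_benchmark(process_payload, size, rounds=rounds)
    assert result['mean'] < 0.5
```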
Function parameters override marker parameters:
@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=5, iterations=50, warmup_rounds=1)
async def test_with_overrides(async_benchmark):
    """Use marker defaults with selective overrides."""
    # Quick test with marker defaults
    quick_result = await async_benchmark(fast_function)

    # Precision test with overridden rounds
    precise_result = await async_benchmark(
        slow_function,
        rounds=20  # Overrides marker's rounds=5
        # iterations=50 and warmup_rounds=1 come from marker
    )

    assert quick_result['rounds'] == 5     # From marker
    assert precise_result['rounds'] == 20  # From function override
@pytest.mark.asyncio
async def test_with_custom_settings(async_benchmark):
    result = await async_benchmark(
        my_async_function,
        rounds=10,       # Number of rounds to run
        iterations=5,    # Iterations per round
        warmup_rounds=2  # Warmup rounds before measurement
    )
@pytest.mark.asyncio
async def test_with_args(async_benchmark):
    async def process_data(data, multiplier=1):
        # Process the data
        await asyncio.sleep(0.01)
        return len(data) * multiplier

    result = await async_benchmark(
        process_data,
        "test_data",   # positional arg
        multiplier=2,  # keyword arg
        rounds=3
    )
from pytest_async_benchmark import quick_compare

async def algorithm_v1():
    await asyncio.sleep(0.002)  # 2ms
    return "v1_result"

async def algorithm_v2():
    await asyncio.sleep(0.0015)  # 1.5ms - optimized
    return "v2_result"

# Quick one-liner comparison
def test_algorithm_comparison():
    winner, results = quick_compare(algorithm_v1, algorithm_v2, rounds=5)
    assert winner == "algorithm_v2"  # v2 should be faster
from pytest_async_benchmark import a_vs_b_comparison

def test_detailed_comparison():
    # Compare with beautiful terminal output
    a_vs_b_comparison(
        "Original Algorithm", algorithm_v1,
        "Optimized Algorithm", algorithm_v2,
        rounds=8, iterations=20
    )
from pytest_async_benchmark import BenchmarkComparator

def test_multi_scenario():
    comparator = BenchmarkComparator()

    # Add multiple scenarios
    comparator.add_scenario(
        "Database Query v1", db_query_v1,
        rounds=5, iterations=10,
        description="Original database implementation"
    )
    comparator.add_scenario(
        "Database Query v2", db_query_v2,
        rounds=5, iterations=10,
        description="Optimized with connection pooling"
    )

    # Run comparison and get results
    results = comparator.run_comparison()

    # Beautiful comparison table automatically displayed
    # Access programmatic results
    fastest = results.get_fastest_scenario()
    assert fastest.name == "Database Query v2"
Each benchmark returns detailed statistics:
{
    'min': 0.001234,       # Minimum execution time
    'max': 0.005678,       # Maximum execution time
    'mean': 0.002456,      # Mean execution time
    'median': 0.002123,    # Median execution time
    'stddev': 0.000234,    # Standard deviation
    'p50': 0.002123,       # 50th percentile (median)
    'p90': 0.003456,       # 90th percentile
    'p95': 0.004123,       # 95th percentile
    'p99': 0.004789,       # 99th percentile
    'rounds': 5,           # Number of rounds executed
    'iterations': 1,       # Number of iterations per round
    'raw_times': [...],    # List of raw timing measurements
    'grade': 'A',          # Performance grade (A-F)
    'grade_score': 87.5    # Numeric grade score (0-100)
}
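These keys can back concrete regression assertions in a test. A short sketch (the thresholds and `my_async_function` are illustrative, and the `await` form assumes pytest-asyncio):

```python
@pytest.mark.asyncio
async def test_latency_budget(async_benchmark):
    stats = await async_benchmark(my_async_function, rounds=10)

    # Guard the typical case and the tail separately.
    assert stats['median'] < 0.010  # typical call under 10ms
    assert stats['p99'] < 0.050     # worst 1% under 50ms
    assert stats['max'] < 0.100     # no single outlier over 100ms
```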
🚀 Async Benchmark Results: test_my_function

┏━━━━━━━━━━━━┳━━━━━━━━━━┓
┃ Metric     ┃ Value    ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━┩
│ Min        │ 10.234ms │
│ Max        │ 15.678ms │
│ Mean       │ 12.456ms │
│ Median     │ 12.123ms │
│ Std Dev    │ 1.234ms  │
│ 95th %ile  │ 14.567ms │
│ 99th %ile  │ 15.234ms │
│ Grade      │ A (87.5) │
│ Rounds     │ 5        │
│ Iterations │ 1        │
└────────────┴──────────┘

✅ Benchmark completed successfully!
⚖️ A vs B Comparison Results

┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┓
┃ Scenario           ┃ Algorithm A ┃ Algorithm B ┃ Winner ┃
┡━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━┩
│ Mean Time          │ 2.456ms     │ 1.789ms     │ B 🏆   │
│ Median Time        │ 2.234ms     │ 1.678ms     │ B 🏆   │
│ 95th Percentile    │ 3.456ms     │ 2.345ms     │ B 🏆   │
│ Standard Deviation │ 0.567ms     │ 0.234ms     │ B 🏆   │
│ Performance Grade  │ B (76.2)    │ A (89.1)    │ B 🏆   │
│ Improvement        │ -           │ 27.2%       │ -      │
└────────────────────┴─────────────┴─────────────┴────────┘

🏆 Winner: Algorithm B (27.2% faster)
pytest-async-benchmark/
├── src/
│   └── pytest_async_benchmark/
│       ├── __init__.py          # Main exports and API
│       ├── plugin.py            # Pytest plugin and fixtures
│       ├── runner.py            # Core benchmarking engine
│       ├── display.py           # Rich terminal output formatting
│       ├── stats.py             # Statistical calculations
│       ├── utils.py             # Utility functions
│       ├── analytics.py         # Performance analysis tools
│       └── comparison.py        # A vs B comparison functionality
├── examples/
│   ├── pytest_examples.py       # Comprehensive pytest usage examples
│   ├── quart_api_comparison.py  # Real-world API endpoint comparison
│   └── comparison_examples.py   # Advanced comparison features demo
├── tests/
│   ├── test_async_bench.py      # Core functionality tests
│   ├── test_comparison.py       # Comparison feature tests
│   ├── test_demo.py             # Demo test cases
│   └── conftest.py              # Test configuration
├── pyproject.toml               # Package configuration
└── README.md                    # This file
Comprehensive pytest usage examples including:
- Basic benchmarking with the `async_benchmark` fixture
- Advanced configuration options
- Performance assertions and testing patterns
- Using markers for benchmark organization
Real-world API endpoint comparison demo featuring:
- Quart web framework setup
- API v1 vs v2 endpoint benchmarking
- Live server testing with actual HTTP requests
- Performance regression detection
Advanced comparison features showcase:
- Multi-scenario benchmark comparisons
- A vs B testing with detailed analysis
- Performance grading and scoring
- Statistical comparison utilities
import asyncio

from fastapi import FastAPI
from fastapi.testclient import TestClient
import pytest

app = FastAPI()

@app.get("/api/data")
async def get_data():
    # Simulate database query
    await asyncio.sleep(0.005)
    return {"data": "example"}

@pytest.mark.asyncio
async def test_fastapi_endpoint_performance(async_benchmark):
    async def make_request():
        with TestClient(app) as client:
            response = client.get("/api/data")
            return response.json()

    result = await async_benchmark(make_request, rounds=10)

    assert result['mean'] < 0.1           # Should respond within 100ms
    assert result['grade'] in ['A', 'B']  # Should have good performance grade
See the complete example in `examples/quart_api_comparison.py`:
from pytest_async_benchmark import a_vs_b_comparison
import asyncio
import aiohttp

async def test_api_v1():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://localhost:5000/api/v1/data') as resp:
            return await resp.json()

async def test_api_v2():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://localhost:5000/api/v2/data') as resp:
            return await resp.json()

# Compare API versions
a_vs_b_comparison(
    "API v1", test_api_v1,
    "API v2 (Optimized)", test_api_v2,
    rounds=10, iterations=5
)
@pytest.mark.asyncio
async def test_database_query_performance(async_benchmark):
    async def fetch_user_data(user_id):
        async with database.connection() as conn:
            return await conn.fetch_one(
                "SELECT * FROM users WHERE id = ?", user_id
            )

    result = await async_benchmark(fetch_user_data, 123, rounds=5)

    assert result['mean'] < 0.05  # Should complete within 50ms
    assert result['p95'] < 0.1    # 95% of queries under 100ms
@pytest.mark.async_benchmark
@pytest.mark.asyncio
async def test_performance(async_benchmark):
    # Your benchmark test
    result = await async_benchmark(my_async_function)
    assert result is not None
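Because these tests carry the `async_benchmark` marker, pytest's standard `-m` selection can run the benchmarks on their own or leave them out of a fast test lane, for example:

```bash
# Run only benchmark-marked tests
pytest -m async_benchmark

# Run everything except benchmarks
pytest -m "not async_benchmark"
```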
Parameters:
- `func`: The async function to benchmark
- `*args`: Positional arguments to pass to the function
- `rounds`: Number of measurement rounds (default: 5)
- `iterations`: Number of iterations per round (default: 1)
- `warmup_rounds`: Number of warmup rounds before measurement (default: 1)
- `**kwargs`: Keyword arguments to pass to the function
Returns: A dictionary with comprehensive statistics including min, max, mean, median, stddev, percentiles, performance grade, and raw measurements.
- `quick_compare(func_a, func_b, **kwargs)`: Quick comparison returning winner and results
- `a_vs_b_comparison(name_a, func_a, name_b, func_b, **kwargs)`: Detailed comparison with terminal output
- `BenchmarkComparator`: Class for multi-scenario benchmarking and analysis
- Python ≥ 3.9
- pytest ≥ 8.3.5
- pytest-asyncio ≥ 0.23.0 (automatically installed)
Note: Rich (for beautiful terminal output) is automatically installed as a dependency.
# Clone the repository
git clone https://github.com/yourusername/pytest-async-benchmark.git
cd pytest-async-benchmark
# Install dependencies
uv sync
# Run tests
uv run pytest tests/ -v
# Run examples
uv run pytest examples/pytest_examples.py -v
# Test real-world Quart API comparison
uv run python examples/quart_api_comparison.py
# See advanced comparison features
uv run python examples/comparison_examples.py
This project uses Ruff for both linting and formatting:
# Check code for linting issues
uv run ruff check .
# Fix auto-fixable linting issues
uv run ruff check . --fix
# Check code formatting
uv run ruff format --check .
# Format code automatically
uv run ruff format .
# Run both linting and formatting in one go
uv run ruff check . --fix && uv run ruff format .
# Run all quality checks at once (linting, formatting, and tests)
uv run python scripts/quality-check.py
Before creating a release, verify everything is ready:
# Run comprehensive release check
uv run python scripts/release-check.py
# This checks:
# ✅ Git repository status
# ✅ Version consistency
# ✅ Code formatting and linting
# ✅ Test suite passes
# ✅ Package builds successfully
# ✅ All required files exist
Run all quality checks at once:
# Run linting, formatting, tests, and release checks
python scripts/quality-check.py
# This will:
# 🔧 Fix linting issues automatically
# 🎨 Format code with Ruff
# 🧪 Run the full test suite
# 📋 Check release readiness
This project uses GitHub Actions for automated testing and publishing to PyPI:
- Continuous Integration: Tests run on every push for Python 3.9-3.13
- Test Publishing: Automatic uploads to TestPyPI for testing releases
- Production Releases: Secure publishing to PyPI using trusted publishing
- Release Validation: Comprehensive checks ensure package quality
- Update version in `pyproject.toml` and `src/pytest_async_benchmark/__init__.py`
- Run `uv run python scripts/release-check.py` to verify readiness
- Create a git tag: `git tag v1.0.0 && git push origin v1.0.0`
- Create a GitHub release to trigger automated PyPI publishing
See RELEASE_GUIDE.md for detailed release instructions.
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details.
Built with ❤️ for the async Python community