pytest-async-benchmark πŸš€

Modern pytest benchmarking for async code with beautiful terminal output and advanced comparison tools.

✨ Features

  • 🎯 Async-First: Designed specifically for benchmarking async def functions
  • πŸ”Œ Pytest Integration: Seamless integration as a pytest plugin with full pytest-asyncio support
  • 🎨 Rich Output: Beautiful terminal reporting powered by Rich!
  • πŸ“Š Comprehensive Stats: Min, max, mean, median, std dev, percentiles, and more
  • βš–οΈ A vs B Comparisons: Compare different implementations side-by-side
  • πŸ“ˆ Multi-Scenario Analysis: Benchmark multiple scenarios with detailed comparison tables
  • 🎯 Performance Grading: Automatic performance scoring and analysis
  • ⚑ Auto Calibration: Intelligent round and iteration detection
  • πŸ”„ Quick Compare: One-line comparison utilities
  • πŸ† Winner Detection: Automatic identification of best-performing implementation
  • πŸš€ Easy to Use: Simple fixture-based API with native async/await support
  • πŸ”§ pytest-asyncio Compatible: Works perfectly with pytest-asyncio's event loop management

πŸ“¦ Installation

For Existing pytest-asyncio Users πŸš€

Already testing async APIs (FastAPI, Quart, aiohttp)? You're all set with the basic installation:

pip install pytest-async-benchmark
# or
uv add pytest-async-benchmark

You'll get the full async/await experience immediately since you already have pytest-asyncio!

For New Users

Choose your installation based on your needs:

# Full installation with async/await support (recommended)
pip install "pytest-async-benchmark[asyncio]"
uv add "pytest-async-benchmark[asyncio]"

# Basic installation (simple interface)
pip install pytest-async-benchmark
uv add pytest-async-benchmark

Quick Check βœ…

Already using @pytest.mark.asyncio in your tests? Then the basic installation is all you need:

# If you already have tests like this:
@pytest.mark.asyncio
async def test_my_api():
    # Your existing async test code
    pass

# Then just add pytest-async-benchmark and use:
@pytest.mark.asyncio
async def test_my_api_performance(async_benchmark):
    result = await async_benchmark(my_async_function)
    assert result['mean'] < 0.01

πŸš€ Quick Start

pytest-async-benchmark automatically adapts to your environment, providing two convenient interfaces:

Async Interface (with pytest-asyncio)

When pytest-asyncio is installed, use the natural async/await syntax:

import asyncio
import pytest

async def slow_async_operation():
    await asyncio.sleep(0.01)  # 10ms
    return "result"

@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=5, iterations=10)
async def test_async_performance(async_benchmark):
    # Use await with pytest-asyncio for best experience
    result = await async_benchmark(slow_async_operation)
    
    # Your assertions here
    assert result['mean'] < 0.02

Simple Interface (without pytest-asyncio)

For simpler setups, the sync interface works automatically:

import asyncio
import pytest

async def slow_async_operation():
    await asyncio.sleep(0.01)
    return "result"

@pytest.mark.async_benchmark(rounds=5, iterations=10)
def test_sync_performance(async_benchmark):
    # No await needed - sync interface
    result = async_benchmark(slow_async_operation)
    
    # Your assertions here
    assert result['mean'] < 0.02  # Should complete in under 20ms

Flexible Configuration Syntax

pytest-async-benchmark supports two syntax options for configuring benchmarks:

Option 1: Marker Syntax (Recommended)

@pytest.mark.async_benchmark(rounds=5, iterations=10)
async def test_with_marker(async_benchmark):
    result = await async_benchmark(slow_async_operation)
    assert result['rounds'] == 5  # From marker

Option 2: Function Parameter Syntax

async def test_with_parameters(async_benchmark):
    result = await async_benchmark(slow_async_operation, rounds=5, iterations=10)
    assert result['rounds'] == 5  # From function parameters

πŸ”§ Interface Detection & Flexibility

pytest-async-benchmark automatically detects your environment and provides the best interface:

With pytest-asyncio (Recommended)

When pytest-asyncio is installed, use natural async/await syntax:

Set asyncio_mode in your pyproject.toml for automatic async test detection:

[tool.pytest.ini_options]
asyncio_mode = "auto"

Then use await syntax in your tests:

@pytest.mark.asyncio
async def test_my_benchmark(async_benchmark):
    result = await async_benchmark(my_async_function)
    # Your assertions here

Benefits of pytest-asyncio integration:

  • βœ… Native async/await syntax support
  • βœ… Automatic event loop management
  • βœ… No RuntimeError: cannot be called from a running event loop
  • βœ… Better compatibility with async frameworks like FastAPI, Quart, aiohttp
  • βœ… Cleaner test code with standard async patterns

Without pytest-asyncio (Simple Setup)

When pytest-asyncio is not available, the simple interface works automatically:

# No pytest-asyncio required
def test_my_benchmark(async_benchmark):
    result = async_benchmark(my_async_function)  # No await needed
    # Your assertions here

Benefits of simple interface:

  • βœ… No additional dependencies required
  • βœ… Simpler setup for basic use cases
  • βœ… Perfect for getting started quickly
  • βœ… Automatic event loop management internally

🎯 Core Usage Examples

Interface Flexibility

Async Interface (with pytest-asyncio)

@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
async def test_with_marker(async_benchmark):
    """Use marker for consistent, visible configuration."""
    result = await async_benchmark(my_async_function)
    assert result['rounds'] == 10  # Configuration is explicit and visible

Simple Interface (without pytest-asyncio)

@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
def test_with_marker_sync(async_benchmark):
    """Use marker for consistent, visible configuration - simple style."""
    result = async_benchmark(my_async_function)  # No await needed
    assert result['rounds'] == 10  # Configuration is explicit and visible

Two Configuration Syntaxes

Marker Syntax (Declarative)

# With pytest-asyncio
@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
async def test_with_marker_async(async_benchmark):
    result = await async_benchmark(my_async_function)
    assert result['rounds'] == 10

# Without pytest-asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
def test_with_marker_sync(async_benchmark):
    result = async_benchmark(my_async_function)
    assert result['rounds'] == 10

Function Parameter Syntax (Dynamic)

# With pytest-asyncio
@pytest.mark.asyncio
async def test_with_parameters_async(async_benchmark):
    result = await async_benchmark(
        my_async_function,
        rounds=10,
        iterations=100,
        warmup_rounds=2
    )
    assert result['rounds'] == 10

# Without pytest-asyncio
def test_with_parameters_simple(async_benchmark):
    result = async_benchmark(
        my_async_function,
        rounds=10,
        iterations=100,
        warmup_rounds=2
    )
    assert result['rounds'] == 10

Combined Syntax (Override)

Both interfaces support parameter precedence where function parameters override marker settings:

@pytest.mark.asyncio  # Shown with pytest-asyncio; drop async/await for the simple interface
@pytest.mark.async_benchmark(rounds=5, iterations=50)  # Default config
async def test_with_override(async_benchmark):
    """Function parameters override marker settings."""
    result = await async_benchmark(
        my_async_function,
        rounds=20  # Overrides the marker's rounds=5
        # iterations=50 comes from the marker
    )
    assert result['rounds'] == 20      # Function parameter wins
    assert result['iterations'] == 50  # From marker

Basic Benchmarking

@pytest.mark.asyncio
async def test_my_async_function(async_benchmark):
    async def my_function():
        # Your async code here
        result = await some_async_operation()
        return result
    
    # Benchmark with default settings (5 rounds, 1 iteration each)
    stats = await async_benchmark(my_function)
    
    # Access comprehensive timing statistics
    print(f"Mean execution time: {stats['mean']:.3f}s")
    print(f"Standard deviation: {stats['stddev']:.3f}s")
    print(f"95th percentile: {stats['p95']:.3f}s")

Advanced Configuration

🎯 Two Configuration Syntaxes

pytest-async-benchmark offers two flexible ways to configure your benchmarks:

🏷️ Marker Syntax (Recommended)

Use pytest markers for declarative, visible configuration:

@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=10, iterations=100, warmup_rounds=2)
async def test_high_precision_benchmark(async_benchmark):
    """High precision benchmark with marker configuration."""
    result = await async_benchmark(my_async_function)
    
    # Configuration is visible and consistent
    assert result['rounds'] == 10
    assert result['iterations'] == 100

Benefits:

  • βœ… Visible configuration - Parameters are clear at test level
  • βœ… IDE support - Better tooling and autocomplete
  • βœ… Test discovery - Easy to find all benchmark tests
  • βœ… Consistent configs - Same settings across related tests

βš™οΈ Function Parameter Syntax

Use function parameters for dynamic, flexible configuration:

@pytest.mark.asyncio
async def test_dynamic_benchmark(async_benchmark):
    """Dynamic benchmark with runtime configuration."""
    # Configuration can be computed or conditional
    rounds = 20 if is_production else 5
    
    result = await async_benchmark(
        my_async_function,
        rounds=rounds,
        iterations=50,
        warmup_rounds=1
    )

Benefits:

  • βœ… Dynamic configuration - Runtime parameter calculation
  • βœ… Conditional logic - Different configs based on environment
  • βœ… Per-call customization - Each benchmark call can differ

πŸ”„ Combined Syntax (Best of Both)

Function parameters override marker parameters:

@pytest.mark.asyncio
@pytest.mark.async_benchmark(rounds=5, iterations=50, warmup_rounds=1)
async def test_with_overrides(async_benchmark):
    """Use marker defaults with selective overrides."""
    
    # Quick test with marker defaults
    quick_result = await async_benchmark(fast_function)
    
    # Precision test with overridden rounds
    precise_result = await async_benchmark(
        slow_function,
        rounds=20  # Overrides marker's rounds=5
        # iterations=50 and warmup_rounds=1 come from marker
    )
    
    assert quick_result['rounds'] == 5   # From marker
    assert precise_result['rounds'] == 20  # From function override

Traditional Configuration

@pytest.mark.asyncio
async def test_with_custom_settings(async_benchmark):
    result = await async_benchmark(
        my_async_function,
        rounds=10,        # Number of rounds to run
        iterations=5,     # Iterations per round
        warmup_rounds=2   # Warmup rounds before measurement
    )

With Function Arguments

@pytest.mark.asyncio
async def test_with_args(async_benchmark):
    async def process_data(data, multiplier=1):
        # Process the data
        await asyncio.sleep(0.01)
        return len(data) * multiplier
    
    result = await async_benchmark(
        process_data,
        "test_data",      # positional arg
        multiplier=2,     # keyword arg
        rounds=3
    )

βš–οΈ A vs B Comparison Features

Quick Comparison

from pytest_async_benchmark import quick_compare

async def algorithm_v1():
    await asyncio.sleep(0.002)  # 2ms
    return "v1_result"

async def algorithm_v2():
    await asyncio.sleep(0.0015)  # 1.5ms - optimized
    return "v2_result"

# Quick one-liner comparison
def test_algorithm_comparison():
    winner, results = quick_compare(algorithm_v1, algorithm_v2, rounds=5)
    assert winner == "algorithm_v2"  # v2 should be faster

Detailed A vs B Analysis

from pytest_async_benchmark import a_vs_b_comparison

def test_detailed_comparison():
    # Compare with beautiful terminal output
    a_vs_b_comparison(
        "Original Algorithm", algorithm_v1,
        "Optimized Algorithm", algorithm_v2,
        rounds=8, iterations=20
    )

Multi-Scenario Benchmarking

from pytest_async_benchmark import BenchmarkComparator

def test_multi_scenario():
    comparator = BenchmarkComparator()
    
    # Add multiple scenarios
    comparator.add_scenario(
        "Database Query v1", db_query_v1,
        rounds=5, iterations=10,
        description="Original database implementation"
    )
    
    comparator.add_scenario(
        "Database Query v2", db_query_v2,
        rounds=5, iterations=10,
        description="Optimized with connection pooling"
    )
    
    # Run comparison and get results
    results = comparator.run_comparison()
    
    # Beautiful comparison table automatically displayed
    # Access programmatic results
    fastest = results.get_fastest_scenario()
    assert fastest.name == "Database Query v2"

πŸ“Š Comprehensive Statistics

Each benchmark returns detailed statistics:

{
    'min': 0.001234,      # Minimum execution time
    'max': 0.005678,      # Maximum execution time  
    'mean': 0.002456,     # Mean execution time
    'median': 0.002123,   # Median execution time
    'stddev': 0.000234,   # Standard deviation
    'p50': 0.002123,      # 50th percentile (median)
    'p90': 0.003456,      # 90th percentile
    'p95': 0.004123,      # 95th percentile
    'p99': 0.004789,      # 99th percentile
    'rounds': 5,          # Number of rounds executed
    'iterations': 1,      # Number of iterations per round
    'raw_times': [...],   # List of raw timing measurements
    'grade': 'A',         # Performance grade (A-F)
    'grade_score': 87.5   # Numeric grade score (0-100)
}
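
These fields can also drive custom checks in a test. A minimal sketch (my_async_function is a placeholder for your own coroutine and the thresholds are illustrative):

@pytest.mark.asyncio
async def test_stats_fields(async_benchmark):
    stats = await async_benchmark(my_async_function, rounds=10)

    # Gate on tail latency and the computed performance grade
    assert stats['p95'] < 0.05
    assert stats['grade'] in ('A', 'B')

    # raw_times keeps the individual measurements for custom analysis,
    # e.g. checking how widely the timings spread
    spread = max(stats['raw_times']) - min(stats['raw_times'])
    assert spread < 0.05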

🎨 Beautiful Terminal Output

Basic Benchmark Output

πŸš€ Async Benchmark Results: test_my_function
┏━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Metric      ┃ Value      ┃
┑━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
β”‚ Min         β”‚ 10.234ms   β”‚
β”‚ Max         β”‚ 15.678ms   β”‚
β”‚ Mean        β”‚ 12.456ms   β”‚
β”‚ Median      β”‚ 12.123ms   β”‚
β”‚ Std Dev     β”‚ 1.234ms    β”‚
β”‚ 95th %ile   β”‚ 14.567ms   β”‚
β”‚ 99th %ile   β”‚ 15.234ms   β”‚
β”‚ Grade       β”‚ A (87.5)   β”‚
β”‚ Rounds      β”‚ 5          β”‚
β”‚ Iterations  β”‚ 1          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
βœ… Benchmark completed successfully!

A vs B Comparison Output

βš–οΈ  A vs B Comparison Results
┏━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ Scenario                ┃ Algorithm A ┃ Algorithm B ┃ Winner    ┃
┑━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━┩
β”‚ Mean Time              β”‚ 2.456ms     β”‚ 1.789ms     β”‚ B πŸ†      β”‚
β”‚ Median Time            β”‚ 2.234ms     β”‚ 1.678ms     β”‚ B πŸ†      β”‚
β”‚ 95th Percentile        β”‚ 3.456ms     β”‚ 2.345ms     β”‚ B πŸ†      β”‚
β”‚ Standard Deviation     β”‚ 0.567ms     β”‚ 0.234ms     β”‚ B πŸ†      β”‚
β”‚ Performance Grade      β”‚ B (76.2)    β”‚ A (89.1)    β”‚ B πŸ†      β”‚
β”‚ Improvement            β”‚ -           β”‚ 27.2%       β”‚ -         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
πŸ† Winner: Algorithm B (27.2% faster)

πŸ—οΈ Project Structure

pytest-async-benchmark/
β”œβ”€β”€ src/
β”‚   └── pytest_async_benchmark/
β”‚       β”œβ”€β”€ __init__.py          # Main exports and API
β”‚       β”œβ”€β”€ plugin.py            # Pytest plugin and fixtures
β”‚       β”œβ”€β”€ runner.py            # Core benchmarking engine
β”‚       β”œβ”€β”€ display.py           # Rich terminal output formatting
β”‚       β”œβ”€β”€ stats.py             # Statistical calculations
β”‚       β”œβ”€β”€ utils.py             # Utility functions
β”‚       β”œβ”€β”€ analytics.py         # Performance analysis tools
β”‚       └── comparison.py        # A vs B comparison functionality
β”œβ”€β”€ examples/
β”‚   β”œβ”€β”€ pytest_examples.py      # Comprehensive pytest usage examples
β”‚   β”œβ”€β”€ quart_api_comparison.py  # Real-world API endpoint comparison
β”‚   └── comparison_examples.py   # Advanced comparison features demo
β”œβ”€β”€ tests/
β”‚   β”œβ”€β”€ test_async_bench.py      # Core functionality tests
β”‚   β”œβ”€β”€ test_comparison.py       # Comparison feature tests
β”‚   β”œβ”€β”€ test_demo.py             # Demo test cases
β”‚   └── conftest.py              # Test configuration
β”œβ”€β”€ pyproject.toml               # Package configuration
└── README.md                    # This file

πŸ“š Example Files Guide

examples/pytest_examples.py

Comprehensive pytest usage examples including:

  • Basic benchmarking with the async_benchmark fixture
  • Advanced configuration options
  • Performance assertions and testing patterns
  • Using markers for benchmark organization

examples/quart_api_comparison.py

Real-world API endpoint comparison demo featuring:

  • Quart web framework setup
  • API v1 vs v2 endpoint benchmarking
  • Live server testing with actual HTTP requests
  • Performance regression detection

examples/comparison_examples.py

Advanced comparison features showcase:

  • Multi-scenario benchmark comparisons
  • A vs B testing with detailed analysis
  • Performance grading and scoring
  • Statistical comparison utilities

🌐 Real-World Examples

FastAPI Endpoint Benchmarking

import asyncio

from fastapi import FastAPI
from fastapi.testclient import TestClient
import pytest

app = FastAPI()

@app.get("/api/data")
async def get_data():
    # Simulate database query
    await asyncio.sleep(0.005)
    return {"data": "example"}

@pytest.mark.asyncio
async def test_fastapi_endpoint_performance(async_benchmark):
    async def make_request():
        with TestClient(app) as client:
            response = client.get("/api/data")
            return response.json()
    
    result = await async_benchmark(make_request, rounds=10)
    assert result['mean'] < 0.1  # Should respond within 100ms
    assert result['grade'] in ['A', 'B']  # Should have good performance grade

Quart API Endpoint Comparison

See the complete example in examples/quart_api_comparison.py:

from pytest_async_benchmark import a_vs_b_comparison
import asyncio
import aiohttp

async def test_api_v1():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://localhost:5000/api/v1/data') as resp:
            return await resp.json()

async def test_api_v2():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://localhost:5000/api/v2/data') as resp:
            return await resp.json()

# Compare API versions
a_vs_b_comparison(
    "API v1", test_api_v1,
    "API v2 (Optimized)", test_api_v2,
    rounds=10, iterations=5
)

Database Query Benchmarking

@pytest.mark.asyncio
async def test_database_query_performance(async_benchmark):
    async def fetch_user_data(user_id):
        async with database.connection() as conn:
            return await conn.fetch_one(
                "SELECT * FROM users WHERE id = ?", user_id
            )
    
    result = await async_benchmark(fetch_user_data, 123, rounds=5)
    assert result['mean'] < 0.05  # Should complete within 50ms
    assert result['p95'] < 0.1    # 95% of queries under 100ms

🎯 Using Markers

@pytest.mark.async_benchmark
@pytest.mark.asyncio
async def test_performance(async_benchmark):
    # Your benchmark test
    result = await async_benchmark(my_async_function)
    assert result is not None

πŸ“‹ API Reference

async_benchmark(func, *args, rounds=None, iterations=None, warmup_rounds=1, **kwargs)

Parameters:

  • func: The async function to benchmark
  • *args: Positional arguments to pass to the function
  • rounds: Number of measurement rounds (default: 5)
  • iterations: Number of iterations per round (default: 1)
  • warmup_rounds: Number of warmup rounds before measurement (default: 1)
  • **kwargs: Keyword arguments to pass to the function

Returns: A dictionary with comprehensive statistics including min, max, mean, median, stddev, percentiles, performance grade, and raw measurements.
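
For reference, one call can combine all of these parameters. A sketch in which process_item, its arguments, and the threshold are purely illustrative:

@pytest.mark.asyncio
async def test_api_reference_call(async_benchmark):
    stats = await async_benchmark(
        process_item,       # func: the async callable to benchmark
        "payload",          # positional arg forwarded to process_item
        rounds=10,          # measurement rounds
        iterations=5,       # iterations per round
        warmup_rounds=2,    # un-measured warmup rounds
        batch_size=32,      # keyword arg forwarded to process_item
    )
    assert stats['mean'] < 0.05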

Comparison Functions

  • quick_compare(func_a, func_b, **kwargs): Quick comparison returning winner and results
  • a_vs_b_comparison(name_a, func_a, name_b, func_b, **kwargs): Detailed comparison with terminal output
  • BenchmarkComparator: Class for multi-scenario benchmarking and analysis

πŸ“‹ Requirements

  • Python β‰₯ 3.9
  • pytest β‰₯ 8.3.5
  • pytest-asyncio β‰₯ 0.23.0 (automatically installed)

Note: Rich (for beautiful terminal output) is automatically installed as a dependency.

πŸš€ Development

# Clone the repository
git clone https://github.com/mihailtd/pytest-async-benchmark.git
cd pytest-async-benchmark

# Install dependencies
uv sync

# Run tests
uv run pytest tests/ -v

# Run examples
uv run pytest examples/pytest_examples.py -v

# Test real-world Quart API comparison
uv run python examples/quart_api_comparison.py

# See advanced comparison features
uv run python examples/comparison_examples.py

πŸ› οΈ Code Quality and Formatting

This project uses Ruff for both linting and formatting:

# Check code for linting issues
uv run ruff check .

# Fix auto-fixable linting issues
uv run ruff check . --fix

# Check code formatting
uv run ruff format --check .

# Format code automatically
uv run ruff format .

# Run both linting and formatting in one go
uv run ruff check . --fix && uv run ruff format .

# Run all quality checks at once (linting, formatting, and tests)
uv run python scripts/quality-check.py

πŸ“‹ Release Readiness Check

Before creating a release, verify everything is ready:

# Run comprehensive release check
uv run python scripts/release-check.py

# This checks:
# βœ… Git repository status
# βœ… Version consistency 
# βœ… Code formatting and linting
# βœ… Test suite passes
# βœ… Package builds successfully
# βœ… All required files exist

πŸš€ Quick Quality Check

Run all quality checks at once:

# Run linting, formatting, tests, and release checks
uv run python scripts/quality-check.py

# This will:
# πŸ”§ Fix linting issues automatically
# 🎨 Format code with Ruff
# πŸ§ͺ Run the full test suite
# πŸ“‹ Check release readiness

πŸš€ Automated Releases

This project uses GitHub Actions for automated testing and publishing to PyPI:

  • Continuous Integration: Tests run on every push for Python 3.9-3.13
  • Test Publishing: Automatic uploads to TestPyPI for testing releases
  • Production Releases: Secure publishing to PyPI using trusted publishing
  • Release Validation: Comprehensive checks ensure package quality

Creating a Release

  1. Update version in pyproject.toml and src/pytest_async_benchmark/__init__.py
  2. Run uv run python scripts/release-check.py to verify readiness
  3. Create a git tag: git tag v1.0.0 && git push origin v1.0.0
  4. Create a GitHub release to trigger automated PyPI publishing

See RELEASE_GUIDE.md for detailed release instructions.

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

πŸ“„ License

MIT License - see LICENSE file for details.


Built with ❀️ for the async Python community
