A lightweight Python library for reproducible computational experiments with an ultra-simple, smart API. From idea to insight in under 5 minutes, with zero configuration.
- **Ultra-Simple API**: A single `@experiment` decorator - that's it!
- **Auto-Everything**: Parameters, metrics, and results detected automatically
- **Smart Exploration**: Automated parameter space exploration with multiple strategies
- **Intelligent Insights**: Automated pattern detection and recommendations
- **Web Dashboard**: Beautiful real-time experiment monitoring
- **CLI Analytics**: Powerful command-line tools for ad-hoc analysis
- **Query Interface**: Find experiments using simple expressions like `"accuracy > 0.9"`
- **Reproducible**: Git commit tracking, environment capture, seed management
- **Local-First**: SQLite database - no external servers required
```bash
pip install rexf
```
```python
from rexf import experiment, run

@experiment
def my_experiment(learning_rate, batch_size=32):
    # Your experiment code here
    accuracy = train_model(learning_rate, batch_size)
    return {"accuracy": accuracy, "loss": 1 - accuracy}

# Run a single experiment
run.single(my_experiment, learning_rate=0.01, batch_size=64)

# Get insights
print(run.insights())

# Find the best experiments
best = run.best(metric="accuracy", top=5)

# Auto-explore the parameter space
run.auto_explore(my_experiment, strategy="random", budget=20)

# Launch the web dashboard
run.dashboard()
```
RexF prioritizes user experience over architectural purity. Instead of making you learn complex APIs, it automatically detects what you're doing and provides smart features to accelerate your research.
```python
import math
import random

from rexf import experiment, run

@experiment
def estimate_pi(num_samples=10000, method="uniform"):
    """Estimate π using Monte Carlo methods."""
    inside_circle = 0
    for _ in range(num_samples):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            inside_circle += 1
    pi_estimate = 4 * inside_circle / num_samples
    error = abs(pi_estimate - math.pi)
    return {
        "pi_estimate": pi_estimate,
        "error": error,
        "accuracy": 1 - (error / math.pi),
    }

# Run experiments
run.single(estimate_pi, num_samples=50000, method="uniform")
run.single(estimate_pi, num_samples=100000, method="stratified")

# Auto-explore to find the best parameters
run_ids = run.auto_explore(
    estimate_pi,
    strategy="grid",
    budget=10,
    optimization_target="accuracy",
)

# Get smart insights
insights = run.insights()
print(f"Success rate: {insights['summary']['success_rate']:.1%}")

# Find high-accuracy runs
accurate_runs = run.find("accuracy > 0.99")

# Compare experiments
run.compare(run.best(top=3))

# Launch the web dashboard
run.dashboard()  # Opens http://localhost:8080
```
```python
# Random exploration
run.auto_explore(my_experiment, strategy="random", budget=20)

# Grid search
run.auto_explore(my_experiment, strategy="grid", budget=15)

# Adaptive exploration (learns from results)
run.auto_explore(
    my_experiment,
    strategy="adaptive",
    budget=25,
    optimization_target="accuracy",
)
```
```python
# Find experiments using expressions
high_acc = run.find("accuracy > 0.9")
fast_runs = run.find("duration < 30")
recent_good = run.find("accuracy > 0.8 and start_time > '2024-01-01'")

# Query help
run.query_help()
```
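Under the hood, an expression layer like this typically translates simple comparisons into a SQL `WHERE` clause against the local database. The sketch below is purely illustrative - the `parse_query` helper, column names, and table layout are assumptions, not RexF's actual implementation:

```python
import re
import sqlite3

# Hypothetical translator: handles bare comparisons joined by "and"
_COMPARISON = re.compile(r"^\s*(\w+)\s*(>=|<=|>|<|=)\s*('[^']*'|[\d.]+)\s*$")

def parse_query(expression):
    """Translate a simple expression into a parameterized WHERE clause."""
    clauses, params = [], []
    for part in expression.split(" and "):
        match = _COMPARISON.match(part)
        if match is None:
            raise ValueError(f"Unsupported clause: {part!r}")
        column, op, value = match.groups()
        clauses.append(f"{column} {op} ?")
        params.append(value.strip("'") if value.startswith("'") else float(value))
    return " AND ".join(clauses), params

# Demo against a throwaway in-memory table of runs
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (run_id TEXT, accuracy REAL, duration REAL)")
conn.executemany(
    "INSERT INTO runs VALUES (?, ?, ?)",
    [("a", 0.95, 12.0), ("b", 0.80, 45.0), ("c", 0.92, 20.0)],
)
where, params = parse_query("accuracy > 0.9 and duration < 30")
rows = conn.execute(f"SELECT run_id FROM runs WHERE {where}", params).fetchall()
print([r[0] for r in rows])  # ['a', 'c']
```

Parameterized placeholders (`?`) keep user-supplied values out of the SQL string itself, which matters even for a local analytics database.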
```python
# Get next-experiment suggestions
suggestions = run.suggest(
    my_experiment,
    count=5,
    strategy="balanced",  # "exploit", "explore", or "balanced"
    optimization_target="accuracy",
)

for suggestion in suggestions["suggestions"]:
    print(f"Try: {suggestion['parameters']}")
    print(f"Reason: {suggestion['reasoning']}")
```
Analyze experiments from the command line:
```bash
# Show a summary
rexf-analytics --summary

# Query experiments
rexf-analytics --query "accuracy > 0.9"

# Generate insights
rexf-analytics --insights

# Compare the best experiments
rexf-analytics --compare --best 5

# Export to CSV
rexf-analytics --list --format csv --output results.csv
```
Launch a beautiful web interface:
```python
run.dashboard()  # Opens http://localhost:8080
```
Features:
- Real-time experiment monitoring
- Interactive filtering and search
- Automated insights generation
- Statistics overview and trends
- Experiment comparison tools
```python
import mlflow
from sacred import Experiment

# Complex setup required
ex = Experiment('my_exp')
mlflow.set_tracking_uri("...")

@ex.config
def config():
    learning_rate = 0.01
    batch_size = 32

@ex.automain
def main(learning_rate, batch_size):
    with mlflow.start_run():
        accuracy = ...  # your training code here
        mlflow.log_param("lr", learning_rate)
        mlflow.log_metric("accuracy", accuracy)
```
```python
from rexf import experiment, run

@experiment
def my_experiment(learning_rate=0.01, batch_size=32):
    accuracy = ...  # your code here - that's it!
    return {"accuracy": accuracy}

run.single(my_experiment, learning_rate=0.05)
```
| Feature | Traditional Tools | RexF |
|---|---|---|
| Setup | Complex configuration | Single decorator |
| Parameter Detection | Manual logging | Automatic |
| Metric Tracking | Manual logging | Automatic |
| Insights | Manual analysis | Auto-generated |
| Exploration | Write custom loops | `run.auto_explore()` |
| Comparison | Custom dashboards | `run.compare()` |
| Querying | SQL/Complex APIs | `run.find("accuracy > 0.9")` |
RexF uses a clean, modular architecture:
```
rexf/
├── core/          # Core experiment logic (@experiment decorator)
├── backends/      # Storage implementation (IntelligentStorage)
├── intelligence/  # Smart features (insights, exploration, queries)
├── dashboard/     # Web interface
├── cli/           # Command-line tools
└── run.py         # Main user interface
```
- **IntelligentStorage**: Analytics-focused SQLite storage with advanced querying
- **Simple API**: A single `@experiment` decorator for zero-configuration usage
- **Smart Intelligence**: Automated insights, exploration, and recommendations
- **ExplorationEngine**: Automated parameter space exploration (grid, random, adaptive)
- **InsightsEngine**: Pattern detection and automated analysis
- **SuggestionEngine**: Intelligent next-experiment recommendations
- **SmartQueryEngine**: Expression-based experiment querying
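Conceptually, the `run` module acts as a facade over these engines: one entry point that delegates to the specialized components. The sketch below illustrates that shape only - the class and method names are hypothetical, not RexF's internal API:

```python
# Illustrative facade over specialized engines (names are assumptions)
class InsightsEngine:
    def summarize(self, runs):
        """Compute a simple success-rate summary over recorded runs."""
        successes = sum(1 for r in runs if r.get("status") == "completed")
        return {"success_rate": successes / len(runs) if runs else 0.0}

class Runner:
    """Single entry point that records runs and delegates analysis."""

    def __init__(self):
        self._runs = []
        self._insights = InsightsEngine()

    def single(self, fn, **params):
        result = fn(**params)
        self._runs.append({"status": "completed", "results": result})
        return result

    def insights(self):
        return {"summary": self._insights.summarize(self._runs)}

runner = Runner()
runner.single(lambda x=1: {"accuracy": 0.9}, x=2)
print(runner.insights())  # {'summary': {'success_rate': 1.0}}
```

Keeping the facade thin means each engine can evolve independently while users only ever learn one surface.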
RexF automatically captures:
- Experiment metadata: Name, timestamp, duration, status
- Parameters: Function arguments and defaults
- Results: Return values (auto-categorized as metrics/results/artifacts)
- Environment: Git commit, Python version, dependencies
- Reproducibility: Random seeds, system info
All data is stored locally in SQLite with no external dependencies.
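A local-first store like this can be pictured as a single database file holding one row per run, with parameters and results serialized alongside the metadata. The sketch below is a minimal stand-in - the table layout, column names, and the `experiments.db` filename are assumptions, not RexF's actual schema:

```python
import json
import sqlite3
import time

# Hypothetical minimal run store (not RexF's schema); an in-memory
# database is used here, where a real store would use a file on disk.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS experiments (
        run_id TEXT PRIMARY KEY,
        name TEXT,
        started_at REAL,
        parameters TEXT,   -- JSON-encoded function arguments
        metrics TEXT       -- JSON-encoded return values
    )"""
)

def record_run(run_id, name, parameters, metrics):
    """Persist one run, storing parameters and metrics as JSON blobs."""
    conn.execute(
        "INSERT INTO experiments VALUES (?, ?, ?, ?, ?)",
        (run_id, name, time.time(), json.dumps(parameters), json.dumps(metrics)),
    )

record_run("run-1", "estimate_pi", {"num_samples": 50000}, {"accuracy": 0.999})
row = conn.execute(
    "SELECT metrics FROM experiments WHERE run_id = 'run-1'"
).fetchone()
print(json.loads(row[0])["accuracy"])  # 0.999
```

SQLite ships with Python's standard library, which is what makes a zero-server, zero-configuration storage backend possible.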
RexF ensures reproducibility by automatically tracking:
- Code version: Git commit hash and diff
- Environment: Python version, installed packages
- Parameters: All function arguments and defaults
- Random seeds: Automatic seed capture and restoration
- System info: OS, hardware, execution environment
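Capturing this kind of snapshot is mostly standard-library work. A minimal sketch of the idea (the function names here are illustrative, not RexF's API):

```python
import platform
import random
import subprocess

def capture_environment():
    """Collect a minimal reproducibility snapshot for the current run."""
    try:
        # Git commit of the working tree, if we are inside a repository
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], stderr=subprocess.DEVNULL, text=True
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        commit = None
    return {
        "git_commit": commit,
        "python_version": platform.python_version(),
        "platform": platform.platform(),
    }

def seeded_run(seed, fn, *args, **kwargs):
    """Fix the random seed before a run so it can be replayed exactly."""
    random.seed(seed)
    return fn(*args, **kwargs)

snapshot = capture_environment()
print(sorted(snapshot))  # ['git_commit', 'platform', 'python_version']

# The same seed always reproduces the same stream of random draws
assert seeded_run(42, random.random) == seeded_run(42, random.random)
```

A real tracker would also record installed packages (e.g. from `importlib.metadata`) and any uncommitted diff, since a commit hash alone does not pin a dirty working tree.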
- [x] Phase 1: Simple API and smart features
- [x] Phase 2: Auto-exploration and insights
- [x] Phase 3: Web dashboard and CLI tools
- [ ] Phase 4: Advanced optimization and ML integration
- [ ] Phase 5: Cloud sync and collaboration features
We welcome contributions! Please see our Contributing Guide for details.
```bash
git clone https://github.com/dhruv1110/rexf.git
cd rexf
pip install -e ".[dev]"
pre-commit install

# Run the test suite
pytest tests/ -v --cov=rexf
```
MIT License - see LICENSE for details.
- Documentation: Read the Docs
- PyPI: https://pypi.org/project/rexf/
- GitHub: https://github.com/dhruv1110/rexf
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ for researchers who want to focus on science, not infrastructure.