Lab is a powerful, flexible test runner designed specifically for Ferret scripts. It enables automated testing of web scraping, browser automation, and API testing scenarios using Ferret Query Language (FQL).
Perfect for:
- End-to-end web application testing
- Web scraping validation and monitoring
- API integration testing
- Browser automation testing
- Regression testing for web applications
Read the introductory blog post about Lab here!
- Features
- Installation
- Quick Start
- Test Suites
- Advanced Usage
- Configuration Reference
- Architecture
- Development
- Best Practices
- Troubleshooting
- Contributing
- License
- Parallel execution - Run multiple tests concurrently for faster feedback
- Configurable concurrency - Control the number of simultaneous test executions
- Test retry mechanism - Automatic retry of failed tests with customizable attempts
- Batch execution - Run tests multiple times with configurable intervals
- Built-in Ferret runtime - Execute tests using embedded Ferret engine
- Remote HTTP runtime - Connect to remote Ferret services via HTTP/HTTPS
- External binary runtime - Use custom Ferret CLI installations
- Multi-runtime testing - Test against different Ferret versions or configurations
- Local filesystem - Execute scripts from local directories
- Git repositories - Fetch and run tests directly from Git repos (HTTP/HTTPS)
- HTTP sources - Download and execute scripts from web URLs
- Glob pattern matching - Select multiple files using wildcard patterns
- Built-in HTTP server - Serve static files for testing web applications
- Multiple CDN endpoints - Host different content on various paths
- Custom aliases - Name your content endpoints for better organization
- Dynamic port allocation - Automatically find available ports
- Multiple output formats - Console and simple reporters available
- Detailed test results - Comprehensive execution metrics and timing
- Wait conditions - Test and wait for external services to be available
- Environment variable support - Configure tests via environment variables
Download the latest pre-built binaries from our releases page.
Linux:
curl -L https://github.com/MontFerret/lab/releases/latest/download/lab-linux-amd64.tar.gz | tar xz
sudo mv lab /usr/local/bin/
macOS:
curl -L https://github.com/MontFerret/lab/releases/latest/download/lab-darwin-amd64.tar.gz | tar xz
sudo mv lab /usr/local/bin/
Windows:
Download the `.zip` file from releases and extract `lab.exe` to your PATH.
The easiest way to install Lab on Unix-like systems:
curl -fsSL https://raw.githubusercontent.com/MontFerret/lab/master/install.sh | sh
This script automatically:
- Detects your operating system and architecture
- Downloads the appropriate binary
- Installs it to `/usr/local/bin/`
- Makes it executable
Run Lab in a container without installing it locally:
# Pull the latest image
docker pull montferret/lab:latest
# Run a simple test
docker run --rm -v $(pwd):/workspace montferret/lab:latest /workspace/tests/
# With custom options
docker run --rm -v $(pwd):/workspace montferret/lab:latest \
--concurrency=4 --reporter=simple /workspace/tests/
Docker Compose Example:
version: '3.8'
services:
  lab:
    image: montferret/lab:latest
    volumes:
      - ./tests:/workspace/tests
      - ./static:/workspace/static
    command: ["--cdn=/workspace/static", "/workspace/tests/"]
For development or custom builds:
# Prerequisites: Go 1.23+ required
git clone https://github.com/MontFerret/lab.git
cd lab
go build -o lab .
# Or use the Makefile
make build
lab version
lab --help
The simplest way to run Ferret scripts with Lab:
# Execute a single FQL script
lab myscript.fql
# Run all FQL scripts in a directory
lab myscripts/
# Run with increased concurrency
lab --concurrency=4 myscripts/
# Run tests multiple times
lab --times=3 myscript.fql
Create a simple test file `example.fql`:
LET doc = DOCUMENT("https://www.github.com", {
  driver: "cdp",
  userAgent: "Lab Test Runner"
})

// Wait for page to load
WAIT_ELEMENT(doc, "header")

// Extract page title
LET title = doc.title

// Return result
RETURN {
  url: doc.url,
  title: title,
  hasGitHubLogo: ELEMENT_EXISTS(doc, "[aria-label*='GitHub']")
}
Run it:
lab example.fql
For browser automation, you'll need a Chrome/Chromium instance running in headless mode:
# Start Chrome in headless mode (separate terminal)
google-chrome --headless --remote-debugging-port=9222
# Run your tests (default CDP address)
lab --cdp=http://127.0.0.1:9222 browser-tests/
# Or use a custom CDP address
lab --cdp=http://localhost:9223 browser-tests/
$ lab example.fql
✓ example.fql (1.23s)
  Assertions: 1 passed, 0 failed
Tests: 1 passed, 0 failed
Time: 1.23s
Lab supports sophisticated test suites defined in YAML format, enabling you to create complex testing scenarios with assertions, parameters, and reusable components.
query:
  text: |
    LET doc = DOCUMENT("https://github.com/", { driver: "cdp" })
    HOVER(doc, ".HeaderMenu-details")
    CLICK(doc, ".HeaderMenu a")
    WAIT_NAVIGATION(doc)
    WAIT_ELEMENT(doc, '.IconNav')
    FOR el IN ELEMENTS(doc, '.IconNav a')
      RETURN TRIM(el.innerText)
assert:
  text: RETURN T::NOT::EMPTY(@lab.data.query.result)
Save as `github-test.yaml` and run:
lab github-test.yaml
Keep your FQL scripts separate and reference them in test suites:
navigation.fql:
LET doc = DOCUMENT(@url, { driver: "cdp" })
WAIT_ELEMENT(doc, "body")
RETURN doc.title
suite.yaml:
query:
  ref: ./scripts/navigation.fql
params:
  url: "https://example.com"
assert:
  text: |
    RETURN T::NOT::EMPTY(@lab.data.query.result)
      AND T::CONTAINS(@lab.data.query.result, "Example")
name: "E-commerce User Journey"
description: "Test complete user purchase flow"
setup:
  text: |
    LET baseUrl = "https://demo-shop.example.com"
    RETURN { baseUrl }
query:
  text: |
    LET doc = DOCUMENT(@lab.data.setup.result.baseUrl, { driver: "cdp" })

    // Navigate to product
    CLICK(doc, ".product-item:first-child a")
    WAIT_NAVIGATION(doc)

    // Add to cart
    CLICK(doc, ".add-to-cart")
    WAIT_ELEMENT(doc, ".cart-confirmation")

    // Go to checkout
    CLICK(doc, ".checkout-btn")
    WAIT_NAVIGATION(doc)

    RETURN {
      currentUrl: doc.url,
      cartItems: LENGTH(ELEMENTS(doc, ".cart-item")),
      totalPrice: INNER_TEXT(doc, ".total-price")
    }
assert:
  text: |
    LET result = @lab.data.query.result
    RETURN T::CONTAINS(result.currentUrl, "checkout")
      AND result.cartItems > 0
      AND T::NOT::EMPTY(result.totalPrice)
cleanup:
  text: |
    // Clear cart or perform cleanup
    RETURN "Cleanup completed"
Create reusable test suites with parameters:
query:
  text: |
    LET doc = DOCUMENT(@testUrl, {
      driver: "cdp",
      timeout: @pageTimeout
    })
    WAIT_ELEMENT(doc, @selector)
    RETURN {
      title: doc.title,
      elementExists: ELEMENT_EXISTS(doc, @selector)
    }
assert:
  text: |
    LET result = @lab.data.query.result
    RETURN result.elementExists == true
Run with parameters:
lab --param=testUrl:"https://example.com" \
--param=pageTimeout:5000 \
--param=selector:"h1" \
test-suite.yaml
Use external data sources for comprehensive testing:
query:
  text: |
    LET testData = [
      { url: "https://site1.com", expectedTitle: "Site 1" },
      { url: "https://site2.com", expectedTitle: "Site 2" }
    ]
    FOR test IN testData
      LET doc = DOCUMENT(test.url, { driver: "cdp" })
      WAIT_ELEMENT(doc, "title")
      RETURN {
        url: test.url,
        expectedTitle: test.expectedTitle,
        actualTitle: doc.title,
        matches: doc.title == test.expectedTitle
      }
assert:
  text: |
    LET failures = (
      FOR result IN @lab.data.query.result
        FILTER result.matches != true
        RETURN result
    )
    RETURN LENGTH(failures) == 0
Lab supports multiple source locations for maximum flexibility:
# Single file
lab /path/to/test.fql
# Directory with glob patterns
lab "tests/**/*.fql"
lab tests/integration/
# Multiple paths
lab --files=tests/unit/ --files=tests/integration/ --files=scripts/smoke.fql
Fetch and execute tests directly from Git repositories:
# HTTPS Git repository
lab git+https://github.com/username/test-repo.git//tests/
# HTTP Git repository
lab git+http://git.example.com/tests.git//integration/
# Specific branch or tag
lab git+https://github.com/username/tests.git@v1.2.0//suite.yaml
# Private repositories (requires authentication)
lab git+https://username:token@github.com/private/repo.git//tests/
Download scripts from web URLs:
# Direct script URL
lab https://raw.githubusercontent.com/user/repo/main/test.fql
# Multiple HTTP sources
lab https://example.com/tests/suite1.yaml https://example.com/tests/suite2.yaml
Lab includes a built-in HTTP server for serving static content during tests:
# Serve files from ./website directory
lab --cdn=./website tests/
# Access in your FQL scripts
LET doc = DOCUMENT(@lab.cdn.website, { driver: "cdp" })
# Serve multiple directories
lab --cdn=./app --cdn=./api-mocks tests/
FQL Script:
// Access different endpoints
LET appPage = DOCUMENT(@lab.cdn.app, { driver: "cdp" })
LET apiData = DOCUMENT(@lab.cdn.api-mocks + "/users.json")
# Give custom names to your content
lab --cdn=./frontend@app --cdn=./mockdata@api tests/
FQL Script:
// Use custom aliases
LET homePage = DOCUMENT(@lab.cdn.app + "/index.html", { driver: "cdp" })
LET userData = DOCUMENT(@lab.cdn.api + "/user/123.json")
# Complex setup with multiple content sources
lab \
--cdn=./dist@webapp \
--cdn=./test-fixtures@fixtures \
--cdn=./mock-apis@mocks \
--concurrency=3 \
tests/e2e/
Lab can execute tests against remote Ferret instances instead of using the built-in runtime:
# Connect to remote Ferret service
lab --runtime=https://ferret.example.com/api tests/
# With custom headers and path
lab \
--runtime=https://ferret.example.com \
--runtime-param=headers:'{"Authorization": "Bearer token123"}' \
--runtime-param=path:"/v1/execute" \
tests/
The HTTP runtime sends POST requests with the following JSON body:
{
  "query": "FQL script content",
  "params": {
    "key": "value"
  }
}
Use custom Ferret CLI installations:
# Use specific Ferret binary
lab --runtime=bin:./custom-ferret tests/
# With additional parameters
lab \
--runtime=bin:/usr/local/bin/ferret-v0.18 \
--runtime-param=timeout:30 \
tests/
Test against multiple runtime versions:
# Test with built-in runtime
lab tests/ > builtin-results.txt
# Test with remote runtime
lab --runtime=https://ferret-v0.17.example.com tests/ > remote-v0.17-results.txt
# Compare results
diff builtin-results.txt remote-v0.17-results.txt
# Run up to 8 tests simultaneously
lab --concurrency=8 tests/
# Balance between speed and resource usage
lab --concurrency=4 --timeout=60 large-test-suite/
# Run each test 3 times for reliability testing
lab --times=3 tests/flaky/
# Retry failed tests up to 2 additional times
lab --attempts=3 tests/
# Add delay between test cycles
lab --times=5 --times-interval=10 stress-tests/
# Wait for services to be available before running tests
lab \
--wait=http://127.0.0.1:9222/json/version \
--wait=postgres://localhost:5432/testdb \
--wait-timeout=30 \
tests/integration/
| Flag | Short | Environment Variable | Default | Description |
|------|-------|----------------------|---------|-------------|
| `--files` | `-f` | `LAB_FILES` | - | Location of FQL script files to run |
| `--timeout` | `-t` | `LAB_TIMEOUT` | `30` | Test timeout in seconds |
| `--cdp` | - | `LAB_CDP` | `http://127.0.0.1:9222` | Chrome DevTools Protocol address |
| `--reporter` | - | `LAB_REPORTER` | `console` | Output reporter (`console`, `simple`) |
| `--runtime` | `-r` | `LAB_RUNTIME` | - | URL to remote Ferret runtime |
| `--runtime-param` | `--rp` | `LAB_RUNTIME_PARAM` | - | Parameters for remote runtime |
| `--concurrency` | `-c` | `LAB_CONCURRENCY` | `1` | Number of parallel test executions |
| `--times` | - | `LAB_TIMES` | `1` | Number of times to run each test |
| `--attempts` | `-a` | `LAB_ATTEMPTS` | `1` | Number of retry attempts for failed tests |
| `--times-interval` | - | `LAB_TIMES_INTERVAL` | `0` | Interval between test cycles (seconds) |
| `--cdn` | - | `LAB_CDN` | - | Directory to serve via HTTP |
| `--param` | `-p` | `LAB_PARAM` | - | Query parameters for tests |
| `--wait` | `-w` | `LAB_WAIT` | - | Wait for resource availability |
| `--wait-timeout` | `--wt` | `LAB_WAIT_TIMEOUT` | `5` | Wait timeout in seconds |
| `--wait-attempts` | - | `LAB_WAIT_ATTEMPTS` | `5` | Number of wait attempts |
Set environment variables for consistent configuration across environments:
# Basic configuration
export LAB_TIMEOUT=60
export LAB_CONCURRENCY=4
export LAB_REPORTER=simple
# CDP configuration
export LAB_CDP=http://chrome-headless:9222
# Runtime configuration
export LAB_RUNTIME=https://ferret-api.example.com
export LAB_RUNTIME_PARAM='headers:{"API-Key":"secret123"}'
# Run tests
lab tests/
#!/bin/bash
# ci-test.sh
# Set CI-friendly defaults
export LAB_TIMEOUT=120
export LAB_CONCURRENCY=2
export LAB_REPORTER=simple
export LAB_ATTEMPTS=3
# Wait for services
lab \
--wait=http://app:3000/health \
--wait=postgres://db:5432/testdb \
--wait-timeout=60 \
tests/integration/
#!/bin/bash
# dev-test.sh
export LAB_CDP=http://localhost:9222
export LAB_TIMEOUT=30
export LAB_CONCURRENCY=1
# Serve local assets and run tests
lab \
--cdn=./dist@app \
--cdn=./fixtures@data \
tests/dev/
#!/bin/bash
# load-test.sh
# High concurrency for performance testing
lab \
--concurrency=20 \
--times=100 \
--times-interval=1 \
--timeout=10 \
tests/performance/
Configure remote Ferret runtime behavior:
# HTTP runtime with custom headers
lab \
--runtime=https://ferret.api.com \
--runtime-param='headers:{"Authorization":"Bearer token"}' \
--runtime-param='path:"/v2/execute"' \
--runtime-param='timeout:30' \
tests/
# Binary runtime with custom flags
lab \
--runtime=bin:/usr/local/bin/ferret \
--runtime-param='flags:["--timeout=60", "--verbose"]' \
tests/
Lab is built with a modular architecture that separates concerns and enables flexible testing scenarios:
┌──────────────────┐    ┌──────────────────┐    ┌──────────────────┐
│   Test Sources   │    │   Test Runner    │    │  Ferret Runtime  │
│                  │    │                  │    │                  │
│ • File System    │───▶│ • Orchestration  │───▶│ • Built-in       │
│ • Git Repos      │    │ • Parallelization│    │ • Remote HTTP    │
│ • HTTP URLs      │    │ • Retry Logic    │    │ • External Bin   │
└──────────────────┘    │ • Reporting      │    └──────────────────┘
                        └──────────────────┘
                                 │
                                 ▼
                        ┌──────────────────┐
                        │    CDN Server    │
                        │                  │
                        │ • Static Files   │
                        │ • Multi-tenant   │
                        │ • Auto Ports     │
                        └──────────────────┘
Handles fetching test files from various locations:
- FileSystem Source: Local directory and file access with glob pattern support
- Git Source: Clone and fetch files from Git repositories (HTTP/HTTPS)
- HTTP Source: Download scripts from web URLs
- Aggregate Source: Combines multiple source types
Manages Ferret script execution:
- Built-in Runtime: Uses embedded Ferret engine (default)
- Remote Runtime: HTTP-based communication with remote Ferret services
- Binary Runtime: Executes external Ferret CLI binaries
Orchestrates test execution:
- Parallel Processing: Manages concurrent test execution
- Retry Mechanism: Handles failed test retries
- Resource Management: Controls timeouts and resource allocation
- Lifecycle Management: Handles setup, execution, and cleanup phases
Built-in HTTP server for static content:
- Multi-endpoint: Serve multiple directories simultaneously
- Dynamic Ports: Automatic port allocation to avoid conflicts
- Alias Support: Custom naming for endpoints
Output formatting and result presentation:
- Console Reporter: Rich, colored output for interactive use
- Simple Reporter: Plain text output suitable for CI/CD
Test suite definition and validation:
- YAML Parser: Parse test suite definitions
- Parameter Injection: Handle runtime parameters and data binding
- Assertion Engine: Validate test results
- Input Processing: Parse command-line arguments and environment variables
- Source Resolution: Fetch test files from specified sources
- CDN Initialization: Start HTTP servers for static content (if needed)
- Runtime Setup: Initialize Ferret runtime (built-in or remote)
- Test Discovery: Find and parse test files and suites
- Parallel Execution: Run tests according to concurrency settings
- Result Collection: Gather execution results and timing data
- Reporting: Format and output results via selected reporter
- Cleanup: Stop CDN servers and clean up resources
- Modularity: Each component has a single responsibility
- Extensibility: Easy to add new source types, runtimes, or reporters
- Performance: Optimized for parallel execution and resource efficiency
- Reliability: Built-in retry mechanisms and error handling
- Flexibility: Support for various deployment scenarios and configurations
Prerequisites:
- Go 1.23 or later
- Git
Build Steps:
# Clone the repository
git clone https://github.com/MontFerret/lab.git
cd lab
# Install development tools
make install-tools
# Build the project
make build
# Or manually:
go build -o bin/lab -ldflags "-X main.version=dev" ./main.go
Development Workflow:
# Run tests
make test
# Or:
go test ./...
# Format code
make fmt
# Lint code
make lint
# Run all checks (vet, test, compile)
make build
# Run unit tests
go test -v ./...
# Run specific test suites
go test -v ./sources/...
go test -v ./runtime/...
# Run tests with coverage
make cover
lab/
├── main.go          # Application entry point
├── cmd/             # CLI command implementations
├── cdn/             # Static file server
├── reporters/       # Output formatters
├── runner/          # Test execution orchestration
├── runtime/         # Ferret runtime implementations
├── sources/         # Test file source handlers
├── testing/         # Test suite definitions
├── assets/          # Documentation assets
├── Dockerfile       # Container build definition
├── Makefile         # Build automation
└── README.md        # This file
To add a new source type:

- Implement the `Source` interface in `sources/`
- Add URL scheme handling in `sources/source.go`
- Add tests in `sources/`

To add a new runtime:

- Implement the `Runtime` interface in `runtime/`
- Add runtime type detection in `runtime/runtime.go`
- Add configuration handling

To add a new reporter:

- Implement the `Reporter` interface in `reporters/`
- Register the reporter in CLI flags
- Add output format tests
tests/
├── unit/             # Unit tests for individual components
│   ├── api/
│   └── ui/
├── integration/      # Integration tests
│   ├── user-flows/
│   └── data-validation/
├── e2e/              # End-to-end tests
│   ├── critical-path/
│   └── smoke/
├── fixtures/         # Test data and assets
│   ├── pages/
│   └── data/
└── scripts/          # Reusable FQL scripts
    ├── common/
    └── helpers/
- Use descriptive test names: `user-registration-flow.yaml`
- Prefix test types: `smoke-`, `regression-`, `load-`
- Use kebab-case for files: `checkout-process.fql`
# Good: Descriptive names and clear structure
name: "User Authentication Flow"
description: "Verify user login, logout, and session management"
setup:
  text: |
    // Clear any existing sessions
    // Set up test data
query:
  text: |
    // Main test logic with clear comments
assert:
  text: |
    // Specific, meaningful assertions
cleanup:
  text: |
    // Clean up test data
# Local development: Low concurrency
lab --concurrency=2 tests/
# CI environments: Medium concurrency
lab --concurrency=4 tests/
# Dedicated test infrastructure: High concurrency
lab --concurrency=8 tests/
- Use appropriate timeouts for different test types
- Implement proper cleanup in test suites
- Monitor memory usage with large test suites
- Use CDN for shared static assets
# Run faster tests first
lab tests/smoke/ && lab tests/integration/ && lab tests/e2e/
# Use tags for test categorization
lab tests/critical/ --timeout=60
lab tests/extended/ --timeout=300 --concurrency=1
- Never commit sensitive data in test files
- Use environment variables for credentials
- Sanitize test outputs that might contain secrets
- Use separate test environments for security testing
# Good: Use environment variables
export TEST_API_KEY="your-key-here"
lab --param=apiKey:$TEST_API_KEY tests/
# Bad: Hardcode in scripts
# Don't do this: LET apiKey = "secret-key-123"
Error: Failed to connect to CDP at http://127.0.0.1:9222
Solutions:
1. Start Chrome in headless mode:
   google-chrome --headless --remote-debugging-port=9222 --no-sandbox
2. Check whether Chrome is reachable:
   curl http://127.0.0.1:9222/json/version
3. Use a custom CDP address:
   lab --cdp=http://localhost:9223 tests/
Error: Test timed out after 30 seconds
Solutions:
1. Increase the timeout:
   lab --timeout=60 tests/
2. Optimize test scripts with explicit, bounded waits:
   // Add explicit waits
   WAIT_ELEMENT(doc, ".loading", { displayed: false })
   // Use shorter timeouts for quick checks
   WAIT_ELEMENT(doc, ".button", { timeout: 5000 })
Error: Failed to clone repository
Solutions:
1. Check the repository URL:
   git clone https://github.com/user/repo.git  # Test manually
2. Provide authentication for private repos:
   lab git+https://username:token@github.com/private/repo.git//tests/
3. Or use SSH for private repos:
   # Set up SSH keys, then:
   lab git+ssh://git@github.com/private/repo.git//tests/
Error: Failed to start CDN server on port 8080
Solutions:
1. Lab automatically finds free ports, but you can also specify one:
   lab --cdn=./static@app:8081 tests/
2. Check for port conflicts:
   netstat -tlnp | grep :8080
- Reduce concurrency: `--concurrency=2`
- Implement proper cleanup in tests
- Use external binary runtime for memory-intensive tests
- Enable parallel execution: `--concurrency=4`
- Use local CDN for static assets
- Optimize FQL scripts for better performance
- Profile tests to identify bottlenecks
# Enable detailed logging (if available)
export LOG_LEVEL=debug
lab tests/
# Use simple reporter for cleaner output
lab --reporter=simple tests/
# Test one file at a time
lab specific-test.fql
# Run with retries disabled
lab --attempts=1 problematic-test.fql
# Test FQL syntax with Ferret CLI
ferret -q "RETURN 1" # Should return [1]
We welcome contributions to Lab! Here's how to get started:
- Fork the repository on GitHub
- Create a feature branch: `git checkout -b feature/awesome-feature`
- Make your changes and add tests
- Run the test suite: `make test`
- Commit your changes: `git commit -am 'Add awesome feature'`
- Push to the branch: `git push origin feature/awesome-feature`
- Submit a pull request
- Write tests for new features
- Follow Go conventions and formatting (`make fmt`)
- Pass all linting checks (`make lint`)
- Update documentation for user-facing changes
- Keep commits atomic and write clear commit messages
When reporting bugs, please include:
- Lab version (`lab version`)
- Operating system and version
- Go version (if building from source)
- Complete command that failed
- Full error message and stack trace
- Minimal reproduction case
Before requesting features:
- Check existing issues and discussions
- Describe the use case and problem you're solving
- Consider if it fits Lab's scope and philosophy
- Be prepared to help with implementation
- Discussion: Major features should be discussed in issues first
- Implementation: Write code with tests and documentation
- Review: Submit PR for code review
- Testing: Ensure all CI checks pass
- Merge: Maintainer will merge when ready
Lab is licensed under the Apache License 2.0.
Happy Testing!
For more information about Ferret and FQL, visit the Ferret documentation.
Join our community on Discord for support and discussions.