**Because magic shouldn't be complicated.**
Spark is an automation engine that integrates seamlessly with multiple LangFlow instances. Deploy AI-driven flows, schedule one-time or recurring tasks, and monitor everything with minimal fuss and no coding required.
Spark works alongside the rest of the AutoMagik ecosystem:
- AutoMagik Agents: Develop production-level AI agents
- AutoMagik UI: Create agents using natural language with our dedicated UI
Prerequisites:
- A Linux-based system (Ubuntu/Debian recommended)
- Docker and Docker Compose (installed automatically on Ubuntu/Debian if not present)
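If you want to check whether Docker is already present before running the scripts (the setup handles installation on Ubuntu/Debian either way):

```bash
# Verify Docker and the Compose plugin are installed and on PATH
docker --version && docker compose version
```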
Spark provides two setup options. For a production-ready local environment:

```bash
./scripts/setup_local.sh
```

For development with PostgreSQL and Redis running in Docker containers:

```bash
./scripts/setup_dev.sh
```
Both setup scripts will:
- Create necessary environment files
- Install Docker if needed (on Ubuntu/Debian)
- Set up all required services
- Install the CLI tool (optional)
- Guide you through the entire process
You'll have access to:
- Spark API: Running at http://localhost:8883
- PostgreSQL Database: Available at `localhost:15432` (see the connection sketch after this list)
- Worker Service: Running and ready to process tasks
- CLI Tool: Installed (if chosen during setup)
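For a quick look at the database itself, here is a minimal sketch assuming the default `postgres` user; the real credentials are generated during setup, so check your environment file:

```bash
# Connect to Spark's PostgreSQL on the documented port and list databases.
# The username is an assumption; use the credentials from your generated .env file.
psql -h localhost -p 15432 -U postgres -l
```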
The setup automatically verifies all services, but you can also check manually:
```bash
# Access API documentation (use xdg-open on Linux)
open http://localhost:8883/api/v1/docs   # Interactive Swagger UI
open http://localhost:8883/api/v1/redoc  # ReDoc documentation

# List flows (requires the CLI installation)
source .venv/bin/activate
automagik-spark flow list
```
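On a headless server where `open` isn't available, a plain HTTP probe against the documented docs URL works just as well:

```bash
# Probe the Swagger UI endpoint; -f makes curl exit non-zero on HTTP errors
curl -sf -o /dev/null http://localhost:8883/api/v1/docs && echo "Spark API is up"
```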
- API Server: Handles all HTTP requests and core logic
- Worker: Processes tasks and schedules
- Database: PostgreSQL with all required tables automatically created
- LangFlow (optional): Visual flow editor for creating AI workflows
- CLI Tool (optional): Command-line interface for managing flows and tasks
```mermaid
flowchart LR
    subgraph Services
        DB[PostgreSQL]
        LF1[LangFlow Instance 1]
        LF2[LangFlow Instance 2]
    end
    subgraph Spark
        CLI[CLI]
        API[API Server]
        CW[Celery Worker]
        W[Worker]
    end
    API -- uses --> DB
    API -- triggers --> CW
    W -- processes --> API
    API -- integrates with --> LF1
    API -- integrates with --> LF2
    CLI -- controls --> API
    API -- has UI --> UI[Automagik UI]
```
- API: Core service handling requests and business logic
- Worker: Processes tasks and schedules
- CLI: Command-line tool for managing flows and tasks
- PostgreSQL: Stores flows, tasks, schedules, and other data
- LangFlow: Optional service for creating and editing flows
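If you used the Docker-based setup, each of these components runs as a container; the service names depend on the project's compose file, so treat the output as illustrative:

```bash
# List the running Spark services and their status
docker compose ps
```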
For complete API documentation, visit:
- Swagger UI: http://localhost:8883/api/v1/docs
- ReDoc: http://localhost:8883/api/v1/redoc
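As a sketch of driving the API from a script rather than the browser: the route and auth header below are assumptions for illustration, so confirm the real paths in the Swagger UI first.

```bash
# Hypothetical example: list flows over the REST API.
# Both the /api/v1/flows route and the X-API-Key header are assumptions;
# check the Swagger UI at /api/v1/docs for the actual contract.
curl -s -H "X-API-Key: $AUTOMAGIK_SPARK_API_KEY" \
  http://localhost:8883/api/v1/flows | python3 -m json.tool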
- If you installed LangFlow, visit http://localhost:17860 to create your first flow
- Use the API at http://localhost:8883/api/v1/docs to manage your flows and tasks
- Try out the CLI commands with `automagik-spark --help`
- Monitor task execution through logs and API endpoints
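For the log side of monitoring, a minimal sketch assuming the dev setup runs services under Docker Compose (the service name is an assumption; check your compose file):

```bash
# Follow the worker's logs; substitute the actual service name from `docker compose ps`
docker compose logs -f worker
```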
Spark collects anonymous usage analytics to help improve the project. This data helps us understand which features are most useful and prioritize development efforts.
What we collect:
- Command usage and performance metrics
- API endpoint usage patterns
- Workflow execution statistics
- System information (OS, Python version)
- Error rates and types
What we never collect:
- Personal information or credentials
- Actual workflow data or content
- File paths or environment variables
- Database connection strings or API keys
Environment Variable:

```bash
export AUTOMAGIK_SPARK_DISABLE_TELEMETRY=true
```
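To make the opt-out persist across shell sessions, append it to your shell profile (bash shown; adapt for your shell):

```bash
# Persist the telemetry opt-out for future shells
echo 'export AUTOMAGIK_SPARK_DISABLE_TELEMETRY=true' >> ~/.bashrc
```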
CLI Commands:

```bash
# Disable permanently
automagik-spark telemetry disable

# Check status
automagik-spark telemetry status

# See what data is collected
automagik-spark telemetry info

# Use the --no-telemetry flag for a single session
automagik-spark --no-telemetry <command>
```
Opt-out File:

```bash
touch ~/.automagik-no-telemetry
```
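To confirm the marker file took effect, combine a file check with the documented status command:

```bash
# Verify the opt-out file exists, then confirm Spark sees it
test -f ~/.automagik-no-telemetry && automagik-spark telemetry status
```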
Telemetry is automatically disabled in CI/testing environments.
Spark's future development focuses on:
- TBA
Spark: Bringing AI Automation to Life