A Julia-based worker template for the open-AIMS/reefguide distributed job processing system. This template provides a foundation for implementing custom job handlers that can process tasks from the ReefGuide job queue.
ReefGuideWorkerTemplate.jl is a template worker node in the reefguide system. ReefGuide deploys ReefGuideWorker.jl and ADRIAReefGuideWorker.jl as Docker containers on ECS to consume jobs generated by users in the frontend. Each worker uses its respective Julia library and is a fork of this template repository, as shown below.
```mermaid
graph TD
    RG[ReefGuide] -->|deploys| RGW[ReefGuideWorker.jl<br/>Docker Container]
    RG -->|deploys| ARGW[ADRIAReefGuideWorker.jl<br/>Docker Container]
    RGW -->|uses| RGLib[ReefGuide.jl<br/>Julia Library]
    ARGW -->|uses| ADRIALib[ADRIA.jl<br/>Julia Library]
    RGW -->|forks| RGWT[ReefGuideWorkerTemplate.jl<br/>Template]
    ARGW -->|forks| RGWT

    style RG fill:#e1f5fe
    style RGW fill:#f3e5f5
    style ARGW fill:#f3e5f5
    style RGLib fill:#e8f5e8
    style ADRIALib fill:#e8f5e8
    style RGWT fill:#fff3e0
```
Related repos include:
- ReefGuide - the main ReefGuide repository
- ReefGuideWorker.jl - Julia Job Worker to run ReefGuide algorithms
- ReefGuide.jl - ReefGuide Julia library code
- ADRIA.jl - The ADRIA model Julia library code, used by ADRIAReefGuideWorker.jl
- ADRIAReefGuideWorker.jl - Julia Job Worker to run ADRIA algorithms for ReefGuide
This worker template integrates with the ReefGuide ecosystem, connecting to the ReefGuide web API to receive and process distributed computing tasks. The `API_ENDPOINT` environment variable should point to your ReefGuide web API instance.
For local development: First spin up the local ReefGuide stack by following the setup instructions in the ReefGuide README. This will provide the necessary API endpoint and supporting services (database, S3 storage, etc.) that the worker connects to.
- Install Julia 1.11.x using juliaup:

```bash
curl -fsSL https://install.julialang.org | sh
juliaup add 1.11
juliaup default 1.11
```
- Copy the environment template:

```bash
cp .env.local .env
```
- Navigate to the `sandbox/` directory
- Initialize the project:

```bash
./init.sh
```

- Start the development environment:

```bash
./start.sh
```
- Copy the environment template:

```bash
cp .env.local .env
```

- Build the Docker image:

```bash
docker build . -t worker
```

- Run the container:

```bash
docker run --env-file .env worker
```
The worker operates on a polling-based architecture:
- Polling: Continuously polls the ReefGuide API for available jobs
- Job Assignment: Claims jobs that match its configured job types
- Processing: Executes the appropriate handler for each job type
- Completion: Reports results back to the API
- Idle Timeout: Automatically shuts down after a configurable period of inactivity
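The lifecycle above can be sketched as a simple loop. This is illustrative only: `poll_for_job`, `claim!`, `process`, and `report!` are hypothetical stand-ins for the template's internal API, not actual functions from the repository.

```julia
# Illustrative polling loop; all called functions are hypothetical
# stand-ins for the worker template's internal API.
function run_worker(; idle_timeout_s = 300, poll_interval_s = 5)
    last_job_time = time()
    while true
        job = poll_for_job()          # ask the API for a matching job
        if job === nothing
            # shut down after a configurable period of inactivity
            time() - last_job_time > idle_timeout_s && return :idle_shutdown
            sleep(poll_interval_s)
            continue
        end
        claim!(job)                   # claim the job assignment
        result = process(job)         # dispatch to the registered handler
        report!(job, result)          # post results back to the API
        last_job_time = time()
    end
end
```

The idle timeout keeps ECS costs down: a worker that finds no matching jobs for the configured period exits rather than polling indefinitely.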
Job processing is handled through a type-safe registry system:
- Job Types: Defined as enums in `handlers.jl` (e.g., `TEST`)
- Input/Output Types: Strongly typed payloads for each job type
- Handlers: Implement the `AbstractJobHandler` interface to process specific job types
The template includes a complete example:
```julia
# Input payload structure
struct TestInput <: AbstractJobInput
    id::Int64
end

# Output payload structure
struct TestOutput <: AbstractJobOutput
end

# Handler implementation
struct TestHandler <: AbstractJobHandler end

function handle_job(::TestHandler, input::TestInput, context::JobContext)::TestOutput
    @debug "Processing test job with id: $(input.id)"
    sleep(10)  # Simulate work
    return TestOutput()
end
```
Configure the worker through environment variables (see `.env.local` for examples):

- `API_ENDPOINT`: ReefGuide API base URL
- `JOB_TYPES`: Comma-separated list of job types to handle
- `WORKER_USERNAME/PASSWORD`: Authentication credentials
- `AWS_REGION`: AWS region for S3 storage
- `S3_ENDPOINT`: Optional S3-compatible endpoint (for local development)
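A local-development `.env` might look like the fragment below. All values are illustrative, and the exact variable names for credentials (`WORKER_USERNAME`, `WORKER_PASSWORD`) are assumptions — check `.env.local` for the authoritative names and defaults.

```shell
# Illustrative local-development values; consult .env.local for real names/defaults
API_ENDPOINT=http://localhost:5000
JOB_TYPES=TEST
WORKER_USERNAME=worker@example.com
WORKER_PASSWORD=change-me
AWS_REGION=ap-southeast-2
S3_ENDPOINT=http://localhost:9000
```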
Workers can read from and write to S3-compatible storage:
- Each job assignment includes a unique
storage_uri
for outputs - Use the provided
JobContext
to access storage configuration - Support for both AWS S3 and local MinIO development environments
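As an illustration of working with a `storage_uri`, a small helper can split an `s3://` URI into bucket and key prefix before handing it to an S3 client. This helper is not part of the template; it is a hypothetical sketch.

```julia
# Split an s3:// URI into its bucket name and key prefix.
# Hypothetical helper for illustration; not part of the template API.
function parse_storage_uri(uri::AbstractString)
    m = match(r"^s3://([^/]+)/?(.*)$", uri)
    m === nothing && error("not an s3:// URI: $uri")
    return (bucket = m.captures[1], prefix = m.captures[2])
end
```

For example, `parse_storage_uri("s3://reefguide-dev/jobs/123/")` yields the bucket `"reefguide-dev"` and the prefix `"jobs/123/"`, which map directly onto the bucket/key arguments most S3 clients expect.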
To add a new job type:

- Define the job type enum in `handlers.jl`
- Create input/output payload structs
- Implement a handler struct and `handle_job` method
- Register the handler in the `__init__()` function
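As a sketch, a hypothetical `SUITABILITY` job type could follow the `TEST` example above. All names here (`SUITABILITY`, `SuitabilityInput`, and especially the `register_handler!` call) are illustrative assumptions — check `handlers.jl` for the actual enum and registration function.

```julia
# Hypothetical new job type, mirroring the TEST example.
# All names below are illustrative, not part of the template.

# 1. Add SUITABILITY to the job type enum in handlers.jl

# 2. Input/output payload structs
struct SuitabilityInput <: AbstractJobInput
    region::String
    threshold::Float64
end

struct SuitabilityOutput <: AbstractJobOutput
    n_sites::Int64
end

# 3. Handler struct and handle_job method
struct SuitabilityHandler <: AbstractJobHandler end

function handle_job(::SuitabilityHandler, input::SuitabilityInput,
                    context::JobContext)::SuitabilityOutput
    @debug "Assessing suitability for $(input.region)"
    # ... run the actual assessment here ...
    return SuitabilityOutput(0)
end

# 4. Register in __init__() — the call name is an assumption:
# register_handler!(SUITABILITY, SuitabilityHandler(), SuitabilityInput, SuitabilityOutput)
```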
- Development: Uses local MinIO for S3 storage and local API endpoint
- Production: Runs in AWS ECS Fargate with proper AWS S3 integration
- Docker: Includes multi-stage Dockerfile for containerized deployment
The worker is designed to be single-threaded per job but can utilize Julia's threading for computationally intensive tasks within job handlers.
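For example, a compute-heavy handler can parallelize an inner loop with Julia's built-in `Threads.@threads` (the worker process must be started with `--threads` for this to use multiple cores; the function below is an illustrative stand-in, not template code):

```julia
using Base.Threads

# Illustrative: parallelize a per-site computation inside a job handler.
function score_sites(depths::Vector{Float64})
    scores = Vector{Float64}(undef, length(depths))
    @threads for i in eachindex(depths)
        # stand-in for an expensive per-site computation
        scores[i] = exp(-abs(depths[i] - 10.0))
    end
    return scores
end
```

Because each job is processed single-threaded at the worker level, threading inside a handler is safe as long as the loop body only writes to its own slot of the output array, as above.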