This repository provides Docker Compose configurations for setting up a Stalwart email server using FoundationDB for metadata storage and MinIO for blob storage.
This setup is heavily inspired by https://gist.github.com/chripede/99b7eaa1101ee05cc64a59b46e4d299f - thanks! Refer to that gist for details on how to configure Stalwart to use a setup like this.
The cluster configuration is deployed across four Virtual Machines (VMs), each situated in a distinct physical location. Inter-node communication is established via Tailscale.
The roles and services are distributed as follows:
- Two front-end nodes: These nodes handle incoming requests, running Nginx, Stalwart, FoundationDB, and MinIO.
- One back-end node: FoundationDB and MinIO.
- One back-end node: FoundationDB only.
- Docker with Docker Compose (or Podman)
- MinIO Client (`mc`) installed and configured
- `git` (for cloning the Stalwart repository)
This guide helps you set up Stalwart with clustered FoundationDB and MinIO.
The standard Stalwart Docker image may not include FoundationDB support by default, or you might need a specific version. This setup requires a Stalwart image built with FoundationDB capabilities, tagged as `stalwart-fdb:latest`.
a. Clone the official Stalwart Mail server repository (or your fork):
```bash
git clone https://github.com/stalwartlabs/stalwart-mail.git stalwart-mail-repo
cd stalwart-mail-repo
```
b. Build the Docker image. The Dockerfile with FoundationDB support is in a subdirectory such as `resources/docker/`. Compare it with the file `stalwart-mail/build/Dockerfile` from this repository, as you will want to add support for MinIO (and possibly other services) as well.
```bash
# Build the image
docker build -t stalwart-fdb:latest .

# Navigate back to the root of the cloned repository or your project directory
cd ../
```
Ensure the image `stalwart-fdb:latest` is successfully built and available locally before proceeding. The `docker-compose.yml` file in this repository refers to this image name.
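For reference, a compose service that consumes the locally built image typically looks like the fragment below. This is only an illustrative sketch; the `docker-compose.yml` shipped in this repository is authoritative, and the volume path mirrors the `FDB_CLUSTER_FILE` setting described later.

```yaml
# Illustrative fragment only -- see the repository's docker-compose.yml for the real definition.
services:
  stalwart-mail:
    image: stalwart-fdb:latest          # the locally built image from the step above
    restart: unless-stopped
    volumes:
      - ./fdb/fdb.cluster:/var/fdb/fdb.cluster:ro   # cluster file shared with the fdb service
```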
Create a `.env` file in the root of this project directory. Below is an example template. Adjust the placeholder values (like `hostname.of.coordinator`, `this.public.ip`, `this.hostname`) to match your specific environment and server hostnames/IPs.
```ini
MINIO_RELEASE=RELEASE.2025-04-22T22-12-26Z
MINIO_ROOT_USER=adminuser
MINIO_ROOT_PASSWORD=verysecret

GRAFANA_ADMIN_USER=adminuser
GRAFANA_ADMIN_PASSWORD=alsoverysecret

FDB_VERSION=7.3.62
FDB_COORDINATOR=hostname.of.coordinator
FDB_NETWORKING_MODE=container
FDB_COORDINATOR_PORT=4500
FDB_PUBLIC_IP=this.public.ip        # The public IP of the host running FDB
FDB_HOST_HOSTNAME=this.hostname     # A unique hostname for this FDB node (e.g., server1)
FDB_CLUSTER_FILE=/var/fdb/fdb.cluster
FDB_ADDITIONAL_VERSIONS=""
```
The `fdb/fdb.cluster` file is crucial for FoundationDB clients (like Stalwart Mail) and servers to locate the cluster coordinators.

The format of this file is `description:id@ip1:port1,ip2:port2,...`:

- `description:id`: A unique identifier for your cluster (e.g., `docker:docker`, as used in the provided `fdb/fdb.cluster` file).
- `ipN:portN`: The IP addresses and ports of your FoundationDB coordinator processes.
In this setup:
- The `fdb/fdb.cluster` file is mounted into both the `stalwart-mail` and `fdb` service containers.
- The provided `fdb/fdb.cluster` file is: `docker:docker@server1:4500,server2:4500,server3:4500,server4:4500`
- You must ensure that `server1`, `server2`, `server3`, and `server4` are resolvable hostnames or IP addresses of your FoundationDB coordinator nodes, and that they are listening on port `4500` (or the port specified in `FDB_COORDINATOR_PORT` in your `.env` file). These hostnames should correspond to the `FDB_HOST_HOSTNAME` values for your FDB nodes if you are running multiple FDB instances.
- The `FDB_COORDINATOR` variable in your `.env` file should align with the information in your `fdb.cluster` file. The `fdb` service entrypoint uses `FDB_CLUSTER_FILE`, which points to this file.
- Important: Review the coordinator list in `fdb/fdb.cluster` and ensure it accurately reflects your coordinator setup. Typically, you'd have an odd number of coordinators (e.g., 3 or 5) for resilience.
The `nginx/nginx.conf` file configures Nginx to act as a reverse proxy for both Stalwart Mail services and the MinIO S3 API. This allows you to expose these services on standard ports and manage SSL/TLS termination centrally.
Key aspects of the configuration:
- Stalwart Mail Services:
  - Nginx listens on standard mail ports: `25` (SMTP), `993` (IMAPS), `465` (SMTPS), `587` (Submission), and `443` (HTTPS for Stalwart web interface/JMAP/etc.).
  - It uses `upstream` blocks (e.g., `backend_smtp`, `backend_imaps`) to define the Stalwart backend servers (e.g., `server1:1025`, `server2:1025`). These should point to your Stalwart instances. The current configuration assumes Stalwart is reachable via `server1` and `server2` on specific internal ports, which match the exposed ports in the `stalwart-mail` service in `docker-compose.yml`.
  - `proxy_protocol on;` is enabled for mail services. This sends client connection information (such as the original IP address) to Stalwart. Ensure your Stalwart instances are configured to accept the proxy protocol.
- MinIO S3 API:
  - Nginx listens on port `81` for S3 traffic.
  - The `server_name s3.yourdomain;` directive should be updated to your desired domain for accessing MinIO.
  - It proxies requests to the `minio_backend` upstream, which includes `server1:9000`, `server2:9000`, and `server3:9000`. These should be the addresses of your MinIO server instances.
  - `client_max_body_size 5G;` allows for large file uploads. Adjust as needed.
- General:
  - The Nginx service in `docker-compose.yml` uses `network_mode: host`. Adjust as needed.
  - Ensure that `server1`, `server2`, `server3` in `nginx.conf` are resolvable to the correct IP addresses of your backend Stalwart and MinIO instances.
  - For production, configure SSL/TLS for the MinIO endpoint (port 81) and ensure mail service ports are secured with SSL/TLS certificates. The `nginx.conf` proxies HTTPS on port 443 to Stalwart.
To use this Nginx configuration:
- Ensure `nginx/nginx.conf` reflects your server hostnames/IPs and desired domain names.
- If using SSL/TLS, place your certificate and key files (e.g., in `./nginx/certs`), uncomment the certs volume in `docker-compose.yml`, and update `nginx.conf`.
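For orientation, here is a minimal sketch of the kind of blocks `nginx/nginx.conf` contains. The upstream names, hostnames, and ports follow the description above but are placeholders; the actual file in this repository is the reference, and SSL/TLS directives and the remaining mail ports are omitted for brevity.

```nginx
# Minimal sketch -- the real nginx/nginx.conf in this repository is authoritative.
stream {
    upstream backend_smtp {
        server server1:1025;
        server server2:1025;
    }
    server {
        listen 25;
        proxy_pass backend_smtp;
        proxy_protocol on;          # pass the original client IP on to Stalwart
    }
    # Similar upstream/server pairs exist for ports 465, 587, and 993.
}

http {
    upstream minio_backend {
        server server1:9000;
        server server2:9000;
        server server3:9000;
    }
    server {
        listen 81;
        server_name s3.yourdomain;
        client_max_body_size 5G;    # allow large object uploads
        location / {
            proxy_pass http://minio_backend;
            proxy_set_header Host $http_host;
        }
    }
}
```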
After the FoundationDB cluster is running, you need to configure its redundancy mode and storage engine. This is typically done once per cluster.
- Connect to one of your FoundationDB nodes using `fdbcli`. If you are using the Docker Compose setup provided, you can do this by running the following command on the Docker host where an `fdb` service container is running:

  ```bash
  docker compose exec fdb fdbcli
  ```

- Once inside the `fdbcli` prompt, configure the database. For a typical setup with SSDs, you would use:

  ```
  configure double ssd
  ```

  This command sets the redundancy mode to `double` (meaning data is replicated twice) and the storage engine to `ssd`. Adjust these settings based on your specific hardware and resilience requirements. Refer to the FoundationDB documentation for more details on available options.

- You can verify the configuration by typing `status` in the `fdbcli`.
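If you prefer to script these steps rather than use the interactive prompt, `fdbcli` also accepts `--exec`. A minimal sketch, assuming the compose service is named `fdb` as above:

```bash
# One-shot configuration and verification via docker compose (non-interactive).
docker compose exec fdb fdbcli --exec "configure double ssd"
docker compose exec fdb fdbcli --exec "status"
```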
Ensure the MinIO service is running before proceeding. This setup assumes MinIO is accessible via hostnames like `server1`, `server2`, `server3` on port `9000`.
a. Create Buckets:
Create the necessary bucket (e.g., `mydata`) on your MinIO instance (source). If you plan to use replication to another S3 target, create the bucket there as well. The target bucket must exist before setting up replication.
```bash
# Replace placeholders with your actual values.
# 'source' refers to the MinIO instance in this Docker Compose setup.
# Use one of your MinIO server hostnames/IPs (e.g., server1, server2, or server3 from your setup).
# The MINIO_ROOT_USER and MINIO_ROOT_PASSWORD are from your .env file.
mc alias set source http://server1_ip_or_hostname:9000 MINIO_ROOT_USER MINIO_ROOT_PASSWORD --api s3v4

# If replicating, 'target' refers to your backup/remote MinIO instance or S3-compatible service.
# mc alias set target http://<remote_s3_ip_or_hostname>:<remote_s3_port> S3_ROOT_USER S3_ROOT_PASSWORD --api s3v4

mc mb source/mydata
# mc mb target/mydata   # Only if replicating to a target you manage with 'mc'
```
b. Enable Versioning:
Enable versioning on the source bucket, and on the target bucket if replicating.
```bash
mc version enable source/mydata
# mc version enable target/mydata   # Only if replicating
```
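You can verify the bucket and its versioning state afterwards; a quick check, assuming the `source` alias and `mydata` bucket from above:

```bash
# List buckets on the source alias and confirm versioning is enabled on mydata.
mc ls source
mc version info source/mydata
```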
This configuration includes a monitoring stack based on Prometheus and Grafana:
- Prometheus: Collects metrics from various exporters and services. Access the UI at `http://<your_host_ip>:9090`.
  - Configuration: `prometheus/etc/prometheus.yml`
  - Default Scrape Targets (as per `prometheus.yml`): Prometheus itself, Node Exporter, cAdvisor, MinIO (e.g., `server1:9000`), Stalwart Mail (`stalwart-mail:8080`), FoundationDB Exporter (e.g., `server1:9188`). Ensure these targets in `prometheus/etc/prometheus.yml` match your actual service hostnames/IPs and ports. Hostnames like `server1` should be resolvable by Prometheus.
- Grafana: Visualizes the metrics collected by Prometheus. Access the UI at `http://<your_host_ip>:3000`.
  - Default credentials (unless changed in `.env`): `admin`/`admin` (or `GRAFANA_ADMIN_USER`/`GRAFANA_ADMIN_PASSWORD` from `.env`)
  - Provisioning: `grafana/provisioning/`
- Node Exporter: Exports host system metrics (CPU, RAM, disk, network) to Prometheus.
- cAdvisor: Exports container metrics (resource usage per container) to Prometheus.
- fdbexporter: Exports FoundationDB metrics to Prometheus.
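As a rough illustration of what the scrape configuration in `prometheus/etc/prometheus.yml` looks like, a sketch is shown below. Job names, target hostnames, and metrics paths here are assumptions (notably the MinIO metrics path); the file shipped in this repository is authoritative.

```yaml
# Illustrative scrape_configs sketch -- prometheus/etc/prometheus.yml is the real source of truth.
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node-exporter
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: minio
    metrics_path: /minio/v2/metrics/cluster   # MinIO's Prometheus cluster metrics endpoint
    static_configs:
      - targets: ['server1:9000']
  - job_name: stalwart-mail
    static_configs:
      - targets: ['stalwart-mail:8080']
  - job_name: fdbexporter
    static_configs:
      - targets: ['server1:9188']
```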
This stack allows you to monitor the health and performance of the host system, Docker containers, FoundationDB, and MinIO. You can import pre-built Grafana dashboards via the Grafana UI (`http://<your_host_ip>:3000`) using their IDs or by uploading their JSON definitions. Recommended dashboards include:
- Node Exporter Full (ID: 1860): Host system metrics.
- Docker and System Monitoring (ID: 193): Container metrics (from cAdvisor).
- MinIO Dashboard (ID: 13502): MinIO server metrics.
- Stalwart Mail Server: A dashboard is available here. Note: Requires enabling the Prometheus metrics endpoint in Stalwart's configuration.
- FoundationDB: A dashboard is available here.
Configure replication from your local MinIO instance (source) to a backup/target MinIO instance or S3-compatible service.
```bash
# Replace placeholders with your actual values.
# --remote-bucket: example ARN for the target bucket; adjust for your S3 provider.
# --storage-class: optional, specifies the storage class on the target.
# --endpoint: endpoint of the target S3 service.
mc replicate add source/mydata \
  --remote-bucket "arn:aws:s3:::mydata" \
  --storage-class STANDARD \
  --endpoint "http://SOME_OTHER_S3:9000" \
  --access-key "TARGET_S3_ACCESS_KEY" \
  --secret-key "TARGET_S3_SECRET_KEY" \
  --replicate "delete,delete-marker,existing-objects" \
  --priority 1

# Verify replication status
mc replicate status source/mydata --data

# If old items are not synced and you want to mirror them (use with caution):
# mc mirror --overwrite source/mydata target/mydata
```
Note: The `mc replicate add` command structure can vary based on the S3 provider. The example above uses parameters common for MinIO-to-MinIO or MinIO-to-S3 replication. Replace the placeholders with your actual values for the target S3 instance. The `--remote-bucket` value often requires an ARN or a provider-specific format. Consult MinIO and your target S3 provider's documentation.
Navigate to the root of this project directory and start all services using Docker (or Podman) Compose:
```bash
docker-compose up -d
```
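Once the stack is up, you can sanity-check it with standard Compose commands, for example:

```bash
# Confirm all services are running, then follow the logs of an individual service.
docker compose ps
docker compose logs -f stalwart-mail
```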
This setup includes scripts to back up your FoundationDB data, MinIO bucket contents, and MinIO system configurations. These scripts are located in the `backup/` directory. It's crucial to schedule these scripts to run regularly, for example using `cron`.
Important Considerations Before Scheduling:
- Script Paths: Ensure the paths to the scripts in your crontab entries are correct. The examples below assume your project root is `/path/to/your/project`. Adjust this accordingly.
- Permissions: The scripts must be executable (`chmod +x backup/*.sh`).
- Environment: Cron jobs run with a minimal environment. If scripts rely on environment variables not set in the script itself (e.g., for `docker compose`), you might need to source a profile file or set them directly in the crontab. The provided scripts are designed to be relatively self-contained or use `docker compose exec`, which handles the container environment.
- `mc` Alias Configuration:
  - The `minio_backup_local.sh` and `minio-system_backup_local.sh` scripts rely on an `mc` alias (defaulting to `source`). This alias must be configured on the machine where the cron job runs, pointing to your MinIO server.
  - For `minio-system_backup_local.sh`, the `mc` alias must be configured with MinIO admin credentials (root user/password or an access key with admin privileges).
  - Example `mc` alias setup:

    ```bash
    # For regular bucket access (used by minio_backup_local.sh)
    mc alias set source http://your-minio-server-ip:9000 YOUR_ACCESS_KEY YOUR_SECRET_KEY --api s3v4

    # For admin access (required by minio-system_backup_local.sh - use root credentials)
    mc alias set source http://your-minio-server-ip:9000 MINIO_ROOT_USER MINIO_ROOT_PASSWORD --api s3v4
    ```

    Ensure the `source` alias used by the scripts matches the one you've configured with the appropriate permissions.
- Log Files: The scripts generate log files. Monitor these logs for successful execution or errors. The default log locations are specified within each script.
- Backup Storage: Ensure the `LOCAL_BACKUP_DIR` (for `minio_backup_local.sh`), `BACKUP_DESTINATION_URL` (implicitly for `fdb_backup_local.sh` via its configuration), and `BACKUP_BASE_DIR` (for `minio-system_backup_local.sh`) have sufficient free space.
This script initiates a backup of your FoundationDB cluster. It starts the `backup_agent` if it is not already running and then triggers a backup.
- Configuration: Review and adjust variables at the top of `backup/fdb_backup_local.sh` (e.g., `FDB_SERVICE_NAME`, `CLUSTER_FILE_PATH`, `BACKUP_DESTINATION_URL`). The `BACKUP_DESTINATION_URL` is critical, as it tells FDB where to store the backup files (e.g., `file:///backup`, which corresponds to the `./fdb/backup` volume mount in `docker-compose.yml`).
- Execution:

  ```bash
  cd /path/to/your/project
  ./backup/fdb_backup_local.sh
  ```

- Crontab Example (daily at 2 AM):

  ```
  0 2 * * * /path/to/your/project/backup/fdb_backup_local.sh >> /path/to/your/project/backup/fdb_backup_cron.log 2>&1
  ```
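For context, the core of such a script typically boils down to starting a `backup_agent` and then invoking `fdbbackup`. The sketch below is illustrative only and deliberately simplified (it does not check whether an agent is already running); the actual `backup/fdb_backup_local.sh` is authoritative.

```bash
#!/usr/bin/env bash
# Simplified sketch of an FDB backup run -- see backup/fdb_backup_local.sh for the real logic.
set -euo pipefail

FDB_SERVICE_NAME="fdb"
CLUSTER_FILE_PATH="/var/fdb/fdb.cluster"
BACKUP_DESTINATION_URL="file:///backup"     # maps to ./fdb/backup via the volume mount

# Start a backup agent inside the fdb container (detached).
docker compose exec -d "$FDB_SERVICE_NAME" backup_agent -C "$CLUSTER_FILE_PATH"

# Kick off a backup to the destination URL.
docker compose exec "$FDB_SERVICE_NAME" fdbbackup start \
  -C "$CLUSTER_FILE_PATH" -d "$BACKUP_DESTINATION_URL"
```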
This script uses `mc mirror` to back up a specified MinIO bucket to a local directory.
- Configuration:
  - Edit `backup/minio_backup_local.sh` and set:
    - `MINIO_ALIAS`: The `mc` alias for your source MinIO server (default: `source`).
    - `BUCKET_NAME`: The name of the bucket to back up (default: `stalwart`).
    - `LOCAL_BACKUP_DIR`: The absolute path to your local backup destination.
    - `MC_BIN`: Path to your `mc` binary if it is not on the standard PATH for the cron user.
  - Ensure the `mc` alias is configured correctly on the host running the script.
- Execution:

  ```bash
  cd /path/to/your/project   # Not strictly necessary if the script uses absolute paths, but good practice
  ./backup/minio_backup_local.sh
  ```

- Crontab Example (daily at 3 AM):

  ```
  0 3 * * * /path/to/your/project/backup/minio_backup_local.sh >> /path/to/your/project/backup/minio_bucket_backup_cron.log 2>&1
  ```
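The essence of the script is a single `mc mirror` call; a minimal sketch under the defaults listed above (the backup directory is a hypothetical example):

```bash
#!/usr/bin/env bash
# Simplified sketch -- backup/minio_backup_local.sh is the authoritative implementation.
set -euo pipefail

MINIO_ALIAS="source"
BUCKET_NAME="stalwart"
LOCAL_BACKUP_DIR="/srv/backups/minio/stalwart"   # hypothetical path, adjust to your environment
MC_BIN="mc"

mkdir -p "$LOCAL_BACKUP_DIR"
# Mirror the bucket to the local directory, overwriting changed objects.
"$MC_BIN" mirror --overwrite "$MINIO_ALIAS/$BUCKET_NAME" "$LOCAL_BACKUP_DIR"
```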
This script exports MinIO's IAM configuration (users, groups, policies) and bucket policies. It requires the `mc` alias to be configured with admin credentials.
- Configuration:
  - Edit `backup/minio-system_backup_local.sh` and set:
    - `MINIO_ALIAS`: The `mc` alias for your source MinIO server (default: `source`). Must have admin privileges.
    - `BACKUP_BASE_DIR`: The absolute path where backup archives will be stored.
  - Ensure `jq` is installed on the system running the script.
- Execution:

  ```bash
  cd /path/to/your/project   # Not strictly necessary
  ./backup/minio-system_backup_local.sh
  ```

- Crontab Example (weekly, Sunday at 4 AM):

  Note: IAM and system configurations typically change less frequently than bucket data, so a weekly backup might be sufficient, but adjust to your needs.

  ```
  0 4 * * 0 /path/to/your/project/backup/minio-system_backup_local.sh >> /path/to/your/project/backup/minio_system_backup_cron.log 2>&1
  ```
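As a rough idea of the kind of exports involved, the sketch below dumps IAM objects with `mc admin` and archives them. The exact commands and output layout in `backup/minio-system_backup_local.sh` may differ, and the paths here are hypothetical.

```bash
#!/usr/bin/env bash
# Rough sketch only -- backup/minio-system_backup_local.sh is the authoritative implementation.
set -euo pipefail

MINIO_ALIAS="source"                          # must have admin privileges
BACKUP_BASE_DIR="/srv/backups/minio-system"   # hypothetical path
STAMP="$(date +%Y%m%d)"
OUT_DIR="$BACKUP_BASE_DIR/$STAMP"
mkdir -p "$OUT_DIR"

# Export IAM-related configuration as JSON.
mc --json admin user list "$MINIO_ALIAS"   > "$OUT_DIR/users.json"
mc --json admin group list "$MINIO_ALIAS"  > "$OUT_DIR/groups.json"
mc --json admin policy list "$MINIO_ALIAS" > "$OUT_DIR/policies.json"

# Archive the export.
tar -czf "$BACKUP_BASE_DIR/minio-system-$STAMP.tar.gz" -C "$BACKUP_BASE_DIR" "$STAMP"
```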
- Instructions for SSL/TLS certificate setup for MinIO.