Docker image that runs a single cron job to sync files with S3 as defined via environment variables.
This image can be used to either sync files from a container to S3 or from S3 to a container.
It is intended that a single instance of this image will run a single cron job and sync process. If you have multiple directories to sync, then you should run multiple containers each with the appropriate configuration for that directory.
This image is built automatically and published to the GitHub Container Registry as `ghcr.io/ice-bergtech/docker-sync-with-s3`.

```shell
docker pull ghcr.io/ice-bergtech/docker-sync-with-s3:latest
```
- Clone this repo
- Copy `local.env.dist` to `local.env` and update values as appropriate
- Run `docker-compose up -d`
- `ACCESS_KEY` - S3 Access Key
- `SECRET_KEY` - S3 Secret Access Key
- `CRON_SCHEDULE` - Schedule for the cron job; for example, every 15 minutes would be `*/15 * * * *`
- `SOURCE_PATH` - Source files to be synced, example: `/var/www/uploads`
- `DESTINATION_PATH` - Destination to sync files to, example: `s3://my-bucket/site-uploads`
- `BUCKET_LOCATION` - AWS region for the bucket, example: `us-east-1`
- `LOGENTRIES_KEY` - (optional) If provided, the image will send command output to syslog with priority `user.info`
- `S3SYNC_ARGS` - (optional) If provided, the arguments will be included in the `s3cmd sync` command. For example, setting `S3SYNC_ARGS=--delete` will cause files in the destination to be deleted if they no longer exist in the source
- `ENDPOINT_URL` - Endpoint URL used in the boto3 client
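Putting these together, a `local.env` for a container-to-S3 sync might look like the following sketch. All values here are placeholders (the keys, bucket name, and paths are invented for illustration):

```shell
# local.env — example values only; replace with your own credentials and paths
ACCESS_KEY=AKIAEXAMPLEKEY
SECRET_KEY=exampleSecretKey
CRON_SCHEDULE=*/15 * * * *
SOURCE_PATH=/var/www/uploads
DESTINATION_PATH=s3://my-bucket/site-uploads
BUCKET_LOCATION=us-east-1
```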
You will need to define volumes in your Docker configuration to share a filesystem between your application containers and this sync container.
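As a sketch of that wiring, assuming a hypothetical application service named `app` that writes uploads to `/var/www/uploads`, a `docker-compose.yml` could share a named volume with the sync container like so (service names and paths are assumptions, not part of this image):

```yaml
services:
  app:
    image: my-app            # hypothetical application image
    volumes:
      - uploads:/var/www/uploads
  s3-sync:
    image: ghcr.io/ice-bergtech/docker-sync-with-s3:latest
    env_file: local.env      # environment variables described above
    volumes:
      - uploads:/var/www/uploads   # mounted at the same path as SOURCE_PATH
volumes:
  uploads:
```

Both containers mount the same `uploads` volume, so files written by `app` are visible to the sync container at its configured `SOURCE_PATH`.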