The solution provided works this way:
- Create a Docker-in-Docker image
- Run the image and download docker images from inside
- Create a new image from the running container
- Push the image to a registry
Build the base image:
docker build . -t dind-ddev-base
Run the base image with a bash entrypoint:
docker run \
--privileged -it --rm \
--name dind-ddev-base \
--volume $(pwd)/imagelists:/imagelists \
--entrypoint=bash \
dind-ddev-base
Run a script that prepares the filesystem and downloads the Docker images:
ddev-download-images.sh
By default, the script downloads only the minimal images required by DDEV core.
You can pass additional image lists as arguments. Some lists are provided in the imagelists/ folder:
ddev-download-images.sh /imagelists/aljibe.list /imagelists/metadrop.list
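Each list is a plain-text file with one image reference per line (the same format produced by the example command further below). The entries here are purely hypothetical and not the actual contents of the provided lists:
# hypothetical contents of an image list file, one image reference per line
ddev/ddev-webserver:latest
mariadb:10.11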
Add your custom lists to imagelists/. As an example, you can obtain the list of all images used by a project with:
cd <path-to-ddev-project>
ddev debug compose-config | grep -i image: | grep -v built | awk '{print $2}' | sort | uniq > myproject.list
mv myproject.list <path-to-dind-ddev-project>/imagelists
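Then, inside the running dind-ddev-base container (where the host imagelists/ folder is mounted at /imagelists), include your list when downloading:
ddev-download-images.sh /imagelists/myproject.list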
Commit the running container to create the dind-ddev image with all the downloaded images inside.
Note we're adding a label to link the generated image to its original repo. If you're generating an image on your own, adjust it to suit your needs.
docker commit \
--change='ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]' \
--change='LABEL org.opencontainers.image.source=https://github.com/metadrop/dind-ddev' \
dind-ddev-base dind-ddev
Now you can stop the container from step 2.
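Because the container was started with --rm and an interactive bash entrypoint, exiting that shell stops and removes it; alternatively, stop it from another terminal:
docker stop dind-ddev-base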
Run the image:
docker run --privileged --rm --name dind-ddev dind-ddev
Check that the images are there:
docker exec dind-ddev docker image ls
Push the image to the GitHub Container Registry. Log in with your GitHub username and a personal access token:
export GH_USER=
export CR_PAT=
echo $CR_PAT | docker login ghcr.io -u $GH_USER --password-stdin
Tag and push the image:
LOCAL_IMAGE=dind-ddev
docker image tag $LOCAL_IMAGE ghcr.io/metadrop/dind-ddev/$LOCAL_IMAGE:latest
docker push ghcr.io/metadrop/dind-ddev/$LOCAL_IMAGE:latest
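Once pushed, the image can be pulled and run anywhere, for example:
docker pull ghcr.io/metadrop/dind-ddev/dind-ddev:latest
docker run --privileged --rm --name dind-ddev ghcr.io/metadrop/dind-ddev/dind-ddev:latest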