feat: Dockerfile for faster CI/CD pipelines #1380
Conversation
Thanks a lot! I love the idea of being able to use `gix clone` in pipelines.

I left some comments, and one more request: there should be a new job that validates the dockerfiles, probably by running them. That's probably quite costly but can hopefully be done within 15 minutes or so.

Finally, I wonder if two dockerfiles are needed, or if one could just endorse one base image - after all, both base images build a binary, and the result should be the same. Or does one binary not run in most places? Or asked differently, is `alpine` not producing the more compatible binary, so that `bookworm` is superseded?
Thanks again for your help.
README.md
Outdated
@@ -62,6 +62,31 @@ Follow linked crate name for detailed status. Please note that all crates follow

[semver]: https://semver.org

### Pipeline Integration

Some CI/CD pipelines leverage repository cloning. Below is a copy-paste-able example to build docker images for such workflows.
Could you provide more context? Which pipelines are you talking about? What's the general idea on how to get from images to using them in a pipeline?
Think blank slate - I am a good test for that, as I wouldn't be able to use the docker images from this description.
README.md
Outdated
#### Pipeline Integration (recommended)

Build an image without a target and then copy the binaries into your local image
I don't get that - could you describe that without assuming too much docker knowledge?
Same copy-pasta command as above, without the `--target=pipeline`.
I am trying to say that I don't understand the paragraph, and that it should be adjusted to fit people who don't do that every day. Ideally, no prior knowledge about how this usually works is assumed.
README.md
Outdated
Build an image without a target and then copy the binaries into your local image

```dockerfile
COPY --from=gitoxide:latest /bin/gix /usr/local/bin/
```
This works if the `gitoxide` image was already built, right? That wasn't clear to me and I thought maybe it pulls it from the network. Could that be clarified in the paragraph above as well?
@Byron Do you have a docker hub account? If so, that (`gitoxide:latest`, or perhaps `Byron/gitoxide:latest` more likely; we could use `paulbelt/gitoxide:latest`, but you are more... committed to this project than I am [pun intended]) should be the "official" image... and `docker build` would automatically pull the image and `COPY` the binary from it.
Let's leave dockerhub out of the picture for now, as I have no plans to maintain an official image. This would mean the `gitoxide:latest` name assumes a local image, which probably means the description on how to build that should come first.
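For concreteness, here is a minimal sketch of the consumer side under that local-image assumption; the `gitoxide:latest` tag, the `etc/docker/` path, and the copy destination follow the names used in this review and are not published artifacts.

```dockerfile
# Minimal sketch, not the README's final wording. It assumes the gitoxide
# image was first built locally, for example with:
#   docker build -t gitoxide:latest -f etc/docker/Dockerfile.alpine .
# (no --target, so the stage containing /bin/gix is produced).
# If no image with that tag exists locally, `docker build` would try to
# pull gitoxide:latest from a registry instead.
FROM alpine:latest
# Copy only the gix binary out of the locally built image.
COPY --from=gitoxide:latest /bin/gix /usr/local/bin/
```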
README.md
Outdated
#### Pipeline Integration (base image)
I think there should be text here that explains what it's doing and why - it probably all flows together with the paragraphs above, and it feels like these could be re-ordered so the user first builds an image, then uses the image in some way, and of course also gets some idea on how to use it in a pipeline. My preference would be to show how to use such an image with GitHub Actions if at all possible.
docker/Dockerfile.alpine
Outdated
@@ -0,0 +1,69 @@
ARG GITOXIDE_VERSION=0.36.0 |
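The file is not quoted beyond this line, so the following is only a rough sketch of what an Alpine-based multi-stage Dockerfile using that ARG could look like; it is an assumption for illustration, not the content of `docker/Dockerfile.alpine`, and the `cargo install gitoxide --features max-pure` route is just one way to keep system dependencies minimal.

```dockerfile
# Illustrative sketch only, not the file under review.
ARG GITOXIDE_VERSION=0.36.0

# Build stage: compile gix against musl so the resulting binary is
# statically linked and therefore portable across distributions.
FROM rust:alpine AS build
ARG GITOXIDE_VERSION
RUN apk add --no-cache musl-dev
# Install a pinned release from crates.io; max-pure keeps the build free
# of C dependencies such as cmake or openssl.
RUN cargo install gitoxide --version "${GITOXIDE_VERSION}" \
      --no-default-features --features max-pure --root /usr/local

# Runtime stage: a small image carrying only the binary.
FROM alpine:latest
COPY --from=build /usr/local/bin/gix /bin/gix
ENTRYPOINT ["/bin/gix"]
```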
Let's move these files into `etc/docker/`.
Alpine is based off the `musl` libc. I'll write up an inception build instruction set.
Wouldn't this mean that the alpine version could run anywhere where there is a recent-enough kernel? That seems to mean it supersedes the debian version, which only runs on libc-compatible distros and therefore seems less compatible.
Correct. Trade-offs. I'll call that out explicitly. Most official docker projects build multiple architectures and base distros.
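To illustrate the trade-off being discussed, a hedged sketch of how one could check that the musl-built binary also runs on a glibc-based image; the `gitoxide:latest` tag again refers to a locally built image and is an assumption.

```dockerfile
# Sketch of the compatibility claim: the binary built in the Alpine image
# should also run on a glibc-based image because it is statically linked.
# Assumes a locally built gitoxide:latest image as discussed above.
FROM debian:bookworm-slim
COPY --from=gitoxide:latest /bin/gix /usr/local/bin/
# Prints a version instead of failing with a missing libc/loader error
# if the binary is indeed self-contained.
RUN gix --version
```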
Alright, let's go with just alpine then - MUSL seems to be the most compatible while keeping things simple. Would you be able to set up a CI step to build the docker image, to ensure it keeps working? I probably won't run it locally very much beyond initial testing. Thanks again.
The CI step in GH actions is beyond my muscle-memory. I'm giving it a go, but no guarantees on results.
... for linting purposes
Thanks a lot! With the improved documentation I was able to understand what's intended, and with that it could be pushed over the finishing line. Please note that I removed the CI integration as it was too slow, taking 1h to build with a budget of only 30 minutes - however, I noted that the dockerfile might stop working eventually, so users are encouraged to fix it by PR.
Some pipelines require repository cloning. Provide an alternative to `git` within such workflows.
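As a closing illustration of that goal, a minimal sketch of a pipeline image that clones with `gix` instead of `git`; the repository URL and target directory are placeholders, and the locally built `gitoxide:latest` image from above is assumed.

```dockerfile
# Sketch of a pipeline image that clones with gix instead of git.
# Assumes the locally built gitoxide:latest image from above; HTTPS clones
# also need CA certificates to be present in the image.
FROM gitoxide:latest
# Clone the repository the pipeline operates on (URL is a placeholder).
RUN gix clone https://github.com/Byron/gitoxide /workspace/repo
WORKDIR /workspace/repo
```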