Updated 2025-06-30.
Self-hosted runners for GitHub Actions. Run all your workflows on your own infrastructure.
Configured to run on an automatically scaled set of runners (via a Kubernetes controller) and built to use Docker in Docker (DinD). The documentation includes the additional steps for using your own custom runner image as the base image for the runners, which is most likely required, as the default image for self-hosted ARC systems does not include everything within the `ubuntu-latest` image available for GitHub-hosted runners.
This documentation has been composed from my own notes: after many hours of trial and error, I finally got everything working and decided to write it all down.
NOTE! If you have any strict security requirements, or any other specific needs, please make sure to review the documentation and any related code before using it.
You can find the entry point for the original GitHub documentation for self-hosted runners here.
The official Quickstart for Actions Runner Controller by GitHub is located behind this link.
This documentation is written for absolute noobs and dummies like myself. I've tried to write it so that anyone can understand it and follow the steps, even without any previous experience with Kubernetes or GitHub Actions. If you've got any additional questions or feedback, please feel free to send me a message or open an issue/PR.
However, please keep the target audience in mind. Many of these steps can be done differently, some can probably be skipped, and some tools or services could be replaced with something else. The goal is simply to get things working, so treat this as a guide rather than a strict set of instructions. The implementation might not be the best possible or most optimized way to do it.
The goal of this project is to provide a self-hosted runner set to run all your GitHub Actions workflows for one of your own repositories. The complete runner set is configured to run on your own infrastructure.
The runner set is configured to run on an automatically scaled set of runners, powered by a Kubernetes controller. It will be configured to use Docker in Docker (DinD) to enable users to use any Docker features within their workflows. Please note that the provided assets are configured to run in DinD mode. Consult other documentation as well if you need a non-DinD setup.
This DinD runner set was my goal from the get-go, but I just could not find any comprehensive documentation on how to do this in one place. So I read the GitHub Actions documentation, issues, various forums, articles, and blog posts, and did a lot of trial and error. Finally, I got it working and decided to write the notes down into a single source. A really valuable resource has also been this great blog post by @some-natalie.
Start by identifying and choosing the hardware to run on. I've run it on a separate physical server machine running Proxmox, with 8 cores, 16 GB of RAM, and 256 GB of storage. I'd call that a fairly low-end setup; fewer cores or less RAM might simply not be enough.
I initialized a new VM on Proxmox and installed Ubuntu Server 24.04 LTS on it. For the runner VM, I gave it 4 cores and 8 GB of RAM, and supplemented the RAM with swap when needed. Proxmox is a great way to run multiple virtual machines on a single physical machine. It is not a prerequisite, of course, but it's a great way to run the runner set on a separate, completely isolated VM, in a dedicated environment reserved just for the ARC set.
The runner set can be run on any machine, of course, but I recommend running it on a separate, dedicated VM. This ensures that there are no conflicting installations, mismatching dependencies, Docker problems, or other environment issues.
Now I assume you've got the Ubuntu (or some Debian-based system) ready, whichever solution you've chosen.
Update all packages to the latest version. I also recommend installing `openssh` so you can access the system via SSH. Configure SSH via `/etc/ssh/sshd_config`. The complete `openssh` documentation can be found behind this link.
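On Ubuntu, the update and `openssh` installation typically boil down to something like this (the package names assume a Debian-based system):

```bash
# Update the package index and upgrade all installed packages.
sudo apt update && sudo apt upgrade -y

# Install the OpenSSH server for remote access.
sudo apt install -y openssh-server
```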
Important fields in the SSH configuration file include the following:
Port 22 # required to enable SSH
PermitRootLogin no # recommended to disable root login
PasswordAuthentication yes # required if you want to login with a password
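After editing the configuration, restart the SSH service so the changes take effect (on Ubuntu the service is typically called `ssh`):

```bash
sudo systemctl restart ssh
```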
Install Docker. Read the Docker documentation for the latest instructions. Note that it's enough to have the Docker Engine installed here, on the host. For example, in my understanding, the `docker-compose` plugin is not needed on the host if it's installed in the runner image. The runner image only needs access to the Docker Engine and the Docker socket (in DinD mode).
After the Docker installation, there's a good chance that you need to update the Docker permissions. Usually the commands you need to execute are the following:
sudo groupadd docker
sudo usermod -aG docker ${USER}
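To apply the new group membership without logging out, and to verify that Docker works without `sudo`, you can run something like:

```bash
# Start a shell with the new group membership (or simply log out and back in).
newgrp docker

# Verify that the Docker Engine is reachable without sudo.
docker run hello-world
```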
Install the Go programming language. Read the Go documentation for the latest instructions. On a headless system, files and folders can be fetched from the internet, for example, via `curl`; here you want to use `curl -LO <url>`.
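As a sketch, downloading and unpacking Go could look like the following; the version number is only an example, so check the Go downloads page for the current one:

```bash
# Download the Go tarball (replace the version with the current release).
curl -LO https://go.dev/dl/go1.22.4.linux-amd64.tar.gz

# Remove any previous installation and extract the new one to /usr/local.
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.22.4.linux-amd64.tar.gz
```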
As a post-install step for Go, you might need to add the Go binary to your `PATH`. Edit the `~/.bashrc` file and add the following lines:
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$(go env GOPATH)/bin
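Reload the shell configuration and verify that Go is found:

```bash
source ~/.bashrc
go version
```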
Install `kubectl`. Read the Kubernetes documentation for the latest instructions.
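At the time of writing, the Linux installation boils down to roughly the following (verify against the official docs before running):

```bash
# Download the latest stable kubectl binary for linux/amd64.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install it to /usr/local/bin and check the client version.
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```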
Install `kind`. Make sure to install `kind` with the `go install` method as instructed here.
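The `go install` command looks roughly like this; the kind documentation pins a specific version, so prefer the version it lists over `@latest`:

```bash
go install sigs.k8s.io/kind@latest
```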
As the documentation states, `go install` will most likely place the binary under `/home/user/go/bin`. You might need to add this to your `PATH` variable. Edit the `~/.bashrc` file and add the following line:
export PATH=$PATH:/home/user/go/bin
Install `helm` from a script. Read the Helm documentation and follow the instructions for the script installation.
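At the time of writing, the script installation looks like this (check the Helm docs for the current commands):

```bash
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```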
Spawn the default kind cluster with `kind create cluster`. You may, of course, create a custom cluster with `kind create cluster --name <cluster-name>` or configure it further if you want, but note that the default configuration is sufficient for this project, and the remaining steps in this documentation assume the default cluster.
Initializing the cluster might take a while, so give it some time.
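Once the cluster is up, you can verify that it responds:

```bash
# The default cluster gets the context name "kind-kind".
kubectl cluster-info --context kind-kind
kubectl get nodes
```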
Create a new file called `values.yml` with `touch values.yml`. I've placed the file in the user's home directory, but you can place it anywhere you want. However, the remaining commands in this documentation have to be executed from the same directory as the `values.yml` file. This file contains the configuration for the runner set.
Open the file (for example, with `nano values.yml`) and add the content found in the adjacent example file.
Set the minimum and maximum number of runners to some reasonable values. Low-end machines might not be able to handle more than a couple of runners. The computational strain of a single runner largely depends on the workflows you run on it.
If you want, you can set maximum hardware resource limits for the runners, along with many other configurations. This is optional. The complete original `values.yml` file provided by GitHub can be found here.
Please note that DinD mode will override some of the configurations. In general, do not tamper with the configuration if you are not sure what you are doing.
Update the `runnerImage` (present in two different places). If you are using a custom runner image, update the image name and tag. If you are using the default runner image, you can comment it out.
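As a rough sketch, the parts of `values.yml` discussed above could look something like the following. This follows the upstream chart layout, where the runner image is set under `template.spec`; the adjacent example file may name things differently, and the image reference below is just a placeholder:

```yaml
# Scale the runner set between these bounds.
minRunners: 1
maxRunners: 3

# Run the runners in Docker-in-Docker mode.
containerMode:
  type: "dind"

template:
  spec:
    containers:
      - name: runner
        # Placeholder: replace with your own custom runner image and tag.
        image: docker.io/<user>/custom-arc-runner:v1
        command: ["/home/runner/run.sh"]
```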
With "Runner Images" I mean the images that are used to run the workflows. For the GitHub-hosted runners, the default image is ubuntu-latest
. For the self-hosted runners, the default image is actions/runner:latest
. The self-hosted image is a bit more limited than the GitHub-hosted one and most likely not sufficient for your needs.
To run your workflows on a custom runner image, you first need to create the custom image. GitHub's documentation on this topic can be found here.
I created a custom image called `custom-arc-runner`. The source Dockerfile used to build the image can be found here. The prebuilt image, ready for download, is available here. If that's sufficient for your needs, you can freely use it out of the box. Otherwise, you can create your own custom image and use my Dockerfile as a reference.
The official starter Dockerfile for a custom runner image can be found here. It's wise to preinstall the tools you need in your workflows into the custom runner image.
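As a minimal sketch, a custom runner image based on the official one could look like this; the tool list is just an example of what you might preinstall:

```dockerfile
FROM ghcr.io/actions/actions-runner:latest

# Install extra tooling needed by the workflows.
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl git jq unzip \
    && rm -rf /var/lib/apt/lists/*

# Switch back to the unprivileged runner user.
USER runner
```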
Build and push the image to Docker Hub. I've written more about this step in the image/README.md file. Perform this step before continuing with the rest of the documentation.
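The build and push themselves are standard Docker commands; the repository name and tag below are placeholders:

```bash
docker build -t docker.io/<user>/custom-arc-runner:v1 .
docker login
docker push docker.io/<user>/custom-arc-runner:v1
```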
With my custom runner image now pushed to Docker Hub, the image line in the `values.yml` file would be `docker.io/poser/custom-arc-runner:vX`, where `vX` is the tag of the image.
You don't have to opt for a custom image; if you do not specify the `runnerImage`, the runner set will use the default one. Please note that the default image for self-hosted ARC systems is not the same as the `ubuntu-latest` image available for GitHub-hosted runners. The default self-hosted image does not include everything within `ubuntu-latest`.
So it's highly likely that you need to create a custom runner image. Another option is to install the tools you need at the beginning of each workflow run, but of course, that is not the most optimal solution.
To install the Actions Runner Controller (ARC) itself, run the following command next to the `values.yml` file:
NAMESPACE="arc-systems"
VERSION="0.12.1"
helm install arc \
--version "${VERSION}" \
--namespace "${NAMESPACE}" \
--create-namespace \
-f values.yml \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller
NOTE! Make sure to update the `VERSION` variable to the latest version, or pin the specific version you want to use. All releases for `arc` can be found here.
NOTE! You can change the `NAMESPACE` to your liking, but in that case the remaining commands in this documentation require further adjustments.
From your GitHub account, go to Settings -> Developer settings -> Personal access tokens and create a new personal access token, or PAT.
Add the following scopes:
admin:gpg_key
read:packages
repo
workflow
I'm not sure if all of these are needed, but I've added them all just to be safe.
Copy the PAT and save it somewhere safe. You will need it later.
Again, execute the following command next to the `values.yml` file:
INSTALLATION_NAME="self-hosted-runners"
NAMESPACE="arc-runners"
GITHUB_CONFIG_URL="https://github.com/user/repo"
GITHUB_PAT="<PAT>"
VERSION="0.12.1"
helm install "${INSTALLATION_NAME}" \
--version "${VERSION}" \
--namespace "${NAMESPACE}" \
--create-namespace \
-f values.yml \
--set githubConfigUrl="${GITHUB_CONFIG_URL}" \
--set githubConfigSecret.github_token="${GITHUB_PAT}" \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set
NOTE! The `INSTALLATION_NAME` is the name of the runner set. You can use any name you want, but it's a good idea to use a name that is easy to remember and identify. This is the name you will have to use in the `runs-on` field of your workflow files to actually get the runners to run the workflows (see the example workflow after these notes).
NOTE! Make sure to update the `VERSION` variable to the latest version of the runner set, or pin the specific version you want to use.
NOTE! Make sure to update the `GITHUB_CONFIG_URL` variable to the URL of your GitHub repository. This is the repository where you want to run the workflows; the runner will not be able to run workflows from other repositories. It can be either a public or a private repository.
NOTE! Make sure to update the `GITHUB_PAT` variable to the PAT you created in step 10.
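For example, with the installation name used above, a minimal workflow that targets the runner set could look like this:

```yaml
name: CI

on: [push]

jobs:
  build:
    # Must match the INSTALLATION_NAME of the runner scale set.
    runs-on: self-hosted-runners
    steps:
      - uses: actions/checkout@v4
      - run: echo "Hello from a self-hosted ARC runner"
```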
To check the status of the runner set, run the following command:
helm list -A
You should see the following output:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
arc arc-systems 1 2025-06-10 10:05:54.123735703 +0000 UTC deployed gha-runner-scale-set-controller-0.12.1 0.12.1
self-hosted-runners arc-runners 1 2025-06-10 10:10:27.055450595 +0000 UTC deployed gha-runner-scale-set-0.12.1 0.12.1
To check the status of the pods, run the following command:
kubectl get pods -n arc-systems
You should see the following output:
NAME READY STATUS RESTARTS AGE
arc-gha-rs-controller-57c67d4c7-wc5wb 1/1 Running 0 15m
self-hosted-runners-754b578d-listener 1/1 Running 0 10m
No pods should be restarting. However, it might take a while for the runners to be ready, so after the initial deployment you might need to wait a few minutes. If the pods are still restarting or exiting after about 5 minutes, check their logs.
One common problem is a conflicting runner set name: one repository cannot have multiple runner sets with the same name.
Later on, when you've got actual workflows running, you can check the status of the workflow runner pods by running the following command:
kubectl get pods -n arc-runners
If you encounter any issues here, you can always check the logs for the pods. First identify the names of the relevant pods under the `NAME` column in the output of the following command.
kubectl get pods -n arc-systems
If the names were `arc-gha-rs-controller-57c67d4c7-wc5wb` and `self-hosted-runners-754b578d-listener`, the logs would be available with:
kubectl logs arc-gha-rs-controller-57c67d4c7-wc5wb -n arc-systems
and
kubectl logs self-hosted-runners-754b578d-listener -n arc-systems
respectively.
If you need to follow logs as they happen, i.e. stream them in real time (like `tail -f`), you can use the following command:
kubectl logs -f arc-gha-rs-controller-57c67d4c7-wc5wb -n arc-systems
(Assuming the aforementioned names of the pods.)
To display only the last N lines of the logs, you can use the following command:
kubectl logs --tail=20 arc-gha-rs-controller-57c67d4c7-wc5wb -n arc-systems
To show logs newer than a specified duration (e.g., `1h`, `5m`, `30s`), you can use the following command:
kubectl logs --since=5m arc-gha-rs-controller-57c67d4c7-wc5wb -n arc-systems
To view logs of a previous incarnation of a container (if it restarted), you can use the following command:
kubectl logs -p arc-gha-rs-controller-57c67d4c7-wc5wb -n arc-systems
More documentation about `kubectl logs` can be found here.
If everything went wrong, you can always delete the runner set and start over. The following command will permanently delete everything Kubernetes-related from all namespaces. Please be careful with it, as you will lose all Helm releases, all cluster resources, and the kind cluster itself.
helm ls -a --all-namespaces | awk 'NR > 1 { print "-n "$2, $1}' | xargs -L1 helm delete &&
kubectl delete all --all --all-namespaces &&
kind delete cluster