This is the configuration for `shiny.dide.ic.ac.uk`, the departmental Shiny server. It uses twinkle to look after setting up apps and to set up the docker image that we run the server from.

This system is a set of docker containers acting as a load-balanced shiny server, using three services:

```
<apache> -- <haproxy> -- <shiny x 12>
```

where `apache` presents the interface to the world and looks after https, `haproxy` acts as a load balancer, and we run 12 copies of `shiny`, each capable of serving every application.
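Once the system is up (see below) you can sanity-check this topology with docker compose; of the service names, only `haproxy` is confirmed by the notes on `docker-compose.yml` later in this document, so treat `apache` and `shiny` as assumptions and adjust to whatever your compose file actually uses:

```
# List the compose services; only "haproxy" is a confirmed name here,
# check docker-compose.yml for the others.
docker compose ps --services

# Show the containers for the (assumed) "shiny" service; with 12 replicas
# you should see 12 entries.
docker compose ps shiny
```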
On the first deployment you will need to fetch credentials for the DNS server; do this with

```
./scripts/get-hdb-credentials
```

and create the data volume with

```
docker volume create shiny-data
```
You can create the certificate by running

```
docker compose run acme-buddy \
    --domain shiny.dide.ic.ac.uk \
    --email reside@imperial.ac.uk \
    --dns-provider hdb \
    --certificate-path /tls/certificate.pem \
    --key-path /tls/key.pem \
    --account-path /tls/account.json \
    --oneshot
```

which uses all the arguments given for `acme-buddy` in `docker-compose.yml` except `--reload-container`, with `--oneshot` added; this will create the certificates, which makes for a smoother startup of the proxy. This step is optional and only needs to be done the very first time.
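To confirm that the certificate was actually created, one option is to mount the TLS volume into a throwaway container and inspect the file; this is a sketch, and the volume name `shiny_tls` is an assumption (compose prefixes volume names with the project name), so use `docker volume ls` to find the real one:

```
# Print the validity dates of the freshly created certificate; replace
# "shiny_tls" with the real volume name from "docker volume ls".
docker run --rm -v shiny_tls:/tls alpine/openssl \
    x509 -in /tls/certificate.pem -noout -dates
```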
Start the system with

```
docker compose up -d
```

Bring everything down with

```
docker compose down
```

Update containers with

```
docker compose pull
docker compose up -d --remove-orphans
```

after which you might want to run

```
docker image prune -f
```

to clean up old images.
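If you do this regularly, the update commands collect naturally into a tiny wrapper script; this is just a convenience sketch, not a script that ships with this repository:

```
#!/usr/bin/env bash
# Hypothetical scripts/update-containers: pull new images, restart the
# system, and prune the old images.
set -euo pipefail
docker compose pull
docker compose up -d --remove-orphans
docker image prune -f
```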
Interact with the twinkle cli with

```
./twinkle <args...>
```

or, if you need to just hack on something, run

```
./twinkle-shell
```

to get a bash shell within the docker system, configured as it would be when running, but with a writeable `/shiny-data` filesystem.
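For example, a minimal session to look around the data root (the layout under `/shiny-data` is not documented here, so treat the target as illustrative):

```
./twinkle-shell
# now inside the container; /shiny-data is writeable from this shell, so
# you can inspect (or carefully tweak) what the workers see:
ls /shiny-data
```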
The configuration is spread over a few files, but generally only the first needs changing (some of the others would be changed if adapting to a different system):

- `site.yml`: the main configuration describing all applications. The format is described in the twinkle documentation.
- `docker-compose.yml`: the configuration of the whole system.
- `shiny-server.conf`: the configuration for the shiny server itself. Very little here should ever need changing, though the log preservation is a matter to consider carefully, and the `site_dir` needs to align with the values used in `docker-compose.yml` for the shiny data volume and the `TWINKLE_ROOT` environment variable.
- `haproxy.cfg`: configuration for the load-balancer. This needs updating to match the number of server processes that you will run, which in turn needs to align with the number of replicas in `docker-compose.yml` (a quick consistency check is sketched below). It is theoretically possible to use DNS-based service discovery with docker compose and haproxy, but we have not set that up yet because writing the servers out by hand is really not that bad.
- `httpd.conf`: configuration for apache. This will need configuration based on the site; certainly the `ServerAdmin` and `ServerName` sections. The actual proxy part is simple and depends only on the service name (`haproxy`) in `docker-compose.yml`.
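Since the haproxy server list and the compose replica count are kept in step by hand, a quick grep can catch drift between them; a rough sketch, assuming the conventional `server` and `replicas` keywords appear in those files:

```
# Count backend "server" lines in haproxy.cfg ...
grep -c '^[[:space:]]*server' haproxy.cfg

# ... and compare against the replica count declared in docker-compose.yml.
grep replicas docker-compose.yml
```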
The https certificates are handled by `acme-buddy`; this is somewhat specific to our use at Imperial, but should be easily adaptable. A volume is used to arrange for the certificates generated by LetsEncrypt to be found by Apache. All configuration for this is contained within `docker-compose.yml`.
There are two personas here, possibly corresponding to two people.
- The user is the person who wants the application added. This is probably a researcher at DIDE who has an application that they would like deployed.
- The administrator is the person in RESIDE who will look after deploying the application.
The first few rounds are error-prone, as this is where we find out exactly what the application needs in order to run. Be prepared for a few back-and-forths over Teams or a laptop.
- The user makes a PR into this repository to create a new application section within `site.yml`. The format here should be fairly self-explanatory and is explained in detail in the twinkle documentation.
- The user adds provisioning information to their repository, describing the packages that their application requires (details forthcoming).
- The administrator merges the PR and pulls the updated copy of this repository on the server.
- The administrator runs `./twinkle deploy <app>` to deploy the application through to staging (see below for complications for private repos).
- The administrator should then go to the app page (`https://shiny.dide.ic.ac.uk/staging/<app>`) and see if it starts up. If it does not, then check the logs (`./twinkle logs <app>`) and report the error to the user so that they can try to replicate/fix it (see below for troubleshooting advice).
- Once everything is OK, run `./twinkle sync --production <app>` to push the application through to production, where it will be available as `https://shiny.dide.ic.ac.uk/<app>` (a worked example of this sequence follows after this list).

The user should add themselves to the "Shiny server" channel on Teams so they can be notified about updates.
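Putting the administrator's steps together for a hypothetical application called `epi-explorer` (not a real app on this server):

```
# Deploy to staging:
./twinkle deploy epi-explorer
# check https://shiny.dide.ic.ac.uk/staging/epi-explorer; if it fails to
# start, read the logs and report the error back to the user:
./twinkle logs epi-explorer
# once everything looks OK, push through to production:
./twinkle sync --production epi-explorer
```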
For private applications (that is, applications where the source is in a private repository) we need a deploy key added to the repository. This is easiest where the shiny server administrator has administrative privileges to the repository in question as they can add the public key themselves.
Before deployment, the administrator runs (from the shiny server)

```
./twinkle deploy-key <app>
```

which creates a private ssh key and stores it on the server, then prints the public key and instructions to add it to the repository. Follow these instructions. After this step, which only needs to be performed once, everything should work as described above.
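For the same hypothetical private app as above, the one-off key setup followed by a normal deployment would look like:

```
# Create and store the deploy key; follow the printed instructions to add
# the public key to the private repository (e.g., on GitHub this is under
# Settings > Deploy keys):
./twinkle deploy-key epi-explorer
# after the key is registered, deployment proceeds as usual:
./twinkle deploy epi-explorer
```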
Generally this is much easier than the first installation, because the app has probably worked at least once already.
- The user lets the administrator know that the application needs updating. This is easiest if they provide the URL to the application, because the last part of that is the application name that the administrator needs.
- The administrator runs `./twinkle deploy <app>` to deploy the application through to staging (see below for complications for private repos) and lets the user know to check `https://shiny.dide.ic.ac.uk/staging/<app>` to see if things are looking OK (otherwise repeat these steps).
- Once everything is OK, run `./twinkle sync --production <app>` to push the application through to production.

Sometimes only the source needs updating (not the packages); this can be done using the lower-level twinkle commands

```
./twinkle update-src <app>
./twinkle sync <app>
```
The administrator can run commands to find out what is going on with applications on the server (users cannot do this directly; please ask us to find things out).
These are useful to read when the application does not start up properly or fails when running (see Troubleshooting below)
The administrator can read the most recent log by running

```
./twinkle logs <app>
```

This does not distinguish between staging and production, and by default returns the most recent log. You can list the available logs by running

```
./twinkle logs --list <app>
```

and then read a specific one by running

```
./twinkle logs --filename <filename> <app>
```
Applications might have been set up ages ago and we will have forgotten about them. You can run

```
./twinkle status <name>
```

to get a summary of the current state of an application, for example:

```
$ ./twinkle status mms-sd-workshop
── mms-sd-workshop ────────────────────────────────────────────────────────────
✔ Package source at '7a2ac5e5', updated 2025-07-24 08:25:06.09081
! Packages installed at 2025-07-24 08:19:35.776024 (Source has changed since last installation)
✔ Deployed to staging at 2025-07-24 08:34:30.364466
! Deployed to production at 2025-07-22 15:33:54.171179 (Source and packages have changed since last sync)
```
You can also get a running log of (most) key events in an application's history with

```
./twinkle history <name>
```

for example:

```
$ ./twinkle history mms-sd-workshop
── mms-sd-workshop ─────────────────────────────────────────────────────────────
2025-07-22 15:33:54.171179 sync-production sha=a247013e, lib=20250718162420, and production=TRUE
2025-07-24 08:19:20.382735 update-src sha=c21b56e3
2025-07-24 08:19:35.776024 install-packages sha=c21b56e3 and lib=20250724081921
2025-07-24 08:25:06.09081 update-src sha=7a2ac5e5
2025-07-24 08:34:30.364466 sync-staging sha=7a2ac5e5, lib=20250724081921, and production=FALSE
```
Only successful commands are logged. Times are UTC throughout.
These are all reasons we have seen applications fail to start.
The most common reason for failure of an application to start is missing packages. The user should add these to the provisioning information contained in their repository and let the administrator know so that they can try redeploying. This may take a few iterations and there are not many great ways of easily working out the full set of packages required by the application.
Relatedly, an installed package may fail to load. In this case, the image being used by the shiny server needs updating. Currently this is built in twinkle, but one could easily enough modify the setup here to build an image from that image and install additional system libraries. The output from `pkgdepends` will flag (with a cross) system libraries that are missing; otherwise, searching on the error message along with "ubuntu" should reveal the name of the package. These are typically installed with `apt-get update && apt-get install ...`, and you can try this in the container if you want to iterate quickly (though with multiple workers this avenue will not generally be suitable for changing a live system).
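To experiment before changing the image, you can open a shell in one of the running workers; a sketch, assuming the shiny service is called `shiny` in `docker-compose.yml`. Changes made this way are lost on restart and only affect the one replica you exec into, so this is for diagnosis rather than a fix:

```
# Open a shell in one replica of the (assumed) "shiny" service:
docker compose exec shiny bash

# inside the container, try installing the suspected missing library,
# for example (hypothetically) GLPK for an igraph load failure:
apt-get update && apt-get install -y libglpk-dev
```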
Also relatedly, an application may try to use a missing shell command, e.g., LaTeX or similar. This can be fixed following the approach above.
Applications may fail if they try to write any data to disk; ideally they would run in a read-only filesystem, but this is not implemented yet. This seems annoying, but it is the safest way to manage multiple simultaneous users served from multiple running server processes. If the application needs scratch space (e.g., a place to compile a LaTeX document) it should use a temporary directory. If the application wants some common shared persistent data (e.g., a database), that is actually quite hard to get right and is not currently supported. Possible future solutions would let the applications connect to some database, or have access to some persistent disk shared between all workers, with the application designed to allow multiple simultaneous writers. This sort of problem will require collaboration between the server setup and the application.
Installation may fail if you rely on packages that use Remotes-style github references, because we only use the bundled PAT. It is best to get as many packages as possible onto an R universe and use `pkgdepends.txt` to set the repo to prevent this. It would be possible to pass a github PAT through as an environment variable when running the installation, but that is not currently implemented.
`pkgdepends` may fail with conflicting packages. This can be quite hard to debug, and usually happens when you have multiple related packages that are found via `Remotes:` in their `DESCRIPTION`s. It is quite hard to get pkgdepends to relax about this: it will interpret one request for `user/pkg` and another for `user/pkg@branch` as conflicting, because the former is interpreted as requiring `main` and the latter as requiring a particular branch. It may be possible to find the right query parameters to tame the dependency solver (see the pkgdepends docs). Alternative solutions are to install from an R universe and avoid these references, or to use the script-based installation (via `conan.R`) and control the process yourself.
If your app installs itself as a package, then the `Remotes` field in its `DESCRIPTION` can conflict with other references used in provisioning. `pkgdepends` really does not like multiple references to the same package (e.g., a package `foo` and also a github request for `user/foo`), and you may need to strip down the provisioning request. Most of the time, for an application that is based on a package, the package's own dependencies will get you most of the way there.
If you move an application's source (e.g., tracking a fork) then you must delete the app before provisioning again. We only configure the remote once, the first time we run `update-src`; after that we just pull from that location.
- Replace the `shiny.dide.ic.ac.uk` domain with yours (`httpd.conf`, `docker-compose.yml`); a sketch of this substitution follows after this list
- Replace `reside@imperial.ac.uk` with your email (`httpd.conf`)
- Replace the https certificate system with something applicable to your use case (the current setup will work for people at Imperial College who have worked with ICT to get access to the DNS server records for subdomains they control)
- Update the number of containers in `docker-compose.yml` and reflect this in `haproxy.cfg`
- Replace the contents of `site.yml` with your own apps!
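A rough sketch of the first two substitutions, with `shiny.example.org` and `you@example.org` standing in for your own domain and email (review the diffs before committing):

```
# Swap in your own domain and contact address; the files are the ones
# named in the checklist above.
sed -i 's/shiny\.dide\.ic\.ac\.uk/shiny.example.org/g' httpd.conf docker-compose.yml
sed -i 's/reside@imperial\.ac\.uk/you@example.org/g' httpd.conf
```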