This module supports configuring multiple Vault servers to ensure high availability. If a request to one configured address fails, another one is tried. This is useful if you don't have a load balancer in front of your Vault cluster, or if you are using Caddy itself as the load balancer for it.
If you run Caddy inside a Nomad cluster, you can use Nomad to issue Vault tokens for it.
> **Important:** A KV version 2 mount is required for this to work. When using a KV version 1 mount, you will encounter the error `secret not found`.
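If no KV version 2 mount exists yet, one can be enabled with the Vault CLI. A minimal sketch, assuming the default mount path `kv` used by this module:

```sh
# Enable a KV secrets engine (version 2) at the default mount path "kv"
vault secrets enable -path=kv -version=2 kv
```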
A running Vault instance/cluster with an enabled KVv2 mount is required to use this module. At startup, the required capabilities on the configured secrets path are checked, and error messages listing any missing capabilities are shown. The following capabilities must be granted:
path "kv/metadata/caddy/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "kv/data/caddy/*" {
capabilities = ["create", "read", "update"]
}
path "kv/delete/caddy/*" {
capabilities = ["create", "update"]
}
Replace `kv` with your KVv2 mount path and `caddy` with your secrets path prefix if you are using values different from the defaults.
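To apply such a policy, it can be registered in Vault under a name of your choice. A minimal sketch, assuming the policy above is saved as `caddy-storage.hcl` and named `caddy-storage` (both names are placeholders):

```sh
# Register the policy with Vault
vault policy write caddy-storage caddy-storage.hcl

# Inspect the stored policy to verify it
vault policy read caddy-storage
```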
To build Caddy with this module, run `xcaddy build --with github.com/gerolf-vent/caddy-vault-storage`.
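If `xcaddy` is not installed yet, one common way to get it is via `go install` (an assumption; any other installation method from the xcaddy project works as well):

```sh
# Install the xcaddy build tool into $GOPATH/bin
go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
```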
This module is based on the Vault API client, so it supports most of its environment variables. The environment variables `VAULT_ADDR` and `VAULT_MAX_RETRIES` are ignored.
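Other standard variables of the Vault API client can still be useful, for example for TLS settings. A sketch, assuming the client's usual environment handling (the exact set of honored variables depends on the client version):

```sh
# CA certificate used to verify the Vault servers' TLS certificates
export VAULT_CACERT=/etc/ssl/certs/vault-ca.pem

# Vault Enterprise namespace, if one is used
export VAULT_NAMESPACE=admin

# Note: VAULT_ADDR and VAULT_MAX_RETRIES are ignored by this module;
# use the "addresses" and "max_retries" options instead.
```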
| Name | Type | Default | Description |
|---|---|---|---|
| `addresses` | `[]string` (or comma-separated string in Caddyfile) | None | One or more addresses of Vault servers (of the same cluster) |
| `token_path` | `string` | `$VAULT_TOKEN` environment variable | Local path to read the access token from. Updates to that file are detected and read automatically. |
| `secrets_mount_path` | `string` | `"kv"` | Path of the KVv2 mount to use |
| `secrets_path_prefix` | `string` | `"caddy"` | Path prefix within the KVv2 mount to use |
| `max_retries` | `int` | `3` | Limit of connection retries after which a request fails |
| `lock_timeout` | `int` | `60` | Timeout for locks (in seconds) |
| `lock_check_interval` | `int` | `5` | Interval for checking lock status (in seconds) |
Run `caddy run --config server.json` with the following configuration as `server.json`:
```json
{
  "storage": {
    "module": "vault",
    "addresses": ["https://server1", "https://server2", "https://server3"],
    "token_path": "./vault_token"
  },
  "logging": {
    "logs": {
      "default": {
        "level": "DEBUG"
      }
    }
  },
  "apps": {
    "http": {
      "servers": {
        "example": {
          "listen": [":8000"],
          "routes": [
            {
              "match": [{
                "host": ["localhost"]
              }],
              "handle": [{
                "handler": "static_response",
                "body": "Hello, world!"
              }]
            }
          ]
        }
      }
    }
  }
}
```
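The `token_path` above points to a plain file containing a Vault token. One way to create such a file is sketched below, assuming a policy named `caddy-storage` (placeholder) that grants the capabilities listed earlier and that `jq` is available:

```sh
# Create a token bound to the policy and store only the raw token string
vault token create -policy=caddy-storage -format=json \
  | jq -r '.auth.client_token' > ./vault_token
```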
Or run `caddy run --config Caddyfile --adapter caddyfile` with the following configuration as `Caddyfile`:
```caddyfile
{
  storage vault {
    addresses "https://server1,https://server2,https://server3"
    token_path "./vault_token"
  }
  debug
}

localhost:8000

respond "Hello, world!"
```
You can use `docker-compose up` or `podman-compose up` in the repository root to run the Go tests against a Vault instance. Extensive logging is enabled to help debug any errors.