
Conversation

@joker-at-work
Contributor

@joker-at-work joker-at-work commented Sep 26, 2025

Please see the individual commits' messages.

This requires a change in the openstack/utils chart to change the endpoint host for the seed to Keystone.

Rolling this out also depends on the secrets getting updated.

  version: 2.2.15
- name: redis
  repository: oci://keppel.eu-de-1.cloud.sap/ccloud-helm
  version: 2.2.15
Contributor


Isn't redis already in this file in the lines above?

Contributor Author

@joker-at-work joker-at-work Oct 1, 2025


Yes. I'm not sure why helm adds two identical entries. It's probably an implementation detail in helm: we have 2 entries in Chart.yaml to get 2 redis instances, and both use the same version. If the versions differed, we'd probably have 2 different entries.

Edit: What might not be obvious from my answer: the file is auto-generated with `helm dep up openstack/cinder`.
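
For illustration, the relevant Chart.yaml dependencies probably look something like this (the alias names here are made up); Chart.lock records only name, repository and version, so two redis dependencies pinned to the same chart version end up as two identical-looking entries after `helm dep up`:

```yaml
# Sketch of the Chart.yaml dependencies; alias names are illustrative.
dependencies:
  - name: redis
    alias: api-external-ratelimit-redis
    repository: oci://keppel.eu-de-1.cloud.sap/ccloud-helm
    version: 2.2.15
  - name: redis
    alias: api-internal-ratelimit-redis
    repository: oci://keppel.eu-de-1.cloud.sap/ccloud-helm
    version: 2.2.15
```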

In the api-paste.ini we still had sections that were meant to be used
with the `mitaka` release. We will not run that release anymore and
haven't run it for a while, so we clean up that code now.
@sapcc-bot
Contributor

Failed to validate the helm chart. Details. Readme.

@sapcc-bot
Contributor

Failed to validate the helm chart. Details. Readme.

@sapcc-bot
Contributor

Failed to validate the helm chart. Details. Readme.

@joker-at-work joker-at-work marked this pull request as ready for review October 24, 2025 13:23
@joker-at-work
Contributor Author

Merging this, new pods will be created and the cinder-api pods will be "renamed" to cinder-api-external.
Nova will not use the internal pods yet, because a) the endpoints in Keystone still point to cinder-api, i.e. the external pods, and b) Nova doesn't currently use the internal endpoints. Therefore, I deem this quite safe to merge.

We want to configure different rate-limits and policies for
inter-service communication compared to user communication. To make this
work, we need different cinder-api pods. Keystone's service-catalog
already supports having different endpoints for these purposes.
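
For illustration, the catalogue split might look roughly like this (hostnames, region and paths are made up, not the actual deployment values):

```yaml
# Hypothetical sketch of the volume service's two catalogue endpoints.
volumev3:
  endpoints:
    - interface: public      # user traffic -> cinder-api-external
      region: region-1
      url: https://cinder.region-1.example.cloud/v3/%(project_id)s
    - interface: internal    # inter-service traffic, e.g. Nova -> cinder-api-internal
      region: region-1
      url: https://cinder-api-internal.cinder.svc:8776/v3/%(project_id)s
```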

We spawn a cinder-api-external and a cinder-api-internal Deployment, each
with its own cinder-api-external/internal-ratelimit-redis, since the
rate-limit middleware doesn't support re-using the same redis for
multiple APIs; this also allows us to deploy the redis next to the
cinder-api in the same cluster, if we want to.

Since most of the configuration/YAML is the same between the two
cinder-api pods, we define a template function and provide the individual
configuration as parameters.
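
A minimal sketch of that pattern, assuming a named template and illustrative parameter keys (not the chart's actual helper names or values):

```yaml
{{/* _helpers.tpl (sketch): one template renders a cinder-api Deployment */}}
{{- define "cinder.api.deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cinder-api-{{ .variant }}
spec:
  replicas: {{ .replicas }}
  selector:
    matchLabels:
      app: cinder-api-{{ .variant }}
  template:
    metadata:
      labels:
        app: cinder-api-{{ .variant }}
    spec:
      containers:
        - name: cinder-api
          image: {{ .image }}
{{- end }}

{{/* api-external.yaml / api-internal.yaml (sketch): one include per variant */}}
{{ include "cinder.api.deployment" (dict "variant" "external" "replicas" 2 "image" "example.registry/cinder-api:latest") }}
---
{{ include "cinder.api.deployment" (dict "variant" "internal" "replicas" 1 "image" "example.registry/cinder-api:latest") }}
```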

During the rollout, there will be a time when the new internal endpoint
is not yet synced into the Keystone catalogue. Therefore, we also keep
the old "cinder-api" Service alive, pointing to the cinder-api-external
Deployment as before.
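
A sketch of that compatibility Service (label names are illustrative; 8776 is the usual cinder-api port):

```yaml
# The old "cinder-api" name keeps resolving as before, but now selects the
# cinder-api-external pods.
apiVersion: v1
kind: Service
metadata:
  name: cinder-api
spec:
  selector:
    app: cinder-api-external
  ports:
    - name: api
      port: 8776
      targetPort: 8776
```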

It should be easy to extend this pattern with an "admin" API endpoint as
supported by Keystone's catalogue if we choose to use that in the
future.

This commit also contains some restructuring, e.g. introducing new
cinder-api-external-etc and cinder-api-internal-etc ConfigMaps to hold
the templated API configuration, and switching to projected volume
mounts in the api Deployment for better readability and a static config
at runtime.
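
A rough sketch of the projected-volume idea, as a fragment of the Deployment's pod template (all names other than cinder-api-external-etc are illustrative):

```yaml
# Fragment of spec.template.spec: several sources are projected into a single
# read-only /etc/cinder tree, so the config stays static at runtime.
volumes:
  - name: etc-cinder
    projected:
      sources:
        - configMap:
            name: cinder-api-external-etc
        - secret:
            name: cinder-api-secrets   # illustrative name
containers:
  - name: cinder-api
    volumeMounts:
      - name: etc-cinder
        mountPath: /etc/cinder
        readOnly: true
```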
Contributor

@hemna hemna left a comment


Just have to fix the conflicts
