Development kick-off #1
Replies: 8 comments 20 replies
-
Good to hear good news! As soon as you develop the Kamaji Control Plane, I am going to check if it works fine in my company's environment (OpenStack Rocky) and report the result to you. Thank you for your hard work :) 👍

Please refer to this information: for OpenStackCluster, at least one of apiServerLoadBalancer, disableAPIServerFloatingIP, or apiServerFixedIP is required. If Kamaji takes care of the load balancer, the expected template will be as below:
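For illustration, a minimal sketch of the relevant OpenStackCluster fields when Kamaji exposes the control plane endpoint itself (values here are placeholders; the full manifest used during testing appears later in this thread):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  # Kamaji exposes the tenant control plane endpoint, so the
  # CAPO-managed load balancer and floating IP can be turned off.
  apiServerLoadBalancer:
    enabled: false
  disableAPIServerFloatingIP: true
  # illustrative: the endpoint advertised by Kamaji, not yet known here
  apiServerFixedIP: ""
```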
For KubeadmConfigTemplate, the expected template will be as below:
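A minimal KubeadmConfigTemplate sketch for the worker nodes, mirroring the one shared later in this thread:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          # CAPO substitutes the instance hostname via cloud-init
          name: '{{ local_hostname }}'
```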
-
Kudos for driving this work @prometherion, and most importantly for having a clear goal to iteratively experiment/showcase, report back, and improve the entire CAPI ecosystem: ultimately the success of this is tied to the work of defining a robust and extensible contract with the providers that will take care of machines and the rest of the infrastructure.

Going back to the specific work: based on the discussion at the latest KubeCon in Amsterdam, there is a general interest in a control-plane-in-a-pod solution, and most probably there is already something you can leverage for a quick start. I have also personally played around with this idea a couple of times, the last one in https://github.com/fabriziopandini/cluster-api-provider-kubemark/tree/kubemark-controlplane, but this is very experimental 😉

However, what is key is that if there is an expectation to capture the above community sometime in the future, it is crucial to design a clear boundary between what is generic and what is specific to Kamaji.

Note: KCP as an acronym is overloaded (CAPI's KCP, https://www.kcp.io/), so I suggest using a different acronym, maybe something that highlights that the CP runs in a pod.
-
I'd like to propose a different name for the proposed instance. What do you think about:

I wouldn't use a short name, to avoid collisions with other providers we're unaware of, since we're sharing the same API group.
-
The proposed design makes sense to me. Ideally, the approach taken here should be easily adaptable to other infrastructure configurations. For example, it should be possible to use the microvm provider for control planes and bare metal for worker nodes; here is an article on this topic: https://www.weave.works/blog/multi-cluster-kubernetes-on-microvms-for-bare-metal. From the perspective of the CAPI API, it should be agnostic to the combination of infrastructure providers being used. It's important for us to avoid ending up in a situation where each similar use case, using different providers, takes a different approach.
-
I want to confirm a few points, as below:

One question here, @prometherion! Is there any expected problem with the kubeadm-based bootstrap process being handled by both the Kamaji provider and CAPI (kubeadm bootstrap), since you are going to use KubeadmConfigTemplate for the worker nodes?
-
Some good news! Thanks to @sn4psh0t we're testing the provider in Netsons' infrastructure, many kudos to them for offering a dev environment. I was successfully able to create a Kamaji Control Plane in the CAPI cluster, and the worker nodes in the OpenStack dev env.

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: capi-quickstart-md-0
namespace: default
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
name: '{{ local_hostname }}'
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: capi-quickstart
namespace: default
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
name: capo-cp-test
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
name: capi-quickstart
namespace: default
spec:
apiServerLoadBalancer:
enabled: false
disableAPIServerFloatingIP: true
apiServerFixedIP: ""
cloudName: openstack
dnsNameservers:
- 1.1.1.1
externalNetworkId: <REDACTED>
identityRef:
kind: Secret
name: capi-quickstart-cloud-config
managedSecurityGroups: true
nodeCidr: 10.6.0.0/24
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
name: capo-cp-test
spec:
replicas: 2
version: v1.24.0
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: capi-quickstart-md-0
namespace: default
spec:
clusterName: capi-quickstart
replicas: 3
selector:
matchLabels: null
template:
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: capi-quickstart-md-0
clusterName: capi-quickstart
failureDomain: <REDACTED>
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
name: capi-quickstart-md-0
version: v1.24.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackMachineTemplate
metadata:
name: capi-quickstart-md-0
namespace: default
spec:
template:
spec:
cloudName: <REDACTED>
flavor: <REDACTED>
identityRef:
kind: Secret
name: capi-quickstart-cloud-config
image: <REDACTED>
      sshKeyName: <REDACTED>
```

However, we're getting some issues with the OpenStackCluster: @jds9090 proposed here to patch the resource.
Digging into the CAPO code-base, I ended up here: the OpenStack provider is blocking any update operation, except for:
All other changes are not allowed, and this blocks us since the workflow from our side is the following:
However, step no. 4 is not possible, so to test the whole lifecycle I had to hack the code a bit and create a well-known address in advance, just to be sure everything was working as expected. My plan is to engage with the CAPO community in order to unblock ourselves from this: I'd say we could ask to ignore changes to the control plane endpoint.
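For context, a rough sketch of the idea behind the interim workaround, assuming an address reserved in advance (the value below is a made-up placeholder):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: capi-quickstart
spec:
  apiServerLoadBalancer:
    enabled: false
  disableAPIServerFloatingIP: true
  # placeholder: a well-known address reserved up front, so CAPO never
  # has to accept an update of the control plane endpoint afterwards
  apiServerFixedIP: "10.6.0.10"
```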
-
I need quick feedback about the API specification for the KamajiControlPlane resource. The following sample combines the resources managed by Kamaji itself with the ones offered by the CAPI objects.

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
labels:
app.kubernetes.io/name: kamajicontrolplane
app.kubernetes.io/instance: kamajicontrolplane-sample
app.kubernetes.io/part-of: cluster-api-control-plane-provider-kamaji
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: cluster-api-control-plane-provider-kamaji
name: kamajicontrolplane-sample
spec:
dataStoreName: default
addons:
coreDNS: { }
konnectivity: { }
kubeProxy: { }
admissionControllers:
- AlwaysAdmit
- AlwaysDeny
- AlwaysPullImages
- CertificateApproval
- CertificateSigning
- CertificateSubjectRestriction
- DefaultIngressClass
- DefaultStorageClass
- DefaultTolerationSeconds
- DenyEscalatingExec
- DenyExecOnPrivileged
- DenyServiceExternalIPs
- EventRateLimit
- ExtendedResourceToleration
- ImagePolicyWebhook
- LimitPodHardAntiAffinityTopology
- LimitRanger
- MutatingAdmissionWebhook
- NamespaceAutoProvision
- NamespaceExists
- NamespaceLifecycle
- NodeRestriction
- OwnerReferencesPermissionEnforcement
- PersistentVolumeClaimResize
- PersistentVolumeLabel
- PodNodeSelector
- PodSecurity
- PodSecurityPolicy
- PodTolerationRestriction
- Priority
- ResourceQuota
- RuntimeClass
- SecurityContextDeny
- ServiceAccount
- StorageObjectInUseProtection
- TaintNodesByCondition
- ValidatingAdmissionWebhook
registry: registry.k8s.io
controllerManager:
extraVolumeMounts: [ ]
extraArgs:
- --cloud-provider=external
resources:
limits:
cpu: 500m
memory: 128Mi
containerImageName: kube-controller-manager
apiServer:
extraVolumeMounts: [ ]
extraArgs:
- --cloud-provider=external
resources:
requests:
cpu: 750m
memory: 256Mi
containerImageName: kube-apiserver
scheduler:
extraVolumeMounts: [ ]
extraArgs: [ ]
resources:
limits:
cpu: 500m
memory: 128Mi
containerImageName: kube-scheduler
kubelet:
preferredAddressTypes:
- Hostname
- InternalIP
- ExternalIP
cgroupfs: systemd
network:
serviceType: LoadBalancer
serviceLabels:
kamaji.clastix.io/service: external
serviceAnnotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "false"
certSANs:
- kamajicontrolplane-sample.eu-west-01.cloudapp.azure.com
deployment:
nodeSelector:
kubernetes.io/os: linux
runtimeClassName: runc
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 100%
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: kamaji.clastix.io/name
operator: In
values:
- kamajicontrolplane-sample
topologyKey: kubernetes.io/hostname
tolerations:
- effect: NoExecute
operator: Equal
value: workload
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
kamaji.clastix.io/name: kamajicontrolplane-sample
matchLabelKeys:
- pod-template-hash
extraInitContainers: [ ]
extraContainers: [ ]
extraVolumes: [ ]
replicas: 2
  version: 1.27.0
```

You can notice we're missing some values:
Please let me know if you find something odd, or any features I missed in the drafting.
-
I'd like to thank all the people involved in this process, and I just wanted to share that the first version of the KamajiControlPlane CAPI provider is out as v0.1.0! I'm going to close this discussion, and I encourage you to give it a try and spot bugs: looking forward to those in the Issues section. Ad maiora!
-
This is a discussion providing a recap of the meeting hosted on the 2nd of May, 2023.
Attendees
Recap
During the meeting, we agreed to start implementing a Control Plane provider for Cluster API based on Kamaji.
The development playground will be OpenStack; @sn4psh0t from Netsons Cloud offered a dev environment.
Starting from a CAPO example, here follows the proposed draft of the manifests.

Cluster
For the sake of sharing, the proposed OpenStackCluster content is the following.
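A condensed sketch of the Cluster plus OpenStackCluster pair, based on the manifests shared in the testing comment of this thread (redacted values omitted):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane
    name: capo-cp-test
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
    kind: OpenStackCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
kind: OpenStackCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  # no CAPO load balancer: Kamaji exposes the control plane endpoint
  apiServerLoadBalancer:
    enabled: false
  disableAPIServerFloatingIP: true
  cloudName: openstack
  nodeCidr: 10.6.0.0/24
```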
Worker machines
We can focus on the MachineDeployment reference for the compute nodes. Worker nodes will join the cluster thanks to the KubeadmConfigTemplate, along with the OpenStackMachineTemplate (see the condensed sketch below).
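A condensed sketch of the MachineDeployment wiring the bootstrap and infrastructure templates together (the full manifests, including the KubeadmConfigTemplate and OpenStackMachineTemplate, appear in the testing comment of this thread):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: capi-quickstart-md-0
  namespace: default
spec:
  clusterName: capi-quickstart
  replicas: 3
  selector:
    matchLabels: null
  template:
    spec:
      clusterName: capi-quickstart
      version: v1.24.0
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: capi-quickstart-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha6
        kind: OpenStackMachineTemplate
        name: capi-quickstart-md-0
```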
All these manifests are transparent for Kamaji, mentioned for the sake of sharing.
KamajiControlPlane proposal
The following manifest is a draft for the KamajiControlPlane instance that will provide all the required information to provision a control plane, and share the required information with the underlying worker nodes to join the downstream cluster.
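A minimal sketch of what such an instance could look like; the full API specification draft is shared in a dedicated comment of this thread:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
  name: capo-cp-test
  namespace: default
spec:
  dataStoreName: default
  network:
    # the control plane is exposed as a Service from the management cluster
    serviceType: LoadBalancer
  replicas: 2
  version: v1.24.0
```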
User story/diagram
- Cluster definition with it (the KamajiControlPlane reference)
- kubeadm machinery offered by Cluster API

Roadmap