docs: Particularize Initializers and Provide Sample Code #3412

Draft · wants to merge 4 commits into `main` · showing changes from all commits
217 changes: 184 additions & 33 deletions docs/content/concepts/workspaces/workspace-initialization.md
@@ -8,7 +8,7 @@ Initializers are used to customize workspaces and bootstrap required resources upon creation.

### Defining Initializers in WorkspaceTypes

A `WorkspaceType` can declare an initializer by setting the boolean `initializer` field. Here is an example of a `WorkspaceType` with an initializer:

```yaml
apiVersion: tenancy.kcp.io/v1alpha1
kind: WorkspaceType
metadata:
  name: example
spec:
  initializer: true
  defaultChildWorkspaceType:
    name: universal
    path: root
```

Each initializer has a unique name, which is automatically generated as `<workspace-path-of-WorkspaceType>:<WorkspaceType-name>`. For example, if you applied the aforementioned WorkspaceType in the `root` workspace, your initializer would be called `root:example`.
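
To see which initializers are still pending on a freshly created workspace, you can inspect its `LogicalCluster` object from inside that workspace (a sketch; `LogicalCluster` objects are always named `cluster`):

```sh
kubectl get logicalcluster cluster -o jsonpath='{.status.initializers}'
```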

Since `WorkspaceType.spec.initializer` is a boolean field, each WorkspaceType comes with at most one initializer of its own. However, each WorkspaceType inherits the initializers of its parent workspaces. As a result, multiple initializers can apply to a WorkspaceType, but you will need to nest them.
Here is an example (a YAML sketch of the involved WorkspaceTypes follows the list):

1. In the `root` workspace, create a new WorkspaceType called `parent`. You will receive a `root:parent` initializer.
2. In the newly created `parent` workspace, create a new WorkspaceType called `child`. You will receive a `root:parent:child` initializer.
3. Whenever a new workspace of type `child` is created, it will receive both the `root:parent` and the `root:parent:child` initializers.
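
For illustration, here is a minimal sketch of the two WorkspaceTypes from the steps above (the names `parent` and `child` are only examples):

```yaml
# applied in the root workspace; yields the initializer "root:parent"
apiVersion: tenancy.kcp.io/v1alpha1
kind: WorkspaceType
metadata:
  name: parent
spec:
  initializer: true
---
# applied in the parent workspace (path root:parent);
# yields the initializer "root:parent:child"
apiVersion: tenancy.kcp.io/v1alpha1
kind: WorkspaceType
metadata:
  name: child
spec:
  initializer: true
```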

### Enforcing Permissions for Initializers

The non-root user must be granted the `initialize` verb on the `WorkspaceType` that the initializer belongs to. This ensures that only authorized users can perform initialization actions through the virtual workspace endpoint. Here is an example of the `ClusterRole`:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-initializer # illustrative name
rules:
- apiGroups: ["tenancy.kcp.io"]
  resources: ["workspacetypes"]
  resourceNames: ["example"]
  verbs: ["initialize"]
```

You can then bind this role to a user or a group.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-initializer # illustrative name
subjects:
- kind: User
  name: <user-with-sufficient-permissions>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: example-initializer
  apiGroup: rbac.authorization.k8s.io
```

## Writing Custom Initialization Controllers

### Responsibilities of Custom Initialization Controllers

Custom initialization controllers are responsible for handling the initialization logic of custom WorkspaceTypes. They interact with kcp by:

1. Watching for the creation of new LogicalClusters (the backing objects behind Workspaces) that carry the corresponding initializer
2. Running any custom initialization logic
3. Removing the corresponding initializer from the `.status.initializers` list of the LogicalCluster once the initialization logic has finished successfully

In order to simplify these processes, kcp provides the `initializingworkspaces` virtual workspace.

### The `initializingworkspaces` Virtual Workspace

As a service provider, you can use the `initializingworkspaces` virtual workspace to manage workspace resources in the initializing phase. This virtual workspace allows you to fetch `LogicalCluster` objects that are in the initializing phase and to request their initialization by a specific controller.

You can retrieve the URL of the virtual workspace directly from the `.status.virtualWorkspaces` field of the corresponding WorkspaceType. Returning to our previous example with the custom WorkspaceType called `example`, you will receive the following output:

```sh
$ kubectl get workspacetype example -o yaml
...
status:
  virtualWorkspaces:
  - url: https://<front-proxy-url>/services/initializingworkspaces/root:example
```

You can use this URL to construct a kubeconfig for your controller. To do so, use the URL directly as the `cluster.server` in your kubeconfig and provide a user with sufficient permissions (see [Enforcing Permissions for Initializers](#enforcing-permissions-for-initializers)).
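
A quick way to assemble such a kubeconfig is with `kubectl config`; this is a sketch in which the context name `initializer` and token-based authentication are assumptions:

```sh
# the server URL comes from the WorkspaceType status shown above
kubectl config set-cluster initializer \
  --server="https://<front-proxy-url>/services/initializingworkspaces/root:example"
kubectl config set-credentials initializer-user --token="<user-token>"
kubectl config set-context initializer --cluster=initializer --user=initializer-user
kubectl config use-context initializer
```

Alternatively, write the kubeconfig by hand as shown in the kubeconfig tab of the code sample below.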

### Code Sample

When writing a custom initializer, the following needs to be taken into account:

* You need to use the kcp-dev controller-runtime fork, because upstream controller-runtime cannot handle the fact that, under the hood, all LogicalClusters share the same object name
* You need to update LogicalClusters using patches; they cannot be modified through the update API

Keeping this in mind, you can use the following example as a starting point for your initialization controller:

=== "main.go"

```go
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"
	"slices"
	"strings"

	"github.com/go-logr/logr"
	kcpcorev1alpha1 "github.com/kcp-dev/kcp/sdk/apis/core/v1alpha1"
	"github.com/kcp-dev/kcp/sdk/apis/tenancy/initialization"
	"k8s.io/client-go/tools/clientcmd"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/kcp"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

type Reconciler struct {
	Client          client.Client
	Log             logr.Logger
	InitializerName kcpcorev1alpha1.LogicalClusterInitializer
}

func main() {
	if err := execute(); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}

func execute() error {
	// path to the kubeconfig built for the virtual workspace (see the kubeconfig tab)
	kubeconfigpath := "<path-to-kubeconfig>"

	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigpath)
	if err != nil {
		return err
	}

	logger := logr.FromSlogHandler(slog.NewTextHandler(os.Stderr, nil))
	ctrl.SetLogger(logger)

	mgr, err := kcp.NewClusterAwareManager(config, manager.Options{
		Logger: logger,
	})
	if err != nil {
		return err
	}
	if err := kcpcorev1alpha1.AddToScheme(mgr.GetScheme()); err != nil {
		return err
	}

	// since the initializer's name is the last part of the hostname, we can take it from there
	initializerName := config.Host[strings.LastIndex(config.Host, "/")+1:]

	r := Reconciler{
		Client:          mgr.GetClient(),
		Log:             mgr.GetLogger().WithName("initializer-controller"),
		InitializerName: kcpcorev1alpha1.LogicalClusterInitializer(initializerName),
	}

	if err := r.SetupWithManager(mgr); err != nil {
		return err
	}
	mgr.GetLogger().Info("Setup complete")

	if err := mgr.Start(context.Background()); err != nil {
		return err
	}

	return nil
}

func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&kcpcorev1alpha1.LogicalCluster{}).
		// we need to use kcp.WithClusterInContext here to target the correct logical clusters during reconciliation
		Complete(kcp.WithClusterInContext(r))
}

func (r *Reconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	log := r.Log.WithValues("clustername", req.ClusterName)
	log.Info("Reconciling")

	lc := &kcpcorev1alpha1.LogicalCluster{}
	if err := r.Client.Get(ctx, req.NamespacedName, lc); err != nil {
		return reconcile.Result{}, err
	}

	// check if your initializer is still set on the LogicalCluster
	if slices.Contains(lc.Status.Initializers, r.InitializerName) {
		log.Info("Starting to initialize cluster")
		// your logic here to initialize a Workspace

		// after your initialization is done, don't forget to remove your initializer.
		// since LogicalCluster objects cannot be updated directly, we need to create a patch.
		patch := client.MergeFrom(lc.DeepCopy())
		lc.Status.Initializers = initialization.EnsureInitializerAbsent(r.InitializerName, lc.Status.Initializers)
		if err := r.Client.Status().Patch(ctx, lc, patch); err != nil {
			return reconcile.Result{}, err
		}
	}

	return reconcile.Result{}, nil
}
```

=== "kubeconfig"

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <your-certificate-authority>
    # obtain the server url from the status of your WorkspaceType
    server: "<initializing-workspace-url>"
  name: initializer
contexts:
- context:
    cluster: initializer
    user: <user-with-sufficient-permissions>
  name: initializer
current-context: initializer
kind: Config
preferences: {}
users:
- name: <user-with-sufficient-permissions>
  user:
    token: <user-token>
```

=== "go.mod"

```go
...
// replace upstream controller-runtime with the kcp cluster-aware fork
replace sigs.k8s.io/controller-runtime v0.19.7 => github.com/kcp-dev/controller-runtime v0.19.0-kcp.1
...
```
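
With the kubeconfig in place and `<path-to-kubeconfig>` in `main.go` pointing at it, the controller can be started like any other Go program; this assumes a module layout matching the tabs above:

```sh
go mod tidy
go run .
```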
3 changes: 3 additions & 0 deletions docs/mkdocs.yml
@@ -108,6 +108,9 @@ markdown_extensions:
- pymdownx.superfences
# Enable note/warning/etc. callouts
- admonition
# Enable tabs
- pymdownx.tabbed:
    alternate_style: true

# Live reload if any of these change when running 'mkdocs serve'
watch:
2 changes: 1 addition & 1 deletion pkg/reconciler/committer/committer.go
@@ -51,7 +51,7 @@ type Patcher[R runtime.Object] interface {
}

// CommitFunc is an alias to clean up type declarations.
type CommitFunc[Sp any, St any] func(_ context.Context, old *Resource[Sp, St], new *Resource[Sp, St]) error

// NewCommitter returns a function that can patch instances of R based on meta,
// spec or status changes using a cluster-aware patcher.