NSFS deployment with export of accounts (UID, GID configuration)

Guy Margalit edited this page Apr 21, 2021 · 13 revisions

This is a WIP feature.

Step 1 - Deploy Latest NooBaa to Kubernetes

Note: This feature is currently (as of 2021-Apr-22) under development, so it is recommended to use the latest master builds, the same as or newer than the build tagged master-20210419.

You will need to have your cluster ready with kubectl configured to use it.

Download the operator CLI:

curl 'https://noobaa-operator-cli.s3.amazonaws.com/noobaa-operator-master-20210419' > noobaa
chmod +x noobaa
sudo install noobaa /usr/local/bin

Use the CLI to deploy to the noobaa namespace:

noobaa install -n noobaa \
  --operator-image='noobaa/noobaa-operator:master-20210419' \
  --noobaa-image='noobaa/noobaa-core:master-20210419'

We suggest setting the current namespace to noobaa so you don’t need to add -n noobaa to every kubectl / noobaa command:

kubectl config set-context --current --namespace noobaa
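
To confirm the deployment is up before moving on, you can wait for the operator deployment and then check the overall system status (a sketch; the deployment name noobaa-operator matches what the CLI installs by default):

```shell
# Wait until the operator deployment reports Available, then print system status.
NS=noobaa
kubectl -n "$NS" wait --for=condition=available deployment/noobaa-operator --timeout=300s
noobaa status -n "$NS"
```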

Step 2 - Setup Filesystem PVC

NSFS requires a PVC for the filesystem, with a ReadWriteMany access mode, so that the endpoints can scale to any node in the cluster and still share it.

It is expected that this PVC will be allocated from a provisioner such as rook-ceph.cephfs.csi.ceph.com.

In this example we show how to create a simple local PV (similar to hostPath) for dev/test purposes.

Assuming the filesystem to expose is mounted at /nsfs on the node, we will create a local PV that represents it. Download and create the YAMLs attached below:

kubectl create -f nsfs-local-class.yaml
kubectl create -f nsfs-local-pv.yaml
kubectl create -f nsfs-local-pvc.yaml

nsfs-local-class.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nsfs-local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

nsfs-local-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nsfs-vol
spec:
  storageClassName: nsfs-local
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /nsfs/
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: Exists

nsfs-local-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nsfs-vol
spec:
  storageClassName: nsfs-local
  resources:
    requests:
      storage: 1Ti
  accessModes:
    - ReadWriteMany
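
Because the storage class uses volumeBindingMode: WaitForFirstConsumer, the PVC will stay Pending until a pod actually mounts it; that is expected. A quick sanity check:

```shell
# The PVC remains Pending until the endpoint pods consume it (WaitForFirstConsumer).
kubectl get pv nsfs-vol
kubectl get pvc nsfs-vol
```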

Step 3 - Mount PVC in Endpoints (TODO: handle in operator)

We need the endpoints pods to mount the filesystem PVC. This step should be automated by the operator, but for now we patch the endpoints deployment manually:

kubectl patch deployment noobaa-endpoint --patch '{
  "spec": { "template": { "spec": {
    "volumes": [{
      "name": "nsfs",
      "persistentVolumeClaim": {"claimName": "nsfs-vol"}
    }],
    "containers": [{
      "name": "endpoint",
      "volumeMounts": [{ "name": "nsfs", "mountPath": "/nsfs" }]
    }]
  }}}
}'
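
After patching, you can verify that the filesystem is actually mounted inside an endpoint pod. The label selector noobaa-s3=noobaa below is an assumption based on the S3 service selector; adjust it if your pods are labeled differently:

```shell
# Wait for the patched deployment to roll out, then check the mount in one pod.
kubectl rollout status deployment noobaa-endpoint
POD=$(kubectl get pod -l noobaa-s3=noobaa -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- df -h /nsfs
```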

Step 4 - Create NSFS Resource (TODO: handle in operator)

A namespace resource is a configuration entity that represents the mounted filesystem in the noobaa system.

You need to provide it with some information:

  • name - choose how to name it, perhaps follow the same name as the PVC or the Filesystem. You will use this name later when creating buckets that use this filesystem.
  • nsfs_config with properties:
    • fs_root_path - The mount point of the filesystem in the endpoints (see step 3).
    • fs_backend (optional) - When empty, a basic POSIX filesystem is assumed. Supported backend types: NFSv4, CEPH_FS, GPFS. Setting a more specific backend allows optimizations based on the capabilities of the underlying filesystem.

noobaa api pool_api create_namespace_resource '{
  "name": "fs1",
  "nsfs_config": {
      "fs_root_path": "/nsfs/fs1",
      "fs_backend": "GPFS"
  }
}'
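
To verify the resource was created, you can read it back. This assumes pool_api exposes a read_namespace_resource method symmetric to create_namespace_resource; if it does not, inspect the system resources via noobaa status instead:

```shell
# Read back the namespace resource configuration (method name is an assumption).
noobaa api pool_api read_namespace_resource '{ "name": "fs1" }'
```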

Step 5 - Setup S3 NSFS Account

Create an account with NSFS configuration:

  • Map the account to a UID/GID
  • Set up the directory for new buckets created from S3 for this account (TBD)
  • Note that allowed_buckets should be set to full_permission because the filesystem permissions of the UID will be used to resolve the allowed buckets for this account.

noobaa api account_api create_account '{
  "email": "jenia@noobaa.io",
  "name" : "jenia",
  "has_login": false,
  "s3_access": true,
  "allowed_buckets": { "full_permission": true },
  "nsfs_account_config": {
    "uid": *INSERT_UID*,
    "gid": *INSERT_GID*,
    "new_buckets_path": "TBD"
  }
}'

This should return a response with the credentials to use:

INFO[0001] ✅ RPC: account.create_account() Response OK: took 205.7ms 
access_keys:
- access_key: *NOOBAA_ACCOUNT_ACCESS_KEY*
  secret_key: *NOOBAA_ACCOUNT_SECRET_KEY*

You can also list accounts to see the configured NSFS accounts (along with all other accounts in the system):

noobaa api account_api list_accounts

If you are interested in a particular account, you can read it directly:

noobaa api account_api read_account '{
  "email": "jenia@noobaa.io"
}'

Step 6 - Setup the filesystem ACL/permissions

S3 access will be determined by the access the account's UID/GID has to the buckets and objects. The filesystem admin should set up the ACLs/permissions of the mounted FS path for the UIDs and GIDs that will be used to access it.

For dev/test the simplest way to set this up is to give full access to all:

mkdir -p /nsfs/fs1
chmod -R 777 /nsfs/fs1
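
For anything beyond dev/test, prefer scoping access to the account's UID/GID instead of 777. For example (1001:1001 is a placeholder; use the UID/GID configured in Step 5):

```shell
# Create the account's bucket directory and restrict it to the account's UID/GID.
# 1001:1001 is a placeholder for the values used in create_account.
mkdir -p /nsfs/fs1/jenia
chown 1001:1001 /nsfs/fs1/jenia
chmod 770 /nsfs/fs1/jenia
```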

Step 7 - Create Bucket(s)

Creating an NSFS bucket is like creating an "export" of a filesystem directory in the S3 service.

The following API call creates a bucket with the specified name and maps it to the specified path within the NSFS resource that was created in Step 4.

noobaa api bucket_api create_bucket '{
  "name": "fs1-jenia-bucket",
  "namespace":{
    "write_resource": { "resource": "fs1", "path": "jenia/" },
    "read_resources": [ { "resource": "fs1", "path": "jenia/" }]
  }
}'
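
To confirm the bucket configuration, read it back (assuming bucket_api exposes read_bucket, analogous to read_account above):

```shell
# Read back the bucket and its namespace mapping (method name is an assumption).
noobaa api bucket_api read_bucket '{ "name": "fs1-jenia-bucket" }'
```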

Step 8 - Connect S3 Client

Configure the S3 client application to access the filesystem via S3 through the endpoint.

Application S3 config:

AWS_ACCESS_KEY_ID=*NOOBAA_ACCOUNT_ACCESS_KEY*
AWS_SECRET_ACCESS_KEY=*NOOBAA_ACCOUNT_SECRET_KEY*
S3_ENDPOINT=s3.noobaa.svc (or nodePort address from noobaa status)
BUCKET_NAME=fs1-jenia-bucket
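
With that configuration in place, a generic S3 client such as the AWS CLI can exercise the bucket. The endpoint URL and certificate handling depend on your cluster; --no-verify-ssl is shown as an assumption for the default self-signed certificate:

```shell
# Example AWS CLI usage; substitute the credentials returned in Step 5.
export AWS_ACCESS_KEY_ID='*NOOBAA_ACCOUNT_ACCESS_KEY*'
export AWS_SECRET_ACCESS_KEY='*NOOBAA_ACCOUNT_SECRET_KEY*'
aws --endpoint-url https://s3.noobaa.svc --no-verify-ssl s3 ls s3://fs1-jenia-bucket
echo hello > hello.txt
aws --endpoint-url https://s3.noobaa.svc --no-verify-ssl s3 cp hello.txt s3://fs1-jenia-bucket/
```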