
Conversation

@jmtx1020 jmtx1020 commented Apr 24, 2025

Summary

  • Fixes logic for commandOverride in charts:
    • das
    • nitro
    • relay
  • Updates .Values.extraEnv to render when using commandOverride
    • Before this update, .Values.extraEnv was completely ignored when .Values.commandOverride.enabled: true was set (see the template sketch after this list)
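For context, here is a minimal sketch of the kind of rendering the fix needs to produce. It is not the actual diff from this PR, and the real templates' helper names and indentation differ per chart: toJson gives the flow-style command/args seen in the After outputs below, and toYaml with nindent is one way to splice extraEnv into the container's env list (the nindent width of 10 is an assumption tied to how deep the container block sits).

templates/statefulset.yaml (hypothetical excerpt)
{{- /* Sketch only: render the override only when enabled, and let Helm serialize the lists so they stay valid YAML. */}}
{{- if .Values.commandOverride.enabled }}
command: {{ .Values.commandOverride.command | toJson }}
args: {{ .Values.commandOverride.args | toJson }}
{{- end }}
{{- with .Values.extraEnv }}
env:
  {{- toYaml . | nindent 10 }}
{{- end }}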

Relay

Before

Prior to my changes, running the following:

helm template nitro ./ --namespace nitro --values ./test.values.yaml --debug

With the following values:

values.yaml
commandOverride:
  enabled: true
  command:
    - "echo"
  args:
    - "hello"
extraEnv:
- name: NITROCI_PARENT__CHAIN_CONNECTION_URL
  valueFrom:
    secretKeyRef:
      name: ci-secret-nitro
      key: PARENT_CHAIN_URL_SEPOLIA
- name: NITROCI_PARENT__CHAIN_BLOB__CLIENT_BEACON__URL
  valueFrom:
    secretKeyRef:
      name: ci-secret-nitro
      key: PARENT_CHAIN_BLOB_CLIENT_URL_JOSE

Would result in the following error:

output.error.yaml
Error: YAML parse error on relay/templates/deployment.yaml: error converting YAML to JSON: yaml: line 41: mapping values are not allowed in this context
helm.go:86: 2025-04-24 22:43:29.062266 -0400 EDT m=+0.038616746 [debug] error converting YAML to JSON: yaml: line 41: mapping values are not allowed in this context
YAML parse error on relay/templates/deployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
  helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:144
helm.sh/helm/v3/pkg/releaseutil.SortManifests
  helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:104
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
  helm.sh/helm/v3/pkg/action/action.go:168
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
  helm.sh/helm/v3/pkg/action/install.go:316
main.runInstall
  helm.sh/helm/v3/cmd/helm/install.go:317
main.newTemplateCmd.func2
  helm.sh/helm/v3/cmd/helm/template.go:95
github.com/spf13/cobra.(*Command).execute
  github.com/spf13/cobra@v1.8.1/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
  github.com/spf13/cobra@v1.8.1/command.go:1117
github.com/spf13/cobra.(*Command).Execute
  github.com/spf13/cobra@v1.8.1/command.go:1041
main.main
  helm.sh/helm/v3/cmd/helm/helm.go:85
runtime.main
  runtime/proc.go:283
runtime.goexit
  runtime/asm_amd64.s:1700

After

After this change has been made, we can apply these values:

values.yaml
commandOverride:
  enabled: true
  command:
    - "echo"
  args:
    - "hello"
extraEnv:
- name: NITROCI_PARENT__CHAIN_CONNECTION_URL
  valueFrom:
    secretKeyRef:
      name: ci-secret-nitro
      key: PARENT_CHAIN_URL_SEPOLIA
- name: NITROCI_PARENT__CHAIN_BLOB__CLIENT_BEACON__URL
  valueFrom:
    secretKeyRef:
      name: ci-secret-nitro
      key: PARENT_CHAIN_BLOB_CLIENT_URL_JOSE

Then we get the following in our deployment; note containers.command and containers.args:

output.success.yaml
---
# Source: relay/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: relay
  labels:
    helm.sh/chart: relay
    app.kubernetes.io/name: relay
    app.kubernetes.io/instance: relay
    app.kubernetes.io/version: "v3.5.5-90ee45c"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: relay
      app.kubernetes.io/instance: relay
  template:
    metadata:
      annotations:
        checksum/configmap: e8b85124f358b9d451748ed8eedc904a1c7bb51ac73b710707a929f8c0238fd0
      labels:
        app.kubernetes.io/name: relay
        app.kubernetes.io/instance: relay
    spec:
      serviceAccountName: relay
      securityContext:

        fsGroup: 1000
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: relay
          securityContext:
            {}
          image: "offchainlabs/nitro-node:v3.5.5-90ee45c"
          imagePullPolicy: Always
          command: ["echo"]
          args: ["hello"]
          lifecycle:

          volumeMounts:
          - name: config
            mountPath: /config/
          resources:
            {}
      volumes:
      - name: config
        configMap:
          name: relay

Nitro

Before

Prior to my changes, running the following:

helm template nitro ./ --namespace nitro --values ./test.values.yaml --debug

With the following values:

values.yaml
commandOverride:
  enabled: true
  command:
    - "echo"
  args:
    - "hello"

Would result in the following error:

output.error.yaml
---
# Source: nitro/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nitro
labels:
  helm.sh/chart: nitro
  app.kubernetes.io/name: nitro
  app.kubernetes.io/instance: nitro
  app.kubernetes.io/version: "v3.5.5-90ee45c"
  app.kubernetes.io/managed-by: Helm
annotations:
  nitro.arbitrum.io/desiredReplicas: "1"
spec:
serviceName: nitro-headless
replicas: 1
selector:
  matchLabels:
    app.kubernetes.io/name: nitro
    app.kubernetes.io/instance: nitro
podManagementPolicy: Parallel
updateStrategy:
  type: RollingUpdate
template:
  metadata:
    annotations:
      checksum/configmap: 2def090868624cc8ed8c95053793b17f7f02f6498256b28458b9bb462aa50d7b
    labels:
      app.kubernetes.io/name: nitro
      app.kubernetes.io/instance: nitro
      function: nitro
  spec:
    serviceAccountName: nitro
    securityContext:

      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    initContainers:

    containers:
      - name: nitro
        securityContext:
          {}
        image: "offchainlabs/nitro-node:v3.5.5-90ee45c"
        imagePullPolicy: Always
        command:
      - echo
        args:
      - hello
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP

        lifecycle:

        volumeMounts:
        - name: nitrodata
          mountPath: /home/user/data/
        - name: config
          mountPath: /config/
        resources:
          {}

    volumes:
    - name: config
      configMap:
        name: nitro
    terminationGracePeriodSeconds: 600
volumeClaimTemplates:
  - metadata:
      name: nitrodata
      labels:
        app: nitro
        release: nitro
        heritage: Helm
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: "500Gi"
Error: YAML parse error on nitro/templates/statefulset.yaml: error converting YAML to JSON: yaml: line 50: mapping values are not allowed in this context
helm.go:86: 2025-04-24 22:30:39.558221 -0400 EDT m=+0.042771697 [debug] error converting YAML to JSON: yaml: line 50: mapping values are not allowed in this context
YAML parse error on nitro/templates/statefulset.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
  helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:144
helm.sh/helm/v3/pkg/releaseutil.SortManifests
  helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:104
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
  helm.sh/helm/v3/pkg/action/action.go:168
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
  helm.sh/helm/v3/pkg/action/install.go:316
main.runInstall
  helm.sh/helm/v3/cmd/helm/install.go:317
main.newTemplateCmd.func2
  helm.sh/helm/v3/cmd/helm/template.go:95
github.com/spf13/cobra.(*Command).execute
  github.com/spf13/cobra@v1.8.1/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
  github.com/spf13/cobra@v1.8.1/command.go:1117
github.com/spf13/cobra.(*Command).Execute
  github.com/spf13/cobra@v1.8.1/command.go:1041
main.main
  helm.sh/helm/v3/cmd/helm/helm.go:85
runtime.main
  runtime/proc.go:283
runtime.goexit
  runtime/asm_amd64.s:1700
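The failure above is a plain YAML indentation problem rather than anything chart-specific: in the rendered manifest (see the command: and args: lines in the output above), the override list items land at a shallower indent than their keys, so the parser treats them as new items of the containers list and then rejects the following key with "mapping values are not allowed in this context". Roughly, the templates were producing the first form below, when the container spec needs the second (indentation illustrative):

# broken rendering before the fix
        command:
      - echo
        args:
      - hello

# valid rendering after the fix
          command: ["echo"]
          args: ["hello"]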

After

After this change has been made, we can apply these values:

values.yaml
commandOverride:
  enabled: true
  command:
    - "echo"
  args:
    - "hello"

Then we get the following in our statefulset; note containers.command and containers.args:

output.success.yaml
# Source: nitro/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nitro
  labels:
    helm.sh/chart: nitro
    app.kubernetes.io/name: nitro
    app.kubernetes.io/instance: nitro
    app.kubernetes.io/version: "v3.5.5-90ee45c"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nitro.arbitrum.io/desiredReplicas: "1"
spec:
  serviceName: nitro-headless
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nitro
      app.kubernetes.io/instance: nitro
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/configmap: 2def090868624cc8ed8c95053793b17f7f02f6498256b28458b9bb462aa50d7b
      labels:
        app.kubernetes.io/name: nitro
        app.kubernetes.io/instance: nitro
        function: nitro
    spec:
      serviceAccountName: nitro
      securityContext:

        fsGroup: 1000
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      initContainers:

      containers:
        - name: nitro
          securityContext:
            {}
          image: "offchainlabs/nitro-node:v3.5.5-90ee45c"
          imagePullPolicy: Always
          command: ["echo"]
          args: ["hello"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP

          lifecycle:

          volumeMounts:
          - name: nitrodata
            mountPath: /home/user/data/
          - name: config
            mountPath: /config/
          resources:
            {}

      volumes:
      - name: config
        configMap:
          name: nitro
      terminationGracePeriodSeconds: 600
  volumeClaimTemplates:
    - metadata:
        name: nitrodata
        labels:
          app: nitro
          release: nitro
          heritage: Helm
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "500Gi"

DAS

Before

Prior to my changes, running the following:

helm template das-test ./ --namespace das --values ./values.yaml --values ./values.jose.yaml --debug

With the following values:

values.yaml
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
    preStop:
      exec:
        command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"]

  commandOverride:
    enabled: true
    command:
      - "/usr/local/bin/daserver"
    args:
      - --flag1=true
      - --flag2=false

  configmap:
    enabled: false

  extraEnv:
    - name: NITROCI_DATA__AVAILABILITY_PARENT__CHAIN__NODE__URL
      valueFrom:
        secretKeyRef:
          name: ci-secret-nitro
          key: PARENT_CHAIN_URL_SEPOLIA

  ci:
    secretManifest:
      enabled: true

Would result in the following error:

output.error.yaml
---
# Source: das/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: das-test
  labels:
    helm.sh/chart: das
    app.kubernetes.io/name: das
    app.kubernetes.io/instance: das-test
    app.kubernetes.io/version: "v3.5.5-90ee45c"
    app.kubernetes.io/managed-by: Helm
spec:
  serviceName: "das-test"
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: das
      app.kubernetes.io/instance: das-test
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
      labels:
        app.kubernetes.io/name: das
        app.kubernetes.io/instance: das-test
    spec:
      serviceAccountName: das-test
      securityContext:

        fsGroup: 1000
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      initContainers:
      containers:
        - name: das
          securityContext:
            {}
          image: "offchainlabs/nitro-node:v3.5.5-90ee45c"
          imagePullPolicy: Always
          command:
        - /usr/local/bin/daserver
          args:
        - --flag=true
          lifecycle:

            postStart:
              exec:
                command:
                - /bin/sh
                - -c
                - echo Hello from the postStart handler > /usr/share/message
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - nginx -s quit; while killall -0 nginx; do sleep 1; done
          volumeMounts:
          - name: localfilestorage
            mountPath: /data
          resources:
            {}
      volumes:

      terminationGracePeriodSeconds: 600
  volumeClaimTemplates:
    - metadata:
        name: localfilestorage
        labels:
          app: das
          release: das-test
          heritage: Helm
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "100Gi"
Error: YAML parse error on das/templates/statefulset.yaml: error converting YAML to JSON: yaml: line 46: mapping values are not allowed in this context
helm.go:86: 2025-04-24 15:21:40.563851 -0400 EDT m=+0.048228533 [debug] error converting YAML to JSON: yaml: line 46: mapping values are not allowed in this context
YAML parse error on das/templates/statefulset.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
	helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:144
helm.sh/helm/v3/pkg/releaseutil.SortManifests
	helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:104
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
	helm.sh/helm/v3/pkg/action/action.go:168
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
	helm.sh/helm/v3/pkg/action/install.go:316
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:317
main.newTemplateCmd.func2
	helm.sh/helm/v3/cmd/helm/template.go:95
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.8.1/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.8.1/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.8.1/command.go:1041
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:85
runtime.main
	runtime/proc.go:283
runtime.goexit
	runtime/asm_amd64.s:1700

After

After this change has been made, we can apply these values:

values.yaml
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
    preStop:
      exec:
        command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"]

  commandOverride:
    enabled: true
    command:
      - "/usr/local/bin/daserver"
    args:
      - --flag1=true
      - --flag2=false

  configmap:
    enabled: false

  extraEnv:
    - name: NITROCI_DATA__AVAILABILITY_PARENT__CHAIN__NODE__URL
      valueFrom:
        secretKeyRef:
          name: ci-secret-nitro
          key: PARENT_CHAIN_URL_SEPOLIA

  ci:
    secretManifest:
      enabled: true

Then we get the following in our statefulset; note containers.command and containers.args:

output.success.yaml
---
# Source: das/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: das-test
  labels:
    helm.sh/chart: das
    app.kubernetes.io/name: das
    app.kubernetes.io/instance: das-test
    app.kubernetes.io/version: "v3.5.5-90ee45c"
    app.kubernetes.io/managed-by: Helm
spec:
  serviceName: "das-test"
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: das
      app.kubernetes.io/instance: das-test
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
      labels:
        app.kubernetes.io/name: das
        app.kubernetes.io/instance: das-test
    spec:
      serviceAccountName: das-test
      securityContext:

        fsGroup: 1000
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      initContainers:
      containers:
        - name: das
          securityContext:
            {}
          image: "offchainlabs/nitro-node:v3.5.5-90ee45c"
          imagePullPolicy: Always
          command: ["/usr/local/bin/daserver"]
          args: ["--flag1=true","--flag2=false"]
          env:

          - name: NITROCI_DATA__AVAILABILITY_PARENT__CHAIN__NODE__URL
            valueFrom:
              secretKeyRef:
                key: PARENT_CHAIN_URL_SEPOLIA
                name: ci-secret-nitro
          lifecycle:

            postStart:
              exec:
                command:
                - /bin/sh
                - -c
                - echo Hello from the postStart handler > /usr/share/message
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - nginx -s quit; while killall -0 nginx; do sleep 1; done
          volumeMounts:
          resources:
            {}
      volumes:

      terminationGracePeriodSeconds: 600
  volumeClaimTemplates:
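
As a quick spot check of just these fields (not part of this PR, and assuming the mikefarah yq v4 CLI is available), the rendered command can be pulled straight out of the template output:

# prints the container's command list from the rendered StatefulSet
helm template das-test ./ --namespace das --values ./values.yaml \
  | yq 'select(.kind == "StatefulSet") | .spec.template.spec.containers[0].command'

Swap .command for .args to check the arguments the same way.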

@jmtx1020 jmtx1020 requested a review from a team as a code owner April 24, 2025 19:42
@jmtx1020 jmtx1020 changed the title from "update logic for commandOverride" to "fix logic for commandOverride for all charts" Apr 25, 2025
@jmtx1020 jmtx1020 changed the title from "fix logic for commandOverride for all charts" to "fix commandOverride for all charts" Apr 25, 2025

github-actions bot commented Apr 25, 2025

Chart Installation Test succeeded ✅

The chart installation test for commit b6b98a28edd9583ef5e1d27b6049b3500078a122 has succeeded.

Changed charts: charts/das charts/nitro charts/relay

View workflow run
View unprivileged test run

@jmtx1020 jmtx1020 requested a review from chris-vest April 25, 2025 14:59
@jmtx1020 jmtx1020 requested a review from chris-vest April 25, 2025 15:12

@lambchr lambchr left a comment


amazing, thanks for fixing this! lgtm

@chris-vest

@jmtx1020 Looks like the nitro chart needs a version bump!


jmtx1020 commented Apr 28, 2025

> @jmtx1020 Looks like the nitro chart needs a version bump!

Will bump :) thanks for the headsup!

@jmtx1020 jmtx1020 requested review from chris-vest and lambchr April 28, 2025 16:21
@chris-vest chris-vest merged commit 6cfb1a5 into OffchainLabs:main Apr 29, 2025
7 checks passed
@chris-vest

@jmtx1020 Thanks again for your contribution!
