make the HOST variable configurable #5529


Closed
wants to merge 3 commits

Conversation


@menardorama menardorama commented Sep 5, 2024

make the HOST variable configurable

Kubernetes clusters can be single stack or dual stack. This change addresses the recent change where a dual-stack cluster is assumed to be the default.

The setting is now configurable in the chart.

Relevant issues

Fixes #5517

Type

🐛 Bug Fix

Changes

Add a value for the HOST env variable in order to make the Helm chart less restrictive about the environment where it is deployed.
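For reference, a minimal sketch of what the change amounts to, assuming the value is named listen at the root of values.yaml (as discussed later in this thread) and consumed by the deployment template as in the review diff below:

    # values.yaml (root level, next to masterkey)
    # Address the HOST env variable is set to; empty falls back to the chart default "::"
    listen: ""

    # templates/deployment.yaml (container env section)
    - name: HOST
      value: "{{ .Values.listen | default "::" }}"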

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

helm template . --set listen="0.0.0.0"

      containers:
        - name: litellm
          securityContext:
            {}
          image: "ghcr.io/berriai/litellm-database:main-v1.43.18"
          imagePullPolicy: IfNotPresent
          env:
            - name: HOST
              value: "0.0.0.0"
            - name: PORT
              value: "4000"

helm template .

      containers:
        - name: litellm
          securityContext:
            {}
          image: "ghcr.io/berriai/litellm-database:main-v1.43.18"
          imagePullPolicy: IfNotPresent
          env:
            - name: HOST
              value: ":"
            - name: PORT
              value: "4000"

krrishdholakia and others added 2 commits September 4, 2024 22:30

vercel bot commented Sep 5, 2024

The latest updates on your projects:

litellm: ✅ Ready (updated Sep 5, 2024 7:48am UTC)

Contributor

@ishaan-jaff ishaan-jaff left a comment


had a follow-up question

@@ -102,7 +102,7 @@ spec:
           imagePullPolicy: {{ .Values.image.pullPolicy }}
           env:
             - name: HOST
-              value: "::"
+              value: "{{ .Values.listen | default "::" }}"
Contributor


Would this work for IPv4 addresses?

Author


You just have to set the value to 0.0.0.0 for IPv4.
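For example, on an IPv4-only cluster (the release name and chart path here are illustrative):

    helm install litellm . --set listen="0.0.0.0"

or check the rendered env first:

    helm template . --set listen="0.0.0.0" | grep -A1 'name: HOST'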



I think the default should be the original 0.0.0.0, as this PR addresses a regression introduced when it was replaced with ::
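That would amount to flipping the fallback in the template, roughly:

    - name: HOST
      value: "{{ .Values.listen | default "0.0.0.0" }}"

Dual-stack clusters could then opt back into "::" with --set listen="::".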

@qdrddr

qdrddr commented Sep 5, 2024

are you sure the listen: "" in the values.yaml file should be under ingress and not the image? @menardorama & @ishaan-jaff

bcdeb63#diff-2105d8e1ded03b7b26c7c794db81f24dd466f6f793fd07a37120863950c2429dR59-R80

@krrishdholakia krrishdholakia changed the base branch from main to litellm_dev_05_08_2024 September 5, 2024 17:53
@krrishdholakia
Contributor

@menardorama @qdrddr @laurb9 is this PR ready for merge? Seems like a few open comments. Ideally we can have this as part of tonight's release train

@menardorama
Author

are you sure the listen: "" in the values.yaml file should be under ingress and not the image? @menardorama & @ishaan-jaff

bcdeb63#diff-2105d8e1ded03b7b26c7c794db81f24dd466f6f793fd07a37120863950c2429dR59-R80

I don't understand...

You mean the place in the values file? The listen address has nothing to do with ingress, but whatever, as long as it fixes the issue...
Actually my CI is patching the deployment after helm install in order to address the issue.

@qdrddr

qdrddr commented Sep 6, 2024

are you sure the listen: "" in the values.yaml file should be under ingress and not the image? @menardorama & @ishaan-jaff
bcdeb63#diff-2105d8e1ded03b7b26c7c794db81f24dd466f6f793fd07a37120863950c2429dR59-R80

I don't understand...

You mean the place in the values file? The listen address has nothing to do with ingress, but whatever, as long as it fixes the issue... Actually my CI is patching the deployment after helm install in order to address the issue.

Yes. I was asking whether you have selected the correct placement for the variable in the values file. Currently, listen is placed under ingress. If you click on the link, you'll see it is placed under ingress. Shouldn't it be under the image item?

bcdeb63#diff-2105d8e1ded03b7b26c7c794db81f24dd466f6f793fd07a37120863950c2429dR59-R80

[Screenshot: CleanShot 2024-09-06 at 09 08 19]

@laurb9

laurb9 commented Sep 6, 2024

Yes. I was asking if you have selected the correct placement for the variable in the values file. Currently the listen is placed under ingress. If you click on the link you'll see it is placed under the ingress. Shouldn't it be under the image item?

It is actually at root level like masterkey, not under ingress. GitHub misinterprets where the comment is located based on the last key it saw.

@menardorama
Author

Yes. I was asking if you have selected the correct placement for the variable in the values file. Currently the listen is placed under ingress. If you click on the link you'll see it is placed under the ingress. Shouldn't it be under the image item?

It is actually at root level like masterkey, not under ingress. GitHub misinterprets where the comment is located based on the last key it saw.

Exactly. Maybe it would be a good idea to have a dedicated section where masterkey, the listen address, and future litellm runtime settings are located together?

@laurb9

laurb9 commented Sep 6, 2024

Exactly. Maybe it would be a good idea to have a dedicated section where masterkey, the listen address, and future litellm runtime settings are located together?

Something like a proxy config? Moving masterkey would be a breaking change and out of scope, though.

@menardorama
Author

Exactly...

Even if that would be the correct way...

What about a section called runtimeConfig?

If things won't move a lot on the runtime side, this can be left at the root level.
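A rough sketch of what such a section could look like (a purely hypothetical layout; nothing below exists in the chart today, and moving masterkey is out of scope per the comments above):

    # values.yaml (hypothetical layout)
    runtimeConfig:
      listen: "0.0.0.0"   # address the proxy binds to (HOST env variable)
      # masterkey: ""     # would logically belong here, but moving it is a breaking change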

@krrishdholakia
Contributor

Hey @menardorama @laurb9 @qdrddr is this PR ready for review / merge?

If there's additional improvements we can make, would it help to scope that into a separate PR?

@menardorama
Author

In my opinion, changing where the master key setting is defined would have a huge impact on all deployments.

So adding another setting at the root level may be the best solution for now.

Regarding the default value, having 0.0.0.0 as the default and allowing it to be set to "::" may be the best choice.

If yes, I can update my PR and you can merge it.

Is this plan OK for you?

@krrishdholakia
Contributor

Hey @menardorama, I merged in #5587 which should address the host name issue. Closing this PR as it seemed to address the same issue. If you'd like to submit another PR for improvements on the existing Helm chart, that would be great.

@menardorama
Author

Hey @menardorama, I merged in #5587 which should address the host name issue. Closing this PR as it seemed to address the same issue. If you'd like to submit another PR for improvements on the existing Helm chart, that would be great.

Hi,
No problem, but new settings need to be defined in values.yaml, even if only as comments; this avoids having to deep dive into the Helm chart to understand and find the setting.
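For example, something along these lines in values.yaml, even if commented out (a sketch only; the actual key name and default should match whatever #5587 implemented):

    # Address the litellm proxy binds to (sets the HOST env variable).
    # Use "0.0.0.0" for IPv4-only clusters or "::" for dual-stack.
    # listen: "::"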

Development

Successfully merging this pull request may close these issues.

[Bug]: k8s dual stack IPv4/6 helm chart v0.1.413 and later readiness probe
5 participants