bentoml/yatai-deployment#114

Description
Yatai states that it works with Kubernetes clusters running version 1.20 or newer.
We are running 1.26 and get the following error:
I0214 08:25:37.323134 1 leaderelection.go:258] successfully acquired lease yatai-deployment/b292d523.yatai.ai
1.6763631373232365e+09 DEBUG events yatai-deployment-5bdcffb66d-rtfs5_6925bb24-d1cc-4a68-bdc5-92d57ff73ece became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"yatai-deployment","name":"b292d523.yatai.ai","uid":"4c621713-ea5f-4e63-9170-e585f3867a99","apiVersion":"coordination.k8s.io/v1","resourceVersion":"26214648"}, "reason": "LeaderElection"}
1.6763631373233054e+09 INFO Starting EventSource {"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v2alpha1.BentoDeployment"}
1.676363137323376e+09 INFO Starting EventSource {"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v1.Deployment"}
1.6763631373233864e+09 INFO Starting EventSource {"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v2beta2.HorizontalPodAutoscaler"}
1.6763631373233929e+09 INFO Starting EventSource {"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v1.Service"}
1.6763631373233986e+09 INFO Starting EventSource {"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment", "source": "kind source: *v1.Ingress"}
1.6763631373234038e+09 INFO Starting Controller {"controller": "bentodeployment", "controllerGroup": "serving.yatai.ai", "controllerKind": "BentoDeployment"}
I0214 08:25:38.374123 1 request.go:601] Waited for 1.046202298s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/batch/v1?timeout=32s
1.6763631387267134e+09 ERROR controller-runtime.source if kind is a CRD, it should be installed before calling Start {"kind": "HorizontalPodAutoscaler.autoscaling", "error": "no matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\""}
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/source/source.go:139
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext
/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:235
k8s.io/apimachinery/pkg/util/wait.poll
/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:582
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext
/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:547
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/source/source.go:132
I0214 08:25:49.776787 1 request.go:601] Waited for 1.046409144s due to client-side throttling, not priority and fairness, request: GET:https://10.96.0.1:443/apis/maps.k8s.elastic.co/v1alpha1?timeout=32s
1.676363150129143e+09 ERROR controller-runtime.source if kind is a CRD, it should be installed before calling Start {"kind": "HorizontalPodAutoscaler.autoscaling", "error": "no matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\""}
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/source/source.go:139
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext
/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:235
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext
/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:662
k8s.io/apimachinery/pkg/util/wait.poll
/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:596
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext
/go/pkg/mod/k8s.io/apimachinery@v0.25.0/pkg/util/wait/wait.go:547
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/source/source.go:132
1.6763631517485435e+09 INFO start cleaning up abandoned runner services {"func": "doCleanUpAbandonedRunnerServices"}
1.6763631517516932e+09 INFO finished cleaning up abandoned runner services {"func": "doCleanUpAbandonedRunnerServices"}
After some digging, I found kubernetes/ingress-nginx#8599, which states that you should upgrade to the autoscaling/v2 API if you are running Kubernetes higher than 1.23 (autoscaling/v2beta2 was removed entirely in 1.26). Our cluster indeed only serves v1 and v2 of the core autoscaling group; a sketch of the corresponding controller-side change follows the listing below.
$ kubectl get apiservices | grep autoscaling
v1.autoscaling Local True 81d
v1alpha1.autoscaling.k8s.elastic.co Local True 4d
v2.autoscaling Local True 81d
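For reference, this is roughly what the autoscaling/v2 switch recommended in the linked issue would look like on the controller side. It is only a minimal sketch, assuming the operator wires its watches through SetupWithManager as in a standard controller-runtime project; the reconciler type name and the import path of the BentoDeployment API package are assumptions, not the actual yatai-deployment code.

```go
// Minimal sketch (not the actual yatai-deployment source): switching the HPA
// watch from the removed autoscaling/v2beta2 API to autoscaling/v2, which
// Kubernetes 1.23+ serves.
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	autoscalingv2 "k8s.io/api/autoscaling/v2" // previously: k8s.io/api/autoscaling/v2beta2
	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	ctrl "sigs.k8s.io/controller-runtime"

	servingv2alpha1 "github.com/bentoml/yatai-deployment/apis/serving/v2alpha1" // assumed import path
)

type BentoDeploymentReconciler struct {
	// client, scheme, etc. omitted for brevity
}

func (r *BentoDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Actual reconciliation logic elided.
	return ctrl.Result{}, nil
}

func (r *BentoDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Watch the owned resources via API versions that a 1.26 cluster actually
	// serves; the only change relative to the error above is the
	// autoscaling/v2 HorizontalPodAutoscaler.
	return ctrl.NewControllerManagedBy(mgr).
		For(&servingv2alpha1.BentoDeployment{}).
		Owns(&appsv1.Deployment{}).
		Owns(&autoscalingv2.HorizontalPodAutoscaler{}). // was &autoscalingv2beta2.HorizontalPodAutoscaler{}
		Owns(&corev1.Service{}).
		Owns(&networkingv1.Ingress{}).
		Complete(r)
}
```

This is only to illustrate the API-version bump; the real fix of course needs to land in yatai-deployment itself (including any HPA manifests it renders with apiVersion: autoscaling/v2beta2).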