Description
What happened:
My API Ingresses started returning "400 Bad Request: No required SSL certificate was sent" at the NGINX level after I upgraded the controller from v1.10.0 to v1.12.0.
Both the mTLS and the non-mTLS APIs worked as expected on v1.10.0, but neither works on v1.12.0.
Below are my Ingress configurations. Note that both Ingresses serve the same host; a curl reproduction sketch follows the two manifests.
- Ingress without mTLS:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-internal
spec:
  ingressClassName: nginx
  rules:
  - host: api.test.dev.edifecs.cloud
    http:
      paths:
      - path: /Path1
        pathType: Prefix
        backend:
          service:
            name: test-276-277-rt-profile-service
            port:
              number: 9069
      - path: /Path2
        pathType: Prefix
        backend:
          service:
            name: test-270-271-rt-profile-service
            port:
              number: 9072
      - path: /Path3
        pathType: Prefix
        backend:
          service:
            name: test-278-rt-profile-service
            port:
              number: 9073
```
- Ingress with mTLS:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
    nginx.ingress.kubernetes.io/auth-tls-secret: test/ca-secret
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
  name: test-ingress-internal
spec:
  ingressClassName: nginx
  rules:
  - host: api.test.dev.edifecs.cloud
    http:
      paths:
      - path: /Path1
        pathType: Prefix
        backend:
          service:
            name: test-276-277-rt-profile-service
            port:
              number: 9069
      - path: /Path2
        pathType: Prefix
        backend:
          service:
            name: test-270-271-rt-profile-service
            port:
              number: 9072
      - path: /Path3
        pathType: Prefix
        backend:
          service:
            name: test-278-rt-profile-service
            port:
              number: 9073
  tls:
  - hosts:
    - api.test.dev.edifecs.cloud
    secretName: internal-api-tls
```
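Both manifests share the host api.test.dev.edifecs.cloud. This is roughly how I exercise the two cases with curl; a minimal sketch, where client.crt/client.key stand in for a certificate issued by the CA stored in test/ca-secret:

```bash
# Non-mTLS path, no client certificate.
# Returned the backend response on v1.10.0; on v1.12.0 it now fails with
# "400 Bad Request: No required SSL certificate was sent".
curl -v https://api.test.dev.edifecs.cloud/Path1

# mTLS case: same host, presenting a client certificate issued by the CA
# referenced in the auth-tls-secret annotation (test/ca-secret).
# This also worked on v1.10.0 but fails after the upgrade to v1.12.0.
curl -v --cert client.crt --key client.key https://api.test.dev.edifecs.cloud/Path1
```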
What you expected to happen:
After upgrading to v1.12.0, the APIs should behave as they did on v1.10.0, for both the mTLS and the non-mTLS Ingress.
NGINX Ingress controller version (exec into the pod and run `/nginx-ingress-controller --version`): v1.12.0
Kubernetes version (use `kubectl version`): 1.30
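In case the exact build matters, this is how I would pull it from the pod (namespace and pod name below are placeholders for this cluster):

```bash
# Find the controller pod, then print the exact controller build.
# The "ingress-nginx" namespace and <controller-pod> are placeholders.
kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
kubectl -n ingress-nginx exec <controller-pod> -- /nginx-ingress-controller --version
```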
Environment:
- Cloud provider or hardware configuration: AWS
- How was the ingress-nginx-controller installed: with Helm
- Output of `helm -n <ingresscontrollernamespace> get values <helmreleasename>`:
```yaml
controller:
  allowSnippetAnnotations: true
  autoscaling:
    enabled: true
    maxReplicas: 2
  config:
    enable-real-ip: true
    enable-underscores-in-headers: true
    log-format-upstream: '{"timestamp": "$time_iso8601", "requestID": "$req_id",
      "proxyUpstreamName": "$proxy_upstream_name", "proxyAlternativeUpstreamName":
      "$proxy_alternative_upstream_name", "upstreamStatus": $upstream_status,
      "upstreamAddr": "$upstream_addr", "httpRequest": {"requestMethod":
      "$request_method", "requestUrl": "$host$request_uri", "status": $status,
      "requestSize": $request_length, "responseSize": $upstream_response_length,
      "userAgent": "$http_user_agent", "remoteIp": "$remote_addr", "referer":
      "$http_referer", "latency": "$upstream_response_time s", "protocol":
      "$server_protocol"}}'
    use-forwarded-headers: true
    use-proxy-protocol: true
    hsts-preload: true
  extraArgs:
    default-ssl-certificate: ingress/default-tls
  metrics:
    enabled: true
  podAnnotations:
    '"prometheus.io/port"': 10254
    '"prometheus.io/scrape"': true
  replicaCount: 1
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: false
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    loadBalancerSourceRanges:
    - 10.0.0.0/8
  tolerations:
  - effect: NoSchedule
    key: mainnode
    operator: Exists
```
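To help narrow this down, a diagnostic sketch for checking how client-certificate verification is rendered into the generated NGINX configuration (namespace and pod name are placeholders):

```bash
# Grep the generated nginx.conf inside the controller pod for the shared
# server block and any ssl_verify_client directives it contains.
kubectl -n ingress-nginx exec <controller-pod> -- \
  grep -n -e 'server_name api.test.dev.edifecs.cloud' -e 'ssl_verify_client' /etc/nginx/nginx.conf
```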