
listeners doesn't work in an ipv6 disabled cluster #11175


Open
LNJAD opened this issue May 7, 2025 · 5 comments · May be fixed by #11196
Labels
Type: Bug Something isn't working

Comments

LNJAD commented May 7, 2025

kgateway version

v2.0.0

Kubernetes Version

v1.29.8+rke2r1

Describe the bug

I'm working in an rke2 cluster where we have completely disabled IPv6.
When trying to install kgateway and run the simplest test by following the instructions with the sample-app, I encountered an error: after deploying a gateway, the listener fails to configure.

Here is the log generated by the container:

[external/envoy/source/extensions/config_subscription/grpc/grpc_subscription_impl.cc:138] gRPC config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) http: malformed IP address: ::

Additionally, here is an extract of the Envoy configuration dump:

"dynamic_listeners": [
    {
     "name": "http",
     "error_state": {
      "failed_configuration": {
       "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
       "name": "http",
       "address": {
        "socket_address": {
         "address": "::",
         "port_value": 8080,
         "ipv4_compat": true
        }
       },
       "filter_chains": [
        {
         "filters": [
          {
           "name": "envoy.filters.network.http_connection_manager",
           "typed_config": {
            "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
            "stat_prefix": "http",
            "rds": {
             "config_source": {
              "ads": {},
              "resource_api_version": "V3"
             },
             "route_config_name": "http"
            },
            "http_filters": [
             {
              "name": "envoy.filters.http.router",
              "typed_config": {
               "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
              }
             }
            ],
            "use_remote_address": true,
            "normalize_path": true,
            "merge_slashes": true
           }
          }
         ],
         "name": "http"
        }
       ]
      },
      "last_update_attempt": "2025-05-07T08:12:46.922Z",
      "details": "malformed IP address: ::"
     }
    }
]
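The "malformed IP address: ::" rejection can be reproduced outside Envoy with a plain socket. This is a diagnostic sketch (not part of kgateway or Envoy) that probes whether the node's kernel can bind the IPv6 wildcard address at all, which is what the rejected listener above asks for:

```python
import socket

def kernel_supports_ipv6() -> bool:
    """Try to create and bind an AF_INET6 socket on "::", mirroring
    what the listener config above asks Envoy to do. On a node booted
    with ipv6.disable=1 this fails with OSError."""
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    except OSError:
        return False  # kernel built or booted without the IPv6 stack
    try:
        s.bind(("::", 0))  # port 0: any free port, we only probe bindability
        return True
    except OSError:
        return False
    finally:
        s.close()

print(kernel_supports_ipv6())
```

On the affected rke2 nodes this should print False, matching the Envoy error.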

Expected Behavior

I expected the gateway to be deployed successfully, with the listener properly configured and without any errors. The sample-app should work seamlessly when following the provided instructions.

Steps to reproduce the bug

  1. Disable IPv6 on the Kubernetes cluster at the kernel level
  2. Install kgateway
  3. Test kgateway with the sample-app

Additional Environment Detail

No response

Additional Context

No response

LNJAD added the Type: Bug (Something isn't working) label May 7, 2025
ilrudie (Contributor) commented May 7, 2025

Thanks for the bug report! At a quick glance, it seems you'd specifically need a node where the kernel has IPv6 disabled to hit this. Otherwise you can usually still listen on :: and the kernel handles it for you, even when single-stack Kubernetes will never assign a v6 address.
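For illustration, here is a minimal sketch of that dual-stack behavior with plain Python sockets (nothing kgateway-specific): an IPv6 socket bound to :: with IPV6_V6ONLY off accepts an IPv4 client as a v4-mapped address, which works only while the kernel's IPv6 stack is present.

```python
import socket

# Listener: IPv6 wildcard bind with IPV6_V6ONLY disabled, roughly what
# the "address": "::" plus "ipv4_compat": true listener requests.
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))   # raises OSError if the kernel has IPv6 disabled
srv.listen(1)
port = srv.getsockname()[1]

# IPv4-only client connecting through the v4 loopback address.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, peer = srv.accept()
print(peer[0])        # on Linux the v4 peer shows up as a ::ffff: mapped address
conn.close()
cli.close()
srv.close()
```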

LNJAD (Author) commented May 7, 2025

It is indeed on a node where IPv6 is disabled at the kernel level; that would explain why it couldn't handle ::.

AYDEV-FR commented May 9, 2025

Hi,

I have exactly the same problem. For security hardening reasons, IPv6 is disabled at the kernel level on each of my nodes.
Have you found a workaround?

yuval-k (Contributor) commented May 9, 2025

We're thinking of adding a global setting to change the bind address to IPv4. Would that be enough, or does this need to be per-gateway?

AYDEV-FR commented May 9, 2025

From my perspective, a global setting to control the bind address would be sufficient. If per-gateway customization becomes necessary, it could be exposed via the GatewayParameters CRD, but I don't currently see a strong use case for it on my end. All of my nodes have IPv6 disabled, so I don't anticipate needing to override the default behavior on a per-gateway basis.

@lgadban lgadban moved this from Backlog to Planned in Kgateway Planning May 12, 2025