
Connection pool biased towards slow connections #8244

@alexbrasetvik

Description

This paper on Metastable Failures in Distributed Systems has an excellent war story about connection pooling logic that unintentionally favors slow connections:

Quoting from section 2.4 on Link Imbalance:

[I]f one of the network links is congested, then the queries that get sent across that link will reliably complete last. This interacts with the connection pool’s MRU policy where the most recently used connection is preferred for the next query. As a result, each spike of misses rearranges the connection pool so that the highest-latency links are at the top of the stack.
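To make the quoted mechanism concrete, here's a toy sketch of a completion-ordered MRU stack (hypothetical names and latencies; this is neither okhttp nor the paper's code):

```kotlin
import java.util.ArrayDeque

// Toy illustration of the paper's MRU hazard: after each spike, connections
// complete in latency order and are re-pushed as they finish, so the slowest
// link ends up on top of the MRU stack and is preferred for the next query.

data class Link(val name: String, val latencyMs: Int)

fun main() {
    val links = listOf(Link("fast-a", 10), Link("fast-b", 12), Link("congested", 300))
    val mruStack = ArrayDeque<Link>()          // top = most recently used

    repeat(3) { spike ->
        // Each link serves one query from the spike; the fastest completes first.
        for (link in links.sortedBy { it.latencyMs }) {
            mruStack.remove(link)              // taken out of the stack while in use
            mruStack.push(link)                // re-pushed on completion
        }
        println("after spike $spike the preferred connection is: ${mruStack.peek().name}")
    }
}
```

The congested link always completes last, so it's always on top of the stack when the next spike arrives.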

We experienced something similar using okhttp as a client, where disabling okhttp's connection pooling completely at around 15:20 had a very obvious effect on load distribution. The metric is from the downstream server's perspective:

[graph: per-pod load distribution, from the downstream server's perspective; it evens out once connection pooling is disabled at ~15:20]

Our setup involves kube-proxy, which provides a virtual IP and round-robins TCP connections across the pods backing the service, i.e. all okhttp knows about is a single IP. The nature of the service is that some requests are naturally slow while most are fast, i.e. there's an expected, significant difference between p95 and p50.

Before ultimately disabling connection pooling (as a stop-gap, and to confirm our suspicions), we reduced the keep-alive timeout, though not as aggressively as in this post about a similar problem with okhttp's connection pooling: Kubernetes network load balancing using OkHttp client. That had some effect, but the bias remained:

[graph: per-pod load distribution after reducing the keep-alive timeout; the imbalance shrinks but remains]
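For reference, here's roughly what the two mitigations look like with okhttp's public API (the values below are illustrative, not the exact ones we ran with):

```kotlin
import okhttp3.ConnectionPool
import okhttp3.OkHttpClient
import java.util.concurrent.TimeUnit

// Mitigation 1: shorten the keep-alive so idle connections are recycled
// quickly, giving kube-proxy more chances to re-balance new connections.
val shortKeepAlive = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(5, 30, TimeUnit.SECONDS))
    .build()

// Mitigation 2 (the stop-gap): effectively disable pooling by keeping no
// idle connections, so each request dials a freshly round-robined connection.
val noPooling = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(0, 1, TimeUnit.MILLISECONDS))
    .build()
```

With maxIdleConnections set to 0, nothing is kept alive between requests, so every request goes out on a connection that kube-proxy has just assigned to a pod.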

When the pool selects a connection, it picks the first available one. The idle-connection cleanup evicts the idle connections that have been used least recently.

Our theory is this: when cleanup happens, the connections to slow pods are more likely to be active, and thus not eligible for eviction. Over time, their connections gravitate towards the front of the queue and get selected increasingly frequently, making the slow pods even slower and their connections even more likely to be selected – much like the story from the linked paper.
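To sanity-check the first step of that theory, here's a tiny Monte Carlo sketch (made-up numbers and crude assumptions, not okhttp internals): even if every connection receives requests at the same rate, a connection to a slow pod is far more likely to be mid-request whenever cleanup samples the pool:

```kotlin
import kotlin.math.ln
import kotlin.random.Random

fun main() {
    val rng = Random(1)
    val requestsPerSecond = 5.0        // same per-connection request rate
    val latencies = mapOf("fast pod" to 0.050, "slow pod" to 0.500)  // seconds

    for ((pod, latency) in latencies) {
        val trials = 100_000
        var busyAtCleanup = 0
        repeat(trials) {
            // With Poisson-ish arrivals, the time since the last request
            // started is exponentially distributed. (Crude: this ignores
            // that a busy connection wouldn't be handed another request.)
            val sinceLastStart = -ln(rng.nextDouble()) / requestsPerSecond
            if (sinceLastStart < latency) busyAtCleanup++
        }
        println("$pod connection busy when cleanup runs: ${100 * busyAtCleanup / trials}%")
    }
}
```

With these numbers the slow-pod connection is busy at roughly 92% of cleanup instants versus roughly 22% for the fast one, so eviction keeps falling on fast-pod connections and the surviving pool skews slow.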

Thanks a lot for making okhttp, and sorry I don't have a running test, but wanted to at least share our suspicions.
