Redis: Socket already opened #240
Replies: 10 comments 2 replies
-
Check your Redis config for any limit on connections. I have a Kubernetes setup with two pods running Next.js and one pod running Redis, and I don't encounter any connection problems.
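If it helps, here is a hedged sketch of how to inspect that limit from Node with node-redis v4 (the env var name is an assumption; CONFIG GET maxclients is a standard Redis command):
const { createClient } = require("redis");

async function checkMaxClients() {
  const client = createClient({ url: process.env.REDIS_URL });
  await client.connect();
  // CONFIG GET maxclients returns e.g. { maxclients: "10000" }
  console.log(await client.configGet("maxclients"));
  await client.disconnect();
}

checkMaxClients().catch(console.error);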
-
Hey, @AlexisWalravens! I can't help but agree that the steps to reproduce are complicated. But maybe this dead simple solution will help. Add the following check to your cache handler before calling connect():
// Check if the socket is already opened
if (!client.isOpen) {
  await client.connect();
}
Write back if it helps!
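For context, a minimal sketch of where that guard can live, assuming a node-redis v4 client created at module scope (the helper name and URL source are illustrative):
const { createClient } = require("redis");

const client = createClient({ url: process.env.REDIS_URL });
client.on("error", (error) => console.error("Redis error:", error));

async function getConnectedClient() {
  // connect() throws "Socket already opened" if called on an already
  // connected client, so check isOpen first.
  if (!client.isOpen) {
    await client.connect();
  }
  return client;
}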
-
I have found that if your Redis server is unreachable during the build process, Redis will throw an error.
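One way to guard against that, as a hedged sketch (the timeout value and the swallow-and-fall-back behavior are my assumptions, not library defaults): fail fast on connect and catch the error so the build can proceed with the local cache only.
const { createClient } = require("redis");

const client = createClient({
  url: process.env.REDIS_URL,
  socket: { connectTimeout: 1000 }, // fail fast instead of hanging the build
});

client.on("error", (error) => console.error("Redis error:", error));

async function connectSafely() {
  try {
    if (!client.isOpen) {
      await client.connect();
    }
  } catch (error) {
    // Redis is unreachable (e.g. during `next build`); log and let the
    // handler fall back to the local LRU cache.
    console.warn("Redis unreachable:", error.message);
  }
}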
-
Hey! @ezeparziale I don't think it's the connection limit, since the instance is already used by another service; I just added Next.js to it. But I'll make sure, thanks! @better-salmon I will also try this solution, but I won't be able to test it until early January, as it's a work project and I'm on holiday until then. Will let you know!
-
I was also encountering the same issue. Using @better-salmon's workaround resolved it.
-
@rutwik-fale-amla, it is possible that your app is falling back to the default file-system cache handler instead of your Redis one. Please check whether your custom handler is actually loaded, for example by running the app with the NEXT_PRIVATE_DEBUG_CACHE=1 environment variable.
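For a quick sanity check, you can also log from inside onCreation; this is an illustrative snippet, not library-provided diagnostics. If the log line never appears during `next build` or `next start`, Next.js is using its default cache instead of your handler:
const { IncrementalCache } = require("@neshca/cache-handler");
const createLruCache = require("@neshca/cache-handler/local-lru").default;

IncrementalCache.onCreation(async () => {
  // If this never prints, the custom handler was not picked up.
  console.log("custom cache handler loaded");
  return { cache: [createLruCache({ useTtl: false })], useFileSystem: true };
});

module.exports = IncrementalCache;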
-
@better-salmon, I have checked by adding NEXT_PRIVATE_DEBUG_CACHE=1 to the .env file, and I get the following log. "cache-handler-redis" is the name of the cache handler file.
-
This is my cache handler:
/* eslint-disable no-console */
/* eslint-disable @typescript-eslint/no-var-requires */
const { IncrementalCache } = require("@neshca/cache-handler");
const createLruCache = require("@neshca/cache-handler/local-lru").default;
const { reviveFromBase64Representation, replaceJsonWithBase64 } = require("@neshca/json-replacer-reviver");
const { createClient } = require("redis");
const REVALIDATED_TAGS_KEY = "sharedRevalidatedTags";
const client = createClient({
url: "redis://localhost:6379",
});
client.on("error", (error) => {
console.error("Redis error:", error);
});
IncrementalCache.onCreation(async () => {
// read more about TTL limitations https://caching-tools.github.io/next-shared-cache/configuration/ttl
const useTtl = false;
if (!client.isOpen) {
await client.connect();
}
const localCache = createLruCache({
useTtl,
});
function assertClientIsReady() {
if (!client.isReady) {
throw new Error("Redis client is not ready");
}
}
const redisCache = {
name: "cache-handler-redis",
async get(key) {
assertClientIsReady();
const result = await client.get(key);
if (!result) {
return null;
}
return JSON.parse(result, reviveFromBase64Representation);
},
async set(key, value, ttl) {
assertClientIsReady();
await client.set(key, JSON.stringify(value, replaceJsonWithBase64), useTtl && typeof ttl === "number" ? { EX: ttl } : undefined);
},
async getRevalidatedTags() {
assertClientIsReady();
const sharedRevalidatedTags = await client.hGetAll(REVALIDATED_TAGS_KEY);
const entries = Object.entries(sharedRevalidatedTags);
const revalidatedTags = entries.reduce((acc, [tag, revalidatedAt]) => {
acc[tag] = Number(revalidatedAt);
return acc;
}, {});
return revalidatedTags;
},
async revalidateTag(tag, revalidatedAt) {
assertClientIsReady();
await client.hSet(REVALIDATED_TAGS_KEY, {
[tag]: revalidatedAt,
});
},
};
return {
cache: [redisCache, localCache],
useFileSystem: !useTtl,
};
});
module.exports = IncrementalCache;
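In case it helps others wire this up: with next 14.0.x the handler path is still configured through the experimental flag (it became the stable cacheHandler option in 14.1). A minimal sketch, assuming the file above is saved as cache-handler-redis.js next to next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    // Point Next.js at the custom cache handler defined above.
    incrementalCacheHandlerPath: require.resolve("./cache-handler-redis.js"),
  },
};

module.exports = nextConfig;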
-
I moved the client.connect() call out of onCreation and up to module scope, and await the resulting promise inside onCreation instead. Note that useTtl here is a function that derives the TTL from the entry's maxAge:
import {IncrementalCache} from "@neshca/cache-handler";
import createRedisCache from "@neshca/cache-handler/redis-stack";
import createLruCache from "@neshca/cache-handler/local-lru";
import {createClient} from "redis";
const client = createClient({url: process.env.REDIS_URL});
client.on("error", (error) => console.error("Redis error:", error.message));
const connectPromise = client.connect();
function useTtl(maxAge) {
return maxAge * 1.5;
}
IncrementalCache.onCreation(async (cacheCreationContext) => {
await connectPromise;
const redisCache = await createRedisCache({client, useTtl});
const localCache = createLruCache({useTtl});
return {
cache: [redisCache, localCache],
useFileSystem: false,
};
});
export default IncrementalCache;
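One note on this design: because client.connect() runs at module scope, the connection attempt starts as soon as the handler file is imported, and every onCreation call awaits the same promise, so the socket is opened only once per process no matter how many times the handler is created.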
-
FYI, I noticed that the example app for this library has a bug that can lead to "Socket already opened" errors. If a socket exception occurs after connecting and your code rethrows it from the client's error handler, the client is left in a bad state.
Rethrowing the exception bypasses the reconnection mechanism in node-redis, and you're left with a socket that has isReady = false but isOpen = true - this true value being why any further call to client.connect() will report that the socket is open (when in fact, it's not). Worse, any app built on that example code will never reconnect to Redis after any error, because the normal reconnection mechanism never gets called.
@AlexisWalravens "When a second kubernetes pod is spawned the memory of the first pod is copied" - that isn't how Kubernetes normally works. That kind of thing happens when you launch a child process - you can end up with sockets opened by the parent being open in the child - but that isn't what happens in Kubernetes (pods are children of your CRI, e.g. containerd/dockerd, not of each other), and it couldn't cause this error message anyway, because the message doesn't come from the underlying socket; it comes from the node-redis wrapper around it: https://github.com/redis/node-redis/blob/6f79b49f731a1aaf57f42e16ea72774f062e57a1/packages/client/lib/client/socket.ts#L140
I think it's more likely you're running into some variant of the bug in the example app above.
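For anyone hitting this, a hedged sketch of the safer pattern: log errors instead of rethrowing, and let node-redis's built-in reconnectStrategy handle recovery (the backoff numbers are illustrative, not defaults):
const { createClient } = require("redis");

const client = createClient({
  url: process.env.REDIS_URL,
  socket: {
    // Exponential backoff between reconnect attempts, capped at 2 seconds.
    reconnectStrategy: (retries) => Math.min(retries * 50, 2000),
  },
});

// Log and swallow: throwing from here bypasses the built-in reconnection
// and leaves the socket with isOpen = true but isReady = false.
client.on("error", (error) => console.error("Redis error:", error.message));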
-
Brief Description of the Bug
Hi, we use Next.js deployed with Docker on Kubernetes, and we use this package with the redis-string example, since the Next.js docs recommend a different cache configuration when running on Kubernetes.
When a second Kubernetes pod is spawned, the memory of the first pod is copied, and as such the Redis client connection too.
As soon as the second pod starts to talk to Redis, I get a
Socket already opened
error and the pod crashes because it tried to use the same connection socket as the first. I don't really know if it's the responsibility of this package to solve this, but maybe you can help me.
Basically, we would need a way to create a new connection when a new pod spawns, because the connection, if I understand correctly, is only initiated once when the Next.js process starts.
Thanks.
Severity
Major
Frequency of Occurrence
Always (when a second pod spawns)
Steps to Reproduce
Complicated...
Expected vs. Actual Behavior
A way to create another connection when a pod starts
Environment:
@neshca/cache-handler version: 0.6.1
next version: 14.0.4
Dependencies and Versions
redis: 4.6.12