
httpServer.close triggers callback prematurely #59746

@killer-it44

Description

Version

v22.11.0

Platform

Darwin 24.6.0 Darwin Kernel Version 24.6.0: Mon Jul 14 11:30:51 PDT 2025; root:xnu-11417.140.69~1/RELEASE_ARM64_T8112 arm64

Subsystem

http

What steps will reproduce the bug?

Run the following script:

import http from 'http'
import assert from 'assert'

const port = 43210

const startServer = () => {
    const server = http.createServer((req, res) => {
        res.writeHead(200, { 'Content-Type': 'text/plain' }).end('hello')
    })
    server.on('connection', (socket) => {
        console.log('New connection')
        socket.on('close', () => console.log('Connection closed'))
    })
    return new Promise(resolve => server.listen(port, () => resolve(server)))
}

const ping = () => fetch(`http://localhost:${port}/`).then(res => res.text())

console.log('test 1 start')
const server1 = await startServer()
assert.strictEqual(await ping(), 'hello')
assert.strictEqual(await ping(), 'hello')
await new Promise(resolve => server1.close(resolve))
console.log('test 1 end')

console.log('test 2 start')
const server2 = await startServer()
assert.strictEqual(await ping(), 'hello')
await new Promise(resolve => server2.close(resolve))
console.log('test 2 end')

How often does it reproduce? Is there a required condition?

The issue can always be reproduced; the script fails every time.

What is the expected behavior? Why is that the expected behavior?

The script should pass: we wait for the first server instance to fully close before creating the second server instance and sending a request to it, so I assume the implementation is correct.

What do you see instead?

The script fails when sending the request to the second server instance, with the following error:

TypeError: fetch failed
    at node:internal/deps/undici/undici:13392:13
    at async file:///Users/d052927/git/gorilla-quiz/test-http-concurreny.js:28:9 {
  [cause]: Error: read ECONNRESET
      at TCP.onStreamRead (node:internal/stream_base_commons:216:20)
      at TCP.callbackTrampoline (node:internal/async_hooks:130:17) {
    errno: -54,
    code: 'ECONNRESET',
    syscall: 'read'
  }
}

Additional information

My overall impression is that server.close(callback) fires the callback too early, before all connections are fully closed, and that this creates race conditions leading to strange errors.

To show this, I added the log statements in the script above, which produce the following output:

test 1 start
New connection
New connection
test 1 end
test 2 start
Connection closed
Connection closed

This indicates that, although server1.close has invoked its callback (resolving the promise that wraps the close call), the connections are not yet fully closed; they are still open after the second server has started. I am not sure whether this is related to the failure, which seems to lie with the second request instead.
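For the test itself, a possible mitigation (not a fix for the underlying behavior) is to not rely on the close callback alone, but to also wait for every tracked socket to emit 'close'. The following is a minimal sketch under the assumption that the lingering sockets do close on their own shortly after close(), as the log above suggests; startTrackedServer and closeCompletely are hypothetical helpers, not part of the original script:

const startTrackedServer = async () => {
    const sockets = new Set()
    const server = http.createServer((req, res) => {
        res.writeHead(200, { 'Content-Type': 'text/plain' }).end('hello')
    })
    // remember every socket until it reports 'close'
    server.on('connection', (socket) => {
        sockets.add(socket)
        socket.on('close', () => sockets.delete(socket))
    })
    await new Promise(resolve => server.listen(port, resolve))
    return { server, sockets }
}

// resolves only after close() has called back AND every tracked socket has closed
const closeCompletely = ({ server, sockets }) => Promise.all([
    new Promise((resolve, reject) => server.close(err => err ? reject(err) : resolve())),
    ...[...sockets].map(socket => socket.destroyed
        ? Promise.resolve()
        : new Promise(resolve => socket.once('close', resolve)))
])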

Two more interesting observations:

  • Even though server1 has not fully closed all of its connections, this does not cause problems when starting server2, which listens on the same port.
  • When I remove one of the two pings sent to server1, everything passes, even though the connection of the remaining ping is still closed "too late" (i.e. after server2 has already started).
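
A more direct workaround sketch, assuming the lingering connections are idle keep-alive sockets: ask the server to drop them explicitly alongside close(). server.closeIdleConnections() is available since Node 18.2.0; whether it removes the ECONNRESET in this exact scenario is an assumption I have not verified:

// hypothetical replacement for the bare close() promise in the test
const stopServer = (server) => new Promise((resolve, reject) => {
    server.close(err => err ? reject(err) : resolve())
    // proactively close idle keep-alive sockets so nothing lingers past close()
    server.closeIdleConnections()
})

This would be used as "await stopServer(server1)" and "await stopServer(server2)" in place of the bare new Promise(...) wrappers in the script above.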
