Conversation

russellhancox (Contributor)
We use the RegisterReplacementScan function in one of our projects and keep a persistent DuckDB connection open. We've noticed that after some indeterminate period of time (presumably when a GC run occurs), queries that invoke the replacement scan callback will panic:

2025/09/03 18:35:14 http: panic serving 172.67.72.165:28012: runtime/cgo: misuse of an invalid Handle
 goroutine 516 [running]:
 net/http.(*conn).serve.func1()
 	/usr/local/go/src/net/http/server.go:1943 +0xb4
 panic({0x3a9a260?, 0x4684a60?})
 	/usr/local/go/src/runtime/panic.go:783 +0x120
 runtime/cgo.Handle.Value(0xffff20010a08?)
 	/usr/local/go/src/runtime/cgo/handle.go:130 +0x60
 github.com/marcboeker/go-duckdb/v2.replacement_scan_callback(0xffff5600a9a0, 0x49f4dc?, 0x40007a8770)
 	/.cache/go-mod/github.com/marcboeker/go-duckdb/v2@v2.3.5/replacement_scan.go:41 +0x40
 github.com/duckdb/duckdb-go-bindings/linux-arm64._Cfunc_duckdb_prepare_extracted_statement(0xffff2000be70, 0xffff2000ddc0, 0x0, 0x4000126870)
 	_cgo_gotypes.go:3981 +0x30
 github.com/duckdb/duckdb-go-bindings/linux-arm64.PrepareExtractedStatement.func1(...)
 	/.cache/go-mod/github.com/duckdb/duckdb-go-bindings/linux-arm64@v0.1.12/bindings.go:1466
 github.com/duckdb/duckdb-go-bindings/linux-arm64.PrepareExtractedStatement({0xffff2000be70}, {0xffff2000ddc0}, 0x0, 0x4000126860)
 	/.cache/go-mod/github.com/duckdb/duckdb-go-bindings/linux-arm64@v0.1.12/bindings.go:1466 +0xbc
 github.com/marcboeker/go-duckdb/v2.(*Conn).prepareExtractedStmt(0x400093ca00, {0x4000bb8ba0?}, 0x0)
 	/.cache/go-mod/github.com/marcboeker/go-duckdb/v2@v2.3.5/connection.go:193 +0x60
 github.com/marcboeker/go-duckdb/v2.(*Conn).prepareStmts(0x400093ca00, {0x46ca7b0, 0x6cb2aa0}, {0x4000bb8ba0, 0x24})
 	/.cache/go-mod/github.com/marcboeker/go-duckdb/v2@v2.3.5/connection.go:234 +0x23c
 github.com/marcboeker/go-duckdb/v2.(*Conn).QueryContext(0x400093ca00, {0x46ca7b0, 0x6cb2aa0}, {0x4000bb8ba0?, 0x6cb2aa0?}, {0x6cb2aa0, 0x0, 0x0})
 	/.cache/go-mod/github.com/marcboeker/go-duckdb/v2@v2.3.5/connection.go:82 +0x4c
 database/sql.ctxDriverQuery({0x46ca7b0?, 0x6cb2aa0?}, {0xffff6486f118?, 0x400093ca00?}, {0x0?, 0x0?}, {0x4000bb8ba0?, 0x92eb24?}, {0x6cb2aa0?, 0x3d38c40?, ...})
 	/usr/local/go/src/database/sql/ctxutil.go:48 +0xac
 database/sql.(*DB).queryDC.func1()
 	/usr/local/go/src/database/sql/sql.go:1786 +0xe0
 database/sql.withLock({0x46bea98, 0x400153b700}, 0x4000d70bb8)
 	/usr/local/go/src/database/sql/sql.go:3572 +0x74
 database/sql.(*DB).queryDC(0x400091e0d0?, {0x46ca7b0, 0x6cb2aa0}, {0x0, 0x0}, 0x400153b700, 0x4001609540, {0x4000bb8ba0, 0x24}, {0x0, ...})
 	/usr/local/go/src/database/sql/sql.go:1781 +0x11c
 database/sql.(*DB).query(0x400091e0d0, {0x46ca7b0, 0x6cb2aa0}, {0x4000bb8ba0, 0x24}, {0x0, 0x0, 0x0}, 0x78?)
 	/usr/local/go/src/database/sql/sql.go:1764 +0xb4
 database/sql.(*DB).QueryContext.func1(0x0?)
 	/usr/local/go/src/database/sql/sql.go:1742 +0x40
 database/sql.(*DB).retry(0x40007be120?, 0x4000d70db0)
 	/usr/local/go/src/database/sql/sql.go:1576 +0x4c
 database/sql.(*DB).QueryContext(0x4000937001?, {0x46ca7b0?, 0x6cb2aa0?}, {0x4000bb8ba0?, 0x4000ea0870?}, {0x0?, 0x4000d70ea8?, 0x4c7e6c?})
 	/usr/local/go/src/database/sql/sql.go:1741 +0x80
 database/sql.(*DB).Query(...)
 	/usr/local/go/src/database/sql/sql.go:1755
 ...

We've been running the changes in this PR and are no longer able to reproduce the issue.

taniabogatsch (Collaborator)

Great that you found this and opened a PR! Indeed, if the handle wasn't pinned, it was likely garbage collected; the fix looks correct to me. :)

@taniabogatsch taniabogatsch added the fix Fixes a bug label Sep 4, 2025
@taniabogatsch taniabogatsch merged commit 100f18b into marcboeker:main Sep 4, 2025
32 checks passed
russellhancox (Contributor, Author)

@taniabogatsch Thank you for the very quick review!
