As described in #972, it may be possible for huge numbers of snoozes to
blow out a job row's `attempted_by` array as the job is locked over and
over again. Multiplied across many jobs, this can produce vast quantities
of data that get sent over the network.
Here, put in a ratchet on `attempted_by` so that if the array would become
larger than 100 elements, we knock the oldest entry off in favor of the
most recent client and the freshest 99.
Unfortunately the implementation isn't particularly clean in either
Postgres or SQLite. In Postgres it would've been cleaner if we'd kept
`attempted_by` in reverse order with the newest client at the front,
because the built-in array functions would be friendlier to that layout;
since it's not, we have to do something a little hackier involving a
`CASE` statement instead.
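For illustration, here's a minimal sketch of what a `CASE`-based truncation
can look like, assuming a cap of 100, `attempted_by` stored oldest-first as
`text[]`, and sqlc-style named parameters (`@attempted_by`, `@id`); it's not
the exact query from this PR:

```sql
-- Hypothetical sketch, not the query from this PR: append the new client to
-- attempted_by, but once the array is at the 100-element cap, slice off the
-- oldest entries so only the freshest 99 plus the new client remain.
UPDATE river_job
SET attempted_by = CASE
        WHEN array_length(attempted_by, 1) >= 100
            THEN attempted_by[(array_length(attempted_by, 1) - 98):] || @attempted_by::text
        ELSE attempted_by || @attempted_by::text
    END
WHERE id = @id;
```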
SQLite is even worse. SQLite has no array functions at all, which
doesn't help, but moreover every strategy I tried ended up blocked by a
sqlc SQLite bug. After trying everything I could think of, I ended up
having to extract the piece that does the array truncation into a SQL
template string to get this over the line. This could be removed in the
future if any one of a number of outstanding sqlc bugs is fixed (e.g.
[1]).
[1] sqlc-dev/sqlc#3610
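For comparison, an equivalent JSON-based truncation in SQLite might look
something like the following, assuming `attempted_by` is stored as a JSON
array and the same 100-element cap; this is only an illustrative sketch, not
the template string used in the PR:

```sql
-- Hypothetical sketch, not the SQL template from this PR: SQLite has no array
-- functions, so treat attempted_by as a JSON array. Drop the oldest element
-- once the array is at the 100-element cap, then append the new client ID.
UPDATE river_job
SET attempted_by = json_insert(
        CASE
            WHEN json_array_length(attempted_by) >= 100
                THEN json_remove(attempted_by, '$[0]')
            ELSE attempted_by
        END,
        '$[#]', ?1
    )
WHERE id = ?2;
```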
CHANGELOG.md: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - The reindexer maintenance service now reindexes all `river_job` indexes, including its primary key. This is expected to help in situations where the jobs table has in the past expanded to a very large size (which makes most indexes larger), is now a much more modest size, but has left the indexes in their expanded state. [PR #963](https://github.com/riverqueue/river/pull/963).
 - The River CLI now accepts a `--target-version` of 0 with `river migrate-down` to run all down migrations and remove all River tables (previously, -1 was used for this; -1 still works, but now 0 also works). [PR #966](https://github.com/riverqueue/river/pull/966).
 - **Breaking change:** The `HookWorkEnd` interface's `WorkEnd` function now receives a `JobRow` parameter in addition to the `error` it received before. Having a `JobRow` to work with is fairly crucial to most functionality that a hook would implement, and its previous omission was entirely an error. [PR #970](https://github.com/riverqueue/river/pull/970).
+- Add maximum bound to each job's `attempted_by` array so that in degenerate cases where a job is locked many, many times (say it's snoozed hundreds of times), it doesn't grow to unlimited bounds. [PR #974](https://github.com/riverqueue/river/pull/974).