fix(sqlite): introduce "update from..." to the parser #3610


Open · wants to merge 5 commits into main from idokendo/fix-update-from-in-sqlite

Conversation

IdoKendo

This PR closes #3132 by adding support for the `UPDATE ... FROM` syntax to SQLite's `update_stmt` parser.

Was tested locally with several different queries.

Please let me know if there's anything more required to get this in or if you have any comments.
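For readers unfamiliar with the feature, here is a minimal, hypothetical sketch (not taken from this PR) of the `UPDATE ... FROM` form the parser now accepts, using Python's built-in sqlite3 module. It requires SQLite 3.33.0 or newer, and the table and column names are invented for illustration:

```python
# Hypothetical demo of SQLite's UPDATE ... FROM (requires SQLite >= 3.33.0,
# released 2020-08-14). Table and column names are invented for illustration.
import sqlite3

assert sqlite3.sqlite_version_info >= (3, 33, 0), "UPDATE ... FROM needs SQLite 3.33+"

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventory (id INTEGER PRIMARY KEY, qty INTEGER);
    CREATE TABLE shipments (item_id INTEGER, received INTEGER);
    INSERT INTO inventory VALUES (1, 10), (2, 5);
    INSERT INTO shipments VALUES (1, 3), (2, 7);
""")

# UPDATE ... FROM joins another table to compute the new column values;
# this is the statement shape the PR adds to the update_stmt grammar.
conn.execute("""
    UPDATE inventory
    SET qty = qty + shipments.received
    FROM shipments
    WHERE shipments.item_id = inventory.id
""")

rows = conn.execute("SELECT id, qty FROM inventory ORDER BY id").fetchall()
print(rows)  # → [(1, 13), (2, 12)]
```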

@BorisRodman

ooooh finally 🙏

@Awish021

Any update on this? 🙏

@YaakovShami

Also waiting for this fix.

@Tomerfi1210

Thank you for that!

@IdoKendo
Author

Hey @kyleconroy,

I wanted to follow up on this PR, and see if there’s anything I can do to help move it toward merging. If there’s any feedback or additional work needed on my end, please let me know - I’d be happy to make adjustments to ensure it aligns with sqlc's vision.

Thanks!
Ido

@kyleconroy force-pushed the idokendo/fix-update-from-in-sqlite branch from 2c8b51e to 5b6ffd3 on November 25, 2024 06:38
@IdoKendo
Author

IdoKendo commented Nov 26, 2024

Hey @kyleconroy 👋
I saw that you force-pushed - I went ahead and fixed the failing test. Please let me know what else is needed to move forward with this.

@IdoKendo
Author

Hi again @kyleconroy, any chance you can find the time to review and help merge this PR? Thanks! 🙏

@dosubot dosubot bot added the size:S This PR changes 10-29 lines, ignoring generated files. label May 1, 2025
@IdoKendo
Author

IdoKendo commented May 1, 2025

Hi again @kyleconroy, is there anything I can do to expedite getting this fix upstream?

@cademtz

cademtz commented Jun 20, 2025

I'm waiting on this PR too. I had to clone this PR and host it myself so I could continue developing my backends; it's impossible to use sqlc otherwise.

@brandur
Contributor

brandur commented Jul 5, 2025

@IdoKendo Great stuff!

I'm not a maintainer here, but I was just reading through the diff and one thing that struck me was there's no proof test-wise that the changes work. Do you think it'd make sense to get a basic internal/endtoend/testdata/ test into place that checks against this? They're relatively easy to write.

@kyleconroy Hey! This one might make a good addition for the next sqlc release cycle. There are a great many bugs left on the SQLite side of things, and this is definitely one of the annoying ones. For context, UPDATE ... FROM is supported by Postgres as well and has worked in sqlc for ages.

brandur added a commit to riverqueue/river that referenced this pull request Jul 5, 2025
As described by #972, it may be possible for huge numbers of snoozes to
blow out a job row's `attempted_by` array as a job is locked over and
over again. Multiplied by many jobs, this can produce vast quantities of
data that gets sent over the network.

Here, put in a ratchet on `attempted_by` so that if the array becomes
larger than 100 elements, we knock the oldest entries off in favor of the
most recent client and the 99 freshest prior entries.

Unfortunately the implementation isn't particularly clean in either
Postgres or SQLite. In Postgres it would've been cleaner if we'd had the
`attempted_by` array in reverse order, with the new client in front,
because the built-in array functions would be friendlier to that layout;
but because it's not, we have to do something a little hackier involving
a `CASE` statement instead.

SQLite is even worse. SQLite has no array functions at all, which
doesn't help, but moreover every strategy I tried ended up blocked by a
sqlc SQLite bug, so after trying everything I could think of, I ended up
having to extract the piece that does the array truncation into a SQL
template string to get this over the line. This could be removed in the
future if any one of a number of outstanding sqlc bugs is fixed (e.g.
[1]).

[1] sqlc-dev/sqlc#3610
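As a side note, the truncation rule this commit message describes can be sketched in plain Python. The real change is in SQL (a Postgres `CASE` expression and a SQLite template string); the function name and constant below are invented for illustration:

```python
# Hypothetical sketch of the "attempted_by" ratchet described above: once
# the array exceeds MAX_ATTEMPTED_BY elements, drop the oldest entries and
# keep the newest client plus the 99 freshest prior entries. This only
# illustrates the truncation rule, not the actual SQL implementation.
MAX_ATTEMPTED_BY = 100

def ratchet_attempted_by(attempted_by: list[str], new_client: str) -> list[str]:
    """Append new_client, then keep only the freshest entries.

    attempted_by is ordered oldest-first, so truncation keeps the tail.
    """
    updated = attempted_by + [new_client]
    if len(updated) > MAX_ATTEMPTED_BY:
        updated = updated[-MAX_ATTEMPTED_BY:]
    return updated

# A job locked 150 times keeps only the 100 most recent client IDs.
history: list[str] = []
for i in range(150):
    history = ratchet_attempted_by(history, f"client_{i}")

print(len(history), history[0], history[-1])  # → 100 client_50 client_149
```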
@IdoKendo
Author

IdoKendo commented Jul 6, 2025

> @IdoKendo Great stuff!
>
> I'm not a maintainer here, but I was just reading through the diff and one thing that struck me was there's no proof test-wise that the changes work. Do you think it'd make sense to get a basic internal/endtoend/testdata/ test into place that checks against this? They're relatively easy to write.

Hey @brandur, thanks!

You're right, I only tested the changes manually against the use case mentioned, as well as against previous use cases, to ensure it didn't break anything :)

I would be happy to add some tests, but I'm afraid this PR has been sitting here too long, and I need some assurance that it will be merged upstream before I invest any more time and effort into it. As you can see in the history, I've tried to get some attention several times and got shunned.

If this is something the maintainer is willing to merge, I will add tests to get it into the next release :)

Cheers

@brandur
Contributor

brandur commented Jul 6, 2025

> I would be happy to add some tests into place but I'm afraid this PR has been sitting here too long and I need to get some guarantee that it is going to get merged upstream before I invest any more time and effort into it. As you can see in the history I tried to get some attention several times and got shunned.

@IdoKendo 100.0% reasonable, and I'd request the same. Thanks.

Labels
size:S This PR changes 10-29 lines, ignoring generated files.
Projects
None yet
Development
Successfully merging this pull request may close this issue: SQLite UPDATE FROM statements not supported
7 participants