Convert JSON to VariantArray without copying (8 - 32% faster) #7911
Conversation
This is remarkably simpler than I had imagined it would need to be. Handing off ownership back and forth was a very useful trick.
My only concern is whether we might ever need to support a builder that isn't backed by `Vec`? I'm guessing not, but wanted to double check.

I think eventually we might, but I think the only way to really do so is via some sort of trait and a templated builder. I think we can get pretty far without one, and there are zero-copy APIs to/from `Vec` for the underlying Arrow arrays, which I think is a pretty nice property too.
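The ownership hand-off mentioned above can be sketched in isolation. This is a toy model (the struct and method names are illustrative, not the crate's API): the parent builder lends its `Vec<u8>` to a child by value and takes it back when the child is done, so no bytes are copied in either direction.

```rust
struct ArrayBuilder {
    value_buffer: Vec<u8>,
}

struct ChildBuilder {
    // The child temporarily *owns* the buffer; no copy is made.
    value_buffer: Vec<u8>,
}

impl ArrayBuilder {
    fn child_builder(&mut self) -> ChildBuilder {
        // Hand ownership to the child; the parent keeps an empty Vec meanwhile.
        ChildBuilder {
            value_buffer: std::mem::take(&mut self.value_buffer),
        }
    }

    fn finish_child(&mut self, child: ChildBuilder) {
        // Take ownership back, including whatever the child appended.
        self.value_buffer = child.value_buffer;
    }
}

fn main() {
    let mut parent = ArrayBuilder { value_buffer: vec![1, 2] };
    let mut child = parent.child_builder();
    child.value_buffer.extend_from_slice(&[3, 4]);
    parent.finish_child(child);
    assert_eq!(parent.value_buffer, vec![1, 2, 3, 4]);
}
```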
… buffers (#7912)

# Which issue does this PR close?
- Closes #7805
- Part of #6736
- Part of #7911

# Rationale for this change
I would like to be able to write Variants directly into the target buffer when writing multiple variants. However, the current `VariantBuilder` allocates a new buffer for each variant.

# What changes are included in this PR?
1. Add `VariantBuilder::new_with_buffers`, with docs and tests

You can see how this API can be used to write directly into a buffer in `VariantArrayBuilder` in this PR:
- #7911

# Are these changes tested?
Yes, new tests

# Are there any user-facing changes?
New API
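The buffer-reuse idea behind `new_with_buffers` can be illustrated with a stripped-down sketch (toy type and hypothetical methods, not the crate's real signatures): each round of building resumes writing into the same `Vec`s instead of allocating fresh ones.

```rust
// Toy stand-in for a variant builder that can resume into existing buffers.
struct VariantBuilderSketch {
    metadata: Vec<u8>,
    value: Vec<u8>,
}

impl VariantBuilderSketch {
    // Resume writing into existing buffers instead of allocating new ones.
    fn new_with_buffers(metadata: Vec<u8>, value: Vec<u8>) -> Self {
        Self { metadata, value }
    }

    fn append_raw(&mut self, bytes: &[u8]) {
        self.value.extend_from_slice(bytes);
    }

    // Give the buffers back so the caller can hand them to the next builder.
    fn finish(self) -> (Vec<u8>, Vec<u8>) {
        (self.metadata, self.value)
    }
}

fn main() {
    // Write two variants back to back into the same value buffer.
    let mut b = VariantBuilderSketch::new_with_buffers(Vec::new(), Vec::new());
    b.append_raw(b"first");
    let (m, v) = b.finish();
    let mut b = VariantBuilderSketch::new_with_buffers(m, v);
    b.append_raw(b"second");
    let (_m, v) = b.finish();
    assert_eq!(v, b"firstsecond".to_vec());
}
```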
Update: I think I have incorporated @scovich's comments and I am quite pleased with how it is looking. I think this code needs a few more tests and a benchmark or two and we'll be good. I'll try and work on those in the next few days.
impl<'a> Drop for VariantArrayVariantBuilder<'a> {
    /// If the builder was not finished, roll back any changes made to the
    /// underlying buffers (by truncating them)
    fn drop(&mut self) {
I really like this approach. I was thinking over the weekend that we may want to rework the other builders to follow a similar approach:

- They can truncate the metadata dictionary on rollback, which would eliminate the false allocations that survive a rollback today.
- We can allocate the value bytes directly in the base buffer (instead of using a separate `Vec`).
  - On rollback, just truncate (like here).
  - On success, use `Vec::splice` to insert the value offset and field id arrays, which slides over all the other bytes.
- Once we're using `splice`, it opens the door to pre-allocating the space for the value offset and field arrays, in case the caller knows how many fields or array elements there are.
  - If the prediction was correct, `splice` just replaces the pre-allocated space.
  - If incorrect, the pre-allocation is wasted (but we're no worse off than before -- the bytes just get spliced in).
  - The main complication would be guessing how many bytes to encode each offset with.
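The `Vec::splice` idea in those bullets can be demonstrated directly with the standard library: pre-allocate a guessed header region, write the field bytes after it, then splice the real header in. The byte values below are arbitrary examples.

```rust
fn main() {
    // Pre-allocate a guessed 2-byte header region, write field bytes after
    // it, then splice the real header in once it is known.
    let mut buf: Vec<u8> = Vec::new();
    let header_start = buf.len();
    buf.extend_from_slice(&[0u8; 2]); // zero-filled placeholder
    buf.extend_from_slice(b"field-bytes");

    // Correct guess: the real header is 2 bytes, so splice is a pure
    // overwrite and the tail bytes do not move.
    buf.splice(header_start..header_start + 2, [0xAB, 0xCD]);
    assert_eq!(buf[..2].to_vec(), vec![0xAB, 0xCD]);
    assert_eq!(buf[2..].to_vec(), b"field-bytes".to_vec());

    // Wrong guess: the real header is 3 bytes, so splice shifts the tail
    // right by one byte (the case the comment above calls out).
    buf.splice(header_start..header_start + 2, [0x01, 0x02, 0x03]);
    assert_eq!(buf[..3].to_vec(), vec![0x01, 0x02, 0x03]);
    assert_eq!(buf.len(), 3 + "field-bytes".len());
}
```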
> They can truncate the metadata dictionary on rollback, which would eliminate the false allocations that survive a rollback today

That is an excellent point.

> We can allocate the value bytes directly in the base buffer (instead of using a separate `Vec`)

That sounds like a great way to avoid the extra allocation.

> Once we're using `splice`, it opens the door to pre-allocate the space for the value offset and field arrays, in case the caller knows how many fields or array elements there are.

This is also a great idea 🤯
As a follow up, @klion26 has a PR up to implement this:
That other PR is a nice improvement, but the `splice` call still shifts bytes.

In order to not shift bytes at all, we'd have to pre-allocate exactly the right number of header bytes before recursing into the field values. And then the `splice` call would just replace the zero-filled header region with the actual header bytes, after they're known (shifting bytes only if the original guess was incorrect).
I think the only way to do this is to add some API to the `ObjectBuilder` to pre-allocate this space (`new_object_with_capacity()` perhaps 🤔).
I added some benchmarks, and my local results suggest that avoiding the allocations makes parsing small repeated JSON objects about 10% faster. I think once we stop copying stuff around in the sub-builders, the other benchmarks will be quite a bit faster too.
@@ -1047,16 +1047,16 @@ impl Drop for ObjectBuilder<'_> {
 ///
 /// Allows users to append values to a [`VariantBuilder`], [`ListBuilder`] or
 /// [`ObjectBuilder`], using the same interface.
-pub trait VariantBuilderExt<'m, 'v> {
-    fn append_value(&mut self, value: impl Into<Variant<'m, 'v>>);
+pub trait VariantBuilderExt {
There is no reason for the lifetimes to be attached to the trait itself -- if they are, the lifetimes trickle into the values. Since this trait is for actually constructing variant values (and copying the underlying bytes), I moved the lifetimes to just the arguments that need them.
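A minimal sketch of this change, using toy `Variant` and `Sink` types rather than the crate's real definitions: the lifetimes move from the trait to the method, so implementors are not tied to any particular `'m`/`'v`. This works because the bytes are copied on append, so no borrow escapes the call.

```rust
// Toy variant: borrowed metadata and value bytes.
struct Variant<'m, 'v> {
    metadata: &'m [u8],
    value: &'v [u8],
}

// Before (roughly): trait VariantBuilderExt<'m, 'v> { ... }
// After: the lifetimes appear only on the method that needs them.
trait VariantBuilderExt {
    fn append_value<'m, 'v>(&mut self, value: impl Into<Variant<'m, 'v>>);
}

struct Sink {
    bytes: Vec<u8>,
}

impl VariantBuilderExt for Sink {
    fn append_value<'m, 'v>(&mut self, value: impl Into<Variant<'m, 'v>>) {
        let v = value.into();
        // The bytes are copied here, so the borrow ends with this call.
        self.bytes.extend_from_slice(v.value);
        let _ = v.metadata; // metadata unused in this toy sink
    }
}

fn main() {
    let mut s = Sink { bytes: vec![] };
    s.append_value(Variant { metadata: b"m", value: b"abc" });
    assert_eq!(s.bytes, b"abc".to_vec());
}
```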
// TODO make this more efficient by avoiding the intermediate buffers
let mut variant_builder = VariantBuilder::new();
variant_builder.append_value(variant);
let (metadata, value) = variant_builder.finish();
The whole point of this PR is to avoid this copy here and instead write directly into the output
# Which issue does this PR close?
- Part of #7911
- Part of #6736
- Follow on to #7905

# Rationale for this change
I wrote benchmarks for some changes to the JSON decoder in #7911, but they are non-trivial. To keep #7911 easier to review, I have pulled the benchmarks out into their own PR.

# What changes are included in this PR?
1. Add a new JSON benchmark
2. Include the `variant_get` benchmark added in #7919 by @Samyak2

# Are these changes tested?
I tested them manually, and clippy CI coverage ensures they compile.

# Are there any user-facing changes?
No, these are only benchmarks.
json_to_variant(input_string_array.value(i), &mut vb)?;
let (metadata, value) = vb.finish();
The whole point of this PR is to avoid this copy / append.
@@ -55,9 +55,14 @@ use std::sync::Arc;
/// };
/// builder.append_variant_buffers(&metadata, &value);
///
/// // Use `variant_builder` method to write values directly to the output array
this is the key new API -- a builder that can write directly to the correct output array location
impl<'a> Drop for VariantArrayVariantBuilder<'a> {
    /// If the builder was not finished, roll back any changes made to the
    /// underlying buffers (by truncating them)
    fn drop(&mut self) {
As a follow up, @klion26 has a PR up to implement this:
This one is now ready for review. I am quite pleased that it already shows some benchmarks going 30% faster. Along with the following PR from @klion26, I think our JSON conversion is about as fast as it is going to get until we move away from serde_json for parsing.
@scovich / @viirya / @klion26 I wonder if you have time to review this PR (or if you have, are you happy to have it merged)? I ask because avoiding this extra copy has come up a few times while contemplating how to implement shredded writes, so it would be nice to get it in so we can move on to shredding.
Sorry the comment was not submitted earlier; the change LGTM.
let value_offset = self.value_offset;

// get the buffers back from the variant builder
let (mut metadata_buffer, mut value_buffer) =
Cool, we can transfer the ownership this way.
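The trick referred to here is `std::mem::take`, which swaps a `Default` value into place behind a `&mut` and returns ownership of the original, letting a consuming `finish(self)` be called from a method that only has `&mut self`. A self-contained sketch with toy types:

```rust
#[derive(Default)]
struct Builder {
    buf: Vec<u8>,
}

impl Builder {
    // Consumes self: requires ownership, not a &mut reference.
    fn finish(self) -> Vec<u8> {
        self.buf
    }
}

struct Holder {
    builder: Builder,
}

impl Holder {
    fn finish_inner(&mut self) -> Vec<u8> {
        // `finish` takes `self` by value, but we only have `&mut self.builder`.
        // `mem::take` swaps in `Builder::default()` and hands us ownership.
        std::mem::take(&mut self.builder).finish()
    }
}

fn main() {
    let mut h = Holder { builder: Builder { buf: vec![7, 8, 9] } };
    assert_eq!(h.finish_inner(), vec![7, 8, 9]);
    assert!(h.builder.buf.is_empty()); // the default value was left behind
}
```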
Co-authored-by: Congxian Qiu <qcx978132955@gmail.com>
Thanks @klion26 🙏
@@ -55,9 +55,14 @@ use std::sync::Arc;
/// };
/// builder.append_variant_buffers(&metadata, &value);
The new approach is more efficient. Do we still need to have `append_variant_buffers`? Does it still make sense to create a `VariantBuilder`, build the metadata and buffer, and append them with `append_variant_buffers`?

Looks like the new `variant_builder` can cover it. If so, I think it doesn't make sense to keep the existing approach, which is less efficient. Users may use it unintentionally.
It does seem preferable to have only one good way of doing things, rather than leaving a less efficient other way?
Agreed, removed in 99ea0c4.
// Sanity Check: if the buffers got smaller, something went wrong (previous data was lost)
let metadata_len = metadata_buffer
    .len()
    .checked_sub(metadata_offset)
    .expect("metadata length decreased unexpectedly");
let value_len = value_buffer
    .len()
    .checked_sub(value_offset)
    .expect("value length decreased unexpectedly");

if self.finished {
    // if the object was finished, commit the changes by putting the
    // offsets and lengths into the parent array builder.
    self.array_builder
        .metadata_locations
        .push((metadata_offset, metadata_len));
    self.array_builder
        .value_locations
        .push((value_offset, value_len));
    self.array_builder.nulls.append_non_null();
Hmm, these (sanity check, location updates) are finish-related operations; why not put them in `finish`? It seems improper to put them in `drop`.
// get the buffers back from the variant builder
let (mut metadata_buffer, mut value_buffer) =
    std::mem::take(&mut self.variant_builder).finish();
Recalling the discussion in the `drop` impl PR before, I think it is not good to put `finish` logic in `drop`. Should we do it in `finish`?
I am on the fence about this. My natural inclination would also be to put finishing logic in `finish` itself... but there's also a certain symmetry to having the finished-vs-not logic branches together in the `impl Drop`?

As long as it's infallible, and gated by a finished check, I don't think it makes any meaningful difference. We need to track the finished flag either way, in order for drop to roll changes back correctly when `finish` wasn't called. Which was the problem previously -- unconditionally finishing on drop, even if the failure to call `finish` was intentional.
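The finished-flag-gated `Drop` under discussion can be sketched generically (toy types and illustrative names, not the crate's API): the commit and rollback branches sit together in `drop`, and `finish` merely flips the flag.

```rust
struct Parent {
    buf: Vec<u8>,
    committed: usize,
}

struct ChildBuilder<'a> {
    parent: &'a mut Parent,
    start: usize,
    finished: bool,
}

impl<'a> ChildBuilder<'a> {
    fn new(parent: &'a mut Parent) -> Self {
        let start = parent.buf.len();
        ChildBuilder { parent, start, finished: false }
    }

    fn append(&mut self, b: u8) {
        self.parent.buf.push(b);
    }

    fn finish(mut self) {
        self.finished = true; // Drop will commit instead of rolling back
    }
}

impl Drop for ChildBuilder<'_> {
    fn drop(&mut self) {
        if self.finished {
            self.parent.committed += 1; // commit: record the new value
        } else {
            self.parent.buf.truncate(self.start); // rollback: undo writes
        }
    }
}

fn main() {
    let mut p = Parent { buf: vec![], committed: 0 };
    let mut c = ChildBuilder::new(&mut p);
    c.append(1);
    c.finish();
    assert_eq!((p.buf.len(), p.committed), (1, 1));

    let mut c = ChildBuilder::new(&mut p);
    c.append(2);
    drop(c); // not finished -> the write is rolled back
    assert_eq!((p.buf.len(), p.committed), (1, 1));
}
```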
One potential reason to move the finish logic out of `drop` -- we rely on a `finish` call at L307 above; imagine if that `finish` call didn't actually trigger any changes until the unwinding stack frame dropped the object? Most likely we'd be working with unexpected buffers for the rest of this method?

Though I guess if there were any observable side effects like that, the corresponding `&mut` reference should still be live until drop, and the compiler would block any badness?
> Recall the discussion in the drop impl PR before, I think it is not good to put finish in drop. Should we do it in finish?

@scovich's rationale about the symmetry is what led me to this approach. However, I agree that moving the code out of drop is a good idea and I will do it.

I looked briefly into doing this -- one challenge I found is that there is no API currently to get the underlying buffer back from a `VariantBuilder` (aka the equivalent of `into_inner()` or something). I can add this.
🤔 I played around with it and I agree it was a good change. Among other things, this prevents the metadata builder from writing bytes into the metadata buffer just to have to roll them back.
Done in 65714e5
A few cleanups to consider, and some mild finish vs. drop controversy, but otherwise LGTM
@@ -55,9 +55,14 @@ use std::sync::Arc;
/// };
/// builder.append_variant_buffers(&metadata, &value);
It does seem preferable to have only one good way of doing things, rather than leaving a less efficient other way?
impl<'a> Drop for VariantArrayVariantBuilder<'a> {
    /// If the builder was not finished, roll back any changes made to the
    /// underlying buffers (by truncating them)
    fn drop(&mut self) {
That other PR is a nice improvement, but the `splice` call still shifts bytes.

In order to not shift bytes at all, we'd have to pre-allocate exactly the right number of header bytes before recursing into the field values. And then the `splice` call would just replace the zero-filled header region with the actual header bytes, after they're known (shifting bytes only if the original guess was incorrect).
// get the buffers back from the variant builder
let (mut metadata_buffer, mut value_buffer) =
    std::mem::take(&mut self.variant_builder).finish();
One potential reason to move the finish logic out of `drop` -- we rely on a `finish` call at L307 above; imagine if that `finish` call didn't actually trigger any changes until the unwinding stack frame dropped the object? Most likely we'd be working with unexpected buffers for the rest of this method?

Though I guess if there were any observable side effects like that, the corresponding `&mut` reference should still be live until drop, and the compiler would block any badness?
    .expect("metadata length decreased unexpectedly");
let value_len = value_buffer
    .len()
    .checked_sub(value_offset)
    .expect("value length decreased unexpectedly");
These assertions should be impossible to trigger, no? And if the result is a panic anyway, is the unchecked integer underflow panic somehow worse than a failed `expect`?

(If we think these could actually trigger in practice, that seems like a reason to move the checks to a fallible `finish` instead of an infallible `drop`?)
(Meanwhile, we should probably move those inside the `if finished` block, since the `else` doesn't use them, and a panic during unwind is an immediate double-fault abort?)
Yes, they should be "impossible" to trigger.

> These assertions should be impossible to trigger, no? And if anyway the result is a panic, is the unchecked integer underflow panic somehow worse than a failed expect?

I think an integer underflow only panics in debug builds in Rust (it silently wraps in release builds). If the buffers have somehow been truncated to before where they started, I think we should panic as soon as possible, as something is seriously wrong / there is a serious bug somewhere.
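A small demonstration of why `checked_sub` is used here: plain `usize` subtraction only panics on underflow when overflow checks are enabled (the default in debug builds), while `checked_sub` returns `None` in every build, so the `.expect(...)` panics deterministically.

```rust
fn main() {
    let offset: usize = 4;

    // Normal case: the buffer grew, so the subtraction succeeds.
    let len: usize = 10;
    assert_eq!(len.checked_sub(offset), Some(6));

    // "Impossible" case: the buffer shrank below its starting offset.
    // Plain `3 - 4` on usize would panic only with overflow checks on
    // (debug builds by default) and silently wrap otherwise; `checked_sub`
    // returns None in both profiles, so `.expect(...)` always panics.
    let shrunk: usize = 3;
    assert_eq!(shrunk.checked_sub(offset), None);
}
```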
Co-authored-by: Liang-Chi Hsieh <viirya@gmail.com>
…-rs into alamb/append_variant_builder
pub fn finish(mut self) {
    self.finished = true;

    let metadata_offset = self.metadata_offset;
The code for finishing / finalizing is moved to `finish()`.
// get the buffers back from the variant builder
let (mut metadata_buffer, mut value_buffer) =
    std::mem::take(&mut self.variant_builder).into_buffers();
Note this now calls `into_buffers` to get ownership of the buffers back, but doesn't call `finish`.
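The `into_buffers` vs `finish` distinction can be sketched with a toy builder (hypothetical names and header byte, not the crate's actual encoding): `finish` performs finalization side effects, while `into_buffers` only returns ownership of the raw buffers.

```rust
struct BuilderSketch {
    metadata: Vec<u8>,
    value: Vec<u8>,
}

impl BuilderSketch {
    // Finalizes: e.g. writes a (made-up) header byte before returning.
    fn finish(mut self) -> (Vec<u8>, Vec<u8>) {
        self.metadata.insert(0, 0x01);
        (self.metadata, self.value)
    }

    // Just hands back ownership; no finalization side effects.
    fn into_buffers(self) -> (Vec<u8>, Vec<u8>) {
        (self.metadata, self.value)
    }
}

fn main() {
    let b = BuilderSketch { metadata: vec![0xAA], value: vec![1] };
    let (m, v) = b.into_buffers();
    assert_eq!(m, vec![0xAA]); // bytes come back exactly as written
    assert_eq!(v, vec![1]);
}
```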
Thanks everyone. I am pretty stoked about this one. Now, on to shredding!
# Which issue does this PR close?
- Follow on to `VariantArray` and `VariantArrayBuilder` for constructing Arrow Arrays of Variants #7905

# Rationale for this change
In a quest to have the fastest and most efficient Variant implementation, I would like to avoid copies if at all possible.

Right now, making a `VariantArray` first requires completing an individual buffer and appending it to the array. Let's make that faster by having the `VariantBuilder` append directly into the buffer.

# What changes are included in this PR?
1. Add `VariantBuilder::new_from_existing`
2. Add `VariantArrayBuilder::variant_builder` that reuses the buffers

# Are these changes tested?

# Are there any user-facing changes?
Hopefully faster performance