File loading #81
Lramseyer started this conversation in Design Opinions
Since contributions to vaporview up to this point have mostly come in the form of adding support for other file readers, I am going to use this space to discuss file loading. As great as emails are, it would be nice to keep this an open discussion that others can refer to.
Vaporview in a nutshell
Vaporview uses the custom editor API, which is a custom webpage running in an iframe. The only way to communicate with this webpage is via webview.postMessage() to send and vscode.postMessage() to receive. You can send data in the form of JS objects; under the hood it uses JSON.stringify() (more on that later).
In its current form, when variables are added to the viewer, only the metadata is sent over to the webview. This includes things like instance path, variable ID, signal ID, data encoding, bit width, variable type, etc. The webview then checks whether it already has the value change data for the variable that was added. If it does not, it sends a request back to the extension, and the document handler then has to fetch the value change data and send it to the webview.
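As a rough sketch of that metadata-first flow, the webview side might look something like this. All of the names here (`SignalMetadata`, `valueChangeCache`, `missingSignals`) are illustrative, not vaporview's actual API:

```typescript
// Hypothetical metadata shape, mirroring the fields listed above.
interface SignalMetadata {
  instancePath: string;
  variableId: number;
  signalId: number;
  encoding: "binary" | "real" | "string";
  bitWidth: number;
  variableType: string;
}

// Webview-side cache: signalId -> flat [time, value, time, value, ...] array.
const valueChangeCache = new Map<number, (number | string)[]>();

// When variables are added, the webview requests value change data only for
// the signals it has not already cached.
function missingSignals(added: SignalMetadata[]): number[] {
  return added
    .filter((meta) => !valueChangeCache.has(meta.signalId))
    .map((meta) => meta.signalId);
}
```

The IDs returned by `missingSignals` would then go back to the extension via `vscode.postMessage()`, and the document handler would respond with the value change data.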
Due to the 64KB limitation of WebAssembly, we have to send the value change data in chunks to the webview. This is actually faster than sending everything over (for large signals) because vscode.postMessage() uses JSON.stringify(). Value change data is just a flat array of [time, value] pairs that gets parsed in the webview. Nothing crazy.
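The chunking itself can be sketched as a simple slice over the flat array. `maxPairs` is a hypothetical tuning knob, not a vaporview setting:

```typescript
// Flat [time, value, time, value, ...] array, as described above.
type FlatTransitions = (number | string)[];

// Split the flat array into chunks of at most maxPairs [time, value] pairs,
// so each postMessage payload (and its JSON.stringify cost) stays bounded.
function chunkTransitions(data: FlatTransitions, maxPairs: number): FlatTransitions[] {
  const chunks: FlatTransitions[] = [];
  const stride = maxPairs * 2; // each pair occupies two array slots
  for (let i = 0; i < data.length; i += stride) {
    chunks.push(data.slice(i, i + stride));
  }
  return chunks;
}
```

The webview can concatenate chunks as they arrive, since the pairs are already in time order.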
Considerations for large signals
For waveform dumps with lots of value change events (more than 10 million), there's a noticeable slowdown. This is because the WASM file loader creates a buffer of all of the value change data before chunking it and sending it over. The naive approach would be to use LZ4 compression before sending it to the webview, and then decompress it there. This results in about a 2.5X speedup, and it's already implemented, but I think I can do better, which is why I haven't documented it yet.
Most of that time is consumed by allocating/de-allocating memory. So what I need to do is set a max allocation size (something in the megabytes; I'll have to profile to see what works best), then compress, chunk, and send. Unfortunately, this means that there are chunks of chunks, which is kind of obnoxious, but I think it's the best approach.
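A minimal sketch of that bounded-allocation pipeline, using Node's built-in zlib deflate as a stand-in for LZ4 (Node has no built-in LZ4 codec). `MAX_ALLOC` and the function names are hypothetical; the real slice size should come from profiling:

```typescript
import { deflateSync, inflateSync } from "node:zlib";

// Hypothetical max allocation size; tune via profiling as noted above.
const MAX_ALLOC = 4 * 1024 * 1024; // 4 MB slices

// Sender side: slice the raw value change buffer into bounded allocations,
// compress each slice, and yield it. Each yielded buffer can itself be
// re-chunked for postMessage -- the "chunks of chunks" described above.
function* compressedChunks(raw: Buffer): Generator<Buffer> {
  for (let offset = 0; offset < raw.length; offset += MAX_ALLOC) {
    const slice = raw.subarray(offset, offset + MAX_ALLOC);
    yield deflateSync(slice); // stand-in for LZ4 compression
  }
}

// Webview side: decompress each chunk and concatenate to recover the
// original buffer.
function reassemble(chunks: Buffer[]): Buffer {
  return Buffer.concat(chunks.map((c) => inflateSync(c)));
}
```

Keeping each compression call under a fixed allocation size avoids the repeated large alloc/free cycles that dominate the current profile, at the cost of the extra layer of chunking.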
I'm still going to support the basic flat array schema as a fallback. This way, anyone who wants to implement a new file format won't have to worry about complicated data structures when getting started.
Let me know your thoughts on the matter!