# Move Socket.IO initialization all the way to head + Pre-Mount Message Queueing #4756
base: main
## Conversation
No. This is because our new behavior is in principle "impossible to distinguish from a browser which just so happens to suffer a network lag shortly after load". The transport layer is in NiceGUI's hands; we can do whatever we want. If they need to time exactly from when the browser is fully loaded, they need …
Ok, you say this PR is 50% faster, and this is basically the time it takes to …
The page does indeed load faster, since you are in essence parallelizing your network requests instead of running them one by one. Since the Socket.IO handshake is actually three handshakes (Engine.IO, Socket.IO, and NiceGUI; check #4754 if you'd like a refresher), we incur a time penalty of 3× the round-trip network latency, which can be quite a lot if your network is bad enough. So, sorry, but I can't agree that "timelines in your screenshots look pretty similar"; there is a definitely noticeable improvement. My simplified timeline also shows the change:
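As a back-of-envelope sketch of the point above (not NiceGUI code; the 165 ms RTT figure is taken from the 4G reference quoted later in this thread and is an assumption here):

```javascript
// Three sequential handshakes each cost roughly one round trip,
// so the serial penalty is about 3x the RTT.
const rttMs = 165;        // assumed round-trip time on Fast 4G
const handshakes = 3;     // Engine.IO + Socket.IO + NiceGUI
const serialPenaltyMs = handshakes * rttMs;
console.log(serialPenaltyMs); // → 495 ms spent just on handshakes
```

Starting the handshake earlier lets those round trips overlap with the rest of the page load instead of being paid serially at the end.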
I have an analogy. Imagine Socket.IO is the engine in a gasoline vehicle. #4755 is tuning the engine start sequence so that when you turn the key, the engine spins up a bit faster. #4756 is getting someone to go downstairs and start the car for you ahead of time. If that doesn't make things clear, I don't know what will…
… simultaneous bottleneck (#4798)

### Motivation

Still in the mood for speed improvements, I was reading https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/script again. Loading-behaviour-wise, there are three valid options:

1. Normal script
2. Defer script
3. Async script

(Don't think about defer async. It has been mentioned that if async is set, ["If the attribute is specified with the defer attribute, the element will act as if only the async attribute is specified"](https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/script#:~:text=If%20the%20attribute%20is%20specified%20with%20the%20defer%20attribute%2C%20the%20element%20will%20act%20as%20if%20only%20the%20async%20attribute%20is%20specified), meaning async + defer => async.)

I then took a look at the script situation in NiceGUI:

1. No script in head
2. Body is empty, no elements
3. The main script is a `<script type="module">`, which is ["defer by default"](https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/script#:~:text=scripts%20%E2%80%94%20they-,defer%20by%20default,-.)

Then I thought: **Can the rest of NiceGUI's scripts be defer as well? They only need to run before the main script, not any earlier, so we don't need to block the parser while those scripts load.** And yes, indeed, there is a performance uplift.

### Implementation

- Change the scripts from normal scripts to defer scripts.
- Left the ES module shim intact, since that probably has to load ASAP.
- Left Socket.IO intact, because we'd have to move it anyway if we adopt #4756.
- For both of these, maybe `defer` would make things even faster, or it would break things. I'll see when I have time.

### Progress

- [x] I chose a meaningful title that completes the sentence: "If applied, this PR will..."
- [x] The implementation is complete.
- [x] Pytests have been added (or are not necessary).
- [x] Documentation has been added (or is not necessary).

-> No need for pytests and documentation. As long as existing tests don't break, I'd be happy.

### Results showcase

Using the Fast 4G profile with 4× CPU slowdown, before vs. after:

- Finish: 6.47 s -> 5.99 s
- DOMContentLoaded: 4.69 s -> 4.38 s
- Load: 5.69 s -> 5.18 s

That's about 0.5 s saved. For reference, 4G network latency is 165 ms. I am not saying that we are saving 3× the network latency; it just puts things into perspective.

Worthy of note:

- _No performance uplift_ if only the CPU is slowed down and not the network, since the network requests complete in an instant and we weren't really blocking the parser for long.
- _No performance uplift_ if only the network is slowed down and not the CPU, since parsing completes in an instant, and we can't block a parser that finishes in the blink of an eye.
- **Performance uplift is only noticeable** if you apply BOTH network slowdown and CPU slowdown.

May want to test with more slowdown and Slow 4G later; 3G is too much.
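As a sketch of what the Implementation bullets could look like in the served `<head>` (file names and paths here are illustrative, not NiceGUI's actual ones):

```html
<head>
  <!-- ES module shim: kept as a normal (blocking) script so it runs ASAP -->
  <script src="/static/es-module-shims.js"></script>

  <!-- Library scripts: now defer — fetched in parallel with parsing,
       executed in document order just before DOMContentLoaded -->
  <script defer src="/static/vue.global.js"></script>
  <script defer src="/static/quasar.umd.js"></script>

  <!-- Socket.IO: left as-is, pending #4756 -->
  <script src="/static/socket.io.min.js"></script>

  <!-- Main script: type="module" is deferred by default, so it still
       runs after all the deferred library scripts above -->
  <script type="module" src="/static/main.js"></script>
</head>
```

Since deferred classic scripts and module scripts both execute after parsing in document order, the main module still sees all its dependencies loaded first.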
This PR is a more heavy-handed approach to establishing the Socket.IO connection as soon as possible. It is benchmarked to be at least 50% faster than the current implementation, with higher performance gains under more adverse network conditions.
Before PR:
After PR:
Explainer:
Currently, NiceGUI loads like this:

1. `createApp`
2. `await ui.context.client.connected()` finishes

My PR changes it to be like this:

1. `await ui.context.client.connected()` finishes
2. `createApp`
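The pre-mount message queueing idea can be sketched like this (identifiers are illustrative, not NiceGUI's actual ones): the socket connects as soon as the head script runs, and any messages that arrive before the Vue app mounts are buffered and replayed once it is ready.

```javascript
// Buffer for messages that arrive before the app has mounted.
const preMountQueue = [];
let app = null; // set once the Vue app has mounted

function handleMessage(msg) {
  if (app === null) {
    preMountQueue.push(msg); // app not mounted yet: buffer the message
  } else {
    app.dispatch(msg);       // normal path once mounted
  }
}

function onAppMounted(mountedApp) {
  app = mountedApp;
  // Flush everything that arrived during the handshake/mount gap,
  // preserving arrival order.
  while (preMountQueue.length > 0) {
    app.dispatch(preMountQueue.shift());
  }
}
```

With this in place, the server can consider the client connected as soon as the handshake completes, without waiting for the app to mount.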
Since we begin talking to Socket.IO earlier, `await ui.context.client.connected()` finishes earlier.

To-do items:

- Remove the `console.log` prints.

PS: #4755 changes it to be like this:
1. `createApp`
2. In `beforeCreate`, set up Socket.IO
3. `mounted_app = this;` when app is `mounted`
4. Only then does `await ui.context.client.connected()` finish
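The #4755 sequence can be illustrated with a crude lifecycle simulation (stubs only; `io` and `createApp` here are stand-ins, not the real socket.io-client or Vue APIs): `beforeCreate` fires before `mounted`, so the handshake starts as early as the app's lifecycle allows, but still only after `createApp` runs.

```javascript
const events = [];
const io = () => events.push("socket handshake starts"); // stub for socket.io-client

function createApp(options) {
  // Crude lifecycle order: beforeCreate, then render, then mounted.
  options.beforeCreate();
  events.push("app renders");
  options.mounted();
  return options;
}

createApp({
  beforeCreate() { io(); },                          // #4755: handshake in beforeCreate
  mounted() { events.push("mounted_app set"); },     // mounted_app = this;
});

console.log(events);
// → ["socket handshake starts", "app renders", "mounted_app set"]
```

By contrast, #4756 calls the equivalent of `io()` from a head script, before `createApp` is ever invoked.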