Manual packet fragmentation #116
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master     #116      +/-   ##
==========================================
+ Coverage   91.55%   93.14%   +1.59%
==========================================
  Files          16       15       -1
  Lines         900     1124     +224
==========================================
+ Hits          824     1047     +223
- Misses         76       77       +1
==========================================
```
No longer needed, but it will require Bevy 0.12.1
Splitting logic is wrong, still needs to be fixed.
Btw I think the replicon
But why
Part 3.b done!
One problem with the new design is that component updates and server events can reference an entity but arrive after that entity has despawned, if the despawn occurs in a tick after the update's or event's tick. I think this needs to be documented clearly.
Oh, does renet do buffering instead of waiting for an ack before sending the next message? I need to read that part of the renet code.
- Use checked conversion as in other places.
- Use naming as in other places (I don't have a preference, I just prefer consistency).
- Use shorter trace messages.
- Panic if the tick is not found; it can't happen.
Logically it fits better there, and it contains `next_update_index`, which shouldn't be used or changed from outside.
Nice work! We did it :)
This PR implements the following logic:
This not only helps us implement packet fragmentation, but also improves other things:
FixedUpdate can see events before dropping them (bevyengine/bevy#10077).

I also fixed tick increment when no server is present. This was needed to fix the tests.
I also need to drop ticks' entities that weren't acknowledged for too long. I will probably do it in a separate PR.
Benchmark results:
While it shows that performance has regressed for updates, I think that in real scenarios this implementation will outperform the old one because it is less sensitive to packet loss and re-serializes less data.