Manual packet fragmentation #116


Merged: 63 commits from packet-fragmentation into master on Dec 17, 2023

Conversation

@Shatur Shatur commented Nov 29, 2023

This PR implements the following logic:

  1. Collect all mappings, insertions, removals and despawns into a so-called init message and send it over the reliable channel. This message contains changes only for the current replicon tick.
  2. Collect all component changes into a so-called update message and send it over the unreliable channel. This message contains changes since the last acknowledged tick for each entity. When an insertion is detected, the last acknowledged tick becomes the current tick (since the reliable message will reach the destination). Because this message contains only component changes, all data can naturally be split between messages by entity, up to the packet size (see the sketch below).
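
To make the split in point 2 concrete, here is a minimal, hypothetical sketch of packing per-entity component changes into update messages up to a packet-size budget. The names (`UpdateMessage`, `MAX_PACKET_SIZE`, `split_updates`) and the byte accounting are assumptions for illustration, not the actual bevy_replicon implementation.

```rust
// Hypothetical sketch only: names and sizes are illustrative, not the real
// bevy_replicon types.

/// Assumed per-message budget so each update fits into one packet.
const MAX_PACKET_SIZE: usize = 1200;

/// Component changes since the last acknowledged tick, sent unreliably.
struct UpdateMessage {
    tick: u32,
    /// (server entity, serialized component changes)
    entities: Vec<(u64, Vec<u8>)>,
}

/// Split per-entity changes into messages that each stay under the budget.
/// Data is only ever split at entity boundaries, never inside an entity.
fn split_updates(tick: u32, changes: Vec<(u64, Vec<u8>)>) -> Vec<UpdateMessage> {
    let mut messages = Vec::new();
    let mut current = UpdateMessage { tick, entities: Vec::new() };
    let mut current_size = 0;

    for (entity, data) in changes {
        let entry_size = std::mem::size_of::<u64>() + data.len();
        // Start a new message if adding this entity would exceed the budget.
        // An entity larger than the budget still gets a message of its own.
        if !current.entities.is_empty() && current_size + entry_size > MAX_PACKET_SIZE {
            messages.push(std::mem::replace(
                &mut current,
                UpdateMessage { tick, entities: Vec::new() },
            ));
            current_size = 0;
        }
        current.entities.push((entity, data));
        current_size += entry_size;
    }

    if !current.entities.is_empty() {
        messages.push(current);
    }
    messages
}
```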

This not only helps us implement packet fragmentation, but also improves other things:

  • We serialize most of the world only once; only updates get re-serialized if the client did not acknowledge them (see the bookkeeping sketch after this list).
  • We no longer need despawn and removal trackers, since we only need data since the last run. But it also requires 0.12.1: Wait until FixedUpdate can see events before dropping them bevyengine/bevy#10077.
  • We can avoid sending empty messages with a tick when an event (or now also a component update!) occurs, because we can just reference an older init tick.
  • Client mappings no longer need to reference a tick.
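
As referenced in the first bullet, a rough, hypothetical sketch of the per-client acknowledgment bookkeeping that decides what gets re-serialized could look like this. The `ClientInfo` name and the `u64`/`u32` entity and tick types are assumptions, not the crate's real types.

```rust
use std::collections::HashMap;

/// Hypothetical per-client bookkeeping: for each entity, the last tick whose
/// component data the client has confirmed (or that an insertion pinned,
/// since the reliable init message is guaranteed to arrive).
#[derive(Default)]
struct ClientInfo {
    acked_ticks: HashMap<u64, u32>, // entity -> last acknowledged tick
}

impl ClientInfo {
    /// Record an acknowledgment received from the client for `tick`.
    fn acknowledge(&mut self, entity: u64, tick: u32) {
        let entry = self.acked_ticks.entry(entity).or_insert(tick);
        *entry = (*entry).max(tick);
    }

    /// An entity's components need to be re-sent if they changed after the
    /// last tick the client acknowledged for that entity.
    fn needs_update(&self, entity: u64, change_tick: u32) -> bool {
        match self.acked_ticks.get(&entity) {
            Some(&acked) => change_tick > acked,
            None => true,
        }
    }
}
```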

I also fixed the tick increment when no server is present; this was needed to fix the tests.
I still need to drop entities from ticks that weren't acknowledged for too long. I will probably do it in a separate PR.

Benchmark results:

entities send           time:   [68.015 µs 68.085 µs 68.167 µs]
                        change: [-35.330% -35.155% -34.971%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 2 outliers among 20 measurements (10.00%)
  2 (10.00%) high mild

entities receive        time:   [144.37 µs 144.86 µs 145.48 µs]
                        change: [+10.786% +11.708% +12.525%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 1 outliers among 20 measurements (5.00%)
  1 (5.00%) high mild

entities update send    time:   [64.368 µs 64.520 µs 64.728 µs]
                        change: [+2.6338% +4.2286% +5.7505%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 2 outliers among 20 measurements (10.00%)
  1 (5.00%) high mild
  1 (5.00%) high severe

entities update receive time:   [59.716 µs 60.222 µs 60.894 µs]
                        change: [+135.46% +140.30% +144.64%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 1 outliers among 20 measurements (5.00%)
  1 (5.00%) high mild

While it shows that performance has regressed for updates, I think that in real scenarios this implementation will outperform the old solution because it is less sensitive to packet loss and re-serializes less data.

@Shatur Shatur changed the title from Initial implementation to Manual packet fragmentation on Nov 29, 2023
@Shatur Shatur force-pushed the packet-fragmentation branch 13 times, most recently from 93665dc to f226820 on December 4, 2023 20:56
@Shatur Shatur force-pushed the packet-fragmentation branch from f226820 to a9ee35a on December 4, 2023 21:23
@Shatur Shatur force-pushed the packet-fragmentation branch from e710aab to fb8ce9f on December 5, 2023 00:05

codecov-commenter commented Dec 5, 2023

Codecov Report

Attention: 18 lines in your changes are missing coverage. Please review.

Comparison is base (d14fcc4) 91.55% compared to head (7834cd8) 93.14%.
Report is 1 commit behind head on master.

Files                        Patch %   Missing lines
src/server.rs                95.17%    7 ⚠️
src/server/clients_info.rs   81.81%    6 ⚠️
src/client.rs                96.32%    5 ⚠️


Additional details and impacted files
@@            Coverage Diff             @@
##           master     #116      +/-   ##
==========================================
+ Coverage   91.55%   93.14%   +1.59%     
==========================================
  Files          16       15       -1     
  Lines         900     1124     +224     
==========================================
+ Hits          824     1047     +223     
- Misses         76       77       +1     


@Shatur Shatur marked this pull request as ready for review December 5, 2023 23:09

UkoeHB commented Dec 15, 2023

Btw, I think the replicon Reliable channel should actually be Unordered; the client should then buffer init messages that arrive out of order, and the server should be careful to update entity acks, taking into account that they can arrive out of order.

Shatur commented Dec 15, 2023

But why Unordered? We need to apply init messages in order. Renet is based on UDP, so it basically just buffers out-of-order messages for us.

@UkoeHB UkoeHB left a comment

Part 3.b done!

One problem with the new design: component updates and server events can reference an entity but arrive after that entity has despawned, if the despawn occurs in a tick after the update's or event's tick. I think this needs to be documented clearly.
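
For illustration only, a client-side guard for this case could look like the sketch below; `ServerEntityMap`, the `u64` entity ids, and `apply_update` are hypothetical names, not the actual replicon API.

```rust
use std::collections::HashMap;

/// Assumed mapping from server entities to client entities.
struct ServerEntityMap(HashMap<u64, u64>);

/// Hypothetical sketch: when applying a late update message, skip entities the
/// client no longer knows about (e.g. despawned by a newer init message).
fn apply_update(map: &ServerEntityMap, entities: &[(u64, Vec<u8>)]) {
    for (server_entity, data) in entities {
        match map.0.get(server_entity) {
            Some(client_entity) => {
                // Deserialize `data` and apply the component changes to `client_entity`.
                let _ = (client_entity, data);
            }
            // The entity was already despawned (or never spawned): the update
            // arrived after a later tick's despawn, so it is safely ignored.
            None => continue,
        }
    }
}
```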

UkoeHB commented Dec 15, 2023

But why Unordered? We need to apply init messages in order. Renet is based on UDP, so it basically just buffers out-of-order messages for us.

Oh does renet do buffering instead of waiting for ack before sending the next message? I need to read that part of the renet code.
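
Purely as an illustration of the buffering idea being discussed (not how renet or replicon actually implement it), out-of-order init messages could be held in a map keyed by tick and drained strictly in order:

```rust
use std::collections::BTreeMap;

/// Hypothetical buffer that releases init messages strictly in tick order.
/// Assumes one init message per tick for simplicity.
struct InitBuffer {
    next_tick: u32,
    pending: BTreeMap<u32, Vec<u8>>, // tick -> serialized init message
}

impl InitBuffer {
    fn insert(&mut self, tick: u32, message: Vec<u8>) {
        // Ticks older than the next expected one were already applied.
        if tick >= self.next_tick {
            self.pending.insert(tick, message);
        }
    }

    /// Return the next message only if it is exactly the expected tick;
    /// otherwise keep buffering until the gap is filled.
    fn pop_ready(&mut self) -> Option<(u32, Vec<u8>)> {
        let (&tick, _) = self.pending.first_key_value()?;
        if tick != self.next_tick {
            return None;
        }
        self.next_tick = tick + 1;
        self.pending.remove(&tick).map(|message| (tick, message))
    }
}
```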

UkoeHB and others added 13 commits December 16, 2023 12:42
- Use checked conversion as in other places.
- Use naming as in other places (I don't have a preference, I just
  prefer consistency).
- Use shorter trace messages.
- Panic if tick is not found, it can't happen.
Logically better, and it contains `next_update_index`, which shouldn't be
used or changed outside.
@UkoeHB UkoeHB left a comment

Nice work! We did it :)

@Shatur Shatur merged commit 041905e into master Dec 17, 2023
@Shatur Shatur deleted the packet-fragmentation branch December 17, 2023 00:25
Successfully merging this pull request may close these issues:

Manual diff fragmentation