test(restore_latency): do not reuse netns #5287
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #5287      +/-   ##
==========================================
+ Coverage   82.86%   82.91%   +0.05%
==========================================
  Files         250      250
  Lines       26897    26897
==========================================
+ Hits        22288    22302      +14
+ Misses       4609     4595      -14

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Mh, do you see any way we can make this work even if someone only runs a subset of these tests? Maybe something as simple as just always dropping the first datapoint when emitting metrics?
Good point, let me see. For some reason the problem is not a single datapoint, but the entire test result.
@roypat Playing with it a bit more, I can't really think of a better way. The netnses are cached between "functions" (i.e. tests): after a test finishes, it returns all of its netnses to the pool so that subsequent tests can reuse them. There is no reuse within a test, so we can't run something at the start of every test that would create a netns and omit metrics from that execution. I was also thinking of "prefilling" the pool at every test, i.e. creating netnses and returning them to the pool immediately. But this doesn't help, because even though the netnses are preallocated, they aren't used/initialised (the ioctl isn't called), so they aren't any faster than the ones created on demand. A sketch of the pool mechanics follows below.
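To make the caching behaviour concrete, here is a minimal, self-contained sketch of the pooling described above. It is an illustration under assumptions, not Firecracker's actual fixture code: the names NetNsPool, acquire, and release_all are hypothetical.

```python
# Hypothetical sketch of the netns pooling described above. Names
# (NetNsPool, acquire, release_all) are illustrative assumptions and do
# not match Firecracker's actual test-framework code.
import itertools


class NetNsPool:
    """Hands out network namespaces; finished tests return theirs for reuse."""

    def __init__(self):
        self._ids = itertools.count()
        self._free = []    # namespaces returned by earlier tests ("warm")
        self._in_use = []  # namespaces held by the currently running test

    def acquire(self):
        if self._free:
            # Reused namespace: a previous test already touched it, so the
            # kernel-side initialisation cost was already paid.
            name = self._free.pop()
        else:
            # Fresh namespace: cheap to create, but the first actual use
            # (the ioctl mentioned above) pays the initialisation cost,
            # which is why merely pre-creating namespaces does not help.
            name = f"test_netns{next(self._ids)}"
        self._in_use.append(name)
        return name

    def release_all(self):
        # Runs after a test finishes: all of its namespaces go back to the
        # pool for *subsequent* tests. Within a single test there is no
        # reuse, so nothing warm can be handed out on a test's first run.
        self._free.extend(self._in_use)
        self._in_use = []
```

Under this model, only whichever test variant runs first against a fresh pool pays the initialisation cost, and "prefilling" (acquire followed by release_all before measuring) still leaves the namespaces cold.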
Allow disabling netns reuse so that performance results are consistent. Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
With netns reuse, the first test variant spends time initialising its netns, while subsequent test variants do not. Disable netns reuse to make the performance data consistent. Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
Changes
Disable netns reuse for the test_restore_latency test.
Reason
With netns reuse, the first test variant spends time initialising its netns, while subsequent test variants do not, which skews the first variant's latency results. Disable netns reuse to make the performance data consistent.
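For illustration only, here is a pytest-style sketch of what opting out of reuse could look like. The netns_pool fixture and its reuse flag are assumptions made for this example, not the actual API touched by this PR.

```python
# Hypothetical pytest sketch; the netns_pool fixture and the reuse flag
# are illustrative assumptions, not the real Firecracker test-framework API.
import pytest


@pytest.fixture
def fresh_netns(netns_pool):
    """Yield a namespace that bypasses the reuse pool entirely."""
    ns = netns_pool.acquire(reuse=False)  # assumed opt-out flag
    yield ns
    ns.destroy()  # tear it down instead of returning it to the pool


def test_restore_latency(fresh_netns):
    # With reuse disabled, every test variant initialises its own netns,
    # so all variants pay the same setup cost and the emitted latency
    # metrics are comparable across variants.
    ...
```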
License Acceptance
By submitting this pull request, I confirm that my contribution is made under
the terms of the Apache 2.0 license. For more information on following Developer
Certificate of Origin and signing off your commits, please check
CONTRIBUTING.md.

PR Checklist
[ ] I have run tools/devtool checkstyle to verify that the PR passes the automated style checks.
[ ] I have described what is done in these changes and how they are solving the problem in a clear and encompassing way.
[ ] I have updated any relevant documentation (both in code and in the docs) in the PR.
[ ] I have mentioned all user-facing changes in CHANGELOG.md.
[ ] If a specific issue led to this PR, this PR closes the issue.
[ ] When making API changes, I have followed the Runbook for Firecracker API changes.
[ ] I have tested all new and changed functionalities in unit tests and/or integration tests.
[ ] I have linked an issue to every new TODO.