@@ -184,7 +184,7 @@ function to [`.buildkite/pipeline_perf.py`](../.buildkite/pipeline_perf.py). To
 manually run an A/B-Test, use
 
 ```sh
-tools/devtool -y test --ab [optional arguments to ab_test.py] run <dir A> <dir B> --test <test specification>
+tools/devtool -y test --ab [optional arguments to ab_test.py] run <dir A> <dir B> --pytest-opts <test specification>
 ```
 
 Here, _dir A_ and _dir B_ are directories containing firecracker and jailer
@@ -198,7 +198,7 @@ branch and the `HEAD` of your current branch, run
 ```sh
 tools/devtool -y build --rev main --release
 tools/devtool -y build --rev HEAD --release
-tools/devtool -y test --no-build --ab -- run build/main build/HEAD --test integration_tests/performance/test_boottime.py::test_boottime
+tools/devtool -y test --no-build --ab -- run build/main build/HEAD --pytest-opts integration_tests/performance/test_boottime.py::test_boottime
 ```
 
 #### How to Write an A/B-Compatible Test and Common Pitfalls
@@ -213,9 +213,9 @@ dimension to match up data series between two test runs. It only matches up two
 data series with the same name if their dimensions match.
 
 Special care needs to be taken when pytest expands the argument passed to
-`tools/ab_test.py`'s `--test` option into multiple individual test cases. If two
-test cases use the same dimensions for different data series, the script will
-fail and print out the names of the violating data series. For this reason,
+`tools/ab_test.py`'s `--pytest-opts` option into multiple individual test cases.
+If two test cases use the same dimensions for different data series, the script
+will fail and print out the names of the violating data series. For this reason,
 **A/B-Compatible tests should include a `performance_test` key in their
 dimension set whose value is set to the name of the test**.
 
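For illustration, here is a minimal, self-contained sketch of how a test case can tag its data series to satisfy the dimension-matching rule described above. It assumes a pytest `metrics` fixture in the style of aws-embedded-metrics; the `StubMetrics` class, the fixture, and the metric and test names below are illustrative stand-ins, not the exact fixture used by the Firecracker test suite.

```python
"""Sketch: an A/B-compatible test tagging its data series with `performance_test`."""

import pytest


class StubMetrics:
    """Illustrative stand-in for a metrics fixture (aws-embedded-metrics style)."""

    def __init__(self):
        self.dimensions = {}
        self.series = []

    def set_dimensions(self, dimensions):
        # Dimensions apply to every data series emitted afterwards.
        self.dimensions = dimensions

    def put_metric(self, name, value, unit):
        # Record one data point of the series `name` under the current dimensions.
        self.series.append((name, value, unit, dict(self.dimensions)))


@pytest.fixture
def metrics():
    # Hypothetical replacement for the real metrics fixture.
    return StubMetrics()


@pytest.mark.parametrize("vcpu_count", [1, 2])
def test_example_latency(metrics, vcpu_count):
    # Including the test name as the `performance_test` dimension keeps data
    # series from different test cases distinguishable when --pytest-opts
    # expands to multiple test cases.
    metrics.set_dimensions(
        {"performance_test": "test_example_latency", "vcpu_count": str(vcpu_count)}
    )
    metrics.put_metric("latency_us", 1234.0, unit="Microseconds")
```

With this pattern, each test case (and each parametrization) emits its data series under a distinct dimension set, so `tools/ab_test.py` can pair up series between the two runs without collisions.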