test_util: Parametrize some tests #528
Merged
This PR refactors two tests in `test_util` - `test_get_dict_key_from_value` and `test_get_launcher_from_installdir` - to use PyTest's `parametrize` feature. The rationale is that instead of constructing test data for every scenario inside the tests themselves (in this case, many different launcher scenarios), we can pass in a list of scenarios to test.

For example, in `test_get_launcher_from_installdir` we pass in the Steam launcher list, then the Lutris launcher list, and so on; the test then runs against each scenario without us having to define all of that logic inside the test. This keeps the test itself lean and generic, and also makes it easy to add additional scenarios.

We can even name these scenarios using an `id`, so that when we run the tests we can see the exact parametrized scenario. Previously we had no such insight, whereas now if one parametrized test fails we can see the name (ID) of the failing case.

Note that `test_get_random_game_name` was left out of this, as in future we'd probably want some kind of PyTest fixture providing games from each launcher that we can re-use. So for now its implementation was left alone :-)

I pushed for testing in general in ProtonUp-Qt, and I've been slacking. I've been trying to find the best pattern for writing and submitting tests, and to learn as much as I can about writing "good" tests, so that we end up with a robust test suite that gives us confidence in that green "tick".
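As a rough illustration of the pattern described above (the function, paths, and scenario names here are hypothetical stand-ins, not ProtonUp-Qt's actual code):

```python
import pytest


# Hypothetical helper standing in for the real util function under test.
def get_launcher_from_installdir(install_dir: str) -> str:
    """Return the launcher that a compatibility-tool install dir belongs to."""
    if "compatibilitytools.d" in install_dir:
        return "steam"
    if "lutris" in install_dir:
        return "lutris"
    return "unknown"


# Each tuple is one scenario; ids= gives every case a readable name, so a
# failure reports e.g. test_get_launcher_from_installdir[lutris-native]
# instead of an anonymous parameter set.
@pytest.mark.parametrize(
    "install_dir, expected",
    [
        ("/home/user/.steam/root/compatibilitytools.d/", "steam"),
        ("/home/user/.local/share/lutris/runners/wine/", "lutris"),
    ],
    ids=["steam-native", "lutris-native"],
)
def test_get_launcher_from_installdir(install_dir, expected):
    assert get_launcher_from_installdir(install_dir) == expected
```

Adding a new launcher scenario then means appending one tuple (and one `id`) to the list, with no changes to the test body itself.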
Hopefully this is a good first step in improving our tests and establishing a good pattern for writing tests going forward. I'm already looking into writing tests for the non-happy-path scenarios of the existing tests and expanding the test suite, but that will come at a later date. 😉
Thanks!