Description
Detailed Description
Each failed metric test would trigger feedback to the user, explaining why the test failed and suggesting next steps to improve the FAIRness of the test object. This feedback would be test-specific but easily understandable, making no or few assumptions about the user's familiarity with concepts like internet protocols, signposting and metadata standards.
Instead of focusing only on how to pass the test, which would effectively introduce normative behaviour through the backdoor, the feedback should also make it clear how passing the test improves the FAIRness of the test object. It could be good to acknowledge that automated tests might not capture every way of fulfilling a FAIR principle, while explaining that methods that are easily machine-readable (i.e. fit for automated testing) also have advantages for findability and accessibility. Showing people a minimalist example they can apply to their repository and adjust as they go could improve uptake. The feedback should be constructive and encouraging; depending on where the user is in the research lifecycle, they might not feel ready to mint DOIs or PURLs just yet.
The feedback could be presented similarly to the debug messages, or as a separate block of text. It would also need to be added to the client for display.
Context
When evaluating multiple automated assessment tools, we found that the feedback provided to the user is often missing, or too technical to help improve the FAIRness of the test object.
In F-UJI, the debug messages can help to understand where a test failed, but because they are only debug messages they might not always be considered by the user, and they are often quite technical. Even together with the metric name, it is not always clear what could be done to improve FAIRness.
Adding feedback that is easily understandable and actionable makes it clearer to the community why they should be interested in improving their compliance with the metrics, thereby increasing their motivation to take steps to do so.
Possible Implementation
Ideally, the feedback would be configurable through the metric YAML file. However, this might be difficult for tests that have more than one way of passing, since a single message cannot reflect when the test object is "halfway there". This might not be a bad thing, though, as long as the feedback is well worded.
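As a rough sketch of what this could look like, the metric definition could gain an optional feedback field that the assessment code picks up when a test fails. The `feedback`/`on_fail` keys and the wording below are assumptions for illustration, not part of the current metric YAML schema:

```python
import yaml

# Hypothetical excerpt of a metric definition with an added "feedback" block.
# The "feedback" and "on_fail" keys are illustrative only; the real metric
# YAML schema does not currently include them.
METRIC_YAML = """
metric_identifier: FsF-F1-01D
metric_name: Data is assigned a globally unique identifier.
feedback:
  on_fail: >
    The identifier of your dataset could not be confirmed as globally unique.
    Registered identifiers such as DOIs make it easier for both humans and
    machines to find and cite your data. A minimal first step is to request
    a DOI from your repository or institution.
"""


def feedback_for_failed_metric(metric_definition: dict) -> str | None:
    """Return the user-facing feedback text for a failed metric, if defined."""
    return metric_definition.get("feedback", {}).get("on_fail")


metric = yaml.safe_load(METRIC_YAML)
print(feedback_for_failed_metric(metric))
```

Keeping the text in the metric file would let maintainers adjust the wording without touching the assessment code.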
In terms of presentation, displaying the feedback would need to become part of the web client, e.g. simpleclient, either as an additional section or in boxes for each test, similar to the debug messages.
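On the client side, the feedback could then be pulled out of the assessment results and shown next to the existing debug output. A minimal sketch, assuming each per-metric result carries "metric_name", "passed" and an optional "feedback" string as proposed above (the actual F-UJI response structure and simpleclient internals may differ):

```python
from html import escape


def render_feedback_boxes(results: list[dict]) -> str:
    """Build a simple HTML block with one feedback box per failed test.

    Assumes each result dict carries "metric_name", "passed" and an optional
    "feedback" string; this is an illustrative structure, not the actual
    F-UJI result format.
    """
    boxes = []
    for result in results:
        if not result.get("passed") and result.get("feedback"):
            boxes.append(
                '<div class="feedback-box">'
                f'<strong>{escape(result["metric_name"])}</strong>'
                f'<p>{escape(result["feedback"])}</p></div>'
            )
    return "\n".join(boxes)


example_results = [
    {"metric_name": "FsF-F1-01D", "passed": False,
     "feedback": "Consider requesting a DOI from your repository."},
    {"metric_name": "FsF-F2-01M", "passed": True},
]
print(render_feedback_boxes(example_results))
```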