Explain EvaluationCriterion "magic" and documenting Pydantic usage (#198)
* Add initial README around EvaluationCriterion
* Add Pydantic section to README.md
* Move EvaluationCriteria README to docstring
* Minor markdown problem
The chosen method allows us to document the available evaluation functions in an IDE-friendly fashion and hides away details like internal IDs (`"ef_...."`).

The actual `EvaluationCriterion` is created by overloading the comparison operators for the base class of an evaluation function. Instead of the comparison returning a bool, we've made it create an `EvaluationCriterion` with the correct signature to send over the wire to our API.

Parameters:
    eval_function_id (str): ID of the evaluation function.
    threshold_comparison (:class:`ThresholdComparison`): comparator for evaluation, e.g. threshold=0.5 and threshold_comparator `>` imply that a test only passes if score > 0.5.
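The operator-overloading "magic" described above can be sketched as follows. This is a minimal illustration, not the library's actual implementation; the class names `EvalFunction` and `EvaluationCriterion` follow the text, while the enum member names and the example ID `"ef_abc123"` are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum


class ThresholdComparison(Enum):
    """Comparators a criterion can use against its threshold."""
    GREATER_THAN = ">"
    GREATER_THAN_EQUAL_TO = ">="
    LESS_THAN = "<"
    LESS_THAN_EQUAL_TO = "<="


@dataclass
class EvaluationCriterion:
    """Payload sent over the wire to the API."""
    eval_function_id: str
    threshold_comparison: ThresholdComparison
    threshold: float


class EvalFunction:
    """Base class for evaluation functions.

    Comparison operators return an EvaluationCriterion instead of a bool,
    so `my_eval_fn > 0.5` builds a criterion that hides the internal ID.
    """

    def __init__(self, eval_function_id: str):
        self._id = eval_function_id

    def __gt__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(self._id, ThresholdComparison.GREATER_THAN, threshold)

    def __ge__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(self._id, ThresholdComparison.GREATER_THAN_EQUAL_TO, threshold)

    def __lt__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(self._id, ThresholdComparison.LESS_THAN, threshold)

    def __le__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(self._id, ThresholdComparison.LESS_THAN_EQUAL_TO, threshold)


# Hypothetical usage: the comparison yields a criterion, not a bool.
criterion = EvalFunction("ef_abc123") > 0.5
```

Python permits rich-comparison methods (`__gt__`, `__ge__`, …) to return any object, which is what makes this pattern work without IDE complaints about the call site.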