Hey,
I was trying to use the Zero-Shot-Classification pipeline and build its Docker image, but the build fails while running the pytest tests command.
I am sharing the error log below; please help me out with this. I have not changed anything in the test cases; they are the same as provided in this repository.
#11 170.9 __________________________ test_multi_label_response ___________________________
#11 170.9
#11 170.9 requests = ({'hypothesis': 'The sentiment of the review is {}.', 'labels': ['negative', 'postive', 'neutral'], 'model_name': 'typ...m going out for food']}, {'multi_label': True, 'texts': ['food was great', 'food was bad', 'i am going out for food']})
#11 170.9 response = ({'predictions': [{'label': 'postive', 'score': 0.8}, {'label': 'negative', 'score': 0.87}, {'label': 'postive', 'scor... 'postive'], 'score': [1.0, 0.85, 0.83]}, {'label': ['postive', 'negative', 'neutral'], 'score': [0.67, 0.34, 0.14]}]})
#11 170.9
#11 170.9 def test_multi_label_response(requests, response):
#11 170.9 > assert response[2] == pipeline(requests[2])
#11 170.9
#11 170.9 tests/test_classifier.py:12:
#11 170.9 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
#11 170.9 src/classifier.py:63: in __call__
#11 170.9 predictions = classification_pipeline(texts, labels, hypothesis, multi_label=multi_label)
#11 170.9 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
#11 170.9
#11 170.9 self = <transformers.pipelines.zero_shot_classification.ZeroShotClassificationPipeline object at 0x7f08ca4f9f70>
#11 170.9 sequences = ['food was great', 'food was bad', 'i am going out for food']
#11 170.9 args = (['negative', 'postive', 'neutral'], 'The sentiment of the review is {}.')
#11 170.9 kwargs = {'multi_label': True}
#11 170.9
#11 170.9 def __call__(
#11 170.9 self,
#11 170.9 sequences: Union[str, List[str]],
#11 170.9 *args,
#11 170.9 **kwargs,
#11 170.9 ):
#11 170.9 """
#11 170.9 Classify the sequence(s) given as inputs. See the [`ZeroShotClassificationPipeline`] documentation for more
#11 170.9 information.
#11 170.9
#11 170.9 Args:
#11 170.9 sequences (`str` or `List[str]`):
#11 170.9 The sequence(s) to classify, will be truncated if the model input is too large.
#11 170.9 candidate_labels (`str` or `List[str]`):
#11 170.9 The set of possible class labels to classify each sequence into. Can be a single label, a string of
#11 170.9 comma-separated labels, or a list of labels.
#11 170.9 hypothesis_template (`str`, *optional*, defaults to `"This example is {}."`):
#11 170.9 The template used to turn each label into an NLI-style hypothesis. This template must include a {} or
#11 170.9 similar syntax for the candidate label to be inserted into the template. For example, the default
#11 170.9 template is `"This example is {}."` With the candidate label `"sports"`, this would be fed into the
#11 170.9 model like `"<cls> sequence to classify <sep> This example is sports . <sep>"`. The default template
#11 170.9 works well in many cases, but it may be worthwhile to experiment with different templates depending on
#11 170.9 the task setting.
#11 170.9 multi_label (`bool`, *optional*, defaults to `False`):
#11 170.9 Whether or not multiple candidate labels can be true. If `False`, the scores are normalized such that
#11 170.9 the sum of the label likelihoods for each sequence is 1. If `True`, the labels are considered
#11 170.9 independent and probabilities are normalized for each candidate by doing a softmax of the entailment
#11 170.9 score vs. the contradiction score.
#11 170.9
#11 170.9 Return:
#11 170.9 A `dict` or a list of `dict`: Each result comes as a dictionary with the following keys:
#11 170.9
#11 170.9 - **sequence** (`str`) -- The sequence for which this is the output.
#11 170.9 - **labels** (`List[str]`) -- The labels sorted by order of likelihood.
#11 170.9 - **scores** (`List[float]`) -- The probabilities for each of the labels.
#11 170.9 """
#11 170.9 if len(args) == 0:
#11 170.9 pass
#11 170.9 elif len(args) == 1 and "candidate_labels" not in kwargs:
#11 170.9 kwargs["candidate_labels"] = args[0]
#11 170.9 else:
#11 170.9 > raise ValueError(f"Unable to understand extra arguments {args}")
#11 170.9 E ValueError: Unable to understand extra arguments (['negative', 'postive', 'neutral'], 'The sentiment of the review is {}.')
#11 170.9
#11 170.9 ../lang/lib/python3.9/site-packages/transformers/pipelines/zero_shot_classification.py:179: ValueError
#11 170.9 =========================== short test summary info ============================
#11 170.9 FAILED tests/test_classifier.py::test_complete_response - ValueError: Unable ...
#11 170.9 FAILED tests/test_classifier.py::test_default_response - ValueError: Unable t...
#11 170.9 FAILED tests/test_classifier.py::test_multi_label_response - ValueError: Unab...
#11 170.9 ======================== 3 failed in 160.47s (0:02:40) =========================
#11 ERROR: executor failed running [/bin/sh -c pip install pytest --no-cache-dir && pytest tests -s -vv]: exit code: 1
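
From the traceback, the failure seems to come from an API change in transformers: the installed ZeroShotClassificationPipeline.__call__ accepts only candidate_labels as an extra positional argument, while src/classifier.py:63 passes both labels and hypothesis positionally, which triggers the ValueError. Below is a minimal sketch of what I suspect the fix looks like, passing both as keyword arguments (variable names are taken from the traceback; I have not verified this against the repository):

```python
# src/classifier.py, around line 63 (sketch, not a verified patch):
# pass candidate_labels and hypothesis_template as keyword arguments,
# since newer transformers versions reject extra positional arguments.
predictions = classification_pipeline(
    texts,
    candidate_labels=labels,
    hypothesis_template=hypothesis,
    multi_label=multi_label,
)
```

Alternatively, if the Dockerfile does not pin a transformers version, pinning an older release that still accepted the positional call might also make the tests pass.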