Labels: Client, Evaluation, Service Attention, customer-reported, needs-team-attention, question
Description
- Package Name: azure.ai.evaluation
- Package Version: 1.11.0
- Operating System: Windows 11 Enterprise
- Python Version: 3.11
Describe the bug
We have an online scheduled job built on the azure.ai.evaluation SDK. After the job has been running for a long time, it fails with the error pasted below:
Traceback (most recent call last):
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_evaluate/_evaluate.py", line 792, in evaluate
return _evaluate(
^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_evaluate/_evaluate.py", line 925, in _evaluate
eval_result_df, eval_metrics, per_evaluator_results = _run_callable_evaluators(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_evaluate/_evaluate.py", line 1193, in _run_callable_evaluators
per_evaluator_results: Dict[str, __EvaluatorInfo] = {
^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_evaluate/_evaluate.py", line 1195, in <dictcomp>
"result": batch_run_client.get_details(run, all_results=True),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_evaluate/_batch_run/_run_submitter_client.py", line 95, in get_details
run = self._get_run(client_run)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_evaluate/_batch_run/_run_submitter_client.py", line 166, in _get_run
return cast(Future[Run], run).result()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.13/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.13/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/opt/python/3.11.13/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.13/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/python/3.11.13/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.13/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_batch_engine/_run_submitter.py", line 83, in submit
self.stream_run(run=run, storage=local_storage, raise_on_error=self._config.raise_on_error)
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_batch_engine/_run_submitter.py", line 208, in stream_run
RunSubmitter._print_run_summary(run, file_handler)
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_batch_engine/_run_submitter.py", line 243, in _print_run_summary
text_out.write(
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_common/_logging.py", line 255, in write
return self._prev_out.write(s)
^^^^^^^^^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_common/_logging.py", line 255, in write
return self._prev_out.write(s)
^^^^^^^^^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_common/_logging.py", line 255, in write
return self._prev_out.write(s)
^^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 2977 more times]
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_common/_logging.py", line 253, in write
s = scrub_credentials(s) # Remove credential from string.
^^^^^^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_common/_logging.py", line 78, in scrub_credentials
return CredentialScrubber.scrub(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_legacy/_common/_logging.py", line 100, in scrub
output = regex.sub(CredentialScrubber.PLACE_HOLDER, output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RecursionError: maximum recursion depth exceeded while calling a Python object
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/8ddf41a785527a5/server_utils/tasks/_task_shared.py", line 35, in wrapper
return func(task_conf)
^^^^^^^^^^^^^^^
File "/tmp/8ddf41a785527a5/server_utils/tasks/_ask_learn.py", line 114, in ask_learn_evaluation_task
base_learn_copilot_eval_task(
File "/tmp/8ddf41a785527a5/server_utils/tasks/_task_shared.py", line 190, in base_learn_copilot_eval_task
raise e
File "/tmp/8ddf41a785527a5/server_utils/tasks/_task_shared.py", line 138, in base_learn_copilot_eval_task
eval_result = evaluation_function(local_tmp_file, batch_id, task_conf)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8ddf41a785527a5/server_utils/tasks/_ask_learn.py", line 82, in evaluate_ask_learn
return evaluate(
^^^^^^^^^
File "./.venv/lib/python3.11/site-packages/azure/ai/evaluation/_evaluate/_evaluate.py", line 828, in evaluate
raise EvaluationException(
azure.ai.evaluation._exceptions.EvaluationException: (InternalError) maximum recursion depth exceeded while calling a Python object
To Reproduce
The error is hard to reproduce deterministically: in our long-running job the exception always occurs eventually, but the point at which it occurs is unpredictable.
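As a temporary mitigation, assuming the failure really does come from an ever-growing chain of stdout wrappers, the chain could be unwound between runs by following the private `_prev_out` attribute (the name is taken from the traceback; it is an internal SDK detail, so this is a workaround sketch, not a supported API):

```python
import sys

def unwrap_stream(stream, attr="_prev_out"):
    # Follow the private _prev_out chain back to the innermost stream.
    # The attribute name is a hypothetical assumption based on the
    # traceback in azure/ai/evaluation/_legacy/_common/_logging.py.
    seen = set()
    while hasattr(stream, attr) and id(stream) not in seen:
        seen.add(id(stream))
        stream = getattr(stream, attr)
    return stream

# Before each scheduled run, collapse any accumulated wrapper layers:
sys.stdout = unwrap_stream(sys.stdout)
sys.stderr = unwrap_stream(sys.stderr)
```

This does not fix the root cause; running each evaluation in a fresh process would avoid the accumulation entirely.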