Description
What happened?
I tested the Refine module with llama3-70b-instruct and mistral-large-2402, both served through Amazon Bedrock. For optimization, I used BootstrapFewShotWithRandomSearch.
However, the run was interrupted after a few steps with the following warnings and errors:
WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
ERROR dspy.utils.parallelizer: Error for Example (...) Adapter JSONAdapter failed to parse the LM response.LM Response: {"type": "function", "name": "json_tool_call", "parameters": {"discussion": "The predict module is to blame for the final reward being below the threshold. It failed to preserve the content of the HTML document in the XML output.", "advice": "{"predict": "The predict module should ensure that it preserves the content of the HTML document in the XML output, including all text content exactly as provided, without omitting, summarizing, paraphrasing, or altering any words in the legal text."}"}}
Expected to find output fields in the LM response: [discussion, advice]
Actual output fields parsed from the LM response: []
. Set provide_traceback=True for traceback.
Refine: Attempt failed with temperature 0.0: Adapter JSONAdapter failed to parse the LM response.LM Response: {"type": "function", "name": "json_tool_call", "parameters": {"discussion": "The predict module is to blame for the final reward being below the threshold. It failed to preserve the content of the HTML document in the XML output.", "advice": "{"predict": "The predict module should ensure that it preserves the content of the HTML document in the XML output, including all text content exactly as provided, without omitting, summarizing, paraphrasing, or altering any words in the legal text."}"}}
Expected to find output fields in the LM response: [discussion, advice]
Actual output fields parsed from the LM response: []
Average Metric: 18.00 / 22 (81.8%): 92%|█████████▏| 80/87 [01:39<00:08, 1.25s/it]
2025/05/22 20:32:15 WARNING dspy.utils.parallelizer: Execution cancelled due to errors or interruption.
Error optimizing us.meta.llama3-3-70b-instruct-v1:0: Execution cancelled due to errors or interruption.
Steps to reproduce
Use Bedrock as the provider.
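Roughly, the model setup looks like this (a sketch: the Llama model ID is the one from the log above, the Mistral ID is illustrative, and AWS credentials/region come from the environment):

import dspy

# Bedrock-hosted models, addressed through litellm-style "bedrock/..." IDs.
llama = dspy.LM("bedrock/us.meta.llama3-3-70b-instruct-v1:0")
mistral = dspy.LM("bedrock/mistral.mistral-large-2402-v1:0")

# Configure DSPy to use one of them (the run was repeated for each model).
dspy.configure(lm=llama)  # or lm=mistral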
I used the Refine module wrapping a ChainOfThought component:
self.transform = dspy.Refine(
    module=self.base_transform,  # ChainOfThought
    N=3,                         # Try up to 3 attempts
    reward_fn=validation_reward,
    threshold=0.9,               # High threshold for quality
)
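For completeness, here is roughly how the surrounding pieces fit together (a sketch: the signature string, the body of validation_reward, the metric, and the trainset are illustrative placeholders, not my exact code; only the Refine arguments above are the real ones):

import dspy

# Base module that Refine wraps; the HTML -> XML signature is inferred from
# the task described in the error message.
base_transform = dspy.ChainOfThought("html_document -> xml_output")

# Stand-in for the real reward function, which checks that the XML output
# preserves the text of the HTML document.
def validation_reward(inputs, prediction) -> float:
    return 1.0 if getattr(prediction, "xml_output", "") else 0.0

transform = dspy.Refine(
    module=base_transform,
    N=3,
    reward_fn=validation_reward,
    threshold=0.9,
)

# Optimization step from the description; metric and trainset are minimal
# placeholders so the snippet is self-contained.
def metric(example, prediction, trace=None) -> float:
    return float(bool(getattr(prediction, "xml_output", "")))

trainset = [
    dspy.Example(html_document="<p>Some legal text</p>")
        .with_inputs("html_document"),
]

optimizer = dspy.BootstrapFewShotWithRandomSearch(metric=metric, num_threads=4)
optimized = optimizer.compile(transform, trainset=trainset)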
DSPy version
2.6.24