What happened?
I tested the Refine module with llama3-70b-instruct and mistral-large-2402, both served through Bedrock. For optimization, I used bootstrap_few_shot_random_search.
However, execution was interrupted after a few steps, accompanied by the following warnings and errors:
WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
ERROR dspy.utils.parallelizer: Error for Example (...) Adapter JSONAdapter failed to parse the LM response.
LM Response: {"type": "function", "name": "json_tool_call", "parameters": {"discussion": "The predict module is to blame for the final reward being below the threshold. It failed to preserve the content of the HTML document in the XML output.", "advice": "{"predict": "The predict module should ensure that it preserves the content of the HTML document in the XML output, including all text content exactly as provided, without omitting, summarizing, paraphrasing, or altering any words in the legal text."}"}}
Expected to find output fields in the LM response: [discussion, advice]
Actual output fields parsed from the LM response: []
. Set provide_traceback=True for traceback.
Refine: Attempt failed with temperature 0.0: Adapter JSONAdapter failed to parse the LM response.
LM Response: {"type": "function", "name": "json_tool_call", "parameters": {"discussion": "The predict module is to blame for the final reward being below the threshold. It failed to preserve the content of the HTML document in the XML output.", "advice": "{"predict": "The predict module should ensure that it preserves the content of the HTML document in the XML output, including all text content exactly as provided, without omitting, summarizing, paraphrasing, or altering any words in the legal text."}"}}
Expected to find output fields in the LM response: [discussion, advice]
Actual output fields parsed from the LM response: []
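The LM response above is not valid JSON: the value of "advice" is itself a JSON object, but its inner quotes are not escaped, so the outer document breaks at the nested "predict" key. A minimal sketch of why a standard parser rejects it (field contents shortened here for illustration):

```python
import json

# Shape of the failing response: the "advice" value embeds unescaped quotes.
bad = '{"discussion": "predict is to blame", "advice": "{"predict": "keep all text"}"}'

try:
    json.loads(bad)
    parsed = True
except json.JSONDecodeError:
    parsed = False  # the unescaped inner quotes make this invalid JSON

# With the inner quotes escaped, the same payload parses, and "advice"
# comes back as a plain string that happens to contain JSON:
good = '{"discussion": "predict is to blame", "advice": "{\\"predict\\": \\"keep all text\\"}"}'
advice = json.loads(good)["advice"]
```

This matches the parser reporting zero output fields: it never gets past the malformed top-level object, so neither `discussion` nor `advice` is recovered.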
Average Metric: 18.00 / 22 (81.8%): 92%|█████████▏| 80/87 [01:39<00:08, 1.25s/it]2025/05/22 20:32:15 WARNING dspy.utils.parallelizer: Execution cancelled due to errors or interruption.
Error optimizing us.meta.llama3-3-70b-instruct-v1:0: Execution cancelled due to errors or interruption.
Steps to reproduce
Use Bedrock as the provider.
Configure the Refine module with a ChainOfThought component:
self.transform = dspy.Refine(
    module=self.base_transform,  # ChainOfThought
    N=3,                         # try up to 3 attempts
    reward_fn=validation_reward,
    threshold=0.9,               # high threshold for quality
)
DSPy version
2.6.24
Use dspy==2.6.19. I was on the latest release and kept downgrading one version at a time until it worked.
It looks like downgrading dspy to 2.6.19 (which also pulled in older versions of litellm and openai) resolved the compatibility issues you were facing with the Gemini API and structured output.
Rapid development in libraries like DSPy and LiteLLM, and changes in how they interact with the various model provider APIs, can sometimes lead to temporary incompatibilities that require specific version combinations to work smoothly, especially with newer or preview models.