
[Bug] Exception "Adapter JSONAdapter failed to parse the LM response" encountered in the Refine module when using Bedrock #8264


Open
Nasreddine opened this issue May 23, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@Nasreddine

What happened?

I tested the Refine module using llama3-70b-instruct and mistral-large-2402, both served through Bedrock. For optimization, I used the BootstrapFewShotWithRandomSearch optimizer.
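For reference, a minimal sketch of that setup (the program, metric, and trainset below are illustrative placeholders; the model ID is taken from the log further down):

    import dspy
    from dspy.teleprompt import BootstrapFewShotWithRandomSearch

    # Bedrock models are reached through LiteLLM's "bedrock/" prefix.
    dspy.configure(lm=dspy.LM("bedrock/us.meta.llama3-3-70b-instruct-v1:0"))

    # Stand-in program and data; the real program transforms HTML into XML.
    program = dspy.ChainOfThought("html_document -> xml_document")
    trainset = [
        dspy.Example(html_document="<p>hi</p>", xml_document="<doc>hi</doc>")
        .with_inputs("html_document"),
    ]

    def my_metric(example, pred, trace=None):  # stand-in metric
        return pred.xml_document == example.xml_document

    optimizer = BootstrapFewShotWithRandomSearch(metric=my_metric)
    compiled = optimizer.compile(program, trainset=trainset)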

However, execution was interrupted after a few steps, accompanied by the following warnings and errors:

WARNING dspy.adapters.json_adapter: Failed to use structured output format, falling back to JSON mode.
ERROR dspy.utils.parallelizer: Error for Example (...) Adapter JSONAdapter failed to parse the LM response.

LM Response: {"type": "function", "name": "json_tool_call", "parameters": {"discussion": "The predict module is to blame for the final reward being below the threshold. It failed to preserve the content of the HTML document in the XML output.", "advice": "{"predict": "The predict module should ensure that it preserves the content of the HTML document in the XML output, including all text content exactly as provided, without omitting, summarizing, paraphrasing, or altering any words in the legal text."}"}}

Expected to find output fields in the LM response: [discussion, advice]

Actual output fields parsed from the LM response: []

. Set provide_traceback=True for traceback.
Refine: Attempt failed with temperature 0.0: Adapter JSONAdapter failed to parse the LM response.

LM Response: {"type": "function", "name": "json_tool_call", "parameters": {"discussion": "The predict module is to blame for the final reward being below the threshold. It failed to preserve the content of the HTML document in the XML output.", "advice": "{"predict": "The predict module should ensure that it preserves the content of the HTML document in the XML output, including all text content exactly as provided, without omitting, summarizing, paraphrasing, or altering any words in the legal text."}"}}

Expected to find output fields in the LM response: [discussion, advice]

Actual output fields parsed from the LM response: []

Average Metric: 18.00 / 22 (81.8%): 92%|█████████▏| 80/87 [01:39<00:08, 1.25s/it]2025/05/22 20:32:15 WARNING dspy.utils.parallelizer: Execution cancelled due to errors or interruption.

Error optimizing us.meta.llama3-3-70b-instruct-v1:0: Execution cancelled due to errors or interruption.
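The failure itself is visible in the payload: the value of "advice" is a JSON string that contains unescaped double quotes (it embeds {"predict": ...} verbatim), so the string terminates early and the document is no longer valid JSON. A standalone check (plain Python, not DSPy code) reproduces the same parse error:

    import json

    # Minimal stand-in for the LM response above: the "advice" value
    # embeds unescaped quotes inside a JSON string.
    payload = '{"discussion": "ok", "advice": "{"predict": "keep all text"}"}'
    try:
        json.loads(payload)
    except json.JSONDecodeError as err:
        print(err)  # e.g. Expecting ',' delimiter: line 1 column 35 (char 34)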

Steps to reproduce

Use Bedrock as the provider.
Configure the Refine module with a ChainOfThought module as its inner module:

    self.transform = dspy.Refine(
        module=self.base_transform,  # a dspy.ChainOfThought module
        N=3,                         # try up to 3 attempts
        reward_fn=validation_reward,
        threshold=0.9,               # high threshold for quality
    )
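Filled out as a self-contained sketch (the signature, reward function, and Mistral model ID are assumptions for illustration; only Refine's arguments come from the snippet above):

    import dspy

    dspy.configure(lm=dspy.LM("bedrock/mistral.mistral-large-2402-v1:0"))

    # Assumed signature: the real program transforms HTML into XML.
    base_transform = dspy.ChainOfThought("html_document -> xml_document")

    def validation_reward(args, pred: dspy.Prediction) -> float:
        # Stand-in reward; the real one scores how faithfully the
        # XML output preserves the source text.
        return 1.0 if pred.xml_document else 0.0

    transform = dspy.Refine(
        module=base_transform,
        N=3,
        reward_fn=validation_reward,
        threshold=0.9,
    )

    result = transform(html_document="<p>Some legal text</p>")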

DSPy version

2.6.24

Nasreddine added the bug label on May 23, 2025
@rafsid

rafsid commented May 23, 2025

Use dspy==2.6.19. I was on the latest release and kept downgrading one version at a time until things worked.

Downgrading dspy to 2.6.19 (which also pulled in older versions of litellm and openai) resolved the structured-output compatibility issues; in my case they showed up with the Gemini API. Rapid development in libraries like DSPy and LiteLLM, and changes in how they interact with the various model-provider APIs, can cause temporary incompatibilities, so specific version combinations are sometimes needed, especially with newer or preview models.
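If anyone wants to try the same workaround, the pin is simply:

    pip install "dspy==2.6.19"

which also downgrades litellm and openai through dspy's own requirements.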
