
1.8.3 get NotImplementedError for passing list into the prompt #1546


Open
2 tasks done
TwilightSpar opened this issue May 22, 2025 · 4 comments
Labels
anthropic bug Something isn't working

Comments

@TwilightSpar

TwilightSpar commented May 22, 2025

  • This is actually a bug report.

What Model are you using?

  • Other: us.anthropic.claude-3-7-sonnet-20250219-v1:0 through aws bedrock

Describe the bug
I get NotImplementedError: Non-text prompts are not currently supported in the Bedrock provider.
when I pass a list into the prompt.

To Reproduce

import boto3
import instructor
from instructor import Mode

# a prompt example

a = ['aaa', 'bbb']
prompt = f"""
here is a list:
{a}
"""
# create a bedrock runtime and instructor_client
bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name="us-east-1",
)
instructor_client = instructor.from_bedrock(bedrock_runtime, mode=Mode.BEDROCK_JSON)

# send the prompt to llm using chat completion and a pydantic class
def get_llm_response_instructor(prompt: str,
                   pydantic_class,
                   model_id: str = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"):

    response = instructor_client.chat.completions.create(
        modelId=model_id,
        messages=[
            {
                "role": "user",
                "content": [{"text": f"{prompt}"}],
            }
        ],
        response_model=pydantic_class,
        max_retries=3,
        inferenceConfig={
            "temperature": 0
        }
    )

    return response
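For what it's worth, interpolating a list through an f-string still produces a plain Python string (the list's repr gets embedded), so the prompt being sent here is ordinary text — a quick check:

```python
# The f-string embeds the list's repr; the result is still a plain str.
a = ['aaa', 'bbb']
prompt = f"""
here is a list:
{a}
"""

print(type(prompt).__name__)   # str
print('aaa' in prompt)         # True
```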

Expected behavior
There should not be an error; a list interpolated into a format string should be handled correctly.
The code above works fine on version 1.8.2. I updated to 1.8.3 today and got this error.

Screenshots

(Three screenshots of the traceback were attached to the original issue; not reproduced here.)

@github-actions github-actions bot added anthropic bug Something isn't working labels May 22, 2025
@dogonthehorizon
Contributor

This should have a bedrock tag instead of anthropic. This was introduced in the last release via #1529.

The intent of that PR was to align the chat.completions.create method with the rest of the providers in instructor, since the underlying converse endpoint behaves differently from the OpenAI spec. I see this issue as a good example of why #1534 is needed.

@TwilightSpar I'm not a maintainer, but I am willing to help deliver a fix. With the knowledge I have of the project, it's desirable to keep chat.completions.create consistent, so if we introduce a formal converse endpoint that supports the AWS format, is that an acceptable compromise?
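Until a fix lands, one possible workaround (a sketch only — `flatten_converse_messages` is a hypothetical helper, not part of instructor's API) is to flatten converse-style content blocks into the plain strings that 1.8.3's chat.completions.create accepts:

```python
# Hypothetical helper: collapse Bedrock converse-style content blocks
# ([{"text": ...}, ...]) into plain string content before passing the
# messages to instructor's chat.completions.create.
def flatten_converse_messages(messages):
    flattened = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):
            # Keep only text blocks, joining their text in order.
            # Non-text blocks (images, documents) are dropped here,
            # since they are unsupported by this provider anyway.
            content = "".join(
                block["text"] for block in content if "text" in block
            )
        flattened.append({**msg, "content": content})
    return flattened
```

You would then pass `flatten_converse_messages(messages)` instead of `messages`; already-flat string content passes through unchanged.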

@jxnl
Collaborator

jxnl commented May 22, 2025

@dogonthehorizon is this something you can tackle? I don't have Bedrock to verify some of these issues.

@dogonthehorizon
Contributor

@jxnl if we agree on that proposed fix, then yes, not a problem.

@TwilightSpar
Author

TwilightSpar commented May 23, 2025

I found that if I use the Anthropic (Claude) API request format, it works. It seems that in 1.8.3, the chat.completions.create function changed the expected format from the Bedrock (Claude) converse format to the original Claude API format:

import boto3
import instructor
from instructor import Mode
from pydantic import BaseModel

# create a bedrock runtime and instructor_client
bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name="us-east-1",
)
instructor_client = instructor.from_bedrock(bedrock_runtime, mode=Mode.BEDROCK_JSON)

# send the prompt to llm using chat completion and a pydantic class
class User(BaseModel):
    name: str
    age: int

prompt = f"""
"Extract: Jason is 22 years old"
"""

model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"

response = instructor_client.chat.completions.create(
    modelId=model_id,
    messages=[
        {
            "role": "user",
             ###### this is the Anthropic API format: no "text" wrapper, the prompt string itself is the content
            "content": f"{prompt}",
             ###### the original code used the Bedrock converse API format
            # "content": [{"text": f"{prompt}"}],
        }
    ],
    response_model=User
)
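To illustrate what `response_model=User` buys you: instructor validates the model's JSON output against the Pydantic class, so the call above returns a typed `User` instance rather than raw text. A minimal sketch of just the validation step, assuming the model returns the expected fields:

```python
from pydantic import BaseModel


class User(BaseModel):
    name: str
    age: int


# Simulate the JSON a model might return for the prompt above;
# instructor performs this validation (with retries) internally.
user = User.model_validate({"name": "Jason", "age": "22"})
print(user)  # name='Jason' age=22 -- note "22" was coerced to int
```

If validation fails, instructor re-prompts the model up to `max_retries` times with the validation errors included.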

Thanks for the links, @dogonthehorizon. A formal converse endpoint for Bedrock would be good enough. I always get confused when choosing the API format.
