What happened?
We are running LiteLLM in AWS and are using other functionality successfully.
We set up a Bedrock Knowledge Base and added it to LiteLLM.
I am able to run it through the "Test Key" page with the model and the Vector Store without issue.
When I run it from an application, I get an error.
Using curl:
curl https://myurl/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-mykey" \
  -d '{
    "model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    "messages": [{"role": "user", "content": "Who is on the Platform Engineering team?"}],
    "tools": [
      {
        "type": "file_search",
        "vector_store_ids": ["QZTOGKY5HL"]
      }
    ]
  }'
ERROR:
{
  "error": {
    "code": "400",
    "message": "litellm.BadRequestError: BedrockException - {\"message\":\"3 validation errors detected: Value '' at 'toolConfig.tools.1.member.toolSpec.name' failed to satisfy constraint: Member must have length greater than or equal to 1; Value '' at 'toolConfig.tools.1.member.toolSpec.name' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_-]+; Value '' at 'toolConfig.tools.1.member.toolSpec.description' failed to satisfy constraint: Member must have length greater than or equal to 1\"}. Received Model Group=us.anthropic.claude-3-5-sonnet-20241022-v2:0\nAvailable Model Group Fallbacks=None",
    "param": null,
    "type": null
  }
}
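Judging by the error paths (toolConfig.tools.1.member.toolSpec.name and .description), LiteLLM appears to convert the file_search entry into a Bedrock Converse toolSpec whose name and description are empty strings. A sketch of the offending payload, reconstructed from the validation message only (not captured from the wire; the failing entry is at index 1 of the converted tools list):

{
  "toolConfig": {
    "tools": [
      {
        "toolSpec": {
          "name": "",
          "description": "",
          "inputSchema": { "json": {} }
        }
      }
    ]
  }
}

Bedrock requires both fields to be non-empty and name to match [a-zA-Z0-9_-]+, which is exactly what the three validation errors say.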
I get the same error when using the OpenAI Python library:
import os

import openai

# Get env variables
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_API_BASE = os.getenv("OPENAI_API_BASE")
MODEL = os.getenv("MODEL")
KB_ID = "QZTOGKY5HL"

# Set up the OpenAI client pointed at the LiteLLM proxy
client = openai.OpenAI(
    api_key=OPENAI_API_KEY,
    base_url=OPENAI_API_BASE,
)

# Send a chat message with the knowledge base attached as a file_search tool
messages = [
    {
        "role": "user",
        "content": "Who is on the Platform Engineering Team?"
    }
]
tools = [
    {
        "type": "file_search",
        "vector_store_ids": [KB_ID]
    }
]
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools,
)
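To see exactly what the proxy sends to Bedrock, detailed debug logging should print the converted request body. A minimal sketch, assuming the documented --detailed_debug CLI flag (adjust for a container entrypoint):

litellm --config /path/to/config.yaml --detailed_debug

With that enabled, the outgoing toolConfig should show up in the proxy logs, which would confirm whether the empty toolSpec name/description come from the file_search conversion.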
If I remove the tools, the call runs successfully (without the KB, of course), so I know the settings are correct.
If I curl the endpoint that lists Vector Stores, I can see the one I am using.
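For reference, the listing call that succeeds looks like this (the /v1/vector_stores path is an assumption based on LiteLLM's OpenAI-compatible endpoints):

curl https://myurl/v1/vector_stores \
  -H "Authorization: Bearer sk-mykey"

The knowledge base QZTOGKY5HL appears in that response.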
Relevant log output
{
  "status": "failure",
  "batch_models": null,
  "usage_object": null,
  "user_api_key": "3f1c7c212629bacf3fc183b9dc1dd5b9f569d2105dc2ded4f2bb15ef06cc1dbd",
  "error_information": {
    "traceback": " File \"/usr/lib/python3.13/site-packages/litellm/proxy/proxy_server.py\", line 3537, in chat_completion\n return await base_llm_response_processor.base_process_llm_request(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n ...<16 lines>...\n )\n ^\n File \"/usr/lib/python3.13/site-packages/litellm/proxy/common_request_processing.py\", line 391, in base_process_llm_request\n responses = await llm_responses\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 990, in acompletion\n raise e\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 966, in acompletion\n response = await self.async_function_with_fallbacks(**kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 3536, in async_function_with_fallbacks\n raise original_exception\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 3350, in async_function_with_fallbacks\n response = await self.async_function_with_retries(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 3728, in async_function_with_retries\n raise original_exception\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 3619, in async_function_with_retries\n response = await self.make_call(original_function, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 3737, in make_call\n response = await response\n ^^^^^^^^^^^^^^\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 1129, in _acompletion\n raise e\n File \"/usr/lib/python3.13/site-packages/litellm/router.py\", line 1088, in _acompletion\n response = await _response\n ^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.13/site-packages/litellm/utils.py\", line 1492, in wrapper_async\n raise e\n File \"/usr/lib/python3.13/site-packages/litellm/utils.py\", line 1353, in wrapper_async\n result = await original_function(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.13/site-packages/litellm/main.py\", line 531, in acompletion\n raise exception_type(\n ~~~~~~~~~~~~~~^\n model=model,\n ^^^^^^^^^^^^\n ...<3 lines>...\n extra_kwargs=kwargs,\n ^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File \"/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py\", line 2239, in exception_type\n raise e\n File \"/usr/lib/python3.13/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py\", line 939, in exception_type\n raise BadRequestError(\n ...<4 lines>...\n )\n",
    "error_code": "400",
    "error_class": "BadRequestError",
    "llm_provider": "bedrock",
    "error_message": "litellm.BadRequestError: BedrockException - {\"message\":\"3 validation errors detected: Value '' at 'toolConfig.tools.1.member.toolSpec.name' failed to satisfy constraint: Member must have length greater than or equal to 1; Value '' at 'toolConfig.tools.1.member.toolSpec.name' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_-]+; Value '' at 'toolConfig.tools.1.member.toolSpec.description' failed to satisfy constraint: Member must have length greater than or equal to 1\"}. Received Model Group=us.amazon.nova-lite-v1:0\nAvailable Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2"
  },
  "applied_guardrails": null,
  "user_api_key_alias": "ai-en",
  "user_api_key_org_id": null,
  "requester_ip_address": "",
  "user_api_key_team_id": "2fee7bea-c133-46df-9979-ede094b333aa",
  "user_api_key_user_id": "default_user_id",
  "guardrail_information": null,
  "model_map_information": null,
  "mcp_tool_call_metadata": null,
  "additional_usage_values": {},
  "user_api_key_team_alias": "Platform Engineering",
  "vector_store_request_metadata": null
}
Are you a ML Ops Team?
Yes
What LiteLLM version are you on?
v1.71.1
Twitter / LinkedIn details
No response