Error when calling embedding with an application inference profile.

What happened?

Calling litellm.embedding against Bedrock with model_id set to an application inference profile ARN fails. The request is sent to the Bedrock runtime with the raw profile ARN in the URL path, Bedrock answers with com.amazon.coral.service#UnknownOperationException, and LiteLLM's Titan response transform then fails with KeyError: 'embedding' instead of surfacing the underlying error. Reproduction:
import litellm

litellm._turn_on_debug()

resp = litellm.embedding(
    custom_llm_provider="bedrock",
    model="amazon.titan-embed-text-v1",
    model_id="arn:aws:bedrock:us-east-1:123412341234:application-inference-profile/abc123123",
    input=["testing"],
    aws_region_name="us-east-1",
)
print(resp)
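For comparison, I'd expect the same profile ARN to work when invoked directly through boto3, which percent-encodes the ARN when building the request path. A minimal sketch, not tested here, assuming the profile fronts the same Titan embedding model and default AWS credentials are configured:

import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Application inference profile ARNs are accepted as modelId by InvokeModel;
# the SDK URL-encodes the ARN before building the /model/{modelId}/invoke path.
response = client.invoke_model(
    modelId="arn:aws:bedrock:us-east-1:123412341234:application-inference-profile/abc123123",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "testing"}),
)
print(json.loads(response["body"].read())["embedding"])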
Relevant log output

18:39:41 - LiteLLM:DEBUG: utils.py:324 -
18:39:41 - LiteLLM:DEBUG: utils.py:324 - Request to litellm:
18:39:41 - LiteLLM:DEBUG: utils.py:324 - litellm.embedding(custom_llm_provider='bedrock', model='amazon.titan-embed-text-v1', model_id='arn:aws:bedrock:us-east-1:123412341234:application-inference-profile/abc123123', input=['hererer'], aws_region_name='us-east-1')
18:39:41 - LiteLLM:DEBUG: utils.py:324 -
18:39:41 - LiteLLM:DEBUG: litellm_logging.py:422 - self.optional_params: {}
18:39:41 - LiteLLM:DEBUG: utils.py:324 - SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
18:39:41 - LiteLLM:DEBUG: litellm_logging.py:422 - self.optional_params: {'model_id': 'arn:aws:bedrock:us-east-1:123412341234:application-inference-profile/abc123123', 'aws_region_name': 'us-east-1'}
18:39:41 - LiteLLM:DEBUG: base_aws_llm.py:121 - in get credentials aws_access_key_id=None aws_secret_access_key=None aws_session_token=None aws_region_name=us-east-1 aws_session_name=None aws_profile_name=None aws_role_name=None aws_web_identity_token=None aws_sts_endpoint=None
18:39:41 - LiteLLM:DEBUG: litellm_logging.py:746 - POST Request Sent from LiteLLM:
curl -X POST \
https://bedrock-runtime.us-east-1.amazonaws.com/model/arn:aws:bedrock:us-east-1:123412341234:application-inference-profile/abc123123/invoke \
-H 'Content-Type: ap****on' -H 'X-Amz-Date: 20****1Z' -H 'X-Amz-Security-Token: Fw****==' -H 'Authorization: AW****f3' -H 'Content-Length: *****' \
-d '{'inputText': 'hererer'}'
18:39:41 - LiteLLM:DEBUG: utils.py:324 - RAW RESPONSE:
{"Output": {"__type": "com.amazon.coral.service#UnknownOperationException"}, "Version": "1.0"}

18:39:41 - LiteLLM:DEBUG: utils.py:324 - RAW RESPONSE:
'embedding'

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
18:39:41 - LiteLLM:DEBUG: exception_mapping_utils.py:2243 - Logging Details: logger_fn - None | callable(logger_fn) - False
18:39:41 - LiteLLM:DEBUG: litellm_logging.py:2018 - Logging Details LiteLLM-Failure Call: []
Traceback (most recent call last):
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/main.py", line 3626, in embedding
    response = bedrock_embedding.embeddings(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/llms/bedrock/embed/embedding.py", line 432, in embeddings
    return self._single_func_embeddings(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/llms/bedrock/embed/embedding.py", line 220, in _single_func_embeddings
    returned_response = AmazonTitanG1Config()._transform_response(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/llms/bedrock/embed/amazon_titan_g1_transformation.py", line 76, in _transform_response
    embedding=_parsed_response["embedding"],
              ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'embedding'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/tester/tmp/litellm_fix.py", line 52, in <module>
    resp = litellm.embedding(
           ^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/utils.py", line 1247, in wrapper
    raise e
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/utils.py", line 1125, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/main.py", line 3954, in embedding
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2214, in exception_type
    raise e
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2190, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: 'embedding'
Traceback (most recent call last):
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/main.py", line 3626, in embedding
    response = bedrock_embedding.embeddings(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/llms/bedrock/embed/embedding.py", line 432, in embeddings
    return self._single_func_embeddings(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/llms/bedrock/embed/embedding.py", line 220, in _single_func_embeddings
    returned_response = AmazonTitanG1Config()._transform_response(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tester/venv_test/lib/python3.12/site-packages/litellm/llms/bedrock/embed/amazon_titan_g1_transformation.py", line 76, in _transform_response
    embedding=_parsed_response["embedding"],
              ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'embedding'
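The curl line in the debug output points at the likely root cause: the profile ARN is placed into the /model/{modelId}/invoke path without URL-encoding, so the '/' inside 'application-inference-profile/abc123123' breaks the route and Bedrock responds with UnknownOperationException, which the Titan transform then masks as KeyError: 'embedding'. A sketch of the encoding I'd expect when building the endpoint (my assumption about the fix, not LiteLLM's actual code):

from urllib.parse import quote

model_id = "arn:aws:bedrock:us-east-1:123412341234:application-inference-profile/abc123123"

# Percent-encode the full ARN (safe="" escapes '/' and ':' too) so the
# runtime can parse the /model/{modelId}/invoke route.
endpoint = (
    "https://bedrock-runtime.us-east-1.amazonaws.com/model/"
    + quote(model_id, safe="")
    + "/invoke"
)
print(endpoint)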
Are you an ML Ops team?
No
What LiteLLM version are you on?
1.65.6
Twitter / LinkedIn details
No response