OutputParserException: Parsing LLM output produced both a final answer and a parse-able action #29368
Example Code

```python
from database_llmdef import *
from langchain_community.agent_toolkits import create_sql_agent

db = database()  # create and read the database

# CustomLLM (defined below) invokes the LLM through a Lambda function. The
# Lambda function uses bedrock.invoke_model to call the Anthropic Claude 2.1
# model and returns its JSON output back to my code. I cannot use the
# BedrockLLM class from LangChain, as I am required to call the LLM through
# Lambda ONLY.
llm = CustomLLM()

agent_executor = create_sql_agent(llm, db=db, max_iterations=30, verbose=True, handle_parsing_errors=True)
question = 'My question here?'
response = agent_executor.invoke({'input': question})
print(response)
```
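This exception means a single completion contained both an `Action:` block and a `Final Answer:` block, so the agent's output parser cannot tell which one to act on. One likely contributor, visible in `_call` below, is that the `stop` sequences the agent passes in (e.g. `"\nObservation"`) are accepted but never applied, so Claude keeps writing past the point where the agent expects it to stop. A minimal sketch of applying them client-side (the helper name is mine, not from the post):

```python
from typing import List, Optional


def apply_stop_sequences(text: str, stop: Optional[List[str]] = None) -> str:
    """Cut the model output at the first occurrence of any stop sequence."""
    for s in stop or []:
        cut = text.find(s)
        if cut != -1:
            text = text[:cut]
    return text


# In CustomLLM._call below, the return would then become something like:
#     return apply_stop_sequences(lambda_response, stop)
```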
The definition of CustomLLM (it subclasses LangChain's base LLM class):

```python
from typing import Any, Dict, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class CustomLLM(LLM):
    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        """Run the LLM on the given input.

        Override this method to implement the LLM logic.

        Args:
            prompt: The prompt to generate from.
            stop: Stop words to use when generating. Model output is cut off at
                the first occurrence of any of the stop substrings.
                If stop tokens are not supported consider raising
                NotImplementedError.
            run_manager: Callback manager for the run.
            **kwargs: Arbitrary additional keyword arguments. These are usually
                passed to the model provider API call.

        Returns:
            The model output as a string. Actual completions SHOULD NOT include
            the prompt.
        """
        # response = lambda_handler({'prompt': prompt}, context=None)
        function_name = "lambda_function"  # Replace with your Lambda function name
        payload = {'prompt': prompt}  # The input data to send to the Lambda function
        # Invoke the Lambda function and return its response. Note that the
        # `stop` sequences are accepted above but never applied to the output.
        lambda_response = invoke_lambda(function_name, payload)
        return lambda_response

    @property
    def _identifying_params(self) -> Dict[str, Any]:
        """Return a dictionary of identifying parameters."""
        return {
            # The model name allows users to specify custom token counting
            # rules in LLM monitoring applications (e.g., in LangSmith users
            # can provide per-token pricing for their model and monitor
            # costs for the given LLM).
            "model_name": "CustomChatModel",
        }

    @property
    def _llm_type(self) -> str:
        """Get the type of language model used by this chat model. Used for logging purposes only."""
        return "custom"
```
Replies: 1 comment
I modified the built-in BedrockLLM class instead of using CustomLLM. Then it worked for me.
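For reference, the standard way to wire Bedrock into the agent, assuming direct Bedrock access is available (`model_id` and `region_name` here are placeholders), would look roughly like this; the built-in class also handles details such as forwarding stop sequences to the model, which the custom wrapper above did not:

```python
from langchain_aws import BedrockLLM
from langchain_community.agent_toolkits import create_sql_agent

llm = BedrockLLM(model_id="anthropic.claude-v2:1", region_name="us-east-1")
agent_executor = create_sql_agent(llm, db=db, max_iterations=30, verbose=True,
                                  handle_parsing_errors=True)
```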