How to eliminate global variable usage in LangChain Runnable chain #31287
Replies: 9 comments 7 replies
-
To refactor your code and eliminate the use of the global variable llm_az, build the model inside the chain from the state that is passed in at invocation time.
Here's a refactored version of your code:

```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import AzureChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

def get_llm_from_state(state):
    # Build the Azure model from values carried in the chain's input dict
    return AzureChatOpenAI(
        model=state.get('model_name'),
        deployment_name=state.get('deployment_name'),
        azure_endpoint=state.get('azure_openai_endpoint'),
        openai_api_key=state.get('openai_api_key'),
        openai_api_type=state.get('openai_api_type'),
        openai_api_version=state.get('openai_api_version'),
        max_tokens=int(state.get('max_tokens', 1000))
    )

def get_final_prompt(x):
    # Assemble the chat prompt from the state, with a fallback system prompt
    prompt = ChatPromptTemplate.from_messages([
        ("system", x.get("answer_prompt", "Default system prompt")),
        ("human", "{question}")
    ])
    return prompt.format_messages(question=x['question'])

def build_chain():
    return (
        RunnablePassthrough.assign(
            # note: .assign stores this result under the 'init' key, i.e. as
            # x['init']['llm'], so the x['llm'] lookup in the next step will
            # fail — see the follow-up replies below
            init=RunnableLambda(lambda x: {'llm': get_llm_from_state(x)})
        )
        | RunnableLambda(lambda x: {'prompt': get_final_prompt(x), 'llm': x['llm']})
        | RunnableLambda(lambda x: x['llm'].invoke(x['prompt']))
    )
```
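The intended invocation shape is a single state dict carrying both the Azure settings and the question — for example (placeholder values, keyed to the `state.get(...)` calls above; note the `init` nesting flagged in the code comment):

```python
chain = build_chain()

answer = chain.invoke({
    "model_name": "gpt-4o",                                        # placeholder
    "deployment_name": "my-deployment",                            # placeholder
    "azure_openai_endpoint": "https://example.openai.azure.com/",  # placeholder
    "openai_api_key": "...",
    "openai_api_type": "azure",
    "openai_api_version": "2024-02-01",                            # placeholder
    "max_tokens": 1000,
    "answer_prompt": "You are a helpful assistant.",
    "question": "What is LangChain?",
})
```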
This refactoring makes your code more functional and modular, allowing for easier testing and maintenance.
To continue talking to Dosu, mention @dosu.
-
When I use the suggested fix, I am getting the following error:
-
Still getting the same error after the initialization of the llm in the get_llm_from_state function.
-
The error happens after the initialization of the llm, in the RunnablePassthrough.assign step.
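For context, one likely culprit: `RunnablePassthrough.assign` stores each result under the keyword it was given, so `init=...` nests the model as `x['init']['llm']`, and the next step's `x['llm']` lookup raises a `KeyError`. A minimal sketch of the corrected wiring, reusing `get_llm_from_state` and `get_final_prompt` from the first reply:

```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

def build_chain():
    return (
        # assign the model at the top level so downstream steps can read x['llm']
        RunnablePassthrough.assign(llm=RunnableLambda(get_llm_from_state))
        | RunnableLambda(lambda x: {'prompt': get_final_prompt(x), 'llm': x['llm']})
        | RunnableLambda(lambda x: x['llm'].invoke(x['prompt']))
    )
```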
-
I have managed to solve the llm initialization problem by converting the output to a dictionary with:
but now, if I do the following:
I am getting the error:
-
If I use:
I am getting a completely wrong answer, even though the SQL code used is correct.
-
This solution is still not working: when invoked, final_prompt doesn't contain all the values from the runnable input x that the llm needs to produce the result.
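One way to keep every value available end-to-end (a sketch, not the poster's actual fix) is to build the prompt with `RunnablePassthrough.assign` as well, since `.assign` merges its result into the existing dict rather than replacing it — again reusing `get_llm_from_state` and `get_final_prompt` from the first reply:

```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

chain = (
    RunnablePassthrough.assign(llm=RunnableLambda(get_llm_from_state))
    # .assign keeps all existing keys in x and adds 'prompt' alongside them
    | RunnablePassthrough.assign(prompt=RunnableLambda(get_final_prompt))
    | RunnableLambda(lambda x: x['llm'].invoke(x['prompt']))
)
```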
-
There's no Context in langchain_core.runnables.
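If the earlier suggestion was pointing at the experimental context utilities, recent langchain-core releases place them under the beta namespace rather than `langchain_core.runnables` — an assumption worth verifying against your installed version:

```python
# Beta API — the module path may change between langchain-core releases
from langchain_core.beta.runnables.context import Context
```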
-
Can you give me an example for my previous chain?
-
I'm working with LangChain's Runnable interface to build a processing chain. Currently, I'm using a global variable llm_az to hold the language model instance, which I want to eliminate for better modularity and testability.
Here's a simplified version of my code:
How can I refactor this code to avoid using the global llm_az variable, perhaps by passing the language model instance through the chain in a more functional and modular way?
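For illustration, the pattern in question typically looks something like this — llm_az's settings and the chain shape here are hypothetical stand-ins, not the poster's actual code:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import AzureChatOpenAI

# module-level global — the instance the question wants to eliminate
llm_az = AzureChatOpenAI(
    azure_endpoint="https://example.openai.azure.com/",  # placeholder
    deployment_name="my-deployment",                     # placeholder
    openai_api_key="...",                                # placeholder
    openai_api_version="2024-02-01",                     # placeholder
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

# the lambda closes over the global llm_az instead of receiving it through the chain
chain = prompt | RunnableLambda(lambda messages: llm_az.invoke(messages))
```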