How does llm.bind_tools work in langchain?
#29344
Replies: 2 comments
-
Hey @anmol-aidora! I'm here to assist you with any bugs, questions, or contributions you have. Let me know how I can help!
-
You are explaining what I can already see by looking through the code. I need to ask a human.
-
I need to understand exactly how langchain converts information from code into the LLM prompt, because at the end of the day, only text can be passed to the LLM.
If I am incorrect somewhere in my understanding, kindly point that out as well when answering the question.
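For example, if my understanding is right, a plain Python function ends up serialized into the OpenAI function-calling JSON schema before anything is sent. A minimal sketch using `convert_to_openai_tool` from `langchain_core` (`multiply` is just a toy function, and the printed dict is approximate):

```python
from langchain_core.utils.function_calling import convert_to_openai_tool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# The tool becomes plain JSON-serializable data, roughly:
print(convert_to_openai_tool(multiply))
# {'type': 'function',
#  'function': {'name': 'multiply',
#               'description': 'Multiply two integers.',
#               'parameters': {'type': 'object',
#                              'properties': {'a': {'type': 'integer'},
#                                             'b': {'type': 'integer'}},
#                              'required': ['a', 'b']}}}
```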
This is the `bind_tools` function:
```python
class BaseChatOpenAI(BaseChatModel):
    ...

    def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], Type, Callable, BaseTool]],
        ...
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        """Bind tool-like objects to this chat model."""
```
On going to the definition of `super().bind()`:
```python
class Runnable(Generic[Input, Output], ABC):
    ...

    def bind(self, **kwargs: Any) -> Runnable[Input, Output]:
        """Bind arguments to a Runnable, returning a new Runnable."""
```
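The body of `bind` is essentially one line (paraphrased from `langchain_core`): it does not call the model or build a prompt, it just wraps the model together with the stored kwargs:

```python
# Paraphrased body: nothing reaches the LLM here; the kwargs
# (e.g. tools=[...]) are stored on a wrapper around the model.
return RunnableBinding(bound=self, kwargs=kwargs, config={})
```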
On going to `RunnableBinding`:

```python
class RunnableBinding(RunnableBindingBase[Input, Output]):
    """Wrap a Runnable with additional functionality."""
```
After this, I am not able to understand how exactly `bind_tools` passes the tool information through to the LLM.
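To trace that last hop: when the wrapped model is invoked, the merged kwargs reach the chat model's `_generate`, which builds the HTTP request for OpenAI's chat completions API; the serialized tools travel as the `tools` field of that JSON payload, not as text spliced into the prompt. A hedged sketch for inspecting this, where `multiply` is the toy tool from above and the model name is arbitrary:

```python
# Hedged sketch: inspecting what bind_tools actually stored.
# Assumes langchain_openai is installed; the model name is arbitrary.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools([multiply])

# The RunnableBinding exposes the serialized tools as default kwargs,
# which are later merged into the chat completions request payload.
print(llm_with_tools.kwargs)
# {'tools': [{'type': 'function', 'function': {...}}]}
```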