
Conversation

KanaSukita

Description:

This PR extends ToolNode to execute non-handoff tools in parallel while giving handoff tool calls (those whose names start with handoff_prefix, default "transfer_to") special handling. Specifically:

Non-handoff tool calls are executed concurrently (via a thread pool or asyncio).

When multiple handoff calls are detected, only the first one is processed; subsequent ones are rejected with an error message.

A handoff call can incorporate the outputs of previously executed tools into its state.messages.

Error handling is added via a custom tool_error_handler, with explicit handling of AtaConnectionKickedOffError.

In chat_agent_executor, a v3 mode is introduced that forwards all tool_calls together, distinguishing handoffs more cleanly.

This improves agent flexibility in scenarios where multiple tool invocations and handoff operations must be coordinated within a single step.

Issue:

N/A (new feature; not tied to a specific issue, but useful for agent developers who need multi-tool and handoff orchestration).

Dependencies:

No new external dependencies introduced.

Reuses existing LangGraph execution utilities (get_executor_for_config, _run_one, _arun_one).
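The tool_error_handler mentioned above can be pictured as a wrapper around each tool invocation that converts exceptions into tool messages instead of crashing the graph. The sketch below is an assumption about the shape of that hook, not the PR's actual API; in particular, re-raising AtaConnectionKickedOffError (defined here as a stand-in class) is one plausible reading of "explicit handling".

```python
class AtaConnectionKickedOffError(Exception):
    """Stand-in for the connection error singled out in the PR description."""


def run_with_error_handler(call: dict, invoke, tool_error_handler=None) -> dict:
    """Invoke a tool call, mapping failures to error tool messages."""
    try:
        return {"tool_call_id": call["id"], "content": invoke(call)}
    except AtaConnectionKickedOffError:
        # Assumed behavior: let connection errors propagate so the graph
        # can abort or retry, rather than swallowing them as tool output.
        raise
    except Exception as exc:
        content = tool_error_handler(exc) if tool_error_handler else f"Error: {exc}"
        return {"tool_call_id": call["id"], "content": content}
```

A custom handler could, for example, redact stack traces or attach retry hints before the message is appended to state.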

Tests & Docs:

Manual end-to-end test script (a single user turn that triggers both a regular tool call and a handoff):

from dotenv import load_dotenv
load_dotenv()

import asyncio
from typing import Annotated
from langchain.tools import tool
from langchain_openai import AzureChatOpenAI
from langgraph.prebuilt import create_react_agent, ToolNode
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.store.memory import InMemoryStore
from langgraph_swarm import create_handoff_tool, create_swarm   # test with langgraph_swarm==0.0.14 to avoid implementing swarm again

model = AzureChatOpenAI(
    deployment_name="gpt-4o",
)


@tool
def get_weather(city: Annotated[str, "The city to get the weather for"]) -> dict:
    """Get the current weather information for a specified city."""
    print("[debug] get_weather called")
    return {
        "city": city,
        "temperature_range": "14-20C",
        "conditions": "Sunny with wind.",
    }


@tool
def search_web(query: Annotated[str, "The content you want to search"]) -> dict:
    """Search the web for information retrieval."""
    print("[debug] search_web called")
    return {"content": "The nvidia stock price is $185"}


@tool
def query_device(device_id: Annotated[str, "The Device id"]) -> dict:
    """Get the current status of a device."""
    print("[debug] query_device called")
    return {"online": True, "device_name": "Happy Device", "password": "a123456789"}


# Configure the agent with Azure OpenAI
agent = create_react_agent(
    model,
    ToolNode(
        [
            query_device,
            create_handoff_tool(agent_name="weather_agent"),
            create_handoff_tool(agent_name="web_agent"),
        ]
    ),
    prompt="You are a helpful assistant",
    name="agent",
)

weather_agent = create_react_agent(
    model,
    ToolNode(
        [
            get_weather,
            create_handoff_tool(agent_name="agent"),
            create_handoff_tool(agent_name="web_agent"),
        ]
    ),
    prompt="You are a helpful assistant that can search the weather",
    name="weather_agent",
)

web_agent = create_react_agent(
    model,
    ToolNode(
        [
            search_web,
            create_handoff_tool(agent_name="agent"),
            create_handoff_tool(agent_name="weather_agent"),
        ]
    ),
    prompt="You are a helpful assistant that can search the web",
    name="web_agent",
)

workflow = create_swarm([agent, weather_agent, web_agent], default_active_agent="agent")


# Compile with checkpointer/store
checkpointer = InMemorySaver()
store = InMemoryStore()

app = workflow.compile(checkpointer=checkpointer, store=store)


async def main():
    res = await app.ainvoke(
        {
            "messages": [
                (
                    "user",
                    "What is the stock price of nvidia; What is the status of device '123'",
                )
            ],
        },
        config={"configurable": {"thread_id": "123"}},
    )

    for m in res["messages"]:
        m.pretty_print()


if __name__ == "__main__":
    asyncio.run(main())


eyurtsev (Collaborator) left a comment:


@KanaSukita thanks for the PR.

Agents are moving to the langchain mono-repo.

alpha docs here: https://docs.langchain.com/oss/python/langchain/overview

We would appreciate an issue explaining the problem and a potential solution.

A clear description of the problem together with the use case will be a lot more helpful than an implementation, as the implementation requires design and will likely look different from the proposed PR.
