control over responses with Streamable HTTP #1117
DinoChiesa started this conversation in General
I'd love to see an example or demonstration of this, or a pointer to documentation that explains what's happening. I am missing something at the moment.
I am using fastmcp 2.10.4. My `main.py` includes the following.
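A minimal sketch of the shape of it (the tool name, host, and port here are illustrative placeholders, not the exact code):

```python
from fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

if __name__ == "__main__":
    # "http" is the streamable HTTP transport in fastmcp 2.x.
    mcp.run(transport="http", host="127.0.0.1", port=8000)
```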
...and I run it via `dotenv run python main.py`.
Everything works, but I am not clear whether I am taking advantage of streamable HTTP or not. As I understand it, the MCP spec allows batches of requests (defined here). And I suppose that when using a chatbot/agent, the agent will decide whether or not to batch requests. Q1: as an author of an MCP server that wants to support this, I don't need to do anything explicit in my code to make that happen; if I use FastMCP, it will just work. Is that right?
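To make sure we're talking about the same thing, this is the sort of batch I mean: a client POSTing two `tools/call` requests in one JSON-RPC batch to the server's endpoint. The `/mcp` path, port, and session handling below are my assumptions based on the defaults, and the session id is a placeholder:

```python
import httpx

# A JSON-RPC batch: two tools/call requests in a single HTTP POST.
# Assumes the server above is running and the MCP session has already
# been initialized (session id echoed back in the mcp-session-id header).
batch = [
    {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
     "params": {"name": "add", "arguments": {"a": 1, "b": 2}}},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "add", "arguments": {"a": 3, "b": 4}}},
]

resp = httpx.post(
    "http://127.0.0.1:8000/mcp",
    json=batch,
    headers={
        # Streamable HTTP requires the client to accept both forms.
        "Accept": "application/json, text/event-stream",
        "mcp-session-id": "<session id from initialize>",
    },
)
print(resp.headers.get("content-type"))  # application/json or text/event-stream
print(resp.text)
```

As I understand it, the server may answer with a single `application/json` body containing a batch of responses, or open a `text/event-stream`; which one I get, and whether I can influence that, is the heart of what I'm asking.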
Assuming that is so, is there a way I can influence whether my tool sends back individual results or a batch of them?
I can see that the client sends a batch of requests and the FastMCP server responds with a batch of responses. But in some cases an individual response may take some time to produce. Q2: Is there a way I can design my mcp.tool functions so that I can tell FastMCP, "return this result outside of the batch, right now"? Or is this one of those things I should just not worry about? Just let it go and let the framework handle it?
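For what it's worth, the nearest mechanism I can see in FastMCP is progress reporting via the injected `Context`, which (if I understand correctly) is delivered as notifications over the stream before the final result. A sketch, with a made-up tool:

```python
from fastmcp import FastMCP, Context

mcp = FastMCP("demo-server")

@mcp.tool()
async def slow_summarize(pages: int, ctx: Context) -> str:
    """A deliberately slow tool that reports progress as it works."""
    for i in range(pages):
        # ... long-running work for one page would go here ...
        # Progress notifications stream back before the final result.
        await ctx.report_progress(progress=i + 1, total=pages)
    return f"summarized {pages} pages"
```

But that streams notifications, not the tool's result itself, which is why I'm asking whether the response framing (batched vs. individual) is controllable at all.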