[Feature]: LiteLLM MCP Roadmap #9891
Replies: 4 comments 8 replies
-
Can this be made into a discussion, @ishaan-jaff? It'll be easier to track.
-
I think a very common problem is that the number of installed MCP servers explodes quickly. That makes prompt engineering much harder, because you have to carefully tune your prompt so the model doesn't choose the wrong tools for the task at hand. So I think it would be a great feature if the proxy offered several endpoints, each exposing a different subset of the installed MCP servers: one endpoint for your IDE, focused on software-development tools, and another for a general chat client. In short, a mechanism for different profiles with different sets of MCP servers available.
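A rough sketch of what such a profile mechanism could look like. All names here (the config shape, `tools_for_profile`, the example servers) are hypothetical illustrations, not an existing LiteLLM API:

```python
# Hypothetical sketch: map proxy endpoints ("profiles") to subsets of
# installed MCP servers, so each client only sees the tools it needs.
# None of these names are real LiteLLM identifiers.

INSTALLED_MCP_SERVERS = {
    "github": ["create_pr", "list_issues"],
    "filesystem": ["read_file", "write_file"],
    "slack": ["post_message"],
}

# Each profile would be exposed at its own endpoint, e.g. /mcp/<profile>.
PROFILES = {
    "dev": ["github", "filesystem"],   # IDE-focused tools
    "chat": ["slack"],                 # general chat client
}

def tools_for_profile(profile: str) -> list[str]:
    """Return the flat tool list a client on this profile may see."""
    servers = PROFILES.get(profile, [])
    return [tool for s in servers for tool in INSTALLED_MCP_SERVERS[s]]

print(tools_for_profile("dev"))
# ['create_pr', 'list_issues', 'read_file', 'write_file']
```

The point is that the tool list advertised to the model shrinks per endpoint, which is exactly what keeps the prompt from having to disambiguate dozens of irrelevant tools.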
-
Would love to tackle this one for the initial proxy features:
One thing to consider is restricting LiteLLM to the streamable HTTP transport only, since SSE requires a persistent connection. If we wanted to support SSE, we would need to either stay connected continuously, or connect only on tool runs and periodically listen for changes. I think authentication is an important discussion, and I'd be curious where you're going with that requirement. It makes sense to pass a token through from the LiteLLM proxy to the MCP servers for server-specific authn/authz. Do we also want the LiteLLM proxy to be able to store its own cert and generate JWTs, like the new feature?
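A minimal sketch of the pass-through idea mentioned above: the proxy copies the caller's bearer token onto the outbound request to the MCP server, which then does its own authn/authz. The helper name and header handling here are assumptions for illustration, not an existing LiteLLM function:

```python
# Hypothetical sketch of pass-through auth: forward the caller's bearer
# token from the LiteLLM proxy to the downstream MCP server, so the
# server can enforce its own authn/authz. Not a real LiteLLM API.

def build_mcp_headers(incoming_headers: dict[str, str]) -> dict[str, str]:
    """Copy the caller's Authorization header onto the outbound MCP request."""
    outbound = {"Content-Type": "application/json"}
    auth = incoming_headers.get("Authorization")
    if auth and auth.startswith("Bearer "):
        outbound["Authorization"] = auth  # pass the token through untouched
    return outbound

headers = build_mcp_headers({"Authorization": "Bearer user-token-123"})
print(headers["Authorization"])  # Bearer user-token-123
```

The alternative raised above (the proxy holding its own cert and minting JWTs) would replace the pass-through line with a signing step, trading end-user identity at the MCP server for proxy-level identity.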
-
The Feature
Starting this to discuss LiteLLM MCP Roadmap
Are you a ML Ops Team?
No
Twitter / LinkedIn details
No response