Replies: 4 comments 3 replies
-
I think that local models are also important, especially where privacy matters most. Regarding example LLM selection, we do need a more consistent approach. We will also gradually be moving the examples to their own repos. It would be good to support alternative MCP implementations besides Docker Desktop; to do that, create a new profile parallel to the existing one. Perhaps a first issue to create from this discussion would be to add a profile with support for an open source MCP server?
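To make the profile idea a bit more concrete, here's a rough Kotlin sketch of what a profile-gated configuration for an open source MCP server could look like. The class and property names (`OpenSourceMcpConfig`, `embabel.mcp.server-urls`) are made up for illustration and are not existing embabel APIs:

```kotlin
import org.springframework.boot.context.properties.ConfigurationProperties
import org.springframework.boot.context.properties.EnableConfigurationProperties
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.Profile

// Hypothetical properties for pointing the platform at self-hosted MCP servers.
@ConfigurationProperties(prefix = "embabel.mcp")
data class OpenSourceMcpProperties(
    // e.g. "http://localhost:3001/sse" for a locally running open source MCP server
    val serverUrls: List<String> = emptyList()
)

// Only active when the (hypothetical) "open-source-mcp" profile is selected,
// in parallel to the existing Docker Desktop based setup.
@Configuration
@Profile("open-source-mcp")
@EnableConfigurationProperties(OpenSourceMcpProperties::class)
class OpenSourceMcpConfig {

    @Bean
    fun mcpServerEndpoints(props: OpenSourceMcpProperties): List<String> =
        // A real implementation would build MCP client connections here;
        // this sketch only surfaces the configured endpoints.
        props.serverUrls
}
```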
-
Regarding Anthropic usage on Bedrock, I started PR #423. It defines the models still supported by Anthropic and only registers them if they appear in the 'embabel.models' configuration.
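Not the actual PR code, but roughly the idea, assuming a standard Spring `Environment` and an `embabel.models` property holding a comma-separated opt-in list; the class name and model ids below are only illustrative:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.core.env.Environment

@Configuration
class BedrockModelRegistration {

    // Bedrock model ids still offered by Anthropic (example ids only).
    private val candidateModelIds = listOf(
        "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "anthropic.claude-3-haiku-20240307-v1:0",
    )

    @Bean
    fun enabledBedrockModelIds(environment: Environment): List<String> {
        // Treat 'embabel.models' as a comma-separated opt-in list.
        val configured = environment.getProperty("embabel.models", "")
            .split(",")
            .map { it.trim() }
            .filter { it.isNotEmpty() }
            .toSet()
        // Only models the user explicitly listed are kept/registered.
        return candidateModelIds.filter { it in configured }
    }
}
```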
-
We should discuss your point about missing or empty tool groups further. Perhaps in this case agents that require such tool groups should fail to deploy with a warning; otherwise the behaviour can be misleading.
-
I've added a prominent log warning on empty tool groups (which usually indicate a misconfiguration).
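For anyone curious, something along these lines is what I mean by a prominent warning; the `ToolGroup` type here is a stand-in, not the framework's real class:

```kotlin
import org.slf4j.LoggerFactory

// Stand-in for however the framework actually models tool groups.
data class ToolGroup(val name: String, val toolNames: List<String>)

object ToolGroupSanityCheck {
    private val logger = LoggerFactory.getLogger(ToolGroupSanityCheck::class.java)

    fun warnOnEmptyGroups(groups: List<ToolGroup>) {
        groups.filter { it.toolNames.isEmpty() }.forEach { group ->
            // An empty group almost always means a missing MCP server or API key,
            // so make the warning hard to miss.
            logger.warn(
                "Tool group '{}' resolved to 0 tools; agents that require it will " +
                    "likely fail or time out. Check your MCP/API key configuration.",
                group.name
            )
        }
    }
}
```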
-
Per @gabz57
Hello, first of all many thanks for creating this library. I was trying to build my own kind of system, but the project you are writing here is very appreciable and much more advanced than any others (Python or Java). I'm not used to writing issues, so this is more a list of difficulties/suggestions to simplify the adoption of embabel-agent.
I wanted to try the code. The README explains the choices regarding the LLM providers selected at this early stage of the project and invites us to create our own provider if needed, which is my case, and I guess I won't be the last :).
So, I cloned the repository, tried to create my own AmazonBedrockModels, and met a few challenges:
- The model names and pricing at Amazon Bedrock differ between US, EU, AP, etc. Creating a BedrockModels file for US or EU only is simple, but it doesn't cover the needs of every developer; I started simple by using the EU version for now.
- Anthropic & OpenAI models are already provided via direct use of their respective APIs. Google and AWS can serve the same models, and the code in this lib already allows defining all these "Llm" beans. In some enterprises we are not allowed to use LLMs located outside of the continent/country, so forcing the use of the OpenAI or Anthropic APIs probably hurts early adoption of this project.
- There are some places in the code where agents directly rely on a specific LLM. I couldn't find a good option to test my new provider without modifying existing agent code (replacing the model name with the new BedrockLlm constant in existing agents); having a way to inject any configured LLM would be preferable over relying on the presence of a required LLM (see the sketch after this list).
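To illustrate the injection point, an agent could ask a registry for an LLM by logical role instead of referencing a provider constant. The types below are purely hypothetical and don't exist in embabel-agent; they only sketch the shape of the idea:

```kotlin
// Hypothetical: agents depend on a role, not on a concrete provider constant.
interface ConfiguredLlm {
    val name: String
    fun generate(prompt: String): String
}

class LlmRegistry(private val byRole: Map<String, ConfiguredLlm>) {

    /** Resolve a model by logical role (e.g. "default", "fact-checking"). */
    fun forRole(role: String): ConfiguredLlm =
        byRole[role]
            ?: byRole["default"]
            ?: error("No LLM configured for role '$role' and no default set")
}

// An agent then names a role rather than a constant such as a BedrockLlm id:
fun factCheck(claim: String, llms: LlmRegistry): String =
    llms.forRole("fact-checking").generate("Check this claim: $claim")
```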
A similar difficulty arises when starting with the MCP servers provided by Docker Desktop and its extension (which is otherwise a nice solution):
Docker is free to use, but Docker Desktop in an enterprise setting requires a licence, so in my case I use Colima instead and don't have access to the Docker MCP extension. As a partial solution, one can replace the MCP client configuration (which points at the default docker mcp proxy) with one's own MCP servers, but then comes another detail.
It may simply be related to the early stage of the project, but the presence of a bunch of tools and MCP servers (which require an API key to work) is currently hardcoded, and unless we comment out more or less all agents and toolGroup/tool definitions in the current code, they create holes (warnings, timeouts, etc.) during execution when tested via the shell, since the agents/MCP servers are declared but not truly operational. I started by commenting out most of them except the FactChecker agent and the wikipedia-mcp tool group, which doesn't require an API key.
To summarize (manually), I would be very glad to know whether these questions/suggestions are on the roadmap, or whether they don't match the intention behind embabel-agent:
- allow the use of any LLM in the core app (without relying directly on the Anthropic or OpenAI APIs)
- remove the need for the Docker Desktop extension (licensed)
- move tool/toolGroup definitions and agents to a place where they are not automatically enabled (e.g. few people need a google-maps tool), maybe by allowing composition via Gradle/Maven dependencies (see the sketch after this list)
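As an illustration of the opt-in idea only (the property and class names are invented, and the framework may prefer a different mechanism entirely), a tool group could be gated behind Spring Boot's `@ConditionalOnProperty` so nothing is registered unless the user enables it:

```kotlin
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

// Nothing here is registered unless the user sets
// embabel.toolgroups.google-maps.enabled=true (property name is invented).
@Configuration
@ConditionalOnProperty(
    prefix = "embabel.toolgroups.google-maps",
    name = ["enabled"],
    havingValue = "true"
)
class GoogleMapsToolGroupConfig {

    @Bean
    fun googleMapsToolGroup(): List<String> =
        // Placeholder: a real implementation would register the MCP-backed tools;
        // the point is that the group only exists when explicitly opted into.
        listOf("geocode", "directions")
}
```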
BTW I would be very happy to contribute to this project, really :)