Hi everyone,

I’m trying to run the example script `reddit_simulation_counterfactual.py` using the provided config file `control_100.yaml` (I only modified the model path and related fields).

I deployed the `Qwen2.5-7B-Instruct` model with vLLM (version `0.6.3+corex.4.1.3`), based on the `deploy.py` example, and enabled tool calling using the `--enable-auto-tool-choice` and `--tool-call-parser hermes` flags.

However, when I run the script, I get this repeated error message:
2025-07-09 11:15:42,230 - camel.models.model_manager - ERROR - Error processing with model: <camel.models.vllm_model.VLLMModel object at 0xfef628e14400>
2025-07-09 11:15:42,230 - camel.agents.chat_agent - ERROR - An error occurred while running model Qwen2.5-7B, index: 0
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/camel/agents/chat_agent.py", line 1495, in _aget_model_response
response = await self.model_backend.arun(
File "/usr/local/lib/python3.10/site-packages/camel/models/model_manager.py", line 269, in arun
raise exc
File "/usr/local/lib/python3.10/site-packages/camel/models/model_manager.py", line 256, in arun
response = await self.current_model.arun(
File "/usr/local/lib/python3.10/site-packages/camel/models/base_model.py", line 394, in arun
result = await self._arun(messages, response_format, tools)
File "/usr/local/lib/python3.10/site-packages/camel/models/openai_compatible_model.py", line 225, in _arun
result = await self._arequest_chat_completion(messages, tools)
File "/usr/local/lib/python3.10/site-packages/camel/models/openai_compatible_model.py", line 255, in _arequest_chat_completion
return await self._async_client.chat.completions.create(
File "/usr/local/lib/python3.10/site-packages/openai/resources/chat/completions/completions.py", line 2454, in create
return await self._post(
File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1784, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File "/usr/local/lib/python3.10/site-packages/openai/_base_client.py", line 1584, in request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'tools', 0, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 1, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 2, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 3, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 4, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 5, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 6, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 7, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 8, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 9, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}]", 'type': 'BadRequestError', 'param': None, 'code': 400}
ERROR - 2025-07-09 11:15:42,231 - social.agent - Agent 89 error: Unable to process messages: the only provided model did not run successfully. Error: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'tools', 0, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 1, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 2, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 3, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 4, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 5, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 6, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 7, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 8, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}, {'type': 'extra_forbidden', 'loc': ('body', 'tools', 9, 'function', 'strict'), 'msg': 'Extra inputs are not permitted', 'input': True}]", 'type': 'BadRequestError', 'param': None, 'code': 400}
It seems to complain that a field called `strict` inside each tool's `function` definition is not permitted by the server.
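To check my understanding, I think the failing request boils down to something like this. This is illustrative only; the endpoint, model name, and tool schema below are placeholders, not the actual payload CAMEL builds:

```python
# Illustrative only: endpoint, model name, and tool schema are placeholders,
# not the actual payload CAMEL builds.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "like_post",  # hypothetical tool for illustration
            "description": "Like a post by its ID.",
            "parameters": {
                "type": "object",
                "properties": {"post_id": {"type": "integer"}},
                "required": ["post_id"],
            },
            "strict": True,  # the extra field the server appears to reject
        },
    }
]

response = client.chat.completions.create(
    model="Qwen2.5-7B",
    messages=[{"role": "user", "content": "Like post 42."}],
    tools=tools,
)
print(response.choices[0].message)
```

If I'm reading the error correctly, the server's request validation treats `strict` as a forbidden extra field, so a request like this should hit the same 400 until that key is removed.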
Here is part of the deployment script I used:
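In essence it just launches vLLM's OpenAI-compatible server with the two tool-calling flags. This is a simplified sketch from memory; the model path and port are placeholders:

```python
# Simplified sketch of the deployment call; the model path and port
# are placeholders.
import subprocess

subprocess.run([
    "python", "-m", "vllm.entrypoints.openai.api_server",
    "--model", "/path/to/Qwen2.5-7B-Instruct",  # local model path (placeholder)
    "--served-model-name", "Qwen2.5-7B",
    "--port", "8000",                 # placeholder port
    "--enable-auto-tool-choice",      # let the model decide when to emit tool calls
    "--tool-call-parser", "hermes",   # parse Hermes-style tool-call output from Qwen
])
```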
I didn’t change anything else in the repo.

I’m quite new to using vLLM and tool-calling LLMs in general, so I’m not sure where the issue is coming from.

Since I’m directly using the provided script and config from the repo, is this error because tool calling hasn’t been fully adapted for vLLM + Qwen in the example code yet? Or is there something I might have missed in the setup?
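If the problem really is just that extra key, would stripping it client-side before the request goes out be a reasonable workaround? Something like this hypothetical helper (the name is mine, not from the repo):

```python
# Hypothetical workaround (helper name is mine, not from the repo):
# drop the 'strict' key from each function definition before the request.
def strip_strict(tools: list[dict]) -> list[dict]:
    for tool in tools:
        tool.get("function", {}).pop("strict", None)
    return tools
```

I’m not sure where in the call chain this would belong, though.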
Any help or guidance would be greatly appreciated!

Thanks!