feat: add CAMEL abstraction for future support of new API style #3328

Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
def _choice_tool_calls_to_camel(
    choice_msg: Any,
) -> Optional[List[CamelToolCall]]:
    tool_calls = getattr(choice_msg, "tool_calls", None)
    if not tool_calls:
        return None
    result: List[CamelToolCall] = []
    for tc in tool_calls:
        func = getattr(tc, "function", None)
        name = getattr(func, "name", None) if func else None
        args_str = getattr(func, "arguments", "{}") if func else "{}"
        try:
            import json

            args = json.loads(args_str) if isinstance(args_str, str) else {}
        except Exception:
            args = {}
        result.append(
            CamelToolCall(id=getattr(tc, "id", ""), name=name or "", args=args)
        )
```
Preserve tool call IDs when adapting dict-based responses
The helper _choice_tool_calls_to_camel only reads tool call fields with getattr, which works for OpenAI SDK objects but silently drops data when the tool_calls list contains plain dicts. Many parts of the codebase synthesize ChatCompletion instances via ChatCompletion.construct(...) and pass dictionaries (see tests and model wrappers), so calling adapt_chat_to_camel_response on those objects yields CamelToolCall(id="", name="", args={}). Downstream consumers cannot route tool calls without the id or name. The adapter should also handle dict inputs (e.g. via tc.get("id")) to retain the tool call metadata.
Hi guys, please share any suggestions or comments on the design of the CAMEL adapter layer, which aims to stay compatible with the current ChatCompletion style while supporting future extensions.
Thanks @MuggleJinx for the RFC. Currently in CAMEL all our messages are standardized and processed as… If there are interface alignment challenges with directly adapting the…
Also agree that we can get the response and directly transfer it to…
Hi @Wendong-Fan and @fengju0213, sorry for the late reply. I will continue this way and try to extend it without breaking backward compatibility.
No worries! Maybe we don't need to convert to ChatCompletion. ChatAgent already returns a ChatAgentResponse, so maybe we can just extend ChatAgentResponse to support the Responses API. Does that make sense?
Hi Tao, no need to touch ChatAgentResponse for now. The agent API stays the same; we just need to update the OpenAI model's API.
I have made some updates; maybe you can review them now if you have time? @Wendong-Fan @fengju0213
Sure, will review ASAP @MuggleJinx
thanks! @MuggleJinx left some comments below
```diff
-        return self._handle_batch_response(response)
+        camel_resp = self._normalize_to_camel_response(response)
+        return self._handle_camel_response(camel_resp)
```
maybe we can unify _handle_camel_response and _handle_batch_response
```python
                continue

            # Map other roles to a user message for broad compat in Responses
            role = "user"
```
Currently no plan to support tool calls? All messages here are converted to user.
hi Tao, thanks! it's ongoing!
```python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
```
```python
    raw: Optional[Dict[str, Any]] = None


class CamelModelResponse(BaseModel):
```
I think unifying CamelModelResponse and ModelResponse would be a better approach. We could extend the existing ModelResponse class in camel.agents._types, then update BaseModelBackend.run/arun to return something like a UnifiedModelResponse; the old logic that uses ChatCompletion could still read it via UnifiedModelResponse.chat_completion.
Co-authored-by: Tao Sun <168447269+fengju0213@users.noreply.github.com>
Thanks for the PR @MuggleJinx. I added a few minor comments, everything else seems fine so far other than the issues already mentioned by other reviewers.
```python
        except Exception:
            return {}
    return {}
```
It would be better to have some logging and better exception handling here.
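For instance, logging the failure and catching only the exception that `json.loads` can actually raise. A sketch; `parse_tool_args` is a hypothetical name, not code from the PR:

```python
import json
import logging
from typing import Any, Dict

logger = logging.getLogger(__name__)


def parse_tool_args(args_str: Any) -> Dict[str, Any]:
    # Log the failure instead of silently swallowing it, and catch only
    # json.JSONDecodeError rather than bare Exception.
    if not isinstance(args_str, str):
        return {}
    try:
        parsed = json.loads(args_str)
    except json.JSONDecodeError as exc:
        logger.warning("Failed to parse tool-call arguments %r: %s", args_str, exc)
        return {}
    return parsed if isinstance(parsed, dict) else {}
```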
```python
    try:
        # Pydantic model -> dict
        usage_raw = usage_obj.model_dump()  # type: ignore[no-any-return]
    except Exception:
        try:
            import dataclasses

            usage_raw = dataclasses.asdict(usage_obj)  # type: ignore[arg-type]
        except Exception:
            usage_raw = {}
```
We could avoid the nested try/except here.
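One flattened alternative is to probe each supported shape explicitly instead of nesting exception handlers. A sketch only; `usage_to_dict` is a hypothetical name:

```python
import dataclasses
from typing import Any, Dict


def usage_to_dict(usage_obj: Any) -> Dict[str, Any]:
    # Check each shape explicitly: None, plain dict, pydantic model,
    # dataclass instance; anything else falls through to {}.
    if usage_obj is None:
        return {}
    if isinstance(usage_obj, dict):
        return usage_obj
    if hasattr(usage_obj, "model_dump"):  # pydantic model
        return usage_obj.model_dump()
    if dataclasses.is_dataclass(usage_obj) and not isinstance(usage_obj, type):
        return dataclasses.asdict(usage_obj)
    return {}
```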
```python
        except Exception:
            pass
```
Exception swallowed.
```python
        except Exception:
            pass
```
again, swallowed exception
```python
        except Exception:
            pass
    raise TypeError(
        f"Unsupported response type for normalization: {type(resp).__name__}"  # noqa: E501
    )
```
likewise
```diff
-        return self._request_parse(messages, response_format, tools)
+        result = self._request_parse(messages, response_format, tools)
+        if (
+            os.environ.get("CAMEL_USE_CAMEL_RESPONSE", "false").lower()
```
Please remember to explain this env variable in the docstrings, as it is used in many places.
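Centralizing the check in one documented helper would also help. A sketch; the helper name and the accepted truthy spellings are assumptions, only the `CAMEL_USE_CAMEL_RESPONSE` variable and its `"false"` default come from the PR:

```python
import os


def camel_response_enabled() -> bool:
    """Return True when the CAMEL_USE_CAMEL_RESPONSE feature flag is set.

    When enabled, model backends normalize provider responses into
    CamelModelResponse instead of returning raw ChatCompletion objects.
    Defaults to disabled.
    """
    return os.environ.get("CAMEL_USE_CAMEL_RESPONSE", "false").lower() in (
        "1",
        "true",
        "yes",
    )
```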
```python
    input_tokens=(usage_raw or {}).get("prompt_tokens"),
    output_tokens=(usage_raw or {}).get("completion_tokens"),
    total_tokens=(usage_raw or {}).get("total_tokens"),
    raw=usage_raw or None,
)
```
this is a bit odd.
# CAMEL Abstractions for OpenAI Responses API — Phase 0 & 1

## Summary

This RFC proposes a model-agnostic messaging and response abstraction that enables CAMEL to support the OpenAI Responses API while preserving full backward compatibility with the existing Chat Completions plumbing (issue #3028). Phase 0 catalogs the current dependency surface. Phase 1 introduces new abstractions and a Chat Completions adapter, delivering a pure refactor with zero functional differences.
## Motivation

- The codebase directly consumes `ChatCompletionMessageParam` as request `messages` and expects `ChatCompletion` responses (e.g., in `ChatAgent`).
- The OpenAI Responses API uses segmented inputs and a `Response` object with different streaming and parsing semantics. A direct swap would break agents, memories, token counting, and tool handling.

We therefore introduce CAMEL-native types that can be adapted both to legacy Chat Completions and to Responses, enabling a staged migration.
## Goals

- Introduce the `CamelMessage` and `CamelModelResponse` types.

## Non-Goals (Phase 1)
## Design

### New Modules

- `camel/core/messages.py`
  - `CamelContentPart` — minimal content fragment (type: `text` | `image_url`).
  - `CamelMessage` — model-agnostic message with role, content parts, and optional name/tool_call_id.
  - `openai_messages_to_camel(List[OpenAIMessage]) -> List[CamelMessage]`
  - `camel_messages_to_openai(List[CamelMessage]) -> List[OpenAIMessage]`
- `camel/responses/model_response.py`
  - `CamelToolCall` — normalized tool call (id, name, args).
  - `CamelUsage` — normalized usage with `raw` attached.
  - `CamelModelResponse` — id, model, created, `output_messages`, `tool_call_requests`, `finish_reasons`, `usage`, and `raw` (provider response).
- `camel/responses/adapters/chat_completions.py`
  - `adapt_chat_to_camel_response(ChatCompletion) -> CamelModelResponse`.

### Type Relaxation

- `camel/agents/_types.py`: `ModelResponse.response` is relaxed to `Any` to decouple agent plumbing from provider schemas. Existing tests pass `MagicMock` here, and the change avoids tight coupling when adapters are introduced.
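The message types above might look roughly like this. A dependency-free sketch: the PR implements these as pydantic models, and the exact field names and converter shape are assumptions beyond what the module list states:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class CamelContentPart:
    type: str                      # "text" | "image_url"
    text: Optional[str] = None
    image_url: Optional[str] = None


@dataclass
class CamelMessage:
    role: str                      # e.g. "system", "user", "assistant", "tool"
    content: List[CamelContentPart] = field(default_factory=list)
    name: Optional[str] = None
    tool_call_id: Optional[str] = None


def camel_message_to_openai(msg: CamelMessage) -> Dict[str, Any]:
    # Hypothetical single-message converter; the PR converts whole lists.
    if len(msg.content) == 1 and msg.content[0].type == "text":
        content: Any = msg.content[0].text
    else:
        content = [
            {"type": p.type, p.type: p.text or p.image_url}
            for p in msg.content
        ]
    out: Dict[str, Any] = {"role": msg.role, "content": content}
    if msg.name:
        out["name"] = msg.name
    if msg.tool_call_id:
        out["tool_call_id"] = msg.tool_call_id
    return out
```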
## Compatibility

- The model backend still returns `ChatCompletion`; the adapter is exercised via unit tests and can be opted into in later phases.
- No changes to `BaseMessage` or memory/token APIs in this phase.

## Testing

- `test/responses/test_chat_adapter.py` builds a minimal `ChatCompletion` via `construct()` and validates:
  - `CamelModelResponse.output_messages`.
  - normalized `CamelToolCall` entries.

## Alternatives Considered

- A direct migration to the Responses API would add scope and risk; the adapter path enables incremental, testable rollout.
## Rollout Plan

- Adjust streaming/tool-calls to operate over `CamelModelResponse`, and migrate token counting to work from abstract messages.
- Add an `OpenAIResponsesModel` implementing `client.responses.{create,parse,stream}`, with converters from `CamelMessage` segments and back into `CamelModelResponse`.

## Future Work

- Extend `CamelContentPart` to include audio/video and tool fragments.