Support OpenAI models (API) directly #27
Labels: models (Issues about model support)
Status: Closed
Please support OpenAI models (API) directly, as that opens up many options, including Ollama, in a way that is compatible with OpenTelemetry. LiteLLM's telemetry support is callback based, so it requires manual setup. If you used the OpenAI SDK or made direct HTTP calls, we could get better traces than we do today.

Right now, you can carefully re-route config to LiteLLM, but that requires more dependencies and setup.

If the OpenAI model type used the plain OpenAI SDK, it could pick up the standard OpenAI instrumentation from OpenTelemetry with no programmatic setup, as sketched below.
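A minimal sketch of what that zero-setup path could look like, assuming the `opentelemetry-instrumentation-openai-v2` package and Ollama's OpenAI-compatible endpoint; the model name and URL are illustrative, not anything ADK ships today:

```python
# Sketch: tracing an agent's calls to OpenAI (or an OpenAI-compatible
# server such as Ollama) made through the official OpenAI SDK.
# Assumes opentelemetry-instrumentation-openai-v2 is installed; span
# export is configured via the usual OTEL_* environment variables.
from openai import OpenAI
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

# One global call patches the SDK; every chat completion after this
# is traced following the GenAI semantic conventions automatically.
OpenAIInstrumentor().instrument()

# Pointing base_url at Ollama's OpenAI-compatible endpoint works the
# same way, which is what makes direct SDK support attractive.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
reply = client.chat.completions.create(
    model="llama3",  # illustrative model name
    messages=[{"role": "user", "content": "hello"}],
)
print(reply.choices[0].message.content)
```

The same effect should also be reachable with no code changes at all via the `opentelemetry-instrument` wrapper, since auto-instrumentation picks up installed instrumentors; that is exactly what a callback-based integration cannot offer.

Comments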
cc @aabmass in case interested
Thanks for opening the discussion @codefromthecrypt. I'm wondering if this topic has been brought up with LiteLLM as well. If they had cleaner support for OTel and followed the semantic conventions, would there be any benefit to using the OpenAI SDK directly?
@aabmass made a comment here about the status quo as I understand it! BerriAI/litellm#9972 (comment)
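For contrast with the zero-setup sketch above, here is a rough sketch of the callback-based route being discussed, based on my reading of LiteLLM's docs; treat the `"otel"` callback name and other identifiers as assumptions, not a verified integration:

```python
# Sketch of the status quo: LiteLLM telemetry is callback based, so
# tracing must be wired up programmatically before any model call.
# The "otel" callback name follows LiteLLM's documentation; treat it
# as an assumption.
import litellm

# Manual step that direct OpenAI SDK instrumentation would avoid:
litellm.callbacks = ["otel"]

response = litellm.completion(
    model="ollama/llama3",  # LiteLLM's provider-prefixed model naming
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```

Both snippets can end up at the same Ollama server; the difference is whether the tracing setup lives in application code or comes for free from the instrumented SDK.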