Which component is this feature for?
Traceloop SDK
🔖 Feature description
Summary
Add support for the IBM BeeAI Framework (https://github.com/i-am-bee) in openllmetry to improve observability for generative AI workflows developed with the BeeAI framework.
A short note on BeeAI: it is an open-source, IBM-developed framework for building, deploying, and serving scalable agent-based workflows with various AI models, enabling developers to create production-ready multi-agent systems in Python and TypeScript.
Background
The BeeAI Framework helps build production-grade multi-agent systems that can carry out complex sets of tasks through a simple chatbot interface rich with natural-language user-agent interactions. Its workflow construct lets developers design their own agentic flows and agent architectures. Since there is no one-size-fits-all agent architecture, developers need full freedom and flexibility to orchestrate agents with different subject-matter expertise, each designed with a specific role and behavior.
Currently, the BeeAI framework needs a standardized way to emit observability data, namely traces, metrics, and logs. By augmenting this data with well-defined attributes, any consuming monitoring solution can derive better insights into the performance, resource utilization, and cost of multi-agent systems deployed at scale.
Proposed Solution
Implement BeeAI framework-compatible context propagation and attribute enrichment for traces, metrics, and logs in openllmetry, ensuring seamless integration and compliance with OpenTelemetry-based tracing (see the sketch after this list).
Map existing openllmetry context attributes to the BeeAI framework's schema wherever applicable.
Provide utilities or middleware that make it easy for openllmetry users to adopt the BeeAI framework.
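As a rough illustration of the first item, the sketch below wraps a single BeeAI agent invocation in an OpenTelemetry span and enriches it with attributes. The attribute keys mirror conventions openllmetry already uses elsewhere (`traceloop.*`, `gen_ai.*`), but the exact set for BeeAI is an open design question, and the `run_agent` callable is a hypothetical stand-in for the framework's real API.

```python
from opentelemetry import trace

tracer = trace.get_tracer("opentelemetry.instrumentation.beeai_framework")


def traced_agent_run(agent_name: str, task: str, run_agent):
    """Wrap one BeeAI agent invocation in a span (sketch only).

    `run_agent` is a placeholder for whatever BeeAI callable actually
    executes the agent; it is not the framework's real API.
    """
    with tracer.start_as_current_span(f"{agent_name}.agent") as span:
        # Assumed attribute keys, mirroring openllmetry's existing conventions.
        span.set_attribute("traceloop.span.kind", "agent")
        span.set_attribute("traceloop.entity.name", agent_name)
        span.set_attribute("gen_ai.prompt.0.content", task)
        result = run_agent(task)
        span.set_attribute("gen_ai.completion.0.content", str(result))
        return result
```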
Open Questions
What is the best way to structure the BeeAI Framework integration within openllmetry?
At which levels should spans be created, and which set of well-defined attributes applies to BeeAI's internal data and functional model?
Should we provide automatic instrumentation for the BeeAI Framework? (One possible shape is sketched below.)
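If we go the automatic-instrumentation route, one plausible shape, following the pattern other openllmetry instrumentations use, is a `BaseInstrumentor` subclass that patches BeeAI's agent entry points. The patched module and method names (`beeai_framework.agents`, `ReActAgent.run`) and the distribution name are assumptions about BeeAI's Python API and would need to be confirmed against the framework.

```python
from typing import Collection

from opentelemetry import trace
from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
from wrapt import wrap_function_wrapper


class BeeAIInstrumentor(BaseInstrumentor):
    """Sketch of an auto-instrumentor for the BeeAI framework."""

    def instrumentation_dependencies(self) -> Collection[str]:
        # Assumed distribution name for BeeAI's Python package.
        return ("beeai-framework >= 0.1.0",)

    def _instrument(self, **kwargs):
        tracer_provider = kwargs.get("tracer_provider")
        tracer = trace.get_tracer(__name__, tracer_provider=tracer_provider)

        def _wrap_run(wrapped, instance, args, kwargs):
            # Span name and attributes are illustrative, not a final convention.
            with tracer.start_as_current_span("beeai.agent.run") as span:
                span.set_attribute("traceloop.span.kind", "agent")
                return wrapped(*args, **kwargs)

        # Hypothetical patch target; the real module/method must be verified.
        wrap_function_wrapper("beeai_framework.agents", "ReActAgent.run", _wrap_run)

    def _uninstrument(self, **kwargs):
        pass
```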
Next Steps
Investigate the data model used by the BeeAI framework and how it aligns with openllmetry.
Design an integration approach (manual vs. automatic context propagation).
Implement a prototype and test with supported AI model frameworks.
Gather feedback from users and iterate.
🎤 Why is this feature needed?
In my use cases, where multi-agent applications are built with the BeeAI framework, different levels of observability views are needed over AI agentic workflows, tasks, agents, tools, prompts, memory, and LLMs. These views over the core constructs of an agentic system require a rich set of semantic conventions and attributes to describe the observability data (traces/spans, metrics, logs) well enough to draw insights about performance, cost, and resource requirements.
✌️ How do you aim to achieve this?
We would like to contribute a traceloop/openllmetry/packages/opentelemetry-instrumentation-beeai-framework package that enables openllmetry to provide a standardized view of the observability of multi-agent systems built with the BeeAI framework. A usage sketch follows below.
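From a user's perspective, adoption could look roughly like the snippet below, assuming the new instrumentation is picked up by the Traceloop SDK. `Traceloop.init` is the SDK's existing entry point; the BeeAI workflow part is left as pseudocode because the framework's real Python API is not assumed here.

```python
from traceloop.sdk import Traceloop

# Initialize openllmetry; with the proposed package in place, BeeAI spans
# would be captured automatically alongside the existing instrumentations.
Traceloop.init(app_name="beeai-multi-agent-demo")

# Pseudocode stand-in for building and running a BeeAI workflow; the real
# BeeAI Python API may differ and would be traced by the new instrumentation.
# workflow = build_beeai_workflow()
# result = workflow.run("Summarize this quarter's support tickets")
```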
🔄️ Additional Information
@gyliu513 and other team members from IBM would like to contribute to building this feature.
👀 Have you spent some time to check if this feature request has been raised before?
I checked and didn't find similar issue
Are you willing to submit PR?
Yes I am willing to submit a PR!