feat: update beeai-framework integration #1829
Conversation
CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅
Hey @Tomas2D - I've reviewed your changes; here's some feedback:
- In on_success for RequirementAgent, you construct generated_message but never append or emit it; make sure to add it to the history or the outgoing payload, as with the other agent types.
- Filtering messages based only on m.text may still allow whitespace-only entries; consider stripping or normalizing the text before filtering, to avoid empty messages.
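The second point can be sketched as follows, using a hypothetical stand-in `Message` type (the real message class comes from beeai-framework):

```python
# Sketch of the suggested filtering fix, with a stand-in Message type.
from dataclasses import dataclass


@dataclass
class Message:
    text: str


messages = [Message("hello"), Message("   "), Message(""), Message("world")]

# Filtering on `m.text` alone keeps the whitespace-only entry "   ".
naive = [m for m in messages if m.text]

# Stripping first drops both empty and whitespace-only messages.
filtered = [m for m in messages if m.text and m.text.strip()]

print([m.text for m in filtered])  # ['hello', 'world']
```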
Force-pushed from 67b07a7 to 3701254.
Force-pushed from 3701254 to fd52d5e.
I have read the CLA Document and I hereby sign the CLA.
...ation/openinference-instrumentation-beeai/src/openinference/instrumentation/beeai/version.py
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Hey @Tomas2D, thanks for maintaining this! Generally LGTM - I left some minor comments, but I probably lack the context you have, so just ping me and I'll get this released. I'll trigger CI now.
python/instrumentation/openinference-instrumentation-beeai/poetry.lock
```python
from beeai_framework.context import (
    RunContextFinishEvent,
)
from beeai_framework.emitter import Emitter, EmitterOptions, EventMeta
```
With OTel instrumentation it's typical to import lazily, so that the instrumentor doesn't cause an import error when the instrumented package isn't installed. E.g. it can be bundled with other instrumentors and be initialized automatically when beeai gets added. Up to you, of course, but it's definitely a nice pattern.
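The pattern looks roughly like this - a generic sketch using `importlib`, with illustrative names rather than the actual instrumentor API:

```python
# Lazy-import sketch: the optional dependency is imported inside the
# activation method, so merely importing this module never raises, even
# when the target package is missing.
import importlib


class LazyInstrumentor:
    def instrument(self, package_name: str):
        try:
            # The import happens here, at activation time, not at module import.
            return importlib.import_module(package_name)
        except ImportError:
            # Dependency absent: skip instrumentation gracefully.
            return None


inst = LazyInstrumentor()
print(inst.instrument("json").__name__)        # json (stdlib stand-in)
print(inst.instrument("definitely_no_pkg_x"))  # None
```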
I see. I updated all the imports to be lazy. Everything seems to be working.
```python
if result.middleware:
    result.middleware(create_telemetry_middleware(self._tracer, span_kind.value))
self._uninstrument()
```
intentional?
The implementation should work fine with multiple instances; therefore, this is no longer needed. Removed.
I addressed all the issues.
Bug: Decorator Fails to Return Function Results
The `exception_handler` decorator does not return the result of the wrapped function when no exception occurs, causing it to always return `None`.
python/instrumentation/openinference-instrumentation-beeai/src/openinference/instrumentation/beeai/_utils.py#L56-L65
Lines 56 to 65 in 6449cb5
```python
def exception_handler(func: Callable[P, Awaitable[T]]) -> Callable[P, Awaitable[T | None]]:
    @functools.wraps(func)
    async def wrapped(*args: P.args, **kwargs: P.kwargs) -> T | None:
        try:
            await func(*args, **kwargs)
        except Exception as e:
            logger.error("Error has occurred in the telemetry package.", exc_info=e)

    return wrapped
```
Bug: Incorrect Version Retrieval in Instrumentation Package
The `openinference-instrumentation-beeai` package incorrectly retrieves the version of `beeai-framework` instead of its own version, leading to an incorrect version number for the instrumentation package.
python/instrumentation/openinference-instrumentation-beeai/src/openinference/instrumentation/beeai/__init__.py#L38-L39
Lines 38 to 39 in 6449cb5
```python
try:
    __version__ = version("beeai-framework")
```
Bug: Race Condition in Parent-Child Event Processing
A `ValueError` is raised in the `_handler` method when a parent run ID is referenced but not found in `_processes`. This occurs due to a race condition: child events may be processed before their parent events, causing the instrumentation to fail or crash.
python/instrumentation/openinference-instrumentation-beeai/src/openinference/instrumentation/beeai/__init__.py#L114-L116
Lines 114 to 116 in 6449cb5
```python
)
if event.trace.parent_run_id and not parent:
    raise ValueError(f"Parent run with ID {event.trace.parent_run_id} was not found!")
```
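One possible mitigation (a sketch with hypothetical names, not the package's actual API) is to defer orphaned child events until their parent arrives, instead of raising:

```python
# Sketch: buffer events whose parent has not been seen yet and replay
# them once the parent registers, instead of raising ValueError.
from collections import defaultdict


class ProcessRegistry:
    def __init__(self):
        self._processes = {}
        self._pending = defaultdict(list)  # parent_run_id -> orphaned events

    def handle(self, run_id, parent_run_id, event):
        if parent_run_id and parent_run_id not in self._processes:
            # Parent not seen yet: defer this child instead of raising.
            self._pending[parent_run_id].append((run_id, event))
            return
        self._processes[run_id] = event
        # Replay any children that arrived before this run did.
        for child_id, child_event in self._pending.pop(run_id, []):
            self.handle(child_id, run_id, child_event)


reg = ProcessRegistry()
reg.handle("child", "parent", {"name": "tool"})  # arrives first: deferred
reg.handle("parent", None, {"name": "agent"})    # parent arrives: both register
print(sorted(reg._processes))  # ['child', 'parent']
```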
...ference-instrumentation-beeai/src/openinference/instrumentation/beeai/processors/workflow.py
```python
    "_original_tool_run",
    "_tracer",
)
__slots__ = ("_tracer", "_cleanup", "_processes")
```
python/instrumentation/openinference-instrumentation-beeai/pyproject.toml
Force-pushed from b15224e to fcb5a9c.
@Tomas2D I ran CI on beeai and it seems it's red. Let us know if you have any questions about the setup.
Completely rework the beeai-framework integration to be more general and easier to maintain.