feat(ai): add vercel ai integration #5858
base: master
Conversation
Overall package size

Self size: 11.37 MB

Dependency sizes:

| name | version | self size | total size |
|------|---------|-----------|------------|
| @datadog/libdatadog | 0.7.0 | 35.02 MB | 35.02 MB |
| @datadog/native-appsec | 10.0.1 | 20.3 MB | 20.3 MB |
| @datadog/native-iast-taint-tracking | 4.0.0 | 11.72 MB | 11.73 MB |
| @datadog/pprof | 5.9.0 | 9.77 MB | 10.14 MB |
| @opentelemetry/core | 1.30.1 | 908.66 kB | 7.16 MB |
| protobufjs | 7.5.3 | 2.95 MB | 5.6 MB |
| @datadog/wasm-js-rewriter | 4.0.1 | 2.85 MB | 3.58 MB |
| @datadog/native-metrics | 3.1.1 | 1.02 MB | 1.43 MB |
| @opentelemetry/api | 1.8.0 | 1.21 MB | 1.21 MB |
| jsonpath-plus | 10.3.0 | 617.18 kB | 1.08 MB |
| import-in-the-middle | 1.14.2 | 122.36 kB | 850.93 kB |
| lru-cache | 10.4.3 | 804.3 kB | 804.3 kB |
| source-map | 0.7.4 | 226 kB | 226 kB |
| opentracing | 0.14.7 | 194.81 kB | 194.81 kB |
| pprof-format | 2.1.0 | 111.69 kB | 111.69 kB |
| @datadog/sketches-js | 2.1.1 | 109.9 kB | 109.9 kB |
| lodash.sortby | 4.7.0 | 75.76 kB | 75.76 kB |
| ignore | 7.0.5 | 63.38 kB | 63.38 kB |
| istanbul-lib-coverage | 3.2.2 | 34.37 kB | 34.37 kB |
| rfdc | 1.4.1 | 27.15 kB | 27.15 kB |
| dc-polyfill | 0.1.10 | 26.73 kB | 26.73 kB |
| @isaacs/ttlcache | 1.4.1 | 25.2 kB | 25.2 kB |
| tlhunter-sorted-set | 0.1.0 | 24.94 kB | 24.94 kB |
| shell-quote | 1.8.3 | 23.74 kB | 23.74 kB |
| limiter | 1.1.5 | 23.17 kB | 23.17 kB |
| retry | 0.13.1 | 18.85 kB | 18.85 kB |
| semifies | 1.0.0 | 15.84 kB | 15.84 kB |
| jest-docblock | 29.7.0 | 8.99 kB | 12.76 kB |
| crypto-randomuuid | 1.0.0 | 11.18 kB | 11.18 kB |
| ttl-set | 1.0.0 | 4.61 kB | 9.69 kB |
| mutexify | 1.4.0 | 5.71 kB | 8.74 kB |
| path-to-regexp | 0.1.12 | 6.6 kB | 6.6 kB |
| koalas | 1.0.2 | 6.47 kB | 6.47 kB |
| module-details-from-path | 1.0.4 | 3.96 kB | 3.96 kB |

🤖 This report was automatically generated by heaviest-objects-in-the-universe
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #5858      +/-   ##
==========================================
+ Coverage   82.81%   83.23%    +0.42%
==========================================
  Files         476      478        +2
  Lines       19664    19857      +193
==========================================
+ Hits        16284    16528      +244
+ Misses       3380     3329       -51
```

☔ View full report in Codecov by Sentry.
Benchmarks

Benchmark execution time: 2025-07-23 18:48:50. Comparing candidate commit f9cfc5c in PR branch. Found 0 performance improvements and 0 performance regressions! Performance is the same for 1272 metrics, 51 unstable metrics.
Datadog Report

Branch report: ✅ 0 Failed, 1257 Passed, 0 Skipped, 20m 11.47s Total Time
```js
const noopTracer = {
  startActiveSpan () {
    const fn = arguments[arguments.length - 1]

    const span = {
      end () {},
      setAttributes () { return this },
      addEvent () { return this },
      recordException () { return this },
      setStatus () { return this }
    }

    return fn(span)
  }
}
```
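For context, a quick usage sketch of how such a tracer gets called; the span name and attribute below are placeholders. The callback is the last argument because OTel's `startActiveSpan(name[, options[, context]], callback)` accepts optional arguments before it, which is why the snippet above reads `arguments[arguments.length - 1]`.

```js
// Hypothetical call site, matching the OTel startActiveSpan(name[, options[, context]], callback) shape
const result = noopTracer.startActiveSpan('ai.generateText', {}, span => {
  span.setAttributes({ 'operation.name': 'ai.generateText' }) // placeholder attribute
  span.end()
  return 'done'
})
```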
I guess this could be extracted into an OTel noop tracer that could be shared.
Why even define a noop tracer that will ultimately be patched in the first place? Why not just return a fake span directly?
I've cleaned this up, although it's still local to the instrumentation. I'm going to resolve this for now anyway since it's only used in this instrumentation; if we need a no-op dummy OTel tracer for other instrumentations down the line, I think we can refactor then.
> Why not just return a fake span directly?
Mostly because we could be patching an actual OTel-compatible tracer that someone is already using. I agree that if we were only concerned with a dummy tracer/spans, we could patch in place with respect to the dummy tracer, but I did it this way because someone could actually be using a real tracer. Let me know if that answers your question!
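To make that concrete, here is a rough sketch of the approach described above; the helper and variable names are hypothetical placeholders, not the actual patch code in this PR. Whatever OTel-compatible tracer the user supplies gets wrapped, and the noop tracer is only a fallback when none is provided.

```js
// Hypothetical sketch only: helper and variable names are placeholders, not the PR's code.
function wrapTracer (userTracer) {
  // fall back to the local noop tracer only when the user did not supply one
  const tracer = userTracer || noopTracer
  const originalStartActiveSpan = tracer.startActiveSpan.bind(tracer)

  tracer.startActiveSpan = function (...args) {
    const fn = args[args.length - 1]

    // wrap the user callback so dd-trace span handling can run around it, while
    // still delegating to the original (possibly real, user-provided) OTel tracer
    args[args.length - 1] = function (span) {
      // ... start the APM/LLMObs span here and finish it when `span.end()` is called ...
      return fn(span)
    }

    return originalStartActiveSpan(...args)
  }

  return tracer
}
```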
For reviewers (when I open this up): currently this is all one PR, APM + LLMObs. That's just the way I did the PoC and cleanup, but if this PR is too big I'm happy to separate it out!
```js
/**
 * Resolves the following error:
 *
 * Error [ERR_REQUIRE_ESM]: require() of ES Module from ... not supported.
```
Not sure I understand this comment. If it's supported in CommonJS (which should be true according to the instrumentation), it should mean we can also import it here, no?
I think it's not supported in CommonJS because of the restrictions specified in the condition below (early version 4 of the Vercel AI SDK + Node < 22 is not supported). I get this error when just running a dummy script requiring `ai` with ai@4.0.0 and Node 20.
If that makes sense, I can update the comment so it's not as confusing 😅
I guess my concern is that if it's supported, it should work in tests, and if it's not supported, then we should change the range. But I may still not be grasping the issue correctly 😅
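To make the restriction being discussed concrete, here is a minimal sketch of the kind of guard described above. The "early 4.x" cutoff and the helper name are assumptions for illustration, not the exact condition in the instrumentation.

```js
// Illustration only: the "early 4.x" cutoff below is an assumption, not the exact
// condition used by the instrumentation.
function canRequireAiFromCjs (aiVersion) {
  const nodeMajor = Number(process.versions.node.split('.')[0])
  const [major, minor] = aiVersion.split('.').map(Number)
  const isEarlyV4 = major === 4 && minor === 0 // hypothetical cutoff for illustration

  // per the discussion above, these early ESM-only builds can't be require()d from
  // CommonJS on Node < 22, which is what produces ERR_REQUIRE_ESM
  return !(isEarlyV4 && nodeMajor < 22)
}

// e.g. canRequireAiFromCjs('4.0.0') is false on Node 20, matching the error above
```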
```js
})

after(() => {
  LLMObsSpanWriter.prototype.append.restore()
  process.removeAllListeners()
```
Is this safe? I think the test runner might have listeners on the process.
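One possible narrower pattern, sketched here under the assumption that the test only needs to clean up listeners it registered itself (the `onForTest` helper is a placeholder, not code from this PR):

```js
// Sketch of a narrower cleanup: track and remove only the listeners this test added,
// leaving the test runner's own process listeners untouched.
const addedListeners = []

function onForTest (event, handler) {
  process.on(event, handler)
  addedListeners.push([event, handler])
}

after(() => {
  LLMObsSpanWriter.prototype.append.restore()
  for (const [event, handler] of addedListeners) {
    process.removeListener(event, handler)
  }
  addedListeners.length = 0
})
```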
Hey hey crew. FWIW, I've been silently following this PR, waiting to try to weave this into the Motion codebase to get our stack onto LLM Observability. Currently we don't use dd-trace and just export spans/traces through OTel. I imagine once this lands I'll need to pull in dd-trace for LLM Observability. Would it be helpful at all if I were to test this (somehow?) and report back here?
What does this PR do?
Adds APM and LLM Observability support for `ai@4.0.0` and greater.

DISCLAIMER: Most LOC are from "cassettes" added locally and used to mock and play back locally-recorded responses from provider APIs. These are stripped of any sensitive information/headers in the `ddapm-test-agent` image.

The Vercel AI SDK provides OTel tracing of its operations under the hood. This gives us a nice "in": we patch the tracer to intercept the `startActiveSpan` function and various operations on the underlying span, and translate them into APM and LLM Observability spans.

This integration works by doing exactly that: patching the tracer passed in, and, if none is passed in, using a default one and enabling experimental telemetry so that the underlying Vercel AI SDK automatically uses this tracer.
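As a usage sketch, this is roughly what a user-facing call might look like with the AI SDK's experimental telemetry option; the provider package, model, and prompt are placeholders, and with the integration enabled the tracer behind these spans is the one the instrumentation patches.

```js
// Illustrative usage only: the provider package, model, and prompt are placeholders.
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Say hello in one sentence.',
  // per the description above, when the user passes nothing here the integration
  // supplies a default tracer and enables experimental telemetry automatically
  experimental_telemetry: { isEnabled: true }
})

console.log(text)
```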
For APM spans: `workflow`, `llm`, `embedding`, and `tool` are applicable.

Additional changes unrelated to the user-facing feature include:
- A `useLlmobs` hook that will provide a `getEvents` function to get APM spans and pre-encoded LLMObs span events. This is just a nice-to-have that can be used in the other integrations as well.

Motivation
Closes #5410
MLOB-2980