
Commit 751aa88

Introduce tool calling to AI engineering (#350)
1 parent 1e264b4 commit 751aa88

4 files changed: +200 −71 lines changed

ai-engineering/observe.mdx

Lines changed: 159 additions & 34 deletions
@@ -5,6 +5,7 @@ keywords: ["ai engineering", "rudder", "observe", "telemetry", "withspan", "open
 ---
 
 import { Badge } from "/snippets/badge.jsx";
+import AIEngineeringInstrumentationSnippet from '/snippets/ai-engineering-instrumentation.mdx'
 
 The **Observe** stage is about understanding how your deployed generative AI capabilities perform in the real world. After creating and evaluating a capability, observing its production behavior is crucial for identifying unexpected issues, tracking costs, and gathering the data needed for future improvements.
 
@@ -22,7 +23,39 @@ The easiest way to get started is by wrapping your existing AI model client. The
 
 The `wrapAISDKModel` function takes an existing AI model object and returns an instrumented version that will automatically generate trace data for every call.
 
-```typescript
+<CodeGroup>
+
+```typescript OpenAI
+// src/shared/openai.ts
+
+import { createOpenAI } from '@ai-sdk/openai';
+import { wrapAISDKModel } from '@axiomhq/ai';
+
+const openaiProvider = createOpenAI({
+  apiKey: process.env.OPENAI_API_KEY,
+});
+
+// Wrap the model to enable automatic tracing
+export const gpt4o = wrapAISDKModel(openaiProvider('gpt-4o'));
+export const gpt4oMini = wrapAISDKModel(openaiProvider('gpt-4o-mini'));
+```
+
+```typescript Anthropic
+// src/shared/anthropic.ts
+
+import { createAnthropic } from '@ai-sdk/anthropic';
+import { wrapAISDKModel } from '@axiomhq/ai';
+
+const anthropicProvider = createAnthropic({
+  apiKey: process.env.ANTHROPIC_API_KEY,
+});
+
+// Wrap the model to enable automatic tracing
+export const claude35Sonnet = wrapAISDKModel(anthropicProvider('claude-3-5-sonnet-20241022'));
+export const claude35Haiku = wrapAISDKModel(anthropicProvider('claude-3-5-haiku-20241022'));
+```
+
+```typescript Google Gemini
 // src/shared/gemini.ts
 
 import { createGoogleGenerativeAI } from '@ai-sdk/google';
@@ -33,9 +66,26 @@ const geminiProvider = createGoogleGenerativeAI({
 });
 
 // Wrap the model to enable automatic tracing
-export const geminiFlash = wrapAISDKModel(geminiProvider('gemini-2.5-flash-preview-04-17'));
+export const gemini20Flash = wrapAISDKModel(geminiProvider('gemini-2.0-flash-exp'));
+export const gemini15Pro = wrapAISDKModel(geminiProvider('gemini-1.5-pro'));
+```
+
+```typescript Grok
+// src/shared/grok.ts
+
+import { createXai } from '@ai-sdk/xai';
+import { wrapAISDKModel } from '@axiomhq/ai';
+
+const grokProvider = createXai({
+  apiKey: process.env.XAI_API_KEY,
+});
+
+// Wrap the model to enable automatic tracing
+export const grokBeta = wrapAISDKModel(grokProvider('grok-beta'));
+export const grok2Mini = wrapAISDKModel(grokProvider('grok-2-mini'));
 ```
 
+</CodeGroup>
 
 ### Adding context with `withSpan`
 
@@ -46,7 +96,7 @@ While `wrapAISDKModel` handles the automatic instrumentation, the `withSpan` fun
 
 import { withSpan } from '@axiomhq/ai';
 import { generateText } from 'ai';
-import { geminiFlash } from '@/shared/gemini';
+import { gpt4o } from '@/shared/openai';
 
 export default async function Page() {
   const userId = 123;
@@ -57,7 +107,7 @@ export default async function Page() {
     span.setAttribute('user_id', userId);
 
     return generateText({
-      model: geminiFlash, // Use the wrapped model
+      model: gpt4o, // Use the wrapped model
       messages: [
         {
           role: 'user',
@@ -71,46 +121,118 @@ export default async function Page() {
 }
 ```
 
+### Instrumenting tool calls with `wrapTool`
 
-## Setting up instrumentation
+For many AI capabilities, the LLM call is only part of the story. If your capability uses tools to interact with external data or services, observing the performance and outcome of those tools is critical. The Axiom AI SDK provides the `wrapTool` and `wrapTools` functions to automatically instrument your Vercel AI SDK tool definitions.
 
-The Axiom AI SDK is built on the OpenTelemetry standard. To send traces, you need to configure a Node.js or edge-compatible tracer that exports data to Axiom.
+The `wrapTool` helper takes your tool's name and its definition and returns an instrumented version. This wrapper creates a dedicated child span for every tool execution, capturing its arguments, output, and any errors.
 
-### Configuring the tracer
+```typescript
+// src/app/generate-text/page.tsx
+import { tool } from 'ai';
+import { z } from 'zod';
+import { wrapTool } from '@axiomhq/ai';
+import { generateText } from 'ai';
+import { gpt4o } from '@/shared/openai';
+
+// In your generateText call, provide wrapped tools
+const { text, toolResults } = await generateText({
+  model: gpt4o,
+  messages: [
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'How do I get from Paris to Berlin?' },
+  ],
+  tools: {
+    // Wrap each tool with its name
+    findDirections: wrapTool(
+      'findDirections', // The name of the tool
+      tool({
+        description: 'Find directions to a location',
+        inputSchema: z.object({
+          from: z.string(),
+          to: z.string(),
+        }),
+        execute: async (params) => {
+          // Your tool logic here...
+          return { directions: `To get from ${params.from} to ${params.to}, use a teleporter.` };
+        },
+      })
+    )
+  }
+});
+```
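
If a capability defines several tools, the `wrapTools` helper mentioned above can instrument them in one pass. A minimal sketch, assuming `wrapTools` accepts a record of Vercel AI SDK tool definitions keyed by name and returns the same record with each tool instrumented:

```typescript
// Sketch only: the exact wrapTools signature is an assumption.
import { tool } from 'ai';
import { z } from 'zod';
import { wrapTools } from '@axiomhq/ai';

export const tools = wrapTools({
  findDirections: tool({
    description: 'Find directions to a location',
    inputSchema: z.object({ from: z.string(), to: z.string() }),
    execute: async ({ from, to }) => ({
      directions: `To get from ${from} to ${to}, use a teleporter.`,
    }),
  }),
  // Additional tools added here would be wrapped the same way.
});
```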
 
-You must configure an OTLP trace exporter pointing to your Axiom instance. This is typically done in a dedicated instrumentation file that is loaded before your application starts.
+### Complete instrumentation example
+
+<Accordion title="Full end-to-end code example">
+Here's how all three instrumentation functions work together in a single, real-world example:
 
 ```typescript
-// src/instrumentation.node.ts
-
-import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
-import { Resource } from '@opentelemetry/resources';
-import { NodeSDK } from '@opentelemetry/sdk-node';
-import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
-import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
-import { initAxiomAI, tracer } from '@axiomhq/ai';
-
-// Configure the SDK to export traces to Axiom
-const sdk = new NodeSDK({
-  resource: new Resource({
-    [ATTR_SERVICE_NAME]: 'nextjs-otel-example',
-  }),
-  spanProcessor: new SimpleSpanProcessor(
-    new OTLPTraceExporter({
-      url: `https://api.axiom.co/v1/traces`,
-      headers: {
-        Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
-        'X-Axiom-Dataset': process.env.AXIOM_DATASET!,
-      },
-    }),
-  ),
+// src/app/page.tsx
+
+import { withSpan, wrapAISDKModel, wrapTool } from '@axiomhq/ai';
+import { generateText, tool } from 'ai';
+import { createOpenAI } from '@ai-sdk/openai';
+import { z } from 'zod';
+
+// 1. Create and wrap the AI model client
+const openaiProvider = createOpenAI({
+  apiKey: process.env.OPENAI_API_KEY,
 });
-sdk.start();
+const gpt4o = wrapAISDKModel(openaiProvider('gpt-4o'));
+
+// 2. Define and wrap your tool(s)
+const findDirectionsTool = wrapTool(
+  'findDirections', // The tool name must be passed to the wrapper
+  tool({
+    description: 'Find directions to a location',
+    inputSchema: z.object({ from: z.string(), to: z.string() }),
+    execute: async ({ from, to }) => ({
+      directions: `To get from ${from} to ${to}, use a teleporter.`,
+    }),
+  })
+);
+
+// 3. In your application logic, use `withSpan` to add context
+// and call the AI model with your wrapped tools.
+export default async function Page() {
+  const userId = 123;
+
+  const { text } = await withSpan({ capability: 'get_directions', step: 'generate_ai_response' }, async (span) => {
+    // You have access to the OTel span to add custom attributes
+    span.setAttribute('user_id', userId);
+
+    return generateText({
+      model: gpt4o, // Use the wrapped model
+      messages: [
+        { role: 'system', content: 'You are a helpful assistant.' },
+        { role: 'user', content: 'How do I get from Paris to Berlin?' },
+      ],
+      tools: {
+        findDirections: findDirectionsTool, // Use the wrapped tool
+      },
+    });
+  });
 
-// Initialize the Axiom AI SDK with the tracer
-initAxiomAI({ tracer });
+  return <p>{text}</p>;
+}
 ```
 
+This demonstrates the three key steps to rich observability:
+1. **`wrapAISDKModel`**: Automatically captures telemetry for the LLM provider call
+2. **`wrapTool`**: Instruments the tool execution with detailed spans
+3. **`withSpan`**: Creates a parent span that ties everything together under a business capability
+</Accordion>
+
+## Setting up instrumentation
+
+The Axiom AI SDK is built on the OpenTelemetry standard. To send traces, you need to configure a Node.js or edge-compatible tracer that exports data to Axiom.
+
+### Configuring the tracer
+
+You must configure an OTLP trace exporter pointing to your Axiom instance. This is typically done in a dedicated instrumentation file that is loaded before your application starts.
+
+<AIEngineeringInstrumentationSnippet />
 
 Your Axiom credentials (`AXIOM_TOKEN` and `AXIOM_DATASET`) should be set as environment variables.
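
Because a missing token or dataset name can cause trace export to fail, it can help to fail fast at startup. A minimal sketch (the variable names come from this page; the guard itself is illustrative):

```typescript
// Illustrative startup guard, not part of the SDK: throw early if the
// Axiom credentials referenced above are absent from the environment.
for (const name of ['AXIOM_TOKEN', 'AXIOM_DATASET'] as const) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}
```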
 
@@ -129,6 +251,9 @@ Key attributes include:
 * `gen_ai.prompt`: The full, rendered prompt or message history sent to the model (as a JSON string).
 * `gen_ai.completion`: The full response from the model, including tool calls (as a JSON string).
 * `gen_ai.response.finish_reasons`: The reason the model stopped generating tokens (e.g., `stop`, `tool-calls`).
+* **`gen_ai.tool.name`**: The name of the executed tool.
+* **`gen_ai.tool.arguments`**: The arguments passed to the tool (as a JSON string).
+* **`gen_ai.tool.message`**: The result returned by the tool (as a JSON string).
 
 ## Visualizing traces in the console
 
ai-engineering/overview.mdx

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ The core stages are:
 
 * **Create**: Define a new AI capability, prototype it with various models, and gather reference examples to establish ground truth.
 * **Measure**: Systematically evaluate the capability's performance against reference data using custom graders to score for accuracy, quality, and cost.
-* **Observe**: Cultivate the capability in production by collecting rich telemetry on every execution. Use online evaluations to monitor for performance degradation and discover edge cases.
+* **Observe**: Cultivate the capability in production by collecting rich telemetry on every LLM call and tool execution. Use online evaluations to monitor for performance degradation and discover edge cases.
 * **Iterate**: Use insights from production to refine prompts, augment reference datasets, and improve the capability over time.
 
 ### What's next?

ai-engineering/quickstart.mdx

Lines changed: 4 additions & 36 deletions
@@ -4,6 +4,8 @@ description: "Install and configure the Axiom AI SDK to begin capturing telemetr
 keywords: ["ai engineering", "getting started", "install", "setup", "configuration", "opentelemetry"]
 ---
 
+import AIEngineeringInstrumentationSnippet from '/snippets/ai-engineering-instrumentation.mdx'
+
 This guide provides the steps to install and configure the [`@axiomhq/ai`](https://github.com/axiomhq/ai) SDK. Once configured, you can follow the Rudder workflow to create, measure, observe, and iterate on your AI capabilities.
 
 ## Prerequisites
@@ -98,41 +100,7 @@ bun add \
 
 </CodeGroup>
 
-```typescript
-// src/instrumentation.ts
-
-import 'dotenv/config'; // Make sure to load environment variables
-import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
-import { resourceFromAttributes } from '@opentelemetry/resources';
-import { NodeSDK } from '@opentelemetry/sdk-node';
-import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
-import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
-import { initAxiomAI } from '@axiomhq/ai';
-
-const tracer = trace.getTracer("my-tracer");
-
-// Configure the NodeSDK to export traces to your Axiom dataset
-const sdk = new NodeSDK({
-  resource: resourceFromAttributes({
-    [ATTR_SERVICE_NAME]: 'my-ai-app', // Replace with your service name
-  }),
-  spanProcessor: new SimpleSpanProcessor(
-    new OTLPTraceExporter({
-      url: `https://api.axiom.co/v1/traces`,
-      headers: {
-        Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
-        'X-Axiom-Dataset': process.env.AXIOM_DATASET!,
-      },
-    }),
-  ),
-});
-
-// Start the SDK
-sdk.start();
-
-// Initialize the Axiom AI SDK with the configured tracer
-initAxiomAI({ tracer });
-```
+<AIEngineeringInstrumentationSnippet />
 
 ## Environment variables
 
@@ -150,6 +118,6 @@ GEMINI_API_KEY="<YOUR_GEMINI_API_KEY>"
 
 ## What's next?
 
-Now that your application is configured to send telemetry to Axiom, the next step is to start instrumenting your AI model calls.
+Now that your application is configured to send telemetry to Axiom, the next step is to start instrumenting your AI model and tool calls.
 
 Learn more about that in the [Observe](/ai-engineering/observe) page of the Rudder workflow.
snippets/ai-engineering-instrumentation.mdx

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+```typescript
+// src/instrumentation.ts
+
+import 'dotenv/config'; // Make sure to load environment variables
+import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
+import { resourceFromAttributes } from '@opentelemetry/resources';
+import { NodeSDK } from '@opentelemetry/sdk-node';
+import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
+import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
+import { trace } from "@opentelemetry/api";
+import { initAxiomAI } from '@axiomhq/ai';
+
+const tracer = trace.getTracer("my-tracer");
+
+// Configure the NodeSDK to export traces to your Axiom dataset
+const sdk = new NodeSDK({
+  resource: resourceFromAttributes({
+    [ATTR_SERVICE_NAME]: 'my-ai-app', // Replace with your service name
+  }),
+  spanProcessor: new SimpleSpanProcessor(
+    new OTLPTraceExporter({
+      url: `https://api.axiom.co/v1/traces`,
+      headers: {
+        Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
+        'X-Axiom-Dataset': process.env.AXIOM_DATASET!,
+      },
+    }),
+  ),
+});
+
+// Start the SDK
+sdk.start();
+
+// Initialize the Axiom AI SDK with the configured tracer
+initAxiomAI({ tracer });
+```
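
This snippet assumes the instrumentation module runs before any instrumented code. One way to arrange that (an assumption about your setup; frameworks often provide their own loading hook) is a side-effect import at the top of the entry point:

```typescript
// src/index.ts (hypothetical entry point)
// The side-effect import evaluates instrumentation.ts first, so the
// tracer is registered before any AI SDK calls execute.
import './instrumentation';

import { runApp } from './app'; // hypothetical application bootstrap

runApp();
```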
