
Commit 2b54048

obostjancic authored and bitsandfoxes committed

fix(agents-insights): deprecate llm monitoring docs (#14217)

1 parent 43daccf

File tree

3 files changed: +13 −8 lines changed

develop-docs/sdk/telemetry/traces/modules/llm-monitoring.mdx

Lines changed: 9 additions & 4 deletions

```diff
@@ -1,26 +1,31 @@
 ---
 title: LLM Monitoring
+sidebar_hidden: true
 ---
 
+<Alert level="warning" title="Deprecated">
+
+This documentation is deprecated. Please use the [AI Agents Module](/sdk/telemetry/traces/modules/ai-agents) instead for AI/LLM monitoring and instrumentation.
+
+</Alert>
+
 Sentry auto-generates LLM Monitoring data for common providers in Python, but you may need to manually annotate spans for other frameworks.
 
 ## Span conventions
 
 ### Span Operations
 
 | Span OP | Description |
-|:------------------------|:-------------------------------------------------------------------------------------|
+| :---------------------- | :----------------------------------------------------------------------------------- |
 | `ai.pipeline.*` | The top-level span which corresponds to one or more AI operations & helper functions |
 | `ai.run.*` | A unit of work - a tool call, LLM execution, or helper method. |
 | `ai.chat_completions.*` | A LLM chat operation |
 | `ai.embeddings.*` | An LLM embedding creation operation |
 
-
-
 ### Span Data
 
 | Attribute | Type | Description | Examples | Notes |
-|-----------------------------|---------|-------------------------------------------------------|------------------------------------------|------------------------------------------|
+| --------------------------- | ------- | ----------------------------------------------------- | ---------------------------------------- | ---------------------------------------- |
 | `ai.input_messages` | string | The input messages sent to the model | `[{"role": "user", "message": "hello"}]` | |
 | `ai.completion_tokens.used` | int | The number of tokens used to respond to the message | `10` | required for cost calculation |
 | `ai.prompt_tokens.used` | int | The number of tokens used to process just the prompt | `20` | required for cost calculation |
```
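The Span Data conventions above can be illustrated without the SDK. A minimal sketch, assuming nothing beyond the stdlib: the `SPAN_DATA_TYPES` mapping and `validate_span_data` helper are hypothetical (not part of `sentry_sdk`), but the attribute names, types, and example values come from the table.

```python
import json

# Expected types for the LLM span attributes documented in the table above.
SPAN_DATA_TYPES = {
    "ai.input_messages": str,        # JSON-encoded message list
    "ai.completion_tokens.used": int,  # required for cost calculation
    "ai.prompt_tokens.used": int,      # required for cost calculation
}

def validate_span_data(data):
    """Return the attribute names whose values don't match the documented type."""
    return [
        key for key, expected in SPAN_DATA_TYPES.items()
        if key in data and not isinstance(data[key], expected)
    ]

# Attribute payload built from the table's example values.
attrs = {
    "ai.input_messages": json.dumps([{"role": "user", "message": "hello"}]),
    "ai.completion_tokens.used": 10,
    "ai.prompt_tokens.used": 20,
}

print(validate_span_data(attrs))  # → []
```

Passing `"20"` (a string) for `ai.prompt_tokens.used` would flag that attribute, since the table specifies an int.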

docs/platforms/javascript/common/tracing/span-metrics/examples.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -120,7 +120,7 @@ The frontend span initiates the trace and handles the file upload process. It pr
 Sentry.startSpan(
   {
     name: "LLM Client Interaction",
-    op: "ai.client",
+    op: "gen_ai.generate_text",
     attributes: {
       // Initial metrics available at request time
       "input.char_count": 280,
@@ -173,7 +173,7 @@ Sentry.startSpan(
 Sentry.startSpan(
   {
     name: "LLM API Processing",
-    op: "ai.server",
+    op: "gen_ai.generate_text",
     attributes: {
       // Model configuration - known at start
       "llm.model": "claude-3-5-sonnet-20241022",
```

docs/platforms/python/tracing/span-metrics/examples.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -143,7 +143,7 @@ from flask import jsonify
 
 @app.route("/ask", methods=["POST"])
 def handle_llm_request():
-    with sentry_sdk.start_span(op="llm", name="Generate Text") as span:
+    with sentry_sdk.start_span(op="gen_ai.generate_text", name="Generate Text") as span:
         start_time = time.time() * 1000  # Convert to milliseconds
 
         # Begin streaming response from LLM API
@@ -198,7 +198,7 @@ import openai
 
 def process_llm_request(request_data):
     with sentry_sdk.start_span(
-        op="llm",
+        op="gen_ai.generate_text",
         name="Generate Text"
     ) as span:
         start_time = int(time.time() * 1000)  # Current time in milliseconds
```