apisix/plugins/ai-request-rewrite.lua (+12 −5)
@@ -49,7 +49,7 @@ local model_options_schema = {
     properties = {
         model = {
             type = "string",
-            description = "Model to execute."
+            description = "Model to execute. Examples: \"gpt-3.5-turbo\" for openai, \"deepseek-chat\" for deepseek, or \"qwen-turbo\" for openai-compatible services"
docs/en/latest/plugins/ai-request-rewrite.md (+8 −8)
@@ -5,7 +5,7 @@ keywords:
 - API Gateway
 - Plugin
 - ai-request-rewrite
-description: This document contains information about the Apache APISIX ai-request-rewrite Plugin.
+description: The ai-request-rewrite plugin intercepts client requests before they are forwarded to the upstream service. It sends a predefined prompt, along with the original request body, to a specified LLM service. The LLM processes the input and returns a modified request body, which is then used for the upstream request. This allows dynamic transformation of API requests based on AI-generated content.
 ---

 <!--
@@ -29,20 +29,20 @@ description: This document contains information about the Apache APISIX ai-reque
 ## Description

-The `ai-request-rewrite` plugin leverages predefined prompts and AI services to intelligently modify client requests, enabling AI-powered content transformation before forwarding to upstream services.
+The `ai-request-rewrite` plugin intercepts client requests before they are forwarded to the upstream service. It sends a predefined prompt, along with the original request body, to a specified LLM service. The LLM processes the input and returns a modified request body, which is then used for the upstream request. This allows dynamic transformation of API requests based on AI-generated content.
 | auth.header | No | Object | Authentication headers. Key must match pattern `^[a-zA-Z0-9._-]+$`. |
 | auth.query | No | Object | Authentication query parameters. Key must match pattern `^[a-zA-Z0-9._-]+$`. |
 | options | No | Object | Key/value settings for the model |
-| options.model | No | String | Model to execute. |
-| override.endpoint | No | String | To be specified to override the endpoint of the AI service |
+| options.model | No | String | Model to execute. Examples: "gpt-3.5-turbo" for openai, "deepseek-chat" for deepseek, or "qwen-turbo" for openai-compatible services |
+| override.endpoint | No | String | To be specified to override the endpoint of the LLM service |
 | timeout | No | Integer | Timeout in milliseconds for requests to AI service. Range: 1 - 60000. Default: 3000 |
 | keepalive | No | Boolean | Enable keepalive for requests to AI service. Default: true |
 | keepalive_timeout | No | Integer | Keepalive timeout in milliseconds for requests to AI service. Minimum: 1000. Default: 60000 |
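To make the attributes in this table concrete, here is a minimal sketch of a route-level plugin configuration. It assumes the plugin also accepts `prompt` and `provider` fields (the provider names openai, deepseek, and openai-compatible are implied by the model examples above but are not shown in this diff), and the prompt, key, and model values are placeholders rather than content from this change:

```json
{
  "plugins": {
    "ai-request-rewrite": {
      "prompt": "Rewrite the request body and mask any email addresses or phone numbers",
      "provider": "openai",
      "auth": {
        "header": {
          "Authorization": "Bearer <llm-api-key-placeholder>"
        }
      },
      "options": {
        "model": "gpt-3.5-turbo"
      },
      "timeout": 3000,
      "keepalive": true
    }
  }
}
```

Switching `provider` to deepseek or openai-compatible would pair with models such as "deepseek-chat" or "qwen-turbo", with `override.endpoint` pointing at the compatible service's URL.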
 The request body sent to the LLM Service is as follows:

 ```json
 {
@@ -117,7 +117,7 @@ The request body for AI Service is as follows:
 ```

-The upstream service will receive a request like this:
+The LLM processes the input and returns a modified request body, which replaces detected sensitive values with a masked format and is then used for the upstream request:
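For readers of this changed line, a sketch of the exchange may help. Assuming the plugin forwards an OpenAI-style chat-completions payload, with the configured prompt as the system message and the original client body as the user message, the body sent to the LLM would look roughly like this; the field names and values are illustrative assumptions, not taken from this diff:

```json
{
  "messages": [
    {
      "role": "system",
      "content": "Rewrite the request body and mask any email addresses or phone numbers"
    },
    {
      "role": "user",
      "content": "{\"name\":\"John Doe\",\"email\":\"john.doe@example.com\"}"
    }
  ],
  "model": "gpt-3.5-turbo"
}
```

The LLM's reply (for example, the same JSON with the email masked as "j***@example.com") would then replace the original body before the request is proxied upstream.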