
Commit 04fdeb1

resolve conflict
2 parents cf394db + b0f5259 commit 04fdeb1

File tree

7 files changed: +239 -77 lines

docs/offline_inference.md

Lines changed: 110 additions & 33 deletions
@@ -3,31 +3,6 @@
 ## 1. Usage
 FastDeploy supports offline inference by loading models locally and processing user data. Usage examples:
 
-### Text Completion Interface (LLM.generate)
-
-```python
-from fastdeploy import LLM, SamplingParams
-
-prompts = [
-    "把李白的静夜思改写为现代诗",
-    "Write me a poem about large language model.",
-]
-
-# Sampling parameters
-sampling_params = SamplingParams(top_p=0.95, max_tokens=6400)
-
-# Load model
-llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
-
-# Batch inference (internal request queuing and dynamic batching)
-outputs = llm.generate(prompts, sampling_params)
-
-# Output results
-for output in outputs:
-    prompt = output.prompt
-    generated_text = output.outputs.text
-```
-
 ### Chat Interface (LLM.chat)
 ```python
 from fastdeploy import LLM, SamplingParams
@@ -58,16 +33,116 @@ for output in outputs:
 
 Documentation for `SamplingParams`, `LLM.generate`, `LLM.chat`, and output structure `RequestOutput` is provided below.
 
-> Note: For X1 model output
+> Note: For reasoning models, specify the `reasoning_parser` parameter when loading the model. At request time, the reasoning feature can be toggled on or off via the `enable_thinking` parameter within `chat_template_kwargs`.
 
 ```python
-# Output results
+from fastdeploy.entrypoints.llm import LLM
+# Load the model
+llm = LLM(model="baidu/ERNIE-4.5-VL-28B-A3B-Paddle", tensor_parallel_size=1, max_model_len=32768, enable_mm=True, limit_mm_per_prompt={"image": 100}, reasoning_parser="ernie-45-vl")
+
+outputs = llm.chat(
+    messages=[
+        {"role": "user", "content": [{"type": "image_url", "image_url": {"url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
+                                     {"type": "text", "text": "图中的文物属于哪个年代"}]}
+    ],
+    chat_template_kwargs={"enable_thinking": False})
+
+# Print the results
 for output in outputs:
     prompt = output.prompt
     generated_text = output.outputs.text
-    reasoning_text = output.outputs.resoning_content
+    reasoning_text = output.outputs.reasoning_content
 ```
 
+### Text Completion Interface (LLM.generate)
+
+```python
+from fastdeploy import LLM, SamplingParams
+
+prompts = [
+    "User: 帮我写一篇关于深圳文心公园的500字游记和赏析。\nAssistant: 好的。"
+]
+
+# Sampling parameters
+sampling_params = SamplingParams(top_p=0.95, max_tokens=6400)
+
+# Load the model
+llm = LLM(model="baidu/ERNIE-4.5-21B-A3B-Base-Paddle", tensor_parallel_size=1, max_model_len=8192)
+
+# Batch inference (the engine queues requests and batches them dynamically based on available resources)
+outputs = llm.generate(prompts, sampling_params)
+
+# Print the results
+for output in outputs:
+    prompt = output.prompt
+    generated_text = output.outputs.text
+```
+> Note: The text completion interface is suited to scenarios where the user has already built the full context and expects the model to output only the continuation; no additional `prompt` concatenation is applied during inference.
+> For chat models, the Chat Interface (`LLM.chat`) is recommended.
+
+For multimodal models such as `baidu/ERNIE-4.5-VL-28B-A3B-Paddle`, calling the `generate` interface requires a prompt that includes images. Usage is as follows:
+```python
+import io
+import os
+import requests
+from PIL import Image
+
+from fastdeploy.entrypoints.llm import LLM
+from fastdeploy.engine.sampling_params import SamplingParams
+from fastdeploy.input.ernie_tokenizer import ErnieBotTokenizer
+
+PATH = "baidu/ERNIE-4.5-VL-28B-A3B-Paddle"
+tokenizer = ErnieBotTokenizer.from_pretrained(os.path.dirname(PATH))
+
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image_url", "image_url": {"url": "https://ku.baidu-int.com/vk-assets-ltd/space/2024/09/13/933d1e0a0760498e94ec0f2ccee865e0"}},
+            {"type": "text", "text": "这张图片的内容是什么"}
+        ]
+    }
+]
+
+prompt = tokenizer.apply_chat_template(messages, tokenize=False)
+images, videos = [], []
+for message in messages:
+    content = message["content"]
+    if not isinstance(content, list):
+        continue
+    for part in content:
+        if part["type"] == "image_url":
+            url = part["image_url"]["url"]
+            image_bytes = requests.get(url).content
+            img = Image.open(io.BytesIO(image_bytes))
+            images.append(img)
+        elif part["type"] == "video_url":
+            url = part["video_url"]["url"]
+            video_bytes = requests.get(url).content
+            videos.append({
+                "video": video_bytes,
+                "max_frames": 30
+            })
+
+sampling_params = SamplingParams(temperature=0.1, max_tokens=6400)
+llm = LLM(model=PATH, tensor_parallel_size=1, max_model_len=32768, enable_mm=True, limit_mm_per_prompt={"image": 100}, reasoning_parser="ernie-45-vl")
+outputs = llm.generate(prompts={
+    "prompt": prompt,
+    "multimodal_data": {
+        "image": images,
+        "video": videos
+    }
+}, sampling_params=sampling_params)
+
+# Print the results
+for output in outputs:
+    prompt = output.prompt
+    generated_text = output.outputs.text
+    reasoning_text = output.outputs.reasoning_content
+
+```
+> Note: The `generate` interface does not currently support parameters to toggle the thinking feature on or off; the model's default behavior is always used.
+
 ## 2. API Documentation
 
 ### 2.1 fastdeploy.LLM
@@ -79,18 +154,20 @@ For ```LLM``` configuration, refer to [Parameter Documentation](parameters.md).
 > 2. After startup, the service logs KV Cache block count (e.g. `total_block_num:640`). Multiply this by block_size (default 64) to get total cacheable tokens.
 > 3. Calculate `max_num_seqs` based on cacheable tokens. Example: avg input=800 tokens, output=500 tokens, blocks=640 → `kv_cache_ratio = 800/(800+500)=0.6`, `max_seq_len = 640*64/(800+500)=31`.
 
-### 2.2 fastdeploy.LLM.generate
+### 2.2 fastdeploy.LLM.chat
 
-* prompts(str, list[str], list[int]): Input prompts (batch supported), accepts decoded token ids
+* messages(list[dict], list[list[dict]]): Input messages (batch supported)
 * sampling_params: See 2.4 for parameter details
 * use_tqdm: Enable progress visualization
+* chat_template_kwargs(dict): Extra template parameters (currently supports enable_thinking(bool))
+  *usage example: `chat_template_kwargs={"enable_thinking": False}`*
 
-### 2.3 fastdeploy.LLM.chat
+### 2.3 fastdeploy.LLM.generate
 
-* messages(list[dict], list[list[dict]]): Input messages (batch supported)
+* prompts(str, list[str], list[int], list[list[int]], dict[str, Any], list[dict[str, Any]]): Input prompts (batch supported), accepts decoded token ids
+  *example of using a dict-type parameter: `prompts={"prompt": prompt, "multimodal_data": {"image": images}}`*
 * sampling_params: See 2.4 for parameter details
 * use_tqdm: Enable progress visualization
-* chat_template_kwargs(dict): Extra template parameters (currently supports enable_thinking(bool))
 
 ### 2.4 fastdeploy.SamplingParams
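
The sizing guidance in the section 2.1 notes above is plain arithmetic, so a short worked sketch may help; it reuses the example numbers from the diff (640 blocks, block_size 64, 800 input / 500 output tokens on average), and the variable names are illustrative rather than FastDeploy API.

```python
# Worked example of the KV Cache sizing arithmetic quoted in section 2.1.
total_block_num = 640     # reported in the service log, e.g. "total_block_num:640"
block_size = 64           # default block size
avg_input_tokens = 800    # average prompt length observed online
avg_output_tokens = 500   # average generation length observed online

cacheable_tokens = total_block_num * block_size                                    # 40960 tokens fit in the KV Cache
kv_cache_ratio = avg_input_tokens / (avg_input_tokens + avg_output_tokens)         # ~0.615, rounded to 0.6 in the note
max_concurrent_seqs = cacheable_tokens // (avg_input_tokens + avg_output_tokens)   # 31 (written as max_seq_len in the note)

print(cacheable_tokens, round(kv_cache_ratio, 1), max_concurrent_seqs)
```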

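Section 2.2 above documents `use_tqdm` and batched `messages`, but no example in this commit exercises them together; below is a minimal sketch, assuming the same `baidu/ERNIE-4.5-0.3B-Paddle` model used elsewhere in this diff.

```python
from fastdeploy import LLM, SamplingParams

# Batched chat sketch: a list of conversations plus use_tqdm for a progress bar.
# chat_template_kwargs is omitted here because it only matters for models loaded
# with a reasoning_parser (see the ERNIE-4.5-VL example above).
llm = LLM(model="baidu/ERNIE-4.5-0.3B-Paddle", tensor_parallel_size=1, max_model_len=8192)

messages = [
    [{"role": "user", "content": "把李白的静夜思改写为现代诗"}],
    [{"role": "user", "content": "Write me a poem about large language model."}],
]

outputs = llm.chat(messages, SamplingParams(top_p=0.95, max_tokens=6400), use_tqdm=True)

for output in outputs:
    print(output.outputs.text)
```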
docs/zh/offline_inference.md

Lines changed: 105 additions & 27 deletions
@@ -3,71 +3,146 @@
 ## 1. Usage
 FastDeploy offline inference supports loading the model locally and processing user data. Usage is as follows:
 
-### Text Completion Interface (LLM.generate)
+### Chat Interface (LLM.chat)
 
 ```python
 from fastdeploy import LLM, SamplingParams
 
-prompts = [
-    "把李白的静夜思改写为现代诗",
-    "Write me a poem about large language model.",
+msg1 = [
+    {"role": "system", "content": "I'm a helpful AI assistant."},
+    {"role": "user", "content": "把李白的静夜思改写为现代诗"},
 ]
+msg2 = [
+    {"role": "system", "content": "I'm a helpful AI assistant."},
+    {"role": "user", "content": "Write me a poem about large language model."},
+]
+messages = [msg1, msg2]
 
 # Sampling parameters
 sampling_params = SamplingParams(top_p=0.95, max_tokens=6400)
 
 # Load the model
-llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
-
+llm = LLM(model="baidu/ERNIE-4.5-0.3B-Paddle", tensor_parallel_size=1, max_model_len=8192)
 # Batch inference (the engine queues requests and batches them dynamically based on available resources)
-outputs = llm.generate(prompts, sampling_params)
+outputs = llm.chat(messages, sampling_params)
 
 # Print the results
 for output in outputs:
     prompt = output.prompt
     generated_text = output.outputs.text
 ```
 
-### Chat Interface (LLM.chat)
+Documentation for the ```LLM``` configuration above, `SamplingParams`, `LLM.generate`, `LLM.chat`, and the output structure `RequestOutput` is provided below.
+
+> Note: For reasoning models, specify the `reasoning_parser` parameter when loading the model. At request time, the reasoning feature can be toggled on or off via the `enable_thinking` parameter within `chat_template_kwargs`.
+
+```python
+from fastdeploy.entrypoints.llm import LLM
+# Load the model
+llm = LLM(model="baidu/ERNIE-4.5-VL-28B-A3B-Paddle", tensor_parallel_size=1, max_model_len=32768, enable_mm=True, limit_mm_per_prompt={"image": 100}, reasoning_parser="ernie-45-vl")
+
+outputs = llm.chat(
+    messages=[
+        {"role": "user", "content": [{"type": "image_url", "image_url": {"url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example2.jpg"}},
+                                     {"type": "text", "text": "图中的文物属于哪个年代"}]}
+    ],
+    chat_template_kwargs={"enable_thinking": False})
+
+# Print the results
+for output in outputs:
+    prompt = output.prompt
+    generated_text = output.outputs.text
+    reasoning_text = output.outputs.reasoning_content
+```
+
+### Text Completion Interface (LLM.generate)
 
 ```python
 from fastdeploy import LLM, SamplingParams
 
-msg1 = [
-    {"role": "system", "content": "I'm a helpful AI assistant."},
-    {"role": "user", "content": "把李白的静夜思改写为现代诗"},
-]
-msg2 = [
-    {"role": "system", "content": "I'm a helpful AI assistant."},
-    {"role": "user", "content": "Write me a poem about large language model."},
+prompts = [
+    "User: 帮我写一篇关于深圳文心公园的500字游记和赏析。\nAssistant: 好的。"
 ]
-messages = [msg1, msg2]
 
 # Sampling parameters
 sampling_params = SamplingParams(top_p=0.95, max_tokens=6400)
 
 # Load the model
-llm = LLM(model="ERNIE-4.5-0.3B", tensor_parallel_size=1, max_model_len=8192)
+llm = LLM(model="baidu/ERNIE-4.5-21B-A3B-Base-Paddle", tensor_parallel_size=1, max_model_len=8192)
+
 # Batch inference (the engine queues requests and batches them dynamically based on available resources)
-outputs = llm.chat(messages, sampling_params)
+outputs = llm.generate(prompts, sampling_params)
 
 # Print the results
 for output in outputs:
     prompt = output.prompt
     generated_text = output.outputs.text
 ```
+> Note: The text completion interface is suited to scenarios where the user has already built the full context and expects the model to output only the continuation; no additional `prompt` concatenation is applied during inference.
+> For chat models, the Chat Interface (LLM.chat) is recommended.
 
-Documentation for the ```LLM``` configuration above, `SamplingParams`, `LLM.generate`, `LLM.chat`, and the output structure `RequestOutput` is provided below.
+For multimodal models such as `baidu/ERNIE-4.5-VL-28B-A3B-Paddle`, calling the `generate` interface requires a prompt that includes images. Usage is as follows:
+```python
+import io
+import os
+import requests
+from PIL import Image
+
+from fastdeploy.entrypoints.llm import LLM
+from fastdeploy.engine.sampling_params import SamplingParams
+from fastdeploy.input.ernie_tokenizer import ErnieBotTokenizer
+
+PATH = "baidu/ERNIE-4.5-VL-28B-A3B-Paddle"
+tokenizer = ErnieBotTokenizer.from_pretrained(os.path.dirname(PATH))
+
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image_url", "image_url": {"url": "https://ku.baidu-int.com/vk-assets-ltd/space/2024/09/13/933d1e0a0760498e94ec0f2ccee865e0"}},
+            {"type": "text", "text": "这张图片的内容是什么"}
+        ]
+    }
+]
 
-> Note: For X1 model output
+prompt = tokenizer.apply_chat_template(messages, tokenize=False)
+images, videos = [], []
+for message in messages:
+    content = message["content"]
+    if not isinstance(content, list):
+        continue
+    for part in content:
+        if part["type"] == "image_url":
+            url = part["image_url"]["url"]
+            image_bytes = requests.get(url).content
+            img = Image.open(io.BytesIO(image_bytes))
+            images.append(img)
+        elif part["type"] == "video_url":
+            url = part["video_url"]["url"]
+            video_bytes = requests.get(url).content
+            videos.append({
+                "video": video_bytes,
+                "max_frames": 30
+            })
+
+sampling_params = SamplingParams(temperature=0.1, max_tokens=6400)
+llm = LLM(model=PATH, tensor_parallel_size=1, max_model_len=32768, enable_mm=True, limit_mm_per_prompt={"image": 100}, reasoning_parser="ernie-45-vl")
+outputs = llm.generate(prompts={
+    "prompt": prompt,
+    "multimodal_data": {
+        "image": images,
+        "video": videos
+    }
+}, sampling_params=sampling_params)
 
-```python
 # Print the results
 for output in outputs:
     prompt = output.prompt
     generated_text = output.outputs.text
-    reasoning_text = output.outputs.resoning_content
+    reasoning_text = output.outputs.reasoning_content
+
 ```
+> Note: The `generate` interface does not currently support parameters to toggle the thinking feature on or off; the model's default behavior is always used.
 
 ## 2. API Documentation
 
@@ -80,18 +155,21 @@
 > 2. After the model service starts, the log file log/fastdeploy.log prints a line such as `Doing profile, the total_block_num:640`, where 640 is the automatically computed number of KV Cache blocks. Multiplying it by block_size (default 64) gives the total number of tokens that can be cached in the KV Cache after deployment.
 > 3. `max_num_seqs` configures the maximum number of concurrently processed requests in the decode phase. A good value can be derived from the cacheable token count in point 1. For example, with an average of 800 input tokens and 500 output tokens observed online, and 640 KV Cache blocks of block_size 64 computed here, you can set `kv_cache_ratio = 800 / (800 + 500) = 0.6` and `max_seq_len = 640 * 64 / (800 + 500) = 31`.
 
-### 2.2 fastdeploy.LLM.generate
 
-* prompts(str, list[str], list[int]): Input prompts (batch supported), accepts decoded token ids
+### 2.2 fastdeploy.LLM.chat
+
+* messages(list[dict], list[list[dict]]): Input messages (batch supported)
 * sampling_params: Model hyperparameters; see 2.4 for details
 * use_tqdm: Enable progress visualization during inference
+* chat_template_kwargs(dict): Extra parameters passed to the chat template (currently supports enable_thinking(bool))
+  *usage example: `chat_template_kwargs={"enable_thinking": False}`*
 
-### 2.3 fastdeploy.LLM.chat
+### 2.3 fastdeploy.LLM.generate
 
-* messages(list[dict], list[list[dict]]): Input messages (batch supported)
+* prompts(str, list[str], list[int], list[list[int]], dict[str, Any], list[dict[str, Any]]): Input prompts (batch supported), accepts decoded token ids
+  *example of using a dict-type parameter: `prompts={"prompt": prompt, "multimodal_data": {"image": images}}`*
 * sampling_params: Model hyperparameters; see 2.4 for details
 * use_tqdm: Enable progress visualization during inference
-* chat_template_kwargs(dict): Extra parameters passed to the chat template (currently supports enable_thinking(bool))
 
 ### 2.4 fastdeploy.SamplingParams
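
Both language versions point to section 2.4 for `SamplingParams`, but its field list is outside this diff; the sketch below therefore uses only the parameters that actually appear in the examples above (top_p, temperature, max_tokens).

```python
from fastdeploy import SamplingParams

# Sketch only: these three fields are the ones exercised by the examples in this commit;
# SamplingParams accepts more options, documented in section 2.4 of the full page.
sampling_params = SamplingParams(
    temperature=0.1,   # low temperature, as in the multimodal generate example
    top_p=0.95,        # nucleus sampling threshold, as in the text examples
    max_tokens=6400,   # generation length cap used throughout these examples
)
```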

fastdeploy/engine/engine.py

Lines changed: 3 additions & 0 deletions
@@ -961,6 +961,9 @@ def _setting_environ_variables(self):
             "FLAGS_pir_interpreter_record_stream_for_gc_cache":
             os.getenv("FLAGS_pir_interpreter_record_stream_for_gc_cache",
                       default="1"),
+            "FLAGS_parameters_persistent_mode_in_dy2st":
+            os.getenv("FLAGS_parameters_persistent_mode_in_dy2st",
+                      default="1"),
         })
 
         if self.cfg.splitwise_role != "mixed":
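
The new entry follows the same `os.getenv(..., default="1")` pattern as the neighbouring flags, so the default should be overridable from the process environment before the engine starts. A hypothetical sketch follows; the effect of setting the flag to "0" is an assumption, not something this diff documents.

```python
import os

# Hypothetical override: _setting_environ_variables() reads the flag via
# os.getenv("FLAGS_parameters_persistent_mode_in_dy2st", default="1"),
# so a value set in the environment before the engine is created takes
# precedence over the new default of "1".
os.environ["FLAGS_parameters_persistent_mode_in_dy2st"] = "0"

from fastdeploy import LLM

llm = LLM(model="baidu/ERNIE-4.5-0.3B-Paddle", tensor_parallel_size=1, max_model_len=8192)
```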
