
Commit a591090

Fix doc5 rc0 (#3632)
* solve conflict
* fix doc
* fix en doc
* fix link in doc
* fix img link
1 parent fbb3b6e commit a591090

140 files changed: +10,214 additions, −2,393 deletions


docs/module_usage/tutorials/cv_modules/3d_bev_detection.en.md

Lines changed: 58 additions & 15 deletions
@@ -26,21 +26,48 @@ The 3D multimodal fusion detection module is a key component in the fields of co
 <tr>
 </table>
 
-**Test Environment Description**:
-
-- **Performance Test Environment**
-- **Test Dataset**: The above accuracy metrics are based on the <a href="https://www.nuscenes.org/nuscenes">nuscenes</a> validation set with mAP(0.5:0.95) and NDS 60.9, and the precision type is FP32.
-- **Hardware Configuration**:
-- GPU: NVIDIA Tesla T4
-- CPU: Intel Xeon Gold 6271C @ 2.60GHz
-- Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
-
-- **Inference Mode Description**
-
-| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
-|-------------|----------------------------------------|-------------------|---------------------------------------------------|
-| Normal Mode | FP32 Precision / No TRT Acceleration | FP32 Precision / 8 Threads | PaddleInference |
-| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 Precision / 8 Threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
+<strong>Test Environment Description:</strong>
+
+<ul>
+<li><b>Performance Test Environment</b>
+<ul>
+<li><strong>Test Dataset:</strong>The above accuracy metrics are based on the <a href="https://www.nuscenes.org/nuscenes">nuscenes</a> validation set with mAP(0.5:0.95) and NDS 60.9, and the precision type is FP32.</li>
+<li><strong>Hardware Configuration:</strong>
+<ul>
+<li>GPU: NVIDIA Tesla T4</li>
+<li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li>
+<li>Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2</li>
+</ul>
+</li>
+</ul>
+</li>
+<li><b>Inference Mode Description</b></li>
+</ul>
+
+<table border="1">
+<thead>
+<tr>
+<th>Mode</th>
+<th>GPU Configuration</th>
+<th>CPU Configuration</th>
+<th>Acceleration Technology Combination</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>Normal Mode</td>
+<td>FP32 Precision / No TRT Acceleration</td>
+<td>FP32 Precision / 8 Threads</td>
+<td>PaddleInference</td>
+</tr>
+<tr>
+<td>High-Performance Mode</td>
+<td>Optimal combination of pre-selected precision types and acceleration strategies</td>
+<td>FP32 Precision / 8 Threads</td>
+<td>Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.)</td>
+</tr>
+</tbody>
+</table>
 
 ## III. Quick Integration
 > ❗ Before quick integration, please install the PaddleX wheel package first. For details, refer to the [PaddleX Local Installation Guide](../../../installation/installation.en.md).

@@ -70,6 +97,8 @@ pip install open3d
 python paddlex/inference/models/3d_bev_detection/visualizer_3d.py --save_path="./output/"
 ```
 
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/3d_bev_detection/02.png">
+
 After running, the result obtained is:
 
 ```bash

@@ -152,6 +181,20 @@ The following is an explanation of relevant methods and parameters:
 <td>No</td>
 <td>None</td>
 </tr>
+<tr>
+<td><code>device</code></td>
+<td>The device used for model inference</td>
+<td><code>str</code></td>
+<td>It supports specifying specific GPU card numbers, such as "gpu:0", other hardware card numbers, such as "npu:0", or CPU, such as "cpu".</td>
+<td><code>gpu:0</code></td>
+</tr>
+<tr>
+<td><code>use_hpip</code></td>
+<td>Whether to enable high-performance inference.</td>
+<td><code>bool</code></td>
+<td>None</td>
+<td><code>False</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX will be used. If `model_dir` is specified, the user-defined model will be used.
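The two table rows added in the last hunk document new constructor arguments (`device` and `use_hpip`) for this module. As a rough illustration of how they fit together with `model_name`, here is a minimal sketch assuming the standard `paddlex.create_model` entry point; the model name and input path are placeholders, not values taken from this commit:

```python
# Minimal sketch, not part of this commit: shows where the newly documented
# `device` and `use_hpip` arguments are passed when creating the module.
from paddlex import create_model

model = create_model(
    model_name="BEVFusion",  # placeholder; use a name from the doc's model table
    device="gpu:0",          # per the table: "gpu:0", "npu:0", or "cpu"
    use_hpip=False,          # True enables high-performance inference
)

# Placeholder input path; use the tutorial's own demo data instead.
output = model.predict("nuscenes_infer.tar", batch_size=1)
for res in output:
    res.print()                              # print the detection result
    res.save_to_json(save_path="./output/")  # save the result as JSON
```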

docs/module_usage/tutorials/cv_modules/3d_bev_detection.md

Lines changed: 58 additions & 17 deletions
@@ -30,21 +30,49 @@ comments: true
 
 </table>
 
-**测试环境说明:**
-
-- **性能测试环境**
-- **测试数据集**:<a href="https://www.nuscenes.org/nuscenes">nuscenes</a>验证集 mAP(0.5:0.95), NDS 60.9, 精度类型为 FP32。
-- **硬件配置**
-- GPU:NVIDIA Tesla T4
-- CPU:Intel Xeon Gold 6271C @ 2.60GHz
-- 其他环境:Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
-
-- **推理模式说明**
-
-| 模式 | GPU配置 | CPU配置 | 加速技术组合 |
-|-------------|----------------------------------|------------------|---------------------------------------------|
-| 常规模式 | FP32精度 / 无TRT加速 | FP32精度 / 8线程 | PaddleInference |
-| 高性能模式 | 选择先验精度类型和加速策略的最优组合 | FP32精度 / 8线程 | 选择先验最优后端(Paddle/OpenVINO/TRT等) |
+<strong>测试环境说明:</strong>
+
+<ul>
+<li><b>性能测试环境</b>
+<ul>
+<li><strong>测试数据集:</strong><a href="https://www.nuscenes.org/nuscenes">nuscenes</a>验证集 mAP(0.5:0.95), NDS 60.9, 精度类型为 FP32。</li>
+<li><strong>硬件配置:</strong>
+<ul>
+<li>GPU:NVIDIA Tesla T4</li>
+<li>CPU:Intel Xeon Gold 6271C @ 2.60GHz</li>
+<li>其他环境:Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2</li>
+</ul>
+</li>
+</ul>
+</li>
+<li><b>推理模式说明</b></li>
+</ul>
+
+
+<table border="1">
+<thead>
+<tr>
+<th>模式</th>
+<th>GPU配置</th>
+<th>CPU配置</th>
+<th>加速技术组合</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>常规模式</td>
+<td>FP32精度 / 无TRT加速</td>
+<td>FP32精度 / 8线程</td>
+<td>PaddleInference</td>
+</tr>
+<tr>
+<td>高性能模式</td>
+<td>选择先验精度类型和加速策略的最优组合</td>
+<td>FP32精度 / 8线程</td>
+<td>选择先验最优后端(Paddle/OpenVINO/TRT等)</td>
+</tr>
+</tbody>
+</table>
 
 </details>
 

@@ -77,8 +105,7 @@ pip install open3d
 python paddlex/inference/models/3d_bev_detection/visualizer_3d.py --save_path="./output/"
 ```
 
-<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/refs/heads/main/images/images/pipelines/3d_bev_detection/02.png">
-
+<img src="https://raw.githubusercontent.com/cuicheng01/PaddleX_doc_images/main/images/pipelines/3d_bev_detection/02.png">
 
 运行后,得到的结果为:
 ```bash

@@ -162,6 +189,20 @@ python paddlex/inference/models/3d_bev_detection/visualizer_3d.py --save_path=".
 <td>无</td>
 <td>无</td>
 </tr>
+<tr>
+<td><code>device</code></td>
+<td>模型推理设备</td>
+<td><code>str</code></td>
+<td>支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。</td>
+<td><code>gpu:0</code></td>
+</tr>
+<tr>
+<td><code>use_hpip</code></td>
+<td>是否启用高性能推理</td>
+<td><code>bool</code></td>
+<td>无</td>
+<td><code>False</code></td>
+</tr>
 </table>
 
 * 其中,`model_name` 必须指定,指定 `model_name` 后,默认使用 PaddleX 内置的模型参数,在此基础上,指定 `model_dir` 时,使用用户自定义的模型。

docs/module_usage/tutorials/cv_modules/anomaly_detection.en.md

Lines changed: 55 additions & 15 deletions
@@ -29,21 +29,47 @@ Unsupervised anomaly detection is a technology that automatically identifies and
 </tbody>
 </table>
 
-**Test Environment Description**:
-
-- **Performance Test Environment**
-- **Test Dataset**: The above model accuracy indicators are measured from the MVTec_AD dataset.
-- **Hardware Configuration**:
-- GPU: NVIDIA Tesla T4
-- CPU: Intel Xeon Gold 6271C @ 2.60GHz
-- Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
-
-- **Inference Mode Description**
-
-| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
-|-------------|----------------------------------------|-------------------|---------------------------------------------------|
-| Normal Mode | FP32 Precision / No TRT Acceleration | FP32 Precision / 8 Threads | PaddleInference |
-| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 Precision / 8 Threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
+<strong>Test Environment Description:</strong>
+<ul>
+<li><b>Performance Test Environment</b>
+<ul>
+<li><strong>Test Dataset:</strong>The above model accuracy indicators are measured from the MVTec_AD dataset.</li>
+<li><strong>Hardware Configuration:</strong>
+<ul>
+<li>GPU: NVIDIA Tesla T4</li>
+<li>CPU: Intel Xeon Gold 6271C @ 2.60GHz</li>
+<li>Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2</li>
+</ul>
+</li>
+</ul>
+</li>
+<li><b>Inference Mode Description</b></li>
+</ul>
+
+<table border="1">
+<thead>
+<tr>
+<th>Mode</th>
+<th>GPU Configuration</th>
+<th>CPU Configuration</th>
+<th>Acceleration Technology Combination</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>Normal Mode</td>
+<td>FP32 Precision / No TRT Acceleration</td>
+<td>FP32 Precision / 8 Threads</td>
+<td>PaddleInference</td>
+</tr>
+<tr>
+<td>High-Performance Mode</td>
+<td>Optimal combination of pre-selected precision types and acceleration strategies</td>
+<td>FP32 Precision / 8 Threads</td>
+<td>Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.)</td>
+</tr>
+</tbody>
+</table>
 
 ## III. Quick Integration <a id="quick"> </a>
 Before quick integration, you need to install the PaddleX wheel package. For the installation method of the wheel package, please refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.en.md). After installing the wheel package, a few lines of code can complete the inference of the unsupervised anomaly detection module. You can switch models under this module freely, and you can also integrate the model inference of the unsupervised anomaly detection module into your project. Before running the following code, please download the [demo image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png) to your local machine.

@@ -102,6 +128,20 @@ Relevant methods, parameters, and explanations are as follows:
 <td>None</td>
 <td>None</td>
 </tr>
+<tr>
+<td><code>device</code></td>
+<td>The device used for model inference</td>
+<td><code>str</code></td>
+<td>It supports specifying specific GPU card numbers, such as "gpu:0", other hardware card numbers, such as "npu:0", or CPU, such as "cpu".</td>
+<td><code>gpu:0</code></td>
+</tr>
+<tr>
+<td><code>use_hpip</code></td>
+<td>Whether to enable high-performance inference.</td>
+<td><code>bool</code></td>
+<td>None</td>
+<td><code>False</code></td>
+</tr>
 </table>
 
 * The `model_name` must be specified. After specifying `model_name`, the default model parameters built into PaddleX will be used. If `model_dir` is specified, the user-defined model will be used.
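The same `device` and `use_hpip` rows are added to the anomaly detection parameter table above. Tying this to the quick-integration text in the first hunk, which points at the demo image `uad_grid.png`, a minimal sketch under the same assumptions (the `paddlex.create_model` entry point and a placeholder model name) might look like:

```python
# Minimal sketch, not part of this commit: quick integration of the
# unsupervised anomaly detection module with the newly documented arguments.
from paddlex import create_model

# "STFPM" is an assumed/placeholder model name; pick one from the doc's model table.
model = create_model(model_name="STFPM", device="gpu:0", use_hpip=False)

# Demo image referenced in the diff:
# https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/uad_grid.png
output = model.predict("uad_grid.png", batch_size=1)
for res in output:
    res.print()                               # print the prediction
    res.save_to_img(save_path="./output/")    # save the anomaly-map visualization
    res.save_to_json(save_path="./output/")   # save the raw result as JSON
```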

docs/module_usage/tutorials/cv_modules/anomaly_detection.md

Lines changed: 49 additions & 15 deletions
@@ -29,21 +29,48 @@ comments: true
 </tbody>
 </table>
 
-**测试环境说明:**
-
-- **性能测试环境**
-- **测试数据集**:MVTec_AD 数据集中的grid类别。
-- **硬件配置**
-- GPU:NVIDIA Tesla T4
-- CPU:Intel Xeon Gold 6271C @ 2.60GHz
-- 其他环境:Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
-
-- **推理模式说明**
-
-| 模式 | GPU配置 | CPU配置 | 加速技术组合 |
-|-------------|----------------------------------|------------------|---------------------------------------------|
-| 常规模式 | FP32精度 / 无TRT加速 | FP32精度 / 8线程 | PaddleInference |
-| 高性能模式 | 选择先验精度类型和加速策略的最优组合 | FP32精度 / 8线程 | 选择先验最优后端(Paddle/OpenVINO/TRT等) |
+<strong>测试环境说明:</strong>
+
+<ul>
+<li><b>性能测试环境</b>
+<ul>
+<li><strong>测试数据集:</strong>MVTec_AD 数据集中的grid类别。</li>
+<li><strong>硬件配置:</strong>
+<ul>
+<li>GPU:NVIDIA Tesla T4</li>
+<li>CPU:Intel Xeon Gold 6271C @ 2.60GHz</li>
+<li>其他环境:Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2</li>
+</ul>
+</li>
+</ul>
+</li>
+<li><b>推理模式说明</b></li>
+</ul>
+
+<table border="1">
+<thead>
+<tr>
+<th>模式</th>
+<th>GPU配置</th>
+<th>CPU配置</th>
+<th>加速技术组合</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>常规模式</td>
+<td>FP32精度 / 无TRT加速</td>
+<td>FP32精度 / 8线程</td>
+<td>PaddleInference</td>
+</tr>
+<tr>
+<td>高性能模式</td>
+<td>选择先验精度类型和加速策略的最优组合</td>
+<td>FP32精度 / 8线程</td>
+<td>选择先验最优后端(Paddle/OpenVINO/TRT等)</td>
+</tr>
+</tbody>
+</table>
 
 
 ## 三、快速集成

@@ -107,6 +134,13 @@ for res in output:
 <td>无</td>
 </tr>
 <tr>
+<td><code>device</code></td>
+<td>模型推理设备</td>
+<td><code>str</code></td>
+<td>支持指定GPU具体卡号,如“gpu:0”,其他硬件具体卡号,如“npu:0”,CPU如“cpu”。</td>
+<td><code>gpu:0</code></td>
+</tr>
+<tr>
 <td><code>use_hpip</code></td>
 <td>是否启用高性能推理</td>
 <td><code>bool</code></td>
