Commit 516f151

Update README and version number (#1554)
* update README and version number
* update requires
1 parent 596d951 commit 516f151

6 files changed: +74 -87 lines changed

README.md

Lines changed: 12 additions & 14 deletions
```diff
@@ -10,7 +10,7 @@ Paddle2ONNX supports converting the **PaddlePaddle** model format to the **ONNX** model format
 
 Paddle2ONNX depends on PaddlePaddle 3.0. We recommend using Paddle2ONNX in the following environment:
 
-- PaddlePaddle == 3.0.0b2
+- PaddlePaddle == 3.0.0
 - onnxruntime >= 1.10.0
 
 ## 3 Installing Paddle2ONNX
@@ -51,19 +51,17 @@ paddle2onnx --model_dir model_dir \
 
 | Parameter | Description |
 |----------------------------|----------------------------------------------------------------|
-| --model_dir | Path of the directory containing the Paddle model |
-| --model_filename | **[Optional]** Name of the file under `--model_dir` that stores the network structure |
-| --params_filename | **[Optional]** Name of the file under `--model_dir` that stores the model parameters |
-| --save_file | Save path of the converted model |
-| --opset_version | **[Optional]** OpSet version used for the ONNX conversion; multiple versions in the range 7~19 are currently supported, the default is 9 |
-| --enable_onnx_checker | **[Optional]** Whether to check the correctness of the exported ONNX model; turning this switch on is recommended, the default is True |
-| --enable_auto_update_opset | **[Optional]** Whether to enable automatic opset version upgrading: when a lower opset version cannot be converted, a higher opset version is selected automatically, the default is True |
-| --deploy_backend | **[Optional]** Inference engine for deploying the quantized model; supports onnxruntime/rknn/tensorrt, the default is onnxruntime |
-| --save_calibration_file | **[Optional]** Save path of the cache file that TensorRT 8.X reads when deploying a quantized model, the default is calibration.cache |
-| --version | **[Optional]** Show the paddle2onnx version |
-| --external_filename | **[Optional]** When the exported ONNX model is larger than 2G, the storage path of the external data must be set; the recommended setting is external_data |
-| --export_fp16_model | **[Optional]** Whether to convert the exported ONNX model to FP16 and use ONNXRuntime-GPU to accelerate inference, the default is False |
-| --custom_ops | **[Optional]** Export Paddle OPs as ONNX custom OPs, for example: --custom_ops '{"paddle_op":"onnx_op"}', the default is {} |
+| --model_dir | Path of the directory containing the Paddle model |
+| --model_filename | **[Optional]** Name of the file under `--model_dir` that stores the network structure |
+| --params_filename | **[Optional]** Name of the file under `--model_dir` that stores the model parameters |
+| --save_file | Save path of the converted model |
+| --opset_version | **[Optional]** OpSet version used for the ONNX conversion; multiple versions in the range 7~19 are currently supported, the default is 9 |
+| --enable_auto_update_opset | **[Optional]** Whether to enable automatic opset version upgrading: when a lower opset version cannot be converted, a higher opset version is selected automatically, the default is True |
+| --enable_onnx_checker | **[Optional]** Whether to check the correctness of the exported ONNX model; turning this switch on is recommended, the default is True |
+| --enable_dist_prim_all | **[Optional]** Whether to enable the decomposition of combined operators, the default is False |
+| --enable_optimization | **[Optional]** Whether to enable model optimization, the default is True |
+| --enable_verbose | **[Optional]** Whether to print more verbose logs, the default is False |
+| --version | **[Optional]** Show the paddle2onnx version |
 
 
 ### 4.4 Pruning ONNX
```
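For reference, a minimal sketch of driving the updated CLI from Python with the flags in the table above; the model directory and output path are placeholder values, and the three `enable_*` flags shown are the ones this commit adds.

```python
# Minimal sketch: invoke the updated paddle2onnx CLI via subprocess.
# "model_dir" and "model.onnx" are placeholder paths, not files from this repo.
import subprocess

subprocess.run(
    [
        "paddle2onnx",
        "--model_dir", "model_dir",        # directory containing the Paddle model
        "--save_file", "model.onnx",       # where the converted model is written
        "--opset_version", "19",           # any supported version in 7~19
        "--enable_dist_prim_all", "True",  # new flag: decompose combined operators
        "--enable_optimization", "True",   # new flag: run model optimization
        "--enable_verbose", "True",        # new flag: verbose logging
    ],
    check=True,  # raise CalledProcessError on a non-zero exit code
)
```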

README_en.md

Lines changed: 5 additions & 8 deletions
```diff
@@ -55,15 +55,12 @@ The adjustable conversion parameters are listed in the following table:
 | --params_filename | **[Optional]** Configure the name of the file to store model parameters under `--model_dir` |
 | --save_file | Specify the converted model save directory path |
 | --opset_version | **[Optional]** Configure the OpSet version converted to ONNX, currently supports multiple versions such as 7~19, the default is 9 |
-| --enable_onnx_checker | **[Optional]** Configure whether to check the correctness of the exported ONNX model, it is recommended to turn on this switch, the default is True |
 | --enable_auto_update_opset | **[Optional]** Whether to enable automatic opset version upgrading: when a lower opset version cannot be converted, a higher opset version is selected automatically, the default is True |
-| --deploy_backend | **[Optional]** Inference engine for quantized model deployment, supports onnxruntime/rknn/tensorrt, the default is onnxruntime |
-| --save_calibration_file | **[Optional]** Save path of the cache file that TensorRT 8.X reads when deploying a quantized model, the default is calibration.cache |
-| --version | **[Optional]** View the paddle2onnx version |
-| --external_filename | **[Optional]** When the exported ONNX model is larger than 2G, you need to set the storage path of external data, the recommended setting is external_data |
-| --export_fp16_model | **[Optional]** Whether to convert the exported ONNX model to FP16 format and use ONNXRuntime-GPU to accelerate inference, the default is False |
-| --custom_ops | **[Optional]** Export Paddle OPs as ONNX custom OPs, for example: --custom_ops '{"paddle_op":"onnx_op"}', the default is {} |
-
+| --enable_onnx_checker | **[Optional]** Configure whether to check the correctness of the exported ONNX model, it is recommended to turn on this switch, the default is True |
+| --enable_dist_prim_all | **[Optional]** Whether to enable the decomposition of combined operators, the default is False |
+| --enable_optimization | **[Optional]** Whether to enable ONNX optimization, the default is True |
+| --enable_verbose | **[Optional]** Whether to show verbose logs, the default is False |
+| --version | **[Optional]** View the paddle2onnx version |
 ### 4.4 Pruning ONNX
 
 If you need to adjust ONNX models, please refer to [ONNX related tools](./tools/onnx/README.md)
```

VERSION_NUMBER

Lines changed: 1 addition & 1 deletion
```diff
@@ -1 +1 @@
-2.0.0a5
+2.0.1
```

paddle2onnx/command.py

Lines changed: 54 additions & 62 deletions
```diff
@@ -53,69 +53,56 @@ def arg_parser():
         default=9,
         help="set onnx opset version to export",
     )
-    parser.add_argument(
-        "--deploy_backend",
-        "-d",
-        type=str,
-        default="onnxruntime",
-        choices=["onnxruntime", "tensorrt", "rknn", "others"],
-        help="Quantize model deploy backend, default onnxruntime.",
-    )
-    parser.add_argument(
-        "--save_calibration_file",
-        type=str,
-        default="calibration.cache",
-        help="The calibration cache for TensorRT deploy, default calibration.cache.",
-    )
-    parser.add_argument(
-        "--enable_onnx_checker",
-        type=ast.literal_eval,
-        default=True,
-        help="whether check onnx model validity, default True",
-    )
-    parser.add_argument(
-        "--enable_paddle_fallback",
-        type=ast.literal_eval,
-        default=False,
-        help="whether use PaddleFallback for custom op, default is False",
-    )
-    parser.add_argument(
-        "--version",
-        "-v",
-        action="store_true",
-        default=False,
-        help="get version of paddle2onnx",
-    )
+    # parser.add_argument(
+    #     "--deploy_backend",
+    #     "-d",
+    #     type=str,
+    #     default="onnxruntime",
+    #     choices=["onnxruntime", "tensorrt", "rknn", "others"],
+    #     help="Quantize model deploy backend, default onnxruntime.",
+    # )
+    # parser.add_argument(
+    #     "--save_calibration_file",
+    #     type=str,
+    #     default="calibration.cache",
+    #     help="The calibration cache for TensorRT deploy, default calibration.cache.",
+    # )
     parser.add_argument(
         "--enable_auto_update_opset",
         type=ast.literal_eval,
         default=True,
         help="whether enable auto_update_opset, default is True",
     )
     parser.add_argument(
-        "--enable_dist_prim_all",
+        "--enable_onnx_checker",
         type=ast.literal_eval,
-        default=False,
-        help="whether enable dist_prim_all, default is False",
-    )
-    parser.add_argument(
-        "--external_filename",
-        type=str,
-        default=None,
-        help="The filename of external_data when the model is bigger than 2G.",
+        default=True,
+        help="whether check onnx model validity, default is True",
     )
     parser.add_argument(
-        "--export_fp16_model",
+        "--enable_dist_prim_all",
         type=ast.literal_eval,
         default=False,
-        help="Whether export FP16 model for ORT-GPU, default False",
-    )
-    parser.add_argument(
-        "--custom_ops",
-        type=str,
-        default="{}",
-        help='Ops that needs to be converted to custom op, e.g --custom_ops \'{"paddle_op":"onnx_op"}\', default {}',
-    )
+        help="Whether to enable the decomposition of combined operators, default is False.",
+    )
+    # parser.add_argument(
+    #     "--external_filename",
+    #     type=str,
+    #     default=None,
+    #     help="The filename of external_data when the model is bigger than 2G.",
+    # )
+    # parser.add_argument(
+    #     "--export_fp16_model",
+    #     type=ast.literal_eval,
+    #     default=False,
+    #     help="Whether export FP16 model for ORT-GPU, default False",
+    # )
+    # parser.add_argument(
+    #     "--custom_ops",
+    #     type=str,
+    #     default="{}",
+    #     help='Ops that needs to be converted to custom op, e.g --custom_ops \'{"paddle_op":"onnx_op"}\', default {}',
+    # )
     parser.add_argument(
         "--enable_optimization",
         type=ast.literal_eval,
@@ -128,6 +115,13 @@ def arg_parser():
         default=False,
         help="whether show verbose logs, default False",
     )
+    parser.add_argument(
+        "--version",
+        "-v",
+        action="store_true",
+        default=False,
+        help="get version of paddle2onnx",
+    )
     return parser
```
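Note that the boolean options above use `type=ast.literal_eval` rather than `action="store_true"`, so the CLI accepts explicit `True`/`False` strings. A self-contained sketch of that pattern (a standalone parser for illustration, not the project's full `arg_parser()`):

```python
# Sketch of the type=ast.literal_eval pattern used by arg_parser():
# the literal strings "True"/"False" are evaluated into real Python booleans.
import argparse
import ast

parser = argparse.ArgumentParser()
parser.add_argument(
    "--enable_dist_prim_all",
    type=ast.literal_eval,
    default=False,
    help="Whether to enable the decomposition of combined operators, default is False.",
)

args = parser.parse_args(["--enable_dist_prim_all", "True"])
assert args.enable_dist_prim_all is True  # parsed as a bool, not the string "True"
```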

```diff
@@ -162,16 +156,14 @@ def main():
     else:
         params_file = os.path.join(args.model_dir, args.params_filename)
 
-    if args.external_filename is None:
-        args.external_filename = "external_data"
+    # if args.external_filename is None:
+    #     args.external_filename = "external_data"
 
     base_path = os.path.dirname(args.save_file)
     if base_path and not os.path.exists(base_path):
         os.mkdir(base_path)
-    external_file = os.path.join(base_path, args.external_filename)
-    custom_ops_dict = eval(args.custom_ops)
-
-    calibration_file = args.save_calibration_file
+    # external_file = os.path.join(base_path, args.external_filename)
+    # custom_ops_dict = eval(args.custom_ops)
     paddle2onnx.export(
         model_filename=model_file,
         params_filename=params_file,
@@ -183,11 +175,11 @@ def main():
         enable_onnx_checker=args.enable_onnx_checker,
         enable_experimental_op=True,
         enable_optimize=True,
-        custom_op_info=custom_ops_dict,
-        deploy_backend=args.deploy_backend,
-        calibration_file=calibration_file,
-        external_file=external_file,
-        export_fp16_model=args.export_fp16_model,
+        custom_op_info=None,
+        deploy_backend="onnxruntime",
+        calibration_file="",
+        external_file="",
+        export_fp16_model=False,
         enable_polygraphy=args.enable_optimization,
     )
```
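After this change the removed options are effectively pinned inside `main()`. A hedged sketch of the equivalent direct `paddle2onnx.export` call, using only the keyword arguments visible in the hunk above; the input paths are hypothetical, and arguments elided from the hunk (such as the save path and opset version) are omitted:

```python
# Hedged sketch: call the Python API directly with the values the CLI now pins.
# Keyword names are taken from the diff above; paths are hypothetical placeholders.
import paddle2onnx

paddle2onnx.export(
    model_filename="model_dir/inference.pdmodel",     # hypothetical input path
    params_filename="model_dir/inference.pdiparams",  # hypothetical input path
    enable_onnx_checker=True,      # still user-controlled via --enable_onnx_checker
    enable_experimental_op=True,
    enable_optimize=True,
    custom_op_info=None,           # --custom_ops was removed; no custom op mapping
    deploy_backend="onnxruntime",  # --deploy_backend was removed; now hard-coded
    calibration_file="",           # --save_calibration_file was removed
    external_file="",              # --external_filename was removed
    export_fp16_model=False,       # --export_fp16_model was removed
    enable_polygraphy=True,        # driven by --enable_optimization in the CLI
)
```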

paddle2onnx/parser/pir_parser.cc

Lines changed: 1 addition & 1 deletion
```diff
@@ -393,7 +393,7 @@ bool PaddlePirParser::LoadParams(const std::string& path) {
   std::vector<std::string> var_names;
   GetParamValueName(&var_names);
   P2OLogger(verbose_)
-      << "Get param's attribute 'param_namefrom' from pir::program successfully"
+      << "Get param's attribute 'param_name' from pir::program successfully."
       << std::endl;
 
   int64_t read_size = 0;
```

pyproject.toml

Lines changed: 1 addition & 1 deletion
```diff
@@ -4,7 +4,7 @@ requires = [
     "wheel",
     "cmake>=3.16",
     "setuptools-scm",
-    "paddlepaddle==3.0.0.dev20241211",
+    "paddlepaddle==3.0.0",
 ]
 build-backend = "setuptools.build_meta"
```
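A quick, standard-library-only way to confirm that an installed wheel picked up the bumped version number (assuming the distribution name matches the paddle2onnx package):

```python
# Check the installed paddle2onnx version against the new VERSION_NUMBER.
from importlib.metadata import version

assert version("paddle2onnx") == "2.0.1"
```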
