
Inference with an ONNX model exported by PaddleX fails on an Ascend 310B (aarch64) board: CPU inference errors out. #3915


Open
younland opened this issue Apr 27, 2025 · 2 comments

@younland

Describe the problem

CPU inference of a semantic segmentation model fails on the 310B board.

Steps to reproduce

1. Train a custom semantic segmentation model with PaddleX, then export it to ONNX after training.

2. Run inference with infer_onnx.py from the PaddleSeg repository.

  1. Which model and dataset are you using?

  2. Error message and related log:

```
2025-04-27 15:14:46.723595316 [E:onnxruntime:, inference_session.cc:1644 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/upsamplebase.h:201 onnxruntime::UpsampleMode onnxruntime::UpsampleBase::StringToUpsampleMode(const string&) mode attribute is . It can only be nearest(default) or linear or cubic.

Traceback (most recent call last):
  File "/home/HwHiAiUser/paddle/PaddleSeg/deploy/python/infer_onnx.py", line 69, in <module>
    main(args)
  File "/home/HwHiAiUser/paddle/PaddleSeg/deploy/python/infer_onnx.py", line 58, in main
    sess = InferenceSession(args.onnx_file)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/upsamplebase.h:201 onnxruntime::UpsampleMode onnxruntime::UpsampleBase::StringToUpsampleMode(const string&) mode attribute is . It can only be nearest(default) or linear or cubic.
```
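The ORT message ("mode attribute is . It can only be nearest(default) or linear or cubic.") indicates a Resize/Upsample node whose `mode` attribute is empty. A minimal sketch to locate the offending node in the exported graph; the `onnx` package and the `model.onnx` path are assumptions, not from this thread:

```python
# Sketch (assumption): scan the exported ONNX graph for Resize/Upsample nodes
# whose "mode" attribute is missing or not one of the values ORT accepts --
# the condition behind the "mode attribute is ." error above.
VALID_MODES = {b"nearest", b"linear", b"cubic"}  # modes ONNX Runtime accepts

def find_bad_resize_nodes(model):
    """Return names of Resize/Upsample nodes with a missing or invalid mode."""
    bad = []
    for node in model.graph.node:
        if node.op_type not in ("Resize", "Upsample"):
            continue
        modes = [attr.s for attr in node.attribute if attr.name == "mode"]
        if not modes or modes[0] not in VALID_MODES:
            bad.append(node.name or node.op_type)
    return bad

if __name__ == "__main__":
    import onnx  # pip install onnx; "model.onnx" is a placeholder path
    print(find_bad_resize_nodes(onnx.load("model.onnx")))
```

The helper only relies on the `op_type`/`attribute` fields of graph nodes, so it works on any loaded `ModelProto`.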

@younland
Author

Model conversion also fails on the Ascend 310B:
```
(base) root@davinci-mini:/home/HwHiAiUser/paddle/PaddleX# paddlex --paddle2onnx --paddle_model_dir ./models/seg --onnx_model_dir ./models/seg --opset_version 11
Input dir: models/seg
Output dir: models/seg
Paddle2ONNX conversion starting...
2025-04-27 17:21:19 [WARNING] The .pdmodel file is deprecated in paddlepaddle 3.0 and will be removed in the future. Try to convert from .pdmodel file to json file.
I0427 17:21:19.828433 1023507 program_interpreter.cc:243] New Executor is Running.
[Paddle2ONNX] Start parsing the Paddle model file...
[ERROR][Paddle2ONNX] [OP: pd_op.pool2d] Adaptive only support static input shape.
[ERROR][Paddle2ONNX] There are unsupported features.

C++ Traceback (most recent call last):
0   paddle2onnx::Export(char const*, char const*, char**, int*, int, bool, bool, bool, bool, bool, paddle2onnx::CustomOp*, int, char const*, char**, int*, char const*, bool*, bool, char**, int)

Error Message Summary:
FatalError: Process abort signal is detected by the operating system.
  [TimeInfo: *** Aborted at 1745745680 (unix time) try "date -d @1745745680" if you are using GNU date ***]
  [SignalInfo: *** SIGABRT (@0xf9e13) received by PID 1023507 (TID 0xe7ffd9073020) from PID 1023507 ***]

Paddle2ONNX conversion failed with exit code -6
(base) root@davinci-mini:/home/HwHiAiUser/paddle/PaddleX# pip list | grep paddle
paddle2onnx                  2.0.1
paddlepaddle                 3.0.0rc1
paddleseg                    0.0.0.dev0   /home/HwHiAiUser/paddle/PaddleSeg
paddlex                      3.0.0rc1     /home/HwHiAiUser/paddle/PaddleX
```
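The converter error ("Adaptive only support static input shape") suggests the inference model was exported with dynamic input dimensions, which adaptive pooling cannot be converted under. A hedged workaround sketch: re-export with a fixed input shape before converting. The `--input_shape` option of PaddleSeg's export tool and all paths below are assumptions; check `python tools/export.py --help` in your PaddleSeg checkout first.

```shell
# Assumption: PaddleSeg's export script accepts a fixed NCHW input shape,
# which makes the adaptive pool2d convertible. Paths/config are placeholders.
cd /home/HwHiAiUser/paddle/PaddleSeg
python tools/export.py \
    --config path/to/your_config.yml \
    --model_path path/to/model.pdparams \
    --save_dir ./models/seg_static \
    --input_shape 1 3 512 512   # fixed shape instead of dynamic H/W

# Then retry the conversion on the statically shaped model:
paddlex --paddle2onnx \
    --paddle_model_dir ./models/seg_static \
    --onnx_model_dir ./models/seg_static \
    --opset_version 11
```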

@a31413510
Contributor

PP-LiteSeg does not currently support ONNX export. Try a different segmentation model instead, such as SegFormer, SeaFormer, or the OCRNet_HRNet series.
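After exporting one of the suggested models, a quick CPU smoke test catches initialization errors like the one above before deploying to the board. A minimal sketch; `model.onnx` is a placeholder path, and filling dynamic dims with 1 is an assumption that may not match your model's real constraints:

```python
# Hedged smoke-test sketch: confirm a newly exported ONNX model initializes
# and runs on CPU before copying it to the 310B board.
def concretize_shape(shape, default=1):
    """Replace dynamic dims (None or symbolic names) with a concrete size."""
    return [d if isinstance(d, int) else default for d in shape]

if __name__ == "__main__":
    import numpy as np
    import onnxruntime as ort  # pip install onnxruntime

    # "model.onnx" is a placeholder; CPUExecutionProvider matches the board setup.
    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    dummy = np.zeros(concretize_shape(inp.shape), dtype=np.float32)
    print([out.shape for out in sess.run(None, {inp.name: dummy})])
```

If `InferenceSession` construction succeeds here, the earlier Resize-mode initialization error is gone; the dummy run additionally verifies the graph executes end to end.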
