PaddleX/3.0-rc/pipeline_deploy/high_performance_inference #3749
Replies: 2 comments
-
I'd like to ask: PaddleX seems to require aiofiles>=24.1.0, yet I can also see Gradio apps that pin aiofiles<24.0 published on paddlepaddle... How can both be true?
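One way to investigate a conflict like this is to check which versions are actually installed in the environment before trusting the declared constraints. A minimal sketch using the standard library (the package names are taken from the question; nothing here is PaddleX-specific):

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional

def installed_version(pkg: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Compare what the environment actually has against both stated constraints.
for pkg in ("aiofiles", "paddlex", "gradio"):
    print(pkg, "->", installed_version(pkg))
```

Running `pip check` afterwards will also report any declared requirements that the installed set violates.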
-
With the following pipeline config:

```
pipeline_name: OCR
text_type: general
use_doc_preprocessor: False
hpi_config:
Serving:
SubPipelines:
SubModules:
```

using `paddlex-hps:paddlex3.0.3-gpu` reports: `No inference backend and configuration could be suggested. Reason: Inference backend 'onnxruntime' is unavailable.`
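An error of this shape usually means the backend's Python module cannot be imported inside the container. A minimal sketch for checking that directly (the module name `onnxruntime` comes from the error message; whether installing `onnxruntime-gpu` resolves it in this image is an assumption, not something the thread confirms):

```python
import importlib.util

def backend_available(module_name: str) -> bool:
    """Check whether a backend's Python module can be resolved for import."""
    return importlib.util.find_spec(module_name) is not None

# If this prints False inside the container, the HPI selector cannot
# suggest the onnxruntime backend; installing the package (e.g.
# onnxruntime or onnxruntime-gpu -- assumed names) typically fixes it.
print("onnxruntime:", backend_available("onnxruntime"))
```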
-
PaddleX/3.0-rc/pipeline_deploy/high_performance_inference
https://paddlepaddle.github.io/PaddleX/3.0-rc/pipeline_deploy/high_performance_inference.html