PaddleX/latest/pipeline_deploy/high_performance_inference #2698
Replies: 7 comments 13 replies
-
Is there high-performance inference support for NPUs, such as the Ascend 910B? In our current tests, response time drops sharply under high concurrency, which appears to be caused by multiple processes competing for NPU resources.
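The slowdown described above is consistent with concurrent requests queueing for exclusive device time. A back-of-the-envelope model of that effect (purely illustrative arithmetic, not PaddleX code; the 50 ms figure is a made-up example):

```python
def worst_case_latency(t_seconds: float, n_processes: int) -> float:
    """Latency of the last request when n concurrent requests
    must take turns on a single accelerator (full serialization)."""
    return t_seconds * n_processes

# With a hypothetical 50 ms of exclusive NPU time per request,
# latency grows linearly with the number of competing processes:
print(worst_case_latency(0.05, 1))   # one process: 0.05 s
print(worst_case_latency(0.05, 16))  # sixteen processes: 0.8 s
```

Under this model, doubling the number of worker processes roughly doubles worst-case latency once the device is saturated, which matches the "sharp drop" the comment reports.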
-
Why does the following error occur when a YAML file is specified for the pipeline?
Traceback (most recent call last):
File "D:\TestProject\pythonProject3\PaddleX\test.py", line 3, in <module>
pipeline = create_pipeline(
^^^^^^^^^^^^^^^^
File "D:\TestProject\pythonProject3\PaddleX\paddlex\inference\pipelines\__init__.py", line 119, in create_pipeline
return create_pipeline_from_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\TestProject\pythonProject3\PaddleX\paddlex\inference\pipelines\__init__.py", line 70, in create_pipeline_from_config
pipeline_name = config["Global"]["pipeline_name"]
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'pipeline_name'
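Judging from the traceback, `create_pipeline_from_config` reads `config["Global"]["pipeline_name"]`, so the YAML file passed in must contain a top-level `Global` section with a `pipeline_name` key. A minimal sketch of that structure, shown as the dict PyYAML would produce after parsing (the value "OCR" is only an illustrative pipeline name, not taken from this thread):

```python
# Dict form of the minimal YAML the traceback implies:
#
#   Global:
#     pipeline_name: OCR   # absence of this key raises the KeyError above
#
# "OCR" is just an example value; use the name of the pipeline you need.
config = {
    "Global": {
        "pipeline_name": "OCR",
    },
}

# The exact lookup from paddlex/inference/pipelines/__init__.py line 70:
pipeline_name = config["Global"]["pipeline_name"]
print(pipeline_name)  # prints: OCR
```

If the YAML file lacks the `Global` section or the `pipeline_name` key (for example, because it is a model config rather than a pipeline config), this lookup fails with exactly the `KeyError: 'pipeline_name'` shown.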
-
This has been made far too complicated. What were the product managers thinking? The original toolkits are no longer maintained, and this grand-unified thing is neither fish nor fowl. I just want to integrate a DLL into a Windows project, and now I'm supposed to set up Docker and a client-server communication model? Impressive. I deploy to several hundred machines; am I supposed to install Docker for every customer first?
-
Can the high-performance inference plugin be installed and used on Windows 11?
-
Hello, I'd like to know whether a follow-up to something like FastDeploy is planned. I work in the industrial sector, where out-of-the-box deployment really matters, and a deployment process like this is very inconvenient. Our hardware is also extremely limited: we usually deploy on Intel integrated graphics and can't even justify a 4060.
-
Where is the list of models supported by high-performance inference?
-
Currently, PaddleX officially provides precompiled packages only for CUDA 11.8 + cuDNN 8.9; CUDA 12 support is in progress. When is it expected to be released?
-
PaddleX/latest/pipeline_deploy/high_performance_inference
https://paddlepaddle.github.io/PaddleX/latest/pipeline_deploy/high_performance_inference.html