Commit 3ff562a
Bump up to version 0.3.0 (#371)
* Update VERSION_NUMBER
* Update paddle_inference.cmake
* Delete docs directory
* release new docs
* update version number
* add vision result doc
* update version
* fix dead link
* fix vision
* fix dead link
* Update README_EN.md
* Update README_EN.md
* Update README_EN.md
* Update README_EN.md
* Update README_EN.md
* Update README_CN.md
* Update README_EN.md
* Update README_CN.md
* Update README_EN.md
* Update README_CN.md
* Update README_EN.md
* Update README_EN.md

Co-authored-by: leiqing <54695910+leiqing1@users.noreply.github.com>
1 parent bac1728 commit 3ff562a

174 files changed: +322, -6775 lines changed

README_CN.md

Lines changed: 9 additions & 1 deletion
@@ -28,7 +28,15 @@

  ## Recent Updates

- - 🔥 **2022.8.18: Released FastDeploy [release/v0.2.0](https://github.com/PaddlePaddle/FastDeploy/releases/tag/release%2F0.2.0)** <br>
+ - 🔥 **2022.10.15: Released FastDeploy [release v0.3.0](https://github.com/PaddlePaddle/FastDeploy/tree/release%2F0.3.0)** <br>
+   - **New server-side deployment upgrade: faster inference performance, one-click quantization, more vision and NLP models**
+     - Integrated the OpenVINO inference engine, with the same development experience as TensorRT, ONNX Runtime, and Paddle Inference;
+     - Added a [one-click model quantization tool](tools/quantization) supporting vision models such as YOLOv7, YOLOv6, and YOLOv5, with a 1.5x to 2x inference speedup on CPU and GPU;
+     - Added vision models such as PP-OCRv3, PP-OCRv2, PP-Matting, PP-HumanMatting, and ModNet, with [end-to-end deployment examples](examples/vision)
+     - Added the NLP information extraction model UIE, with an [end-to-end deployment example](examples/text/uie).
+ -
+
+ - 🔥 **2022.8.18: Released FastDeploy [release/v0.2.0](https://github.com/PaddlePaddle/FastDeploy/tree/release%2F0.2.0)** <br>
    - **New server-side deployment upgrade: faster inference performance, support for more vision models**
      - Released a high-performance inference engine SDK for x86 CPUs and NVIDIA GPUs, with a significant increase in inference speed
      - Integrated inference engines such as Paddle Inference, ONNX Runtime, and TensorRT, providing a unified deployment experience
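
The bullets above describe choosing an inference backend through FastDeploy's RuntimeOption API. Below is a minimal Python sketch of how that selection typically looks, based on the 0.3.0-era examples; the exact method names, the YOLOv5 class, and the model path are assumptions and may differ between versions.

```python
# Minimal sketch: selecting an inference backend via RuntimeOption.
# Method names follow FastDeploy's 0.3.0-era examples; paths are placeholders.
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_cpu()                # or option.use_gpu(0) for an NVIDIA GPU
option.use_openvino_backend()   # swap for use_ort_backend(), use_trt_backend(),
                                # or use_paddle_backend() without changing model code

# The same model class is used regardless of the backend chosen above.
model = fd.vision.detection.YOLOv5("yolov5s.onnx", runtime_option=option)
```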

README_EN.md

Lines changed: 16 additions & 9 deletions
@@ -28,17 +28,24 @@ English | [简体中文](README_CN.md)
  | **Face Alignment** | **3D Object Detection** | **Face Editing** | **Image Animation** |
  | <img src='https://user-images.githubusercontent.com/54695910/188059460-9845e717-c30a-4252-bd80-b7f6d4cf30cb.png' height="126px" width="190px"> | <img src='https://user-images.githubusercontent.com/54695910/188270227-1a4671b3-0123-46ab-8d0f-0e4132ae8ec0.gif' height="126px" width="190px"> | <img src='https://user-images.githubusercontent.com/54695910/188054663-b0c9c037-6d12-4e90-a7e4-e9abf4cf9b97.gif' height="126px" width="126px"> | <img src='https://user-images.githubusercontent.com/54695910/188056800-2190e05e-ad1f-40ef-bf71-df24c3407b2d.gif' height="126px" width="190px"> |

- ## Updates
+ ## 📣 Recent Updates

- - 🔥 **2022.8.18:Release FastDeploy [release/v0.2.0](https://github.com/PaddlePaddle/FastDeploy/releases/tag/release%2F0.2.0)** <br>
-   - **New server-side deployment upgrade: faster inference performance, support more vision model**
+ - 🔥 **2022.10.15:Release FastDeploy [release v0.3.0](https://github.com/PaddlePaddle/FastDeploy/tree/release/0.3.0)** <br>
+   - **New server-side deployment upgrade: support more CV model and NLP model**
+     - Integrate OpenVINO and provide a seamless deployment experience with other inference engines include TensorRT、ONNX Runtime、Paddle Inference;
+     - Support [one-click model quantization](tools/quantization) to improve model inference speed by 1.5 to 2 times on CPU & GPU platform. The supported quantized model are YOLOv7, YOLOv6, YOLOv5, etc.
+     - New CV models include PP-OCRv3, PP-OCRv2, PP-TinyPose, PP-Matting, etc. and provides [end-to-end deployment demos](examples/vision/detection/)
+     - New information extraction model is UIE, and provides [end-to-end deployment demos](examples/text/uie).
+
+ - 🔥 **2022.8.18:Release FastDeploy [release v0.2.0](https://github.com/PaddlePaddle/FastDeploy/tree/release%2F0.2.0)** <br>
+   - **New server-side deployment upgrade: faster inference performance, support more CV model**
      - Release high-performance inference engine SDK based on x86 CPUs and NVIDIA GPUs, with significant increase in inference speed
-     - Integrate Paddle Inference, ONNXRuntime, TensorRT and other inference engines and provide a seamless deployment experience
-     - Supports full range of object detection models such as YOLOv7, YOLOv6, YOLOv5, PP-YOLOE and provides [End-To-End Deployment Demos](examples/vision/detection/)
-     - Support over 40 key models and [Demo Examples](examples/vision/) including face detection, face recognition, real-time portrait matting, image segmentation.
+     - Integrate Paddle Inference, ONNX Runtime, TensorRT and other inference engines and provide a seamless deployment experience
+     - Supports full range of object detection models such as YOLOv7, YOLOv6, YOLOv5, PP-YOLOE and provides [end-to-end deployment demos](examples/vision/detection/)
+     - Support over 40 key models and [demo examples](examples/vision/) including face detection, face recognition, real-time portrait matting, image segmentation.
      - Support deployment in both Python and C++
    - **Supports Rockchip, Amlogic, NXP and other NPU chip deployment capabilities on edge device deployment**
-     - Release Lightweight Object Detection [Picodet-NPU Deployment Demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/linux/picodet_detection), providing the full quantized inference capability for INT8.
+     - Release Lightweight Object Detection [Picodet-NPU deployment demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/develop/object_detection/linux/picodet_detection), providing the full quantized inference capability for INT8.

  ## Contents

@@ -71,7 +78,7 @@ English | [简体中文](README_CN.md)
  - python >= 3.6
  - OS: Linux x86_64/macOS/Windows 10

- ##### Install Library with GPU Support
+ ##### Install Fastdeploy SDK with CPU&GPU support

  ```bash
  pip install fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
@@ -83,7 +90,7 @@ pip install fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
  conda config --add channels conda-forge && conda install cudatoolkit=11.2 cudnn=8.2
  ```

- ##### Install CPU-only Library
+ ##### Install Fastdeploy SDK with only CPU support

  ```bash
  pip install fastdeploy-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
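
For context on the install commands shown in this diff, here is a short, hedged sketch of running detection with the Python package once it is installed. It mirrors the project's PP-YOLOE example; the model directory, test image, and the vis_detection helper name are assumptions and may differ between versions.

```python
# Hedged end-to-end sketch after `pip install fastdeploy-python` (or the GPU wheel).
# Mirrors FastDeploy's PP-YOLOE example; model and image paths are placeholders.
import cv2
import fastdeploy as fd

model_dir = "ppyoloe_crn_l_300e_coco"   # placeholder: an exported PaddleDetection model
model = fd.vision.detection.PPYOLOE(
    f"{model_dir}/model.pdmodel",
    f"{model_dir}/model.pdiparams",
    f"{model_dir}/infer_cfg.yml")

im = cv2.imread("test.jpg")             # placeholder input image
result = model.predict(im)
print(result)                           # DetectionResult: boxes, scores, label ids

# Visualization helper (name assumed from the 0.3.0-era examples).
vis_im = fd.vision.vis_detection(im, result, score_threshold=0.5)
cv2.imwrite("visualized_result.jpg", vis_im)
```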

VERSION_NUMBER

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
- 0.0.0
+ 0.3.0

cmake/paddle_inference.cmake

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ endif(WIN32)


  set(PADDLEINFERENCE_URL_BASE "https://bj.bcebos.com/fastdeploy/third_libs/")
- set(PADDLEINFERENCE_VERSION "2.4-dev")
+ set(PADDLEINFERENCE_VERSION "2.4-dev1")
  if(WIN32)
    if (WITH_GPU)
      set(PADDLEINFERENCE_FILE "paddle_inference-win-x64-gpu-${PADDLEINFERENCE_VERSION}.zip")
3 files renamed without changes.

docs/api/function.md

Lines changed: 0 additions & 278 deletions
This file was deleted.
