Commit c721773

Authored by: ziqi-jin, jiangjiajun, root, DefTruth, felixhjh
[Model] Add python API for the detection result and modify YOLOv7 docs (#708)
* first commit for yolov7
* pybind for yolov7
* CPP README.md
* modified yolov7.cc
* README.md modified (multiple revisions)
* python file modify
* delete license in fastdeploy/
* repush the conflict part
* file path modified (multiple revisions)
* move some helpers to private
* add examples for yolov7
* api.md modified (multiple revisions)
* yolov7 release link
* copyright
* change some helpers to private
* change variables to const and fix documents
* gitignore
* Transfer some functions to private members of class
* Merge from develop (#9):
  * Fix compile problem in different python version (#26)
  * fix some usage problem in linux
  * Add PaddleDetection/PPYOLOE model support (#22): add ppdet/ppyoloe, demo code and documents
  * add convert processor to vision (#27)
  * update .gitignore
  * Added checking for cmake include dir
  * fixed missing trt_backend option bug when init from trt
  * remove un-needed data layout and add pre-check for dtype
  * changed RGB2BGR to BGR2RGB in ppcls model
  * add model_zoo yolov6 c++/python demo
  * fixed CMakeLists.txt typos
  * update yolov6 cpp/README.md
  * add yolox c++/pybind and model_zoo demo
  * add normalize with alpha and beta
  * add version notes for yolov5/yolov6/yolox
  * add copyright to yolov5.cc
  * revert normalize
  * fixed some bugs in yolox
  * fixed examples/CMakeLists.txt to avoid conflicts
  * Fix bug while the inference result is empty with YOLOv5 (#29)
  * Add multi-label function for yolov5
  * Update README.md doc
  * Update fastdeploy_runtime.cc: fix wrong variable name option.trt_max_shape
  * Update runtime_option.md: update resnet model dynamic shape setting name from images to x
  * Fix bug when inference result boxes are empty
  * Delete detection.py
* first commit for yolor
* for merge
* Develop (#11), Yolor (#16) (including Develop (#11) (#12)), Develop (#13), Develop (#14): repeated merges of the same develop history listed above
* documents (multiple revisions)
* add is_dynamic for YOLO series (#22)
* modify ppmatting backend and docs
* fix the PPMatting size problem
* fix LimitShort's log
* retrigger ci
* modify the way of dealing with LimitShort
* add python comments for external models
* modify resnet c++ comments
* modify C++ comments for external models
* modify python comments and add result class comments
* fix comments compile error
* modify result.h comments
* python API for detection result
* modify yolov7 docs
* modify python detection api

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
1 parent d8d030b commit c721773

File tree

5 files changed: +118 −30 lines
New file (a COCO 80-class label list) — Lines changed: 80 additions & 0 deletions
@@ -0,0 +1,80 @@
+person
+bicycle
+car
+motorcycle
+airplane
+bus
+train
+truck
+boat
+traffic light
+fire hydrant
+stop sign
+parking meter
+bench
+bird
+cat
+dog
+horse
+sheep
+cow
+elephant
+bear
+zebra
+giraffe
+backpack
+umbrella
+handbag
+tie
+suitcase
+frisbee
+skis
+snowboard
+sports ball
+kite
+baseball bat
+baseball glove
+skateboard
+surfboard
+tennis racket
+bottle
+wine glass
+cup
+fork
+knife
+spoon
+bowl
+banana
+apple
+sandwich
+orange
+broccoli
+carrot
+hot dog
+pizza
+donut
+cake
+chair
+couch
+potted plant
+bed
+dining table
+toilet
+tv
+laptop
+mouse
+remote
+keyboard
+cell phone
+microwave
+oven
+toaster
+sink
+refrigerator
+book
+clock
+vase
+scissors
+teddy bear
+hair drier
+toothbrush
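The label file above stores one class name per line, so the class id is simply the line index. A minimal sketch of loading such a file into the `labels` list that the new `vis_detection` API accepts (the helper name and file path are illustrative, not part of this commit):

```python
import io


def load_label_list(path):
    """Read a one-name-per-line label file; list index == class id."""
    with open(path, "r", encoding="utf-8") as f:
        # Strip whitespace and drop empty lines.
        return [line.strip() for line in f if line.strip()]


# Demonstrate the parsing on an in-memory sample of the first COCO classes,
# so the sketch runs without the actual file on disk.
sample = "person\nbicycle\ncar\n"
labels = [line.strip() for line in io.StringIO(sample) if line.strip()]
print(labels)  # ['person', 'bicycle', 'car']
```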

examples/vision/detection/yolov7/README.md

Lines changed: 2 additions & 2 deletions
@@ -18,8 +18,8 @@ wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
 # Export the ONNX file (Tip: matches the YOLOv7 release v0.1 code)
 python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
 
-# If your code version supports exporting an ONNX file with NMS, use the following command to export the ONNX file (do not use "--end2end" for now; deployment of ONNX models with NMS will be supported later)
-python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
+# If your code version supports exporting an ONNX file with NMS, use the following command to export the ONNX file, and refer to the `yolov7end2end_ort` or `yolov7end2end_trt` examples for usage
+python models/export.py --grid --dynamic --end2end --weights PATH/TO/yolov7.pt
 
 
 ```

examples/vision/detection/yolov7/README_EN.md

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@
 # YOLOv7 Prepare the model for Deployment
 
 - YOLOv7 deployment is based on the [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branch code, and [COCO Pre-Trained Models](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1).
-
+
 - (1) The *.pt provided by the [Official Library](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [Export ONNX Model](#Export-ONNX-Model) operation; *.trt and *.pose models do not support deployment.
 - (2) As for the YOLOv7 model trained on customized data, please follow the operation guidelines in [Export ONNX Model](#Export-ONNX-Model) and then refer to [Detailed Deployment Tutorials](#Detailed-Deployment-Tutorials) to complete the deployment.
 
@@ -16,8 +16,8 @@ wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
 # Export onnx file (Tips: in accordance with YOLOv7 release v0.1 code)
 python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
 
-# If your code supports exporting ONNX files with NMS, please use the following command to export ONNX files (do not use "--end2end" for now. We will support deployment of ONNX models with NMS in the future)
-python models/export.py --grid --dynamic --weights PATH/TO/yolov7.pt
+# If your code supports exporting ONNX files with NMS, please use the following command to export ONNX files, then refer to the `yolov7end2end_ort` or `yolov7end2end_trt` examples
+python models/export.py --grid --dynamic --end2end --weights PATH/TO/yolov7.pt
 ```
 
 ## Download the pre-trained ONNX model

fastdeploy/vision/visualize/visualize_pybind.cc

mode changed: 100755 → 100644
Lines changed: 31 additions & 24 deletions
@@ -18,10 +18,17 @@ namespace fastdeploy {
 void BindVisualize(pybind11::module& m) {
   m.def("vis_detection",
         [](pybind11::array& im_data, vision::DetectionResult& result,
-           float score_threshold, int line_size, float font_size) {
+           std::vector<std::string>& labels, float score_threshold,
+           int line_size, float font_size) {
           auto im = PyArrayToCvMat(im_data);
-          auto vis_im = vision::VisDetection(im, result, score_threshold,
-                                             line_size, font_size);
+          cv::Mat vis_im;
+          if (labels.empty()) {
+            vis_im = vision::VisDetection(im, result, score_threshold,
+                                          line_size, font_size);
+          } else {
+            vis_im = vision::VisDetection(im, result, labels, score_threshold,
+                                          line_size, font_size);
+          }
           FDTensor out;
           vision::Mat(vis_im).ShareWithTensor(&out);
           return TensorToPyArray(out);
@@ -40,8 +47,7 @@ void BindVisualize(pybind11::module& m) {
          [](pybind11::array& im_data, vision::FaceAlignmentResult& result,
             int line_size) {
            auto im = PyArrayToCvMat(im_data);
-           auto vis_im =
-               vision::VisFaceAlignment(im, result, line_size);
+           auto vis_im = vision::VisFaceAlignment(im, result, line_size);
            FDTensor out;
            vision::Mat(vis_im).ShareWithTensor(&out);
            return TensorToPyArray(out);
@@ -86,12 +92,13 @@ void BindVisualize(pybind11::module& m) {
            return TensorToPyArray(out);
          })
      .def("vis_mot",
-          [](pybind11::array& im_data, vision::MOTResult& result,float score_threshold, vision::tracking::TrailRecorder record) {
-            auto im = PyArrayToCvMat(im_data);
-            auto vis_im = vision::VisMOT(im, result, score_threshold, &record);
-            FDTensor out;
-            vision::Mat(vis_im).ShareWithTensor(&out);
-            return TensorToPyArray(out);
+          [](pybind11::array& im_data, vision::MOTResult& result,
+             float score_threshold, vision::tracking::TrailRecorder record) {
+            auto im = PyArrayToCvMat(im_data);
+            auto vis_im = vision::VisMOT(im, result, score_threshold, &record);
+            FDTensor out;
+            vision::Mat(vis_im).ShareWithTensor(&out);
+            return TensorToPyArray(out);
          })
      .def("vis_matting",
          [](pybind11::array& im_data, vision::MattingResult& result,
@@ -107,8 +114,7 @@ void BindVisualize(pybind11::module& m) {
          [](pybind11::array& im_data, vision::HeadPoseResult& result,
             int size, int line_size) {
            auto im = PyArrayToCvMat(im_data);
-           auto vis_im =
-               vision::VisHeadPose(im, result, size, line_size);
+           auto vis_im = vision::VisHeadPose(im, result, size, line_size);
            FDTensor out;
            vision::Mat(vis_im).ShareWithTensor(&out);
            return TensorToPyArray(out);
@@ -131,8 +137,8 @@ void BindVisualize(pybind11::module& m) {
          [](pybind11::array& im_data, vision::KeyPointDetectionResult& result,
             float conf_threshold) {
            auto im = PyArrayToCvMat(im_data);
-           auto vis_im = vision::VisKeypointDetection(
-               im, result, conf_threshold);
+           auto vis_im =
+               vision::VisKeypointDetection(im, result, conf_threshold);
            FDTensor out;
            vision::Mat(vis_im).ShareWithTensor(&out);
            return TensorToPyArray(out);
@@ -194,15 +200,16 @@ void BindVisualize(pybind11::module& m) {
            vision::Mat(vis_im).ShareWithTensor(&out);
            return TensorToPyArray(out);
          })
-      .def_static("vis_mot",
-                  [](pybind11::array& im_data, vision::MOTResult& result,float score_threshold,
-                     vision::tracking::TrailRecorder* record) {
-                    auto im = PyArrayToCvMat(im_data);
-                    auto vis_im = vision::VisMOT(im, result, score_threshold, record);
-                    FDTensor out;
-                    vision::Mat(vis_im).ShareWithTensor(&out);
-                    return TensorToPyArray(out);
-                  })
+      .def_static(
+          "vis_mot",
+          [](pybind11::array& im_data, vision::MOTResult& result,
+             float score_threshold, vision::tracking::TrailRecorder* record) {
+            auto im = PyArrayToCvMat(im_data);
+            auto vis_im = vision::VisMOT(im, result, score_threshold, record);
+            FDTensor out;
+            vision::Mat(vis_im).ShareWithTensor(&out);
+            return TensorToPyArray(out);
+          })
      .def_static("vis_matting_alpha",
                  [](pybind11::array& im_data, vision::MattingResult& result,
                     bool remove_small_connected_area) {

python/fastdeploy/vision/visualize/__init__.py

Lines changed: 2 additions & 1 deletion
@@ -20,10 +20,11 @@
 
 def vis_detection(im_data,
                   det_result,
+                  labels=[],
                   score_threshold=0.0,
                   line_size=1,
                   font_size=0.5):
-    return C.vision.vis_detection(im_data, det_result, score_threshold,
+    return C.vision.vis_detection(im_data, det_result, labels, score_threshold,
                                   line_size, font_size)
2930
