
Commit ad5c9c0

Authored by ziqi-jin, jiangjiajun, root, DefTruth, and felixhjh
[Model] Modify SR (#674)
Squashed commit history:

* first commit for yolov7
* pybind for yolov7
* CPP README.md
* modified yolov7.cc
* README.md
* python file modify
* delete license in fastdeploy/
* repush the conflict part
* README.md modified
* file path modified
* README modified
* move some helpers to private
* add examples for yolov7
* api.md modified
* YOLOv7
* yolov7 release link
* copyright
* change some helpers to private
* change variables to const and fix documents.
* gitignore
* Transfer some funtions to private member of class
* Merge from develop (#9), Develop (#11) (#12) (#13) (#14), Yolor (#16), each bringing in the same set of develop-branch changes:
  * Fix compile problem in different python version (#26)
  * fix some usage problem in linux
  * Fix compile problem
  * Add PaddleDetetion/PPYOLOE model support (#22): add ppdet/ppyoloe, add demo code and documents
  * add convert processor to vision (#27)
  * update .gitignore
  * Added checking for cmake include dir
  * fixed missing trt_backend option bug when init from trt
  * remove un-need data layout and add pre-check for dtype
  * changed RGB2BRG to BGR2RGB in ppcls model
  * add model_zoo yolov6 c++/python demo
  * fixed CMakeLists.txt typos
  * update yolov6 cpp/README.md
  * add yolox c++/pybind and model_zoo demo
  * move some helpers to private
  * add normalize with alpha and beta
  * add version notes for yolov5/yolov6/yolox
  * add copyright to yolov5.cc
  * revert normalize
  * fixed some bugs in yolox
  * fixed examples/CMakeLists.txt to avoid conflicts
  * format examples/CMakeLists summary
  * Fix bug while the inference result is empty with YOLOv5 (#29)
  * Add multi-label function for yolov5
  * Update README.md (doc update)
  * Update fastdeploy_runtime.cc: fix wrong variable name option.trt_max_shape
  * Update runtime_option.md: rename resnet model dynamic shape setting from images to x
  * Fix bug when inference result boxes are empty
  * Delete detection.py
* first commit for yolor
* for merge
* documents
* add is_dynamic for YOLO series (#22)
* modify ppmatting backend and docs
* modify ppmatting docs
* fix the PPMatting size problem
* fix LimitShort's log
* retrigger ci
* modify PPMatting docs
* modify the way for dealing with LimitShort
* add python comments for external models
* modify resnet c++ comments
* modify C++ comments for external models
* modify python comments and add result class comments
* fix comments compile error
* modify result.h comments
* modify examples doc and code for SR models
* code style
* python file code style
* fix examples links

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
1 parent 86f05e9 commit ad5c9c0

File tree

24 files changed: +417 −404 lines

examples/vision/sr/basicvsr/README.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@
 
 | Model | Parameter Size | Accuracy | Notes |
 |:----------------------------------------------------------------------------|:-------|:----- | :------ |
-| [BasicVSR](https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tgz) | 30.1MB | - |
+| [BasicVSR](https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar) | 30.1MB | - |
 
 **Note**: Running this model on a device without a discrete GPU is strongly discouraged.
 
examples/vision/sr/basicvsr/cpp/infer.cc

Lines changed: 80 additions & 83 deletions
@@ -20,8 +20,8 @@ const char sep = '\\';
 const char sep = '/';
 #endif
 
-void CpuInfer(const std::string& model_dir,
-              const std::string& video_file, int frame_num) {
+void CpuInfer(const std::string& model_dir, const std::string& video_file,
+              int frame_num) {
   auto model_file = model_dir + sep + "model.pdmodel";
   auto params_file = model_dir + sep + "model.pdiparams";
   auto model = fastdeploy::vision::sr::BasicVSR(model_file, params_file);
@@ -32,167 +32,165 @@ void CpuInfer(const std::string& model_dir,
   }
   // note: input/output shape is [b, n, c, h, w] (n = frame_nums; b=1(default))
   // b and n is dependent on export model shape
-  // see https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md
+  // see
+  // https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md
   cv::VideoCapture capture;
   // change your save video path
   std::string video_out_name = "output.mp4";
   capture.open(video_file);
-  if (!capture.isOpened())
-  {
-    std::cout<<"can not open video "<<std::endl;
+  if (!capture.isOpened()) {
+    std::cout << "can not open video " << std::endl;
     return;
   }
   // Get Video info :fps, frame count
   // it used 4.x version of opencv below
   // notice your opencv version and method of api.
   int video_fps = static_cast<int>(capture.get(cv::CAP_PROP_FPS));
-  int video_frame_count = static_cast<int>(capture.get(cv::CAP_PROP_FRAME_COUNT));
+  int video_frame_count =
+      static_cast<int>(capture.get(cv::CAP_PROP_FRAME_COUNT));
   // Set fixed size for output frame, only for msvsr model
   int out_width = 1280;
   int out_height = 720;
-  std::cout << "fps: " << video_fps << "\tframe_count: " << video_frame_count << std::endl;
+  std::cout << "fps: " << video_fps << "\tframe_count: " << video_frame_count
+            << std::endl;
 
   // Create VideoWriter for output
   cv::VideoWriter video_out;
   std::string video_out_path("./");
   video_out_path += video_out_name;
   int fcc = cv::VideoWriter::fourcc('m', 'p', '4', 'v');
-  video_out.open(video_out_path, fcc, video_fps, cv::Size(out_width, out_height), true);
-  if (!video_out.isOpened())
-  {
+  video_out.open(video_out_path, fcc, video_fps,
+                 cv::Size(out_width, out_height), true);
+  if (!video_out.isOpened()) {
     std::cout << "create video writer failed!" << std::endl;
     return;
   }
   // Capture all frames and do inference
   cv::Mat frame;
   int frame_id = 0;
   bool reach_end = false;
-  while (capture.isOpened())
-  {
+  while (capture.isOpened()) {
     std::vector<cv::Mat> imgs;
-    for (int i = 0; i < frame_num; i++)
-    {
+    for (int i = 0; i < frame_num; i++) {
       capture.read(frame);
-      if (!frame.empty())
-      {
+      if (!frame.empty()) {
         imgs.push_back(frame);
-      }else{
+      } else {
        reach_end = true;
       }
     }
-    if (reach_end)
-    {
+    if (reach_end) {
      break;
     }
     std::vector<cv::Mat> results;
     model.Predict(imgs, results);
-    for (auto &item : results)
-    {
+    for (auto& item : results) {
      // cv::imshow("13",item);
      // cv::waitKey(30);
      video_out.write(item);
-      std::cout << "Processing frame: "<< frame_id << std::endl;
+      std::cout << "Processing frame: " << frame_id << std::endl;
      frame_id += 1;
     }
   }
-  std::cout << "inference finished, output video saved at " << video_out_path << std::endl;
+  std::cout << "inference finished, output video saved at " << video_out_path
+            << std::endl;
   capture.release();
   video_out.release();
 }
 
-void GpuInfer(const std::string& model_dir,
-              const std::string& video_file, int frame_num) {
+void GpuInfer(const std::string& model_dir, const std::string& video_file,
+              int frame_num) {
   auto model_file = model_dir + sep + "model.pdmodel";
   auto params_file = model_dir + sep + "model.pdiparams";
 
   auto option = fastdeploy::RuntimeOption();
   option.UseGpu();
-  auto model = fastdeploy::vision::sr::BasicVSR(
-      model_file, params_file, option);
+  auto model =
+      fastdeploy::vision::sr::BasicVSR(model_file, params_file, option);
 
   if (!model.Initialized()) {
     std::cerr << "Failed to initialize." << std::endl;
     return;
   }
   // note: input/output shape is [b, n, c, h, w] (n = frame_nums; b=1(default))
   // b and n is dependent on export model shape
-  // see https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md
+  // see
+  // https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md
   cv::VideoCapture capture;
   // change your save video path
   std::string video_out_name = "output.mp4";
   capture.open(video_file);
-  if (!capture.isOpened())
-  {
-    std::cout<<"can not open video "<<std::endl;
+  if (!capture.isOpened()) {
+    std::cout << "can not open video " << std::endl;
     return;
   }
   // Get Video info :fps, frame count
   int video_fps = static_cast<int>(capture.get(cv::CAP_PROP_FPS));
-  int video_frame_count = static_cast<int>(capture.get(cv::CAP_PROP_FRAME_COUNT));
+  int video_frame_count =
+      static_cast<int>(capture.get(cv::CAP_PROP_FRAME_COUNT));
   // Set fixed size for output frame, only for msvsr model
   int out_width = 1280;
   int out_height = 720;
-  std::cout << "fps: " << video_fps << "\tframe_count: " << video_frame_count << std::endl;
+  std::cout << "fps: " << video_fps << "\tframe_count: " << video_frame_count
+            << std::endl;
 
   // Create VideoWriter for output
   cv::VideoWriter video_out;
   std::string video_out_path("./");
   video_out_path += video_out_name;
   int fcc = cv::VideoWriter::fourcc('m', 'p', '4', 'v');
-  video_out.open(video_out_path, fcc, video_fps, cv::Size(out_width, out_height), true);
-  if (!video_out.isOpened())
-  {
+  video_out.open(video_out_path, fcc, video_fps,
+                 cv::Size(out_width, out_height), true);
+  if (!video_out.isOpened()) {
    std::cout << "create video writer failed!" << std::endl;
    return;
   }
   // Capture all frames and do inference
   cv::Mat frame;
   int frame_id = 0;
   bool reach_end = false;
-  while (capture.isOpened())
-  {
+  while (capture.isOpened()) {
     std::vector<cv::Mat> imgs;
-    for (int i = 0; i < frame_num; i++)
-    {
+    for (int i = 0; i < frame_num; i++) {
       capture.read(frame);
-      if (!frame.empty())
-      {
+      if (!frame.empty()) {
         imgs.push_back(frame);
-      }else{
+      } else {
        reach_end = true;
       }
     }
-    if (reach_end)
-    {
+    if (reach_end) {
      break;
     }
     std::vector<cv::Mat> results;
     model.Predict(imgs, results);
-    for (auto &item : results)
-    {
+    for (auto& item : results) {
      // cv::imshow("13",item);
      // cv::waitKey(30);
      video_out.write(item);
-      std::cout << "Processing frame: "<< frame_id << std::endl;
+      std::cout << "Processing frame: " << frame_id << std::endl;
      frame_id += 1;
     }
   }
-  std::cout << "inference finished, output video saved at " << video_out_path << std::endl;
+  std::cout << "inference finished, output video saved at " << video_out_path
+            << std::endl;
   capture.release();
   video_out.release();
 }
 
-void TrtInfer(const std::string& model_dir,
-              const std::string& video_file, int frame_num) {
+void TrtInfer(const std::string& model_dir, const std::string& video_file,
+              int frame_num) {
   auto model_file = model_dir + sep + "model.pdmodel";
   auto params_file = model_dir + sep + "model.pdiparams";
   auto option = fastdeploy::RuntimeOption();
   option.UseGpu();
-  option.UseTrtBackend();
   // use paddle-TRT
+  option.UseTrtBackend();
+  option.EnablePaddleTrtCollectShape();
+  option.SetTrtInputShape("lrs", {1, 2, 3, 180, 320});
   option.EnablePaddleToTrt();
-  auto model = fastdeploy::vision::sr::BasicVSR(
-      model_file, params_file, option);
+  auto model =
+      fastdeploy::vision::sr::BasicVSR(model_file, params_file, option);
 
   if (!model.Initialized()) {
     std::cerr << "Failed to initialize." << std::endl;
@@ -201,81 +199,80 @@ void TrtInfer(const std::string& model_dir,
 
   // note: input/output shape is [b, n, c, h, w] (n = frame_nums; b=1(default))
   // b and n is dependent on export model shape
-  // see https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md
+  // see
+  // https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/video_super_resolution.md
   cv::VideoCapture capture;
   // change your save video path
   std::string video_out_name = "output.mp4";
   capture.open(video_file);
-  if (!capture.isOpened())
-  {
-    std::cout<<"can not open video "<<std::endl;
+  if (!capture.isOpened()) {
+    std::cout << "can not open video " << std::endl;
     return;
   }
   // Get Video info :fps, frame count
   int video_fps = static_cast<int>(capture.get(cv::CAP_PROP_FPS));
-  int video_frame_count = static_cast<int>(capture.get(cv::CAP_PROP_FRAME_COUNT));
+  int video_frame_count =
+      static_cast<int>(capture.get(cv::CAP_PROP_FRAME_COUNT));
   // Set fixed size for output frame, only for msvsr model
-  //Note that the resolution between the size and the original input is consistent when the model is exported,
+  // Note that the resolution between the size and the original input is
+  // consistent when the model is exported,
   // for example: [1,2,3,180,320], after 4x super separation [1,2,3,720,1080].
-  //Therefore, it is very important to derive the model
+  // Therefore, it is very important to derive the model
   int out_width = 1280;
   int out_height = 720;
-  std::cout << "fps: " << video_fps << "\tframe_count: " << video_frame_count << std::endl;
+  std::cout << "fps: " << video_fps << "\tframe_count: " << video_frame_count
+            << std::endl;
 
   // Create VideoWriter for output
   cv::VideoWriter video_out;
   std::string video_out_path("./");
   video_out_path += video_out_name;
   int fcc = cv::VideoWriter::fourcc('m', 'p', '4', 'v');
-  video_out.open(video_out_path, fcc, video_fps, cv::Size(out_width, out_height), true);
-  if (!video_out.isOpened())
-  {
+  video_out.open(video_out_path, fcc, video_fps,
+                 cv::Size(out_width, out_height), true);
+  if (!video_out.isOpened()) {
    std::cout << "create video writer failed!" << std::endl;
    return;
   }
   // Capture all frames and do inference
   cv::Mat frame;
   int frame_id = 0;
   bool reach_end = false;
-  while (capture.isOpened())
-  {
+  while (capture.isOpened()) {
     std::vector<cv::Mat> imgs;
-    for (int i = 0; i < frame_num; i++)
-    {
+    for (int i = 0; i < frame_num; i++) {
       capture.read(frame);
-      if (!frame.empty())
-      {
+      if (!frame.empty()) {
         imgs.push_back(frame);
-      }else{
+      } else {
        reach_end = true;
       }
     }
-    if (reach_end)
-    {
+    if (reach_end) {
      break;
     }
     std::vector<cv::Mat> results;
     model.Predict(imgs, results);
-    for (auto &item : results)
-    {
+    for (auto& item : results) {
      // cv::imshow("13",item);
      // cv::waitKey(30);
      video_out.write(item);
-      std::cout << "Processing frame: "<< frame_id << std::endl;
+      std::cout << "Processing frame: " << frame_id << std::endl;
      frame_id += 1;
     }
   }
-  std::cout << "inference finished, output video saved at " << video_out_path << std::endl;
+  std::cout << "inference finished, output video saved at " << video_out_path
+            << std::endl;
   capture.release();
   video_out.release();
 }
 
 int main(int argc, char* argv[]) {
   if (argc < 4) {
-    std::cout
-        << "Usage: infer_demo path/to/model_dir path/to/video frame number run_option, "
-           "e.g ./infer_model ./vsr_model_dir ./person.mp4 0 2"
-        << std::endl;
+    std::cout << "Usage: infer_demo path/to/model_dir path/to/video frame "
+                 "number run_option, "
+                 "e.g ./infer_model ./vsr_model_dir ./vsr_src.mp4 0 2"
+              << std::endl;
     std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                  "with gpu; 2: run with gpu and use tensorrt backend."
               << std::endl;
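
The loop that the reformatting above leaves untouched batches `frame_num` frames per `Predict` call, because the exported BasicVSR input is [b, n, c, h, w] with n equal to the frame count, and a trailing partial batch is dropped when `reach_end` becomes true. A minimal Python sketch of the same batching pattern, for reference only; the generator name and the commented predict call are illustrative assumptions, not part of this commit:

```python
import cv2


def batched_frames(video_file, frame_num):
    """Yield lists of frame_num consecutive frames, mirroring the loop in infer.cc.

    A trailing batch with fewer than frame_num frames is dropped, just as the
    C++ demo breaks out of its while loop when reach_end becomes true.
    """
    capture = cv2.VideoCapture(video_file)
    if not capture.isOpened():
        raise RuntimeError("can not open video " + video_file)
    try:
        while True:
            imgs = []
            for _ in range(frame_num):
                ok, frame = capture.read()
                if not ok:
                    return  # end of video reached; discard the partial batch
                imgs.append(frame)
            yield imgs  # e.g. results = model.predict(imgs) for a hypothetical SR model
    finally:
        capture.release()
```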

examples/vision/sr/basicvsr/python/README.md

Lines changed: 3 additions & 3 deletions
@@ -17,11 +17,11 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/BasicVSR_reds_x4.tar
 tar -xvf BasicVSR_reds_x4.tar
 wget https://bj.bcebos.com/paddlehub/fastdeploy/vsr_src.mp4
 # CPU inference
-python infer.py --model BasicVSR_reds_x4 --video person.mp4 --frame_num 2 --device cpu
+python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device cpu
 # GPU inference
-python infer.py --model BasicVSR_reds_x4 --video person.mp4 --frame_num 2 --device gpu
+python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu
 # TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
-python infer.py --model BasicVSR_reds_x4 --video person.mp4 --frame_num 2 --device gpu --use_trt True
+python infer.py --model BasicVSR_reds_x4 --video vsr_src.mp4 --frame_num 2 --device gpu --use_trt True
 ```
 
 ## BasicVSR Python API

examples/vision/sr/basicvsr/python/infer.py

Lines changed: 3 additions & 1 deletion
@@ -30,6 +30,8 @@ def build_option(args):
         option.use_gpu()
     if args.use_trt:
         option.use_trt_backend()
+        option.enable_paddle_trt_collect_shape()
+        option.set_trt_input_shape("lrs", [1, 2, 3, 180, 320])
         option.enable_paddle_to_trt()
     return option
 
@@ -56,7 +58,7 @@ def build_option(args):
 # Create VideoWriter for output
 video_out_dir = "./"
 video_out_path = os.path.join(video_out_dir, video_out_name)
-fucc = cv2.VideoWriter_fourcc(*"mp4v")
+fucc = cv2.VideoWriter_fourcc(* "mp4v")
 video_out = cv2.VideoWriter(video_out_path, fucc, video_fps,
                             (out_width, out_height), True)
 if not video_out.isOpened():
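
The two added option calls mirror the C++ change above: before building the TensorRT engine through Paddle Inference, the demo now collects dynamic shapes and pins the video input "lrs" to [1, 2, 3, 180, 320]. A minimal sketch of the resulting option and model setup, assuming FastDeploy is importable as `fastdeploy` and that the Python BasicVSR constructor accepts a `runtime_option` keyword (both are assumptions; only the option calls themselves appear in this diff):

```python
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_gpu()
option.use_trt_backend()
# Collect dynamic shapes and fix the sequence input "lrs" to
# [batch=1, frames=2, channels=3, height=180, width=320], matching the diff above.
option.enable_paddle_trt_collect_shape()
option.set_trt_input_shape("lrs", [1, 2, 3, 180, 320])
option.enable_paddle_to_trt()

# Assumed constructor; the class mirrors fastdeploy::vision::sr::BasicVSR in C++.
model = fd.vision.sr.BasicVSR("BasicVSR_reds_x4/model.pdmodel",
                              "BasicVSR_reds_x4/model.pdiparams",
                              runtime_option=option)
```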

examples/vision/sr/edvr/README.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@
 
 | Model | Parameter Size | Accuracy | Notes |
 |:--------------------------------------------------------------------------------|:-------|:----- | :------ |
-| [EDVR](https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tgz) | 14.9MB | - |
+| [EDVR](https://bj.bcebos.com/paddlehub/fastdeploy/EDVR_M_wo_tsa_SRx4.tar) | 14.9MB | - |
 
 **Note**: Running this model on a device without a discrete GPU is strongly discouraged.
 