Description
Hello!
I'm using a Jetson AGX Orin with DeepStream 7.1.
I compiled the latest version of libnvdsinfer_custom_impl_Yolo.so and copied it to the DeepStream lib folder.
I created an ONNX file from a .pt file (YOLOv8) with the latest version of the export_yoloV8.py Python script:
python3 export_yoloV8.py -w bbox.pt --size 1280 1280 --dynamic --opset 17
After that, I inserted this ONNX file into a simple GStreamer pipeline containing nvinfer. The nvinfer configuration file matches the configuration from your repository.
On the first launch of the pipeline, an engine file was created and the output video contained correct detections.
I'm visualizing detections with the nvdsosd plugin.
Then I launched exactly the same pipeline, but specified the generated engine file in the nvinfer plugin config (the model-engine-file option).
But re-launching the pipeline with the generated engine file produces an output video containing millions of incorrect detections, and the processing time increases significantly.
What could be the reason for this?
Here is an example of the GStreamer pipeline:
video_source=....
config_dir=....
panorama_dir=....
/usr/bin/gst-launch-1.0 -e \
filesrc location=$video_source/panorama.mp4 ! \
qtdemux ! \
h265parse ! \
nvv4l2decoder num-extra-surfaces=14 name=camera-left ! \
nvvideoconvert compute-hw=1 nvbuf-memory-type=2 ! \
'video/x-raw(memory:NVMM), format=RGBA, width=7232, height=2504' ! \
streammux.sink_0 \
nvstreammux name=streammux compute-hw=1 nvbuf-memory-type=2 batch-size=1 width=7232 height=2504 enable-padding=false attach-sys-ts=0 buffer-pool-size=12 async-process=true sync-inputs=0 live-source=0 batched-push-timeout=40000 ! \
nvdspreprocess name=preprocessor_bbox unique-id=40 config-file=$config_dir/inference/preprocess.txt ! \
nvinfer name=infer_bbox gpu-id=0 unique-id=1 input-tensor-meta=1 config-file-path=$config_dir/inference/infer.txt ! \
nvdsosd ! \
nvvideoconvert compute-hw=1 nvbuf-memory-type=0 ! \
'video/x-raw(memory:NVMM), format=I420' ! \
queue name=queue_panorama ! \
nvv4l2h265enc bitrate=20000000 ! \
h265parse ! \
hlssink2 async-handling=true location=$panorama_dir/pano%05d.mp4 target-duration=4 max-files=15000 playlist-length=15000 playlist-location=$panorama_dir/playlist.m3u8
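Since nvinfer runs with input-tensor-meta=1, it consumes the tensors prepared by nvdspreprocess, so a few values must agree across the two config files: nvinfer's batch-size against the leading dimension of network-input-shape, and net-scale-factor against pixel-normalization-factor. A minimal sanity-check sketch with the values copied from the configs below (the check itself is mine, not a DeepStream tool):

```python
import configparser

# Fragments copied from the nvdspreprocess and nvinfer configs in this issue.
preprocess_cfg = """
[property]
network-input-shape=6;3;1280;1280
tensor-data-type=0
[user-configs]
pixel-normalization-factor=0.0039215697906911373
"""
infer_cfg = """
[property]
net-scale-factor=0.0039215697906911373
batch-size=6
network-mode=0
"""

pre = configparser.ConfigParser()
pre.read_string(preprocess_cfg)
inf = configparser.ConfigParser()
inf.read_string(infer_cfg)

# Leading dimension of network-input-shape is the tensor batch size.
batch = int(pre["property"]["network-input-shape"].split(";")[0])
assert batch == int(inf["property"]["batch-size"]), "batch mismatch"
assert (float(pre["user-configs"]["pixel-normalization-factor"])
        == float(inf["property"]["net-scale-factor"])), "scale-factor mismatch"
print("preprocess and infer configs agree on batch size and scale factor")
```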
Config file for the nvdspreprocess GStreamer plugin:
[property]
enable=1
target-unique-ids=1
network-input-order=0
process-on-frame=1
unique-id=40
gpu-id=0
maintain-aspect-ratio=1
symmetric-padding=1
processing-width=1280
processing-height=1280
scaling-buf-pool-size=18
tensor-buf-pool-size=18
network-input-shape=6;3;1280;1280
network-color-format=0
tensor-data-type=0
tensor-name=input
scaling-pool-memory-type=2
scaling-pool-compute-hw=1
scaling-filter=1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation
[user-configs]
pixel-normalization-factor=0.0039215697906911373
[group-0]
src-ids=0
custom-input-transformation-function=CustomTransformation
process-on-roi=1
roi-params-src-0=8;0;1666;1666;1642;0;1346;1346;2948;0;1346;1346;4254;0;1346;1346;5560;0;1666;1666;692;1226;5666;1196
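For reference, the roi-params-src-0 string above is a flat list of x;y;width;height quadruples, and the number of ROIs it encodes should match the batch dimension of network-input-shape (6 here), since each ROI becomes one tensor in the preprocessed batch. A minimal sketch to cross-check (values copied from the config above; the parsing logic is my own, not DeepStream code):

```python
# roi-params-src-0, copied verbatim from the nvdspreprocess config.
roi_params = ("8;0;1666;1666;1642;0;1346;1346;2948;0;1346;1346;"
              "4254;0;1346;1346;5560;0;1666;1666;692;1226;5666;1196")
network_input_shape = "6;3;1280;1280"

values = [int(v) for v in roi_params.split(";")]
assert len(values) % 4 == 0, "ROI list must be x;y;w;h quadruples"
rois = [tuple(values[i:i + 4]) for i in range(0, len(values), 4)]

batch = int(network_input_shape.split(";")[0])
print(f"{len(rois)} ROIs, batch dimension {batch}")
assert len(rois) == batch, "ROI count must equal the batch dimension"
```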
Config file for the nvinfer GStreamer plugin:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=../../../models/bbox.onnx
model-engine-file=../../../model_b6_gpu0_fp32.engine
labelfile-path=../../labels.txt
batch-size=6
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
classifier-async-mode=0
output-tensor-meta=0
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
#TODO: 3x3
detected-min-w=3
detected-min-h=3
topk=300
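Since both configs set maintain-aspect-ratio=1 and symmetric-padding=1, each ROI is letterboxed into the 1280x1280 network input. A minimal sketch of the letterbox arithmetic I assume these options imply (the helper name is mine; this is an illustration, not the library's implementation):

```python
def letterbox_params(src_w, src_h, dst_w=1280, dst_h=1280):
    """Scale factor and symmetric padding when the aspect ratio is kept."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - new_w) // 2  # symmetric-padding=1: content is centred
    pad_y = (dst_h - new_h) // 2
    return scale, new_w, new_h, pad_x, pad_y

# The last ROI above is 5666x1196, far from square, so it ends up as a
# narrow band with heavy vertical padding in the 1280x1280 input.
scale, w, h, px, py = letterbox_params(5666, 1196)
print(f"scale={scale:.4f}, content={w}x{h}, pad_x={px}, pad_y={py}")
```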