# Video object detection demo for IoTeX
This program combines two simpler examples: the video decoder and the [deep learning server](https://github.com/veracruz-project/veracruz-examples/tree/main/deep-learning-server).
The video decoder uses [`Mbed TLS`](https://github.com/veracruz-project/mbedtls/) to decrypt an H264 video, then uses [`openh264`](https://github.com/veracruz-project/openh264) to decode it into individual frames. The frames are converted to RGB and fed to an object detector built on top of the [Darknet neural network framework](https://github.com/veracruz-project/darknet). The output is a list of detected objects, each with its detection probability, and an optional prediction image drawing a bounding box around each detected object.
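The decoder emits frames in a YUV color space, so the RGB conversion step can be illustrated with a minimal per-pixel sketch. This is an assumption for illustration only (full-range BT.601 coefficients); the example's actual conversion code may use different coefficients or operate on whole planes:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel to RGB (illustrative sketch)."""
    d, e = u - 128, v - 128  # center the chroma components
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# Neutral chroma (u = v = 128) yields a grey pixel with r = g = b = y
print(yuv_to_rgb(128, 128, 128))  # (128, 128, 128)
```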
## Build
* Install [`wasi sdk 14`](https://github.com/WebAssembly/wasi-sdk) and set `WASI_SDK_ROOT` to point to its installation directory
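For example, assuming a hypothetical install location (substitute wherever you actually unpacked the SDK):

```shell
# Hypothetical path; point this at your own wasi-sdk-14 directory
export WASI_SDK_ROOT="$HOME/wasi-sdk-14.0"
echo "$WASI_SDK_ROOT"
```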
## Encrypt the video and export the keying material
* Go to `aes-ctr-enc-dec/` and run the encryption program (it is built automatically):
```
cargo run <path to H264 video> <path to encrypted video> <key path> <iv path> -e
```
* The video is encrypted with the freshly generated keying material (key, IV)
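The effect of the helper can be sketched with `openssl` as a stand-in. This is an assumption for illustration (AES-128-CTR, as the `aes-ctr-enc-dec/` directory name suggests); the actual program's cipher parameters and key/IV file formats may differ:

```shell
# Generate keying material: a 16-byte key and a 16-byte IV, hex-encoded
key=$(openssl rand -hex 16)
iv=$(openssl rand -hex 16)
printf 'fake video bytes' > in.h264          # placeholder standing in for a real H264 file
# Encrypt with AES-128-CTR
openssl enc -aes-128-ctr -K "$key" -iv "$iv" -in in.h264 -out in_enc.h264
# Decrypting with the same key and IV restores the original bytes
openssl enc -d -aes-128-ctr -K "$key" -iv "$iv" -in in_enc.h264 -out out.h264
cmp in.h264 out.h264 && echo "round-trip OK"
```

CTR mode is symmetric, so the same keying material the client exports here is what the enclave later uses for decryption.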
## File tree
* The program expects the following file tree:
```
...
+---- *.png
+-- yolov3.cfg (configuration)
+-- yolov3.weights (model)
+ program_internal/ (directory used internally by the program)
+ s3_app_input/
+-- in_enc.h264 (encrypted H264 video)
+ user_input/ (keying material to decrypt the video)
+-- iv
+-- key
```
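Setting the tree up by hand can be sketched as follows (only the directories and files visible above are scaffolded; the elided top of the tree is not recreated, and the placeholder files stand in for real outputs of the encryption step):

```shell
# Scaffold the input layout the program expects (paths taken from the tree above)
mkdir -p program_internal s3_app_input user_input
# Placeholders; in a real run these come from the encryption step
touch s3_app_input/in_enc.h264 user_input/iv user_input/key
ls user_input
```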
## Execution outside Veracruz
There are several ways to do that. In any case the [file tree](#file-tree) must ...
### As a WebAssembly binary in the [`freestanding execution engine`](https://github.com/veracruz-project/veracruz/tree/main/sdk/freestanding-execution-engine)
* Depending on your environment, run `./deploy_vod_big_linux.sh` or `./deploy_vod_big_nitro.sh` to generate the policy, deploy the Veracruz components and run the computation
* The prediction images can be found in the executing directory