-
Never mind, I realized I wasn't rewinding the file descriptor for the output correctly.
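For context, the rewind in question is the usual seek-back-to-zero before reading an output buffer back through a file descriptor; without it, reads start at the current offset (typically EOF after a write) and return stale or empty data. A generic sketch, not the actual app code:

```python
import os
import tempfile

# Hypothetical output buffer backed by a file descriptor.
fd, path = tempfile.mkstemp()
os.write(fd, b"results")          # inference output has been written; offset is now at EOF

os.lseek(fd, 0, os.SEEK_SET)      # rewind to the start before reading back
data = os.read(fd, 7)             # without the lseek, this would read 0 bytes

os.close(fd)
os.remove(path)
```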
-
Although I indeed wasn't rewinding the output file descriptors correctly, that wasn't the root cause of the difference. I can't share this particular model, but the results from Python are:

[[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]
[184 21 14 11 14 10 13 14 13 16 26 13 11 13 12 16 14 14 12 13 13 13 15 15 12 17 25 11]]
[2025-02-14T14:51:46Z INFO simple_file] Results
0: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
1: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
2: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
3: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
4: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
5: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
6: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
7: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
8: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]
9: 0(184) [184, 21, 14, 11, 14, 10, 13, 14, 13, 16, 26, 13, 11, 13, 12, 16, 14, 14, 12, 13, 13, 13, 15, 15, 12, 17, 25, 11]

and

[2025-02-14T14:52:35Z INFO simple_file] Results
0: 8(30) [25, 26, 25, 11, 20, 15, 14, 14, 30, 18, 20, 20, 15, 13, 16, 13, 15, 15, 11, 18, 16, 14, 22, 26, 27, 15, 14, 17]
1: 8(84) [25, 58, 12, 8, 21, 10, 13, 14, 84, 10, 14, 18, 11, 13, 13, 17, 15, 14, 19, 11, 13, 13, 15, 15, 25, 14, 14, 12]
2: 1(101) [30, 101, 21, 12, 16, 9, 13, 14, 41, 11, 15, 29, 13, 13, 13, 28, 16, 15, 11, 15, 14, 14, 18, 16, 8, 14, 14, 16]
3: 0(78) [78, 16, 13, 9, 16, 48, 13, 14, 14, 13, 28, 16, 11, 13, 14, 12, 15, 68, 12, 12, 14, 14, 22, 14, 9, 14, 13, 9]
4: 12(96) [28, 10, 25, 27, 10, 38, 13, 14, 19, 7, 20, 23, 96, 13, 9, 45, 14, 13, 21, 18, 14, 12, 7, 21, 6, 13, 14, 16]
5: 12(128) [57, 3, 15, 34, 5, 26, 11, 12, 14, 4, 11, 22, 128, 12, 21, 11, 11, 60, 32, 15, 12, 11, 8, 13, 8, 37, 5, 6]
6: 0(128) [128, 53, 14, 9, 15, 20, 14, 14, 13, 8, 18, 13, 25, 13, 11, 25, 15, 14, 10, 13, 14, 14, 21, 15, 11, 14, 14, 14]
7: 1(101) [80, 101, 8, 21, 8, 10, 11, 13, 12, 60, 12, 3, 41, 11, 14, 15, 12, 17, 11, 8, 14, 15, 9, 11, 27, 12, 16, 8]
8: 24(80) [16, 13, 25, 18, 13, 14, 13, 14, 21, 28, 21, 29, 16, 13, 18, 7, 12, 14, 14, 19, 16, 13, 11, 32, 80, 14, 14, 15]
9: 1(167) [25, 167, 18, 12, 14, 17, 13, 14, 15, 12, 14, 36, 12, 13, 16, 14, 16, 15, 12, 13, 14, 14, 15, 15, 8, 14, 14, 15]
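My reading of each result line above is `row: argmax(max) [raw scores]`. A small reconstruction of that formatting (my interpretation, not the actual logging code):

```python
def format_result(row_idx, scores):
    # Assumed log line shape: "<row>: <argmax>(<max value>) [scores...]"
    best = max(range(len(scores)), key=scores.__getitem__)
    return f"{row_idx}: {best}({scores[best]}) {scores}"

# For the first divergent row above, the argmax moves from class 0 to class 8.
print(format_result(0, [25, 26, 25, 11, 20, 15, 14, 14, 30, 18]))
```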
-
The same effects can actually be seen in the example outputs.

Output - ARTPEC-8 with TensorFlow Lite:
vdo_larod[4165]: Person detected: 5.49% - Car detected: 80.00%
vdo_larod[4165]: Ran inference for 17 ms
vdo_larod[4165]: Converted image in 4 ms
vdo_larod[4165]: Person detected: 4.31% - Car detected: 88.63%

and e.g. Output - Google TPU:

vdo_larod[31476]: Start fetching video frames from VDO
vdo_larod[31476]: Converted image in 14 ms
vdo_larod[31476]: Person detected: 62.35% - Car detected: 11.37%
vdo_larod[31476]: Ran inference for 19 ms
vdo_larod[31476]: Converted image in 7 ms
vdo_larod[31476]: Person detected: 62.35% - Car detected: 10.59%
vdo_larod[31476]: Ran inference for 6 ms

It seems they didn't test it on
-
When training and converting a TensorFlow model to TensorFlow Lite, I can define the Input layer with an explicit batch size. When loading the model on axis-a8-dlpu-tflite, the input dims are correctly [10, 224, 224, 3], input pitches [1505280, 150528, 672, 3], output dims [10, 28], and output pitches [280, 28]. However, the inference results are wildly inaccurate, almost random outputs. Using cpu-tflite does not produce the almost-random outputs; it seems specific to running on the ARTPEC DLPU.

Passing a batch_size of None at the Input, which aligns with all the examples and Model Zoo models, seems to behave OK. However, there doesn't appear to be any way to point larod at a file descriptor and have it operate on a batch greater than 1.
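As a sanity check, the reported pitches are exactly what a densely packed tensor of those dims would have (each pitch is the number of elements spanned by one step along that axis). A small sketch of that arithmetic (my own check, not a larod API call):

```python
def packed_pitches(dims):
    """Pitches of a densely packed tensor: cumulative products of the
    dims from the innermost axis outward."""
    pitches = []
    stride = 1
    for d in reversed(dims):
        stride *= d
        pitches.insert(0, stride)
    return pitches

print(packed_pitches([10, 224, 224, 3]))  # [1505280, 150528, 672, 3]
print(packed_pitches([10, 28]))           # [280, 28]
```

So the buffer layout itself looks consistent; the bad outputs on the DLPU are not explained by a pitch mismatch.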
Can we run inference on batches?