
Automatic Evaluation


Description

DetectionSuite provides both a Qt based graphical user interface and several command line applications, all of which require a config file to run. Some users may prefer the command line tools, which produce results in a single run without the need for the Graphical User Interface.

One such significant tool is the Automatic Evaluator, which can evaluate multiple networks on one or more datasets in a single run.

All you need is a config file containing details about the dataset(s) and network(s).

The results are then written as CSV files in the specified output directory.

To run this tool, simply build this repository, navigate to build/Tools/AutomaticEvaluator, and run ./automaticEvaluator -c config.yml

Here config.yml is your config file; sample configs for creating one are detailed below.
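For reference, a typical build and run might look like this (a sketch assuming a standard CMake build; the exact steps may differ for your checkout):

mkdir build && cd build
cmake .. && make
cd Tools/AutomaticEvaluator
./automaticEvaluator -c config.yml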

Creating Config File

Given below is a sample config file to run the Automatic Evaluator on the COCO dataset with 2 inferencers.

inputPath:  /opt/datasets/coco/annotations/instances_train2014.json

readerImplementation:  COCO

readerNames:  /opt/datasets/names/coco.names

inferencerWeights: [ /opt/datasets/weights/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb,
                     /opt/datasets/weights/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb ]

inferencerConfig:  [ /opt/datasets/cfg/foo.cfg,
                     /opt/datasets/cfg/foo.cfg ]

inferencerImplementation: [ tensorflow, 
                            tensorflow ]

inferencerNames: [ /opt/datasets/names/coco.names,
                   /opt/datasets/names/coco.names ]

outputCSVPath: /opt/datasets/output

As you can see, two networks are used for inferencing, SSD MobileNet and SSD Inception, so inferencerConfig, inferencerImplementation, and inferencerNames also contain 2 entries each, mapping to the inferencer weights in the same order.

That may look lengthy, but there is a shortcut: if a property has the same value for every inferencer, you can skip writing it multiple times.

So the above config file can be substantially reduced to the minimal version below.

Minimised Config File

inputPath: /opt/datasets/coco/annotations/instances_train2014.json

readerImplementation: COCO

readerNames: /opt/datasets/names/coco.names

inferencerWeights: [ /opt/datasets/weights/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb, 
                     /opt/datasets/weights/ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb ]

inferencerConfig: /opt/datasets/cfg/foo.cfg

inferencerImplementation: tensorflow

inferencerNames: /opt/datasets/names/coco.names

outputCSVPath: /opt/datasets/output

To be more precise, the program loops over all the config parameters in step with the weights, and when a parameter list runs out of entries, its last value is reused for the remaining weights. In the case above, inferencerImplementation is exhausted after one iteration, but entries remain in inferencerWeights, so the last value of inferencerImplementation, i.e. tensorflow, is used with the remaining weights. The same concept can be used to minimise the dataset part of the config file, as sketched below.
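For example, assuming the dataset parameters accept lists in the same way (a hypothetical sketch, not taken verbatim from the project), two COCO annotation files sharing one reader could be written as:

inputPath: [ /opt/datasets/coco/annotations/instances_train2014.json,
             /opt/datasets/coco/annotations/instances_val2014.json ]

readerImplementation: COCO

readerNames: /opt/datasets/names/coco.names

Here readerImplementation and readerNames are written once and reused for both annotation files, just as tensorflow is reused for the weights above.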

Using Multiple Frameworks

Below is a sample config file for inferencing with multiple frameworks:

inputPath: /opt/datasets/coco/annotations/instances_train2014.json

readerImplementation: COCO

readerNames: /opt/datasets/names/coco.names

inferencerWeights: [ /opt/datasets/weights/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb,
                     /opt/datasets/weights/VGG_VOC0712_SSD_512x512_iter_120000.h5,
                     /opt/datasets/weights/VGG_VOC0712_SSD_512x512_iter_240000.h5 ]

inferencerConfig: [ /opt/datasets/cfg/foo.cfg,
                    /opt/datasets/cfg/kerasInferencer.cfg ]

inferencerImplementation: [ tensorflow,
                            keras ]

inferencerNames: [ /opt/datasets/names/coco.names,
                   /opt/datasets/names/voc.names ]

outputCSVPath: /opt/datasets/output

NOTE: In the above example, a VOC trained network is evaluated against COCO ground truth. The tool supports such evaluation by mapping Pascal VOC class names to COCO class names, and the mapping is robust enough to handle synonyms and subclasses as well (e.g. VOC's aeroplane matching COCO's airplane).

Also note that there are 3 inferencer weight files but only 2 entries each for inferencerConfig, inferencerImplementation, and inferencerNames. In such a case, as mentioned above, the last value is mapped again: keras is reused for the 3rd weights file, and voc.names is likewise used again by the 3rd weights file.
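Spelling the pairing out, the three weight files in the example above resolve to:

frozen_inference_graph.pb (SSD MobileNet)  ->  tensorflow, foo.cfg, coco.names
VGG_VOC0712_SSD_512x512_iter_120000.h5     ->  keras, kerasInferencer.cfg, voc.names
VGG_VOC0712_SSD_512x512_iter_240000.h5     ->  keras, kerasInferencer.cfg, voc.names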
