Milestones

  • See how well a standard neural network (e.g. AlexNet) does when trained on a large-scale categorization task in the 3D world system. Having first gotten models into the system (see issue #22), we'll want to run a categorization benchmark. This involves:

    1) Produce a dataset by having an avatar act in an environment. This will take ~5 hrs to produce 1M images (see for example https://github.com/dicarlolab/curiosity/blob/master/curiosity/datasources/datasource_actions_motion.py). To get the right sequences: (a) the agent has to explore the environment quickly enough that you get many different views, or else you don't use every frame; (b) the reset function with new objects should be called fairly often. This step will produce an HDF5 file containing the images.

    2) Produce a randomized-order HDF5 from the output of step (1). You should probably create this as a group within the same HDF5 file object as in (1).

    3) Use https://github.com/dicarlolab/curiosity/blob/master/curiosity/datasources/hdf5_handler.py to serve the randomized dataset. See for example https://github.com/dicarlolab/curiosity/blob/master/curiosity/datasources/images_and_counts.py and https://github.com/dicarlolab/curiosity/blob/master/curiosity/datasources/images_futures_and_actions.py. In fact, look at https://github.com/dicarlolab/curiosity/blob/master/scripts/counttest.py, which basically does the object softmax detection test.
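A minimal sketch of the HDF5 pipeline in steps (1)-(3), using plain h5py rather than the repo's hdf5_handler; all names (`dataset.h5`, the `randomized` group, `batches`) and the tiny array sizes are illustrative assumptions, with random arrays standing in for frames rendered by the avatar:

```python
import h5py
import numpy as np

# Small stand-ins for the ~1M-image dataset described above.
N, H, W, C = 1000, 64, 64, 3

# Step 1: write sequentially collected frames (and labels) to an HDF5 file.
with h5py.File("dataset.h5", "w") as f:
    images = f.create_dataset("images", (N, H, W, C), dtype="uint8")
    labels = f.create_dataset("labels", (N,), dtype="int64")
    for i in range(N):
        # stand-in for a frame produced by the agent exploring the environment
        images[i] = np.random.randint(0, 256, (H, W, C), dtype=np.uint8)
        labels[i] = i % 10

# Step 2: write a randomized-order copy as a group within the same file.
with h5py.File("dataset.h5", "a") as f:
    perm = np.random.permutation(N)
    g = f.create_group("randomized")
    g.create_dataset("images", data=f["images"][:][perm])
    g.create_dataset("labels", data=f["labels"][:][perm])

# Step 3: serve fixed-size batches from the randomized group.
def batches(path, batch_size=128):
    with h5py.File(path, "r") as f:
        imgs, labs = f["randomized/images"], f["randomized/labels"]
        for start in range(0, len(imgs), batch_size):
            yield imgs[start:start + batch_size], labs[start:start + batch_size]
```

For a real 1M-image run you would not materialize `f["images"][:]` in memory at once; writing the permuted copy in chunks (or storing only the permutation indices) would be the scalable variant.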

    Due by October 12, 2016
    3/11 issues closed