This repository was archived by the owner on Dec 5, 2024. It is now read-only.
I've read the paper "Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs" and several of your other papers, and really enjoyed them.
I'm currently planning to replicate your results in https://github.com/ARISE-Initiative/robosuite to really understand what is going on: implementing the VCC in 2D in Python 3.7.4 and training it on 2D input and output examples so that general concepts can be extracted.
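To make the 2D plan concrete, here is a minimal sketch of what one 2D input/output training pair might look like. The concept name ("move object to the top row"), the grid size, and the `make_example` helper are all my own illustrative assumptions, not anything from the VCC codebase:

```python
def make_example(grid=16, r=5, c=5):
    """Build one hypothetical 2D input/output pair for a
    'move object to the top row' concept (names are illustrative,
    not taken from the VCC code)."""
    # Input scene: a 2x2 square at (r, c) on an otherwise empty grid.
    inp = [[0] * grid for _ in range(grid)]
    for dr in range(2):
        for dc in range(2):
            inp[r + dr][c + dc] = 1
    # Output scene: the same square moved to the top row (row 0).
    out = [[0] * grid for _ in range(grid)]
    for dr in range(2):
        for dc in range(2):
            out[dr][c + dc] = 1
    return inp, out

inp, out = make_example()
print(sum(map(sum, inp)), sum(map(sum, out)))  # → 4 4 (same object, new place)
```

The point of such pairs is that only the transformation is shared across examples, so the extracted concept has to be the program, not the pixels.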
Then I plan to train it on 3D input and output examples to see whether the VCC, given 3D primitives, can run the concept of assembling 12 3D bricks into a 3D wall.
I'm confused as to how to train the VCC in 2D. The training examples file training_examples.pkl seems to be a minimalistic representation of a series of input and output images, generated using primitive_shapes.py, with an unknown number of examples per concept.
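One way I could answer the "what's in the file" question myself is to unpickle it and walk the top-level structure. The sketch below builds a toy pickle first so it is self-contained; the dict-of-concept-name-to-pairs layout is purely my guess at what training_examples.pkl might contain, so the same `pickle.load` inspection would need adapting to the real structure:

```python
import os
import pickle
import tempfile

# Hypothetical structure: a dict mapping each concept name to a list of
# (input_scene, output_scene) pairs. The real training_examples.pkl may
# be organized differently; the inspection loop below is the reusable part.
toy = {
    "move_to_top": [([[0, 1], [0, 0]], [[1, 0], [0, 0]])],
}
path = os.path.join(tempfile.gettempdir(), "toy_examples.pkl")
with open(path, "wb") as f:
    pickle.dump(toy, f)

with open(path, "rb") as f:
    data = pickle.load(f)

print(type(data).__name__)           # peek at the top-level container type
for concept, pairs in data.items():  # then at per-concept example counts
    print(concept, "->", len(pairs), "example pair(s)")
```

Running this against the real file (by pointing `path` at training_examples.pkl) would show whether the examples are grouped per concept and how many there are for each.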
I'm also confused about how exactly the VCC *extracts* the concept representation from these training examples.
For example: are the input and output training examples all viewed by the vision hierarchy (VH) before the VCC is given novel input_scenes, and do specific nodes at the top of the vision hierarchy of the neural-recursive cortical network (neural-RCN) represent these concepts, such as: