
When Pre-trained Visual Representations Fall Short:
Limitations in Visuo-Motor Robot Learning

Nikolaos Tsagkas¹,², Andreas Sochopoulos¹, Duolikun Danier¹, Sethu Vijayakumar¹,², Chris Xiaoxuan Lu³, Oisin Mac Aodha¹

¹University of Edinburgh, ²Edinburgh Centre for Robotics, ³UCL

🌐 Website | 📝 Paper

Demo video: github_afa.mp4

Abstract

The integration of pre-trained visual representations (PVRs) into visuo-motor robot learning has emerged as a promising alternative to training visual encoders from scratch. However, PVRs face critical challenges in the context of policy learning, including temporal entanglement and an inability to generalise even in the presence of minor scene perturbations. These limitations hinder performance in tasks requiring temporal awareness and robustness to scene changes. This work identifies these shortcomings and proposes solutions to address them. First, we augment PVR features with temporal perception and a sense of task completion, effectively disentangling them in time. Second, we introduce a module that learns to selectively attend to task-relevant local features, enhancing robustness when evaluated on out-of-distribution scenes. Our experiments demonstrate significant performance improvements, particularly in PVRs trained with masking objectives, and validate the effectiveness of our enhancements in addressing PVR-specific limitations.

Installation

1. Dependencies

sudo apt update
sudo apt install libosmesa6-dev libgl1-mesa-glx libglfw3

2. Set up Environment

conda env create -f conda_env.yml
conda activate pvrobo

⚠️ Important: change root_dir in src/cfgs/config.yaml to your project root directory.
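For example, the relevant line in src/cfgs/config.yaml would look like this (the path below is illustrative):

root_dir: /home/<user>/pvrobo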

PyTorch

conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.6 -c pytorch -c conda-forge
pip install timm==1.0.3

3. Simulation

Install MuJoCo version 2.1 and mujoco-py

  1. Please follow the instructions in the mujoco-py package.
  2. Make sure that the GPU version of mujoco-py gets built, so that image rendering is fast. An easy way to ensure this is to clone the mujoco-py repository, change the extension-builder line to Builder = LinuxGPUExtensionBuilder, and install from source by running pip install -e . in the mujoco-py root directory (see the sketch after this list). You can also download our changed mujoco-py package and install it from source.
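A minimal sketch of that GPU-build route, assuming the upstream openai/mujoco-py layout (the exact file and line may differ between versions):

git clone https://github.com/openai/mujoco-py.git
cd mujoco-py
# In mujoco_py/builder.py, change the extension-builder fallback to:
#     Builder = LinuxGPUExtensionBuilder
pip install -e .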

MetaWorld

Download the package from here.

pip install -e /path/to/dir/metaworld

Download PVR checkpoints

For ResNet-based PVRs and ViT-based ones that are not available in timm, we load the pre-trained weights from the following checkpoints:

| Model | Architecture | Highlights | Link |
| --- | --- | --- | --- |
| MoCo v2 | ResNet-50 | Contrastive learning, momentum encoder | download |
| SwAV | ResNet-50 | Contrasts online cluster assignments | download |
| DenseCL | ResNet-50 | Dense contrastive learning, learns local features | download |
| VICRegL | ResNet-50 | Learns global and local features | download |
| VFS | ResNet-50 | Encodes temporal dynamics | download |
| R3M | ResNet-50 | Learns visual representations for robotics | download |
| VIP | ResNet-50 | Learns representations and a reward for robotics | download |
| iBOT | ViT-B/16 | Combines self-distillation with MIM | download |

After downloading a pre-trained vision model, place it under the PVM-Robotics/pretrained/ folder. Please do not modify the file names of these checkpoints.
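For example, for the MoCo v2 ResNet-50 checkpoint (the downloaded file name below is illustrative; keep whatever name the download provides):

mkdir -p PVM-Robotics/pretrained
mv ~/Downloads/moco_v2_800ep_pretrain.pth.tar PVM-Robotics/pretrained/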

How to use?

Generate expert demonstrations

In the ./expert_demos/generate.py file, modify the task names you wish to generate demos for.

python -m expert_demos.generate

Train policies via Behaviour Cloning

Example of training a policy with AFA+TE on DINOv1 features for the bin-picking-v2 task. See src/scripts/run.sh for more variations; a brief note on the key flags follows the command.

cd ./src
python train_bc.py \
    agent=bc \
    suite=metaworld \
    suite/metaworld_task=bin_picking \
    suite.num_eval_episodes=100 \
    suite.eval_every_frames=80000 \
    suite.num_train_frames=80000 \
    agent.backbone=vit \
    agent.embedding_name=vit_base_patch16_224.dino \
    agent.feat_extraction=per_patch \
    agent.use_proprio=True \
    agent.supervise_proprio=False \
    agent.supervision_mode=None \
    agent.use_tenc=True \
    agent.tenc_dim=64 \
    agent.tenc_scale=100 \
    agent.positional_encoding=False \
    agent.num_heads=12 \
    num_demos=25 \
    batch_size=128 \
    use_wandb=False \
    seed=100 \
    exp_prefix=BC \
    device=cuda:0
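In this command, agent.use_tenc, agent.tenc_dim, and agent.tenc_scale appear to configure the temporal-encoding (TE) augmentation, while agent.feat_extraction=per_patch passes per-patch ViT features to the attentive aggregation (AFA) module. This reading of the flags is inferred from the paper's terminology; consult src/cfgs for the authoritative definitions.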

Acknowledgments

This work was supported by United Kingdom Research and Innovation (grant EP/S023208/1) through the EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems (RAS). Funding for DD was provided by the Edinburgh Laboratory for Integrated Artificial Intelligence - EPSRC (EP/W002876/1).

Citation

Consider giving us a ⭐ to receive notifications. Also, if you found the paper useful for your research, consider citing it. Finally, consider citing the following works that made ours possible: For Pre-Trained Vision Models in Motor Control, Not All Policy Learning Methods are Created Equal; R3M: A Universal Visual Representation for Robot Manipulation; and The Unsurprising Effectiveness of Pre-Trained Vision Models for Control.

@article{tsagkas2025pretrainedvisualrepresentationsfall,
    title={When Pre-trained Visual Representations Fall Short: Limitations in Visuo-Motor Robot Learning},
    author={Tsagkas, Nikolaos and Sochopoulos, Andreas and Danier, Duolikun and Vijayakumar, Sethu and Xiaoxuan Lu, Chris and Mac Aodha, Oisin},
    journal={arXiv preprint arXiv:2502.03270},
    year={2025},
}
