https://github.com/pathak22/zeroshot-imitation
Deepak Pathak*, Parsa Mahmoudieh*, Guanghao Luo*, Pulkit Agrawal*, Dian Chen,
Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, Trevor Darrell
University of California, Berkeley
This is the implementation for the ICLR 2018 paper Zero-Shot Visual Imitation. We propose an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. The key insight is that, for most tasks, reaching the goal matters more than how it is reached.
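To make the forward consistency idea concrete, here is a minimal PyTorch sketch (illustrative only, not taken from this repo's code; the module names and dimensions below are made up): the inverse model proposes an action from a pair of states, a learned forward model rolls that action out, and the loss compares the resulting state to the observed one rather than comparing actions.

import torch
import torch.nn as nn

class ForwardConsistencyLoss(nn.Module):
    """Toy sketch: penalize the predicted action for failing to *reach*
    the observed next state, instead of for mismatching the true action."""

    def __init__(self, state_dim=128, action_dim=16):
        super().__init__()
        # Inverse model: predicts the action taken between two states.
        self.inverse = nn.Sequential(
            nn.Linear(2 * state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim))
        # Forward (dynamics) model: predicts the next state from state + action.
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, state_dim))

    def forward(self, s_t, s_next):
        a_hat = self.inverse(torch.cat([s_t, s_next], dim=-1))
        s_next_hat = self.dynamics(torch.cat([s_t, a_hat], dim=-1))
        # Consistency in state space: any action that reproduces the
        # observed transition incurs zero loss.
        return ((s_next_hat - s_next) ** 2).mean()

loss_fn = ForwardConsistencyLoss()
s_t, s_next = torch.randn(32, 128), torch.randn(32, 128)
loss = loss_fn(s_t, s_next)  # backpropagates through both models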
@inproceedings{pathakICLR18zeroshot,
  Author = {Pathak, Deepak and Mahmoudieh, Parsa and Luo, Guanghao and
            Agrawal, Pulkit and Chen, Dian and Shentu, Yide and
            Shelhamer, Evan and Malik, Jitendra and Efros, Alexei A. and
            Darrell, Trevor},
  Title = {Zero-Shot Visual Imitation},
  Booktitle = {ICLR},
  Year = {2018}
}
git clone -b master --single-branch https://github.com/pathak22/zeroshot-imitation.git
cd zeroshot-imitation/

# (1) Install requirements:
sudo apt-get install python-tk
virtualenv venv
source $PWD/venv/bin/activate
pip install --upgrade pip
pip install numpy
pip install -r src/requirements.txt

# (2) Install Caffe: http://caffe.berkeleyvision.org/install_apt.html
git clone https://github.com/BVLC/caffe.git
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install libatlas-base-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
sudo apt-get install --no-install-recommends libboost-all-dev
cd caffe/
# edit Makefile.config
make all -j
make pycaffe
make test -j
make runtest -j

# Note: If you are using conda, then it's easy:
# $ conda install -c conda-forge caffe
# $ conda install -c conda-forge opencv=3.2.0
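After the build, a quick sanity check that the Python bindings are importable can help (a suggested snippet, not part of the repository; run it inside the activated virtualenv):

# Suggested sanity check, not part of the repo. Fails at the caffe import
# if the pycaffe build is broken or caffe/python is not on PYTHONPATH.
import numpy as np
import caffe

print("numpy:", np.__version__)
print("caffe loaded from:", caffe.__file__)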
Data can be downloaded at the Google Drive link. This is the same data as used in Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation.
You will need the rope9 dataset and img_mean.npy from this download.
Then, download the AlexNet weights, bvlc_alexnet.npy, from here.
Put the rope9 data in data/datasets/rope9
Put img_mean.npy in data/img_mean.npy
Put bvlc_alexnet.npy in nets/bvlc_alexnet.npy
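A quick way to confirm the files landed where train.py expects them (a suggested check, not part of the repo; it assumes the standard bvlc_alexnet.npy layout, a pickled dict mapping layer name to [weights, biases]):

# Suggested layout check, not part of the repository.
import os
import numpy as np

assert os.path.isdir('data/datasets/rope9'), 'rope9 dataset not found'

img_mean = np.load('data/img_mean.npy')
print('img_mean shape:', img_mean.shape)

# bvlc_alexnet.npy is a pickled dict: layer name -> [weights, biases]
alexnet = np.load('nets/bvlc_alexnet.npy', allow_pickle=True,
                  encoding='latin1').item()
print('AlexNet layers:', sorted(alexnet.keys()))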
python -i train.py

# fwd_consist=True turns the forward consistency loss on;
# leave it False to just learn the inverse model
r = RopeImitator('name', fwd_consist=True)

# to train the baseline, set baseline_reg=True; note that fwd_consist
# must be turned on as well (historical accident)
r = RopeImitator('name', fwd_consist=True, baseline_reg=True)

# restore old models, if any; model_name defaults to the current model's name
r.restore(iteration, model_name='name of old model')

# training
r.train(num_iters)
Note that the accuracies reported during training are not a good measure of real-world performance. The purpose of forward consistency is to learn actions consistent with state transitions, which do not necessarily have to be the ground-truth actions.
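As a toy illustration of why (with made-up one-dimensional dynamics, not the rope environment): two distinct actions can produce the same transition, so action accuracy can look poor even when every goal state is reached exactly.

# Made-up dynamics: the state only records where the object ends up,
# so pushing from either side gives the same result.
def step(state, action):
    return state + abs(action)

s_t = 0.0
true_action, predicted_action = -1.0, 1.0   # different actions...

s_next = step(s_t, true_action)
s_next_hat = step(s_t, predicted_action)

print('action error:', abs(true_action - predicted_action))  # 2.0 -> "inaccurate"
print('state error: ', abs(s_next - s_next_hat))             # 0.0 -> fully consistent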
Paper
Project Website
Videos