Reinforcement Learning with Prototypical Representations

Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency

Self-Supervised Policy Adaptation during Deployment

Task-Agnostic Morphology Evolution

Visual Imitation Made Easy

Learning Predictive Representations for Deformable Objects Using Contrastive Estimation

Robust Policies via Mid-Level Visual Representations

Automatic Curriculum Learning through Value Disagreement

Generalized Hindsight for Reinforcement Learning

Reinforcement Learning with Augmented Data