
State Representation Learning Zoo with PyTorch (part of S-RL Toolbox)

A collection of State Representation Learning (SRL) methods for Reinforcement Learning, written using PyTorch.

SRL Zoo Documentation:

S-RL Toolbox Documentation:

S-RL Toolbox Repository:

Available methods:

  • Autoencoder (reconstruction loss)
  • Denoising Autoencoder (DAE)
  • Forward Dynamics model
  • Inverse Dynamics model
  • Reward prediction loss
  • Variational Autoencoder (VAE) and beta-VAE
  • SRL with Robotic Priors + extensions (stereovision, additional priors)
  • Supervised Learning
  • Principal Component Analysis (PCA)
  • Triplet Network (for stereovision only)
  • Combination and stacking of methods
  • Random Features
  • [experimental] Reward Prior, Episode-prior, Perceptual Similarity loss (DARLA), Mutual Information loss

Related papers:


Documentation is available online:


Please read the documentation for more details; we provide Anaconda environment files and Docker images.

Learning a State Representation

To learn a state representation, you need to enforce constraints on the representation using one or more losses. For example, to train an autoencoder, you need to use a reconstruction loss. Most losses are not exclusive, meaning you can combine them.
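Combining losses boils down to a weighted sum of the individual terms. A minimal sketch in PyTorch, combining a reconstruction loss with an inverse-dynamics loss (the helper name and signature are illustrative, not the SRL Zoo API):

```python
import torch
import torch.nn.functional as F

def combined_loss(decoded, obs, pred_action, true_action, weights=(1.0, 1.0)):
    """Weighted sum of an autoencoder reconstruction loss and an
    inverse-dynamics (action prediction) loss. Hypothetical helper."""
    w_ae, w_inv = weights
    reconstruction = F.mse_loss(decoded, obs)          # autoencoder term
    inverse = F.cross_entropy(pred_action, true_action)  # inverse model term
    return w_ae * reconstruction + w_inv * inverse
```

Because both terms are differentiable, gradients flow through the shared encoder from every loss you add to the sum.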

All losses are defined in losses/. The available losses are:

  • autoencoder: reconstruction loss, using current and next observation
  • denoising autoencoder (dae): same as the autoencoder, except that the model reconstructs inputs from noisy observations containing a random zero-pixel mask
  • vae: (beta-)VAE loss (reconstruction + Kullback-Leibler divergence loss)
  • inverse: predict the action given current and next state
  • forward: predict the next state given current state and taken action
  • reward: predict the reward (positive or not) given current and next state
  • priors: robotic priors losses (see "Learning State Representations with Robotic Priors")
  • triplet: triplet loss for multi-cam setting (see Multiple Cameras section in the doc)

The following losses are experimental:
  • reward-prior: Maximises the correlation between states and rewards (not meaningful with sparse rewards)
  • episode-prior: Learns an episode-agnostic state space, using a discriminator that distinguishes between states from the same or different episodes
  • perceptual similarity loss (for VAE): Replaces the reconstruction term of the beta-VAE loss with the distance between the reconstructed input and the real input in the embedding space of a pre-trained DAE
  • mutual information loss: Maximises the mutual information between states and rewards
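The forward and inverse losses above can be sketched as two small heads on top of the learned state. The class and layer sizes below are illustrative, not the SRL Zoo classes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicsHeads(nn.Module):
    """Sketch of forward and inverse dynamics heads on a learned state s_t,
    for a discrete action space (names and sizes are illustrative)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        # forward model: (s_t, a_t) -> predicted s_{t+1}
        self.forward_net = nn.Linear(state_dim + n_actions, state_dim)
        # inverse model: (s_t, s_{t+1}) -> action logits
        self.inverse_net = nn.Linear(2 * state_dim, n_actions)
        self.n_actions = n_actions

    def forward_loss(self, s_t, action, s_next):
        a_onehot = F.one_hot(action, self.n_actions).float()
        pred_next = self.forward_net(torch.cat([s_t, a_onehot], dim=1))
        return F.mse_loss(pred_next, s_next)

    def inverse_loss(self, s_t, s_next, action):
        logits = self.inverse_net(torch.cat([s_t, s_next], dim=1))
        return F.cross_entropy(logits, action)
```

In practice the states come from a shared encoder, so both losses shape the same representation.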

All possible arguments can be displayed using python --help. You can limit the training set size (--training-set-size argument), change the minibatch size (-bs), the number of epochs (--epochs), ...
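A minimal sketch of how those flags could be wired up with argparse (the defaults shown are assumptions, not the project's actual values):

```python
import argparse

# Sketch of the CLI flags mentioned above; not the project's full parser.
parser = argparse.ArgumentParser(description="SRL training (sketch)")
parser.add_argument("--data-folder", type=str, required=True)
parser.add_argument("--losses", nargs="+", default=["autoencoder"])
parser.add_argument("--training-set-size", type=int, default=-1,
                    help="Limit the number of training samples (-1 = all)")
parser.add_argument("-bs", "--batch-size", type=int, default=256)
parser.add_argument("--epochs", type=int, default=30)

args = parser.parse_args(["--data-folder", "data/demo", "-bs", "32"])
```

argparse converts `--training-set-size` to the attribute `args.training_set_size`, so the flag names above map directly onto the attributes the training code reads.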

Datasets: Simulated Environments and Real Robots

Although the data can be generated easily using the RL repo in simulation (cf Generating Data), we provide datasets recorded with a real Baxter robot:


You can download an example dataset here.

Train an inverse model:

python --data-folder data/path/to/dataset --losses inverse

Train an autoencoder:

python --data-folder data/path/to/dataset --losses autoencoder

Combining an autoencoder with an inverse model is as easy as:

python --data-folder data/path/to/dataset --losses autoencoder inverse

You can as well specify the weight of each loss:

python --data-folder data/path/to/dataset --losses autoencoder:1 inverse:10
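The loss:weight syntax can be parsed with a few lines of Python; this helper is illustrative, not the project's actual parser:

```python
def parse_loss_weights(tokens):
    """Parse CLI tokens like ["autoencoder:1", "inverse:10"] into a
    {loss_name: weight} dict; a bare name defaults to weight 1.0."""
    weights = {}
    for token in tokens:
        name, _, weight = token.partition(":")
        weights[name] = float(weight) if weight else 1.0
    return weights
```

For example, parse_loss_weights(["autoencoder:1", "inverse:10"]) yields {"autoencoder": 1.0, "inverse": 10.0}.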

Please read the documentation for more examples.

Running Tests

Download the test datasets kuka_gym_test and kuka_gym_dual_test and put them in the data/ folder.



CUDA out of memory error

  1. python --data-folder data/staticButtonSimplest
RuntimeError: cuda runtime error (2) : out of memory at /b/wheel/pytorch-src/torch/lib/THC/generic/

SOLUTION 1: Decrease the batch size, e.g. to 32 or 64, on GPUs with little memory.

SOLUTION 2: Use a simple 2-layer neural network model: python --data-folder data/staticButtonSimplest --model-type mlp
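The kind of lightweight model --model-type mlp selects can be sketched as a small 2-layer MLP encoder; layer sizes and the class name below are illustrative, not the SRL Zoo implementation:

```python
import torch
import torch.nn as nn

class MLPEncoder(nn.Module):
    """Sketch of a 2-layer MLP that maps a flattened observation to a
    low-dimensional state; far fewer parameters than a CNN encoder."""
    def __init__(self, input_dim, state_dim=32, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, obs):
        # flatten image observations to (batch, input_dim) before encoding
        return self.net(obs.flatten(start_dim=1))
```

Because activations and weights are much smaller than in a convolutional encoder, this model fits on GPUs where the default one runs out of memory.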
