keras-autoencoders

This GitHub repo was originally put together to give a full set of working examples of autoencoders, taken from the code snippets in Building Autoencoders in Keras. These examples are:

  • A simple autoencoder / sparse autoencoder: simple_autoencoder.py (see the sketch after this list)
  • A deep autoencoder: deep_autoencoder.py
  • A convolutional autoencoder: convolutional_autoencoder.py
  • An image denoising autoencoder: image_desnoising.py
  • A variational autoencoder (VAE): variational_autoencoder.py
  • A variational autoencoder with deconvolutional layers: variational_autoencoder_deconv.py
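
As a quick orientation, here is a minimal sketch in the spirit of the simple autoencoder example. The layer sizes, optimizer and training settings follow the Building Autoencoders in Keras post and are assumptions; the actual scripts in this repo may differ in detail.

```python
# Minimal sketch of a simple autoencoder on MNIST, in the spirit of
# simple_autoencoder.py (details are assumptions, not the repo's exact code).
from keras.datasets import mnist
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32                                  # size of the bottleneck

inputs = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(inputs)
decoded = Dense(784, activation='sigmoid')(encoded)

autoencoder = Model(inputs, decoded)               # full encoder + decoder
encoder = Model(inputs, encoded)                   # encoder half, for the latent features
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.
x_test = x_test.reshape(-1, 784).astype('float32') / 255.

autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
                shuffle=True, validation_data=(x_test, x_test))
```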

All the scripts use the ubiquitous MNIST handwritten digit data set, and have been run under Python 3.5 and Keras 2.1.4 with a TensorFlow 1.5 backend, and numpy 1.14.1. Note that it's important to use Keras 2.1.4+, or else the VAE example doesn't work.

Latent Space Visualization

In order to bring a bit of added value, each autoencoder script saves the autoencoder's latent space/features/bottleneck in a pickle file.
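
For instance, the features might be produced and pickled roughly like this, continuing from the simple autoencoder sketch above (the variable and file names are illustrative, not necessarily those used in the repo's scripts):

```python
# Continuing from the simple autoencoder sketch above: project the test
# images into the 32-d latent space with the encoder half of the model
# and pickle the result. The output file name is an assumption.
import pickle

latent_features = encoder.predict(x_test)          # shape (10000, 32)

with open('simple_autoencoder_features.pickle', 'wb') as f:
    pickle.dump(latent_features, f)
```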

An autoencoder is made of two components, the encoder and the decoder. The encoder brings the data from a high dimensional input to a bottleneck layer, where the number of neurons is the smallest. Then, the decoder takes this encoded input and converts it back to the original input shape, in this case an image. The latent space is the space in which the data lies in the bottleneck layer.

The latent space contains a compressed representation of the image, which is the only information the decoder is allowed to use to try to reconstruct the input as faithfully as possible. To perform well, the network has to learn to extract the most relevant features in the bottleneck.

Autoencoder latent space

A great explanation of latent space visualization by Julien Despois can be found here; it's where I nicked the above explanation and diagram from!

The visualizations are created by carrying out dimensionality reduction on the 32-d (or 128-d) features using t-distributed stochastic neighbor embedding (t-SNE) to transform them into 2-d features which are easy to visualize.

visualize_latent_space.py loads the appropriate features, carries out the t-SNE, saves the t-SNE result and plots the scatter graph. Note that at the moment you have to do some commenting/uncommenting to run it on the appropriate features :-( .
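
In case it helps, here is a rough sketch of that flow (the pickle file name is an assumption for illustration, and this sketch plots the t-SNE result rather than saving it):

```python
# Rough sketch of the visualization flow: load pickled latent features,
# reduce them to 2-d with t-SNE, and scatter-plot them coloured by digit
# label. File names here are assumptions, not the repo's exact names.
import pickle
import matplotlib.pyplot as plt
from keras.datasets import mnist
from sklearn.manifold import TSNE

with open('simple_autoencoder_features.pickle', 'rb') as f:
    latent_features = pickle.load(f)               # e.g. shape (10000, 32)

(_, _), (_, y_test) = mnist.load_data()            # digit labels for colouring

features_2d = TSNE(n_components=2, random_state=0).fit_transform(latent_features)

plt.figure(figsize=(8, 8))
plt.scatter(features_2d[:, 0], features_2d[:, 1], c=y_test, cmap='tab10', s=3)
plt.colorbar()
plt.title('t-SNE of autoencoder latent space')
plt.show()
```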

Here are some 32-d examples:

simple autoencoder latent space

sparse autoencoder latent space

deep autoencoder latent space

And here is the output from the 2-d VAE latent space:

variational autoencoder latent space

variational autoencoder latent space
