Project Name | Stars | Downloads | Repos Using This | Packages Using This | Most Recent Commit | Total Releases | Latest Release | Open Issues | License | Language | Description
---|---|---|---|---|---|---|---|---|---|---|---
Mmdnn | 5,725 | 3 | | | 8 months ago | 10 | July 24, 2020 | 333 | mit | Python | MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Convert models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX and CoreML.
Tensorspace | 4,487 | 2 | 1 | | 2 years ago | 13 | April 20, 2019 | 23 | apache-2.0 | JavaScript | Neural network 3D visualization framework; build interactive and intuitive models in browsers, with support for pre-trained deep learning models from TensorFlow, Keras and TensorFlow.js
Keras Vis | 2,592 | 28 | 1 | | 3 years ago | 11 | July 06, 2017 | 104 | mit | Python | Neural network visualization toolkit for Keras
Torchinfo | 1,734 | 12 | | | 21 days ago | 28 | May 28, 2022 | 22 | mit | Python | View model summaries in PyTorch!
Quiver | 1,536 | | | | 3 years ago | 1 | October 30, 2017 | 32 | mit | JavaScript | Interactive convnet features visualization for Keras
Hiddenlayer | 1,531 | 4 | 5 | | 2 years ago | 3 | April 24, 2020 | 48 | mit | Python | Neural network graphs and training metrics for PyTorch, TensorFlow, and Keras.
Labml | 1,316 | 6 | | | 2 months ago | 131 | July 05, 2022 | 22 | mit | Jupyter Notebook | 🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
Fabrik | 1,062 | | | | 3 years ago | | | 69 | gpl-3.0 | Python | :factory: Collaboratively build, visualize, and design neural nets in the browser
Tools To Design Or Visualize Architecture Of Neural Network | 1,017 | | | | 2 years ago | | | 4 | | | Tools to design or visualize the architecture of a neural network
Picasso | 990 | 1 | | | 5 years ago | 5 | July 20, 2017 | 18 | epl-1.0 | Python | :art: A CNN visualizer
This GitHub repo was originally put together to give a full set of working examples of autoencoders, taken from the code snippets in Building Autoencoders in Keras. These examples are:
All the scripts use the ubiquitous MNIST handwritten digit data set, and have been run under Python 3.5 and Keras 2.1.4 with a TensorFlow 1.5 backend and numpy 1.14.1. Note that it's important to use Keras 2.1.4+, or else the VAE example doesn't work.
In order to bring a bit of added value, each autoencoder script saves the autoencoder's latent space/features/bottleneck in a pickle file.
An autoencoder is made of two components: the encoder and the decoder. The encoder brings the data from a high-dimensional input down to a bottleneck layer, where the number of neurons is smallest. Then, the decoder takes this encoded input and converts it back to the original input shape, in this case an image. The latent space is the space in which the data lies in the bottleneck layer.
The latent space contains a compressed representation of the image, which is the only information the decoder is allowed to use to try to reconstruct the input as faithfully as possible. To perform well, the network has to learn to extract the most relevant features in the bottleneck.
A great explanation of latent space visualization by Julien Despois can be found here; it is where I nicked the above explanation and diagram from!
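To make the structure concrete, here is a minimal sketch of a dense autoencoder in Keras with a 32-d bottleneck, which then pickles the bottleneck features as described above. This is an illustration rather than the repo's exact code (the scripts follow the snippets from Building Autoencoders in Keras), and the file name `latent_features.pickle` is just an example.

```python
# Minimal sketch (illustrative, not the repo's exact code): a dense autoencoder
# with a 32-d bottleneck, trained on flattened MNIST digits.
import pickle

from keras.datasets import mnist
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32                                    # size of the bottleneck / latent space

inputs = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(inputs)    # encoder: 784-d -> 32-d
decoded = Dense(784, activation='sigmoid')(encoded)         # decoder: 32-d -> 784-d

autoencoder = Model(inputs, decoded)                 # full model used for training
encoder = Model(inputs, encoded)                     # encoder only, used to extract latent features
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# Flatten the 28x28 images and scale pixel values to [0, 1].
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.
x_test = x_test.reshape(-1, 784).astype('float32') / 255.

autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                shuffle=True, validation_data=(x_test, x_test))

# Save the bottleneck features to a pickle file (example file name).
latent_features = encoder.predict(x_test)            # shape (10000, 32)
with open('latent_features.pickle', 'wb') as f:
    pickle.dump(latent_features, f)
```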
The visualizations are created by carrying out dimensionality reduction on the 32-d (or 128-d) features using t-distributed stochastic neighbor embedding (t-SNE) to transform them into 2-d features, which are easy to visualize.
visualize_latent_space.py loads the appropriate features, carries out the t-SNE, saves the t-SNE result and plots the scatter graph. Note that at the moment you have to do some commenting/uncommenting to run it on the appropriate features :-( .
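For reference, here is a stripped-down sketch of what such a t-SNE step might look like, assuming the 32-d features were pickled as in the sketch above (the file name and plotting details are illustrative, not the script's exact code):

```python
# Illustrative sketch of the t-SNE visualization step, not visualize_latent_space.py itself.
import pickle

import matplotlib.pyplot as plt
from keras.datasets import mnist
from sklearn.manifold import TSNE

# Load the pickled latent features (example file name, shape (n_samples, 32)).
with open('latent_features.pickle', 'rb') as f:
    latent_features = pickle.load(f)

# Reduce the 32-d features to 2-d so they can be shown on a scatter plot.
features_2d = TSNE(n_components=2, random_state=0).fit_transform(latent_features)

# Colour each point by its digit label to see how the classes cluster in latent space.
(_, _), (_, y_test) = mnist.load_data()
plt.scatter(features_2d[:, 0], features_2d[:, 1], c=y_test, cmap='tab10', s=2)
plt.colorbar()
plt.show()
```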
Here are some 32-d examples:
And here is the output from the 2-d VAE latent space: