
Welcome to RETURNN

See the GitHub repository, the RETURNN paper 2016, and the RETURNN paper 2018.

RETURNN (RWTH extensible training framework for universal recurrent neural networks) is a Theano/TensorFlow-based implementation of modern recurrent neural network architectures. It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.

The high-level features and goals of RETURNN are:

  • Simplicity
    • Writing config / code is simple & straightforward (setting up an experiment, defining a model; see the config sketch after this list)
    • Debugging in case of problems is simple
    • Reading config / code is simple (the defined model, training, and decoding all become clear)
  • Flexibility
    • Allow for many different kinds of experiments / models
  • Efficiency
    • Training speed
    • Decoding speed

All of these items are important for research; decoding speed is especially important for production.
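To make the "writing a config is simple" claim concrete: a RETURNN config is a plain Python file that defines global variables. The following is a minimal sketch only, assuming the TensorFlow backend; the layer names, dimensions, and dataset file names are illustrative placeholders, not taken from the RETURNN documentation, and the exact set of options depends on the RETURNN version.

```python
#!rnn.py
# Minimal RETURNN config sketch (TensorFlow backend). File names and
# dimensions below are placeholders.

use_tensorflow = True
task = "train"

train = "train.hdf"  # hypothetical HDF dataset
dev = "dev.hdf"

num_inputs = 40    # e.g. 40-dim acoustic feature vectors
num_outputs = 100  # e.g. 100 target classes

# The network is a dict mapping layer names to layer definitions.
network = {
    "lstm": {"class": "rec", "unit": "lstm", "n_out": 512, "from": "data"},
    "output": {"class": "softmax", "loss": "ce", "from": "lstm"},
}

batch_size = 4000  # maximum number of frames per mini-batch
learning_rate = 0.001
optimizer = {"class": "adam"}
num_epochs = 10
```

Under these assumptions, such a config would be passed to RETURNN's entry point, e.g. python3 rnn.py my_setup.config.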

See our Interspeech 2020 tutorial "Efficient and Flexible Implementation of Machine Learning for ASR and MT" video (slides), which introduces the core concepts.

More specific features include:

  • Mini-batch training of feed-forward neural networks
  • Sequence-chunking based batch training for recurrent neural networks (see the config sketch after this list)
  • Long short-term memory recurrent neural networks, including our own fast CUDA kernel
  • Multidimensional LSTM (GPU only; there is no CPU version)
  • Memory management for large data sets
  • Work distribution across multiple devices
  • Flexible and fast architecture which allows all kinds of encoder-attention-decoder models
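Several of these features surface directly as config options. The snippet below is a sketch, assuming the TensorFlow backend; chunking, the NativeLstm2 unit, and the Horovod flag are RETURNN options, but the concrete values shown here are illustrative.

```python
# Sequence chunking: cut each training sequence into chunks of 100 frames,
# advancing by 50 frames, so that long sequences fit into mini-batches.
chunking = "100:50"

# The LSTM implementation is chosen per recurrent layer via its "unit";
# "nativelstm2" selects RETURNN's own fused CUDA kernel. This layer dict
# would go into the "network" definition.
lstm_layer = {"class": "rec", "unit": "nativelstm2", "n_out": 512, "from": "data"}

# Multi-GPU work distribution (assumption: via Horovod, as supported by
# the TensorFlow backend):
use_horovod = True
```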

See the documentation. See also basic usage and the technological overview.

Here is the video recording of a RETURNN overview talk (slides, exercise sheet; hosted by eBay).

There are many example demos that run on artificially generated data, i.e. they should work as-is.

There are some real-world examples such as setups for speech recognition on the Switchboard or LibriSpeech corpus.

Some benchmark setups against other frameworks can be found here. The results are in the RETURNN paper 2016. Performance benchmarks of our LSTM kernel vs CuDNN and other TensorFlow kernels are in TensorFlow LSTM benchmark.

There is also a wiki. Questions can be asked on StackOverflow using the RETURNN tag.
