DISCONTINUATION OF PROJECT. This project will no longer be maintained by Intel. Intel will not provide or guarantee development of or support for this project, including but not limited to, maintenance, bug fixes, new releases or updates. Patches to this project are no longer accepted by Intel. If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project.
For fast iteration and model exploration, neon has the fastest performance among deep learning libraries (2x the speed of cuDNNv4; see benchmarks).
See the new features in our latest release. We want to highlight that neon v2.0.0+ has been optimized for much better performance on CPUs by enabling Intel Math Kernel Library (MKL). The DNN (Deep Neural Networks) component of MKL that is used by neon is provided free of charge and downloaded automatically as part of the neon installation.
On a Mac OS X or Linux machine, enter the following to download and install
neon (conda users see the guide), and use it to train your first multi-layer perceptron. To force a python2 or python3 install, replace
make below with either
make python2 or
make python3.
git clone https://github.com/NervanaSystems/neon.git
cd neon
make
. .venv/bin/activate
python examples/mnist_mlp.py
Starting after neon v2.2.0, the master branch of neon will be updated weekly with work-in-progress toward the next release. For a stable release, check out a release tag (e.g., "git checkout v2.2.0"), or simply check out the "latest" release tag to get the most recent stable release (i.e., "git checkout latest").
From version 2.4.0, we re-enabled pip install. Neon can be installed using the package name nervananeon.
pip install nervananeon
Note that aeon needs to be installed separately. The latest release, v2.6.0, uses aeon v1.3.0.
Between neon v2.1.0 and v2.2.0, the aeon manifest file format was changed. When updating from neon < v2.2.0, manifests have to be recreated using the ingest scripts (in the examples folder) or updated using this script.
If a compatible GPU resource is found on the system, the gpu backend is selected by default, so the above command is equivalent to:
python examples/mnist_mlp.py -b gpu
When no GPU is available, the optimized CPU (MKL) backend is selected by default as of neon v2.1.0, which means the above command is equivalent to:
python examples/mnist_mlp.py -b mkl
If you are interested in comparing the default mkl backend with the non-optimized CPU backend, use the following command:
python examples/mnist_mlp.py -b cpu
Alternatively, a yaml file may be used to run an example.
To select a specific backend in a yaml file, add or modify a line that contains
backend: mkl to enable the mkl backend, or
backend: cpu to enable the cpu backend. The gpu backend is selected by default if a GPU is available.
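For instance, a minimal backend stanza in such a yaml file might look like the fragment below. Only the backend key comes from the text above; a complete example file also describes the model, cost, and training parameters.

```yaml
# only the backend key is taken from the text above; a real example
# yaml file contains additional keys describing the model and training
backend: mkl
```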
The Intel Math Kernel Library takes advantage of the parallelization and vectorization capabilities of Intel Xeon and Xeon Phi systems. When hyperthreading is enabled on the system, we recommend the following KMP_AFFINITY settings to make sure parallel threads are 1:1 mapped to the available physical cores.
export OMP_NUM_THREADS=<Number of Physical Cores>
export KMP_AFFINITY=compact,1,0,granularity=fine
export OMP_NUM_THREADS=<Number of Physical Cores>
export KMP_AFFINITY=verbose,granularity=fine,proclist=[0-<Number of Physical Cores>],explicit
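As a concrete illustration of the first setting, here is a sketch for a machine with 16 physical cores. The core count is an assumption; substitute the value for your own system, e.g. from lscpu or nproc.

```shell
# assumed machine with 16 physical cores; replace 16 with your own count,
# e.g. "Core(s) per socket" x "Socket(s)" as reported by lscpu
export OMP_NUM_THREADS=16
export KMP_AFFINITY=compact,1,0,granularity=fine
echo "threads=$OMP_NUM_THREADS affinity=$KMP_AFFINITY"
```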
For more information about KMP_AFFINITY, please check here. We encourage users to experiment and establish their own best performance settings.
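One simple way to experiment is to time the same workload under a few candidate thread counts. The sketch below only prints the settings it would try; the neon command is commented out because it assumes a completed install, and the candidate counts are arbitrary placeholders, not recommendations.

```shell
# try a few OMP_NUM_THREADS values and compare wall-clock times;
# the counts below are placeholders, not recommendations
for t in 4 8 16; do
  export OMP_NUM_THREADS="$t"
  echo "trying OMP_NUM_THREADS=$OMP_NUM_THREADS"
  # time python examples/mnist_mlp.py -b mkl   # uncomment on a real install
done
```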
The complete documentation for neon is available here. Some useful starting points are:
For any bugs or feature requests please:
For other questions and discussions please post a message to the neon-users Google group