neon™ 2.0: Optimized for Intel® Architectures

2017-06-28

Jayaram Bobba

neon™ is a deep learning framework created by Nervana Systems with industry-leading performance on GPUs, thanks to its custom assembly kernels and optimized algorithms. Since Nervana joined Intel, we have been working together to bring the same level of performance to CPU platforms. Today, as the result of a close collaboration between the teams, we are excited to announce neon™ 2.0 with optimizations for Intel CPUs using the Intel® Math Kernel Library (Intel® MKL).

On an Intel® Xeon® processor E5 v4 server platform (code named Broadwell), the optimized implementation provides up to a 98x speedup on popular benchmarks and topologies. For example, GoogLeNet v1 inference throughput is 539 images/sec on this Xeon platform, enabling high-throughput inference with neon on CPUs. neon also demonstrates state-of-the-art CPU performance on topologies such as ResNet-50, with training throughput of 53 images/sec on Xeon systems. Users can also expect improved performance on the Intel® Xeon® processors (code named Skylake) and Intel® Xeon Phi™ processors (code named Knights Mill) coming out later this year. We hope that these optimizations will allow data scientists and machine learning researchers to leverage readily available CPUs to develop deep learning models.

The Intel® MKL library provides CPU-optimized implementations of widely used primitives such as convolution, pooling, activation functions, and normalization. In contrast to existing vanilla implementations, these MKL primitives exploit the full vectorization and parallelization capabilities of Intel architecture. Information on other MKL-optimized frameworks is available for TensorFlow, MXNet, and Caffe.

We have developed a new neon backend (NervanaMKL) that utilizes MKL primitives where available. The following neon ops are currently optimized with MKL: 2D direct convolution, pooling, ReLU, batch normalization, MergeSum, and MergeBroadcast.
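
For reference, the MKL backend is selected like any other neon backend, either by passing -b mkl to the bundled example scripts or programmatically through gen_backend. A minimal sketch (exact keyword arguments may vary slightly between neon versions):

# Select the MKL-optimized CPU backend programmatically; the example
# scripts do the equivalent when given the -b mkl command-line option.
from neon.backends import gen_backend

be = gen_backend(backend='mkl', batch_size=128)
print(be)

Models constructed after this call run their supported ops through the MKL-optimized backend.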

To achieve peak performance, MKL primitives require N-dimensional input data to be laid out in specific SIMD-friendly formats. To reduce the burden on neon users, we have incorporated the plumbing required for automatic data layout tracking and conversion into the NervanaMKL backend. We have also rewritten elementwise operations using OpenMP to speed up execution. Together all these optimizations provide a significant performance boost for both training and inference tasks.
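
As a rough illustration of the layout-tracking idea (a simplified, hypothetical sketch, not neon's actual implementation): each tensor carries a layout tag, conversion to the MKL-friendly layout happens on demand before an MKL primitive, and consecutive MKL ops avoid redundant conversions.

import numpy as np

class TrackedTensor:
    def __init__(self, data, layout="plain"):
        self.data = data
        self.layout = layout  # "plain" or "mkl_blocked" (hypothetical tags)

def to_layout(t, layout):
    # Conversion is a no-op when the tensor already has the requested layout.
    if t.layout == layout:
        return t
    # A real backend would reorder memory here; we only retag for illustration.
    return TrackedTensor(np.ascontiguousarray(t.data), layout)

def mkl_conv(x):
    x = to_layout(x, "mkl_blocked")         # convert on demand before the MKL op
    y = x.data * 1.0                        # stand-in for the convolution itself
    return TrackedTensor(y, "mkl_blocked")  # keep the MKL layout for the next op

x = TrackedTensor(np.ones((1, 3, 8, 8)))
y = mkl_conv(mkl_conv(x))  # the second conv skips the layout conversion
print(y.layout)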

We have validated the correctness of the implementation on a variety of models provided with the neon framework. Performance has been optimized for various ImageNet-based models such as AlexNet, GoogLeNet v1, and ResNet. Figure 1 shows the training performance improvement for Convnet-AlexNet, Convnet-GoogLeNet v1, and ResNet-50 (with a real image dataset) with the neon MKL backend on an Intel Xeon system.

Figure 1: Performance improvement with Intel MKL on Intel Xeon processor E5 v4 (code named Broadwell) CPUs

We encourage users to check out the MKL-optimized neon v2.0 and try out their favorite models on IA platforms. The DNN component of MKL is provided free of charge and is downloaded automatically as part of the neon installation.

 

Future neon™ Release Features

Future neon v2.x releases will feature performance optimizations for a broader range of models, including GANs and DeepSpeech2. neon v3.0 will feature Intel® Nervana™ Graph support, enabling multi-node training and new models such as ResNet-Inception, SSD, and a wide range of reinforcement learning models.

 

Installation Instructions on Ubuntu 16.04 Systems:

1)    Install prerequisites
sudo apt-get install python-pip python-virtualenv libhdf5-dev libyaml-dev pkg-config

2)    Get and install neon v2.0
git clone https://github.com/NervanaSystems/neon.git
cd neon
make

3)    Activate the virtualenv in the neon root directory
. .venv/bin/activate

4)    Run a basic neon example without the MKL backend from the neon root directory to get baseline performance (-e 1 runs for just one epoch)
python examples/cifar10_conv.py -e 1

5)    Run the same example with the MKL backend from the neon root directory to see the improved performance
python examples/cifar10_conv.py -b mkl -e 1
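
Since the MKL primitives and the OpenMP-based elementwise ops are multithreaded, throughput can also depend on the OpenMP runtime settings. The environment variables below are standard OpenMP/Intel OpenMP knobs; the values shown are illustrative placeholders rather than tuned recommendations:

export OMP_NUM_THREADS=<number of physical cores>   # number of OpenMP worker threads
export KMP_AFFINITY=granularity=fine,compact,1,0    # Intel OpenMP thread pinning policy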

Broadwell (BDW) System Configuration:

Figure 2: BDW system configuration

 

neon™ 2.0 Key Contributors:
Peng Zhang (Development), Wei Wang (Benchmarking and Documentation), Dawn Stone (Validation)

Jayaram Bobba is a senior software engineer in the AI Products Group at Intel. He works on graph compilers and CPU optimizations for machine learning frameworks such as TensorFlow and neon, and leads the team developing the CPU backend for Nervana Graph. During his time at Intel, he has contributed to many binary translation projects and HW/SW co-design efforts ranging from microarchitecture enhancements to software algorithm improvements. Jayaram has a PhD in computer architecture from the University of Wisconsin.

Peng Zhang is a software engineer in the Software and Services Group at Intel. He works on the optimization of deep learning frameworks, including neon and Torch. He has a master's degree in Control Science and Engineering from Tsinghua University.

Wei Wang is a software engineer in the AI Products Group at Intel. He works on benchmarking machine learning frameworks such as neon and Caffe. He has a PhD in high-performance computing (HPC) from the University of Delaware.

Dawn Stone is a software engineer in the AI Products Group at Intel. She works on validating machine learning frameworks including Intel Nervana™ Graph and neon™.

 
