"Not so fast, FFT": Winograd

Mar 04, 2016

Deep learning thrives on speed. Faster training enables the construction of larger and more complex networks to tackle new domains such as speech or decision making.

Recently, small convolutional filter sizes have become an important component in convolutional neural networks such as Google’s AlphaGo network or Microsoft’s deep residual networks. While most convolutions are computed with the fast Fourier transform (FFT) algorithm, the rising prominence of small 3×3 filter sizes paves the way for a lesser-known technique specialized for small filter sizes: Winograd’s minimal filtering algorithms (Lavin and Gray, 2015).
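
To give a flavor of the idea, the sketch below implements the 1-D Winograd transform F(2, 3) described by Lavin and Gray, which produces two outputs of a 3-tap filter with four multiplications instead of six. This is a toy NumPy illustration of the transform, not the GPU kernels shipped in neon.

```python
import numpy as np

# Winograd F(2, 3) transform matrices (Lavin and Gray, 2015).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=np.float64)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float64)

def winograd_f23(d, g):
    """Two outputs of the valid correlation y[i] = sum_k d[i+k] * g[k]
    for a 4-element input tile d and a 3-tap filter g."""
    U = G @ g          # transform the filter
    V = B_T @ d        # transform the input tile
    M = U * V          # only 4 elementwise multiplications
    return A_T @ M     # inverse transform -> 2 outputs

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([0.5, 1.0, -0.5])       # 3-tap filter
direct = np.array([d[0:3] @ g, d[1:4] @ g])
print(winograd_f23(d, g), direct)    # both print [1. 2.]
```

In a convolutional layer the same transforms are applied along both axes of small input tiles, and this reduction in multiplications is what fast Winograd kernels exploit.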

We have implemented the Winograd algorithm on GPUs and benchmarked performance and convergence on state-of-the-art networks. Depending on the network architecture, training with Nervana’s Winograd algorithm yields speed-ups of 2-3x over NVIDIA’s cuDNN v4 kernels.

We benchmarked speed on:

  • NVIDIA Titan X GPU (clock fixed at 1 GHz)
  • Intel® Core™ i7 CPU 975 @ 3.33 GHz

with several convolutional networks operating on the ImageNet dataset: VGG, GoogleNet, and Microsoft’s deep residual networks. We tested different minibatch sizes and compared the computation time of the 3×3 layers between Nervana Winograd and NVIDIA cuDNN v4.

Performance was measured in units of algorithmic speedup: computation speed (e.g., images/second) normalized to the maximum theoretical speed of the direct convolution approach. Values above one are only possible for algorithms, such as Winograd, that require fewer arithmetic operations than direct convolution, and they indicate how efficiently such an algorithm is implemented.
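
As a concrete, purely hypothetical illustration of this metric, the snippet below computes an algorithmic speedup from a measured throughput and the FLOP count that direct convolution would need; the numbers and function name are made up for illustration and are not the benchmark code behind the figures.

```python
def algorithmic_speedup(images_per_sec, direct_flops_per_image, peak_flops):
    """Measured throughput normalized to the maximum theoretical throughput
    of direct convolution on the same device (hypothetical example)."""
    # Best case for direct convolution: every FLOP of the device goes into
    # the multiply-adds that direct convolution would have to perform.
    direct_max_images_per_sec = peak_flops / direct_flops_per_image
    return images_per_sec / direct_max_images_per_sec

# Made-up numbers: 3x3 layers needing 3.8 GFLOPs per image on a GPU with a
# 6.1 TFLOPS peak, measured at 4000 images/sec.
print(algorithmic_speedup(4000, 3.8e9, 6.1e12))  # ~2.5
```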

For the 3×3 layers of the VGG model, Nervana Winograd is up to 3 times faster than cuDNN v4 (see Figure below). These speed-ups were also realized when we measured the end-to-end forward propagation and backward propagation times. 

[Figure: end-to-end forward propagation]

We obtained similar results with GoogleNetv2 and MSRA networks.

[Figure: backward propagation times]

[Figure: small convolutional filter sizes]

Not only is Nervana Winograd fast, it is also numerically accurate. For example, AlexNet converges exactly as it does with direct convolution, as shown below.

[Figure: AlexNet convergence]
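
The underlying transforms are exact in exact arithmetic, so a quick way to sanity-check numerical behavior is to compare the nested 2-D form of the same F(2, 3) transforms against direct convolution on a single tile. The check below is a toy SciPy/NumPy illustration (matrices repeated from the sketch above so it runs on its own), not the validation used for the full networks.

```python
import numpy as np
from scipy.signal import correlate2d

# Same F(2, 3) matrices as above, applied along both axes of a 4x4 tile
# to obtain the F(2x2, 3x3) algorithm.
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=np.float64)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float64)

rng = np.random.default_rng(0)
d = rng.standard_normal((4, 4))   # one 4x4 input tile
g = rng.standard_normal((3, 3))   # one 3x3 filter

# Winograd: Y = A^T [(G g G^T) * (B^T d B)] A, with an elementwise product.
U = G @ g @ G.T
V = B_T @ d @ B_T.T
Y_winograd = A_T @ (U * V) @ A_T.T

# Direct computation: 'valid' cross-correlation of the same tile and filter.
Y_direct = correlate2d(d, g, mode="valid")

print(np.allclose(Y_winograd, Y_direct))  # True, up to float rounding
```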

These results are also confirmed by third-party benchmarking. Our Winograd implementation is open source and can be found in the latest release of our deep learning library, neon. Look out for a forthcoming Part 2 with more technical details on Nervana Winograd. We are actively working on improving Nervana Winograd to allow your networks to train faster and handle more complex problems.

 

References

Lavin, Andrew and Gray, Scott (2015). Fast Algorithms for Convolutional Neural Networks. http://arxiv.org/abs/1509.09308
