Intel Joins Open Neural Network Exchange Ecosystem to Expand Developer Choice in Deep Learning Frameworks

2017-10-10


Jason Knight

Jason is the Senior Technology Officer for Intel AI Products. He holds a PhD in computational biology, during which he developed hierarchical Bayesian statistical models for classifying cancer tumor expression data, as well as high-performance Markov chain Monte Carlo techniques for discovering gene regulatory networks in that data using Bayesian networks. He then applied these techniques to the world’s largest database of human genomes at Human Longevity Inc. before joining Nervana to advance what is possible with machine learning.

As part of Intel’s commitment to furthering artificial intelligence across the industry, Intel announced that it is joining Microsoft*, Facebook*, and others in the Open Neural Network Exchange (ONNX) project. By joining the project, we plan to further expand the choices developers have for training on frameworks powered by the Intel® Nervana™ Graph library and for deployment through our Deep Learning Deployment Toolkit. Developers should have the freedom to choose the best software and hardware to build their artificial intelligence models, not be locked into one solution based on a framework. Deep learning is better when developers can move models from framework to framework and use the best hardware platform for the job.

Intel Nervana Graph is a hardware-independent, open source library that enables deep learning frameworks to achieve maximum performance across a wide variety of hardware platforms. In a similar spirit, the ONNX format from Microsoft and Facebook is designed to give users their choice of framework, so they are free to choose the best tool for model construction, training, and deployment.

We plan to enable users to convert ONNX models to and from Intel Nervana Graph models, giving users an even broader choice of deep learning toolkits. These converters will be simple to use and bidirectional: since both ONNX and Intel Nervana Graph serialize models as protobuf, conversion amounts to translating one protobuf format into another (see the sketch below). In addition to these converters, we are also participating in the open development of ONNX to make sure it continues to evolve into a format that can keep pace with rapid developments in both deep learning algorithms and hardware.
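To make the round trip concrete, here is a minimal sketch in Python. The onnx load, check, and save calls are the onnx library’s real API; the Intel Nervana Graph importer and exporter names are illustrative placeholders for the converters described above, so they appear only as comments.

```python
# Minimal sketch of an ONNX <-> Intel Nervana Graph round trip.
import onnx

# Load a serialized ONNX model (a protobuf file) and validate it.
model_proto = onnx.load("model.onnx")
onnx.checker.check_model(model_proto)

# Hypothetical converter call (placeholder name, not a confirmed API):
# bring the ONNX protobuf into an Intel Nervana Graph model.
# ng_model = import_onnx_model(model_proto)

# ...inspect or extend the graph, then convert back the other way.
# exported_proto = export_onnx_model(ng_model)
# onnx.save(exported_proto, "model_roundtrip.onnx")
```

Because both sides are protobuf serializations, the converters only translate graph structure and weights; no retraining is involved in the conversion itself.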

For an example of how ONNX and Intel Nervana Graph compatibility benefits users, imagine this scenario: your colleague has trained a new language model in CNTK, and you’d like to build a multi-modal fusion model on top of it that also integrates camera input. You download a trained SqueezeNet* model from the PyTorch* model zoo, import both models into the neon™ framework (through the ONNX and Intel Nervana Graph converters), add some fusion layers, and then train the final layers on your laptop or an Intel® Xeon® optimized cloud instance. From there you are free to convert back to ONNX for deployment in Caffe2*, or to quantize and prune the model using the Intel® Deep Learning Deployment Toolkit in preparation for mobile deployment.
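As a hedged illustration of the first step in that scenario, the snippet below pulls a pretrained SqueezeNet from the torchvision model zoo and serializes it to ONNX with PyTorch’s torch.onnx.export. The file name and input shape are illustrative, and the downstream import into neon is left as a comment, since that converter is described above rather than shown.

```python
# Sketch: export a pretrained SqueezeNet from PyTorch to ONNX.
import torch
import torchvision.models as models

# Fetch a pretrained SqueezeNet and switch it to inference mode.
squeezenet = models.squeezenet1_1(pretrained=True)
squeezenet.eval()

# torch.onnx.export traces the model with a dummy input of the
# expected shape (a batch of one 3x224x224 RGB image).
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(squeezenet, dummy_input, "squeezenet.onnx")

# From here, the ONNX -> Intel Nervana Graph converter described
# earlier would bring the model into neon for the fusion layers and
# fine-tuning, before converting back out for deployment.
```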

Accelerating time-to-solution is always of paramount importance to developers, and increased interoperability is one crucial way we want to support them. By supporting the ONNX format, Intel can give developers the freedom to choose the hardware and software best suited to the task.