Jun 20, 2016   |   Scott Clark

Much Deeper, Much Faster: Deep Neural Network Optimization with SigOpt and Nervana Cloud

By: Scott Clark, Ian Dewancker, and Sathish Nagappan

Tools like neon, Caffe, Theano, and TensorFlow make it easier than ever to build custom neural networks and to reproduce groundbreaking research. Recent advances in cloud-based platforms like the Nervana Cloud enable practitioners to seamlessly build, train, and deploy these powerful methods. Finding the best configurations of these deep nets and efficiently tuning their parameters, however, remains…


#Case Studies #Platform

Jan 06, 2017   |   Yinyin Liu

Building Skip-Thought Vectors for Document Understanding

The idea of recasting natural language processing (NLP) as a problem of vector-space mathematics using deep learning models has been around since 2013. A word vector, from word2vec [1], uses a string of numbers to represent a word’s meaning as it relates to other words, or its context, through training. From a word vector,…
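The excerpt above describes the core word-vector idea: each word becomes a list of numbers, and geometric closeness stands in for similarity of meaning. A minimal sketch, using made-up 4-dimensional vectors (real word2vec embeddings are learned from context and typically have 100–300 dimensions):

```python
import math

# Toy word vectors, invented for illustration only.
word_vectors = {
    "king":  [0.8, 0.6, 0.1, 0.0],
    "queen": [0.7, 0.7, 0.1, 0.1],
    "apple": [0.0, 0.1, 0.9, 0.8],
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_royal = cosine_similarity(word_vectors["king"], word_vectors["queen"])
sim_unrelated = cosine_similarity(word_vectors["king"], word_vectors["apple"])

# Related words end up closer in the vector space than unrelated ones.
assert sim_royal > sim_unrelated
```

In a trained model the same comparison works over a whole vocabulary, which is what makes vector arithmetic over meanings possible.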


#Case Studies

Dec 08, 2016   |   Anthony Ndirango

End-to-end speech recognition with neon

By: Anthony Ndirango and Tyler Lee

Speech is an intrinsically temporal signal. The information-bearing elements present in speech evolve over a multitude of timescales. The fine changes in air pressure at rates of hundreds to thousands of hertz convey information about the speakers and their location, and help us separate them from a noisy world. Slower changes in…


#neon


Oct 31, 2017   |   Shashi Jain, Katie Fritsch

A Summer of Space Exploration with Intel and NASA

This summer, Intel has been collaborating with the NASA Frontier Development Lab (FDL), an AI R&D accelerator targeting knowledge gaps useful to the space program. The NASA FDL, hosted at the SETI Institute, was established to apply AI to five specific challenges relevant to the space program: Planetary Defense (defending the Earth…


#NASA

Dec 29, 2016   |   Jennifer Myers

neon v1.8.0 released!

Highlights from this release include:

* Skip-Thought Vectors example
* Dilated convolution support
* Nesterov Accelerated Gradient option for the SGD optimizer
* MultiMetric class to allow wrapping Metric classes
* Support for serializing and deserializing encoder-decoder models
* Option to specify the number of time steps to evaluate during beam search
* A new community-contributed Docker image…


#Release Notes

Dec 22, 2015   |   Tambet Matiisen

Guest Post (Part I): Demystifying Deep Reinforcement Learning

Two years ago, a small company in London called DeepMind uploaded their pioneering paper “Playing Atari with Deep Reinforcement Learning” to arXiv. In this paper, they demonstrated how a computer learned to play Atari 2600 video games by observing just the screen pixels and receiving a reward when the game score increased. The result was remarkable,…
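The excerpt above captures the essence of the approach: an agent improves its action-value estimates from reward alone. A minimal tabular Q-learning sketch of that principle, using a hypothetical five-state toy environment invented for illustration (DQN replaces the table with a deep network over raw pixels):

```python
import random

# Toy environment: states 0..4 in a line; action 1 moves right, action 0 stays.
# Reaching the last state yields a reward of 1, mimicking a game-score increase.
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]  # action-value table Q(s, a)
alpha, gamma, epsilon = 0.1, 0.9, 0.1             # learning rate, discount, exploration

def step(state, action):
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

random.seed(0)
state = 0
for _ in range(2000):
    # Epsilon-greedy: mostly take the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    # Reset to the start once the rewarding state is reached.
    state = 0 if next_state == n_states - 1 else next_state

# After training, moving toward the reward is valued higher than staying put.
assert Q[3][1] > Q[3][0]
```

The DQN paper's contribution was scaling this update rule to pixel inputs with a convolutional network, plus stabilizers such as experience replay.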


#neon