Artificial Intelligence at the Edge

Nov 29, 2017

Naveen Rao

Vice President & General Manager of Intel's AI Products Group

Imagine being able to…
… have a camera-enabled assistant monitor your aging parents to make sure they are alert and healthy
… autonomously watch for product imperfections in factories without human interference
… identify and locate lost hikers by using vision-enhanced drones to automatically send help
… automatically recognize your petsitter and let him or her into your house

These are just a few examples of how artificial intelligence (AI) at the edge, combined with connected devices, could improve quality of life and help solve problems facing consumers and businesses today.

A convergence of several overlapping technology trends is making new usages like these possible. Edge computing – another name for applications, data, and services located at the edge of a network rather than in a centralized datacenter – is poised to grow by 35 percent annually and become a $34 billion industry by 2023. Meanwhile, the development of human-aware AI systems and the deployment of AI technologies beyond the datacenter are huge opportunities thanks to the available compute power in today’s systems.

The benefits of AI at the edge are well demonstrated in the Smart Home, where the technology can help people manage the day-to-day running of the home and provide peace of mind. Let's say, for example, you have security cameras installed at your house, run a small business, and have dozens of packages delivered monthly. How do you protect your home with so many people ringing your doorbell? In a truly Smart Home, connected devices throughout the residence will be able to perceive and respond to activity inside and outside the house, learning what is normal, what is an anomaly, and what requires action. Making these types of decisions in real time with artificial intelligence requires deep learning capabilities at the edge.
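The decision loop sketched in that paragraph - learn what is normal, flag what is anomalous, act in real time - can be illustrated in miniature. The snippet below is a hypothetical toy example, not DeepLens or Intel code: it tracks a running mean and variance of a per-frame activity score (Welford's online algorithm, so no frame history needs to be stored on the device) and flags scores that deviate sharply from what it has learned.

```python
class EdgeAnomalyDetector:
    """Toy sketch: flag activity scores far from the running norm.

    Uses Welford's online algorithm, so the device never has to
    store past frames -- a practical constraint for edge hardware.
    """

    def __init__(self, threshold=3.0):
        self.threshold = threshold  # std-devs considered anomalous
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def observe(self, score):
        """Update running statistics with one frame's activity score."""
        self.count += 1
        delta = score - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (score - self.mean)

    def is_anomaly(self, score):
        """True if score lies more than `threshold` std-devs from the mean."""
        if self.count < 2:
            return False  # not enough history to judge yet
        std = (self.m2 / (self.count - 1)) ** 0.5
        return std > 0 and abs(score - self.mean) > self.threshold * std


detector = EdgeAnomalyDetector()
for s in [1.0, 1.2, 0.9, 1.1, 1.0, 0.95]:  # ordinary doorbell traffic
    detector.observe(s)
print(detector.is_anomaly(1.05))  # typical score -> False
print(detector.is_anomaly(9.0))   # sudden spike -> True
```

In a real deployment the per-frame score would come from a deep learning model running on the camera itself; the point of the sketch is only that the "what is normal" baseline is learned on-device, so alerts fire without a round trip to the cloud.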

Intel and Amazon are enabling artificial intelligence at the edge with AWS DeepLens, announced today at the AWS re:Invent global summit. DeepLens is the world's first fully programmable, deep learning-enabled wireless video camera, designed to help developers of all abilities grow their machine learning skills and deliver AI innovation today, so we can all experience the capabilities of this technology tomorrow.

At Intel, we are committed to developing a diverse array of products that simplify the complexity of developing AI solutions ‑ from the data center and the cloud to embedded systems at the edge. The dynamic nature and rapid expansion of artificial intelligence workloads requires an adaptive and optimized set of software and services for developers to utilize as they build their own solutions.

We are thrilled to be collaborating with AWS across both edge and cloud-based solutions, covering the entire spectrum of AI. AWS DeepLens is the latest addition to Intel's artificial intelligence solutions with Amazon, which include the Intel® Speech Enabled Developer Toolkit, the Intel-powered Amazon Echo Show and Echo Look, and the Intel® Xeon® Scalable processors for AWS's new C5 instance family. C5 instances offer the lowest price per vCPU in the Amazon EC2 family and are ideal for running advanced compute-intensive workloads, including machine and deep learning. The Intel AI portfolio spans a broad suite of products and software solutions, from the edge to the cloud, enabling developers to focus on their science and drive the next wave of innovation in AI.

At Intel, we believe artificial intelligence has the power to solve some of the world’s biggest problems and are excited to be working with Amazon to enable developers to build solutions important to them.
