
Q&A with Movidius, a Division of Intel Who Just Launched the Neural Compute Stick


Recently Movidius (a division of Intel's New Technology Group) released the Neural Compute Stick: a USB-based development kit that runs embedded neural networks. With this stick, users can run neural network and computer vision models on devices with low computational power, such as a Raspberry Pi. InfoQ reached out to Gary Brown, marketing director for Movidius, Intel New Technology Group, and asked him a few questions.

InfoQ: Could you describe what the Movidius Neural Compute Stick is, and list some of its key features?

Gary Brown: I'm very excited that in the last month we launched the new Movidius™ Neural Compute Stick, the first USB-based development kit for ultra-low power embedded deep neural networks. It's basically a self-contained deep learning accelerator and development kit that enables the deployment of deep learning inference and artificial intelligence (AI) applications at the edge. It's powered by the Movidius Myriad™ 2 Vision Processing Unit (VPU), which supports deep neural network inference. And the device runs on a standard USB 3 port and requires no additional power or hardware, enabling users to seamlessly deploy PC-based prototypes to a wide range of devices natively and without cloud connectivity.

InfoQ: What developers would be interested in the Movidius Neural Compute Stick?

Brown: We focus on two audiences with our new development kit: embedded developers interested in learning about machine learning, and developers with expertise in machine learning and AI that are seeking an embedded platform for research. The Movidius Neural Compute Stick can be used in a variety of research and prototyping instances—such as robotic vision, autonomous drone, or intelligent security camera development—that leverage deep neural network (DNN) techniques for scene-segmentation, object tracking or object classification and recognition applications. And because it has the Myriad 2 VPU, it’s capable of energy-efficient deep neural network inferences within a small form factor.

InfoQ: The getting-started page says developers need a laptop running Ubuntu. Are macOS or Windows users able to use the compute stick?

Brown: Ubuntu Linux runs the Neural Compute SDK, the tools used to compile, tune, and validate the deep neural networks that the developer plans to run on the Neural Compute Stick. We have not yet announced expanded tools to support other operating systems. The key to the Neural Compute Stick is the ability to run DNNs in real time, and we launched the devkit supporting both x86 platforms and the Raspberry Pi to capture the attention of a wide audience.
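For readers who want a feel for that compile-tune-validate workflow, the sketch below shows what the steps might look like when driven from Python. It assumes the SDK's command-line tools (mvNCCompile and mvNCCheck) as documented at launch; exact tool names and flags may differ across SDK releases, and the deploy.prototxt and weights.caffemodel file names are placeholders for your own model.

```python
import subprocess

# Hedged sketch: compile a trained Caffe model into the binary "graph" file
# that the Neural Compute Stick executes. Tool names and flags follow the
# SDK documentation at launch and may differ in later releases.
subprocess.run(
    [
        "mvNCCompile",
        "deploy.prototxt",            # placeholder: network description
        "-w", "weights.caffemodel",   # placeholder: trained weights
        "-s", "12",                   # number of Myriad 2 SHAVE cores to use
        "-o", "graph",                # output graph file consumed by the API
    ],
    check=True,
)

# Optional validation step: compare results from the stick against Caffe on the host.
subprocess.run(
    ["mvNCCheck", "deploy.prototxt", "-w", "weights.caffemodel"],
    check=True,
)
```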

InfoQ: For the Neural Compute Stick, currently only the neural network framework Caffe is supported. Will there be support for other frameworks like TensorFlow, CNTK, or Theano (which all support Keras)?

Brown: Our current focus is to bring rich support for networks based on the popular Caffe framework, but we are working on plans to support other frameworks.

InfoQ: The Neural Compute Stick is powered by your Vision Processing Unit (VPU). What does this mean for users who want to run neural networks on text or audio data?

Brown: While there are certainly some challenges in natural language processing, the focus of our VPU-powered Neural Compute Stick is to provide easy access to a development platform supporting primarily visual intelligence applications. VPUs are specifically designed for accelerating machine vision tasks, which can include image processing, computer vision processing, and deep learning.

InfoQ: We were wondering if and how the stick is able to work with containerized applications. Is there Docker support, or any other way to use the compute stick with such applications?

Brown: The MvNC SDK provides an API framework that supports native application development on x86_64 with Ubuntu 16.04 and on the Raspberry Pi with Raspbian Jessie. Developers can use this framework to integrate Movidius Neural Compute Stick-accelerated real-time deep neural networks into their applications. I encourage developers to look at our developer page to see how the API works. Even when the overall application runs within a containerized platform, provided there is USB support and a Movidius Neural Compute Stick plugged in, developers are able to accelerate their applications by using the stick to run the DNN in real time.
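As an illustration of what that integration can look like, here is a minimal sketch using the SDK's Python bindings as shipped in the first release (the mvnc module). The API names reflect that early version and may have changed in later SDK releases; the graph file name and input shape are placeholders.

```python
import numpy as np
from mvnc import mvncapi as mvnc  # NCSDK v1 Python bindings (assumed installed)

# Find and open the first attached Neural Compute Stick.
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Neural Compute Stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a graph file previously produced by mvNCCompile (placeholder file name).
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Run one inference; the Myriad 2 VPU expects half-precision input tensors.
input_tensor = np.random.rand(224, 224, 3).astype(np.float16)  # placeholder image
graph.LoadTensor(input_tensor, "user object")
output, _ = graph.GetResult()
print("Top class index:", int(np.argmax(output)))

# Release the graph and close the device.
graph.DeallocateGraph()
device.CloseDevice()
```

Because the inference itself runs on the stick, a script like this can run on an x86 host or a Raspberry Pi, and inside a container, as long as the USB device is passed through to the application.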

InfoQ: At the moment, what type of networks are supported?

Brown: Currently, the Movidius Neural Compute Stick supports a number of example neural networks, including GoogLeNet and AlexNet, among others. Developers can run their custom networks as long as they are trained using the Caffe framework. For details on which Caffe layers are supported, please visit https://developer.movidius.com/.

InfoQ: A while ago, Intel released products such as RealSense (which was also added to the Intel Euclid). Are there ways to combine the NCS and RealSense technology?

Brown: Yes, of course. For developers creating deep neural networks requiring depth information, Intel RealSense technology can be combined as an input to a deep neural network running on the Movidius Neural Compute Stick.
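To make the idea concrete, the sketch below is purely illustrative: capture_depth_frame() is a hypothetical stand-in for whatever RealSense capture API the developer uses, and it assumes a graph that was compiled to accept a single-channel depth input.

```python
import numpy as np
from mvnc import mvncapi as mvnc  # NCSDK v1 Python bindings (assumed installed)

def capture_depth_frame():
    """Hypothetical stand-in for a RealSense depth capture call.

    In a real application this frame would come from the RealSense SDK;
    here we fabricate a depth map of the expected shape for illustration.
    """
    return np.random.rand(224, 224).astype(np.float32)

device = mvnc.Device(mvnc.EnumerateDevices()[0])
device.OpenDevice()

# Placeholder: a graph compiled (with mvNCCompile) for a single-channel depth input.
with open("depth_graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

depth = capture_depth_frame()
depth = (depth / depth.max()).astype(np.float16)  # normalize and convert to FP16
graph.LoadTensor(depth, "depth frame")
output, _ = graph.GetResult()

graph.DeallocateGraph()
device.CloseDevice()
```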

InfoQ: Although the NCS has only been released recently, we were wondering whether you have already seen it being used in innovative ways.

Brown: I'm glad you asked, because when we launched the Neural Compute Stick at the CVPR conference, we were thrilled to meet so many developers (both corporate developers and academic researchers) who had some unique ways of employing neural networks for visual intelligence. For example, we found interesting DNNs solving problems for robotic vacuum cleaners, autonomous flying drones, and driverless cars, as well as applications that carefully change your hair color in a video feed and a company that generates GIFs automatically from video sources using machine learning. We're already seeing a wide variety of innovative users, and we look forward to seeing more creative applications emerge from the developer community.

About Gary Brown

Gary Brown is the director of marketing for Movidius, an Intel company and division of Intel's New Technology Group, managing product marketing and strategic partnerships. Brown has previously held leadership positions in marketing and applications engineering at Dolby Laboratories, Tensilica, and Cadence. His background in embedded DSP and his passion for computer vision technology help him guide Intel's computer vision technology into this new era of machine intelligence.
