
Apple’s iPhone X Has Custom Neural Engine Processor Built In


Speaking in the Steve Jobs Theatre at Apple Park yesterday, Philip Schiller, senior vice president of worldwide marketing at Apple, described some of the technology behind the facial recognition system in the newly announced iPhone X, including a dedicated neural engine built into the A11 chip.

The facial recognition system is enabled by a new "TrueDepth" camera system made up of an infrared (IR) camera, flood illuminator, front camera and dot projector. When you look at the iPhone X, it detects your face with the flood illuminator. The infrared camera takes an IR image, and the dot projector sends out "over 30,000 invisible IR dots". Schiller explained: "We use the IR image and the dot pattern and we push them through neural networks to create a mathematical model of your face. And then we check that mathematical model against the one that we stored that you set up earlier to see if it is a match and unlock your phone."
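
The matching flow Schiller described can be pictured in code. The sketch below is purely illustrative: the types FaceDepthData and FaceModel and both functions are invented for this article, none of them exist in Apple's SDK, and the real pipeline runs inside the A11's neural engine and secure enclave.

```swift
// Purely illustrative: none of these types or functions exist in Apple's SDK.
struct FaceDepthData {
    let infraredImage: [Float]   // frame from the IR camera
    let dotPattern: [Float]      // readings from the ~30,000 projected IR dots
}

struct FaceModel {
    let embedding: [Float]       // the "mathematical model" of the face
}

// Stand-in for the trained neural network that encodes the sensor data.
func encodeFace(_ data: FaceDepthData) -> FaceModel {
    return FaceModel(embedding: data.infraredImage + data.dotPattern)
}

// Stand-in for the comparison against the model enrolled at set-up time.
func matches(_ candidate: FaceModel, enrolled: FaceModel, threshold: Float) -> Bool {
    var squaredDistance: Float = 0
    for (a, b) in zip(candidate.embedding, enrolled.embedding) {
        squaredDistance += (a - b) * (a - b)
    }
    return squaredDistance.squareRoot() < threshold
}
```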

Schiller stated that the neural networks have been trained to avoid easy spoofing attacks, such as the photographs that can fool Samsung's Galaxy S8, and that "they've even gone and worked with professional mask makers and makeup artists in Hollywood to protect against these attempts to beat Face ID". He went on to say that the chance of a random person in the population being able to unlock your iPhone using Face ID is 1 in 1,000,000, versus 1 in 50,000 for Touch ID. The facial data is stored locally on the device in Apple's secure enclave, the processing is also handled locally, and unlocking requires the user's attention, i.e. the user must be looking at the phone.
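
Third-party apps never see any of this face data directly; on iOS the biometric check is exposed through the LocalAuthentication framework, which works the same way for Face ID as it does for Touch ID. A minimal sketch, assuming iOS 11 and the standard LAContext API:

```swift
import LocalAuthentication

let context = LAContext()
var authError: NSError?

// Check whether biometric authentication (Touch ID, or Face ID on iPhone X) is available.
if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &authError) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, evaluationError in
        if success {
            // The biometric match happened on-device; no face or fingerprint
            // data is ever exposed to the app.
            print("Authenticated")
        } else {
            print("Authentication failed: \(String(describing: evaluationError))")
        }
    }
}
```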

To process the neural networks on the phone itself, Apple has built its first neural engine into the A11 chip. The neural engine is a pair of processing cores dedicated to handling "specific machine learning algorithms". These algorithms power various advanced features on the iPhone, including Face ID, Animoji, and augmented reality apps. Schiller stated that the cores can perform 600 billion operations per second.

This emphasis on on-device processing is typical of Apple's approach to machine learning. We saw it in 2016 when the company talked about its work on differential privacy, and again at WWDC this year with the introduction of the Core ML library added to iOS 11. By having hardware on the phone dedicated to AI processing, Apple sends less data off-device and better protects users' privacy. It also means the phone can handle these tasks without needing a constant data connection.
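
Core ML is the developer-facing side of this approach: a trained model ships inside the app and inference runs locally. Below is a minimal sketch of on-device image classification with Core ML and Vision, assuming iOS 11 and a bundled model; the MobileNet class stands in for whatever class Xcode generates from the .mlmodel file you add, and is not something from the article.

```swift
import CoreGraphics
import CoreML
import Vision

// `MobileNet` is the class Xcode generates from a bundled .mlmodel file;
// any image-classification model dropped into the project works the same way.
func classify(_ image: CGImage) {
    guard let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Inference runs entirely on the device; the image never leaves the phone.
        print("\(top.identifier) (\(top.confidence))")
    }

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```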

The move can also be seen in the context of a wider industry push towards specialist hardware for AI workloads. Google has already designed two generations of its Tensor Processing Unit (TPU) for its data centres, and some companies are also following Apple's lead in pushing machine learning tasks on-device. Chinese tech giant Huawei has put a similar "Neural Processing Unit" in its Kirin 970 system-on-chip, saying it can handle tasks like image recognition 20 times faster than a regular CPU, and Google has announced TensorFlow Lite, a version of TensorFlow optimised for mobile devices.
