
Machine Learning on Mobile and Edge Devices with TensorFlow Lite: Daniel Situnayake at QCon SF


At QCon SF, Daniel Situnayake presented "Machine Learning on Mobile and Edge Devices with TensorFlow Lite". The talk centered on TensorFlow Lite, a production-ready, cross-platform framework for deploying machine learning on mobile devices and embedded systems. The key takeaways included how to get started with TensorFlow Lite, how to implement on-device machine learning on a range of devices – microcontrollers in particular – and how to optimize the performance of machine learning models.


Situnayake, developer advocate for TensorFlow Lite at Google, began the presentation by explaining what machine learning is. In a nutshell, he summarized it as follows:

Traditionally, a developer feeds rules and data into an application, which then outputs answers; with machine learning, the developer or data scientist feeds in the answers and data, and the output is rules that can be applied in the future.
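That contrast can be sketched in a few lines of plain Python; the data and the one-parameter "model" below are hypothetical, chosen only to illustrate the idea:

```python
# Traditional programming: the developer writes the rule by hand.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32          # rule + data -> answer

# Machine learning: feed in data and answers, and recover the rule.
# Here a one-parameter model is fit by least squares to find the slope.
data = [0, 10, 20, 30]                   # inputs
answers = [x * 3 for x in data]          # observed outputs (true rule: y = 3x)

slope = sum(x * y for x, y in zip(data, answers)) / sum(x * x for x in data)

print(fahrenheit(100))                   # 212.0 -- rule applied to data
print(slope)                             # 3.0   -- rule recovered from data
```

The learned `slope` is then the "rule" that can be applied to future inputs, which is exactly the flow Situnayake described.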

Next, he provided a few examples of traditional programming, followed by a demo of how machine learning works using Google’s Teachable Machine. Finally, he pointed out that the two main parts of machine learning are training and inference:

Inference is most useful to do on edge devices, while training usually takes a lot of power, memory and time; three things edge devices don’t have.

After explaining machine learning, Situnayake walked through the inference process in a machine learning application, and showed how TensorFlow Lite's tooling covers each step of that process.


Source: https://qconsf.com/system/files/presentation-slides/daniel_situnayake_-_tensorflow_lite_-_qcon_sf.pdf

According to Situnayake, the drivers for implementing machine learning on devices are threefold:

  • Lower latency
  • Reduced reliance on network connectivity
  • Privacy-preservation

With these drivers, a whole new set of products and services can be made available on devices, ranging from real-time video modification to looking up the definition of a word by scanning it with a phone. Situnayake stated that more than 1,000 applications built on TensorFlow Lite currently run on more than three billion devices worldwide.

Beyond mobile devices, TensorFlow Lite can run on devices such as the Raspberry Pi (embedded Linux), Edge TPUs (hardware accelerators), and microcontrollers, which allows for machine learning "on the edge". With machine learning on the edge, developers may not have to worry about bandwidth, latency, privacy, security, and complexity. However, there are challenges, such as limited compute power – especially on microcontrollers – and limited memory and battery life. Yet, Situnayake said TensorFlow Lite mitigates some of these challenges, and allows developers to convert an existing machine learning model for use in TensorFlow Lite and deploy it on any device.

TensorFlow Lite consists of four parts:

  • It offers several models out-of-the-box, which developers can use or customize
  • It allows conversion of existing models found online or created by an organization
  • It provides support for various languages and operating systems to support the converted model and allow deployment on any device
  • It offers tools to enable optimization of the models to let them run faster and take up less space on the devices

Getting started is easy, Situnayake said: a developer makes a TensorFlow Lite model, then deploys and runs it on an edge device. Developers who don’t have a model of their own can get one, along with sample apps, from the TensorFlow Lite website. Situnayake showed some examples, such as PoseNet to estimate the position of a person's body and limbs, and MobileBERT to tackle text-understanding problems.
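Assuming a TensorFlow 2.x environment, the make-convert-run flow might look like the following sketch; the tiny Keras model here is only a stand-in for a real trained model or one downloaded from the TensorFlow Lite site:

```python
import numpy as np
import tensorflow as tf

# Stand-in model; in practice this would be a trained or downloaded model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Convert the model to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

x = np.zeros((1, 4), dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
print(y.shape)  # (1, 2)
```

On a device, the same flat buffer would be loaded by the platform's TensorFlow Lite runtime (Android, iOS, or embedded Linux) rather than from Python.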

Furthermore, Situnayake pointed out that the new support library for TensorFlow Lite is available to simplify development by providing APIs for pre- and post-processing, and, in the future, auto-generation of code. He showed some code with and without the support library.
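As a rough illustration of the kind of pre-processing such a library packages up – this is a hand-rolled stand-in in plain Python, not the support library's actual API – mapping raw uint8 pixel values into the float range a model expects could look like:

```python
# Hypothetical stand-in for an image-normalization pre-processing step:
# map uint8 pixel values [0, 255] to roughly [-1.0, 1.0], a common input
# range for mobile vision models.
def normalize(pixels, mean=127.5, std=127.5):
    return [(p - mean) / std for p in pixels]

print(normalize([0, 255]))  # [-1.0, 1.0]
```

The support library's value is that boilerplate like this (plus resizing, rotation, and output labeling) comes ready-made instead of being rewritten per app.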

Situnayake also talked about microcontrollers: small chips that do not offer an operating system and provide only a small amount of memory, code space, and compute power. TensorFlow Lite offers an efficient interpreter, optimized for these tiny microcontrollers, to run machine learning models. Situnayake provided some microcontroller use cases, such as speech detection, person detection, and gesture detection.

Lastly, Situnayake discussed making models perform well on devices. TensorFlow offers tools and techniques for improving performance across a variety of devices, ranging from hardware acceleration to pruning techniques.
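One widely used optimization in TensorFlow Lite's tooling is quantization: storing weights and activations as 8-bit integers plus a scale and zero point, which shrinks models and speeds up inference on small devices. A minimal sketch of the underlying arithmetic, with hypothetical values:

```python
# Affine int8 quantization: real ~= (q - zero_point) * scale.
def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))        # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zero_point = 0.05, 0
q = quantize(1.23, scale, zero_point)
print(q, dequantize(q, scale, zero_point))  # 25, ~1.25
```

The small gap between 1.23 and ~1.25 is the quantization error; the trade-off is a 4x smaller representation than float32 and integer-only math that cheap hardware handles well.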

Developers interested in inference on iOS, Android and Raspberry Pi can learn more about TensorFlow Lite through an online course on Udacity. For those more interested in the microcontroller side, a book focusing on TinyML will be available soon.

Additional information about Daniel Situnayake's QCon San Francisco talk can be found on the conference website; the slides are available, and the video of the talk will be released on InfoQ over the coming months.
