
Google ML Kit SDK Now Focuses on On-Device Machine Learning

Google has introduced a new ML Kit SDK designed to work in standalone mode, without the tight Firebase integration the original ML Kit SDK required. Additionally, it provides limited support for replacing its default models with custom ones for image labeling and for object detection and tracking.

Because ML Kit now focuses on on-device machine learning, apps do not experience network latency and can work offline. Additionally, the new ML Kit SDK keeps all of its data on the device, a key requirement for building privacy-preserving applications.

The ML Kit SDK retains its original feature set covering vision and natural language processing. Vision-related features include barcode scanning, face detection, image labeling, object detection and tracking, and text recognition. Natural language-related features include language identification from a string of text, on-device translation, and smart reply.
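
To give a feel for the API surface, here is a minimal sketch of one of these features, on-device translation, in Kotlin on Android. The function name is ours, and the snippet assumes the translation artifact from the new SDK is on the classpath; consult the ML Kit documentation for the exact dependency coordinates.

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

fun translateOnDevice(text: String) {
    // Configure an English-to-Spanish translator that runs entirely on-device.
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.SPANISH)
        .build()
    val translator = Translation.getClient(options)

    // The model is fetched once (here, only over Wi-Fi); after that,
    // translation works offline and no text leaves the device.
    val conditions = DownloadConditions.Builder().requireWifi().build()
    translator.downloadModelIfNeeded(conditions)
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated -> println(translated) }
                .addOnFailureListener { e -> e.printStackTrace() }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
    // Call translator.close() once the translator is no longer needed.
}
```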

Google now recommends using the new ML Kit SDK for new apps and migrating existing ones away from the older, cloud-based version. If you need the more advanced capabilities provided by the old version, such as custom model deployment and AutoML Vision Edge, you can use Firebase Machine Learning.

As mentioned, though, Google is also taking its first steps toward extending the ML Kit SDK so it can support replacing its default models with custom TensorFlow Lite models. Initially, only Image Labeling and Object Detection and Tracking support this capability, but Google plans to include more APIs.
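
The sketch below shows how the replacement might look for Image Labeling, assuming a TensorFlow Lite classifier bundled in the app's assets; the asset file name and thresholds are illustrative.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

fun labelWithCustomModel(bitmap: Bitmap) {
    // Point ML Kit at a custom TensorFlow Lite classifier bundled with the
    // app; "custom_model.tflite" is a placeholder asset name.
    val localModel = LocalModel.Builder()
        .setAssetFilePath("custom_model.tflite")
        .build()

    // Replace the default image-labeling model with the custom one.
    val options = CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.7f)
        .setMaxResultCount(5)
        .build()
    val labeler = ImageLabeling.getClient(options)

    labeler.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.text}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```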

The new ML Kit SDK will be available through Google Play Services, which means it does not need to be packaged with the app binary, reducing app size.
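
In practice, the distribution choice surfaces in the Gradle setup. A sketch in the Gradle Kotlin DSL, using barcode scanning as the example feature; the versions below are illustrative, and not every feature offers both variants, so check the ML Kit documentation for current coordinates.

```kotlin
// app/build.gradle.kts -- choosing between the unbundled and bundled variants.
dependencies {
    // Unbundled: the implementation is delivered via Google Play Services,
    // so it does not inflate the APK.
    implementation("com.google.android.gms:play-services-mlkit-barcode-scanning:16.0.0")

    // Bundled alternative: ships inside the APK and is available immediately
    // after install, at the cost of a larger binary.
    // implementation("com.google.mlkit:barcode-scanning:16.0.0")
}
```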

As a final note, the ML Kit SDK includes two new APIs: Entity Extraction, which detects entities in text and makes them actionable, and Pose Detection, which supports 33 skeletal points and makes hand and foot tracking possible. The new APIs are available to interested developers through Google's early access program.
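
Since these APIs are still in early access, their exact surface may change; the sketch below follows the shape of the entity-extraction API as ML Kit exposes it, so treat names and signatures as provisional.

```kotlin
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

fun extractEntities(text: String) {
    // Build an extractor for English; the underlying model is downloaded
    // on demand and runs on-device.
    val extractor = EntityExtraction.getClient(
        EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
    )

    extractor.downloadModelIfNeeded()
        .addOnSuccessListener {
            extractor.annotate(text)
                .addOnSuccessListener { annotations ->
                    // Each annotation spans a piece of text and carries one or
                    // more typed entities (address, date-time, phone number, ...)
                    // that the app can turn into an action.
                    annotations.forEach { annotation ->
                        annotation.entities.forEach { entity ->
                            println("${annotation.annotatedText} -> ${entity.type}")
                        }
                    }
                }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```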
