
Google’s Tensorflow Roadmap Includes Better XLA Compilation and Distributed Computing

Google has announced the next iteration of TensorFlow development. TensorFlow, the machine learning platform developed by Google, was open sourced seven years ago and is now one of the most-starred projects on GitHub, alongside PyTorch, the ML platform developed by Facebook and likewise open sourced. The development roadmap for the next few TensorFlow releases is based on four pillars: fast and scalable, applied machine learning, ready to deploy, and simplicity.

For the fast and scalable pillar, development will focus on XLA compilation, which Google expects to become the industry standard for deep-learning compilers; the goal is to make model training and inference workflows faster on CPU and GPU. Development will also focus on distributed computing: with DTensor, models can be trained across multiple devices, unlocking future ultra-large model training and deployment. Since performance matters, Google will also invest in algorithmic optimizations such as mixed-precision and reduced-precision computation to increase speed on GPUs and TPUs.
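XLA compilation is already available as an opt-in in current TensorFlow releases. A minimal sketch of what it looks like today (the function and layer shapes here are illustrative, not from the roadmap):

```python
import tensorflow as tf

# Opt a function into XLA compilation with jit_compile=True.
# XLA fuses the matmul, bias add, and ReLU into a single compiled kernel.
@tf.function(jit_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal((4, 8))
w = tf.random.normal((8, 2))
b = tf.zeros((2,))
y = dense_relu(x, w, b)
print(y.shape)  # (4, 2)
```

Without the `jit_compile=True` flag the same function runs op-by-op; the roadmap's goal is to make the compiled path the fast default rather than an opt-in.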

For the applied ML pillar, Google will invest in the KerasCV and KerasNLP packages, which are designed for applied computer-vision and natural-language-processing use cases and include a large array of pre-trained models. This pillar also covers developer resources: more code examples, guides, and documentation for popular applied machine-learning use cases will be added in order to lower the barrier to entry to machine learning.

For the ready-to-deploy pillar, efforts will focus on making it easier to export models to mobile, edge, and server backends, as well as to JavaScript. In particular, models exported to TFLite and TF.js will be easier to call. Native C++ APIs are under development, and it will become easier to deploy models developed in JAX with TensorFlow Serving, and to mobile and the web with TFLite and TF.js.
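The TFLite export path already exists in current releases; the roadmap item is about smoothing it. A minimal sketch of today's conversion flow, using a toy model for illustration:

```python
import tensorflow as tf

# A toy Keras model standing in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert the in-memory Keras model to a TFLite flatbuffer,
# ready to ship to a mobile or edge runtime.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
print(type(tflite_bytes), len(tflite_bytes) > 0)
```

The resulting bytes would typically be written to a `.tflite` file and loaded on-device with the TFLite interpreter.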

A NumPy API and an easier debugging experience will be the core features of the fourth pillar: simplicity. TensorFlow will adopt the NumPy API standard for numerics in order to become more consistent and easier to understand, and better debugging capabilities will be implemented to minimize developers' time-to-solution.
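TensorFlow already ships an experimental NumPy-compatible layer, `tf.experimental.numpy`, which hints at what the roadmap's standardization could look like in practice:

```python
import tensorflow.experimental.numpy as tnp

# Give TF tensors NumPy-style semantics (methods like .reshape, .T, etc.).
tnp.experimental_enable_numpy_behavior()

a = tnp.arange(6).reshape(2, 3)  # NumPy-style creation and reshaping
b = a.T @ a                      # transpose and matmul, NumPy syntax,
                                 # executed by TensorFlow underneath
print(b.shape)  # (3, 3)
```

Code written against this API reads like ordinary NumPy while still benefiting from TensorFlow's accelerators and graph tooling.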

Google promises that the new TensorFlow releases will be 100% backward compatible, so engineers can adopt the latest versions immediately without fear that their existing codebase might break.

A preview of the new TensorFlow capabilities is planned for Q2 2023, with the production releases to follow later the same year. The roadmap and updates can be followed on the official TensorFlow blog.
