
PyTorch 1.1 Release Improves Performance, Adds New APIs and Tools


Facebook AI Research announced the release of PyTorch 1.1. The latest version of the open-source deep-learning framework includes improved distributed-training performance, new APIs, and new visualization tools, including native support for TensorBoard.

In a recent blog post, the Facebook team highlighted several improvements to the framework. First on the list is support for TensorBoard, a deep-learning visualization tool developed by Google as part of their TensorFlow framework. TensorFlow is currently a more popular choice among deep-learning developers than PyTorch, and TensorBoard's visualization features give Google's platform an advantage. PyTorch's new integration with TensorBoard may help close that gap.
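For readers who want to try the integration, a minimal sketch of logging with the new (experimental in 1.1) `torch.utils.tensorboard` module might look like the following; the log directory, tag name, and loss values are illustrative:

```python
# Minimal sketch: log training scalars with PyTorch's native TensorBoard writer.
# The log directory, tag, and loss values here are illustrative.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/example")  # hypothetical output directory
for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in for a real training loss
    writer.add_scalar("train/loss", loss, global_step=step)
writer.close()
# Inspect the run with: tensorboard --logdir runs
```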

The team also pointed out improvements to PyTorch's JIT compiler and distributed training. Distributed training improvements allow PyTorch models to be split across multiple GPUs for training; this lets developers build larger models that are too big to fit into a single GPU's memory, but still take advantage of GPUs for reducing training time. This is another area where TensorFlow has the lead.
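As a rough sketch of what splitting a model across devices looks like in PyTorch (the layer sizes and device IDs are illustrative, and two CUDA devices are assumed):

```python
# Sketch: split one model across two GPUs (model parallelism).
# Layer sizes and device IDs are illustrative; assumes two CUDA devices.
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Place each half of the network on its own device.
        self.part1 = nn.Linear(1024, 4096).to("cuda:0")
        self.part2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        x = torch.relu(self.part1(x.to("cuda:0")))
        # Move activations between devices before the second half.
        return self.part2(x.to("cuda:1"))

model = TwoGPUModel()
out = model(torch.randn(32, 1024))  # output tensor lives on cuda:1
```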

The JIT compiler allows PyTorch models to be exported in a form that can run "in various production environments like mobile or IoT," using a native C++ library. The improvements to the JIT compiler include support for new data types as well as user-defined classes, allowing developers to write more complex model code while still maintaining the performance gains at run time.
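A common export path is tracing a model into TorchScript and saving it for the C++ runtime; the sketch below uses a toy model and file name:

```python
# Sketch: export a model via the JIT (TorchScript) for use outside Python.
# The model and file name are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

example_input = torch.randn(1, 16)
traced = torch.jit.trace(model, example_input)  # record ops into a TorchScript graph
traced.save("model.pt")
# The saved file can be loaded from C++ with torch::jit::load("model.pt").
```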

Another new feature is a module implementing multi-headed attention, the core building block of the Transformer architecture. Attention-based models achieve results comparable to recurrent neural networks (RNNs) on sequence-learning tasks such as language translation, but because they contain no recurrence, training can be parallelized across sequence positions and completed in a fraction of the time.
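The new module is exposed as `nn.MultiheadAttention`; below is a small self-attention sketch, where the dimensions are illustrative and inputs are shaped `(seq_len, batch, embed_dim)`:

```python
# Sketch: self-attention with the nn.MultiheadAttention module added in 1.1.
# Dimensions are illustrative; inputs are (seq_len, batch, embed_dim).
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 8
attn = nn.MultiheadAttention(embed_dim, num_heads)

seq_len, batch = 10, 4
x = torch.randn(seq_len, batch, embed_dim)
# Self-attention: query, key, and value are all the same tensor.
output, weights = attn(x, x, x)
print(output.shape)   # torch.Size([10, 4, 64])
print(weights.shape)  # torch.Size([4, 10, 10]), averaged over heads
```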

Users are particularly excited about the performance improvements: several PyTorch modules have been rewritten, in some cases achieving speedups of up to 30x. One commenter on Hacker News stated:

The `nn.BatchNorm` CPU inference speedup is a Big Deal™ to us.
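The speedup the commenter refers to applies to the inference path, which in code is simply batch norm in eval mode (the channel count and input shape below are illustrative):

```python
# Sketch: BatchNorm CPU inference, the path the commenter refers to.
# Channel count and input shape are illustrative.
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(64)
bn.eval()  # eval mode uses stored running statistics instead of batch statistics
with torch.no_grad():
    out = bn(torch.randn(8, 64, 32, 32))  # runs on CPU by default
```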

The new release also includes new tools "as well as products and services from [Facebook's] partnership with industry leaders such as Google." One such collaboration is the inclusion of the latest version of PyTorch in Google's AI Platform Notebooks, a hosted JupyterLab service on Google Cloud Platform.

Two of the new open-source tools, Ax and BoTorch, are used internally at Facebook for a wide range of tasks, including machine-learning hyperparameter optimization, improving infrastructure efficiency, and optimizing video playback. Both tools are released under the MIT License, which commenters noted is a departure from Facebook's previous open-source licensing practices.
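As a hedged sketch, a basic hyperparameter search with Ax's managed-loop API might look like the following; the parameter names, bounds, and toy objective are illustrative:

```python
# Sketch: hyperparameter optimization with Ax's managed loop.
# Parameter names, bounds, and the toy objective are illustrative.
from ax.service.managed_loop import optimize

def objective(params):
    # Stand-in for a real train-and-evaluate run returning a metric.
    lr, momentum = params["lr"], params["momentum"]
    return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-4, 0.1], "log_scale": True},
        {"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
    ],
    evaluation_function=objective,
    minimize=True,
)
print(best_parameters)
```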

Full release notes as well as source code for PyTorch 1.1 are available on GitHub.
 
