
PyTorch 1.3 Release Adds Support for Mobile, Privacy, and Transparency

Facebook recently announced the release of PyTorch 1.3. The latest version of the open-source deep learning framework includes new tools for mobile, quantization, privacy, and transparency.

Engineering director Lin Qiao took the stage at the recent PyTorch Developer Conference in San Francisco to highlight new features in the release, framing them in terms of PyTorch's core principles of developer efficiency and building for scale. For building at scale, the release introduces new model quantization capabilities as well as support for mobile platforms and tensor processing units (TPUs). On the developer-efficiency side, the release adds tools for model transparency and data privacy.

PyTorch Mobile brings support for "full TorchScript inference on mobile," which allows developers to use the same set of APIs as on other hardware platforms. TorchScript models can now run on Android and iOS without any format conversion, shortening the cycle from research to production. Previously, running PyTorch-developed models on mobile required converting them to the ONNX format and then loading them into Caffe2.
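A minimal sketch of the new workflow, using a pretrained torchvision model (the model and file names here are illustrative): the model is traced into TorchScript and serialized to a single .pt file that the mobile runtime can load directly.

    import torch
    import torchvision

    # Load a pretrained model and switch it to inference mode.
    model = torchvision.models.resnet18(pretrained=True)
    model.eval()

    # Trace the model into TorchScript using an example input.
    example = torch.rand(1, 3, 224, 224)
    traced = torch.jit.trace(model, example)

    # Serialize the TorchScript model; this file can be bundled into
    # an Android or iOS app and loaded by the PyTorch Mobile runtime.
    traced.save("resnet18.pt")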

Mobile and embedded devices have limited storage and compute power compared to other platforms, and running state-of-the-art models on these devices is challenging. To address this, app developers often use model quantization, which not only reduces the storage requirements of the model parameters but also speeds up inference by using integer operations instead of floating-point operations. PyTorch 1.3 includes support for post-training dynamic quantization, post-training static quantization, and quantization-aware training. Quantization can reduce model size by 4x and increase inference speed by 2x to 4x.
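Of the three modes, post-training dynamic quantization requires the least setup: weights are converted to int8 ahead of time, while activations are quantized on the fly at inference. A minimal sketch (the toy model is illustrative):

    import torch
    import torch.nn as nn

    # A toy float model; any module containing nn.Linear layers works.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    model.eval()

    # Post-training dynamic quantization: nn.Linear weights are stored
    # as int8, and activations are quantized dynamically at runtime.
    quantized_model = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )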

PyTorch 1.3 broadens the framework's support for various cloud platforms. The release includes the ability to train models on Google Cloud Platform's TPUs, scaling from a single TPU up to multi-rack TPU pods. According to Facebook, these pods can "complete ML workloads in minutes or hours that previously took days or weeks on other systems." The PyTorch team also announced that Alibaba Cloud now supports one-click deployments of PyTorch notebooks and PyTorch integration with several of Alibaba's infrastructure-as-a-service offerings.
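TPU training goes through the separately installed torch_xla package, which exposes a TPU core as an ordinary PyTorch device. A minimal single-device sketch, assuming torch_xla is installed and a TPU is attached (the model and data are illustrative):

    import torch
    import torch.nn as nn
    import torch_xla.core.xla_model as xm

    # Acquire a Cloud TPU core as a standard PyTorch device.
    device = xm.xla_device()

    model = nn.Linear(10, 2).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # One training step on synthetic data.
    data = torch.randn(8, 10, device=device)
    target = torch.randn(8, 2, device=device)
    loss = nn.functional.mse_loss(model(data), target)
    loss.backward()

    # Executes the pending XLA graph and applies the optimizer update.
    xm.optimizer_step(optimizer, barrier=True)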

Model transparency, the ability to understand models and explain their results, has become an increasingly active area of research. The new PyTorch release includes a model-interpretability tool named Captum, designed to "help developers working in PyTorch understand why their model generates a specific output." Captum provides a wide variety of attribution algorithms, which fall into three groups (a usage sketch follows the list):

  • General attribution, which determines the effect each input feature has on the result
  • Layer attribution, which determines the effect each neuron in a layer has on the result
  • Neuron attribution, which determines the effect each input feature has on the activation of a particular neuron
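A short sketch of the Captum API, using Integrated Gradients, one of the general attribution algorithms, on a toy model (the model and input are illustrative):

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # A toy network standing in for any PyTorch model.
    model = nn.Sequential(nn.Linear(3, 3), nn.ReLU(), nn.Linear(3, 2))
    model.eval()

    # Integrated Gradients estimates how much each input feature
    # contributed to the output for the requested target class.
    ig = IntegratedGradients(model)
    inputs = torch.rand(1, 3, requires_grad=True)
    attributions, delta = ig.attribute(
        inputs, target=0, return_convergence_delta=True
    )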

Users are also concerned about privacy and the use of their personal data for training models. With this release, Facebook has open-sourced CrypTen, a research platform for privacy-preserving machine learning. CrypTen integrates with PyTorch by implementing encrypted tensors and performing ML computations on encrypted data. In a blog post announcing the release, the team said:

CrypTen offers a bridge between the PyTorch platform that is already familiar to thousands of ML researchers and the long history of academic research on algorithms and systems that work effectively with encrypted data.
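A minimal sketch of the CrypTen API (the tensor values are illustrative): plain tensors are wrapped into encrypted CrypTensors, operated on while encrypted, and decrypted only on demand.

    import torch
    import crypten

    crypten.init()

    # Wrap plain tensors in encrypted CrypTensors.
    x = crypten.cryptensor(torch.tensor([1.0, 2.0, 3.0]))
    y = crypten.cryptensor(torch.tensor([4.0, 5.0, 6.0]))

    # Arithmetic runs on the encrypted values.
    z = x + y

    # Decrypt the result: tensor([5., 7., 9.])
    print(z.get_plain_text())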

The new features in PyTorch 1.3, particularly for mobile, may reflect PyTorch's attempt to expand beyond its dominance in the research community into commercial applications, where Google's TensorFlow framework is more popular. Earlier this year, Google introduced post-training quantization in its TensorFlow Lite mobile offering and cloud-TPU training in TensorFlow 2.0. Users on Hacker News noted that TensorFlow has better support for production pipelines. Others pointed out that the new mobile offering cannot leverage the iPhone's Neural Engine hardware accelerator, which requires the use of Apple's Core ML library.

On Twitter, Julien Chaumond, co-founder and CTO at Hugging Face (a natural-language processing company that uses PyTorch), compared PyTorch Mobile performance to Core ML:

On iOS, Core ML runs a ResNet50 inference in 20ms (including image preprocessing). (on iPhone 11). By comparison, the PyTorch mobile demo app ships ResNet18 (~40% of the parameters of ResNet50). A forward pass takes 80ms on the same device. So I would say for now, pros and cons of PyTorch mobile are:
Pros:
- Rather seamless model deployment (you ship a torchscript-ed .pt inside your app)
Cons:
- Very slow for medium/large models
- No GPU (or Neural engine) support.
- At least for now, requires you to write Obj-C++.

The PyTorch 1.3 source code and detailed release notes are available on GitHub.
