
PyTorch 2.1 Release Adds Automatic Dynamic Shape Support and Distributed Training Enhancements

The latest version of PyTorch adds automatic dynamic shape support and enhancements to distributed training. The PyTorch 2.1 release was announced in a keynote session at the recent PyTorch Conference 2023. ExecuTorch was introduced to improve PyTorch's performance on mobile and edge devices. Other announcements in the keynote included new members joining the PyTorch Foundation and a Docathon planned for November.

PyTorch 2.1

"torch.compile() works fantastically well for many PyG models. Overall, we observe runtime improvements of nearly up to 300%." - Matthias Fey, PyG maintainer

One notable feature of PyTorch 2.1 is automatic dynamic shape support in torch.compile, which allows compiled models to accept inputs of varying shapes. This removes the constraint of fixed input shapes, giving more flexibility for use cases such as variable batch sizes and sequence lengths while avoiding a recompile for every new input size.
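The behavior can be seen in a minimal sketch like the following, which compiles a toy function and calls it with different batch sizes; the function and shapes are illustrative, not from the release notes. On the first call torch.compile specializes on the observed shape, and when a new shape arrives it recompiles once with a symbolic dimension rather than once per size:

import torch

@torch.compile  # no dynamic=True flag needed; dynamism is detected automatically
def scale(x: torch.Tensor) -> torch.Tensor:
    return x * 2.0 + 1.0

print(scale(torch.randn(4, 8)).shape)   # first call: compiled for shape (4, 8)
print(scale(torch.randn(16, 8)).shape)  # new shape: one recompile with a dynamic dimension
print(scale(torch.randn(32, 8)).shape)  # further sizes reuse the dynamic graph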

In the realm of distributed training, enhancements were introduced through torch.distributed.checkpoint. The update improves the efficiency of distributed training by enabling training jobs to be saved and loaded across multiple ranks in parallel. This is essential for managing long-running training jobs, where regular checkpoints protect against failures without stalling all ranks on a single writer.
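A minimal sketch of this API is shown below, assuming a process group has already been initialized (for example via torchrun) and that model is a sharded module such as an FSDP-wrapped network; the checkpoint directory and helper function names are hypothetical:

import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint import FileSystemReader, FileSystemWriter

CHECKPOINT_DIR = "checkpoint/step_1000"  # hypothetical path

def save_checkpoint(model):
    # Each rank writes its own shard in parallel; no single rank needs
    # to gather the full model into memory.
    state_dict = {"model": model.state_dict()}
    dcp.save_state_dict(state_dict=state_dict,
                        storage_writer=FileSystemWriter(CHECKPOINT_DIR))

def load_checkpoint(model):
    # Each rank reads back only the shards it owns, in parallel.
    state_dict = {"model": model.state_dict()}
    dcp.load_state_dict(state_dict=state_dict,
                        storage_reader=FileSystemReader(CHECKPOINT_DIR))
    model.load_state_dict(state_dict["model"])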

PyTorch 2.1 also added support for the NumPy API within torch.compile, improving interoperability between PyTorch and NumPy. The integration allows NumPy code to be traced and executed on devices such as GPUs. Performance improvements were also part of this release, including CPU Inductor enhancements, AVX512 support, and an improved implementation of scaled dot-product attention. A prototype of torch.export was introduced, providing a mechanism for capturing full graphs, along with torch.export-based quantization to reduce model size and improve inference speed on edge devices and mobile platforms.
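The NumPy interoperability means a function written entirely against the NumPy API can be compiled by torch.compile, which maps the calls onto its Torch-backed NumPy implementation. A minimal sketch, with an illustrative function not taken from the release notes:

import numpy as np
import torch

@torch.compile
def pairwise_dist(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Plain NumPy code; under torch.compile these calls are traced and
    # can be executed through PyTorch, including on CUDA devices.
    diff = x[:, None, :] - y[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

x = np.random.randn(128, 3)
y = np.random.randn(64, 3)
print(pairwise_dist(x, y).shape)  # (128, 64)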

ExecuTorch

The introduction of ExecuTorch marks a substantial stride in PyTorch's effort to improve its performance on mobile and edge devices. A notable feature of ExecuTorch is its Lightweight Operator Registry, tailored for managing a diverse range of PyTorch models. The registry streamlines the handling of operators, the core building blocks of PyTorch models, while keeping the runtime footprint small enough for resource-constrained devices.
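The ahead-of-time side of this flow can be sketched as follows, assuming the executorch Python package and its exir API as documented at launch; the module names may differ across versions, and TinyModel is an illustrative placeholder. A model is captured with torch.export, lowered to ExecuTorch's Edge dialect, and serialized to a .pte file that the on-device runtime executes, resolving each operator through the registry:

import torch
from torch.export import export
from executorch.exir import to_edge  # assumes the executorch package is installed

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x + 1.0)

# Capture a full graph, lower it to the Edge dialect, and serialize it
# for the lightweight on-device runtime.
exported = export(TinyModel(), (torch.randn(2, 4),))
edge_program = to_edge(exported)
executorch_program = edge_program.to_executorch()

with open("tiny_model.pte", "wb") as f:
    f.write(executorch_program.buffer)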

On-device model profiling was introduced to analyze and optimize model performance directly on target devices. This real-time profiling is crucial for identifying performance bottlenecks and tuning models for better efficiency and lower latency, which is particularly beneficial for real-time applications in domains such as augmented reality, virtual reality, and IoT.

PyTorch Foundation adds new members

"We at Google are excited to be a founding member of the PyTorch Foundation and we're excited for the opportunity to work closely with other leaders in AI to help grow this amazing and innovative community." - Google

The PyTorch Foundation welcomed Huawei and Lightning AI as its newest premier members. Huawei aims to optimize PyTorch to harness the full potential of its Ascend computing platform, known for its robust computing performance in AI applications.

Lightning AI is the developer behind PyTorch Lightning, a lightweight wrapper around PyTorch that makes code more structured and reusable, simplifying the work of researchers and developers. Lightning AI's engagement with the PyTorch Foundation underscores its commitment to bolstering the PyTorch ecosystem.

Docathon

The PyTorch community organized a Docathon for November 2023. The initiative aims at refining and expanding the framework's documentation to ensure it remains up to date and user-friendly. Developers wishing to become more involved with PyTorch 2.1 can check the PyTorch YouTube channel for videos from the conference, or the event schedule for presenter slides.
