-
TensorFlow DTensor: Unified API for Distributed Deep Network Training
Recently released TensorFlow v2.9 introduces a new API for model-, data-, and space-parallel (aka spatially tiled) deep-network training. DTensor aims to decouple sharding directives from the model code by providing higher-level utilities to partition the model and the data batch across devices.
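As a rough illustration of the API (not code from the announcement), the sketch below shards a batch across a one-dimensional device mesh with tf.experimental.dtensor; the mesh size and tensor shapes are assumptions for illustration.

```python
# Hedged sketch of data-parallel sharding with DTensor (tf.experimental.dtensor, TF 2.9+).
# The mesh size and shapes are illustrative and assume 8 local devices are available.
import tensorflow as tf
from tensorflow.experimental import dtensor

# A 1-D logical mesh whose "batch" dimension spans the available devices.
mesh = dtensor.create_mesh([("batch", 8)])

# Shard the first (batch) axis across the mesh; replicate the second axis.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)

# Materialize a tensor with that layout; each device holds one slice of the batch.
batch = dtensor.call_with_layout(tf.zeros, layout, shape=(64, 128))
```

Model parallelism follows the same pattern, with layouts that shard a weight's feature axes instead of the batch axis.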
-
TensorFlow Similarity Supports Fast Query Search Index on Pre-trained Models
Francois Chollet and his team recently released TensorFlow Similarity, a Python library for TensorFlow. Similarity learning is the process of finding similar items, from similar clothes in images to identifying a person from face pictures. Deep-learning models use a method called contrastive learning to improve accuracy and efficiency in learning similarity between images.
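As a hedged sketch (the backbone, shapes, and data below are placeholders, not the library's bundled examples), TensorFlow Similarity wraps a Keras model that outputs a metric embedding, trains it with a contrastive-style loss, and then builds a fast query index over reference embeddings:

```python
# Hedged sketch of TensorFlow Similarity: embedding model, contrastive-style loss,
# and the fast query index. Backbone, shapes, and data are placeholders.
import tensorflow as tf
import tensorflow_similarity as tfsim

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tfsim.layers.MetricEmbedding(32)(x)           # normalized embedding head
model = tfsim.models.SimilarityModel(inputs, outputs)

model.compile(optimizer="adam", loss=tfsim.losses.MultiSimilarityLoss())
# model.fit(train_x, train_y, epochs=5)                 # train on labeled examples

# Index reference examples, then retrieve the nearest neighbors of a query.
# model.index(ref_x, ref_y, data=ref_x)
# neighbors = model.single_lookup(query_image, k=5)
```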
-
Google's Dev Library is a Curated Collection of Projects about Google Tech
Google has launched a new initiative aimed at creating a curated collection of open source projects related to Google technologies. Google's Dev Library will not only contain code repositories, but also articles, tools, and tutorials collected from various Internet sources.
-
Google Integrates TensorFlow Lite with Android, Adds Automatic Acceleration
Google has announced a new mobile ML stack, dubbed Android ML Platform and built around TensorFlow Lite, which aims to solve a number of problems developers face when using on-device machine learning.
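The announcement concerns the Android-side stack, but as a rough companion sketch, a Keras model can be exported to the TensorFlow Lite format that stack executes on device (the model below is a placeholder):

```python
# Illustrative sketch: export a placeholder Keras model to TensorFlow Lite,
# the on-device format the Android ML Platform is built around.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # default size/latency optimizations
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```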
-
TensorFlow 3D: Deep Learning for Autonomous Cars’ 3D Perception
Google has released TensorFlow 3D, a library that adds 3D deep-learning capabilities to the TensorFlow machine-learning framework. The new library brings tools and resources that allow researchers to develop and deploy 3D scene understanding models.
-
TensorFlow 2.4 Release Includes CUDA 11 Support and API Updates
The TensorFlow project announced the release of version 2.4.0 of the deep-learning framework, featuring support for CUDA 11, cuDNN 8, and NVIDIA's Ampere GPU architecture, as well as new strategies and profiling tools for distributed training. Other API updates include mixed-precision in Keras and a NumPy frontend.
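Two of those additions can be sketched briefly; the snippet below is illustrative and assumes hardware that benefits from float16 compute:

```python
# Sketch of two TF 2.4 features: the stable Keras mixed-precision API and the
# NumPy-compatible frontend. Values are illustrative.
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

# Compute in float16 while keeping variables in float32 (benefits recent GPUs/TPUs).
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# The NumPy frontend mirrors familiar NumPy calls on top of TensorFlow tensors.
x = tnp.reshape(tnp.arange(12.0), (3, 4))
row_sums = tnp.sum(x * x, axis=1)
```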
-
Apple's ML Compute Framework Accelerates TensorFlow Training
As part of the recent macOS Big Sur release, Apple has included the ML Compute framework. ML Compute provides optimized mathematical libraries to improve training on CPU and GPU on both Intel and M1-based Macs, with up to a 7x improvement in training times using the TensorFlow deep-learning library.
-
TensorFlow 2.3 Features Pipeline Bottleneck Reduction and Improved Preprocessing
The TensorFlow project announced the release of version 2.3.0, featuring new mechanisms for reducing input pipeline bottlenecks, Keras layers for pre-processing, and memory profiling.
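As a small, hedged example of the new preprocessing layers (the vocabulary, sizes, and model are placeholders), text vectorization can now live inside the model itself:

```python
# Sketch of the experimental Keras preprocessing layers added in TF 2.3; the
# vocabulary, sizes, and model architecture are illustrative only.
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

# TextVectorization maps raw strings to integer token ids inside the model graph.
vectorizer = preprocessing.TextVectorization(max_tokens=1000, output_sequence_length=16)
vectorizer.adapt(["deep learning", "machine learning", "input pipelines"])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorizer,
    tf.keras.layers.Embedding(1000, 8),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```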
-
Google Announces TensorFlow 2 Support in Object Detection API
Google announced support for TensorFlow 2 (TF2) in the TensorFlow Object Detection (OD) API. The release includes eager-mode compatible binaries, two new network architectures, and pre-trained weights for all supported models.
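A hedged sketch of using one of those pre-trained models follows; the path is a placeholder for a model downloaded from the TF2 Detection Zoo:

```python
# Illustrative sketch: load an exported Object Detection API SavedModel and run it
# on a dummy image. The path is a placeholder for a TF2 Detection Zoo download.
import tensorflow as tf

detect_fn = tf.saved_model.load("path/to/exported_model/saved_model")
image = tf.zeros((1, 320, 320, 3), dtype=tf.uint8)   # batched uint8 image tensor
detections = detect_fn(image)                        # dict of boxes, classes, scores
```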
-
Google ML Kit SDK Now Focuses on On-Device Machine Learning
Google has introduced a new ML Kit SDK that works standalone, without requiring the tight Firebase integration the original ML Kit SDK did. Additionally, it provides limited support for replacing its default models with custom ones for image labeling and for object detection and tracking.
-
Uber Open-Sources AI Abstraction Layer Neuropod
Uber open-sourced Neuropod, an abstraction layer for machine-learning frameworks that lets researchers build models in the framework of their choice while reducing integration effort, so the same production system can swap out models implemented in different frameworks. Neuropod currently supports several frameworks, including TensorFlow, PyTorch, Keras, and TorchScript.
-
Google Open-Sources New Higher Performance TensorFlow Runtime
Google open-sourced the TensorFlow Runtime (TFRT), a new abstraction layer for their TensorFlow deep-learning framework that allows models to achieve better inference performance across different hardware platforms. Compared to the previous runtime, TFRT improves average inference latency by 28%.
-
Google Releases Quantization Aware Training for TensorFlow Model Optimization
Google announced the release of the Quantization Aware Training (QAT) API for their TensorFlow Model Optimization Toolkit. QAT simulates low-precision hardware during the neural-network training process, adding the quantization error into the overall network loss metric, which causes the training process to minimize the effects of post-training quantization.
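As a minimal sketch of the new API (the model architecture and data are placeholders), a Keras model is wrapped so that fake-quantization ops run during training:

```python
# Minimal sketch of Quantization Aware Training with the TensorFlow Model
# Optimization Toolkit; the model architecture and data are placeholders.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# quantize_model inserts fake-quantization ops that simulate low-precision
# arithmetic, folding the quantization error into the training loss.
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# q_aware_model.fit(train_x, train_y, epochs=1)
```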
-
Google Introduces TensorFlow Developer Certification
Google has launched a certification program for its deep-learning framework TensorFlow. The certification exam is administered using a PyCharm IDE plugin, and candidates who pass can be listed in Google's world-wide Certification Directory.
-
Google Announces Beta Launch of Cloud AI Platform Pipelines
Google Cloud Platform (GCP) recently announced the beta launch of Cloud AI Platform Pipelines, a new product for automating and managing machine learning (ML) workflows, which leverages the open-source technologies TensorFlow Extended (TFX) and Kubeflow Pipelines (KFP).
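For orientation only, a minimal Kubeflow Pipelines definition of the kind the product runs looks roughly like the sketch below; the container image and training step are hypothetical:

```python
# Hedged sketch of a single-step Kubeflow Pipelines (KFP v1 SDK) pipeline; the
# container image and command are hypothetical placeholders.
import kfp
from kfp import dsl

@dsl.pipeline(name="hello-pipeline", description="Single-step example pipeline")
def hello_pipeline():
    dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/trainer:latest",   # hypothetical training image
        command=["python", "train.py"],
    )

if __name__ == "__main__":
    kfp.compiler.Compiler().compile(hello_pipeline, "hello_pipeline.yaml")
```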