TensorFlow Content on InfoQ
-
Microsoft Open-Sources Project Petridish for Deep-Learning Optimization
A team from Microsoft Research and Carnegie Mellon University has open-sourced Project Petridish, a neural architecture search algorithm that automatically builds deep-learning models that are optimized to satisfy a variety of constraints. Using Petridish, the team achieved state-of-the-art results on the CIFAR-10 benchmark with only 2.2M parameters and five GPU-days of search time.
-
Google Open-Sources ALBERT Natural Language Model
Google AI has open-sourced A Lite BERT (ALBERT), a deep-learning natural language processing (NLP) model, which uses 89% fewer parameters than the state-of-the-art BERT model, with little loss of accuracy. The model can also be scaled up to achieve new state-of-the-art performance on NLP benchmarks.
-
TensorFlow 2.1.0 Will Be the Last Version to Support Python 2
The TensorFlow project announced a release candidate for version 2.1.0. In addition to several improvements and bug fixes, this release will be the last version of the deep-learning framework to support Python 2.
-
Google Introduces New Metrics for AI-Generated Audio and Video Quality
Google AI researchers published two new metrics for measuring the quality of audio and video generated by deep-learning networks, the Fréchet Audio Distance (FAD) and Fréchet Video Distance (FVD). The metrics have been shown to have a high correlation with human evaluations of quality.
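Both metrics follow the Fréchet-distance recipe: embed real and generated samples with a pre-trained network, fit a Gaussian to each set of embeddings, and compute the Fréchet distance between the two Gaussians. The sketch below illustrates only the distance computation in NumPy/SciPy; the embedding model (and the synthetic embeddings used here) are stand-in assumptions, not part of the published metrics:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two multivariate Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2*sqrt(S1 S2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

def stats(x):
    """Mean and covariance of a batch of embedding vectors."""
    return x.mean(axis=0), np.cov(x, rowvar=False)

# Stand-in "embeddings" for real vs. generated samples.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 8))
fake = rng.normal(0.5, 1.0, size=(1000, 8))

fd_same = frechet_distance(*stats(real), *stats(real))  # ~0 for identical sets
fd_diff = frechet_distance(*stats(real), *stats(fake))  # grows with mismatch
```

A lower score means the generated distribution is closer to the real one, which is why the metrics correlate with human quality judgments.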
-
Machine Learning on Mobile and Edge Devices with TensorFlow Lite: Daniel Situnayake at QCon SF
At QCon SF, Daniel Situnayake presented "Machine learning on mobile and edge devices with TensorFlow Lite". The talk focused on TensorFlow Lite, a production-ready, cross-platform framework for deploying ML on mobile devices and embedded systems.
-
Google Introduces TensorFlow Enterprise in Beta
In a recent blog post, Google announced TensorFlow Enterprise, a cloud-based TensorFlow machine learning service that includes enterprise-grade support and managed services.
-
PyTorch and TensorFlow: Which ML Framework Is More Popular in Academia and Industry?
An article recently published on The Gradient examines the state of machine-learning frameworks in 2019. Using several metrics, the author argues that PyTorch is quickly becoming the dominant framework in research, while TensorFlow remains the dominant framework for industry applications. This item dives into the differences between the two.
-
Databricks' Unified Analytics Platform Supports AutoML Toolkit
Databricks recently announced the Unified Data Analytics Platform, including an automated machine learning tool called AutoML Toolkit. The toolkit can be used to automate various steps of the data science workflow.
-
Facebook Open-Sources RoBERTa: an Improved Natural Language Processing Model
Facebook AI open-sourced a new deep-learning natural-language processing (NLP) model, Robustly-optimized BERT approach (RoBERTa). Based on Google's BERT pre-training model, RoBERTa includes additional pre-training improvements that achieve state-of-the-art results on several benchmarks, using only unlabeled text from the world-wide web, with minimal fine-tuning and no data augmentation.
-
Denis Magda on Continuous Deep Learning with Apache Ignite
At the recent ApacheCon North America, Denis Magda spoke on continuous machine learning with Apache Ignite, an in-memory data grid. Ignite simplifies the machine-learning pipeline by performing training and hosting models in the same cluster that stores the data, and can perform "online" training to incrementally improve models when new data is available.
-
Waymo Shares Autonomous Vehicle Dataset for Machine Learning
Waymo, the self-driving technology company, released a dataset containing sensor data collected by their autonomous vehicles during more than five hours of driving. The set contains high-resolution data from lidar and camera sensors collected in several urban and suburban environments in a wide variety of driving conditions and includes labels for vehicles, pedestrians, cyclists, and signage.
-
New Technique Speeds up Deep-Learning Inference on TensorFlow by 2x
Researchers at North Carolina State University recently presented a paper at the International Conference on Supercomputing (ICS) on their new technique, "deep reuse" (DR), that can speed up inference time for deep-learning neural networks running on TensorFlow by up to 2x, with almost no loss of accuracy.
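Deep reuse exploits the observation that many activation vectors arriving at a layer are nearly identical: it clusters similar vectors at runtime and computes the layer's output once per cluster, reusing the result for every member. The sketch below is a rough, hypothetical illustration of that idea in NumPy, clustering rows by simple rounding rather than the online similarity detection used in the paper:

```python
import numpy as np

def dense_with_reuse(x, w, decimals=1):
    """Approximate x @ w by computing one matmul row per cluster of
    similar input rows (rows that round to the same key)."""
    keys = np.round(x, decimals=decimals)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()        # flatten for NumPy-version compatibility
    out_uniq = uniq @ w              # one computation per cluster
    return out_uniq[inverse], len(uniq)

rng = np.random.default_rng(2)
base = rng.normal(size=(10, 4))
# 500 inputs that are small perturbations of 10 distinct rows: lots of reuse.
x = np.repeat(base, 50, axis=0) + rng.normal(scale=0.001, size=(500, 4))
w = rng.normal(size=(4, 3))

y_approx, n_unique = dense_with_reuse(x, w)  # far fewer than 500 matmul rows
y_exact = x @ w
```

The speedup comes from `n_unique` being much smaller than the batch size, at the cost of a bounded approximation error set by the cluster granularity.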
-
Google Releases Post-Training Integer Quantization for TensorFlow Lite
Google announced new tooling for their TensorFlow Lite deep-learning framework that reduces the size of models and latency of inference. The tool converts a trained model's weights from floating-point representation to 8-bit signed integers. This reduces the memory requirements of the model and allows it to run on hardware without floating-point accelerators and without sacrificing model quality.
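In TensorFlow Lite this is exposed through the converter (setting `converter.optimizations = [tf.lite.Optimize.DEFAULT]` together with a representative dataset). The arithmetic behind the conversion can be sketched in plain NumPy; note this is a simplified min/max affine scheme for illustration, not TF Lite's exact quantization specification:

```python
import numpy as np

def quantize_int8(w):
    """Map float weights onto signed 8-bit integers via an affine
    scale and zero point derived from the weight range."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-128 - w.min() / scale).astype(np.int32)
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
max_err = np.abs(w - w_hat).max()  # bounded by roughly one quantization step
```

The int8 tensor is a quarter the size of the float32 original, and the worst-case rounding error stays on the order of one quantization step, which is why accuracy loss is typically negligible.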
-
Google Releases TensorFlow.Text Library for Natural Language Processing
Google released TensorFlow.Text, a new text-processing library for their TensorFlow deep-learning platform. The library allows several common text pre-processing operations, such as tokenization, to be handled by the TensorFlow graph computation system, improving the consistency and portability of deep-learning models for natural-language processing.
-
Google Announces TensorFlow Graphics Library for Unsupervised Deep Learning of Computer Vision Models
At a presentation during Google I/O 2019, Google announced TensorFlow Graphics, a library for building deep neural networks for unsupervised learning tasks in computer vision. The library contains 3D rendering functions written in TensorFlow, as well as tools for learning with non-rectangular mesh-based input data.