Machine Learning Content on InfoQ
-
Amazon Rekognition Introduces Streaming Video Events
AWS recently announced the general availability of Streaming Video Events, a new Amazon Rekognition feature that provides real-time alerts on live video streams.
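The feature is configured through Rekognition stream processors attached to Kinesis Video Streams. Below is a rough boto3 sketch; the ARNs, bucket, and topic names are placeholders, and the connected-home settings should be checked against the current API reference.

```python
import boto3

rekognition = boto3.client("rekognition")

# Placeholder ARNs and names; replace with real resources.
rekognition.create_stream_processor(
    Name="front-door-events",
    RoleArn="arn:aws:iam::123456789012:role/RekognitionStreamRole",
    Input={"KinesisVideoStream": {
        "Arn": "arn:aws:kinesisvideo:us-east-1:123456789012:stream/front-door/1"}},
    # Streaming Video Events writes event snapshots to S3 and notifies via SNS.
    Output={"S3Destination": {"Bucket": "my-rekognition-events"}},
    NotificationChannel={
        "SNSTopicArn": "arn:aws:sns:us-east-1:123456789012:rekognition-alerts"},
    # Connected-home settings: which labels should trigger an alert.
    Settings={"ConnectedHome": {"Labels": ["PERSON", "PACKAGE"],
                                "MinConfidence": 80}},
)
```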
-
New GraphWorld Tool Accelerates Graph Neural-Network Benchmarking
Google AI has recently released GraphWorld, a tool to accelerate performance benchmarking of graph neural networks (GNNs). GraphWorld is a configurable framework for generating graphs with a variety of structural properties, such as different node-degree distributions and Gini indices.
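GraphWorld's own API is not shown in the announcement; the sketch below only illustrates the underlying idea (populating a synthetic benchmark "world" with graphs of varying degree structure and measuring the Gini coefficient of their degree distributions), using NetworkX rather than GraphWorld itself.

```python
import networkx as nx
import numpy as np

def degree_gini(graph: nx.Graph) -> float:
    """Gini coefficient of the node-degree distribution (0 = perfectly even)."""
    degrees = np.sort(np.array([d for _, d in graph.degree()], dtype=float))
    n = len(degrees)
    cumulative = np.cumsum(degrees)
    return (n + 1 - 2 * np.sum(cumulative) / cumulative[-1]) / n

# Two synthetic graph "worlds" with very different degree structure.
uniform_world = nx.erdos_renyi_graph(n=1000, p=0.01, seed=0)   # near-Poisson degrees
skewed_world = nx.barabasi_albert_graph(n=1000, m=5, seed=0)   # heavy-tailed degrees

print("Erdos-Renyi degree Gini:", round(degree_gini(uniform_world), 3))
print("Barabasi-Albert degree Gini:", round(degree_gini(skewed_world), 3))
```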
-
TensorFlow DTensor: Unified API for Distributed Deep Network Training
The recently released TensorFlow v2.9 introduces DTensor, a new API for model-, data-, and space-parallel (aka spatially tiled) deep-network training. DTensor aims to decouple sharding directives from model code by providing higher-level utilities to partition the model and batch parameters across devices.
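A minimal sketch of the idea, based on the tf.experimental.dtensor module shipped with TF 2.9; the logical-CPU setup is only there so the example runs without multiple accelerators.

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Split the single physical CPU into 8 logical devices so the example
# runs anywhere; on real hardware the mesh would span GPUs or TPUs.
cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    cpu, [tf.config.LogicalDeviceConfiguration()] * 8)

# A 1-D mesh with a "batch" dimension spanning the 8 devices.
mesh = dtensor.create_mesh([("batch", 8)],
                           devices=[f"CPU:{i}" for i in range(8)])

# Shard axis 0 along "batch", replicate axis 1 on every device.
layout = dtensor.Layout(["batch", dtensor.UNSHARDED], mesh)

# Create a distributed tensor; model code only sees a regular tf.Tensor.
x = dtensor.call_with_layout(tf.zeros, layout, shape=(32, 128))
print(dtensor.fetch_layout(x))
```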
-
Meta AI’s New Data Set to Accelerate Renewable Energy Catalyst Discovery for Hydrogen Fuel
Meta AI recently announced that it will soon release an entirely new data set for ML modeling and simulation of green hydrogen fuel, focused on oxide catalysts for the oxygen evolution reaction (OER), a critical chemical reaction in producing green hydrogen from wind and solar energy.
-
Google Announces General Availability of Cloud TPU VMs
Last year Google introduced Cloud TPU Virtual Machines (VMs) in preview, providing direct access to TPU host machines. Cloud TPU VMs are now generally available, including the new TPU Embedding API, which can accelerate ML-based ranking and recommendation workloads.
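A minimal sketch of how code running directly on a TPU VM typically attaches to the local TPU with TensorFlow; the "local" resolver reflects the standard TPU VM setup, and the TPU Embedding API itself is not shown.

```python
import tensorflow as tf

# On a Cloud TPU VM the TPU chips are attached to the host itself,
# so the resolver can point at the local runtime.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Variables created inside the scope are replicated across TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```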
-
Amazon SageMaker Serverless Inference Now Generally Available
Amazon recently announced that SageMaker Serverless Inference is generally available. Designed for workloads with intermittent or infrequent traffic patterns, the new option provisions and scales compute capacity according to the volume of inference requests the model receives.
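A rough boto3 sketch of the serverless option; the model and endpoint names are placeholders, and the key piece is the ServerlessConfig block in the endpoint configuration.

```python
import boto3

sm = boto3.client("sagemaker")

# Placeholder names; the model is assumed to be registered already.
sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-registered-model",
        # Serverless capacity instead of an instance type/count:
        # memory size drives the allocated compute, MaxConcurrency
        # caps the number of concurrent invocations.
        "ServerlessConfig": {
            "MemorySizeInMB": 2048,
            "MaxConcurrency": 5,
        },
    }],
)
sm.create_endpoint(EndpointName="my-serverless-endpoint",
                   EndpointConfigName="my-serverless-config")
```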
-
Serving Deep Networks in Production: Balancing Productivity vs Efficiency Tradeoff
A recently published work proposes an alternative approach to serving deep neural networks: running eager-mode model code directly in production workloads via embedded CPython interpreters. The goal is to reduce the engineering effort required to bring models from the research stage to end users, and to create a proof-of-concept platform for future numerical libraries.
-
From Natural Language Queries to Insights: GCP BigQuery Data QnA Usage in Twitter
The Twitter engineering team has shared architectural details of its Qurious data insights platform and the advantages it brings for real-time analysis. Designed for internal business customers, the platform allows users to analyze Twitter’s BigQuery data using natural language queries and create dashboards.
-
AlphaCode: Competitive Code Synthesis with Deep Learning
The AlphaCode study brings promising results for goal-oriented code synthesis using deep sequence-to-sequence models. It extends previous network architectures and releases a new dataset, CodeContests, to serve as a benchmark for future research.
-
Waymo Releases Block-NeRF 3D View Synthesis Deep-Learning Model
Waymo released a groundbreaking deep-learning model called Block-NeRF for large-scale 3D view synthesis, reconstructing world views from images collected by its self-driving cars. NeRF (neural radiance fields) can encode surface and volume representations in neural networks.
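For reference, the standard NeRF volume-rendering formulation (from the original NeRF line of work, not specific to Block-NeRF) predicts a pixel color by integrating the learned density σ and view-dependent color c along a camera ray r(t) = o + t·d:

```latex
\hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma\big(\mathbf{r}(t)\big)\,
  \mathbf{c}\big(\mathbf{r}(t), \mathbf{d}\big)\,dt,
\qquad
T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma\big(\mathbf{r}(s)\big)\,ds\Big)
```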
-
How GitHub Uses Machine Learning to Extend Vulnerability Code Scanning
By applying machine-learning techniques to its rule-based security code scanning capabilities, GitHub hopes to extend them to less common vulnerability patterns by automatically inferring new rules from the existing ones.
-
PipelineDP Brings Google’s Differential-Privacy Library to Python
Google and OpenMined have released PipelineDP, a new open-source library that allows researchers and developers to apply differentially private aggregations to large datasets using batch-processing systems.
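A minimal local-backend sketch of the workflow, based on the library's published examples; exact parameter names may vary between releases, and the data rows here are made up. With so few users, private partition selection may drop partitions, which is expected behavior at this scale.

```python
from collections import namedtuple
import pipeline_dp

Visit = namedtuple("Visit", ["user_id", "day", "spent"])
rows = [Visit("u1", "2022-01-01", 10), Visit("u2", "2022-01-01", 5),
        Visit("u1", "2022-01-02", 7)]

# Local (in-memory) backend; Beam and Spark backends follow the same pattern.
backend = pipeline_dp.LocalBackend()
budget = pipeline_dp.NaiveBudgetAccountant(total_epsilon=1.0, total_delta=1e-6)
engine = pipeline_dp.DPEngine(budget, backend)

params = pipeline_dp.AggregateParams(
    noise_kind=pipeline_dp.NoiseKind.LAPLACE,
    metrics=[pipeline_dp.Metrics.COUNT, pipeline_dp.Metrics.SUM],
    max_partitions_contributed=2,        # per-user contribution bounds
    max_contributions_per_partition=1,
    min_value=0,
    max_value=20)

extractors = pipeline_dp.DataExtractors(
    partition_extractor=lambda v: v.day,
    privacy_id_extractor=lambda v: v.user_id,
    value_extractor=lambda v: v.spent)

result = engine.aggregate(rows, params, extractors)
budget.compute_budgets()
print(list(result))
```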
-
LambdaML: Pros and Cons of Serverless for Deep Network Training
A new study entitled "Towards Demystifying Serverless Machine Learning Training" provides an experimental analysis of training deep networks on serverless platforms. Using FaaS for training is challenging due to its distributed nature and the aggregation step in learning algorithms. The results indicate that FaaS can be a faster (for lightweight models) but not a cheaper alternative to IaaS.
-
Meta AI’s Convolution Networks Upgrade Improves Image Classification
Meta AI released a new generation of improved convolutional networks (ConvNeXt), achieving state-of-the-art performance of 87.8% top-1 accuracy on ImageNet and outperforming Swin Transformers on the COCO object-detection benchmark. The new design and training approach are inspired by the Swin Transformer model.
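ConvNeXt implementations are available in common vision libraries; below is a minimal inference sketch, assuming a torchvision release that ships the convnext_tiny model and its ImageNet weights.

```python
import torch
from torchvision import models

# Assumes a torchvision version that includes ConvNeXt (0.13+).
weights = models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1
model = models.convnext_tiny(weights=weights).eval()
preprocess = weights.transforms()   # resizing/normalization used for these weights

# Dummy image batch; replace with real images for actual classification.
batch = preprocess(torch.rand(3, 256, 256)).unsqueeze(0)
with torch.no_grad():
    probs = model(batch).softmax(dim=1)
print(weights.meta["categories"][probs.argmax(dim=1).item()])
```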
-
Evaluating Continual Deep Learning: a New Benchmark for Image Classification
Continual learning aims to preserve knowledge across deep-network training iterations. A new dataset, described in the recently published paper "The CLEAR Benchmark: Continual LEArning on Real-World Imagery," has been released. The goal of the study is to establish a consistent image-classification benchmark that follows the natural temporal evolution of objects, enabling a more realistic comparison of continual-learning models.