AI, ML & Data Engineering Content on InfoQ
-
PipelineDP Brings Google’s Differential-Privacy Library to Python
Google and OpenMined have released PipelineDP, a new open-source library that allows researchers and developers to apply differentially private aggregations to large datasets using batch-processing systems.
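A minimal local-mode sketch of what such an aggregation might look like is shown below; the input rows, column roles, and privacy parameters are illustrative assumptions, and the API follows the project's published examples, so verify against the current PipelineDP documentation before use.

```python
import pipeline_dp

# Illustrative input: (user_id, partition_key, value) tuples are an assumption.
rows = [("user_1", "2022-01-01", 3.0), ("user_2", "2022-01-01", 5.0)]

backend = pipeline_dp.LocalBackend()  # Beam and Spark backends also exist
budget = pipeline_dp.NaiveBudgetAccountant(total_epsilon=1.0, total_delta=1e-6)
engine = pipeline_dp.DPEngine(budget, backend)

params = pipeline_dp.AggregateParams(
    noise_kind=pipeline_dp.NoiseKind.LAPLACE,
    metrics=[pipeline_dp.Metrics.COUNT, pipeline_dp.Metrics.SUM],
    max_partitions_contributed=1,        # contribution bounds per user
    max_contributions_per_partition=1,
    min_value=0.0,
    max_value=10.0)

extractors = pipeline_dp.DataExtractors(
    privacy_id_extractor=lambda row: row[0],
    partition_extractor=lambda row: row[1],
    value_extractor=lambda row: row[2])

result = engine.aggregate(rows, params, extractors)
budget.compute_budgets()
print(list(result))  # differentially private per-partition COUNT and SUM
```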
-
Alibaba Open-Sources AutoML Algorithm KNAS
Researchers from Alibaba Group and Peking University have open-sourced Kernel Neural Architecture Search (KNAS), an efficient automated machine learning (AutoML) algorithm that can evaluate proposed architectures without training them. KNAS uses a gradient kernel as a proxy for model quality and requires an order of magnitude less compute than baseline methods; a rough sketch of the idea follows.
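The sketch below illustrates the general gradient-kernel idea (scoring an untrained architecture by the mean of the Gram matrix of per-sample gradients); it is a simplified approximation written for clarity, not the authors' implementation, and the toy models and data are assumptions.

```python
import torch
import torch.nn as nn

def gradient_kernel_score(model, loss_fn, inputs, targets):
    """Score an untrained architecture by the mean of the Gram matrix of
    per-sample gradients (a rough sketch of a gradient-kernel proxy)."""
    per_sample_grads = []
    for x, y in zip(inputs, targets):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(
            loss, [p for p in model.parameters() if p.requires_grad])
        per_sample_grads.append(torch.cat([g.flatten() for g in grads]))
    g = torch.stack(per_sample_grads)   # shape: (num_samples, num_params)
    gram = g @ g.T                      # gradient kernel
    return gram.mean().item()           # higher score -> more promising candidate

# Toy usage: rank two candidate architectures without training either one.
inputs = torch.randn(8, 16)
targets = torch.randint(0, 2, (8,))
candidates = [
    nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)),
    nn.Sequential(nn.Linear(16, 8), nn.Tanh(), nn.Linear(8, 2)),
]
scores = [gradient_kernel_score(m, nn.CrossEntropyLoss(), inputs, targets)
          for m in candidates]
print(scores)
```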
-
LambdaML: Pros and Cons of Serverless for Deep Network Training
A new study entitled "Towards Demystifying Serverless Machine Learning Training" provides an experimental analysis of training deep networks on serverless platforms. FaaS poses challenges for training because of its distributed nature and the aggregation step in learning algorithms. The results indicate that FaaS can be faster than IaaS for lightweight models, but not cheaper.
-
Meta AI’s Convolution Networks Upgrade Improves Image Classification
Meta AI has released ConvNeXt, a new generation of improved convolutional networks that achieves state-of-the-art performance of 87.8% top-1 accuracy on ImageNet and outperforms Swin Transformers on the COCO object-detection benchmark. The new design and training approach is inspired by the Swin Transformer model.
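For readers who want to try the released models, they are distributed through common vision libraries; the snippet below assumes the timm package and the "convnext_base" checkpoint name, which differs from the larger ImageNet-22k pre-trained variant behind the 87.8% figure.

```python
import timm
import torch

# Assumes the released ConvNeXt weights are available in timm under this name.
model = timm.create_model("convnext_base", pretrained=True)
model.eval()

# Dummy ImageNet-sized input; replace with a properly preprocessed image.
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(image)
print(logits.shape)  # (1, 1000) class logits
```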
-
Evaluating Continual Deep Learning: a New Benchmark for Image Classification
Continual learning aims to preserve knowledge across deep network training iterations. A new dataset entitled "The CLEAR Benchmark: Continual LEArning on Real-World Imagery" has recently been published. The study aims to establish a consistent image classification benchmark that captures the natural temporal evolution of objects, allowing a more realistic comparison of continual learning models.
-
Meta Unveils AI Supercomputer for the Metaverse
Meta has unveiled its AI Research SuperCluster (RSC) supercomputer, aimed at accelerating AI research and helping the company build the metaverse. The RSC will help the company build new and better AI models that work across hundreds of different languages, and develop new augmented reality tools.
-
University Researchers Develop Brain-Computer Interface for Robot Control
Researchers from École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the University of Texas at Austin (UT) have developed a brain-computer interface (BCI) that allows users to modify a robot manipulator's motion trajectories. The system uses inverse reinforcement learning (IRL) and can learn a user's preferences from fewer than five demonstrations.
-
Google Introduces Autoscaling for Cloud Bigtable for Optimizing Costs
Cloud Bigtable is a fully managed, scalable NoSQL database service for large operational and analytical workloads on the Google Cloud Platform (GCP). The public cloud provider recently announced the general availability of Bigtable autoscaling, which automatically adds or removes capacity in response to changing application demand, allowing cost optimization.
-
Amazon OpenSearch Adds Anomaly Detection for Historical Data
Amazon OpenSearch recently introduced support for anomaly detection on historical data. The machine-learning-based feature helps identify trends, patterns, and seasonality in OpenSearch data.
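A historical analysis is kicked off through the anomaly-detection plugin's REST API by passing a past date range when starting a detector; the sketch below uses the requests library, and the host, credentials, detector ID, and payload shape are assumptions to check against the plugin documentation for your OpenSearch version.

```python
import requests

# Hypothetical cluster endpoint and detector ID; adjust for your environment.
host = "https://localhost:9200"
detector_id = "my-detector-id"

# Start a historical analysis over a past time window (epoch milliseconds).
payload = {
    "start_time": 1609459200000,   # 2021-01-01
    "end_time": 1640995200000,     # 2022-01-01
}
response = requests.post(
    f"{host}/_plugins/_anomaly_detection/detectors/{detector_id}/_start",
    json=payload,
    auth=("admin", "admin"),   # placeholder credentials
    verify=False,              # for a local test cluster only
)
print(response.json())
```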
-
OpenAI Announces Question-Answering AI WebGPT
OpenAI has developed WebGPT, an AI model for long-form question-answering based on GPT-3. WebGPT can use web search queries to collect supporting references for its response, and on Reddit questions its answers were preferred by human judges over the highest-voted answer 69% of the time.
-
InfoQ 2022 Events: Get Ready to Deep-Dive with Leading Software Practitioners
Our events will be both online (InfoQ Live and QCon Plus) and in-person once again with our QCon software development conferences in London (April 4-6) and San Francisco (October 24-28). Join us to find practical inspiration to help you adopt the patterns and practices this year.
-
AI Listens by Seeing as Well
Meta AI has released a self-supervised speech recognition model that also uses video and achieves up to 75% better accuracy than current state-of-the-art models for certain amounts of training data. The new model, Audio-Visual Hidden Unit BERT (AV-HuBERT), uses audiovisual features to improve on models that rely on audio alone; the visual features are based on lip movements, similar to how humans lip-read.
-
Meta and AWS to Collaborate on PyTorch Adoption
Meta and AWS will work together to improve the performance of PyTorch applications running on AWS for customers, and to accelerate how developers build, train, deploy, and operate artificial intelligence and machine-learning models.
-
Facebook Open-Sources Two Billion Parameter Multilingual Speech Recognition Model XLS-R
Facebook AI Research (FAIR) open-sourced XLS-R, a cross-lingual speech recognition (SR) AI model. XLS-R is trained on 436K hours of speech audio from 128 languages, an order of magnitude more than the largest previous models, and outperforms the current state of the art on several downstream SR and translation tasks.
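The pretrained checkpoints can be loaded with the Hugging Face Transformers library; the sketch below assumes the two-billion-parameter model is published on the hub as "facebook/wav2vec2-xls-r-2b" and simply extracts frame-level representations from a dummy waveform.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Hub name is an assumption based on how the XLS-R checkpoints are published.
checkpoint = "facebook/wav2vec2-xls-r-2b"

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)
model.eval()

# One second of silent 16 kHz audio as a stand-in for real speech samples.
waveform = [0.0] * 16000
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden.shape)
```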
-
MLCommons Announces Latest MLPerf Training Benchmark Results
Engineering consortium MLCommons recently announced the results of the latest round of its MLPerf Training benchmark competition. More than 158 AI training-performance results were submitted by 14 organizations, with the best results improving by up to 2.3x compared to the previous round.