Artificial Intelligence Content on InfoQ
-
Google Releases Two New NLP Dialog Datasets
Researchers from Google AI released two new dialog datasets for natural-language processing (NLP) development: Coached Conversational Preference Elicitation (CCPE) and Taskmaster-1. The datasets contain thousands of conversations as well as labels and annotations for training digital assistants to better determine users' preferences and intentions.
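As an illustration, here is a minimal sketch of iterating over a Taskmaster-1 dialog file; the file name and JSON field names (`utterances`, `speaker`, `text`) are taken from the dataset's GitHub release and should be verified against the actual files:

```python
import json

# Load one of the Taskmaster-1 dialog files (file name and schema assumed
# from the dataset's GitHub release).
with open("self-dialogs.json") as f:
    dialogs = json.load(f)

# Print the first conversation turn by turn.
for utterance in dialogs[0]["utterances"]:
    print(f"{utterance['speaker']}: {utterance['text']}")
```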
-
WebExpo 2019: Make Healthcare Affordable and Accessible Using Tech and AI
Anna Zawilska, lead user researcher at Babylon Health, recently presented at WebExpo 2019 in Prague the lessons learnt from delivering remote healthcare through a combination of technology and artificial intelligence (AI). Along the way, Babylon Health had to revise three key assumptions underpinning its product development.
-
Facebook, Microsoft, and Partners Announce Deepfake Detection Challenge
Facebook, Microsoft, the Partnership on AI, and researchers from several universities have created the Deepfake Detection Challenge (DDC), a contest to produce AI that can detect misleading images and video that have been created by AI. The challenge includes several grants and awards for the teams that create the best AI solution, using the DDC's dataset of real and fake videos.
-
Waymo Shares Autonomous Vehicle Dataset for Machine Learning
Waymo, the self-driving technology company, released a dataset containing sensor data collected by their autonomous vehicles during more than five hours of driving. The set contains high-resolution data from lidar and camera sensors collected in several urban and suburban environments in a wide variety of driving conditions and includes labels for vehicles, pedestrians, cyclists, and signage.
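For illustration, a sketch of reading one frame from a released segment using the waymo-open-dataset package; class and field names follow the package's tutorial and may differ across versions, and the file name is a placeholder:

```python
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

# Each released segment is a TFRecord file of serialized Frame protos.
dataset = tf.data.TFRecordDataset("segment.tfrecord")
for record in dataset.take(1):
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(record.numpy()))
    # Camera images live in frame.images; labeled objects (vehicles,
    # pedestrians, cyclists, signs) in frame.laser_labels.
    print(len(frame.images), "camera images,",
          len(frame.laser_labels), "lidar labels")
```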
-
Introducing KiloGram, a New Technique for AI Detection of Malware
A team of researchers recently presented their paper on KiloGram, a new algorithm for efficiently extracting large n-grams from files to improve machine-learning detection of malware. The new algorithm is 60x faster than previous methods and can handle n-grams for n=1024 or higher. Such large values of n also have applications in interpretable malware analysis and signature generation.
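The core idea of hashing very large byte n-grams instead of storing them can be sketched with a polynomial rolling hash; this is illustrative only and not the authors' algorithm, whose top-k selection strategy is more involved:

```python
from collections import Counter

def hashed_ngram_counts(data: bytes, n: int = 1024, buckets: int = 2**20):
    """Count byte n-grams by a rolling hash rather than by storing the
    n-grams themselves, so n can be very large (illustrative sketch)."""
    BASE, MOD = 257, (1 << 61) - 1
    counts = Counter()
    if len(data) < n:
        return counts
    h, top = 0, pow(BASE, n - 1, MOD)  # `top` removes the oldest byte
    for b in data[:n]:
        h = (h * BASE + b) % MOD
    counts[h % buckets] += 1
    for i in range(n, len(data)):
        h = ((h - data[i - n] * top) * BASE + data[i]) % MOD
        counts[h % buckets] += 1
    return counts
```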
-
How Artificial Intelligence Impacts Designing Products
Artificial intelligence is changing the way we interact with technology; eliminating unnecessary interfaces makes interaction with machines more humane, argued Agnieszka Walorska at ACE conference 2019. Expectations around customer experience have changed, and one factor that is becoming increasingly important to this change is machine learning.
-
New Technique Speeds up Deep-Learning Inference on TensorFlow by 2x
Researchers at North Carolina State University recently presented a paper at the International Conference on Supercomputing (ICS) on their new technique, "deep reuse" (DR), that can speed up inference time for deep-learning neural networks running on TensorFlow by up to 2x, with almost no loss of accuracy.
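The underlying idea, clustering similar input rows so a matrix multiply is computed only once per cluster and its result reused, can be illustrated with a toy NumPy sketch; the paper's actual clustering and accuracy controls are more sophisticated:

```python
import numpy as np

def deep_reuse_matmul(X, W, n_bits=8):
    """Group similar rows of X via a random-projection LSH signature,
    multiply one representative row per group by W, and share the result
    (toy illustration of the reuse idea, not the paper's method)."""
    rng = np.random.default_rng(0)
    P = rng.standard_normal((X.shape[1], n_bits))
    signatures = X @ P > 0                         # LSH bit signature per row
    keys = signatures @ (1 << np.arange(n_bits))   # pack bits into int keys
    _, first, inverse = np.unique(keys, return_index=True, return_inverse=True)
    Y = X[first] @ W                               # compute unique rows only
    return Y[inverse]                              # broadcast results back

X = np.random.rand(1000, 64).round(1)  # coarse values -> many similar rows
W = np.random.rand(64, 32)
print(np.abs(deep_reuse_matmul(X, W) - X @ W).mean())  # approximation error
```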
-
Predicting the Future, Amazon Forecast Reaches General Availability
In a recent blog post, Amazon announced the general availability (GA) of Amazon Forecast, a fully managed time-series forecasting service. Amazon Forecast applies deep learning across multiple datasets and algorithms to make predictions in areas such as product demand, travel demand, financial planning, SAP and Oracle supply-chain planning, and cloud-computing usage.
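For example, an existing forecast can be queried through boto3's forecastquery client; the ARN, item id, and response shape below are assumptions to check against the service documentation (creating datasets and predictors involves several further create_* calls):

```python
import boto3

client = boto3.client("forecastquery")
response = client.query_forecast(
    ForecastArn="arn:aws:forecast:us-east-1:123456789012:forecast/demo",
    Filters={"item_id": "sku-123"},  # placeholder item id
)
# The response groups predictions by quantile, e.g. p10/p50/p90.
for point in response["Forecast"]["Predictions"]["p50"]:
    print(point["Timestamp"], point["Value"])
```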
-
University Research Teams Open-Source Natural Adversarial Image Dataset for Computer-Vision AI
Research teams from three universities recently released a dataset called ImageNet-A, containing natural adversarial images: real-world images that are misclassified by image-recognition AI. When several state-of-the-art pre-trained models were evaluated on this test set, they achieved an accuracy of less than 3%.
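A minimal evaluation sketch in PyTorch, assuming the released images are laid out in an ImageFolder-compatible directory; note that ImageNet-A covers only 200 of the 1,000 ImageNet classes, so a faithful evaluation must remap class indices using the mapping shipped with the dataset, which this sketch skips:

```python
import torch
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = torch.utils.data.DataLoader(
    datasets.ImageFolder("imagenet-a", tfm), batch_size=32)

model = models.resnet50(pretrained=True).eval()
correct = total = 0
with torch.no_grad():
    for images, labels in loader:  # labels here are folder indices,
        preds = model(images).argmax(dim=1)  # not remapped ImageNet classes
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"accuracy: {correct / total:.1%}")
```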
-
Microsoft Open-Sources TensorWatch AI Debugging Tool
Microsoft Research open-sourced TensorWatch, their debugging tool for AI and deep learning. TensorWatch supports PyTorch as well as TensorFlow eager tensors, and allows developers to interactively debug training jobs in real time via Jupyter notebooks, or to build their own custom UIs in Python.
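A minimal sketch of the file-backed logging flow, with API names taken from the TensorWatch README (they may change between releases):

```python
import time
import tensorwatch as tw

# Training-script side: write (step, loss) pairs to a file-backed stream.
watcher = tw.Watcher(filename="run.log")
stream = watcher.create_stream(name="loss")
for step in range(100):
    stream.write((step, 1.0 / (step + 1)))
    time.sleep(0.5)

# Jupyter side (separate process): attach to the same file and plot live.
# client = tw.WatcherClient(filename="run.log")
# tw.Visualizer(client.open_stream(name="loss"), vis_type="line").show()
```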
-
Baidu Open-Sources ERNIE 2.0, Beats BERT in Natural Language Processing Tasks
In a recent blog post, Baidu, the Chinese search engine and e-commerce giant, announced their latest open-source natural-language understanding framework, ERNIE 2.0. They also shared recent benchmark results showing state-of-the-art (SOTA) performance, outperforming existing frameworks, including Google’s BERT and XLNet, on 16 NLP tasks in both Chinese and English.
-
The First AI to Beat Pros in 6-Player Poker, Developed by Facebook and Carnegie Mellon
Facebook AI Research’s Noam Brown and Carnegie Mellon professor Tuomas Sandholm recently announced Pluribus, the first artificial-intelligence program able to beat human professionals in six-player no-limit Texas hold’em poker. In recent years, computers have progressively improved, beating humans at checkers, chess, Go, and the TV quiz show Jeopardy!. Poker poses additional challenges around hidden information and bluffing.
-
Researchers Develop Technique for Reducing Deep-Learning Model Sizes for Internet of Things
Researchers from Arm Limited and Princeton University have developed a technique that produces deep-learning computer-vision models for internet-of-things (IoT) hardware systems with as little as 2KB of RAM. By using Bayesian optimization and network pruning, the team is able to reduce the size of image recognition models while still achieving state-of-the-art accuracy.
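The paper pairs network pruning with Bayesian optimization over per-layer settings; the pruning half alone can be sketched with simple magnitude pruning in PyTorch (a toy stand-in, not the authors' pipeline):

```python
import torch

def magnitude_prune(model: torch.nn.Module, sparsity: float = 0.9):
    """Zero out the smallest-magnitude weights in every conv/linear layer
    (toy magnitude pruning; the paper additionally tunes per-layer
    settings with Bayesian optimization)."""
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            w = module.weight.data
            k = max(1, int(sparsity * w.numel()))
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())
```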
-
Google Adds New Integrations for the What-If Tool on Their Cloud AI Platform
In a recent blog post, Google announced a new integration of the What-If Tool that allows data scientists to analyse models on their AI Platform, a code-based data-science development environment. Customers can now use the What-If Tool with their XGBoost and scikit-learn models deployed on the AI Platform.
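In a notebook, pointing the What-If Tool at a deployed model looks roughly like the following; the project, model, and version names are placeholders, and `examples` is assumed to be a prepared list of tf.train.Example protos:

```python
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# `examples` is a list of tf.train.Example protos prepared elsewhere.
config = (WitConfigBuilder(examples)
          .set_ai_platform_model("my-project", "my-model", "v1")
          .set_target_feature("label"))  # placeholder target column
WitWidget(config, height=800)
```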
-
Google Releases Post-Training Integer Quantization for TensorFlow Lite
Google announced new tooling for their TensorFlow Lite deep-learning framework that reduces model size and inference latency. The tool converts a trained model's weights from floating-point representation to 8-bit signed integers, reducing the model's memory requirements and allowing it to run on hardware without floating-point accelerators, without sacrificing model quality.
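A minimal sketch of the documented conversion flow; the saved-model path and calibration data are placeholders:

```python
import tensorflow as tf

def representative_data_gen():
    # Yield a few hundred typical inputs so the converter can calibrate
    # activation ranges; `calibration_images` is a placeholder.
    for image in calibration_images:
        yield [image]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```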