Natural Language Processing Content on InfoQ
-
OpenAI Announces GPT-3 Model for Image Generation
OpenAI has trained a 12B-parameter AI model based on GPT-3 that can generate images from textual descriptions. A description can specify many independent attributes, including the position of objects and the image perspective, and the model can also synthesize combinations of objects that do not exist in the real world.
-
Facebook Open-Sources Multilingual Speech Recognition Deep-Learning Model
Facebook AI Research (FAIR) open-sourced XLSR (cross-lingual speech representations), a multilingual speech recognition AI model. XLSR is trained on 53 languages and outperforms existing systems when evaluated on common benchmarks.
-
AWS Introduces HealthLake and Redshift ML in Preview
AWS introduced preview releases of the Amazon HealthLake service and a feature for Amazon Redshift called Redshift ML during re:Invent 2020 in December. Amazon HealthLake is a data lake service that helps healthcare, health insurance, and pharmaceutical companies derive value from their data with the help of NLP. Redshift ML is a feature that gives Redshift users a gateway into Amazon SageMaker.
-
AI Models from Google and Microsoft Exceed Human Performance on Language Understanding Benchmark
Research teams from Google and Microsoft have recently developed natural language processing (NLP) AI models which have scored higher than the human baseline score on the SuperGLUE benchmark. SuperGLUE measures a model's score on several natural language understanding (NLU) tasks, including question answering and reading comprehension.
-
Rasa Announces Open Source AI Assistant Framework 2.0
Rasa, the customizable open source machine learning framework for automating text- and voice-based AI assistants, has released version 2.0 with significant improvements to dialogue management, training data format, and interactive documentation. In addition, the latest release reduces the learning curve for getting started while expanding configuration options for advanced users.
-
Large-Scale Multilingual AI Models from Google, Facebook, and Microsoft
Researchers from Google, Facebook, and Microsoft have published their recent work on multilingual AI models. Google and Microsoft have released models that achieve new state-of-the-art performance on NLP tasks measured by the XTREME benchmark, while Facebook has produced a non-English-centric many-to-many translation model.
-
AI Training Method Exceeds GPT-3 Performance with 99.9% Fewer Parameters
A team of scientists at LMU Munich has developed Pattern-Exploiting Training (PET), a deep-learning training technique for natural language processing (NLP) models. Using PET, the team trained a Transformer NLP model with 223M parameters that outperformed the 175B-parameter GPT-3 by over 3 percentage points on the SuperGLUE benchmark.
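The core idea behind PET can be sketched in a few lines: a classification task is reformulated as a cloze question for a pre-trained masked language model (MLM), and a "verbalizer" maps each label to a word the model might predict in the masked slot. The sketch below is illustrative only; the MLM is stubbed out, and the pattern, verbalizer, and function names are hypothetical rather than PET's actual API.

```python
def pattern(text: str) -> str:
    """Wrap the input in a cloze pattern with a [MASK] slot."""
    return f"{text} It was [MASK]."

# Verbalizer: each task label maps to a single word the MLM could predict.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def mlm_token_probs(cloze: str) -> dict:
    """Stand-in for an MLM's distribution over the [MASK] token.
    A real PET setup would query a pre-trained Transformer here;
    this toy heuristic only makes the example run end to end."""
    if "loved" in cloze:
        return {"great": 0.8, "terrible": 0.2}
    return {"great": 0.1, "terrible": 0.9}

def classify(text: str) -> str:
    """Pick the label whose verbalized word the MLM finds most likely."""
    probs = mlm_token_probs(pattern(text))
    return max(VERBALIZER, key=lambda label: probs[VERBALIZER[label]])

print(classify("I loved this film."))  # -> positive
```

Because the MLM already "knows" which word fits the slot from pre-training, far fewer labeled examples (and far fewer parameters) are needed than for a model that must learn the task from scratch.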
-
Microsoft Obtains Exclusive License for GPT-3 AI Model
Microsoft announced an agreement with OpenAI to license OpenAI's GPT-3 deep-learning model for natural-language processing (NLP). Although Microsoft's announcement says it has "exclusively" licensed the model, OpenAI will continue to offer access to the model via its own API.
-
Salesforce Releases Photon Natural Language Interface for Databases
A team of scientists from Salesforce Research and the Chinese University of Hong Kong has released Photon, a natural language interface to databases (NLIDB). The team used deep learning to construct a parser that achieves 63% accuracy on a common benchmark, along with an error-detecting module that prompts users to clarify ambiguous questions.
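As a rough illustration of the NLIDB idea (not Salesforce's neural parser), the toy below translates one narrow question shape into SQL and, in the spirit of Photon's error-detecting module, falls back to a clarification prompt when it cannot confidently parse a question. The function names, the regex pattern, and the sample schema are all hypothetical.

```python
import re
import sqlite3

def parse(question: str):
    """Translate a very narrow class of questions into SQL.
    Returns None for anything the toy parser cannot handle."""
    m = re.fullmatch(r"how many (\w+) are there\??", question.lower())
    if m:
        return f"SELECT COUNT(*) FROM {m.group(1)}"
    return None

def answer(question: str, conn):
    """Run the translated query, or ask the user to clarify."""
    sql = parse(question)
    if sql is None:
        return "Could you rephrase the question?"
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?)", [(1,), (2,), (3,)])

print(answer("How many employees are there?", conn))  # -> 3
print(answer("Who earns the most?", conn))            # clarification prompt
```

Photon replaces the regex with a learned text-to-SQL parser, but the control flow is analogous: detect untranslatable questions rather than guess at a wrong query.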
-
Google's BigBird Model Improves Natural Language and Genomics Processing
Researchers at Google have developed a new deep-learning model called BigBird that allows Transformer neural networks to process sequences up to 8x longer than previously possible. Networks based on this model achieved new state-of-the-art performance levels on natural-language processing (NLP) and genomics tasks.
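BigBird's longer sequences come from replacing full quadratic attention with a sparse pattern that combines a sliding window, a few global tokens, and random long-range links. A minimal pure-Python sketch of such a mask (illustrative structure only, not the published implementation; parameter names are assumptions):

```python
import random

def sparse_attention_mask(n, window=1, n_global=1, n_random=1, seed=0):
    """Build an n x n boolean mask where mask[i][j] means token i
    may attend to token j, using BigBird-style sparsity."""
    rng = random.Random(seed)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        # Sliding window: each token attends to its local neighborhood.
        for j in range(max(0, i - window), min(n, i + window + 1)):
            mask[i][j] = True
        # Global tokens: attended by all tokens, and attend to all tokens.
        for g in range(n_global):
            mask[i][g] = True
            mask[g][i] = True
        # Random links give cheap long-range connectivity.
        for _ in range(n_random):
            mask[i][rng.randrange(n)] = True
    return mask

m = sparse_attention_mask(16)
active = sum(map(sum, m))
print(f"{active}/{16 * 16} attention entries active")
```

Each row has only O(window + n_global + n_random) active entries, so total attention cost grows linearly in sequence length instead of quadratically, which is what allows the roughly 8x longer inputs.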
-
AI Conference Recap: Facebook, Google, Microsoft, and Others at ACL 2020
At the recent Annual Meeting of the Association for Computational Linguistics (ACL), research teams from several tech companies, including Facebook, Google, Microsoft, Amazon, and Salesforce, presented nearly 200 papers out of a total of 779 on a wide variety of AI topics related to natural language processing (NLP).
-
Alexa Adds Conversations and Deep-Linking Based Control for Mobile Apps
Alexa Conversations, recently launched in beta, aims to enable the creation of custom skills with less code thanks to a new AI-based approach. Alongside Alexa Conversations, Amazon also announced Alexa for Apps, which allows Alexa users to interact with apps on their mobile phones using Alexa.
-
Google ML Kit SDK Now Focuses on On-Device Machine Learning
Google has introduced a new ML Kit SDK that works in standalone mode, without the tight Firebase integration the original ML Kit SDK required. Additionally, it provides limited support for replacing its default models with custom ones for image labeling and for object detection and tracking.
-
OpenAI Announces GPT-3 AI Language Model with 175 Billion Parameters
A team of researchers from OpenAI recently published a paper describing GPT-3, a deep-learning model for natural-language processing with 175 billion parameters, more than 100x the previous version, GPT-2. The model is pre-trained on nearly half a trillion words and achieves state-of-the-art performance on several NLP benchmarks without fine-tuning.
-
Google Open-Sources AI for Using Tabular Data to Answer Natural Language Questions
Google open-sourced Table Parser (TAPAS), a deep-learning system that can answer natural-language questions from tabular data. TAPAS was trained on 6.2 million tables extracted from Wikipedia and matches or exceeds state-of-the-art performance on several benchmarks.
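To make the task concrete: a table question answering system maps a question and a table to a cell (or an aggregate over cells). The toy below uses a trivial lookup heuristic in place of TAPAS's pre-trained Transformer; the sample table and function are hypothetical and only illustrate the input/output shape of the task.

```python
# A table as TAPAS-style systems see it: a header plus rows of cells.
table = {
    "header": ["Language", "Speakers (M)"],
    "rows": [["English", "1452"], ["Hindi", "602"], ["Spanish", "548"]],
}

def answer(question: str, table: dict):
    """Toy heuristic: return the second cell of the row whose first
    cell is mentioned in the question. TAPAS instead scores every
    cell (and an aggregation operator) with a BERT-style model."""
    for row in table["rows"]:
        if row[0].lower() in question.lower():
            return row[1]
    return None

print(answer("How many speakers does Hindi have?", table))  # -> 602
```

The hard part TAPAS solves is exactly what this heuristic cannot: questions that require comparing, counting, or summing cells rather than looking one up.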