AI Conference Recap: Facebook, Google, Microsoft, and Others at ACL 2020

At the recent Annual Meeting of the Association for Computational Linguistics (ACL), research teams from several tech companies, including Facebook, Google, Microsoft, Amazon, and Salesforce, presented nearly 200 of the 779 accepted papers, covering a wide variety of AI topics related to Natural Language Processing (NLP).

The conference was held online in early July and featured a keynote from Amazon Scholar Kathleen R. McKeown. In addition to workshops and tutorials on NLP topics, AI researchers from business and academia presented 779 papers describing their latest work. Prominent tech companies were well represented: Microsoft contributed 56 papers (including the Best Paper winner), Facebook 32, Google 31, IBM 20, Amazon 17, and Salesforce 9.

The ACL conference is the "premier conference of the field of computational linguistics." This year's event, the 58th meeting, was slated to be held from July 5th through July 10th in Seattle, Washington; however, due to global pandemic concerns, it became a completely virtual event. This year's conference theme was "Taking Stock of Where We’ve Been and Where We’re Going," inviting papers reflecting on 60 years of progress in NLP. Following this theme, Amazon Scholar Kathleen R. McKeown gave a keynote address titled "Rewriting the Past: Assessing the Field through the Lens of Language Generation," in which she showed clips from interviews with experts in the field about the past, present, and future directions of NLP.

The event's format was similar to most academic conferences and featured 779 research-paper presentations across 25 NLP areas, 8 NLP tutorials, and 19 workshops: small, focused "sub-conferences" usually lasting a single day. This year's conference drew a record 3,429 paper submissions, more than double the number from just two years ago; the USA and China together accounted for over 60% of submissions. The acceptance rate was about 25%, in line with recent years.

Microsoft researchers contributed over 50 papers, including the conference's Best Paper winner, titled Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. This paper, co-authored by University of California, Irvine, professor Sameer Singh, introduces CheckList, "a task agnostic methodology for testing NLP models." Describing the motivation for the paper, Singh noted:

[W]e increasingly see NLP models that beat humans on accuracy on various datasets, yet we know that these models are not as good as humans for many of these tasks...what can we do about this mismatch in how we currently evaluate these models and what we think is their 'true' performance?

CheckList produces test cases for NLP models by perturbing input statements that have expected outputs; for example, a "negation" perturbation would change "I love the food" to "I didn't love the food", with the expected sentiment classification going from positive to negative. The team used CheckList on several commercial sentiment analysis models, including Microsoft Azure Text Analytics, Google Cloud's Natural Language, and Amazon Comprehend, and found that none of the models did well on the negation test. The team has released an open-source version of the CheckList tool with sample code for replicating the paper's results.
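The negation example above can be reproduced with the released tool. The following is a minimal sketch in Python using the open-source checklist package; the Perturb.add_negation helper is taken from the project's public repository, and exact APIs may vary between versions:

# A minimal sketch of CheckList's perturbation-based testing,
# using the open-source package (pip install checklist);
# also requires a spaCy English model.
import spacy
from checklist.perturb import Perturb

# CheckList's built-in perturbations operate on spaCy-parsed sentences.
nlp = spacy.load("en_core_web_sm")

statements = ["I love the food.", "The flight was great."]
parsed = list(nlp.pipe(statements))

# Generate negated variants of each statement; for a sentiment model,
# the expected label flips from positive to negative.
ret = Perturb.perturb(parsed, Perturb.add_negation)
for group in ret.data:
    # Each group holds the original sentence followed by its perturbations,
    # e.g. ['I love the food.', "I don't love the food."]
    print(group)

Running each perturbed statement through a sentiment model and checking that the predicted label actually flips is what constitutes a CheckList test; a model that still labels the negated statement as positive fails the test, which is what the paper reports for several commercial systems.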

Tech giants Google and Facebook contributed around 30 papers each on several topics, including machine translation and several new iterations on the BERT Transformer model. Google introduced MobileBERT, a compact model for "resource-limited" devices that is 4.3x smaller and 5.5x faster than the BERT-base model. Google also introduced BLEURT, a fine-tuned BERT model that serves as an evaluation metric for other natural-language generation models. Facebook presented papers on BART, a generalization of BERT that achieves new state-of-the-art performance on several NLP tasks; CamemBERT, a Transformer-based model for French-language tasks; and TaBERT, a model for answering questions about tabular data, similar to Google's TAPAS.

Several tech companies were sponsors of the event in addition to contributing content. Google, Amazon, Apple, and Bloomberg were "Diamond" level sponsors; while Amazon researchers presented 17 papers and the keynote, Bloomberg presented only six and Apple only three. Facebook and IBM were "Platinum" sponsors, with IBM also presenting 20 papers.

In a Hacker News discussion about the Best Paper awards, AI researcher Jeff Huang shared a link to a historical list of such awards; in response, another user noted:

It's kind of surprising to me that Microsoft Research is the all-time leader, gathering more best paper awards [in computer science] than any top university. Wow. This in itself tells a story about Microsoft Research.

Many of the conference papers, along with the code used in them, are available on Papers with Code.
 
