Baidu Open-Sources ERNIE 2.0, Beats BERT in Natural Language Processing Tasks

In a recent blog post, Baidu, the Chinese search engine and e-commerce giant, announced its latest open-source natural language understanding framework, ERNIE 2.0. The company also shared recent test results, reporting state-of-the-art (SOTA) performance and showing that the framework outperforms existing models, including Google’s BERT and XLNet, on 16 NLP tasks in both Chinese and English.

ERNIE 2.0, more formally known as Enhanced Representation through kNowledge IntEgration, is a continual pre-training framework for language understanding. Baidu believes this continual pre-training approach creates new opportunities in language understanding:

We proposed a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through constant multi-task learning. In this framework, different customized tasks can be incrementally introduced at any time and are trained through multi-task learning that permits the encoding of lexical, syntactic and semantic information across tasks. When given a new task, our framework can incrementally train the distributed representations without forgetting the parameters of previous tasks.

Using continual learning allows the model to retain previously learned tasks while learning new ones, an approach inspired by how humans learn. Yu Sun, a Baidu researcher, explains:

Humans are capable of continuously accumulating the information acquired by study or experience to efficiently develop new skills. With continual learning, the model should be able to perform well on new tasks thanks to the knowledge acquired during previous training.
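
The blog post does not spell out the training mechanics, but the scheme can be sketched at a high level: tasks are introduced one at a time, and each new training stage mixes the new task with all previously introduced tasks so earlier knowledge is retained. The following Python sketch is illustrative only; the task names, sampling strategy, and update callback are assumptions rather than Baidu’s actual code.

# Illustrative sketch of continual multi-task pre-training (not Baidu's code):
# tasks arrive incrementally, and each stage trains on the new task together
# with all previously seen tasks, so the shared encoder retains earlier knowledge.
import random

def train_stage(update_fn, tasks, steps):
    """One stage: sample training batches across all tasks seen so far."""
    for _ in range(steps):
        task = random.choice(tasks)          # multi-task sampling
        batch = task["sample_batch"]()       # task-specific training data
        update_fn(task["name"], batch)       # one update of the shared model

def continual_pretrain(update_fn, task_stream, steps_per_stage=100):
    seen = []
    for new_task in task_stream:             # customized tasks added over time
        seen.append(new_task)
        train_stage(update_fn, seen, steps_per_stage)

# Stand-in tasks and a no-op update function, just to show the control flow.
tasks = [
    {"name": "masked_word_prediction", "sample_batch": lambda: "batch-A"},
    {"name": "sentence_reordering",    "sample_batch": lambda: "batch-B"},
]
continual_pretrain(lambda name, batch: None, tasks, steps_per_stage=3)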

Image source: http://research.baidu.com/Blog/index-view?id=121

The continual pre-training framework differs from the pre-training procedures used by BERT, XLNet, and ERNIE 1.0. While those projects have driven improvements in NLP tasks such as natural language inference, semantic similarity, named entity recognition, sentiment analysis, and question-answer matching, their pre-training tends to rely on simple tasks that depend on the co-occurrence of words or sentences. For example:

BERT constructed a bidirectional language model task and a next sentence prediction task to capture the co-occurrence information of words and sentences; XLNet constructed a permutation language model task to capture the co-occurrence information of words.
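
As a rough illustration of what such co-occurrence objectives look like in practice, the snippet below builds BERT-style training examples by masking random words and pairing sentences for a next-sentence label. It is a simplified sketch for illustration, not BERT’s actual preprocessing code.

# Simplified sketch of BERT-style pre-training examples (not BERT's real code):
# mask random words for the model to predict, and pair sentences with a
# true/false "is next sentence" label to capture sentence co-occurrence.
import random

MASK = "[MASK]"

def make_masked_lm_example(tokens, mask_prob=0.15):
    """Replace a fraction of tokens with [MASK]; the originals become targets."""
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            targets[i] = tok
            masked.append(MASK)
        else:
            masked.append(tok)
    return masked, targets

def make_next_sentence_example(sent_a, next_sent, corpus_sentences):
    """Half the time pair sent_a with its true successor, otherwise a random sentence."""
    if random.random() < 0.5:
        return sent_a, next_sent, True
    return sent_a, random.choice(corpus_sentences), False

tokens = "ernie 2 0 beats bert on several nlp tasks".split()
print(make_masked_lm_example(tokens))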

To benchmark ERNIE 2.0’s performance, the Baidu team compared its results with those of existing SOTA pre-trained models on the English GLUE benchmark and 9 popular Chinese datasets. The outcome was:

ERNIE 2.0 outperformed BERT and XLNet on 7 GLUE language understanding tasks and beat BERT on all 9 of the Chinese NLP tasks, such as machine reading comprehension built on the DuReader dataset, sentiment analysis and question answering.

To maintain the integrity of the experiments, the Baidu research team evaluated both the base and large models of each comparison method on GLUE.
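
The comparison protocol itself can be outlined simply: each pre-trained encoder, in both base and large variants, is fine-tuned separately on every GLUE task and scored on that task’s development set. The sketch below shows only the shape of such a run; the scoring function body and model names are placeholders, not the team’s actual evaluation harness.

# Outline of a GLUE-style model comparison (placeholder logic, not Baidu's harness):
# every model is fine-tuned per task under the same protocol, then scored.
GLUE_TASKS = ["CoLA", "SST-2", "MRPC", "STS-B", "QQP", "MNLI", "QNLI", "RTE", "WNLI"]

def fine_tune_and_score(model_name, task):
    """Stand-in: fine-tune model_name on the task and return its dev-set score."""
    return 0.0  # a real run would fine-tune on the task's training split here

def compare(model_names, tasks=GLUE_TASKS):
    return {m: {t: fine_tune_and_score(m, t) for t in tasks} for m in model_names}

scores = compare(["BERT-large", "XLNet-large", "ERNIE-2.0-large"])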

Image source: http://research.baidu.com/Blog/index-view?id=121

For additional information on ERNIE 2.0, Baidu Research has made its research paper publicly available. In addition, code and models pre-trained on English are available in the ERNIE GitHub repository.
