ExBERT, a Tool for Exploring Learned Representations in NLP Models

The MIT-IBM Watson AI Lab and the Harvard NLP group have released a live demo of exBERT, their interactive visualization tool for exploring learned representations in Transformer models, along with a preprint and the source code.

The interactive tool helps NLP researchers gain insight into the meaning of the powerful contextual representations formed by Transformer models. Because these models are built from a sequence of learned self-attention mechanisms, it is important to analyze exactly what the attention has learned in order to spot any inductive bias.
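To make the object of study concrete, the following is a minimal sketch of how a single head produces the square attention matrix that tools like exBERT visualize. It assumes standard scaled dot-product self-attention from the original Transformer paper and is not code from exBERT itself:

    import torch
    import torch.nn.functional as F

    def self_attention_weights(x, w_q, w_k):
        # x: (seq_len, d_model) token vectors; w_q, w_k: learned projections.
        # Returns the (seq_len, seq_len) matrix of attention weights.
        q, k = x @ w_q, x @ w_k               # project tokens to queries and keys
        scores = q @ k.T / q.size(-1) ** 0.5  # scaled dot-product scores
        return F.softmax(scores, dim=-1)      # each row sums to 1

    x = torch.randn(6, 16)                            # 6 tokens, toy width 16
    w_q, w_k = torch.randn(16, 8), torch.randn(16, 8)
    print(self_attention_weights(x, w_q, w_k).shape)  # torch.Size([6, 6])

Each row of this matrix records how much one token attends to every other token; a trained model has one such matrix per head per layer, and it is these matrices that exBERT lets users inspect.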

By probing whether the representations capture linguistic features or positional information, exBERT renders visualizations that provide insight into both the attention and the token embeddings of a model over a given corpus. exBERT is named after Google's BERT language model (Bidirectional Encoder Representations from Transformers), but it is important to note that it can be applied to any Transformer model and corpus, in any domain or language.

In the preprint the researchers ran a case study with BERT, because it is the most commonly used Transformer model for representation learning and it has numerous applications in transfer learning. Using The Wizard of Oz as the reference corpus, they used the tool to explore and analyze at which layers and heads BERT learns the linguistic features of a masked token.
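The raw data behind such an analysis can be reproduced in a few lines, for example with the Hugging Face transformers library (an assumption for this sketch; exBERT does not require it), by requesting the per-layer attention tensors from a pretrained BERT. The example sentence is the opening line of The Wonderful Wizard of Oz:

    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
    model.eval()

    sentence = "Dorothy lived in the midst of the great Kansas prairies."
    inputs = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

    with torch.no_grad():
        outputs = model(**inputs)

    # For bert-base-uncased, outputs.attentions is a tuple of 12 tensors
    # (one per layer), each of shape (batch, num_heads, seq_len, seq_len).
    attentions = outputs.attentions
    print(len(attentions), attentions[0].shape)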

For each token in a given corpus, exBERT displays a view of the attention and the internal representations. In the Attention View, users can change layers, select heads, and view the aggregated attention.
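Continuing the snippet above, one plausible reading of this aggregation is averaging the attention matrices of the selected heads within a layer; this is an assumption for illustration, as the preprint does not pin the view to a single formula, and the layer and head indices below are arbitrary:

    layer, heads = 5, [0, 3, 7]  # hypothetical layer and head selection

    layer_attn = attentions[layer][0]                 # (num_heads, seq_len, seq_len)
    aggregated = layer_attn[heads, :, :].mean(dim=0)  # (seq_len, seq_len)

    # Attention paid by the second token (the first after [CLS]) to every token:
    for tok, weight in zip(tokens, aggregated[1]):
        print(f"{tok:>12}  {weight.item():.3f}")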

Tokens can be masked, and any token can be searched over the whole corpus; the results feed the Corpus View, which shows the highest-similarity matches and gives users an understanding of what the representation encodes.
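The idea behind this similarity search can be approximated in a few lines. The sketch below is purely illustrative and not exBERT's implementation: it compares final-layer hidden states by cosine similarity, and both the metric and the tiny two-sentence corpus are assumptions:

    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")
    model.eval()

    corpus = [
        "Dorothy lived in the midst of the great Kansas prairies.",
        "The cyclone had set the house down very gently.",
    ]

    def token_embeddings(sentence):
        # Return the tokens and their final-layer contextual embeddings.
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
        return tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), hidden

    # Index every token in the toy reference corpus.
    all_tokens, all_vecs = [], []
    for sent in corpus:
        toks, vecs = token_embeddings(sent)
        all_tokens += toks
        all_vecs.append(vecs)
    all_vecs = torch.cat(all_vecs)

    # Query: the embedding of "house" in a new sentence's context.
    q_tokens, q_vecs = token_embeddings("The wind shook the house all night.")
    query = q_vecs[q_tokens.index("house")]

    # Rank corpus tokens by cosine similarity to the query embedding.
    sims = torch.nn.functional.cosine_similarity(all_vecs, query.unsqueeze(0))
    for i in sims.topk(5).indices.tolist():
        print(all_tokens[i], f"{sims[i].item():.3f}")

Because the embeddings are contextual, the top matches tend to be tokens used in similar grammatical and semantic contexts, not just identical strings; this is the kind of insight the Corpus View surfaces interactively.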

As AI applications become further embedded in our daily lives, the emphasis on Explainable AI (XAI) grows in importance. Many tools have been developed to visualize attention in NLP models, from attention-matrix heatmaps to bipartite graph representations. exBERT was partly inspired by one of these open-source tools, BertViz, which was built for visualizing multi-head self-attention in the BERT model.
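The simplest of those visualizations, an attention-matrix heatmap, takes only a few lines with matplotlib. This sketch reuses the attentions and tokens variables from the BERT snippet above, and the layer and head indices are again arbitrary:

    import matplotlib.pyplot as plt

    attn = attentions[5][0, 3].numpy()  # layer 5, head 3: (seq_len, seq_len)

    fig, ax = plt.subplots()
    ax.imshow(attn, cmap="viridis")
    ax.set_xticks(range(len(tokens)))
    ax.set_yticks(range(len(tokens)))
    ax.set_xticklabels(tokens, rotation=90)
    ax.set_yticklabels(tokens)
    ax.set_xlabel("attended-to token")
    ax.set_ylabel("attending token")
    plt.tight_layout()
    plt.show()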

The exBERT researchers credit BertViz with taking large steps toward making the exploration of BERT's attention faster and more interactive, but they caution in the preprint that "interpreting attention patterns without understanding the attended-to embeddings, or relying on attention alone for a faithful interpretation, can lead to faulty interpretations."

exBERT aims to combine the advantages of static analysis with a more dynamic and intuitive view into both the attention and the internal representations of the underlying model.
