Microsoft and University of Maryland Researchers Announce FreeLB Adversarial Training System


Researchers from Microsoft and the University of Maryland (UMD) have announced Free Large-Batch (FreeLB), a new adversarial-training technique for deep-learning natural-language processing (NLP) systems. FreeLB improves model accuracy, raising RoBERTa's scores on the General Language Understanding Evaluation (GLUE) benchmark and achieving the highest score on the AI2 Reasoning Challenge (ARC) benchmark.

The team, drawn from Microsoft's Language and Information Technologies group and Professor Tom Goldstein's lab at UMD, provided a detailed description of FreeLB in a paper published on arXiv. The method works by adding noise to the word embeddings of input sentences when fine-tuning a pre-trained model such as RoBERTa.

FreeLB builds on previous work from Goldstein's lab in which adversarial training is performed "for free" by re-using the gradient information produced by standard training algorithms; the gradient is used to calculate a perturbation that is added to the input samples to create adversarial inputs. By including these samples in fine-tuning, the team improved a BERT-based model's GLUE score from 78.3% to 79.4%, and a RoBERTa-large model's from 88.5% to 88.8%. FreeLB-trained models also took the top spot on the ARC leaderboard, with 85.44% on ARC-Easy and 67.75% on ARC-Challenge.
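The "free" gradient re-use can be illustrated with a toy sketch (this is not the authors' code; the one-parameter model, analytic gradient, and step size are all made up for illustration). The gradient of the loss with respect to the input, which a framework's backward pass already computes, is reused to nudge the input in the direction that increases the loss, FGSM-style:

```python
def loss(w, x, y):
    """Squared error of a one-parameter linear model."""
    return (w * x - y) ** 2

def grad_wrt_input(w, x, y):
    """d(loss)/dx -- in a real framework this comes from autograd for free."""
    return 2 * (w * x - y) * w

def adversarial_input(w, x, y, epsilon=0.1):
    """Step the input in the direction that increases the loss."""
    g = grad_wrt_input(w, x, y)
    sign = 1.0 if g > 0 else -1.0
    return x + epsilon * sign

w, x, y = 2.0, 1.5, 2.0           # model weight, input, target
x_adv = adversarial_input(w, x, y)
assert loss(w, x_adv, y) >= loss(w, x, y)  # adversarial input is harder
```

In a real training loop, `x_adv` would be fed back through the model alongside the clean sample, at no extra gradient-computation cost.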

Adversarial training for image classifiers has been a focus for many researchers, especially those interested in autonomous vehicles. The FreeLB team notes that while this improves the robustness of the models, it often reduces their accuracy. However, NLP systems usually see an improved accuracy with adversarial training. There are several techniques for generating adversarial inputs to NLP systems by manipulating the input text: for example, by adding distracting sentences, or even by changing single words or characters (a technique that can also be used to help explain a model's output).

By contrast, FreeLB does not directly manipulate the input text. Instead, it adds a perturbation to the embedding vectors used to encode the input. Embeddings, frequently used as the first step in an NLP system, convert each word in the input vocabulary into a high-dimensional vector that often has interesting properties on its own. For example, words with similar meanings are often "close" to each other in embedding space.
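The "closeness" property can be made concrete with cosine similarity. The vectors below are made up for illustration (real embeddings have hundreds of dimensions), but the relationship they show is the one described above:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 4-dimensional embeddings, chosen so related words are close
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.8, 0.9, 0.2, 0.1],
    "apple": [0.1, 0.2, 0.9, 0.8],
}

# Words with related meanings score higher than unrelated ones
assert cosine(embeddings["king"], embeddings["queen"]) > \
       cosine(embeddings["king"], embeddings["apple"])
```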

FreeLB uses the gradient information from training to move words in the embedding space as far as possible without changing the output generated by the model. The authors claim this is even more effective than modifying text directly, as it can "make manipulations on word embeddings that are not possible in the text domain." The perturbations are applied during fine-tuning, when a pre-trained model is further trained on a task-specific dataset, such as a set of questions and answers. Because this training process already calculates gradients, they are available "for free" to compute perturbations, effectively creating new training examples.

FreeLB's implementation has not been open-sourced, although other projects from Goldstein's group have been, including the earlier work on adversarial training of image classifiers, which is available on GitHub. The team notes that:

Investigating the reason for the discrepancy between the outcomes of adversarial training for images and text is an interesting future direction.

The Microsoft team has also open-sourced some of its other work, including a recently released system for visual question answering called ReGAT, also available on GitHub.
