Google Announces AI-Generated Summaries for Google Docs

Google has announced a new feature for their Docs app that will automatically generate a summary of the document content. The summarization is powered by a natural language processing (NLP) AI model based on the Transformer architecture.

The model was described in a blog post written by Mohammad Saleh, a software engineer from Google Research's Brain Team, and Anjuli Kannan, a software engineer from Google Docs. The model is based on PEGASUS, an NLP system for abstractive text summarization developed by the Brain Team. PEGASUS uses a pre-training scheme called Gap Sentences Generation (GSG), which teaches the model to regenerate full sentences that have been masked out of the input text; in particular, the masked sentences are chosen based on how important they are for generating a summary of the text. The current model used by Google Docs features several improvements over PEGASUS, including fine-tuning on a high-quality dataset and knowledge distillation to improve latency and reduce memory footprint. According to Saleh and Kannan,

We hope the automatic suggestions now offered in Google Workspace make it easier for writers to annotate their documents with summaries, and help readers comprehend and navigate documents more easily.

A common technique for NLP tasks, including abstractive text summarization, is to train a sequence-to-sequence model, which takes as input a sequence of tokens (for example, sub-word fragments or whole words), feeds the input sequence into an encoder that produces a latent representation of the sequence, then uses a decoder to convert that latent representation into an output sequence. Because these models need large amounts of training data, most use pre-trained Transformers as components; typically the pre-training objective is a masked language model (MLM), where random tokens in the input are masked and the model must predict the correct values for the masked tokens.
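To make the encoder-decoder flow concrete, here is a minimal sketch of running one of the publicly released PEGASUS checkpoints for abstractive summarization, assuming the Hugging Face transformers library; the "google/pegasus-xsum" checkpoint shown here is an open-source model, not the fine-tuned model that powers Google Docs.

```python
# Minimal sketch: abstractive summarization with an open-source PEGASUS
# checkpoint via Hugging Face transformers. This is not the production
# Google Docs model, only the publicly released research checkpoint.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

document = (
    "Google has announced a new feature for their Docs app that will "
    "automatically generate a summary of the document content. The "
    "summarization is powered by an NLP model based on the Transformer "
    "architecture."
)

# Encoder input: the token sequence for the full document.
inputs = tokenizer(document, truncation=True, return_tensors="pt")

# Decoder output: the token sequence for the generated summary.
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```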

PEGASUS extends this objective by masking entire sentences. However, instead of simply masking random sentences, the training algorithm uses a heuristic to choose the most important sentences in the text to mask, which improves the model's performance on summarization: each sentence in the input is scored by computing ROUGE1-F1 against the rest of the document, and the top-m scoring sentences are masked. Using this scheme, PEGASUS achieved state-of-the-art performance on 12 summarization benchmarks.
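Google has not released its exact selection code, so the following is a simplified sketch of GSG-style sentence selection using a plain unigram-overlap ROUGE1-F1; the mask token and helper names are illustrative, and the PEGASUS paper evaluates several scoring variants beyond this one.

```python
# Simplified sketch of Gap Sentences Generation-style pre-training data
# construction: score each sentence against the rest of the document with
# a unigram-overlap ROUGE1-F1, then mask the top-m scoring sentences.
from collections import Counter

MASK_TOKEN = "<mask_sent>"  # placeholder; the real model uses its own token

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def select_gap_sentences(sentences: list[str], m: int) -> list[int]:
    """Return indices of the top-m most 'important' sentences, where
    importance is ROUGE1-F1 against the rest of the document."""
    scores = []
    for i, sent in enumerate(sentences):
        rest = " ".join(s for j, s in enumerate(sentences) if j != i)
        scores.append((rouge1_f1(sent, rest), i))
    return sorted(i for _, i in sorted(scores, reverse=True)[:m])

def build_training_example(sentences: list[str], m: int) -> tuple[str, str]:
    """Mask the selected sentences in the source; the model must regenerate
    them as the target, which resembles producing a summary."""
    masked = set(select_gap_sentences(sentences, m))
    source = " ".join(
        MASK_TOKEN if i in masked else s for i, s in enumerate(sentences)
    )
    target = " ".join(s for i, s in enumerate(sentences) if i in masked)
    return source, target
```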

Productizing PEGASUS for Google Docs required overcoming a "number of challenges." The first was fine-tuning: the pre-trained model was "easily confused" due to the diversity of documents in its pre-training corpus, but Google found that PEGASUS could achieve good performance after fine-tuning on a small, high-quality dataset carefully curated to contain good examples of summaries. Next, Google addressed several production serving challenges, including the model's size and decoding inefficiency. In particular, the decoder stage of the model was changed from a Transformer to a recurrent neural network (RNN) via knowledge distillation.
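Google has not published its fine-tuning dataset or configuration; as a rough illustration of the general recipe, the sketch below fine-tunes a pre-trained PEGASUS checkpoint on a handful of placeholder (document, summary) pairs with the standard cross-entropy objective.

```python
# Generic fine-tuning sketch for a pre-trained PEGASUS checkpoint.
# The (document, summary) pairs below are placeholders; Google's actual
# curated dataset and training configuration are not public.
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-large"  # pre-trained-only checkpoint
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# A small, curated set of (document, reference summary) pairs.
pairs = [
    ("<full document text>", "<human-written summary>"),
]

model.train()
for epoch in range(3):
    for document, summary in pairs:
        inputs = tokenizer(document, truncation=True, return_tensors="pt")
        labels = tokenizer(
            text_target=summary, truncation=True, return_tensors="pt"
        ).input_ids
        # Cross-entropy loss on the reference summary tokens.
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```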

Although PEGASUS did set new performance records on several benchmarks, and is still the current leader on some, other research teams have developed models that outperform it on some tasks. Salesforce recently published a paper describing a summarization model that can "summarize a whole book" and uses 380x fewer parameters than GPT-3. In addition, Meta AI recently open sourced two summarization models that outperform PEGASUS on several benchmarks, and researchers from Tsinghua University open-sourced their GLM model which currently has the top score for one of the benchmarks.

Google AI shared the announcement on Twitter, sparking several comments from users. Daniel McNichol, a data scientist and startup founder, shared his experience with the summarization feature:

[Tested it with] some blog posts & it's...good? Not all that illuminating (sometimes just a sentence from [original]) and would like it to "say more"...[definitely] does best with executive summary style recaps of straightforward "reports."

Although Google has not open-sourced its latest model, the code and models for the original PEGASUS system are available on GitHub.
