University Researchers Publish Results of NLP Community Metasurvey

Researchers from New York University, University of Washington, and Johns Hopkins University have published the results of the NLP Community Metasurvey, which compiles the opinions of 480 active NLP researchers about several issues in the natural language processing AI field. The survey also includes meta-questions about the perceived opinions of other researchers.

The goal of the survey was to learn about the actual distribution of beliefs in the NLP community as well as sociological beliefs: what researchers think other researchers think. The survey was targeted at researchers who had published at least two NLP papers in the last three years. The questions cover six categories related to NLP research in particular, as well as artificial general intelligence (AGI) and social concerns; the team chose questions that are frequently discussed in the community and are subjects of public disagreement. In the results, the team found that a large majority of respondents think NLP research will have a positive impact on the future, and a narrow majority believes that recent progress in large language models (LLMs) represents a significant step toward AGI. According to the survey team:

By necessity, we are covering a subjectively chosen set of questions and reducing many complex issues into simplified scales, but we hope that the outcome can serve as a reference point for community discussion and for future surveys. This is not the final word in any debate, but we hope it will spark new discussions as an initial study of the range of positions people hold and ways in which the community may mis-model itself.

The survey questions covered the following categories:

  • State of the Field: the role of industry and the near-term possibility of an "AI winter"
  • Scale, Inductive Bias, and Adjacent Fields: whether large scale is sufficient or whether linguistic expertise is needed to solve NLP problems
  • AGI and Major Risks: whether NLP research is making progress toward AGI and whether AGI poses a risk to society
  • Language Understanding: whether language models actually understand language
  • Promising Research Programs: whether NLP research is on the right track
  • Ethics: whether NLP has a positive impact and whether certain research areas are ethical

In addition to indicating whether they agreed with each question, respondents were asked to predict what percentage of other respondents would agree with it. The goal of collecting these meta-responses was to help researchers understand sociological beliefs, since mistaken sociological beliefs can "slow down communication and lead to wasted effort."
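As a rough illustration of this methodology (a hypothetical Python sketch with made-up respondent records and field names, not the survey's actual analysis code), the mismatch between actual and perceived opinion on a single question can be computed by comparing the true agreement rate with the average predicted agreement rate:

    # Illustrative sketch only: hypothetical data and field names,
    # not the survey authors' analysis pipeline.
    from statistics import mean

    # Each record: did this respondent agree with a given statement, and what
    # share of the community (0-100) did they predict would agree with it?
    responses = [
        {"agrees": False, "predicted_agreement_pct": 50},
        {"agrees": False, "predicted_agreement_pct": 45},
        {"agrees": True,  "predicted_agreement_pct": 60},
        {"agrees": False, "predicted_agreement_pct": 40},
    ]

    # Actual agreement: the fraction of respondents who agreed.
    actual_pct = 100 * mean(1 if r["agrees"] else 0 for r in responses)

    # Sociological belief: the average predicted agreement rate.
    predicted_pct = mean(r["predicted_agreement_pct"] for r in responses)

    # A large gap suggests the community mis-models its own views.
    print(f"actual: {actual_pct:.0f}%, predicted: {predicted_pct:.1f}%, "
          f"gap: {predicted_pct - actual_pct:+.1f} points")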

Questions about the role of scaling AI models showed "striking mismatches" between what NLP researchers actually believe and what they think the community believes. Survey respondents estimated that nearly 50% of researchers would agree that scaling can solve "practically any" problem, and that less than 40% would agree that linguistic theory and expert design are needed to solve important problems. However, in a Twitter thread highlighting some of the results, lead author Julian Michael pointed out:

Less than 20% of the field thinks that scaling up existing techniques will be enough to solve all applied NLP problems. A majority thinks that insights from linguistics or cognitive science will be an important part of future progress.

In a Hacker News discussion about the limits of current AI technology, AI writer and researcher Gwern Branwen referred to the NLP survey results and defended scaling, saying:

AGI & scaling critics are still in the majority, despite posturing as an oppressed minority...If you believe in scaling, you are still in a small minority of researchers pursuing an unpopular and widely-criticized paradigm. (That it is still producing so many incredible results and appearing so dominant despite being so disliked and small is, IMO, to its credit and one of the best arguments for why new researchers should go into scaling - it is still underrated.)

While the survey paper contains charts and summaries of the data, the survey website notes that a web-based dashboard for exploring the results is "coming soon."
 
