Book Review: Andrew McAfee and Erik Brynjolfsson's "The Second Machine Age"

Key Takeaways

  • A combination of exponential growth in computing power and the increasing digitization of all our data is propelling the recent advances in technology (most of which are advances in machine learning).
  • There are no clear measures for the impact of the major technological advances of recent years—traditional measures like GDP are inadequate.
  • The “second machine age” will have increasing economic inequality as a side-effect because of the winner-takes-all nature of digital markets.

Machine learning has long powered many products we interact with daily–from "intelligent" assistants like Apple's Siri and Google Now, to recommendation engines like Amazon's that suggest new products to buy, to the ad ranking systems used by Google and Facebook. More recently, machine learning has entered the public consciousness because of advances in "deep learning"–these include AlphaGo's defeat of Go grandmaster Lee Sedol and impressive new products around image recognition and machine translation.

In this series, we'll give an introduction to some powerful but generally applicable techniques in machine learning. These include deep learning but also more traditional methods that are often all a modern business needs. After reading the articles in the series, you should have the knowledge necessary to embark on concrete machine learning experiments in a variety of areas on your own.

This InfoQ article is part of the series "An Introduction To Machine Learning". You can subscribe to receive notifications via RSS.

 

Andrew McAfee and Erik Brynjolfsson begin their book The Second Machine Age with a simple question: what innovation has had the greatest impact on human history? “Innovation” is meant in the broadest sense: agriculture and the domestication of animals were innovations, as were the advent of various religions and forms of government, the printing press, and the cotton gin. But which of these changed the course of humanity the most (and how would that even be determined)? To start, McAfee and Brynjolfsson suggest population and measures of social development as approximate yardsticks, and by either of them, the arc of human history decisively moves “up and to the right” (as Silicon Valley startups would have all of their metrics do) starting around 1765. The authors argue that the trigger for this growth was James Watt’s steam engine, a general-purpose technological innovation more than three times as efficient as its predecessors and one that essentially kicked off the industrial revolution.

McAfee and Brynjolfsson, researchers at the MIT Center for Digital Business who have made careers studying the impact of the internet on business, believe that we’re on the cusp of another such revolution—a second “machine age”—and provide some anecdotal evidence for this. These examples all have the same form: a decade ago we were frustratingly far from progress in the area, and then, almost overnight, the problems had been solved (generally by advances in machine learning). The work here progressed in the same way that Ernest Hemingway described how people go bankrupt in The Sun Also Rises: “gradually, then suddenly.”

Among the examples: self-driving cars, which are now completely unremarkable on the freeways of Northern California, only a decade ago seemed out of reach. As recently as 2004, DARPA’s “Grand Challenge” to build a car that could autonomously navigate a course in the desert ended disastrously, with all the entrants failing just a few hours in (the media derided the competition as a “Debacle in the Desert”). There was also IBM’s Jeopardy-winning Watson, which thoroughly demolished the two most successful human Jeopardy contestants ever. Watson absorbed massive amounts of information, including the entirety of Wikipedia, and was able to answer instantaneously and correctly even when the clues involved typical-for-Jeopardy puns and indirection (it correctly offered “Pentathlon” as the answer to “A 1976 entrant in the ‘modern’ this was kicked out for wiring his epee to score points without touching his foe”). And although it was developed after the book was published, we could add DeepMind's AlphaGo, the first Go program ever to beat a professional player. In October 2015, AlphaGo defeated the reigning three-time European champion Fan Hui 5-0, and in March 2016, it defeated Lee Sedol, the top Go player in the world over the past decade, 4-1. Because Go is so combinatorially complex—on average the number of possible moves a player can make is almost an order of magnitude more than the equivalent number in chess—it was generally believed that we were still several years away from achievements like those of AlphaGo.
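
To get a feel for the scale involved, here is a rough back-of-the-envelope sketch in Python. The branching factors (roughly 35 legal moves per chess position, roughly 250 per Go position) and typical game lengths are commonly cited estimates, not figures from the book:

    # Back-of-the-envelope comparison of chess and Go game-tree sizes.
    # Branching factors and game lengths are rough, commonly cited estimates.
    import math

    CHESS_BRANCHING, CHESS_PLIES = 35, 80    # ~35 moves per position, ~80-ply games
    GO_BRANCHING, GO_PLIES = 250, 150        # ~250 moves per position, ~150-ply games

    # Work in log10 so the numbers stay readable.
    chess_exponent = CHESS_PLIES * math.log10(CHESS_BRANCHING)
    go_exponent = GO_PLIES * math.log10(GO_BRANCHING)

    print(f"Chess game tree: ~10^{chess_exponent:.0f} sequences")  # ~10^124
    print(f"Go game tree:    ~10^{go_exponent:.0f} sequences")     # ~10^360

Even if those estimates are off by a wide margin, brute-force search of the kind that worked for chess was clearly hopeless for Go, which is why AlphaGo's learned evaluation came as such a surprise.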

Why has the progress here been so sudden in the past several years? One plausible, specific answer for many of these advances goes unmentioned: developments in neural networks and deep learning. But McAfee and Brynjolfsson focus on three higher-level explanations.

First, there’s the exponential growth described by Moore’s Law: transistor density doubles every eighteen months. Citing a rough rule of thumb put forth by Ray Kurzweil that things meaningfully change after 32 doublings (once you’re in the “second half of the chessboard”), and the fact that the Bureau of Economic Analysis first cited “information technology” as a corporate investment category in 1958, the authors peg 2006 (32 doublings of eighteen months each is 48 years, and 1958 plus 48 years is 2006) as when Moore’s Law put us into a new regime of computing.
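
As a quick sanity check of that arithmetic, a minimal sketch in Python (the 1958 starting date, the eighteen-month doubling period, and the 32-doubling threshold all come from the paragraph above):

    # Kurzweil's "second half of the chessboard": 32 doublings at
    # Moore's Law pace, starting from the 1958 date cited by the authors.
    START_YEAR = 1958      # BEA first lists "information technology" as an investment category
    DOUBLING_YEARS = 1.5   # one doubling every eighteen months
    DOUBLINGS = 32         # threshold for the "second half of the chessboard"

    new_regime_year = START_YEAR + DOUBLINGS * DOUBLING_YEARS
    growth_factor = 2 ** DOUBLINGS

    print(f"32 doublings from {START_YEAR} lands in {new_regime_year:.0f}")  # 2006
    print(f"Cumulative growth: {growth_factor:,}x")  # 4,294,967,296x

A cumulative factor of over four billion is the point of the chessboard metaphor: each additional doubling now dwarfs everything that came before it.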

Second, there’s the trend of the digitization of everything: maps, books, speech—they’re all being stored digitally in a form that’s amenable for processing and analysis. For example, Waze, the navigation app,

uses several streams of information: digitized street maps, location coordinates for cars broadcast by the app, and alerts about traffic jams, among others. It’s Waze’s ability to bring these streams together and make them useful for its users that causes the service to be so popular.

Digitized information is so powerful because it can be reproduced without cost and therefore used in innumerable applications.

Finally, they describe innovation as being driven by a recombination of existing technologies:

The Web itself is a pretty straightforward combination of the Internet’s much older TCP/IP data transmission network; a markup language called HTML that specified how text, pictures, and so on should be laid out; and a simple PC application called a ‘browser’ to display the results. None of these elements was particularly novel. Their combination was revolutionary.

As the internet facilitates the availability of information and other resources, this process of recombination accelerates—“Today, people with connected smartphones or tablets anywhere in the world have access to many (if not most) of the same communications resources and information that we do while sitting in our offices.”

So, it seems, we’re on the brink of a revolution—these mind-boggling technologies being anecdotal evidence of that—but how will that revolution manifest itself? A growth in population like the one that attended the industrial revolution is impossible, so is this a revolution just of awe and wonder, or is there a measure that captures just how fast and meaningfully these technologies are changing the world? While the authors talk about the inadequacy of traditional economic measures to capture the change (GDP in particular is the bugbear here—“When a business traveler calls home to talk to her children via Skype, that may add zero to GDP, but it’s hardly worthless”), they do not offer a clear metric for even the positive impact (or “bounty”) of recent progress.

On the other hand, McAfee and Brynjolfsson do an admirable job of talking concretely about at least one of the negative impacts of all this change: economic inequality (or what they call the “spread”). “Digital technologies can replicate valuable ideas, insights, and innovations at very low cost,” they write, and “[t]his creates bounty for society and wealth for innovators, but diminishes the demand for previously important types of labor, which can leave many people with reduced incomes.” To those who may argue that tax policy, the influence of the finance industry, or social norms are the source of growing inequality, the authors note that inequality in Sweden, Finland, and Germany has actually increased more rapidly over the past twenty or thirty years than it has in the U.S. Technology is the culprit here, and it has been more disruptive in recent years for two reasons.

The first, and primary, reason is that work in digital goods, machine learning algorithms, internet software, and so forth is not subject to capacity constraints. In traditional labor markets, the best laborer can only sell so many hours of his or her work, leaving the second-best laborer opportunities (though at an appropriately lower rate). In digital markets, on the other hand,

a software programmer who writes a slightly better mapping application—one that loads a little faster, has slightly more complete data, or prettier icons—might completely dominate a market. There would be little, if any, demand for the tenth-best mapping application, even if it got the job done almost as well.

This effect is magnified by globalization, the second reason. Local leaders, who were previously safe servicing their users, are now being disrupted by global leaders—a locally produced mapping application has no advantage over Google Maps, whereas a local plumber is in no danger of competition from a better, foreign plumber.

While the discussion of inequality as a consequence of recent advances in technology in general, and artificial intelligence in particular, was thorough, I felt the arguments and coverage were weaker in two areas. First, the policy recommendations were mostly quite generic (almost admittedly so—the authors referred to them as “Econ 101” policies). These included suggestions to focus on schooling (emphasizing “ideation, large-frame pattern recognition, and complex communication instead of the three Rs”), to encourage startups, to support science and immigration, and to upgrade infrastructure. While these are all sound policy suggestions, they are generically good ones and don’t specifically address the issues raised by new artificial intelligence. Their “long term” recommendations do try to be a little more targeted at the employment impact of new technology, but they seem somewhat fatalistic—perhaps a basic income or a negative income tax, they suggest, could help all those who will be displaced. Second, while inequality is a major issue, the authors discuss other difficult problems only in passing in the closing pages of the book. These include issues of privacy, fragility in highly coupled systems, and the possibility of the “singularity” and machine self-awareness.

The Second Machine Age was first published in 2014 (and issued in paperback last year), and it feels like it just barely missed deep learning as a framework for understanding why progress has been so significant recently and for anticipating upcoming issues and challenges. In a summary of research that now feels oddly archaic, the authors write that “innovators often take cues from biology as they’re working, but it would be a mistake to think that this is always the case, or that major recent AI advances have come about because we’re getting better at mimicking human thought.”

“Current AI, in short, looks intelligent, but it’s an artificial resemblance. That might change in the future.” Indeed it has, and we’re beginning to see all the consequences of these changes.

About the Book Authors

Erik Brynjolfsson is the director of the MIT Center for Digital Business and one of the most cited scholars in information systems and economics. He is a cofounder of MIT's Initiative on the Digital Economy, along with Andrew McAfee. He and McAfee are the only people named to both the Thinkers 50 list of the world’s top management thinkers and the Politico 50 group of people transforming American politics.

Andrew McAfee is a principal research scientist at the MIT Center for Digital Business and the author of Enterprise 2.0. He cofounded MIT's Initiative on the Digital Economy with Brynjolfsson.

 
