
55th Anniversary of Moore's Law

April 2020 marks 55 years since Intel co-founder Gordon Moore published 'Cramming more components onto integrated circuits', the paper that subsequently became known as the origin of his eponymous law. For over 50 of those years Intel and its competitors kept making Moore's law come true, but more recent efforts to push down integrated circuit feature sizes have hit economic and physical limits that force us to consider what happens in a post-Moore's-law world.

Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.
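
As a back-of-the-envelope illustration of what that doubling implies (a sketch for this article, not a figure from Moore's paper; the roughly 2,300 transistors of the 1971 Intel 4004 are used as the starting point), a clean two-year doubling compounds like this:

    # Illustrative only: project Moore's law from the ~2,300 transistors
    # of the 1971 Intel 4004, doubling every two years.
    start_year, start_count = 1971, 2_300
    for year in range(start_year, 2022, 10):
        doublings = (year - start_year) / 2
        print(year, f"~{int(start_count * 2 ** doublings):,} transistors")

By 2021 that works out to tens of billions of transistors, which is roughly where today's largest CPUs and GPUs sit.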

As Bryan Cantrill points out in his QCon San Francisco 2019 presentation 'No Moore Left to Give: Enterprise Computing after Moore's Law', there are a number of effects noted by Moore in his original paper that we've taken for granted for much of the microprocessor era:

  1. Transistor speed increasing exponentially over time
  2. Transistors per dollar increasing exponentially over time
  3. Transistor density increasing exponentially over time
  4. Transistors in a package increasing exponentially over time

Speed was the first to run into problems with physics. As clock frequencies passed 3GHz in the mid-noughties, the Dennard scaling effect that had allowed manufacturers to increase frequency without increasing overall power consumption broke down. From that point, the additional transistors made possible by Moore's law went into providing more compute cores on each CPU. Manufacturers also continued to improve single-thread performance with deeper instruction pipelining and speculative execution (the latter becoming the source of security problems such as Spectre and Meltdown).

Plot by Karl Rupp from his microprocessor trend data (CC BY 4.0 license)
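
The root of that frequency wall is the dynamic power of CMOS logic, roughly P = C × V² × f. The sketch below is illustrative only; the ~0.7x per-generation scaling factors are textbook Dennard numbers rather than figures from this article. It shows why higher clocks were essentially free while voltage still scaled, and why they stopped being free once voltage levelled off:

    # Illustrative only: dynamic CMOS power is roughly P = C * V^2 * f.
    def dynamic_power(c, v, f):
        return c * v ** 2 * f

    baseline = dynamic_power(c=1.0, v=1.0, f=1.0)

    # Dennard-era generation: capacitance and voltage both shrink by ~0.7x,
    # frequency rises ~1.4x, and power per transistor roughly halves,
    # cancelling out the ~2x more transistors packed into the same area.
    dennard = dynamic_power(c=0.7, v=0.7, f=1.4)       # ~0.48 * baseline

    # Post-Dennard generation: voltage no longer scales, so the same 1.4x
    # clock bump keeps per-transistor power flat while density still doubles,
    # and total chip power heads upwards.
    post_dennard = dynamic_power(c=0.7, v=1.0, f=1.4)  # ~0.98 * baseline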

Despite clock speed levelling off at around 3.5GHz, Moore's law kept on serving up greater density. But an evil twin to Moore's law started to emerge: the cost of building new fabrication plants (and of developing the underlying techniques needed for smaller feature sizes) kept rising exponentially. This has left Intel mostly stuck at 14nm 'FinFET' devices since 2014, and struggling to scale production of 10nm devices. Meanwhile Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung have brought 7nm devices to market at scale, and have 5nm plants that are expected to begin volume production this year. 3nm parts using gate-all-around field-effect-transistors (GAAFET) are in development, but could mark the end of the road for feature-size shrinkage as quantum tunneling effects become more problematic.

The end of Moore's law doesn't, however, mean the end of CPU innovation. As Jessie Frazelle observes in 'Chipping away at Moore's law,' many manufacturers have shifted to constructing CPUs out of multiple chiplets that form a multi-chip module. Even this was anticipated by Moore in 1965 when he wrote, "It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected." In effect, the symmetric multiprocessing (SMP) that used to employ multiple CPU sockets has simply been shrunk into a single CPU (or system on chip [SoC]) package with its own internal interconnect fabric, such as AMD's Infinity Fabric (IF) or Intel's Advanced Interface Bus (AIB). Just as accelerators like Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) have moved from standard interconnects to specialised high-speed interconnects, we can expect them next to move onto the CPU's interconnect fabric, and hence inside the CPU's packaging, to achieve higher throughput and lower latency.

Whilst single-thread performance has continued to improve, the rate of improvement has slowed since 2006 and may be the next thing to flatten. This could bring the advantages and disadvantages of various instruction set architectures (ISAs) into sharp focus as x86, ARM and RISC-V compete to make the best use of the silicon available to them from below, and to provide the best optimisation capabilities to the tool chains running on top. Multi-tasking operating systems, virtualisation, and multi-threaded programming have all provided means to keep systems busy as they've shifted to multi-core. But parallel programming, for everything except trivially parallel problems, remains a challenge for most developers and an area where we can expect tools to improve, particularly as they make more use of machine learning techniques.
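
For the trivially parallel case, spreading independent work across the cores that Moore's law delivered is already straightforward. The sketch below (a minimal example assuming CPython's standard library, not something from the article) fans a CPU-bound function out over a process pool; the hard part of parallel programming begins once tasks need to share state or coordinate:

    # Minimal sketch: trivially parallel, CPU-bound work spread across cores.
    from concurrent.futures import ProcessPoolExecutor

    def work(n):
        # Stand-in for an independent, CPU-bound task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [5_000_000] * 8
        # ProcessPoolExecutor defaults to one worker per available core.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(work, inputs))
        print(len(results), "tasks completed")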

Moore's law has served up improvements in speed, capability, and economy for 55 years, going back to before the advent of the microprocessor. Our industry, and society as a whole, have taken those improvements for granted over that time. We now find ourselves going through another flat spot, as when Dennard scaling ran out around 2006. But as we saw then, that doesn't mean everything comes to a halt; it just means we need to adjust to new norms in how we build systems and write software for them.
