The End of More – The Death of Moore’s Law

A version of this article first appeared in IEEE Spectrum.

For most of our lives, the idea that computers and technology would get better, faster, and cheaper every year was as assured as the sun rising every morning. The story “GlobalFoundries Stops All 7nm Development” doesn’t sound like the end of that era, but for anyone who uses an electronic device, it most certainly is.

Technology innovation is going to take a different direction.


GlobalFoundries was one of the three companies that made the most advanced silicon chips for other companies (AMD, IBM, Broadcom, Qualcomm, STM, and the Department of Defense). The other leading-edge foundries are Samsung in South Korea and TSMC in Taiwan. Now there are only two pursuing the leading edge.

This is a big deal.

Since the invention of the integrated circuit ~60 years ago, computer chip manufacturers have been able to pack more transistors onto a single piece of silicon every year. In 1965, Gordon Moore, one of the founders of Intel, observed that the number of transistors on a chip was doubling regularly (a pace he later pegged at every 24 months) and predicted it would continue to do so. For decades the chip industry managed to live up to that prediction. The first integrated circuits in 1960 had ~10 transistors. Today the most complex silicon chips have 10 billion. Think about it: silicon chips can now hold a billion times more transistors.
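That billion-fold figure checks out with simple arithmetic: roughly 60 years at one doubling every two years is about 30 doublings, and 2^30 is just over a billion. Here is a minimal sketch of that projection; the ~10-transistor 1960 starting point comes from the article, while the perfectly fixed two-year cadence is an idealization, not the industry's actual year-by-year history:

```python
def transistors(year, start_year=1960, start_count=10, years_per_doubling=2):
    """Transistor count projected by a fixed doubling cadence (Moore's Law)."""
    doublings = (year - start_year) / years_per_doubling
    return start_count * 2 ** doublings

# ~10 transistors in 1960 doubles its way to ~10 billion by 2020
# (30 doublings over 60 years).
for year in (1960, 1980, 2000, 2020):
    print(year, f"{transistors(year):,.0f}")
```

Running it shows the startling compounding: ten thousand transistors by 1980, ten million by 2000, ten billion by 2020.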

But Moore’s Law ended a decade ago. Consumers just didn’t get the memo.

No More Moore – The End of Process Technology Innovation
Chips are actually “printed,” not with a printing press but with lithography, using exotic chemicals and materials in a “fab” (a chip fabrication plant – the factory where chips are produced). Packing more transistors in each generation of chips requires the fab to “shrink” the size of the transistors. The first transistors were printed with lines 80 microns wide. Today Samsung and TSMC are pushing to produce chips with features a few dozen nanometers across. That’s about a 2,000-to-1 reduction.

Each new generation of chips that shrinks the line widths requires fabs to invest enormous amounts of money in new chip-making equipment. While the first fabs cost a few million dollars, current fabs – the ones that push the bleeding edge – cost over $10 billion.

And the exploding cost of the fab is not the only issue with packing more transistors on chips. Each shrink of chip line widths requires more complexity. Features have to be precisely placed on exact locations on each layer of a device. At 7 nanometers this requires up to 80 separate mask layers.

Moore’s Law was an observation about process technology and economics. For half a century it drove the aspirations of the semiconductor industry. But the other limit to packing more transistors onto a chip is physical, described by Dennard scaling: as transistors got smaller, their power density stayed constant, so a chip’s power use stayed in proportion to its area. Around 2005 Dennard scaling broke down – supply voltages could no longer be lowered with each shrink – creating a “Power Wall,” a barrier that has capped microprocessor clock speeds at around 4 GHz ever since. It’s why clock speeds on your microprocessor stopped increasing by leaps and bounds 13 years ago, and why memory density is no longer increasing at the rate we saw a decade ago.
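The physics above comes down to the classic formula for dynamic CMOS switching power, P = C·V²·f. The sketch below illustrates the two regimes; the scale factor k and the capacitance, voltage, and frequency numbers are made-up illustrative values, not figures from the article:

```python
def dynamic_power(c_farads, v_volts, f_hz):
    """Dynamic switching power of a CMOS circuit: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hz

k = 1.4  # roughly one process generation's linear shrink factor

# Ideal Dennard scaling: dimensions, voltage, and capacitance all
# shrink by k while frequency rises by k, so per-transistor power
# falls by k^2. Transistor density rises by k^2, so power density
# across the chip stays constant.
p0 = dynamic_power(1e-15, 1.0, 3e9)
p1 = dynamic_power(1e-15 / k, 1.0 / k, 3e9 * k)
print(p1 / p0)  # ~1 / k^2 per transistor

# Post-2005 reality: voltage can no longer drop with each shrink.
# Shrinking C and raising f at constant V leaves per-transistor
# power flat while density still rises by k^2, so power density
# climbs every generation -- the "Power Wall".
p2 = dynamic_power(1e-15 / k, 1.0, 3e9 * k)
print(p2 / p0)  # ~1 per transistor, times k^2 more transistors
```

The design choice this forces is the one the article describes: since raising the clock further would melt the chip, the spare transistors go into more cores and special-purpose logic instead of more gigahertz.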

This problem of continuing to shrink transistors is so hard that even Intel, the leader in microprocessors and for decades the gold standard in leading fab technology, has had problems. Industry observers have suggested that Intel has hit several speed bumps in its next-generation push to 10- and 7-nanometer designs and now trails TSMC and Samsung.

This combination of spiraling fab cost, technology barriers, power density limits and diminishing returns is the reason GlobalFoundries threw in the towel on further shrinking line widths. It also means the future direction of innovation on silicon is no longer predictable.

It’s the End of the Beginning
The end of putting more transistors on a single chip doesn’t mean the end of innovation in computers or mobile devices. (To be clear: 1) the bleeding edge will still advance, but almost imperceptibly year to year; 2) GlobalFoundries isn’t shutting down, it’s just no longer going to be the one pushing the edge; and 3) existing fabs can keep making current-generation 14nm chips, and their expensive tools have been paid for. Even older fabs at 28-, 45-, and 65nm can make a ton of money.)

But what it does mean is that we’re at the end of guaranteed year-to-year growth in computing power. The result is the end of the type of innovation we’ve been used to for the last 60 years. Instead of offering simply faster versions of the same devices, designers now need to get more creative with the 10 billion transistors they have to work with.

It’s worth remembering that human brains have had 100 billion neurons for at least the last 35,000 years. Yet we’ve learned to do a lot more with the same compute power. The same will hold true with semiconductors – we’re going to figure out radically new ways to use those 10 billion transistors.

For example, new chip architectures are coming (multi-core CPUs, massively parallel designs, and special-purpose silicon for AI/machine learning, such as Nvidia’s GPUs), along with new ways to package chips, new interconnects to memory, and even new types of memory. Other designs push for extremely low power use, and still others for very low cost.

It’s a Whole New Game
So, what does this mean for consumers? First, high-performance applications that need very fast computing will continue their move from your local device to the cloud (where data centers are measured in football-field sizes), further enabled by new 5G networks. Second, while the computing devices we buy will not be much faster on today’s off-the-shelf software, new features – facial recognition, augmented reality, autonomous navigation, and apps we haven’t even thought of – are going to come from new software using new technology like new displays and sensors.

The world of computing is moving into new and uncharted territory. For desktop and mobile devices, the need for a “must have” upgrade won’t be for speed, but because there’s a new capability or app.

For chip manufacturers, for the first time in half a century, all rules are off. There will be a new set of winners and losers in this transition. It will be exciting to watch and see what emerges from the fog.

Lessons Learned

  • Moore’s Law – the doubling every two years of the number of transistors that fit on a chip – has ended
  • Innovation will continue in new computer architectures, chip packaging, interconnects, and memory
  • 5G networks will move more high-performance consumer computing needs seamlessly to the cloud
  • New applications, and hardware other than the CPU (5G networks, displays, sensors), will now drive sales of consumer devices
  • New winners and losers will emerge in consumer devices and chip suppliers

5 Responses

  1. I would add that concurrent software will become increasingly important. Existing programming paradigms are surprisingly single-threaded, and multi-threaded programming is notoriously hard to write correctly. Languages and frameworks that are “parallel native” will be key to the next wave of performance. Erlang and Elixir, or others that rely on concurrency tools like actors, come to mind.

  2. What about FPGAs as part of this analysis? I think new rules are coming from this area of ‘programming hardware’. What do you think?

  3. Hello Steve – a useful article – comprehensive – and provides an accurate chronology. Do like the headline – although, like you, a product of the high-tech field for four (4) decades and grew-up with Moore’s Law. About a decade ago – particularly, in the High-Performance Computing segment of the business – we saw price/performance ratios declining (with some level of confusion). Do see that this has changed and understand the constraints and contributing factors – although it now appears that the game has changed to: application diversity and an emphasis on how special-purpose morphs to general-purpose. Example: remember when FPGA technology was utilized in ASIC design – and for accelerating certain functions/applications (typically, as a sub-routine engine). Now, look at today’s FPGA-based, platforms – versatile – have found their way into many, previously mainline applications for general-purpose compute engines – and other areas (like Network Acceleration/Network Security, etc., etc.). Still like – although do not adhere to Moore’s Law – although let’s give Moore credit for his vision and practical emphasis (up to a point in time). And – let’s not count Moore’s Law out – as it is difficult to predict and determine – what the technologies of the future will be. Good read, Steve – and thanks for sharing your Industry insights. Author: Market Warfare: Leadership & Domination Over Competitors

  4. Could this also mean an increasing demand for computing power (see blockchain reward use cases)?
