Moore's Law is not dead!


May 28, 2019
I have to give credit to PcPer for this one. This could easily apply to CPUs and GPUs, even though I'm posting it here. Godfrey Cheng, Head of Global Marketing for TSMC, has posted an interesting blog on their site. In it, he goes into detail to debunk some of the misconceptions attributed to Moore's Law.

"Moore's Law is actually misnamed as a law as it is more accurate to describe it as a guideline of historical observation and future prediction of the number of transistors in a semiconductor device or chip. These observations and predictions have largely held true for the past several decades. As we approach a new decade, some appear to share an opinion that Moore's Law is dead."

Well, I'm far from having the technical expertise to prove or disprove whether we've reached actual physical limits, but some companies have shown only minor incremental performance increases for the better part of the last ten years. Over the course of a decade the gains might look good, but from year to year, not so much.

"First, let's discuss the elephant in the room. Some people believe that Moore's Law is dead because they believe it is no longer possible to continue to shrink the transistor any further. Just to give you an idea of the scale of the modern transistor, the typical gate is about 20 nanometers long. A water molecule is only 2.75 Angstrom or 0.275 nanometer in diameter! You can now start counting the number of atoms in a transistor. At this scale, many factors limit the fabrication of the transistor. The primary challenge is the control of materials at the atomic level. How do you place individual atoms to create a transistor? How do you do this for billions of transistors found on a modern chip? How do you build these chips that have billions of transistors in a cost effective manner? "
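The scale comparison in the quote is easy to check with a quick back-of-envelope calculation. Using the figures given above (a ~20 nm gate, a 0.275 nm water molecule), plus the silicon lattice constant (a known physical value, not from the blog), you really can "start counting":

```python
# Back-of-envelope scale check using the figures quoted above:
# a ~20 nm transistor gate vs. a 0.275 nm water molecule.
gate_length_nm = 20.0
water_molecule_nm = 0.275
silicon_lattice_nm = 0.543  # silicon lattice constant (known value, not from the quote)

molecules_across = gate_length_nm / water_molecule_nm
lattice_cells_across = gate_length_nm / silicon_lattice_nm

print(f"Water molecules across the gate: ~{molecules_across:.0f}")        # ~73
print(f"Silicon lattice cells across the gate: ~{lattice_cells_across:.0f}")  # ~37
```

Roughly 70 water molecules, or under 40 silicon unit cells, span a single gate, which is why "controlling materials at the atomic level" is not an exaggeration.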

"Beyond the individual transistor, we also need to look at the system level density. Circling back and looking at the classic compute tasks of CPUs and GPUs, the modern chip has extremely fast transistor clock speeds that approach 5 gigahertz and beyond. The central challenge to these compute tasks is actually to keep the CPU and GPU cores fed with data. While this is classically a software challenge, modern architectures and methods for threading have squarely put the performance bottleneck at the hardware level. We have finally seen the limitations of memory caching in the era of big data analytics and AI. "

This is something I can relate to. For years, 5 GHz was a magic number only a very special few could reach, some only with LN2 or similar exotic cooling. We've finally reached the point where you can get it on air, straight out of the box. As for GPUs, both sides of the fence are still dancing around 2 GHz, although GPUs typically have a lot more cores, and we're seeing significant increases in VRAM speeds.
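The "keeping cores fed" problem in the quote can be put in rough numbers. As a sketch, assume a main-memory access costs on the order of 100 ns (an illustrative ballpark figure of mine, not a number from the blog); at the 5 GHz clocks mentioned above, that single access costs hundreds of cycles:

```python
# Rough illustration of the memory bottleneck described above.
# The 100 ns DRAM latency is an assumed ballpark figure, not from the blog.
clock_hz = 5e9                    # ~5 GHz, as quoted
cycle_ns = 1e9 / clock_hz         # duration of one clock cycle in ns
dram_latency_ns = 100.0           # assumed typical main-memory access latency

stall_cycles = dram_latency_ns / cycle_ns
print(f"One clock cycle: {cycle_ns} ns")                         # 0.2 ns
print(f"Cycles stalled on one DRAM access: ~{stall_cycles:.0f}")  # ~500
```

A core that misses its caches can sit idle for ~500 cycles per access, which is exactly why cache hierarchies, and their limits under big-data and AI workloads, matter so much.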

"Advanced packaging today brings memory close to the logic. Typically, logic cores are fed through standalone memory chips through interfaces such as DDR or GDDR. The physical distance between the memory device and the logic cores limit performance through increased latency. Bandwidth is also limited with discrete memory as they only offer limited interface width. Additionally, power consumption for discrete logic and memory also govern a device's overall performance, especially in applications such as smartphones or IOT devices as there is limited ability to dissipate the thermal energy radiated by discrete devices. "
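The interface-width point above comes down to simple arithmetic: peak bandwidth is bus width times per-pin data rate. Here is a minimal sketch; the specific width and rate figures are round-number examples I've chosen for illustration, not numbers from TSMC's blog:

```python
# Illustrative bandwidth arithmetic for the interface-width point above.
# All width/rate figures are assumed, round-number examples, not from the blog.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = width (bits) * per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Narrow discrete interface, e.g. a 64-bit DDR channel at 3.2 Gbit/s per pin.
print(bandwidth_gbs(64, 3.2))     # 25.6 GB/s
# Wide in-package interface, e.g. a 1024-bit HBM-style stack at 2.0 Gbit/s per pin.
print(bandwidth_gbs(1024, 2.0))   # 256.0 GB/s
```

A ten-fold bandwidth gap from interface width alone is why packaging memory next to the logic, rather than on a separate board trace, pays off.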

These excerpts are only a small part of what he wrote, and I highly recommend clicking the link above to read the rest. Whether you agree or disagree, it's an interesting read.

"I cannot do justice to this topic through a simplified blog. TSMC will be providing a keynote on the future of the transistor at Hotchips on August 20th on Stanford campus. Our lead researcher, Dr. Philip Wong, will be providing the keynote called “What Will the Next Node Offer Us?” Come join us and see Dr. Wong's presentation on the future of the semiconductor and why Moore's Law is not dead! "