AMD Thinks Moore’s Law Still Has Six to Eight Years Left

Tsing

The FPS Review
Staff member
NVIDIA CEO Jensen Huang declared in September that Moore's Law is dead, but AMD doesn't seem to agree: in a new interview, CTO Mark Papermaster suggested that the observation, which holds that the number of transistors in an integrated circuit doubles about every two years, still has some life left.

See full article...
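
Taking the headline at face value, here's a quick back-of-the-envelope look at what six to eight more years of the classic doubling cadence would mean (a rough sketch, nothing more):

```python
# Moore's Law in its classic form: transistor counts double roughly every two years.
def scaling_factor(years, doubling_period_years=2):
    return 2 ** (years / doubling_period_years)

for years in (6, 8):
    print(f"{years} more years -> about {scaling_factor(years):.0f}x the transistors")
# 6 more years -> about 8x the transistors
# 8 more years -> about 16x the transistors
```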
 
Man, gotta love that last name: Papermaster. Sounds like some kind of unknown superhero.

Well, I don't know. Jensen declared it so it must be so, right?
 

Tell them I am the PAPERMASTER. Uhhh sir it's a news article... Oh right... ummm Mark... the PAPERMASTER?
 
Doesn't really matter.
I mean, it's cool and all that we keep making things smaller and cramming more in, but as mentioned, you can also just make more dies. In fact, you can go back to more specialized separate chips working together on the motherboard. Yes, there are difficulties with that too, but it all works out to circumvent the need to make dies that are smaller in size yet bigger in circuit count.

To me Moore's 'law' is a bit of a silly observation to talk about, but if we're going to keep talking about it, I think that if you apply it system-wide (however big you want the system to be: a desktop, a server rack, and so on), it will probably hold in some form for a long, long time.

Also, it may be that silicon hits a very hard wall in terms of nm. That's fine; why not make chips the size of my face? Factory investment will mature, and going for acreage when it's the only thing you have left to do won't be a big deal. I guess a rethink of semiconductors is also possible, new materials and so on.
 

I think the near future is going to bring more and more designs like Apple's M1 and M2 and NVIDIA's 4090.

Let me explain.

Overclockers have always known that boosting voltages can allow for higher clocks, but that it doesn't scale well. As you reach the limit of the chip you start creating a ton of extra heat and power consumption for small marginal improvements in performance.

Over the last several years, the strategy has been to push per-core clocks as high as possible through a combination of process yields and voltage management, so that clocks can be boosted when needed.

The M1, M2, and 4090 seem to take a different approach. They are HUGE chips, and because they are huge, they don't need massive voltages to reach high clocks; they can instead rely on their high transistor count at lower voltages for better scaling and get their performance that way.

This would be a little tricky to accomplish on the AMD/Intel CPU side while still maintaining x86 compatibility, but for GPUs it is much easier, since they are essentially massively parallelized devices; the sky is the limit on how many "cores" you can add. Just add more of them and lower the voltage to the point that maximizes performance per watt.
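
For anyone who wants to see that scaling argument in numbers, here's a toy model built on the standard dynamic-power approximation (power roughly proportional to capacitance times voltage squared times frequency); the core counts, clocks, and voltages below are made up purely for illustration, not taken from any real chip:

```python
# Toy model of "wide and slow" vs "narrow and fast".
# Dynamic power is approximated as P ~ cores * C * V^2 * f,
# throughput as cores * f. All numbers are illustrative.

def perf_and_power(cores, freq_ghz, voltage, cap_per_core=1.0):
    perf = cores * freq_ghz                                   # relative throughput
    power = cores * cap_per_core * voltage ** 2 * freq_ghz    # relative dynamic power
    return perf, power

configs = {
    "narrow/fast (8 cores @ 3.0 GHz, 1.20 V)": perf_and_power(8, 3.0, 1.20),
    "wide/slow  (32 cores @ 1.5 GHz, 0.85 V)": perf_and_power(32, 1.5, 0.85),
}

for name, (perf, power) in configs.items():
    print(f"{name}: perf={perf:.0f}, power={power:.1f}, perf/W={perf / power:.2f}")

# The wide/slow config lands at roughly the same power budget but about
# twice the throughput, because power climbs with V^2 while performance
# only scales linearly with frequency.
```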

But this is what I think we will see more and more of over the next few years, at least in GPUs: core counts massively increasing, chips running at relatively low clocks to maximize perf/watt, and performance increases coming from that width.

The 4090 accomplishes this with an insanely large unified chip, which probably has horrible yields and helps contribute to its high price, but AMD may have an advantage here with chiplets, if they can just make the interconnects work. If they can make the GPU a modularized set of core chips that communicate over an interconnect, they can theoretically just keep adding modules, and pushing the core count sky high, while still having reasonable yields and keeping the costs slightly more reasonable.
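
A rough illustration of the yield point, using the simple Poisson defect model (yield falls off exponentially with die area); the defect density and die sizes here are assumptions for the sake of the example, not foundry data:

```python
import math

DEFECTS_PER_MM2 = 0.002  # assumed defect density, purely illustrative

def die_yield(area_mm2):
    # Poisson model: probability that a die of this area has zero defects.
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

print(f"600 mm2 monolithic die yield: {die_yield(600):.0%}")   # ~30%
print(f"150 mm2 chiplet yield:        {die_yield(150):.0%}")   # ~74%

# Four 150 mm2 chiplets cover roughly the same silicon as one 600 mm2 die,
# but each small die is far more likely to come out defect-free, which is
# the yield/cost advantage described above (ignoring the extra packaging
# and interconnect cost of stitching chiplets together).
```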

As far as Moore's law goes, people forget that it was really just counting transistors, not performance. A lot of people have read way more into Moore's law over the years than they really should have.
 
but AMD may have an advantage here with chiplets, if they can just make the interconnects work. If they can make the GPU a modularized set of core chips that communicate over an interconnect, they can theoretically just keep adding modules, and pushing the core count sky high, while still having reasonable yields and keeping the costs slightly more reasonable.
That's pretty much what Sam Naffziger said.

https://www.thefpsreview.com/2022/1...-explains-advantages-of-rdna3-chiplet-design/
 
designs like Apple's M1 and M2
Apple's chip is actually pretty small: the base M1 is just 119mm2.

Now the Ultra sits at 980mm2, but that's actually two 480mm2 Max dies sitting on an interposer. So it's like NVIDIA's larger die plus AMD's chiplet design.

The difference between the base 119 and the Max 480 is just more graphics cores (8 vs 32), but those and the options in between are monolithic.

The 4090 is 608mm2 for comparison.
 
I was wondering about the size differences.
 
Apple's chip is actually pretty small: the base M1 is just 119mm2.

I could have sworn I had read it had a huge transistor count.

Maybe I was misinformed there. I'm not really an Apple connoisseur. Still, I do think this will be the way forward for GPUs for the next couple of years.
 
I could have sworn I had read it had a huge transistor count.
They do - Apple usually gets access to a node a year or so early (and they pay through the nose for it).

The smallest M1 is 16B transistors and the Ultra clocks in at 114B, both on TSMC 5nm; the base design is just over two years old now.

The more recent M2 is the same 119mm2 and bumps up to 20B transistors on the TSMC N5P node (there are no Pro, Max, or Ultra M2s yet).

The 4090 is 76B on TSMC 4nm.
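
If it helps, here's the rough density math implied by the figures quoted above (die areas in mm2, transistor counts in billions; numbers are as stated in the thread, so treat them as approximate):

```python
chips = {
    "Apple M1 (TSMC 5nm)":     (119, 16),
    "Apple M2 (TSMC N5P)":     (119, 20),
    "Apple M1 Ultra (2 dies)": (980, 114),
    "NVIDIA 4090 (TSMC 4nm)":  (608, 76),
}

for name, (area_mm2, transistors_b) in chips.items():
    density = transistors_b * 1000 / area_mm2  # millions of transistors per mm2
    print(f"{name}: ~{density:.0f}M transistors/mm2")

# The M1 die is physically small, but its density is in the same ballpark
# as the much larger 4090, which is why its transistor count sounds so big
# for a 119mm2 chip.
```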
 
Tell them I am the PAPERMASTER. Uhhh sir it's a news article... Oh right... ummm Mark... the PAPERMASTER?
The PaperMaster is a weapon that was co-developed in secret by the major players in the semiconductor manufacturing industry to constrain the effects of Rock's Law (also known as Moore's second law).
 