NVIDIA and AMD’s Next-Generation Flagship GPUs Rumored to Draw over 420 Watts of Power

Tsing
The FPS Review
Staff member
Joined: May 6, 2019

Image: NVIDIA
NVIDIA and AMD’s next-generation flagship graphics cards, which are expected to leverage the Lovelace and RDNA 3 architectures, respectively, could be more power hungry than anyone might have imagined. This is according to the latest rumors from prominent leakers such as kopite7kimi, who recently commented on speculation about NVIDIA’s flagship GeForce RTX 40 Series products featuring a TDP of at least 400 watts. While that specification already exceeds the GeForce RTX 3090’s 350-watt TDP by a notable amount, kopite7kimi suggested that the TDP for the flagship models will go even higher, stating that “400 is not enough.” As for AMD, Beyond3D forum member and alleged insider Bondrewd recently shut down speculation that Navi 31 could draw as much as 500 watts. That prompted 3DCenter.org to seek out a more probable estimate by looking into the...



 
Yeah, back before 32nm, power use used to scale almost perfectly linearly with node size, all else being equal.

Ever since 32nm that scaling has become worse and worse. There is still a node to node benefit, but it is much smaller than it used to be.

Pesky sub-atomic effects...
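For context, here's a back-of-envelope sketch (my own illustration, not from the post) of classical Dennard scaling, the regime being described, where power per transistor shrank in step with area:

```python
# Back-of-envelope Dennard (constant-field) scaling: voltage, capacitance,
# and linear dimensions all shrink by a factor k while frequency rises by k.
# Dynamic power P = C * V^2 * f then scales as (1/k) * (1/k)^2 * k = 1/k^2,
# the same factor as transistor area, so power density stays flat.
def dennard_power_scale(k):
    """Relative dynamic power per transistor after scaling dimensions by 1/k."""
    c = 1 / k           # capacitance scales with linear dimension
    v = 1 / k           # supply voltage scales with dimension
    f = k               # clock frequency scales up by k
    return c * v**2 * f  # = 1/k^2

# A full node shrink is roughly k = sqrt(2): area halves, and so did power
# per transistor -- the "almost perfectly linear" scaling described above.
print(dennard_power_scale(2**0.5))  # ~0.5
```

Post-32nm, leakage and the end of voltage scaling broke the `v = 1/k` assumption, which is exactly why the node-to-node benefit shrank.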
 
They could decide to devote a node to power savings instead of performance. Can you imagine the whining, though, if the 40X0 line had the same performance but half the power draw?
 
I gotta wonder if such a comparison could even be made today. You'd never be able to keep enough of the variables in check to really say, right?

Not that it didn't at least seem like power draw could drop node-to-node in a somewhat linear fashion. Just that complexity continues to skyrocket alongside transistor shrinks.

Also, the power envelopes are targeted. End users may not really be deterred by an extra power lead outside of the SFF realm, but I'd bet OEMs sure are, especially when it comes to mobile parts! I expect there to be solid market incentives for GPU manufacturers to target specific power consumption bands, limiting or bolstering just how much they take advantage of new process nodes, which will also distort our perception of the efficiency gains.
 
There are separate nodes for power saving versus high performance. It's just that the power-saving ones don't tend to clock up very well.
 
The move to 5nm TSMC (if true) will massively improve perf/watt over their current node.

NVIDIA will be going from 44.56 million transistors per mm² to 173 million transistors per mm².

This rumor is likely about the A100 replacement, and that part already has a TDP of 400 watts, so this is not really even news.
 
There would certainly be applications for that, especially in mobile or compact/quiet systems, but personally I'll take raw brute power over anything else for my desktop.

I have a pretty beefy custom water loop though :p
 
So this really has me thinking: they could basically build 3090 performance at 1/4 of the die size. I would actually love to see a "tick" generation, with the same performance as the 3090 but dramatically more supply.
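To put rough numbers on that (a sketch using the density figures quoted above; real die sizes depend on SRAM and I/O, which scale worse than logic, so treat this as an upper bound on the shrink):

```python
# Rough die-size estimate from the quoted logic-density figures.
old_density = 44.56   # MTr/mm^2, figure quoted above for the current node
new_density = 173.0   # MTr/mm^2, figure quoted above for TSMC 5nm

ratio = new_density / old_density
print(f"density ratio: {ratio:.2f}x")                                  # ~3.88x
print(f"same transistor count -> ~{1/ratio:.0%} of the old die area")  # ~26%
```

So "1/4 of the die size" is about right as a best case, before accounting for the parts of the die that don't shrink as well.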
 
I'd be all for that as well: just a generational improvement in power and efficiency. Let more people into that tier of performance.
 