NVIDIA GeForce RTX 4080 Could Have a 420-Watt TGP

Tsing · The FPS Review Staff member · Joined: May 6, 2019 · Messages: 11,237 · Points: 83
The NVIDIA GeForce RTX 4080 may have a TGP that's 100 watts higher than the GeForce RTX 3080 (10 GB).

 
So if the TDP of the card is 33% higher than the TDP of the previous generation... wouldn't we expect the performance increase to be greater than 33%? Otherwise, all they are doing is pumping more clocks through the same IPC at the cost of more power.
 
So if the TDP of the card is 33% higher than the TDP of the previous generation... wouldn't we expect the performance increase to be greater than 33%? Otherwise, all they are doing is pumping more clocks through the same IPC at the cost of more power.
Power doesn’t tend to scale linearly with performance. It usually follows the square — it takes 4x the power to double performance.

Just a rough rule of thumb used when dealing with electrical and mechanical power; it doesn't exactly apply to computing power, but it doesn't stray too terribly far as a general rule.
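
To put rough numbers on that, here's a quick sketch applying the square rule to the figures in this thread. The 3080 (10 GB)'s 320 W and the rumored 420 W are just the inputs being discussed, and the square rule itself is only the rule of thumb above, not a measured law:

```python
# Back-of-the-envelope check of the square rule (power ~ performance**2),
# applied to the rumored numbers in this thread: 320 W for the 3080 (10 GB)
# and 420 W for the 4080. Both figures and the rule itself are assumptions.

def perf_ratio_from_power(power_ratio: float, exponent: float = 2.0) -> float:
    """If power scales as performance**exponent, then performance scales
    as power**(1/exponent)."""
    return power_ratio ** (1.0 / exponent)

power_ratio = 420 / 320                          # ~1.31x the power (+31%)
perf_ratio = perf_ratio_from_power(power_ratio)  # ~1.15x under the square rule

print(f"+{power_ratio - 1:.0%} power -> about +{perf_ratio - 1:.0%} performance "
      "if nothing but the power limit changes (same node, same architecture)")
```

Under that assumption, the extra ~31% of power would only buy about 15% more performance on the same architecture and node, so the bulk of any real generational gain has to come from the node and design changes.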
 
So if the TDP of the card is 33% higher than the TDP of the previous generation... wouldn't we expect the performance increase to be greater than 33%? Otherwise, all they are doing is pumping more clocks through the same IPC at the cost of more power.
From the sound of it, they'll need to do at least that. AMD apparently has a pretty large improvement in store if rumors are to be believed (even in part).

Just a rough rule of thumb used when dealing with electrical and mechanical power; it doesn't exactly apply to computing power, but it doesn't stray too terribly far as a general rule.
They can optimize the curve quite a bit, but they are limited in terms of what a particular fabrication node will allow. Node selection, of course, is itself part of the design process!
 
Power doesn’t tend to scale linearly with performance. It usually follows the square — it takes 4x the power to double performance.

Just a rough rule of thumb used when dealing with electrical and mechanical power; it doesn't exactly apply to computing power, but it doesn't stray too terribly far as a general rule.
Let me introduce you to PASCAL.

Certainly the exception rather than the rule, but it proves it can be done.
 
Let me introduce you to PASCAL.

Certainly the exception rather than the rule, but it proves it can be done.
Might need some context there. I know of Pascal, the SI unit for pressure; Pascal, the programming language; and Pascal, the mathematician who was the namesake.

But generally, I still stand by my rule of thumb: for a given technology/node/etc., if you want to double the performance, you need to quadruple the power.
 
Touché, completely missed the NVIDIA reference.


*edit*
So I went and took a deeper look at Pascal, since it was provided as a reference. I plotted out TFLOPS vs. TDP (no overclocks, just whatever Wikipedia had in their chart) and only for the TSMC 16 nm process node cards, since, as has been mentioned, the process node will significantly affect this curve.

True, it's not square, so yeah, it's a good example of an architecture scaling pretty well with respect to power (or of NVIDIA hitting limits with latency or temperature or something else before power could really stretch its legs). It isn't even really close. But you can see it does have a polynomial shape to it; it's going up faster than linear.

[Attached charts: FP32 TFLOPS vs. TDP for the TSMC 16 nm Pascal cards]
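
For anyone who wants to redo that exercise, here's a minimal sketch of the same fit. The TDP and TFLOPS figures below are approximate boost-clock launch specs for the 16 nm Pascal cards, not the exact values behind the charts above; swap in your own numbers from Wikipedia if you want to reproduce the plot.

```python
# Rough power-law fit of FP32 throughput vs. TDP for TSMC 16 nm Pascal cards.
# Values are approximate boost-clock launch specs; this is a sketch,
# not the original poster's data.
import numpy as np

cards = {
    # name:         (TDP watts, FP32 TFLOPS)
    "GTX 1060 6GB": (120, 4.4),
    "GTX 1070":     (150, 6.5),
    "GTX 1070 Ti":  (180, 8.2),
    "GTX 1080":     (180, 8.9),
    "GTX 1080 Ti":  (250, 11.3),
}

tdp = np.array([v[0] for v in cards.values()], dtype=float)
tflops = np.array([v[1] for v in cards.values()], dtype=float)

# Fit TFLOPS ~ a * TDP**k in log-log space; k is the scaling exponent.
k, log_a = np.polyfit(np.log(tdp), np.log(tflops), 1)
print(f"Fitted exponent k ~ {k:.2f}")
# k == 1   -> performance scales linearly with power
# k == 0.5 -> the "4x power for 2x performance" square rule
# k > 1    -> performance grows faster than power
```

An exponent above 1 means throughput grew faster than power across that lineup, which matches the shape described above; the square rule from earlier in the thread would show up as an exponent near 0.5.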
 
Wait, what is TGP? Is that a typo or some new bs metric so you can't do direct comparisons with previous hardware?
 
Might need some context there. I know of Pascal, the SI unit for pressure; Pascal, the programming language; and Pascal, the mathematician who was the namesake.

But generally, I still stand by my rule of thumb: for a given technology/node/etc., if you want to double the performance, you need to quadruple the power.
PASCAL as in the GTX 10x0 family.
 
Wait, what is TGP? Is that a typo or some new bs metric so you can't do direct comparisons with previous hardware?
The GPU folks have moved away from TDP and now use either TGP (power just from the GPU) or TBP (total board power).

I don't know why; nothing on the board uses anywhere near as much power as the GPU does, but I guess if you can shave off the 10 or 15 W the VRAM and other aux chips use from your number, it's good? It does help distinguish, I guess; with a CPU it's clear it's only what's on the die, but with an add-in graphics card, TDP is a bit more ambiguous.
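
As a purely made-up illustration of the difference (every number below is invented for the example, roughly in line with the 10 to 15 W guess above, and not any real card's budget):

```python
# Hypothetical power budget for an add-in card, only to show how
# TGP (GPU chip alone) differs from TBP (everything the card draws).
# All values are invented for illustration.
budget_w = {
    "GPU die (TGP)": 300,
    "VRAM + aux chips": 15,
    "VRM conversion losses": 20,
    "fans / misc.": 5,
}

tgp = budget_w["GPU die (TGP)"]
tbp = sum(budget_w.values())  # total board power
print(f"TGP: {tgp} W, TBP: {tbp} W")  # TGP: 300 W, TBP: 340 W
```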
 
I think it depends on what you want from the number.

With a CPU, while there's not much you can change, you're generally not counting DRAM power consumption, nor does the CPU cooler have to handle heat produced by anything other than the CPU.

On a GPU, these are all inextricably linked: power phases, memory, supporting ICs, and the GPU itself. It's all one power plane in relation to the rest of the system, and it's all cooled by one cooler.

So to me, from a power and cooling perspective, TBP makes sense - but if you're trying to dial in on how a specific GPU is performing, then maybe TGP is more useful?
 
I'm already pulling more than that in my rather extreme 6900xt.

I'm ready :p

(Not sure I am ready for the probable sticker shock though)
 
I'm already pulling more than that in my rather extreme 6900xt.
I just have to flip a switch to get my 3080 12GB FTW3 to pull 450W.

Thing is, this is Nvidia pushing it out at 420W, which is significantly higher - and if they're not pushing things to the very ragged edge, that means that OEMs will be able to dump another hundred or so watts into the cards for their custom versions.
 
800-watt GPU... Crap, guess I'll need a new power... oh wait, I'm good, never mind.
 
800-watt GPU... Crap, guess I'll need a new power... oh wait, I'm good, never mind.
I've been thinking about it, but the new standards (i.e., the single 600W connectors...) aren't really implemented and available for testing, let alone for purchase.
 