AMD Radeon RX 6000 Series GPUs to Feature TBPs As High As 355 Watts

Tsing (The FPS Review staff)
[Image: AMD Radeon RX 6000 Series design sneak peek. Credit: AMD]

We previously shared a leak from Patrick Schur suggesting that AMD’s flagship Radeon RX 6000 Series graphics card would feature a TGP of 255 W. Because NVIDIA and AMD brand their total power specs differently (AMD quotes Total Board Power [TBP], while NVIDIA quotes Total Graphics Power [TGP]), that figure doesn’t translate directly into total draw; Igor Wallossek has published consumption estimates that give us a better idea of how much power the red team’s RDNA 2 cards really use.



After figuring in...

Continue reading...
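For anyone wondering what that kind of conversion looks like in practice, here is a minimal sketch of the idea behind such estimates: take a GPU+memory TGP and add estimated board-level draws on top. The overhead values below are made-up placeholders for illustration, not figures from the article or from Igor's Lab.

# Minimal sketch of a TGP -> TBP estimate. All overhead values are
# hypothetical placeholders, not AMD or Igor's Lab figures.
def estimate_tbp(tgp_w, overheads_w):
    """Add estimated board-level draws (VRM losses, fans, misc.) to a GPU+memory TGP."""
    return tgp_w + sum(overheads_w.values())

board_overheads_w = {
    "vrm_and_conversion_losses": 30,  # placeholder
    "fans": 10,                       # placeholder
    "misc_board_components": 15,      # placeholder
}

print(estimate_tbp(255, board_overheads_w))  # leaked 255 W TGP -> 310 W in this toy example

The point is only that the headline TBP depends heavily on how big you assume those board-level numbers to be.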
 
This is getting more and more interesting.

Who would have thought that in 2020 with smaller and smaller process nodes and after years of focus on mobile low power stuff, both major GPU competitors would be coming out with power monsters.

I'm really hoping AMD hits it out of the park, because Nvidia's launch has been an utter disaster. They could used to be knocked down a few pegs.

I hadn't planned on going AMD this generation, but it may just be the first time since my Radeon HD 7970 that I do!

I'm excited for this launch for once.
 
This is getting more and more interesting.

Who would have thought that in 2020 with smaller and smaller process nodes and after years of focus on mobile low power stuff, both major GPU competitors would be coming out with power monsters.

I'm really hoping AMD hits it out of the park, because Nvidia's launch has been an utter disaster. They could used to be knocked down a few pegs.

I hadn't planned on going AMD this generation, but it may just be the first time since my Radeon HD 7970 that I do!

I'm excited for this launch for once.
If you never want to see significant performance bumps again, then sure, we could start lowering power consumption. But nobody wants to see that. Ampere does look to be about 40% more efficient in rasterization, but it is also about 70% faster. Had NVIDIA stuck to the 250 W status quo on the top card, people would have been complaining about an anemic 30% performance bump on top of the supply issues.
 
If you never want to see significant performance bumps again, then sure, we could start lowering power consumption. But nobody wants to see that. Ampere does look to be about 40% more efficient in rasterization, but it is also about 70% faster. Had NVIDIA stuck to the 250 W status quo on the top card, people would have been complaining about an anemic 30% performance bump on top of the supply issues.

Don't get me wrong, I'm not complaining. I'm just saying I never would have predicted this a year ago.
 
I am hoping the new AMD GPUs are as disruptive as the Ryzen family has been.

The GPU market needs some serious competition at the high end.

Yo Intel, I'm speaking at you too.
 
On the upper tiers of cards, does power consumption really matter?

Anyone slapping an $800-1,500 card in their PC shouldn't be too worried if they need to replace their PSU.
 
On the upper tiers of cards, does power consumption really matter?

Anyone slapping an $800-1,500 card in their PC shouldn't be too worried if they need to replace their PSU.
If I was concerned about power consumption I would stick to using the IGP. As long as the performance is there to back up the power usage and it doesn't overload the breaker for the room the PC is in then I'm good.
 
The terms confuse me. Total Board Power seems straightforward, as in how much the whole card eats up... but then it's not that?
Total Graphics Power could be anything as far as the words go. So... huh, what?
 
The terms confuse me. Total Board Power seems straightforward, as in how much the whole card eats up... but then it's not that?
Total Graphics Power could be anything as far as the words go. So... huh, what?
I have no idea. To the end user nothing really matters besides what the whole card is using, so the way NVIDIA does it makes more sense to me.
 
If you never want to see significant performance bumps again, then sure, we could start lowering power consumption. But nobody wants to see that. Ampere does look to be about 40% more efficient in rasterization, but it is also about 70% faster. Had NVIDIA stuck to the 250 W status quo on the top card, people would have been complaining about an anemic 30% performance bump on top of the supply issues.
I never saw anyone complaining about Pascal using “too little power.”
Unfortunately, the trend of doing “more with less” has pretty much died with Ampere.
 
If I was concerned about power consumption I would stick to using the IGP. As long as the performance is there to back up the power usage and it doesn't overload the breaker for the room the PC is in then I'm good.

Lower power consumption ---> lower heat ---> lower fan noise ---> better stability ---> better OC

And cheaper too.
 
I never saw anyone complaining about Pascal using “too little power.”
Unfortunately, the trend of doing “more with less” has pretty much died with Ampere.
At 40% better efficiency, the 3080 would be just as fast as the 2080 at 150 W instead of 215 W. But nobody wants to buy a card in the same product tier that offers the same amount of performance just because it uses less power. If that is what you want, then go ahead and buy a 3050 when/if that comes out.
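As a quick sanity check on that arithmetic (a throwaway snippet using only the ~215 W and 40% figures quoted above):

# Same performance at ~40% better performance-per-watt needs roughly 1/1.4 of the power.
turing_power_w = 215        # RTX 2080 board power, as quoted above
efficiency_gain = 1.40      # claimed Ampere perf/W improvement
print(turing_power_w / efficiency_gain)   # ~154 W, i.e. roughly the 150 W figure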
 
At 40% better efficiency, the 3080 would be just as fast as the 2080 at 150 W instead of 215 W. But nobody wants to buy a card in the same product tier that offers the same amount of performance just because it uses less power. If that is what you want, then go ahead and buy a 3050 when/if that comes out.
On second thought, the RTX 3070 is cheaper, faster, and consumes less power than the RTX 2080 Ti. That's what I want from a video card.

So maybe Ampere does not scale very well on a bigger die. Might it benefit from moving to 7 nm?
 
At 40% better efficiency, the 3080 would be just as fast as the 2080 at 150 W instead of 215 W. But nobody wants to buy a card in the same product tier that offers the same amount of performance just because it uses less power. If that is what you want, then go ahead and buy a 3050 when/if that comes out.

Oh, that's actually incorrect. If you're putting in a specialized backplane to host 12 video cards, the less crazy cooling and power draw you need for a given level of performance, the better. It all depends on your use case. But talking consumer only... for custom desktops like most of us run, yeah, power draw is kind of a wash.

But the vast majority of users out there are running laptops or prebuilt systems from Dell or HP; they want good performance too, but without insane power needs, because the machine is going into a ventless cabinet so they don't have to look at the thing.
 
Those estimates just look high... a decent 120 mm case fan pulls < 2 W at full power, so how in the world did he end up with 15 W for fans? And what is "other power"? It feels more like a slide that started with 320 W and then divided up the numbers to make them add up than a slide that started from anything AMD actually said, used "normal" values, and arrived at 320 W.

Anyway, if we actually think for ourselves, the picture may be much different.

The 5700XT had a TDP of 180 W... and a TGP of 225 W... what AMD (and NVIDIA) call TGP is actually TOTAL GRAPHICS POWER or TOTAL BOARD POWER, not just the power required by the GPU itself.

Courtesy of Igor's Lab, June 2019... [attached image]

Now, with the actual definition in mind, and knowing the 5700XT (non-OC models) pretty much drew exactly 225 W... why are we now all of a sudden believing this TGP term (it's not new; it's been used for a while) has some magical new meaning? I mean, who knows what this thing will really draw, but his numbers seem (a) inflated and (b) to assume they chose to redefine a term that is already in use to mean something else.

I dunno, fun to speculate, but all the baseless guessing and mixing of terms makes me think nobody has a real clue and they just make up random stuff for clicks ;).
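For what it's worth, the disagreement can be boiled down to back-of-the-envelope arithmetic using only the numbers already in this thread (purely illustrative, not a prediction):

# Two readings of the leaked 255 W figure (illustrative only).
leaked_tgp_w = 255

# Reading 1: like the 5700 XT's 225 W spec, the 255 W already covers the whole board.
tbp_if_already_board_power = leaked_tgp_w

# Reading 2: the 255 W excludes the rest of the board, so overhead gets added on top.
# Scaling by the 5700 XT's 225 W board spec over its 180 W TDP (~25%) is one crude guess.
tbp_if_gpu_and_memory_only = leaked_tgp_w * (225 / 180)

print(tbp_if_already_board_power, round(tbp_if_gpu_and_memory_only))  # 255 vs ~319

Depending on which reading you believe, you land either right at the leaked number or up near the ~320 W figure from the slide.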
 
Lower power consumption ---> lower heat ---> lower fan noise ---> better stability ---> better OC
Yes!
And cheaper too.
In terms of cost to support such a part, sure; but many times parts that do the same work in a lower power envelope are sold at a premium themselves.
But talking consumer only... for custom desktops like most of us run, yeah, power draw is kind of a wash.
Plus or minus fifty watts I agree. If it takes another hundred watts I'd want to be more careful.
 
Yeah, it will be nice to see real games and power draw. It would also be great to see the 6900 XT; keep in mind the numbers that are beating the 3090 in raster aren't even from their top-end card.
 
That's interesting. I wonder if they are going the route of:

"Screw ray tracing, we have raster performance so good that you can take some of it away for RT and it won't even be a big deal." ;) Meanwhile, Nvidia is going with cores specifically designed for RT.
 