AMD Radeon RX 6800 XT Overclocking

Brent_Justice

Administrator
Staff member
Joined
Apr 23, 2019
Messages
788
Points
93
banner-1.png




Introduction



AMD launched the Radeon RX 6000 series of video cards, also known as Big Navi, on November 18th, 2020. At the very high end is the $999 Radeon RX 6900 XT. Right below that is the $649 Radeon RX 6800 XT, and below that the $579 Radeon RX 6800. The video card that compares to NVIDIA's most recent launch is the Radeon RX 6800 XT at $649. It competes directly with NVIDIA's GeForce RTX 3080. At only $50 apart, with the advantage in price going...

Continue reading...
 
Good tip on adjusting the minimum frequency. I'll have to try that on my 6800. I set the max frequency to 2450 MHz with an undervolt to 975 mV, but didn't touch the minimum.
 
What is the performance gain of adjusting memory independently of the GPU clock? Is the card bandwidth-starved, so we should be looking for AIB cards with higher memory frequency, or does it not matter much?
 
What is the performance gain of adjusting memory independently of the GPU clock? Is the card bandwidth-starved, so we should be looking for AIB cards with higher memory frequency, or does it not matter much?
I don't think Big Navi is bandwidth-starved, thanks to the Infinity Cache.
 
I don't think Big Navi is bandwidth-starved, thanks to the Infinity Cache.
In which case, does keeping the memory clock lower allow any extra room on the GPU side? Could you even underclock the memory and see gains as you free up a little power draw for the GPU?
 
The video card is definitely more engine-starved than memory-starved. It benefits more from a GPU frequency increase in my experience, especially in the case of ray tracing.
 
This looks like a first take. On the NVIDIA side, depending on the card, upping the fan speed can decrease GPU clocks by leaving less power available to the GPU. I wonder if 100% fan speed here was taking some power away from the GPU? Same with memory.

Fan power requirements roughly follow a cube law: double the speed and the power goes up 8x, though fan efficiency curves can affect that ratio a lot. Something to look at, maybe: on NVIDIA's more power-constrained cards, raising the fan speed sometimes affects the GPU clock rate negatively (in other words, the benefit of the extra cooling does not always outweigh the loss of available power to the GPU). Some NVIDIA users are decoupling the card fans, supplying them from motherboard power instead, and getting a significant bump in GPU clock speed.

A lot of the variation in benchmarks, particularly from site to site, comes from ambient temperature, which determines how fast the fans have to spin. These higher-power cards require beefier, more powerful fans, so fan speed affects power to the GPU more than on previous cards. Better coolers and more efficient fans could give 15 W+ more to the GPU, in other words. Getting an amp reading from the fans at 50% versus 100% fan speed would be interesting data to consider, and useful for comparisons from reference card to AIB to AIB.
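
To put rough numbers on that cube law, here is a minimal Python sketch. The 3000 RPM top speed and 6 W per-fan draw at 100% are assumed figures for illustration only, not measurements from any particular card:

```python
# Cube-law estimate: fan power scales roughly with the cube of RPM.
# The base figures below are assumptions for illustration only.
def fan_power(power_at_max_w, max_rpm, rpm):
    """Estimate fan power at `rpm` from a known draw at `max_rpm`."""
    return power_at_max_w * (rpm / max_rpm) ** 3

MAX_RPM = 3000          # assumed top speed
P_MAX_W = 6.0           # assumed per-fan draw at 100%

for pct in (50, 75, 100):
    rpm = MAX_RPM * pct / 100
    print(f"{pct:3d}% ({rpm:4.0f} RPM): {fan_power(P_MAX_W, MAX_RPM, rpm):.2f} W")
```

Under those assumptions, 50% speed works out to about 0.75 W per fan versus 6 W at 100%, the 1/8 ratio from doubling the speed.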
 
I wonder if 100% fan speed here was taking some power away from the GPU?
Industrial 120 mm (loud and powerful) fans are upwards of 1 W, and even if you have three of them on a cooler, most fan headers are only rated for 2 W, as most computer fans are much less than 0.5 W. Not sure what's on a GPU header, but I can nearly guarantee it's single-digit wattage.

I know on overclocks you want all the power you can get, but I have a hard time believing it's the difference between 75% and 100% fan speed.

But hey, if you have some good evidence otherwise I'm willing to believe; it is 2020 after all, and anything can happen.
 
Industrial 120 mm (loud and powerful) fans are upwards of 1 W, and even if you have three of them on a cooler, most fan headers are only rated for 2 W, as most computer fans are much less than 0.5 W. Not sure what's on a GPU header, but I can nearly guarantee it's single-digit wattage.

I know on overclocks you want all the power you can get, but I have a hard time believing it's the difference between 75% and 100% fan speed.

But hey, if you have some good evidence otherwise I'm willing to believe; it is 2020 after all, and anything can happen.
I think you got amps and watts mixed up. At full speed, the higher-spinning fans will take way more than 1 W. If we had the power rating data for the 6800 XT's fans, that would be the easiest way to check. Otherwise, measure the load either directly with an amp meter, or set up the computer with a constant load (all case fans at constant speed, constant CPU power, etc.) and measure the change in power at the wall while ramping the GPU fans from off to max; correcting for power supply efficiency would give a rough measurement.

Anyway, here is a replacement fan for a Zotac 1070 Ti/1080 Ti Mini: it requires 0.46 A; multiply that by 12 V at max speed and you get about 5.5 W. I would suspect the bigger fans on the newer cards may exceed 5.5 W each. If they are 6 W each at 100%, that's 18 W total at 100%; at 50% fan speed it would be roughly 1/8 of that power, or around 2 W.
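
Here is that arithmetic in a few lines of Python; the three-fan, 6 W-per-fan card is the hypothetical from above, not any specific product:

```python
# Arithmetic from the post: rated current times voltage gives per-fan watts,
# and the cube law scales that figure down to 50% speed.
amps_at_max = 0.46                     # Zotac replacement fan rating
volts = 12.0
per_fan_w = amps_at_max * volts        # ~5.5 W per fan at full speed
print(f"Per fan at 100%: {per_fan_w:.1f} W")

fans, assumed_per_fan_w = 3, 6.0       # hypothetical newer three-fan card
total_100 = fans * assumed_per_fan_w   # 18 W at 100%
total_50 = total_100 * 0.5 ** 3        # cube law: ~1/8 of that at half speed
print(f"Three fans: {total_100:.0f} W at 100%, ~{total_50:.1f} W at 50%")
```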


Once I get my 3090, I can set up a loop with 3DMark or something else with a consistent GPU load, manually adjust the fans from 50% to 100%, and watch the clock frequency. Since there would be a time delay before the cooler heats up or cools down, you should be able to catch any power gained by or lost to the GPU through the frequency change. Also of note: those who put water blocks on their cards can see big jumps in clock speeds on the 6800 XT. It may be more than just the cooling; you also have more power available to the GPU, as in 10 W to 20 W more, without having to power the fans.
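
A minimal sketch of that test for an NVIDIA card, assuming nvidia-smi is installed and on the PATH (AMD users would need a different tool): run it alongside a constant GPU load and change the fan speed by hand while it logs.

```python
# Log SM clock, board power, and fan speed every 5 seconds while a constant
# load runs and the fan speed is changed manually.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=clocks.sm,power.draw,fan.speed",
         "--format=csv,noheader"]

for _ in range(60):                        # ~5 minutes of samples
    sample = subprocess.check_output(QUERY, text=True).strip()
    print(time.strftime("%H:%M:%S"), sample)
    time.sleep(5)
```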
 
OK, I found a relatively easy way to see the effect of fan speed on a video card's power. The article used 100% fan speed while overclocking; the question is whether the increased power draw from the fans running at 100% has an effect on the outcome. I cannot answer that for the 6800 XT, but I was able to test with an EVGA 1080 Ti SC Black.

EVGA's ICX has extra sensors, which helps quantify any power increase from ramping up the fan speed. The EVGA 1080 Ti SC Black has two 90 mm fans, which turn at 1840 RPM at 50% and 3699 RPM at 100%.

With the card basically idle at a relatively steady clock and the fans at 0 RPM, Board Power Draw was 59.2 W (GPU-Z reads the video card's own power sensors):

0percentFanSpeed.png

At 50% fan speed, 59.9 W board power:

50percentFanSpeed.png

At 100% fan speed, 64.4 W board power:

100percentFanSpeed.png

With two fans, power changed 5.2 W from 0% to 100%, and 4.5 W from 50% to 100%, so most of the increase came from going from 50% to 100%. This is expected from how fan power scales: P2 ≈ P1 × (RPM2/RPM1)^3.
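
As a sanity check, the measured numbers above line up reasonably well with the cube law, using only the GPU-Z readings and fan speeds from this post:

```python
# Fan-only power = board power minus the 0 RPM baseline from the GPU-Z shots.
idle_w, w_50, w_100 = 59.2, 59.9, 64.4     # measured board power above
rpm_50, rpm_100 = 1840, 3699               # measured fan speeds

fans_50 = w_50 - idle_w                    # 0.7 W measured at 50%
fans_100 = w_100 - idle_w                  # 5.2 W measured at 100%
predicted_50 = fans_100 * (rpm_50 / rpm_100) ** 3

print(f"Measured fan draw at 50%:   {fans_50:.1f} W")
print(f"Cube-law prediction at 50%: {predicted_50:.2f} W")  # ~0.64 W
```

The prediction of roughly 0.64 W at 50% is close to the measured 0.7 W, given sensor resolution.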

Anyway, I am having a hard time concluding this would have a significant impact on power to the GPU. With a three-fan 90 mm configuration using the same EVGA fans as on this 1080 Ti, we would be looking at about 8 W max power consumption, which out of, say, a 300 W OC is only about 3% of the power budget. Are the fans on the 6800 XT, or on other GPUs, much more power hungry?

I did a test using the AIDA64 GPU stress test, which puts a constant load on the GPU. I raised the clock frequency until it stopped changing, which I am thinking indicated the card was power limited, then changed the fan speed from 100% to 0% and saw zero change in clock speed. I tried 3DMark, but as soon as it lost focus it would stop working. At least on my 1080 Ti, the fans' power usage appears to have very little effect.

On the EVGA forums it was discussed that powering the fans from the motherboard will give extra power to the GPU and increase the max clock speed. While true, I am not sure it will be that beneficial or significant, especially if the fans are at 60% or less and drawing under 2 W. Once I get the 3090, I will explore this more.
 