NVIDIA GeForce RTX 2080 Ti FE Overclocking

Brent_Justice

Introduction



Last year, we wrote an overclocking article about the GeForce RTX 2070 Super FE capabilities. In the interim, we’ve realized that it’s about time for us to work our way through the current generation of video cards on the market and do an overclocking article on each so you can see how much headroom each card will have and what performance gains you can expect. These overclocking articles will also serve as a basis for our comparison to retail cards as we strap them to the test bench over the next few months.







Our Overclocking Methodology



Overclocking video cards can sometimes be more of an art than a...

 
Glad to see the test system's step up in performance. It also seems like we need a new 4K gaming card if an overclocked 2080 Ti can't hit 60 FPS anymore in these current titles.
 
Sad to say that unless NV can deliver on that 20-30% increase rumor, even the next release will still have difficulty. The problem is that games are advancing to such a degree that even the best GPU now is still 2-3 years behind the most demanding games. RDR2 will be crushing GPUs for years to come. Metro, Control, and a few others will do the same, just not as much.
 
I considered delaying new games for a few years so the cost of GPU upgrades to get max IQ would be lessened.
But it won't help that much with the slower advance in GPU performance; the very fastest current card will still be needed to get the best from many older games.
After this next gen, the move to chiplets is sure to take a while to get right (i.e., bug-free with high enough performance) and become cost-effective.
I hope NVIDIA have something up their sleeve that will leave the crickets in my wallet something to feed on!
 
Pretty sure we've got a story about that coming out soon.
 
Chiplets for CPUs have been embraced very quickly. I don't see why video cards will be vastly different in that specific regard, as long as the controllers delivering content to the chiplets can do it well.
 

What if AMD had something up their sleeve?

I do agree that chiplets have promise -- GPUs are already highly parallel, much more so than even CPUs. Chiplets have the ability to drop the overall GPU price, as you aren't requiring one single large die to cram all those cores into, so yields could theoretically be better, giving lower price points.

But I think chiplets will run into the traditional roadblock: power. Your process node and architecture are always going to limit you on the clockspeed/power curve, and as long as you're trying to fit into a PCIe form factor, your max power is going to be capped by the physical size of the heat-removal equipment. Chiplets don't really do much at all to address that, other than giving you a bit more area to dissipate heat across.
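A rough back-of-the-envelope calculation illustrates the point about area. All figures here are made-up assumptions for the sake of the sketch, not real GPU specs:

```python
# Illustrative only: compares average heat flux (W/mm^2) for a single
# large die versus the same power budget spread across four chiplets.
# Every number below is an assumption chosen for illustration.

def heat_flux(total_power_w, die_area_mm2):
    """Average power density across the silicon area."""
    return total_power_w / die_area_mm2

POWER_W = 300.0            # assumed total board power budget
MONOLITHIC_AREA = 750.0    # assumed single-die area, mm^2
CHIPLET_AREA = 4 * 190.0   # four assumed chiplets of 190 mm^2 each

mono = heat_flux(POWER_W, MONOLITHIC_AREA)
chiplet = heat_flux(POWER_W, CHIPLET_AREA)

print(f"monolithic: {mono:.3f} W/mm^2")   # -> monolithic: 0.400 W/mm^2
print(f"chiplets:   {chiplet:.3f} W/mm^2")  # -> chiplets:   0.395 W/mm^2
```

The per-area flux barely moves unless the chiplets add substantially more total silicon or spacing, which is exactly the argument above: the power budget, not the die layout, stays the binding constraint.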
 
Imagine a grid layout with chiplets and memory chips on a board with a big actively cooled heatsink over all of it. You could have grids of four chiplets surrounded by memory and controllers, and just have one big card that's effectively one big GPU, with all of the pieces spread across the card. Not sure how efficient that would be, but it sounds cool! ;) The card could then only turn on the chiplets it needs for what is being used/demanded of it. Mmmm... interesting... too advanced, I think.

The problem with what I just said above is the parallelism of the processing. What's going to coordinate all of that in a timely manner for local rendering and delivery to display devices? It seems like it would almost need PCIe 4.x bandwidth, or even more.
 
The problem with AMD is driver quality. I've been subjected to this with my last two AMD cards and had a terrible time of it; my very first card (X1800) was OK, though.
The second bad experience caused me to sell my 290X prematurely and go NVIDIA. It was plain sailing from then on, a real relief.
I've been reading about the problems with Navi; I would have sold my card if I owned one, that would drive me mad.
As things are, I am warned off AMD, as they still don't have a handle on it.

I think it will be a little easier to cool chiplets because they can be spaced further apart, preventing the cooler from saturating as easily within a single cooling area.
The smaller process will undoubtedly throw up issues, but this won't be unique to chiplet designs.
 
I’ve had no end of driver problems with my GTX 980 since Win10, and I’m hardly alone there...

Not discounting your experience, I just don’t buy it anymore when someone says “drivers” and then posts about cards that are nearly a decade old.
 

In my case it's worth pointing out that the cards were new, and the problems continued for the whole period I owned them (at least 1.5 years each).

I stayed with Windows 7 for most of my gaming to avoid issues with Windows 10.
There's no point running the gauntlet when I don't have to.
 
I haven't found the need to overclock my GPU before; I've been running it at stock clocks for 10 months. But with CP2077 upcoming and Control running like a dog, I decided to give it a go. I didn't expect much, as I've probably got one of the lowest-binned units out there: a GIGABYTE Turbo with a blower cooler from the factory, which I replaced with an Arctic Accelero Xtreme IV.

To my surprise, I'm now at +250 core and +600 memory, which netted me a 10% score increase in the Port Royal benchmark. That translates to going from 35 to 41 FPS average. And I haven't even touched the vcore yet, assuming I can at all on this card. A quick Google search suggests I got lucky, as most seem to top out between a 150-200 core OC.
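As a quick sanity check on the numbers above (35 to 41 FPS average alongside a ~10% score gain), the relative improvement can be computed directly; the FPS figures are from the post, nothing here is card-specific:

```python
# Compute the fractional improvement between two measurements.
def relative_gain(before, after):
    """Fractional improvement of `after` over `before`."""
    return (after - before) / before

fps_gain = relative_gain(35, 41)
print(f"FPS gain: {fps_gain:.1%}")  # -> FPS gain: 17.1%
```

The FPS delta works out a bit larger than the 10% composite score gain, which is plausible since a benchmark score needn't scale linearly with average frame rate.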
 

Touching the vcore will likely destabilize the memory OC, as you're probably bumping up against your total board power limit.

I would suggest trying to max out the GPU and memory separately to see what each is capable of, then figure out the right mix for performance...
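The "tune each knob separately" approach can be sketched as a simple step search. Note that `apply` and `is_stable` below are stand-ins for whatever your real tooling does (e.g., setting offsets in an overclocking utility and looping a benchmark); they are stubbed here purely so the search logic can be shown:

```python
# Sketch of stepping one offset (core OR memory) up until instability,
# then backing off one step. The apply/stability hooks are hypothetical
# stubs, not a real overclocking API.

def find_max_offset(apply, is_stable, step=25, limit=400):
    """Raise one offset in `step` MHz increments until a stability
    test fails or `limit` is reached, then settle on the last good value."""
    offset = 0
    while offset + step <= limit:
        apply(offset + step)
        if not is_stable():
            break
        offset += step
    apply(offset)  # restore the last known-stable offset
    return offset

# Stubbed example: pretend the card is stable up to a +250 MHz core offset.
state = {"core": 0}

def apply_core(mhz):
    state["core"] = mhz

def core_stable():
    return state["core"] <= 250

best_core = find_max_offset(apply_core, core_stable)
print(f"max stable core offset: +{best_core} MHz")  # -> +250 MHz
```

Running the same search on the memory offset independently, then re-testing the two best values together (backing each off a step if the combination trips the power limit), matches the suggestion above.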
 
The better the cooler on the card (and the more tweaked its BIOS already is), the lower the increase from fitting an Accelero cooler would be; stated as an example, not a recommendation.
I.e., cards with better coolers/BIOSes already run at much higher speeds, so there will be a lot less overclocking headroom.
Actual clock speed, fully stable, is the only real benchmark.

What I like is the slight automatic performance increase without overclocking, due to lower temps, and that it's so darn quiet at max fan speed using an Accelero!
It's a great cooler.
 
It's ugly as sin, but I do appreciate that they prioritized function over form.
 