EVGA Raises Power Limit of GeForce RTX 3080 FTW3 ULTRA to 450 W

Tsing

Image: EVGA

Those of you who were able to score a GeForce RTX 3080 FTW3 ULTRA but want more power are in luck, as EVGA has released a new beta BIOS that raises the card's ceiling considerably. The update, which was shared by EVGA product manager Jacob Freeman last night, boosts the GeForce RTX 3080 FTW3 ULTRA's power limit from 400 W to 450 W. This should come in handy for overclockers who want to maximize the GPU's potential.

We've copied Freeman's forum post below, which includes instructions on how to download and install the BIOS. Before getting too crazy, users should ensure that they have a proper cooling setup and a capable power supply (e.g., an 850 W PSU that's 80 Plus Gold certified).
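For a rough sense of why an 850 W unit is the suggested floor, here's a back-of-the-envelope power budget. The CPU and "rest of system" draws below are illustrative assumptions, not figures from EVGA:

```python
# Rough DC power budget at the new 450 W limit.
# CPU and "rest of system" draws are illustrative assumptions.
gpu_w = 450    # RTX 3080 FTW3 ULTRA at the new beta-BIOS power limit
cpu_w = 150    # assumed high-end CPU under gaming load
rest_w = 75    # assumed motherboard, RAM, drives, fans

load_w = gpu_w + cpu_w + rest_w
psu_w = 850

print(f"Estimated load: {load_w} W")                                   # 675 W
print(f"Headroom: {psu_w - load_w} W ({load_w / psu_w:.0%} of rating)")  # 175 W, 79%
```

On those assumptions the system sits just under 80% of the PSU's rating, leaving some margin for transient spikes.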

New XOC...

Continue reading...

 
So a video card now uses 400 W of power. "Great." That's actually getting very wasteful.

" 850 W PSU that’s 80 Plus Gold certified "

What does gold certification have to do with anything? It doesn't mean the PSU can provide more power, it's just efficiency, and more a gimmick anyway. There aren't huge differences in efficiency if you drop gold certification.
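To put numbers on that claim: the 80 Plus program's published 50%-load efficiency targets (115 V) are 85% for Bronze and 90% for Gold, so the wall-draw gap for a given load is real but modest. A minimal sketch, with the DC load as an assumed figure:

```python
# Wall draw for the same DC load under 80 Plus Bronze vs. Gold.
# 85% and 90% are the published 50%-load (115 V) efficiency targets;
# the 500 W load is an assumed gaming-load figure.
dc_load_w = 500

for cert, eff in [("Bronze", 0.85), ("Gold", 0.90)]:
    print(f"80 Plus {cert}: {dc_load_w / eff:.0f} W at the wall")

# Bronze ~588 W vs. Gold ~556 W: about a 33 W gap, not night and day.
```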
 
I couldn't care less about efficiency in my desktop rig. As long as I can keep everything cool, I'm good to go. I'll trade efficiency for performance every time.
 
So a video card now uses 400 W of power. "Great." That's actually getting very wasteful.
I don't see it as wasteful if you're getting 400 W worth of work done in return. Granted, I'm not sure I want a 400 W+ GPU, but if that's what it takes to get the work I need done, done, then that's what it takes.

" 850 W PSU that’s 80 Plus Gold certified "

What does gold certification have to do with anything? It doesn't mean the PSU can provide more power, it's just efficiency, and more a gimmick anyway. There aren't huge differences in efficiency if you drop gold certification.
This seems like a politically-correct approach to advising folks not to use trash PSUs. You're right about efficiency, so I imagine that there's some effort involved to call out poorly built stuff without inviting lawsuits by naming names.
 
So a video card now uses 400 W of power. "Great." That's actually getting very wasteful.

"850 W PSU that's 80 Plus Gold certified"

What does Gold certification have to do with anything? It doesn't mean the PSU can provide more power; it's just efficiency, and more of a gimmick anyway. There aren't huge differences in efficiency if you drop below Gold certification.
It's more of a CYA for the manufacturer: the user was informed that an efficient PSU needs to be used. Gold and higher units also typically handle transient spikes better.
 
I couldn't care less about efficiency in my desktop rig. As long as I can keep everything cool, I'm good to go. I'll trade efficiency for performance every time.

Hmm... I agree... to a point. At some point, even if you're keeping the components cool, you're exhausting so much heat out of the chassis that you effectively have a space heater, and it's heating up the room. And if you need to start adding a portable AC unit, then you start needing dedicated power circuits, and it snowballs pretty quickly. And if you have more than one computer in the room, start multiplying all that.

So I agree, I don't care, all the way until I get to the point where I have to care. It's not like my home office is a data center, but I don't want it to feel like a sauna either.
 
Waiting on a Step-Up to this card unless a 3090 appears beforehand that I can buy. I do have an 850 W Gold PSU, so at least I'm good on that front.
 
Hmm... I agree... to a point. At some point, even if you're keeping the components cool, you're exhausting so much heat out of the chassis that you effectively have a space heater, and it's heating up the room. And if you need to start adding a portable AC unit, then you start needing dedicated power circuits, and it snowballs pretty quickly. And if you have more than one computer in the room, start multiplying all that.

So I agree, I don't care, all the way until I get to the point where I have to care. It's not like my home office is a data center, but I don't want it to feel like a sauna either.
I care, but to a much earlier point, since I also use my hosts for distributed computing. Not only does it heat up the office like mad, but a low-efficiency unit wastes money. My solar panels generate about 20 kWh of extra power daily; it literally starts coming out of my pocket if I exceed that, so I've got Platinum or Titanium power supplies in most of my systems.

One very interesting thing Dan said in a different thread was that better VRM solutions are more efficient. I really want to know how much more efficient: are we talking 1 W, or 25 W+?
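For the solar-budget angle, a minimal sketch of what one PSU efficiency step costs on a 24/7 box; the 400 W load is an assumed figure, while 90% and 92% are the published 50%-load (115 V) targets for Gold and Platinum:

```python
# Daily wall-side energy for a 24/7 distributed-computing host under
# 80 Plus Gold (90%) vs. Platinum (92%) 50%-load efficiency targets.
# The 400 W DC load is an assumed figure.
dc_load_w = 400
hours = 24

for cert, eff in [("Gold", 0.90), ("Platinum", 0.92)]:
    wall_kwh = dc_load_w / eff * hours / 1000
    print(f"{cert}: {wall_kwh:.2f} kWh/day at the wall")

# The gap is ~0.23 kWh/day per host; a few hosts together start
# eating into a 20 kWh/day solar surplus.
```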
 
I care, but to a much earlier point, since I also use my hosts for distributed computing. Not only does it heat up the office like mad, but a low-efficiency unit wastes money. My solar panels generate about 20 kWh of extra power daily; it literally starts coming out of my pocket if I exceed that, so I've got Platinum or Titanium power supplies in most of my systems.

One very interesting thing Dan said in a different thread was that better VRM solutions are more efficient. I really want to know how much more efficient: are we talking 1 W, or 25 W+?

It's the same concept as power efficiency in a power supply. More phases/power stages mean the load can be spread out, so the VRMs don't have to run at 80% capacity. That can theoretically keep them in the sweet spot where they're most efficient. The primary reason manufacturers do this is to reduce heat and increase component life. Realistically, though, the motherboard itself doesn't pull a whole lot of power anyway. Total system power is mostly CPU and GPU; other components account for very little of it.
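The load-spreading effect can be sketched with the usual conduction-loss relation: per-phase loss scales with the square of per-phase current, so total loss falls roughly as 1/N with phase count. The rail current and per-phase resistance below are illustrative assumptions:

```python
# Conduction loss vs. VRM phase count: total loss ~ I^2 * R / N,
# since each phase carries I/N. Values are illustrative assumptions.
total_current_a = 100   # assumed rail current
r_phase_ohm = 0.005     # assumed effective resistance per phase

for phases in (4, 8, 16):
    i_phase = total_current_a / phases
    loss_w = phases * i_phase**2 * r_phase_ohm
    print(f"{phases:>2} phases: {i_phase:5.2f} A each, ~{loss_w:.1f} W lost")

# 4 -> ~12.5 W, 8 -> ~6.3 W, 16 -> ~3.1 W: single-digit watts of
# difference under these assumptions, closer to 1 W than 25 W+.
```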
 
I care, but to a much earlier point, since I also use my hosts for distributed computing. Not only does it heat up the office like mad, but a low-efficiency unit wastes money. My solar panels generate about 20 kWh of extra power daily; it literally starts coming out of my pocket if I exceed that, so I've got Platinum or Titanium power supplies in most of my systems.

One very interesting thing Dan said in a different thread was that better VRM solutions are more efficient. I really want to know how much more efficient: are we talking 1 W, or 25 W+?
Well, "efficiency" as per this discussion is units of performance per watt: for your DC work, how many points you get versus how much electricity you're using. Different architectures running different algorithms will all have different efficiencies. It's very similar to how miners look at it: hash rate per kWh used, etc. A miner doesn't want "fastest possible, efficiency be damned"; they want the best return, and that isn't always the absolute fastest solution available.

As it pertains to power supplies, it's AC power input versus DC power output, so a slightly different context. Similar story with VRMs: those would be measured as DC in / DC out.
 
Well, "efficiency" as per this discussion is units of performance per watt: for your DC, how many points you get versus how much in electric your using. Different architectures running different algorithms will all have different efficiencies. It's very similar to how miners are looking at it: Hash rate per kWh used, etc. A miner wouldn't want "fastest possible, efficiency be damned", they want the best return, and that isn't always the absolute fastest solution available.

As it pertains to power supplies, it's AC Power input versus DC Power output - so a little bit different context. Similar story with VRMs - that would be measured as DC in / DC out
Sure, there is a bit of technicality to it, but the ac to dc conversion efficiency still ends up impacting points per day per watt.

I've got a small stack of Raspberry Pis that are pretty efficient in points per day per watt, but they would do a lot better in that regard if I could find 96%-efficient power bricks instead of 80% ones.

For my main PC, I want "**** the torpedoes, full speed ahead" mode when I'm gaming, and when I'm not gaming and running F@H instead, I want "sips electricity like fine wine."
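To illustrate the brick point, a minimal sketch of points-per-day per wall watt; the PPD and DC-draw figures are placeholder assumptions, not measured numbers:

```python
# Points per day per wall watt for a small board behind power bricks
# of different efficiency. PPD and DC draw are placeholder assumptions.
ppd = 2000       # assumed points per day for one Raspberry Pi
dc_draw_w = 6.0  # assumed DC-side draw

for eff in (0.80, 0.96):
    wall_w = dc_draw_w / eff
    print(f"{eff:.0%} brick: {wall_w:.2f} W at the wall, {ppd / wall_w:.0f} PPD/W")

# The 96% brick yields ~20% more PPD per wall watt for the same board.
```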
 
I don't see it as wasteful if you're getting 400 W worth of work done in return. Granted, I'm not sure I want a 400 W+ GPU, but if that's what it takes to get the work I need done, done, then that's what it takes.
Unfortunately, more watts is just more heat, not necessarily more work. If manufacturers don't start to focus on efficiency and just build bigger and bigger, more power-hungry GPUs, it will get out of hand. IMO it already has. I'd like to get work done as well, but not while dumping 450 W of heat into my room at the same time. My PC already raises the ambient room temperature by 2-3 degrees as it is, and I'm in a 30 m² room.
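Essentially every watt a PC draws ends up as heat in the room, so the space-heater comparison is straightforward arithmetic; the whole-system figure below is an assumed number:

```python
# A PC dissipates essentially all of its electrical draw as heat.
# Converting watts to BTU/hr (1 W = 3.412 BTU/hr) puts it in HVAC terms.
# The 650 W whole-system figure is an assumed number.
for label, watts in [("GPU alone", 450), ("whole system", 650)]:
    print(f"{label}: {watts} W -> ~{watts * 3.412:.0f} BTU/hr")

# ~2,200 BTU/hr for the whole system is in the range of a small space
# heater on its low setting, consistent with a 2-3 degree room rise.
```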
 