AMD’s RDNA 3 GPUs Rumored to Beat NVIDIA’s Lovelace GPUs in Performance and Efficiency

That's what they said about AMD making a comeback against Intel.
They've done it in terms of performance - where available, I think AMD CPUs are almost always preferred at the same price unless an OEM really borked something up. Laptop offerings are starting to look especially good, beyond their aged APUs.

And the availability part is the clincher. Everyone (including Intel, through their own stumbles) is having to ration fab output, and that means skimping on products that would otherwise be widely available. That also means that with companies selling basically everything they can make, 'success' is bounded by manufacturing output, so while AMD is doing great on the CPU performance front, they're not gaining the marketshare that they could be. Same with GPUs really, since AMD cards are going for a premium for reasons other than their gaming performance.

In any case, I'll root for anyone who goes up against nVidia. **** nVidia.
Not sure where this comes from; whether it's drivers, features, or top-end performance, Nvidia tends to push the envelope. Competitors getting closer does tend to get them to push a little harder, and I'm definitely looking forward to what Intel has on the table (again, mostly not about gaming).
 
I seem to remember Fermi being a horrible mess and AMD's Cypress cards pretty well walking over everything nVidia had that generation. Fermi may have been "faster" but that didn't make it better.
You just sent me down memory lane... I actually traded a later Fermi-based card for a Cayman-based card and then bought a second for Crossfire, mostly because they had more VRAM and I was running what was, at the time, a higher-resolution display.

AMD was also first with Turbo on Cayman the next generation.
But it wasn't until after Cayman that AMD got called on the carpet for their horrendous multi-GPU frame-pacing issues, i.e., negative effective performance scaling, and that's one trade I regret. Still, the single GPUs were great once the drivers matured, though I had no problem with the Fermi. Both Cayman GPUs served for years under other owners after I'd moved on to a pair of surprisingly-efficient Kepler cards.

And there were a lot of times where GCN/Polaris cards held the lower and middle tiers pretty well on value propositions.
Undeniable! And for gaming, those are still decent enough cards. I wish they had better transcoder support (they're a no-go for Plex), that they had better drivers for content creation (I and reviewers have experienced crashes, even lately, as these are still shipping in APUs!), and that AMD hadn't decided to pull Windows Server support for some inexplicable reason when any basic Nvidia GPU, or an Intel iGPU for that matter, will just work.

And AMD pulled in the console contracts - twice in a row.
IIRC, that first one basically saved their bacon. They'd also customized their GPU for Microsoft the generation before, so they had some experience, and unlike Nvidia, who apparently every other major company basically hates (I guess except TSMC, who probably likes the money), AMD was in a position to offer a combined CPU + GPU solution that no one else had, or even has today, while also quite likely providing a bargain for Microsoft and Sony.

I bet both console vendors and console developers also appreciated the more standardized architecture at the hardware level.
 
Still, the single GPUs were great once the drivers matured, though I had no problem with the Fermi. Both Cayman GPUs served for years under other owners after I'd moved on to a pair of surprisingly-efficient Kepler cards.

Back then you bought an AMD card for the performance you hoped they would get a year after you bought the card; good to see their drivers are better these days.
 
That's what they said about AMD making a comeback against Intel.

In any case, I'll root for anyone who goes up against nVidia. **** nVidia.
Thing is that Intel pretty much remained stagnant for years, focusing on mobile rather than desktop, and it took AMD that long to finally reach parity with and even surpass Intel. Nvidia has dropped the ball before, but unless there are some major manufacturing/architecture issues, I don't see it happening.


BTW **** nVidia=Love nVidia? 😉 😉 🤣 🤣
 
Thing is that Intel pretty much remained stagnant for years, focusing on mobile rather than desktop, and it took AMD that long to finally reach parity with and even surpass Intel. Nvidia has dropped the ball before, but unless there are some major manufacturing/architecture issues, I don't see it happening.
Yeah, it's rare for nVidia to hit a misstep - more often it's just been delaying tech (Volta comes to mind) because they were only competing against themselves. Fermi is the most recent that I can think of, and they were able to quickly pivot off of that. I guess you could count the 'space invaders' artifacts as well, but that didn't feel like as big an issue as an entire architecture being bad.

AMD is very much chasing a moving target - and nVidia is a fast-moving target with big R&D resources, deep pockets, a lot of momentum, and brand recognition.

That said, TSMC 5nm is looking to be crowded. That could be the big bottleneck, especially since Intel has also signed up there and Apple has been there for a while, but since AMD is also set to use TSMC 5nm for RDNA 3, I don't know that it presents any opportunities to AMD - just adding to the traffic jam.

Next gen is shaping up to be the same logistical clusterf&*k that this generation has been. The only silver lining here is that 5nm will be very mature by the time we get there - Apple has been using it for a while now and will have moved to 3nm by then (assuming it's ready).
 
One thing that seems consistent in most rumors is that RDNA3 will be out BEFORE "Ampere Next", so AMD will indeed be the new king of the hill, if only for a short time.

Another thing is that since RDNA3 is an evolution of RDNA2, I don't really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.
 
One thing that seems consistent in most rumors is that RDNA3 will be out BEFORE "Ampere Next", so AMD will indeed be the new king of the hill, if only for a short time.

Another thing is that since RDNA3 is an evolution of RDNA2, I don't really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.

Watching reviews and gameplay and such, the gap in RT performance between AMD and Nvidia is quickly closing. Whether that's because more RT is being designed specifically for AMD hardware, Microsoft's RT API is more native to AMD, or the software just needed to mature, only time will tell.
 
One thing that seems consistent in most rumors is that RDNA3 will be out BEFORE "Ampere Next", so AMD will indeed be the new king of the hill, if only for a short time.

Another thing is that since RDNA3 is an evolution of RDNA2, I don't really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.
Not sure either of those will necessarily be true.

They are both possible, but hardly given.

But a lot could change with hardware RT even without big changes to the rasterization side - just having its first generation under its belt, AMD will have seen how the process works and can better optimize the hardware for it (caches, bandwidth, instructions, etc.). And just dropping to 5nm won't necessarily mean AMD gets a performance crown - being first to 7nm didn't get them that either, and nVidia is burning a lot of power right now to claim that crown - that may be something AMD is unable or unwilling to do, depending on how RDNA 3 tapes out.
 
Another thing is that since RDNA3 is an evolution of RDNA2, I don't really see any major changes in RT and AI so even if RDNA3 is faster in rasterizing, it may still lag in RT.
Both AMD and Nvidia started developing RT hardware before they knew how developers were going to implement it, given the development cycles that this class of hardware involves. It's entirely possible for RDNA3 to be a direct evolution of RDNA2 and be significantly better at RT.

AI / ML I'd be less sure about given that there's very little standardization around it all, it's fast moving, and we've seen very little on the consumer side. RTX Voice comes to mind as one visible example.

Watching reviews and gameplay and such, the gap in RT performance between AMD and Nvidia is quickly closing. Whether that's because more RT is being designed specifically for AMD hardware, Microsoft's RT API is more native to AMD, or the software just needed to mature, only time will tell.
Overall it's most likely a product of learning how to use RT, but to use less of it. Nvidia got the ball rolling, and the first RT efforts showed that it needed to be implemented sparingly, lest we get another BF:V implementation that tanks performance without obvious benefit.

I'm sure that developing RT for the latest consoles has driven developers toward more creative and less heavy-handed solutions given the hardware constraints.
 
Back then you bought an AMD card for the performance you hoped they would get a year after you bought the card; good to see their drivers are better these days.
I think I've bought exactly two GPUs within six months of their release, ever: the 9700 Pro, and the GTX670.

What I'd found with ATi and now AMD drivers - and again, not with their latest hardware - is that the lag also shows up when games are released. These days it might be a week or two, and it's a coin toss which vendor has the most / biggest problems. I do remember having to turn Crossfire off in order to get Skyrim to run, and I didn't really have issues like that with Kepler or Maxwell in SLI.
 
Watching reviews and gameplay and such, the gap in RT performance between AMD and Nvidia is quickly closing. Whether that's because more RT is being designed specifically for AMD hardware, Microsoft's RT API is more native to AMD, or the software just needed to mature, only time will tell.
I'd like for this to be true. But apart from FSR finally hitting, and that helping boost AMD's RT efforts, I haven't seen anything that has really affected the delta between AMD and nVidia. Apart from some other driver boost like FSR, or just plain ole' Fine Wine effect, I don't know that we will see much change until we get new hardware - the hardware is what it is.
 
Radeon 9700Pro. Enough said.

The 9700 Pro is a 21-year-old card created by ATI. Enough said.

My statement stands, even for nVidia's ****ty cards.

Anyone playing the "performance isn't the only metric" card doesn't seem to understand what this rumor is saying.
Almost nobody gives a **** about power consumption unless they've already lost the performance race.
 
Almost nobody gives a **** about power consumption unless they've already lost the performance race.
Hmm. No, I don't agree. The two very much go hand in hand.

You have a somewhat hard cap on energy use. The efficiency of your architecture is going to define your top performance because you are more or less capped at the amount of power you can throw at it.

Top tier cards are almost always 300-350W TGP. Even if you were to pump more power in via more or varied PCI-E connectors, you're still stuck being in a standard PC enclosure and moving heat out via a double or triple slot cooler (or AIO cooler). Going much beyond that power envelope, stock, is courting disaster. Of course, overclocks are going to be capable of exceeding that, but that's why an overclock is an overclock and not just "stock".

So yeah, power matters, even at the top end. Otherwise, why wouldn't AMD/nVidia just crank the dial up to 11 and get even more out of them?
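To put rough numbers on that, here's a back-of-the-envelope sketch (the perf-per-watt figures are completely made up, just to illustrate the point): at the same board power cap, whichever architecture has the better perf-per-watt sets the higher performance ceiling.

    # Back-of-the-envelope sketch with made-up numbers; not measurements of any real card.
    # Under a fixed board power cap, sustained performance is roughly perf-per-watt * cap.

    def max_perf(perf_per_watt: float, power_cap_w: float) -> float:
        """Rough upper bound on sustained performance under a fixed power budget."""
        return perf_per_watt * power_cap_w

    # Two hypothetical architectures, both limited to a ~350 W top-tier card:
    arch_a = max_perf(perf_per_watt=1.00, power_cap_w=350)  # baseline efficiency
    arch_b = max_perf(perf_per_watt=1.15, power_cap_w=350)  # ~15% better perf-per-watt

    print(f"arch_b vs arch_a at the same cap: {arch_b / arch_a:.2f}x")  # ~1.15x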

Intel is right now learning this lesson all over again - they had learned it once with Prescott->Core. Now they need to go from Skylake to ???
 
Hmm. No, I don't agree. The two very much go hand in hand.

You have a somewhat hard cap on energy use. The efficiency of your architecture is going to define your top performance because you are more or less capped at the amount of power you can throw at it.

Top tier cards are almost always 300-350W TGP. Even if you were to pump more power in via more or varied PCI-E connectors, you're still stuck being in a standard PC enclosure and moving heat out via a double or triple slot cooler (or AIO cooler). Going much beyond that power envelope, stock, is courting disaster. Of course, overclocks are going to be capable of exceeding that, but that's why an overclock is an overclock and not just "stock".

So yeah, power matters, even at the top end. Otherwise, why wouldn't AMD/nVidia just crank the dial up to 11 and get even more out of them?

Intel is right now learning this lesson all over again - they had learned it once with Prescott->Core. Now they need to go from Skylake to ???


I didn't say power didn't matter. People generally don't use power consumption as a bullet point over performance unless their preferred card has already lost that performance race. Power is an important factor when reviewing a card as a whole, but unless the TGP is totally f***** most people will buy for higher performance over lower power consumption.
 
I didn't say power didn't matter. People generally don't use power consumption as a bullet point over performance unless their preferred card has already lost that performance race. Power is an important factor when reviewing a card as a whole, but unless the TGP is totally f***** most people will buy for higher performance over lower power consumption.
Good point.

I mostly see efficiency as it relates to being able to crank up the clocks more, I guess, but you're right, not everyone looks at it that way.
 
I didn't say power didn't matter. People generally don't use power consumption as a bullet point over performance unless their preferred card has already lost that performance race. Power is an important factor when reviewing a card as a whole, but unless the TGP is totally f***** most people will buy for higher performance over lower power consumption.
Power draw by itself doesn't mean too much, but it does set the bar for heat and noise output. Noise can be dealt with at a cost if the available coolers aren't great, but generally that means that the heat output is felt even more.

This is also important for smaller systems. More efficient parts mean less performance compromise for SFFs, whether that be due to hard cooling limits or just not wanting the system to sound like a jet engine. It's also a pretty big deal for laptops, though users usually don't get much choice there.

But if power draw is close, say within 10% under load? Point taken.

Good point.

I mostly see efficiency as it relates to being able to crank up the clocks more, I guess, but you're right, not everyone looks at it that way.
This, but note that being more efficient doesn't always translate into being able to crank up the clocks. It's pretty clear, for example, that AMD would probably give up no performance ground at all to Intel if they could get another 10% or so more clockspeed out of Ryzen regardless of power draw. As it stands AMD is likely to retain the efficiency advantage for a few more years while still seeing performance challenges from Intel despite Intel's fab troubles, simply because Intel can clock their parts higher.

With GPUs it's really all over the place though. Nvidia seemed to take the efficiency ground back with Kepler, but I think the majority of that has been due to AMD's inability to keep up with Nvidia's pace of innovation since then, as they've only been able to present a challenge between Nvidia's release cycles, and only by pushing out parts that stretched power envelopes and users' willingness to subject themselves to screaming blowers, like the R9 290 cards. And those were only 250W stock!

For what it's worth, chiplets have a lot of promise, and AMD has shown with their CPUs that they can navigate around the many potential pitfalls of breaking up a processor without unduly affecting performance. The greatly increased memory latency on Zen 2 CPUs just doesn't seem to matter with all the extra L3 cache they put on the chiplets, for example, and since GPUs are far less latency-sensitive than CPUs, they very well could pull this off.

Still, I don't envy anyone who has to deal with the first round of drivers :cool:
 
The 9700 Pro is a 21-year-old card created by ATI. Enough said.

My statement stands, even for nVidia's ****ty cards.

Anyone playing the "performance isn't the only metric" card doesn't seem to understand what this rumor is saying.
Almost nobody gives a **** about power consumption unless they've already lost the performance race.
Well, technically the 9700 Pro was created by ArtX, which ATI acquired back in 2000, IIRC.

On a side note, it's funny how ArtX paved the way for ATI/AMD's future cards, while nvidia didn't really use 3dfx technologies at all.
 
Well, technically the 9700 Pro was created by ArtX, which ATI acquired back in 2000, IIRC.

On a side note, it's funny how ArtX paved the way for ATI/AMD's future cards, while nvidia didn't really use 3dfx technologies at all.


I think that's an unfair statement. They used 3dfx tech and learned how to do SLI at the time from what they acquired. I think it's crap what happened to 3dfx, and they lost the race through mismanagement rather than a lack of tech.

Regardless, as long as the lead keeps passing between the major manufacturers, it is only good for us. And currently, if you're looking for an upgrade from the previous generation of Nvidia or AMD card, you will take what you can get within your budget. Otherwise you might as well start looking at the next generation.
 
I think that's an unfair statement. They used 3dfx tech and learned how to do SLI at the time from what they acquired. I think it's crap what happened to 3dfx, and they lost the race through mismanagement rather than a lack of tech.

Regardless, as long as the lead keeps passing between the major manufacturers, it is only good for us. And currently, if you're looking for an upgrade from the previous generation of Nvidia or AMD card, you will take what you can get within your budget. Otherwise you might as well start looking at the next generation.
The only thing that nvidia implemented from 3dfx SLI was the name. Actually, nvidia's SLI has much more in common with the Rage Fury MAXX, as it used AFR.

Don't get me wrong, some people claim 3dfx was the father of 3D PC gaming, and I tend to agree. But by the time nvidia bought it, it was already years behind the competition.
 
Radeon 9700Pro. Enough said.

The 9700 Pro was only possible through ATi's acquisition of another company called ArtX; it was that technology that made the Radeon 9700 Pro what it was. Again, this was ATi, not AMD. AMD purchased ATi after that.
 