AMD Won’t Launch NVIDIA DLSS Competitor Until It’s Ready for Both Radeon Graphics Cards and Next-Gen Consoles

Tsing

Image: AMD



AMD is reportedly in no rush to launch its NVIDIA DLSS competitor, FidelityFX Super Resolution, until the technology can be enabled not only on Radeon RX 6000 Series graphics cards but on the next-gen PlayStation 5 and Xbox Series X|S consoles as well. Red team's plans were shared in a video published today by popular YouTuber Linus Sebastian, who revealed that AMD is planning a cross-platform approach and debut for the highly anticipated technology. FidelityFX Super Resolution is a collaboration between AMD and Microsoft that aims to improve performance by leveraging machine learning to upscale images with minimal degradation in visual quality.



“Curiously missing from [today’s Radeon RX 6700 XT] presentation is any further explanation as to how and when FidelityFX Super Resolution, that’s AMD’s answer to NVIDIA’s DLSS, is going to make an appearance,”...

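Neither AMD nor Microsoft has published details of how Super Resolution actually works, so as a rough, purely illustrative sketch of the general idea behind any upscaler (render at a lower resolution, then reconstruct a higher-resolution frame), here is a naive bilinear baseline in Python. The function name, resolutions, and random data are all made up for illustration; this is not FSR's or DLSS's algorithm.

```python
# Illustrative only: a naive bilinear upscaler, i.e. the fixed-filter
# baseline an ML-based technique (FSR, DLSS) would try to improve on.
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscale an HxWxC image by `scale` using bilinear interpolation."""
    h, w, _ = img.shape
    # Map each output pixel back to a (fractional) source coordinate.
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]  # vertical blend weights
    wx = (xs - x0)[None, :, None]  # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Pretend this is a frame rendered at half the target resolution.
low_res = np.random.rand(270, 480, 3).astype(np.float32)
high_res = bilinear_upscale(low_res, scale=2)
print(high_res.shape)  # (540, 960, 3)
```

An ML-based upscaler would replace or refine this fixed filtering step with a learned model that recovers detail a simple filter can't, which is where the whole image-quality debate comes from.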
 
Something that will work on Windows, Xbox, and PS5 with no tensor cores and not using DirectML (as the PS5 doesn't support it)? So maybe in a few years...
 
Something that will work on Windows, Xbox, and PS5 with no tensor cores and not using DirectML (as the PS5 doesn't support it)? So maybe in a few years...
Yup.

This kind of means that AMD's solution is both going to take a good while longer and also be inferior.

Granted, they are so late to the game that they pretty much don't stand a chance of 'catching up', especially if DLSS continues to improve. Their only real chance is someone like Microsoft forcing a standard that actually catches on!
 
Looks like they have their hands in too many pies, with the console holding back the PC on AMD kit.
If the PC version could be better than the console one, I bet they will hold it back.
This doesn't sit well. NVidia needs competition.
 
NVidia needs competition.
Nvidia's been competing with themselves quite well over the years. After all, most of the time the main point of comparison is Nvidia's last release. And they still push the boundaries of the whole ecosystem year after year.

Sure, it'd be nice to see AMD (or Intel or...) put some competitive pressure on Nvidia, which would perhaps force prices a bit lower should supply problems be solved, but it's hard to support the idea that we're in a bad situation primarily because Nvidia lacks real competition.
 
Nvidia's been competing with themselves quite well over the years. After all, most of the time the main point of comparison is Nvidia's last release. And they still push the boundaries of the whole ecosystem year after year.

Sure, it'd be nice to see AMD (or Intel or...) put some competitive pressure on Nvidia, which would perhaps force prices a bit lower should supply problems be solved, but it's hard to support the idea that we're in a bad situation primarily because Nvidia lacks real competition.
I'm interested in AMD doing better.
It helps NVidia push the envelope.
 
I'm interested in AMD doing better.
It helps NVidia push the envelope.
But hasn't Nvidia always pushed, regardless of what AMD does/doesn't?
AMD's market share in GPUs is so small (and has been for quite a while) that Nvidia certainly has had no need to push, and yet they keep pushing.
 
But hasn't Nvidia always pushed, regardless of what AMD does/doesn't?
AMD's market share in GPUs is so small (and has been for quite a while) that Nvidia certainly has had no need to push, and yet they keep pushing.
I haven't said anything about NVidia's actual performance, only that competition helps to push them.
Both of you are putting words in my mouth; I haven't mentioned what you are claiming.

ps
I own a 3090.
 
The reality is, NVIDIA has always pushed the boundaries. It realizes that it needs to continue to innovate in order to get people to buy its products year after year and to upgrade as frequently as possible. It does everything it can to influence developers to push new technologies as well for this purpose. That's not to say that NVIDIA always nails it with every product release. They obviously don't. The 20 series was somewhat lackluster in that it didn't really provide a solid upgrade from the 10 series, and only the RTX 2080 Ti was faster than its predecessor in any meaningful way.

Competition does help drive them forward, as NVIDIA tends to release products it doesn't even need to, simply to extend its lead over AMD and sell a few more cards in a given time frame. However, unlike Intel, NVIDIA realizes that product stagnation will hurt them in the long run by allowing competitors or potential competitors to leapfrog ahead.
 
I disagree. I think we've seen GPUs move forward in spurts in the last 10-15 years, and it's almost always driven by how the competition between AMD/nVidia has played out - but there is a lag in that response (which makes sense - it takes time to take a product from design through production).

nVidia certainly has not always pushed boundaries. Everyone wants to forget the Tesla/Fermi era, where nVidia was absolutely trounced by AMD, and I'd argue that was exactly what pushed nVidia to hit all those big strides from Kepler -> Maxwell -> Pascal. Now bring Navi into the picture, and a bit later we see a really lackluster Pascal->Turing jump followed by a great Turing->Ampere generation. And somewhere in all of that, Volta mysteriously disappeared...

On the flip side, AMD has been perpetually playing catchup in market share, but that doesn't mean they haven't had good products that can spur competition.

The 5000 series was a home run - it took a long time for nVidia to be able to really respond to that. GCN had really long legs; it didn't always take the performance crown, but its scalability and extensibility led to a lot of derivative generations that stayed competitive for a really long time. Polaris was a great success in the lower-mid range tier, and got AMD the console contracts.

I think nVidia plays a lot more to the stock market game than AMD does - and most everything nVidia does is with an eye to what it will do to their share price. They are in a position right now that if they fall behind AMD by almost any metric, their share price will get hammered. So they very much watch what AMD does, and they do their best to maintain a comfortable margin ahead of AMD. They have been fortunate to have a healthy leg up on AMD for most of their corporate history, so when they do have lulls in innovation it hasn't decimated them, but I don't think that means nVidia has consistently advanced forward; I think you can very clearly see a call-and-response in nVidia's product releases compared to what AMD does.
 
So far it's anyone's guess how Super Res will perform, or what the IQ will look like. Some people claim it will have a higher performance hit than DLSS with lower or equivalent IQ. On a side note, I recall nvidia promising better IQ/perf over time when DLSS was first announced, but DLSS 1.0 titles were never upgraded. Bummer...

I guess it's possible, as AMD doesn't have tensor cores, but it may have the compatibility edge. Although nvidia already struck a blow with UE integration (and rumor says Unity integration is coming too).

BTW, I wonder why there was no mention of Super Res during the 6700 XT presentation; I mean, that's the card that might benefit the most from it...
 
I'm not convinced the tensor cores are doing all that much in the first place... unless you're actually doing something with AI on them. I have a feeling nVidia could have run DLSS 2.0 on other-than-RTX SKUs if they wanted to; it sounds exactly like the type of lock-in and up-sell that nVidia is notorious for.
 
I disagree. I think we've seen GPUs move forward in spurts in the last 10-15 years, and it's almost always driven by how the competition between AMD/nVidia has played out - but there is a lag in that response (which makes sense - it takes time to take a product from design through production).

nVidia certainly has not always pushed boundaries. Everyone wants to forget the Tesla/Fermi era, where nVidia was absolutely trounced by AMD, and I'd argue that was exactly what pushed nVidia to hit all those big strides from Kepler -> Maxwell -> Pascal. Now bring Navi into the picture, and a bit later we see a really lackluster Pascal->Turing jump followed by a great Turing->Ampere generation. And somewhere in all of that, Volta mysteriously disappeared...

On the flip side, AMD has been perpetually playing catchup in market share, but that doesn't mean they haven't had good products that can spur competition.

The 5000 series was a home run - it took a long time for nVidia to be able to really respond to that. GCN had really long legs; it didn't always take the performance crown, but its scalability and extensibility led to a lot of derivative generations that stayed competitive for a really long time. Polaris was a great success in the lower-mid range tier, and got AMD the console contracts.

I think nVidia plays a lot more to the stock market game than AMD does - and most everything nVidia does is with an eye to what it will do to their share price. They are in a position right now that if they fall behind AMD by almost any metric, their share price will get hammered. So they very much watch what AMD does, and they do their best to maintain a comfortable margin ahead of AMD. They have been fortunate to have a healthy leg up on AMD for most of their corporate history, so when they do have lulls in innovation it hasn't decimated them, but I don't think that means nVidia has consistently advanced forward; I think you can very clearly see a call-and-response in nVidia's product releases compared to what AMD does.

Yes, NVIDIA reacts to AMD, and it helps to drive them forward, but products that fall short of what we as consumers like to see in terms of advancement aren't necessarily an indicator of the company being lazy or not trying to push the envelope. We often forget that creating new architectures and moving technology forward is a creative process, with some iterations being far more meaningful than others. Not really moving forward all that much isn't necessarily due to a lack of trying.

NVIDIA being trounced by AMD on occasion doesn't prove a lack of drive to innovate. Remember, an architecture takes years to design, and at some point these companies are committed and can't just roll back the clock and start over, or change tack super quickly. Intel took five years to course correct from Netburst, and that course correction even resulted in scrapping Tejas entirely, which it had spent a fortune on at the time. Even some of ATi and AMD's advancements weren't on their own merits. They were a matter of good business through solid acquisitions rather than a product of their own R&D efforts at the time. One of ATi's few victories against NVIDIA was a result of that: ATi bought the technology that created the 9700 Pro through the acquisition of a company called ArtX rather than developing it through its own engineering prowess.

I absolutely disagree with the idea that AMD's RX 5000 series was a home run. Performance-wise it was lackluster, and the profit margins had to be cut to the bone just to move units. AMD was forced to cut MSRPs by upwards of $100 before the RX 5000 officially launched, as NVIDIA's 20 series refresh cards would have dominated in price/performance across the board had it not done so. That's not really a "home run." That's scraping by with an inferior product that has to adjust prices constantly and have profits slashed just to bring in any revenue for the company that created it.
 
Not RX, HD - 12 years ago. My mistake, I forgot the poor naming convention...
 
I'm not convinced the tensor cores are doing all that much in the first place... unless you're actually doing something with AI on them. I have a feeling nVidia could have run DLSS 2.0 on other-than-RTX SKUs if they wanted to; it sounds exactly like the type of lock-in and up-sell that nVidia is notorious for.
AFAIK, the first implementation of DLSS "1.5" in Control didn't use the tensor cores but the shader cores. It was later updated to DLSS 2.0 running on tensor cores, and it's both faster and has better IQ.
 
Not RX, HD - 12 years ago. My mistake, I forgot the poor naming convention...

The HD 5000 was fantastic. I had an HD 5970 back in the day, and to this day it remains one of the best video cards I've ever owned.
 
Yes, NVIDIA reacts to AMD, and it helps to drive them forward, but products that fall short of what we as consumers like to see in terms of advancement aren't necessarily an indicator of the company being lazy or not trying to push the envelope. We often forget that creating new architectures and moving technology forward is a creative process, with some iterations being far more meaningful than others. Not really moving forward all that much isn't necessarily due to a lack of trying.

NVIDIA being trounced by AMD on occasion doesn't prove a lack of drive to innovate. Remember, an architecture takes years to design, and at some point these companies are committed and can't just roll back the clock and start over, or change tack super quickly. Intel took five years to course correct from Netburst, and that course correction even resulted in scrapping Tejas entirely, which it had spent a fortune on at the time. Even some of ATi and AMD's advancements weren't on their own merits. They were a matter of good business through solid acquisitions rather than a product of their own R&D efforts at the time. One of ATi's few victories against NVIDIA was a result of that: ATi bought the technology that created the 9700 Pro through the acquisition of a company called ArtX rather than developing it through its own engineering prowess.

I absolutely disagree with the idea that AMD's RX 5000 series was a home run. Performance-wise it was lackluster, and the profit margins had to be cut to the bone just to move units. AMD was forced to cut MSRPs by upwards of $100 before the RX 5000 officially launched, as NVIDIA's 20 series refresh cards would have dominated in price/performance across the board had it not done so. That's not really a "home run." That's scraping by with an inferior product that has to adjust prices constantly and have profits slashed just to bring in any revenue for the company that created it.
Also, nvidia is the dominant player in almost every market it's in: PC gaming, HPC, AI, science, automotive, self-driving, content creation, Hollywood productions, rendering, you name it. About the only market where it absolutely failed was mobile (no wonder they want ARM really badly).

So nvidia does have plenty of competition in several markets, not just AMD, and don't forget intel is getting into the ring pretty soon.
 
Also, nvidia is the dominant player in almost every market it's in: PC gaming, HPC, AI, science, automotive, self-driving, content creation, Hollywood productions, rendering, you name it. About the only market where it absolutely failed was mobile (no wonder they want ARM really badly).

So nvidia does have plenty of competition in several markets, not just AMD, and don't forget intel is getting into the ring pretty soon.

If Intel comes out with something competitive at the top end with their cards I will be shocked. Pleasantly mind you, but shocked.
 
nVidia certainly has not always pushed boundaries. Everyone wants to forget the Tesla/Fermi era, where nVidia was absolutely trounced by AMD

Unless you're talking about stability, drivers, multi-GPU...

I absolutely regretted buying the AMD GPUs I did at that time, while the Nvidia GPUs were pretty good if not dominating.

Neither were perfect, mind you, but there's a big difference between 'it'll probably work' and 'it might work someday'. I did indeed get tired of waiting for something in the AMD ecosystem to get fixed so that I could use the hardware close to its potential.

However, unlike Intel, NVIDIA realizes that product stagnation will hurt them in the long run by allowing competitors or potential competitors to leapfrog ahead.

I really, really want to challenge you on this one. The idea that Intel would have actually chosen to 'hold back' 10nm and 7nm so that they could milk 14nm more seems to be a common complaint, but one that leaves me flabbergasted.

Still, even if I think that the characterization is inaccurate, the history isn't, and while Intel has paid a price, they haven't truly paid for their mistake, intentional or not.

If Intel comes out with something competitive at the top end with their cards I will be shocked. Pleasantly mind you, but shocked.

So say we all!
 
I'm not convinced the tensor cores are doing all that much in the first place... unless you're actually doing something with AI on them. I have a feeling nVidia could have run DLSS 2.0 on other-than-RTX SKUs if they wanted to; it sounds exactly like the type of lock-in and up-sell that nVidia is notorious for.
It could be run on anything, but using the tensor cores processes it much faster, and concurrently with the rest of the GPU's work.

I mean, the difference between adding 1.5 ms to a rendered frame and adding 4 ms is massive, relatively speaking.
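To put rough numbers on that, here's the back-of-the-envelope arithmetic in Python. The base render times and the helper name are assumptions for illustration, not benchmarks; only the 1.5 ms and 4 ms figures come from the post above.

```python
# Frame-time math behind the 1.5 ms vs 4 ms comparison.
# Base render times are assumed for illustration, not measured.

def fps_with_upscale(base_ms: float, upscale_ms: float) -> float:
    """Effective frame rate once an upscaling pass is added to each frame."""
    return 1000.0 / (base_ms + upscale_ms)

for base_ms in (16.7, 8.3):  # ~60 FPS and ~120 FPS render budgets
    fast = fps_with_upscale(base_ms, 1.5)  # tensor-core-style cost
    slow = fps_with_upscale(base_ms, 4.0)  # shader-core-style cost
    print(f"{base_ms} ms base render: {fast:.1f} FPS vs {slow:.1f} FPS")

# 16.7 ms base render: 54.9 FPS vs 48.3 FPS
# 8.3 ms base render: 102.0 FPS vs 81.3 FPS
```

The higher the base frame rate, the more those extra milliseconds cost, which is exactly the regime where an upscaler is supposed to shine.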
 