AMD FidelityFX Super Resolution 2.0 Is Seemingly Just as Good as NVIDIA DLSS 2.0

Tsing

The FPS Review
Staff member
AMD's FidelityFX Super Resolution (FSR) 2.0 has officially debuted today, and early reviewers seem to be pretty happy with what the red team has achieved with the second iteration of its upscaling technology.

 
Again, very impressive work by AMD here across the board. No AI needed, no special cores needed. That, along with their driver updates, makes some mature cards even better.
 
That is very interesting.

In theory, upsampling methodologies like the ones AMD uses should not be capable of competing with AI-based geometry reconstruction methods like Nvidia uses.

I wonder what magic is making this possible?

Also, I wonder if there will be an RSR version of this, or if the technique requires explicit game support in order to work.
 

Well, given that FSR 1.0 was about as good as, or sometimes better than, DLSS 1.x, it seems only natural that FSR 2.0 compares with DLSS 2.x, so I'm going to say you are mistaken.

IIRC Control's DLSS 1.x implementation was initially shader-based and actually produced pretty good results, but then DLSS 2.0 came out and it was just better. So upscaling CAN be done without dedicated hardware. Now let's see what DLSS 3 brings to the table.

I'm more impressed that FSR 2.0 only takes a minor performance hit compared to FSR 1.0 while having much better IQ.
 

I mean, these are different strategies.

Everyone knows you can't add detail to an image that isn't there. It gets back to the old "enhance" trope always used in spy/police shows and movies.


FSR and DLSS use two different methods.

AMD's FSR is the simpler of the two. It uses various combinations of upscaling algorithms and sharpening filters to try to minimize the quality loss (but there will always be quality loss).

Nvidia's method is more sophisticated. It tries to fill in geometry using AI-based pattern matching. This results in sharpness closer to native resolution, but it introduces other types of errors when it misinterprets the geometry that is needed, because after all it is dealing with incomplete information.

There is no free lunch. You can't make something out of nothing. Upscaling algorithms will always have quality loss, and you can never completely stop AI geometry matching from occasionally guessing wrong. With the game-specific "training" required to make DLSS work, you can minimize it, but it will always be there.
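
For the curious, the spatial "upscale, then sharpen" idea behind FSR 1.0 is easy to sketch. This toy Python example (a plain bilinear resize plus an unsharp mask, nothing like AMD's actual EASU/RCAS passes) just shows the general shape of it:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Naive bilinear resample of a grayscale image (values in 0..1)."""
    h, w = img.shape
    out_h, out_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def unsharp_mask(img, amount=0.5):
    """Crude sharpen: push the image away from a 3x3 box blur of itself."""
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

low_res = np.random.rand(540, 960)                       # stand-in for a 960x540 render
high_res = unsharp_mask(bilinear_upscale(low_res, 2.0))  # upscale, then sharpen
print(high_res.shape)                                    # (1080, 1920)
```

No matter how clever the filters get, the output can only ever be a guess at the missing pixels, which is the point being made above.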
 
Are you telling me CSI is FAKE?????? no way :LOL: :p :D
 
FSR and DLSS use two different methods.
FSR 2.0 is pretty much the same thing as DLSS 2.x. Both are temporal-based upscalers with sharpeners. One (supposedly) uses AI/Tensor cores for ... magic? The other just uses shader cores. But as far as implementation / high level overview, both are pretty darn close now with the 2.0 revisions to each.

FSR 1 was not temporal-based; it was strictly per-frame. Not exactly the same situation, but in the same vein of "having matured": DLSS 1.0 used static AI training. Both DLSS 2 and FSR 2 are basically just temporal-based upscalers, the only real difference now being that one runs on Tensor cores while the other runs on basically any GPU with shader cores.
 
Interesting.

I did not realize they completely changed their method with DLSS 2.x; I was under the impression it was just an improved AI method like the initial release.

How does temporal-based upscaling work without introducing input lag? Wouldn't it need to peek into the future at the next frame, and thus delay everything by a frame?
 
Both DLSS 2 and FSR 2 are basically just temporal-based upscalers, the only real difference now being that one runs on Tensor cores while the other runs on basically any GPU with shader cores.
The point of the tensor cores is to speed up the processing.

Instead of the upscaling adding 5 ms to the frame, it adds 2 ms.

So if you are getting 60 fps at 1080p and want to temporally upscale to 4K, it would be the difference between 53.5 fps (tensor cores) and 46.2 fps (no tensor cores).

These numbers are made up, but the overall picture is what tensor cores are supposed to offer.
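
To put rough numbers on that, here's the back-of-the-napkin math with those made-up 2 ms and 5 ms overheads:

```python
def fps_with_upscale(base_fps, upscale_ms):
    """Effective frame rate after adding a fixed per-frame upscaling cost."""
    frame_time_ms = 1000.0 / base_fps
    return 1000.0 / (frame_time_ms + upscale_ms)

base = 60.0  # fps at the lower internal resolution
print(f"tensor cores (+2 ms): {fps_with_upscale(base, 2.0):.1f} fps")  # ~53.6 fps
print(f"shader cores (+5 ms): {fps_with_upscale(base, 5.0):.1f} fps")  # ~46.2 fps
```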
 
How does temporal-based upscaling work without introducing input lag?
They can't look at future frames, only past ones - which is why it has problems with some ghosting and certain types of motion. AMD talked a bit about how they reduce that via "disocclusion maps"; Nvidia just says it's AI magic.
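
For anyone wondering what that looks like, the core loop of a temporal upscaler can be sketched in a few lines: reproject last frame's accumulated result using this frame's motion vectors, throw the history away wherever it no longer matches (disocclusion), and blend in the new frame. A heavily simplified toy, not what FSR 2.0 or DLSS actually ship:

```python
import numpy as np

def temporal_accumulate(history, current, motion, alpha=0.1, reject_threshold=0.2):
    """One step of a toy temporal accumulator (uses past frames only, no look-ahead).

    history: previously accumulated frame (HxW)
    current: this frame's low-detail render (HxW)
    motion:  per-pixel (dy, dx) offsets pointing to where each pixel was last frame
    """
    h, w = current.shape
    ys, xs = np.indices((h, w))
    # Reproject: fetch last frame's accumulated value from where this pixel came from.
    src_y = np.clip(ys + motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + motion[..., 1], 0, w - 1).astype(int)
    reprojected = history[src_y, src_x]
    # Disocclusion-style rejection: where the history disagrees too much with the
    # new frame (e.g. something just became visible), fall back to the new frame.
    usable = np.abs(reprojected - current) < reject_threshold
    return np.where(usable, (1 - alpha) * reprojected + alpha * current, current)

# Accumulating sub-pixel-jittered frames like this over time is what recovers detail
# beyond the internal render resolution.
history = np.random.rand(540, 960)
current = np.random.rand(540, 960)
motion = np.zeros((540, 960, 2))   # camera held still in this toy example
print(temporal_accumulate(history, current, motion).shape)   # (540, 960)
```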
 
What I find funny is that just a week ago there were still people who claimed that FSR 1.0 was just as good as DLSS 2.x, but now the general consensus is that FSR 1.0 sucks while FSR 2.0 is as good as or even better than DLSS 2.x.

BTW, I really hope FSR 1.0 and DLSS 2.x games get an FSR 2.0 upgrade. I think most DLSS 1.x games never made the jump to DLSS 2.x.
 

Hopefully FSR 2.0 will just wind up being an upgrade to RSR and thus work in all titles without the need for any in-game support.
 
I think most DLSS1.x games never made the jump to DLSS2.x
Yeah, it took forever for Metro Exodus (and I think only with the Enhanced Edition), SOTTR, and unexpectedly ROTTR to get it, and even then you have to opt into a beta version for the Tomb Raider games because they also bundled in some social media BS, which pissed off a bunch of people (ironically, nobody was forced to use it, but that didn't stop people from complaining).

I really like the new DLSS stuff, but if FSR 2.0 gains wider adoption I'm sure I'll enjoy it too. It's great having something non-proprietary, but I also like how NV's approach offloads some of the work onto other hardware as well. In the end, whatever works is awesome for us all.
 
Hopefully FSR 2.0 will just wind up being an upgrade to RSR and thus work in all titles without the need for any in-game support.
Don't think that's gonna happen, just as NIS will not bring DLSS to all games.

The thing here is the motion vectors that are necessary for both FSR 2.0 and DLSS to work their magic. That's why TAA support is a prerequisite, as it already uses motion vectors.

But an improved RSR/NIS should be possible with the lessons learned from DLSS/FSR 2.0.
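
Which is also why it can't just live in the driver: a temporal upscaler needs per-frame data that only the engine can hand over. Roughly this kind of thing (a hypothetical illustration, not any real integration API):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class UpscalerFrameInputs:
    """Per-frame data a temporal upscaler needs from the engine (illustrative only,
    not the actual FidelityFX or DLSS integration API)."""
    color: np.ndarray           # low-resolution rendered frame
    depth: np.ndarray           # depth buffer, used for reprojection/disocclusion checks
    motion_vectors: np.ndarray  # per-pixel screen-space motion, usually from the TAA path
    jitter_offset: tuple        # sub-pixel camera jitter applied to this frame
    # A driver-level upscaler like RSR only ever sees the final color buffer,
    # which is why it can't do this kind of temporal reconstruction.
```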
 
It's great having something non-proprietary, but I also like how NV's approach offloads some of the work onto other hardware as well.
While I really don't care about proprietary or open-source standards, particularly on GPUs (AMD doesn't really have the greatest track record with open standards), the more choices, the better.

Now let's see what DLSS 3.0 and XeSS bring to the table.
 