AMD's FidelityFX Super Resolution (FSR) 2.0 has officially debuted today, and early reviewers seem to be pretty happy with what the red team has achieved with the second iteration of its upscaling technology.
That is very interesting.
In theory, upsampling methodologies like AMD's should not be capable of competing with AI-based geometry-reconstruction methods like Nvidia uses.
I wonder what magic is making this possible?
Also, I wonder if there will be an RSR version of this, or if the technique requires explicit game support in order to work.
Well, given that FSR 1.0 was about as good as or sometimes better than DLSS 1.x, it seems only natural that FSR 2.0 compares well with DLSS 2.x, so I'm going to say you are mistaken.
IIRC Control's DLSS 1 implementation was initially shader based and actually produced pretty good results, but then DLSS 2.0 came out and it was just better. So upscaling CAN be done without dedicated hardware. Now let's see what DLSS 3 brings to the table.
I'm more impressed that FSR 2.0 only gets a minor performance hit compared to FSR1.0 while having much better IQ.
Are you telling me CSI is FAKE?????? No way.
I mean, these are different strategies. Everyone knows you can't add detail that isn't there to an image. It gets back to the old "enhance" trope always used in spy/police shows and movies.
FSR and DLSS use two different methods.
AMD's FSR is the simplest. It uses various combinations of upscaling algorithms and sharpening filters to try to minimize the quality loss (but there will always be quality loss).
Nvidia's method is more sophisticated. It tries to fill in geometry using AI-based pattern matching. This will result in sharpness closer to native resolution, but it will also produce other kinds of errors when it misinterprets the geometry that is needed, because after all it is dealing with incomplete information.
There is no free lunch. You can't make something out of nothing. Upscaling algorithms will always have quality loss, and you can never stop AI geometry matching from occasionally guessing wrong. With the game-specific "training" required to make DLSS work, you can minimize it, but it will always be there.
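To make the "upscaling algorithms and sharpening filters" part concrete, here is a toy sketch of the general idea. It is not AMD's actual FSR 1.0 code (the real passes, EASU and RCAS, are edge-adaptive GPU shaders); it's just plain bilinear upscaling followed by an unsharp mask, in numpy, to show the shape of the approach:

# Toy illustration of the "upscale then sharpen" idea behind spatial upscalers
# like FSR 1.0. This is NOT AMD's EASU/RCAS code, just a minimal numpy sketch.
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a (H, W) grayscale image by `scale` using bilinear interpolation."""
    h, w = img.shape
    new_h, new_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def unsharp_mask(img, amount=0.5):
    """Cheap sharpening: boost the difference between a pixel and its local average."""
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) + img) / 5.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

low_res = np.random.rand(180, 320)                      # stand-in for a 320x180 render
output = unsharp_mask(bilinear_upscale(low_res, 2.0))   # spatial upscale, then sharpen
print(output.shape)                                     # (360, 640)

No information is created here, which is exactly the poster's point: the sharpener only emphasizes detail that survived the upscale.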
FSR 2.0 is pretty much the same thing as DLSS 2.x. Both are temporal-based upscalers with sharpeners. One (supposedly) uses AI/Tensor cores for ... magic? The other just uses shader cores. But as far as implementation / high level overview, both are pretty darn close now with the 2.0 revisions to each.
FSR 1 was not temporal based, it was strictly per frame. Not really similar, but in the same vein of "having matured" -- DLSS 1.0 used static AI training. Both DLSS 2 and FSR 2 are basically just temporal-based upscalers - the only real difference now being that one runs on Tensor cores, while the other runs on basically any GPU with shader cores.
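To make "temporal-based upscaler" concrete: each frame the game renders at reduced resolution with a slightly jittered camera and hands over per-pixel motion vectors; the upscaler then reprojects what it accumulated over previous frames and blends the new samples in. A minimal numpy sketch of that accumulation step (not either vendor's actual code, just the general idea):

# Rough sketch of the temporal accumulation at the heart of DLSS 2 / FSR 2-style
# upscalers. Not either vendor's actual algorithm - just the general idea:
# reproject last frame's accumulated image with motion vectors, then blend in
# the new (jittered, low-res) samples.
import numpy as np

def reproject(history, motion):
    """Fetch each pixel of `history` from where it was last frame.
    `motion` holds per-pixel (dy, dx) offsets in output-resolution pixels."""
    h, w = history.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(yy - motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xx - motion[..., 1], 0, w - 1).astype(int)
    return history[src_y, src_x]

def temporal_accumulate(current, history, motion, blend=0.1):
    """Blend the reprojected history with the current frame's new samples.
    Small `blend` = long accumulation = more effective samples per pixel."""
    reprojected = reproject(history, motion)
    return blend * current + (1.0 - blend) * reprojected

# One step of the loop a real upscaler runs every frame (toy data):
h, w = 360, 640
history = np.zeros((h, w))        # accumulated high-res result so far
current = np.random.rand(h, w)    # this frame's upsampled, jittered samples
motion = np.zeros((h, w, 2))      # per-pixel motion vectors (static scene here)
history = temporal_accumulate(current, history, motion)

Because each frame's samples land at slightly different sub-pixel positions thanks to the jitter, accumulating them over time recovers detail a single low-res frame doesn't have, which is why the 2.0-generation upscalers look so much closer to native.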
The point of the tensor cores is to speed up the processing.
Interesting. I did not realize they completely changed their method for DLSS 2.x; I was under the impression it was just an improved AI method like the initial release.
How does temporal-based upscaling work without introducing input lag? Wouldn't it need to peek into the future at the next frame, and thus delay everything by a frame?
They can't look at future frames, only past - which is why it has problems with some ghosting and types of motion. AMD talked a bit about how they reduce that via "disocclusion maps", nVidia just says it's AI magic.
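The ghosting comes from trusting reprojected history in places where geometry was just revealed (disocclusion) or where the scene changed. The sketch below is not AMD's actual disocclusion-mask logic or Nvidia's network, just the generic TAA-style fallback that shows the flavor: clamp the history sample to the color range of the current frame's 3x3 neighborhood so stale colors get discarded instead of smeared:

# Toy version of history rejection. Not AMD's disocclusion-mask implementation -
# just the common heuristic: clamp the reprojected history sample to the min/max
# of the current frame's 3x3 neighborhood, so stale colors from previously
# occluded geometry get discarded instead of ghosting across revealed pixels.
import numpy as np

def neighborhood_min_max(current):
    """Per-pixel min and max over the 3x3 neighborhood of the current frame."""
    shifted = [np.roll(np.roll(current, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    stack = np.stack(shifted)
    return stack.min(axis=0), stack.max(axis=0)

def resolve(current, reprojected_history, blend=0.1):
    """Blend current samples with history, after clamping history to plausible values."""
    lo, hi = neighborhood_min_max(current)
    clamped_history = np.clip(reprojected_history, lo, hi)   # rejects most ghosting
    return blend * current + (1.0 - blend) * clamped_history

current = np.random.rand(360, 640)
history = np.random.rand(360, 640)   # pretend this was reprojected with motion vectors
frame = resolve(current, history)

Since everything only uses the current and previous frames, no future frame is needed and no extra latency is added; the cost is that rejected history means fewer accumulated samples, which is where the flicker/ghosting trade-off lives.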
What I find funny is that just a week ago there were still people claiming that FSR 1.0 was just as good as DLSS 2.x, but now the general consensus is that FSR 1.0 sucks while FSR 2.0 is as good as or even better than DLSS 2.x.
BTW, I really hope FSR 1.0 games get an FSR 2.0 upgrade. I think most DLSS 1.x games never made the jump to DLSS 2.x.
Yeah, it took forever for Metro Exodus (and I think only the Enhanced Edition got it), SOTTR, and unexpectedly ROTTR to get it, and even then you have to opt into a beta version for the Tomb Raider games because they also bundled in some social media BS, which pissed off a bunch of people (ironically nobody was forced to use it, but that didn't stop people from complaining).
Hopefully FSR 2.0 will just wind up being an upgrade to RSR and thus work in all titles without the need for any in-game support.
Don't think that's gonna happen, just as NIS will not bring DLSS to all games.
While I really don't care about proprietary or open source standards, particularly on GPUs (AMD doesn't really have the greatest track record using open standards), the more choices, the better.
I really like the new DLSS stuff, but if FSR 2.0 gains wider adoption I'm sure I'll enjoy it too. It's great having something non-proprietary, but I also like how NV's approach offloads some of the work onto other hardware as well. In the end, whatever works is awesome for us all.