NVIDIA Still Has the Best Upscaling Solution, AMD FSR 3.1 vs. DLSS 3.5 Testing in Horizon Forbidden West and Ratchet & Clank: Rift Apart Shows

Tsing · The FPS Review Staff member · Joined: May 6, 2019 · Messages: 12,877 · Points: 113
AMD FidelityFX Super Resolution 3.1, the latest iteration of the open-source upscaling technology that the red team launched for PC gamers in June, still trails NVIDIA DLSS 3.5 in image quality despite the improvements and new features it has introduced, including reduced ghosting and enhanced frame generation, according to new tests of Horizon Forbidden West and Ratchet & Clank: Rift Apart that have surfaced online.

 
You know what I'm tired of... vendors trying to pass off cards that aren't meant for 1440p or 4K gaming as capable of delivering a good experience there. It's just to drive monitor sales.

Starting out on your first build? Get what you like, but stick with a target resolution of 1080p, or 1440p max, and spend more than you want to on the video card if you're going 1440p and want a good experience. PERIOD.
 
While it is true - I think - that DLSS provides both better quality upscaling and frame generation than AMD's implementations, they are pretty close, and unless you freeze-frame and pixel-peep, it is difficult to tell the difference when everything is moving.

The best experience is still the native experience. (Well, the best experience is the DLAA capability from DLSS, which works at native resolution to deal with aliasing.)

What sucks is that scaling has just been a massive gift to game devs to be lazy and do ****ty ports and bad development because "we can just enable scaling by default anyway" and everything just looks ****tier than it used to. It's maddening.
 
While it is true - I think - that DLSS provides both better quality upscaling and frame generation than AMD's implementations, they are pretty close, and unless you freeze-frame and pixel-peep, it is difficult to tell the difference when everything is moving.
The bigger issue is that FSR started way behind, and that many 'FSR' titles don't have anything close to the current / latest version.

And it's still behind enough to be noticeable.

This results in the ability to leverage DLSS with an Nvidia GPU just being a better experience overall when it comes to upscaling, and while some devs may appear to use it as a crutch, DLSS is still useful throughout the product stack.
 
Yep, and something I often find annoying that's left out of these discussions, other than the mention of proprietary tech, is that DLSS relies on tensor cores to do the work rather than the GPU's shader cores, leaving them free to work on other things. DLSS looks better but often delivers higher FPS than other solutions as well, even without using FrameGen, because of that.

I think it's great that everyone and their dog is working on better upscaling solutions, something more experienced PC users have been doing on their own for a long time already. But the writing was also on the wall decades ago that a single GPU was often reaching the limits of what it could do and needed help with other tasks. I keep hoping that AMD's chiplet approach will someday give it enough muscle to overpower the needs of RT, shaders, etc., and that FSR will evolve to assist, but until then, NVIDIA has provided hardware to empower DLSS out of the gate with the first RTX cards and continues to improve upon it with impressive results.

On the other side of the fence, yep, native is better, and once again NVIDIA is at the forefront here too with DLAA. Instead of using those same RTX tensor cores to upscale, games that support DLAA can use them to apply anti-aliasing to the native image. I first tried it with my RTX 4090 at 4K while playing Horizon Forbidden West, and combined with all the other settings at max and HDR on an LG 4K OLED display, it definitely showed improvement, to the point the game didn't look like a console game anymore.
 
DLSS's benefit over FSR is that DLSS has trained reference images for the games it works in, so it is able to know more precisely what the image is supposed to look like in reality. FSR does not; it's a simple spatial and temporal scaler. Until AMD creates machine learning that can be trained from reference images, it will always be inferior. Hardware aside, there is nothing wrong with using the compute units for upscaling; of course, if you want to maximize efficiency, then separate processing hardware helps, but it's not required. For frame generation, however, you most definitely want some form of dedicated compute to process the vector data provided by the game. Faster optical flow accelerators, for example, can provide better accuracy in frame prediction.
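To illustrate the "simple spatial scaler" part of the distinction above, here is a minimal, hypothetical sketch of pure spatial upscaling in Python: every output pixel is just interpolated from neighboring input pixels, with no learned prior about what the scene should look like. (Real FSR uses far more sophisticated filtering plus temporal accumulation; this only shows the spatial concept.)

```python
# Minimal bilinear spatial upscale of a grayscale frame
# (list of rows of floats). Illustrative only: no learned
# prior, no temporal history -- each output pixel is a blend
# of the four nearest input pixels.

def bilinear_upscale(frame, scale):
    h, w = len(frame), len(frame[0])
    out_h, out_w = int(h * scale), int(w * scale)
    out = []
    for oy in range(out_h):
        # Map the output row back into input space.
        fy = min(oy / scale, h - 1)
        y0 = int(fy)
        y1 = min(y0 + 1, h - 1)
        ty = fy - y0
        row = []
        for ox in range(out_w):
            fx = min(ox / scale, w - 1)
            x0 = int(fx)
            x1 = min(x0 + 1, w - 1)
            tx = fx - x0
            # Blend the four surrounding input pixels.
            top = frame[y0][x0] * (1 - tx) + frame[y0][x1] * tx
            bot = frame[y1][x0] * (1 - tx) + frame[y1][x1] * tx
            row.append(top * (1 - ty) + bot * ty)
        out.append(row)
    return out

src = [[0.0, 1.0],
       [1.0, 0.0]]
up = bilinear_upscale(src, 2)
print(len(up), len(up[0]))  # 4 4
```

Note that no amount of interpolation like this can reinvent detail that was never rendered, which is exactly the gap a trained model tries to fill.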

I am a huge advocate of things like DLAA; I think that form of the technology is the way to go in gaming. In every game I play, I use DLAA instead of upscaling, at my native resolution, and I love the image quality.

Also, the trope about game devs using upscaling as a crutch is just not true. For developers, upscaling is the last thing added to a game; they optimize in other ways long before upscaling is added. In fact, a lot of things in games these days are upscaled natively from lower-resolution or compressed files in the game engine. The whole idea of LOD is exactly that. A game is never rendered at its 'native' or uncompressed quality. Part of optimization is figuring out how to represent something of higher quality at a lower cost in resources, so in a sense, everything is 'upscaled' from the native texture size, polygon count, mesh size, etc...
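The LOD idea described above can be sketched in a few lines: assets are authored at full quality, but the renderer picks a cheaper, lower-resolution representation as the object gets farther away. The distance thresholds, level count, and base resolution below are made-up values for illustration, not any engine's actual scheme.

```python
import math

# Hypothetical LOD/mip selection: each doubling of distance
# beyond base_distance drops one level, and each level halves
# the texture resolution. Numbers are illustrative only.

def mip_level(distance, base_distance=10.0, max_level=4):
    """Return a mip level: 0 = full quality, higher = cheaper."""
    if distance <= base_distance:
        return 0
    level = int(math.log2(distance / base_distance))
    return min(level, max_level)

def mip_resolution(base_res, level):
    """Halve the texture resolution once per mip level."""
    return base_res >> level

for d in (5, 15, 45, 200):
    lvl = mip_level(d)
    # e.g. distance 45 -> level 2 -> 256 px from a 1024 px texture
    print(d, lvl, mip_resolution(1024, lvl))
```

This is the sense in which everything on screen is already a cheaper stand-in for the full-quality asset, long before any screen-space upscaler gets involved.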
 
Side note regarding Frame Generation: when enabled, FG consumes 1 to 2 GB of your VRAM. So if you have a card with only 8 GB of VRAM, be sure to try both with and without FG, as you might get a better experience with it turned off. The best thing is to turn on an overlay from Afterburner so you can see the currently used VRAM amount while playing with various video settings such as resolution and texture quality, then decide if you can afford the VRAM for FG. If you have insufficient VRAM for FG, it can actually hurt performance.

It is looking like the upcoming 5060 and 5050 GPUs will only have 8 GB of VRAM.
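The budgeting advice above boils down to simple arithmetic. Here is a hypothetical sketch of the check: the ~1-2 GB frame-generation overhead is the figure quoted in the post, while the headroom value and the example card/usage numbers are assumptions for illustration.

```python
# Rough VRAM budget check for enabling frame generation.
# fg_overhead_gb reflects the 1-2 GB range quoted above;
# headroom_gb is an assumed safety margin, not a measured value.

def frame_gen_fits(total_vram_gb, used_vram_gb,
                   fg_overhead_gb=1.5, headroom_gb=0.5):
    """Return True if enabling FG likely stays within VRAM.

    used_vram_gb: what your overlay (e.g. Afterburner) reports
    with FG off at your chosen resolution/texture settings.
    """
    return used_vram_gb + fg_overhead_gb + headroom_gb <= total_vram_gb

# 8 GB card already using 7 GB: FG would likely spill and hurt performance.
print(frame_gen_fits(8, 7.0))   # False
# 16 GB card using 9 GB: plenty of room for FG.
print(frame_gen_fits(16, 9.0))  # True
```

If the check fails, dropping texture quality or resolution until the overlay shows enough free VRAM is the practical workaround, per the advice above.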
 
Well, the best experience is the DLAA capability from DLSS, but at native resolution to deal with aliasing
I am a huge advocate of things like DLAA; I think that form of the technology is the way to go in gaming. In every game I play, I use DLAA instead of upscaling, at my native resolution, and I love the image quality.
I don't have the words to describe how much I love the image quality that DLAA gives me. Every game I play that supports DLAA, that sh1t gets turned on with the quickness. And it upsets me when a game does NOT support DLAA.
 