Just like with DLSS 1, I really didn't expect Nvidia to ace frame generation on its first try, but the technology does look promising.
If only the RTX 4070 Ti would drop in price, I might be really tempted.
I'm going to be honest: I didn't think it was possible for frame generation to work well, at least not for gaming.
With static content you can always delay the audio by two frames, skip ahead a frame, and use a frame's worth of processing time to interpolate the two. With gaming, that's going to introduce two frames' worth of input lag, which would be awful.
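To put rough numbers on that (this is my own back-of-the-envelope arithmetic, not anything published by Nvidia, and the blend-cost figure is invented):

```python
# Toy latency math for classic frame interpolation, as used for video.
# To blend frame N with frame N+1 you must wait for N+1 to exist, so
# every displayed frame is at least one source frame old. This is my
# own sketch; it has nothing to do with Nvidia's actual pipeline.

def interpolation_latency_ms(source_fps: float, interp_cost_ms: float) -> float:
    """Minimum display latency added by inserting one generated frame
    between each pair of real frames."""
    frame_time_ms = 1000.0 / source_fps
    # Wait one full frame for the "future" frame, then pay the blend cost.
    return frame_time_ms + interp_cost_ms

if __name__ == "__main__":
    for fps in (30, 60, 120):
        # The 3 ms blend cost is a placeholder value I made up.
        added = interpolation_latency_ms(fps, interp_cost_ms=3.0)
        print(f"{fps:>3} fps source: at least {added:.1f} ms of extra latency")
```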
The DLSS 3 method must be predicting the next frame rather than waiting for it; otherwise it would be completely useless for games. How they do that reliably is completely beyond me. They must be running some sort of detection algorithm on your inputs (keyboard, mouse, etc.). Once trained on how a scene in a game reacts to movement, the AI would predict what the next frame will look like from a combination of the previous frame and the keyboard and mouse movements, and then produce that predicted frame faster than the GPU itself could render it from scratch.
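To make my guess concrete, here is a toy sketch of what I imagine that input-conditioned extrapolation looking like. Every name in it is made up by me, and the "model" is a trivial stand-in; this is not how DLSS 3 is documented to work, just the shape of the idea:

```python
# Pure speculation on my part: a skeleton of input-conditioned frame
# extrapolation. Nothing here reflects Nvidia's actual implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class InputState:
    mouse_dx: int                  # horizontal mouse delta since last frame
    keys: frozenset = frozenset()  # currently held keys, e.g. {"W"}

def toy_model(prev_frame: list, inputs: InputState) -> list:
    """Stand-in for a trained predictor: shift the previous frame
    sideways by the mouse delta, as if the camera panned."""
    shift = inputs.mouse_dx
    return [row[shift:] + row[:shift] for row in prev_frame]

def predict_next_frame(prev_frame, inputs, model=toy_model):
    """Guess frame N+1 from frame N plus player input, instead of
    waiting for it to be rendered. Because nothing waits on a future
    frame, no extra frame of input lag is added."""
    return model(prev_frame, inputs)

if __name__ == "__main__":
    frame = [[0, 1, 2, 3], [4, 5, 6, 7]]   # a tiny 2x4 "image"
    predicted = predict_next_frame(frame, InputState(mouse_dx=1))
    print(predicted)   # [[1, 2, 3, 0], [5, 6, 7, 4]]
```

The point of the sketch is the contrast with interpolation: a predicted frame can be shown immediately, so if this is what DLSS 3 does, it would explain why the lag I was bracing for never showed up.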
My expectation was that under ideal circumstances this might be possible, but that in practice it would fail: it would be laggy and look awful.
At least in Dying Light 2, that is not the case. Either that, or I am now so old that I can no longer detect input lag.
I'm honestly a little bit in disbelief. What they are accomplishing, at least in this one title,
shouldn't be possible.
While I am happy to use it in Dying Light 2, I am going to scrutinize it in every title I test before relying on it, and I probably would not use it in any multiplayer/competitive titles, just in case.