NVIDIA DLSS 3.5, Featuring Ray Reconstruction (RR), Has Been Announced at GamesCom 2023 and Is Set to Arrive This Fall

Peter_Brosdahl

NVIDIA DLSS 3.5 was announced at this year's GamesCom 2023 opening night and is set to arrive this fall. The next iteration focuses on a new feature called Ray Reconstruction (RR), which aims to deliver significantly improved visuals when ray tracing effects are in use. RR is an AI-based denoiser and is said to have been trained on five times more data than DLSS 3. According to VideoCardz, traditional denoisers often create graphical anomalies due to a lack of data, and they can also strip away detail needed for upscaling while reducing color information.

See full article...
 
This is massive. Up to around a 10% FPS increase using it with FG and DLSS in Cyberpunk, going by the example they showed.

I know very few games will get the tech and even fewer will get as much of a benefit, but I'm excited.
 
The point isn't the performance increase; it's the image quality improvement vs. a traditional denoiser. RR also incurs a cost, just like a denoiser does, but the claim is that AI can do it better.
 
Hopefully this will help with RT 'swimming/noise', especially at lower DLSS resolutions.

If it works as they advertise, haha.
 
I think the intended benefit is that when RT is used with upscaling, it tends to lose quality: the temporal AA and the denoiser introduce artifacts. This new AI method uses a different approach to find patterns and fill in the missing pixels, so it can do a better job of it when upscaling with temporal AA is in play.
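If it helps to picture the two paths, here's a minimal toy sketch in Python/NumPy of the conventional flow (hand-tuned denoiser, then upscale) versus a single combined reconstruction pass. The box blur, the nearest-neighbor upscale, and the `learned_reconstruction` stub are all placeholders of my own, not NVIDIA's pipeline or API:

```python
import numpy as np

def box_blur(img, k=3):
    """Toy stand-in for a hand-tuned denoiser: a simple box filter.
    It removes noise but also smears fine detail (fences, wires)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def nearest_upscale(img, factor=2):
    """Toy upscaler: nearest-neighbor, standing in for DLSS Super Resolution."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Conventional path: denoise first (detail already lost), then upscale.
noisy_rt = np.random.rand(64, 64)            # stand-in for sparse ray-traced output
conventional = nearest_upscale(box_blur(noisy_rt))

# Ray Reconstruction path (conceptually): one trained network sees the noisy
# samples plus extra engine data (motion vectors, normals, etc.) and produces
# the denoised, upscaled image in a single pass. Stubbed here.
def learned_reconstruction(noisy, motion_vectors, factor=2):
    """Placeholder for the trained model; the real thing runs on the Tensor Cores."""
    return nearest_upscale(noisy, factor)    # no real model here, just the data flow

reconstructed = learned_reconstruction(noisy_rt, motion_vectors=None)
print(conventional.shape, reconstructed.shape)   # both (128, 128)
```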
 
I wonder if this will eliminate the trails behind moving objects off in the distance. It's not as bad as it used to be, but it is still there with DLSS 3.
 
So the Tensor Cores will do the denoising via AI, instead of just using a denoiser filter algorithm.

When RTX was first announced, I wondered why it used an algorithm instead of AI. I guess it wasn't ready just yet.
 
Interesting that it works on previous-gen cards as well. Not that I'm complaining, it's just uncharacteristic.
 
Separate Frame Generation and DLSS 3 in your mind. DLSS 3 works on all RTX GPUs, RR works on all RTX GPUs, but Frame Generation only works on RTX 40 GPUs.
 
Yeah, clearly it's confused the eff out of my old ***.
 
Yeah, it's become a cluster f to try to decipher. Happy for the features, but it is getting confusing, and game devs are not even consistent about if/where they put the toggles. Some are under display and some under graphical features. For most of us that doesn't really matter, since we're going to hunt and tweak wherever anything is anyway, but I have noticed it.
 
They showed this off in the Alan Wake 2 trailer, so it is massively improving the IQ and response time in 'fully ray-traced' scenarios.

One of my big complaints with PT in CP2077 was the smeariness of the fences and barbed wire.
 

Attachments: 1692814975172.png (1.8 MB), 1692815126378.png (634.2 KB)
Separate Frame Generation and DLSS 3 in your mind. DLSS 3 works on all RTX GPUs, RR works on all RTX GPUs, but Frame Generation only works on RTX 40 GPUs.
If I'm not mistaken, DLSS 3 = DLSS 2 + Frame Generation (only on RTX 40), but DLSS 3.5 = DLSS 2 + AI denoising on pre-RTX 40 cards, and DLSS 2 + Frame Generation + AI denoising on RTX 40 (and up?).
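To untangle the naming, the support matrix described in this thread boils down to something like this quick lookup (purely illustrative Python; the feature names and the `supported_features` helper are made up for the example, not an NVIDIA API):

```python
# DLSS 3.5 feature support as described above (illustrative only).
FEATURES = {
    "Super Resolution":   {"RTX 20", "RTX 30", "RTX 40"},  # the classic DLSS 2 upscaling
    "Ray Reconstruction": {"RTX 20", "RTX 30", "RTX 40"},  # the new AI denoiser in DLSS 3.5
    "Frame Generation":   {"RTX 40"},                      # RTX 40 only, per NVIDIA
}

def supported_features(gpu_generation: str) -> list[str]:
    """Return which DLSS 3.5 components a given RTX generation can use."""
    return [name for name, gens in FEATURES.items() if gpu_generation in gens]

print(supported_features("RTX 30"))  # ['Super Resolution', 'Ray Reconstruction']
print(supported_features("RTX 40"))  # all three
```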

There have been rumors that frame generation could be coming to RTX 30, but I don't think so.
 
I don't think so either, because only the RTX 4000 series has the Optical Flow Accelerators needed for Frame Generation.
 
It's possible, based on efficiency of code. Remember, those Tensor Cores, or general compute cores, on the Nvidia cards are in effect programmable processors able to do all sorts of code calls. That's one of the reasons these cards (well, the 3000 series) were so good at mining. So Nvidia COULD... but I doubt they will. They tend to abandon their older-generation cards pretty quickly.
 
Brent mentioned this in his first RTX 4090 Founders Edition review, along with a bit more detail in general about all the new stuff in the 40 series, which I strongly recommend for a read or re-read, as I've done on multiple occasions. The short of it is that even though Ampere is capable of it, Ada is optimized for it, and it's unlikely even the best code could ever get a 3090 Ti to compare to a 4090, and so on.

https://www.thefpsreview.com/2022/10/11/nvidia-geforce-rtx-4090-founders-edition-review/
 
I don't think so either, because only the RTX 4000 series has the Optical Flow Accelerators needed for Frame Generation.

Negative, all RTX GPUs of every series have Optical Flow Accelerators via the Tensor Cores.

The difference is simply that the RTX 40 series is much faster and more accurate.
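For anyone curious what the optical flow piece actually does for Frame Generation: the idea is to estimate per-pixel motion between two rendered frames and warp toward an in-between point in time. Here's a very rough CPU-side illustration using OpenCV's Farnebäck optical flow; it has nothing to do with NVIDIA's hardware OFA or the actual DLSS implementation, it's just the general concept:

```python
import cv2
import numpy as np

def generate_midpoint_frame(prev_bgr, next_bgr):
    """Rough illustration of flow-based frame interpolation: estimate motion
    between two frames, then warp the first frame halfway along that motion.
    Real Frame Generation is far more sophisticated and runs on dedicated hardware."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: per-pixel (dx, dy) motion from prev to next.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp: sample the previous frame half a motion step back,
    # the usual cheap approximation for a halfway frame.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)

# Toy frames: a bright square that moves 8 pixels to the right.
frame_a = np.zeros((120, 160, 3), dtype=np.uint8)
frame_b = np.zeros((120, 160, 3), dtype=np.uint8)
cv2.rectangle(frame_a, (40, 40), (70, 70), (255, 255, 255), -1)
cv2.rectangle(frame_b, (48, 40), (78, 70), (255, 255, 255), -1)

midpoint = generate_midpoint_frame(frame_a, frame_b)
# The square should appear roughly halfway between its positions in frame_a and frame_b.
cv2.imwrite("midpoint.png", midpoint)
```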
 