NVIDIA Unveils DLSS 5, to Help Achieve Photorealistic Lighting While Honoring the Artist’s Original Intent and Image Quality

Peter_Brosdahl (Moderator, Staff member)
NVIDIA has introduced DLSS 5, its next iteration of upsampling technology, which is now aimed at rendering photorealistic images. DLSS 5 will utilize a real-time neural rendering model to “infuse pixels with photoreal lighting and materials,” and NVIDIA claims it is the biggest breakthrough since it first debuted real-time ray tracing in 2018 and the […]

See full article...
 
I think NVIDIA needs a new acronym for its AI-derived technologies. It got bad enough with frame gen, but now this?

From what I've seen so far, it looks great in some games, different but not necessarily better in others, and worse in the rest. Lots more tweaking to do. The big question, other than IQ, is how it will perform on current hardware. Dual RTX 5090s are out of the question, so what will the minimum requirements be?
 
So is this a case where you might need two GPUs - one for graphics, and one for AI - kind of like the old PhysX setups?
 
Hey, two 5090s is just a booster. It's better if you have one 5090 for rendering and one Blackwell 6000 for AI work.
 
So is this a case where you might need two GPUs - one for graphics, and one for AI - kind of like the old PhysX setups?
Why are people so fixated on the dual RTX 5090? It's a proof of concept, so to speak; there's lots of work and optimization to do before release. I think NVIDIA rushed to show it too early, but it was all it had to show PC gaming-wise. No mention of next-gen gaming cards at all, not even in the pipeline.
 
Yep, their goal is to scale it down and presumably have it supported on other cards, and on single GPUs at that. Still no word on whether it'll go beyond the 40 or 50 series, since both share similar AI tech, but it could even make it to the 30 series.

However, what I find is being hugely missed is that they're running two GPUs. AFAIK NVIDIA killed that off when the RTX 20 series launched. Sure would be fun if the driver code for that got leaked and folks started running their own dual NV GPU setups for gaming. We know DX supports multi-GPU, but nobody uses it.
 
However, what I find is being hugely missed is that they're running two GPUs. AFAIK NVIDIA killed that off when the RTX 20 series launched. Sure would be fun if the driver code for that got leaked and folks started running their own dual NV GPU setups for gaming. We know DX supports multi-GPU, but nobody uses it.
It's the same as using an Intel GPU to do XeSS MFG (or something) separately. It's not in the render pipeline the way multi-GPU has been in the past.



Honestly I'm surprised that typically tech-oriented people are losing their collective minds over this, and in general attacking the positive coverage that Digital Foundry posted.

It's like they skipped the whole 'proof of concept' part, the whole 'tunable by developers' (which implies that it'll probably be tunable by end-users too), and went straight to the 'AI slop' refrain.

It sounds like how every AMD Radeon acolyte went ballistic when Nvidia released RTX and DLSS: years of tearing down a pair of technologies that paved the way for modern gaming.

And they're doing it again.



So here's the thing. And I dare not post this on Reddit!

Nvidia has opened up a vector for using ..."AI"... to improve lighting using in-game assets. This I am down for. Like anything 'AI', there are a lot of variables involved - models, context building - yet I can see this approach opening up a tremendous amount of fidelity advancement that simply wouldn't be possible outside of unobtainable brute-force approaches.

And realistically, brute force would just be silly in a lot of the cases that were showcased for DLSS 5. Take the forest/wooded-area overview clips: we still can't do that with current hardware; we're not even close to that level of path-tracing performance.
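
To put "not even close" in perspective, here's my own back-of-envelope math; the sample and bounce counts are assumptions on my part, not anything NVIDIA has published:

```cpp
// Rough ray budget for a naively path-traced 4K frame at 60 fps.
// The per-pixel sample and bounce counts are assumed; even at these
// modest settings the image would still be noisy without denoising.
#include <cstdio>

int main() {
    const double pixels  = 3840.0 * 2160.0; // 4K frame
    const double samples = 8.0;             // samples per pixel (assumed)
    const double bounces = 4.0;             // ray segments per sample (assumed)
    const double fps     = 60.0;

    const double raysPerFrame  = pixels * samples * bounces;
    const double raysPerSecond = raysPerFrame * fps;

    std::printf("Rays per frame:  %.2e\n", raysPerFrame);   // ~2.7e8
    std::printf("Rays per second: %.2e\n", raysPerSecond);  // ~1.6e10
    return 0;
}
```

That's tens of billions of ray segments per second before any shading, which is exactly why current games lean on one or two samples per pixel plus aggressive denoising instead.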

Now imagine a developer (or a modder!) feeds the model the appropriate context of what said wooded area should look like and lets it go to town. Same for a face, or a car model, or an explosion, whatever.

Why is it hard to see the potential here?
 
Yep, their goal is to scale it down and presumably have it supported on other cards, and on single GPUs at that. Still no word on whether it'll go beyond the 40 or 50 series, since both share similar AI tech, but it could even make it to the 30 series.

However, what I find is being hugely missed is that they're running two GPUs. AFAIK NVIDIA killed that off when the RTX 20 series launched. Sure would be fun if the driver code for that got leaked and folks started running their own dual NV GPU setups for gaming. We know DX supports multi-GPU, but nobody uses it.
Multi-GPU requires developer support. No one wants to implement it, for some reason.
 
Multi-GPU requires developer support. No one wants to implement it, for some reason.
For sure, but it's interesting how the specific workloads were delegated here. It's definitely nothing like SLI/Crossfire; it reminds me more of how you could dedicate a card to PhysX back in the day via the NVIDIA Control Panel before it became completely outdated. I ran some setups doing this, and there were impressive gains for a time until NVIDIA deprecated it.

It would be awesome if DLSS functions, as demoed here, could be handled the same way, but I know NV would never allow that to happen since it would kill certain GPU sales. This could be done at the driver level, with developers only needing to support DLSS features and having nothing to do regarding multiple GPUs, since that could be managed by the NVIDIA App or Control Panel.
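
For anyone curious what that developer support actually looks like, here's a minimal sketch of the DirectX 12 explicit multi-adapter path; it's my own illustration, not anything from NVIDIA's demo, and a real engine would still have to split the work and copy resources between the devices itself:

```cpp
// Enumerate every hardware adapter and create a D3D12 device on each.
// This is only the first step; scheduling work and sharing resources
// across the devices is entirely on the application.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue; // skip WARP

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::wprintf(L"GPU %u: %ls\n", i, desc.Description);
            devices.push_back(device);
        }
    }
    std::wprintf(L"Usable GPUs: %zu\n", devices.size());
    return 0;
}
```

From here the engine would need separate command queues per device and explicit cross-adapter copies, which is exactly the work nobody wants to sign up for.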
 
I think the Lossless Scaling tool lets you use a second card to do the upscaling/frame gen.
 
I have a feeling that the dual 5090 setup was just for bragging rights. It wouldn't be as impressive if it just used an RTX 5060 for DLSS 5. We'll know for sure in a few months.
 
I am not against the idea... crafting an interactive ai 3d model based on a phrase... yeah, I know that will only be used for gaming... I wouldn't use it... oh, you mean... and it would look like Jessica Alba...


Maybe I do support this..
 
Honestly I'm surprised that typically tech-oriented people are losing their collective minds over this, and in general attacking the positive coverage that Digital Foundry posted.

I see a few camps on this:

A) You've got the camp that vehemently despises anything AI-related, and thus hate it just because it is inherently AI.
B) You've got the camp that are OK with AI-rendered things, but hate it because it just looks bad, or is inconsistent.
C) You've got the camp that are neutral, or will wait to hold judgment until its final form, realizing DLSS 1 was pretty bad at inception, but DLSS Upscaling has grown over time to be good now, so potentially it will get better.
D) You've got the camp that loves it, seeing the potential for better image quality in games.

I'm of the "wait and see" camp. I see no need to directly condemn something before it is in its final form, fully implemented, fully tested in practice, and we can see how it is implemented in-game. I am also not going to praise it for the same reason. I see the potential, and I'm curious from a gamer and tech enthusiast perspective. I see the positives, and I see the potential negatives, such as graphical errors, inconsistencies, and debates over artistic style.

I think people are perhaps missing the potential this could have for older games, which have poor image quality, lighting, or models simply because they are older games with weaker graphics capabilities. This could, in a way, almost be like remastering an old game to make it look better. It might actually make replayability of older games a thing, kind of like what NVIDIA RTX Remix was supposed to be. I see some potential there.

I do agree with the opinion that it should not be called DLSS 5; it should not be wrapped up into the DLSS technology package, but instead be called its own thing, with a unique name, and not tied into DLSS or its feature set. Make it a driver control panel option, even, and definitely an option in-game to toggle on/off. As long as it is optional and the gamer's choice.

From the DF article, it sounds like it is tied to the Frame Generation option. Therefore, if you don't want it enabled, it sounds like you can just not use Frame Gen and it won't be enabled anyway; you have to have Frame Gen enabled for it to work. Since a lot of people don't like Frame Gen anyway (me included), most probably won't have it enabled, and hopefully it won't be enabled by default in-game.

If it isn't enabled by default in-game, and is an option you manually have to enable in-game with a toggle, and defaults to OFF, then I don't see the issue with it being there as an option.

Key points: Off By Default, Toggle Option, you have to choose to Enable, A gamer's personal choice to enable or disable it.


As for DF, they were just reporting on this new feature, and don't deserve the hate they have been given for reporting a new feature shown at GTC. Would people rather not be informed? When tech sites get attacked for reporting on new technologies and showing them off, it shows people's extreme bias and discourages further coverage. That is not cool; don't attack the reporters.

Now, can people be a little more level-headed when new things are announced? Or is the default to just 'rage against the machine' with every new announcement now? Calm down, see how this develops.
 
As for DF, they were just reporting on this new feature, and don't deserve the hate they have been given for reporting a new feature shown at GTC. Would people rather not be informed?
I agree with what you wrote, this quote being the exception. And I strongly encourage trolls to stop threatening the safety of the DF crew. They must be putting crazy in the food supply now. People's cheese slides off the cracker over the most minor of disagreements.

DF 100% deserves the massive blowback they are receiving over heavily biased reporting. They have failed repeatedly over the years to choose their journalistic integrity over glazing companies to maintain access, sometimes exclusive access. That's the type of access Nvidia threatened to remove if Gamers Nexus and Hardware Unboxed would not take "editorial direction". I think they have blackballed GN?

You asked if people do not want to be informed. What they did in their initial reaction video was more infomercial than information. It could have come directly from Nvidia marketing and advertising, and many thousands of their viewers called them out on it. The video got ratio'd hard. In their rushed and awkward damage-control follow-up discussion, with Alex weighing in, it is apparent this is a major learning experience for them. You can see they are all haggard and distressed, downright glum even. Their takeaway from this SNAFU should be that your relationships with the companies and with the viewers have to be carefully balanced, or you fall off the tightrope and right into the pool of sharks with laser beams on their heads.

But this too shall pass, and the channel will keep growing. LTT has shown time and again that drama is only a temporary setback, and the engagement you get from it is a long-term net positive.
 
I am not against the idea... crafting an interactive ai 3d model based on a phrase
Not just based on a phrase - or even not at all, but could be I guess?

'Context' in this case is whatever is fed to the model, right? So if you design a model to take 3D and texture assets and apply lighting, then you can have the model enhance the lighting of whatever those assets are used in.

That's how I'm seeing this potentially be used to good effect. Obviously the whole 'beautify' filter being shown off here is a step too far, but by the gods they got everyone talking about it!
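
Just to illustrate what I mean by 'context' - and this is purely hypothetical, nothing here reflects NVIDIA's actual interface - this is roughly how I picture the engine handing scene data plus developer guidance to the model:

```cpp
// Hypothetical sketch: "context" is just whatever the engine feeds the model,
// e.g. per-pixel G-buffer data plus developer-supplied guidance.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Per-pixel inputs a neural relighting pass might plausibly consume.
struct GBufferSample {
    float albedo[3];
    float normal[3];
    float depth;
    float roughness;
};

// Developer-supplied guidance: "what should this scene feel like?"
struct SceneContext {
    std::string artDirection;            // e.g. "overcast boreal forest, soft light"
    std::vector<std::uint8_t> reference; // optional style/lighting reference image
};

// Stand-in for the model; a real implementation would run inference on the GPU.
std::vector<float> relight(const std::vector<GBufferSample>& pixels,
                           const SceneContext& context) {
    (void)context;                                      // unused in this stub
    return std::vector<float>(pixels.size() * 3, 0.5f); // flat grey RGB output
}

int main() {
    SceneContext ctx{"overcast boreal forest, soft light", {}};
    std::vector<GBufferSample> frame(4); // tiny stand-in "frame"
    const auto rgb = relight(frame, ctx);
    std::printf("Relit %zu pixels\n", rgb.size() / 3);
    return 0;
}
```

The point isn't the plumbing; it's that the developer (or modder) controls what goes into that context.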
 
I'm of the "wait and see" camp. I see no need to directly condemn something before it is in its final form, fully implemented, fully tested in practice, and we can see how it is implemented in-game. I am also not going to praise it for the same reason. I see the potential, and I'm curious from a gamer and tech enthusiast perspective. I see the positives, and I see the potential negatives, such as graphical errors, inconsistencies, and debates over artistic style.

Yeah, same here. I'm excited by the possibilities and that's generally my perspective on new technologies, which is why I used DLSS and RTX as examples. I'm not blind to how this technology will be misused either, but I prefer to hope that the benefits will outweigh the downsides.

I think people are perhaps missing the potential this could have for older games, which have poor image quality, lighting, or models simply because they are older games with weaker graphics capabilities. This could, in a way, almost be like remastering an old game to make it look better. It might actually make replayability of older games a thing, kind of like what NVIDIA RTX Remix was supposed to be. I see some potential there.

I just played through Mass Effect: Legendary Edition (the slightly refreshed original trilogy) using DSR scaling at 4x, 7680x3200 in my case. I found DLDSR at 2.25x (5760x2400) to not be that effective at first, and 4x to not really be any better - I just turned the setting up to max and lived with it because I was more focused on playing the game.

And I realized, before this DLSS 5 announcement, that what I really needed was an 'AIAA' (AI anti-aliasing) that could handle the pixel crawl from all of the high-contrast vertical and horizontal lines in the Mass Effect games. They all suffered from the same problem: character models and anything with detail looked great with DSR, but DSR could do nothing for the classic aliasing problem. Instead, what I think could work would be a model that detects aliasing and then applies an even higher supersampling and blending method, maybe with a transformation, kind of like how 3dfx did it originally with their 4x SSAA that rotated the sample grid 45 degrees before combining.
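
For the curious, here's a quick sanity check of those DSR numbers (assuming a 3840x1600 native panel, which is just my setup) plus one common 4x rotated-grid sample pattern in the spirit of the old 3dfx approach; the exact offsets 3dfx used may have differed:

```cpp
// DSR/DLDSR factors scale the total pixel count, so each axis scales by
// sqrt(factor). Also prints a typical 4x rotated-grid SSAA sample pattern.
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const int baseW = 3840, baseH = 1600; // assumed native resolution

    for (double factor : {2.25, 4.0}) {
        const double axis = std::sqrt(factor);
        std::printf("%.2fx DSR -> %dx%d\n", factor,
                    int(baseW * axis), int(baseH * axis)); // 5760x2400, 7680x3200
    }

    // Offsets in pixel units from the pixel center; the rotated grid gives
    // distinct coverage steps on near-vertical and near-horizontal edges.
    const double rgss[4][2] = {
        { -0.375,  0.125 }, { 0.125,  0.375 },
        {  0.375, -0.125 }, { -0.125, -0.375 },
    };
    for (const auto& s : rgss)
        std::printf("sample offset: (%+.3f, %+.3f)\n", s[0], s[1]);
    return 0;
}
```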

And of course, Mass Effect would look incredible with a light touch of DLSS5 lighting!

I do agree with the opinion that it should not be called DLSS 5; it should not be wrapped up into the DLSS technology package, but instead be called its own thing, with a unique name, and not tied into DLSS or its feature set. Make it a driver control panel option, even, and definitely an option in-game to toggle on/off. As long as it is optional and the gamer's choice.

This I agree with wholeheartedly. I don't blame Nvidia for leveraging their DLSS branding power, but yeah, it should be a separate 'track' like Framegen is, and they should have made it excruciatingly clear that it can be toggled, perhaps announcing that it can be toggled with a keyboard shortcut.
 
I said ever since AI became a thing that it would eventually complement/replace RT/PT. Brute force just doesn't cut it, even if you only ray trace some parts. This seems like the natural evolution. The thing is, NVIDIA went too far with the overuse of Botox in its first demo.
 
I said ever since AI became a thing that it would eventually complement/replace RT/PT. Brute force just doesn't cut it, even if you only ray trace some parts. This seems like the natural evolution. The thing is, NVIDIA went too far with the overuse of Botox in its first demo.

I think we're moving toward a new 'paradigm' of rendering; like variable-rate shading, but on steroids. It will be even better if they can support a wide range of gaming modes as well as enforce deterministic results, so that things don't look wildly different from one scene to another.
 
At the end of the day, my opinion is that one should not be so focused on the journey of how the graphics were achieved; rather, focus on the output/final result of those graphics, and if it's a good game.

We can always judge the output and tell if it has errors or looks bad. But as for the journey of how the game is developed and those graphics are achieved, let the developers cook it up however they want; it is, after all, their game and their intent.

As the saying goes, Vote with your Wallet. If you don't like the game in its final form, well, don't play it. If you do like what you see, then just enjoy your game. It's not that big of a deal, really; it's just a game.

From a tech-enthusiast and review-site perspective, our responsibility is to test it and show you expected performance and image quality so that you can make an informed purchasing decision. I'm going to take a step back when evaluating these things and just inform you how the hardware is operating with it and what it's doing, so you know what you are buying. I'm not going to make the decision for you; instead, I will provide information, and YOU make the decision for yourself.
 