NVIDIA Unveils DLSS 5, to Help Achieve Photorealistic Lighting While Honoring the Artist’s Original Intent and Image Quality

At the end of the day, my opinion is that one should not be so focused on the journey of how the graphics were achieved; rather, focus on the final result of those graphics, and on whether it's a good game.

We can always judge the output and tell if it has errors or looks bad. But as for the journey of developing the game and achieving those graphics, let the developers cook it up however they want; it is, after all, their game and their intent.

As the saying goes, Vote with your Wallet. If you don't like the game in its final form, well, don't play it. If you do like what you see, then just enjoy your game. It's not that big of a deal, really; it's just a game.

From a tech-enthusiast and review-site perspective, our responsibility is to test it and show you the expected performance and image quality so that you can make an informed purchasing decision. I'm going to take a step back when evaluating these things and just explain how the hardware operates with it and what it's doing, so you know what you are buying. I'm not going to make the decision for you; instead, I will provide information, and YOU make the decision for yourself.

Thing is, people bitch about Nvidia every day, all day long: they don't care about gamers, fake frames, ****ty upscaling, skyrocketing prices, and the list goes on.

Yet Nvidia has more than 90% market share in gaming cards. People are not voting with their wallets.
 
More information has been revealed about how it works:


It is indeed just taking a 2D frame plus motion vectors and applying generative AI to it. It does this through the Frame Generation algorithm, hence the need to have Frame Gen enabled. It's utilizing the "fake frames" generation capability of Frame Gen to apply generative AI to a frame, thus inherently changing the image. In other words, Frame Gen is changing the image into an AI-rendered frame. It's really more akin to a Frame Gen technology, then, which is also why it is tied to the DLSS package.
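For anyone curious what "a 2D frame plus motion vectors" means in practice, here is a minimal illustrative sketch. To be clear, none of this is NVIDIA's actual pipeline; the "generative" pass here is just a contrast-boosting stub standing in for a real model. The structure is the point: warp the previous frame along its motion vectors, then let a model repaint the warped result into a new image.

```python
import numpy as np

def warp_with_motion_vectors(frame, motion):
    # frame:  (H, W) grayscale image, values in [0, 1]
    # motion: (H, W, 2) per-pixel (dy, dx) offsets in whole pixels
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pull each pixel from where its motion vector says it came from.
    src_y = np.clip(ys - motion[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1).astype(int)
    return frame[src_y, src_x]

def toy_generative_pass(frame):
    # Stand-in for the generative model: just boosts contrast.
    # The point is that the output is a *new* image, not the input.
    mean = frame.mean()
    return np.clip((frame - mean) * 1.5 + mean, 0.0, 1.0)

def generate_frame(prev_frame, motion):
    # Frame-gen-style step: warp, then let the model repaint the result.
    return toy_generative_pass(warp_with_motion_vectors(prev_frame, motion))
```

The takeaway from the sketch: the final image is whatever the model emits, which is exactly why people describe the output as an AI-rendered frame rather than the game's own frame.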

This is really just taking Frame Generation to the next level. That's what this is, Frame Gen 2.0, basically.

There's a lot more in the video, so I suggest watching it.

Now that more information has been revealed as to how it works, I am less impressed with it. Based on this new information, it is disruptive to the game's art style and the developer's intent. My opinion is shifting toward this being a negative feature for gaming overall. As with Frame Gen, I don't see it as useful. However, and I stress, in very unique circumstances it could make an older game look, well, different/better? Maybe, but maybe not. Maybe, like Frame Gen, it will have too many errors and inconsistencies to be useful.

Now, all of this opinion is possible because new information has come to light. I only wish people would be a little more level-headed and would have waited for more information before drawing conclusions, rather than the absolutely sensationalized rage posting I've seen about this on social media. People need to step back, calm down, take a moment to understand what it is, how the technology works, and what it is doing, and then come to rational conclusions. That is what I have tried to do. Now that I've seen what it is actually doing, I come to this conclusion and find it very problematic for gaming.
 
I'm getting sick of this "game art style and developer intent" argument. Which one is the developer's intent? 4K ultra settings with path tracing? Low settings at 720p/30 fps? Is anything else disruptive to the art style? Does a Switch version disrupt the intended look? Does a high-end PC with an ultrawide monitor?
 
I'm getting sick of this "game art style and developer intent" argument. Which one is the developer's intent? 4K ultra settings with path tracing? Low settings at 720p/30 fps? Is anything else disruptive to the art style? Does a Switch version disrupt the intended look? Does a high-end PC with an ultrawide monitor?


I have time-stamped this; click the video. You will notice Grace: the developer's intent is to convey an emotion through the character on screen for the scene in question, considering the storyline. The AI-generated version changes the intent the developer had for that character in that scene, simply by changing the look of the character. Watch Daniel explain it; watch that whole section and his explanation.

To directly answer your question, the rendered frame is the intent; anything AI-generated by NVIDIA would not be the developer's intent.
 
Yeah, this is a very badly conducted reveal. I mean, we knew that, but it turns out that most assumptions about how it is implemented and the level of control are... well, wrong.

We can only hope that Nvidia is taking the feedback seriously. At point 5 of Daniel's video, it looks like that's accounted for at a minimum, but there's still some work to be done. And some trust to be earned back.
 
Well, people who claim it's just an Instagram filter are spot on... kind of.

It's clear that DLSS 5 takes the frame and "reconstructs" the image, adding detail and modifying the lighting, contrast, and color. It does NOT mess with the polygons, but it can alter the textures, and it does so in real time.

Compare that to the original DLSS, which reconstructs the frame based on what the original image looks like; the final output should look at least as good as, if not better than, the original rendered at, say, 4K. DLSS 5, on the other hand, doesn't appear to take the original frame as its target, but rather what the frame should look like with "better/more realistic" lighting and textures. In the end, it renders a completely different "enhanced" frame. Whether the final frame looks the way the creator wants remains to be seen.

IMO the subsurface scattering effect alone makes a huge difference in IQ, but let's get rid of the botox.

The fall season release seems rushed; I wouldn't be surprised if it's pushed back.
 
DLSS 5 kinda reminds me of Clippy: "It looks like you are typing a letter. Would you like me to help you?"

The AI algorithm looks at and interprets what the image is trying to do with the materials, and then basically says: let me re-interpret that and do it for you.

The problem is that since it's the same algorithm for every game, there will be a 'sameness' across games; they will all start to share the same style and feel, removing uniqueness.
 
This is supposed to be a tool and a demo of the tool that developers can use... right?
 
It will all be AI slop, good slop and bad slop. AI all over the place, through the entire process. Meh, whatever; it's a matter of choosing. People will buy all kinds of AI slop by the billions, no problem; just look at Marvel "movies".
But I don't have to watch them, so I don't.
 
So am I to understand that it is more efficient, compute-wise, to re-interpret and generate a new frame between two actual frames than to simply produce a frame interpolated between the frames on either side (for a given refresh-rate range)?
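To make the comparison in that question concrete, here is a toy illustration (my own simplification, not how either technique is actually implemented): classic interpolation is a cheap per-pixel blend of two real frames, while the generative approach runs a model over the in-between image, costing more compute but no longer being bound to the source pixels.

```python
import numpy as np

def interpolated_frame(frame_a, frame_b):
    # Classic in-between frame: a cheap per-pixel blend of two real frames.
    # It can only show content that already exists in its inputs.
    return 0.5 * frame_a + 0.5 * frame_b

def generative_frame(frame_a, frame_b, model):
    # Generative alternative: start from the blend, then run a model over
    # it. More compute per frame, and the output can contain detail that
    # neither source frame had.
    return model(interpolated_frame(frame_a, frame_b))
```

So in terms of raw efficiency, pure interpolation almost certainly wins; the pitch for the generative version is image quality, not cost.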

Or is this an example of team Green attempting to create artificial demand by steering the market toward one of their strategic corporate technologies (i.e., all things AI)?
 
It's more efficient than path-tracing to a cinematic level, don't you think?

The real problem here is that they used a recognizably 'AI slop' model for the demonstration, and people are rightly taking offense at the demo results.

Now I would expect Nvidia to do the work on the backend to allow tuning and context injection so that both developers and modders can utilize the tooling to get an end result that is actually just a straight improvement over the base render.
 
I mean, if we keep going down this path - graphics programming won't be anything other than a series of AI prompts.
 
What boggles my mind is how much juice this takes to run. What will be needed to support this AND raster performance at the same time on a single card? I'm not against it per se... just trying to understand the impact.

Could I run a smaller Nvidia card to handle the AI workload and render with my 7900 XTX? Questions abound...
 
I think part of the negativity may be due to the seemingly ham-fisted, AI-everything approach Nvidia is taking. The "don't worry, we will take care of everything with our AI, just come to daddy leather jacket" attitude.
That said, is there even another way to advance visuals, ray tracing, etc. as far as this has?
Well, I don't know, and neither does AMD or Intel, it seems.
I do know GPU chips are reaching absurdity in power draw and, in some ways, complexity, which is a problem for price.
The same question arises: is there another way? Same answer, I guess.
There have to be other ways; they would just likely require sending MS and DirectX and whatever other bull to the pits of hell.
 
Could I run a smaller Nvidia card to handle the AI workload and render with my 7900 XTX? Questions abound...
Hahaha, like Nvidia would ever allow you to mate their product with an inferior brand.

You'll be lucky if it doesn't require whatever the latest generation is, just because they can do that and force sales.

(you have a good point though, despite my joking)
 
I think part of the negativity may be due to the seemingly ham-fisted, AI-everything approach Nvidia is taking. The "don't worry, we will take care of everything with our AI, just come to daddy leather jacket" attitude.
That said, is there even another way to advance visuals, ray tracing, etc. as far as this has?
Well, I don't know, and neither does AMD or Intel, it seems.
I do know GPU chips are reaching absurdity in power draw and, in some ways, complexity, which is a problem for price.
The same question arises: is there another way? Same answer, I guess.
There have to be other ways; they would just likely require sending MS and DirectX and whatever other bull to the pits of hell.
Apple would like to present the M series of SoCs. I'm really looking at their stuff now, far more than I have before.
 
Apple would like to present the M series of SoCs. I'm really looking at their stuff now, far more than I have before.
Apple seems to only provide, or only be able to provide, token gestures toward gaming.

Which is interesting because they could've partnered with Valve and had this settled already... but I assume neither company wants to enter into a partnership where they'd be put into a position to share revenue?

Either way, I'm pretty much all in on Apple for anything mobile; really, anything that doesn't need to run Windows (gaming) or Linux (servers), and both of those are actually markets that Apple could serve but chooses not to.
 
That said, is there even another way to advance visuals , raytracing etc as far as it has?
I speculated earlier that perhaps our concept of 'rendering' will be upended by being able to use generative models on a per-frame basis.

My theory is based on the idea of combining all of the technologies we've seen so far, to the point where instead of the 'generative' part being on the end of the pipeline like DLSS5 is, it is moved to the front.

So, you'd give the 'render engine model' all of the context - the game assets - and then let it figure out how much traditional 'work' (i.e., path tracing, shading, post-processing) needs to be done to output the next frame at whatever quality and resolution level is specified. The goal would be to balance how much processing power - or even power draw at the wall - is needed to render each frame.

And the reason to make the change is that you'd want something that could scale to any base hardware anywhere, and you'd want to minimize the latency penalty of using these models for real-time interaction. Gaming is one major interaction, but VR and AR are definitely on the plate too (if not also part of gaming).
 
Which is interesting because they could've partnered with Valve and had this settled already... but I assume neither company wants to enter into a partnership where they'd be put into a position to share revenue?
Why would Apple partner with Valve when Apple brings in almost 10x the revenue that Valve does on App Store gaming alone, let alone everything else Apple generates revenue with? Apple could buy Valve outright five times over just with the cash they have on hand, if they wanted to. It seems it would be the other way around, with Valve wanting to negotiate an entry into the App Store.**

Apple is the biggest gaming company out there, by far. It's funny to see everyone here write them off just because they don't consider mobile stuff gaming, but it certainly makes money.

By only making token gestures toward gaming, Apple pulled in more video game money than the entire PC gaming segment combined. And I'll fully admit that just because you make a lot of money doesn't necessarily mean you are good at what you do - but money talks pretty loudly, and even though I may not care much for mobile gaming, I can't argue against the facts there.

** Don't get me wrong, I do ~not~ think Valve should do this - Valve has a very good niche in PC gaming going on, and I respect that they haven't tried to grow just for growth's sake and have stuck to what they are good at. But Apple has almost no incentive to join forces with Valve to get into the PC market, while Valve could have a big financial incentive to try to get into the App Store and expand their reach.
 