AMD Criticizes NVIDIA’s Approach to AI for Gaming

Tsing

NVIDIA has made huge waves in PC gaming with its AI-powered technologies, the latest of which, DLSS 3, introduces another significant advancement in the form of Frame Generation, but the competition doesn't seem as impressed as the average GeForce gamer. Speaking to Japan's 4Gamer, David Wang, AMD SVP of the Radeon Technologies Group, shared his thoughts on NVIDIA's approach to AI for gaming, suggesting that the company hasn't been leveraging it in the best way and arguing that AI should be used for more advanced purposes that go beyond image upscaling (e.g., NPC behavior). AMD's own upscaling solution, FidelityFX Super Resolution, was achieved without an inference accelerator, Wang noted.

See full article...
 
Genuinely baffling that NVIDIA is even in the conversation regarding NPC AI when NVIDIA doesn't make consumer CPUs

Intel CPUs have AI acceleration, while AMD CPUs do not, as far as I'm aware

So I have no idea what the thought process was here

edit: FSR 2 is an inferior solution to DLSS. So again, why is AMD pointing out how inferior its own products are?
 
Feels a little like schoolyard bitching because they're not quite as good as the other guy.
 
Feels a little like schoolyard bitching because they're not quite as good as the other guy.
What AMD said (not spending die area on unnecessary features) would make sense if their GPUs were either better in raster or cheaper (true for RDNA 2, not true for RDNA 3).

For example, the initial versions of Zen had no iGPU, unlike competing Intel products, but AMD made up for it on value/perf. Only with Zen 4 have they invested in a (cheap, 6nm) iGPU.
 
What AMD said (not spending die area on unnecessary features) would make sense if their GPUs were either better in raster or cheaper (true for RDNA 2, not true for RDNA 3).

For example, the initial versions of Zen had no iGPU, unlike competing Intel products, but AMD made up for it on value/perf. Only with Zen 4 have they invested in a (cheap, 6nm) iGPU.
This is a good point.

AMD is taking advantage of nVidia pricing right now and going "just under". I don't think you can make the "selling them stuff they don't need" claim without a significant price or performance difference. AMD isn't presenting either right now - which implies they are either overcharging or using their die space inefficiently.

The even more interesting point - just looking at die sizes: the 7900 XT is 529mm² (5nm) @ 58,000M transistors, the 4080 is 379mm² (4nm) @ 49,000M transistors. So looking at that, you're left scratching your head - for roughly equivalent raster performance, if nVidia is packing its die with stuff you don't need but is coming in at about 2/3 the size... then wtf is AMD doing with all that area? You can't say it's because of the differing process node, because the transistor count tracks the same way. And if die size loosely correlates to manufacturing cost - nVidia has more room to come down on pricing than AMD does if it came to a price war.
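Quick sanity check on those numbers - here's the back-of-the-envelope density math in Python, using the figures quoted above (treat them as approximate):

Code:
# Transistor density from the die areas and counts quoted above.
chips = {
    "7900 XT": (529, 58_000),   # (die area in mm^2, transistors in millions)
    "4080":    (379, 49_000),
}
for name, (area_mm2, mtr) in chips.items():
    print(f"{name}: {mtr / area_mm2:.0f} MTr/mm^2")
# 7900 XT: 110 MTr/mm^2, 4080: 129 MTr/mm^2 -- same ballpark, so the
# area gap isn't explained by process density alone.

So the densities land within roughly 20% of each other, which backs up the point: the node isn't the story, the area is.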

All that said -- I do think Tensor cores are ridiculous on a consumer card. I think DLSS doesn't really require them - FSR more or less proves you don't need AI accelerators to do decent upscaling. I think it's just nVidia pushing them out there to get them into the wild and hoping they become leveraged at some point in the future, where, lo and behold, there's already an installed base of them sitting idle. But there's the risk that by the time someone figures out a handy use for them at the consumer level, the hardware that exists today isn't adequate or proper for whatever the algorithm is in the future, and what AMD says becomes awfully prescient - wasted silicon.
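To illustrate that point, here's a minimal sketch of non-AI upscaling in Python with Pillow - to be clear, this is not AMD's actual FSR algorithm, just the same general family of classic filtering techniques (resample, then sharpen), and the file names are made up:

Code:
# Minimal non-AI upscaling sketch: Lanczos resample + unsharp mask.
# NOT AMD's FSR code -- just an illustration that classic, fixed-function
# image math needs no tensor hardware.
from PIL import Image, ImageFilter

def upscale(path_in, path_out, scale=1.5):
    img = Image.open(path_in)
    w, h = img.size
    # Lanczos resampling: pure filtering, no inference step.
    up = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    # Sharpen to recover edge contrast lost in resampling, loosely
    # analogous in spirit to FSR 1's sharpening pass.
    up = up.filter(ImageFilter.UnsharpMask(radius=2, percent=80))
    up.save(path_out)

upscale("frame_1440p.png", "frame_4k.png")  # hypothetical file names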

I do think there's a place for some modest AI acceleration - Apple even has it on their SoCs to help with things like voice recognition and biometrics. But I don't think every single gaming computer needs the capability to train a full deep neural net, or that we should be inventing excuses to use it for things that don't really need it, like DLSS.
 
I am confuse.

I would have sworn someone told me only the first version of DLSS upscaling was AI based, and that DLSS 2 is algorithmically based, just like FSR. Is this not accurate?
 
I am confuse.

I would have sworn someone told me only the first version of DLSS upscaling was AI based, and that DLSS 2 is algorithmically based, just like FSR. Is this not accurate?
Who knows? All I get is a bunch of marketing buzzword salad about how any version of DLSS really works.

From nVidia:
Powered by dedicated AI processors on GeForce RTX GPUs called Tensor Cores, DLSS 2.0 is a new and improved deep learning neural network that boosts frame rates while generating beautiful, crisp game images. It gives gamers the performance headroom to maximize ray tracing settings and increase output resolutions.
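That said, the broad strokes are public: DLSS 1 used a network trained per game, DLSS 2 is still neural-network-based but swapped that for one general network fed with motion vectors and previous frames, and FSR 2 pulls the same temporal trick with hand-tuned math instead of a network. At a high level, every temporal upscaler looks roughly like this sketch (illustrative Python only, not any vendor's actual pipeline):

Code:
# High-level sketch of a temporal upscaler (the family DLSS 2, FSR 2,
# and XeSS belong to): reproject last frame's high-res history with
# motion vectors, then blend in the new low-res sample. The blend
# weight is where they differ -- DLSS 2 has a neural net pick it per
# pixel, FSR 2 uses hand-tuned heuristics. Illustration only.
import numpy as np

def temporal_upscale(history, current_upsampled, motion_vectors, alpha=0.1):
    # history, current_upsampled: HxWx3 float arrays at output resolution
    # motion_vectors: HxWx2 per-pixel offsets from the previous frame
    h, w = history.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch each pixel from where it sat last frame.
    sy = np.clip((ys - motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    sx = np.clip((xs - motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    reprojected = history[sy, sx]
    # Exponential accumulation: mostly history, a little new data.
    return (1 - alpha) * reprojected + alpha * current_upsampled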

 
Sometimes quick and dirty does the trick. Nvidia isn't doing it that way with DLSS, and that's okay too.
 
The even more interesting point - just looking at die sizes: the 7900 XT is 529mm² (5nm) @ 58,000M transistors, the 4080 is 379mm² (4nm) @ 49,000M transistors. So looking at that, you're left scratching your head -

The RDNA 3 die size is different, as it consists of different parts: the GCD, the actual graphics chip, is 304.5mm² at 5nm, and then you've got the six MCDs at 37.52mm² each at 6nm.

The MCM design may have added extra area/transistors to make it work; I don't really know.
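For what it's worth, those chiplet numbers do add up to the package figure quoted earlier in the thread:

Code:
# GCD + 6 MCDs vs. the ~529mm² total quoted above
gcd = 304.5           # graphics compute die, 5nm
mcd = 37.52           # memory cache die, 6nm
print(gcd + 6 * mcd)  # 529.62 -- matches the 529mm² package figure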
 
The RDNA 3 die size is different, as it consists of different parts: the GCD, the actual graphics chip, is 304.5mm² at 5nm, and then you've got the six MCDs at 37.52mm² each at 6nm.

The MCM design may have added extra area/transistors to make it work; I don't really know.

Yup, at least they are no longer using GloFo 12nm for the uncore stuff like they did on early Ryzens.
 
The RDNA 3 die size is different, as it consists of different parts: the GCD, the actual graphics chip, is 304.5mm² at 5nm, and then you've got the six MCDs at 37.52mm² each at 6nm.

The MCM design may have added extra area/transistors to make it work; I don't really know.
Good point
 