What AMD said (that they don't spend die area on unnecessary features) would make sense if their GPUs were either better at raster or cheaper (true for RDNA 2, not true for RDNA 3).
For example, the initial versions of Zen had no iGPU, unlike competing Intel products, but AMD made up for it on value and performance. Only with Zen 4 have they invested in an iGPU (on the cheap 6nm I/O die).
This is a good point.
AMD is taking advantage of nVidia's pricing right now and going "just under". I don't think you can make the "selling them stuff they don't need" claim without a significant price difference or performance difference, and AMD isn't presenting either right now - which implies they are either overcharging or using their die space inefficiently.
The even more interesting point, just looking at die sizes: the 7900 XT is 529 mm² (5nm) @ 58,000M transistors; the 4080 is 379 mm² (4nm) @ 49,000M transistors. So looking at that, you go scratching your head: for roughly equivalent raster performance, if nVidia is packing its die with stuff you don't need but coming in at about 2/3 the size, then wtf is AMD doing with all that area? You can't say it's because of the differing process node, because the transistor count tracks the same way. And if die size loosely correlates with manufacturing cost, nVidia has more room to come down on pricing than AMD does if it came to a price war.
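A quick back-of-the-envelope check, using only the figures quoted above (illustrative, not authoritative; real products mix chiplets and process nodes, which this ignores):

    # Transistor density from the numbers quoted above:
    # (die area in mm^2, transistor count in millions)
    dies = {
        "7900 XT (5nm, per the figures above)": (529, 58_000),
        "RTX 4080 (4nm, per the figures above)": (379, 49_000),
    }
    for name, (area_mm2, mtrans) in dies.items():
        density = mtrans / area_mm2   # million transistors per mm^2
        print(f"{name}: ~{density:.0f} MTr/mm^2")
    # ~110 vs ~129 MTr/mm^2 -- densities are in the same ballpark,
    # so the area gap isn't explained by the node alone.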
All that said -- I do think Tensor cores are ridiculous on a consumer card. I think DLSS doesn't really require them; FSR more or less proves you don't need AI accelerators to do decent upscaling. I think it's just nVidia pushing them out there to get them into the wild, hoping they get leveraged at some point in the future, when, lo and behold, there's already an installed base of them sitting idle. But there's a risk that by the time someone figures out a handy consumer-level use for them, today's hardware isn't adequate or proper for whatever the algorithm turns out to be, and what AMD says becomes awfully prescient: wasted silicon.
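To make the FSR point concrete, here's a minimal sketch of what a purely spatial upscaler boils down to (very loosely FSR-1.0-flavored Lanczos-2 resampling, reduced to 1-D for brevity; the kernel choice and code are my own illustration, not AMD's implementation). The point is that it's ordinary per-pixel multiply-add work that regular shader cores handle fine, with no matrix engines involved:

    import numpy as np

    def lanczos2(x):
        # Lanczos-2 kernel: sinc(x) * sinc(x/2) inside the 2-pixel window
        x = np.abs(x)
        return np.where(x < 2.0, np.sinc(x) * np.sinc(x / 2.0), 0.0)

    def upscale_1d(row, scale):
        n = len(row)
        out = np.empty(int(n * scale))
        for i in range(len(out)):
            src = i / scale                   # sample position in the source
            lo = int(np.floor(src)) - 1
            taps = np.arange(lo, lo + 4)      # 4-tap window around src
            w = lanczos2(taps - src)
            w /= w.sum()                      # normalize the weights
            out[i] = np.dot(w, row[np.clip(taps, 0, n - 1)])
        return out

    print(upscale_1d(np.array([0.0, 1.0, 0.0, 1.0]), 1.5))

DLSS adds a learned temporal component on top of this kind of resampling, which is where the tensor-core argument comes in, but basic upscaling clearly doesn't need it.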
I do think there's a place for some modest AI acceleration - Apple even has it on their SoCs to help with things like voice recognition and biometrics. But I don't think every single gaming computer needs the capability to train a full deep neural net, or that we should be inventing excuses to use it on things that don't really need it, like DLSS.