AMD Won’t Launch NVIDIA DLSS Competitor Until It’s Ready for Both Radeon Graphics Cards and Next-Gen Consoles

I really, really want to challenge you on this one. The idea that Intel would have actually chosen to 'hold back' 10nm and 7nm so that they could milk 14nm more seems to be a common complaint, but one that leaves me flabbergasted.

I wasn't suggesting Intel wanted its process node problems on the manufacturing side, or that it wasn't motivated to advance its manufacturing. Far from it. However, I do think Intel was largely apathetic on the desktop, as evidenced by the fact that it only gave us something other than quad cores once AMD was doing it for the same or less money.

Intel was only concerned about performance per watt in the server and mobile markets, with desktop CPUs being scaled-down or scaled-up versions of one or the other, depending on whether we were talking about the HEDT or mainstream market segments. This is also why we've seen largely lateral moves between each generation of desktop part for almost ten years now. At best, we saw low, single-digit IPC improvements from generation to generation, and even those were largely only in specific workloads.

I'm sure that if Intel could have done something about its manufacturing issues, it would have, especially since increasing core counts massively reduces yields and margins on larger monolithic dies.
 
However, I do think Intel was largely apathetic on the desktop, as evidenced by the fact that it only gave us something other than quad cores once AMD was doing it for the same or less money.
As I've understood it, Intel had planned to move the standard desktop part from four cores to eight after Skylake.

And since there wasn't an 'after Skylake' until four or five years later than initially planned, they had to do something else, like putting six, then eight, then ten Skylake cores on 14nm; the first of those we saw around the time Ryzen first hit the market.

I'm somewhat disappointed myself that AMD took so long to get Zen and its platforms stable in the market, and there's still no credible upgrade from my current 9900K for gaming, certainly nothing worth switching platforms over, from either vendor. I picked up the 9900K before Ryzen 2000 / Zen+ platforms became reliably viable, and while I paid a bit more for the CPU, I paid less than I otherwise would have for the board and RAM, as at the time AMD boards at similar price levels were pretty dodgy and Zen was very picky about RAM.

Still, it's clear Intel had to duck and dodge when Ryzen hit. They most certainly ducked and dodged well enough, as somehow they're still in the game on 14nm, with neither high-performance nor high-volume samples of their 10nm tech anywhere to be seen, much less their 7nm tech!
 
I really, really want to challenge you on this one. The idea that Intel would have actually chosen to 'hold back' 10nm and 7nm so that they could milk 14nm more seems to be a common complaint, but one that leaves me flabbergasted.
I don't think Intel would intentionally hold that back, but I think they inadvertently shot themselves in the foot by diverting R&D and talent away from the Foundry race when they felt they were comfortably ahead, and it came back to bite them a decade (or more) later.
 
It could be run on anything, but using the tensor cores processes it much faster and concurrently.

I mean, the difference between adding 1.5 ms to a rendered frame and adding 4 ms is massive, relatively speaking.
Just curious - this isn't meant to be adversarial, so please don't take it that way. But how do you know what the difference is between running it with and without a tensor core?
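For a rough sense of what those numbers mean, here is a back-of-the-envelope sketch in Python; the 16.7 ms (roughly 60 fps) render baseline is an assumed figure for illustration, not something from this thread.

```python
# Rough frame-budget arithmetic: how a fixed per-frame upscaling cost
# changes frame rate. The 16.7 ms baseline (~60 fps of pure rendering)
# is an assumed example value.

BASE_RENDER_MS = 16.7  # assumed render time per frame before upscaling

for upscale_ms in (1.5, 4.0):
    total_ms = BASE_RENDER_MS + upscale_ms
    fps = 1000.0 / total_ms
    print(f"+{upscale_ms:.1f} ms upscaling -> {total_ms:.1f} ms/frame ({fps:.1f} fps)")
```

On those assumptions, the 1.5 ms pass lands around 55 fps while the 4 ms pass lands around 48 fps, and the gap only grows at higher base frame rates, where a fixed cost eats a larger share of a shorter frame.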
 
I don't think Intel would intentionally hold that back, but I think they inadvertently shot themselves in the foot by diverting R&D and talent away from the Foundry race when they felt they were comfortably ahead, and it came back to bite them a decade (or more) later.
Absolutely.

Intel is still financially in a good position, and they're selling everything they make; but they could be doing so, so much better on a number of fronts.
 
Just curious - this isn't meant to be adversarial, so please don't take it that way. But how do you know what the difference is between running it with and without a tensor core?

I can only go off of the technical data provided - I'm just using the peak processing power of the silicon. I thought I saw a comparison somewhere showing the frametime difference with and without tensor acceleration, but I'm not able to find it.
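To make "going off the peak processing power of the silicon" concrete, here is a minimal sketch of that kind of estimate; the per-frame workload and throughput figures below are placeholders, not published DLSS or GPU specifications.

```python
# Naive latency estimate from peak throughput: time ~ work / peak rate.
# Both inputs are hypothetical placeholders, used only to show the arithmetic.

WORK_PER_FRAME_TOPS = 0.05  # assumed tera-ops the upscaling network needs per frame
PEAK_TENSOR_TOPS = 100.0    # assumed peak rate of dedicated matrix/tensor units
PEAK_SHADER_TOPS = 20.0     # assumed peak rate when running on the shader ALUs

def frame_cost_ms(work_tops: float, peak_tops: float) -> float:
    """Best-case time in milliseconds, assuming perfect utilization of the peak rate."""
    return work_tops / peak_tops * 1000.0

print(f"dedicated units: {frame_cost_ms(WORK_PER_FRAME_TOPS, PEAK_TENSOR_TOPS):.2f} ms")
print(f"shader ALUs:     {frame_cost_ms(WORK_PER_FRAME_TOPS, PEAK_SHADER_TOPS):.2f} ms")
```

In practice neither path reaches its peak rate, which is why a measured frametime comparison would still be the more convincing evidence.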
 
Don't forget that presumably every core in the AMD 6000 series chips, or Big Navi, has a hard-coded process path (think of it like a core) dedicated to doing what Nvidia's Tensor cores do. Much like hyperthreading, I haven't seen evidence that using these detrimentally impacts the other performance; though it does lack overall efficiency, I think it is still a good path forward.

Of course we need more games using it. But so far I'm pleased with what I have.
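A toy model of the trade-off described above, assuming (purely for illustration) that an upscaling pass on shared shader ALUs serializes with rendering while a dedicated unit can be pipelined against the next frame; none of these numbers come from AMD or Nvidia.

```python
# Toy comparison of running an upscaling pass on shared shader ALUs versus
# on dedicated units that can overlap with rendering. Numbers are made up.

RENDER_MS = 14.0   # assumed shader time to render the lower-resolution frame
UPSCALE_MS = 2.5   # assumed time for the upscaling pass itself

# Shared ALUs: the pass competes for the same units, so it adds on top.
shared_total = RENDER_MS + UPSCALE_MS

# Dedicated units: upscaling of frame N can overlap rendering of frame N+1,
# so steady-state frame time approaches the longer of the two stages.
dedicated_total = max(RENDER_MS, UPSCALE_MS)

print(f"shared ALUs:     {shared_total:.1f} ms/frame")
print(f"dedicated units: {dedicated_total:.1f} ms/frame (pipelined, steady state)")
```

That missing overlap is roughly what "lacks overall efficiency" amounts to here; whether it shows up as a measurable hit depends on how much the upscaling pass actually contends with rendering work.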
 
Don't forget that presumably every core in the AMD 6000 series chips, or Big Navi, has a hard-coded process path (think of it like a core) dedicated to doing what Nvidia's Tensor cores do.
It's on the very long list of 'forgettable' AMD / ATI features where, by the time any software actually had an implementation, it was for a later and probably incompatible version.

It'd be nice if that weren't the case this time, but given how far behind AMD is with their DLSS 'competitor', well, they'd better get on it!
 
As it stands, AMD would not be my first GPU choice with regard to features.

I understand that for many, many people DLSS and RT was/is pointless, but not for me.

The choice now is raw horsepower vs. raw horsepower + features.
 