AMD Could Be Prepping a Flagship Next-Gen GPU Featuring 96 Compute Units with Memory on a 512-Bit Bus

One of the reasons I invested in getting a 5090 in the first place was to try out 3x or 4x frame gen. I found 2x to be interesting and wondered where the ideal threshold was for latency vs. increased reported frames/fluidity. I think 3x is the sweet spot for now, and while yes, native would be great, I've found that native 60-ish plus MFG 3x is a pretty amazing mix when throwing everything but the kitchen sink into max graphical settings.

I also set frame caps that, combined with all this, keep GPU power/temps manageable for optimal TDP/noise levels, and the latency is only just barely noticeable. I know I'm talking about NV tech mostly, but I feel it still somewhat applies to any MFG application.

I can't describe the smile on my face watching a GPU like this hang in the 400 W or less territory while hitting 90-110 FPS, with temps in the 60-65 °C area and fans peaking around 60-70% in a mATX case, when doing DLAA at 4K. I'm no expert on AMD GPUs, but I'm sure there are similar scenarios with FSR4 on a compatible GPU, and if this upcoming flagship comes to pass, we'll see more such stories from users.
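To put rough numbers on that balance, here's a quick back-of-the-envelope sketch in Python. The base frame rate, MFG factor, and cap are illustrative values I picked to make the point, not measurements from any particular setup:

```python
# Napkin math for multi frame generation (MFG): the GPU only renders the
# "base" frames, the rest are generated, so input latency tracks the base
# frame rate while perceived fluidity tracks the presented frame rate.

def mfg_numbers(base_fps, mfg_factor, fps_cap=None):
    presented = base_fps * mfg_factor
    if fps_cap is not None and presented > fps_cap:
        # a presented-frame cap also lowers how fast the GPU has to render
        presented = fps_cap
        base_fps = fps_cap / mfg_factor
    return {
        "rendered_fps": round(base_fps, 1),
        "presented_fps": round(presented, 1),
        "base_frametime_ms": round(1000 / base_fps, 1),  # rough latency floor
    }

# Illustrative values only: ~60 FPS native, MFG 3x, ~110 FPS cap
print(mfg_numbers(60, 3, fps_cap=110))
# Same base frame rate with plain 2x frame gen and no cap
print(mfg_numbers(60, 2))
```

The takeaway is just that with MFG the GPU renders only a fraction of the frames it presents, which is where the power, temperature, and fan headroom comes from.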
 
This is just a framegen artifact - upscaling doesn't incur a noticeable penalty.

True. Framegen is the main culprit.

But even with the framegen penalty, your choice is either 60 FPS as your baseline for both input and fluidity, or using framegen and getting 144 FPS fluidity on top of that 60 FPS input baseline.

Or you can get 144 fps native if you don't use RT, assuming your card can push that natively. Which in some cases it can, especially if you're running 1440p.

I'm kind of poking fun at the 'more fillrate!' side.

Yeah, 8k and up really doesn't have a market yet, and maybe never will.

So yeah, looking at today's installed base, it's not all there yet; but it will be, and we'll all be the better for it.
Dunno if I agree with that. xtor budgets aren't increasing the way they used to.
 
Clarification:

AMD is preparing dual-purpose graphics GCD chiplets that can be used in servers/PCs/APUs/consoles or as standalone GPUs.

The GPUs are being prepared as per Microsoft's order:

  1. AT0 — cloud gaming — 512-bit GDDR7 bus, 192 CU
  2. AT1 — scrapped
  3. AT2 — Series X2 (first party by Microsoft) — 192-bit GDDR7 bus, 72? CU
  4. AT3 — replacement for Strix Halo — 384?-bit LPDDR6 bus, 48 CU
  5. AT4 — Series S2 / handhelds (third party & competition to the Steam Machine) — 128-bit LPDDR5X/LPDDR6 bus, 24 CU
The GPU lineup could be (napkin bandwidth math for these bus widths below the list):
  1. 10090xt — AT0 cut down to a 384-bit GDDR7 bus & 150+ CU
  2. 10080xt — further cut-down AT0/10090xt
  3. 10070xt — AT2 (Magnus), 72 CU
  4. 10060xt — AT3 (Medusa Halo), 48 CU
  5. 10050xt — AT4 (Medusa Premium), 24 CU
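For a sense of what those bus widths mean in raw memory bandwidth, here's a quick sketch. The per-pin data rates are my own assumptions (GDDR7 currently ships around 28-32 Gbps, LPDDR5X around 8.5 Gbps); none of them are part of the leak:

```python
# Memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gbs(bus_width_bits, pin_rate_gbps):
    return bus_width_bits / 8 * pin_rate_gbps

# Assumed pin rates, not from the leak
configs = [
    ("AT0, 512-bit GDDR7", 512, 32.0),
    ("AT0 cut down, 384-bit GDDR7", 384, 32.0),
    ("AT2, 192-bit GDDR7", 192, 32.0),
    ("AT4, 128-bit LPDDR5X", 128, 8.5),
]

for name, width, rate in configs:
    print(f"{name}: ~{bandwidth_gbs(width, rate):.0f} GB/s")
```

Under those assumptions a full 512-bit GDDR7 bus lands around 2 TB/s, which also helps explain the power concern raised a couple of posts down.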
 
Why cut it down? And will we ever actually see that product?
I'm guessing it's cut down from the 512-bit bus. A 512-bit bus could consume a lot of power, so AMD may not want to go full balls to the wall.
 
I'm guessing it's cut down from the 512-bit bus. A 512-bit bus could consume a lot of power, so AMD may not want to go full balls to the wall.
My guess is to differentiate their AI cards from prosumer cards.
 
Microsoft has contracted AMD for an xCloud gaming hardware chip that can virtualize & run 8 Series S2 instances at one go.

Series S2 = PS5 Pro
Series X2 = triple Series S2

AMD is expected to cut that chip down by 25% for the top xx90 card.

So basically the 10090xt should be equal to 6x 9060xt, or 6x PS5 Pro.
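If you take those rumors at face value, the scaling is simple enough to sanity-check. Treat this as napkin math on the thread's own numbers, nothing more:

```python
# Napkin math on the rumored xCloud chip (AT0), using only the figures
# quoted in this thread; none of this is confirmed.

at0_cu = 192      # rumored full AT0
instances = 8     # Series S2 instances the cloud chip is said to run at once
cut = 0.75        # "cut by 25%" for the top consumer card

cu_per_instance = at0_cu / instances   # 24 CU per virtualized instance
consumer_cu = at0_cu * cut             # 144 CU (the lineup above says "150+")
consumer_instances = instances * cut   # 6 instance-equivalents

print(f"CU per Series S2 instance: {cu_per_instance:.0f}")
print(f"Top consumer card: ~{consumer_cu:.0f} CU, ~{consumer_instances:.0f}x Series S2 / PS5 Pro")
```

Whether 24 of the new CUs really lands at PS5 Pro level is anyone's guess; the point is just that the 8-instance and 25%-cut figures line up with the "6x" claim.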

 
Yes, you are right. That's why I put what I did in parentheses.

However, that gamble, for all intents and purposes, is how industries grow and innovate.

AMD couldn't beat Intel at their own game/plan, so they went in a different direction, so to speak.
They'll have to do the same with Nvidia.

The thing with Intel is that they dropped the ball too many times, until AMD caught up with and then surpassed them. I don't see Nvidia making the same mistake.
 
The thing with Intel is that they dropped the ball too many times, until AMD caught up with and then surpassed them. I don't see Nvidia making the same mistake.
They didn't drop the ball; they just stagnated with zero innovation because they didn't have to, until the Ryzen 1000 series. Intel was still pumping out quad-core CPUs. They've been scrambling ever since.

Core Ultra needs some refinement. We'll see with the 300 series.
 
Yeah, 8k and up really doesn't have a market yet, and maybe never will.


Dunno if I agree with that. xtor budgets aren't increasing the way they used to.
Yeah, just like 1080p was never going to be a market... :rolleyes: ;)

4K is quickly becoming the new standard, if only because of consoles. There are lots of 4K gaming-capable TVs with high refresh rates that you can use for PC gaming. I wouldn't touch a 1440p monitor if my life depended on it.
 
They didn't drop the ball; they just stagnated with zero innovation because they didn't have to, until the Ryzen 1000 series. Intel was still pumping out quad-core CPUs. They've been scrambling ever since.

Core Ultra needs some refinement. We'll see with the 300 series.
Multiple problems with their manufacturing process + minimal performance increases + massive power draw and heat output = dropped ball.
 
We're well beyond that being a reality. Everything is moving towards ray tracing, upscaling and frame generation.

I'm OK with it.

I was hoping for Ray Tracing to be a non-issue by now. As in cards having enough horsepower that PathTracing would be the norm, not the alternative. Now I don't think it will ever come, I think RT will be replaced with realtime AI rendering.
 
I was hoping for Ray Tracing to be a non-issue by now. As in cards having enough horsepower that PathTracing would be the norm, not the alternative. Now I don't think it will ever come, I think RT will be replaced with realtime AI rendering.

Yup, if you can light a scene with a simple AI "shader" pass at a fraction of the computational cost and even 95% of the accuracy, instead of doing all the dirty work of actually casting rays and such, why the hell would you waste your time with actual RT or even PT?
 
The thing with Intel is that they dropped the ball too many times, until AMD caught up with and then surpassed them. I don't see Nvidia making the same mistake.

They didn't drop the ball; they just stagnated with zero innovation because they didn't have to, until the Ryzen 1000 series. Intel was still pumping out quad-core CPUs. They've been scrambling ever since.

Keeping in mind - Intel's architectures didn't fall behind, their fabrication capabilities did. As soon as they could fab their more advanced designs, they did, and they were monsters (see Alder Lake versus Zen 3). In terms of pushing what a single core can do, and what a single core can do efficiently, Intel hasn't really ever fallen behind.

4K is quickly becoming the new standard, if only because of consoles. There are lots of 4K gaming-capable TVs with high refresh rates that you can use for PC gaming. I wouldn't touch a 1440p monitor if my life depended on it.

Even higher - I'm looking at 5k2k (LG's new 39" OLED, likely in an Alienware monitor), because OLED has issues when used in a typical CRT/LCD R-G-B sub-pixel arrangement. I expect to be running scaling on the desktop and DLSS (etc.) in games to keep framerates playable with my 5080.

I was hoping for Ray Tracing to be a non-issue by now. As in cards having enough horsepower that PathTracing would be the norm, not the alternative. Now I don't think it will ever come, I think RT will be replaced with realtime AI rendering.

"Lightweight" RT is absolutely possible up and down the spectrum. That is evidenced by console games using RT.

But we knew full-on Path Tracing (and more) wouldn't be quick. That's really several orders of magnitude more demanding than the RT that consoles are able to push today. If I had to guess, we're maybe halfway there.
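To give a feel for the gap, here's some illustrative ray-budget math. The samples-per-pixel and bounce counts are numbers I picked to make the point, not figures from any particular game or console:

```python
# Rough rays-per-second budgets at 4K60 under different rendering approaches.
# All sample/bounce counts here are illustrative assumptions.

PIXELS_4K = 3840 * 2160
FPS = 60

def rays_per_second(rays_per_pixel):
    return PIXELS_4K * FPS * rays_per_pixel

scenarios = {
    # a couple of traced effects (e.g. reflections + shadows), ~1 ray each
    "console-style 'lightweight' RT": 2,
    # real-time path tracing: a few paths per pixel, a few bounces each
    "real-time path tracing (2 spp x 3 bounces)": 2 * 3,
    # offline-style quality: hundreds of paths per pixel
    "offline-quality path tracing (512 spp x 3 bounces)": 512 * 3,
}

for name, rpp in scenarios.items():
    print(f"{name}: ~{rays_per_second(rpp) / 1e9:.1f} G rays/s")
```

That spread is also why real-time path tracing today leans so hard on denoisers and upscaling: nobody can afford anywhere near offline sample counts yet.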
 
"Lightweight" RT is absolutely possible up and down the spectrum. That is evidenced by console games using RT.

But we knew full-on Path Tracing (and more) wouldn't be quick. That's really several orders of magnitude more demanding than the RT that consoles are able to push today. If I had to guess, we're maybe halfway there.
Actually, full ray tracing is more taxing than path tracing. The thing is that only a handful of effects are ray traced in games, mainly reflections, global illumination, shadows, etc., hence the lower performance hit.
 
The thing with Intel is that they dropped the ball too many times, until AMD caught up with and then surpassed them. I don't see Nvidia making the same mistake.

I could argue that Nvidia is taking the same path Intel did, the one that led to them falling behind.

Intel catered to their enterprise/business/server market: lots of higher core-count CPUs, but for servers only, while leaving the consumer stuff at low core counts like quad cores, plus whatever else benefited their server market.

What's Nvidia doing? Catering to the AI boom, building powerful GPU setups for that market while leaving the consumer stuff in limbo. Sure, they've still got the best GPUs for consumers, but at what cost and in what quantity?
 
I could argue that Nvidia is taking the same path Intel did, the one that led to them falling behind.

Intel catered to their enterprise/business/server market: lots of higher core-count CPUs, but for servers only, while leaving the consumer stuff at low core counts like quad cores, plus whatever else benefited their server market.

What's Nvidia doing? Catering to the AI boom, building powerful GPU setups for that market while leaving the consumer stuff in limbo. Sure, they've still got the best GPUs for consumers, but at what cost and in what quantity?


Don't think for one second that AMD won't jump on the AI bandwagon. If Nvidia bails on the consumer market, you can be sure AMD will follow.
 
I never said they wouldn't.

AMD also capitalized on their server market with Zen. They managed to do both.

It wouldn't be far-fetched to say they could do a similar strategy with GPUs as well. AFAIK, they're already making "AI-focused" GPUs for that market anyway.
 