Speculation about AMD’s Continued Commitment to High-End Consumer GPUs Increases as More Rumors Circulate

AMD is lacking in ray tracing.

I wonder what they can do, given that rumour has it the PS5 Pro will have 2x to 4x the ray tracing performance of the PS5. If they can bring that or better to their discrete GPUs, those should be a whole lot more desirable.

Intel even has pretty decent ray tracing; you would think AMD should get there eventually.
 
I wonder what they can do, given that rumour has it the PS5 Pro will have 2x to 4x the ray tracing performance of the PS5. If they can bring that or better to their discrete GPUs, those should be a whole lot more desirable.

Intel even has pretty decent ray tracing; you would think AMD should get there eventually.

It's all about where they put the silicon, or rather, how much of the silicon they dedicate to which functions. AMD could make a ray tracing killer if they spent the die space on it. The question is, how much are they willing to dedicate? Their chips are pretty small compared to NVIDIA's, so sure, they could make bigger monolithic dies, but are they willing to spend that cost?
 
I'm ok with this approach.

Someone has to have the top tier card, but it really isn't a revenue driver in and of itself. Sure, you sell some, and the margins are great, but it's low volume. It's mostly just there to drive marketing and down-tier sales. Whoever loses the horse race at the top tier loses almost all of those benefits and is stuck with a very expensive to develop, low volume part that has to take a haircut on the margin to compete.
 
I don't think AMD has to release an RTX 5900 killer, but if they can stay neck and neck with, say, an RTX 5700 for a good price, that would be something.

And please fix RT and FSR.

BTW, NVIDIA has been selling its top-tier cards like hotcakes for generations; they have historically been hard to find, with the possible exception of the RTX 2080 Ti and some Titans. So there's a lucrative market there which NVIDIA has kept for itself for quite a while.
 
As some others have pointed out, this is not the first time AMD has focused on the mainstream market, but it really hasn't paid off; AMD hasn't been able to gain significant market share regardless.

On a side note, where are all the multi-chip GPUs? There should be cards from Intel, AMD, and even NVIDIA by now.
 
It's all about where they put the silicon, or rather, how much of the silicon they dedicate to which functions. AMD could make a ray tracing killer if they spent the die space on it. The question is, how much are they willing to dedicate? Their chips are pretty small compared to NVIDIA's, so sure, they could make bigger monolithic dies, but are they willing to spend that cost?
I think it has more to do with the implementation than the die space. AMD doesn't really have dedicated RT cores a la NVIDIA/Intel; they mostly rely on the compute units to do the heavy lifting. Same goes for FSR.
 
AMD doesn't really have dedicated RT cores a la NVIDIA/Intel; they mostly rely on the compute units to do the heavy lifting.

Which takes die space

And for RT, well, yes and no: RDNA 3 does have a dedicated ray accelerator per CU which handles the ray intersection testing, but BVH traversal is shader-based. Moving more of the flow to the fixed-function accelerator would be beneficial, and I suspect that is what RDNA 3.5 or RDNA 4 will do, hopefully. Also, just straight-up doubling the ray accelerators per CU would be an easy option, but again, die space. It all comes back to die space.
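To make that split concrete, here's a rough sketch with plain Python standing in for shader code (the node layout and names are mine, nothing AMD-specific): the intersection tests are the part the ray accelerator handles in fixed-function hardware, while the traversal stack and loop are what RDNA 3 still runs on the shader ALUs, and that loop is exactly the part NVIDIA and Intel push into their RT units.

```python
# Rough sketch of the RDNA 3 split described above (made-up data layout):
# the intersection test stands in for the fixed-function ray accelerator,
# while the BVH traversal loop and stack run as ordinary shader code.
from dataclasses import dataclass, field

@dataclass
class Node:
    bbox_min: tuple
    bbox_max: tuple
    children: list = field(default_factory=list)   # inner node: child Nodes
    triangles: list = field(default_factory=list)  # leaf node: triangle handles

def ray_box_hit(origin, inv_dir, lo, hi):
    """Slab test: the kind of work the per-CU ray accelerator does in hardware."""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * d, (h - o) * d
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(root, origin, inv_dir):
    """BVH walk: on RDNA 3 this stack and loop run as shader instructions."""
    stack, candidates = [root], []
    while stack:                         # every iteration costs shader cycles
        node = stack.pop()
        if not ray_box_hit(origin, inv_dir, node.bbox_min, node.bbox_max):
            continue
        if node.triangles:               # leaf: hardware would now run triangle tests
            candidates.extend(node.triangles)
        else:
            stack.extend(node.children)  # keep walking; this is the "shader" part
    return candidates

# Tiny usage example: a two-leaf tree and a ray shot down +Z through leaf A.
leaf_a = Node((0, 0, 0), (1, 1, 1), triangles=["triA"])
leaf_b = Node((4, 4, 4), (5, 5, 5), triangles=["triB"])
root = Node((0, 0, 0), (5, 5, 5), children=[leaf_a, leaf_b])
print(traverse(root, origin=(0.5, 0.5, -1.0), inv_dir=(1e9, 1e9, 1.0)))  # ['triA']
```

Doubling the accelerators only speeds up the intersection calls; moving traversal into fixed function takes the whole loop off the shaders, which is why that's the bigger win.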
 
Which takes die space

And for RT, well, yes and no: RDNA 3 does have a dedicated ray accelerator per CU which handles the ray intersection testing, but BVH traversal is shader-based. Moving more of the flow to the fixed-function accelerator would be beneficial, and I suspect that is what RDNA 3.5 or RDNA 4 will do, hopefully. Also, just straight-up doubling the ray accelerators per CU would be an easy option, but again, die space. It all comes back to die space.
Agree to disagree: it's the implementation. It doesn't matter how much die space you use; it's how efficiently you use it.

You could argue that AMD is wasting die space with its ray accelerator, as it's not as efficient as NVIDIA's or even Intel's.
 
As some others have pointed out, this is not the first time AMD has focused on the mainstream market, but it really hasn't paid off; AMD hasn't been able to gain significant market share regardless.

On a side note, where are all the multi-chip GPUs? There should be cards from Intel, AMD, and even NVIDIA by now.
Let's take a step back & get a historical view:

Polaris — genuine market share attempt. $200

RDNA 1 — Pseudo market share attempt. $400
In reality this was a stopgap release between GCN & RDNA 2 with RT

Now again we have a stopgap between RDNA & UDNA

What AMD is going after is not market share but (AI) developer share with UDNA 1 (RDNA 5??)

Basically NVIDIA is a vertically integrated company (like Apple) that sells CUDA.

AMD wants to sell UDNA.
 
Welp.

I guess I won't be buying any AMD GPUs for the next few years.

Long historical background follows, but it makes a point in the end, I promise.

__________________________________________________

I've been a serial early adopter of new higher resolutions for almost 20 years now.

Sometime in 2004-2005 (can't remember) I bought a 1920x1200 screen (Dell 2405FPW, I think?) back when that was still exotic. I quickly found that my poor GeForce 6800 GT could not handle anything at that resolution. The result? I just kind of stopped playing games for 5 years.

Then sometime in 2009-2010 (can't remember) I slowly started getting into games again, first only Civ4, but then I started catching up on the titles I had missed. And in 2010 I made the same mistake again: I bought a 30" 2560x1600 Dell U3011 and then proceeded to chase acceptable framerates for the next 3 years.

First I played some older titles using a fanless Radeon HD 5750. Then I won the silicon lottery when I bought a GTX 470 that overclocked beyond GTX 480 speeds. I briefly used a GTX 580, but I was using a custom form factor SFF case at the time and couldn't get a large enough PSU to make it reliable, so I had to back off and quickly sold it.

[Image: 1725986835964.png]

Having realized the drawbacks of SFF (at least the custom varieties as they existed at the time) I decided to move back to full-sized desktops. I got an AMD 990FX board and stuck a "wait for the amazing new Bulldozer and drop in the upgrade later" Phenom II 1090T in it.

In this case I started with a single Radeon HD 6970 (the monstrous-for-the-time triple-slot Asus DirectCU II version) and, when that wasn't enough, added a second one in Crossfire.

[Image: 1725987190539.png]
(Note the ultra-rare CoolIT EPIC 180mm AIO I convinced MainGear to sell me. They later decided they could only sell them in complete systems, maybe due to licensing limitations? I don't know. I think I'm the only person who got a standalone one. It was the most AIO capacity you could get at the time, more than the dual-120mm Corsair H100. Between the 180mm fans and AIO and the huge triple-slot GPUs, scale is really thrown off here.)
Whole system for good measure:

[Image: 1725987463604.png]

But dual Radeon HD 6970s in Crossfire were also insufficient for 2560x1600. I quickly found that Crossfire was not all that great: scaling was poor, and minimum framerates were even worse. I theorized that having a single faster GPU would be better, so as soon as the Radeon HD 7970 launched in December 2011 I bought one.

And I was right: it was indeed a better experience than the dual 6970s. But it still wasn't fast enough.

I decided to try some custom cooling by mounting a Corsair AIO to the 7970, but I slipped with my screwdriver and killed it by cutting a trace :(

I then went without a GPU for a while (I actually picked up a basic GTX 460 768MB version to hold me over short term) and then when the GTX 680 launched in 2012, I jumped on it at launch.

It was even better than the 7970 had been, but it still just wasn't enough to keep up with the 2560x1600 screen.

I finally was able to get acceptable frame rates at 2560x1600 in 2013 when the Kepler 6GB Titan launched and I bought one at launch.

Of course, that only lasted for a few months, before newer titles challenged even the highly overclocked Titan at 2560x1600.

...and then in 2015 I proved that I never learn my lesson, and once again became a high resolution early adopter when I moved to 4k by getting a 48" Samsung JS9000 TV, and started the process all over again.

[Image: 1725988580741.png]

I needed the relatively new HDMI 2.0 standard to be able to display 4k without dropping chroma, so it was time for more GPU upgrades. First one 980 Ti (I knew that wasn't going to be enough, but I could only find one in stock at the time), then a second 980 Ti in SLI.
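To put a rough number on why HDMI 2.0 mattered there, a back-of-the-envelope sketch (my own arithmetic using the commonly quoted TMDS limits and standard 4K60 timing, not anything from the TV's spec sheet):

```python
# Why 4K60 with full chroma needed HDMI 2.0: assumes standard CTA-861 timing
# for 3840x2160@60 (4400x2250 total pixels) and 8-bit colour; TMDS carries
# 10 bits per channel per pixel clock over 3 channels.
def tmds_gbit_s(pixel_clock_hz):
    return pixel_clock_hz * 3 * 10 / 1e9

pixel_clock = 4400 * 2250 * 60                                   # 594 MHz
print(f"RGB / 4:4:4: {tmds_gbit_s(pixel_clock):.1f} Gbit/s")     # ~17.8
print(f"4:2:0      : {tmds_gbit_s(pixel_clock / 2):.1f} Gbit/s") # ~8.9
# HDMI 1.4 tops out around 10.2 Gbit/s and HDMI 2.0 at 18 Gbit/s, so on 1.4
# the only way to reach 4K60 was to halve the chroma data.
```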

[Image: 1725988731930.png]

Having done multi-GPU in the past and hated it, I guess I had thought that SLI must be better than Crossfire. It wasn't. I still got disappointing scaling and terrible minimum framerates.

So, when the Pascal Titan X launched a year later, I jumped on it. I also decided to maximize my ability to overclock it by building a custom water loop. I was going to get acceptable framerates at 4k come hell or high water (literally).

[Image: 1725989019824.png]

And it was a success.

The JS9000 did not have any kind of FreeSync/G-Sync/VRR (as TVs didn't at the time), but I was finally able to v-sync it at 60Hz and never have the framerate drop below 60fps.

At least for a little while.

As new titles came out, it took increasingly extreme measures.

By the time I got around to playing Deus Ex Mankind Divided in late 2016 I was already using some serious tricks to get smooth framerates.

Ultra settings had me hovering between 38 and 45fps.

I created a custom 21:9 ultrawide resolution (3840x1646) and played letterboxed. This boosted framerates significantly over 4k, but it still wasn't enough to play at 60Hz v-synced with even a little MSAA.
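(For the curious, the arithmetic behind that odd 1646 figure; the pixel-count saving is my own rough estimate of the reduced GPU load, not a measured framerate gain:)

```python
# 3840x1646 is just the 21:9 crop of a 16:9 4K panel, and it also trims the
# pixel count (and thus, roughly, the shading work) by about a quarter.
width = 3840
letterbox_height = round(width * 9 / 21)                        # 1645.7 -> 1646
print(letterbox_height)                                         # 1646
print(f"{(width * letterbox_height) / (3840 * 2160):.0%} of the 4K pixel count")  # 76%
```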

Then I had a lightbulb moment. This is a TV. Here in the U.S. that means 60Hz, but in Europe that means 50Hz. Maybe it has a 50Hz mode? Indeed it did. So I was able to play Deus Ex v-synced at 3840x1646@50Hz.

It wasn't ideal, but it got me through the game.

That title was one of the heaviest I played for a while. I'd use the letterbox + 50Hz trick a few more times over the next couple of years, but I didn't have to do it frequently.

Then in 2019 I bought an Asus XG438Q 4k screen that had FreeSync 2, and Nvidia had recently decided to support FreeSync via "G-Sync Compatible", so it worked and I didn't have to bother with v-sync anymore, which made things easier. I still occasionally letterboxed it, though.

[Image: 1725990408128.png]


I had already wanted to upgrade for more performance when the 2080 Ti came out in 2018, but first I was frustrated by my inability to find one in stock, and then by the time I was able to get my hands on one, the whole "RTX Space Invaders" thing was blowing up, so I decided against it. By the time that was blowing over, it felt too late in the product cycle to dump big money on a top-end GPU, so I decided to wait for the next gen.

Then the pandemic hit, the RTX 3000 series hit, and it was scalped to high heaven. I refused to overpay for a GPU, so I passed on the 3090 and 3090 Ti as well.

Eventually I had played through my backlog of old titles, and the old Pascal Titan (now 5 years old) was really not cutting it for anything newer. I sucked up my pride and went for an overpriced XFX Speedster ZERO RX 6900 XT EKWB edition. The thing was a champ, and with a custom power design and a highly binned XTXH chip I **** near hit some overclocking records on it.

I was finally playing new titles at full resolution again, with one exception: if you turned on RT, it turned into a slide show.

So a few months later I cut my losses, and picked up a 4090. Not exactly at launch, but pretty close thereafter, once I was able to get my hands on one.

[Image: 1725990916864.png]

I'm still occasionally struggling to get adequate framerates at 4k Ultra settings (without enabling bullshit like upscaling and frame generation), and this bugs me.

__________________________________________________

Aaaanyway...


This was just a really long-winded way for me to say that I have been struggling to get what I consider acceptable performance out of even the highest-end GPUs since ~2005.

Since AMD hasn't been competitive at the high end for most of that time, with a few notable exceptions when I tried the Crossfire 6970s and the 7970 (~2010-2012) and again with the 6900 XT (2021), this has completely precluded AMD as an option at all for me.

If they are opting not to participate in the high end again, I guess that removes AMD as an option at all for me on the GPU side for the next few years, and it will probably result in even less competition at the high end, and thus even worse pricing for the GPUs I "need".

And that sucks.

I don't suppose Intel will dream up an Arc that challenges a future 5090? :p
 
If they are opting not to participate in the high end again, I guess that removes AMD as an option at all for me on the GPU side for the next few years, and it will probably result in even less competition at the high end, and thus even worse pricing for the GPUs I "need".

AMD didn't say never.
But there are 2 prerequisites, though:
  1. UDNA
  2. Successful multi-GCD chiplets
Hopefully the above can happen with RDNA 5; if not, there is always RDNA 6

I don't suppose Intel will dream up an Arc that challenges a future 5090?
Intel is using GPUs as a Trojan horse to sell their AI hardware (AMD has also realized this with UDNA).
Another thing to remember is that Intel already has two-thirds of the GPU market. They just need to upgrade everyone to Lunar Lake & then they can use the same software stack to target both games & AI.
 
AMD didn't say never.
But there are 2 prerequisites, though:
  1. UDNA
  2. Successful multi-GCD chiplets
Hopefully the above can happen with RDNA 5; if not, there is always RDNA 6


Intel is using GPUs as a Trojan horse to sell their AI hardware (AMD has also realized this with UDNA).
Another thing to remember is that Intel already has two-thirds of the GPU market. They just need to upgrade everyone to Lunar Lake & then they can use the same software stack to target both games & AI.

Seriously.

AI can eff right off. All of it.

I don't want AI in any way, shape, or form on any platform I use.

I don't want to ever see AI generated content.

I don't want to ever correspond with anyone using AI to write or otherwise express themselves.

I don't want to ever buy any product containing or developed with the use of AI.
 
Seriously.

AI can eff right off. All of it.

I don't want AI in any way, shape, or form on any platform I use.

I don't want to ever see AI generated content.

I don't want to ever correspond with anyone using AI to write or otherwise express themselves.

I don't want to ever buy any product containing or developed with the use of AI.
Amen.
 
Look for upcoming new AI tech in games real soon.
 
It's a ****ty solution to a problem we shouldn't be having. We should be rendering at native instead.
I think upscaling is awesome; the thing is, I hoped it would bring new life to older cards or make mainstream cards 4k-capable (which, for a little while, it did). But it ended up being pretty much mandatory on high-end cards with newer titles just for them to be playable.

Long gone are the days of "brute force" rendering.
 
It's a ****ty solution to a problem we shouldn't be having. We should be rendering at native instead.
Amen again.

It's also kinda sh1tty that devs automatically expect us to use this stuff, and even rope that sh1t into the recommended system requirements! For example, Black Myth: Wukong's recommended specs say: "The above specifications were tested with DLSS/FSR/XeSS enabled." Wow.

But it ended up being pretty much mandatory on high-end cards with newer titles just for them to be playable.
Yeah, which is some real bullsh1t.

Long gone are the days of "brute force" rendering.
At least those of us on lower resolutions (like me on 1440p) can still use native rendering as an option in most cases (even while using RT sometimes).
 