AMD Ryzen 7 5800X3D vs AMD Ryzen 7 5800X: A Cache Value?

Brent_Justice

Introduction: AMD is not taking a back seat on gaming performance. With the recent launch of Intel’s Alder Lake CPUs, Intel has proven it is back in the game, and AMD isn’t taking this lightly. By innovating in unique ways, AMD is finding ways to improve gaming performance on its CPUs and stay on top of […]

 
I'll say that, for me, coming from a 3700X, I've seen much bigger FPS gains. I wasn't able to use Resizable BAR with that CPU, and seeing it in action now with Horizon Zero Dawn has been impressive as well. It used to take ~30+ seconds to load and now it's more like 10. I don't know whether Crysis 1-2 Remastered use it, but they seem to load a little quicker now, though that could also be due to those last patches Crytek put out.

I only just went to using an AIO with this chip, but I've been able to keep the Plex 360 at stock speeds and it's kept this CPU at ~65°C or less during gaming, mostly in the 50s, and that was in a room that was 75 degrees Fahrenheit at the time, so I'm confident everything will work well in the summer. The two combined have also resulted in my GPU running a few degrees cooler, which was an unexpected bonus. Up until recently I had been setting the AIO to max fan speeds for gaming, but now I see that is totally unnecessary.

For me, gaming just feels smoother than when I had the 3700X, even in titles like SOTR and Metro Exodus. Granted, since I never had a 5800X I couldn't say what that would've been like, but it was also Brent's comparison review between the two that first got my attention regarding an upgrade. He'd already seen some unusual FPS gains back then.

I game almost entirely at 5120x1440 with my rig, usually at 100 Hz so I can use HDR with my monitor, and at 120 Hz for non-HDR gaming. Things definitely seem more fluid regardless of whether there's a significant gain in frames, but even then I'm seeing anywhere from ~5 to ~20 FPS more depending on the game, and the lows seem to have gone up a bit as well. I also use DLSS on the quality setting as much as possible to offload some work from the GPU and give a little more to the CPU. Games that used to hang around 70-90 FPS are now basically hammering 100 FPS, or at the least the high 90s. Obviously, as @Brent_Justice 's review shows, moving from a 5800X is mixed at best, and the 5800X's ability to use PBO will definitely let it pull ahead, but there's a cost in power and heat when you do that as well. I'm loving that I can now cut back on the fans and still enjoy more performance than I used to when everything was maxed. Definitely not a chip for everyone, but for me, for the games I play, I'm loving it.
 
Yes, coming from a 3700X that makes sense. The goal of this review, though, was to see what advantages, if any, it has over the 5800X. If you purchased a 5800X now, you would save quite a bit of money over the 5800X3D and still get similar gameplay performance, plus you'd have the ability to use PBO, and everything besides gaming would also just run faster on the 5800X thanks to its natively higher clock speed. That is honestly the best value until Zen 4 comes out at the end of this year, which is not too far away now.
 
I'm only just coming to realize how much I've shorted myself over the years by using more mid-grade CPUs. Moving forward I'll definitely be aiming a bit higher, but for now I'm pretty happy. My previous home desktop CPUs have gone 2600K (OC'd to 4.2 GHz) > 4930K (OC'd to 4.3 GHz, still in the cave) > 3700X (stock) > 5800X3D (stock), so I'm still getting used to the other ways a CPU can affect a high-end GPU's performance. I've been using the fastest GPUs I can afford for some time since I stopped using SLI: 1080 Ti > 2080 Ti > 3090, and now I've got a 3090 Ti coming via EVGA Step-Up. Thanks again for the great review!

edit: Just to add that I'm only including what I consider modern CPUs in my previous desktop rigs. It'd take some thought to remember all those other ones from the 80s-2000s-lol!
 
This is a good comparison of the similarly priced 5800X vs 5800X3D.

It does not, however, actually address the title question: how much benefit does the cache actually provide?

You can only answer that question if you either lower the clocks/voltages on the 5800X to match (essentially underclock it) or reduce the cache on the 5800X3D to match (I don't know of any handy tool to accomplish that, but I wouldn't call it impossible). You should then expect them to perform identically except in cases where the cache actually helps -- or hurts, although I can't think of any case where it ~should~ do that, and this test as run would completely mask it if it did, since you would just attribute the difference to the clocks.
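As a rough sanity check short of actually matching the clocks, you can at least normalize results by boost clock and treat whatever is left over as "cache and everything else." This assumes a fully CPU-bound workload that scales linearly with clock, which real games rarely are, and the clock and FPS figures below are approximate/hypothetical:

```python
# Rough sketch: split an observed FPS delta into a "clocks" part and a
# residual (cache, latency, etc.). Assumes perfect clock scaling, which
# is optimistic; the FPS numbers here are hypothetical placeholders.

clock_5800x   = 4.7    # GHz, approximate rated max boost
clock_5800x3d = 4.5    # GHz, approximate rated max boost

fps_5800x   = 100.0    # hypothetical measured result
fps_5800x3d = 112.0    # hypothetical measured result

# What the 5800X3D "should" score if only the lower clock mattered
clock_only_estimate = fps_5800x * (clock_5800x3d / clock_5800x)

residual = fps_5800x3d / clock_only_estimate - 1.0
print(f"Clock-only estimate: {clock_only_estimate:.1f} FPS")
print(f"Residual attributable to cache (and everything else): {residual:+.1%}")
```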

Or you could change the title.

Here, we see many situations where the faster clocks allow the older chip to eke out the lead, and a few where the cache actually plays a big role, but it's not just the effect of the cache being evaluated, because you have the clock mismatch.

Also - curious, why disable Resizable BAR?

The chart on Pg 6 where you break down the results so far -- it's nice to have the chart, but it needs some coloring or something so you can tell which one was the winner, or it should be presented as a bar graph or similar. A plain grid of numbers is... hard to really interpret.

The CPU speed charts on Pg 8 are awesome; they illustrate the very clock delta I complained about above -- but if you're going to allow it (and it's a valid point in a CPU vs. CPU comparison, just not in a cache-only benefit analysis), you may as well make it easier to interpret. They would be better if they all shared the same vertical range -- that would make cross-comparison easier, especially since they are sitting side by side.
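If those charts happen to be generated with matplotlib (just a guess on my part, the tooling isn't stated), forcing a shared vertical range is only a line or two. The clock traces below are made up purely to show the calls:

```python
# Minimal sketch: two side-by-side clock traces forced onto one y-range so
# they can be compared directly. Data here is invented for illustration.
import matplotlib.pyplot as plt

time_s   = list(range(60))
clocks_a = [4450 + (i % 7) * 10 for i in time_s]   # fake MHz trace, CPU A
clocks_b = [4700 - (i % 5) * 15 for i in time_s]   # fake MHz trace, CPU B

fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(10, 4))
ax1.plot(time_s, clocks_a); ax1.set_title("5800X3D")
ax2.plot(time_s, clocks_b); ax2.set_title("5800X")
ax1.set_ylim(4000, 5000)        # sharey=True applies this range to both axes
ax1.set_ylabel("Core clock (MHz)")
for ax in (ax1, ax2):
    ax.set_xlabel("Time (s)")
plt.tight_layout()
plt.show()
```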
 
This is interesting.

I haven't been in the CPU market lately, so I haven't kept up. I had heard of the new cache AMD was bringing out, but when I saw the 5800X3D I had assumed it was just the same CPU as the 5800X but with the improved cache. I didn't realize they were cutting the core clock so much.

Curious choice. I wonder why they did that. To get better yields? Or to offset the extra power used by the cache?

I'd be curious how they look clock to clock, to tease out just how much good the cache itself does.

Also, I thought it was really odd that 1080p performed worse than 1440p in MS Flight Sim....
 
Also, we have done benchmarks and gaming performance testing going from a 3700X to a 5800X here:

https://www.thefpsreview.com/2021/10/06/amd-ryzen-7-5800x-vs-ryzen-7-3700x-performance-review/

This review is one of the reasons I am pissed at AMD for shafting us TRX40 users and refusing to give us a Zen 3 version.

I hate compromises. I want a build that gives me everything a HEDT system offers (primarily ECC, high PCIe lane counts and IOMMU, but high core counts don't hurt either) without sacrificing top of the line game performance.

I hate compromises. They drive me nuts.
 
Curious choice. I wonder why they did that. To get better yields? Or to offset the extra power used by the cache?
My understanding is the stacked cache is more voltage sensitive, so the 3D version has lower voltage limits, which in turn limited the clocks. The TDP remained unchanged, but you just can't get as much if you can't boost the voltage as far.
 
My understanding is the stacked cache is more voltage sensitive, so the 3D version has lower voltage limits, which in turn limited the clocks. The TDP remained unchanged, but you just can't get as much if you can't boost the voltage as far.

Is the L3 cache directly on the core? It ought to be able to operate at a different voltage than the rest of the package...
 
Is the L3 cache directly on the core? It ought to be able to operate at a different voltage than the rest of the package...
Not an engineer, so I don't know — could be some limits on the AM4 pinout at play there; you only get so many pins to run different voltages.
 
My understanding is the stacked cache is more voltage sensitive, so the 3D version has lower voltage limits, which in turn limited the clocks. The TDP remained unchanged, but you just can't get as much if you can't boost the voltage as far.
The cache also acts as an insulator with respect to heat transfer - a disadvantage of stacking technology at this time. Wattage isn't changing, as you note, yet temperatures from the same sensors may be higher than on the 5800X at the same power draw. Thus the thermal throttling limit is reached sooner, leading to the need for more cooling capacity to get the best out of the CPU.
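A crude way to picture it: reported temperature is roughly coolant temperature plus power times the total die-to-coolant thermal resistance, so any extra resistance from the stacked layer raises temperature even at identical wattage. Every number below is a made-up illustration, not a measured value:

```python
# Back-of-the-envelope: same power draw, slightly higher die-to-coolant
# thermal resistance -> higher reported temperature. All values are assumed.

power_w         = 110.0   # package power during a gaming load (assumed)
coolant_temp_c  = 30.0    # coolant temperature at the cold plate (assumed)
r_plain_c_per_w = 0.35    # die-to-coolant resistance, plain CCD (assumed)
r_extra_c_per_w = 0.05    # added resistance from the stacked cache layer (assumed)

t_plain   = coolant_temp_c + power_w * r_plain_c_per_w
t_stacked = coolant_temp_c + power_w * (r_plain_c_per_w + r_extra_c_per_w)

print(f"Plain CCD:   {t_plain:.1f} C")
print(f"Stacked CCD: {t_stacked:.1f} C  (+{t_stacked - t_plain:.1f} C at the same wattage)")
```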

It will be interesting to see how far the more adventurous 'extreme' overclockers are able to push the 5800X3D. Last I checked they were manually flipping bits in BIOS images to try to turn on things that AMD thinks should be left off :D
 
It would be nice to see the minimum frame rates as well here.
I'm curious whether the cache is helping in the stress cases or not.
I feel that's a pretty important metric to expose for all video card/CPU reviews.
 
The cache also acts as an insulator with respect to heat transfer - a disadvantage of stacking technology at this time. Wattage isn't changing, as you note, yet temperatures from the same sensors may be higher than on the 5800X at the same power draw. Thus the thermal throttling limit is reached sooner, leading to the need for more cooling capacity to get the best out of the CPU.

It will be interesting to see how far the more adventurous 'extreme' overclockers are able to push the 5800X3D. Last I checked they were manually flipping bits in BIOS images to try to turn on things that AMD thinks should be left off :D

Honestly, I don't even understand why they have to stack it on the CPU core.

It's not like there isn't PLENTY of space underneath the heat spreader to put them side by side and allow for more efficient cooling.

Here is the 5800x for reference:

[Image: 5800X package with the heat spreader removed, showing the two chiplets]

Shift one of those chiplets to the side and you have almost half the area under the heat spreader to work with, where you could shift in a 3rd chiplet containing the cache.
 
I too would have liked to see the impact of the cache by itself - although that may never be truly possible because the latency is increased (but then, that is part of the cost of having more cache).

To be honest, the games that show the most benefit may be those that are least optimized, because they have been designed to access more data at once than will fit in the cache. Both Crysis and Flight Simulator feature vast vistas which can drain frame rates. I suspect some database operations could also benefit significantly from this, and that overall games are likely to operate more smoothly, even if they don't go as fast at their peak.
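If anyone wants to see that working-set effect on their own machine, a rough NumPy sketch like the one below (my own illustration, not from the review) usually shows throughput falling off once the data being touched at random no longer fits in the L3. Python overhead blunts the absolute numbers, and the true footprint is roughly three times the data array (the index and output arrays occupy cache too), but the shape of the falloff is the point:

```python
# Rough illustration of working-set size vs. effective gather throughput.
# Randomly gathering from an array that fits in L3 is much faster than from
# one that spills to DRAM. Sizes are chosen around a 32 MB / 96 MB L3 split.
import time
import numpy as np

def gather_rate(n_elements, repeats=5):
    data = np.arange(n_elements, dtype=np.int64)   # 8 bytes per element
    idx = np.random.permutation(n_elements)        # random access pattern
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        data[idx].sum()                            # random gather + reduce
        best = min(best, time.perf_counter() - t0)
    return n_elements * 8 / best / 1e9             # GB/s, best of N runs

for mb in (8, 24, 64, 192):                        # size of the data array only
    rate = gather_rate(mb * 1024 * 1024 // 8)
    print(f"data array {mb:4d} MB: ~{rate:.1f} GB/s")
```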

I think the real benefit will come five years down the line when every new CPU has 96MB of cache and software (especially games) expects to make use of that, because you won't need to get a new CPU to keep up. Of course, you might want it to go a bit faster, but the CPU might also be less likely to bottleneck a new graphics card (whereas now, the card is clearly the bottleneck for 4K, and to an extent 1440p).

It'll also benefit your electricity bills over that time. If you use the CPU continuously, you'll be drawing 20W less, the temperature will be 8°C lower, and the fan may be quieter as a result. This is a big win from AMD's performance-per-watt perspective. Even if you don't game all the time, many people run BOINC or similar to use up idle CPU time.
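To put a rough number on the first part (electricity prices vary a lot, so the rate below is purely an assumption):

```python
# Back-of-the-envelope annual saving from drawing 20 W less around the clock.
# The price per kWh is an assumption; substitute your local rate.
watts_saved    = 20.0
hours_per_year = 24 * 365
price_per_kwh  = 0.15   # assumed rate, local currency per kWh

kwh_saved = watts_saved * hours_per_year / 1000.0   # ~175 kWh per year
print(f"~{kwh_saved:.0f} kWh/year, about {kwh_saved * price_per_kwh:.0f} per year at {price_per_kwh}/kWh")
```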
 
Honestly, I don't even understand why they have to stack it on the CPU core.
This page talks a bit about the construction.


As to why it's on the CCD and not off on another chiplet by itself: my guess would be distance/latency. If you have to go out over Infinity Fabric for a cache hit, the speed is going to go down by orders of magnitude (granted, still faster than going out to system RAM, but...). The 5800X3D only has one CCD, so it's a best-case scenario with respect to latency all around.

There were probably also some tradeoffs to make sure it could drop into existing AM4 motherboards relatively painlessly.
 
This page talks a bit about the construction.


As to why it's on the CCD and not off on another chiplet by itself: my guess would be distance/latency. If you have to go out over Infinity Fabric for a cache hit, the speed is going to go down by orders of magnitude (granted, still faster than going out to system RAM, but...). The 5800X3D only has one CCD, so it's a best-case scenario with respect to latency all around.

There were probably also some tradeoffs to make sure it could drop into existing AM4 motherboards relatively painlessly.

So, I was thinking about this, and I guess I don't fully understand how they mate the stacked silicon. I'd imagine the lithography would have to happen separately, and then they are somehow mated together when done.

I guess what I am thinking is, if they can already manufacture them separately and then mate them stacked on top of each other, shouldn't it be a trivial change to do the same thing and mate them side by side for better thermals?

My best guess here is that the 5800X3D is more of a learning tech demo for future highly dense products than it is intended to be a long-term product in its current form. You know, test the technology out on something relatively simple, and learn from it, before trying to stack the **** out of super-dense Epycs.

Because you can do stuff like this in the lab all you want, but the real learning from a new technology starts when you hit the market. Things always break in new and novel ways you couldn't possibly have guessed or prepared for in the test lab once they actually get used in the field.
 
So, I was thinking about this, and I guess I don't fully understand how they mate the stacked silicon. I'd imagine the lithography would have to happen separately, and then they are somehow mated together when done.
Through-Silicon Vias (TSVs).
 
I really wish they had released a 5950X3D. Something like BF5 multiplayer already puts a 5800X at 100% CPU usage.
Honestly, I don't even understand why they have to stack it on the CPU core.

It's not like there isn't PLENTY of space underneath the heat spreader to put them side by side and allow for more efficient cooling.

Here is the 5800x for reference:

[Image: 5800X package with the heat spreader removed, showing the two chiplets]

Shift one of those chiplets to the side and you have almost half the area under the heat spreader to work with, where you could shift in a 3rd chiplet containing the cache.
Part of the benefit of vertical stacking is that it adds essentially no latency, because there is almost no added signaling distance. If you add a separate die, then you're running through Infinity Fabric. A single CCD's Infinity Fabric link has only about 1/3 the read bandwidth and 1/6 the write bandwidth of the on-die L3 cache.
My best guess here is that the 5800X3D is more of a learning tech demo for future highly dense products than it is intended to be a long-term product in its current form. You know, test the technology out on something relatively simple, and learn from it, before trying to stack the **** out of super-dense Epycs.
I think the 5800X3D is just them doing the bare minimum to be able to say they have the fastest gaming CPU. They had already been sampling Milan-X Epyc CPUs to key customers last year, long before the 5800X3D was even announced. They make significantly more money using 3D V-Cache-equipped CCDs for Epyc, so the 5800X3D is really just for bragging rights and consumer mindshare.
 