AMD Ryzen 9 7950X3D Gaming Performance CPU Review

Brent_Justice

Introduction
Ever since AMD launched its Zen 4 architecture and Ryzen 7000 series CPUs, gamers everywhere have been asking about the 3D V-Cache models. It is finally here: AMD is launching the first of its Zen 4 3D V-Cache CPUs today, the AMD Ryzen 9 7950X3D Gaming Processor, with an MSRP of $699.99. Both the Ryzen 9 7900X3D and Ryzen 9 7950X3D will be available on February 28th, 2023; however, AMD is only sampling the Ryzen 9 7950X3D for testing today. We are reviewing the Ryzen 9 7950X3D with a gaming focus. This is […]

See full article...
 
Thank you, Brent, for the wonderful article. Kinda glad I stuck with my 7700X for my office machine. It's a very nice CPU for what I use it for. Going with the 7800X3D wouldn't be that much of a bump at all, it would seem.
 
We don't know how the 7800X3D will be yet. Considering it only has 1 CCD to worry about, it could be much more efficient, with less latency, and just the bee's knees 🤷‍♂️ My prediction is that it will be very power efficient. I do worry about the frequency difference for productivity, however.
 
Seems like if you're already on a 5800X3D it's not much of an upgrade, if one at all. At least for gaming at 1440p or 4K.
 
I forgot to bring this up in the review, so I'll add it here:

This CPU may not be a good fit for multi-tasking while gaming, for example for streamers or content creators recording gameplay and streaming at the same time. Since it parks cores while gaming, multi-tasking headroom during gameplay could be reduced. Streaming and recording performance could be affected. The entire point of a 16-core CPU could be compromised under such circumstances. My addendum to this is that I have not tested this scenario and have no proof that this is true; it is a theory and a subject worth discussing.
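
If anyone wants to poke at this before real testing happens, one crude experiment would be to manually pin the capture/encode app to the second (frequency) CCD so it isn't sharing cores with the game. Here's a rough Python/psutil sketch of that idea; I have not tested it, the process name is just an example, and the assumption that logical CPUs 0-15 are the V-Cache CCD and 16-31 are the frequency CCD needs to be verified on your own system. Whether parked cores wake up properly for explicitly pinned work is exactly the open question.

```python
# Untested sketch: pin a capture/encode app (OBS is just an example) to the
# frequency CCD so it isn't contending with a game steered to the V-Cache CCD.
# Assumption: logical CPUs 0-15 = V-Cache CCD, 16-31 = frequency CCD.
import psutil

FREQ_CCD_CPUS = list(range(16, 32))  # assumption -- verify your topology first

def pin_to_frequency_ccd(process_name="obs64.exe"):
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == process_name.lower():
            proc.cpu_affinity(FREQ_CCD_CPUS)  # restrict this process to CCD1
            print(f"Pinned PID {proc.pid} to CPUs {FREQ_CCD_CPUS}")

if __name__ == "__main__":
    pin_to_frequency_ccd()
```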
 
Really, looking at the CPU/motherboard/RAM upgrade required to come from a previous gen... this gen just feels like a skip for me. My 5900X is trucking along just fine and doesn't feel like a bottleneck at 1440p at all.
 
The power usage is the most impressive part, but other than that it's a bit of a meh. Why spend the cash to help AMD and MS beta test this scheduler stuff in Win10/11? Much like with the E-cores of 12th-gen Intel last year.
 
Power usage is impressive, but are we really impressed? It's turning off cores, running at lower frequencies, capped at a lower temperature of 89°C, and has a lowered TDP; should a lower power profile really surprise or impress under those circumstances? If it could maintain the same clocks as the 7950X, keep all cores running at the same time, and still draw less power, I'd be more impressed by that.
 
Minimum FPS always determines playability. In games that benefit from the added cache, the 5800X3D showed much of its benefit in min FPS. We can't draw a conclusion on Zen 4 X3D without mins.
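
For reference, this is roughly how min and 1%-low figures get derived from a frametime capture. The sketch below assumes a PresentMon-style CSV with an 'MsBetweenPresents' column, and different tools define '1% low' slightly differently, so treat it as illustrative.

```python
# Rough sketch: derive average, 1%-low, and minimum FPS from a frametime log.
# Assumes a PresentMon-style CSV with an "MsBetweenPresents" column; other
# capture tools use different column names and slightly different definitions.
import csv

def fps_summary(path, column="MsBetweenPresents"):
    with open(path, newline="") as f:
        ft_ms = sorted(float(row[column]) for row in csv.DictReader(f))
    avg_fps = 1000.0 * len(ft_ms) / sum(ft_ms)
    worst = ft_ms[-max(1, len(ft_ms) // 100):]        # slowest 1% of frames
    low_1pct_fps = 1000.0 * len(worst) / sum(worst)
    min_fps = 1000.0 / ft_ms[-1]                      # single slowest frame
    return avg_fps, low_1pct_fps, min_fps

print(fps_summary("capture.csv"))  # placeholder filename
```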

I wonder how much MS paid AMD to gimp their CPU by making it rely on Xbox Game Bar. Inexcusable.
 
Streaming and recording performance could be affected.
I think this will depend significantly on what 'streaming' means. The combination of layout tools, audio tools, endpoints, and GPU hardware and acceleration used would all factor in.

For the 'pros', having only eight cores enabled while streaming is probably not going to be an issue. They'll have a workflow set up that leverages hardware processing upstream (audio equipment with DSPs for the major processing steps, to limit VST use in OBS, for example) and downstream (an external/secondary box for recording and streaming live). For these folks, quality, ease of use (though not necessarily ease of setup and troubleshooting), and reliability are paramount.

If there's a limitation to be found, I expect it to be found first by those trying to do all of this on the same box, 'in the box'. I'd balance that with the knowledge that a 7600X is plenty fast for 'gaming', so a 7950X3D with one CCD disabled or the future 7800X3D will both probably have enough grunt to spare, especially if GPU encoding is leveraged properly.



Power usage is impressive, but are we really impressed? It's turning off cores, running at lower frequencies, capped at a lower temperature of 89°C, and has a lowered TDP; should a lower power profile really surprise or impress under those circumstances? If it could maintain the same clocks as the 7950X, keep all cores running at the same time, and still draw less power, I'd be more impressed by that.
Since AMD is able to keep performance up while reducing power usage - well, that's impressive. They did so while adding cache and increasing complexity.

Granted, there's a pretty clear line drawn from decreased clock speeds and voltages to lower power usage, and it looks like the 3D V-Cache CCD is being binned for lower leakage, which should result in better efficiency at lower clock speeds. So not inexplicable, but going in the right direction.



I wonder how much MS paid AMD to gimp their CPU by making it rely on Xbox Game Bar. Inexcusable.
No need for conspiracy theories - AMD has been open about working with Microsoft to implement appropriate scheduling behavior for games on their dual-CCD parts that have one 3D V-cache-bearing die. From the benchmarks we've seen, while not perfect at launch (what is?), the solution is certainly working as intended, when it's working as intended.

Also note that AMD needs a software-based solution, since their scheduler isn't implemented in hardware (like Intel's P+E core solution), and that Intel still needed to work with OS developers to ensure that the P+E core arrangement was also supported.

For a personal example, while trying to run the FSPR Handbrake benchmark, I found that my 12700K running on Windows 10 last year would not properly schedule the Handbrake encode. This resulted in performance over an order of magnitude slower than expected, and I begrudgingly upgraded to Windows 11. I believe that this functionality has been backported to Windows 10 since, but as I've made 11 work for my needs, I haven't gone back and tested it.

As for 'why the Xbox Game Bar', well, I guess we should ask, how else should they have done it? How should it be exposed to gamers?
 
No need for conspiracy theories - AMD has been open about working with Microsoft to implement appropriate scheduling behavior for games on their dual-CCD parts that have one 3D V-cache-bearing die. From the benchmarks we've seen, while not perfect at launch (what is?), the solution is certainly working as intended, when it's working as intended.

Also note that AMD needs a software-based solution, since their scheduler isn't implemented in hardware (like Intel's P+E core solution), and that Intel still needed to work with OS developers to ensure that the P+E core arrangement was also supported.

For a personal example, while trying to run the FSPR Handbrake benchmark, I found that my 12700K running on Windows 10 last year would not properly schedule the Handbrake encode. This resulted in performance over an order of magnitude slower than expected, and I begrudgingly upgraded to Windows 11. I believe that this functionality has been backported to Windows 10 since, but as I've made 11 work for my needs, I haven't gone back and tested it.

As for 'why the Xbox Game Bar', well, I guess we should ask, how else should they have done it? How should it be exposed to gamers?
The AMD Chipset driver is required - that seems to be a good place for it.

I go to AMD Adrenalin software for my AMD video card settings. AMD has Precision Boost Overdrive for CPUs. None of that requires any Xbox bullshit from Windows.
 
I'm still keeping an eye on this one, but I'm not as enthusiastic about it as I was before. Here's hoping some updates, wherever they need to be, help it reach more of its potential. Since most of my builds end up running close to 10 years a pop, I was hoping this, or the 7900X3D, would be the platform to build my next one with. The 7800X3D is sounding promising, but I just don't know. Meanwhile, prices to do something with a 13900K keep dropping.
 
I'm still keeping an eye on this one, but I'm not as enthusiastic about it as I was before. Here's hoping some updates, wherever they need to be, help it reach more of its potential. Since most of my builds end up running close to 10 years a pop, I was hoping this, or the 7900X3D, would be the platform to build my next one with. The 7800X3D is sounding promising, but I just don't know. Meanwhile, prices to do something with a 13900K keep dropping.

There's not an urgent benefit for you to upgrade from your 5800X3D. At least according to current benches. In a couple of titles it's better, but for the most part at 1440p and 4K there isn't much of a difference.

Unless productivity apps are more important to you. In which case, yeah, a huge improvement.
 
There's not an urgent benefit for you to upgrade from your 5800X3D.
It's all good. I was thinking of a completely new build to replace the 4930K rig with, but the intent is that whatever comes next is built to perform as needed for the next 7-10 years.

Here's the house layout:
1. 5800X3D is the main rig in the living room.
2. 3700X is in the bedroom.
3. 4930K is in the cave.

Basically, whatever comes next will push the 5800X3D to the cave, and the new build will go in the living room as the main rig. GPU-wise, the jury is totally out until we hear more about what's next.
 
@Brent_Justice Thanks for the great review. I noticed on page 2 that your screenshot of the Xbox Game Bar overlay menu is from a camera. Yeah, I ran into the same issue as well. You can't take screenshots of the Game Bar. Pretty annoying. Gotta grab your phone or another camera and literally take a picture of your display's screen.

AMD could have saved itself a lot of work and game issues for gamers by simply giving both CCDs the 3D V-Cache and taking a slight frequency hit. We think this would be better so that we avoid issues like potentially having games not utilizing the correct CCD with cache and having to mess with XBOX Game Bar software versions, KGL profiles, services and processes running, and potential core parking issues with games. We think gamers and productivity users would have preferred cache on both CCDs with a slight frequency hit to avoid all of these issues. In our opinion, both CCDs should have had the 3D V-Cache, and a slight frequency hit should have been accepted.
I agree 100%. I'm kinda disappointed only one CCD gets the 3D cache. The CPU I would most likely get for myself would be the 7900X3D, but that has the same issue as the 7950X3D, with the 3D cache only being on one CCD. I don't like asymmetrical designs. Intel can f*ck off with their big.LITTLE P-core/E-core bullsh1t. Like most enthusiasts, I think, I'm excited to see how the 7800X3D turns out, although Brent brought up a good point in his review: the 3D cache gives far less of an uplift over the regular 7950X than the 5800X3D gave over the 5800X. We still need to see how the 7900X3D and 7800X3D turn out, but I think overall the 3D cache matters a lot less for Zen 4 than it did for Zen 3. I don't think the price jump over the non-3D CPUs is worth it, at least for the 7950X3D.

I usually keep Xbox Game Bar disabled, but I guess if I had one of these Zen 4 X3D CPUs I would have to re-enable it.

AMD software to make sure the OS and applications are using the right CCD reminds me of AMD's Dual Core Optimizer software from back when they came out with their first dual-core CPUs. I had to use that software on my Opteron 165 system. I remember back then also having to disable one of the cores for some games, cuz games at the time would get confused when they saw two cores (or two physical CPUs), which would result in worse performance, crashes, or odd behavior.

It does bring up the obvious question, what if a new game is released and not yet updated in the KGL game list? Will you just not receive game performance with 3D V-Cache? This is an obvious concern, and the right question to have. It does remind us of the old SLI/CrossFire profile days, and we have learned since then that the profile method of support is never preferred, cause when there isn’t a profile, the gamer suffers.
TechPowerUp said in their review: "It seems that the game detection failed in some titles like Spider-Man and God of War, which post higher numbers when we override the AMD software and use the 'prefer cache' BIOS setting. If you encounter such a case, it's really easy to fix though. Press Win+G to open Microsoft Game Bar overlay, click on settings and tick the 'remember this is a game' checkbox. Done! No fiddling with executable names or the registry. Still, I suspect that a vast majority of people will not be aware of this solution, or even the fact that the game detection might not work on all games, that's why AMD must ensure they track down these cases and fix them quickly."

So I assume you can do the same thing with a new game that is not in the KGL game list, and just add it yourself.
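
If you want to sanity-check whether a given game actually landed on the cache CCD, one low-tech way is to just watch per-core utilization while it runs. Rough Python/psutil sketch below; the mapping of logical CPUs 0-15 to the V-Cache CCD and 16-31 to the frequency CCD is an assumption you'd want to verify on your own system.

```python
# Quick-and-dirty check of which CCD is doing the work while a game runs.
# Assumption: logical CPUs 0-15 = V-Cache CCD, 16-31 = frequency CCD (7950X3D).
import psutil

CACHE_CCD = range(0, 16)
FREQ_CCD = range(16, 32)

def watch_ccd_load(samples=10, interval=1.0):
    for _ in range(samples):
        per_cpu = psutil.cpu_percent(interval=interval, percpu=True)
        cache_avg = sum(per_cpu[i] for i in CACHE_CCD) / len(CACHE_CCD)
        freq_avg = sum(per_cpu[i] for i in FREQ_CCD) / len(FREQ_CCD)
        print(f"V-Cache CCD: {cache_avg:5.1f}%   Frequency CCD: {freq_avg:5.1f}%")

if __name__ == "__main__":
    watch_ccd_load()
```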


Since most of my builds end up running close to 10 years a pop, I was hoping this, or the 7900X3D, would be the platform to build my next one with.
I heard rumors that Zen 5 is gonna be quite a departure in CPU architecture, while Zen 4 was still basically just an upgrade of the Zen 3 and Zen 2 design. Maybe you should wait and check out the Zen 5s if you want to build a system to last 10 years. I hope all Zen 5 CPU models get 3D cache, and on all CCDs. My current system is the first one that I've kept using for almost a decade (it's gonna be 9 years old this coming fall); usually I move to a new system after about 4 years. I didn't plan things that way though, it just kinda happened cuz I'm broke! LAWLZ
 
TechPowerUp said in their review: "It seems that the game detection failed in some titles like Spider-Man and God of War, which post higher numbers when we override the AMD software and use the 'prefer cache' BIOS setting. If you encounter such a case, it's really easy to fix though. Press Win+G to open Microsoft Game Bar overlay, click on settings and tick the 'remember this is a game' checkbox. Done! No fiddling with executable names or the registry. Still, I suspect that a vast majority of people will not be aware of this solution, or even the fact that the game detection might not work on all games, that's why AMD must ensure they track down these cases and fix them quickly."

So I assume you can do the same thing with a new game that is not in the KGL game list, and just add it yourself.
And you could force things to work with SLI/CrossFire profiles as well when a profile didn't exist. I'd argue the gamer/end-user shouldn't have to. It should just work, without the end-user having to give it a second thought. And it could have worked just fine this way if both CCDs had the cache in the first place. None of this would ever have been an issue: no reliance on the Xbox Game Bar, and no games seeing degraded performance.

I think gamers would have accepted a 200MHz frequency hit across all 16 cores to retain the full 16 cores for productivity while getting smooth, trouble-free, always-on best gaming performance at the same time. Plus, the productivity apps that benefit from the large cache would then also benefit, all the time, with no latency hit and without having to do tricks to force workloads onto a specific CCD.
 
I don't like asymmetrical designs. Intel can f*ck off with their big.LITTLE P-core/E-core bullsh1t.
I get that this is a personal opinion, but to be clear, AMD's solution is asymmetrical in a different way than the P+E core solution that Intel is using. Further, big.LITTLE has been around for quite a while; in Linux, it already just worked. Windows needed a bit more help, but also 'just works' now.

AMD's solution seems to present a paradoxical quandary to me. 3D V-Cache is only actually beneficial for workloads that are cache limited but not memory-bandwidth limited, meaning a working set too big to fit in the cache of a standard part, but small enough that the stacked cache keeps it from needing to reach out to main memory.
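
As a rough way to picture that sweet spot, a sweep like the one below (numpy random gathers, only a crude proxy for a real workload, with purely illustrative sizes) shows throughput dropping once the working set outgrows the L3; a 96MB L3 just pushes that cliff out to larger sizes than a 32MB L3 does.

```python
# Crude working-set sweep: time random gathers over arrays of increasing size.
# Throughput falls once the working set no longer fits in L3; a 96MB stacked
# L3 should push that cliff out further than the standard 32MB L3 does.
import time
import numpy as np

def gather_rate(size_mb, iters=5):
    n = size_mb * 1024 * 1024 // 8             # number of float64 elements
    data = np.random.rand(n)
    idx = np.random.randint(0, n, size=n)      # random access pattern
    best = float("inf")
    for _ in range(iters):
        t0 = time.perf_counter()
        data[idx].sum()                        # gather + reduce
        best = min(best, time.perf_counter() - t0)
    return n / best / 1e6                      # million gathered elements/sec

for mb in (8, 16, 32, 64, 96, 128, 256):
    print(f"{mb:4d} MB working set: {gather_rate(mb):8.1f} Melem/s")
```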

And the paradox is this: unlike the Big.Little schema, where the 'P' cores are always faster than the 'E' cores, the X3D die on a 7900X3D or 7950X3D will be slower than the standard die in situations that aren't cache constrained, because the standard die can clock higher. This means that there's no 'easy' way to identify where a process would be better to schedule. Not all games are cache sensitive after all, and there are some enterprise (so, 'workstation') type workloads that are cache sensitive.

I'm with Brent: this is a needless bit of complexity, and at the prices of the parts on offer, one could make a better case for a 13900K as a 'plug and play' high-end solution for a mix of gaming and workstation tasks.
 
...but to be clear, AMD's solution is asymmetrical in a different way than the P+E core solution that Intel is using
Yeah, I wasn't trying to say they were the same; those are two completely different architectures, and they are asymmetrical for very different reasons, with Zen 4 X3D just being asymmetrical because the 3D cache is only on one CCD. I just wanted to say that, in general, I don't like asymmetrical designs. Well, for desktop CPUs at least. Just one CCD having 3D cache (and slightly slower cores than the other CCD) is at least better than having some cores that are weaker and use a different instruction set, like with Alder Lake and Raptor Lake.

Further, big.LITTLE has been around for quite a while
Yeah, and I think it makes more sense in the mobile space. I don't think it was really necessary on desktop.

And the paradox is this: unlike the Big.Little schema, where the 'P' cores are always faster than the 'E' cores, the X3D die on a 7900X3D or 7950X3D will be slower than the standard die in situations that aren't cache constrained, because the standard die can clock higher. This means that there's no 'easy' way to identify where a process would be better to schedule. Not all games are cache sensitive after all, and there are some enterprise (so, 'workstation') type workloads that are cache sensitive.
Which is exactly why I feel CPU internal hardware resources, be they cores or cache or whatever-the-f*ck, should always be equal.


I think gamers would have accepted a 200MHz frequency hit across all 16 cores to retain the full 16 cores for productivity while getting smooth, trouble-free, always-on best gaming performance at the same time. Plus, the productivity apps that benefit from the large cache would then also benefit, all the time, with no latency hit and without having to do tricks to force workloads onto a specific CCD.
I absolutely would have accepted that frequency hit.
 
Just one CCD having 3D cache (and slightly slower cores than the other CCD) is at least better than having some cores that are weaker and use a different instruction set, like with Alder Lake and Raptor Lake.
I guess my point above is that the solution AMD is using is actually worse, for the moment. I get not liking Intel's approach; I've never really warmed up to big.LITTLE myself. But I can't deny that it works extremely well, and that AMD's approach is still half-baked from the end-user perspective. Nothing wrong with the hardware, of course, and when the software does its job, the gains are at least present when they're not outright impressive.

But, and that's a hard but, it's hard to imagine an optimal outcome. The 'logic' behind scheduling threads between the two CCDs gets complicated to describe, and seems like it'll be messy to implement.
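
Just to spell out why it gets messy, here's a toy sketch of the kind of decision the chipset driver and OS have to make for every new process. This is purely illustrative and not AMD's actual logic; the CCD-to-CPU mapping is an assumption, and every branch is a judgment call someone has to get right.

```python
# Toy illustration only -- NOT AMD's actual driver/scheduler logic.
# Rough sketch of the decision space for a dual-CCD part where one CCD has
# the extra cache (assumed to be CPUs 0-15 here) and the other clocks higher.
CACHE_CCD_CPUS = list(range(0, 16))   # assumed V-Cache CCD
FREQ_CCD_CPUS = list(range(16, 32))   # assumed frequency CCD

def preferred_cpus(flagged_as_game, per_app_override=None):
    """Pick a CPU set for a new process -- every branch is a judgment call."""
    if per_app_override == "prefer_cache":
        return CACHE_CCD_CPUS
    if per_app_override == "prefer_frequency":
        return FREQ_CCD_CPUS
    if flagged_as_game:                # KGL list / 'remember this is a game'
        return CACHE_CCD_CPUS          # ...but some games prefer frequency!
    return FREQ_CCD_CPUS               # default: let clocks win
```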

Which is exactly why I feel CPU internal hardware resources, be they cores or cache or whatever-the-f*ck, should always be equal.
I feel the same way, I just don't think we're going to get what we want :)
 