No Manual Overclocking for AMD Ryzen 7000X3D Series CPUs

I met with Scott as well. The boost frequencies will be different on each CCD, with the non-3D CCD being slower.

The way he explained it to me is that for lightly threaded workloads, the non-3D side of the 7950X3D should perform the same as the 7950X. Of course, this assumes the threads get to the right cores. Under a full Cinebench load, performance will be lower than the 7950X; however, the margin of difference will not be as large as the 7800X3D vs. the 7800X.

He also mentioned some games would still be better off getting assigned to the non-3D side based upon their performance characteristics.

I would have expected the non-3D CCD to have a higher boost, given it doesn't have the cache getting in the way.
 
So if there is a more performant CCD on these CPUs depending on the task... are they writing updated code for the various kernel drivers to properly exploit the correct cores/CCDs based on the process in question?
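Not that this is how AMD's chipset driver or the Windows scheduler will do it, but if anyone wants to experiment in the meantime, you can pin a process to one CCD by hand. A minimal sketch using the psutil package; the assumption that logical CPUs 0-15 are the V-Cache CCD is mine, so verify your own topology with lscpu, Coreinfo, or Ryzen Master first:

```python
# Manual affinity experiment, not AMD's driver logic. Needs the psutil package;
# works on Windows and Linux.
import psutil

# ASSUMPTION: logical CPUs 0-15 (8 cores + SMT siblings) are the V-Cache CCD.
# The numbering varies by OS and BIOS settings, so confirm it on your system.
VCACHE_CPUS = list(range(16))

def pin_to_vcache_ccd(pid: int) -> None:
    """Restrict an already-running process (a game, a BOINC task, ...) to one CCD."""
    proc = psutil.Process(pid)
    proc.cpu_affinity(VCACHE_CPUS)        # set the affinity mask for the whole process
    print(proc.name(), "now limited to CPUs:", proc.cpu_affinity())

if __name__ == "__main__":
    import sys
    pin_to_vcache_ccd(int(sys.argv[1]))   # usage: python pin_ccd.py <pid>
```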

Thinking about it logically, this sounds similar to Intel's big/little solution.
 
The difference is that Intel has silicon on the CPU, in hardware, to direct those threads, whereas AMD does not; it relies on software/Windows.
 
Ok, I get it... my question is, do we KNOW that is the case? If not, then I would wonder why it isn't. Perhaps for tuning purposes.
 
This is disappointing to say the least.

The 7950X is a strong CPU for Primegrid, and the 5800X3D is also very strong due to the extra L3. Given that the 7950X3D will only have the extra cache on half of its CCDs, that means Primegrid will only benefit while using half of the 7950X3D.
 

AMD seems to think not many apps benefit from the cache, outside of gaming. I am not familiar with Primegrid, I'll check it out.
 
Ok, I get it... my question is, do we KNOW that is the case? If not, then I would wonder why it isn't. Perhaps for tuning purposes.

Yes, we know this is the case. The question is, is the hardware really necessary? AMD seems to think it isn't and that software is just fine; Intel seems to think it needs a hardware solution, so........... /shrug
 
This is disappointing to say the least.

The 7950X is a strong CPU for Primegrid, and the 5800X3D is also very strong due to the extra L3. Given that the 7950X3D will only have the extra cache on half of its CCDs, that means Primegrid will only benefit while using half of the 7950X3D.
I really wonder about the logic used here. Did they not do it because they wanted to be able to comfortably quote an x.x GHz rating still, and were worried that less savvy consumers wouldn't understand why the cache is such a big deal?
 
Yes, we know this is the case. The question is, is the hardware really necessary? AMD seems to think it isn't and that software is just fine; Intel seems to think it needs a hardware solution, so........... /shrug
That could just be Intel being Intel and wanting to be the only people in control. It will be interesting to see how it goes when it comes to real-world testing. Is having the extra cache on only one CCD of the bigger processors going to be more of a bottleneck than a help? If threads that would benefit from the cache get assigned to the other CCD, with out-of-order processing, is the reassembly of the threads going to cause it to perform as if none of the processes had the extra cache in the first place?

It's not as if Microsoft has been Johnny-on-the-spot with their thread allocations for AMD and its CCD implementations thus far.
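One way to actually see where a workload's threads are landing, at least on Linux, is to read each thread's last-used CPU out of /proc. A rough sketch; which logical CPUs belong to the V-Cache CCD is something you'd confirm per system:

```python
# Observation helper: report which logical CPU each thread of a process last ran on.
# Linux only (reads /proc); mapping CPU numbers to CCDs is up to you (see lscpu -e).
from pathlib import Path

def thread_cpus(pid: int) -> dict:
    """Map thread id -> logical CPU it last executed on."""
    cpus = {}
    for task in Path(f"/proc/{pid}/task").iterdir():
        stat = (task / "stat").read_text()
        # Everything after the closing ')' has fixed positions; the 'processor'
        # field (field 39 of /proc/<pid>/stat) is the 37th field after it.
        fields = stat.rsplit(")", 1)[1].split()
        cpus[int(task.name)] = int(fields[36])
    return cpus

if __name__ == "__main__":
    import sys
    for tid, cpu in sorted(thread_cpus(int(sys.argv[1])).items()):
        print(f"thread {tid}: last ran on CPU {cpu}")
```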
 

Also consider that these are different implementations; a CCD is not the same thing as Intel's Performance and Efficiency cores. A CCD is a complex of cores, 8 cores in each CCD, connected by Infinity Fabric. It really is apples to oranges.

Also, Zen has had multiple CCDs since its introduction, which was the whole basis of the chiplet technology. As such, Microsoft has been working with AMD for a long time on managing threads across the complexes. This isn't new. Also, the 5800X3D has been out a while and already went through its growing pains. In interviews, I've seen AMD confident about where they are with this, so we'll just have to see how it all plays out in the real world when reviews come out.

It's always smarter to go with a software solution if you can save money by not having to implement hardware; if software can solve the problem, it's the cheaper solution.
 

This is the first time AMD has introduced a processor with CCDs that do not have essentially matched performance from a design perspective.

I look forward to this working fine out of the box. But an 8000-series CPU where ostensibly ALL the CCDs are X3D V-Cache enabled, I think, might make more sense and have the bugs ironed out by then. Or... perhaps they have already worked with MS and Linux and other developers to solve this problem ahead of time, and that is why they didn't introduce a 59**X3D CPU last gen.
 
The CCD itself is the same; it just has stacked cache on top, so underneath the cache the CCD is the same. But I get what you are saying: technically they have different performance profiles. In this case, though, the difference is much slighter than between P-cores and E-cores, which are two completely different CPU architectures based on entirely different structures.
 
Puget Systems did some testing with a Ryzen 7950X in "content creation" workloads to determine the impact of Precision Boost Overdrive (PBO) and Core Performance Boost (CPB) on performance and thermals. While the results may not come as a surprise to those on recent AMD or even Intel platforms, they're worth bearing in mind for anyone more concerned about energy, cooling, noise, stability, etc., than about squeezing every last drop of performance from their CPU.

(PBO and CPB were not tested independently; they were either both enabled or both disabled in the benchmarks, so for the sake of clarity I won't mention CPB outside of quotations.)

PBO's effect on performance was not negligible in every benchmark, but in many cases there were no gains at all. The same was not true of its effect on temperature. See the graph and paragraphs below the heading "Ryzen 7950X PBO and CPB CPU Temperature" from the link:
On the whole, it is amazing how much disabling PBO and CPB reduces the CPU temperature. On heavy sustained loads like Cinebench multi core, we went from 95C to just 65C – that is a 30C drop in CPU temperature!

Even relatively lighter loads like Photoshop went from temperatures ranging from 60-80C with PBO/CPB enabled to just ~42C with them off. Others like Premiere Pro, After Effects, and DaVinci Resolve were frequently hitting 95C with PBO/CPB, and dropped to 40-62C with them disabled. As a reminder, these are workloads that saw no difference in performance, yet the 7950X CPU is as much as 30C, or even 40C, cooler with PBO/CPB disabled.
It's also worth mentioning that motherboards deserve special attention, because their automatic or default settings cannot be trusted. Puget observed that by default, the motherboard they used overclocked the CPU beyond AMD's specifications. The board was a Gigabyte X670E AORUS MASTER, but they noted that the behavior is common across brands:
The issue is that the motherboard we used (the Gigabyte X670E AORUS MASTER) was defaulting to overclocking the CPU beyond AMD's official specifications. From what we have seen, this isn't restricted to Gigabyte motherboards, but something that almost every brand seems to be doing. This also isn't something entirely new, or restricted to AMD. On the Intel side, we have dealt with a setting called "MultiCore Enhancement" for years, which allows the CPU to run all the cores at the maximum boost frequency when they should be scaling based on the number of cores that are being used.
(emphasis mine)

I've read of cases in which overaggressive motherboard settings have destroyed CPUs in a matter of weeks.


Again, these results will come as no surprise to many here. And this has nothing to do with the "3D" parts specifically, of course, but it does challenge the wisdom of overclocking even by generally accepted automatic methods that require little to no user intervention.
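If anyone wants to reproduce the comparison on their own box, a minimal logging sketch like the one below can be run during a Cinebench (or any other) pass, once with PBO enabled and once with it disabled. It assumes Linux with the AMD k10temp sensor and the third-party psutil package; sensor names and labels differ on other platforms, so treat it as a starting point rather than a drop-in tool:

```python
# Sample package temperature, average clock, and load once per second so a
# PBO-on run of a workload can be compared against a PBO-off run.
# Assumes Linux with the AMD k10temp sensor and the psutil package installed.
import time
import psutil

def log_samples(seconds: int = 120, interval: float = 1.0) -> None:
    for _ in range(int(seconds / interval)):
        temps = psutil.sensors_temperatures().get("k10temp", [])
        tctl = next((t.current for t in temps if t.label in ("Tctl", "Tdie")), None)
        freq = psutil.cpu_freq().current      # MHz, averaged across cores
        load = psutil.cpu_percent()           # utilization since the previous call
        print(f"temp={tctl} C  freq={freq:.0f} MHz  load={load:.0f}%")
        time.sleep(interval)

if __name__ == "__main__":
    log_samples()
```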
 
AMD seems to think not many apps benefit from the cache, outside of gaming. I am not familiar with Primegrid, I'll check it out.
Primegrid basically looks for large prime numbers.

The sub-projects that run on CPUs have a cache footprint (looking for the chart now), and the tasks will run much faster if you can keep all the data within the L3 cache. I believe the highest cache requirement is for the GFN22 sub-project, which needs somewhere around 40MB of L3 cache. These numbers will continue to go up as larger and larger primes are searched.


[attached chart: L3 cache usage for each Primegrid sub-project]
The chart shows how much L3 cache each sub-project uses on the CPU, if you plan on doing some L3 cache testing.
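To put rough numbers on the earlier point about only half the chip benefiting, here's a back-of-the-envelope sketch. The ~40MB GFN22 figure is the one quoted above, the per-CCD L3 sizes (32MB on a plain Zen 4 CCD, 96MB on the 7950X3D's V-Cache CCD) are the published specs, and the assumption that concurrent tasks share no data is mine:

```python
# Back-of-the-envelope check: do N concurrent tasks stay inside one CCD's L3?
# ~40 MB per GFN22 task is the figure quoted above; assumes tasks share no data.

def fits_in_l3(task_mb: float, tasks_on_ccd: int, l3_mb: float) -> bool:
    """True if the combined working set of the tasks fits in that CCD's L3."""
    return task_mb * tasks_on_ccd <= l3_mb

CCDS = [("plain Zen 4 CCD", 32), ("7950X3D V-Cache CCD", 96)]

for label, l3 in CCDS:
    for n in (1, 2, 4, 8):
        verdict = "fits" if fits_in_l3(40, n, l3) else "spills past L3"
        print(f"{n} x GFN22 (~40 MB each) on {label} ({l3} MB L3): {verdict}")
```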
 
It's also worth mentioning that motherboards deserve special attention, because their automatic or default settings cannot be trusted. Puget observed that by default, the motherboard they used overclocked the CPU beyond AMD's specifications. The board was a Gigabyte X670E AORUS MASTER, but they noted that the behavior is common across brands:
This is interesting, because I have had PBO disabled on my Asus motherboard since... I built the rig. And I didn't think anything of it. But I just went and checked in Ryzen Master, and sure enough, it was still using the PBO profile - even though it was disabled in the BIOS.

Turned that off, turned on Eco mode -- I notice absolutely zero difference in gaming or general use. I'd say something about temps, but even while gaming, with the games I play it's rare to see the CPU go past 5% usage, and I still see the clocks get up and hit 4.7-4.8 GHz when they should. Granted, I don't have a top-notch cooler or anything on there, but still.

It's like... PBO only exists to cheese some specific benchmarks. (It might; I am not officially making any accusation. I think the concept behind PBO is a very good idea; it just seems like it does as much harm as it does good.)
 