AMD’s Mysterious Ryzen 7000 Series Processor Was Running at 5.5 GHz without an Overclock

Tsing

The FPS Review
Staff member
One of the more exciting segments of AMD's Computex 2022 Keynote was the portion where AMD Chair and CEO Dr. Lisa Su brought out a demo of a mysterious Ryzen 7000 Series processor running Ghostwire: Tokyo at a very impressive clock of 5.5 GHz.

 
I don't know that I would... A 15% IPC improvement should imply a theoretical 15% performance gain at the SAME clock speed. Add to that a roughly 20% improvement in clock speed, and the two compound to something like a 38% overall theoretical improvement for CPU tasks. Not bad. Couple that with the INSANE amount of I/O bandwidth and that's an impressive part.
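Worth noting that the two gains compound multiplicatively rather than add, which a quick back-of-envelope check shows (the 15% and 20% figures are just the rough numbers from this thread, not confirmed specs):

```python
# Speedups from IPC and clock frequency multiply, not add:
# time = instructions / (IPC * clock), so a gain in each compounds.
ipc_gain = 0.15    # claimed ~15% IPC improvement (unconfirmed)
clock_gain = 0.20  # rough ~20% boost-clock improvement

total_gain = (1 + ipc_gain) * (1 + clock_gain) - 1
print(f"Combined theoretical speedup: {total_gain:.0%}")  # -> 38%
```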
 
Let's see if I can run through the changes between Zen 3 and Zen 4:

The stuff I've seen more or less confirmed:
The CCDs jump to 5 nm, with 6 nm on the I/O die... and the move to DDR5 and PCIe 5.0. A bump in L2 cache to 1 MB per core. The AM5 socket allows for a higher TDP of 170W (up from 145W) - but that doesn't necessarily mean the new SKUs will all have a higher TDP. Higher boost clocks - up to 5.5 GHz. And an IGP on the I/O die!

There's a rumor that there could be low-power CCDs and high-performance CCDs - similar to the big.LITTLE-style hybrid designs Intel and Apple have moved to - but I can't find anything concrete about that.

There's a rumor that the V-cache from the 5800X3D will be present, but again, I can't find anything concrete.
 
The AM5 socket allows for a higher TDP of 170W (up from 145W)
This one I do not understand. They're running about the same number of pins as Intel, which does make sense, and Intel 12th-gen CPUs would pull 400W+ through the socket if you let them and could keep them cool. And that's nothing new.

Either AMD is sandbagging in terms of what the socket is capable of - just look at those 22 stages of 90A power on the premiere motherboards! - or these are 'marketing' numbers. The first crop of AM5 boards is more overbuilt than the top-end 12th-gen boards were when it comes to power delivery.

Higher boost clocks - up to 5.5 GHz.
Alder Lake can do this on ambient cooling, assuming it's very, very good ambient cooling and one has a very, very good sample (i.e., the best of the 12900KS CPUs). What will be impressive is Zen 4 hitting that speed more often, and doing so at less frightening voltage and, consequently, power-draw levels.

And an IGP on the I/O die!
This has been a criticism of mine concerning Ryzen for a long time. AMD has leading graphics IP, and they just left that on the table. The APUs were also seriously neglected.

I'm not expecting competitive gaming performance here; I'd actually not even want that - but getting modern graphics IP in there is huge. AMD really needs a competitor to Quick Sync, not just the hardware but also the industry support, and this will help them get the market share to drive adoption.

There's a rumor that there could be low-power CCDs and high-performance CCDs - similar to the big.LITTLE-style hybrid designs Intel and Apple have moved to - but I can't find anything concrete about that.
I'm getting the "yes, but not yet" impression. It seems that, on paper, Zen 4 cores won't be as stout as the P-cores found in Alder Lake, or will only just match them. Right now it makes sense for them to push their peak performance, as that's where they can really win mindshare; if they fall behind there, their marketing challenge grows.

And from what we've seen in the mobile space, AMD looks to be very close in terms of efficiency, and seemingly ahead when power limits drop to ultrabook levels.

There's a rumor that the V-cache from the 5800X3D will be present, but again, I can't find anything concrete.
I'm betting they're split on this. The V-cache limits thermal headroom, which puts a serious cap on raw compute throughput - so it's only useful when the CPU is cache-limited. And there are only so many consumer applications where that's the case.

In other situations, the v-cache makes the CPU both more expensive to produce and slower, and from the enthusiast perspective, harder to cool for peak performance.
 
I keep seeing people trying to pass off the iGPU as a good thing. All I see is more junk I’ll disable and never use. I’ve never used an integrated Intel GPU on any of my systems and have always appreciated that AMD didn’t make me buy junk I don’t want. If I want an iGPU, I’ll go buy an APU. Otherwise, leave that crap off my system and use the same transistor budget for more cache.
 
I keep seeing people trying to pass off the iGPU as a good thing. All I see is more junk I’ll disable and never use. I’ve never used an integrated Intel GPU on any of my systems and have always appreciated that AMD didn’t make me buy junk I don’t want. If I want an iGPU, I’ll go buy an APU. Otherwise, leave that crap off my system and use the same transistor budget for more cache.
I can't criticize you for your perspective - ideally, we wouldn't need iGPUs on desktop systems. And for purpose-built systems, I generally agree.

I can only point out that you and I are not representative of the majority of AMD's customers, and that while I prefer not to rely on iGPUs, I do find them useful in a pinch.
 
If I have to pay 50 bucks more but get a net 30% improvement in performance AND an iGPU on the same die... I'm not heartbroken. I think AMD was more interested in waiting for a faster memory solution, and the iGPU being only on DDR5-driven machines seems to echo that. But that's an effect looking for a cause.
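As a sanity check on that trade-off, a quick performance-per-dollar comparison (all prices and scores below are purely illustrative placeholders, not real SKU numbers):

```python
# Hypothetical price/performance comparison; numbers are illustrative only.
old_price, old_score = 450.0, 100.0   # baseline CPU, indexed performance
new_price, new_score = 500.0, 130.0   # $50 more, ~30% faster

old_value = old_score / old_price     # performance points per dollar
new_value = new_score / new_price
print(f"old: {old_value:.3f} pts/$, new: {new_value:.3f} pts/$")
# The newer part delivers more performance per dollar despite the higher price.
```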
 
If the iGPU is so small and simple, it's worth putting it in every chip. That way they don't miss any segment, from office machines on up.
 
I don't see the point of an iGPU when APUs exist.
If I want an iGPU, I’ll go buy an APU. Otherwise, leave that crap off my system and use the same transistor budget for more cache
Basically this.
 
I don't know that I would... A 15% IPC improvement should imply a theoretical 15% performance gain at the SAME clock speed. Add to that a roughly 20% improvement in clock speed, and the two compound to something like a 38% overall theoretical improvement for CPU tasks. Not bad. Couple that with the INSANE amount of I/O bandwidth and that's an impressive part.
From my reading of articles online I have seen 15% improved single threaded performance, not 15% improved IPC. Let me know if you've seen differently.
 
From my reading of articles online I have seen 15% improved single threaded performance, not 15% improved IPC. Let me know if you've seen differently.
Just going off of the AMD statement. I'm betting that 15% improvement in IPC is in select paths, not all.
 
Sounds impressive, but the proof is in the pudding.

I'll reserve judgment until it is actually on the market and we can put it through its paces.

I still remember the Pentium 4's inflated clocks.
 
The iGPU would actually be handy for office desktop systems, and for general configuration / troubleshooting if a normal GPU isn't available. BUT, why force it into the chip? Plenty of older AMD chipsets had graphics built onto the motherboard... yeah they were worthless for gaming, but fine for desktop/server duty. Why take up valuable CPU die space when you could throw it onto the motherboard chipset?
 
I'm not sure where you've seen a claimed 15% IPC improvement?
OK, you are correct. I was going off of articles I had seen posted and apparently misunderstood "15% improvement on IPC performance," or however it was phrased, as a straight-up IPC improvement. My bad.
 
I keep seeing people trying to pass off the iGPU as a good thing. All I see is more junk I’ll disable and never use. I’ve never used an integrated Intel GPU on any of my systems and have always appreciated that AMD didn’t make me buy junk I don’t want. If I want an iGPU, I’ll go buy an APU. Otherwise, leave that crap off my system and use the same transistor budget for more cache.
I can say, of all the Intel CPUs I've had with an IGP:

I'd never pay extra for the feature, but I'm usually glad it's there at some point or another in the lifespan of the build.

Intel's IGP was also nice for Quick Sync when I could get it working, but I would have issues with drivers and could never get it to work reliably.
 
but I would have issues with drivers and could never get it to work reliably.
This does depend on workloads a bit; I mostly use it for Plex, when using it deliberately. The key here is that generally speaking, you shouldn't notice that it's working unless you're looking specifically for it.
 
The iGPU would actually be handy for office desktop systems, and for general configuration / troubleshooting if a normal GPU isn't available. BUT, why force it into the chip? Plenty of older AMD chipsets had graphics built onto the motherboard... yeah they were worthless for gaming, but fine for desktop/server duty. Why take up valuable CPU die space when you could throw it onto the motherboard chipset?
Wondered about this too, but I think it makes less sense.
Does anyone do this at all for the general market? Unless I've missed something, motherboards with onboard video are something from the distant past, like pre-2000. As complicated as CPUs have gotten, with the I/O and plenty of stuff that also used to be on the motherboard, I don't see how motherboards with video make sense at this point - unless you're making basically a powerful GPU into a motherboard, essentially a barebones console.
 
Well, if you consider the availability and cost of dedicated GPUs, having an iGPU on every available CPU makes a lot of sense. If I want to build my wife or son an SFF system to do schoolwork, surf the web, and watch YouTube, I can do that without having to invest in any kind of dedicated GPU. That's a win. And if it isn't really adding to the base cost... win-win.
 