AMD Ryzen 7000 Series Processors to Feature Up to 16 Cores at Launch

Tsing

AMD users who are planning to pick up the red team's next flagship desktop processor later this year should expect a core and thread configuration similar to that of the current flagship.

 
Come on AMD.

Screw the cores. 8 cores are enough for just about everyone today. Add a few more for future-proofing (maybe 10 or 12?) and that's all anyone shopping for a non-HEDT system really needs. If you are doing lots of VMs or rendering/encoding, the type of stuff that calls for a ton of cores, you should probably be using an HEDT system anyway. The consumer parts do not need this many cores.

Just give us a few more PCIe lanes instead. Instead of the 24 lanes, with 16 used for the GPU, 4 for a single M.2 slot, and the last 4 for the chipset, how about we get something like 50% more? 36 total lanes would make all the difference. It would allow for a couple more expansion slots or more M.2 slots.

24 is just way too restrictive. It keeps the PC from truly being a PC, where you add the hardware you need or want, and turns it into a limited, reduced-function device.
 

I think you're overstating it a bit. Remember, we're talking about PCIe 5.0 lanes. Let's look at the bandwidth scale.

Gen 4 doubles Gen 3, and Gen 5 doubles Gen 4. So 4 PCIe 5.0 lanes are equivalent to 16 PCIe 3.0 lanes. I mean, from a bandwidth perspective I think we're OK. With proper management of traffic you could get quite a lot done on 4 PCIe 5.0 lanes, unless you needed dedicated, constant access to the lanes.
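
If you want to sanity-check that doubling, here's a rough Python sketch (my own approximate per-lane numbers, not anything from the article):

# Rough per-lane PCIe throughput in GB/s after encoding overhead -- approximate
# figures; each generation roughly doubles the previous one.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen, lanes):
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth(5, 4))   # ~15.8 GB/s
print(link_bandwidth(3, 16))  # ~15.8 GB/s -- 4 lanes of Gen 5 ~= 16 lanes of Gen 3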
 
You're definitely pointed in the right direction. I personally like having the cores (the 5900 I just got is AMAZING). Cap the cores at 12 like you suggest and open up the PCIe lanes. Win/win!
 
The lanes become less important when the bandwidth is so large. With PCIe 5 you'd have a tough time saturating it.
 
This. 24 lanes of PCIe 5.0 is the equivalent of 48 lanes of PCIe 4.0, though PCIe 5.0 components are pretty slim pickings.

How does a four-lane PCIe 5.0 slot look to a PCIe 4.0 device? For example, if you put a 3080 Ti in a PCIe 5.0 x4 slot, how much bandwidth does it get?
 
It downgrades the lanes used to PCIe 4.0. A PCIe 4.0 x16 card will not look like a PCIe 5.0 x8 device in a PCIe 5.0 slot. In essence, the doubled bandwidth of those 16 lanes is lost, which is why more lanes would be great!

To answer the question, though: a 3080 Ti put into a PCIe 5.0 x4 slot will in essence be a PCIe 4.0 x4 device, able to consume 7.877 GB/s (vs. 15.754 GB/s for PCIe 5.0).
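
Here's roughly how that negotiation works out as a quick Python sketch (same approximate per-lane numbers as above; negotiated_bandwidth is just a made-up helper for illustration):

# The link trains at the lower generation and the narrower width of the two
# ends. Per-lane GB/s are approximate.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}

def negotiated_bandwidth(slot_gen, slot_lanes, card_gen, card_lanes):
    gen = min(slot_gen, card_gen)
    lanes = min(slot_lanes, card_lanes)
    return gen, lanes, PER_LANE_GBPS[gen] * lanes

# A Gen 4 x16 card (like a 3080 Ti) in a Gen 5 x4 slot:
print(negotiated_bandwidth(5, 4, 4, 16))  # (4, 4, ~7.9 GB/s)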
 

The thing is, unless you use some sort of PLX chip, you can't pool all that bandwidth.

If you stick an x8 Gen 1 device in an x8 Gen 5 slot, it still uses all eight lanes, even though it is just operating in Gen 1 mode at a fraction (~1/16) of the max bandwidth those Gen 5 lanes offer.

Gen 5 devices are almost non-existent on the market today. Even Gen 4 devices are rare outside of GPUs and certain high-end NVMe SSDs.

Most expansion cards you are going to wind up buying are going to be Gen 3 or Gen 2. Some may even be Gen 1. Heck, if an old sound device works fine on x1 Gen 1, why redesign it?

And that's the problem. 24 lanes of Gen 5 is an absolutely massive amount of bandwidth, but you can't really make the most of it without some sort of PLX chip that keeps the lanes to the CPU maxed at Gen 5 speeds, pools that bandwidth, and divvies it up across devices and slots of different generations.

The problem is that PLX chips are expensive, add power consumption, and, most importantly, add latency on the PCIe bus, which is detrimental to performance. That is why we see so few motherboards use them.

So, what does this mean in practice?

Since most of us use x16-capable discrete GPUs, and don't want to risk them losing an ounce of performance (even though in general x8 is probably fine), we are going to seek to have them maxed out at x16.

So, 16 of the 24 lanes go to the GPU, and any GPU today is probably only using half the potential bandwidth of those 16 lanes, since it is only Gen 4, but the 16 lanes are used nonetheless, because the protocol negotiates the speed at the lower of the capabilities of the host and the GPU. x8 Gen 5 may provide the same amount of bandwidth as x16 Gen 4, but the GPU can't do anything with x8 Gen 5; if it drops to x8, it will be getting x8 Gen 4 bandwidth. So it runs at x16 Gen 4 and uses all 16 of those lanes.

So, once the GPU is out of the way we have 24 - 16 = 8 lanes left.

4 of those lanes are going to go to an NVMe device. Again, just like with the GPU, it is going to use all of its lanes, even though it can't connect with the latest-gen protocol.

Now we have 4 lanes left. These all go to the chipset.

The chipset is ironically the only device in the system that makes good use of its lanes, at least when the motherboard and CPU are the same generation. (Backwards compatibility sometimes allows you to stick a newer CPU into an older motherboard, in which case the chipset link will only connect at the older PCIe standard.) It uses some sort of internal PLX-like capability to spread the total x4 Gen 5 bandwidth over all the onboard devices, and in some designs shares whatever bandwidth is left over with some extra PCIe slots or a secondary M.2 slot. These are nice to have, but again, PLX = latency, and that reduces performance.

So I guess my point is, the total bandwidth really doesn't matter. Unless something changes about how these things work, you could have Gen 30 PCIe bandwidth and those 24 lanes would still be too restrictive. Gen 5 has some pretty impressive raw bandwidth numbers, but the way backwards compatibility and PCIe lane allocation work means that in most cases it is simply not usable for anything beyond the bare minimum of one GPU and one NVMe drive.
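
To put rough numbers on that lane budget, here's a back-of-the-envelope Python sketch (the devices are hypothetical but typical, and the per-lane GB/s figures are approximate):

# Tally of the 24 CPU lanes described above. Each link runs at the device's
# own generation, so most of the Gen 5 headroom sits idle.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969, 5: 3.938}
CPU_LANES = 24

devices = [
    ("GPU, Gen 4 x16", 4, 16),
    ("NVMe SSD, Gen 4 x4", 4, 4),
    ("Chipset link, Gen 5 x4", 5, 4),
]

used = 0
for name, gen, lanes in devices:
    used += lanes
    print(f"{name}: {lanes} lanes, ~{PER_LANE_GBPS[gen] * lanes:.1f} GB/s")

print(f"CPU lanes left over: {CPU_LANES - used}")  # 0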
 
I don't think it's the device that is the problem... it's the physical wiring. Just because you have nothing in a PCIe x4 slot doesn't mean those 4 PCIe lanes are available to be allocated to another device.

What this DOES mean is that if you only need one lane of bandwidth, you could have VERY nice, almost server-like boards with 16 PCIe Gen 5 x1 slots, each with PCIe 3.0 x4-like bandwidth available if the card is a PCIe 5.0 card. MEANING... you could have a literal **** TON of I/O managed by these chips, finally catching up to older-gen PowerPC or RS/6000 hardware.
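
Just to put a number on that, an x1 Gen 5 link has roughly the bandwidth of an x4 Gen 3 link (Python again, same approximate per-lane figures as the earlier sketches):

# Approximate per-lane GB/s; x1 Gen 5 vs. x4 Gen 3.
PER_LANE_GBPS = {3: 0.985, 5: 3.938}
print(PER_LANE_GBPS[5] * 1)  # ~3.9 GB/s for x1 Gen 5
print(PER_LANE_GBPS[3] * 4)  # ~3.9 GB/s for x4 Gen 3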
 
According to this: https://www.anandtech.com/show/17399/amd-ryzen-7000-announced-zen4-pcie5-ddr5-am5-coming-fall
"AM5 also brings quad-channel (128-bit) DDR5 support to AMD's platforms, which promises a significant boost in memory bandwidth."

Is that true?! Daaaaaang, it seems like AMD brought HEDT down to the mainstream segment! All we need now are more PCIe lanes, like @Zarathustra said.
It's true, in that each DDR5 module itself represents two 32-bit 'channels'. So with two modules you now have four 32-bit channels, which is, yes, quad-channel 128-bit. Thing is, DDR4 (and all prior generations) used one 64-bit channel per module, so two modules gave you dual-channel (64-bit) 128-bit memory instead.
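
The channel arithmetic, as a toy Python sketch (nothing vendor-specific, just the widths mentioned above):

# DDR5 exposes two 32-bit sub-channels per module; DDR4 exposes one 64-bit
# channel per module. Two modules give a 128-bit bus either way -- DDR5 just
# splits it into more independent channels.
def memory_bus(modules, channels_per_module, bits_per_channel):
    channels = modules * channels_per_module
    return channels, channels * bits_per_channel

print(memory_bus(2, 2, 32))  # DDR5: (4 channels, 128 bits)
print(memory_bus(2, 1, 64))  # DDR4: (2 channels, 128 bits)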
 
Hmm interesting.

This ~should~ help with latency some I guess - more parallel channels.
 
It actually does, apparently. In terms of 'latency', DDR5 is pretty difficult to quantify. It should be far worse than the best DDR4, if measurements are any indication (let alone the cycle latencies involved), but most performance testing of the better release DDR5 kits puts it equal to DDR4 at worst, and better at many things.

Future DDR5 dies will likely leave DDR4 far behind across the board. And by future, I mean probably the second major release from the three main manufacturers for higher-end bins.
 