News: Sneak Peek at MSI X570 MEG ACE

alxlwson

Anand has a pretty sweet peek at the new MEG X570 board, dubbed "ACE".

Fully loaded with Gen 4.0 PCI-E, active PCH cooling, a sweet-*** color theme of black and gold paying homage to AMD50, and no RGB in sight! Could this be an AM4 successor to the highly acclaimed MEG Creation X399 board?!

IMG_20190526_111711_575px.jpg
 
Another good looking board out of MSI. I do see the heat pipe going to the chipset. I haven't seen that in a very long time. Judging by the vents, there might be an active fan in there too.
 
I can definitely see a fan under those vents.

It will be interesting to see how companies offer monoblocks for these boards.
 
3 M.2 slots are interesting...

Not really new. There are plenty of existing motherboards that do that. However, PCIe 4.0 should actually give us the available bandwidth to fully leverage a 3xNVMe RAID 0 array.
 
Looking forward to seeing some reviews on this along with their new CPUs.
 
I really want to see what the street prices for these boards are. For what they cost I may end up waiting to see what Threadripper 3 costs.
 
What!!! No RGB!! 😭

The MEG Creation boards aren't targeted at the gamer. That said, I do expect something to light up on that board, even if it's just the dragon logo on the chipset. That's all that GIGABYTE does on its Designare series. I'm pretty sure the last MEG Creation boards did that as well.
 
I'm not crazy about the racy aesthetics, but if it winds up being a good board, it doesn't matter. I don't spend that much time looking at my desktop after I've built it. My case has a window, but the case is on the floor with the window side up against a wall, so who cares? :p

Besides, I rarely have an audience in my office at home.

What I am really interested in is how they make all of these PCIe ports work with so few combined lanes off of the CPU and chipset. Some sort of Multiplexing maybe?

I'm really hoping for some creative solutions in this regard so I can make the most of the few lanes these systems have. At an absolute minimum I'll need 16x Gen 3 for a GPU, 4x Gen 3 for an M.2 Drive and 8x Gen 2 for my dual 10gig NIC. As long as I can get that to work at full speed, I'm in for a Ryzen 3000 system.

Anything extra in addition to that is all bonus and appreciated.

I really wish they wouldn't waste lanes and power on onboard WiFi though. No desktop build of mine will ever use WiFi in any way, shape or form.
 
I'm not crazy about the racy aesthetics, but if it winds up being a good board, it doesn't matter. I don't spend that much time looking at my desktop after I've built it. My case has a window, but the case is on the floor with the window side up against a wall, so who cares? :p

Besides, I rarely have an audience in my office at home.

The aesthetics don't bother me. I'm not wild about them in this particular case, but I'll need to see it in person. I don't think it looks bad, but there are better looking boards out there.

What I am really interested in is how they make all of these PCIe ports work with so few combined lanes off of the CPU and chipset. Some sort of Multiplexing maybe?

Well, it's pretty simple actually. The Ryzen 1000 and 2000 series CPUs have 24x PCIe lanes: 16x for graphics, 4x dedicated to an M.2 storage solution, and 4x dedicated as a link to the PCH. The PCH has another PCIe controller, in this case a Gen 2.0 controller, which gives you 8x PCIe Gen 2.0 lanes. The manufacturers can effectively do what they want with those 8x PCIe 2.0 lanes. They also have the option to split the 4x dedicated M.2 storage lanes into either two x2 PCIe M.2 slots or an x2 M.2 slot and SATA ports. Though, at the time of this writing, I've never seen any motherboard manufacturer do that.

The PCIe slots themselves are typically split between the PCH's x8 PCIe lanes and the CPU's x16 lanes. The primary PCIe or PEG slot gets all x16 lanes until another slot is populated. Then PCIe switches turn that into an x8 slot and grant the other x8 lanes to the secondary PCIe x16 slot, which is only capable of x8 lanes of bandwidth electrically. This can usually be further divided into an x4 slot for an x8/x4/x4 configuration, such as running 3x AMD GPUs in CrossFire. All three of these PCIe "x16" slots are actually still coming off the CPU, with the bandwidth being split across the three slots. SLI certification cannot be sought for 3-Way configurations with less than an x8/x8/x8 configuration, which is why no AM4 motherboard has 3-Way SLI certification.

Typically, any leftover lanes from the PCH which were not dedicated to an additional M.2 slot, GbE NIC, SATA ports, etc. are then available for additional PCIe Gen 2.0 slots. This usually results in a configuration where the CPU gives you x8/x4/x4 and the PCH gives you two or three PCIe x1 Gen 2.0 slots.
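
If it helps, here's the lane budget I just described laid out as a quick sketch (the numbers are the ones from this post, purely illustrative):

[CODE]
# Rough sketch of the AM4 lane budget described above (Ryzen 1000/2000 era).
# Numbers come straight from this post; treat them as illustrative, not a spec dump.

CPU_LANES = {
    "graphics (PEG slots)": 16,  # x16, or bifurcated to x8/x8 or x8/x4/x4
    "M.2 storage":           4,  # dedicated x4 link to one NVMe slot
    "PCH uplink":            4,  # link between the CPU and the chipset
}

PCH_GEN2_LANES = 8               # general-purpose Gen 2.0 lanes for the board vendor

def peg_bifurcation(populated_slots):
    """How the CPU's x16 graphics lanes split as more 'x16' slots are populated."""
    return {1: [16], 2: [8, 8], 3: [8, 4, 4]}[populated_slots]

if __name__ == "__main__":
    print("CPU lanes total:", sum(CPU_LANES.values()), CPU_LANES)   # 24
    print("PCH Gen 2.0 lanes:", PCH_GEN2_LANES)
    for n in (1, 2, 3):
        print(n, "slot(s) populated ->", peg_bifurcation(n))
[/CODE]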

This is where Intel's Z270, Z370 and Z390 chipsets all do better. The Intel CPUs give you x16 PCIe lanes which can generally only be split twice. The PCH, however, gives you 24 PCIe Gen 3.0 lanes. In theory, AMD has a storage advantage, as its M.2 slot (at least one) goes straight to the CPU. However, in actual speed testing there is no advantage. The DMI 3.0 bus has virtually no overhead between the PCH and the CPU. It is technically shared with anything connected to the PCH, but outside of specific benchmarking situations this is all academic, as it's not a problem in the real world unless you start doing M.2 RAID arrays. In which case, AMD has the same problem.

On HEDT, it's basically the same for Intel's X299 as it is for Z270 on up. The CPUs have more PCIe lanes, but the chipset has the same 24 PCIe Gen 3.0 lanes. For AMD's X399, it's the same story: the CPU has far more PCIe lanes, and the motherboard's chipset is the same as it is for the mainstream segment. Because the CPUs' PCIe controllers have up to 48 lanes (Intel) and 64 lanes (AMD), you get a ton more flexibility on the HEDT side. The higher core counts, increased I/O capacity and capability, as well as the increased memory bandwidth, are why people like these platforms.

I'm really hoping for some creative solutions in this regard so I can make the most of the few lanes these systems have. At an absolute minimum I'll need 16x Gen 3 for a GPU, 4x Gen 3 for an M.2 Drive and 8x Gen 2 for my dual 10gig NIC. As long as I can get that to work at full speed, I'm in for a Ryzen 3000 system.

Anything extra in addition to that is all bonus and appreciated.

First off, you do not need x16 lanes of PCIe Gen 3.0 for the GPU. This has been debunked over and over again. It takes very specific use cases for there to be any difference here, and it's pretty minimal to non-existent. It shows up in benchmarks as a few FPS. Depending on your resolution and GPU, this is a non-issue. All AMD Ryzen X370, X399, X470 and X570 motherboards give you a dedicated Gen 3 x4 connection for an M.2 SSD. That's because those lanes are dedicated to the processor and bypass the PCH. This has been the case from day one.

The x8 lanes for a 10GbE NIC is where you will have a problem. The reason is that the CPU's x16 lanes dedicated to graphics generally turn into x8/x8 implementations when you use the second PCIe x16 slot. The only way you avoid this is on X399. With X570, this may not end up being the case; I don't know for sure at this point. But on the mainstream platform thus far, this simply has not ever been possible without the use of some god-awful, power-sucking, heat-generating, latency-inducing PLX chip. And that's always been a sham, as you still end up going over the CPU's limited x16 PCIe lanes for graphics at the end of the day. So you are still effectively doing the same thing as having an x8/x8 configuration, albeit with some more flexibility in terms of slot layout and PCIe lane switching.

I really wish they wouldn't waste lanes and power on onboard WiFi though. No desktop build of mine will ever use WiFi in any way, shape or form.

Yeah, I hear you on that, but the fact is that many people do. They live in apartments and don't want to run cords everywhere. I've built probably half a dozen machines or so for family and friends who are all using the WiFi built into the motherboards, no matter how much **** I give them for it. The gamers out of the bunch get disconnected while playing games periodically, but still refuse to run wires. It's maddening.
 
Yeah, since my 10Gig NIC is Gen 2 it should have more than enough bandwidth off of the 4x Gen 4 lanes going to the chipset, even with other things going at the same time.

I'm hoping at least one motherboard maker gives us a board where I can use 16x off of the CPU for the GPU, 4x off of the CPU for a single m.2 drive, and then have an 8x slot off of the chipset for my NIC. From what I understand this type of configuration should be possible. I just don't know how likely it will be... :/


First off, you do not need x16 lanes of PCIe Gen 3.0 for the GPU. This has been debunked over and over again. It takes very specific use cases for there to be any difference here, and it's pretty minimal to non-existent. It shows up in benchmarks as a few FPS. Depending on your resolution and GPU, this is a non-issue.

Yeah, I know this intellectually, both from past tests, and from looking at the PCIe bandwidth utilization chart in RivaTuner which I usually have installed as part of MSI Afterburner, and have open on a side window when I run games.

I just can't seem to get over the urge to have the full 16x to the GPU "just in case", primarily because with everything else in my system I have some margin. On my GPU I have next to no margin. I need it to perform as fast as it possibly can or I don't get my 60fps at 3840x2160.

The x8 lanes for a 10GbE NIC is where you will have a problem. The reason is that the CPU's x16 lanes dedicated to graphics generally turn into x8/x8 implementations when you use the second PCIe x16 slot. The only way you avoid this is on X399. With X570, this may not end up being the case; I don't know for sure at this point. But on the mainstream platform thus far, this simply has not ever been possible without the use of some god-awful, power-sucking, heat-generating, latency-inducing PLX chip. And that's always been a sham, as you still end up going over the CPU's limited x16 PCIe lanes for graphics at the end of the day. So you are still effectively doing the same thing as having an x8/x8 configuration, albeit with some more flexibility in terms of slot layout and PCIe lane switching.

Well, X570 supposedly has 16x PCIe lanes the motherboard vendors can use as they see fit. I hope at least one of them makes an x8 slot available off of the chipset that I can stick my NIC in. I'm not concerned about the bandwidth. It's a 2x 10 gigabit NIC, so we are talking 20 gigabit. Both the NIC and the PCIe lanes are full duplex, so I shouldn't have to account for that.

The NIC is Gen 2 x8, so it is at most capable of using 4,000 MB/s out of the ~8,000 MB/s available from the 4x Gen 4 lanes going from the CPU to the chipset. This leaves roughly half of that bandwidth for the onboard devices, which is probably fine. Sound probably doesn't use much, given that old sound cards were fine using x1 Gen 1 slots. That, and the NIC will likely never come close to this 4,000 MB/s max, since 20 gigabit is only ~2,560 MB/s. There is likely some overhead on top of that for the protocol, but I can't imagine it would max out the full 4,000 MB/s of x8 Gen 2.
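
Roughly, here's the back-of-the-envelope math I'm doing (nominal per-lane rates, ignoring protocol overhead):

[CODE]
# Back-of-the-envelope check of the numbers above (nominal per-lane MB/s,
# ignoring protocol/encoding overhead).
PER_LANE_MBS = {1: 250, 2: 500, 3: 1000, 4: 2000}

nic_slot_ceiling = 8 * PER_LANE_MBS[2]   # x8 Gen 2 NIC link:        4,000 MB/s
chipset_uplink   = 4 * PER_LANE_MBS[4]   # x4 Gen 4 CPU<->chipset:  ~8,000 MB/s
nic_wire_speed   = 20 * 1024 / 8         # dual 10GbE, ~20 Gbit/s:  ~2,560 MB/s

print("NIC slot ceiling:", nic_slot_ceiling, "MB/s")
print("Chipset uplink:  ", chipset_uplink, "MB/s")
print("NIC wire speed:  ", nic_wire_speed, "MB/s")
print("Uplink headroom: ", chipset_uplink - nic_wire_speed, "MB/s for everything else")
[/CODE]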

So I am keeping my fingers crossed that at least one motherboard vendor makes this kind of configuration available.

It should be possible to do, but my desires are unconventional these days, so I probably won't get what I want.


Yeah, I hear you on that, but the fact is that many people do. They live in apartments and don't want to run cords everywhere. I've built probably half a dozen machines or so for family and friends who are all using the WiFi built into the motherboards, no matter how much **** I give them for it. The gamers out of the bunch get disconnected while playing games periodically, but still refuse to run wires. It's maddening.


I know. I don't understand the strong dislike people have of wires. They aren't THAT difficult to run, and if you think they are messy it is easy to hide them behind various raceways and organizers.

Wired solutions will always be faster, more reliable and more secure.

I only use wireless anything if I absolutely have to, for devices like phones where their entire purpose is that they are mobile. Anything stationary gets a dedicated wired ethernet line. I don't even use bluetooth or wireless mice and keyboards if I can help it. I'd rather everything be wired.
 
I put this as an answer to your thread elsewhere, as it's relevant here. Plus, there are more things at the bottom in reply to your post above.

Well, except for what's already leaked out there or what some sites have been authorized to say so far, X570 and Ryzen 3000 series CPUs are under embargo until 7/7/19. That said, I don't have much information that's not already out there in some form. From the block diagrams I've seen online, it looks like X570 incorporates some type of switching technology on the back end, like Intel's HSIO, that allows the motherboard vendors some flexibility they didn't have previously for configuring their motherboard features.

View attachment 14 (X570 block diagram)

This is an example of what I'm talking about. You see where it says: "Pick one". That's for motherboard manufacturers to choose from. They can decide how they want to adjust the configuration.

So, from the CPU side nothing has changed. You get 20x PCIe lanes, 4x of which are reserved for communication with the PCH. Then you have 4x dedicated USB 10Gbps ports and a dedicated storage interface comprised of four PCIe lanes. These can be allocated as a single x4 NVMe slot, split into two x2 NVMe slots, or used for 2x SATA ports and a single x2 NVMe slot. None of the manufacturers are likely to do anything but use this for a single x4 NVMe M.2 slot. It will almost always be a type 22110 slot.

On the PCH side, we have a dedicated x8 PCIe slot, and after that the choices are in the diagram. We will probably see several variations on one "pick one" category, and the second will probably be allocated as SATA ports, if I had to guess. Or we'll see some PCIe switches which will allow us to have slots or an extra 2x or 4x cluster of SATA ports which will be shared. That will give us the potential for 6 to 8 SATA 6Gbps ports, and the last x4 PCIe lanes that are shared with the SATA ports will almost certainly be allocated as another x4 M.2 slot, this one potentially supporting SATA as well as PCIe devices.
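
Laying out my reading of that diagram as a rough sketch (again, this is my interpretation of the leaked material, not an official breakdown):

[CODE]
# Sketch of the X570 layout as I'm describing it above -- my reading of the
# leaked block diagram, not an official breakdown.

CPU = {
    "graphics":     16,  # x16, or x8/x8 across two slots
    "storage":       4,  # x4 NVMe, or two x2 NVMe, or x2 NVMe + 2x SATA
    "PCH uplink":    4,  # PCIe Gen 4 link to the chipset
    "USB 10Gbps":    4,  # dedicated USB ports (not general-purpose PCIe)
}

PCH = {
    "dedicated x8":  "PCIe lanes/slot off the chipset",
    "pick one (A)":  ["PCIe x1/x4 slots", "x4 M.2 slot", "SATA ports"],
    "pick one (B)":  ["PCIe x1/x4 slots", "x4 M.2 slot", "SATA ports"],
}

for side, layout in (("CPU", CPU), ("PCH", PCH)):
    print(side)
    for block, detail in layout.items():
        print("  ", block, "->", detail)
[/CODE]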

From the X570 motherboards I've seen pictures of so far, I think I'm 100% spot on with the above, assuming I'm reading this correctly. The motherboards all seem to have three PCIe x16 slots. Two will be allocated as x16/0 or x8/x8, with an additional x8 slot tied to the PCH. So that's an improvement, and where all the new lanes are technically going. There are also cases where we are seeing some PCIe x1 slots scattered around the expansion slot area, so that's where one of the "pick ones" is going. The second "pick one" is definitely going towards SATA and M.2 slots. The first "pick one" is likely also M.2 as well. I'm seeing 3x M.2 slots on most of these boards and only one of those can go to the CPU directly.

Based on what you said in the other thread, I don't think you can get what you want out of X470 or even Z390. I think you have to decide whether to go with X399 or X570. No one uses PLX chips anymore because they are extremely expensive and they don't improve performance, just flexibility. The reason I bring up X399 is that it will give you what you want, and there are deals to be had on Threadripper 12-core and 16-core parts. Plus, in theory, you should be able to get a drop-in upgrade when Threadripper 3000 series chips hit in Q4 2019. Plus, X399 is a bargain compared to the price of some of these X570 motherboards. So that's something to think about.

Yeah, since my 10Gig NIC is Gen 2 it should have more than enough bandwidth off of the 4x Gen 4 lanes going to the chipset, even with other things going at the same time.

That's not how it works. If you have a PCIe 4.0 slot and put a PCIe Gen 2.0 device in it, that slot will operate in 2.0 mode. For your controller to work the way you want it to, you need an x8 slot electrically to pass x8 PCIe Gen 2.0 data across the bus. The connectors all line up the same way. You can't suddenly get an x4 PCIe 4.0 slot that only covers half your card's physical connector to operate at full bandwidth.
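
Put another way, the link trains to the lower generation and the narrower electrical width of the two ends, and the usable bandwidth falls out of that. A quick illustration with approximate per-lane rates:

[CODE]
# A link trains to the lower PCIe generation and the narrower electrical width
# of the two ends; bandwidth follows from that. Per-lane MB/s is approximate.
PER_LANE_MBS = {1: 250, 2: 500, 3: 1000, 4: 2000}

def negotiated_bandwidth(slot_gen, slot_width, card_gen, card_width):
    gen = min(slot_gen, card_gen)
    width = min(slot_width, card_width)
    return width * PER_LANE_MBS[gen]

# x8 Gen 2 NIC in an x8 (electrical) Gen 4 slot: runs as x8 Gen 2 -> ~4,000 MB/s
print(negotiated_bandwidth(4, 8, 2, 8))
# Same NIC in an x4 (electrical) slot: only x4 Gen 2 -> ~2,000 MB/s
print(negotiated_bandwidth(4, 4, 2, 8))
[/CODE]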

I'm hoping at least one motherboard maker gives us a board where I can use 16x off of the CPU for the GPU, 4x off of the CPU for a single m.2 drive, and then have an 8x slot off of the chipset for my NIC. From what I understand this type of configuration should be possible. I just don't know how likely it will be... :/

This is where I think you are in luck. Given the above block diagram and the physical boards I've seen, I think this is actually possible and likely.

Yeah, I know this intellectually, both from past tests, and from looking at the PCIe bandwidth utilization chart in RivaTuner which I usually have installed as part of MSI Afterburner, and have open on a side window when I run games.

I just can't seem to get over the urge to have the full 16x to the GPU "just in case", primarily because with everything else in my system I have some margin. On my GPU I have next to no margin. I need it to perform as fast as it possibly can or I don't get my 60fps at 3840x2160.

Well, I have x16 lanes for my GPU and I can't even get 60FPS all the time at 3840x2160. There are some games, like Destiny 2, where this doesn't always happen at max settings. Even so, it's generally been proven that in most cases the full x16 lanes of bandwidth aren't needed, even for high-end GPUs. I can still understand where you are coming from here, as it's one of the many reasons I've stuck with HEDT platforms.

Well, X570 supposedly has 16x PCIe lanes the motherboard vendors can use as they see fit. I hope at least one of them makes an x8 slot available off of the chipset that I can stick my NIC in. I'm not concerned about the bandwidth. It's a 2x 10 gigabit NIC, so we are talking 20 gigabit. Both the NIC and the PCIe lanes are full duplex, so I shouldn't have to account for that.

As I said, I think you are going to be in luck with X570. With earlier boards, this isn't ever going to happen.

The NIC is Gen 2 x8, so it is at most capable of using 4,000 MB/s out of the ~8,000 MB/s available from the 4x Gen 4 lanes going from the CPU to the chipset. This leaves roughly half of that bandwidth for the onboard devices, which is probably fine. Sound probably doesn't use much, given that old sound cards were fine using x1 Gen 1 slots. That, and the NIC will likely never come close to this 4,000 MB/s max, since 20 gigabit is only ~2,560 MB/s. There is likely some overhead on top of that for the protocol, but I can't imagine it would max out the full 4,000 MB/s of x8 Gen 2.

Again, this isn't how the slots work. If you stick an x8 dual 10GbE NIC into a Gen 4.0 x8 PCIe slot, all 8x lanes are going to get used. However, they will be downgraded to operate at Gen 2.0 bandwidth. And yes, there is overhead, but it isn't huge.

So I am keeping my fingers crossed that at least one motherboard vendor makes this kind of configuration available.

It should be possible to do, but my desires are unconventional these days, so I probably won't get what I want.

It's certainly possible and, like I said, I think you'll get close enough on X570 to make you happy.

I know. I don't understand the strong dislike people have of wires. They aren't THAT difficult to run, and if you think they are messy it is easy to hide them behind various raceways and organizers.

Wired solutions will always be faster, more reliable and more secure.

I only use wireless anything if I absolutely have to, for devices like phones where their entire purpose is that they are mobile. Anything stationary gets a dedicated wired ethernet line. I don't even use bluetooth or wireless mice and keyboards if I can help it. I'd rather everything be wired.

Typically, you're going to have wires of some kind anyway. I think it's about effort, and some people simply not knowing how to hide the wires or being unwilling to do it in a rental space. I'd leave the bastard running around the edge of the room on the floor if that's what it took. Frequently, I did this over the years. I also learned to hide them under baseboards or under the edge of the carpet. It isn't that difficult to do.
 
That's not how it works. If you have a PCIe 4.0 slot and put a PCIe Gen 2.0 device in it, that slot will operate in 2.0 mode. For your controller to work the way you want it to, you need an x8 slot electrically to pass x8 PCIe Gen 2.0 data across the bus. The connectors all line up the same way. You can't suddenly get an x4 PCIe 4.0 slot that only covers half your card's physical connector to operate at full bandwidth.

I'm aware of that. I'm talking about the 4x Gen 4 uplink lanes from the CPU to the chipset.

There is clearly some sort of multiplexing that allows everything coming off of the chipset to share the full ~8000 MB/s of that link among all the devices on the chipset, otherwise it wouldn't work.

All I am saying is that I am not worried about running the NIC off of the chipset despite it being x8, as I don't think it will actually use enough bandwidth to bottleneck the CPU uplink.
 
No, it isn't multiplexing. It's a bottleneck. And yes, it does work and always has. The fact is, none of those things will generally ever pull their maximum bandwidth outside of specific stress tests. Typically, your network connection is pulling very little data when web browsing. Even in a full download, it only pulls what your ISP or the web host's servers will allow you to pull down at a time. So you're almost always throttled to some extent. It's the same for storage: it isn't transferring data at full speed constantly. Audio isn't always outputting anything, etc.

You only ever see a constraint on the PCH link when you do 2- and 3-drive RAID 0 arrays and run a benchmark. Something you probably won't do 24/7. Typically, even in those arrays, you only hit limits in read speeds. On the writes you're usually well within the realm of what the link can do, with bandwidth left over.
 
Agreed. It is extremely rare for everything to be at max bandwidth demand at the same time. Especially considering that in the real world we generally have a mixture of expansion cards from older generations, not the latest greatest Gen 3/4 stuff.
 
No, it isn't multiplexing. It's a bottleneck.


OK, multiplexing may be the incorrect term, but here is my understanding of how it works. Please correct me if I am wrong:

Normally when you have lanes that go straight to the CPU you are going to be limited to the highest common PCIe gen between the CPU lanes and the expansion card you are plugging in.

- Plug a 16x Gen 3 card into a 16x Gen 3 slot and everything works at 16x Gen 3.
- Plug a 16x Gen 2 card into a 16x Gen 3 slot and everything works at 16x Gen 2.
- Plug a 16x Gen 2 card into an 8x Gen 3 slot, and everything works at 8x Gen 2.

My understanding is that the lanes coming off of the chipset operate differently, however.

On X570, a Ryzen 3000 CPU will have a permanent 4x PCIe Gen 4 link to the chipset. This represents ~8000MB/s of bandwidth.

X570 somehow pools this 8000MB/s of bandwidth and uses it for everything downstream of the chipset, including its 16x PCIe lanes. This pooling is what I was referring to as "multiplexing". That may be the wrong terminology; I am poorly read on the terminology here.

So, since we are talking about pooled bandwidth between these downstream chipset PCIe lanes and the CPU, you could technically utilize the 16x downstream chipset lanes in Gen 2 and get the full 16x bandwidth to the CPU via those 4x uplink Gen 4 lanes?

So, in the example above, the 16x Gen 2 card establishes a 16x Gen 2 link to the chipset. This is an 8000MB/s link.

Then, the chipset transmits this data on to the CPU using its 4x Gen 4 links which also provide 8000MB/s.

Provided nothing else is using the bandwidth at the same time, you could technically max out the bandwidth of the 16x Gen 2 card, despite only having 4x links to the CPU.

Is this an accurate assessment?

This pooling is the only way I can picture it working, because if it is just plain switching, then it would be terrible.
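
Here is the rough math behind what I'm picturing (approximate per-lane rates, and assuming I have this right):

[CODE]
# The scenario I'm picturing: an x16 Gen 2 device downstream of the chipset,
# with an x4 Gen 4 uplink from the chipset to the CPU. Approximate per-lane MB/s.
PER_LANE_MBS = {2: 500, 4: 2000}

downstream_link = 16 * PER_LANE_MBS[2]  # x16 Gen 2 card <-> chipset:  8,000 MB/s
cpu_uplink      = 4 * PER_LANE_MBS[4]   # x4 Gen 4 chipset <-> CPU:   ~8,000 MB/s

# With nothing else competing for the uplink, the card isn't bottlenecked;
# once other chipset devices are busy, everything shares that same uplink.
print("best case for the card:", min(downstream_link, cpu_uplink), "MB/s")
[/CODE]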
 
You got my attention at "No RGB".


Then you might like this board I have been eyeballing.

I have no idea if it is actually going to be any good and have good overclocking and power delivery features, but I LOVE that it is "Not a Gaming Board".

Asus Pro WS x570 ACE

I'm really hoping this one:

a) Does not suck

b) Does not cost 2x more because it contains the word "workstation"

Time will tell. It is my favorite thus far.
 
Anything coming from the PCH and going to the CPU has to share that highway. In this case, a PCIe Gen 4.0 link between the CPU and the chipset. Yes, you can theoretically max out that 8000MB/s link, but this is something you can generally only do through synthetic testing or through actions that are not typical of how the system is used, i.e., they are not real use-case scenarios.

Case in point: we saw no real-world difference going from DMI 2.0 (PCIe 2.0 x4) to DMI 3.0 (PCIe 3.0 x4) between the PCH and the CPU on Intel systems. No single component attached to the PCH was capable of maxing out that link. Hell, even together they couldn't do it unless you were talking about super fast NVMe drives. Even then, on a single drive you were pretty much fine. You'd have had to really work to max everything out, and it would still generally cover everything. 4000MB/s for GbE internet, audio and SATA is more than plenty; you still had bandwidth left over. Where you run into problems is with NVMe RAID arrays. For a long time, two of them in RAID 0 could only saturate the bus in a benchmark, and only in read tests. In writes, they were nowhere close to the limit. It's when we started seeing motherboards with three NVMe M.2 slots that we saw problems. At that point, the third drive gave a slight boost in write performance and nothing in read performance.

Now we have drives that are getting faster all the time. If you're using 3x NVMe drives in RAID 0 that can handle reads of 3,500MB/s, then yes, you can saturate that link and the drives won't operate as fast as they should. If you take the maximum theoretical throughput of every device that could be on the PCH, the link to the chipset always comes up short. This has been the case for close to a decade. In the real world, that's just not what generally happens.
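
To put rough numbers on that last point (using the ~3,500MB/s per-drive read figure above as the example):

[CODE]
# When a chipset-attached NVMe RAID 0 array out-reads the uplink. Approximate
# figures; 3,500 MB/s per drive is just the example number used above.
UPLINK_MBS     = 4 * 2000   # x4 Gen 4 CPU<->chipset, nominal ~8,000 MB/s
DRIVE_READ_MBS = 3500

for drives in (1, 2, 3):
    demand = drives * DRIVE_READ_MBS
    verdict = "fits" if demand <= UPLINK_MBS else "saturates the uplink"
    print(drives, "drive(s):", demand, "MB/s ->", verdict)
[/CODE]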
 