So annoying that high-end motherboards come with WiFi.

Also, it seems like people don't really care about high-end Ethernet. Remember Killer NIC? Even they switched to WiFi.

They didn't switch to WiFi. That's not an accurate statement. They simply offer a WiFi product as well as embedded wired PHYs. The wireless solution from Killer Networking is actually very good. I've tested them quite a few times and they typically beat the Intel adapters in my transfer tests. Their wired solutions aren't as good, but they are better than they used to be.
 
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one takes a big chunk of mobo real estate.
 
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one takes a big chunk of mobo real estate.

That would be a better layout for sure. However, I'm not sure how the manufacturers would keep the drives secure. I can see why they have been designed as they are for so long.

They would have to have an adapter of sorts, I think.
 
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one takes a big chunk of mobo real estate.
M.2 Key E slots sometimes come vertical... that's how it is on my ASRock and MSI boards, and I assume others as well. The only thing they take space away from is fitting more stuff on the motherboard's back panel, so maybe a pair of USB ports? So they really don't take up much space when pre-installed on most motherboards, and they're upgradable (even though most people never upgrade them). That's why I don't understand what people feel they'd be gaining if this weren't included. Heck, if you really dislike it, just remove it; it's like 3 screws (typically), 2 for the cover and 1 that holds the card in :p.
 
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one takes a big chunk of mobo real estate.

I've said for years that M.2 is literally the worst form factor for desktop PC storage. It takes up a massive amount of PCB real estate and it's not ideal for cooling. The form factor also limits storage density, as you can only fit so many chips on those tiny PCBs. That said, I understand how and why it ended up taking over. It's mostly due to the industry being more mobile-focused and consumer SSD makers not wanting to invest in form factors like the 2.5" drives and add-in cards that are common in the commercial market. DIY PCs like ours are a very small percentage of the sales pie.
 
I've said for years that M.2 is literally the worst form factor for desktop PC storage. It takes up a massive amount of PCB real estate and it's not ideal for cooling. The form factor also limits storage density, as you can only fit so many chips on those tiny PCBs. That said, I understand how and why it ended up taking over. It's mostly due to the industry being more mobile-focused and consumer SSD makers not wanting to invest in form factors like the 2.5" drives and add-in cards that are common in the commercial market. DIY PCs like ours are a very small percentage of the sales pie.

If we could vertically mount these along an edge of the motherboard, that would give us far better cooling access and make it easier to build thicker, higher-capacity NVMe cards. But the slot would need to be reinforced, of course.

If a motherboard maker does that well, it will start a trend.
 
I've said for years that M.2 is literally the worst form factor for desktop PC storage. It takes up a massive amount of PCB real estate and it's not ideal for cooling. The form factor also limits storage density, as you can only fit so many chips on those tiny PCBs. That said, I understand how and why it ended up taking over. It's mostly due to the industry being more mobile-focused and consumer SSD makers not wanting to invest in form factors like the 2.5" drives and add-in cards that are common in the commercial market. DIY PCs like ours are a very small percentage of the sales pie.
IMO a plastic clamp would be more than enough to hold a vertical M.2 SSD; not that it needs much support to begin with.

I don't really have an issue with M.2 capacity, as it essentially doubles with each newer process node or with stacking.
 
[Image: PCIe add-in card that holds M.2 drives]


That gets them vertical. Not saying it should necessarily eat the PCIe slot... although why not, I guess, since it's on the PCIe bus anyway.

I think for most people the current implementation is fine. That's how they get installed in laptops and other SFF builds anyway, which is really what M.2 was made for. But if you want to have a lot of drives in a desktop, yeah, you need a way to stack them in with some density, and going vertical makes a lot of sense. This PCIe card is just an example, but it could easily hold a lot of M.2 drives on a single card, restricted really only by the bandwidth you want to feed them.
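
Just to put rough numbers on that "bandwidth you want to feed them" point, here's a quick sketch (Python; the assumption that each drive wants a full x4 link at the slot's generation is mine, not anything from a spec sheet):

```python
# Rough sketch: how many x4 NVMe drives can one slot feed at full speed?
# Assumes every drive wants all 4 of its lanes at the slot's PCIe generation,
# and ignores switch/bifurcation details.

LANES_PER_DRIVE = 4  # typical M.2 NVMe drive

def full_speed_drives(slot_lanes: int) -> int:
    """Drives that get their full x4 link with no sharing."""
    return slot_lanes // LANES_PER_DRIVE

for slot_lanes in (4, 8, 16):
    print(f"x{slot_lanes} slot: {full_speed_drives(slot_lanes)} drive(s) at full bandwidth")
# Anything beyond that count means a PCIe switch on the card and shared bandwidth.
```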
 
[Image: PCIe add-in card that holds M.2 drives]


That gets them vertical. Not saying it should necessarily eat the PCIe slot... although why not, I guess, since it's on the PCIe bus anyway.

I think for most people the current implementation is fine. That's how they get installed in laptops and other SFF builds anyway, which is really what M.2 was made for. But if you want to have a lot of drives in a desktop, yeah, you need a way to stack them in with some density, and going vertical makes a lot of sense. This PCIe card is just an example, but it could easily hold a lot of M.2 drives on a single card, restricted really only by the bandwidth you want to feed them.

I've seen cards like this with built-in RAID controllers that give you four PCIe 3.0 x4 slots you can run in RAID 0 or RAID 10, and maybe RAID 5, though probably not.
 
[Image: PCIe add-in card that holds M.2 drives]


That gets them vertical. Not saying it should necessarily eat the PCIe slot... although why not, I guess, since it's on the PCIe bus anyway.

I think for most people the current implementation is fine. That's how they get installed in laptops and other SFF builds anyway, which is really what M.2 was made for. But if you want to have a lot of drives in a desktop, yeah, you need a way to stack them in with some density, and going vertical makes a lot of sense. This PCIe card is just an example, but it could easily hold a lot of M.2 drives on a single card, restricted really only by the bandwidth you want to feed them.
I have one of those on a server. Single M.2 slot, though.

BTW, sharing an x4 link is not the best idea, since you are effectively cutting your bandwidth in half.
 
I have one of those on a server. Single M.2 slot, though.

BTW, sharing an x4 link is not the best idea, since you are effectively cutting your bandwidth in half.
This is true, but your IOPS are still much higher, so random I/O can sustain higher read/write speeds. It just depends on your workloads. Obviously, if you have a choice, not sharing bandwidth is better, but if you are limited it still works well. Heck, just look at how much sharing there is now on Intel boards ;). They all share a PCIe 3.0 link to the chipset, so if you've got two NVMe drives hanging off it, you're already sharing.
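
To put a rough number on that sharing, here's a small sketch (Python; the ~3.5 GB/s drive figure is just an illustrative round number, not a specific product):

```python
# Toy math for N NVMe drives all pushing sequential traffic through one
# shared PCIe 3.0 x4 chipset uplink at the same time.

GEN3_GBPS_PER_LANE = 0.985             # usable throughput per PCIe 3.0 lane, roughly
UPLINK_GBPS = 4 * GEN3_GBPS_PER_LANE   # ~3.94 GB/s for the x4 uplink

def per_drive_gbps(n_drives: int, drive_seq_gbps: float = 3.5) -> float:
    """Sequential GB/s each drive gets if all drives saturate the uplink at once."""
    return min(drive_seq_gbps, UPLINK_GBPS / n_drives)

for n in (1, 2, 3):
    print(f"{n} drive(s) busy: ~{per_drive_gbps(n):.2f} GB/s each")
# One busy drive is barely limited; two or more clearly split the uplink,
# which is why random IOPS (small transfers) suffer far less than sequential.
```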
 
BTW, one more thing missing on many motherboards: PCIe x4 slots.
 
BTW, one more thing missing on many motherboards: PCIe x4 slots.

Yeah, for those of us who deal with enterprise hardware this would be nice. I mean, when I decommission and literally TRASH an older server that has a 4-port 10Gb Ethernet card in it... I should be able to use that in my desktop! ... hmmm...
 
Yeah, for those of us who deal with enterprise hardware this would be nice. I mean, when I decommission and literally TRASH an older server that has a 4-port 10Gb Ethernet card in it... I should be able to use that in my desktop! ... hmmm...
I feel your pain.

I still have a couple of 1Gb Ethernet cards and a RAID controller from old servers.
 
I would still prefer a return to a "nothing onboard" era, with 8 fully maxed-out PCIe slots so I can put in the best of the best of whatever I choose.

I can't remember the last time I used an onboard device for pretty much anything. Not sound, not network, not SATA, and definitely not WiFi. The only exception is USB.

Yeah, for those of us who deal with enterprise hardware this would be nice. I mean, when I decommission and literally TRASH an older server that has a 4-port 10Gb Ethernet card in it... I should be able to use that in my desktop! ... hmmm...

That's what has kept me on HEDT platforms for so long, but HEDT today is really disappointing. Current gen Threadrippers and Xeons are just workstations, not HEDT.

I was prepared to drop some serious cash on the Threadripper 7000 series, but it turned out to be a massive disappointment.

I'd totally buy a consumer motherboard and CPU. I'd just need at least an x16 and an x8 slot working simultaneously at full bandwidth: one for the GPU, the other for my NIC. I'd like more, but everything else I'd be willing to do without.

The sad part is, going over the AMD chipset layouts, I should be able to have a working x8 slot without dropping my x16 slot to x8, but I haven't been able to find even a single motherboard designer that has gone this route.

It makes me so frustrated. I hate PCs today. They just keep taking away more and more of my choice every year that goes by.

Back in the day, building a PC actually meant something. You had to learn and design a system, and then you could prioritize the parts that were important to you, not just accept whatever lowest-bidder garbage the motherboard manufacturer decided to integrate.

They have essentially ruined the whole point of building your own.

It makes me sad, angry, and annoyed.
 
They have essentially ruined the whole point of building your own.
Not QUITE, but very close. Man, I would LOVE the ability to simply turn off the onboard SATA controller, onboard NICs, and onboard sound card if doing so would just recover those PCIe lanes for my PCIe slots. Give me THAT at least!
 
Back in the day, building a PC actually meant something. You had to learn and design a system, and then you could prioritize the parts that were important to you, not just accept whatever lowest-bidder garbage the motherboard manufacturer decided to integrate.

Just to have all those parts share the PCI bus...

Not QUITE, but very close. Man, I would LOVE the ability to simply turn off the onboard SATA controller, onboard NICs, and onboard sound card if doing so would just recover those PCIe lanes for my PCIe slots. Give me THAT at least!

This is nice in theory, but... it really doesn't improve your I/O capacity much. Sound/SATA/etc. hang off the chipset, which is connected to the CPU using an x4 PCIe 4.0 link (on the AMD Zen 4 platform); on the Z790 chipset it's either x4 or x8, depending on the motherboard implementation (mostly x4).

So, if you remove SATA and the onboard NIC and then plug in a 10Gb NIC on an x8 card that's routed through the chipset... you'll not be able to hit 10Gb even if there's no other traffic between the CPU and chipset...
 
Just to have all those parts share the PCI bus...



This is nice in theory, but... it really doesn't improve your I/O capacity much. Sound/SATA/etc. hang off the chipset, which is connected to the CPU using an x4 PCIe 4.0 link (on the AMD Zen 4 platform); on the Z790 chipset it's either x4 or x8, depending on the motherboard implementation (mostly x4).

So, if you remove SATA and the onboard NIC and then plug in a 10Gb NIC on an x8 card that's routed through the chipset... you'll not be able to hit 10Gb even if there's no other traffic between the CPU and chipset...

True, but there is (I presume) some sort of PCIe Switch (PLX Switch) in the chipset. Otherwise it wouldn't be able to break out the lanes to all the things that are integrated.

So, those Gen4 x4 lanes to the chipset can have their bandwidth pooled by the PCIe switch and assigned arbitrarily. Four Gen4 lanes is about 7.88 GB/s.

While most fancy server NICs are x8, they are usually only Gen2 or Gen3. With a PCIe switch, there is no reason you couldn't have an upstream Gen4 x4 link feed the equivalent of 32 Gen1 lanes, 16 Gen2 lanes, 8 Gen3 lanes, or any combination of the above, provided the switch has enough lanes to make that happen.

In fact, a single-port 10-gig NIC only uses about 1.25 GB/s (plus overhead). Make it a dual-port and you only need about 2.5 GB/s.

Heck, a 25Gbit single-port NIC is only 3.125 GB/s. Dual-port is 6.25 GB/s.

That's the beauty of PCIe switching. You are no longer limited by traditional gen and lane count negotiation.
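
If anyone wants to sanity-check the numbers above, here's the back-of-the-envelope math as a quick script (Python; the per-lane figures are the usual post-encoding approximations, nothing vendor-specific):

```python
# Approximate one-direction PCIe link bandwidth and a few NIC line rates,
# matching the figures quoted above. Ignores protocol overhead beyond encoding.

GBPS_PER_LANE = {1: 0.25, 2: 0.50, 3: 0.985, 4: 1.969}  # GB/s per lane by PCIe gen

def link_bw(gen: int, lanes: int) -> float:
    """Rough usable bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

uplink = link_bw(4, 4)            # chipset uplink: Gen4 x4 ~= 7.88 GB/s
print(f"Gen4 x4 uplink : {uplink:.2f} GB/s")
print(f"8 Gen3 lanes   : {link_bw(3, 8):.2f} GB/s   (roughly the same pool)")
print(f"10GbE port     : {10 / 8:.2f} GB/s")
print(f"Dual 25GbE     : {2 * 25 / 8:.2f} GB/s")
print(f"Headroom with dual 25GbE on the uplink: {uplink - 2 * 25 / 8:.2f} GB/s")
```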

The only downsides are:
- Cost (which is huge for standalone PCIe switches by Broadcom, Microchip, and the like, but the mass-produced ones in chipsets wind up being relatively cheap)
- Power use (a few watts)
- A tiny amount of latency.

Now, you could run into I/O constriction across the chipset if you are maxing out all the onboard stuff and your theoretical x8 NIC at the same time. But in our case, with almost everything onboard disabled, that would be unlikely, and even if you didn't disable everything as suggested, how often do you really max out every subsystem in your PC at the same time?

What we are suggesting is totally possible. The AMD and Intel chipsets (according to their PR docs) can even be configured this way if desired. The motherboard manufacturers just haven't chosen to release any models like this, presumably because they assume there isn't a large enough market to justify doing so. I think they are wrong. There are lots of enthusiasts like @Grimlakin and myself who would absolutely pounce on them given the opportunity.
 