So annoying: high-end motherboards with WiFi.

Dan_D

Administrator
Staff member
Joined
May 6, 2019
Messages
1,406
Points
113
Also, it seems like people don't really care about high-end Ethernet. Remember the Killer NIC? Even they switched to WiFi.
They didn't switch to WiFi; that's not accurate. They simply offer a WiFi product as well as embedded wired PHYs. The wireless solution from Killer Networking is actually very good. I've tested them quite a few times and they typically beat the Intel adapters in my transfer tests. Their wired solutions aren't as good, but they are better than they used to be.
 

Stoly

Sort-of-Regular
Joined
Sep 18, 2020
Messages
197
Points
43
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one eats a big chunk of mobo real estate.
 

Daniel_Doty

"stinger608"
Staff member
Joined
Aug 9, 2020
Messages
178
Points
43
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one eats a big chunk of mobo real estate.
That would be a better layout for sure. However, I'm not sure how the manufacturers would keep the drives secure. I can see why they've been designed as they are for so long.

They would have to have an adaptor of sorts, I think.
 

Ready4Droid

Slightly less n00b
Joined
Jul 9, 2019
Messages
126
Points
28
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one eats a big chunk of mobo real estate.
M.2 Key E slots sometimes come vertical; that's how it is on my ASRock and MSI boards, and I assume others as well. The only thing they take space away from is the MB back panel, so maybe a pair of USB ports? So they really don't take up much space when pre-installed on most motherboards, and they're upgradable (even though most people never do). That's why I don't understand what people feel they'd gain if it weren't included. Heck, if you really dislike it, just remove it; it's like 3 screws (typically), 2 for the cover and 1 that holds the card in :p.
 

Dan_D

Administrator
Staff member
Joined
May 6, 2019
Messages
1,406
Points
113
I wish M.2 slots were laid out vertically like PCIe slots. The current layout takes too much space, so getting more than one eats a big chunk of mobo real estate.
I've said for years that M.2 is literally the worst form factor for desktop PC storage. It takes up a massive amount of PCB real estate and it's not ideal for cooling. The form factor also limits storage density, as you can only fit so many chips on those tiny PCBs. That said, I understand how and why they ended up taking over. It's mostly due to the industry being more mobile focused and consumer SSD makers not wanting to invest in form factors like the 2.5" drives and AICs that are common in the commercial market. DIY PCs like ours are a very small percentage of the sales pie.
 

Grimlakin

FPS Regular
Joined
Jun 24, 2019
Messages
1,714
Points
113
I've said for years that M.2 is literally the worst form factor for desktop PC storage. It takes up a massive amount of PCB real estate and it's not ideal for cooling. The form factor also limits storage density, as you can only fit so many chips on those tiny PCBs. That said, I understand how and why they ended up taking over. It's mostly due to the industry being more mobile focused and consumer SSD makers not wanting to invest in form factors like the 2.5" drives and AICs that are common in the commercial market. DIY PCs like ours are a very small percentage of the sales pie.
If we could vertically mount these along an edge of the motherboard, that would give us far better cooling access and make it easier to build thicker, high-capacity NVMe cards. But the slot would need to be reinforced, of course.

If a motherboard maker does that well, it will start a trend.
 

Stoly

Sort-of-Regular
Joined
Sep 18, 2020
Messages
197
Points
43
I've said for years that M.2 is literally the worst form factor for desktop PC storage. It takes up a massive amount of PCB real estate and it's not ideal for cooling. The form factor also limits storage density, as you can only fit so many chips on those tiny PCBs. That said, I understand how and why they ended up taking over. It's mostly due to the industry being more mobile focused and consumer SSD makers not wanting to invest in form factors like the 2.5" drives and AICs that are common in the commercial market. DIY PCs like ours are a very small percentage of the sales pie.
IMO a plastic clamp would be more than enough to hold a vertical M.2 SSD; not that it needs much support to begin with.

I don't really have an issue with M.2 capacity limits, as density essentially doubles with each newer process node or with stacking.
 

Brian_B

FPS Regular
Joined
May 28, 2019
Messages
1,403
Points
113


That gets them vertical. Not saying it should necessarily eat the PCIe slot... although why not, I guess, since it's on the PCIe bus anyway.

I think for most people the current implementation is fine. That's how they get installed in laptops and other SFF systems anyway, which is really what M.2 was made for. But if you want a lot of drives in a desktop, yeah, you need a way to stack them in with some density, and going vertical makes a lot of sense. This PCIe card is just an example, but it could easily hold a lot of M.2s on a single card, restricted really only by the bandwidth you want to feed them.
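To put some rough numbers on that bandwidth limit, here's a quick sketch (the figures and the `full_speed_drives` helper are just for illustration; it assumes each NVMe drive wants its own x4 link, which is the common case):

```python
# How many M.2 drives can one slot feed at full speed?
# Assumes each NVMe drive gets a dedicated x4 link via bifurcation.

def full_speed_drives(slot_lanes: int, lanes_per_drive: int = 4) -> int:
    """Drives a bifurcated slot can feed without any lane sharing."""
    return slot_lanes // lanes_per_drive

print(full_speed_drives(16))  # 4 -- an x16 slot split x4/x4/x4/x4
print(full_speed_drives(8))   # 2
print(full_speed_drives(4))   # 1
```

Past that count, the drives start sharing lanes and the card (or a PCIe switch on it) has to arbitrate the bandwidth between them.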
 

Grimlakin

FPS Regular
Joined
Jun 24, 2019
Messages
1,714
Points
113


That gets them vertical. Not saying it should necessarily eat the PCIe slot... although why not, I guess, since it's on the PCIe bus anyway.

I think for most people the current implementation is fine. That's how they get installed in laptops and other SFF systems anyway, which is really what M.2 was made for. But if you want a lot of drives in a desktop, yeah, you need a way to stack them in with some density, and going vertical makes a lot of sense. This PCIe card is just an example, but it could easily hold a lot of M.2s on a single card, restricted really only by the bandwidth you want to feed them.
I've seen cards like this with built-in RAID controllers that give you four PCIe 3.0 x4 slots you can run in RAID 0 or RAID 10, or I imagine RAID 5, but probably not.
 

Stoly

Sort-of-Regular
Joined
Sep 18, 2020
Messages
197
Points
43


That gets them vertical. Not saying it should necessarily eat the PCIe slot... although why not, I guess, since it's on the PCIe bus anyway.

I think for most people the current implementation is fine. That's how they get installed in laptops and other SFF systems anyway, which is really what M.2 was made for. But if you want a lot of drives in a desktop, yeah, you need a way to stack them in with some density, and going vertical makes a lot of sense. This PCIe card is just an example, but it could easily hold a lot of M.2s on a single card, restricted really only by the bandwidth you want to feed them.
I have one of those on a server. Single M.2 slot, though.

BTW, sharing an x4 bus is not the best idea, since you're effectively cutting your bandwidth in half.
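The "cut in half" math looks roughly like this (a sketch: PCIe 3.0 x4 is about 3.94 GB/s usable after 128b/130b encoding, and the `shared_bandwidth` name is made up for illustration; real-world numbers come in a bit lower due to protocol overhead):

```python
# Best-case per-drive bandwidth when N drives share one PCIe 3.0 x4 uplink.
PCIE3_X4_MBPS = 3940  # ~985 MB/s per lane * 4 lanes, after 128b/130b encoding

def shared_bandwidth(drives: int) -> float:
    """MB/s each drive gets if all drives transfer at once."""
    return PCIE3_X4_MBPS / drives

print(shared_bandwidth(1))  # 3940.0 -- one drive gets the whole link
print(shared_bandwidth(2))  # 1970.0 -- two drives: exactly half each
```

That cap only bites when both drives are transferring simultaneously; a single drive still sees the full link.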
 

Ready4Droid

Slightly less n00b
Joined
Jul 9, 2019
Messages
126
Points
28
I have one of those on a server. Single M.2 slot, though.

BTW, sharing an x4 bus is not the best idea, since you're effectively cutting your bandwidth in half.
This is true, but your IOPS are still much higher, so random IO can sustain higher read/write speeds. It just depends on your workloads. Obviously, if you have a choice, not sharing bandwidth is better, but if you're limited it still works well. Heck, just look at how much sharing there is now on Intel boards ;). They all share a PCIe 3.0 link to the chipset, so if you've got two NVMe drives, you're already sharing.
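A rough sketch of why sequential transfers feel the shared link before random IO does (all figures below are illustrative assumptions, not benchmarks):

```python
# Sequential reads saturate a shared uplink long before random IO does.
# Illustrative figures only -- not measurements of any particular drive.

LINK_MBPS = 3940               # shared PCIe 3.0 x4 uplink, approx MB/s
SEQ_PER_DRIVE = 3500           # assumed sequential read per drive, MB/s
RAND_IOPS_PER_DRIVE = 400_000  # assumed 4K random read IOPS at high queue depth
IO_SIZE_KB = 4

drives = 2
seq_demand = drives * SEQ_PER_DRIVE                             # 7000 MB/s
rand_demand = drives * RAND_IOPS_PER_DRIVE * IO_SIZE_KB / 1024  # 3125.0 MB/s

print(seq_demand > LINK_MBPS)   # True: sequential demand exceeds the link
print(rand_demand > LINK_MBPS)  # False: random IO still fits under the cap
```

Under these assumptions, two drives doing heavy random 4K reads together still stay under the shared link's ceiling, while combined sequential reads blow past it.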
 

Stoly

Sort-of-Regular
Joined
Sep 18, 2020
Messages
197
Points
43
BTW, one more thing missing from many motherboards: PCIe x4 slots.
 

Grimlakin

FPS Regular
Joined
Jun 24, 2019
Messages
1,714
Points
113
BTW, one more thing missing from many motherboards: PCIe x4 slots.
Yeah, for those of us who deal with enterprise hardware this would be nice. I mean, when I decommission and literally TRASH an older server that has a 4-port 10Gb Ethernet card in it... I should be able to use that in my desktop! ... hmmm...
 

Stoly

Sort-of-Regular
Joined
Sep 18, 2020
Messages
197
Points
43
Yeah, for those of us who deal with enterprise hardware this would be nice. I mean, when I decommission and literally TRASH an older server that has a 4-port 10Gb Ethernet card in it... I should be able to use that in my desktop! ... hmmm...
I feel your pain.

I still have a couple of 1Gb Ethernet cards and a RAID controller from old servers.
 