LazyGamer
FPS Junkie
- Joined
- Sep 5, 2020
- Messages
- 2,647
- Points
- 113
Actually, yeah, I think you're right. I thought they used Qualcomm chipsets. Maybe on some models?
Also, it seems like people don't really care about high-end Ethernet. Remember Killer NIC? Even they switched to WiFi.
I wish M.2 slots were laid out vertically like PCIe slots. With the current layout they take too much space, so getting more than one eats a big chunk of mobo real estate.
M.2 Key E slots sometimes come vertical... that's how it is on my ASRock and MSI boards, and I assume others as well. The only thing they take space away from is fitting more stuff on the MB back panel, so maybe a pair of USB ports? So they really don't take up much space when pre-installed on most motherboards, and they're upgradable (even though most people never do). That's why I don't understand what people feel they're gaining if this weren't included. Heck, if you really dislike it, just remove it; it's like 3 screws (typically), 2 for the cover and 1 that holds the card in.
I've said for years that M.2 is literally the worst form factor for desktop PC storage. It takes up a massive amount of PCB real estate and it's not ideal for cooling. The form factor also limits storage density, since you can only fit so many chips on those tiny PCBs. That said, I understand how and why they ended up taking over. It's mostly due to the industry being more mobile focused and consumer SSD makers not wanting to invest in form factors like the 2.5" and AIC ones that are common in the commercial market. DIY PCs like ours are a very small percentage of the sales pie.
IMO a plastic clamp would be more than enough to hold a vertical M.2 SSD, not that it needs much support to begin with.
That gets them vertical. Not saying it should necessarily eat the PCIe slot... although why not, I guess, since it's on the PCIe bus anyway.
I think, for most people, the current implementation is fine. That's how they get installed in laptops and other SFF builds anyway, which is really what M.2 was made for. But if you want a lot of drives in a desktop, yeah, you need a way to stack them in with some density, and going vertical makes a lot of sense. This PCIe card is just an example, but it could easily hold a lot of M.2s on a single card, restricted really only by the bandwidth you want to feed them.
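Rough math on that, just as a back-of-envelope sketch (the slot/card configuration is hypothetical and the per-lane figures are approximate usable bandwidth after encoding overhead): a x16 slot bifurcated into x4 links gives every drive its own full link, so a carrier card only runs out of room when you run out of lanes.

```python
# Back-of-envelope math for a multi-M.2 carrier card (hypothetical config).
# Per-lane figures are approximate usable GB/s after encoding overhead.
GBPS_PER_LANE = {3: 0.985, 4: 1.97, 5: 3.94}  # PCIe gen -> GB/s per lane

def carrier_card(slot_lanes: int, gen: int, lanes_per_drive: int = 4):
    """How many x4 drives fit in the slot, and what each one gets."""
    drives = slot_lanes // lanes_per_drive
    per_drive = lanes_per_drive * GBPS_PER_LANE[gen]
    return drives, per_drive, drives * per_drive

# A x16 PCIe 4.0 slot bifurcated four ways: 4 drives, ~7.9 GB/s each, ~31.5 GB/s total.
print(carrier_card(16, 4))
```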
I have one of those on a server. Single M2 slot though.
This is true, but your IOPS are still much higher, so random IO can sustain higher read/write speeds. It just depends on your workloads. Obviously, if you have a choice, not sharing bandwidth is better, but if you are limited it still works well. Heck, just look at how much sharing there is now on Intel boards. They all share a PCIe 3.0 link to the chipset, so if you've got 2 NVMe drives, you're already sharing.
BTW, sharing an x4 bus is not the best idea, since you are effectively cutting your bandwidth in half.
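Rough numbers for that trade-off (approximate link figures, and the random-IO workload below is hypothetical): two drives streaming at once split the shared link's sequential bandwidth, but a typical random 4K workload doesn't come close to filling it.

```python
# Two NVMe drives behind one shared PCIe 3.0 x4 uplink (approximate figures).
LINK_GBPS = 4 * 0.985          # ~3.94 GB/s usable on a PCIe 3.0 x4 link

seq_per_drive = LINK_GBPS / 2  # both drives streaming at once: ~1.97 GB/s each

# Hypothetical random workload: 100k IOPS at 4 KiB is only ~0.4 GB/s,
# nowhere near the shared link's ceiling.
random_gbps = 100_000 * 4096 / 1e9

print(f"sequential per drive: {seq_per_drive:.2f} GB/s")
print(f"100k 4K IOPS needs:   {random_gbps:.2f} GB/s of a {LINK_GBPS:.2f} GB/s link")
```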
BTW, one more thing missing on many MBs: PCIe x4 slots.
I feel your pain.
Yea, for those of us that deal with enterprise hardware this would be nice. I mean, when I decommission and literally TRASH an older server that has a 4-port 10Gb Ethernet card in it... I should be able to use that in my desktop! ... hummmm......
They have essentially ruined the whole point of building your own.
Back in the day, building a PC actually meant something. You had to learn and design a system, and then you could prioritize the parts that were important to you, not just whatever lowest-bidder garbage the motherboard manufacturer decided to integrate.
Not QUITE, but very close. Man, I would LOVE the ability to simply turn off the onboard SATA controller, the onboard NICs, and the onboard sound card if that would just recover those PCIe lanes for my PCIe slots. Give me THAT at least!
Just to have all those parts share the PCIe bus...
This is nice in theory, but... it really doesn't improve your IO capacity much. Sound/SATA/etc. hang off the chipset, which is connected to the CPU over an x4 PCIe 4.0 link on the AMD Zen 4 platform; on the Z790 chipset it's either x4 or x8, depending on the motherboard implementation (mostly x4).
So, if you remove SATA/the onboard NIC and then plug in a 10Gb NIC / x8 card that's routed through the chipset... you won't be able to hit 10Gb even if there's no other traffic between the CPU and chipset...
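To put rough numbers on the sharing issue (the uplink width/generation and the per-device peak figures below are assumptions for illustration, not measurements): when several chipset devices peak at the same time, the uplink is easily oversubscribed.

```python
# Illustrative chipset-uplink budget (assumed PCIe 4.0 x4, ~7.88 GB/s usable).
UPLINK_GBPS = 4 * 1.97

# Hypothetical peak demand, in GB/s, for devices hanging off the chipset.
devices = {
    "10GbE NIC (line rate)":      1.25,
    "chipset NVMe #1 (seq read)": 7.0,
    "chipset NVMe #2 (seq read)": 7.0,
    "SATA SSD":                   0.55,
}

demand = sum(devices.values())
print(f"uplink ~{UPLINK_GBPS:.2f} GB/s, combined peak demand ~{demand:.2f} GB/s")
print("oversubscribed" if demand > UPLINK_GBPS else "fits")
```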