So annoying that high-end motherboards come with WiFi.

About the only thing I'd be willing to give up on a motherboard at this time is SATA; and SATA is going away on its own regardless (see the switching/sharing of PCIe lanes on modern motherboards between PCIe slots, SATA ports, and M.2 slots).

I get that some folks don't want WiFi; if a machine will be in a server rack or is guaranteed to be wired, I get that, but that's really a rare exception versus nearly every real-world user. As much of a purist as I have been, I'm using WiFi on my desktop right now and not seeing any problems in a single-family home with two users, so maybe a maximum of a half-dozen devices active at once. Obviously deployments in more contested environments (apartments, more users) would benefit from additional access points.

While most fancy server NICs are x8, they are usually only Gen2 or Gen3.
The Marvell (nee Aquantia) 10Gbit chipsets are PCIe 3.0 x4 and absolutely have everything a workstation would need in terms of interface support. You see these integrated regularly on boards with consumer chipsets. Those boards typically also have a second 2.5Gbit or 1Gbit interface as well, some even having two.

See here from ASUS' ProArt Z790:
[image attachment]
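For a rough sense of the bandwidth math, here is a quick back-of-the-envelope sketch in Python. The per-lane figures are approximate usable throughput after encoding overhead, and the dual-port server-card configurations are just illustrative assumptions, not a claim about any specific card:

```python
# Rough PCIe-vs-Ethernet bandwidth sanity check.
# Per-lane figures are approximate usable throughput after 8b/10b (Gen2)
# or 128b/130b (Gen3/Gen4) encoding overhead.
PCIE_GB_PER_LANE = {2: 0.50, 3: 0.985, 4: 1.969}  # GB/s per lane

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return PCIE_GB_PER_LANE[gen] * lanes

def ethernet_gbytes(gbits: float) -> float:
    """Convert an Ethernet line rate from Gbit/s to GB/s."""
    return gbits / 8

# (host link bandwidth, wire bandwidth) -- port counts are illustrative.
links = {
    "10GbE on PCIe 3.0 x4 (Aquantia-style)": (pcie_bandwidth(3, 4), ethernet_gbytes(10)),
    "Dual 10GbE server card on PCIe 2.0 x8": (pcie_bandwidth(2, 8), ethernet_gbytes(2 * 10)),
    "Dual 25GbE server card on PCIe 3.0 x8": (pcie_bandwidth(3, 8), ethernet_gbytes(2 * 25)),
}

for name, (host_bw, wire_bw) in links.items():
    print(f"{name}: host link ~{host_bw:.1f} GB/s vs wire ~{wire_bw:.2f} GB/s")
```

The point it illustrates: Gen3 x4 gives roughly 3.9 GB/s against the 1.25 GB/s a single 10Gbit port can move, so the narrower link costs a single-port card nothing; the old x8 server cards needed the width mostly for multiple ports and older PCIe generations.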
 
About the only thing I'd be willing to give up on a motherboard at this time is SATA; and SATA is going away on its own regardless (see the switching/sharing of PCIe lanes on modern motherboards between PCIe slots, SATA ports, and M.2 slots).

Agreed. SATA needs to go away. If you really want SATA, get an M.2 PCIe adapter with a SATA controller on it and stick it in a secondary or tertiary M.2 slot.

I get that some folks don't want WiFi; if a machine will be in a server rack or is guaranteed to be wired, I get that, but that's really a rare exception versus nearly every real-world user. As much of a purist as I have been, I'm using WiFi on my desktop right now and not seeing any problems in a single-family home with two users, so maybe a maximum of a half-dozen devices active at once. Obviously deployments in more contested environments (apartments, more users) would benefit from additional access points.

I use WiFi. I use it on my phone, and I use it on my laptop, and that is it.

I have four desktops in the house, two servers, three HTPC frontends, a couple of printers, etc.

If something isn't mobile, it gets hard-wired. The less WiFi, the better: signal degradation and interference, wanting to use full duplex, range issues, etc. It's just not worth it.

I have never once used it in a stationary computer, and I never will.


The Marvell (nee Aquantia) 10Gbit chipsets are PCIe 3.0 x4 and absolutely have everything a workstation would need in terms of interface support. You see these integrated regularly on boards with consumer chipsets. Those boards typically also have a second 2.5Gbit or 1Gbit interface as well, some even having two.

See here from ASUS' ProArt Z790:
[image attachment]

The Aquantia chips are indeed pretty good. It took me a while to warm up to them after using Intel SFP+ 10-gig NICs for so many years, but I have slowly been building confidence in them.

But the problem is that they only work well now, when I am already looking past 10 gig.

I started moving to 10gig networking at home in 2014 with a pair of Brocade fiber adapters for a direct secondary link between my workstation and my NAS server in the basement.

I have been eyeballing 40-gig and 25-gig NICs for years now, waiting for the right opportunity to pounce. I probably would have done it already if not for the concern that my next upgrade will be to something that is not a Threadripper and won't have enough PCIe lanes to keep them.

This frustrates me to no end.

Not being able to transfer expensive PCIe-based adapters from machine to machine during an upgrade kills so much of the enthusiast stuff that makes me feel alive.
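To put rough numbers on the lane concern: many of the affordable 25/40-gig cards are Gen3 x8 parts, while a mainstream desktop CPU only exposes around 16 lanes for slots, so keeping such a card usually means splitting the GPU slot. A hypothetical sketch with typical lane counts, not tied to any specific CPU or board:

```python
# Hypothetical lane budget when moving a Gen3 x8 NIC from HEDT to mainstream.
# Lane counts are typical/illustrative values only.
platforms = {
    "Threadripper-class HEDT": {"slot_lanes": 64},
    "Mainstream desktop": {"slot_lanes": 16},  # usually one x16 root, sometimes splittable to x8/x8
}
nic_lanes = 8          # typical older 25/40GbE card (Gen3 x8)
gpu_lanes_wanted = 16  # what the GPU would like to keep

for name, p in platforms.items():
    leftover = p["slot_lanes"] - gpu_lanes_wanted
    if leftover >= nic_lanes:
        verdict = "GPU keeps x16 and the NIC still fits"
    elif p["slot_lanes"] // 2 >= nic_lanes:
        verdict = "only fits by dropping the GPU to x8 (x8/x8 split)"
    else:
        verdict = "does not fit on CPU lanes at all"
    print(f"{name}: {verdict}")
```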
 
Now, you could run into I/O constriction across the chipset if you maxed out all the onboard devices and your theoretical x8 NIC at the same time, but in our case, with almost everything onboard disabled, that would be unlikely. And even if you didn't disable everything as suggested, how often do you really max out every subsystem in your PC at the same time?
A single PCIe 4.0 NVMe drive connected to the chipset is all you need...
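As a rough illustration of how much headroom the chipset uplink actually has, here is a quick budget assuming an Intel Z690/Z790-style DMI 4.0 x8 uplink (~15.8 GB/s usable); the device list and peak numbers are illustrative, and an AMD board with a Gen4 x4 uplink would have roughly half this budget:

```python
# Illustrative chipset-uplink budget (Intel Z690/Z790-style DMI 4.0 x8 assumed).
# Numbers are approximate peaks; real workloads rarely hit all devices at once.
uplink_gbs = 8 * 1.969  # DMI 4.0 x8 ~= PCIe 4.0 x8 -> ~15.8 GB/s

devices_gbs = {
    "Gen4 x4 NVMe SSD (chipset M.2)": 4 * 1.969,  # ~7.9 GB/s
    "10GbE NIC": 10 / 8,                          # ~1.25 GB/s
    "2.5GbE NIC": 2.5 / 8,
    "USB 3.2 Gen2x2 port": 20 / 8,                # ~2.5 GB/s
    "4x SATA SSDs": 4 * 0.55,
}

total = sum(devices_gbs.values())
print(f"Uplink: ~{uplink_gbs:.1f} GB/s, worst-case concurrent demand: ~{total:.1f} GB/s")
for name, bw in devices_gbs.items():
    print(f"  {name}: ~{bw:.2f} GB/s")
```

Even this worst case only lands around 14 GB/s against roughly 15.8 GB/s of uplink; it takes a second chipset-attached Gen4 NVMe drive running flat out to actually oversubscribe the link.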
 
The Marvell (nee Aquantia) 10Gbit chipsets are PCIe 3.0 x4 and absolutely have everything a workstation would need in terms of interface support. You see these integrated regularly on boards with consumer chipsets. Those boards typically also have a second 2.5Gbit or 1Gbit interface as well, some even having two.
The Aquantia chips are indeed pretty good. It took me a while to warm up to them after using Intel SFP+ 10-gig NICs for so many years, but I have slowly been building confidence in them.

What I will say about the Aquantia 10-gig NICs, though, is that even compared to my now 15-year-old dual-port Intel X520-DA2s, maxing out the Aquantia uses significantly more CPU than maxing out the X520.

And I imagine newer Intel and other enterprise NICs are even better than the positively geriatric X520.
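If you want to put numbers on that CPU cost, one simple approach is to drive the link with iperf3 and sample overall CPU utilisation while it's saturated. A minimal sketch, assuming iperf3 is installed, an `iperf3 -s` server is already running on the other machine, and the third-party psutil package is available (the address is made up):

```python
import json
import subprocess

import psutil  # third-party: pip install psutil

IPERF_SERVER = "10.0.0.2"  # hypothetical address of the host running `iperf3 -s`
DURATION = 10              # seconds

# Reset psutil's CPU counter, run the throughput test, then read the average
# system-wide CPU utilisation over the same interval.
psutil.cpu_percent(interval=None)
result = subprocess.run(
    ["iperf3", "-c", IPERF_SERVER, "-t", str(DURATION), "-J"],
    capture_output=True, text=True, check=True,
)
cpu_during_test = psutil.cpu_percent(interval=None)

report = json.loads(result.stdout)
gbits = report["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"Throughput: {gbits:.2f} Gbit/s, average CPU during test: {cpu_during_test:.1f}%")
```

Note this captures whole-system CPU, not just the NIC driver's share, so run it on an otherwise idle machine and compare the two cards back to back.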
 