Two things: 10GBASE-T requires significantly more power, and new cabling. And since internet connections over 5Gbit really aren't widely available, there's just not much demand; cost and limited demand put a damper on adoption, especially with inexpensive 2.5Gbit solutions from Intel and Realtek.
Do people really only use their home networks to access the internet?
I probably have 100 times or more traffic that stays 100% local on my LAN than traffic that goes out to the WAN.
That seems like such a failure of imagination. There is SO much more to networking than just "internet".
I'm using the ISP's fiber modem / router / access point (until I get brave enough to install the UDM Pro...), and I've gotten 700Mbps up and down, across the house, with an iPhone 10. The ITX system in my living room, next to the bedroom where the access point is? 1Gbit up and down. That's on WiFi 6, 5GHz.
I get the reservations about WiFi for gaming use, and by my testing those are still valid. Bandwidth aside, connections just aren't stable enough for all types of gaming, and I still get ping spikes and packet drops despite the bandwidth being there. But for everything that isn't gaming? No issues.
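For what it's worth, those spikes are easy to quantify. A quick-and-dirty sketch like this (Linux ping; the gateway address is just a placeholder) surfaces the worst-case latency instead of the average, which is what gaming actually feels:

    import re
    import subprocess

    # 100 pings at 200ms intervals at the gateway (address is a placeholder)
    out = subprocess.run(
        ["ping", "-c", "100", "-i", "0.2", "192.168.1.1"],
        capture_output=True, text=True,
    ).stdout

    # Look at per-packet times so spikes show up, not just the average
    times = sorted(float(t) for t in re.findall(r"time=([\d.]+)", out))
    if times:
        print(f"median: {times[len(times) // 2]:.1f} ms")
        print(f"worst:  {times[-1]:.1f} ms")
    print(f"lost:   {100 - len(times)} of 100")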
I'll admit, I haven't played with the latest WiFi standards. Everything in my house is still on 802.11ac. Because I don't do any heavy lifting over WiFi, I have had no reason to upgrade. Phones, tablets, and sometimes laptops use the WiFi access points, but everything that is stationary (including all the big stuff) is either copper or fiber.
That said, back in the b/g/n days I had nothing but trouble. The box may have said 600Mbit/s, but in reality, even holding the device a couple of inches from the router's antenna, you got about 10Mbit. This was likely due to living in a very dense neighborhood with lots of 2.4GHz congestion (other routers, cordless phones, malfunctioning microwave ovens, you name it), but it was really bad.
Congestion, walls, EM interference, the fact that the standard is flawed and only half duplex, and - of course - the inherent security issues with broadcasting your network over the air just turn me off from WiFi completely.
While it could probably be faster if you live in the boonies away from other people, ever since that experience I just don't trust anything wireless. I keep it as locked down as possible on a separate VLAN, and use it only for trivial, light stuff that I don't care about and that has to be mobile. If something is stationary, come hell or high water it gets a wired connection. I don't care if I have to move heaven and earth to make it happen. No way a stationary device ever uses WiFi in my house. It is against my religion.
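For anyone wondering what that lockdown looks like in practice, it's basically a tagged VLAN plus firewall rules on the router. A rough sketch using iproute2 from Python on a Linux router, where the interface name, VLAN ID, and subnet are all made-up examples:

    import subprocess

    def sh(cmd: str) -> None:
        subprocess.run(cmd.split(), check=True)

    # Hypothetical example: put the wifi AP's uplink on its own tagged VLAN 30
    sh("ip link add link eth0 name eth0.30 type vlan id 30")
    sh("ip addr add 192.168.30.1/24 dev eth0.30")
    sh("ip link set eth0.30 up")
    # Firewall rules denying eth0.30 -> trusted LAN would complete the lockdown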
SLI is dead, CFX is... probably dead? That's the long and short of it. The extra expense for PCIe switching, especially for PCIe 5.0 (despite its lack of uses), is just unpalatable given the lack of market demand.
I don't think I've seen any modern consumer boards support an x16 and an x8 slot simultaneously. Lots of other crazy stuff (cue ASUS and Gigabyte high-end boards!), but not that.
That's the part that bugs me. I don't do SLI. I tried CFX once in 2011; it sucked, and I quickly moved back to a single GPU. Then I tried SLI in 2015. It also sucked, and I moved back to a single GPU.
SLI/CFX is not the reason for having ample expansion.
What if I want to drop in an extra NIC or two? Or a storage controller card? Or a capture card? Or additional USB ports? Or any number of other things?
PCs used to be about flexibility. You install the hardware in the slots that you want or need for your application.
Who knows what delicious, potentially really awesome hardware expansion never happens simply because PCIe expansion is now viewed by many as "just something you use for your GPU". Something like PhysX could never happen today. It wouldn't have a capable enough slot to install itself into.
USB keeps getting faster and faster, but it is still an unreliable, underperforming mess and should never be used as an alternative to proper ISA/VLB/PCI/AGP/PCIe expansion ports.
The last few generations of consumer boards have really destroyed the core of what the PC is: a general-purpose, highly modular computing platform that is infinitely customizable.
Instead we find ourselves in a living hell of dangling external USB devices that can disconnect if you look at them wrong.
Because of this, the market for PCIe expansion devices is tanking outside of enterprise, and that is awful.
I'd tell you to just build a separate gaming system. You really, really want to have space to properly cool a modern CPU and GPU. Can still be relatively small, just not that small, IMO.
Cooling is not an issue for me. Anything would wind up receiving water blocks and being routed into my loop, which has overkill radiator capacity.
What I've looked into has been those ribbon-based PCIe x4 risers that are M.2 on one end and PCIe on the other, basically converting an M.2 NVMe slot into a PCIe slot, and then hooking up a 10Gbit (etc.) card with that. Many ITX boards put one M.2 slot on the top and one M.2 slot on the back of the board, so I'd try to figure out how to route that back slot somewhere accessible.
That is also an option. Unfortunately most decent 10gig and 25gig PCIe adapters I have seen are 8x.
And that's a shame. If you run the numbers, even a Gen3 4x slot should be enough for a 25Gbit adapter, providing ~31.5Gbit worth of bandwidth.
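Quick sanity check on that number (8 GT/s per lane, 128b/130b encoding):

    # PCIe Gen3: 8 GT/s per lane with 128b/130b encoding
    GEN3_LANE_GBIT = 8 * 128 / 130        # ~7.88 Gbit/s effective per lane

    lanes = 4
    bw = GEN3_LANE_GBIT * lanes
    print(f"Gen3 x{lanes}: {bw:.2f} Gbit/s")   # ~31.51 Gbit/s
    print("Covers a 25Gbit port?", bw > 25)    # True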
Early on I thought they were made 8x for the benefit of people on older systems, so the cards could fall back to an older PCIe generation and still get the required bandwidth. So I figured that, in theory, you ought to be able to install them in a 4x slot, and as long as it provided enough bandwidth they would be fine.
I have done some limited testing with this (granted, on really old NICs like the 2007-era Intel 82598EB chip). In theory a 4x Gen2 connection should have provided ~16Gbit/s, full duplex, more than enough for a single 10Gbit port, but in practice it didn't work at all. The card threw a hissy fit when it didn't have all 8 lanes.
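If you want to confirm what a card actually negotiated, rather than trusting the slot label, Linux exposes the trained link speed and width in sysfs. A minimal sketch:

    from pathlib import Path

    # Each PCIe device exposes its negotiated link speed/width in sysfs
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        speed = dev / "current_link_speed"
        width = dev / "current_link_width"
        if speed.exists() and width.exists():
            print(dev.name,
                  speed.read_text().strip(),
                  "x" + width.read_text().strip())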
Now, who knows, maybe newer designs work better, but I did some preliminary research by asking the question on the servethehome forums, and it seems like others have had similar problems with 10gig and 25gig cards in 4x slots. I didn't get a ton of responses though, so I can't say with confidence that it won't work.
I'd experiment myself, but 25gig adapters still aren't quite cheap enough to buy on a whim just for a test.
A 25gig direct connection between two systems would be perfect. I could use a ZFS pool constructed from the many M.2 slots on the Threadripper to provide additional storage to the dedicated gaming box via 25Gbit Ethernet. We are talking ~3GB/s, which ought to be plenty.
If you do test it yourself, I'd love to hear your results.
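For reference, the test itself would just be iperf3 with a few parallel streams: run iperf3 -s on one box and something like this sketch on the other (the address is a placeholder):

    import json
    import subprocess

    # 4 parallel TCP streams against the other box, JSON output for parsing
    result = subprocess.run(
        ["iperf3", "-c", "10.0.0.2", "-P", "4", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{gbps:.2f} Gbit/s")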
Edit: As I am writing this, I did another eBay check. There are suddenly a lot of Mellanox 25GbE NICs on eBay at about the $100 "buy it now" price point, which isn't too bad. They must just be old enough to start being decommissioned in large numbers now. I don't really have any experience with Mellanox though.
My first jump into 10gig networking in 2014 didn't go too well. I bought a few Brocade BR1020 controllers and had nothing but trouble with them. After a few years of dealing with poor and intermittent performance, I bought a bunch of Intel X520 SFP+ NICs, and in conjunction with proper Intel and Finisar modules (I understand Intel's modules are just rebranded Finisar modules, which is why I tried them) things "just worked". I'd be a little hesitant to experiment with non-Intel NICs again...
The Intel 25gig NICs still cost a little more though...
Oh, and I might have to upgrade the whole server: the quad port ones I am finding (E810-XXVDA4) are 16x PCIe Gen4, and my server is Gen3. Then again, maybe they are 16x Gen4 precisely so they can provide adequate bandwidth both over Gen3 (using all 16 lanes) and over Gen4 (in only an 8x slot), but buying one to find out is a risk, especially when they are going for $600 used on eBay.
According to my calculations:
Configuration | Gbit/s
16x Gen4 | 252.16
8x Gen4 | 126.08
16x Gen3 | 126.08
8x Gen3 | 63.04
(8x Gen4 and 16x Gen3 come out identical, since Gen4 exactly doubles the per-lane rate.)
I see no reason why a quad 25gig adapter should require 16x Gen4, unless it has monstrous overhead.
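Spelling the math out (assuming ~7.88Gbit/s effective per Gen3 lane after 128b/130b encoding, and double that for Gen4):

    # Approximate effective per-lane rates after 128b/130b encoding, Gbit/s
    LANE_GBIT = {"Gen3": 7.88, "Gen4": 15.76}

    need = 4 * 25  # quad-port 25GbE flat out: 100 Gbit/s

    for gen in ("Gen3", "Gen4"):
        for lanes in (8, 16):
            have = LANE_GBIT[gen] * lanes
            verdict = "enough" if have >= need else "NOT enough"
            print(f"{lanes}x {gen}: {have:6.2f} Gbit/s -> {verdict}")

Even 8x Gen4, or 16x Gen3, clears the 100Gbit/s a quad-port card can push, with room to spare.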
This is all of course unless I can live with the dual port XXV710-DA2's, which according to Intel's ARK page are 8x Gen3...
And I might be able to: two 25Gbit ports max out at 50Gbit/s combined, well within the ~63Gbit/s an 8x Gen3 link provides.
Another option is to check whether there are any 100Gbit QSFP28 adapters I could use with a four-port breakout cable... Maybe some of those are Gen3... A quick search turns up a few HPE Intel-based NICs with a single QSFP28 port, and they are 8x Gen3 (which suggests a blocking, bandwidth-starved design), but the asking price is an obscene $720 per NIC. Other sites have them listed as having sold refurbished for $399 not that long ago, so I don't know what is going on there. Popular demand driving up prices?