Ethernet Technology Consortium Announces 800 Gigabit Ethernet Specification for Ludicrous Speeds

Tsing

The FPS Review
Staff member
Image: jarmoluk (Pixabay)



Few of us will be able to enjoy the privilege, but the Ethernet Technology Consortium (ETC) – which counts Arista, Broadcom, Cisco, Dell, Google, Mellanox, and Microsoft as members – has made its new 800 Gigabit Ethernet specification official. This is a massive leap from the 25 and 50 Gigabit speeds that the organization was originally founded to develop and manage (the ETC was originally known as the 25 Gigabit Ethernet Consortium, but it had to change its name due to ever-increasing speed ceilings).



“Ethernet is evolving very quickly and as a group, we felt that having 25G in the name was too constraining for the scope of the consortium,” said...

Continue reading...


 
Why have I never heard of 25 and 50 gigabit Ethernet?

I always thought of the standards as:

10 mbit
100 mbit
1 gbit
10 gbit
40 gbit
100 gbit
 
I certainly get the need for better and faster network transport.

We have more and more and more data moving about the planet etc.

My question... What do I do with all my cat 6? I do wonder what type of cabling this requires. Do we finally have to use optical?
 
I certainly get the need for better and faster network transport.

We have more and more and more data moving about the planet etc.

My question... What do I do with all my cat 6? I do wonder what type of cabling this requires. Do we finally have to use optical?


Well, electrical signals move through copper at a large fraction of the speed of light (the electrons themselves crawl; it's the electromagnetic wave that carries the signal). I've heard light in fiber optics only moves at about 70% of the speed of light (the glass's refractive index slows it down)... so I don't know that optical is necessarily any faster - just no interference/EMI to deal with.
 
Well, electrical signals move through copper at a large fraction of the speed of light (the electrons themselves crawl; it's the electromagnetic wave that carries the signal). I've heard light in fiber optics only moves at about 70% of the speed of light (the glass's refractive index slows it down)... so I don't know that optical is necessarily any faster - just no interference/EMI to deal with.

Usually optical is used for power savings and for range. Copper uses more power and thus can drive up costs in the data center.

It's also fairly trivial to get a 25-mile line to work using a 10GBASE-ER SFP+ module on each end and single-mode fiber.

Copper technically has slightly lower latency, but the difference is negligible: nanoseconds, a tiny amount of time compared to the time it takes a switch to actually process a packet.

This likely has more to do with the time needed to convert the electrical signal to an optical one and then back to an electrical signal at the receiving end than it has to do with differences in speed, at most cable lengths. If you have really long lines, like the 25-mile one above, that may be different.

The speed of light converts to about 186 miles per millisecond, so at the speed of light that full 25-mile run takes about 0.13 ms. The same run would take about 0.19 ms at 0.7c, a difference of roughly 0.06 ms, or 60,000 ns, so here the speed might matter more, assuming of course you can get your signal to just run down a 25-mile copper cable without any kind of switching or amplification, which would add its own latency.
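
Back-of-the-envelope, in Python (assuming a straight 25-mile run and ignoring every hop in between):

# Propagation delay over a 25-mile run at c (best case) vs ~0.7c (fiber).
C_MILES_PER_MS = 186.282          # speed of light, miles per millisecond
distance_miles = 25.0

t_full_c = distance_miles / C_MILES_PER_MS           # ~0.134 ms at full c
t_fiber  = distance_miles / (0.7 * C_MILES_PER_MS)   # ~0.192 ms at 0.7c

print(f"full c: {t_full_c:.3f} ms")
print(f"0.7c:   {t_fiber:.3f} ms")
print(f"delta:  {(t_fiber - t_full_c) * 1e6:.0f} ns")  # ~57,000 ns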
 
This is backbone stuff for datacenters. Not likely to see this in our homes for the next 20 years. I mean we are just getting 2.5g at home now, 10g is still stupid expensive.

We use a mix of 1g, 10g, and 16/32g at work for different purposes. Between sites the WAN link is 10M (tops) and we never see anywhere near those speeds because of all the other factors.

I mean, even the 10g SFP+ modules for work are $100+ each, plus the cost of the HBA, fiber, switch, etc. If you need to copy huge amounts of data between machines at home, your fastest and cheapest bet is still a USB HDD.
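
Rough math on why the USB drive still wins (hypothetical 4 TB copy, ideal line rates with zero protocol overhead, and assuming the drive sustains ~150 MB/s):

# How long to move 4 TB at various speeds (ideal, zero overhead).
data_tb = 4
data_bits = data_tb * 8e12  # 4 TB in bits (decimal TB)

for name, gbps in [("1 GbE", 1), ("2.5 GbE", 2.5), ("10 GbE", 10),
                   ("USB HDD (~150 MB/s)", 1.2)]:
    hours = data_bits / (gbps * 1e9) / 3600
    print(f"{name:>20}: {hours:.1f} h")
# A single USB HDD (~7.4 h) roughly matches gigabit Ethernet (~8.9 h),
# with no network gear to buy.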
 
I certainly get the need for better and faster network transport.

We have more and more and more data moving about the planet etc.

My question... What do I do with all my cat 6? I do wonder what type of cabling this requires. Do we finally have to use optical?

Cat 6 isn't going anywhere. Cat5e is still 1G/2.5G and even 10G capable (at short runs), and we've been using Cat5e since the freaking '90s.

If you need faster at home, you can get cheap used fiber and HBAs off eBay. Peer to peer though; fiber switches are expensive. Also... drivers could be a chore.

We are unlikely to see widespread adoption of even 10G Cat6 Ethernet in homes in the next 10 years. You can do it now, but it ain't cheap.
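
For reference, the usual BASE-T pairings as I understand them (nominal numbers; real runs vary with cable quality and interference):

# Common twisted-pair Ethernet standards vs. cabling (nominal max runs).
base_t = {
    "1000BASE-T (1 GbE)": ("Cat5e or better", "100 m"),
    "2.5GBASE-T":         ("Cat5e or better", "100 m"),
    "5GBASE-T":           ("Cat6 (good short Cat5e runs can work)", "100 m"),
    "10GBASE-T":          ("Cat6 to ~55 m, Cat6a for the full run", "55-100 m"),
}
for std, (cable, run) in base_t.items():
    print(f"{std}: {cable} ({run})")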
 
This is backbone stuff for datacenters. Not likely to see this in our homes for the next 20 years. I mean we are just getting 2.5g at home now, 10g is still stupid expensive.

As long as you aren't afraid of used server pulls and enterprise switches it is doable.

Not the copper stuff of course. I'm talking fiber.


As long as you get out of your comfort zone it is totally doable.

A single-port Intel X520 NIC goes for between $35 and $70 from a reputable seller on eBay. A dual-port goes for $50 to $100.

Sometimes they come with transceivers for that price, sometimes not. If you need them, you can usually find a Finisar 10GBASE-SR transceiver on eBay for $20 to $25. I just bought a lot of four for $80.

An enterprise switch with between 24 and 48 gigabit copper ports and 2-4 10-gig SFP+ slots will go for between $75 and $150. I bought an Aruba s2500-48T a while back for only $125 and it has been serving me great.

If you do your research you can even find direct attach copper (DAC) cables for short (1.5 to 20 ft) runs.

It is a little trickier with the enterprise gear though, as not all SFP+ modules and DAC cables are compatible with all NICs and switches. SFP+ is technically a standard for 10-gig connectivity, but there are a lot of vendor lockouts and compatibility problems, so you have to do your research. Once you have it in place, though, it is great.

I can transfer files between my desktop and my NAS at 1.2 GB per second over 10-gig fiber....
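
That 1.2 GB/s figure is about what the math predicts once you subtract framing overhead. A quick sanity check, assuming a standard 1500-byte MTU and plain IPv4/TCP headers:

# Expected real-world TCP throughput on a 10 GbE link with a 1500-byte MTU.
line_rate_gbps = 10.0
mtu = 1500                   # IP packet size in bytes
frame_overhead = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap
tcp_ip_headers = 20 + 20          # IPv4 + TCP, no options

payload = mtu - tcp_ip_headers                  # 1460 data bytes per packet
wire_size = mtu + frame_overhead                # 1538 bytes on the wire
efficiency = payload / wire_size                # ~0.949
goodput_GBps = line_rate_gbps * efficiency / 8  # ~1.19 GB/s
print(f"efficiency: {efficiency:.1%}, goodput: {goodput_GBps:.2f} GB/s")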
 
Cat 6 isn't going anywhere. Cat5e is still 1G/2.5G and even 10G capable (at short runs), and we've been using Cat5e since the freaking '90s.

If you need faster at home, you can get cheap used fiber and HBAs off eBay. Peer to peer though; fiber switches are expensive. Also... drivers could be a chore.

We are unlikely to see widespread adoption of even 10G Cat6 Ethernet in homes in the next 10 years. You can do it now, but it ain't cheap.

Switches aren't bad if you follow one of the home lab or home server forums and learn where the deals are. Again, I got an Aruba s2500-48T with 48 gigabit ports and 4 10-gig SFP+ slots for only $125 about a year ago. Everyone knows about these now so they go for a little more, but if you stay current you'll catch the next wave: this gear tends to get bought all at about the same time, so it gets replaced at about the same time. There is always some Cisco, HPE, or other gear that is the ex-enterprise steal of the century.

Intel X520 and X540 NICs aren't dirt cheap, but they're affordable, and they are plug and play in both Windows 10 and Linux. No hunting for drivers necessary.
 
We are unlikely to see widespread adoption of even 10G Cat6 Ethernet in homes in the next 10 years. You can do it now, but it ain't cheap.

These days widespread adoption of any new wired networking is unlikely. Normies don't like the hassle of wires and can't understand why anyone would use them when wifi exists.

There will always be enthusiasts who know the advantages of wired tech and care, but we need the normies for mass adoption and quantities large enough to drive down pricing. That's why we went so quickly from 100M to gigabit Ethernet: adoption by non-enthusiasts. Without the non-enthusiasts, any new consumer wired Ethernet tech is pretty much dead in the water. You'll never reach the economies of scale necessary to make these techs affordable in the consumer market.

This is why older used enterprise tech is the only way, and in enterprise, going above gigabit more often than not means fiber.
 
My thought is: how do you push that much data? What are you running to handle the logical I/O for that kind of throughput? I'm sure the big data server farms could do it, but you're going to need to redesign the drawer layout, or have a lot more disks on multiple buses feeding the adapters, to maintain any kind of throughput. I suppose for memory-cached data to be delivered that would make some sense. I kind of do that with a SAN array and get local fibre-switched 16 Gb connections giving me sub-ms latency.

We're talking Netflix and other streaming services that would use this to tighten down their rack footprint and deliver data to more clients at once. That's where the cost savings would be, I think. For your normal tasks, most servers aren't even saturating a gigabit.
 
My thought is: how do you push that much data? What are you running to handle the logical I/O for that kind of throughput? I'm sure the big data server farms could do it, but you're going to need to redesign the drawer layout, or have a lot more disks on multiple buses feeding the adapters, to maintain any kind of throughput. I suppose for memory-cached data to be delivered that would make some sense. I kind of do that with a SAN array and get local fibre-switched 16 Gb connections giving me sub-ms latency.

We're talking Netflix and other streaming services that would use this to tighten down their rack footprint and deliver data to more clients at once. That's where the cost savings would be, I think. For your normal tasks, most servers aren't even saturating a gigabit.
800 Gbit/s works out to about 100 GB/s. That's more than a single PCIe 5.0 x16 slot (which is in testing right now) can feed, but Epyc can handle 128 lanes of PCIe 4 in a single platform, roughly 250 GB/s of aggregate bandwidth. Completely doable. This isn't even that much bandwidth, just really consolidated. Storage networks need this type of I/O for NVMe-oF.
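
The lane math, roughly (approximate per-lane PCIe throughput after 128b/130b encoding; real-world usable bandwidth is a bit lower):

import math

# Approximate usable PCIe bandwidth per lane, in GB/s.
per_lane_GBps = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}
ethernet_GBps = 800 / 8  # 800 Gbit/s = 100 GB/s

for gen, bw in per_lane_GBps.items():
    lanes = math.ceil(ethernet_GBps / bw)
    print(f"{gen}: {lanes} lanes to feed 800 GbE line rate")
# PCIe 5.0 needs ~26 lanes (more than one x16 slot), but Epyc's
# 128 Gen4 lanes (~252 GB/s aggregate) still have platform-level headroom.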
 
40gbit is quad 10 (needs a QSFP connector)
100gbit is quad 25

Oh, so 40gbit is actually four fibers? I had seen the QSFP+ name on 40Gbit modules, but I never knew the Q stood for quad. Is it like link aggregation, or can it actually carry 40gbit in a single connection?

Are there actually single 25-gigabit implementations, or are they all 100gbit in quad config? I've never seen a 25-gig line.
 
Oh, so 40gbit is actually four fibers? I had seen the QSFP+ name on 40Gbit modules, but I never knew the Q stood for quad. Is it like link aggregation, or can it actually carry 40gbit in a single connection?

Are there actually single 25-gigabit implementations, or are they all 100gbit in quad config? I've never seen a 25-gig line.

No, the QSFP+ modules are still single fiber, just a fatter transceiver compared to the 10G SFP+s. We use those for the backbones between switches at work, and it's the same OM3 fiber everything else uses.
 
No, the QSFP+ modules are still single fiber, just a fatter transceiver compared to the 10G SFP+s. We use those for the backbones between switches at work, and it's the same OM3 fiber everything else uses.
That's what I had thought.

I wonder what makes it "quad"...
 
That's what I had thought.

I wonder what makes it "quad"...
The wiki article is actually pretty good if you're interested. It's one fiber with four channels. I've used 25 Gb/s Mellanox cards before, and the 100 Gb/s ones are based on that single-channel mode, just with quad damage lol.
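
The lane breakdowns, if I've got the specs right (the 800G figure is from the new ETC spec, which runs eight ~106 Gb/s PAM4 signaling lanes carrying 100 Gb/s of data each):

# How the big Ethernet speeds are built out of parallel lanes.
lane_configs = {
    "40 GbE  (QSFP+)":  (4, 10),   # 4 x 10 Gb/s lanes
    "100 GbE (QSFP28)": (4, 25),   # 4 x 25 Gb/s lanes
    "800 GbE (ETC)":    (8, 100),  # 8 x 100 Gb/s data lanes (~106 Gb/s PAM4)
}
for name, (lanes, rate) in lane_configs.items():
    print(f"{name}: {lanes} lanes x {rate} Gb/s = {lanes * rate} Gb/s")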
 