Intel’s Integrated Photonics Solutions (IPS) Group Demonstrates First Fully Integrated Optical I/O Chiplet

Peter_Brosdahl

The fully integrated optical compute interconnect (OCI) used an Intel CPU to demonstrate the new technology's ability to transmit data at 32 Gbps on each of 64 channels. Intel’s Integrated Photonics Solutions Group says the OCI can span up to 100 meters of fiber optics and is designed for lower power consumption.
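For scale, those per-channel figures multiply out as follows; the aggregate numbers are my own quick arithmetic, not specs quoted in the article:

```python
# Quick sanity check on the quoted OCI demo figures.
channels = 64        # optical channels per direction
rate_gbps = 32       # line rate per channel, in Gbps

per_direction_tbps = channels * rate_gbps / 1000
print(f"Aggregate per direction: {per_direction_tbps:.3f} Tbps")      # ~2 Tbps
print(f"Bidirectional total:     {2 * per_direction_tbps:.3f} Tbps")  # ~4 Tbps
```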

See full article...
 
Very nice. Optical interconnect.

That is really cool.

The first application here is likely in the server/datacenter space, but if I am understanding this correctly (and it is very possible I am not), and if they can make this work at mass-production scale (which is always the difficult part compared to getting it to work once or twice in a lab environment), it could solve many if not all of the latency issues with chiplet interconnects.

You could get to the point of completely modular core units, scalable as high as a package integrator wants to go, connecting to memory controllers and other typical on-package functions, and absolutely kill the yield problems of large monolithic dies.

It could also alleviate many signal-pathway issues for motherboard manufacturers.
 
I have to wonder about the actual power requirements compared to the power needs of a silicon pathway interconnect, and about the logic converting the data from electrical to optical and back. Changing mediums has to have a cost in power and potentially latency. Until we get fully optical chips and pathways, of course.
 
I have to wonder about the actual power requirements compared to the power needs of a silicon pathway interconnect, and about the logic converting the data from electrical to optical and back. Changing mediums has to have a cost in power and potentially latency. Until we get fully optical chips and pathways, of course.

My gut tells me the same thing, but that doesn't appear to be the case with current-generation optical network transceivers that go into switches (SFP+, QSFP+, SFP28, QSFP28, etc.). These are quite significantly lower power and lower latency than their copper equivalents, both in theory and in practice.
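To put rough numbers on the power side: the pJ/bit figures below are ballpark assumptions of mine, not anything from the article (Intel has cited around 5 pJ/bit for OCI, and pluggable optics are often quoted in the ~15 pJ/bit range):

```python
# Back-of-envelope link power from an assumed energy-per-bit figure.
def link_power_watts(pj_per_bit: float, tbps: float) -> float:
    """Watts consumed moving `tbps` terabits/second at `pj_per_bit` per bit."""
    return pj_per_bit * 1e-12 * tbps * 1e12

for name, pj in [("integrated OCI (~5 pJ/bit)", 5.0),
                 ("pluggable optics (~15 pJ/bit)", 15.0)]:
    print(f"{name}: {link_power_watts(pj, 4.0):.0f} W at 4 Tbps")  # 20 W vs 60 W
```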

I actually saw pretty significant performance gains - both in latency and in bandwidth - by switching from 40 Gbit copper QSFP+ DAC cables to optical transceivers when I was setting up my 40 gig network...

...and that was comparing the direct attach copper cable directly linking two NICs to four sets of 40 Gbit QSFP+ transceivers going through a switch!

In other words:

NIC -> QSFP+ Optical transceiver -> Fiber Optic -> QSFP+ Optical transceiver -> Switch -> QSFP+ Optical Transceiver -> Fiber Optic -> QSFP+ Optical Transceiver -> NIC

was much lower latency and not insignificantly higher bandwidth than:

NIC -> Direct Attach Copper Cable -> NIC

When compared side by side between the same two machines.

How that is possible, I don't fully understand, but the numbers from my testing (which I sadly didn't record for posterity) don't lie.

And they pretty much line up with the expectations of all of the datacenter type folks I have spoken to about the issue.
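If anyone wants to reproduce that kind of comparison, a minimal round-trip probe along these lines is roughly the approach. This is a hypothetical sketch rather than my original test script, and the host address and port are placeholders:

```python
import socket
import statistics
import time

HOST, PORT, TRIALS = "192.0.2.10", 5201, 1000  # placeholder host/port

def serve() -> None:
    """Run on one machine: echo every payload straight back."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(64):
                conn.sendall(data)

def probe() -> None:
    """Run on the other machine: time many small round trips."""
    with socket.create_connection((HOST, PORT)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
        rtts = []
        for _ in range(TRIALS):
            t0 = time.perf_counter_ns()
            s.sendall(b"x" * 64)
            s.recv(64)
            rtts.append(time.perf_counter_ns() - t0)
        print(f"median RTT: {statistics.median(rtts) / 1000:.1f} us")
```

Run serve() on one box and probe() on the other, once per cabling setup, and compare the medians.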

Maybe it's magic? :p

Anyway, move all of this on die, and it's possibly even faster?
 
How that is possible, I don't fully understand, but the numbers from my testing (which I sadly didn't record for posterity) don't lie.
This leads me to believe that the DACs are doing something weird or inefficient. I assume you reached a similar conclusion, since adding two media transitions (electrical -> optical, then optical back to electrical) has to add latency. Could just be lower quality by design.

What we do know is that at higher bandwidths, pushing signals through copper is getting much, much harder over any distance, in terms of signal integrity as well as power.
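As a toy illustration of the signal-integrity side: conductor (skin-effect) loss grows roughly with the square root of frequency, so each doubling of the line rate costs about 41% more dB of loss over the same copper run. The reference loss figure below is a plausible twinax-like number picked for illustration, not a datasheet value:

```python
import math

# Skin-effect-only loss model: loss scales with sqrt(frequency).
# 6 dB/m at 12.9 GHz is an illustrative twinax-like reference point.
def copper_loss_db(length_m: float, nyquist_ghz: float,
                   ref_db_per_m: float = 6.0, ref_ghz: float = 12.9) -> float:
    return length_m * ref_db_per_m * math.sqrt(nyquist_ghz / ref_ghz)

for rate_gbd in (10, 25, 50, 100):            # NRZ rates; Nyquist = rate / 2
    db = copper_loss_db(3.0, rate_gbd / 2)
    print(f"{rate_gbd} GBd over 3 m: {db:.1f} dB")
```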

Anyway, move all of this on die, and it's possibly even faster?
That's the idea.

Basically, along with the above, and as we've started to see with longer TB4 / HDMI / DP / Ethernet cabling, higher bandwidth over any non-trivial distance becomes nearly impractical over copper. To the point that silicon photonics becomes not just the more performant option but also the more economical one. Probably :).
 
Soo, are they going cheap and available, or expensive and exclusive?
 
Thinking about this a bit more: most every type of transmission, at least at the enterprise level, involves fiber switches and HBAs for the fiber. So there are performance-improving options there that simply don't exist at the consumer grade.

Mainly some intelligent processing and caching on the controllers.

Will be interesting to see how the implementation makes it down to the SoC level.
 