I have to wonder about the actual power requirements compared to the power needs of the silicon pathway interconnects and the logic converting the optical data to digital and back. Changing mediums has to have a cost in power and potentially latency. Until we get fully optical chips and pathways, of course.
My gut tells me the same thing, but this appears to not be the case with current generation optical network transceivers that go into switches. (SFP+, QSFP+, SFP28, QSFP28, etc.) These are quite significantly lower power and lower latency than their copper equivalents, both in theory and in practice.
I actually saw pretty significant performance gains - both in latency and in bandwidth - by switching from 40Gbit copper QSFP+ DAC cables to optical transceivers when I was setting up my 40gig network...
...and that was comparing a direct attach copper cable directly linking two NICs to four 40Gbit QSFP+ transceivers going through a switch!
In other words:
NIC -> QSFP+ Optical transceiver -> Fiber Optic -> QSFP+ Optical transceiver -> Switch -> QSFP+ Optical Transceiver -> Fiber Optic -> QSFP+ Optical Transceiver -> NIC
had much lower latency and not insignificantly higher bandwidth than:
NIC -> Direct Attach Copper Cable -> NIC
When compared side by side between the same two machines.
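(If anyone wants to rerun that kind of side-by-side test themselves, a rough sketch of the idea is below - assuming two Linux boxes on the link, a throwaway TCP port, and a placeholder peer address; in practice iperf3 and plain ping give cleaner numbers, this is just to show there's no trick to it.)

    # Rough latency/throughput probe to compare two links between the same
    # pair of machines (e.g. DAC vs. transceiver + fiber). HOST is a placeholder.
    import socket, statistics, sys, time

    HOST, PORT = "192.168.1.2", 5001          # hypothetical peer address
    ROUNDS, BULK = 1000, 64 * 1024 * 1024     # 1000 ping-pongs, 64 MiB bulk send

    def serve():
        # Run on the far machine: echo the small probes, then drain the bulk data.
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                for _ in range(ROUNDS):
                    conn.sendall(conn.recv(64))
                remaining = BULK
                while remaining > 0:
                    chunk = conn.recv(1 << 20)
                    if not chunk:
                        break
                    remaining -= len(chunk)

    def probe():
        # Run on the near machine, once per cabling setup, and compare the printouts.
        with socket.create_connection((HOST, PORT)) as s:
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            rtts = []
            for _ in range(ROUNDS):
                t0 = time.perf_counter()
                s.sendall(b"x" * 64)
                s.recv(64)
                rtts.append(time.perf_counter() - t0)
            t0 = time.perf_counter()
            s.sendall(b"\0" * BULK)            # very rough throughput estimate
            elapsed = time.perf_counter() - t0
            print(f"median RTT: {statistics.median(rtts) * 1e6:.1f} us")
            print(f"bulk send:  {BULK * 8 / elapsed / 1e9:.2f} Gbit/s")

    if __name__ == "__main__":
        serve() if sys.argv[1:] == ["server"] else probe()

Run the server half on one machine and the probe half on the other, swap the cabling, and run it again.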
How that is possible, I don't fully understand, but the numbers from my testing (which I sadly didn't record for posterity) don't lie.
And they pretty much line up with the expectations of all of the datacenter type folks I have spoken to about the issue.
Maybe it's magic?
Anyway, move all of this on-die, and it's possibly even faster?