Autumn 2017 Optical Connections Magazine


Silicon photonics: the key to data centre connectivity

Data centre traffic growth is driving the need for high-speed connectivity between servers and switches. Silicon photonics will be a key enabling technology to meet the future demands, writes Intel’s Robert Blum.

Significant innovations in optical connectivity are required for data centres to scale compute and storage functions so that they can continue to support future bandwidth-intensive and compute-intensive applications. This is where silicon photonics comes into play, with its potential to bring electronics-type cost and scale to the optics industry – an industry that has traditionally been focused on lower-volume or longer-distance applications, with limited ability to scale to the requirements of modern data centres. Silicon photonics has now been productised and will be a key enabling technology to meet the future demands for bandwidth in data centres.

Networks inside data centres are often based on Clos topologies (a type of non-blocking, multistage switching architecture that reduces the number of ports required), and hyperscale data centres will typically contain tens of thousands of Ethernet switches to interconnect the racks of servers through a leaf and spine network architecture. A typical data centre today has one or two 10GbE-based network interface controllers deployed at the server, which are then aggregated to 40GbE at the top-of-rack (TOR) switch. The connections between server and TOR are usually made through direct attach copper (DAC) cables, since these are the most cost-effective solution at these data rates for distances of a few metres. But the uplink from the TOR to the next-tier switch is almost always optical. Smaller data centres will typically use VCSEL-based transceivers over multimode fibres. These 40G transceivers aggregate four 10G lasers and can transmit over distances of up to 300 metres. Higher-tier switch interconnects (leaf to spine and above) usually require the use of single mode fibres, since distances between the switches will often exceed 300m.
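
To make the aggregation described above concrete, here is a short Python sketch that computes the downlink and uplink bandwidth of a hypothetical top-of-rack switch and its oversubscription ratio. The port counts and link speeds are illustrative assumptions, not figures from the article.

# Illustrative arithmetic for a hypothetical TOR switch in a leaf-spine fabric.
# The port counts below are assumptions chosen for illustration only.
servers_per_rack = 48      # assumed number of servers connected to one TOR
server_nic_gbps = 10       # 10GbE NIC per server, as described in the text
uplink_ports = 4           # assumed number of optical uplinks towards the next tier
uplink_gbps = 40           # 40GbE uplinks, as described in the text

downlink_capacity = servers_per_rack * server_nic_gbps  # 480 Gb/s towards the servers
uplink_capacity = uplink_ports * uplink_gbps            # 160 Gb/s towards the spine
oversubscription = downlink_capacity / uplink_capacity  # 3.0:1 in this example

print(f"downlink {downlink_capacity} Gb/s, uplink {uplink_capacity} Gb/s, "
      f"oversubscription {oversubscription:.1f}:1")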

100G UPGRADES AND SINGLE MODE

As data centres transition to 100GbE, the move to single mode becomes inevitable for several reasons: First, the transmission distance over standard multimode fibre is limited to about 70m at 25G, which is too short a distance for most data centres. Second, various multi-source agreements optimised for 100G data centre connectivity have been formed, resulting in a much lower cost differential between multimode and single mode transceivers. Third, in anticipation of the increased demand for bandwidth, several large data centre operators had already proactively installed large amounts of single mode fibre in their trunk cabling, so there is not even a one-time “upgrade” cost to migrate to single mode connectivity.

INTEL’S APPROACH

One of the key benefits of Intel’s approach to silicon photonics is that all the functionality needed for a 100G transmitter – in the case of a CWDM4 chip, this includes 4 lasers with different wavelengths, 4 modulators and an optical multiplexer – can be integrated on a single silicon die. This is made possible by the unique hybrid laser and results in significant advantages in manufacturability and product cost.
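
As a rough illustration of that lane structure, the sketch below models a CWDM4 transmitter as four lanes on the standard CWDM4 wavelength grid (1271, 1291, 1311 and 1331 nm), each carrying roughly 25 Gb/s, multiplexed onto a single fibre. It is a simplified model for illustration, not a description of Intel’s actual implementation.

# Simplified model of a CWDM4 transmitter: four lasers on the CWDM4 grid,
# each modulated at ~25 Gb/s, combined by a multiplexer onto one single mode fibre.
# The wavelengths follow the CWDM4 MSA; the rest is illustrative.
CWDM4_WAVELENGTHS_NM = (1271, 1291, 1311, 1331)
LANE_RATE_GBPS = 25.78125  # per-lane line rate for 100GbE (includes 64b/66b overhead)

lanes = [{"wavelength_nm": wl, "rate_gbps": LANE_RATE_GBPS} for wl in CWDM4_WAVELENGTHS_NM]
aggregate_gbps = sum(lane["rate_gbps"] for lane in lanes)

print(f"{len(lanes)} lanes -> {aggregate_gbps:.3f} Gb/s line rate (100 Gb/s of payload)")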

For example, this level of integration enables on-wafer testing of the complete transmitter, such that only known-good die are passed on to backend assembly, resulting in improved yields due to lower complexity and a reduced number of assembly steps. Similar transceivers based on traditional optics or discrete lasers require the assembly of multiple optical components, typically involving multiple bonding steps and …
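
A rough way to see why fewer assembly steps and known-good die help yield: if each assembly step succeeds independently with some probability, the compound yield falls off geometrically with the number of steps. The step counts and per-step yields below are illustrative assumptions, not Intel’s figures.

# Illustrative yield arithmetic: compound yield = per-step yield ** number of steps.
# Step counts and per-step yields are assumptions for illustration only.
def compound_yield(per_step_yield: float, steps: int) -> float:
    return per_step_yield ** steps

discrete_assembly = compound_yield(0.98, 10)   # e.g. many bond/alignment steps with discrete optics
integrated_assembly = compound_yield(0.98, 3)  # fewer backend steps with a single integrated die

print(f"10-step assembly: {discrete_assembly:.1%}, 3-step assembly: {integrated_assembly:.1%}")
# On-wafer testing adds to this: known-good die are screened before backend
# assembly, so failed die never consume any of these assembly steps.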
