
DRIVING DWDM

Internet content providers to take control of their

ROY RUBENSTEIN

The emergence of a new class of optical transport platform to address the needs of Internet content providers is a significant development. It marks the first time in recent history that custom optical equipment, known as data centre interconnect, has been developed for an end-customer other than a telco.

Internet content providers are experiencing significant traffic growth, estimated at between 80 and 100 percent annually. The new DWDM platform is designed to maximise the volume of traffic sent between large-scale data centres in the smallest possible footprint.

These web-scale players now have an equal seat at the table with the traditional telcos, says Russ Esmacher, director, packet optical sales, Americas at Cisco: “They are driving a tremendous amount of innovation in the market.” “They are so focused on wringing the capacity out of components and systems, and are pushing the envelope more than the other players,” agrees Jay Gill, Infinera’s principal manager, cloud and SDN marketing.

Ciena says it experienced a 70 percent increase in 100-gigabit and 200-gigabit DWDM deployments in 2015 compared to 2014. The web-scale players account for 80 percent of these 200-gigabit shipments, says Helen Xenos, Ciena’s director, product and technology marketing.

The system vendors are cautious about the claim that it is the web-scale players rather than the telcos that are now driving DWDM. But what is not in question is that their requirements are giving DWDM development a fresh impetus.

Web-scale requirements

The web-scale players’ transport needs are varied, depending on where their data centres are located and the applications they are running. Certain players may have relatively modest needs that can be met with up to a terabit of capacity. Others want to connect their distributed data-centre sites in a metro so that they appear as one large virtual network. For switches in one data centre to talk to those in another yet appear adjacent requires significant interconnect capacity, in the tens of terabits. Moreover, web-scale players are no longer just metro players; they now operate and even own terrestrial and submarine long-haul networks. “Their requirements are identical to a global telco,” says Esmacher.

Cisco’s NCS 2015 chassis for the telcos has 15 card slots, each supporting 200 Gbit/s. This 3 terabits of total capacity occupies 12 rack units (RU). In contrast, Cisco’s first data centre interconnect product, the NCS 1002, crams 2 terabits of line-side capacity into 2RU. Data centre interconnect platforms are stackable, similar to servers, allowing data centre managers to scale line-side capacity with traffic growth.
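As a rough way to see what that jump in density means, the chassis figures just quoted can be reduced to capacity per rack unit. The short sketch below uses only the numbers cited above; Python is used purely for illustration, not because any vendor tooling works this way.

# Rough faceplate-density comparison based on the figures quoted above.
# The capacities and rack-unit counts are those cited in the article;
# the calculation is simply capacity divided by rack units.

platforms = {
    "Cisco NCS 2015 (telco chassis)": {"capacity_gbps": 15 * 200, "rack_units": 12},
    "Cisco NCS 1002 (DCI box)": {"capacity_gbps": 2000, "rack_units": 2},
}

for name, p in platforms.items():
    density = p["capacity_gbps"] / p["rack_units"]  # Gbit/s per RU
    print(f"{name}: {p['capacity_gbps']} Gbit/s in {p['rack_units']}RU "
          f"= {density:.0f} Gbit/s per RU")

# Approximate output: 250 Gbit/s per RU for the telco chassis versus
# 1,000 Gbit/s per RU for the data centre interconnect box.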

The system vendors use different rack form factors for their data centre interconnect products. Adva Optical Networking’s CloudConnect uses a thin 0.5RU card that supports up to 400 gigabits of line-side optics. Ciena’s Waveserver and Coriant’s Groove G30 use a 1RU card offering 400 gigabits and 1.6 terabits, respectively. Meanwhile, Infinera’s Cloud Xpress uses its 500-gigabit PIC in a 2RU box.

System vendors use several performance metrics for their platforms. One is the line-side capacity on the equivalent of a 1RU card. The second is the capacity of a fully stacked platform, although this is less meaningful now that cards are so dense that a fully stacked platform can fill several fibres. Data centres also limit how much power can be supplied to an individual piece of equipment, which leads to a third, important platform metric: the gigabits delivered per watt consumed.

Nor is it just hardware that is key. The web-scale players use telemetry: streaming measurement data collected from the network. The equipment data is fed into a large database used by the data centre’s management software, allowing a web-scale player to optimally match its services and applications to the network. Data centre interconnect platforms must support such telemetry and provide the open interfaces needed by the management systems.
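As a minimal sketch of the telemetry model described above, the fragment below shows a device pushing periodic measurement records into a store that management software could later query. Every name, field and value in it is invented for illustration; it does not represent any vendor’s actual interface.

# Hypothetical illustration of streaming telemetry: a transport box
# periodically pushes measurement records, which are appended to a
# central store standing in for the large database mentioned above.

import json
import random
import time

def sample_metrics(device):
    """Pretend to read optical performance data from a platform."""
    return {
        "device": device,
        "timestamp": time.time(),
        "pre_fec_ber": random.uniform(1e-4, 1e-2),    # pre-FEC bit error ratio
        "optical_power_dbm": random.uniform(-12, -2),
        "line_rate_gbps": 200,
    }

def stream(device, samples, store):
    """Append periodic samples to the store (continuous push in practice)."""
    for _ in range(samples):
        store.append(json.dumps(sample_metrics(device)))
        time.sleep(0.1)

telemetry_db = []
stream("dci-node-1", samples=3, store=telemetry_db)
print(f"collected {len(telemetry_db)} records")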

Such a level of monitoring, decision-making and network adaptation is not something that telcos do or can match, given their legacy operations support systems. “The cable operators and the telcos are really paying attention to what they are doing,” says Esmacher.

Different strategies

For the telcos and the web-scale players, reducing cost per bit remains a critical goal. “More capacity is nice but a lot of this is driven by market forces,” says Jörg-Peter Elbers, vice president of advanced technology at Adva Optical Networking. “Even if you have a technology that is 100 times better, nobody will use it unless the technology is at an acceptable price point.”

Vendors are adopting several approaches to improve optical-transport system performance and reduce its cost. Equipment vendors are embracing more flexible line-side transponders that support multiple modulation schemes. “The gain in spectral efficiency through modulation doesn’t come for free,” says Elbers. Higher-order modulation increases capacity and spectral efficiency, but at the expense of a loss in system performance that can be quite dramatic, he says. Current 100-gigabit wavelengths with coherent detection use polarisation-multiplexed quadrature phase-shift keying (PM-QPSK).
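To make that spectral-efficiency trade-off concrete, the sketch below computes the raw per-wavelength rate for a few polarisation-multiplexed formats, assuming a 32-Gbaud symbol rate. The symbol rate is an assumption for illustration; real systems add FEC and framing overhead on top of the payload rate, and the higher-order formats demand a higher OSNR, which is the reach penalty Elbers describes.

# Why higher-order modulation raises capacity: line rate scales with
# bits per symbol, for the same symbol rate and two polarisations.

SYMBOL_RATE_GBAUD = 32   # assumed symbol rate, for illustration only
POLARISATIONS = 2        # polarisation multiplexing doubles throughput

formats = {"PM-QPSK": 2, "PM-8QAM": 3, "PM-16QAM": 4}  # bits per symbol per polarisation

for name, bits_per_symbol in formats.items():
    raw_gbps = SYMBOL_RATE_GBAUD * bits_per_symbol * POLARISATIONS
    print(f"{name}: ~{raw_gbps} Gbit/s raw per wavelength")

# Roughly: PM-QPSK gives ~128 Gbit/s (a 100G wavelength after overheads),
# PM-16QAM gives ~256 Gbit/s (a 200G wavelength), at the cost of needing
# a cleaner signal and hence a shorter reach.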
