Multi-layer multi-vendor network automation
What is DT’s SDN implementation strategy in the IP core?
Today, core networks are usually rigid in the sense that they have been planned and designed for traffic patterns that were relevant at a certain point in time far in the past. From then on, network operations merely upgrades the network with ports in order to cope with traffic growth. There is limited reactivity to changes in traffic patterns or to new services, e.g. major sports events or future connectivity to specific virtual functions located in data centres. Most operators still plan and operate the router layer and the optical layer independently. The layers are then not aware of each other: each layer’s capacity is tremendously over-provisioned in order to meet the resilience requirements. Furthermore, operation usually relies on manual interventions by operating staff, without a layer-overarching view or process automation spanning multiple vendor domains. This leads to long service provisioning times.

DT’S FRAMEWORK ACTIVITIES IN THE PAST

In order to improve the efficiency of multi-layer (ML) transport networks and to reduce the over-provisioning, DT has spent considerable effort on achieving ML control in the core. It synchronises the information and influences the behaviour of both layers. So far this mainly relates to ML resilience (MLR), i.e. both layers collaborate in a coordinated way when recovering from a failure. However, further communication between the layers still relies on historically developed tools with non-congruent interfaces and an enormous effort for information export.

DT’S ENVISIONED SOLUTION INSPIRED BY OPEN TRANSPORT-SDN

Now that MLR has been accomplished successfully in DT’s core, the task is to merge the optical and the packet layer into a fully integrated automation solution. Hereby, DT aims at an integration of core and aggregation domains coming from several vendors.
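The coordinated multi-layer recovery mentioned above can be illustrated with a minimal decision sketch. This is an assumption-laden toy model, not DT’s implementation: the function names, action labels and the policy of preferring optical restoration before an IP reroute are all hypothetical, chosen only to make the layer coordination concrete.

```python
# Hypothetical sketch of coordinated multi-layer recovery (MLR):
# on a failure, the optical layer is given the chance to restore the
# lightpath first; only if it cannot does the IP layer reroute.
# All names, actions and the ordering policy are illustrative.

def recover(failure_layer, optical_can_restore, ip_has_backup_path):
    """Return the recovery action an ML coordinator might choose."""
    if failure_layer == "optical":
        if optical_can_restore:
            # Restoring the lightpath keeps the IP topology unchanged,
            # so routers see only a brief interruption.
            return "optical-restoration"
        if ip_has_backup_path:
            # The optical layer cannot help: fall back to IP rerouting.
            return "ip-reroute"
        return "alarm-noc"
    # Failures originating in the IP layer (e.g. a line card) are
    # invisible to the optical layer and handled by IP alone.
    return "ip-reroute" if ip_has_backup_path else "alarm-noc"
```

The point of the coordination is visible in the first branch: without it, both layers would react to the same fibre cut independently, each consuming its own over-provisioned spare capacity.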
The solution consists of hierarchically grouped control and management instances with a central multi-layer, multi-domain and multi-vendor orchestrator, a.k.a. super-controller, for the coordination of E2E connectivity across vendor domains. It governs domain-specific SDN controllers
Today’s core network suffers from single-layer planning processes and massive manual intervention by operating staff. DT plans to overcome these issues by introducing a multi-layer (ML), multi-vendor automation environment in the spirit of Transport-SDN. In this feature, Matthias Gunkel outlines the anticipated use cases of an SDN solution.
MATTHIAS GUNKEL
through northbound interfaces, protocols and data models that are standardised as far as possible, or at least open. This simplifies the future exchange of equipment vendors. The transport orchestrator talks northbound to an overarching E2E service orchestrator, which in turn might be connected to DT’s legacy IT. Certain functionalities such as “service provisioning” and “scheduled maintenance” are considered external applications northbound of the orchestrator, triggered by operations, planning or even selected customers. Other functionalities such as “topology discovery”, “ML recovery” and “traffic engineering (TE) and optimisation” are built-in orchestrator capabilities. An integral part of the orchestrator is also a combined “IP & optical” TE database populated with abstracted domain information. Furthermore, the orchestrator might be augmented by a parent PCE for multi-domain, multi-layer E2E path optimisation (e.g. for cost, SRLG or latency).

Optical controllers remain vendor-specific and provide abstraction of intra-domain information. Southbound, they may use proprietary interfaces and information models towards their own NEs. Their baseline configurations and modifications are either done manually by a NOC (through the NMS) or triggered by the orchestrator (through the optical controller and NMS), but there is no direct connection between the orchestrator and any single optical NE (no white boxes). Demarcation between different domains is realised by OEO conversion. Per-domain optical lightpath optimisation might be conducted by individual child PCEs. The controllers’ likely interfaces towards the orchestrator are standardised T-API or RESTCONF together with YANG data objects.

The IP controller has responsibilities on the router layer similar to those of the optical controllers on L0/L1. Specifically, it can set up and tear down IP/MPLS tunnels or define working and FRR backup paths. Knowing the IP topology and the actual load on each IP interface, this controller is capable of IP TE and link-metric optimisation in order to utilise the IP resources with highest efficiency. In contrast to the optical controllers, the IP controller acts as a single gateway to all routers from multiple vendors. Southbound, it uses existing protocols (e.g. PCEP or BGP-LS) for gathering control-plane information; inside the IP domain, classical IP/MPLS routing mechanisms continue to be used. A likely northbound interface of the IP controller towards the orchestrator is standardised NETCONF/YANG, applied for abstracted data exchange.

A potential sub-function of the orchestrator, or a separate entity inside the orchestration plane, might be an IP configuration and automation engine that integrates DT’s existing legacy tooling infrastructure. Triggered by the orchestrator or a human, it acts as the single centralised instance that activates all IP configurations by interfacing with the IP controller, probably through NETCONF/YANG. The IP controller’s alarms are notified directly to the orchestrator. With these clear communication paths, the architecture avoids logical loops that would otherwise risk database inconsistencies. Any RESTful interface between the engine and the orchestrator might be a potential solution.
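The parent-PCE role described above — computing E2E paths over the abstracted TE database, including working and backup paths — can be sketched with a toy computation. The topology, domain names and costs below are invented for illustration; a real PCE would work on far richer T-API/PCEP models and constraints (SRLG, wavelength continuity, etc.). Here the backup is simply required to be link-disjoint from the working path.

```python
# Hypothetical sketch of a parent PCE over an abstracted multi-domain
# TE database: compute a working path, then a link-disjoint backup.
# Topology and costs (e.g. latency) are invented for illustration.
import heapq

def shortest_path(graph, src, dst, excluded=frozenset()):
    """Dijkstra over {node: {neighbor: cost}}, skipping excluded links."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if (node, nbr) in excluded or (nbr, node) in excluded:
                continue  # link removed from consideration
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None  # destination unreachable

def working_and_backup(graph, src, dst):
    """Working path plus a backup avoiding the working path's links."""
    work = shortest_path(graph, src, dst)
    if work is None:
        return None, None
    used = {(a, b) for a, b in zip(work[1], work[1][1:])}
    return work, shortest_path(graph, src, dst, excluded=used)

# Abstracted view of three vendor domains (gateway nodes only).
topo = {
    "A1": {"A2": 1, "B1": 4},
    "A2": {"A1": 1, "B2": 2},
    "B1": {"A1": 4, "C1": 3},
    "B2": {"A2": 2, "C1": 2},
    "C1": {"B1": 3, "B2": 2},
}
work, backup = working_and_backup(topo, "A1", "C1")
```

With this toy topology the working path is A1–A2–B2–C1 (cost 5) and the disjoint backup falls back to A1–B1–C1 (cost 7) — the kind of result a parent PCE would hand down to the domain controllers for instantiation.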
| ISSUE 7 | Q3 2016
www.opticalconnectionsnews.com