Circuits provisioning in PIONIER with AutoBAHN system Radek Krzywania radek.krzywania@man.poznan.pl
ToC • Introduction to PIONIER • AutoBAHN deployment in PIONIER network – Topology – Network abstraction – Reservation process – Potential usage
Introduction to PIONIER
PIONIER Infrastructure • PIONIER is the Polish NREN, dedicated to interconnecting all research and academic institutions in Poland • The infrastructure interconnects 22 MANs and HPC centers • PSNC is the operator of PIONIER
PIONIER Infrastructure • 5,300 km of own fibre optic cables • ADVA DWDM equipment for L1 • Foundry Networks NetIron XMR 8000 series switches for L2/L3 in 22 MAN centers • Juniper M5 router for L3
PIONIER Infrastructure [Network map legend: MAN; Eth 1 Gb/s; SDH 2.5 Gb/s; 2×10 Gb/s (2 λ); CBDF 10 Gb/s (2 λ)]
PIONIER's place in Europe • PIONIER in Europe after 5 years of operation: – 4th place among EU and EFTA countries in core network size (Mb/s × km) – 1st place among EU and EFTA countries in core network capacity (Mb/s) (equal to SURFnet) – The highest number of CBDF links among EU countries (4 operating + 4 more planned in the short term) – 5th place in outgoing traffic and 6th in incoming traffic to the NREN backbone • Source: TERENA Compendium 2008
AutoBAHN deployment in PIONIER network Topology
AutoBAHN in PIONIER • PSNC has been an active partner in the GEANT2 JRA3 activity (AutoBAHN) since its very beginning • The PIONIER testbed infrastructure was one of the first to deploy an AutoBAHN instance for dynamic circuit management • The testbed equipment is exactly the same as in the parallel operational infrastructure
PIONIER topology for AutoBAHN • The NetIron XMR 8000 switches are interconnected with 10 Gb/s interfaces • The AutoBAHN Technology Proxy has access to each of the boxes via a CLI interface • The resources are seen as a single MPLS cloud
PIONIER topology for AutoBAHN • The Technology Proxy is aware of every piece of equipment in the network (all XMR boxes) • The DM is provided with limited topology information, in which only edge switches are present (those connected to end points or having external connections to neighbour domains) • The IDM topology is similar to the one at the DM level; however, the network equipment details are hidden, and information about neighbour and global topology is included
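The three views described above can be sketched as data structures derived from one physical topology. This is a minimal illustration only; the function and variable names are hypothetical and do not reflect the real AutoBAHN code base.

```python
# Hypothetical sketch of the three AutoBAHN topology views in PIONIER.
# Names (dm_view, idm_view, node labels) are illustrative, not the AutoBAHN API.

physical = {                     # Technology Proxy view: every XMR box
    "xmr1": ["xmr2", "xmr3"],
    "xmr2": ["xmr1", "xmr3"],
    "xmr3": ["xmr1", "xmr2"],
}
edge_nodes = {"xmr1", "xmr3"}    # switches with end points or inter-domain links

def dm_view(physical, edge_nodes):
    """DM sees only edge switches; the MPLS core collapses to direct links."""
    return {n: sorted(edge_nodes - {n}) for n in edge_nodes}

def idm_view(dm_topology, neighbour_domains):
    """IDM hides equipment details and adds inter-domain reachability."""
    return {"abstract_nodes": sorted(dm_topology),
            "neighbours": neighbour_domains}

print(dm_view(physical, edge_nodes))
print(idm_view(dm_view(physical, edge_nodes), ["GEANT2", "GARR"]))
```

Note how each level strips information: the DM never sees `xmr2`, and the IDM only keeps abstract node names plus neighbour-domain reachability.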
AutoBAHN deployment in PIONIER network Network abstraction
Topology Abstraction Process • The MPLS cloud abstraction decreases the amount of information about the physical network topology • Only reachability information between domain edge points is provided, together with additional link metrics • Pathfinding is limited to the definition of the ingress and egress network node and port • The intermediate nodes are selected automatically by MPLS, with limited influence from the administrator or the AutoBAHN system
MPLS cloud issues for AutoBAHN • AutoBAHN was originally designed to have full control over physical network resources • The MPLS cloud abstraction prevents AutoBAHN from seeing the individual links in the network • The overall capacity of network links must be abstracted, which causes some loss of information • Control over booked and used network capacity is limited to heuristic accuracy • Pathfinding is limited to defining just the source and destination end ports and nodes in the topology
Alternative link capacity constraints • Ingress/egress links limit the capacity that can be reserved in the network – Core network bandwidth is considered to be infinite – The network may refuse a reservation in case of insufficient available bandwidth • The capacity that can be reserved is limited by a policy rule – All domain ingress/egress links are associated with one node – All end points are associated with a single node – The Technology Proxy must be able to translate DM port names to physical ones – An accurate capacity value in the policy may prevent reservation denials – Allows improved capacity control by network administrators
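The policy-based limit above can be illustrated with a simple admission check: the core is assumed infinite, and only the administrator-set cap on the abstract node is policed. The limit and booking values are invented for illustration and are not taken from the PIONIER configuration.

```python
# Illustrative admission check for the policy-limited capacity model.
# POLICY_LIMIT_MBPS and the booked values are hypothetical.

POLICY_LIMIT_MBPS = 10_000          # administrator-set cap for the abstract node

def admit(existing_reservations_mbps, requested_mbps, limit=POLICY_LIMIT_MBPS):
    """Refuse a reservation when booked + requested capacity exceeds the cap.
    Core bandwidth is treated as infinite; only ingress/egress is policed."""
    booked = sum(existing_reservations_mbps)
    return booked + requested_mbps <= limit

reservations = [4_000, 3_000]       # Mb/s already booked on the abstract node
print(admit(reservations, 2_000))   # fits under the 10 Gb/s cap
print(admit(reservations, 5_000))   # would exceed the cap -> refused
```

An accurate `limit` value is exactly what the slide means by "may prevent reservation denials": if the policy cap matches the real ingress/egress capacity, the check refuses requests before MPLS would have to.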
AutoBAHN deployment in PIONIER network Reservation process
How Circuits Are Created • A user wants a circuit from an end point in GARR, terminated at a file server in the PIONIER network • The GARR and GEANT2 domains provide their constraint sets to the PIONIER network, with a request to schedule a reservation to the end point from the selected ingress point
How Circuits Are Created • The request is forwarded to the PIONIER DM, where the pathfinder process is executed and constraints for the local domain are produced • The IDM analyzes the constraints and defines the global path attributes, which are sent to the DM in order to schedule the reservation • Pathfinding is performed again, and a path of two nodes and four links is given as a result • The links are validated in the Calendar module to confirm resource availability • Then the resources are booked and the reservation is scheduled • The IDM is informed about the successfully created reservation
How Circuits Are Created • At the reservation start time, the DM sends a request for circuit implementation to the TP • The TP transforms the DM topology into the physical one and contacts the proper edge nodes to configure the end ports of the circuit • The VLL is routed according to internal MPLS procedures
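The DM-side part of the flow above (pathfinding result, Calendar validation, booking) can be sketched as follows. This is a hypothetical simplification under assumed 10 Gb/s link capacities; the module and link names follow the slides, not real AutoBAHN code.

```python
# Hypothetical sketch of the DM reservation lifecycle described above.

calendar = {}    # Calendar module state: link -> list of (start, end, mbps)

def link_free(link, start, end, mbps, capacity=10_000):
    """Calendar check: overlapping bookings must leave enough capacity."""
    used = sum(b_mbps for (b_start, b_end, b_mbps) in calendar.get(link, [])
               if b_start < end and start < b_end)
    return used + mbps <= capacity

def schedule(path_links, start, end, mbps):
    """Validate every link of the path in the Calendar, then book resources."""
    if not all(link_free(l, start, end, mbps) for l in path_links):
        return False                         # DM refuses; IDM is notified
    for l in path_links:
        calendar.setdefault(l, []).append((start, end, mbps))
    return True                              # reservation scheduled

path = ["ep-A", "A-core", "core-B", "B-ep"]  # two edge nodes, four links
print(schedule(path, 0, 3600, 1_000))        # booked
print(schedule(path, 0, 3600, 9_500))        # conflicts with the first booking
```

At the reservation start time, a real DM would then hand the booked path to the TP, which maps the abstract port names to physical ones and configures the edge switches; that device-facing step is outside this sketch.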
AutoBAHN deployment in PIONIER network Potential usage
Potential AutoBAHN users in PIONIER • SCARIe project – AutoBAHN provides connectivity for SCARIe research activities, interconnecting radio telescopes on a global scale – One of the radio telescopes is located physically next to the city of Toruń (PL) and is connected directly to the PIONIER infrastructure
Potential AutoBAHN users in PIONIER • iTPV – Interactive Television may require dedicated circuits between data repositories
Potential AutoBAHN users in PIONIER • Data storage infrastructures – multiple data storage infrastructures distributed in Poland may be connected on demand with dedicated links
Potential AutoBAHN users in PIONIER • Telemedicine – dedicated circuits for high quality video streaming • HPC centers interconnectivity in Poland • VLAB – Virtual Laboratories • Interconnectivity dedicated for distributed Projects
Thank you Q&A
GMPLS/G²MPLS in PIONIER network Bartosz Belter bartosz.belter@man.poznan.pl Presented by: Radek Krzywania radek.krzywania@man.poznan.pl Poznan Supercomputing and Networking Center
BRIEF INTRODUCTION TO G²MPLS
What is G²MPLS?
• G²MPLS is …
– a Network Control Plane architecture that implements the concept of Grid Network Services
o GNS is a service that allows the provisioning of network and Grid resources in a single step, through a set of seamlessly integrated procedures
– expected to expose interfaces specific to Grid services
– made of a set of extensions to standard GMPLS
o provides enhanced network and Grid services for "power" users / apps (the Grids)
• G²MPLS is not …
– an application-specific architecture; it aims to
o support any kind of end-user application by providing network transport services and procedures that can fall back to the standard GMPLS ones
o provide automatic setup and resiliency of network connections for "standard" users
Why G²MPLS?
• Uniform interface for the Grid user to trigger Grid & network resource actions
• Single-step provisioning of Grid and network resources (w.r.t. the dual approach of Grid brokers + NRPSes)
• Adoption of well-established procedures for traffic engineering, resiliency and crankback
• Possible integration of Grids into operational/commercial networks, by overcoming the limitation of Grids operating on dedicated, stand-alone network infrastructures
• Grid nodes can be modelled as network nodes with node-level Grid resources to be advertised and configured (this is a native task for the GMPLS CP)
[Diagram: Vsites A, B and C interconnected through G²MPLS domains and an NRPS domain via G.O-UNI, G.I-NNI and G.E-NNI interfaces]
G²MPLS goals
• G²MPLS will provide part of the functionalities related to the selection and co-allocation of both Grid and network resources
• Co-allocation functionalities
– Discovery and advertisement of Grid + network capabilities and resources of the participating virtual sites (Vsites)
– Service setup / teardown
o coordination with the local job scheduler in the middleware
o configuration of the involved network connections among the participating Vsites
o (the network end point – TNA – might not be specified, if Grid resources are specified)
o resiliency management for the installed network connections and possible recovery escalation to the Grid MW for job recovery
o advanced reservations of Grid and network resources
– Service monitoring
o retrieving the status of a job (Grid transaction) and of the related network connections
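The single-step co-allocation idea can be illustrated with a toy request that carries both Grid and network constraints at once. All names and numbers here are invented for illustration; G²MPLS realizes this selection in the control plane, not in application code.

```python
# Toy illustration of a single-step GNS request: Grid and network
# resources are requested together, instead of via two separate systems
# (Grid broker for compute, NRPS for the circuit).

vsites = {
    "VsiteA": {"cpus": 64, "link_mbps": 10_000},
    "VsiteB": {"cpus": 16, "link_mbps": 1_000},
}

def gns_request(cpus_needed, mbps_needed):
    """Pick a Vsite that satisfies both Grid and network constraints at once.
    The TNA (network end point) need not be given: it follows from the Vsite."""
    for name, resources in vsites.items():
        if (resources["cpus"] >= cpus_needed
                and resources["link_mbps"] >= mbps_needed):
            return name
    return None

print(gns_request(32, 5_000))   # VsiteA satisfies both constraints
print(gns_request(128, 5_000))  # no Vsite fits -> request refused
```

The point of the sketch is the coupling: refusing when either the compute or the network constraint fails is what distinguishes single-step GNS provisioning from the dual Grid-broker + NRPS approach mentioned earlier.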
GMPLS/G²MPLS DEPLOYMENT IN PIONIER NETWORK
G²MPLS test-bed – Transport Plane [1]
• ADVA FSP 3000RE-II (Lambda Switch)
– 15 pass-through ports
– 6 local ports
– 3 physical units
• Calient DiamondWave (Fibre Switch)
– 60 ports
– 1 physical unit / 4 logical units (switch virtualization)
• Foundry NetIron XMR 8000 (Ethernet Switch)
– 2 × 4-port 10GE modules (XFP)
– 1 × 24-port 1GE module (SFP)
– 3 physical units