PSNC's Optical Developments
Artur Binczewski, Tomasz Szewczyk, PSNC
TERENA 3rd Network Architects Workshop


  1. PSNC's Optical Developments Artur Binczewski, Tomasz Szewczyk PSNC TERENA 3rd Network Architects Workshop

  2. PIONIER in a nutshell
• Area: 312k sq km • Population: 38M • Main academic centers: 21 • State universities: 165+ • Students: 2M+
• 700+ R&D institutions and universities interconnected via the PIONIER network
• Fiber infrastructure: 6,479 km in Poland, 2,359 km in Europe (IRU), 8,838 km in total
• 21 MANs and 5 HPC centers in the PIONIER Consortium, with PSNC as operator
Source: http://www.glif.is/publications/maps

  3. PIONIER global collaboration GÉANT GLIF PIONIER EU

  4. PIONIER network infrastructure

  5. Regional Networks

  6. PIONIER National Infrastructure

  7. NewMAN project
• National-scale deployment of an MPLS/IP platform for all MANs and PIONIER
– MPLS "last mile" provided for most users
– Full service portfolio: IPv4, IPv6, MPLS-based VPNs, multicast, QoS, …
• New-generation DWDM with automatic restoration (GMPLS)
– 100G ready
– Colorless, directionless and contentionless functions implemented
– All nodes support point-to-point 10G transmission without regeneration
DWDM platform: 191 nodes, 143 ROADMs, 396 10GE interfaces
MPLS/IP platform: 79 core switches, 365 access switches, 4,416 10GE interfaces, 21,680 1GE interfaces

  8. National Research and Education Network – PIONIER (transmission)
[Map: the PIONIER backbone – a colorless, directionless and contentionless 80-channel DWDM system interconnecting the network nodes (Gdańsk, Szczecin, Koszalin, Olsztyn, Bydgoszcz, Białystok, Toruń, Poznań, Zielona Góra, Warszawa, Łódź, Puławy, Lublin, Radom, Wrocław, Opole, Częstochowa, Kielce, Katowice, Kraków, Rzeszów and smaller sites).]
External links: Russia (Kaliningrad) 1×10 Gb/s, Lithuania 1×10 Gb/s, Belarus 1×10 Gb/s, Ukraine 1×10 Gb/s, Hamburg (GLIF, SURFnet, NORDUnet) 4×10 Gb/s, GÉANT 30 Gb/s, CBDF 10 Gb/s, CESNET/SANET 2×10 Gb/s, Wrocław 2×10 Gb/s (2 lambdas).

  9. New GLIF links

  10. International Fiber Infrastructure HAMBURG AMS-IX GLIF DE-CIX VIX CERN

  11. 100G migration
[Map: 100 Gb/s and N×100 Gb/s links interconnecting the 21 MANs (Szczecin, Koszalin, Gdańsk, Olsztyn, Toruń, Bydgoszcz, Białystok, Zielona Góra, Poznań, Warsaw, Radom, Łódź, Puławy, Lublin, Wrocław, Opole, Częstochowa, Śląsk, Kielce, Kraków, Rzeszów) and the five HPC centers (KDM HPC Gdańsk, KDM HPC Poznań, HPC Warsaw, KDM HPC Kraków, HPC Wrocław).]

  12. 100G migration
[Map: planned 100 Gb/s and N×100 Gb/s international links from the Gdańsk, Poznań, Warsaw, Kraków and Wrocław nodes towards HAMBURG, AMS-IX, GLIF, DE-CIX, VIX and CERN.]

  13. 100G migration
[Diagram: PSNC node architecture – HPC resources (IB/Eth switches) attach through MPLS switches and 100GE/10GE firewalls to SDN switches; 100GE transponders and 10×10GE muxponders (with LLE) feed the 100NET DWDM (coherent ≥100G lambdas) and the NewMAN DWDM (10G and 100G lambdas); the 21 MANs (MAN 1 … MAN 21) connect via the existing Juniper MPLS switches fitted with new 100GE cards.]

  14. Is it enough?
Research infrastructure must be equipped with advanced "living" labs that are:
• Always available, including remote access
• Able to host disruptive yet repeatable tests
• Providing a full workflow for users (managing lab infrastructure, user access and experiments)
Examples of network technology laboratories in the PIONIER network:
• Hardware Design and Prototyping Laboratory
• Optical Technology Laboratory
• Open Network Hardware Laboratory
• Integrated Network Management and Simulation Laboratory
• Software Defined Network Laboratory

  15. Hardware Design and Prototyping Laboratory
HETEROGENEOUS INSPECTION SYSTEM
• Voiding-check capabilities • Cold-solder-joint detection • Electrical correctness inspection
PCB BOARDS
• Multilayered designs • Signal analysis in bandwidths of tens of GHz • Impedance controlled • Mostly FPGA-powered • Gigabit-transmission oriented (10G / 3G-SDI etc.)
STENCIL PRINTER, REFLOW OVEN, SMT PLACEMENT EQUIPMENT
• Multiple soldering zones • High-quality thermal profiling • Robotic SMT machine • Conveyor-belt PCB transport • High speed • High precision

  16. Examples of our own hardware designs
The miniaturized 4K camera:
• FPGA CHIP – internal IP architecture based on a proprietary AXI4-like bus; unique design of a quad-head graphics controller; real-time demosaicing and color-space conversion
• DIGITAL VIDEO INTERFACES – HD-SDI & 3G-SDI compatible; four independent video channels; multiple video modes available
• VIDEO RAM – post-acquisition buffer for digital signal processing; video RAM for 4 independent video controllers; DDR2 technology @ 400 MHz; up to 8 Gbps throughput
• VIDEO HEADER – 8 independent LVDS lines; source-synchronous mode; fitted for 8/10 Mpx sensors; 4K resolution @ 60 fps
The high-precision atomic clock signals MUX:
• MULTIPLEXER FEATURES – integrated DDS for phase and frequency adjustment of the 10 MHz clock; phase shifting for PPS & 10 MHz; programmable delay line; silent switching capability; OCXO oscillator; backup clock when the input signal is lost; Ethernet statistics and management
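The clock multiplexer's "integrated DDS for phase and frequency adjustment" can be illustrated with the standard phase-accumulator arithmetic behind any direct digital synthesizer. This is a minimal sketch, not PSNC's design: the 32-bit accumulator, 100 MHz reference clock and 14-bit phase register are assumptions chosen for the example.

```python
# Sketch of DDS tuning arithmetic (assumed register widths, not the lab's HW).

ACC_BITS = 32              # assumed phase-accumulator width
F_CLK = 100e6              # assumed DDS reference clock (Hz)

def tuning_word(f_out: float) -> int:
    """Frequency tuning word: f_out = F_CLK * FTW / 2**ACC_BITS."""
    return round(f_out * (1 << ACC_BITS) / F_CLK)

def output_freq(ftw: int) -> float:
    """Actual output frequency produced by a given tuning word."""
    return F_CLK * ftw / (1 << ACC_BITS)

def phase_offset_word(degrees: float, phase_bits: int = 14) -> int:
    """Phase-offset register value for a phase shift given in degrees."""
    return round((degrees % 360.0) / 360.0 * (1 << phase_bits))

ftw = tuning_word(10e6)          # target the 10 MHz clock output
print(ftw, output_freq(ftw))     # frequency error is below F_CLK / 2**32 Hz
```

With these assumed widths the frequency resolution is F_CLK / 2³² ≈ 0.023 Hz, which is why a DDS is a natural fit for fine adjustment of a 10 MHz reference.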

  17. Optical Technology Laboratory
• 3 ROADM nodes supporting colourless, directionless, contentionless and gridless operation
• Interfaces: 10G, HD-SDI/3G, 100G and a 400G link
• Tunable lasers able to apply different modulation schemes (affecting the width of the signal's spectral bandwidth)
Existing and additional measurement tools and elements: PSO-200, tunable light source, broadband POLMUX sources, WDM OSA, CD & PMD HR OSA, OTDR, IP tester, 100G router tester, 10G EDFA/Raman, power meter, tunable attenuator, Finisar WaveShaper 4000S, MEMS optical filters, polarizers, PMD controller, optical switch, fibers
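The remark that the modulation scheme affects the signal's spectral bandwidth can be made concrete with a back-of-the-envelope estimate: for a fixed bit rate, the occupied bandwidth tracks the symbol rate, which depends on bits per symbol, polarization multiplexing and FEC overhead. The 15% FEC overhead and the Nyquist approximation (bandwidth ≈ symbol rate) below are assumptions for illustration, not lab parameters.

```python
# Rough symbol-rate / spectral-width estimate for coherent formats.
# Assumptions: 15% FEC overhead, Nyquist shaping (occupied BW ≈ symbol rate).

FORMATS = {      # bits per symbol, per polarization
    "BPSK": 1,
    "QPSK": 2,
    "16QAM": 4,
}

def symbol_rate_gbd(bit_rate_gbps: float, fmt: str,
                    polarizations: int = 2, fec_overhead: float = 0.15) -> float:
    """Symbol rate (GBd) needed to carry the payload bit rate over the line."""
    bits_per_symbol = FORMATS[fmt] * polarizations
    return bit_rate_gbps * (1 + fec_overhead) / bits_per_symbol

for fmt in FORMATS:
    rs = symbol_rate_gbd(100, fmt)
    print(f"100G {fmt} (dual-pol): {rs:.2f} GBd ≈ {rs:.2f} GHz occupied")
```

Under these assumptions a 100G dual-polarization QPSK signal needs about 28.75 GBd, i.e. roughly 29 GHz of spectrum, while 16QAM halves that at the cost of OSNR margin — which is exactly the trade-off a tunable-laser testbed lets you explore.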

  18. Open Network Hardware Laboratory
ONH Laboratory:
– ATCA standard (Advanced Telecommunications Computing Architecture)
– 6 chassis, equipped with several blades carrying different network processors (NP) and digital signal processors (DSP)
Each ATCA chassis (node) in the ONH Laboratory has:
– two 14-port SFP+ Ethernet blades (switches),
– two dual Intel Xeon E5-2600 blades with SAS disks,
– a dual Cavium Octeon II CN68xx blade together with the Cavium Octeon SDK,
– a dual Broadcom XLP832 module and a dual EZchip NP-4 blade.
Each ATCA chassis is also equipped with a Texas Instruments DSP media resource module that provides high-density media processing.
Nodes have 20 × 1G/10G interfaces for interconnections and seven free slots for future expansion and reconfiguration.

  19. Open Network Hardware Laboratory ATCA node decomposition

  20. Integrated Network Management and Simulation Laboratory
[Diagram: two MPLS domains (A and B) built of MPLS routers and switches; FTTH access networks on both sides with access switches, CPEs, STBs and TVs; computer centers; and a server with computing resources running OpenStack, OpenFlow and management tools.]

  21. Integrated Network Management and Simulation Laboratory
Software:
• IBM Tivoli / HP OpenView – cloud and infrastructure management system
– Network management
– Virtualization management
– Cloud management
– Services management
• SteelCentral (formerly OPNET)
– Network planning
– Network performance simulation
– Multi-technology, multi-vendor design
– Lifecycle support

  22. Software Defined Network Laboratory
Current status:
• Mininet was the first choice for testing SDN developments at PSNC
• A simple testbed using MX80, MX240 and EX9208 routers was used for demonstrations (shared in time)
• As the number of SDN-related projects grew, time-sharing the testbed was no longer feasible, so it was decided to build a shared, flowspace-based testbed
• The testbed consists of:
– two Juniper MX80s supporting OpenFlow 1.0,
– FlowVisor and servers to accommodate endpoints and controllers

  23. Software Defined Network Laboratory
Current status:
• Floodlight and OpenDaylight as controllers
• In addition to the "traditional" FlowVisor, we are also looking at other options for flowspace slicing/virtualization (FlowSpace Firewall, OVX)
• This simple testbed is integrated with the current national testbed (PL-LAB)
[Diagram: two OpenFlow-enabled Juniper MX80s with endpoint VMs (VMA-1 … VMA-n, VMB-1 … VMB-n), controllers 1 … n on a management network, and external connections on both sides.]
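The flowspace idea behind FlowVisor-style slicing can be sketched in a few lines: each slice is granted a disjoint region of the OpenFlow 1.0 match space, and traffic is dispatched to the controller that owns the matching region. Everything named below (slice names, VLAN ranges, controller endpoints) is a hypothetical example, not the actual PL-LAB configuration.

```python
# Minimal illustration of flowspace-based slicing in the style of FlowVisor:
# slices own disjoint VLAN-ID ranges of the OpenFlow match space, and each
# flow is handed to the owning slice's controller. All names are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Slice:
    name: str
    controller: str      # host:port of this slice's OpenFlow controller
    vlan_range: range    # region of flowspace delegated to the slice

SLICES = [
    Slice("project-a", "ctrl-a.example:6633", range(100, 200)),
    Slice("project-b", "ctrl-b.example:6633", range(200, 300)),
]

def slice_for(vlan_id: int) -> Optional[Slice]:
    """Return the slice whose flowspace covers this VLAN tag, if any."""
    for s in SLICES:
        if vlan_id in s.vlan_range:
            return s
    return None          # traffic outside every slice gets no controller

print(slice_for(150).name)  # -> project-a
```

Real FlowVisor slices over the full OpenFlow 1.0 10-tuple (ports, MAC/IP addresses, TCP/UDP ports, …), but the dispatch principle is the same: disjoint regions mean one experiment's rules can never touch another's traffic.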

  24. Software Defined Network Laboratory
• A new infrastructure project has received structural funds: PL-LAB2020
• A joint collaboration of PSNC, NIT and 4 Polish universities
• 10 Gb/s interconnectivity between SDN sites in the core
• Each site equipped with wireless and SDN testbeds
• Deployment: Q3/Q4 2015
[Diagram: a VPLS core interconnecting PSNC, IŁ, PW, PWr, PŚl and PG at 10 Gb/s, and AGH, IITiS and PP at 1 Gb/s.]
