VMS SAN Technology
Spring 2002
John Covert
3C01

Fibre Channel
• ANSI-standard network and storage interconnect
  – OpenVMS, and most other operating systems, use it for SCSI storage
  – TCP/IP and VIA are also possible
• 1.06 gigabit/sec, full-duplex, serial interconnect
  – Capable of 100 MB/sec per link (with 1 Gb links)
  – 2 Gb in early 2002; 10 Gb in 2003-2004
• Long distance
  – 500 m multi-mode fiber
  – 100 km single-mode fiber
  – 600 km with FC/ATM links
Topologies
• Arbitrated loop, FC-AL (NT/UNIX today)
  – Uses hubs
  – Maximum number of nodes is fixed at 126
  – Shared bandwidth
• Switched (SAN - VMS/UNIX/NT)
  – Highly scalable
  – Multiple concurrent communications
  – Switch can connect other interconnect types

Current Configurations
• Up to twenty switches (8- or 16-port) per FC fabric
• AlphaServer 800, 1000A*, 1200, 4100, 4000, 8200, 8400, DS10, DS20, DS20E, ES40, GS60, GS80, GS140, GS160 & GS320
• Adapters (max) per host determined by the platform type: 2, 4, 8, or 26
• Multipath support - no single point of failure
• 100 km max length (without ATM)
* The AS1000A does not have console support for FC.
Long-Distance Storage Interconnect
• FC is the first long-distance storage interconnect
  – New possibilities for disaster tolerance
  – Extensive multipath capability
• Host-Based Volume Shadowing (HBVS)
• Data Replication Manager (DRM)

HBVS: Multi-site FC Clusters
[Diagram: two sites, each with Alpha hosts and an FC switch connected to HSG controllers. Cluster interconnect between sites runs over CI, DSSI, MC, FDDI, T3, Gigabit Ethernet, or ATM through GigaSwitches; the FC links span up to 100 km (600 km with ATM). A host-based shadow set spans the HSG storage at both sites.]
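Not from the slides: a minimal sketch of how such a cross-site shadow set is mounted with HBVS, assuming hypothetical member disks $1$DGA10 and $1$DGA20 (one behind the HSG at each site) and a placeholder volume label DATA01.

$ ! Combine one hypothetical FC disk from each site into shadow set DSA1
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA10:, $1$DGA20:) DATA01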
HBVS Multi-site FC Pro and Con
• Pro
  – High performance, low latency
  – Symmetric access
  – Fast failover
• Con
  – Full shadow copies and merges are required today
    · HSG write logging, after V7.3
  – More CPU overhead

DRM Configuration
[Diagram: active FC hosts at one site and cold standby nodes at the other, with host-to-host cluster communication between sites. Each site has an FC switch connected to HSG controllers; the inter-site links span up to 600 km.]
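Also not on the slide, but relevant to the copy/merge concern: the state of a shadow set and its members can be checked from DCL. The device names continue the hypothetical example above.

$ ! Shows members and whether a shadow copy or merge is in progress
$ SHOW DEVICE DSA1:/FULL
$ ! Per-member view
$ SHOW DEVICE $1$DGA10:/FULL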
DRM Configuration
[Diagram: Alpha hosts at the active site and cold standby nodes at the other, with host-to-host communication over LAN/CI/DSSI/MC. Each site has an FC switch connected to HSG controllers; the FC links span up to 100 km over single-mode fiber. A controller-based remote copy set replicates data between the HSG pairs.]

DRM Pro and Con
• Pro
  – High performance, low latency
  – No shadow merges
  – Supported now, and enhancements are planned
• Con
  – Asymmetric access
  – Cold standby
  – Requires both HSG controller ports on the same fabric
  – Manual failover
    · 15 min. is typical
VMS V7.3 SAN Features

FibreChannel/SCSI “Fast Path”
• KGPSA (FibreChannel)
• KZPBA (SCSI)
• Improves I/O scaling on SMP platforms
  – Moves I/O processing off the primary CPU
  – Reduces “hold time” of IOLOCK8 by ~30%
  – Streamlines the normal I/O path (read/write)
  – Uses pre-allocated “resource bundles”
• Explicit controls available (see the sketch below)
  – SET DEVICE/PREFERRED_CPU
  – SYSGEN parameters
    · FAST_PATH
    · FAST_PATH_PORTS
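A minimal sketch of looking at those SYSGEN controls from DCL (display only; permanent changes would normally be made through MODPARAMS.DAT and AUTOGEN). The per-device preferred-CPU control appears on the next slide.

$ ! Display the Fast Path parameters from the current system parameter file
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW FAST_PATH
SYSGEN> SHOW FAST_PATH_PORTS
SYSGEN> EXIT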
FibreChannel/SCSI “Fast Path”

$ show device /full fga0

Device FGA0:, device type KGPSA Fibre Channel, is online, shareable, error
    logging is enabled.

    Error count                    0    Operations completed                  0
    Owner process                 ""    Owner UIC                      [SYSTEM]
    Owner process ID        00000000    Dev Prot          S:RWPL,O:RWPL,G,W
    Reference count                0    Default buffer size                   0
    Current preferred CPU Id      15    Fastpath                              1
    FC Port Name 1000-0000-C921-BD93    FC Node Name        2000-0000-C921-BD93

$
$ set device fga0: /preferred=3
$

Fibre Channel Tape Support
• Modular Data Router (MDR)
  – Fibre Channel to parallel SCSI bridge
  – Connects to one or two Fibre Channel ports on a SAN
• Multi-host, but not multi-path
• Can be served to the cluster via TMSCP
• Supported as a native VMS tape device by COPY, BACKUP, etc.
• ABS, MRU, SLS support
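To illustrate the native-tape-device point, a sketch of writing a BACKUP save set to an FC-attached drive behind the MDR. The tape device $2$MGA4, the disk DKA100, the label, and the save-set name are all hypothetical placeholders.

$ ! Write an image backup of a local disk to the FC-attached tape drive
$ INITIALIZE $2$MGA4: DAILY
$ MOUNT/FOREIGN $2$MGA4:
$ BACKUP/IMAGE/LOG DKA100: $2$MGA4:DAILY.BCK/LABEL=DAILY
$ DISMOUNT $2$MGA4: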
FibreChannel Tape Support
[Diagram: WinNT, Tru64, and OpenVMS Alpha hosts share an FC switch that connects to a RAID array (disk SCSI controller) and, through the MDR, to a SCSI tape library. The tape is TMSCP-served to additional OpenVMS Alpha or VAX cluster members.]

VMS V7.3-1 SAN Features
Failover to the MSCP Served Path
• Disk Multipath Failover to MSCP Served Paths
  – Current implementation supports failover amongst direct paths
  – New implementation allows failover to a served path if all direct paths
    are down, and failback when a direct path is restored
  – Supported for multihost FibreChannel and SCSI connections

$ sh dev /full $1$dga100:

Disk $1$DGA100: (CEAGLE), device type HSV100, is online, mounted, file-oriented
    device, shareable, device has multiple I/O paths, served to cluster via
    MSCP Server, error logging is enabled.

    Error count                    0    Operations completed             562369
    Owner process                 ""    Owner UIC                      [SYSTEM]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count              352    Default buffer size                 512
    Current preferred CPU Id       0    Fastpath                              1
    WWID  01000010:6005-08B4-0001-003F-0002-1000-0021-0000
    Total blocks            41943040    Sectors per track                   128
    Total cylinders             2560    Tracks per cylinder                 128
    Host name               "CEAGLE"    Host type, avail   Compaq AlphaServer ES40, yes
    Alternate host name     "MARQUE"    Alt. type, avail   Compaq AlphaServer ES45 Model 2, yes
    Allocation class               1

  I/O paths to device            5
  Path PGA0.5000-1FE1-0011-AF08 (CEAGLE), primary path, current path.
    Error count                    0    Operations completed             561886
  Path PGA0.5000-1FE1-0011-AF0C (CEAGLE).
    Error count                    0    Operations completed                161
  Path PGB0.5000-1FE1-0011-AF09 (CEAGLE).
    Error count                    0    Operations completed                161
  Path PGB0.5000-1FE1-0011-AF0D (CEAGLE).
    Error count                    0    Operations completed                161
  Path MSCP (MARQUE).
    Error count                    0    Operations completed                  0
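As an aside, a hedged sketch of the related multipath path controls, reusing the device and path names from the display above; these are the general SET DEVICE multipath qualifiers, not something specific to this slide.

$ ! Manually switch the current path
$ SET DEVICE $1$DGA100: /SWITCH /PATH=PGB0.5000-1FE1-0011-AF09
$ ! Temporarily exclude a path from failover consideration, then restore it
$ SET DEVICE $1$DGA100: /NOENABLE /PATH=PGA0.5000-1FE1-0011-AF0C
$ SET DEVICE $1$DGA100: /ENABLE /PATH=PGA0.5000-1FE1-0011-AF0C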
Failover To The MSCP Path
[Diagram: the same two-site configuration as the earlier HBVS slide - Alpha hosts and FC switches at each site connected to HSG controllers, cluster interconnect over CI, DSSI, MC, FDDI, T3, Gigabit Ethernet, or ATM through GigaSwitches, FC links up to 100 km, and a host-based shadow set spanning both sites.]

Multipath Tape Support
• Multipath Tape Support (see the path-selection sketch below)
  – Allows user selection of path (load balancing)
    · 200 MB/sec through a dual-FC MDR
    · 8 SDLT drives can be driven at full compacted bandwidth
    · No need to use MDR SSP
  – Dynamic failover between the 2 ports on the MDR
  – MDR is still a single point of failure, but fabric failures are tolerated
  – Failover to the MSCP path is not supported for tape
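A minimal sketch of the user path selection mentioned above, assuming the SET DEVICE multipath qualifiers apply to the tape device just as they do to disks, and using the tape device and a path name from the SHOW DEVICE display that follows.

$ ! Spread tape traffic by steering this drive to the second MDR port
$ SET DEVICE $2$MGA4: /SWITCH /PATH=PGC0.5005-08B3-0010-269A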
$ sh dev /full mga4:

Magtape $2$MGA4: (CLETA), device type COMPAQ SuperDLT1, is online, file-oriented
    device, available to cluster, device has multiple I/O paths, error logging
    is enabled, controller supports compaction (compaction disabled), device
    supports fastskip.

    Error count                    0    Operations completed                  0
    Owner process                 ""    Owner UIC                      [SYSTEM]
    Owner process ID        00000000    Dev Prot        S:RWPL,O:RWPL,G:R,W
    Reference count                0    Default buffer size                2048
    WWID  02000008:500E-09E0-0005-460D
    Density                  default    Format                        Normal-11
    Allocation class               2

  Volume status:  no-unload on dismount, position lost, odd parity.

  I/O paths to device            4
  Path PGA0.5005-08B3-0010-2699 (CLETA), primary path, current path.
    Error count                    0    Operations completed                  0
  Path PGB0.5005-08B3-0010-2699 (CLETA).
    Error count                    0    Operations completed                  0
  Path PGC0.5005-08B3-0010-269A (CLETA).
    Error count                    0    Operations completed                  0
  Path PGD0.5005-08B3-0010-269A (CLETA).
    Error count                    0    Operations completed                  0

Typical FibreChannel Configuration
[Diagram: two 4-CPU Alpha hosts, each with four KGPSA adapters (fga-fgd, port drivers pga-pgd) on their PCI buses, connected to two FC switches. The switches connect to HSG controller ports A and B and to a dual-port MDR.]
Distributed Interrupts
• Distributed Interrupts to Fastpath Devices
  – Allows the hardware interrupt to be targeted directly at the “preferred”
    fastpath CPU
  – Frees up CPU cycles on the primary processor
  – Avoids the IP-interrupt overhead of redirecting the interrupt to the
    “preferred” fastpath CPU
  – CPU 0 load for I/O processing = 0% with distributed interrupts + fastpath

Interrupt Coalescing on the KGPSA
• Interrupt Coalescing on KGPSA Adapters
  – Aggregates I/O completion interrupts in the host bus adapter
  – Saves passes through the interrupt handler and reduces IOLOCK8 hold time
  – Initial tests show a 25% reduction in IOLOCK8 hold time (3-4 µs per I/O),
    resulting in a direct 25% increase in maximum I/Os per second for high-I/O
    workloads
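Not from the slides, but one way to observe these effects: the per-device preferred CPU is visible in SHOW DEVICE/FULL (as on the earlier Fast Path slide), and the shift of interrupt-state time off the primary CPU can be watched with MONITOR. A minimal sketch:

$ ! Confirm which CPU is handling the adapter's fastpath work
$ SHOW DEVICE/FULL FGA0:
$ ! Watch processor-mode time, including time spent in interrupt state
$ MONITOR MODES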