  1. HyspIRI Low Latency Concept & Benchmarks
     Dan Mandl, August 24, 2010
     HyspIRI Science Workshop, August 24-26, 2010, Pasadena, CA

  2. HyspIRI Low Latency Data Ops Concept

     VSWIR: Hyperspectral Visible ShortWave InfraRed Imaging Spectrometer
     (804 Mbps)
       Spectral Range: 380 to 2500 nm, in 10 nm bands
       Spatial Range: ~146 km (13.2 deg. at 626 km)
       Cross-Track Samples: >2560
       Sampling: 60 m

     TIR: Multispectral Thermal InfraRed Scanner (132 Mbps)
       Spectral Bands (8): 3.98, 7.35, 8.28, 8.63, 9.07, 10.53, 11.33,
       12.05 μm
       Spatial IFOV: 60 m
       Range: 600 km (±25.3° at 626 km)

     Direct Broadcast
     • 20 Mbps Direct Broadcast (10 Mbps data throughput; see the sketch
       below)
     • Downlink select spectral bands
     • Select L-2 products
     • Continuous Earth-view broadcast
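     To make the band-selection trade concrete, here is a back-of-envelope
     Python sketch; the assumption that the 804 Mbps VSWIR rate divides
     evenly across ~212 contiguous 10 nm bands is ours, not the
     presentation's.

         import math

         # Rates taken from the slide above.
         VSWIR_RATE_MBPS = 804.0      # full hyperspectral instrument rate
         DB_THROUGHPUT_MBPS = 10.0    # usable Direct Broadcast throughput

         # Assumption (not from the slide): contiguous 10 nm bands spanning
         # 380-2500 nm, with the data rate split evenly among them.
         n_bands = (2500 - 380) // 10     # = 212 bands
         per_band_mbps = VSWIR_RATE_MBPS / n_bands

         max_db_bands = math.floor(DB_THROUGHPUT_MBPS / per_band_mbps)
         print(f"{per_band_mbps:.2f} Mbps per band; "
               f"~{max_db_bands} full-rate bands fit in the DB link")

     Only a couple of full-rate bands fit, which is why the concept
     emphasizes downlinking select spectral bands and Level-2 products
     rather than the raw stream.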

  3. HyspIRI Data Flow
     [Block diagram] TIR (130.2 Mbps) and VSWIR (804 Mbps) feed the Command
     & Data Handling Solid State Recorder; the Intelligent Payload Module
     (IPM) drives the 20 Mbps Direct Broadcast antennas. Spacecraft links:
     S-band housekeeping and command data, plus 800 Mbps X-band science
     data, to/from the Alaska and Norway ground stations. (A rough rate
     bookkeeping sketch follows.)
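     As a rough illustration of why the recorder and the low-latency paths
     matter, the sketch below compares per-orbit recording and downlink
     volumes; the imaging and contact durations are hypothetical
     placeholders, not figures from the presentation.

         # Rates from the slide; the duty cycles below are hypothetical.
         TIR_MBPS, VSWIR_MBPS, XBAND_MBPS = 130.2, 804.0, 800.0

         IMAGING_SEC = 600   # assumed imaging time per orbit (placeholder)
         CONTACT_SEC = 480   # assumed ground contact per orbit (placeholder)

         recorded_gb = (TIR_MBPS + VSWIR_MBPS) * IMAGING_SEC / 8 / 1000
         downlinked_gb = XBAND_MBPS * CONTACT_SEC / 8 / 1000
         print(f"recorded ~{recorded_gb:.1f} GB, "
               f"downlinked ~{downlinked_gb:.1f} GB per orbit")
         # When recording outpaces downlink, low-latency users depend on the
         # 20 Mbps Direct Broadcast path and onboard (IPM) products.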

  4. Ongoing Efforts
     • Baseline detailed operations concept used to derive the cost estimate
       to be presented by Steve Chien
     • Web Coverage Processing Service (WCPS)
        Allows scientists to define algorithms that can be dynamically
         loaded onboard the satellite or executed as part of the ground
         processing (see the sketch after this list)
     • Open Science Data Elastic Cloud
        Many custom products generated in parallel by many virtual machines
        Complex products generated in concurrent steps (parallel processing)
        Elastic response to unanticipated user demand
        Quick user access (multi-gigabit access)
        Easy expandability of the cloud as needed
     • Benchmarking of CPUs for the Intelligent Payload Module
        SpaceCube (initial results presented at a previous workshop)
        Other CPUs (future workshops)
        Onboard processing
     • Delay Tolerant Network communication connectivity
        Upload of algorithms and download of data over a fault- and
         delay-tolerant connection
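     A minimal sketch of a WCPS interaction, using the OGC ProcessCoverages
     query language over HTTP; the endpoint URL, coverage name, and band
     names are hypothetical placeholders, not the project's deployed
     interface.

         import urllib.parse
         import urllib.request

         # A band-ratio (NDVI-style) algorithm in the WCPS query language.
         wcps_query = (
             'for c in (EO1_HYPERION_SCENE) '
             'return encode((c.nir - c.red) / (c.nir + c.red), "image/tiff")'
         )

         endpoint = "http://sensorweb.example.gov/wcps"  # hypothetical
         url = endpoint + "?" + urllib.parse.urlencode({"query": wcps_query})

         with urllib.request.urlopen(url) as resp:
             product = resp.read()  # bytes of the generated custom product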

  5. Experiment with Web Coverage Processing Service (WCPS) Approach to
     Injecting New Algorithms into SensorWeb
     [Workflow diagram] A science user builds a decision-tree classifier
     through machine learning / data mining (WEKA); an agent converts the
     WEKA tree object to a WCPS algorithm, which is dynamically uploaded to
     observing assets (EO-1, HyspIRI..., GlobalHawk, Ikhana...) or to a NASA
     cloud (Infrastructure as a Service, in collaboration with the Open
     Cloud Consortium) holding EO-1/HyspIRI data, reflectance algorithms,
     pattern matching algorithms, geometric correction algorithms,
     intelligent agents, and the WCPS interface. Custom algorithm upload is
     coordinated with satellite tasking, image acquisition, and processing;
     custom data products (KMZ, PNG..., e.g. an oil classifier) feed data
     distribution, delivery, and notification. The sketch below illustrates
     the tree-to-algorithm conversion step.
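     A hedged illustration of the conversion idea: a trained decision tree
     (as WEKA's J48 might emit) becomes a per-pixel classifier whose
     branches could be re-expressed as a WCPS expression. Band names,
     thresholds, and classes are made-up placeholders.

         import numpy as np

         def classify_pixel(nir, red, swir):
             """Toy decision tree mirroring a WEKA J48 dump (hypothetical)."""
             ndvi = (nir - red) / (nir + red + 1e-9)
             if ndvi > 0.3:
                 return "land"
             if swir > 0.4:
                 return "cloud_or_sand"
             return "surface_oil" if red > 0.08 else "clear_water"

         # Applied over a fake (nir, red, swir) reflectance cube; onboard or
         # in the cloud the same branches would run as a WCPS expression.
         scene = np.random.rand(100, 100, 3)
         labels = np.apply_along_axis(lambda p: classify_pixel(*p), 2, scene)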

  6. Mobile Bay Oil Spill Detection Using EO-1 Advanced Land Imager Data
     [Classified image legend] green = land; white = cloud & sand;
     black = cloud shadow; blue = clear water; grey = surface oil

  7. Low Fidelity HyspIRI IPM Testbed
     • NETGEAR Gigabit Switch: allows the board and the data generator
       workstation to connect at Gigabit speed
     • Data Generator Workstation: generates test data and streams it to
       the board at rates up to 800 Mbps (see the sketch below)
     • Compact Flash: Ext3-formatted file system with Linux libraries and
       tools
     • Platform Cable USB: provides an easy method for debugging software
       running on the board
     • Xilinx ML510 Development Board (Virtex-5 FPGA): GSFC SpaceCube 2
       core FPGA design; configured as dual 400 MHz PPC; capable of running
       Linux or in standalone mode; enables the development team to verify
       the Virtex-5 while GSFC finalizes the SpaceCube 2 design
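     A minimal sketch of the data generator's role, assuming a plain TCP
     stream paced toward the target rate; the board address, port, and
     pacing scheme are illustrative, not the testbed's actual mechanism.

         import socket
         import time

         TARGET_MBPS = 800
         CHUNK = 64 * 1024                     # 64 KiB writes
         BOARD_ADDR = ("192.168.1.50", 5000)   # hypothetical board address

         payload = bytes(CHUNK)                # stand-in for test data
         seconds_per_chunk = (CHUNK * 8) / (TARGET_MBPS * 1_000_000)

         with socket.create_connection(BOARD_ADDR) as sock:
             next_send = time.monotonic()
             while True:
                 sock.sendall(payload)
                 next_send += seconds_per_chunk
                 sleep = next_send - time.monotonic()
                 if sleep > 0:
                     time.sleep(sleep)         # crude pacing toward 800 Mbps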

  8. Compute Cloud Testbed
     • Open Cloud Consortium (OCC) providing a rack with 120 TB usable
       storage, a 1-10 Gbps fiber interface connected to GSFC and Ames, and
       320 cores to support hundreds of virtual machines (part of a larger
       expandable infrastructure consisting of 20 racks)
        System admin support
        Funded by multiple sources including the National Science Foundation
        Will stand up a 100 Gbps wide-area cloud interface (future)
        Expected to be available at least 5+ years
     • Created account on BioNimbus cloud for NASA use
        Demonstrated EO-1 ALI Level 1R and Level 1G processing in the cloud
     • Will receive a dedicated cloud compute rack in August 2010 donated by
       the Open Cloud Consortium
        Plan to port automated atmospheric correction using ATREM on
         Hyperion Level 1R to the cloud (presently running on a GSFC server)
        In process of integrating FLAASH atmospheric correction into an
         automated Hyperion Level 1R process, then porting it to the cloud
        Plan to demonstrate Hyperion Level 1R and Level 1G processing in
         the cloud
        Plan to demonstrate multiple simultaneous automated higher-level
         data products, maximizing the cloud's ability to handle parallel
         processing (see the sketch below)
        Make use of a software agent-based architecture for intelligent
         parallel data processing of multiple data products
        Experiment with security in the open cloud (OpenID/OAuth)
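     A hedged sketch of the parallel-products idea: independent product
     requests fan out across worker processes, much as they would across
     cloud virtual machines. The scene IDs and product names are
     hypothetical placeholders.

         from multiprocessing import Pool

         def make_product(job):
             scene_id, product = job
             # Stand-in for real steps (Level 1R radiometric correction,
             # Level 1G geometric correction, ATREM/FLAASH atmospheric
             # correction, classifiers, ...).
             return f"{scene_id}:{product} done"

         jobs = [(scene, product)
                 for scene in ("EO1_SCENE_A", "EO1_SCENE_B")  # hypothetical
                 for product in ("L1R", "L1G", "reflectance",
                                 "oil_classifier")]

         if __name__ == "__main__":
             with Pool(processes=8) as pool:   # one worker per core/VM
                 for result in pool.imap_unordered(make_product, jobs):
                     print(result)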

  9. Open Cloud Testbed Environment
     [Diagram] The shared OCC testbed hosts astronomical data, biological
     data (Bionimbus), networking data, image processing for disaster
     relief, and HyspIRI cloud benchmarking.

  10. Global Lambda Integrated Facility (GLIF)
      OCC collaboration with StarLight (part of GLIF). GLIF is a consortium
      of institutions, organizations, consortia, and country National
      Research & Education Networks who voluntarily share optical networking
      resources and expertise to develop the Global LambdaGrid for the
      advancement of scientific collaboration and discovery.

  11. Delay Tolerant Network (DTN) Protocol Benchmarking
      • Prototype being funded by NASA HQ / SCAN
         Purpose is to provide a space network that is delay/disruption
          tolerant
         Using EO-1 in FY11 to demonstrate various scenarios (Hengemihle)
         Aiming to demonstrate applicability to low-Earth observing missions
      • HyspIRI applicability (see the sketch below)
         Upload new data processing algorithms for the IPM
         Can send an algorithm to a DTN node without regard to when contact
          with the satellite occurs
         The DTN node handles the uplink when there is contact and sends
          confirmation back to the originator
         Examining scenarios during Direct Broadcast to handle delays during
          downlink; e.g. a data product is ready but no DB station is in
          view, so an onboard DB node receives the data product and waits
          for contact to handle downlink and confirmation
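      A minimal sketch of the store-and-forward behavior described above: a
      DTN node queues bundles until a contact window opens, then forwards
      them and confirms back to the originator. This mimics the concept
      only; a real DTN stack (e.g. the Bundle Protocol) is far richer.

          import queue

          class DtnNode:
              def __init__(self):
                  self.pending = queue.Queue()  # bundles awaiting a contact

              def submit(self, bundle: bytes) -> None:
                  """Originator hands off a bundle; no contact needed now."""
                  self.pending.put(bundle)

              def on_contact(self, uplink) -> int:
                  """Drain queued bundles while the satellite is in view."""
                  sent = 0
                  while not self.pending.empty():
                      uplink(self.pending.get())
                      sent += 1
                  return sent  # confirmation count back to the originator

          node = DtnNode()
          node.submit(b"new IPM classifier algorithm")  # anytime
          # ...later, when a contact window opens:
          confirmed = node.on_contact(lambda bundle: None)  # stand-in uplink
          print(f"{confirmed} bundle(s) uplinked and confirmed")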

  12. EO-1 Configuration for Preliminary Delay Tolerant Network (DTN)
      Prototype
      [Diagram] Lead: Jane Marquart. Implementers: Rick Mason, Jerry
      Hengemihle/Microtel

  13. Conclusion
      • Experimenting with various bottlenecks in the end-to-end data flow
        for low latency users of HyspIRI
      • Leveraging other funds and using HyspIRI funds to tailor for the
        HyspIRI mission
      • Results applicable to other high-data-volume Decadal Survey missions
