Data Mining in Aeronautics, Science, and Exploration Systems 2007 Conference
June 26-27, 2007
Computer History Museum, Mountain View, California, USA

Sponsored by:
NASA Engineering and Safety Center
Science Mission Directorate
Aeronautics Research Mission Directorate - IVHM
Numerous disciplines, including aeronautics, physical sciences, and space exploration, have benefited from recent advances in data and text mining, machine learning, and statistics. The Data Mining in Aeronautics, Science, and Exploration Systems (DMASES) 2007 conference provides the data mining community with an opportunity to share these advances across the larger communities of engineers and scientists working in aeronautics, aerospace, and science. This single-track conference features in-depth lectures, tutorials, discussion, and a poster session.

Conference Organizers:
Ashok N. Srivastava, Ph.D., Intelligent Systems Division, NASA Ames Research Center
Dawn M. McIntosh, Intelligent Systems Division, NASA Ames Research Center
Bob Beil, Systems Engineering Office, NASA Engineering and Safety Center

Session Chairs:
Kevin H. Knuth, Ph.D. (Sciences), Department of Physics, State University of New York, Albany
Michael D. New, Capt., Ph.D. (Aeronautics), Delta Airlines, Inc.
Anindya Ghoshal, Ph.D. (Exp. Systems), United Technologies Research Center, United Technologies Corp.
Conference Agenda

Tuesday, June 26
8:00 AM   REGISTRATION
8:30 AM   Morning Announcements/Introductions
8:35 AM   Mining Future Datascapes - Srivastava/NASA Ames Research Center
9:15 AM   Ascent Summary Data Analysis Tool for Shuttle Wing Leading Edge Impact Detection - McIntosh/NASA Ames Research Center

Exploration Systems Session
9:35 AM   Distributed Mobility Management for Target Tracking in Mobile Sensor Networks - Chakrabarty/Duke University
10:20 AM  * break *
10:45 AM  A Structural Neural System for Data Mining and Anomaly Detection - Schulz/University of Cincinnati
11:25 AM  Current Trends in Performance Prognostics Using Integrated Simulation and Sensors - Baca/Sandia National Laboratories
12:25 PM  * Poster Session/Lunch *

Sciences Session
2:00 PM   Problem Solving Strategies: Sampling & Heuristics - Knuth/State University of New York, Albany
2:20 PM   Making the Sky Searchable: Rapid Indexing for Automated Astrometry - Roweis/Google
2:30 PM   Bayesian Analysis of the Cosmic Microwave Background - Jewell/NASA Jet Propulsion Laboratory
3:00 PM   Efficient & Stable Gaussian Process Calculations - Foster/San Jose State University
3:30 PM   * break *
4:00 PM   Understanding Large-Scale Structure in Earth Science Remote Sensing Data Sets - Braverman/NASA Jet Propulsion Laboratory
4:30 PM   Data-driven Modeling for Understanding Climate-Vegetation Interactions - Nemani/NASA Ames Research Center
5:00 PM   END
Wednesday, June 27
8:00 AM   REGISTRATION
8:30 AM   Morning Announcements
8:35 AM   Tutorial, session I - Principles of Bayesian Methods - Sansó/University of California, Santa Cruz
10:00 AM  * break *
10:30 AM  Tutorial, session II - Principles of Bayesian Methods - Sansó/University of California, Santa Cruz
12:30 PM  * Collaboration Discussions & Networking/Lunch *

Aeronautics Session
1:30 PM   National Aeronautics Research & Development Policy - Overview and Outreach - Schlickenmaier/NASA Headquarters
2:00 PM   Applying Knowledge Representation to Runway Incursion - Wilczynski/University of Southern California
3:00 PM   The Role of Data Mining in Aviation Safety Decision Making - McVenes/Air Line Pilots Association, International
3:30 PM   * break *
4:00 PM   Sifting NOAA Archived ACARS Data for Wind Variation to Improve Traffic Efficiency - Ren/Georgia Institute of Technology
4:30 PM   Data & Text Mining in Boeing - Kao/Boeing Phantom Works
5:00 PM   Concluding Remarks - Srivastava
5:10 PM   END
Invited Presentations

Conference Coordinator Presentations
Mining Future Datascapes - Ashok Srivastava, NASA Ames Research Center
Ascent Summary Data Analysis Tool for Shuttle Wing Leading Edge Impact Detection - Dawn McIntosh, NASA Ames Research Center
NASA Engineering and Safety Center Data Mining and Trending Working Group - Bob Beil, NASA Engineering and Safety Center

Tuesday, June 26
Distributed Mobility Management for Target Tracking in Mobile Sensor Networks - Krishnendu Chakrabarty, Duke University
A Structural Neural System for Data Mining and Anomaly Detection - Mark Schulz, University of Cincinnati
Current Trends in Performance Prognostics Using Integrated Simulation and Sensors - Thomas J. Baca, Sandia National Laboratories
Problem Solving Strategies: Sampling and Heuristics - Kevin Knuth, SUNY Albany
Making the Sky Searchable: Rapid Indexing for Automated Astrometry - Sam Roweis, Google
Bayesian Analysis of the Cosmic Microwave Background - Jeff Jewell, NASA Jet Propulsion Laboratory
Efficient and Stable Gaussian Process Calculations - Leslie Foster, San Jose State University
Understanding Large-Scale Structure in Earth Science Remote Sensing Data Sets - Amy Braverman, NASA Jet Propulsion Laboratory
Data-Driven Modeling for Understanding Climate-Vegetation Interactions - Ramakrishna Nemani, NASA Ames Research Center
Efficient & Stable Gaussian Process Calculations
Leslie Foster, San Jose State University

The Gaussian process technique is one popular approach for analyzing and making predictions related to large data sets. However, the traditional Gaussian process approach requires solving a system of linear equations that, in many cases, is so large that it is not practical to solve in a reasonable amount of time. We describe how low-rank approximations can be used to solve these equations approximately. The resulting algorithm is fast, accurate, numerically stable, and general. We illustrate the application of the algorithm to the prediction of redshifts using broad-spectrum measurements of the light from galaxies.
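The computational idea described in the abstract, replacing the full n × n covariance matrix with a low-rank approximation so the linear system can be solved cheaply, can be sketched as follows. This is a minimal NumPy illustration only: the Nyström-style approximation built from a random subset of m training points, the squared-exponential kernel, and the Woodbury-identity solve are assumptions made for the sketch, not necessarily the factorization or rank-selection strategy the authors use.

```python
import numpy as np

def sq_exp_kernel(A, B, length_scale=1.0):
    """Squared-exponential covariance between the rows of A and the rows of B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

def low_rank_gp_predict(X, y, X_star, lam=0.1, m=200, length_scale=1.0, rng=None):
    """Approximate the GP prediction K*(lam^2 I + K)^{-1} y using a Nystrom-type
    low-rank approximation K ~= V V^T built from m randomly chosen training
    points, so the n x n system is never formed or factored directly."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    idx = rng.choice(n, size=min(m, n), replace=False)

    C = sq_exp_kernel(X, X[idx], length_scale)             # n x m block of K
    W = sq_exp_kernel(X[idx], X[idx], length_scale)        # m x m block of K
    L = np.linalg.cholesky(W + 1e-10 * np.eye(len(idx)))   # small jitter for stability
    V = np.linalg.solve(L, C.T).T                          # K ~= C W^{-1} C^T = V V^T

    # Woodbury identity:
    #   (lam^2 I + V V^T)^{-1} y = (y - V (lam^2 I_m + V^T V)^{-1} V^T y) / lam^2
    small = lam**2 * np.eye(V.shape[1]) + V.T @ V          # m x m, cheap to factor
    alpha = (y - V @ np.linalg.solve(small, V.T @ y)) / lam**2

    K_star = sq_exp_kernel(X_star, X, length_scale)        # n* x n cross covariance
    return K_star @ alpha
```

With rank m much smaller than n, the dominant costs are the n × m kernel block and two m × m factorizations, roughly O(nm²) work in place of the O(n³) dense solve required by the exact formulation.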
Efficient and Stable Gaussian Process Calculations
Leslie Foster, Nabeela Aijaz, Michael Hurley, Apolo Luis, Joel Rinsky, Chandrika Satyavolu, Alex Waagen (team leader)
Mathematics, San Jose State University
foster@math.sjsu.edu
June 26, 2007, DMASES 2007
Outline
I. The Problem and Background
II. Low Rank Approximation
III. Numerical Stability and Rank Selection
IV. Results
V. Conclusions
Prediction and Estimation

Training data:
X – data matrix of observations – n × d
y – vector of target data – n × 1

Testing data:
X* – matrix of new observations – n* × d

Goals:
predict y* corresponding to X*
estimate y corresponding to X
Approaches for prediction with large data sets:
Traditional regression
Neural networks
Support Vector Machines
E-model
. . .
Gaussian Processes
Gaussian Process Solution

Form covariance matrix K (n × n), cross covariance matrix K* (n* × n), and select parameter λ.

Predict y* using ŷ* = K* (λ²I + K)⁻¹ y.

(λ²I + K) is large – for example 180000 × 180000.
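For modest n the prediction formula on this slide can be evaluated directly. The sketch below applies it to synthetic data with an assumed squared-exponential covariance and an illustrative value of λ; forming and solving the dense n × n system costs O(n³) work and O(n²) storage, which is exactly what becomes impractical when n is on the order of 180000 and motivates the low-rank approach described in the abstract.

```python
import numpy as np

def sq_exp_kernel(A, B, length_scale=1.0):
    """Squared-exponential covariance between the rows of A and the rows of B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)
n, n_star, d, lam = 500, 50, 3, 0.1

X = rng.standard_normal((n, d))                       # training inputs, n x d
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)    # training targets, n x 1
X_star = rng.standard_normal((n_star, d))             # test inputs, n* x d

K = sq_exp_kernel(X, X)                               # covariance matrix, n x n
K_star = sq_exp_kernel(X_star, X)                     # cross covariance, n* x n
# Predicted targets:  y_hat* = K* (lam^2 I + K)^{-1} y  (dense O(n^3) solve)
y_hat_star = K_star @ np.linalg.solve(lam**2 * np.eye(n) + K, y)
print(y_hat_star.shape)                               # (n*,)
```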