Sensor Fusion for Context Understanding



  1. Sensor Fusion for Context Understanding
  Huadong Wu, Mel Siegel, The Robotics Institute, Carnegie Mellon University
  Sevim Ablay, Applications Research Lab, Motorola Labs
  IMTC'2002, Anchorage, AK, USA (IMTC-2002-1077, mws@cmu.edu)

  2. sensor fusion
  • how to combine the outputs of multiple sensor perspectives on an observable?
  • modalities may be "complementary", "competitive", or "cooperative"
  • technologies may demand registration
  • a variety of historical approaches, e.g.:
    – statistical (error and confidence measures)
    – voting schemes (need at least three sensors)
    – Bayesian (probability inference)
    – neural networks, fuzzy logic, etc.
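To make two of these families concrete, here is a minimal sketch of a voting scheme and of confidence-weighted statistical fusion. It is illustrative only, not from the paper; the sensor readings, labels, and variances are invented.

```python
from collections import Counter

def majority_vote(labels):
    """Voting fusion: at least three sensors are needed so that a
    single faulty reading can be outvoted by the other two."""
    winner, count = Counter(labels).most_common(1)[0]
    return winner if count > len(labels) / 2 else None  # no majority: undecided

def weighted_estimate(readings):
    """Statistical fusion: combine numeric readings weighted by
    confidence, here via inverse-variance weighting."""
    num = sum(x / var for x, var in readings)
    den = sum(1.0 / var for _, var in readings)
    return num / den

# three redundant ("competitive") sensors classifying room occupancy
print(majority_vote(["occupied", "occupied", "empty"]))   # -> "occupied"

# two noisy temperature sensors, as (value, error variance) pairs
print(weighted_estimate([(21.2, 0.5), (22.0, 2.0)]))      # -> 21.36, closer to the more confident sensor
```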

  3. context understanding
  • the best algorithm for human-computer interaction tasks depends on context
  • context can be difficult to discern
  • multiple sensors give complementary (and sometimes contradictory) clues
  • sensor fusion techniques are needed
  • (but the best algorithm for sensor fusion tasks may depend on context!)
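One concrete way contradictory clues can be reconciled is the Bayesian probability inference listed on the previous slide. The sketch below is not the authors' implementation; the contexts, sensors, and probabilities are invented, and the sensors are assumed conditionally independent.

```python
def bayes_fuse(prior, likelihoods):
    """Multiply a prior over context hypotheses by each sensor's
    likelihood for its observation, then renormalize. Contradictory
    clues simply pull the posterior in opposite directions."""
    posterior = dict(prior)
    for lik in likelihoods:          # one likelihood table per sensor
        for ctx in posterior:
            posterior[ctx] *= lik[ctx]
    total = sum(posterior.values())
    return {ctx: p / total for ctx, p in posterior.items()}

prior = {"meeting": 0.5, "empty": 0.5}
# camera sees people (supports "meeting"); microphone hears silence (supports "empty")
camera     = {"meeting": 0.9, "empty": 0.1}
microphone = {"meeting": 0.3, "empty": 0.7}
print(bayes_fuse(prior, [camera, microphone]))  # -> {'meeting': ~0.79, 'empty': ~0.21}
```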

  4. agenda
  • a generalizable sensor fusion architecture for "context-aware computing"
    – or (my preference, but not the standard term) "context-aware human-computer interaction"
  • a realistic test to demonstrate usability and performance enhancement
  • an improved sensor fusion approach (to be detailed in the next paper)

  5. background
  • current context-sensing architectures (e.g., the Georgia Tech Context Toolkit) tightly couple sensors and contexts
  • this makes it difficult to substitute or add sensors, and thus difficult to extend the scope of contexts
  • we describe a modular hierarchical architecture to overcome these limitations
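In miniature, the decoupling this slide argues for might look like the following. This is a hypothetical sketch, not the Context Toolkit API or the paper's architecture; every class, attribute, and value is invented. The point is that the fusion layer depends only on a common observation format, so sensors can be substituted or added without touching it.

```python
from abc import ABC, abstractmethod

class SensorModule(ABC):
    """Every sensor publishes observations in a common
    (attribute, value, confidence) form."""
    @abstractmethod
    def read(self) -> list[tuple[str, object, float]]: ...

class BadgeReader(SensorModule):
    def read(self):
        return [("person_id", "alice", 0.95)]

class FaceRecognizer(SensorModule):
    def read(self):
        return [("person_id", "alice", 0.80), ("posture", "seated", 0.60)]

def fuse(sensors: list[SensorModule]) -> dict:
    """Keep the highest-confidence value per attribute; the fusion
    layer never names a concrete sensor type."""
    best: dict[str, tuple[object, float]] = {}
    for s in sensors:
        for attr, value, conf in s.read():
            if attr not in best or conf > best[attr][1]:
                best[attr] = (value, conf)
    return {attr: value for attr, (value, _) in best.items()}

print(fuse([BadgeReader(), FaceRecognizer()]))
# -> {'person_id': 'alice', 'posture': 'seated'}
```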

  6. toward context understanding
  [Diagram: a layered architecture. At the bottom, the environment situation (people in the meeting room, objects around a moving car, etc.) is observed by sensing hardware (cameras, microphones, etc.). Sensor outputs flow up through traditional sensor fusion and information separation to the identification, representation, and understanding of context, which in turn lets the system adapt its behavior to context. Humans, by contrast, understand context naturally and effortlessly.]

  7. methodology
  • top-down
  • adapt/extend the Georgia Tech Context Toolkit (Motorola helps support both groups)
  • create realistic context and sensor prototypes
  • implement a practical context architecture for a plausible test application scenario
  • implement sensor fusion as a mapping of sensor data into the context database
  • place heavy emphasis on real sensor device characterization and (where needed) simulation

  8. context-sensing methodology: sensor data-to-context mapping

  Context observations and hypotheses are obtained by applying a matrix of mapping functions to the sensory output:

  \[
  \begin{bmatrix} \text{context}_1 \\ \text{context}_2 \\ \vdots \\ \text{context}_n \end{bmatrix}
  =
  \begin{bmatrix}
  f_{11}(\cdot) & f_{12}(\cdot) & \cdots & f_{1m}(\cdot) \\
  f_{21}(\cdot) & f_{22}(\cdot) & \cdots & f_{2m}(\cdot) \\
  \vdots & \vdots & \ddots & \vdots \\
  f_{n1}(\cdot) & f_{n2}(\cdot) & \cdots & f_{nm}(\cdot)
  \end{bmatrix}
  \begin{bmatrix} \text{sensor}_1 \\ \text{sensor}_2 \\ \vdots \\ \text{sensor}_m \end{bmatrix}
  \]

  where the left-hand side collects the n context observations and hypotheses, the right-hand vector holds the m sensory outputs, and each element f_ij(·) maps the output of sensor j into evidence about context i.
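One way to read this matrix in code (an illustrative sketch; the paper does not prescribe an implementation or a combination rule): "multiplication" becomes function application followed by combining each row across sensors. The contexts, lambdas, scaling constants, and sensor values below are all invented.

```python
def row_combine(evidence):
    """Combine one row of mapped evidence; averaging stands in for
    whatever fusion rule the application actually calls for."""
    values = [e for e in evidence if e is not None]  # None: sensor irrelevant to this context
    return sum(values) / len(values) if values else None

# f[i][j] maps sensor j's raw output to support for context i (None = no mapping)
f = [
    [lambda rms: min(rms / 0.2, 1.0), None],                        # context 0 "someone speaking": microphone only
    [lambda rms: min(rms / 0.2, 1.0), lambda n: min(n / 4, 1.0)],   # context 1 "meeting in progress": both sensors
]

sensor_output = [0.15, 6]   # [microphone RMS level, people counted by camera]

contexts = [
    row_combine([f[i][j](sensor_output[j]) if f[i][j] else None
                 for j in range(len(sensor_output))])
    for i in range(len(f))
]
print(contexts)  # -> [0.75, 0.875]
```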

  9. dynamic database
  • example: user identification and posture for discerning focus-of-attention in a meeting
  • tables (next) list basic information about the environment (room) and its parameters, e.g.:
    – temperature, noise, lighting, available devices, number of people, segmentation of the area, etc.
    – initially many details are entered manually
    – eventually a fully "tagged" and instrumented environment can reasonably be anticipated
  • weakest link: maintaining currency
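A minimal sketch of the kind of dynamic context database the slide describes (the schema, attribute names, and staleness threshold are all invented): each fact carries a timestamp and a source, so the weakest link, currency, can at least be checked when the value is read back.

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE room_context (
                attribute TEXT PRIMARY KEY,
                value     TEXT,
                source    TEXT,     -- 'manual' entry vs. a live sensor
                updated   REAL)""") # unix time of last refresh

def put(attr, value, source):
    db.execute("INSERT OR REPLACE INTO room_context VALUES (?,?,?,?)",
               (attr, value, source, time.time()))

def get(attr, max_age_s=300.0):
    row = db.execute("SELECT value, updated FROM room_context WHERE attribute=?",
                     (attr,)).fetchone()
    if row is None or time.time() - row[1] > max_age_s:
        return None                 # missing or stale: treat as unknown
    return row[0]

put("temperature_C", "21.4", "thermometer")
put("num_people", "6", "camera")
put("lighting", "fluorescent", "manual")   # entered by hand, for now
print(get("num_people"))                   # '6' while fresh, None once stale
```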

  10. context classification and modeling
  [Table: a classification of the context space. Dimensions include time (now, history, time of day such as office hours or lunch time, day and date, season); the user's inside context (personal information such as name, address, height, weight, preferences, schedule, habits; feeling and thinking such as mood, agitation/stress, concentration, tiredness, nervousness; physical state such as heart rate, respiration rate, blood pressure, blink rate, body temperature, sweat, galvanic skin resistance); social context (self, family, friends, colleagues, acquaintances; situations such as casual chatting, formal meetings, eye contact, audiences of a show, attendees in a room, a cocktail party; interruption sources such as incoming calls); activity (talking, working, travelling, walking/running, driving, sitting, reading, writing, watching TV, using a mouse, sight-seeing, entertainment, household chores); and the outside environment (location, orientation, proximity, altitude, speed and heading; weather such as cloudiness, rain/snow, temperature, humidity, barometric pressure, forecast; ambient noise level and brightness; in-sight objects and surrounding people; computing connectivity, facilities, hardware/software resources, network states, communication bandwidth and cost). For each dimension the table lists sensors and processing: microphones with sound processing and speaker recognition; cameras and infra-red sensors with face recognition, object recognition, and 3-D object understanding; GPS/DGPS, network server IP, RFID, gyros, accelerometers, and dead-reckoning for location and orientation; thermometers, barometers, humidity and photo-diode sensors, gas sensors; and biometric sensors for heart rate, blood pressure, and galvanic skin resistance.]
