  1. iVR: Integrated Vision and Radio Localization with Zero Human Effort. Jingao Xu*, Hengjie Chen*, Kun Qian†, Erqun Dong*, Min Sun*, Chenshu Wu‡, Zheng Yang*. *School of Software and BNRist, Tsinghua University; †University of California, San Diego; ‡University of Maryland, College Park. September 12, London, UK

  2. Motivation • Various location-based ubiquitous applications: PoI discovery, indoor location, navigation • Locating or tracking with Wi-Fi & IMU – Ubiquitous: infrastructure installed almost everywhere – Low-cost: off-the-shelf Wi-Fi devices and inertial measurement units – Non-invasive: users are not required to wear/carry any special devices – However, these methods suffer from both large location errors and considerable deployment costs

  3. Motivation • New opportunity: fusing vision and radio – Surveillance cameras are pervasively deployed in public areas – High-accuracy localization and tracking – Low start-up effort (EV-Loc, TMC '15; PHADE, UbiComp '18; TAR, MobiSys '18)

  4. Motivation • Simply fusing vision and radio does not guarantee high accuracy and zero human effort – Absence of absolute location – Mismatch of identification – Looseness of sensor fusion

  5. Motivation • Simply fusing vision and radio does not guarantee high accuracy and zero human effort • iVR = automatic map construction + tightly-coupled sensor fusion, addressing the absence of absolute location, the mismatch of identification, and the looseness of sensor fusion

  6. System Overview • Initialization phase: simultaneous images from the ambient cameras drive Automatic Map Construction, yielding the indoor map and the image-map projection matrix • Localization phase, at each timestamp (t1, t2, …): video frames go through Pedestrian Detection and Image-Map Projection; wireless signals go through RSS Collection and Indoor Localization on the RSS samples; IMU readings (accelerometer and gyroscope data) go through sensor sampling and Pedestrian Dead Reckoning • The three streams are fused by the Augmented Particle Filter into localized and tracked pedestrian trajectories

  7. Automatic Map Construction • Overview – Input: images captured from a couple of ambient surveillance cameras – Output: indoor map (floorplan) and projection matrix – Key algorithm: Binocular Stereo Vision + SfM Calibration – Pipeline: Image 1 and Image 2 → Feature Point Extraction → Correspondence → Relative Pose Calculation → Equivalent Image Acquisition (SfM Calibration) → Binocular Stereo Vision on unparallel feature points → 2D Indoor Map Construction and Projection Matrix Calculation

  8. Automatic Map Construction • SfM Calibration – Relative pose calculation by SfM – Equivalent virtual image generation
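The geometry behind the relative-pose step can be illustrated with the essential matrix: for two views related by rotation R and translation t, E = [t]× R, and every true correspondence satisfies the epipolar constraint x2ᵀ E x1 = 0, which is what SfM exploits to recover R and t. The NumPy sketch below uses toy poses and is not the authors' implementation; it only verifies the constraint for a synthetic point:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x, so skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def essential_matrix(R, t):
    """E = [t]_x R encodes the relative pose between two views."""
    return skew(t) @ R

def normalized_projection(X, R=np.eye(3), t=np.zeros(3)):
    """Project world point X into normalized image coordinates of camera (R, t)."""
    Xc = R @ X + t
    return np.array([Xc[0] / Xc[2], Xc[1] / Xc[2], 1.0])

# Toy setup: camera 1 at the origin; camera 2 rotated 10 deg about y and translated.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.1])

E = essential_matrix(R, t)
X = np.array([0.5, -0.3, 4.0])        # a 3D point in front of both cameras
x1 = normalized_projection(X)          # observation in camera 1
x2 = normalized_projection(X, R, t)    # observation in camera 2

# Epipolar constraint: x2^T E x1 == 0 for any true correspondence.
residual = float(x2 @ E @ x1)
```

In practice the constraint is run in reverse: matched feature points between the two camera images constrain E, from which R and t are decomposed.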

  9. Automatic Map Construction • Binocular Stereo Vision – Location (derivation shown as a slide figure)
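The location computation in binocular stereo vision reduces to triangulation: given the projection matrices of the two (equivalent) views and a matched pixel observation in each, the 3D point is the least-squares solution of a linear (DLT) system. A minimal NumPy sketch; the camera matrices are toy values, not iVR's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover world point X from pixel
    observations x1, x2 under 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of 3D point X to 2D pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: identity intrinsics, second camera shifted 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.4, 0.1, 3.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the recovery is exact; real feature matches make the SVD a least-squares fit.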

  10. Automatic Map Construction • Map Construction and Projection Matrix Calculation – Projection matrix (T): maps a location in image coordinates to the corresponding location in world coordinates – Map construction • Outlining clusters of projections of feature points using the Indoor Geometric Reasoning (IGR) algorithm
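For pedestrians confined to the floor plane, applying such a projection matrix is a homogeneous-coordinate transform of the detected feet pixel. A minimal sketch, where the matrix values are hypothetical stand-ins for a calibrated T:

```python
import numpy as np

def apply_homography(T, p):
    """Map 2D point p through 3x3 matrix T using homogeneous coordinates."""
    q = T @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical projection matrix: scales pixels to metres and shifts the
# map origin. A real T comes out of the calibration step above.
T = np.array([[0.01, 0.0, -2.0],
              [0.0, 0.012, -1.5],
              [0.0, 0.0, 1.0]])

feet_pixel = np.array([640.0, 360.0])     # detected pedestrian's feet in the image
map_xy = apply_homography(T, feet_pixel)  # location on the 2D indoor map
```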

  11. Tightly-coupled Multimodal Fusion • Augmented Particle Filter – Input • Detection with vision • Localization with wireless signals • Pedestrian dead reckoning with IMU – Output • Fine-grained localization and tracking results for each pedestrian (vision, wireless, and IMU modal inputs fused into a single system output)
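The pedestrian-dead-reckoning input can be sketched as step-and-heading integration: each detected step advances the position by a step length along the gyroscope-derived heading. The constant step length below is an assumption, and the unbounded accumulation of heading error is exactly the drift that the fusion corrects:

```python
import math

def pdr_step(x, y, heading_rad, step_length=0.7):
    """One dead-reckoning update: advance by one detected step along the
    current heading. Assumes a fixed step length of 0.7 m."""
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

# Walk four steps east, then four steps north.
pos = (0.0, 0.0)
for _ in range(4):
    pos = pdr_step(*pos, heading_rad=0.0)
for _ in range(4):
    pos = pdr_step(*pos, heading_rad=math.pi / 2)
```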

  12. Tightly-coupled Multimodal Fusion • Augmented Particle Filter – Particle movement indicated by IMU – Particle weight assignment by wireless – Particle weight assignment by vision
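The three steps above can be sketched as one particle-filter iteration: the IMU displacement moves the particles, and the wireless and vision observations assign the weights. The Gaussian likelihoods and every parameter value here are toy assumptions for illustration, not the paper's actual observation models:

```python
import numpy as np

rng = np.random.default_rng(0)

def augmented_pf_step(particles, imu_disp, wifi_fix, vision_fix,
                      sigma_move=0.1, sigma_wifi=3.0, sigma_vision=0.5):
    """One filter iteration: IMU drives particle motion; wireless and vision
    observations assign the weights (toy Gaussian likelihoods)."""
    # 1. Particle movement indicated by IMU, plus process noise.
    particles = particles + imu_disp + rng.normal(0, sigma_move, particles.shape)
    # 2-3. Weights: product of wireless and vision likelihoods. The looser
    # wireless sigma reflects its coarser accuracy relative to vision.
    w = (np.exp(-np.sum((particles - wifi_fix) ** 2, axis=1) / (2 * sigma_wifi ** 2))
         * np.exp(-np.sum((particles - vision_fix) ** 2, axis=1) / (2 * sigma_vision ** 2)))
    w /= w.sum()
    # Resample, and report the weighted-mean location estimate.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], particles.T @ w

particles = rng.normal([0.0, 0.0], 1.0, size=(2000, 2))
particles, estimate = augmented_pf_step(
    particles,
    imu_disp=np.array([0.7, 0.0]),      # one step east, from PDR
    wifi_fix=np.array([1.0, 0.3]),      # coarse Wi-Fi location
    vision_fix=np.array([0.8, 0.1]))    # accurate camera detection
```

Because the vision likelihood is the sharpest, the estimate is pulled toward the camera fix while the wireless fix resolves which detected pedestrian the radio identity belongs to.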

  13. Experiment • Experimental scenarios – Dataset & ground truth: https://github.com/xujingao13/iVR

  14. Experiment • Performance of Automatic Map Construction – iVR also captures obstacles in the environment, which improves the physical rationality of localization results – Comparison: map automatically constructed by iVR vs. floorplan provided by the administrator

  15. Experiment • Overall performance – Localization accuracy compared with state-of-the-art methods: average 0.65 m, 95th percentile 1.23 m – Tracking accuracy compared with state-of-the-art methods: average 0.75 m, 95th percentile 1.86 m

  16. Experiment • Performance under different conditions – Different environments – Different frame rates – Different device placements – Different pedestrians

  17. Demo Video

  18. Contribution • We design an automatic indoor semantic map construction method based on merely a couple of ambient stationary cameras. • We propose a novel augmented particle filter algorithm that tightly couples measurements from multiple orthogonal systems, including vision, radio, and IMU, and jointly estimates a target's location with enhanced accuracy and an individual label. • We prototype iVR and conduct extensive experiments in 5 scenarios. Results show that iVR outperforms existing state-of-the-art systems by 70%. – The dataset (containing >60k video frames and labeled ground truth) is available at https://github.com/xujingao13/iVR

  19. Jingao Xu, Tsinghua University, xujingao13@gmail.com

  20. Motivation • Major problems – Labor-intensive site survey for Wi-Fi localization methods – Drift error in IMU-based tracking algorithms (PDR) – Rationality of localization results

  21. Conclusion • Automatic indoor map construction – Only needs a couple of cameras – Presents semantic information – To the best of our knowledge, this is the first work that constructs a physical map using stationary surveillance cameras with non-parallel optical axes. • Tightly-coupled sensor fusion method – Multimodal localization and tracking – Tightly-coupled multimodal fusion – Augmented particle filter
