Distributed Localization of Networked Cameras
Stanislav Funiak, Carlos Guestrin, Mark Paskin, Rahul Sukthankar
Carnegie Mellon University / Stanford University / Intel Research
IPSN 2006 presentation, April 19, 2006
Distributed Localization of Cameras
• Place wireless cameras around an environment
• We need to know their locations
• Measuring the locations is costly
Distributed Localization of Cameras
• When camera 1 sees the person and then camera 2 sees the person, we learn about the relative positions of the two cameras
• As the person moves around, we estimate the positions of all cameras
Prior Work
• Simultaneous localization and mapping: Montemerlo et al., AAAI 2002; Paskin, IJCAI 2003
• Localization from pairwise distances: Ihler et al., IPSN 2004; Whitehouse and Culler, ACM WSNA 2002
• Simultaneous calibration and tracking: Rahimi et al., CVPR 2004
• Structure from motion: Pollefeys, IJCV 2004; Soatto and Perona, IEEE PAMI 1998
Distributed Localization of Cameras
We want a solution that is:
• online
• distributed
• represents the uncertainty about the estimated locations (e.g., for active control)
As before: when camera 1 sees the person and then camera 2 sees the person, we learn about the relative positions of the cameras; as the person moves around, we estimate the positions of all cameras.
Tracking with Kalman Filter: Estimation
• Setting: a camera at a known pose observes the object in its image
• Bayes update: posterior distribution ∝ prior distribution (over the previous object location) × observation likelihood, conditioned on the observations and camera poses
• Starting from an uncertain prior over the object location, the posterior under the observation model becomes more certain after one observation and even more certain after further observations
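In a Kalman filter this update has a simple closed form. Below is a minimal sketch in Python/NumPy of the measurement update, assuming a linear observation model z = H x + v with Gaussian noise v ~ N(0, R); the function and variable names are illustrative, not from the talk.

```python
import numpy as np

def kalman_update(mean, cov, z, H, R):
    """Condition a Gaussian prior N(mean, cov) on a linear-Gaussian
    observation z = H x + v, v ~ N(0, R); return the posterior."""
    S = H @ cov @ H.T + R                # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)     # Kalman gain
    post_mean = mean + K @ (z - H @ mean)
    post_cov = (np.eye(len(mean)) - K @ H) @ cov
    return post_mean, post_cov
```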
Tracking with Kalman Filter: Prediction
• The motion model propagates the posterior distribution at time t forward to the predicted distribution, which becomes the prior at time t+1
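The prediction step has an equally simple form. A minimal sketch that complements the update above, assuming a linear motion model x_{t+1} = F x_t + w with Gaussian noise w ~ N(0, Q) and NumPy arrays as inputs (names illustrative):

```python
def kalman_predict(mean, cov, F, Q):
    """Push the posterior at time t through a linear-Gaussian motion model
    x_{t+1} = F x_t + w, w ~ N(0, Q), giving the prior for time t+1."""
    return F @ mean, F @ cov @ F.T + Q
```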
Camera Localization: Estimation
• Now the camera pose is unknown: the posterior distribution is over absolute parameters (camera position and angle)
• Start with a wide prior on the camera pose C
• Observe the person at distance d from the camera
• Posterior distribution ∝ prior distribution × observation likelihood, but now the camera could be anywhere on a ring around the object location
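The ring shape is easy to visualize by simulation: sample camera positions from a wide prior and keep each in proportion to the likelihood of the measured distance d. This is a hypothetical sketch; the names, noise levels, and rejection-sampling scheme are illustrative and not from the paper.

```python
import numpy as np

def sample_camera_ring(obj_xy, d, n=5000, sigma=0.1, prior_scale=5.0, seed=0):
    """Rejection-sample camera positions consistent with observing the
    object at obj_xy at measured distance d; the survivors form a ring."""
    rng = np.random.default_rng(seed)
    cams = obj_xy + prior_scale * rng.standard_normal((n, 2))   # wide prior
    dist = np.linalg.norm(cams - obj_xy, axis=1)
    w = np.exp(-0.5 * ((dist - d) / sigma) ** 2)                # range likelihood
    keep = rng.random(n) < w / w.max()                          # accept with prob. proportional to w
    return cams[keep]

ring_samples = sample_camera_ring(np.array([0.0, 0.0]), d=2.0)
```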
Kalman Filter uses a linear representation…?
• The exact posterior in absolute parameters is non-Gaussian; the problem structure is lost with a Gaussian approximation
• [Figure: exact posterior vs. Gaussian approximation for a ring of 12 cameras; the Gaussian estimate is overconfident and far from the ground truth]
Relative Over-Parameterization (ROP)
• Intuition: a ring structure can be represented with polar coordinates
• Not enough: the camera does not view the person head-on
1. ROP – camera position relative to the location of the person: distance u, angle φ, and lateral displacement v
2. The center is the unknown location of the object, (m_x, m_y)
3. The ring distribution, expressed in these polar coordinates, is almost Gaussian!
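One way to picture these coordinates is an object-centered frame: the camera sits at distance u from the object along direction φ, displaced laterally by v, and roughly faces back toward the object. The sketch below converts such coordinates to an absolute camera pose; it is an illustrative reading of the slide's figure, not the exact parameterization from the paper.

```python
import numpy as np

def rop_to_absolute(m_xy, u, v, phi):
    """Map illustrative ROP coordinates (u = distance, v = lateral
    displacement, phi = angle around the object at m_xy) to an absolute
    camera position and heading.  Assumes the camera faces the object,
    so its heading is phi + pi."""
    radial = np.array([np.cos(phi), np.sin(phi)])
    lateral = np.array([-np.sin(phi), np.cos(phi)])
    cam_xy = np.asarray(m_xy, dtype=float) + u * radial + v * lateral
    cam_theta = phi + np.pi
    return cam_xy, cam_theta
```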
Comparing parameterizations
• [Figure: the true posterior, the best Gaussian approximation in the standard parameterization (x, y, θ), and the best Gaussian approximation in ROP (u, v, φ around (m_x, m_y))]
Test run on Tower Scenario
• [Figure: estimated positions of a ring of 12 cameras; standard parameterization vs. ROP with further improvements (see paper)]
Donuts and Bananas on real data – Network of 5 cameras
Distributed Localization of Cameras
• Goal: each camera estimates the location of itself and the object
• We want an algorithm that is efficient and robust to message loss and node loss
• ROP lets us use a single Gaussian
• Challenges?
Motion model introduces dependencies
• Cycle: estimation at t → prediction → estimation at t+1
• The motion model introduces dependencies among distant cameras, causing communication and computation inefficiency
Assumed density filtering
• Intuition: only capture the strong dependencies among cameras (based on [Boyen & Koller 1998])
• Each clique contains a camera, its neighbor, and the object location: (M_t, C_1, C_2), (M_t, C_2, C_3), …, (M_t, C_7, C_8)
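For a chain (or ring) of cameras, the assumed density therefore keeps one small clique per pair of neighboring cameras plus the object variable. A small sketch of building that clique list (the string labels are illustrative):

```python
def camera_cliques(num_cameras):
    """Cliques of the assumed density: the object location M_t together
    with each pair of neighboring cameras C_i, C_{i+1}."""
    return [("M_t", f"C{i}", f"C{i + 1}") for i in range(1, num_cameras)]

# camera_cliques(8) -> [('M_t', 'C1', 'C2'), ..., ('M_t', 'C7', 'C8')]
```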
Distributed Filtering: Initialization
1. Assign each clique to one or more nodes
   • a clique can be given to more than one node for robustness
2. The nodes build a network junction tree [Paskin et al. 2005]
   • build a routing tree
   • ensure the flow of information
Distributed Filtering: Estimation
An instance of Robust Distributed Inference [Paskin & Guestrin, UAI 2004]:
1. Each node starts with the prior over its clique
2. Nodes make observations
3. Nodes communicate the relevant likelihoods and priors to their neighbors
4. At convergence, each node has conditioned on all measurements made in the network
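In a Gaussian model, combining "likelihoods and priors" is easiest to picture in information (canonical) form, where independent measurement contributions simply add, so they can arrive in any order as messages. A minimal sketch, assuming linear-Gaussian observations z = H x + v, v ~ N(0, R) (names illustrative):

```python
import numpy as np

def fuse_information(prior_eta, prior_lam, measurements):
    """Fuse a Gaussian prior in information form (eta, Lambda) with
    linear-Gaussian measurements (H, R, z); the contributions are additive,
    so partial sums can be exchanged as messages between nodes."""
    eta, lam = prior_eta.copy(), prior_lam.copy()
    for H, R, z in measurements:
        R_inv = np.linalg.inv(R)
        eta += H.T @ R_inv @ z
        lam += H.T @ R_inv @ H
    cov = np.linalg.inv(lam)        # back to moment form, if needed
    return cov @ eta, cov
```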
Prediction Revisited
• Posterior distribution at t × motion model → prediction at t+1
• The prediction couples variables with strong direct dependencies as well as variables with only weak indirect dependencies
• How do we implement the prediction step in a distributed fashion?
• How do we prune the weak dependencies?
Distributed Filtering: Prediction
• We want the best approximation (minimizing KL divergence) that captures short-range dependencies and drops long-range dependencies
• It is sufficient to compute the marginals over the cliques [Boyen & Koller 1998]
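For a Gaussian, those clique marginals are just sub-blocks of the joint mean and covariance, which is what makes the projection cheap to compute locally. A minimal sketch (the index bookkeeping is illustrative):

```python
import numpy as np

def clique_marginal(mean, cov, idx):
    """Marginal of a joint Gaussian N(mean, cov) over the variables whose
    indices are listed in idx -- the building block of the Boyen-Koller
    style projection used in the prediction step."""
    idx = np.asarray(idx)
    return mean[idx], cov[np.ix_(idx, idx)]
```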
Summary of our approach
1. Each node maintains a clique marginal
2. Nodes build a communication structure, the network junction tree [Paskin et al. 2005]
3. Estimation: nodes condition on observations [Paskin & Guestrin, UAI 2004]
4. Prediction: the best approximation is computed locally
Results: 44 simulated side-facing cameras
Network of 25 cameras at Intel Research Pittsburgh
Results: Model Complexity vs. Accuracy
• [Chart: RMS error (lower is better) for pruning all dependencies, keeping dependencies among neighbors, and keeping all dependencies (the exact solution)]
Comparison with Rahimi et al., CVPR 2004
• Our approach is distributed, online, and estimates uncertainty
• [Chart: RMS error (lower is better) for pruning all dependencies, keeping dependencies among neighbors, keeping all dependencies (the exact solution), and Rahimi et al., CVPR 2004]
Results: Communication vs. Accuracy
• [Chart: RMS error (lower is better) vs. epochs per time step (3, 5, 10, 15, 20), compared with the centralized solution]
Conclusion
• Accurate camera localization with only a single Gaussian!
  – ROP: a parameterization that accurately represents ring-like distributions
  – An effective technique for incorporating nonlinear observations
• A distributed, online algorithm for camera localization that represents uncertainty
• An algorithm for distributed filtering for general dynamic models
• Evaluated on a network of 25 real cameras