
Joint Probabilistic Matching Using m-Best Solutions
S. Hamid Rezatofighi, Anton Milan, Zhen Zhang, Qinfeng Shi, Anthony Dick, Ian Reid


  1. Joint Probabilistic Matching Using m-Best Solutions
     S. Hamid Rezatofighi, Anton Milan, Zhen Zhang, Qinfeng Shi, Anthony Dick, Ian Reid

  2. Introduction
     One-to-One Graph Matching in Computer Vision
     • Action Recognition
     • Feature Point Matching
     • Multi-Target Tracking
       ⋮
     • Person Re-Identification

  3. Introduction
     Most existing works focus on
     • Feature and/or metric learning [Zhao et al., CVPR 2014; Liu et al., ECCV 2010]
     • Developing better solvers [Cho et al., ECCV 2010; Zhou & De la Torre, CVPR 2013]
     The optimal solution does not necessarily yield the correct matching assignment.
     To improve the matching results, we propose
     • to consider more feasible solutions
     • a principled approach to combine these solutions

  4. One-to-One Graph Matching
     Formulating it as a constrained binary program

  5. One-to-One Graph Matching
     Formulating it as a constrained binary program
     [Figure: matrix of binary assignment variables $y_1^0, y_1^1, \ldots, y_N^O$]

  6. One-to-One Graph Matching
     Formulating it as a constrained binary program:
     $Y = \left[\, y_1^0, y_1^1, \ldots, y_j^k, \ldots, y_N^O \,\right]$, with $y_j^k \in \{0,1\}$ and $Y \in \mathcal{Y} \subseteq \{0,1\}^{N \times (O+1)}$

  7. One-to-One Graph Matching
     Formulating it as a constrained binary program:
     $Y^* = \operatorname*{argmin}_{Y \in \mathcal{Y}} g(Y)$   or   $Y^* = \operatorname*{argmax}_{Y \in \mathcal{Y}} q(Y)$
     where $\mathcal{Y} = \left\{ Y = \left[\, y_j^k \,\right]_{\forall j,k} \;\middle|\; y_j^k \in \{0,1\},\ \forall j: \sum_k y_j^k = 1,\ \forall k: \sum_j y_j^k \le 1 \right\}$

  10. One-to-One Graph Matching
     Formulating it as a constrained binary program:
     $Y^* = \operatorname*{argmin}_{Y \in \mathcal{Y}} g(Y)$   or   $Y^* = \operatorname*{argmax}_{Y \in \mathcal{Y}} q(Y)$
     where $\mathcal{Y} = \left\{ Y = \left[\, y_j^k \,\right]_{\forall j,k} \;\middle|\; y_j^k \in \{0,1\},\ \forall j: \sum_k y_j^k = 1,\ \forall k: \sum_j y_j^k \le 1 \right\}$
     The one-to-one constraints can equivalently be written in matrix form as $BY \le C$

  11. One-to-One Graph Matching
     Formulating it as a constrained binary program:
     $Y^* = \operatorname*{argmin}_{Y \in \mathcal{Y}} g(Y)$   or   $Y^* = \operatorname*{argmax}_{Y \in \mathcal{Y}} q(Y)$
     where $\mathcal{Y} = \left\{ Y = \left[\, y_j^k \,\right]_{\forall j,k} \;\middle|\; y_j^k \in \{0,1\},\ \forall j: \sum_k y_j^k = 1,\ \forall k: \sum_j y_j^k \le 1 \right\}$
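
As a concrete companion to this formulation, here is a minimal Python sketch (my illustration, not from the slides) that checks whether a binary matrix satisfies the constraints of $\mathcal{Y}$ exactly as written above. The function name is hypothetical, and any special handling of the "unmatched" column $y_j^0$ (the "none of them" option appearing later in the deck) is deliberately left out.

```python
import numpy as np

def in_constraint_set(Y: np.ndarray) -> bool:
    """Check membership in the feasible set: binary entries,
    for all j: sum_k y_j^k = 1, and for all k: sum_j y_j^k <= 1."""
    binary = np.isin(Y, (0, 1)).all()
    rows_ok = (Y.sum(axis=1) == 1).all()   # each source assigned exactly once
    cols_ok = (Y.sum(axis=0) <= 1).all()   # each target used at most once
    return bool(binary and rows_ok and cols_ok)

# Toy example: 2 sources, 3 possible targets.
Y = np.array([[0, 1, 0],
              [1, 0, 0]])
print(in_constraint_set(Y))   # True
```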

  12. One-to-One Graph Matching
     Examples of the joint matching distribution $q(Y)$ and cost $g(Y)$ in different applications:
     • Multi-target tracking [Zheng et al., CVPR 2008] and person re-identification [Das et al., ECCV 2014]: $g(Y) = D^\top Y$, or equivalently $q(Y) \propto \prod_{j,k} q(y_j^k)$
     • Feature point matching [Leordeanu et al., IJCV 2011]: $g(Y) = Y^\top R\, Y$
     • Stereo matching [Meltzer et al., ICCV 2005] and iterative closest point [Zheng, IJCV 1994]: higher-order constraints in addition to the one-to-one constraints
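
For the linear cost $g(Y) = D^\top Y$ under one-to-one constraints, the single best (MAP) assignment can be computed with the Hungarian algorithm. Below is a short sketch using SciPy's linear_sum_assignment; the random cost matrix and the dummy-column remark are illustrative assumptions, not part of the slides.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Pairwise matching costs D (rows: sources/queries, columns: targets/gallery).
rng = np.random.default_rng(0)
D = rng.random((3, 5))

# Hungarian algorithm: row/column indices of the single best (MAP) assignment.
row_ind, col_ind = linear_sum_assignment(D)
Y_star = np.zeros_like(D)
Y_star[row_ind, col_ind] = 1.0

print("best assignment:", list(zip(row_ind, col_ind)))
print("g(Y*) =", D[row_ind, col_ind].sum())
# To allow an explicit "unmatched" option, one common trick (not shown on the
# slides) is to append one extra column per source holding a fixed non-match cost.
```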

  13. Marginalization vs. MAP Estimates
     In general, the globally optimal solution
     $Y^* = \operatorname*{argmin}_{Y \in \mathcal{Y}} g(Y)$   or   $Y^* = \operatorname*{argmax}_{Y \in \mathcal{Y}} q(Y)$
     may or may not be easy to obtain.
     Even the optimal solution does not necessarily yield the correct matching assignment.

  14. Marginalization vs. MAP Estimates
     In general, the globally optimal solution
     $Y^* = \operatorname*{argmin}_{Y \in \mathcal{Y}} g(Y)$   or   $Y^* = \operatorname*{argmax}_{Y \in \mathcal{Y}} q(Y)$
     may or may not be easy to obtain.
     Even the optimal solution does not necessarily yield the correct matching assignment, due to
     • visual similarity
     • other ambiguities in the matching space

  18. Marginalization vs. MAP Estimates
     Motivation for using marginalization:
     • Encoding the entire distribution helps untangle potential ambiguities
     • MAP considers only a single value of that distribution
     • Improved match ranking due to the averaging / smoothing property
     Exact marginalization is NP-hard:
     • it requires all feasible permutations to build the joint distribution
     Solution: approximation using the m-best solutions
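
To make the approximation concrete, here is a minimal sketch (my illustration, not the authors' code) of how assignment marginals can be estimated from the m best solutions: each marginal $p(y_j^k = 1)$ is approximated by summing the renormalized probabilities of the m-best assignment matrices in which that match is active. The solutions and their scores are assumed to be given, e.g. by the m-best procedures on the following slides.

```python
import numpy as np

def approx_marginals(Ys, q):
    """Approximate assignment marginals from the m best solutions.

    Ys : list of m binary assignment matrices (each N x (O+1)).
    q  : length-m array of unnormalized joint probabilities q(Y_i).
    Returns a matrix whose (j, k) entry approximates p(y_j^k = 1).
    """
    w = np.asarray(q, dtype=float)
    w = w / w.sum()                       # renormalize over the m-best subset
    return sum(w_i * Y_i for w_i, Y_i in zip(w, Ys))

# Toy example with m = 2 equally scored candidate solutions.
Y1 = np.array([[1, 0], [0, 1]])
Y2 = np.array([[0, 1], [1, 0]])
print(approx_marginals([Y1, Y2], q=[0.5, 0.5]))   # every entry 0.5
```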

  19. Marginalization Using m-Best Solutions
     Marginalization by considering a fraction of the matching space:
     • use the m highest joint probabilities $q(Y)$ / the m lowest cost values $g(Y)$

  20. Marginalization Using m-Best Solutions
     Marginalization by considering a fraction of the matching space, using the m highest joint probabilities $q(Y)$ / the m lowest cost values $g(Y)$:
     $Y_1^* = \operatorname*{argmax}_{Y \in \mathcal{Y}} q(Y)$ (or $\operatorname*{argmin}_{Y \in \mathcal{Y}} g(Y)$) is the 1st optimal solution

  21. Marginalization Using m-Best Solutions
     $Y_2^*$ is the 2nd optimal solution

  22. Marginalization Using m-Best Solutions
     $Y_3^*$ is the 3rd optimal solution

  23. Marginalization Using m-Best Solutions
     $Y_k^*$ is the k-th optimal solution

  25. Marginalization Using m-Best Solutions
     $Y_k^*$ is the k-th optimal solution.
     The approximation error bound decreases exponentially with the number of solutions m [Rezatofighi et al., ICCV 2015]

  26. Computing the m-Best Solutions
     Naïve exclusion strategy:
     $Y_1^* = \operatorname*{argmin}_{Y} g(Y)$   s.t.   $BY \le C$

  27. Computing the m-Best Solutions
     Naïve exclusion strategy:
     $Y_2^* = \operatorname*{argmin}_{Y} g(Y)$   s.t.   $BY \le C$, $\langle Y, Y_1^* \rangle \le \lVert Y_1^* \rVert_1 - 1$

  28. Computing the m-Best Solutions
     Naïve exclusion strategy:
     $Y_3^* = \operatorname*{argmin}_{Y} g(Y)$   s.t.   $BY \le C$, $\langle Y, Y_1^* \rangle \le \lVert Y_1^* \rVert_1 - 1$, $\langle Y, Y_2^* \rangle \le \lVert Y_2^* \rVert_1 - 1$

  29. Computing the m-Best Solutions
     Naïve exclusion strategy:
     $Y_k^* = \operatorname*{argmin}_{Y} g(Y)$   s.t.   $BY \le C$, $\langle Y, Y_1^* \rangle \le \lVert Y_1^* \rVert_1 - 1$, $\ldots$, $\langle Y, Y_{k-1}^* \rangle \le \lVert Y_{k-1}^* \rVert_1 - 1$

  30. Computing the m-Best Solutions
     Naïve exclusion strategy:
     $Y_k^* = \operatorname*{argmin}_{Y} g(Y)$   s.t.   $\acute{B}Y \le \acute{C}$   (the original constraints $BY \le C$ augmented with the exclusion constraints)
     • General approach
     • Impractical for large values of m
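
To illustrate the exclusion strategy end to end, here is a small, hedged sketch (not the authors' implementation): a brute-force enumeration over permutations stands in for the exact binary-program solver, which keeps the example self-contained but is only viable for tiny problems. It assumes a purely linear cost, that every source must be matched, and that n ≤ o; all names are illustrative.

```python
import itertools
import numpy as np

def solve_with_exclusions(cost, excluded):
    """Minimise the linear cost <cost, Y> over one-to-one assignments, subject
    to the exclusion constraints <Y, Y_e> <= |Y_e|_1 - 1 for each Y_e."""
    n, o = cost.shape
    best_Y, best_val = None, np.inf
    for cols in itertools.permutations(range(o), n):   # every one-to-one map
        Y = np.zeros((n, o))
        Y[np.arange(n), list(cols)] = 1.0
        # exclusion constraints: Y must differ from every excluded solution
        if any((Y * Ye).sum() > Ye.sum() - 1 for Ye in excluded):
            continue
        val = (cost * Y).sum()
        if val < best_val:
            best_Y, best_val = Y, val
    return best_Y, best_val

def m_best_naive(cost, m):
    """Compute the m lowest-cost assignments by re-solving, each time adding
    one exclusion constraint per previously found solution."""
    solutions = []
    for _ in range(m):
        Y, val = solve_with_exclusions(cost, [Y for Y, _ in solutions])
        if Y is None:            # feasible set exhausted
            break
        solutions.append((Y, val))
    return solutions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for k, (Y, val) in enumerate(m_best_naive(rng.random((3, 4)), m=5), 1):
        print(f"{k}-th best cost: {val:.3f}")
```

In practice each iteration would call an exact ILP or assignment solver on the augmented constraints; as the slide notes, re-solving with a growing constraint set becomes impractical for large m.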

  31. Computing the m-Best Solutions
     Naïve exclusion strategy: $Y_k^* = \operatorname*{argmin}_{Y} g(Y)$   s.t.   $\acute{B}Y \le \acute{C}$
     • General approach
     • Impractical for large values of m
     Binary Tree Partitioning: partitioning the space into a set of disjoint subspaces [Rezatofighi et al., ICCV 2015]
     • Efficient approach
     • Not a good strategy for weak solvers

  32. Experimental Results
     Person Re-Identification
     [Figure: each query image is compared against the gallery images, with distances $d_1^0, d_1^1, \ldots, d_N^O$ and an explicit "None of them" option]
