

1. The logarithmic least squares optimality of the geometric mean of weight vectors calculated from all spanning trees for (in)complete pairwise comparison matrices
Sándor Bozóki, Institute for Computer Science and Control, Hungarian Academy of Sciences (MTA SZTAKI); Corvinus University of Budapest
Vitaliy Tsyganok, Laboratory for Decision Support Systems, The Institute for Information Recording of National Academy of Sciences of Ukraine; Department of System Analysis, State University of Telecommunications
MCDM, Ottawa, July 12, 2017

2. Incomplete pairwise comparison matrix

A = \begin{pmatrix}
1 & a_{12} & \ast & a_{14} & a_{15} & a_{16} \\
a_{21} & 1 & a_{23} & \ast & \ast & \ast \\
\ast & a_{32} & 1 & a_{34} & \ast & \ast \\
a_{41} & \ast & a_{43} & 1 & a_{45} & \ast \\
a_{51} & \ast & \ast & a_{54} & 1 & \ast \\
a_{61} & \ast & \ast & \ast & \ast & 1
\end{pmatrix}

where \ast marks a missing comparison.

3. Incomplete pairwise comparison matrix and its graph

A = \begin{pmatrix}
1 & a_{12} & \ast & a_{14} & a_{15} & a_{16} \\
a_{21} & 1 & a_{23} & \ast & \ast & \ast \\
\ast & a_{32} & 1 & a_{34} & \ast & \ast \\
a_{41} & \ast & a_{43} & 1 & a_{45} & \ast \\
a_{51} & \ast & \ast & a_{54} & 1 & \ast \\
a_{61} & \ast & \ast & \ast & \ast & 1
\end{pmatrix}

[Figure: the associated graph on nodes 1–6, with an edge between i and j whenever a_{ij} is known: {1,2}, {1,4}, {1,5}, {1,6}, {2,3}, {3,4}, {4,5}]

4. The Logarithmic Least Squares (LLS) problem

\min \sum_{i,j:\, a_{ij}\ \text{is known}} \left( \log a_{ij} - \log \frac{w_i}{w_j} \right)^2

w_i > 0, \quad i = 1, 2, \ldots, n.

The most common normalizations are \sum_{i=1}^{n} w_i = 1, \prod_{i=1}^{n} w_i = 1 and w_1 = 1.
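In the logarithms y_i = \log w_i the LLS problem is an ordinary linear least squares problem: each known entry a_{ij} contributes one equation y_i - y_j \approx \log a_{ij}. A minimal NumPy sketch, using the 4×4 matrix of the later efficiency example (any complete, or connected incomplete, matrix works the same way):

```python
import numpy as np

# Complete 4x4 pairwise comparison matrix (values from the later
# efficiency example; any matrix with a connected graph works).
A = np.array([
    [1.0, 1.0, 4.0, 9.0],
    [1.0, 1.0, 7.0, 5.0],
    [1/4, 1/7, 1.0, 4.0],
    [1/9, 1/5, 1/4, 1.0],
])
n = A.shape[0]

# One equation  y_i - y_j = log a_ij  per known off-diagonal entry.
rows, rhs = [], []
for i in range(n):
    for j in range(n):
        if i != j:                       # here every entry is known
            r = np.zeros(n)
            r[i], r[j] = 1.0, -1.0
            rows.append(r)
            rhs.append(np.log(A[i, j]))

# Minimum-norm least squares solution; shift so that y_1 = 0 (w_1 = 1).
y, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
w = np.exp(y - y[0])
```

For a complete matrix the LLS optimum is known to be the row-wise geometric mean of A, which this computation reproduces after normalization.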

5. Theorem (Bozóki, Fülöp, Rónyai, 2010): Let A be an incomplete or complete pairwise comparison matrix such that its associated graph G is connected. Then the optimal solution w = \exp y of the logarithmic least squares problem is the unique solution of the following system of linear equations:

(Ly)_i = \sum_{k:\, e(i,k) \in E(G)} \log a_{ik} \quad \text{for all } i = 1, 2, \ldots, n,
y_1 = 0,

where L denotes the Laplacian matrix of G (\ell_{ii} is the degree of node i and \ell_{ij} = -1 if nodes i and j are adjacent).

6. Example: for the incomplete matrix of the previous slides, the system Ly = r with y_1 = 0 reads

\begin{pmatrix}
4 & -1 & 0 & -1 & -1 & -1 \\
-1 & 2 & -1 & 0 & 0 & 0 \\
0 & -1 & 2 & -1 & 0 & 0 \\
-1 & 0 & -1 & 3 & -1 & 0 \\
-1 & 0 & 0 & -1 & 2 & 0 \\
-1 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} y_1 (= 0) \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \end{pmatrix}
=
\begin{pmatrix}
\log(a_{12} a_{14} a_{15} a_{16}) \\
\log(a_{21} a_{23}) \\
\log(a_{32} a_{34}) \\
\log(a_{41} a_{43} a_{45}) \\
\log(a_{51} a_{54}) \\
\log a_{61}
\end{pmatrix}
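The theorem reduces the whole optimization to one sparse linear solve. A sketch assembling the Laplacian system for the 6×6 example; the sparsity pattern is the one on the slides, while the numerical entry values are hypothetical:

```python
import numpy as np

# Known upper-triangular entries of the 6x6 incomplete matrix; the
# sparsity pattern matches the slides, the values are hypothetical.
known = {(1, 2): 2.0, (1, 4): 4.0, (1, 5): 3.0, (1, 6): 5.0,
         (2, 3): 2.0, (3, 4): 1/2, (4, 5): 1.5}    # a_ji = 1 / a_ij
n = 6
L = np.zeros((n, n))        # Laplacian of the associated graph
r = np.zeros(n)             # right-hand side: sums of log a_ik
for (i, j), a in known.items():
    i, j = i - 1, j - 1
    L[i, i] += 1; L[j, j] += 1          # degree on the diagonal
    L[i, j] -= 1; L[j, i] -= 1          # -1 for adjacent nodes
    r[i] += np.log(a)
    r[j] += np.log(1.0 / a)

# Keep a copy, then impose y_1 = 0 by replacing the first equation
# (the rows of Ly = r are linearly dependent, so the solution still
# satisfies the full system).
L_full, r_full = L.copy(), r.copy()
L[0, :] = 0.0; L[0, 0] = 1.0; r[0] = 0.0
y = np.linalg.solve(L, r)
w = np.exp(y)               # LLS-optimal weights with w_1 = 1
```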

7. Pairwise Comparison Matrix Calculator (PCMC): the logarithmic least squares optimal weight vector can be calculated at pcmc.online. The CR-minimal (\lambda_{\max}-minimal) completion is also calculated. PCMC also deals with Pareto optimality (efficiency) of weight vectors.

8. Pareto optimality (efficiency). Let A = [a_{ij}]_{i,j=1,\ldots,n} be an n \times n pairwise comparison matrix and w = (w_1, w_2, \ldots, w_n)^\top a positive weight vector.

Definition: the weight vector w is called efficient if there exists no positive weight vector w' = (w'_1, w'_2, \ldots, w'_n)^\top such that

\left| a_{ij} - \frac{w'_i}{w'_j} \right| \le \left| a_{ij} - \frac{w_i}{w_j} \right| \quad \text{for all } 1 \le i, j \le n,
\left| a_{k\ell} - \frac{w'_k}{w'_\ell} \right| < \left| a_{k\ell} - \frac{w_k}{w_\ell} \right| \quad \text{for some } 1 \le k, \ell \le n.

Remark: a weight vector w is efficient if and only if cw is efficient, where c > 0 is an arbitrary scalar.

9. Example: w^* dominates w^{EM}, so w^{EM} is not efficient.

A = \begin{pmatrix}
1 & 1 & 4 & 9 \\
1 & 1 & 7 & 5 \\
1/4 & 1/7 & 1 & 4 \\
1/9 & 1/5 & 1/4 & 1
\end{pmatrix}, \quad
w^{EM} = \begin{pmatrix} 0.404518 \\ 0.436173 \\ 0.110295 \\ 0.049014 \end{pmatrix}, \quad
w^* = \begin{pmatrix} 0.436173 \\ 0.436173 \\ 0.110295 \\ 0.049014 \end{pmatrix}

\left[ \frac{w^{EM}_i}{w^{EM}_j} \right] = \begin{pmatrix}
1 & 0.9274 & 3.6676 & 8.2531 \\
1.0783 & 1 & 3.9546 & 8.8989 \\
0.2727 & 0.2529 & 1 & 2.2503 \\
0.1212 & 0.1124 & 0.4444 & 1
\end{pmatrix}, \quad
\left[ \frac{w^*_i}{w^*_j} \right] = \begin{pmatrix}
1 & 1 & 3.9546 & 8.8989 \\
1 & 1 & 3.9546 & 8.8989 \\
0.2529 & 0.2529 & 1 & 2.2503 \\
0.1124 & 0.1124 & 0.4444 & 1
\end{pmatrix}

Every entry of the second ratio matrix is at least as close to the corresponding a_{ij} as the first, and strictly closer in the first row and first column.
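The domination above can be checked mechanically. A small sketch (the helper `dominates` is hypothetical, written directly from the efficiency definition) applied to the slide's numbers:

```python
import numpy as np

def dominates(A, w_new, w_old, tol=1e-9):
    """Pareto domination in the sense of the efficiency definition:
    every residual |a_ij - w_i/w_j| weakly improves, at least one strictly."""
    w_new, w_old = np.asarray(w_new), np.asarray(w_old)
    R_new = np.abs(A - np.outer(w_new, 1.0 / w_new))
    R_old = np.abs(A - np.outer(w_old, 1.0 / w_old))
    return bool(np.all(R_new <= R_old + tol) and np.any(R_new < R_old - tol))

A = np.array([
    [1.0, 1.0, 4.0, 9.0],
    [1.0, 1.0, 7.0, 5.0],
    [1/4, 1/7, 1.0, 4.0],
    [1/9, 1/5, 1/4, 1.0],
])
w_EM   = [0.404518, 0.436173, 0.110295, 0.049014]
w_star = [0.436173, 0.436173, 0.110295, 0.049014]
```

Here `dominates(A, w_star, w_EM)` returns `True`, so w^{EM} is not efficient, while the reverse call returns `False`.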


13. Pareto optimality (efficiency): see more in Bozóki, S., Fülöp, J. (2017): Efficient weight vectors from pairwise comparison matrices, European Journal of Operational Research (in press), DOI 10.1016/j.ejor.2017.06.033

14. The spanning tree approach (Tsyganok, 2000, 2010)

A = \begin{pmatrix}
1 & a_{12} & \ast & a_{14} & a_{15} & a_{16} \\
a_{21} & 1 & a_{23} & \ast & \ast & \ast \\
\ast & a_{32} & 1 & a_{34} & \ast & \ast \\
a_{41} & \ast & a_{43} & 1 & a_{45} & \ast \\
a_{51} & \ast & \ast & a_{54} & 1 & \ast \\
a_{61} & \ast & \ast & \ast & \ast & 1
\end{pmatrix}

15. The spanning tree approach (Tsyganok, 2000, 2010): the incomplete matrix and the entries of one of its spanning trees, with edges {1,2}, {1,4}, {1,5}, {1,6}, {2,3}:

A = \begin{pmatrix}
1 & a_{12} & \ast & a_{14} & a_{15} & a_{16} \\
a_{21} & 1 & a_{23} & \ast & \ast & \ast \\
\ast & a_{32} & 1 & a_{34} & \ast & \ast \\
a_{41} & \ast & a_{43} & 1 & a_{45} & \ast \\
a_{51} & \ast & \ast & a_{54} & 1 & \ast \\
a_{61} & \ast & \ast & \ast & \ast & 1
\end{pmatrix}, \quad
\begin{pmatrix}
1 & a_{12} & \ast & a_{14} & a_{15} & a_{16} \\
a_{21} & 1 & a_{23} & \ast & \ast & \ast \\
\ast & a_{32} & 1 & \ast & \ast & \ast \\
a_{41} & \ast & \ast & 1 & \ast & \ast \\
a_{51} & \ast & \ast & \ast & 1 & \ast \\
a_{61} & \ast & \ast & \ast & \ast & 1
\end{pmatrix}


17. The spanning tree approach: every spanning tree induces a weight vector. Natural ways of aggregation: arithmetic mean, geometric mean, etc.
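Computing the weight vector induced by one spanning tree: each tree edge fixes a ratio w_i / w_j = a_{ij}, so starting from w_1 = 1 the remaining weights follow by propagating along the unique tree paths. A sketch using the tree of the previous slide, with hypothetical entry values:

```python
import numpy as np

# One spanning tree of the 6-node example: edges {1,2}, {1,4}, {1,5},
# {1,6}, {2,3}; the entry values a_ij are hypothetical.
tree = {(1, 2): 2.0, (1, 4): 4.0, (1, 5): 3.0, (1, 6): 5.0, (2, 3): 2.0}
n = 6

# Undirected adjacency with the ratio carried in each direction.
adj = {}
for (i, j), a in tree.items():
    adj.setdefault(i, []).append((j, a))        # w_i / w_j = a_ij
    adj.setdefault(j, []).append((i, 1.0 / a))  # w_j / w_i = 1 / a_ij

# Propagate from the root w_1 = 1 with a DFS.
w = {1: 1.0}
stack = [1]
while stack:
    i = stack.pop()
    for j, a in adj.get(i, []):
        if j not in w:
            w[j] = w[i] / a             # from w_i / w_j = a_ij
            stack.append(j)
weights = np.array([w[k] for k in range(1, n + 1)])
```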

18. Theorem (Lundy, Siraj, Greco, 2017): The geometric mean of weight vectors calculated from all spanning trees is logarithmic least squares optimal in case of complete pairwise comparison matrices.

19. Theorem (Bozóki, Tsyganok): Let A be an incomplete or complete pairwise comparison matrix such that its associated graph is connected. Then the optimal solution of the logarithmic least squares problem is equal, up to a scalar multiplier, to the geometric mean of weight vectors calculated from all spanning trees.
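The theorem can be checked numerically on a small complete matrix: enumerate all spanning trees (the (n−1)-edge subsets that connect every node), take the geometric mean of the induced weight vectors, and compare with the LLS optimum, which for a complete matrix is the row-wise geometric mean of A. A sketch using the values of the earlier 4×4 example:

```python
import itertools
import numpy as np

# Complete 4x4 matrix from the earlier efficiency example.
A = np.array([
    [1.0, 1.0, 4.0, 9.0],
    [1.0, 1.0, 7.0, 5.0],
    [1/4, 1/7, 1.0, 4.0],
    [1/9, 1/5, 1/4, 1.0],
])
n = A.shape[0]
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

def tree_weights(T):
    """Weight vector induced by edge set T (normalized to w_0 = 1),
    or None if T does not span all nodes."""
    adj = {i: [] for i in range(n)}
    for i, j in T:
        adj[i].append(j); adj[j].append(i)
    w, stack = {0: 1.0}, [0]
    while stack:
        i = stack.pop()
        for j in adj[i]:
            if j not in w:
                w[j] = w[i] / A[i, j]   # enforce w_i / w_j = a_ij
                stack.append(j)
    return np.array([w[i] for i in range(n)]) if len(w) == n else None

# Geometric mean of the weight vectors over all spanning trees.
vecs = [v for T in itertools.combinations(edges, n - 1)
        if (v := tree_weights(T)) is not None]
gm_trees = np.exp(np.mean(np.log(np.array(vecs)), axis=0))

# LLS optimum of a complete matrix: row-wise geometric mean, w_1 = 1.
lls = np.prod(A, axis=1) ** (1.0 / n)
lls /= lls[0]
```

With both vectors normalized to first component 1, `gm_trees` and `lls` agree, as the theorem predicts; K_4 has 16 spanning trees (Cayley's formula, 4^{4-2}).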

20. Proof. Let G be the connected graph associated to the (in)complete pairwise comparison matrix A and let E(G) denote its set of edges; the edge between nodes i and j is denoted by e(i, j). The Laplacian matrix of G is denoted by L. Let T_1, T_2, \ldots, T_s, \ldots, T_S denote the spanning trees of G, where S is the number of spanning trees, and let E(T_s) denote the set of edges in T_s. Let w^s, s = 1, 2, \ldots, S, denote the weight vector calculated from spanning tree T_s. Each w^s is unique up to scalar multiplication; assume without loss of generality that w^s_1 = 1. Let y^s := \log w^s, s = 1, 2, \ldots, S, where the logarithm is taken element-wise.
