  1. Introductory Course on Non-smooth Optimisation, Lecture 05: Peaceman–Rachford and Douglas–Rachford splitting. Jingwei Liang, Department of Applied Mathematics and Theoretical Physics

  2. Table of contents: 1 Problem; 2 Peaceman–Rachford splitting; 3 Douglas–Rachford splitting; 4 Sum of more than two operators; 5 Spingarn’s method of partial inverses; 6 Acceleration; 7 Numerical experiments

  3. Sum of two operators. Problem: find x ∈ R^n such that 0 ∈ A(x) + B(x). Assumptions: A, B : R^n ⇒ R^n are maximal monotone; the resolvents of A and B are simple, i.e. easy to compute; zer(A + B) ≠ ∅. Jingwei Liang, DAMTP, Introduction to Non-smooth Optimisation, March 13, 2019

  4. Outline 1 Problem 2 Peaceman–Rachford splitting 3 Douglas–Rachford splitting 4 Sum of more than two operators 5 Spingarn’s method of partial inverses 6 Acceleration 7 Numerical experiments

  5. Peaceman–Rachford splitting. Let z_0 ∈ R^n, γ > 0:
  x_k = J_{γB}(z_k),  y_k = J_{γA}(2x_k − z_k),  z_{k+1} = z_k + 2(y_k − x_k).
  The scheme dates back to the 1950s for solving numerical PDEs; the resolvents of A and B are evaluated separately.
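  The iteration above can be sanity-checked on a toy problem where both resolvents are available in closed form. The operators below (A(x) = x − a, B(x) = x − b, both strongly and hence uniformly monotone) and all numerical values are illustrative choices, not from the lecture; the unique zero of A + B is (a + b)/2.

```python
# Peaceman-Rachford splitting on a 1-D toy problem: A(x) = x - a, B(x) = x - b.
# Both operators are strongly monotone, so PR converges; zer(A + B) = {(a + b)/2}.

def J(z, gamma, c):
    """Resolvent of x -> x - c: solve w + gamma*(w - c) = z for w."""
    return (z + gamma * c) / (1 + gamma)

def peaceman_rachford(a, b, gamma=1.0, z=0.0, iters=100):
    for _ in range(iters):
        x = J(z, gamma, b)            # x_k = J_{gamma B}(z_k)
        y = J(2 * x - z, gamma, a)    # y_k = J_{gamma A}(2 x_k - z_k)
        z = z + 2 * (y - x)           # z_{k+1} = z_k + 2 (y_k - x_k)
    return x

print(peaceman_rachford(1.0, 3.0))    # -> 2.0 = (1 + 3) / 2
```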

  6. How to derive. Given x⋆ ∈ zer(A + B), there exists z⋆ ∈ R^n such that
  z⋆ − x⋆ ∈ γA(x⋆), x⋆ − z⋆ ∈ γB(x⋆)  ⟹  z⋆ ∈ x⋆ + γA(x⋆), 2x⋆ − z⋆ ∈ x⋆ + γB(x⋆).
  Apply the resolvent: x⋆ = J_{γA}(z⋆), x⋆ = J_{γB}(2x⋆ − z⋆).
  Equivalent formulation: x⋆ = J_{γA}(z⋆), z⋆ = z⋆ + 2(J_{γB}(2x⋆ − z⋆) − x⋆).
  Fixed-point iteration: x_k = J_{γA}(z_k), z_{k+1} = z_k + 2(J_{γB}(2x_k − z_k) − x_k).

  7. Fixed-point characterisation. Recall the reflection operator R_{γA} = 2J_{γA} − Id.
  Fixed-point formulation: y_k = J_{γA}(2x_k − z_k) = J_{γA} ∘ (2J_{γB} − Id)(z_k), and hence
  z_{k+1} = z_k + 2(y_k − x_k)
   = z_k + 2(J_{γA} ∘ (2J_{γB} − Id)(z_k) − J_{γB}(z_k))
   = 2J_{γA} ∘ (2J_{γB} − Id)(z_k) − (2J_{γB} − Id)(z_k)
   = (2J_{γA} − Id) ∘ (2J_{γB} − Id)(z_k).
  Property: R_{γA} = 2J_{γA} − Id and R_{γB} = 2J_{γB} − Id are non-expansive, so T_PR = R_{γA} ∘ R_{γB} is non-expansive. NB: non-expansiveness alone cannot guarantee convergence in general.
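  The identity "one PR update equals R_{γA} ∘ R_{γB}" and the non-expansiveness of T_PR can be checked numerically. The concrete resolvents below (soft-thresholding for J_{γB}, i.e. the resolvent of γ∂|·|, and an affine resolvent for J_{γA}) and the parameter values are illustrative assumptions, not from the slides.

```python
import random

# Check of slide 7 in 1-D: one PR update z + 2(y - x) equals
# T_PR(z) = R_{gamma A}(R_{gamma B}(z)), and T_PR is non-expansive.

gamma, b = 0.7, 3.0

def JA(z):
    # resolvent of A(x) = x - b
    return (z + gamma * b) / (1 + gamma)

def JB(z):
    # soft-thresholding: resolvent of gamma * subdifferential of |.|
    return max(abs(z) - gamma, 0.0) * (1.0 if z > 0 else -1.0)

def RA(z): return 2 * JA(z) - z    # reflection 2 J_{gamma A} - Id
def RB(z): return 2 * JB(z) - z    # reflection 2 J_{gamma B} - Id

def pr_step(z):
    x = JB(z)
    y = JA(2 * x - z)
    return z + 2 * (y - x)

random.seed(0)
for _ in range(100):
    z1, z2 = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(pr_step(z1) - RA(RB(z1))) < 1e-12                  # same operator
    assert abs(RA(RB(z1)) - RA(RB(z2))) <= abs(z1 - z2) + 1e-12   # non-expansive
print("T_PR = R_A o R_B verified")
```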

  8. Convergence. Uniform monotonicity: there is an increasing φ : R_+ → [0, +∞] vanishing only at 0 such that
  ⟨u − v, x − y⟩ ≥ φ(‖x − y‖) for all (x, u), (y, v) ∈ gra(B).
  If B is uniformly monotone, then zer(A + B) = {x⋆} and fix(T_PR) ≠ ∅. Moreover,
  ⟨x − y, J_{γB}(x) − J_{γB}(y)⟩ ≥ ‖J_{γB}(x) − J_{γB}(y)‖² + γφ(‖J_{γB}(x) − J_{γB}(y)‖).
  Let z⋆ ∈ fix(T_PR); then x⋆ = J_{γB}(z⋆), and
  ‖z_{k+1} − z⋆‖² = ‖R_{γA}R_{γB}(z_k) − R_{γA}R_{γB}(z⋆)‖²
   ≤ ‖(2J_{γB} − Id)(z_k) − (2J_{γB} − Id)(z⋆)‖²
   = ‖z_k − z⋆‖² − 4⟨z_k − z⋆, J_{γB}(z_k) − J_{γB}(z⋆)⟩ + 4‖J_{γB}(z_k) − J_{γB}(z⋆)‖²
   ≤ ‖z_k − z⋆‖² − 4γφ(‖J_{γB}(z_k) − J_{γB}(z⋆)‖).
  Summing, φ(‖J_{γB}(z_k) − J_{γB}(z⋆)‖) → 0, hence ‖J_{γB}(z_k) − J_{γB}(z⋆)‖ → 0, i.e. x_k → x⋆.
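  A minimal sketch checking the resolvent inequality for one concrete uniformly monotone operator: for B(x) = μx one has φ(t) = μt² and J_{γB}(z) = z/(1 + γμ). The values of μ and γ are illustrative; for this linear B the inequality in fact holds with equality.

```python
# Check of the resolvent inequality on slide 8 for the strongly monotone
# B(x) = mu*x, which is uniformly monotone with phi(t) = mu*t^2.
# Then J_{gamma B}(z) = z / (1 + gamma*mu).

mu, gamma = 2.0, 0.5
phi = lambda t: mu * t * t
JB = lambda z: z / (1 + gamma * mu)

for (x, y) in [(3.0, -1.0), (0.5, 0.25), (-4.0, 2.0)]:
    lhs = (x - y) * (JB(x) - JB(y))                               # <x - y, J(x) - J(y)>
    rhs = (JB(x) - JB(y)) ** 2 + gamma * phi(abs(JB(x) - JB(y)))  # RHS of the inequality
    assert lhs >= rhs - 1e-12
print("resolvent inequality verified")
```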

  9. Outline 1 Problem 2 Peaceman–Rachford splitting 3 Douglas–Rachford splitting 4 Sum of more than two operators 5 Spingarn’s method of partial inverses 6 Acceleration 7 Numerical experiments

  10. Douglas–Rachford splitting. To overcome the drawback of Peaceman–Rachford splitting (T_PR is merely non-expansive, so convergence is not guaranteed in general), average the update with the identity. Douglas–Rachford splitting: let z_0 ∈ R^n, γ > 0, λ ∈ ]0, 2[:
  x_k = J_{γB}(z_k),  y_k = J_{γA}(2x_k − z_k),  z_{k+1} = z_k + λ(y_k − x_k).
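  The same three-line iteration can be run on a genuinely non-smooth toy inclusion 0 ∈ ∂|x| + (x − b), taking A = ∂|·| (whose resolvent is soft-thresholding) and B(x) = x − b. This choice of operators and of b is illustrative; the known solution is the soft-threshold of b at level 1, i.e. sign(b)·max(|b| − 1, 0).

```python
# Douglas-Rachford on the non-smooth inclusion 0 in subdiff(|x|) + (x - b).
# A = subdifferential of |.| (resolvent: soft-thresholding), B(x) = x - b.

def soft(z, t):
    """Soft-thresholding: resolvent of t * subdifferential of |.|."""
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def douglas_rachford(b, gamma=1.0, lam=1.0, z=0.0, iters=500):
    for _ in range(iters):
        x = (z + gamma * b) / (1 + gamma)   # x_k = J_{gamma B}(z_k)
        y = soft(2 * x - z, gamma)          # y_k = J_{gamma A}(2 x_k - z_k)
        z = z + lam * (y - x)               # z_{k+1} = z_k + lam (y_k - x_k)
    return x

print(douglas_rachford(3.0))                # -> 2.0 = sign(3)*max(3 - 1, 0)
```

  With λ = 1 the update is the firmly non-expansive map T_DR of the next slide, so the iterates converge for any starting point z_0.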

  11. How to derive. Given x⋆ ∈ zer(A + B), there exists z⋆ ∈ R^n such that
  z⋆ − x⋆ ∈ γA(x⋆), x⋆ − z⋆ ∈ γB(x⋆)  ⟹  z⋆ ∈ x⋆ + γA(x⋆), 2x⋆ − z⋆ ∈ x⋆ + γB(x⋆).
  Apply the resolvent: x⋆ = J_{γA}(z⋆), x⋆ = J_{γB}(2x⋆ − z⋆).
  Equivalent formulation: x⋆ = J_{γA}(z⋆), z⋆ = z⋆ + (J_{γB}(2x⋆ − z⋆) − x⋆).
  Fixed-point iteration: x_k = J_{γA}(z_k), z_{k+1} = z_k + (J_{γB}(2x_k − z_k) − x_k).

  12. Fixed-point characterisation. Same as PR, y_k = J_{γA} ∘ R_{γB}(z_k). Fixed-point formulation:
  z_{k+1} = (1 − λ)z_k + λ(z_k + (y_k − x_k))
   = (1 − λ)z_k + λ((1/2)z_k + (1/2)(z_k + 2(y_k − x_k)))
   = (1 − λ)z_k + λ · (1/2)(Id + R_{γA} ∘ R_{γB})(z_k).
  Property: T_DR = (1/2)(Id + R_{γA} ∘ R_{γB}) is firmly non-expansive, and T_{λ,DR} = (1 − λ)Id + λT_DR is λ/2-averaged non-expansive. Peaceman–Rachford is the limiting case λ = 2 of Douglas–Rachford. NB: convergence is guaranteed if λ(2 − λ) > 0, i.e. λ ∈ ]0, 2[.

  13. Convergence rate. (Figure-only slide; the plot is not recoverable from this transcript.)

  14. Convergence rate. Let X, Y be two subspaces, X = {x : ax = 0} and Y = {x : bx = 0}, and assume 1 ≤ p := dim(X) ≤ q := dim(Y) ≤ n − 1. Projection onto the subspace: P_X(x) = x − a^T(aa^T)^{−1}ax. With θ_1, ..., θ_p the principal angles between X and Y, define the diagonal matrices
  c = diag(cos(θ_1), ..., cos(θ_p)),  s = diag(sin(θ_1), ..., sin(θ_p)).
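  The projection formula P_X(x) = x − a^T(aa^T)^{−1}ax can be implemented directly and checked against the defining properties of a projector. The matrix sizes and random data below are illustrative.

```python
import numpy as np

# Projection onto X = {x : a x = 0} via P_X(x) = x - a^T (a a^T)^{-1} a x.
# a must have full row rank so that a a^T is invertible.

def proj_nullspace(a, x):
    a = np.atleast_2d(a)
    return x - a.T @ np.linalg.solve(a @ a.T, a @ x)

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 5))      # X = null(a) has dimension 5 - 2 = 3
x = rng.standard_normal(5)
p = proj_nullspace(a, x)

assert np.allclose(a @ p, 0)                     # p lies in X
assert np.allclose(proj_nullspace(a, p), p)      # idempotent: P_X(P_X(x)) = P_X(x)
assert abs((x - p) @ p) < 1e-10                  # residual is orthogonal to p
print("projection verified")
```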

  15. Convergence rate. Suppose p + q < n; then there exists an orthogonal matrix U such that (block rows)
  P_X = U · [ [Id_p, 0, 0, 0], [0, 0_p, 0, 0], [0, 0, 0_{q−p}, 0], [0, 0, 0, 0_{n−p−q}] ] · U∗
  and
  P_Y = U · [ [c², cs, 0, 0], [cs, s², 0, 0], [0, 0, Id_{q−p}, 0], [0, 0, 0, 0_{n−p−q}] ] · U∗.

  16. Convergence rate. For the composition (block rows),
  P_X ∘ P_Y = U · [ [c², cs, 0, 0], [0, 0_p, 0, 0], [0, 0, 0_{q−p}, 0], [0, 0, 0, 0_{n−p−q}] ] · U∗
  and
  P_X⊥ ∘ P_Y⊥ = U · [ [0_p, 0, 0, 0], [−cs, c², 0, 0], [0, 0, 0_{q−p}, 0], [0, 0, 0, Id_{n−p−q}] ] · U∗.

  17. Convergence rate. Fixed-point operator (block rows):
  T_DR = P_X ∘ P_Y + P_X⊥ ∘ P_Y⊥ = U · [ [c², cs, 0, 0], [−cs, c², 0, 0], [0, 0, 0_{q−p}, 0], [0, 0, 0, Id_{n−p−q}] ] · U∗.
  Consider the relaxation:
  T_{λ,DR} = (1 − λ)Id + λT_DR = U · [ [Id_p − λs², λcs, 0, 0], [−λcs, Id_p − λs², 0, 0], [0, 0, (1 − λ)Id_{q−p}, 0], [0, 0, 0, Id_{n−p−q}] ] · U∗.

  18. Convergence rate. Eigenvalues:
  σ(T_{λ,DR}) = {1 − λsin²(θ_i) ± iλcos(θ_i)sin(θ_i) : i = 1, ..., p} ∪ {1} if q = p,
  σ(T_{λ,DR}) = {1 − λsin²(θ_i) ± iλcos(θ_i)sin(θ_i) : i = 1, ..., p} ∪ {1} ∪ {1 − λ} if q > p.
  Complex eigenvalues:
  |1 − λsin²(θ_i) ± iλcos(θ_i)sin(θ_i)| = √(λ(2 − λ)cos²(θ_i) + (1 − λ)²), and √(λ(2 − λ)cos²(θ_i) + (1 − λ)²) ≥ |1 − λ|.
  Moreover lim_{k→+∞} T^k_{λ,DR} = T^∞_DR and z_k − z⋆ = (T_DR − T^∞_DR)(z_{k−1} − z⋆).
  Spectral radius, minimised at λ = 1 (θ_F the Friedrichs angle, i.e. the smallest non-trivial principal angle):
  ρ(T_DR − T^∞_DR) = √(λ(2 − λ)cos²(θ_F) + (1 − λ)²).
  Writing T̃_DR = T_DR − T^∞_DR,
  ‖z_k − z⋆‖ = ‖T̃_DR z_{k−1} − T̃_DR z⋆‖ = ... = ‖T̃^k_DR(z_0 − z⋆)‖ ≤ C ρ(T̃_DR)^k ‖z_0 − z⋆‖.
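  The eigenvalue-modulus formula can be verified numerically in the simplest setting, two lines in R² (p = q = 1): there T_DR = P_X P_Y + P_X⊥ P_Y⊥ is a 2×2 matrix and both eigenvalues of T_{λ,DR} should have modulus √(λ(2 − λ)cos²θ + (1 − λ)²). The angle θ and relaxation λ below are illustrative values.

```python
import numpy as np

# Check of slide 18 in R^2: X = x-axis, Y = line at angle theta.
theta, lam = 0.3, 1.5
c, s = np.cos(theta), np.sin(theta)

PX = np.array([[1.0, 0.0], [0.0, 0.0]])   # projection onto the x-axis
u = np.array([c, s])
PY = np.outer(u, u)                       # projection onto the line at angle theta
I = np.eye(2)

TDR = PX @ PY + (I - PX) @ (I - PY)       # T_DR = P_X P_Y + P_Xperp P_Yperp
TLDR = (1 - lam) * I + lam * TDR          # relaxed operator T_{lam,DR}

eigs = np.linalg.eigvals(TLDR)
predicted = np.sqrt(lam * (2 - lam) * c**2 + (1 - lam)**2)

assert np.allclose(np.abs(eigs), predicted)          # modulus matches the formula
assert np.allclose(eigs.real, 1 - lam * s**2)        # real part 1 - lam sin^2(theta)
print("spectral modulus:", predicted)
```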

  19. Optimal metric for DR. Optimal metric: an invertible change of metric, mapping X to X′, which makes the Friedrichs angle between X′ and Y as large as possible, e.g. π/2 ...

  20. Outline 1 Problem 2 Peaceman–Rachford splitting 3 Douglas–Rachford splitting 4 Sum of more than two operators 5 Spingarn’s method of partial inverses 6 Acceleration 7 Numerical experiments

  21. More than two operators. Problem: let s ∈ N_+ with s ≥ 2; find x ∈ R^n such that 0 ∈ Σ_{i=1}^s A_i(x). Assumptions: for each i = 1, ..., s, A_i : R^n ⇒ R^n is maximal monotone; zer(Σ_i A_i) ≠ ∅.
