Implementation and numerical stability of saddle point solvers
Miroslav Rozložník
joint results with Pavel Jiránek and Valeria Simoncini
Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic
Recent developments in the solution of indefinite systems, Workshop, TU Eindhoven, April 17, 2012
Saddle point problems
We consider a saddle point problem with the symmetric 2 × 2 block form
[ A    B ] [ x ]   [ f ]
[ B^T  0 ] [ y ] = [ 0 ]
◮ A is a square n × n nonsingular (symmetric positive definite) matrix,
◮ B is a rectangular n × m matrix of full column rank m.
Applications: mixed finite element approximations, weighted least squares, constrained optimization, computational fluid dynamics, electromagnetism etc. [Benzi, Golub and Liesen, 2005], [Elman, Silvester, Wathen, 2005]. For the updated list of applications leading to saddle point problems contact [Benzi].
Iterative solution of saddle point problems
1. segregated approach: outer iteration for solving the reduced Schur complement or null-space projected system;
2. coupled approach with block preconditioning: iteration scheme for solving the preconditioned system;
3. rounding errors in floating point arithmetic: numerical stability of the solver.
Numerous solution schemes: inexact Uzawa algorithms, inexact null-space methods, inner-outer iteration methods, two-stage iteration processes, multilevel or multigrid methods, domain decomposition methods.
Numerous preconditioning techniques and schemes: block diagonal preconditioners, block triangular preconditioners, constraint preconditioning, Hermitian/skew-Hermitian preconditioning and other splittings, combination preconditioning.
Numerous iterative solvers: conjugate gradient (CG) method, MINRES, GMRES, flexible GMRES, GCR, BiCG, BiCGSTAB, ...
Delay of convergence and limit on the final accuracy
Numerical experiments: a small model example
A = tridiag(1, 4, 1) ∈ R^{100×100}, B = rand(100, 20), f = rand(100, 1),
κ(A) = ‖A‖ · ‖A^{-1}‖ = 5.9990 · 0.4998 ≈ 2.9983,
κ(B) = ‖B‖ · ‖B^†‖ = 7.1695 · 0.4603 ≈ 3.3001.
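The model problem can be reproduced in a few lines of NumPy (a sketch, not the talk's code; a seeded generator stands in for MATLAB's rand, so κ(B) will differ slightly from the value quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20

# A = tridiag(1, 4, 1): symmetric positive definite and well conditioned.
A = (np.diag(4.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
B = rng.random((n, m))   # random n x m block, full column rank (a.s.)
f = rng.random(n)

# kappa(A) = ||A|| * ||A^{-1}||,  kappa(B) = ||B|| * ||B^dagger||  (2-norms)
kappa_A = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
kappa_B = np.linalg.norm(B, 2) * np.linalg.norm(np.linalg.pinv(B), 2)
```

Both blocks are well conditioned by construction, so any loss of accuracy observed later comes from the inexact inner solves, not from ill-conditioning of the data.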
Schur complement reduction method
◮ Compute y as a solution of the Schur complement system B^T A^{-1} B y = B^T A^{-1} f,
◮ compute x as a solution of A x = f − B y.
◮ Segregated vs. coupled approach: x_k and y_k approximate solutions to x and y, respectively.
◮ Inexact solution of systems with A: every computed solution û of A u = b is interpreted as an exact solution of a perturbed system
(A + ΔA) û = b + Δb,   ‖ΔA‖ ≤ τ ‖A‖,   ‖Δb‖ ≤ τ ‖b‖,   τ κ(A) ≪ 1.
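A minimal dense sketch of the two reduction steps (an illustration, not the talk's implementation; `np.linalg.solve` stands in for the inner solver with A):

```python
import numpy as np

def schur_solve(A, B, f):
    """Exact Schur complement reduction for [A B; B^T 0][x; y] = [f; 0]."""
    # Step 1: y solves the Schur complement system B^T A^{-1} B y = B^T A^{-1} f.
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    y = np.linalg.solve(B.T @ Ainv_B, B.T @ Ainv_f)
    # Step 2: x solves A x = f - B y.
    x = np.linalg.solve(A, f - B @ y)
    return x, y
```

With exact solves both saddle point residuals, f − A x − B y and −B^T x, vanish; the interesting question on the following slides is what survives when the solves with A are only τ-accurate.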
Iterative solution of the Schur complement system
choose y_0, solve A x_0 = f − B y_0
outer iteration:
  compute α_k and p_k^{(y)}
  y_{k+1} = y_k + α_k p_k^{(y)}
  inner iteration: solve A p_k^{(x)} = −B p_k^{(y)}
  back-substitution:
    A: x_{k+1} = x_k + α_k p_k^{(x)},
    B: solve A x_{k+1} = f − B y_{k+1},
    C: solve A u_k = f − A x_k − B y_{k+1}, x_{k+1} = x_k + u_k.
  r_{k+1}^{(y)} = r_k^{(y)} − α_k B^T p_k^{(x)}
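One concrete instance of this scheme is CG applied to the Schur complement with τ-accurate inner solves. The sketch below is an illustration, not the talk's code: `solve_A` mimics the perturbed solver (A + ΔA)û = b + Δb by adding relative noise of size τ, and the back-substitution uses variant A (generic update of x):

```python
import numpy as np

def solve_A(A, b, tau, rng):
    """Mimic a tau-accurate inner solver: exact solve plus relative noise."""
    u = np.linalg.solve(A, b)
    if tau > 0:
        u = u + tau * np.linalg.norm(u) * rng.standard_normal(u.shape)
    return u

def schur_cg(A, B, f, tau=1e-10, iters=100, rng=None):
    """CG on B^T A^{-1} B y = B^T A^{-1} f with inexact solves with A."""
    rng = rng if rng is not None else np.random.default_rng(1)
    y = np.zeros(B.shape[1])
    x = solve_A(A, f - B @ y, tau, rng)      # A x0 = f - B y0
    r = B.T @ x                              # Schur complement residual r^(y)_0
    p = r.copy()
    for _ in range(iters):
        if np.linalg.norm(r) < 1e-14:
            break
        px = solve_A(A, -B @ p, tau, rng)    # inner solve: A p^(x) = -B p^(y)
        Sp = -B.T @ px                       # S p^(y) = B^T A^{-1} B p^(y)
        alpha = (r @ r) / (p @ Sp)
        y = y + alpha * p                    # outer update of y
        x = x + alpha * px                   # variant A: generic update of x
        r_new = r - alpha * Sp               # recursive residual update
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, y
```

Running this with tau set to 1e-2, 1e-6, 1e-10 reproduces the qualitative behavior shown on the following slides: the recursively updated residual keeps decreasing while the true saddle point residuals stagnate at levels governed by τ and by the back-substitution variant.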
Accuracy in the saddle point system
‖f − A x_k − B y_k‖ ≤ O(α_1) κ(A) / (1 − τ κ(A)) · (‖f‖ + ‖B‖ Y_k),
‖−B^T x_k − r_k^{(y)}‖ ≤ O(α_2) κ(A) / (1 − τ κ(A)) · ‖A^{-1}‖ ‖B‖ (‖f‖ + ‖B‖ Y_k),
Y_k ≡ max {‖y_i‖ | i = 0, 1, ..., k}.

Back-substitution scheme                          α_1   α_2
A: Generic update                                  τ     u
   x_{k+1} = x_k + α_k p_k^{(x)}
B: Direct substitution                             τ     τ
   x_{k+1} = A^{-1}(f − B y_{k+1})
C: Corrected direct substitution                   u     τ   (additional system with A)
   x_{k+1} = x_k + A^{-1}(f − A x_k − B y_{k+1})

Note: −B^T A^{-1} f + B^T A^{-1} B y_k = −B^T x_k − B^T A^{-1}(f − A x_k − B y_k).
Generic update: x_{k+1} = x_k + α_k p_k^{(x)}
[Figure: residual norm ‖f − A x_k − B y_k‖ (left) and relative residual norms ‖−B^T x_k‖/‖−B^T x_0‖, ‖r_k^{(y)}‖/‖r_0^{(y)}‖ (right) versus iteration number k, for τ = O(u), 10^{-10}, 10^{-6}, 10^{-2}.]
Direct substitution: x_{k+1} = A^{-1}(f − B y_{k+1})
[Figure: residual norm ‖f − A x_k − B y_k‖ (left) and relative residual norms ‖−B^T x_k‖/‖−B^T x_0‖, ‖r_k^{(y)}‖/‖r_0^{(y)}‖ (right) versus iteration number k, for τ = O(u), 10^{-10}, 10^{-6}, 10^{-2}.]
Corrected direct substitution: x_{k+1} = x_k + A^{-1}(f − A x_k − B y_{k+1})
[Figure: residual norm ‖f − A x_k − B y_k‖ (left) and relative residual norms ‖−B^T x_k‖/‖−B^T x_0‖, ‖r_k^{(y)}‖/‖r_0^{(y)}‖ (right) versus iteration number k, for τ = O(u), 10^{-10}, 10^{-6}, 10^{-2}.]
Null-space projection method
◮ Compute x ∈ N(B^T) as a solution of the projected system (I − Π) A (I − Π) x = (I − Π) f,
◮ compute y as a solution of the least squares problem B y ≈ f − A x,
where Π = B (B^T B)^{-1} B^T is the orthogonal projector onto R(B).
◮ Schemes with the inexact solution of least squares with B: every computed approximate solution v̄ of a least squares problem B v ≈ c is interpreted as an exact solution of a perturbed least squares problem
(B + ΔB) v̄ ≈ c + Δc,   ‖ΔB‖ ≤ τ ‖B‖,   ‖Δc‖ ≤ τ ‖c‖,   τ κ(B) ≪ 1.
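The two steps can be sketched as follows (a dense NumPy illustration, not the talk's code; forming Π explicitly is for exposition only and would not be done in practice):

```python
import numpy as np

def nullspace_solve(A, B, f):
    """Exact null-space projection method for [A B; B^T 0][x; y] = [f; 0]."""
    n = A.shape[0]
    Pi = B @ np.linalg.solve(B.T @ B, B.T)   # orthogonal projector onto R(B)
    P = np.eye(n) - Pi                       # projector onto N(B^T)
    # x in N(B^T) solves the projected system (I - Pi) A (I - Pi) x = (I - Pi) f;
    # P A P is singular, so take the minimum-norm least squares solution.
    x = P @ np.linalg.lstsq(P @ A @ P, P @ f, rcond=None)[0]
    # y solves the least squares problem B y ~ f - A x.
    y = np.linalg.lstsq(B, f - A @ x, rcond=None)[0]
    return x, y
```

Since (I − Π)(f − A x) = 0 at the exact x, the vector f − A x lies in R(B), so the least squares step recovers y with zero residual.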
Null-space projection method
choose x_0, solve B y_0 ≈ f − A x_0
outer iteration:
  compute α_k and p_k^{(x)} ∈ N(B^T)
  x_{k+1} = x_k + α_k p_k^{(x)}
  inner iteration: solve B p_k^{(y)} ≈ r_k^{(x)} − α_k A p_k^{(x)}
  back-substitution:
    A: y_{k+1} = y_k + p_k^{(y)},
    B: solve B y_{k+1} ≈ f − A x_{k+1},
    C: solve B v_k ≈ f − A x_{k+1} − B y_k, y_{k+1} = y_k + v_k.
  r_{k+1}^{(x)} = r_k^{(x)} − α_k A p_k^{(x)} − B p_k^{(y)}
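A concrete instance of this scheme is CG on the projected system with τ-accurate inner least squares. The sketch below is an illustration, not the talk's code: `solve_B` mimics the perturbed problem (B + ΔB) v̄ ≈ c + Δc by adding relative noise of size τ, and the back-substitution uses variant A (generic update of y):

```python
import numpy as np

def solve_B(B, c, tau, rng):
    """Mimic a tau-accurate least squares solver with B."""
    v = np.linalg.lstsq(B, c, rcond=None)[0]
    if tau > 0:
        v = v + tau * np.linalg.norm(v) * rng.standard_normal(v.shape)
    return v

def nullspace_cg(A, B, f, tau=1e-10, iters=100, rng=None):
    """CG on the projected system with inexact least squares with B."""
    rng = rng if rng is not None else np.random.default_rng(1)
    x = np.zeros(A.shape[0])
    y = solve_B(B, f - A @ x, tau, rng)      # B y0 ~ f - A x0
    r = f - A @ x - B @ y                    # r^(x)_0, lies in N(B^T)
    p = r.copy()
    for _ in range(iters):
        if np.linalg.norm(r) < 1e-12:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p                    # outer update of x
        s = r - alpha * Ap                   # unprojected residual
        py = solve_B(B, s, tau, rng)         # inner LS: B p^(y) ~ s
        y = y + py                           # variant A: generic update of y
        r_new = s - B @ py                   # r^(x)_{k+1}, projected to N(B^T)
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, y
```

Here the inner least squares plays the role of the projection: subtracting B p^{(y)} removes the R(B) component of the updated residual, so the search directions stay (approximately) in N(B^T).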
Accuracy in the saddle point system
‖f − A x_k − B y_k − r_k^{(x)}‖ ≤ O(α_3) κ(B) / (1 − τ κ(B)) · (‖f‖ + ‖A‖ X_k),
‖−B^T x_k‖ ≤ O(τ) κ(B) / (1 − τ κ(B)) · ‖B‖ X_k,
X_k ≡ max {‖x_i‖ | i = 0, 1, ..., k}.

Back-substitution scheme                          α_3
A: Generic update                                  u
   y_{k+1} = y_k + p_k^{(y)}
B: Direct substitution                             τ
   y_{k+1} = B^†(f − A x_{k+1})
C: Corrected direct substitution                   u   (additional least squares with B)
   y_{k+1} = y_k + B^†(f − A x_{k+1} − B y_k)
Generic update: y_{k+1} = y_k + p_k^{(y)}
[Figure: residual norm ‖−B^T x_k‖ (left) and relative residual norms ‖f − A x_k − B y_k‖/‖f − A x_0 − B y_0‖, ‖r_k^{(x)}‖/‖r_0^{(x)}‖ (right) versus iteration number, for τ = O(u), 10^{-10}, 10^{-6}, 10^{-2}.]
Direct substitution: y_{k+1} = B^†(f − A x_{k+1})
[Figure: residual norm ‖−B^T x_k‖ (left) and relative residual norms ‖f − A x_k − B y_k‖/‖f − A x_0 − B y_0‖, ‖r_k^{(x)}‖/‖r_0^{(x)}‖ (right) versus iteration number, for τ = O(u), 10^{-10}, 10^{-6}, 10^{-2}.]