Computing the common zeros of two bivariate functions via Bézout resultants


  1. Computing the common zeros of two bivariate functions via Bézout resultants
Colorado State University, 26th September 2013
Alex Townsend, PhD student, Mathematical Institute, University of Oxford
(with Yuji Nakatsukasa & Vanni Noferini)
Work supported by EPSRC grant EP/P505666/1.

  2. Introduction: Motivation
Global 1D rootfinding is crucial (25% of Chebfun code needs roots).
Chebfun2 is an extension of Chebfun for bivariate functions.
Very high degree polynomial interpolants are common.
Example: find the global minimum of
    f(x, y) = (x^2 + y^2)/4 + e^{sin(50x)} + sin(70 sin(x)) + sin(60 e^y) + sin(sin(80y)) − cos(10x) sin(10y) − sin(10x) cos(10y).
    g = chebfun2( f );
    r = roots( gradient( g ) );
There are 2720 local extrema. [Plot: the computed extrema on [−1, 1]^2.]
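For reference, a runnable version of the slide's example, assuming Chebfun (www.chebfun.org) is on the MATLAB path; the objective is written as a single anonymous function, using cos(10x) sin(10y) + sin(10x) cos(10y) = sin(10(x + y)). The final line, which picks out the global minimum, is an addition not shown on the slide.

    % Objective from the slide; requires Chebfun on the MATLAB path.
    f = @(x,y) (x.^2 + y.^2)/4 + exp(sin(50*x)) + sin(70*sin(x)) + ...
               sin(60*exp(y)) + sin(sin(80*y)) - sin(10*(x + y));
    g = chebfun2(f);                        % polynomial approximant on [-1,1]^2
    r = roots(gradient(g));                 % common zeros of dg/dx and dg/dy
    fmin = min(g(r(:,1), r(:,2)))           % global minimum among the extrema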

  3. Introduction: Algorithmic overview
Let f and g be real-valued Lipschitz functions on [−1, 1]^2. Solve:
    f(x, y) = 0,  g(x, y) = 0,  (x, y) ∈ [−1, 1]^2.
“Polynomialization”: Replace f and g with bivariate polynomials p and q.
“Act locally”: Subdivide [−1, 1]^2 with piecewise approximants until the total degree is ≤ 16; solve low degree rootfinding problems.
“Think globally”: Do refinement and regularization to improve global stability.
“Think globally, act locally”, Stan Wagon.

  4. Introduction: NOT curve finding
Not to be confused with bivariate curve finding (not rootfinding):
    f(x, y) = 0,  (x, y) ∈ [−1, 1]^2.
Solutions lie along curves. Chebfun2 computes these by Marching Squares.
[Figure: spy plot of the LNT matrix (nz = 36049) and the zero curves of LNT on [−1, 1]^2. Photo courtesy of Nick Hale.]
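For contrast, a marching-squares-style computation of zero curves needs only a contour sweep at level zero; here is a generic sketch in plain MATLAB (the function f below is an arbitrary stand-in, since the slide's LNT example is not reproduced here).

    xg = linspace(-1, 1, 500);  yg = xg;
    [xx, yy] = meshgrid(xg, yg);
    f = @(x,y) cos(7*x.*y) - y;                % stand-in for the slide's example
    C = contourc(xg, yg, f(xx, yy), [0 0]);    % marching-squares-style zero level set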

  5. Introduction: Talk overview
The talk follows Stan Wagon:
“Polynomialization”
“Act locally”
“Think globally”
Numerical examples
WARNING: Simple common zeros only!

  6. Polynomialization: 1D Chebyshev interpolants
For n ≥ 1, the Chebyshev points (of the 2nd kind) are given by
    x_j^n = cos(jπ/n),  0 ≤ j ≤ n.
The Chebyshev interpolant of f is the polynomial p of degree at most n such that
    p(x) = Σ_{j=0}^{n} c_j T_j(x),  p(x_j^n) = f(x_j^n),  0 ≤ j ≤ n,
where T_j(x) = cos(j cos^{−1}(x)) is the Chebyshev polynomial of degree j.
[Figure: the Runge function interpolated at Chebyshev versus equally-spaced points, and a degree-100,000 Chebyshev interpolant.]
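As a concrete check of this definition, here is a short sketch in plain MATLAB (not Chebfun) that builds the Chebyshev interpolant of the Runge function from the figure by solving the dense, O(n^2) Chebyshev-Vandermonde system; the fast transform appears on the next slide.

    n = 100;
    x = cos((0:n)'*pi/n);                 % Chebyshev points of the 2nd kind
    v = 1./(1 + 25*x.^2);                 % Runge function samples
    T = cos(acos(x)*(0:n));               % T(j+1,k+1) = T_k(x_j^n)
    c = T \ v;                            % coefficients of p = sum_k c_k T_k
    xx = linspace(-1, 1, 2001)';
    err = max(abs(cos(acos(xx)*(0:n))*c - 1./(1 + 25*xx.^2)))  % small: no Runge phenomenon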

  7. Polynomialization: Tensor-product approximation
Replace f and g by their polynomial interpolants
    p(x, y) = Σ_{i=0}^{n_p} Σ_{j=0}^{m_p} α_ij T_i(x) T_j(y),  q(x, y) = Σ_{i=0}^{n_q} Σ_{j=0}^{m_q} β_ij T_i(x) T_j(y),
such that p(x_s^{n_p}, x_t^{m_p}) = f(x_s^{n_p}, x_t^{m_p}) and q(x_s^{n_q}, x_t^{m_q}) = g(x_s^{n_q}, x_t^{m_q}). Select n_p, m_p and n_q, m_q large enough: take n_p = 9, 17, 33, 65, and so on, until the tail of the coefficients falls below relative machine precision.
Chebyshev coefficients are computed by the fast DCT-I transform [Gentleman 72].
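A minimal sketch of the values-to-coefficients step in one dimension: the DCT-I is carried out with an FFT of the even extension of the samples. This mirrors the standard trick; Chebfun's implementation differs in detail.

    function c = vals2coeffs(v)
    % v(j+1) = f(cos(j*pi/n)) for j = 0..n (a column vector); returns the
    % Chebyshev coefficients c(k+1) of the interpolant sum_k c(k+1)*T_k.
    n = numel(v) - 1;
    w = [v; v(n:-1:2)];                   % even extension, length 2n
    c = real(fft(w))/n;                   % DCT-I via FFT
    c = c(1:n+1);
    c([1, n+1]) = c([1, n+1])/2;          % halve the first and last coefficients
    end

In the degree-doubling loop above one calls this for n = 9, 17, 33, 65, ... and stops once the trailing coefficients fall below relative machine precision; a tensor-product version applies the same transform first to the rows and then to the columns of the sample matrix.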

  8. Act locally: Subdivision
Key fact: Subdivide to deal with high degree.
Subdivide [−1, 1]^2 into subrectangles until the polynomial degrees are small, i.e. subdivide until the degree is ≤ 16. Do not bisect! Instead subdivide off-center, to avoid awkward coincidences.
[Figure: the subdivision for sin((x − 1/10)y) cos(1/(x + (y − 9/10) + 5)) = (y − 1/10) cos((x + (y + 9/10)^2/4)) = 0, with per-subrectangle degrees 10, 11, 13, and 14.]
Like 1D subdivision: real solutions only.
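A recursive sketch of this strategy; the 0.51 split point and the degree oracle degfun are illustrative placeholders, not the values used by Chebfun2.

    function doms = subdivide(dom, degfun, maxdeg)
    % dom = [xmin xmax ymin ymax]; degfun(dom) estimates the degree needed on
    % dom; recurse until every subrectangle needs degree <= maxdeg (16 here).
    if degfun(dom) <= maxdeg
        doms = {dom};
        return
    end
    sx = dom(1) + 0.51*(dom(2) - dom(1));      % off-center split, not a bisection
    sy = dom(3) + 0.51*(dom(4) - dom(3));
    doms = [subdivide([dom(1) sx dom(3) sy], degfun, maxdeg), ...
            subdivide([sx dom(2) dom(3) sy], degfun, maxdeg), ...
            subdivide([dom(1) sx sy dom(4)], degfun, maxdeg), ...
            subdivide([sx dom(2) sy dom(4)], degfun, maxdeg)];
    end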

  9. Act locally: Bézout resultant theorem
Theorem (Bézout resultant theorem). Let p_y and q_y be two univariate polynomials of degree at most n_p and n_q, and let N = max(n_p, n_q). The Chebyshev Bézout resultant matrix B(p_y, q_y) = (b_ij), 1 ≤ i, j ≤ N, defined by
    (p_y(s) q_y(t) − p_y(t) q_y(s)) / (s − t) = Σ_{i,j=1}^{N} b_ij T_{i−1}(s) T_{j−1}(t),
is nonsingular if and only if p_y and q_y have no common roots.
Usually, this theorem is stated using the Sylvester resultant, and in terms of the monomial basis.
There are stable ways to form B(p_y, q_y); we use [T., Noferini, Nakatsukasa, 13a].
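To make the definition concrete, here is a sketch that builds the Bézout matrix in the monomial basis, the basis the theorem is usually stated in; the stable Chebyshev-basis construction used in the talk is the one in [T., Noferini, Nakatsukasa, 13a] and is not reproduced here.

    function B = bezout_monomial(a, b)
    % a(i+1), b(i+1): coefficients of x^i in p and q. B is the symmetric Bezout
    % matrix with (p(s)q(t) - p(t)q(s))/(s - t) = sum_ij B(i,j) s^(i-1) t^(j-1).
    n = max(numel(a), numel(b)) - 1;
    a(end+1:n+1) = 0;  b(end+1:n+1) = 0;       % pad to a common degree n
    B = zeros(n);
    for i = 1:n
        for j = 0:i-1
            w = a(i+1)*b(j+1) - a(j+1)*b(i+1);
            for k = 0:i-j-1                    % spread w along one antidiagonal
                B(j+k+1, i-k) = B(j+k+1, i-k) + w;
            end
        end
    end
    end

Sanity check: for p = x − α and q = x − β this returns the 1-by-1 matrix [α − β], which is singular exactly when the two roots coincide.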

  10. Act locally: Hidden-variable resultant method
The hidden-variable resultant method “hides” one of the variables:
    p_y(x) = p(x, y) = Σ_{i=0}^{n_p} α_i(y) T_i(x),  q_y(x) = q(x, y) = Σ_{i=0}^{n_q} β_i(y) T_i(x).
B(p_y, q_y) is a symmetric matrix of size max(n_p, n_q). Each entry of B(p_y, q_y) is a polynomial in y of degree m_p + m_q. For the y-values of p(x, y) = q(x, y) = 0 we want to solve
    det(B(p_y, q_y)) = 0,  y ∈ [−1, 1].
Problem! The determinant is numerically zero: [plot of det(B(p_y, q_y)) over y ∈ [−1, 1], on the order of 10^{−164}].
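The “hiding” itself is mechanical: fix y, evaluate the coefficient polynomials, and form the Bézout matrix of the two resulting univariate polynomials. A sketch, using monomial coefficients and the bezout_monomial helper from the previous slide for clarity (the talk works in the Chebyshev basis throughout):

    function By = bezout_at(alpha, beta, y)
    % alpha(i+1, j+1), beta(i+1, j+1): coefficient of x^i y^j in p and q.
    py = alpha * (y.^(0:size(alpha,2)-1)');    % coefficients in x of p(x, y)
    qy = beta  * (y.^(0:size(beta, 2)-1)');    % coefficients in x of q(x, y)
    By = bezout_monomial(py, qy);
    end

Sweeping y over [−1, 1] and plotting det(bezout_at(alpha, beta, y)) reproduces the behaviour in the plot: the determinant underflows long before its zeros can be located, which is why it is never formed explicitly.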

  11. Act locally: Matrix polynomial linearization
Key fact: Inherit robustness from the eigenvalue solver.
B(p_y, q_y) = Σ_{i=0}^{M} A_i T_i(y) ∈ R^{N×N} is a matrix-valued polynomial in y.
The colleague matrix [Specht 1960, Good 1961] turns det(B(p_y, q_y)) = 0 into the generalized eigenvalue problem det(yX + Y) = 0, where

    X = diag(2A_M, I_N, ..., I_N),

        [ A_{M-1}   A_{M-2} - A_M   A_{M-3}   ...     A_0    ]
        [ -I_N/2        0           -I_N/2                   ]
    Y = [               ...          ...       ...           ]
        [                           -I_N/2      0      -I_N/2 ]
        [                                      -I_N      0    ]

(the 1/2 blocks come from the three-term recurrence y T_k(y) = (T_{k+1}(y) + T_{k−1}(y))/2, and the last block row from y T_0(y) = T_1(y)).
Similar to the companion matrix, but for the Chebyshev basis. Robustness is inherited from the eigenvalue solver. It is a strong linearization.
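A minimal sketch of this linearization for the pencil written above. Here A is assumed to be a row cell array {A_0, ..., A_M} of the N-by-N Chebyshev coefficient matrices, with M ≥ 2; the generalized eigenvalues are computed by QZ via eig.

    function y = colleague_eig(A)
    M = numel(A) - 1;  N = size(A{1}, 1);
    X = blkdiag(2*A{M+1}, eye((M-1)*N));
    Y = zeros(M*N);
    Y(1:N, :) = cell2mat(A(M:-1:1));                % [A_{M-1}, A_{M-2}, ..., A_0]
    Y(1:N, N+1:2*N) = Y(1:N, N+1:2*N) - A{M+1};     % the A_{M-2} - A_M correction
    for k = 2:M-1                                    % rows for y*T_{M-k} = (T_{M-k+1} + T_{M-k-1})/2
        Y((k-1)*N+1:k*N, (k-2)*N+1:(k-1)*N) = -eye(N)/2;
        Y((k-1)*N+1:k*N,  k*N+1:(k+1)*N)    = -eye(N)/2;
    end
    Y((M-1)*N+1:M*N, (M-2)*N+1:(M-1)*N) = -eye(N);  % row for y*T_0 = T_1
    y = eig(-Y, X);                                  % y with det(y*X + Y) = 0
    end

As a scalar sanity check (N = 1, M = 2), the coefficients {0, 0, 1} of T_2(y) give the pencil det(y diag(2,1) + [0 -1; -1 0]) = 2y^2 - 1, whose eigenvalues ±1/sqrt(2) are the roots of T_2.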

  12. Act locally: Univariate rootfinding
Key point: Use a univariate rootfinder for the x-values.
Once we have the y-values, we use Chebfun's 1D rootfinder for the x-values: for each y* we independently solve
    p(x, y*) = 0,  x ∈ [−1, 1],  and  q(x, y*) = 0,  x ∈ [−1, 1].
Based on the colleague matrix (≈ companion matrix)
Gets its robustness from the eigenvalue solver
Originally Boyd's algorithm [Boyd 02]
1D subdivision is not needed for us
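A sketch of the colleague-matrix rootfinder in the scalar case, i.e. Boyd's method in its simplest form, without the 1D subdivision that Chebfun adds for robustness:

    function r = colleague_roots(c)
    % Real roots in [-1,1] of p(x) = sum_{k=0}^{n} c(k+1)*T_k(x); assumes
    % n >= 2 and a nonzero leading coefficient.
    c = c(:)/c(end);                       % normalize so the T_n coefficient is 1
    n = numel(c) - 1;
    C = zeros(n);
    C(1, 2) = 1;                           % x*T_0 = T_1
    for k = 2:n-1                          % x*T_{k-1} = (T_{k-2} + T_k)/2
        C(k, k-1) = 1/2;  C(k, k+1) = 1/2;
    end
    C(n, :) = -c(1:n).'/2;                 % eliminate T_n using p(x) = 0
    C(n, n-1) = C(n, n-1) + 1/2;
    r = eig(C);
    r = sort(real(r(abs(imag(r)) < 1e-10 & abs(r) <= 1 + 1e-10)));
    end

Combined with the vals2coeffs sketch from slide 7, this already gives a workable global 1D rootfinder: sample p(·, y*) at Chebyshev points, transform to coefficients, and take the eigenvalues of C.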

  13. Act locally: Reviewing the algorithm
Flowchart of the algorithm: given f(x, y) = g(x, y) = 0, polynomialize to p(x, y) = q(x, y) = 0; if the degree is ≤ 16, apply the Bézoutian resultant method followed by univariate rootfinding; if not, subdivide and repeat.
Collect together the solutions from the subdomains. Keep solutions in [−1, 1]^2, throw away the rest. Perturb some if necessary.
Further questions:
1. Should we hide the x- or the y-variable in the hidden-variable resultant method?
2. What is the operational cost of the algorithm?
3. Is the algorithm stable?

  14. Think globally: Stability of the Bézout resultant method
Let p(x*, y*) = q(x*, y*) = 0 with ‖p‖_∞ = ‖q‖_∞ = 1. The Jacobian matrix is
    J = J(x*, y*) = [ ∂p/∂x(x*, y*)   ∂p/∂y(x*, y*)
                      ∂q/∂x(x*, y*)   ∂q/∂y(x*, y*) ].
Absolute condition number of the problem at (x*, y*):  κ* = ‖J^{−1}‖_2.
Absolute condition number of y* for Bézout [1]:  κ(y*, B) ≥ κ*^2 / ‖adj(J)‖_2.
The Bézout resultant method is unstable: if the entries of J are small then κ(y*, B) ≫ κ*. This is BAD news!
[1] Nakatsukasa, Noferini, & T., 2013b.
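A tiny numeric illustration of the lower bound quoted above; the numbers are synthetic, chosen purely to show the scaling when the entries of J are uniformly small.

    J = [1e-4, 0; 0, 1e-4];                          % small-entry Jacobian at the root
    kappa_star = norm(inv(J), 2)                     % 1e4: the root itself is mildly sensitive
    adjJ = [J(2,2), -J(1,2); -J(2,1), J(1,1)];       % adjugate of the 2-by-2 J
    bezout_lower_bound = kappa_star^2/norm(adjJ, 2)  % 1e12 >> kappa_star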

  15. Think globally: Local refinement
Key fact: Local refinement can improve stability.
Redo the Bézout resultant in a subdomain Ω near (x*, y*). Let Ω = [x_min, x_max] × [y_min, y_max] with |Ω| = x_max − x_min ≈ y_max − y_min. Then
    κ_Ω(y*, B) ≈ |Ω|^2 κ(y*, B),
so shrinking |Ω| improves stability (in a think-globally sense). We get O(κ* u) error from the polynomialization. We also do local refinement in detected ill-conditioned regions.
[Figure: two successive zoom-ins on [−1, 1]^2.]

  16. Think globally: Bézout regularization
Key fact: Regularize the problem by projecting.
The Bézout resultant matrix is symmetric. Partition it as
    B(p_y, q_y) = [ B_1(y)   E(y)^T
                    E(y)     B_0(y) ],   B_1(y) ∈ R^{k×k},  B_0(y) ∈ R^{(N−k)×(N−k)},
with ‖B_0(y)‖_2 = O(u) and ‖E(y)‖_2 = O(u^{1/2}).
The eigenvalues of B_1(y) and B(p_y, q_y) in [−1, 1] are usually within O(u). Effectively this step removes large eigenvalues.
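An illustrative way to pick the split index k: scan for the largest trailing principal block whose norm is already at the noise level and discard it. This simple scan is a placeholder, not the talk's criterion, which also controls the coupling block E(y).

    function B1 = regularize_bezout(B, tol)
    % B: symmetric Bezout matrix sampled at some y; tol ~ u*norm(B).
    N = size(B, 1);
    k = N;
    while k > 1 && norm(B(k:N, k:N), 2) <= tol
        k = k - 1;                         % grow the discarded trailing block
    end
    B1 = B(1:k, 1:k);                      % keep the well-scaled leading block
    end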

  17. More details: Many other approaches
Two-parameter eigenvalue problem: use EIG to solve for x and y together:
    A_1 v = x B_1 v + y C_1 v,  A_2 w = x B_2 w + y C_2 w.
Homotopy continuation: solve a problem, make it harder:
    H(λ, z) = Q(z)(1 − λ) + P(z) λ,  λ ∈ (0, 1).
Contour algorithms: solve two curve-finding problems, f(x, y) = 0 and g(x, y) = 0, and find the intersections of the curves.
Other resultant methods: Sylvester resultants, u-resultants, inverse iteration, Newton-like methods.
