Moment methods in energy minimization


Moment methods in energy minimization
David de Laat, CWI Amsterdam
Andrejewski-Tage: Moment problems in theoretical physics, Konstanz, 9 April 2016

Packing and energy minimization: energy minimization, sphere packing, the Thomson problem (1904)


Setup
◮ Goal: find the ground state energy $E$ of a system of $N$ particles in a compact container $(V, d)$ with pair potential $h$
◮ Assume $h(s) \to \infty$ as $s \to 0$
◮ Define a graph with vertex set $V$ where two distinct vertices $x$ and $y$ are adjacent if $h(d(x, y))$ is large
◮ Let $I_t$ be the set of independent sets with $\leq t$ elements
◮ Let $I_{=t}$ be the set of independent sets with exactly $t$ elements
◮ These sets are compact metric spaces
◮ Define $f \in C(I_N)$ by
  $f(S) = h(d(x, y))$ if $S = \{x, y\}$ with $x \neq y$, and $f(S) = 0$ otherwise
◮ Minimal energy:
  $E = \min_{S \in I_{=N}} \sum_{P \subseteq S} f(P)$
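The Thomson problem mentioned above is the case $V = S^2$ with Coulomb potential $h(s) = 1/s$. As an illustrative sketch (not from the slides; `pair_energy` is a name of my choosing), the energy $\sum_{P \subseteq S} f(P)$ of a fixed configuration reduces to a sum over unordered pairs and can be evaluated directly:

```python
import itertools
import math

def pair_energy(points, h=lambda s: 1.0 / s):
    """Total energy: sum of h(d(x, y)) over unordered pairs {x, y} of points."""
    total = 0.0
    for x, y in itertools.combinations(points, 2):
        dist = math.dist(x, y)  # Euclidean (chordal) distance
        total += h(dist)
    return total

# Two antipodal points on S^2: distance 2, so Coulomb energy 1/2
print(pair_energy([(0, 0, 1), (0, 0, -1)]))  # 0.5
```

Minimizing this quantity over all $N$-point configurations on the sphere is exactly the (hard, nonconvex) problem that the relaxations below bound from below.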

Moment methods in energy minimization
◮ For $S \in I_{=N}$, define the measure $\chi_S = \sum_{R \subseteq S} \delta_R$
◮ We can use this measure to compute the energy of $S$:
  $\chi_S(f) = \int f(P) \, d\chi_S(P) = \sum_{R \subseteq S} f(R)$
◮ This measure satisfies the following three properties:
  ◮ $\chi_S$ is a positive measure
  ◮ $\chi_S$ satisfies $\chi_S(I_{=i}) = \binom{N}{i}$ for all $i$
  ◮ $\chi_S$ is a measure of positive type (see next slide)
◮ Relaxations: for $t = 1, \ldots, N$,
  $E_t = \min \{ \lambda(f) : \lambda \in M(I_{2t}) \text{ a positive measure of positive type},\ \lambda(I_{=i}) = \binom{N}{i} \text{ for all } 0 \leq i \leq 2t \}$
◮ $E_t$ is a $\min\{2t, N\}$-point bound
◮ $E_1 \leq E_2 \leq \cdots \leq E_N = E$
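For a small finite configuration, $\chi_S$ and its properties can be checked by brute force. A hypothetical sketch (helper names are mine): enumerating all subsets $R \subseteq S$ realizes $\chi_S(f) = \sum_{R \subseteq S} f(R)$ and makes the level counts $\chi_S(I_{=i}) = \binom{N}{i}$ hold by construction:

```python
import itertools
import math

def subsets(S):
    """All subsets of the tuple S, as tuples: the support of chi_S."""
    return [c for r in range(len(S) + 1) for c in itertools.combinations(S, r)]

def chi(S, f):
    """chi_S(f) = sum of f(R) over all R subseteq S."""
    return sum(f(R) for R in subsets(S))

def f(R, h=lambda s: 1.0 / s):
    """f is supported on pairs: f({x, y}) = h(d(x, y)), and 0 otherwise."""
    if len(R) == 2:
        return h(math.dist(R[0], R[1]))
    return 0.0

S = ((0, 0, 1), (0, 0, -1), (1, 0, 0))  # three points on S^2
energy = chi(S, f)  # equals the pairwise energy of S

# Level counts: number of subsets of size i is binomial(N, i)
counts = [sum(1 for R in subsets(S) if len(R) == i) for i in range(4)]
print(energy, counts)  # counts = [1, 3, 3, 1]
```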

Measures of positive type [L–Vallentin 2015]
◮ Operator:
  $A_t \colon C(I_t \times I_t)_{\mathrm{sym}} \to C(I_{2t}), \quad A_t K(S) = \sum_{J, J' \in I_t :\, J \cup J' = S} K(J, J')$
◮ This is an infinite-dimensional version of the adjoint of the operator $y \mapsto M(y)$ that maps a moment sequence to a moment matrix
◮ Dual operator: $A_t^* \colon M(I_{2t}) \to M(I_t \times I_t)_{\mathrm{sym}}$
◮ Cone of positive definite kernels: $C(I_t \times I_t)_{\succeq 0}$
◮ Dual cone:
  $M(I_t \times I_t)_{\succeq 0} = \{ \mu \in M(I_t \times I_t)_{\mathrm{sym}} : \mu(K) \geq 0 \text{ for all } K \in C(I_t \times I_t)_{\succeq 0} \}$
◮ A measure $\lambda \in M(I_{2t})$ is of positive type if $A_t^* \lambda \in M(I_t \times I_t)_{\succeq 0}$
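The defining sum of $A_t$ can be sketched in the finite setting. A toy illustration (not from the slides, and with the simplification that every subset is treated as independent): enumerate all $J, J' \in I_t$ and sum $K(J, J')$ over pairs whose union is $S$:

```python
from itertools import combinations

def independent_subsets(points, up_to):
    """All subsets of size <= up_to; here every subset counts as independent."""
    return [frozenset(c) for r in range(up_to + 1)
            for c in combinations(points, r)]

def A(t, K, S, points):
    """(A_t K)(S) = sum of K(J, J') over J, J' in I_t with J union J' = S."""
    I_t = independent_subsets(points, t)
    return sum(K(J, Jp) for J in I_t for Jp in I_t if J | Jp == S)

points = ("a", "b")
K = lambda J, Jp: 1.0  # toy constant kernel

# Pairs of subsets of size <= 1 with union {a, b}: ({a},{b}) and ({b},{a})
print(A(1, K, frozenset("ab"), points))  # 2.0
```

This is exactly how a moment matrix's adjoint acts in the polynomial case: each "moment" of the union $S$ collects contributions from all factorizations $S = J \cup J'$.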

Flat extensions
◮ Recall: $E_1 \leq E_2 \leq \cdots \leq E_N = E$
◮ Sufficient condition for the existence of an extension of a feasible solution $\lambda \in M(I_{2t})$ of $E_t$ to a feasible solution of $E_N$
◮ Positive semidefinite form $\langle f, g \rangle = A_t^* \lambda (f \otimes g)$ on $C(I_t)$
◮ Define $N_t(\lambda) = \{ f \in C(I_t) : \langle f, f \rangle = 0 \}$
◮ If $\lambda \in M(I_{2t})$ is of positive type and $C(I_t) = C(I_{t-1}) + N_t(\lambda)$, then we can extend $\lambda$ to a measure $\lambda' \in M(I_N)$ that is of positive type
◮ $\lambda(I_{=i}) = \binom{N}{i}$ for $0 \leq i \leq 2t$ implies $\lambda'(I_{=i}) = \binom{N}{i}$ for $0 \leq i \leq N$

If an optimal solution $\lambda$ of $E_t$ satisfies $C(I_t) = C(I_{t-1}) + N_t(\lambda)$, then $E_t = E_N = E$.
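For context (background, not on the slides): the finite-dimensional prototype of this condition is the Curto–Fialkow flat extension theorem for truncated moment sequences, which in moment-matrix language reads:

```latex
% Curto--Fialkow flat extension theorem (finite-dimensional analogue).
% If the order-$t$ moment matrix is positive semidefinite and "flat" over
% order $t-1$, the truncated sequence has a representing measure.
If $M_t(y) \succeq 0$ and
\[
  \operatorname{rank} M_t(y) = \operatorname{rank} M_{t-1}(y),
\]
then $y$ extends to a moment sequence admitting a representing measure
supported on $\operatorname{rank} M_t(y)$ atoms. The condition
$C(I_t) = C(I_{t-1}) + N_t(\lambda)$ above plays the same role: modulo the
null space $N_t(\lambda)$ of the form $\langle \cdot, \cdot \rangle$,
functions on $I_t$ add nothing beyond those on $I_{t-1}$.
```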

Computations using the dual hierarchy
◮ Dual maximization problem $E_t^*$ (the slide pictures $0$, $E_t$, and $E$ on a number line)
◮ Strong duality holds: $E_t = E_t^*$
◮ In $E_t^*$ we optimize over kernels $K \in C(I_t \times I_t)_{\succeq 0}$
◮ Idea:
  1. Express $K$ in terms of its Fourier coefficients
  2. Set all but finitely many of these coefficients to $0$
  3. Optimize over the remaining coefficients
◮ To do this we need a group $\Gamma$ with an action on $I_t$
◮ In principle this can be the trivial group, but for symmetry reduction a bigger group is better

Harmonic analysis on subset spaces
◮ Let $\Gamma$ be a compact group with an action on $V$
◮ Example: $\Gamma = O(3)$ and $V = S^2 \subseteq \mathbb{R}^3$
◮ Assume the metric is $\Gamma$-invariant: $d(\gamma x, \gamma y) = d(x, y)$ for all $x, y \in V$ and $\gamma \in \Gamma$
◮ Then the action extends to an action on $I_t$ by $\gamma \emptyset = \emptyset$ and $\gamma \{x_1, \ldots, x_t\} = \{\gamma x_1, \ldots, \gamma x_t\}$
◮ By an "averaging argument" we may assume $K \in C(I_t \times I_t)_{\succeq 0}$ to be $\Gamma$-invariant: $K(\gamma J, \gamma J') = K(J, J')$ for all $\gamma \in \Gamma$ and $J, J' \in I_t$

Harmonic analysis on subset spaces
◮ Fourier inversion formula:
  $K(x, y) = \sum_{\pi \in \hat{\Gamma}} \sum_{i,j=1}^{m_\pi} \hat{K}(\pi)_{i,j}\, Z_\pi(x, y)_{i,j}$
◮ The Fourier matrices $\hat{K}(\pi)$ are positive semidefinite
◮ The zonal matrices $Z_\pi(x, y)$ are fixed matrices that depend on $I_t$ and $\Gamma$ (these matrices take the role of the exponential functions in the familiar Fourier transform)
◮ To construct the matrices $Z_\pi(x, y)$ we need to "perform the harmonic analysis of $I_t$ with respect to $\Gamma$"
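In the simplest case, $t = 1$ (points rather than subsets), $V = S^2$, $\Gamma = O(3)$, the zonal matrices are $1 \times 1$ and reduce, via the addition theorem for spherical harmonics, to Legendre polynomials in the inner product $x \cdot y$: any kernel $K(x, y) = \sum_k c_k P_k(x \cdot y)$ with $c_k \geq 0$ is positive definite on the sphere. A numerical sanity check of that special case (illustrative, not from the talk):

```python
import numpy as np
from scipy.special import eval_legendre

rng = np.random.default_rng(0)

# Random unit vectors on S^2
X = rng.normal(size=(30, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# K(x, y) = sum_k c_k P_k(x . y) with nonnegative Fourier coefficients c_k
coeffs = [1.0, 0.5, 0.25, 0.125]
G = np.clip(X @ X.T, -1.0, 1.0)  # matrix of pairwise inner products
K = sum(c * eval_legendre(k, G) for k, c in enumerate(coeffs))

# The kernel matrix should be positive semidefinite (up to roundoff)
print(np.linalg.eigvalsh(K).min() >= -1e-9)  # True
```

Truncating the coefficient sequence to finitely many $c_k \geq 0$ is exactly the "set all but finitely many Fourier coefficients to 0" step of the dual hierarchy.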

Harmonic analysis on subset spaces
◮ The action of $\Gamma$ on $I_t$ extends to a linear action of $\Gamma$ on $C(I_t)$ by $\gamma f(S) = f(\gamma^{-1} S)$
◮ By performing the harmonic analysis of $I_t$ with respect to $\Gamma$ we mean: decompose $C(I_t)$ as a direct sum of irreducible (smallest possible) $\Gamma$-invariant subspaces
◮ We give a procedure to perform the harmonic analysis of $I_t$ with respect to $\Gamma$, given that we know enough about the harmonic analysis of $V$. In particular, we must know how to decompose tensor products of irreducible subspaces of $C(V)$ into irreducibles
◮ We do this explicitly for $V = S^2$, $\Gamma = O(3)$, and $t = 2$ (using Clebsch–Gordan coefficients)
◮ We use this to lower bound $E_2^*$ by maximization problems that have finitely many positive semidefinite matrix variables (but still infinitely many constraints)

Invariant theory
◮ These constraints are of the form $p(x_1, \ldots, x_4) \geq 0$ for $\{x_1, x_2, x_3, x_4\} \in I_{=4}$, where $p$ is a polynomial whose coefficients depend linearly on the entries of the matrix variables
◮ These polynomials satisfy $p(\gamma x_1, \ldots, \gamma x_4) = p(x_1, \ldots, x_4)$ for $x_1, \ldots, x_4 \in S^2$ and $\gamma \in O(3)$
◮ By a theorem of invariant theory we can write $p$ as a polynomial in the inner products: $p(x_1, x_2, x_3, x_4) = q(x_1 \cdot x_2, \ldots, x_3 \cdot x_4)$
◮ This theorem is nonconstructive, so we solve large sparse linear systems to perform this transformation explicitly
◮ Now we have constraints of the form $q(u_1, \ldots, u_l) \geq 0$ for $(u_1, \ldots, u_l)$ in a semialgebraic set
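The converse direction of this reduction is easy to sanity-check numerically: any polynomial built from pairwise inner products is automatically $O(3)$-invariant, since orthogonal maps preserve inner products. A small sketch (the specific $q$ below is a made-up example, not one of the talk's constraint polynomials):

```python
import numpy as np

rng = np.random.default_rng(1)

def q(u12, u13, u14, u23, u24, u34):
    """A made-up polynomial in the six pairwise inner products."""
    return u12 * u34 - u13 * u24 + u14**2 + u23

def p(xs):
    """p(x1, ..., x4) = q applied to all pairwise inner products."""
    u = [xs[i] @ xs[j] for i in range(4) for j in range(i + 1, 4)]
    return q(*u)

# Four random unit vectors and a random orthogonal gamma in O(3)
xs = rng.normal(size=(4, 3))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
gamma, _ = np.linalg.qr(rng.normal(size=(3, 3)))

# Invariance: p(gamma x1, ..., gamma x4) == p(x1, ..., x4)
print(np.isclose(p(xs @ gamma.T), p(xs)))  # True
```

The nontrivial content of the invariant-theory theorem is the other direction: every $O(3)$-invariant polynomial arises this way.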

Sums of squares characterizations
◮ Putinar: every polynomial that is positive on a compact set $S = \{x \in \mathbb{R}^n : g_1(x) \geq 0, \ldots, g_m(x) \geq 0\}$, where the set $\{g_1, \ldots, g_m\}$ has the Archimedean property, is of the form
  $f(x) = \sum_{i=0}^{m} g_i(x)\, s_i(x)$, where $g_0 := 1$ and each $s_i$ is a sum of squares
◮ The sums of squares $s_i$ can be modeled using positive semidefinite matrices
◮ We use this to go from infinitely many constraints to finitely many semidefinite constraints
◮ In energy minimization the particles are interchangeable
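The modeling step works by writing $s(x) = m(x)^\top Q\, m(x)$ with $m(x)$ a vector of monomials and $Q \succeq 0$; a factorization $Q = A^\top A$ then exhibits $s$ explicitly as $\sum_i (A\, m(x))_i^2$. A minimal numerical sketch (the particular $Q$ is arbitrary, chosen only for illustration):

```python
import numpy as np

def m(x):
    """Monomial basis m(x) = (1, x, x^2)."""
    return np.array([1.0, x, x * x])

# Any Q = A^T A is positive semidefinite, so s(x) = m(x)^T Q m(x) is SOS
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
Q = A.T @ A

def s(x):
    return m(x) @ Q @ m(x)

def s_sos(x):
    """The explicit SOS decomposition: s(x) = sum_i (A m(x))_i^2."""
    return float(np.sum((A @ m(x)) ** 2))

checks = [np.isclose(s(x), s_sos(x)) and s(x) >= 0 for x in (-2.0, 0.3, 5.0)]
print(all(checks))  # True
```

Searching over the matrix $Q \succeq 0$ (one such matrix per $s_i$) is a semidefinite constraint, which is how the infinitely many pointwise constraints $q(u) \geq 0$ collapse to finitely many semidefinite ones.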
