
Welcome back. Today: review of the spectral gap, edge expansion $h(G)$, sparsity $\phi(G)$, etc., and writing $1 - \lambda_2$ as a continuous relaxation of $\phi(G)$.


Action of $M$

A vector $v$ assigns values to the vertices, and $(Mv)_i = \frac{1}{d} \sum_{j \sim i} v_j$: the action of $M$ (the normalized adjacency matrix $A/d$ of a $d$-regular graph) replaces each vertex's value by the average of its neighbours' values. Direct consequences of this action: $|\lambda_i| \le 1$ for all $i$, and $v_1 = \mathbf{1}$ with $\lambda_1 = 1$.

Claim: For a connected graph, $\lambda_2 < 1$.

Proof: Let $v \perp \mathbf{1}$ be the second eigenvector, with maximum value $x$. Since $v \perp \mathbf{1}$ and $v \neq 0$, some vertex has value below $x$, and connectivity gives a path from an $x$-valued vertex to it; so there is an edge $(i, j)$ with $v_i = x$ and $v_j < x$. Then $(Mv)_i \le \frac{1}{d}(x + x + \cdots + v_j) < x$, so $v$ cannot satisfy $Mv = v$. Therefore $\lambda_2 < 1$.

Claim: If $\lambda_2 < 1$, then $G$ is connected.

Proof: By contradiction. If $G$ is disconnected, assign $+1$ to the vertices of one component and $-\delta$ to the rest. Then $x_i = (Mx)_i$ for every $i$, i.e., $x$ is an eigenvector with $\lambda = 1$. Choose $\delta$ to make $\sum_i x_i = 0$, i.e., $x \perp \mathbf{1}$; this forces $\lambda_2 = 1$, a contradiction.
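As a quick numerical sanity check of both claims, the minimal sketch below (assuming $M = A/d$ for a $d$-regular graph; the 6-cycle and the two disjoint triangles are my own illustrative examples, not from the slides) computes $\lambda_2$ for a connected and a disconnected 2-regular graph:

```python
import numpy as np

def walk_matrix(adj):
    """Normalized adjacency M = A/d of a d-regular graph."""
    d = adj.sum(axis=1)[0]        # d-regular: every row sums to d
    return adj / d

def second_eigenvalue(M):
    """M is symmetric, so its eigenvalues are real; return the 2nd largest."""
    return np.sort(np.linalg.eigvalsh(M))[-2]

# 6-cycle: connected, 2-regular
C6 = np.zeros((6, 6))
for i in range(6):
    C6[i, (i + 1) % 6] = C6[(i + 1) % 6, i] = 1

# two disjoint triangles: disconnected, 2-regular
T2 = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]:
    T2[a, b] = T2[b, a] = 1

print(second_eigenvalue(walk_matrix(C6)))  # 0.5: lambda_2 < 1, connected
print(second_eigenvalue(walk_matrix(T2)))  # 1.0: lambda_2 = 1, disconnected
```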

Spectral Gap and the Connectivity of the Graph

Spectral gap: $\mu = \lambda_1 - \lambda_2 = 1 - \lambda_2$.

Recall: $h(G) = \min_{S,\, |S| \le |V|/2} \frac{|E(S, V - S)|}{d\,|S|}$.

$1 - \lambda_2 = 0 \iff \lambda_2 = 1 \iff G$ disconnected $\iff h(G) = 0$.

In general, a small spectral gap $1 - \lambda_2$ suggests a "poorly connected" graph. Formally:

Cheeger's Inequality: $\frac{1 - \lambda_2}{2} \le h(G) \le \sqrt{2(1 - \lambda_2)}$.
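Cheeger's inequality is easy to verify numerically on tiny graphs. The sketch below brute-forces $h(G)$ over all small subsets (exponential in $n$, so illustration only) and checks both bounds; the 6-cycle and the helper names are my own choices, not from the slides:

```python
import itertools
import numpy as np

def edge_boundary(adj, S):
    """Number of edges with exactly one endpoint in S."""
    S = set(S)
    n = len(adj)
    return sum(adj[i][j] for i in S for j in range(n) if j not in S)

def expansion(adj, d):
    """h(G) = min over nonempty S, |S| <= n/2, of |E(S, V-S)| / (d |S|)."""
    n = len(adj)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), k):
            best = min(best, edge_boundary(adj, S) / (d * k))
    return best

# 6-cycle, 2-regular
n, d = 6, 2
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

lam2 = np.sort(np.linalg.eigvalsh(adj / d))[-2]
print((1 - lam2) / 2, expansion(adj, d), np.sqrt(2 * (1 - lam2)))
# roughly 0.25 <= 0.333... <= 1.0, as Cheeger predicts
```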

Spectral Gap and Conductance

We will show that $1 - \lambda_2$ is a continuous relaxation of $\phi(G)$, where

$$\phi(G) = \min_{S \subseteq V} \frac{n\,|E(S, V - S)|}{d\,|S|\,|V - S|}.$$

Let $x$ be the characteristic vector of the set $S$: $x_i = 1$ if $i \in S$, $x_i = 0$ if $i \notin S$. Then (using $|x_i - x_j| = (x_i - x_j)^2$ for $0/1$ values)

$$|E(S, V - S)| = \frac{1}{2} \sum_{i,j} A_{ij} |x_i - x_j| = \frac{d}{2} \sum_{i,j} M_{ij} (x_i - x_j)^2,$$

$$|S|\,|V - S| = \frac{1}{2} \sum_{i,j} |x_i - x_j| = \frac{1}{2} \sum_{i,j} (x_i - x_j)^2,$$

and therefore

$$\phi(G) = \min_{x \in \{0,1\}^V - \{\mathbf{0}, \mathbf{1}\}} \frac{n \sum_{i,j} M_{ij} (x_i - x_j)^2}{\sum_{i,j} (x_i - x_j)^2}.$$
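The two identities are easy to check numerically for any concrete $S$. A minimal sketch (the 6-cycle and the subset $S = \{0, 1, 4\}$ are arbitrary illustrative choices):

```python
import numpy as np

n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
M = A / d

S = {0, 1, 4}                        # any subset works
x = np.array([1.0 if i in S else 0.0 for i in range(n)])
D = (x[:, None] - x[None, :]) ** 2   # matrix of (x_i - x_j)^2

cut = sum(A[i, j] for i in S for j in range(n) if j not in S)
print(cut, (d / 2) * (M * D).sum())          # |E(S, V-S)|: both 4.0
print(len(S) * (n - len(S)), 0.5 * D.sum())  # |S||V-S|:    both 9.0
```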

Recall the Rayleigh quotient: $\lambda_2 = \max_{x \in \mathbb{R}^V - \{0\},\, x \perp \mathbf{1}} \frac{x^T M x}{x^T x}$, so

$$1 - \lambda_2 = \min_{x \in \mathbb{R}^V - \{0\},\, x \perp \mathbf{1}} \frac{2(x^T x - x^T M x)}{2\, x^T x}.$$

Claim: $2\, x^T x = \frac{1}{n} \sum_{i,j} (x_i - x_j)^2$.

Proof:
$$\sum_{i,j} (x_i - x_j)^2 = \sum_{i,j} (x_i^2 + x_j^2) - 2 \sum_{i,j} x_i x_j = 2n \sum_i x_i^2 - 2\Big(\sum_i x_i\Big)^2 = 2n \sum_i x_i^2 = 2n\, x^T x.$$

We used $x \perp \mathbf{1} \Rightarrow \sum_i x_i = 0$.

Claim: $2(x^T x - x^T M x) = \sum_{i,j} M_{ij} (x_i - x_j)^2$.

Proof:
$$\sum_{i,j} M_{ij} (x_i - x_j)^2 = \sum_{i,j} M_{ij} (x_i^2 + x_j^2) - 2 \sum_{i,j} M_{ij} x_i x_j = \sum_i \frac{1}{d} \sum_{j \sim i} (x_i^2 + x_j^2) - 2\, x^T M x$$
$$= 2 \sum_{(i,j) \in E} \frac{1}{d} (x_i^2 + x_j^2) - 2\, x^T M x = 2 \sum_i x_i^2 - 2\, x^T M x = 2\, x^T x - 2\, x^T M x.$$
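Both claims can be checked numerically for a random $x \perp \mathbf{1}$. A minimal sketch (the 6-cycle is an arbitrary test graph; subtracting the mean projects $x$ onto $\mathbf{1}^\perp$):

```python
import numpy as np

n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
M = A / d

rng = np.random.default_rng(0)
x = rng.standard_normal(n)
x -= x.mean()                       # enforce x ⊥ 1

D = (x[:, None] - x[None, :]) ** 2
print(2 * x @ x, D.sum() / n)                  # claim 1: equal
print(2 * (x @ x - x @ M @ x), (M * D).sum())  # claim 2: equal
```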

Combining the two claims, we get

$$1 - \lambda_2 = \min_{x \in \mathbb{R}^V - \{0\},\, x \perp \mathbf{1}} \frac{\sum_{i,j} M_{ij} (x_i - x_j)^2}{\frac{1}{n} \sum_{i,j} (x_i - x_j)^2} = \min_{x \in \mathbb{R}^V - \mathrm{Span}\{\mathbf{1}\}} \frac{\sum_{i,j} M_{ij} (x_i - x_j)^2}{\frac{1}{n} \sum_{i,j} (x_i - x_j)^2}$$

(the ratio is invariant under adding multiples of $\mathbf{1}$ to $x$, so restricting to $x \perp \mathbf{1}$ loses nothing). Recall

$$\phi(G) = \min_{x \in \{0,1\}^V - \{\mathbf{0}, \mathbf{1}\}} \frac{n \sum_{i,j} M_{ij} (x_i - x_j)^2}{\sum_{i,j} (x_i - x_j)^2}.$$

We have $1 - \lambda_2$ as a continuous relaxation of $\phi(G)$ (the same objective, minimized over a larger set), thus

$$1 - \lambda_2 \le \phi(G) \le 2\, h(G),$$

where the last step uses $|S|\,|V - S| \ge \frac{n}{2} \min(|S|, |V - S|)$. Hooray!! We get the easy part of Cheeger: $\frac{1 - \lambda_2}{2} \le h(G)$.
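The chain $1 - \lambda_2 \le \phi(G) \le 2 h(G)$ can also be checked by brute force; a small sketch on a 6-cycle (my illustrative choice; exponential-time, tiny graphs only):

```python
import itertools
import numpy as np

n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

def cut(S):
    return sum(A[i, j] for i in S for j in range(n) if j not in S)

phi = min(n * cut(S) / (d * k * (n - k))
          for k in range(1, n)
          for S in itertools.combinations(range(n), k))
h = min(cut(S) / (d * k)
        for k in range(1, n // 2 + 1)
        for S in itertools.combinations(range(n), k))
lam2 = np.sort(np.linalg.eigvalsh(A / d))[-2]

print(1 - lam2, phi, 2 * h)   # ~0.5 <= ~0.667 <= ~0.667
```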

Cheeger: Hard Part

Now let's get to the hard part of Cheeger: $h(G) \le \sqrt{2(1 - \lambda_2)}$.

Idea: we have $1 - \lambda_2$ as a continuous relaxation of $\phi(G)$. Take the 2nd eigenvector

$$x = \operatorname*{argmin}_{x \in \mathbb{R}^V - \mathrm{Span}\{\mathbf{1}\}} \frac{\sum_{i,j} M_{ij} (x_i - x_j)^2}{\frac{1}{n} \sum_{i,j} (x_i - x_j)^2}.$$

Consider $x$ as an embedding of the vertices into the real line, and round it to a vector $\bar{x} \in \{0,1\}^V$.

Rounding: take a threshold $t$, and set $\bar{x}_i = 1$ if $x_i \ge t$, $\bar{x}_i = 0$ if $x_i < t$.

What would be a good $t$? We don't know. Try all possible thresholds ($n - 1$ possibilities), and hope there is a $t$ leading to a good cut!

Sweeping Cut Algorithm

Input: $G = (V, E)$, $x \in \mathbb{R}^V$, $x \perp \mathbf{1}$.

Sort the vertices in non-decreasing order of their values in $x$; WLOG $V = \{1, \ldots, n\}$ with $x_1 \le x_2 \le \ldots \le x_n$.

Let $S_i = \{1, \ldots, i\}$ for $i = 1, \ldots, n - 1$.

Return $S = \operatorname*{argmin}_{S_i} h(S_i)$.

Main Lemma: Let $G = (V, E)$ be $d$-regular, $x \in \mathbb{R}^V$, $x \perp \mathbf{1}$, and
$$\delta = \frac{\sum_{i,j} M_{ij} (x_i - x_j)^2}{\frac{1}{n} \sum_{i,j} (x_i - x_j)^2}.$$
If $S$ is the output of the sweeping cut algorithm, then $h(S) \le \sqrt{2\delta}$.

Note: Applying the Main Lemma to the 2nd eigenvector $v_2$, we have $\delta = 1 - \lambda_2$, and $h(G) \le h(S) \le \sqrt{2(1 - \lambda_2)}$. Done!
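A sketch implementation of the algorithm (assuming the graph is given as a dense adjacency matrix; the function name `sweep_cut` and the incremental cut update are my own choices, not from the slides), run on the 2nd eigenvector of $M = A/d$ for a 6-cycle:

```python
import numpy as np

def sweep_cut(A, d, x):
    """Return (best_set, best_h) over the n-1 prefix cuts of the sort by x,
    where h(S) = |E(S, V-S)| / (d * min(|S|, |V-S|))."""
    n = len(x)
    order = np.argsort(x)             # vertices in non-decreasing x-value
    in_S = np.zeros(n, dtype=bool)
    cut = 0.0
    best_h, best_S = np.inf, None
    for i in range(n - 1):            # S_i = first i+1 vertices of the order
        v = order[i]
        # moving v into S: its edges to outside join the cut, edges into S leave it
        cut += A[v, ~in_S].sum() - A[v, in_S].sum()
        in_S[v] = True
        h = cut / (d * min(i + 1, n - i - 1))
        if h < best_h:
            best_h, best_S = h, np.flatnonzero(in_S)
    return best_S, best_h

n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
vals, vecs = np.linalg.eigh(A / d)    # ascending eigenvalues
x = vecs[:, -2]                       # eigenvector of the 2nd largest eigenvalue
S, h = sweep_cut(A, d, x)
print(S, h, np.sqrt(2 * (1 - vals[-2])))   # h(S) <= sqrt(2(1 - lambda_2))
```

Maintaining the cut size incrementally as each vertex crosses the threshold keeps the whole sweep at $O(n^2)$ on a dense matrix, rather than recomputing every prefix cut from scratch.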

Proof of Main Lemma

WLOG $V = \{1, \ldots, n\}$, $x_1 \le x_2 \le \ldots \le x_n$.

Want to show: $\exists\, i$ s.t.
$$h(S_i) = \frac{\frac{1}{d}\, |E(S_i, V - S_i)|}{\min(|S_i|, |V - S_i|)} \le \sqrt{2\delta}.$$

Probabilistic argument: construct a distribution $\mathcal{D}$ over $\{S_1, \ldots, S_{n-1}\}$ such that
$$\frac{\mathbb{E}_{S \sim \mathcal{D}}\big[\frac{1}{d}\, |E(S, V - S)|\big]}{\mathbb{E}_{S \sim \mathcal{D}}\big[\min(|S|, |V - S|)\big]} \le \sqrt{2\delta}$$
$$\Rightarrow\; \mathbb{E}_{S \sim \mathcal{D}}\Big[\tfrac{1}{d}\, |E(S, V - S)| - \sqrt{2\delta}\, \min(|S|, |V - S|)\Big] \le 0$$
$$\Rightarrow\; \exists\, S:\; \tfrac{1}{d}\, |E(S, V - S)| - \sqrt{2\delta}\, \min(|S|, |V - S|) \le 0.$$

The Distribution $\mathcal{D}$

WLOG, shift and scale $x$ so that $x_{\lfloor n/2 \rfloor} = 0$ and $x_1^2 + x_n^2 = 1$.

Take $t$ from the range $[x_1, x_n]$ with density function $f(t) = 2|t|$.

Check: $\int_{x_1}^{x_n} f(t)\, dt = \int_{x_1}^{0} -2t\, dt + \int_{0}^{x_n} 2t\, dt = x_1^2 + x_n^2 = 1$.

Set $S = \{i : x_i \le t\}$, and take $\mathcal{D}$ to be the distribution over $S_1, \ldots, S_{n-1}$ resulting from the above procedure.
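Sampling $t$ with density $f(t) = 2|t|$ is a one-liner via inverse-CDF sampling; the sketch below (function name and test values are mine) uses the closed-form CDF $F(t) = x_1^2 - t^2$ for $t < 0$ and $F(t) = x_1^2 + t^2$ for $t \ge 0$:

```python
import numpy as np

def sample_threshold(x1, xn, rng):
    """Draw t in [x1, xn] with density 2|t|, given x1 <= 0 <= xn, x1^2 + xn^2 = 1."""
    assert x1 <= 0 <= xn and abs(x1**2 + xn**2 - 1) < 1e-9
    u = rng.uniform()
    if u < x1**2:                    # invert F on the negative branch
        return -np.sqrt(x1**2 - u)
    return np.sqrt(u - x1**2)        # invert F on the non-negative branch

rng = np.random.default_rng(0)
x1, xn = -0.6, 0.8                   # satisfies x1^2 + xn^2 = 1
ts = np.array([sample_threshold(x1, xn, rng) for _ in range(100_000)])
print(np.mean(ts < 0))               # ~ x1^2 = 0.36, as the density predicts
```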

Goal:
$$\frac{\mathbb{E}_{S \sim \mathcal{D}}\big[\frac{1}{d}\, |E(S, V - S)|\big]}{\mathbb{E}_{S \sim \mathcal{D}}\big[\min(|S|, |V - S|)\big]} \le \sqrt{2\delta}.$$
