Action of M.

A vector v assigns a value to each vertex. (Mv)_i = (1/d) ∑_{j∼i} v_j: the action of M replaces each vertex's value by the average of its neighbours' values.

Direct results from the action of M: |λ_i| ≤ 1 for all i, and v_1 = 1 is an eigenvector with λ_1 = 1.

Claim: For a connected graph, λ_2 < 1.
Proof: Let v be a second eigenvector, so v ⊥ 1, and let x be its maximum value. Since v ⊥ 1 and v ≠ 0, some vertex has value strictly less than x. By connectivity there is a path from an x-valued vertex to that lower-valued vertex, so there exists an edge (i, j) with v_i = x and v_j < x. Then

    (Mv)_i ≤ (1/d)(x + x + ··· + v_j) < x = v_i,

so Mv = v is impossible for this v, and therefore λ_2 < 1.

Claim: If λ_2 < 1, the graph is connected.
Proof: By contradiction. Suppose the graph is disconnected; assign +1 to the vertices of one component and −δ to the rest. Every vertex's value equals the average of its neighbours' values, so x_i = (Mx)_i, i.e., x is an eigenvector with λ = 1. Choose δ to make ∑_i x_i = 0, i.e., x ⊥ 1. Then λ_2 = 1, a contradiction.
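These facts are easy to check numerically. Below is a minimal sketch; the two graphs (the 4-cycle C_4 and a pair of disjoint edges) are assumed examples, not from the lecture. The connected graph has λ_2 < 1, the disconnected one has λ_2 = 1.

```python
import numpy as np

def walk_matrix(A):
    # M = A/d for a d-regular graph: each row averages the neighbours.
    d = int(A.sum(axis=1)[0])
    return A / d

# C_4, the 4-cycle: connected, 2-regular.
A_cycle = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]])
# Two disjoint edges: disconnected, 1-regular.
A_split = np.array([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])

for A in (A_cycle, A_split):
    lam = np.sort(np.linalg.eigvalsh(walk_matrix(A)))[::-1]  # λ_1 ≥ λ_2 ≥ ...
    print(np.round(lam, 6))
# C_4 gives λ_2 = 0 < 1; the disconnected graph gives λ_2 = 1
# (eigenvalue 1 with multiplicity 2, one per component).
```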
Spectral Gap and the connectivity of the graph.

Spectral gap: μ = λ_1 − λ_2 = 1 − λ_2.

Recall: h(G) = min_{S : |S| ≤ |V|/2} |E(S, V−S)| / (d|S|).

1 − λ_2 = 0 ⇔ λ_2 = 1 ⇔ G disconnected ⇔ h(G) = 0.

In general, a small spectral gap 1 − λ_2 suggests a "poorly connected" graph. Formally:

Cheeger's Inequality: (1 − λ_2)/2 ≤ h(G) ≤ √(2(1 − λ_2)).
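On small graphs both sides of the inequality can be checked by brute force over all cuts. A minimal sketch, assuming a connected d-regular adjacency matrix as input; the 6-cycle below is a hypothetical example:

```python
import itertools
import numpy as np

def cheeger_check(A):
    # Returns ((1 - λ2)/2, h(G), sqrt(2(1 - λ2))) for a d-regular graph.
    n = len(A)
    d = int(A.sum(axis=1)[0])
    lam2 = np.sort(np.linalg.eigvalsh(A / d))[-2]
    h = min(
        A[list(S)][:, [j for j in range(n) if j not in S]].sum() / (d * len(S))
        for k in range(1, n // 2 + 1)
        for S in itertools.combinations(range(n), k)
    )
    return (1 - lam2) / 2, h, np.sqrt(2 * (1 - lam2))

# 6-cycle: d = 2, λ2 = 1/2, h(G) = 1/3, so 0.25 ≤ 1/3 ≤ 1 as Cheeger promises.
C6 = sum(np.roll(np.eye(6, dtype=int), s, axis=1) for s in (1, -1))
print(cheeger_check(C6))
```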
Spectral Gap and Conductance.

We will show that 1 − λ_2 is a continuous relaxation of φ(G), where

    φ(G) = min_{S ⊆ V} n|E(S, V−S)| / (d|S||V−S|).

Let x be the characteristic vector of the set S: x_i = 1 if i ∈ S, and x_i = 0 if i ∉ S. Then

    |E(S, V−S)| = (1/2) ∑_{i,j} A_ij |x_i − x_j| = (d/2) ∑_{i,j} M_ij (x_i − x_j)^2,

    |S||V−S| = (1/2) ∑_{i,j} |x_i − x_j| = (1/2) ∑_{i,j} (x_i − x_j)^2.

Substituting,

    φ(G) = min_{x ∈ {0,1}^V − {0, 1}} ∑_{i,j} M_ij (x_i − x_j)^2 / ((1/n) ∑_{i,j} (x_i − x_j)^2).
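As a sanity check, the two identities can be verified numerically for a concrete cut; a small sketch, where C_4 and the cut S = {0, 1} are assumed examples:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])   # C_4, d = 2
d = 2
M = A / d

x = np.array([1, 1, 0, 0])               # characteristic vector of S = {0, 1}
diff2 = (x[:, None] - x[None, :]) ** 2   # (x_i - x_j)^2 for all pairs

print((d / 2) * (M * diff2).sum())  # |E(S, V-S)| = 2 (edges 0-3 and 1-2 cross)
print(diff2.sum() / 2)              # |S||V-S| = 2 * 2 = 4
```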
Recall the Rayleigh quotient: λ_2 = max_{x ∈ R^V − {0}, x ⊥ 1} x^T M x / x^T x, hence

    1 − λ_2 = min_{x ∈ R^V − {0}, x ⊥ 1} 2(x^T x − x^T M x) / (2 x^T x).

Claim: 2 x^T x = (1/n) ∑_{i,j} (x_i − x_j)^2.
Proof:

    ∑_{i,j} (x_i − x_j)^2 = ∑_{i,j} (x_i^2 + x_j^2) − 2 ∑_{i,j} x_i x_j
                          = 2n ∑_i x_i^2 − 2 (∑_i x_i)^2 = 2n ∑_i x_i^2 = 2n x^T x.

We used x ⊥ 1 ⇒ ∑_i x_i = 0.
Claim: 2(x^T x − x^T M x) = ∑_{i,j} M_ij (x_i − x_j)^2.
Proof:

    ∑_{i,j} M_ij (x_i − x_j)^2 = ∑_{i,j} M_ij (x_i^2 + x_j^2) − 2 ∑_{i,j} M_ij x_i x_j
                               = ∑_i (1/d) ∑_{j∼i} (x_i^2 + x_j^2) − 2 x^T M x
                               = 2 ∑_{(i,j)∈E} (1/d)(x_i^2 + x_j^2) − 2 x^T M x
                               = 2 ∑_i x_i^2 − 2 x^T M x = 2 x^T x − 2 x^T M x.
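Both claims are identities that can be checked for a random x ⊥ 1; a minimal numerical sketch, with C_4 again as the assumed example graph:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])   # C_4, d = 2
M = A / 2
n = 4

x = rng.standard_normal(n)
x -= x.mean()                            # enforce x ⊥ 1 (needed for Claim 1)
diff2 = (x[:, None] - x[None, :]) ** 2   # (x_i - x_j)^2 for all pairs

print(np.isclose(2 * x @ x, diff2.sum() / n))                   # Claim 1
print(np.isclose(2 * (x @ x - x @ M @ x), (M * diff2).sum()))   # Claim 2
```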
Combining the two claims, we get

    1 − λ_2 = min_{x ∈ R^V − {0}, x ⊥ 1} ∑_{i,j} M_ij (x_i − x_j)^2 / ((1/n) ∑_{i,j} (x_i − x_j)^2)
            = min_{x ∈ R^V − Span{1}} ∑_{i,j} M_ij (x_i − x_j)^2 / ((1/n) ∑_{i,j} (x_i − x_j)^2),

where the second equality holds because the ratio depends only on differences x_i − x_j, so it is invariant under shifts by multiples of 1.

Recall

    φ(G) = min_{x ∈ {0,1}^V − {0, 1}} ∑_{i,j} M_ij (x_i − x_j)^2 / ((1/n) ∑_{i,j} (x_i − x_j)^2).

This is the same ratio minimized over a smaller set, so 1 − λ_2 is a continuous relaxation of φ(G); thus

    1 − λ_2 ≤ φ(G) ≤ 2h(G),

using n/|V−S| ≤ 2 when |S| ≤ |V|/2.

Hooray!! We get the easy part of Cheeger: (1 − λ_2)/2 ≤ h(G).
Cheeger Hard Part.

Now let's get to the hard part of Cheeger: h(G) ≤ √(2(1 − λ_2)).

Idea: We have 1 − λ_2 as a continuous relaxation of φ(G).

Take the 2nd eigenvector x = argmin_{x ∈ R^V − Span{1}} ∑_{i,j} M_ij (x_i − x_j)^2 / ((1/n) ∑_{i,j} (x_i − x_j)^2).
Consider x as an embedding of the vertices into the real line.
Round x to get an x̄ ∈ {0,1}^V. Rounding: take a threshold t and set x̄_i = 1 if x_i ≥ t, and x̄_i = 0 if x_i < t.

What would be a good t? We don't know. Try all possible thresholds (n − 1 possibilities), and hope there is a t leading to a good cut!
Sweeping Cut Algorithm.

Input: G = (V, E), x ∈ R^V, x ⊥ 1.
Sort the vertices in non-decreasing order of their values in x; WLOG V = {1, ..., n} with x_1 ≤ x_2 ≤ ... ≤ x_n.
Let S_i = {1, ..., i} for i = 1, ..., n − 1.
Return S = argmin_{S_i} h(S_i).

Main Lemma: Let G = (V, E) be d-regular, and let x ∈ R^V, x ⊥ 1, with

    δ = ∑_{i,j} M_ij (x_i − x_j)^2 / ((1/n) ∑_{i,j} (x_i − x_j)^2).

If S is the output of the sweeping cut algorithm, then h(S) ≤ √(2δ).

Note: Applying the Main Lemma with the 2nd eigenvector v_2, we have δ = 1 − λ_2, and h(G) ≤ h(S) ≤ √(2(1 − λ_2)). Done!
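Here is one possible implementation of the sweep, run on the 2nd eigenvector as in the Note. A sketch assuming a connected d-regular adjacency matrix; numpy's eigh returns eigenvalues in ascending order, so the second-to-last eigenvector belongs to λ_2:

```python
import numpy as np

def sweep_cut(A):
    # Sweeping cut driven by the 2nd eigenvector of M = A/d.
    n = len(A)
    d = int(A.sum(axis=1)[0])
    _, vecs = np.linalg.eigh(A / d)
    x = vecs[:, -2]                    # eigenvector for λ_2 (x ⊥ 1)
    order = np.argsort(x)              # vertices sorted by x-value
    best_h, best_S = np.inf, None
    for i in range(1, n):              # S_i = first i vertices in the order
        mask = np.zeros(n, dtype=bool)
        mask[order[:i]] = True
        cut = A[mask][:, ~mask].sum()  # |E(S_i, V - S_i)|
        h = cut / (d * min(i, n - i))
        if h < best_h:
            best_h, best_S = h, order[:i].tolist()
    return best_S, best_h

# On the 6-cycle the sweep returns an arc with h(S) = 1/3 ≤ sqrt(2(1 - 1/2)) = 1.
C6 = sum(np.roll(np.eye(6, dtype=int), s, axis=1) for s in (1, -1))
print(sweep_cut(C6))
```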
Proof of Main Lemma.

WLOG V = {1, ..., n} with x_1 ≤ x_2 ≤ ... ≤ x_n. We want to show

    ∃ i s.t. h(S_i) = (1/d)|E(S_i, V−S_i)| / min(|S_i|, |V−S_i|) ≤ √(2δ).

Probabilistic argument: construct a distribution D over {S_1, ..., S_{n−1}} such that

    E_{S∼D}[(1/d)|E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2δ)

    → E_{S∼D}[(1/d)|E(S, V−S)| − √(2δ) · min(|S|, |V−S|)] ≤ 0

    → ∃ S: (1/d)|E(S, V−S)| − √(2δ) · min(|S|, |V−S|) ≤ 0.
The distribution D.

WLOG, shift and scale x so that x_{⌊n/2⌋} = 0 and x_1^2 + x_n^2 = 1.

Take t from the range [x_1, x_n] with density function f(t) = 2|t|.

Check: ∫_{x_1}^{x_n} f(t) dt = ∫_{x_1}^{0} −2t dt + ∫_{0}^{x_n} 2t dt = x_1^2 + x_n^2 = 1.

Set S = {i : x_i ≤ t}.

Take D to be the distribution over S_1, ..., S_{n−1} resulting from the above procedure.
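For intuition, t can be sampled by inverting the CDF of f; a small sketch, where the helper and the endpoint values are hypothetical, chosen so that x_1^2 + x_n^2 = 1:

```python
import numpy as np

def sample_threshold(x1, xn, rng):
    # Inverse-CDF sampling of f(t) = 2|t| on [x1, xn], assuming x1 <= 0 <= xn
    # and x1**2 + xn**2 == 1.  CDF: F(t) = x1^2 - t^2 (t <= 0), x1^2 + t^2 (t >= 0).
    u = rng.uniform()
    return -np.sqrt(x1**2 - u) if u <= x1**2 else np.sqrt(u - x1**2)

rng = np.random.default_rng(1)
x1, xn = -0.6, 0.8                        # x1^2 + xn^2 = 1
ts = np.array([sample_threshold(x1, xn, rng) for _ in range(100_000)])
print((ts <= 0).mean())                   # ≈ x1^2 = 0.36, matching the density
```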
Goal: E_{S∼D}[(1/d)|E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2δ).