Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
Ilias Diakonikolas (UW Madison), Daniel M. Kane (UC San Diego), Pasin Manurangsi (Google)
Agnostic Proper Learning of Halfspaces

[Figure: positively and negatively labeled points in the plane, with the optimal halfspace w* and the learner's output halfspace w]

Input:
- Labeled samples (x_1, y_1), (x_2, y_2), … ∈ 𝔹_d × {±1} drawn from a distribution 𝒟
- Positive real number ε

Output:
- A halfspace w with "small" classification error

OPT = min classification error among all halfspaces
    = min_w Pr_{(x, y) ~ 𝒟} [⟨w, x⟩ · y < 0]

An algorithm is an α-learner if it outputs w with classification error at most α · OPT + ε.

Bad news:
- [Arora et al. '97] Unless NP = RP, there is no poly-time α-learner for any constant α.
- [Guruswami–Raghavendra '06, Feldman et al. '06] Even weak learning is NP-hard.
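As an illustrative sketch (not part of the talk), the empirical versions of the quantities above can be written in a few lines. The finite candidate set passed to `empirical_opt` is hypothetical: minimizing 0-1 error over all halfspaces is exactly the problem the hardness results say cannot be approximated efficiently.

```python
import numpy as np

def classification_error(w, X, y):
    """Empirical 0-1 error of the halfspace w: fraction of samples
    (rows of X, labels y in {+1, -1}) with <w, x> * y < 0, matching
    the error event in the definition of OPT above."""
    return float(np.mean((X @ w) * y < 0))

def empirical_opt(X, y, candidates):
    """Min classification error over a (hypothetical) finite list of
    candidate halfspaces; over all halfspaces this minimum is the
    quantity OPT, which is NP-hard to approximate."""
    return min(classification_error(w, X, y) for w in candidates)
```

For instance, on three points that a unit vector along the first axis separates perfectly, `classification_error` returns 0 for that vector and 1 for its negation.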
Agnostic Proper Learning of Halfspaces with a Margin

[Figure: the same labeled points and optimal halfspace w*, now with a margin band of width δ on either side of the boundary]

Input:
- Labeled samples (x_1, y_1), (x_2, y_2), … ∈ 𝔹_d × {±1} drawn from a distribution 𝒟
- Positive real number ε

Output:
- A halfspace w with "small" classification error

OPT_δ = min δ-margin error among all halfspaces
      = min_w Pr_{(x, y) ~ 𝒟} [⟨w, x⟩ · y < δ]

An algorithm is an α-learner if it outputs w with classification error at most α · OPT_δ + ε.

Margin Assumption:
- "Robustness" of the optimal halfspace to ℓ_2 noise
- Variants used in the Perceptron and SVMs
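A short illustrative sketch of the δ-margin error (again not from the talk, assuming unit-norm w and the convention above that ⟨w, x⟩ · y < δ counts as a margin violation):

```python
import numpy as np

def margin_error(w, X, y, delta):
    """Empirical delta-margin error: fraction of samples with
    <w, x> * y < delta, i.e. misclassified points plus correctly
    classified points lying within distance delta of the boundary
    (for unit-norm w)."""
    return float(np.mean((X @ w) * y < delta))
```

Setting `delta = 0` recovers the plain classification error, and `margin_error` is non-decreasing in δ, so OPT ≤ OPT_δ: the margin benchmark is a weaker target, which is why comparing the learner's error against OPT_δ can sidestep the hardness results for the margin-free problem.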