Minimax strategy for prediction with expert advice under stochastic assumptions



  1. Minimax strategy for prediction with expert advice under stochastic assumptions.
     Wojciech Kotłowski, Poznań University of Technology, Poland.
     Learning Faster from Easy Data II, NIPS 2015 Workshop.

  2. Prediction with expert advice. In trials t = 1, 2, ..., T:
     - The algorithm predicts with w_t ∈ Δ_K.
     - A loss vector ℓ_t ∈ [0, 1]^K is revealed.
     - The algorithm incurs loss w_t · ℓ_t.
     Regret of a strategy ω = (w_1, ..., w_T):
       R(ω) = \sum_{t \le T} w_t · ℓ_t − \min_k L_{T,k},   where L_{T,k} = \sum_{t \le T} ℓ_{t,k}.
     Goal: find ω minimizing the worst-case regret over all sequences.
     Too pessimistic: the minimax ω has the same regret on all sequences! Drop the minimax principle, or drop the worst-case assumptions?
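     For concreteness, a minimal sketch of this protocol and the regret computation. The exponential-weights (Hedge) strategy, the learning rate eta, and the random loss generator below are illustrative assumptions; this is a stand-in for a strategy ω, not the minimax strategy of the talk.

     ```python
     import numpy as np

     def run_protocol(loss_matrix, eta=0.5):
         """Play T rounds of prediction with expert advice and return the regret.

         loss_matrix: array of shape (T, K) with entries in [0, 1].
         Uses exponential weights as an illustrative strategy only.
         """
         T, K = loss_matrix.shape
         cumulative_loss = np.zeros(K)   # L_{t,k} for each expert k
         algo_loss = 0.0
         for t in range(T):
             # w_t may depend only on losses from rounds 1..t-1
             w = np.exp(-eta * cumulative_loss)
             w /= w.sum()                # w_t lies in the simplex Delta_K
             loss_t = loss_matrix[t]     # loss vector ell_t in [0,1]^K is revealed
             algo_loss += w @ loss_t     # algorithm incurs w_t . ell_t
             cumulative_loss += loss_t
         # Regret: algorithm's cumulative loss minus the best expert in hindsight
         return algo_loss - cumulative_loss.min()

     # Example: 100 rounds, 5 experts, losses drawn uniformly from [0, 1]
     rng = np.random.default_rng(0)
     print(run_protocol(rng.uniform(size=(100, 5))))
     ```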

  3. Stochastic setting: optimal strategy for "easy" data.
     Assumption: each expert k = 1, ..., K generates losses i.i.d. from a fixed distribution P_k.
     Goal: find the minimax strategy ω w.r.t. all choices of distributions P = (P_1, ..., P_K).
     Minimax in terms of what?
     - Expected regret: Reg(ω, P) = E[ \sum_t w_t · ℓ_t − \min_k L_{T,k} ]
     - Redundancy: Red(ω, P) = E[ \sum_t w_t · ℓ_t ] − \min_k E[ L_{T,k} ]
     - Excess risk: Risk(ω, P) = E[ w_T · ℓ_T ] − \min_k E[ ℓ_{T,k} ]
     We give a strategy ω* which is minimax with respect to all three measures simultaneously:
       \sup_P R(ω*, P) = \inf_ω \sup_P R(ω, P),   where R is either Reg, Red, or Risk.
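     To make the three measures concrete, here is a small Monte Carlo sketch that estimates each of them for a given strategy when every P_k is a Bernoulli loss distribution. The follow-the-leader strategy, the Bernoulli means, and the sample sizes are illustrative assumptions, not the paper's ω*.

     ```python
     import numpy as np

     def estimate_measures(strategy, means, T=50, n_runs=2000, seed=0):
         """Monte Carlo estimates of expected regret, redundancy and excess risk.

         strategy(past_losses) -> weight vector in the simplex, given the
         (t-1) x K array of losses observed so far.
         means: Bernoulli mean loss of each expert (defines P_1, ..., P_K).
         """
         rng = np.random.default_rng(seed)
         K = len(means)
         algo_cum, best_cum, last_round = [], [], []
         for _ in range(n_runs):
             losses = (rng.random((T, K)) < means).astype(float)  # i.i.d. draws from P_k
             algo, L = 0.0, np.zeros(K)
             for t in range(T):
                 w = strategy(losses[:t])
                 algo += w @ losses[t]
                 L += losses[t]
                 if t == T - 1:
                     last_round.append(w @ losses[t])   # w_T . ell_T
             algo_cum.append(algo)
             best_cum.append(L.min())
         algo_cum, best_cum = np.array(algo_cum), np.array(best_cum)
         exp_regret = np.mean(algo_cum - best_cum)          # E[ sum_t w_t.ell_t - min_k L_{T,k} ]
         redundancy = algo_cum.mean() - T * np.min(means)   # E[ sum_t w_t.ell_t ] - min_k E[L_{T,k}]
         excess_risk = np.mean(last_round) - np.min(means)  # E[ w_T.ell_T ] - min_k E[ell_{T,k}]
         return exp_regret, redundancy, excess_risk

     # Example with a follow-the-leader strategy (illustrative only)
     def ftl(past):
         if len(past) == 0:
             return np.ones(2) / 2           # assumed two experts in this example
         w = np.zeros(past.shape[1])
         w[np.argmin(past.sum(axis=0))] = 1.0
         return w

     print(estimate_measures(ftl, means=np.array([0.3, 0.5])))
     ```

     Note how the three quantities differ only in where the expectation and the minimum over experts are taken; in the worst-case ordering, expected regret ≥ redundancy, while excess risk looks only at the final round.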
