Efficient learning of smooth probability functions from Bernoulli tests with guarantees
Paul Rolland, paul.rolland@epfl.ch
Laboratory for Information and Inference Systems (LIONS), École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
July 2019. Joint work with Ali Kavis, Alexander Immer, Adish Singla, Volkan Cevher @ LIONS
Introduction
• Setup: f : X → [0, 1], X ⊂ R^d compact.
• Observations:
  ⊲ Static setting: y_i ∼ Bernoulli(f(x_i))
  ⊲ Dynamic setting: y_i ∼ Bernoulli(A_i f(x_i) + B_i), with 0 ≤ A_i + B_i ≤ 1
• Goal: approximate f over X from the observation set S = {(x_i, y_i)}_{i=1,…,n}
• Need a regularity assumption on f
Efficient learning of smooth probability functions from Bernoulli tests | Paul Rolland, paul.rolland@epfl.ch — Slide 2/8
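The two observation models can be illustrated with a short sketch that generates synthetic Bernoulli tests. The probability function `f` and the way `A_i`, `B_i` are drawn below are hypothetical choices for illustration, not taken from the slides; the only constraint respected is 0 ≤ A_i + B_i ≤ 1.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 1000

# Hypothetical smooth probability function on X = [0, 1]^2, with values in [0, 1].
def f(x):
    return 0.5 * (1 + np.sin(2 * np.pi * x[..., 0]) * np.cos(2 * np.pi * x[..., 1]))

x = rng.uniform(0.0, 1.0, size=(n, d))

# Static setting: y_i ~ Bernoulli(f(x_i)).
y_static = rng.binomial(1, f(x))

# Dynamic setting: y_i ~ Bernoulli(A_i f(x_i) + B_i), with 0 <= A_i + B_i <= 1.
A = rng.uniform(0.5, 1.0, size=n)
B = rng.uniform(0.0, 1.0, size=n) * (1.0 - A)   # guarantees A_i + B_i <= 1
y_dynamic = rng.binomial(1, A * f(x) + B)
```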
Logistic Gaussian Process
• Regularity assumption: f(x) = σ(h(x)), h ∼ GP(µ, κ), where σ(x) = 1/(1 + e^{−x}).
• Observations: y_i ∼ Bernoulli(σ(h(x_i)))
[Figure: Sample from GP prior]
• Issues:
  ⊲ No analytically tractable posterior
  ⊲ Requires costly Bayesian computations
[Figure: Sample from LGP prior]
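A sample from the LGP prior, like the one pictured on the slide, can be sketched by drawing h from a GP and squashing it through the logistic link. The RBF kernel and its hyperparameters below are assumptions for illustration; the slide only specifies h ∼ GP(µ, κ).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 200)

# Assumed RBF kernel with illustrative hyperparameters.
lengthscale, variance = 0.2, 1.0
K = variance * np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2 / lengthscale ** 2)

# Draw h ~ GP(0, K) on the grid (jitter added for numerical stability),
# then map through the logistic link to get a probability function sample.
h = rng.multivariate_normal(np.zeros_like(xs), K + 1e-8 * np.eye(len(xs)))
f_sample = sigmoid(h)
```

By construction `f_sample` lies in (0, 1) everywhere, which is exactly why the logistic link is used as a prior over probability functions.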
Smooth Beta Processes: Static setting
• Regularity assumption: f is L-Lipschitz continuous, i.e., |f(x) − f(x′)| ≤ L‖x − x′‖₂ for all x, x′ ∈ X
• Observations: y_i ∼ Bernoulli(f(x_i))
• Prior: p(y|x) = Beta(α(x), β(x))
• Update of f̃(x|X) after observing X = {(x₁, y₁), …, (x_n, y_n)}¹:
  p(y|X, x) = Beta( α(x) + Σ_{i=1}^{n} δ_{y_i=1} κ(x, x_i),  β(x) + Σ_{i=1}^{n} δ_{y_i=0} κ(x, x_i) )

Theorem (Informal – Convergence of static Beta process)
Using the kernel κ(x, x′) = δ_{‖x−x′‖₂ ≤ Δ_{n,L}}, where Δ_{n,L} = L^{−2/(d+2)} n^{−1/(d+2)},
  E_X [ sup_{x∈X} E[ (f̃(x|X) − f(x))² ] ] = O( L^{2d/(d+2)} n^{−2/(d+2)} ).

¹ "Continuous Correlated Beta Processes", Goetschalckx et al.
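With the indicator kernel from the theorem, the static update above reduces to counting successes and failures among nearby observations. A minimal sketch of this estimator follows; the uniform Beta(1, 1) prior, the bandwidth, and the toy data (a constant f ≡ 0.7) are assumptions for illustration.

```python
import numpy as np

def sbp_posterior(x_query, X, y, delta, alpha0=1.0, beta0=1.0):
    """Posterior Beta parameters at x_query for the static Smooth Beta Process,
    with the indicator kernel kappa(x, x') = 1{||x - x'||_2 <= delta}."""
    within = np.linalg.norm(X - x_query, axis=1) <= delta
    alpha = alpha0 + np.sum(within & (y == 1))   # kernel-weighted successes
    beta = beta0 + np.sum(within & (y == 0))     # kernel-weighted failures
    return alpha, beta

# Toy data: f constant at 0.7 on [0, 1]^2 (hypothetical).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = rng.binomial(1, 0.7, size=500)

alpha, beta = sbp_posterior(np.array([0.5, 0.5]), X, y, delta=0.3)
f_hat = alpha / (alpha + beta)   # posterior mean estimate of f(x)
```

The posterior at each query point stays a single Beta distribution, which is what makes the update cheap compared to the logistic GP.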
Smooth Beta Processes: Dynamic setting
• Regularity assumption: f is L-Lipschitz continuous, i.e., |f(x) − f(x′)| ≤ L‖x − x′‖₂ for all x, x′ ∈ X
• Observations: y_i ∼ Bernoulli(A_i f(x_i) + B_i), with 0 ≤ A_i + B_i ≤ 1.
• Prior: p(y|x) = Beta(α(x), β(x))
• Update of f̃(x|X) after observing X = {(x₁, y₁), …, (x_n, y_n)}:
  p(y|X, x) = Σ_{i=1}^{n} C_i^n Beta( α(x) + i, β(x) + n − i )
where the weights {C_i^n}_{i=1,…,n} depend on {A_i, B_i}_{i=1,…,n} and a kernel κ.

Theorem (Informal – Convergence of dynamic Beta process)
Using the kernel κ(x, x′) = δ_{‖x−x′‖₂ ≤ Δ_{n,L}}, where Δ_{n,L} = L^{−2/(d+2)} n^{−1/(d+2)}, and under the assumption A_i + B_i = 1,
  E_X [ sup_{x∈X} E[ (f̃(x|X) − f(x))² ] ] = O( L^{2d/(d+2)} n^{−2/(d+2)} ).
Numerical results in Dynamic setting
Benefits of SBP
• Fast computation of the posterior update
• Can include contextual features directly influencing success probabilities
• Simple to implement
For more details...
Welcome to our poster #233!!