Lecture 15: Batch RL
Emma Brunskill, CS234 Reinforcement Learning, Winter 2019
Slides drawn from Philip Thomas, with modifications
Class Structure
• Last time: Meta Reinforcement Learning
• This time: Batch RL
• Next time: Quiz
A Scientific Experiment
What Should We Do For a New Student?
Involves Counterfactual Reasoning
Involves Generalization
Batch Reinforcement Learning (Batch RL)
The Problem
• If you apply an existing method, do you have confidence that it will work?
A property of many real applications
• Deploying "bad" policies can be costly or dangerous
What property should a safe batch reinforcement learning algorithm have?
• Given past experience from the current policy (or policies), produce a new policy
• "Guarantee that, with probability at least 1 − δ, the algorithm will not change your policy to one that is worse than the current policy."
• You get to choose δ
• The guarantee is not contingent on the tuning of any hyperparameters
Table of Contents
1. Notation
2. Create a safe batch reinforcement learning algorithm
   • Off-policy policy evaluation (OPE)
   • High-confidence off-policy policy evaluation (HCOPE)
   • Safe policy improvement (SPI)
Notation
• Policy π: π(a) = P(a_t = a | s_t = s)
• Trajectory: T = (s_1, a_1, r_1, s_2, a_2, r_2, ..., s_L, a_L, r_L)
• Historical data: D = {T_1, T_2, ..., T_n}
• Historical data is generated by a behavior policy, π_b
• Objective:
$$V^{\pi} = \mathbb{E}\Big[\sum_{t=1}^{L} \gamma^{t} R_t \,\Big|\, \pi\Big]$$
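The code sketches later in this lecture assume a simple (hypothetical) encoding of this notation: a trajectory is a list of (state, action, reward) tuples, and D is a list of trajectories.

```python
# One trajectory T = (s_1, a_1, r_1, ..., s_L, a_L, r_L), encoded as a list
# of (state, action, reward) tuples; the states and actions are placeholders.
T_1 = [("s1", "a0", 0.0), ("s2", "a1", 0.0), ("s3", "a1", 10.0)]

# Historical data D = {T_1, ..., T_n}, collected under the behavior policy pi_b.
D = [T_1]
```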
Safe batch reinforcement learning algorithm
• Reinforcement learning algorithm, A
• Historical data, D, which is a random variable
• Policy produced by the algorithm, A(D), which is also a random variable
• A safe batch reinforcement learning algorithm A satisfies:
$$\Pr\big(V^{A(D)} \ge V^{\pi_b}\big) \ge 1 - \delta$$
or, more generally,
$$\Pr\big(V^{A(D)} \ge V_{\min}\big) \ge 1 - \delta$$
Create a safe batch reinforcement learning algorithm
• Off-policy policy evaluation (OPE): for any evaluation policy π_e, convert the historical data D into n independent and unbiased estimates of V^{π_e}
• High-confidence off-policy policy evaluation (HCOPE): use a concentration inequality to convert the n independent and unbiased estimates of V^{π_e} into a 1 − δ confidence lower bound on V^{π_e}
• Safe policy improvement (SPI): use the HCOPE method to create a safe batch reinforcement learning algorithm
Off-policy policy evaluation (OPE)
Importance Sampling
$$\text{IS}(D) = \frac{1}{n}\sum_{i=1}^{n}\left(\prod_{t=1}^{L}\frac{\pi_e(a_t^i \mid s_t^i)}{\pi_b(a_t^i \mid s_t^i)}\right)\left(\sum_{t=1}^{L}\gamma^{t} R_t^i\right)$$
$$\mathbb{E}[\text{IS}(D)] = V^{\pi_e}$$
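To make the estimator concrete, here is a minimal Python sketch (not from the slides; the trajectory encoding above and the representation of policies as probability functions are assumptions). Each episode contributes one unbiased estimate: its discounted return, reweighted by the product of per-step probability ratios.

```python
import numpy as np

def per_episode_is_estimates(D, pi_e, pi_b, gamma):
    """One importance-sampled return per episode; their mean is IS(D).

    D:          list of trajectories, each a list of (s, a, r) tuples
    pi_e, pi_b: functions (a, s) -> action probability under the evaluation
                and behavior policies (both assumed known)
    gamma:      discount factor
    """
    estimates = []
    for traj in D:
        weight = 1.0   # product over steps of pi_e(a|s) / pi_b(a|s)
        ret = 0.0      # discounted return; the slides index time from t = 1
        for t, (s, a, r) in enumerate(traj, start=1):
            weight *= pi_e(a, s) / pi_b(a, s)
            ret += gamma ** t * r
        estimates.append(weight * ret)  # unbiased estimate of V^{pi_e}
    return np.array(estimates)

# IS(D) itself is just the sample mean:
# v_hat = per_episode_is_estimates(D, pi_e, pi_b, gamma).mean()
```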
High-confidence off-policy policy evaluation (HCOPE)
Hoeffding’s inequality
• Let X_1, ..., X_n be n independent, identically distributed random variables with X_i ∈ [0, b]
• Then with probability at least 1 − δ:
$$\mathbb{E}[X_i] \ge \frac{1}{n}\sum_{i=1}^{n} X_i - b\sqrt{\frac{\ln(1/\delta)}{2n}}$$
• In our case, X_i = w_i ∑_{t=1}^{L} γ^t R_t^i, where w_i = ∏_{t=1}^{L} π_e(a_t^i | s_t^i) / π_b(a_t^i | s_t^i) is the importance weight of trajectory i.
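As a sketch, assuming the per-episode importance-sampled returns lie in a known range [0, b], the 1 − δ lower bound is a direct transcription of the inequality:

```python
import numpy as np

def hoeffding_lower_bound(x, b, delta):
    """1 - delta confidence lower bound on E[X_i] for i.i.d. X_i in [0, b].

    x:     array of per-episode importance-sampled returns (the X_i above)
    b:     known upper bound on each X_i
    delta: allowed failure probability
    """
    n = len(x)
    return x.mean() - b * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
```

Note that b must bound the weighted return w_i ∑_t γ^t R_t^i, and importance weights can make that bound very large, so in practice this lower bound can be quite loose; that looseness is one motivation for tighter concentration inequalities.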
Safe policy improvement (SPI)
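The SPI slides in the original deck are figures; the following is a hedged sketch of how the three pieces compose, reusing the helpers above. The candidate-policy search and the choice of V_min are illustrative assumptions, not the specific algorithm from the lecture: return a candidate only if its HCOPE lower bound clears the performance floor, otherwise declare "no solution found" and keep running π_b.

```python
def safe_policy_improvement(D, pi_b, candidates, v_min, delta, b, gamma):
    """Return a policy whose 1 - delta lower bound exceeds v_min, else None.

    D:          historical trajectories collected under pi_b
    candidates: iterable of candidate evaluation policies (assumed given)
    v_min:      performance floor, e.g. an estimate of V^{pi_b}
    """
    best, best_bound = None, v_min
    for pi_e in candidates:
        x = per_episode_is_estimates(D, pi_e, pi_b, gamma)  # OPE step
        lower = hoeffding_lower_bound(x, b, delta)          # HCOPE step
        if lower > best_bound:                              # SPI step
            best, best_bound = pi_e, lower
    return best  # None means "no solution found": keep running pi_b
```

A more careful version splits D into a candidate-selection set and a separate safety-test set, so that searching over many candidates does not invalidate the confidence bound.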
Monte Carlo (MC) Off-Policy Evaluation
• Aim: estimate the value of a policy π_1, V^{π_1}(s), given episodes generated under a behavior policy π_2
• s_1, a_1, r_1, s_2, a_2, r_2, ..., where the actions are sampled from π_2
• G_t = r_t + γ r_{t+1} + γ² r_{t+2} + γ³ r_{t+3} + ··· in MDP M under policy π
• V^π(s) = E_π[G_t | s_t = s]
• We have data from a different policy, the behavior policy π_2
• If π_2 is stochastic, we can often use its data to estimate the value of an alternate policy (formal conditions to follow)
• Again, there is no requirement to have a model, nor that the state is Markov
Monte Carlo (MC) Off-Policy Evaluation: Distribution Mismatch
• The distribution of episodes, and of the resulting returns, differs between the two policies
[Figure: episode and return distributions under the behavior vs. evaluation policy]
Importance Sampling
• Goal: estimate the expected value of a function f(x) under some probability distribution p(x): E_{x∼p}[f(x)]
• We have data x_1, x_2, ..., x_n sampled from a different distribution q(x)
• Under a few assumptions, we can use these samples to obtain an unbiased estimate of E_{x∼p}[f(x)]:
$$\mathbb{E}_{x\sim p}[f(x)] = \sum_x p(x)f(x) = \sum_x q(x)\frac{p(x)}{q(x)}f(x) = \mathbb{E}_{x\sim q}\Big[\frac{p(x)}{q(x)}f(x)\Big] \approx \frac{1}{n}\sum_{i=1}^{n}\frac{p(x_i)}{q(x_i)}f(x_i)$$
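A tiny numeric check of this identity (the distributions and f here are illustrative choices, not from the slides): draw samples from q, reweight each by p(x)/q(x), and recover an expectation under p.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p = N(1, 1); proposal q = N(0, 2); f(x) = x^2, so E_{x~p}[f(x)] = 2.
def p_pdf(x):
    return np.exp(-(x - 1.0) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)

def q_pdf(x):
    return np.exp(-x ** 2 / 8.0) / np.sqrt(8.0 * np.pi)

x = rng.normal(0.0, 2.0, size=100_000)  # samples from q, not from p
w = p_pdf(x) / q_pdf(x)                 # importance weights
print((w * x ** 2).mean())              # approximately 2
```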
Importance Sampling (IS) for Policy Evaluation
• Let h_j be episode j (a history of states, actions, and rewards):
h_j = (s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, ..., s_{j,L_j}), where s_{j,L_j} is terminal