  1. Learning to Control an Octopus Arm with Gaussian Process Temporal Difference Methods. Yaakov Engel, joint work with Peter Szabo and Dmitry Volkinshtein (formerly Technion). Gaussian Processes in Practice Workshop.

  2. Why use GPs in RL?
  • A Bayesian approach to value estimation
  • Forces us to make our assumptions explicit
  • Non-parametric: priors are placed and inference is performed directly in function space (kernels)
  • Domain knowledge is intuitively encoded into priors
  • Provides a full posterior, not just point estimates
  • Efficient, online implementation, suitable for large problems

  3. Markov Decision Processes
  $\mathcal{X}$: state space, $\mathcal{U}$: action space
  Transitions: $p : \mathcal{X} \times \mathcal{X} \times \mathcal{U} \to [0,1]$, with $x_{t+1} \sim p(\cdot \mid x_t, u_t)$
  Rewards: $q : \mathbb{R} \times \mathcal{X} \times \mathcal{U} \to [0,1]$, with $R(x_t, u_t) \sim q(\cdot \mid x_t, u_t)$
  Stationary policy: $\mu : \mathcal{U} \times \mathcal{X} \to [0,1]$, with $u_t \sim \mu(\cdot \mid x_t)$
  Discounted return: $D^\mu(x) = \sum_{i=0}^{\infty} \gamma^i R(x_i, u_i) \mid (x_0 = x)$
  Value function: $V^\mu(x) = \mathbb{E}^\mu\left[ D^\mu(x) \right]$
  Goal: find a policy $\mu^*$ maximizing $V^\mu(x)$ for all $x \in \mathcal{X}$
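  The discounted return is the quantity whose expectation the value function measures. As a minimal illustration (not from the talk), this is how one sampled trajectory's return would be computed; averaging such returns over many trajectories from the same start state estimates $V^\mu(x_0)$:

```python
import numpy as np

def discounted_return(rewards, gamma=0.95):
    """Return of one sampled trajectory: sum_i gamma^i * r_i."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.dot(discounts, rewards))

# Three steps of -1 followed by a +10 goal reward (illustrative numbers):
print(discounted_return([-1.0, -1.0, -1.0, 10.0]))
```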

  4. Bellman's Equation
  For a fixed policy $\mu$:
  $V^\mu(x) = \mathbb{E}_{x', u \mid x}\left[ R(x, u) + \gamma V^\mu(x') \right]$
  Optimal value and policy:
  $V^*(x) = \max_\mu V^\mu(x), \qquad \mu^* = \operatorname{argmax}_\mu V^\mu(x)$
  How to solve it?
  - Methods based on Value Iteration (e.g. Q-learning)
  - Methods based on Policy Iteration (e.g. SARSA, OPI, Actor-Critic)
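  When the transition model is known and the state space is small, Bellman's equation can be solved by iterating the backup to its fixed point; this is the policy-evaluation "subroutine" that the policy-iteration methods on the next slide require. A minimal tabular sketch, assuming a known transition matrix P and expected reward vector r (which the octopus setting does not provide, hence the need for sample-based methods like GPTD):

```python
import numpy as np

def policy_evaluation(P, r, gamma=0.95, tol=1e-8):
    """Fixed-point iteration of the Bellman backup V <- r + gamma * P @ V.

    P[i, j] -- probability of moving from state i to state j under policy mu
    r[i]    -- expected one-step reward in state i under mu
    """
    V = np.zeros(len(r))
    while True:
        V_new = r + gamma * P @ V
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```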

  5. Solution Method Taxonomy
  RL algorithms split into:
  - Purely policy-based methods (Policy Gradient)
  - Value-function-based methods:
    - Value Iteration type (Q-Learning)
    - Policy Iteration type (Actor-Critic, OPI, SARSA)
  PI methods need a "subroutine" for policy evaluation.

  6. Gaussian Process Temporal Difference Learning
  Model equations: $R(x_i) = V(x_i) - \gamma V(x_{i+1}) + N(x_i)$
  Or, in compact form: $R_t = H_{t+1} V_{t+1} + N_t$, where
  $H_t = \begin{pmatrix} 1 & -\gamma & 0 & \cdots & 0 \\ 0 & 1 & -\gamma & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & -\gamma \end{pmatrix}$
  Our (Bayesian) goal: find the posterior distribution of $V(\cdot)$, given a sequence of observed states and rewards.
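  The banded structure of $H_t$ is what encodes the temporal-difference relation: each row pairs a state with its successor. A small construction sketch (illustrative numpy, not the talk's code):

```python
import numpy as np

def make_H(t, gamma=0.95):
    """t x (t+1) GPTD model matrix: 1 on the diagonal, -gamma on the
    superdiagonal, so that (H @ V)[i] = V[i] - gamma * V[i+1]."""
    return np.eye(t, t + 1) - gamma * np.eye(t, t + 1, k=1)

print(make_H(3, gamma=0.9))
```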

  7. The Posterior
  General noise covariance: $\mathrm{Cov}[N_t] = \Sigma_t$
  Joint distribution:
  $\begin{pmatrix} R_{t-1} \\ V(x) \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} H_t K_t H_t^\top + \Sigma_t & H_t k_t(x) \\ k_t(x)^\top H_t^\top & k(x, x) \end{pmatrix} \right)$
  Invoke Bayes' rule:
  $\mathbb{E}[V(x) \mid R_{t-1}] = k_t(x)^\top \alpha_t$
  $\mathrm{Cov}[V(x), V(x') \mid R_{t-1}] = k(x, x') - k_t(x)^\top C_t k_t(x')$
  where $k_t(x) = (k(x_1, x), \ldots, k(x_t, x))^\top$
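  The slide leaves $\alpha_t$ and $C_t$ implicit; by standard Gaussian conditioning on the joint above they come out to $\alpha_t = H_t^\top (H_t K_t H_t^\top + \Sigma_t)^{-1} R_{t-1}$ and $C_t = H_t^\top (H_t K_t H_t^\top + \Sigma_t)^{-1} H_t$. A batch sketch of that computation (the online, sparsified recursions of the GPTD papers avoid this cubic-cost inversion):

```python
import numpy as np

def gptd_posterior(K, H, Sigma, r):
    """Batch GPTD posterior parameters via Gaussian conditioning.

    K     -- t x t kernel matrix of visited states
    H     -- (t-1) x t model matrix (1 / -gamma bands)
    Sigma -- (t-1) x (t-1) noise covariance
    r     -- (t-1,) observed rewards
    Returns (alpha, C): posterior mean at x is k_t(x) @ alpha, posterior
    covariance is k(x, x') - k_t(x) @ C @ k_t(x').
    """
    G = H @ K @ H.T + Sigma        # covariance of the observed rewards
    G_inv = np.linalg.inv(G)       # fine for a sketch; prefer Cholesky solves
    return H.T @ G_inv @ r, H.T @ G_inv @ H
```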

  8. (Figure-only slide; no text content.)

  9. The Octopus Arm
  • Can bend and twist at any point
  • Can do this in any direction
  • Can be elongated and shortened
  • Can change its cross section
  • Can grab using any part of the arm
  • Virtually infinitely many degrees of freedom (DOF)

  10. The Muscular Hydrostat Mechanism
  A constraint: muscle fibers can only contract (actively). In vertebrate limbs, two separate muscle groups (agonists and antagonists) are used to control each DOF of every joint by exerting opposite torques. But the octopus has no skeleton!
  Balloon example: squeeze an inflated balloon in one direction and it bulges in another, because its volume is conserved. Muscle tissue is likewise incompressible; therefore, if muscles are arranged such that different muscle groups are interleaved in perpendicular directions in the same region, contraction in one direction will result in extension in at least one of the other directions. This is the muscular hydrostat mechanism.

  11. Octopus Arm Anatomy 101 (figure-only slide)

  12. Our Arm Model
  (Figure: a planar arm of compartments C_1 ... C_N running from the arm base to the arm tip, with a dorsal side and a ventral side. Muscle pairs #1 through #N+1 bound the compartments; each compartment is framed by longitudinal muscles along the arm and transverse muscles across it.)

  13. The Muscle Model
  $f(a) = \left( k_0 + a (k_{\max} - k_0) \right) (\ell - \ell_0) + \beta \frac{d\ell}{dt}, \qquad a \in [0, 1]$
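  Read as a spring-damper: the activation $a$ linearly interpolates the spring constant between the passive stiffness $k_0$ and the fully-active stiffness $k_{\max}$, acting on the deviation of the muscle length $\ell$ from its rest length $\ell_0$, plus a damping term $\beta \, d\ell/dt$. A minimal sketch with illustrative constants (not the talk's parameter values):

```python
def muscle_force(a, length, dlength_dt, k0=10.0, kmax=40.0, l0=0.1, beta=1.0):
    """Activation-dependent spring-damper:
    f(a) = (k0 + a*(kmax - k0)) * (l - l0) + beta * dl/dt, with a in [0, 1]."""
    assert 0.0 <= a <= 1.0, "activation must lie in [0, 1]"
    stiffness = k0 + a * (kmax - k0)
    return stiffness * (length - l0) + beta * dlength_dt
```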

  14. Other Forces
  • Gravity
  • Buoyancy
  • Water drag
  • Internal pressures (maintain constant compartmental volume)
  Dimensionality: 10 compartments are bounded by 11 muscle pairs, i.e. 22 point masses; each mass carries $(x, y, \dot{x}, \dot{y})$, giving 88 state variables.
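  To make the bookkeeping concrete, a tiny sketch of the state layout (names are illustrative):

```python
import numpy as np

N_COMPARTMENTS = 10
N_MASSES = 2 * (N_COMPARTMENTS + 1)  # 11 dorsal + 11 ventral point masses = 22

# One row per point mass: (x, y, x_dot, y_dot).
state = np.zeros((N_MASSES, 4))
print(state.size)  # 88 state variables, as on the slide
```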

  15. The Control Problem
  Starting from a random position, bring {any part, tip} of the arm into contact with a goal region, optimally.
  Optimality criteria: time, energy, obstacle avoidance.
  Constraint: we only have access to sampled trajectories.
  Our approach: define the problem as an MDP and apply reinforcement learning algorithms.

  16. The Task
  (Figure: a simulation snapshot at t = 1.38, showing the arm and the goal region; both axes span roughly -0.1 to 0.15.)

  17. Actions
  Each action specifies a set of fixed activations, one for each muscle in the arm. (Figure: six such activation templates, Actions #1 through #6.) Base rotation adds duplicates of actions 1, 2, 4 and 5 with positive and negative torques applied to the base.
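  One plausible encoding of this discrete action set, entirely illustrative: the muscle count and the activation patterns below are placeholders, not the talk's:

```python
import numpy as np

N_MUSCLES = 32  # hypothetical count: 2 * 11 longitudinal + 10 transverse muscles

# Each discrete action is a fixed per-muscle activation vector in [0, 1]^N_MUSCLES.
# The six actual patterns from the talk are not reproduced here; these are stand-ins.
base_actions = {i: np.zeros(N_MUSCLES) for i in range(1, 7)}

def rotating_base_action(i, torque_sign):
    """Rotating-base variant of action i (i in {1, 2, 4, 5}): the same muscle
    activations plus a positive or negative torque applied to the base."""
    assert i in (1, 2, 4, 5) and torque_sign in (-1, +1)
    return {"activations": base_actions[i], "base_torque": torque_sign}
```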

  18. Rewards
  Deterministic rewards: +10 for a goal state, a large negative value for hitting an obstacle, -1 otherwise.
  Energy economy: a constant multiple of the energy expended by the muscles in each action interval was deducted from the reward.
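  A minimal sketch of this reward scheme; the obstacle penalty magnitude and the energy coefficient are illustrative placeholders, not the talk's values:

```python
def reward(reached_goal, hit_obstacle, energy_spent,
           obstacle_penalty=-50.0, energy_coeff=0.01):
    """Deterministic reward: +10 at the goal, a large negative value for
    hitting an obstacle, -1 otherwise, minus a constant multiple of the
    energy the muscles expended during the action interval."""
    if reached_goal:
        r = 10.0
    elif hit_obstacle:
        r = obstacle_penalty
    else:
        r = -1.0
    return r - energy_coeff * energy_spent
```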

  19. Fixed Base Task I

  20. Fixed Base Task II

  21. Rotating Base Task I

  22. Rotating Base Task II

  23. To Wrap Up
  • There's more to GPs than regression and classification
  • Online sparsification works
  Challenges:
  • How to use value uncertainty?
  • What's a disciplined way to select actions?
  • What's the best noise covariance?
  • More complicated tasks
