Max-plus algebra in the numerical solution of Hamilton-Jacobi and Isaacs equations
Marianne Akian (INRIA Saclay - Île-de-France and CMAP, École Polytechnique)
BIRS Workshop 11w5086, Advancing numerical methods for viscosity solutions and applications


1. Max-plus algebra in the numerical solution of Hamilton-Jacobi and Isaacs equations. Marianne Akian (INRIA Saclay - Île-de-France and CMAP, École Polytechnique). BIRS Workshop 11w5086, Advancing numerical methods for viscosity solutions and applications, Feb. 14–18, 2011. Joint works with A. Lakhoua, S. Gaubert, A. Guterman, S. Detournay.

2. Dynamic programming equations of optimal control and zero-sum game problems. For instance, if the Hamiltonian $H$ is convex,
$$H(x, p, X) = \sup_{\alpha \in \mathcal{A}} \Big\{ p \cdot f(x,\alpha) + \tfrac{1}{2} \operatorname{tr}\big(\sigma(x,\alpha)\,\sigma^{T}(x,\alpha)\, X\big) + r(x,\alpha) \Big\},$$
then, under regularity conditions, $v$ is the viscosity solution of
$$-\frac{\partial v}{\partial t} + H\Big(x, \frac{\partial v}{\partial x}, \frac{\partial^2 v}{\partial x^2}\Big) = 0, \quad (x,t) \in X \times [0,T), \qquad v(x,T) = \phi(x), \quad x \in X,$$
if and only if $v$ is the value function of the finite-horizon stochastic control problem
$$v(x,t) = \sup_{a} \, \mathbb{E}\Big[ \int_t^T r(x(s), a(s))\, ds + \phi(x(T)) \;\Big|\; x(t) = x \Big],$$
$$dx(s) = f(x(s), a(s))\, ds + \sigma(x(s), a(s))\, dW(s), \quad x(s) \in X, \quad a \text{ a strategy}, \; a(s) \in \mathcal{A}.$$

3. Max-plus or tropical algebra
◮ It is the idempotent semiring $\mathbb{R}_{\max} := (\mathbb{R} \cup \{-\infty\}, \oplus, \otimes)$, where $a \oplus b = \max(a, b)$ and $a \otimes b = a + b$. The neutral elements are the zero $\mathbb{0} = -\infty$ and the unit $\mathbb{1} = 0$.
◮ It is the limit of the logarithmic deformation of the semiring $\mathbb{R}_+$:
$$\max(a, b) = \lim_{\varepsilon \to 0^+} \varepsilon \log\big(e^{a/\varepsilon} + e^{b/\varepsilon}\big), \qquad a + b = \varepsilon \log\big(e^{a/\varepsilon}\, e^{b/\varepsilon}\big),$$
and the usual order of $\mathbb{R}$ is a "natural" order on $\mathbb{R}_{\max}$, for which all elements are "positive" or "zero".
◮ The complete max-plus algebra $\overline{\mathbb{R}}_{\max}$ is obtained by completing $\mathbb{R}_{\max}$ with the element $+\infty$, with the convention $+\infty + (-\infty) = -\infty$.
◮ One can define on $\mathbb{R}_{\max}$ or $\overline{\mathbb{R}}_{\max}$ notions similar to those of usual algebra: matrices, scalar products, linear spaces, measures, integrals, cones, ...
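To make these operations concrete, here is a minimal Python sketch (not from the slides; the function names are my own) of max-plus scalar arithmetic and the max-plus matrix product, with -inf playing the role of the zero element and 0.0 that of the unit:

```python
import numpy as np

NEG_INF = -np.inf   # max-plus zero element; the max-plus unit is 0.0

def maxplus_add(a, b):
    """Max-plus addition: a (+) b = max(a, b)."""
    return np.maximum(a, b)

def maxplus_mul(a, b):
    """Max-plus multiplication: a (x) b = a + b, with -inf absorbing."""
    return a + b

def maxplus_matmul(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k ( A_ik + B_kj )."""
    p, m = A.shape
    m2, q = B.shape
    assert m == m2, "inner dimensions must agree"
    C = np.full((p, q), NEG_INF)
    for i in range(p):
        for j in range(q):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

# The max-plus identity matrix: 0 on the diagonal, -inf elsewhere.
I = np.full((2, 2), NEG_INF)
np.fill_diagonal(I, 0.0)
A = np.array([[1.0, 3.0], [NEG_INF, 2.0]])
print(maxplus_matmul(I, A))   # reproduces A
```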

4. Part I: Max-plus discretizations
First-order HJ equations, or dynamic programming equations of undiscounted deterministic optimal control problems, are max-plus linear; that is, the Lax-Oleinik semigroup $S^t : \phi \mapsto v(\cdot, T-t)$ is max-plus linear (Maslov, 87):
$$S^t\big(\sup(\lambda_1 + \phi_1,\, \lambda_2 + \phi_2)\big) = \sup\big(\lambda_1 + S^t(\phi_1),\, \lambda_2 + S^t(\phi_2)\big),$$
where $\lambda + \phi : x \mapsto \lambda + \phi(x)$.
Recall that the set of all functions $X \to \mathbb{R}_{\max}$ or $\overline{\mathbb{R}}_{\max}$ is a max-plus semimodule (that is, a linear space over $\mathbb{R}_{\max}$), where
◮ the addition is the pointwise maximum, which is equivalent to the supremum,
◮ the multiplication by a scalar is the pointwise addition $\lambda \cdot \phi = \lambda + \phi$.
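As a numerical illustration (my own, not from the slides): for the LQ Hamiltonian $H(x,p) = p^2/2$ of the next slide, the Lax-Oleinik semigroup has the Hopf-Lax form $(S^t\phi)(x) = \sup_y \big(\phi(y) - |x-y|^2/(2t)\big)$. The sketch below discretizes this sup on a grid and checks the max-plus linearity stated above:

```python
import numpy as np

def lax_oleinik(phi_vals, xs, t):
    """Discretized Lax-Oleinik semigroup for H(x, p) = p^2/2:
    (S^t phi)(x) = sup_y [ phi(y) - |x - y|^2 / (2 t) ], sup taken over the grid."""
    diff = xs[:, None] - xs[None, :]                       # all (x, y) pairs
    return np.max(phi_vals[None, :] - diff**2 / (2.0 * t), axis=1)

xs = np.linspace(-2.0, 2.0, 401)
phi1 = -np.abs(xs)                 # two terminal rewards
phi2 = -(xs - 1.0) ** 2
lam1, lam2 = 0.3, -0.5
t = 0.1

lhs = lax_oleinik(np.maximum(lam1 + phi1, lam2 + phi2), xs, t)
rhs = np.maximum(lam1 + lax_oleinik(phi1, xs, t),
                 lam2 + lax_oleinik(phi2, xs, t))
print(np.max(np.abs(lhs - rhs)))   # ~0 up to rounding: S^t is max-plus linear
```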

5. Max-plus analogue of linear PDEs (usual algebra vs. max-plus algebra)
◮ Parabolic PDE $-\frac{\partial v}{\partial t} + Lv = 0$  vs.  Evolution HJ PDE $-\frac{\partial v}{\partial t} + H\big(x, \frac{\partial v}{\partial x}\big) = 0$
◮ Heat equation $Lv := \frac{1}{2}\Delta v$  vs.  LQ problem $H(x,p) := \frac{p^2}{2}$
◮ Elliptic PDE $Lv = 0$  vs.  Stationary HJ $H\big(x, \frac{\partial v}{\partial x}\big) = 0$
◮ Eigenproblem $Lv = \lambda v$  vs.  Ergodic HJ $-\lambda + H\big(x, \frac{\partial v}{\partial x}\big) = 0$
with
$$Lv = \frac{1}{2} \sum_{i,j=1}^{n} a_{ij}(x) \frac{\partial^2 v}{\partial x_i \partial x_j} + \sum_{i=1}^{n} g_i(x) \frac{\partial v}{\partial x_i} - \delta(x)\, v + c(x).$$

6. Max-plus analogue of discretization schemes (usual algebra vs. max-plus algebra)
◮ Probabilistic point of view. Usual: discretize the Brownian process. Max-plus: discretize the max-plus Brownian process (A., Quadrat, Viot, 98).
◮ Variational point of view. Usual: weak solution of $-\frac{1}{2}\Delta v = f$ on $\Omega$, $v = 0$ on $\partial\Omega$: find $v \in V$ such that $\frac{1}{2}\int \nabla v \cdot \nabla \phi = \int f\, \phi$ for all $\phi \in V$, where $V = H^1_0(\Omega)$. Max-plus: generalized solution of HJ (Kolokoltzov and Maslov, 88): find $v^t \in \mathcal{W}$ such that $\langle v^{t+\delta}, \phi \rangle = \langle S^\delta v^t, \phi \rangle$ for all $\phi \in \mathcal{Z}$, where $\mathcal{W}$, $\mathcal{Z}$ are subsemimodules of $\overline{\mathbb{R}}_{\max}^X$.
◮ FEM. Usual: replace $V$ by finite dimensional subspaces. Max-plus FEM: replace $\mathcal{W}$ and $\mathcal{Z}$ by finitely generated subsemimodules (A., Gaubert, Lakhoua, SICON 08); replace $S^\delta$ by a finite dimensional max-plus linear operator (Fleming and McEneaney, 00).
◮ Finite difference point of view. Error analysis: via linearity and regularity (impossible in the max-plus setting) or via monotonicity (possible).

7. The max-plus finite element method
◮ The max-plus scalar product is given by $\langle u, v \rangle = \sup_{x \in X} \big( u(x) + v(x) \big)$.
◮ We fix the max-plus semimodules $\mathcal{W}$ and $\mathcal{Z}$ for solutions and test functions, together with some approximation of them by finitely generated subsemimodules $\mathcal{W}_h$ and $\mathcal{Z}_h$ (here and in the sequel, $h$ refers to discretized objects):
$$\mathcal{W}_h = \operatorname{span}\{w_1, \ldots, w_p\} \ \text{(finite elements)}, \qquad \mathcal{Z}_h = \operatorname{span}\{z_1, \ldots, z_q\} \ \text{(test functions)}.$$

8. Examples of semimodules and their discretizations
◮ $\mathcal{W}$ is the space of l.s.c. convex functions and $w_i : x \mapsto \theta_i \cdot x$, with slopes $\theta_i \in \mathbb{R}^n$.
◮ $\mathcal{W}$ is the space of l.s.c. $c$-semiconvex functions and $w_i : x \mapsto -\frac{c}{2}\|x - \hat{x}_i\|^2$, with centers $\hat{x}_i \in \mathbb{R}^n$.
◮ $\mathcal{W}$ is the space of 1-Lipschitz functions and $w_i : x \mapsto -\|x - \hat{x}_i\|$, with centers $\hat{x}_i \in \mathbb{R}^n$.
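A small Python sketch (mine; the 1-D grid and parameter values are illustrative) of these three families of finite elements, together with the max-plus scalar product of the previous slide evaluated by a brute-force sup on the grid:

```python
import numpy as np

def affine_element(theta):
    """Generator of the semimodule of l.s.c. convex functions: w(x) = theta * x."""
    return lambda x: theta * x

def semiconvex_element(xhat, c):
    """Generator for c-semiconvex functions: w(x) = -(c/2) * |x - xhat|^2."""
    return lambda x: -0.5 * c * (x - xhat) ** 2

def lipschitz_element(xhat):
    """Generator for 1-Lipschitz functions: w(x) = -|x - xhat|."""
    return lambda x: -np.abs(x - xhat)

def maxplus_dot(u, v, xs):
    """Max-plus scalar product <u, v> = sup_x ( u(x) + v(x) ), sup over the grid xs."""
    return np.max(u(xs) + v(xs))

xs = np.linspace(-1.0, 1.0, 201)
w = semiconvex_element(0.25, c=4.0)     # a quadratic finite element
z = lipschitz_element(-0.5)             # a Lipschitz test function
print(maxplus_dot(w, z, xs))
```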

9. The max-plus FEM (continued)
◮ The approximation $v^t_h$ of the generalized solution $v^t$ of the HJ equation must satisfy $v^t_h \in \mathcal{W}_h$ and
$$\langle v^{t+\delta}_h, \phi \rangle = \langle S^\delta v^t_h, \phi \rangle \qquad \forall \phi \in \mathcal{Z}_h, \quad t = \delta, 2\delta, \ldots$$
◮ This is equivalent to $v^t_h = \sup_{1 \le j \le p} (\lambda^t_j + w_j)$ and
$$\sup_{1 \le j \le p} \big( \lambda^{t+\delta}_j + \langle w_j, z_i \rangle \big) = \sup_{1 \le j \le p} \big( \lambda^t_j + \langle S^\delta w_j, z_i \rangle \big) \qquad \forall\, 1 \le i \le q.$$
◮ This equation is of the form $M \lambda^{t+\delta} = K \lambda^t$, where $M$ and $K$ (with entries $M_{ij} = \langle w_j, z_i \rangle$ and $K_{ij} = \langle S^\delta w_j, z_i \rangle$) are max-plus analogues of the mass and stiffness matrices, respectively.

10.
◮ To compute $\lambda^{t+\delta}$ as a function of $\lambda^t$, one needs to solve a max-plus linear system of the form $M\mu = \nu$, which may not have a solution.
◮ But it always has a greatest subsolution (greatest $\mu$ such that $M\mu \le \nu$), namely $M^\sharp \nu$, where $M^\sharp$ is the adjoint of $M$; it is a min-plus linear operator:
$$(M^\sharp \nu)_j = \min_{1 \le i \le q} \big( -M_{ij} + \nu_i \big).$$
◮ So we take for the max-plus FEM iteration: $\lambda^{t+\delta} = M^\sharp K \lambda^t$.
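A minimal Python sketch (illustrative names and data) of this residuation step: the min-plus adjoint $M^\sharp$ returns the greatest $\mu$ such that $M \otimes \mu \le \nu$.

```python
import numpy as np

def maxplus_matvec(M, lam):
    """(M (x) lam)_i = max_j ( M_ij + lam_j )."""
    return np.max(M + lam[None, :], axis=1)

def residuation(M, nu):
    """Greatest subsolution of M (x) mu <= nu:
    (M# nu)_j = min_i ( -M_ij + nu_i )   (min-plus adjoint of M)."""
    return np.min(-M + nu[:, None], axis=0)

M = np.array([[0.0, -1.0],
              [2.0,  0.5]])
nu = np.array([1.0, 0.0])
mu = residuation(M, nu)
print(mu)                                   # greatest subsolution
print(maxplus_matvec(M, mu) <= nu + 1e-12)  # the subsolution property holds
```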

11. Summary of the max-plus FEM:
◮ Approximate $v^t$ by $v^t_h := \sup_{1 \le j \le p} (\lambda^t_j + w_j)$, where the $\lambda^0_j$ are given, and
$$\lambda^{t+\delta}_j = \min_{1 \le i \le q} \Big( -\langle w_j, z_i \rangle + \max_{1 \le k \le p} \big( \langle S^\delta w_k, z_i \rangle + \lambda^t_k \big) \Big), \qquad t = \delta, 2\delta, \ldots$$
◮ This is the dynamic programming equation of a zero-sum two-player (deterministic) game!
◮ The states and actions are in $[p] := \{1, \ldots, p\}$ and $[q]$: Min plays in states $j \in [p]$, chooses a state $i \in [q]$, and receives $M_{ij}$ from Max; Max plays in states $i \in [q]$, chooses a state $k \in [p]$, and receives $K_{ik}$ from Min. Then $\lambda^{N\delta}_j$ is the value of the game after $N$ turns (Min + Max) starting in state $j$.
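Putting the pieces together, here is a self-contained 1-D sketch of the whole scheme under assumptions of my own choosing: LQ Hamiltonian $H(x,p) = p^2/2$ (so $S^\delta$ has the Hopf-Lax form), quadratic finite elements, Lipschitz test functions, and all sups evaluated by brute force on a fine grid. It is meant only to show the data flow $\lambda \mapsto M^\sharp K \lambda$, not to reproduce the authors' implementation.

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 801)       # fine grid used to evaluate the sups
xhat = np.linspace(-2.0, 2.0, 21)      # centers of the finite elements w_j
yhat = np.linspace(-2.0, 2.0, 21)      # centers of the test functions z_i
c, delta, T = 4.0, 0.05, 0.5

W = -0.5 * c * (xs[:, None] - xhat[None, :]) ** 2   # w_j(x), shape (n_x, p)
Z = -np.abs(xs[:, None] - yhat[None, :])            # z_i(x), shape (n_x, q)

def semigroup(phi_vals):
    """Hopf-Lax form of S^delta for H(x, p) = p^2/2, discretized on xs."""
    diff = xs[:, None] - xs[None, :]
    return np.max(phi_vals[None, :] - diff**2 / (2.0 * delta), axis=1)

# Max-plus mass and stiffness matrices: M_ij = <w_j, z_i>, K_ik = <S^delta w_k, z_i>.
SW = np.column_stack([semigroup(W[:, k]) for k in range(len(xhat))])
M = np.array([[np.max(W[:, j] + Z[:, i]) for j in range(len(xhat))]
              for i in range(len(yhat))])
K = np.array([[np.max(SW[:, k] + Z[:, i]) for k in range(len(xhat))]
              for i in range(len(yhat))])

def step(lam):
    """One max-plus FEM step: lambda <- M# ( K (x) lambda )."""
    Klam = np.max(K + lam[None, :], axis=1)       # (K (x) lam)_i
    return np.min(-M + Klam[:, None], axis=0)     # residuation by M

phi = -np.abs(xs - 0.5)                           # terminal reward
lam = np.min(phi[:, None] - W, axis=0)            # lambda^0: best subapproximation of phi
for _ in range(int(round(T / delta))):
    lam = step(lam)
v_h = np.max(W + lam[None, :], axis=1)            # v_h^T = sup_j ( lam_j + w_j ) on the grid
print(v_h[::100])
```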

12. A geometric rewriting of the max-plus FEM:
◮ The FEM iterations can also be written as
$$v^{t+\delta}_h = \Pi_h \circ S^\delta (v^t_h) \qquad \text{and} \qquad v^0_h = P_{\mathcal{W}_h} v^0,$$
where
$$\Pi_h = P_{\mathcal{W}_h} \circ P_{-\mathcal{Z}_h}, \qquad P_{\mathcal{W}_h} v = \max\{ w \in \mathcal{W}_h \mid w \le v \}, \qquad P_{-\mathcal{Z}_h} v = \min\{ w \in -\mathcal{Z}_h \mid w \ge v \}.$$
◮ These max-plus projectors were studied by Cohen, Gaubert, and Quadrat; they are nonexpansive in the sup-norm.
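A discretized sketch of the two projectors on a 1-D grid (my own formulation, with illustrative basis choices): $P_{\mathcal{W}_h}$ returns the best subapproximation by the finite elements, $P_{-\mathcal{Z}_h}$ the best superapproximation by negated test functions.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 201)
W = -2.0 * (xs[:, None] - np.linspace(-1.0, 1.0, 9)[None, :]) ** 2  # quadratic elements
Z = -np.abs(xs[:, None] - np.linspace(-1.0, 1.0, 9)[None, :])       # Lipschitz test functions

def project_W(v_vals, W):
    """P_{W_h} v = max{ w in W_h : w <= v } = sup_j ( lam_j + w_j ),
    with lam_j = inf_x ( v(x) - w_j(x) )."""
    lam = np.min(v_vals[:, None] - W, axis=0)
    return np.max(W + lam[None, :], axis=1)

def project_minusZ(v_vals, Z):
    """P_{-Z_h} v = min{ w in -Z_h : w >= v } = inf_i ( mu_i - z_i ),
    with mu_i = sup_x ( v(x) + z_i(x) ) = <v, z_i>."""
    mu = np.max(v_vals[:, None] + Z, axis=0)
    return np.min(-Z + mu[None, :], axis=1)

def Pi_h(v_vals, W, Z):
    """Pi_h = P_{W_h} o P_{-Z_h}."""
    return project_W(project_minusZ(v_vals, Z), W)

v = -np.abs(xs - 0.3) - 0.5 * xs**2
print(np.max(np.abs(Pi_h(v, W, Z) - v)))   # size of the projection error for this v
```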

13.–17. Example of the projector $\Pi_h = P_{\mathcal{W}_h} \circ P_{-\mathcal{Z}_h}$ (sequence of figure slides). We choose P2 finite elements, with centers $\hat{x}_1, \hat{x}_2, \ldots$, and P1 test functions, with centers $\hat{y}_1, \hat{y}_2, \ldots$; the figures show a function $v$, then $P_{-\mathcal{Z}_h}(v)$, and finally $\Pi_h(v)$, built step by step.

18.
◮ As in the usual FEM, the error can be estimated from projection errors:
$$\| v^T_h - v^T \|_\infty \le \Big( 1 + \frac{T}{\delta} \Big) \times \text{Projection error},$$
$$\text{Projection error} = \sup_{t = 0, \delta, \ldots, T} \| P_{\mathcal{W}_h} \circ P_{-\mathcal{Z}_h}\, v^t - v^t \|_\infty \le \sup_{t = 0, \delta, \ldots, T} \big( \| P_{-\mathcal{Z}_h}\, v^t - v^t \|_\infty + \| P_{\mathcal{W}_h}\, v^t - v^t \|_\infty \big).$$
◮ By convexity techniques, we obtain Projection error $\le C (\Delta x)^k / \delta$, with $k = 1$ or $2$ depending on the "degree" of the finite elements and on the regularity of the solution $v^t$, and $\Delta x$ equal to the "diameter" of the space discretization (Voronoi cells or Delaunay triangulation).
◮ The max-plus approximation theory seems limited to $k = 2$?

19. However, this was an ideal FEM method. One needs to compute:
$$M_{ij} = \langle w_j, z_i \rangle = \sup_{x \in X} \big( w_j(x) + z_i(x) \big), \qquad K_{ik} = \langle z_i, S^\delta w_k \rangle = \sup_{x \in X,\; u(\cdot)} \Big( z_i(x) + \int_0^\delta \ell(x(s), u(s))\, ds + w_k(x(\delta)) \Big), \quad x(0) = x.$$
◮ For good choices of $w_j$ and $z_i$, $M_{ij}$ can be computed analytically.
◮ Computing $K_{ik}$ is a usual optimal control problem, but the horizon $\delta$ may be small, and the initial and final rewards $z_i$ and $w_k$ may be chosen to be nice, so that $K_{ik}$ may be well approximated (see the sketch after this slide).
◮ Then $\| v^T_h - v^T \|_\infty \le \big( 1 + \frac{T}{\delta} \big) \big( \text{Projection error} + \text{Approximation error} \big)$.
◮ For instance, using $r$-order one-step approximations of $S^\delta w_k(x)$, Approximation error $= O(\delta^{r+1})$.
◮ So the total max-plus FEM error is of the order of $(\Delta x)^k / \delta + \delta^r$, with $r \ge 1$ and $k = 1$ or $2$.
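For instance, here is a hypothetical first-order ($r = 1$) one-step approximation of $K_{ik}$, again for the illustrative LQ problem with dynamics $\dot{x} = u$ and running reward $\ell(x, u) = -u^2/2$, so that $(S^\delta w_k)(x) \approx \sup_u \big( -\delta u^2/2 + w_k(x + \delta u) \big)$; the sups are brute-forced on grids:

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)     # state grid for the outer sup over x
us = np.linspace(-5.0, 5.0, 201)     # control grid for the inner sup over u
xhat = np.linspace(-2.0, 2.0, 11)    # centers of the finite elements w_k
yhat = np.linspace(-2.0, 2.0, 11)    # centers of the test functions z_i
delta, c = 0.05, 4.0

def w(k):
    """Quadratic finite element centered at xhat_k."""
    return lambda x: -0.5 * c * (x - xhat[k]) ** 2

def z(i):
    """Lipschitz test function centered at yhat_i."""
    return lambda x: -np.abs(x - yhat[i])

def S_delta_one_step(wk, x):
    """One-step (Euler) approximation of (S^delta w_k)(x) for xdot = u, l = -u^2/2."""
    return np.max(-0.5 * delta * us**2 + wk(x + delta * us), axis=-1)

def K_entry(i, k):
    """K_ik ~ sup_x [ z_i(x) + (S^delta w_k)(x) ]."""
    return np.max(z(i)(xs) + S_delta_one_step(w(k), xs[:, None]))

print(K_entry(5, 5))
```

Replacing the Euler step by a higher-order one-step scheme raises $r$, and hence improves the $\delta^r$ term in the total error above.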
