
An introduction to mean field games and their applications
Prof. Olivier Guéant (Université Paris 1 Panthéon-Sorbonne)
Mathematical Coffees, Huawei/FSMP joint seminar, September 2018


1. What about uniqueness?

Uniqueness. If $U$ is decreasing in the sense that
$$\forall m_1 \neq m_2, \quad \int \left(U(x, m_1) - U(x, m_2)\right) d(m_1 - m_2)(x) < 0,$$
then the equilibrium, if it exists, is unique.

This type of monotonicity condition is ubiquitous in the MFG literature.

2. MFG and planning

Variational characterization (planner's problem). If there exists a function $m \mapsto F(m)$ on $\mathcal{P}(E)$ such that $DF = U$, then any maximum of $F$ is a Nash-MFG equilibrium.

Remark 1: sometimes there exists a global problem whose solution corresponds to an MFG equilibrium.
Remark 2: uniqueness is related to the strict concavity of $F$, hence the monotonicity assumption on $U$.

Put your towel on the beach
• Objective function: $U(x, m) = -x^2 - \gamma m(x)$.
• Global problem: $F(m) = \int -x^2 m(x)\,dx - \frac{\gamma}{2} \int m(x)^2\,dx$.
• Unique equilibrium, of the form $m(x) = \frac{1}{\gamma}(\lambda - x^2)_+$.
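As a sanity check, the mass constraint $\int m = 1$ pins down $\lambda$. A minimal numerical sketch, assuming $\gamma = 0.5$ and a truncated grid standing in for the real line:

```python
import numpy as np
from scipy.optimize import brentq

gamma = 0.5
x = np.linspace(-3.0, 3.0, 6001)   # truncated stand-in for the real line
dx = x[1] - x[0]

def mass(lam):
    """Total mass of m(x) = (lam - x^2)_+ / gamma on the grid."""
    return (np.maximum(lam - x**2, 0.0) / gamma).sum() * dx

# Find lam such that m integrates to 1.
lam = brentq(lambda l: mass(l) - 1.0, 1e-6, 10.0)

# Closed form for comparison: (4 / (3 gamma)) * lam^(3/2) = 1.
lam_exact = (3.0 * gamma / 4.0) ** (2.0 / 3.0)
print(lam, lam_exact)              # the two values agree to grid accuracy
```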

3. MFG in continuous time (with continuous state space)

4. Differential games

Static games are interesting, but MFGs are really powerful in continuous time (differential games). The toolbox then includes:
• differential/stochastic calculus;
• ordinary and partial differential equations;
• numerical methods.

Very general results have also been obtained with probabilistic methods (see Carmona and Delarue).

5. Reminder of (stochastic) optimal control

Agent's dynamics:
$$dX_t = \alpha_t\,dt + \sigma\,dW_t, \qquad X_0 = x.$$

Objective function:
$$\sup_{(\alpha_s)_{s \geq 0}} \mathbb{E}\left[\int_0^T \left(f(X_s) - L(\alpha_s)\right) ds + g(X_T)\right].$$

Remarks:
• $f$ and $L$ can also include a time dependency (e.g. a discount rate).
• Stationary (infinite-horizon) and ergodic problems can also be considered.

6. Reminder of (stochastic) optimal control

Main tool: the value function, i.e. the best "score" an agent can expect when he is at $x$ at time $t$:
$$u(t, x) = \sup_{(\alpha_s)_{s \geq t}} \mathbb{E}\left[\int_t^T \left(f(X_s) - L(\alpha_s)\right) ds + g(X_T) \,\Big|\, X_t = x\right].$$

PDE: $u$ "solves" the Hamilton-Jacobi(-Bellman) equation
$$\partial_t u + \frac{\sigma^2}{2}\Delta u + H(\nabla u) = -f(x), \qquad u(T, x) = g(x),$$
where $H(p) = \sup_\alpha \alpha \cdot p - L(\alpha)$.

Optimal control: the optimal control is $\alpha^*(t, x) = \nabla H(\nabla u(t, x))$.
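For the quadratic cost $L(\alpha) = \frac{\alpha^2}{2}$ used repeatedly below, the Legendre transform gives $H(p) = \frac{p^2}{2}$ and $\nabla H(p) = p$. A small 1D sketch checking this numerically (the grid and test points are arbitrary choices):

```python
import numpy as np

def L(a):                                  # quadratic effort cost
    return 0.5 * a**2

a_grid = np.linspace(-10.0, 10.0, 100001)
for p in (-2.0, 0.5, 3.0):
    H_num = (a_grid * p - L(a_grid)).max() # H(p) = sup_a (a p - L(a))
    print(p, H_num, 0.5 * p**2)            # the last two columns agree
```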

7. From optimal control problems to mean field games

• A continuum of players.
• Each player $i$ has a position $X^i$ that evolves according to
$$dX^i_t = \alpha^i_t\,dt + \sigma\,dW^i_t, \qquad X^i_0 = x^i.$$
Remark: only independent idiosyncratic risks here (common noise has also been studied, but it is more complicated).
• Each player optimizes
$$\max_{(\alpha^i_s)_{s \geq 0}} \mathbb{E}\left[\int_0^T \left(f(X^i_s, m(s, \cdot)) - L(\alpha^i_s, m(s, \cdot))\right) ds + g(X^i_T, m(T, \cdot))\right].$$
• The Nash equilibrium $t \in [0, T] \mapsto m(t, \cdot)$ must be consistent with the decisions of the agents.

8. Examples

Repulsion
• $f(x, m) = -m(t, x) - \delta x^2$ and $g = 0$: willingness to be close to $0$ but far from the other players.
• Quadratic cost: $L(\alpha) = \frac{\alpha^2}{2}$.

Congestion
• Cost of the form $L(\alpha, m(t, x)) = \frac{\alpha^2}{2}\left(1 + m(t, x)\right)$.

9. Partial differential equations

• $u$: value function of the control problem (with $m$ given).
• $m$: distribution of the players.

MFG PDEs:
$$\partial_t u + \frac{\sigma^2}{2}\Delta u + H(\nabla u, m) = -f(x, m) \qquad (HJB)$$
$$\partial_t m + \nabla \cdot \left(m\,\nabla_p H(\nabla u, m)\right) = \frac{\sigma^2}{2}\Delta m \qquad (K)$$
where $H(p, m) = \sup_\alpha \alpha \cdot p - L(\alpha, m)$, with conditions $u(T, x) = g(x)$ and $m(0, x) = m_0(x)$.

The optimal control is $\alpha^*(t, x) = \nabla_p H(\nabla u(t, x), m(t, \cdot))$.
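One way to read this coupled system numerically is as a fixed point: solve (HJB) backward in time for $u$ given a guess of $m$, solve (K) forward for $m$ given $u$, and iterate with damping. A rough 1D sketch with explicit finite differences, assuming the quadratic Hamiltonian $H(p) = \frac{p^2}{2}$ and a repulsion-type reward recentred at $1/2$ for the unit interval; grid sizes, damping weight, and $f$ are illustrative choices, not part of the slides:

```python
import numpy as np

nx, nt, T, sigma, delta = 51, 2000, 0.5, 1.0, 1.0
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], T / nt          # dt < dx^2 / sigma^2: explicit scheme stable

def f(xs, m):
    # Repulsion-type reward (adapted from the example above, centre moved to 1/2):
    return -m - delta * (xs - 0.5) ** 2

def lap(v):
    # Laplacian with zero-flux (Neumann) ghost cells
    vp = np.pad(v, 1, mode="edge")
    return (vp[2:] - 2.0 * v + vp[:-2]) / dx**2

def grad(v):
    # Centred first derivative, same ghost cells
    vp = np.pad(v, 1, mode="edge")
    return (vp[2:] - vp[:-2]) / (2.0 * dx)

m = np.ones((nt + 1, nx))             # initial guess: uniform density at all times
for it in range(50):                  # damped Picard iterations
    # 1) HJB, backward in time, terminal condition g = 0:
    u = np.zeros((nt + 1, nx))
    for n in range(nt - 1, -1, -1):
        u[n] = u[n + 1] + dt * (0.5 * sigma**2 * lap(u[n + 1])
                                + 0.5 * grad(u[n + 1]) ** 2
                                + f(x, m[n + 1]))
    # 2) Kolmogorov/Fokker-Planck, forward in time, optimal drift u_x:
    m_new = np.empty_like(m)
    m_new[0] = 1.0                    # m_0: uniform initial density
    for n in range(nt):
        m_new[n + 1] = m_new[n] + dt * (0.5 * sigma**2 * lap(m_new[n])
                                        - grad(m_new[n] * grad(u[n])))
    err = np.abs(m_new - m).max()     # fixed-point residual
    m = 0.5 * m + 0.5 * m_new         # damping
print(err)                            # small once the iteration has settled
```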

10. Remarks and variants

Forward/backward. The system of PDEs is a forward/backward problem:
• The HJB equation is backward in time (terminal condition), because agents anticipate the future.
• The transport equation is forward in time, because it corresponds to the dynamics of the agents.

Other frameworks
• Stationary setting (infinite horizon).
• Ergodic setting.

Related problem (planning). Same equations, with initial and final conditions on $m$ and no terminal condition on $u$: the problem is then to find the right terminal payoff $g$ so that agents go from $m_0$ to $m_T$.

11. Some results

Existence. A wide variety of PDE results, depending on $f$, $L$, $g$ and $\sigma$.

Uniqueness. If the cost function $L$ does not depend on $m$ and if $f$ is decreasing in the sense that
$$\forall m_1 \neq m_2, \quad \int \left(f(x, m_1) - f(x, m_2)\right) d(m_1 - m_2)(x) < 0,$$
then the solution of the PDE system is unique.

Remarks:
• This is the same criterion as in the static case above.
• For more general cost functions $L$ (e.g. congestion), there is a more general criterion (see Lions, or the result on graphs below).
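For the local coupling $f(x, m) = -m(x)$ of the repulsion example, the criterion reduces to $-\int (m_1 - m_2)^2\,dx < 0$, which holds for any two distinct densities. A quick numerical illustration (the test densities are arbitrary Gaussian bumps, chosen only for the demo):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def density(c):
    # Arbitrary normalised test density centred at c
    m = np.exp(-50.0 * (x - c) ** 2)
    return m / (m.sum() * dx)

m1, m2 = density(0.3), density(0.6)
f1, f2 = -m1, -m2                          # local crowd aversion f(x, m) = -m(x)
print(((f1 - f2) * (m1 - m2)).sum() * dx)  # = -||m1 - m2||^2 < 0
```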

12. MFG with quadratic cost/Hamiltonian

MFG equations with quadratic cost function $L(\alpha) = \frac{\alpha^2}{2}$ on the domain $[0, T] \times \Omega$, where $\Omega = (0, 1)^d$:
$$\partial_t u + \frac{\sigma^2}{2}\Delta u + \frac{1}{2}|\nabla u|^2 = -f(x, m) \qquad (HJB)$$
$$\partial_t m + \nabla \cdot (m\,\nabla u) = \frac{\sigma^2}{2}\Delta m \qquad (K)$$

Examples of conditions:
• Boundary conditions: $\frac{\partial u}{\partial n} = \frac{\partial m}{\partial n} = 0$ on $(0, T) \times \partial\Omega$.
• Terminal condition: $u(T, x) = g(x)$.
• Initial condition: $m(0, x) = m_0(x) \geq 0$.

The optimal control is $\alpha^*(t, x) = \nabla u(t, x)$.

13. Change of variables

Theorem ($u = \sigma^2 \log(\phi)$, $m = \phi\psi$). Consider a smooth solution $(\phi, \psi)$ (with $\phi > 0$) of
$$\partial_t \phi + \frac{\sigma^2}{2}\Delta\phi = -\frac{1}{\sigma^2} f(x, \phi\psi)\,\phi \qquad (E_\phi)$$
$$\partial_t \psi - \frac{\sigma^2}{2}\Delta\psi = \frac{1}{\sigma^2} f(x, \phi\psi)\,\psi \qquad (E_\psi)$$
with:
• Boundary conditions: $\frac{\partial\phi}{\partial n} = \frac{\partial\psi}{\partial n} = 0$ on $(0, T) \times \partial\Omega$.
• Terminal condition: $\phi(T, \cdot) = \exp\left(\frac{u_T(\cdot)}{\sigma^2}\right)$.
• Initial condition: $\psi(0, \cdot) = \frac{m_0(\cdot)}{\phi(0, \cdot)}$.
Then $(u, m) = (\sigma^2 \log(\phi), \phi\psi)$ is a solution of (MFG).

Nice existence results exist for this system (see some of my papers).
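The cancellation behind the theorem can be checked symbolically: substituting $u = \sigma^2 \log \phi$ into the quadratic HJB equation, the $\frac{1}{2}|\nabla u|^2$ term exactly absorbs the $|\nabla\phi|^2$ part of $\Delta u$, leaving the linear equation $(E_\phi)$. A 1D sympy sketch of that computation:

```python
import sympy as sp

t, x, sigma = sp.symbols("t x sigma", positive=True)
phi = sp.Function("phi", positive=True)(t, x)

u = sigma**2 * sp.log(phi)
hjb_lhs = (sp.diff(u, t) + sigma**2 / 2 * sp.diff(u, x, 2)
           + sp.Rational(1, 2) * sp.diff(u, x) ** 2)
e_phi_lhs = (sigma**2 / phi) * (sp.diff(phi, t) + sigma**2 / 2 * sp.diff(phi, x, 2))

# The difference is identically zero, so
# HJB(u) = -f  is equivalent to  phi_t + (sigma^2/2) phi_xx = -(f/sigma^2) phi:
print(sp.simplify(hjb_lhs - e_phi_lhs))   # -> 0
```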

14. Numerics and examples

15. Numerical methods

• Variational formulation: when a global optimization problem exists, gradient descent/ascent can be used (see Lachapelle, Salomon, Turinici).
• Finite difference methods (Achdou and Capuzzo-Dolcetta).
• Specific methods in the quadratic cost case (see Guéant).

16. Examples with population dynamics

Toy problem in the quadratic case:
• $f(x, \xi) = -16(x - 1/2)^2 - 0.1 \max(0, \min(5, \xi))$, i.e. agents want to live near $x = \frac{1}{2}$ but do not want to live together.
• $T = 0.5$, $g = 0$, $\sigma = 1$.
• $m_0(x) = \frac{\mu(x)}{\int_0^1 \mu(x')\,dx'}$, where $\mu(x) = 1 + 0.2 \cos\left(\pi\,\frac{2x - 3}{2}\right)^2$.
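Transcribed as code, this data plugs directly into the solvers sketched above (the grid resolution and the Riemann-sum normalization are implementation choices, and the formula for $\mu$ is as reconstructed above):

```python
import numpy as np

T, sigma = 0.5, 1.0
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]

def f(xs, xi):
    # -16 (x - 1/2)^2 - 0.1 max(0, min(5, xi))
    return -16.0 * (xs - 0.5) ** 2 - 0.1 * np.clip(xi, 0.0, 5.0)

g = np.zeros_like(x)                                  # terminal payoff g = 0
mu = 1.0 + 0.2 * np.cos(np.pi * (2.0 * x - 3.0) / 2.0) ** 2
m0 = mu / (mu.sum() * dx)                             # normalised initial density
```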

17. Toy problem in the quadratic case: the functions $\phi$ and $\psi$ (figure).

18. Toy problem in the quadratic case: the dynamics of the distribution $m$ (figure).

19. Examples with population dynamics (videos provided by Y. Achdou)

Going out of a movie theater (1)
• A movie theater with 6 rows and 2 exit doors at the front.
• Neumann conditions on the walls.
• Homogeneous Dirichlet conditions at the doors.
• A running penalty while staying in the room.
• Congestion effects.

20. Examples with population dynamics (videos provided by Y. Achdou)

Going out of a movie theater (2)
• The same movie theater with 6 rows and 2 exit doors at the front.
• Only one door will be open at a pre-defined time, but nobody knows which one.

21. Numerous economic applications

Many models in economics and finance, for instance:
• The interaction between economic growth and inequalities (where Pareto distributions play a central role).
  → See Guéant, Lasry, Lions (Paris-Princeton lectures). Similar ideas were developed by Lucas and Moll.
• Competition between asset managers.
  → Guéant (Risk and Decision Analysis, 2013).
• Oil extraction (à la Hotelling) with noise.
  → See Guéant, Lasry, Lions (Paris-Princeton lectures).
• A long-term model for the mining industries.
  → Joint work of Achdou, Giraud, Lasry, Lions, and Scheinkman.

22. Special for Huawei: MFG on graphs

23. Framework

MFGs are often written on continuous state spaces, but what about discrete structures?

Notation for the graph:
• A graph $\mathcal{G}$ with nodes indexed by the integers from $1$ to $N$.
• For each $i \in \mathcal{N} = \{1, \ldots, N\}$:
  • $\mathcal{V}(i) \subset \mathcal{N} \setminus \{i\}$: the set of nodes $j$ for which a directed edge exists from $i$ to $j$ (cardinality $d_i$).
  • $\mathcal{V}^{-1}(i) \subset \mathcal{N} \setminus \{i\}$: the set of nodes $j$ for which a directed edge exists from $j$ to $i$.
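In code this data is just adjacency lists; a minimal, hypothetical 4-node example (0-indexed for convenience) used to fix ideas for the sketches below:

```python
from collections import defaultdict

# Hypothetical directed line graph on 4 nodes:
V = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}     # V(i): heads of edges out of i

V_inv = defaultdict(list)                      # V^{-1}(i): tails of edges into i
for i, heads in V.items():
    for j in heads:
        V_inv[j].append(i)

d = {i: len(heads) for i, heads in V.items()}  # out-degrees d_i
```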

24. Framework (continued)

Players, strategies, and costs:
• Each player's position is a Markov chain $(X_t)_t$ with values in $\mathcal{G}$.
• Instantaneous transition probabilities at time $t$: $\lambda_t(i, \cdot) : \mathcal{V}(i) \to \mathbb{R}_+$, for each $i \in \mathcal{N}$.
• Instantaneous cost $L\left(i, (\lambda_{i,j})_{j \in \mathcal{V}(i)}\right)$ to set the value of $\lambda(i, j)$ to $\lambda_{i,j}$.

25. Hypotheses

Hypotheses on $L$:
• Super-linearity: $\forall i \in \mathcal{N}$, $\lim_{\lambda \in \mathbb{R}_+^{d_i},\, |\lambda| \to +\infty} \frac{L(i, \lambda)}{|\lambda|} = +\infty$.
• Convexity: $\forall i \in \mathcal{N}$, $\lambda \in \mathbb{R}_+^{d_i} \mapsto L(i, \lambda)$ is strictly convex.

We also define the Hamiltonian:
$$\forall i \in \mathcal{N}, \quad p \in \mathbb{R}^{d_i} \mapsto H(i, p) = \sup_{\lambda \in \mathbb{R}_+^{d_i}} \lambda \cdot p - L(i, \lambda).$$
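For instance, the quadratic cost $L(i, \lambda) = \frac{1}{2}\sum_j \lambda_j^2$ satisfies both hypotheses, and the supremum over $\lambda \geq 0$ separates across coordinates: $H(i, p) = \frac{1}{2}\sum_j \max(p_j, 0)^2$, attained at $\lambda_j^* = \max(p_j, 0)$. A sketch with a brute-force cross-check (this specific cost is an illustrative choice, not the slides' general $L$):

```python
import numpy as np

def H(p):
    # Closed form for the quadratic cost: sup separates coordinate by coordinate
    return 0.5 * np.sum(np.maximum(p, 0.0) ** 2)

def lam_star(p):
    # The maximizing transition intensities
    return np.maximum(p, 0.0)

# Brute-force check of the sup over a grid of lambda >= 0:
rng = np.random.default_rng(0)
p = rng.normal(size=3)
axis = np.linspace(0.0, 5.0, 51)
grid = np.stack(np.meshgrid(axis, axis, axis), axis=-1).reshape(-1, 3)
print(H(p), (grid @ p - 0.5 * (grid**2).sum(axis=1)).max())  # close agreement
```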

26. Mean field game: the control problem

• Admissible Markovian controls:
$$\mathcal{A} = \left\{ (\lambda_t(i, j))_{t \in [0,T],\, i \in \mathcal{N},\, j \in \mathcal{V}(i)} \;\middle|\; t \mapsto \lambda_t(i, j) \in L^\infty(0, T) \right\}.$$
• For $\lambda \in \mathcal{A}$ and a given function $m : [0, T] \to \mathcal{P}_N$ (the simplex of probability measures on $\mathcal{N}$), we define the payoff function $J_m : [0, T] \times \mathcal{N} \times \mathcal{A} \to \mathbb{R}$ by
$$J_m(t, i, \lambda) = \mathbb{E}\left[\int_t^T \left(-L(X_s, \lambda_s(X_s, \cdot)) + f(X_s, m(s))\right) ds + g(X_T, m(T))\right],$$
where $(X_s)_{s \in [t,T]}$ is a Markov chain on $\mathcal{G}$, starting from $i$ at time $t$, with instantaneous transition probabilities given by $(\lambda_s)_{s \in [t,T]}$.

27. Nash equilibrium

Nash-MFG equilibrium. A differentiable function $m : t \in [0, T] \mapsto (m(t, i))_{i \in \mathcal{N}} \in \mathcal{P}_N$ is said to be a Nash-MFG equilibrium if there exists an admissible control $\lambda \in \mathcal{A}$ such that
$$\forall \tilde{\lambda} \in \mathcal{A}, \; \forall i \in \mathcal{N}, \quad J_m(0, i, \lambda) \geq J_m(0, i, \tilde{\lambda})$$
and
$$\forall i \in \mathcal{N}, \quad \frac{d}{dt} m(t, i) = \sum_{j \in \mathcal{V}^{-1}(i)} \lambda_t(j, i)\, m(t, j) - \sum_{j \in \mathcal{V}(i)} \lambda_t(i, j)\, m(t, i).$$
In that case, $\lambda$ is called an optimal control.
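For *given* intensities, the consistency condition is just a linear forward Kolmogorov ODE, $\frac{dm}{dt} = Q^\top m$ for the rate matrix $Q$ with $Q[i, j] = \lambda(i, j)$ off the diagonal and rows summing to zero. A small sketch with hypothetical constant rates on three nodes:

```python
import numpy as np
from scipy.integrate import solve_ivp

Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 2.0, -2.0]])     # hypothetical constant intensities, zero row sums

m0 = np.array([1.0, 0.0, 0.0])       # everyone starts at the first node
sol = solve_ivp(lambda t, m: Q.T @ m, (0.0, 1.0), m0, dense_output=True)
print(sol.y[:, -1], sol.y[:, -1].sum())  # a probability vector: total mass stays 1
```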

28. The $\mathcal{G}$-MFG equations

Definition (the $\mathcal{G}$-MFG equations). The $\mathcal{G}$-MFG equations consist of a system of $2N$ equations, the unknown being $t \in [0, T] \mapsto (u(t), m(t))$:
$$\forall i \in \mathcal{N}, \quad \frac{d}{dt} u(t, i) + H\left(i, (u(t, j) - u(t, i))_{j \in \mathcal{V}(i)}\right) + f(i, m(t)) = 0,$$
$$\forall i \in \mathcal{N}, \quad \frac{d}{dt} m(t, i) = \sum_{j \in \mathcal{V}^{-1}(i)} \frac{\partial H(j, \cdot)}{\partial p_i}\left((u(t, k) - u(t, j))_{k \in \mathcal{V}(j)}\right) m(t, j) - \sum_{j \in \mathcal{V}(i)} \frac{\partial H(i, \cdot)}{\partial p_j}\left((u(t, k) - u(t, i))_{k \in \mathcal{V}(i)}\right) m(t, i),$$
with $u(T, i) = g(i, m(T))$ and $m(0) = m_0 \in \mathcal{P}_N$ given.
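The same forward/backward fixed-point strategy as in the PDE case applies here. For the quadratic cost sketched earlier, $\frac{\partial H(i, \cdot)}{\partial p_j}(p) = \max(p_j, 0)$, so the optimal intensities are $\lambda_t(i, j) = (u(t, j) - u(t, i))_+$ and the system becomes two coupled ODE sweeps. A damped-iteration sketch on a hypothetical 3-node line graph (the graph, $f$, $g$, horizon, and damping weight are all illustrative assumptions):

```python
import numpy as np

N, T, nt = 3, 1.0, 1000
dt = T / nt
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)        # adjacency of a line graph (0-indexed)

def f(m):                                     # running reward: crowd aversion
    return -m

def rates(u):                                 # lambda(i, j) = max(u_j - u_i, 0) on edges
    return A * np.maximum(u[None, :] - u[:, None], 0.0)

m = np.full((nt + 1, N), 1.0 / N)             # initial guess for the flow t -> m(t)
for it in range(100):
    # Backward sweep: du_i/dt + H(i, (u_j - u_i)_j) + f(i, m) = 0, u(T) = g = 0
    u = np.zeros((nt + 1, N))
    for n in range(nt, 0, -1):
        lam = rates(u[n])
        u[n - 1] = u[n] + dt * (0.5 * (lam**2).sum(axis=1) + f(m[n]))
    # Forward sweep: dm/dt = Q(t)^T m with the optimal intensities
    m_new = np.empty_like(m)
    m_new[0] = np.array([1.0, 0.0, 0.0])      # m_0: everyone starts at the first node
    for n in range(nt):
        lam = rates(u[n])
        Q = lam - np.diag(lam.sum(axis=1))    # generator: zero row sums
        m_new[n + 1] = m_new[n] + dt * (Q.T @ m_new[n])
    m = 0.5 * m + 0.5 * m_new                 # damping

print(m[-1])                                  # equilibrium distribution at time T
```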

29. The $\mathcal{G}$-MFG equations

Proposition (the $\mathcal{G}$-MFG equations as a sufficient condition). Let $m_0 \in \mathcal{P}_N$ and consider a $C^1$ solution $(u(t), m(t))$ of the $\mathcal{G}$-MFG equations with $(m(0, 1), \ldots, m(0, N)) = m_0$. Then:
• $t \mapsto m(t) = (m(t, 1), \ldots, m(t, N))$ is a Nash-MFG equilibrium;
• the relations $\lambda_t(i, j) = \frac{\partial H(i, \cdot)}{\partial p_j}\left((u(t, k) - u(t, i))_{k \in \mathcal{V}(i)}\right)$ define an optimal control.
