A Stochastic Optimal Control Problem for the Heat Equation on the Halfline with Dirichlet Boundary Noise and Boundary Control
Federica Masiero, Università di Milano-Bicocca
Roscoff, 18-23 March 2010
PLAN
1. Heat equations with boundary noise;
2. Basic facts on stochastic optimal control;
3. Stochastic optimal control: boundary case;
4. Regularity of the FBSDE;
5. Solution of the related HJB;
6. Synthesis of the optimal control;
7. FBSDE in the infinite horizon case;
8. Stationary HJB and optimal control.
Heat equations with boundary noise

Neumann boundary conditions

∂y/∂s(s, ξ) = ∂²y/∂ξ²(s, ξ) + f(s, y(s, ξ)),   s ∈ [t, T], ξ ∈ (0, π),
y(t, ξ) = x(ξ),
∂y/∂ξ(s, 0) = Ẇ^1_s + u^1_s,   ∂y/∂ξ(s, π) = Ẇ^2_s + u^2_s.

• {W^i_t}_{t≥0}, i = 1, 2, independent real Wiener processes;
• {u^i_t}_{t≥0}, i = 1, 2, predictable real-valued processes modelling the control;
• y(t, ξ, ω) state of the system;
• x ∈ L²(0, π).

No noise as a forcing term!
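Before passing to the abstract reformulation, it may help to see what a direct discretization of this system looks like. The sketch below is a naive explicit finite-difference / Euler-Maruyama scheme in which the Neumann boundary data enter through the Brownian increments; all numerical parameters, the nonlinearity f, the constant controls and the initial datum are illustrative assumptions, not choices made in the talk.

```python
# Hedged sketch: explicit finite differences + Euler-Maruyama for the heat
# equation on (0, pi) with Neumann boundary noise and boundary controls.
import numpy as np

n, T, nt = 100, 1.0, 20000                # space points, horizon, time steps
h, dt = np.pi / (n - 1), T / nt           # dt <= h^2/2 for stability of the explicit scheme
xi = np.linspace(0.0, np.pi, n)
f = lambda s, y: np.sin(y)                # hypothetical Lipschitz nonlinearity f
u1, u2 = 0.5, -0.5                        # hypothetical constant controls u^1, u^2
y = np.sin(xi)                            # initial datum x(.)

rng = np.random.default_rng(0)
for k in range(nt):
    s = k * dt
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
    lap = np.zeros_like(y)
    lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2
    y_new = y + dt * (lap + f(s, y))
    # Boundary cells (finite-volume view): the prescribed flux
    # dy/dxi = u + dW/ds only acts through its increment u*dt + dW, divided by h.
    y_new[0]  += dt * (y[1] - y[0]) / h**2 - (u1 * dt + dW1) / h
    y_new[-1] += dt * (y[-2] - y[-1]) / h**2 + (u2 * dt + dW2) / h
    y = y_new

print("L2 norm of y(T, .):", np.sqrt(h * np.sum(y**2)))
```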
Heat equations with boundary noise

Reformulation in H = L²(0, π):

dX^u_s = AX^u_s ds + F(s, X^u_s) ds + (λ − A) b u_s ds + (λ − A) b dW_s,   s ∈ [t, T],
X^u_t = x,

where F(t, x) = f(t, x(·)), W = (W^1, W^2), u = (u^1, u^2) and b(·) = (b^1(·), b^2(·)).

b^i(·) ∈ dom(λ − A)^α for 0 < α < 3/4,   b^i(·) ∉ dom(λ − A)^α for 3/4 < α < 1.
Heat equations with boundary noise

We can give sense to the mild formulation

X^u_s = e^{(s−t)A} x + ∫_t^s e^{(s−r)A} F(r, X^u_r) dr + ∫_t^s e^{(s−r)A} (λ − A) b u_r dr + ∫_t^s e^{(s−r)A} (λ − A) b dW_r
      = e^{(s−t)A} x + ∫_t^s e^{(s−r)A} F(r, X^u_r) dr
        + ∫_t^s (λ − A)^{1−β} e^{(s−r)A} (λ − A)^β b u_r dr + ∫_t^s (λ − A)^{1−β} e^{(s−r)A} (λ − A)^β b dW_r.
Heat equations with boundary noise

Dirichlet boundary conditions

∂y/∂s(s, ξ) = ∂²y/∂ξ²(s, ξ) + f(s, y(s, ξ)),   s ∈ [t, T], ξ ∈ (0, +∞),   (1)
y(t, ξ) = x(ξ),
y(s, 0) = u_s + Ẇ_s,

• y(t, ξ, ω) state of the system;
• {W_t}_{t≥0} real Wiener process;
• {u_t}_{t≥0} predictable real-valued process modelling the control.

References
• Da Prato-Zabczyk (1995): y(s, ·) well defined in H^α, for α < −1/4.
• Alos-Bonaccorsi (2002) and Bonaccorsi-Guatteri (2002): y(s, ·) takes values in the weighted space L²((0, +∞); ξ^{1+θ} dξ).
Heat equations with boundary noise

In Fabbri-Goldys (2009) equation (1) (with f = 0) is reformulated as an evolution equation in H = L²((0, +∞); ρ(ξ) dξ), with ρ(ξ) = ξ^{1+θ} or ρ(ξ) = min(ξ^{1+θ}, 1), using results in Krylov (1999) and (2001).

• The heat semigroup in L²((0, +∞)) extends to a bounded C_0-semigroup (e^{tA})_{t≥0} in H with generator still denoted by A: dom(A) ⊂ H → H. The semigroup (e^{tA})_{t≥0} is analytic: for every β > 0,
  ‖(λ − A)^β e^{tA}‖ ≤ C_β t^{−β}   for all t > 0.
• Dirichlet map: λ > 0, ψ_λ(ξ) = e^{−√λ ξ}, D_λ : ℝ → H, D_λ(a) = a ψ_λ.
• B = (λ − A) D_λ
⇓
dX^u_s = AX^u_s ds + B u_s ds + B dW_s,   s ∈ [t, T],
X^u_t = x.
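As a quick numerical illustration (not from the talk) of the statement that D_λ maps ℝ into H, one can check by quadrature that ψ_λ has finite norm in the weighted space H = L²((0, +∞); ρ(ξ) dξ) with ρ(ξ) = min(ξ^{1+θ}, 1); the values of θ and λ below are arbitrary assumptions.

```python
# Hedged sketch: weighted L^2 norm of psi_lambda(xi) = exp(-sqrt(lambda)*xi),
# i.e. of the Dirichlet-map image D_lambda(1), in H = L^2((0,inf); rho(xi) dxi).
import numpy as np
from scipy.integrate import quad

theta, lam = 0.5, 1.0                               # illustrative parameters
rho = lambda xi: min(xi ** (1.0 + theta), 1.0)      # weight rho(xi) = min(xi^{1+theta}, 1)
psi = lambda xi: np.exp(-np.sqrt(lam) * xi)

norm_sq, _ = quad(lambda xi: psi(xi) ** 2 * rho(xi), 0.0, np.inf)
print("||D_lambda(1)||_H =", np.sqrt(norm_sq))      # finite, so D_lambda is bounded R -> H
```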
Heat equations with boundary noise

• B : ℝ → H_{α−1} bounded;
• ψ_λ ∈ dom(λ − A)^α and (λ − A) e^{tA} D_λ = (λ − A)^{1−α} e^{tA} (λ − A)^α D_λ : ℝ → H bounded;
• α ∈ (1/2, 1/2 + θ/4).

We can give sense to

X^u_s = e^{(s−t)A} x + ∫_t^s e^{(s−r)A} B u_r dr + ∫_t^s e^{(s−r)A} B dW_r
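The role of the window α ∈ (1/2, 1/2 + θ/4) can be read off from the two statements above: the upper bound is the range in which ψ_λ belongs to dom(λ − A)^α in the weighted space (this is where θ enters), while the lower bound α > 1/2 is what makes the stochastic convolution square integrable. A formal sketch of the latter, using the Itô isometry, the factorization of e^{(s−r)A}B above and the analytic-semigroup bound of the previous slide (the constant C collects ‖(λ − A)^α D_λ‖ and C_{1−α}):

```latex
\mathbb{E}\,\Big|\int_t^s e^{(s-r)A} B \, dW_r\Big|_H^2
  = \int_t^s \big|(\lambda-A)^{1-\alpha} e^{(s-r)A} (\lambda-A)^{\alpha} D_\lambda(1)\big|_H^2 \, dr
  \le C^2 \int_t^s (s-r)^{-2(1-\alpha)} \, dr < \infty
  \quad\Longleftrightarrow\quad \alpha > \tfrac12 .
```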
Heat equations with boundary noise

Our framework

∂y/∂s(s, ξ) = ∂²y/∂ξ²(s, ξ) + f(s, y(s, ξ)),   s ∈ [t, T], ξ ∈ (0, +∞),
y(t, ξ) = x(ξ),
y(s, 0) = u_s + Ẇ_s,

Hypothesis 1
1) f : [0, T] × ℝ → ℝ is measurable, for every t ∈ [0, T] f(t, ·) : ℝ → ℝ is continuously differentiable, and there exists C_f > 0 such that
   |f(t, 0)| + |∂f/∂r(t, r)| ≤ C_f,   t ∈ [0, T], r ∈ ℝ.
2) x(·) ∈ H.
3) Admissible control u: predictable process with values in a compact set U ⊂ ℝ.
Heat equations with boundary noise

Set F(s, X)(ξ) = f(s, X(ξ)): F : [0, T] × H → H is measurable and

|F(t, 0)| + |F(t, x_1) − F(t, x_2)| ≤ C_f (1 + |x_1 − x_2|),   t ∈ [0, T], x_1, x_2 ∈ H.

For every t ∈ [0, T], F(t, ·) has a Gâteaux derivative ∇_x F(t, x) and |∇_x F(t, x)| ≤ C_f; the map (x, h) → ∇_x F(t, x) h is continuous as a map H × H → ℝ.

dX^u_s = AX^u_s ds + F(s, X^u_s) ds + B u_s ds + B dW_s,   s ∈ [t, T],
X^u_t = x.

By the Picard approximation scheme we find a mild solution

X^u_s = e^{(s−t)A} x + ∫_t^s e^{(s−r)A} F(r, X^u_r) dr + ∫_t^s e^{(s−r)A} B u_r dr + ∫_t^s e^{(s−r)A} B dW_r.   (2)

X ∈ L²(Ω; C([t, T], H)), and for all p ∈ [1, ∞), α ∈ [0, θ/4), t ∈ [0, T] there exists c_{p,α} such that

E sup_{s ∈ (t,T]} (s − t)^{pα} |X^{t,x}_s|^p_{dom(λ−A)^α} ≤ c_{p,α} (1 + |x|_H)^p.
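A minimal numerical sketch of the Picard scheme behind (2), after truncating the halfline to (0, L) and discretizing in space (Dirichlet boundary conditions on the truncated interval). Everything here is an illustrative assumption: the truncation, the grid, the nonlinearity, and in particular the control and noise convolutions, which do not depend on X and are therefore lumped into a fixed placeholder forcing path rather than built from B = (λ − A)D_λ.

```python
# Hedged sketch: Picard iteration for the mild equation (2) on a truncated,
# finite-difference approximation of the halfline.
import numpy as np
from scipy.linalg import expm

L, n, T, nt = 10.0, 100, 1.0, 50
h, dt = L / (n + 1), T / nt
xi = np.linspace(h, L - h, n)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
E1 = expm(dt * A)                                  # one-step semigroup e^{dt A}
f = lambda s, y: np.tanh(y)                        # hypothetical Lipschitz f
x0 = xi * np.exp(-xi)                              # hypothetical initial datum

rng = np.random.default_rng(1)
forcing = 0.1 * rng.normal(size=(nt, n))           # placeholder for the B u dr + B dW_r terms

# base[i] = e^{i dt A} x0 + sum_{j<i} e^{(i-j) dt A} forcing_j   (part independent of X)
base = np.zeros((nt + 1, n))
base[0] = x0
for i in range(1, nt + 1):
    base[i] = E1 @ (base[i - 1] + forcing[i - 1])

X = base.copy()                                    # Picard iterates X^{(k)}
for k in range(20):
    X_new = base.copy()
    conv = np.zeros(n)                             # sum_{j<i} e^{(i-j) dt A} F(s_j, X_j) dt
    for i in range(1, nt + 1):
        conv = E1 @ (conv + dt * f((i - 1) * dt, X[i - 1]))
        X_new[i] = base[i] + conv
    if np.max(np.abs(X_new - X)) < 1e-10:
        break
    X = X_new
print("Picard iterations until convergence:", k + 1)
```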
Basic facts on stochastic optimal control

Controlled state equation in H:

dX^u_τ = [AX^u_τ + F(τ, X^u_τ) + u_τ] dτ + G(τ, X^u_τ) dW_τ,   τ ∈ [t, T],
X^u_t = x.

Cost functional and value function:

J(t, x, u) = E ∫_t^T g(s, X^u_s, u_s) ds + E φ(X^u_T),
J*(t, x) = inf_u J(t, x, u).

Hamiltonian function: for every τ ∈ [t, T], x ∈ H, q ∈ H*,

ψ(τ, x, q) = − inf{ g(τ, x, u) + qu : u ∈ U },
Γ(τ, x, q) = { u ∈ U : g(τ, x, u) + qu = −ψ(τ, x, q) }.
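A purely illustrative finite-dimensional example of ψ and Γ (the cost g and the set U below are assumptions, not the talk's data): with U = [−1, 1], g(τ, x, u) = |x|² + u² and scalar q, the infimum of g(τ, x, u) + qu over U is attained at the clipped point u* = max(−1, min(−q/2, 1)), so Γ(τ, x, q) is the singleton {u*}.

```python
# Hedged sketch: Hamiltonian psi and the feedback set Gamma for the
# illustrative cost g(t, x, u) = |x|^2 + u^2 with U = [-1, 1] and scalar q.
import numpy as np

def hamiltonian(tau, x_norm_sq, q):
    """Return (psi, u_star): psi = -inf_{u in U} (g + q*u), attained at u_star."""
    u_star = np.clip(-q / 2.0, -1.0, 1.0)      # unconstrained minimiser -q/2, clipped to U
    g_min = x_norm_sq + u_star**2 + q * u_star
    return -g_min, u_star

psi, u_star = hamiltonian(0.0, 1.0, 3.0)
print(psi, u_star)                             # q = 3: u* = -1, psi = -(1 + 1 - 3) = 1
```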
Basic facts on stochastic optimal control

"Analytic" approach

Find a unique, sufficiently regular, solution of the associated Hamilton-Jacobi-Bellman (HJB) equation

∂v/∂t(t, x) = −A_t[v(t, ·)](x) + ψ(t, x, ∇v(t, x)),
v(T, x) = φ(x),

where

A_t f(x) = 1/2 Tr[G(t, x) G*(t, x) ∇²f(x)] + ⟨Ax, ∇f(x)⟩ + ⟨F(t, x), ∇f(x)⟩.

For every admissible control u, J(t, x, u) ≥ v(t, x), and equality holds iff, P-a.e. and for a.e. τ ∈ [t, T],

u_τ ∈ Γ(τ, X^u_τ, ∇v(τ, X^u_τ)).
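The inequality J(t, x, u) ≥ v(t, x) and the optimality criterion follow from the standard fundamental relation; here is a formal sketch, assuming v is regular enough to apply Itô's formula to v(τ, X^u_τ) and using the HJB equation to replace ∂_τ v + A_τ v by ψ:

```latex
dv(\tau, X^u_\tau)
  = \big[\psi\big(\tau, X^u_\tau, \nabla v(\tau, X^u_\tau)\big)
       + \nabla v(\tau, X^u_\tau)\, u_\tau\big]\, d\tau
    + \nabla v(\tau, X^u_\tau)\, G(\tau, X^u_\tau)\, dW_\tau ;
% integrate on [t, T], take expectations (the dW-term is a martingale),
% use v(T, .) = phi and add the running cost:
J(t,x,u) - v(t,x)
  = \mathbb{E}\int_t^T \Big[ g(\tau, X^u_\tau, u_\tau)
       + \nabla v(\tau, X^u_\tau)\, u_\tau
       + \psi\big(\tau, X^u_\tau, \nabla v(\tau, X^u_\tau)\big) \Big]\, d\tau \;\ge\; 0 .
```

The integrand is nonnegative by the definition of ψ as a pointwise infimum, and it vanishes exactly when u_τ attains that infimum, which is the condition u_τ ∈ Γ above.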
Basic facts on stochastic optimal control

A control u* such that J(t, x, u*) = J*(t, x) is called an optimal control; the associated trajectory X*(·) is called an optimal trajectory; (X*, u*) is called an optimal pair.

Closed loop equation

Assume that Γ is non-empty. Closed loop equation:

dX_τ = [AX_τ + F(τ, X_τ) + Γ(τ, X_τ, ∇v(τ, X_τ))] dτ + G(τ, X_τ) dW_τ,   τ ∈ [t, T],
X_t = x,   x ∈ H.

If there exists a solution, the pair (X, Γ(τ, X_τ, ∇v(τ, X_τ))) is an optimal pair.
Basic facts on stochastic optimal control

References

V. Barbu, G. Da Prato, Hamilton-Jacobi equations in Hilbert spaces, Research Notes in Mathematics 86, Pitman (1983).
P. Cannarsa, G. Da Prato, Second order Hamilton-Jacobi equations in infinite dimensions, SIAM J. Control Optim. 29, no. 2 (1991).
P. Cannarsa, G. Da Prato, Direct solution of a second order Hamilton-Jacobi equation in Hilbert spaces, Stochastic Partial Differential Equations and Applications (1992).
F. Gozzi, Regularity of solutions of second order Hamilton-Jacobi equations in Hilbert spaces and applications to a control problem, Comm. Partial Differential Equations 20 (1995).
Basic facts on stochastic optimal control

BSDE approach

Controlled state equation in H:

dX^u_τ = [AX^u_τ + F(τ, X^u_τ) + G(τ, X^u_τ) u_τ] dτ + G(τ, X^u_τ) dW_τ,   τ ∈ [t, T],
X^u_t = x,   x ∈ H.

Forward-Backward system

dX_τ = AX_τ dτ + F(τ, X_τ) dτ + G(τ, X_τ) dW_τ,   τ ∈ [t, T],
dY_τ = ψ(τ, X_τ, Z_τ) dτ + Z_τ dW_τ,   τ ∈ [t, T],
X_t = x,   Y_T = φ(X_T).
Basic facts on stochastic optimal control

BSDE in integral form:

Y_τ + ∫_τ^T Z_σ dW_σ = φ(X_T) + ∫_τ^T ψ(σ, X_σ, Z_σ) dσ.

There exists a unique adapted solution (X_τ, Y_τ, Z_τ) = (X^{t,x}_τ, Y^{t,x}_τ, Z^{t,x}_τ).

v(t, x) = Y^{t,x}_t is deterministic and J(t, x, u) ≥ v(t, x) for every admissible control u, and equality holds iff, P-a.e. and for a.e. τ ∈ [t, T],

u_τ ∈ Γ(τ, X_τ, ∇v(τ, X_τ) G(τ, X_τ)).

Identification of Z^{t,x}_τ with ∇v(τ, X^{t,x}_τ) G(τ, X^{t,x}_τ).
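For a one-dimensional toy forward process, the backward equation can be discretized by the usual explicit backward scheme, with conditional expectations replaced by least-squares regression on X (in the spirit of Longstaff-Schwartz-type methods); the dynamics, cost and regression basis below are illustrative choices, not the talk's infinite-dimensional setting.

```python
# Hedged sketch: explicit backward scheme for a 1D toy FBSDE
#   dX = b(X) dt + sigma dW,   dY = psi(t, X, Z) dt + Z dW,   Y_T = phi(X_T),
# with E[ . | X_{t_i}] approximated by polynomial least-squares regression.
import numpy as np

T, nt, n_paths = 1.0, 50, 20000
dt = T / nt
b = lambda x: -x                          # illustrative forward drift
sigma = 1.0
phi = lambda x: np.cos(x)                 # illustrative terminal cost
psi = lambda t, x, z: -np.abs(z)          # illustrative Hamiltonian, Lipschitz in z

rng = np.random.default_rng(2)
X = np.zeros((nt + 1, n_paths))
dW = rng.normal(0.0, np.sqrt(dt), size=(nt, n_paths))
for i in range(nt):                       # forward Euler-Maruyama, started at X_t = 0
    X[i + 1] = X[i] + b(X[i]) * dt + sigma * dW[i]

def cond_exp(x, target, deg=4):
    """Regression approximation of E[target | X = x]."""
    return np.polyval(np.polyfit(x, target, deg), x)

Y = phi(X[-1])
for i in reversed(range(1, nt)):
    Z = cond_exp(X[i], Y * dW[i]) / dt                  # Z_i ~ E[Y_{i+1} dW_i | X_i] / dt
    Y = cond_exp(X[i], Y) - psi(i * dt, X[i], Z) * dt   # Y_i = E[Y_{i+1} | X_i] - psi dt

# At i = 0 the forward process is deterministic, so plain means suffice.
Z0 = np.mean(Y * dW[0]) / dt
Y0 = np.mean(Y) - psi(0.0, 0.0, Z0) * dt
print("v(t, x) ~ Y0 =", Y0)
```

In this toy setting the regressed Z_i plays the role of ∇v(t_i, X_i) G in the identification above, while Y0 approximates the deterministic value v(t, x).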
Basic facts on stochastic optimal control

References

N. El Karoui, S. Peng, M. Quenez, Backward stochastic differential equations in finance, Math. Finance 7 (1997), no. 1.
M. Fuhrman, G. Tessitore, Nonlinear Kolmogorov equations in infinite dimensional spaces: the backward stochastic differential equations approach and applications to optimal control, Ann. Probab. 30 (2002), no. 3.
E. Pardoux, S. Peng, Backward stochastic differential equations and quasilinear parabolic partial differential equations, in: Stochastic Partial Differential Equations and Their Applications, Lecture Notes in Control Inf. Sci. 176, Springer, Berlin (1992), 200-217.