Optimal control of parabolic equations using spectral calculus

Ivica Nakić
Faculty of Science, University of Zagreb

Joint work (in progress) with L. Grubišić, M. Lazar and M. Tautenhahn

Operator Theory and Krein Spaces, TU Wien, 2019
The problem

Initial-condition (or starting-control) optimal control: solve the following problem
\[
\min_{u\in H}\ \bigl\{\, J(u) \,:\, \|y(T)-y^*\| \le \varepsilon \,\bigr\},
\]
where (in the weak sense)
\[
\begin{cases} y'(t) + A\,y(t) = f(t) & \text{for } 0 \le t \le T,\\ y(0) = u, \end{cases}
\qquad
J(u) = \frac{\alpha}{2}\,\|u\|^2 + \frac12 \int_0^T \beta(t)\,\|y(t) - w(t)\|^2\,dt .
\]
Here we assume: $A$ is a selfadjoint, lower semi-bounded operator on a Hilbert space $H$, and $f \in L^2((0,T); H)$.

Parameters: $y^*$ is the target state, $\varepsilon > 0$ is the tolerance, $\alpha > 0$ and $\beta \in L^\infty((0,T); [0,\infty))$ are weights, and $w \in L^2((0,T); H)$ is the desired trajectory of the system.
The solution

The solution $\hat u$ of the problem is given by
\[
S_T\,\hat u = (\hat\mu + B)^{-1}\bigl(\hat\mu\, y^*_h + b\bigr) - \bigl(y^* - y^*_h\bigr),
\]
where
\[
B = \alpha I + \int_0^T \beta(t)\, S_{2t}\, dt, \qquad
b = \int_0^T \beta(t)\, S_{T+t}\, w_h(t)\, dt,
\]
\[
y^*_h = y^* - \int_0^T S_\tau f(\tau)\, d\tau, \qquad
w_h = w - \int_0^{\,\cdot} S_\tau f(\tau)\, d\tau,
\]
$\hat\mu$ is the unique solution of
\[
G(\mu) := \bigl\| y^*_h - (\mu + B)^{-1}(\mu\, y^*_h + b) \bigr\| = \varepsilon,
\]
and $\{S_t\}$ is the semigroup generated by $-A$.
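A small algebraic step behind the remarks on the next slide (a sketch, not spelled out on the slides themselves): the residual inside $G$ can be written with a single resolvent,
\[
y^*_h - (\mu + B)^{-1}(\mu\, y^*_h + b)
 = (\mu + B)^{-1}\bigl[(\mu + B)\,y^*_h - \mu\, y^*_h - b\bigr]
 = (\mu + B)^{-1}\bigl(B\, y^*_h - b\bigr),
\]
so $G(\mu) = \|(\mu + B)^{-1}(B y^*_h - b)\|$. Since $B \ge \alpha I > 0$, the spectral theorem gives
$\|(\mu + B)^{-1}v\|^2 = \int (\mu + \lambda)^{-2}\, d\|E_B(\lambda)v\|^2$, which decreases in $\mu$, and
$G(\mu) \le \|B y^*_h - b\|/(\mu + \alpha) \to 0$ as $\mu \to \infty$. Hence $G$ decreases from $G(0) = \|y^*_h - B^{-1}b\|$ to $0$, and $\hat\mu$ exists whenever $G(0) \ge \varepsilon$.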
Remarks

→ The optimal final state $\hat y$ is given by $\hat y = (\hat\mu + B)^{-1}(\hat\mu\, y^*_h + b)$.
→ The solution of the unconstrained problem ($\varepsilon = \infty$) is given by the same formula with $\hat\mu = 0$.
→ The function $G$ is decreasing.
→ Let $g(\mu) = y^*_h - (\mu + B)^{-1}(\mu\, y^*_h + b)$, so $G(\mu) = \|g(\mu)\|$. Then $g(\mu) = y^*_h - x$, where $x$ is the solution of the equation $(\mu + B)x = \mu\, y^*_h + b$; hence the calculation of $G(\mu)$ reduces to solving a linear equation.
→ The optimal final state is the solution of the equation $(\hat\mu + B)x = \hat\mu\, y^*_h + b$, hence we obtain it for free.
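To make these remarks concrete, here is a minimal numerical sketch (not from the talk) for a finite-dimensional stand-in: $A$ is a small symmetric positive definite matrix, $f = 0$, $\beta \equiv 1$, and all data ($y^*$, $w$, $\alpha$, $\varepsilon$, $T$) are invented for illustration. It evaluates $G(\mu)$ by a single linear solve, finds $\hat\mu$ by root finding (using that $G$ is decreasing), and reads off the optimal final state from the same solve.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

# Hypothetical stand-in for A: a 1-D Dirichlet Laplacian on (0, 1), discretized
# by finite differences. All data below are invented for illustration.
n, T, alpha, eps = 40, 1.0, 1e-2, 1e-3
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2

lam, V = eigh(A)                       # A = V diag(lam) V^T

def S(t):
    # semigroup S_t = exp(-t A), evaluated by spectral calculus
    return (V * np.exp(-t * lam)) @ V.T

xs = np.linspace(h, 1.0 - h, n)
y_star = np.sin(np.pi * xs)            # target state (illustrative)
w = 0.5 * np.sin(2.0 * np.pi * xs)     # time-independent desired trajectory
# Here f = 0, so y*_h = y* and w_h = w; the weight is beta(t) = 1 on [0, T].

# Crude quadrature for B = alpha*I + int_0^T S_{2t} dt and b = int_0^T S_{T+t} w dt.
ts = np.linspace(0.0, T, 201)
dt = ts[1] - ts[0]
B = alpha * np.eye(n) + sum(S(2.0 * t) for t in ts) * dt
b = sum(S(T + t) @ w for t in ts) * dt

def final_state(mu):
    # x solving (mu + B) x = mu*y*_h + b -- one linear solve
    return np.linalg.solve(mu * np.eye(n) + B, mu * y_star + b)

def G(mu):
    return np.linalg.norm(y_star - final_state(mu))

if G(0.0) <= eps:
    mu_hat = 0.0                       # constraint inactive: unconstrained solution
else:
    mu_hat = brentq(lambda m: G(m) - eps, 0.0, 1e12)   # G is decreasing in mu

y_hat = final_state(mu_hat)            # optimal final state, obtained "for free"
print(mu_hat, G(mu_hat))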
Remarks

→ In applications, it is enough to find a good $\mu$ ($\mu \ge \hat\mu$, $\mu$ close to $\hat\mu$), not the optimal one. One choice is to take
\[
\mu = \frac{\|B y^*_h - b\|}{\varepsilon}\Bigl(1 + \alpha + \int_0^T \beta(t)\, e^{-2t\kappa}\, dt\Bigr),
\]
where $A \ge \kappa$.
→ In applications, we can use $B = \tilde\beta_0(A)$, where
\[
\tilde\beta_0(\lambda) = \alpha + \int_0^T \beta(t)\, \exp(-2t\lambda)\, dt,
\]
and we can approximate $b$ by (with $w_h(t) \approx \sum_{i=1}^N w_i\, \chi_{[t_{i-1},t_i]}(t)$)
\[
\sum_{i=1}^N \tilde\beta_i(A)\, w_i, \qquad \text{where}\quad
\tilde\beta_i(\lambda) = \int_{t_{i-1}}^{t_i} \beta(t)\, \exp\bigl(-(T+t)\lambda\bigr)\, dt .
\]
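One possible finite-dimensional reading of the second remark (again a sketch with invented data, not the implementation from the talk): once $A$ is diagonalized, $\tilde\beta_0(A)$ and $\tilde\beta_i(A)$ only require the scalar values $\tilde\beta_i(\lambda)$ at the eigenvalues, each obtained by one-dimensional quadrature.

import numpy as np
from scipy.linalg import eigh
from scipy.integrate import quad

# Sketch: B = beta0_tilde(A) and b ~= sum_i betai_tilde(A) w_i via the spectral
# theorem, for a hypothetical symmetric matrix A and a piecewise constant
# (in time) approximation of w_h. All data are invented for illustration.
n, T, alpha, N = 40, 1.0, 1e-2, 8
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2
lam, V = eigh(A)                                  # A = V diag(lam) V^T

beta = lambda t: 1.0                              # weight beta(t) = 1 (illustrative)
t_grid = np.linspace(0.0, T, N + 1)
xs = np.linspace(h, 1.0 - h, n)
w_i = [0.5 * np.sin(2.0 * np.pi * xs)] * N        # w_h constant in time here

# beta0_tilde(l) = alpha + int_0^T beta(t) exp(-2 t l) dt, per eigenvalue of A
beta0 = alpha + np.array([quad(lambda t: beta(t) * np.exp(-2.0 * t * l), 0.0, T)[0]
                          for l in lam])
B = V @ np.diag(beta0) @ V.T                      # B = beta0_tilde(A)

# betai_tilde(l) = int_{t_{i-1}}^{t_i} beta(t) exp(-(T + t) l) dt
b = np.zeros(n)
for i in range(N):
    betai = np.array([quad(lambda t: beta(t) * np.exp(-(T + t) * l),
                           t_grid[i], t_grid[i + 1])[0] for l in lam])
    b += V @ (betai * (V.T @ w_i[i]))             # b += betai_tilde(A) w_i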
An example

Let $A$ be a positive definite operator and let $f = 0$. We take $\beta = \chi_{[T/3,\,2T/3]}$ and assume that $w$ does not depend on time. Then
\[
B = \alpha I + \tfrac12\, A^{-1} S_{2T/3}\bigl(I - S_{2T/3}\bigr), \qquad
b = A^{-1} S_{4T/3}\bigl(I - S_{T/3}\bigr)\, w .
\]
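These formulas follow by integrating the semigroup explicitly (here $S_t = e^{-tA}$, $\beta = \chi_{[T/3,\,2T/3]}$, and $w_h = w$ since $f = 0$):
\[
B = \alpha I + \int_{T/3}^{2T/3} S_{2t}\, dt
  = \alpha I + \tfrac12\, A^{-1}\bigl(e^{-\frac{2T}{3}A} - e^{-\frac{4T}{3}A}\bigr)
  = \alpha I + \tfrac12\, A^{-1} S_{2T/3}\bigl(I - S_{2T/3}\bigr),
\]
\[
b = \int_{T/3}^{2T/3} S_{T+t}\, w\, dt
  = A^{-1}\bigl(e^{-\frac{4T}{3}A} - e^{-\frac{5T}{3}A}\bigr) w
  = A^{-1} S_{4T/3}\bigl(I - S_{T/3}\bigr) w .
\]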
Sensitivity of the problem

Let us perturb all the parameters of the problem by perturbations of size $< \nu$ (in the respective norms), in such a way that the perturbed problem has the same structure ($A$ still selfadjoint, etc.). Then:

Theorem. For small enough $\nu > 0$ the optimal solutions of the original and the perturbed problem differ by $< C\nu$, with an explicit $C$.
Idea of the proof

The proof is mostly geometry. We can assume $f = 0$. We work in $\tilde H = \operatorname{Ran} S_T$, with the scalar product $\langle S_T^{-1}\cdot, S_T^{-1}\cdot\rangle$, and define $\omega(\cdot) = J(S_T^{-1}\cdot)$. Then $\hat y$ is the unique solution of
\[
\min_{y\in\tilde H}\ \bigl\{\, \omega(y) \,:\, \|y - y^*\| \le \varepsilon \,\bigr\}.
\]
We define $W_c = \{\, y \in \tilde H : \omega(y) \le c \,\}$. Let $\Pi_c(x)$ be the projection of $x$ onto $W_c$. Then $\Pi_{\hat c}(y^*) = \hat y$, where $\hat c = \omega(\hat y)$.
Idea of the proof

The second geometric ingredient is the following result: there exists $\hat\gamma > 0$ such that
\[
y^* - \hat y = \hat\gamma\, \nabla\omega(\hat y).
\]

[Figure: the sublevel set $W_{\hat c}$, the optimal state $\hat y$, the target $y^*$, and the ball of radius $\varepsilon$ around $y^*$.]
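Reading $\nabla\omega$ as the gradient with respect to the $\tilde H$ scalar product (an interpretation assumed here, not spelled out on the slide), a short sketch of how the closed-form solution follows: for $v \in \tilde H$,
\[
d\omega(y)[v]
 = \Bigl\langle \alpha S_T^{-1}y + \int_0^T \beta(t)\, S_t\bigl(S_t S_T^{-1} y - w(t)\bigr)\,dt,\ S_T^{-1}v \Bigr\rangle
 = \bigl\langle B y - b,\ v \bigr\rangle_{\tilde H},
\]
so $\nabla\omega(y) = B y - b$ in $\tilde H$. Then $y^* - \hat y = \hat\gamma\,(B\hat y - b)$ gives, with $\hat\mu = 1/\hat\gamma$,
\[
(\hat\mu + B)\,\hat y = \hat\mu\, y^* + b,
\]
which is exactly the formula for the optimal final state (here $f = 0$, so $y^*_h = y^*$).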
Non-homogeneous boundary condition

Suppose we have
\[
y'(t) + L\, y(t) = 0 \quad\text{for } 0 \le t \le T, \qquad
G\, y(t) = g(t), \qquad y(0) = u,
\]
where $G$ is a boundary trace operator. We assume that $(L, G)$ forms a so-called boundary control system.

Definition. A boundary control system is a pair of operators $(L, G)$ with $L \in \mathcal L(Z, X)$ and $G \in \mathcal L(Z, U)$ such that there exists $\beta \in \mathbb C$ for which: $G$ is surjective, $\operatorname{Ker} G$ is dense in $X$, $\beta - L$ restricted to $\operatorname{Ker} G$ is surjective, and $\operatorname{Ker}(\beta - L) \cap \operatorname{Ker} G = \{0\}$.
Non-homogeneous boundary condition

We define the operator $A$ on $X$ by $Au = Lu$ and $D(A) = \operatorname{Ker} G$. Let $X_{-1}$ be the extrapolation space corresponding to $A$ and let $\hat A$ be the extension of $A$ to $X_{-1}$. There exists a unique $T \in \mathcal L(U, X_{-1})$ such that the problem can be written as
\[
\dot y(t) + \hat A\, y(t) = T g(t).
\]
So we are back in business if $A$ is a lower semi-bounded selfadjoint operator ($\hat A$ inherits the properties of $A$), but $X_{-1}$ is not a nice space to work with.

Fear not: $B$ and $b$ can still be seen as an operator/element in $X$, using the fact that $T g(\cdot) = (\beta - \hat A)\, h(\cdot)$ for a function $h$ with values in $X$. If $g$ is constant in time, $B$ and $b$ have nice formulas.
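One standard way to obtain the representation $T g(\cdot) = (\beta - \hat A)\, h(\cdot)$ (a sketch of the usual boundary-control construction; the lift $D_\beta$ is introduced here for illustration and does not appear on the slides): under the axioms above, for every $g \in U$ there is a unique $D_\beta g \in Z$ with
\[
(\beta - L)\, D_\beta g = 0, \qquad G\, D_\beta g = g,
\]
and the control operator can then be written as $T = (\beta - \hat A)\, D_\beta$. Setting $h(t) = D_\beta\, g(t)$ gives a function with values in $Z \subset X$ and $T g(t) = (\beta - \hat A)\, h(t)$. If $g$ is constant in time, so is $h$, which is why $B$ and $b$ then admit explicit formulas.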
What's the point?

The standard (and much more general) solution is based on Lagrange multipliers, and to find the solution one needs to solve two coupled time-dependent problems: the original one and the adjoint one. Here we need to solve just one stationary problem, but with a more complicated operator and vector. This should have some advantages.

Constructing an efficient numerical procedure is work in progress. To learn about one possible approach, go to Luka's talk.
Look ahead

→ proper numerics
→ distributed control
→ boundary control
→ non-selfadjoint case