Controllability and stabilization of some Korteweg-de Vries equations - PowerPoint PPT Presentation


  1. Controllability and stabilization of some Korteweg-de Vries equations. Jean-Michel Coron, Laboratoire J.-L. Lions, Université Pierre et Marie Curie (Paris 6). Workshop "New trends in modeling, control and inverse problems", Institut de Mathématiques de Toulouse, June 16-19, 2014.

  2. The Korteweg-de Vries (KdV) equation. [Image: the equation as it appears in Boussinesq's memoir.] Joseph Boussinesq, Essai sur la théorie des eaux courantes, Mémoires présentés par divers savants à l'Acad. des Sci. Inst. Nat. France, XXIII, pp. 1-680, 1877. In this equation, t is time, s is the spatial variable and H + h′ is the water surface elevation, H being the water surface elevation when the water is at rest (Korteweg and de Vries (1895)). This equation is used to describe approximately long waves in water of relatively shallow depth. After suitable scalings, the equation can be written as (1) y_t + y_x + y_{xxx} + y y_x = 0.

  3. [Figure: graph of the solution y(t, x) as a function of the spatial variable x.]

  4. A KdV control system: (1) y_t + y_x + y_{xxx} + y y_x = 0, t ∈ [0, T], x ∈ [0, L], (2) y(t, 0) = y(t, L) = 0, y_x(t, L) = u(t), t ∈ [0, T], where, at time t ∈ [0, T], the control is u(t) ∈ R and the state is y(t, ·) ∈ L^2(0, L).

  5. Definition of the local controllability. Our control system is (1) y_t + y_x + y_{xxx} + y y_x = 0, t ∈ [0, T], x ∈ [0, L], (2) y(t, 0) = y(t, L) = 0, y_x(t, L) = u(t), t ∈ [0, T]. Definition of the local controllability of (1)-(2): Let T > 0. The control system (1)-(2) is locally controllable in time T if, for every ε > 0, there exists η > 0 such that, for every y^0 ∈ L^2(0, L) and every y^1 ∈ L^2(0, L) satisfying |y^0|_{L^2(0,L)} < η and |y^1|_{L^2(0,L)} < η, there exists u ∈ L^2(0, T) satisfying |u|_{L^2(0,T)} < ε such that the solution y ∈ C^0([0, T]; L^2(0, L)) of (1)-(2) satisfying the initial condition y(0, x) = y^0(x) is such that y(T, x) = y^1(x). Question: Let T > 0. Is it true that (1)-(2) is locally controllable in time T?

  6. Controllability of the linearized control system. The linearized control system (around 0) is (1) y_t + y_x + y_{xxx} = 0, t ∈ [0, T], x ∈ [0, L], (2) y(t, 0) = y(t, L) = 0, y_x(t, L) = u(t), t ∈ [0, T], where, at time t ∈ [0, T], the control is u(t) ∈ R and the state is y(t, ·) ∈ L^2(0, L). Definition of the controllability of (1)-(2): Let T > 0. The linear control system (1)-(2) is controllable in time T if, for every y^0 ∈ L^2(0, L) and for every y^1 ∈ L^2(0, L), there exists u ∈ L^2(0, T) such that the solution y ∈ C^0([0, T]; L^2(0, L)) of (1)-(2) satisfying the initial condition y(0, x) = y^0(x) is such that y(T, x) = y^1(x).

  7. Controllability of the linearized control system. Theorem (L. Rosier (1997)). For every T > 0, the linearized control system is controllable in time T if and only if L ∉ N := { 2π √((k^2 + kl + l^2)/3) ; k ∈ N^*, l ∈ N^* }.
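As a quick illustration (an addition, not part of the original slides), here is a minimal Python sketch that enumerates a finite sample of Rosier's critical set N and tests numerically whether a given length L is critical; the function names and the truncation parameter max_kl are hypothetical choices.

```python
import math

def critical_lengths(max_kl=5):
    """A finite sample of Rosier's critical set
    N = { 2*pi*sqrt((k^2 + k*l + l^2) / 3) : k, l in N* }, with 1 <= k, l <= max_kl."""
    vals = {2 * math.pi * math.sqrt((k * k + k * l + l * l) / 3)
            for k in range(1, max_kl + 1) for l in range(1, max_kl + 1)}
    return sorted(vals)

def is_critical(L, max_kl=50, tol=1e-9):
    """Numerically test whether L matches one of the sampled critical lengths."""
    return any(abs(L - c) < tol for c in critical_lengths(max_kl))

print(critical_lengths(2))          # the smallest critical lengths
print(is_critical(2 * math.pi))     # True: k = l = 1 gives L = 2*pi
```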

  8. Application to the nonlinear system. Theorem (L. Rosier (1997)). For every T > 0, the KdV control system is locally controllable in time T if L ∉ N. Question: Does one have controllability if L ∈ N?

  9. Controllability when L ∈ N. Theorem (JMC and E. Crépeau (2004)). If L = 2π (which is in N: take k = l = 1), for every T > 0 the KdV control system is locally controllable in time T. Theorem (E. Cerpa (2007), E. Cerpa and E. Crépeau (2008)). For every L ∈ N, there exists T > 0 such that the KdV control system is locally controllable in time T.

  10. The proof relies on a power series expansion. Let us explain the method on finite-dimensional control systems ẏ = f(y, u), where the state is y ∈ R^n and the control is u ∈ R^m. We assume that (0, 0) ∈ R^n × R^m is an equilibrium of the control system ẏ = f(y, u), i.e. that f(0, 0) = 0. Let H := Span{ A^i B u ; u ∈ R^m, i ∈ {0, ..., n − 1} } with A := ∂f/∂y(0, 0), B := ∂f/∂u(0, 0). If H = R^n, the linearized control system around (0, 0) is controllable and therefore the nonlinear control system ẏ = f(y, u) is small-time locally controllable at (0, 0) ∈ R^n × R^m.
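For readers who want to experiment, here is a minimal Python/NumPy sketch (not from the slides) that computes dim H as the rank of the Kalman matrix [B, AB, ..., A^(n−1)B]; the toy matrices A, B below are hypothetical and unrelated to KdV.

```python
import numpy as np

def dim_H(A, B):
    """Dimension of H = Span{A^i B u : u in R^m, 0 <= i <= n-1}, i.e. the rank of
    the Kalman controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Hypothetical toy example (a double integrator, not the KdV system):
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(dim_H(A, B))  # prints 2: H = R^2, so the linear test gives local controllability
```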

  11. Let us look at the case where the dimension of H is n − 1. Let us make a (formal) power series expansion of the control system ẏ = f(y, u) in (y, u) around 0. We write y = y^1 + y^2 + ..., u = v^1 + v^2 + .... The order 1 is given by (y^1, v^1); the order 2 is given by (y^2, v^2) and so on. The dynamics of these different orders are given by
(1) ẏ^1 = ∂f/∂y(0, 0) y^1 + ∂f/∂u(0, 0) v^1,
(2) ẏ^2 = ∂f/∂y(0, 0) y^2 + ∂f/∂u(0, 0) v^2 + (1/2) ∂^2 f/∂y^2(0, 0)(y^1, y^1) + ∂^2 f/∂y∂u(0, 0)(y^1, v^1) + (1/2) ∂^2 f/∂u^2(0, 0)(v^1, v^1),
and so on.
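To see the expansion at work, here is a Python sketch (an addition, not from the slides) for a hypothetical toy system that is exactly quadratic in u: it integrates the order-1 and order-2 systems above and checks that y^1 + y^2 matches the true solution up to O(δ^3) when the control has size δ.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy system (not KdV): f(y, u) = (y2 + u^2, -y1 + u), so f(0,0) = 0,
# A = df/dy(0,0) = [[0,1],[-1,0]], B = df/du(0,0) = (0,1),
# and the only nonzero second derivative is d^2f/du^2(0,0) = (2, 0).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([0.0, 1.0])

def f(y, u):
    return np.array([y[1] + u**2, -y[0] + u])

T, delta = 1.0, 1e-2
w1 = lambda t: np.cos(3 * t)          # arbitrary control shapes
w2 = lambda t: np.sin(t)
v1 = lambda t: delta * w1(t)          # order-1 part of the control
v2 = lambda t: delta**2 * w2(t)       # order-2 part of the control

# Order-1 and order-2 systems from the slide, integrated as one 4-dimensional ODE.
def cascade(t, z):
    y1, y2 = z[:2], z[2:]
    dy1 = A @ y1 + B * v1(t)
    dy2 = A @ y2 + B * v2(t) + 0.5 * np.array([2.0, 0.0]) * v1(t)**2
    return np.concatenate([dy1, dy2])

# Full nonlinear system with u = v1 + v2.
full = lambda t, y: f(y, v1(t) + v2(t))

z = solve_ivp(cascade, (0, T), np.zeros(4), rtol=1e-10, atol=1e-12).y[:, -1]
y_full = solve_ivp(full, (0, T), np.zeros(2), rtol=1e-10, atol=1e-12).y[:, -1]

print(np.linalg.norm(y_full - (z[:2] + z[2:])))  # O(delta^3): the expansion is consistent
```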

  12. Let e_1 ∈ H^⊥. Let T > 0. Let us assume that there are controls v^1_± and v^2_±, both in L^∞((0, T); R^m), such that, if y^1_± and y^2_± are the solutions of
(1) ẏ^1_± = ∂f/∂y(0, 0) y^1_± + ∂f/∂u(0, 0) v^1_±,
(2) y^1_±(0) = 0,
(3) ẏ^2_± = ∂f/∂y(0, 0) y^2_± + ∂f/∂u(0, 0) v^2_± + (1/2) ∂^2 f/∂y^2(0, 0)(y^1_±, y^1_±) + ∂^2 f/∂y∂u(0, 0)(y^1_±, v^1_±) + (1/2) ∂^2 f/∂u^2(0, 0)(v^1_±, v^1_±),
(4) y^2_±(0) = 0,
then
(5) y^1_±(T) = 0,
(6) y^2_±(T) = ±e_1.

  13. Let (e_i)_{i ∈ {2, ..., n}} be a basis of H. By the definition of H, there are (u_i)_{i = 2, ..., n}, all in L^∞(0, T)^m, such that, if (y_i)_{i = 2, ..., n} are the solutions of
(1) ẏ_i = ∂f/∂y(0, 0) y_i + ∂f/∂u(0, 0) u_i,
(2) y_i(0) = 0,
then, for every i ∈ {2, ..., n},
(3) y_i(T) = e_i.
Now let
(4) b = Σ_{i=1}^n b_i e_i
be a point in R^n. Let v^1 and v^2, both in L^∞((0, T); R^m), be defined by the following:
(5) if b_1 ≥ 0, then v^1 := v^1_+ and v^2 := v^2_+;
(6) if b_1 < 0, then v^1 := v^1_− and v^2 := v^2_−.
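To make the existence of such steering controls u_i concrete, here is a small Python sketch (an addition, not from the slides) that builds a control driving a linear system ẋ = Ax + Bu from 0 to a prescribed target in time T via the standard controllability Gramian formula; the function name steering_control and the toy data below are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def steering_control(A, B, T, target):
    """Sketch of a control u on (0, T) steering x' = A x + B u from x(0) = 0 to x(T) = target,
    using the controllability Gramian W = int_0^T e^{A s} B B^T e^{A^T s} ds and
    u(t) = B^T e^{A^T (T - t)} lam with W lam = target."""
    n = A.shape[0]

    def gramian_rhs(s, _):
        E = expm(A * s) @ B
        return (E @ E.T).reshape(-1)

    W = solve_ivp(gramian_rhs, (0.0, T), np.zeros(n * n),
                  rtol=1e-10, atol=1e-12).y[:, -1].reshape(n, n)
    lam = np.linalg.pinv(W) @ target  # pinv: works for any target in the range of W
    return lambda t: B.T @ expm(A.T * (T - t)) @ lam

# Hypothetical toy data: steer a double integrator from 0 to e_2 in time T = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
u2 = steering_control(A, B, 1.0, np.array([0.0, 1.0]))
```

Using the pseudo-inverse of W rather than its inverse is a deliberate choice here: the reachable set from 0 at time T is exactly the range of W, so the formula still applies when the target lies in H even if the pair (A, B) is not fully controllable.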

  14. Let u : (0, T) → R^m be defined by
(1) u(t) := |b_1|^{1/2} v^1(t) + |b_1| v^2(t) + Σ_{i=2}^n b_i u_i(t).
Let y : [0, T] → R^n be the solution of
(2) ẏ = f(y, u(t)), y(0) = 0.
Then one has, as b → 0,
(3) y(T) = b + o(b).
Hence, using the Brouwer fixed-point theorem and standard estimates on ordinary differential equations, one gets the local controllability of ẏ = f(y, u) (around (0, 0) ∈ R^n × R^m) in time T, that is: for every ε > 0, there exists η > 0 such that, for every (a, b) ∈ R^n × R^n with |a| < η and |b| < η, there exists a trajectory (y, u) : [0, T] → R^n × R^m of the control system ẏ = f(y, u) such that
(4) y(0) = a, y(T) = b,
(5) |u(t)| ≤ ε, t ∈ (0, T).
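As a small illustration of how the control in (1) is assembled in practice, here is a Python sketch (not from the slides); it assumes the building blocks v^1_±, v^2_± and u_i from the previous slides are already available as callables, and the function name assemble_control is a hypothetical choice.

```python
import numpy as np

def assemble_control(b, v1_plus, v1_minus, v2_plus, v2_minus, u_basis):
    """Return t -> |b1|^(1/2) v1(t) + |b1| v2(t) + sum_{i>=2} b_i u_i(t),
    choosing (v1, v2) = (v1_plus, v2_plus) if b1 >= 0 and the minus pair otherwise.
    All building blocks are callables mapping t in (0, T) to a vector in R^m."""
    b1, b_rest = b[0], b[1:]
    v1, v2 = (v1_plus, v2_plus) if b1 >= 0 else (v1_minus, v2_minus)

    def u(t):
        out = np.sqrt(abs(b1)) * np.asarray(v1(t)) + abs(b1) * np.asarray(v2(t))
        for bi, ui in zip(b_rest, u_basis):  # u_basis[i-2] steers the linearized system to e_i
            out = out + bi * np.asarray(ui(t))
        return out

    return u
```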

  15. Bad and good news for L = 2π.
• Bad news: the order 2 is not sufficient; one needs to go to the order 3.
• Good news: the fact that the order is odd allows one to get the local controllability in arbitrarily small time. The reason: if one can move in the direction ξ ∈ H^⊥, one can also move in the direction −ξ. Hence it suffices to argue by contradiction (assume that it is impossible to enter H^⊥ in small time, etc.).

  16. Lie brackets and obstruction to small-time local controllability. Theorem (H. Sussmann (1983)). Let us assume that f_0 and f_1 are analytic in an open neighborhood Ω of a and that
(1) f_0(a) = 0.
We consider the control system
(2) ẏ = f_0(y) + u f_1(y),
where the state is y ∈ Ω and the control is u ∈ R. Let us also assume that the control system (2) is small-time locally controllable at (a, 0). Then
(3) [f_1, [f_1, f_0]](a) ∈ Span{ ad^k_{f_0} f_1(a) ; k ∈ N }.
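For concreteness, here is a small Python/SymPy sketch (an addition, not from the slides) that computes Lie brackets and iterated brackets ad^k_{f_0} f_1 for explicit vector fields, using the convention [f, g](x) = g'(x) f(x) − f'(x) g(x); the toy fields below are hypothetical and are not the KdV ones.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def lie_bracket(f, g):
    """[f, g](x) = g'(x) f(x) - f'(x) g(x), vector fields as column Matrices in x = (x1, x2)."""
    return sp.simplify(g.jacobian(X) * f - f.jacobian(X) * g)

def ad(f, g, k):
    """Iterated bracket ad_f^k g = [f, [f, ..., [f, g] ...]] (k brackets)."""
    for _ in range(k):
        g = lie_bracket(f, g)
    return g

# Hypothetical toy drift and control vector fields (not the KdV ones), with f0(0) = 0:
f0 = sp.Matrix([x2, -x1 - x2**2])
f1 = sp.Matrix([0, 1])

print(lie_bracket(f1, lie_bracket(f1, f0)).T)                    # [f1, [f1, f0]]
print([ad(f0, f1, k).subs({x1: 0, x2: 0}).T for k in range(3)])  # ad^k_{f0} f1 at a = 0
```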

  17. How to apply it to the KdV control system? Question 1: What is [f_1, [f_1, f_0]](0) in the case of the KdV control system? Question 2: What is Span{ ad^k_{f_0} f_1(0) ; k ∈ N } in the case of the KdV control system? Note that, in the finite-dimensional case, Span{ ad^k_{f_0} f_1(a) ; k ∈ N } is just the controllable space of the linearization of ẏ = f_0(y) + u f_1(y) at (a, 0), i.e. of the linear control system
(1) ẏ = ∂f_0/∂y(a) y + u f_1(a),
where the state is y ∈ R^n and the control is u ∈ R. Hence one may expect that, for the KdV control system, Span{ ad^k_{f_0} f_1(0) ; k ∈ N } is the controllable part of
(2) y_t + y_x + y_{xxx} = 0, t ∈ [0, T], x ∈ [0, L],
(3) y(t, 0) = y(t, L) = 0, y_x(t, L) = u(t), t ∈ [0, T],
where, at time t ∈ [0, T], the control is u(t) ∈ R and the state is y(t, ·) ∈ L^2(0, L).

  18. [Figure] Lie bracket [f_0, f_1](a) when f_0(a) = 0: the starting point a.

  19. [Figure] Lie bracket [f_0, f_1](a) when f_0(a) = 0: from a, apply the control u = −η during a time ε to reach y(ε).

  20. [Figure] Lie bracket [f_0, f_1](a) when f_0(a) = 0: then apply u = η during a time ε to go from y(ε) to y(2ε).

  21. [Figure] Lie bracket [f_0, f_1](a) when f_0(a) = 0: one gets y(2ε) ≃ a + ηε^2 [f_0, f_1](a) as ε → 0^+.
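The picture can be checked numerically. Below is a Python sketch (an addition, not part of the slides) that integrates ẏ = f_0(y) + u f_1(y) with u = −η on [0, ε] and u = +η on [ε, 2ε] for hypothetical toy fields f_0, f_1 with f_0(a) = 0, and compares y(2ε) − a with ηε^2 [f_0, f_1](a).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy vector fields with f0(a) = 0 at a = 0 (not the KdV fields).
def f0(y): return np.array([y[1], -y[0]])
def f1(y): return np.array([0.0, 1.0])

def bracket(f, g, y, h=1e-6):
    """Numerical [f, g](y) = Dg(y) f(y) - Df(y) g(y) via central differences."""
    def jac(F, y):
        return np.column_stack([(F(y + h * e) - F(y - h * e)) / (2 * h)
                                for e in np.eye(len(y))])
    return jac(g, y) @ f(y) - jac(f, y) @ g(y)

a, eta, eps = np.zeros(2), 1.0, 1e-2
rhs = lambda u: (lambda t, y: f0(y) + u * f1(y))

# u = -eta on [0, eps], then u = +eta on [eps, 2*eps].
y_eps  = solve_ivp(rhs(-eta), (0, eps), a,     rtol=1e-10, atol=1e-12).y[:, -1]
y_2eps = solve_ivp(rhs(+eta), (0, eps), y_eps, rtol=1e-10, atol=1e-12).y[:, -1]

print(y_2eps - a)                         # roughly (-1e-4, 1e-6)
print(eta * eps**2 * bracket(f0, f1, a))  # (-1e-4, 0): they agree up to O(eps^3)
```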

  22. An example of the computation of a bracket for a PDE. Consider the simplest PDE control system
(1) y_t + y_x = 0, x ∈ [0, L], y(t, 0) = u(t).
It is a control system where, at time t, the state is y(t, ·) : (0, L) → R and the control is u(t) ∈ R. Formally it can be written in the form ẏ = f_0(y) + u f_1(y). Here f_0 is linear and f_1 is constant.

  23. [Figure] Lie bracket [f_0, f_1](a) for ẏ = f_0(y) + u f_1(y) with f_0(a) = 0: apply u = −η on [0, ε] and u = η on [ε, 2ε]; then y(2ε) ≃ a + ηε^2 [f_0, f_1](a) as ε → 0^+.
