
On the dynamic programming principle for static and dynamic inverse problems Stefan Kindermann, Industrial Mathematics Institute Johannes Kepler University Linz, Austria joint work with Antonio Leitao, University of Santa Catarina, Brazil


  1. On the dynamic programming principle for static and dynamic inverse problems Stefan Kindermann, Industrial Mathematics Institute Johannes Kepler University Linz, Austria joint work with Antonio Leitao, University of Santa Catarina, Brazil Dynamic programming principle

  2. Static Inverse Problems
Static inverse problems in Hilbert spaces: F u = y
F … (linear) operator between Hilbert spaces
u … unknown solution
y … data (possibly noisy)
Examples: parameter identification, image processing, …
Regularization: well-established theory

  3. Dynamic Inverse Problems
Dynamic inverse problems in Hilbert spaces: F(t) u(t) = y(t)
t … (artificial) time parameter
F(t) … (linear) time-dependent operator between Hilbert spaces
u(t) … unknown time-dependent solution
y(t) … time-dependent data (possibly noisy)
Can be handled by standard theory, but standard numerical algorithms do not take the time structure into account

  4. Examples
Parameter identification in elliptic PDEs with time-dependent parameters (moving objects)
Dynamic impedance tomography
Endocardiography
Online identification (Kügler)
… just include a t in your favorite inverse problem

  5. Regularization for dynamic problems
F(t) : H → L², F(t) u(t) = y(t)
Y = L²([0, T], L²(Ω)) … data space
First choice for the solution space: X = L²([0, T], H)
e.g. Tikhonov regularization:
u_α = argmin_u (1/2) ∫₀ᵀ ‖F(t)u(t) − y(t)‖² dt + (α/2) ∫₀ᵀ ‖u(t)‖²_H dt
Solution: u_α(t) = (F*(t)F(t) + αI)⁻¹ F*(t) y(t)
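Since the minimizer decouples in time, the closed-form solution above can be evaluated one time step at a time. A minimal NumPy sketch (the function name `tikhonov_pointwise` and the list-of-matrices representation of F(t) are our assumptions, not from the slides):

```python
import numpy as np

def tikhonov_pointwise(F, y, alpha):
    """Decoupled Tikhonov solution u_alpha(t_k) = (F_k* F_k + alpha I)^{-1} F_k* y_k,
    evaluated independently at each time step k.
    F: list of (m, n) arrays, y: list of (m,) arrays."""
    n = F[0].shape[1]
    return [np.linalg.solve(Fk.T @ Fk + alpha * np.eye(n), Fk.T @ yk)
            for Fk, yk in zip(F, y)]
```

Each step is an ordinary static Tikhonov solve; nothing couples neighboring time steps, which is exactly the weakness the next slides address.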

  6. Tikhonov Regularization: L² case
Solution: u_α(t) = (F*(t)F(t) + αI)⁻¹ F*(t) y(t)
The problem decouples in time: easy implementation, but the results are not satisfactory
The reconstruction is not continuous in time ⇒ use stronger regularization in t

  7. Tikhonov Regularization: H¹
H¹ regularization in t:
u_α = argmin_u (1/2) ∫₀ᵀ ‖F(t)u(t) − y(t)‖² dt + (α/2) ∫₀ᵀ ‖u′(t)‖²_H dt
Solution (optimality conditions):
F*(t)F(t) u_α(t) − α u_α″(t) = F*(t) y(t), u_α′(0) = u_α′(T) = 0
A Hilbert-space-valued boundary value problem.
Discretization: if F*F is a matrix of size n × n and [0, T] is discretized into n_T intervals ⇒ a full matrix of size (n n_T) × (n n_T).
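To make the size of that coupled system concrete, here is a sketch that assembles and solves it directly (a naive dense implementation under our own assumptions: uniform time grid, finite-difference Laplacian in time with Neumann ends, and the name `h1_tikhonov_full`; the slides do not prescribe this code):

```python
import numpy as np

def h1_tikhonov_full(F, y, alpha, dt):
    """Solve the full (n*nT) x (n*nT) optimality system for H^1-in-time
    Tikhonov: [blockdiag(F_k^T F_k) + alpha * (L kron I_n)] u = F^T y,
    where L is the 1D finite-difference Laplacian (from -u'' with
    Neumann conditions u'(0) = u'(T) = 0)."""
    nT, n = len(F), F[0].shape[1]
    # Neumann finite-difference Laplacian in time, scaled by 1/dt^2
    L = 2 * np.eye(nT) - np.eye(nT, k=1) - np.eye(nT, k=-1)
    L[0, 0] = L[-1, -1] = 1.0      # u'(0) = u'(T) = 0
    L /= dt**2
    A = alpha * np.kron(L, np.eye(n))
    for k, Fk in enumerate(F):     # add the block-diagonal data term
        A[k*n:(k+1)*n, k*n:(k+1)*n] += Fk.T @ Fk
    rhs = np.concatenate([Fk.T @ yk for Fk, yk in zip(F, y)])
    return np.linalg.solve(A, rhs).reshape(nT, n)
```

The matrix A is block tridiagonal but generically full within blocks, which is why the direct solve scales so badly and motivates the Louis–Schmitt and dynamic programming approaches below.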

  8. Louis–Schmitt Method
A general numerical method for such problems was suggested by [Louis, Schmitt]:
Approximate derivatives by differences (t-discretization)
Decomposition of the matrices
The problem then requires solving the Sylvester matrix equation FF* U R + α U = Y for U, which can be done more efficiently.
Alternative: dynamic programming [S.K., A. Leitao], an iterative method

  9. Principles of Dynamic Programming
Dynamic programming: [R. Bellman]
Idea: follow the path of the optimal solution in t ⇒ evolution equation in t

  10. Solve the problem by dynamic programming
Rewrite the problem as a constrained optimization problem:
min_{u,v} (1/2) ∫₀ᵀ ‖F(t)u(t) − y(t)‖² dt + (α/2) ∫₀ᵀ ‖v(t)‖²_H dt
subject to the constraint u′ = v
A linear-quadratic control problem, with v playing the role of the control

  11. Solve the problem by dynamic programming
Value function:
V(t, ξ) := min_{u,v} (1/2) ∫ₜᵀ ‖F(s)u(s) − y(s)‖² + α ‖v(s)‖² ds, with u(t) = ξ, u′ = v
The value function satisfies the Hamilton–Jacobi equation
V_t(t, ξ) = −(1/2) ‖F(t)ξ − y(t)‖² + (1/(2α)) ⟨V_ξ, V_ξ⟩, V(T, ξ) = 0
From V(t, ξ) we get v: v = −(1/α) V_ξ(t, u)
⇒ u can be found from the evolution equation u′(t) = −(1/α) V_ξ(t, u(t))
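The Hamilton–Jacobi equation above hides a pointwise minimization over the control v; the standard LQ-control reasoning behind it (our reconstruction, not verbatim from the slides) is short:

```latex
V_t(t,\xi) + \min_v \Big\{ \tfrac12\|F(t)\xi - y(t)\|^2
    + \tfrac{\alpha}{2}\|v\|^2 + \langle V_\xi(t,\xi), v\rangle \Big\} = 0.
% The inner minimizer satisfies \alpha v + V_\xi = 0, i.e.
v = -\tfrac{1}{\alpha} V_\xi(t,\xi),
% and substituting back,
% \tfrac{\alpha}{2}\|v\|^2 + \langle V_\xi, v\rangle = -\tfrac{1}{2\alpha}\|V_\xi\|^2,
% which yields
V_t(t,\xi) = -\tfrac12\|F(t)\xi - y(t)\|^2
             + \tfrac{1}{2\alpha}\langle V_\xi, V_\xi\rangle .
```

This also explains the feedback law v = −(1/α)V_ξ and hence the evolution equation for u.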

  12. Hamilton–Jacobi Equation
The Hamilton–Jacobi equation is a PDE in a very high-dimensional space: dimension = number of unknowns in the solution
Direct numerical solution is practically impossible
But if F is linear, V is a quadratic functional in ξ
Ansatz: V(t, ξ) = (1/2) ⟨ξ, Q(t)ξ⟩ + ⟨b(t), ξ⟩ + g(t)

  13. Ansatz
V(t, ξ) = (1/2) ⟨ξ, Q(t)ξ⟩ + ⟨b(t), ξ⟩ + g(t)
Inserting into the HJ equation ⇒ equations for Q, b, g
Riccati equation for the operator Q(t):
Q′(t) = −F*(t)F(t) + (1/α) Q(t)* Q(t), Q(T) = 0
Evolution equation for b:
b′(t) = (1/α) Q(t)* b(t) + F*(t) y(t), b(T) = 0
Equation for the solution:
u′(t) = −(1/α) (Q(t) u(t) + b(t))
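How the three equations drop out of the ansatz can be sketched by matching powers of ξ (our reconstruction of the standard computation):

```latex
V_t = \tfrac12\langle\xi, Q'(t)\xi\rangle + \langle b'(t),\xi\rangle + g'(t),
\qquad V_\xi = Q(t)\xi + b(t).
% Insert into
% V_t = -\tfrac12\|F\xi - y\|^2 + \tfrac{1}{2\alpha}\|Q\xi + b\|^2
% and compare coefficients of the quadratic, linear and constant terms in \xi:
\text{quadratic:}\quad Q' = -F^*F + \tfrac{1}{\alpha}\,Q^*Q, \qquad
\text{linear:}\quad b' = \tfrac{1}{\alpha}\,Q^*b + F^*y, \qquad
\text{constant:}\quad g' = -\tfrac12\|y\|^2 + \tfrac{1}{2\alpha}\|b\|^2 .
```

Only Q and b are needed to recover u; g never enters the feedback law.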

  14. Dynamic programming method I
First apply the dynamic programming principle, then discretize the equations.
Algorithm:
1. Solve the Riccati equation for Q(t) and the equation for b(t) backwards in time by the explicit Euler method
2. Solve the equation for u forwards in time by explicit Euler with some initial condition u₀
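A minimal NumPy sketch of Method I, under our own assumptions (uniform step dt, the function name `dp_method_1`, and F given as a list of matrices); it integrates the Riccati and b equations backward with explicit Euler and then u forward:

```python
import numpy as np

def dp_method_1(F, y, alpha, dt, u0):
    """Method I: explicit Euler backward in time for
    Q' = -F*F + Q*Q/alpha and b' = Q*b/alpha + F*y  (Q(T) = b(T) = 0),
    then explicit Euler forward for u' = -(Q u + b)/alpha, u(0) = u0."""
    N, n = len(F), F[0].shape[1]
    Q = [None] * N
    b = [None] * N
    Q[-1] = np.zeros((n, n))
    b[-1] = np.zeros(n)
    for k in range(N - 1, 0, -1):            # backward sweep
        dQ = -F[k].T @ F[k] + Q[k] @ Q[k] / alpha
        db = Q[k].T @ b[k] / alpha + F[k].T @ y[k]
        Q[k-1] = Q[k] - dt * dQ
        b[k-1] = b[k] - dt * db
    u = [u0]
    for k in range(N - 1):                   # forward sweep
        u.append(u[-1] - dt * (Q[k] @ u[-1] + b[k]) / alpha)
    return u
```

Explicit Euler keeps every step cheap (matrix products only), at the price of a step-size restriction for stability.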

  15. Alternative Method
First discretize the functional, t ∼ t_k = k/N; then use the discrete version of the dynamic programming principle.
Algorithm: two backward recursions for Q, b and one forward recursion for u:
Q_{k−1} = (Q_k + α⁻¹ I)⁻¹ Q_k + F*_{k−1} F_{k−1}, k = N+1, …, 2
b_{k−1} = (Q_k + α⁻¹ I)⁻¹ b_k − F*_{k−1} y_{k−1}, k = N+1, …, 2
u_k = (Q_k + α I)⁻¹ (α u_{k−1} − b_k), k = 1, …, N
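The recursions above, as we read them from the slide, in NumPy (0-based indexing and the name `dp_method_2` are ours; this is a sketch of our reconstruction, not the authors' code):

```python
import numpy as np

def dp_method_2(F, y, alpha, u0):
    """Method II: discrete dynamic programming.
    Backward recursions for Q_k and b_k starting from Q_N = 0, b_N = 0,
    then one forward recursion for u_k."""
    N, n = len(F), F[0].shape[1]
    I = np.eye(n)
    Qs = [np.zeros((n, n)) for _ in range(N + 1)]
    bs = [np.zeros(n) for _ in range(N + 1)]
    for k in range(N, 0, -1):            # backward sweep
        M = np.linalg.inv(Qs[k] + I / alpha)
        Qs[k-1] = M @ Qs[k] + F[k-1].T @ F[k-1]
        bs[k-1] = M @ bs[k] - F[k-1].T @ y[k-1]
    us = [u0]
    for k in range(N):                   # forward sweep
        us.append(np.linalg.solve(Qs[k] + alpha * I,
                                  alpha * us[-1] - bs[k]))
    return us
```

Unlike Method I there is no Euler step-size restriction here: each recursion step is an exact solve of the discrete optimality condition.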

  16. Regularization Properties
u(t) = ∫₀ᵀ ∫ γ(λ, μ, t) dE_λ F* y(μ) dμ
γ(λ, μ, t) = 1/(√(αλ) cosh(√(α⁻¹λ) T)) · { cosh(√(α⁻¹λ)(t − T)) sinh(√(α⁻¹λ) μ), μ ≤ t; sinh(√(α⁻¹λ) t) cosh(√(α⁻¹λ)(μ − T)), μ ≥ t }
For fixed λ, γ(λ, ·, ·) is the Green's function of the boundary value problem
L x := x″ − (λ/α) x, x(0) = 0, x′(T) = 0
Sturm–Liouville theory ⇒ convergence results as α → 0

  17. Computational Complexity
After discretization let F(t) be an n × N matrix, with T time steps.
Naive approach (solve the optimality conditions directly): (2/3)(NT)³
Louis–Schmitt method (Sylvester matrix equation): 25(n + T)³ + 2T·n(T + N)
Method I (explicit Euler): O(n³T); if Q(t) is precomputed, O(n²T)
Method II: O(n³T)
The complexity is linear in T.

  18. Related work: dynamic programming for static problems
Dynamic programming principle with T as regularization parameter
Static problem: F u = y
Introduce an artificial time variable in u: u = u(t)
Approximate the solution u by minimizing
J(u) = (1/2) ∫₀ᵀ ‖F u(t) − y‖²_H dt + (1/2) ∫₀ᵀ ‖u′(t)‖²_H dt
Use u_T := u(T) as the regularized solution
Questions: Is this a regularization? What is the limit lim_{T→∞} u_T?
T acts as the regularization parameter

  19. Dynamic programming
As before, apply the dynamic programming principle: …
Algorithm:
Backward evolution for Q:
Q′(t) = −I + Q(t) F F* Q(t), Q(T) = 0
Forward evolution for u:
u′(t) = −F* Q(t) (F u(t) − y)
Approximate the solution by u_T
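A compact NumPy sketch of this static variant (the explicit Euler time stepping and the name `static_dp` are our choices, not prescribed by the slides):

```python
import numpy as np

def static_dp(Fmat, y, T, nt, u0):
    """Static problem via artificial time on [0, T], nt Euler steps:
    backward sweep for Q' = -I + Q F F* Q, Q(T) = 0,
    then forward sweep for u' = -F* Q (F u - y), u(0) = u0."""
    dt = T / nt
    m = Fmat.shape[0]
    Qs = [np.zeros((m, m))]                  # Q(T) = 0
    for _ in range(nt):                      # integrate backward from T to 0
        Q = Qs[-1]
        Qs.append(Q - dt * (-np.eye(m) + Q @ Fmat @ Fmat.T @ Q))
    Qs.reverse()                             # Qs[k] ~ Q(t_k), t_0 = 0
    u = u0
    for k in range(nt):                      # integrate u forward
        u = u - dt * Fmat.T @ Qs[k] @ (Fmat @ u - y)
    return u
```

For a scalar F the Riccati solution is tanh-shaped, so the forward sweep drives u from u₀ toward the least-squares solution as T grows, matching the spectral formula on the next slide.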

  20. Regularization Properties
Q′(t) = −I + Q(t) F F* Q(t), Q(T) = 0
By spectral theory we get
Q(t) = ∫ q(t, λ) dE_λ, q(t, λ) = (1/√λ) tanh(√λ (T − t))
u_T := u(T) = ∫ (1/λ)(1 − 1/cosh(√λ T)) dE_λ F* y + ∫ (1/cosh(√λ T)) dE_λ u₀
Convergence and convergence-rate results follow from the usual spectral filter theory (as in Engl, Hanke, Neubauer)
Convergence as T → ∞, convergence rates
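The filter acting on the F*y-component can be evaluated numerically to see the regularizing behavior in T (a small illustration; the name `g_T` is ours):

```python
import numpy as np

def g_T(lam, T):
    """Spectral filter of the static DP method: the F*y-component of u_T
    is g_T(lambda) per spectral value, while the u_0-component decays
    like 1/cosh(sqrt(lambda) * T)."""
    return (1.0 - 1.0 / np.cosh(np.sqrt(lam) * T)) / lam

# For fixed lambda > 0, g_T(lambda) increases monotonically to 1/lambda
# as T -> infinity, i.e. u_T approaches the unregularized solution;
# finite T damps the small-lambda (ill-posed) components.
```

This is exactly the shape required of a spectral regularization filter, with 1/T playing the role usually taken by α.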

  21. Discrete Problems
First discretize, then apply the principle of optimality.
Instead of T, the iteration index N acts as the regularization parameter.
The recursions for Q_i and u_i lead to
u_N = ∫ g_N(λ) dE_λ F* y
g_N(λ) = (1/λ) (1 − [T_{2N+1}(√(λ/4 + 1))]⁻¹)
T_n(x): Chebyshev polynomial of the first kind of order n
Convergence as N → ∞, convergence rates

  22. Examples
Linear integral equation with convolution kernel, stationary case.
[Figure: error vs. N, T (log–log plots) for Landweber, CG, Method I, Method II]
Same order of convergence as CG; requires parameter tuning in the algorithm

  23. Numerical Results: dynamic case
Linearized dynamic impedance tomography problem:
∇·(γ(t, ·) ∇u) = 0
Identify γ from the Dirichlet-to-Neumann map for the linearized problem:
Λ_γ : u|_∂Ω ↦ ∂u/∂n

  24. Numerical Results: dynamic case
Nonlinear problem: F_nl : γ(t) ↦ Λ_γ
Linearized problem: F δγ := F′_nl(1) δγ
For the results we use nonlinear data and add random noise

  25. Numerical Results: exact solution

  26. Numerical Results: reconstruction, 5% noise

  27. Generalizations: nonlinear problems
The dynamic programming principle can be generalized to nonlinear problems.
Nonlinear dynamic problems: F(t, u(t)) = y(t), where F is a nonlinear operator in u
Tikhonov functional for nonlinear problems:
J(u) = (1/2) ∫₀ᵀ ‖F(t, u(t)) − y(t)‖² dt + (α/2) ∫₀ᵀ ‖u′(t)‖²_H dt
