3rd International Workshop on Equation-Based Object-Oriented Modeling Languages and Tools, Oslo, 3 October 2010
Towards a Computer Algebra System with Automatic Differentiation for use with object-oriented modelling languages
Joel Andersson, Boris Houska and Moritz Diehl
Department of Electrical Engineering (ESAT-SCD) & Optimization in Engineering Center (OPTEC), Katholieke Universiteit Leuven
OPTEC – Optimization in Engineering
Interdisciplinary: Mech. Eng. + Elec. Eng. + Civ. Eng. + Comp. Sc.
Katholieke Universiteit Leuven, Belgium
Phase I 2005-2010, phase II 2010-2017

Myself
M.Sc. Engineering Physics/Mathematics from Chalmers, Gothenburg
PhD student since Oct 2008 under Prof. Moritz Diehl
Topic: Modelling and Derivative Generation for Dynamic Optimization and Application to Large Scale Interconnected DAE Systems
Application: Solar thermal power plant
Dynamic optimization problem
We consider dynamic optimization problems of the form (can be generalized further):

minimize over x(·), z(·), u(·), p:
    ∫_{t=0}^{T} L(x, u, z, p, t) dt + E(x(T))
subject to:
    f(ẋ(t), x(t), z(t), u(t), p, t) = 0,   t ∈ [0, T]
    h(x(t), z(t), u(t), p, t) ≤ 0,          t ∈ [0, T]
    r(x(0), x(T), p) = 0

where x: R⁺ → R^{N_x} are the differential states, z: R⁺ → R^{N_z} the algebraic states, u: R⁺ → R^{N_u} the controls, and p ∈ R^{N_p} the free parameters.

Solution
- Dynamic programming / Hamilton-Jacobi-Bellman equation – for very small problems
- Pontryagin's Maximum Principle – for problems without inequality constraints
- Direct methods: Parametrize the controls and possibly the state to form a Nonlinear Program (NLP)
  - Collocation: Parametrize the state to form a large, but sparse NLP
  - Single shooting: Eliminate the state with a DAE integrator to form a small, but nonlinear NLP
  - Multiple shooting: Parametrize the state at some times and use single shooting in between

Good reference: L. Biegler, Nonlinear Programming, SIAM 2010
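To make the direct single-shooting idea concrete, here is a minimal sketch on a hypothetical toy problem (not from the talk): minimize the integral of x² + u² subject to ẋ = -x + u with x(0) = 1, using a piecewise-constant control on N intervals and SciPy's general-purpose optimizer in place of a structure-exploiting NLP solver. The ODE, cost, and discretization are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

T, N = 2.0, 20  # time horizon and number of control intervals (assumed values)
dt = T / N

def rollout(u):
    """Integrate the toy ODE xdot = -x + u with explicit Euler,
    accumulating the Lagrange cost x^2 + u^2 along the way."""
    x, cost = 1.0, 0.0  # initial state x(0) = 1
    for ui in u:
        cost += (x**2 + ui**2) * dt
        x += (-x + ui) * dt
    return cost, x

def objective(u):
    # Single shooting: the state is eliminated by the integrator,
    # so the NLP variables are the control parameters alone.
    cost, _ = rollout(u)
    return cost

res = minimize(objective, np.zeros(N))  # small but nonlinear unconstrained NLP
print(res.success, objective(res.x) < objective(np.zeros(N)))
```

Note how the NLP has only N variables: the state trajectory never appears as a decision variable, which is exactly why single shooting yields a small but strongly nonlinear problem.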
Direct Multiple Shooting (Bock, 1984)
- Subdivide the time horizon: 0 = t_0 ≤ ... ≤ t_N = T
- Parametrize the control: u(t) = u_i, t ∈ [t_i, t_{i+1}]
- Parametrize the state: s_{x,i} = x(t_i)

Nonlinear Program (NLP):

minimize over s_{x,i}, u_i, p:
    ∑_{i=0}^{N-1} L_i(s_{x,i}, u_i, p) + E(s_{x,N})
subject to:
    s_{x,i+1} = F_i(s_{x,i}, u_i, p),   ∀i
    0 ≥ h_i(s_{x,i}, u_i, p),           ∀i
    0 = r(s_{x,0}, s_{x,N}, p)

F_i: call to a DAE integrator
Solve with e.g. a structure-exploiting SQP method
Software: ACADO Toolkit, MUSCOD-II
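The multiple-shooting NLP above can be sketched for the same hypothetical toy problem (ẋ = -x + u, cost x² + u²; not the talk's software): the state at each interval boundary becomes a decision variable s_i, and the continuity condition s_{i+1} = F_i(s_i, u_i) is imposed as an equality constraint. SciPy's SLSQP stands in for the structure-exploiting SQP method the slide mentions.

```python
import numpy as np
from scipy.optimize import minimize

T, N = 2.0, 10  # horizon and number of shooting intervals (assumed values)
dt = T / N

def F(s, u):
    """One shooting interval: integrate with M explicit Euler substeps,
    returning the end state and the accumulated Lagrange cost."""
    x, cost, M = s, 0.0, 10
    h = dt / M
    for _ in range(M):
        cost += (x**2 + u**2) * h
        x += (-x + u) * h
    return x, cost

def split(w):
    # decision vector w = [s_0, ..., s_N, u_0, ..., u_{N-1}]
    return w[:N + 1], w[N + 1:]

def objective(w):
    s, u = split(w)
    return sum(F(s[i], u[i])[1] for i in range(N))

def defects(w):
    # continuity ("matching") conditions: s_{i+1} - F_i(s_i, u_i) = 0
    s, u = split(w)
    return [s[i + 1] - F(s[i], u[i])[0] for i in range(N)]

cons = [{'type': 'eq', 'fun': defects},
        {'type': 'eq', 'fun': lambda w: w[0] - 1.0}]  # boundary condition x(0) = 1
res = minimize(objective, np.zeros(2 * N + 1), constraints=cons, method='SLSQP')
print(res.success, max(abs(d) for d in defects(res.x)))
```

Compared to single shooting, the NLP is larger (states and controls are both decision variables) but its constraint Jacobian is block-structured and each F_i only couples neighbouring variables, which is what structure-exploiting SQP solvers take advantage of.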