From Linear Feedback to NMPC 10 / 47
QP solution

"QP is almost a technology", S. Boyd

Convex QP:
- No inequalities: solve a linear system
- Inequalities: interior-point or active-set method
- Active-set algorithm properties: guess the active constraints, solve a linear system, add/remove constraints and repeat; can be warm-started, extremely fast with a good initial guess

Nonconvex QP: NP-hard problem

Many reliable QP solvers available: qpOASES, FORCES, quadprog, and many others
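For the case without inequalities ("solve a linear system"), here is a minimal numpy sketch with assumed, illustrative problem data:

```python
import numpy as np

# Equality-constrained QP:  min 0.5 w'Hw + c'w  s.t.  A w = b
# Illustrative data (assumed, not from the slides)
H = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite Hessian
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])               # one equality constraint
b = np.array([1.0])

# KKT system: [H  A'; A  0] [w; lam] = [-c; b]
n, m = H.shape[0], A.shape[0]
KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(KKT, rhs)
w, lam = sol[:n], sol[n:]
print("w* =", w, "lambda* =", lam)
```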
From Linear Feedback to NMPC 11 / 47
Complexity: active set vs. interior point (IP)

Active set: qpOASES [H.J. Ferreau, KU Leuven]
- Very fast for few active-set changes!
- Suited to # inputs << # states
- Needs condensing
- Limited to QP
- Cubic in P

IP: FORCES [A. Domahidi, EPFL]
- Speed is consistent
- # inputs irrelevant
- No need for condensing
- Extension to convex programming
- Linear in P

[Figure: NMPC comparison for M = 3 (n_x = 21), maximum execution time per RTI [ms] vs. horizon length (10 to 50), comparing NMPC with condensing and NMPC with FORCES]
From Linear Feedback to NMPC 12 / 47
Linear system? Nonlinear system?

Linear MPC at time i
\[
\begin{aligned}
\min_{u,\,s}\ & \sum_{k=0}^{N} \| s_k - x_{\mathrm{ref}} \|_Q^2 + \sum_{k=0}^{N-1} \| u_k - u_{\mathrm{ref}} \|_R^2 \\
\text{s.t.}\ & s_{k+1} = A s_k + B u_k \\
& C s_k + D u_k \ge 0, \quad s_0 = \hat{x}_i
\end{aligned}
\]
1. Linear dynamics
2. Linear path constraints
3. Solve a QP at each iteration
4. Extremely fast for small- to medium-scale problems

Nonlinear system? Linearize at x_ref, u_ref and use linear MPC, or...

Nonlinear MPC at time i
\[
\begin{aligned}
\min_{u,\,s}\ & \sum_{k=0}^{N} \| s_k - x_{\mathrm{ref}} \|_Q^2 + \sum_{k=0}^{N-1} \| u_k - u_{\mathrm{ref}} \|_R^2 \\
\text{s.t.}\ & s_{k+1} = f(s_k, u_k) \\
& h(s_k, u_k) \ge 0, \quad s_0 = \hat{x}_i
\end{aligned}
\]
The problem is non-convex: use an NLP solver.
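For illustration, a minimal sketch of the linear MPC QP above, built and solved with cvxpy (used here only for readability; it is not one of the solvers named on the QP-solution slide). The double-integrator data and input bounds are assumptions:

```python
import numpy as np
import cvxpy as cp

# Illustrative double-integrator data (assumed, not from the slides)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([10.0, 1.0]); R = np.array([[0.1]])
N, nx, nu = 20, 2, 1
x_ref = np.zeros(nx); u_ref = np.zeros(nu)
x_hat = np.array([1.0, 0.0])                       # current state estimate

s = [cp.Variable(nx) for _ in range(N + 1)]
u = [cp.Variable(nu) for _ in range(N)]
cost = sum(cp.quad_form(s[k] - x_ref, Q) for k in range(N + 1)) \
     + sum(cp.quad_form(u[k] - u_ref, R) for k in range(N))
constr = [s[0] == x_hat]
for k in range(N):
    constr += [s[k + 1] == A @ s[k] + B @ u[k],    # linear dynamics
               u[k] <= 1.0, u[k] >= -1.0]          # simple bounds, stand-in for C s + D u >= 0
cp.Problem(cp.Minimize(cost), constr).solve()
print("first input to apply:", u[0].value)
```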
Nonlinear Programming and SQP 13 / 47
Outline
1. From Linear Feedback to NMPC
2. Nonlinear Programming and SQP
3. From Continuous Time to Discrete Time
4. Moving Horizon Estimation
5. Practical NMPC
6. Tutorial
Nonlinear Programming and SQP 14 / 47
Newton's Method

Problem: find the zeros of F(w).
Newton's method: linearize and iteratively solve
\[
F(w_k) + \nabla F(w_k)^T p_k = 0 .
\]

Unconstrained optimization problem: \(\min_w f(w)\)
First-order necessary condition (FONC): \(\nabla f(w) = 0\)
Find the zeros of the FONC: iteratively solve
\[
\nabla f(w_k) + \nabla^2 f(w_k)\, p_k = 0 .
\]
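A minimal numpy sketch of this Newton iteration on an assumed, illustrative function f(w) = exp(w0 + w1) + w0^2 + w1^2:

```python
import numpy as np

# Newton iteration for the unconstrained problem min_w f(w),
# applied to the illustrative (assumed) function f(w) = exp(w0 + w1) + w0^2 + w1^2.
def grad(w):
    e = np.exp(w[0] + w[1])
    return np.array([e + 2.0 * w[0], e + 2.0 * w[1]])

def hess(w):
    e = np.exp(w[0] + w[1])
    return np.array([[e + 2.0, e], [e, e + 2.0]])

w = np.zeros(2)
for k in range(20):
    p = np.linalg.solve(hess(w), -grad(w))    # solve  grad f + (hess f) p = 0
    w = w + p                                 # full Newton step (no line search)
    if np.linalg.norm(grad(w)) < 1e-10:       # check the FONC
        break
print("iterations:", k + 1, "stationary point:", w)
```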
Nonlinear Programming and SQP 15 / 47
Nonlinear Programming Problem (NLP)
\[
\begin{aligned}
\min_{w}\ & f(w) \\
\text{s.t.}\ & g(w) = 0 \\
& h(w) \ge 0
\end{aligned}
\qquad (1)
\]

Newton-type algorithm
Given an initial guess w_0, keep iterating:
1. determine a (descent) direction p_k
2. determine a step length α_k
3. compute the step: w_{k+1} = w_k + α_k p_k
4. check for convergence and return the solution

Lagrangian function
\[
\mathcal{L}(w, \lambda, \mu) = f(w) - \lambda^T g(w) - \mu^T h(w)
\]

First-order necessary conditions: the KKT system
\[
\begin{aligned}
\nabla_w \mathcal{L}(w^*, \lambda^*, \mu^*) &= \nabla f(w^*) - \nabla g(w^*)\,\lambda^* - \nabla h(w^*)\,\mu^* = 0 \\
\nabla_\lambda \mathcal{L}(w^*, \lambda^*, \mu^*) &= g(w^*) = 0 \\
\nabla_\mu \mathcal{L}(w^*, \lambda^*, \mu^*) &= h(w^*) \ge 0 \\
\mu^* &\ge 0 \\
\mu_i^*\, h_i(w^*) &= 0
\end{aligned}
\]
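As one possible way to implement the convergence check (step 4 of the Newton-type algorithm), a small numpy helper that stacks the KKT residuals of (1) for a given primal-dual guess. All function handles and arguments below are assumed, hypothetical inputs:

```python
import numpy as np

# KKT residuals for an NLP with equalities g(w) = 0 and inequalities h(w) >= 0.
# jac_g and jac_h return the Jacobians (rows = constraints), so grad g = jac_g(w).T.
def kkt_residuals(w, lam, mu, grad_f, g, jac_g, h, jac_h):
    stationarity = grad_f(w) - jac_g(w).T @ lam - jac_h(w).T @ mu
    primal_eq    = g(w)                        # should be 0
    primal_ineq  = np.minimum(h(w), 0.0)       # violation of h(w) >= 0
    dual_feas    = np.minimum(mu, 0.0)         # violation of mu >= 0
    complement   = mu * h(w)                   # complementarity mu_i h_i(w) = 0
    return np.concatenate([stationarity, primal_eq, primal_ineq, dual_feas, complement])

# Converged when, e.g., np.linalg.norm(kkt_residuals(...)) drops below a tolerance.
```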
Nonlinear Programming and SQP 16 / 47
Without Inequalities

Linearize the KKT system
\[
\begin{bmatrix} \nabla_w^2 \mathcal{L} & \nabla g \\ \nabla g^T & 0 \end{bmatrix}
\begin{bmatrix} \Delta w_k \\ \lambda_{k+1} \end{bmatrix}
= - \begin{bmatrix} \nabla f \\ g \end{bmatrix}
\]
and solve the linear system.

Corresponding QP
\[
\begin{aligned}
\min_{\Delta w_k}\ & \tfrac{1}{2}\, \Delta w_k^T \nabla_w^2 \mathcal{L}\, \Delta w_k + \nabla f^T \Delta w_k \\
\text{s.t.}\ & g + \nabla g^T \Delta w_k = 0
\end{aligned}
\]

With Inequalities

The last three KKT conditions are nonsmooth. At each iteration solve the QP
\[
\begin{aligned}
\min_{\Delta w_k}\ & \tfrac{1}{2}\, \Delta w_k^T \nabla_w^2 \mathcal{L}\, \Delta w_k + \nabla f^T \Delta w_k \\
\text{s.t.}\ & g + \nabla g^T \Delta w_k = 0 \\
& h + \nabla h^T \Delta w_k \ge 0
\end{aligned}
\]
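A minimal numpy sketch of the equality-constrained Newton step, written in terms of increments (Δw, Δλ) and the Lagrangian convention L = f − λᵀg used above, which places a minus sign on the ∇g block of the KKT matrix. The toy problem is an assumption, not from the slides:

```python
import numpy as np

# Full Newton on the equality-constrained KKT conditions,
# illustrative (assumed) problem:  min w0^2 + w1^2  s.t.  g(w) = w0*w1 - 1 = 0.
def grad_f(w):  return np.array([2.0 * w[0], 2.0 * w[1]])
def hess_f(w):  return 2.0 * np.eye(2)
def g(w):       return np.array([w[0] * w[1] - 1.0])
def grad_g(w):  return np.array([[w[1]], [w[0]]])            # column grad g
def hess_g(w):  return np.array([[0.0, 1.0], [1.0, 0.0]])

w, lam = np.array([1.5, 0.7]), np.array([0.0])
for _ in range(20):
    H = hess_f(w) - lam[0] * hess_g(w)                        # Lagrangian Hessian
    G = grad_g(w)
    kkt = np.block([[H, -G], [G.T, np.zeros((1, 1))]])        # minus on grad g matches L = f - lam*g
    res = np.concatenate([grad_f(w) - G @ lam, g(w)])         # [grad_w L ; g]
    step = np.linalg.solve(kkt, -res)
    w, lam = w + step[:2], lam + step[2:]
    if np.linalg.norm(res) < 1e-10:
        break
print("w* =", w, "lambda* =", lam)
```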
Nonlinear Programming and SQP 17 / 47
SQP method in a nutshell

NMPC at time i (as an NLP)
\[
\begin{aligned}
\min_{w}\ & f(w) \\
\text{s.t.}\ & g(w) = 0 \\
& h(w) \ge 0
\end{aligned}
\]

Quadratic Problem Approximation: QP (for a given s, u)
\[
\begin{aligned}
\min_{\Delta w}\ & \tfrac{1}{2}\, \Delta w^T B(w_k)\, \Delta w + \nabla f(w_k)^T \Delta w \\
\text{s.t.}\ & g(w_k) + \nabla g(w_k)^T \Delta w = 0 \\
& h(w_k) + \nabla h(w_k)^T \Delta w \ge 0
\end{aligned}
\]

Iterative procedure (a minimal SQP loop is sketched below):
1. Given the current guess w_k, λ_k, μ_k
2. Linearize at w_k, λ_k, μ_k: need 2nd-order derivatives for B(w_k)
3. Make sure the Hessian B(w_k) ≻ 0: avoid negative curvature
4. Solve the QP
5. Globalization (e.g. line search): ensure descent, step size α ∈ (0, 1]
6. Update
\[
\begin{bmatrix} w_{k+1} \\ \lambda_{k+1} \\ \mu_{k+1} \end{bmatrix}
= \begin{bmatrix} w_k \\ \lambda_k \\ \mu_k \end{bmatrix}
+ \alpha \begin{bmatrix} \Delta w \\ \Delta \lambda \\ \Delta \mu \end{bmatrix}
\]
and iterate.
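A minimal SQP sketch following steps 1-6 on a small assumed toy NLP, using cvxpy for the QP subproblem and an l1 merit function for the line search. For simplicity, B is taken as the (positive-definite) Hessian of f rather than the exact Lagrangian Hessian; this is a sketch of the loop structure, not a production SQP:

```python
import numpy as np
import cvxpy as cp

# Toy problem (illustrative assumption, not from the slides):
#   f(w) = (w0-2)^2 + (w1-2)^2,   g(w) = w0 - w1,   h(w) = 1 - w0^2 - w1^2
f      = lambda w: (w[0] - 2)**2 + (w[1] - 2)**2
grad_f = lambda w: np.array([2*(w[0] - 2), 2*(w[1] - 2)])
g      = lambda w: np.array([w[0] - w[1]])
jac_g  = lambda w: np.array([[1.0, -1.0]])
h      = lambda w: np.array([1.0 - w[0]**2 - w[1]**2])
jac_h  = lambda w: np.array([[-2*w[0], -2*w[1]]])

def merit(w, rho=10.0):   # l1 merit function used for the line search (step 5)
    return f(w) + rho*(np.abs(g(w)).sum() + np.abs(np.minimum(h(w), 0.0)).sum())

w = np.array([0.0, 0.0])
for it in range(30):
    # Steps 2-3: here B is simply the positive-definite Hessian of f; in general
    # B is the Lagrangian Hessian or a BFGS/Gauss-Newton approximation, regularized to be > 0.
    B = 2.0*np.eye(2)

    # Step 4: solve the QP subproblem
    dw = cp.Variable(2)
    qp = cp.Problem(cp.Minimize(0.5*cp.quad_form(dw, B) + grad_f(w) @ dw),
                    [g(w) + jac_g(w) @ dw == 0,
                     h(w) + jac_h(w) @ dw >= 0])
    qp.solve()
    step = dw.value

    # Step 5: backtracking line search on the merit function
    alpha = 1.0
    while merit(w + alpha*step) > merit(w) and alpha > 1e-6:
        alpha *= 0.5

    # Step 6: update and iterate
    w = w + alpha*step
    if np.linalg.norm(alpha*step) < 1e-8:
        break
print("SQP solution:", w)   # approaches (1/sqrt(2), 1/sqrt(2))
```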
Nonlinear Programming and SQP 18 / 47
The Generalized Gauss-Newton Method [Bock1983]

Specific structure of f(w)
\[
\begin{aligned}
\min_{w}\ & \tfrac{1}{2}\, \| F(w) \|_2^2 \\
\text{s.t.}\ & g(w) = 0 \\
& h(w) \ge 0
\end{aligned}
\qquad (2)
\]

Gauss-Newton Hessian approximation: linearize inside the norm to obtain
\[
\begin{aligned}
\min_{\Delta w}\ & \tfrac{1}{2}\, \| F(w_k) + J(w_k)\, \Delta w \|_2^2 \\
\text{s.t.}\ & g(w_k) + \nabla g(w_k)^T \Delta w = 0 \\
& h(w_k) + \nabla h(w_k)^T \Delta w \ge 0
\end{aligned}
\]
where J(w_k) = ∇F(w_k)^T.
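A minimal unconstrained Gauss-Newton sketch on an assumed curve-fitting problem, which is exactly the "linearize inside the norm" idea:

```python
import numpy as np

# Gauss-Newton on an unconstrained nonlinear least-squares problem
#   min_w 0.5 || F(w) ||^2,  with F_i(w) = w0 * exp(w1 * t_i) - y_i
# (illustrative curve-fitting data, assumed, not from the slides)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.default_rng(0).standard_normal(20)

def residual(w):
    return w[0] * np.exp(w[1] * t) - y

def jacobian(w):                      # J = dF/dw, shape (20, 2)
    e = np.exp(w[1] * t)
    return np.column_stack([e, w[0] * t * e])

w = np.array([1.0, 0.0])              # initial guess
for _ in range(50):
    F, J = residual(w), jacobian(w)
    dw = np.linalg.solve(J.T @ J, -J.T @ F)   # GN step: min 0.5 ||F + J dw||^2
    w = w + dw
    if np.linalg.norm(dw) < 1e-10:
        break
print("estimated (a, b):", w)         # close to (2.0, -1.5)
```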
Nonlinear Programming and SQP 19 / 47
Why does it perform well?

Exact Hessian:
\[
\nabla_w^2 \mathcal{L}
= \nabla^2 f - \sum_i \lambda_i \nabla^2 g_i - \sum_i \mu_i \nabla^2 h_i
= J^T J + \sum_i F_i \nabla^2 F_i - \sum_i \lambda_i \nabla^2 g_i - \sum_i \mu_i \nabla^2 h_i
\]

Gauss-Newton Hessian:
\[
\nabla_w^2 \mathcal{L} \approx J^T J
\]
No need for 2nd-order derivatives or Lagrange multipliers.

When does it perform well?
- ||F|| small: good fit
- ∇²F_i small: residuals F nearly linear
- ||λ|| and ||μ|| small: true when ||F|| is small
Nonlinear Programming and SQP 20 / 47
Wide Range of Applications

System identification:
\[
\min_{p}\ \| y(p) - y \|_S^2
\]
Model Predictive Control:
\[
\min_{x,\,u}\ \| x - x_r \|_Q^2 + \| u - u_r \|_R^2
\]
Moving Horizon Estimation:
\[
\min_{x,\,u}\ \| y(x, u) - y \|_S^2
\]
From Continuous Time to Discrete Time 21 / 47
Outline
1. From Linear Feedback to NMPC
2. Nonlinear Programming and SQP
3. From Continuous Time to Discrete Time
4. Moving Horizon Estimation
5. Practical NMPC
6. Tutorial
From Continuous Time to Discrete Time 22 / 47
Linear system vs. nonlinear system

Linear system
Continuous time: \( \dot{x}(t) = A_c x(t) + B_c u(t) \)
Discrete time: \( s_{k+1} = A s_k + B u_k \)
Discretization over the time interval t ∈ [t_k, t_{k+1}] with constant input u(t) = u_k:
\[
A = e^{A_c (t_{k+1} - t_k)}, \qquad B = \int_{0}^{t_{k+1} - t_k} e^{A_c \tau} B_c \, d\tau
\]

Nonlinear system
Continuous time: \( \dot{x}(t) = f_c(x(t), u(t)) \)
Discrete time: \( s_{k+1} = f(s_k, u_k) \)
The integration of f_c can be complex and possibly implicit: the discrete-time map f is an algorithm!
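As a concrete sketch of both cases: the exact discrete-time (A, B) of a linear system can be computed with a matrix exponential (here via the standard augmented-matrix construction), and a nonlinear map f(s_k, u_k) can be realized by one step of an explicit integrator such as RK4. All system data below is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import expm

# Exact discretization of x' = Ac x + Bc u over a step dt (assumed toy data).
Ac = np.array([[0.0, 1.0], [0.0, 0.0]])
Bc = np.array([[0.0], [1.0]])
dt, nx, nu = 0.1, 2, 1
M = expm(np.block([[Ac, Bc],
                   [np.zeros((nu, nx)), np.zeros((nu, nu))]]) * dt)
A, B = M[:nx, :nx], M[:nx, nx:]        # A = e^{Ac dt},  B = integral of e^{Ac tau} Bc

# Nonlinear system x' = f_c(x, u): one explicit RK4 step gives s_{k+1} = f(s_k, u_k).
def f_c(x, u):                          # assumed toy dynamics (damped pendulum)
    return np.array([x[1], -9.81 * np.sin(x[0]) - 0.1 * x[1] + u[0]])

def rk4_step(x, u, dt):
    k1 = f_c(x, u)
    k2 = f_c(x + 0.5 * dt * k1, u)
    k3 = f_c(x + 0.5 * dt * k2, u)
    k4 = f_c(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print(A, B)
print(rk4_step(np.array([0.1, 0.0]), np.array([0.0]), dt))
```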
From Continuous Time to Discrete Time 23 / 47
How to Discretize the System?

Single shooting: from x(t_0), integrate the system over the whole horizon → continuous trajectory.
[Figure: single-shooting trajectory x(t) with nodes s_0 = x(0), s_1 = x(T_s), s_2 = x(2 T_s)]

Multiple shooting: from each x(t_k), integrate the system on each interval separately → discontinuous trajectory.
[Figure: multiple-shooting trajectory with node states s_k = x_k(0), interval end points f(s_k, u_k) = x_k(T_s), and continuity gaps s_{k+1} - f(s_k, u_k)]
From Continuous Time to Discrete Time 24 / 47
Multiple Shooting vs. Single Shooting

- Better: unstable systems
- Better: initialization of the states at intermediate nodes
- Warning: leads to a bigger QP/NLP
  Single shooting: n_x + (N-1) n_u optimization variables (x_0, u_0, u_1, ..., u_{N-1})
  Multiple shooting: N n_x + (N-1) n_u optimization variables (x_0, u_0, x_1, u_1, ..., x_N)
- Good news: after integration, all x_k, k = 1, ..., N can be eliminated
  → condensing: reduce to the size of single shooting (to be continued...)
- Continuity conditions:
  Single shooting: imposed by the integration
  Multiple shooting: imposed by the QP/NLP

A small code sketch contrasting the two parameterizations is given below.
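A minimal sketch assuming a toy discrete-time map f(s, u): in single shooting the states are eliminated by forward simulation, while in multiple shooting they remain decision variables and the continuity gaps become equality constraints of the NLP:

```python
import numpy as np

# Assumed toy discrete dynamics (e.g. one integrator step of some system)
def f(s, u):
    return np.array([s[0] + 0.1 * s[1], s[1] + 0.1 * (u[0] - s[0])])

N, nx, nu = 5, 2, 1
x0 = np.array([1.0, 0.0])
u_traj = [np.array([0.0]) for _ in range(N)]       # input decision variables (both schemes)

# Single shooting: only x0 and the inputs are decision variables;
# the states are obtained (eliminated) by forward integration.
x = x0.copy()
for k in range(N):
    x = f(x, u_traj[k])

# Multiple shooting: the node states s_1 ... s_N are extra decision variables,
# and continuity s_{k+1} = f(s_k, u_k) is imposed as equality constraints.
s_traj = [x0] + [np.zeros(nx) for _ in range(N)]   # some (here: poor) initial guess
gaps = [s_traj[k + 1] - f(s_traj[k], u_traj[k]) for k in range(N)]
print("single-shooting terminal state:", x)
print("multiple-shooting continuity residuals:", np.round(np.array(gaps), 3))
```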
From Continuous Time to Discrete Time 25 / 47
Let's get a closer look at the problem

QP (for a given s, u)
\[
\begin{aligned}
\min_{\Delta u,\, \Delta s}\ & \tfrac{1}{2}
\begin{bmatrix} \Delta s \\ \Delta u \end{bmatrix}^T B
\begin{bmatrix} \Delta s \\ \Delta u \end{bmatrix}
+ J^T \begin{bmatrix} \Delta s \\ \Delta u \end{bmatrix} \\
\text{s.t.}\ & \Delta s_{k+1} = f + \frac{\partial f}{\partial s} \Delta s_k + \frac{\partial f}{\partial u} \Delta u_k, \\
& h + \frac{\partial h}{\partial s} \Delta s_k + \frac{\partial h}{\partial u} \Delta u_k \ge 0, \\
& s_0 = \hat{x}_i
\end{aligned}
\]
Ingredients:
- Linearize f: evaluate the integrator; ∂f/∂s, ∂f/∂u: differentiate the integrator
- h: evaluate the nonlinear function; ∂h/∂s, ∂h/∂u: differentiate the nonlinear function
- B = diag(Q, ..., Q, R, ..., R) + λ ∂²f/∂w² + μ ∂²h/∂w², with w = [s; u]
- Jᵀ = 2 wᵀ diag(Q, ..., Q, R, ..., R)

Ensure B ≻ 0:
- Exact Hessian: add curvature to the negative directions → quadratic convergence
- BFGS update (a code sketch follows below):
  \[
  B_{k+1} = B_k - \frac{B_k \sigma \sigma^T B_k}{\sigma^T B_k \sigma} + \frac{\gamma \gamma^T}{\sigma^T \gamma}
  \]
  → superlinear convergence
- Gauss-Newton approximation: B ≈ JᵀJ (for linear MPC it is exact!) → linear convergence

Iterate to convergence:
- All previous steps are repeated until convergence!
- Computations can become very long
- The control cannot be applied instantaneously
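A minimal numpy sketch of the plain (undamped) BFGS update above; sigma and gamma are assumed illustrative values standing for the step in the variables and the change in the Lagrangian gradient between two iterates:

```python
import numpy as np

# Plain BFGS update of a Hessian approximation B_k; it preserves symmetry and
# positive definiteness as long as sigma' gamma > 0.
def bfgs_update(B, sigma, gamma):
    Bs = B @ sigma
    return B - np.outer(Bs, Bs) / (sigma @ Bs) + np.outer(gamma, gamma) / (sigma @ gamma)

B = np.eye(2)
sigma = np.array([0.3, -0.1])          # w_{k+1} - w_k (assumed)
gamma = np.array([0.5, 0.2])           # grad L_{k+1} - grad L_k (assumed)
B = bfgs_update(B, sigma, gamma)
print(B, "eigenvalues:", np.linalg.eigvalsh(B))
```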
The Real Time Iteration Scheme 26 / 47
Can we be faster?

What about:
- 1 Newton step → no need to iterate
- Initial value embedding: s_0 = x̂_i as a constraint → faster convergence, clever computations
- No globalization → need to enforce s_0 = x̂_i
- Gauss-Newton Hessian approximation → only 1st-order derivatives, Hessian B ≻ 0

Result:
- Converge while the system evolves: the next SQP iteration takes place on the new problem x̂_{i+1}
- Need a good initial guess: better to shift
- Under some (mild) conditions, the SQP solution is closely tracked

(A schematic RTI loop is sketched in code below.)
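A schematic sketch of the RTI loop structure only: preparation phase before x̂_i is known, feedback phase once it arrives, then shift and repeat. Every helper below is an assumed placeholder, not an actual implementation of the linearization, condensing, or QP solution:

```python
import numpy as np

def prepare_qp(s_guess, u_guess):
    # Preparation phase: integrate and differentiate the dynamics along the guess,
    # build the Gauss-Newton QP *without* knowing the next state estimate x_hat.
    return {"s": s_guess, "u": u_guess}            # stand-in for the QP data

def solve_qp(qp_data, x_hat):
    # Feedback phase: plug in x_hat (initial value embedding) and solve the QP.
    return -0.1 * x_hat                            # stand-in for the QP solution

def shift(traj):
    return np.roll(traj, -1, axis=0)               # warm start for the next problem

N, nx, nu = 20, 2, 1
s_guess, u_guess = np.zeros((N + 1, nx)), np.zeros((N, nu))
x = np.array([1.0, 0.0])                           # simulated plant state (assumed)
for i in range(50):
    qp_data = prepare_qp(s_guess, u_guess)         # before x_hat is available
    x_hat = x                                      # new state estimate arrives
    du0 = solve_qp(qp_data, x_hat)                 # short feedback latency
    u0 = u_guess[0] + du0[:nu]
    x = x + 0.05 * np.array([x[1], u0[0] - x[0]])  # apply u0 to the (assumed) plant
    s_guess, u_guess = shift(s_guess), shift(u_guess)  # shift and move to problem i+1
```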
The Real Time Iteration Scheme 27 / 47
Standard SQP vs. Real Time Iterations

Standard SQP: NMPC at time i
\[
\begin{aligned}
\min_{u,\, s}\ & \sum_{k=0}^{N} \| s_k - x_{\mathrm{ref}} \|_Q^2 + \sum_{k=0}^{N-1} \| u_k - u_{\mathrm{ref}} \|_R^2 \\
\text{s.t.}\ & s_{k+1} = f(s_k, u_k) \\
& h(s_k, u_k) \ge 0, \quad s_0 = \hat{x}_i
\end{aligned}
\]
Iterative procedure (at each time i):
1. Given the current guess s, u
2. Linearize at s, u
3. Make sure the Hessian B ≻ 0
4. Solve the QP
5. Globalization (e.g. line search)
6. Update and iterate

Real Time Iterations: RTI at time i
\[
\begin{aligned}
\min_{\Delta u,\, \Delta s}\ & \tfrac{1}{2}
\begin{bmatrix} \Delta s \\ \Delta u \end{bmatrix}^T J^T J
\begin{bmatrix} \Delta s \\ \Delta u \end{bmatrix}
+ J^T \begin{bmatrix} \Delta s \\ \Delta u \end{bmatrix} \\
\text{s.t.}\ & \Delta s_{k+1} = f + \frac{\partial f}{\partial s} \Delta s_k + \frac{\partial f}{\partial u} \Delta u_k \\
& h + \frac{\partial h}{\partial s} \Delta s_k + \frac{\partial h}{\partial u} \Delta u_k \ge 0, \quad s_0 = \hat{x}_i
\end{aligned}
\]
Preparation phase (without knowing x̂_i):
- Linearize (Gauss-Newton ⇒ B ≻ 0)
- Prepare the QP
Feedback phase:
- Solve the QP once x̂_i becomes available → same latency as linear MPC
The Real Time Iteration Scheme 28 / 47
Linear MPC vs. RTI

Linear MPC at time i
\[
\begin{aligned}
\min_{u,\, s}\ & \sum_{k=0}^{N} \| s_k - x_{\mathrm{ref}} \|_Q^2 + \| u_k - u_{\mathrm{ref}} \|_R^2 \\
\text{s.t.}\ & s_{k+1} = A_k s_k + B_k u_k \\
& C_k s_k + D_k u_k \ge 0, \quad s_0 = \hat{x}_i
\end{aligned}
\]
At each time i:
1. Solve the QP

RTI at time i
\[
\begin{aligned}
\min_{u,\, s}\ & \sum_{k=0}^{N} \| s_k - x_{\mathrm{ref}} \|_Q^2 + \| u_k - u_{\mathrm{ref}} \|_R^2 \\
\text{s.t.}\ & s_{k+1} = f(s_k, u_k) \\
& h(s_k, u_k) \ge 0, \quad s_0 = \hat{x}_i
\end{aligned}
\]
At each time i:
1. Solve the QP
2. Compute the new linearization of the constraints
3. Prepare the new QP

RTI differs from linear MPC in the sense that the constraints are re-linearized at each time instant on the current trajectory, rather than only once on the reference trajectory.