Attitude control and autonomous navigation of quadcopters
Pantelis Sopasakis
KU Leuven, Dept. of Electrical Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal Processing and Data Analysis
v3.1.12 – 2017/06/25 at 19:21:12
Outline
◮ Problem statement and objectives
◮ Control Theory
  – State space systems
  – Stabilisation with linear state feedback
  – Linear Quadratic Regulator
  – Reference tracking
  – Linear State Observers
◮ Application
  – Quaternions and rotations
  – Attitude dynamics
  – Implementation
  – Autonomous navigation
I. Attitude control problem statement
Attitude control
Intuition
Attitude control: Problem statement
1. By controlling the voltages $V_1, V_2, V_3, V_4$ at the four motors and obtaining measurements from the inertial measurement unit (IMU), make the quadcopter hover, that is, zero roll and pitch ($\phi = 0$, $\theta = 0$) and no yaw ($\psi = 0$ or $\dot{\psi} = 0$).
2. Follow the reference signals $(\phi_r, \theta_r, \dot{\psi}_r)$ and the throttle reference signal $\tau_r$ from the RC.
Attitude control: Workflow
1. Derive the nonlinear model; choose inputs, states and outputs
2. Find equilibrium points and linearise the model
3. Discretise and simulate
4. Check controllability and observability
5. Design a controller for reference tracking
6. Design an observer
7. Simulate, plot the closed-loop poles and evaluate; if not satisfied, revisit the design steps
8. Implement in C, plug into the Zybo and integrate
9. Test thoroughly. Done!
II. State Space Systems
States, inputs and outputs
◮ States: the variables that provide everything we need to know about the system in order to fully describe its future behaviour and evolution in time.
◮ Outputs: variables, typically transformations of the state variables, that we can measure.
◮ Inputs: variables which we have the ability to manipulate so as to control the system.
◮ Disturbances: signals which affect the system behaviour and can be thought of as input variables over which we do not have authority (e.g., weather conditions).
State space systems
State space systems are described in continuous time by
\[ \dot{x}(t) = f(x(t), u(t)), \qquad y(t) = h(x(t), u(t)), \]
where $x \in \mathbb{R}^{n_x}$ is the system state vector, $u \in \mathbb{R}^{n_u}$ is the input vector and $y \in \mathbb{R}^{n_y}$ is the vector of outputs. Regularity conditions on $f$ for the system to have unique solutions: check out Carathéodory's theorem.
Discrete-time state space systems
The discrete-time version is
\[ x_{k+1} = f(x_k, u_k), \qquad y_k = h(x_k, u_k), \]
where $k \in \mathbb{N}$.
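A minimal MATLAB sketch of how such a discrete-time system can be simulated; the map f, the horizon and the initial state below are assumed toy values, not the quadcopter model.

    % Simulate x_{k+1} = f(x_k, u_k) for an illustrative (toy) map f
    f = @(x, u) [x(2); -0.5*sin(x(1)) + u];   % assumed dynamics: 2 states, 1 input
    N = 50;                                   % simulation horizon
    x = zeros(2, N+1);  x(:, 1) = [0.1; 0];   % initial state
    for k = 1:N
        u = 0;                                % zero input for this sketch
        x(:, k+1) = f(x(:, k), u);
    end
    plot(0:N, x(1, :)); xlabel('k'); ylabel('x_1');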
Why state space?
◮ Can be directly derived from first principles of physics
◮ Suitable for multiple-input multiple-output systems
◮ Enables the study of controllability, observability and other interesting properties
◮ Time-domain representation: more intuitive
Equilibrium points
A pair $(x_e, u_e)$ is called an equilibrium point of the continuous-time system $\dot{x}(t) = f(x(t), u(t))$ if $f(x_e, u_e) = 0$. If $u(t) = u_e$ and $x(0) = x_e$, then $x(t) = x_e$ for all $t > 0$.
Equilibrium points
A pair $(x_e, u_e)$ is called an equilibrium point of the discrete-time system $x_{k+1} = f(x_k, u_k)$ if $f(x_e, u_e) = x_e$. If $u_k = u_e$ for all $k$ and $x_0 = x_e$, then $x_k = x_e$ for all $k \in \mathbb{N}$.
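In practice an equilibrium can be computed numerically, for instance with fsolve (Optimization Toolbox); in the sketch below the map f and the fixed input u_e are illustrative choices, not the quadcopter model.

    % Find xe such that f(xe, ue) = xe for an illustrative map f
    f  = @(x, u) [0.9*x(1) + 0.1*sin(x(2)); 0.8*x(2) + u];
    ue = 0.5;                                  % assumed constant input
    xe = fsolve(@(x) f(x, ue) - x, [0; 0]);    % solve the fixed-point equation
    residual = f(xe, ue) - xe;                 % should be (numerically) zero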
Discretisation
Excerpt from Wikipedia: In mathematics, discretization concerns the process of transferring continuous functions, models, and equations into discrete counterparts.
Discretisation: the Euler method
Recall that
\[ \dot{x}(t) = \lim_{h \to 0} \frac{x(t+h) - x(t)}{h}, \]
so, for small $h$,
\[ \dot{x}(t) \approx \frac{x(t+h) - x(t)}{h}. \]
Using this approximation, the system $\dot{x}(t) = f(x(t), u(t))$ is approximated, with sampling period $h$, by the discrete-time system
\[ x_{k+1} = x_k + h f(x_k, u_k), \]
where $x_k$ is an approximation of $x(kh)$.
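A small MATLAB sketch of the Euler method applied to the example plotted on the next slide, $\dot{x} = -x^2$, $x(0) = 1$; the step size h is an assumed value.

    % Euler discretisation of xdot = -x^2, x(0) = 1, compared with the exact solution
    h = 0.1;  T = 1;  N = round(T/h);          % assumed step size and horizon
    f = @(x) -x^2;
    x = zeros(1, N+1);  x(1) = 1;
    for k = 1:N
        x(k+1) = x(k) + h*f(x(k));             % x_{k+1} = x_k + h f(x_k)
    end
    t = 0:h:T;
    x_exact = 1./(1 + t);                      % exact solution of xdot = -x^2, x(0) = 1
    plot(t, x_exact, t, x, 'o'); legend('Exact', 'Euler');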
Discretisation: the Euler method
(Figure: exact solution versus Euler approximation for $\dot{x} = -x^2$, $x(0) = 1$.)
Discretisation: linear systems
When the continuous-time system has the form
\[ \dot{x}(t) = A x(t) + B u(t), \]
then we know that its solution is
\[ x(t) = e^{At} x(0) + \int_0^t e^{A(t - \tau)} B u(\tau)\, \mathrm{d}\tau. \]
Assume that $u(t)$ is constant on $[kh, (k+1)h)$ and equal to $u_k$, and define $A_d = e^{Ah}$ and $B_d = \int_0^h e^{A\tau} B\, \mathrm{d}\tau$. Then
\[ x_{k+1} = A_d x_k + B_d u_k \]
is an exact discretisation of the original system. In MATLAB use c2d; read the documentation for help.
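A hedged MATLAB sketch of this exact (zero-order-hold) discretisation with c2d from the Control System Toolbox; the matrices A, B and the sampling period h are illustrative values, not the quadcopter model.

    % Exact ZOH discretisation of xdot = A x + B u with sampling period h
    A = [0 1; -2 -3];  B = [0; 1];             % assumed continuous-time model
    C = eye(2);        D = zeros(2, 1);
    h = 0.01;                                  % sampling period
    sysd = c2d(ss(A, B, C, D), h, 'zoh');
    Ad = sysd.A;                               % A_d = e^{A h}
    Bd = sysd.B;                               % B_d = integral of e^{A tau} B over [0, h]
    norm(Ad - expm(A*h))                       % sanity check: should be ~0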
Linearisation
The best linear approximation of a function $f : \mathbb{R}^n \to \mathbb{R}^m$ about a point $x_0$ is given by
\[ f(x) \approx f(x_0) + J f(x_0)\, (x - x_0). \]
For functions $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ this reads
\[ f(x, u) \approx f(x_0, u_0) + J_x f|_{(x_0, u_0)}\, (x - x_0) + J_u f|_{(x_0, u_0)}\, (u - u_0). \]
Jacobian matrix
Let $x = (x_1, x_2, \ldots, x_n)$ and
\[ f(x) = \begin{bmatrix} f_1(x_1, x_2, \ldots, x_n) \\ f_2(x_1, x_2, \ldots, x_n) \\ \vdots \\ f_n(x_1, x_2, \ldots, x_n) \end{bmatrix}. \]
The Jacobian matrix of $f$ is a function $J f : \mathbb{R}^n \to \mathbb{R}^{n \times n}$ defined as
\[ J f(x) = \begin{bmatrix}
\frac{\partial f_1}{\partial x_1}(x) & \frac{\partial f_1}{\partial x_2}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\
\frac{\partial f_2}{\partial x_1}(x) & \frac{\partial f_2}{\partial x_2}(x) & \cdots & \frac{\partial f_2}{\partial x_n}(x) \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial x_1}(x) & \frac{\partial f_n}{\partial x_2}(x) & \cdots & \frac{\partial f_n}{\partial x_n}(x)
\end{bmatrix}. \]
Linearisation
Consider the continuous-time system $\dot{x}(t) = f(x(t), u(t))$, choose an equilibrium point $(x_e, u_e)$ and define
\[ \Delta x(t) = x(t) - x_e, \qquad \Delta u(t) = u(t) - u_e. \]
Then it is easy to verify that the system linearisation is written as
\[ \frac{\mathrm{d}}{\mathrm{d}t} \Delta x(t) = A\, \Delta x(t) + B\, \Delta u(t), \]
where $A = (J_x f)(x_e, u_e)$ and $B = (J_u f)(x_e, u_e)$.
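A hedged MATLAB sketch of computing A = J_x f and B = J_u f at an equilibrium with the Symbolic Math Toolbox; the dynamics and the equilibrium are toy choices (a pendulum-like system), not the quadcopter model.

    % Jacobian-based linearisation about an equilibrium (xe, ue)
    syms x1 x2 u real
    f  = [x2; -sin(x1) + u];                   % assumed dynamics f(x, u)
    xe = [0; 0];  ue = 0;                      % equilibrium: f(xe, ue) = 0
    A  = double(subs(jacobian(f, [x1; x2]), [x1; x2; u], [xe; ue]));
    B  = double(subs(jacobian(f, u),        [x1; x2; u], [xe; ue]));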
Simulations
In MATLAB we may use
1. ode23: fast and of decent accuracy
2. ode45: less fast but more reliable
3. Simulink and S-functions
In C++ we may use odeint. In Python try scipy.integrate.ode. In this project we provide you with a simulator in MATLAB.
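For instance, a minimal ode45 call might look as follows; the dynamics, time span and initial condition are illustrative, not the quadcopter model.

    % Simulate xdot = f(t, x) with ode45 for an illustrative system
    f = @(t, x) [x(2); -sin(x(1)) - 0.5*x(2)]; % assumed dynamics
    [t, x] = ode45(f, [0 10], [0.5; 0]);       % time span 0..10 s, initial state
    plot(t, x(:, 1)); xlabel('t [s]'); ylabel('x_1');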
III. Stabilisation of Linear Systems
Controllability
Linear time-invariant (LTI) discrete-time systems:
\[ x_{k+1} = A x_k + B u_k. \]
We say that the system is controllable if for every $x_1, x_2 \in \mathbb{R}^{n_x}$ there is a finite sequence of inputs $(u_0, u_1, \ldots, u_N)$ which can steer the system state from $x_1$ to $x_2$ (we can move the system to any point $x_2$ starting from any point $x_1$).
Controllability
The system is controllable iff
\[ \operatorname{rank} \underbrace{\begin{bmatrix} B & AB & A^2 B & \cdots & A^{n_x - 1} B \end{bmatrix}}_{\mathcal{C}(A, B)} = n_x. \]
This is the most popular criterion, but there exist many more. We also often say that the pair $(A, B)$ is controllable.
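In MATLAB the controllability matrix can be formed with ctrb (Control System Toolbox); the pair (A, B) below is an illustrative example. See the next slide, however, for why a plain rank call can be misleading.

    % Controllability check via the rank of the controllability matrix
    A = [1 1; 0 1];  B = [0; 1];               % assumed pair (A, B)
    Co = ctrb(A, B);                           % [B, A*B] since n_x = 2
    is_controllable = (rank(Co) == size(A, 1))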
Why using rank is a bad idea
Take for example the pair $(A, B)$ with
\[ A = \begin{bmatrix} 1 & \epsilon \\ \epsilon & \epsilon \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \]
Then the controllability matrix is
\[ \mathcal{C}(A, B) = \begin{bmatrix} 0 & \epsilon \\ 1 & \epsilon \end{bmatrix}, \]
whose rank is 2, but its singular values are approximately 1 and $\epsilon$. In MATLAB use rank(·, tol) or svd instead of plain rank(·).
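A hedged sketch reproducing this ill-conditioned example numerically; the value chosen for epsilon is an assumed small number.

    % rank() reports full rank even though C(A, B) is nearly singular
    e  = 1e-8;                                 % assumed small epsilon
    A  = [1 e; e e];  B = [0; 1];
    Co = ctrb(A, B);
    rank(Co)                                   % returns 2
    svd(Co)                                    % singular values are roughly 1 and e
    rank(Co, 1e-6)                             % with a tolerance: returns 1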
Stability
(Figure: phase-plane illustration of stability: trajectories starting in a ball $B_\delta$ around the equilibrium remain in a ball $B_\epsilon$.)
Asymptotic Stability
(Figure: phase-plane illustration of asymptotic stability: trajectories converge to the equilibrium.)
Stability analysis of linear systems
Consider the linear discrete-time system $x_{k+1} = A x_k$. The origin ($x_e = 0$) is an asymptotically stable equilibrium point for this system if and only if all eigenvalues of $A$ are strictly inside the unit circle, that is, $|\lambda_i| < 1$. In MATLAB one may find the eigenvalues of a matrix using eig. To find the eigenvalue of largest magnitude use eigs(A,1).
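For example (A below is an arbitrary illustrative matrix, not the quadcopter model):

    % Asymptotic stability of x_{k+1} = A x_k: check the spectral radius
    A = [0.9 0.2; -0.1 0.7];                   % assumed system matrix
    lambda = eig(A);                           % eigenvalues of A
    max(abs(lambda))                           % < 1 here, so the origin is asymptotically stable
    % For large sparse A, eigs(A, 1) returns only the largest-magnitude eigenvalue.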
Stabilisation
Suppose the pair $(A, B)$ is controllable and consider the system $x_{k+1} = A x_k + B u_k$.
Problem: find a controller $u_k = \kappa(x_k)$ so that the origin is an asymptotically stable equilibrium point of the controlled system $x_{k+1} = A x_k + B \kappa(x_k)$.
Hint: choose $\kappa(x) = K x$ and find $K$ so that $A + BK$ has all its eigenvalues inside the unit circle. Such a $K$ always exists because the system is controllable, and we may choose $K$ so as to place the eigenvalues of $A + BK$ at any desired points in the unit circle.
Stabilisation
We may choose $K$ so that $A + BK$ has eigenvalues $s_1, \ldots, s_{n_x}$ using Ackermann's formula. In MATLAB we may use the function place. But where should we place the poles? What about performance? The linear quadratic regulator (LQR) gives an answer to these questions... Read carefully the documentation of place...
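A hedged pole-placement sketch with place (Control System Toolbox); A, B and the target poles are illustrative. Note the sign convention: place(A, B, p) returns a gain for A - B*K, so with the feedback u_k = K x_k used on these slides one takes K = -place(A, B, p).

    % Place the closed-loop poles of x_{k+1} = (A + B K) x_k
    A = [1 1; 0 1];  B = [0; 1];               % assumed controllable pair
    p = [0.5, 0.6];                            % desired poles, strictly inside the unit circle
    K = -place(A, B, p);                       % so that eig(A + B*K) matches p
    eig(A + B*K)                               % check: returns 0.5 and 0.6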
IV. Linear Quadratic Regulator
Motivation
We need to stabilise the system
\[ x_{k+1} = A x_k + B u_k \]
by a linear controller $u_k = K x_k$ so that the closed-loop system's response minimises a certain cost index. We assume that $x_k$ is exactly known at time $k$.
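As a preview of the tooling, a discrete-time LQR gain can be computed in MATLAB with dlqr (Control System Toolbox); the matrices A, B and the weights Q, R below are illustrative, not the quadcopter model. Note that dlqr returns a gain for the convention u_k = -K x_k, so the sign flip below matches u_k = K x_k used on these slides.

    % LQR gain for x_{k+1} = A x_k + B u_k with quadratic state/input weights
    A = [1 0.1; 0 1];  B = [0.005; 0.1];       % assumed discrete-time model
    Q = diag([10, 1]);  R = 1;                 % assumed weights on states and input
    [K, P, e] = dlqr(A, B, Q, R);              % e: closed-loop eigenvalues of A - B*K
    K_lqr = -K;                                % gain in the u_k = K x_k convention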