Using line-based wind turbine representations for UAV localisation during autonomous inspection
Dr Oliver Moolan-Feroze – University of Bristol
Overview
1. Why are we using a line-based model for localisation during wind turbine inspection?
2. Defining the line-based turbine model
3. Extracting line-based representations from images
4. Integrating the line model into an optimiser
5. Results
Why Lines?
• Wind turbines share a common structure:
• Tower
• Nacelle (hub)
• 3 Blades
• This regularity makes a model-based tracking approach more sensible than full SLAM.
• However...
• What model do we use?
• How do we associate model features with image locations?
Why Lines?
• Typically, we find correspondences between the model and the image using distinguishable features.
• Enough correspondences allow us to estimate the pose.
• In certain views we can use point-based features.
• However, much of the time no point features are in view, especially when close up.
Why Lines?
• We extend lines connecting the point features.
• This enables us to incorporate image measurements between the feature points.
Turbine Model – definition
The model is defined using a set of 3D points $\mathcal{Q} \subset \mathbb{R}^3$:
• Turbine base: $q_b$
• Tower top: $q_t$
• Blade centre: $q_c$
• Blade tips: $q_j^B$, where $j \in \{1, 2, 3\}$
and a set of connecting lines $\mathcal{L}$:
• Turbine tower: $m_t = \{q_b, q_t\}$
• Nacelle: $m_c = \{q_t, q_c\}$
• Blades: $m_j^B = \{q_c, q_j^B\}$
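A minimal sketch of how this point-and-line model might be laid out in code, using the slide's point names; the dictionary layout and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def make_turbine_model(q_b, q_t, q_c, q_B):
    """q_b: base, q_t: tower top, q_c: blade centre, q_B: list of 3 blade tips."""
    points = {"q_b": np.asarray(q_b, float),
              "q_t": np.asarray(q_t, float),
              "q_c": np.asarray(q_c, float)}
    for j, tip in enumerate(q_B, start=1):
        points[f"q_B{j}"] = np.asarray(tip, float)
    # Connecting lines L: tower, nacelle, and one line per blade.
    lines = [("q_b", "q_t"), ("q_t", "q_c"),
             ("q_c", "q_B1"), ("q_c", "q_B2"), ("q_c", "q_B3")]
    return points, lines
```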
Turbine Model – parameters
The model is parameterised with the following values $\theta$:
• The $x, y$ location of the base: $\mathbf{d} \in \mathbb{R}^2$
• The height of the tower: $h$
• The heading of the turbine: $\phi$
• The length of the nacelle: $s$
• The rotation of the blades: $\rho$
• The length of the blades: $c$
Turbine Model – instantiation
• Given a 'unit' version of the model $\hat{\mathcal{M}} = \{\hat{\mathcal{Q}}, \hat{\mathcal{L}}\}$ and parameters $\theta$, we instantiate a model with the set of functions $\Omega$:
$$\mathcal{Q} = \begin{cases} q_b = \omega_b(\hat{q}_b, \theta) \\ q_t = \omega_t(\hat{q}_t, \theta) \\ q_c = \omega_c(\hat{q}_c, \theta) \\ q_j^B = \omega_j^B(\hat{q}_j^B, \theta) \end{cases}$$
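A hedged sketch of what the instantiation functions $\omega$ could look like, assuming the parameters $\theta = (\mathbf{d}, h, \phi, s, \rho, c)$ from the previous slide and a simple geometric construction; the paper's exact $\omega$ may differ:

```python
import numpy as np

def instantiate(theta):
    """Map turbine parameters to the 3D model points Q (illustrative geometry)."""
    d, h, phi, s, rho, c = (theta["d"], theta["h"], theta["phi"],
                            theta["s"], theta["rho"], theta["c"])
    q_b = np.array([d[0], d[1], 0.0])       # omega_b: base on the ground plane
    q_t = q_b + np.array([0.0, 0.0, h])     # omega_t: straight up the tower
    heading = np.array([np.cos(phi), np.sin(phi), 0.0])
    q_c = q_t + s * heading                 # omega_c: along the nacelle
    # omega_j^B: blade tips in the rotor plane (normal = heading), 120 deg apart
    up = np.array([0.0, 0.0, 1.0])
    side = np.cross(up, heading)
    tips = []
    for j in range(3):
        a = rho + j * 2.0 * np.pi / 3.0
        tips.append(q_c + c * (np.cos(a) * up + np.sin(a) * side))
    return q_b, q_t, q_c, tips
```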
CNN line model feature extraction
• To enable matching, we extract a representation of the reprojection of the line model from the images using a CNN.
[Network diagram: an encoder-decoder CNN. RGB images, as well as 'prior' model reprojection images, are fed in as inputs; estimates of the line model and point model reprojections are the outputs. Layers: convolutional, max-pooling, linear upsampling, sigmoid.]
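A minimal PyTorch sketch of an encoder-decoder of the kind the diagram describes: RGB plus prior reprojection channels in, sigmoid line/point maps out. The channel counts and depth are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class LinePointNet(nn.Module):
    def __init__(self, in_ch=5):  # 3 RGB + 2 prior (line, point) channels
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 2, 3, padding=1),  # one line map, one point map
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```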
CNN line model feature extraction
[Example outputs comparing the network with prior reprojections as input against the network without priors.]
CNN line model feature extraction
• Examples of the network output. (Top) the extracted line model; (Bottom) the extracted point model.
Integration with the optimiser
• The extracted line reprojections don't provide enough information to fully estimate the pose.
• We combine the image information with a keyframe pose graph optimiser.
• The graph is constrained using image measurements and pose estimates obtained from IMU / GPS.
• The graph: $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$
• The vertices are keyframes: $\mathcal{V} = \{v_1, \dots, v_N\}$, each holding a pose $v_i = \{r_i, \mathbf{t}_i\}$
• The edges link consecutive keyframes: $\mathcal{E} = \{e_{1,2}, \dots, e_{N-1,N}\}$
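A minimal sketch of this keyframe graph structure; the class and field names are illustrative assumptions, not the authors' code:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Vertex:
    r: np.ndarray   # 3x3 rotation matrix of the keyframe
    t: np.ndarray   # 3-vector translation of the keyframe

@dataclass
class PoseGraph:
    vertices: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (i, i+1, relative pose)

    def add_keyframe(self, r, t, rel_pose=None):
        """Append a keyframe; link it to its predecessor when a relative
        pose estimate (e.g. from IMU/GPS integration) is available."""
        self.vertices.append(Vertex(r, t))
        if len(self.vertices) > 1 and rel_pose is not None:
            i = len(self.vertices) - 2
            self.edges.append((i, i + 1, rel_pose))
```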
Integration with the optimiser
• Model lines are split into a series of points and projected into the image to generate constraints.
• Image constraints can be generated in two different ways (a sketch of the second follows below):
1. Using a perpendicular line search and establishing a 3D → 2D correspondence
• Restricts the movement of the model point during optimisation
2. Using direct image interpolation
• Allows the model point to move freely over the image
[Figure: example matching using perpendicular line search.]
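A minimal sketch of the second constraint type: score a projected model point directly against the CNN line map via bilinear interpolation, so the point can move freely over the image. The map and residual names are assumptions for illustration:

```python
import numpy as np

def bilinear(img, u, v):
    """Bilinearly interpolate a 2D map at continuous pixel coords (u, v)."""
    h, w = img.shape
    u0 = int(np.clip(np.floor(u), 0, w - 2))
    v0 = int(np.clip(np.floor(v), 0, h - 2))
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * img[v0, u0] +
            du * (1 - dv) * img[v0, u0 + 1] +
            (1 - du) * dv * img[v0 + 1, u0] +
            du * dv * img[v0 + 1, u0 + 1])

def line_map_residual(line_map, proj_point):
    """High CNN line-map response means a good fit, so invert it as a residual."""
    u, v = proj_point
    return 1.0 - bilinear(line_map, u, v)
```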
Experiments – real inspection data
Experiments – flight using synthetic data
Pose and Model Joint Optimisation
• Previously, the turbine model parameters were estimated and fixed at the beginning of the flight.
• Errors in the parameters lead to errors in the estimated poses.
• We now jointly optimise both the set of poses and the turbine parameters $\theta$.
Pose and Model Joint Optimisation
• The set of functions $\Omega$ is designed to be differentiable.
• When we project model points into the images, they are transformed using $\Omega$.
• The parameters $\theta$ are now estimated during optimisation.
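To make the differentiability point concrete, here is a hedged toy example in PyTorch: one $\omega$ is written as a differentiable function, so autograd can update the turbine parameters alongside a pose. The target, loss, and parameter subset are purely illustrative, not the paper's cost function:

```python
import torch

# Toy parameter subset of theta: base x, base y, tower height h.
theta = torch.tensor([0.0, 0.0, 55.0], requires_grad=True)
# Toy 3-DoF camera translation standing in for a keyframe pose.
pose_t = torch.zeros(3, requires_grad=True)

def omega_t(theta):
    """Differentiable tower-top point: base (d_x, d_y, 0) plus (0, 0, h)."""
    return torch.stack([theta[0], theta[1], theta[2]])

# A 'measured' tower-top location; purely illustrative.
target = torch.tensor([1.0, 2.0, 60.0])
opt = torch.optim.Adam([theta, pose_t], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    # Gradients flow into both the pose and the model parameters.
    loss = ((omega_t(theta) - pose_t) - target).pow(2).sum()
    loss.backward()
    opt.step()
```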
Experiments – real data joint optimisation
Experiments – synthetic joint optimisation
Experiments – synthetic joint optimisation
• Using synthetic data, we evaluated the performance of joint optimisation.
• Improvement in both position and orientation estimation when doing joint optimisation.
[Plots: final position error (m) vs initial position error (m), and final orientation error (rad) vs initial orientation error (rad), comparing 'pose and model' against 'pose only'.]
The work presented today is based on two papers:
• Improving drone localisation around wind turbines using monocular model-based tracking – Oliver Moolan-Feroze, Konstantinos Karachalios, Dimitrios N. Nikolaidis, and Andrew Calway
• Simultaneous drone localisation and wind turbine model fitting during autonomous surface inspection – Oliver Moolan-Feroze, Konstantinos Karachalios, Dimitrios N. Nikolaidis, and Andrew Calway
Thanks!