Underwater sparse image classification using deep convolutional neural networks

  1. Underwater sparse image classification using deep convolutional neural networks. Mohamed Elawady, Heriot-Watt University, VIBOT MSc 2014. Deep Learning Workshop, Lyon, 26 Nov 2015.

  2. About Me!
  • 2014 - Current: PhD in Image Processing (Hubert Curien Laboratory, Jean Monnet University [FR]). Thesis: "Automatic reading of photographs using formal analysis in visual arts", supervised by Christophe Ducottet, Cecile Barat, Philippe Colantoni.
  • 2012 - 2014: Erasmus Mundus European Masters in Vision and Robotics (VIBOT) (University of Burgundy [FR], University of Girona [SP], Heriot-Watt University [UK]). Thesis: "Sparse coral classification using deep convolutional neural networks", supervised by Neil Robertson, David Lane.
  • 2003 - 2007: Bachelor of Science in Computers & Informatics [Major: Computer Science] (Faculty of Computers & Informatics, Suez Canal University [EG]). Thesis: "Self-Authenticating Image Watermarking System".

  3. Outline
  • Coralbots
  • Introduction
  • Problem Definition
  • Related Work
  • Methodology
  • Results
  • Conclusion and Future Work
  • Deep Learning Workshop, Edinburgh, 2014
  • Summer Internship 2014

  4. Coralbots
  • Team members: Lea-Anne Henry, Neil Robertson, David Lane and David Corne.
  • Target: deep-sea diving robots to save the world's coral reefs.
  • Progress: three generations of VIBOT MSc work (2013 - 2015).
  • Resources:
    – http://www.coralbots.org/
    – https://youtu.be/MJ-_d3HZOi4
    – https://youtu.be/6q4UiuiqZuA

  5. Coralbots

  6. Introduction
  Fast facts about coral:
  • Consists of tiny animals (not plants).
  • Takes a long time to grow (0.5 - 2 cm per year).
  • Exists in more than 200 countries.
  • Generates 29.8 billion dollars per year through different ecosystem services.
  • 10% of the world's coral reefs are dead; more than 60% of the world's reefs are at risk due to human-related activities.
  • By 2050, all coral reefs will be in danger.

  7. Introduction
  Coral transplantation:
  • Coral gardening through the involvement of SCUBA divers in coral reef reassembly and transplantation.
  • Examples: Reefscapers Project 2001 in the Maldives & Save Coral Reefs 2012 in Thailand.
  • Limitations: time & depth per dive session.
  • Robot-based strategy for deep-sea coral restoration: intelligent autonomous underwater vehicles (AUVs) grasp cold-water coral samples and replant them in damaged reef areas.

  8. Problem Definition
  • Dense classification: millions of coral images and thousands of hours of underwater videos; a massive number of hours is needed to annotate every pixel inside each coral image or video frame.
  • Manual sparse classification: manually annotated by coral experts, matching some random uniform pixels to target classes; more than 400 hours are required to annotate 1000 images (200 labelled points per image).
  • Automatic sparse classification: a supervised learning algorithm annotates images autonomously; input data are ROIs around random points.

  9. Problem Definition: MLC Dataset
  • Moorea Labeled Corals (MLC), University of California, San Diego (UCSD).
  • Island of Moorea in French Polynesia.
  • ~2000 images (2008, 2009, 2010), 200 labeled points per image.

  10. Problem Definition: MLC Dataset
  5 coral classes:
  • Acropora "Acrop"
  • Pavona "Pavon"
  • Montipora "Monti"
  • Pocillopora "Pocill"
  • Porites "Porit"
  4 non-coral classes:
  • Crustose Coralline Algae "CCA"
  • Turf Algae "Turf"
  • Macroalgae "Macro"
  • Sand "Sand"

  11. Problem Definition: ADS Dataset
  • Atlantic Deep Sea (ADS), Heriot-Watt University (HWU).
  • North Atlantic, west of Scotland and Ireland.
  • ~160 images (2012), 200 labeled points per image.

  12. Problem Definition: ADS Dataset
  5 coral classes:
  • DEAD "Dead Coral"
  • ENCW "Encrusting White Sponge"
  • LEIO "Leiopathes Species"
  • LOPH "Lophelia"
  • RUB "Rubble Coral"
  4 non-coral classes:
  • BLD "Boulder"
  • DRK "Darkness"
  • GRAV "Gravel"
  • Sand "Sand"

  13. Related Work

  14. Related Work

  15. Related Work

  16. Related Work: Sparse (Point-Based) Classification

  17. Methodology
  Shallow vs deep classification:
  • The traditional architecture extracts hand-designed key features based on human analysis of the input data.
  • The modern architecture learns features across hidden layers, starting from low-level details up to high-level details.
  Structure of network hidden layers:
  • Trainable weights and biases.
  • Independent relationships within the objects inside.
  • Pre-defined range measures.
  • Further, faster calculation.

  18. Methodology
  "LeNet-5" by LeCun, 1998: the first back-propagation convolutional neural network (CNN) for handwritten digit recognition.
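  As an illustration only (not part of the original slides), a LeNet-5-style layout can be sketched in PyTorch; the layer sizes below follow the classic 32x32 grayscale digit setup and are assumptions here:

```python
import torch
import torch.nn as nn

# Minimal LeNet-5-style sketch: two conv/pool stages followed by
# fully connected layers, as in LeCun's 1998 digit recognizer.
class LeNet5(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = LeNet5()(torch.randn(1, 1, 32, 32))  # one 32x32 grayscale image
```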

  19. Methodology
  Recent CNN applications
  Object classification:
  • Buyssens (2012): cancer cell image classification.
  • Krizhevsky (2013): large scale visual recognition challenge 2012.
  Object recognition:
  • Girshick (2013): PASCAL visual object classes challenge 2012; more than 10% better than the top contest performer. [Figure: object detection system overview (Girshick)]
  • Syafeeza (2014): face recognition system.
  • Pinheiro (2014): scene labelling.

  20. Methodology: Proposed CNN framework

  21. Methodology: Proposed CNN framework
  • Input: 3 basic channels (RGB) plus extra channels (feature maps), after color enhancement.
  • Classification layer: find suitable weights for the convolutional kernels and the additive biases.
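  A rough sketch (an assumption, not the slides' exact code) of how the extra feature-map channels could be stacked with RGB into one multi-channel input:

```python
import numpy as np

def build_input(rgb, extra_maps):
    """Stack an HxWx3 RGB patch with extra HxW feature maps
    (e.g. ZCA, WLD, phase congruency) into one HxWxC input."""
    channels = [rgb.astype(np.float32)]
    channels += [m.astype(np.float32)[..., np.newaxis] for m in extra_maps]
    return np.concatenate(channels, axis=-1)

patch = np.random.rand(61, 61, 3)                    # RGB patch around a labelled point
extra = [np.random.rand(61, 61) for _ in range(3)]   # placeholder feature maps
x = build_input(patch, extra)                        # shape (61, 61, 6)
```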

  22. Methodology: Proposed CNN framework

  23. Methodology
  Hybrid patching:
  • Three different-sized patches are selected around each annotated point (61x61, 121x121, 181x181).
  • Scaling patches up to the size of the largest patch (181x181) allows blurring of inter-shape coral details while keeping the coral's edges and corners.
  • Scaling patches down to the size of the smallest patch (61x61) allows fast classification computation.
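  A minimal sketch of the multi-scale patch extraction described above, assuming OpenCV-style images and simple border replication (the helper name is hypothetical):

```python
import cv2
import numpy as np

def hybrid_patches(image, point, sizes=(61, 121, 181), out_size=61):
    """Cut patches of several sizes centred on a labelled point and
    rescale them all to one common size (here the smallest, 61x61)."""
    y, x = point
    pad = max(sizes) // 2
    padded = cv2.copyMakeBorder(image, pad, pad, pad, pad, cv2.BORDER_REPLICATE)
    patches = []
    for s in sizes:
        half = s // 2
        cy, cx = y + pad, x + pad          # point coordinates in the padded image
        patch = padded[cy - half:cy + half + 1, cx - half:cx + half + 1]
        patches.append(cv2.resize(patch, (out_size, out_size)))
    return patches

img = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
p61, p121, p181 = hybrid_patches(img, point=(200, 300))
```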

  24. Methodology
  Feature maps:
  • Zero Component Analysis (ZCA) whitening makes the data less redundant by removing correlations between neighbouring pixels.
  • The Weber Local Descriptor (WLD) gives a robust edge representation of highly textured images against noisy changes in the illumination of the image environment.
  • Phase Congruency (PC) represents image features in a format that is high in information and low in redundancy, using the Fourier transform.
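  For the first of these, a standard ZCA whitening transform can be sketched as follows (a generic implementation, not necessarily the exact preprocessing used in the thesis):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten rows of X (n_samples x n_features): decorrelate
    neighbouring pixels while keeping the result close to the input."""
    X = X - X.mean(axis=0)                          # centre each feature
    cov = np.cov(X, rowvar=False)                   # feature covariance
    U, S, _ = np.linalg.svd(cov)                    # eigendecomposition of covariance
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA whitening matrix
    return X @ W

patches = np.random.rand(100, 8 * 8)                # 100 small flattened patches
white = zca_whiten(patches)
```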

  25. Methodology
  Color enhancement:
  • Bazeille '06 addresses the difficulty of capturing good-quality underwater images under non-uniform lighting and underwater perturbation.
  • Iqbal '07 corrects underwater lighting problems due to light absorption, vertical polarization, and sea structure.
  • Beijbom '12 compensates for color differences caused by underwater turbidity and illumination.
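  The cited methods are more involved; as a stand-in, a simple gray-world white balance (not Bazeille's, Iqbal's, or Beijbom's actual algorithm) illustrates the kind of per-channel correction they perform:

```python
import numpy as np

def gray_world(image):
    """Simple gray-world color correction: rescale each channel so the
    channel means become equal, reducing the blue/green underwater cast."""
    img = image.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

raw = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
balanced = gray_world(raw)
```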

  26. Methodology: Proposed CNN framework

  27. Methodology
  Kernel weights & bias initialization: the network initializes the biases to zero and the kernel weights from a uniform random distribution over the range below, where N_in and N_out are the numbers of input and output maps for each hidden layer (e.g. the number of input maps for layer 1 is 1 for a gray-scale image or 3 for a color image), and k is the size of the convolution kernel for each hidden layer.
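  The slide's equation image is not in the transcript; the description matches the standard normalized (Glorot-style) uniform range used by common CNN toolboxes, which under that assumption reads:

```latex
W \sim \mathcal{U}\!\left[-\sqrt{\frac{6}{(N_{in}+N_{out})\,k^{2}}},\; +\sqrt{\frac{6}{(N_{in}+N_{out})\,k^{2}}}\right], \qquad b = 0
```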

  28. Methodology
  Convolution layer: the convolution layer constructs its output maps by convolving trainable kernels over the input maps to extract/combine features for better network behaviour, using the equation

  x_j^l = f( \sum_i x_i^{l-1} * k_{ij}^l + b_j^l )

  where x_i^{l-1} and x_j^l are the maps of the previous (l-1) and current (l) layers with input index i and output index j, k_{ij}^l is the convolution kernel weight, f(.) is a sigmoid activation function applied to the summed maps, and b_j^l is the additive bias of the current layer l for output map j.
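  A small numpy/scipy sketch of that forward pass (illustrative only; the shapes and the logistic sigmoid are assumptions consistent with the equation above):

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_layer(inputs, kernels, biases):
    """inputs: list of 2-D maps x_i^(l-1); kernels[i][j]: kernel k_ij^l;
    biases[j]: bias b_j^l. Returns the output maps x_j^l."""
    outputs = []
    for j in range(len(biases)):
        acc = sum(convolve2d(x_i, kernels[i][j], mode="valid")
                  for i, x_i in enumerate(inputs))
        outputs.append(sigmoid(acc + biases[j]))
    return outputs

maps_in = [np.random.rand(28, 28) for _ in range(3)]                # 3 input maps
K = [[np.random.randn(5, 5) for _ in range(6)] for _ in range(3)]   # 3x6 kernels
b = np.zeros(6)
maps_out = conv_layer(maps_in, K, b)   # 6 output maps of size 24x24
```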

  29. Methodology: Proposed CNN framework
