
Classification of Remotely Sensed Images for Landuse Information - PowerPoint PPT Presentation



  1. Classification of Remotely Sensed Images for Landuse Information Prof. Krishna Mohan Buddhiraju Centre of Studies in Resources Engineering IIT Bombay INDIA bkmohan@csre.iitb.ac.in

  2. Today’s Presentation (Very brief) Introduction to Remote Sensing – source of images Image Classification Principles Texture based segmentation High Resolution Image Classification Hyperspectral Image Classification

  3. What is Remote Sensing? Remote sensing is the art and science of making measurements about an object or the environment without being in physical contact with it

  4. CSRE (image: 0.6m x 0.6m spatial resolution)

  5. (image: 5.8m x 5.8m spatial resolution)

  6. (image: 23.25m x 23.25m spatial resolution)

  7. High Spectral Resolution: large number of contiguous sensors, narrow bandwidth (figure: sensor response vs. wavelength)

  8. Low Contrast Image

  9. Contrast Enhanced Image

  10. Input Image FCC

  11. NDVI
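The NDVI slide shows the Normalized Difference Vegetation Index, computed per pixel from the near-infrared and red bands as NDVI = (NIR − Red) / (NIR + Red). A minimal pure-Python sketch; the band values below are illustrative, not taken from the slide's image:

```python
def ndvi(nir, red, eps=1e-10):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); eps avoids division by zero."""
    return [[(n - r) / (n + r + eps) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]

# Illustrative 2x2 reflectance values: vegetation has high NIR, low Red.
nir_band = [[0.50, 0.40], [0.10, 0.30]]
red_band = [[0.10, 0.10], [0.10, 0.30]]
result = ndvi(nir_band, red_band)
# Vegetation pixels give NDVI well above 0; bare/water pixels stay near 0.
```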

  12. Concept of Image Classification Image classification - assigning pixels in the image to categories or classes of interest Examples: built-up areas, water body, green vegetation, bare soil, rocky areas, cloud, shadow, …

  13. Why Classification? • Quantitative information • Acreage of each category • Spatial location of each category • Identifying changes in one or more categories relative to a classification of the same area from an earlier date

  14. Types of Classification • Supervised Classification • Partially Supervised Classification • Unsupervised Classification

  15. Supervised Classification • Familiarity with geographical area • Small sets of pixels can be identified for each class • Statistics for the classes can be estimated from the samples • Separate sets can be identified for classifier learning and post-classification validation

  16. Unsupervised Classification • Domain knowledge or the experience of an analyst may be missing • Data analyzed by numerical exploration • Data are grouped into subsets or clusters based on statistical similarity • K-Means and its many variants, hierarchical methods are often used
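The K-Means grouping mentioned above can be sketched on scalar gray levels; the pixel values and seed centers below are illustrative:

```python
def kmeans_1d(values, k, iters=20, seed_centers=None):
    """Plain K-Means on scalar values: assign each value to the nearest
    center, then recompute each center as its cluster mean."""
    centers = sorted(seed_centers if seed_centers else values[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Illustrative gray levels from two spectral groups (e.g. water vs. vegetation).
pixels = [10, 12, 11, 80, 82, 79]
centers = kmeans_1d(pixels, k=2, seed_centers=[0, 100])
# centers converge to the two cluster means, near 11 and 80.3
```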

  17. Partially Supervised Classification When prior knowledge is available – For some classes, and not for others, – For some dates and not for others in a multitemporal dataset, Combination of supervised and unsupervised methods can be employed for partially supervised classification of images

  18. Statistical Characterization of Classes Each class c_k has a conditional probability density function (pdf) p(x | c_k), which describes the distribution of feature vectors x within that class. For classification we estimate P(c_k | x), the conditional probability of class c_k given that the pixel’s feature vector is x
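The link between p(x | c_k) and P(c_k | x) is Bayes' rule: P(c_k | x) = p(x | c_k) P(c_k) / Σ_j p(x | c_j) P(c_j). A minimal sketch assuming one-band data and Gaussian class-conditional densities; the class parameters are illustrative:

```python
import math

def gaussian_pdf(x, mean, var):
    """1-D class-conditional density p(x | c_k) under a Gaussian model."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posteriors(x, classes):
    """P(c_k | x) = p(x | c_k) P(c_k) / sum_j p(x | c_j) P(c_j)."""
    joint = {k: gaussian_pdf(x, m, v) * prior
             for k, (m, v, prior) in classes.items()}
    total = sum(joint.values())
    return {k: j / total for k, j in joint.items()}

# Illustrative classes: (mean gray level, variance, prior P(c_k)).
classes = {"water": (20.0, 25.0, 0.5), "vegetation": (60.0, 25.0, 0.5)}
post = posteriors(25.0, classes)
# A pixel with gray level 25 is far more probable under the "water" class.
```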

  19. Supervised Classification Principles • Typical characteristics of classes – Mean vector – Covariance matrix – Minimum and maximum gray levels within each band – Conditional probability density function p(x | C_i), where C_i is the i-th class and x is the feature vector • The number of classes L into which the image is to be classified should be specified by the user
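The class mean vector alone already supports a simple minimum-distance-to-mean classifier, a common baseline (not one of the classifiers presented later in this deck); the 3-band class means below are illustrative:

```python
def classify_min_distance(pixel, class_means):
    """Assign a feature vector to the class with the nearest mean (Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(class_means, key=lambda c: dist2(pixel, class_means[c]))

# Illustrative 3-band class means (e.g. green, red, NIR digital numbers).
means = {"water": [30, 20, 10], "vegetation": [40, 30, 120], "soil": [90, 85, 80]}
label = classify_min_distance([42, 28, 110], means)
# The pixel is closest to the vegetation mean.
```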

  20. Inputs to a Classifier • How many and what classes to map input data into? • What are the attributes of each data element? (In case of images, the data element is a pixel, attributes are measurements in various wavelengths made by imaging sensors) • Samples to help classifier learn relationship between input raw data and information classes • Validation data to test the performance of classifier
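Separating the labeled samples into classifier-learning and validation sets (slides 15 and 20) can be sketched as a random hold-out split; the sample data and fraction below are illustrative:

```python
import random

def train_test_split(samples, test_fraction=0.25, seed=42):
    """Shuffle labeled samples and hold out a fraction for validation."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Illustrative (feature_vector, label) samples.
data = [([i, i + 1], "water" if i < 10 else "soil") for i in range(20)]
train, test = train_test_split(data)
# 15 training samples, 5 held-out test samples, with no overlap.
```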

  21. How are known sample locations marked?

  22. Necessary conditions for successful classification • Rich set of attributes (called features in the machine learning literature) • Adequate number of samples for classifier learning (called training data) and validation (called test data) • Capability of the learning algorithm – it should be able to exploit all information available in the sample data

  23. Support Vector Machines Slides on SVM originally from Prof. Andrew Moore’s lectures on Machine Learning

  24. Linear Classifiers f(x, w, b) = sign(w · x − b), where f = +1 denotes one class and f = −1 the other (figure: labeled training points and the estimated separating line)

  25. Maximum Margin The maximum margin linear classifier (the linear SVM) is the linear classifier allowing the maximum margin for test samples to vary from training samples (figure: +1 and −1 points with the widest separating band)

  26. Maximize Margin The separating hyperplane is w · x + b = 0 and the margin is d = 2 / ‖w‖. Directly: argmax_{w,b} min_{x_i ∈ D} (distance of x_i to the hyperplane), subject to ∀ x_i ∈ D: y_i (x_i · w + b) ≥ 0. Equivalent strategy: argmin_{w,b} ‖w‖², subject to ∀ x_i ∈ D: y_i (x_i · w + b) ≥ 1
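The margin maximization above is equivalent to minimizing a regularized hinge loss, λ‖w‖² + mean_i max(0, 1 − y_i(w·x_i + b)). A pure-Python subgradient-descent sketch on toy 2-D points (not the solver behind the slides' results; hyperparameters illustrative):

```python
def train_linear_svm(points, labels, lr=0.01, lam=0.01, epochs=200):
    """Minimize lam*||w||^2 + hinge loss max(0, 1 - y*(w.x + b))
    by per-sample subgradient descent; labels are +1 / -1."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point inside the margin: hinge term is active
                w = [wi - lr * (2 * lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # only the regularizer pulls on w
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Two linearly separable 2-D clusters.
pts = [[1, 1], [2, 1], [1, 2], [6, 6], [7, 6], [6, 7]]
ys = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(pts, ys)
```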

  27. Multilayer Perceptron Neural Networks

  28. Mathematical Representation A neuron with inputs x_1, …, x_n and weights w_1, …, w_n computes net = Σ_{i=1}^{n} w_i x_i + b, and its output is y = f(net), where b is the bias and f the activation function

  29. Mathematical Representation of the Activation Function

  30. Mathematical Representation of the Activation Function
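Slides 28–30 can be sketched in a few lines. The sigmoid below is one common choice of activation f(net); the deck's own activation-function figures are not reproduced here, and the weights are illustrative:

```python
import math

def sigmoid(net):
    """A common activation function: f(net) = 1 / (1 + exp(-net))."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron(inputs, weights, bias):
    """net = sum_i w_i * x_i + b ; output y = f(net)  (slide 28)."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(net)

y = neuron([1.0, 2.0], [0.5, -0.25], bias=0.0)
# net = 0.5*1.0 - 0.25*2.0 = 0.0, so y = sigmoid(0) = 0.5
```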

  31. Multilayer Perceptron Network (figure: input nodes, hidden layers, output nodes)

  32. Selected Applications • Landuse/Landcover classification • Edge and line detection

  33. Input Image

  34. NN Supervised Classification

  35. Texture Analysis MUMBAI Data: IRS-1C PAN, 1024 x 1024 pixels

  36. Texture Classification by Neural Networks Legend: water; marshy land / shallow water; highly built-up area; partially built-up area; open areas / grounds

  37. Identification of Informal Settlements based on Texture

  38. Classification Strategies (flowchart): High Resolution Satellite Image → Pre-processing → Decompose image at different levels → Segment image at different resolutions → Link the regions of different resolutions → Connected Component Labeling → Features (spatial, spectral, texture, context) → Object-Specific Classification / General Purpose Classification → Post-processing (Relaxation Labeling Process) → Classified Image
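The Connected Component Labeling step in the pipeline can be sketched as a breadth-first search over a binary segmentation mask; 4-connectivity is assumed and the toy mask is illustrative:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary mask via BFS.
    Returns a label image (0 = background) and the component count."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                count += 1                      # start a new component
                queue = deque([(r, c)])
                labels[r][c] = count
                while queue:
                    cr, cc = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and mask[nr][nc] and not labels[nr][nc]):
                            labels[nr][nc] = count
                            queue.append((nr, nc))
    return labels, count

# Two separate segments in a toy binary segmentation mask.
mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = label_components(mask)
```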

  39. Object based classification (legend: grass, vegetation, roof top, concrete, open ground)

  40. Object based classification (legend: buildings 1, buildings 2, open ground, road, shadow, vegetation)

  41. Object based classification (legend: built-up, open ground, vegetation)

  42. Examples Road Extraction Biplab Banerjee, Siddharth Buddhiraju and Krishna Mohan Buddhiraju, Proc. ICVGIP 2012

  43. Examples Building outline extraction by object based image analysis Biplab Banerjee and Krishna Mohan Buddhiraju, UDMS 2013, Claire Ellul et al. (ed.), CRC Press, May 2013

  44. Object Specific Classification Examples Buildings Planes Trees Ashvitha Shetty and Krishna Mohan B., Building Extraction in High Spatial Resolution Images Using Deep Learning Techniques, LNCS10962, pp. 327–338, 2018

  45. Hyperspectral Imagery

  46. INTRODUCTION Hyperspectral sensors • Large number of contiguous bands • Narrow spectral bandwidth Advantages • Better discrimination among classes on the ground • Rich information from contiguous and smooth spectra Challenge • Highly correlated bands (Figure: hyperspectral data of a scene. Source: remotesensing.spiedigitallibrary.org)

  47. Tea Spectra for different conditions

  48. Airborne Visible and InfraRed Imaging Spectrometer – Next Generation (AVIRIS-NG) AVIRIS-NG red-green-blue (visible) aerial image of the Refugio Incident oil spill, near Santa Barbara Channel beaches Source: https://aviris-ng.jpl.nasa.gov/

  49. High Spectral Resolution Image Analysis (flowchart): high spectral resolution image → atmospheric correction → dimensionality reduction → pure pixel / training data identification (with spectral libraries) → supervised classification / mixture modeling / spectral matching → abundance mapping, general purpose classification, sub-pixel mapping & classification, super-resolution
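The spectral matching step is commonly done with the Spectral Angle Mapper (SAM), which scores a pixel spectrum against a library spectrum by the angle between them, making it insensitive to overall brightness; the spectra below are illustrative:

```python
import math

def spectral_angle(x, r):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum x
    and a reference spectrum r; a small angle means a good match."""
    dot = sum(a * b for a, b in zip(x, r))
    nx = math.sqrt(sum(a * a for a in x))
    nr = math.sqrt(sum(b * b for b in r))
    return math.acos(max(-1.0, min(1.0, dot / (nx * nr))))

# Illustrative 4-band spectra: a scaled copy matches with angle ~0.
reference = [0.2, 0.4, 0.6, 0.8]
pixel_same = [0.1, 0.2, 0.3, 0.4]   # same spectral shape, half the brightness
pixel_diff = [0.8, 0.6, 0.4, 0.2]   # reversed spectral shape
a_same = spectral_angle(pixel_same, reference)
a_diff = spectral_angle(pixel_diff, reference)
# a_same is essentially 0; a_diff is substantially larger.
```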

  50. End Member Extraction • Pixel Purity Index (Source: https://www.researchgate.net/figure/Toy-example-illustrating-the-performance-of-the-PPI-endmember-extraction-algorithm-in-a_fig2_228856827)

  51. Endmember Extraction Algorithm Demonstration: Samson Dataset Fig. 1: Samson FCC; Fig. 2: Auto-EME endmembers; Fig. 3: endmember 1 at (4, 8); Fig. 4: endmember 2 at (5, 88); Fig. 5: endmember 3 at (68, 62)

  52. Abundance Distribution of Endmembers with GDME algorithm on Samson dataset (rock, vegetation, water; integrated abundance image; table: ground-truth vs. entropy-output abundances at sampled image coordinates)
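Abundance mapping under the linear mixing model solves x ≈ a1·m1 + a2·m2 with a1 + a2 = 1. A closed-form least-squares sketch for two endmembers (this is a generic illustration, not the slide's GDME algorithm; the spectra are illustrative):

```python
def unmix_two(x, m1, m2):
    """Least-squares abundance of endmember m1 in pixel x under the
    two-endmember linear mixing model with a1 + a2 = 1.
    Substituting a2 = 1 - a1 gives a1 = (x - m2).(m1 - m2) / ||m1 - m2||^2."""
    d = [a - b for a, b in zip(m1, m2)]   # m1 - m2
    y = [a - b for a, b in zip(x, m2)]    # x - m2
    a1 = sum(di * yi for di, yi in zip(d, y)) / sum(di * di for di in d)
    return a1, 1.0 - a1

# Illustrative endmember spectra (e.g. vegetation and water) and a 30/70 mix.
veg = [0.1, 0.5, 0.6]
wat = [0.3, 0.2, 0.1]
pixel = [0.3 * v + 0.7 * w for v, w in zip(veg, wat)]
a_veg, a_wat = unmix_two(pixel, veg, wat)
# An exactly mixed pixel recovers a_veg = 0.3, a_wat = 0.7.
```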

  53. Hyperion Hyperspectral Data • Number of rows = 1400 • Number of columns = 256 • Number of bands = 242

  54. SVM Classification
