
Computational single-cell classification using deep learning on bright-field and phase images
Nan Meng, Hayden K.-H. So, Edmund Y. Lam
Imaging Systems Laboratory, Department of Electrical and Electronic Engineering, University of Hong Kong


  1. Computational single-cell classification using deep learning on bright-field and phase images
     Nan Meng, Hayden K.-H. So, Edmund Y. Lam
     Imaging Systems Laboratory, Department of Electrical and Electronic Engineering, University of Hong Kong
     http://www.eee.hku.hk/isl
     15th IAPR International Conference on Machine Vision Applications (IAPR-MVA), 9 May 2017

  2. In a nutshell
     The five cell types: MCF7, OAC, OST, PBMC, THP1.

  3. Outline
     1. Introduction
     2. Network Design
     3. Channel Augmentation
     4. Results
     5. Conclusions and Future Work

  4. Introduction: Ultrafast imaging
     Enabling technology #1: Time-stretch imaging. Asymmetric-detection time-stretch optical microscopy (ATOM) obtains label-free, high-contrast images of transparent cells at ultrahigh speed and with sub-cellular resolution.
     Figure: Photo of an ATOM system.

  5. Introduction: Ultrafast imaging
     Figure: General schematic of an ATOM system.
     - Top speed: ≈ 30,000 images per second
     - Original image resolution: 84 × 305
     - Pixel superresolution gives: 305 × 305
     - Four bright-field images captured concurrently
     - Data rate: 30000 × 84 × 305 × 4 ≈ 3.1 GB/s
     - Detection mechanism: line by line via XXXXXX
     - Cell flow: optofluidic system
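The stated data rate follows from the figures above; a quick back-of-the-envelope check, assuming one byte per pixel (the slide does not state the pixel depth):

```python
# Back-of-the-envelope data-rate check for the ATOM front end.
# Assumes 1 byte per pixel, which is not stated on the slide.
frames_per_second = 30_000
height, width = 84, 305          # raw image resolution before superresolution
channels = 4                     # four bright-field images captured concurrently

bytes_per_second = frames_per_second * height * width * channels
print(f"{bytes_per_second / 1e9:.2f} GB/s")  # → 3.07 GB/s
```

This matches the ≈ 3.1 GB/s quoted on the slide.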

  6. Introduction: Cell classification
     Enabling technology #2: Deep learning for image classification.
     Cell classification (phenotype): identify specific cells among many different cells based on image analysis techniques.
     Data-driven methods for object classification automatically extract features to identify different types of cells.
     Figure: Deep learning pipeline (CONV → POOL → CONV → POOL → FC → SOFTMAX) applied to cell images, contrasted with the classical pipeline of feature detection, hypothesis formation, and hypothesis verification against a model base.

  7. Introduction
     Deep learning breaks the desired complicated mapping into a series of nested simple mappings, each described by a different layer of the model.
     The input is presented at the visible layer (input pixels). A series of hidden layers extracts increasingly abstract features from the image: the 1st hidden layer (edges), the 2nd hidden layer (corners and contours), and the 3rd hidden layer (object parts). Finally, the last layer learns descriptions of the image in terms of the object parts and uses them to recognize objects (output: object identity).
     Figure: Deep learning framework.

  8. Network Design
     1. Introduction
     2. Network Design
     3. Channel Augmentation
     4. Results
     5. Conclusions and Future Work

  9. Network Design
     Convolutional Neural Network (CNN)
     We explore a systematic way to design the network and tune its structure to obtain a robust model that avoids overfitting.
     Figure: Our proposed CNN-based framework: three convolutional blocks (each with a convolutional layer, ReLU, and batch normalization) operating on 305 × 305 input images, followed by two fully connected layers.
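The way feature-map sizes shrink as the 305 × 305 input passes through the cascaded blocks can be sketched with the standard convolution output-size formula. The kernel sizes and strides below are illustrative assumptions, not the exact values from the figure:

```python
# Sketch of how feature-map sizes evolve through cascaded conv blocks.
# Kernel sizes and strides below are illustrative assumptions, not the
# exact values from the slide's figure.

def conv_out(size: int, kernel: int, stride: int = 1, pad: int = 0) -> int:
    """Standard convolution output-size formula."""
    return (size + 2 * pad - kernel) // stride + 1

size = 305                                          # input images are 305 x 305
for kernel, stride in [(11, 2), (5, 2), (3, 2)]:    # three blocks (assumed params)
    size = conv_out(size, kernel, stride)
    print(f"after {kernel}x{kernel} conv, stride {stride}: {size}x{size}")
```

The same formula also shows why the final fully connected layers see a much smaller spatial grid than the raw input.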

  10. Network Design: Building blocks
     - The convolution layer extracts features that are robust to translation and rotation variations.
     - The pooling layer makes the output less redundant.
     - The batch normalization layer is effective at avoiding overfitting.
     We combine these three connected layers (feature extraction with translation invariance, redundancy reduction, overfitting avoidance) into a basic feature-extraction unit, which we call a "block", and cascade multiple blocks to obtain the final framework.
     Figure: One block applied to the 305 × 305 input images: Convolutional Layer 1, ReLU 1, Batch Norm 1.
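A minimal NumPy sketch of one such block (convolution, then ReLU, then batch normalization). The 32 × 32 input size, 3 × 3 kernel, and epsilon value are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Minimal sketch of one "block": convolution -> ReLU -> batch normalization.
# Input size, kernel size, and epsilon are illustrative assumptions.

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def batch_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize activations to zero mean and unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))     # stand-in for a 305 x 305 cell image
kernel = rng.standard_normal((3, 3))
block_out = batch_norm(relu(conv2d(image, kernel)))
print(block_out.shape)   # (30, 30)
```

Cascading the framework then amounts to feeding `block_out` into the next block's convolution.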

  11. Network Design: Cascading the blocks
     1. Deep learning models can extract high-level features.
     2. Higher layers are more likely to extract abstract and invariant features.
     Figure: Activations after block 1 and block 2 of the proposed framework, which feed into the fully connected classifier.

  12. Channel Augmentation
     1. Introduction
     2. Network Design
     3. Channel Augmentation
     4. Results
     5. Conclusions and Future Work

  13. Channel Augmentation: Bright-field and phase images
     Bright-field imaging is a technique where light from the specimen and its surroundings is collected to form an image against a bright background.
     Figure: System architecture. Four pixel streams (I1 to I4) arrive from the host CPU over PCIe/InfiniBand and feed a frequency-domain phase module.

  14. Channel Augmentation: Computation of the phase image φ(x, y)

     φ(x, y) = Im( F⁻¹{ H(κx, κy) } )                                      (1)

     where H(κx, κy) = C                                        for κx = κy = 0,
           H(κx, κy) = F{ G(x, y) · FOV } / ( 2π · (κx + i·κy) )   otherwise.

     - ∇φx and ∇φy: local phase shifts, with G(x, y) = ∇φx + i·∇φy
     - F and F⁻¹: forward and inverse Fourier transforms
     - (κx + i·κy): Fourier spatial frequencies normalized as a linear ramp
     - C: an arbitrary integration constant
     - FOV: image field of view expressed in physical units
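Equation (1) can be sketched directly with NumPy's FFT routines. The grid size, FOV value, and integration constant are illustrative assumptions, and the exact normalization of the frequency ramp may differ from the paper's:

```python
import numpy as np

# Sketch of the Fourier-domain phase integration in Eq. (1).
# Grid size, FOV, and C are illustrative assumptions; the paper's exact
# frequency-ramp normalization may differ.

def integrate_phase(grad_x: np.ndarray, grad_y: np.ndarray,
                    fov: float = 1.0, C: float = 0.0) -> np.ndarray:
    """Recover phase from its gradients via Eq. (1)."""
    h, w = grad_x.shape
    G = grad_x + 1j * grad_y                  # G(x, y) = grad_phi_x + i * grad_phi_y
    kx = np.fft.fftfreq(w)[np.newaxis, :]     # spatial-frequency ramps
    ky = np.fft.fftfreq(h)[:, np.newaxis]
    denom = 2 * np.pi * (kx + 1j * ky)
    denom[0, 0] = 1.0                         # avoid division by zero at DC
    spectrum = np.fft.fft2(G * fov) / denom
    spectrum[0, 0] = C                        # DC term set to the constant C
    return np.imag(np.fft.ifft2(spectrum))

# Smoke test on a small synthetic gradient field
rng = np.random.default_rng(1)
gx = rng.standard_normal((64, 64))
gy = rng.standard_normal((64, 64))
phase = integrate_phase(gx, gy)
print(phase.shape)
```

Taking the imaginary part at the end works because, for a true gradient field, F{G} = 2πi·(κx + i·κy)·F{φ}, so dividing by 2π(κx + i·κy) leaves i·F{φ}.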

  15. Channel Augmentation
     Generating the phase image aims to enrich the information of each individual sample without increasing the size of the dataset used for training.
     Figure: Channel augmentation cascades several relevant images together (five 305 × 305 images, stacked into one 305 × 305 × 5 input) to enrich information before the CNN-based framework outputs the classes.
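The stacking step itself is a one-liner in NumPy; the array names and channel ordering below are illustrative, since the slide does not prescribe an ordering:

```python
import numpy as np

# Sketch of channel augmentation: stack the four bright-field channels and
# the computed phase image into one 305 x 305 x 5 network input.
# Array names and channel ordering are illustrative assumptions.

brightfield = [np.zeros((305, 305)) for _ in range(4)]   # channels 1-4
phase = np.zeros((305, 305))                             # computed phase image

sample = np.stack(brightfield + [phase], axis=-1)
print(sample.shape)   # (305, 305, 5)
```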

  16. Results
     1. Introduction
     2. Network Design
     3. Channel Augmentation
     4. Results
     5. Conclusions and Future Work

  17. Results
     - Channel augmentation is efficient for the cell classification problem.
     - Different channel images provide competitive information.
     - Channel images contain better features than the phase image.
     - Cascading channel and phase images achieves the best classification.

     Table: Classification accuracy (average) with different channel augmentation strategies.

     Aspects              | validation | test
     channel 1            | 0.94       | 0.94
     channel 2            | 0.96       | 0.95
     channel 3            | 0.94       | 0.96
     channel 4            | 0.94       | 0.92
     channel 1-4          | 0.96       | 0.93
     phase only           | 0.93       | 0.90
     channel 1-4 & phase  | 0.97       | 0.97

  18. Results: Precision of each class on the test set
     - Different channels perform discriminatively on specific types of cells.
     - The model performs best after channel augmentation.

     precision = tp / (tp + fp)   (tp: true positives, fp: false positives)

     Table: Per-class precision on the test set.

     Aspects              | THP1   | OAC    | MCF7   | PBMC   | OST
     channel 1            | 0.8429 | 0.9551 | 0.8795 | 1.000  | 1.000
     channel 2            | 0.9296 | 0.9872 | 0.9868 | 0.8588 | 0.9889
     channel 3            | 0.9444 | 0.9753 | 1.000  | 0.9255 | 0.9753
     channel 4            | 0.7838 | 0.9773 | 0.9880 | 0.8462 | 0.9870
     channel 1-4          | 1.000  | 0.9859 | 0.9767 | 0.7907 | 0.9259
     phase                | 0.8571 | 0.9167 | 0.9870 | 0.8642 | 0.8642
     channel 1-4 & phase  | 1.000  | 1.000  | 0.9524 | 0.9351 | 0.9324

  19. Results: Recall of each class on the test set
     - Different channels perform discriminatively on specific types of cells.
     - The model performs best after channel augmentation.

     recall = tp / (tp + fn)   (tp: true positives, fn: false negatives)

     Table: Per-class recall on the test set.

     Aspects              | THP1   | OAC    | MCF7   | PBMC   | OST
     channel 1            | 0.8551 | 1.000  | 0.9634 | 0.9012 | 0.9518
     channel 2            | 0.8354 | 0.9872 | 1.000  | 0.9359 | 0.9889
     channel 3            | 0.8947 | 0.9875 | 1.000  | 0.9355 | 1.000
     channel 4            | 0.7945 | 1.000  | 1.000  | 0.8048 | 0.9870
     channel 1-4          | 0.9870 | 0.9859 | 0.9882 | 0.9189 | 0.8065
     phase                | 0.8571 | 0.8652 | 0.8642 | 0.9091 | 1.000
     channel 1-4 & phase  | 0.9756 | 0.9770 | 0.9351 | 0.9324 | 1.000

  20. Results: F1 score of each class on the test set
     - Different channels perform discriminatively on specific types of cells.
     - The model performs best after channel augmentation.

     F1 = 2 · precision · recall / (precision + recall)

     Table: Per-class F1 score on the test set.

     Aspects              | THP1 | OAC  | MCF7 | PBMC | OST
     channel 1            | 0.85 | 0.98 | 0.98 | 0.89 | 0.98
     channel 2            | 0.88 | 0.99 | 0.99 | 0.90 | 0.99
     channel 3            | 0.92 | 0.98 | 0.93 | 0.99 | 1.0
     channel 4            | 0.79 | 0.99 | 0.99 | 0.83 | 0.99
     channel 1-4          | 0.99 | 0.98 | 0.98 | 0.85 | 0.86
     phase                | 0.87 | 0.90 | 0.99 | 0.88 | 0.89
     channel 1-4 & phase  | 0.99 | 0.94 | 0.99 | 1.0  | 0.95
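The three per-class metrics reported in the tables above can be sketched directly from their definitions; the counts below are hypothetical, not taken from the paper's confusion matrix:

```python
# Sketch of the per-class metrics used in the accuracy/precision/recall/F1
# tables. The tp/fp/fn counts below are hypothetical examples.

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

# Hypothetical counts for one cell class:
tp, fp, fn = 80, 15, 13
p, r = precision(tp, fp), recall(tp, fn)
print(f"precision={p:.4f} recall={r:.4f} f1={f1(p, r):.4f}")
```

F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one metric sharply for the other.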
