Deep Learning Retinal Vessel Segmentation From a Single Annotated Example
Praneeth Sadda (1), John A. Onofrey (1), Xenophon Papademetris (1,2)
(1) Department of Radiology and Biomedical Imaging, Yale University
(2) Department of Biomedical Engineering, Yale University
Semantic Segmentation (Bagci et al. 2014; Garcia-Peraza-Herrera et al. 2014; Stahl et al. 2004)
[Figure: fully convolutional neural network (FCNN) architecture]
Fundamental Issue of Supervised Learning
• Data is easy to acquire
• Data is difficult to label
Many Datasets are Partially Labeled
[Figure: a fully labeled dataset compared with a partially labeled dataset]
[Figure: FCNN]
Style Transfer (Zhu et al. 2017)
Synthetic Image Generation
[Figure: a labeled image combined with an unlabeled image yields a labeled synthetic image]
Proposed Workflow
Partially labeled dataset → generation of synthetic data → training with fully-labeled examples → FCNN
CycleGAN
CycleGANs
$\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]$,
so that $F(G(x)) \approx x$ and $G(F(y)) \approx y$.
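Expressed in code, the cycle-consistency term might look like the minimal PyTorch sketch below. The generators G (X → Y) and F (Y → X) stand in for any pair of image-to-image networks; this is an illustrative sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           real_x: torch.Tensor, real_y: torch.Tensor) -> torch.Tensor:
    """L_cyc(G, F) = E_x ||F(G(x)) - x||_1 + E_y ||G(F(y)) - y||_1."""
    l1 = nn.L1Loss()
    forward_cycle = l1(F(G(real_x)), real_x)    # F(G(x)) should reconstruct x
    backward_cycle = l1(G(F(real_y)), real_y)   # G(F(y)) should reconstruct y
    return forward_cycle + backward_cycle
```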
Methods (a sketch of this patchwise pipeline follows below)
• Provide a manual ground truth segmentation for a single "template" image.
• Divide images into patches.
• Train one CycleGAN for each unlabeled image (~10 hours per image) using a patchwise approach.
• Transfer the style of the template image using a patchwise approach.
• Train the FCNN on the style-transferred images using a patchwise approach.
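The sketch below illustrates one way the patchwise pipeline above could be organized. Only extract_patches is implemented; train_cyclegan, transfer_style, and train_fcnn are hypothetical placeholder names, and the direction of the style transfer (template content restyled to match each unlabeled image, so the template's label can be reused) is inferred from the "Synthetic Image Generation" slide rather than stated explicitly in the original.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch_size: int = 64, stride: int = 64):
    """Divide a 2D (or 2D + channel) image into square patches."""
    patches = []
    for r in range(0, image.shape[0] - patch_size + 1, stride):
        for c in range(0, image.shape[1] - patch_size + 1, stride):
            patches.append(image[r:r + patch_size, c:c + patch_size])
    return patches

# Workflow outline (helper functions are placeholders, not the authors' code):
# template_img, template_label = load_template()              # the single labeled example
# synthetic_dataset = []
# for unlabeled_img in unlabeled_images:                      # 19 unlabeled images
#     gan = train_cyclegan(extract_patches(template_img),     # ~10 hours per image
#                          extract_patches(unlabeled_img))
#     synthetic_img = transfer_style(gan, template_img)       # template content, new style (assumed direction)
#     synthetic_dataset.append((synthetic_img, template_label))
# fcnn = train_fcnn(synthetic_dataset)                        # patchwise FCNN training
```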
Synthetic Image Generation
Results
[Figure: example segmentations from a physician rater, an FCNN trained with one labeled example, an FCNN trained with one labeled and 19 unlabeled examples, and an FCNN trained with 20 labeled examples]
Results

Training Dataset          Sensitivity    Specificity    Accuracy
20 Labeled                0.60 ± 0.10    0.98 ± 0.01    0.94 ± 0.01
1 Labeled + 19 Unlabeled  0.62 ± 0.10    0.95 ± 0.01    0.93 ± 0.02
1 Labeled                 0.86 ± 0.04    0.53 ± 0.06    0.56 ± 0.05
Conclusion
• Style transfer can be used to exploit the information in unlabeled examples for supervised learning of semantic segmentation.
• For segmenting retinal vessels, an FCNN trained with one labeled and 19 unlabeled examples approaches the accuracy of an FCNN trained with 20 labeled examples (0.93 vs. 0.94).