Segmentation-level Fusion for Iris Recognition
Peter Wild 1,3, Heinz Hofbauer 2, James Ferryman 1, Andreas Uhl 2
1 School of Systems Engineering, University of Reading, Reading RG6 6AY, UK.
2 Dept. of Computer Sciences, University of Salzburg, 5020 Salzburg, Austria.
3 AIT Austrian Institute of Technology GmbH, 2444 Seibersdorf, Austria.
peter.wild@ait.ac.at, {hhofbaue, uhl}@cosy.sbg.ac.at, j.m.ferryman@reading.ac.uk
14th Int'l Conf. of the Biometrics Special Interest Group (BIOSIG), September 2015
P. Wild et al.: Segmentation-level Fusion for Iris Recognition
Outline
1 Introduction
2 Multi-segmentation Fusion Methods
3 Experimental Study
4 Conclusion
Motivation
Challenge:
- Existing: fusion methods at data/feature, score and rank/decision level.
- Widely ignored: fusion at normalisation/segmentation level prior to feature extraction.
- Missing: any standards for interchange of segmentation results.
Ambition:
- Motivation 1: better accuracy for less invasive recording conditions?
- Motivation 2: potentially faster alternative to multi-algorithm fusion?
- Motivation 3: improved understanding of types of segmentation errors.
Impact: investigate and suggest methods for effective multi-segmentation fusion, tested on public datasets with open-source software.
Related Work
Super-resolution [Huang et al. BMVC'03]: among the first learning-based fusion methods to enhance the resolution of iris images; combines high-resolution images from multiple frames.
Iris image-fusion: [Hollingsworth et al. TIFS'09] data-level fusion; presents a Markov network. [Jillela et al. WACV'11] image-level fusion with Principal Component Transform. [Llano et al. ICB'15] PCA vs. Laplacian Pyramid & Exponential Mean fusion.
Segmentation fusion [Uhl et al. ICIAR'13]: proof-of-concept approaches for iris segmentation using human (manual) ground truth: 97.46%-97.64% GAR at 0.01% FAR; no automated algorithms.
Fusion Framework
Pipeline (figure): k segmentation algorithms each deliver boundaries and a noise mask {P_i, L_i, E_U^i, E_L^i}, N_i; segmentation fusion combines these into {P, L, E_U, E_L} and N; the rubbersheet transform applied to the input image I yields the iris texture and noise mask.
Input: inner/outer boundaries P, L : [0, 2π) → [0, m] × [0, n].
Output 1: refined boundaries for the "rubbersheet" mapping R(θ, r) := (1 − r) · P(θ) + r · L(θ).
Output 2: texture and noise masks T, M : [0, 2π) × [0, 1] → C (C is the target color space, M = N ∘ R, T = I ∘ R for the original n × m image I and noise mask N).
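The rubbersheet mapping R(θ, r) above can be sketched as a small NumPy routine; the boundary functions P and L are assumed to be callables θ → (x, y), and nearest-neighbour sampling is used for simplicity.

```python
import numpy as np

def rubbersheet(image, P, L, width=512, height=64):
    """Daugman-style rubbersheet normalisation: sample the iris annulus
    between inner boundary P(theta) and outer boundary L(theta) onto a
    fixed-size rectangular texture via R(theta, r) = (1-r)P(theta) + r L(theta)."""
    texture = np.zeros((height, width), dtype=image.dtype)
    for col in range(width):
        theta = 2.0 * np.pi * col / width
        px, py = P(theta)           # point on pupillary (inner) boundary
        lx, ly = L(theta)           # point on limbic (outer) boundary
        for row in range(height):
            r = row / (height - 1)  # radial position in [0, 1]
            x = (1.0 - r) * px + r * lx
            y = (1.0 - r) * py + r * ly
            texture[row, col] = image[int(round(y)), int(round(x))]
    return texture
```

Applying the same sampling to the noise mask N instead of I gives the normalised mask M = N ∘ R.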
Investigated Questions
1 Does the combination of automated iris segmentation results yield more accurate results than each of the employed original segmentation algorithms?
2 How does the choice of database and segmentation algorithms impact iris segmentation fusion?
3 How do outliers impact overall recognition accuracy, and how do ground-truth-based vs. recognition-based evaluations relate to each other?
Contribution: analysis of reference methods for iris segmentation-level fusion, considering ground-truth and recognition-based assessment.
Error Measures
Ground-truth evaluation: assessing the segmentation noise mask; measures suggested by the Noisy Iris Challenge Evaluation - Part I (NICE.I) and F-measure [Hofbauer et al. ICPR'14]:

E_1 := \frac{1}{k} \sum_{i=1}^{k} \frac{fp_i + fn_i}{mn}; \quad E_2 := \frac{1}{k} \sum_{i=1}^{k} \left( \frac{1}{2} \frac{fp_i}{fp_i + tn_i} + \frac{1}{2} \frac{fn_i}{fn_i + tp_i} \right) \quad (1)

F\text{-}measure = F_1 := \frac{1}{k} \sum_{i=1}^{k} \frac{tp_i}{tp_i + \frac{1}{2}(fn_i + fp_i)} \quad (2)

Recognition-based evaluation: account for feature-based tolerance of false segmentations. Use the Equal Error Rate (EER) as the main performance indicator; McNemar test [McNemar, Psy.'47].
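A minimal sketch of the per-image terms of Eqs. (1)-(2): both inputs are boolean masks where True marks iris pixels, and averaging the returned values over k images yields E1, E2 and F1 (the function name and interface are illustrative, not from the paper).

```python
import numpy as np

def segmentation_errors(predicted, ground_truth):
    """Per-image NICE.I error rates and F-measure for boolean masks."""
    tp = np.sum(predicted & ground_truth)
    fp = np.sum(predicted & ~ground_truth)
    fn = np.sum(~predicted & ground_truth)
    tn = np.sum(~predicted & ~ground_truth)
    mn = predicted.size
    e1 = (fp + fn) / mn                               # pixel disagreement rate
    e2 = 0.5 * fp / (fp + tn) + 0.5 * fn / (fn + tp)  # balanced error rate
    f1 = tp / (tp + 0.5 * (fn + fp))                  # F-measure
    return e1, e2, f1
```

A perfect mask gives (0, 0, 1); an all-background prediction against a partly iris ground truth gives E2 = 0.5, the worst balanced error reachable without false positives.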
Segmentation Fusion
Sum-Rule Interpolation: this fusion rule combines the boundary points B_i(θ) of curves B_1, B_2, ..., B_k into a single boundary B, for pupillary and limbic boundaries, in analogy to the sum rule:

B(\theta) := \frac{1}{k} \sum_{i=1}^{k} B_i(\theta) \quad (3)

Augmented-Model Interpolation: this model combines the boundaries B_1, ..., B_k within a jointly applied parametrisation model ModelFit : [0, 2π) → [0, m] × [0, n] minimizing the model error (e.g., Fitzgibbon's ellipse or least-squares circular fitting), executed separately for inner and outer iris boundaries. Models are combined, not only points:

B(\theta) := ModelFit\left( \bigcup_{i=1}^{k} B_i(\theta) \right) \quad (4)
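Both rules can be sketched in a few lines; the circle fit below is a simple least-squares stand-in for ModelFit in Eq. (4) (the paper also mentions Fitzgibbon's ellipse fitting, which is not reproduced here).

```python
import numpy as np

def sum_rule_fusion(boundaries, thetas):
    """Sum-rule interpolation (Eq. 3): average the boundary points
    B_i(theta) of k candidate curves at each sampled angle.
    `boundaries` is a list of callables theta -> (x, y)."""
    pts = np.array([[b(t) for t in thetas] for b in boundaries])
    return pts.mean(axis=0)  # fused points, shape (len(thetas), 2)

def fit_circle(points):
    """Least-squares circular fit (a stand-in for ModelFit in Eq. 4):
    fit x^2 + y^2 + D x + E y + F = 0 to the pooled boundary points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2, -E / 2
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)
```

For Eq. (4) one pools the sampled points of all k boundaries into one array and fits a single model, rather than averaging per angle.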
Iris Scanning and Pruning Process
Overview of the iris scanning and pruning process:
- Input: segmentation masks N.
- Method: augmented-model interpolation based on a mask scan; N equidistant horizontal and vertical scan lines are used to generate boundary points.
- Outlier detection and removal relative to the center of gravity C_r: radii with z-score > 2.5, i.e. outside μ_r ± 2.5σ_r, are pruned.
Figure: (a) with outliers, (b) with outliers pruned.
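The z-score pruning step can be sketched as follows, assuming the scan-line boundary points are collected in an (n, 2) array (function name and interface are illustrative).

```python
import numpy as np

def prune_outliers(points, z_max=2.5):
    """Prune boundary points whose radius from the center of gravity C_r
    deviates by more than z_max standard deviations (z-score > 2.5 rule,
    i.e. radii outside mu_r +/- 2.5 sigma_r)."""
    center = points.mean(axis=0)                    # center of gravity C_r
    radii = np.linalg.norm(points - center, axis=1)
    mu, sigma = radii.mean(), radii.std()
    keep = np.abs(radii - mu) <= z_max * sigma
    return points[keep]
```

A single gross outlier inflates σ_r, but its own deviation from μ_r still dominates, so it is reliably removed while near-circular points survive.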
Iris Scanning and Pruning Process: Details
- A high number of scan lines is desirable (N = 100).
- If the mask contains holes (noise), they should be closed by a dilate+erode morphological operation.
- The OSIRIS algorithm produces masks that extend over the actual boundaries; therefore, a restriction step is introduced.
- The actual mask is generated by fitting an ellipse to the point clouds with a least-squares method.
Figure: (a) original, (b) corrected boundaries, (c) without noise.
Tested Segmentation Algorithms
- CAHT - Contrast-Adaptive Hough Transform [Rathgeb et al. 2012]: traditional sequential (limbic-after-pupillary) method; based on circular HT and contrast enhancement.
- WAHET - Weighted Adaptive HT & Ellipsopolar Transform [Uhl et al. BTAS'12]: two-stage adaptive multi-scale HT segmentation, elliptical.
- OSIRIS - Open Source for Iris [Petrovska et al. 2007]: circular HT-based method with boundary refinement.
- IFPP - Iterative Fourier Pulling & Pushing [Uhl et al. ICIAR'12]: iterative Fourier-series approximation and "pulling and pushing".
Impact on Recognition Accuracy
Equal error rates (EER) for combinations using USIT (University of Salzburg Iris Toolkit [Rathgeb et al. 2012]) algorithms Ma (wavelet zero-crossing based) and Masek (1D Log-Gabor). Diagonal entries: single algorithms; off-diagonal entries: fused pairs.

Casia v4 Interval database, EER [%] of Masek:
        CAHT   WAHET  OSIRIS  IFPP
CAHT    1.22   0.92   1.03    1.30
WAHET          1.89   1.02    1.41
OSIRIS                1.04    1.44
IFPP                          8.10

Casia v4 Interval database, EER [%] of Ma:
        CAHT   WAHET  OSIRIS  IFPP
CAHT    0.99   0.64   0.84    1.17
WAHET          1.72   0.89    1.22
OSIRIS                0.73    1.53
IFPP                          8.78

IIT Delhi database, EER [%] of Masek:
        CAHT   WAHET  OSIRIS  IFPP
CAHT    1.85   3.60   1.65    1.38
WAHET          6.82   3.90    3.70
OSIRIS                1.40    1.94
IFPP                          3.87

IIT Delhi database, EER [%] of Ma:
        CAHT   WAHET  OSIRIS  IFPP
CAHT    1.72   4.06   1.95    1.43
WAHET          7.43   4.86    4.23
OSIRIS                1.21    2.40
IFPP                          4.36
Results of the McNemar test, reported as X² values (each cell compares the fusion of the row and column methods against the single method in the column):

Casia v4 Interval database, X² statistic for Masek:
fused with  CAHT   WAHET  OSIRIS  IFPP
CAHT        —      24742  8       246149
WAHET       2543   —      13      247450
OSIRIS      1158   22002  —       243734
IFPP        928    8110   3729    —

Casia v4 Interval database, X² statistic for Ma:
fused with  CAHT   WAHET  OSIRIS  IFPP
CAHT        —      28739  135     273347
WAHET       3993   —      1649    276351
OSIRIS      1620   15752  —       261445
IFPP        1438   7076   10532   —

IIT Delhi database, X² statistic for Masek:
fused with  CAHT   WAHET  OSIRIS  IFPP
CAHT        —      49180  169     35918
WAHET       20317  —      42328   24
OSIRIS      1746   27835  —       17116
IFPP        3193   38721  3655    —

IIT Delhi database, X² statistic for Ma:
fused with  CAHT   WAHET  OSIRIS  IFPP
CAHT        —      21271  4614    61327
WAHET       52945  —      78177   53
OSIRIS      368    10149  —       26311
IFPP        1145   21256  11669   —
Results of Segmentation-level Fusion (Recognition)
- Segmentation fusion increased performance in 10 out of 24 combination scenarios.
- Only one setup, IFPP + WAHET, consistently increases performance.
- Only one case, OSIRIS + CAHT using Ma on IITD, deteriorates the performance of both individual results.
- McNemar tests using the χ² approximation with the continuity correction proposed by Edwards reveal that the EERs are different (a critical value X² ≥ 6.64 indicates rejection of the null hypothesis with at least 99% significance).
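The test statistic behind the tables above is simple: with b the trials where only the first system succeeds and c where only the second succeeds, Edwards' continuity-corrected McNemar X² is a one-liner (a minimal sketch, not the authors' implementation).

```python
def mcnemar_x2(b, c):
    """McNemar X^2 with Edwards' continuity correction.
    b: comparisons only system A decides correctly,
    c: comparisons only system B decides correctly.
    X^2 >= 6.64 rejects equal performance at >= 99% significance."""
    if b + c == 0:
        return 0.0  # systems agree everywhere; no evidence of difference
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Equal disagreement counts yield a near-zero statistic, while a strongly asymmetric split (e.g. b = 30, c = 5) exceeds the 6.64 critical value.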