Steganalysis by Ensemble Classifiers with Boosting by Regression, and Post-Selection of Features



  1. Steganalysis by Ensemble Classifiers with Boosting by Regression, and Post-Selection of Features. Marc Chaumont, Sarra Kouider, LIRMM, Montpellier, France. October 2, 2012. IEEE International Conference on Image Processing 2012, Sept. 30 - Oct. 3 2012, Orlando, USA.

  2. Preamble. Outline: 1) Preamble; 2) Kodovský's Ensemble Classifiers; 3) Boosting by regression; 4) Post-selection of features; 5) Experiments; 6) Conclusion.

  3. Preamble. Steganography vs. Steganalysis.

  4. Preamble. The proposition: an improvement of a state-of-the-art steganalyzer. The steganalyzer's error probability P_E decreases thanks to boosting by regression (low complexity) and post-selection of features (low complexity).

  5. Kodovský's Ensemble Classifiers. Outline: 1) Preamble; 2) Kodovský's Ensemble Classifiers; 3) Boosting by regression; 4) Post-selection of features; 5) Experiments; 6) Conclusion.

  6. Kodovský's Ensemble Classifiers. Notable properties: appeared during the BOSS challenge (Sept. 2010 - Jan. 2011); performance equivalent to SVMs; scalable with respect to the dimension of the feature vector; low computational complexity; low memory complexity; easily parallelizable. J. Kodovský, J. Fridrich, and V. Holub, "Ensemble classifiers for steganalysis of digital media," IEEE Transactions on Information Forensics and Security, vol. 7, no. 2, pp. 432-444, 2012.

  7. Kodovský's Ensemble Classifiers. Definition of a weak classifier. The ensemble classifier is made of L weak classifiers. Let x ∈ R^d be a feature vector. A weak classifier h_l returns 0 for cover and 1 for stego: h_l : R^d → {0, 1}, x ↦ h_l(x).
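The cited Kodovský et al. paper builds each weak classifier as a Fisher Linear Discriminant (FLD) trained on a random subspace of the features. A minimal Python sketch of such a base learner, with the random-subspace sampling omitted and all names illustrative:

    import numpy as np

    def train_fld(X0, X1):
        # X0: (n0, d) cover feature vectors; X1: (n1, d) stego feature vectors.
        mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
        # Within-class scatter, regularized to stay invertible.
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
        b = w @ (mu0 + mu1) / 2  # threshold at the midpoint of the projected means
        # Returned weak classifier h_l : R^d -> {0, 1} (0 = cover, 1 = stego).
        return lambda x: int(w @ x > b)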

  8. Kodovský's Ensemble Classifiers. How does classification work? 1) Take an image to analyze (i.e., classify it as cover or stego); 2) extract the feature vector x ∈ R^d; 3) classify as cover or stego by majority vote: C(x) = 0 if Σ_{l=1}^{L} h_l(x) ≤ L/2, and 1 otherwise.
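A minimal sketch of this majority vote, assuming the L trained weak classifiers are available as callables mapping a feature vector to {0, 1} (e.g., those returned by train_fld above):

    def classify_majority(weak_classifiers, x):
        # Majority vote: stego (1) only if more than half of the h_l vote stego.
        votes = sum(h(x) for h in weak_classifiers)
        return 0 if votes <= len(weak_classifiers) / 2 else 1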

  9. Boosting by regression. Outline: 1) Preamble; 2) Kodovský's Ensemble Classifiers; 3) Boosting by regression; 4) Post-selection of features; 5) Experiments; 6) Conclusion.

  10. Boosting by regression. Weighting the weak classifiers. The classification (steganalysis) process was: 1) take an image to analyze (i.e., classify it as cover or stego); 2) extract the feature vector x ∈ R^d; 3) classify as cover or stego by majority vote: C(x) = 0 if Σ_{l=1}^{L} h_l(x) ≤ L/2, and 1 otherwise.

  11. Boosting by regression. Weighting the weak classifiers. The classification (steganalysis) process was: 1) take an image to analyze (i.e., classify it as cover or stego); 2) extract the feature vector x ∈ R^d; 3) classify as cover or stego by majority vote: C(x) = 0 if Σ_{l=1}^{L} h_l(x) ≤ L/2, and 1 otherwise. BUT some weak classifiers are less efficient than others. THEN introduce weights!

  12. Boosting by regression. Weighting the weak classifiers. The classification (steganalysis) process is now: 1) take an image to analyze (i.e., classify it as cover or stego); 2) extract the feature vector x ∈ R^d; 3) classify as cover or stego by weighted vote: C(x) = 0 if Σ_{l=1}^{L} α_l h_l(x) ≤ (Σ_{l=1}^{L} α_l) / 2, and 1 otherwise.

  13. Boosting by regression. Weighting the weak classifiers. The classification (steganalysis) process is now: 1) take an image to analyze (i.e., classify it as cover or stego); 2) extract the feature vector x ∈ R^d; 3) classify as cover or stego by weighted vote: C(x) = 0 if Σ_{l=1}^{L} α_l h_l(x) ≤ (Σ_{l=1}^{L} α_l) / 2, and 1 otherwise. How can those weights be computed with a small computational complexity?
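The weighted variant, sketched under the same assumptions as above; alpha is the vector of weights α_l whose computation the next slide addresses:

    import numpy as np

    def classify_weighted(weak_classifiers, alpha, x):
        # Weighted vote: compare the weighted sum of votes to half the total weight.
        score = sum(a * h(x) for a, h in zip(alpha, weak_classifiers))
        return 0 if score <= np.sum(alpha) / 2 else 1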

  14. Boosting by regression. Analytic expression of the weights. During the learning step: {α_l} = argmin_{{α_l}} P_E. Simplifying the P_E expression turns this into a least-squares problem, i.e. a linear system A·X = B with X the weights, where A_{i,j} = Σ_{n=1}^{N} h_i(x_n) h_j(x_n) and B_i = Σ_{n=1}^{N} h_i(x_n) y_n, solved thanks to a library of linear algebra. The order of complexity is unchanged.
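A numpy sketch of this step, assuming the weak-classifier votes on the N training samples have been collected into a matrix H (an illustrative name) with H[n, l] = h_l(x_n) and y holding the labels:

    import numpy as np

    def regression_weights(H, y):
        # A[i, j] = sum_n h_i(x_n) h_j(x_n)  and  B[i] = sum_n h_i(x_n) y_n,
        # i.e. A = H^T H and B = H^T y for H of shape (N, L), y of shape (N,).
        A = H.T @ H
        B = H.T @ y
        # lstsq handles a singular A more gracefully than solve().
        alpha, *_ = np.linalg.lstsq(A, B, rcond=None)
        return alpha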

  15. Post-selection of features. Outline: 1) Preamble; 2) Kodovský's Ensemble Classifiers; 3) Boosting by regression; 4) Post-selection of features; 5) Experiments; 6) Conclusion.

  16. Post-selection of features. Reducing the dimension with few computations. Remember, the classification (steganalysis) process is now: 1) take an image to analyze (i.e., classify it as cover or stego); 2) extract the feature vector x ∈ R^d; 3) classify as cover or stego by weighted vote: C(x) = 0 if Σ_{l=1}^{L} α_l h_l(x) ≤ (Σ_{l=1}^{L} α_l) / 2, and 1 otherwise. Selection of features: pre-selection may cost a lot. What about post-selection?

  17. Post-selection of features. Once a weak classifier is learned, suppress the features so as to reduce P_E. Algorithm (see the sketch below): 1) compute a score for each feature (first database reading); 2) define an order of selection of the features; 3) find the best subset (lowest P_E; second database reading). The order of complexity is unchanged.
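A sketch of the three steps, with an illustrative scoring vector (e.g., one of the annex metrics c1..c5) and a hypothetical helper predict_with_subset that re-evaluates the weak classifier restricted to a set of feature indices; the exact selection strategy of the paper is not reproduced here:

    import numpy as np

    def post_select(scores, X_val, y_val, predict_with_subset):
        order = np.argsort(scores)[::-1]  # step 2: best-scored features first
        best_pe, best_keep = np.inf, order
        for k in range(1, len(order) + 1):  # step 3: scan prefixes of the ranking
            keep = order[:k]
            pe = np.mean(predict_with_subset(X_val, keep) != y_val)
            if pe < best_pe:
                best_pe, best_keep = pe, keep
        return best_keep, best_pe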

  18. Experiments. Outline: 1) Preamble; 2) Kodovský's Ensemble Classifiers; 3) Boosting by regression; 4) Post-selection of features; 5) Experiments; 6) Conclusion.

  19. Experiments. Experimental conditions: 10,000 greyscale images (512 × 512, BOSS database); the same 10,000 images embedded at 0.4 bpp with HUGO; feature vector of dimension d = 5330 (HOLMES subset); 5 different splits, 5 different seeds. HUGO: T. Pevný, T. Filler, and P. Bas, "Using High-Dimensional Image Models to Perform Highly Undetectable Steganography," in Information Hiding, IH'2010. HOLMES: J. Fridrich, J. Kodovský, V. Holub, and M. Goljan, "Steganalysis of Content-Adaptive Steganography in Spatial Domain," in Information Hiding, IH'2011.

  20. Experiments. Steganalysis results.

  21. Experiments. Steganalysis results: recall increase = 1.7%; same order of computational complexity.

  22. Conclusion. Outline: 1) Preamble; 2) Kodovský's Ensemble Classifiers; 3) Boosting by regression; 4) Post-selection of features; 5) Experiments; 6) Conclusion.

  23. Conclusion. Summary: two propositions for the Kodovský steganalyzer, boosting by regression and post-selection of features. Significant recall increase (1.7%); no change in the order of computational complexity.

  24. Conclusion. Annex: Metrics (1). Distance between the two classes: c_1^(l)[j] = |μ_1[j] − μ_0[j]| / √(σ_1²[j] + σ_0²[j]). Influence of a feature on the final correlation/decision (= dot product) used to classify: c_2^(l)[j] = Σ_{i=1}^{N} count(x_i^(l)[j], w^(l)[j], y_i), with count(x, w, y) = 1 if [(x·w > 0 and y = 1) or (x·w < 0 and y = 0)], and 0 otherwise; and c_3^(l)[j] = Σ_{i=1}^{N} [ count(x_i^(l)[j], w^(l)[j], y_i) / Σ_{k=1}^{d_red} count(x_i^(l)[k], w^(l)[k], y_i) ].
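A sketch of how these three scores might be computed for one weak classifier, where X is the (N, d_red) matrix of its subspace features, w its projection vector, and y the labels in {0, 1} (names illustrative, not from the paper):

    import numpy as np

    def annex_scores_1(X, w, y):
        mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        v0, v1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
        c1 = np.abs(mu1 - mu0) / np.sqrt(v1 + v0)  # class separation per feature
        contrib = X * w  # per-feature contribution x_i[j] * w[j] to the dot product
        # count() is 1 when the signed contribution agrees with the label.
        agree = ((contrib > 0) & (y[:, None] == 1)) | ((contrib < 0) & (y[:, None] == 0))
        c2 = agree.sum(axis=0)
        # c3 normalizes each sample's agreement by its total over the d_red features.
        totals = np.maximum(agree.sum(axis=1, keepdims=True), 1)
        c3 = (agree / totals).sum(axis=0)
        return c1, c2, c3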

  25. Conclusion. Annex: Metrics (2). Feature correlation with the class: c_4^(l)[j] = corr(x^(l)[j], y) = Σ_{i=1}^{N} (x_i^(l)[j] − x̄^(l)[j])(y_i − ȳ) / [ √(Σ_{i=1}^{N} (x_i^(l)[j] − x̄^(l)[j])²) · √(Σ_{i=1}^{N} (y_i − ȳ)²) ]. Feature correlation with the weak classifier: c_5^(l)[j] = corr(x^(l)[j] · w^(l)[j], y).
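A sketch of both correlation scores with the same illustrative names as above (corr is the Pearson correlation coefficient):

    import numpy as np

    def annex_scores_2(X, w, y):
        c4 = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
        c5 = np.array([np.corrcoef(X[:, j] * w[j], y)[0, 1] for j in range(X.shape[1])])
        return c4, c5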

  26. Conclusion. Annex: P_E in the Boosting by Regression. During the learning step: {α_l} = argmin_{{α_l}} P_E, with P_E = (1/N) Σ_{i=1}^{N} | f(Σ_{l=1}^{L} α_l h_l(x_i)) − y_i |, where f is the thresholding function f : R → {0, 1}, f(x) = 0 if x ≤ (Σ_{l=1}^{L} α_l) / 2, and 1 otherwise. Let's simplify P_E: P_E ≈ (1/N) Σ_{i=1}^{N} (Σ_{l=1}^{L} α_l h_l(x_i) − y_i)², a least-squares problem, solved thanks to a library of linear algebra.
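A sketch contrasting the exact P_E with its least-squares surrogate, reusing the illustrative vote matrix H and labels y from the earlier sketch:

    import numpy as np

    def exact_pe(H, y, alpha):
        # Threshold the weighted vote at half the total weight, then count errors.
        pred = (H @ alpha > alpha.sum() / 2).astype(int)
        return np.mean(np.abs(pred - y))

    def surrogate_pe(H, y, alpha):
        # Squared error whose minimizer gives the analytic weights above.
        return np.mean((H @ alpha - y) ** 2)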
