Feature Bagging for Author Attribution – PAN-CLEF 2012


  1. Feature Bagging for Author Attribution, PAN-CLEF 2012. François-Marie Giraud / Thierry Artières, LIP6 – University Paris 6, France

  2. Motivation
  • From the literature on author attribution: it is hard to beat a simple and efficient system, a linear SVM on a bag of features.
  • Hypothetical explanations:
    – Intrinsic difficulty of defining relevant stylistic features: individual stylistic features are embedded and hidden among a large number of features, and stylistic features depend on the writer.
    – An optimization concern: the undertraining phenomenon [McCallum et al., CIIR 2005].

  3. Motivation • Undertraining phenomenon. [Figure: a training document set represented as a bag of features (words sorted from most to least frequent).]

  4. Motivation • Undertraining phenomenon. [Figure: the same bag-of-features view with colored subsets.] The red subset of features alone allows perfect discrimination of the training set by a linear SVM; the blue subset alone does as well; the green subset is useless. The learned discrimination ends up based on the red features only.

  5. Motivation • Undertraining phenomenon. [Figure: a test document containing no RED features, so the decisive inputs of the linear SVM are all zero.] The result is a near-random prediction.
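This effect is easy to reproduce. Below is a minimal, hypothetical sketch (assuming scikit-learn; the synthetic data and the "red"/"blue" feature roles are our own illustration, not the authors' corpus): a linear SVM concentrates its weight on one perfectly discriminative feature and becomes unreliable on a test document from which that feature is absent.

```python
# Illustrative sketch of the undertraining phenomenon (synthetic data,
# not the authors' setup). Feature 0 ("red") separates the two authors
# perfectly, so the linear SVM leans on it and largely ignores the rest.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                         # two authors

X = rng.poisson(1.0, size=(n, 5)).astype(float)   # 5 word-count features
X[:, 0] = 3.0 * y                                 # "red": perfect separator
X[:, 1] = 2.0 * y + rng.poisson(0.5, n)           # "blue": also informative

clf = LinearSVC().fit(X, y)
print("weights:", clf.coef_.round(2))             # weight mass sits on feature 0

test_doc = np.array([[0.0, 2.0, 1.0, 1.0, 0.0]])  # contains no "red" feature
print("prediction:", clf.predict(test_doc))       # unreliable: decisive feature missing
```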

  6. Undertraining investigation. Document: bag of 2500 features (words sorted from most to least frequent). [Plot: training and validation accuracy when keeping only the first X features.]

  7. Undertraining investigation. Document: bag of 2500 features (words sorted from most to least frequent). [Plot: training and validation accuracy when keeping all but the first X features.]

  8. Undertraining investigation. Document: bag of 2500 features (words sorted from most to least frequent). [Plot: training and validation accuracy when keeping a random subset of X features.]
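The three plots correspond to one protocol run with three different column selections. A hedged sketch of that protocol (the document-term matrix here is a random placeholder, and the helper name and subset sizes tried are our assumptions):

```python
# Sketch of the investigation: train a linear SVM on the first X most
# frequent features, on all but the first X, and on a random X-sized
# subset, reporting training and validation accuracy for each.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def subset_accuracy(X, y, columns):
    """Train on the given feature columns; return (train_acc, valid_acc)."""
    X_tr, X_va, y_tr, y_va = train_test_split(
        X[:, columns], y, test_size=0.3, random_state=0)
    clf = LinearSVC().fit(X_tr, y_tr)
    return clf.score(X_tr, y_tr), clf.score(X_va, y_va)

# Placeholder document-term matrix: 2500 columns sorted by frequency.
rng = np.random.default_rng(0)
X = rng.poisson(0.5, size=(300, 2500)).astype(float)
y = rng.integers(0, 3, 300)                       # three authors

for size in (100, 500, 1000):
    first    = subset_accuracy(X, y, np.arange(size))
    all_but  = subset_accuracy(X, y, np.arange(size, 2500))
    random_x = subset_accuracy(X, y, rng.choice(2500, size, replace=False))
    print(size, first, all_but, random_x)
```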

  9. Principle of feature bagging. [Diagram: a document is mapped to a bag of words over the ~3000 most frequent words; K base classifiers are learned, each on a random selection of 50 to 200 features; the base classifiers' results are aggregated by majority vote to predict the author.]
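A compact sketch of this pipeline in scikit-learn style (the class and method names are ours; the slide fixes only the ingredients: ~3000 most frequent words, random subsets of 50 to 200 features, K base classifiers, majority vote):

```python
# Hedged sketch of the feature-bagging pipeline from this slide.
import numpy as np
from sklearn.svm import LinearSVC

class FeatureBagging:
    """K linear SVMs, each on a random subset of 50-200 columns of the
    bag-of-words matrix; the final author is chosen by majority vote.
    Assumes author labels are non-negative integers (for np.bincount)."""

    def __init__(self, n_estimators=100, min_feats=50, max_feats=200, seed=0):
        self.K = n_estimators
        self.min_feats, self.max_feats = min_feats, max_feats
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.models = []
        for _ in range(self.K):
            size = self.rng.integers(self.min_feats, self.max_feats + 1)
            cols = self.rng.choice(X.shape[1], size=size, replace=False)
            self.models.append((cols, LinearSVC().fit(X[:, cols], y)))
        return self

    def predict(self, X):
        # Rows: one prediction per base classifier; columns: documents.
        votes = np.stack([clf.predict(X[:, cols]) for cols, clf in self.models])
        return np.apply_along_axis(
            lambda v: np.bincount(v).argmax(), axis=0, arr=votes)
```

Usage would look like `FeatureBagging(n_estimators=100).fit(X_train, y_train).predict(X_test)`, with the columns of X corresponding to the ~3000 most frequent words.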

  10. Preliminary results on a publicly available English blog corpus: statistics on the base classifiers and a comparison with the baseline.

  11. Experimental methodology for PAN. [Diagram: learning stage; each author's training documents are split into two halves (A1/A2, B1/B2, C1/C2), one half used for training and the other for validation.]

  12. Experimental methodology for PAN. [Diagram: several train/validation splits are built by swapping, per author, which half (A1 or A2, B1 or B2, C1 or C2) goes to training and which to validation.]
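One possible reading of these diagrams in code form (a sketch assuming each author's documents are simply halved; the helper name is ours):

```python
# Hedged sketch of the split scheme suggested by slides 11-12: each
# author's training documents are cut into two halves, and train/
# validation splits are obtained by swapping which half goes where.
import random

def half_splits(docs_by_author, seed=0):
    """Yield (train, valid) pairs; each side holds one half per author."""
    rng = random.Random(seed)
    halves = {}
    for author, docs in docs_by_author.items():
        docs = list(docs)
        rng.shuffle(docs)
        mid = len(docs) // 2
        halves[author] = (docs[:mid], docs[mid:])
    for flip in (False, True):            # the basic split and its swap
        train, valid = [], []
        for first, second in halves.values():
            train += second if flip else first
            valid += first if flip else second
        yield train, valid
```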

  13. Comments on PAN results
  • Fewer random features per subset work better.
  • Better ranks on closed tasks.
  • The reject method has to be improved.
  • It is worthwhile to use several training/validation splits.

  14. Perspective: a two-stage approach
  • Motivation: the way the classifier behaves when features are removed depends on the author (author profiles for the unmasking method [Koppel 2007]).
  • Investigate mixing this result with our feature bagging approach.

  15. Two-stage approach
  1. Bagging approach: learn multiple base classifiers exploiting randomly selected subsets of features. This yields a profile vector for each document d and author a.
  2. Build a new dataset of these profiles, one per (document, author) pair.
  3. (Optional) sort the components of each profile vector by value.
  4. Learn a binary classifier to decide whether a profile is correct or not. (A sketch of these steps follows below.)
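A sketch of the four steps, reusing the hypothetical `FeatureBagging` object from the earlier sketch (the function names, the use of SVM decision scores as profile components, and the assumption of more than two candidate authors are ours):

```python
# Hedged sketch of the two-stage approach; profile components are taken
# to be the base classifiers' decision scores for the candidate author.
import numpy as np
from sklearn.svm import LinearSVC

def profile_vector(bagging, x, author):
    """Steps 1-2: one component per base classifier, its score for
    `author` on document x (>2 authors, so decision_function is 2-D)."""
    scores = []
    for cols, clf in bagging.models:
        s = clf.decision_function(x[None, :][:, cols])
        scores.append(s[0, list(clf.classes_).index(author)])
    return np.array(scores)

def build_profiles(bagging, X, y, authors):
    """Steps 2-3: (sorted) profiles labeled 1 for the true author, else 0."""
    P, labels = [], []
    for x, true_author in zip(X, y):
        for a in authors:
            P.append(np.sort(profile_vector(bagging, x, a)))   # step 3
            labels.append(int(a == true_author))
    return np.array(P), np.array(labels)

# Step 4: a binary "verifier" decides whether a profile is correct.
# verifier = LinearSVC().fit(*build_profiles(bagging, X_train, y_train, authors))
```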

  16. Two-stage approach. [Plot: profile value per feature, for true-author (sorted) profiles vs. false-author profiles.] Same steps 1-4 as on the previous slide. This approach gives results similar to the bagging approach.

  17. Conclusion and further works
  • The feature bagging approach enforces the exploitation of all features ⇒ it outperforms the SVM baseline ⇒ it should be improved for handling open problems (cf. PAN results).
  • The second approach gives similar results while using a different representation ⇒ the two should be combined.

  18. ANY QUESTIONS?

  19. Additional results on PAN
