Text classification I (Naïve Bayes)
CE-324: Modern Information Retrieval
Sharif University of Technology
M. Soleymani, Spring 2020
Most slides have been adapted from: Profs. Manning, Nayak & Raghavan (CS-276, Stanford)
Outline
} Text classification
  } definition
  } relevance to information retrieval
} Naïve Bayes classifier
Formal definition of text classification
} Document space $\mathbb{X}$
  } Docs are represented in this (typically high-dimensional) space
} Set of classes $\mathbb{C} = \{c_1, \ldots, c_J\}$
  } Example: $\mathbb{C} = \{\text{spam}, \text{non-spam}\}$
} Training set: a set of labeled docs. Each labeled doc $\langle d, c \rangle \in \mathbb{X} \times \mathbb{C}$
} Using a learning method, we find a classifier $\gamma$ that maps docs to classes: $\gamma : \mathbb{X} \to \mathbb{C}$
Examples of using classification in IR systems
} Language identification (classes: English vs. French, etc.)
} Automatic detection of spam pages (spam vs. non-spam)
} Automatic detection of secure pages for safe search
} Topic-specific or vertical search: restrict search to a "vertical" like "related to health" (relevant to vertical vs. not)
} Sentiment detection: is a movie or product review positive or negative (positive vs. negative)
} Exercise: find more examples of uses of text classification in IR
Standing queries (Ch. 13)
} The path from IR to text classification:
  } You have an information need to monitor, say: unrest in the Niger delta region
  } You want to rerun an appropriate query periodically to find new news items on this topic
  } You will be sent new documents that are found
  } I.e., it's not ranking but classification (relevant vs. not relevant)
} Such queries are called standing queries
  } Long used by "information professionals"
  } A modern mass instantiation is Google Alerts
Spam filtering: another text classification task
From: "" <takworlld@hotmail.com>
Subject: real estate is the only way... gem oalvgkay
Anyone can buy real estate with no money down
Stop paying rent TODAY!
There is no need to spend hundreds or even thousands for similar courses
I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook.
Change your life NOW!
=================================================
Click Below to order:
http://www.wholesaledaily.com/sales/nmd.htm
=================================================
Categorization/Classification
} Given:
  } A representation of a document d
    } Issue: how to represent text documents; usually some type of high-dimensional space (bag of words)
  } A fixed set of classes: $C = \{c_1, c_2, \ldots, c_J\}$
} Determine:
  } The category of d: $\gamma(d) \in C$
  } $\gamma(d)$ is a classification function
} We want to build classification functions ("classifiers").
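Since the slides settle on a bag-of-words representation, here is a minimal sketch of turning a document into that representation (the function name and tokenization choice are illustrative, not from the slides):

```python
from collections import Counter

def bag_of_words(text):
    """Represent a document as term -> count, ignoring word order."""
    tokens = text.lower().split()   # naive whitespace tokenization
    return Counter(tokens)          # sparse point in a high-dimensional space

d = "Chinese Chinese Chinese Tokyo Japan"
print(bag_of_words(d))  # Counter({'chinese': 3, 'tokyo': 1, 'japan': 1})
```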
Classification Methods (1)
} Manual classification
  } Used by the original Yahoo! Directory
  } Looksmart, about.com, ODP, PubMed
  } Accurate when the job is done by experts
  } Consistent when the problem size and team are small
  } Difficult and expensive to scale
    } Means we need automatic classification methods for big problems
Classification Methods (2)
} Hand-coded rule-based classifiers
  } One technique used by news agencies, intelligence agencies, etc.
  } Widely deployed in government and enterprise
  } Vendors provide "IDEs" for writing such rules
} Issues:
  } Commercial systems have complex query languages
  } Accuracy can be high if a rule has been carefully refined over time by a subject expert
  } Building and maintaining these rules is expensive
Classification Methods (3): Supervised learning
} Given:
  } A document d
  } A fixed set of classes: $C = \{c_1, c_2, \ldots, c_J\}$
  } A training set D of documents, each with a label in C
} Determine:
  } A learning method or algorithm which will enable us to learn a classifier $\gamma$
  } For a test document d, we assign it the class $\gamma(d) \in C$
Bayes classifier
} A Bayesian classifier is a probabilistic classifier:
$$c_{map} = \arg\max_{c_j \in \mathbb{C}} P(c_j \mid d) = \arg\max_{c_j \in \mathbb{C}} P(d \mid c_j)\, P(c_j)$$
} $d = \langle t_1, \ldots, t_{n_d} \rangle$
} There are too many parameters $P(\langle t_1, \ldots, t_{n_d} \rangle \mid c_j)$
  } One for each unique combination of a class and a sequence of words
  } We would need a very, very large number of training examples to estimate that many parameters.
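To make the blow-up concrete (the numbers are illustrative, not from the slides): with vocabulary $V$ and documents of length $n_d$, the full sequence model needs one parameter per class for each of the $|V|^{n_d}$ possible term sequences, e.g.
$$|V| = 10^4,\quad n_d = 100 \;\Rightarrow\; |V|^{n_d} = 10^{400} \text{ parameters per class.}$$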
Naïve Bayes assumption
} Naïve Bayes (conditional independence) assumption:
$$P(d \mid c_j) = P(\langle t_1, \ldots, t_{n_d} \rangle \mid c_j) = \prod_{i=1}^{n_d} P(t_i \mid c_j)$$
} $n_d$: length of doc d (number of tokens)
} $P(t_i \mid c_j)$: probability of term $t_i$ occurring in a doc of class $c_j$
} $P(c_j)$: prior probability of class $c_j$
} Equivalent to (language model view):
$$P(d \mid c_j) = \prod_{t \in V} P(t \mid c_j)^{\mathrm{tf}_{t,d}}$$
Naive Bayes classifier
$$c_{map} = \arg\max_{c_j} P(d \mid c_j)\, P(c_j) = \arg\max_{c_j} P(c_j) \prod_{i=1}^{n_d} P(t_i \mid c_j)$$
} Since log is a monotonic function, the class with the highest score does not change (log(xy) = log(x) + log(y)):
$$c_{map} = \arg\max_{c_j} \Big[ \log P(c_j) + \sum_{i=1}^{n_d} \log P(t_i \mid c_j) \Big]$$
} $\log P(t_i \mid c_j)$: a weight that indicates how good an indicator $t_i$ is for $c_j$
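A minimal sketch of the log-space scoring rule above; the probability values are toy placeholders, not from the slides. Working in log space also avoids floating-point underflow when many small probabilities are multiplied:

```python
import math

def nb_score(doc_tokens, log_prior, log_cond):
    """log P(c) + sum of log P(t_i | c) over the doc's tokens, for one class."""
    return log_prior + sum(log_cond[t] for t in doc_tokens)

# Toy numbers (illustrative only): score "buy now" under two classes
log_cond_spam = {"buy": math.log(0.05),  "now": math.log(0.04)}
log_cond_ham  = {"buy": math.log(0.005), "now": math.log(0.01)}
print(nb_score(["buy", "now"], math.log(0.4), log_cond_spam))  # ~ -7.1
print(nb_score(["buy", "now"], math.log(0.6), log_cond_ham))   # ~ -10.4, so spam wins
```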
Estimating parameters
} Estimate $\hat{P}(c_j)$ and $\hat{P}(t_i \mid c_j)$ from training data
} $N_j$: number of docs in class $c_j$ (out of $N$ training docs)
} $T_{i,j}$: number of occurrences of $t_i$ in training docs from class $c_j$ (counting multiple occurrences)
$$\hat{P}(c_j) = \frac{N_j}{N} \qquad\qquad \hat{P}(t_i \mid c_j) = \frac{T_{i,j}}{\sum_{i'} T_{i',j}}$$
Problem with estimates: Zeros
} d: "Beijing and Taipei join WTO"
} $\hat{P}(\text{WTO} \mid \text{China}) = 0$
Problem with estimates: Zeros
} For a doc d containing a term t that does not occur in any training doc of a class c, $\hat{P}(c \mid d) = 0$
  } Thus d cannot be assigned to class c
} Fix: add-one (Laplace) smoothing. Instead of
$$\hat{P}(t \mid c) = \frac{T_{ct}}{\sum_{t' \in V} T_{ct'}}$$
we use
$$\hat{P}(t \mid c) = \frac{T_{ct} + 1}{\sum_{t' \in V} (T_{ct'} + 1)}$$
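A small sketch of computing the add-one-smoothed estimates above from raw counts (function and variable names are mine, not the slides'):

```python
import math
from collections import Counter, defaultdict

def estimate_parameters(training_docs, vocab):
    """training_docs: list of (tokens, class_label) pairs.
    Returns log priors and add-one-smoothed log conditionals log P(t | c)."""
    n_docs = Counter()                   # N_j: number of docs per class
    term_counts = defaultdict(Counter)   # T_{i,j}: term counts per class
    for tokens, c in training_docs:
        n_docs[c] += 1
        term_counts[c].update(tokens)
    total = sum(n_docs.values())
    log_priors, log_conds = {}, {}
    for c in n_docs:
        log_priors[c] = math.log(n_docs[c] / total)
        denom = sum(term_counts[c].values()) + len(vocab)   # add-one smoothing
        log_conds[c] = {t: math.log((term_counts[c][t] + 1) / denom) for t in vocab}
    return log_priors, log_conds
```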
Naïve Bayes: summary
} Estimate parameters from the training corpus using add-one smoothing
} For a new doc $d = \langle t_1, \ldots, t_{n_d} \rangle$, compute for each class:
$$\log \hat{P}(c_j) + \sum_{i=1}^{n_d} \log \hat{P}(t_i \mid c_j)$$
} Assign doc d to the class with the largest score
Naïve Bayes: example
} Training phase: estimate the parameters of the Naive Bayes classifier
} Test phase: classify the test doc
Naïve Bayes: example (c = China)
} Estimating parameters:
  } $\hat{P}(c) = 3/4$, $\hat{P}(\bar{c}) = 1/4$
  } $\hat{P}(\text{Chinese} \mid c) = (5+1)/(8+6) = 6/14 = 3/7$, $\hat{P}(\text{Chinese} \mid \bar{c}) = (1+1)/(3+6) = 2/9$
  } $\hat{P}(\text{Tokyo} \mid c) = (0+1)/(8+6) = 1/14$, $\hat{P}(\text{Tokyo} \mid \bar{c}) = (1+1)/(3+6) = 2/9$
  } $\hat{P}(\text{Japan} \mid c) = (0+1)/(8+6) = 1/14$, $\hat{P}(\text{Japan} \mid \bar{c}) = (1+1)/(3+6) = 2/9$
} Classifying the test doc:
  } $\hat{P}(c \mid d) \propto 3/4 \times (3/7)^3 \times 1/14 \times 1/14 \approx 0.0003$
  } $\hat{P}(\bar{c} \mid d) \propto 1/4 \times (2/9)^3 \times 2/9 \times 2/9 \approx 0.0001$
  } So the doc is assigned to class c (China)
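A quick check of these numbers in code. The training and test documents themselves are not spelled out in this text; the sketch below assumes the standard example from IIR Ch. 13 (Table 13.1), which is consistent with the counts used above:

```python
from fractions import Fraction as F

train = [
    ("Chinese Beijing Chinese".split(),  "China"),
    ("Chinese Chinese Shanghai".split(), "China"),
    ("Chinese Macao".split(),            "China"),
    ("Tokyo Japan Chinese".split(),      "not-China"),
]
test = "Chinese Chinese Chinese Tokyo Japan".split()
vocab = {t for tokens, _ in train for t in tokens}   # 6 distinct terms

for c in ("China", "not-China"):
    docs = [tokens for tokens, label in train if label == c]
    prior = F(len(docs), len(train))
    tokens_c = [t for tokens in docs for t in tokens]
    denom = len(tokens_c) + len(vocab)                # add-one smoothing
    score = prior
    for t in test:
        score *= F(tokens_c.count(t) + 1, denom)
    print(c, float(score))   # China ~0.0003, not-China ~0.0001
```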
Naïve Bayes: training
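The training pseudocode from the original slide is not reproduced in this text. The following is a minimal Python sketch of multinomial Naive Bayes training with add-one smoothing, reusing estimate_parameters from the earlier sketch (names are mine, not from the slide):

```python
def train_multinomial_nb(training_docs):
    """training_docs: list of (tokens, class_label) pairs.
    Returns (vocab, log_priors, log_conds) for use at test time."""
    vocab = {t for tokens, _ in training_docs for t in tokens}
    log_priors, log_conds = estimate_parameters(training_docs, vocab)
    return vocab, log_priors, log_conds
```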
Naïve Bayes: test
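Likewise, the slide's test-time pseudocode is missing here. A minimal sketch of applying the trained model to a new document (dropping out-of-vocabulary terms is one common convention; names are mine):

```python
def apply_multinomial_nb(doc_tokens, vocab, log_priors, log_conds):
    """Return the class maximizing log P(c) + sum of log P(t | c) over the doc."""
    tokens = [t for t in doc_tokens if t in vocab]   # ignore unseen terms
    best_class, best_score = None, float("-inf")
    for c, log_prior in log_priors.items():
        score = log_prior + sum(log_conds[c][t] for t in tokens)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Usage, assuming the example training set from the previous sketches:
# vocab, log_priors, log_conds = train_multinomial_nb(train)
# apply_multinomial_nb("Chinese Chinese Chinese Tokyo Japan".split(),
#                      vocab, log_priors, log_conds)   # -> "China"
```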
Time complexity of Naive Bayes
} Training: $\Theta(|\mathbb{D}|\, L_{ave} + |\mathbb{C}||V|)$; testing: $\Theta(L_a + |\mathbb{C}|\, M_a) = \Theta(|\mathbb{C}|\, M_a)$
} Generally: $|\mathbb{C}||V| < |\mathbb{D}|\, L_{ave}$
} $\mathbb{D}$: training set, $V$: vocabulary, $\mathbb{C}$: set of classes
} $L_{ave}$: average length of a training doc
} $L_a$: length of the test doc
} $M_a$: number of distinct terms in the test doc
} Thus: Naive Bayes is linear in the size of the training set (training) and the test doc (testing).
} This is optimal time.
Why does Naive Bayes work?
} The independence assumptions do not really hold for docs written in natural language.
} Naive Bayes can work well even though these assumptions are badly violated.
} Classification is about predicting the correct class, not about accurately estimating probabilities.
  } Naive Bayes is terrible at correct probability estimation...
  } but it often performs well at choosing the correct class.
Naive Bayes is not so naive
} Naive Bayes has won some bakeoffs (e.g., KDD-CUP 97)
} A good dependable baseline for text classification (but not the best)
} Optimal if independence assumptions hold (never true for text, but true for some domains)
} More robust to non-relevant features than some more complex learning methods
} More robust to concept drift (changing definition of a class over time) than some more complex learning methods
} Very fast
} Low storage requirements
Resources
} Chapter 13 of IIR