Informatics 1: Data & Analysis
Lecture 13: Annotation of Corpora
Ian Stark, School of Informatics, The University of Edinburgh
Friday 8 March 2013 (Semester 2, Week 7)
http://www.inf.ed.ac.uk/teaching/courses/inf1/da
Lecture Plan

XML
We start with technologies for modelling and querying semistructured data.
  Semistructured data: trees and XML
  Schemas for structuring XML
  Navigating and querying XML with XPath

Corpora
One particular kind of semistructured data is large bodies of written or spoken text: each one a corpus, plural corpora.
  Corpora: what they are and how to build them
  Applications: corpus analysis and data extraction
Homework

Coursework Assignment
The coursework assignment is now online. This runs alongside your usual tutorial exercises for the next two weeks; ask tutors for help with it where you have specific questions. The assignment is an Inf1-DA examination paper from 2011. Your tutor will give you marks and feedback on your work in the last tutorial of the semester.

Reading
T. McEnery and A. Wilson. Corpus Linguistics. Second edition, Edinburgh University Press, 2001.
Chapter 2: What is a corpus and what is in it? (§2.2.2 optional)
Still available from the ITO.
Corpus Annotation

Last lecture introduced the preprocessing steps of identifying tokens and sentence boundaries. Now we look to add further information to the data.

Annotation adds information to the corpus that is not explicit in the data itself. This is often specific to a particular application, and a single corpus may be annotated in multiple ways.

An annotation scheme is a basis for annotation, made up of a tag set and annotation guidelines.
  A tag set is an inventory of labels for markup.
  Annotation guidelines tell annotators (domain experts) how a tag set should be applied. In particular, this is to ensure consistency across different annotators.
Part-of-Speech (POS) Annotation

Tagging by part-of-speech (POS) is the most basic kind of linguistic annotation. Each token is assigned a code indicating its part of speech.

This might be a very simple classification:
  Noun (“claw”, “hyphen”);
  Adjective (“red”, “small”);
  Verb (“encourage”, “betray”).

Or it could be more refined:
  Singular common noun (“elephant”, “table”);
  Comparative adjective (“larger”, “neater”);
  Past participle (“listened”, “written”).

Even simple POS tagging can, for example, disambiguate some homographs like “boot” (verb) and “boot” (noun).
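As an illustration (not from the lecture itself), a POS-annotated sentence can be represented as a simple list of (token, tag) pairs; the two invented sentences below use Penn-style tags and show the two readings of “boot” receiving different tags.

# A minimal sketch of POS annotation as (token, tag) pairs.
# Penn-style tags are assumed and the example sentences are invented.
boot_as_verb = [("They", "PRP"), ("boot", "VBP"), ("the", "DT"), ("machine", "NN")]
boot_as_noun = [("The", "DT"), ("boot", "NN"), ("is", "VBZ"), ("muddy", "JJ")]

# The same word form carries a different tag in each sentence, so even
# this simple annotation distinguishes the verb and noun homographs.
for sentence in (boot_as_verb, boot_as_noun):
    print(" ".join(f"{word}/{tag}" for word, tag in sentence))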
Example POS Tag Sets

CLAWS tag set (used for the BNC): 62 tags (Constituent Likelihood Automatic Word-tagging System)
Brown tag set (used for the Brown corpus): 87 tags
Penn tag set (used for the Penn Treebank): 45 tags

  Category               Examples             CLAWS5   Brown   Penn
  Adjective              happy, bad           AJ0      JJ      JJ
  Adverb                 often, badly         AV0      RB      RB
  Determiner             this, each           DT0      DT      DT
  Noun                   aircraft, data       NN0      NN      NN
  Noun singular          goose, book          NN1      NN      NN
  Noun plural            geese, books         NN2      NNS     NNS
  Noun proper singular   London, Michael      NP0      NP      NNP
  Noun proper plural     Greeks, Methodists   NP0      NPS     NNPS
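As an aside not in the original slides, the correspondence in the table can be recorded as a small lookup structure; a minimal Python sketch, assuming the tag values shown above:

# Sketch: category -> (CLAWS5, Brown, Penn), taken from the table above.
TAG_TABLE = {
    "adjective":            ("AJ0", "JJ",  "JJ"),
    "adverb":               ("AV0", "RB",  "RB"),
    "determiner":           ("DT0", "DT",  "DT"),
    "noun":                 ("NN0", "NN",  "NN"),
    "noun singular":        ("NN1", "NN",  "NN"),
    "noun plural":          ("NN2", "NNS", "NNS"),
    "noun proper singular": ("NP0", "NP",  "NNP"),
    "noun proper plural":   ("NP0", "NPS", "NNPS"),
}

claws5, brown, penn = TAG_TABLE["noun plural"]
print(claws5, brown, penn)   # NN2 NNS NNS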
POS Tagging

Idea: Tag parts of speech by looking up words in a dictionary.
Problem: Ambiguity: words can carry several possible POS.

  Time flies like an arrow (1) / Fruit flies like a banana (2)

  time: singular noun or a verb;
  flies: plural noun or a verb;
  like: singular noun, verb, preposition.

Combinatorial explosion: 2 × 2 × 3 = 12 POS sequences for (1).

To resolve this kind of ambiguity, we need more information. One route would be to investigate the meaning of words and sentences — their semantics. Perhaps unexpectedly, it turns out that impressive improvements are possible using only the probabilities of different parts of speech.
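To make the combinatorics concrete, here is a short sketch (not from the lecture) that enumerates the candidate tag sequences for sentence (1); the Penn-style tag names and the assumption that “an” and “arrow” are unambiguous are mine.

from itertools import product

# Candidate tags per word, following the ambiguities listed above.
candidates = {
    "time":  ["NN", "VB"],        # singular noun or verb
    "flies": ["NNS", "VBZ"],      # plural noun or verb
    "like":  ["NN", "VB", "IN"],  # singular noun, verb, or preposition
    "an":    ["DT"],              # assumed unambiguous
    "arrow": ["NN"],              # assumed unambiguous
}

words = ["time", "flies", "like", "an", "arrow"]
sequences = list(product(*(candidates[w] for w in words)))
print(len(sequences))   # 2 * 2 * 3 * 1 * 1 = 12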
Probabilistic POS Tagging

Observation: words can have more than one POS, but one may be more frequent than the others.

Idea: assign each word its most frequent POS (using frequencies from manually annotated training data). Accuracy: around 90%.

Improvement: use frequencies of POS sequences, and other context clues. Accuracy: 96–98%.

Sample POS tagger output

It/PP was/VBD the/DT best/JJS of/IN times/NNS ,/, it/PP was/VBD the/DT worst/JJS of/IN times/NNS ,/,
it/PP was/VBD the/DT age/NN of/IN wisdom/NN ,/, it/PP was/VBD the/DT age/NN of/IN foolishness/NN ,/,
it/PP was/VBD the/DT epoch/NN of/IN belief/NN ,/, it/PP was/VBD the/DT epoch/NN of/IN incredulity/NN ,/,
it/PP was/VBD the/DT season/NN of/IN Light/NP ,/, it/PP was/VBD the/DT season/NN of/IN Darkness/NN
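A minimal Python sketch of the most-frequent-POS baseline described above; the tiny training set is invented for illustration, whereas a real tagger would be trained on a large manually annotated corpus.

from collections import Counter, defaultdict

# Invented training data: (word, tag) pairs from manual annotation.
training = [
    ("time", "NN"), ("flies", "VBZ"), ("like", "IN"), ("an", "DT"), ("arrow", "NN"),
    ("time", "NN"), ("is", "VBZ"), ("short", "JJ"),
    ("fruit", "NN"), ("flies", "NNS"), ("like", "VBP"), ("a", "DT"), ("banana", "NN"),
]

# Count how often each word form occurs with each tag.
counts = defaultdict(Counter)
for word, tag in training:
    counts[word.lower()][tag] += 1

def tag_word(word, default="NN"):
    """Assign a word its most frequent tag; fall back to a default for unseen words."""
    seen = counts.get(word.lower())
    return seen.most_common(1)[0][0] if seen else default

print([(w, tag_word(w)) for w in ["time", "flies", "like", "an", "arrow"]])

The improvement mentioned on the slide would replace tag_word with a model that also scores sequences of tags (for example a hidden Markov model) rather than looking at each word in isolation.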
Data and Metadata

One important application of markup languages like XML is to separate data from metadata:
  Data is the thing itself. In a corpus this is the samples of text.
  Metadata is data about the data. In a corpus this includes information about the source of the text as well as various kinds of annotation.

At present XML is the most widely used markup language for corpora, replacing various others including the Standard Generalized Markup Language (SGML). The example on the next slide is taken from the BNC, which was first released as XML in 2007 (having previously been formatted in SGML).
Example from BNC XML Edition

<wtext type="FICTION">
 <div level="1">
  <head>
   <s n="1">
    <w c5="NN1" hw="chapter" pos="SUBST">CHAPTER </w>
    <w c5="CRD" hw="1" pos="ADJ">1</w>
   </s>
  </head>
  <p>
   <s n="2">
    <c c5="PUQ">’</c>
    <w c5="CJC" hw="but" pos="CONJ">But</w>
    <c c5="PUN">,</c>
    <c c5="PUQ">’</c>
    <w c5="VVD" hw="say" pos="VERB">said </w>
    <w c5="NP0" hw="owen" pos="SUBST">Owen</w>
    <c c5="PUN">,</c>
    <c c5="PUQ">’</c>
    <w c5="AVQ" hw="where" pos="ADV">where </w>
    <w c5="VBZ" hw="be" pos="VERB">is </w>
    <w c5="AT0" hw="the" pos="ART">the </w>
    <w c5="NN1" hw="body" pos="SUBST">body</w>
    <c c5="PUN">?</c>
    <c c5="PUQ">’</c>
   </s>
  </p>
  ....
 </div>
</wtext>
Aspects of BNC Example

This example is the opening words of BNC text J10, which is a novel: The Mamur Zapt and the Girl in the Nile by Michael Pearce.

  The wtext element stands for written text. Its attribute type indicates the kind of text (here FICTION).
  Element head tags a portion of header text (here, a chapter heading).
  The s element tags sentences. Sentences are numbered via the attribute n.
  The w element tags words. The attribute pos is a POS tag, with more detailed POS information given by the c5 attribute, which contains the CLAWS code. Attribute hw represents the head word, lemma or root form of the word (e.g., the root form of “said” is “say”).
  The c element tags punctuation.
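As an illustration beyond the slides, Python's standard xml.etree.ElementTree can pull words and their annotation out of BNC-style markup; the fragment below is a cut-down copy of the example above, using the attributes just described.

import xml.etree.ElementTree as ET

# Cut-down copy of the BNC-style sentence from the example above.
fragment = """
<s n="2">
  <w c5="CJC" hw="but" pos="CONJ">But</w>
  <w c5="VVD" hw="say" pos="VERB">said </w>
  <w c5="NP0" hw="owen" pos="SUBST">Owen</w>
  <w c5="AVQ" hw="where" pos="ADV">where </w>
  <w c5="VBZ" hw="be" pos="VERB">is </w>
  <w c5="AT0" hw="the" pos="ART">the </w>
  <w c5="NN1" hw="body" pos="SUBST">body</w>
</s>
"""

sentence = ET.fromstring(fragment)

# List each word with its head word and CLAWS5 code.
for w in sentence.findall("w"):
    print(f"{w.text.strip():8} hw={w.get('hw'):8} c5={w.get('c5')}")

# ElementTree also supports a restricted form of XPath, e.g. all verbs:
verbs = sentence.findall(".//w[@pos='VERB']")
print([w.get("hw") for w in verbs])   # ['say', 'be']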
Syntactic Annotation

Moving above the level of individual words, parsing and syntactic annotation give information about the structure of sentences.

Linguists use phrase markers to indicate which parts of a sentence belong together:
  noun phrase (NP): noun and its adjectives, determiners, etc.;
  verb phrase (VP): verb and its objects;
  prepositional phrase (PP): preposition and its NP;
  sentence (S): VP and its subject.

Phrase markers group hierarchically into a syntax tree.

Syntactic annotation can be automated. Accuracy: around 90%.
Example Syntax Tree

The following is from the Penn Treebank corpus, shown here as a labelled bracketing of the tree:

  (S (NP (PRP They))
     (VP (VB saw)
         (NP (NP (DT the) (NN president))
             (PP (IN of)
                 (NP (DT the) (NN company))))))
Syntax Tree in XML

Here is the same syntax tree expressed in XML:

<s>
 <np><w pos="PRP">They</w></np>
 <vp><w pos="VB">saw</w>
  <np>
   <np><w pos="DT">the</w> <w pos="NN">president</w></np>
   <pp><w pos="IN">of</w>
    <np><w pos="DT">the</w> <w pos="NN">company</w></np>
   </pp>
  </np>
 </vp>
</s>

Some choices made in this XML coding: phrase markers are represented by XML elements, while POS tags are given by attribute values. Note that, as a result of this, the tree on the previous slide is not quite the same as the XML element tree for this document.
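To illustrate that last point (this sketch is not from the lecture), a few lines of Python can walk the XML encoding and print the labelled bracketing, treating element names as phrase markers and the pos attribute on w elements as POS tags.

import xml.etree.ElementTree as ET

# The XML encoding of the syntax tree, as on this slide.
xml_tree = """
<s>
  <np><w pos="PRP">They</w></np>
  <vp><w pos="VB">saw</w>
    <np>
      <np><w pos="DT">the</w> <w pos="NN">president</w></np>
      <pp><w pos="IN">of</w>
        <np><w pos="DT">the</w> <w pos="NN">company</w></np>
      </pp>
    </np>
  </vp>
</s>
"""

def bracket(element):
    """Render an element as a labelled bracketing, recursing through phrases."""
    if element.tag == "w":
        return f"({element.get('pos')} {element.text.strip()})"
    children = " ".join(bracket(child) for child in element)
    return f"({element.tag.upper()} {children})"

print(bracket(ET.fromstring(xml_tree)))
# (S (NP (PRP They)) (VP (VB saw) (NP (NP (DT the) (NN president))
#                                     (PP (IN of) (NP (DT the) (NN company))))))

Because the POS tags live in attribute values rather than in elements of their own, the bracketing printed here carries one more layer of labels than the XML element tree itself, which is exactly the mismatch the slide points out.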