Semantic Textual Similarity & more on Alignment
CMSC 723 / LING 723 / INST 725
Marine Carpuat, marine@cs.umd.edu
2 topics today
• P3 task: Semantic Textual Similarity
  – Including monolingual alignment
• Beyond IBM word alignment
  – Synchronous CFGs
Semantic Textual Similarity
Series of tasks at the international workshop on semantic evaluations (SemEval), since 2012
http://alt.qcri.org/semeval2017/task1/
What is Semantic Textual Similarity?
(Figure: text snippets in many languages and scripts, e.g. “Welcome to my world, trust me you will never be disappointed”, feeding into a Semantic Similarity component.)
• Quantitative: a graded similarity score and a confidence score
• Principled: interpretability, i.e. which semantic components/features led to the results (hopefully leading to a better understanding of semantics)
Why Semantic Textual Similarity?
• Most NLP applications need some notion of semantic similarity to overcome brittleness and sparseness
• Provides evaluation beyond surface text processing
• A hub for semantic processing as a black box in applications beyond NLP
• Lends itself to an extrinsic evaluation of scattered semantic components
What is STS?
• The graded process by which two snippets of text (t1 and t2) are deemed semantically equivalent, i.e. bear the same meaning
• An STS system will quantifiably inform us of how similar t1 and t2 are, resulting in a similarity score
• An STS system will tell us why t1 and t2 are similar, giving a nuanced interpretation of similarity based on semantic components’ contributions
What is STS?
• Word similarity has been relatively well studied
  – For example, according to WN (pairs ordered from less to more similar):
      cord – smile          0.02
      rooster – voyage      0.04
      noon – string         0.04
      fruit – furnace       0.05
      ...
      hill – woodland       1.48
      car – journey         1.55
      cemetery – mound      1.69
      ...
      cemetery – graveyard  3.88
      automobile – car      3.92
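The scores above are graded human judgments on a roughly 0–4 scale. As an illustration of one automatic word-similarity measure, here is a minimal sketch using WordNet path similarity via NLTK; note its values live in (0, 1], a different scale than the judgments above, and the sketch assumes the NLTK WordNet data is installed.

```python
# Minimal sketch of WordNet-based word similarity using NLTK.
# path_similarity returns values in (0, 1], not the 0-4 judgment scale
# above; this only illustrates the idea of automatic word similarity.
from nltk.corpus import wordnet as wn

def word_similarity(w1, w2):
    # Best path similarity over all noun-sense pairs of the two words.
    best = 0.0
    for s1 in wn.synsets(w1, pos=wn.NOUN):
        for s2 in wn.synsets(w2, pos=wn.NOUN):
            sim = s1.path_similarity(s2)
            if sim is not None and sim > best:
                best = sim
    return best

for w1, w2 in [("cord", "smile"), ("car", "journey"), ("automobile", "car")]:
    print(w1, w2, round(word_similarity(w1, w2), 3))
```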
What is STS?
• Fewer datasets for similarity between sentences; examples with gold scores:
  – “A forest is a large area where trees grow close together.” vs. “The coast is an area of land that is next to the sea.”  [0.25]
  – “A forest is a large area where trees grow close together.” vs. “Woodland is land with a lot of trees.”  [2.51]
  – “Once there was a Czar who had three lovely daughters.” vs. “There were three beautiful girls, whose father was a Czar.”  [4.3]
Related tasks
• Paraphrase detection
  – Are two sentences equivalent in meaning?
• Textual entailment
  – Does premise P entail hypothesis H?
• STS provides graded similarity judgments
Annotation: crowd-sourcing
Annotation: crowd-sourcing
• English annotation process
  – Pairs annotated in batches of 20
  – Annotators paid $1 per batch
  – 5 annotations per pair
  – Workers need to have the MTurk Master qualification
• Defining gold standard judgments
  – Median value of annotations
  – After filtering low-quality annotators (< 0.80 correlation with leave-one-out gold & < 0.20 Kappa)
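A minimal sketch of the gold-standard aggregation just described, assuming a hypothetical scores[annotator] = list-of-scores layout; it implements only the leave-one-out correlation filter (the Kappa filter is omitted) and uses made-up numbers.

```python
# Sketch of the aggregation above: drop annotators whose scores correlate
# poorly with the leave-one-out gold, then take the per-pair median.
import numpy as np
from scipy.stats import pearsonr

def filter_and_aggregate(scores, min_corr=0.80):
    annotators = list(scores)
    kept = []
    for a in annotators:
        # Leave-one-out "gold": per-pair median of everyone else's scores.
        others = np.array([scores[b] for b in annotators if b != a])
        loo_gold = np.median(others, axis=0)
        r, _ = pearsonr(scores[a], loo_gold)
        if r >= min_corr:                 # drop low-quality annotators
            kept.append(a)
    # Gold judgment per pair: median over the remaining annotators.
    return np.median(np.array([scores[a] for a in kept]), axis=0)

# Made-up example: three consistent annotators and one noisy one.
scores = {
    "a1": [4.0, 1.0, 3.0, 0.5],
    "a2": [4.5, 0.5, 3.0, 1.0],
    "a3": [4.0, 1.5, 2.5, 0.0],
    "a4": [0.0, 4.0, 1.0, 4.5],           # inconsistent, should be filtered
}
print(filter_and_aggregate(scores))       # -> [4.  1.  3.  0.5]
```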
Diverse data sources
Evaluation: a shared task
Subset of 2016 results (score: Pearson correlation)
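A minimal sketch of the evaluation metric itself, Pearson correlation between system scores and gold judgments, with made-up numbers.

```python
# STS evaluation: Pearson correlation between system output and gold scores.
from scipy.stats import pearsonr

gold   = [4.3, 2.51, 0.25, 3.8, 1.0]   # hypothetical human judgments (0-5)
system = [4.0, 2.80, 0.60, 3.5, 1.4]   # hypothetical system predictions

r, _ = pearsonr(system, gold)
print(f"Pearson r = {r:.3f}")
```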
STS models: from word to sentence vectors
• Can we perform STS by comparing sentence vector representations?
  – This approach works well for word-level similarity
• But can we capture the meaning of a sentence in a single vector?
“Composing” by averaging
g(“shots fired at residence”) = (1/4) (v_shots + v_fired + v_at + v_residence)
[Tai et al. 2015, Wieting et al. 2016]
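A minimal sketch of this averaging composition plus cosine similarity for scoring a sentence pair; the 4-dimensional word vectors below are made up (real systems use embeddings trained as in Wieting et al. 2016).

```python
# "Composition by averaging": a sentence vector is the mean of its word
# vectors, and STS is the cosine of the two sentence vectors.
# The toy 4-d embeddings below are invented for illustration.
import numpy as np

emb = {
    "shots":     np.array([0.9, 0.1, 0.0, 0.2]),
    "fired":     np.array([0.8, 0.2, 0.1, 0.1]),
    "at":        np.array([0.1, 0.1, 0.9, 0.1]),
    "residence": np.array([0.2, 0.9, 0.1, 0.0]),
    "gunfire":   np.array([0.9, 0.2, 0.0, 0.1]),
    "house":     np.array([0.1, 0.8, 0.2, 0.1]),
}

def g(sentence):
    # g("w1 ... wn") = (1/n) * (v_w1 + ... + v_wn)
    return np.mean([emb[w] for w in sentence.split()], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(g("shots fired at residence"), g("gunfire at house")))
```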
How can we induce word vectors for composition?
• From English paraphrases, e.g. “by our fellow members” ↔ “by our colleagues” [Wieting et al. 2016]
• From bilingual sentence pairs [Hermann & Blunsom 2014]
• From bilingual phrase pairs, e.g. “by our fellow member” ↔ “de nuestra colega”
STS models: monolingual alignment
One (of many) approaches to monolingual alignment
Idea:
• Exploit not only similarity between words
• But also similarity between their contexts
See Sultan et al. 2013, https://github.com/ma-sultan/
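A toy sketch of the idea (not the Sultan et al. implementation, which is at the repository above): score each candidate word pair by a weighted combination of word similarity and the similarity of the words' neighbors, then align greedily. The word_sim placeholder, weights, and threshold are all assumptions for illustration.

```python
# Toy sketch of monolingual alignment using word + context similarity.
def word_sim(w1, w2):
    return 1.0 if w1.lower() == w2.lower() else 0.0   # placeholder measure

def context_sim(i, j, s1, s2, window=2):
    # Average similarity between words near position i in s1 and j in s2.
    ctx1 = s1[max(0, i - window):i] + s1[i + 1:i + 1 + window]
    ctx2 = s2[max(0, j - window):j] + s2[j + 1:j + 1 + window]
    sims = [word_sim(a, b) for a in ctx1 for b in ctx2]
    return sum(sims) / len(sims) if sims else 0.0

def align(s1, s2, w_lex=0.9, w_ctx=0.1, threshold=0.5):
    pairs = []
    for i, w1 in enumerate(s1):
        for j, w2 in enumerate(s2):
            score = w_lex * word_sim(w1, w2) + w_ctx * context_sim(i, j, s1, s2)
            if score >= threshold:
                pairs.append((i, j, score))
    # Greedy one-to-one alignment: best-scoring candidate pairs first.
    pairs.sort(key=lambda p: -p[2])
    used1, used2, alignment = set(), set(), []
    for i, j, score in pairs:
        if i not in used1 and j not in used2:
            alignment.append((i, j))
            used1.add(i)
            used2.add(j)
    return sorted(alignment)

s1 = "the boy was struck by lightning".split()
s2 = "lightning struck the young boy".split()
print(align(s1, s2))
```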
2 topics today
• P3 task: Semantic Textual Similarity
  – Including monolingual alignment
• Beyond IBM word alignment
  – Synchronous CFGs
Aligning words & constituents
• Alignment: mapping between spans of text in lang1 and spans of text in lang2
  – Sentences in document pairs
  – Words in sentence pairs
  – Syntactic constituents in sentence pairs
• Today: 2 methods for aligning constituents
  – Parse and match
  – Biparse
Parse & Match
Parse(-Parse)-Match
• Idea
  – Align spans that are consistent with existing structure
• Pros
  – Builds on existing NLP tools
• Cons
  – Assumes availability of lots of resources
  – Assumes that representations can be matched
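One illustrative way to realize "align spans that are consistent with existing structure" (a generic sketch, not a specific published system): given a word alignment and the constituent spans from two existing parses, accept a span pair when every alignment link touching one span stays inside the other, and vice versa.

```python
# Sketch of a parse-and-match style consistency check (illustrative only).
def consistent(src_span, tgt_span, links):
    (s1, s2), (t1, t2) = src_span, tgt_span
    inside = [(i, j) for (i, j) in links if s1 <= i <= s2 and t1 <= j <= t2]
    # A link violates the pair if it touches one span but not the other.
    violating = [(i, j) for (i, j) in links
                 if (s1 <= i <= s2) != (t1 <= j <= t2)]
    return bool(inside) and not violating

def match_constituents(src_spans, tgt_spans, links):
    return [(s, t) for s in src_spans for t in tgt_spans
            if consistent(s, t, links)]

# Hypothetical inputs: word alignment links (src index, tgt index) and
# constituent spans (inclusive word indices) from the two parses.
links = [(0, 1), (1, 2), (2, 0)]
src_spans = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)]
tgt_spans = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2)]
print(match_constituents(src_spans, tgt_spans, links))
```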
Aligning words & constituents
2 methods for aligning constituents:
• Parse and match – assumes existing parses and alignment
• Biparse – alignment = structure
A “straw man” hypothesis: All languages have the same grammar
The biparsing hypothesis: All languages have nearly the same grammar
Example for the biparsing hypothesis: All languages have nearly the same grammar
The biparsing hypothesis: All languages have nearly the same grammar
[Dekai Wu and Pascale Fung, IJCNLP-2005; HKUST Human Language Technology Center]
The biparsing hypothesis: All languages have nearly the same grammar
Equivalent notations for the two rule types:
Straight rule:
  – Indexed SDTG/SCFG notation: VP → VV(1) PP(2) , VV(1) PP(2)
  – Permuted SDTG/SCFG notation: VP → VV PP ; 1 2
  – SDTG/SCFG notation: VP → VV PP , VV PP
  – ITG shorthand: VP → [ VV PP ]
Inverted rule:
  – Indexed SDTG/SCFG notation: VP → VV(1) PP(2) , PP(2) VV(1)
  – Permuted SDTG/SCFG notation: VP → VV PP ; 2 1
  – SDTG/SCFG notation: VP → VV PP , PP VV
  – ITG shorthand: VP → ⟨ VV PP ⟩
[Dekai Wu and Pascale Fung, IJCNLP-2005; HKUST Human Language Technology Center]
Synchronous Context-Free Grammars
• Context-free grammars (CFG)
  – Common way of representing syntax in (monolingual) NLP
• Synchronous context-free grammars (SCFG)
  – Generate pairs of strings
  – Align sentences by parsing them
  – Translate sentences by parsing them
• Key algorithm: how to parse with SCFGs?
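A minimal sketch of what "generate pairs of strings" means: each SCFG rule rewrites a nonterminal into two linked right-hand sides, and a derivation expands both sides in lockstep. The toy grammar and the (rough, pinyin-glossed) lexicon below are made up; the VP rule mirrors the inverted rule from the notation slide above.

```python
# Toy SCFG: each rule pairs a source RHS with a target RHS whose
# nonterminals are linked by index. Grammar and lexicon are invented.
RULES = {
    "S":  [([("NP", 1), ("VP", 2)], [("NP", 1), ("VP", 2)])],   # straight
    "VP": [([("VV", 1), ("PP", 2)], [("PP", 2), ("VV", 1)])],   # inverted
    "NP": [(["the police"], ["jingcha"])],
    "VV": [(["fired"], ["kaiqiang"])],
    "PP": [(["at the house"], ["zai fangzi"])],
}

def generate(symbol):
    # Expand the first rule for `symbol` on both sides simultaneously.
    src_rhs, tgt_rhs = RULES[symbol][0]
    expansions = {}                        # link index -> (src string, tgt string)
    src_out = []
    for item in src_rhs:
        if isinstance(item, tuple):        # linked nonterminal
            sym, link = item
            expansions[link] = generate(sym)
            src_out.append(expansions[link][0])
        else:                              # terminal string
            src_out.append(item)
    tgt_out = []
    for item in tgt_rhs:
        if isinstance(item, tuple):        # reuse the linked expansion
            _, link = item
            tgt_out.append(expansions[link][1])
        else:
            tgt_out.append(item)
    return " ".join(src_out), " ".join(tgt_out)

print(generate("S"))
# -> ('the police fired at the house', 'jingcha zai fangzi kaiqiang')
```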
SCFG trade-off
• Expressiveness
  – SCFGs cannot represent all sentence pairs in all languages
• Efficiency
  – SCFGs let us view alignment as parsing & benefit from a well-studied formalism
Synchronous parsing cannot represent all sentence pairs
A subclass of SCFGs: Inversion Transduction Grammars
• ITGs are the subclass of SDTGs/SCFGs (three equivalent definitions):
  – with only straight and inverted transduction rules
  – with only transduction rules of rank ≤ 2
  – with only transduction rules of rank ≤ 3
• ITGs are context-free (like SCFGs).
For length-4 phrases (or frames), ITGs can express 22 out of 24 permutations!
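A small sketch that checks this claim by brute force: a permutation is expressible with binary straight/inverted ITG rules exactly when it can be collapsed by repeatedly merging adjacent blocks that cover a contiguous range of positions; enumerating all 24 permutations of length 4 leaves out only (2,4,1,3) and (3,1,4,2).

```python
# Count which permutations of length 4 an ITG can express.
from itertools import permutations

def block_size(b):
    return b[1] - b[0] + 1

def itg_expressible(perm):
    # Reduce to one block by repeatedly merging adjacent blocks whose
    # positions form a contiguous range (a binary straight/inverted rule).
    blocks = [(p, p) for p in perm]
    merged = True
    while len(blocks) > 1 and merged:
        merged = False
        for i in range(len(blocks) - 1):
            lo = min(blocks[i][0], blocks[i + 1][0])
            hi = max(blocks[i][1], blocks[i + 1][1])
            if hi - lo + 1 == block_size(blocks[i]) + block_size(blocks[i + 1]):
                blocks[i:i + 2] = [(lo, hi)]   # contiguous: merge the pair
                merged = True
                break
    return len(blocks) == 1

perms = list(permutations(range(1, 5)))
ok = [p for p in perms if itg_expressible(p)]
print(len(ok), "of", len(perms))              # -> 22 of 24
print([p for p in perms if p not in ok])      # -> [(2, 4, 1, 3), (3, 1, 4, 2)]
```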
ITGs enable efficient DP algorithms [Wu 1995]
(Figure: a biparsing chart over English positions e0 ... e7 and Chinese positions c0 ... c6.)
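A toy Viterbi-style sketch of the biparsing chart (illustrative only, not Wu's algorithm or data): a cell scores the alignment of an English span with a foreign span, and larger cells combine two smaller ones either straight or inverted. The sentence pair, gloss, and lexical scores are invented, and foreign-side insertions are not handled.

```python
# Toy Viterbi biparser for a bracketing ITG over made-up data.
import math
from functools import lru_cache

E = "fired at the house".split()
F = "zai fangzi kaiqiang".split()     # rough gloss: "at house fired"

LEX = {                               # invented lexical log-scores
    ("fired", "kaiqiang"): -0.2,
    ("at", "zai"): -0.3,
    ("the", None): -1.0,              # English word aligned to nothing
    ("house", "fangzi"): -0.2,
}

def lex(s, t, u, v):
    # Score for aligning English span [s, t) directly to foreign span [u, v).
    if t - s == 1 and v - u == 1:
        return LEX.get((E[s], F[u]), -10.0)
    if t - s == 1 and v - u == 0:     # deletion: word paired with empty span
        return LEX.get((E[s], None), -10.0)
    return -math.inf

@lru_cache(maxsize=None)
def best(s, t, u, v):
    score = lex(s, t, u, v)
    for i in range(s + 1, t):         # split the English span at i
        for j in range(u, v + 1):     # split the foreign span at j
            straight = best(s, i, u, j) + best(i, t, j, v)
            inverted = best(s, i, j, v) + best(i, t, u, j)
            score = max(score, straight, inverted)
    return score

print(best(0, len(E), 0, len(F)))     # best bracketing-ITG alignment score
```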