  1. Representing symbolic linguistic structures for neural NLP: methods and applications Alexander Panchenko Assistant Professor for NLP

  2. About myself: a decade of fun R&D in NLP • 2002-2008: Bauman Moscow State Technical University, Engineer in Information Systems, MOSCOW • 2008: Xerox Research Centre Europe, Research Intern, FRANCE • 2009-2013: Université catholique de Louvain, PhD in Computational Linguistics, BELGIUM • 2013-2015: Startup in SNA, Research Engineer in NLP, MOSCOW • 2015-2017: TU Darmstadt, Postdoc in NLP, GERMANY • 2017-2019: University of Hamburg, Postdoc in NLP, GERMANY

  3. About myself: a decade of fun R&D in NLP • Publications in int’l conferences & journals: ACL, EMNLP, EACL, ECIR, NLE • Best papers at the Representation Learning workshop (ACL’2016) and at SemEval’2019 • Editor and co-chair: Cambridge Natural Language Engineering (NLE), Springer LNCS/CCIS (AIST conf.) • PC: ACL, NAACL, EMNLP, LREC, RANLP, COLING, …

  4. About myself: my expertise and past/present research foci • Lexical Semantics • Semantic similarity • Word sense disambiguation • Word/sense embedding • Taxonomy induction • Frame induction • … • Argument mining • Graph clustering

  5. Latest publications (2018-2019) are on argument mining and lexical semantics • Lexical Semantics • Semantic similarity • Word sense disambiguation • Word/sense embedding • Taxonomy induction • Frame induction • … • Argument mining • Graph clustering

  6. How to inform neural architectures for NLP with symbolic linguistic knowledge? • Special issue of the Natural Language Engineering journal on informing neural architectures for NLP with linguistic and background knowledge: https://sites.google.com/view/nlesi

  7. How to inform neural architectures for NLP with symbolic linguistic knowledge? Some options: • Graph embeddings • Poincaré embeddings • Regularisers that access the resource • The structure of the neural network is based on the structure of the resource • … other specialised embeddings? • … invented by you? • Special issue of the Natural Language Engineering journal on informing neural architectures for NLP with linguistic and background knowledge: https://sites.google.com/view/nlesi

  8. Text: a sparse symbolic representation Image source: https://www.tensorflow.org/tutorials/word2vec
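For illustration only (not from the slides): a minimal sketch of how a sparse one-hot word representation relates to a dense embedding lookup. The toy vocabulary and the random embedding matrix are assumptions.

```python
# A minimal sketch: text as a sparse symbolic representation.
# Each word is a one-hot vector over the vocabulary; a dense embedding
# matrix maps it into a small continuous space.
import numpy as np

vocab = ["cat", "dog", "apple", "fruit"]          # toy vocabulary
word2id = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    v = np.zeros(len(vocab))
    v[word2id[word]] = 1.0                        # sparse, symbolic: a single 1, rest 0
    return v

embeddings = np.random.randn(len(vocab), 3)       # dense 3-d vectors (normally learned)
dense = one_hot("apple") @ embeddings             # lookup = one-hot times embedding matrix
print(dense)
```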

  9. Graph: a sparse symbolic representation

  10. Embedding graph into a vector space From a survey on graph embeddings [Hamilton et al., 2017]:

  11. Learning with an autoencoder From a survey on graph embeddings [Hamilton et al., 2017]:
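A minimal sketch of the encoder-decoder view popularised by Hamilton et al. (2017), under simplifying assumptions: a shallow lookup encoder, an inner-product decoder, and a squared reconstruction loss against the adjacency matrix. The toy graph, dimensionality, and learning rate are made up.

```python
# Shallow graph "autoencoder": encode nodes as vectors, decode pairwise
# proximity as inner products, minimise the reconstruction error.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy graph (adjacency = target proximity)

rng = np.random.default_rng(0)
Z = 0.1 * rng.standard_normal((4, 2))       # encoder: one 2-d vector per node

for _ in range(500):                        # plain gradient descent
    recon = Z @ Z.T                         # decoder: inner products between embeddings
    grad = 4 * (recon - A) @ Z              # gradient of ||Z Z^T - A||_F^2
    Z -= 0.01 * grad

print(np.round(Z @ Z.T, 2))                 # reconstructed proximities roughly match A
```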

  12. A summary of well-known graph embedding algorithms From a survey on graph embeddings [Hamilton et al., 2017]:

  13. Graph Metric Embeddings • A short paper at ACL 2019 • Paper: https://arxiv.org/abs/1906.07040 • Code: http://github.com/uhh-lt/path2vec

  14. path2vec model
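A hedged sketch of the path2vec idea, not the released implementation: node vectors are trained so that their dot products approximate precomputed pairwise graph similarities (e.g. WordNet path-based measures). The paper's neighbour-based regulariser and batching are omitted, and the toy training pairs are invented.

```python
# Learn node embeddings whose dot products match precomputed graph similarities.
import numpy as np

# toy training data: (node_i, node_j, graph_similarity) triples
pairs = [(0, 1, 0.9), (0, 2, 0.2), (1, 2, 0.3), (2, 3, 0.8)]

rng = np.random.default_rng(0)
Z = 0.1 * rng.standard_normal((4, 2))       # one 2-d embedding per node

for _ in range(2000):
    for i, j, s in pairs:
        err = Z[i] @ Z[j] - s               # dot product should match the similarity
        gi, gj = err * Z[j], err * Z[i]     # gradients of the squared error
        Z[i] -= 0.05 * gi
        Z[j] -= 0.05 * gj

for i, j, s in pairs:
    print(i, j, round(float(Z[i] @ Z[j]), 2), "target", s)
```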

  15. Computational gains compared to graph-based algorithms Similarity computation: graph vs vectors
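An illustrative comparison, not the paper's benchmark: once distances are embedded, a similarity query is a single matrix-vector product, whereas the graph-based measure needs a traversal per query. The graph size, embedding dimensionality, and resulting timings here are arbitrary.

```python
# Rough timing: graph traversal vs dot products over an embedding matrix.
import time
import numpy as np
import networkx as nx

G = nx.gnm_random_graph(2000, 8000, seed=0)      # toy graph
Z = np.random.randn(2000, 128)                   # stand-in for trained node embeddings

t0 = time.time()
_ = nx.single_source_shortest_path_length(G, 0)  # graph-based: traverse the graph
t_graph = time.time() - t0

t0 = time.time()
_ = Z @ Z[0]                                     # vector-based: one matrix-vector product
t_vec = time.time() - t0

print(f"graph traversal: {t_graph:.4f}s, dot products: {t_vec:.6f}s")
```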

  16. path2vec: evaluation results on three different graphs Evaluation on different graphs on SimLex999 (left) and shortest path distance (middle, right).

  17. path2vec evaluation inside a graph-based WSD algorithm (WordNet graph)

  18. Graph embeddings for neural entity linking • A short paper at ACL 2019 Student Research Workshop (main conference) • Paper: https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2019-sevgilietal-aclsrw-graphemb.pdf • Code: https://github.com/uhh-lt/kb2vec

  19. What is Entity Linking? Source of image: https://medium.com/asgard-ai/how-to-enhance-automatic-text-analysis-with-entity-linking-29128a12b

  20. Challenges of Entity Linking Ambiguity ruins everything: Michael Jordan (NBA) vs Michael Jordan (LDA), etc. Source of image: https://medium.com/asgard-ai/how-to-enhance-automatic-text-analysis-with-entity-linking-29128a12b

  21. Graph embeddings for neural entity linking

  22. Graph embeddings for neural entity linking Architecture of our feed-forward neural ED system: using Wikipedia hyperlink graph embeddings as an additional input representation of entity candidates
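A hypothetical sketch of such a feed-forward scorer; the layer sizes, names, and the way the context vector is built are assumptions, not the paper's code. Each candidate entity is scored from the concatenation of a mention-context vector and its Wikipedia hyperlink-graph embedding; the highest-scoring candidate is the predicted entity.

```python
# Feed-forward entity disambiguation scorer over textual + graph features.
import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    def __init__(self, ctx_dim=300, graph_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(ctx_dim + graph_dim, hidden),  # fused context + graph features
            nn.ReLU(),
            nn.Linear(hidden, 1),                    # one relevance score per candidate
        )

    def forward(self, context_vec, candidate_graph_embs):
        n = candidate_graph_embs.size(0)
        ctx = context_vec.expand(n, -1)              # same context for every candidate
        scores = self.mlp(torch.cat([ctx, candidate_graph_embs], dim=1))
        return scores.squeeze(-1)

scorer = CandidateScorer()
context = torch.randn(1, 300)      # e.g. averaged word embeddings of the mention context
candidates = torch.randn(5, 128)   # hyperlink-graph embeddings of 5 candidate entities
print(scorer(context, candidates).argmax().item())
```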

  23. Graph embeddings for neural entity linking

  24. Graph embeddings for neural entity linking

  25. Poincaré embeddings for various NLP tasks • ACL 2019 full paper • Paper: https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2019-janaetal-aclmain-poincompo.pdf • Code: https://github.com/uhh-lt/poincare

  26. Poincaré embeddings for various NLP tasks Contributions: • We devise a straightforward and efficient approach for combining distributional and hypernymy information for the task of noun phrase compositionality prediction. As far as we are aware, this is the first application of Poincaré embeddings to this task. • We demonstrate consistent and significant improvements on benchmark datasets in unsupervised and supervised settings.

  27. Poincaré embeddings for various NLP tasks • Poincaré ball image source: https://arxiv.org/pdf/1705.08039.pdf • Distance on the ball between two points (reconstructed below):
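The distance the slide refers to is the standard Poincaré-ball distance from Nickel & Kiela (2017), reconstructed here for reference:

```latex
d(u, v) = \operatorname{arcosh}\!\left(
  1 + 2\,\frac{\lVert u - v \rVert^{2}}
              {\left(1 - \lVert u \rVert^{2}\right)\left(1 - \lVert v \rVert^{2}\right)}
\right)
```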

  28. Poincaré embeddings for various NLP tasks Training objective: (formula shown on the slide) Training data: • A set of relations (apple IsA fruit) • Can be taken from WordNet • … or extracted from text Source of the image: https://arxiv.org/pdf/1902.00913.pdf
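A minimal sketch of training Poincaré embeddings on such IsA relations with gensim's implementation; the toy relation list and the hyperparameters are assumptions, not those used in the paper.

```python
# Train Poincaré embeddings on a few (hyponym, hypernym) pairs, e.g. from WordNet.
from gensim.models.poincare import PoincareModel

relations = [("apple", "fruit"), ("pear", "fruit"),
             ("fruit", "food"), ("dog", "animal")]   # toy IsA relations

model = PoincareModel(relations, size=10, negative=2)
model.train(epochs=50)

# smaller Poincaré distance = closer in the induced hierarchy
print(model.kv.distance("apple", "fruit"))
print(model.kv.distance("apple", "animal"))
```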

  29. Poincaré embeddings for noun compositionality hot dog → food BUT dog → animal; green apple → fruit AND apple → fruit Evaluation results: comparison to the distributional models
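A hypothetical toy illustration (hand-made vectors, not the paper's model) of one unsupervised cue: a non-compositional phrase such as "hot dog" drifts away from its head "dog" in distributional space, while "green apple" stays close to "apple".

```python
# Compositionality cue: similarity between the phrase vector and its head noun.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vec = {                                         # toy 3-d "distributional" vectors
    "dog":         np.array([0.9, 0.1, 0.0]),   # animal-like
    "hot_dog":     np.array([0.1, 0.9, 0.2]),   # food-like
    "apple":       np.array([0.2, 0.8, 0.1]),   # food-like
    "green_apple": np.array([0.25, 0.85, 0.1]), # still food-like
}

print("hot dog vs dog:      ", round(cosine(vec["hot_dog"], vec["dog"]), 2))        # low
print("green apple vs apple:", round(cosine(vec["green_apple"], vec["apple"]), 2))  # high
```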

  30. Noun compositionality for the Russian language • The Balto-Slavic NLP workshop at ACL 2019 • https://github.com/slangtech/ru-comps • … paper to appear online soon. • A dataset for the evaluation of noun compositionality for Russian.

  31. Poincaré embeddings for taxonomy induction • A short paper at ACL 2019 • Paper: https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2019-alyetal-aclshort-hypertaxi.pdf • Code: https://github.com/uhh-lt/Taxonomy_Refinement_Embeddings

  32. Poincaré embeddings for taxonomy induction Outline of our taxonomy refinement method:
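A hedged sketch of the general idea only (details differ from the paper): after training Poincaré embeddings on noisy hypernym pairs, an orphan term can be attached to the candidate parent that is closest in the Poincaré space. The terms and candidate parents here are invented.

```python
# Attach an orphan term to the nearest candidate parent in Poincaré space.
from gensim.models.poincare import PoincareModel

noisy_pairs = [("apple", "fruit"), ("pear", "fruit"), ("fruit", "food"),
               ("carrot", "vegetable"), ("vegetable", "food")]
model = PoincareModel(noisy_pairs, size=10, negative=2)
model.train(epochs=50)

orphan = "pear"
candidate_parents = ["fruit", "vegetable", "food"]
best = min(candidate_parents, key=lambda p: model.kv.distance(orphan, p))
print(f"attach '{orphan}' under '{best}'")
```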

  33. Poincaré embeddings for taxonomy induction

  34. Comparative Argument Mining • 6th Workshop on Argument Mining at ACL 2019 • Paper: https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2019-panchenkoetal-argminingws-compsent.pdf • Code: https://github.com/uhh-lt/comparative

  35. Comparative Argument Mining • Sentiment analysis ++ • … not only opinions but also objective arguments. • … from text only. • Retrieve pros and cons to make some informed decisions.

  36. Comparative Argument Mining • Sentiment analysis ++ • … not only opinions but also objective arguments. • … from text only. • Retrieve pros and cons to make some informed decisions.

  37. Categorizing Comparative Sentences Contributions: • We release CompSent-19, a new corpus consisting of 7,199 sentences containing item pairs (27% of the sentences are tagged as comparative and annotated with a preference); • We present an experimental study of supervised classifiers and a strong rule-based baseline from prior work.
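A hedged sketch of a simple supervised baseline in the spirit of this study, not the released system: TF-IDF features with logistic regression to decide whether a sentence mentioning an item pair is comparative. The training sentences and labels are made up.

```python
# Classify sentences with an item pair as comparative vs non-comparative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Python is faster than Ruby for this workload.",
    "I installed Python and Ruby on the server.",
    "The SSD is clearly better than the old HDD.",
    "The laptop ships with an SSD and an HDD.",
]
train_labels = ["comparative", "none", "comparative", "none"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_sentences, train_labels)

print(clf.predict(["Java is more verbose than Kotlin."]))
```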

  38. Categorizing Comparative Sentences

  39. Categorizing Comparative Sentences

  40. Argument Mining Demo • Demo paper at ACL 2019 • Paper: https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2019-chernodubetal-acl19demo-targer.pdf • Code: http://github.com/achernodub/targer/ and https://github.com/uhh-lt/targer • Demo: http://ltdemos.informatik.uni-hamburg.de/targer/

  41. Argument Mining Demo

  42. Argument Mining Demo Analyze Text: input field, drop-down model selection, colorized labels, and tagged result.

  43. Argument Mining Demo Search Arguments: query box, field selectors, and result with link to the original document.

  44. Argument Mining Demo

  45. Interested in collaboration (BA/MA/PhD co-supervision) on the following topics: • Argument Mining • Entity Linking • Graph Embeddings • Knowledge bases and lexical resources for neural NLP • Word sense induction and disambiguation; lexical substitution • Relation extraction Image source: https://metro.co.uk/2018/02/08/freemasons-definitely-do-have-a-secret-handshake-but-they-wont-tell-us-what-it-is-7295849/
