PageRank for Argument Relevance Henning Wachsmuth & Benno Stein • Bauhaus-Universität Weimar • www.webis.de
Goals

1. Information Retrieval: Future (better?) search engines.
2. Argumentation: Improvement due to larger corpora (the web).
3. Timeliness: Provide argumentation on dynamic corpora (the web).
4. Debating: Improve flexibility and fallback behavior.

Apr.’16 • B. Stein
Goal: Future Search Engines

“[Current] search engines will take you half way, at best [to deliver material that addresses an argumentative information need effectively].” [Noam Slonim, 14.12.2015]

❑ Classical retrieval systems operationalize the probability ranking principle.
❑ Future retrieval systems will provide us with justifications / rationales.
➜ Information needs may be formulated in hypothesis form.
➜ Rank documents according to the strongest arguments, whether support or attack.
Goals (continued)

[Pipeline diagram: sources / documents → Extraction, Mining → candidate arguments → Formalization, Contextualization → formalized arguments and relevant facts → Inference, Validation → validated arguments, proof trees → Synthesis, Visualization → author-centric and reader-centric arguments → Retrieval, driven by the user’s query and interaction.]
What if we had perfect argument mining technology?

[Figure: an argument, consisting of a conclusion and its premises.]
Argument Graphs over Document Sets

[Figure: arguments mined from web pages form a graph. Each argument consists of a conclusion and premises; support and attack relations link arguments to one another, and the user’s hypothesis (≈ a conclusion) sits at the root.]
Argument Graphs over Document Sets

An operationalizable model with five building blocks, in a nutshell:

1. Syntax. A canonical argument structure:
   ARGUMENT ::= (CONCLUSION, {PREMISE_i}_{i=1}^{n})

2. Semantics. An interpretation function α for an argument set A:
   α : A × A → {supports, attacks, unrelated},
   where A is the set of all mined arguments in some document set.
   A query (= hypothesis of a user) is in the role of a conclusion.

3. The induced argument graph G = (A_D, E_α) for a document set D.
   From the RMS theory: E_α is cleaned such that G becomes a DAG.

4. Recursive relevance computation for each a ∈ A via PageRank (or friends).
   See uses in bibliometrics, social networks, road networks, or neuroscience.

5. Argument ground (a-priori) strength:
   ∀a ∈ A : S(a) ≡ max_{d ∈ D, a ∈ d} R_BM25(d)
   Identify pay-off values with relevance scores under some retrieval model.
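The five building blocks can be sketched as data structures. A minimal, illustrative Python sketch (the names `Argument`, `induce_edges`, `alpha_fn`, and `ground_strength` are hypothetical, not from the talk):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    """Building block 1: canonical structure (conclusion, premises)."""
    conclusion: str
    premises: tuple = ()

def induce_edges(arguments, alpha_fn):
    """Building blocks 2+3: apply the interpretation function alpha to all
    argument pairs and keep only the support/attack edges of E_alpha."""
    return [(a, b, alpha_fn(a, b))
            for a in arguments for b in arguments
            if a is not b and alpha_fn(a, b) != 'unrelated']

def ground_strength(a, doc_scores, doc_args):
    """Building block 5: the a-priori strength of argument a is the best
    retrieval score (e.g. BM25) of any document that contains it.
    doc_scores: doc_id -> score; doc_args: doc_id -> set of arguments."""
    return max(s for d, s in doc_scores.items() if a in doc_args[d])
```

Cycle removal (to make G a DAG) and the PageRank computation itself are deliberately left out here; they come on the following slides.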
PageRank for Argument Relevance
PageRank for Argument Relevance

p(d_i) = (1 − α) · 1/|D| + α · Σ_j p(d_j)/|D_j|

1. Ground relevance + attributed relevance.
2. d_j links to d_i ❀ increase PageRank(d_i).
3. Reward exclusive links.
4. Uniform ground relevances (sum to 1).

[Figure: document d_j links to document d_i.]
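The fixed point of the equation above can be computed by power iteration. A minimal sketch, assuming every document has at least one outgoing link (so no dangling-node handling is needed):

```python
def pagerank(links, alpha=0.85, iters=50):
    """links: dict mapping each document to the list of documents it links to.
    Returns a dict of PageRank scores p(d_i) per the equation above:
    a uniform ground term (1 - alpha) / |D| plus the mass attributed by
    in-links, where each d_j spreads alpha * p(d_j) over its |D_j| out-links."""
    nodes = list(links)
    n = len(nodes)
    p = {v: 1.0 / n for v in nodes}          # uniform start
    for _ in range(iters):
        new = {v: (1 - alpha) / n for v in nodes}   # ground relevance
        for j, outs in links.items():
            for i in outs:                   # attributed relevance
                new[i] += alpha * p[j] / len(outs)
        p = new
    return p
```

On a symmetric two-document graph the scores converge to 0.5 each, illustrating rule 4 (ground relevances sum to 1).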
PageRank for Argument Relevance (continued)

p̂(c_i) = (1 − α) · (p(d_i) · |D|)/|A| + α · Σ_j p̂(c_j)/|A_j|

1. Ground strength + attributed relevance.
2. c_j relies on c_i as a premise ❀ increase ArgumentRank(c_i).
3. Reward few premises.
4. Ground strength ∼ PageRank.
5. Normalize by the average number of arguments per web page.

[Figure: argument c_j relies on argument c_i as a premise.]
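Once the documents’ PageRank scores are known, the ArgumentRank scores can be computed by the same power-iteration scheme. A hypothetical sketch (function and parameter names are illustrative):

```python
def argument_rank(premises_of, doc_rank, doc_of, alpha=0.85, iters=50):
    """premises_of: argument c_j -> list of its premise arguments (|A_j| many);
    doc_rank:    document -> its PageRank score p(d);
    doc_of:      argument -> the document it was mined from."""
    args = list(premises_of)
    n_docs, n_args = len(doc_rank), len(args)
    p = {a: 1.0 / n_args for a in args}
    for _ in range(iters):
        # Ground strength: the source document's PageRank, normalized by
        # the average number of arguments per page (|A| / |D|).
        new = {a: (1 - alpha) * doc_rank[doc_of[a]] * n_docs / n_args
               for a in args}
        # Attributed relevance: each c_j spreads alpha * p(c_j) evenly
        # over its premises, so few premises mean a larger share each.
        for j, prems in premises_of.items():
            for i in prems:
                new[i] += alpha * p[j] / len(prems)
        p = new
    return p
```

In a two-argument example where c2 relies on c1, the premise c1 ends up ranked above c2, as intended by rule 2.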