Evaluation of Rich and Explicit Feedback for Exploratory Search
Esben Sørig 1, Nicolas Collignon 2, Rebecca Fiebrink 1, and Noriko Kando 3
1 Goldsmiths, University of London  2 University of Edinburgh  3 National Institute of Informatics, Japan
Annotation feedback for exploratory IR
• Current search systems have limited support for exploratory searchers
• Document annotation is a well-known strategy for active reading
• We investigate the use of annotations as an explicit feedback signal for IR systems
Previous work
• Relevance feedback is a well-studied interactive, explicit feedback mechanism: the user marks documents as relevant or irrelevant
• Variants of relevance feedback include interactive query expansion
• Golovchinsky et al. 1 studied the effect of annotation feedback on retrieval performance
  – Significant improvement in retrieval performance in a single iteration of feedback
  – Fixed query, fixed search results, non-interactive, no user-experience evaluation
1 G. Golovchinsky, M. N. Price, and B. N. Schilit. 1999. From reading to retrieval: freeform ink annotations as queries. In Proceedings of ACM SIGIR '99. ACM, 19-25.
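Classic relevance feedback of the kind described above is often implemented with the Rocchio algorithm. The sketch below is illustrative only, not the method from the paper or from Golovchinsky et al.: documents and queries are simple term-weight dicts, and the alpha/beta/gamma weights are the conventional textbook defaults.

```python
# Minimal Rocchio relevance-feedback sketch (illustrative, not the paper's method).
# A "vector" here is a dict mapping terms to weights.

def rocchio(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward relevant docs and away from irrelevant ones."""
    terms = set(query)
    for d in relevant + irrelevant:
        terms |= set(d)
    updated = {}
    for t in terms:
        rel = sum(d.get(t, 0.0) for d in relevant) / len(relevant) if relevant else 0.0
        irr = sum(d.get(t, 0.0) for d in irrelevant) / len(irrelevant) if irrelevant else 0.0
        w = alpha * query.get(t, 0.0) + beta * rel - gamma * irr
        updated[t] = max(w, 0.0)  # negative weights are conventionally clipped to zero
    return updated

# Toy example: one relevant and one irrelevant feedback document.
query = {"jaguar": 1.0}
relevant = [{"jaguar": 1.0, "cat": 0.8}]
irrelevant = [{"jaguar": 1.0, "car": 0.9}]
print(rocchio(query, relevant, irrelevant))
```

Terms from relevant documents ("cat") gain weight in the updated query, while terms that appear only in irrelevant documents ("car") are pushed down; this is the one-iteration improvement that annotation feedback generalizes by supplying richer, passage-level signals.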
Simulation of retrieval performance
• TREC Dynamic Domain 2017 user simulator
• New York Times Annotated Corpus
• Performance averaged over 60 search tasks
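A simulator-driven evaluation like the one above is essentially a retrieve–judge–refine loop: the system retrieves documents, the simulator supplies relevance judgments in place of a human, and the query is refined for the next iteration. The sketch below is a toy stand-in, not the TREC DD simulator API: the corpus, the term-overlap ranking, and the seed query are all assumptions for illustration.

```python
# Toy feedback-loop simulation (illustrative, not the TREC Dynamic Domain API).
# Documents are sets of terms; retrieval ranks unseen docs by term overlap
# with the current query, and the "simulator" judges docs by a ground-truth set.

def simulate_session(corpus, relevant_ids, iterations=3, k=2):
    """Run a simulated session: retrieve top-k unseen docs, accept simulated
    relevance judgments, and expand the query with terms from relevant hits."""
    query = {"initial"}          # assumed seed query for the toy example
    found, seen = set(), set()
    for _ in range(iterations):
        unseen = [d for d in corpus if d not in seen]
        # stand-in retrieval model: rank by overlap with the current query
        ranked = sorted(unseen, key=lambda d: -len(query & corpus[d]))
        for doc_id in ranked[:k]:
            seen.add(doc_id)
            if doc_id in relevant_ids:       # simulated relevance judgment
                found.add(doc_id)
                query |= corpus[doc_id]      # refine query with the doc's terms
    return found

corpus = {
    "d1": {"initial"},
    "d2": {"initial", "topic"},
    "d3": {"topic", "detail"},
}
print(simulate_session(corpus, relevant_ids={"d2", "d3"}))
```

In the toy run, "d3" shares no terms with the seed query and is only reachable after feedback on "d2" expands the query, which is the dynamic the simulator-based evaluation measures; averaging a retrieval metric over many such tasks gives the per-system scores reported on the previous slide.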
Evaluation platform for annotation feedback
Experiment with human subjects
• Goal: measure the retrieval performance and user experience of annotation feedback with real search users
• Two conditions: 1) relevance feedback only; 2) annotation feedback only
• Between-subjects design
• Amazon Mechanical Turk participants
• Tasks chosen from the user simulator
• Pre- and post-task questionnaires to measure user experience
Discussion and questions
• Is evaluation on a TREC dataset representative of multi-session exploratory search contexts?
• Are questionnaires sufficient to measure the hypothesized benefit to user exploration, understanding, and engagement?
• Questions?