Visualization Understanding and Memorability (Steve Rubin) Hi. My project is about visualization understanding and memorability.
What really matters when you look at a visualization? The data? Pictures? The trend? Something else? What really matters when you look at a visualization? The individual data points? The literal picture of the visualization? The trend displayed? Is it something else?
This visualization illustrates the BYU-Utah football rivalry. It shows the score of every BYU-Utah football game of the past 30 years. I see a lot of chart junk: lots of logos and people and quotes. But what really matters here?
In my opinion, the usefulness of this visualization boils down to one of these trends: either the game-by-game pattern of wins and losses, or the overall trend of BYU's record against Utah over the last 30 years.
What Makes a Visualization Memorable? Borkin et al., InfoVis 2013 • Color & human-recognizable objects • Common graphs less memorable than unique visualization types A paper from Michelle Borkin at InfoVis this year asked the question: what makes a visualization memorable? To answer it, they studied how well someone could recall the image of a visualization; this had nothing to do with its data or trends. They found that color and human-recognizable objects are memorable, and that common graphs like line charts and bar charts are less memorable than unique visualization types. And, you won't be surprised to hear, that first chart-junky visualization of the BYU-Utah football rivalry was one of the most memorable visualizations in their corpus.
Useful Junk? The Effects of Visual Embellishment on Comprehension and Memorability of Charts Bateman et al., CHI 2010 The paper “Useful Junk” from CHI in 2010 showed that embellished charts, like this monster bar chart,
Useful Junk? The Effects of Visual Embellishment on Comprehension and Memorability of Charts Bateman et al., CHI 2010 In charts with visual embellishments (“chart junk”): • Accuracy in reading data is no worse • Recall is better gave viewers no worse accuracy in reading the data and trend, and were easier to recall later on. There are a number of problems with their methodology, not least that the trends shown in these visualizations were very simple.
Project goal: Study how well someone can understand the main point of a visualization. So the goal of my project is to dig deeper into how well someone can understand and retain the main point of a visualization. This isn't the same as pure visual recall; I'm less focused on how visually interesting a chart can be, and more on how well it can convey its story.
Pipeline Data Visualizations MTurk Analysis Here’s the pipeline of how I hope to study this.
Pipeline Data Visualizations MTurk Analysis • Pew Research data & visualizations • Corpus of visualizations like that of Borkin et al. • Varying visualization parameters First, I have to get data and visualizations. I can get data from a number of sources, but I am also going to look into getting a big corpus of visualizations like the one used in the InfoVis paper. Eventually, I hope to create visualizations to study in an automated or semi-automated way.
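To make the "automated or semi-automated" generation concrete, here is a minimal sketch of rendering the same underlying data under different presentation parameters. The data values and file names are placeholders, not the real Pew Research data; the actual study stimuli are still to be determined.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Toy stand-in series; the real pipeline would pull from Pew Research
# data or a scraped corpus. Each render holds the data fixed and varies
# one presentation parameter (here, chart type).
years = list(range(2009, 2014))
values = [12, 9, 15, 11, 17]

def render(chart_type, path):
    """Render the same series as either a bar or a line chart."""
    fig, ax = plt.subplots(figsize=(4, 3))
    if chart_type == "bar":
        ax.bar(years, values)
    else:
        ax.plot(years, values, marker="o")
    ax.set_xlabel("Year")
    ax.set_ylabel("Value")
    fig.savefig(path)
    plt.close(fig)

paths = []
for kind in ("bar", "line"):
    p = f"stimulus_{kind}.png"  # hypothetical naming scheme
    render(kind, p)
    paths.append(p)
```

Generating stimuli programmatically like this keeps the data identical across conditions, so any difference in what workers take away can be attributed to presentation.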
Pipeline Data Visualizations MTurk Analysis Questions • What are the main points of the visualization? • What are the main trends of the visualization? Conditions • Visualization is visible • After removing visualization • Significantly later in time (days? weeks?) Next, the visualizations will get pushed to Mechanical Turk, where the system will ask workers questions like “what are the main points of the visualization” and “what are the main trends of the visualization.” The system will have different conditions, too: asking workers these questions while the visualization is visible, after removing the visualization, and significantly later in time.
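Crossing the visualization styles with the recall conditions gives the full set of experimental cells. A small sketch of that enumeration, with hypothetical parameter names standing in for the study dimensions that are still being decided:

```python
from itertools import product

# Hypothetical parameter names; the real study dimensions are TBD.
CHART_TYPES = ["bar", "line", "pie"]
EMBELLISHMENT = ["plain", "chart-junk"]
RECALL_CONDITIONS = ["visible", "just-removed", "days-later"]

def experimental_cells():
    """Enumerate every chart-style x recall-condition combination,
    one dict per experimental cell."""
    return [
        {"chart": c, "style": s, "recall": r}
        for c, s, r in product(CHART_TYPES, EMBELLISHMENT, RECALL_CONDITIONS)
    ]

cells = experimental_cells()
print(len(cells))  # 3 chart types x 2 styles x 3 conditions = 18 cells
```

Enumerating cells up front makes it easy to balance HIT assignment so each worker sees each underlying dataset in only one condition.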
Pipeline Data Visualizations MTurk Analysis • Hand-coding & clustering responses (or have turkers do it) • Do they take away/recall different points and trends based on visualization type or style? • Do they take away the intended point? Finally, I’m going to take all of those results from turk and code or cluster them to find out what points people are taking away from visualizations.
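As a starting point for the clustering step, here is a deliberately simple word-overlap (Jaccard) grouping of free-text responses, written in plain Python. This is a stand-in sketch, not the analysis method the project has settled on; the example responses are invented.

```python
def jaccard(a, b):
    """Word-overlap similarity between two free-text responses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cluster_responses(responses, threshold=0.3):
    """Greedy single-pass clustering: attach each response to the first
    cluster whose seed it resembles, else start a new cluster."""
    clusters = []  # list of lists; clusters[i][0] is that cluster's seed
    for r in responses:
        for c in clusters:
            if jaccard(r, c[0]) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

# Invented example responses to the "main point" question.
responses = [
    "byu usually wins the rivalry",
    "byu wins the rivalry most years",
    "the scores vary a lot",
]
clusters = cluster_responses(responses)
```

In practice the hand-coded labels from the first round could calibrate the threshold, or a proper text-clustering method could replace the Jaccard heuristic entirely.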
Progress • Data & Visualizations Hand-tuned to start • MTurk Software is done, and further changes to survey instrument are easy Sample HIT • Analysis Hand-coded to start, and exploring clustering options As for my progress so far: I'm starting out with some hand-tuned visualizations. I've written the application to manage the experiments and run the studies on Mechanical Turk. I'm also looking into clustering options for the results.
Milestones • Data & Visualizations Determine set of visualization types for the study OR run the study with large, random corpus (soon!) • MTurk Modify to accommodate new survey types (as needed) • Analysis Based on preliminary results, identify the key questions to study (also soon!) In the near future, I need to figure out exactly what data and visualization types I'm going to use in my study. On the Turk side of things, I'll only need to change the software if needed to create new survey types, but that's fairly straightforward. The most challenging work lies in figuring out exactly what questions to ask the workers to glean the most interesting, discriminatory data about what makes trends easiest to understand. I would love to get some feedback on that aspect of this project; I'm trying to avoid going too far into graphical perception, the "what is the relative difference between these two points" route.
Prior work
1. Bateman, S., Mandryk, R., and Gutwin, C. Useful Junk? The Effects of Visual Embellishment on Comprehension and Memorability of Charts. Proceedings of CHI, (2010).
2. Borkin, M.A., Vo, A.A., Bylinskii, Z., et al. What makes a visualization memorable? IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2306–2315.
3. Cleveland, W.S. and McGill, R. Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. Journal of the American Statistical Association 79, 387 (1984), 531.
4. Culbertson, H. and Powers, R. A study of graph comprehension difficulties. Educational Technology Research and …, (1959).
5. Few, S. Data Art vs. Data Visualization: Why Does a Distinction Matter? http://www.perceptualedge.com/blog/?p=1245.
6. Few, S. The Chartjunk Debate: A Close Examination of Recent Findings. http://www.perceptualedge.com/articles/visual_business_intelligence/the_chartjunk_debate.pdf.
7. Few, S. Chart Junk: A Magnet for Misguided Research. http://www.perceptualedge.com/blog/?p=1770.
8. Friel, S., Curcio, F., and Bright, G. Making sense of graphs: Critical factors influencing comprehension and instructional implications. Journal for Research in Mathematics … 32, 2 (2001), 124–158.
9. Hullman, J., Adar, E., and Shah, P. Benefitting InfoVis with visual difficulties. IEEE Transactions on Visualization and Computer Graphics 17, 12 (2011), 2213–2222.
10. Kosslyn, S. Understanding Charts and Graphs. Applied Cognitive Psychology, (1989).
11. Mackinlay, J. Automating the design of graphical presentations of relational information. ACM Transactions on Graphics (TOG) 5, 2 (1986), 110–141.
12. Tractinsky, N. and Meyer, J. Chartjunk or Goldgraph? Effects of Presentation Objectives and Content Desirability on Information Presentation. MIS Quarterly 23, 3 (1999), 397–420.
13. Wainer, H. How to display data badly. The American Statistician 38, 2 (1984), 137–147.
And here’s a partial list of prior work that includes some of the papers I’ve already talked about.
Thanks!