Information Visualization and Visual Analytics: roles, challenges, and examples. Giuseppe Santucci
VisDis and the Database & User Interface • The VisDis and Database/Interface group's background covers: – Visual Information Access – Data quality – Data integration – Adaptive Interfaces – User-Centered Design – Usability and Accessibility – Infovis evaluation – Visual quality metrics – Visual Analytics • Data sampling • Density map optimization
Outline • Information Visualization – Main issues • Data overloading – Visual Analytics – Automatic data analysis – Three examples • Projects and books
Information visualization! 1. Infovis is perfect for exploration, when we don't know exactly what to look at: it supports vague goals 2. Infovis is perfect for explaining complex data and supporting decisions • Other approaches to data analysis – Statistics: strong at verification, but does not support exploration and vague goals – Data mining: actionable and reliable, but a black box: not interactive, question-response style – Visual analytics (formerly Visual Data Mining) is trying to join the two worlds
Canonical steps in infovis – STEP 1: from DATA to an Internal Representation (figure: raw data from many domains, e.g., mathematics, sport, physics, chemistry, literature, history, art, geography, mapped to encodings) – Encoding of values: univariate data, bivariate data, trivariate data, multidimensional data – Encoding of relations: temporal data, maps & diagrams, graphs/trees, data streams
Canonical steps in infovis – STEP 2: from the Internal Representation to the Presentation – Space limitations: scrolling, overview + details, distortion, suppression, zoom & pan, semantic zoom – Time limitations – Perceptual issues – Cognitive issues
SO WE ARE DONE! (?)
Outline • Information Visualization • Data overloading – Visual Analytics – Automatic data analysis – Three examples • Projects, books, and conferences
Data size and complexity! • 100 million FedEx transactions per day • 150 million VISA credit card transactions per day • 300 million AT&T long-distance calls per day • 50 billion e-mails per day • 600 billion IP packets per day • 1 trillion (10^12) web pages (according to Google), corresponding to about 3 petabytes of data • Google processes 20 petabytes of data per day • Data streams (sensor networks, IP traffic, etc.) kilobyte, megabyte, gigabyte, terabyte, petabyte, …
Rescuing information • In many situations people need to exploit the hidden information resting in large, unexplored data sets – decision-makers – analysts – engineers – emergency response teams – ... • Several techniques exist for this purpose – Automatic analysis techniques (e.g., data mining) – Manual analysis techniques (e.g., information visualization) • Petabyte datasets require a joint effort:
Visual Analytics
VA is highly interdisciplinary (figure: component diagram) – Data Mining – Data Management – Scientific & Information Visualisation – Spatio-Temporal Data – Human Perception + Cognition – Evaluation – Infrastructure Each component presents challenging issues
Visualization • Scientific Visualization & Information Visualization – interactivity & scalability issues • Challenges: design of new scalable structures that support: – Visual abstractions (e.g., clustering, sampling, etc.) – Rapid updates of visual displays for billion-record databases (10 frames per second)
Data Management • Answering a query against a large data set is now possible. Among the other challenges: • Integration of heterogeneous data such as numeric data, graphs, text, audio and video signals, semi-structured data • Data streams – in many applications data are continuously produced (sensor data, stock market data, news data, etc.) • Data provenance – understanding where data come from • Data reduction – visualizing billions of records is not possible; we need to reduce and abstract the data to support interaction at different detail levels (see, e.g., Google Earth, and the sketch below) • ...
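To make the data-reduction bullet concrete, here is a minimal sketch (all names are illustrative, not from the slides) of level-of-detail aggregation: raw records are binned into a coarse grid so the display never receives more items than it can show, and zooming simply re-aggregates the records inside the new viewport at a finer grid.

```python
# Minimal sketch of level-of-detail data reduction: aggregate raw
# (x, y, value) records into a coarse grid so the display receives a
# bounded number of items. Illustrative code, not the slides' system.
import numpy as np

def reduce_to_grid(xs, ys, values, grid_w=100, grid_h=100):
    """Aggregate records into a grid_h x grid_w grid, keeping the mean
    value and the record count per cell."""
    span_x = np.ptp(xs) or 1.0
    span_y = np.ptp(ys) or 1.0
    col = np.clip(((xs - xs.min()) / span_x * (grid_w - 1)).astype(int), 0, grid_w - 1)
    row = np.clip(((ys - ys.min()) / span_y * (grid_h - 1)).astype(int), 0, grid_h - 1)
    sums = np.zeros((grid_h, grid_w))
    counts = np.zeros((grid_h, grid_w))
    np.add.at(sums, (row, col), values)
    np.add.at(counts, (row, col), 1)
    means = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    return means, counts

# A zoom-in re-runs the aggregation on the records inside the new
# viewport, yielding finer detail from the same raw data.
```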
Data mining • Methods to automatically extract insights – Supervised learning from examples: using training samples to learn models for the classification (or prediction) of previously unseen data samples – Cluster analysis, which aims to extract structure from unknown data, grouping data instances into classes based on mutual similarity, and to identify outliers – Association rule mining (analysis of co-occurrence of data items) and dimensionality reduction • Challenges come from: – semi-structured and complex data (web data, documents) – interaction with visualizations
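A toy sketch of the two unsupervised tasks named above, cluster analysis and outlier identification, assuming scikit-learn is available (the slides do not prescribe any particular library, and the data here is synthetic):

```python
# Cluster synthetic 2D data with k-means, then flag outliers as points
# unusually far from their cluster centre. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.3, size=(100, 2)) for loc in (0, 3, 6)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Distance of each point to its own cluster centre.
dists = np.linalg.norm(X - km.cluster_centers_[labels], axis=1)
outliers = dists > dists.mean() + 3 * dists.std()
print(f"{outliers.sum()} outlier(s) among {len(X)} points")
```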
Spatio-Temporal Data • Data about time and space are widespread – geographic measurements – GPS position data – remote sensing applications (e.g., satellite data) • Finding spatial relationships and patterns in these data is of special interest • The analysis of data with references in both space and time is a challenging research topic: – scale: clusters and other phenomena may only occur at particular scales, which may not be the scale at which the data are recorded – uncertainty: spatio-temporal data are often incomplete, interpolated, collected at different times, etc. – …
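The scale issue can be shown in a few lines. Below is a small illustration (synthetic data; DBSCAN from scikit-learn is my choice, not the slides') in which the same point set yields different clusterings depending on the neighbourhood radius, i.e., the analysis scale:

```python
# The same spatial point set clusters differently at different scales:
# a fine eps sees two clusters, a coarse eps merges them into one.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Two tight groups of points roughly 1 unit apart.
pts = np.vstack([rng.normal(0, 0.05, (50, 2)),
                 rng.normal(1, 0.05, (50, 2))])

for eps in (0.1, 2.0):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(pts)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"eps={eps}: {n_clusters} cluster(s) found")
```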
Perception and cognition • A critical element is the human being – Visual analysis tasks require the careful design of apt human-computer interfaces – Challenges: the need to integrate Psychology, Sociology, Neuroscience, and Design issues • user-centred analysis and modelling • multimodal interaction techniques for visualization and exploration of large information spaces • availability of improved display resources • novel interaction algorithms • perceptual, cognitive and graphical principles which in combination lead to improved visual communication of data and analysis results (figure: the human action cycle: intention, action plan, execution, perception, interpretation, evaluation)
Evaluation and Infrastructure • How to assess (evaluate) the effectiveness of a visual analytics environment is a topic of lively debate • The same happens for infrastructures: agreed solutions are still under investigation Both topics are still at the stage of workshop results... D3!
Back to Automatic Data Analysis We can classify the automatic activities into three main groups 1. Deriving new values from the dataset for ad-hoc visualization • This is the least standard and most creative part of the process 2. Data reduction / data mining • Clustering / classification / … • Sampling / pixel-oriented visualization • Dimension reduction 3. Visualization improvement • Data distribution • Perceptual issues • Cognitive issues
Example for group 1 Deriving new values from the dataset for ad-hoc visualization (you are going to visualize DERIVED data)
A Visual Analytics example (Group 1) Deriving new values from the dataset for ad-hoc visualization • How to visually compare J. London's and M. Twain's books? • [D. A. Keim and D. Oelke. Literature Fingerprinting: A New Method for Visual Literary Analysis. 2007 IEEE Symp. on Visual Analytics Science and Technology (VAST '07)] 1. Split the book into several text blocks (e.g., pages, paragraphs, sentences) 2. Measure, for each text block, a relevant feature (e.g., average sentence length, word usage, etc.) 3. Associate the relevant feature with a visual attribute (e.g., color) 4. Visualize it
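A minimal re-implementation sketch of steps 1-4 (not Keim & Oelke's actual pipeline; block size and feature choice are assumptions) might look like this:

```python
# Literature-fingerprint sketch: split a text into blocks, compute the
# average sentence length per block, lay the blocks out on a grid, and
# map the feature to colour.
import re
import numpy as np
import matplotlib.pyplot as plt

def avg_sentence_length(block):
    sentences = [s for s in re.split(r"[.!?]+", block) if s.strip()]
    if not sentences:
        return 0.0
    return np.mean([len(s.split()) for s in sentences])

def fingerprint(text, block_size=2000):
    # Step 1: fixed-size character blocks stand in for pages/paragraphs.
    blocks = [text[i:i + block_size] for i in range(0, len(text), block_size)]
    # Step 2: one feature value per block.
    feats = np.array([avg_sentence_length(b) for b in blocks])
    # Steps 3-4: arrange blocks on a square grid, colour by feature.
    side = int(np.ceil(np.sqrt(len(feats))))
    grid = np.full(side * side, np.nan)
    grid[:len(feats)] = feats
    plt.imshow(grid.reshape(side, side), cmap="coolwarm")
    plt.colorbar(label="avg sentence length (words)")
    plt.show()
```

Rendering two such fingerprints side by side gives the visual comparison shown on the next slide.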
J. London vs. M. Twain: average sentence lengths
User interaction (a non-uniform book?)
Details of a book
What about the Bible?
Example 2 Data reduction / data mining
Visual Analytics of Anomaly Detection in Large Data Streams (paper from Daniel Keim's group) • You have to monitor a network composed of 8 systems with 16 servers each • Each server provides basic information – CPU % occupation – DISK % occupation – MEM % occupation – ... – That corresponds to 128 temporal data streams (overplotting!!) (figure: overplotted line chart of CPU % over time)
Pixel-oriented visualization 28 days (5-min windows), about 8k observations Each observation takes a pixel The color encodes the CPU %
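A sketch of this pixel-oriented layout (synthetic data, not the paper's implementation): 28 days of 5-minute CPU samples, 288 per day and 8,064 in total, become a 28-row image in which each pixel is one observation coloured by CPU %.

```python
# Pixel-oriented layout: one row per day, one column per 5-minute slot,
# colour encodes CPU %. The data stream here is faked for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
cpu = np.clip(rng.normal(40, 15, size=28 * 288), 0, 100)  # fake CPU % stream

img = cpu.reshape(28, 288)
plt.imshow(img, aspect="auto", cmap="RdYlGn_r", vmin=0, vmax=100)
plt.xlabel("5-minute slot within day")
plt.ylabel("day")
plt.colorbar(label="CPU %")
plt.show()
```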
The whole system Color is preattentive!
Automated analysis • Computing high-CPU % clusters • This selects hot time intervals
Automated analysis... • Detecting persistent anomalies
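One simple way to operationalise "persistent anomaly" (an assumption for illustration, not necessarily the paper's exact rule) is to flag runs of at least k consecutive windows above a threshold:

```python
# Flag runs of >= k consecutive 5-minute windows above a CPU threshold.
# Threshold and run length are illustrative parameters.
import numpy as np

def persistent_anomalies(stream, threshold=90.0, k=12):
    """Return (start, end) index pairs of runs of >= k hot windows."""
    hot = stream > threshold
    runs, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i                      # a hot run begins
        elif not h and start is not None:
            if i - start >= k:
                runs.append((start, i))    # run long enough: keep it
            start = None
    if start is not None and len(hot) - start >= k:
        runs.append((start, len(hot)))     # run extends to the end
    return runs
```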
Looking for correlations
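The correlation step can be sketched as a pairwise Pearson correlation across the 128 server streams, reporting strongly correlated pairs (synthetic data; the 0.9 threshold is an illustrative choice, not from the paper):

```python
# Pairwise correlation across 128 server streams; report pairs with
# r > 0.9. One correlated pair is planted so the example prints output.
import numpy as np

rng = np.random.default_rng(3)
streams = rng.normal(size=(128, 8064))              # 128 servers x ~8k obs
streams[1] = streams[0] + rng.normal(0, 0.1, 8064)  # planted correlated pair

corr = np.corrcoef(streams)                 # 128 x 128 correlation matrix
hi = np.argwhere(np.triu(corr, k=1) > 0.9)  # upper triangle, skip diagonal
for i, j in hi:
    print(f"servers {i} and {j}: r = {corr[i, j]:.2f}")
```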
Example 3 Visualization improvement
A Visual Analytics example (Group 3 – Visualization improvement) Data distribution and perceptual issues • Density maps: several data items may be plotted on the same pixel (e.g., 4 data items on one pixel give density d=4, while other pixels stay empty) • We can map the pixel density values to a 256-level grey or color scale (figure: an 8x8 pixel grid illustrating per-pixel densities)
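A sketch of the density-map construction just described (synthetic positions; the naive linear mapping shown here is exactly what the case study below puts under stress):

```python
# Build a density map: count how many items land on each pixel of an
# 800x450 canvas, then map counts linearly onto 256 grey levels.
import numpy as np

rng = np.random.default_rng(4)
xs, ys = rng.random(60_000), rng.random(60_000)  # normalised item positions

W, H = 800, 450
density, _, _ = np.histogram2d(ys, xs, bins=(H, W), range=[[0, 1], [0, 1]])

# Naive linear mapping of density to a 0..255 grey level.
grey = (density / density.max() * 255).astype(np.uint8)
```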
The case study (Infovis contest 2005) • About 60,000 USA companies plotted on an 800x450 (360,000 pixels) scatter plot • 126 distinct density values ranging over [1..1,633] • 7,042 active pixels (i.e., hosting at least one company): – 2,526 pixels (36%) host exactly one company (d=1) – 1,182 pixels (17%) host two companies (d=2) – ... – 1 pixel (0.01%) hosts 1,633 companies (d=1633)
What is the problem? • The choice of the right mapping is crucial, because the density frequency distribution is very skewed (figure: histogram of pixel count vs. density, 126 distinct values: 36% of active pixels at d=1, 17% at d=2, down to a single pixel at d=1,633)
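With such a skewed distribution, a linear mapping spends almost the entire grey scale on the single d=1,633 pixel and squeezes 36% of the active pixels into one level. One standard remedy (an illustration of the problem, not necessarily the authors' solution) is a rank-based, histogram-equalised mapping:

```python
# Histogram-equalised density-to-grey mapping: grey levels are assigned
# by the rank of each density value, so the scale is used evenly even
# when the density distribution is heavily skewed.
import numpy as np

def equalised_mapping(density):
    """Map per-pixel densities to 0..255 grey levels by rank."""
    flat = density.ravel()
    active = flat > 0                       # keep empty pixels at grey 0
    sorted_vals = np.sort(flat[active])
    ranks = np.searchsorted(sorted_vals, flat[active], side="right")
    grey = np.zeros_like(flat)
    grey[active] = ranks / ranks.max() * 255
    return grey.reshape(density.shape).astype(np.uint8)
```

Under this mapping the 36% of pixels at d=1 occupy the low end of the scale, and the rare high-density pixels no longer compress all other values into a few indistinguishable levels.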