Empirical Studies in Information Visualization: Seven Scenarios

A paper, Empirical Studies in Information Visualization: Seven Scenarios (IEEE Transactions on Visualization and Computer Graphics, 30 Nov. 2011), by Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant, and Sheelagh Carpendale takes a new scenario-based look at evaluation in information visualization. The paper is highly applicable when deciding which kind of scenario and methods to employ when evaluating software visualization systems. An earlier technical report is also available: Seven Guiding Scenarios for Information Visualization Evaluation, Technical Report 2011-992-04, Department of Computer Science, University of Calgary.

The paper encapsulates current evaluation practices in the InfoVis research community and provides a different approach to deciding what might be the most effective evaluation of a given information visualization.

Instead of focusing on evaluation methods, the authors provide an in-depth discussion of evaluation scenarios (derived from a systematic analysis of 850 papers from the community, 361 of which included evaluations), categorized into those for understanding data analysis processes and those for evaluating visualizations themselves.

The scenarios for understanding data analysis are:

  • Understanding Environments and Work Practices (UWP)
  • Evaluating Visual Data Analysis and Reasoning (VDAR)
  • Evaluating Communication Through Visualization (CTV)
  • Evaluating Collaborative Data Analysis (CDA)

The scenarios for understanding visualizations are:

  • Evaluating User Experience (UE)
  • Evaluating User Performance (UP)
  • Evaluating Visualization Algorithms (VA)

Trends in the analyzed papers revealed that the distribution across the seven scenarios remains skewed towards Evaluating User Experience (UE, 34%), Evaluating User Performance (UP, 33%), and Evaluating Visualization Algorithms (VA, 22%), which together account for 89% of the evaluations, leaving only 11% for the process scenarios.

One explanation for this skewed result is that evaluation in the InfoVis community has followed the traditions of Human-Computer Interaction (HCI) and Computer Graphics (CG), both of which have traditionally focused on usability evaluations, controlled experiments, and algorithm evaluations. The lack of evaluations in the process group raises the question: should more of these types of evaluations be conducted and published?

Nonetheless, the intention of the paper is to encourage researchers to reflect on evaluation goals and questions before choosing methods. By providing a diverse set of examples for each scenario, the authors hope that evaluation in InfoVis will come to employ a more diverse set of methods.

An interesting follow-up study would be to analyze SoftVis papers and see how they are distributed across these scenarios.

