The paper Empirical Studies in Information Visualization: Seven Scenarios (IEEE Transactions on Visualization and Computer Graphics, 30 Nov. 2011) by Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant, and Sheelagh Carpendale takes a new scenario-based look at evaluation in information visualization. This paper is highly applicable to selecting which scenario and methods to employ when evaluating software visualization systems. An earlier technical report is also available: Seven Guiding Scenarios for Information Visualization Evaluation. Techreport 2011-992-04, Department of Computer Science, University of Calgary.
The paper encapsulates the current evaluation practices in the InfoVis research community and provides a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization.
Instead of focusing on evaluation methods, they provide an in-depth discussion of evaluation scenarios (derived from a systematic analysis of 850 papers from the community, of which 361 had evaluations), categorized into those for understanding data analysis processes and those for evaluating visualizations themselves.
The scenarios for understanding data analysis are:
- Understanding Environments and Work Practices (UWP)
- Evaluating Visual Data Analysis and Reasoning (VDAR)
- Evaluating Communication Through Visualization (CTV)
- Evaluating Collaborative Data Analysis (CDA)
The scenarios for understanding visualizations are:
- Evaluating User Experience (UE)
- Evaluating User Performance (UP)
- Evaluating Visualization Algorithms (VA)
Trends in evaluation from the analyzed papers revealed that the distribution of papers across the seven scenarios remains skewed towards Evaluating User Experience (UE – 34%), Evaluating User Performance (UP – 33%), and Evaluating Visualization Algorithms (VA – 22%), for a total of 89%, leaving only 11% for the process scenarios.
One explanation for this skewed result is that evaluation in the InfoVis community has followed the traditions of Human Computer Interaction (HCI) and Computer Graphics (CG), both of which have traditionally focused on usability evaluations, controlled experiments, and algorithm evaluations. The lack of evaluations in the process group raises the question of whether more of these types of evaluations should be conducted and published.
Nonetheless, the intention of the paper is to encourage researchers to reflect on evaluation goals and questions before choosing methods. By providing a diverse set of examples for each of the scenarios, the authors hope that evaluation in InfoVis will employ a more diverse set of evaluation methods.
An interesting study would be to analyze SoftVis papers and see which of these scenarios they fall into.