Affective Engagement in Information Visualization

2019-08-13 (GMT), by Ya-Hsin Hung
Evaluating the “success” of an information visualization (InfoVis) whose main purpose is communication or presentation is challenging. Among metrics that go beyond traditional analysis- and performance-oriented approaches, one construct that has received attention in recent years is “user engagement”. In this research, I propose Affective Engagement (AE), a user's engagement along emotional dimensions, as a metric for InfoVis evaluation. I developed and evaluated a self-report measurement instrument named AEVis that quantifies a user's level of AE while using an InfoVis. Following a systematic process of evidence-centered design, each activity during instrument development contributed specific evidence supporting the validity of interpretations of scores from the instrument. Development proceeded in four stages. In stage 1, I examined the role and characteristics of AE in evaluating InfoVis through an exploratory qualitative study, from which 11 indicators of AE were proposed: Fluidity, Enthusiasm, Curiosity, Discovery, Clarity, Storytelling, Creativity, Entertainment, Untroubling, Captivation, and Pleasing. In stage 2, I developed an item bank of candidate items for assessing a user's level of AE and assembled the first version of the survey instrument based on feedback from the target population and domain experts. In stage 3, I conducted three field tests to revise the instrument, applying three analytical methods: Item Analysis, Factor Analysis (FA), and Item Response Theory (IRT). In stage 4, a follow-up field test investigated the external relations between the constructs in AEVis and other existing instruments.
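To give a flavor of the Item Analysis applied in stage 3, the sketch below computes two standard item statistics, Cronbach's alpha and corrected item-total correlations, on a small Likert-scale response matrix. The data and item count here are hypothetical illustrations, not values from the study:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) Likert matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(scores):
    """Correlation of each item with the sum of the remaining items."""
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
responses = [
    [5, 4, 5, 4],
    [4, 4, 4, 3],
    [2, 2, 1, 2],
    [3, 3, 3, 3],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
]

print(round(cronbach_alpha(responses), 2))        # -> 0.97
print(corrected_item_total(responses).round(2))   # all strongly positive
```

In practice, items with low corrected item-total correlations are candidates for removal during field-test revisions, since they contribute little to the scale's internal consistency.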
The results of the four stages support the validity and reliability of the developed instrument. In stage 1, the AE characteristics elicited from the observations support the theoretical background of the test content. In stage 2, feedback and review from target users and domain experts provide validity evidence for the test content of the instrument in the context of InfoVis. In stage 3, results from Exploratory and Confirmatory FA, as well as IRT methods, reveal evidence for the internal structure of the instrument. In stage 4, the correlations between the total scores and sub-scores of AEVis and those of other existing instruments provide external-relation evidence for score interpretations. Using this instrument, visualization researchers and designers can evaluate non-performance-related aspects of their work efficiently and without specific domain knowledge, and can further investigate the utility and implications of AE. In the future, this research may provide a foundation for expanding the theoretical basis of engagement in the fields of human-computer interaction and information visualization.
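The stage-4 external-relation analysis rests on score correlations of the kind sketched below: a Pearson correlation between instrument totals across participants. The scores and sample size are invented for illustration and do not reproduce the study's data:

```python
import numpy as np

# Hypothetical totals for 8 participants: AEVis vs. an existing
# engagement questionnaire (all values are illustrative only).
aevis_total = np.array([34, 41, 28, 45, 30, 38, 25, 42])
other_total = np.array([30, 39, 25, 44, 33, 36, 22, 40])

# Pearson correlation between the two sets of total scores; a strong
# positive r would count as convergent (external-relation) evidence.
r = np.corrcoef(aevis_total, other_total)[0, 1]
print(round(r, 2))
```

The same computation applies per sub-score; patterns of high correlations with related constructs and lower correlations with unrelated ones support the intended score interpretations.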