Accepted Position Papers
Here you can find all 20 position papers that have been accepted. We thank all authors for their interesting work and look forward to discussing with you at the workshop.
Evaluating the User Experience of Mobile VR
Gonzalo E Garcia (University of Melbourne); Kate Ferris (Human Computer Interaction, School of Computing and Information Systems, University of Melbourne); Greg Wadley (University of Melbourne, AUS)
Abstract: For decades, Virtual Reality (VR) systems have provided unique user experiences, inspiring researchers to develop methods for assessing user experiences of VR. Until recently, VR was restricted to tethered configurations in indoor settings; now, portable systems such as Oculus Quest combine excellent immersion with mobility, allowing VR to move into public spaces and unpredictable contexts. Just as the emergence of mobile screen-based computing required the development of new methods of design and evaluation, so the emergence of mobile VR prompts us to consider whether existing evaluation methods need to be augmented. In this paper, we describe our method to evaluate the user experience of a VR application that replicates flooding in the city of Melbourne, Australia. We conducted an empirical study with this application and a mobile VR device, and we assess the user experience with a number of qualitative and quantitative methods that are suitable for field studies.
Position paper Presentation video
Assessment of Presence in Augmented and Mixed Reality
Alexander Toet (TNO Human Factors); Tina Mioch (TNO Human Factors); Simon N.B. Gunkel (TNO); Omar Niamut (TNO); Jan B.F. van Erp (TNO)
Abstract: While the sense of presence in VR has been extensively studied, there are currently no scales available to measure the sense of presence in AR and MR. Here we propose a general Holistic Presence Questionnaire (HPQ) that measures presence through the sense of telepresence, internal and external plausibility, and perceived behavioral and cognitive affordances in the mediated environment. The HPQ is sufficiently general to measure presence experienced in any type of multi-sensory (visual, auditory, haptic and olfactory) setting, including VR, AR and MR systems. By using single items to tap into each of the relevant psychological processing levels, the HPQ is comprehensive and efficient. Individual items are sufficiently concise that their (repeated) application minimally interferes with the experience.
Position paper Presentation video
There Is No One Size Fits All: The Need for Evaluating MR Interaction Techniques
Patrick Harms (Nuremberg Institute of Technology)
Abstract: In WIMP and touch interfaces, we are restricted to keyboard, mouse, and touch screens with corresponding gestures. In contrast, Mixed Reality (MR) enables a much wider set of interaction techniques requiring more diverse hardware and design patterns. This means that when designing MR applications, we need to decide which interaction technique works best, which hardware is hence required, and which design patterns may apply. Therefore, we propose that MR developers should place particular emphasis on selecting the best interaction technique, hardware, and pattern for a specific application. This includes implementing multiple techniques for interactions that are specific to the application under development and testing them with users to assess the respective User Experience (UX) and usability. This makes it possible to select the best interaction solution for a specific MR application.
Position paper Presentation video
Experiences with User Studies in Augmented Reality
Marc Satkowski (Interactive Media Lab Dresden, Technische Universität Dresden); Wolfgang Büschel (Interactive Media Lab Dresden, Technische Universität Dresden); Raimund Dachselt (Technische Universität Dresden)
Abstract: The research field of augmented reality (AR) is of increasing popularity, as seen, among others, in several recently published surveys. To produce further advancements in AR, it is not only necessary to create new systems or applications, but also to evaluate them. One important aspect of evaluation is the general understanding of how users experience a given AR application, as reflected in the increasing number of papers on this topic published in recent years. With the steadily growing understanding and development of AR in general, it is only a matter of time until AR devices make the leap into the consumer market, where such an in-depth understanding of users is even more essential. Thus, a better understanding of factors that could influence the design and results of user experience studies can help us make them more robust and dependable in the future. In this position paper, we describe three challenges that researchers face while designing and conducting AR user studies. We encountered these challenges in our past and current research, including papers that focus on perceptual studies of visualizations, interaction studies, and studies exploring the use of AR applications and their design spaces.
Position paper Presentation video
Using Internet Studies to Assess the Impact of Self-Focused Mixed Reality on Perception, Affect, and Behavior
Ayanna E Seals (New York University); Monsurat Olaosebikan (Tufts University); Jennifer Otiono (Wellesley College); Orit Shaer (Wellesley College); Oded Nov (New York University)
Abstract: Self-focused mixed reality (MR) technologies (such as video filters and smart mirrors) are growing in popularity. It is in the best interest of end-users if these technologies and their accompanying design features are evaluated for their impact on users’ perception, affective experiences, and behavior. Methodologies that enable these evaluations would help designers navigate the effect of increased or augmented self-attention on their design goals. In this paper, we present a recent online MR study methodology used to assess the impact of self-focused MR in a health behavior context. We present challenges and opportunities for evaluating the role of objective self-awareness and self-focused attention in future MR studies.
Lessons From a Remote At-Home Evaluation of an Augmented Reality Application
Jennifer Otiono (Wellesley College); Ziyue Qian (Wellesley College); Ayanna E Seals (New York University); Oded Nov (New York University); Orit Shaer (Wellesley College)
Abstract: In this paper, we present the methods, challenges, and lessons from conducting a moderated, remote, at-home study of an Augmented Reality (AR) application that overlays omic information in users’ kitchens. Due to the COVID-19 pandemic, our team adapted to remote studies, which have presented unique experiences and discussions. We explore ways that could lower barriers for researchers to conduct remote Mixed Reality (MR) studies and assume greater control over a remote study. We argue that remote studies conducted in study participants’ personal spaces can lead to more insightful and nuanced results, but participants’ privacy and issues related to equity should be considered and protected.
Position paper Presentation video
Considerations and Challenges of Measuring Operator Performance in Telepresence and Teleoperation Entailing Mixed Reality Technologies
Eleftherios Triantafyllidis (The University of Edinburgh); Zhibin Li (University of Edinburgh)
Abstract: Assessing human performance in robotic scenarios such as those seen in telepresence and teleoperation has always been a challenging task. With the recent spike in mixed reality technologies and the subsequent focus by researchers, new pathways have opened in elucidating human perception and maximising overall immersion. Yet with the multitude of different assessment methods for evaluating operator performance in virtual environments within the fields of HCI and HRI, inter-study comparability and transferability are limited. In this short paper, we present a brief overview of existing methods for assessing operator performance, including subjective and objective approaches, while also attempting to capture future technical challenges and frontiers. The ultimate goal is to assist readers and point them towards potentially important directions, with the future hope of providing a unified immersion framework for teleoperation and telepresence by standardizing a set of guidelines and evaluation methods.
Position paper Presentation video
Assessing Discomfort in Mixed Reality with Subjective Measures
Teresa Hirzle (Ulm University)
Abstract: Subjective assessment of discomfort presents an ongoing challenge for the evaluation of user experience in mixed reality (MR) systems. Several problems complicate the issue, such as the definition of appropriate rating scales, repeated exposure of participants to the systems, or the challenge of assessing relative symptom scores by repeatedly administering questionnaires. In addition, the large number of existing terms and concepts for discomfort symptoms makes it difficult to compare studies. Finally, the experience and assessment of discomfort are heavily influenced by rapidly changing technology and have to be constantly reevaluated. This position paper aims to raise awareness of the ongoing challenge of assessing discomfort in MR. Grounded in our prior work, we discuss five specific problems that need to be addressed in the future.
What We Measure in Mixed Reality Experiments
Anthony Steed (UCL)
Abstract: There are many potential measures that one might use when evaluating mixed-reality experiences. In this position paper I will argue that there are various stances to take for evaluation, depending on the framing of the experience within a larger body of work. I will draw upon various types of work that my team has been involved with in order to illustrate these different stances. I will then sketch out some directions for developing more robust measures that can help the field move forward.
Position paper Presentation video
Mixed Reality Methods for Analysis of Multimodal Human-Agent Interactions
Jonathan Harth (Universität Witten/Herdecke); Alexandra Hofmann (Universität Witten/Herdecke)
Abstract: The ongoing development of embodied conversational agents requires a precise analysis of human-agent interaction. Currently, however, there are still only a few approaches that investigate these interactions with multimodal methods, capturing both the individual reflection on the experience and the interactive behavior itself. In this paper, we present a methodological approach that allows collecting data on individual perceptions of interacting with virtual agents as well as on the interaction itself. By means of mixed reality, the jointly coordinated behavior of users and agents in virtual spaces can be captured, which enables a more comprehensive understanding of the complex dynamics of human-agent interactions.
The Extent of the Proteus Effect as a Behavioral Measure for Assessing User Experience in Virtual Reality
Martin Kocur (University of Regensburg); Niels Henze (University of Regensburg); Valentin Schwind (Frankfurt University of Applied Sciences)
Abstract: Assessing the user experience (UX) while being immersed in a virtual environment (VE) is crucial to obtain insights about the quality and vividness of the experience created by virtual reality (VR) systems. These valuable insights are necessary to understand a user’s response to VEs and, therefore, to advance VR research. However, a standardized and effective measure for assessing UX is still missing. Consequently, this lack of suitable measures hinders researchers from gaining knowledge and understanding about the effects of VEs on users and in turn slows down progress in VR technology. To tackle this problem, we propose a behavioral measure for assessing UX based on a phenomenon known as the Proteus effect, which describes changes in behavior and attitude due to the embodiment of avatars with stereotypical characteristics. As avatars are a crucial part of an immersive experience, the extent of behavioral changes caused by the embodiment of avatars may pose an opportunity to implicitly quantify the UX of a VE. This paper discusses an alternative behavioral measure and contributes to the debate about suitable methods for assessing UX in VR systems.
Open the microphone, please! Conversational UX Evaluation in Virtual Reality
Inmaculada Rodriguez (University of Barcelona); Anna Puig (University of Barcelona)
Abstract: This paper proposes the use of conversational interactions for gathering feedback from users in Virtual Reality (VR) evaluation studies. A conversational interaction allows the user to communicate with the system using natural language in the form of text, voice or both. This interaction is facilitated by what are known as conversational agents (CAs), which engage in a conversation with the user. In contrast to gathering user feedback once the experience has finished, these agents, either embodied or not, are in charge of administering post-task and post-study questionnaires to the user, which are carried out inside the VR environment. In this paper we conceptualise conversational agents in the context of UXE (User eXperience Evaluation) in VR, and analyse key design elements to be taken into consideration when designing them. We hope our discussion encourages others to study in-world evaluation in VR.
Questionnaires and Qualitative Feedback Methods to Measure User Experience in Mixed Reality
Tobias Drey (Ulm University); Michael Rietzler (Ulm University); Enrico Rukzio (Ulm University)
Abstract: Evaluating the user experience of a software system is an essential final step of every research project. Several concepts such as flow, affective state, presence, or immersion exist to measure user experience. Typical measurement techniques analyze physiological data, gameplay data, and questionnaires. Qualitative feedback methods are another approach to collecting detailed user insights. In this position paper, we will discuss how we used questionnaires and qualitative feedback methods in previous mixed reality work to measure user experience. We will present several measurement examples, discuss their current limitations, and provide guideline propositions to support comparable mixed reality user experience research in the future.
Position paper Presentation video
Mixed reality technologies for people with dementia: Participatory evaluation methods
Shital Desai (York University); Arlene Astell (KITE UHN Toronto Rehabilitation Institute)
Abstract: Technologies can support people with early onset dementia (PwD) to aid them in Instrumental Activities of Daily Living (IADL). The integration of physical and virtual realities in Mixed reality technologies (MRTs) could provide scalable and deployable options for developing prompting systems for PwD. However, these emerging technologies should be evaluated and investigated for feasibility with PwD. Survey instruments such as the SUS and SUPR-Q, along with ethnographic methods used for the usability evaluation of websites and apps, are also used to evaluate and study MRTs. However, PwD who cannot provide written and verbal feedback are unable to participate in these studies. MRTs also present challenges due to the different ways in which physical and virtual realities can be coupled. Experiences with the physical, the virtual, and the couplings between the two must be considered when evaluating MRTs. This paper presents methods that we have used in our labs, DATE and SaTS, to study the use of MRTs with PwD. These methods are used to understand the needs of PwD and other stakeholders as well as to investigate the experiences and interactions of PwD with these emerging technologies.
Position paper Presentation video
Evaluating User Experience in Tangible Augmented Reality
Denise Kahl (Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Saarland Informatics Campus); Marc Ruble (Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), Saarland Informatics Campus); Antonio Krueger (Saarland University)
Abstract: In Virtual Reality (VR) applications, a lot of attention is already paid to how real a physical proxy object you interact with needs to feel in order to provide a good User Experience. In the evaluation of Augmented Reality (AR) applications, however, this is still strongly underrepresented. Especially in the area of Tangible Augmented Reality (TAR), the measurement of Presence is of great importance: since virtual objects are integrated into the real world, it is important that interaction with them feels as realistic as possible. Therefore, it should also become possible to determine User Experience in AR applications, so that applications can be compared with each other. The measurements carried out in the field of VR, such as the determination of Presence, should be used as a reference.
Let’s Make A Story: Measuring MR Child Engagement
Duotun Wang (University of Maryland); Jennifer Healey (Adobe Research); Jing Qian (Brown University); Curtis Wigington (Adobe Research); Tong Sun (Adobe Research); Huaishu Peng (University of Maryland)
Abstract: We present the results of a pilot study measuring child engagement with the “Let’s Make A Story” system, a novel mixed reality (MR) collaborative storytelling system designed for grandparents and grandchildren. We compare our MR experience against an equivalent paper story experience. The goal of our pilot was to test the system with actual child users and to assess the suitability of time, user-generated story content, and facial expression analysis as metrics of child engagement. We find that multiple confounding variables make these metrics problematic, including attribution of engagement time, spontaneous non-story-related conversation, and keeping the child’s full forward face continuously in view during the story. We present our platform and experiences, along with our finding that the strongest metric was user comments in the post-experiential interview.
Position paper Presentation video
Mixed Reality Doesn’t Need Standardized Evaluation Methods
Richard T Skarbez (La Trobe University); Mary C Whitton (UNC Chapel Hill); Missie Smith (Independent Researcher)
Abstract: In this position paper, we argue that standardized assessment methods for mixed reality are unachievable and undesirable. In fact, we argue for a future in which there is a greater diversity of purpose-specific measurement tools, rather than increased standardization. However, we recognize the value of, and encourage the use and development of, standard evaluation methods: those externally validated by, accepted by, and frequently used by the community.
Position paper Presentation video
Towards the Evaluation of Kinesthetic Empathy in Virtual Reality
Roosa Piitulainen (Aalto University); Elisa Mekler (Aalto University)
Abstract: As part of a new H2020 FET Proactive project, we are looking to evaluate social dance experiences in virtual reality (VR). However, few existing measures appear directly applicable. In this position paper, we propose kinesthetic empathy (KE) as a framework with which to approach the social aspect of MR interactions. We review some evaluation methods that are currently used and that may be employed to operationalize KE. We conclude with open questions and challenges that warrant further discussion.
Position paper Presentation video
Immersive Design Reviews through Situated Qualitative Feedback
Matt Whitlock (University of Colorado Boulder); Danielle A Szafir (University of Colorado)
Abstract: As commercial AR and VR headsets become increasingly available to consumers, developing useful applications will be critical in unlocking the unique affordances of immersive headsets, such as an embodied perspective, natural input and large display space. Building these applications requires iterative design, where professional designers and domain experts can make changes based on user feedback. While quantitative measures such as time to completion and accuracy can provide insight into user performance, they do not capture feedback such as preferences and recommendations for future iterations, which can be critical in refining the application design. With this position paper, we discuss how embedded qualitative feedback mechanisms can support the iterative design process for immersive applications. By supporting the collection of qualitative feedback in situ, prototyping tools can capture a better snapshot of user experience than quantitative metrics alone.
Towards Low-burden Responses to Open Questions in VR
Dmitry Alexandrovsky (Uni-Bremen, DMLab); Susanne Putze (University of Bremen); Alexander Schülke (University of Bremen); Rainer Malaka (University of Bremen, Digital Media Lab)
Abstract: Subjective self-reporting in VR user studies is a burdensome and often tedious task for participants. To minimize the disruption of the ongoing experience, VR research has started to administer surveys directly inside the virtual environment. However, due to the tedious nature of text entry in VR, most VR surveying tools focus on closed questions with predetermined responses, while open questions with free-text responses remain unexplored. This neglects a crucial part of UX research. To provide guidance on suitable self-reporting methods for open questions in VR user studies, this position paper presents a comparative study of three text-entry methods in VR and outlines future directions towards low-burden qualitative responding.