Frontiers in Robotics and AI. 2018 Apr 17;5:37. doi: 10.3389/frobt.2018.00037

A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014

Arindam Dey 1,*, Mark Billinghurst 1, Robert W Lindeman 2, J Edward Swan II 3
PMCID: PMC7805955  PMID: 33500923

Abstract

Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.

Keywords: augmented reality, systematic review, user studies, usability, experimentation, classifications

1. Introduction

Augmented Reality (AR) is a technology field that involves the seamless overlay of computer-generated virtual images on the real world, in such a way that the virtual content is aligned with real-world objects, and can be viewed and interacted with in real time (Azuma, 1997). AR research and development has made rapid progress in the last few decades, moving from research laboratories to widespread availability on consumer devices. Since the field's beginnings in the 1960s, more advanced and portable hardware has become available, and registration accuracy, graphics quality, and device size have largely been addressed to a satisfactory level, which has led to rapid growth in the adoption of AR technology. AR is now being used in a wide range of application domains, including Education (Furió et al., 2013; Fonseca et al., 2014a; Ibáñez et al., 2014), Engineering (Henderson and Feiner, 2009; Henderson S. J. and Feiner, 2011; Irizarry et al., 2013), and Entertainment (Dow et al., 2007; Haugstvedt and Krogstie, 2012; Vazquez-Alvarez et al., 2012). However, to be widely accepted by end users, AR usability and user experience issues still need to be improved.

To help the AR community improve usability, this paper provides an overview of 10 years of AR user studies, from 2005 to 2014. Our work builds on the previous reviews of AR usability research shown in Table 1. These years were chosen because they cover an important gap in other reviews, and also are far enough from the present to enable the impact of the papers to be measured. Our goals are to provide a broad overview of user-based AR research, to help researchers find example papers that contain related studies, to help identify areas where there have been few user studies conducted, and to highlight exemplary user studies that embody best practices. We therefore hope the scholarship in this paper leads to new research contributions by providing outstanding examples of AR user studies that can help current AR researchers.

Table 1.

Summary of earlier surveys of AR usability studies.

Publication Venues considered Coverage years Total reviewed publications
Swan and Gabbard, 2005 IEEE ISMAR, ISWC, IEEE VR, and Presence 1992–2004 21
Dünser et al., 2008 All venues in IEEE Xplore, ACM Digital Library, and Springer Link 1992–2007 165
Bai and Blackwell, 2012 IEEE ISMAR 2001–2010 71
This survey [2017] All venues indexed in Scopus 2005–2014 291

1.1. Previous user study survey papers

Expanding on the studies shown in Table 1, Swan and Gabbard (2005) conducted the first comprehensive survey of AR user studies. They reviewed 1,104 AR papers published in four important venues between 1992 and 2004; among these papers they found only 21 that reported formal user studies. They classified these user study papers into three categories: (1) low-level perceptual and cognitive issues such as depth perception, (2) interaction techniques such as virtual object manipulation, and (3) collaborative tasks. The next comprehensive survey was by Dünser et al. (2008), who used a list of search queries across several common bibliographic databases, and found 165 AR-related publications reporting user studies. In addition to classifying the papers into the same categories as Swan and Gabbard (2005), they also classified the papers based on user study methods such as objective, subjective, qualitative, and informal. In another literature survey, Bai and Blackwell (2012) reviewed 71 AR papers reporting user studies, but they only considered papers published in the International Symposium on Mixed and Augmented Reality (ISMAR) between 2001 and 2010. They also followed the classification of Swan and Gabbard (2005), but additionally identified a new category of studies that investigated user experience (UX) issues. Their review thoroughly reported the evaluation goals, performance measures, UX factors investigated, and measurement instruments used, as well as the demographics of the studies' participants. However, there has been no comprehensive survey since 2010, and none of these earlier surveys used an impact measure to determine the significance of the papers reviewed.

1.1.1. Survey papers of AR subsets

Some researchers have also published review papers focused on more specific classes of user studies. For example, Kruijff et al. (2010) reviewed AR papers focusing on the perceptual pipeline, and identified challenges that arise from the environment, capturing, augmentation, display technologies, and the user. Similarly, Livingston et al. (2013) published a review of user studies in the AR X-ray vision domain. As such, their review deeply analyzed perceptual studies in a niche AR application area. Finally, Rankohi and Waugh (2013) reviewed AR studies in the construction industry, although their review additionally considered papers without user studies. Beyond these papers, many other AR papers have included literature reviews that cover a few related user studies, such as Wang et al. (2013), Carmigniani et al. (2011), and Papagiannakis et al. (2008).

1.2. Novelty and contribution

These reviews are valued by the research community, as shown by the number of times they have been cited (e.g., 166 Google Scholar citations for Dünser et al., 2008). However, due to a number of factors there is a need for a more recent review. Firstly, while early research in AR was primarily based on head-mounted displays (HMDs), in the last few years there has been a rapid increase in the use of handheld AR devices, and more advanced hardware and sensors have become available. These new wearable and mobile devices have created new research directions, which have likely impacted the categories and methods used in AR user studies. In addition, in recent years the AR field has expanded, resulting in a dramatic increase in the number of published AR papers, and papers with user studies in them. Therefore, there is a need for a new categorization of current AR user research, as well as the opportunity to consider new classification measures such as paper impact, as reviewing all published papers has become less feasible. Finally, AR papers are now appearing in a wider range of research venues, so it is important to have a survey that covers many different journals and conferences.

1.2.1. New contributions over existing surveys

Compared to these earlier reviews, there are a number of important differences with the current survey, including:

  • we have considered a larger number of publications from a wide range of sources

  • our review covers more recent years than earlier surveys

  • we have used paper impact to help filter the papers reviewed

  • we consider a wider range of classification categories

  • we also review issues experienced by the users.

1.2.2. New aims of this survey

To capture the latest trends in usability research in AR, we have conducted a thorough, systematic literature review of 10 years of AR papers published between 2005 and 2014 that contain a user study. We classified these papers based on their application areas, methodologies used, and type of display examined. Our aims are to:

  1. identify the primary application areas for user research in AR

  2. describe the methodologies and environments that are commonly used

  3. propose future research opportunities and guidelines for making AR more user friendly.

The rest of the paper is organized as follows: section 2 details the method we followed to select the papers to review, and how we conducted the reviews. Section 3 then provides a high-level overview of the papers and studies, and introduces the classifications. The following sections report on each of the classifications in more detail, highlighting one of the more impactful user studies from each classification type. Section 5 concludes by summarizing the review and identifying opportunities for future research. Finally, in the appendix we have included a list of all papers reviewed in each of the categories with detailed information.

2. Methodology

We followed a systematic review process divided into two phases: the search process and the review process.

2.1. Search process

One of our goals was to make this review as inclusive as practically possible. We therefore considered all papers published in conferences and journals between 2005 and 2014 that include the term “Augmented Reality” and involve user studies. We searched the Scopus bibliographic database, using the same search terms that were used by Dünser et al. (2008) (Table 2). This initial search resulted in a total of 1,147 unique papers. We then scanned each one to identify whether or not it actually reported on AR research; excluding papers not related to AR reduced the number to 1,063. We next removed any paper that did not actually report on a user study, which reduced our pool to 604 papers. We then examined these 604 papers, and kept only those papers that provided all of the following information: (i) participant demographics (number, age, and gender), (ii) design of the user study, and (iii) the experimental task. Only 396 papers satisfied all three of these criteria. Finally, unlike previous surveys of AR usability studies, we next considered how much impact each paper had, to ensure that we were reviewing papers that others had cited. For each paper we used Google Scholar to find the total citations to date, and calculated its Average Citation Count (ACC):

Table 2.

Search terms used in the Scopus database.

“Augmented reality” AND “user evaluation(s)”
“Augmented reality” AND “user study/-ies”
“Augmented reality” AND “feedback”
“Augmented reality” AND “experiment(s)”
“Augmented reality” AND “pilot study”
“Augmented reality” AND participant AND study
“Augmented reality” AND participant AND experiment
“Augmented reality” AND subject AND study
“Augmented reality” AND subject AND experiment

We searched in Title, Abstract, and Keywords fields.

$$\mathrm{ACC} = \frac{\text{total lifetime citations}}{\text{lifetime (years)}} \tag{1}$$

For example, if a paper was published in 2010 (a 5 year lifetime until 2014) and had a total of 10 citations in Google Scholar in April 2015, its ACC would be 10/5 = 2.0. Based on this formula, we included all papers that had an ACC of at least 1.5, showing that they had at least a moderate impact in the field. This resulted in a final set of 291 papers that we reviewed in detail. We deliberately excluded papers published after 2014 because most of them had not yet gathered significant citations.
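To make the inclusion criterion concrete, the following minimal sketch shows how Equation (1) and the 1.5 ACC threshold could be applied to a bibliography export. The data structure and field names are illustrative assumptions, not the actual scripts used in this review.

```python
from dataclasses import dataclass
from typing import List

ACC_THRESHOLD = 1.5  # minimum average citations per year for inclusion
CENSUS_YEAR = 2015   # Google Scholar citation counts were collected in April 2015

@dataclass
class Paper:
    title: str
    year: int        # publication year, 2005-2014
    citations: int   # total Google Scholar citations at census time

def acc(paper: Paper) -> float:
    """Average Citation Count (Equation 1): lifetime citations / lifetime in years."""
    lifetime_years = CENSUS_YEAR - paper.year  # a 2010 paper has a 5-year lifetime through 2014
    return paper.citations / lifetime_years

def filter_by_impact(papers: List[Paper]) -> List[Paper]:
    """Keep only papers that meet the ACC inclusion criterion."""
    return [p for p in papers if acc(p) >= ACC_THRESHOLD]

# Worked example from the text: a 2010 paper with 10 citations has ACC = 10 / 5 = 2.0.
assert acc(Paper("Example AR user study", 2010, 10)) == 2.0
```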

2.2. Reviewing process

In order to review this many papers, we randomly divided them among the authors for individual review. However, we first performed a norming process, in which all of the authors reviewed the same five randomly selected papers. We then met to discuss our reviews, and reached a consensus about what review data would be captured. We determined that our reviews would focus on the following attributes:

  • application areas and keywords

  • experimental design (within-subjects, between-subjects, or mixed-factorial)

  • type of data collected (qualitative or quantitative)

  • participant demographics (age, gender, number, etc.)

  • experimental tasks and environments

  • type of experiment (pilot, formal, field, heuristic, or case study)

  • senses augmented (visual, haptic, olfactory, etc.)

  • type of display used (handheld, head-mounted display, desktop, etc.).

In order to systematically enter this information for each paper, we developed a Google Form. During the reviews we also flagged certain papers for additional discussion. Overall, this reviewing phase took approximately 2 months. During this time, we regularly met and discussed the flagged papers; we also clarified any concerns and generally strove to maintain consistency. At the end of the review process we had identified a small number of papers where the classification was unclear, so we held a final meeting to arrive at a consensus view.
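As an illustration of how these attributes can be captured consistently, the sketch below encodes one review as a structured record. The field names and value vocabularies are our own illustrative assumptions; the actual Google Form used equivalent free-text and multiple-choice entries.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StudyReview:
    """One reviewer's coding of a single user study (illustrative schema only)."""
    application_area: str                                # e.g., "Interaction", "Medical"
    keywords: List[str] = field(default_factory=list)
    design: Optional[str] = None                         # "within-subjects" | "between-subjects" | "mixed-factorial"
    data_types: List[str] = field(default_factory=list)  # "qualitative" and/or "quantitative"
    participants: Optional[int] = None
    female_participants: Optional[int] = None
    mean_age: Optional[float] = None
    tasks: List[str] = field(default_factory=list)       # e.g., "performance", "questionnaire"
    environment: Optional[str] = None                    # "indoor" | "outdoor" | "both"
    study_type: Optional[str] = None                     # "pilot" | "formal" | "field" | "heuristic" | "case study"
    senses: List[str] = field(default_factory=list)      # "visual", "haptic", "olfactory", ...
    displays: List[str] = field(default_factory=list)    # "handheld", "HMD", "desktop", ...
    flagged_for_discussion: bool = False                 # set aside for group review

# Example record for a hypothetical paper:
review = StudyReview(application_area="Collaboration", design="within-subjects",
                     participants=12, displays=["handheld"])
```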

2.3. Limitations and validity concerns

Although we strove to be systematic and thorough as we selected and reviewed these 291 papers, we can identify several limitations and validity concerns with our methods. The first involves using the Scopus bibliographic database. Although using such a database has the advantage of covering a wide range of publication venues and topics, and although it did cover all of the venues where the authors are used to seeing AR research, it remains possible that Scopus missed publication venues and papers that should have been included. Second, although the search terms we used seem intuitive (Table 2), there may have been papers that did not use “Augmented Reality” as a keyword when describing an AR experience. For example, some papers may have used the term “Mixed Reality,” or “Artificial Reality.”

Finally, although using the ACC as a selection factor narrowed the initial 604 papers to 291, it is possible that the ACC excluded papers that should have been included. In particular, because citations are accumulated over time, it is quite likely that we missed some papers from the last several years of our 10-year review period that may soon prove influential.

3. High-level overview of reviewed papers

Overall, the 291 papers report a total of 369 studies. Table 3 gives summary statistics for the papers, and Table 4 gives summary statistics for the studies. These tables contain bar graphs that visually depict the magnitude of the numbers; each color indicates how many columns are spanned by the bars. For example, in Table 3 the columns Paper, Mean ACC, and Mean Author Count are summarized individually, and the longest bar in each column is scaled according to the largest number in that column. However, Publications spans two columns, where the largest value is 59, and so all of the other bars for Publications are scaled according to 59.

Table 3.

Summary of the 291 reviewed papers.

[Table 3 is presented as an image in the original publication.]

Table 4.

Summary of the 369 user studies reported by the 291 reviewed papers.

[Table 4 is presented as an image in the original publication.]

Figure 1 further summarizes the 291 papers through four graphs, all of which indicate changes over the 10 year period between 2005 and 2014. Figure 1A shows the fraction of the total number of AR papers that report user studies, Figure 1B analyzes the kind of display used, Figure 1C categorizes the experiments into application areas, and Figure 1D categorizes the papers according to the kind of experiment that was conducted.

Figure 1.


Throughout the 10 years, fewer than 10% of all published AR papers included a user study (A). Out of the 291 reviewed papers, since 2011 most papers have examined handheld displays, rather than HMDs (B). We filtered the papers based on ACC and categorized them into nine application areas; the largest areas are Perception and Interaction (C). Most of the experiments were conducted in controlled laboratory environments (D).

3.1. Fraction of user studies over time

Figure 1A shows the total number of AR papers published between 2005 and 2014, categorized by papers with and without a user study. As the graph shows, the number of AR papers published in 2014 is five times that published in 2005. However, the proportion of user study papers among all AR papers has remained low, at less than 10% of all publications each year.

3.2. Study design

As shown in Table 4, most of the papers (213, or 73%) used a within-subjects design, 43 papers (15%) used a between-subjects design, and 12 papers (4%) used a mixed-factorial design. However, 23 papers (8%) used study designs other than those mentioned above, such as Baudisch et al. (2013), Benko et al. (2014), and Olsson et al. (2009).

3.3. Study type

We found that it was relatively rare for researchers to report on conducting pilot studies before their main study. Only 55 papers (19%) reported conducting at least one pilot study in their experimentation process, and just 25 of them reported the pilot studies with adequate detail, such as study design, participants, and results. This suggests that the importance of pilot studies is not well recognized. The majority of the papers (221, or 76%) conducted their experiments in controlled laboratory environments, while only 44 papers (15%) conducted experiments in a natural environment or as a field study (Figure 1D). This shows a lack of experimentation in real-world conditions. Most of the experiments were formal user studies, and there were almost no heuristic studies, which may indicate that usability heuristics for AR applications are not yet fully developed, and that there is a need for heuristics and standardization.

3.4. Data type

In terms of data collection, a total of 139 papers (48%) collected both quantitative and qualitative data, 78 papers (27%) collected only qualitative data, and 74 (25%) only quantitative data. For the experimental task, we found that the most popular task involved performance (178, or 61%), followed by filling out questionnaires (146, or 50%), perceptual tasks (53, or 18%), interviews (41, or 14%), and collaborative tasks (21, or 7%). In terms of dependent measures, subjective ratings were the most popular with 167 papers (57%), followed by error/accuracy measures (130, or 45%), and task completion time (123, or 42%). We defined a task as any activity carried out by the participants to provide data (quantitative and/or qualitative) about the experimental system(s). Note that many experiments used more than one experimental task or dependent measure, so the percentages sum to more than 100%. Finally, the bulk of the user studies were conducted indoors (246, or 83%), while 43 (15%) were conducted outdoors, and 6 (2%) in a combination of both settings.
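Because a single experiment can use several tasks and several dependent measures, these counts are tallied per label rather than per paper, which is why the percentages exceed 100%. A minimal sketch of this multi-label tally, using invented toy data:

```python
from collections import Counter

# Toy data: each paper lists every experimental task it used.
papers_tasks = [
    ["performance", "questionnaire"],
    ["performance", "perceptual", "questionnaire"],
    ["interview"],
]

task_counts = Counter(task for tasks in papers_tasks for task in tasks)
n_papers = len(papers_tasks)

for task, count in task_counts.most_common():
    print(f"{task}: {count} papers ({100 * count / n_papers:.0f}%)")
# Percentages sum to more than 100% because papers carry multiple labels:
# performance 67%, questionnaire 67%, perceptual 33%, interview 33%.
```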

3.5. Senses

As expected, an overwhelming majority of papers (281, or 96%) augmented the visual sense. The haptic and auditory senses were augmented in 27 (9%) and 21 (7%) papers, respectively. Only six papers (2%) reported augmenting only the auditory sense, and five papers (2%) reported augmenting only the haptic sense. This shows that there is an opportunity for conducting more user studies exploring non-visual senses.

3.6. Participants

The demographics of the participants showed that most of the studies were run with young participants, mostly university students. A total of 182 papers (62%) used participants with an approximate mean age of less than 30 years. A total of 227 papers (78%) reported involving female participants in their experiments, but the ratio of female to male participants was low (43% of total participants in those 227 papers). When all 291 papers are considered, only 36% of participants were female. Many papers (117, or 40%) did not explicitly mention the source of participant recruitment. Of those that did, most (102, or 35%) sourced their participants from universities, whereas only 36 papers (12%) mentioned sourcing participants from the general public. This shows that many AR user studies use young male university students as their subjects, rather than a more representative cross section of the population.

3.7. Displays

We also recorded the displays used in these experiments (Table 3). Most of the papers used either HMDs (102 papers, or 35%) or handhelds (100 papers, or 34%), including six papers that used both. Starting in 2009, the number of papers using HMDs began to decrease while the number of papers using handheld displays increased (Figure 1B). For example, between 2010 and 2014 (204 papers in our review), 50 papers used HMDs and 79 used handhelds, including one paper that used both, and since 2011 papers using handheld displays have consistently outnumbered papers using HMDs. This trend, in which handheld mobile AR has recently become the primary display for AR user studies, is of course driven by the ubiquity of smartphones.

3.8. Categorization

We categorized the papers into nine different application areas (Tables 3, 4): (i) Perception (51 papers, or 18%), (ii) Medical (43, or 15%), (iii) Education (42, or 14%), (iv) Entertainment and Gaming (14, or 5%), (v) Industrial (30, or 10%), (vi) Navigation and Driving (24, or 9%), (vii) Tourism and Exploration (8, or 2%), (viii) Collaboration (12, or 4%), and (ix) Interaction (67, or 23%). Figure 1C shows the change over time in the number of AR papers with user studies in these categories. The Perception and Interaction categories are rather general areas of AR research, and contain work that reports on more low-level experiments, possibly across multiple application areas. Our analysis shows that fewer AR user studies have been published in Collaboration, Tourism and Exploration, and Entertainment and Gaming, identifying future application areas for user studies. There is also a noticeable increase over time in the number of user studies in educational applications. The drop in the number of papers in 2014 is due to our selection criterion of at least 1.5 average citations per year, as these papers were too recent to be cited often. Interestingly, although there were relatively few of them, papers in the Collaboration and the Tourism and Exploration categories received noticeably higher ACC scores than papers in other categories.

3.9. Average authors

As shown in Table 3, most categories had a similar average number of authors per paper, ranging between 3.24 (Education) and 3.87 (Industrial). However, papers in the Medical domain had the highest average number of authors (6.02), which indicates the multidisciplinary nature of this research area. In contrast to all other categories, most of the papers in the Medical category were published in journals, rather than in the common AR publication venues, which are mostly conferences. Entertainment and Gaming (4.71), and Navigation and Driving (4.58) also had considerably higher numbers of authors per paper on average.

3.10. Individual studies

While a total of 369 studies were reported in these 291 papers (Table 4), the majority of the papers (231, or 80%) reported only one user study. Forty-seven (16.2%), nine (3.1%), two (<1%), and one (<1%) papers reported two, three, four, and five studies respectively, including pilot studies. In terms of the median number of participants per study, Tourism and Exploration, and Education were the highest among all categories, with 28 participants per study. The other categories used between 12 and 18 participants per study, while the overall median stands at 16 participants. Based on this insight, it can be claimed that 12 to 18 participants per study is a typical range in the AR community. Out of the 369 studies, 31 (8.4%) were pilot studies, six (1.6%) were heuristic evaluations, 54 (14.6%) were field studies, and the remaining 278 (75.3%) were formal controlled user studies. Most of the studies (272, or 73.7%) were designed as within-subjects, 52 (14.1%) as between-subjects, and 16 (4.3%) as mixed-factorial (Table 4).
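The per-category participant figures above are medians computed over individual studies and then grouped by category; the short sketch below (with invented toy numbers, not our review data) illustrates the aggregation.

```python
from collections import defaultdict
from statistics import median

# Toy data: (application area, number of participants) for individual studies.
studies = [
    ("Education", 28), ("Education", 40), ("Education", 12),
    ("Interaction", 14), ("Interaction", 16),
    ("Medical", 13),
]

participants_by_area = defaultdict(list)
for area, n in studies:
    participants_by_area[area].append(n)

per_area_median = {area: median(ns) for area, ns in participants_by_area.items()}
overall_median = median(n for _, n in studies)

print(per_area_median)  # {'Education': 28, 'Interaction': 15.0, 'Medical': 13}
print(overall_median)   # 15.0
```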

In the following section we review user studies in each of the nine application areas separately. We provide a commentary on each category and also discuss a representative paper with the highest ACCs in each application area, so that readers can understand typical user studies from that domain. We present tables summarizing all of the papers from these areas at the end of the paper.

4. Application areas

4.1. Collaboration

A total of 15 studies were reported in 12 papers in the Collaboration application area. The majority of the studies investigated some form of remote collaboration (Table 5), although Henrysson et al. (2005a) presented a face-to-face collaborative AR game. Interestingly, out of the 15 studies, eight reported using handheld displays, seven used HMDs, and six used some form of desktop display. This makes sense, as collaborative interfaces often require at least one collaborator to be stationary, and desktop displays can be beneficial in such setups. One noticeable feature was the low number of studies performed in the wild or in natural settings (field studies). Only three out of 15 studies were performed in natural settings, and no pilot studies were reported, which is an area for potential improvement. While 14 out of 15 studies were designed to be within-subjects, only 12 participants (median) were recruited per study. On average, roughly one-third of the participants across all studies were female. All studies were performed in indoor locations except for that of Gauglitz et al. (2014b), which was performed outdoors. While a majority of the studies (8) collected both objective (quantitative) and subjective (qualitative) data, five studies were based on only subjective data, and two studies were based on only objective data, both of which were reported in one paper (Henrysson et al., 2005a). Besides subjective feedback or ratings, task completion time and error/accuracy were the other prominent dependent variables used. Only one study used the NASA TLX (Wang and Dunston, 2011).

Table 5.

Summary of user studies in Collaboration application area.

References Topic Data type Displays used Dependent variables Study type Participants (female)
Almeida et al., 2012 AR based video meetings S DT Rating Formal 10 (0)
Chastine et al., 2007 Collaboration S HMD Interview answers Formal 16 (4)
Chen et al., 2013 Remote collaboration O + S HH Time, Subjective feedback Field 16 (7)
Gauglitz et al., 2012 Remote collaboration with an expert O + S HH, DT Error/Accuracy, Rating, Completed task count Formal 48 (21)
Gauglitz et al., 2014a Annotations in remote collaboration S HH, DT, DT touchscreen User preference Field 11 (5)
Gauglitz et al., 2014b Remote collaboration O + S HH, DT Time, Error/Accuracy, Rating Formal 60 (29)
Grasset et al., 2005 Collaboration O + S HMD Time, Error/Accuracy, Rating, Subject movement Formal 14 (2)
Henrysson et al., 2005a Games, Interaction, Tangible Interfaces O HH Rating Formal 12 (0)
Kasahara and Rekimoto, 2014 Remote Collaboration O + S HMD Time, Rating, Body movement Formal 10 (0)
Poelman et al., 2012 Remote Collaboration, Crime Scene Investigation S HMD Observation and discussion Field 5 (0)
Sodhi et al., 2013 Remote Collaboration S HH Rating Formal 8 (1)
Wang and Dunston, 2011 Collaboration O + S HMD Time, NASA TLX Formal 16 (4)

S, Subjective; O, Objective; DT, Desktop; HH, handheld. Participant numbers are absolute values, and where more than one study was reported in a paper, we used average counts.

4.1.1. Representative paper

As an example of the type of collaborative AR experiments conducted, we discuss the paper of Henrysson et al. (2005a) in more detail. They developed an AR-based face-to-face collaboration tool using a mobile phone and reported on two user studies. This paper received an ACC of 22.9, the highest in this category of papers. In the first study, six pairs of participants played a table-top tennis game in three conditions: face-to-face AR, face-to-face non-AR, and non-face-to-face collaboration. In the second experiment, the authors added (and varied) audio and haptic feedback to the game, and evaluated only face-to-face AR. The same six pairs were recruited for this study as well. The authors collected both quantitative and qualitative (survey and interview) data, although they focused more on the latter. They asked questions regarding the usability of the system and asked participants to rank the conditions. They explored several usability issues and provided design guidelines for developing face-to-face collaborative AR applications using handheld displays, for example, designing applications that focus on a single shared workspace.

4.1.2. Discussion

The work done in this category is mostly directed toward remote collaboration. With the advent of modern head-mounted devices such as the Microsoft HoloLens, new types of collaboration can be created, including opportunities for enhanced face-to-face collaboration. Work needs to be done toward making AR-based remote collaboration akin to real-world collaboration, with not only a shared understanding of the task but also a shared understanding of the other collaborators' emotional and physiological states. New gesture-based and gaze-based interactions, and collaboration across multiple platforms (e.g., between AR and virtual reality users), are novel future research directions in this area.

4.2. Education

Fifty-five studies were reported in 42 papers in the Education application area (Table 6). As expected, all studies reported some kind of teaching and learning application, with a few niche areas, such as music training, educational games, and teaching body movements. Out of 55 studies, 24 used handheld displays, 8 used HMDs, 16 used some form of desktop display, and 11 used spatial or large-scale displays. One study augmented only sound feedback and used head-mounted speakers (Hatala and Wakkary, 2005). The trend toward handheld displays is prominent in this application area as well. Among all the studies reported, 13 were pilot studies, 14 were field studies, and 28 were controlled lab-based experiments. Thirty-one studies were designed as within-subjects studies, and 16 as between-subjects. Six studies tested only one condition. The median number of participants was 28, jointly the highest among all application areas. Almost 43% of participants were female. Forty-nine studies were performed in indoor locations, four in outdoor locations, and two studies were performed in both locations. Twenty-five studies collected only subjective data, 10 only objective data, and 20 studies collected both types of data. While subjective rating was the primary dependent measure used in most of the studies, some specific measures were also observed, such as pre- and post-test scores, number of items remembered, and engagement. Among the keywords used in the papers, learning was the most common, while interactivity, users, and environments also featured prominently.

Table 6.

Summary of user studies in Education application area.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Anderson and Bischof, 2014 Training, Learning O + S DT IMI Score, Isolation, Muscle control Formal 12 (6)
Arvanitis et al., 2009 Learning O + S HMD Rating, Physiological measures, Sense of wellbeing Formal 5 (2)
Asai et al., 2005 AR Instructions/Annotations S HMD, HH Rating Formal 22 (15)
Cai et al., 2013 Education O + S S/LS Rating, Exam questions correct Formal 50 (30)
Cai et al., 2014 Learning and Teaching O + S DT Error/Accuracy, Rating Formal 29 (13)
Chang et al., 2013 Training O + S S/LS Error/Accuracy, Rating Formal 3 (1)
Chiang et al., 2014 Education O HH Learning outcomes Field 57 (NA)
Cocciolo and Rabina, 2013 Learning, Tourism S HH Interview response Field 34 (74)
Dünser et al., 2012b Education O HH Error/Accuracy Pilot 10 (10)
Fonseca et al., 2014b Education O + S HH Rating, Learning test scores Formal 48 (18)
Fonseca et al., 2014a Education S HH, DT Rating Field 57 (29)
Freitas and Campos, 2008 Educational AR Game for 2nd Graders (7-8 years old) O + S S/LS, DT Error/Accuracy, Qualitative Observation Field 54 (32)
Furió et al., 2013 Education S Unspecified Rating, Multi-choice question responses Pilot 117 (74)
Gama et al., 2012 AR for movement training S DT Rating Pilot 10 (NA)
Hatala and Wakkary, 2005 Museum guide O + S Head-mounted speakers Rating, Subjective response Field 8 (4)
Hou and Wang, 2013 Training O DT Time, Error/Accuracy Formal 28 (14)
Hsiao, 2010 Education O + S S/LS Rating Formal 673 (338)
Hsiao et al., 2012 E-learning O + S S/LS, DT Error/Accuracy, Rating Formal 884 (489)
Hunter et al., 2010 Educational AR S Tangible, location-aware blocks with a video screen Observation Field 9 (NA)
Ibáñez et al., 2014 Education O + S HH Rating, knowledge learnt (pre- and post-test) Formal 60 (15)
Iwata et al., 2011 Self-learning, Gaming S DT Rating Formal 18 (1)
Juan et al., 2011b AR handheld gaming, Educational gaming S HH Rating Formal 38 (14)
Juan et al., 2011a AR educational game S DT Rating, knowledge of animals Formal 31 (14)
Kurt, 2010 Learning S HH Rating, field notes, observations Field 55 (NA)
Li et al., 2011 AR for education O + S HH Error/Accuracy, Rating Formal 36 (20)
Liarokapis, 2005 Music Education S HMD, DT Ease of use/usability Pilot 9 (NA)
Lin et al., 2013 Education O HH Correct answers in physics tests Formal 40 (25)
Luckin and Fraser, 2011 Learning and Teaching O + S DT Rating, Items remembered, Engagement Field 304 (NA)
Martin-Gutierrez, 2011 AR use in the classroom S DT Rating Formal 47 (NA)
Oh and Byun, 2012 Interactive learning systems S HH Rating Pilot 15 (6)
Salvador-Herranz et al., 2013 Education S S/LS Rating Pilot 21 (9)
Santos et al., 2013 AR X-Ray techniques for education S HH Rating Pilot 27.3 (15.6)
Schwerdtfeger and Klinker, 2008 AR enabled instructions O + S HMD Time, Error/Accuracy, Subjective feedback Formal 24 (10)
Shatte et al., 2014 Library management O HH Time Formal 35 (NA)
Sommerauer and Müller, 2014 Education O HH Error/Accuracy, Rating Field 101 (39)
Sumadio and Rambli, 2010 Education O + S DT Rating Formal 33 (20)
Szymczak et al., 2012 Multi-sensory AR for historic city sites S HH Rating Field 17 (10.5)
Toyama et al., 2013 Reading Assistance O HMD Error/Accuracy Pilot 12 (NA)
Weing et al., 2013 Music Education S S/LS Interview questions Pilot 4 (0)
Wojciechowski and Cellary, 2013 Education S DT Rating Formal 42 (NA)
Yamabe and Nakajima, 2013 Training S S/LS, DT Rating Formal 10 (1.5)
Zhang et al., 2014 Teaching O + S HH Error/Accuracy, flow experience Field 147 (54)

S, Subjective; O, Objective; DT, Desktop; HH, handheld. Participant numbers are absolute values, and where more than one study was reported in a paper, we used average counts.

4.2.1. Representative paper

The paper by Fonseca et al. (2014a) received the highest ACC (22) in the Education application area. They developed a mobile phone-based AR teaching tool for 3D model visualization and architectural projects in classroom learning. They recruited a total of 57 students (29 female) and collected qualitative data through questionnaires and quantitative data through pre- and post-tests. These data were collected over several months of instruction. The primary dependent variable was the academic performance improvement of the students. The authors used five-point Likert-scale questions as the primary instrument. They reported that using the AR tool in the classroom was correlated with increased motivation and academic achievement. This type of longitudinal study is not common in the AR literature, but is helpful in measuring the actual real-world impact of an application or intervention.

4.2.2. Discussion

The papers in this category covered a diverse range of education and training application areas. Some papers used AR to teach physically or cognitively impaired patients, while a couple more promoted physical activity. This set of papers focused on both objective and subjective outcomes. For example, Anderson and Bischof (2014) reported a system called ARM trainer to train amputees in the use of myoelectric prostheses, which provided an improved user experience over the current standard of care. In similar work, Gama et al. (2012) presented a pilot study of upper-body motor movements in which users were taught to move body parts in accordance with the instructions of an expert, such as a physiotherapist, and showed that the AR-based system was preferred by the participants. Their system can be applied to teach other kinds of upper-body movements beyond rehabilitation. In another paper, Chang et al. (2013) reported a study where AR helped cognitively impaired people gain vocational job skills, and the gained skills were maintained even after the intervention. Hsiao et al. (2012) and Hsiao (2010) presented studies where physical activity was included in the learning experience to promote “learning while exercising.” A few other papers gamified the AR learning content, and these primarily focused on subjective data. Iwata et al. (2011) presented ARGo, an AR version of the game Go, to investigate and promote self-learning. Juan et al. (2011b) developed the ARGreenet game to create awareness of recycling. Three papers investigated educational content themed around tourism, and mainly focused on subjective opinion. For example, Hatala and Wakkary (2005) created a museum guide educating users about the objects in the museum, and Szymczak et al. (2012) created a multi-sensory application for teaching about the historic sites of a city. Several other papers proposed and evaluated different pedagogical approaches using AR, including two papers specifically designed for teaching music (Liarokapis, 2005; Weing et al., 2013). Overall, these papers show that in the education space a variety of evaluation methods can be used, focusing both on educational outcomes and on application usability. Integrating methods from intelligent tutoring systems (Anderson et al., 1985) with AR could provide effective tools for education. Another interesting area to explore further is making these educational interfaces adaptive to the user's cognitive load.

4.3. Entertainment and gaming

We reviewed a total of 14 papers in the Entertainment and Gaming area, which reported 18 studies (Table 7). A majority of the papers reported a gaming application, while fewer papers reported other forms of entertainment applications. Out of the 18 studies, nine were carried out using handheld displays and four used HMDs. Interestingly, one of the reported studies did not use any display (Xu et al., 2011). Again, the increasing use of handheld displays is expected, as this kind of display provides greater mobility than HMDs. Five studies were conducted as field studies, and the remaining 13 were controlled lab-based experiments. Fourteen studies were designed as within-subjects and two as between-subjects. The median number of participants in these studies was 17. Roughly 41.5% of participants were female. Thirteen studies were performed in indoor areas, four in outdoor locations, and one study was conducted in both locations. Eight studies collected only subjective data, another eight collected both subjective and objective data, and the remaining two collected only objective data. Subjective preference was the primary measure of interest; however, task completion time was also an important measure. Error/accuracy was rarely used as a measure in this area. In terms of the keywords used by the authors, besides games, mobile and handheld were other prominent keywords. These results highlight the utility of handheld displays for AR Entertainment and Gaming studies.

Table 7.

Summary of user studies in Entertainment and Gaming application area.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Baudisch et al., 2013 Gaming S Audio interface Rating Formal 30 (7)
Dow et al., 2007 Entertainment O + S HMD Time, Observations, subject interviews Formal 12 (6)
Grubert et al., 2012 Mobile gaming O + S HH Time, Rating, Fatigue (as evidenced by phone posture) Formal 16 (8)
Haugstvedt and Krogstie, 2012 Mobile AR for Cultural Heritage S HH Rating Field 121 (60.5)
Henze and Boll, 2010a Mobile music listening S HH Rating Formal 15 (3.5)
Kern et al., 2006 Gaming S DT Subjective opinion Formal 3 (2)
Mulloni et al., 2008 Handheld AR Gaming S HH Rating Field 12 (6)
Oda and Feiner, 2009 Handheld AR gaming O + S HH Rating, Distance Formal 18 (3)
Schinke et al., 2010 AR tourism information systems, outdoor AR O HH Error/Accuracy Formal 26 (13)
Vazquez-Alvarez et al., 2012 Tourism, Navigation O + S Headphone Time, Rating, Distance covered, Walking speed, Times stopped Field 8 (2)
Wither et al., 2010 AR story-telling O + S HH Rating, Subjective feedback Formal 16 (9)
Xu et al., 2008 AR Gaming, Collaboration O + S AR GamePad (Gizmondo) Rating Formal 18 (5)
Xu et al., 2011 Handheld AR for social tabletop games O No displays used Coding of recorded video Field 9 (NA)
Zhou et al., 2007 Gaming, Audio AR O + S HMD Time, Error/Accuracy, Rating Formal 40 (13)

S, Subjective; O, Objective; DT, Desktop; HH, handheld. Participant numbers are absolute values, and where more than one study was reported in a paper, we used average counts.

4.3.1. Representative paper

Dow et al. (2007) presented a qualitative user study exploring the impact of immersive technologies on presence and engagement, using interactive drama in which players had to converse with characters and manipulate objects in the scene. This paper received the highest ACC (9.5) in this category of papers. They compared two versions of desktop 3D based interfaces with an immersive AR-based interface in a lab-based environment. Participants communicated in the desktop versions using keyboards and voice. The AR version used a video see-through HMD. They recruited 12 participants (six female) in the within-subjects study, each of whom experienced the interactive dramas. This paper is unusual because user data was collected mostly from open-ended interviews and observation of participant behaviors, rather than task performance or subjective questions. They reported that immersive AR caused an increased level of user presence; however, higher presence did not always lead to more engagement.

4.3.2. Discussion

It is clear that advances in mobile connectivity, CPU and GPU processing capabilities, wearable form factors, tracking robustness, and accessibility of commercial-grade game creation tools are leading to more interest in AR for entertainment. There is significant evidence from both AR and VR research of the power of immersion to provide a deeper sense of presence, leading to new opportunities for enjoyment in Mixed Reality spaces (a continuum encompassing both AR and VR; Milgram et al., 1995). Natural user interaction will be key to sustaining the use of AR in entertainment, as users will shy away from long-term use of technologies that induce fatigue. In this sense, wearable AR will probably be more attractive for entertainment AR applications. In these types of entertainment applications, new types of evaluation measures will need to be used, as shown by the work of Dow et al. (2007).

4.4. Industrial

A total of 30 papers focused on Industrial applications, and together they reported 36 user studies. A majority of the studies reported maintenance and manufacturing/assembly related tasks (Table 8). Eleven studies used handheld displays, 21 used HMDs, four used spatial or large-screen displays, and two used desktop displays. The prevalence of HMDs was expected, as most of the applications in this area require the use of both hands at times, and as such HMDs are more suitable displays. Twenty-nine studies were executed in a formal lab-based environment, and only six studies were executed in their natural settings. We believe performing more industrial AR studies in natural environments would lead to more applicable results, as controlled environments may not expose users to the issues that they face in real-world setups. Twenty-eight studies were designed as within-subjects and six as between-subjects. One study was designed to collect exploratory feedback from a focus group (Olsson et al., 2009). The median number of participants in these studies was 15, and roughly 23% of them were female. Thirty-two studies were performed in indoor locations and four in outdoor locations. Five studies were based on only subjective data, four on only objective data, and the remaining 27 collected both kinds of data. Use of the NASA TLX was very common in this application area, which was expected given the nature of the tasks. Time and error/accuracy were other commonly used measurements, along with subjective feedback. The keywords used by the authors to describe their papers highlight a strong interest in interaction, interfaces, and users. Guidance and maintenance were other prominent keywords.

Table 8.

Summary of user studies in Industrial area.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Allen et al., 2011 AR for architectural planning S HH Rating Formal 18 (7)
Bruno et al., 2010 Industrial prototyping O + S HMD Error/Accuracy Formal 30 (NA)
Bunnun et al., 2013 Modeling O + S HH Time, Error/Accuracy, Rating Formal 10 (3)
Fiorentino et al., 2013 None O + S HMD Time, Error/Accuracy, Rating Formal 14 (3)
Fiorentino et al., 2014 Maintenance O + S S/LS Time, Error/Accuracy, Rating Formal 14 (3)
Gavish et al., 2013 Industrial maintenance and assembly O + S HH Time, Error/Accuracy, Rating, Unsolved errors Formal 40 (1)
Hakkarainen et al., 2008 Object assembly S HH Rating Pilot 8 (NA)
Hartl et al., 2013 Document Verification O + S HH Time, Error/Accuracy, Rating, NASA TLX, AttrakDiff UX survey Formal 17 (1)
Henderson and Feiner, 2009 AR Maintenance O + S HMD, DT Time, Rating Field 6 (0)
Henderson and Feiner, 2010 Industrial AR O + S HMD Time, Error/Accuracy, Rating Formal 15 (4)
Henderson S. and Feiner, 2011 Maintenance and repair, Defense O + S HMD Time, Error/Accuracy, Rating, Head movement, Supporting task focus Formal 6 (0)
Henderson S. J. and Feiner, 2011 AR for Industrial Tasks, Assembly Tasks O + S HMD Time, Error/Accuracy, Rating Formal 11.3 (2.3)
Irizarry et al., 2013 Construction O + S HH Time, Rating Formal 30 (9)
Lau et al., 2012 Tangible UI, 3D modeling O + S HMD Time, Error/Accuracy, Rating Formal 10 (4)
Magnusson et al., 2010 Pointing in space O HH Time, Error/Accuracy, Rating Formal 6 (3)
Markov-Vetter et al., 2012 Flight O + S HMD Time, Error/Accuracy, Rating, Pointing Behavior, Physiological Measures, NASA RTLX Formal 6 (1)
Marner et al., 2013 None O + S S/LS Time, Error/Accuracy, Rating Formal 24 (6)
Olsson et al., 2009 Design S HH Rating Formal 23 (10)
Petersen and Stricker, 2009 Industrial assembly O + S S/LS Time Formal 15 (10)
Rauhala et al., 2006 Humidity data visualization O + S HH Error/Accuracy Formal 10 (3)
Reif and Günthner, 2009 Storage facility management O + S HMD Time, Error/Accuracy, Rating Formal 16 (3)
Rosenthal et al., 2010 AR for guiding manual tasks O S/LS Time, Error/Accuracy Formal 30 (17)
Schall et al., 2013a Surveying S HH Rating Field 16 (4)
Schoenfelder and Schmalstieg, 2008 Industrial building acceptance O + S HMD, HH Error/Accuracy, Navigational activity Formal 36 (9)
Schwerdtfeger et al., 2009 Stock Picking O + S HMD Time, Error/Accuracy, Rating, NASA TLX Formal 13.5 (3.5)
Schwerdtfeger et al., 2011 Order picking in a warehouse O + S HMD Time, Error/Accuracy, Observation Field 22.3 (10)
Tumler et al., 2008 Industrial AR; Order Picking Task O HMD Rating, heart rate Formal 12 (0)
Vignais et al., 2013 Ergonomics O + S HMD Time, Rating, Articulation score Formal 12 (0)
Yeh et al., 2012 Construction O Projected AR Time, Error/Accuracy Formal 34 (7)
Yuan et al., 2008 Assembly, manufacturing O + S HMD, DT Time, Error/Accuracy Formal 14 (4)

S, Subjective; O, Objective; DT, Desktop; HH, handheld. Participant numbers are absolute values, and where more than one study was reported in a paper, we used average counts.

4.4.1. Representative paper

As an example of the papers written in this area, Henderson S. and Feiner (2011) published work exploring AR documentation for maintenance and repair tasks in a military vehicle, which received the highest ACC (26.25) in the Industrial area. They used a video see-through HMD to implement the study application. In the within-subjects study, the authors recruited six male participants who were professional military mechanics, and the participants performed the tasks in a field setting. They had to perform 18 different maintenance tasks under three conditions: AR, LCD, and HUD. Several kinds of quantitative and qualitative (questionnaire) data were collected. As dependent variables, the authors used task completion time, task localization time, head movement, and errors. The AR condition resulted in faster task localization and fewer head movements. Qualitatively, AR was also reported to be more intuitive and satisfying. This paper provides an outstanding example of how to collect both qualitative and quantitative measures in an industrial setting, and thereby get a better indication of the user experience.

4.4.2. Discussion

The majority of the work in this category focused on maintenance and assembly tasks, whereas a few papers investigated architecture and planning tasks. Another prominent line of work in this category is military applications. Some work also covers surveying and item selection (stock picking). It would be interesting to investigate non-verbal communication cues in collaborative industrial applications, so that people from multiple cultural backgrounds can easily work together. As most industrial tasks require specific training and working in a particular environment, we assert that more studies need to recruit participants from the real user population and perform studies in the field when possible.

4.5. Interaction

There were 71 papers in the Interaction design area, and 83 user studies were reported in these papers (see Table 9). Interaction is a very general area in AR, and the topics covered by these papers were diverse. Forty studies used handheld displays, 33 used HMDs, eight used desktop displays, 12 used spatial or large-screen displays, and 10 studies used a combination of multiple display types. Seventy-one studies were conducted in a lab-based environment, five studies were field studies, and six were pilot studies. Jones et al. (2013) were the only authors to conduct a heuristic evaluation. The median number of participants in these studies was 14, and approximately 32% of participants were female. Seventy-five studies were performed in indoor locations, seven in outdoor locations, and one study used both locations. Sixteen studies collected only subjective data, 14 collected only objective data, and 53 studies collected both types of data. Task completion time and error/accuracy were the most commonly used dependent variables. A few studies used the NASA TLX workload survey (Robertson et al., 2007; Henze and Boll, 2010b), and most of the studies used different forms of subjective ratings, such as ranking conditions and rating on a Likert scale. The keywords used by the authors indicate that the papers in general focused on interaction, interfaces, users, mobile, and display devices.

Table 9.

Summary of user studies in Interaction application area.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Ajanki et al., 2011 O + S HMD, HH Error/Accuracy, Rating Formal 7.5 (1.5)
Axholt et al., 2011 AR Optical See-Through Calibration O HMD Error/Accuracy Formal 11 (1)
Bai et al., 2012 Phone-based AR interaction methods O + S HH Time, Error/Accuracy, Rating Formal 10 (4)
Bai et al., 2013a Handheld AR O + S HH Time, Rating Formal 32 (16)
Bai et al., 2014 Basic AR interaction methods O + S HMD Time, Error/Accuracy, Rating Formal 5 (0)
Baričević et al., 2012 Handheld AR O + S HMD Time, Path deviation Formal 48 (24)
Benko and Feiner, 2007 Object selection O + S HMD Time, Error/Accuracy, Rating Formal 12 (2)
Benko et al., 2014 NA O + S S/LS Time, Error/Accuracy, Rating Formal 11 (5)
Boring et al., 2010 Mobile AR O + S HH Time, Error/Accuracy, Docking offset, Subjective feedback Formal 12 (4)
Boring et al., 2011 AR control of large displays S HH Discussions Field 15 (5)
Choi and Kim, 2013 Spatial AR O HH, S/LS Time, Number of clicks Formal 13 (4)
Chun and Höllerer, 2013 Handheld AR O + S HH Time, Error/Accuracy, Rating Formal 30 (23)
Datcu and Lukosch, 2013 Basic AR interaction with CV tracked hands O + S HMD Time, Error/Accuracy, Rating, Discussions Formal 25 (9)
Denning et al., 2014 None S HMD Interview analysis Field 31 (13)
Dierker et al., 2009 None O + S HMD Time, Error/Accuracy, Rating Field 22 (11)
Fröhlich et al., 2006 Spatial information appliances S HH Rating, discussion Field 12 (5)
Grasset et al., 2012 None S HH User preference Pilot 7 (4)
Gupta et al., 2012 NA O + S DT Time, Error/Accuracy, Interviews Formal 16 (8)
Hürst and Van Wezel, 2013 Gesture-based interaction for phone-based AR O + S HH Time, Rating, Interview Formal 21 (5)
Ha and Woo, 2010 Object manipulation for tangible UIs O + S DT Time, Rating Formal 20 (5)
Henderson and Feiner, 2008 AR Affordances for user interaction. O + S HMD Time, Error/ Accuracy, Rating Formal 15 (4)
Henrysson et al., 2005b Handheld AR Interaction O + S HH Time, Rating Formal 9 (2)
Henrysson et al., 2007 Mobile AR O + S HH Time, Rating Formal 12.5 (1.5)
Henze and Boll, 2010b NA O + S HH Time, NASA TLX Formal 12 (4)
Hoang and Thomas, 2010 Object manipulation O + S HMD Time, Error/Accuracy, Rating Formal 16 (2)
Jo et al., 2011 Selection of objects around the user O + S HH Time, Error/Accuracy, Task load Formal 16 (5)
Jones et al., 2013 None S S/LS Rating, Simulator sickness Formal 12.5 (2)
Kerr et al., 2011 Outdoor wearable AR S HMD Rating Pilot 8 (NA)
Ko et al., 2013 Handheld AR S HH Rating Formal 20 (10)
Kron and Schmidt, 2005 Telepresence S HMD Rating Formal 20 (0)
Langlotz et al., 2013 Spatialized audio in AR S HH Rating Pilot 30 (8)
Lee and Billinghurst, 2008 Multimodal interaction technique O + S HH, DT Freq. of speech and gesture commands Formal 12 (2)
Lee et al., 2009 None O HMD Number of collisions with virtual wire Formal 14 (5)
Lee et al., 2010 NA O HMD No. of collisions in path tracking task Formal 13.5 (4)
Lee and Billinghurst, 2011 Handheld outlining of AR objects O HH Time, Error/Accuracy Formal 8 (3)
Lee et al., 2013b Spatial Interaction S DT Rating Formal 10 (2)
Lee et al., 2013c Multimodal (speech-gesture) interaction O + S DT Time, Error/Accuracy, Rating Formal 25 (3)
Lehtinen et al., 2012 Interaction in Mobile AR O + S HH Time, Perceived mental workload Formal 17 (7)
Leithinger et al., 2013 None O Optical-ST DT Time Formal 10 (4)
Looser et al., 2007 Tabletop AR; Object Selection O + S HMD Time, Error/Accuracy, Rating Formal 16 (1)
Lv, 2013 Mobile AR S HH Rating Formal 15 (6)
Maier et al., 2011 None O HMD Error/Accuracy Formal 24 (NA)
Mossel et al., 2013b None O + S HH Time, Rating, No. of steps to do task Formal 28 (12)
Mossel et al., 2013a 3D Interaction in AR O + S HH Time, Error/Accuracy, Rating Formal 28 (12)
Mulloni et al., 2013 AR tracking initialization/calibration O + S HH Error/Accuracy, Rating Formal 7 (2)
Ofek et al., 2013 None O + S S/LS Time, Error/Accuracy, Number of words detected Formal 48 (26)
Oh and Hua, 2007 Multi-display AR/VR systems O + S HMD, HH, S/LS Time, Rating Formal 9 (3)
Olsson and Salo, 2011 Mobile AR O + S HH Usage information Field 90 (15)
Olsson and Salo, 2012 Mobile AR S None Rating Formal 90 (15)
Porter et al., 2010 Spatial AR O + S S/LS Time, Rating Formal 24 (5)
Pusch et al., 2008 Haptic AR O + S HMD Error/Accuracy, Rating, Ranking Formal 13 (4)
Pusch et al., 2009 Haptics O + S HMD Rating, hand motion, perceived force Formal 13 (4)
Robertson et al., 2007 None O + S HMD Time, Error/Accuracy, NASA TLX Formal 26 (12)
Robertson et al., 2008 Basic AR placement task O + S HMD Time, Error/Accuracy, Rating Formal 28 (16)
Rohs et al., 2009b None O + S HH Time, Error/Accuracy, Rating, Motion traces, Gaze shifts Formal 16.5 (10)
Rohs et al., 2011 Mobile AR, selection task O HH Time Formal 12 (6)
Sodhi et al., 2012 Guidance for gestures O + S S/LS Time, Error/Accuracy, Rating Formal 10 (2)
Sukan et al., 2012 Handheld AR O + S HH Time, Error/Accuracy, Intersection location Pilot 15 (5.5)
Takano et al., 2011 NA O HMD, DT Error/Accuracy Formal 15 (3)
Thomas, 2007 Mobile AR O + S HMD, HH Time, Error/Accuracy, Rating Formal 25 (5)
Toyama et al., 2014b None O HMD Error/Accuracy Pilot 9 (5)
Toyama et al., 2014a None O HMD Error/Accuracy Formal 10 (5)
Voida et al., 2005 Object manipulation S S/LS Subjective preference Formal 9 (6)
Weichel et al., 2014 3D printing O + S Non-AR Rating, Type of gesture Formal 11 (5.5)
White et al., 2007 None, AR Interaction Technique S HMD Rating Pilot 7 (4)
White et al., 2009 NA O + S HMD Time, Error/Accuracy, Rating Formal 13 (1)
Wither et al., 2007 None O + S HMD, HH Time, Error/Accuracy, Rating Formal 21 (4)

S, Subjective; O, Objective; DT, Desktop; HH, Handheld. Participant numbers are absolute values; where more than one study was reported in a paper, we used the average counts.

4.5.1. Representative paper

Boring et al. (2010) presented a user study for remote manipulation of content on distant displays using their system, named Touch Projector and implemented on an iPhone 3G. This paper received the highest ACC (31) in the Interaction category of papers. They implemented multiple interaction methods in this application, e.g., manual zoom, automatic zoom, and freezing. The user study involved 12 volunteers (four females) and was designed as a within-subjects study. In the experiment, participants selected targets and dragged them between displays under the different conditions. Both quantitative and qualitative data (informal feedback) were collected. The main dependent variables were task completion time, failed trials, and docking offset. They reported that participants achieved the highest performance with automatic zooming and temporary image freezing. This is a typical AR study conducted in a controlled laboratory environment. As is usual in interaction studies, much of the study focused on user performance under different input conditions, and this paper shows the benefit of capturing different types of performance measures, not just task completion time.
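To make the logging of multiple dependent measures concrete, the following minimal Python sketch shows how per-trial data from such a within-subjects study might be recorded. The condition labels, field names, and the docking-offset measure are our own illustrative assumptions, not Boring et al.'s actual implementation.

```python
import csv
import time

# Illustrative sketch: one row per trial, capturing several dependent
# measures (time, failure, placement accuracy), not just completion time.
CONDITIONS = ["manual_zoom", "automatic_zoom", "freezing"]  # assumed labels
FIELDS = ["participant", "condition", "trial",
          "completion_time_s", "failed", "docking_offset_px"]

def log_trial(writer, participant, condition, trial, t_start, failed, offset_px):
    writer.writerow({
        "participant": participant,
        "condition": condition,            # counterbalanced within subjects
        "trial": trial,
        "completion_time_s": round(time.time() - t_start, 3),
        "failed": int(failed),             # failed trials as a separate measure
        "docking_offset_px": offset_px,    # accuracy of the final placement
    })

with open("trials.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    # During an experiment, log_trial() would be called once per trial, e.g.:
    log_trial(writer, participant=1, condition=CONDITIONS[1], trial=1,
              t_start=time.time() - 4.2, failed=False, offset_px=12.5)
```

Logging every trial in a flat file like this makes it straightforward to analyze each dependent measure separately afterwards, which is what enables conclusions beyond task completion time.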

4.5.2. Discussion

User interaction is a cross-cutting focus of research, and as such, does not fall neatly within an application category, but deeply influences user experience in all categories. The balance of expressiveness and efficiency is a core concept in general human-computer interaction, but is of even greater importance in AR interaction, because of the desire to interact while on the go, the danger of increased fatigue, and the need to interact seamlessly with both real and virtual content. Both qualitative and quantitative evaluations will continue to be important in assessing usability in AR applications, and we encourage researchers to continue with this approach. It is also important to capture as many different performance measures as possible from an interaction user study, to fully understand how a user interacts with the system.

4.6. Medicine

One of the most promising areas for applying AR is in the medical sciences. However, most of the medical-related AR papers were published in medical journals rather than in the most common AR publication venues. Because we considered all venues in our review, we were able to identify 43 medical papers reporting AR studies, which together reported 54 user studies. The specific topics were diverse, including laparoscopic surgery, rehabilitation and recovery, phobia treatment, and other medical training. This application area was dominated by desktop displays (34 studies), while 16 studies used HMDs, and handheld displays were used in only one study. This is expected, as medical setups often require a clear view and free hands, without any added physical load. As expected, all studies were performed in indoor locations. Thirty-six studies were within-subjects and 11 were between-subjects. The median number of participants was 13, and only approximately 14.2% of participants were female, which is considerably lower than the gender ratio in the medical profession. Twenty-two studies collected only objective data, 19 collected only subjective data, and 13 studies collected both types of data. Besides time and accuracy, various domain-specific surveys and other instruments were used in these studies, as shown in Table 10.

Table 10.

Summary of user studies in Medical application areas.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Akinbiyi et al., 2006 Surgery O TV screen Time, Error/Accuracy, Number of broken structures, Applied force Formal 9 (3)
Albrecht et al., 2013 Medical Education O + S HH Rating, Number of correct answers to exam Formal 10 (4)
Anderson et al., 2013 Movement training O S/LS Error/Accuracy Formal 8 (2)
Archip et al., 2007 Image-guided surgery O DT Error/Accuracy Field 8 (1)
Bai et al., 2013b Autism O + S S/LS Time, Rating, video analysis of type of play Formal 12 (2)
Bichlmeier et al., 2007 Image-Guided Surgery O + S DT Time, Error/Accuracy, Qualitative discussion Formal 12 (2)
Botden et al., 2007 Surgical Simulation O + S DT Rating, Surgical effectiveness measures Formal 90 (NA)
Botden et al., 2008 Laparoscopic surgery S DT Rating Formal 55 (6)
Botella et al., 2005 Psychology/Phobia S HMD Rating, user behavior Formal 1 (1)
Botella et al., 2010 Phobia therapy O + S HMD Rating Formal 6 (6)
Botella et al., 2011 Phobia treatment O DT Rating, Anxiety Disorders Interview Schedule, Behavioral avoidance test, Fear of spiders questionnaire, Spider phobia beliefs questionnaire, Subjective units of discomfort scale Field 1 (1)
Bretón-López et al., 2010 Phobia S HMD Rating Formal 6 (6)
Brinkman et al., 2012 Laparoscopic surgical training O DT Time Formal 36 (NA)
Chintamani et al., 2010 Teleoperation O DT Time, Error/Accuracy, Path distance, Deviation from path, Distance from receptacle Formal 13.5 (2.5)
Dixon et al., 2011 Image-guided surgical planning O + S Laparoscope Error/Accuracy Formal 12 (NA)
Espay et al., 2010 Rehabilitation/training, gait assistance for Parkinson's disease O + S HMD Rating, gait performance Field 13 (7)
Fichtinger G. et al., 2005 Medical O DT Error/Accuracy Pilot NA (NA)
Fichtinger G. D. et al., 2005 Medical O DT Error/Accuracy Pilot NA (NA)
Grasso et al., 2013 Medicine O DT Time, Number of scans, Dose Field 3 (NA)
Horeman et al., 2012 Laparoscopic Training O DT Time, Force applied Pilot 12 (NA)
Horeman et al., 2014 Surgical Training O DT Time, Error/Accuracy, Path length, Motion volume Formal 25 (18)
Jeon et al., 2012 Medical Training O + S DT Time, Similarity score Formal 12 (2)
Juan and Prez, 2010 Phobia Treatment S HMD Rating, SUS Formal 20 (4)
Juan and Joele, 2011 Phobia S HMD Rating Formal 24 (6)
King et al., 2010 Medicine O + S DT Fugl-Meyer Assessment, Wolf Motor Function Test, DASH questionnaire Formal 4 (NA)
Leblanc et al., 2010 Medical Training O + S DT Rating Formal 34 (NA)
Lee et al., 2013d Medical procedure training O + S DT Error/Accuracy, Rating Formal 40 (NA)
Luo et al., 2005b Medical AR O HMD Grasping force Pilot 1 (0)
Luo et al., 2005a Stroke rehabilitation O HMD Clinical measures related to hand grasping performance Field 3 (0)
Markovic et al., 2014 Artificial Limbs O HMD Time, Error/Accuracy, Rating Formal 13 (NA)
Nicolau et al., 2005 Medicine O DT Time, Error/Accuracy Formal 2 (NA)
Nilsson and Johansson, 2007 Cognitive System Engineering S HMD Rating Field 12 (NA)
Regenbrecht et al., 2011 Stroke recovery and rehabilitation O + S DT Time, Error/Accuracy Formal 64 (10)
Regenbrecht et al., 2012 Medical rehabilitation S DT Rating, Discussion Formal 36.2 (5.7)
Regenbrecht et al., 2014 AR for rehabilitation S DT Rating, Interview Formal 44 (8)
Ritter et al., 2007 Laparoscopic surgery O DT Path length, Smoothness Formal 60 (NA)
Teber et al., 2009 Laparoscopic Surgery O HMD Time, Error/Accuracy Field 1 (NA)
Thomas et al., 2010 Anatomical Education S DT Rating Formal 34 (21)
Wacker et al., 2006 Medical AR O HMD Error/Accuracy Formal 1 (NA)
Wilson et al., 2013 Medical procedures O HMD Time, Error/Accuracy Formal 34 (22)
Wrzesien et al., 2013 Therapy S HMD Standard therapy questionnaires Formal 22 (NA)
Yoo et al., 2013 Health, Medicine O HMD Rating, Balance (Berg Balance Scale, BBS), gait parameters (velocity, cadence, step length, and stride length), and falls efficacy Formal 21 (21)
Yudkowsky et al., 2013 Medical Training O + S DT Ability to complete medical task Formal 16 (NA)

S, Subjective; O, Objective; DT, Desktop; HH, Handheld. Participant numbers are absolute values; where more than one study was reported in a paper, we used the average counts.

The keywords used by authors suggest that AR-based research in medicine primarily targeted training and simulation. Laparoscopy, rehabilitation, and phobia were the topics of primary interest. One difference between the keywords used in medical science and those used in other AR fields is the omission of the word user, which indicates that interfaces designed for medical AR were primarily focused on achieving higher precision rather than on user experience. This is understandable, as the users are highly trained professionals who need to learn to use new, complex interfaces. The precision of the interface is of utmost importance, as poor performance can be life threatening.

4.6.1. Representative paper

Archip et al. (2007) reported on a study that used AR visualization for image-guided neurosurgery, which received the highest ACC (15.6) in this category of papers. The researchers recruited 11 patients (six females) with brain tumors who underwent surgery. Quantitative data about alignment accuracy was collected as the dependent variable. They found that using AR produced a significant improvement in alignment accuracy compared to the non-AR system already in use. An interesting aspect of the paper was that it focused purely on one user performance measure, alignment accuracy, and no qualitative data was captured from users about how they felt about the system. This appears to be typical of many medical-related AR papers.
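Alignment accuracy in image-guided surgery studies is commonly quantified as the residual error between corresponding anatomical landmarks before and after registration. The following Python sketch illustrates that generic idea with fabricated point sets; it is not Archip et al.'s actual analysis pipeline.

```python
import numpy as np

def mean_landmark_error(ref: np.ndarray, registered: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding (N, 3) landmark sets, in mm."""
    return float(np.linalg.norm(ref - registered, axis=1).mean())

# Fabricated example: residual errors shrink after (hypothetical) non-rigid alignment.
ref = np.array([[10.0, 20.0, 5.0], [12.0, 18.0, 7.0], [8.0, 22.0, 6.0]])
rigid_only = ref + np.array([[2.0, -1.5, 1.0], [1.8, -1.2, 0.9], [2.1, -1.4, 1.1]])
non_rigid = ref + np.array([[0.4, -0.3, 0.2], [0.3, -0.2, 0.2], [0.5, -0.3, 0.1]])
print(mean_landmark_error(ref, rigid_only))   # error with rigid alignment only
print(mean_landmark_error(ref, non_rigid))    # smaller error after refinement
```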

4.6.2. Discussion

AR medical applications are typically designed for highly trained medical practitioners, who are a more specialized user population than in most other types of user studies. The overwhelming focus is on improving user performance in medical tasks, and so most of the user studies are heavily performance focused. However, there is an opportunity to include more qualitative measures in medical AR studies, especially those that relate to users' estimates of their physical and cognitive workload, such as the NASA TLX survey. In many cases, medical AR interfaces aim to improve user performance in medical tasks compared to traditional medical systems. This means that comparative evaluations will need to be carried out, and previous experience with the existing systems will need to be taken into account.
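Several of the studies summarized in this review used the NASA TLX instrument mentioned above. As a reference for readers unfamiliar with it, the sketch below scores TLX in both its raw (unweighted) and weighted forms; the ratings and weights shown are invented for illustration.

```python
# NASA TLX scoring sketch. Six subscales, each rated 0-100; the weighted
# variant derives weights (summing to 15) from 15 pairwise comparisons.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict) -> float:
    """Raw TLX: unweighted mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings: dict, weights: dict) -> float:
    """Weighted TLX: weighted mean, with weights from pairwise comparisons."""
    assert sum(weights.values()) == 15, "weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}  # invented
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}     # invented
print(raw_tlx(ratings), weighted_tlx(ratings, weights))
```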

4.7. Navigation and driving

A total of 24 papers reported 28 user studies in the Navigation and Driving application areas (see Table 11). A majority of the studies reported applications for car driving. However, there were also pedestrian navigation applications for both indoors and outdoors. Fifteen studies used handheld displays, five used HMDs, and two used heads-up displays (HUDs). Spatial or large-screen displays were used in four studies. Twenty-three of the studies were performed in controlled setups and the remaining five were executed in the field. Twenty-two studies were designed as within-subjects, three as between-subjects, and the remaining three were mixed-factors studies. Approximately 38% of participants in these studies were female, and the median number of participants was 18. Seven studies were performed in an outdoor environment and the rest in indoor locations. This indicates an opportunity to design and test hybrid AR navigation applications that can be used in both indoor and outdoor locations. Seven studies collected only objective data, 18 studies collected a combination of both objective and subjective data, whereas only three studies were based solely on subjective data. Task completion time and error/accuracy were the most commonly used dependent variables. Other domain-specific variables used were headway variation, deviation from the intended path, targets found, number of steps, etc.

Table 11.

Summary of user studies in Navigation and Driving application area.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Arning et al., 2012 AR navigation systems O + S HH, S/LS Rating, Navigation performance metrics Formal 24 (19)
Avery et al., 2008 Outdoor AR navigation O + S HMD Time, Error/Accuracy, Rating Formal 34 (9)
Choi et al., 2011 Outdoor AR (Mobile) O + S HH Time, Rating, Clicks on screen, Number of targets found Formal 12 (1)
Dünser et al., 2012a Outdoor navigation O + S HH Time, Rating, Discussion; video coding Formal 22 (11)
Fröhlich et al., 2011 In-car navigation using AR O + S In-car screen Rating Formal 31 (11)
Gee et al., 2011 Cooperative AR, automatic map building O + S HH Time, Error/Accuracy Formal 12 (2)
Goldiez et al., 2007 Navigation O HMD Time, Percentage of maze covered Formal 120 (60)
Ha et al., 2012 Path editing using tangible user interfaces O + S HMD Time, Error/Accuracy, Rating Formal 16.5 (1)
Heller et al., 2014 Navigation O HH Path, orientation Formal 16.5 (2.5)
Kjeldskov et al., 2013 Mobile urban map-based info. O + S HH Rating, Interviews Field 58 (6)
Möller et al., 2012 Indoor navigation S DT Rating Formal 81 (39)
Möller et al., 2014 Navigation O + S HH Time, Error/Accuracy, Rating Formal 12 (1)
Morrison et al., 2009 AR Map, Augmenting a paper map S HH Rating, Primarily an observational study Field 37 (20)
Moussa et al., 2012 AR for driving analysis O + S HMD Error/Accuracy Formal 44 (18)
Mulloni et al., 2011a Indoor navigation O HH Time, Error/Accuracy, steps Formal 10 (5)
Mulloni et al., 2011b Navigation O + S HH Time, Rating, where AR was used Field 9 (NA)
Ng-Thow-Hing et al., 2013 Automotive Augmented Reality O HUD on car windshield Error/Accuracy Formal 16 (8)
Rohs et al., 2007 Mobile maps on handheld display O + S HH Time, Error/Accuracy, Rating Formal 18 (10)
Rohs et al., 2009a Map navigation O HH Time, Error/Accuracy Formal 17 (12)
Rusch et al., 2013 Driving O S/LS Time, Error/Accuracy Formal 27 (14)
Schall et al., 2013b Driving O Projection HUD Time, Error/Accuracy, Response rate, Time to collision, Headway variation Formal 20 (7)
Tönnis et al., 2005 Driving O + S S/LS Time, Error/Accuracy, Rating Formal 12 (2)
Tönnis and Klinker, 2007 Driving O + S HUD Time, Error/Accuracy, Rating Formal 24 (10)
Tangmanee and Teeravarunyou, 2012 Vehicle Navigation O + S S/LS Time, Subjective questions, Number of eye fixations Formal 5 (2)

S, Subjective; O, Objective; DT, Desktop; HH, Handheld. Participant numbers are absolute values; where more than one study was reported in a paper, we used the average counts.

Analysis of author-specified keywords suggests that mobile was strongly emphasized, which is also evident from the profuse use of handheld displays in these studies, since these applications are about mobility. Acceptance was another noticeable keyword, indicating that the studies intended to investigate whether or not a navigation interface is acceptable to users, given that, in many cases, a navigational tool can affect user safety.

4.7.1. Representative paper

Morrison et al. (2009) published a paper reporting on a between-subjects field study that compared a mobile augmented reality map (MapLens) with a 2D map; it received the highest ACC (16.3) in this application area of our review. MapLens was implemented on a Nokia N95 mobile phone and used AR to show virtual points of interest overlaid on a real map. The experimental task was to play a location-based, treasure hunt type game outdoors using either MapLens or a 2D map. The researchers collected both quantitative and qualitative (photos, videos, field notes, and questionnaires) data. A total of 37 participants (20 female) took part in the study. The authors found that the AR map created more collaboration between players, and argued that AR maps are more useful as a collaboration tool. This work is important because it provides an outstanding example of an AR field study evaluation, which is not very common in the AR domain. User testing in the field can uncover usability issues that lab-based testing cannot identify, particularly in the Navigation application area. For example, Morrison et al. (2009) were able to identify the challenges a person faces when using a handheld AR device while trying to maintain awareness of the surrounding world.

4.7.2. Discussion

Navigation is an area where AR technology could provide significant benefit, due to the ability to overlay virtual cues on the real world. This will become increasingly important as AR displays become more common in cars (e.g., windscreen heads-up displays) and consumers begin to wear head-mounted displays outdoors. Most navigation studies to date have related to vehicle driving, and so there is a significant opportunity for pedestrian navigation studies. However, human movement is more complex and erratic than driving, so these types of studies will be more challenging. Navigation studies will need to take into consideration the user's spatial ability, how to convey depth cues, and methods for spatial information display. The current user studies show how important it is to conduct navigation studies outdoors in a realistic testing environment, and the need to capture a variety of qualitative and quantitative data.
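As an illustration of the kind of objective measure such studies capture, the sketch below computes mean deviation from an intended path, treating each logged position's distance to the nearest segment of the planned route as the instantaneous deviation. Coordinates are metres in a local planar frame, and the function names and data are our own assumptions, not taken from any reviewed study.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from 2D point p to the line segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def mean_path_deviation(walked, route):
    """Mean distance of each logged position from the nearest route segment."""
    segments = list(zip(route, route[1:]))
    return sum(min(point_segment_dist(p, a, b) for a, b in segments)
               for p in walked) / len(walked)

route = [(0, 0), (0, 50), (30, 50)]                      # planned route
walked = [(1, 5), (2, 20), (-1, 40), (5, 52), (29, 49)]  # logged positions
print(f"mean deviation: {mean_path_deviation(walked, route):.2f} m")
```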

4.8. Perception

Similar to Interaction, Perception is another general field of study within AR, and appears in 51 papers in our review. A total of 71 studies were reported in these papers. The primary focus was on visual perception (see Table 12), such as perception of depth/distance, color, and text. A few studies also investigated perception of touch (haptic feedback). AR X-ray vision was also a common interface reported in this area. Perception of egocentric distance received significant attention, while exocentric distance was studied less. Also, near- to medium-field distance estimation was studied more than far-field distance estimation. A comprehensive review of depth perception studies in AR can be found in Dey and Sandor (2014), which reports findings about AR perceptual studies similar to those of this review.

Table 12.

Summary of user studies in Perception application area.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Blum et al., 2010 General AR S HMD, DT Rating Formal 18 (4)
Dey et al., 2010 X-ray vision O + S HH Time, Error/Accuracy, Rating, NASA TLX Formal 20 (2)
Dey et al., 2012 X-ray vision O + S HH Error/Accuracy Formal 20 (NA)
Gabbard et al., 2005 Outdoor AR O HMD Time, Error/Accuracy Formal 18 (6)
Gabbard et al., 2006 Perception O HMD Time, Error/Accuracy Formal 18 (6)
Gabbard et al., 2007 Text legibility O HMD Time, Error/Accuracy Formal 24 (12)
Gabbard and Swan II, 2008 Outdoor AR O HMD Time, Error/Accuracy Formal 24 (12)
Gandy et al., 2010 AR Testbed Design O + S HMD Error/Accuracy, Physiological measures Formal 20 (6)
Grechkin et al., 2010 Distance estimation O HMD Distance walked Formal 53.5 (23.5)
Gustafsson and Gyllenswärd, 2005 Ambient Displays S Ambient display interview questions Pilot 15 (4)
Hincapié-Ramos et al., 2014 None S HH Interview questions Pilot 8 (2)
Iwai et al., 2013 Spatial AR O + S S/LS Time, Error/Accuracy Formal 10 (1)
Jankowski et al., 2010 Text readability O + S DT Time, Error/Accuracy, Rating Formal 20 (4)
Jeon and Choi, 2011 Haptic rendering of stiffness S Phantom Psychophysical PSE Formal 12 (4)
Jeon and Harders, 2012 Haptic AR S HMD, Phantom Rating Pilot 6 (2)
Jones et al., 2008 None O HMD Error/Accuracy Formal NA (NA)
Jones et al., 2011 None O HMD Error/Accuracy Formal 21.75 (NA)
Kellner et al., 2012 None O HMD Time, Error/Accuracy Formal 14.5 (6)
Kerber et al., 2013 None O HH Error/Accuracy Formal 12 (2)
Kim, 2013 Context in handheld AR S HH, HH projectors Rating Field 20 (10)
Knörlein et al., 2009 None O HMD Correct selection of strongest force Formal 14 (7)
Lee et al., 2012 AR haptic perception O + S DT Rating, Perceived location Formal 14 (5)
Lee et al., 2013a NA O + S HMD Time, Rating Formal 48 (28)
Lindeman et al., 2007 None O AudioBone bone-conducting headset Error/Accuracy, Frequency Formal 24 (2)
Liu et al., 2010 Displays O + S HMD Error, Rating Formal 10 (2)
Liu et al., 2012 Handheld AR O HH Time, Error/Accuracy Formal 16 (4)
Livingston et al., 2005 NA O + S HMD Error/Accuracy Formal 8 (NA)
Livingston, 2007 Visual acuity in AR displays O HMD, DT Time, Error/Accuracy Formal 5 (1)
Livingston and Ai, 2008 Tracking error O + S HMD Time, Error/Accuracy, Rating Formal 11 (1)
Livingston et al., 2009c Basic visual perception O HMD Time, Error/Accuracy Formal 20 (5.5)
Livingston et al., 2009b Basic perception in AR O HMD Time, Error/Accuracy Formal 11 (2)
Livingston et al., 2009a Object depth perception S HMD Time, Error/Accuracy, Rating Formal 12 (4)
Livingston et al., 2011 Military situation awareness O HMD Error/Accuracy Formal 14 (3)
Lu et al., 2012 Visual search O DT Time, Error/Accuracy Formal 20.5 (7)
Mercier-Ganady et al., 2014 None O + S S/LS Rating, BCI output Formal 12 (NA)
Olsson et al., 2012 Mobile AR S None Rating Formal 262 (133)
Peterson et al., 2009 None O + S Projection HUD Time, Error/Accuracy, Rating Formal 16 (NA)
Pucihar et al., 2014 None O + S HH Time, Error/Accuracy, Subject preference Formal 15 (4)
Salamin et al., 2006 Unspecified S HMD Rating, Able to perform tasks Pilot 6 (0)
Sandor et al., 2010 X-Ray Vision O + S HH Time, Rating Formal 21.5 (1)
Singh et al., 2010 NA O HMD Error/Accuracy, Distance to object Formal 18 (7)
Singh et al., 2012 Depth Perception O HMD Error/Accuracy Formal 40 (NA)
Suzuki et al., 2013 None O + S HMD Rating, Proprioceptive drift under cardio-visual and tactile-visual feedback Formal 21 (11)
Tomioka et al., 2013 User-perspective cameras O HH Time Pilot 9.3 (0.7)
Tsuda et al., 2005 See-through vision S HH Rating Formal 14 (0)
Veas et al., 2011 Mobile AR S DT Rating Formal 18.6 (5.3)
Veas et al., 2012 Outdoor topography S HH Comments/Feedback Heuristic 7.5 (1)
Wagner et al., 2006 3D Characters in AR O + S HH Error/Accuracy, Rating Formal 13 (4)
Wither and Höllerer, 2005 Distance estimation O + S HMD Rating, Judged Depth Formal 19 (5)
Wither et al., 2011 Mobile AR O HH Error/Accuracy Field 13.5 (0)
Zhang et al., 2012 Depth perception O HMD Error/Accuracy Formal 52 (NA)

S, Subjective; O, Objective; DT, Desktop; HH, Handheld. Participant numbers are absolute values; where more than one study was reported in a paper, we used the average counts.

Twenty-one studies used handheld displays, 34 studies used HMDs, and nine studies used desktop displays. The Phantom haptic display was used by the two studies in which haptic feedback was studied. Sixty studies were performed as controlled lab-based experiments, and only three studies were performed in the field. Seven studies were pilot studies, and there was one heuristic study (Veas et al., 2012). Fifty-three studies were within-subjects, 12 between-subjects, and six mixed-factors. Overall, the median number of participants in these studies was 16, and 27.3% of participants were female. Fifty-two studies were performed in indoor locations, only 17 studies were executed outdoors, and two studies used both locations. This indicates that indoor visual perception is well studied, whereas more work is needed to investigate outdoor visual perception. Outdoor locations present additional challenges for visualization, such as brightness, screen glare, and tracking (when mobile). This is an area for the research community to focus on. Thirty-two studies were based on only objective data, 14 used only subjective data, and 25 studies collected both kinds of data. Time and error/accuracy were the most commonly used dependent measures, along with subjective feedback.

Keywords used by authors indicate an emphasis on depth and visual perception, which is expected, as most of the AR interfaces augment the visual sense. Other prominent keywords were X-ray and see-through, which are the areas that have received a significant amount of attention from the community over the last decade.

4.8.1. Representative paper

A recent paper by Suzuki et al. (2013), reporting on the interaction of exteroceptive and interoceptive signals in a virtual cardiac rubber hand illusion, received the highest ACC (13.5) in this category of papers. The authors reported on a lab-based within-subjects user study with 21 participants (11 female), who wore a head-mounted display and experienced tactile feedback simulating a cardiac sensation. Both quantitative and qualitative (survey) data were collected. The main dependent variables were proprioceptive drift and virtual hand ownership. The authors reported that ownership of the virtual hand was significantly higher when the tactile sensation was presented synchronously with the participant's heartbeat than when it was presented asynchronously. This shows the benefit of combining perceptual cues to improve the user experience.

4.8.2. Discussion

A key focus of AR is trying to create the perceptual illusion that AR content is seamlessly part of the user's real world. To measure how well this is occurring, it is important to conduct perceptual user studies. Most studies to date have focused on visual perception, but there is a significant opportunity to conduct studies on non-visual cues, such as audio and haptic perception. One of the challenges of such studies is measuring the user's perception of an AR cue, as well as their confidence in how well they can perceive the cue; for example, asking users to estimate the distance of an AR object from them, and how sure they are about that estimate. New experimental methods may need to be developed to do this well.
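One established approach, used in some of the haptic studies in Table 12, is to fit a psychometric function to forced-choice judgments and read off the point of subjective equality (PSE). The sketch below does this for a hypothetical AR distance-matching task using SciPy; all response data and the 4.5 m reference distance are fabricated for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Logistic psychometric function: P('probe farther') vs. probe distance."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

probe_m = np.array([3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0])           # probe distances
p_farther = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.97])  # fabricated rates
(pse, slope), _ = curve_fit(psychometric, probe_m, p_farther, p0=[4.5, 0.5])
print(f"PSE = {pse:.2f} m (bias vs. the 4.5 m reference: {pse - 4.5:+.2f} m)")
```

The fitted slope complements the PSE: it indicates how sharply judgments change around the point of equality, giving a rough handle on discrimination precision alongside the bias measure.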

4.9. Tourism and exploration

Tourism is one of the relatively less explored areas of AR user studies, represented by only eight papers in our review (Table 13). A total of nine studies were reported, and the primary focus of the papers was on museum-based applications (five papers). Three studies used handheld displays, three used large-screen or spatial displays, and the rest used head-mounted displays. Six studies were conducted in the field, in the environments where the applications were meant to be used, and only three studies were performed in lab-based controlled environments. Six studies were designed to be within-subjects. This area used a markedly higher number of participants compared to other areas: the median number of participants was 28, approximately 38% of whom were female. All studies were performed in indoor locations. While we are aware of studies in this area that have been performed in outdoor locations, these did not meet the inclusion criteria of our review. Seven studies were based completely on subjective data, and the two others used both subjective and objective data. As these interfaces primarily delivered personal experiences, the heavy reliance on subjective data is understandable. An analysis of keywords in the papers found that the focus was on museums. User was the most prominent keyword overall, which is expected for an interface technology such as AR.

Table 13.

Summary of user studies in Tourism and Exploration application area.

References Topic Data type Displays used Dependent measures Study type Participants (female)
Alvarez-Santos et al., 2014 Human-robot interaction, Tourism O + S DT Error/Accuracy, Rating Formal 12 (NA)
Asai et al., 2010 Interaction for museum exhibit S S/LS Rating Field 155 (NA)
Baldauf et al., 2012 AR for public displays S HH, S/LS Rating Field 31 (15)
Hatala et al., 2005 Museums S Headphones Rating Field 6 (NA)
Olsson et al., 2013 Mobile AR S HH Interview responses Field 28 (16)
Pescarin et al., 2012 Museums S Unspecified Comments from interviews, questionnaire Field 362 (199)
Sylaiou et al., 2010 Museums S S/LS Rating Formal 29 (13)
Tillon et al., 2011 Museums S HH Rating Field 16 (NA)

S, Subjective; O, Objective; DT, Desktop; HH, Handheld. Participant numbers are absolute values; where more than one study was reported in a paper, we used the average counts.

4.9.1. Representative paper

The highest ACC (19) in this application area was received by an article published by Olsson et al. (2013) about expected user experiences of mobile augmented reality (MAR) services in a shopping context. The authors used semi-structured interviews as their research methodology and conducted 16 interview sessions with 28 participants (16 female) in two different shopping centers. Hence, the collected data was purely qualitative. The interviews were conducted individually, in pairs, and in groups. The authors reported on (1) the characteristics of the expected user experience, and (2) central user requirements related to MAR in a shopping context. Users expected MAR systems to be playful, inspiring, lively, collective, and surprising, along with providing context-aware and awareness-increasing services. This type of exploratory study is not common in the AR domain. However, it is a good example of how qualitative data can be used to identify user expectations and conceptualize user-centered AR applications. It is also an interesting study because people were asked what they expected of a mobile AR service without actually seeing or trying the service.

4.9.2. Discussion

One of the big advantages of studies in this area is their relatively large sample sizes, as well as the common use of "in the wild" studies that assess users outside of controlled environments. For these reasons, we see this application area as useful for exploring applied user interface designs with real end users in real environments. We also think that this category will continue to be attractive for applications that use handheld devices, as opposed to head-worn AR devices, since handheld devices are ubiquitous and can be put out of the way when visitors simply want to enjoy the physically beautiful or important works themselves.

5. Conclusion

5.1. Overall summary

In this paper, we reported on 10 years of user studies published in AR papers. We reviewed papers from a wide range of journals and conferences indexed by Scopus, which yielded 291 papers describing 369 individual studies. Overall, user study papers made up less than 10% of all AR papers published over the 10-year period we reviewed. Our exploration shows that although there has been an increase in the number of studies, the relative percentage has remained about the same. In addition, since 2011 there has been a shift toward more studies using handheld displays. Most studies were formal user studies, with little field testing and even fewer heuristic evaluations. Over the years there was an increase in AR user studies of educational applications, but there were few collaborative user studies. The use of pilot studies was also less than expected. The most popular data collection method involved filling out questionnaires, which led to subjective ratings being the most widely used dependent measure.

5.2. Findings and suggestions

This analysis suggests opportunities for more user studies of collaboration, greater use of field studies, and a wider range of evaluation methods. We also find that participant populations are dominated by young, educated, male participants, which suggests the field could benefit from incorporating a more diverse selection of participants. On a similar note, except for the Education and Tourism application categories, the median number of participants used in AR studies was between 12 and 18, which appears low compared to other fields of human-subject research. We have also noticed that within-subjects designs are dominant in AR; these require fewer participants to achieve adequate statistical power. This is in contrast to general research in Psychology, where between-subjects designs dominate.
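To make the sample-size point concrete, the following back-of-the-envelope sketch (using statsmodels, and assuming a medium effect size of d = 0.5, which is our assumption rather than a figure from the reviewed studies) contrasts the power of a paired, within-subjects t-test at a typical AR sample size with the per-group sample an equivalent between-subjects design would need.

```python
from statsmodels.stats.power import TTestPower, TTestIndPower

EFFECT_SIZE = 0.5   # assumed medium effect (Cohen's d)
ALPHA = 0.05

# Power of a paired t-test with n = 14, a typical AR-study sample size.
# Note: for the paired test, d is on the difference scores (d_z), so the
# comparison below is only a rough illustration, not an exact equivalence.
paired_power = TTestPower().power(effect_size=EFFECT_SIZE, nobs=14, alpha=ALPHA)

# Participants per group an independent-samples design needs for the same power.
n_per_group = TTestIndPower().solve_power(effect_size=EFFECT_SIZE,
                                          power=paired_power, alpha=ALPHA)
print(f"paired, n=14: power = {paired_power:.2f}; "
      f"between-subjects needs ~{n_per_group:.0f} per group")
```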

Although formal, lab-based experiments dominated overall, the Education and Tourism application areas had higher ratios of field studies to formal lab-based studies, which required more participants. Researchers working in other application areas of AR could take inspiration from Education and Tourism papers and seek to perform more studies in real-world usage scenarios.

Similarly, because the social and environmental conditions of outdoor locations differ from those of indoor locations, results obtained from indoor studies cannot be directly generalized to outdoor environments. Therefore, more user studies conducted outdoors are needed, especially ethnographic observational studies that report on how people naturally use AR applications. Finally, out of our initial 615 papers, 219 papers (35%) did not report one or more of participant demographics, study design, or experimental task, and so could not be included in our survey. Any user study without these details is hard to replicate, and its results cannot be accurately generalized. This suggests a general need to improve the reporting quality of user studies, and to educate researchers in the field on how to conduct good AR user studies.

5.3. Final thoughts and future plans

For this survey, our goal has been to provide a comprehensive account of the AR user studies performed over the last decade. We hope that researchers and practitioners in a particular application area can use the respective summaries when planning their own research agendas. In the future, we plan to explore each individual application area in more depth, and create more detailed and focused reviews. We would also like to create a publicly-accessible, open database containing AR user study papers, where new papers can be added and accessed to inform and plan future research.

Author contributions

All authors contributed significantly to the whole review process and the manuscript. AD initiated the process with the Scopus database search, initial data collection, and analysis. AD, MB, RL, and JS all reviewed and collected data for an equal number of papers. All authors contributed almost equally to writing the paper, with AD and MB taking the lead.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Ajanki A., Billinghurst M., Gamper H., Järvenpää T., Kandemir M., Kaski S., et al. (2011). An augmented reality interface to contextual information. Virt. Real. 15, 161–173. 10.1007/s10055-010-0183-5 [DOI] [Google Scholar]
  2. Akinbiyi T., Reiley C. E., Saha S., Burschka D., Hasser C. J., Yuh D. D., et al. (2006). Dynamic augmented reality for sensory substitution in robot-assisted surgical systems, in Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings, 567–570. [DOI] [PubMed] [Google Scholar]
  3. Albrecht U.-V., Folta-Schoofs K., Behrends M., Von Jan U. (2013). Effects of mobile augmented reality learning compared to textbook learning on medical students: randomized controlled pilot study. J. Med. Int. Res. 15. 10.2196/jmir.2497 [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Allen M., Regenbrecht H., Abbott M. (2011). Smart-phone augmented reality for public participation in urban planning, in Proceedings of the 23rd Australian Computer-Human Interaction Conference, OzCHI 2011, 11–20. [Google Scholar]
  5. Almeida I., Oikawa M., Carres J., Miyazaki J., Kato H., Billinghurst M. (2012). AR-based video-mediated communication: a social presence enhancing experience, in Proceedings - 2012 14th Symposium on Virtual and Augmented Reality, SVR 2012, 125–130. [Google Scholar]
  6. Alvarez-Santos V., Iglesias R., Pardo X., Regueiro C., Canedo-Rodriguez A. (2014). Gesture-based interaction with voice feedback for a tour-guide robot. J. Vis. Commun. Image Represent. 25, 499–509. 10.1016/j.jvcir.2013.03.017 [DOI] [Google Scholar]
  7. Anderson F., Bischof W. F. (2014). Augmented reality improves myoelectric prosthesis training. Int. J. Disabil. Hum. Dev. 13, 349–354. 10.1515/ijdhd-2014-0327 [DOI] [Google Scholar]
  8. Anderson J. R., Boyle C. F., Reiser B. J. (1985). Intelligent tutoring systems. Science 228, 456–462. [DOI] [PubMed] [Google Scholar]
  9. Anderson F., Grossman T., Matejka J., Fitzmaurice G. (2013). YouMove: enhancing movement training with an augmented reality mirror, in UIST 2013 - Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, 311–320. [Google Scholar]
  10. Archip N., Clatz O., Whalen S., Kacher D., Fedorov A., Kot A., et al. (2007). Non-rigid alignment of pre-operative MRI, fMRI, and DT-MRI with intra-operative MRI for enhanced visualization and navigation in image-guided neurosurgery. Neuroimage 35, 609–624. 10.1016/j.neuroimage.2006.11.060 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Arning K., Ziefle M., Li M., Kobbelt L. (2012). Insights into user experiences and acceptance of mobile indoor navigation devices, in Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, MUM 2012. [Google Scholar]
  12. Arvanitis T., Petrou A., Knight J., Savas S., Sotiriou S., Gargalakos M., et al. (2009). Human factors and qualitative pedagogical evaluation of a mobile augmented reality system for science education used by learners with physical disabilities. Pers. Ubiquit. Comput. 13, 243–250. 10.1007/s00779-007-0187-7 [DOI] [Google Scholar]
  13. Asai K., Kobayashi H., Kondo T. (2005). Augmented instructions - A fusion of augmented reality and printed learning materials, in Proceedings - 5th IEEE International Conference on Advanced Learning Technologies, ICALT 2005, Vol. 2005, 213–215. [Google Scholar]
  14. Asai K., Sugimoto Y., Billinghurst M. (2010). Exhibition of lunar surface navigation system facilitating collaboration between children and parents in Science Museum, in Proceedings - VRCAI 2010, ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Application to Industry, 119–124. [Google Scholar]
  15. Avery B., Thomas B. H., Piekarski W. (2008). User evaluation of see-through vision for mobile outdoor augmented reality, in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008, 69–72. [Google Scholar]
  16. Axholt M., Cooper M., Skoglund M., Ellis S., O'Connell S., Ynnerman A. (2011). Parameter estimation variance of the single point active alignment method in optical see-through head mounted display calibration, in Proceedings - IEEE Virtual Reality, 27–34. [Google Scholar]
  17. Azuma R. T. (1997). A survey of augmented reality. Presence 6, 355–385. [Google Scholar]
  18. Bai Z., Blackwell A. F. (2012). Analytic review of usability evaluation in ISMAR. Interact. Comput. 24, 450–460. 10.1016/j.intcom.2012.07.004 [DOI] [Google Scholar]
  19. Bai H., Lee G. A., Billinghurst M. (2012). Freeze view touch and finger gesture based interaction methods for handheld augmented reality interfaces, in ACM International Conference Proceeding Series, 126–131. [Google Scholar]
  20. Bai H., Gao L., El-Sana J. B. J., Billinghurst M. (2013a). Markerless 3D gesture-based interaction for handheld augmented reality interfaces, in SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, SA 2013. [Google Scholar]
  21. Bai Z., Blackwell A. F., Coulouris G. (2013b). Through the looking glass: pretend play for children with autism, in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013, 49–58. [Google Scholar]
  22. Bai H., Lee G. A., Billinghurst M. (2014). Using 3D hand gestures and touch input for wearable AR interaction, in Conference on Human Factors in Computing Systems - Proceedings, 1321–1326. [Google Scholar]
  23. Baldauf M., Lasinger K., Fröhlich P. (2012). Private public screens - Detached multi-user interaction with large displays through mobile augmented reality. in Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, MUM 2012. [Google Scholar]
  24. Baričević D., Lee C., Turk M., Höllerer T., Bowman D. (2012). A hand-held AR magic lens with user-perspective rendering, in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers, 197–206. [Google Scholar]
  25. Baudisch P., Pohl H., Reinicke S., Wittmers E., Lühne P., Knaust M., et al. (2013). Imaginary reality gaming: Ball games without a ball, in UIST 2013 - Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (St. Andrews, UK: ), 405–410. [Google Scholar]
  26. Benko H., Feiner S. (2007). Balloon selection: a multi-finger technique for accurate low-fatigue 3D selection, in IEEE Symposium on 3D User Interfaces 2007 - Proceedings, 3DUI 2007, 79–86. [Google Scholar]
  27. Benko H., Wilson A., Zannier F. (2014). Dyadic projected spatial augmented reality, in UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, HI: ), 645–656. [Google Scholar]
  28. Bichlmeier C., Heining S., Rustaee M., Navab N. (2007). Laparoscopic virtual mirror for understanding vessel structure: evaluation study by twelve surgeons, in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR. [Google Scholar]
  29. Blum T., Wieczorek M., Aichert A., Tibrewal R., Navab N. (2010). The effect of out-of-focus blur on visual discomfort when using stereo displays, in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings, 13–17. [Google Scholar]
  30. Boring S., Baur D., Butz A., Gustafson S., Baudisch P. (2010). Touch projector: Mobile interaction through video, in Conference on Human Factors in Computing Systems - Proceedings, Vol. 4, (Atlanta, GA: ), 2287–2296. [Google Scholar]
  31. Boring S., Gehring S., Wiethoff A., Blöckner M., Schöning J., Butz A. (2011). Multi-user interaction on media facades through live video on mobile devices, in Conference on Human Factors in Computing Systems - Proceedings, 2721–2724. [Google Scholar]
  32. Botden S., Buzink S., Schijven M., Jakimowicz J. (2007). Augmented versus virtual reality laparoscopic simulation: what is the difference? A comparison of the ProMIS augmented reality laparoscopic simulator versus LapSim virtual reality laparoscopic simulator. World J. Surg. 31, 764–772. 10.1007/s00268-006-0724-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Botden S., Buzink S., Schijven M., Jakimowicz J. (2008). ProMIS augmented reality training of laparoscopic procedures face validity. Simul. Healthc. 3, 97–102. 10.1097/SIH.0b013e3181659e91 [DOI] [PubMed] [Google Scholar]
  34. Botella C., Juan M., Baños R., Alcañiz M., Guillén V., Rey B. (2005). Mixing realities? An application of augmented reality for the treatment of cockroach phobia. Cyberpsychol. Behav. 8, 162–171. 10.1089/cpb.2005.8.162 [DOI] [PubMed] [Google Scholar]
  35. Botella C., Bretón-López J., Quero S., Baños R., García-Palacios A. (2010). Treating cockroach phobia with augmented reality. Behav. Ther. 41, 401–413. 10.1016/j.beth.2009.07.002 [DOI] [PubMed] [Google Scholar]
  36. Botella C., Breton-López J., Quero S., Baños R., García-Palacios A., Zaragoza I., et al. (2011). Treating cockroach phobia using a serious game on a mobile phone and augmented reality exposure: a single case study. Comput. Hum. Behav. 27, 217–227. 10.1016/j.chb.2010.07.043 [DOI] [Google Scholar]
  37. Bretón-López J., Quero S., Botella C., García-Palacios A., Baños R., Alcañiz M. (2010). An augmented reality system validation for the treatment of cockroach phobia. Cyberpsychol. Behav. Soc. Netw. 13, 705–710. 10.1089/cyber.2009.0170 [DOI] [PubMed] [Google Scholar]
  38. Brinkman W., Havermans S., Buzink S., Botden S., Jakimowicz J. E., Schoot B. (2012). Single versus multimodality training basic laparoscopic skills. Surg. Endosc. Other Intervent. Tech. 26, 2172–2178. 10.1007/s00464-012-2184-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Bruno F., Cosco F., Angilica A., Muzzupappa M. (2010). Mixed prototyping for products usability evaluation, in Proceedings of the ASME Design Engineering Technical Conference, Vol. 3, 1381–1390. [Google Scholar]
  40. Bunnun P., Subramanian S., Mayol-Cuevas W. W. (2013). In Situ interactive image-based model building for Augmented Reality from a handheld device. Virt. Real. 17, 137–146. 10.1007/s10055-011-0206-x [DOI] [Google Scholar]
  41. Cai S., Chiang F.-K., Wang X. (2013). Using the augmented reality 3D technique for a convex imaging experiment in a physics course. Int. J. Eng. Educ. 29, 856–865. [Google Scholar]
  42. Cai S., Wang X., Chiang F.-K. (2014). A case study of Augmented Reality simulation system application in a chemistry course. Comput. Hum. Behav. 37, 31–40. 10.1016/j.chb.2014.04.018 [DOI] [Google Scholar]
  43. Carmigniani J., Furht B., Anisetti M., Ceravolo P., Damiani E., Ivkovic M. (2011). Augmented reality technologies, systems and applications. Multim. Tools Appl. 51, 341–377. 10.1007/s11042-010-0660-6 [DOI] [Google Scholar]
  44. Chang Y.-J., Kang Y.-S., Huang P.-C. (2013). An augmented reality (AR)-based vocational task prompting system for people with cognitive impairments. Res. Dev. Disabil. 34, 3049–3056. 10.1016/j.ridd.2013.06.026 [DOI] [PubMed] [Google Scholar]
  45. Chastine J., Nagel K., Zhu Y., Yearsovich L. (2007). Understanding the design space of referencing in collaborative augmented reality environments, in Proceedings - Graphics Interface, 207–214. [Google Scholar]
  46. Chen S., Chen M., Kunz A., Yantaç A., Bergmark M., Sundin A., et al. (2013). SEMarbeta: mobile sketch-gesture-video remote support for car drivers, in ACM International Conference Proceeding Series, 69–76. [Google Scholar]
  47. Chiang T., Yang S., Hwang G.-J. (2014). Students' online interactive patterns in augmented reality-based inquiry activities. Comput. Educ. 78, 97–108. 10.1016/j.compedu.2014.05.006 [DOI] [Google Scholar]
  48. Chintamani K., Cao A., Ellis R., Pandya A. (2010). Improved telemanipulator navigation during display-control misalignments using augmented reality cues. IEEE Trans. Syst. Man Cybern. A Syst. Humans 40, 29–39. 10.1109/TSMCA.2009.2030166 [DOI] [Google Scholar]
  49. Choi J., Kim G. J. (2013). Usability of one-handed interaction methods for handheld projection-based augmented reality. Pers. Ubiquit. Comput. 17, 399–409. 10.1007/s00779-011-0502-1 [DOI] [Google Scholar]
  50. Choi J., Jang B., Kim G. J. (2011). Organizing and presenting geospatial tags in location-based augmented reality. Pers. Ubiquit. Comput. 15, 641–647. 10.1007/s00779-010-0343-3 [DOI] [Google Scholar]
  51. Chun W. H., Höllerer T. (2013). Real-time hand interaction for augmented reality on mobile phones, in International Conference on Intelligent User Interfaces, Proceedings IUI, 307–314. [Google Scholar]
  52. Cocciolo A., Rabina D. (2013). Does place affect user engagement and understanding?: mobile learner perceptions on the streets of New York. J. Document. 69, 98–120. 10.1108/00220411311295342 [DOI] [Google Scholar]
  53. Datcu D., Lukosch S. (2013). Free-hands interaction in augmented reality, in SUI 2013 - Proceedings of the ACM Symposium on Spatial User Interaction, 33–40. [Google Scholar]
  54. Denning T., Dehlawi Z., Kohno T. (2014). In situ with bystanders of augmented reality glasses: perspectives on recording and privacy-mediating technologies, in Conference on Human Factors in Computing Systems - Proceedings, 2377–2386. [Google Scholar]
  55. Dey A., Sandor C. (2014). Lessons learned: evaluating visualizations for occluded objects in handheld augmented reality. Int. J. Hum. Comput. Stud. 72, 704–716. 10.1016/j.ijhcs.2014.04.001 [DOI] [Google Scholar]
  56. Dey A., Cunningham A., Sandor C. (2010). Evaluating depth perception of photorealistic Mixed Reality visualizations for occluded objects in outdoor environments, in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, 211–218. [Google Scholar]
  57. Dey A., Jarvis G., Sandor C., Reitmayr G. (2012). Tablet versus phone: depth perception in handheld augmented reality, in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers, 187–196. [Google Scholar]
  58. Dierker A., Mertes C., Hermann T., Hanheide M., Sagerer G. (2009). Mediated attention with multimodal augmented reality, in ICMI-MLMI'09 - Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interfaces, 245–252. [Google Scholar]
  59. Dixon B., Daly M., Chan H., Vescan A., Witterick I., Irish J. (2011). Augmented image guidance improves skull base navigation and reduces task workload in trainees: a preclinical trial. Laryngoscope 121, 2060–2064. 10.1002/lary.22153 [DOI] [PubMed] [Google Scholar]
  60. Dow S., Mehta M., Harmon E., MacIntyre B., Mateas M. (2007). Presence and engagement in an interactive drama, in Conference on Human Factors in Computing Systems - Proceedings (San Jose, CA: ), 1475–1484. [Google Scholar]
  61. Dünser A., Grasset R., Billinghurst M. (2008). A Survey of Evaluation Techniques Used in Augmented Reality Studies. Technical Report.
  62. Dünser A., Billinghurst M., Wen J., Lehtinen V., Nurminen A. (2012a). Exploring the use of handheld AR for outdoor navigation. Comput. Graph. 36, 1084–1095. 10.1016/j.cag.2012.10.001 [DOI] [Google Scholar]
  63. Dünser A., Walker L., Horner H., Bentall D. (2012b). Creating interactive physics education books with augmented reality, in Proceedings of the 24th Australian Computer-Human Interaction Conference, OzCHI 2012,107–114. [Google Scholar]
  64. Espay A., Baram Y., Dwivedi A., Shukla R., Gartner M., Gaines L., et al. (2010). At-home training with closed-loop augmented-reality cueing device for improving gait in patients with Parkinson disease. J. Rehabil. Res. Dev. 47, 573–582. 10.1682/JRRD.2009.10.0165 [DOI] [PubMed] [Google Scholar]
  65. Fichtinger G., Deguet A., Masamune K., Balogh E., Fischer G., Mathieu H., et al. (2005). Image overlay guidance for needle insertion in CT scanner. IEEE Trans. Biomed. Eng. 52, 1415–1424. 10.1109/TBME.2005.851493 [DOI] [PubMed] [Google Scholar]
  66. Fichtinger G. D., Deguet A., Fischer G., Iordachita I., Balogh E. B., Masamune K., et al. (2005). Image overlay for CT-guided needle insertions. Comput. Aided Surg. 10, 241–255. 10.3109/10929080500230486 [DOI] [PubMed] [Google Scholar]
  67. Fiorentino M., Debernardis S., Uva A. E., Monno G. (2013). Augmented reality text style readability with see-through head-mounted displays in industrial context. Presence 22, 171–190. 10.1162/PRES_a_00146 [DOI] [Google Scholar]
  68. Fiorentino M., Uva A. E., Gattullo M., Debernardis S., Monno G. (2014). Augmented reality on large screen for interactive maintenance instructions. Comput. Indust. 65, 270–278. 10.1016/j.compind.2013.11.004 [DOI] [Google Scholar]
  69. Fonseca D., Redondo E., Villagrasa S. (2014b). Mixed-methods research: a new approach to evaluating the motivation and satisfaction of university students using advanced visual technologies, in Universal Access in the Information Society. [Google Scholar]
  70. Fonseca D., Martí N., Redondo E., Navarro I., Sánchez A. (2014a). Relationship between student profile, tool use, participation, and academic performance with the use of Augmented Reality technology for visualized architecture models. Comput. Hum. Behav. 31, 434–445. 10.1016/j.chb.2013.03.006 [DOI] [Google Scholar]
  71. Freitas R., Campos P. (2008). SMART: a system of augmented reality for teaching 2nd grade students, in Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction, BCS HCI 2008, Vol. 2, 27–30. [Google Scholar]
  72. Fröhlich P., Simon R., Baillie L., Anegg H. (2006). Comparing conceptual designs for mobile access to geo-spatial information, in ACM International Conference Proceeding Series, Vol. 159, 109–112. [Google Scholar]
  73. Fröhlich P., Baldauf M., Hagen M., Suette S., Schabus D., Kun A. (2011). Investigating safety services on the motorway: the role of realistic visualization, in Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2011, 143–150. [Google Scholar]
  74. Furió D., González-Gancedo S., Juan M.-C., Seguí I., Rando N. (2013). Evaluation of learning outcomes using an educational iPhone game vs. traditional game. Comput. Educ. 64, 1–23. 10.1016/j.compedu.2012.12.001 [DOI] [Google Scholar]
  75. Gabbard J., Swan II J. c. (2008). Usability engineering for augmented reality: employing user-based studies to inform design. IEEE Trans. Visual. Comput. Graph. 14, 513–525. 10.1109/TVCG.2008.24 [DOI] [PubMed] [Google Scholar]
  76. Gabbard J., Schulman R., Edward Swan II J., Lucas J., Hix D., Gupta D. (2005). An empirical user-based study of text drawing styles and outdoor background textures for augmented reality, in Proceedings - IEEE Virtual Reality, 11–18.317. [Google Scholar]
  77. Gabbard J., Swan II J., Mix D. (2006). The effects of text drawing styles, background textures, and natural lighting on text legibility in outdoor augmented reality. Presence 15, 16–32. 10.1162/pres.2006.15.1.16 [DOI] [Google Scholar]
  78. Gabbard J., Swan II J., Hix D., Kim S.-J., Fitch G. (2007). Active text drawing styles for outdoor augmented reality: a user-based study and design implications, in Proceedings - IEEE Virtual Reality, 35–42. [Google Scholar]
  79. Gama A. D., Chaves T., Figueiredo L., Teichrieb V. (2012). Guidance and movement correction based on therapeutics movements for motor rehabilitation support systems, in Proceedings - 2012 14th Symposium on Virtual and Augmented Reality, SVR 2012 (Rio de Janeiro: ), 191–200. [Google Scholar]
  80. Gandy M., Catrambone R., MacIntyre B., Alvarez C., Eiriksdottir E., Hilimire M., et al. (2010). Experiences with an AR evaluation test bed: presence, performance, and physiological measurement, in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings, 127–136. [Google Scholar]
  81. Gauglitz S., Lee C., Turk M., Höllerer T. (2012). Integrating the physical environment into mobile remote collaboration, in MobileHCI'12 - Proceedings of the 14th International Conference on Human Computer Interaction with Mobile Devices and Services, 241–250. [Google Scholar]
  82. Gauglitz S., Nuernberger B., Turk M., Höllerer T. (2014a). In touch with the remote world: remote collaboration with augmented reality drawings and virtual navigation, in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, 197–205. [Google Scholar]
  83. Gauglitz S., Nuernberger B., Turk M., Höllerer T. (2014b). World-stabilized annotations and virtual scene navigation for remote collaboration, in UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, HI: ), 449–460. [Google Scholar]
  84. Gavish N., Gutiérrez T., Webel S., Rodríguez J., Peveri M., Bockholt U., et al. (2013). Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interact. Learn. Environ. 23, 778–798. 10.1080/10494820.2013.815221 [DOI] [Google Scholar]
  85. Gee A., Webb M., Escamilla-Ambrosio J., Mayol-Cuevas W., Calway A. (2011). A topometric system for wide area augmented reality. Comput. Graph. (Pergamon) 35, 854–868. 10.1016/j.cag.2011.04.006 [DOI] [Google Scholar]
  86. Goldiez B., Ahmad A., Hancock P. (2007). Effects of augmented reality display settings on human wayfinding performance. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 37, 839–845. 10.1109/TSMCC.2007.900665 [DOI] [Google Scholar]
  87. Grasset R., Lamb P., Billinghurst M. (2005). Evaluation of mixed-space collaboration, in Proceedings - Fourth IEEE and ACM International Symposium on Symposium on Mixed and Augmented Reality, ISMAR 2005, Vol. 2005, 90–99. [Google Scholar]
  88. Grasset R., Langlotz T., Kalkofen D., Tatzgern M., Schmalstieg D. (2012). Image-driven view management for augmented reality browsers, in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers, 177–186.
  89. Grasso R., Faiella E., Luppi G., Schena E., Giurazza F., Del Vescovo R., et al. (2013). Percutaneous lung biopsy: comparison between an augmented reality CT navigation system and standard CT-guided technique. Int. J. Comput. Assist. Radiol. Surg. 8, 837–848. 10.1007/s11548-013-0816-8
  90. Grechkin T. Y., Nguyen T. D., Plumert J. M., Cremer J. F., Kearney J. K. (2010). How does presentation method and measurement protocol affect distance estimation in real and virtual environments? ACM Trans. Appl. Percept. 7:26. 10.1145/1823738.1823744
  91. Grubert J., Morrison A., Munz H., Reitmayr G. (2012). Playing it real: magic lens and static peephole interfaces for games in a public space, in MobileHCI'12 - Proceedings of the 14th International Conference on Human Computer Interaction with Mobile Devices and Services, 231–240.
  92. Gupta A., Fox D., Curless B., Cohen M. (2012). DuploTrack: a real-time system for authoring and guiding duplo block assembly, in UIST'12 - Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, 389–401.
  93. Gustafsson A., Gyllenswärd M. (2005). The power-aware cord: energy awareness through ambient information display, in Conference on Human Factors in Computing Systems - Proceedings, 1423–1426.
  94. Ha T., Woo W. (2010). An empirical evaluation of virtual hand techniques for 3D object manipulation in a tangible augmented reality environment, in 3DUI 2010 - IEEE Symposium on 3D User Interfaces 2010, Proceedings, 91–98.
  95. Ha T., Billinghurst M., Woo W. (2012). An interactive 3D movement path manipulation method in an augmented reality environment. Interact. Comput. 24, 10–24. 10.1016/j.intcom.2011.06.006
  96. Hakkarainen M., Woodward C., Billinghurst M. (2008). Augmented assembly using a mobile phone, in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008, 167–168.
  97. Hartl A., Grubert J., Schmalstieg D., Reitmayr G. (2013). Mobile interactive hologram verification, in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013, 75–82.
  98. Hatala M., Wakkary R. (2005). Ontology-based user modeling in an augmented audio reality system for museums. User Modell. User Adapt. Interact. 15, 339–380. 10.1007/s11257-005-2304-5
  99. Hatala M., Wakkary R., Kalantari L. (2005). Rules and ontologies in support of real-time ubiquitous application. Web Semant. 3, 5–22. 10.1016/j.websem.2005.05.004
  100. Haugstvedt A.-C., Krogstie J. (2012). Mobile augmented reality for cultural heritage: a technology acceptance study, in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers (Atlanta, GA), 247–255.
  101. Heller F., Krämer A., Borchers J. (2014). Simplifying orientation measurement for mobile audio augmented reality applications, in Conference on Human Factors in Computing Systems - Proceedings, 615–623.
  102. Henderson S. J., Feiner S. (2008). Opportunistic controls: leveraging natural affordances as tangible user interfaces for augmented reality, in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, 211–218.
  103. Henderson S. J., Feiner S. (2009). Evaluating the benefits of augmented reality for task localization in maintenance of an armored personnel carrier turret, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009 (Orlando, FL), 135–144.
  104. Henderson S., Feiner S. (2010). Opportunistic tangible user interfaces for augmented reality. IEEE Trans. Visual. Comput. Graph. 16, 4–16. 10.1109/TVCG.2009.91
  105. Henderson S., Feiner S. (2011). Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE Trans. Visual. Comput. Graph. 17, 1355–1368. 10.1109/TVCG.2010.245
  106. Henderson S. J., Feiner S. K. (2011). Augmented reality in the psychomotor phase of a procedural task, in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011 (Basel), 191–200.
  107. Henrysson A., Billinghurst M., Ollila M. (2005a). Face to face collaborative AR on mobile phones, in Proceedings - Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR 2005, Vol. 2005, 80–89.
  108. Henrysson A., Billinghurst M., Ollila M. (2005b). Virtual object manipulation using a mobile phone, in ACM International Conference Proceeding Series, Vol. 157, 164–171.
  109. Henrysson A., Marshall J., Billinghurst M. (2007). Experiments in 3D interaction for mobile phone AR, in Proceedings - GRAPHITE 2007, 5th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, 187–194.
  110. Henze N., Boll S. (2010a). Designing a CD augmentation for mobile phones, in Conference on Human Factors in Computing Systems - Proceedings, 3979–3984.
  111. Henze N., Boll S. (2010b). Evaluation of an off-screen visualization for magic lens and dynamic peephole interfaces, in ACM International Conference Proceeding Series (Lisbon), 191–194.
  112. Hincapié-Ramos J., Roscher S., Büschel W., Kister U., Dachselt R., Irani P. (2014). CAR: contact augmented reality with transparent-display mobile devices, in PerDis 2014 - Proceedings: 3rd ACM International Symposium on Pervasive Displays 2014, 80–85.
  113. Hoang T. N., Thomas B. H. (2010). Augmented viewport: an action at a distance technique for outdoor AR using distant and zoom lens cameras, in Proceedings - International Symposium on Wearable Computers, ISWC.
  114. Horeman T., Rodrigues S., Van Den Dobbelsteen J., Jansen F.-W., Dankelman J. (2012). Visual force feedback in laparoscopic training. Surg. Endosc. Other Intervent. Techniq. 26, 242–248. 10.1007/s00464-011-1861-4
  115. Horeman T., Van Delft F., Blikkendaal M., Dankelman J., Van Den Dobbelsteen J., Jansen F.-W. (2014). Learning from visual force feedback in box trainers: tissue manipulation in laparoscopic surgery. Surg. Endosc. Other Intervent. Techniq. 28, 1961–1970. 10.1007/s00464-014-3425-x
  116. Hou L., Wang X. (2013). A study on the benefits of augmented reality in retaining working memory in assembly tasks: a focus on differences in gender. Autom. Construct. 32, 38–45. 10.1016/j.autcon.2012.12.007
  117. Hsiao K.-F., Chen N.-S., Huang S.-Y. (2012). Learning while exercising for science education in augmented reality among adolescents. Interact. Learn. Environ. 20, 331–349. 10.1080/10494820.2010.486682
  118. Hsiao K.-F. (2010). Can we combine learning with augmented reality physical activity? J. Cyber Ther. Rehabil. 3, 51–62.
  119. Hunter S., Kalanithi J., Merrill D. (2010). Make a Riddle and TeleStory: designing children's applications for the Siftables platform, in Proceedings of IDC2010: The 9th International Conference on Interaction Design and Children, 206–209.
  120. Hürst W., Van Wezel C. (2013). Gesture-based interaction via finger tracking for mobile augmented reality. Multimedia Tools Appl. 62, 233–258. 10.1007/s11042-011-0983-y
  121. Ibáñez M., Di Serio A., Villarán D., Delgado Kloos C. (2014). Experimenting with electromagnetism using augmented reality: impact on flow student experience and educational effectiveness. Comput. Educ. 71, 1–13. 10.1016/j.compedu.2013.09.004
  122. Irizarry J., Gheisari M., Williams G., Walker B. (2013). InfoSPOT: a mobile augmented reality method for accessing building information through a situation awareness approach. Autom. Construct. 33, 11–23. 10.1016/j.autcon.2012.09.002
  123. Iwai D., Yabiki T., Sato K. (2013). View management of projected labels on nonplanar and textured surfaces. IEEE Trans. Visual. Comput. Graph. 19, 1415–1424. 10.1109/TVCG.2012.321
  124. Iwata T., Yamabe T., Nakajima T. (2011). Augmented reality go: extending traditional game play with interactive self-learning support, in Proceedings - 17th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA 2011, Vol. 1 (Toyama), 105–114.
  125. Jankowski J., Samp K., Irzynska I., Jozwowicz M., Decker S. (2010). Integrating text with video and 3D graphics: the effects of text drawing styles on text readability, in Conference on Human Factors in Computing Systems - Proceedings, Vol. 2, 1321–1330.
  126. Jeon S., Choi S. (2011). Real stiffness augmentation for haptic augmented reality. Presence 20, 337–370. 10.1162/PRES_a_00051
  127. Jeon S., Harders M. (2012). Extending haptic augmented reality: modulating stiffness during two-point squeezing, in Haptics Symposium 2012, HAPTICS 2012 - Proceedings, 141–146.
  128. Jeon S., Choi S., Harders M. (2012). Rendering virtual tumors in real tissue mock-ups using haptic augmented reality. IEEE Trans. Hapt. 5, 77–84. 10.1109/TOH.2011.40
  129. Jo H., Hwang S., Park H., Ryu J.-H. (2011). Aroundplot: focus+context interface for off-screen objects in 3D environments. Comput. Graph. (Pergamon) 35, 841–853.
  130. Jones J., Swan J., Singh G., Kolstad E., Ellis S. (2008). The effects of virtual reality, augmented reality, and motion parallax on egocentric depth perception, in APGV 2008 - Proceedings of the Symposium on Applied Perception in Graphics and Visualization, 9–14.
  131. Jones J., Swan II J., Singh G., Ellis S. (2011). Peripheral visual information and its effect on distance judgments in virtual and augmented environments, in Proceedings - APGV 2011: ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization, 29–36.
  132. Jones B., Benko H., Ofek E., Wilson A. (2013). IllumiRoom: peripheral projected illusions for interactive experiences, in Conference on Human Factors in Computing Systems - Proceedings (Paris), 869–878.
  133. Juan M., Joele D. (2011). A comparative study of the sense of presence and anxiety in an invisible marker versus a marker augmented reality system for the treatment of phobia towards small animals. Int. J. Hum. Comput. Stud. 69, 440–453. 10.1016/j.ijhcs.2011.03.002
  134. Juan M., Pérez D. (2010). Using augmented and virtual reality for the development of acrophobic scenarios. Comparison of the levels of presence and anxiety. Comput. Graph. (Pergamon) 34, 756–766. 10.1016/j.cag.2010.08.001
  135. Juan M., Carrizo M., Abad F., Giménez M. (2011a). Using an augmented reality game to find matching pairs, in 19th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2011 - In Co-operation with EUROGRAPHICS, Full Papers Proceedings, 59–66.
  136. Juan M., Furió D., Alem L., Ashworth P., Cano J. (2011b). ARGreenet and BasicGreenet: two mobile games for learning how to recycle, in 19th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2011 - In Co-operation with EUROGRAPHICS, Full Papers Proceedings (Plzen), 25–32.
  137. Kasahara S., Rekimoto J. (2014). JackIn: integrating first-person view with out-of-body vision generation for human-human augmentation, in ACM International Conference Proceeding Series.
  138. Kellner F., Bolte B., Bruder G., Rautenberg U., Steinicke F., Lappe M., et al. (2012). Geometric calibration of head-mounted displays and its effects on distance estimation. IEEE Trans. Visual. Comput. Graph. 18, 589–596. 10.1109/TVCG.2012.45
  139. Kerber F., Lessel P., Mauderer M., Daiber F., Oulasvirta A., Krüger A. (2013). Is autostereoscopy useful for handheld AR?, in Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM 2013.
  140. Kern D., Stringer M., Fitzpatrick G., Schmidt A. (2006). Curball - a prototype tangible game for inter-generational play, in Proceedings of the Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE, 412–417.
  141. Kerr S., Rice M., Teo Y., Wan M., Cheong Y., Ng J., et al. (2011). Wearable mobile augmented reality: evaluating outdoor user experience, in Proceedings of VRCAI 2011: ACM SIGGRAPH Conference on Virtual-Reality Continuum and its Applications to Industry, 209–216.
  142. Kim M. J. (2013). A framework for context immersion in mobile augmented reality. Autom. Construct. 33, 79–85. 10.1016/j.autcon.2012.10.020
  143. King M., Hale L., Pekkari A., Persson M., Gregorsson M., Nilsson M. (2010). An affordable, computerised, table-based exercise system for stroke survivors. Disabil. Rehabil. Assist. Technol. 5, 288–293. 10.3109/17483101003718161
  144. Kjeldskov J., Skov M. B., Nielsen G. W., Thorup S., Vestergaard M. (2013). Digital urban ambience: mediating context on mobile devices in a city. Pervasive Mobile Comput. 9, 738–749. 10.1016/j.pmcj.2012.05.002
  145. Knörlein B., Di Luca M., Harders M. (2009). Influence of visual and haptic delays on stiffness perception in augmented reality, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009, 49–52.
  146. Ko S. M., Chang W. S., Ji Y. G. (2013). Usability principles for augmented reality applications in a smartphone environment. Int. J. Hum. Comput. Interact. 29, 501–515. 10.1080/10447318.2012.722466
  147. Kron A., Schmidt G. (2005). Haptic telepresent control technology applied to disposal of explosive ordnances: principles and experimental results, in IEEE International Symposium on Industrial Electronics, Vol. IV, 1505–1510.
  148. Kruijff E., Swan II J. E., Feiner S. (2010). Perceptual issues in augmented reality revisited, in Mixed and Augmented Reality (ISMAR), 2010 9th IEEE International Symposium on (Seoul), 3–12.
  149. Kurt S. (2010). From information to experience: place-based augmented reality games as a model for learning in a globally networked society. Teach. Coll. Rec. 112, 2565–2602.
  150. Langlotz T., Regenbrecht H., Zollmann S., Schmalstieg D. (2013). Audio stickies: visually-guided spatial audio annotations on a mobile augmented reality platform, in Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, OzCHI 2013, 545–554.
  151. Lau M., Hirose M., Ohgawara A., Mitani J., Igarashi T. (2012). Situated modeling: a shape-stamping interface with tangible primitives, in Proceedings of the 6th International Conference on Tangible, Embedded and Embodied Interaction, TEI 2012, 275–282.
  152. Leblanc F., Senagore A., Ellis C., Champagne B., Augestad K., Neary P., et al. (2010). Hand-assisted laparoscopic sigmoid colectomy skills acquisition: augmented reality simulator versus human cadaver training models. J. Surg. Educ. 67, 200–204. 10.1016/j.jsurg.2010.06.004
  153. Lee M., Billinghurst M. (2008). A Wizard of Oz study for an AR multimodal interface, in ICMI'08: Proceedings of the 10th International Conference on Multimodal Interfaces, 249–256.
  154. Lee G. A., Billinghurst M. (2011). A user study on the Snap-To-Feature interaction method, in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011, 245–246.
  155. Lee C., Bonebrake S., Höllerer T., Bowman D. (2009). A replication study testing the validity of AR simulation in VR for controlled experiments, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009, 203–204.
  156. Lee C., Bonebrake S., Höllerer T., Bowman D. (2010). The role of latency in the validity of AR simulation, in Proceedings - IEEE Virtual Reality, 11–18.
  157. Lee J., Kim Y., Kim G. J. (2012). Funneling and saltation effects for tactile interaction with virtual objects, in Conference on Human Factors in Computing Systems - Proceedings, 3141–3148.
  158. Lee C., Rincon G., Meyer G., Höllerer T., Bowman D. (2013a). The effects of visual realism on search tasks in mixed reality simulation. IEEE Trans. Visual. Comput. Graph. 19, 547–556. 10.1109/TVCG.2013.41
  159. Lee J., Olwal A., Ishii H., Boulanger C. (2013b). SpaceTop: integrating 2D and spatial 3D interactions in a see-through desktop environment, in Conference on Human Factors in Computing Systems - Proceedings, 189–192.
  160. Lee M., Billinghurst M., Baek W., Green R., Woo W. (2013c). A usability study of multimodal input in an augmented reality environment. Virt. Real. 17, 293–305. 10.1007/s10055-013-0230-0
  161. Lee S., Lee J., Lee A., Park N., Lee S., Song S., et al. (2013d). Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine. Veter. J. 196, 197–202. 10.1016/j.tvjl.2012.09.015
  162. Lehtinen V., Nurminen A., Oulasvirta A. (2012). Integrating spatial sensing to an interactive mobile 3D map, in IEEE Symposium on 3D User Interfaces 2012, 3DUI 2012 - Proceedings, 11–14.
  163. Leithinger D., Follmer S., Olwal A., Luescher S., Hogge A., Lee J., et al. (2013). Sublimate: state-changing virtual and physical rendering to augment interaction with shape displays, in Conference on Human Factors in Computing Systems - Proceedings, 1441–1450.
  164. Li N., Gu Y., Chang L., Duh H.-L. (2011). Influences of AR-supported simulation on learning effectiveness in face-to-face collaborative learning for physics, in Proceedings of the 2011 11th IEEE International Conference on Advanced Learning Technologies, ICALT 2011, 320–322.
  165. Liarokapis F. (2005). Augmented reality scenarios for guitar learning, in Theory and Practice of Computer Graphics 2005, TPCG 2005 - Eurographics UK Chapter Proceedings, 163–170.
  166. Lin T.-J., Duh H.-L., Li N., Wang H.-Y., Tsai C.-C. (2013). An investigation of learners' collaborative knowledge construction performances and behavior patterns in an augmented reality simulation system. Comput. Educ. 68, 314–321. 10.1016/j.compedu.2013.05.011
  167. Lindeman R., Noma H., De Barros P. (2007). Hear-through and mic-through augmented reality: using bone conduction to display spatialized audio, in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR.
  168. Liu S., Hua H., Cheng D. (2010). A novel prototype for an optical see-through head-mounted display with addressable focus cues. IEEE Trans. Visual. Comput. Graph. 16, 381–393. 10.1109/TVCG.2009.95
  169. Liu C., Huot S., Diehl J., MacKay W., Beaudouin-Lafon M. (2012). Evaluating the benefits of real-time feedback in mobile augmented reality with hand-held devices, in Conference on Human Factors in Computing Systems - Proceedings, 2973–2976.
  170. Livingston M. A., Ai Z. (2008). The effect of registration error on tracking distant augmented objects, in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008, 77–86.
  171. Livingston M., Zanbaka C., Swan II J. E., Smallman H. (2005). Objective measures for the effectiveness of augmented reality, in Proceedings - IEEE Virtual Reality, 287–288.
  172. Livingston M., Ai Z., Swan II J., Smallman H. (2009a). Indoor vs. outdoor depth perception for mobile augmented reality, in Proceedings - IEEE Virtual Reality, 55–62.
  173. Livingston M. A., Ai Z., Decker J. W. (2009b). A user study towards understanding stereo perception in head-worn augmented reality displays, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009, 53–56.
  174. Livingston M. A., Barrow J. H., Sibley C. M. (2009c). Quantification of contrast sensitivity and color perception using head-worn augmented reality displays, in Proceedings - IEEE Virtual Reality, 115–122.
  175. Livingston M. A., Ai Z., Karsch K., Gibson G. O. (2011). User interface design for military AR applications. Virt. Real. 15, 175–184. 10.1007/s10055-010-0179-1
  176. Livingston M. A., Dey A., Sandor C., Thomas B. H. (2013). Pursuit of "X-Ray Vision" for Augmented Reality. New York, NY: Springer.
  177. Livingston M. A. (2007). Quantification of visual capabilities using augmented reality displays, in Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, 3–12.
  178. Looser J., Billinghurst M., Grasset R., Cockburn A. (2007). An evaluation of virtual lenses for object selection in augmented reality, in Proceedings - GRAPHITE 2007, 5th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, 203–210.
  179. Lu W., Duh B.-L., Feiner S. (2012). Subtle cueing for visual search in augmented reality, in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers, 161–166.
  180. Luckin R., Fraser D. (2011). Limitless or pointless? An evaluation of augmented reality technology in the school and home. Int. J. Technol. Enhanced Learn. 3, 510–524. 10.1504/IJTEL.2011.042102
  181. Luo X., Kline T., Fischer H., Stubblefield K., Kenyon R., Kamper D. (2005a). Integration of augmented reality and assistive devices for post-stroke hand opening rehabilitation, in Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings, Vol. 7, 6855–6858.
  182. Luo X., Kenyon R., Kline T., Waldinger H., Kamper D. (2005b). An augmented reality training environment for post-stroke finger extension rehabilitation, in Proceedings of the 2005 IEEE 9th International Conference on Rehabilitation Robotics, Vol. 2005, 329–332.
  183. Lv Z. (2013). Wearable smartphone: wearable hybrid framework for hand and foot gesture interaction on smartphone, in Proceedings of the IEEE International Conference on Computer Vision, 436–443.
  184. Magnusson C., Molina M., Rassmus-Gröhn K., Szymczak D. (2010). Pointing for non-visual orientation and navigation, in NordiCHI 2010: Extending Boundaries - Proceedings of the 6th Nordic Conference on Human-Computer Interaction, 735–738.
  185. Maier P., Dey A., Waechter C. A. L., Sandor C., Tönnis M., Klinker G. (2011). An empiric evaluation of confirmation methods for optical see-through head-mounted display calibration, in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, 267–268.
  186. Markov-Vetter D., Moll E., Staadt O. (2012). Evaluation of 3D selection tasks in parabolic flight conditions: pointing task in augmented reality user interfaces, in Proceedings - VRCAI 2012: 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, 287–293.
  187. Markovic M., Dosen S., Cipriani C., Popovic D., Farina D. (2014). Stereovision and augmented reality for closed-loop control of grasping in hand prostheses. J. Neural Eng. 11:046001. 10.1088/1741-2560/11/4/046001
  188. Marner M. R., Irlitti A., Thomas B. H. (2013). Improving procedural task performance with augmented reality annotations, in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013, 39–48.
  189. Martin-Gutierrez J. (2011). Generic user manual for maintenance of mountain bike brakes based on augmented reality, in Proceedings of the 28th International Symposium on Automation and Robotics in Construction, ISARC 2011, 1401–1406.
  190. Mercier-Ganady J., Lotte F., Loup-Escande E., Marchal M., Lecuyer A. (2014). The Mind-Mirror: see your brain in action in your head using EEG and augmented reality, in Proceedings - IEEE Virtual Reality, 33–38.
  191. Milgram P., Takemura H., Utsumi A., Kishino F. (1995). Augmented reality: a class of displays on the reality-virtuality continuum, in Telemanipulator and Telepresence Technologies, Vol. 2351 (International Society for Optics and Photonics), 282–293.
  192. Morrison A., Oulasvirta A., Peltonen P., Lemmelä S., Jacucci G., Reitmayr G., et al. (2009). Like bees around the hive: a comparative study of a mobile augmented reality map, in Conference on Human Factors in Computing Systems - Proceedings (Boston, MA), 1889–1898.
  193. Mossel A., Venditti B., Kaufmann H. (2013a). 3DTouch and HOMER-S: intuitive manipulation techniques for one-handed handheld augmented reality, in ACM International Conference Proceeding Series.
  194. Mossel A., Venditti B., Kaufmann H. (2013b). DrillSample: precise selection in dense handheld augmented reality environments, in ACM International Conference Proceeding Series.
  195. Moussa G., Radwan E., Hussain K. (2012). Augmented reality vehicle system: left-turn maneuver study. Transport. Res. C Emerging Technol. 21, 1–16. 10.1016/j.trc.2011.08.005
  196. Mulloni A., Wagner D., Schmalstieg D. (2008). Mobility and social interaction as core gameplay elements in multi-player augmented reality, in Proceedings - 3rd International Conference on Digital Interactive Media in Entertainment and Arts, DIMEA 2008, 472–478.
  197. Mulloni A., Seichter H., Schmalstieg D. (2011a). Handheld augmented reality indoor navigation with activity-based instructions, in Mobile HCI 2011 - 13th International Conference on Human-Computer Interaction with Mobile Devices and Services, 211–220.
  198. Mulloni A., Seichter H., Schmalstieg D. (2011b). User experiences with augmented reality aided navigation on phones, in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011, 229–230.
  199. Mulloni A., Ramachandran M., Reitmayr G., Wagner D., Grasset R., Diaz S. (2013). User friendly SLAM initialization, in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013, 153–162.
  200. Möller A., Kranz M., Huitl R., Diewald S., Roalter L. (2012). A mobile indoor navigation system interface adapted to vision-based localization, in Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, MUM 2012.
  201. Möller A., Kranz M., Diewald S., Roalter L., Huitl R., Stockinger T., et al. (2014). Experimental evaluation of user interfaces for visual indoor navigation, in Conference on Human Factors in Computing Systems - Proceedings, 3607–3616.
  202. Ng-Thow-Hing V., Bark K., Beckwith L., Tran C., Bhandari R., Sridhar S. (2013). User-centered perspectives for automotive augmented reality, in 2013 IEEE International Symposium on Mixed and Augmented Reality - Arts, Media, and Humanities, ISMAR-AMH 2013, 13–22.
  203. Nicolau S., Garcia A., Pennec X., Soler L., Ayache N. (2005). An augmented reality system to guide radio-frequency tumour ablation. Comput. Animat. Virt. Worlds 16, 1–10. 10.1002/cav.52
  204. Nilsson S., Johansson B. (2007). Fun and usable: augmented reality instructions in a hospital setting, in Australasian Computer-Human Interaction Conference, OZCHI'07, 123–130.
  205. Oda O., Feiner S. (2009). Interference avoidance in multi-user hand-held augmented reality, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009, 13–22.
  206. Ofek E., Iqbal S. T., Strauss K. (2013). Reducing disruption from subtle information delivery during a conversation: mode and bandwidth investigation, in Conference on Human Factors in Computing Systems - Proceedings, 3111–3120.
  207. Oh S., Byun Y. (2012). The design and implementation of augmented reality learning systems, in Proceedings - 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, ICIS 2012, 651–654.
  208. Oh J.-Y., Hua H. (2007). User evaluations on form factors of tangible magic lenses, in Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, 23–32.
  209. Olsson T., Salo M. (2011). Online user survey on current mobile augmented reality applications, in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011, 75–84.
  210. Olsson T., Salo M. (2012). Narratives of satisfying and unsatisfying experiences of current mobile augmented reality applications, in Conference on Human Factors in Computing Systems - Proceedings, 2779–2788.
  211. Olsson T., Ihamäki P., Lagerstam E., Ventä-Olkkonen L., Väänänen-Vainio-Mattila K. (2009). User expectations for mobile mixed reality services: an initial user study, in VTT Symp. (Valtion Teknillinen Tutkimuskeskus) (Helsinki), 177–184.
  212. Olsson T., Kärkkäinen T., Lagerstam E., Ventä-Olkkonen L. (2012). User evaluation of mobile augmented reality scenarios. J. Ambient Intell. Smart Environ. 4, 29–47. 10.3233/AIS-2011-0127
  213. Olsson T., Lagerstam E., Kärkkäinen T., Väänänen-Vainio-Mattila K. (2013). Expected user experience of mobile augmented reality services: a user study in the context of shopping centres. Pers. Ubiquit. Comput. 17, 287–304. 10.1007/s00779-011-0494-x
  214. Papagiannakis G., Singh G., Magnenat-Thalmann N. (2008). A survey of mobile and wireless technologies for augmented reality systems. Comput. Anim. Virt. Worlds 19, 3–22. 10.1002/cav.v19:1
  215. Pescarin S., Pagano A., Wallergård M., Hupperetz W., Ray C. (2012). Archeovirtual 2011: an evaluation approach to virtual museums, in Proceedings of the 2012 18th International Conference on Virtual Systems and Multimedia, VSMM 2012: Virtual Systems in the Information Society, 25–32.
  216. Petersen N., Stricker D. (2009). Continuous natural user interface: reducing the gap between real and digital world, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009, 23–26.
  217. Peterson S., Axholt M., Cooper M., Ellis S. (2009). Visual clutter management in augmented reality: effects of three label separation methods on spatial judgments, in 3DUI - IEEE Symposium on 3D User Interfaces 2009 - Proceedings, 111–118.
  218. Poelman R., Akman O., Lukosch S., Jonker P. (2012). As if being there: mediated reality for crime scene investigation, in Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 1267–1276.
  219. Porter S. R., Marner M. R., Smith R. T., Zucco J. E., Thomas B. H. (2010). Validating spatial augmented reality for interactive rapid prototyping, in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings, 265–266.
  220. Pucihar K., Coulton P., Alexander J. (2014). The use of surrounding visual context in handheld AR: device vs. user perspective rendering, in Conference on Human Factors in Computing Systems - Proceedings, 197–206.
  221. Pusch A., Martin O., Coquillart S. (2008). HEMP - hand-displacement-based pseudo-haptics: a study of a force field application, in 3DUI - IEEE Symposium on 3D User Interfaces 2008, 59–66.
  222. Pusch A., Martin O., Coquillart S. (2009). HEMP - hand-displacement-based pseudo-haptics: a study of a force field application and a behavioural analysis. Int. J. Hum. Comput. Stud. 67, 256–268. 10.1016/j.ijhcs.2008.09.015
  223. Rankohi S., Waugh L. (2013). Review and analysis of augmented reality literature for construction industry. Visual. Eng. 1, 1–18. 10.1186/2213-7459-1-9
  224. Rauhala M., Gunnarsson A.-S., Henrysson A., Ynnerman A. (2006). A novel interface to sensor networks using handheld augmented reality, in ACM International Conference Proceeding Series, Vol. 159, 145–148.
  225. Regenbrecht H., McGregor G., Ott C., Hoermann S., Schubert T., Hale L., et al. (2011). Out of reach? - A novel AR interface approach for motor rehabilitation, in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011, 219–228.
  226. Regenbrecht H., Hoermann S., McGregor G., Dixon B., Franz E., Ott C., et al. (2012). Visual manipulations for motor rehabilitation. Comput. Graph. (Pergamon) 36, 819–834.
  227. Regenbrecht H., Hoermann S., Ott C., Muller L., Franz E. (2014). Manipulating the experience of reality for rehabilitation applications. Proc. IEEE 102, 170–184. 10.1109/JPROC.2013.2294178
  228. Reif R., Günthner W. A. (2009). Pick-by-vision: augmented reality supported order picking. Vis. Comput. 25, 461–467. 10.1007/s00371-009-0348-y
  229. Ritter E., Kindelan T., Michael C., Pimentel E., Bowyer M. (2007). Concurrent validity of augmented reality metrics applied to the fundamentals of laparoscopic surgery (FLS). Surg. Endosc. Other Intervent. Techniq. 21, 1441–1445. 10.1007/s00464-007-9261-5
  230. Robertson C., MacIntyre B., Walker B. N. (2007). An evaluation of graphical context as a means for ameliorating the effects of registration error. IEEE Trans. Visual. Comput. Graph. 15, 179–192.
  231. Robertson C., MacIntyre B., Walker B. (2008). An evaluation of graphical context when the graphics are outside of the task area, in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008, 73–76.
  232. Rohs M., Schöning J., Raubal M., Essl G., Krüger A. (2007). Map navigation with mobile devices: virtual versus physical movement with and without visual context, in Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07, 146–153.
  233. Rohs M., Schleicher R., Schöning J., Essl G., Naumann A., Krüger A. (2009a). Impact of item density on the utility of visual context in magic lens interactions. Pers. Ubiquit. Comput. 13, 633–646. 10.1007/s00779-009-0247-2
  234. Rohs M., Schöning J., Schleicher R., Essl G., Naumann A., Krüger A. (2009b). Impact of item density on magic lens interactions, in MobileHCI09 - The 11th International Conference on Human-Computer Interaction with Mobile Devices and Services.
  235. Rohs M., Oulasvirta A., Suomalainen T. (2011). Interaction with magic lenses: real-world validation of a Fitts' law model, in Conference on Human Factors in Computing Systems - Proceedings, 2725–2728.
  236. Rosenthal S., Kane S., Wobbrock J., Avrahami D. (2010). Augmenting on-screen instructions with micro-projected guides: when it works, and when it fails, in UbiComp'10 - Proceedings of the 2010 ACM Conference on Ubiquitous Computing, 203–212.
  237. Rusch M., Schall M., Jr., Gavin P., Lee J., Dawson J., Vecera S., et al. (2013). Directing driver attention with augmented reality cues. Transport. Res. Part F Traf. Psychol. Behav. 16, 127–137. 10.1016/j.trf.2012.08.007
  238. Salamin P., Thalmann D., Vexo F. (2006). The benefits of third-person perspective in virtual and augmented reality, in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, 27–30.
  239. Salvador-Herranz G., Pérez-López D., Ortega M., Soto E., Alcañiz M., Contero M. (2013). Manipulating virtual objects with your hands: a case study on applying desktop augmented reality at the primary school, in Proceedings of the Annual Hawaii International Conference on System Sciences, 31–39.
  240. Sandor C., Cunningham A., Dey A., Mattila V.-V. (2010). An augmented reality X-ray system based on visual saliency, in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings, 27–36.
  241. Santos M. E. C., Chen A., Terawaki M., Yamamoto G., Taketomi T., Miyazaki J., et al. (2013). Augmented reality X-ray interaction in K-12 education: theory, student perception and teacher evaluation, in Proceedings - 2013 IEEE 13th International Conference on Advanced Learning Technologies, ICALT 2013, 141–145.
  242. Schall G., Zollmann S., Reitmayr G. (2013a). Smart Vidente: advances in mobile augmented reality for interactive visualization of underground infrastructure. Pers. Ubiquit. Comput. 17, 1533–1549. 10.1007/s00779-012-0599-x
  243. Schall M., Rusch M., Lee J., Dawson J., Thomas G., Aksan N., et al. (2013b). Augmented reality cues and elderly driver hazard perception. Hum. Fact. 55, 643–658. 10.1177/0018720812462029
  244. Schinke T., Henze N., Boll S. (2010). Visualization of off-screen objects in mobile augmented reality, in ACM International Conference Proceeding Series, 313–316.
  245. Schoenfelder R., Schmalstieg D. (2008). Augmented reality for industrial building acceptance, in Proceedings - IEEE Virtual Reality, 83–90.
  246. Schwerdtfeger B., Klinker G. (2008). Supporting order picking with augmented reality, in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008, 91–94.
  247. Schwerdtfeger B., Reif R., Günthner W., Klinker G., Hamacher D., Schega L., et al. (2009). Pick-by-vision: a first stress test, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009, 115–124.
  248. Schwerdtfeger B., Reif R., Günthner W. A., Klinker G. (2011). Pick-by-vision: there is something to pick at the end of the augmented tunnel. Virt. Real. 15, 213–223. 10.1007/s10055-011-0187-9
  249. Shatte A., Holdsworth J., Lee I. (2014). Mobile augmented reality based context-aware library management system. Exp. Syst. Appl. 41, 2174–2185. 10.1016/j.eswa.2013.09.016
  250. Singh G., Swan II J., Jones J., Ellis S. (2010). Depth judgment measures and occluding surfaces in near-field augmented reality, in Proceedings - APGV 2010: Symposium on Applied Perception in Graphics and Visualization, 149–156.
  251. Singh G., Swan II J., Jones J., Ellis S. (2012). Depth judgments by reaching and matching in near-field augmented reality, in Proceedings - IEEE Virtual Reality, 165–166.
  252. Sodhi R. B., Benko H., Wilson A. (2012). LightGuide: projected visualizations for hand movement guidance, in Conference on Human Factors in Computing Systems - Proceedings, 179–188.
  253. Sodhi R., Jones B., Forsyth D., Bailey B., Maciocci G. (2013). BeThere: 3D mobile collaboration with spatial input, in Conference on Human Factors in Computing Systems - Proceedings, 179–188.
  254. Sommerauer P., Müller O. (2014). Augmented reality in informal learning environments: a field experiment in a mathematics exhibition. Comput. Educ. 79, 59–68. 10.1016/j.compedu.2014.07.013
  255. Sukan M., Feiner S., Tversky B., Energin S. (2012). Quick viewpoint switching for manipulating virtual objects in hand-held augmented reality using stored snapshots, in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers, 217–226.
  256. Sumadio D. D., Rambli D. R. A. (2010). Preliminary evaluation on user acceptance of the augmented reality use for education, in 2010 2nd International Conference on Computer Engineering and Applications, ICCEA 2010, Vol. 2, 461–465.
  257. Suzuki K., Garfinkel S., Critchley H., Seth A. (2013). Multisensory integration across exteroceptive and interoceptive domains modulates self-experience in the rubber-hand illusion. Neuropsychologia 51, 2909–2917. 10.1016/j.neuropsychologia.2013.08.014
  258. Swan J. E., II, Gabbard J. L. (2005). Survey of user-based experimentation in augmented reality, in Proceedings of 1st International Conference on Virtual Reality, HCI International 2005, 1–9.
  259. Sylaiou S., Mania K., Karoulis A., White M. (2010). Exploring the relationship between presence and enjoyment in a virtual museum. Int. J. Hum. Comput. Stud. 68, 243–253. 10.1016/j.ijhcs.2009.11.002
  260. Szymczak D., Rassmus-Gröhn K., Magnusson C., Hedvall P.-O. (2012). A real-world study of an audio-tactile tourist guide, in MobileHCI'12 - Proceedings of the 14th International Conference on Human Computer Interaction with Mobile Devices and Services (San Francisco, CA), 335–344.
  261. Takano K., Hata N., Kansaku K. (2011). Towards intelligent environments: an augmented reality-brain-machine interface operated with a see-through head-mount display. Front. Neurosci. 5:60. 10.3389/fnins.2011.00060
  262. Tangmanee K., Teeravarunyou S. (2012). Effects of guided arrows on head-up display towards the vehicle windshield, in 2012 Southeast Asian Network of Ergonomics Societies Conference: Ergonomics Innovations Leveraging User Experience and Sustainability, SEANES 2012.
  263. Teber D., Guven S., Simpfendörfer T., Baumhauer M., Güven E., Yencilek F., et al. (2009). Augmented reality: a new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. Eur. Urol. 56, 332–338. 10.1016/j.eururo.2009.05.017
  264. Thomas R., John N., Delieu J. (2010). Augmented reality for anatomical education. J. Vis. Commun. Med. 33, 6–15. 10.3109/17453050903557359
  265. Thomas B. H. (2007). Evaluation of three input techniques for selection and annotation of physical objects through an augmented reality view, in Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, 33–36.
  266. Tillon A. B., Marchal I., Houlier P. (2011). Mobile augmented reality in the museum: can a lace-like technology take you closer to works of art?, in 2011 IEEE International Symposium on Mixed and Augmented Reality - Arts, Media, and Humanities, ISMAR-AMH 2011, 41–47.
  267. Tomioka M., Ikeda S., Sato K. (2013). Approximated user-perspective rendering in tablet-based augmented reality, in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013, 21–28.
  268. Toyama T., Dengel A., Suzuki W., Kise K. (2013). Wearable reading assist system: augmented reality document combining document retrieval and eye tracking, in Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 30–34.
  269. Toyama T., Sonntag D., Dengel A., Matsuda T., Iwamura M., Kise K. (2014a). A mixed reality head-mounted text translation system using eye gaze input, in International Conference on Intelligent User Interfaces, Proceedings IUI, 329–334.
  270. Toyama T., Sonntag D., Orlosky J., Kiyokawa K. (2014b). A natural interface for multi-focal plane head mounted displays using 3D gaze, in Proceedings of the Workshop on Advanced Visual Interfaces AVI, 25–32.
  271. Tsuda T., Yamamoto H., Kameda Y., Ohta Y. (2005). Visualization methods for outdoor see-through vision, in ACM International Conference Proceeding Series, Vol. 157, 62–69.
  272. Tumler J., Doil F., Mecke R., Paul G., Schenk M., Pfister E., et al. (2008). Mobile augmented reality in industrial applications: approaches for solution of user-related issues, in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008, 87–90.
  273. Tönnis M., Klinker G. (2007). Effective control of a car driver's attention for visual and acoustic guidance towards the direction of imminent dangers, in Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality, 13–22.
  274. Tönnis M., Sandor C., Klinker G., Lange C., Bubb H. (2005). Experimental evaluation of an augmented reality visualization for directing a car driver's attention, in Proceedings - Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR 2005, Vol. 2005, 56–59.
  275. Vazquez-Alvarez Y., Oakley I., Brewster S. (2012). Auditory display design for exploration in mobile audio-augmented reality. Pers. Ubiquit. Comput. 16, 987–999. 10.1007/s00779-011-0459-0
  276. Veas E., Mendez E., Feiner S., Schmalstieg D. (2011). Directing attention and influencing memory with visual saliency modulation, in Conference on Human Factors in Computing Systems - Proceedings, 1471–1480.
  277. Veas E., Grasset R., Kruijff E., Schmalstieg D. (2012). Extended overview techniques for outdoor augmented reality. IEEE Trans. Visual. Comput. Graph. 18, 565–572. 10.1109/TVCG.2012.44
  278. Vignais N., Miezal M., Bleser G., Mura K., Gorecky D., Marin F. (2013). Innovative system for real-time ergonomic feedback in industrial manufacturing. Appl. Ergon. 44, 566–574. 10.1016/j.apergo.2012.11.008
  279. Voida S., Podlaseck M., Kjeldsen R., Pinhanez C. (2005). A study on the manipulation of 2D objects in a projector/camera-based augmented reality environment, in CHI 2005: Technology, Safety, Community: Conference Proceedings - Conference on Human Factors in Computing Systems, 611–620.
  280. Wacker F., Vogt S., Khamene A., Jesberger J., Nour S., Elgort D., et al. (2006). An augmented reality system for MR image-guided needle biopsy: initial results in a swine model. Radiology 238, 497–504. 10.1148/radiol.2382041441
  281. Wagner D., Billinghurst M., Schmalstieg D. (2006). How real should virtual characters be?, in International Conference on Advances in Computer Entertainment Technology 2006.
  282. Wang X., Dunston P. (2011). Comparative effectiveness of mixed reality-based virtual environments in collaborative design. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 41, 284–296. 10.1109/TSMCC.2010.2093573
  283. Wang X., Kim M. J., Love P. E., Kang S.-C. (2013). Augmented reality in built environment: classification and implications for future research. Autom. Construct. 32, 1–13. 10.1016/j.autcon.2012.11.021
  284. Weichel C., Lau M., Kim D., Villar N., Gellersen H. (2014). MixFab: a mixed-reality environment for personal fabrication, in Conference on Human Factors in Computing Systems - Proceedings, 3855–3864.
  285. Weing M., Schaub F., Röhlig A., Könings B., Rogers K., Rukzio E., et al. (2013). P.I.A.N.O.: enhancing instrument learning via interactive projected augmentation, in UbiComp 2013 Adjunct - Adjunct Publication of the 2013 ACM Conference on Ubiquitous Computing (Zurich), 75–78.
  286. White S., Lister L., Feiner S. (2007). Visual hints for tangible gestures in augmented reality, in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR.
  287. White S., Feng D., Feiner S. (2009). Interaction and presentation techniques for shake menus in tangible augmented reality, in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009, 39–48.
  288. Wilson K., Doswell J., Fashola O., Debeatham W., Darko N., Walker T., et al. (2013). Using augmented reality as a clinical support tool to assist combat medics in the treatment of tension pneumothoraces. Milit. Med. 178, 981–985. 10.7205/MILMED-D-13-00074
  289. Wither J., Höllerer T. (2005). Pictorial depth cues for outdoor augmented reality, in Proceedings - International Symposium on Wearable Computers, ISWC, Vol. 2005, 92–99.
  290. Wither J., DiVerdi S., Höllerer T. (2007). Evaluating display types for AR selection and annotation, in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR.
  291. Wither J., Allen R., Samanta V., Hemanus J., Tsai Y.-T., Azuma R., et al. (2010). The Westwood experience: connecting story to locations via mixed reality, in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Arts, Media, and Humanities, ISMAR-AMH 2010 - Proceedings, 39–46.
  292. Wither J., Tsai Y.-T., Azuma R. (2011). Indirect augmented reality. Comput. Graph. (Pergamon) 35, 810–822. 10.1016/j.cag.2011.04.010
  293. Wojciechowski R., Cellary W. (2013). Evaluation of learners' attitude toward learning in ARIES augmented reality environments. Comput. Educ. 68, 570–585. 10.1016/j.compedu.2013.02.014
  294. Wrzesien M., Bretón-López J., Botella C., Burkhardt J.-M., Alcañiz M., Pérez-Ara M., et al. (2013). How technology influences the therapeutic process: evaluation of the patient-therapist relationship in augmented reality exposure therapy and in vivo exposure therapy. Behav. Cogn. Psychother. 41, 505–509. 10.1017/S1352465813000088
  295. Xu Y., Gandy M., Deen S., Schrank B., Spreen K., Gorbsky M., et al. (2008). BragFish: exploring physical and social interaction in co-located handheld augmented reality games, in Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology, ACE 2008, 276–283.
  296. Xu Y., Barba E., Radu I., Gandy M., MacIntyre B. (2011). Chores are fun: understanding social play in board games for digital tabletop game design, in Proceedings of DiGRA 2011 Conference: Think Design Play (Utrecht).
  297. Yamabe T., Nakajima T. (2013). Playful training with augmented reality games: case studies towards reality-oriented system design. Multimedia Tools Appl. 62, 259–286. 10.1007/s11042-011-0979-7
  298. Yeh K.-C., Tsai M.-H., Kang S.-C. (2012). On-site building information retrieval by using projection-based augmented reality. J. Comput. Civil Eng. 26, 342–355. 10.1061/(ASCE)CP.1943-5487.0000156
  299. Yoo H.-N., Chung E., Lee B.-H. (2013). The effects of augmented reality-based Otago exercise on balance, gait, and falls efficacy of elderly women. J. Phys. Ther. Sci. 25, 797–801. 10.1589/jpts.25.797
  300. Yuan M. L., Ong S. K., Nee A. Y. C. (2008). Augmented reality for assembly guidance using a virtual interactive tool. Int. J. Product. Res. 46, 1745–1767. 10.1080/00207540600972935
  301. Yudkowsky R., Luciano C., Banerjee P., Schwartz A., Alaraj A., Lemole G., et al. (2013). Practice on an augmented reality/haptic simulator and library of virtual brains improves residents' ability to perform a ventriculostomy. Simul. Healthcare 8, 25–31. 10.1097/SIH.0b013e3182662c69
  302. Zhang R., Nordman A., Walker J., Kuhl S. A. (2012). Minification affects verbal- and action-based distance judgments differently in head-mounted displays. ACM Trans. Appl. Percept. 9:14. 10.1145/2325722.2325727
  303. Zhang J., Sung Y.-T., Hou H.-T., Chang K.-E. (2014). The development and evaluation of an augmented reality-based armillary sphere for astronomical observation instruction. Comput. Educ. 73, 178–188. 10.1016/j.compedu.2014.01.003
  304. Zhou Z., Cheok A., Qiu Y., Yang X. (2007). The role of 3-D sound in human reaction and performance in augmented reality environments. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 37, 262–272. 10.1109/TSMCA.2006.886376
