PLOS One
. 2022 Jan 6;17(1):e0261607. doi: 10.1371/journal.pone.0261607

Evaluation of virtual tour in an online museum: Exhibition of Architecture of the Forbidden City

Jia Li, Jin-Wei Nie, Jing Ye
Editor: Prabhat Mittal
PMCID: PMC8735558  PMID: 34990488

Abstract

Online virtual museum tours combine museum authority and an academic approach with the diversity and interactivity of online resources; such tours have become an essential resource for online scientific research and education. Many important museums around the world are developing this type of online service. Comprehensive evaluation of such tours is, however, urgently needed to ensure effectiveness. This paper establishes a heuristic evaluation scale based on the literature. Taking the online virtual tour of the Exhibition of Architecture of the Forbidden City as a case study, confirmatory factor analysis was then carried out to improve the scale. Interviews were conducted to discuss and analyze the research results. The developed evaluation scale has four dimensions: authenticity, interaction, navigation, and learning. The results from the case study showed, first, that the exhibition had visual authenticity, but the behavioral authenticity was insufficient; second, the exhibition was generally interactive, but this aspect could be improved by enriching the links; third, the lack of effective navigation design for the exhibit was the main factor affecting experience quality. Fourth, the exhibition was informative and supported learning, but needs further improvement to the quantity and quality of information provided. Finally, the interviews revealed that the online exhibition did not entirely support people of different ages and abilities, so it needs further improvement to be wholly inclusive.

Introduction

In the traditional sense, a museum collects a variety of valuable objects primarily to support scientific research and social education [1]. Going beyond traditional exhibition methods, as technological innovation and transitions in institutional function allow, museum displays are becoming increasingly diversified, with a development focus on interaction and a recent emphasis on enhancing the relationship between humans and objects (exhibits) [2]. As science and technology develop, museums are, in turn, developing and applying a variety of new digital technologies, greatly expanding the ways collections can be displayed across time and space and creating brand-new experiences [3]. In 2019, the International Council on Monuments and Sites (ICOMOS) promulgated the London Charter, which considered the digitization of cultural assets (including museum galleries) and noted that the accuracy of this approach needs careful and rigorous consideration. Different digital visualization methods and results need to be evaluated to ensure understanding of the assets through interpretive materials and to obtain the best results for the museum and its visitors. Thanks to the inherent advantages of academic authority and objectivity, museum resources have become a quality resource for online academia and education; an audience-centered experience should, however, play a real and important role in the evaluation of these and related resources [4].

An online museum is also called a virtual, digital, or electronic museum [5]. Online museums are an extension of the traditional museum but are based on network technology and contain multi-dimensional works and hypermedia [6]. Online museums break the limitations of physical time and space that bind traditional museums and can constantly update previous collections, research, displays, and educational tools [7]. Online museums shake the traditional museum’s display view, which is centered on objects, and the traditional display space view, which is based on buildings as containers; thus, online museums significantly increase audience autonomy.

A virtual tour is essentially a branch of virtual reality (VR) technology and has been widely used in the medical, building, and transportation industries [8]. Unlike VR, the space shown in a virtual tour actually exists [9]. Panoramic image technology captures information about the real space environment and generates a remarkably similar virtual space, so that people who cannot visit the museum in person can have an immersive experience of the museum collection through a phone or computer interface [10].

The ultimate goal of such an exhibition is the audience’s cognitive improvement, as well as the perceptual and intellectual recognition people gain, as an experiencing subject, after extensive interaction with the museum exhibits, media, and space [11]. A virtual tour of an online museum can be regarded as an extension of the online museum and as an organic combination of the real and the online museums [12]. This involves making the museum’s architectural or exhibition space into a virtual tour, which can then be publicly released through the online museum. In recent years, many world-renowned museums—including the Louvre Museum in France, the Metropolitan Museum of Art in the United States, the Palace Museum in China, and the Hermitage Museum in Russia—have opened online museum virtual tours on their official websites. If we ignore the evaluation of the effects of digital technology (as applied in museums) on the general audience experience, it will not be possible to assess the full importance of the functions of digital technology, nor can it truly make up for the deficiencies of traditional museums [13]. Given the above problems, the purpose of this study is as follows.

  1. To try to construct a set of user experience evaluation methods for online museum virtual tours; and

  2. To evaluate, as a case study, the Exhibition of Architecture of the Forbidden City (EAFC), to further demonstrate and improve the suggested method.

Literature review

How can online museum virtual tours be evaluated? What would be the result of such evaluations? Donghai (1988) believes that the study of the museum audience is a critical trend in the development of contemporary museology and has become one of the standards for museum modernization, sometimes even determining the fate of the museum [14].

There has been notable research on the evaluation of virtual museum tours. Based on the development of a virtual tour application for the Isparta Museum in Turkey, Bastanlar discussed user preferences for navigation functions, control options, and information acquired during the virtual museum tour [15]. Barbieri (2017) evaluated three critical qualities (usability, entertainment, and learning) of two kinds of virtual museum systems based on the development of a virtual museum system for Cetraro in Italy. However, the evaluation method was relatively simple, and the research scope was somewhat limited [16]. Kabassi et al. evaluated a virtual tour of museums in Italy using the VR evaluation scale developed by Sutcliffe and Gault and concluded that the three most important dimensions in virtual museum tours are coordination of movement and performance, support of navigation and orientation, and support of learning [17]. However, this study used experts as the main evaluating body, ignoring the fact that the majority of virtual museum users are not experts but rather the general public. This is problematic, as some studies have shown that, in evaluating the application experience, the opinions of experts and general users sometimes differ [18].

There are no directly relevant research results on user experience evaluation of virtual tours in online museums. Styliani et al. analyzed the relationship between user experience and online museums and proposed the importance of “real user” experience evaluation [11]. Pagano, Roussou, et al., through user-experience research, discussed paths and communication paradigms for improving the user interaction experience in virtual museums; however, their research concerned digital interactive services in the physical environment of the museum rather than online services [19, 20]. Lin collected and studied user data from online museums and summarized four design features and five design guidelines to improve user experience, but this research mainly focused on the online learning experience [21]. Based on a literature review, MacDonald (2015) established a set of scales for evaluating the user experience of online museums, comprising three dimensions (visceral, behavioral, and reflective, corresponding to visual, interaction, and experience), and conducted an empirical study through expert user experience [22]. Although the above studies involved evaluating the user experience of museum virtual or online resources, they did not address the research issues considered in this paper.

Materials and methods

The object of this evaluation is the EAFC, which is a permanent exhibition on the official website of the Palace Museum in Beijing. The exhibition focuses on the achievements of Lei, a famous architectural designer of the Qing Dynasty, and uses texts, drawings, photos, and models as exhibits to comprehensively show the artistic appearance of imperial architecture in the Forbidden City (the link to the exhibition is: http://quanjing.artron.net/scene/gPTvX3m1LENXdkTv5UzNsDxkLU1rUNKV/zijinchengjianzhuzhan/tour.html).

Fig 1 is a screenshot of the virtual tour interface. During the tour, the interface consists of five parts: function buttons, movement button, points of interest (POIs), artifact information, and exhibition space. Buttons with different symbols represent different functions, such as switching scenes, zooming, and viewing maps. The white tip that appears on the floor of the exhibition space is the movement button, which can be clicked to switch scenes. The small blue magnifying-glass symbol near an artifact is a POI; clicking on a POI pops up information about that artifact. The exhibition space is a virtual tour space based on 360° panoramic photography. During the experience, visitors can move the view or zoom in on the artifacts with the mouse.

Fig 1. The screenshot of the virtual tour interface.


(Blue, green, red, purple, and yellow boxes are drawn by the researchers, in which the blue box shows function buttons, the green box shows the movement button, the red box shows the POI, the purple box shows the artifact information, and the yellow box shows the exhibition space).

Based on heuristic evaluation, this study constructed an evaluation scale for the online museum virtual tour experience. Developed by Jakob Nielsen, heuristic evaluation has often been applied to evaluating the usability of products or services. Sutcliffe and Gault (2004) developed a heuristic evaluation scale for VR with 12 factors, including naturalness and compatibility [23]. Kabassi et al. (2019) applied Sutcliffe and Gault’s scale to the study of virtual tours and grouped the relevant factors into four dimensions: VR experience, perception of presence, navigation ability, and learning support [17] (Table 1). In addition to revising some expressions of these dimensions and factors, we expanded the fourth dimension, learning support, to include four factors, such as information availability and information richness, bringing the 12 factors to 15, to highlight the educational function of museums (Table 2).

Table 1. The scale of Sutcliffe and Gault (2004) and Kabassi et al. (2019).

Heuristic evaluation scale for VR developed by Sutcliffe and Gault (2004): 1. Natural engagement. 2. Compatibility with the user’s task and domain. 3. Natural expression of action. 4. Close coordination of action and representation. 5. Realistic feedback. 6. Faithful viewpoints. 7. Navigation and orientation support. 8. Clear entry and exit points. 9. Consistent departures. 10. Support for learning. 11. Clear turn-taking. 12. Sense of presence.

Heuristic evaluation scale for virtual tours developed by Kabassi et al. (2019), grouped by category:

VR experience
1. Natural engagement: how close the interaction is to the real world.
2. Compatibility with the user’s task and the domain: how close the behavior of objects is to the real world and affordance for task action.
3. Realistic feedback: visibility of the effect of users’ actions and conformity to the laws of physics.

Perception of presence
4. The natural expression of action: does the system allow the user to act naturally?
5. Close coordination of action and representation: quality of the response between user movement and virtual environment.
6. Clear turn-taking: clearness of who has the initiative.
7. Sense of presence: the naturalness of the user’s perception of engagement in the system and being in a ‘real’ world.

Navigation
8. Faithful viewpoints: the naturalness of change between viewpoints.
9. Navigation and orientation support: naturalness in orientation and navigation. Is it clear where users are and how they return?
10. Clear entry and exit points: clearness of entry and exit points.
11. Consistent departures: consistency of departure actions.

Learning
12. Support for learning: promotion of learning.

Table 2. Evaluation scale for virtual tours of online museums.

Dimensions Factors Definitions Item
Authenticity A1 Authenticity of participation The interactive experience is as close as possible to what happens in the real world. Q1 When I wander, I feel like I’m in a real museum.
A2 Environmental authenticity The virtual environment is as close to the real world as possible. Q2 I felt like I was in a real museum.
Q3 The artifacts give me a very real feeling.
A3 Authenticity of feedback The feedback of the virtual environment based on behavior corresponds to the state of the real world. Q4 The space and objects in the virtual exhibit give real responses to my wandering behavior.
Interactivity B1 Naturalness of behavior Behavior is natural and unrestricted. Q5 The process of my virtual tour is very natural and there are no restrictions.
B2 Viewer-system coordination The behavior is coordinated with the performance of the system. Q6 When I move the camera, the picture changes very naturally.
Q7 When I zoom out or zoom in, the picture changes very naturally.
B3 Clear permissions In the process of interaction, the rights of the viewer and the system are very clear. Q8 I understand what I can operate and what I can’t.
B4 Naturalness of being The sense of presence and participation is very natural. Q9 When I interacted with the exhibits, the feedback was as expected.
Navigation C1 Loyalty of perspective The change in perspective direction is expected. Q10 My perspective changed in line with my expectations.
C2 Clarity of direction The viewer always knows the direction. Q11 I always knew the directions to visit.
C3 Clarity of location The viewer always knows the location. Q12 I always know where I am.
Q13 I know how to locate myself when I am lost.
C4 Clarity of start and end The viewer knows where to start and where to end. Q14 I know where we start and where we end.
Learning D1 Information availability The audience gets the information they want. Q15 I can get the information I want.
D2 Information abundance The system can provide enough information. Q16 I get enough information from the exhibition.
D3 Enjoyable presentation of information The information provided by the system can arouse the attention and interest of the audience. Q17 The information I get is interesting to me.
D4 Connectivity of information The audience is willing to share their information with others. Q18 I will discuss the information with others.

Confirmatory factor analysis (CFA) was the primary method used in this study and was combined with interviews. The purpose of CFA is to verify the validity of the scale and further optimize its structure.

The questionnaire was divided into two parts, A and B. Questionnaire A was designed according to the four dimensions and 15 factors in Table 2, with a total of 18 items; each item was rated on a five-level Likert scale (strongly disagree, disagree, general, agree, strongly agree). Questionnaire B was a demographic questionnaire and gathered information including age, gender, and educational background using a single-choice format. All collected data were analyzed using SPSS Statistics 26.
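
Before any reliability or factor analysis, the five-level Likert responses must be coded numerically. A minimal pandas sketch of this step is shown below; the item IDs (Q1, Q2) come from the scale, but the response data are invented for illustration:

```python
import pandas as pd

# Map the five Likert labels used in Questionnaire A to scores 1-5
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "general": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Invented toy responses for two of the 18 items
raw = pd.DataFrame({
    "Q1": ["agree", "strongly agree", "general"],
    "Q2": ["agree", "agree", "disagree"],
})

scored = raw.apply(lambda col: col.map(LIKERT))  # numeric 1-5 matrix
print(scored.mean())  # per-item mean scores
```

The resulting respondents-by-items matrix of scores is what the reliability and factor analyses below operate on.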

In addition to the questionnaire, the interview is a common method in many similar studies [24, 25], and the semi-structured interview is the most commonly used form for this kind of research [26–28]. The interview content remained centered on the four dimensions of the scale. The interviews sought to provide a more comprehensive and in-depth understanding of the subjects’ experience and to supplement the deficiencies of the quantitative analysis [25], so we hoped for unexpected answers and asked further questions based on the responses.

The test process included the following steps: first, the subjects were asked to experience the EAFC online; during the experience, they were asked to find a designated exhibit and then to visit the exhibition freely. Second, after the tour had been completed, the subjects filled in the questionnaire to provide data for statistical analysis. Finally, sample interviews were conducted after the questionnaire had been completed. At the start of the test, all of the participants were informed of the theme of the exhibition and were required to find a designated artifact named Ceiling of Ci Ning Palace Garden Linxi Pavilion (Fig 2); all the other artifacts were visited freely. The test time for each participant was about 10–20 minutes (Fig 3).

Fig 2. Ceiling of Ci Ning Palace Garden Linxi Pavilion.


Fig 3. Flow diagram of the test.


Ethical statement

The study complied with IRB principles and was approved by the Academic Committee of Jiaxing University. Before data collection took place, all participants were informed of the benefits, risks, and purpose of the study and of how the data would be used. All tests were conducted with the participants’ consent, and the questionnaire and interview were completed anonymously.

Results

To ensure the reliability of the questionnaire, this work consisted of a pre-test and an official test, and SPSS was used for reliability analysis of the data. The pre-test ran from January 4 to January 13, 2020, with 22 subjects; 18 valid questionnaires were collected, and the reliability results showed a Cronbach’s α coefficient of 0.815 (> 0.7). The official test ran from January 15 to February 20, 2020, and the Cronbach’s α coefficient across all valid questionnaires was 0.932 (> 0.7). These results indicate that the scale had high reliability.
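
The study computed Cronbach’s α in SPSS; for readers without SPSS, the coefficient is straightforward to reproduce from the scored item matrix. A minimal numpy sketch follows (the toy data are invented; in practice the input would be the respondents-by-items score matrix):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy check: perfectly correlated items give alpha = 1
perfect = np.tile(np.array([[1], [2], [3], [4], [5]]), (1, 4))
print(round(cronbach_alpha(perfect), 3))  # 1.0
```

Values above the conventional 0.7 threshold, as reported here (0.815 and 0.932), indicate acceptable internal consistency.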

A total of 254 people participated in the official test, and 212 valid questionnaires were collected. Respondents included 108 males and 104 females, and the gender ratio for the valid questionnaires was the same as for the original sample of collected responses. Most respondents were aged 20–39 (82.54%). Most had a college degree or above (college students, 66.04%; graduate students, 26.89%). Most subjects (95.76%) were experiencing an online museum virtual tour for the first time.

‘Agree’ and ‘strongly agree’ accounted for 79.72% of all subjects’ responses about the EAFC experience, and the average value for all items was 3.88, indicating that most subjects were satisfied with the experience. According to the analysis of average scores (Fig 4), “I felt like I was in a real museum” (Q2) had the highest average score (4.2), while “I always knew the directions to visit” (Q11) had the lowest average score (3.6). Thus, a virtual tour based on panoramic image technology can indeed reproduce the real space environment realistically; however, the navigation design appeared insufficient, which affected the satisfaction of the test subjects. Among the scale dimensions, the average score for navigation was the lowest (3.71), and the average score for authenticity was the highest (4.03), followed by learning (3.96) and interactivity (3.85). This result confirmed that the virtual tour provided a poor navigation experience but a good experience in terms of authenticity and other aspects.

Fig 4. Comparison of mean scores of questionnaire items.


To judge whether the variables could be used for CFA, we conducted the Kaiser–Meyer–Olkin (KMO) test and Bartlett’s test of sphericity on the data, as shown in Table 3. The KMO value was 0.936, above the required threshold of 0.6; the significance value of Bartlett’s test of sphericity was 0.000, which is less than 0.05. These results indicate that all 18 items are suitable for CFA.

Table 3. KMO and Bartlett test results.

Kaiser–Meyer–Olkin Measure of Sampling Adequacy .936
Bartlett’s Test of Sphericity Approx. Chi-Square 2070.515
df 153
Sig. .000
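
SPSS reports the two statistics in Table 3 directly, but both can be reproduced with numpy and scipy. The sketch below implements the standard formulas (Bartlett’s chi-square from the determinant of the correlation matrix; overall KMO from ordinary versus anti-image partial correlations); the input data are invented stand-ins for the questionnaire matrix:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: returns chi-square, df, p-value."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # anti-image (partial) correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

# Invented data: six items driven by one latent trait, n = 200
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
data = latent + 0.5 * rng.normal(size=(200, 6))

chi2, df, p = bartlett_sphericity(data)
print(f"KMO = {kmo(data):.3f}, Bartlett chi2 = {chi2:.1f} (df = {df:.0f}, p = {p:.4f})")
```

As in the study, a KMO above 0.6 together with a significant Bartlett result indicates the correlation matrix is factorable.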

Varimax rotation was adopted to obtain the rotated component matrix, as shown in Table 4. The matrix was then rearranged according to whether the factor loading coefficient was greater than 0.4, based on the correspondence between items and factors. Table 4 shows that Q7 loads almost equally on factor 1 and factor 2 (0.503; 0.474) and that Q15 loads almost equally on factor 3 and factor 4 (0.454; 0.497); we therefore tried deleting one or both of these items. Considering the rationality of the scale structure and the components, we found that after the deletion of Q15, the correspondence between items and factors became reasonable. The CFA results after deleting Q15 showed that Q8 was the only remaining item with a loading (0.661) corresponding to factor 4, and after these deletion attempts the correspondence between the factors of all of the remaining items was reasonable (Table 5).

Table 4. Rotated component matrix.

Dimensions Items Factor loadings Communalities
Factor1 Factor2 Factor3 Factor4
A Q1 .806 .166 .138 .199 .736
Q2 .771 .197 .161 .256 .724
Q3 .702 .124 .171 .342 .654
Q4 .642 .098 .272 .304 .588
B Q5 .632 .163 .168 .305 .547
Q6 .665 .388 .231 -.054 .649
Q7 .503 .474 .213 -.043 .525
Q8 .248 .303 .228 .629 .600
Q9 .639 .130 .428 .006 .608
C Q10 .694 .340 .285 -.177 .710
Q11 .164 .757 .327 -.032 .708
Q12 .243 .822 .099 .240 .801
Q13 .135 .715 .084 .404 .699
Q14 .224 .740 .215 .167 .672
D Q15 .316 .254 .454 .497 .617
Q16 .287 .186 .786 .127 .751
Q17 .388 .224 .697 .062 .691
Q18 .151 .222 .706 .272 .645

Table 5. Rearranged component matrix.

Dimensions Items Component Communalities
Factor1 Factor2 Factor3 Factor4
A Q3 .799 .223 .208 .052 .735
Q2 .786 .230 .185 .217 .752
Q1 .784 .168 .154 .280 .745
Q4 .660 .133 .291 .209 .582
Q5 .586 .155 .170 .351 .520
Q9 .526 .049 .384 .412 .596
B Q10 .439 .151 .180 .696 .732
Q7 .282 .318 .187 .602 .578
Q6 .468 .253 .173 .589 .659
C Q12 .235 .837 .130 .214 .819
Q13 .230 .812 .131 .024 .730
Q14 .200 .740 .224 .223 .688
Q11 -.016 .628 .271 .513 .731
D Q18 .211 .274 .782 .013 .731
Q16 .236 .150 .765 .275 .740
Q17 .329 .183 .685 .275 .687
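
The varimax rotation and the 0.4 cross-loading screen used above can be reproduced outside SPSS. The sketch below implements Kaiser’s standard varimax algorithm and the screening rule; the loading matrix is an invented toy (a clean two-factor structure deliberately obscured by a rotation), not the study’s data:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of an (items x factors) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() < var_old * (1.0 + tol):  # converged: criterion stopped growing
            break
        var_old = s.sum()
    return loadings @ R

def cross_loading_items(loadings, threshold=0.4):
    """Indices of items loading above the threshold on more than one factor."""
    return [i for i, row in enumerate(np.abs(loadings)) if (row > threshold).sum() > 1]

# Toy example: clean two-factor structure hidden by a 30-degree rotation
clean = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.0],
                  [0.0, 0.85], [0.0, 0.75], [0.0, 0.65]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
mixed = clean @ np.array([[c, -s], [s, c]])

print("cross-loading items before rotation:", cross_loading_items(mixed))
print("after varimax:", cross_loading_items(varimax(mixed)))
```

After rotation the simple structure reappears and no item exceeds the 0.4 threshold on two factors, which is the pattern the study used to decide which items (Q7, Q15) warranted deletion attempts.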

We conducted interviews with 33 respondents; the interviews were mainly semi-structured. We chose interviewees from different age groups so that the proportion of interviewees was as close as possible to the age structure of the participants. Due to the spread of COVID-19, we used Line or WeChat for the interviews. During the interviews, the researchers made sure they were alone in the room without external interference; at the start of each interview, we asked the interviewees to be alone in a room as far as possible and informed them that the interview would be recorded. All the interviewees we contacted were cooperative. Table 6 shows the interview questions; Table 7 shows the age distribution and the numbers of participants and interviewees. First, the researchers asked simple questions such as “How did you feel about the experience?” and “What were some of the problems you encountered?” Second, the interviewees were asked additional questions designed according to the four dimensions of the scale. Finally, according to the specific responses provided, the interviewees were asked follow-up questions to understand their real experience as much as possible. Each interview lasted 15–30 minutes.

Table 6. The interview questions.

The overall feeling 1) How did you feel about the experience?
2) What were some of the problems you encountered?
Authenticity 3) Does it feel real?
4) Does it feel like visiting a real museum exhibition?
Interactivity 5) How do you feel about interacting with the virtual exhibition?
6) How do you feel the virtual exhibition responds to you?
Navigation 7) Do you have any trouble finding directions or exits?
8) Do you know where you are in the exhibition?
Learning 9) Do you think you can learn anything from this exhibition?
10) Did you find the exhibition interesting?

Table 7. Age range and proportion of participants and respondents.

Age range Number of participants Number of interviewees
≤19 8(3.76%) 0(0.00%)
20–29 67(31.46%) 12(36.4%)
30–39 109(51.17%) 17(51.5%)
40–49 17(7.89%) 2(6.06%)
≥50 12(5.63%) 2(6.06%)

The final interview results reflect the evaluation dimensions contained in the quantitative evaluation scale described above. On the whole, most of the participants expressed a positive attitude towards the experience and said their experience with the EAFC was “realistic” or “interesting.” Some respondents even said they had a sense of freshness or “surprise.” Some interviewees said, however, that the tour was “inconvenient” to use, and some even said they would “feel dizzy” and have a bad experience after using it for a long time. In addition to the evaluation of the four dimensions, we found that the EAFC neglected inclusive design, which also affected the experience of some people; we therefore believe that inclusivity cannot be ignored. In the following section, we combine the results of the CFA and the interviews to discuss our research results in terms of the four dimensions of the scale and inclusivity.

Discussion

According to the CFA, the four dimensions of the original design were verified, and the scale structure was concise and reasonable. Through the CFA, the factors clear permissions and information availability (corresponding to Q8 and Q15) were deleted, and the factors naturalness of behavior and naturalness of being (corresponding to Q5 and Q9) were moved to dimension A. The revised scale is presented in Table 8.

Table 8. Evaluation scale after CFA.

Dimensions Definition Factors Definition
A. Authenticity Whether the feeling in the virtual space is close to the reality. A1 Environmental authenticity The virtual environment is as close to the real world as possible.
A2 Authenticity of participation The interactive experience is as close as possible to what happens in the real world.
A3 Authenticity of feedback The feedback of the virtual environment based on behavior corresponds to the state of the real world.
A4 Naturalness of behavior Behavior is natural and unrestricted.
A5 Naturalness of being The sense of presence and participation is very natural.
B. Interactivity The reactions of the virtual space to active human behavior. B1 Loyalty of the perspective The change in perspective direction is expected.
B2 Viewer-system coordination The behavior is coordinated with the performance of the system.
C. Navigation Whether the viewer can identify location and direction in the virtual space. C1 Clarity of location The viewer always knows the location.
C2 Clarity of start and end The viewer knows where to start and where to end.
C3 Clarity of direction The viewer always knows the direction.
D. Learning Whether the viewer learns new information during the virtual tour. D1 Connectivity of information The audience is willing to share their information with others.
D2 Information abundance The system can provide enough information.
D3 Enjoyable presentation of information The information provided by the system can arouse the attention and interest of the audience.

In many ways, the quantitative analysis results and the interview results confirm each other, and the interviews also reveal what the quantitative research missed. According to the CFA and the interviewee feedback, the EAFC has the following significant aspects that need further attention.

1. Authenticity

Most of the experiencers rated authenticity well; the researchers believe this is due to the technical features of the online virtual tour. However, within the authenticity dimension of the scale, there is a gap between the scores for criteria such as authenticity of participation, authenticity of feedback, and naturalness of being, and the scores for the environmental authenticity items; this is consistent with the interview results. For example, when asked in an interview, “Does participation feel real?”, responses tended to be “not too bad,” “mostly it looks real,” or statements in a similar vein.

Unlike a VR experience based on a completely virtual environment, an online museum virtual tour involves panoramic image acquisition of the real exhibition site. The exhibits and environment that experiencers see are almost the same as what could be seen at the site, so (ideally) viewers should feel as if they were in the museum itself. Viewers simulate a visit by essentially switching between seamless panoramic photos. Because these panoramic photos have a high pixel count, the picture retains sharpness and good visual effect whether users zoom in, zoom out, or rotate the angle of view. However, panoramic photographs remain two-dimensional and thus do not allow the viewer to engage in the same kinds of interaction as a live visit.

2. Interactivity

Interactivity was rated moderately. The interactivity score was slightly lower than the average score for all questions, which indicates that the exhibition could be further improved in terms of interactivity. During the interviews, some interviewees said that they lacked the opportunity for more interaction with the exhibits when viewing the exhibition. For example, some interviewees said that they hoped to enjoy the exhibition from different angles and magnify more details. This suggests that visitors want more interaction with the exhibition.

The EAFC online virtual tour is still at an early stage of development and lacks diversified interactive design. Many virtual exhibitions have link buttons that allow users to click to learn more, ensuring a certain amount of interaction. However, to ensure a smooth online experience, the curator may have deliberately limited the size of the data, so some details cannot be sufficiently enlarged, and the linked information tends to consist of small pictures and short texts. The curator should consider making up for this defect in subsequent improvements, for example by providing more precise details in the links, interactive virtual exhibition models, or even online mini-games related to the exhibition, to give visitors a richer and more diversified interactive experience.

3. Navigation

The navigation evaluation was the lowest among all dimensions, and this significantly impacts the experience. In terms of navigability, clarity of direction was rated lowest, meaning that the audience had difficulty deciding what to see and what not to see during the virtual tour. Other navigational aspects, such as the audience’s perception of their location and whether they had an exact starting and stopping position, were lower than the average score for all of the questions (3.88).

There are several reasons for this result. First, when switching between exhibition areas, the perspective after the switch is not consistent with the viewer's expectations, which is one cause of the poor navigation scores. When a scene switch occurs during the virtual tour, the screen does not continue in the direction of the guide arrow pressed before the switch but instead rotates the perspective to face the display cabinet. Although this design conveniently brings the audience straight to a good viewing angle for the exhibits, the mismatch between the resulting perspective and the expected behavior during switching may create confusion. Second, the guide arrows pointing forward and backward along the tour route have the same appearance, with no indication of the exhibition sequence, so viewers are easily confused. Finally, because the viewing direction is quite sensitive to input, it is easy to distort the audience's sense of time and space, leading to a loss of spatial orientation.

4. Learning

From the scale scores and interview responses, there appears to be room for improvement in the learning dimension. In the questionnaire, the evaluation of learning was second only to authenticity. Most interviewees also said that the exhibits in the EAFC were quite rich and exposed them to much information they had not previously encountered. However, others noted that only some of the physical exhibits were accompanied by sufficiently detailed written descriptions; most items lacked them. Most interviewees said that, after experiencing the online virtual tour, they would be more willing to visit the site, which indicates that the virtual tour can effectively expand the museum audience; it also suggests, however, that the information provided by the virtual tour did not fully satisfy the interviewees' curiosity. Therefore, although the online virtual museum tour provides good opportunities for learning, it still falls short of the ideal. Some interviewees also pointed out that the exhibit descriptions were too academic, which reduced the learning effect.

5. Inclusivity

Inclusive details of the design were not sufficiently considered, and this affects different types of users; the inclusivity dimension should therefore be taken into account in evaluation. Inclusive design is a strategy for addressing globalization that considers user differences, both physical and mental [29, 30]. Of those surveyed, 5.7% were over the age of 50, and only two respondents were over 60. In the interviews, some senior interviewees said that, because this was their first contact with such technology, they needed more time to get used to the virtual environment before they could visit smoothly. One interviewee described the process of viewing the exhibition as “laborious” because the pictures and text provided by the exhibition were “not very clear.” This aspect was not covered by the previous scale and has been overlooked by researchers.

Through further communication, we learned that some interviewees did not initially realize that it was possible to zoom in and out of the exhibition picture. Some failed to notice the extension function for the exhibit information or the function button in the lower-left corner of the page. Some did not even notice the guide arrows at first. This was true for almost all age groups, not just older people. Other interviewees said that voice functions, similar to audio guides, should be designed to help people with poor vision.

As online resources, online virtual museum tours should take into account audiences of different ages, abilities, and cultural backgrounds. This study suggests that a tutorial could be provided on the initial webpage as a guide to the unfamiliar experience, allowing users to become familiar with the various functions smoothly. The visual recognizability of the interface fonts and symbols could be further optimized, and sound, voice, or music prompts, along with other multi-sensory stimuli, could be integrated into the prompts, guidance, and explanations to facilitate access for people of different ages, experiences, and abilities.

Further, although our participants did not include people with disabilities, this group should be considered in future work, as serving them is one of the goals of inclusive design. In fact, great strides have been made in extending museum services to people with disabilities. For example, Cachia criticized the ocularcentrism of museum exhibitions and argued that inclusivity could be expanded by integrating audio description into the exhibition itself and into its discursive elements, such as catalogues, symposia, and websites [31]. The Audio Description Project (ADP) of the American Council of the Blind (ACB) summarizes the relevant technical information and practical museum projects on audio description for people with disabilities and provides a broad reference for related research and practice [32]. In terms of virtual exhibitions, Montagud et al. discussed integrating access services such as audio description, subtitles, and sign language into the interface of virtual museum visits and compared the inclusiveness of different interfaces [33]. In addition, some studies have explored touch and taste in an attempt to provide more choices for audiences of virtual museum experiences [34, 35]. All of these works open more space for people with disabilities to access virtual museum exhibition services.

Conclusions

Based on a literature review, this paper constructed a set of user-centered evaluation dimensions for online museum virtual tours. Using the resulting scale, the CFA method and individual interviews were applied to evaluate the EAFC online museum virtual tour, which made it possible to verify the dimensions of the scale and to simplify and rationalize its structure. From the evaluation of the EAFC, we drew the following conclusions:

  1. The exhibition fostered a good sense of reality. Thanks to the characteristics of online museum virtual tour technology, the EAFC performed well in terms of the authenticity of the exhibits and the virtual exhibition environment. However, due to the limitations of panoramic image acquisition, the EAFC was less satisfactory in terms of immersion in the virtual environment, the authenticity of behavioral feedback, and the naturalness of behavior.

  2. The interactivity of the exhibition is merely adequate and needs to be improved. The EAFC has a certain degree of interactivity: visitors can zoom in or out on the exhibits, and specific information can be obtained by clicking a link. However, it would be possible, for example, to add a variety of different types of content to the links to make the experience more interactive.

  3. The navigation design has substantial room for improvement. Navigation has a significant influence on the experience of virtual online museum tours. In particular, visitors to the EAFC find it difficult to stay clear about their direction and location, which may cause confusion. This results not only from technical limitations of virtual tours but also from a lack of humanistic consideration in the UI design, operation logic, and auxiliary design, among other factors.

  4. The site offers opportunities for learning and can arouse visitors' interest, but learning materials appear to be lacking. The EAFC has a wealth of exhibits and materials of high artistic and scientific value. Although the overall variety and quantity of the collection are rich, the quantity and quality of the information displayed for individual exhibits are insufficient, and the forms of learning offered are limited, so this aspect could be improved.

  5. Inclusivity was not sufficiently considered and needs to be further optimized. This finding was mainly obtained through interviews. To achieve successful, inclusive virtual tours of an online museum, everyone should be able to visit without being limited by their ability, age, or other conditions. Therefore, to improve the quality of the visit, we need to consider the design from the perspective of inclusivity.

Unlike previous related studies, which focused on expert users and on virtual tour services within physical museum exhibition halls, this study targeted the general public and their experience of an online museum virtual tour service; the universality of such tours was considered for the first time, giving the results practical reference value and a broad scope of application. For museum staff and exhibition designers, it is hoped that this research can inform the design and planning of online virtual tours of museums and exhibition halls and can also be used to evaluate related products or services. For the general audience, such exhibitions offer genuinely new information to people interested in their themes; especially against the background of the COVID-19 pandemic, they allow a visit similar to one in a real museum without leaving home. This work also highlights not only the advantages of this kind of exhibition but also the problems or difficulties visitors may face, so that they can be better prepared; for example, visitors may want to look out for functions such as a mini-map that can enhance the experience.

Follow-up studies could apply these results to more examples to further verify the feasibility and rationality of the method presented here. The heuristic scale of this study focuses on users' evaluation of usability based on their experience; subsequent studies should extend the evaluation methods from the non-utilitarian perspective of entertainment. Due to time and space limitations, inclusivity was not discussed in depth here; with an aging society and continuing attention to social equality, inclusivity should become a vital evaluation standard for museums. Follow-up research should explore this dimension more deeply and further improve the evaluation scale.

Supporting information

S1 File. Data for CFA.

(XLS)

S2 File. Original and translation of the questionnaire.

(DOCX)

S3 File. Data collected in interviews.

(DOCX)

S4 File. Interview guideline.

(DOCX)

S5 File. (COREQ) 32-item checklist.

(DOCX)

S6 File

(SAV)

Acknowledgments

We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Leinhardt G, Crowley K. Objects of learning, objects of talk: Changing minds in museums. In: Scott GP, editors. Perspectives on object-centered learning in museums. Mahwah, NJ: Erlbaum; 2002:301–324. [Google Scholar]
  • 2.Yaneva A, Rabesandratana TM, Greiner B. Staging scientific controversies: a gallery test on science museums’ interactivity. Public Understanding of Science. 2009;18(1): 79–90. doi: 10.1177/0963662507077512 [DOI] [PubMed] [Google Scholar]
  • 3.Ke G, Jiang Q. Application of internet of Things technology in the construction of wisdom museum. Concurrency and Computation: Practice and Experience. 2019;31(10): e4680. [Google Scholar]
  • 4.Cunliffe D, Kritou E, Tudhope D. Usability evaluation for museum web sites. Museum Management and Curatorship. 2001;19(3): 229–252. [Google Scholar]
  • 5.Schweibenz W. Virtual Museums: The Development of Virtual Museums. ICOM News Magazine. 2004;3: 3. [Google Scholar]
  • 6.Foo S. Online virtual exhibitions: Concepts and design considerations. DESIDOC Journal of Library & Information Technology. 2008;28(4): 22–34. [Google Scholar]
  • 7.Navarrete T. Digital heritage tourism: innovations in museums. World Leisure Journal. 2019;61(3): 200–214. [Google Scholar]
  • 8.Jun T, Fengjun L, Ping W. National museum research based on virtual roaming technology. In: 2015 Seventh International Conference on Measuring Technology and Mechatronics Automation; 2015 Jun13-14; Nanchang, China: IEEE; 2015. p.683–688. [Google Scholar]
  • 9.Bessa M, Melo M, Narciso D, Barbosa L, Vasconcelos-Raposo J. Does 3D 360 video enhance user’s VR experience? An Evaluation Study. Proceedings of the XVII International Conference on Human Computer Interaction; 2015 Aug 2–7; Los Angeles, USA. New York: ACM; 2016. [Google Scholar]
  • 10.Lifeng S, Li Z, Yunhao L, Xiaofeng H. Real-time walkthrough in real image-based virtual space. Journal of Image and Graphics. 1999;4(6): 507–513. Chinese. [Google Scholar]
  • 11.Styliani S, Fotis L, Kostas K, Petros P. Virtual museums, a survey and some issues for consideration. Journal of Cultural Heritage. 2009;10(4): 520–528. [Google Scholar]
  • 12.Hu Q, Yu D, Wang S, Fu C, Ai M, Wang W. Hybrid three-dimensional representation based on panoramic images and three-dimensional models for a virtual museum: Data collection, model, and visualization. Information Visualization, 2017;16(2): 126–138. [Google Scholar]
  • 13.Pagano A, Armone G, De Sanctis E. Virtual museums and audience studies: the case of “Keys to Rome” exhibition. In: 2015 Digital Heritage; 2015 Sept 28-Oct 2; Granada, Spain: IEEE; 2015. p. 373–376. [Google Scholar]
  • 14.Donghai S. Essentials of the History of the Evolutionary Changes of Museum. Chinese Museum. 1988;(10): 10–23. Chinese. [Google Scholar]
  • 15.Bastanlar Y. User behaviour in web-based interactive virtual tours. In: 29th International Conference on Information Technology Interfaces; 2007 Jun 25–28; Cavtat, Croatia: IEEE; 2007. p. 221–226. [Google Scholar]
  • 16.Barbieri L, Bruno F, Muzzupappa M. Virtual museum system evaluation through user studies. Journal of Cultural Heritage. 2017;26: 101–108. [Google Scholar]
  • 17.Kabassi K, Amelio A, Komianos V, Oikonomou K. Evaluating museum virtual tours: the case study of Italy. Information. 2019;10(11): 351. [Google Scholar]
  • 18.Pescarin S, Pagano A, Wallergard M, Hupperetz W, Ray C. Archeovirtual 2011: An evaluation approach to virtual museums. In: 18th International Conference on Virtual Systems and Multimedia; 2012 Sept 2–5; Milan, Italy: IEEE; 2012. p. 25–32. doi: 10.5402/2012/186734 [DOI] [Google Scholar]
  • 19.Pagano A, Pietroni E, Cerato I. User experience evaluation of immersive virtual contexts: the case of the virtual museum of the Tiber Valley project. In: 9th International Conference on Education and New Learning Technologies; 2017 Jul 3–5; Barcelona, Spain: IATED; 2017. p. 3373–3384. [Google Scholar]
  • 20.Roussou M, Katifori A. Flow, staging, wayfinding, personalization: Evaluating user experience with mobile museum narratives. Multimodal Technologies and Interaction. 2018;2(2): 32. [Google Scholar]
  • 21.Lin ACH, Fernandez WD, Gregor S. Understanding web enjoyment experiences and informal learning: A study in a museum context. Decision Support Systems. 2012;53(4): 846–858. [Google Scholar]
  • 22.MacDonald C. Assessing the user experience (UX) of online museum collections: Perspectives from design and museum professionals. In: Museums and the Web. 2015. [Google Scholar]
  • 23.Sutcliffe A, Gault B. Heuristic evaluation of virtual reality applications. Interacting with Computers. 2004;16(4): 831–849. [Google Scholar]
  • 24.Hammady R, Ma M, Strathern C, Mohamad M. Design and development of a spatial mixed reality touring guide to the Egyptian museum. Multimedia Tools and Applications. 2020;79(5): 3465–3494. [Google Scholar]
  • 25.Barneche-Naya V, Hernández-Ibañez LA. A comparative study on user gestural inputs for navigation in NUI-based 3D virtual environments. Universal Access in the Information Society. 2020; 1–17. [Google Scholar]
  • 26.Parker E, Saker M. Art museums and the incorporation of virtual reality: Examining the impact of VR on spatial and social norms. Convergence. 2020;26(5–6): 1159–1173. [Google Scholar]
  • 27.Cushing AL, Cowan BR. Walk1916 Exploring non-research user access to and use of digital surrogates via a mobile walking tour app. Journal of Documentation. 2017;73(5): 917–933. [Google Scholar]
  • 28.Marius N, Nina S, Markus K, Yu X, & Atte O. Effects of gamified augmented reality in public spaces. IEEE Access. 2019;7: 148108–148118. [Google Scholar]
  • 29.Roger C, Clarkson J, Julia C. Design for inclusivity: A practical guide to accessible, innovative and user-centred design. London & New York: Routledge & CRC; 2016. [Google Scholar]
  • 30.Clarkson PJ, Coleman R. History of inclusive design in the UK. Applied ergonomics. 2015;46: 235–247. doi: 10.1016/j.apergo.2013.03.002 [DOI] [PubMed] [Google Scholar]
  • 31.Cachia A. Talking blind: disability, access, and the discursive turn. Disability Studies Quarterly. 2013;33(3). [Google Scholar]
  • 32.The Audio Description Project. The Audio Description Project. [Cited 2021 April 20]. Available from: https://acb.org/adp/index.html
  • 33.Montagud M, Orero P, Matamala A. Culture 4 all: accessibility-enabled cultural experiences through immersive VR360 content. Personal and Ubiquitous Computing. 2020;24(6): 887–905. [Google Scholar]
  • 34.Vi CT, Ablart D, Gatti E, Velasco C, Obrist M. Not just seeing, but also feeling art: Mid-air haptic experiences integrated in a multisensory art exhibition. International Journal of Human-Computer Studies. 2017;108: 1–14. [Google Scholar]
  • 35.Rodrigues JM, Ramos CM, Pereira JA, Sardo JD, Cardoso PJ. Mobile five senses augmented reality system: technology acceptance study. IEEE Access. 2019;7: 163022–163033. [Google Scholar]

Decision Letter 0

Stefano Triberti

8 Apr 2021

PONE-D-21-07531

Evaluation of Virtual Roaming in an Online Museum: Exhibition of Architecture of the Forbidden City

PLOS ONE

Dear Dr. Jia,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Two Reviewers evaluated the manuscript and gave generally favorable opinions, however the contribution needs extensive revisions. Especially: 

- language editing by a native english speaker is still needed

- the link to the online exhibition did not work at my first try; you should probably separate it from the final dot (without it, it works)

- methodology lacks information, especially regarding the methodological approach for qualitative data

- ethical approval was missing. This is necessary for consideration by PLOS ONE

- Authors report a pre-test of the questionnaire and an alpha with 18 participants. Is this useful? What was the reliability index with the full sample?

- data were partial. First, items in the database are in chinese, they should be translated in english to be readable by international audience; second, qualitative data from interviews were missing. These are also mandatory for consideration by PLOS ONE

- figures and more information on participants' experience and tasks within the exploration would improve the readers' comprehension of the research

- final discussion says little about the take-home messages for a broader audience: what have we learnt about the design and evaluation of online museums in general? 

I encourage Authors to perform extensive revisions taking into account these aspects and all those identified by Reviewers. Please note that final acceptance of the contribution cannot be guaranteed at this step.

Please submit your revised manuscript by May 23 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Stefano Triberti, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

3. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information.

4. Please amend the manuscript submission data (via Edit Submission) to include author Jia Li.

5. Please amend your authorship list in your manuscript file to include author Taiwan Jia

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Additional Editor Comments (if provided):

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an excellent research paper of relevance to museum studies and it should be published. The methods and process are well-described and clear. The paper is well-written. I am not a statistician so I can't evaluate this part. However, I would be happy to use the paper and evaluation tool in my own teaching in museum studies. My "minor revisions" suggestions are very minor. First, on p. 3, ICOMOS should be defined or the name written out fully. Next, on p. 9, in the chart, at Learning D3 the term "Fun of knowledge" is written. This is not correct English usage. Instead, it should say something like "Enjoyable presentation of information." Then, on p. 22, the term "generality" is used in reference to a suggestion by the authors that the concept should become a "vital evaluation standard." The term's application should be defined and an example included, in my view. Last, on p. 18-19 and again on p. 21 the authors refer to inclusivity, inclusiveness and ability. Yet, they do not mention disability (an oversight, in my view), and the solutions offered through disability studies and inclusive design, such as image descriptions, audio descriptions, and so on. For example, see: https://www.acb.org/adp/ There are many such strategies recommended for museums by disability studies scholars and activists--see Amanda Cachia and Georgina Kleege, for example. The paper should direct museum-based readers to look to these and related sources for suggestions.

Reviewer #2: The manuscript discusses a research to develop a heuristic evaluation scale based on the existing literature and a case study. The case study is a virtual online museum tour. The proposed evaluation scale includes four dimensions: authenticity, interaction, navigation, and learning.

I am not expert in factor analysis, and I assume that the methodology is properly applied.

My concerns are related to the description of the case study and the methods.

First of all, I suggest the authors to better describe the case study. In addition to the link provided to access the online tour, there is the need to understand the functionalities, how the experience is structured, the peculiarities of the museum as well as the interaction modalities. Adding screenshots of relevant points of interest can help to better understand the case study.

Secondly, some details related to the survey are missing:

- Since the ethics statement is not provided, I wonder whether the authors deal with the informed consent to involve the participants in the research.

- How was the EAFC online experience presented to the participants? I wonder whether the authors provided some instructions to the participants about the navigation and the interface, or just let them free to explore the functionalities.

- What was the task to accomplish during the experience? Was it the same for all the participants?

- What were the groups and attributes used for the stratified sampling of the interviewees?

- It is not clear the reason why the researcher asked “further random questions” during the interview. Is “random” the proper term?

- Moreover, it is not clear how the qualitative data from the interviews were analysed and integrated with the questionnaire’s results.

Finally, I suggest the authors to introduce the scale by conceptualizing the different elements, as well as to clearly explain how their scale differs from and improve the existing literature.

A final comment is about the brief discussion about inclusivity, that I appreciate. Indeed it is a relevant issue to make virtual museum accessible for people with diverse needs and abilities. I hope that the authors will further explore this aspect in details, because the literature is missing such investigations to guide the future design.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Annamaria Recupero

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Jan 6;17(1):e0261607. doi: 10.1371/journal.pone.0261607.r002

Author response to Decision Letter 0


7 Jun 2021

Response to Reviewers

Reply to reviewer’s comments: Manuscript [PONE-D-21-07531]

Dear Dr. Stefano Triberti,

We would like to thank the editor and reviewers for the invaluable comments and advice that helped improve our original manuscript. My partners and I studied the journal and reviewer's suggestions for revision carefully and revised the manuscript based on each suggestion. Please find our responses to the comments below.

Editor Requirements

1. language editing by a native english speaker is still needed.

Authors’ response: Thank you for your suggestion. We will again send the manuscript to a professional editing company for language editing.

2. The link to the online exhibition did not work on my first try; you should probably separate it from the final dot (without it, it works).

Authors’ response: Thank you for your reminder. I have re-edited the link to ensure it works.

3. Methodology lacks information, especially regarding the methodological approach for qualitative data.

Authors’ response: Thank you for pointing out this problem. It is true that the methodology for the qualitative part of this study lacked description. We have strengthened this part in lines 4–10 of p. 11 and lines 1–7 of p. 12 and added relevant references.

4. Ethical approval was missing. This is necessary for consideration by PLOS ONE.

Authors’ response: Thank you very much for your reminder. The paper did not provide an ethics statement, which was an oversight on our part. We have added an ethics statement on p. 13, lines 3–8.

In accordance with IRB standards, the tests were carried out only after the participants fully understood the purpose, process, risks, and data use of the study. The participants filled in the questionnaire anonymously, and the data were recorded with their consent. The study used a paperless electronic questionnaire, and participants could proceed to fill it in only after giving their consent.

5. Authors report a pre-test of the questionnaire and an alpha with 18 participants. Is this useful? What was the reliability index with the full sample?

Authors’ response: Thank you for your correction. It was an oversight on our part not to report the Cronbach’s α coefficient for the full set of valid questionnaires. We have added this on p. 14, lines 2–7.

6. Data were partial. First, items in the database are in Chinese; they should be translated into English to be readable by an international audience. Second, qualitative data from interviews were missing. These are also mandatory for consideration by PLOS ONE.

Authors’ response: We apologize for this, and we will translate the database in the future. Qualitative data will also be translated, collated, and uploaded.

7. Figures and more information on participants' experience and tasks within the exploration would improve the readers' comprehension of the research.

Authors’ response: Thank you very much for your suggestions. We have added Figure 2 and Figure 3 on pp. 12–13 to make our tests clearer and easier to understand.

8. Final discussion says little about the take-home messages for a broader audience: what have we learnt about the design and evaluation of online museums in general?

Authors’ response: Thank you very much for your reminder. It was an oversight that we considered museum staff and designers more than a broader audience. We have addressed this in the conclusion on p. 28, lines 3–10, with implications for a wider audience.

Journal Requirements

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Authors’ response: Thank you very much for your reminder. We have modified our manuscript according to the requirements of the journal to conform to PLOS ONE's style.

2. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

Authors’ response: Thank you for the reminder. I have registered an ORCID iD and updated my personal information.

3. Please include additional information regarding the survey or questionnaire used in the study and ensure that you have provided sufficient details that others could replicate the analyses. For instance, if you developed a questionnaire as part of this study and it is not under a copyright more restrictive than CC-BY, please include a copy, in both the original language and English, as Supporting Information.

Authors’ response: We apologize; it was an oversight on our part not to upload these questionnaires. We have uploaded the original and translated versions of our questionnaire as Supporting Information.

4. Please amend the manuscript submission data (via Edit Submission) to include author Jia Li.

Authors’ response: Thank you for the reminder. This was an editing error; we have added Jia Li in the edited submission.

5. Please amend your authorship list in your manuscript file to include author Taiwan Jia.

Authors’ response: Thank you for the reminder; “Taiwan Jia” was a mistake in editing, and there is no such author. We have modified the personal information again.

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly.

Authors’ response: Thank you for reminding us. We have added captions for the Supporting Information files on p. 32 and updated our in-text citations.

Reviewer #1

This is an excellent research paper of relevance to museum studies and it should be published. The methods and process are well-described and clear. The paper is well-written. I am not a statistician so I can't evaluate this part. However, I would be happy to use the paper and evaluation tool in my own teaching in museum studies. My "minor revisions" suggestions are very minor. First, on p. 3, ICOMOS should be defined or the name written out fully. Next, on p. 9, in the chart, at Learning D3 the term "Fun of knowledge" is written. This is not correct English usage. Instead, it should say something like "Enjoyable presentation of information." Then, on p. 22, the term "generality" is used in reference to a suggestion by the authors that the concept should become a "vital evaluation standard." The term's application should be defined and an example included, in my view. Last, on p. 18-19 and again on p. 21 the authors refer to inclusivity, inclusiveness and ability. Yet, they do not mention disability (an oversight, in my view), and the solutions offered through disability studies and inclusive design, such as image descriptions, audio descriptions, and so on. For example, see: https://www.acb.org/adp/ There are many such strategies recommended for museums by disability studies scholars and activists--see Amanda Cachia and Georgina Kleege, for example. The paper should direct museum-based readers to look to these and related sources for suggestions.

Authors’ response: Thank you very much for your recognition and support, which have encouraged us to continue improving our manuscript. In light of your suggestions, we have made the following revisions.

• We added the full name for ICOMOS on p. 3, line 11.

• We replaced “Fun of knowledge” in D3 in Table 2 (original Table 1) on p. 11 with “Enjoyable presentation of information”.

• “Generality” on p. 28, lines 16–17 (original p. 22) was the wrong word; the correct one is “inclusivity”. Inclusivity is now defined on p. 24, lines 8–9.

• Thank you very much for your suggestions on inclusion; we have learnt a lot from the links you provided. We have discussed this further on p. 25, lines 10–23 and p. 26, lines 1–2, with references to relevant literature.

Thank you very much for pointing out the mistakes in our manuscript and putting forward these valuable suggestions, which have made our work more reasonable and easier to understand.

Reviewer #2

1. The manuscript discusses a research to develop a heuristic evaluation scale based on the existing literature and a case study. The case study is a virtual online museum tour. The proposed evaluation scale includes four dimensions: authenticity, interaction, navigation, and learning.

I am not expert in factor analysis, and I assume that the methodology is properly applied.

My concerns are related to the description of the case study and the methods.

Authors’ response: Thank you very much for your recognition and support.

2. First of all, I suggest the authors to better describe the case study. In addition to the link provided to access the online tour, there is the need to understand the functionalities, how the experience is structured, the peculiarities of the museum as well as the interaction modalities. Adding screenshots of relevant points of interest can help to better understand the case study.

Authors’ response: Thank you very much for your suggestions. On p. 7, lines 10–19, we have described the functionalities, structure, peculiarities, interaction modalities, and other elements of the exhibition. We also illustrate them in Figure 1 on p. 8.

3. Secondly, some details related to the survey are missing:

-Since the ethics statement is not provided, I wonder whether the authors deal with the informed consent to involve the participants in the research.

Authors’ response: Thank you for pointing this out; we did forget to provide an ethics statement. We have added the relevant statements on p. 13, lines 3–8. The participants fully understood the purpose, process, risks, and data use of the study. They filled in the questionnaire anonymously, and the data were recorded with their consent. The study used a paperless electronic questionnaire, which participants filled in only after giving their consent.

- How was the EAFC online experience presented to the participants? I wonder whether the authors provided some instructions to the participants about the navigation and the interface, or just let them free to explore the functionalities.

Authors’ response: Thank you for raising this point. We have added a description on pp. 12–13 of the paper, along with Figure 2 and Figure 3, to further explain the testing process. Participants were only asked to find a designated artifact at the beginning; the rest of their visit was undisturbed.

- What was the task to accomplish during the experience? Was it the same for all the participants?

Authors’ response: Yes, all the participants had the same task: to find the artifact named Ceiling of Ci Ning Palace Garden Linxi Pavilion. We apologize for forgetting to explain this in the article. We have added the description and a figure on p. 12, lines 12–17.

- What were the groups and attributes used for the stratified sampling of the interviewees?

Authors’ response: Thank you for pointing out this missing part. We did lack a description of the groups and attributes used for the stratified sampling of the interviewees. We have now strengthened the description on p. 18, lines 4–8 and added Table 7 on p. 18.

- It is not clear the reason why the researcher asked “further random questions” during the interview. Is “random” the proper term?

Authors’ response: Thank you for your correction. We agree that “random” is not the appropriate term. We have re-described the interview questioning process on p. 17, lines 12–13 and p. 18, lines 1–2.

- Moreover, it is not clear how the qualitative data from the interviews were analysed and integrated with the questionnaire’s results.

Authors’ response: Thank you for pointing out our oversight. The questions of our semi-structured interview were formulated based on the scale dimensions of the quantitative research, and the discussion was also based on the results of these two parts. We have given explanations on p. 17, lines 12–13 and p. 18, lines 1–2, and added Table 6 on p. 18 for further explanation.

4. Finally, I suggest the authors introduce the scale by conceptualizing the different elements, as well as clearly explain how their scale differs from and improves on the existing literature.

Authors’ response: Thanks for your suggestion; we have conceptualized the different elements of the scale in Table 8 on pp. 19–20. In addition, we have listed the scales we refer to in Table 1 on p. 9 to provide the reader with a comparison for the scale we designed.

5. A final comment is about the brief discussion of inclusivity, which I appreciate. Indeed, it is a relevant issue to make virtual museums accessible to people with diverse needs and abilities. I hope that the authors will further explore this aspect in detail, because the literature is missing such investigations to guide future design.

Authors’ response: Thank you very much for your recognition. On p. 25, lines 10–23 and p. 26, lines 1–2, we have discussed the dimension of inclusivity further and cited some relevant literature, hoping to provide more information to readers.

Attachment

Submitted filename: Response to Reviewers-jiali0607.docx

Decision Letter 1

Stefano Triberti

5 Jul 2021

PONE-D-21-07531R1

Evaluation of Virtual Roaming in an Online Museum: Exhibition of Architecture of the Forbidden City

PLOS ONE

Dear Dr. Li,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Revisions were adequately performed by the authors, but gaps regarding the journal's criteria for publication remain unresolved:

- the items in the .sav file are still in Chinese. Authors are asked to "maximize the accessibility and reusability of the data", so, as I said in the previous round of revisions, these should be translated into English like the rest of the paper materials https://journals.plos.org/plosone/s/data-availability

- the methodological details added on the qualitative research are limited. According to Author guidelines, "Qualitative research studies should be reported in accordance to the Consolidated criteria for reporting qualitative research (COREQ) checklist or Standards for reporting qualitative research (SRQR) checklist" see this page for links to the checklists https://journals.plos.org/plosone/s/submission-guidelines , I also suggest Authors to consider other qualitative research published on PLOS ONE to see examples of the checklists included as supporting information, such as for example https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0247121 , https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0225534

Again, please notice these are PLOS ONE's criteria for publication so they should be satisfied before the article could be considered for publication

Please submit your revised manuscript by Aug 19 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Stefano Triberti, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: I appreciate the adjustments made to clarify some issues.

I found one typo on page 16 line 6: "Frist" instead of "First".

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Annamaria Recupero

PLoS One. 2022 Jan 6;17(1):e0261607. doi: 10.1371/journal.pone.0261607.r004

Author response to Decision Letter 1


14 Aug 2021

Editor Requirements

1.The items in the .sav file are still in Chinese. Authors are asked to "maximize the accessibility and reusability of the data", so as I said in the previous round of revisions these should be translated in the English language like the rest of the paper materials https://journals.plos.org/plosone/s/data-availability

Authors’ response: Thank you very much for pointing this out, and we apologize for this negligence. We have translated all the contents of the .sav file into English.

2.The methodological details added on the qualitative research are limited. According to Author guidelines, "Qualitative research studies should be reported in accordance to the Consolidated criteria for reporting qualitative research (COREQ) checklist or Standards for reporting qualitative research (SRQR) checklist".

Authors’ response: Thank you for reminding us. According to the COREQ checklist, we have supplemented more details of the qualitative study in line 9 on p. 11, line 7 on p. 13, lines 6–11 on p. 16, and line 6 on p. 17. Moreover, files S4 and S5 have been added to provide supporting information.

3.If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results.

Authors’ response: Thank you for your suggestion. We have uploaded the lab protocols to protocols.io, and you can view them at dx.doi.org/10.17504/protocols.io.bww4pfgw.

Journal Requirements

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Authors’ response: Thank you very much for the reminder. We went through the references one by one and found that some of the non-English references were not easy for global readers to search and access on Google Scholar, so we replaced them with English sources expressing the same views. The references are modified in lines 10–21 on p. 28, lines 7–9, 13–17, and 22–25 on p. 29, and in line 1 on p. 30.

Some new modifications are also listed below. We have replaced “virtual roaming” with “virtual tour” throughout the manuscript. Although our research object is called “virtual roaming” in some of the literature (e.g., ref. 8), more of the literature uses “virtual tour” (e.g., refs. 14 and 16). After careful investigation, we decided to use “virtual tour” to clarify our research object. These changes appear on the title page, lines 6 and 8 on p. 4, lines 10–11, 17, and 20 on p. 7, Table 2, Table 8, and line 3 on p. 20.

Reviewer #2

I appreciate the adjustments made to clarify some issues. I found one typo on page 16 line 6: "Frist" instead of "First".

Authors’ response: Thank you very much for your approval, as well as for pointing out this flaw. We have corrected the misspelling in line 13 on p. 16.

Attachment

Submitted filename: Response to Reviewers-jiali0607.docx

Decision Letter 2

Prabhat Mittal

7 Dec 2021

Evaluation of Virtual Tour in an Online Museum: Exhibition of Architecture of the Forbidden City

PONE-D-21-07531R2

Dear Dr. Li,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Prabhat Mittal, Ph.D.

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: All comments have been addressed

Reviewer #4: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: I Don't Know

Reviewer #3: Yes

Reviewer #4: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: (No Response)

Reviewer #3: The authors of the paper have responded and added corrections for the majority of the suggestions highlighted by the previous reviewers. It is an acceptable paper with a justifiable methodology. I would suggest additional information be added in the methodology section (a few of the suggestions stated below were mentioned in the later part of the discussion section):

1-To clearly state the type of interview carried out (focus group or personal interview)

2-To state the selection process of the respondents and how the questionnaires were distributed (it was mentioned in the flow diagram in Figure 3; however, this needs to be explained earlier in the method section)

3-The sentence "..212 valid questionnaires were collected." It should be written as "...212 valid responses from the questionnaire were collected."

4-In one of the results sections, the authors mentioned that they conducted the interview using WeChat (due to the COVID-19 pandemic). The authors need to explain the process of acquiring the contact information of the potential respondents in the methodology section.

Reviewer #4: In this paper, factor analysis is conducted to obtain a clear pattern of loadings. It is a technique used to reduce a large number of variables into a smaller number of factors. This technique extracts the maximum common variance from all variables and puts them into a common score. The loadings indicate the depth of the relationships between items. Each factor will tend to have either large or small loadings for any particular variable. Hence, factor loading is used to assess the validity of an item and to summarize the pattern of correlation among the variables. Unfortunately, this paper applied the first part but neglected the second part, which relates to summarizing the pattern of correlation among the variables. Hence, we think it needs further analysis, such as correlation analysis, to explore the relationships among the variables. The role of correlation is to capture the similarities or differences between the variables. It measures the degree of association between the values of related variables given in the data set. Then, the mutual influence of the variables on one another can be traced.

It is interesting to note that the main goal of factor analysis is to identify a group of inter-related variables and to see how they are related to each other; in other words, factor analysis can be used to identify hidden dimensions or constructs that may or may not be apparent from direct analysis.
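As an editorial aside, the companion analysis Reviewer #4 describes can be illustrated with a minimal, purely hypothetical sketch: simulated questionnaire-style items (not the study's actual data) are generated from one common factor, the inter-item correlation matrix is computed, and first-factor loadings are extracted by the principal-axis method, so the loadings can be read against the correlations they summarize.

```python
import numpy as np

# Hypothetical illustration only: four simulated items driven by one
# common latent factor, not the study's questionnaire responses.
rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=n)
items = np.column_stack(
    [0.8 * latent + 0.6 * rng.normal(size=n) for _ in range(4)]
)

# Step 1 (correlation analysis): degree of association between items.
corr = np.corrcoef(items, rowvar=False)

# Step 2 (factor loadings): principal-axis extraction from the
# eigendecomposition of the correlation matrix; eigh returns
# eigenvalues in ascending order, so the last one is the largest.
eigvals, eigvecs = np.linalg.eigh(corr)
loading = eigvecs[:, -1] * np.sqrt(eigvals[-1])

print(np.round(corr, 2))            # off-diagonal entries cluster together
print(np.round(np.abs(loading), 2))  # all items load strongly on factor 1
```

Reading the two outputs side by side is the point of the reviewer's comment: items that correlate strongly share large loadings on the same factor, so the correlation matrix and the loading pattern corroborate each other.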

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: No

Reviewer #4: Yes: Dr. Salahaddin Yasin Baper

Acceptance letter

Prabhat Mittal

20 Dec 2021

PONE-D-21-07531R2

Evaluation of Virtual Tour in an Online Museum: Exhibition of Architecture of the Forbidden City

Dear Dr. Li:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Prabhat Mittal

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Data for CFA.

    (XLS)

    S2 File. Original and translation of the questionnaire.

    (DOCX)

    S3 File. Data collected in interviews.

    (DOCX)

    S4 File. Interview guideline.

    (DOCX)

    S5 File. (COREQ) 32-item checklist.

    (DOCX)

    S6 File

    (SAV)

    Attachment

    Submitted filename: Response to Reviewers-jiali0607.docx

    Attachment

    Submitted filename: Response to Reviewers-jiali0607.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.


    Articles from PLoS ONE are provided here courtesy of PLOS

    RESOURCES