PLoS One. 2020 Sep 4;15(9):e0238041. doi: 10.1371/journal.pone.0238041

Database of virtual objects to be used in psychological research

Deian Popic 1, Simona G Pacozzi 1, Corinna S Martarelli 1,*
Editor: Haoran Xie
PMCID: PMC7473576  PMID: 32886717

Abstract

Although many visual stimulus databases exist, to our knowledge, none includes 3D virtual objects that can be used directly in virtual reality (VR). We present 121 objects that were developed for scientific purposes. The objects were built in Maya, and their textures were created in Substance Painter. The objects were then exported to FBX and OBJ formats and rendered online using the Unreal Engine 4 application. Our goal was to develop the first set of high-quality virtual objects with standardized names, familiarity, and visual complexity. The objects were normed based on the input of 83 participants. This set of stimuli was created for use in VR settings and will facilitate research using VR methodology, which is increasingly employed in psychological research.

Introduction

Since virtual reality (VR) technology emerged as a research method in the 1980s, it has evolved in different ways to become less expensive, more user-friendly, and more immersive. Current fully immersive VR environments presented via a head-mounted display are becoming increasingly realistic. They conform to human vision (e.g., motion parallax), and they allow for motion and physical interaction with the virtual world in a manner similar to reality. Immersive VR provides the unique feeling of being physically present in a computer-generated virtual environment, as the user is fully immersed in the virtual world.

During the last two decades, VR has been increasingly used in psychological research. In a seminal paper, Blascovich et al. [1] discussed the trade-off between experimental control and external validity that characterizes research methods in psychology. They suggested that immersive VR might one day eliminate the trade-off entirely. For instance, in the field of visual memory research, encoding, storage, and retrieval processes are investigated in conventional laboratory settings by using 2D visual stimuli presented on a blank screen under highly controlled experimental conditions. It has been argued that external validity is not guaranteed, because in everyday life, learning and memory processes take place in interactive 3D contexts (e.g., [2]). Testing memory processes in real environments enhances external validity but reduces experimental control and replicability. The advantage of VR is its high degree of experimental control, external validity and experimental replicability [1]. In addition, VR environments enable immersive and interactive experiences, which classical experimental settings do not provide. VR might thus bridge the gap between highly controlled laboratory settings and reality [2].

Since the seminal paper of Blascovich et al. [1], several studies have demonstrated the benefits of using virtual reality in psychological research (e.g., [3–6]). To the best of our knowledge, however, no validated database of virtual objects has been developed yet, even though such stimuli are essential for studying processes such as learning and memory in VR. The goal of this article is to report the creation of 121 virtual objects that can be used directly in VR and the standardization of these stimuli in terms of name, familiarity, and visual complexity. The tested norms are in agreement with different approaches used to validate visual stimulus material (see, e.g., [7, 8]).

In sum, we aimed to create, standardize, and share a first set of virtual objects. Open science not only makes scientific work more visible and impactful but also increases efficiency, transparency, and transfer of knowledge. When stimulus sets are standardized and shared, they can be a powerful resource for other researchers to use in VR settings. For this reason, the present set of 3D objects is available online on the Open Science Framework (OSF). To our knowledge, this is the first open database of virtual objects. Such databases are extremely relevant to the growing trend of VR research in psychological sciences.

Methods

Participants

To standardize the virtual objects, an English-speaking sample was recruited from Amazon’s Mechanical Turk. We preregistered the study on the OSF (https://osf.io/mc2ny). Ninety-six participants completed the online survey in exchange for $3. One participant took part in the survey twice; the duplicate was removed, and only the first completed survey was included in the final dataset. To restrict the sample to native English speakers, responses from non-US participants (12 participants) were removed. The remaining sample comprised 83 participants (51.8% female) with an average age of 37.5 years (SD = 12.7), 100% of whom were US residents. In terms of occupation, they were classified as sales (21.7%), technicians (18.1%), service sector (16.9%), academic profession (10.8%), executives (6%), students (4.8%), temporary staff (3.6%), social services (2.4%), or other (15.7%). The ethics committee of the Swiss Distance University Institute (2019-04-00002) approved the study, which was conducted according to the principles expressed in the Declaration of Helsinki.

Materials

Virtual objects

Initially, ideas for virtual objects were gathered through discussions within the research group and aligned with pre-existing 2D databases. We selected inanimate objects that are common in daily use and not rotationally symmetric. Rotationally asymmetric objects look different in each distinct orientation (e.g., a cup with one handle, but not a cup with two handles). This characteristic is needed, for example, in VR experiments testing orientation recall, where subjects have to reinstate the original orientation of objects. A set of 121 objects belonging to categories such as furniture, electronic devices, and food was built in Maya, versions 2018 and 2019 (Autodesk). The textures of the objects were created in Substance Painter (Adobe). For each object, three to four texture maps (i.e., the BaseColor map, the normal map, the occlusion/roughness/metallic map, and the alpha map) were created and exported as PNG files with a resolution of 2048 x 2048 pixels. Objects were exported from Maya to FBX and OBJ formats and rendered online using the Unreal Engine 4 application (version 4.21.2, Epic Games). The objects (including textures) can be found on the OSF (https://osf.io/q658a/). The objects were designed so that their color can be changed in the Unreal Engine. Some textures include an alpha map to highlight details; for example, the cap’s logo (see Fig 1) remains white when the alpha texture is applied. The objects and textures can be imported directly into the Unreal Engine, where the texture maps are combined into a single material that is then assigned to the corresponding virtual object. Examples of objects are presented in Fig 1.
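To illustrate the import pipeline described above, the following minimal sketch uses the Unreal Engine 4 editor’s Python API to import one object and connect two of its texture maps to a material. This is an illustration only, not the authors’ tooling: the file path and asset names (cap.fbx, M_Cap, T_Cap_BaseColor, T_Cap_Normal) are hypothetical, and the same steps can be carried out manually in the editor.

    # Minimal sketch (hypothetical paths and asset names): import an FBX object
    # and build its material via the Unreal Engine 4 editor Python API.
    import unreal

    # 1) Import the FBX mesh into the project.
    task = unreal.AssetImportTask()
    task.filename = "C:/VirtualObjects/cap.fbx"   # hypothetical local path
    task.destination_path = "/Game/VirtualObjects"
    task.automated = True
    task.save = True
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

    # 2) Create a material and wire the texture maps into it.
    tools = unreal.AssetToolsHelpers.get_asset_tools()
    material = tools.create_asset("M_Cap", "/Game/VirtualObjects",
                                  unreal.Material, unreal.MaterialFactoryNew())
    mel = unreal.MaterialEditingLibrary

    base = mel.create_material_expression(
        material, unreal.MaterialExpressionTextureSample, -400, -200)
    base.texture = unreal.load_asset("/Game/VirtualObjects/T_Cap_BaseColor")
    mel.connect_material_property(base, "RGB",
                                  unreal.MaterialProperty.MP_BASE_COLOR)

    normal = mel.create_material_expression(
        material, unreal.MaterialExpressionTextureSample, -400, 200)
    normal.texture = unreal.load_asset("/Game/VirtualObjects/T_Cap_Normal")
    normal.sampler_type = unreal.MaterialSamplerType.SAMPLERTYPE_NORMAL
    mel.connect_material_property(normal, "RGB",
                                  unreal.MaterialProperty.MP_NORMAL)

    mel.recompile_material(material)
    # The material can then be assigned to the imported static mesh.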

Fig 1. Examples of stimuli as presented in the online survey.


For the standardization procedure, the virtual objects were converted to video format. After being rendered to AVI video files using the Unreal Engine 4 application (pixel resolution: 1920 x 1080, frame rate: 60 fps, length: 4 seconds), the videos were converted to MP4 files for presentation in the online survey. Objects were presented in grayscale. The videos can also be found on the OSF (https://osf.io/q658a/).
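The conversion tool is not specified in the text; one common way to batch-convert AVI renders to MP4 is with ffmpeg, as in the hedged Python sketch below (the renders directory is hypothetical, and ffmpeg must be installed and on the system PATH).

    # Hypothetical batch conversion of rendered AVI clips to MP4 (H.264).
    import pathlib
    import subprocess

    for avi in sorted(pathlib.Path("renders").glob("*.avi")):
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(avi),
             "-c:v", "libx264", "-pix_fmt", "yuv420p",  # broadly compatible
             str(avi.with_suffix(".mp4"))],
            check=True,
        )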

Name of virtual objects. Following the procedure of Brodeur et al. [7], we asked participants to identify virtual objects with the following question: “Identify the object as briefly and unambiguously as possible by writing only one name, the first name that comes to mind. The name can be composed of more than one word.”

Familiarity. Participants were asked to rate how familiar they were with the object on a 5-point Likert scale ranging from not familiar at all to very familiar.

Visual complexity. Participants were asked to subjectively rate how complex the object appeared, in terms of the quantity of details and the number of angles, on a 5-point Likert scale ranging from not complex at all to very complex.

Procedure

Participants completed the survey online using the freely available open-source software LimeSurvey (www.limesurvey.org). They first provided informed consent and then reported their age, gender, employment, country of origin, and country of residence. They also confirmed that they were at least 18 years of age and understood written English well. Virtual objects were then presented one at a time, with automatic playback of a four-second video showing all sides of the object in grayscale; the video could be replayed. Next, participants were asked to type the name of the object (name agreement task) and then provided their ratings of familiarity and visual complexity. Participants could move to the next object at their own pace, and objects were presented in randomized order across participants. The entire validation procedure lasted about 30 minutes.

Analyses

Raw data as well as an Excel spreadsheet containing specific details related to each virtual object are available on the OSF (https://osf.io/q658a/). For each object, we report the modal name, name agreement, and mean ratings for familiarity and visual complexity.

Modal name and name agreement

The name or combination of names given by the highest number of participants was considered the modal name of the object. Before determining modal names, individual entries were revised as follows:

  1. Entries like “I don’t know” or “no idea” were coded as DKO (don’t know object).

  2. Misspellings were corrected.

  3. Abbreviated words were written out (e.g., “television” instead of “TV”).

  4. Conjunctions like “and,” “or,” and “with” were discarded.

  5. Adjectives describing a state or a feature that is irrelevant to the identity of the object were also discarded (e.g., “cap” instead of “stylish cap”). Adjectives were not removed if they provided relevant information regarding the nature, shape, or function of the object (e.g., “wireless phone” or “artificial banana”).

  6. Composite names with a rearranged word order (e.g., “bowl of fruit” and “fruit bowl”) were considered as the same name.

Once the modal name was identified, all entries that contained the modal name (as defined above) were recoded to the modal name. For example, the responses “bottle opener” and “bottle cap opener” were recoded to the modal name “opener”. This strategy prevented entries with essentially the same meaning from being counted as separate names and thereby artificially lowering name agreement. The name agreement measure indicates which objects elicit rather homogeneous names and which objects are named rather inconsistently.
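As a concrete illustration, the Python sketch below computes a modal name and name agreement under simplifying assumptions: it implements only the DKO coding, conjunction removal, word-order normalization, and modal-name recoding steps; misspelling correction and abbreviation expansion (steps 2 and 3) were manual judgments that are not reproduced here.

    import re
    from collections import Counter

    STOPWORDS = {"and", "or", "with", "of"}               # step 4
    DKO_MARKERS = ("don't know", "dont know", "no idea")  # step 1

    def normalize(entry):
        """Lowercase, drop conjunctions, and sort words so that rearranged
        composites such as 'bowl of fruit' and 'fruit bowl' match (step 6)."""
        entry = entry.strip().lower()
        if any(marker in entry for marker in DKO_MARKERS):
            return "DKO"
        words = [w for w in re.findall(r"[a-z]+", entry) if w not in STOPWORDS]
        return " ".join(sorted(words))

    def modal_name_agreement(entries):
        """Return the modal name and the proportion of entries matching it
        after recoding entries that contain the modal name."""
        names = [normalize(e) for e in entries]
        counts = Counter(n for n in names if n != "DKO")
        modal = counts.most_common(1)[0][0]
        modal_words = set(modal.split())
        matches = sum(1 for n in names
                      if n != "DKO" and modal_words <= set(n.split()))
        return modal, matches / len(entries)

    # 'bottle opener' and 'bottle cap opener' count toward the modal name
    entries = ["opener", "opener", "bottle opener",
               "bottle cap opener", "corkscrew"]
    print(modal_name_agreement(entries))  # ('opener', 0.8)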

Familiarity and visual complexity

For each object, the ratings for familiarity and visual complexity were summarized by computing the mean and standard deviation of the scores on the 5-point scale.
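With the raw data in long format (one row per participant and object; the file and column names below are hypothetical), these per-object norms reduce to a single pandas aggregation:

    import pandas as pd

    # Hypothetical long-format ratings file with 1-5 'familiarity'
    # and 'complexity' scores, one row per participant x object.
    df = pd.read_csv("ratings_long.csv")
    norms = (df.groupby("object")[["familiarity", "complexity"]]
               .agg(["mean", "std"])
               .round(2))
    print(norms.head())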

Results

Table 1 summarizes the agreement and ratings for name, familiarity, and visual complexity, and Fig 2 shows their histograms.

Table 1. Agreement and ratings for name, familiarity, and visual complexity.

Descriptive statistics
Variable               Mean    SD      Min     Max
Modal name agreement   74%     22.8%   20.5%   100%
DKO                    0.55%   1.43%   0%      7.20%
Familiarity            4.37    0.42    2.77    4.80
Visual complexity      2.42    0.45    1.64    3.59

DKO = Don’t know object. SD = Standard deviation.

Fig 2. Histograms of frequencies of norms.


Name of the virtual objects

The modal name of each virtual object was defined as the name given by the highest number of participants. On average, 74% (SD = 22.8%) of participants agreed on the modal names.

Familiarity

Overall, the objects were rated as rather familiar (M = 4.37, SD = 0.42), as shown in Table 1.

Visual complexity

Overall, the objects were rated as having medium complexity (M = 2.42, SD = 0.45), as shown in Table 1.

Discussion

VR technology is a promising tool for bridging the gap between experimental control and external validity in psychological research. Sets of stimuli are valuable for the development and validation of experimental material. Several sets of stimuli have been developed and validated, but virtual objects that can be used directly in a VR environment have not yet been made accessible. The goal of this study was to develop and validate the first normative database of virtual objects. The stimuli and the stimulus-specific norms for each normative dimension are available online on the OSF to facilitate access to experimental material for VR.

In this study, name agreement was 74% across all virtual objects and ranged from 20.5% to 100%. Thus, the percentage of name agreement was comparable to that found in studies using 2D line-drawn pictures (72%–85%, depending on the language used [9]) and in studies using 2D photographs (64% [10]). It is important to consider that recoding more specific entries to the chosen modal names increased name agreement by eliminating potential alternative names. We argue that as long as the function of the object remains the same, it is acceptable to neglect additional details to prevent an artificial reduction in name agreement. However, this procedure might not fully capture the diversity of the stimuli, which in turn represents the diversity of real-world objects.

Furthermore, the virtual objects in the present study were predominantly everyday objects, presented with a simple and unambiguous design, which would likely lead to high name agreement. The DKO rate of only 0.55% emphasizes that the objects were easily recognizable. In the second phase of the project conducted by Brodeur et al. [7], which added 930 new normative photos to the existing database, name agreement was substantially lower (mean of 58%) than in the first phase of the project. The authors argue that adding more stimuli generally leads to a reduction in name agreement because more uncommon objects have to be included. With only 121 objects, we have a relatively small database, which may further explain the high percentage of name agreement.

The average ratings of familiarity and visual complexity were 4.37 and 2.42 (both out of 5), respectively. It is not surprising that the mean familiarity rating was rather high, since everyday objects were selected for this project. The score is comparable with the mean familiarity rating (4.0 out of 5) found by Brodeur et al. [10], who also used common objects in their validation study. However, the average familiarity score reported by Snodgrass and Vanderwart [8] was numerically lower than the one we found. It is conceivable that 3D objects, like photos, correspond more closely to what we can see and touch in real life and thus create the impression of higher familiarity.

The average score obtained for visual complexity, which aligns with our expectations, might reflect the simple and colorless design of the virtual objects. Interestingly, it is consistent with the visual complexity score of 2.4 for photo stimuli reported by Brodeur et al. [10]. The authors argue that although photos include more details than artificially designed objects, photo stimuli are more similar to what subjects perceive in everyday life. Indeed, the visual complexity score for line-drawn pictures reported in the database of Snodgrass and Vanderwart [8] was 3.0, which is numerically higher than that in the present study. To summarize, the set of virtual objects is comparable to pre-existing databases of 2D stimuli in terms of name agreement, familiarity rating, and visual complexity rating.

There are some limitations of the database and the norming procedure. A larger number of virtual objects should be developed and evaluated in future research. Moreover, future research would benefit from examining other dimensions of the database, such as object category or object agreement (i.e., the extent to which the object is similar to the one imagined by the subject). In the current validation study, objects were presented in grayscale and centered in the middle of a table. The color of the objects can be changed, and the objects can be presented in other (richer) contexts; however, the relationship between color, context, and ratings is unknown. It would be important to further investigate these aspects.

Although open-source databases are growing in popularity, they have not yet achieved widespread use. In a recent review, McKiernan et al. [11] illustrated that open research is associated with increased citations, media attention, potential collaborators, job opportunities, and funding opportunities. A large majority of researchers support open science and contribute to efficient and transparent research practices. It is also important to make experimental tools and stimulus material accessible to third parties to ensure standardized procedures across research experiments.

Here, we present a database of 121 virtual objects. The objects were created specifically to help experimental psychologists who investigate perception and memory in virtual reality. Yet another possibility is to use these everyday objects to create virtual environments (e.g., in social or clinical psychology). The virtual objects have been normed for name, familiarity and visual complexity, thus allowing researchers to use this information as experimental variables or as control variables (to avoid any confounding effects). Finally, the present results indicate that the virtual objects are valid and may benefit future research in different fields of knowledge. We hope that the virtual objects will be useful for researchers adopting VR environments to test their research questions.

Supporting information

S1 Data. Original data.

(XLSX)

S2 Data. Ratings for individual objects.

(XLSX)

Data Availability

All relevant data are available on the Open Science Framework (osf.io/q658a/).

Funding Statement

The authors received no specific funding for this work.

References

  • 1. Blascovich J, Loomis J, Beall AC, et al. Immersive virtual environment technology as a methodological tool for social psychology. Psychol Inq 2002; 13: 103–124. doi: 10.1207/S15327965PLI1302_01
  • 2. Kisker J, Gruber T, Schöne B. Experiences in virtual reality entail different processes of retrieval as opposed to conventional laboratory settings: A study on human memory. Curr Psychol 2019. doi: 10.1007/s12144-019-00257-2
  • 3. Krokos E, Plaisant C, Varshney A. Virtual memory palaces: Immersion aids recall. Virtual Real 2019; 23. doi: 10.1007/s10055-018-0346-3
  • 4. Martarelli CS, Borter N, Bryjova J, et al. The influence of parent’s body mass index on peer selection: An experimental approach using virtual reality. Psychiatry Res 2015; 230: 5–12. doi: 10.1016/j.psychres.2015.05.075
  • 5. Murcia-López M, Steed A. The effect of environmental features, self-avatar, and immersion on object location memory in virtual environments. Front ICT 2016; 3. doi: 10.3389/fict.2016.00024
  • 6. Wälti MJ, Woolley DG, Wenderoth N. Reinstating verbal memories with virtual contexts: Myth or reality? PLoS ONE 2019; 14(3): e0214540. doi: 10.1371/journal.pone.0214540
  • 7. Brodeur MB, Guérard K, Bouras M. Bank of standardized stimuli (BOSS) phase II: 930 new normative photos. PLoS ONE 2014; 9(9): e106953. doi: 10.1371/journal.pone.0106953
  • 8. Snodgrass JG, Vanderwart M. A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. J Exp Psychol [Hum Learn] 1980; 6(2): 174–215. doi: 10.1037/0278-7393.6.2.174
  • 9. Bates E, D’Amico S, Jacobsen T, et al. Timed picture naming in seven languages. Psychon Bull Rev 2003; 10(2): 344–380. doi: 10.3758/BF03196494
  • 10. Brodeur MB, Dionne-Dostie E, Montreuil T, et al. The Bank of Standardized Stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research. PLoS ONE 2010; 5(5): e10773. doi: 10.1371/journal.pone.0010773
  • 11. McKiernan EC, Bourne PE, Brown CT, et al. Point of view: How open science helps researchers succeed. eLife 2016; 5: e16800. doi: 10.7554/eLife.16800.001

Decision Letter 0

Haoran Xie

15 Jun 2020

PONE-D-20-11816

Database of virtual objects to be used in psychological research

PLOS ONE

Dear Dr. Martarelli,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 30 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Haoran Xie

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Database of virtual objects to be used in psychological research

This study features a downloadable dataset of 3D object sets for virtual reality research, an important and fast-growing research trend. As the authors state, the arrival of this dataset is timely, and its availability for free download speeds up the dissemination of knowledge and contributes to its widespread usage. In addition, several indices (N=80), including object familiarity, complexity, and name agreement, were provided for cross-database checks.

Overall, I highly agree with the authors’ point about making these AR/VR-compatible stimuli freely accessible; together with the raters’ norm data, this provides a good starting point toward wider usage and further improvement, stimulation, and growing progress. That being said, I do have several suggested edits and 1~2 wishes that would require the authors’ responses.

Line 47 (pp. 3): control, external validity, and (misplace comma) experimental replicability [1]

Line 64-65 (pp. 4): Such databases are extremely relevant, as VR technology is becoming more popular in psychological research. Relevant to what? “to the growing popularity/trend of VR research in psychological sciences”.

Line 69: recruited from Amazon’s website Mechanical Turk.

Line 78: The local ethics committee approved the study, which … Could the name of the IRB and document number be stated here, if possible?

Line 84: were not "rotationally symmetric”….not sure what it meant here for the name of the VR stimulus set. After viewing the stimulus mp4, I guess it meant to represent the rotation of stimuli on the table, so could be better comprehended to add reference to the mp4.

Line 89: when I first saw the “FBX” format, I wondered if FBX is a standard version for VR stimuli, and googled online (and found this one: https://experience.briovr.com/blog/obj-and-fbx-files-for-virtual-reality-augmented-reality/). I guess the authors would also like to add some references of “FBX” for naive readers.

Line 197: "The average ratings of familiarity and visual complexity were 4.37 and 2.42”. I would suggest the addition of (both out of 5) immediately after the numbers, and make sure that the scales were consistent across the studies mentioned in this paragraph (e.g., Brodeur [7] and Snodgrass and Vanderwart [8]). The reason I raised this concern is that, as the authors mentioned in lines 213-214, "texture can appear artificial in drawings, leading to ambiguity and creating the impression of higher visual complexity” seems counterintuitive at first glance, because the Snodgrass et al. stimuli were mostly black-line drawings, and it is not clear whether the so-called “textures” were present in affecting the raters around the ’80s.

Suggestion 1: after downloading the video, I noticed the choice of colorless objects and the smoothness of the objects (e.g., bananas, whether 1 or many, were not that smooth in the rendered video). I guess, as the 1st release, there is definitely room for improvement in future releases. I only hope that it will not be 24 or 32 years apart, as with the Snodgrass et al. set and the subsequent color versions (Rossion et al. 2004) https://journals.sagepub.com/doi/abs/10.1068/p5117 and (Moreno et al. 2012) https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0037527#

Suggestion 2: To concur with the authors on the benefits of open science, I would like to suggest one further step: a road map on how to best use the uploaded VR stimuli, e.g., to create a psychological VR experiment. First, I guess a VR headset would be required, and then one would code with certain, possibly open-source, software, then adopt the material and investigate the effect? How could researchers make the best use of such stimuli? Some road map suggestions would be highly desired, or is there already some publication that could serve as a reference?

Reviewer #2: The purpose of this research is clear, the design and procedure are straightforward, and the methodology is consistent with other research in this area. These are the strengths of this paper. In terms of significance, virtual reality plays a more and more important role in psychological experiments nowadays. Therefore, building good databases, especially the first one with 3-D objects, is truly important work.

Here are the questions that need some clarification. First, the “database” is small, with only 121 objects, especially as many of them have more than one version.

About the participants: 12 non-US participants were removed because they were not “the target population”. Were the participants removed by their citizenship or their English proficiency?

Finally, this paper aimed to “create, standardize, and share the first set of objects” for psychological experiments. The standardization might need some more elaboration.

In sum, the issue is important and the methodology is fine. However, the database is small, and there are some conceptual concerns to be clarified.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Chun-Chia Kung

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: review 20200607.pdf


Author response to Decision Letter 0


22 Jun 2020

Dear Prof. Xie, dear Editor

We thank you very much for sending us the reviews for our manuscript entitled “Database of virtual objects to be used in psychological research”. We thank you for your comments and are happy that you think the study has the potential to make enough of a contribution on its own. In agreement with the reviewers’ suggestions, we have elaborated on some methodological aspects and extended the limitations section.

We greatly appreciate the opportunity to submit this revision and have revised the manuscript in accordance with the reviewers’ recommendations, which we found to be clear, helpful, and very constructive.

We hereby submit the revised version of the manuscript and accompanying reply letter explaining how we addressed the reviewers’ comments.

We hope that we have satisfactorily addressed their suggestions and thank you for considering this paper for publication in PLoS ONE. Please do not hesitate to contact us if you have any questions.

Yours sincerely and on behalf of my coauthors,

Corinna Martarelli

Resubmission of the manuscript entitled “Database of virtual objects to be used in psychological research” (PONE-D-20-11816)

Point-by-point reply

Note: the revised passages are written in blue in the manuscript.

Reviewer 1:

This study features a downloadable dataset of 3D object sets for virtual reality research, an important and fast-growing research trend. As the authors state, the arrival of this dataset is timely, and its availability for free download speeds up the dissemination of knowledge and contributes to its widespread usage. In addition, several indices (N=80), including object familiarity, complexity, and name agreement, were provided for cross-database checks.

Overall, I highly agree with the authors’ point about making these AR/VR-compatible stimuli freely accessible; together with the raters’ norm data, this provides a good starting point toward wider usage and further improvement, stimulation, and growing progress. That being said, I do have several suggested edits and 1~2 wishes that would require the authors’ responses.

Reply: We thank Prof. Kung very much for his helpful and very constructive comments. Furthermore, we greatly appreciate the generally positive assessment. We tried to address each of the points raised, and the suggestions made have been very helpful in further developing our paper. Please find below how we addressed your points.

Reviewer 1:

Line 47 (pp. 3): control, external validity, and (misplace comma) experimental replicability [1]

Reply: Thank you, we removed the comma.

Reviewer 1:

Line 64-65 (pp. 4): Such databases are extremely relevant, as VR technology is becoming more popular in psychological research. Relevant to what? “to the growing popularity/trend of VR research in psychological sciences”.

Reply: In line with your suggestion we rewrote the sentence that now reads as follows:

Such databases are extremely relevant to the growing trend of VR research in psychological sciences.

Reviewer 1:

Line 69: recruited from Amazon’s website Mechanical Turk.

Reply: We removed website and now write “recruited from Amazon’s Mechanical Turk”.

Reviewer 1:

Line 78: The local ethics committee approved the study, which … Could the name of the IRB and document number be stated here, if possible?

Reply: Thank you for drawing our attention to this issue, we now implemented the missing information. The sentence now reads as follows:

The ethics committee of the Swiss Distance University Institute (2019-04-00002) approved the study, which was conducted according to the principles expressed in the Declaration of Helsinki.

Reviewer 1:

Line 84: were not "rotationally symmetric”….not sure what it meant here for the name of the VR stimulus set. After viewing the stimulus mp4, I guess it meant to represent the rotation of stimuli on the table, so could be better comprehended to add reference to the mp4.

Reply: We thank Reviewer 1 for directing our attention to this issue. Rotational symmetry is the property of an object that looks the same after a partial rotation. We wanted to avoid rotational symmetry; thus, we created, for example, a cup with one handle and not a cup with two handles. Rotational asymmetry is important when, for example, testing object orientation recall in a virtual environment. In the revised version of the manuscript we now write:

Rotationally asymmetric objects look different in each distinct orientation (e.g., a cup with one handle, but not a cup with two handles). This characteristic is needed, for example, in VR experiments testing orientation recall, where subjects have to reinstate the original orientation of objects.

Reviewer 1:

Line 89: when I first saw the “FBX” format, I wondered if FBX is a standard version for VR stimuli, and googled online (and found this one: https://experience.briovr.com/blog/obj-and-fbx-files-for-virtual-reality-augmented-reality/). I guess the authors would also like to add some references of “FBX” for naive readers.

Reply: Thank you for this comment! The most up-to-date file format for VR is FBX; however, OBJ files are also universal. These are the two universal file formats for virtual objects (just as PNG and JPG are for 2D stimuli). We acknowledge the Reviewer’s concern and decided to additionally export the files in OBJ format. We implemented this information in the revised version of the manuscript and uploaded the OBJ export to the database on the OSF. Thank you; we think that this is a fine addition to our database that extends its accessibility.

Reviewer 1:

Line 197: "The average ratings of familiarity and visual complexity were 4.37 and 2.42”. I would suggest the addition of (both out of 5) immediately after the numbers, and make sure that the scales were consistent across the studies mentioned in this paragraph (e.g., Brodeur [7] and Snodgrass and Vanderwart [8]). The reason I raised this concern is that, as the authors mentioned in lines 213-214, "texture can appear artificial in drawings, leading to ambiguity and creating the impression of higher visual complexity” seems counterintuitive at first glance, because the Snodgrass et al. stimuli were mostly black-line drawings, and it is not clear whether the so-called “textures” were present in affecting the raters around the ’80s.

Reply: We acknowledge your suggestion and added “out of 5” in the revised version of the manuscript. Yes, the studies mentioned in the paragraph also used 5-point Likert scales. The sentence was indeed ambiguous. We removed the sentence. Thank you!

Reviewer 1:

Suggestion 1: after downloading the video, I noticed the choice of colorless objects and the smoothness of the objects (e.g., bananas, whether 1 or many, were not that smooth in the rendered video). I guess, as the 1st release, there is definitely room for improvement in future releases. I only hope that it will not be 24 or 32 years apart, as with the Snodgrass et al. set and the subsequent color versions (Rossion et al. 2004) https://journals.sagepub.com/doi/abs/10.1068/p5117 and (Moreno et al. 2012) https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0037527#

Reply: Thank you for this comment. In the revised version of the manuscript we now make more explicit that the object’s color can be changed in the Unreal Engine. We now write:

The objects were designed so that their color can be changed in the Unreal Engine. Some textures include an alpha map to highlight details; for example, the cap’s logo (see Figure 1) remains white when the alpha texture is applied. The objects and textures can be imported directly into the Unreal Engine, where the texture maps are combined into a single material that is then assigned to the corresponding virtual object.

We decided to keep the color constant (grayscale) in the validation procedure. Yes, hopefully we will not have to wait 24–32 years for the next VR database and/or validation study. We added the following sentences to the limitations section:

In the current validation study, objects were presented in grayscale and centered in the middle of a table. The color of the objects can be changed, and the objects can be presented in other (richer) contexts; however, the relationship between color, context, and ratings is unknown. It would be important to further investigate these aspects.

Reviewer 1:

Suggestion 2: To concur with the authors on the benefits of open science, I would like to suggest one further step: a road map on how to best use the uploaded VR stimuli, e.g., to create a psychological VR experiment. First, I guess a VR headset would be required, and then one would code with certain, possibly open-source, software, then adopt the material and investigate the effect? How could researchers make the best use of such stimuli? Some road map suggestions would be highly desired, or is there already some publication that could serve as a reference?

Reply: Many thanks for this suggestion. We do think that there are many ways in which these virtual objects could be used. In our lab, for example, we are using the virtual objects to investigate memory recall with continuous measures (recall of the location, color, and orientation of the objects). The VR environment allows for a high degree of experimental control, i.e., the accurate measurement of the location, orientation, and color of virtual objects, while allowing participants to engage with their environment in a realistic manner. However, other research groups might need objects to fill virtual rooms; they could then use these objects without the objects being the central part of their research question. In the revised version of the manuscript we now write:

The objects were created specifically to help experimental psychologists who investigate perception and memory in virtual reality. Yet another possibility is to use these everyday objects to create virtual environments (e.g., in social or clinical psychology). The virtual objects have been normed for name, familiarity and visual complexity, thus allowing researchers to use this information as experimental variables or as control variables (to avoid any confounding effects). Finally, the present results indicate that the virtual objects are valid and may benefit future research in different fields of knowledge. We hope that the virtual objects will be useful for researchers adopting VR environments to test their research questions.

Reviewer #2:

The purpose of this research is clear, the design and procedure are straight forward, and the methodology is consistent with some other researches in this area. These are the strength of this paper. In terms of significance, virtual reality does play a more and more important role in psychological experiments nowadays. Therefore, building good databases, especially the first one with 3-D objects, is truly important work.

Reply: We are happy that Reviewer 2 thinks the present work is a valuable contribution. Please find below how we addressed the comments of Reviewer 2.

Reviewer #2:

Here are the questions that need some clarification. First, the “database” is small, with only 121 objects, especially as many of them have more than one version.

Reply: We agree that the database is small. However, we think it is relevant to the growing field of VR research to make this first database freely accessible. We acknowledge the concern of Reviewer 2 and have extended the limitations section, which now reads as follows:

There are some limitations of the database and the norming procedure. A larger number of virtual objects should be developed and evaluated in future research. Moreover, future research would benefit from examining other dimensions of the database, such as object category or object agreement (i.e., the extent to which the object is similar to the one imagined by the subject). In the current validation study, objects were presented in grayscale and centered in the middle of a table. The color of the objects can be changed, and the objects can be presented in other (richer) contexts; however, the relationship between color, context, and ratings is unknown. It would be important to further investigate these aspects.

Reviewer #2:

About the participants: 12 non-US participants were removed because they were not “the target population”. Were the participants removed by their citizenship or their English proficiency?

Reply: Participants were removed by their citizenship (all participants confirmed that they understand written English well). We wanted to restrict the study to native English speakers. Studies comparing US and Indian MTurk participants show that in language-based tasks US MTurk participants give higher quality responses than Indian MTurk participants (e.g., Kazai et al., 2012). It has been suggested that non-native English speakers should be avoided when the task is language based (e.g., Hauser et al., 2018). Following this recommendation, we preregistered our study and stated that we will exclude responses from non-US participants. Please see our preregistration here: https://osf.io/mc2ny

Kazai, G., Kamps, J., & Milic-Frayling, N. (2012). The face of quality in crowdsourcing relevance labels: Demographics, personality and labeling accuracy. In Proceedings of the 21st ACM international conference on Information and knowledge management.

Hauser, D. J., Paolacci, G., Chandler, J. (2019). Common concerns with Mturk as a participant pool: Evidence and solutions. In Handbook of Research Methods in Consumer Psychology, edited by Kardes, F. R., Herr, P. M., & Schwarz, N. New York: Routledge.

Following the suggestion of Reviewer 2, we now make clear that we removed participants by their citizenship. In the revised version we now write:

To restrict the sample to native English speakers, responses from non-US participants (12 participants) were removed.

Reviewer #2:

Finally, this paper aimed to “create, standardize, and share the first set of objects” for psychological experiments. The standardization might need some more elaboration.

In sum, the issue is important and the methodology is fine. However, the database is small, and there are some conceptual concerns to be clarified.

Reply: Thank you for directing our attention to this issue. We agree that the database is small and that the standardization might need some more elaboration. We addressed these issues in the limitation section of the manuscript, please see our answer to your comment above.

Decision Letter 1

Haoran Xie

10 Aug 2020

Database of virtual objects to be used in psychological research

PONE-D-20-11816R1

Dear Dr. Martarelli,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Haoran Xie

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Most of my comments (as Reviewer 1) were addressed, and the suggested items were added (such as the new OBJ file format for the virtual objects, in addition to the original FBX-only format, among other points). Overall, I am happy with the improvements and correspondence, and therefore have no further questions/suggestions. Also, sorry for the belated response.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Chun-Chia Kung

Reviewer #2: No

Acceptance letter

Haoran Xie

26 Aug 2020

PONE-D-20-11816R1

Database of virtual objects to be used in psychological research

Dear Dr. Martarelli:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Haoran Xie

Academic Editor

PLOS ONE
