PLoS One. 2025 May 22;20(5):e0324514. doi: 10.1371/journal.pone.0324514

Examining the availability/findability of stimuli employed in social media and body image research

David Smailes 1,*, Arnela Aleksandra 1, Megan Coakley 1, Susan Mair 2, Joe Ventress 1
Editor: Tyler Horan
PMCID: PMC12097549  PMID: 40402963

Abstract

Concerns over the trustworthiness of the research findings generated in Psychology (as well as other disciplines) have led to calls for the adoption of practices that make research more open, transparent, and reproducible. One of these practices is the open sharing of research materials, such as task stimuli. There is some evidence that, generally, the uptake of this practice has been slow in Psychology. The aim of this study was to examine the availability/findability of the stimuli used in a sample of papers that investigated the effect of exposure to images from social media on participants’ body image, as this may be a field where progress in the open sharing of task stimuli has been especially slow. We coded the method sections of 38 studies (published across 36 articles from 2012 to 2021) in terms of the availability/findability of the images they employed and found that we were able to fully access task stimuli in only two articles. We also found no evidence that the sharing of images used as task stimuli had increased over time. We discuss likely reasons for this reluctance to share task stimuli in this field, the impact this has on reproducibility, replicability, and research waste, and ways in which this issue can be addressed. All study materials and data are available at doi.org/10.17605/osf.io/wpvst.


Since 2012, and the start of the ‘replication crisis’ [1], Psychology researchers (as well as researchers in other disciplines, e.g., biology; see [2]) have increasingly engaged in a set of ‘open research’ practices. ‘Open research’ practices include the pre-registration of predictions and analysis plans (e.g., [3]), the use of open-source software (e.g., [4]), and the open sharing of data (e.g., [5]). By adopting these practices, researchers aim to increase the transparency and rigor of their research and, in turn, the likelihood that they will generate robust, replicable findings.

A further key ‘open research’ practice is the sharing of research materials, such as questionnaires and tasks. Reproducing others’ work is a cornerstone of science [6] and researchers who openly share their research materials facilitate this core process [7]. However, it appears that the norm across many disciplines is to not openly share research materials. For example, in a recent study that sampled 250 articles published in Psychology journals between 2014 and 2017, Hardwicke et al. [8] reported that research materials were shared in only 14% of those articles. Similarly, in an analogous study where 250 social science articles (i.e., articles from Psychology, but also from Geography, Economics, and Political Science, for example) published between 2014 and 2017 were examined, 11% were rated as openly sharing their research materials [9].

One particular area within Psychology where the sharing of research materials could be considered especially important is research that examines the impact of images from social media on body image. This evidence base typically examines how exposure to images from social media that, for example, promote the ‘thin ideal’ (e.g., [10]) or serve as ‘fitspiration’ (e.g., [11]) affects participants’ body image (or related constructs), relative to exposure to control images (such as nature scenery). The images used in these studies are clearly central to the design of the study, as their content should determine the magnitude of the intended manipulation, and so it could be considered especially important that researchers working in this area share the images they have used as task stimuli. However, the experience of two of the authors (AA and DS) in designing a study that examined the effect of exposure to images from social media that promote the ‘thin ideal’ on women aged 18–22 years versus women aged 31–45 years (see: doi.org/10.17605/osf.io/zfjqh) suggested that the sharing of stimuli was very uncommon in this area.

Given this experience, the present study aimed to investigate this issue systematically, by examining how available (or findable) images used as task stimuli in research on social media and body image were. In addition, given that there is some indication that open research practices are becoming more common over time (e.g., [12]), we tested the possibility that there would be an association between year-of-publication and open sharing of the images used as task stimuli.

Method

Sample of articles

Rather than performing a novel literature search, we used all 36 articles synthesized in de Valle et al.’s [13] meta-analysis, which examined the impact of exposure to images from social media on body image, as our sample of to-be-coded articles. Our sample of papers is, therefore, made up of the papers listed in Table A1 of the Supplementary Materials to de Valle et al.’s study. The sample of papers is also available at https://osf.io/wfejz.

We took this approach (re-using the corpus of articles generated by de Valle et al. [13] rather than performing a novel literature search) in an effort to reduce research waste [14]. That is, our judgement was that if we performed a literature search similar to the one conducted by de Valle et al. [13], our search would generate only a few additional eligible studies (given that we began this project in January 2023, and de Valle et al. completed their searches in February 2021), and that the benefit of identifying these additional studies would not outweigh the cost (in terms of researcher time) of performing a novel literature search.

The 36 articles reported findings from 38 studies and were published between 2012 and 2021. The sample sizes of the studies varied from 47 to 501, with the majority of studies involving exclusively participants who self-reported their gender as female. Each study employed an experimental method that involved exposing at least one group of participants to images that were taken from social media websites (such as Facebook or Instagram), or that were made to look as if they were taken from a social media website (with the images purchased from companies such as ShutterStock, a stock photography provider, and then edited). Some studies (e.g., [15]) compared the effect on participants’ body image/satisfaction of exposure to images from social media versus exposure to, for example, images of nature scenes. Other studies (e.g., [16]) compared the effect of exposure to one type of image from social media (e.g., images that promoted the ‘thin ideal’) versus exposure to another type (e.g., images that parodied the ‘thin ideal’).

Coding of studies

We drafted a coding system based on our past experiences of reading papers that investigated the effect of exposure to images from social media on participants’ body image. One author (JV) piloted the coding system on all 36 articles. After this pilot, we refined the coding system (e.g., making some language more precise, revising some examples), and then all studies were reviewed and coded by two authors (DS and MC) using this revised coding scheme, to establish inter-rater reliability. Where the two raters disagreed, agreement on the most appropriate code was achieved through discussion. Coding involved reading the full-text manuscripts of the to-be-coded articles, as well as accessing supplementary materials or online repositories whenever needed.

The coding scheme – which is available at https://osf.io/7nxja – involved reviewing studies in terms of four codes, with the highest code (the fourth code) reflecting the most ‘open’ practices in terms of making task stimuli available, findable, or reproducible. Short of researchers sharing the stimuli they employed in a completely open or reproducible manner (e.g., sharing the images used, or providing URLs where the images could be found), we also tried to code whether the information provided made the work more or less reproducible. We did this by considering how precisely the study described the possible set of images someone could use if they were trying to reproduce the task stimuli. For example, we considered whether a study gave only examples of the kinds of social media accounts they took images from, or whether they explained exactly which social media accounts they took images from. We considered the latter a more precise description of the images used and, therefore, as increasing the likelihood that a researcher would be able to reproduce the original task stimuli. Similarly, if a study reported using images that were associated with a specific hashtag, we considered how many image results that hashtag was associated with. A study that sourced task stimuli using a hashtag associated with a small number of images was considered to have provided a more precise description of the images used than a study that used a hashtag associated with a large number of images.

Our coding system often referred to Instagram, to metrics relevant to Instagram (such as number of followers, or number of posts from an account), and to social media accounts of women, as the majority of the to-be-coded studies focused on the effects of images of women taken from Instagram on participants who self-reported their gender as female.

The first code assessed whether the authors provided a generic description of the images used. For example, did they report something like “images used were taken from the accounts of young female celebrities and depicted those celebrities in revealing clothing/as being extremely thin/being extremely muscular”? This was scored ‘yes’ versus ‘no’.

The second code assessed whether the authors provided example images, examples of accounts used, and/or a widely-used hashtag that was used to identify images. For example, did they provide a small set of example images (here we set a threshold of less than 50% of the images used), did they report that images were taken from the accounts of celebrities in an imprecise way (e.g., stating that images were “taken from the accounts of celebrities such as Kendall Jenner and Ariana Grande”), and/or did they provide a hashtag that, when we searched for it, returned more than 1,995 image results? We set this threshold of 1,995 images on the following basis. We accessed a list of the 50 most-followed accounts on Instagram (from https://en.wikipedia.org/wiki/List_of_most-followed_Instagram_accounts; accessed on 21 February 2023). From that list, we identified the accounts that belonged to women under the age of 50 years. There were 26 of these, and we calculated the median number of posts from those 26 accounts. The median was 1,994.5, which we rounded up to 1,995. Our logic, then, was that providing a hashtag that returned more than 1,995 images was less precise than reporting that images were taken from ‘an average’ highly-followed, young, female celebrity account on Instagram. Again, this was scored ‘yes’ versus ‘no’.
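
For illustration, this threshold calculation can be expressed in a few lines of code. The sketch below is purely illustrative: the post counts are hypothetical placeholders rather than the counts we retrieved on 21 February 2023, and the function name is our own.

```python
import math
import statistics

# Hypothetical post counts for accounts belonging to women under 50 years
# of age on the most-followed list (the real data set contained 26 accounts).
post_counts = [650, 1200, 1750, 1994, 1995, 2600, 3400, 7100]

median_posts = statistics.median(post_counts)  # 1,994.5 in the actual data
threshold = math.ceil(median_posts)            # rounded up to 1,995

def hashtag_widely_used(n_image_results: int, threshold: int = 1995) -> bool:
    """A hashtag returning more than `threshold` image results counts as
    'widely-used' (Code 2); otherwise as 'not-very-widely-used' (Code 3)."""
    return n_image_results > threshold
```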

The third code assessed whether the authors provided the majority of the images used, exhaustive information about the accounts used, and/or a not-very-widely-used hashtag that was used to identify images. For example, did they provide a large set of example images (50% or more of the images used), did they report that images used were taken from the accounts of Kendall Jenner, Ariana Grande, Harry Styles, and Chris Hemsworth (and these accounts alone), and/or did they report that images used were sourced using a specific hashtag that, when we searched for it, returned no more than 1,995 image results? Our logic for using this threshold of 1,995 images was the same as above: where researchers reported using a hashtag that returned no more than 1,995 images, this was at least as precise as reporting that images were taken from ‘an average’ highly-followed, young, female celebrity account on Instagram. Again, this was scored ‘yes’ versus ‘no’.

The fourth code assessed whether the authors provided URLs for the images used and/or provided all of the images used. These images/URLs could be provided in the manuscript, in supplementary materials, or at some other online repository (provided we were able to access the repository). Again, this was scored ‘yes’ versus ‘no’.
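
Putting the four codes together: each study received a ‘yes’/‘no’ score on each code, and the summary measure used in our analyses was the highest code scored ‘yes’. The sketch below illustrates this structure; the field and function names are our own, for illustration only.

```python
from dataclasses import dataclass

@dataclass
class StimulusCoding:
    code1_generic_description: bool      # generic description of images used
    code2_examples_or_wide_hashtag: bool     # example images/accounts, or hashtag > 1,995 results
    code3_majority_or_narrow_hashtag: bool   # >= 50% of images, exhaustive accounts, or hashtag <= 1,995 results
    code4_urls_or_all_images: bool       # URLs for, or copies of, all images used

    def highest_code(self) -> int:
        """Return the highest code scored 'yes' (0 if none)."""
        scores = [self.code1_generic_description,
                  self.code2_examples_or_wide_hashtag,
                  self.code3_majority_or_narrow_hashtag,
                  self.code4_urls_or_all_images]
        return max((i + 1 for i, yes in enumerate(scores) if yes), default=0)
```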

When coding articles, we coded multiple studies within the same article separately, and so these appear as separate rows in the spreadsheet at https://osf.io/vzhj9. Some studies involved multiple sets of images from social media, which were described separately in the article (e.g., images from the social media accounts of men versus images from the social media accounts of women; images from social media that promoted the ‘thin ideal’ versus images from social media that parodied the ‘thin ideal’). Again, these appear as separate rows in the spreadsheet at https://osf.io/vzhj9. Where an article reported that stimuli were available upon request, we did not take this into account, as we assumed that – as in the case of data-sharing, where data are reported to be available on request (e.g., [4]) – the materials would not be easily/meaningfully available through this route.

To examine inter-rater reliability, we took the ‘highest’ code that a rater had scored ‘yes’ and tested agreement for this. For 88% of scores given (36 out of 41), there was agreement between the two raters. We used Cohen’s kappa as our measure of agreement and found ‘substantial’ levels of agreement (kappa = 0.80). In addition, we examined level of agreement code-by-code (i.e., Did DS and MC both score ‘yes’ or both score ‘no’ for Code 1? Did DS and MC both score ‘yes’ or both score ‘no’ for Code 2? And so on). For Code 1, there was perfect agreement (both raters scored ‘yes’ for all studies). For Code 2, we found ‘substantial’ levels of agreement (same scores given for 39 of 41 studies; kappa = 0.88). For Code 3 (same scores given for 38 of 41 studies; kappa = 0.78) and Code 4 (same scores given for 40 of 41 studies; kappa = 0.79), we found ‘moderate’ levels of agreement.
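
As a concrete illustration of this agreement analysis, the sketch below computes percentage agreement and Cohen’s kappa from two raters’ ‘highest’ codes. The ratings shown are hypothetical; the real study-by-study codes are available at https://osf.io/vzhj9.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical 'highest code scored yes' (1-4) per study/stimulus set,
# one list per rater (the real data comprised 41 pairs of ratings).
rater_ds = [2, 2, 3, 1, 4, 2, 3, 2, 2, 3]
rater_mc = [2, 2, 3, 2, 4, 2, 3, 2, 2, 3]

# Raw percentage agreement between the two raters.
agreement = sum(a == b for a, b in zip(rater_ds, rater_mc)) / len(rater_ds)

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(rater_ds, rater_mc)

print(f"agreement = {agreement:.0%}, kappa = {kappa:.2f}")
```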

Results

Availability of task stimuli

We rated two studies (reported across two publications; [17,18]) as sharing images/providing links to images in a way that made their task stimuli fully reproducible, as they provided all of the images they had used as stimuli. In one of these articles, the images used as task stimuli were shared in an appendix to the full-text; in the other, the images were shared at flickr.com. We rated four studies (reported across four publications) as providing the majority of the images used, exhaustive information about the accounts used, and/or a not-very-widely-used hashtag that was used to identify images. We rated 23 studies (reported across 22 publications) as providing a small sample of example images, examples of accounts used, and/or a widely-used hashtag that was used to identify images. Finally, we rated 12 studies (reported across 10 publications) as describing the images used in the way that was least helpful for reproducing their task stimuli (i.e., they gave only generic descriptions of the kinds of images employed as stimuli). For example: “Eighteen images from high-popularity female influencers (defined for this purpose as users with >100k followers; 101k–1.2M), were chosen on the criteria of high-quality (clear and professional appearing) imagery and relative obscurity (i.e., accounts from European and US influencers who are not typically internationally renowned/recognisable, based on pilot data from three Australian female participants). The final profiles depicted thin, attractive, white female influencers (aged ∼20–30) with a combination of close up and full-body photos featuring the influencer from a variety of angles (i.e., both facing and looking away from the camera). The images depicted the influencers in artistic, lifestyle, and travel selfies. Influencers were fully clothed in summer attire (except one swimsuit photograph depicting the subject’s back) and most photos appeared to be staged/posed rather than spontaneous”. Study-by-study ratings are available at https://osf.io/vzhj9.

Association between availability of stimuli and year-of-publication

To test the possibility that the authors of studies published more recently were more likely to have engaged in open research practices, we ran an exploratory analysis in which we correlated year-of-publication with the ‘highest’ code for which a ‘yes’ score was given. The analysis was performed in Jamovi (version 2.3.21; [19,20]). We found a positive but non-significant correlation between year-of-publication and the highest code scored ‘yes’ (rho = .12, p = .60).
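
We ran this analysis in Jamovi, but an equivalent computation can be sketched in Python using scipy.stats.spearmanr. The year/code pairs below are hypothetical placeholders; the real study-level data are available at https://osf.io/vzhj9.

```python
from scipy.stats import spearmanr

# Hypothetical (year-of-publication, highest code scored 'yes') pairs.
years = [2012, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021]
codes = [2, 1, 2, 2, 3, 2, 4, 2, 3]

# Spearman's rho; the study itself reports rho = .12, p = .60.
rho, p = spearmanr(years, codes)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```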

Discussion

The aim of this study was to examine the availability/findability of the images used as task stimuli in research that tested the effect that exposure to images from social media has on participants’ body image (and related constructs). We found that very few of the articles we coded made the images employed fully available. Many more articles gave only broad, imprecise descriptions of the images they used. Thus, it seems that in experimental research on the impact of social media on body image, there has been little adoption of the open sharing of study materials.

These findings are consistent with the results of past research that has examined the availability of tasks/task stimuli in Psychology research. For example, Hardwicke et al. [8] reported that research materials were shared in only 14% of a random sample of Psychology articles published between 2014 and 2017. Similarly, across a random sample of 250 articles from the social sciences published between 2014 and 2017, only 11% openly shared their research materials [9]. Thus, the pattern we have reported here, of the norm in social media and body image research being a failure to share research materials, is also the norm in other areas of Psychology and in other social science disciplines.

The lack of open sharing of the images used as task stimuli in social media and body image research likely has several consequences. First, it makes reproducing others’ work much more difficult: without certainty about which stimuli were used in a study, researchers cannot be confident that they are creating a task equivalent to the one used in the original study. Given that reproducing others’ research and attempting to replicate their findings is a cornerstone of the scientific method [6], it is clearly a substantial problem if researchers are not able to do so.

Related to this is a second consequence: the ability of researchers to ‘successfully’ replicate others’ work (i.e., to find the same effects that have been reported in previous studies) is probably reduced by the failure to openly share task stimuli. Where a researcher cannot precisely re-create the task used in an original study, and so must source a different set of stimuli when developing their own version of the task, the magnitude of the effect elicited by the task in the subsequent study will almost certainly differ from the magnitude of the effect elicited in the original study. Across studies that use different sets of stimuli, then, we should expect effect sizes to vary beyond sampling error, possibly to the extent that some studies will find significant effects on body image and others will not. Thus, what appears to be inconsistency within an evidence base may simply be a result of variation in the stimuli employed. To some extent, variation in the stimuli used across studies can strengthen an evidence base, as it shows whether an effect is generalizable. Indeed, de Valle et al. [13] noted this was the case in their meta-analysis, where effects were similar across studies that employed different types of images (e.g., images that promote a traditional ‘thin ideal’ versus images that promote specific forms of ‘fitspiration’). However, if materials were shared openly, it would be easier for researchers to examine the effects of using different images in a systematic manner (as Yarkoni [21] recommended).

The third consequence of a lack of sharing of images used as task stimuli in social media and body image research is one of research waste [14]. Without straightforward access to the task stimuli that others have used previously, researchers will unnecessarily spend a (presumably) substantial amount of time sourcing their own stimuli, building a task using these stimuli, and (ideally) piloting the task to ensure it works as intended. At present, there are no good estimates of how much time researchers spend unnecessarily developing tasks that could have been shared openly. It would be useful for future research to provide these kinds of estimates, as has been done for the cost of conducting a meta-analysis (around $141,000; [22]) and the number of ‘redundant’ meta-analyses that are published (possibly more than 50% of all published meta-analyses; [23]).

It is likely that a number of systemic factors have contributed to the lack of sharing of images used in social media and body image research. For older studies in our sample, sharing task stimuli was likely both something that researchers were not actively encouraged to do and something that was relatively complex to achieve. For example, the Open Science Framework, which many researchers now use to share data and research materials [24], was only established in 2013. Prior to its existence, researchers may not have had a good understanding of which online repositories would be effective places to share study materials. More recently, it is likely that a simple lack of resources and time pressure discourage researchers from sharing task stimuli.

However, another issue may be that researchers are hesitant to share tasks that feature stimuli taken from social media because of concerns about privacy or copyright. That is, they may worry that if they develop a task that contains images from a celebrity’s social media account and then share that task freely online, they may be violating copyright law. Such concerns are warranted in some contexts (see [25]), but will not be relevant for many researchers, as we explain below.

Concerns about privacy are very much warranted where researchers have used images (or other types of content, such as text) posted to ‘private’ accounts, or content that the original poster intended to be private to some extent. See, for example, Morant et al. [26], where content from Twitter was used by researchers in a way that was upsetting for the original Twitter users, who felt they had engaged in private conversations, which should not have been used as data by the researchers without consent being obtained. In these contexts, researchers should request permission from the account holder to use the content they have posted for research purposes. If this is provided, then researchers should discuss with the account holder whether they also consent to that content being shared openly with other researchers (e.g., via the Open Science Framework, or in supplementary materials to an article), as it is possible that an account holder may consent to content from their posts being used in a single research project, but may not consent to that content being widely shared.

In contrast, concerns about copyright law are less warranted where researchers have used images posted to ‘public’ social media accounts, which are easily discoverable by anyone accessing a social media platform such as Instagram, Facebook, Twitter, or TikTok. Typically, these images can be used under the doctrine of ‘fair use’ (see help.instagram.com/116455299019699?helpref=faq_content and https://copyright.gov/fair-use/), as long as researchers include a citation to accompany the content they have used/shared. A good example of this is Lalancette and Raynauld’s [27] article, which analysed the use of online images in ‘celebrity politics’ in Canada, and reproduced 11 images from Canadian Prime Minister Justin Trudeau’s Instagram account, with each image accompanied by an appropriate citation. Similarly, as part of a research project conducted by AA and DS, we posted PowerPoint files containing the images we employed as stimuli on the Open Science Framework (see https://osf.io/egamu and https://osf.io/yce9z) in a way that does not violate copyright law.

Researchers can raise concerns they have around privacy and copyright with copyright librarians at their institutions, who should be able to provide clear guidance about best practice in terms of re-using and sharing content taken from social media accounts. If this advice is not available to a researcher, and they wish to err on the side of caution, then posting the URLs where the images used can be found, as well as a detailed description of the images, is another way in which researchers can make their stimuli available to others (see https://osf.io/y796h and https://osf.io/g7xem for examples of this). However, there are limitations to this approach of sharing URLs, as they may be affected by ‘link rot’ [28], where once-usable links become unavailable over time. This is especially likely to be a problem when providing links to social media accounts, as posts can be deleted automatically once they reach a certain ‘age’, or so that an account never contains more than a specific number of posts (see tweetdeleter.com/features/auto-delete-tweets and aischedul.com/delete-instagram-posts-automatically-after-publishing/). Thus, best practice likely involves posting both the URLs where the images used as task stimuli can be found and reproductions of the images used, accompanied by appropriate citations.
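
Because link rot accumulates gradually, researchers who share URLs could also periodically verify that their links still resolve, substituting archived copies where they do not. The sketch below illustrates one way to do this; the URLs are hypothetical placeholders, not links to real stimuli.

```python
import requests

# Placeholder stimulus URLs (hypothetical, for illustration only).
stimulus_urls = [
    "https://www.instagram.com/p/EXAMPLE1/",
    "https://www.instagram.com/p/EXAMPLE2/",
]

for url in stimulus_urls:
    try:
        # A HEAD request follows redirects without downloading the image.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = "ok" if resp.status_code < 400 else f"broken (HTTP {resp.status_code})"
    except requests.RequestException as exc:
        status = f"unreachable ({type(exc).__name__})"
    print(f"{url}: {status}")
```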

It is, therefore, possible for researchers – at least in most cases – to share the images from social media that they have used as task stimuli, and there are good reasons (related to increasing reproducibility, increasing replicability, and reducing research waste) to do so. However, here we found no evidence that open sharing of stimuli had increased over time, which suggests that adoption of this practice will not occur without specific interventions. Interventions that may increase the open sharing of stimuli by researchers working on social media and body image include journals implementing policies that mandate the sharing of research materials and/or journals awarding ‘badges’ that acknowledge articles where the authors have shared research materials. Journal policies mandating data-sharing (or the inclusion of a statement explaining why data cannot be shared) have been shown to be associated with a substantial increase in that practice, although it should be noted that data were often shared in sub-optimal ways [29]. The awarding of badges by journals for articles that have adopted ‘open research’ practices, such as the pre-registration of analysis plans and the sharing of analysis code, has also been associated with an increase in the adoption of these practices [30]. The International Journal of Eating Disorders has recently introduced a ‘badges’ system and new editorial policies around open science [7], and research that examines the impact these interventions have on the open sharing of research materials in social media and body image research published in that journal, versus such research published elsewhere, will be of interest.

This study suffered from several limitations. First, the sample of articles we coded was relatively small in comparison to other studies that have examined ‘open research’ practices (e.g., samples of 250 papers were used in [8] and [9]). That being said, this study focused on a narrower subset of articles, and so a smaller sample of papers was almost inevitable. In addition, others (e.g., [31]) have used similar (even smaller) samples of articles to investigate the kinds of questions we have examined here. Second, because we employed a sample of papers generated by researchers who conducted a meta-analysis in 2021, the articles we reviewed do not reflect the practices researchers have engaged in during 2022–2025. Given that we did not find an association between year-of-publication and increased sharing of images used as task stimuli, we would be surprised if there had been a substantial change in this practice in 2022 and 2023. However, as we noted above, the International Journal of Eating Disorders has recently implemented new policies and incentives around engaging in ‘open science’ practices, and future research should examine whether the sharing of images used as task stimuli becomes more common in social media and body image research through the mid-2020s. Third, we developed our coding system based on our past experience of reading publications in this field, rather than using coding systems developed by other researchers. This may have introduced some bias in, for example, the codes generated. Fourth, the concerns we have raised about the open sharing of materials would be less of a problem if, when contacted, the authors of a publication were able to straightforwardly share (e.g., via private email correspondence) the materials they had used. We took the approach of assuming that contacting authors would have been unproductive, and we think that this is a fair assumption based on, for example, data showing that around 85% of authors do not respond to requests to share data [32]. However, it is possible that authors would be more willing to discuss sharing materials than sharing data. Future research that examines how often authors of social media and body image research are willing/able to share testing materials when requested would be valuable.

Finally, another limitation caused by our use of a sample of papers generated by other researchers when they were conducting a meta-analysis is that the value of our study depends, at least to some extent, on the comprehensiveness of their search strategy and the rigor with which it was carried out. Our judgment was that the literature search conducted by de Valle et al. [13] was an effective one, in that the terms employed were broad enough so that relevant studies were discovered and in that several of the criteria by which systematic reviews are evaluated (using more than one database to search for publications, using more than one researcher to make decisions about including/excluding publications, using more than one researcher to extract data from publications; [33]) were met. However, it is true that the limitations that applied to de Valle et al.’s search strategy (e.g., excluding studies that were not written in English) also apply to the current study.

In summary, this study aimed to examine the availability/findability of images used as task stimuli in social media and body image research. In line with research that has examined similar questions in other disciplines, we found that open sharing of task stimuli was rare. Future research that examines whether this changes throughout the 2020s, as open research practices become incentivized by a growing number of journals, will be valuable.

Data Availability

The data for this study are available at doi.org/10.17605/osf.io/wpvst.

Funding Statement

The author(s) received no specific funding for this work.

References

1. Lilienfeld SO, Strother AN. Psychological measurement and the replication crisis: Four sacred cows. Can Psychol. 2020;61:281–8.
2. Allen C, Mehler DMA. Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 2019;17(5):e3000246. doi: 10.1371/journal.pbio.3000246
3. Nosek BA, Beck ED, Campbell L, Flake JK, Hardwicke TE, Mellor DT, et al. Preregistration is hard, and worthwhile. Trends Cogn Sci. 2019;23(10):815–8. doi: 10.1016/j.tics.2019.07.009
4. Kathawalla U-K, Silverstein P, Syed M. Easing into open science: A guide for graduate students and their advisors. Collabra: Psychology. 2021;7(1). doi: 10.1525/collabra.18684
5. Rouder JN. The what, why, and how of born-open data. Behav Res Methods. 2016;48(3):1062–9. doi: 10.3758/s13428-015-0630-z
6. Simons DJ. The value of direct replication. Perspect Psychol Sci. 2014;9(1):76–80. doi: 10.1177/1745691613514755
7. Burke NL, Frank GKW, Hilbert A, Hildebrandt T, Klump KL, Thomas JJ, et al. Open science practices for eating disorders research. Int J Eat Disord. 2021;54(10):1719–29. doi: 10.1002/eat.23607
8. Hardwicke TE, Thibault RT, Kosie JE, Wallach JD, Kidwell MC, Ioannidis JPA. Estimating the prevalence of transparency and reproducibility-related research practices in Psychology (2014–2017). Perspect Psychol Sci. 2022;17(1):239–51. doi: 10.1177/1745691620979806
9. Hardwicke TE, Wallach JD, Kidwell MC, Bendixen T, Crüwell S, Ioannidis JPA. An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). R Soc Open Sci. 2020;7:190806.
10. Anixiadis F, Wertheim EH, Rodgers R, Caruana B. Effects of thin-ideal Instagram images: The roles of appearance comparisons, internalization of the thin ideal and critical media processing. Body Image. 2019;31:181–90. doi: 10.1016/j.bodyim.2019.10.005
11. Tiggemann M, Zaccardo M. “Exercise to be fit, not skinny”: The effect of fitspiration imagery on women’s body image. Body Image. 2015;15:61–7. doi: 10.1016/j.bodyim.2015.06.003
12. Simmons JP, Nelson LD, Simonsohn U. Pre-registration is a game changer. But, like random assignment, it is neither necessary nor sufficient for credible science. J Consum Psychol. 2021;31(1):177–80. doi: 10.1002/jcpy.1207
13. de Valle MK, Gallego-García M, Williamson P, Wade TD. Social media, body image, and the question of causation: Meta-analyses of experimental and longitudinal evidence. Body Image. 2021;39:276–92. doi: 10.1016/j.bodyim.2021.10.001
14. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9. doi: 10.1016/S0140-6736(09)60329-9
15. Sampson A, Jeremiah HG, Andiappan M, Newton JT. The effect of viewing idealised smile images versus nature images via social media on immediate facial satisfaction in young adults: A randomised controlled trial. J Orthod. 2020;47(1):55–64. doi: 10.1177/1465312519899664
16. Slater A, Cole N, Fardouly J. The effect of exposure to parodies of thin-ideal images on young women’s body image and mood. Body Image. 2019;29:82–9. doi: 10.1016/j.bodyim.2019.03.001
17. Brichacek A, Neill J, Murray K. The effect of basic psychological needs and exposure to idealised Facebook images on university students’ body satisfaction. Cyberpsychol. 2018;12:1–12.
18. Taniguchi E, Lee HE. Cross-cultural differences between Japanese and American female college students in the effects of witnessing fat talk on Facebook. J Intercult Commun Res. 2012;41(3):260–78. doi: 10.1080/17475759.2012.728769
19. The jamovi project. jamovi (Version 2.3) [Computer software]. 2022. Retrieved from https://www.jamovi.org
20. R Core Team. R: A language and environment for statistical computing (Version 4.1) [Computer software]. 2021. Retrieved from https://cran.r-project.org (R packages retrieved from MRAN snapshot 2022-01-01)
21. Yarkoni T. The generalizability crisis. Behav Brain Sci. 2020;45:e1. doi: 10.1017/S0140525X20001685
22. Michelson M, Reuter K. The significant cost of systematic reviews and meta-analyses: A call for greater involvement of machine learning to assess the promise of clinical trials. Contemp Clin Trials Commun. 2019;16:100443. doi: 10.1016/j.conctc.2019.100443
23. Sigurdson MK, Khoury MJ, Ioannidis JPA. Redundant meta-analyses are common in genetic epidemiology. J Clin Epidemiol. 2020;127:40–8. doi: 10.1016/j.jclinepi.2020.05.035
24. Tackett JL, Brandes CM, Reardon KW. Leveraging the Open Science Framework in clinical psychological assessment research. Psychol Assess. 2019;31(12):1386–94. doi: 10.1037/pas0000583
25. DuPre E, Hanke M, Poline J-B. Nature abhors a paywall: How open science can realize the potential of naturalistic stimuli. Neuroimage. 2020;216:116330. doi: 10.1016/j.neuroimage.2019.116330
26. Morant N, Chilman N, Lloyd-Evans B, Wackett J, Johnson S. Acceptability of using social media content in mental health research: A reflection. Comment on “Twitter users’ views on mental health crisis resolution team care compared with stakeholder interviews and focus groups: Qualitative analysis”. JMIR Ment Health. 2021;8(8):e32475. doi: 10.2196/32475
27. Lalancette M, Raynauld V. The power of political image: Justin Trudeau, Instagram, and celebrity politics. Am Behav Sci. 2017;63(7):888–924. doi: 10.1177/0002764217744838
28. Evangelou E, Trikalinos TA, Ioannidis JPA. Unavailability of online supplementary scientific information from articles published in major journals. FASEB J. 2005;19(14):1943–4. doi: 10.1096/fj.05-4784lsf
29. Hardwicke TE, Mathur MB, MacDonald K, Nilsonne G, Banks GC, Kidwell MC, et al. Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. R Soc Open Sci. 2018;5(8):180448. doi: 10.1098/rsos.180448
30. Kidwell MC, Lazarević LB, Baranski E, Hardwicke TE, Piechowski S, Falkenberg L-S, et al. Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biol. 2016;14(5):e1002456. doi: 10.1371/journal.pbio.1002456
31. Nieto I, Navas JF, Vázquez C. The quality of research on mental health related to the COVID-19 pandemic: A note of caution after a systematic review. Brain Behav Immun Health. 2020;7:100123. doi: 10.1016/j.bbih.2020.100123
32. Gabelica M, Bojčić R, Puljak L. Many researchers were not compliant with their published data sharing statement: A mixed-methods study. J Clin Epidemiol. 2022;150:33–41. doi: 10.1016/j.jclinepi.2022.05.019
33. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008. doi: 10.1136/bmj.j4008

Decision Letter 0

Tyler Horan

17 Feb 2025

PONE-D-24-48265: Examining the availability/findability of stimuli employed in social media and body image research. PLOS ONE.

Dear Dr. Smailes,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 30 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Tyler Horan

Academic Editor

PLOS ONE

Journal requirements:  

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Please match your authorship list in your manuscript file and in the system.

3. Your abstract cannot contain citations. Please only include citations in the body text of the manuscript, and ensure that they remain in ascending numerical order on first mention.

4. Thank you for stating the following in your Competing Interests section:  

[None]. 

Please complete your Competing Interests on the online submission form to state any Competing Interests. If you have no competing interests, please state "The authors have declared that no competing interests exist.", as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now

This information should be included in your cover letter; we will change the online submission form on your behalf.

Additional Editor Comments:

Based on the reviews submitted, I recommend that the author revise the manuscript to take into account the recommendations provided by both reviewers. I agree with their assessment and welcome a revised version that incorporates the feedback.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I appreciated this study, which was pretty direct and interesting. I have just a few thoughts.

I don’t think you need to refer to a “so called” replication crisis in the first line. I think that can communicate skepticism, which I don’t think was your intent. There’s not too much doubt remaining at this point that social science experienced a replication crisis.

I appreciated the discussion of the issue on copyright. I’m not a lawyer but I’m not sure that posting something publicly invalidates copyright. For instance a song or film publicly released doesn’t lose copyright protection. I do like the suggestion about providing links to material (with the caveats the authors mention about the decay of those links being well-taken).

I’d be curious regarding a further point of data…if the authors tried reaching out to article authors for their full material…what would be their reaching out success? That might be one little data point that would be interesting to add. Open materials may be a bigger/lesser issue depending on how good researchers are about responding to requests for materials.

Reviewer #2: This study examined the public availability of study stimuli used in social media and body image research from the de Valle et al. (2021) review. Results showed that few studies provided open access to the stimuli used. This is an important issue to raise. However, it would be more helpful if the paper sought to understand why these practices have not been adopted. For example, researchers could be surveyed to better understand the barriers to open sharing of their stimuli. The paper would be more influential if it helped overcome barriers to allow for more open science practices. Below, I suggest ways to improve the manuscript.

1) The authors imply that the lack of stimuli sharing is due to poor open science practices. My studies were included in the review, along with that of my colleagues. I have not shared the stimuli because my ethics committee have not allowed me to do so, despite my highlighting the public nature of the posts. They argue that I do not own or have permission to reproduce the images. This is a common occurrence across many institutions. The authors recommend contacting copyright librarians. However, to reduce research waste, it would be helpful for the authors to mention specific resources or policies that researchers could provide to their ethics committees to argue against privacy concerns and to advocate for the open sharing of public social media posts. This would help reduce barriers to open sharing of stimuli within the field.

2) I think it should also be acknowledged that similar effects are often found within the literature despite using different stimuli, as highlighted in the de Valle et al. (2021) review. Given the research aims to determine the effect of social media images on body image, I think it is important to ensure that the effects generalise to other images. I do not see this as research waste but rather as an important step to ensure the findings are not specific to one set of images.

3) The field moves quickly, and many studies have been published on this topic since 2021. I was surprised to see researchers’ time be mentioned as a reason for not doing a search for more recent literature. This does not seem to be a strong justification given that time since publication is examined as a moderator and it has been four years since the data were searched.

4) On Page 8, it states that studies in which data were available on request were not taken into account because it was assumed that those data would not be easily available. This assumption does not seem fair. If stimuli can be accessed by emailing the researchers, that would overcome issues related to reproducibility, replicability, and research waste.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes:  Christopher Ferguson

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 May 22;20(5):e0324514. doi: 10.1371/journal.pone.0324514.r003

Author response to Decision Letter 1


21 Mar 2025

Thank you for the helpful comments. Please find our response to the reviewers’ comments below.

Reviewer #1: Peer Review

Comment 01 = I don’t think you need to refer to a “so called” replication crisis in the first line. I think that can communicate skepticism, which I don’t think was your intent. There’s not too much doubt remaining at this point that social science experienced a replication crisis.

Response 01 = Thanks very much for this comment. We have deleted “so-called” so that the sentence (Line 41) now reads “Since 2012, and the start of the ‘replication crisis’ (Lilienfeld & Strother, 2020)…”.

Comment 02 = I appreciated the discussion of the issue on copyright. I’m not a lawyer but I’m not sure that posting something publicly invalidates copyright. For instance a song or film publicly released doesn’t lose copyright protection. I do like the suggestion about providing links to material (with the caveats the authors mention about the decay of those links being well-taken).

Response 02 = Apologies that we weren’t clearer in the manuscript. Copyright isn’t invalidated when images are publicly posted on platforms such as Instagram. But, under a ‘fair use’ doctrine, images from social media can be re-used by others in ways that don’t infringe the author’s copyright (and the ways we describe re-using images from social media would [we are confident] be classed as ‘fair use’). We have included specific reference to the ‘fair use’ doctrine in the revised manuscript (Lines 287-289), and have included links to two websites - help.instagram.com/116455299019699?helpref=faq_content and https://copyright.gov/fair-use/ - that provide more detailed information. We hope that these additions provide extra context/information that is helpful.

Comment 03 = I’d be curious regarding a further point of data…if the authors tried reaching out to article authors for their full material…what would be their reaching out success? That might be one little data point that would be interesting to add. Open materials may be a bigger/lesser issue depending on how good researchers are about responding to requests for materials.

Response 03 = We also think that this would be a very interesting research question but we think we should raise two points. First, sharing materials via email correspondence, versus ‘open sharing’ via a public repository, is less effective (even with the best of intentions) as authors’ email addresses can change over time, and when this happens, tracking down an author can either be time-consuming, difficult, or impossible. For example, Gabelica et al. (2022) reported that around 5% of the email addresses they contacted no longer worked, when they requested data from recently published papers (articles were all published in January 2019; Gabelica et al. was accepted for publication in May 2022). Presumably this problem will become worse over time. Second, we feel that this question is a little bit beyond the scope of this specific manuscript. As we note again below, we think that a larger-scale, follow-up study that examined the availability/findability of stimuli employed in social media and body image research (a) from 2021 to end-of-2025, (b) by using the method employed here, and (c) by also testing how effective requests for copies of materials were (as suggested by Reviewer #1 here, and by Reviewer #2 below) would be extremely useful.

Reviewer #2: Peer Review

Comment 04 = However, it would be more helpful if the paper sought to understand why these practices have not been adopted. For example, researchers could be surveyed to better understand the barriers to open sharing of their stimuli. The paper would be more influential if it helped overcome barriers to allow for more open science practices. Below, I suggest ways to improve the manuscript.

Response 04a = We agree that examining why materials are rarely shared would be both interesting and useful. However, we think that a project formally examining this (e.g., via a survey, as suggested) is beyond the scope of this manuscript (ideally this is something we would be keen to examine in a larger-scale, follow-up study, which we note elsewhere).

Response 04b = We also agree that manuscripts which identify ways to address problems are more valuable than manuscripts that solely identify the existence of a problem. We intended that our manuscript would offer some solutions to what we assumed was a key barrier – concerns about copyright infringement – by providing some advice that might reduce researchers’ concerns about copyright infringement. However, we accept that the previous version of the manuscript didn’t provide sufficient information/context about this issue, and so we have added links (Lines 287-289) to two websites - help.instagram.com/116455299019699?helpref=faq_content and https://copyright.gov/fair-use/ - that provide more detailed information. We hope that these additions provide extra context/information that is helpful.

Comment 05 = The authors imply that the lack of stimuli sharing is due to poor open science practices. My studies were included in the review, along with that of my colleagues. I have not shared the stimuli because my ethics committee have not allowed me to do so, despite my highlighting the public nature of the posts. They argue that I do not own or have permission to reproduce the images. This is a common occurrence across many institutions. The authors recommend contacting copyright librarians. However, to reduce research waste, it would be helpful for the authors to mention specific resources or policies that researchers could provide to their ethics committees to argue against privacy concerns and to advocate for the open sharing of public social media posts. This would help reduce barriers to open sharing of stimuli within the field.

Response 05a = Apologies, we did not intend to imply that failing to share materials openly was poor practice, or the fault of specific researchers (we have added the word ‘systemic’ in Line 261 to emphasize that we think this is a ‘system-problem’ rather than the fault of individual researchers). As we noted in the original manuscript, we expect that this happens for a variety of understandable reasons such as a lack of time/resource, the absence of appropriate platforms through which stimuli can be shared, and concerns about copyright/privacy.

Response 05b = We agree that the manuscript would have been stronger if we had provided links to resources that offer clear advice about the re-use of images from social media. Thus, in the revised manuscript, we have included specific reference to the ‘fair use’ doctrine (Lines 287-289), and have included links to two websites - help.instagram.com/116455299019699?helpref=faq_content and https://copyright.gov/fair-use/ - that provide more detailed information about this. We hope that these additions provide extra context/information that is helpful in, for example, discussions with ethics committees.

Comment 06 = I think it should also be acknowledged that similar effects are often found within the literature despite using different stimuli, as highlighted in the de Valle et al. (2021) review. Given the research aims to determine the effect of social media images on body image, I think it is important to ensure that the effects generalise to other images. I do not see this as research waste but rather as an important step to ensure the findings are not specific to one set of images.

Response 06 = This is a very valid point; thank you for raising it. We have noted in the revised manuscript that de Valle et al.’s meta-analysis found consistent effects across different stimuli. However, open sharing of materials would make this kind of analysis more straightforward for researchers to conduct (Lines 246-250).

Comment 07 = The field moves quickly, and many studies have been published on this topic since 2021. I was surprised to see researchers’ time mentioned as a reason for not searching the more recent literature. This does not seem to be a strong justification, given that time since publication is examined as a moderator and it has been four years since the data were searched.

Response 07 = We also think that examining whether open sharing of materials has become more common post-2021 would be a very interesting research question. Apologies for not being clearer in the manuscript; it is not that we think assessing practices post-2021 is not worth our time. Rather, our argument is that the corpus of papers identified by de Valle et al. (2021) enabled us to run the present project at little cost, and that this was very valuable, in part because the results we reported should allow us to obtain more resource to run the kind of project Reviewer #2 suggests in their comments. That is, as we note again below, we think that a larger-scale, follow-up study that examined the availability/findability of stimuli employed in social media and body image research (a) from 2021 to end-of-2025, (b) by using the method employed here, and (c) by also testing how effective requests for copies of materials were would be extremely useful.

Comment 08 = On Page 8, it states that studies in which data were available on request were not taken into account because it was assumed that those data would not be easily available. This assumption does not seem fair. If stimuli can be accessed by emailing the researchers, that would overcome issues related to reproducibility, replicability, and research waste.

Response 08 = We also think that this would be a very interesting research question, and the reviewer is clearly correct that this would, to some extent, reduce the impact of the problem we identified in the manuscript. However, we feel that this is beyond the scope of this specific manuscript. As we noted above, we think that a larger-scale, follow-up study that examined the availability/findability of stimuli employed in social media and body image research (a) from 2021 to end-of-2025, (b) by using the method employed here, and (c) by also testing how effective requests for copies of materials were (as suggested by Reviewer #2 here, and by Reviewer #1 above) would be extremely useful. We’d very happily discuss the possibility of developing that sort of study with Reviewer #2 at some point in the future, if they were keen to collaborate on that kind of project.

We have noted in the Discussion section (Lines 343-351) that this is a limitation of the present study, that our assumption that contacting authors would likely have been unproductive was based on findings such as Gabelica et al.’s (2022) report that 85.83% of authors contacted about sharing their data did not respond, and that future research should examine this issue in relation to social media and body image research.

Novel References

Gabelica, M., Bojčić, R., & Puljak, L. (2022). Many researchers were not compliant with their published data sharing statement: a mixed-methods study. Journal of Clinical Epidemiology, 150, 33-41.

Attachment

Submitted filename: rebuttal_letter_resubmission02_v2.docx

pone.0324514.s002.docx (20KB, docx)

Decision Letter 1

Tyler Horan

28 Apr 2025

Examining the availability/findability of stimuli employed in social media and body image research

PONE-D-24-48265R1

Dear Dr. Smailes,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Tyler Horan

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: I appreciate the resources provided by the authors regarding copyright laws. These will help researchers argue for the sharing of public posts in publications. The rest of my suggestions were deemed outside of the scope of the paper. I believe the study proposed as a next step would be a more helpful addition to the literature than the current paper. I have no further suggestions.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Christopher Ferguson

Reviewer #2: No

**********

Acceptance letter

Tyler Horan

PONE-D-24-48265R1

PLOS ONE

Dear Dr. Smailes,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Tyler Horan

Academic Editor

PLOS ONE

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Attachment

Submitted filename: rebuttal_letter_resubmission01_v3.docx

pone.0324514.s001.docx (28.1KB, docx)

Attachment

Submitted filename: rebuttal_letter_resubmission02_v2.docx

pone.0324514.s002.docx (20KB, docx)

Data Availability Statement

The data for this study are available at doi.org/10.17605/osf.io/wpvst.

