PLoS One. 2019 Nov 19;14(11):e0224697. doi: 10.1371/journal.pone.0224697

Video abstracts and plain language summaries are more effective than graphical abstracts and published abstracts

Kate Bredbenner 1,*, Sanford M Simon 1
Editor: David Orrego-Carmona
PMCID: PMC6863540  PMID: 31743342

Abstract

Background

Journals are trying to make their papers more accessible by creating a variety of research summaries, including graphical abstracts, video abstracts, and plain language summaries. It is unknown whether individuals with science, science-related, or non-science careers prefer different summaries, which approach is most effective, or even what criteria should be used for judging effectiveness. A survey was created to address this gap in our knowledge. Two papers from Nature journals on similar research topics were chosen, and different kinds of research summaries were created for each one. To determine the relative merits of each summary type, the survey measured comprehension of the research and asked participants to self-evaluate their enjoyment of the summary, their perceived understanding after viewing it, and their desire for more updates of that type.

Results

Participants (n = 538) were randomly assigned to one of the summary types. The responses of adults with science, science-related, and non-science careers differed slightly but showed similar trends. All groups performed well on a post-summary test, but participants reported higher perceived understanding when presented with a video or plain language summary (p<0.0025). All groups enjoyed video abstracts the most, followed by plain language summaries, and then graphical abstracts and published abstracts. The reported preference for different summary types was generally not correlated with comprehension of the summaries. Here we show that published abstracts and graphical abstracts are not as successful as video abstracts and plain language summaries at producing comprehension, a feeling of understanding, and enjoyment. Our results indicate the value of relaxing abstract word limits to allow for more plain language, or of including a plain language summary section alongside the abstract.

Introduction

Every scientific paper is a story, but it can be a challenge to access those stories. Many papers are hidden behind subscription fees that make access prohibitive. But even if the reader gets behind the paywall, scientific stories are often written in a dense and jargon-laden fashion. This style may not limit experts in the field, but for those outside it, it can ensure that the story is never heard. This has led to the recent incorporation of different kinds of summaries with the goal of making science more accessible.

In a recent 3M survey of 14,025 people, 88% thought that scientists should share their results in easy-to-understand language [1]. Many journals have recognized this need and create a variety of summaries, including videos, graphics, and plain language summaries, in addition to the abstracts that come with every scientific paper. While all of these summaries tell the same story, they tell it using different media styles.

eLife is committed to plain language summaries and has been creating them since the journal was founded in 2012 [2]. Other journals have followed eLife’s lead in incorporating plain language summaries including the Public Library of Science (PLOS) journal family, Proceedings of the National Academy of Sciences, USA (PNAS), Cell, and Science [3]. All Cochrane systematic reviews also require plain language summaries [4]. Despite these journals all writing plain language summaries, there is no recognized standard summary format. Plain language summaries are usually called different names and have different word counts depending on the journal. The one thing that remains constant is a lack of jargon.

Plain language summaries aren’t the only summary type that has been introduced to reach a wider audience. Videos have become very popular in the last few years. Video abstracts are generally three to five minutes in length and cover the major findings of the research paper they are about. The first video abstract was produced by Cell in 2009 [5], and since then Cell has been a major contributor of video summaries. Other video contributors include the Wiley publishing group and ACS Publications.

Cell has also been a leader for graphical abstracts. Cell defines the graphical abstract as “one single-panel image that is designed to give readers an immediate understanding of the take-home message of the paper.” They also say that “its intent is to encourage browsing, promote interdisciplinary scholarship, and help readers quickly identify which papers are most relevant to their research interests [6].”

While having different ways to summarize published research could increase accessibility, each of these summaries takes time to make. Video abstracts can take over 20 hours to complete, and graphical abstracts aren't far behind. They also require specialized equipment and skills to be effective [7,8]. While plain language summaries might seem the easiest to produce, even eLife found that they were publishing too many papers for each one to have its own plain language summary, and in 2016 they scaled back the number of summaries they published [9].

Summaries are necessary for sharing scientific findings quickly with peers and the public. Unfortunately, only a small portion of journals create even one kind of summary for their papers. To encourage more journals to summarize the research they publish, it would help to know what the most effective summaries are for different audiences.

Previous research on blog posts that combined a written article with videos or graphics concluded that scientists had better recall and enjoyed the post more when a video was combined with the text, whereas non-scientists had better recall and enjoyment when an image was included [10]. Research on support for the James Webb Space Telescope found that participants were more supportive of telescope construction when they viewed interactive media, including a video and a simulation, rather than traditional text [11]. We also know that people tend to recognize science images better than they can answer science questions [12].

Taken together, these data suggest that videos and graphics may matter most for the enjoyment and comprehension of science summaries, but we are still missing data that directly compare science summaries as journals currently create them. A 2016 eLife survey found a scientist to non-scientist readership ratio of 6:1 for their plain language summaries, and over 90% of both scientists and non-scientists said that most or all of the summaries they had read were informative [13]. However, we do not know the relative efficacy of reaching people with different kinds of summaries. We also do not know whether adults with science, science-related, and non-science careers all enjoy and comprehend the same kinds of summaries.

To evaluate the effectiveness of different summary types for people with different careers, we created a survey that presents participants with a video abstract, graphical abstract, plain language summary, or published abstract from two papers in the same subject area (S6 File). The survey looked at comprehension, perceived understanding, enjoyment, and whether the participants wanted to see more summaries of that type. The combination of these four measurements was used to determine which summary method is most effective. We also compared across career types and reported learning preferences to see what role they play. Finally, we offer suggestions to researchers and journals regarding what to do about summaries in the future.

Methods

Science summary design

This work was granted exempt status by the Rockefeller University IRB (ref #342107). We chose two recently published papers as the subject of study. Cohn et al. was published in Nature Medicine in April 2018 and outlines a method for recovering latent cells from HIV-1+ patient blood in order to study the latent cells for a possible future cure. It also sequences these latent cells and shows that they are often clonal [14]. Takata et al. was published in Nature in September 2017 and shows that HIV-1 has selectively removed CG dinucleotides from its genome to more closely mimic the nucleotide content of its human host. Specifically, HIV-1 has removed CG dinucleotides to avoid the host protein ZAP, which recognizes and destroys RNA in the cell that has these dinucleotides [15]. Both papers are within the HIV-1 field, and both were published at similar times in similar journals. Both papers were also first-authored by graduate students at Rockefeller University who were in the same year of graduate school.

The abstracts of both papers were placed into the survey as published. From there, a plain language summary was written for each paper (S5 File). The main takeaways highlighted in the published abstracts were also the main focus of the plain language summaries. Every effort was made to eliminate jargon and to provide real-world context for each of the findings. Both plain language summaries were of similar length (422 words for Cohn et al.; 433 words for Takata et al.). The plain language summaries followed the guidelines put in place by eLife, including the list of questions that eLife asks scientists to make the summaries easier for the editors to write [16]. The abstracts and plain language summaries were put into a readability calculator to obtain the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL), similar to previously published work on plain language summaries [17]. The scores can be seen in Table 1.

Table 1. Readability of published abstracts and plain language summaries for Cohn et al. and Takata et al.

            Cohn et al.                  Takata et al.
            Abstract   Plain Language    Abstract   Plain Language
FRES (a)    24.2       63                15.2       58
FKGL (b)    16.2       9.5               16.1       10

All scores were obtained from www.readability-score.com.

(a) Flesch Reading Ease Score

(b) Flesch-Kincaid Grade Level
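For readers who want to reproduce this readability check, here is a minimal sketch in Python. It assumes the open-source textstat package, which implements the standard Flesch formulas; since the authors used www.readability-score.com, the exact values may differ slightly, and the input file name is hypothetical.

import textstat

# Read one of the summaries (hypothetical file name).
with open("cohn_plain_language_summary.txt") as f:
    summary_text = f.read()

# Flesch Reading Ease Score: higher values mean easier reading.
fres = textstat.flesch_reading_ease(summary_text)

# Flesch-Kincaid Grade Level: approximate US school grade needed to read the text.
fkgl = textstat.flesch_kincaid_grade(summary_text)

print(f"FRES: {fres:.1f}, FKGL: {fkgl:.1f}")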

The plain language summaries were used as the spoken script for the video abstracts (S2 and S3 Files). The videos of each paper were illustrated using similar visual motifs, and the videos were of similar lengths (2:33 for Cohn et al.; 2:49 for Takata et al.). Both videos followed the “whiteboard explainer” style, where images are drawn on a screen and the drawing is either freeze-framed or sped up to match the narration. Both videos were uploaded to YouTube and were made unlisted so only survey participants could see them. The YouTube-generated closed captions were edited to reflect the actual script, and closed captions were set to appear automatically anytime the videos were played. Participants could turn off the closed captions if they wanted.

The videos followed all of the qualifications set by Cell for their video abstracts [18]. All technical specifications were met or exceeded, and the videos were within the requested length. The first author has made several of these Cell video abstracts and is familiar with the qualifications necessary. Audio quality was also carefully controlled as it plays a role in how favorable participants find the research [8].

The graphical abstracts were created using Keynote software and they followed the same visual motifs that were in the videos (S4 File). The graphical abstracts were placed into a color-blindness simulator to be sure that all possible participants could see the image equally well [19]. The graphical abstracts were also created based on the guidelines set up by Cell for their graphical abstracts [6]. All technical specifications were met or exceeded where possible.

All summaries were created with the intent that the videos, graphics, and plain language summaries should be content-identical.

Survey design

The survey was created in the Google Forms platform. Eight surveys were created that were identical except for the type of summary shown and the order of the papers. Two surveys showed video abstracts, two showed graphical abstracts, two showed plain language summaries, and two showed published abstracts. Each pair of surveys showed both the Cohn et al. summary and the Takata et al. summary, but one version showed the Cohn et al. summary first and one showed the Takata et al. summary first (Fig 1). The paper order was switched to verify that responses did not depend on which paper's summary was seen first. Participants were randomly assigned to one of the eight surveys via a random URL generator embedded in the button on the survey website, as sketched below. Each participant completed only one survey, meaning that they saw only one type of science summary, but they saw that type of summary for both papers (Fig 1).
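Here is a minimal sketch of that assignment step, assuming Python; the URLs are placeholders, and the actual generator was embedded in the website's button rather than run as a script like this.

import random

# Placeholder URLs for the eight survey versions (hypothetical; the real
# surveys were Google Forms linked from the study website).
SURVEY_URLS = [
    "https://forms.gle/video_cohn_first",
    "https://forms.gle/video_takata_first",
    "https://forms.gle/graphic_cohn_first",
    "https://forms.gle/graphic_takata_first",
    "https://forms.gle/plain_cohn_first",
    "https://forms.gle/plain_takata_first",
    "https://forms.gle/abstract_cohn_first",
    "https://forms.gle/abstract_takata_first",
]

def assign_participant() -> str:
    """Return a uniformly random survey URL for one participant."""
    return random.choice(SURVEY_URLS)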

Fig 1. Flowchart of survey assignment and pooling.


A flowchart representing the eight survey versions created for this research. Once participants click the “Participate in Survey” button, they are randomly assigned to one of the eight possible surveys: two versions each of video abstracts, plain language summaries, graphical abstracts, and published abstracts, where one version shows the Cohn et al. summary first and one shows the Takata et al. summary first. All surveys ask background questions prior to showing a summary. The asterisks denote surveys that contained an error in which only participants with science careers were shown both the Takata et al. and Cohn et al. summaries; all other participants were shown only the Cohn et al. summary. The error was corrected shortly after the survey was publicized. Data from both versions of each type of summary were pooled, as denoted by brackets and the phrase “Data Pooled”.

Four of the surveys contained a collection error that was corrected partway through data collection (Fig 1). If participants were funneled to one of the surveys with the error, only those who marked that they had science careers were shown both the Cohn et al. and Takata et al. summaries; all other participants saw only the Cohn et al. summary. This error led to fewer non-science and science-related participants for the Takata et al. paper, but it was corrected in time to still obtain usable data.

Before presenting the created summaries, all surveys asked participants to report their career type (science, science-related, non-science, or undergraduate), input their gender if they so desired (a fill-in-the-blank that was not required), and report their preference for receiving science updates (written summaries, video, audio, reading the original research paper, or graphics) (S6 File).

Participants that reported they had science careers were asked an additional series of questions about how they prefer to receive research updates in their field versus outside of their field. This list of options included newspaper articles, social media, recommendations from friends and colleagues, scientific journals, and PubMed/other alerts (S6 File).

After the background information was collected, participants were shown one of the summaries and asked follow-up questions about it. There was a 6-question quiz associated with each of the papers to determine comprehension (Table 2). The quiz consisted of one multiple choice question and five true/false questions. These questions were designed to be answerable regardless of which summary type the participant had seen. Other follow-up questions were asked to determine how much the participants enjoyed and understood the research and how much they wanted to see more summaries of the type they were presented (Table 2). The full survey is available as S6 File.

Table 2. Follow-up questions for Cohn et al. and Takata et al.

Comprehension – Multiple Choice (both papers)
  This research focuses on:
    (a) HIV (correct)
    (b) FIV
    (c) Influenza
    (d) I don't know

Comprehension – T/F, Cohn et al.
  This research created a capture technique to collect all T-cells from patients. (False)
  The capture technique is a type of cure for the virus discussed in the summary. (False)
  Latent cells captured from patient blood are mostly from a single latent cell that divided. (True)
  Captured latent cells have higher expression of genes that increase virus activation. (False)
  Latent cells are a consequence of the lifecycle of the virus mentioned. (True)

Comprehension – T/F, Takata et al.
  Vertebrates have evolved less AG nucleotide pairs. (False)
  The virus mentioned has evolved to lack CG pairs to avoid cell anti-viral defenses. (True)
  ZAP interacts with the DNA of the virus mentioned in the summary. (False)
  All possible DNA nucleotide pairs show up at the same rate as each other in vertebrates (e.g., AT is present at the same frequency as GT or CG or GC). (False)
  ZAP is a protein that is made by the infected host cell. (True)

Enjoyment (both papers)
  I enjoyed reading (a) this abstract (b):
    (0) Not at all, (1) A bit, (2) Average, (3) Mostly, (4) Very Much

Understanding (both papers)
  I understand this research more after reading (a) this abstract (b):
    (0) Not at all, (1) A bit, (2) Average, (3) Mostly, (4) Very Much

Desire for Updates (both papers)
  I want to get more science updates via written abstract (b) after reading (a) this:
    (0) Not at all, (1) A bit, (2) Average, (3) Mostly, (4) Very Much

Follow-up questions for comprehension have the correct answer noted in parentheses. For enjoyment, understanding, and desire for updates, participants were presented only with the phrases “Not at all”, “A bit”, “Average”, “Mostly”, and “Very Much”; the numbers in parentheses were added for analysis and presentation of data.

(a) The word ‘reading’ was removed or changed to ‘viewing’ for surveys with video or graphical abstracts.

(b) The word ‘abstract’ was changed to ‘video summary’ or ‘summary’ for surveys with video abstracts, ‘summary’ for surveys with plain language summaries, and ‘graphical summary’ or ‘summary’ for surveys with graphical abstracts.
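As an illustration of how comprehension could be quantified, here is a minimal Python sketch that scores the Cohn et al. quiz; the answer key follows Table 2, while the question field names are hypothetical.

# Answer key for the Cohn et al. quiz from Table 2 (field names hypothetical).
COHN_ANSWER_KEY = {
    "research_focus": "HIV",               # multiple choice
    "collects_all_t_cells": False,         # true/false items below
    "technique_is_cure": False,
    "latent_cells_clonal": True,
    "higher_activation_gene_expression": False,
    "latent_cells_from_lifecycle": True,
}

def comprehension_score(responses: dict) -> int:
    """One point per answer that matches the key, for a score from 0 to 6."""
    return sum(responses.get(q) == a for q, a in COHN_ANSWER_KEY.items())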

Survey recruitment

Recruitment was done using the snowball method used in studies similar to this research [10]. Participants were recruited online via the first author’s social media pages using appropriate hashtags. Emails were also sent to a number of science groups including the National Alliance for Broader Impacts (NABI), The Falling Walls organization, the BioBus, all attendees of the 2019 SciOut conference, all attendees of the Science Alliance Leadership Training (SALT) up to 2018, and all members of the Rockefeller University Community.

Survey analysis

Results were downloaded from Google Forms and put into Google Sheets for analysis. Results from surveys that showed the Takata et al. summary first and results from surveys that showed the Cohn et al. summary first were compared via the Mann-Whitney U test to see whether the populations differed based on which summary was presented first. For all summary types, it did not matter which summary was shown first, so Takata et al. data from both versions were pooled and Cohn et al. data from both versions were pooled (Fig 1).
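A minimal sketch of this order-effect check, assuming Python with scipy and placeholder score lists:

from scipy.stats import mannwhitneyu

# Per-participant scores for one summary type, split by paper order
# (placeholder data, not the study's results).
cohn_first_scores = [5, 6, 4, 5, 6, 3]
takata_first_scores = [6, 5, 5, 4, 6, 4]

stat, p = mannwhitneyu(cohn_first_scores, takata_first_scores,
                       alternative="two-sided")

# If no order effect is detected, the two versions can be pooled.
if p >= 0.05:
    pooled_scores = cohn_first_scores + takata_first_scores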

The results were then compared between Cohn et al. and Takata et al. to see whether the two papers yielded different results. The published abstracts of the two papers showed statistically significant differences in comprehension scores and in the desire for more updates. Since the published abstracts differed significantly in these categories, the two papers were kept separate for analysis.

Participants who marked that they were undergraduates were pooled with participants who marked that they had non-science careers since undergraduates were in the minority of participants and they often have the same schooling as adults without science careers. There were no significant differences in the results between these two populations, so pooling seemed appropriate.

The results from the separate careers (science, science-related, and non-science) in the two papers were compared to see if there were significant differences. There were statistically significant differences between careers in every scoring category in each of the two papers, so the careers were kept separate for analysis.

All statistical significance between populations was calculated with the Mann-Whitney U test [20]. All correlation values were calculated with Pearson's r correlation coefficient.
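Both calculations are available in scipy; a minimal sketch with placeholder data:

from scipy.stats import pearsonr

# Paired per-participant values (placeholder data): reported preference for a
# summary type (0-4) and the corresponding score, e.g. desire for updates (0-4).
video_preference = [4, 2, 3, 0, 4, 1, 3, 2]
update_scores = [4, 3, 3, 1, 4, 2, 3, 2]

r, p = pearsonr(video_preference, update_scores)
print(f"Pearson's r = {r:.2f}, p = {p:.4f}")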

Results

Survey participants and preferences

Participation in the survey was fairly even across careers. The Cohn et al. data set contained 201 science, 156 science-related, and 181 non-science participants (Table 3). The Takata et al. data set contained fewer science-related (n = 112) and non-science (n = 133) participants due to a Google Forms error that was corrected shortly after the survey was first publicized (Fig 1 and Table 3). Of the 538 total participants, 505 reported having a binary gender. The female/male split of those 505 participants was fairly even, with the science and non-science participants having approximately a 60:40 split and the science-related participants having a 70:30 split. The 70:30 female/male split in science-related participants is representative of the number of women in outreach and other science-related careers as compared to men in those same careers [21].

Table 3. Participant numbers for Cohn et al. and Takata et al.

Participant numbers for Cohn et al.
Career            Video   Graphic   Plain Language   Abstract   Total
Science            42      49        49               61         201
Non-Science        44      47        47               43         181
Science-Related    26      42        39               49         156
Total             112     138       135              153         538

Participant numbers for Takata et al.
Career            Video   Graphic   Plain Language   Abstract   Total
Science            42      49        49               61         201
Non-Science        35      31        38               29         133
Science-Related    18      29        30               35         112
Total              95     109       117              125         446

Numbers of participants separated by paper, by career, and by summary type.

The number of participants who viewed each type of summary was also fairly even. Only the science-related video group lagged in participant numbers (n = 26 for Cohn et al., n = 18 for Takata et al.) compared to the other summary and career types (n>39 for Cohn et al., n>29 for Takata et al.).

Before showing a science summary, our survey asked participants to report their preference for getting scientific information via written summaries, graphics/infographics, videos, audio sources like podcasts, and reading the original research paper. We gathered this information to see how much prior preferences affected the comprehension, enjoyment, understanding, and desire for updates for the different summary types. Participants from all careers had the same hierarchy of reported learning preferences, with the exception of research papers (Fig 2 and S1 File). Written summaries were by far the most preferred learning type, followed by graphics/infographics, videos, and then audio/podcasts (Fig 2). For research papers, non-science participants preferred them the least, science-related participants preferred them second only to written summaries, and science participants preferred them the most (S1 File).

Fig 2. Participant reported preferences.


Reported learning preferences for all participants and update preferences of participants with science careers. (A) shows data for all participants who answered the Cohn et al. and Takata et al. papers. The bar charts show the reported preference of the participants for different ways to hear about science. (B) shows the update preferences of science participants both in their field of study and outside of it. The graph on the left shows preferences for research inside the scientist's field of study, and the graph on the right shows preferences for research outside the field of study.

We also wanted to know how scientists like to receive updates inside versus outside their field so we could learn how that might affect their view of the different science summaries. We gathered this information by asking participants with science careers how much they preferred getting research updates via scientific journals, newspaper articles, social media, recommendations from colleagues, or PubMed alerts. Scientific journals, recommendations from friends and colleagues, and PubMed/other alerts were the most preferred update mechanisms inside the scientists' field of study (Fig 2B). Outside their field, recommendations from friends and colleagues were the most preferred, followed by social media, then newspaper articles and scientific journals (Fig 2B).

Video and plain language summaries are the most effective regardless of career

Given the clear reported preference for written forms of communication (Fig 2A), we expected that the plain language summaries, and perhaps the published abstracts, would be the most effective summaries when tested. Contrary to our expectations, videos had the highest scores for comprehension, understanding, enjoyment, and desired updates (median (M) > 4 of 6, M > 3 of 4, M > 3 of 4, and M > 2 of 4 for videos, respectively) (Fig 3). Plain language summaries often had scores equal to videos (M > 4 of 6, M > 2 of 4, M > 2 of 4, and M > 2 of 4, respectively), but videos were as effective or more effective than plain language summaries in all cases except comprehension by science-related participants for the Cohn et al. paper, where plain language summaries had a higher median score (M = 4 of 6 for video, M = 5 of 6 for plain language) (Fig 3).

Fig 3. All data from all summaries.


Histograms of the comprehension, understanding, enjoyment, and desire-for-more-updates data for all survey types and all career types. (A) shows data for the Cohn et al. paper participants. (B) shows data for the Takata et al. participants. Each histogram shows the data as a percentage of participants. Comprehension histograms are plotted from 1–6; understanding, enjoyment, and desire-for-updates histograms are plotted from 0–4. Comprehension scores are from a series of questions asked in the survey (Table 2, S6 File). Understanding, enjoyment, and desire-for-updates scores are numerical representations of responses where 0 was “not at all” and 4 was “very much” (Table 2, S6 File). Statistical significance is shown above each plot where p<0.01 using the Mann-Whitney U test. Specifically, the asterisks represent the following p-values: p<0.00001 (****), p<0.0001 (***), p<0.001 (**), p<0.01 (*).

The differences in comprehension were generally small across summary types and careers. These small differences indicated that people are able to get the main takeaways of the paper no matter what type of summary they are shown. When statistically significant differences did occur, they indicated that video and plain language summaries produced higher comprehension scores (Fig 3).

Video and plain language summaries had higher reported understanding scores than published abstracts and graphical abstracts (Fig 3). This was true of participants from all careers and was true for both papers tested. In some cases, video even outperformed plain language summaries (Fig 3), which is surprising given that the majority of participants ranked written summaries as their highest preference for getting new scientific information (Fig 2A).

Videos and plain language summaries had the highest enjoyment scores, but there were some differences between careers. Participants with science careers enjoyed videos the most (M = 4 of 4 for Cohn et al. and Takata et al., p<0.00001), followed by plain language summaries (M = 2 of 4 for Cohn et al., M = 3 of 4 for Takata et al., p<0.00001). Participants with non-science careers also enjoyed videos the most (M = 3 of 4 for Cohn et al. and Takata et al., p<0.00001). Participants with science-related careers liked videos and plain language summaries equally (M = 3 of 4 for both, for Cohn et al. and Takata et al.). Published abstracts and graphical abstracts were enjoyed the least by all careers (all p<0.0027), with the exception of non-science participants, who enjoyed abstracts the least (M = 1 of 4 for Cohn et al., M = 0 of 4 for Takata et al.) but enjoyed graphical abstracts and plain language summaries equally (M = 2 of 4 for graphic and plain language for Cohn et al.; M = 1 of 4 for graphic, M = 2 of 4 for plain language for Takata et al.) (Fig 3).

When asked if they wanted to get more updates in the form of the summary they saw, participants rated videos or plain language summaries the highest, independent of career (all p<0.00148 for video, all p<0.01 for plain language summaries) (Fig 3). Published abstracts and graphical abstracts had the lowest update scores (all p<0.00148) with the exception of non-science participants who scored published abstracts the lowest (M = 0 for Cohn et al. and Takata et al.), but scored graphics and plain language summaries equally (M = 2 of 4 for Cohn et al.; M = 1 of 4 for graphic, M = 2 of 4 for plain language for Takata et al.) (Fig 3).

Overall, video abstracts and plain language summaries produced the highest comprehension, understanding, enjoyment, and desire for more updates. This led us to the conclusion that video abstracts and plain language summaries are the most effective summary formats regardless of career.

Strong correlations exist between reported learning preferences and summary ranks

Generally participants from all careers felt similarly about the summaries. All participants scored the video and plain language summaries the highest and the graphical abstracts and published abstracts the lowest in all categories. We thought that if we sorted the participants by their reported preferences rather than by their careers, we might see strong correlations between reported preference and comprehension, understanding, enjoyment, or desire for updates.

To test whether reported preference correlated with summary scores, we checked whether the comprehension, understanding, enjoyment, and update scores for each summary correlated with the participant's reported preference for updates of that type. For published abstracts, scores were compared with the reported preference for reading the original research paper; for videos, with the reported video preference; and for graphical abstracts, with the graphic/infographic preference. Plain language summaries could not be evaluated because almost all participants marked average or higher preference for written summaries before viewing any of our summaries (Fig 2A), leaving too little variation to detect a correlation, so plain language summaries were not analyzed. Videos, graphical abstracts, and published abstracts each had a wider distribution of reported preferences from lowest to highest, so they were analyzed (Fig 2A).

Although the comprehension, understanding, enjoyment, and update scores were similar between the Takata et al. and Cohn et al. papers when the data were separated by career, the preference correlations showed a clear difference between the two papers (Fig 4). In the Cohn et al. data set, comprehension score was not correlated with reported preference for any of the summary types (Fig 4A). This lack of correlation means that participants did not perform better on the comprehension test when they were paired with their preferred type of summary, whether it was a video, a graphic, or the original published abstract. The Takata et al. data showed similar results for the video and graphical summaries, but a significant correlation between preference for reading the original research paper and the published abstract comprehension score (r = 0.29, p = 0.0009) (Fig 4B). This correlation indicates that participants who marked reading the original research paper as their highest preference performed better on the comprehension test, while participants who ranked it lower performed worse. This Takata et al.-specific correlation could be due to the basic-biology nature of that paper and the background knowledge required to understand its findings.

Fig 4. Correlations between reported preference and summary values.


Bar graphs of preference correlation for the Cohn et al. and Takata et al. papers. Both graphs show data for videos, graphics, and published abstracts. Analysis was not completed for plain language summaries due to the overwhelming reported preference for written summaries (see Fig 2 for reported preference data). For each summary type, the reported preference for that type was tested for correlation with the comprehension score, reported understanding, reported enjoyment, or the desire for more updates of that type using a Pearson's r correlation calculation. (A) shows the data for Cohn et al. (B) shows the data for Takata et al. Statistical significance is noted where p<0.01. Specifically, the asterisks represent the following p-values: p<0.00001 (****), p<0.0001 (***), p<0.001 (**), p<0.01 (*).

Significant correlations were also seen for published abstracts in reported understanding, enjoyment, and the desire for more updates in both papers (all p<0.00001). This indicates that participants who preferred reading the original research paper also scored the abstracts higher in all categories, while those who did not scored the abstracts lower. The same correlations were not seen for videos. The only significant correlation for the video summaries was between the desire for more video updates and the reported video preference (r = 0.387, p = 0.000024 for Cohn et al.; r = 0.33, p = 0.0011 for Takata et al.) (Fig 4). This indicates that participants who reported a preference for videos wanted to keep seeing more videos even after viewing our video abstracts. The lack of correlation between reported video preference and understanding/enjoyment highlights how effective videos were overall: participants gave high understanding and enjoyment scores to the video abstracts regardless of whether they had reported preferring videos as a way to get new scientific information before seeing our video abstracts.

Reported understanding and comprehension show strong correlations for Takata et al. summaries

It might be expected that the better you perform on a quiz, the more confident you are that you understood the material covered in that quiz. When we examined the relationship between comprehension score and reported understanding, the Cohn et al. and Takata et al. papers diverged (Fig 5).

Fig 5. Heat maps of reported understanding versus comprehension score.


Heat maps of reported understanding versus comprehension score for Cohn et al. and Takata et al., separated by summary type. The larger heat maps show the summed data for all participants, and the three smaller heat maps to the right show the data for each career type. Each larger heat map contains the Pearson's r correlation value for all careers. Statistical significance is noted where p<0.05. Specifically, the asterisks represent the following p-values: p<0.00005 (****), p<0.0005 (***), p<0.005 (**), p<0.05 (*).

The Cohn et al. data showed correlations between comprehension scores and understanding scores for the video (r = 0.223, p = 0.018) and plain language summaries (r = 0.193, p = 0.025), but no correlation for the graphical abstracts or published abstracts (Fig 5). The correlation between understanding and comprehension scores for video and plain language summaries suggests that participants felt confident in their answers and their understanding of the Cohn et al. paper after watching the video or reading the plain language summary. It also suggests that participants did not feel as confident after reading the published abstract or viewing the graphical abstract.

Contrary to the Cohn et al. data, the Takata et al. data showed significant correlations in all summary types (all p<0.00018) (Fig 5). These correlations hint at the possibility that more background knowledge is needed to understand the findings of the Takata et al. paper as compared to the Cohn et al. paper.
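For readers who want to reproduce this style of analysis, here is a minimal sketch of one Fig 5-style heat map, assuming Python with numpy, scipy, and matplotlib; the data are placeholders, and the binning follows the survey's 0–6 comprehension and 0–4 understanding scales.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import pearsonr

# Paired per-participant values (placeholder data, not the study's results).
comprehension = np.array([6, 5, 4, 6, 3, 5, 2, 6])  # quiz scores, 0-6
understanding = np.array([4, 3, 2, 4, 2, 3, 1, 4])  # reported understanding, 0-4

r, p = pearsonr(comprehension, understanding)

# Bin the pairs so each cell counts participants with that score combination.
counts, _, _ = np.histogram2d(comprehension, understanding,
                              bins=[np.arange(8) - 0.5, np.arange(6) - 0.5])

plt.imshow(counts.T, origin="lower", aspect="auto",
           extent=[-0.5, 6.5, -0.5, 4.5])
plt.xlabel("Comprehension score (0-6)")
plt.ylabel("Reported understanding (0-4)")
plt.title(f"Pearson's r = {r:.2f} (p = {p:.3f})")
plt.colorbar(label="Number of participants")
plt.show()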

Discussion

We found that videos and plain language summaries are the most effective summaries, based on comprehension, understanding, and enjoyment. This finding was independent of an individual's career type. Surprisingly, our results also hint that there can be differences between papers based on the background knowledge required to understand the findings of that paper, even when the readability of the two papers' summaries is similar.

The Takata et al. paper shows that HIV-1 has selectively removed CG dinucleotides from its genome to more closely mimic the nucleotide content of its human host [15]. Understanding this finding requires a solid understanding of DNA and basic molecular biology concepts. In contrast, the Cohn et al. paper outlines a method for recovering latent cells from HIV-1+ patient blood in order to study them for a possible future cure [14]. This paper doesn’t require nearly the depth of molecular biology understanding as the Takata et al. paper, and that difference seems to show up in the data.

Individuals with a strong science background are more likely to report that they prefer to get information from scientific papers since they are more likely to still be working in academia and reading scientific papers on a regular basis. That preference for scientific papers then correlates strongly with the comprehension score, but only for the Takata et al. paper where background knowledge is necessary to understand the finding (r = 0.3, p = 0.0009) (Fig 4B). It would be illuminating to create a survey with a broader range of papers to see if this trend holds true.

In addition to looking at a broader range of papers, we would also like to see more research into how the quality of a summary relates to its effectiveness. The quality of the content of published plain language summaries, video abstracts, and graphical abstracts is known to vary despite the rigorous technical specifications and instructions outlined by the journals that require these summaries [6,16,18]. Video abstracts and graphical abstracts are produced by the researchers themselves, and not every research team has the skills necessary to create these summaries. Occasionally, videos are made by a company that the researchers hire, such as Animate Your Science. This adds to the variability in quality of video abstracts, since professional videographers generally have access to better film equipment and editing software than the average researcher making a video. Plain language summaries also differ from journal to journal, as some are written by editors and others by the authors themselves [3].

Previous research on plain language summaries suggests that multiple rounds of editing and collaborations with members of your intended audience can help make better summaries [22,23]. Perhaps this would also be true for videos and graphical abstracts, but more research on this area is needed.

Overall, our research only identifies which science summary is the most effective when a single researcher carefully creates content-identical summaries following the rules set out by the journals that are associated with those summaries. There is still much more work to be done in order to know exactly how we should summarize our science. One type of summary that was completely omitted in this study was podcasts. Cell, Science, and Nature all produce podcasts about some of the most relevant research that they publish. It is unclear how a podcast would perform relative to videos, graphics, and plain language summaries.

This research also only included graphical abstracts that closely mimic the currently published Cell graphical abstracts, following the best practices that Cell has laid out [6]. It’s possible that infographics could be more successful than the current graphical abstracts. Infographics function as a combination of word descriptions and images together. They often include data and small descriptions of the work, and they are meant to inform the viewer on a particular topic. The current graphical abstracts encourage the creators of the graphics to avoid using any sentences and to avoid adding any data to the image [6]. Participants of this survey often commented on the graphical abstracts saying that they wished they had words or even a small description to go with the image which suggests that infographics might be more helpful at conveying information.

Cell has stated that their graphical abstracts are meant as almost an advertising tool to encourage readers to browse papers and see which ones might be interesting to them whether they are in that scientist’s field of interest or outside of it [6]. Perhaps the graphical abstracts performed poorly in this research simply because they are not meant to provide the key takeaways of the research article in the same way that videos and plain language summaries are. Given the time involved in making these graphical abstracts, our results suggest that the time might be better invested in preparing an alternative, such as a video or a plain language summary, if the goal is to help interest readers in other articles.

We also recommend more research on video abstracts specifically, since they were the most effective in our analysis. There is a slight negative, though not statistically significant, correlation between video preference and reported understanding in both papers (r = -0.07 for Cohn et al. and Takata et al.), which indicates that although many people did not report a preference for videos, they understood the research much better after viewing them (Fig 4). The enjoyment of the videos was also not strongly correlated with video preference, even though people reported high enjoyment scores after watching the videos (Figs 3 and 4). A comments section was available at the end of the surveys; the comments on the videos were very positive, and many participants indicated that they were surprised at how helpful a video abstract could be.

Despite videos being the most effective, we recognize that there are limitations to their implementation due to the inherent cost and time involved. Not every researcher has the resources available to produce a video summary. Also, participants did not report that they preferred videos more than written summaries or graphics before viewing our summaries (Fig 2A), so recruiting a large number of people to watch a video might be challenging even though it has been shown here to be the most effective.

Based on this study, we suggest that all researchers consider writing a plain language summary of their research. Those summaries can be published with their paper or in other relevant locations, including lab websites, university websites, or university newsletters. To get started, it can be helpful to look at the eLife questions for researchers or the Cochrane methods for writing a plain language summary [16,24]. We also recommend editing the summaries at least once, ideally after getting feedback from a member of the intended audience, and possibly using a jargon detection program to make sure the summary is accessible [22,25]. Summaries can be put into a readability calculator to help make sure they are easy to read. However, we do not recommend depending solely on readability scores, because scientific summaries must sometimes use unfamiliar words or phrases for accuracy, and readability calculators only report how easy a written document is to read, not how easy it is to understand. Finally, a plain language summary can be a great way to organize a research paper, as it forces researchers to focus on the take-home message. Writing plain language summaries can therefore benefit both the researcher and the people the research is trying to communicate with.

If the findings of their research are of public relevance, researchers could consider investing the time and money into a video of their results. Videos had the highest ratings across the board and they left people feeling very confident and positive about the research being presented (Fig 3). Not all papers require a video, but it is an excellent option for select relevant papers that should not be overlooked.

We also recommend that journals consider including plain language summaries with all of their papers as a separate section available outside the paywall, if a paywall exists for that publication. We further recommend that journals with topics of high public relevance, or those that would benefit from strong interdisciplinary ties, consider creating videos of their papers to share with the community.

Supporting information

S1 File. Reported learning preferences by career for Cohn et al.

The bar charts show the reported preference of the participants for different ways to hear about science separated by career for the Cohn et al. data set.

(TIF)

S2 File. Cohn et al. video abstract.

A compressed version of the video created for the Cohn et al. paper. The video script was the same as the plain language summary. The video was created with Autodesk Sketchbook, GarageBand, and iMovie software and was hosted on YouTube. The video was embedded into the survey for participants to view. Closed captioning was edited and enabled by default, with the option for the user to turn it off. For the full video see: youtu.be/RLuunA81kJo.

(MP4)

S3 File. Takata et al. video abstract.

A compressed version of the video created for the Takata et al. paper. The video script was the same as the plain language summary. The video was created with Autodesk Sketchbook, GarageBand, and iMovie software and was hosted on YouTube. The video was embedded into the survey for participants to view. Closed captioning was edited and enabled by default, with the option for the user to turn it off. For the full video see: youtu.be/Kp-0PvS99fM.

(MP4)

S4 File. Graphical abstracts.

Graphical abstracts created for the Cohn et al. (A) and Takata et al. (B) papers. Graphical abstracts used similar visual motifs as the video abstracts and were created using Keynote software. Each abstract was put through a color blindness simulator to ensure that the abstracts could be seen properly by all viewers. The abstracts were embedded into the survey for participants to review.

(TIF)

S5 File. Plain language summaries.

Plain language summaries written for the Cohn et al. (A) and Takata et al. (B) papers. Summaries were written based on intensive review of the published papers. The summaries also hit each key point mentioned in the abstracts of each paper. The Cohn et al. summary contains 422 words (A) and the Takata et al. summary contains 433 words (B). Each summary was embedded into the survey for participants to review.

(TIF)

S6 File. Copy of published abstract survey.

A PDF copy of the survey presented to participants. This copy shows the published abstracts as the summary type. It has the Cohn et al. summary shown first and the Takata et al. shown second. Other surveys are identical to this one except that they show videos, plain language summaries, or graphical abstracts instead of the published abstracts. The videos can be seen in S2 and S3 Files. The graphical abstracts are in S4 File and the plain language summaries are in S5 File. Half of the surveys have the Cohn et al. summary shown first and half have the Takata et al. summary shown first. (See Fig 1 for schematic).

(PDF)

S7 File. Survey data.

All data from the survey.

(XLSX)

Acknowledgments

We thank Dr. Michelle Itano (University of North Carolina) and Dr. Jeanne Garbarino (Rockefeller University) for useful discussions. We also thank Arlene Hurley (Rockefeller University) for help with our IRB application preparation.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

We are grateful for the support of NIH 5R01GM119585 to SMS (https://www.nih.gov/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

Decision Letter 0

David Orrego-Carmona

10 Jul 2019

PONE-D-19-16933

Scientists should use plain language summaries and video abstracts to summarize their research

PLOS ONE

Dear Ms Bredbenner,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Many thanks for your submission. This is a very interesting and pertinent topic which can certainly be published in the special issue. However, considering the comments from the reviewers, I would highly recommend you re-structure the article to present the methods before the results and clarify the procedure you used. Please follow the recommendations of the reviewers in your revision.

==============================

We would appreciate receiving your revised manuscript by Aug 24 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

David Orrego-Carmona, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Could you please include a copy of the questionnaire as Supplementary Information? Currently the questions asked are only available in the data files and hard to extract for the reader.

3. We note that you have included the phrase “data not shown” in your manuscript. Unfortunately, this does not meet our data sharing requirements. PLOS does not permit references to inaccessible data. We require that authors provide all relevant data within the paper, Supporting Information files, or in an acceptable, public repository. Please add a citation to support this phrase or upload the data that corresponds with these findings to a stable repository (such as Figshare or Dryad) and provide any URLs, DOIs, or accession numbers that may be used to access these data. Or, if the data are not a core part of the research being presented in your study, we ask that you remove the phrase that refers to these data.

4. Thank you for stating the following in the Competing Interests section:

I have read the journal's policy and the authors of this manuscript have the following competing interests: KB is a paid creator of video abstracts via SimpleBiologist. https://www.simplebiologist.com

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an interesting and important topic. The methods were appropriate, but I think the synthesis and presentation of results could be improved to make the paper more accessible to readers.

I found the abstract and introduction clearly written, although the introduction was a little long. Other sections were much harder to follow. Having the methods at the end of the paper was extremely confusing and made the paper difficult to read – you need to understand the methods used to be able to interpret the results. The results seemed overly complicated and hard to follow.

Given that your focus is on the benefits of alternative models of presentation, how about providing a video summary and PLS of this paper?

Major suggestions

Some of the methods are rather confusing. I can’t work out whether each participant viewed a summary for each of the two papers and if so, whether they viewed the same type of summary. Could you reword to make this clear? It does become clear at line 500 but should be made explicit from the start. Actually, having moved on to line 504 I am now confused again! This suggests that they did receive summaries for two papers. I think I have worked out now that you have 8 surveys because the order in which the papers are presented is switched but this isn’t immediately clear – please simplify this section so it is clear what information each participant received.

I’m not sure of the purpose of the additional bit of the survey for those with science careers. Given that this paper is already very difficult to follow, and that this is not the main objective, perhaps it might be clearer to remove this?

I would like more information on the questions asked to determine comprehension. Could these be described in more detail somewhere? Perhaps in a table?

I think you are possibly over-analysing all the different subgroups given the relatively small numbers. It may be better to focus on overall results first and then discuss any differences between your groups of participants.

I question the value of the correlation analysis. This is confusing and difficult to follow and I’m not sure it adds much to the take home message of the paper.

I would like to see some more detail on how outcomes were measured. You have rating preferences on the figures, but I don’t quite understand these. I presume participants were asked to rate on a 5-point scale, but what do “slightly prefer” and “a bit” mean? Prefer to what?

Some data would be easier to follow in tables – e.g. summary of participants. Some of the detail could then be removed from the text and figures to make it easier to follow.

The way results are described is really confusing. You talk about participants within each data set. But the data sets don’t have participants (the survey does), and they are summaries of research papers, not data sets. Could you consider clearer wording?

“Surprising to us, the results also show that there can be differences between papers based on the background knowledge required to understand the findings of that paper.” – I’m not quite sure of the basis for this claim and wonder whether you are reading too much into an analysis based only on two different papers?

The discussion describes the two papers you were looking at. I think a box or table in the methods would be more appropriate for this – it would be helpful information when people are considering what you have done.

I wonder in the discussion whether you might consider adding something on the benefits/costs to researchers in writing a PLS or producing a video summary. For example, I find that to write a PLS for my research I need to really think carefully about what the take home message is and what I want to say. This often leads to me improving other sections of the report based on the thinking/work I do to write a PLS. On the other hand, you may consider the extra resources needed to produce a video summary. I really liked your video summaries but I wouldn’t know how to go about producing one.

Minor suggestions

In the introduction you could also add that all Cochrane systematic reviews are required to produce a plain language summary.

Line 186 – “Video and PLS…” I don’t quite understand why this is here and written in bold

Line 528… Significant differences in what?

Line 551 “Normalisation…” Would it not be simpler to state that you used % rather than actual numbers for the histograms?

Line 166 (and elsewhere) “Trends in….” – you have done a cross-sectional survey so how can you comment on trends? I don’t understand this sentence.

Figure 1 – doesn’t really seem to be about demographics. It’s summarising your three groups of participants and their learning preferences. This is quite difficult to read.

Reviewer #2: The manuscript is really well written and of great quality. The subject is important for the scientific field, and this kind of work should be spread widely so that our research can reach all kinds of audiences.

Cochrane has a standard plain language summary format; perhaps it is a good idea to encourage writers to follow one model until the field reaches a consensus.

About the methods, I don’t know if it is a preference of the authors, but placing the methods section after the results makes understanding a little difficult.

Also, I am not sure it is clear how the preference for the type of abstract was assessed. According to the text, “Participants were randomly assigned to one of the eight possible surveys via a random URL generator embedded into the button on the survey website. No single participant ever saw more than one of the different summary types.” So each participant only saw one type of summary (video, graphical, plain language, or published abstract). If each participant only saw one type of abstract (for both Takata and Cohn), how do you know which type the person prefers?

Also, you report the number of people that saw each article: “The Takata et al. data set contains fewer science related (n=112) and non-science (n=133)”. But how many saw each type of abstract for each paper?

I believe this matter should be clarified in the final version.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2019 Nov 19;14(11):e0224697. doi: 10.1371/journal.pone.0224697.r002

Author response to Decision Letter 0


5 Aug 2019

Reviewer’s Comments

Reviewer #1:

1. Some of the methods are rather confusing. I can’t work out whether each participant viewed a summary for each of the two papers and if so, whether they viewed the same type of summary. Could you reword to make this clear? It does become clear at line 500 but should be made explicit from the start. Actually, having moved on to line 504 I am now confused again! This suggests that they did receive summaries for two papers. I think I have worked out now that you have 8 surveys because the order in which the papers are presented is switched but this isn’t immediately clear – please simplify this section so it is clear what information each participant received.

The methods section has been rewritten to be as clear as possible and a new figure was added that shows a flowchart of how participants were funneled into the 8 different surveys (fig 1).
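To make the design concrete for readers of this response: the eight surveys arise from crossing the four summary types with the two paper orders. A hypothetical Python sketch of the assignment follows (an illustration only, not the survey site’s actual code):

    import random

    # Hypothetical reconstruction (not the survey site's actual code):
    # four summary types crossed with two paper orders give eight surveys.
    SUMMARY_TYPES = ["video", "graphical", "plain language", "published"]
    PAPER_ORDERS = [("Cohn", "Takata"), ("Takata", "Cohn")]
    SURVEYS = [(s, order) for s in SUMMARY_TYPES for order in PAPER_ORDERS]
    assert len(SURVEYS) == 8

    # Each participant is randomly funneled into one survey, so they see
    # a single summary type for both papers, in one of the two orders.
    def assign_participant():
        return random.choice(SURVEYS)

    print(assign_participant())  # e.g. ('video', ('Takata', 'Cohn'))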

2. I’m not sure of the purpose of the additional bit of the survey for those with science careers. Given that this paper is already very difficult to follow, and that this is not the main objective, perhaps it might be clearer to remove this?

We designed this section of the survey because, although it would be incredible if people from all walks of life read scientific journals, the majority of journal traffic comes from people with science careers. We thought it was a good idea to learn where scientists already go to hear about updates inside and outside of their fields, so we can create summaries that fit the venues and formats scientists already use. We have updated the relevant results section to better reflect our intentions with these questions.

3. I would like more information on the questions asked to determine comprehension. Could these be described in more detail somewhere? Perhaps in a table?

The comprehension questions are now a part of a supplementary figure that displays the survey (S6 File). There is also a table of the comprehension questions in the methods section of the manuscript (Table 1).

4. I think you are possibly over-analysing all the different subgroups given the relatively small numbers. It may be better to focus on overall results first and then discuss any differences between your groups of participants.

Since we found that there were statistically significant differences between our sub-groups, we didn’t feel comfortable pooling them. We have gone through the results section and stated the overall outcome before discussing differences between subgroups to improve clarity.

5. I question the value of the correlation analysis. This is confusing and difficult to follow and I’m not sure it adds much to the take home message of the paper.

We believe that this analysis does reveal important aspects of our data, including the differences that emerge depending on how participants prefer to learn or get updates. This is an important distinction, and we feel the correlation analysis helps us present it well. Changes have been made in the results section to emphasize this.

6. I would like to see some more detail on how outcomes were measured. You have rating preferences on the figures, but I don’t quite understand these. I presume participants were asked to rate on a 5-point scale, but what do “slightly prefer” and “a bit” mean? Prefer to what?

For figure 2, we asked the participants to rank the way that they like to hear about new science (before ever seeing the summaries we were testing), so the preferences are between the different options for getting information (video, audio, written, etc). For the other figures, there is not a preference but instead participants say how much they agree with statements like “I enjoyed reading this abstract” or “I understand this research more after reading this abstract.” Hopefully the addition of the survey as a supplementary file (S6 File) and the additional changes to the methods help clarify this issue (Table 1).

7. Some data would be easier to follow in tables – e.g. summary of participants. Some of the detail could then be removed from the text and figures to make it easier to follow.

The Demographics Figure (prev. fig 1) has been split into a table (Table 2) that shows the participant numbers and a figure (fig 2) which shows the reported preferences of the participants.

8. The way results are described is really confusing. You talk about participants within each data set. But the data sets don’t have participants (the survey does), and they are summaries of research papers, not data sets. Could you consider clearer wording?

We have edited the results section to be clearer on this point.

9. “Surprising to us, the results also show that there can be differences between papers based on the background knowledge required to understand the findings of that paper.” – I’m not quite sure of the basis for this claim and wonder whether you are reading too much into an analysis based only on two different papers?

When we designed the survey, we deliberately chose one paper with a medical focus and one with a basic biology focus so we could see whether they showed the same results. Since we saw a difference between the two, especially in participants with science-related and non-science careers, we thought this was worth noting. We hope that someone will perform additional research that addresses this question more fully than we have in this paper, but we think it is worth discussing. We changed the wording of this paragraph to better reflect that our results hint at this possibility rather than show it explicitly.

10. The discussion describes the two papers you were looking at. I think a box or table in the methods would be more appropriate for this – it would be helpful information when people are considering what you have done.

The methods have been updated to more fully cover the scope of the survey and the two papers.

11. I wonder in the discussion whether you might consider adding something on the benefits/costs to researchers in writing a PLS or producing a video summary. For example, I find that to write a PLS for my research I need to really think carefully about what the take home message is and what I want to say. This often leads to me improving other sections of the report based on the thinking/work I do to write a PLS. On the other hand, you may consider the extra resources needed to produce a video summary. I really liked your video summaries but I wouldn’t know how to go about producing one.

We did have some discussion of the benefits/costs of videos and PLS in the discussion, but we have added more based on this comment. Previously we focused on time and cost; now we have also included the ideas suggested here.

12. In the introduction you could also add that all Cochrane systematic reviews are required to produce a plain language summary.

This information has been added to the introduction (line 72).

13. Line 186 – “Video and PLS…” I don’t quite understand why this is here and written in bold

“Video and Plain Language Summaries are the most effective regardless of Career” is the title of that section of the results. We split the results into several sections for clarity.

14. Line 528… Significant differences in what?

The methods have been rewritten to be clearer. In this case, we checked for significant differences in comprehension, enjoyment, understanding, and the desire for more updates between the two papers for each summary type and each career. For example, we looked at whether there was a significant difference between the Cohn et al. and Takata et al. results from non-science participants who saw the video summary. The only significant differences we saw between papers were with the published abstract.

15. Line 551 “Normalisation…” Would it not be simpler to state that you used % rather than actual numbers for the histograms?

This is a fair point. We thought normalizing each subgroup to 100 seemed simpler since every subgroup had a different number of people, but saying percentage is equivalent and clearer. It has been changed in the final manuscript.
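For concreteness, the two framings are the same arithmetic; a minimal Python sketch with made-up counts (not our survey data):

    # Minimal sketch with made-up counts (not our survey data): converting
    # raw histogram counts to percentages within each subgroup so that
    # subgroups of different sizes are directly comparable.
    counts = {
        "science": {"agree": 40, "neutral": 10, "disagree": 5},
        "non-science": {"agree": 20, "neutral": 15, "disagree": 15},
    }
    percentages = {
        group: {answer: 100 * n / sum(answers.values())
                for answer, n in answers.items()}
        for group, answers in counts.items()
    }
    print(percentages["science"]["agree"])  # ~72.7, comparable across groups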

16. Line 166 (and elsewhere) “Trends in….” – you have done a cross-sectional survey so how can you comment on trends? I don’t understand this sentence.

We were using the word “trends” to describe conclusions drawn across subgroups. We have adjusted our language to something we believe is clearer.

17. Figure 1 – doesn’t really seem to be about demographics. It’s summarising your three groups of participants and their learning preferences. This is quite difficult to read.

We have adjusted figure 1, and it is now a table (Table 2) and a figure (fig 2). We hope that makes the results clearer.

Reviewer #2:

1. Cochrane has a standard plain language summary format; perhaps it is a good idea to encourage writers to follow one model until the field reaches a consensus.

We have added the fact that Cochrane systematic reviews have plain language summaries in our introduction (line 72), but after searching their website for some time, we could not find a good outline of how to write or format a plain language summary, so we did not include the suggestion that authors should follow the Cochrane format.

2. About the methods, I don’t know if it is a preference of the authors, but placing the methods section after the results makes understanding a little difficult.

We have updated the methods both in language and in location. It’s now before the results section.

3. Also, I am not sure it is clear how the preference for the type of abstract was assessed. According to the text, “Participants were randomly assigned to one of the eight possible surveys via a random URL generator embedded into the button on the survey website. No single participant ever saw more than one of the different summary types.” So each participant only saw one type of summary (video, graphical, plain language, or published abstract). If each participant only saw one type of abstract (for both Takata and Cohn), how do you know which type the person prefers?

The reported preferences that are in figure 1 (now fig 2) were obtained by asking the participants how they prefer to get new scientific information (before ever seeing one of our created summaries). All other figures are based on asking participants to say how much they “enjoyed reading this abstract” or “understand this research more after reading this abstract.” Then the scores for each summary type were compared. We hope that the updates in the methods section (Table 1) and the supplemental figure (S6 File) that shows the survey will clear up this confusion.

4. Also, you report the number of people that saw each article: “The Takata et al. data set contains fewer science related (n=112) and non-science (n=133)”. But how many saw each type of abstract for each paper?

I believe this matter should be clarified in the final version.

We have separated the information from figure 1 into a table of the participant numbers (Table 2) and a figure of the reported preferences (fig 2). All Takata et al. participants saw the summaries for both papers. Some Cohn et al. participants saw only the Cohn et al. summary, which accounts for the difference in participant numbers between Cohn and Takata. The table of participant numbers, along with the updates to the methods, should clear up this confusion.

Decision Letter 1

David Orrego-Carmona

23 Sep 2019

PONE-D-19-16933R1

Video abstracts and plain language summaries are more effective than graphical abstracts and published abstracts

PLOS ONE

Dear Ms Bredbenner,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

We would appreciate receiving your revised manuscript by Nov 07 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

David Orrego-Carmona, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (if provided):

Dear authors,

Many thanks for the revised version of the manuscript. Please find attached some minor comments from the reviewer. I am also enquiring about the possibility of adding a video.

All the best,

David


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: This manuscript contains a wide range of information and is a little difficult to follow. The changes made in the first round of review solved many of these problems and made it much easier to follow and better organized. That said, now that I have a better idea of the message you are trying to send, I noted a few more things that could be adjusted in this version.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: REVIEW 2 PLOS.docx

PLoS One. 2019 Nov 19;14(11):e0224697. doi: 10.1371/journal.pone.0224697.r004

Author response to Decision Letter 1


8 Oct 2019

Reviewer Comments

1. If you go to Cochrane Methods, you can find a section on the Methodological Expectations of Cochrane Intervention Reviews (MECIR) (https://methods.cochrane.org/methodological-expectations-cochrane-intervention-reviews). There you will find a link to the Standards for the reporting of plain language summaries of new reviews of interventions from the Plain Language Expectations for Authors of Cochrane Summaries (PLEACS) initiative (https://methods.cochrane.org/sites/default/files/public/uploads/pleacs_2019.pdf).

We have reviewed the Cochrane standards and found aspects of the methods that seem like good practices, so we have added the reference to the manuscript (line 650 for track changes, line 622 for final).

2. Lines 130-7 of the introduction seem like methods to me. If you removed this part, the paragraph would read as follows; that way you would state what you did in your research and why.

“To evaluate the effectiveness of different summary types for people with different careers, we created a survey that presents participants with a video abstract, graphical abstract, plain language summary, or published abstract from two papers in the same subject area. The survey looked at comprehension, perceived understanding, enjoyment, and whether the participants wanted to see more summaries of that type (…)”

We took this suggestion and removed the more methods-like components of the introduction (line 130).

3. Line 238: “input their gender if they so desired (a fill-in-the-blank that was not required)”

Were there many people who didn’t fill this in? Since this was something you reported, I think this question should have been mandatory, but that is not something you can change at this point. I believe, then, that the number of people who didn’t answer this question should be reported when you give the proportion of men/women who answered the survey.

505 of the 538 participants reported a binary gender, which was then used for the proportion of men/women who answered the survey. Data from individuals who either didn’t report a gender or reported a non-binary gender were still used for analysis but were left out of gender-specific reporting.

The proportion of participants reporting a binary gender is now included in the manuscript (line 323 in track changes, line 314 in final).

4. Line 367: “Given the clear reported preference for written forms of communication (fig 2A), it was expected that the plain language summaries and perhaps the published abstracts would be the most effective summaries when tested (…) but videos were either as effective or more effective than plain language summaries in all cases except comprehension of science related participants for the Cohn et al. paper where plain language summaries had a higher average score (M=4 of 6 for video, M=5 of 6 for plain language) (fig 3)”

I believe this deserves some kind of discussion. Although the most preferred format is the written one, videos were the easier format for comprehension. How would the authors explain that? A recently published paper from a physiotherapy journal (https://doi.org/10.1016/j.physio.2018.11.003) analysed all the plain language summaries in the Physiotherapy Evidence Database and found that only 2% were written at a reading level suitable for a non-scientific population! That could explain why the scientific community not only prefers written summaries but also comprehends the message better than the rest of the people surveyed.

It is interesting that the most preferred format was written, but the videos were best for comprehension. We feel that written summaries were reported as the most preferred because they require the least effort and because they can be skimmed. We don’t have direct evidence for this from the survey, but based on comments on the survey and conversations with other scientists, this seems to be the difference. Also, many commenters on the video surveys said they didn’t know just how helpful a video abstract could be, which perhaps suggests that video abstracts aren’t widely watched.

Overall, there is still a lot of research to be done on this subject, and we hope this paper opens people’s eyes to how much data we still need.

To specifically address this point, we are now reporting the Flesch Reading Ease scores for both plain language summaries and published abstracts, calculated in the same way as in the paper you mentioned (new Table 1, line 175 track changes, line 166 final).

The plain language summary scores are 63 and 58 for Cohn et al. and Takata et al. respectively, both of which are considered a normal reading level, suitable for 9th/10th grade. This is a bit above the 6th grade reading level recommended for reaching the general public, but the Flesch criteria are often harsh on scientific writing, where unfamiliar words for protein names or other phenomena are often unavoidable. We also put our plain language summaries through other calculators and got a variety of scores, so this may not be the final word on whether a plain language summary is effective. We have commented on this idea in the discussion as well (line 635 track changes, line 625 final).
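For readers unfamiliar with the metric, the Flesch Reading Ease formula itself is simple; most of the variation between calculators comes from how syllables are counted. A rough Python sketch (the syllable counter here is a crude heuristic, not the tool we used):

    import re

    def count_syllables(word):
        # Crude heuristic: count runs of vowels. Real calculators use
        # pronunciation dictionaries, which is one reason different
        # tools returned different scores for the same summary.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        # Standard formula: 206.835 - 1.015*(words/sentences)
        #                           - 84.6*(syllables/words).
        # Higher scores mean easier text; our summaries scored 63 and 58.
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835 - 1.015 * (len(words) / sentences)
                - 84.6 * (syllables / len(words)))

    print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))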

We are also providing suggestions for authors of plain language summaries, which reference the eLife and Cochrane guidelines. We also reference a jargon detector in the manuscript that may be helpful in limiting unfamiliar words and increasing the readability of plain language summaries (line 631 in track changes, line 621 in final).

5. I understand the authors found a significant result in the correlation analysis and therefore considered it important to report. However, as the authors themselves note in the text, the correlation could have arisen because one of the papers is harder to read, since it requires more background knowledge to understand. Therefore, although significant, this result should be interpreted carefully, as it may not reflect the true situation, mainly because these kinds of resources usually target people outside the scientific community or from other areas of research, who consequently need an easier text to understand the results of a very specific scientific study.

Also, you could not analyse the correlation for plain language summaries; perhaps it would not be correlated, since almost everyone preferred PLS but not all comprehended them. Again, if the plain language summaries were really easy to read (which it appears they are not), the preferred format could also be the best for understanding (but that is just a theory!).

To address this point, we have reported the Flesch Reading scores for both plain language summaries (New Table 1, line 175 track changes, line 166 final). These reading scores do not suggest that one of the plain language summaries was far more difficult to read than the other. We believe the difference between the two papers has to do with the amount of background knowledge needed to understand the implications of the findings. In general, any form of reading score doesn’t really measure how easy it is to get the point of an article. You can write an explanation of quantum mechanics using only the 1000 most common English words, short sentences, and active voice, but that doesn’t guarantee that people will understand quantum mechanics. They will be able to literally read what you wrote, but that’s not necessarily what is most important. We have added a small change to our discussion to be clear about this point (line 542 track changes, line 532 final).

That said, authors of plain language summaries should definitely try to keep their writing easy to read when possible and we have made sure that our manuscript reflects that opinion.

Decision Letter 2

David Orrego-Carmona

21 Oct 2019

Video abstracts and plain language summaries are more effective than graphical abstracts and published abstracts

PONE-D-19-16933R2

Dear Dr. Bredbenner,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

I am still waiting for an answer regarding the possibility of including a video abstract for the article.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

David Orrego-Carmona, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

David Orrego-Carmona

7 Nov 2019

PONE-D-19-16933R2

Video abstracts and plain language summaries are more effective than graphical abstracts and published abstracts

Dear Dr. Bredbenner:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. David Orrego-Carmona

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Reported learning preferences by career for Cohn et al.

    The bar charts show the reported preference of the participants for different ways to hear about science separated by career for the Cohn et al. data set.

    (TIF)

    S2 File. Cohn et al. video abstract.

    A compressed version of the video created for the Cohn et al. paper. The video script was the same as the plain language summary. The video was created with Autodesk Sketchbook, GarageBand, and iMovie software and was hosted on YouTube. The video was embedded into the survey for participants to view. Closed captioning was on by default, with the option for the user to turn it off. For the full video see: youtu.be/RLuunA81kJo.

    (MP4)

    S3 File. Takata et al. video abstract.

    A compressed version of the video created for the Takata et al. paper. The video script was the same as the plain language summary. The video was created with Autodesk Sketchbook, GarageBand, and iMovie software and was hosted on YouTube. The video was embedded into the survey for participants to view. Closed captioning was on by default, with the option for the user to turn it off. For the full video see: youtu.be/Kp-0PvS99fM.

    (MP4)

    S4 File. Graphical abstracts.

    Graphical abstracts created for the Cohn et al. (A) and Takata et al. (B) papers. Graphical abstracts used similar visual motifs as the video abstracts and were created using Keynote software. Each abstract was put through a color blindness simulator to ensure that the abstracts could be seen properly by all viewers. The abstracts were embedded into the survey for participants to review.

    (TIF)

    S5 File. Plain language summaries.

    Plain language summaries written for the Cohn et al. (A) and Takata et al. (B) papers. Summaries were written based on intensive review of the published papers and cover each key point mentioned in the abstracts. The Cohn et al. summary contains 422 words (A) and the Takata et al. summary contains 433 words (B). Each summary was embedded into the survey for participants to review.

    (TIF)

    S6 File. Copy of published abstract survey.

    A PDF copy of the survey presented to participants. This copy shows the published abstracts as the summary type. It has the Cohn et al. summary shown first and the Takata et al. shown second. Other surveys are identical to this one except that they show videos, plain language summaries, or graphical abstracts instead of the published abstracts. The videos can be seen in S2 and S3 Files. The graphical abstracts are in S4 File and the plain language summaries are in S5 File. Half of the surveys have the Cohn et al. summary shown first and half have the Takata et al. summary shown first. (See Fig 1 for schematic).

    (PDF)

    S7 File. Survey data.

    All data from the survey.

    (XLSX)

    Attachment

    Submitted filename: REVIEW 2 PLOS.docx

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.

