F1000Res. 2023 Mar 3;10:36. Originally published 2021 Jan 19. [Version 3] doi: 10.12688/f1000research.28324.3

The disconnect between researcher ambitions and reality in achieving impact in the Earth & Environmental Sciences – author survey

Andrew Kelly 1,a, Victoria Gardner 1, Anna Gilbert 1
PMCID: PMC10076906  PMID: 37034186

Version Changes

Revised. Amendments from Version 2

The authors thank the reviewers for their further comments and suggestions. Additional clarification of our understanding of the terms “real-world problems” and similar has been provided in the Terms and terminology section, where we also note that the use of this terminology in the survey without clear definitions may have contributed to the gap between our authors’ feedback and the results of the Dimensions analysis. Our terminology has been carefully revised throughout to be more dispassionate, rather than inferring the feelings/beliefs of our respondents. We have prepared a shareable summary sheet of the referenced Author Survey data and updated the citation. Finally, we have commented on the broad role of publishers in fostering collaboration within our research communities, as well as upholding and maintaining the academic record, which leads them to be well-placed to support non-academic engagement.

Abstract

Background: There is an increasing desire for research to provide solutions to the grand challenges facing our global society, such as those expressed in the UN SDGs (“real-world impact”). Herein, we undertook an author survey to understand how this desire influenced the choice of research topic, choice of journal, and preferred type of impact.

Methods: We conducted a survey of authors who had published in >100 of our Earth & Environmental Science journals. The survey was sent to just under 60,000 authors and we received 2,695 responses (4% response rate).  

Results: Respondents indicated that the majority of their research (74%) is currently concerned with addressing urgent global needs, whilst 90% of respondents indicated that their work either currently contributed to meeting real-world problems or that it would be a priority for them in the future; however, the impetus for this research focus seems to be altruistic researcher desire, rather than incentives or support from publishers, funders, or their institutions. Indeed, when contextualised within existing reward and incentive structures, respondents indicated that citations or downloads were more important to them than contributing to tackling real-world problems.

Conclusions: At present, the laudable and necessary ambition of researchers in the Earth & Environmental Sciences to contribute to tackling real-world problems, such as those included in the UN SDGs, seems to be lost amidst the realities of being a researcher, owing to the prioritisation of other forms of impact, such as citations and downloads.

Keywords: academic publishing, Earth Sciences, Environmental Sciences, research journals, research assessment, survey, Sustainable Development Goals, SDGs

Introduction

The role that academic publishing plays in the research process is often understood to comprise registration, curation, evaluation, dissemination, and archiving 1 . This covers validation (through the peer-review process), publication (participation in the scholarly record), curation (preservation of the work to ensure its availability in perpetuity), and dissemination (to relevant communities). These activities help researchers to advance knowledge by building on existing outcomes, progressing discussion and debate, and driving consensus. However, in the digital age, the potential of a research journal is much broader than this, fostering collaboration, network-building (both within core and adjacent fields), career development, and maximising the capability of research to mobilise knowledge and contribute to the solving of grand challenges 2, 3 . Indeed, especially in the Earth and Environmental Sciences, there is increasing pressure on researchers to support policy formulation or to address societal challenges through their research in order to continue to receive research funding 4 . Therefore, it is essential that the mechanisms and drivers that collectively influence where an author chooses to publish support publication in the journals most relevant to their work; that is, where their research is most likely to be found, read, cited, and iterated upon by those working in the same and adjacent disciplines, as well as by those working outside of academia in policy-making, lobbying, or advisory capacities.

However, such mechanisms and drivers, both personal and external, are varied and nuanced, as are our authors’ expectations of the impact that their work might have once it has been published. Academics 5 , institutions 6 , publishers 7 and learned societies 8 often survey their researchers and/or members to understand their values and views on key issues such as open access, data sharing, reproducibility, and career progression 9 . Research impact has also been the subject of both surveys 10 and research 11 in recent years, with common themes emerging around the opportunities presented by the move to digital and by open access, and around linking research outputs to broader societal impact or benefit. Furthermore, several national research evaluation systems, such as the UK’s Research Excellence Framework (REF) 12 and the Australian Research Council’s Excellence in Research for Australia 13 , include the potential societal impact of applications in their allocation of research funding. Conversely, common roadblocks have been identified, including access to outputs, incompatible research culture, and an over-emphasis on journal metrics rather than the impact of individual researchers or research outputs, which leads researchers to focus on publishing in highly ranked journals in order to advance their careers 5, 14–16 . For example, one of the main findings from Springer Nature’s 2019 research collaboration on researcher attitudes towards societal impact was the focus from survey respondents on the concept of “academic impact”, which was more important to most respondents than “societal impact beyond academia”.

It is in this context that we undertook the 2020 Impact Assessment of Earth & Environmental Sciences Research: Author Survey. Surveys have frequently been used to evaluate “research impact” 17 , and there has been much discussion about the relationship with the UN Sustainable Development Goals (SDGs) within the Earth and Environmental Sciences communities 18 . Our survey was therefore designed around three main aims, namely to understand:

  • what drives these communities to choose the topics that they research;

  • what drives these communities to choose the journals that they publish in; and

  • what type(s) of impact they are most looking for from their work.

We investigated what benefits publishing in our journals could confer on both the research and the authors following publication, and we looked at the extent to which global challenges, such as those expressed in the SDGs and the missions of Horizon Europe, were shaping researcher ambitions.

Methods

In Spring 2020, Taylor & Francis surveyed authors from across our Earth & Environmental Sciences portfolio. The survey (see Extended data 19 ), hosted on Alchemer (formerly SurveyGizmo), was emailed to authors using Salesforce Marketing Cloud. It was sent to just under 60,000 authors and received 2,695 responses (4% response rate).

The survey comprised 23 questions:

  • Section A (Q1–2): multiple-choice questions to clarify the article that the survey responses related to.

  • Section B (Q3–4): multiple-choice questions, with the option of prose responses, relating to the choice of journal.

  • Section C (Q5–10): multiple-choice questions, with the option of prose responses, relating to the downstream value of publishing the article for both the work and the author.

  • Section D (Q11–20): largely multiple-choice questions, with the option of prose responses, relating to the impact of the work, the motivation for undertaking it, and the ability of the work to tackle real-world problems and influence policy change. Questions 13, 15, 18, and 20 invited solely prose responses.

  • Section E (Q21–23): demographic questions.

A confidentiality and privacy statement, outlining how the data would be used, was provided on the first page of the survey. Consent to participate was implied when authors clicked through to complete the questionnaire after reading this statement and the instructions given in the invitation email. The data are fully anonymized and no sensitive personal data regarding the respondents were collected. To protect the anonymity of the respondents, all prose responses to the free-text questions (questions 13, 15, 18, and 20) have been omitted from the shared dataset, though some comments have been included herein. Written informed consent was not sought due to the low-risk nature of the research.

The survey responses include authors from 102 journals in the Earth & Environmental Sciences portfolio, and the geographical distribution of responses was similar to that of authors in the portfolio. Therefore, we can be reasonably confident that our responses are representative of Taylor & Francis authors in our Earth & Environmental Sciences journals.

Data analysis

Confidence intervals were calculated, using the Creative Research Systems sample-size calculator, for the parts of our analysis in which we compared groups of different sizes; only statistically significant differences are reported herein. Microsoft Excel was used to prepare the tables and charts.

A note about error bars and statistical significance: the country-comparison charts presented in this report include error bars, which plot the confidence intervals for the percentages shown. When making comparisons, error bars are a useful visual means of showing the range that likely contains the true overall value for each country in the chart. If the error bars for two or more countries overlap, we should be cautious about drawing substantive conclusions about any differences, because they may not be statistically significant. Therefore, only clearly statistically significant differences are included in the comparisons presented herein.
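As an illustration of the underlying calculation, the interval reported by sample-size calculators of this kind is the standard normal approximation for a proportion. The following is a minimal sketch only; the function names and sample figures are ours, not taken from the survey data:

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% confidence interval for a survey proportion (normal approximation)."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - margin, p_hat + margin)

def intervals_overlap(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """If two intervals overlap, any difference between groups is treated with caution."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical comparison: 42% of 2,695 respondents overall vs 23% of a
# notional 300 respondents in one country (illustrative numbers only).
overall = proportion_ci(0.42, 2695)
country = proportion_ci(0.23, 300)
print(overall, country, intervals_overlap(overall, country))
```

In this invented example the two intervals do not overlap, so the difference between the groups would be treated as statistically significant.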

Terms and terminology

Herein, we have used a series of terms, including “real-world problems”, “global challenges”, “real-world application”, and “real-world challenges”, interchangeably to describe the wider societal impact of a piece of research, which typically occurs downstream of further academic advancements. We considered a real-world problem to be an issue that poses risk to individuals, wider society, or the environment, such as natural hazards, cost of living, and food insecurity, as opposed to purely academic or theoretical research topics. In this context, “real-world application” denotes research that, directly or indirectly, has addressing a known real-world problem as a clear outcome or goal, whilst “real-world challenges” and “global challenges” denote real-world problems on a global scale with clear calls to action. Research impact, more broadly, incorporates both “real-world” and “academic” outputs, and the relationship between a piece of work and these outcomes, for both academic and non-academic stakeholders, is complex 20, 21 . With thanks to the reviewers of this article for highlighting this issue, we note that the design of our survey assumed that the terms mentioned above were commonly understood by our respondents, which may not have been the case. As such, respondents may not have fully understood, or been able to accurately judge, the degree to which their work addressed a “real-world problem” (or similar term), which likely contributed to the observed difference between our authors’ perspectives on their work and the Dimensions analysis (see below) 22 .

Results and discussion

SDG-relevance of earth and environment research – the current picture

‘Why do researchers undertake the research that they do?’ It is a fundamental question, and the answer is multifaceted, varying by career stage, geography, and subject discipline. However, the publication of the Sustainable Development Goals (SDGs) by the UN in September 2015, with the stated aim of providing “a shared blueprint for peace and prosperity for people and the planet, now and into the future”, affords us an opportunity to frame the question in a way that gets to the core of what researchers hope to achieve through their work, that is: ‘do researchers study topics that contribute, either directly or indirectly, to the tackling of real-world problems?’

Given that the SDGs comprise such urgent needs as Clean Water and Sanitation (SDG 6) and Climate Action (SDG 13), and tackle threats to Life on Land (SDG 15) and Life below Water (SDG 14), one might readily anticipate that a high proportion of research in the Earth and Environmental Sciences would have a part to play in meeting the needs they express. Indeed, 74% of respondents indicated that, in their opinion and based on their understanding of the broader context of their work and the challenges expressed by the SDGs, their research contributed (directly or indirectly) to the tackling of real-world problems, such as those expressed by the UN SDGs (Figure 1). Furthermore, 90% of respondents indicated that their work either currently contributed to meeting real-world problems or that it would be a research priority for them in the future. Therefore, we might infer that, in the Earth & Environmental Sciences, it is a strong research imperative for our authors that their work contributes to the tackling of real-world problems.

Figure 1. Proportion of respondents whose research contributes to tackling real-world problems, now (A) and in the future (B).

Such a high percentage aligns with the perspectives of journal editors in these subject areas, who, in contributing to our recent publication Sustainable Development Goals in the Earth and Environmental Sciences 23 , expounded on the variety, breadth, and richness of research that their journals and subject areas have to offer in tackling the challenges laid out in the SDGs.

In our survey, whilst younger researchers were slightly more likely to undertake this type of research (76% of respondents aged under 50 answered ‘Yes’ compared with 70% of respondents aged 50 or older, with non-overlapping confidence intervals), the difference was not pronounced, suggesting that this is a multi-generational aspiration rather than one driven solely by early-career researchers.

Whilst we have noted that the tackling of real-world problems, such as those expressed by the SDGs, was a strong research priority for respondents, 8% of respondents selected ‘No’ when asked if their work had such implications. As a follow-up, we invited those respondents to give prose responses to the question: ‘Please could you elaborate on why your research might not necessarily contribute to tackling real-world problems?’

Several respondents indicated that their work was too highly specialised, fundamental, narrow in scope, or too preliminary to have broad application to a grand challenge, such as an SDG, whilst others indicated that the contribution of the work depended more on the engagement of policy-makers and aligned government politics than on the relevance of the research. This appears to highlight an opportunity gap around the role that journals could play in maximising the capability of research to mobilise knowledge. It aligns with the view of the International Science Council, which noted in its 2021 report that “The value of science to national economies and in confronting global challenges demands more efficient processes of knowledge dissemination” 24 .

Comparison of author-led and analytics-led analyses

In addition to qualitative feedback from survey respondents, we also used the Sustainable Development Goals Research Category in Dimensions Analytics to quantitatively analyse the proportion of research published in the surveyed journals that had been linked to one or more of the SDGs 25 .

To facilitate as close a comparison as possible between the datasets, we analysed the Dimensions records for articles published in the 102 surveyed journals between 2012 and 2020, in line with the distribution list for the survey.

Of the 53,890 published articles, 9,096 (17%) were linked to one or more SDGs, with 10,144 SDG links overall. The relative proportion of article tags to the individual SDGs is shown in Figure 2. As might be expected for Earth and Environmental Sciences subject areas, more than one third of the tagged articles were linked to Climate Action (SDG 13; 37%), followed by Affordable and Clean Energy (SDG 7; 16%) and Sustainable Cities and Communities (SDG 11; 13%).

Figure 2. Relative proportion of the different SDGs covered by the T&F research corpus (based on number of article tags).
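As a simple check on how these headline figures follow from the counts, the proportions reduce to straightforward ratios. This is a minimal sketch using only the totals reported above; the variable names are ours:

```python
total_articles = 53_890   # articles published 2012-2020 in the 102 surveyed journals
sdg_linked = 9_096        # articles linked to one or more SDGs
sdg_links = 10_144        # total article-SDG links (an article can carry several tags)

share_linked = sdg_linked / total_articles   # ~0.169, i.e. the 17% reported above
links_per_linked = sdg_links / sdg_linked    # ~1.12 SDG tags per linked article
print(f"{share_linked:.0%} of articles SDG-linked; "
      f"{links_per_linked:.2f} tags per linked article")
```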

Notably, the proportion of articles that were linked to one or more SDGs in the Dimensions records (17%) was much lower than the proportion of respondents who indicated that, in their opinion, their research contributed (directly or indirectly) to the tackling of real-world problems (74%). While the survey responses and the Dimensions analysis did not cover exactly the same set of articles, we think the overlap was substantial enough, and the difference pronounced enough, to support two inferences.

Firstly, the structure of a research article does not typically allow for discussion of the broader implications or applications of a piece of research in contributing to grand challenges, such as the SDGs, which makes meta-analysis more difficult and could mask the relevance of the work to a non-academic audience.

Secondly, the ability of analytics/AI tools to link article-level research with broader problems is improving 26 , but remains under-developed, and further development will be required to appropriately characterise research with more indirect SDG implications.

Why is addressing real-world challenges a research priority for our authors?

To understand more about the motivating factors behind our researchers’ decisions to investigate topics with application to real-world problems, we asked: ‘Why have you chosen to undertake research that contributes to these topics?’ (Figure 3). The responses presented a clear split between internal drivers—personal interest (62%) and the desire to contribute to addressing real-world problems (78%)—and external drivers, such as encouragement from a university (16%), other collaborators (16%), or improved opportunities to secure research funding (15%), with internal drivers being much more significant.

Figure 3. Motivating factors for investigating topics with applications to real-world problems.

We find it surprising that institutions (16%) and funders (15%) were only narrowly more influential than coincidence (14%) in prompting research directed towards meeting global challenges.

Ambitions vs reality

We saw the greatest gap between aspiration and reality when we asked respondents, with a maximum of three selections, what type of impact was most important to them. The most-preferred type of impact was citations from within the same field (69%), ahead of contribution to the advancement of research (53%), contribution to tackling real-world problems, such as those expressed by the UN SDGs (21%), and input into policy decision-making (19%; Figure 4).

Figure 4. Most-preferred type of impact (maximum of three selections).

Interestingly, having seen in the earlier questions the strong desire of authors to undertake research with real-world application, when compared with other types of impact, contributing to the tackling of real-world problems dropped to fifth in the list (21%), behind citations from within the same field (69%) and from adjacent/other fields (25%), and achieving a large readership (34%).

We suggest that some respondents may have felt that citations were a necessary step towards contributing to the advancement of research or to tackling real-world problems, through knowledge sharing and discussion; the reasons for these selections were not probed further and may be explored in future research.

Input into policy decision-making (19%), through which ideas, research, and theory can be put into practice to drive much of the national-scale change required to meet the needs captured by the UN SDGs, placed further down the list, on par with forming new collaborations (19%). Only attention from the press (3%) and attention on social media (3%) ranked lower.

We also considered the extent of the overlap between three of the key responses to the question shown in Figure 4: ‘contribution to the advancement of research’, ‘contribution to tackling big real-world problems’, and ‘input into policy decision-making’ (Figure 5). The percentages shown are based on the total number of respondents who selected at least one of these three options.

Figure 5. Alignment of selected responses to the question ‘What type of impact is most important to you?’

Of the respondents who selected one or more of ‘contribution to the advancement of research’, ‘input into policy decision-making’, and ‘contribution to tackling big real-world problems’ as one of their top-three most-important types of impact, more than half (51%) selected only ‘contribution to the advancement of research’, indicating that research impact was much more important to this group than policy and real-world impact.

We also noted modest overlap between respondents who selected either ‘input into policy decision-making’ or ‘contribution to tackling big real-world problems’ and ‘contribution to the advancement of research’, and minor overlap between respondents who selected ‘input into policy decision-making’ and ‘contribution to tackling big real-world problems’.
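A minimal sketch of how such overlaps can be tallied from multi-select responses, assuming each respondent’s selections are stored as a set of option labels (the response data below are invented for illustration):

```python
# Each set holds the impact types one respondent selected (max three in the survey).
responses = [
    {"advancement of research"},
    {"advancement of research", "policy decision-making"},
    {"tackling real-world problems", "policy decision-making"},
    {"advancement of research", "citations same field"},
]

trio = {"advancement of research", "policy decision-making", "tackling real-world problems"}

# Base: respondents who selected at least one of the three options (as in Figure 5).
base = [r & trio for r in responses if r & trio]

share_only_advancement = sum(r == {"advancement of research"} for r in base) / len(base)
print(f"only 'advancement of research': {share_only_advancement:.0%}")
```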

We did not probe the links between these responses further in the survey, although, as discussed above, several respondents who indicated that their work did not contribute to tackling real-world problems cited the necessary engagement of policy-makers to achieve this form of impact. Furthermore, many of the responses to the question ‘How could journals or publishers help research to influence the response to real-world problems?’ indicated the need to engage a non-academic audience (see below).

Influence on the choice of journal

Feedback from respondents indicated two key points: 1) the aspiration of researchers to contribute to the tackling of real-world problems with their work; and 2) an emphasis on citations and readership as primary measures of impact 27, 28 . We wanted to understand what role the choice of journal played in meeting researchers’ aspirations. To investigate this, and to see whether there might be a correlation between the choice of journal and the preferred type of impact, we asked all respondents why they chose to submit their paper to the journal in which their work was published (Figure 6).

Figure 6. Motivating factors for the choice of journal.

The predominant factor in the selection of a journal was its relevance to the author’s research: 64% of respondents indicated that this was one of the most-influential factors underpinning their choice. Reaching a broader non-academic audience (which we might link to the desire to contribute to resolving real-world challenges) came quite far down the list, with only 6% of respondents noting it as an influential factor in their choice of journal.

Interestingly, although 42% of authors indicated that the journal having an Impact Factor was an important factor in their decision-making, only 8% indicated that they chose the journal because it had the highest available Impact Factor, perhaps indicating that the presence of an Impact Factor mattered more than the score itself. Many institutions, policy-makers, and funders are keen to reduce the emphasis on the Impact Factor in research assessment practices 29 , so there is perhaps a misalignment between the priorities of researchers and those of their institutions and funders.

We also asked whether respondents had published in their first-choice journal, since failure to publish in a first-choice journal could obscure any correlation between preferred type of impact and choice of journal. We found that 75% of respondents indicated that they published in their first-choice journal, whilst 19% indicated that it was their second choice and 6% indicated that they had previously submitted their work to more than one other journal. Based on these responses, we inferred that it was reasonable to draw a correlation between citations as the preferred form of impact and the importance of a journal having an Impact Factor.

Comparing priorities across different geographies

Scholarly communication is multifaceted, with a range of different stakeholders located all around the globe, across both the private and public sectors. Indeed, we found that authors located in different regions placed different emphases on the criteria that shaped their choice of journal, and their preferred types of impact. We discuss below findings from some specific countries or regions with noteworthy responses, and have made all findings available in the supplementary dataset.

United States. Fewer respondents based in the United States indicated that having an Impact Factor was an important criterion in determining their choice of journal compared to the overall average (23% vs 42% overall; Figure 7). Of note, this feedback corresponded with the results of our post-publication author survey 30 , which is sent to authors in all subject areas and all geographies; respondents from the US typically rate having an Impact Factor as a less-important factor in determining their choice of journal compared with the global average.

Figure 7. Relative importance of indexing in the JCR in determining choice of journal for US-based authors.

Instead, US-based authors placed more value on real-world types of impact than the global average, with a higher proportion indicating that contribution to tackling big real-world problems, such as those expressed by the UN SDGs, was one of the most-important types of impact to them (29%), and a much-higher proportion indicating that having an input into policy decision-making was important to them (34% vs 19% overall; Figure 8).

Figure 8. Relative importance of input into decision-making for US-based authors.

Recommendation from a colleague (39% vs 26% overall) and the journal’s capacity to reach a broader non-academic audience (13% vs 7% overall) were also much more important factors in identifying a suitable journal for US-based respondents, compared to the global average (Figure 9).

Figure 9. Relative importance of recommendation from a colleague in determining choice of journal for US-based authors.

China. Responses from researchers based in China largely reflected the overall results, both in the most-important types of impact to them and the most-important factors that influence their choice of journal. As in other geographies, respondents from China indicated that receiving citations from within the same field was one of the most-important types of impact for them (72%), followed by contribution to the advancement of research (49%) and readership/downloads (33%).

However, one noticeable distinction was the relative unimportance of having a real-world impact in terms of contribution to tackling real-world problems (10% vs 21% overall; Figure 10) and input into policy decision making (8% vs 19% overall), perhaps because China-based respondents were less likely to have collaborations with groups who were involved in SDG-related activities (8% vs 16% overall; Figure 11).

Figure 10. Relative importance of tackling real-world problems to China-based authors.

Figure 11. Relative proportion of researchers with collaborators who are involved in SDG-related activities.

Conversely, for China-based researchers, the relevance of a journal to their work was a much-more-important consideration (83%) compared to the global average (65%) and compared to authors based in the US (59%) and Europe (49%), whilst whether the journal had an Impact Factor was as important to Chinese respondents (44%) as the overall average (42%; Figure 12).

Figure 12. Relative importance of a journal’s relevance in determining choice of journal for China-based authors.

Europe (including the UK). European respondents closely followed the global averages for both the types of impact that were most important and the most-important factors in determining the choice of journal.

Respondents indicated that receiving citations from within the same field was the most-important type of impact to them (73%), followed by contribution to the advancement of research (50%) and readership/downloads (33%). Where respondents based in Europe differed was in the prospect of forming new collaborations (24%), which they considered a more-important type of impact than did respondents from India (16%) and China (14%; Figure 13).

Figure 13. Relative importance of new collaborations as a form of impact for Europe- and China-based authors.

Interestingly, and perhaps related to the premium placed on network-building outside of their own subject communities, only 49% of respondents from Europe said that the relevance of the journal to their work was a key factor in their choice of publication venue, much lower than the average across all territories (65%; Figure 14).

Figure 14. Relative importance of relevance in determining choice of journal for Europe-based authors.

India. Respondents from India again closely followed the global averages in terms of the types of impact that were most important and the most-important factors in determining the choice of journal. However, there were two distinct points of divergence.

Compared to the global average, respondents from India placed significantly greater importance on the relevance of the journal for their work (87% vs 65% overall), comparable to respondents from China (83%) and significantly higher than respondents from the US (59%) and the UK and Europe (49%). Similarly, respondents from India placed much greater importance on the journal’s capability to reach their community (30%) compared to respondents from China (12%), the US (15%), and UK and Europe (18%), as well as to the overall average (19%; Figure 15).

Figure 15. Relative importance of reaching a target community across the demographics considered.

Perhaps most importantly, respondents from India indicated that a journal’s capacity to raise their profile was much more important to them (31%) than respondents from the other territories that we considered (UK and Europe 16%, China 14%, US 12%; Figure 16).

Figure 16. Relative importance of raising the respondent’s profile in determining choice of journal.

Driving real-world impact

Finally, we asked participants the following question: How could journals or publishers help research to influence the response to real-world problems? Answers were provided as prose, and our analysis found that suggestions clustered around four main areas for improvement: access, accessibility, communication of outcomes, and timeliness, as described below.

  1. Improve access to the latest research, in particular for non-academic/policy-maker audiences, as well as to the underlying code/data.

    “Engage closer with non-governmental organisations (environmental and social) – provide greater access to these organisations that are fundamental to achieving the SDGs but do not have the financial resources to enjoy access/membership of the Journals.”

    “Provide access to interesting real-world data sets” / “Publish code and data along with papers; special issues focused on practical applications”

  2. Improve the accessibility of research by changing the language, style, and format of publications to serve a non-academic audience.

    “Prepare readers’ digest versions of relevant articles, in multiple languages.”

    “Provide a policy-type document for research papers that tackle real-world problems. Original research paper may be difficult to read by policy-makers.”

    “Publish an e-digest of abstracts indexed by problem area. Send it to NGOs and managers in government agencies so they can quickly find articles that are relevant for their issues.”

    “Increased use of executive summaries from research papers that are accessible to a broader audience than academia”

    “provide support producing infographics and sharing research to non-academic audiences”

  3. Improve communication links to raise the visibility of research implications for policy and real-world issues.

    “Making more publicity to the "non-scientific world" of the issues that are published in the journals” / “be present at policy events”

    “Special editions and workshops (can be via Zoom) to bring people together.”

    “Share published papers on social media and create TV shows where scientists engage on current issues.”

    “Connections with academic media outlets, like the Conversation etc.”

    “They should announce research grants related to real world problems”

  4. Better support the publication of research in areas of particular relevance to live policy issues.

    “Seek out authors who are also practitioners.” / “By opening spaces for discussion among different actors (policy-makers, civil society and academia) and societal sector.”

    “Be willing to publish applied work, not just academic studies.” / “encourage and publish more transdisciplinary research”

    “By staying focused on their journals' scope which should be specific to these real-world problems”

    “By planning special issues which focus on research that are in response to real-world problems. When doing so, ensuring that enough time is given for research in this area to be specifically conducted, and not expecting that data is already available to be tailored into a paper that addresses these issues.”

    “By considering articles that address real world problems, even it if they are not considered "high impact" or "potentially citable".”

The contribution that publishers make to the advancement of research is often understood to comprise validation (through the peer-review process), publication (participation in the scholarly record), curation (preservation of the work to ensure its availability in perpetuity), and dissemination (to relevant communities). We also note here that similar important contributions are made by learned societies, as discussed elsewhere 31, 32 . However, increasingly, the value of a research journal is much broader than this, additionally supporting career development—through citation in promotion applications, recognition through awards, or appointments to governmental/non-governmental advisory panels/working groups—and fostering new research collaborations through network-building within core and adjacent fields and non-academic communities, both domestically and internationally. To most effectively engage non-academic audiences, the respondents indicated that policy-makers, industry, and the wider public must have access to the original research, both the underlying data and the conclusions. To this end, greater support for open research, such as broader provision of open access publication models across all key stakeholders, could be an important step towards allowing non-academic readers to access and engage with the latest research.

To help realise the potential reach, impact, and policy application of the latest original research, respondents noted that research outcomes should be presented in a format, style, and language that is accessible and comprehensible to a non-academic audience 33 , or synthesised for a particular point of use, although such syntheses have been found to have varied impacts on policy and practice 34 . Whilst the research article serves the research community well, its structure, length, and tone may create barriers for non-academic readers, who are often looking for evidence pertaining to a particular point of need and may be put off by having to draw out points of relevance from a full research paper. This sentiment was also expressed by stakeholders across higher education, research, policy, and publishing who attended a dinner discussion convened by Taylor & Francis and the Higher Education Policy Institute in 2022, outlined in a co-authored Policy note 35 .

Finally, respondents indicated that authors and publishers should seek to maximize opportunities to bring the latest research into the public consciousness, with the aim of cultivating a culture that drives policy change by engaging with live policy issues. It was suggested that this could be achieved by adopting a transdisciplinary approach at the outset of a piece of work, involving scientific and societal stakeholders 36 . Respondents noted that non-academic summaries, workshops, and discussion forums could directly engage policy-makers right at the point of need. Indeed, as noted by one respondent, it is important for publishers to “be present” where appropriate at policy events and to advocate for the value of the research that they publish on behalf of their authors. However, there are relatively few examples of such engagement by publishers, and their effectiveness remains unclear 37 . In any case, such value, which increasingly extends to public and policy engagement, must also be recognised by institutions and funders with respect to career advancement and reputational growth 38, 39 .

Conclusion

Following a survey of >2,500 researchers who had published in our Earth & Environmental Sciences journals portfolio, we found that a majority of respondents (90%) indicated that their work either currently contributed to meeting real-world problems or that it would become a priority in the future, thus suggesting that, as one might anticipate, the tackling of real-world challenges is a significant research priority in the Earth & Environmental Sciences.

Whilst it is encouraging to see that the majority of research in the subject area is concerned (directly or indirectly) with addressing global needs, the impetus seems to be altruistic researcher desire, rather than incentives or support from publishers, funders, or institutions. As a result, this necessary application of original research seems to be lost amidst the realities of being a researcher, where success is predominantly measured by citations and readership. Respondents suggested four key areas for action by publishers and other stakeholders across the scholarly communication ecosystem to help researchers meet their aspiration for their work to have real-world impact: access, accessibility, communication of outcomes, and timeliness.

Acknowledgements

The authors thank Tom Fleet for his support in the set-up and execution of the survey, and the reviewers for their careful consideration and feedback.

Funding Statement

The author(s) declared that no grants were involved in supporting this work.

[version 3; peer review: 2 approved]

Data availability

Underlying data

Figshare: Taylor-and-Francis_Impact-Assessment-of-Earth-and-Environmental-Sciences-Research-Author-Survey_Raw-Data_Figshare, https://doi.org/10.6084/m9.figshare.13281146.v1 40 .

Extended data

Figshare: Taylor-and-Francis_Earth-and-Environment-Survey-Questions, https://doi.org/10.6084/m9.figshare.13281104.v1 19 .

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

This paper was written using data obtained on (DATE), from Digital Science’s Dimensions platform, available at https://app.dimensions.ai. Access was granted to subscription-only data sources and functions under licence agreement.

https://doi.org/10.6084/m9.figshare.20176412

Author information

Dr Andrew Kelly holds a BSc (First Class Honours) in Chemistry from the University of York (2003) and a PhD in Chemistry from the University of Bath (2008). He has worked in publishing for more than 10 years and is currently a Journals Portfolio Manager at Taylor & Francis. ORCID: 0000-0002-5785-4782

Victoria Gardner is Director of Policy at Taylor & Francis, where she advises on global policy developments around scholarly communications including around open access, copyright and IP, and the move to digital. She has worked in a variety of roles within Taylor & Francis, including portfolio management, open access, and technology. ORCID: 0000-0002-8519-6377

Anna Gilbert holds a BTh in Theology from Bangor University and an MSc in Social Research Methods from the University of Surrey. She is currently a Research & Analytics Manager at Taylor & Francis, where she is responsible for primary research and marketing/web analytics. ORCID: 0000-0002-9627-9274

References

  • 1. Sugimoto CR, Allen L, Jeroen B, et al. : Rethinking impact factors: New pathways in journal metrics [version 1; not peer reviewed]. F1000Res. 2019;8:671. 10.7490/f1000research.1116751.1 [DOI] [Google Scholar]
  • 2. Herman E, Akeroyd J, Bequet G, et al. : The changed - and changing -Landscape of serials publishing: Review of the literature on emerging models. Learn Publ. 2020;33(3):213–229. 10.1002/leap.1288 [DOI] [Google Scholar]
  • 3. Sarewitz D, Pielke RA, Jr: The neglected heart of science policy: reconciling supply of and demand for science. Environ Sci Policy. 2007;10(1):5–16. 10.1016/j.envsci.2006.10.001 [DOI] [Google Scholar]
  • 4. Poppy G: Science Must Prepare for Impact. Nature. 2015;526(7571):7. 10.1038/526007a [DOI] [PubMed] [Google Scholar]
  • 5. Niles MT, Schimanski LA, McKiernan EC, et al. : Why we publish where we do: Faculty publishing values and their relationship to review, promotion and tenure expectations. PLoS One. 2020;15(3):e0228914. 10.1371/journal.pone.0228914 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Harley D, Acord SK, Earl-Novell S, et al. : Assessing the future landscape of scholarly communication: An exploration of faculty values and needs in seven disciplines. Berkeley, CA: University of California Center for Studies in Higher Education, 2010. Reference Source [Google Scholar]
  • 7. Taylor & Francis: Taylor & Francis researcher survey 2019. 2019. Reference Source [Google Scholar]
  • 8. Rowley J, Johnson F, Sbaffi L, et al. : Academics' behaviors and attitudes towards open access publishing in scholarly journals. J Assoc Inf Sci Technol. 2017;68(5):1201–1211. 10.1002/asi.23710 [DOI] [Google Scholar]
  • 9. Alperin JP, Muñoz Nieves C, Schimanski LA, et al. : How significant are the public dimensions of faculty work in review, promotion and tenure documents? eLife. Edited by E. Pewsey et al. eLife Sciences Publications, Ltd,2019;8:e42254. 10.7554/eLife.42254 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Springer Nature: Towards Societal Impact: Researcher Attitudes and Motivations.2020. Reference Source [Google Scholar]
  • 11. International Handbook on Responsible Innovation A Global Resource. (eds R.v. Schomberg and J. Hankins), Edward Elgar Publishing, Northampton, MA.2019. Reference Source [Google Scholar]
  • 12. Higher Education Funding Council of England: The Nature, Scale and Beneficiaries of Research Impact: An Initial Analysis of Research Excellence Framework (REF) 2014 Impact Case Studies. London: HEFCE.2015. Reference Source [Google Scholar]
  • 13. Australian Research Council: State of Australian University Research 2015-2016: Volume 1 ERA National Report. Canberra: Commonwealth of Australia. Reference Source [Google Scholar]
  • 14. Berlemann M, Haucap J: Which Factors Drive the Decision to Opt out of Individual Research Rankings? An Empirical Study of Academic Resistance to Change. Res Policy. 2015;44(5):1108–15. 10.1016/j.respol.2014.12.002 [DOI] [Google Scholar]
  • 15. Nicholas D, Jamali HR, Herman E, et al. : A global questionnaire survey of the scholarly communication attitudes and behaviours of early career researchers. Learn Publ. 2020;33(3):198–211. 10.1002/leap.1286 [DOI] [Google Scholar]
  • 16. Lane J: Let's make science metrics more scientific. Nature. 2010;464(7288):488–489. 10.1038/464488a [DOI] [PubMed] [Google Scholar]
  • 17. Solans-Domènech M, Pons JMV, Adam P, et al. : Development and validation of a questionnaire to measure research impact. Res Eval. 2019;28(3):253–262. 10.1093/reseval/rvz007 [DOI] [Google Scholar]
  • 18. Geosciences and the Sustainable Development Goals. (Eds Gill JC, Smith M), Springer Nature, Amsterdam. 2021. 10.1007/978-3-030-38815-7 [DOI] [Google Scholar]
  • 19. Research & Analytics, Taylor & Francis: Taylor-and-Francis_Earth-and-Environment-Survey-Questions. figshare. Online resource.2020. 10.6084/m9.figshare.13281104.v1 [DOI] [Google Scholar]
  • 20. Molas-Gallart J, D'Este P, Llopis O, et al. : Towards an Alternative Framework for the Evaluation of Translational Research Initiatives. Research Evaluation. 2016;25(3):235–43. 10.1093/reseval/rvv027 [DOI] [Google Scholar]
  • 21. Kok MO, Schuit AJ: Contribution mapping: a method for mapping the contribution of research to enhance its impact. Health Res Policy Syst. 2012;10:21. 10.1186/1478-4505-10-21 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Oliver K, Innvar S, Lorenc T, et al. : A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2. 10.1186/1472-6963-14-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Taylor & Francis, Kelly A, Agyeman J, et al. : Sustainable Development Goals in the Earth & Environmental Sciences. 10.6084/m9.figshare.12933173.v3 [DOI] [Google Scholar]
  • 24. International Science Council: Opening the record of science: making scholarly publishing work for science in the digital era. Paris, France, 2021. 10.7557/5.5603 [DOI] [Google Scholar]
  • 25. Digital Science: Dimensions.[Software]. 2018; Accessed on (DATE), under licence agreement. Reference Source [Google Scholar]
  • 26. AI Ethics in Scholarly Communication. STM Association, 2021. Reference Source [Google Scholar]
  • 27. Irawan DE, Abraham J, Tennant J, et al. : The Need for a New Set of Perspectives to Measure Research Impact in Earth Sciences: Indonesian’s Case. SocArXiv. 2020. 10.31235/osf.io/7rsj5 [DOI] [Google Scholar]
  • 28. Fryirs KA, Brierley GJ, Dixon T: Engaging with research impact assessment for an environmental science case study. Nat Commun. 2019;10(1):4542. 10.1038/s41467-019-12020-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Suggestions for a National Framework for Publication of and Access to Literature in Science and Technology in India. Appendix 3, point 1. Reference Source [Google Scholar]
  • 30. Taylor & Francis Journals Author Survey: summary. 10.6084/m9.figshare.22188448 [DOI] [Google Scholar]
  • 31. Late E, Korkeamäki L, Pölönen J, et al. : The role of learned societies in national scholarly publishing. Learn Publ. 2020;33(1):5–13. 10.1002/leap.1270 [DOI] [Google Scholar]
  • 32. Hopkins J: The role of learned societies in knowledge exchange and dissemination: the case of the Regional Studies Association, 1965–2005. Hist Educ. 2011;40(2):255–271. 10.1080/0046760X.2010.518161 [DOI] [Google Scholar]
  • 33. Lunn J, Hanson B: Providing greater context for Earth and space science research. Eos. 2017;98. 10.1029/2017EO071657 [DOI] [Google Scholar]
  • 34. Wyborn C, Louder E, Harrison J, et al. : Understanding the Impacts of Research Synthesis. Environmental Science & Policy. 2018;86:72–84. 10.1016/j.envsci.2018.04.013 [DOI] [Google Scholar]
  • 35. Gardner V, Brassington L: Why open access is not enough: Spreading the benefits of research.HEPI Policy Note 42. Higher Education Policy Institute, 2022. Reference Source [Google Scholar]
  • 36. Brennan M, Rondón-Sulbarán J: Transdisciplinary research: Exploring impact, knowledge and quality in the early stages of a sustainable development project. World Development. 2019;122:481–491. 10.1016/j.worlddev.2019.06.001 [DOI] [Google Scholar]
  • 37. Oliver K, Hopkins A, Boaz A, et al. : What works to promote research-policy engagement? Evidence & Policy. 2022;18(4):691–713. 10.1332/174426421X16420918447616 [DOI] [Google Scholar]
  • 38. Hanson B, Pandya R, Stall S, et al. : Expanding recognition for contributions of science to society. Eos. 2018;99. 10.1029/2018EO106255 [DOI] [Google Scholar]
  • 39. Singh GG, Farjalla VF, Chen B, et al. : Researcher engagement in policy deemed societally beneficial yet unrewarded. Front Ecol Environ. 2019;17(7):375–382. 10.1002/fee.2084 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40. Research & Analytics, Taylor & Francis: Taylor-and-Francis_Impact-Assessment-of-Earth-and-Environmental-Sciences-Research-Author-Survey_Raw-Data_Figshare. figshare. Dataset.2020. 10.6084/m9.figshare.13281146.v2 [DOI] [Google Scholar]
F1000Res. 2023 Apr 5. doi: 10.5256/f1000research.145008.r165356

Reviewer response for version 3

Kathryn Oliver 1

Hello, thanks for your detailed responses which address all my main points. Thanks also for the HEPI note and related links which I enjoyed reading.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Yes

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Yes

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Evidence production and use, research impact, academic-policy engagement

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2023 Feb 24. doi: 10.5256/f1000research.57459.r161241

Reviewer response for version 2

Kathryn Oliver 1

Thanks for a chance to read this paper, from an interesting mixed team. Apologies from me too that this has taken so long to deliver; I can see that the first revision has already produced lots of improvement.

Back in 2014 I undertook a systematic review of barriers and facilitators of evidence use in policy. Most of the 145 studies included were surveys of academics by other academics, identifying factors which they felt affected whether or not their research had impact 1 .

I mention this because as far as we could tell, these surveys were nearly all based on researchers’ opinions, which could not be verified from observational, ethnographic or experimental studies of evidence use in policy. This doesn’t imply that researchers are providing misleading answers, but there is a well-documented comprehension gap in academia generally about how policy works and how research impact actually occurs.

You deal with this a bit in the ambitions vs. reality section, where you contrast the stated desire of researchers to undertake applied research with their preference for citations. You could discuss this a bit more by explaining what you (and if possible they) mean by ‘real world application’ and the other categories. I suspect most researchers think that their work does in some way ‘tackle’ a real world problem - the question is at how many degrees of removal that is the case. Overall I would urge caution in how interpretations of this finding are worded, e.g. researchers “indicated that they BELIEVED their work either currently contributed to meeting real-world problems or that it would become a priority in the future”, and “Whilst it is encouraging to see that RESEARCHERS FEEL THAT the majority of research in the subject area is concerned (directly or indirectly) with addressing global needs” etc.

For me the most interesting bit is about the role of journals and publishers. There is some connection here to the history of learned societies, who as you know were the first publishers of regular academic communications 2 , 3 . The suggestions raised by your respondents are all very instrumental (e.g. formatting etc) but I wonder if you would like to comment on the role of journals in promoting relationships and community, in raising the quality of research (through forging those links) and maintaining standards (which of course could include utility of research as a quality marker).

Some of the other suggestions raised by respondents (e.g. more academic-policy engagement) relate to an evidence base which is actually pretty weak. Some of the data we collected in this project 4 looked at publisher- and journal-led activities to promote research impact, although we didn’t find many examples. Happy to share our data on this if helpful - I think there were a couple of prizes offered by publishers (e.g. Emerald) and some convening.

Is the work clearly and accurately presented and does it cite the current literature?

Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Yes

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Yes

Are the conclusions drawn adequately supported by the results?

Yes

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Evidence production and use, research impact, academic-policy engagement

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. Oliver K, Innvar S, Lorenc T, et al. : A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2. 10.1186/1472-6963-14-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Late E, Korkeamäki L, Pölönen J, et al. : The role of learned societies in national scholarly publishing. Learned Publishing. 2020;33(1):5–13. 10.1002/leap.1270 [DOI] [Google Scholar]
  • 3. Hopkins J: The role of learned societies in knowledge exchange and dissemination: the case of the Regional Studies Association, 1965–2005. History of Education. 2011;40(2):255–271. 10.1080/0046760X.2010.518161 [DOI] [Google Scholar]
  • 4. Oliver K, Hopkins A, Boaz A, et al. : What works to promote research-policy engagement? Evidence & Policy. 2022;1–23. 10.1332/174426421X16420918447616 [DOI] [Google Scholar]
F1000Res. 2023 Feb 28.
Andrew Kelly 1

Thanks for a chance to read this paper, from an interesting mixed team. Apologies from me too that this has taken so long to deliver; I can see that the first revision has already produced lots of improvement.

We’re grateful for you making the time to review our paper and for your comments.

Back in 2014 I undertook a systematic review of barriers and facilitators of evidence use in policy. Most of the 145 studies included were surveys of academics by other academics, identifying factors which they felt affected whether or not their research had impact.

I mention this because as far as we could tell, these surveys were nearly all based on researchers’ opinions, which could not be verified from observational, ethnographic or experimental studies of evidence use in policy. This doesn’t imply that researchers are providing misleading answers, but there is a well-documented comprehension gap in academia generally about how policy works and how research impact actually occurs.

You deal with this a bit in the ambitions vs. reality section, where you contrast the stated desire of researchers to undertake applied research with their preference for citations. You could discuss this a bit more by explaining what you (and if possible they) mean by ‘real world application’ and the other categories. I suspect most researchers think that their work does in some way ‘tackle’ a real world problem - the question is at how many degrees of removal that is the case.

A similar point was raised by another reviewer, and we agree that we likely presumed too much understanding of this term by the respondents to the survey, which may have been exacerbated by our use of a series of terms interchangeably without definition. The 2014 article is well-noted, thank you for highlighting it. In the first revision, we added a section, Terms and terminology, to briefly define our understanding of these terms, and we have now added a comment that these terms may not have been commonly understood by all respondents to the survey, which may have then contributed to the gap between our authors’ feedback and the Dimensions analysis. We have also expanded this section to further articulate what we understand by these terms.

Overall I would urge caution in how interpretations of this finding are worded, e.g. researchers “indicated that they BELIEVED their work either currently contributed to meeting real-world problems or that it would become a priority in the future”, and “Whilst it is encouraging to see that RESEARCHERS FEEL THAT the majority of research in the subject area is concerned (directly or indirectly) with addressing global needs” etc.

Yes, this is a fair criticism. We have revised some of our terminology to be more dispassionate, rather than inferring the feelings/beliefs of our respondents.

For me the most interesting bit is about the role of journals and publishers. There is some connection here to the history of learned societies, who as you know were the first publishers of regular academic communications. The suggestions raised by your respondents are all very instrumental (e.g. formatting etc) but I wonder if you would like to comment on the role of journals in promoting relationships and community, in raising the quality of research (through forging those links) and maintaining standards (which of course could include utility of research as a quality marker).

Our author survey also included questions relating to the extent to which the article influenced future career events for our authors (e.g., future collaboration, promotion, or invitation to join an Editorial Board/speak at a conference), and, as you mention, whether publication of their article led to the forming of new connections (both within the subject area and with non-academic communities, and domestically/internationally). We have omitted these results from this paper, as we felt that they related more to the wider value of publishing in academic journals than to the main topic of this paper, but we would be happy to share the data with you and have expanded on this point slightly.

Some of the other suggestions raised by respondents (e.g. more academic-policy engagement) relate to an evidence base which is actually pretty weak. Some of the data we collected in this project looked at publisher- and journal-led activities to promote research impact, although we didn’t find many examples. Happy to share our data on this if helpful - I think there were a couple of prizes offered by publishers (e.g. Emerald) and some convening.

Thank you for sharing this paper. We have included a comment to note some caution about the effectiveness of such engagement activities. Yes please, we would be very interested in discussing your analysis and picking this discussion up separately as we look to develop future activities in this area.

Since this survey, we have been working on the topic of engagement and impact, with a focus on the research-policy exchange, and published a Policy Note with the UK Higher Education Policy Institute (HEPI) on this topic last year, which we’d be delighted to discuss with you in more detail (available here: https://www.hepi.ac.uk/wp-content/uploads/2022/12/Why-open-access-is-not-enough-Spreading-the-benefits-of-research-1.pdf). You may also find a recent report from HEPI of interest: ‘How to talk to policymakers about research’, based on interviews with those experienced in working in the policy and HE sectors: https://www.hepi.ac.uk/wp-content/uploads/2023/01/How-to-talk-to-policymakers-about-research.pdf.

F1000Res. 2022 Jul 11. doi: 10.5256/f1000research.57459.r143684

Reviewer response for version 2

Brooks Hanson 1

The authors have revised the paper extensively and seem to have at least considered all the original comments constructively. The paper is now "published".

My only comment is wrt ref. 29. Does F1000 allow references to “data not available”? This seems odd for a journal demonstrating otherwise complete transparency. I understand that the full author survey is not available, but it would seem that the authors/T&F could at least extract these data if they want to cite it and provide basic statistics around it (N of respondents, etc.)

Is the work clearly and accurately presented and does it cite the current literature?

No

If applicable, is the statistical analysis and its interpretation appropriate?

Yes

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Earth and space science broadly, and scholarly publishing

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

F1000Res. 2023 Feb 28.
Andrew Kelly 1

We have prepared a summary sheet of the data for this question, which has been hosted on FigShare, and updated the reference. Thank you for raising this issue.

F1000Res. 2021 Apr 12. doi: 10.5256/f1000research.31330.r79623

Reviewer response for version 1

Martin Dominik 1

I am having some difficulties with the policy framing of the article, and it does not become obvious what point exactly the authors intend to make. A few statements don’t appear to match up.

The article touches on many topics, but I feel that none of them are discussed to a sufficient extent. It initially centres around 3 key questions that are to be addressed by a survey of authors, but towards the end it morphs into an essay on developing an environment that supports translating research into policy, which is not much underpinned by the survey data. Consequently, the article feels like two loosely connected articles. Moreover, it sometimes reads like a policy statement and advertisement by Taylor & Francis rather than a research article. This impression is strengthened by the lack of research articles amongst the references, in conjunction with the authors not adequately positioning their findings in the context of other research on the topic.

There is a fundamental conflict between the aspiration of “maximising the capability of research to achieve” and focusing efforts on addressing urgent needs, which remains unresolved in the article. Throughout, these are conflated and confused. I would consider “capability” a most relevant keyword in this context.

Several references are ill-chosen for supporting the specific point that the authors try to make. Specifically, the Lisbon strategy (reference 1) has the declared aim “to make Europe the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, but it does not include an explicit call for research to have “impact”. Moreover, the report of reference 21 states that “People are broadly split on whether the UK invests too much in long-term R&D rather than solving issues that matter now (33% agree vs. 35% disagree)”. The link in reference 20 does not work.

It is rather unusual for me to defend Michael Gove, but “we no longer need experts” is a (popular) misquotation by omission. He stated: “I think the people of this country have had enough of experts with organisations with acronyms saying that they know what is best and getting it consistently wrong.”, and the latter part of the statement is highly relevant for scientists engaging in public debate. It is important to know how to build and maintain trust.

While the authors elaborate on the term “impact”, it remains problematic and likely to be understood in various ways. One could challenge the statement about quantification in economic terms being straightforward, and in particular question whether benefits should be evaluated in such a way. For example, people dying early could be economically beneficial, but would that be societally desired?

The “missions” approach in Horizon Europe is somewhat controversial. Notably, the recent 2020 Euroscience Open Forum (ESOF) included a session “Does science for missions undermine the missions of science?”. Likewise, the authors state that the drive to solve "the Grand Challenges of our time" has acquired increased urgency during the COVID-19 pandemic, but one could also argue that this prominently reveals a potential flaw of focusing on identified challenges: the neglect of those strands of research that are most suitable to provide the basis for the next challenges that we are to encounter (e.g. https://www.statnews.com/2020/02/10/fluctuating-funding-and-flagging-interest-hurt-coronavirus-research/ and https://www.nytimes.com/2021/04/08/health/coronavirus-mrna-kariko.html).

The authors refer to citations as the primary currency of success or progress, but it might be worth keeping in mind that it is a widespread myth that this applies universally. In particular, the quoted UK Research Excellence Framework (REF) is not based on citation counts. Moreover, there are significant differences between “impact” in the REF and “Pathways to impact” in the context of funding applications to UK Research Councils. Both are distinct from the “Pathways to Impact initiative” (reference 11). I appreciate the authors mentioning a “chain reaction” emerging from original research. Could one elaborate on what would determine the value of research?

With regard to the role of academic publishing, I note that the International Science Council has recently published an insightful report “Opening the record of science: making scholarly publishing work for science in the digital era.” 1

The definition of the three main aims of the survey does not specify what group of people “our communities” refers to. I am far less confident than the authors about the respondents being “representative”. I would expect that those who just care about their bibliometric profile and other similar performance indicators are not inclined to spend any time responding, which would result in the respondents being more engaged with the scientific community and the wider society.

I found the “note about error bars and statistical significance” almost entirely stating trivialities, whereas the authors do not provide the crucial information of what the quoted “error bars” actually refer to and what they mean, which leaves me unable to interpret them.

There are some substantial weaknesses with the survey questions and the provided answer options. It is somewhat confusing that in some cases respondents were able to pick any number of answers from a list, whereas in others the number of choices was limited. This poses some difficulties for the interpretation of the results and the limitations to answer options should be mentioned clearly in the respective figure and/or captions. It would also be useful if the authors referred to the question numbers.

My main concern with regard to the survey is about Q14 and Q16, which refer to “real-world problems”. I think that it is an unfortunate choice that the authors put these central rather than referring to Q11 in conjunction with Q3 on addressing the question of why researchers undertake the research that they do. While the term “real-world problem” carries a polemic tone suggesting that academics might be detached from reality, “directly or indirectly contributing” is remarkably fuzzy. It is not clear to me what Q14 and Q16 are actually able to capture, and I feel that answering with “yes”, “no”, or “Don’t know” is mostly a matter of interpretation of the question. I could make a case for my research falling into either of these categories, depending on what point of view I assume. In fact, “don’t know” appears to be a good option given that for some research the connection to “real-world problems” is not immediately apparent and the connection might only be built in the future. Apparently, a substantial number of respondents chose that option. I also note that Q16 refers to “priority” whereas Q14 does not. I did not see the authors comment on the lower numbers for an affirmative response on Q16 as compared to Q14.

I also wonder how many of the respondents are familiar with what the UN SDGs are, or are willing to look them up before they answer the question. I note that SDG 8 explicitly recognises creativity and innovation as drivers of economic growth, which aligns fundamental research, not directly targeted at specific challenges, with the UN SDGs.

Something that puzzles me is that 38% of the respondents did not choose the answer “I find researching these topics interesting”. Why do they do research that they are not interested in?

It would seem to me that the survey reveals another “gap” than the one the authors claim. A key gap appears to be in how the research actually materialises into something useful, with only about 20% of the survey respondents stating that “input into policy decision-making” or “contribution to tackling big real-world problems, such as those expressed by the UN SDGs” was amongst the most important forms of “impact” (although they were limited to 3 answers). In contrast, the authors elaborate on the point that respondents ranked formal recognition over making a contribution and state that we risk devaluing the necessary application of original research to addressing our global challenges by prioritising other metrics and outcomes. However, their study does not provide evidence for that. If the research of the respondents is oriented towards “real-world problems”, the underlying motivation is not the relevant issue. Unfortunately, the authors do not elaborate on to what extent “contribution to the advancement of research” is aligned with “contribution to tackling real-world problems” and/or “input into policy decision-making” or rather not. It is all the more unfortunate that respondents were restricted to a maximum of 3 answers for Q11 rather than being able to state where each of them ranks in priority.

On the question of why authors chose to submit their manuscript to the specific journal, the top chosen answer is pretty much an umbrella category that encompasses more than half of the other answer options, which are more specific on what most “relevant” means.

The authors should define what they consider “Europe”, e.g. if respondents stated that they are located in Turkey, have their answers been included or not? To my knowledge, the UK is in Europe.

The mentioned effort on narrowing the science-policy gap is laudable, but can we expect to get researchers on board?

The authors mention “traditional” mechanisms around engagement, knowledge transfer, and research assessment, but in particular with respect to the latter, there are only fashions, but no tradition. A tradition only gets established once something is passed on from generation to generation, while we have seen substantial changes on shorter time-scales. Notably, the h-index was not invented until 2005.

The authors argue that universities are well-positioned to “educate” their faculties, but are we facing a question of education? If researchers are motivated, aren’t they in need of support rather than incentives?

Channels to reach policymakers are certainly a relevant point, but looking at social media and news services, the question of quality pops up. Scientists with a large public followership are not necessarily the best suited to speak on a specific topic.

Is the work clearly and accurately presented and does it cite the current literature?

No

If applicable, is the statistical analysis and its interpretation appropriate?

No

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

No

Are sufficient details of methods and analysis provided to allow replication by others?

No

Reviewer Expertise:

Astronomy, science policy

I confirm that I have read this submission and believe that I have an appropriate level of expertise to state that I do not consider it to be of an acceptable scientific standard, for reasons outlined above.

References

  • 1. Opening the record of science: making scholarly publishing work for science in the digital era. International Science Council. 2021. doi: 10.24948/2021.01
F1000Res. 2022 Jun 30.
Andrew Kelly 1

Responses have been added in-line below the reviewer's comments and are shown in italics.

Reviewer's comments:

I am having some difficulties with the policy framing of the article, and it does not become obvious what point exactly the authors intend to make. A few statements don’t appear to match up.

The article touches on many topics, but I feel that none of them are discussed to a sufficient extent. It initially centres around 3 key questions that are to be addressed by a survey of authors, but towards the end it morphs into an essay on developing an environment that supports translating research into policy, which is not much underpinned by the survey data. Consequently, the article feels like two loosely connected articles. Moreover, it sometimes reads like a policy statement and advertisement by Taylor & Francis rather than a research article. This impression is strengthened by the lack of research articles amongst the references, in conjunction with the authors not adequately positioning their findings in the context of other research on the topic.

We agree that the article needed to be more focused; it has been reframed around the results of the author survey, rather than the non-citation value of academic research, and the policy framing has been removed.

There is a fundamental conflict between the aspiration of “maximising the capability of research to achieve” and focusing efforts on addressing urgent needs, which remains unresolved in the article. Throughout, these are conflated and confused. I would consider “capability” a most relevant keyword in this context.

Several references are ill-chosen for supporting the specific point that the authors try to make. Specifically, the Lisbon strategy (reference 1) has the declared aim “to make Europe the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion”, but it does not include an explicit call for research to have “impact”. Moreover, the report of reference 21 states that “People are broadly split on whether the UK invests too much in long-term R&D rather than solving issues that matter now (33% agree vs. 35% disagree)”. The link in reference 20 does not work.

It is rather unusual for me to defend Michael Gove, but “we no longer need experts” is a (popular) misquotation by omission. He stated: “I think the people of this country have had enough of experts with organisations with acronyms saying that they know what is best and getting it consistently wrong.”, and the latter part of the statement is highly relevant for scientists engaging in public debate. It is important to know how to build and maintain trust.

Yes, this section was initially reframed to focus on the increased public trust in science/research during the pandemic, but has now been removed owing to the tighter focus and the references have been updated.

While the authors elaborate on the term “impact”, it remains problematic and likely to be understood in various ways. One could challenge the statement about quantification in economic terms being straightforward, and in particular question whether benefits should be evaluated in such a way. For example, people dying early could be economically beneficial, but would that be societally desired?

We agree that the difficulty in qualifying impact itself is part of the challenge, whilst quantification isn’t necessarily a good thing. We have reframed around the mobilisation/transfer of knowledge.

The “missions” approach in Horizon Europe is somewhat controversial. Notably, the recent 2020 Euroscience Open Forum (ESOF) included a session “Does science for missions undermine the missions of science?”. Likewise, the authors state that the drive to solve "the Grand Challenges of our time" has acquired increased urgency during the COVID-19 pandemic, but one could also argue that this prominently reveals a potential flaw of focusing on identified challenges: the neglect of those strands of research that are most suitable to provide the basis for the next challenges that we are to encounter (e.g. https://www.statnews.com/2020/02/10/fluctuating-funding-and-flagging-interest-hurt-coronavirus-research/ and https://www.nytimes.com/2021/04/08/health/coronavirus-mrna-kariko.html).

Yes, we agree, although the missions concept is gaining traction worldwide, potentially at the expense of curiosity-driven research. The discursive elements around Horizon Europe have been removed as part of the tightening of the article.

The authors refer to citations as the primary currency of success or progress, but it might be worth keeping in mind that it is a widespread myth that this applies universally. In particular, the quoted UK Research Excellence Framework (REF) is not based on citation counts. Moreover, there are significant differences between “impact” in the REF and “Pathways to impact” in the context of funding applications to UK Research Councils. Both are distinct from the “Pathways to Impact initiative” (reference 11). I appreciate the authors mentioning a “chain reaction” emerging from original research. Could one elaborate on what would determine the value of research?

With regard to the role of academic publishing, I note that the International Science Council has recently published an insightful report “Opening the record of science: making scholarly publishing work for science in the digital era.”1

Thank you for sharing the reference. Our experience suggests that this is the case, along with other examples, such as Horizon Europe’s business case, which comments on increased citations comparatively, but this has been modified or removed in the Introduction.

The definition of the three main aims of the survey does not specify what group of people “our communities” refers to. I am far less confident than the authors about the respondents being “representative”. I would expect that those who just care about their bibliometric profile and other similar performance indicators are not inclined to spend any time responding, which would result in the respondents being more engaged with the scientific community and the wider society.

We also agree that survey sampling tends to lead to some degree of self-selection, which may emphasise some biases. However, we were satisfied that the total number of responses across a wide range of journals and the geographical alignment of the respondents with the journals’ author base allowed us to have reasonable confidence in the representative nature of the results.

I found the “note about error bars and statistical significance” almost entirely stating trivialities, whereas the authors do not provide the crucial information of what the quoted “error bars” actually refer to and what they mean, which leaves me unable to interpret them.

The error bars plot the confidence intervals for the percentages shown. If the error bars for two or more countries overlap, we have been cautious about drawing any substantive conclusions, because the differences may not be statistically significant; only clearly statistically significant differences are discussed in our comparisons.
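
For illustration, the kind of interval behind such error bars can be computed as follows. This is a minimal sketch assuming the common normal approximation for a proportion; the exact interval method used in the paper is not specified here, so treat this as indicative only:

```python
import math

def percentage_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval (in percentage points) for a survey
    proportion, using the normal approximation; z = 1.96 gives ~95% coverage."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)  # z times the standard error of a proportion
    return (100 * max(0.0, p - half_width), 100 * min(1.0, p + half_width))

# Example: an option chosen by 74% of 2,695 respondents
low, high = percentage_ci(successes=round(0.74 * 2695), n=2695)
print(f"74% with 95% CI: {low:.1f}%-{high:.1f}%")  # roughly +/-1.7 percentage points
```

For small subgroups (e.g. countries with few respondents), a Wilson interval would be preferable to this normal approximation, which is another reason for caution when bars overlap.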

There are some substantial weaknesses with the survey questions and the provided answer options. It is somewhat confusing that in some cases respondents were able to pick any number of answers from a list, whereas in others the number of choices was limited. This poses some difficulties for the interpretation of the results and the limitations to answer options should be mentioned clearly in the respective figure and/or captions. It would also be useful if the authors referred to the question numbers.

The format of the question was selected according to the purpose of the question and the number of perceived answers. The phrasing of the questions was appropriate for the settings that were used; that is, the questions that presented a limited number of options used wording that emphasised priority, whereas the questions that allowed for an unlimited number of selections used wording that emphasised relevancy.

With regard to the question that allowed a maximum of three responses, this was labelled clearly within the text and chart labels.

My main concern with regard to the survey is about Q14 and Q16, which refer to “real-world problems”. I think that it is an unfortunate choice that the authors put these central rather than referring to Q11 in conjunction with Q3 on addressing the question of why researchers undertake the research that they do. While the term “real-world problem” carries a polemic tone suggesting that academics might be detached from reality, “directly or indirectly contributing” is remarkably fuzzy. It is not clear to me what Q14 and Q16 are actually able to capture, and I feel that answering with “yes”, “no”, or “Don’t know” is mostly a matter of interpretation of the question. I could make a case for my research falling into either of these categories, depending on what point of view I assume. In fact, “don’t know” appears to be a good option given that for some research the connection to “real-world problems” is not immediately apparent and the connection might only be built in the future. Apparently, a substantial number of respondents chose that option. I also note that Q16 refers to “priority” whereas Q14 does not. I did not see the authors comment on the lower numbers for an affirmative response on Q16 as compared to Q14.

We used the phrase “real-world problems” as we felt it was common parlance for aggregating topics such as the SDGs, but we agree that there is some subjectivity there and that there is a need for education of researchers in contextualising their work. We have included a brief section to introduce the terms below the note on error bars.

I also wonder how many of the respondents are familiar with what the UN SDGs are, or are willing to look them up before they answer the question. I note that SDG 8 explicitly recognises creativity and innovation as drivers of economic growth, which aligns fundamental research, not directly targeted at specific challenges, with the UN SDGs.

Whilst we did not probe the degree of familiarity of the respondents with the UN SDGs, or the scope of individual Goals or Targets, we believe that, especially in the subject areas covered by the survey, it is reasonable to assume that most respondents are broadly familiar with the priorities of the SDGs and that respondents who felt they were not sufficiently familiar with the SDGs to answer the question would have answered “don’t know”.

Something that puzzles me is that 38% of the respondents did not choose the answer “I find researching these topics interesting”. Why do they do research that they are not interested in?

It would seem to me that the survey reveals another “gap” than the one the authors claim. A key gap appears to be in how the research actually materialises into something useful, with only about 20% of the survey respondents stating that “input into policy decision-making” or “contribution to tackling big real-world problems, such as those expressed by the UN SDGs” was amongst the most important forms of “impact” (although they were limited to 3 answers).

Thank you for sharing this observation, which has been included in the revision.

In contrast, the authors elaborate on the point that respondents ranked formal recognition over making a contribution and state that we risk devaluing the necessary application of original research to addressing our global challenges by prioritising other metrics and outcomes. However, their study does not provide evidence for that. If the research of the respondents is oriented towards “real-world problems”, the underlying motivation is not the relevant issue. Unfortunately, the authors do not elaborate on to what extent “contribution to the advancement of research” is aligned with “contribution to tackling real-world problems” and/or “input into policy decision-making” or rather not. It is all the more unfortunate that respondents were restricted to a maximum of 3 answers for Q11 rather than being able to state where each of them ranks in priority.

We have added a new Venn diagram (Figure 5), which looks at the overlap of responses to three of the key options from question 11. The percentages are based on the total number of respondents selecting at least one of these three options. We chose to limit the number of responses to Q11 because we felt that, whilst it would have been worthwhile asking respondents to rank all of the answers, it would have been a much larger undertaking to ask respondents to rank or rate nine answers. Additionally, asking respondents to rank all of the answers would not have allowed them to rank things as equally important/unimportant, or to leave some items unranked.
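
As an illustration of how such overlaps can be tallied from multi-select survey responses, here is a minimal sketch; the option column names are hypothetical, and the published raw data may be structured differently:

```python
import pandas as pd

# Hypothetical boolean columns for three Q11 options; the column labels in the
# published Figshare raw data will differ.
OPTIONS = [
    "advancement_of_research",
    "tackling_real_world_problems",
    "input_into_policy",
]

def venn_regions(df: pd.DataFrame) -> dict[frozenset, int]:
    """Count respondents in each region of the three-set Venn diagram,
    restricted to respondents selecting at least one of the three options."""
    subset = df[df[OPTIONS].any(axis=1)]
    counts: dict[frozenset, int] = {}
    for _, row in subset.iterrows():
        region = frozenset(opt for opt in OPTIONS if row[opt])
        counts[region] = counts.get(region, 0) + 1
    return counts

# Toy example with four respondents:
toy = pd.DataFrame({
    OPTIONS[0]: [True, True, False, True],
    OPTIONS[1]: [True, False, True, False],
    OPTIONS[2]: [False, False, True, False],
})
counts = venn_regions(toy)
base = sum(counts.values())  # respondents selecting at least one option, the percentage base described above
for region, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(sorted(region), f"{100 * n / base:.0f}%")
```

Drawing the resulting counts as a Venn diagram is then a separate plotting step.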

On the question of why authors chose to submit their manuscript to the specific journal, the top chosen answer is pretty much an umbrella category that encompasses more than half of the other answer options, which are more specific on what most “relevant” means.

The authors should define what they consider “Europe”, e.g. if respondents stated that they are located in Turkey, have their answers been included or not? To my knowledge, the UK is in Europe. The mentioned effort on narrowing the science-policy gap is laudable, but can we expect to get researchers on board?

A list mapping countries to the regions used in the analysis has been added to the FigShare deposit.

The authors mention “traditional” mechanisms around engagement, knowledge transfer, and research assessment, but in particular with respect to the latter, there are only fashions, but no tradition. A tradition only gets established once something is passed on from generation to generation, while we have seen substantial changes on shorter time-scales. Notably, the h-index was not invented until 2005.

The authors argue that universities are well-positioned to “educate” their faculties, but are we facing a question of education? If researchers are motivated, aren’t they in need of support rather than incentives?

Channels to reach policymakers are certainly a relevant point, but looking at social media and news services, the question of quality pops up. Scientists with a large public followership are not necessarily the best suited to speak on a specific topic.

F1000Res. 2021 Feb 10. doi: 10.5256/f1000research.31330.r77703

Reviewer response for version 1

Brooks Hanson 1

Review of Kelly et al., The disconnect between researcher ambitions and reality in achieving impact in the Earth & Environmental Sciences – narrowing the gap.

This is a report of an interesting survey getting at a major question related to understanding motivations of researchers in conducting and publishing research, and ultimately how to align incentives to support, recognize, and reward better research and activities of scholars aimed at addressing societal challenges and working with communities.

Major points:

1) The main improvement needed for this paper is placing it in context of other work and author surveys. There are many related and similar surveys and analyses, done by publishers, societies, funders, and scholars, and none (not exaggerating; none) are cited or mentioned. Many of the findings here regarding citations and priorities around publishing, open access, and more, have been covered in other recent author surveys, in this general discipline and other disciplines. This context is essential for this paper to be considered scholarly (and published in a scholarly journal). I’ve reviewed a lot of papers over the years, and this is the first submitted to a leading journal where I’ve seen such a lack of referencing. This might be acceptable for a report by a publisher (cf. Elsevier’s recent gender analysis, self-published, also completely without references) but not a submission to a scholarly journal. Most of the references are just to websites, not any formal survey results or scholarly research on these topics (there’s a lot even in the past few years). Such comparisons would also strengthen some of the conclusions.

Just a note that JpGU and AGU have conducted a somewhat similar survey of their members. The results are not published yet but were presented in this session: https://agu.confex.com/agu/fm20/meetingapp.cgi/Session/105702 at the recent AGU Fall Meeting (see presentation starting at about 40 minutes; registration is required). I’ve been involved in helping this survey. Overall these results (and others AGU has conducted but not published) are similar to the results given in the later questions here regarding selecting journals, citations, etc. However, on the motivation for research (the first question in this survey, and the one that sets the main stage for discussion), the AGU-JpGU wording was different but a large number of respondents, well beyond a majority, indicated that their primary motivation for research was around basic “discovery” or “elaboration/synthesis” rather than “responding to responsibility of society.” JpGU members even more so. In this survey, unlike the T&F one, there was a large age-related difference between early-career and later-career respondents (early-career researchers were more focused on topics related to societal impact). I suspect that the populations of respondents overlap heavily in the two surveys. Recognizing that the AGU-JpGU results are not yet fully analyzed or published, I’m just raising this to bring caution to over-interpreting the first question of this survey as worded. This question is, however, the most interesting one to explore and provides much of the interesting novelty here. Just be cautious in interpreting the answers.

One test would be also to simply score recent publications (outputs) as to whether they align with the results—that is, do most of the outputs directly or indirectly support SDGs, for example? My sense is that in the Earth and space sciences, many indirectly do, but that the path is long and I’m not sure 75% would without quite a stretch.

2) As the authors note there is a disconnect between authors reporting that they are working (directly or indirectly) on societally relevant topics vs. the “impact” (that is, citations) that they are seeking in their work. Further exploration is needed on whether the respondents misinterpreted the first question or if it was worded so vaguely (“indirectly”) as to be meaningless. Note that much “basic” research in the Earth, environmental, and space sciences has widespread indirect impacts. Much real-time “basic-science” data about the Earth is used in the GPS system, weather predictions, and other applications. For examples, see this discussion that I was involved with: https://eos.org/editors-vox/earth-and-space-science-for-the-benefit-of-humanity and the linked papers. Indeed many grant applications require a statement regarding impacts.

Similarly, results for the question on impact expected by the authors are used in comparison. I also wonder if the wording and reality of the scope of published papers drove this response (that is, many have an indirect vs. direct impact) and the response was viewed as a direct impact.

3) The authors list a number of actions T&F are taking or should take. Interestingly, T&F has not signed DORA—as Springer-Nature and Elsevier have now signed (whatever one thinks of that), Wiley and T&F are the major publishers who have not (many individual society journals published with Wiley have). Perhaps the authors could indicate why or why not that would be appropriate and how to leverage that impact. Here’s a recent editorial from a T&F publication: https://www.tandfonline.com/doi/full/10.1080/10919392.2018.1522774

4) The authors indicate what some stakeholders, especially publishers, might do. In the Earth, environment, and space sciences, there are several leading global societies. These are not mentioned. What is their role? Many have missions aligned with providing benefits to society and many are involved in science communication, policy, outreach, and training/mentoring (more so than most commercial publishers and indeed universities). Indeed this might be an argument to focus on publishing with a society versus a commercial title, where these resources are more directly leveraged.

Other items:

The authors argue that it is surprising that JIF is important to researchers but that they don’t always/regularly choose the highest JIF journals when submitting. This is because researchers know rejection rates and do optimization around likelihood of success (or they don’t want to waste their time, which is also important).

Is the work clearly and accurately presented and does it cite the current literature?

No

If applicable, is the statistical analysis and its interpretation appropriate?

Yes

Are all the source data underlying the results available to ensure full reproducibility?

Yes

Is the study design appropriate and is the work technically sound?

Partly

Are the conclusions drawn adequately supported by the results?

Partly

Are sufficient details of methods and analysis provided to allow replication by others?

Yes

Reviewer Expertise:

Earth and space science broadly, and scholarly publishing

I confirm that I have read this submission and believe that I have an appropriate level of expertise to state that I do not consider it to be of an acceptable scientific standard, for reasons outlined above.

F1000Res. 2022 Jul 1.
Andrew Kelly 1

Responses to the reviewer comments have been added in italics.

Reviewer's comments:

This is a report of an interesting survey getting at a major question related to understanding motivations of researchers in conducting and publishing research, and ultimately how to align incentives to support, recognize, and reward better research and activities of scholars aimed at addressing societal challenges and working with communities.

Major points:

1) The main improvement needed for this paper is placing it in context of other work and author surveys. There are many related and similar surveys and analyses, done by publishers, societies, funders, and scholars, and none (not exaggerating; none) are cited or mentioned. Many of the findings here regarding citations and priorities around publishing, open access, and more, have been covered in other recent author surveys, in this general discipline and other disciplines. This context is essential for this paper to be considered scholarly (and published in a scholarly journal). I’ve reviewed a lot of papers over the years, and this is the first submitted to a leading journal where I’ve seen such a lack of referencing. This might be acceptable for a report by a publisher (cf. Elsevier’s recent gender analysis, self-published, also completely without references) but not a submission to a scholarly journal. Most of the references are just to websites, not any formal survey results or scholarly research on these topics (there’s a lot even in the past few years). Such comparisons would also strengthen some of the conclusions.

Just a note that JpGU and AGU have conducted a somewhat similar survey of their members. The results are not published yet but were presented in this session: https://agu.confex.com/agu/fm20/meetingapp.cgi/Session/105702 at the recent AGU Fall Meeting (see presentation starting at about 40 minutes; registration is required). I’ve been involved in helping this survey.

Thank you, and we acknowledge the limitations of our introduction. As part of the refocusing of the article, we have pared back the introduction and included additional referencing to support the discussion.

Overall these results (and others AGU has conducted but not published) are similar to the results given in the later questions here regarding selecting journals, citations, etc. However, on the motivation for research (the first question in this survey, and the one that sets the main stage for discussion), the AGU-JpGU wording was different but a large number of respondents, well beyond a majority, indicated that their primary motivation for research was around basic “discovery” or “elaboration/synthesis” rather than “responding to responsibility of society.” JpGU members even more so. In this survey, unlike the T&F one, there was a large age-related difference between early-career and later-career respondents (early-career researchers were more focused on topics related to societal impact). I suspect that the populations of respondents overlap heavily in the two surveys. Recognizing that the AGU-JpGU results are not yet fully analyzed or published, I’m just raising this to bring caution to over-interpreting the first question of this survey as worded. This question is, however, the most interesting one to explore and provides much of the interesting novelty here. Just be cautious in interpreting the answers.

Thank you for raising this and we agree that it would be interesting to investigate further.

One test would be also to simply score recent publications (outputs) as to whether they align with the results—that is, do most of the outputs directly or indirectly support SDGs, for example? My sense is that in the Earth and space sciences, many indirectly do, but that the path is long and I’m not sure 75% would without quite a stretch.

This was a very interesting suggestion. We have used Dimensions SDG category data (new Figure 2) to analyse the quantitative alignment of research published in the same set of journals with the SDGs, and compared this with the authors' qualitative survey responses.
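
In outline, this kind of alignment check can be run over a Dimensions publication export. The sketch below is illustrative only, with a hypothetical file name and column label rather than the authors' actual pipeline:

```python
import pandas as pd

# Sketch of the alignment calculation described above, assuming a Dimensions
# publication export saved as CSV with a hypothetical "sdg_categories" column
# containing semicolon-separated SDG labels (empty where no SDG is assigned).
pubs = pd.read_csv("dimensions_export.csv")  # hypothetical file name

has_sdg = pubs["sdg_categories"].fillna("").str.strip() != ""
print(f"{100 * has_sdg.mean():.1f}% of publications carry at least one SDG tag")

# Frequency of individual SDGs across the tagged publications:
sdg_counts = (
    pubs.loc[has_sdg, "sdg_categories"]
    .str.split(";")
    .explode()
    .str.strip()
    .value_counts()
)
print(sdg_counts.head(10))
```

The share of SDG-tagged publications can then be set against the percentage of survey respondents reporting that their work addresses real-world problems.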

2) As the authors note there is a disconnect between authors reporting that they are working (directly or indirectly) on societally relevant topics vs. the “impact” (that is, citations) that they are seeking in their work. Further exploration is needed on whether the respondents misinterpreted the first question or if it was worded so vaguely (“indirectly”) as to be meaningless. Note that much “basic” research in the Earth, environmental, and space sciences has widespread indirect impacts.

Much real-time “basic-science” data about the Earth is used in the GPS system, weather predictions, and other applications. For examples, see this discussion that I was involved with: https://eos.org/editors-vox/earth-and-space-science-for-the-benefit-of-humanity and the linked papers. Indeed many grant applications require a statement regarding impacts. Similarly, results for the question on impact expected by the authors are used in comparison. I also wonder if the wording and reality of the scope of published papers drove this response (that is, many have an indirect vs. direct impact) and the response was viewed as a direct impact.

A comment has been added on this in the revised submission to note that further research would be useful to better understand the motivations and responses.

3) The authors list a number of actions T&F are taking or should take. Interestingly, T&F has not signed DORA—as Springer-Nature and Elsevier have now signed (whatever one thinks of that), Wiley and T&F are the major publishers who have not (many individual society journals published with Wiley have). Perhaps the authors could indicate why or why not that would be appropriate and how to leverage that impact. Here’s a recent editorial from a T&F publication: https://www.tandfonline.com/doi/full/10.1080/10919392.2018.1522774

Pleasingly, Taylor & Francis has since signed DORA, but no comment has been made in the article, as the policy-related points have been removed.

4) The authors indicate what some stakeholders, especially publishers, might do. In the Earth, environment, and space sciences, there are several leading global societies. These are not mentioned. What is their role? Many have missions aligned with providing benefits to society and many are involved in science communication, policy, outreach, and training/mentoring (more so than most commercial publishers and indeed universities). Indeed this might be an argument to focus on publishing with a society versus a commercial title, where these resources are more directly leveraged.

This was an oversight from the previous submission. As we have removed the policy discussion, we haven’t elaborated on this further, but we acknowledge the mission focus of many of the leading societies and their importance in shaping the behaviours of researchers in their communities.

Other items:

The authors argue that it is surprising that JIF is important to researchers but that they don’t always/regularly choose the highest JIF journals when submitting. This is because researchers know rejection rates and do optimization around likelihood of success (or they don’t want to waste their time, which is also important).

We agree that likelihood of success is one of the main drivers in authors' decision-making when selecting a journal, and we have included a paragraph on whether the article was ultimately published in the authors' first, second, or third-or-later choice of journal. We also suggest that speed of publication/time to first decision and the journal's relevance to the community are other important drivers in addition to acceptance rate and Impact Factor.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Availability Statement

    Underlying data

    Figshare: Taylor-and-Francis_Impact-Assessment-of-Earth-and-Environmental-Sciences-Research-Author-Survey_Raw-Data_Figshare, https://doi.org/10.6084/m9.figshare.13281146.v1 40 .

    Extended data

    Figshare: Taylor-and-Francis_Earth-and-Environment-Survey-Questions, https://doi.org/10.6084/m9.figshare.13281104.v1 19 .

    Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

    This paper was written using data obtained on (DATE), from Digital Science’s Dimensions platform, available at https://app.dimensions.ai. Access was granted to subscription-only data sources and functions under licence agreement.

    https://doi.org/10.6084/m9.figshare.20176412

