PLOS One. 2024 Aug 26;19(8):e0304651. doi: 10.1371/journal.pone.0304651

Delphi studies in social and health sciences—Recommendations for an interdisciplinary standardized reporting (DELPHISTAR). Results of a Delphi study

Marlen Niederberger 1,*, Julia Schifano 1, Stefanie Deckert 2, Julian Hirt 3,4, Angelika Homberg 5, Stefan Köberich 6, Rainer Kuhn 7,8, Alexander Rommel 9, Marco Sonnberger 10; the DEWISS network
Editor: Monica Duarte Correia de Oliveira
PMCID: PMC11346927  PMID: 39186713

Abstract

Background

While different proposals exist for a guideline on reporting Delphi studies, none of them has yet established itself in the health and social sciences and across the range of Delphi variants. This seems critical because empirical studies demonstrate a diversity of modifications in the conduct of Delphi studies and sometimes even errors in the reporting. The aim of the present study is to close this gap and formulate a general reporting guideline.

Method

In an international Delphi procedure, Delphi experts were surveyed online in three rounds to find consensus on a reporting guideline for Delphi studies in the health and social sciences. The respondents were selected via publications of Delphi studies. The preliminary reporting guideline, containing 65 items on five topics and presented for evaluation, had been developed based on a systematic review of the practice of Delphi studies and a systematic review of existing reporting guidelines for Delphi studies. Starting in the second Delphi round, the experts received feedback in the form of mean values, measures of dispersion, a summary of the open-ended responses and their own response in the previous round. The final draft of the reporting guideline contains the items on which at least 75% of the respondents agreed by assigning scale points 6 and 7 on a 7-point Likert scale.

Results

1,072 experts were invited to participate. A total of 91 experts completed the first Delphi round, 69 experts the second round, and 56 experts the third round. Of the 65 items in the first draft of the reporting guideline, consensus was ultimately reached for 38 items addressing the five topics: Title and Abstract (n = 3), Context (n = 7), Method (n = 20), Results (n = 4) and Discussion (n = 4). Items focusing on theoretical research and on dissemination were either rejected or remained subjects of dissent.

Discussion

We assume a high level of acceptance and interdisciplinary suitability regarding the reporting guideline presented here and referred to as the "Delphi studies in social and health sciences–recommendations for an interdisciplinary standardized reporting" (DELPHISTAR). Use of this reporting guideline can substantially improve the ability to compare and evaluate Delphi studies.

Introduction

Internationally, Delphi studies have proven themselves in a variety of disciplines and fields of application. Analyses show a growing prevalence of this technique, especially in the contexts of medicine, science and technology, and the social sciences [1]. They represent an important tool for analyzing potential future conditions [2, 3]. Associated with this is the idea of collective intelligence, according to which the prognostic ability of a group of experts is better than that of a single expert [4]. In the context of health sciences research, Delphi studies are used in the medical and natural sciences [5] and the behavioral social sciences [6]. They are selected for use if little or inconsistent evidence is available [7], or primary studies are not possible because of economic, ethical, or pragmatic reasons, or there are practical challenges in clinical or nursing contexts.

Due to the prevalence of Delphi studies [1, 8], different authors have already formulated proposals for reporting Delphi studies [9–12]. One guideline has been published using the acronym CREDES (Guidance on Conducting and REporting DElphi Studies) [9]. Another has been published using the keyword ACCORD (ACcurate COnsensus Reporting Document) [13, 14]. Yet none of these reporting guidelines claims to be valid for the many diverse areas of application or Delphi variants in the health and social sciences. This gap should be closed with the help of the study presented here, in that we develop the reporting guideline "DELPHISTAR—Delphi studies in social and health sciences—recommendations for an interdisciplinary standardized reporting."

Characteristics and variants of Delphi techniques

Delphi techniques are structured survey procedures in which complex topics, on which uncertain or incomplete knowledge exists, are evaluated by experts in an iterative process [15]. Specific to a Delphi procedure is that the survey is repeated and, from the second survey round onwards, information is shared regarding the results of the previous round enabling the respondents to reconsider their judgments and, if needed, revise them. Five typical characteristics of the Delphi process can be gleaned from the methods literature [7, 16]:

  1. Experts are surveyed while typically preserving their anonymity.

  2. The survey is conducted in at least two Delphi rounds.

  3. A standardized questionnaire is used, often with open-ended questions to gather arguments and capture the horizons of legitimation.

  4. The statistical analysis is based on descriptive calculations.

  5. From the second Delphi round onwards, the experts receive feedback on the results of the previous round along with the questionnaire and can thus reconsider and, if necessary, revise their judgments.

Some authors define the Delphi process more narrowly and focus on the finding of consensus among the expert judgments [17, 18]. According to Dalkey and Helmer [19], the process is suitable "to obtain the most reliable consensus of opinion of a group of experts…by a series of intensive questionnaires interspersed with controlled feedback." Narrowing the definition to consensus, however, seems unduly restrictive given the many different settings in which Delphi studies are applied, for instance, to forecast future developments [3] or to discover and aggregate knowledge [20].

In recent years many variations of the Delphi procedure have been developed [21, 22]. More than 10 different variants have already been identified [23, 24]. The Delphi variants differ from each other in terms of process design, for instance, whether or not the Delphi rounds are held separately or overlap with each other, in the weighting of open-ended and standardized responses, and also in regard to the expert panel, e.g., group size and the handling of anonymity [24, 25]. Among the Delphi variants are both established variants and some that have hardly been used before:

  • Real-time Delphi, in which expert judgments are reflected back online and in real time. There are no clearly separate Delphi rounds [21, 26].

  • Delphi markets, where the Delphi concept is combined with virtual market platforms (prediction markets) and the findings of Big Data research to improve the ability to forecast the future and the quality of the basis for such predictions [27].

  • Policy Delphis are concerned with capturing dissent, meaning a wide range of diverse judgments [16, 28].

  • Argumentative Delphi, where the focus is on the qualitative reasoning for the experts’ quantitative evaluations [23].

  • Group Delphi, for which the experts are invited to a workshop to openly formulate and discuss arguments in favor of divergent judgments [29, 30].

  • Deliberative Delphi (citizens’ Delphi), in which citizens are surveyed iteratively. In between the Delphi rounds, they are trained to make informed and responsible judgments [31].

  • Fuzzy Delphi applies different analytical strategies to quantify the linguistic labels often used in the Likert scales to allow for potential differences in the understanding of these expressions when calculating mean values [32].

  • Café Delphi, in which a smaller number of experts are surveyed in an informal, "café-like" atmosphere [33].

A look at the paper published by Mullen in 2003 [34] makes it clear that this list is far from complete. She identifies more than ten additional Delphi variants (e.g., Delphi conference, decision Delphi, Delphi forecast, ranking Delphi), but without defining them more closely or differentiating them from one another. Furthermore, different systematic reviews report countless further modifications of Delphi procedures that are scarcely named or clearly defined [9, 35].

The differentiation between Delphi variants is accompanied by epistemological and methodological departures from the classic Delphi design, which also affect the typical characteristics listed above. Hence, the definition of "expert" is broadened to include not only people in certain professional positions or with recognized academic standing, but also people with a specific kind of lifeworld experience, which means that experts are not just members of certain professions, but also patients, patients' relatives, or users [36, 37].

From an epistemological standpoint, newer Delphi studies are often based on constructivist assumptions and use not only standardized questionnaires, but also explorative instruments in the form of open central questions [38] or workshops [39]. Ensuring anonymity, however, remains a constant in the evolution of the Delphi technique; the names of participating experts are published only in exceptional cases [30, 40].

Given the often considerably limited scope of journal articles, it is sometimes impossible to present and justify the selected Delphi variant and any modifications to it in a way that is sufficiently transparent to outsiders. The following look at publication practices suggests, at the least, how these aspects are currently addressed.

Reporting Delphi studies

Different systematic reviews document unclear or potentially misleading descriptions of approaches in Delphi studies [41]. There are sometimes even errors in the presentation of the method or statistical analysis [42]. As an example, even a single-round survey of experts is sometimes declared to be a Delphi study [43]. With respect to presenting the methodological approach, questions often remain unanswered, for instance, regarding the form of feedback [44], why the selected number of rounds was chosen [45], at what point "consensus" was defined [46], and how high the response rate for each Delphi round was [47].

A recognized reporting guideline can help to counteract such methodological misunderstandings and imprecisions. Ultimately, the quality of Delphi studies can also be improved through more transparency. This is the aim pursued by the present study concerning the development of the reporting guideline "DELPHISTAR—Delphi studies in social and health sciences—recommendations for an interdisciplinary standardized reporting."

Background

The scientific network DEWISS has set itself the goal of developing a reporting guideline for Delphi studies that is valid for the different Delphi variants and diverse fields within the health and social sciences (more information is available at https://delphi.ph-gmuend.de/). The German-speaking DEWISS Network comprises 20 scientists and academics from different subject areas and disciplines. All of the members conduct Delphi studies in the context of their research and grapple with the methodological and epistemological aspects of Delphi techniques. They perform methodological tests, carry out surveys to improve the methodological basis of Delphi studies, advise other researchers on how to conduct Delphi studies, and develop concepts and materials that can be used to teach about Delphi procedures (e.g., short videos at https://delphi.ph-gmuend.de/). Since its founding, the network has received funding from the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), an overarching institution supporting science and research in the Federal Republic of Germany (project number 429572724, time period 2020 to 2024).

Method

Using the acronym DELPHISTAR (OSF registration: https://osf.io/gc4jk), a multi-method research design consisting of three sub-studies was carried out (Fig 1).

Fig 1. Methods concept for developing the reporting guideline for Delphi studies.

Fig 1

  • First sub-study: In the first step, an overview of Delphi studies was created from a methodological standpoint [41]. A total of 16 previous reviews of Delphi studies were identified, systematically evaluated, and the results summarized in a map [41]. This showed, among other things, that there is a diversity of approaches and, in some instances, unexamined modifications to Delphi studies. These findings raised the research team's awareness of the relevant aspects and of the need for a reporting guideline.

  • Second sub-study: In a systematic review, ten earlier recommendations for reporting Delphi studies were identified, analyzed in terms of content, and examined for commonalities and differences [48]. In the course of this, it was seen, among other things, that these previous recommendations did not claim to have validity across disciplines or for different Delphi variants. The recommendations were often developed for a specific research area, e.g., palliative medicine [9] or medical education [49]. This is possibly the reason why the proposal published in the EQUATOR Network by Jünger et al. [9] did not result in any fundamental improvement in reporting practices [35].

  • Third sub-study: The results gathered from the first two sub-studies were discussed in the DEWISS Network and transformed into a comprehensive reporting guideline for Delphi studies. Consensus among additional Delphi experts was reached on this reporting guideline by means of a Delphi procedure. The selection of the Delphi method is justified by the fact that it is also recommended by other authors for the development of a reporting guideline [50]. The Delphi process is presented in the following.

The Delphi process

International experts on Delphi procedures were surveyed for the purpose of developing a reporting guideline for Delphi studies. The aim was to find consensus on the reporting criteria. The approach was based on the "classic" Delphi technique with three rounds that were carried out online (Fig 2). Digital collection of data is now an established part of Delphi procedures [25]. However, since our process exhibits the five typical characteristics of a Delphi procedure (see Introduction), we identify our study as a "classic" Delphi. In doing so, we assign relatively high importance to the free-text responses, in that we analyze them systematically, combine them with the quantitative data, and use them to fine-tune the wording of the reporting guideline.

Fig 2. Process of the Delphi study.

Fig 2

Questionnaire development

The questionnaire was developed by the DEWISS Network on the basis of the first two sub-studies [41, 48]. These sub-studies identified existing reporting guidelines and research methods, and the findings were synthesized during several DEWISS network meetings (Table 1). The results were incorporated in the first draft of the reporting guideline for Delphi studies. For this, we selected a structured sequence organized by topics and sections because this resembles established reporting guidelines, particularly the PRISMA guideline for systematic reviews [51].

Table 1. Reporting guideline.

Overview of the items that were evaluated according to topic and section.

Topic Section (n = number of items)
Title and abstract (n = 3) Title and abstract (n = 3)
Context (n = 16) Formal (n = 8)
Theory (n = 3)
Content (n = 5)
Method (n = 32) Body of knowledge & Integration of knowledge (n = 3)
Delphi variations (n = 2)
Sample of experts (n = 5)
Survey (n = 11)
Delphi rounds (n = 3)
Feedback (n = 4)
Data analysis (n = 4)
Results (n = 6) Delphi process (n = 5)
Results (n = 1)
Discussion and dissemination (n = 8) Quality of findings (n = 5)
Dissemination (n = 3)

The initial questionnaire thus contained items covering five topics, each with up to seven sections (Table 1). They are presented here as they appear in the final version of the reporting guideline.

The proposed content of the reporting guideline was queried in the form of standardized items on a 7-point rating scale ("1 = very unimportant, 7 = very important" or "1 = very unlikely, 7 = very likely") (Fig 3). Rating scales of different widths are well established in Delphi studies [9, 52]. Compared to ranking scales, they offer two advantages: firstly, they enable a separate evaluation of each item; secondly, an experimental study shows that completion time is shorter and cognitive effort lower for respondents [53], which is an important argument with regard to participant motivation. With this in mind, we deliberately chose an odd-numbered scale width, as recommended for Delphi studies by, for example, Taze et al. [54]. The items were deliberately formulated to be understandable without further explanation; even so, examples were included in some instances. Each item was programmed as a required question, which is why an opt-out option ("cannot evaluate this item") was always available.

Fig 3. An example of a page from the questionnaire on judgment certainty and a text box for comments (Source: Unipark).

Fig 3

Also, in all three of the rounds the experts were asked in a standardized manner about the certainty of their judgment ("1 = extremely uncertain, 7 = absolutely certain") so that this could be taken into consideration in the analysis. In the first and second Delphi rounds it was possible to comment freely after each topic (see Fig 3). The free-text boxes were each limited to 300 characters. In the third and final survey round it was possible to comment freely at the end of the survey without any limitations on the character count.

Also integrated into the survey were questions about the respondents’ expertise (discipline, country, experience with Delphi studies, proficiency as a Delphi practitioner). These served to describe the sample.

The survey was conducted in English. The initial questionnaire, including the reporting guideline, was translated by a native English speaker and then reviewed for accuracy by methods experts at the Leibniz Institute for the Social Sciences (GESIS), a renowned German research institute in the empirical social sciences. In all three of the Delphi rounds experts were requested not to use any machine translation tools in order to avoid any distortions as a result of translation errors.

The comprehensibility of the questions and the technical functioning of the online survey were tested prior to each Delphi round by DEWISS Network members who had not directly collaborated in the questionnaire development.

Selecting the experts

Considered as experts were academics who had conducted several Delphi studies themselves and/or who were working on methodological issues related to the Delphi technique. These experts were identified via publications. A search was conducted of two databases compiled by the DEWISS Network and freely accessible through ZOTERO [48]. The first database contains Delphi primary studies (available at: https://www.zotero.org/groups/4396781/dewiss_datenbanken_delphi-studien/collections/25H44TFI), and the second contains publications on the methodology of Delphi studies (e.g., reviews, methods experiments; available at: https://www.zotero.org/groups/4396781/dewiss_datenbanken_delphi-studien/collections/NGTBI3PE). Both databases were created in 2021 based on a systematic search of the literature in the central databases for the health and social sciences (Scopus, MEDLINE via PubMed, CINAHL and Epistemonikos) and contain Delphi studies and methods papers published between 2016 and 2021. The search was conducted using the keyword "delphi*" in the title or abstract. Publications were included if they were methodological publications regarding Delphi studies or Delphi primary studies in the health or social sciences. The collection of methods-based studies includes 155 papers and the one with Delphi primary studies comprises 7,044 papers [48]. Authors who had published at least five papers (n = 863) were selected from the primary study collection. All lead and senior authors (n = 228) were selected from the database containing the methods studies. Nineteen authors were present in both databases so that, in the end, 1,072 Delphi experts were identified and invited to participate in the Delphi study. The author information listed in the publications was used as the contact information. The sample contained 352 women and 710 men (10 unclear) from 47 countries (top 5: USA, England/UK, Australia, Canada, Italy).
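The panel size follows from simple inclusion-exclusion over the two author sets, since authors appearing in both collections were invited only once. The following sketch (ours, not part of the original analysis; the counts are taken from this section) illustrates the arithmetic:

```python
# Counts reported in this section: authors with >= 5 primary Delphi
# studies, lead/senior authors of methods papers, and their overlap.
PRIMARY_AUTHORS = 863
METHODS_AUTHORS = 228
IN_BOTH = 19

def invited_experts(primary: int, methods: int, overlap: int) -> int:
    """Inclusion-exclusion: authors present in both collections
    are counted (and invited) only once."""
    return primary + methods - overlap

print(invited_experts(PRIMARY_AUTHORS, METHODS_AUTHORS, IN_BOTH))  # 1072
```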

Participation in the Delphi study was voluntary and anonymous. Informed consent was obtained from all of the participants at the beginning of the survey using an online form. The study design complies with the Helsinki Declaration [55], with regard for the European General Data Protection Regulation [56] and the principles of the DFG [57].

Data collection

The programming and sending of the questionnaire was done using Unipark software [58]. The invitation email contained a personalized link to the questionnaire and a PDF attachment with the contents of the reporting guideline that were to be evaluated. The time period for the survey was always a minimum of four weeks, during which two to three reminders to participate in the Delphi study were sent (Fig 2). Along with each survey questionnaire, the experts also received a PDF of the preliminary reporting guideline. Each time it was made clear which items had been agreed on, which items had been reworded, and if any new items had been added.

First Delphi round

All of the identified experts (n = 1,072) were invited by email to participate in the first Delphi round. Due to security rules at some institutions, some of the emails were blocked, which is why only 87% (n = 934/1,072) of the emails were deliverable.

Second Delphi round

The initial questionnaire was revised based on the results of the first Delphi round: items that had reached consensus were removed, and the remaining items were reworded as necessary based on the free-text comments. The changes in wording were highlighted in color so that the experts could see and understand them. The revisions served to fine-tune the semantics and to validate the changes by passing them back to the surveyed experts [59]. This approach is often described in "classic" Delphi studies [60, 61].

The experts received feedback on the statistical group response (aggregated percent agreement on the scale points 6+7, mean value, standard deviation) from the previous round and a summary of the arguments made in the open-ended responses. In addition, the experts were able to see their own responses to the standardized items from the previous round. Furthermore, the definition of consensus was also communicated to the experts.

Experts who had completed the first round were contacted one week before the second Delphi round informing them about it and requesting them to participate again.

Third Delphi round

The questionnaire was revised anew and shortened based on the results of the second Delphi round. Shortening the questionnaire was also undertaken as a measure to maintain participants’ motivation to participate.

As feedback, experts received the statistical group response from the previous round and again were able to see their own responses to the standardized items. Since there were only a few new arguments in the open-ended responses and these had been integrated into the questionnaire as part of the revision process, no summary of the arguments made in the open-ended responses was included with the questionnaire at this point in the process. Changes in the wording were, however, again made visible using color highlighting.

Data analysis

Statistical analysis was performed using R [62]. The responses to the standardized questions were descriptively analyzed (absolute and relative frequencies, minimum, maximum, mean, median, standard deviation). Consensus was defined a priori as follows: Consensus for the inclusion of an item in the reporting guideline exists if at least 75% of the responses assign the scale values of 6 or 7 (very important) on the 7-point rating scale. From the second round onward, all items with a rejection rate of at least 50% were excluded, meaning that less than half of the responses assigned the scale values of 6 or 7 on the 7-point rating scale. Items for which consensus had already been reached were not presented again for evaluation in the subsequent rounds.
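The a priori decision rule and the feedback statistics can be summarized in a short sketch (in Python rather than R; function and variable names are ours). It classifies one guideline item from its 7-point ratings, treating the "cannot evaluate this item" option as a missing value:

```python
from statistics import mean, stdev

def classify_item(ratings):
    """Apply the a priori consensus rule to one guideline item.

    `ratings` contains integers 1..7; "cannot evaluate this item"
    answers are passed as None and excluded from the calculation.
    Returns the item's status plus the feedback statistics reported
    back to the panel (agreement on scale points 6+7, mean, sd).
    """
    valid = [r for r in ratings if r is not None]
    agreement = sum(r >= 6 for r in valid) / len(valid)
    if agreement >= 0.75:     # consensus: at least 75% chose 6 or 7
        status = "consensus (include)"
    elif agreement < 0.50:    # rejection: fewer than half chose 6 or 7
        status = "rejected"
    else:
        status = "re-evaluated in next round"
    return status, {"agreement": agreement,
                    "mean": mean(valid),
                    "sd": stdev(valid)}

# Example: 4 of 5 valid ratings are 6 or 7 -> 80% agreement -> consensus
status, stats = classify_item([7, 6, 6, 7, None, 3])
```

Items classified as "consensus (include)" were not presented again in subsequent rounds, mirroring the procedure described above.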

Analysis of the open-ended responses from the text boxes was done using the Argument-based QUalitative Analysis strategy (AQUA) [63] with Microsoft Word (2019). The AQUA method is based on established analytical methods in qualitative social research and was developed further for the analysis of qualitative data from Delphi studies. When applying the AQUA method, arguments from the open-ended responses are extracted and categorized by topic [63]. No quantification regarding frequency of mentions was undertaken. The arguments in each Delphi round were discussed in the DEWISS Network and, if needed, used to reword the items on the questionnaire.

Ethical approval

The ethics commission at the University of Education Schwäbisch Gmünd confirmed in writing on 10 July 2023 that a formal ethics vote was not required for this study.

Results

Of the 934 experts invited to the first Delphi round, 91 (10%) completed the survey. The second Delphi round had a response rate of 76% (n = 69/91), the third a response rate of 81% (n = 56/69). Overall, the experts hailed from 22 countries (round 1), 20 countries (round 2) and 19 countries (round 3), with about half of the experts working in one of five countries: USA, UK, Canada, Australia and China. The distribution in terms of region and discipline remained comparable across all rounds (Table 2). Between 87% and 89% of the experts in each of the rounds stated that they were associated with the health sciences; the others belonged primarily to the social sciences (Table 2). The central tendency of the number of publications by the experts is similar across all of the rounds. The number of Delphi studies personally conducted by the participating experts is on average clearly lower in the first Delphi round than in the two subsequent rounds. The results of the self-assessed expert profile and response behavior show only minor fluctuations in the relative frequencies across the rounds (Table 2). The majority of the experts judged their ability to apply classic Delphi techniques as excellent (scale points 6+7 out of 7), whereas less than 50% assessed their abilities as excellent in regard to the real-time Delphi, group Delphi and policy Delphi. For the other Delphi variants, only 5% or fewer of the experts judged their competence to be high.

Table 2. Composition of the expert panel.

First Delphi round (n = 91) Second Delphi round (n = 69) Third Delphi round (n = 56)
Country1 USA 18%2 17% 16%
UK 11% 13% 14%
Canada 11% 13% 13%
Australia 8% 10% 13%
China 5% 6% 7%
Discipline Humanities 3% 3% 2%
Health science 87% 87% 89%
Engineering science 2% 0% 0%
Other 8% 10% 9%
Number of Delphi studies participated in (not as a respondent) Mean (sd) 9.5 (9.2) 17.6 (63) 18 (67.7)
Median 6 9 7.5
Number of Delphi publications Mean (sd) 9.6 (15.0) 11 (18.6) 10 (15.8)
Median 6 6 6
Profiles of expertise on Delphi studies Delphi beginner 12% 10% 11%
Delphi user 53% 51% 50%
Delphi expert 35% 39% 39%
Response behavior Considered 34% 35% 36%
Intuitive 12% 9% 11%
Sometimes considered/ sometimes intuitive 51% 55% 52%
I can’t say 3% 1% 2%
Ability to apply Delphi variants (Scale: 1 = absolutely no ability to 7 = excellent ability)3 Scale value 6+7 in %, mean (sd)
Classic Delphi 68%, 5.9 (1.2) 72%, 5.9 (1.2) 75%, 6.0 (1.2)
Real-time Delphi 20%, 4.3 (1.9) 20%, 4.2 (2.0) 23%, 4.4 (1.9)
Group Delphi 34%, 5.0 (1.8) 42%, 5.1 (1.9) 41%, 5.0 (1.8)
Policy Delphi 19%, 4.0 (2.0) 17%, 3.9 (2.0) 21%, 4.1 (2.0)
Argumentative Delphi 5%, 3.1 (1.8) 4%, 2.9 (1.8) 2%, 2.7 (1.6)
Deliberative Delphi 5%, 3.1 (1.9) 3%, 2.9 (1.7) 5%, 2.9 (1.8)
Fuzzy Delphi 1%, 2.4 (1.4) 1%, 2.4 (1.5) 2%, 2.4 (1.5)

1Only the five most frequent countries are listed for this category. 2The percentages refer to the number of participants in a specific Delphi round. The given values have been rounded, whereby it is possible that rounding differences could result. 3The question about the ability to apply Delphi variants was not a required question in the first Delphi round.

All of the judgments were included in the analysis, and the statements on judgment certainty were taken into account when analyzing the items for content and revising the questionnaire because, in all of the Delphi rounds and for all of the topics, the experts on average (median 6) responded with good levels of judgement certainty and the variance among the responses was low (standard deviation ≤1.2).

In total, 65 items were presented for evaluation regarding the reporting guideline. At the end of the three Delphi rounds, consensus was found for the inclusion of 38 items in the reporting guideline for Delphi studies in the health and social sciences (S1 File). The points of agreement and disagreement are discussed below.

Topic: Title and abstract

Consensus was reached for all of the items asked about under the topic of Title and Abstract. The majority of the experts said it is important that Delphi studies can be identified through their titles and abstracts and that the abstract's content should be structured (Table 3).

Table 3. Results for the topic of Title and abstract.

No. Checklist (= Items) Consensus (Round) Agreement % (n)
1 Identification as a Delphi procedure in the title In (R2) 78% (n = 54)
2 Identification as a Delphi procedure in the abstract In (R1) 96% (n = 87)
3 Structured abstract (e.g., background, method, results and discussion) In (R2) 81% (n = 56)

*R1/R2/R3 Delphi round 1/2/3; n number; "No." refers to the item number in the final version of the reporting guideline

Topic: Context

The topic of Context was covered in three sections: formal, theory and content. For the section on formal aspects, it was possible to reach agreement on five items (Table 4). In the experts' opinion, information about funding sources, the author team, methods consulting, project background, and the study protocol are important topics for a Delphi reporting guideline. Dissent exists on whether information about the time point of a Delphi study, an ethics vote, or additional information about the project background needs to be reported. Regarding an ethics vote, one respondent commented that it is "typically not required to perform a Delphi in health sciences, since it does not involve human subjects" (free-text comment in the second Delphi round).

Table 4. Results for the topic of Context.

Section No. Checklist (= Items) Consensus (Round) Agreement % (n)
Formal 4 Information about the sources of funding In (R2) 81% (n = 56)
5 Information about the team of authors and/or researchers (e.g., discipline, institution) In (R1) 76% (n = 69)
6 Information about the methods consulting In (R2) 75% (n = 60)
7 Information about the project’s background In (R2) 82% (n = 55)
Time period in which the Delphi study was conducted No consensus 67% (n = 37)
8 Information about the study protocol In (R1) 76% (n = 68)
Information on the ethics vote should be provided. This also includes indicating if no vote was required by the responsible ethics committee No consensus 67% (n = 33)
Reference to additional information or materials about the project or Delphi study (e.g., online questionnaire, website on the project background) No consensus 57% (n = 32)
Theory Positioning within the philosophy of science (e.g., realistic, positivist, constructivist) Out (R2) 20% (n = 12)
Identification of the research paradigm (qualitative or quantitative or Mixed Methods) No consensus 52% (n = 28)
Statement of presuppositions (e.g., regarding potentially contradictory topics) Out (R2) 46% (n = 29)
Content Highlight why the Delphi study is relevant (e.g., due to research gaps or practical relevance to avoid "research waste") No consensus 71% (n = 39)
Reflection on the relevance of the Delphi procedure as a topic, taking social developments and innovations into account (e.g., the Covid-19 pandemic) Out (R2) 39% (n = 25)
9 Justification of the chosen method (Delphi procedure) to answer the research question In (R2) 83% (n = 57)
10 Aim of the Delphi procedure (e.g., consensus, forecasting) In (R1) 89% (n = 81)
Information if the Delphi study is combined with another study (e.g., systematic review to develop the questionnaire, focus group with patients to discuss the Delphi results) No consensus 70% (n = 39)

The experts did not agree to include any item from the section on theory in the reporting guideline (Table 4). In regard to the item about research paradigm, the free-text responses displayed opposing patterns of argument. Several of the respondents viewed Delphi studies as belonging to the quantitative paradigm ("A qualitative questionnaire is qualitative research, not Delphi"; commentary from the first Delphi round). For these experts, Delphi judgments have a universal and evidence-based character. Other respondents assigned Delphi studies to the qualitative paradigm ("A Delphi study has the aim to communicate and have a discussion, it is qualitative research"; commentary from the second Delphi round). This latter group emphasizes the relevance of open-ended questions in Delphi procedures, e.g., to gather context for specific judgments.

In the section covering content, justifying the selected method and stating the aim of a Delphi study are central elements of reporting (Table 4). What is not necessary, according to the respondents, is reporting within the context of current social developments. Disagreement remains about the items on making the relevance of a study clear. The argument against this is a pragmatic one, namely that a reporting guideline cannot cover all conceivable aspects.

Topic: Method

The topic of Method was divided into seven sections: body & integration of knowledge, Delphi variations, sample of experts, survey, Delphi rounds, feedback, and data analysis. Consensus was reached on all three items asked about in the section on the body & integration of knowledge (Table 5). Accordingly, the identification of relevant expertise, the handling of missing knowledge, and an explanation of who is considered an expert in a particular Delphi study are considered important aspects when reporting a Delphi study.

Table 5. Results for the topic of Method.

Section No. Checklist (= Items) Consensus (Round) Agreement % (n)
Body & Integration of knowledge 11 Identification and elucidation of relevant expertise, spheres of experience, and perspectives (e.g., theory, practice, affected groups, disciplines) In (R1) 78% (n = 69)
12 Handling of knowledge, expertise and perspectives which are missing or have been deliberately not integrated In (R1) 75% (n = 66)
13 Basic definition of expert1 In (R1) 79% (n = 71)
Delphi variations 14 Identification of the type of Delphi procedure and potential modifications (e.g., classic Delphi, real-time Delphi, group Delphi) In (R1) 80% (n = 71)
15 Justification of the Delphi variation and modifications, including during the Delphi process, if applicable In (R1) 79% (n = 70)
Sample of experts 16 Selection criteria for the experts (per round if there are different expert groups) In (R2) 94% (n = 65)
17 Identification of the experts In (R2) 78% (n = 54)
18 Information about recruiting and any subsequent recruiting of experts In (R2) 78% (n = 53)
Information about how refusals and dropouts are handled (e.g., number of reminders, non-response analyses) No consensus 73% (n = 41)
Anonymity of the experts Out (R2) 49% (n = 33)
Survey 19 Elucidation of the content development for the questionnaire2 In (R2) 81% (n = 55)
20 Description of the questionnaire (content and structure) In (R3) 86% (n = 48)
Number of questions (open, closed, hybrid) No consensus 66% (n = 37)
Reference to additional integrated materials or information (e.g., info boxes illustrating the current knowledge about the theme focused on) No consensus 52% (n = 28)
Information about and justification of the types of scales used (e.g., nominal scales, rating or ranking scales) No consensus 63% (n = 35)
Information about the graphic design of the questionnaire (e.g., use of figures) Out (R2) 28% (n = 19)
Information about the validity of the items/scales (e.g., information on the piloting of the questionnaire or the evaluation of validity) No consensus 56% (n = 31)
Information about the query regarding the experts’ degree of certainty or competency Out (R2) 43% (n = 28)
Information about the pretest for the questionnaire Out (R2) 42% (n = 28)
Length of time to fill out the questionnaire per round Out (R2) 35% (n = 24)
Information about the software used for the survey (e.g., soscisurvey, e-delphi) Out (R2) 39% (n = 27)
Delphi rounds 21 Number of Delphi rounds In (R1) 88% (n = 80)
22 Information about the aims of the individual Delphi rounds In (R1) 77% (n = 70)
23 Disclosure and justification of the criterion for discontinuation In (R1) 83% (n = 74)
Feedback 24 Information about what data was reported back per round In (R1) 86% (n = 77)
25 Information on how the results of the previous Delphi round were fed back to the experts surveyed (e.g., via frequencies, mean values, measures of dispersion, listing of comments) In (R3) 80% (n = 44)
26 Information on whether feedback was differentiated by specific groups (e.g., by field of expertise, institutional affiliation) In (R3) 76% (n = 41)
27 Information about how dissent and unclear results were handled In (R1) 86% (n = 78)
Data analysis 28 Disclosure of the quantitative and qualitative analytical strategy In (R1) 86% (n = 78)
Information about the software used for analysis (e.g., SPSS, R, MAXQDA) No consensus 54% (n = 30)
29 Definition and measurement of consensus In (R1) 95% (n = 86)
30 Information on group-specific analysis or weighting of experts (e.g., theory vs. practice, discipline-specific analysis) In (R1) 81% (n = 73)

1 *Note in the questionnaire: For us, “experts” are the participants; this can be people from academia, practice, or representatives of lived experience (e.g., patients, family members).

2 *Note: We use the term “questionnaire” for the survey instrument regardless of whether quantitative or qualitative items are integrated or weighted.

In the section addressing Delphi variations, the experts agreed that it is important to identify and justify the Delphi variants and any modifications (Table 5).

In the section on the sample of experts, the selection criteria, the identification of the experts, and information about the recruitment process must be described (Table 5). How anonymity was handled was not viewed as relevant by the experts. In the free-text comments, the argument for disclosing respondents’ identities was that it allows a better understanding of their judgments; the counterargument questioned whether the relevant people would still participate in that case. Dissent remained concerning the relevance of reporting dropouts.

Eleven items were proposed in the section on the survey, of which two reached agreement (Table 5). The experts considered a general description of the questionnaire’s development and the survey process to be relevant. Items regarding, among other things, the pretest of the questionnaire and the software used were either found irrelevant or remained in dissent.

In the sections about Delphi rounds and feedback, the experts agreed on reporting the number of rounds, the objectives of each Delphi round, the termination criterion, and a detailed description of the feedback’s design, including whether group-specific analyses were made available and, if applicable, how dissent was handled (Table 5).

In the section covering data analysis, it was agreed that the analytical methods applied to quantitative and qualitative data, the definition of consensus, and information regarding subgroup analysis or the weighting of the expert groups must be reported (Table 5). The percentage agreement for reporting the software used for analysis lies below the defined value for consensus.
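As an illustration of the consensus rule applied throughout this study (at least 75% of respondents assigning scale point 6 or 7 on the 7-point Likert scale), the computation can be sketched as follows. This is a minimal sketch for clarity, not the authors' analysis code; the function names are ours.

```python
# Sketch (not the authors' code): percentage agreement for one item under
# the study's rule: consensus if >= 75% of respondents assign scale
# point 6 or 7 on the 7-point Likert scale.

def agreement(ratings, top_points=(6, 7)):
    """Share (in %) of respondents choosing one of the top scale points."""
    if not ratings:
        raise ValueError("no ratings")
    hits = sum(1 for r in ratings if r in top_points)
    return 100 * hits / len(ratings)

def has_consensus(ratings, threshold=75.0):
    """True if the item meets the consensus criterion."""
    return agreement(ratings) >= threshold

# Example: 86 of 91 respondents rate an item 6 or 7 -> ~95% agreement
ratings = [7] * 50 + [6] * 36 + [5] * 3 + [4] * 2
print(round(agreement(ratings)))  # 95
print(has_consensus(ratings))     # True
```

Under this rule, an item such as "Definition and measurement of consensus" (95% agreement) is included, whereas "Information about the software used for analysis" (54%) is not.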

Topic: Results

The topic of Results comprised the two sections on Delphi process and results. In the section on Delphi process, there is consensus that the process, the number of experts per Delphi round, and any unexpected events during the Delphi process must all be reported (Table 6). Not included in the consensus are the reporting of sociodemographic characteristics and information about the experts’ competency. Emerging from the free-text comments is the observation that it is difficult to define and measure competence.

Table 6. Results for the topic of Results.

Section No. Checklist (= Items) Consensus (Round) Agreement % (n)
Delphi process 31 Illustration of the Delphi process (e.g., in a flow chart) In (R3) 75% (n = 42)
32 Information about special aspects during the Delphi process (e.g., deviations from the intended approach with justification) In (R2) 86% (n = 59)
33 Number of experts per round (both invited and participating) In (R1) 88% (n = 80)
Information about the experts’ sociodemographics per round Out (R2) 38% (n = 26)
Information about expert competency (e.g., via professional experience, institutional affiliation, expertise in relevant fields/disciplines, conflict of interests) No consensus 73% (n = 40)
Results 34 Presentation of the results for each Delphi round and the final results In (R1) 80% (n = 73)

In the section focused on results, the experts argued for presenting the results of each round (Table 6).

Topic: Discussion and dissemination

The topic of Discussion and Dissemination was subdivided into the two sections on quality of findings and dissemination. The section on quality of findings covers the reporting of a study’s results, the validity and reliability of the findings, and possible limitations of a Delphi study (Table 7). At 74%, agreement on the external validity of the results falls just under the cut-off value of 75%.

Table 7. Results for the topic of Discussion and dissemination.

Section No. Checklist (= Items) Consensus (Round) Agreement % (n)
Quality of findings 35 Highlighting the findings from the Delphi study In (R3) 89% (n = 49)
36 Validity of the results (e.g., transferability of the findings) In (R1) 78% (n = 69)
37 Reliability of the results (e.g., how many people analyzed the qualitative responses) In (R3) 80% (n = 43)
External validity of the findings No consensus 74% (n = 39)
38 Reflection on potential limitations (e.g., distortion, skewing, bias) In (R1) 89% (n = 81)
Dissemination Availability of the dataset No consensus 61% (n = 34)
Accessibility of the results for interested members of the public Out (R2) 49% (n = 34)
Information about further use of the results Out (R2) 42% (n = 29)

No items from the section on dissemination were included (Table 7).

Discussion

The proposed reporting guideline for Delphi studies in the health and social sciences encompasses a total of 38 items that have been agreed upon by an international expert panel of Delphi practitioners. Because it includes experts from different subject areas and with a broad range of Delphi knowledge, we assume that the DELPHISTAR Reporting Guideline will be well received by the scientific community. It is comparable in its scope to established guidelines, e.g., CONSORT [64] (37 items) and PRISMA [51] (42 items). The requirement of 75% for a consensus resulted in the exclusion of several items, some of which only very narrowly failed to meet this criterion; in future discussions of the reporting guideline, it would be worth considering including these items as "desirable" based on some type of grading system [65]. Ten items (e.g., external validity, information about expert competency) achieved a consensus ranging from more than 60% up to 74% in the third Delphi round. A consensus between 50% and 59% was reached in the third round for five items (e.g., information about the software used for analysis, information about the validity of the items/scales).

First and foremost, we expect an improvement in the reporting of Delphi studies. The potential for this is demonstrated by analyses of existing reporting guidelines: for instance, studies evaluating the Consolidated Standards of Reporting Trials (CONSORT) checklists show that use of the reporting guideline is associated with improved reporting of randomized controlled trials [66, 67]. We also expect a simplification or harmonization of the review process for Delphi studies and raised awareness among Delphi practitioners of the quality of Delphi studies.

That said, the implementation of this recommended guideline is also contingent on whether journals require and check for its use [67]. It is no less important for us, as the DEWISS Network, to promote DELPHISTAR, to familiarize the target fields with it, and to publish it in the EQUATOR network. In terms of dissemination, we intend to create our own website, share a short video via social media, and inform the publishers of relevant journals and Delphi practitioners via email. To this end, the participating experts will be explicitly asked, after the fact, for their evaluation of the reporting guideline and of their participation in the Delphi [52]. By doing this, we hope to gain information and insights concerning the quality of this Delphi study and of future Delphi procedures.

Several items did not reach agreement or failed to meet the previously defined criterion for consensus, which could possibly be traced back to a lack of methods research. This is seen in regard to three aspects:

  1. The agreement to exclude items involving theory is a sign of absent discussions about the theoretical positioning of Delphi studies. Nonetheless, this would still be important because the definition of an epistemological aim is directly connected with the selection of quality criteria for Delphi studies [68]. Delphi studies that are more qualitative must be measured against criteria such as transparency or intersubjective comprehensibility; whereas quantitative Delphi studies have more to do with criteria such as scale quality and reliability of the results [23]. Admittedly, no established criteria yet exist to evaluate the quality of Delphi studies, even though initial proposals are available [52, 69].

  2. The dissent around the items involving expert competency or scale validity could indicate that there is still too little methods research on this that investigates the potential influence of these aspects on judgement behavior and, thus, on the results [70].

  3. Evaluations of Delphi studies could also provide new information. To date, such evaluations are carried out only in individual instances [71], but could yield important insights regarding the participants’ motivations and judgment behaviors. This knowledge could also be relevant to further development of the Delphi reporting guideline.

We claim that DELPHISTAR can be used with different Delphi variants. From a quantitative perspective, one could critically note that most of the participating experts consider their expertise to lie in the classic Delphi, real-time Delphi, policy Delphi and group Delphi. This was to be expected because, despite increasing differentiation and methodological modification, these are the most frequently used Delphi variants [41, 72]. From a qualitative standpoint, we assume, based on our sampling method, that the individually surveyed experts have a very high level of proficiency in the Delphi techniques covered by the questionnaire. Despite this, we cannot determine with certainty that the items in the reporting guideline can be applied to all innovations and modifications of Delphi procedures. For this reason, we plan a further step: testing the reporting guideline on a defined random sample of publications in order to ensure feasibility.

Strengths and limitations

The results of our Delphi survey must be viewed in the context of the expert panel and the survey time point in 2022. We assume that the use and applicability of DELPHISTAR must be subject to ongoing critical reflection. It is possible that items which were not included in the reporting guideline will nevertheless be required by reviewers (e.g., "time period in which the Delphi study was conducted"). Furthermore, technical innovations, methodological developments and discussions regarding methods can affect Delphi studies, thus changing the criteria for reporting them (e.g., "information about the software used for analysis"). This suggests that discussions about the participation of affected persons in Delphi studies conducted in clinical or nursing contexts will become increasingly important, very possibly making methodological modifications to Delphi techniques necessary [73, 74]. Information regarding ethical approval would become much more important as a consequence.

In the Delphi study presented here, a response rate of approximately 10% was achieved, which is typical for international online Delphi studies [75]. Reasons why experts did not participate could include language barriers or not receiving the emails. Using private email addresses, as several authors recommend, would be one conceivable remedy [76]. The regular reminders may have been effective in encouraging participation across all three Delphi rounds, in that, among other things, the actual completion time (the average time of the experts who had participated up to that point) was included in the feedback.

The expert panel’s geographic heterogeneity was successfully maintained. Nevertheless, biases in the panel could be present due to the predominance of experts with a background in the health sciences. Furthermore, only Delphi experts who published between 2016 and 2021 were included. It is possible that, as a consequence, specialists who also possess a high level of expertise and an impressive publication history in this field were excluded.

A relatively strict consensus criterion of 75% was selected for this Delphi study, which results in items being either kept or rejected. Alternatively, the results could have been divided into different categories, for example into three: a) items of highly consensual and necessary inclusion (e.g., 75% and above), b) items of desirable and generally necessary inclusion (e.g., between 60% and 75%), and c) items of possible inclusion depending on the study and its objectives (less than 60%). Following this strategy might well have produced a more differentiated yet more complex reporting guideline.
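Such a three-category grading could be sketched as follows. This is purely illustrative of the alternative discussed above, not part of DELPHISTAR; the thresholds are the example values given, and the category labels and item names are ours.

```python
# Hypothetical three-category grading of items by agreement percentage,
# as an alternative to the strict 75% keep-or-reject rule (illustrative
# sketch only; not part of the DELPHISTAR guideline).

def grade(agreement_pct):
    if agreement_pct >= 75:
        return "required"   # a) highly consensual, necessary inclusion
    if agreement_pct >= 60:
        return "desirable"  # b) generally necessary inclusion
    return "optional"       # c) depends on study and objectives

# Agreement values taken from the tables above
items = {
    "definition and measurement of consensus": 95,
    "external validity of the findings": 74,
    "experts' sociodemographics per round": 38,
}
for name, pct in items.items():
    print(f"{name}: {grade(pct)}")
```

Under this scheme, an item such as "external validity of the findings" (74%) would be marked desirable rather than dropped outright.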

Supporting information

S1 File. Delphi studies in social and health sciences–recommendations for an interdisciplinary standardized reporting (DELPHISTAR).

(DOCX)

S2 File. DELPHISTAR–questionnaires and datasets of the Delphi rounds.

(ZIP)


Acknowledgments

We extend our thanks to all of the experts who participated in the Delphi study and contributed to the development of a reporting guideline for Delphi studies. We are grateful for their time, commitment, and expertise. The following experts participated in the survey rounds and gave their consent to be named (not all Delphi participants wished to be acknowledged):

Alam, M., Department of Dermatology, Northwestern University, USA

Backman, C., The University of British Columbia, Canada

Banno, M., Department of Psychiatry, Seichiryo Hospital, Japan

Bartoszko, J., Department of Anesthesia and Pain Management, Toronto General Hospital-University Health Network, Canada

Bloomfield, F., Liggins Institute, University of Auckland, New Zealand

Bober, M. B., Division of Orthogenetics, Nemours A.I. duPont Hospital for Children, USA

Chalkoo, M., Government Medical College Srinagar Kashmir, India

Chan, T. M., McMaster University, Canada

Chen, Y., Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University, China

Coscia, C., Department of Architecture and Design (DAD), Politecnico di Torino, Italy

de Luca, K., Discipline of Chiropractic, School of Health, Medical and Applied Sciences, CQUniversity, Australia

Demartines, N., University Hospital CHUV of Lausanne, Switzerland

Farzandipour, M., Kashan University of Medical Sciences, Iran

Goldstein, D., University of New South Wales, Australia

Goodman, C., Centre for Research in Public Health and Community Care (CRIPACC), University of Hertfordshire, United Kingdom

Grant, S., HEDCO Institute for Evidence-Based Educational Practice, College of Education, University of Oregon, USA

Hübner, M., Centre hospitalier universitaire vaudois, Switzerland

Harris, T., Polycystic Kidney Disease Charity, United Kingdom

Howell, M., University of Sydney, Australia

Huber, A. M., IWK Health and Dalhousie University, Canada

Jansen, T. L., VieCuri Medisch Centrum, Netherlands

Johnson, N., Consultant hand surgeon, Pulvertaft Hand Centre, United Kingdom

Julious, S. A., School of Health and Related Research, University of Sheffield, United Kingdom

Kenny, G. P., Human and Environmental Physiology Research Unit, University of Ottawa, Canada

Konge, L., Copenhagen Academy for Medical Education and Simulation (CAMES), University of Copenhagen, Denmark

Kopkow, C., Center for Evidence-Based Healthcare, University Hospital and Medical Faculty Carl Gustav Carus, TU Dresden, Germany

LaPrade, R. F., Twin Cities Orthopedics, USA

Lim, M., Evelina London Children’s Hospital; King’s Health Partners Academic Health Science Centre; Department Women and Children’s Health, School of Life Course Sciences (SoLCS), Faculty of Life Sciences and Medicine, Kings College London, United Kingdom

Loudovici-Krug, D., ÄMM Research Consultation Center, Institute of Physical and Rehabilitative Medicine, University Hospital Jena, Germany

Ma, Y., Department of Nursing, Chinese PLA General Hospital, China

MacLennan, S., Academic Urology Unit, Institute of Applied Health Sciences, University of Aberdeen, United Kingdom

Mokkink, L. B., Department Epidemiology and Data Science, Amsterdam UMC, Vrije Universiteit Amsterdam, Netherlands

Montero, M., Department of Environment, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Spain

Myles, P. S., Monash University, Australia

Nayahangan, L. J., Center for Human Resources and Education, Copenhagen Academy for Medical Education and Simulation (CAMES), Denmark

Pace, N. L., University of Utah, USA

Page, M. J., Monash University, Australia

Parente, F., Psychology Department, Towson University, USA

Payne, K., The University of Manchester, United Kingdom

Petrovic, M., Department of Internal Medicine and Paediatrics, Ghent University, Belgium

Sprung, C. L., Hadassah Medical Organization and Faculty of Medicine, Hebrew University of Jerusalem, Israel

Raveendran, K., Fatimah Hospital, Malaysia

Roller-Wirnsberger, R., Department of Internal Medicine Graz, Medical University of Graz, Austria

Sconfienza, L. M., University of Milano, Italy

Spinelli, A., Department of Biomedical Sciences, Humanitas University, Italy

van der Heijde, D., Leiden University Medical Center, Netherlands

Vohra, S., University of Alberta, Canada

Weissman, J. S., Center for Surgery and Public Health, Brigham and Women’s Hospital, Harvard Medical School, USA

Westby, M., Centre for Aging SMART Vancouver, Canada

Wu, Y., Peking University School of Public Health and Clinical Research Institute, China

Yadlapati, R., Division of Gastroenterology, University of California San Diego, USA

Yarris, L. M., Oregon Health & Science University, USA

Zhang, X., Hong Kong Baptist University, China

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

The authors are members of the DEWISS network. The DEWISS Network is supported by the German Research Foundation (project number: 429572724). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Khodyakov D, Grant S, Kroger J, Gadwah-Meaden C, Motala A, Larkin J. Disciplinary trends in the use of the Delphi method: A bibliometric analysis. PLoS ONE. 2023; 18:e0289009. doi: 10.1371/journal.pone.0289009
  • 2. Gerhold L, Bartl G, Haake N. Security culture 2030. How security experts assess the future state of privatization, surveillance, security technologies and risk awareness in Germany. Futures. 2017; 87:50–64. doi: 10.1016/j.futures.2017.01.005
  • 3. Cuhls K, Dragomir B, Gheorghiu R, Rosa A, Curaj A. Probability and desirability of future developments–Results of a large-scale Argumentative Delphi in support of Horizon Europe preparation. Futures. 2022; 138. doi: 10.1016/j.futures.2022.102918
  • 4. Surowiecki J. The wisdom of crowds. Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. 1st ed. New York: Doubleday; 2004.
  • 5. Hart LM, Jorm AF, Kanowski LG, Kelly CM, Langlands RL. Mental health first aid for Indigenous Australians: using Delphi consensus studies to develop guidelines for culturally appropriate responses to mental health problems. BMC Psychiatry. 2009; 9:47. doi: 10.1186/1471-244X-9-47
  • 6. Li L, Taeihagh A, Tan SY. What factors drive policy transfer in smart city development? Insights from a Delphi study. Sustainable Cities Soc. 2022; 84:104008. doi: 10.1016/j.scs.2022.104008
  • 7. Niederberger M, Renn O. Delphi Methods In The Social And Health Sciences. Concepts, applications and case studies. Wiesbaden: Springer; 2023.
  • 8. Flostrand A, Pitt L, Bridson S. The Delphi technique in forecasting–A 42-year bibliographic analysis (1975–2017). Technol Forecast Soc Change. 2020; 150:119773. doi: 10.1016/j.techfore.2019.119773
  • 9. Jünger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: recommendations based on a methodological systematic review. Palliat Med. 2017; 31:684–706. doi: 10.1177/0269216317690685
  • 10. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS ONE. 2011; 6:1–9. doi: 10.1371/journal.pone.0020476
  • 11. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014; 67:401–9. doi: 10.1016/j.jclinepi.2013.12.002
  • 12. Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000; 32:1008–15. doi: 10.1046/j.1365-2648.2000.t01-1-01567.x
  • 13. Gattrell WT, Hungin AP, Price A, Winchester CC, Tovey D, Hughes EL, et al. ACCORD guideline for reporting consensus-based methods in biomedical research and clinical practice: a study protocol. Res Integr Peer Rev. 2022; 7:3. doi: 10.1186/s41073-022-00122-0
  • 14. Gattrell WT, Logullo P, van Zuuren EJ, Price A, Hughes EL, Blazey P, et al. ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi. PLoS Med. 2024; 21:e1004326. doi: 10.1371/journal.pmed.1004326
  • 15. Linstone HA, Turoff M. The delphi method. Massachusetts: Addison-Wesley; 1975.
  • 16. Turoff M, Linstone HA, editors. The Delphi Method. Techniques and Applications. Boston: Addison-Wesley; 2002.
  • 17. Ab Latif R, Mohamed R, Dahlan A, Mat Nor MZ. Using Delphi Technique: Making Sense of Consensus in Concept Mapping Structure and Multiple Choice Questions (MCQ). EIMJ. 2016; 8. doi: 10.5959/eimj.v8i3.421
  • 18. Hsu C-C, Sandford BA. The Delphi Technique: Making Sense of Consensus. Pract. Assess. Res. Evaluation. 2007; 12:10. doi: 10.7275/pdz9-th90
  • 19. Dalkey N, Helmer O. An Experimental Application of the DELPHI Method to the Use of Experts. Manage Sci. 1963; 9:458–67. doi: 10.1287/mnsc.9.3.458
  • 20. Haque CE, Berkes F, Fernández-Llamazares Á, Ross H, Chapin FS III, Doberstein B, et al. Social learning for enhancing social-ecological resilience to disaster-shocks: a policy Delphi approach. DPM. 2022; 31:335–48. doi: 10.1108/DPM-03-2021-0079
  • 21. Aengenheyster S, Cuhls K, Gerhold L, Heiskanen-Schüttler M, Huck J, Muszynska M. Real-Time Delphi in practice—A comparative analysis of existing software-based tools. Technol Forecast Soc Change. 2017; 118:15–27. doi: 10.1016/j.techfore.2017.01.023
  • 22. John-Matthews JS, Wallace MJ, Robinson L. The Delphi technique in radiography education research. Radiography (Lond). 2017; 23:S53–S57. doi: 10.1016/j.radi.2017.03.007
  • 23. Hasson F, Keeney S. Enhancing rigour in the Delphi technique research. Technol Forecast Soc Change. 2011; 78:1695–704. doi: 10.1016/j.techfore.2011.04.005
  • 24. Niederberger M, Deckert S. Das Delphi-Verfahren: Methodik, Varianten und Anwendungsbeispiele. Z Evid Fortbild Qual Gesundhwes. 2022; 174:11–9. doi: 10.1016/j.zefq.2022.08.007
  • 25. Cuhls K. The Delphi Method: An Introduction. In: Niederberger M, Renn O, editors. Delphi Methods In The Social And Health Sciences. Wiesbaden: Springer Fachmedien Wiesbaden; 2023. pp. 3–27.
  • 26. Linstone HA, Turoff M. Delphi: A brief look backward and forward. Technol Forecast Soc Change. 2011; 78:1712–9. doi: 10.1016/j.techfore.2010.09.011
  • 27. Servan-Schreiber E. Prediction Markets. Collective Wisdom: Principles and Mechanisms. Cambridge: Cambridge University Press; 2012. pp. 21–37.
  • 28. Turoff M. The design of a policy Delphi. Technol Forecast Soc Change. 1970; 2:149–71. doi: 10.1016/0040-1625(70)90161-7
  • 29. Niederberger M, Renn O. The Group Delphi Process in the Social and Health Sciences. In: Niederberger M, Renn O, editors. Delphi Methods In The Social And Health Sciences. Wiesbaden: Springer Fachmedien Wiesbaden; 2023. pp. 75–91.
  • 30. Ives J, Dunn M, Molewijk B, Schildmann J, Bærøe K, Frith L, et al. Standards of practice in empirical bioethics research: towards a consensus. BMC Med Ethics. 2018; 19:68. doi: 10.1186/s12910-018-0304-3
  • 31. Grötker R. Expertenkonsultationen und Stakeholder-Befragungen mit Deliberativem Delphi. SSRN Journal. 2017:1–12. doi: 10.2139/ssrn.3256258
  • 32. Habibi A, Jahantigh FF, Sarafrazi A. Fuzzy Delphi technique for forecasting and screening items. Asia Jour Rese Busi Econ and Manag. 2015; 5:130–43. doi: 10.5958/2249-7307.2015.00036.5
  • 33. Jolly A, Caulfield LS, Sojka B, Iafrati S, Rees J, Massie R. Café Delphi: Hybridising ‘World Café’ and ‘Delphi Techniques’ for successful remote academic collaboration. Soc Sci Humanit Open. 2021; 3:100095. doi: 10.1016/j.ssaho.2020.100095
  • 34. Mullen PM. Delphi: myths and reality. J Health Organ Manag. 2003; 17:37–52. doi: 10.1108/14777260310469319
  • 35. Shang Z. Use of Delphi in health sciences research: A narrative review. Medicine (Baltimore). 2023; 102:e32829. doi: 10.1097/MD.0000000000032829
  • 36. Fernandes L, Hagen KB, Bijlsma JWJ, Andreassen O, Christensen P, Conaghan PG, et al. EULAR recommendations for the non-pharmacological core management of hip and knee osteoarthritis. Ann Rheum Dis. 2013; 72:1125–35. doi: 10.1136/annrheumdis-2012-202745
  • 37. Guzman J, Tompa E, Koehoorn M, de Boer H, Macdonald S, Alamgir H. Economic evaluation of occupational health and safety programmes in health care. Occup Med (Lond). 2015; 65:590–7. doi: 10.1093/occmed/kqv114
  • 38. Kelly M, Wills J, Jester R, Speller V. Should nurses be role models for healthy lifestyles? Results from a modified Delphi study. J Adv Nurs. 2017; 73:665–78. doi: 10.1111/jan.13173
  • 39. Teyhen DS, Aldag M, Edinborough E, Ghannadian JD, Haught A, Kinn J, et al. Leveraging technology: creating and sustaining changes for health. Telemed J E Health. 2014; 20:835–49. doi: 10.1089/tmj.2013.0328
  • 40. Nutbeam T, Fenwick R, Smith JE, Dayson M, Carlin B, Wilson M, et al. A Delphi study of rescue and clinical subject matter experts on the extrication of patients following a motor vehicle collision. Scand J Trauma Resusc Emerg Med. 2022; 30:41. doi: 10.1186/s13049-022-01029-x
  • 40.Nutbeam T, Fenwick R, Smith JE, Dayson M, Carlin B, Wilson M, et al. A Delphi study of rescue and clinical subject matter experts on the extrication of patients following a motor vehicle collision. Scand J Trauma Resusc Emerg Med. 2022; 30:41. doi: 10.1186/s13049-022-01029-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Niederberger M, Spranger J. Delphi technique in health sciences: A Map. Front Public Health. 2020; 8:1–10. doi: 10.3389/fpubh.2020.00457 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.von der Gracht HA. Consensus measurement in Delphi studies. Technol Forecast Soc Change. 2012; 79:1525–36. doi: 10.1016/j.techfore.2012.04.013 [DOI] [Google Scholar]
  • 43.Antonio CAT, Bermudez ANC, Cochon KL, Reyes MSGL, Torres CDH, Liao SASP, et al. Recommendations for Intersectoral Collaboration for the Prevention and Control of Vector-Borne Diseases: Results From a Modified Delphi Process. J Infect Dis. 2020; 222:S726–S731. doi: 10.1093/infdis/jiaa404 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Knight SR, Pathak S, Christie A, Jones L, Rees J, Davies H, et al. Use of a modified Delphi approach to develop research priorities in HPB surgery across the United Kingdom. HPB (Oxford). 2019; 21:1446–52. doi: 10.1016/j.hpb.2019.03.352 [DOI] [PubMed] [Google Scholar]
  • 45.Vicenzino B, Vos R-J de, Alfredson H, Bahr R, Cook JL, Coombes BK, et al. ICON 2019-International Scientific Tendinopathy Symposium Consensus: There are nine core health-related domains for tendinopathy (CORE DOMAINS): Delphi study of healthcare professionals and patients. Br J Sports Med. 2020; 54:444–51. doi: 10.1136/bjsports-2019-100894 [DOI] [PubMed] [Google Scholar]
  • 46.Eini Zinab H, Kalantari N, Ostadrahimi A, Tabrizi JS, Pourmoradian S. A Delphi study for exploring nutritional policy priorities to reduce prevalence of non-communicable diseases in Islamic Republic of Iran. Health Promot Perspect. 2019; 9:241–7. doi: 10.15171/hpp.2019.33 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Pavlova I, Petrytsa P, Andres A, Osip N, Khurtenko O, Rudenok A, et al. Measuring physical literacy in Ukraine: developmentof a set of indicators by Delphi method. Phys. Act. Rev. 2021; 9:24–32. doi: 10.16926/par.2021.09.04 [DOI] [Google Scholar]
  • 48.Spranger J, Homberg A, Sonnberger M, Niederberger M. Reporting guidelines for Delphi techniques in health sciences: A methodological review. Z Evid Fortbild Qual Gesundhwes. 2022; 172:1–11. doi: 10.1016/j.zefq.2022.04.025 [DOI] [PubMed] [Google Scholar]
  • 49.Humphrey-Murto S, Varpio L, Wood TJ, Gonsalves C, Ufholz L-A, Mascioli K, et al. The use of the Delphi and other consensus group methods in medical education research: a review. Acad Med. 2017; 92:1491–8. doi: 10.1097/ACM.0000000000001812 [DOI] [PubMed] [Google Scholar]
  • 50.Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010; 7:1–9. doi: 10.1371/journal.pmed.1000217 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021; 372. doi: 10.1136/bmj.n160 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Landeta J, Lertxundi A. Quality indicators for Delphi studies. Futures Foresight Sci. 2023. doi: 10.1002/ffo2.172 [DOI] [Google Scholar]
  • 53.Del Grande C, Kaczorowski J. Rating versus ranking in a Delphi survey: a randomized controlled trial. Trials. 2023; 24:543. doi: 10.1186/s13063-023-07442-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Taze D, Hartley C, Morgan AW, Chakrabarty A, Mackie SL, Griffin KJ. Developing consensus in Histopathology: the role of the Delphi method. Histopathology. 2022; 81:159–67. doi: 10.1111/his.14650 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. JAMA. 2013; 310:2191–4. doi: 10.1001/jama.2013.281053 [DOI] [PubMed] [Google Scholar]
  • 56.Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
  • 57.Deutsche Forschungsgemeinschaft. Guidelines for Safeguarding Good Research Practice. Code of Conduct. Zenodo; 2022. [Google Scholar]
  • 58.TIVIAN. unipark. Version EFS Fall 2022. TIVIAN; 2022. [Google Scholar]
  • 59.Okoli C, Pawlowski SD. The Delphi method as a research tool: an example, design considerations and applications. Inf. Manag. 2004; 42:15–29. doi: 10.1016/j.im.2003.11.002 [DOI] [Google Scholar]
  • 60.Lüke C, Kauschke C, Dohmen A, Haid A, Leitinger C, Männel C, et al. Definition and terminology of developmental language disorders-Interdisciplinary consensus across German-speaking countries. PLoS ONE. 2023; 18:e0293736. doi: 10.1371/journal.pone.0293736 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Chen S, Cao M, Zhang J, Yang L, Xu X, Zhang X. Development of the health literacy assessment instrument for chronic pain patients: A Delphi study. Nurs Open. 2023; 10:2192–202. doi: 10.1002/nop2.1468 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.R Core Team. R. A language and environment for statistical computing. R Core Team; 2021. [Google Scholar]
  • 63.Niederberger M, Homberg A. Argument-based QUalitative Analysis strategy (AQUA) for analyzing free-text responses in health sciences Delphi studies. MethodsX. 2023; 10. doi: 10.1016/j.mex.2023.102156 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010; 340:c332. doi: 10.1136/bmj.c332 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.German Association of the Scientific Medical Societies (AWMF). AWMF Guidance Manual and Rules for Guideline Development.; 2013. [Google Scholar]
  • 66.Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006; 185:263–7. doi: 10.5694/j.1326-5377.2006.tb00557.x [DOI] [PubMed] [Google Scholar]
  • 67.Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012; 1:60. doi: 10.1186/2046-4053-1-60 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Day J, Bobeva M. A Generic Toolkit for the Successful Management of Delphi Studies. Electron. J. Bus. Res. Methods. 2005; 3:103–16. [Google Scholar]
  • 69.Nasa P, Jain R, Juneja D. Delphi methodology in healthcare research: How to decide its appropriateness. WJM. 2021; 11:116–29. doi: 10.5662/wjm.v11.i4.116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Brookes ST, Macefield RC, Williamson PR, McNair AG, Potter S, Blencowe NS, et al. Three nested randomized controlled trials of peer-only or multiple stakeholder group feedback within Delphi surveys during core outcome and information set development. Trials. 2016; 17:1–14. doi: 10.1186/s13063-016-1479-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Turnbull AE, Dinglas VD, Friedman LA, Chessare CM, Sepúlveda KA, Bingham CO, et al. A survey of Delphi panelists after core outcome set development revealed positive feedback and methods to facilitate panel member participation. J Clin Epidemiol. 2018; 102:99–106. doi: 10.1016/j.jclinepi.2018.06.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Foth T, Efstathiou N, Vanderspank-Wright B, Ufholz L-A, Dütthorn N, Zimansky M, et al. The use of Delphi and Nominal Group Technique in nursing education: A review. IJNS. 2016; 60:112–20. doi: 10.1016/j.ijnurstu.2016.04.015 [DOI] [PubMed] [Google Scholar]
  • 73.Barrington H, Bridget Y, Paula R. Williamson. Patient participation in Delphi surveys to develop core outcome sets: systematic review. BMJ Open. 2021; 11:9. doi: 10.1136/bmjopen-2021-051066 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Campbell SM. How do stakeholder groups vary in a Delphi technique about primary mental health care and what factors influence their ratings. Qual. Saf. Health Care. 2004; 13:428–34. doi: 10.1136/qhc.13.6.428 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Beiderbeck D, Frevel N, Gracht HA von der, Schmidt SL, Schweitzer VM. Preparing, conducting, and analyzing Delphi surveys: Cross-disciplinary practices, new directions, and advancements. MethodsX. 2021; 8:101401. doi: 10.1016/j.mex.2021.101401 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 76.Helms C, Gardner A, McInnes E. The use of advanced web-based survey design in Delphi research. J Adv Nurs. 2017; 73:3168–77. doi: 10.1111/jan.13381 [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Monica Duarte Correia de Oliveira

23 Jan 2024

PONE-D-23-30916 Delphi studies in social and health sciences – recommendations for an interdisciplinary standardized reporting (DELPHISTAR). Results of a Delphi study. PLOS ONE

Dear Dr. Niederberger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Although the manuscript presents research of interest to the academic and practitioner community, as well as to PLOS ONE readers, some changes are required to improve its clarity, accuracy and depth of analysis.

Please submit your revised manuscript by Mar 08 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Monica Duarte Correia de Oliveira

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Did you know that depositing data in a repository is associated with up to a 25% citation advantage (https://doi.org/10.1371/journal.pone.0230416)? If you’ve not already done so, consider depositing your raw data in a repository to ensure your work is read, appreciated and cited by the largest possible audience. You’ll also earn an Accessible Data icon on your published paper if you deposit your data in any participating repository (https://plos.org/open-science/open-data/#accessible-data).

3. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed).

4. Thank you for stating the following financial disclosure: "all authors Deutsche Forschungsgemeinschaft (DFG) - Projektnummer 429572724 Network promotion". 

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

5. In the online submission form, you indicated that "The data underlying the results presented in the study are available from Prof. dr. Marlen Niederberger (marlen.niederberger@ph-gmuend.de)".

All PLOS journals now require all data underlying the findings described in their manuscript to be freely available to other researchers, either 1. In a public repository, 2. Within the manuscript itself, or 3. Uploaded as supplementary information.

This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If your data cannot be made publicly available for ethical or legal reasons (e.g., public availability would compromise patient privacy), please explain your reasons on resubmission and your exemption request will be escalated for approval.

Additional Editor Comments:

The study is on a relevant topic for the PLOS One audience. Nevertheless, in my opinion the manuscript needs to be revised before it can be considered for publication by the journal. Namely:

MAJOR COMMENTS

The study has developed an e-Delphi, which was also described as a classic Delphi. The authors should better describe what an e-Delphi is, and rethink whether the study is a classic Delphi given the features that follow.

There is an adoption of features in the Delphi study that raises issues – usually the items do not change across sequential rounds, but the authors describe: “The initial questionnaire was revised based on the results of the first Delphi round, meaning that consented items were removed and the remaining items were reworded as necessary”. However, rewording items between rounds raises issues for interpretation, comparability and analysis that are not reflected upon in the manuscript. The literature should frame why this procedure is acceptable, and I am not sure whether this is a classical Delphi or how it links to an enchained Delphi. It would also be important to understand whether participants were informed about the selected majority rules prior to the Delphi.

The authors need to explain how Table 1 was generated.

The authors need to explain the rationale and roots for using the selected importance scale.

The discussion on whether the proposed guideline is useful for distinct types of Delphi process types/variants needs to be extended and deepened (classical Delphi, policy Delphi, etc).

I do not understand the appropriateness of using the sentence “None of the experts faced any repercussions for deciding not to participate or for dropping out”.

As the manuscript claims that a key output is a complete guideline list, I wonder whether it could be presented or summarized within the manuscript.

The manuscript does not discuss future research (following the work done), and it should go deeper into the study limitations.

The authors should provide more detailed information on the search protocol used to get the two Zotero databases.

MINOR COMMENTS

I missed an explicit statement of the objectives before the methods section.

The authors should comment on what could be done to avoid “some of the emails were blocked”.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: PLOS One

Paper title: Delphi studies in social and health sciences – recommendations for an interdisciplinary standardised reporting (DELPHISTAR). Results of a Delphi study.

Ref: PONE-D-23-30916

Thank you for allowing me the opportunity to review this paper; it was a delight to read. The study adds to the evidence base relating to the reporting of Delphi studies. Whilst such guidelines do exist, this study builds upon this evidence and recent application examples to formulate a 3-round modified Delphi. The authors should be commended for the in-depth approach adopted. See below some minor reflections/suggestions:

As an observation, the rationale underpinning the use of guidelines could be brought more strongly to the fore, for example to address methodological misunderstandings and to enhance rigour and quality. Moreover, that this study addresses the need not for specific areas but generically across the health and social care sciences needs to be brought to the fore in the background, as well as consideration of why this wide field needs this evidence (i.e., is the method most utilised within this field?).

The low uptake of the Delphi within this study, relative to the total sample targeted, also needs to be addressed and may be related to the inclusion criterion requiring participants to read and speak English, hence limiting participation.

There is a sense from the discussion that this should be applied, but this depends upon journals' acceptance. However, would further expert panel consultation not also be beneficial prior to implementation? Moreover, how authors apply this may also need to be developed. We assume the existing frameworks are appropriate, but some blue-sky thinking on how to move these forward is rarely discussed.

Reviewer #2: The article presented is an interesting and necessary contribution to the study and improvement of the Delphi methodology. It is the result of a systematic and well-oriented research process, which has concluded with a Delphi study to propose the most important items, agreed with the scientific community that usually uses this technique, that should be included in the reporting of a Delphi study. This is the next logical and necessary step after the previous study carried out by the Niederberger and Spranger team (2020), which had conducted a systematic review of the methodological work published to date on the Delphi method and on the elements that the various authors recommended including in the reporting of results.

As one would expect, in this study the method has been used with rigor, the group of experts consulted is large and justified, the number of responses obtained is sufficient, the results are valuable and the reporting is very well done.

Nevertheless, we would like to make some comments or suggestions, in the hope that the authors will assess their applicability to the improvement of this work.

- Selection of experts. The expert selection process is perfectly justified and defined in the report. However, the decision criteria selected may, in our opinion, have excluded potential quality contributions. In other words, all participants are experts, but perhaps not all genuine experts were able to participate. The criterion of limiting the search to authors who published between 2016 and 2021 may have left out relevant authors. In the list of published experts I miss the main living methodologists of this technique, judged by the number of citations their works have accumulated: Rowe, Wright, Okoli, Pawlowski, Adler, Ziglio, Skulmoski, Krann, Landeta, Von der Gracht, Gordon, Pease, Tapio, Turoff, Hasson... They may have been invited and declined to participate, or they did not want to publicize their names, but the reality is that none of the most cited appears. Moreover, the profile of the experts consulted corresponds mainly to the health sciences and very little to the social sciences. It is not a question of repeating the study, but of indicating this deficiency more clearly in the final limitations.

- In order for an item to be accepted into the final list of items to be included in a report, the criterion was that more than 75% of the responding experts rated its importance with 6 or 7 points (out of 7). This is one possible criterion, but I believe it is too restrictive and leads to the loss of valuable information. It is true that the tables retain the % of consensus on each item, but the presentation of results simplifies the analysis to a Yes or No per item.

In my opinion, all the items included (65) have their importance and their reason to be in a final report, and even some more could have been included. Therefore, the results should be, at least, classified in three categories:

a. Items of highly consensual, necessary and recommended inclusion (e.g., above 75% consensus).

b. Items of desirable and generally necessary inclusion (e.g., between 50% and 75%).

c. Possible inclusion items, depending on the study and study objectives (less than 50%).

Or something similar.

- The contribution of the work is valuable and necessary, but at the end of the reading a slight disappointment remains, because a study carried out with such rigor and with the participation of so many experts limits itself to providing a list of items to be included in a report, hierarchized according to the degree of consensus they have reached. We receive no information on the reasoning these people used to support their position for, or against, considering each item as very important. I would ask the authors to provide in an auxiliary document the main arguments for and against the inclusion of each item in the final list, gathered from the experts' qualitative contributions.

- There are two final items that obtained a high consensus, Reliability of the results (80%) and External validity of the findings (74%), that could have been worked on more in this study by providing indicators of the quality of the work based on judgments external to the authors. For example, including a survey of the participating experts in which they are asked about the rigor with which the study was conducted, their confidence in the results, their satisfaction with the participation... A recent publication on this subject is Landeta and Lertxundi (2023). Quality indicators for Delphi studies. Futures & Foresight Science, e172.

In summary, it is an interesting and necessary article, which could be improved by considering the comments made on the limitations of its expert participants, the classification of the items analyzed, the additional qualitative information that could be provided and the inclusion of some indicator of the quality of the study external to the authors who carried it out.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Dr Felicity Hasson

Reviewer #2: Yes: Jon Landeta

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Aug 26;19(8):e0304651. doi: 10.1371/journal.pone.0304651.r002

Author response to Decision Letter 0


20 Mar 2024

Dear editor,

Dear Dr. Felicity Hasson,

Dear Professor Jon Landeta,

Thank you for your valuable insights and comments on our manuscript, "Delphi studies in social and health sciences – recommendations for an interdisciplinary standardized reporting (DELPHISTAR). Results of a Delphi study." We have given your constructive feedback careful consideration and used it to improve the quality and clarity of our paper. The comments regarding the detailed description of the methodological approach have improved the transparency and understandability of our approach. Above all, your remarks on the study's limitations and the discussion section were valuable in expanding on how to interpret the study results.

We very much appreciate the time and effort that you have spent evaluating our manuscript, and we are happy to send you now the revised version.

With kind regards,

Prof. Dr. Marlen Niederberger on behalf of the authors

Attachment

Submitted filename: Response to Reviewers_final ENGL.docx

pone.0304651.s003.docx (27.2KB, docx)

Decision Letter 1

Monica Duarte Correia de Oliveira

26 Apr 2024

PONE-D-23-30916R1 Delphi studies in social and health sciences – recommendations for an interdisciplinary standardized reporting (DELPHISTAR). Results of a Delphi study. PLOS ONE

Dear Dr. Niederberger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

The authors have improved the manuscript, considering all suggestions and comments, and the manuscript now provides a sound reporting guideline (DELPHISTAR) that will be helpful for those developing Delphi studies. Before acceptance for publication, the authors should make the minor revision suggested by the referee.

==============================

Please submit your revised manuscript by Jun 10 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Monica Duarte Correia de Oliveira

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have successfully incorporated the recommendations suggested by the reviewers. The work is now clearer, more open, more self-critical and more transparent.

The decision to make the supporting information of the study available to readers is appreciated.

In sum, it is a rigorous work that makes a necessary and valuable methodological contribution to the development and scientific consolidation of the Delphi method.

A minor comment:

In the revised text the first necessary feature of a Delphi study "Experts are surveyed while typically preserving their anonymity" (line 24 of the first draft) has been deleted. I assume this is a mistake. In addition, both line 24 and line 135 of the corrected manuscript still refer to the five typical characteristics of Delphi studies, despite the fact that only four are included in the new version.

As far as I am concerned, once this error has been corrected, the article is perfectly publishable.

Thank you very much.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Aug 26;19(8):e0304651. doi: 10.1371/journal.pone.0304651.r004

Author response to Decision Letter 1


13 May 2024

Dear editor,

Dear reviewer,

Thank you for your prompt response and positive evaluation of our paper "Delphi studies in social and health sciences – recommendations for an interdisciplinary standardized reporting (DELPHISTAR). Results of a Delphi study". We have re-added the missing item to the list of Delphi characteristics and are grateful to you for drawing our attention to it. Furthermore, we have minimally changed the phrasing in several sentences. All of the changes are highlighted in color.

Overall, we want to thank you for the helpful information and respect shown during the entire review process. We are happy to have the paper published soon.

With kind regards,

Prof. Dr. Marlen Niederberger on behalf of the authors

Attachment

Submitted filename: Response to Reviewers_2 ENGL.docx

pone.0304651.s004.docx (18.3KB, docx)

Decision Letter 2

Monica Duarte Correia de Oliveira

16 May 2024

Delphi studies in social and health sciences – recommendations for an interdisciplinary standardized reporting (DELPHISTAR). Results of a Delphi study.

PONE-D-23-30916R2

Dear Dr. Niederberger,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Monica Duarte Oliveira

Academic Editor

PLOS ONE


Acceptance letter

Monica Duarte Correia de Oliveira

16 Jul 2024

PONE-D-23-30916R2

PLOS ONE

Dear Dr. Niederberger,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Monica Duarte Correia de Oliveira

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Delphi studies in social and health sciences – recommendations for an interdisciplinary standardized reporting (DELPHISTAR).

    (DOCX)

    pone.0304651.s001.docx (20.5KB, docx)
    S2 File. DELPHISTAR–questionnaires and datasets of the Delphi rounds.

    (ZIP)

    pone.0304651.s002.zip (732.7KB, zip)
    Attachment

    Submitted filename: Response to Reviewers_final ENGL.docx

    pone.0304651.s003.docx (27.2KB, docx)
    Attachment

    Submitted filename: Response to Reviewers_2 ENGL.docx

    pone.0304651.s004.docx (18.3KB, docx)

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.


