VA Author Manuscripts. Author manuscript; available in PMC: 2022 Aug 1.
Published in final edited form as: Prof Psychol Res Pr. 2021 Jun 10;52(4):376–386. doi: 10.1037/pro0000371

A Practical Guide to Applying the Delphi Technique in Mental Health Treatment Adaptation: The Example of Enhanced Problem-Solving Training (E-PST)

Paul R King Jr 1,2, Gregory P Beehler 1,3, Kerry Donnelly 2,4,5, Jennifer S Funderburk 6,7, Laura O Wray 1,8,9
PMCID: PMC8384080  NIHMSID: NIHMS1732915  PMID: 34446984

Abstract

Expert consensus methods, such as the Delphi procedure, are commonly employed in consumer, education, and health services research. However, the utility of this methodology has not widely been described in relation to mental health treatment adaptation efforts. This gap is noteworthy given that evidence-based treatments are often modified in terms of core intervention content, method of delivery, and target populations. Expert consensus methods such as the Delphi procedure offer multiple practical benefits (e.g., flexibility, resource-efficiency) for psychologists who need to adapt existing treatments to meet new research and clinical practice needs. The purpose of this paper is to provide a brief overview of the Delphi procedure, and to offer a practical guide to using this method for treatment adaptation. An example is offered using our team’s application of a three-round Delphi procedure to render content and context modifications to an existing problem-solving intervention to optimize its use with a new treatment population. Data were collected from Department of Veterans Affairs clinical subject matter experts. Round 1 utilized semi-structured interviews to determine necessary protocol features and modifications. Rounds 2-3 utilized a forced-choice survey and feedback loop to evaluate expert consensus. More than 91% of rated items reached consensus following Round 2, with the remainder following Round 3. Recommended modifications included minor structural and content edits, and re-balancing time allotments. We conclude that consensus methods may facilitate treatment adaptation efforts, enhance treatment feasibility, and promote content and ecological validity. Considerations for future Delphi-based treatment adaptations are offered.

Keywords: concussion, Delphi procedure, treatment adaptation, integrated primary care, Veterans


Psychologists often face demands to alter evidence-based treatments to meet novel clinical and research needs. Common areas of modification include core intervention content (e.g., adding, blending, reducing or tailoring treatment elements) or treatment context, including treatment method (e.g., individual vs. group; face-to-face vs. virtual) and target of delivery (i.e., new or unique service population) (Stirman, Miller, Toder, & Calloway, 2013). Though these modifications may prove necessary to address real-world practice demands (Stirman et al., 2013) and to improve cultural relevance (Dinos, 2015), rendering nonsystematic adaptations runs the risk of compromising treatment fidelity by fundamentally altering essential therapeutic ingredients. However, it is not feasible to conduct rigorous large-scale studies with multiple permutations of significant content or context edits. Use of an expert consensus method is one potential way to guide treatment adaptation efforts in a systematic, but feasible and resource-efficient manner.

Expert consensus methods are commonly employed in consumer, education, and health services research (Hsu & Sanford, 2007). Although specific procedures vary, the primary aim is to distill opinions from a group of subject matter experts (SMEs; i.e., individuals with advanced or extensive knowledge, abilities, or experience in the topic of interest) to facilitate decision-making (Hasson, Keeney, & McKenna, 2000; McKenna, 1994). General steps include problem identification, data gathering and evaluation of agreement/disagreement, resolution of disagreement, and ultimately convergence of opinion. These methods are perhaps most useful in addressing two key data problems that lead to uncertainty in decision-making, namely an information shortage (i.e., insufficient scientific evidence exists to chart a path forward), or information overload (e.g., contradictory scientific evidence exists rendering clinical or business decision-making difficult) (Hasson et al., 2000; Jones & Hunter, 1995; Powell, 2003).

The Delphi technique (Dalkey & Helmer, 1963) is one specific example of an expert consensus method. This approach is a flexible, iterative means to gather and consolidate multiple rounds of SME opinions in a given domain (McKenna, 1994). Prototypical procedures include: a) identification of a research question/problem statement; b) identification and selection of SMEs based on a priori criteria (e.g., years of clinical/research experience, relevant scientific publications); c) data collection and analysis; d) implementation of a multi-round, structured feedback loop between evaluators and SMEs; and e) a final decision-making process (Hasson et al., 2000; Hsu & Sanford, 2007). Common recommendations for this approach indicate that diversity among SMEs is favorable over homogeneity (Powell, 2003) and that the selected procedures purposefully align with the overall study objective(s) (Keeney, Hasson, & McKenna, 2006). The general structure of the feedback loop is such that SMEs first review and respond to a specified stimulus (Round 1). In turn, the evaluators consolidate and interpret SMEs’ ratings and corrective feedback, edit the stimulus accordingly, and return a revised stimulus to the SME panel along with a summary of their individual and group-level responses. Round 2 ratings are then based on the revised stimulus materials, following a similar review and rating procedure. Subsequent rounds follow the same process, continuing until the predetermined consensus threshold is reached. Typically, two to three rounds are required (Hsu & Sanford, 2007; McMillan, King, & Tully, 2016).
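The multi-round feedback loop described above is straightforward to express in code. The sketch below is illustrative only: the callables, data shapes, and toy votes are our assumptions, not part of any published Delphi protocol.

```python
# Minimal sketch of the generic Delphi feedback loop.
# `collect_ratings` and `revise_stimulus` stand in for whatever
# instruments and revision process a given study actually uses.

def delphi_loop(stimulus, collect_ratings, revise_stimulus,
                consensus_threshold=0.80, max_rounds=4):
    """Run rounds until the share of 'agree' votes meets the threshold."""
    agreement = 0.0
    for round_num in range(1, max_rounds + 1):
        ratings = collect_ratings(stimulus, round_num)  # list of 0/1 votes
        agreement = sum(ratings) / len(ratings)
        if agreement >= consensus_threshold:
            return stimulus, round_num, agreement       # consensus reached
        # Otherwise revise the stimulus from the feedback and redistribute.
        stimulus = revise_stimulus(stimulus, ratings)
    return stimulus, max_rounds, agreement

# Toy run: agreement rises from 60% to 80% after one revision.
votes_by_round = {1: [1, 1, 1, 0, 0], 2: [1, 1, 1, 1, 0]}
final, rounds_used, agreement = delphi_loop(
    "draft manual",
    collect_ratings=lambda s, r: votes_by_round[r],
    revise_stimulus=lambda s, votes: s + " (revised)",
)
```

Consistent with the two-to-three rounds typically reported (Hsu & Sanford, 2007), the toy run converges in two rounds.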

Because of the flexible nature of the Delphi method, the specific questions and procedures used by evaluators vary widely. For instance, variability has been noted in terms of identification and selection of SMEs (e.g., panel size, member anonymity) (McMillan et al., 2016); thresholds used to determine consensus (e.g., percent agreement vs. mean/median item-ratings); and implementation of the feedback loop (e.g., approach used to develop stimulus materials, number and duration of rounds) (Keeney et al., 2006). Quantitative assessment of agreement is most commonly employed, though the process may also incorporate qualitative components; these applications likewise vary (e.g., use of SME interviews/focus groups vs. open-text survey fields) (Jorm, 2015; Keeley et al., 2016; Skulmoski, Hartman, & Krahn, 2007). Exemplar uses of the Delphi procedure include, but are not limited to, development of assessment tools (Beehler, Funderburk, Possemato, & Vair, 2013; Biondo, Nekolaichuk, Stiles, Fainsinger, & Hagen, 2008); selection of health outcome measures (Santaguida et al., 2018) and quality of care indicators (Boulkedid, Abdoul, Loustau, Sibony, & Alberti, 2011); specifying clinical practice standards (Goodyear et al., 2015; Hill, Shand, Torok, Halliday, & Reavley, 2019; van der Linde, Hofstad, van Limbeek, Postema, & Geertzen, 2005); generating policy (Aarts, Schuit, van de Goor, & Oers, 2011); and identifying health care research priorities (Mulligan & Conteh, 2016; Turner, Ollerhead, & Cook, 2017; Yotebieng et al., 2019).

In relation to mental health (MH) research, a 2015 review by Jorm documented 176 discrete consensus studies with topics ranging from the conceptual (e.g., operationally defining “relapse”) to practical (e.g., ways to improve MH practices and training standards). Missing from extant MH applications (Jorm, 2015) are treatment adaptation efforts. This gap is noteworthy, given the ubiquity of adaptation in this arena (Stirman et al., 2013). Although the use of expert consensus methods, such as the Delphi technique, does not substitute for an empirical test of effectiveness, it does offer several practical benefits for clinicians and researchers who need to adapt MH treatments under time- or resource-constrained circumstances. Advantages include the method’s flexibility, ability to capitalize on multiple sources of expertise without excessive sample size demands, and time and resource-efficiency: no complicated analytic methods, tools, or software are required; the procedure is easily learned; consensus interpretations are often simple and intuitive (e.g., percent agreement, mean/median Likert-type ratings); and the process can often be completed in several months (Keeney et al., 2006). In our recent work, we have emphasized enhanced access to mental health care for U.S. veterans through primary care-mental health integration, and found the Delphi procedure to be a suitable fit for our aim of adapting an intervention to address population health concerns related to chronic post-concussion-like symptoms and co-occurring MH disorders in this cohort. In this context, where the need for rapid-turnaround MH services is often great (Gale et al., 2019) in order to meet organizational, operational, and consumer demands, use of the Delphi procedure was especially advantageous.
The purpose of this paper is to provide a practical example of the rationale and use of the Delphi procedure employed in our team’s MH treatment adaptation and testing research so that others might use a similar methodology.

Applying the Delphi Framework in Treatment Adaptation

The sections below present a step-by-step guide to using the Delphi procedure for treatment adaptation, along with a description of how we operationalized each step in our study. An overall summary of the Delphi framework and our exemplar procedure can be found in Table 1.

Table 1.

Proposed Delphi Flow Process for Treatment Adaptation

(Columns: Delphi Procedure | Example | Action/Outcome/Deliverable)

Step 1. Identify Research Question/Problem Statement
• Procedure: Need to modify content/context of treatment
 Example: Adapt “enhanced” version of PST-PC:
  • Incorporate compensatory cognitive skills training
  • Tailor to veterans with history of concussion
  • Feasible for delivery in integrated primary care
 Outcome/Deliverable: Target deliverable: adapted treatment manual

Step 2. Identify & Select SMEs
• Procedure: Specify method (non-random preferred)
 Example: Purposeful sampling
 Outcome/Deliverable: 7 SME candidates contacted
• Procedure: Specify expertise needed
 Example: Three essential content domains:
  • Concussion rehabilitation
  • Integrated primary care
  • Brief problem-solving interventions
 Outcome/Deliverable: All agreed to participate:
  • 2 neuropsychologists
  • 2 psychiatrists, 1 psychologist
  • 2 psychologists a

Step 3. Establish Data Collection & Analytic Procedures
• Procedure: Specify design (quantitative/qualitative, mixed)
 Example: Mixed-method (sequential, convergent) Delphi study
 Outcome/Deliverable: Qualitative interview + survey
• Procedure: Specify analytic paradigm
 Example: Rapid analysis (qualitative), descriptive statistics (quantitative)
• Procedure: Operationally define consensus
 Example: ≥80% agreement (content is “acceptable as-is”)
• Procedure: Establish communication medium
 Example: Remote; SMEs anonymous to one another but not to study team
 Outcome/Deliverable: 1:1 email and telephone calls
• Procedure: Create stimulus materials (materials vary by the goal of each round)
  • Prepare and orient SMEs to role
   Outcome/Deliverable: Study team created introductory materials (study overview/rationale, role induction) and content primers (clinical practice guideline excerpts, original PST-PC manual, outline of proposed content changes)
  • Collect data (acceptability, feasibility, integrity): interview, surveys
   Outcome/Deliverable: Study team created: Round 1: 11-item semi-structured interview; Round 1: 7-item professional background survey; Round 2: 58-item survey & 5 open-ended questions; Round 3: 5-item survey
  • Data summary sheets: qualitative, quantitative
   Outcome/Deliverable: Study team created: bulleted list of critical feedback (rapid analysis); summary of forced-choice ratings (descriptives)

Step 4. Collect & Analyze Data
• Procedure: Multi-round feedback loop
 Example: Planned 3-4 rounds (interview, survey x 2, teleconference)
 Outcome/Deliverable: 3 rounds needed (interview, survey x 2)
• Procedure: Gather/consolidate actionable data (varies by round)
 Round 1 (R1):
  • Outgoing to SMEs: introductory materials (study overview/rationale, role induction), content primers (clinical practice guideline excerpts, original PST-PC manual, outline of proposed content changes), professional background survey
  • Incoming from SMEs: professional background survey, interview feedback
  • Outcome/Deliverable: distributed materials to 7 SMEs; interviewed 7 SMEs; summarized background data; created R1 feedback summary
  • Interim edits: drafted treatment manual; created 58-item R2 survey
 Round 2 (R2):
  • Outgoing to SMEs: R1 feedback summary, first draft of treatment manual, R2 survey
  • Incoming from SMEs: R2 survey with open-text feedback
  • Outcome/Deliverable: distributed materials to 7 SMEs; received surveys from 5 SMEs; created R2 feedback summary (91.4% of items reached consensus; 8.6% required edits to content, e.g., additions, clarifications)
  • Interim edits: revised treatment manual; created 5-item R3 survey
 Round 3 (R3):
  • Outgoing to SMEs: R2 feedback summary, second draft of treatment manual, R3 survey
  • Incoming from SMEs: R3 survey
  • Outcome/Deliverable: distributed materials; received surveys from 5 SMEs; created R3 feedback summary (100% consensus)
• Procedure: Close feedback loop
 Outcome/Deliverable: distributed R3 feedback summary; finalized treatment manual

Step 5. Decision-Making
• Procedure: Final assessment (e.g., further study vs. implement)
 Example: Evaluate sufficiency/quality of product
 Outcome/Deliverable: Study team determined sufficient evidence exists to move forward with additional study.
• Procedure: Next steps
 Example: Prepare and use final deliverable
 Outcome/Deliverable: Adapted treatment manual is now being used in a clinical trial.

Note: a Psychologists in this category were also considered integrated primary care experts. PST-PC = Problem-Solving Training in Primary Care; R1 = Round 1; R2 = Round 2; R3 = Round 3; SME = subject matter expert.

Step 1: Identify Research Question/ Problem Statement

The first step in the process is to identify the question that will be addressed by using the Delphi method, and any deliverables that will be created. In the case of treatment adaptation, the primary question would generally pertain to the scope and sufficiency of content and context edits proposed to implement a treatment in a new setting or with a new population. Defining the question in this manner will point directly to the nature of subject matter expertise that will be articulated in Step 2 below. The specific deliverable would be an adapted treatment manual that reflects the aforementioned modifications. In research applications, this early stage is also when necessary regulatory approvals are obtained.

Step 1 Example

Our expert consensus study comprised the first phase in an ongoing, multi-part MH treatment adaptation and evaluation project. The overall aim of the parent project is to adapt and test a brief, skills-focused intervention that blends compensatory cognitive skills training with a pre-existing problem-solving intervention (i.e., Problem-Solving Training in Primary Care [PST-PC]; Miller et al., 2015). The specific deliverable for this project was an integrated primary care MH treatment manual that a) blended select content from two behavioral interventions (content modifications), and b) was tailored to combat veterans with a history of concussion (context modifications). We sought SME feedback to ensure the integrity of the blended treatment and to promote feasibility in the intended clinical setting (i.e., integrated primary care). Regulatory approvals were granted prior to commencing work: this study was deemed Institutional Review Board-exempt, though scientific methods were approved by the Research and Development committee at the VA Western New York Healthcare System.

Step 2: Identify and Select Qualified SMEs

The second step in the process is to identify and select members of the SME panel. Ideally, the panel will be comprised of individuals with requisite clinical and/or scientific knowledge and/or experience to address the question at hand, though the precise depth and breadth of credentials is unique to a given application. As noted by Powell (2003, p. 379), “credibility with the target audience” is an important consideration. Thus, in some applications it may be sufficient for SMEs to be proficient in a given methodological or therapeutic technique. Others may require more extensive credentials, such as additional experience in implementing or evaluating policy in the same area. Pragmatic indicators of expertise (e.g., years of experience, number of publications, positions of leadership) may also be deemed critical. It is up to the evaluation team to operationalize the definition of SME in a manner suitable to their aims, and to fashion the panel with individuals likely to make the best use of existing empirical or clinical information. Nonrandom sampling procedures are especially useful in this regard as a statistically representative sample is not needed. It is worth noting, however, that the chosen sampling method will likely guide the minimum sample size determination, and the team may need to identify specific plans to address retention and attrition considerations. In the case of treatment adaptation, representation from SMEs with knowledge in the specific intervention or a substantively similar content area, treatment milieu (e.g., if treatment setting is altered from the original intended setting or feasibility questions exist), and service population would be exemplar areas, though the list could be expanded or contracted if indicated. Early identification of SMEs, along with establishing their commitment to provide feedback, is advantageous. Maintaining anonymity among experts is not required per se but can reduce the likelihood of a consensus built on only the input of highly influential opinion leaders (McMillan et al., 2016).

Step 2 Example

We used purposeful sampling (Palinkas et al., 2015) to identify the SME pool, which included seven licensed MH professionals with knowledge and experience in three domains that we determined were relevant to our research question: two neuropsychologists with expertise in concussion rehabilitation; two psychiatrists and one psychologist with expertise in integrated primary care; and two practicing integrated primary care psychologists with expertise in brief problem-solving interventions. All SMEs were full-time employees of the Department of Veterans Affairs (VA), which we deemed essential as the intervention was designed for delivery in the VA health care system. These professionals were considered “critical cases” (Palinkas et al., 2015) in the sense that their feedback would allow for logical (not statistical) generalization to others with similar expertise. As such, a relatively small panel sufficed. In terms of individual qualifications, we recognized that many different credentials could establish expertise in the areas of interest to our study. Thus, upon identifying candidates, the team met to discuss fit and ensure that sufficient levels of expertise were represented. For instance, each neuropsychologist evidenced years of clinical experience in neuropsychology and cognitive rehabilitation (range: 15 to nearly 30 years at the time of interview) and had been the principal or co-investigator on at least one grant-funded concussion study. Each integrated primary care expert evidenced a leadership role in the field, as indicated by a local or national leadership designation (e.g., a clinic or program director), as well as at least one scientific paper in the area of integrated care; all also evidenced many years of clinical practice in their respective domains (range: 12 to more than 25 years at the time of interview). Each problem-solving expert served as a trainer/consultant in an existing national clinician training experience in PST-PC (Miller et al., 2015). We determined a priori that SMEs would remain anonymous to one another throughout the study to prevent the possibility of social influence.

Step 3: Establish Data Collection and Analytic Procedures

Typical data collection and analytic decisions pertain to general design considerations (i.e., quantitative, qualitative, or mixed-method approach), determination of personnel roles (e.g., if interviews will be used, who will conduct them and how will data be consolidated), operational definitions of agreement (e.g., percent agreement threshold, item mean), and development of any necessary stimulus materials that would serve as points of reference to SMEs. These decisions are common to any Delphi procedure. A unique consideration for treatment adaptation is that stimulus materials (for instance, a treatment manual) may be voluminous and/or highly context dependent, and therefore rating paradigms need to specify precisely what raters will be providing feedback on and how. That is to say, unlike a Delphi procedure to create a psychometric item pool (where raters would provide ratings for individual items), a treatment adaptation study might prompt raters to evaluate content in larger segments, for instance therapist scripting/exemplars, content of patient education materials, and format of handouts, as well as potentially more abstract or complex topics such as session progression. Thus it is incumbent on the research team to attend carefully to orienting SMEs to their roles (Hasson et al., 2000), providing them with adequate content primers, and preparing feedback and rating forms (i.e., interview schedules, surveys) in a manner that aligns the type of question with the specificity or depth of feedback needed from SMEs. Content primers, which may be brief or bulleted summaries of existing evidence, can help orient SMEs to any necessary practice guidelines, policies, or theoretical/conceptual referents that would be useful to consider as they develop and articulate their opinions. A critical component of the Delphi procedure is the feedback loop. Thus, once SME feedback is obtained, a mechanism is needed to consolidate the variety of opinions that have been gathered and share it with members of the SME panel. Generally, the only required resources are access to a means of communication (i.e., phone, email); basic calculator, spreadsheet, and word processing software; and if deemed necessary, a recording medium for any qualitative interviews. Although estimates of the time required for each planned Delphi round vary, eight weeks has been described as “a realistic approximation” (Keeney et al., 2006, p. 209), though thoughtful planning, scheduling, and intentional communication with SMEs can potentially reduce the actual time required.

Step 3 Example

Our overall design was a mixed-method Delphi study that utilized both SME interviews and survey procedures in successive rounds. As such, plans were established for managing both qualitative and quantitative data. For each round, the first author (a licensed psychologist leading the intervention adaptation) would schedule and conduct all interviews. A trained research assistant provided support with data management and analysis between each round. We decided in advance to consolidate qualitative feedback using a rapid analytic process (Gale et al., 2019). Unlike traditional qualitative methods, the purpose of rapid analysis is quick turnaround of explanatory data, often in the context of preliminary or time-constrained studies. Data derived in this study were amenable to rapid analysis because SME interview feedback was intended to be both concise and actionable, with questions geared toward information that would guide initial adaptation and refinement of a treatment manual. In subsequent rounds, the main goal was to evaluate agreement. As such, descriptive statistics were emphasized as the primary means of analysis. For the survey, we developed a forced-choice rating scale with three options: content is acceptable as-is; content is unacceptable; or content would become acceptable-with-modification. For items rated as acceptable-with-modification, we included an open-text space for SMEs to provide specific commentary on changes needed to make the treatment manual content acceptable in a future iteration. The final survey segment included five open-ended questions on additional ways to improve the treatment manual. We defined SME consensus a priori as ≥80% agreement (Green, Jones, Hughes, & Williams, 1999) that rated manual content was acceptable as-is. We opted for telephone and email communication with SMEs to facilitate data collection.
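To make the consensus rule concrete, the sketch below applies the ≥80% “acceptable as-is” threshold to one rated segment. The rating labels and the example votes are illustrative assumptions, not data or instruments from the study.

```python
# Sketch: applying a >=80% "acceptable as-is" consensus rule to
# forced-choice ratings of a single manual segment. Labels are illustrative.
ACCEPTABLE = "acceptable as-is"

def reaches_consensus(ratings, threshold=0.80):
    """True if the share of 'acceptable as-is' votes meets the threshold."""
    return sum(r == ACCEPTABLE for r in ratings) / len(ratings) >= threshold

# Five hypothetical SME votes: 4/5 = 80% agreement, so consensus is met.
votes = [ACCEPTABLE, ACCEPTABLE, ACCEPTABLE,
         "acceptable-with-modification", ACCEPTABLE]
print(reaches_consensus(votes))  # True
```

With a panel of five raters, a single dissenting vote still yields exactly 80% agreement, which is why the threshold choice interacts with panel size.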

Step 4: Collect & Analyze Data

Step 4 is essentially execution of the plan specified in Step 3. Due to the multi-round nature of the Delphi process, data from each round are analyzed prior to advancing to the next. This allows results from one round to guide the development or modification of stimulus materials used at the next. For this reason, results of each round may lead to alterations in data collection in the subsequent round (e.g., an unexpected survey result may prompt the research team to gather more in depth qualitative information from SMEs if needed; or conversely, reaching agreement earlier than expected may make additional qualitative feedback unnecessary). Where an adapted treatment manual is the primary deliverable, various revisions would be made between each round, and each subsequent SME feedback round would utilize the most up-to-date version. Any outlying or divergent feedback quickly becomes apparent, but is typically self-correcting due to the nature of the consensus rating and feedback process. Assuming that SME feedback trends toward consensus, the breadth and depth of feedback decreases with each round.

Step 4 Example

We used a three-round Delphi procedure that progressed from an initial qualitative round, followed by two survey rounds. Specific procedures for each round are described separately below.

Round 1.

The goals of Round 1 were twofold: first, to prepare SMEs for the Delphi process using pre-specified stimulus materials; and second, to collect initial qualitative data to guide development of the blended treatment manual. Initial stimulus materials were designed to orient SMEs to the study and prepare them to complete their eventual feedback and rating tasks. Following confirmation of participation, SMEs were emailed a three-part stimulus packet: a) a study overview that included the purpose and rationale for the project, and a description of SMEs’ role in the treatment adaptation process; b) a brief professional background survey; and c) “content primers” consisting of excerpts from a published clinical practice guideline for managing concussions (The Management of Concussion-mTBI Working Group, 2016), the PST-PC therapist manual (Miller et al., 2015) that would be adapted in this study, and an outline of content that we sought to blend into this treatment for our newly-adapted intervention (which we dubbed Enhanced Problem-Solving Training; E-PST).

The next action involved a series of semi-structured telephone interviews with seven SMEs over a two-day period. The interview schedule (Appendix A) emphasized features of the combined intervention (e.g., psychoeducational content, approaches to apply behavioral skills in-session); changes that should be made to the current PST-PC protocol (e.g., tailoring, adding, or removing content to improve fit for the clinical population); and considerations to maintain treatment integrity of the two interventions. The core content of each interview was largely the same, though items were tailored for different SME groups (e.g., neuropsychologists provided relatively greater feedback on the valence of concussion education; problem-solving experts provided relatively greater feedback on specific problem-solving skills). Interviews required approximately 30-40 minutes.

Following completion of the interviews, the study team analyzed and discussed results. In executing the rapid analysis, we treated each interview question as a discrete content domain (e.g., essential educational content), with text responses organized into a domain-by-respondent matrix that allowed for comparison across interview respondents. Once organized, we identified actionable corrective feedback as well as eight common themes across respondents. The first three themes identified the essential intervention elements for: (1) any patient-centered integrated primary care intervention; (2) concussion education; and (3) PST-PC. The next four areas of feedback included more specific structural commentary on: (4) the overall composition of the blended intervention; (5) behavioral skills-training conducted in-session; (6) participants’ at-home practice activities; and (7) strategies to ensure user-friendly intervention materials. The final area (8) included miscellaneous recommendations and editorial comments that were addressed to improve clarity and population-specific content (e.g., augmenting veteran and military-specific language). Open text feedback was then distilled into a bulleted feedback list. This analysis was completed within two weeks.
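The domain-by-respondent matrix used in the rapid analysis can be represented very simply. In the sketch below, the domain names and response notes are invented for illustration; they are not quotations from the study’s SMEs.

```python
# Sketch of a domain-by-respondent matrix for rapid qualitative analysis:
# rows are interview-question domains, columns are SME respondents.
# All domain names and response notes below are invented examples.
matrix = {
    "essential educational content": {
        "SME-1": "keep concussion education brief and normalizing",
        "SME-2": "emphasize expected recovery trajectories",
    },
    "at-home practice activities": {
        "SME-1": "limit worksheets to one page",
        "SME-2": "tie practice to the in-session example",
    },
}

# Reading across a row compares respondents within one domain,
# which is how common themes and outlying feedback surface.
for domain, responses in matrix.items():
    print(domain)
    for sme, note in sorted(responses.items()):
        print(f"  {sme}: {note}")
```

A spreadsheet with the same row/column layout serves equally well; the point is that no specialized qualitative software is required.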

Following completion of Round 1 data analysis, the study team generated a preliminary version of the adapted treatment manual using the feedback provided. The preliminary manual included a) a 64-page therapist guide, which contained therapist orientation and education materials, exemplar scripting, in-session visual aids and note templates; and b) a 77-page participant workbook, which included written education, activities, and notes pages. Both the therapist guide and participant workbook were adapted from existing materials, which in some cases were edited to reflect tailoring and content recommendations, or repackaged (e.g., multiple handouts were compiled into a single participant workbook). The bulk of content changes appeared in two of the six sessions.

Round 2.

The goals of Round 2 were to gather SME feedback on the acceptability of the adapted treatment manual and determine whether additional changes were required to maintain treatment fidelity or address feasibility concerns. Stimulus items for this stage were designed to evaluate consensus. We provided SMEs with the following materials by e-mail: a) a bulleted summary of qualitative feedback gathered during the Round 1 interviews; b) a draft of the newly-adapted E-PST treatment manual (i.e., therapist guide and participant workbook); and c) a 58-item survey that mapped onto the content in the manual. Given the overall volume of content in our application (>140 total pages), we opted to partition ratings into eight core domains. The first seven aligned directly with content in the treatment manual: one domain comprised six items on the manual’s front matter and introduction, and each of the remaining six domains (one per session) comprised five to six items on that session’s therapist guide and three items on its participant workbook material. Exemplar ratings within individual sessions included overall session structure (1 item), essential content/in-session activities (3-4 items), and relevant content from the participant workbook (3 items). The eighth and final domain included five open-text questions related to other potential modifications or means to improve the intervention.

Five of seven SMEs participated in Round 2 (two were lost to schedule conflicts), and percent agreement was calculated for each rated domain. Fifty-three of 58 (91.4%) possible rating areas achieved the minimum consensus threshold of ≥80% agreement during Round 2 (Table 3). Of the five areas that did not achieve consensus, two pertained to introductory content in the E-PST therapist guide, two pertained to altering content and time considerations for treatment session one, and one pertained to altering content and time considerations for session two. SME feedback on necessary modifications was analyzed using the rapid analytic process described above and was completed in less than two weeks. We subsequently revised the treatment manual in accord with SME recommendations.
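The round-level summary reported above follows directly from the item counts; a minimal sketch reproducing the arithmetic (the per-item agreement values themselves are not shown, only the reported tally):

```python
# Reproducing the Round 2 summary reported in the text:
# 53 of 58 rated areas met the >=80% agreement threshold.
items_meeting_threshold = 53
total_items = 58
share = items_meeting_threshold / total_items
print(f"{share:.1%}")  # 91.4%
```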

Round 3.

Round 3 focused on establishing final consensus for items that required changes following Round 2. Stimulus materials were comparable to Round 2, except that the summary of SME ratings and the new survey tool were limited to areas that required additional feedback. We sent SMEs the following materials by e-mail: a) a copy of the Round 2 survey results, which included a personalized summary of individual ratings for each item and the specific changes we made to the therapist guide and participant workbook; b) the revised E-PST treatment manual; and c) a 5-item survey focused only on items that did not achieve consensus during Round 2.

Data analytic procedures were identical to those in Round 2. Five of seven SMEs again participated in Round 3. Each area of disagreement was resolved during this round, with 100% consensus reached. Analysis required a negligible time commitment (i.e., minutes). As no new actionable feedback was received, the primary focus of this stage was to finalize and copyedit the treatment manual. Had additional feedback been received, we had budgeted time for a fourth Delphi round (a teleconference) to resolve any lingering areas of disagreement, though this ultimately proved unnecessary.

Step 5: Determining Next Steps

The final step in the process is an overall evaluation of outcomes and a determination of any necessary next steps. In the context of treatment adaptation, this is essentially a determination of whether a suitable deliverable (e.g., an acceptable and feasible treatment manual or protocol) now exists. This is the most likely scenario, as the previous steps were specifically designed to identify problems in the treatment manual and move toward consensus on what an acceptable product would be. Assuming that consensus has been reached, the team would move to implement the next planned step, which may be additional study or implementation. If disagreement lingers for some reason, next steps would involve a re-evaluation of the aim and product, and potentially designing a new study (or phase of study) to address the issue.

Step 5 Example

Enthusiastic responses from SMEs suggested the overall feasibility and utility of our approach. We agreed a priori that once consensus was achieved, we would move toward further study. In this case, the next stages of study would involve an open trial to glean patient acceptability data.

Discussion

Though some have critiqued Delphi methods for over-relying on SME opinion (McKenna, 1994; Powell, 2003), Delphi procedures can nonetheless serve as a rigorous process for improving clinical and research decision-making when clear empirical guidance is lacking. Among the primary advantages of the Delphi procedure is its ability to develop actionable findings by way of structured interactions with SMEs, progressing from initially diverse opinions to relative consensus. In our example, little empirical evidence existed to guide our determination of the necessary ingredients of a brief problem-solving intervention combined with compensatory cognitive skills; SME feedback thus provided a means to a very specific end: systematic generation of a blended treatment manual that will be used in a subsequent clinical trial.

General considerations for using the Delphi technique have been well described elsewhere (e.g., McKenna, 1994; Powell, 2003). In terms of using the Delphi process specifically for MH treatment adaptation, our systematic approach yielded timely responses from diverse SMEs that resulted in key improvements to our intervention protocol. Our findings suggested that minor but specific structural and content edits, as well as re-balancing of time allotments, were needed to maintain treatment integrity and promote feasibility for delivery in integrated primary care settings. In terms of lessons learned, we offer the following recommendations to others who need to adapt an existing MH treatment protocol under resource- or time-constrained circumstances and feel that an expert consensus approach may be fitting:

  1. Seek additional information on the process. Though this manuscript provides a summary and exemplar application of the Delphi technique, a number of other works have described the history and variety of Delphi applications in relatively more detail. These manuscripts, several of which are cited here, would be useful referents for those seeking to apply this method.

  2. Consider other consensus-building options. A full discussion of other consensus techniques is beyond the scope of this paper; however, it is worth mentioning that other options do exist. For instance, the nominal group technique is a type of structured but adaptable group interaction that may prove more expedient than the Delphi technique (e.g., days vs. weeks to complete) in instances where there is a small group of questions and the stimulus materials require a lower time investment for SMEs to appraise (i.e., minutes vs. hours) (McMillan et al., 2016). Benefits of expedience may, however, come at the cost of participant anonymity and an increased likelihood that group dynamics become a factor, as this approach is typically conducted via in-person gatherings. Cost-benefit determinations of whether an expert consensus approach, or another systematic decision-making approach, is most fitting are project-specific.

  3. Remember that context matters. The results of a Delphi study are not typically conclusive but are often a starting point for a next study, study phase, or clinical initiative. Relative to treatment adaptation, the ultimate aim would typically be to point to actionable changes that improve or extend an intervention. In research applications, the larger context is likely a multiphase effort, with some combination of treatment adaptation and testing. As such, a Delphi study may be a fitting component of grant-funded work in this area, perhaps leading to or culminating in a pilot randomized trial, as is often the case in mental health career development (e.g., an NIH K award) or treatment development grants (e.g., the NIH R34 mechanism). For clinical applications, the completion of the final step of the Delphi process may lead to a clinic-level training or implementation effort, potentially followed by further quality assurance checks (e.g., program evaluation). For larger-scale implementation efforts in particular, a systematic expert consensus approach such as the one we have outlined here could be beneficial.

  4. Identify relevant stakeholders and establish early buy-in. This recommendation is especially relevant in large systems. Other stakeholders might include administrative leadership, clinic managers, and other practitioners who might be asked to implement the modified treatment protocol. Though patients are important stakeholders, they would not typically be included in the Delphi stage. Despite our use of the Delphi process in the course of a research study, we nonetheless sought institutional support prior to seeking grant funding and also maintained consultation with SMEs and administrators as the project progressed. We also assured them that requests for their time would be kept to a minimum. Although this recommendation may seem obvious, a lack of institutional or team support would serve as a clear barrier to an intervention’s future implementation.

  5. Purposefully construct stimulus items and data collection tools. Careful attention to stimulus items and data collection tools has an obvious link to data quality. However, assuring clear and direct instructions for SMEs can also reduce the cognitive load required to evaluate what may amount to a large compilation of intervention materials. Similarly, be cautious not to overwhelm raters with too much content to review. A thorough role induction will prepare them for the requisite level of investment, and with luck, reduce the likelihood of attrition.

  6. Consider the benefits of incorporating interview data. A benefit of treatment adaptation is that a referent protocol exists. However, substantive changes to content or context, as well as those that might have wide-reaching implications (e.g., regional or national implementation), likely require a greater investment from SMEs. We observed benefits from ensuring diverse SME membership and collecting early interview feedback to guide development of the initial treatment manual. We largely attribute the high rate of consensus following Round 2 to having a clear decision-making path outlined following the Round 1 interviews. In retrospect, the time taken to conduct interviews and analyze qualitative data was well spent in this regard: most of the modifications later required or suggested were modest content additions or clarifications; none required substantial debate among the study team or extensive edits to the treatment manual. Adaptations that involve less substantive changes (e.g., minor to moderate tailoring edits, content changes to one session, a setting change) may not see the same return on investment from stakeholder interviews as we did and may justifiably rely on other methods (e.g., exclusive use of survey methods).

  7. Plan for flexibility in time and effort. We observed that the actual time requirement for direct study work was low. For instance, study interviews required just two days, and data analysis for each round was completed in less than two weeks. Nonetheless, delays emerged. Sources included: 1) time spent awaiting regulatory approvals for each round of data collection (a nuance of our procedure, which involved staff participation in a research protocol); 2) nuances in staff/SME schedules (e.g., vacations/holidays, other professional obligations), which also resulted in the attrition of two SMEs between Rounds 1 and 2; and 3) honoring a subset of SMEs’ requests for additional time to complete manual reviews due to their own schedule limitations. Although we lost two SMEs between Rounds 1 and 2, we were nonetheless able to incorporate their more substantive interview feedback into our final product, and we have no evidence to suggest that additional ratings from them at Rounds 2 and 3 would have altered our outcome. Our study timeline also was not derailed, as we accounted for the possibility of delays up front. A more time-constrained study or clinic-level modification may afford relatively less flexibility, or may demand that results be turned around more quickly.

Conclusion

Unique clinical and research needs can drive innovation in treatment adaptation. The use of expert consensus methods, such as the Delphi technique, may facilitate treatment adaptation efforts, enhance treatment feasibility, and promote content and ecological validity. Our future research will evaluate patient-level acceptability and feasibility of the resulting E-PST protocol in a clinical trial.

Public Significance Statement.

Evidence-based mental health treatments are often modified in terms of core intervention content, method of delivery, and target populations. This paper provides an overview and practical example of how an expert consensus technique known as the Delphi procedure can be used to facilitate mental health treatment adaptation efforts, enhance treatment feasibility, and promote content and ecological validity.

Acknowledgments

The authors declare that there were no conflicts of interest with respect to the authorship or the publication of this article. This work was supported by Career Development/Capacity Building Award Number IK2 RX002796 from the United States (U.S.) Department of Veterans Affairs Rehabilitation R&D (Rehabilitation Research and Development) Service, as well as with the resources and use of facilities at the VA Center for Integrated Healthcare and the VA Western New York Healthcare System. The views expressed in this article are those of the authors and do not represent the position or policy of the Department of Veterans Affairs or the United States Government. All authors are full-time employees of the Department of Veterans Affairs, and the work performed was part of usual duties. We wish to thank Brenda Jeffries-Silmon for her assistance with data collection, and Carrie Pengelly for her assistance with manuscript preparation.

Biographies

Author Biographies

PAUL R KING JR received his PhD in counseling psychology from the University at Buffalo/SUNY. He is currently a clinical research psychologist at the Veterans Health Administration Center for Integrated Healthcare at the VA Western New York Health Care System in Buffalo, NY. He is also an adjunct assistant professor in the Department of Counseling, School, and Educational Psychology at the University at Buffalo. His areas of research include post-deployment health care, particularly primary care-based management of concussion and common mental health conditions such as post-traumatic stress disorder and depression, and processes pertaining to integrated primary care.

GREGORY P BEEHLER received his PhD in counseling psychology from the University at Buffalo/SUNY. He is currently the Associate Director for Research at the Veterans Health Administration Center for Integrated Healthcare at the VA Western New York Health Care System in Buffalo, NY. He is also a research assistant professor in the Department of Community Health and Health Behavior in the School of Public Health and Health Professions at the University at Buffalo. His research addresses integrated care settings broadly, with special emphasis on management of chronic pain and related comorbidities, methods of fidelity assessment, and identification and implementation of behavioral health provider best practices.

KERRY DONNELLY received her PhD in Counseling Psychology from the University at Buffalo/SUNY. She is currently a clinical neuropsychologist at the VA Western New York Health Care System in Buffalo, NY. Her research interests focus on neuropsychological functioning associated with traumatic brain injury and post-traumatic stress disorder.

JENNIFER S FUNDERBURK received her PhD in clinical psychology from Syracuse University. She is currently a clinical research psychologist at the Veterans Health Administration Center for Integrated Healthcare at the Syracuse VA Medical Center in Syracuse, NY. She is also an adjunct assistant professor in the Department of Psychology at Syracuse University and the Department of Psychiatry at the University of Rochester Medical Center. Her areas of professional interest include the integration of behavioral health in primary care with a special interest in the development and implementation of brief interventions.

LAURA O. WRAY received her PhD in clinical psychology from Stony Brook University. She is currently the director of the Veterans Health Administration Center for Integrated Healthcare which supports the integration of mental health care into veterans’ medical care through research, education, and implementation support. She is also an associate professor in the Jacobs School of Medicine and Biomedical Sciences at the University at Buffalo. Dr. Wray is a health services researcher whose portfolio focuses on implementation science of novel health care practice and care for veterans with dementia.

Appendix A. Interview Schedule.

Problem-Solving Therapy (PST) Experts

Essential elements of a Brief PST Intervention & Education

  1. What would you consider to be essential features of a PST-inspired intervention that is adapted for use with primary care patients with a history of mild traumatic brain injury?

  2. What education on problem-solving would you deem to be essential in working with Veterans with a history of mild traumatic brain injury?

Composition, Proportion, & Feasibility of PST

  3. What sorts of PST skills would you prioritize in a brief course of treatment for Veterans with mild traumatic brain injury? Why?

  4. Are there specific types of skills that you would avoid delivering in a brief course of treatment? Why?

  5. Over the four-to-six session course of treatment, we are proposing a two-meeting sequence (sessions 2-3) that expounds upon externalizing skills, and simplification of goals and organizational strategies. How would you envision spending that time?

Functional Application

  6. What approach would you advise in teaching Veterans to apply basic problem-solving and organizational skills in their day-to-day lives?

  7. What types of at-home (between-session) practice would you recommend? Is this sort of practice any different from what you would recommend in a general population? If so, why?

General Questions on Protocol Adaptation

  8. What, if any, elements of the current PST-PC protocol need to be changed to tailor this protocol to Veterans with a history of mild traumatic brain injury? How would you suggest changing them?

  9. As a clinician, what would you expect to see in a treatment manual for a brief PST intervention that emphasizes cognitive skills such as use of calendars and journals in integrated primary care (PC-MHI)?

  10. Looking over the proposed content for our new intervention, what gaps exist in the content that we have proposed? Are there content areas that you would change, for example, adding new content or eliminating content?

  11. Are there any areas that I have not asked about today that you feel are critical considerations for adapting this treatment protocol?

Neuropsychology/ Rehabilitation Experts

Essential elements of Primary Care-Based Intervention & Education

  1. What would you consider to be essential features of a primary care-based, and behaviorally-focused, management plan for persistent symptoms that Veterans might commonly attribute to mild traumatic brain injury?

  2. What would you consider to be essential features of brief mild traumatic brain injury education?

Composition, Proportion, & Feasibility of Compensatory Skill Training

  3. What sorts of cognitive and behavioral skills would you prioritize in a brief course of treatment for Veterans with mild traumatic brain injury? Why?

  4. Are there skills that you would avoid delivering in a brief course of treatment? Why?

  5. Over the four-to-six session course of treatment, we are proposing that the bulk of mild traumatic brain injury education be delivered in the first session. Is that sufficient, and if so, how would you recommend structuring that time?

Functional Application

  6. What approach would you advise in teaching Veterans with a history of mild traumatic brain injury to apply cognitive and behavioral skills in their day-to-day lives?

  7. What types of at-home (between-session) practice would you recommend?

General Questions on Protocol Adaptation

  8. What, if any, elements of the current PST-PC protocol need to be changed to tailor this protocol to Veterans with a history of mild traumatic brain injury? How would you suggest changing them?

  9. What would you expect to see in a treatment manual for a brief, behaviorally-focused intervention for mild traumatic brain injury?

  10. Looking over the proposed content for our new intervention, what gaps exist in the content that we have proposed? Are there content areas that you would change, for example, adding new content or eliminating content?

  11. Are there any areas that I have not asked about today that you feel are critical considerations for adapting this treatment protocol?

Integrated Care Experts

Essential elements of Integrated Primary Care (PC-MHI) Intervention & Education

  1. What would you consider to be essential features of a primary care-based intervention to address mild traumatic brain injury?

  2. What types of educational topics would you deem to be essential in working with Veterans with a history of mild traumatic brain injury?

Composition, Proportion, & Feasibility of PST/ Compensatory Skill Training

  3. What sorts of skills would you prioritize in a brief course of treatment for Veterans with mild traumatic brain injury? Why?

  4. Are there specific types of interventions that you would avoid delivering in a brief course of treatment? Why?

  5. Over the four-to-six session course of treatment, we are proposing that the first meeting focus on motivational interviewing, goal-setting, and education. How would you envision spending that time?

Functional Application

  6. What approach would you advise in teaching Veterans to apply basic problem-solving and organizational skills in their day-to-day lives?

  7. What types of at-home (between-session) practice would you recommend?

General Questions on Protocol Adaptation

  8. What, if any, elements of the current PST-PC protocol need to be changed to tailor this protocol to Veterans with a history of mild traumatic brain injury? How would you suggest changing them?

  9. As a clinician, what would you expect to see in a treatment manual for a brief intervention that incorporates PST and compensatory cognitive skill-training in PC-MHI?

  10. Looking over the proposed content for our new intervention, what gaps exist in the content that we have proposed? Are there content areas that you would change, for example, adding new content or eliminating content?

  11. Are there any areas that I have not asked about today that you feel are critical considerations for adapting this treatment protocol?

References

  1. Aarts MJ, Schuit A, van de Goor IAM, & van Oers HAM (2011). Feasibility of multi-sector policy measures that create activity-friendly environments for children: Results of a Delphi study. Implementation Science, 6:128.
  2. Beehler GP, Funderburk JS, Possemato K, & Vair CL (2013). Developing a measure of provider adherence to improve implementation of behavioral health services in primary care: A Delphi study. Implementation Science, 8:19.
  3. Biondo PD, Nekolaichuk CL, Stiles C, Fainsinger R, & Hagen NA (2008). Applying the Delphi process to palliative care tool development: Lessons learned. Supportive Care in Cancer, 16, 935–942.
  4. Boulkedid R, Abdoul H, Loustau M, Sibony O, & Alberti C (2011). Using and reporting the Delphi method for selecting healthcare quality indicators: A systematic review. PLoS ONE, 6(6), e20476.
  5. Dalkey NC, & Helmer O (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9(3), 458–467.
  6. Dinos S (2015). Culturally adapted mental healthcare: Evidence, problems, and recommendations. BJPsych Bulletin, 39, 153–155.
  7. Gale RC, Wu J, Erhardt T, Bounthavong M, Reardon CM, Damschroder LJ, & Midboe AM (2019). Comparison of rapid vs. in-depth qualitative analytic methods from a process evaluation of academic detailing in the Veterans Health Administration. Implementation Science, 14(11), 1–12.
  8. Goodyear M, Hill TL, Allchin B, McCormick F, Hine R, Cuff R, & O’Hanlon B (2015). Standards of practice for the adult mental health workforce: Meeting the needs of families where a parent has mental illness. International Journal of Mental Health Nursing, 24(2), 169–180.
  9. Green B, Jones M, Hughes D, & Williams A (1999). Applying the Delphi technique in a study of GPs’ information requirements. Health and Social Care in the Community, 7(3), 198–205.
  10. Hasson F, Keeney S, & McKenna H (2000). Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32(4), 1008–1015.
  11. Hill NTM, Shand F, Torok M, Halliday L, & Reavley NJ (2019). Development of best practice guidelines for suicide-related crisis response and aftercare in the emergency department or other acute settings: A Delphi expert consensus study. BMC Psychiatry, 19:6.
  12. Hsu C, & Sanford BA (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research, and Evaluation, 12(10), 1–10.
  13. Jones J, & Hunter D (1995). Qualitative research: Consensus methods for medical and health services research. BMJ, 311:376.
  14. Jorm AF (2015). Using the Delphi expert consensus method in mental health research. Australian & New Zealand Journal of Psychiatry, 49(10), 887–897.
  15. Keeley T, Williamson P, Callery P, Jones LL, Mathers J, Jones J, … & Calvert M (2016). The use of qualitative methods to inform Delphi surveys in core outcome set development. Trials, 17:230.
  16. Keeney S, Hasson F, & McKenna H (2006). Consulting the oracle: Ten lessons from using the Delphi technique in nursing research. Journal of Advanced Nursing, 53(2), 205–212.
  17. McKenna HP (1994). The Delphi technique: A worthwhile approach for nursing? Journal of Advanced Nursing, 19, 1221–1225.
  18. McMillan SS, King M, & Tully MP (2016). How to use the nominal group and Delphi techniques. International Journal of Clinical Pharmacy, 38(3), 655–662.
  19. Miller SA, Aspnes A, DeMuth L, McGill K, King P, Nezu AM, & Nezu CM (2015). Problem-Solving Training in Primary Care: A problem-solving approach to achieving life’s goals. Washington, D.C.: Department of Veterans Affairs.
  20. Mulligan J, & Conteh L (2016). Global priorities for research and the relative importance of different research outcomes: An international Delphi survey of malaria research experts. Malaria Journal, 15:585, 1–12.
  21. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, & Hoagwood K (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 42, 533–544.
  22. Powell C (2003). The Delphi technique: Myths and realities. Journal of Advanced Nursing, 41(4), 376–382.
  23. Santaguida P, Dolvich L, Oliver D, Lamarche L, Gilsing A, Griffith LE, … & Raina P (2018). Protocol for a Delphi consensus exercise to identify a core set of criteria for selecting health related outcome measures (HROM) to be used in primary health care. BMC Family Practice, 19:152.
  24. Skulmoski GJ, Hartman FT, & Krahn J (2007). The Delphi method for graduate research. Journal of Information Technology Education, 6, 1–21.
  25. Stirman SW, Miller CJ, Toder K, & Calloway A (2013). Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science, 8:65.
  26. The Management of Concussion-mTBI Working Group (2016). VA/DoD Clinical Practice Guideline for the Management of Concussion-Mild Traumatic Brain Injury. Washington, D.C.: Department of Veterans Affairs and Department of Defense.
  27. Turner S, Ollerhead E, & Cook A (2017). Identifying research priorities for public health research to address health inequalities: Use of Delphi-like survey methods. Health Research Policy and Systems, 15:87.
  28. van der Linde H, Hofstad CJ, van Limbeek J, Postema K, & Geertzen JHB (2005). Use of the Delphi technique for developing national clinical guidelines for prescription of lower-limb prostheses. Journal of Rehabilitation Research & Development, 42(5), 693–704.
  29. Yotebieng M, Brazier E, Addison D, Kimmel AD, Cornell M, Keiser O, … & The IeDEA Treat All in sub-Saharan Africa Consensus Statement Working Group (2019). Research priorities to inform “Treat All” policy implementation for people living with HIV in sub-Saharan Africa: A consensus statement from the International epidemiology Databases to Evaluate AIDS (IeDEA). Journal of the International AIDS Society, 22:e25218.