Abstract
Background:
A major component of the HEALing Communities Study (HCS), launched in 2019 to address the growing opioid epidemic, is an implementation science (IS) evaluation of the study’s intervention implementation process. One component of the IS approach involves teams of more than 20 researchers collaborating across four research sites to conduct in-depth qualitative interviews with over 300 participants at four time points. Having completed the first two rounds of data collection, we reflect upon our qualitative data collection and analysis approach. We aim to share lessons learned about designing and applying qualitative methods within an implementation science framework.
Methods:
The HCS evaluation is based on the RE-AIM/PRISM framework and incorporates interviews at four timepoints. At each timepoint, the core qualitative team of the Intervention Work Group drafts an interview guide based on the framework and insights from previous round(s) of data collection. Researchers then conduct interviews with key informants and coalition members within their respective states. Data analysis involves drafting, iteratively refining, and finalizing a codebook through cross-site and within-site consensus processes. Interview transcripts are then individually coded by researchers within their respective states.
Results:
Successes in the evaluation process include having structured procedures for communication, data collection, and analysis, all of which are critical for ensuring consistent data collection and for achieving consensus during data analysis. Challenges include recognizing and accommodating the diversity of training and knowledge among researchers, and establishing reliable ways to securely store, manage, and share the large volumes of data.
Conclusion:
Qualitative methods using a team science approach have been limited in their application in large, multi-site randomized controlled trials of health interventions. Our experience provides practical guidance for future studies with large, experientially and disciplinarily diverse teams, and teams seeking to incorporate qualitative or mixed-methods components for their evaluations.
Keywords: Big qual, qualitative research, implementation science, team science, opioid use disorder
PROJECT OVERVIEW AND CONTEXT
The opioid epidemic remains a serious and growing public health problem, contributing to significant mortality and morbidity (Centers for Disease Control and Prevention, 2021) and imposing an economic burden of more than $1 trillion (Florence et al., 2021). In response, the National Institutes of Health and the Substance Abuse and Mental Health Services Administration launched the HEALing Communities Study (HCS) in 2019 to test the effectiveness of the “Communities that HEAL” (CTH) intervention at reducing opioid-related overdose deaths in 67 highly impacted communities across four states: Kentucky, Massachusetts, New York, and Ohio (Chandler et al., 2020; The HEALing Communities Study Consortium, 2020). The CTH intervention uses a community-engagement approach to build diverse coalitions in each community to expand access to and use of evidence-based overdose prevention practices across healthcare, behavioral health, criminal justice, and other community-based settings.
In addition to testing the effectiveness of the CTH intervention, the HCS is also evaluating the CTH implementation process using a mixed-methods implementation science approach. As part of this approach, researchers from each research site (i.e., state) conduct semi-structured qualitative interviews with key informants to understand the important domains of the internal community context and the external policy and systems context that can affect CTH implementation (Drainoni et al., 2022). As findings from various components of the HCS and the CTH implementation evaluation begin to emerge (Drainoni et al., 2022; Walker et al., 2022), we took the opportunity to reflect on our qualitative data collection and analysis approach. Our aim is to share lessons learned about designing and applying qualitative methods within an implementation science framework, and to improve understanding of how qualitative methods using a team science approach can be applied in large, multi-site evaluations.
OVERVIEW OF STUDY DESIGN
HCS is a multi-site, community-level cluster randomized trial with a wait-list design, in which 34 communities were randomized to Wave 1 and 33 communities to Wave 2 (The HEALing Communities Study Consortium, 2020). Key informants (i.e., local/county health officials, criminal justice officials, medical providers, social service providers, etc., who are community coalition members or local stakeholders in substance use issues) from all HCS communities are recruited to participate in semi-structured qualitative interviews at four timepoints: baseline (i.e., prior to CTH implementation for Wave 1; contextual for Wave 2); follow-up 1 (i.e., midway through CTH implementation for Wave 1; contextual for Wave 2); follow-up 2 (i.e., end of CTH implementation for Wave 1; prior to CTH implementation for Wave 2); and follow-up 3 (i.e., 1 year after CTH implementation has been completed for Wave 1; midway through CTH implementation for Wave 2). To date, we have completed 382 baseline interviews with 389 participants and 304 follow-up 1 interviews with 310 participants.
The qualitative implementation evaluation component proceeds in a cyclical manner through each interview timepoint (Figure 1). We have completed this process for two timepoints (i.e., baseline and follow-up 1) and will continue using this process for the remaining two rounds of data collection. First, an interview guide is drafted based on the RE-AIM/PRISM evaluation framework (Glasgow et al., 2019) and insights from previous rounds of data collection. Then, researchers and research staff from each site conduct interviews with key informants from their respective states. Data analysis begins with the drafting of a codebook, also grounded in the PRISM/RE-AIM framework, that is iteratively refined in each round of data collection through a cross-site consensus process that involves weekly meetings of the site qualitative leads to resolve difficult coding decisions; this is followed by a within-site consensus process. Once the codebook is finalized, interview transcripts from each research site are divided amongst the coding team from the respective state for individual coding. Emerging themes and changes to the codebook are documented in detail in a log, and the information is used for interview guide and codebook development in subsequent interview rounds. The baseline (i.e., prior to CTH implementation) interview guide as well as the interview and data analysis procedures have been published in detail elsewhere (Drainoni et al., 2022; Knudsen et al., 2020; Walker et al., 2022).
Figure 1.
Qualitative process for each wave of data collection.
PRACTICAL LESSONS LEARNED
Embedded within our qualitative evaluation of the CTH intervention are the notions of team science and Big Qual, both of which confer particular benefits and challenges in qualitative research. Team science refers to the leveraging of cross-disciplinary expertise to address complex scientific issues (Salas et al., 2018). By bringing together experts from different fields, team science blurs disciplinary boundaries and fosters a more inclusive and collaborative analytic process that can enhance understandings of phenomena in ways that cannot be achieved with experts from a single scientific field (Hall et al., 2012; Stokols et al., 2008). However, team science can create challenges for the qualitative research process, especially with building consensus and maintaining efficiency (Skillman et al., 2019; Vindrola-Padros & Johnson, 2020). These matters are further complicated when the qualitative research is Big Qual, that is, when it has data from at least 100 participants (Brower et al., 2019). While the large volume of data in Big Qual is beneficial for theory-building, innovation, and generalizability, the data collection and analysis processes are often time and resource intensive (Brower et al., 2019). Similar to team science, Big Qual also raises concerns about achieving consensus and ensuring data trustworthiness and validity (Hossain & Scott-Villiers, 2019).
The challenges posed by team science and Big Qual to the qualitative research process are especially salient in the CTH implementation evaluation, which not only contains data from several hundred participants, but also involves large teams of interviewers and coders (e.g., at baseline, a team of 28 interviewers and 25 coders) who are collaborating across four distinct research sites. Further, the CTH implementation evaluation also involves collecting data longitudinally and in different study arms, adding to the complexity (‘bigness’) of the dataset. Below, we highlight some of our successes and challenges, and discuss what we have learned from our experience so far with conducting a large-scale, multi-site qualitative implementation science study.
Researcher Collaboration and Skilled Leadership
Effective collaboration amongst all members of a research team is critical for ensuring consistent data collection and for achieving consensus during data analysis, both of which increase the dependability, credibility, and trustworthiness of our findings (Wisdom et al., 2012). It was therefore important to identify point persons at each research site and to establish a process for collaboration early on. Because researchers were geographically dispersed, and with the emergence of the COVID-19 pandemic, in-person collaboration was not possible. As such, we adopted virtual platforms such as Zoom for meetings and developed clear procedures for keeping track of group decisions and for organizing research documents. Furthermore, two experienced qualitative researchers from each site formed the cross-site qualitative analysis core (QAC) and were charged with liaising between the QAC and the other interviewers and coders at their respective sites. This structure has helped facilitate efficient group decision making.
In addition to the QAC, each research site also designated a senior researcher as their site’s lead. Site leads varied with respect to their levels of qualitative expertise, but each shared a health services orientation to research and had experience with project management, design, and implementation of qualitative research. As a result, beyond facilitating the data collection and analysis process, the site leads were also critical in helping the interview and coding teams develop and flourish as they progressed through different stages of the implementation evaluation (Tuckman & Jensen, 2010). Furthermore, leads established the group norms of their sites and provided coaching that helped interview and coding teams perform at their expected levels.
Structured Processes for Communication
Another key success of our qualitative evaluation so far has been the use of structured processes for communication. At baseline, the QAC developed standard operating procedures to provide clear instructions on agreed-upon processes to all research team members. The QAC also developed procedures for sharing updates on data collection and analysis promptly and widely. Specifically, weekly meetings are held and facilitated by a senior qualitative researcher designated as the cross-site lead. To maximize the efficiency of these weekly meetings, the QAC developed a process whereby all researchers involved in the qualitative implementation evaluation are expected and encouraged to attend, but comments and feedback are submitted in writing ahead of time and presented at the meetings by a designated person from each site. This process facilitated discussion of changes and additions to the codebook, including voting when necessary to reach cross-site consensus; any remaining disagreements could then be resolved through discussion, with a tiebreaking vote by the cross-site lead. This process informed real-time updates to the written coding log, another structured process that has been critical in our research thus far for keeping track of progress, decisions, and changes made at any step of the data collection and analysis process. For example, during data analysis at baseline and first follow-up, issues with coding and coding discrepancies were added to the log (see Table 1 for an excerpt of the log). These issues could then be brought up for discussion during either cross-site or within-site group meetings, and the date and outcome of the discussions added to the log, along with any requisite codebook changes. The cross-site teams utilized a password-protected cloud-based drive (i.e., Box) to maintain confidentiality and allow for real-time access to the most current codebook and log.
Table 1:
Excerpt of Log of Coding Discrepancies and Codebook Updates
| Description of Coding Issue/Comment | Identified by (Research Site) | Discussed by Qual Subgroup on (Date) | Description of Consensus Resolution | Codebook Definition Update (Date) |
|---|---|---|---|---|
| Determine whether housing broadly, and sober homes specifically are part of ‘health services environment’. | 8 early coders | 3/31/2020 | Sober homes specifically are part of ‘health services environment’ and were added in the codebook definition. No decisions made for housing in general–will likely be a context-specific decision. | 4/1/2020 |
| Potential for significant double coding of ‘health services environment’ and ‘community risks’ since the latter includes references to overprescribing and other health services issues. | 8 early coders | 3/31/2020 | Remove overprescribing and any other healthcare systems references from ‘community risks’ definition. Clarify that ‘community risks’ includes concrete services like transportation. | 4/1/2020 |
| ‘Other coalitions’ prohibits coding for other agencies and organizations that exist in the ‘external context’ | 8 early coders | 3/31/2020 | Re-label ‘other coalitions’ as ‘other coalitions/ agencies/ organizations’ | 4/1/2020 |
| ‘Internal Context’ only refers to coalition members. MA and NY did not have formed coalitions at baseline and interviewed key stakeholders who may or may not end up as part of the coalition. These individuals fit into neither the internal nor the external context in the original codebook. | 8 early coders | 3/31/2020 | Re-label ‘HCS community coalition member characteristics’ as ‘coalition member or key stakeholder characteristics.’ Remove ‘HCS’ from other labels. State in Internal Context parent definition that this group of codes applies to HCS coalitions and key stakeholders. | 4/1/2020 |
| Member roles are described in the definitions for ‘HCS community coalition characteristics’ and ‘HCS community coalition member characteristics’. Unclear which to code member roles to. | 8 early coders | 3/31/2020 | Restructure coding relationship so that newly renamed ‘coalition member or key stakeholder characteristics’ is a grandchild of ‘HCS Community Coalition Characteristics.’ | 4/1/2020 |
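For teams maintaining a similar log, a minimal sketch follows (in Python; the field names are ours for illustration and this is not the study’s actual tooling) of how entries like those in Table 1 could be kept as structured records rather than free text, so they can be sorted, filtered, and shared across sites:

```python
# Minimal sketch (hypothetical, not the study's actual tooling): the coding
# log represented as structured records so entries can be queried and merged.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class LogEntry:
    issue: str             # description of the coding issue/comment
    identified_by: str     # research site or group that raised the issue
    discussed_on: str      # date discussed by the qualitative subgroup
    resolution: str        # description of the consensus resolution
    codebook_updated: str  # date the codebook definition was updated

entries = [
    LogEntry(
        issue="Determine whether sober homes are part of 'health services environment'.",
        identified_by="8 early coders",
        discussed_on="2020-03-31",
        resolution="Sober homes added to the codebook definition.",
        codebook_updated="2020-04-01",
    ),
]

# Write the log as CSV so it can sit alongside the codebook on a shared drive.
with open("coding_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(LogEntry)])
    writer.writeheader()
    writer.writerows(asdict(e) for e in entries)
```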
Structured Data Collection and Analytic Approaches
An interesting distinction between our data collection efforts and those of other qualitative research projects is the amount of structure in both our interview guide and our interview process. While many interviewers on our implementation evaluation teams had prior experience conducting semi-structured interviews, it became apparent early on that our qualitative data collection process required a more well-defined interview guide than interviewers had used in the past. By incorporating specific question probes that interviewers were required to ask, the more directed interview guide helped to improve consistency throughout data collection while still allowing for flexibility in the interviews.
Also of note is our use of a deductive-dominant approach (Hsieh & Shannon, 2005) to data analysis for this study rather than a more traditional grounded theory approach (e.g., Glaser & Strauss, 1999), which might have been more common in some researchers’ prior work and in smaller studies. To support this shift, researchers met regularly to ensure shared understanding of the RE-AIM/PRISM framework and its alignment with the implementation evaluation interview guide, which was used to focus coding discussions. Having a common understanding of this framework prior to data collection, and again before data analysis, was critical to maintaining consistency in each round of coding and to helping coders reach consensus in the coding process, given prior agreement about definitions and the codes themselves. For instance, in the consensus process as the codebook was refined, the QAC used the framework to guide decisions about the hierarchy of codes as parent, child, or grandchild codes. Finally, the deductive-dominant approach also allowed for consideration of emergent new codes (e.g., ‘stories’ to capture anecdotal stories told by participants), yet kept the generation of these inductive codes minimal to remain focused on explicating the theoretical framework. This process was also important in guiding the expansion of inclusion and exclusion criteria as coding progressed (see Table 2 for the structure of the codebook and Figure 2 for the coding tree).
Table 2:
Structure of the Codebook
| Sample Code | Definition, Inclusion Criteria, Exclusion Criteria, Potential for Double Coding |
|---|---|
| Health Services Environment [Child Code] | Definition: Statements about the availability, access to, quality of, evidence for, or need for health services related to substance use, mental health, primary care, recovery support services (e.g., recovery cafes, peer supports) or other treatment/prevention/harm reduction services. Sober/recovery homes and shelters are included as part of the health services environment. Inclusion criteria: • Include statements about the availability, absence, existence or access to health services not related to or funded by HCS. • Include statements about health services that are needed or need to be expanded. • Include statements about the observed or intended results of health services provided in the community. • Include statements about law enforcement related health initiatives (e.g., post-overdose outreach). Exclusion Criteria: • Exclude health services funded through coalition’s existing (pre-HCS) resources and instead code those to Other Community Coalition Initiatives or Community Coalition Resources as appropriate. • Exclude comments about the coverage of benefits in an insurance plan (e.g., Medicaid, commercial) and instead code those statements to Policy. Potential for Double Code: • Passages that describe a lack of transportation services prohibiting individuals from accessing healthcare should be double-coded to Community Risks and Health Services Environment. |
| Public Perceptions of Health Services [Grandchild Code of Health Services Environment] | Definition: Statements about the broader community’s perception of access to, availability/existence of, need for or quality of health services including substance use, mental health, primary care, and recovery services. Inclusion Criteria: • Include statements about the broader community’s perception of the availability, access to, or quality of health services. • Include statements about the broader community’s lack of awareness of existing health services. • Include statements about perceptions of health service quality, groups’ (i.e., different populations within the community) general preference or experience with health services. Exclusion Criteria: • Exclude statements about the general existence of health services and instead code those to Health Services Environment. • Exclude statements about an interviewee’s perceptions about health services and instead code those to Health Services Environment or Individual Coalition Member/ Key Stakeholder Personal Attitudes as appropriate. |
Figure 2.
Coding tree for follow-up 1 data collection.
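As a complement to Figure 2, the sketch below (Python; a hypothetical illustration, with the parent label assumed rather than taken from the codebook) shows how a parent/child/grandchild hierarchy like ours can be represented so that any grandchild code can always be traced back to its framework-level domain:

```python
# Minimal sketch (illustrative only) of a parent/child/grandchild code
# hierarchy represented as a simple tree.
from dataclasses import dataclass, field

@dataclass
class Code:
    name: str
    definition: str = ""
    children: list["Code"] = field(default_factory=list)

codebook = Code(
    "External Context",  # hypothetical parent label for illustration
    children=[
        Code(
            "Health Services Environment",  # child code (see Table 2)
            "Statements about the availability, access to, or quality of health services.",
            children=[
                Code(
                    "Public Perceptions of Health Services",  # grandchild code
                    "Statements about the broader community's perception of health services.",
                ),
            ],
        ),
    ],
)

def print_tree(code: Code, depth: int = 0) -> None:
    """Print the coding tree with indentation reflecting hierarchy level."""
    print("  " * depth + code.name)
    for child in code.children:
        print_tree(child, depth + 1)

print_tree(codebook)
```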
Diversity Amongst Research Teams
One of the challenges that emerged during data analysis of the baseline and first follow-up interviews was related to the diversity in experience, training, and knowledge amongst our team of coders. Qualitative data collection and analysis trainings were held prior to the start of each round of data analysis to ensure a consistent approach to coding across all 25 coders. These trainings reviewed the goals of qualitative research, types of qualitative data collection, interviewing techniques, deductive coding and theme generation approaches, and the NVivo software that would be used for all project coding and analyses, providing the coding team with foundational and applied knowledge about the HCS. However, the trainings could not eliminate differences in “insider knowledge” about HCS communities. For example, coders who were involved in other components of the HCS were more aware of the CTH intervention strategies chosen by a specific community, leading them to code transcripts differently than coders who did not have this information. To overcome these differences, the QAC discussed the epistemological stance of our coding and decided that coders should, in general, take a constructivist approach to coding rather than an objectivist one (Chamberlain, 2014). In practice, this position meant that coders were advised to derive their interpretations and meanings from their reading of the transcripts alone, which involved a methodological tradeoff: while we gained consistency in our coding process, we may have lost additional insights that an objectivist approach could have provided. Because small-group secondary analyses focused on targeted codes would proceed after the completion of baseline coding, using a combination of deductive and inductive coding, we believed these analyses would provide the opportunity to consider such nuanced insights.
Data Management and Data Sharing
To facilitate cross-site collaboration, it was important to establish reliable ways to securely store and manage the large volumes of data that the sites were collecting and analyzing. For HCS, RTI International serves as the data coordinating center (DCC), responsible for data management and statistical support. Under the original workflow, after completing baseline coding, sites sent the DCC a single NVivo 12 file (NVivo being the coding software that the QAC selected for all qualitative analysis in HCS, as it was the tool most familiar to and most used across sites) containing the primary coding of all transcripts for each round of interviews. The DCC then merged these files, and investigators could request code reports for manuscript analyses, which the DCC supplied as exported MS Word documents.
Two key issues emerged with the original workflow. The first concerned the sharing of personally identifiable information (PII) in the transcripts. The original workflow required sites to de-identify the transcripts prior to sharing them with the DCC. However, sites were concerned that de-identifying the transcripts would diminish interpretability and requested to send identifiable transcripts to the DCC instead. This change required the DCC to establish a revised data use agreement with the sites.
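To illustrate the tradeoff the sites raised, consider a minimal sketch (Python; a hypothetical illustration, not the HCS de-identification process) of pattern-based redaction. Even simple scrubbing removes the kinds of local references (names, agencies, programs) that can give a passage its interpretive context:

```python
# Minimal sketch (hypothetical; not the HCS pipeline) of pattern-based
# redaction. Scrubbing identifiers also strips local references that can
# make a passage interpretable, which is the concern the sites raised.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[NAME]"),  # toy name pattern only
]

def deidentify(text: str) -> str:
    """Apply each redaction pattern in turn; crude, so human review is still required."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

print(deidentify("Dr. Smith at 555-123-4567 runs the county outreach program."))
# -> "[NAME] at [PHONE] runs the county outreach program."
```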
The second issue pertained to the code reports provided back to the investigators. The MS Word exports of code reports neither retain the file structure that NVivo applies nor allow for examination of code overlap, making them unsortable by transcript identifiers and inefficient for in-depth analysis of the primary coded data. The DCC’s motivation for this approach was to preserve data integrity by not sharing the full NVivo file, and to better track use of the data. Moving forward, the QAC and the DCC have agreed on several process improvements, including using cases to classify the data along key analytic variables (e.g., site, coalition role, study intervention assignment) and providing investigators with a limited NVivo data file that maintains the file structure and more readily supports secondary analysis of relevant code overlaps.
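The sketch below (Python with pandas; hypothetical data and field names, illustrative only) shows the kind of structure this improvement restores: one row per coded passage carrying its case classifications, so reports stay sortable by transcript identifier and code overlap can be examined, both of which the flat MS Word exports prevented:

```python
# Minimal sketch (hypothetical data and field names): coded excerpts kept
# as one row per passage with case classifications attached.
import pandas as pd

excerpts = pd.DataFrame([
    {"transcript_id": "KY-014", "site": "KY", "role": "medical provider",
     "arm": "Wave 1", "codes": ["Health Services Environment"]},
    {"transcript_id": "MA-007", "site": "MA", "role": "health official",
     "arm": "Wave 2",
     "codes": ["Health Services Environment", "Community Risks"]},
])

# Find passages double-coded to two codes of interest (code overlap).
wanted = {"Health Services Environment", "Community Risks"}
overlap = excerpts[excerpts["codes"].map(lambda cs: wanted <= set(cs))]
print(overlap[["transcript_id", "site", "role", "arm"]])
```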
Saturation
Beyond the practical lessons described above, issues emerged around the concept of saturation that merit discussion. Saturation is a core concept in qualitative research (Glaser & Strauss, 1999) that refers to the point at which additional data collection will not produce new findings; it is typically used to justify ceasing data collection. For HCS, this concept was problematic to apply because our sampling strategy needed to achieve representation both within and across communities while working within the resource constraints of the study. These issues are common to all types of qualitative studies, yet a multi-site study has additional needs for consistency in methods and timelines. Further, given our interest in the implementation of the CTH, we purposefully sampled key informants with heterogeneous roles in their communities, raising questions about the level at which saturation should be assessed (e.g., the role, the community, the site, the study intervention). Ultimately, our sampling approach was determined a priori and was designed to achieve the broader study goal of understanding the context of implementation of the CTH (Sim et al., 2018). The strength of this approach is that it is well-oriented toward thematic saturation and code identification as well as framework-driven deductive analysis (Hennink et al., 2017; Saunders et al., 2018). Yet these strengths must be weighed against the limitations of this approach; namely, that our data have more breadth than depth in any single community and are less ideal for developing theory. This limitation was acceptable for HCS given that the study’s goal was to test and explicate the RE-AIM/PRISM model rather than to develop theory, and it aligns with our deductive-dominant coding approach. The impact of this limitation is also minimized in HCS by the multiple rounds of data collection that can strengthen code refinement, as well as by the additional data collection efforts that are part of HCS (e.g., surveys, fidelity reporting, case notes) that can help us triangulate our findings and develop in-depth case studies at the community level.
Challenges
Significant challenges arose in the IS qualitative evaluation related to its being one part of the larger HCS. Many researchers on the larger HCS are experienced quantitative scholars with a more limited understanding of qualitative evaluation, which created the need to advocate for sufficient resources to support the qualitative research activities. Additionally, competing goals and requirements taxed staff who were working on different aspects of the HCS and may have drawn attention away from the qualitative data collection and analysis process. Further, given the duration of the study, staff turnover occurred, and new interviewers and coders had to be added to the team throughout the process. While the complexity of the study was challenging for new staff entering at different phases of the project to understand, this challenge was mitigated by the established processes and trainings.
Considerations for Future Research
Given that the HCS is a multi-year study with multiple rounds of data collection, we have been able to incorporate what we learned during the baseline interviews and analysis process into subsequent rounds of data collection and analysis. One particularly important lesson was the need to carefully plan the timeframe for both our data collection and analysis processes. Due to the time required to obtain ethics approval and the need to quickly start the intervention in Wave 1 communities, baseline data collection occurred from late November 2019 through early January 2020, compressing interview schedules given limited interviewee availability over the holidays. In addition to scheduling interviews during a different part of the year for the next wave, our improved understanding of the time required to ensure consistency of coding across sites led us to schedule additional time for the primary coding process for these new data.
An additional consideration for future research relates to analytic techniques for the qualitative data. So far, our teams have completed primary coding of 686 interviews across two rounds of data collection and anticipate that this number will nearly double by completion of the next two rounds of data collection. Primary coding of each round of interviews, from codebook development to final coding, requires teams of 20-25 researchers approximately six to nine months to complete. While the teams have successfully sub-coded some themes for manuscript development, this process is resource intensive and increasingly complex given the incorporation of study design elements (i.e., the start of the CTH intervention). In short, the HCS has a Big Qual problem: how can we distill the data into trustworthy, interpretable, and meaningful components? This challenge is an emergent one for qualitative research because, to our knowledge, there are few qualitative studies of this scale. A critical component of the success of the HCS qualitative work is the level and duration of funding support from the National Institute on Drug Abuse (NIDA). This support, sustained for five years, reflects a large investment by NIDA in a multi-site, multi-method study and has allowed the sites to develop a robust process that facilitated collection and analysis of the Big Qual dataset. This situation has been extraordinary, and while qualitative work is becoming a more prominent feature of large-scale funded projects, the resources required to collect and analyze Big Qual datasets remain an obstacle.
Current qualitative methods may need to progress in order to lower the costs involved in managing datasets of this size and to work toward a more efficient and cognitively accessible process. Alternative approaches to traditional qualitative analysis, such as the breadth-and-depth approach (Davidson et al., 2019; Edwards et al., 2021), rapid analysis (Gale et al., 2019; Vindrola-Padros & Johnson, 2020), the matrix approach (Averill, 2002), qualitative comparative analysis (McAlearney et al., 2016), and natural language processing (Abram et al., 2020; Crowston et al., 2012; Leeson et al., 2019), all offer potentially innovative methodological tool sets. However, a thorough understanding of the tradeoffs of these approaches is lacking and deserves attention in future research to enable discovery using Big Qual.
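As one concrete example of the last approach named above, the sketch below (Python with scikit-learn; a hypothetical illustration, not an analysis performed in HCS) shows how natural language processing could provide a first-pass, machine-assisted view of a large corpus before human coding:

```python
# Minimal sketch (illustrative only): unsupervised topic modeling as a
# first-pass triage over a Big Qual corpus before human coding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

documents = [
    "Naloxone access expanded through the local pharmacy program.",
    "Stigma in the community keeps people from seeking treatment.",
    "The coalition is coordinating with the county jail on linkage to care.",
    "Transportation barriers limit access to the treatment center.",
]  # in practice: hundreds of transcripts or coded excerpts

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)

nmf = NMF(n_components=2, random_state=0)  # two topics for the toy corpus
weights = nmf.fit_transform(tfidf)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top_terms = [terms[j] for j in component.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top_terms)}")
```

Outputs like these would only suggest clusters for human review; they do not replace the consensus-driven coding process described above.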
Qualitative methods using a team science approach have been limited in their application in large, multi-site randomized controlled trials of health interventions (Lewin et al., 2009; Mannell & Davis, 2019). While this paper reports the experience so far of a large, multi-disciplinary, multi-site team, we are limited in that our focus is on a single study. We are hopeful that the perspectives we provide can inform future large-scale qualitative data collection and analysis projects that advance implementation science across settings. Incorporating qualitative methods is essential to understanding intervention uptake and maintenance, particularly for etiologically complex phenomena such as the opioid epidemic. Our experience provides practical guidance for future multi-site studies with large and experientially and disciplinarily diverse teams seeking to incorporate qualitative or mixed-methods components.
Acknowledgement
The authors would like to thank Dr. Ramona Olvera for her excellent assistance with this manuscript. This study protocol (Pro00038088) was approved by Advarra Inc., the HEALing Communities Study single Institutional Review Board. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, the Substance Abuse and Mental Health Services Administration, or the NIH HEAL Initiative℠. ClinicalTrials.gov identifier: NCT04111939.
Funding:
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the National Institutes of Health (NIH) and the Substance Abuse and Mental Health Services Administration through the NIH HEAL (Helping to End Addiction Long-term®) Initiative under award numbers UM1DA049394, UM1DA049406, UM1DA049412, UM1DA049415, UM1DA049417 (ClinicalTrials.gov Identifier: NCT04111939).
Footnotes
Conflict of Interest
The authors have no conflicts of interest to declare.
Ethics approval and consent to participate: All procedures were approved by Advarra Inc., the HEALing Communities Study single Institutional Review Board.
Competing interests: The authors declare they have no competing interests.
Contributor Information
Ann Scheck McAlearney, Ohio State University.
Daniel M. Walker, Ohio State University.
Karen Shiu-Yee, Ohio State University.
Erika L. Crable, Boston University.
Vanessa Auritt, Boston Medical Center.
Laura Barkowski, Boston Medical Center.
Evan J. Batty, University of Kentucky.
Anindita Dasgupta, Columbia University.
Dawn Goddard-Eckrich, Columbia University.
Hannah K. Knudsen, University of Kentucky.
Tara McCrimmon, Columbia University.
Ramona Olvera, Ohio State University.
Ariel Scalise, Boston Medical Center.
Cynthia Sieck, Ohio State University.
Jennifer Wood, University of Kentucky.
Mari-Lynn Drainoni, Boston University.
REFERENCES
- Abram MD, Mancini KT, & Parker RD (2020). Methods to Integrate Natural Language Processing Into Qualitative Research. International Journal of Qualitative Methods, 19, 1609406920984608. https://doi.org/10.1177/1609406920984608
- Averill JB (2002). Matrix Analysis as a Complementary Analytic Strategy in Qualitative Inquiry. Qualitative Health Research, 12(6), 855–866. https://doi.org/10.1177/104973230201200611
- Brower RL, Jones TB, Osborne-Lampkin L, Hu S, & Park-Gaghan TJ (2019). Big Qual: Defining and Debating Qualitative Inquiry for Large Data Sets. International Journal of Qualitative Methods, 18. https://doi.org/10.1177/1609406919880692
- Centers for Disease Control and Prevention. (2021, November 17). Drug Overdose Deaths in the U.S. Top 100,000 Annually. https://www.cdc.gov/nchs/pressroom/nchs_press_releases/2021/20211117.htm
- Chamberlain K. (2014). Epistemology and Qualitative Research. In Qualitative Research in Clinical and Health Psychology. Macmillan International Higher Education.
- Chandler RK, Villani J, Clarke T, McCance-Katz EF, & Volkow ND (2020). Addressing opioid overdose deaths: The vision for the HEALing communities study. Drug and Alcohol Dependence, 217, 108329. https://doi.org/10.1016/j.drugalcdep.2020.108329
- Crowston K, Allen EE, & Heckman R (2012). Using natural language processing technology for qualitative data analysis. International Journal of Social Research Methodology, 15(6), 523–543. https://doi.org/10.1080/13645579.2011.625764
- Davidson E, Edwards R, Jamieson L, & Weller S (2019). Big data, qualitative style: A breadth-and-depth method for working with large amounts of secondary qualitative data. Quality & Quantity, 53(1), 363–376. https://doi.org/10.1007/s11135-018-0757-y
- Drainoni M-L, Knudsen HK, Adams K, Andrews-Higgins SA, Auritt V, Back S, Barkowski LK, Batty EJ, Behrooz MR, Bell S, Chen S, Christopher M-C, Coovert N, Crable EL, Dasgupta A, Goetz M, Goddard-Eckrich D, Hartman JL, Heffer H, … McAlearney AS (2022). Community coalition and key stakeholder perceptions of the community opioid epidemic before an intensive community-level intervention. Journal of Substance Abuse Treatment, 108731. https://doi.org/10.1016/j.jsat.2022.108731
- Edwards R, Davidson E, Jamieson L, & Weller S (2021). Theory and the breadth-and-depth method of analysing large amounts of qualitative data: A research note. Quality & Quantity, 55(4), 1275–1280. https://doi.org/10.1007/s11135-020-01054-x
- Florence C, Luo F, & Rice K (2021). The economic burden of opioid use disorder and fatal opioid overdose in the United States, 2017. Drug and Alcohol Dependence, 218, 108350. https://doi.org/10.1016/j.drugalcdep.2020.108350
- Gale RC, Wu J, Erhardt T, Bounthavong M, Reardon CM, Damschroder LJ, & Midboe AM (2019). Comparison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in the Veterans Health Administration. Implementation Science, 14(1), 11. https://doi.org/10.1186/s13012-019-0853-y
- Glaser B, & Strauss A (1999). The Discovery of Grounded Theory: Strategies for Qualitative Research. Routledge.
- Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, Ory MG, & Estabrooks PA (2019). RE-AIM Planning and Evaluation Framework: Adapting to New Science and Practice With a 20-Year Review. Frontiers in Public Health, 7, 64. https://doi.org/10.3389/fpubh.2019.00064
- Hall KL, Vogel AL, Stipelman B, Stokols D, Morgan G, & Gehlert S (2012). A Four-Phase Model of Transdisciplinary Team-Based Research: Goals, Team Processes, and Strategies. Translational Behavioral Medicine, 2(4), 415–430. https://doi.org/10.1007/s13142-012-0167-y
- Hennink MM, Kaiser BN, & Marconi VC (2017). Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough? Qualitative Health Research, 27(4), 591–608. https://doi.org/10.1177/1049732316665344
- Hossain N, & Scott-Villiers P (2019). Ethical and Methodological Issues in Large Qualitative Participatory Studies. American Behavioral Scientist, 63(5), 584–603. https://doi.org/10.1177/0002764218775782
- Hsieh H-F, & Shannon SE (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
- Knudsen HK, Drainoni M-L, Gilbert L, Huerta TR, Oser CB, Aldrich AM, Campbell ANC, Crable EL, Garner BR, Glasgow LM, Goddard-Eckrich D, Marks KR, McAlearney AS, Oga EA, Scalise AL, & Walker DM (2020). Model and approach for assessing implementation context and fidelity in the HEALing Communities Study. Drug and Alcohol Dependence, 217, 108330. https://doi.org/10.1016/j.drugalcdep.2020.108330
- Leeson W, Resnick A, Alexander D, & Rovers J (2019). Natural Language Processing (NLP) in Qualitative Public Health Research: A Proof of Concept Study. International Journal of Qualitative Methods, 18, 1609406919887021. https://doi.org/10.1177/1609406919887021
- Lewin S, Glenton C, & Oxman AD (2009). Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: Methodological study. BMJ, 339, b3496. https://doi.org/10.1136/bmj.b3496
- Mannell J, & Davis K (2019). Evaluating Complex Health Interventions With Randomized Controlled Trials: How Do We Improve the Use of Qualitative Methods? Qualitative Health Research, 29(5), 623–631. https://doi.org/10.1177/1049732319831032
- McAlearney AS, Walker D, Moss AD, & Bickell NA (2016). Using Qualitative Comparative Analysis of Key Informant Interviews in Health Services Research: Enhancing a Study of Adjuvant Therapy Use in Breast Cancer Care. Medical Care, 54(4), 400–405. https://doi.org/10.1097/MLR.0000000000000503
- Salas E, Reyes DL, & McDaniel SH (2018). The science of teamwork: Progress, reflections, and the road ahead. The American Psychologist, 73(4), 593–600. https://doi.org/10.1037/amp0000334
- Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, Burroughs H, & Jinks C (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality & Quantity, 52(4), 1893–1907. https://doi.org/10.1007/s11135-017-0574-8
- Sim J, Saunders B, Waterfield J, & Kingstone T (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21(5), 619–634. https://doi.org/10.1080/13645579.2018.1454643
- Skillman M, Cross-Barnet C, Friedman Singer R, Rotondo C, Ruiz S, & Moiduddin A (2019). A Framework for Rigorous Qualitative Research as a Component of Mixed Method Rapid-Cycle Evaluation. Qualitative Health Research, 29(2), 279–289. https://doi.org/10.1177/1049732318795675
- Stokols D, Hall KL, Taylor BK, & Moser RP (2008). The science of team science: Overview of the field and introduction to the supplement. American Journal of Preventive Medicine, 35(2 Suppl), S77–89. https://doi.org/10.1016/j.amepre.2008.05.002
- The HEALing Communities Study Consortium. (2020). The HEALing (Helping to End Addiction Long-term℠) Communities Study: Protocol for a cluster randomized trial at the community level to reduce opioid overdose deaths through implementation of an integrated set of evidence-based practices. Drug and Alcohol Dependence, 217, 108335. https://doi.org/10.1016/j.drugalcdep.2020.108335
- Tuckman BW, & Jensen MAC (2010). Stages of small-group development revisited. Group Facilitation: A Research & Applications Journal, 10(1), 43–48.
- Vindrola-Padros C, & Johnson GA (2020). Rapid Techniques in Qualitative Research: A Critical Review of the Literature. Qualitative Health Research, 30(10), 1596–1604. https://doi.org/10.1177/1049732320921835
- Walker DM, Childerhose JE, Chen S, Coovert N, Jackson RD, Kurien N, McAlearney AS, Volney J, Alford DP, Bosak J, Oyler DR, Stinson LK, Behrooz M, Christopher M-C, & Drainoni M-L (2022). Exploring perspectives on changing opioid prescribing practices: A qualitative study of community stakeholders in the HEALing Communities Study. Drug and Alcohol Dependence, 233, 109342. https://doi.org/10.1016/j.drugalcdep.2022.109342
- Wisdom JP, Cavaleri MA, Onwuegbuzie AJ, & Green CA (2012). Methodological Reporting in Qualitative, Quantitative, and Mixed Methods Health Services Research Articles. Health Services Research, 47(2), 721–745. https://doi.org/10.1111/j.1475-6773.2011.01344.x


