PLOS ONE. 2019 Nov 15;14(11):e0224951. doi: 10.1371/journal.pone.0224951

Implementation of a screening, brief intervention and referral to treatment programme for risky substance use in South African emergency centres: A mixed methods evaluation study

Claire van der Westhuizen 1,*, Bronwyn Myers 2,3, Megan Malan 1, Tracey Naledi 4,5, Marinda Roelofse 4, Dan J Stein 6, Sa’ad Lahri 7,8, Katherine Sorsdahl 1
Editor: Cecilia Benoit
PMCID: PMC6858052  PMID: 31730623

Abstract

Background

Screening, brief intervention, and referral to treatment (SBIRT) for risky substance use is infrequently included in routine healthcare in low-resourced settings. A SBIRT programme, adopted by the Western Cape provincial government within an alcohol harm reduction strategy, employed various implementation strategies executed by a diverse team to translate an evidence-based intervention into services at three demonstration sites before broader programme scale-up. This paper evaluates the implementation of this programme delivered by facility-based counsellors in South African emergency centres.

Method

Guided by the Consolidated Framework for Implementation Research, this mixed methods study evaluated the feasibility, acceptability, appropriateness and adoption of this task-shared SBIRT programme. Quantitative data were extracted from routinely collected health information. Qualitative interviews were conducted with 40 stakeholders in the programme’s second year.

Results

In the first year, 13 136 patients were screened and 4 847 (37%) patients met criteria for risky substance use. Of these patients, 83% received the intervention, indicating programme feasibility. The programme was adopted into routine services and found to be acceptable and appropriate, particularly by stakeholders familiar with the emergency environment. These stakeholders highlighted the burden of substance-related harm in emergency centres and favourable patient responses to SBIRT. However, some stakeholders expressed scepticism of the behaviour change approach and programme compatibility with emergency centre operations. Furthermore, adoption was both facilitated and hampered by a top-down directive from provincial leadership to implement SBIRT, while rapid implementation limited effective engagement with a diverse stakeholder group.

Conclusion

This is one of the first studies to address SBIRT implementation in low-resourced settings. The results show that SBIRT implementation and adoption was largely successful, and provide valuable insights that should be considered prior to implementation scale-up. Recommendations include ensuring ongoing monitoring and evaluation, and early stakeholder engagement to improve implementation readiness and programme compatibility in the emergency setting.

Introduction

Globally, low- and middle-income countries (LMICs) have limited resources to prevent the harms associated with risky substance use [1, 2]. As in many LMICs, risky substance use is highly prevalent in South Africa, with 13% of adults meeting criteria for a lifetime substance use disorder and 43% of alcohol users reporting heavy episodic drinking [3, 4]. This modifiable risk factor contributes to South Africa’s quadruple burden of disease due to HIV and other infectious diseases, injuries and non-communicable diseases [5]. The World Health Organization (WHO) has supported the scale up of substance use screening, brief intervention, and referral to treatment (SBIRT) within healthcare services as a means of reducing risk of injury and other health consequences associated with risky substance use [6]. Such a SBIRT programme often involves the coordination of input from multiple disciplines and management structures, and may include a continuum of care from pre-screening and screening to brief intervention, brief treatment and referral to treatment for high-risk cases [7].

SBIRT for risky substance use is an innovation which has been tested and implemented in diverse settings [8]. In some high-income countries (HICs), such as the United States, SBIRT has become a routine part of primary and emergency or trauma care [9]. As such, the bulk of the evidence has emerged from HICs [10, 11]. Research on SBIRT in LMICs is relatively sparse, and a few countries dominate this literature, notably Brazil [12, 13], South Africa [14–16] and Thailand [17, 18]. In South Africa specifically, SBIRT has been tested in antenatal care; HIV, TB and chronic disease services; and emergency centres [15, 19–23]. In South Africa, work is ongoing to establish the effectiveness of interventions to address substance use [23, 24].

Although there is a substantial evidence base for the efficacy of SBIRT, several questions remain unanswered. First, in certain settings, particularly emergency centres, outcomes are inconsistent [25–27], and SBIRT seems to benefit certain populations more than others [28, 29]. Furthermore, there are gaps in the SBIRT evidence for certain populations, such as patients presenting to LMIC emergency centres. Second, there is uncertainty regarding how best to deliver and implement SBIRT, and about which contextual factors may affect SBIRT implementation, particularly given the inconsistent findings [7, 11, 30]. Factors which have been explored mainly fall under the 'inner setting' of the Consolidated Framework for Implementation Research (CFIR) [37] and include provider knowledge, attitudes and behaviours, including stigma related to substance use and behaviour change [31–33]; supervision and support structures for implementation [34]; and leadership for SBIRT programmes within implementing organisations [35, 36]. However, certain aspects of the implementation context, particularly in the 'outer setting', have been neglected, such as policies and directives in the environment beyond the organisation where an intervention is implemented [30]. Third, there are questions regarding how the use of frameworks, such as the CFIR, can explain variation in SBIRT implementation outcomes. This approach has not been widely applied [38], as most studies using the CFIR identify barriers and facilitators to implementation without linking the findings to implementation outcomes, such as adoption or appropriateness [38, 39]. Implementation research is embedded in some HIC healthcare organisations [40, 41] and is used to inform clinical practice. This is less common in LMICs, although examples of implementation research conducted alongside real-world implementation exist, mainly in primary care [12, 20].

Recently, a unique opportunity arose to address these gaps in the literature by evaluating the implementation of a SBIRT programme, labelled the Teachable Moment programme, in emergency centre settings in South Africa. We employed the CFIR to examine contextual factors, within the implementing organisation and in the outer policy setting, which influenced SBIRT implementation in the low-resourced setting, adding to the sparse LMIC literature on real-world SBIRT service provision. Furthermore, we used our CFIR-based findings to explore the implementation outcomes of the SBIRT programme.

This programme had been tested in a randomised controlled trial (RCT) completed in 2013 in collaboration with the Western Cape Department of Health [15]. The trial, conducted in Western Cape emergency centres, found that the SBIRT programme was effective, acceptable and feasible to implement when task-shared to facility-based counsellors who were added to the emergency care team [15, 42]. The Western Cape Department of Health put forward the SBIRT programme for implementation when the Office of the Premier of the Western Cape province introduced an alcohol harm reduction gamechanger strategy [43], based on the WHO's alcohol harm reduction recommendations [44]. The SBIRT programme was adopted as a gamechanger intervention. The existing collaboration with the original research team was the foundation for further collaborative work in implementing the Teachable Moment programme in three emergency centres serving communities with high levels of alcohol-related injuries. Funding was provided by the Premier's office, and the Department of Health contracted local non-profit organisations to recruit health counsellors to deliver the programme. These counsellors were trained in July 2016 and placed in the emergency services as part of the emergency centre team, and the programme was rolled out in August 2016. The programme involves: (i) screening adult (≥18 years) patients with non-life-threatening injuries or conditions for risky alcohol or drug use using the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST); (ii) providing a brief motivational interviewing-based intervention session for patients reporting recent substance use and ASSIST scores greater than 6 for alcohol or 1 for drugs [45]; (iii) offering two further intervention sessions based on problem-solving therapy; and (iv) referring people who screen at high risk for substance-related harms (ASSIST scores above 26) to the regional Department of Social Development for further treatment.
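
As an illustration of the decision rules just described, the sketch below encodes the published eligibility and referral thresholds in Python. It is a minimal, hypothetical rendering for clarity only; the function and field names are illustrative assumptions, not part of the programme's own screening materials.

```python
# Minimal sketch (not the programme's software) of the published thresholds:
# ASSIST > 6 for alcohol or > 1 for any drug -> brief intervention;
# ASSIST > 26 -> referral to the regional Department of Social Development.
def triage_assist(alcohol_score: int, drug_scores: dict) -> str:
    """Classify a screened patient according to the Teachable Moment thresholds."""
    max_drug = max(drug_scores.values(), default=0)
    if alcohol_score > 26 or max_drug > 26:
        return "refer_to_treatment"   # high risk: referral for further treatment
    if alcohol_score > 6 or max_drug > 1:
        return "brief_intervention"   # risky use: offer the MI-based session(s)
    return "screen_only"              # below threshold: no intervention offered

# Example: moderate alcohol score, low cannabis score -> brief_intervention
print(triage_assist(alcohol_score=12, drug_scores={"cannabis": 1}))
```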

Several implementation strategies (see Fig 1) were employed to facilitate the integration of the programme into usual care from the pre-implementation phase to initial implementation and maintenance of programme operations. Various stakeholders were involved at different stages. The provincial Department of Health and the researchers provided initial support in the pre-implementation and implementation phases, decreasing their involvement in the maintenance phase. The researchers had completely handed over their support roles by the maintenance phase. Through the implementation phases, the regional Department of Social Development and the district Department of Health offices increased their support and by the maintenance phase these offices were driving the SBIRT programme, supported by the non-profit organisations.

Fig 1. Implementation strategies and stakeholders responsible by implementation phase.


The implementation strategies were executed by a diverse implementation team to translate an evidence-based intervention into usual services at three demonstration sites before considering broader scale-up of the programme. The current study aimed to evaluate the first two years of implementation of a SBIRT programme in South African emergency centre settings and to identify factors, as defined by the CFIR, that influenced specific implementation outcomes, namely the feasibility, acceptability, appropriateness and adoption of the evidence-based programme. Task-sharing or task-shifting describes the use of non-specialists to deliver services [48]. In LMICs, the available human and financial resources for SBIRT are limited, and task-sharing approaches using lay health workers to deliver interventions are more feasible in these contexts [46] than some HIC programmes in which nurses, doctors or social workers deliver task-shared interventions [47]. Evaluations of such low-cost programmes using lay health workers, as in the Teachable Moment programme, are vital to inform future SBIRT efforts in low-resourced settings.

Methods

Mixed methods were employed to evaluate implementation outcomes and are reported according to accepted guidelines [49]. A sequential explanatory study design was used [50], whereby quantitative routine programme data from the first year of the programme were accessed and analysed, followed by qualitative interviews with stakeholders during the second year of the programme. We assessed the feasibility, acceptability, adoption and appropriateness of the Teachable Moment programme to characterise its implementation. Since there is variation in how these implementation outcomes are defined [51], we used the definitions of feasibility, acceptability, adoption and appropriateness described in Proctor's taxonomy of implementation outcomes [39]. See Table 1 for the definitions used and for the operationalisation of these terms for this study.

Table 1. Definitions of implementation outcomes.

Feasibility
Definition*: "Feasibility is defined as the extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting."
Operationalised for the study: We operationalised feasibility according to the number of patients meeting criteria for risky substance use who received at least the first session of the intervention. In the previous RCT, 20% of eligible patients refused study participation. Since the Teachable Moment programme was implemented in services without compensation for patients' time, we allowed for 40% refusal. Thus, we operationalised success for this construct as at least 60% of eligible patients receiving at least one session.
Acceptability
Definition*: "Acceptability is the perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory."
Operationalised for the study: We operationalised acceptability as stakeholders' positive perceptions of aspects of the Teachable Moment intervention, such as the aim of the intervention, the behaviour change approach used, the intervention content, the supervision model, the addition of counsellors to the emergency centre, and so on.
Adoption
Definition*: "Adoption is defined as the intention, initial decision, or action to try or employ an innovation or evidence-based practice. Adoption also may be referred to as 'uptake'."
Operationalised for the study: We operationalised adoption as: (i) stakeholders' commitment to implementing and supporting Teachable Moment programme operations; (ii) stakeholders' integration of the Teachable Moment programme into their organisational structures and services; and (iii) cooperation between the stakeholders implementing the programme, evidenced by regular communication, sharing of data and joint action taken.
Appropriateness
Definition*: "Appropriateness is the perceived fit, relevance, or compatibility of the innovation or evidence based practice for a given practice setting, provider, or consumer; and/or perceived fit of the innovation to address a particular issue or problem."
Operationalised for the study: We operationalised appropriateness as: (i) the views of stakeholders on how the Teachable Moment programme fit into usual service operations, particularly acute patient care; (ii) stakeholders' perceptions of patients being open and able to participate in the programme at the acute emergency centre visit; and (iii) stakeholders' perceptions of operational priorities and desired allocation of resources.

*Definitions of implementation outcomes as described in Proctor’s taxonomy

Constructs described in the Consolidated Framework for Implementation Research (CFIR) [37] were used to characterise the factors affecting these implementation outcomes and as such were useful in guiding the qualitative data collection, analysis and reporting of the findings. The CFIR was chosen since it was developed as a ‘meta-theoretical framework’ to conceptualise factors which influence the introduction and sustained use of innovations in health services [37]. Certain CFIR constructs were not suitable for this evaluation and were excluded (see Appendix 1). The use of Proctor’s taxonomy of implementation outcomes in conjunction with CFIR constructs is in accordance with implementation research recommendations regarding the use of theoretical frameworks to link factors affecting implementation to observed implementation outcomes [38]. The methods used for the quantitative process evaluation and qualitative components of this study are described separately.

Quantitative process evaluation

SBIRT programme data were collected by the Department of Health during the first year of the programme, from 1 August 2016 to 31 July 2017. The screening questionnaire administered at the acute visit covered sociodemographics, the presenting complaint, the number of substance use days in the preceding month and the ASSIST. Process data were also collected, including the numbers of patients screened, eligible patients and sessions delivered. Quantitative data were imported into SPSS version 22 and descriptive statistics were generated. These data were used to assess the feasibility of the Teachable Moment programme.
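
The quantitative analysis was conducted in SPSS; purely as an illustration of the kind of descriptive summary reported, a minimal pandas sketch might look as follows. The file name and column names are assumptions, not the Department of Health's actual data structure.

```python
# Illustrative sketch only: the study used SPSS, but an equivalent descriptive
# summary of routine process data could be produced with pandas.
import pandas as pd

screening = pd.read_csv("teachable_moment_screening.csv")  # hypothetical extract

summary = {
    "patients_screened": len(screening),
    "mean_age": round(screening["age"].mean(), 1),
    "percent_men": round((screening["sex"] == "male").mean() * 100, 1),
    "eligible": int((screening["eligible"] == 1).sum()),
    "received_first_session": int((screening["sessions_received"] >= 1).sum()),
}
print(summary)
```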

Qualitative enquiry of factors affecting implementation

Participants

All stakeholders directly involved with the Teachable Moment programme at the provincial, district/regional, non-profit organisation and hospital levels were approached to participate in the study. Two individuals refused participation. A total of 27 individual interviews and 3 focus groups were conducted. Stakeholders were selected based on their involvement with the gamechanger initiative and SBIRT service operations. Those who consented to the interviews included: (i) two provincial Department of Social Development officials involved in policy and programme planning for the Department at provincial level; (ii) two provincial Department of Health officials involved in policy and programme planning for the Department at provincial level; (iii) three officials from the Department of the Premier involved in policy and programme planning for the Premier's Office provincial gamechanger initiatives, including the Alcohol Harm Reduction gamechanger; (iv) two regional office Department of Social Development officials generally responsible for implementing programmes and providing services at regional level; (v) three district office Department of Health officials generally responsible for implementing programmes at district level; (vi) two hospital managers who oversee all hospital operations and programmes in their respective hospitals; (vii) two emergency centre (EC) nursing managers responsible for EC service operations; (viii) two EC medical managers responsible for EC service operations; (ix) six non-profit organisation staff responsible for providing services in healthcare facilities, as contracted by the Department of Health; and (x) three counsellor supervisors employed for the Teachable Moment programme. The Teachable Moment counsellors, who were employed specifically for the SBIRT programme, took part in one of three focus group discussions. The stakeholders interviewed from district and regional offices are programme implementers within the health and social development systems. In these offices, their role could be categorised as that of 'middle managers', in that senior managers initially agreed to the Teachable Moment programme implementation but then assigned all responsibility for implementation to the district and regional office stakeholders.

Procedure

We conducted semi-structured individual interviews with stakeholders in the second year of the programme. These interviews were either conducted at the participants’ place of work or at the Centre for Public Mental Health offices, after obtaining written informed consent. Focus groups were conducted with the counsellors at each site. All interviews were audio-recorded and transcribed. Ethical approval was given by the University of Cape Town Human Research Ethics Committee (094/2017), the Western Cape Department of Social Development Provincial Research Ethics Committee (12/1/2/4) and the Western Cape Department of Health Provincial Health Research & Ethics Committee (WC_2017RP39_880), as well as the ethics committees at each of the hospitals.

Interview schedule

The interview schedule was developed based on the CFIR, complemented by the authors' knowledge of the emergency centre setting gained through: (i) prior experience of conducting the SBIRT randomised controlled trial in these settings, and (ii) clinical experience in South African emergency centres. Questions covered experiences with the programme and opinions on the intervention, the implementation process, and barriers and facilitators to SBIRT implementation.

Analysis

Quantitative data were extracted from the routine programme data and imported into SPSS version 25, and descriptive statistics were generated. Qualitative data were transcribed and imported into NVivo 12 for analysis. The framework approach was utilised [52] and we followed the five steps of (i) familiarisation, (ii) identification of a thematic framework, (iii) indexing, (iv) charting and (v) mapping and interpretation. For the framework, different groups of stakeholders were used as the unit of analysis, for example district/regional officials or counsellors. We coded the data according to the following implementation outcomes [53]: acceptability, appropriateness and adoption (see Table 1 for how these outcomes were operationalised). Two researchers coded the transcripts, and any coding disagreements were discussed and resolved. The coding was then compared to calculate the kappa statistic; a kappa of 0.93 was achieved, indicating high inter-coder reliability.
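
As a minimal sketch of the inter-coder reliability check, the snippet below computes Cohen's kappa from two coders' parallel code assignments. The variable contents are illustrative only, not the study data.

```python
# Sketch assuming the two coders' code assignments per transcript excerpt were
# exported as parallel lists of labels; uses scikit-learn's Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["acceptability", "adoption", "appropriateness", "acceptability", "adoption"]
coder_2 = ["acceptability", "adoption", "appropriateness", "acceptability", "appropriateness"]

print(f"Cohen's kappa: {cohen_kappa_score(coder_1, coder_2):.2f}")  # the study reports 0.93
```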

Results

The results are described below by implementation outcome. Programme process data are detailed under the feasibility outcome.

Patient characteristics and feasibility

Over the first year of the programme, 13 136 patients were screened, of whom 7 163 (55%) were men; the mean age of screened patients was 37 years. Of the patients screened, 4 847 (37%) met criteria for risky substance use on the ASSIST and were offered the programme; of these, 3 577 (74%) were male and the mean age was 33 years. Risky alcohol use alone was identified in 3 301 eligible patients (68%), risky alcohol and drug use in 893 patients (18%) and risky drug use alone in 653 patients (14%). It proved feasible to deliver the first session at the acute visit, with 4 005 (83%) of the 4 847 eligible patients receiving the first session, while only 93 second and third sessions were delivered across the sites.
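
For readers who wish to check the reported proportions against the pre-specified 60% feasibility threshold in Table 1, a short worked calculation using only the figures reported above is shown below.

```python
# Quick check (not part of the published analysis) of the reported proportions.
screened = 13_136
eligible = 4_847
received_first_session = 4_005
feasibility_threshold = 0.60  # pre-specified in Table 1

print(f"Eligible / screened: {eligible / screened:.0%}")                     # ~37%
print(f"First session / eligible: {received_first_session / eligible:.0%}")  # ~83%
print("Feasibility threshold met:", received_first_session / eligible >= feasibility_threshold)
```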

Acceptability

Many stakeholders, particularly those more involved in the programme activities such as the counsellors and emergency centre staff, reported finding the Teachable Moment programme acceptable. First, they felt that the programme was meeting a need in communities by addressing an underlying cause of crime, violent injury and other substance-related harms. This was seen as particularly important for emergency centre patients due to the high burden of substance-related injuries in this group, as highlighted by an emergency centre staff member:

“One of the issues that we have in emergency medicine is we never go to the root cause of why we see so much violence in emergency centres … A large percentage of our patients that present to us with resulting trauma, we can predict it will be on a Friday, Saturday, Sunday night … we are not addressing those issues. It’s like putting a bandage over a gaping wound..” Hospital staff (participant 27)

Second, stakeholders’ reports of patients’ responses to the intervention during their emergency centre visit as well as the changes they described implementing in their lives contributed to stakeholders’ positive views of the programme. A non-profit organisation manager reported receiving letters from patients about their good experiences with the programme and counsellors described positive feedback that they received from programme recipients. The counsellors reported that this positive feedback motivated them in their work and increased their loyalty to the programme. One counsellor recounted a patient’s positive response:

“Patients that is like, ‘Oh wow, I did not know that you guys are offering a service here and I have been looking for help. I have got a substance problem and I really want help, and I never knew where to go, but thank you guys for, you know, putting your hand out and wanting to help me.’” Counsellor (participant 3)

Third, as reported by all the emergency centre staff interviewed, the programme complemented the emergency centre activities by providing an in-house service which the emergency centre staff began to use for referrals once the programme was integrated into routine services. Additionally, staff were pleased that the programme operations did not interfere with the acute clinical care of the patients; one hospital staff member noted that the programme was “non-invasive” and did not slow them down or add to the workload of the emergency centre staff.

Some stakeholders who were more removed from the programme operations found the programme to be less acceptable, partly due to: (i) their lack of awareness of the evidence base for task-sharing approaches and behaviour change interventions in healthcare settings; (ii) stigma regarding substance users and behaviour change; and (iii) concerns about the intervention's transferability to LMICs. Many stakeholders were not familiar with a task-sharing approach, being adamant that a professional should deliver the intervention, with one provincial official saying, "That is our biggest criticism … that social workers should have been considered for the actual intervention." One sceptical stakeholder commented, "I respect the WHO [World Health Organization], but do local research … does the WHO address patients in [name of local community], noting their social context?" This view was contested, as others were aware of the work that had been done in South Africa and believed that the evidence provided "a good basis to start from", which made these stakeholders more accepting of the programme. Positive beliefs about the effectiveness of the behaviour change approach underpinning the intervention were generally held by stakeholders who were familiar with the emergency centre setting and those more involved in programme delivery, particularly the counsellors. One stakeholder stated, "I don't personally think you can change the behaviour of those people through these sessions". Other stakeholders were also doubtful, and these opinions appeared to be based on their own beliefs about substance users being "addicts" and prone to "lying" and promising "that they want to change but tomorrow they are doing the other thing".

Another contentious component of the programme was the weekly supervision of the counsellors. The majority of stakeholders were satisfied with the supervision, noting that it provided an opportunity to discuss and address problems and allowed quality improvement strategies to be implemented. Although the counsellors' opinions on supervision were positive, a few non-profit organisation staff members felt that the supervision sessions were unnecessary or excessive. Some felt that the sessions made staff management difficult, with one non-profit organisation manager feeling that the counsellor supervisor treated the counsellors "with kid gloves", allowing the staff to "take advantage and not do what they were supposed to." The manager described difficulties in pushing her staff to reach high targets for the number of patients screened, and felt that the support provided to the counsellors in supervision encouraged excuses from staff for not reaching their targets.

Appropriateness

The location of the programme in emergency centres also evoked mixed responses, with hospital staff and counsellors generally endorsing the appropriateness of the programme for this setting. One counsellor reported: “the programme is working perfectly in the EC [emergency centre] because the EC never closes; it is 24/7.” A few stakeholders believed that emergency centres are appropriate settings for the programme as counsellors can provide immediate help for those wanting to change their substance use behaviour. Stakeholders highlighted the opportunities to take advantage of the so-called teachable moment when a patient may be considering the negative aspects of excessive substance use.

“One of the stories I have heard is the fact that somebody had a light bulb moment … So they said, "Yoh, Dokter. Ek het nie geweet ek suip so baie nie." [Yoh, Doctor. I didn’t know I drink so much.] So while I’m putting stitches in his head, he is like, oh this is a problem. I think this is what we are trying to get. It’s pointing out to people that this is not normal.” Hospital staff (participant 27)

Other stakeholders were sceptical, mainly due to their perceptions of emergency centre patients’ physical condition and sobriety at the time of screening and intervention, with one stakeholder believing that the emergency centre patients are “intoxicated … not all there”. Another stakeholder thought that it would be “very difficult to get real or solid information” from the patients. Additional potential barriers identified by stakeholders regarding the emergency centre programme included patients’ concerns around confidentiality in this busy setting, as well as patients’ resistance to being screened and receiving an intervention as a result of the long waiting times. This barrier was particularly problematic at one of the sites and is described by a provincial stakeholder: “patients are irritated when they [the counsellors] get to them because they sit for three days, for two days, for long hours … If I am sitting there for three days, I do not feel like speaking to a counsellor.” However, staff at another site mentioned that the counsellors’ activities provided a welcome diversion for some patients which improved the atmosphere in the waiting room and triage area:

“But most of the patients they like it because it is an icebreaker, because some of them have been sitting there for a while without nothing happening in between. So, we call them … the nurses they say we cool them off. Yes. We cool them off.” Counsellor (participant 30)

Some stakeholders, particularly hospital staff, believed that although the programme operations were acceptable and compatible with the setting, it would be more appropriate to spend resources on other urgent operational priorities, such as staffing, equipment and psychosocial support for emergency centre staff, which should happen "as routinely as you would do a blood pressure". Further priorities directly associated with patient care included improving waiting times, adherence counselling for patients, and clinical support for psychiatric and self-harm cases seen in the emergency centre. A stakeholder outlined their thoughts on these relative priorities:

“This is a great intervention, right. … but for example like when you struggle; you don’t have gloves and then you struggle with basic consumables and you struggle with patients on the floor. I don’t know whether the prevention thing relating to alcohol is necessarily something I would invest a lot of my money in when I don’t have needles, syringes, gloves, beds. You know if you told me the study is gonna cost R 500 000 and I can maybe get 25 trolleys versus the programme, I would choose the trolleys.” Hospital staff (participant 9)

Adoption

One of the main facilitators regarding the adoption of the programme was the top-down directive from the provincial government departments, accompanied by dedicated funding from the Premier’s office, to implement the service in the demonstration sites. One emergency centre staff member illustrated this by saying that implementers should use the names of the Head of the Western Cape Department of Health and the Premier and stress that they are “heavily invested in the programme” in order for management to agree to programme implementation. Another stakeholder at provincial level stated that it was easier to approach district offices and hospital management saying, “I am asking you to do this, but I will provide you with resources to be able to do that.” Further impetus for programme adoption was provided by the Ministry, with the provincial Minister of Health visiting the sites, causing the hospital staff to be more welcoming of the programme. One of the counsellors mentioned that the hospital staff thought that the programme would fail until “the Minister of Health wants to come; that is when they think ‘Wow, they are getting there’.”

Although a number of stakeholders agreed that such a top-down directive was necessary for programme adoption, this approach functioned as a barrier in some instances. One official mentioned that the initiation of the Teachable Moment programme did not follow the usual processes, whereby Department of Health district offices would identify a need and request support from the provincial office. For the Teachable Moment programme, the provincial office approached the districts with a need and an appropriate intervention, which some felt was starting "the wrong way around". Furthermore, a tight timeline accompanied the top-down directive, necessitating a rapid implementation process and leaving some stakeholders feeling that they were not prepared for implementation and did not have the opportunity to provide input or adapt the process for their setting, particularly those tasked with implementing the services at district/regional and hospital levels. This combination of factors influenced stakeholders' attitudes and their adoption of the programme:

“Especially the way it was introduced to it was just like, yeah thrown in your lap. … There is just a lot of negativity for me around this project. Even if this project could save the world tomorrow, I am not sure that I would buy into it. Because of the negative start that it had, honestly.” District/regional official (participant 1)

A further barrier to adoption was the incompatibility of the Teachable Moment programme operations with the non-profit organisation model functioning within the HIV/AIDS, sexually transmitted infections and tuberculosis (HAST) services. The non-profit organisation services use community health workers, labelled HAST counsellors, during usual clinic operating hours on weekdays, whereas the Teachable Moment programme includes weekend and night shifts. These health workers all undergo the same basic training, which does not include the psychological counselling skills employed in the Teachable Moment programme, and they are not provided with the intensive supervision necessary for implementing a psychological counselling programme. Thus, the Teachable Moment counsellors were seen as "different" and "not part of the bigger group of counsellors" employed by the organisation. This caused difficulties in that the Teachable Moment counsellors could not slot into other sites according to operational needs, and the other health workers could not cover gaps in the Teachable Moment programme roster. Additionally, the non-profit organisations were not familiar with the Teachable Moment model, only one of them had worked in mental health services previously, and the rapid implementation did not allow much consultation with the organisations on the model. Logistical issues, such as compiling a day and night shift roster, also proved difficult.

The involvement of diverse stakeholders from various sectors, and levels within departments, both hindered and facilitated the programme implementation. One of the main barriers to implementation was the diffusion of responsibility that resulted from this intersectoral approach with “too many cooks stirring the pudding”. Stakeholders at the district/regional and hospital levels, as well as non-profit organisation stakeholders, reported not taking ownership of the programme since the Department of Health provincial office was driving the implementation, and due to the assistance provided by academic partners. Many stakeholders were confused by the researcher involvement, viewing the programme as just “another research project” and not the implementation of services. This confusion resulted in some stakeholders distancing themselves from the programme implementation. Some did highlight benefits of researcher involvement, including the value added by the researchers to programme monitoring and the facilitation of ongoing feedback loops to improve the programme iteratively.

Stakeholders at provincial and district/regional level also mentioned that this intersectoral approach improved communication between the Departments of Social Development and Health at the district/regional levels, where before the Teachable Moment programme the district/regional offices were “two islands” and the officials “were not able to swim over to one another”. In the second year of the programme, this situation had changed significantly:

“You know, for me I can see that Health is now changing their stance or perception or the ownership of it if you know what I mean. Because remember … substance abuse falls on Social Development. However, we all know it is a public health matter. So bridging those two together has been a major task for us … So, for the first time our clinical setting or the provincial setting will speak to our regional setting–our DSD [Department of Social Development] setting for the first time. So, for me personally, I think that is a great achievement.” Provincial official (participant 33)

Discussion

This study contributes to filling the SBIRT evidence gap highlighted in the introduction in the following ways. First, the study is one of the first to examine the implementation of an evidence-based SBIRT programme in an LMIC emergency centre setting and offers recommendations for SBIRT implementation in similar settings. Second, contextual factors influencing implementation were identified in the ‘inner setting’, such as individuals’ SBIRT knowledge and beliefs, as well as in the ‘outer setting’, such as programme leadership from outside the SBIRT implementation setting. These findings provide a broad perspective of the environment in which healthcare innovations are implemented. Third, the study uses CFIR-based findings to characterise SBIRT implementation outcomes which can inform future implementation and scale-up strategies. The findings are discussed below by implementation outcome.

The Teachable Moment programme proved to be feasible, with over 13 000 patients screened and over 4 000 patients receiving an evidence-based brief intervention in the first year of operation. While delivery of second and third sessions did not prove feasible in the current services, one session may be sufficient for many patients. International programmes integrated into healthcare services frequently include only a one-session intervention, which has been shown to produce positive results and is included in routine practice recommendations [9, 54].

While some aspects of the implementation plan were successfully completed, other aspects were less successful as reported by stakeholders. Stakeholders’ views on factors affecting the acceptability, appropriateness and adoption of the Teachable Moment programme provide insights into programme implementation at provincial, regional/district, hospital and non-profit organisation levels. The factors affecting programme implementation coalesced into three main themes, namely (i) the complexity of stakeholders’ individual responses to the programme, which were related to the stakeholders’ proximity to the service delivery setting and the compatibility of the programme with the stakeholders’ current operational environment, (ii) implementation readiness, specifically related to all stakeholders’ access to information regarding the intervention and how to incorporate the programme into services, as well as involvement and commitment of leadership at all levels and (iii) myths and misunderstandings around alcohol use, those who use alcohol and early intervention for behaviour change.

Stakeholders' views varied noticeably according to their proximity to programme operations and their familiarity with the emergency centre environment, which appeared to influence their individual knowledge and beliefs about the intervention and the setting. Hospital emergency centre staff and counsellors were more likely to find the programme acceptable and appropriate for the setting. Those with no experience of the emergency centre environment generally highlighted perceived barriers to the programme, such as patient sobriety at the time of the emergency centre visit, and were less likely to believe that the emergency centre provided opportunities to intervene for risky substance use. Increasingly, in international literature and practice, the emergency setting is being recognised as providing intervention opportunities for a range of risky health behaviours [55]. Common staff barriers to SBIRT implementation internationally include emergency centre staff having limited time to deliver interventions, the need for clinical staff to focus on urgent patient care and the lack of ongoing support for service providers [10, 56]; the Teachable Moment programme addressed these barriers by adding dedicated counsellors to the emergency centre team. Generally, investigators find that hospital staff are open to the provision of SBIRT in emergency centres, but most indicate that another cadre of health worker should deliver the intervention [10, 42]. In the international literature, the emergency centre setting is deemed appropriate by stakeholders, possibly because these studies were mainly conducted in high-income countries where SBIRT has been implemented routinely in healthcare settings [9, 47], and the stakeholders interviewed were generally providers familiar with these settings, such as trauma surgeons. In the United States, a widespread implementation programme shifted focus during the programme to target high-volume emergency settings for SBIRT scale-up [9].

Additionally, this trend by proximity to programme operations in the emergency centre appeared to be shaped by the perceived compatibility of the Teachable Moment programme with current organisational operations, which influenced stakeholders' openness to the programme. For example, emergency centre staff were accepting of the programme once they realised that emergency centre operations were not affected and that the programme was compatible with their workflows. However, some stakeholders, mainly district/regional and non-profit organisation staff, reported several respects in which the Teachable Moment programme was not compatible with existing workflows and understandings, such as the introduction of the programme by provincial officials rather than the Department of Health districts identifying an operational need. The non-profit organisation staff mentioned barriers to adopting the Teachable Moment programme due to differences from the requirements of the HAST services, including difficulties in scheduling day and night shifts. Concerns regarding compatibility affect the implementation climate in which a programme is implemented, and the constructs are related in the CFIR, where the implementation climate includes "the absorptive capacity for change" and the "shared receptivity of involved individuals to an intervention" [37]. The importance of addressing compatibility with existing workflows for programme implementation and sustainability has been highlighted in a number of implementation studies [57–59], accompanied by recommendations to evaluate the context early, before implementation, and to adapt interventions and processes for each delivery setting.

In addition to compatibility issues, implementation readiness factors—such as available resources, leadership engagement and access to knowledge and information—proved to be key factors in the Teachable Moment programme implementation. Available financial and human resources, along with the top-down directive from the provincial government, were vital for programme implementation at all three sites. The degree of leadership engagement in programme implementation influenced the process at all levels, with provincial leadership involvement driving early implementation processes. While senior management in the district/regional offices and hospitals were engaged early, the desired diffusion of information about the programme from leadership to middle management and front-line staff, including the non-profit organisations, did not occur, partly due to the short time frame for implementation. This lack of timely engagement caused some resistance to adopting the programme among leadership figures at these levels, many of whom were only engaged when programme implementation was imminent. Recent research has emphasised the importance of including middle management in implementation efforts for a number of reasons, including their role in disseminating information, in supporting employees implementing the intervention to meet their performance targets and in improving the implementation climate [60, 61].

Furthermore, in the Teachable Moment programme, there were gaps in the provision of knowledge on broader concepts underpinning the intervention and implementation process, such as implementing evidence-based practices, as well as the evidence for task-sharing approaches. Implementation studies have found that lack of knowledge regarding interventions is a common barrier [56, 62], yet few studies have highlighted the broader knowledge gaps, such as the evidence for task-sharing approaches. Task-sharing has been employed in a number of different fields [63, 64] and has been used successfully to deliver psychological interventions in LMICs [46, 48], yet is not routine practice in South African healthcare, even though task-sharing is included in the National Mental Health Policy Framework and Strategic Plan 2013–2020 [65].

Teachable Moment programme stakeholders at the provincial, district/regional and non-profit organisation levels expressed their individual knowledge and beliefs regarding alcohol use and its consequences, the characteristics of alcohol users, the impact of alcohol use on the health system, and effective interventions for alcohol use. Many stakeholders, excluding hospital staff and counsellors, were doubtful that behaviour change approaches were acceptable to patients or effective in reducing risky substance use. These beliefs influenced stakeholders' reactions to the programme, with implications at both the outer setting (external policy) and inner setting (operational) levels. Similar myths and stigma-related beliefs regarding alcohol use and effective interventions are prevalent and have been reported in studies investigating SBIRT programmes. Investigators have found that some stakeholders believe that specialist intervention is required for risky substance use, that behaviour change approaches are unlikely to be effective and that the majority of substance users will be extremely resistant to discussing their substance use and to changing their behaviour [33, 56, 66, 67].

For future scale-up, certain factors should be addressed to facilitate programme implementation. These are outlined below by CFIR construct (see Fig 2). Early and sustained engagement with all levels of staff involved in implementation is vital to address a number of aspects [60, 62]. This is particularly important for stakeholders who are more removed from the emergency setting and less familiar with emergency centre operations, yet often have more influence on programme implementation and sustainability. First, efforts should be made to address gaps in knowledge regarding the intervention source and evidence base, as well as individuals' knowledge and beliefs regarding broader issues, such as the evidence for task-sharing approaches, and messaging to debunk myths around substance use and the effectiveness of early intervention and behaviour change interventions. Second, existing systems should be mapped in the implementation setting and stakeholders should be engaged on how to address compatibility, in other words how to fit the intervention processes into their workflow. This will enable interventions to be adapted as necessary and organisational operations to be adjusted to accommodate the intervention. Third, with the involvement of a number of different stakeholders, including researchers, which is desirable for implementation research and practice [68], there is a need to clearly delineate roles when establishing networks and communication in order to avoid confusion in the implementation setting. Fourth, ongoing supervision, monitoring of goals and provision of feedback during implementation were found to facilitate the implementation process and will be vital in implementing the programme at further sites and adapting it to suit each specific setting.

Fig 2. Recommendations by CFIR construct.


Limitations

Researchers who conducted this evaluation were involved in the previous research, the training of the counsellors and implementation efforts in the healthcare facilities, and may not have been able to adopt a fully objective approach to the evaluation. To mitigate this, the team members conducting and analysing the qualitative interviews were not closely involved with counsellor supervision and day-to-day implementation support of the programme. These team members were aware of the possible bias and gained input on the study findings from a third team member who had not been involved in programme implementation but was familiar with the intervention. Additionally, the mix of implementers and researchers, while desirable in implementation research, does add challenges, such as confusion regarding the nature of the programme (research or services). Regular communication was maintained between researchers and implementers during implementation, and efforts to clearly define and revisit roles continued throughout the process. A further limitation is that the qualitative interviews were conducted relatively early in the implementation process, which could have influenced the findings; unfortunately, it was not possible to conduct further interviews. Access to additional data which could have been used to further contextualise the study findings was limited. The data available from the Department of Health comprise total numbers of patients seen in the emergency centre and are not disaggregated by triage code or day of the month. Since the Teachable Moment counsellors can only screen green- and yellow-triaged patients (i.e., patients who are not seriously ill or injured), and the counsellors do not cover night shifts Monday to Thursday, it was not possible to determine the total number of patients available to be screened by the Teachable Moment counsellors. Additionally, data on the numbers of patients referred on to the Department of Social Development were not available. Patient responses to the Teachable Moment programme will be reported in a separate paper due to the volume of data.

Conclusion

The Teachable Moment programme was adopted into routine services and found to be feasible, acceptable and appropriate, particularly by stakeholders familiar with the emergency centre environment, including the counsellors and hospital staff. Other stakeholders not in this position raised a variety of concerns regarding the appropriateness of the programme for the emergency centre setting, the compatibility of the programme with current workflows, influencing programme adoption, as well as the acceptability of the behaviour change approach underpinning the programme. Solutions to the barriers identified mainly involve early engagement regarding stakeholder knowledge and addressing programme compatibility in the implementation setting.

Supporting information

S1 File. Use of the CFIR.

(DOCX)

S1 Table. CFIR constructs used by domain.

(DOCX)

S1 Fig. Outer and inner setting.

(TIF)

Acknowledgments

The authors would like to gratefully acknowledge the assistance of the provincial Departments of Health and Social Development, as well as the emergency centre staff at the three hospitals, particularly Dr Aziz Parker. We also acknowledge the facility-based counsellors for giving us their time and for their passion for their work.

Data Availability

The data are owned by the Western Cape Department of Health and applicants may apply online to the National Health Research Database (https://nhrd.hst.org.za/). The authors confirm they had no special access or privileges that other researchers would not have.

Funding Statement

CvdW received research funding via a postdoctoral fellowship award via the DELTAS Africa Initiative [DEL-15-01], African Academy of Sciences (https://www.aasciences.ac.ke/), funded by the Wellcome Trust (https://wellcome.ac.uk/). KS received funding from the Western Cape Office of the Premier. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.World Health Organization. Global Health Observatory (GHO) data: Screening and brief interventions for substance use in primary health care Geneva: World Health Organization; 2018. [cited 2019 29/04/2019]. Available from: https://www.who.int/gho/substance_abuse/service/screening_interventions/en/. [Google Scholar]
  • 2.World Health Organization. Global status report on alcohol and health 2018 Geneva: World Health Organization, 2018. [Google Scholar]
  • 3.Herman AA, Stein DJ, Seedat S, Heeringa SG, Moomal H, Williams DR. The South African Stress and Health (SASH) study: 12-month and lifetime prevalence of common mental disorders. South African Medical Journal. 2009;99(5 Pt 2):339–44. Epub 2009/07/11. [PMC free article] [PubMed] [Google Scholar]
  • 4.Vellios NG, Van Walbeek CP. Self-reported alcohol use and binge drinking in South Africa: Evidence from the National Income Dynamics Study, 2014–2015. South African medical journal = Suid-Afrikaanse tydskrif vir geneeskunde. 2017;108(1):33–9. Epub 2017/12/22. 10.7196/SAMJ.2017.v108i1.12615 . [DOI] [PubMed] [Google Scholar]
  • 5.Pillay-van Wyk V, Msemburi W, Laubscher R, Dorrington RE, Groenewald P, Glass T, et al. Mortality trends and differentials in South Africa from 1997 to 2012: second National Burden of Disease Study. The Lancet Global health. 2016;4(9):e642–53. Epub 2016/08/20. 10.1016/S2214-109X(16)30113-9 . [DOI] [PubMed] [Google Scholar]
  • 6.World Health Organization. mhGAP intervention guide for mental, neurological and substance use disorders in non-specialized health settings: mental health Gap Action Programme (mhGAP), version 2.0 Geneva: World Health Organization, 2016. [PubMed] [Google Scholar]
  • 7.Del Boca FK, McRee B, Vendetti J, Damon D. The SBIRT program matrix: a conceptual framework for program implementation and evaluation. Addiction. 2017;112:12–22. 10.1111/add.13656 [DOI] [PubMed] [Google Scholar]
  • 8.Elzerbi C, Donoghue K, Drummond C. A comparison of the efficacy of brief interventions to reduce hazardous and harmful alcohol consumption between European and non-European countries: a systematic review and meta-analysis of randomized controlled trials. Addiction. 2015;110(7):1082–91. 10.1111/add.12960 [DOI] [PubMed] [Google Scholar]
  • 9.Babor TF, Del Boca F, Bray JW. Screening, Brief Intervention and Referral to Treatment: implications of SAMHSA's SBIRT initiative for substance abuse policy and practice. Addiction. 2017;112:110–7. 10.1111/add.13675 [DOI] [PubMed] [Google Scholar]
  • 10.Janice V, Amanda G, Donna D, Manu S, Bonnie M, Frances DB. Screening, brief intervention and referral to treatment (SBIRT): implementation barriers, facilitators and model migration. Addiction. 2017;112(S2):23–33. 10.1111/add.13652 [DOI] [PubMed] [Google Scholar]
  • 11.Nilsen P, Kaner E, Babor TF. Brief intervention, three decades on:An overview of research findings and strategies for more widespread implementation. Nordic Studies on Alcohol and Drugs. 2008;25(6):453–67. 10.1177/145507250802500608 [DOI] [Google Scholar]
  • 12.Ronzani TM, Mota DCB, Souza ICWd. Prevenção do uso de álcool na atenção primária em municípios do estado de Minas Gerais. Revista de saude publica. 2009;43:51–61. [DOI] [PubMed] [Google Scholar]
  • 13.Cruvinel E, Richter KP, Bastos RR, Ronzani TM. Screening and brief intervention for alcohol and other drug use in primary care: associations between organizational climate and practice. Addiction science & clinical practice. 2013;8:4 Epub 2013/02/13. 10.1186/1940-0640-8-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Pengpid S, Peltzer K, Skaal L, Van der Heever H. Screening and brief interventions for hazardous and harmful alcohol use among hospital outpatients in South Africa: results from a randomized controlled trial. BMC public health. 2013;13:644 Epub 2013/07/13. 10.1186/1471-2458-13-644 ; PubMed Central PMCID: PMCPmc3849548. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Sorsdahl K, Stein DJ, Corrigall J, Cuijpers P, Smits N, Naledi T, et al. The efficacy of a blended motivational interviewing and problem solving therapy intervention to reduce substance use among patients presenting for emergency services in South Africa: A randomized controlled trial. Substance Abuse Treatment, Prevention, and Policy. 2015;10:46. 10.1186/s13011-015-0042-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Ward CL, Mertens JR, Bresick GF, Little F, Weisner CM. Screening and Brief Intervention for Substance Misuse: Does It Reduce Aggression and HIV-Related Risk Behaviours? Alcohol and Alcoholism. 2015;50(3):302–9. 10.1093/alcalc/agv007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Assanangkornchai S, Balthip Q, Guy Edwards J. Screening and brief intervention for substance misuse in Thailand. Public Health. 2013;127(12):1140–2. 10.1016/j.puhe.2013.08.011 [DOI] [PubMed] [Google Scholar]
  • 18.Areesantichai C, Iamsupasit S, Marsden J, Perngparn U, Taneepanichskul S. Effect of "tailored goal oriented community brief intervention model" on AUDIT reduction in Thai communities. J Med Assoc Thai. 2010;93(8):992–7. Epub 2010/08/20. . [PubMed] [Google Scholar]
  • 19.Sorsdahl K, Petersen Williams P, Everett-Murphy K, Vythilingum B, de Villiers P, Myers B, et al. Feasibility and Preliminary Responses to a Screening and Brief Intervention Program for Maternal Mental Disorders Within the Context of Primary Care. Community Ment Health J. 2015. Epub 2015/03/07. 10.1007/s10597-015-9853-9 . [DOI] [PubMed] [Google Scholar]
  • 20.Peltzer K, Matseke G, Azwihangwisi M. Evaluation of alcohol screening and brief intervention in routine practice of primary care nurses in Vhembe district, South Africa. Croat Med J. 2008;49(3):392–401. Epub 2008/06/27. 10.3325/cmj.2008.3.392 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Peltzer K, Naidoo P, Louw J, Matseke G, Zuma K, McHunu G, et al. Screening and brief interventions for hazardous and harmful alcohol use among patients with active tuberculosis attending primary public care clinics in South Africa: results from a cluster randomized controlled trial. BMC Public Health. 2013;13:699. Epub 2013/08/02. 10.1186/1471-2458-13-699 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Myers B, Petersen-Williams P, van der Westhuizen C, Lund C, Lombard C, Joska JA, et al. Community health worker-delivered counselling for common mental disorders among chronic disease patients in South Africa: a feasibility study. BMJ open. 2019;9(1):e024277 10.1136/bmjopen-2018-024277 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Parry CD, Morojele NK, Myers BJ, Kekwaletswe CT, Manda SO, Sorsdahl K, et al. Efficacy of an alcohol-focused intervention for improving adherence to antiretroviral therapy (ART) and HIV treatment outcomes—a randomised controlled trial protocol. BMC Infect Dis. 2014;14:500 Epub 2014/09/13. 10.1186/1471-2334-14-500 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Myers B, Lund C, Lombard C, Joska J, Levitt N, Butler C, et al. Comparing dedicated and designated models of integrating mental health into chronic disease care: study protocol for a cluster randomized controlled trial. Trials. 2018;19(1):185 Epub 2018/03/20. 10.1186/s13063-018-2568-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Babor TF, Kadden RM. Screening and interventions for alcohol and drug problems in medical settings: what works? Journal of Trauma. 2005;59(3 Suppl):S80–7; discussion S94-100. Epub 2005/12/16. . [DOI] [PubMed] [Google Scholar]
  • 26.Schmidt CS, Schulte B, Seo HN, Kuhn S, O'Donnell A, Kriston L, et al. Meta-analysis on the effectiveness of alcohol screening with brief interventions for patients in emergency care settings. Addiction. 2016;111(5):783–94. Epub 2015/12/08. 10.1111/add.13263 . [DOI] [PubMed] [Google Scholar]
  • 27.Barata IA, Shandro JR, Montgomery M, Polansky R, Sachs CJ, Duber HC, et al. Effectiveness of SBIRT for Alcohol Use Disorders in the Emergency Department: A Systematic Review. The western journal of emergency medicine. 2017;18(6):1143–52. Epub 2017/11/01. 10.5811/westjem.2017.7.34373 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Young MM, Stevens A, Galipeau J, Pirie T, Garritty C, Singh K, et al. Effectiveness of brief interventions as part of the Screening, Brief Intervention and Referral to Treatment (SBIRT) model for reducing the nonmedical use of psychoactive substances: a systematic review. Systematic Reviews. 2014;3(1):50 10.1186/2046-4053-3-50 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.O'Donnell A, Anderson P, Newbury-Birch D, Schulte B, Schmidt C, Reimer J, et al. The impact of brief alcohol interventions in primary healthcare: a systematic review of reviews. Alcohol Alcohol. 2014;49(1):66–78. Epub 2013/11/16. 10.1093/alcalc/agt170 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC health services research. 2019;19(1):189 10.1186/s12913-019-4015-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Zatzick D, Donovan DM, Jurkovich G, Gentilello L, Dunn C, Russo J, et al. Disseminating alcohol screening and brief intervention at trauma centers: a policy-relevant cluster randomized effectiveness trial. Addiction. 2014;109(5):754–65. Epub 2014/01/24. 10.1111/add.12492 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Bernstein E, Bernstein J, Feldman J, Fernandez W, Hagan M, Mitchell P, et al. An evidence based alcohol screening, brief intervention and referral to treatment (SBIRT) curriculum for emergency department (ED) providers improves skills and utilization. Substance abuse: official publication of the Association for Medical Education and Research in Substance Abuse. 2007;28(4):79–92. Epub 2007/12/14. 10.1300/J465v28n04_01 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Geerligs L, Rankin NM, Shepherd HL, Butow P. Hospital-based interventions: a systematic review of staff-reported barriers and facilitators to implementation processes. Implementation Science. 2018;13(1):36 10.1186/s13012-018-0726-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Mitchell SG, Schwartz RP, Kirk AS, Dusek K, Oros M, Hosler C, et al. SBIRT Implementation for Adolescents in Urban Federally Qualified Health Centers. Journal of Substance Abuse Treatment. 2016;60:81–90. 10.1016/j.jsat.2015.06.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Vendetti J, Gmyrek A, Damon D, Singh M, McRee B, Del Boca F. Screening, brief intervention and referral to treatment (SBIRT): implementation barriers, facilitators and model migration. Addiction. 2017;112 Suppl 2:23–33. Epub 2017/01/12. 10.1111/add.13652 . [DOI] [PubMed] [Google Scholar]
  • 36.Hargraves D, White C, Frederick R, Cinibulk M, Peters M, Young A, et al. Implementing SBIRT (Screening, Brief Intervention and Referral to Treatment) in primary care: lessons learned from a multi-practice evaluation portfolio. Public health reviews. 2017;38:31 Epub 2018/02/17. 10.1186/s40985-017-0077-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the Consolidated Framework for Implementation Research. Implement Sci. 2016;11:72 Epub 2016/05/18. 10.1186/s13012-016-0437-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. Epub 2010/10/20. 10.1007/s10488-010-0319-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Stetler CB, Mittman BS, Francis J. Overview of the VA Quality Enhancement Research Initiative (QUERI) and QUERI theme articles: QUERI Series. Implementation Science. 2008;3(1). 10.1186/1748-5908-3-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Currie G, Lockett A, El Enany N. From what we know to what we do: lessons learned from the translational CLAHRC initiative in England. J Health Serv Res Policy. 2013;18(3 Suppl):27–39. Epub 2013/10/30. 10.1177/1355819613500484 . [DOI] [PubMed] [Google Scholar]
  • 42.Myers B, Stein DJ, Mtukushe B, Sorsdahl K. Feasibility and acceptability of screening and brief interventions to address alcohol and other drug use among patients presenting for emergency services in Cape Town, South Africa. Advances in Preventive Medicine. 2012;2012(569153). Epub 6 November 2012. 10.1155/2012/569153 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Western Cape Government: Department of the Premier. Provincial Strategic Plan: 2014–2019 Cape Town: Western Cape Government: Department of the Premier, 2014. [Google Scholar]
  • 44.World Health Organization. Global strategy to reduce the harmful use of alcohol Geneva: World Health Organization, 2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.van der Westhuizen C, Wyatt GE, Williams JK, Stein DJ, Sorsdahl K. Validation of the Alcohol, Smoking and Substance Involvement Screening Test in a low- and middle-income country cross-sectional emergency centre study. Drug Alcohol Rev. 2016. Epub 2016/06/02. 10.1111/dar.12424 . [DOI] [PubMed] [Google Scholar]
  • 46.Hoeft TJ, Fortney JC, Patel V, Unutzer J. Task-Sharing Approaches to Improve Mental Health Care in Rural and Other Low-Resource Settings: A Systematic Review. J Rural Health. 2018;34(1):48–62. Epub 2017/01/14. 10.1111/jrh.12229 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Babor TF, McRee BG, Kassebaum PA, Grimaldi PL, Ahmed K, Bray J. Screening, Brief Intervention, and Referral to Treatment (SBIRT): toward a public health approach to the management of substance abuse. Substance Abuse. 2007;28(3):7–30. [DOI] [PubMed] [Google Scholar]
  • 48.van Ginneken N, Tharyan P, Lewin S, Rao Girish N, Meera SM, Pian J, et al. Non-specialist health worker interventions for the care of mental, neurological and substance-abuse disorders in low- and middle-income countries. Cochrane Database of Systematic Reviews [Internet]. 2013; (11). Available from: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD009149.pub2/abstract. [DOI] [PubMed] [Google Scholar]
  • 49.O'Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. Journal of Health Services Research & Policy. 2008;13(2):92–8. 10.1258/jhsrp.2007.007074 . [DOI] [PubMed] [Google Scholar]
  • 50.Hanson WE, Creswell JW, Clark VLP, Petska KS, Creswell JD. Mixed methods research designs in counseling psychology. Journal of Counseling Psychology. 2005;52(2):224–35. 10.1037/0022-0167.52.2.224 . [DOI] [Google Scholar]
  • 51.Proctor E, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation Research in Mental Health Services: an Emerging Science with Conceptual, Methodological, and Training challenges. Adm Policy Ment Health. 2009;36(1):24–34. 10.1007/s10488-008-0197-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Ritchie J, Spencer L. Qualitative data analysis for applied policy research In: Bryman A, Burgess RG, editors. Analyzing Qualitative Research. London: Routledge; 1994. p. 173–94. [Google Scholar]
  • 53.Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda. Adm Policy Ment Health. 2011;38(2):65–76. 10.1007/s10488-010-0319-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Higgins-Biddle J, Hungerford D, Cates-Wessel K. Screening and Brief Interventions (SBI) for Unhealthy Alcohol Use: A Step-by- Step Implementation Guide for Trauma Centers Atlanta, GA: Centers for Disease Control and Prevention, National Center for Injury Prevention and Control, 2009. [Google Scholar]
  • 55.Bernstein SL. The Clinical Impact of Health Behaviors on Emergency Department Visits. Academic Emergency Medicine. 2009;16(11):1054–9. 10.1111/j.1553-2712.2009.00564.x [DOI] [PubMed] [Google Scholar]
  • 56.Johnson M, Jackson R, Guillaume L, Meier P, Goyder E. Barriers and facilitators to implementing screening and brief intervention for alcohol misuse: a systematic review of qualitative evidence. J Public Health (Bangkok). 2010;33 10.1093/pubmed/fdq095 [DOI] [PubMed] [Google Scholar]
  • 57.Kaiser DJ, Karuntzos G. An Examination of the Workflow Processes of the Screening, Brief Intervention, and Referral to Treatment (SBIRT) Program in Health Care Settings. J Subst Abuse Treat. 2016;60:21–6. Epub 2015/09/19. 10.1016/j.jsat.2015.08.001 . [DOI] [PubMed] [Google Scholar]
  • 58.Mendel P, Meredith LS, Schoenbaum M, Sherbourne CD, Wells KB. Interventions in Organizational and Community Context: A Framework for Building Evidence on Dissemination and Implementation in Health Services Research. Adm Policy Ment Health. 2008;35(1):21–37. 10.1007/s10488-007-0144-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Rosen CS, Matthieu MM, Wiltsey Stirman S, Cook JM, Landes S, Bernardy NC, et al. A Review of Studies on the System-Wide Implementation of Evidence-Based Psychotherapies for Posttraumatic Stress Disorder in the Veterans Health Administration. Adm Policy Ment Health. 2016;43(6):957–77. 10.1007/s10488-016-0755-0 [DOI] [PubMed] [Google Scholar]
  • 60.Birken S, Clary A, Tabriz AA, Turner K, Meza R, Zizzi A, et al. Middle managers’ role in implementing evidence-based practices in healthcare: a systematic review. Implementation Science. 2018;13(1):149 10.1186/s13012-018-0843-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Brooke-Sumner C, Petersen-Williams P, Kruger J, Mahomed H, Myers B. 'Doing more with less': a qualitative investigation of perceptions of South African health service managers on implementation of health innovations. Health Policy Plan. 2019;34(2):132–40. Epub 2019/03/14. 10.1093/heapol/czz017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62.Li S-A, Jeffs L, Barwick M, Stevens B. Organizational contextual features that influence the implementation of evidence-based practices across healthcare settings: a systematic integrative review. Systematic Reviews. 2018;7(1):72 10.1186/s13643-018-0734-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.World Health Organization. Task shifting: global recommendations and guidelines Geneva: World Health Organization, 2007. [Google Scholar]
  • 64.Joshi R, Alim M, Kengne AP, Jan S, Maulik PK, Peiris D, et al. Task Shifting for Non-Communicable Disease Management in Low and Middle Income Countries–A Systematic Review. PLOS ONE. 2014;9(8):e103754 10.1371/journal.pone.0103754 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Department of Health Republic of South Africa. National Mental Health Policy Framework and Strategic Plan 2013–2020 In: Do Health, editor. Pretoria: Department of Health; 2013. [Google Scholar]
  • 66.Williams EC, Achtmeyer CE, Young JP, Rittmueller SE, Ludman EJ, Lapham GT, et al. Local Implementation of Alcohol Screening and Brief Intervention at Five Veterans Health Administration Primary Care Clinics: Perspectives of Clinical and Administrative Staff. Journal of Substance Abuse Treatment. 2016;60:27–35. 10.1016/j.jsat.2015.07.011 [DOI] [PubMed] [Google Scholar]
  • 67.Derges J, Kidger J, Fox F, Campbell R, Kaner E, Hickman M. Alcohol screening and brief interventions for adults and young people in health and community-based settings: a qualitative systematic literature review. BMC public health. 2017;17(1):562 Epub 2017/06/11. 10.1186/s12889-017-4476-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation Research in Mental Health Services: an Emerging Science with Conceptual, Methodological, and Training challenges. Administration and policy in mental health. 2009;36(1):24–34. 10.1007/s10488-008-0197-4 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Cecilia Benoit

Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

13 Aug 2019

PONE-D-19-19391

Implementation of a screening, brief intervention and referral to treatment programme for risky substance use in South African emergency centres: a mixed methods evaluation study

PLOS ONE

Dear Dr Claire van der Westhuizen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers note some important concerns that need to be addressed before your work can make a contribution to the field. Problems are noted regarding your theoretical and methodological approach, description of findings and discussion and conclusion.

=================================

We would appreciate receiving your revised manuscript by Sep 27 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Cecilia Benoit

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This study addresses a topic of high relevance to the field. Strengths include the focus on implementation and the LMIC setting. These are important gaps in the current literature. That said, there are some major limitations to this work that appear to restrict its potential contribution to the field. As it is currently written, the theoretical and methodological approach is not sufficiently justified or described, such that the work does not come across as intentional or structured. These limitations may or may not be able to be addressed at this point in time. A strengthened justification for why the study limitations are not fatal flaws is needed to establish the study’s contribution.

1. While I agree that the Introduction (or Methods section) should contain a description of this specific SBIRT programme and the implementation context and strategies, these could be summarized more briefly. More is needed in the Introduction on what is known about SBIRT (including an acknowledgement of the mixed clinical evidence, its role in the broader system/continuum of care, what is known about when and how it is effective) and why studies of implementation are important. The Introduction should clearly outline the rationale for the study and its contribution to the literature. Very little of the vast literature on SBIRT (in different settings/for different substances and levels of use) is cited. There is also no mention of implementation science or how it contributes to system enhancement.

2. The statement of study objectives at the end of the Introduction (lines 113-117) could be strengthened by listing the specific implementation factors and outcomes that were examined (this information is provided later on in the Methods section, but would be good to state up front to better frame the study). The objective(s) should follow from the Introduction and it should be clear how they answer the gap in existing research.

3. More detail is needed to explain how this is a mixed methods study (vs. a multiple methods study; line 119). Using the terminology of Creswell et al. would be helpful to show how the different study components fit together.

4. The study is described as being guided by the CFIR and Proctor’s taxonomy. It is not clearly argued why both are needed, how they fit together, what each brings that complements the other… A clearer framing of the theoretical underpinnings and mixed methods approach (see last comment) would greatly strengthen the front end of the Methods section.

5. Relatedly, a clearer distinction is needed between the constructs of feasibility and adoption. Could the count of patients screened not be considered an indicator of adoption? If possible, the number of patients who were eligible to be screened should be added (e.g., 13,136 patients out of how many were screened?). The meaning of the count of patients screened is hard to interpret in the absence of this information. In addition, if only 1 of 3 planned visits tended to take place, what does that say about feasibility? Finally, no information is provided on the referral to treatment component of SBIRT. This is a critical component of the SBIRT approach and an important aspect of feasibility/adoption. Were there treatment options for those who needed them? Were people referred and did they follow through?

6. Minor point – “game changer strategy” is inconsistently capitalized and written as one/two words (e.g., lines 75 and 142).

7. More information is needed on the sampling strategy for the qualitative component of the study. It looks like efforts were made to recruit stakeholders representing key groups across the system; however, this is not described explicitly. An overall summary of the stakeholder groups and their roles in the system would be helpful (e.g., policy makers, health planners/administrators, clinicians). This is needed to establish how the study answers its objectives (e.g., who participated in the study and what were they able/not able to speak to?). Currently, the participants section (lines 140-148) is heavy on acronyms and assumes a level of familiarity with the South African system that most readers will not have. A more general statement of stakeholder roles would make this section more widely readable. Finally, is there a justification for the sample size, n=27? Was a sufficient number of people interviewed from each (broadly defined) stakeholder group to represent their perspectives?

8. It should be acknowledged as a limitation that patients were not included as participants. This is particularly the case since the Results section refers to “patients’ responses” to the programme and its effectiveness in fostering behavior change (paragraph starting line 208). This form of second-hand reporting (particularly from clinicians involved in delivering the programme) is not a strong approach to evaluating either patient perspectives or their behavior change outcomes. It may not be possible to address this limitation at this point. That said, given that the study is focused on implementation outcomes rather than programme effectiveness, I suggest deleting this paragraph and avoiding any comment on program effectiveness. In the absence of structured evaluation of programme mechanisms, including both positive and negative encounters, this finding is anecdotal.

9. More is needed to justify the claim that programme operations did not interfere with clinical care in the emergency setting (line 223). Was this reported by just one staff member? Did anyone report anything different? Was this explored in a structured fashion?

10. It is not clear what is meant by the quote pertaining to staff taking advantage and not doing what they are supposed to do (line 252). More information is needed on what this finding means and how it relates to issues of staff management (as indicated in line 250).

11. It is noted that there was a lack of compatibility between the SBIRT program and HAST services, and that this caused some difficulties in implementation (line 347). Some specific examples of these difficulties would be helpful here.

12. Many of the findings appear to identify barriers and problems in implementation, raising questions of the extent to which the programme was actually endorsed as appropriate. The authors suggest that those who were more removed from the programme held more negative views of its implementation and impact than those who were closer to the programme. Were the right people asked to report on implementation details? Did all participants know enough about the programme to be able to comment on the details, or are some of them simply echoing negative general perspectives of systems change and/or people who use drugs? Relatedly, the findings indicate a certain level of stigmatizing beliefs held by participants about people who use alcohol and other drugs – rather than stemming from a lack of programme familiarity/proximity per se, this speaks more generally to the negative views that many healthcare providers and administrators hold about problematic substance use. There is a broad literature on occurrence and impact of substance-related stigma in healthcare settings, including emergency room settings, which is relevant to interpreting this finding. As it stands, the relation of these stigmatizing beliefs to programme implementation specifically is not considered in the Results or Discussion sections.

13. There are points made in the Discussion that do not clearly follow from the material presented in the Results section. For example, it is noted that “Available financial and human resources, along with the top-down directive from the provincial government, were vital for programme implementation…” (lines 441-442). How was this assessed? The findings also appeared to identify problems with the top-down directives. Was a structured approach used to assess these features of implementation (e.g., were questions posed to stakeholders about the positive and negative role of these features of implementation and their impact)? Likewise, the Discussion refers to problems in connecting/engaging with middle management, yet this does not clearly follow from the results presented (lines 446-456). How was this evaluated? A thorough review of the Discussion is required to ensure that the interpretation follows from reported findings.

14. Clarification is required on what is meant by “evidence for task-sharing approaches”. Does this pertain to SBIRT interventions or is it about implementation processes more generally? What are the tasks being (or not being) shared?

15. The current Limitations section does not adequately address the limitations of this work and, importantly, how these are expected to affect the findings or what safeguards were used to minimize the impact of potential biases.

Reviewer #2: The authors studied the feasibility and implementability of an SBIRT program in South African emergency rooms through the examination of relevant outcomes during the first years of the program's implementation, such as screening rates, acceptability, and appropriateness. To do so, they used mainly qualitative but also some quantitative methods. Their results are similar to those encountered in SBIRT implementation studies elsewhere, with particularities relevant to the local context and potentially to other low- and middle-income contexts. The authors found high levels of stigma towards addiction, represented as a lack of confidence in the effectiveness of the brief interventions, especially among stakeholders not based at the clinical sites. They also describe barriers regarding the health network and the clinical workflow. Overall, they found the SBIRT program to be highly implementable in their context, and give some recommendations on how to scale up the innovation.

This study is methodologically sound and addresses one of the more challenging questions in the field of drug and alcohol brief interventions: how to maximize their implementation to disseminate the strategy. I think it should be accepted, with some minor issues that need to be addressed to improve its contextualization and clarity before publication:

MINOR COMMENTS:

Figures:

Figures need to be reviewed and eventually re-made. The implementation process is well explained in figure 1, and the organizational context is depicted in the other figures, but there is some inconsistency and a lack of important details:

In Figures S1 and 2: it is not clear why different colors are used. For example, Do they represent hierarchical relationships? Also, the usage of colors does not look consistent between both figures.

Figure 2 looks incomplete. I was expecting to see a summary of the main recommendations for each domain; instead, it only lists the CFIR constructs without any concrete example.

Introduction:

It is a good introduction, but more emphasis could be given to specific aspects of this research regarding the current literature. Other aspects need clarification:

Lines 62 to 69: It is not clear in which aspects the authors expect the implementation to be different due to the socioeconomic background; or if there are clues about that in the new body of literature they mention. I would suggest further illustration.

Line 71: it was difficult for me to follow which program the authors were referring to throughout the text. Is the Teachable Moment program the same as the one tested in the previous RCT? Is the intervention and training of the counselors implemented here the same as that used in the RCT program? They mention the 'SBIRT program' or just 'the program' many times, as well as the 'Game Changer,' but it is not clear which program they are referring to.

Lines 86 to 88: I understand that the intervention that showed the best effect in the previous RCT was a combined MI + Problem Solving. If that's the case, why did this program deliver mostly an MI-based intervention? Did this contribute to the supposed lack of an evidence base for the initiative mentioned by some stakeholders?

Methods:

I think this section needs more precision in some critical aspects, particularly more clear operational definitions of the implementation outcomes for this study:

Line 123: Please provide a summary of the CFIR constructs that were not used.

Line 175: the word 'initial' is ambiguous here: does it refer to a general impression or to the idea they had before the program started? It seems to me that the construct of appropriateness was used to assess the suitability of some of the innovation's parameters for the setting. If this is the case, I think the description given is not clear.

Line 177: it is not clear to me what the authors mean by 'the intention to try'. Later in the paper, they elaborate on the readiness to adopt. Are these concepts equivalent? I would suggest a brief explanation and a more precise operational definition here.

Results:

This part is very clear and consistent in general.

Line 187: Other than meeting criteria for risky substance use, what other requirements were needed for eligibility? Please be precise in the description of the inclusion criteria, because it affects the overall impression of the program's feasibility that the reader will have. Did specific ASSIST scores define risky substance use?

Line 191: Is this to say that 83% of risky substance users received the first intervention?

Discussion:

The discussion is very well supported by the results, and the paper concludes with recommendations to foster implementation in the future. I think some aspects could be better contextualized or explained to highlight the specific contribution of this research:

Lines 380 to 382: how does this finding relate to the local evidence (the RCT mentioned at the beginning)?

Lines 416 to 417: the explanation offered about stakeholders' views and how they differ from what's reported in the literature could be further elaborated: it looks like this finding is particularly specific to the context. Also, it is not clear in the last sentence whether it was a mistake to interview 'distal' stakeholders. Finally, in the recommendations, the authors should emphasize a differentiated strategy for early involvement of 'distal' stakeholders based on these findings.

Data: I could not access the dataset; apparently, an application process is needed. I'm not sure whether this precludes publication in this journal, or whether the authors could explain why the dataset is not public.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Nicolas Barticevic Lantadilla

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2019 Nov 15;14(11):e0224951. doi: 10.1371/journal.pone.0224951.r002

Author response to Decision Letter 0


25 Sep 2019

6 September 2019

Dr Benoit

Academic Editor

PLOS ONE

Dear Dr Benoit

Re: Response to comments for manuscript titled 'Implementation of a screening, brief intervention and referral to treatment programme for risky substance use in South African emergency centres: a mixed methods evaluation study' (PONE-D-19-19391)

Thank you for the helpful comments on this paper. We have addressed them in detail, clarifying the methodology, findings and discussion. Please see our responses below for each point raised.

Reviewer 1

1. While I agree that the Introduction (or Methods section) should contain a description of this specific SBIRT programme and the implementation context and strategies, these could be summarized more briefly. More is needed in the Introduction on what is known about SBIRT (including an acknowledgement of the mixed clinical evidence, its role in the broader system/continuum of care, what is known about when and how it is effective) and why studies of implementation are important. The Introduction should clearly outline the rationale for the study and its contribution to the literature. Very little of the vast literature on SBIRT (in different settings/for different substances and levels of use) is cited. There is also no mention of implementation science or how it contributes to system enhancement.

Thank you for the suggestions for strengthening this section. We have cut back the description of the implementation strategies. We have included more information on research investigating the effectiveness of SBIRT, as well as on SBIRT implementation research (lines 76-95), acknowledging the mixed evidence base. We have also highlighted the contribution that implementation science can make to the field. We have mentioned the SBIRT continuum of care and that contextual factors affect SBIRT implementation and may explain variations in findings (lines 61-63 and 79-87).

2. The statement of study objectives at the end of the Introduction (lines 113-117) could be strengthened by listing the specific implementation factors and outcomes that were examined (this information is provided later on in the Methods section, but would be good to state up front to better frame the study). The objective(s) should follow from the Introduction and it should be clear how they answer the gap in existing research.

We have added to the sentence describing the study aim, including the implementation outcomes and Consolidated Framework for Implementation Research factors. We have also highlighted the study’s contribution to filling a gap in the literature. See lines 77-101.

3. More detail is needed to explain how this is a mixed methods study (vs. a multiple methods study; line 119). Using the terminology of Creswell et al. would be helpful to show how the different study components fit together.

Thank you for this suggestion. We used a sequential explanatory study design, defined according to the terminology used by Creswell (described in Hanson, Creswell et al., 2005).

We used sequential quantitative and qualitative study components, with findings from the quantitative data informing the qualitative component (see lines 203-206). For example, the qualitative interviews explored the factors that contributed to the programme's success in delivering an evidence-based session to over 80% of eligible patients. Reasons for the low number of follow-up sessions were also explored with stakeholders.

4. The study is described as being guided by the CFIR and Proctor’s taxonomy. It is not clearly argued why both are needed, how they fit together, what each brings that complements the other... A clearer framing of the theoretical underpinnings and mixed methods approach (see last comment) would greatly strengthen the front end of the Methods section.

Thank you for this; we have clarified this in the text (see lines 206-233). We followed recommendations found in a systematic review published in Implementation Science on the use of the CFIR, in which the authors highlight the importance of using the CFIR to investigate implementation outcomes, such as those defined by Proctor et al. In assessing implementation in this study, we used Proctor's definitions of feasibility, acceptability, adoption and appropriateness. (There is variation in the definitions of terms used in implementation research; thus, we decided to use this taxonomy.) The CFIR constructs were used to characterise the factors affecting these implementation outcomes and as such were useful in guiding the data collection, analysis and reporting of the findings.

5. Relatedly, a clearer distinction is needed between the constructs of feasibility and adoption. Could the count of patients screened not be considered an indicator of adoption? If possible, the number of patients who were eligible to be screened should be added (e.g., 13,136 patients out of how many were screened?). The meaning of the count of patients screened is hard to interpret in the absence of this information. In addition, if only 1 of 3 planned visits tended to take place, what does that say about feasibility? Finally, no information is provided on the referral to treatment component of SBIRT. This is a critical component of the SBIRT approach and an important aspect of feasibility/adoption. Were there treatment options for those who needed them? Were people referred and did they follow through?

We have added the definitions of each implementation outcome as described by Proctor et al and further operationalised these for this study (see Table 1). There is some overlap in these terms and the way that they have been used in the literature. We hope that the Proctor definition and our operationalised definition have clarified this. Since the counsellors conducting the screening and delivering the intervention were employed specifically for the Teachable Moment programme, we did not use their activities as indicators of adoption. We would have liked to include the numbers of patients eligible to be screened; however, we could not access these data for two reasons. First, the data available from the Department of Health comprise total numbers of patients seen in the emergency centre and are not disaggregated by triage code or day of the month. The majority of these data are captured by emergency centre staff in hard copy triage books. Second, the Teachable Moment counsellors could only screen green- and yellow-triaged patients and did not cover weekday night shifts, so it was not possible to provide these figures. (The Teachable Moment counsellor shifts cover day shifts Monday to Sunday and night shifts Friday to Sunday.) Additionally, data regarding the numbers of patients referred on to the Department of Social Development were not available. The referral system underwent a few changes in the first year of the programme. Initially, hard copy referral letters were delivered to the regional Department of Social Development offices. The main problem with this system was that the letters were often lost and the data regarding these referrals were not available from the counsellors or the regional offices. Thank you – we have added these points to the study limitations (see lines 737-746).
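[Editorial illustration, not part of the authors' response.] To make the denominator issue raised here concrete, the short Python sketch below shows how the screening-cascade proportions could be computed if the missing counts were available. Only the total of 13,136 screened patients and the roughly 83% intervention reach come from the text; the number of risky-use screens and the eligible-attendance figure are hypothetical placeholders.

    # Illustrative sketch only: screening-cascade proportions, assuming the
    # missing denominators were available. Values marked "hypothetical" are
    # placeholders, not figures reported by the authors.
    n_screened = 13_136                        # patients screened in year 1 (reported)
    n_risky = 4_900                            # hypothetical: screens meeting risky-use criteria
    n_first_session = round(0.83 * n_risky)    # ~83% of risky-use patients received a first session

    p_risky = n_risky / n_screened             # screening yield among those screened
    p_reach = n_first_session / n_risky        # intervention reach among positive screens

    # Screening coverage would additionally need the number of eligible
    # (green/yellow-triaged) attendances during counsellor shifts, which the
    # authors note could not be extracted from routine data.
    n_eligible_attendances = None              # unknown in this setting

    print(f"Screening yield: {p_risky:.0%}")
    print(f"Intervention reach: {p_reach:.0%}")
    if n_eligible_attendances:
        print(f"Screening coverage: {n_screened / n_eligible_attendances:.0%}")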

6. Minor point – “game changer strategy” is inconsistently capitalized and written as one/two words (e.g., lines 75 and 142).

Thank you. We have corrected this.

7. More information is needed on the sampling strategy for the qualitative component of the study. It looks like efforts were made to recruit stakeholders representing key groups across the system; however, this is not described explicitly. An overall summary of the stakeholder groups and their roles in the system would be helpful (e.g., policy makers, health planners/administrators, clinicians). This is needed to establish how the study answers its objectives (e.g., who participated in the study and what were they able/not able to speak to?). Currently, the participants section (lines 140-148) is heavy on acronyms and assumes a level of familiarity with the South African system that most readers will not have. A more general statement of stakeholder roles would make this section more widely readable. Finally, is there a justification for the sample size, n=27? Was a sufficient number of people interviewed from each (broadly defined) stakeholder group to represent their perspectives?

We invited all stakeholders directly involved with the Teachable Moment programme implementation at the provincial, district/regional, non-profit organisation and hospital levels. We have added this to the text (see lines 250-252). We only had two refusals. Due to the small numbers of people involved, we have not mentioned where these people were employed. We have added the roles for each stakeholder group (see lines 255-290) and hope this clarifies their contribution to the study findings. We have replaced the acronyms DSD, DoH and NPO.

8. It should be acknowledged as a limitation that patients were not included as participants. This is particularly the case since the Results section refers to “patients’ responses” to the programme and its effectiveness in fostering behavior change (paragraph starting line 208). This form of second-hand reporting (particularly from clinicians involved in delivering the programme) is not a strong approach to evaluating either patient perspectives or their behavior change outcomes. It may not be possible to address this limitation at this point. That said, given that the study is focused on implementation outcomes rather than programme effectiveness, I suggest deleting this paragraph and avoiding any comment on program effectiveness. In the absence of structured evaluation of programme mechanisms, including both positive and negative encounters, this finding is anecdotal.

Thank you. Yes, we believe that patient perspectives on the programme are vital. We do have these data. However, we believe that we already have a great deal of information in this paper and thus decided to write a separate paper on patient perspectives, including data from a small follow-up study of substance use outcomes. We have added this to the limitations. Your opinion on this approach is welcome.

We agree that the second-hand report of patient responses is not an indication of programme effectiveness. We have added to the text to clarify that the stakeholders’ perceptions of patients’ responses contributed to increased acceptability of the programme from the stakeholders’ perspectives. The counsellors were particularly motivated by the reported positive responses. (See lines 360 and 366-367.)

9. More is needed to justify the claim that programme operations did not interfere with clinical care in the emergency setting (line 223). Was this reported by just one staff member? Did anyone report anything different? Was this explored in a structured fashion?

All the emergency centre (EC) staff interviewed reported that the counsellors' presence was helpful in various ways and that the counsellors had positive interactions with the staff and patients, without hampering clinical care. We have clarified this (see lines 375 and 378). In the qualitative interviews, we asked hospital stakeholders about the positive and negative aspects, specifically exploring patient flow and patient needs in the EC.

10. It is not clear what is meant by the quote pertaining to staff taking advantage and not doing what they are supposed to do (line 252). More information is needed on what this finding means and how it relates to issues of staff management (as indicated in line 250).

We have added a sentence to clarify: The manager described difficulties in pushing her staff to reach high targets regarding numbers of patients screened, and felt that the support provided to the counsellors in supervision encouraged excuses from the staff for not reaching their targets. (See lines 415-417)

11. It is noted that there was a lack of compatibility between the SBIRT program and HAST services, and that this caused some difficulties in implementation (line 347). Some specific examples of these difficulties would be helpful here.

We have clarified the differences between the non-profit organisation services and the Teachable Moment programme and added the following: Additionally, the non-profit organisations were not familiar with the Teachable Moment model, and the rapid implementation did not allow much consultation with the organisations on this model; only one of the non-profit organisations had worked in mental health services previously. Furthermore, logistical issues proved difficult, such as the compilation of a day and night shift roster (starting line 521).

12. Many of the findings appear to identify barriers and problems in implementation, raising questions of the extent to which the programme was actually endorsed as appropriate. The authors suggest that those who were more removed from the programme held more negative views of its implementation and impact than those who were closer to the programme. Were the right people asked to report on implementation details? Did all participants know enough about the programme to be able to comment on the details, or are some of them simply echoing negative general perspectives of systems change and/or people who use drugs?

Relatedly, the findings indicate a certain level of stigmatizing beliefs held by participants about people who use alcohol and other drugs – rather than stemming from a lack of programme familiarity/proximity per se, this speaks more generally to the negative views that many healthcare providers and administrators hold about problematic substance use. There is a broad literature on occurrence and impact of substance-related stigma in healthcare settings, including emergency room settings, which is relevant to interpreting this finding. As it stands, the relation of these stigmatizing beliefs to programme implementation specifically is not considered in the Results or Discussion sections.

There was a wide range of opinions expressed regarding the appropriateness of the programme for the emergency centre setting. As the reviewer mentioned, we highlighted that the hospital staff and counsellors who were closest to the programme operations were more likely to report that the programme was appropriate for the setting. As mentioned in response to the question above on sampling strategy, we approached all stakeholders directly involved with the Teachable Moment programme, and only two people refused, so we did have the right group in that sense. Since programme implementation indicators were included in the performance objectives for all stakeholders, they should have had sufficient knowledge of the programme. Many of the stakeholders were required to report on the programme regularly to their superiors.

Regarding stigma related to substance use, we have highlighted a particular aspect related to perceptions that substance users will be resistant to changing their behaviour. We have added specific mention of this in the introduction, results and discussion sections. As the reviewer mentions, this belief is prevalent, including among emergency centre staff, and studies addressing this are referenced in the discussion.

13. There are points made in the Discussion that do not clearly follow from the material presented in the Results section. For example, it is noted that “Available financial and human resources, along with the top-down directive from the provincial government, were vital for programme implementation...” (lines 441-442). How was this assessed? The findings also appeared to identify problems with the top-down directives. Was a structured approach used to assess these features of implementation (e.g., were questions posed to stakeholders about the positive and negative role of these features of implementation and their impact)? Likewise, the Discussion refers to problems in connecting/engaging with middle management, yet this does not clearly follow from the results presented (lines 446-456). How was this evaluated? A thorough review of the Discussion is required to ensure that the interpretation follows from reported findings.

We have reviewed the discussion thoroughly. Regarding available financial and human resources, stakeholders did mention that without the addition of resources to the emergency services, the programme may not have been implemented. We mention this in the results: One of the main facilitators regarding the adoption of the programme was the top-down directive from the provincial government departments, accompanied by dedicated funding from the Premier’s office …

We have quotes underpinning this statement (see below); we were concerned about making the results section too long. We have added a portion of the provincial level stakeholder quote to the results section. The stakeholders were specifically asked about the barriers and facilitators to implementing the Teachable Moment programme.

Provincial official: “But I think the thing that was good is when I was going to the districts and to the facilities to ask them to do this, I was bringing resources – additional resources. That was easier. It made my life a little bit easier. To say I am asking you to do this but I will provide you with resources to be able to do that. So that was a little bit easier.”

Extra quote not added to the results:

Hospital staff member: “what made it easier for us was… So it wasn’t the money taken away from - it was additional money effectively. So, it wasn’t money taken away from one of the already very lean things that we are running here”

Thank you for this. We think this is due to our lack of clarity regarding senior and middle managers. We have clarified the organisational structure under participants with the following: ‘The stakeholders interviewed from district and regional offices are programme implementers within the health and social development systems. In these offices, their role could be categorised as that of ‘middle managers’ in that senior managers initially agreed to the Teachable Moment programme implementation, but then assigned all responsibility for implementation to the district and regional office stakeholders.’ In the results section, the opinions of the middle managers on the top-down directive are addressed under adoption. They felt that the Teachable Moment programme was introduced the “wrong way round”, that they weren't given the opportunity to provide any input on the plans and that the added responsibility was just “thrown in your lap”.

We appreciate the comments about the discussion. We have reviewed this section carefully. In some places, we have clarified statements in order to link these more clearly to the results. In one instance, we added a sentence and quote to the results (lines 387-391). This sentence had been omitted from the results by mistake.

14. Clarification is required on what is meant by “evidence for task-sharing approaches”. Does this pertain to SBIRT interventions or is it about implementation processes more generally? What are the tasks being (or not being) shared?

We have added information on task-sharing approaches to the introduction (lines 194-198). Task-sharing (or task-shifting) describes the use of non-specialists to deliver services. Thus, in the case of task-shared mental health interventions, these services would be delivered not by psychiatrists or psychologists, but by non-specialist doctors, nurses, social workers, lay health workers, etc.

15. The current Limitations section does not adequately address the limitations of this work and, importantly, how these are expected to affect the findings or what safeguards were used to minimize the impact of potential biases.

We have added text to this section to describe what was done to mitigate the limitations described (lines 725-730 and 732-735).

Reviewer 2

1. The figures need to be reviewed and, where necessary, re-made. The implementation process is well explained in Figure 1, and the organizational context is depicted in the other figures, but there is some inconsistency and a lack of important details. In Figures S1 and 2, it is not clear why different colors are used. For example, do they represent hierarchical relationships? Also, the usage of colors does not look consistent between the two figures. Figure 2 looks incomplete: I was expecting to see a summary of the main recommendations for each domain; instead, it only lists the CFIR constructs without any concrete example.

Thank you for the suggestions to improve the figures. The colours in Figure 2 and Figure S1 were not used consistently in our original submission. We have changed the colour of the textboxes in Figure S1 to blue for all provincial/regional/district offices; this is consistent with the use of colours in Figure 2. The colours in this figure are not meant to represent hierarchical relationships; thus, all these textboxes are now the same colour.

We have added the recommendations provided in the text to Figure 2 under the relevant CFIR constructs.

2. Introduction:

It is a good introduction, but more emphasis could be given to the specific aspects of this research in relation to the current literature.

We have added information on evidence related to SBIRT effectiveness and implementation, as well as gaps in the SBIRT literature (see lines 76-97).

3. Other aspects need clarification:

Lines 62 to 69: It is not clear in which respects the authors expect the implementation to differ due to the socioeconomic background, or whether there are clues about this in the new body of literature they mention. I would suggest further illustration.

We have added a description of how the use of implementation research differs between high-income and low- and middle-income countries, and added three sentences on how evaluation of task-sharing approaches may add to the literature (lines 94-97 and 194-200).

4. Line 71: it was difficult for me to follow which program the authors were referring to throughout the text. Is the Teachable Moment program the same one that was tested in the previous RCT? Is the intervention (the training of the counselors) implemented here the same as the one used in the RCT program? They mention the 'SBIRT program' or just 'the program' many times, as well as the 'Game Changer', but it is not clear which program they are referring to.

We have clarified the terminology used in the text to refer to the RCT and the programme implemented (Teachable Moment). The programme was tested in the RCT and then implemented as the Teachable Moment programme as part of a province-wide Game Changer initiative addressing alcohol harm reduction. The Teachable Moment programme was implemented using the same components as the programme tested in the RCT, namely: (i) the screening processes used to identify patients using substances at risky levels, (ii) the intervention, (iii) the cadre of workers serving as counsellors, (iv) the counsellor training and (v) the clinical supervision and support structure.

5. Lines 86 to 88: I understand that the intervention that showed the best effect in the previous RCT was a combined MI + problem-solving intervention. If that is the case, why did this program deliver a mostly MI-based intervention? Did this contribute to the perceived lack of an evidence base for the initiative mentioned by some stakeholders?

Yes, that is true. The best effect in the RCT was found for the combination of MI and problem-solving therapy (PST), but the group receiving MI alone also improved in terms of substance use scores. The Teachable Moment programme planned to deliver two sessions of PST in addition to the first session of MI. Unfortunately, this did not prove feasible in the services. In the RCT, participants received supermarket vouchers as compensation for their time, and the RCT counsellors had more time to telephone participants and remind them of their appointments. This did not contribute to the stakeholders’ perceptions of a lack of evidence, as those who mentioned this were not aware of the RCT, or of any other evidence from South Africa.

6. Methods:

I think this section needs more precision in some critical aspects, particularly clearer operational definitions of the implementation outcomes for this study.

We have added the definitions of each implementation outcome used as described by Proctor et al and further operationalised these for this study. See Table 1.

7. Line 123: Please provide a summary of the CFIR constructs that were not used.

The following were not included: under the ‘intervention characteristics’ domain, the construct ‘relative advantage’; under the ‘outer setting’ domain, the construct ‘cosmopolitanism’; under the ‘inner setting’ domain, the constructs ‘structural characteristics’ and ‘culture’; under the ‘characteristics of individuals’ domain, the constructs ‘self-efficacy’ and ‘individual stage of change’; and under the ‘process of implementation’ domain, the constructs ‘opinion leaders’, ‘champions’ and ‘external change agents’ (see S1 File).

8. Line 175: the word 'initial' is ambiguous here: does it refer to a general impression or to the idea they had before the program started? It seems to me that the construct of appropriateness was used to assess the suitability of some of the innovation's parameters in relation to the setting. If this is the case, I think the description given is not clear.

We have deleted the word ‘initial’ to clarify. Yes, we were using the construct to assess stakeholders’ views of the suitability of the Teachable Moment programme for the setting. (See Table 1.)

9. Line 177: it is not clear to me what the authors mean by 'the intention to try'. Later in the paper, they elaborate on the readiness to adopt. Are these concepts equivalent? I would suggest a brief explanation and a more precise operational definition here.

Thank you. As mentioned above, we have addressed this in Table 1.

10. Results:

This part is very clear and consistent in general.

Line 187: Other than meeting criteria for risky substance use, what other requirements were needed to be eligible? Please be precise in the description of the inclusion criteria, because it affects the overall impression of the program's feasibility that the reader will have. Did substance-specific ASSIST scores define risky substance use?

Thank you, we have clarified this (see lines 136-138). Risky substance use, as defined by the ASSIST scores for each substance, was the criterion used to include patients in the programme.
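Purely for illustration, the sketch below shows how substance-specific ASSIST scores might translate into an inclusion decision. It is a minimal sketch assuming the standard WHO ASSIST cut-offs (a score of 11 or more for alcohol, or 4 or more for any other substance, indicating at least moderate risk, and 27 or more indicating high risk); the thresholds actually applied in the Teachable Moment programme are those specified in the manuscript, and the function and variable names here are hypothetical.

# Illustrative sketch only: inclusion decision based on ASSIST scores.
# Assumes the standard WHO ASSIST cut-offs (alcohol: 11+ moderate, others: 4+ moderate;
# 27+ high for all substances). The programme's actual thresholds are those
# specified in the manuscript; names below are hypothetical.

MODERATE_RISK_CUTOFFS = {"alcohol": 11}  # all other substances use the default below
DEFAULT_MODERATE_CUTOFF = 4
HIGH_RISK_CUTOFF = 27

def risk_level(substance, score):
    """Classify a single substance-specific ASSIST score as 'low', 'moderate' or 'high' risk."""
    moderate_cutoff = MODERATE_RISK_CUTOFFS.get(substance, DEFAULT_MODERATE_CUTOFF)
    if score >= HIGH_RISK_CUTOFF:
        return "high"
    if score >= moderate_cutoff:
        return "moderate"
    return "low"

def eligible_for_programme(assist_scores):
    """Return True if any substance-specific score reaches at least moderate risk."""
    return any(risk_level(substance, score) != "low"
               for substance, score in assist_scores.items())

# Example: moderate-risk alcohol use and low-risk cannabis use -> eligible.
print(eligible_for_programme({"alcohol": 14, "cannabis": 2}))  # True

In this sketch, any patient with at least one substance score in the moderate- or high-risk range would be flagged as eligible for the brief intervention.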

11. Line 191: Is it to say that 83% of risky substance users received the first intervention?

Yes, that is correct. 83% of patients identified as risky substance users received the first session at the acute emergency centre visit.

12. Discussion:

The discussion is very well supported by the results, and the paper concludes with recommendations to foster implementation in the future. I think some aspects could be better contextualized or explained to highlight the specific contribution of this research:

Lines 380 to 382: how does this fact relate to local evidence (RCT mentioned in the beginning)?

We mentioned that it was not feasible to deliver the second and third sessions as part of usual services with the model implemented for the Teachable Moment programme (also see our response to your comment 5 above, which relates this to the RCT). This was not the case in the RCT, where only 20% of participants did not return for further sessions. (The study participants received compensation for their time, in the form of supermarket vouchers for completing assessments.) It is known that research does not always translate perfectly into implementation; this highlights the need for an implementation focus in effectiveness trials.

13. Lines 416 to 417: the explanation offered about stakeholders' views and how they differ from what is reported in the literature could be further elaborated; it looks like this finding is particularly specific to the context. Also, it is not clear from the last sentence whether it was a mistake to interview 'distal' stakeholders. Finally, in the recommendations, the authors should emphasize a differentiated strategy for early involvement of 'distal' stakeholders based on these findings.

Thank you. We have added to the text (see lines 620-622). We then elaborate further in the following paragraph. As the reviewer mentions, early involvement of distal stakeholders is important. This is vital to address, given their influence on programming and the fact that they are less familiar with the emergency centre setting. We have added to the discussion to highlight this (see lines 702-704).

14. Data: I could not access the dataset; apparently, an application process is needed. I'm not sure whether this precludes publication in this journal, or whether the authors could explain why the dataset is not public.

Yes, there is an application process, as these data are owned by the National Department of Health in South Africa. Any party wanting to access the data needs to apply via the National Health Research Database (https://nhrd.hst.org.za/).

We believe that the comments provided have helped us strengthen the paper and we really appreciate the careful reviews. We look forward to hearing from you.

Yours sincerely

Claire van der Westhuizen

Attachment

Submitted filename: Response to Reviewers.pdf

Decision Letter 1

Cecilia Benoit

25 Oct 2019

Implementation of a screening, brief intervention and referral to treatment programme for risky substance use in South African emergency centres: a mixed methods evaluation study

PONE-D-19-19391R1

Dear Dr. van der Westhuizen,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Cecilia Benoit

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: The authors addressed all the concerns I had with the first version of this manuscript. This new version is much more precise and ready to be published. I have no further comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Nicolas Barticevic Lantadilla

Acceptance letter

Cecilia Benoit

8 Nov 2019

PONE-D-19-19391R1

Implementation of a screening, brief intervention and referral to treatment programme for risky substance use in South African emergency centres: a mixed methods evaluation study

Dear Dr. van der Westhuizen:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Cecilia Benoit

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Use of the CFIR.

    (DOCX)

    S1 Table. CFIR constructs used by domain.

    (DOCX)

    S1 Fig. Outer and inner setting.

    (TIF)

    Attachment

    Submitted filename: Response to Reviewers.pdf

    Data Availability Statement

    The data are owned by the Western Cape Department of Health and applicants may apply online to the National Health Research Database (https://nhrd.hst.org.za/). The authors confirm they had no special access or privileges that other researchers would not have.


    Articles from PLoS ONE are provided here courtesy of PLOS
