Abstract
Substance use disorders (SUD) are chronic, relapsing medical conditions characterised by compulsive substance seeking and use. They constitute a substantial disease burden globally. Labelling of persons with SUD has created barriers to treatment, yet effective management strategies exist. The dental profession has embraced reforms designed to address the SUD epidemic by promoting continuing education for practitioners and initiating curriculum changes in dental schools. Screening, Brief Intervention and Referral to Treatment (SBIRT) is an evidence-based model for managing patients with SUD. This study presents the use of a formative 1-station Objective Structured Clinical Examination (OSCE), operationalised with the MD3 rating scale, for learning and assessment in SBIRT. Over 3 years of implementation, the SBIRT OSCE was successfully integrated into the curriculum of the College of Dental Medicine, Columbia University. The mean score for total adherent behaviours was 11.80 (SD = 4.23; range: 2 – 24), and Cronbach’s coefficient alpha for across-items reliability in adherent behaviours was 0.66. Adherent behaviours correlated with the global ratings (r = 0.66). Mean global rating scores were 2.90 (SD = 1.01) for collaboration and 2.97 (SD = 1.00) for empathy, and the global rating scores correlated with each other (r = 0.85). Histograms of the global rating scores resembled a normal distribution. The 1-station OSCE is a good model for learning about SBIRT. Psychometric analysis was useful in understanding the underlying construct of the MD3 rating scale and supported its reliability, validity and utility in dental education.
Keywords: Dental Curriculum, Objective Structured Clinical Examination, Psychometric analysis, Screening Brief Intervention and Referral to Treatment, Substance use disorders
1 |. INTRODUCTION
Substance use disorders (SUD) are chronic, relapsing conditions affecting the brain and characterised by compulsive substance seeking and use despite harmful consequences.1 Following decades of collaboration between the American Psychiatric Association (APA), the original developers of the Diagnostic and Statistical Manual of Mental Disorders (DSM), the World Health Organization (WHO) and the National Institute of Mental Health (NIMH), SUD is now the preferred medical terminology for addiction, as it is more scientifically accurate and is linked to a public health approach.2 The fifth edition of the DSM (DSM-5) has standardised the international classification of mental health disorders to reflect advances in scientific knowledge,3–5 so that SUD now encompass all levels of severity of substance-related mental health conditions. Specific disorders under SUD include alcohol use disorder (AUD), tobacco use disorder (TUD), opioid use disorder (OUD) and cannabis use disorder (CUD). SUD share many features with other chronic illnesses, including heritability, environmental influences, long-term management and relapse potential,6 and constitute a substantial disease burden globally,7 with more than a quarter of a billion people using drugs, cannabis being the most commonly used illicit drug, and some 35.6 million people suffering from SUD.8
United States (US) opioid prescribing rates are 2.5 to 4 times higher than in Western Europe, and overdose deaths are 7 times more common in the United States than in Western Europe.9 The current US opioid epidemic is driven by the use of synthetic opioids, particularly fentanyl derivatives. Indications are that Europe is also becoming an important global market for synthetic drug and polydrug consumption.10
The labelling of persons with SUD as morally weak is a fallacy that has distorted public policy for many decades and created barriers to treatment.11 SUD are in fact medical conditions, and there is a spectrum of effective management strategies depending on the level of severity and duration,12 even though there are gaps in the knowledge, values and skills healthcare providers need in order to implement evidence-based practices. For individuals at low risk, the goal is early intervention through screening, while treatment is recommended for persons with mild, moderate and severe conditions.13 The specific objective of treatment is to achieve remission by eliminating or diminishing symptoms below a pre-determined level,14 while the overarching goal of treatment services is to foster recovery.15 The recovery paradigm has necessitated the construction of a consensus on our understanding of recovery.16 Researchers from both the United Kingdom (UK) and the United States have promoted the concept of recovery capital, which recognises the lived experience of individuals with substance abuse problems, who face discrimination at all levels of society even after successful resolution of the problem, thereby increasing the risk of relapse. They have concluded that recovery is a process that involves the empowerment of social, physical, human and cultural resources in a way that is impactful on individuals, families and communities.17,18
The United Nations (UN) and the WHO have been involved in developing standards for SUD management that are anchored in scientific evidence, accessibility of care at different levels of the healthcare system, ethical considerations, responsiveness to special needs and support for integrated approaches.19 The Council of European Dentists (CED) strongly advocates best practices in health promotion and disease prevention,20 in alignment with the European Union (EU) best practice portal covering oral care, non-communicable diseases such as tobacco use and harmful alcohol use, and integrated care.21 In the United States, the Substance Abuse and Mental Health Services Administration (SAMHSA) has recommended Screening, Brief Intervention and Referral to Treatment (SBIRT) as the evidence-based model for managing patients with SUD,22,23 and the American Dental Association (ADA) has issued guidelines and recommendations to assist dental practitioners in managing patients with SUD.24
The role of the dental profession in the opioid epidemic is evolving. In the United States, 22.3% of all prescriptions written by dentists were for opioids, compared with 0.6% in the UK.10 Opioid prescribing by US dentists has begun to taper off,25 although dentists remain the leading source of opioid prescriptions amongst adolescents.26 Index opioid prescriptions for opioid-naïve adolescents and young adults are associated with a significantly increased risk of persistent opioid use and abuse.27 With the dental profession embracing reforms, continuing education opportunities are now more readily accessible, risk mitigation strategies in opioid prescribing are being provided28 and dental schools are actively revising their curricula to enable students to acquire competencies in the management of patients with SUD.
For European dental schools, the Association for Dental Education in Europe (ADEE) has provided an overarching, outcome-based curriculum consensus document which, though not directly addressing SUD training, encourages the development and utilisation of medical curriculum guidelines in overlap areas, allows for student immersion in contemporary advances and gives individual institutions the flexibility to integrate and map biomedical topics into their curriculum.29 In the United States, the Massachusetts Initiative entailed the adoption, by the dental schools of the Commonwealth, of core competencies on the role of dentistry in pain management, assessment of risk for SUD and interprofessional collaboration.30 Under this plan, each Massachusetts dental school planned to improve student training along primary (screening), secondary (treatment and referral) and tertiary (long-term management and stigma reduction) prevention domains, although no specific benchmarks were set. The Columbia University Initiative is a model of integrating SBIRT training along the curriculum continuum.31 Clearly, the pedagogical approaches to teaching dental students about SUD are multifaceted32 and potentially include classroom lectures, case-based seminars, structured observation at SUD treatment facilities, simulation exercises with standardised patients (SPs), clinical practice at on-site and off-site locations, scholarly and research activities, as well as assessment by Objective Structured Clinical Examination (OSCE).
Assessment of students’ progress through any curriculum is an essential part of the process to ensure acquisition of the necessary knowledge and skills, and many tools are available in addition to the OSCE, such as written assessments (short answer questions, structured essays), assessment by structured observation, standardised oral examinations, multisource assessments (student self-assessment, peer assessment, patient surveys), simulation with SPs and virtual reality scenarios.33 Generally, students’ perceptions of the wide variety of clinical experiences and learning opportunities are positive, although specific concerns have been expressed regarding clinical efficiency, availability of faculty, consistency of feedback and the challenge of meeting procedural requirements.34 Additional challenges are expected to include developing high-fidelity formative experiences for enhanced integration of the biomedical, behavioural and clinical sciences,35 including a deeper understanding of the type of language required to reduce stigma about SUD.2
An OSCE is a standardised, station-based clinical examination designed to assess competency in foundational knowledge, attitudes and skills. Structured as mock clinical encounters with SPs, it is essentially a skill performance under instruction,36 requiring students to use their clinical skills to complete one or more problem-solving tasks.37 OSCE stations are typically timed, and the marking system is well defined along with pre-determined grading criteria.38 Considered less subjective, with reduced patient and examiner variability, OSCEs have been observed to stimulate learning and more realistic self-assessment by students, and there is strong support for them amongst learners.39,40 Acknowledged limitations of the simulation format are that its effectiveness can only partially be determined by the performance of students with actual patients41 and that a summative OSCE may be more a test of specific skills than a marker of clinical experience.42
OSCEs have been validated as an assessment tool in psychiatry training insofar as the assessment instrument is specified,43 and as such, an SBIRT OSCE using a Motivational Interviewing (MI) protocol is conceivably an educational model for achieving competency in the management of patients with SUD. Indeed, in a dental setting, implementation of a formative OSCE using MI techniques in tobacco cessation counselling resulted in a 20% quit rate at 6 months, indicating that dental students can learn and successfully practice MI on the smoking behaviour of periodontal patients.44 The choice of assessment instrument should be influenced by educational and psychometric considerations.45 Although psychometric considerations were dominant in the past, there is now a growing interest in assessment for learning and its formative value.45 Shifting focus from “assessment of learning” to “assessment for learning” means that more attention can be given to long-term learning outcomes and to how learners are furnished with feedback.45
As the science and practice of MI evolved, several evidence-based instruments for assessing performance in MI were developed, such as the Motivational Interviewing Competency Assessment (MICA)46 and the Motivational Interviewing Treatment Integrity (MITI)47 scales. The construct underlying an instrument must be understood in order to appreciate the reliability and validity of the assessment process.48 Usability of such an assessment instrument is predicated upon the strength of its psychometric properties,49 especially given that these instruments are impactful on academic success.50
The MD3 coding scale is a specialised instrument for comprehensive evaluation of general SBIRT skills amongst multiple healthcare professionals and is considered a psychometrically reliable system for rating SBIRT interpersonal and communication interactions for fidelity, training, assessment and research purposes.51 It consists of 3 subscales (adherent behaviours, non-adherent behaviours and global ratings). The global rating subscale assesses overall motivational style in collaboration with the patient and empathy towards the patient’s perspective; it originated from the revision of the MITI scale, the standard rating system for MI skills, into a more rater-friendly instrument.51 For psychometric study purposes, the global rating scale is scored independently of the adherent and non-adherent subscales.52
The aims of this project were to provide dental learners with training in the management of patients with SUD through a formative assessment process, to evaluate the integration of an SBIRT OSCE into a dental curriculum and to test the psychometric properties of the MD3 coding scale and its utility in dental education.
2 |. MATERIALS AND METHODS
2.1 |. Preparation
The investigators applied for and received funding from SAMHSA to simultaneously train medical and dental students in SBIRT (Grant # H79 T1025937). The New York State Psychiatric Institute (NYSPI) Institutional Review Board (IRB) considered this study to be evaluative research of a training protocol and approved it as “program evaluation” requiring only student assent (Project # 7238). An SBIRT planning group consisting of faculty and staff from Columbia University College of Dental Medicine (CDM), College of Physicians and Surgeons (P&S) and the NYSPI held multiple implementation meetings.
2.2 |. Eligibility
The SBIRT 1-station OSCE was a mandatory academic exercise embedded, as a rotation, in a clinical clerkship course for second-year dental students (DDS2), and the DDS2 were managed using an academic course management system. Enrolment was 80 students per class during the three years (2016–2018) under investigation; the average age was 24 years, 20% were from underrepresented minorities and 48% were female.
2.3 |. OSCE team
Our interdisciplinary team consisted of a clinical psychologist, dentists, psychiatrists and administrators. A psychiatrist provided leadership and tasking. The clinical psychologist recruited and trained the SPs and a dentist worked with the clinical psychologist to develop the character description, directed the course and served as champion of the initiative at CDM. Administrators managed the logistics including marshalling examinees, time keeping and compensation for the SPs.
2.4 |. The OSCE station and training of SPs
This was a 1-station OSCE designed as an educational tool and a formative assessment. It was conducted over a 2-day period in a section of the pre-doctoral clinic reserved for the exercise, while routine patient care continued in other sections of the clinic floor. There were 4 SPs for 80 DDS2. To achieve uniform standards, SPs received 1 hour of training to ensure familiarity with SBIRT and the OSCE, the preferred responses to probing questions and instructions on how to demonstrate ambivalence to change during an interaction. Being professional actors, SPs were expected to be adept at improvisation. Included in the training package was a written character description of a 35-year-old who requested a dental cleaning and teeth whitening with the intention of regaining employment. The character denied any significant medical history but admitted to a long history of tobacco use. The character had a full complement of teeth, signs of periodontal disease and evidence of stomatitis nicotina. If probed in a supportive style, SPs were expected to also admit to marijuana use, some shortness of breath, social, financial and family stressors and a failed attempt to become a dental hygienist. The character so described was an amalgam of real clinical narratives.
2.5 |. Training for examinees
The SBIRT OSCE was preceded by a seminar,31 case-based exercises and an OSCE orientation session. The 90-minute interactive SBIRT seminar occurred in the second semester, and its objectives were that students should be able to describe the elements included in SBIRT, explain the rationale for SBIRT and understand how to implement the steps in Screening, Brief Intervention using MI and Referral to Treatment. At the completion of the seminar, learners were expected to have developed a more positive attitude towards patients with SUD.
The objectives of the case-based exercises were to teach the knowledge, values and attitudes relevant to SUD and SBIRT. A total of three cases (alcohol, tobacco and opioids) were reviewed in small group sessions in the third semester, with 1 hour devoted to each case. Case write-ups included discussion questions; students were given the cases ahead of the sessions and were required to prepare written responses to the questions to help stimulate class discussion. Small group facilitators were briefed and provided with a facilitator’s copy of the case write-ups in which model answers to the discussion questions were suggested. Learners completing the case-based exercises were expected to be prepared to transition to skills acquisition in a simulated scenario. The SBIRT OSCE was typically scheduled for either late third semester or early fourth semester.
OSCE orientation occurred a week before the actual exercise and included an overview of SBIRT, basic principles of MI, screening tools, basic steps in Brief Intervention and Referral to Treatment resources. Learners were told that the OSCE was a high-fidelity formative experience to help prepare them for actual clinical practice, an opportunity to learn and practice the skills of SBIRT, to self-reflect and to self-assess SBIRT skills, and also a tool for facilitators to identify students in need of additional guidance. Students were advised to treat SPs as they would treat real patients, to suspend biases and beliefs during the clinical encounter and to protect the confidentiality of the interaction. Furthermore, the three phases of the work plan (check-in, encounter with SP and checkout) were carefully laid out, and students were instructed not to wear nametags and to maintain universal precautions. At check-in, the case history, a clinical picture of stomatitis nicotina (in lieu of clinical examination of the SPs), screening tools, the SBIRT provider’s pocket card (Figure 1), the MD3 rating scale (Table 1) and Referral to Treatment resources were provided. Students were then allowed time for self-composure. During the encounters with SPs, students were expected to demonstrate the application of SBIRT skills using the knowledge and values acquired during the preceding seminars and case-based exercises. Learners were given a maximum of 15 minutes each for the encounter, with a time advisory at the 13th minute. There were debriefings and paperwork at checkout. The same set of faculty and staff managed each phase of the exercise.
FIGURE 1.
SBIRT provider’s pocket card
TABLE 1.
MD3 Screening, Brief Intervention and Referral to Treatment (SBIRT) Coding Scale by DiClemente et al.51
Recording: ________  Coder: ________

SBIRT-Adherent Behaviours (each item coded 0, 1 or 2)

Behaviour | Notes | Points
---|---|---
Raise the substance use subject respectfully | | 0 1 2
Open-ended questions | | 0 1 2
Acknowledge discomfort and/or express genuine concern about patient | | 0 1 2
Review current pattern of substance use | Alcohol / Tobacco / Illicit drugs / Prescriptions | 0 1 2
Affirmation/strengths recognition | | 0 1 2
Reflections (repeating, rephrasing, paraphrasing or reflection of feeling) | | 0 1 2
Explore pros/cons of substance use and/or help patient identify discrepancies | | 0 1 2
Assess readiness to change | | 0 1 2
Assess confidence | | 0 1 2
Provide relevant medical information | | 0 1 2
Respectful advice giving | | 0 1 2
Goal setting and developing plan | | 0 1 2
Summarise | | 0 1 2
Arrange a follow-up and/or referral to treatment | | 0 1 2

Total number of adherent behaviours (coded as 1 or 2 only) = ____  Total positive points = ____

SBIRT Non-Adherent Behaviours (−1 point per behaviour observed)

Behaviour | Checks | Points per behaviour
---|---|---
Warning/threatening | | −1
Being paralysed/unable to respond to patient concerns | | −1
Untimely or disrespectful advice giving and/or establishing goal or agenda without patient input | | −1
Labelling, premature diagnoses and/or stereotyping | | −1
Emphasis of power differential and/or judgmental tone | | −1
Lecturing and/or using medical jargon | | −1
Inappropriate response to patient comment/question | | −1

Total number of non-adherent behaviours = ____  Total negative points = ____

Total points = ____

Global Ratings (1 = lowest, 5 = highest)

Rating | Score
---|---
Collaboration | 1 2 3 4 5
Empathy | 1 2 3 4 5
Note: A separate coding guide with scale anchors and examples is necessary for utilization of the MD3 SBIRT coding scale. Please contact the authors for more information.
2.6 |. Standard setting
Implemented as a low-stakes, 1-station OSCE, completion of the 15-minute SBIRT encounter without intervention from the OSCE team was adjudged sufficient. There is no gold standard for assessments,53 although criterion-referenced and norm-referenced interpretations of results are common. Development of standard-setting methods for SP-based tests continues,54 even as absolute standard-setting procedures are considered more suitable for high-stakes examinations.55 Learners received feedback through the course management system.
2.7 |. Data collection
The SPs were provided with digital voice-activated audio recorders (Olympus DS®), which generated audio files. At the conclusion of each OSCE exercise, the recorders were collected from the SPs and the data were downloaded to a computer and archived for subsequent review and rating by the expert coders, who were blinded to the identities of the examinees. Two members of the OSCE team coded all the audio recordings together using the MD3 rating scale. Both had been independently trained in SBIRT, MI and the MD3 scale. In addition, they practiced coding together to achieve at least 80% matching on all items. The 14 items of interaction consistent with SBIRT (adherent behaviours) were coded on a 3-point Likert-type scale (0 = behaviour was absent, 1 = behaviour was attempted but insufficient and 2 = behaviour was present and satisfactory), from which the total adherent behaviour points were derived. The 7 items of performance not consistent with SBIRT (non-adherent behaviours) were coded as behaviour counts (−1 for each item observed), and total non-adherent points were obtained. For the global rating scores, 2 items (collaboration and empathy) were coded on a 5-point Likert-type scale from lowest (1) to highest (5). Actual coding time ranged from 17 to 20 minutes per interview. Completion of coding took several weeks, and care was taken to reduce fatigue and maintain fidelity by limiting the amount of coding completed at any given time.
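To make the subscale arithmetic described above concrete, a minimal Python sketch follows; the data frame, column names and example codes are hypothetical illustrations rather than part of the MD3 instrument or of our actual coding records.

```python
import pandas as pd

# Hypothetical item-level codes for three examinees (illustration only).
# Adherent items (14): 0 = absent, 1 = attempted but insufficient, 2 = present and satisfactory.
# Non-adherent items (7): 1 = behaviour observed (scores -1 point), 0 = not observed.
adherent = pd.DataFrame({f"adh_{i}": [2, 1, 0] for i in range(1, 15)})
non_adherent = pd.DataFrame({f"nonadh_{i}": [0, 1, 0] for i in range(15, 22)})
global_ratings = pd.DataFrame({"collaboration": [4, 3, 2], "empathy": [4, 3, 2]})

total_adherent_points = adherent.sum(axis=1)            # sum of the 0/1/2 codes
n_adherent_behaviours = (adherent > 0).sum(axis=1)      # items coded 1 or 2 only
total_non_adherent_points = -non_adherent.sum(axis=1)   # -1 per observed behaviour
total_points = total_adherent_points + total_non_adherent_points

summary = pd.DataFrame({
    "total_adherent_points": total_adherent_points,
    "n_adherent_behaviours": n_adherent_behaviours,
    "total_non_adherent_points": total_non_adherent_points,
    "total_points": total_points,
}).join(global_ratings)
print(summary)
```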
Separately, a Behaviour Change Counseling Checklist (BCCC) was developed in collaboration with academic stakeholders; it was used by SPs to evaluate each DDS2 they encountered, while all learners completed a self-assessment with the same BCCC (Table 5).
TABLE 5.
Behaviour Change Counseling Checklist (BCCC)
BCCC: SP assessment ________  BCCC: Self-assessment (Student) ________  DATE: ________  SBIRT ID: ________

Item | 0 points | 0.5 points | 1 point
---|---|---|---
Question Style: | | |
1 | Questions are closed-ended and student interrupts patient. Does not allow for silence. | Utilises one or two open-ended questions; may occasionally interrupt patient. | Utilises multiple open-ended questions, allowing the patient to tell their story. Utilises silence appropriately.
2 | Communication with patient is unidirectional. Most statements are telling the patient what to do. | Elicits information from patient but does not respond to patient’s concerns or assess understanding of information provided. | Ensures bidirectional communication, with student asking for patient’s perspective. Consistently assesses understanding when providing information or explanations.
Respects patient autonomy: | | |
3 | Does not ask about patient’s perceptions of behaviours. | Asks about patient’s perceptions but does not explore them. | Asks about patient’s perception of the behaviour and explores this.
4 | Does not ask for patient preferences on changing. Does not respect patient preferences when provided. | Respects patient preferences when offered but does not actively solicit preferences. May ask for permission to discuss their views on the behaviour change but does not give options (tells the patient what to do next). | Solicits and respects patient preferences. Asks the patient for permission to discuss change or to give their opinion and offers options for the patient to consider.
Demonstrates a Curious Tone: | | |
5 | Explores neither the pros nor the cons of continuing with the behaviour. | Explores either the pros or the cons of continuing with the behaviour. | Explores with the patient both the pros and cons of continuing with their behaviour.
6 | Does not identify or respond to the patient’s unique characteristics that contribute to the behaviour. | Seeks to understand some but not all of the patient’s unique characteristics that contribute to the behaviour. | Both identifies and adapts to the patient’s unique characteristics (demographic, cognitive, physical, cultural, socioeconomic or situational needs) that contribute to the behaviour.
7 | Does not ask about willingness or readiness to change. | Asks if willing to change but does not explore readiness or confidence in change. | Asks the patient’s opinion on willingness to change. Explores readiness or confidence in change. May utilise a scale to rate readiness or confidence in change.
Fosters Collaborative Relationship: | | |
8 | Does not solicit or address patient preferences for the plan. Unidirectional, with the doctor doing the decision-making. | Provides options for the patient to consider but does not ask about their preference. Beginning to incorporate the patient’s opinions in decision-making. | Provides options for the patient to consider and asks about the patient’s preferences. Engages in shared decision-making. Actively seeks out the patient’s input.
Empathic Listening: | | |
9 | Does not respond to questions asked; follow-up questions are not based on patient answers; questions are repeated, indicating that earlier answers were not fully heard. | Demonstrates some careful listening skills. | Responds appropriately to questions; follow-up questions are directed by the patient’s answers; does not repeat questions or ask about things already told. Actively listens and reflects the patient’s concerns and perspective.
10 | Does not summarise. | Summarises but does not ask for clarification or ensure that the story has been heard correctly. | Summarises information that the student has heard. Checks for clarification.
11 | Does not recognise or comment on strengths the patient has. | Identifies strengths and/or prior successes but does not provide positive reinforcement (does not highlight for the patient that these are strengths or successes). | Affirmations are used: student recognises strengths the patient may have and provides positive reinforcement for previous successes.
2.8 |. Descriptive and psychometric analyses of MD3 rating scale
Descriptive analysis summarised the data. Reliability and validity tests were used to evaluate the consistency and accuracy of data, respectively. Normality tests supplemented the validation tests. We analysed the across-items (within station) reliability and the construct validity. For normality, the histograms of empathy and collaboration scores were plotted for graphical and statistical assessment.
Reliability refers to the consistency of data, and variables such as the skill to be assessed, test length and the number of examination stations45 are strongly related to the two broad types of reliability (test-retest and internal consistency).48 Internal consistency reliability is further differentiated into inter-rater, within-station (across-items) and across-station reliability, with Cronbach’s coefficient alpha being the most commonly reported index of reliability.48,56 Generally, Cronbach’s coefficient alpha ranges from 0 to 1 (mid-range is 0.4 – 0.8).57 The foundational literature on the MD3 SBIRT Coding Scale established a high inter-rater reliability comparable to established MI coding systems,51 and as such, we calculated Cronbach’s coefficient alpha for across-items reliability for the adherent and non-adherent scores.
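As an illustration, the raw (unstandardised) Cronbach’s coefficient alpha can be computed from the item-level codes as sketched below; the simulated 0/1/2 scores and column names are hypothetical and are shown only to make the formula explicit.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Raw Cronbach's coefficient alpha: rows = examinees, columns = items."""
    items = items.dropna()
    k = items.shape[1]                                   # number of items
    sum_item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)            # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated 0/1/2 adherent codes for 202 examinees across 14 items.
rng = np.random.default_rng(0)
adherent = pd.DataFrame(rng.integers(0, 3, size=(202, 14)),
                        columns=[f"adh_{i}" for i in range(1, 15)])
print(round(cronbach_alpha(adherent), 3))
```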
The function of validity tests is to verify the accuracy of responses in an instrument.58 As the theory and practice of validity evolved over time,59 contextualisation became important, as there are different types of validity (face, content, construct, criterion, concurrent and predictive), with construct validity representing the overarching framework60 because of its explanatory concepts.56 The framework for validating measures is to draw inferences from scores61 or to compare scores with those from other measures.62 Evidence of convergent and divergent validity is required for construct validation.63 To assess convergent validity, the correlation coefficients of the global rating scores to the total adherent behaviour scores, and between the two global rating scores, were computed. The correlation coefficients of the global rating scores to the total non-adherent behaviour scores were calculated for divergent validity. The correlation coefficient between total adherent and non-adherent scores was also determined.
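The convergent and divergent validity indices described above are pairwise Pearson correlations between subscale scores; one possible sketch is given below, assuming the subscale totals sit in a DataFrame called scores with hypothetical column names.

```python
from itertools import combinations

import pandas as pd
from scipy import stats

def validity_correlations(scores: pd.DataFrame) -> pd.DataFrame:
    """Pairwise Pearson r (with p-values) between MD3 subscale scores."""
    rows = []
    for a, b in combinations(scores.columns, 2):
        paired = scores[[a, b]].dropna()
        r, p = stats.pearsonr(paired[a], paired[b])
        rows.append({"pair": f"{a} vs {b}", "r": round(r, 2), "p": p})
    return pd.DataFrame(rows)

# Hypothetical columns: total_adherent, total_non_adherent, collaboration, empathy.
# Convergent validity: adherent totals vs global ratings (and the two global ratings);
# divergent validity: non-adherent totals vs global ratings.
# print(validity_correlations(scores))
```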
Normality of the data distribution is important to validation because it indicates whether the correct statistical tests have been used.64 Along with visualisation of histograms, the Shapiro-Wilk test is an example of the goodness-of-fit tests recommended for evaluating normality.64
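A sketch of the graphical and statistical normality checks (histogram, skewness, kurtosis and the Shapiro-Wilk test) is shown below; the simulated 1–5 ratings are hypothetical, and matplotlib is used only to draw the histogram.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def normality_check(scores, label):
    """Histogram plus skewness, kurtosis and the Shapiro-Wilk test for one global rating."""
    scores = np.asarray(scores, dtype=float)
    w, p = stats.shapiro(scores)
    print(f"{label}: skew = {stats.skew(scores):.3f}, "
          f"kurtosis = {stats.kurtosis(scores):.3f}, W = {w:.2f}, p = {p:.4f}")
    plt.hist(scores, bins=range(1, 7), edgecolor="black")  # 5-point global rating scale
    plt.title(f"Global rating: {label}")
    plt.xlabel("Score")
    plt.ylabel("Count")
    plt.show()

# Example with simulated ratings (1-5) for 200 examinees.
rng = np.random.default_rng(1)
normality_check(rng.integers(1, 6, size=200), "collaboration (simulated)")
```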
2.9 |. Comparisons
To investigate differences between the three years of investigation, regression models of the adherent behaviour scores on year of investigation were applied. The correlation coefficients between the MD3 subscale scores and the BCCC scores were also assessed.
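The Results note that cumulative logistic regression with a Bonferroni correction was used for these comparisons; the statistical software is not specified in the text, so the sketch below shows one way to fit such a model with the statsmodels OrderedModel class, under the assumption of a hypothetical data frame df holding one ordinal item score per row together with the cohort year.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def compare_years(df: pd.DataFrame, item: str, n_tests: int = 20) -> None:
    """Cumulative (ordinal) logistic regression of one MD3 item score on year of investigation."""
    # Year of investigation (1, 2 or 3) entered as dummy-coded predictors.
    year_dummies = pd.get_dummies(df["year"], prefix="year", drop_first=True).astype(float)
    endog = df[item].astype(pd.CategoricalDtype(ordered=True))  # ordinal 0/1/2 item score
    result = OrderedModel(endog, year_dummies, distr="logit").fit(method="bfgs", disp=False)
    print(result.summary())
    # Bonferroni-adjusted significance threshold, mirroring the approach in the Results.
    print(f"Bonferroni-adjusted alpha: {0.05 / n_tests:.4f}")

# Example call on a hypothetical data frame of item scores and cohort year:
# compare_years(df, "adh_1")
```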
3 |. RESULTS
For the three years under investigation, data were collected for 240 DDS2 but we experienced 15.8% data loss due to technical issues such as inaudible recordings, most of which occurred in the first year of investigation. In particular, we did not capture the SP assessment data of the BCCC in the first year. We report data for 202 MD3 recordings (57 in year 1, 66 in year 2 and 79 in year 3) (Table 2).
TABLE 2.
Descriptive data for adherent behaviours assessed with the MD3 SBIRT coding scale (N = 202)
Adherent behaviour | Mean | Mode | Median | Standard Deviation (SD) | Points: 0 (%) | 1 (%) | 2 (%)
---|---|---|---|---|---|---|---
1. Raise the substance use subject respectfully. | 1.36 | 2 | 2 | 0.73 | 15.35 | 33.66 | 50.99 |
2. Open-ended questions. | 1.29 | 2 | 1 | 0.78 | 19.80 | 31.19 | 49.01 |
3. Acknowledge discomfort and/or express genuine concern about patient. | 0.33 | 0 | 0 | 0.60 | 74.26 | 18.81 | 6.93 |
4. Review current pattern of substance use. | 1.44 | 1 | 1 | 0.50 | 0.00 | 56.44 | 43.56 |
5. Affirmation/strengths recognition. | 0.53 | 0 | 0 | 0.73 | 60.89 | 25.25 | 13.86 |
6. Reflections (repeating, rephrasing, paraphrasing or reflection of feeling). | 0.72 | 0 | 1 | 0.77 | 47.03 | 33.66 | 19.31 |
7. Explore pros/cons of substance use and/or help patient identify discrepancies. | 1.13 | 2 | 1 | 0.82 | 27.72 | 31.68 | 40.59 |
8. Assess readiness to change. | 1.06 | 1 | 1 | 0.71 | 22.28 | 49.50 | 28.22 |
9. Assess confidence. | 0.21 | 0 | 0 | 0.54 | 84.65 | 9.41 | 5.94 |
10. Provide relevant medical information. | 1.15 | 2 | 1 | 0.79 | 24.26 | 36.14 | 39.60 |
11. Respectful advice giving. | 1.07 | 1 | 1 | 0.77 | 25.74 | 41.09 | 33.17 |
12. Goal setting and developing a plan. | 0.29 | 0 | 0 | 0.58 | 77.23 | 16.34 | 6.44 |
13. Summarise. | 0.42 | 0 | 0 | 0.64 | 65.84 | 26.24 | 7.92 |
14. Arrange follow-up and/or referral to treatment (N = 201). | 0.80 | 1 | 1 | 0.72 | 37.81 | 44.28 | 17.91 |
Total adherent points | 11.80 | | | 4.23 | | |
3.1 |. Descriptive data
Mean of total adherent points was 11.80 (SD =4.23) (range: 2 – 24). Adherent behaviour #4 (Review current pattern of substance use) scored the highest with a mean of 1.44 (SD =0.50). Adherent behaviours #1, 2 and 4 had the three highest scores (Table 2). Adherent behaviour #9 (Assess confidence) scored the lowest with a mean of 0.21 (SD =0.54). Mean of total non-adherent points was 0.46 (SD =0.92). The most frequently scored non-adherent behaviour was #17 (Untimely or disrespectful advice giving and/or establishing goal or agenda without patient input) with a mean of 0.14 (SD =0.37) whilst #15 (Warning/threatening) behaviour was not observed.
3.2 |. Reliability of data
Cronbach’s coefficient alpha for across-items in adherent behaviours was 0.664 (raw calculation) and 0.660 (standardised). For non-adherent behaviours, the coefficient alpha across items was 0.435 (raw calculation) and 0.431 (standardised).
3.3 |. Validity of data
For convergent validity, the correlation coefficients of the global ratings to total adherent behaviours were r = 0.66 (collaboration) and r = 0.66 (empathy), while the correlation coefficient between collaboration and empathy was r = 0.85. For divergent validity, the correlation coefficients of total non-adherent behaviours to the global ratings were r = −0.48 (collaboration) and r = −0.47 (empathy). Total adherent behaviour scores correlated negatively with total non-adherent behaviour scores (r = −0.24) (Table 4).
TABLE 4.
Correlation coefficients between MD3 rating subscale scores
Comparison between subscales | Correlation coefficient |
---|---|
Total adherent behaviours and Global ratings (collaboration) | r = 0.66 |
Total adherent behaviours and Global ratings (empathy) | r = 0.66 |
Global ratings (collaboration and empathy) | r = 0.85 |
Total non-adherent behaviours and Global ratings (collaboration) | r = −0.48 |
Total non-adherent behaviours and Global ratings (empathy) | r = −0.47 |
Total adherent and total non-adherent behaviours | r = −0.24 |
3.4 |. Normality of data
The histograms of the global rating scores were tall, sharp, neither centrally peaked nor perfectly symmetrical and had some contamination at the tails (Figures 2 and 3). Mean of collaboration points was 2.90 (SD =1.01) (range: 1 – 5) with a skewness of −0.122 and kurtosis of −0.649 and normality was rejected for alpha =0.05 using the Shapiro-Wilk test (W = 0.90, p < .001). Mean of empathy points was 2.97 (SD =1.00) (range: 1 – 5), skewness was −0.370, and kurtosis was −0.424 with normality rejected for alpha =0.05 using the Shapiro-Wilk test (W = 0.89, p < .001).
FIGURE 2.
Expected and observed percentage in histogram distribution of global rating score for collaboration. Normality is rejected for alpha=0.05 using the Shapiro-Wilk test (W = 0.90, p < .0001)
FIGURE 3.
Expected and observed percentage in histogram distribution of global rating score for empathy. Normality is rejected for alpha=0.05 using the Shapiro-Wilk test (W = 0.89, p < .0001)
3.5 |. Comparisons over three years of investigation
The scores for adherent behaviours #1, #4 and #7 differed by year of investigation (Wald χ²(2) = 46.33, p < .0001; Wald χ²(2) = 33.20, p < .0001; and Wald χ²(2) = 13.86, p = .0010, respectively), with students in the second cohort of learners having the highest scores. The scores for adherent behaviours #6, #12 and #13 also differed by year (Wald χ²(2) = 19.12, p < .0001; Wald χ²(2) = 14.12, p = .0009; and Wald χ²(2) = 22.94, p < .0001, respectively), with year 1 being the highest. Adherent behaviour #10 scores were highest in the third year of investigation (Wald χ²(2) = 13.43, p = .0012). There was no statistically significant difference in total adherent points, global rating scores or BCCC scores over the three years of investigation (Table 3).
TABLE 3.
Comparison of descriptive data between years of investigation using regression model on individual adherent behaviours and other related measures (N = 202)
Measure | Mean | Mode | Median | SD | Year 1 (n = 57) Mean (SD) | Year 2 (n = 66) Mean (SD) | Year 3 (n = 79) Mean (SD) | Wald χ²(2), p-value
---|---|---|---|---|---|---|---|---
MD3 Adherent behaviours (Range: 0–2) | | | | | | | |
1. Raise the substance use subject respectfully. | 1.36 | 2 | 2 | 0.73 | 0.93a (0.70) | 1.89b (0.43) | 1.22a (0.69) | 46.33, p < .0001 |
2. Open-ended questions. | 1.29 | 2 | 1 | 0.78 | 1.32 (0.69) | 1.32 (0.91) | 1.25 (0.72) | 1.30, p = 0.5228 |
3. Acknowledge discomfort and/or express genuine concern about patient. | 0.33 | 0 | 0 | 0.60 | 0.44 (0.57) | 0.30 (0.70) | 0.27 (0.52) | 6.29, p = 0.0431 |
4. Review current pattern of substance use. | 1.44 | 1 | 1 | 0.50 | 1.05a (0.23) | 1.70b (0.46) | 1.49b (0.50) | 33.20, p < 0.0001 |
5. Affirmation/strengths recognition. | 0.53 | 0 | 0 | 0.73 | 0.74 (0.74) | 0.56 (0.84) | 0.35 (0.56) | 8.27 p = 0.0160 |
6. Reflections (repeating, rephrasing, paraphrasing or reflection of feeling). | 0.72 | 0 | 1 | 0.77 | 1.07a (0.70) | 0.52b (0.79) | 0.65b (0.72) | 19.12 p < 0.0001 |
7. Explore pros/cons of substance use and/or help patient identify discrepancies. | 1.13 | 2 | 1 | 0.82 | 0.79a (0.73) | 1.30b (0.86) | 1.23ab (0.78) | 13.86 p = 0.0010 |
8. Assess readiness to change. | 1.06 | 1 | 1 | 0.71 | 0.96 (0.65) | 1.12 (0.81) | 1.08 (0.66) | 1.75 p = 0.4163 |
9. Assess confidence. | 0.21 | 0 | 0 | 0.54 | 0.19 (0.44) | 0.24 (0.58) | 0.20 (0.56) | 0.52 p = 0.7695 |
10. Provide relevant medical information. | 1.15 | 2 | 1 | 0.79 | 0.82a (0.68) | 1.24b (0.88) | 1.32b (0.71) | 13.43 p = 0.0012 |
11. Respectful advice giving. | 1.07 | 1 | 1 | 0.77 | 0.77 (0.60) | 1.18 (0.88) | 1.20 (0.72) | 11.43 p = 0.0033 |
12. Goal setting and developing a plan. | 0.29 | 0 | 0 | 0.58 | 0.42a (0.57) | 0.06b (0.35) | 0.39a (0.69) | 14.12 p = 0.0009 |
13. Summarise. | 0.42 | 0 | 0 | 0.64 | 0.72a (0.59) | 0.32b (0.68) | 0.29b (0.56) | 22.94 p < .0001 |
14. Arrange follow-up and/or referral to treatment (N = 201). | 0.80 | 1 | 1 | 0.72 | 0.79 (0.53) | 0.83 (0.85) | 0.78 (0.73) | 0.1124, p = 0.9454 |
MD3: Total adherent behaviours (Range, 2–24, N = 202) | 11.80 | 11 | 11 | 4.23 | 11.02 (3.65) | 12.59 (4.54) | 11.71 (4.28) | 4.37, p = 0.1123 |
MD3: Total non-adherent behaviours (Range, 0–6, N = 202) | 0.46 | 0 | 0 | 0.92 | 0.30 (0.71) | 0.44 (0.75) | 0.59 (1.15) | 2.57, p = 0.2761 |
MD3: Global ratings (Collaboration) (Range, 1–5, N = 201) | 2.90 | 3 | 3 | 1.01 | 2.82 (0.90) | 2.89 (0.86) | 2.96 (1.19) | 0.63, p = 0.7295 |
MD3: Global ratings (Empathy) (Range, 1–5, N = 200) | 2.97 | 3 | 3 | 1.00 | 2.79 (0.97) | 3.08 (0.85) | 3.01 (1.12) | 3.12, p = 0.2104 |
BCCC: Self-assessment (Range, 4–11, N = 235) | 9.25 | 10 | 9 | 1.47 | 9.32 (1.56) | 8.79 (1.51) | 9.61 (1.21) | 2.94 p = 0.2295 |
BCCC: SP Assessment (Range 2–11, N = 158) | 7.34 | 9 | 7 | 2.42 | -- | 7.12 (2.43) | 7.50 (2.44) | 0.80 p = 0.3713 |
Note:
Superscripts with different letters indicate significant differences (at p < 0.0025) between cohorts.
3.6 |. Comparison between MD3 rating scale and BCCC
The correlation coefficient between total adherent behaviours and the BCCC was r = 0.19 for student self-assessment and r = 0.34 for SP assessment. Correlation coefficients for the global ratings were r = 0.26 (collaboration and self-assessment), r = 0.25 (collaboration and SP assessment), r = 0.15 (empathy and self-assessment) and r = 0.17 (empathy and SP assessment) (Table 6).
TABLE 6.
Correlation coefficients between MD3 rating subscale scores and BCCC assessments
Total adherent behaviours | Total non-adherent behaviours | Collaboration | Empathy | BCCC: Self-assessment | |
---|---|---|---|---|---|
r (p-value) | |||||
Total non-adherent behaviours | −0.24 (p = 0.0004) | ||||
Collaboration | 0.66 (p < .0001) | −0.48 (p < .0001) | |||
Empathy (N = 200) | 0.66 (p < .0001) | −0.47 (p < .0001) | 0.85 (p < .0001) | ||
BCCC: Self-assessment (n = 121) | 0.19 (p = 0.0400) | −0.15 (p = 0.0957) | 0.26 (p = 0.0039) | 0.15 (p = 0.0946) | |
BCCC: SP assessment (n = 65) | 0.34 (p = 0.0051) | −0.14 (p = 0.2643) | 0.25 (p = 0.0421) | 0.17 (p = 0.1835) | 0.27 (p = 0.0290) |
Note: To investigate differences between the three years of investigation, regression models of the adherent behaviour scores on year of investigation were applied. Except for total adherent points, all other measures were not normally distributed, and thus cumulative logistic regression models were implemented. To adjust for multiple comparisons, a Bonferroni correction was applied and the adjusted significance level was p < 0.0025.
4 |. DISCUSSION
Descriptive analysis of the MD3 rating scores indicated that DDS2 demonstrated a broad spectrum of adherent interactions and a very limited number of non-adherent behaviours, and that overall skill performance in MI was average, implying achievability. Having operationalised the SBIRT 1-station OSCE for 3 years, we report successful integration into the curriculum. The SBIRT OSCE was the third part of an innovative, grant-funded, three-tiered project (seminar, case-based exercise and OSCE) to introduce SBIRT training into a dental curriculum. The OSCE experience was evocative, bringing to the fore snippets of the biomedical, behavioural and clinical curricular inputs in SUD education and enabling students to immerse themselves in the topic. In particular, anecdotally, the opportunity to encounter SPs was welcomed because it offered a safe place to mitigate some of the insecurities associated with patient management, ahead of induction into clinical dentistry. Beyond the OSCE, we are witnessing increasing application of SBIRT knowledge, attitudes, values and skills in clinical practice. Consequently, the SBIRT OSCE was integrated into the curriculum both horizontally and vertically.
The review of the descriptive data for each adherent behaviour item in the MD3 rating scale (Table 2) showed that students were more able to raise the substance use subject respectfully, ask open-ended questions, review current patterns of substance use, explore the pros and cons, assess readiness to change, provide relevant medical information and give respectful advice, and were less able to express genuine concern, assess confidence, recognise strengths, provide reflections, set goals and summarise. The exercise and rating scale were useful in highlighting the strengths and weaknesses of individual learners and where more training would be needed to improve the overall level of proficiency.
Table 3 illustrates the inescapable fact that scores can differ from year to year, that training can improve with repetition or be impaired by fatigue and that some communication skills are difficult to master. In this study, statistically significant changes in the pattern of scores were seen for 7 of the 14 MD3 adherent behaviour rating items, including, noticeably, a decline in reflections, goal setting and summaries. However, there were no significant variations in total adherent behaviour scores, total non-adherent behaviour scores, global rating scores or BCCC scores across the years of investigation. We theorise that a broadly representative assessment instrument enhances within-station (across-items) reliability scores, and we recommend 14 items as the standard. It has already been proposed that wide sampling of stations improves across-station reliability scores, with a benchmark at 17 stations.65 Dental students are trained to collect a lot of data, and they have a tendency to ask a lot of questions. Future training would need to place more emphasis on the use of reflections, goal setting and summaries. Reflective techniques (listening and statements) are a component of empathy,66 and summaries serve as both reflective and transition statements.
Our study design was better suited to across-items reliability because we had only one OSCE station, and we used a coder pair, making calculation of inter-rater reliability redundant. We found the across-items reliability for adherent interactions to be modest and acceptable considering the uniqueness of the skills being evaluated, while the weaker across-items reliability for non-adherent behaviours is understandable for a 1-station OSCE with only 7 items, as the dearth of negative behaviours could reduce data sensitivity.
Adherent behaviour scores correlated moderately with global rating scores, while the collaboration and empathy scores were very strongly related to each other. Non-adherent behaviour scores correlated negatively with both adherent behaviour scores and the global rating scores. Taking into account the data from the foundational MD3 literature,51 these correlation patterns were anticipated, but the correlation between collaboration and empathy exceeded expectation. Although global rating scores are typically calculated after completion of the adherent and non-adherent behaviour item counts, the acquisition of knowledge and attitudes in MI techniques, in preparation for an OSCE, can certainly provide students with the global spirit of empathy and collaboration, enabling them to comfortably perform SBIRT-positive interactions while avoiding SBIRT-negative incidents. Generally, MI curricula increase the likelihood of clinicians addressing behaviours with patients,67 and for the keen learner, MI training can be inspirational, leading to improvements in clinical communication.
Visually, the distributions of the global rating scores were good approximations of a bell curve, although the Shapiro-Wilk tests rejected strict normality. There is evidence in the literature that the expectation of elegant normality is reductionist, as most measures inventorying perceptions and opinions do not pass all tests of normality.68 Hence, we surmise that the assessment data were reasonably consistent, the construct was valid, the histograms approached normality and the goodness-of-fit tests enhanced the validity of the data. From this, we infer that the MD3 rating scale is a good measure of student performance in the SBIRT OSCE and that the domains of the 14 SBIRT adherent behaviour items are clinically interdependent and relevant to dental education.
Generally, the MD3 rating scores correlated poorly with the BCCC. Statistically, the SP assessment correlated better with total adherent scores than did student self-assessment, and the correlations of the global ratings with the BCCC were comparable, with collaboration coefficients slightly higher than those for empathy. Notably, both rating scales and checklists have been used to assess interpersonal and communication skills,69 although there are concerns that checklist-type communication measures may trivialise content,70 do not necessarily provide more reliable scores69 and can be antithetical to the ideas and ideals of MI. Our findings lend credence to the assertion that rating scores are preferable to checklist scores for probing communication skills. For assessing procedural skills, higher levels of correlation between global rating scales and checklists have been reported.71
It has been noted that the quality of research on communication skills in dental education is variable,72 and this is probably true of the training itself. Recent findings also indicate that improvisation, empathy, developing trust, identifying the emotion behind words and collaboration are critical to the development of patient-centred therapeutic relationships, and clinicians proficient only in rote communication skills would most likely struggle in today’s complex health environment.73 Beyond structurally integrating innovative dental courses, attention and resources should be committed to the science and art of teaching so as to elevate the qualitative accomplishments of the reformed curricula. As such, pedagogy that interfaces the humanities with dental medicine is a sine qua non of new ideas in dental education.
Some limitations of this study are that the character description was limited to tobacco and marijuana, excluding opioids, and that it was not logistically feasible to deploy more than one coder pair, so we accepted the foundational inter-rater reliability score, given that it is evidence based. There will always be some concern about the relatedness of the MD3 rating scale to dentistry and whether it aligns with the standards in communication skills across curricula. Although goodness-of-fit tests have limitations and more evidence is needed to support the generalisability of our findings, our sample size was large enough to draw some conclusions. This work was based on a formative application of the SBIRT OSCE to furnish learning by, of and for assessment, apt for dental students during and beyond the dental school years. Assessments have a powerful effect on learners, and an appropriate conversation should be about the evolving innovations in pedagogy for aligning assessments with long-term learning.74
In the long term, certainly in the United States, dentists with the requisite training can provide treatment for TUD, including individual counselling and prescription of Food and Drug Administration (FDA) approved medications such as Nicotine Replacement Therapies (NRT) and non-nicotine medications. Training can be enhanced with certification as a Tobacco Treatment Specialist (TTS).75,76 We deem it appropriate to mention that the first author of this manuscript holds a National Certificate in Tobacco Treatment Practice (NCTTP). For other SUD, the evidence-based approach would be screening to determine risk level, brief intervention appropriate to the risk assessment and then referral to treatment at a SUD facility in a “warm hand-off” manner. The warm hand-off ensures continuation of dental care in an atmosphere of collaborative interdisciplinary practice and reduces bias towards individuals with SUD.77 Biases and problems with access to care are some of the challenges that persons with SUD face within the healthcare system,78 and stigmatisation has been reported in dental settings.79 Given that medical education is still actively adapting its curricula to address the knowledge gap in SUD,80 dental curricular initiatives are transformative.
Future research questions should also include whether the SBIRT OSCE has predictive validity. From the perspective of the authors, SBIRT can be incorporated into the clinic workflow and screening tools can be integrated into the electronic health record system.
5 |. CONCLUSION
The utilisation of a 1-station OSCE with an engaging patient storyline as a formative educational experience, along with rigorous data collection, has afforded us the opportunity to advance inquiry into a difficult subject and to yield scientific evidence to drive the curriculum imperatives of academic decision-makers. In this project, we intersected substance-related mental health and dental medicine and extended the boundaries of a traditional curriculum. The 1-station OSCE is a good model for training in SBIRT, and formative educational activities designed to enable students to acquire the knowledge, values, attitudes and skills for managing patients with SUD should be supported in the curriculum.
ACKNOWLEDGEMENTS
The authors wish to express their gratitude to the staff of the College of Dental Medicine and the Department of Psychiatry, Columbia University and the New York State Psychiatric Institute who gave their time and effort in ensuring the completion of this study. We are especially grateful to Jorge Caccavelli for his contributions to the management of the OSCE. We also wish to acknowledge Drs. Michael J. Devlin and Prantik Saha who were the lead authors of the BCCC.
Funding information
Substance Abuse and Mental Health Services Administration, Grant/Award Number: 1H79T1025937
Footnotes
CONFLICTS OF INTEREST
Dr. Odusola is currently a consultant for the American Academy of Addiction Psychiatry (AAAP). Dr. Nunes served as an unpaid consultant to Alkermes, Braeburn-Camurus and Pear Therapeutics, has received in-kind medication for studies from Reckitt/Indivior and Alkermes and has received a therapeutic application from Pear Therapeutics for a study. Dr. Levin has received salary support from NYSPI, grant support from SAMHSA and NIH, and material and research support for a study from USWorldMeds, is a consultant for Major League Baseball and is on the Scientific Advisory Board of Alkermes and USWorldMeds. Dr. Levin did not personally receive any compensation in the form of cash payments (honoraria/consulting fees/travel reimbursement or meals). Otherwise, we have no financial relationships with the content of this paper.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author, fo17@cumc.columbia.edu, upon reasonable request.
REFERENCES
- 1.National Institute on Drug Abuse. The Science of Addiction. At: https://www.drugabuse.gov/publications/drugs-brains-behavior-science-addiction/drug-misuse-addiction. Accessed: January 13, 2020.
- 2.Kelly JF, Wakeman SE, Saitz R. Stop talking “dirty”: clinicians, language, and quality of care for the leading cause of preventable death in the United States. A J Med. 2015;128(1):8–9. [DOI] [PubMed] [Google Scholar]
- 3.Regier DA, Kuhl EA, Kupfer DJ. The DSM-5: classification and criteria changes. World Psychiatry. 2013;12(2):92–98. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.The Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM–5). American Psychiatric Association; 2013. https://www.psychiatry.org/psychiatrists/practice/dsm [Google Scholar]
- 5.Substance Abuse and Mental Health Services Administration. Impact of the DSM-IV to DSM-5 changes on the national survey on drug use and health [Internet]. Rockville (MD): Substance Abuse and Mental Health Services Administration (US); 2016, Substance Use Disorders. Available from: https://www.ncbi.nlm.nih.gov/books/NBK519702/. Accessed October 14, 2020. [PubMed] [Google Scholar]
- 6.McLellan AT, Lewis DC, O’Brien CP, Kleber HD. Drug dependence, a chronic medical illness: implications for treatment, insurance, and outcomes evaluation. JAMA. 2000;284(13):1689–1695. [DOI] [PubMed] [Google Scholar]
- 7.GBD 2016 Alcohol Collaborators. Alcohol use and burden for 195 countries and territories, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet. 2018;392:1015–1035. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.United Nations Office on Drugs and Crime. Drug Use and Health Consequences. At: https://wdr.unodc.org/wdr2020/field/WDR20_Booklet_2.pdf. Accessed September 29, 2020.
- 9.Meyer A, LeClair C, McDonald JV. Prescription opioid prescribing in Western Europe and the United States. Rhode Island Med J (2013). 2020 Mar 2;103(2):45–48. [PubMed] [Google Scholar]
- 10.European Drug Report 2019: Trends and Development. At: https://www.emcdda.europa.eu/publications/edr/trends-developments/2019_en. Accessed October 05, 2020.
11. Dole VP. What have we learned from three decades of methadone maintenance treatment? Drug Alcohol Rev. 1994;13(1):3–4.
12. Substance Abuse and Mental Health Services Administration (US); Office of the Surgeon General (US). Facing Addiction in America: The Surgeon General’s Report on Alcohol, Drugs, and Health [Internet]. Washington (DC): US Department of Health and Human Services; 2016. Chapter 4, Early Intervention, Treatment, and Management of Substance Use Disorder. Available from: https://www.ncbi.nlm.nih.gov/books/NBK424859/. Accessed October 12, 2020.
13. American Psychiatric Association. What is Addiction? At: https://www.psychiatry.org/patients-families/addiction/what-is-addiction. Accessed October 12, 2020.
14. National Institute on Drug Abuse. Commonly Used Terms in Addiction Science. At: https://www.drugabuse.gov/publications/media-guide/glossary. Accessed October 14, 2020.
15. Laudet AB. The road to recovery: where are we going and how do we get there? Empirically driven conclusions and future directions for service development and research. Subst Use Misuse. 2008;43(12–13):2001–2020.
16. White WL. Addiction recovery: its definition and conceptual boundaries. J Subst Abuse Treat. 2007;33:229–241.
17. Vilsaint CL, Hoffman LA, Kelly JF. Perceived discrimination in addiction recovery: assessing the prevalence, nature, and correlates using a novel measure in a U.S. national sample. Drug Alcohol Depend. 2019;107667.
18. Best D, Laudet AB. The Potential of Recovery Capital. London, UK: Royal Society for the Arts; 2010.
19. United Nations Office on Drugs and Crime (UNODC). International Standards for the Treatment of Drug Use Disorders. 2017. At: https://apps.who.int/iris/bitstream/handle/10665/331635/9789240002197-eng.pdf. Accessed October 4, 2020.
20. Council of European Dentists. 2019 CED Annual Report. At: https://cedentists.eu. Accessed October 5, 2020.
21. European Commission. Public Health Best Practice Portal. At: https://webgate.ec.europa.eu/dyna/bp-portal/. Accessed October 5, 2020.
22. Substance Abuse and Mental Health Services Administration-Health Resources and Services Administration Center for Integrated Health Solutions. At: https://www.integration.samhsa.gov/clinical-practice/sbirt. Accessed May 6, 2019.
23. Substance Abuse and Mental Health Services Administration. About Screening, Brief Intervention, and Referral to Treatment (SBIRT). At: https://www.samhsa.gov/sbirt/about. Accessed October 2, 2019.
24. American Dental Association. Official Policy and Statements on Substance Use Disorders. At: https://www.ada.org/en/advocacy/current-policies/substance-use-disorders. Accessed January 13, 2020.
25. Levy B, Paulozzi L, Mack KA, Jones CM. Trends in opioid analgesic-prescribing rates by specialty, U.S., 2007–2012. Am J Prev Med. 2015;49(3):409–413.
26. Volkow ND, et al. Characteristics of opioid prescriptions in 2009. JAMA. 2011;305(13):1299–1301.
27. Schroeder AR, Dehghan M, Newman TB, Bentley JP, Park KT. Association of opioid prescriptions from dental clinicians for US adolescents and young adults with subsequent opioid use and abuse. JAMA Intern Med. 2018;179(2):145–152.
28. O’Neil M. The ADA Practical Guide to Substance Use Disorders and Safe Prescribing. Hoboken, NJ: John Wiley and Sons Inc; 2015. doi:10.1002/9781119062738
29. Bennett JH, Beeley JA, Anderson P, et al. A core curriculum in the biological and biomedical sciences for dentistry. Eur J Dent Educ. 2020;00:1–9.
30. Keith DA, Kulich RJ, Bharel M, et al. Massachusetts dental schools respond to the prescription opioid crisis: a statewide collaboration. J Dent Educ. 2017;81(12):1388–1394.
31. Odusola F, Smith JL, Bisaga A, et al. Innovations in pre-doctoral education: influencing attitudes and opinions about patients with substance use disorder. J Dent Educ. 2020;84(5):578–585.
32. Brondani MA, Pattanaporn K. Integrating issues of substance abuse and addiction into the predoctoral dental curriculum. J Dent Educ. 2013;77(9):1108–1117.
33. Kramer GA, Albino JEN, Andrieu SC, et al. Dental student assessment toolbox. J Dent Educ. 2009;73(1):12–35.
34. Henzi D, Davis E, Jasinevicius R, et al. North American dental students’ perspectives about their clinical education. J Dent Educ. 2006;70(4):361–377.
35. Howard KM, Stewart T, Woodall W, et al. An integrated curriculum: evolution, evaluation, and future direction. J Dent Educ. 2009;73(8):962–971.
36. Gerhard-Szep S, Guntsch A, Pospiech P, et al. Assessment formats in dental medicine: an overview. GMS J Med Educ. 2016;33(4):1–43.
37. American Dental Association. Dental Licensure Objective Structured Clinical Examination. At: https://www.ada.org/en/education-careers/dental-licensure-objective-structured-clinical-examination?utm_source=adaorg&utm_medium=vanityurldlosce. Accessed January 14, 2020.
38. Mossey P, Newton J, Stirrups D. Scope of the OSCE in the assessment of clinical skills in dentistry. Br Dent J. 2001;190(6):323–326.
39. Puryer J. Dental undergraduate views of Objective Structured Clinical Examinations (OSCEs): a literature review. Dent J. 2016;4(1):6.
40. Schoonheim-Klein ME, Habets LLMH, Aartman IHA, et al. Implementing an Objective Structured Clinical Examination (OSCE) in dental education: effects on students’ learning strategies. Eur J Dent Educ. 2006;10(4):226–235.
41. McKenzie C, Tilashalski K, Peterson DT, White ML. Effectiveness of standardized patient simulations in teaching clinical communication skills to dental students. J Dent Educ. 2017;81(10):1179–1186.
42. Rudland J, Wilkinson T, Smith-Han K, Thompson-Fawcett M. “You can do it late at night or in the morning. You can do it at home, I did it with my flatmate”: the educational impact of an OSCE. Med Teach. 2008;30(2):206–211.
43. Hodges B, Regehr G, Hanson M, McNaughton N. Validation of an objective structured clinical examination in psychiatry. Acad Med. 1998;73(8):910–912.
44. Schoonheim-Klein M, Gresnigt C, van der Velden U. Influence of dental education in motivational interviewing on the efficacy of interventions for smoking cessation. Eur J Dent Educ. 2012;17(1).
45. Swanson DB, van der Vleuten CPM. Assessment of clinical skills with standardized patients: state of the art revisited. Teach Learn Med. 2013;25(Suppl 1).
46. Motivational Interviewing Competency Assessment. At: http://micacoding.com. Accessed January 20, 2020.
47. Moyers TB, Rowell LN, Manuel JK, et al. The Motivational Interviewing Treatment Integrity Code (MITI 4): rationale, preliminary reliability and validity. J Subst Abuse Treat. 2016;65:36–42.
48. Schiekirka S, Feufel MA, Herrmann-Lingen C, Raupach T. Evaluation in medical education: a topical review of target parameters, data collection tools and confounding factors. Ger Med Sci. 2015;13(15).
49. Nunnally JC, Bernstein IH. Psychometric Theory (3rd ed.). New York: McGraw-Hill; 1994.
50. Cömert M, Zill JM, Christalle E, et al. Assessing communication skills of medical students in Objective Structured Clinical Examinations (OSCE) - a systematic review of rating scales. PLoS One. 2016;11(3).
51. DiClemente CC, Crouch TB, Norwood AEQ, et al. Evaluating training of Screening, Brief Intervention, and Referral to Treatment (SBIRT) for substance use: reliability of the MD3 SBIRT coding scale. Psychol Addict Behav. 2015;29(1):218–224.
52. Tavakol M, Pinner G. Enhancing objective structured clinical examinations through visualization of checklist scores and global rating scale. Int J Med Educ. 2018;9:132–136.
53. van der Vleuten CPM, Cohen-Schotanus J. Standard setting. In: Patil N, Chan K, eds. Assessment in Medical and Health Sciences Education. Hong Kong: Li Ka Shing Faculty of Medicine; 2009:62–71.
54. Kaufman DM, Mann KV, Muijtjens AM, et al. A comparison of standard-setting procedures for an OSCE in undergraduate medical education. Acad Med. 2000;75(3):267–271.
55. Kramer A, Muijtjens A, Jansen K, et al. Comparison of a rational and an empirical standard setting procedure for an OSCE. Med Educ. 2003;37(2):132–139.
56. Brannick MT, Erol-Korkmaz HT, Prewett M. A systematic review of the reliability of objective structured clinical examination scores. Med Educ. 2011;45(12):1181–1189.
57. Streiner DL, Norman GR, Cairney J. Health Measurement Scales: A Practical Guide to Their Development and Use. Oxford: Oxford University Press; 2015. doi:10.1093/acprof:oso/9780199231881.003.0012
58. Louangrath PI, Sutanapong C. Validity and reliability of survey scales. Int J Res Methodol Soc Sci. 2018;4(3):99–114.
59. Kane M, Bridgeman B. Research on validity theory and practice at ETS. Advanc Hum Assess. 2017:489–552.
60. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):7–16.
61. Kane MT, Clauser BE, Kane J. A validation framework for credentialing tests. Test Profess. 2017:21–40.
62. Hodges B. Validity and the OSCE. Med Teach. 2003;25(3):250–254.
63. Kerlinger FN. Foundations of Behavioral Research (2nd ed.). New York: Holt, Rinehart and Winston; 1973. https://catalogue.nla.gov.au/Record/456567
64. Ghasemi A, Zahediasl S. Normality tests for statistical analysis: a guide for non-statisticians. Int J Endocrinol Metab. 2012;10(2):486–489.
65. Schoonheim-Klein M, Muijtjens A, Habets L, et al. On the reliability of a dental OSCE, using SEM: effect of different days. Eur J Dent Educ. 2008;12:131–137.
66. Lord SP, Sheng E, Imel ZE, et al. More than reflections: empathy in motivational interviewing includes language style synchrony between therapist and client. Behav Ther. 2015;46(3):296–303.
67. White LL, Gazewood JD, Mounsey AL. Teaching students behavior change skills: description and assessment of a new motivational interviewing curriculum. Med Teach. 2007;29(4).
68. Micceri T. The unicorn, the normal curve, and other improbable creatures. Psychol Bull. 1989;105(1):156–166.
69. Regehr G, MacRae H, Reznick R, Szalay D. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med. 1998;73:993–997.
70. Cohen DS, Colliver JA, Marcy MS, et al. Psychometric properties of a standardized-patient checklist and rating-scale form used to assess interpersonal and communication skills. Acad Med. 1996;71(1):S87–S89.
71. Sim JH, Abdul Aziz YF, Vijayanantha A, et al. A closer look at checklist scoring and global rating for four OSCE stations: do the scores correlate well? Educ Med J. 2015;7(2).
72. Carey JA, Madill A, Manogue M. Communications skills in dental education: a systematic research review. Eur J Dent Educ. 2010;14(2):69–78.
73. Terregino CA, Copeland HL, Sarfaty SC, et al. Development of an empathy and clarity rating scale to measure the effect of medical improv on end-of-first-year OSCE performance: a pilot study. Med Educ Online. 2019;24(1):1–9.
74. Boud D, Falchikov N. Aligning assessment with long-term learning. Assess Eval High Educ. 2006;31(4):399–413.
75. UMMC ACT Center for Tobacco Treatment, Education and Research. Tobacco Treatment Specialist Training. At: http://www.act2quit.org/education/certified-tobacco-treatment-specialist-.asp. Accessed October 18, 2020.
76. NAADAC. National Certificate in Tobacco Treatment Practice. At: https://www.naadac.org/NCTTP. Accessed October 18, 2020.
77. Agency for Healthcare Research and Quality. Implementation Quick Start Guide: Warm Handoff. At: https://www.ahrq.gov/sites/default/files/wysiwyg/professionals/quality-patient-safety/patient-family-engagement/pfeprimarycare/warm-handoff-qsg-brochure.pdf. Accessed October 18, 2020.
78. FitzGerald C, Hurst S. Implicit bias in healthcare professionals: a systematic review. BMC Med Ethics. 2017;18(19).
79. Brondani MA, Alan R, Donnelly L. Stigma of addiction and mental illness in healthcare: the case of patients’ experiences in dental settings. PLoS One. 2017;12(5).
80. Ratycz MC, Papadimos TJ, Vanderbilt AA. Addressing the growing opioid and heroin abuse epidemic: a call for medical school curricula. Med Educ Online. 2018;23(1):1–6.