Evid Based Med. 2017 Jul 22;22(4):123–131. doi: 10.1136/ebmed-2017-110714

Mental health literacy in primary care: Canadian Research and Education for the Advancement of Child Health (CanREACH)

Eden S N McCaffrey 1, Samuel Chang 2, Geraldine Farrelly 3, Abdul Rahman 2,3, David Cawthorpe 2
PMCID: PMC5537558  PMID: 28735276

Abstract

The effectiveness of a continuing education programme in paediatric psychopharmacology designed for primary healthcare providers was objectively measured, based on the assumption that training would lead to measurable changes in referral patterns and in established clinical measures of referred patients. Using established, valid and reliable measures of clinical urgency embedded in a regional healthcare system since 2002, the referrals to child and adolescent psychiatric services of physicians who participated in the training (n=99) were compared pretraining and post-training, and with referrals from non-participating/untrained physicians (n=7753) over the same time period. Referrals were analysed for evidence of change based on frequencies and measures of clinical urgency. Participants in the training programme also completed standardised baseline and outcome self-evaluations. Congruent with participants' self-reported improvements in knowledge and practice, analysis of referral frequency and the clinical urgency of referrals to paediatric psychiatric services over the study period indicated that trained physicians made more appropriate referrals (clinically more severe) and reduced referrals to emergency services. Quantitative clinical differences, recorded by intake clinicians blind to the study group designation of referrals, were observed within the trained physician group pretraining and post-training, and between the trained physician group and the unexposed physician group. The results illustrate a novel model for objectively measuring change among physicians based on training in paediatric mental health management.

Keywords: clinical pharmacology, mental health, community child health

Introduction

Continuing medical education (CME) is grounded in the belief that with increased physician knowledge comes better physician practice, which in turn leads to improved patient outcomes. A measurable change in patients' health as a function of CME is, however, rarely demonstrated. Most measures of CME effect focus on physician self-reported uptake of CME content.1–6 Yet it is well documented that self-report as a consequence of CME participation invariably suffers from the Hawthorne effect,7 wherein self-reported effects are systematically biased simply through participation. There is little, if any, research employing independent, objective measures of the effect of a CME programme on physicians' practice.8

The gaps between perceived, actual and ideal performance in healthcare are real. For example, a recent meta-analysis found that most studies fail to show a significant correlation between CME and health outcomes.5 While research has focused on improving physician practice through an examination of various styles of CME, demonstrating that smaller interactive workshops show greater improvements than didactic sessions,5 it has, to a lesser degree, examined CME effects on physician practice in relation to patient outcomes. When quantifiable, outcomes that are the result of an action or activity1 are more objective, rendering them adequate and unbiased assessments of CME.6 In fact, literature in the area of learning and change has continually called for more rigorous objective measures.3 9

Referral patterns and quality represent important indicators of patient care that reflect professional practice behaviour and, compared with self-report of behavioural change, are relatively objective targets, such as clinical outcomes, for measuring CME effect.10 Outcomes, defined as the result of an action or activity, are salient to commentary on CME effectiveness because they may be quantified and are more objective.1 Furthermore, outcomes that improve healthcare provider performance and the healthcare of the patients they serve are regarded as the most important outcomes of all.3

We tested the hypothesis that a well-developed CME programme would lead to a change in both the quality and the quantity of referrals made by physicians from primary care services to specialised mental health services. The CME programme tested in this study was the Canadian implementation of an established minifellowship training programme, titled Patient-Centered Mental Health in Primary Care (PPP), developed by The Resource for Advancing Children's Health (REACH) Institute in the USA (http://thereachinstitute.org). In contrast to typical CME programmes, which generally do not measure the effect of the training on care providers' practices,2 The REACH Institute's PPP is based on scientific theories and methods of behavioural change as well as adult education research. Accordingly, it uses dynamic and interactive teaching techniques over a 3-day workshop, in which the learners develop individualised plans for practice change. The content of this training seeks to teach participants how to correctly identify and differentiate paediatric mental health problems as well as create and implement treatment plans specific to paediatric mental health. Following the face-to-face training workshop, a 6-month cycle of biweekly, hour-long, small group consultation calls occurs, wherein participants take turns presenting challenging cases from their own practices and are assisted by their peers (group size of 10–15 participants) and two trained faculty to address the assessment and management challenges of the case examples through the application of the workshop teachings.

Employing a quasi-experimental pre–post, case comparison design,11 we tested the hypothesis that the Canadian Research and Education for the Advancement of Child Health (CanREACH; online supplementary file 1) PPP minifellowship CME would lead to measurable changes in referral practices as captured by referral frequency and measures of clinical urgency. Participants in this study were physicians, primarily practising in primary care, family medicine and paediatrics, who had completed the CanREACH PPP training. In addition to the standard self-report surveys prescribed by the REACH Institute, and as the main focus of this study, we compared the standard-of-care clinical screening data gathered on referral, admission and discharge from the regional access and intake clinical database,9 12 13 thus permitting examination of the relationship between the CME training and the clinical measures of clients referred to specialised mental health services. This approach represents an objective (blind) measurement of the impact of the CME training intervention on the physicians' subsequent referral practices.

Methods

The CanREACH PPP 6-month minifellowship CME was delivered in its entirety, including both the face-to-face training workshop and the teleconference calls, twice (once to each of two cohorts of physician participants; n=99) from May 2015 to June 2016. This study summarises data collected by the CanREACH programme from two distinct sources: (1) participant completion of self-report surveys over the course of the 6-month CanREACH PPP training and (2) referral data collected by intake workers blind to physician participation in the CanREACH PPP training and subsequently extracted from the regional access and intake system, including demographics, clinical measures (Western Canada Waitlist Child Mental Health Priority Criteria Score (WCWL-CMH-PCS) and Measurable Treatment Plan form data9 12 13) and system data (eg, intake outcome, repeated admissions).

Specific to the self-report surveys, the PPP minifellowship, licensed from The REACH Institute, comes with a proprietary standardised assessment survey prescribed by the REACH Institute for independent licensees such as CanREACH to use (readers who desire more information may contact The REACH Institute). Since the content of the PPP is specific to identification, differentiation and treatment of paediatric mental health problems, these theory-guided self-report measures assess physicians' attitudes and beliefs about the application of diagnostic and treatment skills taught during training, including self-efficacy beliefs (the degree of difficulty the physician perceives will be entailed in performing the new behaviours) and their stated intentions to perform (or not perform) the new behaviours. By way of these surveys, participants self-rate their knowledge (18 questions), comfort (18 questions) and practices (29 questions) related to the assessment, diagnosis, treatment and management of child mental health difficulties. These self-report measures were completed by CanREACH participants at four distinct times: before the course (pre); on completion of the 3-day workshop (post-1); on completion of the full 6-month training, which includes the telephone conference component (post-2); and 3 months following completion of the full training (post-3). Only summary results are reported because the self-report measures are not the main focus of this study.

Post-training outcomes in this study were measured through the clinical measures of physician referrals extracted from our regional access and intake system. This database tracks the sources and clinical measures of referrals to the Child and Adolescent Addictions Mental Health and Psychiatry Program in Calgary, Alberta, allowing the clinical measures of referrals from consenting physicians who participated in the CanREACH PPP training (95% consented) to be extracted and compared both with those physicians' pretraining referrals and with referrals from physicians who did not participate in the PPP training programme but nevertheless referred to child and adolescent mental health services in Calgary during the study period. Data collection of all physicians' referrals spanned the period from 1 year before the training programme commenced to at least 6 months after training. Clinical referral data included demographics (age and sex) and system indicators (eg, emergent, scheduled and repeat admissions, length of stay, wait times). In addition to demographic and system variables, clinical variables related to urgency captured on referral were based on the WCWL-CMH-PCS13 and on the admission and discharge variables captured in the measurable treatment plan (function and problem severity).12 14 15
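
The extraction step described above can be illustrated with a short sketch. This is a simplified assumption of how the referral window and physician groups might be derived, not the authors' code; the file names, column names and the single shared training-start date (in practice, each physician's own cohort date would apply) are hypothetical.

```python
# Hypothetical sketch: constrain referrals to the study window (1 year before
# training start to at least 6 months post-training) and label physician groups.
import pandas as pd

TRAINING_START = pd.Timestamp("2015-05-01")  # first cohort start (from the text)
FOLLOW_UP_END = pd.Timestamp("2016-12-31")   # assumed cut-off covering >=6 months post-training

referrals = pd.read_csv("regional_intake_referrals.csv", parse_dates=["referral_date"])
trained_ids = set(pd.read_csv("canreach_participants.csv")["physician_id"])

# Keep only referrals inside the study window.
window = referrals[
    (referrals["referral_date"] >= TRAINING_START - pd.DateOffset(years=1))
    & (referrals["referral_date"] <= FOLLOW_UP_END)
].copy()

def label_group(row):
    """Assign untrained, pretraining or post-training (simplified to one training date)."""
    if row["physician_id"] not in trained_ids:
        return "untrained"
    return "pretraining" if row["referral_date"] < TRAINING_START else "post_training"

window["physician_group"] = window.apply(label_group, axis=1)
print(window["physician_group"].value_counts())
```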

Data analyses

Self-report survey measures

Among CanREACH-trained participants, the self-report measures described above were examined using one-way analysis of variance, comparing participants' ratings pretraining, immediately post-workshop and at 6 months, to examine the effect of change over time.
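
As a rough illustration of this analysis, the sketch below runs a one-way ANOVA on a single self-rated score across the three time points. It is not the authors' code; the file and column names are hypothetical stand-ins for the proprietary REACH survey items.

```python
# Hypothetical sketch: one-way ANOVA on a self-rated knowledge score across
# the pretraining, post-workshop and 6-month time points.
import pandas as pd
from scipy import stats

surveys = pd.read_csv("canreach_selfreport.csv")  # hypothetical export

pre = surveys.loc[surveys["timepoint"] == "pre", "knowledge_score"]
post1 = surveys.loc[surveys["timepoint"] == "post_workshop", "knowledge_score"]
post2 = surveys.loc[surveys["timepoint"] == "post_6_month", "knowledge_score"]

f_stat, p_value = stats.f_oneway(pre, post1, post2)
print(f"One-way ANOVA across time points: F = {f_stat:.2f}, p = {p_value:.4f}")
```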

Clinical measures of patient referrals

The main hypothesis was tested using pre–post measures within the trained group of physicians, and by comparing the pretraining and post-training referrals with those of the untrained group of physicians (tables 1 and 2). For the referred patients' clinical measure data (system variables, demographics, the measurable treatment plan and items measuring clinical urgency (WCWL-CMH-PCS)), we provide descriptive statistics by physician group, comparing 95% CIs to examine the effect of training on referral frequency and on the clinical measures of referrals made by participating physicians relative to these same physicians before training and to non-participating physicians over the same time period. In addition, we used multinomial logistic regression analysis to develop a reduced model of the specific changes in clinical measures hypothesised to reflect the effect of training (table 2). Due to potential multicollinearity (a high degree of shared variance arising from correlation among independent variables measuring a common construct, psychopathology), each independent variable was modelled separately and included in the summary table (table 2) if significant in bivariate analysis. The results were interpreted in terms of their relationship to the training content criteria. The clinical measures analysis provided evidence in support of training-related practice change, measured using established instruments completed by third-party clinicians who were blind to which physicians received the training intervention. The total number of data points varies across system, demographic and WCWL-CMH-PCS variables, which are gathered on internal referrals and readmissions. The smallest reported sample sizes are large enough, in terms of power, to be representative of each physician group.
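
To make the modelling step concrete, the following sketch shows one way the per-variable multinomial logistic regressions summarised in table 2 could be run, with the post-training group as the baseline category. It is an assumed illustration, not the authors' code, and the file and variable names are hypothetical.

```python
# Hypothetical sketch: physician group (post-training / pretraining / untrained)
# as the outcome, post-training coded 0 so it is the baseline comparison group,
# and each clinical variable modelled separately to avoid multicollinearity.
import numpy as np
import pandas as pd
import statsmodels.api as sm

referrals = pd.read_csv("referral_clinical_measures.csv")  # hypothetical export

group_codes = {"post_training": 0, "pretraining": 1, "untrained": 2}
y = referrals["physician_group"].map(group_codes)

# Each independent variable is modelled on its own and retained for the summary
# table only if significant in the bivariate analysis.
for var in ["danger_to_self", "disruptive_behaviour", "psychiatric_comorbidity"]:
    X = sm.add_constant(referrals[[var]])
    fit = sm.MNLogit(y, X).fit(disp=False)
    odds_ratios = np.exp(fit.params)  # one column per non-baseline group
    print(var)
    print(odds_ratios.round(2))
    print(fit.pvalues.round(4))
```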

Table 1.

Description of independent variables by training groups

Variables (Untrained, Pretraining and Post-training groups)
For each group: Obs, Mean/Prop. (LCI to UCI)
System
Wait time (days) 5515 26.69
(25.76 to 27.63)
482 24.70
(22.28 to 27.11)
407 (27.96 to 34.67)
Length of stay (days) 4953 116.85*
(112.86 to 120.84)
453 158.90
(145.85 to 171.95)
270 (78.21 to 103.43)
Repeat admissions 14 727 6.76*
(6.70 to 6.83)
1673 7.60
(7.42 to 7.78)
1339 7.25*
(7.04 to 7.46)
Demographics
Age 14 727 11.78*
(11.71 to 11.85)
1673 9.64
(9.44 to 9.85)
1339 9.80*
(9.58 to 10.02)
Sex (0 = female) 14 727 0.521*
(0.513 to 0.529)
1673 0.383
(0.36 to 0.407)
1339 0.397*
(0.37 to 0.423)
Family composition (single parent) (0 = biological/step parent) 8055 0.201
(0.192 to 0.21)
972 0.2
(0.175 to 0.226)
810 0.18
(0.16 to 0.21)
Family composition (foster care/ward) (0 = biological/step parent) 8055 0.16
(0.15 to 0.17)
972 0.14
(0.12 to 0.16)
810 0.16
(0.14 to 0.19)
MTP
Admission CGAS 3586 48.27
(47.91 to 48.62)
340 46.98
(45.87 to 48.07)
256 46.29*
(44.89 to 47.69)
Discharge CGAS 3302 55.37
(54.91 to 55.82)
317 55.66
(54.25 to 57.07)
179 54.89
(52.96 to 56.82)
Admission strength concern 2245 16.88
(16.55 to 17.20)
267 16.12
(15.11 to 17.13)
119 16.65
(14.97 to 18.32)
Discharge strength concern 2095 32.93
(32.26 to 33.60)
260 35.64
(33.56 to 37.73)
112 35.30
(31.78 to 38.83)
Number of provisional comorbid diagnoses 14 727 0.20
(0.19 to 0.21)
1673 0.19
(0.16 to 0.21)
1339 0.23
(0.20 to 0.27)
WCWL-CMH-PCS items
Danger to self 2236 0.54
(0.5 to 0.59)
204 0.45
(0.30 to 0.60)
200 0.29*
(0.22 to 0.36)
Danger to others 2236 0.03
(0.023 to 0.039)
204 0.06
(0.03 to 0.098)
200 0.03
(0.01 to 0.05)
Psychotic symptoms 2236 0.08
(0.05 to 0.10)
204 0.14
(0.03 to 0.24)
200 0.07
(0.02 to 0.12)
Global age-appropriate developmental progress 2236 0.05
(0.04 to 0.06)
204 0.08
(0.04 to 0.12)
200 0.08
(0.04 to 0.11)
CGAS on referral 2236 5.96
(5.84 to 6.09)
204 6.18
(5.72 to 6.63)
200 6.58*
(6.20 to 6.96)
Internalised symptoms 2236 5.09
(4.95 to 5.24)
204 5.19
(4.72 to 5.65)
200 5.00
(4.57 to 5.43)
Externalised symptoms/disruptive behaviour 2236 0.72*
(0.67 to 0.77)
204 1.10
(0.926 to 1.27)
200 1.05*
(0.88 to 1.2)
Comorbid medical conditions 2236 0.24
(0.21 to 0.26)
204 0.25
(0.17 to 0.33)
200 0.24
(0.16 to 0.31)
Comorbid psychiatric conditions 2236 0.65*
(0.61 to 0.70)
204 0.90
(0.72 to 1.08)
200 1.03*
(0.89 to 1.17)
Harmful substance use/misuse 2236 0.04
(0.03 to 0.05)
204 0.05
(0.02 to 0.09)
200 0.04
(0.01 to 0.06)
Significant biological family history of mental illness 2236 1.45
(1.42 to 1.49)
204 1.47
(1.35 to 1.59)
200 1.46
(1.34 to 1.58)
School and/or work 2236 0.11
(0.10 to 0.13)
204 0.13
(0.08 to 0.17)
200 0.09
(0.05 to 0.13)
Social/friendships/community functioning 2236 0.48*
(0.46 to 0.50)
204 0.57
(0.51 to 0.64)
200 0.58*
(0.51 to 0.65)
Does the child/adolescent (patient) have problems in the context of the home? 2236 2.90*
(2.83 to 2.97)
204 3.27
(3.04 to 3.49)
200 2.8*
(2.56 to 3.04)
Family functioning or factors affecting child 2236 0.48
(0.46 to 0.50)
204 0.42
(0.35 to 0.49)
200 0.36*
(0.29 to 0.42)
Prognosis without further intervention 2236 3.48
(3.32 to 3.64)
204 3.67
(3.14 to 4.19)
200 3.54
(3.01 to 4.06)
Degree of likely benefit with further intervention 2236 8.29*
(8.18 to 8.40)
204 7.68
(7.36 to 7.99)
200 8.10
(7.74 to 8.46)
Reason for referral (base comparison for each category: 0 = externalising behaviour)
Internalising/emotional issues 14 727 0.04*
(0.04 to 0.05)
1673 0.02
(0.02 to 0.03)
1339 0.04
(0.03 to 0.05)
Developmental/organic concerns 14 727 0.03
(0.025 to 0.03)
1673 0.03
(0.025 to 0.043)
1339 0.04*
(0.03 to 0.05)
Eating issues 14 727 0.046*
(0.04 to 0.05)
1673 0.01
(0.01 to 0.03)
1339 0.02*
(0.01 to 0.03)
Adjustment problems 14 727 0.13*
(0.125 to 0.136)
1673 0.23
(0.22 to 0.26)
1339 0.22*
(0.20 to 0.25)
School/learning/attention problems 14 727 0.002
(0.001 to 0.003)
1673 0.003
(0.001 to 0.007)
1339 0.004
(0.002 to 0.01)
Social/family issues 14 727 0.009*
(0.007 to 0.011)
1673 0.002
(0 to 0.005)
1339 0.004
(0.001 to 0.009)
Thought disturbances/perceptual issues 14 727 0.33*
(0.327 to 0.342)
1673 0.24
(0.22 to 0.26)
1339 0.25*
(0.22 to 0.27)
Harmful behaviour/thoughts to self 14 727 0.289*
(0.28 to 0.296)
1673 0.32
(0.3 to 0.35)
1339 0.26*
(0.23 to 0.28)
Harmful behaviour/thoughts to others 14 727 0.055*
(0.052 to 0.059)
1673 0.072
(0.06 to 0.085)
1339 0.10**
(0.086 to 0.13)
Addictive or legal issues 14 727 0.039
(0.035 to 0.042)
1673 0.034
(0.026 to 0.044)
1339 0.053
(0.042 to 0.066)
Other 14 727 0.01
(0.008 to 0.012)
1673 0.007
(0.003 to 0.012)
1339 0.01
(0.006 to 0.017)
Emergent (scheduled=0) 14 727 0.004
(0.003 to 0.005)
1673 0.002
(0.001 to 0.006)
1339 0.001
(0.00001 to 0.004)

*In the post-training column, 95% CIs for the post-training group differ from those of the pretraining or untrained group.

*In the untrained column, 95% CIs for the untrained group differ from those of the pretraining group.

CGAS, Children's Global Assessment Scale; LCI, lower confidence interval; MTP, measurable treatment plan; UCI, upper confidence interval; WCWL-CMH-PCS, Western Canada Waitlist Child Mental Health Priority Criteria Score.

Table 2.

Summary of logistic regression analysis of bivariate comparisons of variables representing significant differences between physician groups

Variable OR SE t Value Pr(t) Lower 95% CI Upper 95% CI
Group: Pretraining
Active admission 1.73 0.34 2.83 0.005 1.18 2.53
Psychiatric comorbidity 0.62 0.11 −2.73 0.006 0.45 0.88
Past psychiatric comorbidity 0.59 0.12 −2.56 0.01 0.4 0.88
Group: Untrained
Moderate to extreme safety risk 3.36 1.72 2.37 0.018 1.23 9.16
Danger to self 3.29 1.39 2.81 0.005 1.43 7.54
Impaired family function 1.66 0.25 3.29 0.001 1.23 2.24
Danger to others 1.63 0.28 2.77 0.006 1.15 2.29
Low to mild safety risk 1.35 0.13 3.05 0.002 1.11 1.65
Friend/social/community impairment 0.67 0.1 −2.65 0.008 0.5 0.9
More comorbidity 0.65 0.08 −3.51 0.00001 0.51 0.82
Disruptive behaviour 0.51 0.08 −4.33 0.0001 0.37 0.69
Past comorbidity 0.41 0.06 −5.98 0.0001 0.31 0.55
Functional impairment 0.39 0.15 −2.37 0.018 0.18 0.85
Baseline comparison group: Post-training

Results

PPP self-report summary

Specific to the self-report measures collected, the following overview is provided: 97% of the participating physicians indicated that they would change their practice. The training was highly rated at each of the postmeasures, with the majority of participants rating the course in the top 10% of all CME they had attended. Participants reported increased knowledge and comfort in assessing and diagnosing children's mental health problems, as well as increased knowledge and comfort in treating them. These gains were maintained over baseline at 6 months in 73 of 76 training content areas. Participants indicated that their approach to assessment and diagnosis, as well as to treatment, changed for the better post-training, and at the 6-month point this change was maintained and on some items improved. Details on the type of treatment and management changes indicated improvement in the appropriate use of medications and psychosocial screens.

Within-group and between-group comparisons

Physicians made 17 739 referrals over the study period. Untrained physicians (n=1982) made the most referrals (n=14 727; 7626 female; 58 to emergency services). Participating physicians made fewer referrals, with 92 physicians making 1669 referrals pretraining (641 female; four to emergency services) and 69 physicians making 1338 referrals post-training (531 female; one to emergency services). For participating physicians, the average number of referrals per physician to ambulatory care (scheduled services) was 18 pretraining and 19 post-training, both greater than the average for untrained physicians (mean=8). The trained group referred fewer females (Χ2 Pr=0.00001) than the untrained group. The average referral rate to emergency services among untrained physicians was 3 per 100 referrals, comparable to the rate among trained physicians in the pretraining phase (4 per 100 referrals). Post-training, the average per-physician emergency referral rate was 1 per 100 referrals. While the trends are in the expected directions for emergency services, and referral rates were lower for untrained physicians compared with both trained physician groups, the differences in referral rates were not significant (Χ2, p<0.12).
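
As an illustration of the referral-rate comparison, the sketch below applies a chi-square test to the emergency versus scheduled referral counts quoted in this paragraph; it is an assumed reconstruction, not the authors' analysis code.

```python
# Hypothetical sketch: chi-square test on emergency versus scheduled referral
# counts for the three physician groups, using the counts reported in the text.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: physician group; columns: [emergency referrals, scheduled referrals]
counts = np.array([
    [58, 14727 - 58],  # untrained
    [4, 1669 - 4],     # pretraining
    [1, 1338 - 1],     # post-training
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```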

The physician group comparisons for each independent variable are presented in table 1. Table 1 shows that there were non-overlapping 95% CIs (LCI, UCI) for 19 of the 41 independent variables. The post-training group differed from both pretraining and the untrained groups on four variables: greater wait time, shorter length of stay, more referrals for harmful behaviours and thoughts to others and fewer referrals for harmful behaviours and thoughts to self.
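
The CI comparison in table 1 can be illustrated with a brief sketch that computes a group mean and 95% CI for one variable and reports them side by side; the approach (t-based intervals) and the file and column names are assumptions, not the authors' code.

```python
# Hypothetical sketch: per-group mean and 95% CI for one table 1 variable,
# so that non-overlapping intervals between groups can be inspected.
import pandas as pd
from scipy import stats

referrals = pd.read_csv("referral_clinical_measures.csv")  # hypothetical export

def mean_ci(values, level=0.95):
    """Return (mean, lower, upper) for the given observations using a t interval."""
    values = values.dropna()
    mean = values.mean()
    sem = stats.sem(values)
    lower, upper = stats.t.interval(level, len(values) - 1, loc=mean, scale=sem)
    return mean, lower, upper

for group, data in referrals.groupby("physician_group"):
    mean, lo, hi = mean_ci(data["wait_time_days"])
    print(f"{group}: mean = {mean:.2f}, 95% CI ({lo:.2f} to {hi:.2f})")
```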

The post-training group differed from the untrained group on 14 variables: increased repeat admissions and shorter length of stay, lower age and more females, lower admission function, less danger to self, more impaired function on referral, greater externalising symptoms and disruptive behaviour, greater comorbid psychiatric conditions, greater problems with social/friendships/community functioning, and lower family functioning or more problems affecting the child in the family. This group also had fewer referrals for thought disturbances/perceptual issues, more referrals for adjustment problems and fewer referrals for eating issues.

The untrained group differed from the pretraining group on 15 variables: fewer repeat admissions and shorter length of stay, older individuals and an equal rate of referrals for males and females; on WCWL-CMH-PCS items, lower externalising symptoms/disruptive behaviour, lower social/friendships/community problems, greater problems related to family functioning or factors affecting the child and a greater degree of likely benefit with further intervention; and, for reasons for referral compared with the base category (externalising issues), increased internalising and emotional issues, increased social and family issues, fewer adjustment problems and increased eating issues.

There were differences between the trained and untrained groups at baseline. Differences between the post-training and the untrained groups indicate a shift in the quality of the clinical measure of referrals.

In the multinomial logistic analysis, considering each independent variable separately (due to multicollinearity), the basis of comparison was the post-training physician group (table 2). Compared with the post-training group, the pretraining group was more likely to have active admissions and less likely to have current or past psychiatric comorbidity. Compared with the post-training group, the untrained group was more likely to make referrals with moderate-to-extreme safety risk, danger to self, impaired family function, danger to others and low-to-mild safety risk, and less likely to make referrals with friend/social/community impairment, comorbidity, disruptive behaviour, past comorbidity and functional impairment.

Discussion

This study incorporated one component and three characteristics identified in a systematic review synthesis as key to effective CME.16 For example, the evaluation of programme impact was based on a quasi-experimental design, with pre–post comparisons within the trained group and comparisons between the trained and untrained groups. Independently measured clinical results aligned with physicians' self-reported commitment to change their practice and supported the conclusion that the education programme improved physician performance. The CanREACH education programme also involved multiple exposures, was 6 months in duration and focused on improving patient care. Evidence of improved patient outcomes relates to the measured changes in the referrals of participating physicians, indicating that more severe paediatric mental health clients were being referred to ambulatory tertiary services (eg, table 2 ‘Danger to self’) compared with the same physicians' pretraining referrals and with referrals from untrained physicians. While it is assumed that these changes were associated specifically with participation in the training programme, clinical measurement has been embedded in the registration system since 200212 and is stable with respect to the rate of successful enrolments from a range of referral sources. Additionally, the clinical measurement system has produced similar results (in preparation) based on regional implementation of a school-based mental health literacy programme.17 Hence, it appears that the CanREACH PPP training programme had a direct effect on how physicians practised and, in terms of the changes in referrals, on outcomes for patients.

The observed changes in the clinical measures aligned with the assumptions underpinning the stated hypothesis: physicians participating in the CanREACH PPP CME training programme would make measurable practice changes based on the content of the training, as reflected in their referral practices. The observed changes indicate that CanREACH-trained physicians were better able to identify and manage child and adolescent mental health concerns within their primary care practices, leading to a change in the referrals that they do make to specialised services. It is reasonable to assume that physicians who are not confident in their skill set to manage presenting child and adolescent psychiatric problems within their practices often refer these patients to emergency services. The finding that the trained group made fewer referrals to emergency services, together with the observed changes in the clinical measures, suggests an improvement in practice; however, the emergency referral results are preliminary, the sample is relatively small and the findings will need to bear up over time.

The self-report component of the current study examined participant changes, indicating that the CanREACH PPP minifellowship CME training increased participants' self-reported knowledge and comfort in assessing, diagnosing, managing and treating children's mental health problems. Furthermore, these gains were maintained at 3 and 6 months post-training. Adding to and enriching the self-reported findings, the analysis of referral frequency and clinical measures provided evidence of participant behavioural change in relation to the CME training, with trained participants making more appropriate use of tertiary services (ie, more appropriate referrals and less use of emergency services) compared with their pretraining status and with untrained physicians.

The referral-based frequency and clinical measures in this study addressed the main gap in the CME efficacy literature and present the first measurement of the specific clinical effect of CME training related to relatively ‘soft’ clinical outcomes (eg, not reductions in death or physical morbidity rates or cure of a physical disease such as pneumonia), as measured by independent, blind third parties using standardised assessment. The combined self-report and clinical measure results demonstrated that the Canadian implementation of The REACH Institute PPP minifellowship training is a CME that specifically changes the behaviour of participating physicians. A recent report in the USA estimated the lifetime economic cost of childhood mental health disorders at $2.1 trillion, which, given Canada's roughly tenfold smaller population, would translate to approximately $200 billion in Canada.18 19 Unfortunately, primary care providers consistently identify a lack of essential skills and knowledge concerning mental health problems as one of the most significant barriers to providing assessment and treatment services.20 For the fraction of children whose mental health needs are identified by their primary care provider, few mental health specialists are available to care for them, and fewer still receive the required care.21 Publicly funded mental health professionals are scarce, and privately funded services are often too expensive for families to afford. As a result, many children are either not seen by mental health professionals or treated only by their primary care physician.22 The results of this study suggest a promising next step: the proprietary PPP training programme from The REACH Institute could, if delivered with fidelity, be offered across Canada, and even globally, to improve primary care capacity, especially given the centrality of mental health to lifespan adaptation and the dearth of community-based children's mental health services.21 22

A limitation of this study is the size of the trained physician sample; the locally delivered CME is relatively new and still recruiting, and, while the number of referrals made was adequate for group analysis, a larger sample would better support generalisation of these findings. For example, referrals to emergency services were reduced in the post-training group but only trended towards significance. Of interest is that, even though the groups are disproportionate in size, the untrained and pretraining groups had nearly identical average referral rates to emergency services, while the post-training group's rate was about one-third of these values. A larger sample would resolve this issue, as each group made relatively few referrals to emergency services.

Similarly, there were systematic differences between the untrained group and the pretraining group. The reasons are unknown and require further study to determine whether these differences are a statistical artefact of a large untrained group being compared with small pretraining and post-training groups or reflect self-selection into the CanREACH PPP training. The results presently suggest that the differences may lie in the process of self-selection: these physicians self-select into the CanREACH CME programme and have a pre-existing interest in paediatric mental health. This may also be reflected, in part, in the comparatively higher referral rates of the pretraining and post-training groups to scheduled services.

While this was a one-city study, limiting generalisability to other regions, this paper addresses the consistent call in the continuing education literature for studies to go beyond self-reported acquisition of knowledge and assess actual practice change, reflected in attitudes and objectively measured behaviour. The identified limitation notwithstanding, this paper is intended as an initial study demonstrating proof of concept in objectively measuring practice change. The clinical measures of referrals have proven robust in providing a basis for measuring the impact of CME training. The findings provide evidence supporting the hypothesis that the CanREACH PPP CME training generated objectively measured outcomes in the population referred by trained physicians for tertiary care treatment: a central characteristic of effective CME.

Footnotes

Contributors: ESNM oversaw the entire project and contributed to the conception, drafting, writing and approving the manuscript. DC contributed to the conception and design, the analysis and interpretation of the data, and drafting and reworking the manuscript. SC, GF and AR contributed to the discussion and planning, revisions and final approval of the manuscript.

Competing interests: None declared.

Ethics approval: Conjoint Health Research Ethics Board University of Calgary.

Provenance and peer review: Not commissioned; externally peer reviewed.

References

1. Al-Azri H, Ratnapalan S. Problem-based learning in continuing medical education: review of randomized controlled trials. Can Fam Physician 2014;60:157–65.
2. Bloom BS. Effects of continuing medical education on improving physician clinical care and patient health: a review of systematic reviews. Int J Technol Assess Health Care 2005;21:380–5. doi:10.1017/S026646230505049X
3. Davis D, O'Brien MA, Freemantle N, et al. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282:867–74.
4. Lloyd JS, Abrahamson S. Effectiveness of continuing medical education: a review of the evidence. Eval Health Prof 1979;2:251–80.
5. Mansouri M, Lockyer J. A meta-analysis of continuing medical education effectiveness. J Contin Educ Health Prof 2007;27:6–15.
6. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof 2009;29:1–15. doi:10.1002/chp.20001
7. McCarney R, Warner J, Iliffe S, et al. The Hawthorne effect: a randomised, controlled trial. BMC Med Res Methodol 2007;7:30. doi:10.1186/1471-2288-7-30
8. Umble KE, Cervero RM. Impact studies in continuing education for health professionals. A critique of the research syntheses. Eval Health Prof 1996;19:148–74. doi:10.1177/016327879601900202
9. Cawthorpe D, Wilkes TC, Rahman A, et al. Priority-setting for children's mental health: clinical usefulness and validity of the priority criteria score. J Can Acad Child Adolesc Psychiatry 2007;16:18–26.
10. Honigfeld L, Macary SJ, Grasso DJ. A clinical care algorithmic toolkit for promoting screening and next-level assessment of pediatric depression and anxiety in primary care. J Pediatr Health Care 2017;31:e15–23. doi:10.1016/j.pedhc.2017.01.008
11. Royal K. Robust (and ethical) educational research designs. J Vet Med Educ 2017:1–5. doi:10.3138/jvme.1015-162R1
12. Novick J. A measurable treatment plan: using the Children's Global Assessment and the Problem Severity scales as outcomes of clinical treatment. J Hosp Adm 2016;6:9–15. doi:10.5430/jha.v6n1p9
13. Novick J, Cawthorpe D, McLuckie A, et al. The validation of the Western Canada Waiting List Children's Mental Health-Priority Criteria Score instrument: 2002-2015 results Engaging families into child mental health treatment: updates and special considerations. J Can Acad Child Adolesc Psychiatry 2010;19:182–96.
14. Kiresuk TJ, Sherman RE. Goal attainment scaling: a general method for evaluating comprehensive community mental health programs. Community Ment Health J 1968;4:443–53. doi:10.1007/BF01530764
15. Shaffer D, Gould MS, Brasic J, et al. A children's global assessment scale (CGAS). Arch Gen Psychiatry 1983;40:1228–31. doi:10.1001/archpsyc.1983.01790100074010
16. Cervero RM, Gaines JK. The impact of CME on physician performance and patient health outcomes: an updated synthesis of systematic reviews. J Contin Educ Health Prof 2015;35:131–8. doi:10.1002/chp.21290
17. Milin R, Kutcher S, Lewis SP, et al. Impact of a mental health curriculum on knowledge and stigma among high school students: a randomized controlled trial. J Am Acad Child Adolesc Psychiatry 2016;55:383–91. doi:10.1016/j.jaac.2016.02.018
18. Perou R, Bitsko RH, Blumberg SJ, et al. Mental health surveillance among children – United States, 2005-2011. MMWR Suppl 2013;62:1–35.
19. Smith JP, Smith GC. Long-term economic costs of psychological problems during childhood. Soc Sci Med 2010;71:110–5. doi:10.1016/j.socscimed.2010.02.046
20. Wolraich ML, Bard DE, Stein MT, et al. Pediatricians' attitudes and practices on ADHD before and after the development of ADHD pediatric practice guidelines. J Atten Disord 2010;13:563–72. doi:10.1177/1087054709344194
21. Kutcher S, Davidson S. Mentally ill youth: meeting service needs. CMAJ 2007;176:417. doi:10.1503/cmaj.061694
22. Wilkes TC, Cawthorpe D. The need for more children's mental health services. CMAJ 2008;178:1465–6. doi:10.1503/cmaj.1080017
