Abstract
Background:
A physician's personal and professional characteristics constitute only one, and not necessarily the most important, determining factor of clinical performance. Our study assessed how physician, organizational and systemic factors affect family physicians' performance.
Method:
Our study examined 532 family practitioners who were randomly selected for peer assessment by the College of Physicians and Surgeons of Ontario. A series of multiple regression analyses examined the impact of physician factors (e.g., demographics, certification) on performance scores in five clinical areas: acute care, chronic conditions, continuity of care and referrals, well care and records. A second series of regressions examined the simultaneous effects of physician, organizational (e.g., practice volume, hours worked, solo practice) and systemic factors (e.g., northern practice location, community size, physician-to-population ratio).
Results:
Our study had three key findings: (a) physician factors significantly influence performance but do not appear to be nearly as important as previously thought; (b) organizational and systemic factors have significant effects on performance after the effects of physician factors are controlled; and (c) physician, organizational and systemic factors have varying effects across different dimensions of clinical performance.
Conclusions:
We discuss the implications of our results for performance improvement and physician governance insofar as both need to consider the broader environmental context of medical practice.
A growing literature suggests that a physician's ability to provide good patient care and avoid medical errors depends on multiple factors (Donabedian 1966, 1988; Skinner 2002; Caulford et al. 1994; Ely et al. 1995; Grol 2002; Becher and Chassin 2001; Berwick 2003; Barach and Moss 2001; Chen and Hou 2002) including, but not limited to, their personal and professional characteristics. For example, numerous studies have demonstrated that physician characteristics such as age, sex, education/training credentials and competence (i.e., knowledge, skills and attitudes) may all influence how well physicians perform (Caulford et al. 1994; Ely et al. 1995; Norton et al. 1994, 1997; McAuley et al. 1990; Norman et al. 1993; Jansen et al. 2000). However, it has also been noted that these physician characteristics account for a surprisingly small proportion of total variation observed in performance; other factors are also at play (Donabedian 2000).
For example, some studies have concluded that older physicians do not perform as well as their younger counterparts (Norton et al. 1997; McAuley et al. 1990), a finding that seems to suggest that older physicians are generally less competent. However, it has also been observed that, compared to their younger colleagues, older physicians tend to work in different practice types, such as solo practice, which may offer fewer supports for effective record keeping and workload management; with different patient populations, including older individuals with more complex continuing care needs; and in different geographic locations, which, particularly outside urban areas, may offer less access to required tests, treatments and specialist referrals (Tepper et al. 2005; Donabedian 1992). Thus, it is possible to imagine an older physician who is well trained and competent, but who nonetheless performs poorly according to standard measures because of organizational and systemic problems (Grol 2002; Kopelow et al. 1992; Rethans et al. 2002). Such different interpretations of the sources of poor performance have major implications for designing and targeting policies and interventions aimed at improving and ensuring performance.
In addition to physician characteristics, administrative and organizational structures (Caulford et al. 1994; Grol 2002; Norman et al. 1993; Donabedian 2000; Robinson 1994; Jones 2000; Ram et al. 1998; Long 2002) and financial incentives/disincentives (Robinson 1994; Safran et al. 2000; Morrow et al. 1995; Gillett et al. 2001; Goldfarb 1999; Hopkins 1999; Safran et al. 2002; Geneau et al. 2008), to name a few factors, can all influence clinical performance and clinical behaviour. Yet, performance has traditionally been viewed as devoid of context (LaDuca 1994; LaDuca et al. 1984; Klass 2000, 2007a,b; Geneau et al. 2008), excluding both the context of the patient and the context of the organizational or systemic environments. A reason for this view may be the current lack of a comprehensive and unified conceptual framework of what individual physician performance entails (Klass 2000, 2007b). Such a framework would need to acknowledge the impact of the practice environment, including both the influence of organizational structures and the larger healthcare system as a whole, on the ability of physicians to perform adequately (Grol 2002; Robinson 1994; Klass 2007b; Long 2002).
In a previous paper (Wenghofer et al. 2006b), we explored the importance of the patient context in physician performance and demonstrated that performance is indeed a multidimensional construct rooted in the unique requirements of different types of physician–patient encounters. In this paper, we go on to explore how performance in these dimensions is influenced by physician factors and, additionally, by the broader organizational and systemic contexts, in order to provide a conceptual framework within which physician performance can be studied. To do this, we analyze data from actual performance assessments of general/family practitioners (GP/FPs). We hypothesized that physicians' personal and professional characteristics constitute only one, and not necessarily the most important, determining factor of performance. We consider the implications of our findings for physician governance and performance improvement.
Data and Methods
Performance data
In 1980, the College of Physicians and Surgeons of Ontario (CPSO) initiated a peer assessment program that includes practice visits to a random sample of the province's approximately 28,000 physicians by trained physician assessors (peers). Approximately 2% to 3% of the total practising physician population of Ontario is assessed annually. In this study, we analyzed data from 532 GP/FPs randomly selected for peer assessments conducted between 1997 and 2000 by the CPSO. Since a detailed description of the CPSO's peer assessment process can be found in previously published studies (Norton et al. 1994, 1997, 1998, 2004; Norton and Faulkner 1999; McAuley and Henderson 1984; McAuley et al. 1990; Wenghofer et al. 2006a,b), we note here only that during a visit to a physician's practice, a single peer assessor typically reviews 20 to 30 complete patient records, discusses the findings with the physician and then fills out a 46-item protocol relating to records and care quality. The inter-rater reliability between assessors has been shown to be excellent (kappa = 0.89) (unpublished internal studies from the CPSO). In our previous work (Wenghofer et al. 2006b), we discussed how we computed scores on multiple-item measures of performance from the assessment protocols for five dimensions of GP/FP performance (see Table A1 in the Appendix for detailed definitions):
managing patients with acute conditions and new presentations (acute)
managing patients with chronic conditions (chronic)
providing patients with continuity of care and referrals (continuity)
providing patients with well care and health maintenance (well care)
managing patient records (records)
The calculated scores for each dimension range from a minimum score of 1.0, indicating poor performance, to a maximum score of 4.0, indicating excellent performance (Wenghofer et al. 2006b).
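To make this scoring step concrete, the sketch below computes dimension scores as the unweighted mean of assessor item ratings on the 1-to-4 scale. The item names and the item-to-dimension mapping are hypothetical stand-ins; the actual derivation of the multi-item measures is described in Wenghofer et al. (2006b).

```python
# Minimal sketch: dimension scores as the mean of assessor item ratings
# (1 = poor ... 4 = excellent). Item names and the item-to-dimension mapping
# below are hypothetical; see Wenghofer et al. (2006b) for the actual measures.
import pandas as pd

# Hypothetical assessor ratings for one physician: protocol item -> rating (1-4)
ratings = pd.Series({
    "acute_history": 4, "acute_investigations": 3, "acute_treatment": 4,
    "chronic_monitoring": 3, "chronic_treatment": 4,
    "records_legibility": 3, "records_completeness": 4,
})

# Hypothetical mapping of protocol items to performance dimensions
dimensions = {
    "acute": ["acute_history", "acute_investigations", "acute_treatment"],
    "chronic": ["chronic_monitoring", "chronic_treatment"],
    "records": ["records_legibility", "records_completeness"],
}

# An unweighted mean keeps each dimension score on the same 1.0-4.0 scale
scores = {dim: ratings[items].mean() for dim, items in dimensions.items()}
print(scores)  # approximately {'acute': 3.67, 'chronic': 3.5, 'records': 3.5}
```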
Factors affecting performance
In this paper, we focus on the extent to which variation in physicians' scores along each performance dimension is explained by physician, organizational and systemic factors.
Physician factors. We define physician factors as those attributes of the individual that have traditionally been the object of interest regarding physician performance and competence assessment. Physician factors specifically focus on those features that physicians “bring with them” to any practice setting or community. In our study these include age; sex; years in practice; medical school (North American vs. Other); College of Family Physicians of Canada (CFPC) certification; years practising in current setting (i.e., as a proxy indicator of experience with current patient population); and whether or not the physician had been previously peer assessed by the CPSO.
Organizational factors. We define organizational factors as representative of the characteristics of the immediate setting in which the physician works. These are features that may change if a physician moves from one setting to another. In this study, these include solo practice; episodic care practice/walk-in clinic (WIC); total number of clinical and administrative staff; hours worked per week in primary practice; number of patient visits per week in primary practice; active hospital appointment (yes/no); teaching (yes/no); and focused practice scope (yes/no). The effects of solo (Norman et al. 1993; Shine 2002) and WIC (Jones 2000, 2006; Brown et al. 2002) practice structures were specifically evaluated because both are often considered to have potentially negative effects on practice.
Systemic factors. The systemic factors we have selected are intended to provide a snapshot of several key features associated with the broader community in which a physician's practice is situated. These include access to 911 services at the time of assessment (yes/no); estimated minutes for access to emergency medical services (EMS); availability of four core diagnostic tests (expressed as a proportion); physician-per-1,000-population ratio; and northern practice location (yes/no).
Data for physician, organizational and systemic factors were either extracted from the CPSO registry (which is verified through documentation reviews and extensive credentialling processes) or self-reported by physicians in a pre-assessment questionnaire required as part of the peer assessment process. The physician-per-1,000-population ratio was calculated by linking CPSO registry data for primary practice address to 1996 Canadian Census data at the census subdivision level, which closely mirrors municipal divisions. Northern practice location was indicated, at a coarse level, by the forward sortation area (FSA) of the primary practice postal code (i.e., an FSA beginning with "P").
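As an illustration of how the two geographic measures described above can be derived, the sketch below links a registry-style table to census population counts at the census subdivision level and flags northern practices by the first letter of the primary practice postal code. The data frames, column names and values are hypothetical.

```python
# Minimal sketch of deriving the physician-per-1,000-population ratio and the
# northern-practice flag; all data shown here are hypothetical placeholders.
import pandas as pd

registry = pd.DataFrame({
    "physician_id": [1, 2, 3],
    "postal_code": ["P3E 2C6", "M5S 1A1", "K1A 0A6"],        # hypothetical
    "census_subdivision": ["3553005", "3520005", "3506008"],
})
census = pd.DataFrame({
    "census_subdivision": ["3553005", "3520005", "3506008"],
    "population_1996": [92000, 654000, 323000],               # hypothetical counts
})

# Physicians per 1,000 population in each census subdivision
counts = (registry.groupby("census_subdivision").size()
          .rename("n_physicians").reset_index())
ratios = census.merge(counts, on="census_subdivision", how="left")
ratios["n_physicians"] = ratios["n_physicians"].fillna(0)
ratios["physicians_per_1000"] = 1000 * ratios["n_physicians"] / ratios["population_1996"]

# Northern practice location flagged when the forward sortation area
# (the first part of the postal code) begins with "P"
registry["northern"] = registry["postal_code"].str.upper().str.startswith("P")

print(ratios[["census_subdivision", "physicians_per_1000"]])
print(registry[["physician_id", "northern"]])
```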
Analysis
Descriptive statistics were produced for each of the five dimensional scores. We then conducted two series of multiple regressions. In the first series (the independent model), each of the multiple-item measures of performance was regressed on the physician factors alone; this model therefore estimates the effects of personal and professional characteristics without controlling for organizational or systemic factors. In the second series (the full model), the physician, organizational and systemic factors were entered simultaneously into the regressions. The variance estimates generated by the full regression model indicate the marginal (or net) increase in the variance explained by each group of variables representing the physician, organizational and systemic factors. The variance estimates, regression coefficients and standard errors of the coefficients for each model are reported. Variance inflation factors (VIFs), tolerance and between-predictor correlations were evaluated to determine the level of collinearity in the models. In view of the large number of independent variables entered in our models, we did not explore the potentially large number of interaction effects, as we were concerned about overparameterizing the models given our sample size (Lewis 2007).
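The sketch below illustrates this two-stage strategy with ordinary least squares in statsmodels. The variable names are illustrative, `df` is assumed to hold one row per assessed physician, and the net R2 for each block is computed here as the drop in full-model R2 when that block is removed, which is one common way of estimating the marginal variance explained by a group of variables.

```python
# Minimal sketch of the independent and full regression models; variable
# names are illustrative and `df` is a hypothetical physician-level data frame.
import pandas as pd
import statsmodels.api as sm

physician_vars = ["age", "male", "north_american_school",
                  "years_in_current_practice", "cfpc_certified", "previously_assessed"]
organizational_vars = ["solo_practice", "wic_practice", "staff_count",
                       "patient_visits_per_week", "hospital_appointment",
                       "teaching", "focused_practice"]
systemic_vars = ["access_911", "ems_minutes", "diagnostic_test_availability",
                 "physicians_per_1000", "northern_location"]

def fit_ols(df: pd.DataFrame, outcome: str, predictors: list):
    X = sm.add_constant(df[predictors])
    return sm.OLS(df[outcome], X).fit()

def analyze(df: pd.DataFrame, outcome: str):
    # Independent model: physician factors only
    independent = fit_ols(df, outcome, physician_vars)
    # Full model: physician, organizational and systemic factors entered together
    all_vars = physician_vars + organizational_vars + systemic_vars
    full = fit_ols(df, outcome, all_vars)
    # Net (marginal) R2 for each block: full-model R2 minus the R2 of the
    # model refitted without that block of variables
    net_r2 = {}
    for name, block in [("physician", physician_vars),
                        ("organizational", organizational_vars),
                        ("systemic", systemic_vars)]:
        reduced = fit_ols(df, outcome, [v for v in all_vars if v not in block])
        net_r2[name] = full.rsquared - reduced.rsquared
    return independent, full, net_r2

# Usage with a hypothetical data frame `df` and the acute dimension score:
# independent, full, net_r2 = analyze(df, "acute_score")
# print(independent.rsquared, full.rsquared, net_r2)
```

Other decompositions (for example, hierarchical entry in a fixed order) would allocate shared variance differently; the sketch simply mirrors the marginal interpretation described above.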
Results
Physician and practice description
The average age of physicians in the sample was 51.0±9.91 years, with a median of 50. This is comparable to the 51.2-year average age of Ontario physicians (CPSO 2008a). The sex distribution of the sample shows that 88.9% of the assessed physicians were male and 11.1% female. The sample therefore comprised proportionally more male physicians than the CPSO registry as a whole, in which 67.9% of Ontario physicians are male and 32.1% female (CPSO 2008a).
The sample physicians worked an average of 29.8 hours and saw an average of 131 patients per week in their primary office setting. Half (the median) of the practices employed two or more staff members (administrative, clinical or both); this measure did not differentiate between clinical and administrative staff, nor did it distinguish between part-time and full-time staff. In addition, 20.2% of sample physicians engaged in teaching, 5.4% had clinically focused practices and 64.7% had active hospital appointments. Solo and WIC practices were the primary practice settings for 42.1% and 7.9% of the sample physicians, respectively.
Descriptive statistics of dimensions of performance
The majority (78%) of assessed physicians had satisfactory practices; 14.1% required a reassessment and 7.9% required an interview because of care concerns. This finding is consistent with the typical distribution of assessment results since the inception of the CPSO peer assessment program. The descriptive statistics for the scores on the five performance dimensions were positively skewed, reflecting the propensity of most physicians to do well on assessment (Table 1). However, as reported in earlier studies, the variations present in the dimensional scores are sensitive to significant differences in assessment outcomes (Wenghofer et al. 2006b).
TABLE 1.
n=532 | Acute | Chronic | Continuity | Well care | Records |
---|---|---|---|---|---|
Mean | 3.52 | 3.66 | 3.85 | 3.29 | 3.59 |
Standard Deviation | 0.49 | 0.41 | 0.34 | 0.61 | 0.34 |
Minimum Score* | 1.63 | 1.71 | 1.60 | 1.33 | 1.92 |
Maximum Score* | 4.00 | 4.00 | 4.00 | 4.00 | 4.00 |
Note: Possible range on all dimensional scores is a minimum score of 1.0 and a maximum score of 4.0.
Independent regression model
Results from the independent regression model, in which only the physician factors were evaluated, are presented in Table 2. Collinearity diagnostics indicated that years in practice is highly correlated with physician age (r=0.94); thus, years in practice was removed from all regression models (Kleinbaum et al. 1988). As in previous studies of peer assessment results (Norton et al. 1994, 1997; McAuley et al. 1990), our results confirmed that personal and professional characteristics, particularly sex and certification, and to a lesser degree age, significantly influenced performance, with the exception of continuity of care, for which the independent regression model was not significant. However, unlike previous studies, the effects were found to vary across performance dimensions. For example, the regression results indicated that females performed better in acute care, well care and records management, but sex differences were not found in the other dimensions. Similar variation across performance dimensions was also found with age and CFPC certification. Increasing age was a significant predictor of declining performance in records only, while holding CFPC certification had a positive impact on performance in acute, chronic and well care as well as records. Attending a North American medical school, the number of years in the current practice setting and having been previously assessed did not significantly affect assessment performance in any of the dimensions.
TABLE 2.
| | Acute regression coefficient (std. error) | Chronic regression coefficient (std. error) | Continuity regression coefficient (std. error) | Well care regression coefficient (std. error) | Records regression coefficient (std. error) |
|---|---|---|---|---|---|
| Independent Model R2 | 0.074** | 0.046** | 0.023 | 0.079** | 0.120** |
| Age | –0.005 (0.004) | –0.005 (0.003) | –0.001 (0.003) | –0.005 (0.004) | –0.005* (0.002) |
| Males | –0.174* (0.067) | –0.106 (0.058) | –0.061 (0.048) | –0.302** (0.084) | –0.158** (0.046) |
| Attended North American School | 0.052 (0.054) | 0.011 (0.046) | 0.039 (0.039) | 0.002 (0.067) | –0.014 (0.037) |
| Years in Current Practice at Time of Assessment | –0.002 (0.003) | 0.004 (0.003) | 0.004 (0.002) | 0.004 (0.004) | –0.002 (0.002) |
| Holds CFPC Certification | 0.107* (0.045) | 0.126** (0.039) | 0.058 (0.033) | 0.240** (0.057) | 0.110** (0.031) |
| Has Been Previously Assessed | 0.066 (0.070) | 0.015 (0.060) | 0.030 (0.050) | –0.019 (0.088) | –0.015 (0.049) |
* Significant at p<0.05
** Significant at p<0.01
Full regression model
The results of the full regression model measuring the simultaneous impact of physician, organizational and systemic factors on performance (Table 3) revealed that the way in which physician factors influence performance changes when organizational and systemic factors are taken into account. For example, unlike the independent model, in the full model age was not a significant predictor in any of the performance dimensions, and CFPC certification remained a significant predictor only in well care and records. In addition, years in current practice setting became significant for acute care in the full model. A similar pattern was also found with performance in the chronic and continuity of care dimensions, in that the physician characteristics were no longer significant once the effects of organizational and systemic factors were incorporated in the full model.
TABLE 3.
| | | Acute regression coefficient (std. error) | Chronic regression coefficient (std. error) | Continuity regression coefficient (std. error) | Well care regression coefficient (std. error) | Records regression coefficient (std. error) |
|---|---|---|---|---|---|---|
| Model R2 | | 0.199** | 0.142** | 0.123** | 0.193** | 0.233** |
| Significant Physician Factors | Males | | | | –0.236* (0.095) | –0.104* (0.050) |
| | Years in Current Practice | –0.007* (0.004) | | | | |
| | Holds CFPC Certification | | | | 0.208** (0.068) | 0.073* (0.036) |
| Significant Organizational Factors | WIC Practice | | –0.166* (0.071) | | | |
| | Number of Patient Visits per Week | –0.002** (0.000) | –0.001** (0.000) | –0.001** (0.000) | –0.002** (0.001) | –0.001** (0.000) |
| | Holds Active Hospital Appointment | | | | | 0.080* (0.036) |
| Significant Systemic Factors | Proportion of Basic Diagnostic Tests Available | | 0.350** (0.131) | 0.391** (0.111) | 0.458* (0.217) | |
| | Physician to 1,000 Population Ratio | 0.0328* (0.013) | 0.027** (0.011) | 0.021* (0.009) | | |
| | Northern Practice Location | –0.345** (0.095) | | | –0.332* (0.124) | –0.240** (0.065) |
* Significant at p<0.05
** Significant at p<0.01
Note: Regression coefficients for variables that were included in the full model but were not significant are not listed owing to space constraints.
In the full regression model, several specific variables from the organizational factors had significant effects on performance. Practice type, patient visits per week and holding an active hospital appointment each had varying effects in several of the dimensions. For example, physicians working in WICs performed less well in the chronic care dimension. The most consistent organizational effects were found with patient visits per week, where performance in all five dimensions improved with declining numbers of patient visits per week.
Specific system variables were also significant in the full regression models. Physicians working in locations with low physician-to-population ratios performed more poorly in the acute, chronic and continuity care performance dimensions. Physicians with better availability of basic diagnostic tests performed better in the chronic, continuity and well care dimensions. Physicians with their primary practices in northern locations performed more poorly in acute care, well care and records than their southern counterparts, even after the effects of the physician-to-population ratio and number of patient visits per week had been taken into account.
The variance estimates from the full regression model are presented in Table 4. The physician factors were significant predictors, to varying degrees, for acute care (R2=0.058; p<0.01), well care (R2=0.067; p<0.01) and records (R2=0.087; p<0.01), but not for chronic conditions or continuity of care. In comparison, the organizational factors had a varying impact on all dimensions except continuity of care, where the systemic factors predominated (R2=0.057; p<0.01). The systemic factors significantly contributed to the variance in all five performance dimensions, but to varying degrees.
TABLE 4.
| | Total variance explained by independent model | Net R2: physician factors | Net R2: organizational factors | Net R2: systemic factors | Total variance explained by full model |
|---|---|---|---|---|---|
| Acute | 0.074** | 0.058** | 0.071** | 0.068** | 0.199** |
| Chronic | 0.046** | 0.012 | 0.061** | 0.045** | 0.142** |
| Continuity | 0.023 | 0.015 | 0.038 | 0.057** | 0.123** |
| Well Care | 0.079** | 0.067** | 0.054** | 0.045* | 0.193** |
| Records | 0.120** | 0.087** | 0.052** | 0.051** | 0.233** |
* Significant at p<0.05
** Significant at p<0.01
Tolerance, VIFs and between-predictor correlations did not indicate any concerning levels of collinearity. The maximum VIF and minimum tolerance in either the independent or full model were 3.3 and 0.30, respectively. The highest correlation was found between number of patient visits per week and hours worked per week (r=0.73). As a precaution, hours worked per week was removed from the regressions because number of patient visits per week was thought to give a better indication of practice load than hours alone. All collinearity statistics were well below levels meriting concern (Kleinbaum et al. 1988), with the one other exception of years in practice, which was noted earlier and was addressed by modifying the regression models.
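A minimal sketch of these collinearity checks is given below, assuming a data frame `df` holding the full-model predictors; `variance_inflation_factor` is the standard statsmodels helper, and tolerance is computed as the reciprocal of the VIF.

```python
# Minimal sketch of the collinearity diagnostics: VIF, tolerance (1/VIF)
# and between-predictor correlations. `df` and the predictor list are assumed.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def collinearity_diagnostics(df: pd.DataFrame, predictors: list) -> pd.DataFrame:
    X = sm.add_constant(df[predictors])
    rows = []
    for i, name in enumerate(X.columns):
        if name == "const":
            continue  # the intercept is not a predictor of interest
        vif = variance_inflation_factor(X.values, i)
        rows.append({"predictor": name, "VIF": vif, "tolerance": 1.0 / vif})
    return pd.DataFrame(rows)

# Usage (hypothetical):
# diagnostics = collinearity_diagnostics(df, predictors)
# correlations = df[predictors].corr()  # e.g., visits/week vs. hours/week, r = 0.73
```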
Discussion
While strategies for improving and ensuring physician performance are increasingly seen as crucial considerations for improving outcomes for patients and the healthcare system, there remains a tendency to address them rather narrowly, as primarily or solely a function of the credentials, training and attributes of individual physicians (Klass 2007b). We suggest that this approach fails to take into account factors in the broader context of practice that are beyond physicians' direct control. We believe it has also led to a relatively negative view of the current strategies employed to improve performance, which place inordinate emphasis on the agency of individual physicians and, in the process, appear to blame them for shortcomings in the organizations or health systems in which they work. Indeed, our data, drawn from actual practice-based assessments of GP/FPs, suggest that in addition to the personal and professional characteristics of physicians, the characteristics of the organizations in which they work and the communities in which those organizations are located also have important and concurrent effects on their ability to provide appropriate care to their patients across a number of key performance dimensions.
Three key findings emerge from our analyses.
First, our findings challenge the assumption that assessment can, or should, be targeted on the basis of individual characteristics alone. Although the results of both the independent and full regression analyses support the findings of previous research that sex, age and certification do affect performance, these factors do not appear to be nearly as important as previously thought (Norton et al. 1994, 1997; McAuley et al. 1990). For example, our data indicate that while female physicians outperformed males on some dimensions, such as well care or acute care, there were no differences in others (e.g., chronic care) once organizational and systemic differences were taken into consideration. Similarly nuanced findings were observed with respect to CFPC certification. The results of previous studies that focused primarily on physician factors have led to several regulatory practices that may now need to be re-examined. For example, in Ontario a physician is selected for peer assessment at age 70 (CPSO 2008b). We are not suggesting that continuing age-related assessment is unimportant, but rather that other factors, such as organizational structures, may have a greater influence than age alone. Organization-related assessments might also be considered. Initiatives to improve performance that are targeted on the basis of personal attributes alone are likely to miss their mark more often than they hit it. Clearly, the broader practice context needs to be considered in regulatory and improvement policies.
Further support for this idea comes from our second key finding: specific organizational and systemic factors have significant effects on performance even after the effects of physician factors are controlled. Of course, the idea that such external factors may influence physician behaviour is not new. For example, many studies have found evidence of small-area variation in health services use and physician practice patterns (Jin et al. 2003; Brownell 2002; Brownell et al. 2002; Veugelers et al. 2003; Chaudhry et al. 2001; Hospital Report Research Collaborative 2004a,b,c; Chan 2002; CIHI 2007b; Konkin et al. 2004; CMA 2008), including those found in northern and rural locations (Norton et al. 1997; Tepper et al. 2005; Baldwin et al. 1999; Probst et al. 2002; Chan and Shultz 2005; May et al. 2007; CIHI 2007a). Our findings support these earlier studies, which suggest an impact of the broader practice environment on physician performance. For example, physicians who have better access to diagnostic tests and specialist consults can more appropriately diagnose, treat and refer patients; and physicians in northern locations face practice challenges that differ from those seen among physicians in southern Ontario. Thus, we need to consider that working in different practice environments may require different skills and knowledge specific to the practice context.
A third key finding is that individual, organizational and systemic factors appear to have varying effects across different dimensions of performance, emphasizing the need to conceptualize and measure performance as multidimensional. As a result, the answer to the question, “What influences physician performance the most?” and its corollary, “Where should incentives and policies for improvement be placed?” is, “It depends on the specific dimension of performance under scrutiny.” For example, our finding that the management of chronic conditions is poorer in walk-in clinics than in other settings, while the management of acute conditions is not, suggests that certain organizational structures may be more supportive and effective for some types of care than for others. As new practice structures are introduced and promoted as part of primary care reform initiatives, this finding may be particularly important for planning. It also suggests the importance of systematically monitoring organizational and systemic factors and linking changes in these factors, particularly during periods of health system restructuring, to variations in physician performance. For instance, Ontario has implemented two major reforms that affect physicians: a reform of primary care aimed at encouraging more GP/FPs to work in multidisciplinary teams (i.e., family health teams) with shared patient records and alternatives to fee-for-service such as capitation; and the regionalization of hospital, home care and long-term care services into local health integration networks (LHINs). Knowing more about how such reforms affect physician performance could go a considerable way towards identifying and redressing organizational and systemic problems that lead to poor performance, and towards equipping individual physicians to respond constructively and proactively to a changing environment.
Limitations and strengths
There are some limitations to consider when interpreting the results of these analyses. Most obviously, there is a considerable amount of residual variation that is not explained by the data; the sources of this variation remain to be understood. A likely possibility is that it relates to limitations in the data. While chart reviews are considered one of the standard methods of practice evaluation (Wakefield et al. 1995), charts alone have been shown to represent only a subset of the activities actually performed by physicians during a patient visit (Rethans et al. 1994). However, the data gathered in the CPSO assessment protocols are augmented with additional information (Brook et al. 1996) from the physician-assessor interview, and unpublished CPSO internal quality control studies (e.g., inter-assessor rating and decision validation) have shown the methodology to be reliable. Further, the data representing physician, organizational and systemic factors are by no means exhaustive; neither are our categorizations of the variables into physician, organizational or systemic factors set in stone.
Finally, this study focuses on clinical dimensions of performance. There are other important aspects to performance, such as patient communication, patient outcomes and team performance, to name a few, that were not looked at in this study. Our future work will further investigate the impact of individual practitioner, organizational and systemic factors in these important areas to help complete the performance picture.
Despite these limitations, we think that this study has important implications for physician performance policies in two main areas: performance improvement and governance. We believe the strength of our study lies in understanding physician performance within the broader constructs of the practice environment and demonstrating the importance of collecting these data for future research. Better physician practice data concerning organizational structure and systemic resources will further improve our ability to investigate the impact of the practice environment on performance.
Implications of the study
A core purpose of performance evaluation is needs assessment for education and performance improvement. While continuing medical education (CME) and continuing professional development (CPD) initiatives have typically focused on refreshing the physician's clinical skills and knowledge, our findings suggest that such initiatives may be ineffective if they ignore the broader context in which clinical decision-making takes place, particularly where organizational and systemic factors may be a source of poor performance. While individual competence remains a crucial prerequisite for high performance, it may not be accurate to assume that poor performance can simply be rectified through “upgrading.” For example, on dimensions such as chronic care and continuity of care, the results suggest that quality improvement initiatives should also consider organizational and systemic factors because physician factors appear to have less impact on performance in these dimensions. Performance issues that are more heavily influenced by organizational and systemic factors will be more effectively addressed through organizational and systemic policies or programs (e.g., organizational performance incentives, systemic resource allocation or professional governance) than through exclusive reliance on the CPD of individual practitioners as the panacea for performance improvement. This speaks to the need both to target CME/CPD carefully at performance issues that are more heavily influenced by individual-level factors and, more generally, for CME/CPD curricula to include content that will assist individual physicians in identifying and coping with external factors that affect their practices.
We feel that our findings have governance implications, particularly suggesting the need to remodel regulatory and tort systems, which are designed, among other things, to apportion accountability in the health workplace. Such issues become increasingly salient in jurisdictions such as Ontario, where ongoing primary care reforms have resulted in the introduction of family health teams and the promotion of interdisciplinary care provision, producing increasingly complex practice environments that involve multiple regulated healthcare professions. This interdependence of competence is not easily accommodated in a system designed to apportion accountability and responsibility only at the individual level. The determination of liability or professional accountability needs to reflect the reality of the complex interdependence of physicians in organizations within systems.
Picturing how these concepts might be operationalized is not straightforward; consider physician migration as an illustration. Ensuring the mobility of the physician workforce without compromising patient safety and standards of care has primarily been addressed by ensuring the equivalency of physician training, credentials and certifications across jurisdictions (HealthForceOntario 2007; Norcini and Mazmanian 2005). However, with each move of a physician's practice, the population needs, organizational structures and resource availability may differ from those of the setting in which the physician was originally trained or gained his or her practice experience. These differences may require physicians to develop new sets of competencies and performance skills to meet local needs and to provide care that may be considered specialized or outside their typical scope of practice (Baldwin et al. 1999; Probst et al. 2002; Tulloh et al. 2001; Breon et al. 2003). Yet these contextual aspects of performance are not currently taken into consideration when evaluating the readiness of a physician to enter a new practice environment. In other words, the skills and knowledge required in one practice setting may not be sufficient for another. As a result, differences in physician performance should no longer be conceptualized simply as the outcome of credentials, training and personal attributes, but rather as the product of complex and concurrent effects of physician, organizational and systemic factors.
Conclusions
Our analysis has demonstrated that organizational and systemic factors, in addition to physician factors, can all significantly affect physician performance. Concepts of physician performance have for too long focused primarily or solely on the individual practitioner, with emphasis on attributional elements of competence rather than valid measures of performance. Employing a conceptual framework that considers physician performance within a broader environmental construct will allow us to develop better processes of performance evaluation, to design appropriate interventions and to support performance improvement and governance models for individuals, teams and systems.
Acknowledgements
The College of Physicians and Surgeons of Ontario provided access to the data used for this study. The CPSO, as an organization, was not involved in the design, conduct of the study, management, analysis or interpretation of the data, or the preparation, review or approval of the manuscript.
Appendix
Five dimensions of GP/FP performance
TABLE A1.
Performance dimension | Description |
---|---|
Managing Patients with Acute Conditions and New Presentations (ACUTE) | Physician's performance in dealing with new patients or known patients presenting a new complaint or condition. Conditions are generally non-urgent and will often involve the formulation of a diagnosis, for either acute or chronic conditions, and recommendation(s) for treatment. |
Managing Patients with Chronic Conditions (CHRONIC) | Physician's performance in dealing with patients with chronic conditions. Conditions will usually require long-term monitoring and may be present with or without co-morbidities. |
Providing Patients with Continuity of Care and Referrals (Continuity Care) | Physician's performance in dealing with patients who are referred for treatment, surgical procedures, diagnostic procedures or otherwise, to the care of other physicians. Includes the appropriateness of referral (i.e., indications) and follow-up. |
Providing Patients with Well Care and Health Maintenance (Well Care) | Physician's performance in well care visits and preventive health maintenance, including patient visits for annual check-ups, screening, well baby visits, etc. |
Managing Patient Records and Recording Skills (Records) | Physician's performance in records management and recording skills. This reflects the mandatory elements of record format required by legislation and some additional features of the organization and recording tools used. |
Contributor Information
Elizabeth F. Wenghofer, Assistant Professor, School of Rural and Northern Health, Laurentian University; Assistant Professor, Human Sciences Division, Northern Ontario School of Medicine, Sudbury, ON.
A. Paul Williams, Professor, Department of Health Policy, Management and Evaluation; Faculty of Medicine, University of Toronto, Toronto, ON.
Daniel J. Klass, Associate Registrar and Senior Medical Officer, Quality Management Division, College of Physicians and Surgeons of Ontario; Adjunct Professor, Department of Medicine, University of Toronto, Toronto, ON.
References
- Baldwin L.M., Rosenblatt R.A., Schneeweiss R., Lishner D.M., Hart L.G. Rural and Urban Physicians: Does the Content of Their Medicare Practices Differ? Journal of Rural Health. 1999;15(2):240–51. doi: 10.1111/j.1748-0361.1999.tb00745.x.
- Barach P., Moss F. Delivering Safe Health Care. British Medical Journal (Clinical Research Ed.). 2001;323(7313):585–86. doi: 10.1136/bmj.323.7313.585.
- Becher E.C., Chassin M.R. Improving Quality, Minimizing Error: Making It Happen. Health Affairs. 2001;20(3):68–81. doi: 10.1377/hlthaff.20.3.68.
- Berwick D.M. Improvement, Trust, and the Healthcare Workforce. Quality & Safety in Health Care. 2003;12(6):448–52. doi: 10.1136/qhc.12.6.448.
- Breon T.A., Scott-Conner C.E., Tracy R.D. Spectrum of General Surgery in Rural Iowa. Current Surgery. 2003;60(1):94–99. doi: 10.1016/S0149-7944(02)00680-3.
- Brook R., McGlynn E., Cleary P. Measuring Quality of Care. Part 2. New England Journal of Medicine. 1996;335:966–69. doi: 10.1056/NEJM199609263351311.
- Brown J.B., Bouck L.M., Ostbye T., Barnsley J.M., Mathews M., Ogilvie G. Walk-in Clinics in Ontario. An Atmosphere of Tension. Canadian Family Physician. 2002;48:531–36.
- Brownell M. Tonsillectomy Rates for Manitoba Children: Temporal and Spatial Variations. Healthcare Management Forum/Canadian College of Health Service Executives. 2002;Winter:21–26. doi: 10.1016/s0840-4704(10)60178-0.
- Brownell M., Kozyrkyj A., Roos N., Friesen D., Mayer T., Sullivan K. Health Service Utilization by Manitoba Children. Canadian Journal of Public Health. 2002;93:S57–62. doi: 10.1007/BF03403620.
- Canadian Institute for Health Information (CIHI). Distribution and Internal Migration of Canada's Physician Workforce. Ottawa: Author; 2007a.
- Canadian Institute for Health Information (CIHI). Health Indicators. Ottawa: Author; 2007b.
- Canadian Medical Association (CMA). National Physician Survey 2007: Response Rates. 2008. Retrieved October 17, 2009. <http://www.nationalphysiciansurvey.ca/nps/2007_Survey/response_rates-2007-e.asp>.
- Caulford P.G., Lamb S.B., Kaigas T.B., Hanna E., Norman G.R., Davis D.A. Physician Incompetence: Specific Problems and Predictors. Academic Medicine. 1994;69(10):S16–18. doi: 10.1097/00001888-199410000-00028.
- Chan B.T. The Declining Comprehensiveness of Primary Care. Canadian Medical Association Journal. 2002;166(4):429–34.
- Chan B.T., Shultz S.E. Supply and Utilization of General Practitioner and Family Physicians Services in Ontario: ICES Investigative Report. Toronto: Institute for Clinical Evaluative Sciences; 2005.
- Chaudhry R., Goel V., Sawka C. Breast Cancer Survival by Teaching Status of the Initial Treating Hospital. Canadian Medical Association Journal. 2001;164(2):183–88.
- Chen J., Hou F. Unmet Needs for Health Care. Health Reports. 2002;13(2):23–34.
- College of Physicians and Surgeons of Ontario (CPSO). Reaping the Rewards – Striving for Sustainability: 2007 Registration Statistics and Survey Findings. 2008a. Retrieved October 17, 2009. <http://www.cpso.on.ca/uploadedFiles/policies/positions/resourceinitiative/Reaping%20the%20Rewards%20Survey_07.pdf>.
- College of Physicians and Surgeons of Ontario (CPSO). Selection for Assessment. 2008b. Retrieved October 17, 2009. <http://www.cpso.on.ca/members/peerassessment/default.aspx?id=1946>.
- Donabedian A. Evaluating the Quality of Medical Care. Milbank Memorial Fund Quarterly. 1966;44(3 Suppl):166–206.
- Donabedian A. The Quality of Care. How Can It Be Assessed? Journal of the American Medical Association. 1988;260(12):1743–48. doi: 10.1001/jama.260.12.1743.
- Donabedian A. The Role of Outcomes in Quality Assessment and Assurance. Quality Review Bulletin. 1992;18:356–60. doi: 10.1016/s0097-5990(16)30560-7.
- Donabedian A. Evaluating Physician Competence. Bulletin of the World Health Organization. 2000;78:857–60.
- Ely J.W., Levinson W., Elder N.C., Mainous A.G., Vinson D.C. Perceived Causes of Family Physicians' Errors. Journal of Family Practice. 1995;40(4):337–44.
- Geneau R., Lehoux P., Pineault R., Lamarche P. Understanding the Work of General Practitioners: A Social Science Perspective on the Context of Medical Decision Making in Primary Care. BMC Family Practice. 2008;9:12. doi: 10.1186/1471-2296-9-12.
- Gillett J., Hutchison B., Birch S. Capitation and Primary Care in Canada: Financial Incentives and the Evolution of Health Service Organizations. International Journal of Health Services. 2001;31(3):583–603. doi: 10.2190/2FEN-AQKK-LCEV-7KU5.
- Goldfarb S. The Utility of Decision Support, Clinical Guidelines, and Financial Incentives as Tools to Achieve Improved Clinical Performance. Joint Commission Journal on Quality Improvement. 1999;25(3):137–44. doi: 10.1016/s1070-3241(16)30433-3.
- Grol R. Changing Physicians' Competence and Performance: Finding the Balance between the Individual and the Organization. Journal of Continuing Education in the Health Professions. 2002;22(4):244–51. doi: 10.1002/chp.1340220409.
- HealthForceOntario. Entry to Practice Requirements for Healthcare Professionals Outside Ontario. 2007. Retrieved October 17, 2009. <http://www.healthforceontario.ca/Work/OutsideOntario/PhysiciansOutsideOntario/PracticeRequirements.aspx>.
- Hopkins J.R. Financial Incentives for Ambulatory Care Performance Improvement. Joint Commission Journal on Quality Improvement. 1999;25(5):223–38. doi: 10.1016/s1070-3241(16)30440-0.
- Hospital Report Research Collaborative. Hospital Report 2003: Acute Care. 2004a. Retrieved October 17, 2009. <http://www.hospitalreport.ca/downloads/2003/acute_2003.html>.
- Hospital Report Research Collaborative. Hospital Report 2003: Emergency Department Care. 2004b. Retrieved October 17, 2009. <http://www.hospitalreport.ca/downloads/2003/edc_2003.html>.
- Hospital Report Research Collaborative. Hospital Report 2003: Complex Continuing Care. 2004c. Retrieved October 17, 2009. <http://www.hospitalreport.ca/downloads/2003/ccc_2003.html>.
- Jansen J.J., Grol R.P., Van Der Vleuten C.P., Scherpbier A.J., Crebolder H.F., Rethans J.J. Effect of a Short Skills Training Course on Competence and Performance in General Practice. Medical Education. 2000;34(1):66–71. doi: 10.1046/j.1365-2923.2000.00401.x.
- Jin Y., Marrie T.J., Carriere K.C., Predy G., Houston C., Ness K., Johnson D.H. Variation in Management of Community-Acquired Pneumonia Requiring Admission to Alberta, Canada Hospitals. Epidemiology and Infection. 2003;130(1):41–51. doi: 10.1017/s0950268802007926.
- Jones D. BC Walk-in Clinics Warned. Canadian Medical Association Journal. 2006;175(12):1512. doi: 10.1503/cmaj.061484.
- Jones M. Walk-in Primary Medical Care Centres: Lessons from Canada. British Medical Journal (Clinical Research Ed.). 2000;321(7266):928–31. doi: 10.1136/bmj.321.7266.928.
- Klass D. Reevaluation of Clinical Competency. American Journal of Physical Medicine & Rehabilitation. 2000;79(5):481–86. doi: 10.1097/00002060-200009000-00018.
- Klass D. Assessing Doctors at Work: Progress and Challenges. New England Journal of Medicine. 2007a;356(4):414–15. doi: 10.1056/NEJMe068212.
- Klass D. A Performance-Based Conception of Competence Is Changing the Regulation of Physicians' Professional Behavior. Academic Medicine. 2007b;82(6):529–35. doi: 10.1097/ACM.0b013e31805557ba.
- Kleinbaum D.G., Kupper L.L., Muller G. Regression Diagnostics. In: Kleinbaum D.G., et al., editors. Applied Regression Analysis and Other Multivariable Methods. vol. 2. Belmont, CA: Duxbury Press; 1988.
- Konkin J., Howe D., Soles T.L. Society of Rural Physicians of Canada. SRPC Policy Paper on Regionalization, Spring 2004. Canadian Journal of Rural Medicine. 2004;9(4):257–59.
- Kopelow M., Schnabel G.K., Hassard T.H., Klass D.J., Beazley G., Hechter F., Grott M. Assessing Practicing Physicians in Two Settings Using Standardized Patients. Academic Medicine. 1992;67(10):S19–21. doi: 10.1097/00001888-199210000-00026.
- LaDuca A. Validation of Professional Licensure Examinations. Evaluation & the Health Professions. 1994;17(2):178–97. doi: 10.1177/016327879401700204.
- LaDuca A., Taylor D.D., Hill I.K. The Design of a New Physician Licensure Examination. Evaluation & the Health Professions. 1984;7(2):115–40. doi: 10.1177/016327878400700201.
- Lewis S. Regression Analysis. Practical Neurology. 2007;7(4):259–64. doi: 10.1136/jnnp.2007.120055.
- Long M.J. An Explanatory Model of Medical Practice Variation: A Physician Resource Demand Perspective. Journal of Evaluation in Clinical Practice. 2002;8(2):167–74. doi: 10.1046/j.1365-2753.2002.00343.x.
- May J., Jones P.D., Cooper R.J., Morrissey M., Kershaw G. GP Perceptions of Workforce Shortage in a Rural Setting. Rural and Remote Health. 2007;7(3):720.
- McAuley R.G., Henderson H.W. Results of the Peer Assessment Program of the College of Physicians and Surgeons of Ontario. Canadian Medical Association Journal. 1984;131:557–61.
- McAuley R.G., Paul W.M., Morrison G.H., Beckett R.F., Goldsmith C.H. Five-Year Results of the Peer Assessment Program of the College of Physicians and Surgeons of Ontario. Canadian Medical Association Journal. 1990;143(11):1193–99.
- Morrow R.W., Gooding A.D., Clark C. Improving Physicians' Preventive Health Care Behavior through Peer Review and Financial Incentives. Archives of Family Medicine. 1995;4(2):165–69. doi: 10.1001/archfami.4.2.165.
- Norcini J.J., Mazmanian P.E. Physician Migration, Education, and Health Care. Journal of Continuing Education in the Health Professions. 2005;25(1):4–7. doi: 10.1002/chp.2.
- Norman G.R., Davis D.A., Lamb S., Hanna E., Caulford P., Kaigas T. Competency Assessment of Primary Care Physicians as Part of a Peer Review Program. Journal of the American Medical Association. 1993;270(9):1046–51.
- Norton P.G., Dunn E.V., Beckett R., Faulkner D. Long-Term Follow-up in the Peer Assessment Program for Nonspecialist Physicians in Ontario, Canada. Joint Commission Journal on Quality Improvement. 1998;24(6):334–41. doi: 10.1016/s1070-3241(16)30385-6.
- Norton P.G., Dunn E.V., Soberman L. Family Practice in Ontario: How Physician Demographics Affect Practice Patterns. Canadian Family Physician. 1994;40:249–56.
- Norton P.G., Dunn E.V., Soberman L. What Factors Affect Quality of Care? Using the Peer Assessment Program in Ontario Family Practices. Canadian Family Physician. 1997;43(10):1739–44.
- Norton P.G., Faulkner D. A Longitudinal Study of Performance of Physicians' Office Practices: Data from the Peer Assessment Program in Ontario, Canada. Joint Commission Journal on Quality Improvement. 1999;25(5):252–58. doi: 10.1016/s1070-3241(16)30442-4.
- Norton P.G., Ginsburg L.S., Dunn E., Beckett R., Faulkner D. Educational Interventions to Improve Practice of Nonspecialty Physicians Who Are Identified in Need by Peer Review. Journal of Continuing Education in the Health Professions. 2004;24(4):244–52. doi: 10.1002/chp.1340240408.
- Probst J.C., Moore C.G., Baxley E.G., Lammie J.J. Rural–Urban Differences in Visits to Primary Care Physicians. Family Medicine. 2002;34(8):609–15.
- Ram P., Grol R., van den Hombergh P., Rethans J.J., van der Vleuten C., Aretz K. Structure and Process: The Relationship between Practice Management and Actual Clinical Performance in General Practice. Family Practice. 1998;15(4):354–62. doi: 10.1093/fampra/15.4.354.
- Rethans J.J., Martin E., Metsemakers J. To What Extent Do Clinical Notes by General Practitioners Reflect Actual Medical Performance? A Study Using Simulated Patients. British Journal of General Practice. 1994;44(381):153–56.
- Rethans J.J., Norcini J., Baron-Maldonado M., Blackmore D.E., Jolly D.M., LaDuca A., Lew S.R., Page G.G., Southgate L. The Relationship between Competence and Performance: Implications for Assessing Practice Performance. Medical Education. 2002;36:901–9. doi: 10.1046/j.1365-2923.2002.01316.x.
- Robinson M.B. Evaluation of Medical Audit. Journal of Epidemiology and Community Health. 1994;48(5):435–40. doi: 10.1136/jech.48.5.435.
- Safran D.G., Rogers W.H., Tarlov A.R., Inui T., Taira D.A., Montgomery J.E., Ware J.E., Slavin C.P. Organizational and Financial Characteristics of Health Plans: Are They Related to Primary Care Performance? Archives of Internal Medicine. 2000;160(1):69–76. doi: 10.1001/archinte.160.1.69.
- Safran D.G., Wilson I.B., Rogers W.H., Montgomery J.E., Chang H. Primary Care Quality in the Medicare Program: Comparing the Performance of Medicare Health Maintenance Organizations and Traditional Fee-for-Service Medicare. Archives of Internal Medicine. 2002;162(7):757–65. doi: 10.1001/archinte.162.7.757.
- Shine K.I. Health Care Quality and How to Achieve It. Academic Medicine. 2002;77(1):91–99. doi: 10.1097/00001888-200201000-00021.
- Skinner L. Measuring Physician Performance. Journal of the Medical Association of Georgia. 2002;91(1):38–99.
- Tepper J., Schultz S.E., Rothwell D.M., Chan B.T. Physician Services in Rural and Northern Ontario: ICES Investigative Report. Toronto: Institute for Clinical Evaluative Sciences; 2005.
- Tulloh B., Clifforth S., Miller I. Caseload in Rural General Surgical Practice and Implications for Training. ANZ Journal of Surgery. 2001;71(4):215–17. doi: 10.1046/j.1440-1622.2001.02075.x.
- Veugelers P.J., Yip A.M., Elliott D.C. Geographic Variation in Health Services Use in Nova Scotia. Chronic Diseases in Canada. 2003;24(4):116–23.
- Wakefield D.S., Helms C.M., Helms L. The Peer Review Process: The Art of Judgment. Journal for Healthcare Quality. 1995;17(3):11–15. doi: 10.1111/j.1945-1474.1995.tb00773.x.
- Wenghofer E.F., Way D., Moxam R.S., Wu H., Faulkner D., Klass D.J. Effectiveness of an Enhanced Peer Assessment Program: Introducing Education into Regulatory Assessment. Journal of Continuing Education in the Health Professions. 2006a;26(3):199–208. doi: 10.1002/chp.70.
- Wenghofer E.F., Williams A.P., Klass D.J., Faulkner D. Physician–Patient Encounters: The Structure of Performance in Family and General Office Practice. Journal of Continuing Education in the Health Professions. 2006b;26(4):285–93. doi: 10.1002/chp.81.