Abstract
Despite widespread belief in clinical information technology's (IT) potential to improve quality overall, IT's effects on disparities remain unexamined. We develop a conceptual framework regarding how IT can alter within-physician disparities, and we empirically test some of its implications in the context of coronary heart disease. Using a random experiment on 256 primary care physicians, we analyze the relationships between 3 IT functions (feedback and two types of clinical decision support) and 5 process of care measures. We address endogeneity by eliminating unobserved patient characteristics with vignettes and by proxying for omitted physician characteristics. We find that physicians' diagnostic certainty and treatment differ by patient age, gender and race. Consistent with the framework, IT's effects on disparities are complex. Feedback eliminated the gender disparities, but the relationships differed for other IT functions and process measures. Current policies to reduce disparities and increase IT adoption may be in discord.
Keywords: health disparities, clinical information technology, clinical decision making, Bayesian model, statistical discrimination
The Institute of Medicine's Crossing the Quality Chasm report (2001) stimulated numerous interventions to mitigate disparities in health care. Many focused on improving the diagnostic and treatment decisions made by physicians. These interventions aim at reducing disparities that arise both from between-physician differences in training quality and access to resources (Bach, Pham, Schrag, Tate, & Hargraves, 2004) and from within-physician differences in treatment decisions (van Ryn & Fu, 2003; Bao, Fox & Escarce, 2006). Physicians rely on many complex and inter-related sources of information in their diagnostic and treatment decisions. These include information acquired through interactions with the patient; the patient's history, medical record and insurance coverage; publications and clinical practice guidelines; their own experiences and those of their colleagues; more formal training programs such as residencies; and marketing (Azoulay, 2002).
Clinical information technology (IT) can distill information from these myriad sources and alter the relative importance of each of them (Rebitzer, Rege & Shepard, 2008). Although interest in IT has grown contemporaneously with concerns about disparities, academic research has not considered the confluence of these two high-profile topics. IT adoption among physicians remains low, with 4% having fully functional electronic medical records (EMR) and 13% having basic EMR (DesRoches et al., 2008). The potential for greater adoption of IT to improve quality overall has received considerable attention (Chaudhry, Wang, Wu, Maglione, Mojica, Roth, Morton, and Shekelle, 2006; Langley and Beasley, 2007), and the US Department of Health and Human Services has pursued broad implementation of IT as a way of increasing quality of care, reducing costs, and expanding access.1 Despite this optimism for IT's effects overall, its implications for disparities specifically have not been considered carefully.
We examine these issues in the context of coronary heart disease (CHD). Patient characteristics have been consistently linked to variations in the treatment and care of coronary heart disease (Arber et al., 2004; Barnhart et al., 2006; Harries et al., 2007). Women in particular are treated less aggressively than men in risk assessments and treatment for coronary conditions (Bird et al., 2007; Crilly et al., 2007). In cases with comparable symptom presentation, studies have shown differential use of coronary revascularization services (Popescu, Vaughan-Sarrazin, & Rosenthal, 2007), hospitalization for hypertension (Holmes, Arispe, & Moy, 2005), history taking (James, Feldman, & Mehta, 2006), and differences in attributions of cardiac-related symptoms (Martin, Gordon, & Lounsbury, 1998).
Existing literature has suggested that some sources of these types of disparities are inherently information problems (Balsa & McGuire, 2003). Thus, IT can potentially reduce disparities by helping providers access the information they need to make unbiased decisions. However, IT has capabilities to perform a wide range of functions that can either complement or substitute for existing sources of information, including physicians' interactions with patients or observations about patients' characteristics. Consequently, we expect that the effects of IT on disparities are nuanced, complex, and potentially multivalent, indicating the need for empirical evidence to inform policy makers and managers seeking both to increase IT adoption and to decrease disparities.
New Contribution
In this paper, we use a two-pronged approach to examine this topic: (1) we develop a conceptual framework of how various types of physician IT can influence disparities in care; and (2) we empirically test a subset of specific implications of that framework with data from an experiment on clinical decision-making in the context of CHD. We use a Bayesian framework to highlight tensions in IT's potential effects on disparities. We complement this framework with an empirical analysis of some of the effects it describes. We analyze the effects of three types of IT (electronic reminders, clinical decision support, and electronic feedback) on differences in five diagnosis and treatment decisions across four patient characteristics. Much of the existing research about IT's effects on health care delivery is confounded by both patient and physician characteristics that the researchers have not taken into account. To overcome these common limitations and make causal inferences about IT's effects on physicians' decisions, we rely on a random experiment to eliminate unobserved patient characteristics, and we use proxy variables to account for unobserved physician characteristics. Likewise, we are able to overcome a number of other common limitations of prior studies of clinical IT noted by Bates (2009). Specifically, our data are gathered from a number of different provider organizations rather than just one; we measure multiple aspects of clinical decision making rather than only errors or complications; and we examine the effects of IT as it is currently implemented, rather than relying on hypothetical projections.
Conceptual Framework
IT and Within-Physician Disparities
In a Bayesian framework, clinicians enter physician-patient encounters with preexisting beliefs or expectations, commonly referred to as “priors.” Priors include, among other things, beliefs about epidemiologic base rates, expectations about health behaviors, and implicit cognitive biases toward some types of patients. As they interact with patients, physicians update their priors with patient-specific information, or signals, which include items such as presenting symptoms, patient medical history, and demographic characteristics. Together, priors and patient-specific signals inform diagnostic and treatment decisions, and inappropriate reliance on one type of information at the expense of the other, or inaccuracy in either type, has been identified as one source of disparities in decision making.
The Bayesian framework and existing empirical evidence (Balsa, McGuire & Meredith, 2005; Lutfey & Ketcham, 2005; McGuire et al., 2007) indicate that disparities can arise not only from prejudice but also from uncertainty. Uncertainty from miscommunication can result in “statistical discrimination,” in which clinicians underweight patient-specific information (signals) and overweight priors (Balsa, McGuire & Meredith, 2005). Under statistical discrimination, some types of patients receive more poorly matched treatment because clinicians are not able to ascertain as much information from their signals. Statistical discrimination can also arise from differences across patient types in a physician's uncertainty or beliefs about factors such as disease prevalence or the relative effectiveness of treatment options (Balsa et al., 2005). That is, a physician can differ across specific patient types either in the weight he puts on the priors or in the priors themselves.
Clinical IT performs a range of functions that provide alternative sources of information that are incorporated into both physicians' priors and patient-specific signals. These functions can change physicians' decision making by altering the priors, by providing signals that are otherwise unobserved by the physician, or by changing the degree of uncertainty about both priors and signals, in turn altering the relative weight the physician places on them. Whether these changes increase or reduce disparities depends on the degree of accuracy and uncertainty of a physician's priors and signals. For example, if a physician's priors include CHD rates that are inaccurately low for a given type of patient, and the use of IT supplies them with an accurate rate,2 then they may be less likely to miss the diagnosis with that patient. If, on the other hand, the use of IT to access a base rate decreases the emphasis that physician places on patient signals in favor of patterns for that type of patient, then disparities could be worsened through IT's reinforcement of (even accurate) racial profiling (Balsa & McGuire, 2003). In the first case, the base rate provided by IT substitutes for the physician's existing knowledge; in the second case, the IT-provided base rate substitutes for patient-specific signals. The tension results from the fact that a greater reliance on patient-specific signals in the decision making process can cause greater incorporation of uncertainty or stereotypes, while a reduced role of signals can eliminate relevant information that can improve the match between the patient and the most appropriate treatment option.3
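To fix ideas, the tension just described can be expressed in a simple normal-learning sketch. The formulation below is purely illustrative, in the spirit of the statistical discrimination literature (e.g., Balsa et al., 2005); the notation is ours and no such model is estimated in this paper.

```latex
% Illustrative sketch only (our notation; not a model estimated in this paper).
% \theta is the patient's true clinical state and g indexes the patient's type.
% The physician holds a type-specific prior and observes a noisy
% patient-specific signal s.
\[
  s = \theta + \varepsilon, \qquad
  \theta \sim N\!\left(\mu_g,\ \sigma^2_{\mu,g}\right), \qquad
  \varepsilon \sim N\!\left(0,\ \sigma^2_{s,g}\right)
\]
\[
  E[\theta \mid s] = w_g\, s + (1 - w_g)\,\mu_g,
  \qquad
  w_g = \frac{\sigma^2_{\mu,g}}{\sigma^2_{\mu,g} + \sigma^2_{s,g}}
\]
```

In this notation, a noisier signal for one type of patient (a larger signal variance) lowers the weight w_g and shifts the decision toward the type-specific prior mean, which is the statistical discrimination mechanism described above. The channels through which IT operates map onto these parameters: DP-type tools can change the prior mean and the physician's certainty about it, EMR-type tools can reduce the signal noise, and either change alters w_g and hence the relative reliance on priors versus signals.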
One type of IT, such as electronic medical records (EMR),4 provides physicians with less expensive, faster access to patient-specific information. This access can increase a physician's use of appropriate patient-specific signals and facilitate customization by providing immediate, detailed information about a patient's medical history, prior lab results, prescription fill dates, and other relevant details. By providing this information directly, EMR obviates the physician's need to rely on patient demographics or interactions with the patient to make inferences about patient-specific signals. Because such inferences are subject to both prejudice and statistical discrimination, EMR can reduce disparities and promote care that is equally well tailored across different types of patients.
Another type of IT offers clinical decision support (CDS), such as electronic reminders about practice guidelines or protocols and tools that provide epidemiologic base rates for disease prevalence (hereafter abbreviated “DP”). DP can substitute for other sources of information used to form priors, thereby altering the priors themselves. DP may be more accurate than other sources that inform priors, such as training earlier in the career, and can reduce disparities by providing information that substitutes for misinformed priors, including those due to stereotypes. In addition to altering the prior, DP can also increase physicians' reliance on the prior by increasing the physicians' certainty about it relative to the signal. If the underlying epidemiological prevalence rates or the guidelines for care vary across patient demographics, as they do for CHD, DP can reinforce physicians' reliance on observed patient demographics, creating differences in care for patients with identical symptoms. Other types of CDS, however, can increase the reliance on signals by guiding or reminding physicians about how best to treat a patient with the given symptoms. This has ambiguous effects on disparities: it improves care given the signals provided, but the physician's acquisition and provision of those signals to the IT can incorporate uncertainty or bias.
A third type of IT connects the physician with others. One group within this function includes email with patients or web consults with other physicians.5 Another group within this function includes computerized physician order entry and e-prescribing. These do not have any direct impact on physician information sources and so do not relate to the Bayesian framework. However, they may be important in altering disparities that arise from differences across physicians, as we discuss in the next section.
Finally, the data generated by IT can provide feedback to physicians about their own performance. Research has shown that feedback standardizes care and increases the provision of care recommended by guidelines (Masi, Blackman & Peek, 2007; Peek, Cargill & Huang, 2007). If within-provider disparities exist, the provision of feedback can help close them by prompting providers to focus on the cases where they fall short of the guidelines (Bradley et al., 2005).
IT and Between-Physician Disparities
IT also has ambiguous implications for disparities that result from between-physician differences in care. This ambiguity exists even if rates of adoption among physicians are uniform across patient types. As with within-physician disparities, the direction of the change depends on whether IT serves as a complement or substitute to other sources of knowledge and information. For example, IT has the potential to diminish disparities if it substitutes for training, experience or other factors that contribute to lower quality among physicians who predominantly treat disadvantaged patients (Bao et al., 2006; Bach et al., 2004). IT's coordination effects could also reduce disparities if disadvantaged patients tend to be more complex and require more coordination, or if they are treated by more fragmented providers. Alternatively, if IT complements physician characteristics that already promote high quality care, it could exacerbate disparities if those complementary characteristics are not distributed evenly across types of patients. Disparities would also change if physicians' IT adoption rates differ across patient populations and IT improves quality. The current evidence is mixed regarding how adoption differs across patient populations (Blumenthal et al., 2008; Furukawa, Ketcham & Rimsza, 2007; Menachemi et al., 2007; Miller et al., 2009).
Method
We used the context of CHD to consider IT's effects on disparities. Considerable evidence documents treatment disparities for CHD (Barnhart and Wassertheil-Smoller, 2006; Harries, Forrest, Harvey, McClelland, and Bowling, 2007) despite well-established treatment guidelines (Gibbons et al., 1999a; Gibbons et al., 1999b; McKinlay et al., 2007), due in part to the wide range of available treatment options. CHD is also among the most common and costly problems presented by older patients to primary care providers (Cohen & Krauss, 2003).
We focus on three issues: whether physicians' decisions vary with patient characteristics, IT's overall effects on those decisions, and how IT alters the differences across types of patients. As described below, our empirical approach limits us to considering only IT's ability to stimulate physician learning in ways that alter their decisions even when IT itself is not being used. Thus we analyze how physicians' decisions are affected when IT can alter physicians' priors (which can vary by patient type) but no IT is providing additional patient-specific information.
Data
Each physician viewed one video vignette of an actor “patient” who randomly varied by patient age (55 vs. 75), gender, race (black vs. white) and current or former job (janitor vs. school teacher). Patients in the vignette presented with signs and symptoms suggestive of CHD, including chest pain worsening with exertion, pain in the back between the shoulder blades, stress, the Levine fist, and elevated blood pressure. To accurately represent how actual patients present, the vignette also built in several red herring symptoms potentially indicative of a gastrointestinal (GI) diagnosis: the patient complained of indigestion, feeling worse after a large or spicy meal, describing the pain as similar to heartburn experiences in the past but unresponsive to antacids, and feeling full and “gassy.” The vignette also incorporated references to the patient's mood, including the spouse's report that the patient has been difficult to be around and the patient's self-report of feeling irritated and having decreased energy. Prior work has demonstrated that clinical vignettes produce unbiased estimates of the influence of systematically manipulated variables on medical decision making (Peabody et al., 2007; Barnhart & Wassertheil-Smoller, 2006; Currin, Schmidt, & Waller, 2007; Dresselhaus et al., 2000; Epstein et al., 2001; Kales et al., 2005a; Kales et al., 2005b; Sirovich et al., 2005). To ensure clinical authenticity, professional actors were selected for their comparability in appearance and trained under experienced physician supervision to portray a patient presenting with these signs/symptoms to a primary care provider.
Physicians who were eligible for selection were: (a) internists or family or general practitioners with M.D. degrees (physicians with D.O.s were excluded); (b) graduates from medical school between 1996-2001 or 1960-87 (for two distinct experience levels); and (c) currently working in primary care in North or South Carolina more than half-time (physicians were not recruited from the same practice). A letter of introduction was mailed to prospective participants, and screening telephone calls were conducted to identify eligible physicians. Appointments were scheduled with each eligible, willing participant at his/her office for a one-on-one, structured interview lasting one hour. Appointments were scheduled in physicians' offices for maximum convenience, and in between normal patient appointments so that physicians were immersed in their normal decision making environments. Physicians were recruited into four strata defined by gender and experience. Each participating physician was provided a stipend of $200. We interviewed 256 physicians over ten months in 2006-2007, equivalent to a response rate of 32%. To ensure quality in the data collection process, interviewers were carefully trained and certified; a selection of interviews was tape recorded and reviewed by supervisors on a regular basis; monthly conference calls were held with field interviewers and in-house staff; and a Co-Investigator made a field visit to observe interviews in person.
Vignettes were about seven minutes long and were shown at the start of the hour, after consent forms were signed. Vignettes were randomly assigned and each physician viewed one. After viewing the videotaped vignette, physicians were asked a series of questions regarding diagnosis and treatment of the patient they viewed. Questions used an open-ended format, with responses recorded verbatim and coded after the interview was completed. After this, physicians completed a self-administered survey regarding themselves and their practices, including use of IT.
Measures
Physicians provided their diagnoses, their certainty about that diagnosis, and various clinical actions for the vignette patient. Virtually all physicians (98.8%) identified CHD as a diagnosis, so we do not analyze it further. However, physicians varied in their degree of diagnostic certainty. The clinical actions we consider are the number of tests or procedures they would order (“tests ordered”), the number of medications prescribed, the number of questions physicians would ask the patient (“questions asked”), and the number of pieces of advice they would give.
Physicians reported their own use of various electronic tools in their practices by replying to the question, “Which of the following resources (programs or tools) do you use in your practice?”6 We focus on the three IT functions that can affect physician priors and influence physicians' decisions regarding vignette patients for whom patient-specific information was not provided via IT. These are feedback, which does not require IT but will be increased by IT adoption,7 and two measures of CDS: electronic tools for estimating individual patients' risk of specific diseases (DP) and electronic reminders. Table 1 reports additional details about variable definitions and their descriptive statistics.
Table 1. Survey questions and mean responses for variables used in this study.
Questionnaire Items & Variable Names | Mean | Standard Deviation |
---|---|---|
Certainty: Using a scale of 0-1, with 0 indicating no certainty and 1 indicating complete certainty, how certain are you that this patient has CHD (rescaled for our analysis) | 0.57 | 0.23 |
Tests: Which tests or lab work would you order today? | ||
Number of tests ordered | 6.52 | 2.42 |
Medications: Which medications would you prescribe today? | ||
Number of medications prescribed | 2.24 | 1.32 |
Advice: What specific advice would you offer this patient today? | ||
Pieces of advice given | 4.13 | 3.34 |
Questions: In addition to information elicited in the vignette, what other information would you like to obtain before deciding what's going on with the patient today? | ||
Number of questions asked | 11.19 | 6.04 |
Patient characteristics | ||
Percent Female (vs. Male) | 50.0 | - |
Percent 75 (vs. 55) | 50.0 | - |
Percent Black (vs. White) | 50.0 | - |
Percent Job=Teacher (vs. Job=Janitor) | 50.0 | - |
IT Variables (Percent Yes): Which of the following resources (programs or tools) do you use in your practice? Please select YES or NO for each of the following: | ||
DP: “PDA or computer-based tools for estimating individual patients' risk of specific diseases” | 51.0 | - |
Reminders: “Electronic reminders in either your medical records or your scheduling systems (electronic appointment or testing reminders, etc.)” | 53.4 | - |
Feedback: “Any types of audit, feedback, or tracking reports (electronic, paper, telephone or fax)” | 79.8 | - |
Proxy variables for unobserved MD characteristics | | |
Email: “Electronic communications (email or other web-based communications) with patients” (Percent Yes) | 28.2 | - |
“In our group practice we value information technologies” (Percent “often/a great extent” vs. “sometimes/not at all”) | 82.2 | - |
“In our group practice we are quick to adopt new techniques and practices” (Percent “often/a great extent” vs. “sometimes/not at all”) | 48.6 | - |
MD controls | ||
Experience (years) | 18.22 | 10.57 |
Percent Female (vs. Male) | 50.0 | - |
Percent US/Canadian Medical School (vs. Other) | 85.4 | - |
Percent (Graduate from top 25 schools) | 21.3 | - |
Percent (Practice Size Solo) | 21.0 | - |
Percent (Practice Size 2-3) | 22.6 | - |
Percent (Practice Size 4-10) | 31.0 | - |
Percent (Practice Size 11+) | 25.4 | - |
Percent South Carolina (vs. North Carolina) | 34.0 | - |
Percent Urban Area (vs. Rural Area) | 68.0 | - |
Analytic Strategy
Our use of vignette patients overcomes common analytical problems in studying clinical IT's effects. First, actor-patients do not have characteristics that are observable to the physician but not to us as researchers. This eliminates bias due to improper risk adjustment. However, vignettes incorporate the effects of prejudice or statistical discrimination (Balsa & McGuire, 2001; Schulman et al., 1999) because physicians' inferences about patient characteristics not provided in the vignette can vary based on their underlying cognitive processes. Second, some types of IT, such as email or the EMR itself, should not have any direct effect on physicians' decisions about treatment for a vignette patient, but they are indirectly related to treatment through their correlations with unobserved physician characteristics. Thus we used adoption of email with patients as a proxy for unobserved physician characteristics that influence their decisions to adopt IT as well as their clinical decisions. In an alternative approach, we proxied for these physician characteristics by instead including non-solo physicians' answers to two questions: “In our group practice we value information technologies,” and, “In our group practice we are quick to adopt new techniques and practices.”
The main limitation of this approach is that physicians did not use IT when responding to the vignettes. Vignette patients do not have EMR, and physicians were not permitted to use IT to inform their responses. Therefore, we only observe the learning effects of IT that spill over beyond immediate use with patients. Thus, we test within-physician spillovers from patients for whom physicians use IT to vignette patients for whom they do not. Other researchers have hypothesized even broader spillovers from physicians who use IT to those who do not (Javitt, Rebitzer & Reisman, 2007). By altering physician priors, the functions of IT we study here might create such within-physician spillovers. Reminders presented electronically during the course of caring for real patients might be recalled even when the reminder itself is not presented. Likewise, DP can influence diagnostic certainty and subsequent decisions by raising a physician's awareness of prevalence rates of various potential diagnoses even when the rates themselves cannot be viewed with IT. Feedback can promote learning in similar ways.
As the conceptual framework indicates, if disparities in the process of care exist and IT promotes standardization in physicians' use of patient-specific signals, then we expect IT will reduce or eliminate these differences across patient types. For example, feedback might prompt physicians to focus on improving care to patients where they have previously fallen short. Alternatively, if IT provides information that varies across patient types, then IT can create differences in care even where they did not exist previously by increasing physician focus on differences between patient characteristics rather than similarities in their symptoms. In the context of vignettes, this indicates lower quality for some types of patients, since the patients are presented with identical symptoms. In the epidemiologic literature, the prevalence of CHD is higher for male, black and older patients (Rosamond et al., 2008). If use of IT has taught physicians these differences, we expect that the 3 IT measures, and DP particularly, should increase diagnostic certainty for these types of patients. This would increase the differences in physician certainty across patient types. The work on statistical discrimination and related research indicates that this greater certainty will be accompanied by a greater number of medications prescribed (McGuire et al., 2007; Lutfey et al., 2009) and possibly tests ordered, depending on whether tests primarily substitute for or complement diagnostic certainty (Lutfey et al., forthcoming).8 We would similarly expect the use of IT to be associated with asking more questions and providing more advice, as physicians with this support may develop more extensive differential diagnoses and adhere more closely to the full range of clinical guidelines, including providing advice.
Because the IT measures were not randomized within the experiment, we used regressions to identify the incremental relationships between IT and process of care measures. These models include controls for vignette patient characteristics (age, gender, race and job) and physician characteristics: experience and experience squared, gender, whether s/he attended a US or Canadian (versus other foreign) medical school, whether s/he attended a US school ranked in the top 25 in 2007 (US News and World Report, 2007), practice size (solo, 2-3, 4-10, or 11 or more physicians), state (North Carolina versus South Carolina) and market type (urban versus rural). We implemented two specifications but report only one because the results were robust in all cases unless noted otherwise. In the version reported in the paper, we included email as a control variable, which should be significant only if it acts as a proxy for other unobserved physician factors that influence physicians' responses to the vignette questions. In the alternative specification we instead used as proxies the two questions on valuing information technology and adopting new techniques answered by group physicians.
Fractional logit models were used for the estimates of certainty because certainty ranged from zero to one (Papke & Wooldridge, 1996). Negative binomial models were used for the remaining dependent variables because they are all counts. Interaction terms are difficult to interpret in these non-linear models (Ai & Norton, 2003), so we considered IT's effects on disparities by repeating these models stratified by each patient characteristic (e.g., separately for males and females). For all results we report the incremental effects and their robust standard errors.
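To make the estimation strategy concrete, the sketch below shows one way models of this form could be fit in Python with statsmodels. It is not the code used for this paper; the data file and variable names (e.g., vignette_responses.csv, feedback, female_patient) are hypothetical placeholders, and the covariate list only approximates the controls described above.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per physician-vignette encounter.
df = pd.read_csv("vignette_responses.csv")

# Approximation of the control set described in the text (placeholder names).
controls = ("feedback + email + female_patient + age75 + black + janitor + "
            "experience + I(experience**2) + md_female + us_school + top25 + "
            "C(practice_size) + south_carolina + urban")

# Fractional logit for diagnostic certainty (bounded between 0 and 1): a
# binomial GLM with a logit link estimated by quasi-maximum likelihood
# (Papke & Wooldridge, 1996), with heteroskedasticity-robust standard errors.
frac_logit = smf.glm("certainty ~ " + controls, data=df,
                     family=sm.families.Binomial()).fit(cov_type="HC1")
print(frac_logit.summary())

# Negative binomial for the count outcomes (tests, medications, questions, advice).
neg_bin = smf.negativebinomial("n_tests ~ " + controls, data=df).fit(cov_type="HC1")

# Report incremental (marginal) effects rather than raw coefficients.
print(neg_bin.get_margeff(at="overall").summary())

# Stratified re-estimation approximates the interaction analysis: fit the same
# model separately by patient type (here, female vs. male vignettes) and
# compare the IT coefficients across strata.
for is_female, grp in df.groupby("female_patient"):
    strat_controls = controls.replace("female_patient + ", "")
    m = smf.glm("certainty ~ " + strat_controls, data=grp,
                family=sm.families.Binomial()).fit(cov_type="HC1")
    print("female_patient =", is_female,
          "feedback coefficient:", round(m.params["feedback"], 3),
          "robust SE:", round(m.bse["feedback"], 3))
```

One caveat worth noting: the coefficients from these non-linear models are not themselves the incremental effects reported in the tables; those would be computed as marginal effects, as the call to get_margeff illustrates for the count models.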
Findings
Table 2 reports unadjusted descriptive results for the relationships between IT use and physicians' diagnostic certainty and treatment decisions. Only one statistically significant difference exists at p < 0.05: physicians receiving feedback had greater diagnostic certainty (p = 0.025). Table 3 reports the incremental effects from separate regressions for each of the three IT variables. All models control for physician characteristics, but their results are not reported. The results indicate that feedback, DP and reminders do not have any statistically significant (at p < 0.05) overall effects on physicians' diagnostic certainty or decisions about the process of care in the context of vignettes of patients with CHD symptoms. The results indicate some significant differences across patient characteristics in diagnostic certainty and medications prescribed, but not for tests ordered. Specifically, physicians viewing female vignettes had lower certainty and prescribed fewer medications. Physicians also had significantly greater certainty for older patients. These differences in certainty by gender and age are consistent with the prevalence rates published by the American Heart Association (Rosamond et al., 2008). However, because the vignettes present identical signs and symptoms of CHD, the patient signals should lead to similarly high CHD certainty for all patient types. The results also show that physicians who viewed black vignette patients prescribed more medications. All of these differences across patient characteristics have virtually identical size and significance in the models with the other two IT functions and in the alternative (unreported) specifications.
Table 2. Unadjusted relationships between IT use and physician decisions and characteristics.
Overall | Feedback | Disease Prevalence | Electronic Reminders | |||||||
---|---|---|---|---|---|---|---|---|---|---|
mean | Yes | No | p-value | Yes | No | p-value | Yes | No | p-value | |
N | | 202 | 51 | | 130 | 125 | | 119 | 135 | |
Process of Care | ||||||||||
Certainty of CHD Diagnosis (0-1) | 0.57 | 0.59 | 0.51 | 0.025 | 0.56 | 0.59 | 0.211 | 0.57 | 0.57 | 0.957 |
Number of Tests Ordered | 6.15 | 6.54 | 6.51 | 0.938 | 6.61 | 6.44 | 0.581 | 6.67 | 6.37 | 0.325 |
Number of Medications Prescribed | 2.24 | 2.30 | 2.06 | 0.241 | 2.20 | 2.30 | 0.562 | 2.31 | 2.19 | 0.455 |
Number of Questions Asked | 11.19 | 11.27 | 10.92 | 0.715 | 11.23 | 11.22 | 0.993 | 10.59 | 11.89 | 0.087 |
Number of pieces of advice Given | 4.13 | 4.25 | 3.65 | 0.246 | 4.13 | 4.16 | 0.944 | 4.33 | 3.90 | 0.300 |
Physician characteristics | ||||||||||
Experience (years) | 18.2 | 18.6 | 16.7 | 0.271 | 15.5 | 21.0 | <0.001 | 17.4 | 19.1 | 0.183 |
Female (%) | 50.0 | 51.5 | 45.1 | 0.415 | 51.5 | 48.0 | 0.572 | 45.2 | 55.9 | 0.088 |
US/Canadian medical school (%) | 85.4 | 85.5 | 84.0 | 0.789 | 89.0 | 81.6 | 0.098 | 84.8 | 85.6 | 0.869 |
Top 25 medical school (%) | 21.3 | 19.0 | 32.0 | 0.046 | 26.8 | 16.0 | 0.037 | 17.4 | 26.3 | 0.090 |
South Carolina (%) | 34.0 | 33.2 | 35.3 | 0.774 | 33.1 | 34.4 | 0.823 | 40.7 | 25.4 | 0.010 |
Urban area (%) | 68.0 | 70.8 | 58.8 | 0.100 | 74.6 | 61.6 | 0.026 | 64.4 | 72.9 | 0.150 |
Practice size (number of MDs) | | | | 0.075 | | | 0.025 | | | 0.598 |
Solo (%) | 21.0 | 18.6 | 26.0 | | 15.0 | 26.6 | | 18.3 | 22.0 | |
2-3 (%) | 22.6 | 20.6 | 32.0 | | 19.7 | 25.8 | | 21.4 | 24.6 | |
4-10 (%) | 31.0 | 32.2 | 28.0 | | 37.8 | 24.2 | | 31.3 | 31.4 | |
11 or more (%) | 25.4 | 28.6 | 14.0 | | 27.6 | 23.4 | | 29.0 | 22.0 | |
NOTES: Sample sizes vary slightly across IT types because physicians did not answer all questions.
P-values are from t-tests for continuous variables and chi-square or Fisher's exact tests (as appropriate) for dichotomous/categorical variables.
Table 3. Incremental effects of information technology adoption and patient characteristics on the process of care.
Certainty of CHD Diagnosis (0-1) | Tests Ordered | Medications Prescribed | Questions Asked | Advice Given | |
---|---|---|---|---|---|
A. Feedback | |||||
Audit, feedback, or tracking reports | 0.065 [.039] | 0.104 [.376] | 0.318 [.189] | ‒0.151 [.835] | 0.762 [.506] |
Electronic Communications with Patients | 0.012 [.031] | 0.026 [.327] | ‒0.247 [.202] | ‒0.037 [.737] | ‒0.660 [.399] |
Patient characteristics | |||||
Gender = Female (Reference= Male) | ‒0.101 [.029]** | ‒0.548 [.303] | ‒0.345 [.160]* | ‒0.120 [.671] | ‒0.253 [.415] |
Age = 75 (Reference = 55) | 0.117 [.028]** | 0.007 [.296] | 0.008 [.166] | 0.275 [.664] | ‒0.215 [.398] |
Race = Black (Reference = White) | 0.012 [.029] | ‒0.126 [.291] | 0.387 [.166]* | ‒0.593 [.654] | 0.096 [.395] |
Job = Janitor (Reference = Teacher) | ‒0.030 [.028] | 0.465 [.291] | 0.022 [.158] | ‒0.142 [.634] | 0.165 [.404] |
B. Disease prevalence | |||||
IT to determine disease prevalence | ‒0.020 [.031] | 0.392 [.313] | 0.138 [.175] | 0.217 [.688] | 0.290 [.443] |
Electronic Communications with Patients | 0.022 [.031] | ‒0.098 [.321] | ‒0.264 [.203] | ‒0.115 [.737] | ‒0.718 [.430] |
Patient characteristics | |||||
Gender = Female (Reference = Male) | ‒0.108 [.029]** | ‒0.559 [.298] | ‒0.381 [.162]* | ‒0.151 [.660] | ‒0.351 [.418] |
Age = 75 (Ref = 55) | 0.118 [.028]** | 0.033 [.294] | 0.045 [.165] | 0.196 [.667] | ‒0.261 [.403] |
Race = Black (Ref = White) | 0.010 [.028] | ‒0.112 [.286] | 0.373 [.164]* | ‒0.533 [.650] | 0.133 [.394] |
Job = Janitor (Ref = Teacher) | ‒0.033 [.028] | 0.494 [.288] | 0.002 [.156] | ‒0.080 [.631] | 0.206 [.405] |
C. Electronic Reminders | |||||
Electronic Reminders | ‒0.004 [.030] | 0.304 [.298] | 0.178 [.163] | ‒0.167 [.689] | 0.679 [.397] |
Electronic Communications with Patients | 0.018 [.031] | 0.000 [.316] | ‒0.238 [.200] | ‒0.033 [.739] | ‒0.706 [.398] |
Patient characteristics | |||||
Gender = Female (Ref = Male) | ‒0.108 [.029]** | ‒0.516 [.300] | ‒0.358 [.164]* | ‒0.125 [.665] | ‒0.277 [.418] |
Age = 75 (Ref = 55) | 0.118 [.028]** | 0.025 [.297] | 0.036 [.166] | 0.267 [.666] | ‒0.249 [.401] |
Race = Black (Ref = White) | 0.009 [.029] | ‒0.122 [.288] | 0.379 [.168]* | ‒0.597 [.655] | 0.134 [.397] |
Job = Janitor (Ref = Teacher) | ‒0.033 [.028] | 0.464 [.289] | ‒0.001 [.157] | ‒0.136 [.635] | 0.162 [.406] |
NOTES: Robust standard errors are in brackets. Certainty was estimated with a fractional logit model and the remaining dependent variables were estimated by negative binomial models. All models include controls for the physician characteristics listed in the text.
* p < 0.05
** p < 0.01
Table 4 reports the incremental effects and their standard errors for each type of IT from models stratified by patient characteristic. Each reported result is from a separate model described above. The results indicate that different functions had different effects on reducing or increasing differences in diagnosis and treatment across patient types. Feedback increased certainty only for female patients,9 and this was accompanied by higher medication prescribing. Comparing the magnitudes to those in Table 3 suggests that the use of feedback eliminated the differences by gender in certainty and prescribing. In contrast, the other types of IT had no effect on the differences by gender observed in Table 3.
Table 4. Incremental effects of IT from models stratified by vignette patient characteristics.
Patient Gender | Patient Age | Patient Job | Patient Race | |||||
---|---|---|---|---|---|---|---|---|
Outcome Measure | Female | Male | 55 | 75 | Janitor | Teacher | Black | White |
Incremental effects for feedback | ||||||||
Certainty of CHD Diagnosis | 0.101 [.048]* | 0.007 [.055] | 0.027 [.059] | 0.101 [.054] | 0.070 [.057] | 0.075 [.056] | 0.059 [.062] | 0.071 [.051] |
Tests Ordered | 0.557 [.444] | ‒0.499 [.723] | ‒0.243 [.590] | 0.732 [.413] | 0.482 [.489] | ‒0.425 [.501] | 0.214 [.443] | 0.187 [.575] |
Medications Prescribed | 0.522 [.242]* | ‒0.013 [.270] | ‒0.037 [.246] | 0.769 [.257]** | 0.007 [.290] | 0.706 [.222]** | 0.094 [.296] | 0.502 [.265] |
Questions Asked | ‒0.279 [1.039] | ‒0.110 [1.505] | 0.173 [1.185] | ‒0.568 [1.305] | 0.097 [1.054] | ‒0.122 [1.184] | ‒0.197 [1.165] | ‒0.628 [1.288] |
Advice Given | 0.531 [.704] | 0.544 [.785] | 1.616 [.579]** | ‒0.270 [.914] | 1.715 [.665]* | ‒0.214 [.782] | 1.809 [.583]** | ‒0.462 [.851] |
Incremental effects for IT to determine disease prevalence | ||||||||
Certainty of CHD Diagnosis | ‒0.019 [.047] | ‒0.029 [.041] | ‒0.018 [.047] | 0.002 [.038] | 0.005 [.046] | ‒0.027 [.040] | ‒0.063 [.047] | 0.014 [.040] |
Tests Ordered | ‒0.198 [.413] | 0.885 [.418]* | ‒0.244 [.483] | 0.999 [.373]** | 0.15 [.423] | 0.633 [.421] | ‒0.156 [.435] | 0.914 [.446]* |
Medications Prescribed | 0.255 [.256] | 0.036 [.260] | 0.032 [.225] | 0.201 [.278] | 0.248 [.230] | 0.08 [.261] | 0.095 [.299] | 0.12 [.224] |
Questions Asked | ‒1.533 [.855] | 1.165 [1.145] | ‒0.428 [.819] | 0.819 [.933] | ‒0.121 [.895] | 0.338 [.933] | ‒0.166 [.985] | 0.737 [.944] |
Advice Given | 0.541 [.704] | 0.361 [.564] | 0.419 [.643] | 0.388 [.595] | 0.647 [.656] | 0.327 [.604] | 0.657 [.657] | 0.163 [.546] |
Incremental effects for Electronic Reminders | ||||||||
Certainty of CHD Diagnosis | ‒0.018 [.043] | ‒0.003 [.037] | 0.014 [.050] | ‒0.007 [.038] | 0.014 [.048] | ‒0.029 [.038] | ‒0.028 [.040] | 0.043 [.041] |
Tests Ordered | ‒0.091 [.406] | 0.736 [.432] | 0.487 [.482] | 0.159 [.315] | 0.057 [.387] | 0.853 [.390]* | 0.545 [.417] | ‒0.065 [.461] |
Medications Prescribed | 0.212 [.220] | 0.150 [.235] | 0.299 [.225] | 0.247 [.243] | 0.000 [.244] | 0.370 [.223] | 0.205 [.234] | 0.310 [.245] |
Questions Asked | ‒0.737 [.990] | 1.237 [.977] | 0.001 [.843] | ‒0.163 [1.066] | ‒0.150 [1.022] | 0.851 [.967] | ‒0.620 [.882] | ‒0.410 [1.152] |
Advice Given | 0.610 [.546] | 0.534 [.593] | 0.093 [.594] | 1.189 [.531]* | 0.711 [.642] | 0.798 [.543] | 0.960 [.579] | 0.677 [.608] |
NOTES: Robust standard errors are in brackets. Incremental effects and their standard errors are calculated from models identical to those in Table 3 but run separately for each type of patient. Because of the experimental design, these patient characteristics are independent of each other.
* p < 0.05
** p < 0.01
IT also appears to have altered the other differences seen in Table 3. None of the three functions had significant effects on certainty for either age of patient or on medications prescribed for either race of patient. Finally, Table 4 provides a number of examples where IT alters the process of care for some types of patients but not their counterparts, in contexts where we did not observe overall disparities. For example, DP increased tests ordered for patients who were male, older, or white but not for those who were female, younger, or black. These results by gender and age are consistent with the epidemiological prevalence rates and with our hypotheses, although they are not accompanied by increased diagnostic certainty as we expected. Following from evidence that women are underdiagnosed and undertreated more often than men, this result suggests that the use of DP worsens this disparity and creates statistical discrimination (with DP, priors about base rates are weighted more heavily than the presenting patient signals). The result for DP on race, on the other hand, is contrary to the prevalence rates.
Discussion and Implications
For more than thirty years (McDonald, 1976), health policy makers and researchers have proposed greater use of clinical IT to improve the quality of care. Much of the subsequent research focused on barriers to the implementation of IT, including difficulties with developing appropriate products and cultivating high acceptance rates among users (DesRoches et al., 2008; Lester et al., 2008 and Jaspers & Schmidt, 2006 provide reviews). But studies have often failed to find that physician IT improves quality overall. Likewise, our results suggest that IT has limited ability to promote physician learning in quality-improving ways. In the context of vignette patients presenting with symptoms of CHD, we found evidence that physician adoption of clinical decision support (CDS) does not typically promote physician learning in ways that alter diagnostic and treatment decisions overall. Specifically, physicians who had adopted feedback, reminders or IT to determine the patient's risk of specific diseases (DP) did not differ from non-adopters in diagnostic certainty, the number of lab tests ordered, questions asked, medications prescribed or advice given to a vignette patient. One striking example is that physicians who had adopted DP did not have different certainty in their diagnoses of CHD. Because physicians were not using clinical IT in the process of “treating” the vignette patient, these results indicate that IT does not create physician learning spillovers that influence physicians' decisions beyond the immediate patient context in which it is being used.
We also found that physicians varied their diagnostic and treatment decisions with observed patient characteristics. Physicians were less certain about a CHD diagnosis for younger patients and female patients. This lower certainty for females was accompanied by fewer medications. This result for medication is similar to other studies that have found that certainty itself affects subsequent decisions (McGuire et al., 2007; Balsa et al., 2005). We also found that physicians prescribed significantly more medications to black patients, in contrast with extant literature showing blacks are less likely to receive CHD treatment (Bhalotra et al., 2007).
As the Bayesian framework indicated, IT has complex implications for differences across patient types, and the implications vary across IT functions. In some cases, IT eliminates these differences; in others, it leaves them unchanged; and in yet others IT appears to create differences where they otherwise do not exist. Feedback eliminated the difference between male and female patients in physician certainty and medication prescribing. Because patients presented with identical symptoms, this suggests that feedback eliminated statistical discrimination in which the patient signals are underweighted. Feedback appears to alter physicians' decision-making process by prompting them to increase standardization and rely less on a patient's gender to make inferences about disease prevalence. To the extent that women are consistently underdiagnosed and undertreated in CHD care, this reduction in gender-based disparities is a critical result, especially given the relatively low cost of a Type I (false positive) error in detecting and treating CHD compared to a Type II (false negative) error. By contrast, use of IT that provides disease prevalence increased test ordering for males and older patients. Because this is consistent with the published prevalence rates, it suggests that DP causes physicians to rely more heavily on priors specific to patient types at the expense of ignoring the patient-specific signals. Thus DP, when used apart from EMR, appears to create disparities for identically-presenting patients.
Our results yielded these overall disparities and these different effects of IT on treatment despite the fact that use of vignette patients eliminates some sources of disparities, such as miscommunication. First, the actors for every patient type read identical scripts and adopted similar nonverbal cues. Second, vignette patients do not engage in two-way communication with physicians, and statistical discrimination might arise from differences in physicians' information-seeking from patients; however, we did not find that the number of questions physicians would have liked to ask varied with the race, gender, age or job of the vignette patient. Our analysis does capture statistical discrimination that results from differences in physicians' interpretations of identical signals, or from differences in their priors across patient types (Balsa et al., 2005).
Our use of vignettes had the notable benefit of considering these physician cognitive processes without empirical bias due to incomplete controls for patient characteristics. Three steps minimized the potential for physicians to behave differently in the experiment than they do with real patients. First, the vignettes were performed by professional actors under an experienced clinician's oversight. Efforts to ensure the clinical authenticity of the videotaped presentation resulted in 89.8% of physician respondents indicating that the vignette patient was either very typical or reasonably typical of patients they encounter in everyday practice. Second, the physicians viewed the vignette in the context of their practice day, so they likely treated real patients before and after they viewed the vignette patient. Third, physicians were explicitly instructed to view the patient as one of their own patients and to respond as they would typically respond in their own practice.
Our study has several additional strengths that fill gaps in our knowledge about information technology. Much of the previous work on IT adoption was based on single-institution studies, focused on single outcomes such as medical errors, or relied on expert opinions about IT's potential future effects (Bates, 2009). Much less is known about how IT functions across a broad range of institutions, how decision outcomes vary by type of IT, or how process outcomes vary by patient and provider characteristics. Our approach incorporates the range of experiences with IT at the time the survey was conducted, when physicians varied in how long they had been using IT, the specific features of their IT, and how they used IT. As a result, our paper analyzes how IT is used by physicians in current clinical practice, rather than offering a projection based on ideal best practices. Previous work has suggested that the variation in implementation and use of IT explains differences in IT's success to date. Our paper adds to that complexity by highlighting that IT's effects on quality can also differ by patient types in ways that may exacerbate health disparities, and that these effects vary by IT function. Perhaps most critically, our experimental design allows for identical presentation of signs and symptoms, which allows us to isolate how physicians interpret and use identical signals from different types of patients.
Analysis of real patients can consider a wider range of IT's potential effects on disparities, such as its ability to improve coordination, which might have greater benefits for some types of patients, or simply its ability to generate more data with which to study disparities, their causes and the effects of interventions on them. However, such analysis is more susceptible to bias from unobserved patient and physician characteristics. A notable exception is the existence of field experiments in which particular functions of IT are randomly assigned to physicians (Javitt et al., 2007; Rebitzer et al., 2008). Additional analysis of experiments that randomize physicians' use of IT can provide policymakers and managers with a richer understanding of IT's effects on health care and health care disparities. In our approach, the ability to interpret the results for IT as causal hinges on the proxy variables accounting for the typically unobserved confounding physician characteristics. This approach is imperfect if, for example, our measure of feedback also incorporates the effects of participation in pay for performance or in managed care insurers' utilization management programs (which we could not observe) but our proxy variables do not incorporate these effects.
One important managerial implication of our results is that IT should indicate where disease prevalence or appropriate treatments differ by characteristics that are readily observable to physicians. Feedback, in particular, is not typically provided to physicians for specific types of patients.10 At the same time, IT functions that provide information that increases the accuracy and certainty of physicians' priors should be used in conjunction with functions that increase the role of patient-specific signals, such as EMR, guidelines and reminders. Given the range of effects of IT on disparities, potential policies aimed at increasing IT adoption and those designed to decrease disparities may be at odds with each other. Using a Bayesian framework to understand how IT influences providers' decision making can help policy makers identify contexts in which these two goals are in discord. This approach can help guide the design and implementation of IT toward improving quality for all patients.
Footnotes
2. It is important to note that published base rates themselves can incorporate errors that differ by patient type due to differences in clinicians' decisions that lead to diagnosis. See Arber et al. (2006) and McKinlay (1996).
3. McGuire et al. (2007) elaborate further, stating, “Statistical discrimination can be in the minority patient's best interest if physicians use reliable group differences in the absence of reliable data about the individual. Nonetheless, if physicians must rely less on individual level information, treatment for minority patients is less likely to be matched well to their individual needs.”
4. We rely on the definitions of the IT functions provided by the Congressional Budget Office (2008).
5. These two functions are not defined in the Congressional Budget Office report, but email and web consults/web chats in this context are very similar to those commonly used in personal communications.
6. Although the question asks about the physician's individual use of IT, it could have been misconstrued to ask whether the practice had adopted IT. Fortunately, DesRoches et al. (2008) found that these were virtually identical, noting, “The survey assessed physicians' access to various functions and whether the functions were used. However, since the overwhelming majority of physicians said they used most available functions, we primarily report findings on the availability of electronic health records in the office setting.” (Emphasis added.)
7. We are not aware of any empirical evidence that directly demonstrates that use of clinical IT increases the provision and use of feedback to providers. However, because IT captures the data needed to generate such feedback, belief in this link is widespread, as evident in Blumenthal et al. (2008), which states, “The availability of real time information and decision support for practicing clinicians; producing information for accountability and feedback; the opportunity to link incentives to the information made available through these applications—all these outcomes strongly suggest that electronic records have substantial potential to improve quality.”
8. Previous work on CHD has shown that increased diagnostic certainty is associated with increased testing. We suspect this result is condition specific, such that testing in this case is not used to explore candidate diagnoses but rather to confirm a suspected one. Our clinical consultant corroborates this explanation by noting that CHD is not a clinical diagnosis, like depression, but rather requires confirming tests to provide objective evidence of the diagnosis. Tests for CHD also assess severity rather than simply confirming a diagnosis, which informs the decisions about the type of treatment. Furthermore, in the US, physicians have pressure to document a serious diagnosis like CHD in order to obtain reimbursement. For all of these reasons, tests in this clinical context are likely to complement diagnostic certainty, particularly given the relatively low stakes of “false positives” from the subjective phase of the diagnosis studied here. For other conditions we expect the calculus would differ and result in different associations.
9. These results were replicated in one of the alternative specifications but not the other.
10. Bickell et al. (2008) and Sehgal (2003) are two examples where feedback was not stratified by patient type despite an explicit interest in using quality improvement to reduce disparities.
References
- Ai C, Norton EC. Interaction terms in logit and probit models. Economics Letters. 2003;80:123–129. [Google Scholar]
- Amarasingham R, Plantinga L, Diener-West M, Gaskin DJ, Powe NR. Clinical Information Technologies and Inpatient Outcomes. Archives of Internal Medicine. 2009;169(2):108–114. doi: 10.1001/archinternmed.2008.520. [DOI] [PubMed] [Google Scholar]
- Arber S, McKinlay JB, Adams, Marceau LD, Link CL, O'Donnell AB. Influence of patient characteristics on doctors' questioning and lifestyle advice for coronary heart disease: A UK/US video experiment. British Journal of General Practice. 2004;54:673–678. [PMC free article] [PubMed] [Google Scholar]
- Arber S, McKinlay JB, Adams, Marceau LD, Link CL, O'Donnell AB. Patient characteristics and inequalities in doctors' diagnostic and management strategies relating to CHD: A Video-simulation experiment. Social Science and Medicine. 2006;62:103–15. doi: 10.1016/j.socscimed.2005.05.028. [DOI] [PubMed] [Google Scholar]
- Azoulay P. Do pharmaceutical sales respond to scientific evidence? Journal of Economics and Management Strategy. 2002;11(4):551–594. [Google Scholar]
- Bach PB, Pham HH, Schrag D, Tate RC, Hargraves JL. Primary care physicians who treat blacks and whites. The New England Journal of Medicine. 2004;351(6):575–584. doi: 10.1056/NEJMsa040609. [DOI] [PubMed] [Google Scholar]
- Balsa AI, McGuire TG, Meredith LS. Testing for statistical discrimination in health care. Health Services Research. 2005;40(1):227–252. doi: 10.1111/j.1475-6773.2005.00351.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Balsa AI, McGuire TG. Statistical discrimination in health care. Journal of Health Economics. 2001;20:881–907. doi: 10.1016/s0167-6296(01)00101-1. [DOI] [PubMed] [Google Scholar]
- Balsa AI, McGuire TG. Prejudice, clinical uncertainty and stereotyping as sources of health disparities. Journal of Health Economics. 2003;22:89–116. doi: 10.1016/s0167-6296(02)00098-x. [DOI] [PubMed] [Google Scholar]
- Bao Y, Fox SA, Escarce JJ. Socioeconomic and racial/ethnic differences in the discussion of cancer screening: “Between-“versus “within-“physician differences. Health Services Research. 2006;42(3):950–970. doi: 10.1111/j.1475-6773.2006.00638.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Barnhart JM, Wassertheil-Smoller S. The effect of race/ethnicity, sex, and social circumstances on coronary revascularization preferences: a vignette comparison. Cardiology in Review. 2006;14:215–22. doi: 10.1097/01.crd.0000214180.24372.d5. [DOI] [PubMed] [Google Scholar]
- Bates DW. The Effects of health information technology on inpatient care. Archives of Internal Medicine. 2009;169(2):105–7. doi: 10.1001/archinternmed.2008.542. [DOI] [PubMed] [Google Scholar]
- Bickell NA, Shastri K, Fei K, Oluwole S, Godfrey H, Hiotis K, Srinivasan A, Guth AA. A tracking and feedback registry to reduce racial disparities in breast cancer care. Journal of the National Cancer Institute. 2008;100(23):1717–1723. doi: 10.1093/jnci/djn387. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bird CE, Fremont AM, Bierman AS, Wickstrom S, Shah M, Rector T, Horstman T, Escarce JJ. Does quality of care for cardiovascular disease and diabetes differ by gender for enrollees in managed care plans? Women's Health Issues. 2007;17(3):131–8. doi: 10.1016/j.whi.2007.03.001. [DOI] [PubMed] [Google Scholar]
- Bhalotra S, Ruwe MB, Strickler GK, Ryan AM, Hurley CL. Disparities in utilization of coronary artery disease treatment by gender, race, and ethnicity: opportunities for prevention. Journal of the National Black Nurses Association. 2007;18(1):36–49. [PubMed] [Google Scholar]
- Blumenthal, DesRoches DC, Donelan K, Ferris T, Jha A, Kaushal R, Rao S, Rosenbaum S, Shield A. Health Information Technology in the United States: Where We Stand, 2008. Robert Wood Johnson Foundation; 2008. Available online http://www.rwjf.org/files/research/3297.31831.hitreport.pdf. [Google Scholar]
- Bradley EH, Herrin J, Mattera JA, Holmboe ES, Wang Y, Frederick P, Roumanis SA, Radford MJ, Krumholz HM. Quality improvement efforts and hospital performance: Rates of beta-blocker prescription after acute myocardial infarction. Medical Care. 2005;43(3):282–292. doi: 10.1097/00005650-200503000-00011.
- Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, Morton SC, Shekelle PG. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Annals of Internal Medicine. 2006;144(10):742–752. doi: 10.7326/0003-4819-144-10-200605160-00125.
- Congressional Budget Office. Evidence on the Costs and Benefits of Health Information Technology. 2007. Available online: cbo.gov/ftpdocs/91xx/doc9168/05-20-HealthIT.pdf. Accessed November 5, 2008.
- Crilly M, Bundred P, Hu X, Leckey L, Johnstone F. Gender differences in the clinical management of patients with angina pectoris: a cross-sectional survey in primary care. BMC Health Services Research. 2007;7:142. doi: 10.1186/1472-6963-7-142.
- Currin L, Schmidt U, et al. Variables that influence diagnosis and treatment of the eating disorders within primary care settings: a vignette study. International Journal of Eating Disorders. 2007;40(3):257–62. doi: 10.1002/eat.20355.
- DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care—a national survey of physicians. The New England Journal of Medicine. 2008;359(1):50–60. doi: 10.1056/NEJMsa0802005.
- Dresselhaus TR, Peabody JW, et al. Measuring compliance with preventive care guidelines: standardized patients, clinical vignettes, and the medical record. Journal of General Internal Medicine. 2000;15(11):782–8. doi: 10.1046/j.1525-1497.2000.91007.x.
- Epstein SA, Gonzales JJ, et al. Are psychiatrists' characteristics related to how they care for depression in the medically ill? Results from a national case-vignette survey. Psychosomatics. 2001;42(6):482–9. doi: 10.1176/appi.psy.42.6.482.
- Gibbons RJ, Chatterjee K, Daley J. ACC/AHA/ACP-ASIM guidelines for the management of patients with chronic stable angina: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Patients With Chronic Stable Angina). Journal of the American College of Cardiology. 1999a;33(7):2092–197. doi: 10.1016/s0735-1097(99)00150-3.
- Gibbons RJ, Chatterjee K, Daley J. ACC/AHA/ACP-ASIM guidelines for the management of patients with chronic stable angina: executive summary and recommendations. A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on Management of Patients with Chronic Stable Angina). Circulation. 1999b;99(21):2829–48. doi: 10.1161/01.cir.99.21.2829.
- Harries C, Forrest D, Harvey N, McClelland A, Bowling A. Which doctors are influenced by a patient's age? A multi-method study of angina treatment in general practice, cardiology and gerontology. Quality and Safety in Health Care. 2007;16:23–7. doi: 10.1136/qshc.2006.018036.
- Holmes JS, Arispe IE, Moy E. Heart disease and prevention: race and age differences in heart disease prevention, treatment, and mortality. Medical Care. 2005;43(3 Suppl):I33–41.
- Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: The National Academies Press; 2001.
- James TL, Feldman J, Mehta SD. Physician variability in history taking when evaluating patients presenting with chest pain in the emergency department. Academic Emergency Medicine. 2006;13(2):147–52. doi: 10.1197/j.aem.2005.08.007.
- Jaspers MW, Knaup P, et al. The computerized patient record: where do we stand? Yearbook of Medical Informatics. 2006:29–39.
- Javitt JC, Rebitzer JB, Reisman L. Information technology and medical missteps: Evidence from a randomized trial. NBER Working Paper No. 13493. 2007. doi: 10.1016/j.jhealeco.2007.10.008.
- Kales HC, Neighbors HW, et al. Race, gender, and psychiatrists' diagnosis and treatment of major depression among elderly patients. Psychiatric Services. 2005;56(6):721–8. doi: 10.1176/appi.ps.56.6.721.
- Kales HC, Neighbors HW, et al. Effect of race and sex on primary care physicians' diagnosis and treatment of late-life depression. Journal of the American Geriatrics Society. 2005;53(5):777–84. doi: 10.1111/j.1532-5415.2005.53255.x.
- Langley J, Beasley C. Health information technology for improving quality of care in primary care settings. Rockville, MD: Agency for Healthcare Research and Quality; 2007. Prepared by the Institute for Healthcare Improvement for the National Opinion Research Center under contract No. 290-04-0016. AHRQ Publication No. 07-0079-EF.
- Lutfey KE, Ketcham JD. Patient and provider assessments of adherence and the source of disparities: Evidence from diabetes care. Health Services Research. 2005;40(6):1803–1817. doi: 10.1111/j.1475-6773.2005.00433.x.
- Lutfey KE, Link CL, Grant RW, Marceau LD, McKinlay JB. Is certainty more important than diagnosis for understanding race and gender disparities? An experiment using coronary heart disease and depression case vignettes. Health Policy. 2009;89(3):279–287. doi: 10.1016/j.healthpol.2008.06.007.
- Lutfey KE, Link CL, Grant RW, Marceau LD, Adams A, Arber S, Siegrist J, Bönte M, Knesebeck O, McKinlay JB. Diagnostic certainty as a source of medical practice variation in coronary heart disease: Results from a cross-national experiment of clinical decision making. Medical Decision Making. Forthcoming. doi: 10.1177/0272989X09331811.
- Martin R, Gordon EE, Lounsbury P. Gender disparities in the attribution of cardiac-related symptoms: contribution of common sense models of illness. Health Psychology. 1998;17(4):346–57. doi: 10.1037//0278-6133.17.4.346.
- Masi CM, Blackman DJ, Peek ME. Breast cancer screening, diagnosis, and treatment among racial and ethnic minority women. Medical Care Research and Review. 2007;64(5 Suppl):195S–242S. doi: 10.1177/1077558707305410.
- McDonald CJ. Computer reminders, the quality of care and the nonperfectability of man. New England Journal of Medicine. 1976;295:1351–5. doi: 10.1056/NEJM197612092952405.
- McGuire TG, Ayanian JZ, Ford DE, Henke REM, Rost KM, Zaslavsky AM. Testing for statistical discrimination in panel data for depression treatment in primary care. Health Services Research. 2007;43(2):531–551. doi: 10.1111/j.1475-6773.2007.00770.x.
- McKinlay JB. Some contributions from the social system to gender inequalities in heart disease. Journal of Health and Social Behavior. 1996;37(1):1–26.
- McKinlay JB, Link CL, Freund KM, Marceau L, O'Donnell A, Lutfey K. Sources of variation in physician adherence with clinical guidelines: Results from a factorial experiment. Journal of General Internal Medicine. 2007;22:289–96. doi: 10.1007/s11606-006-0075-2.
- McKinlay JB, Link CL, Arber S, Marceau L, O'Donnell A, Adams A, Lutfey K. How do doctors in different countries manage the same patient? Results of a factorial experiment. Health Services Research. 2006;41:2182–2200. doi: 10.1111/j.1475-6773.2006.00595.x. Erratum in 41(6):2303.
- Menachemi N, Matthews MC, Ford EW, Brooks RG. The influence of payer mix on electronic health record adoption by physicians. Health Care Management Review. 2007;32(2):111–118. doi: 10.1097/01.HMR.0000267791.02062.3f.
- Miller RH, D'Amato K, Oliva N, West CE, Adelson JW. California's digital divide: Clinical information systems for the haves and have-nots. Health Affairs. 2009;28(2):505–516. doi: 10.1377/hlthaff.28.2.505.
- Papke L, Wooldridge J. Econometric methods for fractional response variables with an application to 401(k) plan participation rates. Journal of Applied Econometrics. 1996;11:619–632.
- Peabody JW, Liu A. A cross-national comparison of the quality of clinical care using vignettes. Health Policy and Planning. 2007;22(5):294–302. doi: 10.1093/heapol/czm020.
- Peek ME, Cargill A, Huang ES. Diabetes health disparities: A systematic review of health care interventions. Medical Care Research and Review. 2007;64(5 Suppl):101S–156S. doi: 10.1177/1077558707305409.
- Popescu I, Vaughan-Sarrazin MS, Rosenthal GE. Differences in mortality and use of revascularization in black and white patients with acute MI admitted to hospitals with and without revascularization services. Journal of the American Medical Association. 2007;297(22):2489–95. doi: 10.1001/jama.297.22.2489.
- Rebitzer JB, Rege M, Shepard C. Influence, information overload, and information technology in health care. NBER Working Paper No. 14159. 2008.
- Rosamond W, Flegal K, Furie K, et al. Heart disease and stroke statistics—2008 update. Circulation. 2008;117:e25–e146. doi: 10.1161/CIRCULATIONAHA.107.187998.
- Schulman KA, Berlin JA, Harless W, Kerner JF, Sistrunk S, Gersh BJ, et al. The effect of race and sex on physicians' recommendations for cardiac catheterization. New England Journal of Medicine. 1999;340:618–626. doi: 10.1056/NEJM199902253400806.
- Sehgal AR. Impact of quality improvement efforts on race and sex disparities in hemodialysis. Journal of the American Medical Association. 2003;289(8):996–1000. doi: 10.1001/jama.289.8.996.
- Sirovich BE, Gottlieb DJ, et al. Variation in the tendency of primary care physicians to intervene. Archives of Internal Medicine. 2005;165(19):2252–6. doi: 10.1001/archinte.165.19.2252.
- van Ryn M, Fu SS. Paved with good intentions: Do public health and human service providers contribute to racial/ethnic disparities in health? American Journal of Public Health. 2003;93(2):248–255. doi: 10.2105/ajph.93.2.248.