Abstract
Objective: To introduce the Q-methodology research technique to the field of health informatics. Q-methodology—the systematic study of subjectivity—was used to identify and categorize the opinions of primary care physicians and medical students, contributing to our understanding of their reasons for acceptance of and/or resistance to adopting information technologies in the health care workplace.
Design: Thirty-four physicians and 25 medical students from the Chicago area were surveyed and asked to rank-order 30 opinion statements about information technologies within the health care workplace. The Q-methodology research technique was employed to structure an opinion typology from their rank-ordered statements. (The rank-ordered sorts were subjected to correlation and by-person factor analysis to obtain groupings of participants who sorted the opinion statements into similar arrangements.)
Results: The typology for this study revealed groupings of similar opinion types associated with the likelihood that physicians and medical students would adopt information technology in their health care workplace. Six opinion types were identified: (1) Full-Range Adopters; (2) Skills-Concerned Adopters; (3) Technology-Critical Adopters; (4) Independently-Minded and Concerned; (5) Inexperienced and Worried; and (6) Business-Minded and Adaptive. In the application of Q-methodology, the domain is subjectivity and research is performed on small samples. The methodology combines qualitative and quantitative research techniques to reveal dimensions of subjective phenomena from a perspective intrinsic to the individual, to determine what is statistically different about those dimensions, and to identify characteristics of individuals who share common viewpoints. Low response rates do not bias Q-methodology because the primary purpose is to identify a typology, not to test the typology's proportional distribution within the larger population.
Conclusion: Q-methodology allows for the simultaneous study of objective and subjective issues to determine an individual's opinion and forecast their likelihood of adopting information technologies in the health care workplace. This study suggests that an organization's system implementers could employ Q-methodology to individualize and customize their approach to understanding the personality complexities of physicians in their organization and their willingness to adopt and utilize information technologies within the workplace.
In this study we introduce Q-methodology—a unique combination of qualitative and quantitative research techniques that permits the systematic study of subjectivity—to the field of health care informatics.1 Our purpose was to identify, categorize, and understand the opinions of Chicago-area primary care physicians and medical students regarding their acceptance of and/or resistance to adopting information technologies in the health care workplace.
Although Q-methodology has been applied to a broad range of scientific disciplines, including the political and communications sciences, psychology, nursing, medicine, and pharmacy, it has rarely been used within the field of health care informatics.1 The methodology was first developed in the 1930s by British physicist-psychologist William Stephenson.1,2,3 Notably, there are over 2,000 theoretical and applied papers addressing Q-methodology in print today.1 Riley and Lorenzi suggest that no matter how good the technology, it is always people who will ultimately determine whether a new system will work well.4,5 A number of studies report physician reluctance to use information technology.6,7,8,9 Physicians have voiced concerns about the effects of information technologies on patient care, including privacy, confidentiality, security, dehumanizing and depersonalizing effects on the patient-physician relationship, and over-standardization of health care.6,10,11,12,13 Physician resistance has been associated with numerous variables, such as fear of revealing ignorance, fear of an imposed discipline, fear of wasted time, fear of unwanted accountability, and fear of new demands.4,10,14,15 Researchers have long called for more refined assessments of attitudes to permit targeted educational interventions addressing information technologies.10,12
Although managers are confronted with more than one type of physician opinion, current survey studies designed to assess physician use continue to report opinions as one composite average opinion. For health care managers involved in the planning and implementation of new health information technologies, a single composite opinion profile summarizing physician concerns in their organization is not useful; when dealing with physicians, concerns can appear in varying combinations among individuals within the group.4 These combinations have received little research attention.
The qualitative methods of Q-methodology allow participants to express their subjective opinions, and its quantitative methods use factor-analytic data reduction and induction to provide insights into opinion formation as well as to generate testable hypotheses. Q-methodology emphasizes how and why people think the way they do; it does not count how many people think a certain way. The goal of Q-methodology is, first and foremost, to uncover different patterns of thought, not their numerical distribution among the larger population. Studies using Q-methodology typically use small sample sizes, and their results are less influenced by low response rates than are the results of survey studies.1,2,3
Methodology
Q-methodology uncovers and identifies the range of opinions regarding a specific topic under investigation. The methodology involves three stages: Stage one involves developing a set of statements to be sorted; stage two requires participants to sort the statements along a continuum of preference; and in stage three the data are analyzed and interpreted.1
The research instrument is the set of opinion statements, called a Q-sample. The goal in instrument development is to comprehensively represent the discussion about a particular topic in the participants' own words and language. Opinion statements are most typically collected through personal interviews and focus group discussions. In addition, printed sources such as editorials, publications, essays, or any other sources germane to the issue may be used. This collection of items, called the concourse, is not restricted to words and could include paintings, pieces of art, photographs, and even musical selections.1
From the concourse, a subset of statements is selected to form the Q-sample: the group of statements to be rank-ordered by the test subjects. The goal of the Q-sample is to provide, in miniature, the comprehensiveness of the larger process being modeled.1 The concourse is sampled systematically; random and cluster sampling techniques are applied. To ensure content validity, sample statements are usually reviewed by domain experts and tested in one or more pilot studies. With respect to comprehensiveness and representativeness, instrument design in Q-methodology is performed as carefully as participant selection is in survey studies. Study participants typically receive a set of randomly numbered opinion statements (each printed on a separate card); a sheet with sorting instructions, called a condition of instruction; and an answer sheet to record the chosen order of statements. Personal opinion of a situation is operationalized as data through the individual's rank-ordering of opinion statements.1,2 Most typically, a participant is asked to rank-order the statements from agree to disagree, a process referred to as Q-sorting (Appendix A). The statements are matters of opinion only, not fact. Q-methodology assumes that opinions are subjective and can be shared, measured, and compared.1,2,3,16,17 The answer sheet used in Q-methodology forces the Q-sort into the shape of a quasi-normal distribution. Fewer statements can be placed at the extreme ends, and more are allowed in the middle area. The middle represents the grey zone of almost neutral reaction. Both the symmetry and the predetermined number of statements in each category facilitate the quantitative methods of correlation and factor analysis.2,3,17
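The forced quasi-normal distribution can be made concrete with a small sketch. The extreme pile sizes (2, 3, 3) match the scoring described later in this paper; the middle pile sizes below are illustrative assumptions, since the study does not report them.

```python
# Illustrative forced distribution for a 30-statement Q-sort across nine
# piles (-4 ... +4). Pile sizes at the extremes (2, 3, 3) follow this
# study's scoring; the middle pile sizes are assumed for illustration.
pile_sizes = {-4: 2, -3: 3, -2: 3, -1: 4, 0: 6, 1: 4, 2: 3, 3: 3, 4: 2}

# A valid forced distribution is symmetric (quasi-normal) and uses
# every statement exactly once.
assert sum(pile_sizes.values()) == 30
assert all(pile_sizes[k] == pile_sizes[-k] for k in pile_sizes)

# Fewer slots at the extremes than in the middle forces participants
# to commit to only a few "most agree"/"most disagree" statements.
assert pile_sizes[4] < pile_sizes[0]
print("valid forced distribution")
```

The symmetry check is what makes every participant's sort directly comparable in the later correlation step.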
Analysis
In contrast to most qualitative methods, Q-data are readily amenable to numerical analyses. Quantitative data reduction helps to detect patterns and connections that otherwise might be passed over by nonstatistical methods of data analysis. In Q-methodology, data analysis uses correlation and by-person factor analysis, that is, statistical analysis is not performed by variable, trait, or statement, but rather by person. People correlate to others with similar opinions based on their Q-sorts. Rather than groupings of traits, such as, years of computing experience, age, or sex,18 Q-methodology results in the grouping of expressed opinion profiles based on the similarities and differences in which the statements are arranged by each participant.1,3,19
To begin data analysis, each person's rank-ordered sort of statements is transformed into an array of numerical data. (In this study, for example, the two statements placed at the Most Agreeable end of the distribution received scores of +4, the next three statements received scores of +3, the next three +2, and so forth, down to the two statements found Most Disagreeable, which received scores of -4.) Statements placed in the middle of the bell-shaped curve by the subjects are assigned scores of 0. Each person's array of numerical data is then intercorrelated with the arrays of all the others.* The resulting correlation matrix shows which participants sorted the statements into similar orders. The correlation matrix is then subjected to factor analysis to obtain groupings of data arrays that are highly correlated.† This determines the factors that represent clusters of participants with similar opinions. In the practice of Q-methodology, people who are associated with one factor have something in common that differentiates them from those who are associated with the other factors. Factor loadings show each participant's association with each of the identified opinion types. A factor loading of 0.80, for example, means that a person's statement array is highly correlated with that factor. Like other correlation coefficients, factor loadings can range from -1.00 through 0 to +1.00 (Table 1).
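The by-person correlation at the heart of this step can be sketched in a few lines. The four participant sorts below are hypothetical (and shortened to ten statements for brevity), not data from this study.

```python
import numpy as np

# Hypothetical Q-sort score arrays for four participants. Each array
# holds one person's score (+4 ... -4) for the same ordered list of
# statements.
sorts = {
    "A": [ 4,  3,  2,  1,  0,  0, -1, -2, -3, -4],
    "B": [ 4,  2,  3,  1,  0, -1,  0, -2, -3, -4],  # similar to A
    "C": [-4, -3, -2, -1,  0,  0,  1,  2,  3,  4],  # mirror image of A
    "D": [-4, -2, -3, -1,  0,  1,  0,  2,  3,  4],  # similar to C
}

data = np.array(list(sorts.values()), dtype=float)

# np.corrcoef treats each ROW as one variable, so passing one row per
# person yields the by-person correlation matrix: entry (i, j) is the
# correlation between person i's and person j's statement rankings.
r = np.corrcoef(data)

print(np.round(r, 2))
```

Here A and B correlate at about .97 while A and C correlate at exactly -1.0; it is this person-by-person matrix, not a statement-by-statement one, that is submitted to factor analysis.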
Table 1.

Factors/Opinion Types

| Participant | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| A | .36 | .20 | .01 | .73* | -.06 | -.11 |
| B | -.12 | -.06 | .23 | -.09 | .66* | -.07 |
| C | .28 | .63* | .07 | .17 | .08 | .13 |
| D | .82* | .02 | -.09 | -.14 | .08 | .16 |
| E | .75* | -.15 | .38 | .06 | -.11 | .10 |
| F | -.03 | .10 | .12 | .13 | .02 | .64* |
| G | -.07 | .32 | .67* | .12 | -.06 | -.03 |

*Denotes a statistically significant factor loading (P < 0.01) in excess of 0.47.
Weighted averaging‡ is used to calculate statement scores, which reveal the level of agreement and disagreement that each statement receives within each of the identified opinion types. When all of the weighted average scores are obtained, statements are arranged in order of descending scores. This arrangement then forms the composite statement array (also referred to as model Q-sort) for this factor. To facilitate comparisons between factors, composite statement scores are transformed back into the whole-number scores (+4, +3, etc.) used in the original sorting process.1,2,3 The interpretation of factors in Q-methodology uses statement scores rather than factor loadings (as is typical in by-variable factor analysis). It involves comparison of statement scores across clusters of participants with similar opinions (factors). Particular attention is given to those statements that distinguish between factors and to those that receive extreme scores (at either end of the sorting continuum). The degree of correlation between factors is assessed. If permitted by the study design, post-sorting interviews are conducted to confirm the researcher's interpretations. The distinctive characteristics of Q-methodology are summarized in List 1.1,2,3
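The weighted-averaging step can be sketched as follows. The loadings and sorts are hypothetical, and the weight formula w = f/(1 - f²), common in Q-methodology practice, is an assumption about the weighting scheme rather than a detail reported here.

```python
import numpy as np

# Hypothetical: three participants defining one factor, with their
# factor loadings and their Q-sort scores for six statements.
loadings = np.array([0.80, 0.65, 0.55])
sorts = np.array([
    [ 4,  2,  1,  0, -2, -4],   # participant 1
    [ 4,  1,  2, -1,  0, -4],   # participant 2
    [ 3,  2,  0,  1, -3, -4],   # participant 3
], dtype=float)

# A common Q-methodology weighting: higher-loading sorts count more.
weights = loadings / (1.0 - loadings**2)

# Weighted average score for each statement within this factor.
composite = (weights[:, None] * sorts).sum(axis=0) / weights.sum()

# Rank statements by descending composite score, then map them back
# into whole-number scores using the original forced distribution
# (here: one +4, one +2, two 0s, one -2, one -4 -- illustrative only).
forced = [4, 2, 0, 0, -2, -4]
order = np.argsort(-composite)
model_sort = np.empty_like(forced)
for rank, stmt in enumerate(order):
    model_sort[stmt] = forced[rank]

print("composite:", np.round(composite, 2))
print("model Q-sort:", model_sort)
```

The resulting model Q-sort is the factor's composite statement array, directly comparable across factors because every factor is mapped back onto the same whole-number scale.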
List 1.

Distinctive Characteristics of Q-methodology
Population Sampling

Specific sampling principles and techniques used in survey research are not necessarily relevant to person sampling in Q-methodology, given the contrasting research orientation and purpose. Participant selection can be governed by theoretical considerations (persons are chosen because of their special relevance to the goals of the study, or purposive sampling) or by pragmatic ones (anyone will suffice, or convenience sampling). Because of its intensive orientation, Q-methodology tends to use small person samples, and even single case studies, a preference in keeping with the behaviorist dictum that it is more informative to study one subject for 1,000 hours than 1,000 subjects for 1 hour.3

Validity

Due to its qualitative aspects, questions of research validity in Q-methodology are assessed differently than in quantitative research methods.29 The Q-sorting operation is wholly subjective in the sense that it represents "my point of view." There is no external criterion by which to appraise an individual's perspective.1 Each individual's rank-ordered set of statements is considered a valid expression of that individual's opinion.

Content validity of the Q-sample is addressed by thorough literature review and by eliciting the expert advice of those associated with the field under investigation. Face validity of the text and statement wording is addressed by leaving the statements in the participants' words, edited only slightly for grammar and readability.

Item validity, as understood in more traditional survey research, does not apply to the study of subjectivity. In Q-methodology, one expects the meaning of an item to be interpreted individually. How each item was individually interpreted becomes apparent in the rank-ordering and in follow-up interviews.

Reliability

The reliability of Q-methodology has been established through test-retest studies and through replication of factor schematics. For test-retest reliability, studies have shown that administering the same instrument (Q-sample) to the same individuals at two points in time typically results in correlation coefficients of .80 or higher.2,30,31,32 Q-methodology has also produced consistent findings in two other types of study comparison: first, when administering the same set of statements to different person samples; and second, when pursuing the same research topic but using different sets of statements and different person samples.28,30,31,32,33 Regarding the reliability and stability of identified opinion clusters (schematics), findings were consistent when the instrument was administered to different person samples, and even when different Q-samples and person samples were used.33

Generalizability

Most Q-methodology studies are exploratory and qualitative in nature and tend not to use random sample designs. Generalizations rarely extend beyond the immediate set of participants and are typically not based on the numerical distribution of study participants among factors. The value of Q-methodology lies in uncovering valid and authentic opinion clusters. Once identified, their prevalence among the larger population can be subsequently tested using large group surveys and standard variance analytic methods. The purpose of a typology is not the creation of an exhaustive classification scheme, but to find something in the material worthy of classification and to provide some of the categories.34
Application
The University of Illinois at Chicago Medical Center (UICMC) is in the implementation phase of a replacement clinical information system that includes a data repository. The UICMC staff includes 850 attending and resident physicians. The long-term richness and quality of the repository are inevitably affected by the number of physicians directly interacting with our information system. By incorporating the methods and findings of this study into the systems training phase at UICMC, the implementation team was sensitized to the existing types of physician opinions. The team decided that, to optimize training efficiency and prevent culture clashes, a flexible mix of classroom, customized, and individualized training interventions would be the most acceptable and workable approach to educational interventions addressing health information technology in the workplace.
To develop the collection of statements (the concourse) for this study, group discussions incorporating focus group and nominal group techniques were used. In the next step of instrument development, the 118 collected opinion statements (addressing issues covered in the scientific literature) were sorted into groups that expressed similar or related ideas. Where a grouping contained only one statement, that statement was used; where a grouping contained multiple statements, the researchers selected one representative statement. This subset of statements is called the Q-sample. This systematic process of instrument development resulted in a Q-sample of 30 opinion statements that ensured comprehensiveness, balance, and representativeness (Appendix B). The Q-sample of statements and the instruction set were then pilot tested.
The final instrument included a set of randomly numbered opinion statements. The participants were asked to sort the statements into nine categories, ranging from Most Agree to Most Disagree (Fig. 1). For this study, a short questionnaire was added to obtain demographics, such as age, gender, and medical specialty, followed by sorting instructions for making further distinctions (available from the authors). Quantitative analysis of the Q-sorts using correlation and factor analysis was performed with PCQ™ (Stricklin, Lincoln, NE).20 PCQ uses traditional algorithms for statistical calculations and facilitates the tasks of data entry and results-reporting that are specific to Q-methodology.
Results
By-person factor analysis and varimax rotation extracted six opinion types representing six different primary care physician and medical student views regarding the use of information technologies in the health care workplace. The six opinion types were: (1) Full-Range Adopters; (2) Skills-Concerned Adopters; (3) Technology-Critical Adopters; (4) The Independently-Minded and Concerned; (5) The Inexperienced and Worried; and (6) The Business-Minded and Adaptive. Fifty-one of the 59 participant sorts (86%) were accounted for in the six opinion types (factors). Of the remaining eight sorts, four were not statistically significant (loadings less than 0.47)§ and four were confounded (loading significantly on more than one factor).1,20 In Q-methodology, data interpretation is based on examining and comparing the composite statement arrays, also known as factor scores, calculated for each factor (Table 2). Of the 51 sorts analyzed, about half of all participants were identified as Full-Range Adopters. The remaining half was distributed among the other five opinion types, each representing three to six participants (see List 2 and Table 3).
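The 0.47 significance threshold follows from the standard error of a zero-order factor loading, 1/√N, where N is the number of Q-sample statements; at P < 0.01 the cutoff is 2.58 standard errors. A quick check, applying this conventional Q-methodology formula to the study's 30 statements:

```python
import math

n_statements = 30                      # size of the Q-sample
se = 1.0 / math.sqrt(n_statements)     # standard error of a loading
threshold = 2.58 * se                  # 2.58 = z-score for P < 0.01

print(round(threshold, 2))             # 0.47
```

Any loading whose absolute value exceeds this threshold is treated as a statistically significant association with the factor.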
Table 2.

Factors

| Selected Statements* | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 3. I wonder whether I could ever take full advantage of computers? | -2 | +4 | +1 | +4 | +2 | 0 |
| 4. Confidentiality and security are bigger problems with computer vs. paper records. | -3 | -1 | +4 | +4 | +3 | +2 |
| 5. It's useful to print out patient education information: that is, what we did during the visit. | +2 | +4 | +3 | +2 | +1 | -1 |
| 7. Assessing performance is best done by observing, not by computer monitoring. | -1 | 0 | +2 | +3 | +4 | 0 |
| 8. Physician knowledge and critical thinking abilities will decrease. | -4 | 0 | -2 | -4 | +3 | -4 |
| 9. It will improve communications and access to records across one's office sites. | +1 | +3 | +4 | +1 | 0 | +4 |
| 10. Useful to obtain eligibility data and to consolidate insurer rules. | +1 | +1 | +1 | +1 | 0 | +4 |
| 11. Will use computers only when they are voice-activated. | -4 | -4 | -4 | -4 | -3 | -1 |
| 14. Interested in clinical information systems and repositories to further personal research. | 0 | -2 | -3 | +3 | +1 | -4 |
| 16. CPRs force physicians to take better notes (liability/malpractice). | 0 | -4 | -1 | 0 | -4 | -2 |
| 18. Use of information systems is good practice; it will improve patient care. | +4 | 0 | +2 | 0 | -2 | +1 |
| 21. Using computers in the room with a patient is depersonalizing. | -3 | -3 | +1 | -2 | +4 | 0 |
| 27. Centralized CPRs will decrease problems of redundancy and inconsistency. | +4 | +2 | 0 | 0 | -1 | +1 |
| 30. Diagnostic systems are best used in teaching, not in practice. | -3 | +1 | -4 | 0 | -4 | -3 |

CPRs = computer-based patient records.

*Statements that received extreme scores in any of the six factor arrays.
Table 3.

| Opinion Type | Envisioned Uses | Issues of Concern |
|---|---|---|
| Full-range adopters (Factor 1) | Improved patient/longitudinal care and office/patient management; CPRs reduce redundancy; communication links with sites, insurers, social service; access to literature | None shown |
| Skills-concerned adopters (Factor 2) | Improved office/patient management and longitudinal care; CPRs reduce redundancy; communication links with sites, insurers, social service | Computer skills |
| Technology-critical adopters (Factor 3) | Improved patient care and office/patient management; communication links with sites, insurers, social service | Confidentiality and security; computer-based performance assessment; computer skills; depersonalizing effect |
| The independently-minded and concerned (Factor 4) | Improved office/patient management; communication links with insurers; access to literature/research | Confidentiality and security; computer-based performance assessment; computer skills |
| The inexperienced and worried (Factor 5) | Small computer to carry; patient education; access to literature/research | Confidentiality and security; computer-based performance assessment; computer skills; depersonalizing effect; too much standardization; reduces critical thinking; time-consuming |
| The business-minded and adaptive (Factor 6) | Communication links with sites; improved patient/longitudinal care and patient management; CPRs reduce redundancy; small computer to carry; communication links with insurers; business/HMO contracts | Vendor manipulation; confidentiality and security; time-consuming |

CPRs = computer-based patient records.
List 2.

Q-methodology Factor Descriptions
Factor 1: Full-Range Adopters

Full-Range Adopters embraced a wide range of uses for information technologies. Factor one participants thought that the use of information systems will improve patient care (see Table 2, statement no. 18, composite score +4). They would like to use information technologies for a variety of applications, that is, as personal tools to access electronic journals and article databases; for office and patient management, including the use of flow sheets, automatic reminders, and drug interaction checking; and to facilitate communication across their office sites as well as with colleagues, insurers, and social service agencies. Full-Range Adopters did not display any concerns about possible negative impacts of the technologies.

Factor 2: Skills-Concerned Adopters

Skills-Concerned Adopters saw a similar range of uses, but expressed insecurity about their computer skills (Table 2, statement no. 3, composite score +4).

Factor 3: Technology-Critical Adopters

Technology-Critical Adopters also saw a wide range of uses, but were highly concerned about record confidentiality (no. 4, +4) as well as computer monitoring of their own actions (no. 7, +2).

Factor 4: The Independently-Minded and Concerned

This group showed a different scope of envisioned uses. Next to office management and communications with colleagues, they emphasized literature access and personal research. They were highly concerned about record confidentiality, their own computer skills, and performance assessment via computer monitoring. Their opinion profile appeared to stress the special nature of medical knowledge, trust in the doctor-patient relationship, as well as professional autonomy and self-regulation.

Factor 5: The Inexperienced and Worried

This group saw few benefits from the use of information technologies; these included office management, personal research, and access to electronic journals and article databases. They worried about performance assessment via computer monitoring, record confidentiality and security, their computer skills, depersonalizing effects, and over-standardization of medical care. Their opinion profile revealed worries about threats to professional autonomy, decreasing trust in the doctor-patient relationship, and the possibility that computers would be catalysts for the degradation of medicine from a profession to a technical occupation.

Factor 6: The Business-Minded and Adaptive

This group saw benefits from the use of information technologies. When compared with other opinion types, however, they emphasized a different scope of uses. Aside from patient management and connecting with colleagues, they thought these technologies particularly useful for obtaining patient eligibility data and for consolidating insurer rules and regulations. Furthermore, they considered the use of computer-based patient records essential to compete for HMO and other business contracts. Their only concerns were record confidentiality and security, and manipulation by computer vendors.
Discussion
Facilitators of and barriers to information systems implementations have mostly been studied by focusing research attention on the identification of contributing issues and variables.11,14,21,22,23,24,25,26 To say, for example, that certain percentages of the variance in physician reaction to information technology can be explained by tendency for early adoption, lack of computer skills, concern about confidentiality and security, and fear of loss of independence is to miss the crucial point that, for some, computer skills are paramount while, for others, confidentiality and security are the most pressing issues. Political scientists Baas and Thomas described the confounding effects of traditional survey designs used in subjectivity research, in which the standard approach assesses the impact of certain traits averaged across large numbers of individuals. They note that this traditional survey approach views individuals as irrelevant except insofar as they provide the sources of variables. Often ignored is that, in abstracting traits from their individual contexts and averaging them across individuals, one loses the particular intra-individual significance of these traits for each individual. In this approach, attention is diverted from the way individuals (according to their subjective perspective) actually order the traits under consideration and, in the process, develop relatively distinct constructions of the world.27
In Q-methodology, opinion types are defined as prototypical exemplars rather than as discrete categories. Most current typologies, for example, the Myers-Briggs Type Indicator, classify individuals into non-overlapping categories depending on whether their scores fall above or below a certain cut-off point.28 Such typologies assume discrete data and clear discontinuities between discrete types. Exemplary prototypes, however, assume neither discontinuous data nor clear cut-off points between typological categories. Here, the focus is on identifying characteristics that are typical for each category. Individuals can differ in their degree of fit to the category prototype, with some being more typical exemplars than others. Generalizations in Q-methodology research are based on the validity and theoretical implications of identified opinion types, and not on their numerical distribution among study participants.
In our application of Q-methodology, the prototypical approach made it possible to identify and categorize participant opinion types by investigating both the issues that were common to all types as well as those that differentiated them. Primary care physician and medical student opinions of information technologies largely fell into six categories (see Results, above). All six opinion types agreed with the use of information technologies to improve patient care and to increase efficiency in office management activities. The three opinion types that were highly concerned about record confidentiality and security expressed, at the same time, strong reservations about the use of computers for monitoring their own performance. This combination of concerns was associated with relatively low levels of envisioned technology adoption and suggests that, for these participants, problems with information technologies might be less rooted in the technology and instead might be related more to physician resentment of the underlying ideology driving the technology. Resentment of a particular health care ideology, not the technology, might contribute to the occurrence of cultural obstacles during systems implementations.
Participants were allowed to blend matters of system functionalities and envisioned uses with issues of concern and professional practice philosophy. Q-methodology assumes that issues can take on different meanings depending upon individual context, which is considered to be shaped by past and present experiences, as well as by hopes and expectations for the future. This approach allowed study participants to demonstrate what issues were important to them. For some, these were mostly related to system functionalities whereas for others issues related to medical practice philosophy were more pressing.
Limitations
The purpose of this study was to investigate the quality of physician and medical student opinions; their quantitative distribution in the larger populations was not a consideration. The study used a relatively small set of participants, all of whom were either primary care physicians or third-year medical students, was limited to the greater Chicago area, and did not rely exclusively on random sampling procedures.
Implications
Future research should address the proportional distribution of opinion types in the larger populations of primary care physicians and medical students. The research could be performed using our survey instrument or by constructing a new instrument from the statements that were found to distinguish between opinion types. Using either research instrument, future research should use larger participant samples and random sampling procedures. Our findings suggest that testing associations between opinion types and other quantitative variables, such as medical specialty, age, gender, and actual computer skills and experience, is feasible. Changes in physician opinions could be studied by administering Q-methodology over a period of time and/or pre- and post-implementation of information systems. We think this study can also be used as a model for future research involving cultural differences in basic goals and values between medical policy makers, information researchers, system planners, physicians, and other clinicians.
Conclusion
By changing the research focus from traits to people, Q-methodology permits a macroscopic, people-oriented research design that can be used to identify and categorize physician and medical student opinions of information technologies and to uncover underlying sources of resistance to technology. This study revealed that physician opinion types can be differentiated into those who appear largely self-motivated and will likely need only minimal training interventions (Full-Range Adopters); those who will need additional computer training (Skills-Concerned Adopters); and those who are likely to require motivational interventions beyond the reach and jurisdiction of information systems departments (Technology-Critical Adopters, the Independently-Minded and Concerned, and the Inexperienced and Worried). Identifying the different opinion types present in an organization (either by employing this study's instrument or a shortened survey version) can inform strategic management and system roll-out decisions; that is, a roll-out performed not across entire departments but by opinion type.
Once identified, Full-Range Adopters who are highly respected as well as educationally and clinically influential could be asked to function as system champions to persuade their more reluctant colleagues, such as the Technology-Critical Adopters.13 The value of Q-methodology lies in uncovering opinion clusters; once these clusters are identified, their prevalence in the larger population can subsequently be tested using large-group surveys and standard variance-analytic methods.
Our study suggests that the opinions of primary care physicians and medical students toward information technologies in the health care workplace are more closely related to their medical practice philosophy than to system functionalities. Physician reactions to information technologies have largely been studied along the lines of the biomedical model, namely by assuming the existence of independent and objectifiable symptoms. Q-methodology, however, offers a more holistic model: it assumes that individual reactions are rooted in subjective experiential contexts in which no variable exists independently. Our approach revealed that physician resentment is not always directed at the technology itself, but rather at the underlying health care ideology driving the technology.
Acknowledgments
We acknowledge the contributions of the following individuals toward the success of this study: J. Warren Salmon, PhD; Robert G. Mrtek, PhD; Sheldon X. Kong, PhD; and Thomas J. Muscarello, PhD, who at the time of this study were all associated with the University of Illinois at Chicago.
Appendix A
Glossary of Q-methodology Terminology1,2,3
Concourse
The initial collection of statements regarding a particular topic of interest.
Composite statement arrays
The composite Q-sort (opinion profile) summarizing the viewpoint of all the persons loading on any one factor (also referred to as Factor Array or model Q-sort).
Q-sample
A representative sample of statements that is drawn from the collection of statements regarding a particular topic of interest (concourse).
Q-sort
Each participant's rank-ordered set of statements (opinion profile). Q-Sorts are data.
Condition of instruction
The set of instructions consistently used by all participants when rank-ordering sets of statements.
Factor
The cluster of participants whose Q-sorts were similar, i.e., they ranked the statements into similar orders of preference. Each factor represents a different type of opinion.
Factor loadings
These numbers represent each participant's correlation with each of the identified (called extracted) factors.
Factor/statement scores
These scores show the level of agreement/disagreement among statements within each identified opinion cluster. Factor scores serve as the basis of study interpretation.
Appendix B
Sample of Opinion Statements for Information Technology Study (Full statement set available from authors)
Computer-based information networks will improve longitudinal care by providing coordination between specialty and primary care practitioners.
Confidentiality and security are bigger problems with computer records than with paper records.
Computers will increase efficiency in handling patient management issues such as drug interactions, flow sheets, etc.
Assessing performance is best done by directly observing the physician, not by computer monitoring.
Computer-based information networks will be useful to obtain patient eligibility data and to consolidate insurer rules and regulations.
Footnotes
For data entry into statistical software programs, this means that participants are entered as column headings, whereas statements form the rows.
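This layout can be illustrated with a minimal sketch (the rank values below are invented for illustration): with statements as rows and participants as columns, correlating the columns yields the person-by-person correlation matrix on which by-person factor analysis operates.

```python
import numpy as np

# Hypothetical Q-sort data: statements form the rows, participants the
# columns, matching the data-entry layout described in the footnote.
data = np.array([
    [ 4,  3,  4],   # statement 1: ranks from participants A, B, C
    [-2, -1, -3],   # statement 2
    [ 0,  1,  0],   # statement 3
    [ 3,  2,  4],   # statement 4
])

# rowvar=False correlates the columns (participants), not the rows
# (statements), producing one row/column per participant.
corr = np.corrcoef(data, rowvar=False)
print(corr.shape)  # (3, 3)
```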
To simplify structure and maximize factor loadings, factor extraction is usually followed by varimax and/or judgmental rotations.
For example, if sorts from three participants were defining one factor with loadings of 0.70, 0.80, and 0.50 respectively, and if these participants (in the same order) had ranked opinion statement #1 as +4, +3, and +4, a weighted average score for this statement would be calculated as: (4 × 0.70 + 3 × 0.80 + 4 × 0.50)/3 = 2.4. This process is repeated in the same fashion for each of the remaining statements (and for each of the identified factors).
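The weighted-average calculation above can be sketched in a few lines, using the footnote's example loadings and ranks:

```python
# Example values from the footnote: factor loadings of the three
# defining sorts, and the ranks those sorts gave to statement #1.
loadings = [0.70, 0.80, 0.50]
ranks_stmt1 = [4, 3, 4]

# Each rank is weighted by its sort's loading; the sum is divided by
# the number of defining sorts.
score = sum(l * r for l, r in zip(loadings, ranks_stmt1)) / len(loadings)
print(round(score, 1))  # 2.4
```

The same loop would be repeated over every statement and every factor to produce the full set of factor scores.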
The significance of factor loadings is calculated with the formula for zero-order correlation coefficients, i.e., SE = 1/(sqrt[N]), where SE is the standard error and N is the number of Q-sort statements.1,2 Since there were 30 statements in this study, the standard error comes out to 0.18 (SE = 1/(sqrt[30]) = 1/5.477 = 0.18). Correlations are considered statistically significant at the 0.01 level when they exceed 2.58 standard errors (irrespective of sign), i.e., 2.58 × SE = 2.58 × 0.18 = 0.47.
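The standard-error arithmetic can be checked directly with a minimal sketch of the footnote's formula:

```python
import math

# SE = 1/sqrt(N), where N is the number of Q-sort statements.
N = 30
se = 1 / math.sqrt(N)          # standard error, about 0.18

# Loadings exceeding 2.58 standard errors (irrespective of sign) are
# significant at the 0.01 level.
threshold = 2.58 * se          # about 0.47
print(round(se, 2), round(threshold, 2))  # 0.18 0.47
```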
References
- 1.Brown SR. A primer on Q-methodology. Operant Subjectivity. 1993;16: 91-138. [Google Scholar]
- 2.Brown SR. Political subjectivity: application of Q-methodology in political science. New Haven, CT: Yale University Press, 1980.
- 3.McKeown BF, Thomas BD. Q-methodology. Newbury Park, CA: Sage Publications, 1988.
- 4.Riley RT, Lorenzi NM. Gaining physician acceptance of information technology systems. Medical Interface. 1995:78-80, 82-83. [PubMed] [Google Scholar]
- 5.Lorenzi NM, Riley RT. Organizational Aspects of Health Informatics: Managing Technological Change. New York, NY: Springer-Verlag, 1995.
- 6.Alavi M, Joachimsthaler EA. Revisiting DSS implementation research: a meta-analysis of the literature and suggestions for researchers. Management Information Systems Quarterly. 1992;16: 95-115. [Google Scholar]
- 7.Young DW. What makes doctors use computers? Discussion paper. In: Andersen JG, Jay SJ (eds). Use and Impact of Computers in Medicine. New York, NY: Springer-Verlag, 1987; 8-14.
- 8.Massaro TA. Introducing physician order entry at a major academic center: I. impact on organizational culture and behavior. Academic Medicine. 1993a;68: 20-5. [DOI] [PubMed] [Google Scholar]
- 9.Lee F, Teich JM, Spurr CD, Bates DW. Implementation of physician order entry: user satisfaction and self-reported usage patterns. J Am Med Inform Assoc. 1996;3: 42-55. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Brown SH, Coney RD. Changes in physician's computer anxiety and attitudes related to clinical information system use. J Am Med Inform Assoc. 1994;1: 381-94. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Kaplan B. The influence of medical values and practices on medical computer applications. In: Anderson JG, Jay SJ (eds). Use and Impact of Computers in Medicine. New York, NY: Springer-Verlag, 1987; 39-50.
- 12.Detmer WM, Friedman CP. Academic physicians' assessment of the effects of computers on health care. In: Ozboldt JG (ed). Proceedings of the Eighteenth Annual Symposium on Computer Applications in Medical Care. Philadelphia, PA: Hanley & Belfus, 1994; 558-62. [PMC free article] [PubMed]
- 13.Bria II WF, Rydell RL. The Physician-Computer Connection: A Practical Guide to Physician Involvement in Hospital Information Systems. Chicago, IL: American Hospital Publishing, 1992.
- 14.Andersen JG, Aydin CE, Jay SJ. Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications, 1994.
- 15.Anderson JG, Jay SJ, Perry J, Anderson MM. Diffusion of computer applications among physicians: a quasi-experimental study. Clin Sociol Rev. 1990;8: 116-27. [Google Scholar]
- 16.Stainton RR. Q-methodology. In: Smith JA, Harre R, Van Langenhove L (eds). Rethinking Methods in Psychology. Thousand Oaks, CA: Sage Publications, 1995; 178-92.
- 17.Stephenson W. The Study of Behavior: Q-technique and Its Methodology. Chicago, IL: University of Chicago Press, 1953.
- 18.Aydin CE. Survey methods of assessing social impacts of computers in health care organizations: In: Anderson JG, Aydin CE, Jay SJ (eds). Evaluating Health Care Information Systems: Methods and Applications. Thousand Oaks, CA: Sage Publications, 1994; 69-96.
- 19.Dennis KE. Q-methodology: Relevance and application to nursing research. Advances in Nursing Science. 1986;8: 6-17. [DOI] [PubMed] [Google Scholar]
- 20.Stricklin M. PCQ: factor analysis program for Q-technique [computer program]. Version 3.8. Lincoln, NE: Stricklin M, 1996.
- 21.Abt Associates Incorporated (US). Overcoming barriers to implementation and integration of clinical information management systems: feasibility study. Department of Commerce, National Technical Information Services, Springfield, VA, 1993.
- 22.Henkind SJ. Physician involvement in information systems: an overview of the issues. In: Proceedings of the 1994 Annual HIMSS Conference. Chicago, IL: Healthcare Information and Management Systems Society. 1994; 33-41.
- 23.Metzger JB, Drazen EL. Computer-based record systems that meet physician needs. Healthcare Information Management. 1993;7: 22-31. [PubMed] [Google Scholar]
- 24.Gardner RM, Lundsgaarde HP. Evaluation of user acceptance of a clinical expert system. J Am Med Inform Assoc. 1994;1: 428-38. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Drazen EL. Physicians' and nurses' acceptance of computers. In: Drazen EL, Metzger JB, Ritter JL, Schneider MK (eds). Patient Care Information Systems: Successful Design and Implementation. New York, NY: Springer-Verlag, 1995; 31-50.
- 26.Drazen EL. Physicians' and nurses' satisfaction with patient care information system: two case studies. In: Drazen El, Metzger JB, Ritter JL, Schneider MK (eds). Patient Care Information Systems: Successful Design and Implementation. New York, NY: Springer-Verlag, 1995; 51-81.
- 27.Baas LR, Thomas DB. Presidents in the public mind: the social construction of Bill Clinton. Paper presented at the 11th Annual Conference of the International Society for the Scientific Study of Subjectivity; 1995. Oct 13; Chicago, IL.
- 28.York KL, John OJ. The four faces of Eve: a typological analysis of women's personality in midlife. J Pers Soc Psychol. 1992;63: 494-508. [DOI] [PubMed] [Google Scholar]
- 29.Friedman CP, Wyatt JC. Subjectivist approaches to evaluations. In: Friedman CP, Wyatt JC (eds). Evaluation Methods in Medical Informatics. New York, NY: Springer-Verlag, 1997; 205-22.
- 30.Dennis KE. Q-methodology: new perspectives on estimating reliability and validity. In: Waltz CF, Strickland OL (eds). Measurement in Nursing Outcomes. New York, NY: Springer-Verlag, 1988; 409-19.
- 31.Peritore NP. Socialism, Communism, and Liberation Theology in Brazil: An Opinion Survey Using Q-methodology. Athens, OH: Ohio University Press, 1990.
- 32.Dennis KE. Commentary: looking at reliability and validity through Q-colored glasses. Operant Subjectivity. 1993;16: 37-44. [Google Scholar]
- 33.Thomas DB, Baas LR. The issue of generalization in Q-methodology: “reliable schematics” revisited. Operant Subjectivity. 1993;16: 18-36. [Google Scholar]
- 34.Richardson L. Writing Strategies: Reaching Diverse Audiences. Newbury Park, CA: Sage Publications, 1990.