Journal of General Internal Medicine. 2006 Apr;21(4):310–314. doi: 10.1111/j.1525-1497.2006.00337.x

A Test of Knowledge about Prostate Cancer Screening

Online Pilot Evaluation among Southern California Physicians

Douglas S Bell 1, Ron D Hays 1, Jerome R Hoffman 2, Frank C Day 2, Jerilyn K Higa 1, Michael S Wilkes 3
PMCID: PMC1484731  PMID: 16499545

Abstract

BACKGROUND

Although the benefits of prostate cancer screening are uncertain and guidelines recommend that physicians share the screening decision with their patients, most U.S. men over age 50 are routinely screened, often without counseling.

OBJECTIVE

To develop an instrument for assessing physicians' knowledge related to the U.S. Preventive Services Task Force recommendations on prostate cancer screening.

PARTICIPANTS

Seventy internists, family physicians, and general practitioners in the Los Angeles area who deliver primary care to adult men.

MEASUREMENTS

We assessed knowledge related to prostate cancer screening (natural history, test characteristics, treatment effects, and guideline recommendations), beliefs about the net benefits of screening, and prostate cancer screening practices for men in different age groups, using an online survey. We constructed a knowledge scale having 15 multiple-choice items.

RESULTS

Participants' mean knowledge score was 7.4 (range 3 to 12) of 15 (Cronbach's α=0.71). Higher knowledge scores were associated with less belief in a mortality benefit from prostate-specific antigen (PSA) testing (r=−.49, P <.001). Participants could be categorized as low, age-selective, and high users of routine PSA screening. High users had lower knowledge scores than age-selective or low users, and they believed much more in mortality benefits from PSA screening.

CONCLUSIONS

Based on its internal consistency and its correlations with measures of physicians' net beliefs and self-reported practices, the knowledge scale developed in this study holds promise for measuring the effects of professional education on prostate cancer screening. The scale deserves further evaluation in broader populations.

Keywords: physicians' attitudes and practices, knowledge evaluation, continuing medical education, prostate cancer screening


Prostate cancer is a common but heterogeneous illness. Although 3% of U.S. men die from aggressive prostate cancer, about 40% of men harbor asymptomatic prostate cancer by the time they reach 80 years of age, implying that the majority of prostate cancers are indolent and potentially harmless.1 Serum prostate-specific antigen (PSA) tests and digital rectal examinations (DRE) are commonly performed to screen for prostate cancer. However, the benefits of screening and early treatment have not been demonstrated, whereas screening is clearly associated with important harms because of the frequent side effects of prostate cancer treatments and the anxiety generated by false-positive test results.2,3

Major professional societies differ in their recommendations for prostate cancer screening. The U.S. Preventive Services Task Force (USPSTF) concludes that the evidence is insufficient to recommend for or against routine screening for prostate cancer.4 Conversely, the American Urological Association and American Cancer Society both recommend that PSA and DRE testing be offered annually to asymptomatic men age 50 and older who have an estimated life expectancy of more than 10 years.5,6 However, all 3 of these major guidelines recommend that clinicians engage in shared decision making, informing patients about the likelihoods and uncertainties associated with different potential outcomes, and then helping them to make a decision that is consistent with their individual value systems.7

In practice, guidelines are often not followed; one major cause is physicians' lack of knowledge about the recommendations.8 Knowledge is especially important for carrying out PSA screening recommendations because, to conduct shared decision making, physicians need at least a conceptual understanding of the evidence. However, surveys have consistently found that large proportions of primary care physicians perform PSA screening routinely, believing that screening is beneficial.9,10,11,12 Recent national data showed that 56% of men over age 80 had received PSA screening within the last year, a screening rate no different from that in younger age groups,13 even though the average life expectancy at age 80 is 8.6 years.14 In addition, many men appear to receive PSA screening without any discussion of the test's risks or benefits.15,16 These discrepancies between physicians' practices and even the most aggressive guideline recommendations suggest a need for better physician education.

As a preliminary study in the development of an online physician education program focused on the USPSTF prostate cancer screening guidelines, we sought to develop a test measuring physicians' knowledge of the related evidence and expert opinion. We also sought to explore the association of this knowledge with physicians' self-reported screening practices and their beliefs about the net benefits of screening. We hypothesized that physicians having greater knowledge would believe less strongly in the benefits of PSA and DRE screening, and would report lower levels of routine PSA screening.

METHODS

Knowledge Instrument

We developed a 30-item multiple-choice knowledge test covering the (a) prevalence and natural history of prostate cancer, (b) benefits and harms of prostate cancer treatments, (c) characteristics of prostate cancer screening tests, and (d) guideline recommendations for prostate cancer screening. We authored a total of 19 learning objectives, each of which was explicitly linked to a passage in the most recent review of evidence related to prostate cancer screening2 or in the accompanying guideline4 from the USPSTF. For each learning objective, we authored 1 to 3 different multiple-choice questions assessing the knowledge represented in the guideline passage. Six of the questions were composite “true-false” questions, meaning that any or all of the response options could be correct and users could select all that apply. For the other 24 questions, only 1 response option could be selected as correct. The questions were revised for completeness and content validity, including the accuracy of the “correct” response option, based on expert review by a urologist–health services researcher and by the lead author of the USPSTF recommendations.

Attitudes and Practices Survey

We developed survey questions to assess participants' attitudes and practices related to prostate cancer screening. Beliefs in the benefits of several cancer screening tests were assessed using a set of questions that asked subjects to rate how beneficial each test (including PSA, DRE, fecal occult blood, colonoscopy, mammography, Pap smear, and chest x-ray for lung cancer) is for reducing mortality risk in patients over age 50; the rating options were as follows: “−1. Harmful effect,” “0. No effect,” “1. Small benefit,” “2. Moderate benefit,” or “3. Large benefit.” Physicians' self-reported PSA screening practices were assessed with questions that asked how often they routinely recommend PSA screening for men who have no family history of prostate cancer and who are age 40 to 49, 50 to 59, 60 to 69, 70 to 79, and 80+. For each age group, respondents could select “0% to 33%,” “34% to 66%,” or “67% to 100%.” The wordings of the item stem and response options were based on a previously developed scale used for self-report of colon cancer screening practices.17 Physicians' decision-making style was assessed with the question, “Which approach best characterizes how a final decision about ordering a PSA test is typically made in your practice?” Response options were adapted for physician self-rating from the Control Preferences Scale18: “I make the final decision,” “I prefer to make the final decision, but only after gathering input from the patient about his preference,” “The patient and I share the decision making as fully as possible,” “I ask the patient to make the decision, but only after we have discussed my opinions,” and “I leave the decision up to the patient.” Finally, the survey asked about personal demographic and practice characteristics.

Recruitment and Data Collection

We randomly sampled physicians from the AMA Physician Masterfile, a registry that attempts to include all U.S. physicians.19 Physicians were eligible for sampling if they listed their “major professional activity” as office-based practice, and their specialty as general internal medicine, family practice, or general practice. We restricted our sample to the Los Angeles area because we expected that some participants with low computer skills might require in-person assistance to complete the online exercise. We attempted to contact each sampled physician's office by telephone to confirm or correct the address and specialty designation. Physician records were dropped if no matching physician could be identified practicing in the Los Angeles area or if the office confirmed that the physician was not practicing in 1 of the 3 designated specialties.

Physicians were mailed an invitation letter describing the study, including an offer of $75 for completing the 30-minute online exercise. Each letter included an identifying code for logging in to the study website. Physicians who did not log in were telephoned up to 5 times to inquire about their interest. Additional copies of the invitation letter were faxed or e-mailed, as necessary. The website's login page asked physicians the number of half-day sessions per week in which they deliver primary care to substantial numbers of men over age 40. A response of 2 or more was required to be study-eligible. Upon logging in, subjects were informed of their eligibility and presented with the online consent form. After indicating their consent, they completed the attitudes and practices survey. Then, the website presented each knowledge question individually, in random order. Subjects were required to answer each question before proceeding to the next. If subjects interrupted their session and returned later, the website returned them to their last unanswered question. Data collection was conducted online because we intended to use the knowledge test in a larger online educational program. The study protocol was approved by the University of California's Institutional Review Board.

Analysis

Single-answer knowledge questions were scored 0 or 1. Composite true-false questions were scored as the proportion of correct options chosen and incorrect options not chosen out of all response options. Knowledge scales were created by summing the number correct across items. Internal consistency reliabilities for knowledge scales were estimated using Cronbach's coefficient α.20 To explore the possibility of item clusters that might contribute less to the measurement of domain-specific knowledge, such as those that might have difficult wording, we conducted a categorical exploratory factor analysis, examining the empirical fit of models having 1 to 4 underlying factors. To further eliminate poorly performing items from candidate scales, we dropped those having item-scale correlation coefficients <0.20.
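The scoring rules above can be sketched in code. This is an illustrative implementation, not the study's SAS program; the function names and example data are ours.

```python
# Scoring rules as described in the Analysis section (hypothetical item data).

def score_single_answer(chosen, correct):
    """Single-answer items score 0 or 1."""
    return 1.0 if chosen == correct else 0.0

def score_composite(chosen, correct_options, all_options):
    """Composite true-false items score the proportion of options handled
    correctly: correct options chosen plus incorrect options left unchosen,
    divided by the total number of response options."""
    chosen, correct_options = set(chosen), set(correct_options)
    right = sum(
        1 for opt in all_options
        if (opt in correct_options) == (opt in chosen)
    )
    return right / len(all_options)

def cronbach_alpha(item_scores):
    """Cronbach's coefficient alpha for a list of per-subject item-score
    lists: (k / (k - 1)) * (1 - sum of item variances / total-score variance)."""
    k = len(item_scores[0])  # number of items

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([s[i] for s in item_scores]) for i in range(k)]
    total_var = variance([sum(s) for s in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For example, a composite item with options A–D where A and B are correct, and a respondent who selects A and C, scores 2/4 = 0.5 (A handled correctly, D correctly left unchosen; B and C handled incorrectly).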

Participants' use of routine PSA screening was scored from 0 to 2 for each of the 5 age groups. An overall PSA use score, ranging from 0 to 10, was calculated by summing the scores for the individual age groups. Subjects were then categorized based on this score as low (≤3), intermediate (4 to 6), or high (≥7) users of routine PSA testing for prostate cancer screening. Participants were classified as sharing the PSA screening decision if they chose the third or fourth response option on the decision-making style question (no subjects chose the fifth response option).
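The PSA use scoring and categorization can be expressed directly; this is a minimal sketch of the rules stated above, with variable names of our own choosing.

```python
# Overall PSA use score and category, per the cutoffs in the text.

def overall_psa_use(age_group_scores):
    """Sum the five age-group scores (each 0 to 2) into a 0-to-10 total."""
    assert len(age_group_scores) == 5
    assert all(0 <= s <= 2 for s in age_group_scores)
    return sum(age_group_scores)

def psa_use_category(total):
    """Categorize the overall PSA use score: low (<=3),
    intermediate (4 to 6), or high (>=7)."""
    if total <= 3:
        return "low"
    elif total <= 6:
        return "intermediate"
    else:
        return "high"
```

A physician reporting heavy routine use only in the 50 to 69 range (for instance, age-group scores of 0, 2, 2, 1, 0) totals 5 and falls in the intermediate, age-selective category.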

Chi-square or Fisher's exact tests were used to assess associations among categorical variables. T-tests, Wilcoxon rank sum tests, or Kruskal-Wallis tests were used to compare distributions of continuous variables. Multivariate relationships for knowledge and belief scores were evaluated using ordinary least squares regression. Residual analyses, including Breusch-Pagan tests for heteroskedasticity, indicated that these linear models were adequately specified. Statistical calculations were carried out in SAS version 8 (SAS Institute, Cary, NC), except for factor analyses, which were carried out using Mplus version 2.14 (Muthén & Muthén, Los Angeles, CA).

RESULTS

Of 285 physicians recruited, 21 visited the study website after the initial invitation and another 59 did so after an average of 2.5 telephone messages or contacts. Most nonparticipants we reached cited insufficient time for the online exercise. Of the 80 who visited the website, 2 were ineligible and 2 declined to give consent. Of the 76 who gave consent and were eligible, 70 completed participation, for a net participation rate of 25%. Technical support was not a significant barrier to participation. Participants had graduated from medical school more recently and were more likely to be board certified than nonparticipants, but they did not differ in gender or specialty (Table 1). Participants reported moderately diverse practice settings and race-ethnicity. In addition, 45% reported some affiliation with a medical school or residency training program. Among 25 male participants age 50 or older, 23 (92%) reported having had PSA screening themselves; only 11 (44%) reported having had DRE screening.

Table 1.

Characteristics of Participants*

Characteristic Participants n=70 Eligible Non-participants n=213 P Value
Female 17 (24) 54 (25) NS
Specialty NS
 Family practice (FP) 27 (39) 74 (35)
 General practice 4 (6) 16 (8)
 Internal medicine (IM) 39 (56) 123 (58)
Board certified in IM or FP 60 (86) 148 (69) .008
Age in years, mean (SD) 48 (11) 52 (11) .008
Years since medical school graduation, mean (SD) 20 (11) 25 (12) .004
Practice setting
 Solo 15 (22) NA
 Group, 2 to 7 MDs 14 (20)
 Group, >7 MDs 14 (20)
 Other (HMO, VA, etc.) 26 (38)
Race and ethnicity
 Hispanic 7 (20) NA
 Asian 14 (20)
 Black or African American 7 (10)
 White 41 (59)
 Other 5 (7)
* Unless otherwise stated, values are numbers (percentages) of physicians having each characteristic within the respondent group and the nonrespondent group. Tests of significance are χ2-tests for contingency tables and 2-sided t-tests for means. NS, not significant (P>.20); NA, data not available for nonparticipants.

Data not available for 1 physician.

Race and ethnicity options were not exclusive—subjects could select all that apply.

The average percent correct for the individual knowledge items ranged from 4% to 93%, and the average total number correct for all 30 questions was 13.7 (standard deviation [SD] 3.7, Cronbach's α 0.58). In exploratory factor analyses, a 2-factor solution provided the most reasonable representation of the knowledge responses. Based on the content of items contributing to each factor, we interpreted 1 factor as representing test-taking skills more than knowledge that specifically informs prostate cancer screening. For example, the question on the predictive value of PSA testing that asked the probability of finding no cancer given a positive test loaded more strongly on the test-taking skills factor while the one with more straightforward wording loaded more strongly on the knowledge factor.

We constructed a reduced knowledge scale by including the items with loading values ≥0.20 for the knowledge factor, and excluding items having item-scale correlation coefficients <0.20 (Appendices A & B). This resulted in a 15-item scale having α=0.71, indicating sufficient internal consistency reliability for group comparisons.21 Participants' scores on this 15-point scale averaged 7.4 (range 3 to 12, median 7, SD 2.6).
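The item-screening step (dropping items with item-scale correlations below 0.20) can be sketched as a corrected item-total correlation filter, each item correlated against the sum of the remaining items. This is our illustrative reconstruction, not the study's actual code, and the data layout is hypothetical.

```python
# Corrected item-scale correlation filter with a 0.20 cutoff.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def keep_items(item_scores, threshold=0.20):
    """Return indices of items whose corrected item-scale correlation
    (item vs. total of the remaining items) meets the threshold."""
    k = len(item_scores[0])
    kept = []
    for i in range(k):
        item = [s[i] for s in item_scores]
        rest = [sum(s) - s[i] for s in item_scores]  # total minus the item itself
        if pearson_r(item, rest) >= threshold:
            kept.append(i)
    return kept
```

Correlating each item against the rest-score, rather than the full total, avoids inflating the correlation by the item's own contribution to the scale.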

Knowledge scale scores were higher, on average, among participants who were board-certified (7.8 vs 5.0, P =.002), who were within 20 years of medical school graduation (8.1 vs 6.6, P =.008), and who had any teaching affiliation (8.3 vs 6.8, P =.008). There were statistical trends toward lower knowledge scores for males (7.1 vs 8.4, P =.06), and general practitioners (5.0 vs 7.5, P =.10). Forty-two subjects (60%) reported a shared decision-making style for PSA screening. These subjects showed a trend toward greater knowledge scores (7.8 vs 6.8, P =.08). In multivariate regression models including gender, specialty, years since graduation, decision-making style, board certification, and teaching affiliation as predictors of knowledge scores, board certification was the only variable independently associated with knowledge (P =.01), although there was a trend toward an independent association for teaching affiliation (P =.08).

Participants' net beliefs in the mortality benefits of specific screening tests, on a rating scale from −1 to 3, ranged from 2.61 for mammography to 0.64 for screening chest x-rays. For PSA screening, the mean (SD) belief score was 1.80 (1.07), and for DRE screening it was 1.78 (0.99). No subject rated PSA screening as having a harmful effect, and 1 subject gave this rating for DRE. Physicians with a teaching affiliation had less belief in PSA screening (1.43 vs 2.05, P =.04), and there were statistical trends toward less belief in PSA screening for female, board-certified, and more recently graduated physicians. Similar but weaker trends were present for belief in DRE screening, but none of these relationships reached statistical significance. Subjects who do not practice shared decision making also had significantly greater belief in the benefit of PSA screening (2.14 vs 1.56, P =.04). Knowledge scores were inversely correlated with belief in PSA screening (r=−.49, P <.001) and with belief in DRE screening (r=−.38, P =.001). In multivariate analyses, no variables other than knowledge had significant associations with belief in PSA or DRE screening, and there were no significant interaction effects with knowledge.

Scores for overall use of routine PSA screening ranged from 0, indicating little use in any age group, to 10, indicating high use among all men over age 40. The mean overall PSA use score was 6.7 (SD 2.7). When participants were categorized as low, intermediate, or high users of PSA screening based on their overall PSA use scores, all of the “intermediate” users were highly age-selective, reporting very high levels of routine use in the 50 to 69 age range and very low levels of routine use in the 40 to 49 and 80+ age ranges (Table 2).

Table 2.

Categorization of Physicians' PSA Screening Practices*

Category Age-specific PSA Use Score
40 to 49 50 to 59 60 to 69 70 to 79 80+
Low users of PSA screening (total PSA use score 0 to 3, n=9) 0.12 0.56 0.56 0.33 0.22
Selective users of PSA (total PSA use score 4 to 6, n=21) 0.19 1.95 1.95 1.24 0.05
High users of PSA screening (total PSA use score 7 to 10, n=40) 1.20 1.95 2.00 1.90 1.50
* Data are mean scores for the use of routine PSA testing within each age category, on a scale ranging from 0 to 2.

PSA, prostate-specific antigen.

Scores on the scale of knowledge related to PSA screening differed significantly among the groups of physicians with different PSA screening practices (Table 3), with scores being higher for both the low users and the age-selective users of PSA screening. These groups also differed somewhat in other characteristics, including board certification and teaching affiliation, but not gender or years since medical school graduation. Significantly fewer high users reported sharing the PSA screening decision with their patients. The high users of PSA testing had much greater belief in net mortality benefits from both PSA and DRE screening. This group was also much more likely to believe in mortality benefits from screening chest x-rays for lung cancer. The groups' beliefs showed modest differences for fecal occult blood testing and mammography screening, and no significant differences for colonoscopy and Pap smear screening.

Table 3.

Characteristics of Low, Selective, and High Users of PSA Screening*

Characteristic Low Users n=9 Selective Users n=21 High Users n=40 P Value
Female (%) 44 19 23 .34
Board certified in IM or FP (%) 100 100 75 .01
Years since medical school graduation 16 (11) 18 (11) 22 (12) .19
Any teaching affiliation (%) 89 47 33 .01
Shares decision making about PSA screening with the patient (%) 89 71 48 .03
PSA screening knowledge score 8.4 (2.8) 8.9 (2.2) 6.4 (2.3) <.001
Belief in net mortality benefit from …
 PSA screening 0.22 (0.97) 1.30 (0.66) 2.40 (0.71) <.001
 DRE screening 0.89 (0.93) 1.24 (0.77) 2.26 (0.82) <.001
 Chest x-ray screening for lung cancer −0.22 (0.67) 0.14 (0.57) 1.10 (1.06) <.001
 Colonoscopy screening 2.22 (0.97) 2.57 (0.68) 2.68 (0.69) .38
 Fecal occult blood screening 2.44 (0.73) 1.62 (0.74) 1.88 (0.82) .04
 Mammography screening 2.22 (0.83) 2.48 (0.68) 2.78 (0.48) .04
 Pap smear screening 2.67 (1.00) 2.43 (0.81) 2.65 (0.58) .37
* Unless otherwise stated, values are means (standard deviations) for physicians within each category. Knowledge scores are on a scale ranging from 0 to 15 and belief scores are on a scale ranging from −1 to 3. Tests of significance for differences among the 3 groups are Kruskal–Wallis tests for continuous variables and Fisher's exact tests for categorical variables.

Data not available for 1 physician.

PSA, prostate-specific antigen; DRE, digital rectal examinations; IM, Internal medicine; FP, family practice.

DISCUSSION

In this study, we developed a 15-item scale measuring physicians' knowledge related to the prostate cancer screening decision. The scale demonstrated internal consistency reliability sufficient for group comparisons. In addition, higher scores on the scale were associated with less belief in the benefits of PSA testing as well as lower and more age-selective self-reported use of the PSA test in practice. These associations provide support for construct validity,22 as they are in the direction that would be predicted by a theoretical model in which knowledge influences beliefs, and beliefs in turn influence actions. Our results also suggest that physicians' tendency to conduct shared decision making is associated with their beliefs and perhaps with their knowledge about PSA testing.

Although physicians' use of PSA screening is probably influenced by factors other than their belief in the test's efficacy, such as their comfort in dealing with uncertainty,23 our findings imply that physicians' knowledge is an important predictor of their testing behavior. Thus, this study raises the possibility that educational programs could improve physicians' use of shared decision making and reduce their use of PSA testing among men with relatively low life expectancy. The knowledge test that we developed for this study may be useful as a summative evaluation of educational programs on prostate cancer screening and as a diagnostic test to assess the participants' prior knowledge. Our findings of poor familiarity with the USPSTF recommendations demonstrate a particular need for education on this guideline. All male subjects age 50 or older had rated PSA screening as having at least a small net mortality benefit; thus, the finding that nearly all male subjects age 50 or older had received PSA screening themselves shows that these physicians are making personal screening decisions in accord with their beliefs.

Our study has several limitations. First, the low response rate raises the likelihood that our sample does not represent primary care physicians in Los Angeles. However, the sample was diverse enough to identify noteworthy relationships between knowledge, beliefs, and practices. Furthermore, since board-certified and younger physicians were overrepresented among respondents, and since these characteristics were each associated with higher levels of knowledge, the true level of knowledge in our source population may be less than the level we found. It is also possible that knowledge, beliefs, and practice patterns in Los Angeles would differ from those found in other areas. However, we did not find any subject characteristics that modified the relationship between knowledge and beliefs (i.e., we found no interaction effects), suggesting that our measure of knowledge would correlate similarly with beliefs and practices in different physician subpopulations. Nonetheless, our relatively small sample size limits our ability to test complex, multivariate models. The sample size also limited our ability to detect distinct knowledge subdomains using factor analysis. Finally, physicians may not report their screening practices accurately. As PSA screening is simple to order, physicians' perceptions of their PSA screening practices may be relatively unbiased, but a study to validate physician self-report of PSA screening may be warranted.

In conclusion, we have developed a 15-item scale that has content validity for measuring knowledge about prostate cancer screening and that, in our pilot evaluation, correlated as expected with measures of physicians' beliefs and self-reported practices. This scale therefore holds promise for measuring the results of educational programs aimed at improving physicians' knowledge about prostate cancer screening. However, further studies of this scale are warranted in larger and more representative national samples.

Acknowledgments

This study was supported by the U.S. Centers for Disease Control (cooperative agreement U58-CCU920367), and by the National Center for Research Resources (grant G12 RR 03026-13). Dr. Bell had additional support from the Robert Wood Johnson Foundation Generalist Faculty Physician Scholars Program and Dr. Hays from the UCLA/DREW Project EXPORT, National Institutes of Health, National Center on Minority Health & Health Disparities, (P20-MD00148-01), and the UCLA Center for Health Improvement in Minority Elders/Resource Centers for Minority Aging Research.

We thank Drs. Mark Litwin and Russell Harris for their expert review of the knowledge questions, Ms. Karen Spritzer for the statistical programming, and Mr. Lingtao Cao for the web application programming.

Supplementary Material

The following supplementary material is available for this article online:

Appendix A. Characteristics and scaling properties of individual knowledge questions.

Appendix B. Questions comprising the prostate cancer screening knowledge scale.

REFERENCES

1. Coley CM, Barry MJ, Fleming C, Mulley AG. Early detection of prostate cancer. Part I: prior probability and effectiveness of tests. The American College of Physicians. Ann Intern Med. 1997;126:394–406. doi:10.7326/0003-4819-126-5-199703010-00010.
2. Harris R, Lohr KN. Screening for prostate cancer: an update of the evidence for the U.S. Preventive Services Task Force. Ann Intern Med. 2002;137:917–29. doi:10.7326/0003-4819-137-11-200212030-00014.
3. McNaughton-Collins M, Fowler FJ Jr, Caubet JF, et al. Psychological effects of a suspicious prostate cancer screening test followed by a benign biopsy result. Am J Med. 2004;117:719–25. doi:10.1016/j.amjmed.2004.06.036.
4. U.S. Preventive Services Task Force. Screening for prostate cancer: recommendation and rationale. Ann Intern Med. 2002;137:915–6. doi:10.7326/0003-4819-137-11-200212030-00013.
5. American Urological Association (AUA). Prostate-specific antigen (PSA) best practice policy. Oncology (Huntington). 2000;14:267–80.
6. Smith RA, Cokkinides V, Eyre HJ. American Cancer Society guidelines for the early detection of cancer, 2004. CA Cancer J Clin. 2004;54:41–52. doi:10.3322/canjclin.54.1.41.
7. Frosch DL, Kaplan RM. Shared decision making in clinical medicine: past research and future directions. Am J Prev Med. 1999;17:285–94. doi:10.1016/s0749-3797(99)00097-5.
8. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–65. doi:10.1001/jama.282.15.1458.
9. Hoffman RM, Papenfuss MR, Buller DB, Moon TE. Attitudes and practices of primary care physicians for prostate cancer screening. Am J Prev Med. 1996;12:277–81.
10. Austin OJ, Valente S, Hasse LA, Kues JR. Determinants of prostate-specific antigen test use in prostate cancer screening by primary care physicians. Arch Fam Med. 1997;6:453–8. doi:10.1001/archfami.6.5.453.
11. Fowler FJ Jr, Bin L, Collins MM, et al. Prostate cancer screening and beliefs about treatment efficacy: a national survey of primary care physicians and urologists. Am J Med. 1998;104:526–32. doi:10.1016/s0002-9343(98)00124-7.
12. Voss JD, Schectman JM. Prostate cancer screening practices and beliefs. J Gen Intern Med. 2001;16:831–7. doi:10.1111/j.1525-1497.2001.10133.x.
13. Sirovich BE, Schwartz LM, Woloshin S. Screening men for prostate and colorectal cancer in the United States: does practice reflect the evidence? JAMA. 2003;289:1414–20. doi:10.1001/jama.289.11.1414.
14. Arias E. United States Life Tables, 2000. National Vital Statistics Reports. Vol. 51. Hyattsville, MD: National Center for Health Statistics; 2002.
15. Federman DG, Goyal S, Kamina A, Peduzzi P, Concato J. Informed consent for PSA screening: does it happen? Eff Clin Pract. 1999;2:152–7.
16. Chan EC, Vernon SW, Ahn C, Greisinger A. Do men know that they have had a prostate-specific antigen test? Accuracy of self-reports of testing at 2 sites. Am J Public Health. 2004;94:1336–8. doi:10.2105/ajph.94.8.1336.
17. Dulai GS, Farmer MM, Ganz PA, et al. Primary care provider perceptions of barriers to and facilitators of colorectal cancer screening in a managed care setting. Cancer. 2004;100:1843–52. doi:10.1002/cncr.20209.
18. Degner LF, Sloan JA. Decision making during serious illness: what role do patients really want to play? J Clin Epidemiol. 1992;45:941–50. doi:10.1016/0895-4356(92)90110-9.
19. American Medical Association. AMA Physician Masterfile. Available at: http://www.ama-assn.org/ama/pub/category/2673.html. Accessed February 12, 2005.
20. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
21. Nunnally JC, Bernstein IH. Psychometric Theory. New York: McGraw-Hill; 1994.
22. Cronbach LJ, Meehl PE. Construct validity in psychological tests. Psychol Bull. 1955;52:281–302. doi:10.1037/h0040957.
23. Sorum PC, Shim J, Chasseigne G, Bonnin-Scaon S, Cogneau J, Mullet E. Why do primary care physicians in the United States and France order prostate-specific antigen tests for asymptomatic patients? Med Decis Making. 2003;23:301–13. doi:10.1177/0272989X03256010.

