American Journal of Public Health. 2014 Dec;104(12):2439–2444. doi: 10.2105/AJPH.2014.302153

Social Network Effects of Nonlifesaving Early-Stage Breast Cancer Detection on Mammography Rates

Sarah A Nowak,1 Andrew M Parker1
PMCID: PMC4232109  NIHMSID: NIHMS633942  PMID: 25322304

Abstract

Objectives. We estimated the effect of anecdotes of early-stage, screen-detected cancer for which screening was not lifesaving on the demand for mammography.

Methods. We constructed an agent-based model of mammography decisions, in which 10 000 agents that represent women aged 40 to 100 years were linked together on a social network, which was parameterized with a survey of 716 women conducted through the RAND American Life Panel. Our model represents a population in equilibrium, with demographics reflecting the current US population based on the most recent available census data.

Results. The aggregate effect of women learning about 1 category of cancers—those that would be detected but would not be lethal in the absence of screening—was a 13.8 percentage point increase in annual screening rates.

Conclusions. Anecdotes of detection of early-stage cancers relayed through social networks may substantially increase demand for a screening test even when the detection through screening was nonlifesaving.


Women often overestimate the mortality reduction from mammograms,1–3 and anecdotes of women with early-stage breast cancer being diagnosed through mammography can serve as particularly powerful motivators in encouraging other women to have screening mammograms.4–6 Such diagnoses, which are often shared through discussions between friends, family, coworkers, and other acquaintances, are also often viewed by patients as lifesaving.7 These discussions can be viewed as the transmission of information through a social network. Typically, social networks are defined as sets of individuals with links between pairs of individuals. In this article, we define 2 individuals as linked if they discuss their health history and outcomes (such as breast cancer diagnosis and treatment) with each other.

Although women often view the detection of early-stage breast cancer through mammography as lifesaving, a recent study estimated that more than three quarters of women with screen-detected cancer have not actually had their lives saved by early detection.8 Rather, these women have cancers that (1) would never have been detected clinically if not detected through screening (thus, they were overdiagnosed), (2) eventually would have been detected clinically but would not be lethal, or (3) eventually would be lethal despite being detected by screening. Physicians and epidemiologists have long understood that high rates of early-stage diagnosis through screening do not necessarily indicate that the screening reduces mortality. Furthermore, when issuing cancer screening recommendations, professional organizations generally use mortality reduction as a primary measure of the screening’s efficacy.9–12

Understanding how nonlifesaving early detection of breast cancer through screening drives patient demand for screening has several important implications. First, professional organizations currently disagree about when and how frequently women should be screened for breast cancer.9,10 Those who believe that some women are screened too early and too often may find demand for mammograms driven by such nonlifesaving diagnoses problematic. Our analysis estimated how much demand for mammograms may be driven by women’s incorrect assumptions about whether breast cancer screening was lifesaving for individuals in their social networks. More generally, because patients often use the frequency of early detection as a proxy for a screening test’s efficacy,7 patients with limited time and resources might prioritize a screening test that is relatively ineffective at reducing mortality over a test that more successfully reduces mortality if the former has higher early detection rates than the latter. We examined the potential patient-driven demand for screening that is motivated by nonlifesaving screen-detected cancers discussed in a social network.

It is challenging to quantify this demand through observation or experiment because it is often not possible to determine at the individual level whether early detection through screening affected mortality. If a woman with early-stage, screen-detected breast cancer does not die of breast cancer, it is usually not possible to know with certainty if she would have died from breast cancer had she not been screened. If a woman with early-stage, screen-detected breast cancer ultimately dies of breast cancer, we know that screening was not lifesaving, but there may be many years between her initial diagnosis and death from breast cancer. Therefore, we used an agent-based simulation model to estimate how nonlifesaving early-stage breast cancer detection relayed through a social network might influence mammography rates. A major advantage of using a simulation model to study this problem is that, within the model, we can distinguish between agents (individual women) whose early detection of breast cancer through screening was lifesaving and those whose early detection was not lifesaving.

In our analysis, we explored how population-level mammography rates might change if individuals were able to account for the fact that not all early-stage, screen-detected cancers are lifesaving. We used the results of a customized survey conducted through RAND’s American Life Panel to inform and parameterize our behavioral model. Many individuals assume that detection of early-stage breast cancer through mammography is lifesaving,7 and we assumed that if those diagnoses were known to be nonlifesaving, they would not serve as such powerful motivators in encouraging others to screen. Mammography is lifesaving only when a cancer that would have been lethal when detected clinically is not lethal when detected by mammography. We focused our analysis on categories of cancers whose detection by screening was not lifesaving. First, we simulated changing the incidence of cancers that would not have been detected in the absence of mammography (i.e., were overdiagnosed). Second, we simulated hypothetical interventions that allowed individuals in our model to know that mammography was nonlifesaving for (1) cancers that would be detected but would not have been lethal in the absence of mammography (never-lethal cancers) and (2) cancers that were ultimately lethal despite being screen detected (always-lethal cancers).

METHODS

To inform and parameterize our model of mammography decision-making, we conducted a survey of 716 women through RAND’s American Life Panel.13 Because the focus of this article is on results from our simulation model, we briefly describe the survey and how it informed our model here and provide additional detail in sections 4 and 5 of the Appendix (available as a supplement to the online version of this article at http://www.ajph.org). We designed this survey to allow us to

  1. determine which mechanisms dominate the breast cancer screening decision-making process (section 5.2 of the Appendix),

  2. estimate the effects of these dominant mechanisms on future screening behavior (section 5.2 of the Appendix),

  3. estimate a memory parameter for our model (section 5.1 of the Appendix),

  4. estimate the average number of connections for individuals in the network (section 3.2 of the Appendix), and

  5. estimate age-mixing patterns in the network (section 5.4 of the Appendix).

Simulation Model

We constructed a discrete-time agent-based model in which 10 000 agents that represented women aged 40 to 100 years were linked together on a social network. In our model, each time step represented 1 year. Each year, agents (1) decide whether to be screened for breast cancer, (2) can be diagnosed with breast cancer through screening or clinically, (3) may die from breast cancer or other causes, and (4) can inform other agents about their breast cancer diagnosis and can be informed about the breast cancer diagnoses and deaths of other agents.
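
To make the yearly loop concrete, the sketch below walks through one iteration for a cohort of agents. It is an illustrative sketch rather than the authors’ implementation: the Agent fields, the toy random network, and the placeholder detection probability are assumptions made only for this example.

```python
# Minimal sketch of one model year (1 iteration); not the authors' implementation.
# Agent fields, the adjacency dict, and the 0.5% detection placeholder are assumptions.
import math
import random
from dataclasses import dataclass

@dataclass
class Agent:
    propensity: float                        # logit of the yearly screening probability
    screened: bool = False
    new_early_stage_screen_detection: bool = False
    learned_of_early_detection: bool = False

def yearly_step(agents, network, rng):
    """One discrete time step (1 year) for all agents."""
    for a in agents:
        # (1) Decide whether to be screened: propensity is a log-odds.
        a.screened = rng.random() < 1.0 / (1.0 + math.exp(-a.propensity))
        # (2)-(3) Diagnosis (screen detected or clinical) and mortality would be
        # simulated here; this sketch uses a placeholder screen-detection event.
        a.new_early_stage_screen_detection = a.screened and rng.random() < 0.005
        a.learned_of_early_detection = False
    for i, a in enumerate(agents):
        # (4) Relay early-stage, screen-detected diagnoses to network neighbors.
        if a.new_early_stage_screen_detection:
            for j in network.get(i, ()):
                agents[j].learned_of_early_detection = True

# Example: 10 000 agents on a toy random network with 8 contacts each.
rng = random.Random(0)
agents = [Agent(propensity=0.2) for _ in range(10_000)]
network = {i: [rng.randrange(10_000) for _ in range(8)] for i in range(10_000)}
yearly_step(agents, network, rng)
```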

Our methodology for modeling breast cancer diagnoses and mortality was based on the SPECTRUM breast cancer model,14 which is used to study population-level effects of breast cancer screening and treatment. This model uses stages (in situ, local, regional, or distant) to describe how advanced a cancer is, which was the same measure we used in our survey, rather than incorporating other measures such as tumor size15–18 or nodal status.19 Our model differed from the SPECTRUM model in 4 main ways. First, SPECTRUM explicitly models estrogen receptor status and adjuvant therapy with multivalent chemotherapy or tamoxifen. Because our goal was not to understand the effect of adjuvant therapy on mortality, we did not model it explicitly. Second, because we were interested in mammography rates as a model outcome, we explicitly modeled agents’ screening decisions, whereas SPECTRUM uses historical data to estimate screening rates. Third, we included overdiagnosis in our model, but SPECTRUM does not. Finally, as presented in Mandelblatt et al.,14 SPECTRUM models a historical period (in this case, 1975–2000); by contrast, our model represented a population in equilibrium, with demographics reflecting the current US population based on the most recent available census data. (We describe the biological model in detail in section 3.2.1 of the Appendix.)

We modeled each agent’s propensity to be screened, which is the logit of her yearly probability of being screened. This allowed us to parameterize our model by using the results of a logistic regression. As noted earlier, one of our goals in designing the survey was to determine which factors were most important in influencing individuals’ decisions to be screened. We used logistic regression analysis of data collected in our survey to test whether the following individual-level factors, identified in earlier studies, were associated with the likelihood of future screening in our study population: physician recommendation,20–22 insurance status,21,23–25 past screening behavior,22 age,21,26 race,26,27 education,26 and income.26,27 Furthermore, we tested whether learning about an early- or late-stage breast cancer diagnosis in a woman who had or had not been regularly screened within the past year affected a respondent’s future likelihood of being screened. (This regression analysis is described in section 5.2 of the Appendix.)

We found from the results of our survey that the following individual-level factors affected a woman’s likelihood of being screened: (1) whether the woman had insurance that partially or fully paid for mammograms, (2) whether the woman’s health care provider recommended screening mammograms, (3) the woman’s age, and (4) the woman’s past frequency of being screened. We therefore included these 4 individual-level factors in our simulation model. We also found that on learning firsthand that a neighbor in the woman’s social network was diagnosed with early-stage breast cancer through screening, the woman became more likely to be screened; we included this effect in our simulation as well. (More detail on how we defined reports of cancer diagnoses from our survey as early- or late-stage is presented in section 4.3.1 of the Appendix.)
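
For concreteness, the sketch below shows how a screening-intention logistic regression with these factors could be specified. It is a minimal illustration on synthetic data, with hypothetical variable names and coefficients; the authors’ actual specification and estimates are in section 5.2 of the Appendix.

```python
# Sketch of a screening-intention logistic regression on synthetic data.
# Column names and "true" coefficients are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 716  # survey sample size
survey = pd.DataFrame({
    "insured": rng.integers(0, 2, n),        # insurance pays (partly or fully) for mammograms
    "provider_rec": rng.integers(0, 2, n),   # provider recommends screening mammograms
    "age": rng.integers(40, 100, n),
    "past_freq": rng.integers(0, 4, n),      # past frequency of screening
    "learned_early": rng.integers(0, 2, n),  # learned of an early-stage, screen-detected cancer
})
true_logit = (-2.0 + 0.8 * survey.insured + 0.9 * survey.provider_rec
              + 0.01 * survey.age + 0.5 * survey.past_freq + 1.06 * survey.learned_early)
survey["intends_to_screen"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

fit = smf.logit("intends_to_screen ~ insured + provider_rec + age + past_freq + learned_early",
                data=survey).fit(disp=0)
print(fit.params)  # log-odds coefficients; these are the units of the model's propensity changes
```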

For each individual i, we defined a baseline propensity to be screened, x_i^0, which depended on age, insurance status, and whether the individual currently had a provider who recommended breast cancer screening. We assumed that being screened or learning about a case of early-stage, screen-detected breast cancer temporarily made an individual more likely to be screened and that the propensity to be screened tended to decay back toward the baseline propensity x_i^0 at a rate governed by a memory parameter γ estimated from our survey (section 5.1 of the Appendix). Similarly, not being screened temporarily made an individual less likely to be screened in the future. We assumed that the propensity at iteration n + 1 evolved as follows:

x_i^{n+1} = x_i^0 + γ (x_i^n − x_i^0) + Δ_i^n,

where Δ_i^n is the change in propensity to be screened for individual i at iteration n. In any iteration, Δ_i^n can take on one of 4 possible values, which depend on Δs, a change in propensity that results from getting a mammogram in iteration n; Δns, a change in propensity that results from not having a mammogram in iteration n; and Δe, a change in propensity that results from learning of at least 1 person with an early-stage, screen-detected breast cancer in iteration n. The possible values for Δ_i^n are shown in Table 1.

TABLE 1—

Values for Δ_i^n Based on Individual i’s Experience in Iteration n^a

Screening Status            | Learned of an Early-Stage, Screen-Detected Breast Cancer in Iteration n | Did Not Learn of an Early-Stage, Screen-Detected Breast Cancer in Iteration n
Screened in iteration n     | Δs + Δe                                                                  | Δs
Not screened in iteration n | Δns + Δe                                                                 | Δns

^a Values for Δ_i^n (the change at iteration n in propensity for individual i to be screened) depend on whether the individual was screened in iteration n and whether the individual learned about an early-stage, screen-detected breast cancer in iteration n.
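
The update rule and the four cases in Table 1 can be sketched in code as follows. The parameter values here are placeholders (only Δe = 1.06 matches the survey-based estimate reported in the scenarios below); this is an illustrative sketch, not the authors’ implementation.

```python
# Sketch of the propensity update (equation above) and the four cases in Table 1.
# gamma, delta_s, and delta_ns are hypothetical placeholders; delta_e = 1.06 is the
# survey-based estimate reported in the scenarios section.
import math

GAMMA = 0.5       # memory parameter (placeholder; estimated in Appendix section 5.1)
DELTA_S = 0.3     # change from being screened in iteration n (placeholder)
DELTA_NS = -0.3   # change from not being screened in iteration n (placeholder)
DELTA_E = 1.06    # change from learning of >= 1 early-stage, screen-detected cancer

def delta_i_n(screened: bool, learned: bool) -> float:
    """Table 1: the change in propensity for individual i in iteration n."""
    return (DELTA_S if screened else DELTA_NS) + (DELTA_E if learned else 0.0)

def next_propensity(x_n: float, x_baseline: float, screened: bool, learned: bool) -> float:
    """x_i^{n+1} = x_i^0 + gamma * (x_i^n - x_i^0) + Delta_i^n."""
    return x_baseline + GAMMA * (x_n - x_baseline) + delta_i_n(screened, learned)

def yearly_screening_probability(x: float) -> float:
    """The propensity is the logit of the yearly screening probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Example: a woman at her baseline propensity who is screened and hears of an
# early-stage, screen-detected cancer becomes more likely to screen next year.
x0 = 0.0
x1 = next_propensity(x0, x0, screened=True, learned=True)
print(yearly_screening_probability(x0), yearly_screening_probability(x1))  # 0.5 -> ~0.80
```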

Specific Scenarios Considered

In this article, we examine the effects of each of 3 categories of nonlifesaving early-stage, screen-detected cancers on population-level mammography rates: (1) cancers that would never have been detected clinically (i.e., were overdiagnosed), (2) cancers that would have been detected but would not have been lethal in the absence of mammography, and (3) cancers that were ultimately lethal despite being screen detected. Throughout the article, we refer to these respective categories of cancers as overdiagnosed, never-lethal, and always-lethal.

Because overdiagnosis of a cancer never benefits the patient, whereas screen detection of always-lethal and never-lethal cancers can benefit the patient by prolonging her life or by allowing less aggressive treatment than a later clinical diagnosis would, we modeled overdiagnosis differently from the other nonlifesaving diagnoses. In the first set of scenarios, we varied the overdiagnosis rate. In the second and third sets of scenarios, we did not change the number of breast cancer diagnoses. Rather, we modeled an intervention designed to inform agents that detection of certain cancers through mammography was not lifesaving. To do this, we varied the amount by which an agent’s propensity to be screened increased when she learned about the diagnosis of never-lethal or always-lethal cancers through her social network.

Normally, in our model, whenever an agent learned of an early-stage, screen-detected breast cancer in her network, her propensity to be screened increased by Δe. Recall that in the real world, when a woman is screened and diagnosed with early-stage breast cancer, it is not possible to predict what her outcome would have been had the cancer been diagnosed clinically. Therefore, it is not possible to identify a woman’s cancer as never-lethal or always-lethal. However, in our model, we simulated both a screen-detected outcome and a clinically detected outcome for each woman diagnosed with cancer, which allowed us to identify never-lethal and always-lethal cancers. We assumed that if women knew that certain diagnoses were nonlifesaving, these diagnoses would serve as less powerful motivators compared with when women assume these diagnoses are lifesaving.

In the second set of scenarios, we simulated a hypothetical intervention that informed women that the never-lethal diagnoses in their network were nonlifesaving. To do this, we decreased Δe (the change in screening propensity that results from learning of at least 1 person with an early-stage, screen-detected breast cancer) when an agent learned of a diagnosis we identified as never-lethal. In the third set of scenarios, we simulated a hypothetical intervention that informed women that the always-lethal diagnoses in their network were nonlifesaving. We expected that the changes in population-level screening rates would be highly sensitive to Δe; therefore, we conducted a sensitivity analysis in which we ran the model using values of Δe that were 1 SE above or below the mean (base case) estimate from our network survey data. From those data, we estimated a mean value of Δe = 1.06 (mean − SE = 0.54; mean + SE = 1.58), which corresponds to an odds ratio (OR) of 2.89 (1.72 and 4.85 at the lower and upper bound estimates, respectively). We ran our sensitivity analysis with a lower bound estimate Δe = 0.54 and an upper bound estimate Δe = 1.58.
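
Because propensities are on the log-odds scale, the reported odds ratios are simply the exponentials of the Δe estimates; the short check below reproduces the figures quoted above.

```python
# Worked check: odds ratios corresponding to the lower bound, base case, and
# upper bound estimates of delta_e (propensity changes are on the log-odds scale).
import math

for delta_e in (0.54, 1.06, 1.58):
    print(f"delta_e = {delta_e:.2f} -> OR = {math.exp(delta_e):.2f}")
# delta_e = 0.54 -> OR = 1.72
# delta_e = 1.06 -> OR = 2.89
# delta_e = 1.58 -> OR = 4.85
```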

We performed 5 simulations of 200 iterations for each scenario. Simulations reached equilibrium after approximately 50 iterations; therefore, we excluded the first 50 iterations of each simulation and averaged the mammography rates found in iterations 51 through 200 from all 5 simulations to find the average estimated mammography rate for each scenario.
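
This averaging procedure can be sketched as follows; run_model is a hypothetical stand-in for the full agent-based simulation, not the authors’ code.

```python
# Sketch of the scenario averaging: 5 runs of 200 iterations, with a 50-iteration
# burn-in discarded before averaging. run_model is a stand-in, not the authors' code.
import numpy as np

BURN_IN, N_ITER, N_RUNS = 50, 200, 5

def run_model(n_agents: int, n_iterations: int, seed: int) -> np.ndarray:
    """Stand-in for the agent-based simulation; returns yearly mammography rates."""
    rng = np.random.default_rng(seed)
    return 0.55 + 0.02 * rng.standard_normal(n_iterations)  # placeholder series

retained = np.concatenate([
    run_model(n_agents=10_000, n_iterations=N_ITER, seed=s)[BURN_IN:]  # iterations 51-200
    for s in range(N_RUNS)
])
print(f"scenario mean = {retained.mean():.3f}, SD = {retained.std():.3f}")  # SD = error bars
```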

RESULTS

We first examined overdiagnosed breast cancers, which are nonlifesaving, early-stage cancers that never would have been detected clinically if not detected through mammography. Most overdiagnosis rate estimates lie between 1% and 10%,10 and some studies estimate rates of 30%10 or higher.28 We varied the overdiagnosis rate from 0% to 30% to examine the possible effects of overdiagnosis on population-level mammography rates. Figure 1 shows these results.

FIGURE 1—

Upper bound, base case, and lower bound estimates for the average population-level yearly mammography rates as a function of the overdiagnosis rate based on simulation of 10 000 women’s screening decisions.

Note. Error bars show the SD of the mammography rates estimated across all iterations for each scenario.

Effect of Overdiagnosis on Mammography Rates

We found that an overdiagnosis rate of 30% would be expected to increase yearly mammography rates by approximately 4.7 percentage points compared with those we would expect without overdiagnosis, assuming our base case of Δe = 1.06 (OR = 2.89). This effect was approximately linear over the range we considered, such that each 10 percentage point increase in the overdiagnosis rate would be expected to increase population-level mammography rates by approximately 1.6 percentage points. Our sensitivity analysis showed that the effect most likely falls between 0.92 percentage points (assuming our lower bound on Δe) and 1.70 percentage points (assuming our upper bound on Δe) per 10 percentage point increase in the overdiagnosis rate.
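
As a quick worked check of the reported slope, spreading the base-case 4.7 percentage point increase at 30% overdiagnosis evenly across that range gives roughly 1.6 percentage points per 10 percentage points of overdiagnosis:

```python
# Worked check of the base-case slope: the 4.7 percentage point increase at a 30%
# overdiagnosis rate, expressed per 10 percentage points of overdiagnosis.
print(f"{4.7 / 30 * 10:.2f}")  # 1.57, i.e., approximately 1.6 percentage points
```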

Effect of Never-Lethal Cancers on Mammography Rates

We modeled an intervention that informed agents in our model that mammography was nonlifesaving for the never-lethal cancers in the social network. We modeled this intervention by reducing Δe only for those cancers that would not be lethal if detected clinically. Figure 2 shows how reducing Δe for these cancers would affect population-level mammography rates. In our base case, we found that overall mammography rates would decrease by 13.8 percentage points per year if we were able to reduce the effect of learning about these cancers through a social network from the status quo (0% reduction) to no effect at all (100% reduction). On the basis of our sensitivity analysis, we estimated that the reduction is most likely between 6.7 percentage points (assuming our lower bound on Δe) and 17.8 percentage points (assuming our upper bound on Δe).

FIGURE 2—

Upper bound, base case, and lower bound estimates for the average population-level yearly mammography rates as a function of the percent reduction in Δe for never-lethal cancers based on simulation of 10 000 women’s screening decisions.

Note. Error bars show the SD of the mammography rates estimated across all iterations for each scenario.

Effect of Always-Lethal Cancers on Mammography Rates

Finally, we modeled an intervention that informed agents in our model that mammography was nonlifesaving for the always-lethal cancers in the social network. We modeled the intervention by reducing Δe only for those cancers that are ultimately lethal regardless of whether they are detected clinically or through screening. Figure 3 shows how reducing Δe for these cancers would affect population-level mammography rates. In our base case, we found that overall mammography rates would decrease by 2.9 percentage points per year if we were able to reduce the effect of learning about these cancers through a social network from the status quo (0% reduction) to no effect at all (100% reduction). From our sensitivity analysis, we found that the reduction is most likely between 1.5 percentage points (assuming our lower bound on Δe) and 3.2 percentage points (assuming our upper bound on Δe).

FIGURE 3—

Upper bound, base case, and lower bound estimates for the average population-level yearly mammography rates as a function of the percent reduction in Δe for always-lethal cancers based on simulation of 10 000 women’s screening decisions.

Note. Error bars show the SD of the mammography rates estimated across all iterations for each scenario.

DISCUSSION

We constructed an agent-based simulation model to estimate the effect of nonlifesaving, early-stage, screen-detected breast cancers on screening rates in a social network. These cancers represent instances in which mammography does not reduce mortality. We assumed that individuals who learn about early-stage, screen-detected breast cancers from their social network neighbors are more likely to be screened in future iterations and based the magnitude of this effect on empirical survey results. We found that learning through a social network about 2 types of cancer—overdiagnosed cancers and early-stage, screen-detected cancers that later become lethal—had only modest effects on overall screening rates: approximately 1.6 percentage points per 10 percentage point increase in the overdiagnosis rate and 2.9 percentage points, respectively. By contrast, we found that learning through a social network about a third type of cancer—early-stage, screen-detected cancers that would not be lethal if detected clinically—did substantially affect mammography rates, by approximately 13.8 percentage points. These never-lethal cancers had a large effect on screening rates because they constituted most—about 55%—of early-stage, screen-detected cancers in our model, reflecting that many early-stage, screen-detected cancers would be treatable if detected clinically. By contrast, overdiagnosed cancers and always-lethal cancers represented about 13% (assuming 10% overdiagnosis) and 9% of early-stage, screen-detected cancers in our model, respectively. Screening was lifesaving in the remaining 23% of early-stage detections in our model.

Given the lack of consensus on mammography screening guidelines, the implications of our analysis for breast cancer screening vary depending on one’s view of such screening. For those who believe that women are screened too early and too often, our analysis suggests that targeted interventions designed to inform patients about the true likelihood that detection of early-stage breast cancer is lifesaving could reduce overscreening. However, interventions may not be necessary if overscreening is deemed not to be a problem.

Our analysis has implications beyond breast cancer screening. It suggests that when the social network information individuals use to make preventive health decisions diverges from the population-level information professional organizations use to make recommendations, demand for an intervention driven from the “bottom up” by patients can diverge widely from demand driven from the “top down” by recommendations. It is not difficult to imagine 2 screening tests: one that produces a moderate reduction in mortality with a moderate number of “early detections” and another that produces no mortality reduction but a large number of “early detections.” Professional organizations would clearly prefer the former, but our results suggest that patient demand could be substantially greater for the latter. Given their limited time and resources, patients may neglect truly beneficial interventions and favor others of questionable utility.

Limitations

Our analysis had several limitations in both the survey used to parameterize the model and the model itself. The survey was cross-sectional; therefore, we measured intention to screen in the future rather than actual future screening behavior. We also conducted the survey at the individual level, so we collected respondents’ perceptions of the breast cancer histories of those in their networks rather than pairing those perceptions with the network members’ own reports of their health histories. An important extension of this work would be to collect both true health histories and perceptions of the health histories of others in the social network, which would allow us to model misinformation.

In addition, we did not use an empirical network structure, so our model may lack certain nuances of the true social network structure. In particular, we assumed an “undirected” network so that for any pair of neighbors, either neighbor would learn if the other was diagnosed with early-stage, screen-detected breast cancer. We also gave all ties equal weight. In reality, social networks are directional and weighted.29 Despite this limitation, we were able to capture several important aspects of women’s social networks, including the mean number of contacts and age mixing.

Finally, our model was designed to study mammography rates only in equilibrium; we did not predict how mammography rates might evolve over time. Our model could be extended in future work to examine dynamic effects. This would be important for predicting how screening behavior might change shortly after an intervention.

Conclusions

Despite these limitations, we believe that this study addresses important issues in cancer screening and represents an important methodological advance. We developed a simulation model of breast cancer screening and diagnosis that allowed us to endogenously estimate breast cancer screening rates with an empirically based model of women’s breast cancer screening decisions. This represents an important extension of existing agent-based simulations of breast cancer screening and outcomes. In general, such simulation models are important tools for estimating the potential effects of screening methods or treatments.30–36

We have shown that screening rates may be substantially affected by outcomes in the social network. Therefore, if a novel intervention substantially altered the rate of an outcome or people’s awareness or perception of an outcome (such as the rate of early-stage, screen-detected cancers), we would expect screening rates to change as well, and historical screening rates may be a poor indicator of screening rates after the introduction of the intervention. Furthermore, the methodology we have developed could be applied to screening tests for other cancers or, more broadly, to models of other preventive health interventions.

Acknowledgments

Research reported in this publication was supported by the National Cancer Institute (award R21CA157571) and by the National Institute on Minority Health and Health Disparities (award R24MD008818).

We thank Raffaele Vardavas, Courtney Gidengil, Kayla de la Haye, Christopher Marcum, Mary Sehl, and Craig Pollack for their insights and many helpful discussions. We additionally thank Amber Jaycocks for her insights and assistance with initial analysis of survey data.

Note. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Human Participant Protection

The survey reported on in this article was conducted through the RAND American Life Panel, which obtained institutional review board approval for the study.

References

  1. Gigerenzer G, Mata J, Frank R. Public knowledge of benefits of breast and prostate cancer screening in Europe. J Natl Cancer Inst. 2009;101(17):1216–1220. doi: 10.1093/jnci/djp237.
  2. Chamot E, Perneger T. Misconceptions about efficacy of mammography screening: a public health dilemma. J Epidemiol Community Health. 2001;55(11):799–803. doi: 10.1136/jech.55.11.799.
  3. Domenighetti G, D’Avanzo B, Egger M, et al. Women’s perception of the benefits of mammography screening: population-based survey in four countries. Int J Epidemiol. 2003;32(5):816–821. doi: 10.1093/ije/dyg257.
  4. McQueen A, Kreuter MW. Women’s cognitive and affective reactions to breast cancer survivor stories: a structural equation analysis. Patient Educ Couns. 2010;81:S15–S21. doi: 10.1016/j.pec.2010.08.015.
  5. Larson RJ, Woloshin S, Schwartz LM, Welch HG. Celebrity endorsements of cancer screening. J Natl Cancer Inst. 2005;97(9):693–695. doi: 10.1093/jnci/dji117.
  6. Chapman S, McLeod K, Wakefield M, Holding S. Impact of news of celebrity illness on breast cancer screening: Kylie Minogue’s breast cancer diagnosis. Med J Aust. 2005;183(5):247–250. doi: 10.5694/j.1326-5377.2005.tb07029.x.
  7. Schwartz LM, Woloshin S, Fowler FJ Jr, Welch HG. Enthusiasm for cancer screening in the United States. JAMA. 2004;291(1):71–78. doi: 10.1001/jama.291.1.71.
  8. Welch HG, Frankel BA. Likelihood that a woman with screen-detected breast cancer has had her “life saved” by that screening. Arch Intern Med. 2011;171(22):2043–2046. doi: 10.1001/archinternmed.2011.476.
  9. Smith RA, Saslow D, Sawyer KA, et al. American Cancer Society guidelines for breast cancer screening: update 2003. CA Cancer J Clin. 2003;53(3):141–169. doi: 10.3322/canjclin.53.3.141.
  10. US Preventive Services Task Force. Screening for breast cancer: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2009;151(10):716–726. doi: 10.7326/0003-4819-151-10-200911170-00008.
  11. Byers T, Levin B, Rothenberger D, Dodd GD, Smith RA. American Cancer Society guidelines for screening and surveillance for early detection of colorectal polyps and cancer: update 1997. CA Cancer J Clin. 1997;47(3):154–160. doi: 10.3322/canjclin.47.3.154.
  12. Moyer VA. Screening for prostate cancer: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2012;157(2):120–134. doi: 10.7326/0003-4819-157-2-201207170-00459.
  13. RAND American Life Panel. September 9, 2012. Available at: https://mmicdata.rand.org/alp. Accessed February 1, 2013.
  14. Mandelblatt J, Schechter CB, Lawrence W, Yi B, Cullen J. The SPECTRUM population model of the impact of screening and treatment on US breast cancer trends from 1975 to 2000: principles and practice of the model methods. J Natl Cancer Inst Monogr. 2006;2006(36):47–55. doi: 10.1093/jncimonographs/lgj008.
  15. Fryback DG, Stout NK, Rosenberg MA, Trentham-Dietz A, Kuruchittham V, Remington PL. The Wisconsin breast cancer epidemiology simulation model. J Natl Cancer Inst Monogr. 2006;2006(36):37–47. doi: 10.1093/jncimonographs/lgj007.
  16. Tan SY, van Oortmarssen GJ, de Koning HJ, Boer R, Habbema JDF. The MISCAN-Fadia continuous tumor growth model for breast cancer. J Natl Cancer Inst Monogr. 2006;2006(36):56–65. doi: 10.1093/jncimonographs/lgj009.
  17. Hanin LG, Miller A, Zorin A, Yakovlev AY. The University of Rochester model of breast cancer detection and survival. J Natl Cancer Inst Monogr. 2006;2006(36):66–78. doi: 10.1093/jncimonographs/lgj010.
  18. Plevritis SK, Sigal BM, Salzman P, Rosenberg J, Glynn P. A stochastic simulation model of US breast cancer mortality trends from 1975 to 2000. J Natl Cancer Inst Monogr. 2006;2006(36):86–95. doi: 10.1093/jncimonographs/lgj012.
  19. Berry DA, Inoue L, Shen Y, et al. Modeling the impact of treatment and screening on US breast cancer mortality: a Bayesian approach. J Natl Cancer Inst Monogr. 2006;2006(36):30–36. doi: 10.1093/jncimonographs/lgj006.
  20. Meissner HI, Breen N, Taubman ML, Vernon SW, Graubard BI. Which women aren’t getting mammograms and why? (United States). Cancer Causes Control. 2007;18(1):61–70. doi: 10.1007/s10552-006-0078-7.
  21. Maxwell CJ, Bancej CM, Snider J. Predictors of mammography use among Canadian women aged 50–69: findings from the 1996/97 National Population Health Survey. CMAJ. 2001;164(3):329–334.
  22. Mayne L, Earp J. Initial and repeat mammography screening: different behaviors/different predictors. J Rural Health. 2003;19(1):63–71. doi: 10.1111/j.1748-0361.2003.tb00543.x.
  23. Sabatino SA, Coates RJ, Uhler RJ, Breen N, Tangka F, Shaw KM. Disparities in mammography use among US women aged 40–64 years, by race, ethnicity, income, and health insurance status, 1993 and 2005. Med Care. 2008;46(7):692–700. doi: 10.1097/MLR.0b013e31817893b1.
  24. Rodríguez MA, Ward LM, Pérez-Stable EJ. Breast and cervical cancer screening: impact of health insurance status, ethnicity, and nativity of Latinas. Ann Fam Med. 2005;3(3):235–241. doi: 10.1370/afm.291.
  25. Hsia J, Kemper E, Kiefe C, et al. The importance of health insurance as a determinant of cancer screening: evidence from the Women’s Health Initiative. Prev Med. 2000;31(3):261–270. doi: 10.1006/pmed.2000.0697.
  26. Calle EE, Flanders WD, Thun MJ, Martin LM. Demographic predictors of mammography and Pap smear screening in US women. Am J Public Health. 1993;83(1):53–60. doi: 10.2105/ajph.83.1.53.
  27. O’Malley MS, Earp JA, Hawley ST, Schell MJ, Mathews HF, Mitchell J. The association of race/ethnicity, socioeconomic status, and physician recommendation for mammography: who gets the message about breast cancer screening? Am J Public Health. 2001;91(1):49–54. doi: 10.2105/ajph.91.1.49.
  28. Bleyer A, Welch HG. Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med. 2012;367(21):1998–2005. doi: 10.1056/NEJMoa1206809.
  29. Granovetter M. The strength of weak ties: a network theory revisited. Sociol Theory. 1983;1(1):201–233.
  30. Berry DA, Cronin KA, Plevritis SK, et al. Effect of screening and adjuvant therapy on mortality from breast cancer. N Engl J Med. 2005;353(17):1784–1792. doi: 10.1056/NEJMoa050518.
  31. McMahon PM, Hazelton WD, Kimmel M, Clarke LD. CISNET lung models: comparison of model assumptions and model structures. Risk Anal. 2012;32(suppl 1):S166–S178. doi: 10.1111/j.1539-6924.2011.01714.x.
  32. Mandelblatt JS, Cronin KA, Bailey S, et al. Effects of mammography screening under different screening schedules: model estimates of potential benefits and harms. Ann Intern Med. 2009;151(10):738–747. doi: 10.1059/0003-4819-151-10-200911170-00010.
  33. Gulati R, Wever EM, Tsodikov A, et al. What if I don’t treat my PSA-detected prostate cancer? Answers from three natural history models. Cancer Epidemiol Biomarkers Prev. 2011;20(5):740–750. doi: 10.1158/1055-9965.EPI-10-0718.
  34. Etzioni R, Tsodikov A, Mariotto A, et al. Quantifying the role of PSA screening in the US prostate cancer mortality decline. Cancer Causes Control. 2008;19(2):175–181. doi: 10.1007/s10552-007-9083-8.
  35. Welch HG, Albertsen PC. Prostate cancer diagnosis and treatment after the introduction of prostate-specific antigen screening: 1986–2005. J Natl Cancer Inst. 2009;101(19):1325–1329. doi: 10.1093/jnci/djp278.
  36. Hubbard RA, Johnson E, Hsia R, Rutter CM. The cumulative risk of false-positive fecal occult test after 10 years of screening. Cancer Epidemiol Biomarkers Prev. 2013;22(9):1612–1619. doi: 10.1158/1055-9965.EPI-13-0254.
