Abstract
Decision-analytic models are increasingly used to inform health policy decisions. These models synthesize available data on disease burden and intervention effectiveness to project estimates of the long-term consequences of care, which are often absent when clinical or policy decisions must be made. While models have been influential in informing US cancer screening guidelines, they have typically evaluated screening under ideal conditions; incorporation of detailed data on real-world screening practice has been limited, given the complexity of screening processes and behaviors across the diverse health delivery systems in the United States. We describe the synergies that exist between decision-analytic models and health care utilization data that are increasingly accessible through research networks that assemble data from the growing number of electronic medical record systems. In particular, we present opportunities to enrich cancer screening models by grounding analyses in real-world data with the goals of projecting the harms and benefits of current screening practices, evaluating the value of existing and new technologies, and identifying the weakest links in the cancer screening process where efforts for improvement may be most productively focused. We highlight the example of the National Cancer Institute–funded consortium Population-based Research Optimizing Screening through Personalized Regimens (PROSPR), a collaboration to harmonize and analyze screening process and outcomes data on breast, colorectal, and cervical cancers across seven research centers. The pairing of models with such data can create more robust models to not only better inform policy but also guide health care systems toward the best approaches to improving the provision of cancer screening in the United States.
An overarching goal of health policy is to advance high-quality care and discourage low-quality and harmful care. Ideally, health policy decisions are evidence based. While information about intermediate or short-term outcomes is often available from clinical studies, policy decisions often must be made in the absence of data on the long-term consequences of care. Cancer screening policies provide a good example of this. The natural history of a cancer can extend over decades, and consequently it can take 10 years or longer to assess the impact of new cancer screening interventions on long-term (eg, mortality) outcomes. Screening programs are often adopted based primarily on short-term outcomes, such as screening test sensitivity and specificity, under the assumption that detection of precancerous lesions and early-stage cancers will reduce cancer incidence, morbidity, and mortality.
In the absence of long-term empirical data, decision-analytic models (eg, decision trees, cohort models, and microsimulation models; hereafter “models”) have emerged as an approach to synthesize existing data to make long-term projections either as we await new evidence or under “what if” scenarios that are otherwise unfeasible or unethical to evaluate in clinical studies. Models have been influential in informing clinical guidelines in the United States, including the US Preventive Services Task Force (USPSTF) recommendations on screening for breast (1), colorectal (2), cervical (3), and lung (4) cancers. The key to effective modeling is integrating high-quality data. In this Commentary, we discuss the synergies between cancer models and emerging research networks that leverage data on health care utilization through the increased adoption of electronic medical record (EMR) systems. We highlight as a recent example the Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) consortium, a National Cancer Institute (NCI)–funded multisite collaborative focused on studying the processes of screening for breast, cervical, and colorectal cancers across a diverse range of US health care settings.
Strengths and Limitations of Modeling
Like any methodological approach, models have both strengths and limitations that must be considered when used to inform clinical guidelines (5,6). One strength is that models are explicit, systematic, and quantitative: Modelers can describe the data used in model development, the structural and analytic assumptions that are made, and the ability of the model to represent the decision problem at hand. Models can incorporate information about the natural history of disease, the ability of tests to detect and diagnose disease, the effectiveness of treatments, and screening participation patterns to project the health impact of interventions under real-world or hypothetical conditions. Additionally, the inclusion of information on resource use and costs enables evaluation of the budget impact and value (ie, cost-effectiveness) of interventions. Because no single empirical study can address all factors relevant to screening, the process of model-building requires multiple data sources and is inherently transdisciplinary (7). Because models piece together available information about disease processes and interventions, they can also help researchers identify what factors are most influential on important outcomes, uncover critical gaps in the state of the science for a specific research question, and assess the value of new information on policy decisions (8).
The primary challenge is that model validity depends on the availability of high-quality data that inform model inputs, structures, and assumptions. Effective modeling requires rigorous specification of model parameters and transparency in underlying assumptions. Cancer natural history models typically require technical information on the progression through precancer stages, tumor growth, and survival. Ideally, models would also integrate behaviors at the patient, provider, facility, or health system levels, such as screening adherence, diagnostic referral rates, waiting times, and costs of care. For example, to accurately model screening as currently practiced in a population, understanding the screening behavior of individuals over time would be desirable: At what age do individuals initiate screening? How often do people fail to return for evaluation of a positive test result? How frequently do people screen, and is the screening interval related to patient characteristics (eg, age, sex, race/ethnicity), test results (eg, a positive screen), or disease risk? How does screening adherence (ie, initiation, return for follow-up, and rescreening) correlate within individuals? Such data would facilitate the evaluation of personalized screening recommendations by examining the risk-benefit tradeoff in patient subgroups with specific behaviors and characteristics. Given the complexity of screening practices and behaviors, and the relative ease of simulating "ideal" conditions (ie, perfect adherence to screening regimens), model-based evaluations to date have fallen short of comprehensively incorporating detailed process data reflecting screening "as practiced."
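The behavioral questions above are, in modeling terms, questions about correlated longitudinal parameters. The sketch below (all rates, coefficients, and names are hypothetical illustrations, not PROSPR estimates) shows one minimal way a microsimulation can induce the within-person correlation of initiation, rescreening, and follow-up that such data would allow modelers to estimate directly:

```python
import random


def clamp(p):
    """Keep a probability within [0, 1]."""
    return min(1.0, max(0.0, p))


def simulate_adherence(n=100_000, seed=1):
    """Sketch of within-person correlation in screening adherence.

    Each simulated person carries a latent "propensity" that shifts all of
    their screening behaviors (initiation, rescreening, and follow-up of an
    abnormal result) in the same direction. All rates and coefficients are
    hypothetical placeholders, not estimates from PROSPR data.
    """
    rng = random.Random(seed)
    people = []
    for _ in range(n):
        z = rng.gauss(0, 1)  # latent adherence tendency
        people.append({
            "initiated": rng.random() < clamp(0.70 + 0.15 * z),
            "rescreened": rng.random() < clamp(0.60 + 0.20 * z),
            "followed_up": rng.random() < clamp(0.80 + 0.10 * z),
        })
    return people


def rate(people, key, given=None):
    """Observed rate of `key`, optionally among people with `given` true."""
    subset = [p for p in people if given is None or p[given]]
    return sum(p[key] for p in subset) / len(subset)


people = simulate_adherence()
# Correlated adherence: those who initiated screening rescreen more often
# than the population overall, as the questions in the text anticipate.
print(rate(people, "rescreened", given="initiated"))
print(rate(people, "rescreened"))
```

In practice, longitudinal data such as PROSPR's would replace the latent-propensity guess with directly estimated joint distributions of these behaviors by age, sex, race/ethnicity, and prior test results.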
Research Consortia
In recent years, several large research consortia, including the Breast Cancer Surveillance Consortium (BCSC) (9), the Cancer Research Network (CRN) (10) (and, more generally, the Health Care Systems Research Network [HCSRN], formerly known as the HMO Research Network [HMORN]) (11), and the National Patient-Centered Clinical Research Network (PCORnet) (12), have undertaken the daunting task of identifying and analyzing important factors related to health care, including the quality and delivery of care and patient outcomes, through multidisciplinary teams and approaches. These data resources assemble millions of observations, usually from multiple institutions, containing individual-level data on demographics, risk factors, and clinical encounters and outcomes. Cancer modeling groups, including members of the Cancer Intervention and Surveillance Modeling Network (CISNET) consortium, have engaged in active collaborations with these research networks to take advantage of these data to enhance model inputs, including prevalence of risk factors, test characteristics, short-term screening outcomes, and costs (13,14). The increasing use of EMR systems in health care delivery settings will further increase opportunities to study and characterize real-world health care practice and to incorporate these findings into decision models.
The PROSPR consortium is a relatively new consortium focused on informing and improving cancer screening processes for breast, colorectal, and cervical cancers through a multisite collaboration (15–18). The PROSPR consortium comprises seven research centers, as well as a statistical coordinating center, geographically dispersed across the United States representing a spectrum of health care institutions, ranging from large health plans to population-based state registries. PROSPR data describe real-world screening practice at multiple levels, including patient (eg, demographic, risk factors), provider (eg, specialty), and facility (eg, location, availability of reminder systems) levels. Unlike other cancer networks to date, the PROSPR consortium has specified comparative effectiveness research as a requisite and central project goal. With five of seven research centers using decision-analytic modeling for such projects, the PROSPR consortium serves as an example of how cancer networks can work closely with modeling teams to integrate data.
Modeling Applications Through the PROSPR Consortium
The pairing of decision-analytic models with data from large-scale research consortia provides an important opportunity to enhance model quality by grounding analyses in real-world settings and issues; in turn, models can identify high-priority areas for health improvement through projections of both the short- and long-term comparative effectiveness of screening approaches.
As demonstrated with other research networks, there are several fundamental ways that data from the PROSPR consortium can strengthen existing models and stimulate novel models, including: 1) illuminating both systematic and random between-person variability to inform natural history models; 2) providing directly observed information, derived from clinical records, on individual-level screening behaviors in the general population to quantify the effectiveness and value of current screening practices and to determine the influence of particular factors and alternative scenarios on outcomes; 3) providing evidence on variation in care at provider and system levels, which can be used to assess the potential impacts of adopting “best practices” more widely; 4) linking screening practice with outcomes to test model predictions and assess external validity, an important step in model-based evaluations (19); and 5) providing opportunities for comparative modeling (ie, cross validity (19)) in which independent modeling teams use common, core data inputs to address specific research questions and compare results across models. Model applications within the PROSPR consortium have begun to capitalize on these opportunities and include both policy evaluations and advances in modeling methodologies and approaches.
Policy Questions
Table 1 provides an overview of the types of models that are being used to address a range of policy questions across PROSPR Research Centers (20–26). Common themes of analyses for the different cancer sites include: 1) quantifying the long-term consequences of current cancer screening as practiced in different health organizations and systems; 2) projecting the separate and interactive effects of different breakdowns along the screening process to help inform where to prioritize investments to improve screening impact; 3) evaluating the comparative effectiveness and cost-effectiveness of newly available or anticipated technologies related to detection, diagnosis, and treatment of cancer; and 4) examining the influence of screening factors on health disparities across subgroup populations (eg, by race/ethnicity) and evaluating interventions to alleviate disparities. For example, the cervical cancer model has been used to simulate current screening practice in New Mexico and quantify its inefficiencies relative to national guidelines (24). Likewise, the colorectal cancer models are being used to predict the number of lives saved by the timely follow-up of positive fecal occult blood test (FOBT) results.
Table 1.
Overview of models and policy questions in the PROSPR consortium
| Lead investigator(s), PRC | Model type | PROSPR data source | Example analysis or policy question related to PROSPR | PROSPR data elements used for modeling | Model outcomes |
|---|---|---|---|---|---|
| Breast cancer | |||||
| Anna Tosteson, Dartmouth/ Brigham and Women’s Hospital | Decision tree | Dartmouth-Brigham clinical provider network | What are high-value targets for screening process improvement? | Screening abnormality rates; rate and timing of screening abnormality follow-up | Near-term measures of process outcomes (eg, resolution of initially abnormal mammograms) |
| Brian Sprague, University of Vermont; Amy Trentham-Dietz and Oguzhan Alagoz, University of Wisconsin; Natasha Stout, Harvard Medical School | Microsimulation (20–22) | Vermont Breast Cancer Surveillance System | What changes in the harm/benefit balance for breast cancer screening strategies would result from the development of accurate prognostic markers that could minimize overtreatment of DCIS? What are the potential effects of new imaging modalities (eg, digital breast tomosynthesis) on the detection of DCIS and rates of overdiagnosis associated with breast cancer screening? | Screening test characteristics, by imaging modality and patient characteristics; cancer characteristics, by age and method of detection; distribution of treatments received, by age and cancer characteristics | Long-term outcomes from screening and use of diagnostic tests (eg, breast cancer mortality, QALY, costs, false-positive tests, treatments avoided) |
| Cervical cancer | |||||
| Jane Kim, University of New Mexico/ Harvard T. H. Chan School of Public Health | Microsimulation (23,24) | New Mexico HPV Pap Registry | What are the benefits, harms, and cost-effectiveness of current cervical cancer screening practice, compared with established guidelines? What are priority investments in improving current failures in the screening process? How do model outputs compare against empiric screening outcomes (ie, model validation)? | Screening process measures: screening intensity by age; HPV triage test utilization, by preceding Pap result; compliance to diagnostic visit (ie, biopsy), by preceding Pap result; compliance to precancer treatment, by preceding biopsy result. Cumulative risk of high-grade precancer following abnormal Pap results, by age and HPV genotype, over a five-year period | Harms (eg, colposcopy/biopsy rates), costs, and health benefits (eg, reductions in lifetime cancer risk and mortality, gains in LY/QALY); cost-effectiveness of current practice vs predicted outcomes with guidelines-based screening; net monetary benefits of improving breakdowns along the screening pathway |
| Colorectal cancer | |||||
| Carolyn Rutter, RAND Corporation; Ann Zauber, Kaiser Foundation Research Institute/ Memorial Sloan Kettering | Microsimulation (20,25,26); Microsimulation (20) | Group Health Research Institute; Kaiser Permanente Northern and Southern California | What is the effectiveness of colorectal cancer screening as practiced in reducing mortality? | Screening process measures: uptake of screening; time between screening tests; types of screening tests used; time to follow-up of positive test results | Long-term outcomes from screening, including reductions in cancer incidence and mortality attributable to screening |
* DCIS = ductal carcinoma in situ; HPV = human papillomavirus; LY = life-years; PRC = Population-Based Research Optimizing Screening through Personalized Regimens research center; PROSPR = Population-Based Research Optimizing Screening through Personalized Regimens; QALY = quality-adjusted life-years.
Model Validation Exercises
Models can be used to simulate many aspects of the disease process that are observable, which enables assessments of model validity against empirical data. Data that are not used directly in model development (ie, as direct inputs or in model calibration) may be used to evaluate the predictive validity of the models. For example, the cumulative risk of high-grade precancerous lesions following abnormal Pap smear results is an output of the model that can then be compared against outcomes from real-world practice (27,28). Models can be used to predict the number of cancers detected in the next year, stratified by age (and sex for colorectal cancer) and past screening history; for breast cancer, models can also predict stage distribution by breast density, screening frequency, and age. Data describing the cumulative risk of a false-positive FOBT over a 10-year program of screening (29) could also be used to validate assumptions about within-person correlation of these tests over time. These types of evaluations are critical for validating models that are used for policy development.
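The false-positive validation target just described can be made concrete with a small sketch. The mixture model below (all rates are hypothetical placeholders, not estimates from the cited studies) shows why within-person correlation matters: a model that assumes false positives are independent across screening rounds will predict a higher 10-year cumulative risk than a model that allows a persistently "false-positive-prone" subgroup, so comparing both predictions against empirical estimates helps validate the correlation assumption.

```python
def cumulative_fp_risk_mixture(p_prone, rate_prone, rate_other, rounds=10):
    """Cumulative probability of >=1 false-positive FOBT over `rounds` screens.

    Within-person correlation is represented by a simple two-group mixture:
    a "prone" subgroup has a persistently higher per-round false-positive
    rate. All rates are hypothetical placeholders for illustration.
    """
    risk_prone = 1 - (1 - rate_prone) ** rounds
    risk_other = 1 - (1 - rate_other) ** rounds
    return p_prone * risk_prone + (1 - p_prone) * risk_other


def cumulative_fp_risk_independent(p_prone, rate_prone, rate_other, rounds=10):
    """Same marginal per-round false-positive rate, but results assumed
    independent across rounds (ie, no within-person correlation)."""
    marginal = p_prone * rate_prone + (1 - p_prone) * rate_other
    return 1 - (1 - marginal) ** rounds


# Hypothetical inputs: 20% of screenees are "prone" with a 15% per-round
# false-positive rate; everyone else has a 3% rate.
correlated = cumulative_fp_risk_mixture(0.20, 0.15, 0.03)
independent = cumulative_fp_risk_independent(0.20, 0.15, 0.03)
# The independence assumption yields the higher 10-year cumulative risk,
# because under correlation the repeat false positives concentrate in the
# prone subgroup rather than spreading across new individuals.
```

An observed 10-year cumulative risk (eg, from data such as ref. 29) falling closer to one prediction than the other would favor that model's correlation structure.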
Opportunities for Comparative Modeling
Research networks are ideally poised to provide common data elements harmonized across contributing research centers that can be used for comparative modeling exercises. PROSPR estimates of screening processes (eg, screening rates, loss to follow up) can be incorporated into multiple models and used to predict and compare the impact of screening as practiced on cancer incidence and mortality. Because models make different assumptions about unobservable disease processes, comparative modeling provides more robust predictions than those obtained from a single model. Comparative modeling within the PROSPR consortium can be facilitated by the fact that several of the PROSPR modeling groups are also members of the CISNET modeling consortium that has for many years focused on comparative modeling across multiple cancer organ sites, including those studied in PROSPR (1,20,30).
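A comparative modeling exercise of this kind can be sketched schematically. In the toy example below, two "models" share harmonized inputs (screening uptake, test sensitivity) but differ in an unobservable natural-history assumption; all numbers and names are hypothetical, and the spread across models conveys the structural uncertainty that motivates comparative modeling:

```python
def deaths_averted_fraction(uptake, sensitivity, p_fatal_if_undetected):
    """Toy model: fraction of cancer deaths averted by a screening program.

    `p_fatal_if_undetected` stands in for an unobservable natural-history
    assumption (how often a screen-detectable cancer would otherwise prove
    fatal); real microsimulation models encode this far more richly.
    """
    return uptake * sensitivity * p_fatal_if_undetected


# Common, harmonized inputs (hypothetical values shared across groups),
# analogous to PROSPR-derived screening process estimates.
common = {"uptake": 0.65, "sensitivity": 0.75}

# Each model's differing assumption about the unobservable disease process.
natural_history = {"model_A": 0.90, "model_B": 0.70}

predictions = {
    name: deaths_averted_fraction(**common, p_fatal_if_undetected=p)
    for name, p in natural_history.items()
}
# The range across models summarizes structural uncertainty that a
# single-model analysis would hide.
print(predictions)
```

The design choice mirrors CISNET practice: inputs that can be observed are fixed in common, so that remaining disagreement between models isolates the effect of their differing natural-history assumptions.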
Transdisciplinary Collaboration
Advancements in cancer-related research require expertise from distinct fields. Within the PROSPR consortium, disease modelers have the opportunity to collaborate directly with researchers in the fields of clinical medicine, health services, epidemiology, health disparities, behavioral science, biostatistics, and operations research. These specialists can assist modelers by providing both expertise to inform modeled processes and data that will make models more robust; in turn, modelers can guide future work in these fields by revealing insights into disease processes and interventions to prevent and treat disease. Importantly, these collaborations will help strengthen the methodologic toolkit of comparative effectiveness research as currently practiced in applied cancer prevention and control research.
Challenges
Data from cancer research collaboratives, including PROSPR, provide a basis for improving our understanding of—and therefore our ability to accurately model—screening as practiced. However, it takes a considerable amount of time to accrue data that are mature enough to describe longitudinal screening patterns that can be used for model inputs. This issue is most readily apparent for colorectal cancer screening because colonoscopy screening intervals can be as long as 10 years. It also takes time to collect data describing long-term outcomes that can be used for model validation. As these data accrue, models can and should be validated against shorter-term clinical outcomes, such as false-positive rates and rates of screen-detected cancers. While data quality is likely to be high in large research networks given the extensive data quality assurance processes in place, the data are observational in nature, making causal inference challenging; as a result, efforts must be made to adjust for biases in the data, to represent known sources of bias explicitly in the simulations, or to explore potential sources of bias through sensitivity analyses. Finally, while the data networks might include diverse health care settings, it is likely that the range of settings does not reflect the full spectrum of how and where care is delivered in the United States, and therefore data may not be generalizable to all settings.
Conclusion
In spite of these challenges, and perhaps because of them, it is important to move forward with modeling as part of large cancer research consortia and initiatives. These consortia provide rich repositories of real-world data that can be leveraged in decision-analytic modeling to create more robust models to not only better inform policy but also guide health care systems about best approaches to improving the delivery of cancer screening. PROSPR's study of variability in screening processes and outcomes at multiple levels (eg, patient, provider, health system) may enable models to identify points within the screening process that have a high impact on population-level outcomes and where improvement efforts may be most productively focused.
Funding
This study was conducted as part of the National Cancer Institute (NCI)–funded consortium Population-Based Research Optimizing Screening through Personalized Regimens (PROSPR) (grant numbers U54 CA164336, U54CA163307, U54CA163262, U54CA163303, U54CA163313, U54CA163308, U54CA163261).
The study sponsor had no role in the design of the study; the collection, analysis, or interpretation of the data; the writing of the manuscript; nor the decision to submit the manuscript for publication.
The PROSPR consortium principal investigators and NCI representatives include Katrina Armstrong, Massachusetts General Hospital; Mitchell Schnall, University of Pennsylvania; Anna Tosteson, Geisel School of Medicine at Dartmouth; Tracy Onega, Geisel School of Medicine at Dartmouth; Jennifer Haas, Brigham and Women’s Hospital; Brian Sprague, University of Vermont; Donald Weaver, University of Vermont; Aruna Kamineni, Group Health Research Institute; Jessica Chubak, Group Health Research Institute; Celette Sugg Skinner, University of Texas Southwestern; Ethan Halm, University of Texas Southwestern; Jasmin Tiro, University of Texas Southwestern; Douglas Corley, Kaiser Foundation Research Institute; Michael Silverberg, Kaiser Foundation Research Institute; Chun Chao, Kaiser Foundation Research Institute; Virginia Quinn, Kaiser Foundation Research Institute; Chyke Doubeni, University of Pennsylvania; Ann Zauber, Memorial Sloan Kettering Cancer Center; Cosette Wheeler, University of New Mexico; Mark Thornquist, Fred Hutchinson Cancer Research Center; William Barlow, Cancer Research and Biostatistics; Sarah Kobrin, National Cancer Institute; Paul Doria-Rose, National Cancer Institute; and Stephen Taplin, National Cancer Institute.
The authors have no conflicts of interest to declare.
References
- 1. Mandelblatt JS, Cronin KA, Bailey S, et al. Effects of mammography screening under different screening schedules: model estimates of potential benefits and harms. Ann Intern Med. 2009;151(10):738–747. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2. Zauber AG, Lansdorp-Vogelaar I, Knudsen AB, Wilschut J, van Ballegooijen M, Kuntz KM. Evaluating test strategies for colorectal cancer screening: a decision analysis for the U.S. Preventive Services Task Force. Ann Intern Med. 2008;149(9):659–669. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3. Kulasingam S, Havrilesky LJ, Ghebre R, Myers ER. Screening for cervical cancer: A modeling study for the US Preventive Services Task Force. J Low Genit Tract Dis. 2013;17:193–202. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4. de Koning HJ, Meza R, Plevritis SK, et al. Benefits and harms of computed tomography lung cancer screening strategies: a comparative modeling study for the U.S. Preventive Services Task Force. Ann Intern Med. 2014;160(5):311–320. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Habbema JD, Wilt TJ, Etzioni R, et al. Models in the development of clinical practice guidelines. Ann Intern Med. 2014;161(11):812–818. [DOI] [PubMed] [Google Scholar]
- 6. Bach PB. Raising the bar for the U.S. Preventive Services Task Force. Ann Intern Med. 2014;160(5):365–366. [DOI] [PubMed] [Google Scholar]
- 7. Hiatt RA, Porco TC, Liu F, et al. A multilevel model of postmenopausal breast cancer incidence. Cancer Epidemiol Biomarkers Prev. 2014;23(10):2078–2092. [DOI] [PubMed] [Google Scholar]
- 8. Groot Koerkamp B, Myriam Hunink MG, Stijnen T, Weinstein MC. Identifying key parameters in cost-effectiveness analysis using value of information: a comparison of methods. Health Econ. 2006;15(4):383–392. [DOI] [PubMed] [Google Scholar]
- 9. Breast Cancer Surveillance Consortium (BCSC). http://breastscreening.cancer.gov/ Accessed June 11, 2015.
- 10. Cancer Research Network (CRN). http://crn.cancer.gov/ Accessed June 11, 2015.
- 11. Health Care Systems Research Network (HCSRN). http://www.hcsrn.org/ Accessed October 22, 2015.
- 12. National Patient-Centered Clinical Research Network (PCORnet). http://www.pcornet.org/ Accessed June 11, 2015.
- 13. Sprague BL, Stout NK, Schechter C, et al. Benefits, harms, and cost-effectiveness of supplemental ultrasonography screening for women with dense breasts. Ann Intern Med. 2015;162(3):157–166. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14. Stout NK, Lee SJ, Schechter CB, et al. Benefits, harms, and costs for breast cancer screening after US implementation of digital mammography. J Natl Cancer Inst. 2014;106(6)dju092 10.1093/jnci/dju092. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Population-Based Research Optimizing Screening through Personalized Regimens (PROSPR) consortium. http://healthcaredelivery.cancer.gov/prospr/ Accessed June 11, 2015. [DOI] [PMC free article] [PubMed]
- 16. Beaber EF, Kim JJ, Schapira MM, et al. Unifying screening processes within the PROSPR consortium: a conceptual model for breast, cervical, and colorectal cancer screening. J Natl Cancer Inst. 2015;107(6)djv120 10.1093/jnci/djv120. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Onega T, Beaber EF, Sprague BL, et al. Breast cancer screening in an era of personalized regimens: a conceptual model and National Cancer Institute initiative for risk-based and preference-based approaches at a population level. Cancer. 2014;120(19):2955–2964. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Tiro JA, Kamineni A, Levin TR, et al. The colorectal cancer screening process in community settings: a conceptual model for the population-based research optimizing screening through personalized regimens consortium. Cancer Epidemiol Biomarkers Prev. 2014;23(7):1147–1158. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Eddy DM, Hollingworth W, Caro JJ, et al. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7. Value Health. 2012;15(6):843–850. [DOI] [PubMed] [Google Scholar]
- 20. Cancer Intervention and Surveillance Modeling Network (CISNET). http://cisnet.cancer.gov/ Accessed June 11, 2015.
- 21. Fryback DG, Stout NK, Rosenberg MA, Trentham-Dietz A, Kuruchittham V, Remington PL. The Wisconsin Breast Cancer Epidemiology Simulation Model. J Natl Cancer Inst Monogr. 2006(36):37–47. [DOI] [PubMed] [Google Scholar]
- 22. Batina NG, Trentham-Dietz A, Gangnon RE, et al. Variation in tumor natural history contributes to racial disparities in breast cancer stage at diagnosis. Breast Cancer Res Treat. 2013;138(2):519–528. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23. Campos NG, Burger EA, Sy S, et al. An updated natural history model of cervical cancer: derivation of model parameters. Am J Epidemiol. 2014;180(5):545–555. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Kim JJ, Campos NG, Sy S, Burger EA, Cuzick J, Castle PE, Hunt WC, Waxman A, Wheeler CM; New Mexico HPV Pap Registry Steering Committee. Inefficiencies and high-value improvements in U.S. cervical cancer screening practice: A cost-effectiveness analysis. Ann Intern Med. 2015;163:589–597. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25. Rutter CM, Savarino JE. An evidence-based microsimulation model for colorectal cancer: validation and application. Cancer Epidemiol Biomarkers Prev. 2010;19(8):1992–2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Rutter CM, Miglioretti DL, Savarino JE. Evaluating risk factor assumptions: a simulation-based approach. BMC Med Inform Decis Mak. 2011;11:55. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Wheeler CM, Hunt WC, Cuzick J, et al. The influence of type-specific human papillomavirus infections on the detection of cervical precancer and cancer: A population-based study of opportunistic cervical screening in the United States. Int J Cancer. 2014;135(3):624–634. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Katki HA, Kinney WK, Fetterman B, et al. Cervical cancer risk for women undergoing concurrent testing for human papillomavirus and cervical cytology: a population-based study in routine clinical practice. Lancet Oncol. 2011;12(7):663–672. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29. Hubbard RA, Johnson E, Hsia R, Rutter CM. The cumulative risk of false-positive fecal occult blood test after 10 years of colorectal cancer screening. Cancer Epidemiol Biomarkers Prev. 2013;22(9):1612–1619. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. van Ballegooijen M, Rutter CM, Knudsen AB, et al. Clarifying differences in natural history between models of screening: the case of colorectal cancer. Med Decis Making. 2011;31(4):540–549. [DOI] [PMC free article] [PubMed] [Google Scholar]
