BMJ Medicine. 2025 Sep 2;4(1):e001332. doi: 10.1136/bmjmed-2025-001332

Common methodological issues in observational epidemiological studies of older adults

Emma Nichols1,2, Eleanor Hayes-Larson2
PMCID: PMC12406907  PMID: 40909441

Key messages.

  • Selection bias, multimorbidity, measurement error, and long latency periods present challenges for epidemiological research on older adults

  • Solutions at both the study design and analysis phases can be effectively applied to reduce bias from common methodological challenges

  • There are many approaches to overcome these challenges, but no one-size-fits-all solution exists for every research question and design

Emma Nichols and Eleanor Hayes-Larson review common challenges in the design and analysis of observational epidemiological studies of older adults and discuss potential approaches to reduce bias in future research

Introduction

Expected trends in population ageing and the burden of age related chronic diseases underscore the need for high quality epidemiological studies of older adults (eg, people aged 60 years and older). However, research on older adults poses unique challenges and methodological issues. Inconsistency in how the existing literature approaches these issues might contribute to observed heterogeneity in research findings and to clinical uncertainty about optimal treatment and prevention guidelines. We discuss four methodological issues in ageing research (selection bias, multimorbidity, measurement error, and long latency periods) and summarise key approaches to evaluate and reduce bias. While excellent summaries of methodological challenges in epidemiological research more generally are available,1 2 we focus on issues particularly salient for studies of older adults.

In this article, we aim to highlight common challenges in observational epidemiological research focused on evaluating associations (ie, estimates of causal effects such as risk ratios) between risk factors and health outcomes in late life. Throughout the discussion, we draw on examples from published literature, such as studies on the association between smoking and dementia,3 body mass index and cognitive decline,4 physical activity and fall risk,5 pesticide exposure and Parkinson's disease,6 and diabetes and heart disease.7

Selection bias

Selection bias occurs when the estimated effect of the exposure on the outcome in a study sample does not accurately reflect the true causal effect owing to the selection of the study sample, including both intentional (eg, study design inclusion/exclusion criteria) and incidental (eg, volunteering, mortality) processes.8 For example, in ageing research, the exclusion of those individuals in care homes and nursing homes or those less able to attend clinic visits or give consent owing to disability or other illness can lead to selection bias. These forms of selection bias can sometimes be at least partially resolved by broadening inclusion criteria and carefully considering recruitment practices.

Even in studies where the outcome is not mortality, mortality itself can cause selection bias. Given high mortality rates at older ages, this form of selection is particularly salient for research in older adults and is less amenable to adjustments in study design. Selection bias can be due to pre-study mortality (often known as survival bias) or to post-enrolment mortality and study withdrawal (often known as selective attrition). Selection bias due to mortality occurs when mortality is associated with both the exposure and the outcome of interest, such that those individuals who survive differ from those who die in ways that affect the estimated effect of the exposure on the outcome.

For example, in studies on the association between smoking and incident dementia, older age at enrolment is associated with null or even inverse findings (eg, that smoking appears protective against dementia).9 Beyond true effect heterogeneity by age, the other plausible explanation is pre-study survival bias: smokers who have survived to old age without dementia are healthier than the smokers who died before enrolment and were therefore not included in the study, thereby biasing effect estimates. In another study on smoking and dementia focused on selective attrition due to mortality after study enrolment, accounting for selective mortality increased estimated harmful effects of smoking by 56% to 86%.10

Preclinical disease phases common for health outcomes in ageing research can also contribute to selection bias by leading to differential study participation by the outcome. For example, changes in abnormal protein aggregation11 and subclinical cognitive decline12 that lead to dementia could both begin 10-20 years before diagnosis. These conditions increase mortality and can make it harder for affected individuals to continue study participation, leading to both survival bias and selective attrition.

Some study design considerations can alleviate expected survival biases. For example, because studies with long follow-up periods are more susceptible to the effects of selective attrition owing to greater mortality over the longer time frame, shorter follow-up can limit attrition. However, long follow-up is often needed to evaluate rare outcomes, and pre-enrolment survival bias is usually unavoidable in cohorts that enrol adults at older ages, so analytical methods are needed to evaluate and mitigate these biases after data collection. Conceptually, survival bias and selective attrition can be resolved analytically by the same approaches (eg, inverse probability weighting; table 1 describes this approach and others); however, the data necessary to model survival bias before study entry are typically unavailable. In these cases, simulation analysis is a powerful tool to understand the potential effects of survival bias under specific sets of assumptions.13

Table 1. Common analytical approaches to account for survival bias and selective attrition due to mortality.

Approach | Description | Key limitations
Inverse probability weighting10 | Re-weight the population to create a pseudo-population similar to what would be expected in the absence of mortality | Effectiveness is limited by the ability of available variables in the specified weighting model to predict mortality; interpretation of effect estimate (in hypothetical absence of mortality) is challenging
Estimate survivor average causal effects using principal stratification26 | Estimate effects among those individuals who would always survive irrespective of whether they were treated or not. Rather than accounting for mortality, conceptually avoids challenges around estimating effects in those who died | Definitive identification of the “always survivor” group is unverifiable; interpretation of effect estimate is challenging
Joint models27 | Model both longitudinal and survival outcomes simultaneously | Requires accounting for all common causes of survival and death in model
Simulation analysis13 | Simulate hypothetical studies given different possible scenarios of selective mortality | Requires assumption that the scenarios or parameterisation of the simulation analysis reflects reality
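
To make the inverse probability weighting entry in table 1 concrete, the sketch below (in Python, using simulated data) re-weights participants who remain under observation by the inverse of their predicted probability of surviving to follow-up. The variable names, logistic weighting model, and linear outcome model are illustrative assumptions rather than a prescribed implementation, and robust or bootstrapped standard errors would be needed in practice.

```python
# Minimal sketch of inverse probability of attrition weighting (simulated data).
# Baseline exposure (smoker) and covariate (age) predict death before follow-up;
# the outcome (a cognitive score) is observed only among survivors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(75, 6, n),
    "smoker": rng.binomial(1, 0.3, n),
})
p_die = 1 / (1 + np.exp(-(-3 + 0.05 * (df["age"] - 75) + 0.8 * df["smoker"])))
df["died"] = rng.binomial(1, p_die)
df["observed"] = 1 - df["died"]
df["cognition"] = 50 - 0.3 * (df["age"] - 75) - 2.0 * df["smoker"] + rng.normal(0, 5, n)

# Step 1: model the probability of remaining under observation.
# In real data this model should include all shared predictors of mortality and the outcome.
attrition_model = smf.logit("observed ~ age + smoker", data=df).fit(disp=0)
df["p_observed"] = attrition_model.predict(df)

# Step 2: weight survivors by the inverse of their predicted probability of being observed,
# creating a pseudo-population resembling the full baseline sample.
survivors = df[df["observed"] == 1].copy()
survivors["ipw"] = 1.0 / survivors["p_observed"]

# Step 3: fit the weighted outcome model in the pseudo-population.
# Standard errors here ignore the estimated weights; use robust or bootstrapped ones in practice.
weighted_fit = smf.wls("cognition ~ smoker + age", data=survivors, weights=survivors["ipw"]).fit()
print(weighted_fit.params["smoker"])
```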

When the outcome of interest is cumulative disease incidence, mortality after study enrolment is often called a competing risk because it precludes observation of the outcome. When the outcome of interest is cause specific mortality, death due to other causes can be a competing risk as well. Two common approaches to explicitly consider competing risks are to estimate cause specific hazard ratios for both the outcome of interest and the competing event, or to use models based on cumulative incidence, such as Fine-Gray regression, to estimate subdistribution hazards.14 The choice of approach requires being precise about the research question of interest,15 given the differences in the assumptions and interpretations of cause specific hazards and subdistribution hazards.
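
These two analytical options can be sketched with the Python lifelines package: a cause specific Cox model that censors competing deaths, and an Aalen-Johansen estimate of cumulative incidence that treats death as a competing event. The data, variable names, and effect sizes below are simulated for illustration; a Fine-Gray subdistribution hazard model would require other software (eg, the cmprsk package in R).

```python
# Sketch contrasting cause specific hazards with cumulative incidence under competing risks.
# Simulated data: time = follow-up in years; event = 0 censored, 1 dementia, 2 death.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, AalenJohansenFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({"exposure": rng.binomial(1, 0.4, n), "age": rng.normal(70, 5, n)})
t_dementia = rng.exponential(20 / np.exp(0.3 * df["exposure"]))
t_death = rng.exponential(15 / np.exp(0.5 * df["exposure"]))
t_censor = rng.uniform(5, 12, n)
df["time"] = np.minimum.reduce([t_dementia, t_death, t_censor])
df["event"] = np.select([df["time"] == t_dementia, df["time"] == t_death], [1, 2], default=0)

# Cause specific hazard of dementia: deaths from other causes are treated as censoring
cause_specific = df.assign(dementia=(df["event"] == 1).astype(int))
cph = CoxPHFitter().fit(cause_specific[["time", "dementia", "exposure", "age"]],
                        duration_col="time", event_col="dementia")
print(cph.summary["exp(coef)"])

# Cumulative incidence of dementia with death handled as a competing event (Aalen-Johansen)
ajf = AalenJohansenFitter()
ajf.fit(df["time"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail())
```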

Multimorbidity

Multimorbidity, characterised by the coexistence of two or more chronic conditions, is very common (>60% prevalence) in older adults.16 Multimorbidity and simultaneous declines in multiple domains of health can pose several challenges when attempting to isolate and estimate the effect of an exposure on a single health outcome.

Firstly, co-occurring declines in multiple systems with unclear temporal ordering can lead to challenges with reverse causation, where an estimated association is actually due to the effect of the outcome on the exposure, rather than the desired effect of the exposure on the outcome. For example, reverse causation might be common in studies of the association between body mass index and cognitive decline, because preclinical biological changes associated with cognitive decline, which are undetectable in objective cognitive testing, can also lead to weight loss.17

No single solution can deal with reverse causation. Study design choices can help, such as ensuring clear temporal ordering of the exposure and outcome. For example, estimates from a study of the effect of physical activity on fall risk could actually reflect the negative impact of falls on physical activity, especially if the potential for multiple falls is not considered, but bias might be minimised if investigators exclude those individuals with falls before the measurement of physical activity. However, when the outcome of interest has a long preclinical period (eg, dementia), exclusion of those individuals with outcomes before exposure measurement is likely not sufficient. In this scenario, beginning study follow-up several years (eg, five years) after exposure ascertainment might be helpful. Experimental or quasi-experimental approaches to identify effects through direct or indirect manipulation of the exposure also reduce the risk of reverse causation. However, when these designs are not possible, sensitivity analyses allow investigators to evaluate the extent to which reverse causation might contribute to findings, although many of these analyses require substantial longitudinal follow-up data (table 2).

Table 2. Common sensitivity analysis approaches to explore potential reverse causation.

Analysis | Justification and expectation | Key limitations or requirements
Age stratification of estimated effects | Reverse causation can vary by age because the rates of many chronic diseases vary by age, therefore heterogeneity in estimated effects by age would be expected in the presence of reverse causation | Other reasons for effect heterogeneity by age could exist
Exclude those individuals with potential preclinical disease | Removal of those individuals with preclinical disease might lead to less reverse causation, because those with more severe preclinical disease might be more likely to have levels of disease that could lead to changes in exposure | Changes the study sample, limits the potential to look at effects on early disease
Increase the time lag between exposure and outcome | A longer time lag between exposure and outcome would lead to less risk of reverse causation because it would be less likely for the outcome to lead to exposure | Requires more longitudinal data
Compare the association between the exposure at time 1 and the outcome at time 2 with the association between the outcome at time 1 and the exposure at time 2 | Associations between the outcome and exposure with the outcome occurring before the exposure would provide evidence for reverse causation | Requires exposure and outcome to both be measured longitudinally (at multiple time points)
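
The last row of table 2 can be implemented with two simple regressions when the exposure and outcome are both measured at two waves. The sketch below uses simulated data in which reverse causation is present by construction (cognition affects later body mass index but not the other way round); all variable names and effect sizes are illustrative assumptions.

```python
# Sketch of a cross-lagged sensitivity check for reverse causation (simulated two-wave data).
# In this simulated world, wave 1 cognition affects wave 2 BMI, but BMI does not affect cognition.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({"age": rng.normal(72, 5, n)})
df["cog_w1"] = 50 - 0.3 * (df["age"] - 72) + rng.normal(0, 5, n)
df["bmi_w1"] = 27 + rng.normal(0, 3, n)
df["bmi_w2"] = df["bmi_w1"] + 0.05 * (df["cog_w1"] - 50) + rng.normal(0, 1, n)
df["cog_w2"] = df["cog_w1"] - 1 + rng.normal(0, 2, n)

# Forward association: exposure at time 1 (BMI) predicting outcome at time 2 (cognition)
forward = smf.ols("cog_w2 ~ bmi_w1 + cog_w1 + age", data=df).fit()
# Reverse association: outcome at time 1 (cognition) predicting exposure at time 2 (BMI)
reverse = smf.ols("bmi_w2 ~ cog_w1 + bmi_w1 + age", data=df).fit()

# A comparatively strong reverse coefficient is a warning sign for reverse causation
print("forward (bmi_w1 -> cog_w2):", round(forward.params["bmi_w1"], 3))
print("reverse (cog_w1 -> bmi_w2):", round(reverse.params["cog_w1"], 3))
```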

Secondly, high levels of multimorbidity can also pose challenges related to confounding adjustment in studies of older adults. Given potential uncertainties in the temporal ordering of multiple chronic conditions, it can often be unclear whether co-occurring conditions are precursors or consequences of the exposure. Many statistical approaches, such as regression, g-computation, and multistate modelling, can be used to adjust for confounding variables but cannot dictate what those variables are. They do not help answer the question of whether comorbid conditions might be precursors of the exposure, and therefore confounders that should be included in the adjustment set, or whether they might be consequences of the exposure, and therefore mediators that should not be adjusted for when estimating the total effects typically of interest. Directed acyclic graphs can provide a useful framework for clarifying assumptions around the temporal ordering of health conditions and for identifying the covariates needed to appropriately adjust for confounding, although they are generally useful in any epidemiological study focused on estimating causal effects.18 Additionally, conducting sensitivity analyses to evaluate models with different sets of covariates that might act as either confounders or mediators can also be helpful, as sketched below.
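
As one such sensitivity analysis, the minimal sketch below (simulated data, Python) fits the same exposure-outcome model twice, once treating a comorbid condition as a confounder (adjusted) and once treating it as a potential mediator (unadjusted), and compares the estimated coefficients. The data, and the assumption that hypertension is a consequence of diabetes, are purely illustrative.

```python
# Sketch comparing covariate sets when a comorbidity's role (confounder v mediator) is uncertain.
# In this simulated world, hypertension is a consequence of diabetes (ie, a mediator).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({"age": rng.normal(74, 6, n)})
df["diabetes"] = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 0.05 * (df["age"] - 74)))))
df["hypertension"] = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.7 * df["diabetes"]))))
df["adl_score"] = (10 - 0.05 * (df["age"] - 74) - 0.6 * df["diabetes"]
                   - 0.4 * df["hypertension"] + rng.normal(0, 1.5, n))

# Treating hypertension as a confounder (adjusted) targets a direct effect of diabetes
adjusted = smf.ols("adl_score ~ diabetes + hypertension + age", data=df).fit()
# Treating hypertension as a mediator (unadjusted) targets the total effect of diabetes
unadjusted = smf.ols("adl_score ~ diabetes + age", data=df).fit()

print("adjusted for hypertension:  ", round(adjusted.params["diabetes"], 3))
print("unadjusted for hypertension:", round(unadjusted.params["diabetes"], 3))
```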

Measurement error

Measurement error broadly describes the difference between a measured value of a quantity and its unknown true value. Measurement error in exposures, outcomes, or confounders can lead to bias.1 Differential measurement error, which varies in magnitude across values of other variables included in the analysis, can lead to bias in any direction. For instance, in a case-control study of the association between pesticide exposure in early life and Parkinson's disease in older adults, if retrospective reports of pesticide exposure were ascertained after Parkinson's disease onset, individuals with Parkinson's disease might differentially report exposure. Pesticide exposure might be over-reported in those with Parkinson's disease relative to controls owing to higher concern about past exposures, or under-reported relative to controls if included individuals had late stage disease accompanied by dementia. This form of measurement error is known as recall bias and would lead to differential misclassification, biasing results in either direction depending on the direction of misreporting in cases relative to controls. In contrast, non-differential measurement error, whose magnitude is constant across values of other variables included in the analysis, is more likely to lead to attenuation to the null, with some exceptions.19

For a given research question, the type of measurement error can vary by study design. For example, consider the association between diabetes and later heart disease. If diabetes is self-reported and heart disease is measured via a comprehensive screening protocol for all study participants, although measurement error might be associated with the screening protocol, this is unlikely to be differential by diabetes status. In contrast, if both diabetes and heart disease were ascertained from electronic health record data, ascertainment bias20 (ie, those individuals with diabetes might be followed more closely in the healthcare system and therefore be more likely to be screened for heart disease) could lead to differential measurement error.
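
The attenuating effect of non-differential misclassification can be illustrated with a short simulation: the sketch below misclassifies a binary exposure with the same sensitivity and specificity in both outcome groups and compares the odds ratio estimated from the reported exposure with the one estimated from the true exposure. All numbers are illustrative assumptions.

```python
# Sketch: non-differential exposure misclassification attenuates an odds ratio towards the null.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 50000
true_exposure = rng.binomial(1, 0.3, n)
p_outcome = 1 / (1 + np.exp(-(-2 + 0.7 * true_exposure)))  # true odds ratio of about 2.0
outcome = rng.binomial(1, p_outcome)

# Misclassify the exposure with sensitivity 0.8 and specificity 0.9, unrelated to the outcome
sensitivity, specificity = 0.8, 0.9
reported_exposure = np.where(true_exposure == 1,
                             rng.binomial(1, sensitivity, n),
                             rng.binomial(1, 1 - specificity, n))

def odds_ratio(x, y):
    """Odds ratio from a logistic regression of the outcome y on the exposure x."""
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    return float(np.exp(fit.params[1]))

print("odds ratio, true exposure:    ", round(odds_ratio(true_exposure, outcome), 2))
print("odds ratio, reported exposure:", round(odds_ratio(reported_exposure, outcome), 2))
```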

Some measurement error issues are particularly salient in research on older adults. For instance, concerns about the reliability of self-reported information in the presence of memory decline or dementia motivate the use of informant or proxy reported measures. Importantly, if both self-reported and proxy reported measures are used, researchers should carefully consider how to combine the information and avoid introducing differential measurement error between individuals with and without proxy respondents. In addition, some exposures, such as those that occur earlier in life, can only be measured retrospectively in cohorts that recruit participants in mid-life (40-60 years) or late life; concern about bias due to retrospective self-reports might be exacerbated for these questions. Use of validation data, such as information from electronic health records or retrospective linkages, can help alleviate concerns about measurement error in self-reported information, although it could introduce other forms of error, such as ascertainment bias20 or selection bias.

Additionally, many phenotypes of interest in health research on older adults are syndromic conditions (eg, mental health, cognitive functioning, frailty) that cannot be directly observed, but are often measured by assessing the symptoms or signs of the underlying condition (eg, through questionnaires of depressive symptoms or cognitive tests). Latent variable measurement models provide one tool to explicitly model these syndromic conditions, examine measurement precision, and integrate knowledge about measurement error into downstream analyses,21 although preclinical phases of these conditions can be particularly challenging to measure accurately given the subtle signs of disease at this stage.
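
As a rough illustration of the latent variable idea, the sketch below summarises three simulated, error-prone cognitive test scores with a single common factor using scikit-learn's FactorAnalysis. The test names and loadings are hypothetical, and dedicated psychometric software would typically be used for formal measurement models.

```python
# Sketch: summarise several error-prone indicators of an unobserved trait with one common factor.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n = 1000
latent_cognition = rng.normal(0, 1, n)             # unobserved true trait
tests = pd.DataFrame({                             # hypothetical error-prone test scores
    "word_recall": 0.8 * latent_cognition + rng.normal(0, 0.6, n),
    "orientation": 0.6 * latent_cognition + rng.normal(0, 0.8, n),
    "verbal_fluency": 0.7 * latent_cognition + rng.normal(0, 0.7, n),
})

fa = FactorAnalysis(n_components=1, random_state=0)
factor_scores = fa.fit_transform(tests)            # one estimated latent score per participant

print("loadings:", np.round(fa.components_.ravel(), 2))
# Sign of the factor is arbitrary, so report the absolute correlation with the true trait
print("correlation with true trait:",
      round(abs(np.corrcoef(factor_scores.ravel(), latent_cognition)[0, 1]), 2))
```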

As with most forms of bias, the optimal approach to managing measurement error is through prevention at the study design phase, potentially through collection of gold standard data or triangulation across multiple measurement modalities (eg, capturing hypertension through self-report, routinely collected health records, and predictive models based on data from novel wearable sensors22). However, design based solutions might not always be feasible, and other methods are available to deal with measurement error in the analysis phase (table 3). Many of these approaches require data from calibration subsamples with both mis-measured and gold standard measurements and assume that corrections developed in calibration samples are generalisable to the full sample.

Table 3. Common methods to adjust for and quantify potential measurement error in analysis phase.

Method | Calibration sample needed | Description | Key limitations and notes
Quantitative bias analysis28 | No | Simulate or calculate potential effect estimates across a range of reasonable assumptions regarding measurement accuracy | Results are sensitive to assumptions about the magnitude and structure of measurement error
Simulation extrapolation29 | No, if estimates of measurement error variance are available elsewhere or replicate measures are available30 | Simulate datasets with increasingly larger additive measurement error, then extrapolate back to the condition of no measurement error given an estimate of measurement error variance | Assumptions around the choice of extrapolation function need to be correct to ensure appropriate correction
Regression calibration31 | Yes | Use calibration model in a subsample with both gold standard and mis-measured measurements to predict gold standard measurements in the full sample | Subject to correct specification of the calibration model; uncertainty might be underestimated
Multiple imputation for measurement error32 | Yes | Use calibration sample to develop models to multiply impute gold standard measurements in the full sample | Performance depends on how well the multiple imputation model is specified
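
The regression calibration entry in table 3 can be sketched in a few lines: a calibration model fitted in the subsample with both the gold standard and the error-prone measurement predicts the gold standard exposure for the full sample, and the outcome model then uses those predictions. The variable names and measurement error structure below are illustrative assumptions, and the naive standard errors from the final model would need to be corrected (eg, by bootstrapping both steps).

```python
# Sketch of regression calibration for an error-prone continuous exposure (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 5000
df = pd.DataFrame({"age": rng.normal(70, 5, n)})
df["sbp_true"] = 130 + 0.5 * (df["age"] - 70) + rng.normal(0, 12, n)   # gold standard exposure
df["sbp_self"] = df["sbp_true"] + rng.normal(0, 15, n)                 # error-prone self-report
df["outcome"] = 0.02 * df["sbp_true"] + rng.normal(0, 1, n)

# Pretend the gold standard is only available in a random calibration subsample
calibration = df.sample(500, random_state=0)

# Step 1: calibration model fitted in the subsample
calibration_model = smf.ols("sbp_true ~ sbp_self + age", data=calibration).fit()
# Step 2: predict the gold standard exposure for the full sample
df["sbp_calibrated"] = calibration_model.predict(df)
# Step 3: compare outcome models using the error-prone and the calibrated exposure
naive = smf.ols("outcome ~ sbp_self + age", data=df).fit()
corrected = smf.ols("outcome ~ sbp_calibrated + age", data=df).fit()

print("naive coefficient:     ", round(naive.params["sbp_self"], 4))
print("calibrated coefficient:", round(corrected.params["sbp_calibrated"], 4))
# The corrected model's standard errors understate uncertainty; bootstrap both steps in practice.
```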

Long latency periods between exposure and outcomes

The latency period is the time between an exposure and the occurrence of disease. Life course models of ageing suggest that exposures throughout life, including very early in life (eg, birth weight23), can have impacts on outcomes in late life, and imply the existence of long latency periods between many exposures and health outcomes in late life.24 Given the potential for long latency periods, it can be challenging to understand when an exposure matters and how best to characterise exposure over the life course. For example, stress can be experienced at any point in the life course, and its impact on late life health outcomes likely varies by timing of occurrence and cumulative exposure. When considering life course exposures, researchers ought to consult domain area experts and carefully consider the relevant timing for their research question of interest. Models that allow effect estimates to vary over time (eg, Cox proportional hazards models with interactions between time and the exposure of interest to model non-constant hazard ratios) could help capture heterogeneity across the life course. Additionally, frameworks to compare competing hypotheses about the way that life course exposures lead to late life outcomes (eg, critical periods, sensitive periods, accumulation) and data reduction techniques to synthesise information across the life course might be helpful analytical techniques. Although data to appropriately capture exposures across the entire life course are not always readily available, investments in data resources promise new possibilities (table 4).

Table 4. Potential data sources for life course research and exposure outcome associations with long latency periods.

Data source | Description | Key limitations and notes
Birth cohorts | Participants are enrolled at birth or in early life and are followed prospectively over time to collect information on health outcomes in early life, mid-life, and late life | Extremely resource intensive and rare, challenges related to changes in measures over time might be relevant
Retrospective life histories | After study enrolment, older adults are asked to recall exposures and health outcomes over the course of their life | Recall bias can be a problem, no objective measures of exposures or health outcomes available in early life or mid-life
Data linkages | Data linkages allow for early life and mid-life information from health records, work or tax records, financial institutions, schools, or other sources to be integrated into existing studies | Data might not be available for all participants leading to selection bias, measurement error in available information from records could be an issue
Synthetic cohorts33 | Synthetic cohorts can be created by using statistical methods to predict early and mid-life health outcomes by linking cohorts of older adults to other cohorts that cover earlier age ranges | Prediction models need to be correctly specified and must include all important confounders and mediators, need to assume cohorts come from the same underlying population
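
The suggestion above to let effect estimates vary over time can be sketched by splitting each participant's follow-up at a chosen cut point and including an interaction between the exposure and the later period. The example below uses the Python lifelines package with simulated data; the five year cut point, variable names, and effect sizes are illustrative assumptions.

```python
# Sketch: allow the hazard ratio for an exposure to differ before and after 5 years of follow-up
# by splitting each participant's record at the cut point (simulated data, illustrative cut point).
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(7)
n = 3000
exposure = rng.binomial(1, 0.4, n)
# Piecewise exponential event times: weak effect before 5 years, stronger effect afterwards
t_early = rng.exponential(30 / np.exp(0.1 * exposure))
t_late = 5 + rng.exponential(20 / np.exp(0.6 * exposure))
time = np.where(t_early < 5, t_early, t_late)
event = (time < 15).astype(int)
time = np.minimum(time, 15)          # administrative censoring at 15 years

rows = []
for i in range(n):
    if time[i] <= 5:
        rows.append((i, 0.0, float(time[i]), int(event[i]), int(exposure[i]), 0))
    else:
        rows.append((i, 0.0, 5.0, 0, int(exposure[i]), 0))                      # first 5 years
        rows.append((i, 5.0, float(time[i]), int(event[i]), int(exposure[i]),
                     int(exposure[i])))                                          # later follow-up
long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "exposure", "exposure_late"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
# exposure gives the early period log hazard ratio; exposure_late gives the change after 5 years
print(ctv.summary[["coef", "exp(coef)"]])
```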

Summary

This article highlights key challenges for epidemiological studies of older adults. We focused on challenges to internal validity in studies estimating effects of risk factors on health outcomes in late life, but external validity (ie, generalisability) of such studies is also an important consideration.25 We encourage researchers to consider the issues raised when conducting epidemiological research in older adults (box 1 shows a worked example). There is no one-size-fits-all approach, but the use of thoughtful study design, analytic approaches to mitigate bias, and inclusion of sensitivity analyses can help strengthen confidence in study conclusions. In addition, evidence triangulation and synthesis across studies and designs with different strengths and weaknesses is critical to improving the overall strength of evidence on any given research question.

Box 1. Consideration of key challenges in epidemiological studies of older adults in example research question.

Research question

What is the effect of diabetes on physical functioning (measured using activities of daily living) among older adults?

Survival bias

Individuals with diabetes might be more likely to die before study entry (survival bias) or after study entry (selective attrition). Simulation studies could help quantify the impact of pre-study selection, and inverse probability weights or joint models might be a good choice to account for selective attrition.

Multimorbidity (reverse causation)

Declines in physical functioning could lead to increased sedentary time, increased body mass index, and increased risk for type 2 diabetes. Measuring diabetes temporally before assessments of physical functioning would help lower the risk of reverse causation. Additionally, sensitivity analyses such as age stratification or exclusion of those individuals with slight declines in functioning but no activity of daily living limitations at the time of exposure measurement could strengthen confidence that findings are not due to reverse causation.

Multimorbidity (confounding)

Those individuals with diabetes might also have various other chronic conditions that need to be considered in analyses, potentially as confounders. Care should be taken to collect data across a range of different chronic conditions and identify conditions or groups of conditions that might lead to diabetes and be associated with physical functioning. Directed acyclic graphs can be used to make assumptions explicit and identify confounders.

Measurement error

Self-reported diabetes is a commonly available measure but likely has considerable measurement error. If a subsample with ideal measures of diabetes (eg, glycated haemoglobin, electronic health record diagnosis) is available, regression calibration, simulation extrapolation, or multiple imputation could be used to correct analyses in the full sample to account for measurement error due to the use of self-reported diabetes. Otherwise, quantitative bias analyses might be conducted to understand a range of plausible effect estimates after consideration of measurement error (a minimal numerical sketch follows this box).

Long latency periods

Care should be taken to consider when diabetes is most important. Diabetes is not an exposure that occurs at a single time point, but rather a condition that can develop at any point across the life course and has effects that could accumulate with longer duration of illness. If longitudinal life course data were available, potentially from linked data or retrospective life history data, analyses could probe specific questions about whether the timing of diabetes onset matters and whether cumulative measures capturing diabetes severity over time might influence physical functioning in late life. However, even if such data were unavailable, researchers could consider integrating these topics into discussions of how their results fit within existing knowledge and theoretical hypotheses regarding life course exposure to diabetes.
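
To make the quantitative bias analysis option in box 1 concrete, the sketch below applies a simple deterministic correction for non-differential misclassification of self-reported diabetes, back-calculating the expected true exposure counts from assumed sensitivity and specificity values. The observed counts and the assumed values are purely illustrative.

```python
# Sketch of a simple quantitative bias analysis for non-differential misclassification of
# self-reported diabetes. Observed counts, sensitivity, and specificity are illustrative only.
def corrected_exposed(a_observed, n_total, sensitivity, specificity):
    """Back-calculate the expected number of truly exposed from an observed exposed count."""
    return (a_observed - n_total * (1 - specificity)) / (sensitivity + specificity - 1)

def odds_ratio(exposed_cases, n_cases, exposed_controls, n_controls):
    return ((exposed_cases / (n_cases - exposed_cases))
            / (exposed_controls / (n_controls - exposed_controls)))

# Observed 2x2 counts: self-reported diabetes by ADL limitation (cases) v no limitation (controls)
a1, n1 = 150, 500     # exposed among cases, total cases
a0, n0 = 200, 1500    # exposed among controls, total controls
print("observed odds ratio:", round(odds_ratio(a1, n1, a0, n0), 2))

# Re-estimate the odds ratio across a range of assumed sensitivity and specificity values
for se, sp in [(0.95, 0.98), (0.85, 0.95), (0.75, 0.90)]:
    true_a1 = corrected_exposed(a1, n1, se, sp)
    true_a0 = corrected_exposed(a0, n0, se, sp)
    print(f"sensitivity {se}, specificity {sp}: corrected odds ratio =",
          round(odds_ratio(true_a1, n1, true_a0, n0), 2))
```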

Footnotes

Funding: This work was funded by the National Institutes of Health/National Institute on Aging (U24AG088894 to EN; R00AG075317 to EHL).

Provenance and peer review: Commissioned; externally peer reviewed.

References

1. Rothman KJ, Greenland S, Lash TL. Modern epidemiology. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins; 2008.
2. Rothman KJ, Huybrechts KF, Murray EJ. Epidemiology: an introduction. Oxford University Press; 2024.
3. Reitz C, den Heijer T, van Duijn C, et al. Relation between smoking and risk of dementia and Alzheimer disease. Neurology. 2007;69:998–1005. doi: 10.1212/01.wnl.0000271395.29695.9a.
4. Suemoto CK, Gilsanz P, Mayeda ER, et al. Body mass index and cognitive function: the potential for reverse causation. Int J Obes (Lond). 2015;39:1383–9. doi: 10.1038/ijo.2015.83.
5. Heesch KC, Byles JE, Brown WJ. Prospective association between physical activity and falls in community-dwelling older women. J Epidemiol Community Health. 2008;62:421–6. doi: 10.1136/jech.2007.064147.
6. Rugbjerg K, Harris MA, Shen H, et al. Pesticide exposure and risk of Parkinson’s disease--a population-based case-control study evaluating the potential for recall bias. Scand J Work Environ Health. 2011;37:427–36. doi: 10.5271/sjweh.3142.
7. Folsom AR, Szklo M, Stevens J, et al. A prospective study of coronary heart disease in relation to fasting insulin, glucose, and diabetes. The Atherosclerosis Risk in Communities (ARIC) Study. Diabetes Care. 1997;20:935–42. doi: 10.2337/diacare.20.6.935.
8. Banack HR, Kaufman JS, Wactawski-Wende J, et al. Investigating and remediating selection bias in geriatrics research: the selection bias toolkit. J Am Geriatr Soc. 2019;67:1970–6. doi: 10.1111/jgs.16022.
9. Hernán MA, Alonso A, Logroscino G. Cigarette smoking and dementia: potential selection bias in the elderly. Epidemiology. 2008;19:448–50. doi: 10.1097/EDE.0b013e31816bbe14.
10. Weuve J, Tchetgen Tchetgen EJ, Glymour MM, et al. Accounting for bias due to selective attrition: the example of smoking and cognitive decline. Epidemiology. 2012;23:119–28. doi: 10.1097/EDE.0b013e318230e861.
11. Jack CR Jr, Knopman DS, Jagust WJ, et al. Hypothetical model of dynamic biomarkers of the Alzheimer’s pathological cascade. Lancet Neurol. 2010;9:119–28. doi: 10.1016/S1474-4422(09)70299-6.
12. Karr JE, Graham RB, Hofer SM, et al. When does cognitive decline begin? A systematic review of change point studies on accelerated decline in cognitive and neurological outcomes preceding mild cognitive impairment, dementia, and death. Psychol Aging. 2018;33:195–218. doi: 10.1037/pag0000236.
13. Mayeda ER, Tchetgen Tchetgen EJ, Power MC, et al. A simulation platform for quantifying survival bias: an application to research on determinants of cognitive decline. Am J Epidemiol. 2016;184:378–87. doi: 10.1093/aje/kwv451.
14. Andersen PK, Geskus RB, de Witte T, et al. Competing risks in epidemiology: possibilities and pitfalls. Int J Epidemiol. 2012;41:861–70. doi: 10.1093/ije/dyr213.
15. Rojas-Saunero LP, Young JG, Didelez V, et al. Considering questions before methods in dementia research with competing events and causal goals. Am J Epidemiol. 2023;192:1415–23. doi: 10.1093/aje/kwad090.
16. Salive ME. Multimorbidity in older adults. Epidemiol Rev. 2013;35:75–83. doi: 10.1093/epirev/mxs009.
17. Crane BM, Nichols E, Carlson MC, et al. Body mass index and cognition: associations across mid- to late life and gender differences. J Gerontol A Biol Sci Med Sci. 2023;78:988–96. doi: 10.1093/gerona/glad015.
18. Digitale JC, Martin JN, Glymour MM. Tutorial on directed acyclic graphs. J Clin Epidemiol. 2022;142:264–7. doi: 10.1016/j.jclinepi.2021.08.001.
19. van Smeden M, Lash TL, Groenwold RHH. Reflection on modern methods: five myths about measurement error in epidemiological research. Int J Epidemiol. 2020;49:338–47. doi: 10.1093/ije/dyz251.
20. Zhang H, Clark AS, Hubbard RA. A quantitative bias analysis approach to informative presence bias in electronic health records. Epidemiology. 2024;35:349–58. doi: 10.1097/EDE.0000000000001714.
21. Muthen BO. Latent variable modeling in epidemiology. Alcohol Health Res World. 1992;16:286.
22. Kario K. Management of hypertension in the digital era. Hypertension. 2020;76:640–50. doi: 10.1161/HYPERTENSIONAHA.120.14742.
23. Barker DJP. The developmental origins of adult disease. J Am Coll Nutr. 2004;23:588S–595S. doi: 10.1080/07315724.2004.10719428.
24. Ben-Shlomo Y, Cooper R, Kuh D. The last two decades of life course epidemiology, and its relevance for research on ageing. Int J Epidemiol. 2016;45:973–88. doi: 10.1093/ije/dyw096.
25. Hayes-Larson E, Zhou Y, Rojas-Saunero LP, et al. Methods for extending inferences from observational studies: considering causal structures, identification assumptions, and estimators. Epidemiology. 2024;35:753–63. doi: 10.1097/EDE.0000000000001780.
26. Tchetgen Tchetgen EJ. Identification and estimation of survivor average causal effects. Stat Med. 2014;33:3601–28. doi: 10.1002/sim.6181.
27. Davis-Plourde KL, Mayeda ER, Lodi S, et al. Joint models for estimating determinants of cognitive decline in the presence of survival bias. Epidemiology. 2022;33:362–71. doi: 10.1097/EDE.0000000000001472.
28. Lash TL, Fox MP, MacLehose RF, et al. Good practices for quantitative bias analysis. Int J Epidemiol. 2014;43:1969–85. doi: 10.1093/ije/dyu149.
29. Sevilimedu V, Yu L. Simulation extrapolation method for measurement error: a review. Stat Methods Med Res. 2022;31:1617–36. doi: 10.1177/09622802221102619.
30. Devanarayan V, Stefanski LA. Empirical simulation extrapolation for measurement error models with replicate measurements. Stat Probab Lett. 2002;59:219–25. doi: 10.1016/S0167-7152(02)00098-6.
31. Spiegelman D, McDermott A, Rosner B. Regression calibration method for correcting measurement-error bias in nutritional epidemiology. Am J Clin Nutr. 1997;65:1179S–1186S. doi: 10.1093/ajcn/65.4.1179S.
32. Cole SR, Chu H, Greenland S. Multiple-imputation for measurement-error correction. Int J Epidemiol. 2006;35:1074–81. doi: 10.1093/ije/dyl097.
33. Kezios KL, Glymour MM, Zeki Al Hazzouri A. An introduction to longitudinal synthetic cohorts for studying the life course drivers of health outcomes and inequalities in older age. Curr Epidemiol Rep. 2024;12:2. doi: 10.1007/s40471-024-00355-1.
