Occupational and Environmental Medicine. 2006 Oct 19;64(8):562–568. doi: 10.1136/oem.2006.026690

Bias in occupational epidemiology studies

Neil Pearce, Harvey Checkoway, David Kriebel
PMCID: PMC2078501  PMID: 17053019

Abstract

The design of occupational epidemiology studies should be based on the need to minimise random and systematic error. The latter is the focus of this paper, and includes selection bias, information bias and confounding. Selection bias can be minimised by obtaining a high response rate (and by appropriate selection of the control group in a case‐control study). In general, it is important to ensure that information bias is minimised and is also non‐differential (for example, that the misclassification of exposure is not related to disease status) by collecting data in a standardised manner. A major concern in occupational epidemiology studies usually relates to confounding, because exposure has not been randomly allocated, and the groups under study may therefore have different baseline disease risks. For each of these types of bias, the goal should be to avoid the bias by appropriate study design and/or appropriate control in the analysis. However, it is also important to attempt to assess the likely direction and strength of biases that cannot be avoided or controlled.


The validity of any occupational epidemiology study is determined by the extent of systematic error (bias) that is avoided or minimised.1 Systematic error (bias) can be distinguished from random error because the latter can be reduced by increasing the size of a study, whereas bias can only be reduced by changing the study design. In this paper, we provide an overview of those aspects of bias that are particularly important in occupational epidemiology. There are many different types of bias, but three general forms are commonly distinguished: selection bias, information bias and confounding.

Selection bias

In any occupational epidemiology study, the first practical task is to select the study participants from the source population. Selection bias involves biases arising from the procedures by which the study participants are selected from this source population, or select themselves by agreeing to participate. Thus, selection bias is not an issue in a cohort study involving complete recruitment and follow‐up because in this instance the study cohort comprises the entire source population (bias may still occur because exposure has not been randomly assigned, but this involves confounding rather than selection bias1). However, selection bias can occur if participation in the study or follow‐up is incomplete. For example, in a cohort mortality study, if a national population registry (or some surrogate such as a voter registration list) were not available, then it might be necessary to attempt to contact each worker or his next‐of‐kin to verify vital status. Bias could occur if the response rate was related both to exposure and disease—for example, if it were higher in heavily exposed diseased people than in others (with low exposure and/or without disease).

Although we should recognise the possible biases arising from subject selection, it is important to note that epidemiological studies need not be based on representative samples to avoid bias. For example, in a cohort study, people who developed (non‐fatal) disease might be more likely to be lost to follow‐up than those who did not develop disease; however, this would not affect the relative risk estimate provided that loss to follow‐up applied equally to the exposed and non‐exposed populations.2 On the other hand, case‐control studies have differing selection probabilities of cases and non‐cases as an integral aspect of their design. The general principle that applies to all study designs is that selection bias will only occur when the selection probabilities are related both to exposure and health outcome.
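
As a minimal numerical sketch of this principle (hypothetical counts, not data from any study), the following Python fragment applies selection probabilities to a 2×2 table: selection that depends on exposure alone leaves the odds ratio unchanged, whereas selection that depends on both exposure and disease distorts it.

```python
# Minimal sketch (hypothetical counts): selection bias appears only when
# selection probabilities depend on both exposure and outcome.

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a=exposed cases, b=exposed non-cases,
    c=non-exposed cases, d=non-exposed non-cases."""
    return (a * d) / (b * c)

# Source population (true OR = 2.0)
a, b, c, d = 200, 1000, 100, 1000

# Selection depends on exposure only (e.g. exposed workers easier to trace):
# both cells in the exposed row are scaled equally, so the OR is unchanged.
sel_exp_only = odds_ratio(a * 0.8, b * 0.8, c * 0.5, d * 0.5)

# Selection depends on exposure AND outcome (heavily exposed diseased people
# respond more often): the OR is now biased.
sel_exp_and_dis = odds_ratio(a * 0.9, b * 0.6, c * 0.5, d * 0.5)

print(odds_ratio(a, b, c, d))   # 2.0 (true)
print(sel_exp_only)             # 2.0 (no bias)
print(sel_exp_and_dis)          # 3.0 (biased upwards)
```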

Assessment and control of selection bias

Selection bias can sometimes be assessed and/or controlled in the analysis by identifying factors which are related to subject selection and controlling for them as confounders. For example, if white‐collar workers are more likely to be selected for (or participate in) a study than manual workers (and white‐collar work is negatively or positively related to the exposure and outcomes of interest), then this bias can be partially controlled by collecting information on social class and controlling for social class in the analysis as a confounder.

Information bias

Information bias is the result of misclassification of study participants with respect to disease or exposure status. Thus, the concept of information bias refers to those people actually included in the study, whereas selection bias refers to the selection of the study participants from the source population, and confounding (see below) generally refers to non‐comparability of subgroups within the source population.3

It is customary to consider two types of misclassification: non‐differential and differential. The effects of each will be discussed in turn. An in‐depth examination of the consequences of exposure misclassification, and methods to minimise or correct for misclassification bias, can be found in the text by Armstrong et al.4 More technical issues of statistical corrections are covered by Carroll et al.5

Non‐differential misclassification

Non‐differential misclassification of exposure occurs when the probability of exposure misclassification is not related to disease status—that is, if diseased and non‐diseased people are equally likely to be misclassified according to exposure. Similarly, misclassification of disease status is non‐differential if exposed and non‐exposed people are equally likely to be misclassified according to disease status. Non‐differential misclassification usually, although not always, biases ratio measures of association like the relative risk towards the null value of 1.0.6,7,8 Hence, non‐differential information bias tends to produce “false negative” findings and is of particular concern in studies which find a negligible association between exposure and disease.
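
The direction of this bias can be shown with a simple calculation (hypothetical cohort, with assumed values for the sensitivity and specificity of exposure classification): applying the same misclassification probabilities to cases and non-cases pulls the observed risk ratio from 2.0 towards 1.0.

```python
# Minimal sketch (hypothetical cohort): non-differential exposure
# misclassification, applied equally to cases and non-cases, pulls the
# risk ratio towards the null value of 1.0.

sens, spec = 0.80, 0.90          # assumed exposure sensitivity and specificity

n_exp, n_unexp = 10_000, 10_000  # true cohort sizes
risk_exp, risk_unexp = 0.02, 0.01
cases_exp, cases_unexp = n_exp * risk_exp, n_unexp * risk_unexp

def misclassify(true_exposed, true_unexposed):
    """Return counts classified as exposed / unexposed."""
    as_exp = true_exposed * sens + true_unexposed * (1 - spec)
    as_unexp = true_exposed * (1 - sens) + true_unexposed * spec
    return as_exp, as_unexp

cases_as_exp, cases_as_unexp = misclassify(cases_exp, cases_unexp)
persons_as_exp, persons_as_unexp = misclassify(n_exp, n_unexp)

true_rr = risk_exp / risk_unexp
obs_rr = (cases_as_exp / persons_as_exp) / (cases_as_unexp / persons_as_unexp)
print(true_rr, round(obs_rr, 2))   # 2.0 versus roughly 1.6: biased towards the null
```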

Non‐differential misclassification will also produce bias towards the null value when exposure is measured as a continuous variable. In this situation, it will produce “attenuation” of the dose‐response slope so that the regression coefficient is biased towards the null value of zero.9
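
A short simulation sketch (illustrative values only, not taken from any study) shows this attenuation: with classical, non-differential error added to a continuous exposure, the fitted slope shrinks by roughly the ratio of the true exposure variance to the total observed variance.

```python
# Sketch of slope attenuation under classical measurement error in a
# continuous exposure (illustrative simulation).
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 1.0

x_true = rng.normal(0.0, 1.0, n)          # true exposure (variance 1)
y = beta * x_true + rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, 1.0, n)  # measured exposure (error variance 1)

slope_true = np.polyfit(x_true, y, 1)[0]
slope_obs = np.polyfit(x_obs, y, 1)[0]

# Expected attenuation factor = var(x) / (var(x) + var(error)) = 0.5
print(round(slope_true, 2), round(slope_obs, 2))   # ~1.0 versus ~0.5
```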

In practice, it is seldom possible to determine the extent of misclassification because “gold standards” are rarely available. Nonetheless, the extent of misclassification may be estimated by smaller‐scale validation studies, ideally in a subgroup of the population under study. It can also be inferred from prior knowledge, as the following example illustrates. In their study of cardiovascular disease mortality among British Columbia lumber mill workers, Davies et al10 observed a dose‐response gradient of acute myocardial infarction with noise exposure that was especially accentuated among workers hired before hearing protection was commonly applied (table 1). The weaker trend for the entire cohort was likely due to non‐differential misclassification of noise exposure among members of the entire cohort whose exposures were overestimated by failure to take into account hearing protection. More generally, such misclassification occurs not only as a result of failure to take into account specific exposure circumstances (for example, the use of hearing protection), but also because of random variation of exposures over time and space even when the exposure circumstances remain unchanged; such variability is usually greater than the relatively small variability that occurs due to laboratory or sampling errors.1

Table 1 Association of deaths resulting from acute myocardial infarction and cumulative noise exposure.

Cumulative exposure (dB(A)-year) | Full cohort (n = 27464): person-years, deaths, SMR (95% CI) | Subgroup without hearing protection (n = 8668): person-years, deaths, SMR (95% CI)
<100 | 314128, 226, 1.0 (0.9 to 1.1) | 133556, 174, 1.0 (0.9 to 1.2)
100–104 | 155837, 228, 1.0 (0.9 to 1.2) | 58940, 136, 1.0 (0.9 to 1.2)
105–109 | 116303, 231, 1.1 (1.0 to 1.2) | 37133, 120, 1.2 (1.0 to 1.5)
110–114 | 63998, 165, 1.0 (0.9 to 1.2) | 14646, 71, 1.3 (1.0 to 1.6)
115+ | 18479, 60, 1.1 (0.8 to 1.4) | 3071, 19, 1.3 (0.8 to 2.1)

Source: Davies et al.10

There are some important caveats to the generalisation that non‐differential misclassification produces a bias towards the null. When the specificity of the method of identifying the disease under study is 100%, but the sensitivity (the proportion of true cases that are correctly classified) is less than 100%, then the risk difference will be biased towards the null, but the risk ratio (or rate ratio) will be unbiased.11 The direction of bias may also be influenced by the manner in which the exposure is expressed. Bias towards the null can be expected when the exposure variable is classified as exposed or non‐exposed. However, when exposures are classified in ordered categories (for example, none, low, high), non‐differential misclassification between categories can produce a bias either towards or away from the null, and the bias can be especially pronounced when misclassification occurs between non‐adjacent categories.8 Furthermore, non‐differential misclassification of a positive confounder can produce a bias away from the null, because the confounding will be inadequately controlled. Another assumption required for the bias towards the null is that exposure misclassification is independent of disease misclassification.12,13 A lack of independence between non‐differential misclassification of exposure and non‐differential misclassification of disease might occur, for example, in a cross‐sectional study in which both exposure and health status are based on subjects' perceptions of exposure and disease symptoms. Kristensen13 cites a survey of health complaints after pesticide application in a potato field, which reported a clear association (OR 2.42, 95% CI 1.93 to 3.02) between intensity of odour and several (>3) health complaints, whereas a more objective exposure index (proximity zone) showed a weak negative association. One possible explanation was that the increased odds ratio was due to non‐differential but non‐independent misclassification caused by intersubject variation in threshold levels of perception of both odour and health complaints.
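
A small worked example (hypothetical risks and an assumed case-finding sensitivity) illustrates the first caveat: with 100% specificity, imperfect sensitivity scales both risks by the same factor, so the risk ratio is preserved while the risk difference shrinks.

```python
# Sketch (hypothetical numbers): disease ascertainment with 100% specificity
# but imperfect sensitivity scales both risks by the same factor, leaving the
# risk ratio unbiased while shrinking the risk difference towards zero.

sens_disease = 0.7                    # assumed case-finding sensitivity
risk_exp, risk_unexp = 0.02, 0.01     # true risks

obs_risk_exp = sens_disease * risk_exp       # no false positives added
obs_risk_unexp = sens_disease * risk_unexp

print(risk_exp / risk_unexp, round(obs_risk_exp / obs_risk_unexp, 2))   # 2.0 and 2.0
print(round(risk_exp - risk_unexp, 3), round(obs_risk_exp - obs_risk_unexp, 3))   # 0.01 versus 0.007
```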

Even when none of these exceptions to the bias toward the null principle holds, and one can show mathematically that the direction of bias from exposure misclassification should be towards the null, it may still be true that in an actual study, chance has conspired to move the effect estimate away from the null. Thus, the direction of error from misclassification can never be known with certainty; the most that can be said is that a result is probably underestimated because of exposure misclassification;14 this is what is meant by “bias towards the null”.

Differential misclassification

Differential misclassification occurs when the probability of misclassification of exposure is different in diseased and non‐diseased people, or the probability of misclassification of disease is different in exposed and non‐exposed people. This can bias the observed effect estimate either towards or away from the null value. For example, in a community‐based case‐control study of cancer, with a control group selected from among community residents free of cancer, the recall of occupational history and related exposures of controls might be different from that of the cases. Cases (or proxy respondents) might have particular motivations to report specific exposures, particularly if they had prior knowledge of presumed causal associations (for example, asbestos as a well‐known cause of lung cancer). In this situation, differential information bias would occur, and it could bias the relative risk estimate (odds ratio) towards or away from the null, depending on whether members of the community who did not develop lung cancer were more or less likely to recall such exposure than the cases.

An example of differential misclassification of exposure is provided by a community‐based study in Norway of respiratory symptoms and asthma in relation to occupational exposures to gases and dusts.15 Exposures were determined by self‐report, but exposure categorisation was also obtained with a structured work history interview. The latter was regarded as the gold standard. The sensitivity of the self‐reported data for quartz exposure varied from 21% to 64% and was higher in those with than in those without the respiratory disorders (table 2). The odds ratios for quartz exposure and respiratory symptoms were approximately halved when the “gold standard” structured interview exposure data were used instead of the data from self‐report: for example, the odds ratio for asthma fell from 1.98 to 1.45.

Table 2 Self‐reported and interview‐based occupational quartz exposure in those with and without respiratory symptoms in a Norwegian general population study, 1987–8.

Symptom | Self-reported quartz exposure | Interview-based quartz exposure | Sensitivity | Specificity
Morning cough: yes (n = 180) | 9.4% | 12.2% | 59.1% | 98.6%
Morning cough: no (n = 534) | 2.1% | 7.5% | 21.0% | 99.4%
Chronic cough: yes (n = 92) | 10.9% | 12.0% | 63.6% | 97.4%
Chronic cough: no (n = 622) | 2.9% | 8.2% | 27.5% | 99.5%
Phlegm when coughing: yes (n = 179) | 8.4% | 12.3% | 45.5% | 96.4%
Phlegm when coughing: no (n = 535) | 2.4% | 7.0% | 27.5% | 98.3%
Breathlessness grade 2: yes (n = 94) | 9.6% | 14.9% | 50.0% | 95.5%
Breathlessness grade 2: no (n = 620) | 3.1% | 7.7% | 29.2% | 98.3%
Wheezing: yes (n = 196) | 8.2% | 13.3% | 50.0% | 96.0%
Wheezing: no (n = 518) | 2.3% | 6.9% | 22.2% | 99.6%
Asthma: yes (n = 88) | 8.1% | 18.0% | 62.9% | 98.0%
Asthma: no (n = 626) | 2.2% | 7.2% | 21.0% | 100.0%

Source: Bakke et al.15

It has also been shown that categorisation of a continuous exposure variable measured with non‐differential error can introduce differential misclassification.16 This occurs because misclassification is not likely to be uniform within a category, but rather will be greater at the category boundaries. When there is a positive (or negative) association between exposure and disease, categorisation of a continuous exposure variable can introduce a differential misclassification because, within a category, cases are more likely than non‐cases to be at the upper (or lower) end of exposure boundaries. In other words, the resulting exposure misclassification from categorisation will differ according to health status.
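
This can be seen in a short simulation (illustrative parameters only, not taken from reference 16): the measurement error is generated independently of disease status, yet once the exposure is dichotomised, the proportion misclassified differs between cases and non-cases.

```python
# Simulation sketch (illustrative numbers): measurement error that is
# non-differential on the continuous scale becomes differential once the
# exposure is dichotomised, because cases and non-cases sit at different
# distances from the category boundary.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

x = rng.normal(0.0, 1.0, n)             # true exposure
x_obs = x + rng.normal(0.0, 0.5, n)     # error independent of disease status
p = 1.0 / (1.0 + np.exp(-(-3.0 + x)))   # disease risk rises with true exposure
disease = rng.random(n) < p

cut = 0.0                               # dichotomise at an arbitrary cutpoint
misclassified = (x > cut) != (x_obs > cut)

# The two proportions differ: the categorised exposure is misclassified
# differentially with respect to disease status.
print(round(misclassified[disease].mean(), 3))
print(round(misclassified[~disease].mean(), 3))
```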

Assessment and control of misclassification

The true extent of misclassification bias of exposure or disease can never be known in any one study. We might be tempted to assume that misclassifications of exposure and health outcome are both non‐differential and independent of each other, although there is often no empirical evidence to assess this assumption. Thus, every effort should be made during the conduct of a study to ensure that these assumptions are supportable. Obvious examples are performing the exposure assessment with the assessors blinded to health status, conducting health examinations blinded to exposure status, and keeping study interviewers unaware of the research hypotheses.

Statistical methods to adjust for misclassification have been described.4,7,17,18,19 These require estimates of sensitivity and specificity, or the reliability of the measurement (incorporating not only the reliability of the laboratory measurements, but also the random variation of the exposure itself in space and time), based on prior knowledge. These estimates are, however, often just guesses. For this reason, we do not advocate reporting “misclassification‐adjusted” effect estimates, although it is an informative exercise to conduct sensitivity analyses that explore the range of results that might have occurred under various scenarios.20
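
As an illustration of such a sensitivity analysis (all counts and classification parameters below are hypothetical), the sketch back-corrects the observed exposure counts in cases and controls under a few assumed sensitivity/specificity scenarios and recomputes the odds ratio.

```python
# Sketch of a simple sensitivity analysis: back-correct observed exposure
# counts in cases and controls under assumed sensitivity/specificity values
# (matrix/back-calculation correction), and see how the odds ratio could change.

def corrected_exposed(obs_exposed, total, sens, spec):
    """Estimated true number exposed, given the observed count and assumed
    classification parameters."""
    return (obs_exposed - (1 - spec) * total) / (sens + spec - 1)

cases_exp, cases_total = 60, 200
ctrls_exp, ctrls_total = 40, 200

for sens, spec in [(1.0, 1.0), (0.8, 0.95), (0.7, 0.90)]:
    a = corrected_exposed(cases_exp, cases_total, sens, spec)
    c = corrected_exposed(ctrls_exp, ctrls_total, sens, spec)
    b, d = cases_total - a, ctrls_total - c
    print(sens, spec, round((a * d) / (b * c), 2))   # 1.71, 2.0, 2.5
```

Consistent with the bias towards the null discussed above, the "corrected" odds ratio moves away from 1.0 as the assumed misclassification becomes more severe.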

If misclassification cannot be avoided, or controlled in the analysis, it is important to at least assess its possible magnitude. When resources are limited, additional exposure or health data to investigate misclassification can be obtained for a sample of the study population. The effort will be justified when additional data can corroborate information already in hand—the best situation is when the observed data can be contrasted against a “gold standard” to establish sensitivity and specificity.

Confounding

Confounding occurs when the exposed and non‐exposed subpopulations of the source population have different background disease risks.21 It can be thought of as a mixing of the effects of the exposure being studied with the effects of other factors (confounders) on the risk of the health outcome of interest. A confounder, if not adequately controlled in the study design or analysis, may bias the exposure‐disease association, making it either closer to or farther from the null than the true effect. Confounding may even reverse the apparent direction of an effect in extreme situations.

Three conditions are traditionally given as necessary (but not sufficient) for a factor to be a confounder.20 First, a confounder is a factor that is predictive of disease in the absence of the exposure under study. Note that a confounder need not be a genuine cause of the disease under study, but merely “predictive.” Hence, surrogates for causal factors (for example, age, socioeconomic status) may be regarded as potential confounders, even though they are not direct causal factors (usually the correlation is not 100% so control for a surrogate for a causal factor will at best only partially control for confounding).

Second, a confounder must be associated with exposure in the source population at the start of follow‐up (that is, at baseline). In case‐control studies this implies that a confounder will tend to be associated with exposure among the controls. An association may also occur among the cases simply because the study factor and a potential confounder are both risk factors for the disease, but this does not cause confounding in itself unless the association also exists in the source population.

Third, a variable that is affected by the exposure (that is, an intermediate on the causal pathway between exposure and disease) should not be treated as a confounder, because to do so could introduce serious bias into the results.22,23,24,25,26 For example, in a study of colon cancer among clerical workers, it would be inappropriate to control for low physical activity if it was considered that reduced physical activity was a consequence of being a clerical worker, and hence a part of the causal chain leading from clerical work to colon cancer. On the other hand, if low physical activity itself was of primary interest, then this should be studied directly, and clerical work would be regarded as a potential confounder if it also involved exposure to other risk factors for colon cancer (if not, then clerical work would merely be a surrogate for low physical activity). Similarly, we should avoid controlling for health outcomes that may be part of the pathogenic disease process, such as reduced pulmonary function following exposure to a respiratory hazard in a study of chronic obstructive lung disease. (We would, however, be justified in controlling for baseline—that is, pre‐exposure—lung function if there were reason to believe that baseline lung function was associated with subsequent exposure level.) Evaluating whether certain factors are intermediates on the causal pathway between exposure and health outcome requires information external to the study.

Selection bias and confounding are not always clearly demarcated. In particular, selection bias in the form of non‐response at baseline of a cohort can be viewed as a source of confounding, because it generates bias by producing associations of exposure with other risk factors in the study cohort. A similar phenomenon occurs in case‐control studies when selection is affected by a factor that itself affects exposure. An example occurs when matching on a factor that is associated with exposure in the source population, but is not an independent risk factor for disease. In this situation, the factor is not a confounder in the source population, but matching may turn it into a confounder which must be controlled in the data analysis.20

The healthy worker effect

The healthy worker effect is perhaps the most common example of confounding in occupational studies. This phenomenon is characterised typically by lower relative mortality from all causes combined, and from selected causes (for example, cardiovascular disease), in an occupational cohort,27,28 and occurs because relatively healthy individuals are likely to gain employment and to remain employed.

Selection occurs at two time points:29,30 selection into the workforce at the time of hire (which is influenced by good health), and selection out of the workforce at the time of termination of employment (if this is influenced by poor health). The initial selection occurs because relatively healthy people are more likely to seek and to be offered employment; the most direct way to achieve partial control for this phenomenon is to stratify on initial employment status—that is, to compare the mortality of a particular workforce with that of other employed people rather than with a general population sample (which includes invalids and the unemployed).

The second key aspect of the healthy worker effect is the selection of unhealthy people out of the workforce. Thus, the most unhealthy members of a cohort are likely to have the shortest employment duration. Steenland and Stayner31 examined employment status as a potential confounder by analysing 10 large cohort studies and classifying the person‐years at risk as “active” or “non‐active”. They found that total mortality was relatively low during active employment and high during inactive person‐years before age 65 (the typical retirement age), but was not increased during inactive person‐years following retirement. Overall, there was a negative dose‐response gradient with duration of employment, but this pattern virtually disappeared when the active and inactive person‐years were analysed separately. Thus, employment status may be a confounder, because it is a risk factor for death (either because a change in employment status may signify ill‐health, or because being unemployed increases the risk of death), and it is associated with exposure (if we are studying an exposure that only occurs in those who are employed).
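
Operationally, this kind of analysis requires splitting each worker's follow-up into active and inactive person-years so that employment status can be treated as a time-related variable. A schematic sketch (hypothetical dates; not the method or data of Steenland and Stayner31) follows.

```python
# Schematic sketch (hypothetical records): splitting each worker's follow-up
# into "active" (employed) and "inactive" (post-termination) person-years,
# so that employment status can enter the analysis as a time-related variable.
from datetime import date

def split_person_years(hire, termination, end_of_follow_up):
    """Return (active, inactive) person-years between hire and end of follow-up."""
    active_days = (min(termination, end_of_follow_up) - hire).days
    inactive_days = max((end_of_follow_up - termination).days, 0)
    return active_days / 365.25, inactive_days / 365.25

workers = [
    (date(1970, 1, 1), date(1980, 1, 1), date(2000, 1, 1)),   # left employment in 1980
    (date(1975, 6, 1), date(1999, 6, 1), date(2000, 1, 1)),   # employed for most of follow-up
]

for hire, termination, end in workers:
    active, inactive = split_person_years(hire, termination, end)
    print(round(active, 1), round(inactive, 1))   # 10.0/20.0 and 24.0/0.6 years
```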

Finally, the strength of the healthy worker effect tends to diminish with increasing time since first employment; this problem can be addressed by stratifying on length of follow‐up.30

Thus, there are at least three aspects of the healthy worker effect:1 (1) the selection of healthy people into employment (sometimes called the healthy worker selection effect or healthy hire effect), which can be controlled by making an internal comparison rather than a comparison with national mortality rates; (2) the selection of unhealthy people out of the workforce (sometimes called the healthy worker survivor effect), which can in part be controlled by controlling for (time‐related) employment status; and (3) the length of time the population has been followed, which can be addressed by controlling for length of follow‐up.29

It should be stressed, however, that adjustment for factors such as employment status or length of follow‐up may minimise confounding due to the healthy worker effect, but may not eliminate more complex biases associated with it. In particular, Robins32 has shown that bias may occur if risk factors for disease are also determinants of employment status (and hence of subsequent exposure). For example, if smokers terminate employment early (perhaps due to smoking exacerbating the effects of occupational exposures on disease symptoms, for example respiratory tract irritation), then smokers who have increased disease risks as a result of smoking will have lower cumulative exposures than non‐smokers. More generally, when a confounding factor (such as termination of employment) determines subsequent exposure and is determined by previous exposure, then standard analyses which estimate disease incidence as a function of cumulative exposure may not validly estimate the true exposure effect, even when adjustment is made for the confounder. However, the likelihood of such biases occurring is seldom clear, and adjustment for factors such as employment status may still be warranted even if it will not completely eliminate bias.

Epidemiological studies of workplace risks of non‐fatal outcomes (morbidity), such as asthma or musculoskeletal disorders, are especially prone to bias through aspects of the healthy worker effect. The tendencies for sick workers to leave employment or transfer to less‐exposed jobs are two very commonly observed phenomena in occupational morbidity studies.33 Disorders that involve acute pain or other symptoms will often result in a transfer to a less hazardous job, either by the affected worker's choice or by the employer.

Cross‐sectional studies are particularly prone to bias from the healthy worker effect. When quantifying the prevalent cases of disease in a workplace, one may underestimate the effects of exposure, if it leads not only to disease but also to leaving employment. The bias may also occur if, instead of terminating employment, those injured by exposure transfer into lower exposure areas. For example, Eisen et al34 reported on a cross‐sectional study of the prevalence of self‐reported asthma in a cohort of US automobile workers exposed to metal working fluids (MWF) while engaged in grinding operations. There was a remarkably consistent negative exposure‐response trend in which the prevalence of asthma decreased with increasing MWF exposure. At the highest exposure level, reported asthma was only about 25% of the prevalence in the non‐exposed. The investigators suspected that a healthy worker transfer bias might have occurred. They attempted to correct partially for this by associating asthma cases with the types of MWF to which cases were exposed in the two years before the time that the participant reported the onset of asthma symptoms. When these “pseudo‐incidence” data were analysed using a Cox proportional hazards model, exposure to MWF was no longer associated with a deficit in asthma.
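
A sketch of the general form of such a "pseudo-incidence" analysis is given below. It is not the authors' data or code: the records are simulated, the column names are invented, and the Cox model is fitted with the lifelines package, which is assumed to be available.

```python
# Sketch of a "pseudo-incidence" Cox proportional hazards analysis
# (simulated records; invented column names; lifelines assumed installed).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
mwf = rng.uniform(0.0, 2.0, n)                    # hypothetical MWF exposure index
time = rng.exponential(10.0 / np.exp(0.3 * mwf))  # shorter time to onset at higher exposure
censor = 8.0                                      # administrative end of follow-up (years)

df = pd.DataFrame({
    "years_at_risk": np.minimum(time, censor),
    "asthma_onset": (time < censor).astype(int),
    "mwf": mwf,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_at_risk", event_col="asthma_onset")
print(cph.params_)   # log hazard ratio for mwf (about 0.3 in this simulation)
```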

Control and assessment of confounding

Confounding can be controlled in the study design, in the analysis, or both. Control at the design stage is accomplished with two main methods.20 The first is to restrict the study to narrow ranges of values of the potential confounders—for example, by restricting the study to white males aged 35–54. This approach has a number of conceptual and computational advantages, but may severely restrict the number of potential study subjects and ultimately limit the informativeness of the study. A second method of control involves matching study subjects on potential confounders. For example, in a cohort study one would match a white male non‐exposed subject aged 35–39 with an exposed white male aged 35–39. This will prevent age‐sex‐race confounding in a cohort study, but is often expensive and time‐consuming. In case‐control studies, matching does not prevent confounding, but does facilitate its control in the analysis, although matching may actually reduce precision if it is done on a factor which is associated with exposure but is not a risk factor for the disease of interest.20

Confounding can also be controlled in the analysis using the standard methods such as logistic regression for case‐control studies, and Poisson regression for cohort studies.1 The assessment of confounding involves the use of prior knowledge about the potential confounder, together with an assessment of the extent to which the effect estimate changes when the factor is controlled in the analysis. Many epidemiologists prefer to make a decision based on the latter criterion, although this approach can be misleading, particularly if there is misclassification of exposure.35 The decision to control for a presumed confounder can certainly be made with more confidence if there is supporting prior knowledge that the factor is predictive of disease, independently of its association with exposure.
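
The change-in-estimate assessment can be illustrated with a simple stratified (Mantel-Haenszel) analysis of hypothetical case-control data: the crude odds ratio is compared with the odds ratio adjusted for the potential confounder.

```python
# Sketch of the "change in estimate" assessment: compare the crude odds ratio
# with a Mantel-Haenszel odds ratio adjusted for a potential confounder
# (hypothetical stratified counts).

def mh_odds_ratio(strata):
    """strata: list of (a, b, c, d) 2x2 tables, one per confounder level."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# (exposed cases, exposed controls, unexposed cases, unexposed controls)
strata = [(10, 40, 30, 120),   # confounder absent
          (60, 60, 20, 20)]    # confounder present

a, b, c, d = (sum(x) for x in zip(*strata))
crude_or = (a * d) / (b * c)
print(round(crude_or, 2), round(mh_odds_ratio(strata), 2))   # 1.96 versus 1.0
```

In this illustration the crude odds ratio of about 2 disappears entirely on adjustment, indicating that the apparent association was produced by the confounder.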

Most occupations involve exposure to more than one potential risk factor, and the possibility of confounding by other occupational exposures must be considered. For example, foundry environments can entail exposures to metal dusts and fumes, silica, carbon monoxide, polycyclic aromatic hydrocarbons, and formaldehyde, as well as to heat, noise and vibration. However, controlling for multiple exposures may be difficult when they are highly correlated, making it problematic to separate their effects. A practical approach to address mutual confounding from multiple agents is to consider a priori the factors most likely to be associated with the health outcome of interest, and to limit the analysis to the particular subset of relevant agents. The subset of agents can vary with health outcome. For example, in an analysis of lung cancer in foundry workers, the analysis of exposures might be limited to metal dusts and fumes, silica, polycyclic aromatic hydrocarbons, and formaldehyde, whereas carbon monoxide, heat and noise might be selected in an analysis of ischaemic heart disease.

An important advantage of studying occupational cohorts is that one can often gather both exposure and health data by using existing databases, without recourse to individual participant interviews or questionnaires. A major limitation of this approach, however, is that there is often no information about potential confounding from individual habits and behaviours, or from previous occupational exposures. However, the relatively homogeneous nature of many working populations, at least for internal comparisons of exposed and non‐exposed workers within a particular workforce, means that uncontrolled “lifestyle” confounding is likely to be small.36,37

When one lacks data on a suspected confounder, and thus cannot control confounding directly, it is still desirable to assess the likely direction and magnitude of the confounding. For example, it may be possible to obtain information on a surrogate for the confounder of interest: social class is associated with many lifestyle factors such as smoking, and may therefore be a useful surrogate for some lifestyle‐related confounders.38 Such analyses should be conducted with caution, however, as crudely constructed social class measures may be poor surrogates for lifestyle factors. Industrial cohorts are typically fairly uniform in social class, at least within the broad “blue‐collar” and “white‐collar” segments. Consequently, social class may not have strong explanatory power when studying disease risk within a homogeneous segment of the workforce. However, even though confounder control will be imperfect in this situation, it is still possible to examine whether the exposure effect estimate changes when the surrogate is controlled in the analysis, and to assess the strength and direction of the change. For example, if the relative risk actually increases (for example, from 2.0 to 2.3) or remains stable (at 2.0) when social class is controlled for, then it is unlikely that the observed excess risk is due to smoking, because social class is correlated with smoking,38 and control for social class involves partial control for smoking.

Even if it is not possible to obtain confounder information for any study participants, it may still be possible to estimate how strong the confounding from particular risk factors is likely to be. This is often done in occupational studies in which tobacco smoking is a potential confounder but smoking information is rarely available. In fact, although smoking is the strongest risk factor for lung cancer, with relative risks of the order of 10 or 20, it appears that smoking rarely produces a confounding effect greater than about 1.5 in studies of occupational disease,36,39,40 although this degree of confounding may still be important in some contexts.

When detailed individual risk factor information is not available on a potential confounder, it may be possible to assess the impact of this factor on risk estimates by conducting a type of sensitivity analysis that estimates the potential direction and extent of confounding.19,20,39,40,41,42,43,44,45,46 In this sensitivity analysis (sometimes called indirect adjustment), the magnitude of the effect of the potential confounder on the disease should be known with some confidence, and the prevalence of the potential confounder among the exposed and comparison groups should be estimable, within reasonable bounds. Then, a range of confounding effects, including a “worst case scenario”, can be calculated.39,41,43,46
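
A minimal sketch of such an indirect (Axelson-type) adjustment is shown below; the rate ratio for the confounder and its assumed prevalences in the exposed and comparison populations are hypothetical values chosen for illustration.

```python
# Sketch of an indirect (Axelson-type) sensitivity analysis for an
# unmeasured confounder: given an assumed confounder-disease rate ratio and
# assumed confounder prevalences in the exposed and comparison populations,
# estimate how much confounding the factor could produce.

def confounding_rr(rr_confounder, prev_exposed, prev_reference):
    """Ratio of the confounded to the unconfounded rate ratio."""
    return ((prev_exposed * (rr_confounder - 1) + 1) /
            (prev_reference * (rr_confounder - 1) + 1))

# Smoking as the unmeasured confounder in a hypothetical lung cancer study:
bias = confounding_rr(rr_confounder=10, prev_exposed=0.55, prev_reference=0.45)
observed_rr = 2.0
print(round(bias, 2), round(observed_rr / bias, 2))   # ~1.18 and ~1.70
```

Even with a strong confounder-disease rate ratio of 10, a moderate difference in smoking prevalence between the groups produces a confounding factor well below 1.5, consistent with the point made above.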

This type of sensitivity analysis can also be useful in certain situations in which confounder information has been collected for a subset of study participants. For example, Fingerhut and colleagues47 studied cancer risks in a cohort of 5172 workers at 12 US chemical plants contaminated with 2,3,7,8‐tetrachlorodibenzo‐p‐dioxin (TCDD), and found that mortality from lung cancer was elevated. The investigators conducted an SMR study, comparing the observed cancer mortality with that expected in the US standard population. They had smoking information from only about 4% of the cohort, at just one point in time. With such limited data, direct control for smoking was not feasible. Instead, the investigators used the reported smoking prevalence from this sample to adjust the expected numbers of lung cancers, and then recalculated the SMRs (table 3). Because the cohort sample reported a higher smoking prevalence than the US population overall, the effect was to slightly increase the expected number of lung cancer deaths and decrease the SMRs. Such limited information, if collected across all exposure‐disease subgroups, can also be used to control confounding directly in a two‐stage analysis.15,20,48,49

Table 3 Lung cancer mortality in a cohort of chemical workers exposed to TCDD.

Study group | Observed cases | Not adjusted for smoking: Exp†, SMR (95% CI) | Adjusted for smoking*: Exp, SMR (95% CI)
Full cohort | 89 | 80.1, 1.11 (0.89 to 1.37) | 84.8, 1.05 (0.85 to 1.30)
High exposure cohort‡ | 40 | 28.8, 1.39 (0.99 to 1.89) | 29.2, 1.37 (0.98 to 1.87)

Source: Fingerhut et al.47

*Adjusted using smoking data for a subset of the study population (see text).

†Expected number of lung cancer deaths.

‡Subcohort with >20 years since first employment and >1 year of exposure.

Main messages

  • The design of occupational epidemiology studies should be based on the need to minimise random and systematic error.

  • In general, it is important to ensure that information bias is minimised and is also non‐differential (for example, that the misclassification of exposure is not related to disease status) by collecting data in a standardised manner.

  • A major concern in occupational epidemiology studies usually relates to confounding, because exposure has not been randomly allocated, and the groups under study may therefore have different baseline disease risks.

  • For each of these types of bias, the goal should be to avoid the bias by appropriate study design and/or appropriate control in the analysis.

  • However, it is also important to attempt to assess the likely direction and strength of biases that cannot be avoided or controlled.

Summary

The design of occupational epidemiology studies should incorporate strategies to minimise systematic error (selection bias, information bias and confounding). Selection bias can be minimised by obtaining a high response rate (in case‐control studies we would also require that the controls be selected from the population generating the cases). Information should be collected in a standardised manner to help ensure that misclassification will be non‐differential. In this situation, provided that exposure and disease misclassification are independent of each other, the misclassification will tend to produce false negative findings and will thus be of greatest concern in studies which have not found an important effect of exposure. Thus, in general, it is important to ensure that information bias is non‐differential and, within this constraint, to keep it as small as possible. The potential for confounding by unmeasured risk factors is of concern in any epidemiological study. The task is therefore to minimise confounding in the study design, and to control for it in the analysis. Strong associations between workplace conditions and health outcomes are seldom attributable solely to uncontrolled confounding. However, confounding can be an important bias in studies where occupational risk factors have relatively modest or weak effects.50

For each of these types of bias, the goal should be to avoid the bias by appropriate study design and/or appropriate control in the analysis. However, reducing one type of bias may increase another type. For example, the use of an expensive biomarker involving a blood test may reduce misclassification of exposure but may increase random error by reducing study size (because of the cost of the biomarker), and may also increase selection bias (if non‐response is greater because of the need for a blood test). Thus, study design always involves a compromise between these competing goals, and there is always the need to assess the likely direction and strength of biases that cannot be avoided or controlled.

Acknowledgements

Funding for Neil Pearce's salary is from a Programme Grant from the Health Research Council of New Zealand. Harvey Checkoway contributed to this paper during a Visiting Scientist Fellowship at the International Agency for Research on Cancer.

Abbreviations

MWF - metal working fluids

SMR - standardised mortality ratio

References

  • 1. Checkoway H, Pearce N, Kriebel D. Research methods in occupational epidemiology. Second edition. New York: Oxford University Press, 2004.
  • 2. Criqui MH. Response bias and risk ratios in epidemiologic studies. Am J Epidemiol 1979;109:394–399.
  • 3. Pearce N, Greenland S. Confounding and interaction. In: Ahrens W, Krickeberg K, Pigeot I, eds. Handbook of epidemiology. Heidelberg: Springer-Verlag, 2004:375–401.
  • 4. Armstrong B, White E, Saracci R. Principles of exposure measurement in epidemiology. New York: Oxford University Press, 1992.
  • 5. Carroll RJ, Ruppert D, Stefanski LA. Measurement error in non-linear models. New York: Chapman and Hall, 1995.
  • 6. Brenner H, Loomis D. Varied forms of bias due to non-differential error in measuring exposure. Epidemiology 1994;5:510–517.
  • 7. Copeland KT, Checkoway H, McMichael AJ, et al. Bias due to misclassification in the estimation of relative risk. Am J Epidemiol 1977;105:488–495.
  • 8. Dosemeci M, Wacholder S, Lubin JH. Does nondifferential misclassification of exposure always bias a true effect toward the null value? Am J Epidemiol 1990;132:746–748.
  • 9. Heederik D, Attfield M. Characterization of dust exposure for the study of chronic occupational lung disease: a comparison of different exposure assessment strategies. Am J Epidemiol 2000;151:982–990.
  • 10. Davies HW, Teschke K, Kennedy SM, et al. Occupational exposure to noise and mortality from acute myocardial infarction. Epidemiology 2005;16:25–32.
  • 11. Pearce N. A short introduction to epidemiology. Second edition. Wellington: Centre for Public Health Research, 2005.
  • 12. Chavance M, Dellatolas G, Lellouch J. Correlated nondifferential misclassifications of disease and exposure: application to a cross-sectional study of the relation between handedness and immune disorders. Int J Epidemiol 1992;21:537–546.
  • 13. Kristensen P. Bias from nondifferential but dependent misclassification of exposure and outcome. Epidemiology 1992;3:210–215.
  • 14. Loomis D, Salvan A, Kromhout H, et al. Selecting indices of occupational exposure for epidemiologic studies. Occupational Hygiene 1999;5:73–91.
  • 15. Bakke PS, Hanoa R, Gulsvik A. Relation of occupational exposure to respiratory symptoms and asthma in a general population sample: self-reported versus interview-based exposure data. Am J Epidemiol 2001;154:477–483.
  • 16. Flegal KM, Keyl PM, Nieto FJ. Differential misclassification arising from nondifferential errors in exposure measurement. Am J Epidemiol 1991;134:1233–1244.
  • 17. Greenland S, Kleinbaum D. Correcting for misclassification in two-way tables and matched-pair studies. Int J Epidemiol 1983;12:93–97.
  • 18. Thomas D, Stram D, Dwyer J. Exposure-measurement error: influence on exposure-disease relationships and methods of correction. Annu Rev Public Health 1993;14:69–93.
  • 19. Armstrong BG. Effect of measurement error on epidemiological studies of environmental and occupational exposures. Occup Environ Med 1998;55:651–656.
  • 20. Rothman KJ, Greenland S. Modern epidemiology. Second edition. Philadelphia: Lippincott-Raven, 1998.
  • 21. Greenland S, Robins JM. Identifiability, exchangeability, and epidemiological confounding. Int J Epidemiol 1986;15:413–419.
  • 22. Cole SR, Hernan MA. Fallibility in estimating direct effects. Int J Epidemiol 2002;31:163–165.
  • 23. Weinberg CR. Toward a clearer definition of confounding. Am J Epidemiol 1993;137:1–8.
  • 24. Greenland S, Neutra RR. Control of confounding in the assessment of medical technology. Int J Epidemiol 1980;9:361–367.
  • 25. Robins JM, Morgenstern H. The foundations of confounding in epidemiology. Comp Math Appl 1987;14:869–916.
  • 26. Robins JM, Greenland S. Identifiability and exchangeability for direct and indirect effects. Epidemiology 1992;3:143–155.
  • 27. Gilbert ES. Some confounding factors in the study of mortality and occupational exposures. Am J Epidemiol 1982;116:177–188.
  • 28. McMichael AJ. Standardized mortality ratios and the “healthy worker effect”: scratching beneath the surface. J Occup Med 1976;18:165–168.
  • 29. Pearce N, Checkoway H, Shy C. Time-related factors as potential confounders and effect modifiers in studies based on an occupational cohort. Scand J Work Environ Health 1986;12:97–107.
  • 30. Pearce N. Methodological problems of time-related variables in occupational cohort studies. Rev Epidemiol Sante Publique 1992;40(Suppl 1):S43–S54.
  • 31. Steenland K, Stayner L. The importance of employment status in occupational cohort mortality studies. Epidemiology 1991;2:418–423.
  • 32. Robins J. A new approach to causal inference in mortality studies with a sustained exposure period: application to the healthy worker survivor effect. Mathematical Modeling 1986;7:1393–1512.
  • 33. Eisen EA, Wegman DH, Louis TA, et al. Healthy worker effect in a longitudinal study of one-second forced expiratory volume (FEV1) and chronic exposure to granite dust. Int J Epidemiol 1995;24:1154–1161.
  • 34. Eisen EA, Holcroft CA, Greaves IA, et al. A strategy to reduce healthy worker effect in a cross-sectional study of asthma and metalworking fluids. Am J Ind Med 1997;31:671–677.
  • 35. Greenland S, Robins JM. Confounding and misclassification. Am J Epidemiol 1985;122:495–506.
  • 36. Siemiatycki J, Wacholder S, Dewar R, et al. Smoking and degree of occupational exposure: are internal analyses in cohort studies likely to be confounded by smoking status? Am J Ind Med 1988;13:59–69.
  • 37. Kriebel D, Zeka A, Eisen EA, et al. Quantitative evaluation of the effects of uncontrolled confounding by alcohol and tobacco in occupational cancer studies. Int J Epidemiol 2004;33:1040–1045.
  • 38. Kogevinas M, Pearce N, Susser M, et al. Social inequalities and cancer. In: Boffetta P, ed. Social inequalities and cancer. Lyon: IARC, 1997:1–15.
  • 39. Axelson O. Aspects on confounding in occupational health epidemiology. Scand J Work Environ Health 1978;4:85–89.
  • 40. Axelson O. Confounding from smoking in occupational epidemiology. Br J Ind Med 1989;46:505–507.
  • 41. Bross IDJ. Pertinency of an extraneous variable. J Chron Dis 1967;20:487–495.
  • 42. Cornfield J, Haenszel W, Hammond EC, et al. Smoking and lung cancer: recent evidence and a discussion of some questions. J Natl Cancer Inst 1959;22:173–203.
  • 43. Schlesselman JJ. Assessing effects of confounding variables. Am J Epidemiol 1978;99:3–8.
  • 44. Checkoway H, Waldman GT. Assessing the possible extent of confounding in occupational case-referent studies. Scand J Work Environ Health 1985;11:131–133.
  • 45. Axelson O, Steenland K. Indirect methods of assessing the effects of tobacco use in occupational studies. Am J Ind Med 1988;13:105–118.
  • 46. Flanders WD, Khoury MJ. Indirect assessment of confounding: graphic description and limits on effect for adjusting for covariates. Epidemiology 1990;1:239–246.
  • 47. Fingerhut MA, Halperin WE, Marlow DA, et al. Cancer mortality in workers exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin. N Engl J Med 1991;324:212–218.
  • 48. White JE. A two stage design for the study of the relationship between a rare exposure and a rare disease. Am J Epidemiol 1982;115:119–128.
  • 49. Walker AM. Anamorphic analysis: sampling and estimation for covariate effects when both exposure and disease are known. Biometrics 1982;38:1025–1032.
  • 50. Ahlbom A, Steineck G. Aspects of misclassification of confounding factors. Am J Ind Med 1992;21:107–112.
