BMC Medical Research Methodology. 2024 Jul 16;24:151. doi: 10.1186/s12874-024-02277-4

Hypothesis testing and sample size considerations for the test-negative design

Yanan Huo 1, Yang Yang 2, M Elizabeth Halloran 3,4, Ira M Longini Jr 5, Natalie E Dean 6
PMCID: PMC11251325  PMID: 39014324

Abstract

The test-negative design (TND) is an observational study design to evaluate vaccine effectiveness (VE) that enrolls individuals receiving diagnostic testing for a target disease as part of routine care. VE is estimated as one minus the adjusted odds ratio of testing positive versus negative, comparing vaccinated and unvaccinated patients. Although the TND is related to case–control studies, it is distinct in that the ratio of test-positive cases to test-negative controls is not typically pre-specified. For both types of studies, sparse cells are common when vaccines are highly effective. We consider the implications of these features on power for the TND. We use simulation studies to explore three hypothesis-testing procedures and associated sample size calculations for case–control and TND studies. These tests, all based on a simple logistic regression model, are a standard Wald test, a continuity-corrected Wald test, and a score test. The Wald test performs poorly in both the case–control study and the TND when VE is high because the number of vaccinated test-positive cases can be low or zero. Continuity corrections help to stabilize the variance but induce bias. We observe superior performance with the score test, as the variance is pooled under the null hypothesis of no group differences. We recommend using a score-based approach to design and analyze both case–control and TND studies. We propose a modification to the TND score sample size to account for additional variability in the ratio of controls over cases. This work enhances our understanding of the data generating mechanism in the TND and how it is distinct from that of a case–control study due to the TND's passive recruitment of controls.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12874-024-02277-4.

Keywords: Test-negative design, Case-control study, Vaccines, Sample size, Score test, Continuity correction

Introduction

The test-negative design (TND) is an observational vaccine study design commonly used to monitor the effectiveness of influenza vaccines [1, 2] as well as vaccines targeting rotavirus [3], cholera [4] and COVID-19 [5–7]. It originated from the indirect cohort study used to measure pneumococcal vaccine effectiveness (VE) in 1980 [8]. In a typical TND [1], patients seek health care for symptoms of a particular disease, and their specimens are taken for laboratory testing for the vaccine-targeted pathogen. Groups of test-positive cases and test-negative controls are formed according to the test results, analogous to cases and controls in a case–control study. Vaccination history and demographic information are recorded for each enrolled individual. The central assumption of the TND is that the vaccine of interest has no impact on other etiologies of disease [2]. The TND can also be used to estimate the relative effectiveness of two vaccines in a direct comparison, or the relative effectiveness of a single vaccine over time by stratifying on time since vaccination. Both test-positive cases and test-negative controls are restricted to people who would seek health care if they experienced symptoms, reducing selection bias due to health-care-seeking behavior [9, 10]. In addition, TND studies are cost-effective, as they require neither prospective follow-up nor active sampling of controls from the community [6]. The studies can be integrated into existing surveillance systems [11].

The TND is commonly analyzed as a case–control study using either logistic [12–15] or conditional logistic regression [16–18]. Covariates often included are age, calendar time, sex, enrollment site, and comorbidities [5, 6, 19–21]. VE is estimated as one minus the adjusted odds ratio, with an associated Wald-based confidence interval and p-value [22, 23]. A strength of vaccines is that they can be highly protective; many vaccines against various infectious diseases, such as those targeting COVID-19 [24] and HPV [25], exhibit effectiveness above 90%. As a result, it is not rare to observe low numbers of vaccinated test-positive cases [15]. In these settings, the Wald approach can produce unreliable or even intractable variance estimates. An alternative approach is to add a continuity correction [26, 27] or use exact methods [12, 28–30]. Score testing is another option [31]; because the variance is estimated under the null hypothesis of no difference, the groups are pooled, which reduces sparsity, and the resulting statistic can remain tractable even if a different null hypothesis is used. However, score tests are not commonly used for TND analyses.

Limited guidance is available on power and sample size calculations for the TND. The TND is fundamentally a passive design, with investigators not having direct control over the number of test-positive cases and test-negative controls observed. Yet power and sample size calculations are useful for determining study feasibility and the breadth of eligibility criteria, planning the number of participating sites, and defining the study's duration. Investigators may conduct interim analyses of data as part of real-time monitoring, and they may wish to time these only after sufficient data have accrued. The most natural approach for power and sample size calculations is to use case–control equivalents to design the TND. For case–control studies, Breslow proposed a sample size formula corresponding to the Wald test in 1987 [32], and Fleiss modified this Wald-based sample size by adding a continuity correction [33]. The score sample size [21] was developed based on the score statistic from a logistic regression. Other sample size methods, such as the arcsine transformation sample size [32] and exact test sample sizes [34], have also been proposed. A limitation of exact approaches is that a closed-form solution for sample size or power does not exist, although software packages are readily available.

In this article, we examine hypothesis-testing methods for TND data, assessing the performance of the Wald test with and without continuity corrections, and of a score test, all carried over from the case–control framework, with a focus on sparse data settings. We also compare the performance of their associated sample size calculations. We explore differences between case–control and TND studies and identify the added variability, relative to case–control studies, due to the random ratio of test-positives to test-negatives in the TND. We propose a sample size calculation strategy for the TND to mitigate that additional variability.

Methods

Sample size methods

We consider three sample size calculation methods corresponding to three different hypothesis tests: a standard Wald test, a Wald test with continuity corrections, and a score test. These are one-sided tests of the null hypothesis $H_0: VE \le 1-\theta_0$ against the alternative $H_1: VE > 1-\theta_0$, for a prespecified $\theta_0 > 0$. Data can be summarized in a simple 2 × 2 table with cell counts a, b, c, d as shown in Table 1, and VE is estimated by one minus the odds ratio (OR), i.e., $1 - \frac{ad}{bc}$. The equivalent null hypothesis is then that the OR is greater than or equal to $\theta_0$, and the equivalent alternative hypothesis is that the OR is less than $\theta_0$, i.e., $H_0: OR \ge \theta_0$ and $H_1: OR < \theta_0$. Sample size calculations are often based on simplified assumptions, and we discuss the basic scenario without adjusting for confounders here. An approach adjusting for confounders would be similar, based on a logistic regression with added covariates [35].

Table 1.

2 × 2 contingency table

                 Test positives   Test negatives
Vaccinated             a                b
Unvaccinated           c                d
Total                nTP              nTN

The total number of tests is n = nTP + nTN, and N denotes the size of the source population.

Standard Wald test

The standard Wald test is a common test of the log odds ratio, where the variance is estimated by the delta method under the alternative hypothesis (i.e., from the observed, unpooled cell counts). The Wald test statistic $T_W$ based on the four cell counts in Table 1 is:

$$ T_W = \frac{\ln\!\left(\frac{ad}{bc}\right) - \ln(\theta_0)}{\sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}}} $$

With a sufficiently large sample size, this test statistic approximately follows a standard normal distribution under the null hypothesis, which can be used to derive a corresponding p-value. Note that if any of the cell counts is zero, the standard Wald test statistic $T_W$ is intractable because the denominator includes a 1/0 term. A similar problem occurs when fitting a logistic regression via glm() in R: the estimated variance is highly inflated, resulting in a Wald test statistic near 0.
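To illustrate this behavior, the following R sketch (our own, with invented cell counts) computes the Wald statistic for a table with zero vaccinated test-positive cases and then fits the corresponding logistic regression with glm():

```r
# Hypothetical 2 x 2 cell counts with zero vaccinated test-positive cases (a = 0)
a <- 0; b <- 30; c <- 10; d <- 60

# Direct Wald statistic: the 1/a term is infinite, so T_W cannot be computed
t_w <- (log((a * d) / (b * c)) - log(1)) / sqrt(1/a + 1/b + 1/c + 1/d)
t_w  # NaN: log(0) divided by an infinite standard error

# The same problem appears in the equivalent logistic regression: glm() reports
# a very large standard error for the vaccination coefficient, so the Wald
# z-statistic is close to zero
dat <- data.frame(vaccinated = c(1, 1, 0, 0),
                  positive   = c(1, 0, 1, 0),
                  count      = c(a, b, c, d))
fit <- glm(positive ~ vaccinated, family = binomial, data = dat, weights = count)
summary(fit)$coefficients
```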

Corresponding to the standard Wald test, the Fleiss sample size method [33] is widely used in practice for the design of case–control studies. We modify their formula to re-express the sample size in terms of parameters relevant to the TND. These are: VE, the assumed level of vaccine effectiveness; $p_N$, the expected fraction vaccinated among negative tests, which is a proxy for the vaccination coverage in the source population under the central assumption that the vaccine has no effect on test-negative illness; $1 - e^{-\Lambda_I(\tau)}$, the cumulative incidence of test-positive illness in the unvaccinated population, assuming that individuals test positive no more than once during the study period $\tau$ (i.e., gaining immunity after infection with the target pathogen); and $\Lambda_N(\tau)$, the cumulative hazard of test-negative illness during the study period $\tau$, allowing individuals to repeatedly test negative with different circulating pathogens producing similar symptoms.

From these inputs, we define several related quantities. These are: $p_I$, the expected fraction vaccinated among positive tests, $p_I \approx \frac{p_N(1-VE)}{1 - p_N \cdot VE}$ (see Supplement); and $\pi$, the expected fraction of test-positive cases amongst all tests (i.e., percent positivity), which can be approximated as follows:

$$ \pi \approx \frac{(1 - p_N \cdot VE)\left(1 - e^{-\Lambda_I(\tau)}\right)}{(1 - p_N \cdot VE)\left(1 - e^{-\Lambda_I(\tau)}\right) + \Lambda_N(\tau)} $$

Alternatively, π can be estimated based on historical surveillance data.
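As a numerical illustration, the R snippet below (our own sketch; variable names are ours) plugs design inputs borrowed from the simulation settings later in the paper into these approximations:

```r
# Illustrative design inputs (values taken from the simulation settings below)
VE       <- 0.95          # assumed vaccine effectiveness
p_N      <- 0.30          # expected fraction vaccinated among test negatives
tau      <- 100           # study duration in days
Lambda_I <- 0.001 * tau   # cumulative hazard of test-positive illness over tau
Lambda_N <- 0.002 * tau   # cumulative hazard of test-negative illness over tau

# Expected fraction vaccinated among test positives
p_I <- p_N * (1 - VE) / (1 - p_N * VE)

# Expected fraction of test-positive cases among all tests (percent positivity)
pos    <- (1 - p_N * VE) * (1 - exp(-Lambda_I))
pi_exp <- pos / (pos + Lambda_N)

round(c(p_I = p_I, pi = pi_exp), 3)  # approximately 0.021 and 0.254
```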

The quantity $\pi$ has a parallel to the ratio $k$ of controls to cases that is often specified in case–control studies (e.g., $k = 2$ for 2:1 controls to cases). In a case–control study with ratio $k$, the fraction of cases amongst all observations is $\pi = \frac{1}{k+1}$. In case–control studies, this quantity is pre-specified and fixed by design. In a TND, the number of positive or negative tests is typically not controlled because of the passive sampling; $\pi$ then represents the expected fraction of cases amongst all tests.

The standard Wald sample size with one-sided significance level $\alpha$ and desired power $1-\gamma$, adapted for the TND, is as follows:

$$ n_W = \frac{\left\{ Z_{1-\alpha}\sqrt{\left[\pi p_I + (1-\pi)p_N\right]\left[1 - \pi p_I - (1-\pi)p_N\right]} + Z_{1-\gamma}\sqrt{(1-\pi)p_I(1-p_I) + \pi p_N(1-p_N)} \right\}^2}{\pi(1-\pi)\left(p_I - p_N\right)^2} $$

For a TND, nW denotes the estimated required total number of tests in the study.
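A direct translation of this formula into R might look as follows; this is our own sketch rather than the authors' code, and the function and argument names are hypothetical:

```r
# Standard Wald (Fleiss-type) sample size adapted to the TND, following n_W above
n_wald <- function(p_I, p_N, pi, alpha = 0.025, power = 0.80) {
  z_a   <- qnorm(1 - alpha)              # Z_{1-alpha}
  z_g   <- qnorm(power)                  # Z_{1-gamma}
  p_bar <- pi * p_I + (1 - pi) * p_N     # pooled fraction vaccinated
  num   <- (z_a * sqrt(p_bar * (1 - p_bar)) +
            z_g * sqrt((1 - pi) * p_I * (1 - p_I) + pi * p_N * (1 - p_N)))^2
  ceiling(num / (pi * (1 - pi) * (p_I - p_N)^2))
}

# Example with the inputs above (95% VE, 30% coverage): about 74 total tests
n_wald(p_I = 0.021, p_N = 0.30, pi = 0.254)
```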

Wald test with continuity corrections

To avoid zero cell counts, which make the standard Wald test statistic intractable, and to better approximate a normal distribution, a small number $\delta$, referred to as a continuity correction, can be added to each cell count. Various continuity corrections are described in the literature [26, 36, 37]. An example is Yates' correction, which uses $\delta = 0.5$. The continuity-corrected Wald test statistic $T_C$ is:

$$ T_C = \frac{\ln\!\left(\frac{(a+\delta)(d+\delta)}{(b+\delta)(c+\delta)}\right) - \ln(\theta_0)}{\sqrt{\frac{1}{a+\delta} + \frac{1}{b+\delta} + \frac{1}{c+\delta} + \frac{1}{d+\delta}}} $$

For the continuity-corrected Wald test statistic, a corresponding sample size calculation method is the Fleiss sample size with Yates’ correction, a modification of the standard Wald sample size. The corrected sample size nC is expressed as a function of nW, π, pI and pN:

$$ n_C = \frac{n_W}{4}\left[1 + \sqrt{1 + \frac{2}{\pi(1-\pi)\, n_W\, |p_I - p_N|}}\right]^2 $$
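Continuing the sketch above, the corrected sample size can be computed from the standard Wald sample size (again our own code and naming):

```r
# Fleiss-type continuity-corrected sample size, built from the standard Wald n_W
n_wald_cc <- function(n_w, p_I, p_N, pi) {
  ceiling((n_w / 4) *
            (1 + sqrt(1 + 2 / (pi * (1 - pi) * n_w * abs(p_I - p_N))))^2)
}

# Example: the correction inflates n_W = 74 (95% VE, 30% coverage) to roughly 92
n_wald_cc(n_w = 74, p_I = 0.021, p_N = 0.30, pi = 0.254)
```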

Score test

The final test considered is a score test based on the likelihood from a simple logistic regression with binary vaccination status [35]. The test statistic $T_S$ uses the variance estimated under $H_0$:

$$ T_S = \frac{U_{H_0}}{\sqrt{\mathrm{Var}(U_{H_0})/n}} $$

where n is the total number of tests, $U_{H_0}$ is the score under the null, and $\mathrm{Var}(U_{H_0})$ is the variance of $U_{H_0}$ calculated from the information matrix. For demonstration, when the upper bound of the null set of odds ratios is 1, i.e., $\theta_0 = 1$, the test statistic simplifies to

$$ T_S = \frac{\hat{p}_I - \hat{p}_N}{\hat{\sigma}_0 / \sqrt{n}} = \frac{(ad - bc)\sqrt{a+b+c+d}}{\sqrt{(a+c)(b+d)(a+b)(c+d)}}, $$

where $\hat{p}_I = \frac{a}{a+c}$ and $\hat{p}_N = \frac{b}{b+d}$ are the empirical proportions vaccinated among test positives and test negatives, and $\hat{\sigma}_0^2 / n = \frac{(a+b)(c+d)}{(a+b+c+d)(a+c)(b+d)}$ is the empirical estimate of the variance of $\hat{p}_I - \hat{p}_N$ when $\theta_0 = 1$ (see supplement for details). Note that the variance in the score test statistic is estimated under the null hypothesis. By pooling data from the groups under the null hypothesis, the test statistic remains tractable even when an individual cell is zero, as long as all margins are non-zero.
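As a small sketch in R (our own, with hypothetical counts), the simplified statistic for $\theta_0 = 1$ stays finite even when a = 0, and its square coincides with the uncorrected Pearson chi-square statistic for the 2 × 2 table:

```r
# Score test statistic at theta_0 = 1 computed from the four cell counts
score_stat <- function(a, b, c, d) {
  n <- a + b + c + d
  (a * d - b * c) / sqrt((a + c) * (b + d) * (a + b) * (c + d) / n)
}

score_stat(a = 0, b = 30, c = 10, d = 60)  # about -2.18, finite despite a = 0

# Cross-check: its square equals the uncorrected Pearson chi-square statistic
chisq.test(matrix(c(0, 10, 30, 60), nrow = 2), correct = FALSE)$statistic
```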

For the score test statistic, a corresponding sample size calculation method is as follows:

$$ n_S = \frac{\left( Z_{1-\gamma}\,\sigma_1 + Z_{1-\alpha}\,\sigma_0 \right)^2}{\left(p_I - p_N\right)^2} $$

where $\sigma_0^2 = \frac{1}{\pi(1-\pi)}\left[\pi p_I + (1-\pi)p_N\right]\left[\pi(1-p_I) + (1-\pi)(1-p_N)\right]$ and $\sigma_1^2 = \frac{1}{\pi(1-\pi)} \cdot \frac{p_I\, p_N (1-p_I)(1-p_N)}{\pi p_I(1-p_I) + (1-\pi)p_N(1-p_N)}$. These terms are related to the variance of the numerator of the score test statistic, $\hat{p}_I - \hat{p}_N$: $\sigma_0^2/n$ and $\sigma_1^2/n$ are the assumed variances of the numerator under $H_0$ and $H_1$, respectively. The variances are derived from the likelihood of a simple logistic regression (see supplement for details).
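A sketch of this calculation in R (our own implementation and naming), reusing the illustrative inputs from above:

```r
# Case-control score sample size n_S using the variance expressions above
n_score <- function(p_I, p_N, pi, alpha = 0.025, power = 0.80) {
  p_bar  <- pi * p_I + (1 - pi) * p_N
  sigma0 <- sqrt(p_bar * (1 - p_bar) / (pi * (1 - pi)))
  sigma1 <- sqrt(p_I * p_N * (1 - p_I) * (1 - p_N) /
                   (pi * (1 - pi) *
                      (pi * p_I * (1 - p_I) + (1 - pi) * p_N * (1 - p_N))))
  ceiling((qnorm(power) * sigma1 + qnorm(1 - alpha) * sigma0)^2 / (p_I - p_N)^2)
}

# Example (95% VE, 30% coverage): about 63 total tests, smaller than n_W = 74
n_score(p_I = 0.021, p_N = 0.30, pi = 0.254)
```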

Proposed TND score sample size for high vaccine effectiveness

To account for the additional variability of the fraction of test positives among all tests, $\pi$, in the TND, we propose a modification to the case–control score power calculation for high VE. The standard calculation is based on a single assumed fraction $\pi$. We instead sum the power over all possible values of $\hat{\pi} = \frac{a+c}{n}$, weighted by the binomial probability mass, since the test-positive infections are independent of the test-negative infections. To calculate the probability of rejection for each value of $\hat{\pi}$, it is necessary to define two variance terms. The variance of the test statistic numerator $\hat{p}_I - \hat{p}_N$ under the null is roughly constant across values of $\hat{\pi}$, which we denote as $\sigma_0^2$. Meanwhile, the variance under the alternative varies with $\hat{\pi}$. We use a version derived from a multinomial distribution, $\tilde{\sigma}_1^2(\hat{\pi}) = \frac{p_I(1-p_I)}{\hat{\pi}} + \frac{p_N(1-p_N)}{1-\hat{\pi}} + 2 p_I p_N$ (see supplement).

$$
\begin{aligned}
1-\gamma &= \sum_{k=0}^{n} \Pr\!\left(T_S < Z_\alpha \,\middle|\, \hat{\pi} = \tfrac{k}{n}\right) \Pr\!\left(\hat{\pi} = \tfrac{k}{n}\right) \\
&= \sum_{k=0}^{n} \Pr\!\left(T_S < Z_\alpha \,\middle|\, \hat{\pi} = \tfrac{k}{n}\right) \binom{n}{k} \pi^{k} (1-\pi)^{n-k} \\
&= \sum_{k=0}^{n} \Pr\!\left(\frac{\hat{p}_I - \hat{p}_N}{\sigma_0/\sqrt{n}} < Z_\alpha \,\middle|\, \hat{\pi} = \tfrac{k}{n}\right) \binom{n}{k} \pi^{k} (1-\pi)^{n-k} \\
&= \sum_{k=0}^{n} \Pr\!\left(\frac{\hat{p}_I - \hat{p}_N - (p_I - p_N)}{\tilde{\sigma}_1(\hat{\pi})/\sqrt{n}} < \frac{Z_\alpha \sigma_0/\sqrt{n} - (p_I - p_N)}{\tilde{\sigma}_1(\hat{\pi})/\sqrt{n}} \,\middle|\, \hat{\pi} = \tfrac{k}{n}\right) \binom{n}{k} \pi^{k} (1-\pi)^{n-k} \\
&= \sum_{k=0}^{n} \Phi\!\left(\frac{Z_\alpha \sigma_0 - (p_I - p_N)\sqrt{n}}{\tilde{\sigma}_1\!\left(\hat{\pi} = \tfrac{k}{n}\right)}\right) \binom{n}{k} \pi^{k} (1-\pi)^{n-k}
\end{aligned}
$$

The proposed score sample size for high VE can be found by grid search from the case-control score sample size until the right-hand side of the equation achieves the desired power.
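The grid search can be sketched in R as follows; this is our own implementation of the weighted power sum above, it reuses n_score() from the earlier sketch, and all names are ours:

```r
# Approximate TND power at total sample size n, averaging the rejection
# probability over the binomial distribution of the number of test positives
power_tnd <- function(n, p_I, p_N, pi, alpha = 0.025) {
  k       <- 0:n
  pi_hat  <- k / n
  p_bar   <- pi * p_I + (1 - pi) * p_N
  sigma0  <- sqrt(p_bar * (1 - p_bar) / (pi * (1 - pi)))
  sigma1t <- sqrt(p_I * (1 - p_I) / pi_hat +
                    p_N * (1 - p_N) / (1 - pi_hat) + 2 * p_I * p_N)
  z <- (qnorm(alpha) * sigma0 - (p_I - p_N) * sqrt(n)) / sigma1t
  sum(pnorm(z) * dbinom(k, n, pi))
}

# Grid search upward from the case-control score sample size
# (assumes n_score() from the earlier sketch is available)
n_score_tnd <- function(p_I, p_N, pi, alpha = 0.025, power = 0.80) {
  n <- n_score(p_I, p_N, pi, alpha, power)
  while (power_tnd(n, p_I, p_N, pi, alpha) < power) n <- n + 1
  n
}

n_score_tnd(p_I = 0.021, p_N = 0.30, pi = 0.254)  # exceeds the case-control n_S here
```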

Simulations

To compare the case-control studies and TNDs, we performed a simulation study based on the same vaccine effectiveness and same population vaccine coverage. The ratio of cases to controls is fixed by design in the case–control study but variable in the TND, although we fix the expected value of the ratio for the latter so that the studies can be directly compared.

We considered scenarios across several values of vaccine effectiveness, VE = 30%, 50%, 70%, 90%, and 95%, crossed with vaccine coverage pN = 10%, 30%, 50%, 70%, and 90%. Vaccination is assumed to be completed before the study starts. Because vaccination coverage is constant over time, calendar time is not a confounder [38]. N·pN individuals in the population are randomly selected to be vaccinated and the remaining N(1 − pN) remain unvaccinated. An all-or-none vaccine model [39] is adopted: among vaccinated individuals, a proportion VE are randomly selected to be fully protected, and the rest are not protected and share the same incidence rates as the unvaccinated.

To focus on the comparison between the TND and the case–control study, we assume a constant hazard for both test-positive and test-negative illness, i.e., $\Lambda_I(\tau) = \lambda_I \tau$ and $\Lambda_N(\tau) = \lambda_N \tau$. We generate event times separately for test-positive events and test-negative events. Individuals may test positive only once during the study period, assuming there is some short-term immunity from infection. Their probability of testing positive is $1 - e^{-\Lambda_I(\tau)}$ if unvaccinated and $(1-VE)\left(1 - e^{-\Lambda_I(\tau)}\right)$ if vaccinated. While individuals are not at risk of testing positive again after initially testing positive, they remain in the population and can still test negative. Individuals can test negative many times. Because testing negative is a recurrent event, their expected number of negative tests is $\Lambda_N(\tau)$; this is not affected by vaccination status. In the simulation study, we set $\tau$ = 100 days and constant hazards $\lambda_I$ = 0.001 day⁻¹ and $\lambda_N$ = 0.002 day⁻¹. Across the combinations of vaccine effectiveness and vaccine coverage, 1%–10% of the population is infected by the test-positive pathogen and around 20% of the population is infected by test-negative pathogens by the end of the study. Each individual may receive a maximum of one positive test and up to three negative tests; any additional tests are disregarded, as more than three test-negative infections within the same period is deemed exceptional. Less than 1% of individuals have more than one negative test in the settings considered.

To ensure the study duration is around 100 days, the source population size N is calculated based on the expected cell counts. For each combination of vaccine effectiveness VE and vaccine coverage pN, the unit values of the cell counts are calculated using the approach described in Dean et al. [38] (see Supplement for details): $u_a = \frac{E(a)}{N} = p_N(1-VE)\left(1 - e^{-\Lambda_I(\tau)}\right)$, $u_b = \frac{E(b)}{N} = p_N \Lambda_N(\tau)$, $u_c = \frac{E(c)}{N} = (1-p_N)\left(1 - e^{-\Lambda_I(\tau)}\right)$, and $u_d = \frac{E(d)}{N} = (1-p_N)\Lambda_N(\tau)$. The two Wald and the score sample sizes are calculated at the 0.025 significance level and 80% desired power based on the $n_W$, $n_C$ and $n_S$ formulae. The source population size N is then determined by dividing the preset sample size by the sum of the unit cell counts, i.e., $N = \frac{n}{u_a + u_b + u_c + u_d}$, where n is the calculated sample size. Note that the source population we consider here is the population who would seek health care and be tested if sick.
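For example, for 95% VE and 30% coverage, the implied source population size can be computed as follows (our own sketch):

```r
# Unit (per-person) expected cell counts and the implied source population size
VE <- 0.95; p_N <- 0.30; tau <- 100
lambda_I <- 0.001; lambda_N <- 0.002
Lambda_I <- lambda_I * tau; Lambda_N <- lambda_N * tau

u_a <- p_N * (1 - VE) * (1 - exp(-Lambda_I))   # vaccinated, test positive
u_b <- p_N * Lambda_N                          # vaccinated, test negative
u_c <- (1 - p_N) * (1 - exp(-Lambda_I))        # unvaccinated, test positive
u_d <- (1 - p_N) * Lambda_N                    # unvaccinated, test negative

n <- 74                                        # e.g. the standard Wald sample size
N <- ceiling(n / (u_a + u_b + u_c + u_d))      # source population for ~100 days
N                                              # about 277 people for this scenario
```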

The TND does not require a fixed ratio of test-negative controls to test-positive cases, so we stop counting events when the total number of tests reaches the desired sample size. The case–control data have a fixed case fraction $\pi$, so we stop counting test-positive events when the number of positive tests reaches $n\pi$; then $n(1-\pi)$ test-negative controls are randomly selected from all test-negative events in the population. We also assume 100% sensitivity and 100% specificity of the diagnostic testing. Each scenario is run for 100,000 iterations. Simulations are performed using R (R Core Team, 2019).
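The following R sketch (our own simplified code, not the authors' simulation script) generates one TND-style dataset under these assumptions; it tabulates the full source population over τ rather than applying the stopping rules described above, and it draws the number of negative tests as Poisson with mean Λ_N(τ), an assumption consistent with the constant hazard but not stated explicitly in the text:

```r
set.seed(1)
N <- 277; p_N <- 0.30; VE <- 0.95; tau <- 100
lambda_I <- 0.001; lambda_N <- 0.002

# Exactly round(N * p_N) individuals are vaccinated; all-or-none protection
n_vacc     <- round(N * p_N)
vaccinated <- sample(c(rep(1, n_vacc), rep(0, N - n_vacc)))
protected  <- vaccinated * rbinom(N, 1, VE)

# At most one positive test per person; fully protected people never test positive
pos_tests <- (1 - protected) * rbinom(N, 1, 1 - exp(-lambda_I * tau))
# Negative tests are recurrent, unaffected by vaccination, and capped at three
neg_tests <- pmin(rpois(N, lambda_N * tau), 3)

# 2 x 2 cell counts (a, b, c, d) as in Table 1
tab <- rbind(
  vaccinated   = c(sum(pos_tests[vaccinated == 1]), sum(neg_tests[vaccinated == 1])),
  unvaccinated = c(sum(pos_tests[vaccinated == 0]), sum(neg_tests[vaccinated == 0]))
)
colnames(tab) <- c("test_positive", "test_negative")
tab
```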

Results

Comparison between the test-negative design data and case–control data

Our simulation results allow us to compare the characteristics of the data generated by a TND and by a comparable case-control study with the same VE, vaccination coverage, and expected ratio of cases to controls. In Fig. 1, we compare the distributions of the four cell counts across the two designs in a setting with 95% VE. The most notable difference was that the distribution of the unvaccinated test positive cases (panel c) had far lower variability in the case-control study. This occurs because the total number of test positives is constrained by design in the case-control study. In contrast, in the TND, only the total number of tests was fixed, yielding greater variability in the individual cell counts. Differences are also visible for panel d, again reflecting the constrained column margins in the case-control study.

Fig. 1.

Fig. 1

Density of cell counts in the case–control study (red) and test-negative design (green) for 95% VE and 30% vaccine coverage pN, with a total sample size of 63. Panel (a) is presented in parallel position and panels (b)–(d) are presented in identity position

Because the standard Wald test is intractable when a zero is present in the cell counts, we compared the frequency of observing zero vaccinated test-positive cases across case–control and TND studies (Table 2). These can be very common for both study types when VE is high and vaccination coverage in the population is low. Overall, we noticed minimal differences in the frequency of zeros between the two designs, although in general more zeros are observed in the case–control study as compared to the TND, particularly when vaccine coverage is low. Thus, both designs are prone to intractability if a standard Wald test is applied.

Table 2.

Percentage of iterations with zero vaccinated test-positive cases among 100,000 iterations, for 95% VE

Vaccine coverage (pN)    10%    30%    50%    70%    90%
CCT                      68%    67%    64%    57%    22%
TND                      61%    61%    60%    55%    24%

CCT case–control study, TND test-negative design

Adding continuity correction to the Wald test

Moreover, we found that adding continuity corrections to the Wald test stabilized the variance but induced bias in the point estimate. In Fig. 2, we scanned the continuity correction from 0 to 2 and evaluated the bias and variance of the log odds ratio. The black line indicates the mean bias of the log odds ratio across the 100,000 iterations, and the blue line is the standard error of the log odds ratio estimates across those iterations. When no continuity correction was added, both the bias and the variance were intractable because zero cell counts occurred. As the continuity correction increased, the estimated variance stabilized while the bias increased. Even with the widely used Yates' correction of adding 0.5 to each cell count, the bias was around 0.5.

Fig. 2.

Fig. 2

Bias and standard error of log odds ratio for various continuity corrections for 30% vaccine coverage pN and 95% VE with naïve Wald sample size 74 in the test-negative design

Power performance of the three testing approaches

To broadly compare the three testing approaches and two study designs, we calculated simulated power for vaccination coverage pN ranging from 10% to 90%, all assuming 95% VE. For each vaccination coverage level, we calculated the sample size using the Wald formula to achieve 80% power. These sample sizes ranged from n = 230 for 10% vaccine coverage down to n = 37 for 70% vaccine coverage, with the minimum at 70% coverage (supplement). We analyzed the data using the three tests, substituting a continuity-corrected version of the Wald test where the standard Wald test is intractable. The results are shown in Table 3. For both case–control studies and TNDs, the two types of Wald tests failed to achieve the desired 80% power, with some exceptions when vaccine coverage was 90%. Comparing the three tests, we found that the score test performed the best across all scenarios and had the most stable performance; recall that the score statistic remains tractable when zero cell counts occur. Type I errors for the three tests were well controlled (Table 4). Comparing the case–control study and the TND under equivalent settings, we typically observed lower power for the TND.

Table 3.

Simulated power under the naïve Wald sample size nw for 95% VE at desired power 80% for the three tests

Tests             Studies   Vaccine coverage (pN)
                            10%        30%       50%       70%       90%
                            (nw=228)   (nw=74)   (nw=45)   (nw=37)   (nw=54)
Naïve Wald test   CCT       0.40       0.43      0.49      0.59      0.84
                  TND       0.42       0.40      0.42      0.46      0.73
Wald test w. cc   CCT       0.42       0.47      0.61      0.74      0.89
                  TND       0.37       0.40      0.45      0.52      0.77
Score test        CCT       0.88       0.86      0.83      0.83      0.86
                  TND       0.86       0.83      0.79      0.76      0.82

Wald test w. cc Wald test with continuity correction, CCT case–control study, TND test-negative design

Table 4.

Type I errors for the case–control study (CCT) and test-negative design (TND) across 10% to 90% vaccine coverage, with a sample size of 100. The target type I error rate is 0.025. Wald test w. cc: the Wald test with Yates' correction

Tests             Studies   Vaccine coverage (pN)
                            10%     30%     50%     70%     90%
Naïve Wald test   CCT       0.001   0.020   0.025   0.026   0.023
                  TND       0.015   0.016   0.022   0.022   0.018
Wald test w. cc   CCT       0.001   0.018   0.025   0.025   0.024
                  TND       0.001   0.014   0.021   0.022   0.019
Score test        CCT       0.015   0.023   0.025   0.025   0.028
                  TND       0.015   0.019   0.023   0.023   0.025

Next, we compared the sample size calculation methods corresponding to each of the three tests. From Fig. 3, we observe that adding the continuity correction increased the Wald sample size by 20% to 50% for 95% VE. The score sample size was the smallest across all scenarios, although the standard Wald sample size is similar to the score sample size across the 10%–90% vaccine coverage range. The required sample size is largest for low vaccine coverage and smallest for 70% vaccine coverage. As vaccine coverage increases further toward 90%, the sample size increases again; this reflects greater sparsity in the unvaccinated cells of the table.

Fig. 3.

Fig. 3

Standard Wald sample size (green), Wald sample size with continuity correction (blue) and score sample size (red) vary with 10%-90% vaccine coverage for 95% vaccine effectiveness

Next, we considered the simulated power for three scenarios: (i) the standard Wald test, substituting the continuity-corrected version when there are zero vaccinated test positives, paired with the standard Wald sample size; (ii) the Wald test with continuity correction paired with the continuity-corrected Wald sample size; and (iii) the score test paired with the score sample size. The standard Wald test alone was not evaluated since it is frequently intractable.

Starting with the standard Wald sample size and test, Fig. 4 shows very low power for both case–control and test-negative design studies when VE is high, especially for low vaccine coverage, indicating that the standard Wald sample size is insufficient. For the continuity-corrected Wald sample size and test, Fig. 5 shows low power for both types of studies when VE is high and vaccination coverage is low, but high power (above the targeted 80%) when both VE and vaccination coverage are high; this indicates that the sample size is insufficient for low vaccine coverage but conservative for high vaccine coverage. For the score sample size and test, Fig. 6 shows that power was maintained around the desired level.

Fig. 4.

Fig. 4

Simulated power for the case-control study (red) and the test-negative design (green): the standard Wald test (with continuity correction applied when there are zero vaccinated test positives) under the standard Wald sample size. x axis: vaccine effectiveness, y axis: simulated power. Vaccine coverage pN varies from 10 to 90% for different panels. Desired power is 80%

Fig. 5.

Fig. 5

Simulated power of the Wald test with continuity correction with Wald sample size adding continuity correction for case control (red) and test-negative design (green). x axis: vaccine effectiveness, y axis: simulated power. Vaccine coverage pN varies from 10 to 90% for different panels. Desired power is 80%

Fig. 6.

Fig. 6

Simulated power of score test with score sample size for case control (red) and test-negative design (green). x axis: vaccine effectiveness, y axis: simulated power. Vaccine coverage pN varies from 10 to 90% for different panels. Desired power is 80%

In some of the scenarios where VE is high (90% and 95%), we observe lower power for the TND even though power was sufficient for the case-control study. To explore the reason for this discrepancy, we studied the estimated variance of the score as a function of the total number of test-positives (Fig. 7). Recall that the total number of test positives (a+c) is fixed by design in the case-control study but varies for the TND. When the total number of test positives in the TND is similar to the fixed value for the case-control study (shown in red), both designs have similar variability in the score test statistics. Yet when the total number of test positives is higher than expected, the TND score statistic has greater variance, and when the total number of test positives is lower than expected, the TND score statistic has lower variance. Thus, there is overall higher variability in the score statistic of the TND than in the case-control study, which is not reflected in the sample size calculations based on the case-control design.

Fig. 7.

Fig. 7

Distribution of score test statistics for 30% vaccine coverage pN, 95% VE for the case–control study (red) and the test-negative design (green). Brown dashed line indicates the critical value for the test statistic at the 0.025 significance level

Proposed TND score sample size and power performance for high vaccine effectiveness

Table 5 illustrates the proposed score sample size and the case-control score sample size for 90% and 95% vaccine effectiveness. The proposed score sample size was larger than the case-control sample size across vaccine coverage levels for high VE, since it accounts for the additional variability in the TND.

Table 5.

Proposed TND score sample size (np) and the case–control score sample size (ns) for 90% and 95% vaccine effectiveness (VE) across 10%-90% vaccine coverage (pN) with 0.025 type I error and 80% desired power

VE     Vaccine coverage (pN)
       10%         30%        50%        70%        90%
       ns    np    ns   np    ns   np    ns   np    ns   np
90%    225   268   81   91    55   60    50   54    82   95
95%    179   228   63   75    46   46    36   39    53   62

Table 6 shows that the simulated power under the proposed sample size improved compared with that under the case-control score sample size across different vaccine coverages for 90% and 95% VE. The proposed sample size tended to be conservative, especially for low vaccine coverages.

Table 6.

Simulated power of the score test for the test-negative design under the proposed TND score sample size (np) and the case–control score sample size (ns) in Table 5. Desired power is 80%

VE     Vaccine coverage (pN)
       10%           30%           50%           70%           90%
       ns     np     ns     np     ns     np     ns     np     ns     np
90%    0.76   0.88   0.75   0.86   0.78   0.84   0.78   0.83   0.79   0.84
95%    0.69   0.90   0.72   0.85   0.73   0.82   0.75   0.82   0.79   0.83

Discussion

We examined properties of the TND in comparison to a standard case-control study, with a focus on hypothesis testing and sample size calculation. We considered two Wald-based methods and a score-based method. For hypothesis testing, a key disadvantage of the Wald test is that it can be intractable for high VE because of sparsity in the number of vaccinated test positives. Adding continuity corrections to the Wald test enabled estimation but induced bias. For both the TND and case-control study, the score test was more robust across settings, particularly for high VE. Thus, we recommend score-based approaches for testing the vaccine effect in the logistic regression model. The score test can be readily computed using standard statistical software, and it would represent an improvement over Wald-based approaches, which are common in the TND literature [22, 23].

With respect to sample size calculation methods, we recommend a score-based approach adapted from the case–control literature. When accompanied by score-based testing, we found this approach to be the most robust at maintaining the desired power. We detected a slight reduction in power for the score-based sample size in settings with high VE and low vaccination coverage. This reduction in power was more pronounced for the TND when compared to a traditional case–control study. While the ratio of cases to controls is constrained in case–control studies, this ratio is itself a random variable in TNDs. This is due to the TND's passive sampling scheme, where patient enrollment relies on health-care-seeking behavior and is not controlled by the investigators [10, 40]. With too few test positives captured, the score test statistic is closer to the null value. We proposed a modified score sample size strategy for high vaccine effectiveness to account for the additional variability of this ratio, with the variance calculated under a multinomial distribution. This approach enhances the power performance but provides conservative sample sizes. This work indicates that sample size calculation methods based on case–control designs have limitations when applied to TNDs and so should not be used uncritically. In this setting, study planning with simulation is another valuable tool.

The additional variability in the column margin of the contingency table means that the TND cell counts follow a multinomial distribution rather than the binomial distribution, with one-way variability, of the case–control data. Therefore, the likelihood underlying the logistic regression cannot fully describe the variance of the vaccination proportions among test positives and test negatives, especially for high vaccine effectiveness and low vaccine coverage (few vaccinated test positives). With the distribution-based variance, the proposed sample size tends to yield power higher than desired. An alternative approach not considered here is to derive a score test sample size from a regression linked to the multinomial distribution.

The work has several limitations. We considered a simplified scenario with constant vaccine coverage, constant VE, and constant disease hazards over time. We did not consider patterns of health-care seeking among the source population; the study population we considered is the population who would seek care if sick. Investigators would need to account for the fraction seeking health care if health-care-seeking behavior varies by vaccination status [38], but the testing strategies and power calculations are similar. We also assumed that the diagnostic test has perfect sensitivity and specificity [41]. Furthermore, we did not consider confounders, such as age or risk status, that are commonly included in TND analyses. We simplified the scenario to focus attention on sample size calculations, which are frequently conducted using a variety of simplifying assumptions. Nonetheless, we expect the central points, that sparsity at high VE causes a breakdown in the analysis and that added variability in the ratio of positives to negatives matters, to carry forward into more complex settings. Other analytical methods, such as exact methods [29, 30] and Bayesian methods [42–44], are also discussed in the literature but not used here. Exact methods are usually applied in sparse data settings; more importantly, they lack a closed-form solution for sample size calculations, which is our focus. Prior research has demonstrated that both the unconditional and conditional exact tests always control the type I error, and that the unconditional exact test is more powerful at the price of a higher computational burden [29, 30]. Furthermore, while exact methods are useful for 2 × 2 calculations, model-based methods will be required for analyzing test-negative data in real-world applications, where adjustment for confounders is needed. The continuity-corrected Wald test also has a link to Bayesian methods, with the added cell counts akin to a non-informative prior. Considering the distribution of the test-negative design data, a score test approach based on the multinomial distribution could be an alternative for sample size determination. Finally, we have framed the problem as a hypothesis test to assess whether VE > 0% or relative VE > 0% (in the case of a head-to-head comparison or vaccine waning). Investigators may prefer to test a different null hypothesis or seek a desired precision for the point estimate. This would require further modification.

The TND is a relatively new observational study design that is rapidly growing in popularity. Though it is in many ways similar to case–control studies, it has distinct features resulting from how cases and controls are passively sampled [40]. The convenience sampling induces extra variability in the number of test positives and the number of test negatives. Unlike in a case-control study, the ratio of cases to controls is not governed by the investigator, which can result in imbalances, with many more cases than expected or many more controls. In practice, while it may be difficult at the outset of a TND study to predict the number of tests that will accrue and their positivity, the approaches described here can help investigators assess the potential power of their study and can inform planning decisions such as the number of sites to include and patient eligibility criteria. Based on our examination, we recommend using the score test and score sample size under the case–control framework to design the study. A modification of the score sample size was proposed to account for the additional variability in the fraction of cases among all tests. This work expands our understanding of the data features of the TND relative to a case–control design, bridging gaps in design approaches for the TND.


Authors’ contributions

Y.H. served as the main author, orchestrating the simulation studies and crafting both the methods and results sections. NE.D., the corresponding author, focused on refining the introduction and performing comprehensive manuscript revisions. Y.Y., ME.H., and IM.L. contributed through critical reviews and commentary, enhancing the manuscript’s overall quality. All authors have reviewed the final version of the manuscript and consented to its submission.

Funding

This research was financially funded by NIH/NIAID R01-AI139761. The funding bodies played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.

Availability of data and materials

The research described in this manuscript did not involve the use of any real data or materials.

Declarations

Ethics approval and consent to participate

This study did not involve human participants, human data, or human tissue; therefore, no ethical approval or consent to participate was necessary.

Consent for publication

The authors grant their consent for the publication of this manuscript.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. De Serres G, Skowronski DM, Wu XW, Ambrose CS. The test-negative design: validity, accuracy and precision of vaccine efficacy estimates compared to the gold standard of randomised placebo-controlled clinical trials. Euro Surveill. 2013;18(37):1–9. doi: 10.2807/1560-7917.es2013.18.37.20585.
2. Jackson ML, Nelson JC. The test-negative design for estimating influenza vaccine effectiveness. Vaccine. 2013;31(17):2165–2168. doi: 10.1016/j.vaccine.2013.02.053.
3. Schwartz LM, Halloran ME, Rowhani-Rahbar A, Neuzil KM, Victor JC. Rotavirus vaccine effectiveness in low-income settings: an evaluation of the test-negative design. Vaccine. 2017;35(1):184–190. doi: 10.1016/j.vaccine.2016.10.077.
4. Azman AS, Parker LA, Rumunu J, et al. Effectiveness of one dose of oral cholera vaccine in response to an outbreak: a case-cohort study. Lancet Glob Health. 2016;4:e856–e863. doi: 10.1016/S2214-109X(16)30211-X.
5. Chua H, Feng S, Lewnard JA, et al. The use of test-negative controls to monitor vaccine effectiveness: a systematic review of methodology. Epidemiology. 2020;31(1):43–64. doi: 10.1097/EDE.0000000000001116.
6. Sullivan SG, Feng S, Cowling BJ. Influenza vaccine effectiveness: potential of the test-negative design. A systematic review. Expert Rev Vaccines. 2014;13(12):1571–1591. doi: 10.1586/14760584.2014.966695.
7. Bernal JL, Andrews N, Gower C, et al. Early effectiveness of COVID-19 vaccination with BNT162b2 mRNA vaccine and ChAdOx1 adenovirus vector vaccine on symptomatic disease, hospitalisations and mortality in older adults in England. medRxiv. Published online March 2, 2021. doi: 10.1101/2021.03.01.21252652.
8. Broome CV, Facklam RR, Fraser DW. Pneumococcal disease after pneumococcal vaccination. N Engl J Med. 1980;303(10):549–552. doi: 10.1056/NEJM198009043031003.
9. Sullivan SG, Tchetgen EJT, Cowling BJ. Theoretical basis of the test-negative study design for assessment of influenza vaccine effectiveness. Am J Epidemiol. 2016;184(5):345–353. doi: 10.1093/aje/kww064.
10. Dean NE, Hogan JW, Schnitzer ME. Covid-19 vaccine effectiveness and the test-negative design. N Engl J Med. 2021;385:1431–1433. doi: 10.1056/NEJMe2113151.
11. Cheng AC, Holmes M, Irving LB, et al. Influenza vaccine effectiveness against hospitalisation with confirmed influenza in the 2010–11 seasons: a test-negative observational study. PLoS One. 2013;8(7):1–8. doi: 10.1371/journal.pone.0068760.
12. Bateman AC, Kieke BA, Irving SA, Meece JK, Shay DK, Belongia EA. Effectiveness of monovalent 2009 pandemic influenza A virus subtype H1N1 and 2010–2011 trivalent inactivated influenza vaccines in Wisconsin during the 2010–2011 influenza season. Published online 2013. doi: 10.1093/infdis/jit020.
13. Anders KL, Cutcher Z, Kleinschmidt I, et al. Cluster-randomized test-negative design trials: a novel and efficient method to assess the efficacy of community-level dengue interventions. Am J Epidemiol. 2018;187(9):2021–2028. doi: 10.1093/aje/kwy099.
14. Helmeke C, Gräfe L, Irmscher HM, Gottschalk C, Karagiannis I, Oppermann H. Effectiveness of the 2012/13 trivalent live and inactivated influenza vaccines in children and adolescents in Saxony-Anhalt, Germany: a test-negative case-control study. PLoS One. 2015;10(4):1–10. doi: 10.1371/journal.pone.0122910.
15. Griffin MR, Monto AS, Belongia EA, Treanor JJ, Chen Q. Effectiveness of non-adjuvanted pandemic influenza a vaccines for preventing pandemic influenza acute respiratory illness visits in 4 U. PLoS One. 2011;6(8):e23085. doi: 10.1371/journal.pone.0023085.
16. Eisenberg KW, Szilagyi PG, Fairbrother G, et al. Vaccine effectiveness against laboratory-confirmed influenza in children 6 to 59 months of age during the 2003–2004 and 2004–2005 influenza seasons. Pediatrics. 2008;122(5):911–919. doi: 10.1542/peds.2007-3304.
17. Cowling BJ, Chan KH, Feng S, et al. The effectiveness of influenza vaccination in preventing hospitalizations in children in Hong Kong, 2009–2013. Vaccine. 2014;32(41):5278–5284. doi: 10.1016/j.vaccine.2014.07.084.
18. Wang Y, Zhang T, Chen L, et al. Seasonal influenza vaccine effectiveness against medically attended influenza illness among children aged 6–59 months, October 2011–September 2012: a matched test-negative case-control study in Suzhou, China. Vaccine. 2016;34(21):2460–2465. doi: 10.1016/j.vaccine.2016.03.056.
19. Cheng AC, Kotsimbos T, Kelly HA, et al. Effectiveness of H1N1/09 monovalent and trivalent influenza vaccines against hospitalization with laboratory-confirmed H1N1/09 influenza in Australia: a test-negative case control study. Vaccine. 2011;29(43):7320–7325. doi: 10.1016/j.vaccine.2011.07.087.
20. Belongia EA, Simpson MD, King JP, et al. Variable influenza vaccine effectiveness by subtype: a systematic review and meta-analysis of test-negative design studies. Lancet Infect Dis. 2016;16(8):942–951. doi: 10.1016/S1473-3099(16)00129-8.
21. Feng S, Cowling BJ, Kelly H, Sullivan SG. Estimating influenza vaccine effectiveness with the test-negative design using alternative control groups: a systematic review and meta-analysis. Am J Epidemiol. 2017;187(2):389–397. doi: 10.1093/aje/kwx251.
22. The FREQ Procedure. SAS documentation. https://documentation.sas.com/?cdcId=pgmsascdc&cdcVersion=9.4_3.5&docsetId=procstat&docsetTarget=procstat_freq_examples05.htm&locale=en.
23. Aragon TJ, Fay MP, Wollschlaeger D, Omidpanah A. R package epitools. Published 2020. https://cran.r-project.org/web/packages/epitools/epitools.pdf.
24. Rosenberg ES, Dorabawila V, Easton D, et al. Covid-19 vaccine effectiveness in New York State. N Engl J Med. 2022;386:116–127. doi: 10.1056/NEJMoa2116063.
25. Spinner C, Ding L, David IB, Bernstein DI, et al. Human papillomavirus vaccine effectiveness and herd protection in young women. Pediatrics. 2019;143(2):e20181902. doi: 10.1542/peds.2018-1902.
26. Yates F. Contingency tables involving small numbers and the χ2 test. J R Stat Soc. 1934;1(2):217–235. doi: 10.2307/2983604.
27. Haviland MG. Yates's correction for continuity and the analysis of 2 × 2 contingency tables. Stat Med. 1990;9(4):363–367. doi: 10.1002/sim.4780090403.
28. Rückinger S, van der Linden M, Reinert RR, von Kries R. Efficacy of 7-valent pneumococcal conjugate vaccination in Germany: an analysis using the indirect cohort method. Vaccine. 2010;28(31):5012–5016. doi: 10.1016/j.vaccine.2010.05.021.
29. Storer BE, Kim C. Exact properties of some exact test statistics for comparing two binomial proportions. J Am Stat Assoc. 1990;85(409):146–155. doi: 10.1080/01621459.1990.10475318.
30. Kang SH, Ahn CW. Tests for the homogeneity of two binomial proportions in extremely unbalanced 2 × 2 contingency tables. Stat Med. 2008;27(14):2524–2535. doi: 10.1002/sim.3055.
31. Agresti A. An introduction to categorical data analysis. New York: John Wiley and Sons; 1996.
32. Breslow NE, Day NE. Statistical methods in cancer research. Volume II: the design and analysis of cohort studies. IARC Sci Publ; 1987.
33. Fleiss JL, Levin B, Paik M. Statistical methods for rates and proportions. Third edition. 1981. doi: 10.1002/0471445428.ch18.
34. Casagrande JT, Pike MC, Smith PG. An improved approximate formula for calculating sample sizes for comparing two binomial distributions. Biometrics. 1978;34(3):483–486. doi: 10.2307/2530613.
35. Borgan O, Breslow N, Chatterjee N, Gail MH, Scott A, Wild CJ. Handbook of statistical methods for case-control studies. 1st ed. Chapman and Hall/CRC. doi: 10.1201/9781315154084.
36. Agresti A. An introduction to categorical data analysis. 2nd ed. John Wiley and Sons, Inc; 2007.
37. Emura T, Liao YT. Critical review and comparison of continuity correction methods: the normal approximation to the binomial distribution. Commun Stat Simul Comput. 2017;47(8):2266–2285. doi: 10.1080/03610918.2017.1341527.
38. Dean NE, Halloran ME, Longini IM Jr. Temporal confounding in the test-negative design. Am J Epidemiol. 2020;189(11):1402–1407. doi: 10.1093/aje/kwaa084.
39. Halloran ME, Longini IM Jr, Struchiner CJ. Design and analysis of vaccine studies. Springer; 2010. doi: 10.1007/978-0-387-68636-3.
40. Foppa IM, Haber M, Ferdinands JM, Shay DK. The case test-negative design for studies of the effectiveness of influenza vaccine. Vaccine. 2013;31(30):3104–3109. doi: 10.1016/j.vaccine.2013.04.026.
41. Orenstein EW, De Serres G, Haber MJ, et al. Methodologic issues regarding the use of three observational study designs to assess influenza vaccine effectiveness. Int J Epidemiol. 2007;36(3):623–631. doi: 10.1093/ije/dym021.
42. Shield WS, Heeler R. Analysis of contingency tables with sparse values. J Mark Res. 1979;16(3):382–386. doi: 10.1177/002224377901600310.
43. Guo SW, Thompson EA. Analysis of sparse contingency tables: Monte Carlo estimation of exact P-values. University of Washington, Department of Statistics; 1989. https://stat.uw.edu/research/preprints/tech-report/analysis-sparse-contingency-tables-monte-carlo-estimation-exact-p.
44. Baglivo J, Oliver D, Pagano M. Methods for the analysis of contingency tables with large and small cell counts. J Am Stat Assoc. 1988;83(404):1006–1013. doi: 10.1080/01621459.1988.10478692.
