Cancer Informatics. Editorial. 2021 Jan 5;20:1176935120985132. doi: 10.1177/1176935120985132

Goldilocks Rounding: Achieving Balance Between Accuracy and Parsimony in the Reporting of Relative Effect Estimates

Jimmy T Efird
PMCID: PMC7791303  PMID: 33456306

Abstract

Researchers often report a measure to several more decimal places than is sensible or realistic. Rounding involves replacing a number with a value of lesser accuracy while minimizing the practical loss of validity. This practice is generally acceptable to simplify data presentation and to facilitate the communication and comparison of research results. Rounding also may reduce spurious accuracy when the extraneous digits are not justified by the exactness of the recording instrument or data collection procedure. However, substituting a more explicit or simpler representation for an original measure may not be practicable or acceptable if an adequate degree of accuracy is not retained. The error introduced by rounding exact numbers may result in misleading conclusions and misinterpretation of study findings. For example, rounding the upper confidence interval for a relative effect estimate of 0.996 to 2 decimal places may obscure the statistical significance of the result. When presenting the findings of a study, authors need to be careful that they do not report numbers that contain too few significant digits. Equally important, they should avoid providing more significant figures than are warranted to convey the underlying meaning of the result.

Keywords: Accuracy, numeric representation error, precision, relative effect estimates, rounding error

Introduction

Rounding is a practical tool for simplifying scientific expressions but in some cases can yield seemingly illogical or counterintuitive results. Let (γ) denote a positive but very small number and consider the example where

a = (b + γ) = 5.0000001,  γ = 0.0000001  (1)

Here, (γ) is negligible (near zero) and (a) is close to but not exactly equal to either 5 or (b) (ie, within rounding error). That is, for any non-negligible positive real value of (a) and (b), we have

a = (b + γ), ∃ γ: γ ≪ 1; a, b, γ > 0, |a − b| > 0  (2)
a² = a(b + γ)  (3)
a² = ab + aγ  (4)
(a² − b²) = (ab − b²) + aγ  (5)

As (γ) is very small, we round the right-hand side of equation (5) to (ab − b²).

(a² − b²) ≅ (ab − b²)  (6)
(a + b)(a − b) ≅ b(a − b)  (7)
(a + b) ≅ b  (8)
(b + γ + b) ≅ b  (9)
(2b + γ) ≅ b  (10)

Again, as (γ) is very small, we round the left-hand side of equation (10) to (2b).

(2b) ≅ b  (11)
2 ≅ 1.  Q.E.D.  (12)

By disregarding rounding error, this proof allows for division by zero, yielding the untenable result of 2 ≅ 1. In real-world applications of this concept, numbers are frequently truncated (or rounded) because of floating-point library limitations inherent to the operating system. This occurs, for example, in the field of cancer informatics when computing odds ratios (ORs) and statistical significance for thousands of single-nucleotide polymorphisms (SNPs) in a genome-wide association study (GWAS). Such errors can accumulate and significantly affect computational results and accuracy.
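To make the accumulation concrete, the short Python sketch below (my own illustration, not taken from the article; the simulated odds ratios and the two-decimal rounding are illustrative assumptions) pools the log odds ratios of 10 000 hypothetical SNPs, once at full double precision and once after rounding each OR to 2 decimal places first, and reports the discrepancy that builds up.

```python
# Illustrative sketch (assumed setup, not the article's data): accumulation of
# per-SNP representation error when many rounded quantities are combined.
import math
import random

random.seed(1)
odds_ratios = [math.exp(random.gauss(0.0, 0.05)) for _ in range(10_000)]  # hypothetical ORs near 1

exact = sum(math.log(orr) for orr in odds_ratios)              # full double precision
rounded = sum(math.log(round(orr, 2)) for orr in odds_ratios)  # each OR rounded to 2 decimals first

print(f"pooled log-OR, full precision : {exact:.6f}")
print(f"pooled log-OR, rounded inputs : {rounded:.6f}")
print(f"accumulated discrepancy       : {abs(exact - rounded):.6f}")
```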

Conversely, rounding often is used intentionally to aid the parsimonious presentation of data. This helps to simplify ideas without unnecessary complexity. However, as illustrated above, rounding error cannot simply be overlooked, especially in applications where the loss of accuracy may be additive or multiplicative in nature. In this article, technical guidance and intuitive examples are provided to highlight the use and misuse of rounding, focusing on the computation and presentation of relative effect estimates (REEs).

Relative Effect Estimates and Confidence Intervals

The consequences of rounding can have important implications for how the results of an analysis are interpreted. Inevitably, rounding leads to less accurate (closeness to the truth) and less precise (repeatability) results when reporting parameter estimates. In turn, this loss is gauged against parsimony and the practical aspects of not reporting findings beyond what is necessary to convey the underlying meaning. Accordingly, rounding should be “not too much and not too little.”1 One must also be attentive to rounding away digits beyond what is warranted by the process or device used to measure the value, to avoid unfounded accuracy in the presentation of results.2

When measurements are taken on the linear scale, rounding to a fixed number of decimal places is a common and reasonable approach. However, the process becomes more complex for REEs such as ORs, hazard ratios (HRs), and relative risks (RRs), which are expressed on a logarithmic scale. In contrast to a linear scale in which values may range from −∞ to +∞, logarithmic values are bounded by zero and infinity. In addition, REEs are centered at unity instead of zero.3 The value 1.0 denotes the point of a null effect, with REEs > 1.0 indicating an increasing occurrence of the event (or outcome) under consideration while those below unity, a decreasing occurrence. On a linear scale, the value 5.0 is equal in magnitude to the opposite effect of –5.0. This differs from 5.0 on the logarithmic scale, which is equal in magnitude to the opposite effect of .20 (ie, the inverse of 5.0). In general, the impact of rounding on REEs is greater for small versus large ratios. As such, REEs and corresponding confidence intervals (CIs) should not be rounded to a fixed number of decimal places but rather based on the number of significant digits necessary to determine the statistical significance of the result.

A result is deemed to be statistically significant when the P value is less than or equal to (α), defined as the probability of falsely rejecting the null hypothesis (ie, no effect difference between the study and comparison populations). A (1 − α)100% CI for a REE also may be used to declare statistical significance at the α-level if the interval does not span unity. Because REEs (and CIs) are reported on the logarithmic scale, rounding these estimates relative to their distance (eg, number of significant digits) from unity is becoming increasingly popular among researchers to help determine statistical significance. For example, when there are contiguous 0’s or 9’s immediately to the right of the decimal place (for values < 1.0), the significant digits are counted from the right to left relative to unity. That is, for values to the left of unity, digits up to but not including the value immediately following the 0’s or 9’s, respectively, are disregarded. Likewise, significant digits are counted from left to right for values above unity, with zeros immediately adjacent to 1.0 not considered significant digits. Contiguous 0’s to the right of the decimal for P values are disregarded, as are contiguous zeros following .05 (assuming α = .05), until the next encountered significant digit. By convention, P values less than .0001 are reported as <.0001, and otherwise to no more than 4 decimal places (eg, P = .00018 is rounded to .0002). As a general rule, it is unnecessary to include a zero before the decimal place of a P value (or positive rational number less than unity), as this value is not considered a significant digit.2 Optionally, when presenting rounded results with a varying number of decimal places in tabular form, aligning the values by the decimal point can help visualize differences in the number of decimal places.1
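A rough Python sketch of this unity-relative counting is given below. It is my own illustrative implementation, not code from the article: confidence limits that share leading digits with 1.000…/0.999… are rounded at the first digit that differs from unity, values farther from unity are rounded to 2 significant figures, and ratios of 10 or more are rounded to whole numbers. The article additionally applies case-by-case judgment (eg, keeping an extra digit when the rounded limit would be ambiguous about significance, as discussed for Table 1 below), which this sketch does not attempt to reproduce.

```python
# A rough sketch of unity-relative rounding (an assumption on my part, not the author's code).
from decimal import Decimal, ROUND_HALF_UP


def goldilocks_round(x: float, sig: int = 2) -> str:
    """Round a positive relative effect estimate or confidence limit (sketch)."""
    if x <= 0:
        raise ValueError("relative effect estimates must be positive")
    d = Decimal(str(x))
    if x >= 10:                                    # large ratios: whole numbers
        return str(d.quantize(Decimal("1"), rounding=ROUND_HALF_UP))
    frac = f"{d:.10f}".split(".")[1]               # digits right of the decimal point
    if 0.9 < x < 1.1 and frac[0] in "09":          # near unity: skip the 0/9 placeholder run
        run = len(frac) - len(frac.lstrip(frac[0]))
        places = min(run, 8) + 1                   # round at the first digit that differs from 1
    elif x >= 1:                                   # eg, 1.67176 -> 1.7
        places = sig - 1
    else:                                          # eg, 0.70156 -> 0.70, 0.04068 -> 0.041
        places = sig + (len(frac) - len(frac.lstrip("0")))
    return str(d.quantize(Decimal(1).scaleb(-places), rounding=ROUND_HALF_UP))


print(goldilocks_round(1.00266), goldilocks_round(0.99743), goldilocks_round(14.92738))
# -> 1.003 0.997 15  (compare with Examples #1, #3, and #6 in Table 1 below)
```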

In Table 1, a few representative examples are provided to compare REEs (and CIs) before and after rounding to 2 significant digits using the above-mentioned strategy. The underlined values (vinculum) in the table denote the respective significant digits of rounding. When the point estimate for the REE is greater than 1.0 but less than 1.05, some researchers round the value to 1.0 and omit the second significant digit. However, this oversimplification can result in the estimate falling outside the corresponding CI (Example #12). Similarly, in Example #10, the HR is rounded to 1.07 instead of 1.1 so that the rounded value is contained within the CI (rather than being equal to the upper limit). To avoid ambiguity, the upper confidence limit (UCL) for Example #13 is rounded to .9989 instead of .999 (wherein the actual value could be as low as .9985 or as high as .9999 if the value was truncated to .999). The P value in Example #15 is rounded to .0502 and is consistent with its CI that spans unity (ie, statistically nonsignificant, given α = .05). In Examples #4 and #6, the trailing zeros in the P value are retained to convey the exactness of rounding to 2 significant digits versus values that may have been rounded to a single significant digit. This is also the case for the trailing zero in the REE (ϕ) for Examples #4 and #15. Numbers greater than 10 generally are rounded to a whole integer (Example #6), except when the application merits otherwise.

Table 1.

Relative effect estimates before and after rounding to 2 significant digits.

# Before rounding After rounding
φ 95% LCL 95% UCL P value φ 95% LCL 95% UCL P value^a
1 1.67176 1.00266 2.78737 .04882 1.7 1.003 2.8 .049
2 1.32054 .70156 2.48561 .38891 1.3 .70 2.5 .39
3 1.46265 .99743 2.14487 .05156 1.5 .997 2.1 .052
4 .79574 .24679 2.56571 .70208 .80 .25 2.6 .70
5 1.83351 1.38517 2.42697 .00002 1.8 1.4 2.4 <.0001
6 1.99294 .26608 14.92738 .50206 2.0 .27 15 .50
7 1.33846 1.05598 1.69650 .01594 1.3 1.1 1.7 .016
8 3.12800 1.61069 6.07466 .00076 3.1 1.6 6.1 .0008
9 2.66667 1.35881 5.23333 .00435 2.7 1.4 5.2 .0044
10 1.07053 1.00020 1.14580 .04933 1.07 1.0002 1.1 .049
11 .93412 .87275 .99980 .04933 .93 .87 .9998 .049
12 1.01315 1.00410 1.02227 .00431 1.01 1.004 1.02 .0043
13 .73986 .54800 .99889 .04916 .74 .55 .9989 .049
14 1.00800 .99961 1.01650 .06159 1.01 .9996 1.02 .062
15 .79725 .63555 1.00018 .05018 .80 .64 1.0002 .0502

Abbreviations: LCL, lower confidence limit; UCL, upper confidence limit; φ, relative effect estimate.

^a Based on rounding the original values rather than using a conversion formula applied to the rounded estimates.

Rounding allows for the presentation of numbers with a higher degree of decimal place accuracy in a more succinct format while still retaining the true meaning of the value. This is often done to facilitate the formatting of tables, enabling the maximum content of information in a limited space. However, a parsimonious solution for rounding may not always be feasible.

Example

Consider the following case:

ϕ = 0.9999917017, 95% CI = (0.9999641119 to 1.0000192924); P = .5555315172  (13)

According to the above-mentioned rules, this would be rounded as

ϕ = 0.999992, 95% CI = (.99996 to 1.00002); P = .56  (14)

The interpretation of the CI which contains unity is consistent with the original unrounded values and corresponds to a statistically nonsignificant P value in both cases. Yet, the number of decimal places required to meaningfully represent the CI after rounding is cumbersome. Unless a CI is mandatory for the example at hand, it may be more reasonable to forego this statistic and only present the P value corresponding to the point estimate.

Logarithmic-based effects also may pose a challenge for rounding when numbers are too small or too large to be conveniently written in decimal form. Typically, computer programs will use scientific notation to represent the confidence bounds, as indicated in the example below

ϕ = 0.00002, 95% CI = (2.65774E−165 to 2.05454E155); P = .95474  (15)

where “E” denotes multiplying the mantissa by 10 raised to the order of magnitude shown after this symbol. For example

2.65774E−165 = 2.65774 × 10⁻¹⁶⁵  (16)

The above can be further reduced by rounding the mantissas in the CI, as well as the P value, to 2 significant digits. That is

ϕ = 0.00002, 95% CI = (2.7E−165 to 2.1E155); P = .95  (17)

Although some precision is forfeited, this rounded form still conveys the nonsignificance of the point estimate.

Returning to Table 1, Example 14, let us assume that the researcher opted instead to round the HR to 1.0 to save space (ie, more parsimonious). While this abbreviated point estimate appropriately lies within the CI, we are uncertain if the true value is above or below unity. We can determine the directionality of the estimate on the log scale by adding half the width of the CI to the lower bound (ie, computing the midpoint). That is

$\phi = e^{\left\{\log\left[\mathrm{LCL} + \left(\frac{\mathrm{UCL} - \mathrm{LCL}}{2}\right)\right]\right\}} = e^{\left\{\log\left[0.9996 + \left(\frac{1.02 - 0.9996}{2}\right)\right]\right\}} \cong 1.01$  (18)

Accordingly, we see that this value is logarithmically in the positive direction (ie, above 1.0) and conclude that little information would have been lost by the researcher’s rounding decision (as it is possible to estimate the additional significant digit needed to determine if the point estimate is above or below unity).
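The same check can be written in a few lines of Python (my own illustration, using the rounded limits from Table 1, Example 14); it mirrors the form of equation (18) and confirms that the midpoint lies above unity.

```python
import math

lcl, ucl = 0.9996, 1.02                   # rounded 95% CI from Table 1, Example 14
midpoint = lcl + (ucl - lcl) / 2          # half the CI width added to the lower bound
phi_hat = math.exp(math.log(midpoint))    # written as antilog(log(.)) to echo eq. (18)
print(round(phi_hat, 4))                  # 1.0098 -> above unity, so the HR exceeds 1.0
```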

Computing P Values as a Simple Measure of Rounding Accuracy

From basic probability theory, recall that

$X = \left\{ \dfrac{\log(\phi)}{\dfrac{\log[\mathrm{LCL}(\phi)] - \log(\phi)}{-z_{(1-\alpha/2)}}} \right\}^{2}$  (19)
$\;\; = \left\{ \dfrac{\log(\phi)}{\dfrac{2 \cdot z_{(1-\alpha/2)} \cdot \mathrm{SE}[\log(\phi)]}{2 \cdot z_{(1-\alpha/2)}}} \right\}^{2}$  (20)
$\;\; = \left\{ \dfrac{\log(\phi)}{\dfrac{\log[\mathrm{UCL}(\phi)] - \log[\mathrm{LCL}(\phi)]}{2 \cdot z_{(1-\alpha/2)}}} \right\}^{2}$  (21)

follows a chi-square distribution with 1 degree of freedom (df), that is, $\chi^{2}_{1}$. Here, (ϕ) denotes the respective REE, (α) is the significance level (ie, the probability of rejecting the null hypothesis when it is true), and $z_{(1-\alpha/2)}$ is the 100% × (1 − α/2) percentile of a standard normal distribution. The denominator of equation (21) gives the standard error for log(ϕ) in terms of the corresponding CI. The P value is then computed as

$p = P(\chi^{2}_{1} > X) = 1 - \chi^{2}_{1}(X)$  (22)

For example, consider

ϕ = 1.0079959299, 95% CI = (0.9995802677 to 1.0164824454); P = .0626284394  (23)

and applying equation (21), we have

$X = \left\{ \dfrac{\log(1.0079959299)}{\dfrac{\log[1.0164824454] - \log[0.9995802677]}{2 \cdot 1.9599639845}} \right\}^{2}$  (24)
$\;\; = 3.4663743423$  (25)

The resulting P value = .0626284426, which only differs from the original P value by the last 3 digits (ie, 394 vs 426).

Converting a REE and corresponding CI to a P value is a simple way to gauge the accuracy of rounding. Rounding the above example to 2 significant digits gives

ϕ = 1.01, 95% CI = (.9996 to 1.02); P = .063  (26)

Applying equations (21) and (22) to the rounded values for the REE and CI yields the smaller P value of .053. While rounding to 2 significant digits in this example is suitable for determining whether the CI contains unity (ie, nonsignificant vs significant), it may not provide enough accuracy for reliably estimating a P value for other purposes. In practice, journal articles often provide a REE with CI, forgoing a P value. An interested reader could estimate a P value using the above conversion equation. In such a case, however, it is important that a sufficient number of significant digits are provided to assure that the derived P value is reasonably close to the actual value.
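As a worked illustration of equations (19) to (22), the Python sketch below (my own code, not the author's; scipy.stats is assumed for the normal and chi-square distributions, and the function name is illustrative) recovers an approximate P value from a REE and its CI, reproducing the values in the example above.

```python
import math
from scipy.stats import chi2, norm


def p_from_ree_ci(phi, lcl, ucl, alpha=0.05):
    """Approximate P value from a relative effect estimate and its (1 - alpha)100% CI."""
    z = norm.ppf(1 - alpha / 2)
    se_log_phi = (math.log(ucl) - math.log(lcl)) / (2 * z)   # denominator of eq. (21)
    x = (math.log(phi) / se_log_phi) ** 2                    # chi-square statistic, eq. (21)
    return chi2.sf(x, df=1)                                  # upper-tail probability, eq. (22)


print(round(p_from_ree_ci(1.0079959299, 0.9995802677, 1.0164824454), 7))  # ~0.0626284, as above
print(round(p_from_ree_ci(1.01, 0.9996, 1.02), 4))  # ~0.0535: noticeably smaller after rounding
```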

Accumulated Rounding Error

Ratio of 2 independent REEs

In the previous section, we discussed how to gauge rounding error by estimating a P value when given the REE, lower confidence limit (LCL), and UCL (ie, 3 sources of rounding error). While only the REE and LCL are necessary to estimate a P value (see equation (19)), this shorter form may not adequately capture the CI’s true width when the estimates have been rounded. Recall that REEs are based on a logarithmic rather than linear scale, and consequently rounding may differentially affect the distance of the LCL and UCL from the REE.

To further illustrate the accumulation of rounding errors, we consider estimating the (1 − α)100% CI for the ratio of 2 independent REEs (ie, Cov[log(ϕ₁), log(ϕ₂)] = 0). That is

$\mathrm{CI}_{\alpha}\!\left(\frac{\phi_1}{\phi_2}\right) = e^{\log\left(\frac{\phi_1}{\phi_2}\right) \pm z_{(1-\alpha/2)} \cdot \mathrm{SE}\left[\log\left(\frac{\phi_1}{\phi_2}\right)\right]}$  (27)
$\;\; = e^{\log\left(\frac{\phi_1}{\phi_2}\right) \pm z_{(1-\alpha/2)} \sqrt{\frac{\mathrm{Var}[\log(\phi_1)] + \mathrm{Var}[\log(\phi_2)]}{n}}}$  (28)
$\;\; = e^{\log\left(\frac{\phi_1}{\phi_2}\right) \pm z_{(1-\alpha/2)} \sqrt{\{\mathrm{SE}[\log(\phi_1)]\}^{2} + \{\mathrm{SE}[\log(\phi_2)]\}^{2}}}$  (29)

where from the denominator of equation (21) we see that

$\mathrm{SE}[\log(\phi_i)] = \dfrac{\log[\mathrm{UCL}(\phi_i)] - \log[\mathrm{LCL}(\phi_i)]}{2 \cdot z_{(1-\alpha/2)}}$  (30)

To test the null hypothesis that (ϕ₁/ϕ₂) = 1, we compare

$z = \dfrac{\log\left(\frac{\phi_1}{\phi_2}\right)}{\sqrt{\{\mathrm{SE}[\log(\phi_1)]\}^{2} + \{\mathrm{SE}[\log(\phi_2)]\}^{2}}}$  (31)

with a standard normal distribution to obtain the corresponding P value, that is,

$P = 2 \cdot \{1 - \Phi(z)\}$  (32)

where

$\Phi(z) = \displaystyle\int_{-\infty}^{z} \frac{e^{-x^{2}/2}}{\sqrt{2\pi}}\, dx$  (33)

In the above equations, we see that there are 6 potential sources of rounding error corresponding to the 2 sets of REEs and CIs (ie, ϕ₁, UCL(ϕ₁), LCL(ϕ₁) and ϕ₂, UCL(ϕ₂), LCL(ϕ₂)).
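The following Python sketch (my own code; the function and variable names are illustrative, and scipy.stats is assumed for the normal distribution) strings equations (27) to (32) together, taking the 2 REEs and their CIs as input and returning the ratio, its CI, and the corresponding P value.

```python
import math
from scipy.stats import norm


def ratio_of_rees(phi1, lcl1, ucl1, phi2, lcl2, ucl2, alpha=0.05):
    """CI and P value for the ratio of two independent relative effect estimates."""
    z_crit = norm.ppf(1 - alpha / 2)
    se1 = (math.log(ucl1) - math.log(lcl1)) / (2 * z_crit)   # eq. (30)
    se2 = (math.log(ucl2) - math.log(lcl2)) / (2 * z_crit)
    log_ratio = math.log(phi1 / phi2)
    se_ratio = math.sqrt(se1 ** 2 + se2 ** 2)                # eq. (29)
    lcl = math.exp(log_ratio - z_crit * se_ratio)
    ucl = math.exp(log_ratio + z_crit * se_ratio)
    z = log_ratio / se_ratio                                 # eq. (31)
    p = 2 * (1 - norm.cdf(abs(z)))                           # eq. (32); |z| so the ordering of the REEs does not matter
    return phi1 / phi2, (lcl, ucl), p


# Applying it to the rounded trial values in equations (34)-(35) of the example below
# gives ratio ~1.45, CI ~(1.06, 2.00), and P ~.020, consistent with equations (36)-(38).
print(ratio_of_rees(1.6, 1.3, 2.1, 1.1, 0.86, 1.3))
```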

Example

A mildly immunosuppressive, genetically modified compound (Factor X) produced by drug company (A) for the treatment of rheumatoid arthritis has been observed to increase the risk of cancer. Company (B) has developed a non-immunosuppressive compound (Factor Y) for rheumatoid arthritis and wishes to test if this new drug reduces cancer risk in this population. The senior management of the company would like to move forward with regulatory submission of Factor (Y) but only if there is a 50% risk reduction over Factor (X) at the P < .01 level of statistical significance. A randomized clinical trial reports the following results

ϕ_X = 1.6, 95% CI = (1.3 to 2.1), P < .0001  (34)
ϕ_Y = 1.1, 95% CI = (0.86 to 1.3), P = .51  (35)
ϕ_X / ϕ_Y = 1.45  (36)
CI95(ϕ_X / ϕ_Y) = (1.1 to 2.0)  (37)

and

P=.020 (38)

Because the achieved risk reduction is less than 50% (ie, ~45%) and the P value of .020 fails to satisfy the a priori criterion of <.01, Company (B) decides against further investment in this therapy. However, an independent consultant questions these results, given that the analysis was conducted on the reported rounded values. After requesting and reanalyzing the unrounded data, she obtains the more precise result

ϕ_X = 1.6443412302, 95% CI = (1.3161806950 to 2.0543213327), P = .000011928  (39)
ϕ_Y = 1.0774119191, 95% CI = (0.8616745800 to 1.3471633844), P = .513085655  (40)
ϕ_X / ϕ_Y = 1.5261955071  (41)
CI95(ϕ_X / ϕ_Y) = (1.1133527725 to 2.0921246018)  (42)

and

P=.0086086358 (43)

We now see that the risk reduction of Factor (Y) versus Factor (X) is ~53%, with a corresponding P value of .0086. Based on the reanalyzed data, Company (B) decides to move forward with submitting their drug for regulatory approval.

Multiplicity-adjusted CIs for REEs

Accounting for multiple comparisons in the form of multiplicity-adjusted CIs is important to avoid the inflation of type I error (ie, the probability of wrongly rejecting 1 or more null hypotheses increases in proportion to the number of risk comparisons being considered). Multiplicity-adjusted CIs also are useful for identifying a parsimonious set of variables to include in multivariable models such as Cox and logistic regression. However, the process of computing multiplicity-adjusted CIs using rounded values may yield imprecise and misleading results.

Given (n) independent CIs, the (1 − α)100% multiplicity-adjusted CI for the ith REE (ie, ϕᵢ) using the Hochberg step-up procedure is computed as

$\mathrm{aCI}_{\alpha}(\phi_i) = e^{\left\{\log(\phi_i) \pm z_{(1-\alpha/2)} \cdot \widetilde{\mathrm{SE}}[\log(\phi_i)]\right\}}$  (44)

where

$\widetilde{\mathrm{SE}}[\log(\phi_i)] = \dfrac{\log(\phi_i)}{\Phi^{-1}\!\left(1 - \frac{\widetilde{P}_j}{2}\right)}$  (45)
$\widetilde{P}_j = P_{(j)} \cdot (n - j + 1)$  (46)

and P(j) denotes the ordered P values

$P_{(1)} < P_{(2)} < \cdots < P_{(j)} < \cdots < P_{(n)}$  (47)

from the set of Pᵢ values (i, j = 1, …, n) computed using equation (22), with P~j having an upper bound of unity.4
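A Python sketch of equations (44) to (47) is shown below (my own illustration, not the author's code; scipy.stats is assumed, and the unadjusted P values are obtained from the CIs via the normal-form equivalent of equations (21) and (22)). It takes the unadjusted REEs and confidence limits and returns the multiplicity-adjusted intervals.

```python
import math
from scipy.stats import norm


def hochberg_adjusted_cis(phis, lcls, ucls, alpha=0.05):
    """Multiplicity-adjusted CIs for independent REEs (sketch of eqs. 44-47)."""
    z_crit = norm.ppf(1 - alpha / 2)
    n = len(phis)
    # Unadjusted standard errors (eq. 30) and two-sided P values (eqs. 21-22).
    ses = [(math.log(u) - math.log(l)) / (2 * z_crit) for l, u in zip(lcls, ucls)]
    pvals = [2 * (1 - norm.cdf(abs(math.log(f) / se))) for f, se in zip(phis, ses)]
    # Hochberg step-up weights: the j-th smallest P value is multiplied by (n - j + 1),
    # capped at unity (eq. 46).
    order = sorted(range(n), key=lambda i: pvals[i])
    p_tilde = [1.0] * n
    for rank, i in enumerate(order, start=1):
        p_tilde[i] = min(1.0, pvals[i] * (n - rank + 1))
    # Adjusted standard error (eq. 45) and adjusted interval (eq. 44).
    adjusted = []
    for phi, pt in zip(phis, p_tilde):
        q = norm.ppf(1 - pt / 2)
        if q <= 0:                       # p_tilde at its upper bound: interval is unbounded
            adjusted.append((0.0, math.inf))
            continue
        se_adj = abs(math.log(phi)) / q  # |log(phi)| keeps the SE positive when phi < 1
        adjusted.append((math.exp(math.log(phi) - z_crit * se_adj),
                         math.exp(math.log(phi) + z_crit * se_adj)))
    return adjusted


# SNP #3 in Table 2 is the least significant result, so its Hochberg multiplier is 1 and
# its adjusted interval can be checked in isolation: ~(0.5987, 2.7317), matching the table.
print(hochberg_adjusted_cis([1.2788404824], [0.5986807434], [2.7317280496])[0])
```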

Example

A molecular scientist has bioengineered a benign respiratory animal virus to help combat opportunistic infections. While the approach has been shown to be effective at reducing blood human papillomavirus (HPV) levels and subsequent cancer, an increased occurrence of severe coughing and wheezing episodes has been observed over the 5-year follow-up period. Accordingly, she decides to examine the association of selected SNPs with the outcome, hoping to identify an antidote target. To control for false positives, the Hochberg method is used to adjust CIs for multiplicity.4 The study findings, computed before and after rounding, are summarized in Tables 2 and 3.

Table 2.

Confidence intervals adjusted for multiplicity before rounding.

# SNP φ Unadjusted for multiplicity Adjusted for multiplicity
95% LCL 95% UCL 95% LCL 95% UCL
1 rs402197 1.3517192397 1.1168009355 1.6360524466 1.0689444103 1.7092983372
2 rs464397 1.5126136910 1.2531258554 1.8258343074 1.2235960906 1.8698982415
3 rs422761 1.2788404824 .5986807434 2.7317280496 .5986807434 2.7317280495
4 rs383510 1.1245972747 .9143181123 1.3832374238 .7778465831 1.6259234890
5 rs386416 1.0140354289 1.0016257307 1.0265988777 .9970386257 1.0313219815
6 rs233575 1.4891699509 .8360190116 2.6526037229 .4307668688 5.1480912375
7 rs714205 1.3414272940 1.0805797401 1.6652423863 1.0152419507 1.7724121662
8 rs518394 1.4177869015 1.1593821629 1.7337852543 1.1129480251 1.8061218069

Abbreviations: LCL, lower confidence limit; SNP, single-nucleotide polymorphism; UCL, upper confidence limit; φ, relative effect estimate.

Table 3.

Confidence intervals adjusted for multiplicity after rounding.

# SNP φ Unadjusted for multiplicity Adjusted for multiplicity
95% LCL 95% UCL 95% LCL 95% UCL
1 rs402197 1.4 1.1 1.6 1.1 1.7
2 rs464397 1.5 1.3 1.8 1.3 1.8
3 rs422761 1.3 .60 2.7 .61 2.8
4 rs383510 1.1 .91 1.4 .58 2.1
5 rs386416 1.01 1.002 1.03 .97 1.1
6 rs233575 1.5 .84 2.7 .44 5.2
7 rs714205 1.3 1.1 1.7 .96 1.8
8 rs518394 1.4 1.2 1.7 1.1 1.7

Abbreviations: LCL, lower confidence limit; SNP, single-nucleotide polymorphism; UCL, upper confidence limit; φ, relative effect estimate.

In Table 2 (before rounding), SNP #3 corresponds to a P value of .5253357016 (which is the least significant result in the table), with SE~[log(ϕᵢ)] = 0.3872421156 (computational details not shown). Consequently, the multiplicity-adjusted CI for this SNP is approximately equal to the unadjusted CI (only differing by the last decimal place for the UCL), in accordance with the underlying theory. However, in the rounded example (Table 3), both the lower and upper multiplicity-adjusted confidence limits (CLs) for SNP #3 are larger than the unadjusted CLs. Also, SNP #7 is no longer statistically significant (ie, CI spans unity), while the multiplicity-adjusted CI for SNP #4 is much larger after using rounded values to compute the intervals. These results illustrate the accumulated error that may occur by rounding to only 2 significant digits, suggesting the need for greater accuracy when adjusting CIs for multiplicity.

Discussion

Rounding is a compromise strategy that involves replacing a true or more accurate value with one having less accuracy. The intent is to preserve the meaning and interpretation of the original result. It represents a balance between reporting too few significant digits and losing information versus failing to achieve parsimony (ie, retaining too many inessential digits). In some cases, numbers are reported beyond the capacity of the measurement device (spurious accuracy) or beyond the practical needs of the problem at hand. The degree of rounding imposed in a situation depends on the desired accuracy and precision of the result. Additionally important is the need to minimize the error that may accumulate when performing complex computations on rounded values. Such sequential operations can lead to ill-conditioned and inaccurate findings.

The goal of research is to gain insights and answers to relevant scientific questions. Rounding helps to simplify the presentation of data and to make findings easier to understand and compare across research studies. Insufficient or inappropriate rounding (ie, numeric representation error) may affect the credibility and quality of a study, leading to a false sense of discovery. Results also may be difficult to replicate in future studies. Ideally, one aims to minimize the bias associated with rounding, realizing that a “one-size-fits-all” solution or set of rules rarely exists in the real world. In this article, a rounding strategy is presented based on the number of significant digits that are needed to determine if a REE is statistically significant. The method is simple to implement yet reasonably robust in most common applications.

An alternative rounding method for reporting REEs (and associated CIs), known as the “Rule of Four,” is based on the “maximum absolute fractional rounding error,” a value that varies because of the nonlinear (logarithmic) aspect of REEs.3 This rule entails dividing the REE by 4 and rounding down to 2 significant digits, and then reporting the REE to that number of decimal places. In brief, REEs are reported to 3 decimal places for values ranging from (.040 to .399), 2 decimal places for (.40 to 3.99), and 1 decimal place for (4.0 to 39.9), and so forth.

Within certain ranges, this rule has the advantage of reporting rounded values with greater absolute accuracy than the strategy suggested in the current manuscript (ie, the “Goldilocks Rule”). For example, the REEs of .2543 and 3.9421 are rounded to .254 and 3.94, respectively, using the Rule of Four. In comparison, the values are rounded to .25 and 3.9 using the Goldilocks Rule. However, the Rule of Four often is less parsimonious from the perspective of using CIs to gauge the statistical significance of a REE. For instance, rounding the lower confidence bound of .3994 to .399 (Rule of Four) versus .40 (Goldilocks Rule) conveys the same interpretation of the REE as being statistically nonsignificant, but in the latter case, fewer decimal places are required to present the value. Furthermore, rounding a UCL of .9973 and an LCL of 1.0042 by the Rule of Four (ie, 1.0 in both cases) does not flag the values as being statistically significant, compared with the Goldilocks Rule, which rounds them to .997 and 1.004, respectively.
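For comparison, a small Python sketch of the Rule of Four as described above is given below. It reflects my own reading of the rule (keep 3 significant figures when the leading significant digit is below 4, and 2 otherwise), offered as an illustration rather than a definitive implementation.

```python
import math


def rule_of_four(x: float) -> str:
    """Decimal-place rule of thumb for REEs: 3 significant figures if the leading
    significant digit is 1-3, otherwise 2 (a sketch of the Rule of Four)."""
    if x <= 0:
        raise ValueError("relative effect estimates must be positive")
    exponent = math.floor(math.log10(x))        # position of the leading digit
    leading = int(x / 10 ** exponent)           # first significant digit (1-9)
    sig = 3 if leading < 4 else 2
    places = max(sig - 1 - exponent, 0)         # decimal places to report
    return f"{x:.{places}f}"


print(rule_of_four(0.2543), rule_of_four(3.9421))   # 0.254 and 3.94, as in the text
```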

Investigators are increasingly advocating for the use of effect sizes and CIs rather than a single P value, as the latter does not convey the magnitude and relative importance of an effect.5 This approach has the added advantage of allowing readers to gauge the statistical significance of the result (ie, CI excludes unity), which remains a common practice in the literature. Caution, however, is advised when dealing with CLs near unity, as insufficient rounding may obscure the underlying statistical significance (or lack thereof) of the result. Researchers also are counseled to avoid spuriously significant findings which may not be scientifically relevant or practically meaningful, based only on whether the CI excludes unity.6

Conclusions

Little guidance exists in the literature on how to round REEs and CIs, especially with respect to the interpretation of statistical significance. The current manuscript provides a parsimonious framework for rounding to aid cancer researchers in the presentation of their results. Tools also are provided for gauging the rounding accuracy of REEs in terms of associated P values, along with intuitive examples of accumulated rounding error. Importantly, rules for rounding, whether those presented here or by other authors, are merely recommendations and should be carefully considered in the context and aims of the underlying research.

Acknowledgments

Acknowledgments are due to Andrew Thompson (CSPEC/DVAHCS) for providing technical support and quality review.

Footnotes

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Authors’ Note: The content of this manuscript does not represent the views of CSPEC/HSR&D/DVAHCS or the US Government. Examples are hypothetical and presented only for didactic illustration of underlying quantitative concepts.

Author Contributions: JTE contributed to the conceptualization, writing—original draft preparation, and writing—review and editing. The author has read and agreed to the published version of the manuscript.

References

1. Cole TJ. Too many digits: the presentation of numerical data. Arch Dis Child. 2015;100:608-609.
2. Blackstone E. Rounding numbers. J Thorac Cardiovasc Surg. 2016;152:1481-1483.
3. Cole T. Setting number of decimal places for reporting risk ratios: rule of four. Br Med J. 2015;350:h1845.
4. Hochberg Y. A sharper Bonferroni procedure for multiple tests of significance. Biometrika. 1988;75:800-802.
5. Nuzzo R. Scientific method: statistical errors. Nature. 2014;506:150-152.
6. Blume J, D’Agostino McGowan L, Dupont W, Greevy R. Second generation p-values: improved rigor, reproducibility and transparency in statistical analyses. PLoS ONE. 2018;13:e0188299.
