To provide effective health services, it is important to distinguish populations of people with and without disease. Accurate diagnosis is essential for delivering appropriate treatments, implementing preventive programs in the community, and investigating the etiology of diseases.1 Therefore, special attention should be paid to the quality of diagnostic tests.
Rather than simply reporting positive or negative results, many diagnostic tests report the values of measured variables. The determination of a cut-off point, which distinguishes between patients and healthy individuals, is therefore necessary. The cut-off point is an important and much-discussed issue in medical research. Since the cut-off point has a great impact on clinicians' decision making, studies for the determination of cut-off points should be designed with special methodological considerations in mind.
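One common approach to choosing such a cut-off (illustrative only, and not necessarily the method used in the paper under discussion) is to pick the threshold that maximizes Youden's index, J = sensitivity + specificity − 1. The sketch below uses made-up measurement values purely for demonstration:

```python
# Illustrative sketch of cut-off selection via Youden's index.
# The measurement values below are hypothetical, not data from the study.

def youden_cutoff(diseased, healthy):
    """Return the observed value maximizing sensitivity + specificity - 1,
    assuming higher values indicate disease."""
    candidates = sorted(set(diseased) | set(healthy))
    best_t, best_j = None, -1.0
    for t in candidates:
        sens = sum(x >= t for x in diseased) / len(diseased)
        spec = sum(x < t for x in healthy) / len(healthy)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical antibody titres in patients and controls
diseased = [3.1, 2.8, 4.0, 3.5, 2.9, 3.8]
healthy = [1.2, 2.0, 1.8, 2.5, 1.5, 3.0]
print(youden_cutoff(diseased, healthy))
```

In practice the candidate thresholds would come from the full ROC curve, and the choice may weight sensitivity and specificity unequally depending on the clinical cost of false negatives versus false positives.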
The current issue of the Iranian Journal of Medical Sciences publishes an article titled “ELISA cut-off point for the diagnosis of human brucellosis; a comparison with serum agglutination test” by Sanaei and colleagues. Herein, we comment on the methodological pitfalls of this article in particular, and the accuracy of research on diagnostic tests in general.
Basic steps in the design of diagnostic accuracy studies include determination of the study objectives, identification of the target patient population, selection of the gold standard, and selection of the measures of accuracy.2
The first and second steps have been managed properly in the study by Sanaei and colleagues. However, the authors' approach to the third and fourth steps needs more clarification.
The selection of a gold standard is the most difficult step in studies involving diagnostic tests. Some investigators believe that there is no true gold standard, since no test or procedure is entirely accurate in differentiating between patients and healthy individuals. However, for all studies on the accuracy of diagnostic tests, it is important to establish an operational standard.2 Although a reference test with the highest sensitivity and specificity is ideal, issues such as availability, cost, and invasiveness of the test should also be considered.
Any change in the gold standard alters the sensitivity and specificity of a diagnostic test. Accordingly, when the gold standard was changed (table 3 of the paper), the accuracy measures of the ELISA test changed as well. Therefore, the authors should have chosen, as the gold standard, a test with a specified sensitivity and specificity. Application of an imperfect gold standard usually leads to the underestimation of test accuracy, which is termed imperfect gold standard bias.2
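The magnitude of this bias can be illustrated with a simple hypothetical calculation: even a perfect new test appears inaccurate when judged against an imperfect reference. All numbers below are made up for illustration and assume the errors of the reference test are independent of the new test:

```python
# Hypothetical sketch of imperfect gold standard bias.
# A *perfect* new test is scored against an imperfect reference test.

def apparent_accuracy(ref_sens, ref_spec, prevalence, n=100_000):
    """Apparent sensitivity/specificity of a perfect new test when the
    reference test itself misclassifies some individuals."""
    diseased = prevalence * n
    healthy = (1 - prevalence) * n
    # How the imperfect reference classifies the population
    ref_pos = ref_sens * diseased + (1 - ref_spec) * healthy
    ref_neg = n - ref_pos
    # The perfect test is positive exactly for the truly diseased,
    # so agreement occurs only where the reference is also correct.
    both_pos = ref_sens * diseased
    both_neg = ref_spec * healthy
    return both_pos / ref_pos, both_neg / ref_neg

sens_app, spec_app = apparent_accuracy(ref_sens=0.90, ref_spec=0.90, prevalence=0.2)
print(f"apparent sensitivity={sens_app:.3f}, apparent specificity={spec_app:.3f}")
```

With a reference test of 90% sensitivity and specificity and 20% prevalence, the perfect new test's apparent sensitivity falls to roughly 0.69, illustrating why an imperfect gold standard tends to understate true accuracy.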
The selection of measures of accuracy is another step in the design of diagnostic test accuracy studies. The paper by Sanaei and colleagues reports positive and negative predictive values. A predictive value is a post-test probability and is affected by the prevalence of the disease. In contrast to sensitivity and specificity, the predictive value is not a measure of intrinsic diagnostic accuracy, and it varies with any change in the pre-test probability. Therefore, the results of any test must be interpreted in light of the pre-test probability of the disease in the population of interest.
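This dependence on prevalence follows directly from Bayes' theorem. The sketch below, using illustrative sensitivity and specificity values, shows how the same test yields very different predictive values as the pre-test probability changes:

```python
# Illustrative sketch: predictive values shift with prevalence while
# sensitivity and specificity stay fixed (values are made up).

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) from Bayes' theorem."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"prevalence={prev:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
```

At 1% prevalence a test with 90% sensitivity and 95% specificity yields a PPV of only about 0.15, whereas at 50% prevalence the PPV exceeds 0.9, which is why predictive values reported in one population cannot be transferred to another with a different pre-test probability.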
References
- 1. Gordis L. Epidemiology. 3rd ed. Philadelphia: Saunders; 2004.
- 2. Zhou XH, Obuchowski NA, McClish DK. Statistical methods in diagnostic medicine. New York: John Wiley; 2002.