Editorial

Can Respir J. 2009 Nov–Dec;16(6):181–182. doi: 10.1155/2009/403258

Database epidemiology

PMCID: PMC2807790  PMID: 20011724

In the current issue of the Canadian Respiratory Journal, Gershon et al (1) (pages 183–188) tackle a familiar problem. Our single-payer, largely fee-for-service health care system essentially guarantees that provincial health care authorities have records of physician contacts, replete with diagnoses, that can be attributed to (anonymous) individuals. These records potentially enable investigators to track physician and hospital contacts for patients with a given diagnosis at relatively low cost and are, therefore, highly attractive to epidemiologists, who would otherwise have to find the requisite patients and then follow them individually for years in the case of chronic diseases such as asthma. However, it is often not clear what exactly is being tracked. In the case of asthma, ‘doctor-diagnosed asthma’ simply means that a physician labelled a patient with asthma at some point in time. This is problematic because there is no accepted objective diagnostic criterion for asthma and, even if there were, the investigator could not be sure that it was applied in the cases considered. In other words, the investigator has no real idea whether the diagnoses he or she is analyzing are accurate.


Nick R Anthonisen

Gershon et al (1) report a detailed and difficult study that attempts to clarify these issues. They selected 13 primary care physician (PCP) practices in Ontario that had electronic billing systems and work space for an auditor, and that saw at least 30 adult asthmatic patients in 2003. A total of 518 charts from these PCPs were audited and classified into four categories of equal size: those with asthma; those with ‘related diseases’ such as sinusitis, acute bronchitis and rhinitis; those with chronic obstructive pulmonary disease; and control subjects who had hypertension or musculoskeletal diseases. These charts were submitted to respirologists, who essentially classified them into ‘asthma’ or ‘nonasthma’ groups. These expert diagnoses were compared with the database diagnoses, and the PCP diagnoses underwent the same comparison. The comparison yielded some results that may have been expected – and some that were of greater interest. Among the latter was the finding that the expert diagnoses fared no better against the database than those of the PCPs. For both groups, if the database criterion for asthma was a single physician visit for the disease, sensitivity was excellent: more than 90% of people believed to have asthma by either the experts or the PCPs had such a visit. However, the specificity of a single visit was low – approximately 60%; many people who did not have asthma apparently also had such a visit. If the criterion was two ambulatory care visits or one hospitalization for asthma over two years, the sensitivity decreased to approximately 80%, with the specificity increasing to approximately the same figure for both experts and PCPs. Making the criterion three ambulatory care visits for asthma over two years decreased sensitivity further but increased specificity. The authors concluded that two ambulatory care visits and/or one hospitalization for asthma over two years would be a satisfactory database criterion for asthma.
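For readers who want the mechanics behind these figures, the short Python sketch below shows how sensitivity and specificity are calculated from the two-by-two agreement between a reference-standard (chart-audit) diagnosis and a database case definition. The cell counts are invented for illustration and are not the study’s actual figures; only the approximate 90% sensitivity and 60% specificity of the single-visit criterion is taken from the text above.

```python
# Minimal sketch: sensitivity and specificity of a database case definition,
# judged against a reference-standard (chart-audit) diagnosis.
# All counts below are hypothetical and for illustration only.

def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) from 2x2 agreement counts.

    tp: reference asthma, meets the database criterion
    fp: reference non-asthma, meets the database criterion
    fn: reference asthma, does not meet the criterion
    tn: reference non-asthma, does not meet the criterion
    """
    sensitivity = tp / (tp + fn)  # share of true cases the criterion captures
    specificity = tn / (tn + fp)  # share of non-cases the criterion excludes
    return sensitivity, specificity


# Hypothetical counts mimicking a loose one-visit criterion: it captures most
# true cases (high sensitivity) but also flags many non-cases (low specificity).
sens, spec = sensitivity_specificity(tp=145, fp=140, fn=15, tn=218)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # ~0.91 and ~0.61
```

Tightening the criterion (for example, requiring two visits instead of one) moves counts from the fp cell to the tn cell but also from tp to fn, which is why specificity rises as sensitivity falls.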

When I served in a large federal institution, approximations were characterized as ‘close enough for government work’, the implication being that they might not be close enough for anyone else. The results of Gershon et al (1) fall into this category. There were 160 cases of asthma diagnosed by the experts; the PCPs failed to diagnose more than 20% of these cases. On the other hand, the PCPs identified 196 of the 518 patients as asthmatic. These differences were not trivial; by my calculation, of 228 patients, approximately 68 had a diagnosis that differed between the two sets of physicians. There are several potential causes of such discrepancies, which are addressed by the authors. It is perhaps remarkable that the sensitivity and specificity of a given database definition did not differ between the two groups, which shows that these numbers contained a substantial degree of variation.

Thus, database epidemiology is, at best, an approximation and must always be recognized as such, as Gershon et al (1) indicate. In the case of asthma, others have examined the issue of database accuracy (2–4), with somewhat similar results. The fewer the potentially competing diagnoses, the better the results, the best being in children and young adults, who are least likely to have chronic obstructive pulmonary disease. Database epidemiology is best confined to these groups, when possible.

REFERENCES

1. Gershon AS, Wan C, Guan J, Vasilievska-Ristovska J, Cicutto L, To T. Identifying patients with physician-diagnosed asthma in health administrative databases. Can Respir J. 2009;16:183–8. doi: 10.1155/2009/963098.
2. To T, Dell S, Dick PT, et al. Case verification of children with asthma in Ontario. Pediatr Allergy Immunol. 2006;17:69–76. doi: 10.1111/j.1399-3038.2005.00346.x.
3. Blais L, Lemiere C, Menzies D, Berbiche D. Validity of asthma diagnoses recorded in the Medical Service Database of Quebec. Pharmacoepidemiol Drug Saf. 2006;15:245–52. doi: 10.1002/pds.1202.
4. Huzel L, Roos LL, Anthonisen NR, Manfreda J. Diagnosing asthma: The fit between survey and administrative database. Can Respir J. 2002;9:407–12. doi: 10.1155/2002/921497.

