Abstract
Background
Quality improvement requires quality measures that can be validly implemented. In this work, we assessed the feasibility and performance of an automated electronic Meaningful Use dental clinical quality measure (the percentage of children who received a fluoride varnish application).
Methods
We defined how to implement the automated measure queries in a dental electronic health record (EHR). Within records identified through automated query, we manually reviewed a subsample to assess the performance of the query.
Results
The automated query found 71.0% of patients to have had fluoride varnish, compared to 77.6% found using the manual chart review. The automated quality measure achieved 90.5% sensitivity, 90.8% specificity, 96.9% positive predictive value, and 75.6% negative predictive value.
Conclusions
Our findings support the feasibility of automated dental quality measure queries in the context of sufficient structured data. Information noted only in the free text rather than in structured data would require natural language processing approaches to effectively query.
Practical Implications
To participate in self-directed quality improvement, dental clinicians must embrace the accountability era. Commitment to quality will require enhanced documentation in order to support near-term automated calculation of quality measures.
Introduction
“Self-evaluation will ensure that dentistry as a profession can provide evidence to the community at large that its members are responsible stewards of oral health.”1 Dentistry has made strides in developing dental quality measures in the wake of increasing EHR adoption and enhanced focus on accountability in healthcare. This is a critical, and perhaps overdue2, step forward. According to President Clinton's Advisory Commission on Consumer Protection and Quality in the Health Care Industry (1998), “A key element of improving health care quality is the nation's ability to measure the quality and provide easily understood, comparable information on the performance of the industry.”3
In the years since 1998, the clinical quality measure ecosystem has evolved, and the evolution has accelerated with the uptake of electronic health records (EHRs). The Medicare and Medicaid EHR Incentive Programs that are part of the HITECH (Health Information Technology for Economic and Clinical Health) Act provide financial incentives for the “meaningful use” of certified EHR technology. An eligibility criterion for the incentives is the use of the EHR to report clinical quality measures.4 (Please see Box 1 for facts about the Meaningful Use program.) Meaningful Use Stage 2 was the first to incorporate oral health measures. They were developed by the Centers for Medicare and Medicaid Services (CMS) through a contract with Booz Allen Hamilton.5 In parallel, the Dental Quality Alliance (DQA) has also been defining electronic clinical quality measures. In response to a request from CMS, the American Dental Association established the DQA to frame oral health care quality measures using a consensus-building process among a broad base of stakeholders.6 For Meaningful Use Stage 3, the DQA and Meaningful Use efforts may intersect, as the DQA is poised to contribute to Meaningful Use clinical quality measures.7
Before the advent of EHRs, quality measures were gathered through administrative claims data, which are proprietary to the payers and are aggregated only at that level.1 The Meaningful Use program has the potential to transform EHRs into a catalyst for what the Institute of Medicine has called a learning healthcare system, defined as a “healthcare system designed to generate and apply the best evidence for the collaborative healthcare choices of each patient and provider; to drive the process of discovery as a natural outgrowth of patient care; and to ensure innovation, quality, safety, and value in health care.”8 The learning healthcare system has implications for quality improvement, as well as for clinical research and the analysis of the comparative effectiveness of different treatments.9 Quality measurement is fundamental to quality improvement, and successful implementation of quality measures depends on the availability of timely, accurate, and reliable data sources. For these reasons, we believed it was an opportune moment to implement one of these Meaningful Use measures in a real-world EHR setting in order to evaluate its feasibility and performance. In particular, our objectives were (a) to evaluate whether the measure could be implemented on the basis of structured data available in the record and (b) to compare the performance of an EHR-based query against findings derived from an in-depth manual chart review.
Methods
We assessed the feasibility and performance of Meaningful Use Stage 2 measure CMS74v3,10 the percentage of children, age 0-20 years, who received a fluoride varnish application. The site chosen was the University of Texas School of Dentistry at Houston. Representatives from different dental institutions (the authors) met to frame the exact query definition by reviewing the measure specifications and determining how each element could be retrieved from the EHR. The authors represent a range of areas of expertise, as follows: (1) A.B. – dental public health; (2) R.R. – dental public health; (3) E.K. – oral surgery, health policy and management, data standards; (4) A.N. – dental public health; (5) N.B.H. – dental public health; (6) J.M.W. – general dentistry, caries risk management; (7) L.M. – general dentistry; (8) M.F.W. – informatics. Each of the institutions represented uses the axiUm dental EHR. Human subjects approval was received before conducting the retrospective chart reviews. We followed the procedure delineated below:
Step 1: Total patient population that meets the denominator criteria
We determined the total number of eligible patients who had an encounter (visit) for oral evaluation in the 2013 measurement period by querying the dental EHR. We used the evaluation-related CDT11/Treatment codes shown in Table 1 to identify encounters for patients who were 20 years old or younger before the start of the measurement year.
Table 1. CDT procedure codes queried to determine encounters (denominator) and fluoride applications (numerator)
CDT procedure codes queried in the denominator to determine an encounter:
D0120: Periodic oral evaluation
D0140: Limited oral evaluation
D0145: Oral evaluation for a patient under three years of age and counseling with primary caregiver
D0150: Comprehensive oral evaluation
D0160: Detailed and extensive oral evaluation - problem focused
D0170: Re-evaluation - limited
D0180: Comprehensive periodontal evaluation
CDT procedure codes queried in the numerator to determine fluoride application:
D1203: Topical application of fluoride - child
D1204: Topical application of fluoride - adult
D1206: Topical fluoride varnish
D1208: Topical application of fluoride
Step 2: Sample size necessary for manual review
We determined the required sample size using a precision of 5% around the estimate at 95% confidence, estimating the fluoride varnish application rate at 44.27% based on data from the State of Texas Medicaid database (Yang N, personal communication).
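For reference, the standard normal-approximation calculation can be sketched as follows (a minimal illustration, assuming z = 1.96 for 95% confidence and a ±5% margin; exact results may differ by a chart or two depending on rounding conventions and any finite-population correction):

```python
import math

def proportion_sample_size(p: float, d: float = 0.05, z: float = 1.96) -> int:
    """Charts needed to estimate a proportion p within +/- d at ~95% confidence,
    using the normal approximation n = z^2 * p * (1 - p) / d^2, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

# Expected fluoride varnish application rate of 44.27% from Texas Medicaid data
print(proportion_sample_size(0.4427))  # approximately 380 charts
```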
Step 3: Population within the sample that meets the numerator criteria through EHR query
We determined the total number of eligible patients who received a fluoride varnish application in the measurement period by querying the set of patients identified in Step 1 (the denominator). We used the fluoride-related CDT/Treatment codes shown in Table 1 to identify fluoride applications.
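As an illustration of the query logic across Steps 1 and 3, the sketch below derives the denominator and numerator from a flat procedure-log extract. The file name and column names are assumptions for illustration only; the actual axiUm schema differs by installation.

```python
import pandas as pd

# Hypothetical flat extract of completed procedures from the dental EHR
procedures = pd.read_csv("procedure_log_2013.csv",
                         parse_dates=["service_date", "birth_date"])

EVALUATION_CODES = {"D0120", "D0140", "D0145", "D0150", "D0160", "D0170", "D0180"}
FLUORIDE_CODES = {"D1203", "D1204", "D1206", "D1208"}

period_start, period_end = pd.Timestamp("2013-01-01"), pd.Timestamp("2013-12-31")
in_period = procedures["service_date"].between(period_start, period_end)
# Simplified age criterion: younger than 21 at the start of the measurement year
age_ok = procedures["birth_date"] > period_start - pd.DateOffset(years=21)

# Denominator: eligible patients with an oral evaluation encounter in 2013
denominator = set(procedures.loc[in_period & age_ok &
                                 procedures["cdt_code"].isin(EVALUATION_CODES),
                                 "patient_id"])

# Numerator: denominator patients with a fluoride application in the same period
numerator = set(procedures.loc[in_period &
                               procedures["cdt_code"].isin(FLUORIDE_CODES),
                               "patient_id"]) & denominator

print(len(denominator), len(numerator))
```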
Step 4: Population within the sample that meets the numerator criteria through manual chart review
Two abstractors (authors A.B. and N.B.H.) conducted independent manual reviews of the sample determined in Step 2 and came to consensus where there were differences. Discrepancies were first identified; the reviewers then referred back to the definitions of the numerator and denominator and discussed each discrepancy. All differences were resolved in this way. Inter-rater reliability was assessed using the Kappa statistic.
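The Kappa statistic corrects raw percent agreement for the agreement expected by chance, κ = (p_o − p_e)/(1 − p_e). A minimal sketch follows; the yes/no vectors here are toy values for illustration, not study data:

```python
from sklearn.metrics import cohen_kappa_score

# Toy example: one yes (1) / no (0) judgment per chart from each abstractor
abstractor_1 = [1, 1, 0, 1, 0, 1, 1, 0]
abstractor_2 = [1, 1, 0, 1, 1, 1, 1, 0]

print(cohen_kappa_score(abstractor_1, abstractor_2))
```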
Step 5: Concordance between query and manual review
We compared the performance of the dental EHR query to an in-depth manual chart review, which served as the gold standard with respect to information contained within the record. We calculated sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV).
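These metrics follow the standard definitions over the 2×2 table of query results against manual review; a minimal sketch:

```python
def query_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 metrics, treating manual review as the gold standard.
    tp: query+/manual+   fp: query+/manual-
    fn: query-/manual+   tn: query-/manual-"""
    return {
        "sensitivity": tp / (tp + fn),  # query finds varnish when truly applied
        "specificity": tn / (tn + fp),  # query negative when truly not applied
        "ppv": tp / (tp + fp),          # query positives that are true positives
        "npv": tn / (tn + fn),          # query negatives that are true negatives
    }
```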
Results
We describe the results of each step, outlined in the methods, for creating and evaluating the dental quality measure.
Step 1: Denominator
A total of 4376 patient charts met the requisite criteria. The average age of these patients was 8.81 years (±4.80), and 51.2% were male. During the period covered in this review, the site treated Hispanic as a race category rather than as a separately recorded ethnicity. Thus, the population was categorized as 50.3% Hispanic, 15.8% African American, 14.3% Caucasian, 4.4% Asian, 0.4% American Indian/Alaskan, 1.8% other, and 13.0% unknown/unspecified/blank. Since the time of this review, race and ethnicity have been recorded separately.
Step 2: Manual review sample size
Sample size calculations determined that a minimum of 381 charts was needed for a precision of 5% around the estimate at 95% confidence. To be conservative, we randomly selected 500 charts for review from the denominator records using a random number generator. The average age of these patients was 9.05 years (±4.90), and 50.2% were male. The race/ethnicity distribution was as follows: 51.8% Hispanic, 13.2% African American, 12.6% Caucasian, 5.3% Asian, 0.2% American Indian/Alaskan, 2.0% other, and 15.0% unknown/unspecified/blank. Ten charts were excluded from the manual review because they belonged to test (i.e., not real) patients. We did not exclude them from the automated review, as they would have been captured in a real-world automated review.
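A reproducible random draw of this kind can be sketched as follows (the chart IDs and seed are placeholders for illustration):

```python
import random

# Placeholder stand-in for the 4376 denominator chart IDs from Step 1
denominator_chart_ids = [f"chart-{i}" for i in range(4376)]

random.seed(2013)  # illustrative seed; fixing it makes the draw reproducible
review_sample = random.sample(denominator_chart_ids, 500)
print(len(review_sample))  # 500
```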
Step 3: Numerator
Among the 4376 denominator charts, 3209 met the numerator criteria (having had a fluoride varnish) through the dental EHR query.
Step 4: Identify numerator through manual chart review of sampled records
The two chart reviewers had excellent inter-rater agreement with respect to the denominator (Kappa=0.907). The abstractors also had excellent inter-rater reliability in determining whether fluoride varnish was applied (numerator; Kappa=0.984). Manual chart review indicated fluoride varnish application in 380 of the 490 charts.
Step 5: Concordance between query and manual review
Using the dental EHR-based query, 355/500 patients (71.0%, 95% CI: 67.2% to 75.2%) were found to have had fluoride varnish, compared to 380/490 (77.6%, 95% CI: 73.9% to 81.2%) using the manual chart review. Table 2 shows how the automated EHR query performed compared to the manual chart review.
Table 2. Performance of the automated EHR query compared with the manual chart review (gold standard)
Sensitivity | 90.5% (95% CI: 87.1% to 93.3%) of the time that manual review revealed fluoride varnish application, the query identified fluoride varnish application
Specificity | 90.8% (95% CI: 84.2% to 95.3%) of the time that the manual review did not reveal fluoride varnish application, the query did not identify fluoride varnish application
Positive Predictive Value (PPV) | 96.9% (95% CI: 94.5% to 98.4%) of the time that the query identified fluoride varnish application, the manual review revealed fluoride varnish application
Negative Predictive Value (NPV) | 75.6% (95% CI: 67.3% to 82.0%) of the time that the query did not identify fluoride varnish application, the manual review did not reveal fluoride varnish application
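Binomial confidence intervals around such proportions can be computed, for instance, with the Wilson score method; a minimal sketch is below (the intervals reported above may reflect a different denominator or method):

```python
from statsmodels.stats.proportion import proportion_confint

# 95% Wilson score interval for the query-based estimate (355 of 500 charts)
low, high = proportion_confint(count=355, nobs=500, alpha=0.05, method="wilson")
print(f"{355/500:.1%} (95% CI: {low:.1%} to {high:.1%})")
```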
The procedure code for fluoride varnish was sometimes not entered and was therefore not identifiable by the automated query, which was based on structured data. While the fluoride varnish procedure code was documented as structured data 90.3% of the time, in 9.7% of cases the application of varnish was noted only in a free-text clinical note. Reviewers also found the following reasons documented in the notes for why fluoride varnish was not applied: 1) fluoride was provided earlier during a visit to another dental clinic, 2) the patient had a heart murmur and the dentist wanted to contact the cardiologist first, 3) the patient was transferred to another clinic after being accepted into the Children's Health Insurance Program (CHIP), and 4) the patient's mother declined fluoride application.
Discussion
Our preliminary results from implementing a single quality measure in the EHR are promising and demonstrate the feasibility of using EHR data to calculate dental quality measures. We felt the time was right to focus on this effort, as momentum builds for widespread EHR adoption in dental practices.12 Indeed, some states now require dental practices to implement EHRs.13
We chose to implement a Meaningful Use measure that relies upon data that are collected in a standardized way across sites and EHR installations. A measure requiring data that not all sites collect, or that are collected in different ways at different sites, would be more challenging to implement broadly. In the course of our work, we identified best practices for quality measure definition that should be followed as dentistry expands its quality measure set. First, the population defined in the denominator should be appropriate to the numerator being assessed. In the measure we implemented, we followed the Meaningful Use specification and therefore did not exclude patients who were seen only for an emergency visit, though fluoride varnish would generally be inappropriate in the emergency setting. Likewise, we did not explicitly exclude young infants (< 6 months of age), though they would be unlikely to have teeth available for fluoride varnish application. Second, measures should be unambiguously definable. In the case of the measure we assessed, the denominator is specified as “Children, age 0-20 years, with a visit during the measurement period.”10 In our cross-site discussions, we noted that the term visit is ambiguous because it is not clear, for instance, whether it applies to one-time visits such as a single focused periodontal evaluation or to emergency visits. Ambiguity in measure definition leads to cross-site variation in implementation, which inhibits interpretation and comparison.
A great promise of oral healthcare quality measures is the ability to compare and share data across sites. When these data are transformed into knowledge, we establish the foundation of a learning healthcare system. In the case of quality improvement, quality measures are the vehicle by which raw data are first transformed into actionable knowledge. In addition to the importance of consistent specification, there is a need to aggregate measures across sites for shared learning to occur. For the most part, aggregation currently occurs only with administrative claims data at the payer level, as noted by the DQA.1 This status quo is being challenged by initiatives to aggregate clinical data, such as the BigMouth dental data repository, which authors of this paper have developed. BigMouth has thus far amassed data extracted from the EHRs of six dental schools.14 One of our goals for BigMouth is that it will either (1) allow the calculation of quality measures based upon raw data deposited into BigMouth or (2) serve as a repository for quality measures that have been pre-processed at the originating clinical site, likely via something akin to the Health Quality Measures Format, a standard for representing a health quality measure as an electronic document.15
One of the challenges often mentioned in the framing of dental clinical quality measures is the absence of a standardized dental diagnostic terminology.1, 16, 17 The lack of such a terminology has limited a number of bodies, including the DQA, in the types of measures that can be proposed; e.g., one DQA proposed measure is “Percentage of adults treated for periodontitis who received comprehensive oral evaluation ....”18 If a patient has been diagnosed with periodontitis but has not received treatment, s/he would not be captured by such a measure, creating a quality blind spot. In response to this need, our research group has established the DDS Dental Diagnostic System (formerly called EZCodes), which has been designed for use within EHRs in the dental setting.19-22 The DDS is freely available and is currently installed at 16 academic institutions and several large dental group practices in the United States, the Caribbean and Latin America, and Europe. The DDS has been designed to serve as a clinician-friendly interface to rich reference terminologies such as SNOMED, into which SNODENT23 has been integrated. Broader use of standardized diagnostic terminologies will give the profession and the public a more complete picture of dental healthcare quality. With that said, our own experience with dental diagnostic terms confirms that there may be documentation gaps.21 This serves as a reminder that quality measures based upon secondary data analysis represent what was documented, which does not always align with what occurred during the clinical visit.
Clearly, the solution is not to forgo clinical data-based quality measures. We instead must encourage, and sometimes enforce, the entry of important structured data. More sophisticated approaches would include natural language processing24 to identify measure elements in free-text notes. We should not, however, make light of the work that will be required to accomplish this. Beyond the simplest rule-based NLP (e.g., looking for the phrase “fluoride varnish”, as sketched below), NLP is analytically complex and requires access to large and varied sources of real-world free-text data.25 In addition to NLP, there is also the potential to infer missing data from elements that are present.26 Ultimately, clinicians should benefit from their documentation efforts, though additional appointment time might be required to complete the thorough documentation needed to capture the necessary data correctly and completely. Through measurement and the sharing of lessons learned, we can work together to live up to the high standards to which the dental profession and our patients hold us.
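A minimal rule-based sketch of such phrase matching is shown below. The negation handling is deliberately naive, and real clinical NLP must contend with abbreviations, misspellings, temporality, and copied-forward text:

```python
import re

VARNISH = re.compile(r"\bfluoride\s+varnish\b", re.IGNORECASE)
# Naive negation: a negating word within ~40 characters before the phrase
NEGATION = re.compile(
    r"\b(no|not|declined|refused|deferred|without)\b[^.]{0,40}fluoride\s+varnish",
    re.IGNORECASE)

def note_suggests_varnish(note_text: str) -> bool:
    """Flag a free-text note as likely documenting fluoride varnish application."""
    return bool(VARNISH.search(note_text)) and not NEGATION.search(note_text)

print(note_suggests_varnish("Fluoride varnish applied to all erupted teeth."))  # True
print(note_suggests_varnish("Mother declined fluoride varnish today."))         # False
```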
Box 1. Meaningful Use Facts.
Meaningful Use Facts
Meaningful Use is a component of the Health Information Technology for Economic and Clinical Health (HITECH) Act, which was itself a component of the 2009 American Recovery and Reinvestment Act.
The HITECH programs have been led by the Centers for Medicare & Medicaid Services and the Office of the National Coordinator for Health Information Technology.
Meaningful Use is defined by the use of certified EHR technology in order to meet defined objectives and quality measures.
Meaningful Use has been rolled out in three stages. The final rules for Stage 1 were introduced in July 2010, and the final rules for Stage 2 were introduced in August 2012. The interim rules for Stage 3 were released in March 2015.
There are incentives for eligible professionals to adopt certified EHRs and to demonstrate Meaningful Use.
If an eligible professional did not begin participation by 2015, there would be negative adjustments to Medicare payments.
Few dental EHRs are certified EHRs.
Oral health quality measures were introduced in Meaningful Use Stage 2.
Acknowledgements
We thank Krishna Kumar Kookal from the Office of Technology Services and Informatics (TSI) at the UTHealth School of Dentistry for assistance in extracting data from the EHR. Research reported in this publication was supported in part by the National Institute of Dental and Craniofacial Research of the National Institutes of Health under Award Number R01DE024166. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Footnotes
Disclosure. None of the authors reported any disclosures.
References
- 1. Dental Quality Alliance. Quality Measurement in Dentistry: A Guidebook. Dental Quality Alliance; 2014. http://www.ada.org/~/media/ADA/Science and Research/Files/DQA_Guidebook_52913.ashx. Accessed October 30, 2014.
- 2. Committee on an Oral Health Initiative. Advancing Oral Health in America. National Academies Press; 2011.
- 3. President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. Quality First: Better Health Care for All Americans, Final Report. 1998.
- 4. How to Attain Meaningful Use. http://www.healthit.gov/providers-professionals/how-attain-meaningful-use.
- 5. Aravamudhan K. Progress in Dental Quality Measurement. American Dental Association; 2014. http://www.nationaloralhealthconference.com/docs/presentations/2014/04-28/Krishna Aravamudhan.pdf.
- 6. Dental Quality Alliance - Organizational Members. American Dental Association; 2015. http://www.ada.org/en/science-research/dental-quality-alliance/ - org. Accessed April 3, 2015.
- 7. American Dental Association, Dental Quality Alliance. Federal Business Opportunities. https://www.fbo.gov/index?s=opportunity&mode=form&id=166ae35bc1554005752d8b856268b08e&tab=core&_cview=0.
- 8. Smith M, Saunders R, Stuckhardt L, McGinnis JM. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. National Academies Press; 2013.
- 9. Gluck ME. Early glimpses of the learning health care system: the potential impact of health IT. AcademyHealth; 2012. http://www.academyhealth.org/files/publications/HIT4AKPotential.pdf. Accessed March 31, 2015.
- 10. Centers for Medicare & Medicaid Services. Clinical Quality Measures for 2014 CMS EHR Incentive Programs for Eligible Professionals. Baltimore, MD: Department of Health and Human Services; 2013. http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/2014_EP_MeasuresTable_June2013.pdf. Accessed May 27, 2014.
- 11. American Dental Association. CDT 2013 Dental Procedure Codes. Chicago, IL: ADA; 2013.
- 12. Dolan P. Why Are EHR Adoption Rates so Low for Dentists? 2013. http://www.healthbizdecoded.com/2013/12/why-are-ehr-adoption-rates-so-lowfor-dentists/
- 13. State Update: Minnesota Moves Ahead on 2015 Mandate for Dental Electronic Health Records. American Dental Education Association. http://www.adea.org/Blog.aspx?id=21354&blogid=20132. Accessed March 31, 2015.
- 14. Walji MF, Kalenderian E, Stark PC, et al. BigMouth: a multi-institutional dental data repository. J Am Med Inform Assoc. 2014;21(6):1136-1140. doi:10.1136/amiajnl-2013-002230.
- 15. HL7 Version 3 Standard: Representation of the Health Quality Measure Format (eMeasure) DSTU, Release 2. Health Level Seven International; 2014. http://www.hl7.org/implement/standards/product_brief.cfm?product_id=97. Accessed October 30, 2014.
- 16. Bader JD. Challenges in quality assessment of dental care. J Am Dent Assoc. 2009;140(12). doi:10.14219/jada.archive.2009.0084.
- 17. Garcia RI, Inge RE, Niessen L, DePaola DP. Envisioning success: the future of the oral health care delivery system in the United States. J Public Health Dent. 2010;70(s1):S58-S65. doi:10.1111/j.1752-7325.2010.00185.x.
- 18. Dental Quality Alliance. Proposed Adult Measures. Chicago, IL: American Dental Association; 2014. http://www.ada.org/~/media/ADA/Science and Research/Files/Adult_Measures_under_consideration.ashx. Accessed June 18, 2014.
- 19. Tokede O, White J, Stark P, et al. Assessing the use of a standardized dental diagnostic terminology in an electronic health record. J Dent Educ. 2013;77(1):24-36.
- 20. Walji MF, Kalenderian E, Piotrowski M, et al. Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR. Int J Med Inform. 2014;83(5):361-367. doi:10.1016/j.ijmedinf.2014.01.010.
- 21. White JM, Kalenderian E, Stark PC, et al. Evaluating a dental diagnostic terminology in an electronic health record. J Dent Educ. 2011;75(5):605-615.
- 22. Kalenderian E, Ramoni RL, White JM, et al. The development of a dental diagnostic terminology. J Dent Educ. 2011;75(1):68-76.
- 23. Goldberg LJ, Ceusters W, Eisner J, Smith B. The significance of SNODENT. Stud Health Technol Inform. 2005;116:737-742.
- 24. Nadkarni PM, Ohno-Machado L, Chapman WW. Natural language processing: an introduction. J Am Med Inform Assoc. 2011;18(5):544-551. doi:10.1136/amiajnl-2011-000464.
- 25. Chapman WW, Nadkarni PM, Hirschman L, et al. Overcoming barriers to NLP for clinical text: the role of shared tasks and the need for additional creative solutions. J Am Med Inform Assoc. 2011;18(5):540-543. doi:10.1136/amiajnl-2011-000465.
- 26. Wright A, Pang J, Feblowitz JC, et al. Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial. J Am Med Inform Assoc. 2012;19(4):555-561. doi:10.1136/amiajnl-2011-000521.
- 26.Wright A, Pang J, Feblowitz JC, et al. Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial. J Am Med Inform Assoc. 2012;19(4):555–61. doi: 10.1136/amiajnl-2011-000521. [DOI] [PMC free article] [PubMed] [Google Scholar]