Abstract
Purpose:
To describe evaluations conducted by 39 state Early Hearing Detection and Intervention (EHDI) programs of the reporting process and system usability experienced by audiologists when reporting hearing test results to the EHDI program, and of the barriers encountered during reporting.
Method:
Each author independently extracted numbers, percentages, and text from the evaluation reports into an Excel spreadsheet, which then became the dataset. The authors then compared and cross-checked the datasets before coding. Text segments conveying similar concepts were coded with the same name and organized into categories. Finally, thematic identification and analysis were performed: themes or concepts that pertained to similar challenges encountered by audiologists were identified and organized under higher-order domains.
Results:
Some audiologists reported no barriers when reporting hearing test results to the state EHDI programs. Among audiologists who did report barriers, the most recurrent was a non-user-friendly data system design. The second most recurrent barrier was busy clinicians lacking adequate administrative time to report data. The third most recurrent barrier was an incomplete understanding of the state EHDI reporting requirements. Finally, the method audiologists were required to use when reporting results also posed challenges, such as the lack of an internet connection in rural areas when reporting via an internet portal was required.
Conclusion:
Because of the wide variety of barriers faced by audiologists, multiple strategies to improve the reporting process would likely be beneficial.
Keywords: reporting hearing results, EHDI program, barriers to reporting, audiologist
All U.S. states and territories have an Early Hearing Detection and Intervention (EHDI) program to help ensure all infants are screened for hearing loss and receive recommended follow-up diagnostic testing and intervention services (National Center for Hearing Assessment and Management [NCHAM], 2020). EHDI programs track and, in some states, coordinate follow-up services for infants who may be deaf or hard of hearing (DHH). Newborns who do not pass their hearing screen are often referred to an audiologist (a licensed provider of hearing evaluation and services) for diagnostic testing by hospital staff or by the state EHDI programs. Audiologists are one of the crucial links in the EHDI surveillance effort because they have information on the hearing status of the newborns they have tested. Unless audiologists report hearing test results to the state EHDI program in a timely manner, service coordination and enrollment into Early Intervention for children who are DHH may be delayed or not completed. It is equally important for audiologists to report normal hearing results, because without these results state EHDI program staff cannot accurately determine which cases no longer require follow-up and coordination. This gap in reported data may result in staff time dedicated to tracking a newborn who does not require service coordination, as well as a downstream effect that leads to an inaccurate estimate of the number of newborns who are DHH.
The importance of clinical providers reporting hearing test results to their state EHDI programs in a timely manner is reflected in statutes enacted by several states (Division of State Government Affairs, American Academy of Pediatrics [AAP], 2014; NCHAM, 2019). Detailed requirements for providers can include how, what, and when to report results to the program responsible for tracking newborns who have not passed their newborn hearing screen. Despite statutes and regulations, not all audiologists may routinely comply. In the only known published study on audiologists’ willingness and compliance in reporting hearing assessment results to the EHDI programs in the United States, of the 1,024 audiology facilities surveyed, 8.6% did not report results to their state EHDI program (Chung, Beauchaine, Grimes, et al., 2017). To date, there are no additional published studies that have attempted to identify barriers encountered by audiologists when reporting hearing assessment results to state EHDI programs.
From 2017 to 2020, the Centers for Disease Control and Prevention (CDC) provided funding to U.S. states and territories to identify and implement approaches to strengthen their program’s capacity to capture complete and accurate data on all infants in need of recommended hearing evaluation and intervention services. Not all states applied for the funding. Funded states and U.S. territories were required to evaluate how acceptable the established reporting process and system was to the users when they reported test results to their state’s EHDI program and any barriers they might have encountered. This article describes the evaluations conducted and their findings.
Method
Evaluation Framework and the Data Source
In September 2017, CDC provided guidelines on the key concept definition and the types of evaluation questions that funded states should use in their process and system evaluation. The key concept, "How acceptable is the EHDI reporting process?", was defined as the willingness of persons or organizations to participate in or use an established reporting method (the process) and the interface portal or reporting form (the data system) when reporting a hearing assessment result. The evaluation questions were standardized as follows: (a) To what extent do audiologists in the state know about reporting and use the established reporting portal or method? (b) Are the reporting portal or other established methods user-friendly? (c) What barriers have prevented audiologists from reporting hearing assessment results? and (d) What are the audiologists' perceptions of the reporting process and system design?
It was important both to standardize how state EHDI programs evaluated program and system barriers to reporting and, at the same time, to allow each program room to modify the approach. The former allowed us to aggregate the evaluation data across multiple states; the latter allowed each program to adapt the approach to suit its unique process. Although process guidance was also provided to states to help reduce variation in the evaluation process, each state could choose a data collection method, such as a survey or interview, that best suited its needs and internal process. The process guidance required programs to (a) engage key stakeholders in the state to assist in the evaluation, (b) choose an evaluation method(s) that could adequately answer the four evaluation/study questions listed above, and (c) disseminate findings as lessons learned to key stakeholders, in addition to reporting evaluation data and results to CDC.
To ensure all key evaluation elements were reported to CDC, states and territories used a CDC-designed report template. The template requested the following information: (a) the key stakeholders engaged and their role in the evaluation, (b) a description of the statutes and regulations on reporting hearing assessment results to the appropriate program, if applicable, (c) a description of the reporting process audiologists should use, (d) the data collection method(s), and (e) the challenges and barriers encountered by audiologists.
By December 2018, 42 funded EHDI programs successfully completed the process and system evaluation. We excluded three evaluation reports from the analysis, as they were from U.S. territories with either no audiologists or only one audiologist to serve an entire community’s hearing care needs. This left 39 evaluation reports for qualitative data coding, thematic identification, and domain analysis.
Qualitative Data Coding and Analysis
We applied an inductive approach to derive explanations from the collected qualitative data, as opposed to a deductive approach, which is used when a hypothesis is developed prior to data collection (Williams, 2019). The grounded theory framework for analyzing and organizing qualitative data was developed by Glaser and Strauss (1967). In this framework, (a) concepts, not data, are the basic units of analysis, and (b) concepts that pertain to the same phenomenon may be grouped to form categories. Coding is the process of classifying and categorizing text data segments into concepts and categories or constructs. Strauss and Corbin developed various ways to code qualitative data (Corbin & Strauss, 1990; Strauss & Corbin, 1990). Analysis and interpretations are grounded solely in the collected data representing the observed phenomenon, which reduces bias.
No computer-aided qualitative data analysis software was used. Each author independently extracted numbers, percentages, and text from the evaluation reports and entered them in an Excel spreadsheet, forming our dataset for analysis. The numbers and percentages reflected the number of audiologists who had participated in the evaluation and who had encountered barriers when reporting hearing assessment results. The text described the stakeholders who assisted with the evaluation, the evaluation method used, and the audiologists' perceptions of the challenges and barriers when reporting hearing assessment results to the EHDI program. Both authors compared the datasets to ensure the data were the same before proceeding to open coding, a process of identifying concepts related to the phenomenon of interest expressed in a text (Medelyan, 2019). Words, phrases, and sentences that conveyed the same meaning or concept were coded, or tagged, as the same (Guest & McLellan, 2003). For example, comments such as "busy," "no time," and "no time for administrative tasks" were all coded as "no time" because they conveyed the same meaning. Labeling same-meaning comments with a single code, such as "no time," "password reset issue," "non-user-friendly design," or "internet connection issue," also facilitated counting how many times a comment recurred. The coded comments were organized into categories: stakeholder type, stakeholder role, the reporting process created by the EHDI program, type of evaluation method used, survey response rate, and type of barriers reported by audiologists. Each author coded independently; the results were compared, and differences were discussed and resolved before moving to thematic identification and analysis.
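To make the open-coding and counting step concrete, the short Python sketch below shows how synonymous comments could be mapped to a shared code and tallied. This is a minimal illustration only: the codebook entries and sample comments are hypothetical stand-ins, not the actual codebook used in this study (the authors coded manually in Excel).

```python
from collections import Counter

# Illustrative codebook: each phrase that conveys the same meaning maps
# to a single code (hypothetical entries, not the study's codebook).
CODEBOOK = {
    "busy": "no time",
    "no time": "no time",
    "no time for administrative tasks": "no time",
    "have to sign in twice to access the system": "system access issue",
    "takes state it too long to reset expired password": "system access issue",
    "navigation tab very complicated": "non-user-friendly design",
}

def code_comment(comment: str) -> str:
    """Map a raw comment to its code; unmatched comments are flagged for review."""
    return CODEBOOK.get(comment.strip().lower(), "UNCODED: review manually")

# Example comments, paraphrased from the kinds of responses described above.
comments = [
    "Busy",
    "No time",
    "No time for administrative tasks",
    "Have to sign in twice to access the system",
]

# Counting each coded comment as 1 yields the recurrence frequency per code.
frequencies = Counter(code_comment(c) for c in comments)
print(frequencies)  # Counter({'no time': 3, 'system access issue': 1})
```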
The intent of thematic analysis is to identify concepts that come up repeatedly in a qualitative dataset (Nowell et al., 2017). Each author independently reviewed the meaning of each audiologist's comments to identify themes that could connect certain comments together. Since all audiologists' comments were already labeled with a code, such as "no time," "password reset issue," "non-user-friendly design," or "internet connection issue," the codes also helped to identify themes. For example, some audiologists reported "system sign-in very cumbersome," "have to sign in twice to access the system," or "takes state IT too long to reset expired password," all of which point to the recurrent theme that system access was a barrier to reporting. Because the number of times certain types of comments recurred had been quantified during the previous step, the authors could also gauge the frequency of each theme. Both authors compared and resolved any differences in the themes identified before moving to the final phase, selective coding, in which themes were further unified around a core. Selective coding usually occurs in the later phase of a qualitative data analysis (Corbin & Strauss, 1990; Williams, 2019). The first author analyzed the 10 themes identified in the previous step to find a higher-order domain, or core, that the themes could be subsumed under. For example, the following four themes could be subsumed under the system design domain: system access issue, system reliability, issues locating the right patient file, and non-user-friendly design. See Table 1 for the qualitative data review process and results; an illustrative sketch of the theme-to-domain rollup follows.
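Continuing the sketch above, the selective-coding step can be pictured as rolling theme-level counts up into higher-order domains. The theme-to-domain mapping below mirrors the consolidation shown in Table 1, but the counting logic and the example frequencies are illustrative assumptions, not the authors' actual procedure.

```python
from collections import Counter

# Theme-to-domain mapping, mirroring the consolidation in Table 1.
THEME_TO_DOMAIN = {
    "difficulty accessing system": "system design",
    "system reliability": "system design",
    "difficulty locating patient in the system": "system design",
    "non-user-friendly design": "system design",
    "work demand": "work demands & healthcare environment",
    "assumptions about reporting": "work demands & healthcare environment",
    "incomplete knowledge on reporting requirement": "incomplete knowledge and resource",
    "lack resource/tool": "incomplete knowledge and resource",
    "process issue": "process barrier",
    "perception that reporting is a duplicate effort": "process barrier",
}

# Hypothetical theme frequencies standing in for the open-coding counts.
theme_counts = Counter({
    "difficulty accessing system": 11,
    "non-user-friendly design": 30,
    "work demand": 31,
    "process issue": 6,
})

# Sum theme frequencies within each higher-order domain.
domain_counts = Counter()
for theme, count in theme_counts.items():
    domain_counts[THEME_TO_DOMAIN[theme]] += count

print(domain_counts.most_common())
# [('system design', 41), ('work demands & healthcare environment', 31),
#  ('process barrier', 6)]
```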
Table 1.
Thematic Analysis and Coding Process for Audiologists' Perceptions of the Challenges in Reporting Hearing Assessment Data to State EHDI Programs
| First Step: Coding and Counting Comment Frequency | Second Step: Thematic Analysis | Final Step: Theme Consolidation under a Domain |
|---|---|---|
| Coding qualitative data and computing the frequency of certain types of comments | Identifying concepts that come up repeatedly in a qualitative dataset | Subsuming related thematic categories under a higher-order domain |
| Comments such as "no time" or "busy" were coded as "no time" because both terms conveyed the same meaning. Each comment that reflected having no time to report was counted as 1. | 10 themes were identified from the coded qualitative comments: 1) Difficulty accessing system; 2) System reliability; 3) Difficulty locating patient in the system; 4) Non-user-friendly design | Themes 1–4: System design domain |
| Although "unaware of reporting," "unaware that I need to report normal result," and "don't know how to report" all reflected a lack of knowledge, the type of knowledge lacking differed in each comment. The comments were therefore kept separate but placed in the same category: knowledge lack. Again, each comment reflecting a lack of knowledge from a responder was counted as 1. | 5) Work demand; 6) Assumptions about reporting in a fractured healthcare environment | Themes 5–6: Work demands & healthcare environment domain |
| | 7) Incomplete knowledge on reporting requirement; 8) Lack of resource/tool | Themes 7–8: Incomplete knowledge and resource domain |
| | 9) Process issue; 10) Perception that reporting is a duplicate effort | Themes 9–10: Process barrier domain |
Results
Reporting Process and System Evaluation
When conducting their evaluations, state EHDI programs engaged diverse stakeholders. The number of stakeholders who assisted ranged from 3 to 12 and included staff from other departments, such as the state licensure board, as well as epidemiologists. When designing their evaluation, many EHDI programs also engaged community stakeholders, such as audiologists from their own state. State EHDI programs and stakeholders worked collaboratively to design questions for a survey, focus group, or structured interview.
Audiologists were the target population, and EHDI programs compiled lists of audiologists from different sources. Some programs targeted audiologists who had previously reported to the EHDI program. Several programs targeted the audiologists to whom they routinely referred newborns for audiologic assessment, while other programs obtained a list of audiologists from the EHDI-Pediatric Audiology Links to Services website (http://ehdipals.org; Chung, Beauchaine, Hoffman, et al., 2017) or from their state's licensure board. Only two programs targeted audiologists attending local conferences.
Data collection methods implemented by state EHDI programs also varied. Slightly more than half (56%, n = 22) of the EHDI programs used one method to collect audiologists' experiences, while the remaining 44% used multiple methods (Table 2). When multiple data collection methods were used, a survey was typically done first, followed by a structured phone interview or an in-depth focus group. Most of the state programs (66%, n = 26) used surveys to collect audiologists' experiences and perceptions. In the surveys, EHDI programs used a combination of open text fields and a multiple-choice format to capture audiologists' comments. A majority of the state EHDI programs posted their surveys online and contacted audiologists via e-mail to complete the survey. Survey response rates ranged from 10% to 100% (median 55%, mean 54%; Table 3); higher response rates were achieved by surveying regional audiology conference attendees.
Table 2.
Data Collection Methods Used by State EHDI Programs When Evaluating Audiologists' Perceptions of the Reporting Process
Number of state EHDI programs | N = 39 |
---|---|
Used only one method | 22 (56%) |
Survey (online, by phone, or onsite at audiology conference) | 21 |
Focus group (in-person) | 1 |
Used multiple methods | 17 (44%) |
Online survey followed by structured phone interview | 8 |
Survey (online, phone, or onsite at audiology conference) followed by a focus group | 5 |
Structured phone interview followed by an in-person focus group | 1 |
Online survey followed by structured phone interview and an in-person focus group | 3 |
Table 3.
Survey Response Rate of Audiologists and Number of States where Audiologists Reported No Barriers to Reporting
Survey Response Rate of Audiologists

| Number of EHDI programs (N = 26) | Response rate of audiologists |
|---|---|
| 10 | 60–100%* |
| 9 | 40–59% |
| 7 | < 40% |

Percent of Audiologists Who Reported No Barriers to Reporting

| Number of states where audiologists reported no barriers (n = 13) | Percent of audiologists reporting no barriers** |
|---|---|
| 3 | 81–100% |
| 3 | 61–80% |
| 5 | 41–60% |
| 1 | 21–40% |
| 1 | 0–20% |

*When the survey was conducted in person at a conference or when there were only a small number of audiologists (fewer than 20) serving children in the state, the response rate was higher (80–100%).
**Range 19 to 100%; median 50%; mean 58%.
Reporting Methods Audiologists Can Use
Most of the state EHDI programs (64%, n = 25) implemented a secure, password-protected online portal or interface for audiologists to report hearing assessment results. To report hearing assessment results via the portal, each audiologist had to request system access from the EHDI program. In 19 (48%) states, the EHDI programs asked audiologists to fax a hearing result form to the program. Two EHDI programs implemented other, less labor-intensive reporting alternatives for audiologists. Both programs signed a data sharing agreement with the hospital so that program staff could access only a limited area of the electronic medical record to extract hearing assessment data. Additionally, one of these programs also allowed audiologists to upload their diagnostic reports to the online portal.
Audiologists' Perceptions of Reporting Hearing Results to State EHDI Programs
The number of audiologists reporting barriers versus no barriers varied across participating states. In 13 states, some percentage of audiologists reported no barriers at all (Table 3), but in only 6 of these states did a large percentage of audiologists (> 60%) report encountering no barriers (range 19–100%, median = 50%, mean = 58%). Among audiologists who encountered barriers when reporting hearing results, 10 themes emerged from our qualitative data analysis (Table 4). The 10 themes could be further condensed into four domains. The most frequently reported barrier (58 times) was a non-user-friendly system design. The second most reported barrier (36 times) was related to the demands on a clinician: audiologists were busy, often commenting that they did not have adequate time to report hearing results. The third most reported barrier (32 times) was a lack of knowledge of, or incomplete understanding of, state reporting requirements. Finally, and to a lesser extent, issues with the reporting method, such as faxes not going through or no internet connection to access the online reporting portal, were reported 13 times.
Table 4.
Results of the Thematic and Domain Analysis of Audiologists' Perceptions When Reporting Hearing Assessment Results to State Early Hearing Detection and Intervention (EHDI) Programs

| Domains and Themes | Frequency of comment |
|---|---|
| Domain I Barrier: Inherent to the system design | n = 58 |
| Theme 1 — Reporting system access issue. Sample comments: "Sign-in process cumbersome"; "Must sign in twice"; "Takes state IT too long to reset expired password" | 11 |
| Theme 2 — System reliability/stability. Sample comments: "Data were not saved properly"; "Fax not going through or fax not receiving" | 7 |
| Theme 3 — Locating the right patient in the reporting system. Sample comments: "Poor search function so finding the right child is difficult"; "Child's name often changes after hospital discharge and reporting system requires exact name and date of birth match and I don't have the birth name" | 10 |
| Theme 4 — Non-user-friendly design. Sample comments: "Navigation tab very complicated"; "Reporting form or reporting page too complicated"; "Neonatal intensive care and well-baby child records are located in two separate systems" | 30 |
| Domain II Barrier: Related to work demands on a clinician and the healthcare environment | n = 36 |
| Theme 5 — Work demands. Sample comments: "Too busy"; "No time to report because no time was set aside for paperwork"; "Short staffed"; "No financial incentive; reporting reduces time to generate income" | 31 |
| Theme 6 — Assumptions about the need to report related to the care environment. Sample comments: "Assume other audiologists have reported because patient has visited another clinic"; "Patients were seen by different audiologists so likely others have reported" | 5 |
| Domain III Barrier: Related to incomplete knowledge on the reporting requirement and a lack of helpful tools | n = 32 |
| Theme 7 — Incomplete knowledge on the requirement and the process. Sample comments: "Did not know I need to report normal hearing result"; "Unaware that a reporting requirement exists"; "Don't know when or how to report" | 27 |
| Theme 8 — Lack of helpful tools. Sample comments: "No access to EHDI data system to determine which patients require reporting"; "Law requires me to report only infants who failed"; "No access to database to find out which infant has failed" | 5 |
| Domain IV Barrier: Inherent to the reporting process | n = 13 |
| Theme 9 — Access to a workable process. Sample comments: "No computer/internet access because no internet coverage"; "Clinic computer not compatible with the reporting portal" | 6 |
| Theme 10 — Duplicate effort/task. Sample comments: "Must enter data in patient's chart and also for the EHDI program"; "Have to enter data in 3 separate databases; confusing and increases workload" | 7 |
Discussion
Each state has its own unique EHDI data reporting system, some more user-friendly than others. The wide range of audiologists reporting no challenges (19–100%; Table 3) may be a result of this variation across state reporting systems. The most recurrent barrier (reported 58 times) was a non-user-friendly reporting system. The non-user-friendly design covered all areas of the reporting system, such as logging on, finding the right child record, and entering and saving data. The following comments from respondents illustrate the different kinds of system design issues:
Neonatal intensive care unit and well-baby in 2 systems. Have to log into two systems to report
Poor search function, so difficult to find child
Difficulty in navigating the reporting tabs
Diagnosis codes audiologists required to use difficult and non-intuitive
Takes too long to enter all required fields
Certain data could not be entered accurately
System unreliable, reported results not saved
Some of these difficulties could be encountered by audiologists who were not frequent users, but some challenges truly reflected a system design issue irrespective of user comfort level (e.g., “order of reporting tabs not logical,” “unsure how to input certain data,” “certain data could not be entered accurately,” “takes too long to enter all required fields,” and “child can have three separate profiles in three different databases. Do not have access to all databases to locate child;” see Table 4).
The second most recurrent barrier (reported 36 times) was related to the demands on a clinician. The primary duty of an audiologist is patient care. Beyond patient care, other non-direct patient care duties require a clinician's time, such as dictating evaluation reports to referring physicians, returning patient phone calls, obtaining healthcare insurance authorization for hearing aids on behalf of the patient, and ordering hearing aids or earmolds. These non-direct patient care duties were usually done at the end of the day or when a patient did not show for their appointment. Given that limited or no time is allocated during a workday for non-patient care tasks, audiologists must prioritize. We hypothesize that tasks that directly impact patient care rise to the top, ahead of other duties. Reporting hearing assessment results to the EHDI program is not a patient care task. It could be beneficial for EHDI programs to demonstrate to audiologists how reporting may improve patient care.
Another barrier related to the patient care environment was a lack of communication among clinicians from different clinics. Due to this lack of communication, clinicians likely made certain assumptions. Several audiologists commented along these lines: "Patient has been seen by other audiologists. I assume others have reported." This assumption was also reported by Chung, Beauchaine, Grimes, et al. (2017). It was not unusual for parents to seek a second opinion by visiting more than one clinic. Chung and colleagues reported that 5.4% of the surveyed clinics stated that not all hearing assessment results were reported to the EHDI program; one reason was that audiologists assumed the clinicians who completed the initial assessment had already reported results to the EHDI program.
In the Chung, Beauchaine, Grimes, et al. (2017) study, the authors found that 8.6% of the surveyed clinics did not know how to report. We also found this lack of knowledge of the reporting requirement and process; it was the third most recurrent theme. Audiologists reported that they were not aware that a requirement to report existed and were unsure when, what, and how to report, as evident in the following comments: "did not know I need to report normal hearing results," "unsure which case and what to report," and "don't know how to report." Audiologists also commented on a lack of helpful resources or tools that would assist them in reporting hearing assessment results, as evident in the following comments: "The law mandates reporting only infants that don't pass hearing screens. Lack access to the knowledge of which infant has not passed," and "no hearing screening result to help me decide if reporting is required." These barriers all point to the need to strengthen training and to provide audiologists with access to critical data that would facilitate reporting hearing results to the EHDI program.
Some audiologists also encountered barriers with the reporting process they were required to follow when reporting a hearing assessment result. This process-related barrier was reported only 13 times. For online reporting, audiologists commented that some clinics in rural areas had no internet coverage, that their computer was not compatible with the reporting portal, or that they had no access to a computer. In states where audiologists were required to report by fax only, audiologists commented that the fax often did not go through. Another process-barrier theme was duplication of effort. In addition to notating the patient encounter and results in their medical record and dictating an assessment report for the referring physicians each day, audiologists also had to enter the same information again in the EHDI reporting portal or complete a result form and fax it to the program. Besides being perceived as a duplicate effort, reporting results was also perceived as labor-intensive by some audiologists who were required to use an online portal. The following comment illustrates this perception: "Reporting online could only be done by an audiologist. It would have been helpful if faxing an assessment report was permitted because a support staff could assist."
Since the barriers encountered by audiologists spanned multiple domains, a multi-pronged approach to improving the reporting process would likely be most efficacious. Foremost, working to reduce the burden of data entry on audiologists and minimizing duplicate effort would likely be beneficial. Improving the online reporting portal should also be considered and should, ideally, include feedback from audiologists through user testing to help ensure that the reporting system is intuitive and user-friendly. Allowing audiologists access to other child health data that benefit patient care could improve audiologists' participation in the EHDI process. Finally, recurrent training should be offered, covering who, when, what, and how to report hearing assessment results, regardless of whether the audiologists have been previously trained.
There are several limitations to this study. The qualitative data collected by the EHDI programs might overrepresent audiologists whose caseloads were predominately children. Audiologists who saw children less frequently might have faced different challenges. However, barriers reported by audiologists whose caseloads were predominately children should carry greater weight when EHDI programs seek to improve the reporting process, since these audiologists would be frequent users. Although we standardized the evaluation questions, EHDI programs might have interpreted the questions differently, which could have influenced how the questions were posed to audiologists. To help mitigate this possibility, CDC provided definitions for key terms, such as acceptability, and reviewed each program's evaluation plan before the evaluation was executed.
Another limitation was the variety of ways EHDI programs collected the evaluation data and determined the pool of audiologists to target. Slightly more than half (56%, n = 22) of the EHDI programs used one method to collect audiologists' experiences, while the remaining 44% used multiple methods (Table 2). Some programs used licensure board information to determine the pool of audiologists to target, while others targeted audiologists who had previously reported to the EHDI program. This variability created a weakness, as the results might not generalize to all audiologists. On the other hand, allowing the EHDI programs some flexibility in how the evaluation was conducted was considered important. For example, some EHDI programs vetted clinics to ensure the clinics had the equipment and capable personnel to evaluate newborns, toddlers, and young children, since the equipment needed to evaluate the different age groups varies. If the funding evaluation guidance had required states to target all licensed audiologists in the state, it would not have been appropriate for states that require only vetted clinics to report. Likewise, if we had required states to use only one data collection method, such as a focus group format, it would have been impractical for EHDI programs to collect feedback from audiologists located in rural or frontier areas. Despite this variability in evaluation methods, we found convergence of key themes and issues encountered by audiologists across the 39 states.
Despite these limitations, there were several strengths. First, when an EHDI program chose to use a survey to collect audiologists' perceptions, the response rate was generally high; only seven EHDI programs had fewer than 40% of surveys returned. Second, there was a high degree of convergence in the qualitative data regarding key themes and issues encountered by audiologists across the 39 states, in addition to convergence with the findings of the Chung, Beauchaine, Grimes, et al. (2017) study. Although the reporting system varies from state to state, the barriers and challenges encountered by audiologists were similar across states; we did not encounter any barrier that was unique to only one state. Independent data coding by each author, with repeated comparison and resolution of differences before moving to the next stage of data analysis, helped improve consistency in data interpretation and analysis.
Conclusion
Audiologists described barriers to reporting results. Even though the reporting system varies from state to state, the identified barriers were similar across states. A non-user-friendly design was the major challenge reported by participating audiologists. In addition, audiologists noted in their survey responses that reporting hearing results was not a direct patient care task; it was, instead, perceived as labor-intensive and a duplication of effort. In a busy clinical environment, many audiologists found it difficult to prioritize public health reporting of hearing assessment data. In addition, parents often sought second opinions by visiting more than one clinic, and audiologists from different clinics did not routinely communicate with each other. When parents told the audiologist that their child had previously been seen by another audiologist at another clinic, some audiologists assumed the hearing results had already been reported. Furthermore, some audiologists were unaware of the procedures for reporting hearing assessment results in their state. Assumptions and lack of awareness could be remedied by training, as well as by clarifying when and how to report results. Due to the wide spectrum of barriers, a multi-pronged improvement strategy that includes soliciting audiologist feedback to improve the online reporting portal, working with audiologists to address identified reporting barriers, and providing additional training may be helpful for state EHDI programs looking to improve their reporting process.
Acknowledgement:
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Acronyms:
- AAP
American Academy of Pediatrics
- CDC
Centers for Disease Control and Prevention
- DHH
deaf or hard of hearing
- EHDI
Early Hearing Detection and Intervention
- NCHAM
National Center for Hearing Assessment and Management
References
- Chung, W., Beauchaine, K. L., Grimes, A., O'Hollearn, T., Mason, C., & Ringwalt, S. (2017). Reporting newborn audiologic results to state EHDI programs. Ear and Hearing, 38(5), 638–642.
- Chung, W., Beauchaine, K. L., Hoffman, J., Coverstone, K., Oyler, A., & Mason, C. (2017). Early hearing detection and intervention–pediatric audiology links to services (EHDI-PALS): Building a national facility database. Ear and Hearing, 38(4), e227–e231. https://doi.org/10.1097/AUD.0000000000000426
- Division of State Government Affairs, American Academy of Pediatrics. (2014). State early hearing detection and intervention laws and regulations. Retrieved February 14, 2020, from https://www.aap.org/en-us/Documents/pehdic_ehdi_%20state_requirements.pdf
- Glaser, B., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research (1st ed.). Aldine Publishing.
- Guest, G., & McLellan, L. (2003). Distinguishing the trees from the forest: Applying cluster analysis to thematic qualitative data. Field Methods, 15(2), 186–201. https://doi.org/10.1177/1525822X03015002005
- Medelyan, A. (2019). Coding qualitative data: How to code qualitative research. https://getthematic.com/insights/coding-qualitative-data/
- National Center for Hearing Assessment and Management. (2019). Enacted universal newborn hearing screening legislation. Retrieved August 31, 2020, from www.infanthearing.org/legislative/mandates.html
- National Center for Hearing Assessment and Management. (2020). Early hearing detection and intervention/universal newborn hearing screening program by state. Retrieved February 14, 2020, from https://www.infanthearing.org/stateguidelines/index.php
- Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16, 1–13. https://doi.org/10.1177/1609406917733847
- Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques (2nd ed.). Sage Publications.
- Williams, G. (2019). Applied qualitative research design (1st ed.). Ed-Tech Press.