Author manuscript; available in PMC: 2022 Jan 1.
Published in final edited form as: Ear Hear. 2021 Nov-Dec;42(6):1499–1507. doi: 10.1097/AUD.0000000000001041

Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age

Jan-Willem A Wasmann 1, Cris P Lanting 1, Wendy J Huinck 1, Emmanuel AM Mylanus 1, Jeroen WM van der Laak 2,3, Paul J Govaerts 4, De Wet Swanepoel 5, David R Moore 6,7,8, Dennis L Barbour 9
PMCID: PMC8417156  NIHMSID: NIHMS1671284  PMID: 33675587

Abstract

The global digital transformation enables computational audiology for advanced clinical applications that can reduce the global burden of hearing loss. In this paper, we describe emerging hearing-related artificial intelligence applications and argue for their potential to improve access, precision, and efficiency of hearing health care services. Also, we raise awareness of risks that must be addressed to enable a safe digital transformation in audiology. We envision a future where computational audiology is implemented via interoperable systems using shared data and where health care providers adopt expanded roles within a network of distributed expertise. This effort should take place in a health care system where privacy, responsibility of each stakeholder, and patients’ safety and autonomy are all guarded by design.

Keywords: Artificial Intelligence, Big Data, Computational Audiology, Digital Hearing Health Care, Hearing Loss, Machine Learning, Computational Infrastructure

Introduction

The estimated number of individuals suffering from disabling hearing loss has been growing ever since global reporting began (Vos et al., 2016; World Health Organization, 2019), with WHO projections reaching 900 million by 2050 (World Health Organization, 2019). Besides effects on interpersonal communication, psychosocial well-being, and quality of life, hearing loss has a substantial socio-economic impact (Olusanya et al., 2014; World Health Organization, 2017). Conservative estimates suggest that the overall global annual cost of unaddressed hearing loss is 750–790 billion US dollars (World Health Organization, 2017). In children, hearing loss restricts language development, often resulting in a lasting effect on social and cultural engagement and unfulfilled educational potential. In adults, hearing loss leads to higher unemployment, missed workdays, and social isolation (Kramer et al., 2006). Hearing loss is further associated with more rapid cognitive decline and increased occurrence of dementia-like symptoms (Livingston et al., 2017). Evidence is growing that timely intervention, including hearing aids, can reduce many of these consequences (Maharani et al., 2018).

The actual problem could be even greater, underscoring the need for the computational approaches we introduce below. Mild hearing loss (20–34 dB Hearing Level, or HL), which is 2–3 times more prevalent than moderate or more severe loss (>35 dB HL), has recently been recognized as an adverse factor in daily life (according to the new GBD 2010 classification of grades of hearing loss; Shield, 2019; B. S. Wilson et al., 2017). Hearing loss is arguably the most prevalent of all impairments in years lived with disability (YLDs; Vos et al., 2016) if we include all known pathologies that currently have no clinical consequences for rehabilitation. Examples include slight or minimal hearing loss (15–20 dB HL; Moore et al., 2020), extended high-frequency loss (8–20 kHz; Motlagh Zadeh et al., 2019), and suprathreshold deficits related to understanding speech in noisy situations (Kollmeier & Kiessling, 2018).

Existing audiological services cannot address the global burden of hearing loss due to inherent barriers, including a dearth of trained professionals, equipment costs, and required expertise (Swanepoel & Clark, 2019). New approaches that transcend current models of practice are essential to overcome global access challenges. Computational augmentation, enhancing and complementing human capabilities by digital tools (H. J. Wilson & Daugherty, 2018), is an essential strategy given the lack of enough qualified human experts in ear and hearing care worldwide (World Health Organization, 2013), the large number of people suffering from hearing loss that is currently underserved, and the growing complexity of high-quality diagnostics and therapeutics.

Computational approaches are enabled by significant global developments, including growing computational power, data storage, and artificial intelligence, a paradigm shift referred to as the fourth industrial revolution (Schwab, 2016). An essential enabler for this digital transformation is the exponential growth in internet connectivity in almost every country, exemplified by the broadband subscription penetration in Africa (currently 81%; Jonsson et al., 2019). Continued growth is expected worldwide as 4G and 5G mobile networks become increasingly available. Another catalyst is tech companies entering the medical market, applying expertise in algorithms and big data to health problems. There is also a trend towards the “quantified self”, which encourages the continuous use of personal tracking devices and stimulates the development of future generations of personal (in-ear) electronics that monitor stress, mental effort, and mental well-being (Crum, 2019).

Other clinical disciplines have implemented computational approaches to parts of the clinical care pathway, but this has not yet resulted in a paradigm shift in health care (Rajkomar et al., 2019). To give a few examples, the field of ophthalmology has adopted the use of automated diagnostic data collection hardware (Bizios et al., 2011). Radiology has begun adopting computational image segmentation for automated diagnoses (Hosny et al., 2018). Genotype information is standardized to evaluate patient health and effective cancer treatment (Benson et al., 2012). Also, mobile phones are becoming standard tools in many disciplines, including diabetes management (Thabit & Hovorka, 2016) and dermatologic diagnoses (Ashique et al., 2015), among many other applications. These are examples of computational approaches for diagnosis, self-evaluation, and treatment. However, these components have each developed in different fields; there is no clear indication that all of them have yet been applied within a single field. Therefore, if clinical audiology adopts most of the principles defining computational audiology, it could become a standard-bearer for modern clinical care delivery. In this perspective paper, we sketch out how computational approaches may further develop audiology and illustrate fundamental advances in diagnosis, therapy, and rehabilitation that could become essential elements in a comprehensive digital transformation of clinical audiology.

Definition and examples of computational audiology that may improve precision

Audiology is an exceptionally strong candidate for computational augmentation and may benefit from current and emerging computational methods because of its strong mechanistic theory, numerical nature, measurement-driven procedures, and the multitude of clinical decisions to be made. Here, we introduce the term computational audiology, which we define as:

computational audiology.

The approach to diagnosis, treatment, and rehabilitation in audiology that

  • uses algorithms and data-driven modeling techniques, including machine learning and data mining, to generate diagnostic and therapeutic inferences and to increase knowledge of the auditory system;

  • leverages current biological, clinical and behavioral theory and evidence;

  • provides or augments actionable expertise for patients and care providers.

The readily quantifiable nature of audiological procedures makes audiology well suited for modern machine learning and data collection techniques. Translational reasons to apply computational techniques in audiology include (i) improved accuracy, increased speed, and wider application of (diagnostic) tests and evaluations (applied to, e.g., audiometry; Schlittenlacher et al., 2018b); and (ii) objective and consistent interventions, outcomes, and decisions across clinicians and clinics (applied to, e.g., CI fitting; Meeuws et al., 2017). Over time, algorithms can become more sophisticated and take over tasks now performed by humans, or take on tasks that are currently not performed due to a lack of resources, time, or clinical consequences, including screening for milder forms of hearing impairment. Computational audiology can improve care by dealing with multifactorial data, including indices of psychosocial well-being, quality of life, co-morbidity, and patient-centered, individual descriptors of complaints and symptoms. For example, Palacios et al. (2020) used an unsupervised learning approach to study the heterogeneity of patients suffering from tinnitus by analyzing the complaints and symptoms described in an online patient forum. In addition to deterministic methods, computational audiology also facilitates the use of probabilistic methods that incorporate uncertainty and likelihood to cope with the wide variability across hearing-impaired individuals.

The application of algorithms in audiology is not new. Historically, it has been restricted mainly to cohort-level inference, for example, in understanding the incidence and degree of hearing loss in the general population (Mościcki et al., 1985), and the prescription of sound-amplification for different types and degrees of hearing loss (Byrne et al., 2001). Individual refinement based on learning systems could be a promising way forward but raises many challenges to perform in an evidence-based manner (Barbour, 2018).

Diagnostics.

In general, diagnostic procedures in audiology consist of a sequence of psychometric and physiologic tests. Clinicians may benefit from computational augmentation because they need to deal with uncertainty, time constraints for testing, and the individual features of the patient. Clinical experts will typically evaluate test results visually and from summary statistics (e.g. average HL), which requires skill and experience but also introduces subjective variability in interpretation, restricts estimates on the certainty of the overall outcome, and impedes more advanced (multifactorial) analysis, which is difficult for humans (Kahneman, 2011).

Limited time for testing is arguably the most significant constraint in collecting high-quality multi-dimensional data for an individual patient. However, machine learning allows, in principle, for flexible, efficient estimation tools that do not require excessive testing time. In an approach known as active learning, new computational tools actively determine which stimuli would be most valuable to deliver to converge onto an accurate estimate rapidly. Active learning was recently applied to diagnostic tests including basic audiometry (Barbour et al., 2019a; Schlittenlacher et al., 2018b), determination of the edge frequency of a high-frequency dead region in the cochlea (Schlittenlacher et al., 2018a), and hearing aid personalization (Nielsen et al., 2014). Also, when multiple factors that share some relationship are available, an active learning method can learn and exploit the relationships in real-time. For instance, data from the National Institute for Occupational Safety and Health (NIOSH) database (Masterson et al., 2013) has been deployed as Bayesian “prior beliefs” to assess the similarity between ears of 1 million participants. A bilateral audiogram procedure that uses these priors speeds up testing considerably (Barbour et al., 2019b).
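The active-learning loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the procedure of Barbour et al. or Schlittenlacher et al.: the function names, grid resolution, and logistic psychometric function are our own simplifications, and expected information gain is approximated by uncertainty sampling (presenting the level whose predicted detection probability is closest to 0.5).

```python
import numpy as np

def psychometric(level_db, threshold_db, slope=0.2):
    """Probability of detecting a tone at level_db given a threshold (logistic)."""
    return 1.0 / (1.0 + np.exp(-slope * (level_db - threshold_db)))

def run_active_audiometry(true_threshold_db, n_trials=25, seed=0):
    """Grid-based Bayesian active learning for a single-frequency threshold.

    Maintains a posterior over candidate thresholds and, on each trial,
    presents the level with maximal response uncertainty (predicted
    detection probability closest to 0.5), a common proxy for maximal
    expected information gain.
    """
    rng = np.random.default_rng(seed)
    thresholds = np.arange(-10, 101, 1.0)      # candidate thresholds (dB HL)
    posterior = np.ones_like(thresholds)
    posterior /= posterior.sum()               # flat prior
    levels = np.arange(-10, 101, 5.0)          # presentable stimulus levels

    for _ in range(n_trials):
        # Predicted detection probability per level under the current posterior
        p_detect = psychometric(levels[:, None], thresholds[None, :]) @ posterior
        level = levels[np.argmin(np.abs(p_detect - 0.5))]  # most informative level
        heard = rng.random() < psychometric(level, true_threshold_db)
        # Bayesian update of the posterior over thresholds
        likelihood = psychometric(level, thresholds)
        posterior *= likelihood if heard else (1.0 - likelihood)
        posterior /= posterior.sum()

    return float(thresholds @ posterior)       # posterior-mean threshold estimate
```

Because each trial is placed where the model is most uncertain, the posterior concentrates around the true threshold far faster than with a fixed staircase, which is the efficiency gain active learning offers in the clinic. Priors learned from population data (as in the NIOSH example) would simply replace the flat prior above.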

Principles of computational audiology may be applied to current research and clinical issues. For example, machine learning approaches to image analysis of eardrum otoscopy demonstrate the potential to supplement audiological tests with a diagnosis of possible outer and middle ear pathology (Cha et al., 2019; Myburgh et al., 2018). With reported accuracies between 81% and 94% and options for capturing images and receiving a diagnosis via mobile phone-based otoscopy, these approaches provide direct feedback to the clinician and could therefore allow point-of-care interventions and optimize current care (Cha et al., 2019; Myburgh et al., 2018).

Combining self-reported hearing difficulty and genetic data may reveal candidate genes for hearing loss. Such a procedure, applied to data from 250,000 people, identified 44 new genetic loci potentially associated with hearing loss (Wells et al., 2019). Individual, patient-centered (hearing) health care could become more comprehensive by collecting more extensive hearing profiles combined with other patient characteristics beyond the audiogram (Sanchez Lopez et al., 2018). For example, the genetic profile (Hildebrand et al., 2009) can be used to better differentiate the various underlying causes of hearing loss (Dubno et al., 2013). A probabilistic interpretation of a patient profile can be further refined using auditory modeling (Verhulst et al., 2018) and AI and, among other applications, form the basis for prognosis. Knowing the underlying pathology is paramount for determining a specific target therapy or rehabilitation strategy. By combining these examples, audiology may become a prime example of precision medicine.

Rehabilitation.

When fitting cochlear implants or hearing aids, machine learning may help clinicians optimize parameters by minimizing a cost function. A recently developed clinical decision support system calculates a utility function based on a weighted combination of outcome measures (Meeuws et al., 2017). The utility function is continuously updated as the system learns from previous outcomes. The system also incorporates active learning by determining which of the collected outcomes are most clinically useful. Such a system can oversee the effect of considerably more fitting parameters than those commonly adjusted by audiologists. It can be used to make more accurate predictions of the expected outcome, enable cost-benefit evaluation by reducing the time needed by a trained professional to perform tests, and facilitate a more standardized CI fitting (Meeuws et al., 2017). In the future, the system might be extended to individualized cochlear implant surgery based on high-resolution medical images of the cochlea (Heutink et al., 2020). Also, users’ preferences can be collected to make data-driven, individual adjustments to their cochlear implant or hearing aid. The internet of things provides suitable interfaces for users to provide feedback under ecologically valid circumstances (e.g. Ecological Momentary Assessments, EMA; Wu et al., 2015), but also provides tools that monitor behavior that could serve as a proxy to derive user preferences (Johansen et al., 2018).
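The core of such a decision support system, a utility function over weighted outcome measures, can be sketched as follows. The outcome measures, weights, and setting labels here are hypothetical illustrations; the actual measures and weighting scheme of Meeuws et al. (2017) may differ.

```python
from dataclasses import dataclass

@dataclass
class FittingOutcome:
    """Hypothetical outcome measures for one fitting-parameter setting (all 0..1)."""
    phoneme_score: float      # speech perception test result
    loudness_comfort: float   # comfort rating
    self_report: float        # questionnaire score

def utility(outcome: FittingOutcome, weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted combination of outcome measures; weights encode clinical priorities."""
    w_phoneme, w_comfort, w_report = weights
    return (w_phoneme * outcome.phoneme_score
            + w_comfort * outcome.loudness_comfort
            + w_report * outcome.self_report)

def best_setting(candidates):
    """Pick the (label, outcome) pair with the highest utility; return its label."""
    return max(candidates, key=lambda item: utility(item[1]))[0]
```

In a learning system, the weights themselves would be updated as outcomes accumulate across patients, and an active-learning component would decide which outcome measure to collect next rather than measuring all of them every visit.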

Another example of computational approaches to improve rehabilitation is the application of neural networks to enhance speech-in-noise understanding in cochlear implant (CI) users (Goehring et al., 2019). Noisy speech signals were decomposed into time-frequency units, from which a set of psychophysically verified features was extracted and fed into a neural network that selects frequency channels with a higher signal-to-noise ratio. This pre-processing of the input signal significantly increased speech understanding, even for unfamiliar speakers (i.e. speakers not used to train the network). The developers limited the required computational power and memory of their model to make it implementable on mobile devices.
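The channel-selection step can be illustrated schematically. In this hedged sketch, the per-unit SNR estimates are taken as given (in the actual system of Goehring et al. they are produced by the trained network), and an ideal-binary-mask style rule keeps only time-frequency units above an SNR threshold:

```python
import numpy as np

def select_channels(noisy_tf, estimated_snr_db, snr_threshold_db=0.0):
    """Binary channel selection on a time-frequency representation.

    noisy_tf:          (channels, frames) magnitude spectrogram
    estimated_snr_db:  (channels, frames) per-unit SNR estimates, here
                       assumed to be supplied by an upstream estimator
    Units whose estimated SNR falls below the threshold are zeroed,
    so the CI processor stimulates only channels dominated by speech.
    """
    mask = (estimated_snr_db >= snr_threshold_db).astype(float)
    return noisy_tf * mask
```

The interesting design choice is where the threshold sits: too low and noise leaks through; too high and speech cues are discarded, which is why the channel-wise decision is learned rather than fixed in practice.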

Hearing research.

Machine learning techniques could also lead to better models of human auditory behavior and a better understanding of the auditory system. Recently, Ausili (2019) used a neural network to model experience-dependent sound localization for different hearing impairments. Deep neural networks are achieving parity with humans for some tasks, and it is possible that these networks could mimic aspects of the representation and functional organization of the human brain (Güçlü & van Gerven, 2017; Huang et al., 2018; Kell & McDermott, 2019).

We can conclude that the trend of applying computational approaches in audiology could lead to more individualized hearing care and new services, as illustrated in Example 1. We base this claim on the above-cited examples in diagnosis, rehabilitation, and hearing research, and on computational approaches in audiology already employed by digital hearing health technologies around the world (Swanepoel & Hall, 2020). Some of these new services may be provided by companies that traditionally did not specifically target customers with hearing loss. For example, speech-to-text apps provide new functionality to hearing-impaired persons (Pragt et al., 2020), and AirPods Pro are nearing the functionality of hearing aids (Bailey, 2020) but do not yet fulfill all FDA requirements and fall short in terms of amplification for the rehabilitation of people with moderate to severe hearing loss.

Example 1 rehabilitation service (based on Crum, 2019).

A person tests her hearing with an app to find out that her hearing profile is similar to that of 1.7 million other people in a global database who reported good results using hearing aids. She buys two hearing aids and signs up for a service, an app that sends programming instructions and settings to the hearing aids and asks for feedback to ascertain audibility and judge sound quality. Indications of momentary and remaining hearing problems, including expressions like “excuse me” or “what did you say?” are detected using automatic speech recognition. After a couple of weeks, the system provides fine-tuning based on her needs and similarity to other cases. It automatically determines that when entering her local subway station, substantial echo cancellation is needed. After a few years, the system detects specific changes in the spectral quality and patterns of sounds when she speaks. After tracking this trend for several months, the system suggests scheduling an appointment with a physician because these changes can correlate with heart disease.

How could computational approaches improve access to hearing health care?

Hearing health care is challenging to deliver in Low- and Middle-Income Countries (LMICs) because it currently requires specialized equipment and trained professionals. Smartphone-mediated telehealth holds great promise to lower many of these barriers (Swanepoel & Clark, 2019). Smartphone penetration now exceeds 80% in LMICs (Jonsson et al., 2019), and low-cost equipment and robust test procedures are becoming available to perform audiometric (Potgieter et al., 2018; Swanepoel & Clark, 2019) and otologic (Chan et al., 2019) diagnostic measures with acceptable levels of quality and reproducibility. We foresee a considerable growth in mobile app usage for self-administered hearing tests (Hazan et al., 2020; Swanepoel et al., 2019) and self-adjustment by hearing aid users (Søgaard Jensen et al., 2019) that in turn could lead to self-fitted hearing aids. In the simplest form of telehealth, the caregiver and patient are physically separated, and technology facilitates interaction. However, telehealth can be expanded by distributing expert knowledge across the health care delivery system, with clinical expertise incorporated into algorithms employed on devices used by patients or by local caregivers, making hearing health care possible and affordable in remote and underserved areas where experts are lacking, as illustrated in Example 2.

Example 2: hearing screening in early childhood (based on Barbour et al., 2019a; Chan et al., 2019; Swanepoel & Clark, 2019).

Children in LMICs typically do not have access to hearing screening. However, a community-based project relying on AI assistance offers screening, diagnosis, and referral in underserved communities.

  1. Screening is conducted via an automated pure-tone-screening test facilitated on a smartphone for children from 3 to 4 years.

  2. Test quality is monitored locally on the smartphone and regionally via uploaded data on a cloud-based data management portal.

  3. If a child fails the screening test, an automated report is generated from the cloud-based data management portal and sent to caregivers by text message or email.

  4. If the child fails the screening a second time, automated threshold pure-tone audiometry facilitated by an operator and AI-supported middle-ear function assessment is carried out. A clinical decision support system assists local caregivers in diagnosing hearing loss and referring to specialized care.
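The referral logic of the four steps above can be summarized as a simple decision function. The stage labels and the two-stage rescreen rule are illustrative of this hypothetical example only, not a clinical protocol:

```python
def screening_pathway(first_pass, second_pass=None):
    """Two-stage screening decision flow (hypothetical, per Example 2).

    first_pass:  True if the child passed the initial smartphone screening
    second_pass: result of the repeat screening, or None if not yet done
    """
    if first_pass:
        return "pass"
    if second_pass is None:
        return "rescreen"          # failed once: caregivers notified, retest scheduled
    if second_pass:
        return "pass"
    return "diagnostic_follow_up"  # failed twice: automated audiometry + referral
```

In deployment, each branch would trigger the corresponding cloud-portal action (report generation, caregiver notification, or a referral through the decision support system).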

If screening and diagnosis of hearing loss can be improved in LMICs, the next requirement is to provide specialized care and affordable hearing loss rehabilitation. Global awareness for hearing loss has recently been spurred by the formation of a Lancet Commission examining strategies to reduce the burden of hearing loss (B. S. Wilson et al., 2019). Recommendations include stimulating the development of low-cost hearing prostheses, leveraging smartphone technologies for use as hearing assistive devices, and equipping a small number of specialist centers for medical and surgical management of ear disease. Computational audiology as an emerging field is uniquely positioned to combine inexpensive, ubiquitous hardware and software (e.g. smartphones with apps) and sophisticated multifactorial (meta)data modeling. By equipping cheap hardware with (AI-based) software, LMICs can benefit from advanced automated diagnostic tools and interventions to address hearing loss. The overall cost of devices and services incurred per user will drop, which is expected to compensate for the resources needed for building and maintaining the computational infrastructure, defined here as all hardware, software, protocols, practices, and regulation needed to apply computational approaches on an international scale (O’Brien, 2020). An interesting (but solvable) question is how governments, companies, health care providers, and users will together bear the cost of computational infrastructure, R&D, IP, licenses, devices (e.g. smartphones), and other indirect costs. How to align the stakeholders involved, and how to address the potential risks, privacy issues, and technical requirements, are topics we consider in the next sections.

Ethical considerations and technical requirements concerning computational approaches in hearing health care

While the AI applications in audiology outlined above should be considered an improvement, they may also introduce additional risks.

1. Unauthorized or undesirable use.

For example, AI researchers recently introduced new lip-reading technology to facilitate speech understanding in people with a hearing impairment. They trained their algorithm on TV footage, and it outperformed expert lip-readers. This solution could, in theory, allow people with hearing loss to augment their speech understanding (Shillingford et al., 2018). However, the technique could also be used for other purposes, including mass surveillance (Metz, 2018). Footage from closed-circuit TV could be fed into the algorithm to track conversations of unknowing citizens, invading their privacy. A similar privacy issue may apply to devices that incorporate tracking technology. GPS can be used to track a smartphone on a rideshare journey, but it can also track smart hearing aids. Current hearing aids can log users’ preferences in particular environments, monitor adjustments users make in each place, log those preferences, use GPS to detect when they return to those places, and automatically or manually reactivate the preferred settings (Wolfgang, 2019). In courts, tracking the whereabouts of personal devices has already led to erroneous criminal accusations (Valentino-DeVries, 2019).

2. Bias in the data used to train an AI-system.

Buolamwini (2017), for example, uncovered large gender and racial biases in face recognition systems sold by tech giants IBM and Microsoft. Errors in gender identification were substantially lower for lighter-skinned men (1% error rate) than darker-skinned women (35% error rate). One explanation was that the face recognition systems were trained on data sets containing many more men with light skin than women with dark skin. This example shows that real-world biases may translate to inherent biases in the outcome of AI systems, whether we are aware of those biases or not. As a result, it might be a risk to apply data collected in, for example, Western countries to solutions for non-Western regions with other ethnic characteristics, including race and lifestyle.

3. Violation of privacy.

Privacy protection has begun to be taken seriously in recent years, resulting in the EU’s GDPR (Regulation (EU) 2016/679, 2016). In addition to general privacy issues, one article of the GDPR explicitly states that individuals should not be subjected to a decision based on automatic processing, including profiling, except when explicit consent is given (Goodman & Flaxman, 2017). Manufacturers of hearing devices and cochlear implants are already collecting large bodies of data (data profiles) beyond the view of (independent) publicly funded hearing health care providers and researchers. Clinicians use those data for counseling purposes, for instance, to evaluate hearing aid usage based on data-logging (Saunders et al., 2020). However, Mellor et al. (2018) reported that a hearing aid manufacturer shared a large dataset but withheld possibly relevant commercially sensitive information, which may limit the insights researchers can draw from the data. Automatic processing could be problematic with machine learning and big-data designs, even when using anonymous data only. When a database combines many types of data from individual subjects, the likelihood increases that data can be traced back to individuals (re-identification; Leese, 2014; Rocher et al., 2019). Privacy concerns and the sheer amount of data have led to the development of distributed learning, an approach that allows for decentralized training (Konečný et al., 2016). For example, in federated learning, models are trained locally on the user’s device (e.g. a smartphone connected to a hearing aid; Szatmari et al., 2020), and only aggregate meta-data (updated priors) travel between central databases and users.
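The federated scheme can be sketched as follows. This is a schematic illustration in the style of federated averaging (Konečný et al., 2016); the linear model, gradient-descent client step, and sample-size weighting are simplifications, and no raw data ever leaves a client, only model parameters.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training step: gradient descent on a linear model.

    Raw data and labels stay on the device; only the updated weights
    are returned to the server.
    """
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by local sample counts."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)
```

A hearing-aid fitting model could thus improve from usage data across many wearers while each wearer's audiograms and preference logs remain on their own phone, directly addressing the re-identification risk described above.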

4. Restricted access and control over data.

All human stakeholders must have access to relevant information to make the right decision about the diagnosis, treatment, or rehabilitation that affects a patient’s health. Data from which relevant information could be extracted is currently scattered across databases residing with different stakeholders (i.e. companies, hospitals, research institutions). The data are collected for distinct purposes and might have a particular status, for example, proprietary or open. In effect, data are vital for so many processes that control over them may lead to a strategic advantage in business, clinical care, or science. Companies might collect data to improve products (proprietary data) or evaluate services, but also because of legal requirements or for quality assurance. It is mandatory for health care professionals to keep a medical record that contains all information needed to provide accountable care according to good clinical practices (article 454 WGBO; Eijpe, 2014). The Health Insurance Portability and Accountability Act (HIPAA) in the US and General Data Protection Regulation (GDPR) in the EU provide the legislative framework that gives patients and care providers access to and control over personal data (Forrest, 2018; Individuals’ Right under HIPAA to Access Their Health Information, 2016). An individual can request access to his/her data stored by a health care provider (HIPAA) or any organization (GDPR). Therefore, in theory, it is possible to create a global system that can access patients’ health history. However, in reality, appropriate data-exchange practices are lacking, which seriously hampers patients’ control over their data. The (re)use of proprietary data can be restricted and is subject to trade secrets, patents, copyrights, or licenses (see Carroll, 2015, for legal rights governing research data, and Stepanov, 2020, for property rights).
Vested interests, the motivation to influence factors to one’s own benefit, are a considerable barrier to the reuse of proprietary data. Without access to relevant information, patients cannot make informed (shared) decisions, clinicians will lack insight into decision support systems, regulators will be unable to inspect and audit, and researchers will be unable to critically appraise outcomes and methods.

5. Liability.

For anyone working with new AI paradigms, it needs to be clear who is responsible if anything goes wrong. Is it the scientist who made the algorithm, is it the health care professional, or is it the patient who is ultimately responsible for their own decisions? For example, how can a clinician (or a patient) ascertain that an algorithm’s outcome is correct and valid? An explicit example of a potentially invalid test result is an auditory steady-state response (ASSR) exam performed on a restless neonate that results in measurement conditions markedly different from the conditions on which the algorithm was trained (Sininger et al., 2018). The test result may not be accurate, but this shortcoming might not be noticeable to the clinician.

Oversight and regulation (in general for medicine; Maddox et al., 2019) for hearing-related AI also needs to be in place. The level of this oversight will need to be increased in cases of highly autonomous and self-learning clinical decision support systems operating in highly complex environments and circumstances that have severe consequences for erroneous actions. Furthermore, AI-based clinical decision support systems need to be transparent to inspection and audit, and robust for application in a specified context (in general for health technology; Shuren & Califf, 2016).

The role of computational audiology in personalized hearing health care

AI, automation, and remote care will become more widespread and more widely available in the coming years. Redesigning the clinical workflow, implementing AI technology, and changing the clinician’s role should become a top priority (Rajkomar et al., 2019). Below, we discuss what role clinicians and other stakeholders might play in the digital transition and what it means for patients. Remote care has already become more mainstream due to the COVID-19 pandemic, which has provided an unprecedented impetus to develop and employ hearing health solutions that reduce physical contact (De Sousa et al., 2020; Swanepoel & Hall, 2020). This situation has demonstrated that clinicians can adapt when the benefits are clear, for example, keeping practice doors open.

Clinicians’ role.

Hearing health care professionals, including audiologists, have valuable insights needed to implement these new approaches successfully. For instance, algorithmic bias is reduced if a system is trained in a situation comparable to the one in which it is employed. Therefore, early involvement of hearing care professionals in the design of algorithms could lead to products that better fit the clinical pathway. In a concept mapping study, a structured method for producing a conceptual representation, clinicians from Canada reported that structured training on the implementation and best practices of remote care is needed (Davies-Venn & Glista, 2019). The application of AI also requires clinicians to have appropriate training to use AI tools and to be aware of their validity and limitations. Clinicians should also use their position (e.g. in a collective) to advocate for necessary user requirements, including transparency and clarity, so that, as professionals, they can take responsibility for actions and decisions supported by those systems.

Not everything valuable in hearing health care is quantifiable and automatable. Machines do not easily replace a clinician in aspects of care based on clinical judgment, soft skills, and the personal touch that help the clinician understand the patient’s needs. Clinicians need to see the patient’s perspective while offering knowledge, creating realistic expectations, providing rehabilitation, and collecting feedback. They are also the mediators who counsel patients in using remote care options, translate outcomes to individual cases, and interpret results from AI approaches.

Automation of routine diagnostic procedures might free up clinician time to design more elaborate therapeutic interventions, rehabilitation strategies, or even patient engagement and education initiatives. Technical tasks, including hearing tests and hearing aid fitting, will benefit from best practices for accuracy and efficiency standardized in automated routines. One example is visual reinforcement audiometry (VRA) for infants, which currently requires two clinicians: one who conditions the child while the other selects each stimulus and the timing of its delivery. If stimulus selection were optimized through active learning, a single clinician could condition the child while also registering responses and selecting the timing of delivery with a handheld remote. The result would be more accurate test results with half the labor, potentially enabling a practice to double its patient throughput. In considering such scenarios, clinician concerns about becoming marginalized in the face of automation deserve consideration. AI technology can eventually standardize best practices of efficiency and effectiveness for all clinicians while preserving the necessary human element of care that only a person can provide. In no way are these ideas intended to take clinicians out of the loop or diminish their contribution. On the contrary, their new ability to reach more patients and provide better care is expected to expand their clinical impact.
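To illustrate the active learning idea invoked above, the following Python sketch selects each next stimulus level by maximizing the expected informativeness (response entropy) of the listener’s answer under a simple Bayesian threshold model. This is a minimal illustration under stated assumptions, not the procedure of any cited study: the logistic psychometric function, the threshold grid, and the guess/lapse parameters are all invented for the example.

```python
import numpy as np

def psychometric(level, threshold, slope=1.0, guess=0.02, lapse=0.02):
    """Probability of a 'heard' response at a stimulus level (dB),
    for a listener with the given detection threshold."""
    p = 1.0 / (1.0 + np.exp(-slope * (level - threshold)))
    return guess + (1.0 - guess - lapse) * p

def next_stimulus(levels, posterior, thresholds):
    """Uncertainty sampling: pick the level whose predicted response
    is most uncertain (highest Bernoulli entropy) under the posterior."""
    best_level, best_h = None, -1.0
    for lv in levels:
        p_heard = float(np.sum(posterior * psychometric(lv, thresholds)))
        h = -(p_heard * np.log2(p_heard)
              + (1.0 - p_heard) * np.log2(1.0 - p_heard))
        if h > best_h:
            best_level, best_h = lv, h
    return best_level

def update(posterior, thresholds, level, heard):
    """Bayes update of the threshold distribution after one trial."""
    like = psychometric(level, thresholds)
    posterior = posterior * (like if heard else 1.0 - like)
    return posterior / posterior.sum()
```

In simulation, a few dozen adaptively chosen trials typically localize a listener’s threshold well, which is the efficiency gain that would let one clinician run the procedure from a handheld remote.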

Collaboration among stakeholders.

This paper attempts to start the dialogue needed to create a shared vision among stakeholders regarding computational audiology, one of the first steps towards effective collaboration. Stakeholders include health care decision-making and advocacy groups such as health departments; non-governmental organizations, including the WHO and patient associations; hearing health care professionals, including medical doctors and audiologists; device manufacturers; insurance companies; and researchers in audiology. Stakeholder collaboration, for which Sekhri et al. (2011) provide successful examples within medicine, is in our view a necessary step to implement the current advances in computational audiology on a large scale. Besides a shared vision, we also need to think about aligning the interests of these stakeholders. By putting patients’ interests first and creating the proper incentives, i.e. rewards that encourage people or organizations to act, we may overcome professional inertia, defined here as resistance to change. For this, we need to assess and create awareness of vested interests that hamper innovation (e.g. reimbursement policies; Davies-Venn & Glista, 2019) and find common ground so that, through collaboration, we can jointly overcome the barriers and all benefit fairly from the forthcoming advances.

An opportunity to further improve diagnostic and therapeutic procedures is to make anonymized data openly available so that algorithms can be trained on larger populations. All stakeholders who collect data should apply privacy guarded-by-design, which requires built-in safety measures to protect patients’ privacy (A&L Goodbody, 2016). These measures should require all stakeholders to assume responsibility for their specified share within the system. A prerequisite for collaboration is the standardization of clinical procedures and of how data are stored and annotated within a computational infrastructure. Only then is pooling of high-quality data possible. The era of small-scale research with small (uniform) samples should be consigned to the past. Here, we may learn from other fields: in neuroimaging and genetics, research groups formed consortia to facilitate data aggregation and sharing on a scale unprecedented in audiology (Bis et al., 2012).
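As a deliberately minimal illustration of building privacy protection into data pooling, the Python sketch below drops direct identifiers and replaces the patient ID with a keyed hash before a record leaves the clinic. The field names and the choice of HMAC-SHA256 are assumptions for illustration only; a real deployment would follow a governed de-identification protocol, and pseudonymization alone does not rule out re-identification of rich records (cf. Rocher et al., 2019).

```python
import hashlib
import hmac

# Hypothetical field names, for illustration only.
DIRECT_IDENTIFIERS = {"name", "date_of_birth", "address", "phone"}

def pseudonymize(record: dict, secret_key: bytes) -> dict:
    """Drop direct identifiers and replace the patient ID with a keyed
    hash, so pooled records cannot be trivially linked back to a person
    (while the same key still allows consistent linkage across visits)."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["patient_id"] = hmac.new(
        secret_key, record["patient_id"].encode(), hashlib.sha256
    ).hexdigest()
    return out
```

Because the hash is keyed, only the party holding the secret can regenerate the mapping, which is one way of making each stakeholder responsible for their specified share within the system.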

Standardization would help clinicians collect evidence and create independent outcome measures to assess new tools and compare them with established and validated methods, and it ensures that clinicians are talking about the same thing when operating within a network of distributed expertise. Moreover, by enabling interoperability between manufacturers and clinics, clinical procedures can be more readily adopted. Interoperable systems, combined with licenses to protect proprietary data, will reduce risk and costs for companies (e.g. missing out on a standard, maintaining a platform, adhering to regulatory requirements). Such systems keep open the option to compete and excel, and tackle the problem of vendor lock-in that currently limits freedom of choice for clinics and patients.
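The kind of standardization argued for here can be made concrete with a shared, versioned record format. The sketch below defines a hypothetical minimal exchange format for audiogram data; the field names and schema version are invented for illustration, and an agreed industry standard would of course be far richer.

```python
import json
from dataclasses import dataclass, asdict
from typing import Dict

@dataclass
class AudiogramRecord:
    # Hypothetical minimal schema; real interoperability needs an
    # agreed standard shared by manufacturers and clinics.
    subject_id: str
    ear: str                            # "left" or "right"
    thresholds_db_hl: Dict[str, float]  # frequency in Hz -> threshold in dB HL
    transducer: str = "headphones"
    schema_version: str = "0.1"

def to_wire(record: AudiogramRecord) -> str:
    """Serialize to a canonical JSON string any compliant system can read."""
    return json.dumps(asdict(record), sort_keys=True)

def from_wire(payload: str) -> AudiogramRecord:
    """Parse a record produced by any system that follows the schema."""
    return AudiogramRecord(**json.loads(payload))
```

Carrying an explicit schema version in every record is one simple design choice that lets independent vendors evolve the format without breaking each other’s systems.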

What does computational audiology mean to patients?

For many people worldwide, access to screening and diagnosis of hearing impairment will improve. The complexity of a patient’s hearing problem and his or her self-reliance will determine the required degree of professional guidance. A large group with mild to moderate hearing loss may be helped with relatively simple devices and may even apply forms of self-care. More intensive professional help is needed for more complex fittings or for people who cannot apply self-care (e.g. those with specific comorbidities).

We believe it remains a significant challenge to make self-care possible for people with hearing loss, even those with sufficient autonomy and health literacy, for reasons including lack of trust in the transition and in how digital information is presented and exchanged between patients, clinicians, and companies. If information is not clear to the patient, how can he or she act upon it? Clinicians will play an essential role in maintaining patient trust in the transition and in adapting to new practices. Hearing health care may evolve to the point where parts of care are organized remotely, for instance, screening for hearing loss, monitoring the status quo, and adjusting rehabilitation depending on the patient’s situation.

The future of audiology

Modernization of audiology towards greater quality, accessibility, and equity will benefit immensely from the emerging power of computational sciences. We envision a future where patient well-being is promoted by judicious evaluation of data shared between interoperable systems of public or private origin. Health care providers will adopt expanded roles within a network of distributed expertise that continually updates best practices as evidence is accumulated and quantified. Clinicians will be empowered to reach more patients by offloading decisions about data collection to supportive tools while reserving complex and rare clinical decisions for human experts. In the next decade, we foresee that widely available devices, including smartphones, will catalyze the democratization of audiology and benefit millions of people who suffer from the disabling effects of hearing loss by helping evaluate and treat them with support and guidance from advanced algorithms. For this to happen, we must join forces with experts in computational sciences, agree on global standards and evidence-based procedures, and carefully consider the possible challenges of big data and AI technology.

Conflicts of Interest and Source of Funding

D.R.M. and D.W.S. received support from NIH grant R21DC016241. D.R.M. receives support from the NIHR Manchester Biomedical Research Centre. The Radboudumc ENT department receives research funding from Cochlear, MED-EL, and Oticon Medical. D.R.M. and D.W.S. have a relationship with the hearX Group (Pty) Ltd, which includes equity, consulting, and potential royalties. P.J.G. receives royalties on Fox, an AI-based decision support system for the fitting of cochlear implants. D.W.S. is a paid scientific advisor and shareholder of the hearX Group. D.L.B. owns equity in Bonauria. J.v.d.L. is a member of the scientific advisory boards of Philips (the Netherlands) and ContextVision (Sweden) and receives research funding from Philips (the Netherlands) and Sectra (Sweden).

We thank Lucas Mens, Peter van Hengel, Ad Snik, Skander Taamallah, Pim Huis in ‘t Veld, and Shushman Choudhury for their comments on draft versions of the manuscript.

Footnotes

Data and materials availability:

A data availability statement is not applicable; no novel data were generated for this manuscript.

Supplementary Materials:

Not applicable

1. For the following examples we chose to apply Dutch law to illustrate a legal framework.

References:

  1. A&L Goodbody. (2016). The GDPR-A Guide for Businesses. A&L Goodbody. https://www.algoodbody.com/media/TheGDPR-AGuideforBusinesses1.pdf
  2. Ashique KT, Kaliyadan F, & Aurangabadkar SJ (2015). Clinical photography in dermatology using smartphones: An overview. Indian Dermatology Online Journal, 6(3), 158. 10.4103/2229-5178.156381
  3. Ausili SA (2019). Spatial Hearing with Electrical Stimulation: Listening with Cochlear Implants [Radboud University]. https://repository.ubn.ru.nl/handle/2066/203054
  4. Bailey A (2020, July 1). AirPods Pro Become Hearing Aids in iOS 14. Hearing Tracker. https://www.hearingtracker.com/news/airpods-pro-become-hearing-aids-with-ios-14
  5. Barbour DL (2018). Formal Idiographic Inference in Medicine. JAMA Otolaryngology–Head & Neck Surgery, 144(6), 467–468. 10.1001/jamaoto.2018.0254
  6. Barbour DL, DiLorenzo JC, Sukesan KA, Song XD, Chen JY, Degen EA, Heisey KL, & Garnett R (2019). Conjoint psychometric field estimation for bilateral audiometry. Behavior Research Methods, 51(3), 1271–1285. 10.3758/s13428-018-1062-3
  7. Barbour DL, Howard RT, Song XD, Metzger N, Sukesan KA, DiLorenzo JC, Snyder BRD, Chen JY, Degen EA, Buchbinder JM, & Heisey KL (2019). Online Machine Learning Audiometry. Ear and Hearing, 40(4), 918–926. 10.1097/AUD.0000000000000669
  8. Benson DA, Cavanaugh M, Clark K, Karsch-Mizrachi I, Lipman DJ, Ostell J, & Sayers EW (2012). GenBank. Nucleic Acids Research, 41(D1), D36–D42. 10.1093/nar/gks1195
  9. Bis JC, DeCarli C, Smith AV, van der Lijn F, Crivello F, Fornage M, Debette S, Shulman JM, Schmidt H, Srikanth V, Schuur M, Yu L, Choi S-H, Sigurdsson S, Verhaaren BFJ, DeStefano AL, Lambert J-C, Jack CR, Struchalin M, … the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium. (2012). Common variants at 12q14 and 12q24 are associated with hippocampal volume. Nature Genetics, 44(5), 545–551. 10.1038/ng.2237
  10. Bizios D, Heijl A, & Bengtsson B (2011). Integration and fusion of standard automated perimetry and optical coherence tomography data for improved automated glaucoma diagnostics. BMC Ophthalmology, 11(1), 1–11. 10.1186/1471-2415-11-20
  11. Buolamwini JA (2017). Gender shades: Intersectional phenotypic and demographic evaluation of face datasets and gender classifiers [Thesis, Massachusetts Institute of Technology]. http://dspace.mit.edu/handle/1721.1/114068
  12. Byrne D, Dillon H, Ching T, Katsch R, & Keidser G (2001). NAL-NL1 procedure for fitting nonlinear hearing aids: Characteristics and comparisons with other procedures. Journal of the American Academy of Audiology, 12(1).
  13. Carroll MW (2015). Sharing Research Data and Intellectual Property Law: A Primer. PLOS Biology, 13(8), e1002235. 10.1371/journal.pbio.1002235
  14. Cha D, Pae C, Seong S-B, Choi JY, & Park H-J (2019). Automated diagnosis of ear disease using ensemble deep learning with a big otoendoscopy image database. EBioMedicine, 45, 606–614. 10.1016/j.ebiom.2019.06.050
  15. Chan J, Raju S, Nandakumar R, Bly R, & Gollakota S (2019). Detecting middle ear fluid using smartphones. Science Translational Medicine, 11(492), eaav1102. 10.1126/scitranslmed.aav1102
  16. Crum P (2019). Hearables: Here come the: Technology tucked inside your ears will augment your daily life. IEEE Spectrum, 56(5), 38–43. 10.1109/MSPEC.2019.8701198
  17. Davies-Venn E, & Glista D (2019). Connected hearing healthcare: The realisation of benefit relies on successful clinical implementation. 28(5), 2.
  18. De Sousa KC, Smits C, Moore DR, Myburgh HC, & Swanepoel DW (2020). Pure-tone audiometry without bone-conduction thresholds: Using the digits-in-noise test to detect conductive hearing loss. International Journal of Audiology, 1–8. 10.1080/14992027.2020.1783585
  19. Dubno JR, Eckert MA, Lee F-S, Matthews LJ, & Schmiedt RA (2013). Classifying human audiometric phenotypes of age-related hearing loss from animal models. Journal of the Association for Research in Otolaryngology: JARO, 14(5), 687–701. 10.1007/s10162-013-0396-x
  20. Eijpe. (2014). Overview of the national laws on electronic health records in the EU Member States: National Report for the Netherlands (p. 40). Milieu Ltd and Time.lex. https://ec.europa.eu/health/sites/health/files/ehealth/docs/laws_netherlands_en.pdf
  21. Regulation (EU) 2016/679, 88 (2016). https://eur-lex.europa.eu/eli/reg/2016/679/oj?eliuri=eli%3Areg%3A2016%3A679%3Aoj
  22. Forrest C (2018, April 24). How to request your personal data under GDPR. TechRepublic. https://www.techrepublic.com/article/how-to-request-your-personal-data-under-gdpr/
  23. Goehring T, Keshavarzi M, Carlyon RP, & Moore BC (2019). Using recurrent neural networks to improve the perception of speech in non-stationary noise by people with cochlear implants. The Journal of the Acoustical Society of America, 146(1), 705–718. 10.1121/1.5119226
  24. Goodman B, & Flaxman S (2017). European Union regulations on algorithmic decision-making and a “right to explanation.” AI Magazine, 38(3), 50. 10.1609/aimag.v38i3.2741
  25. Güçlü U, & van Gerven MAJ (2017). Modeling the Dynamics of Human Brain Activity with Recurrent Neural Networks. Frontiers in Computational Neuroscience, 11. 10.3389/fncom.2017.00007
  26. Hazan A, Rivilla J, Méndez N, Wack N, Paytuvi O, Zarowski A, Offeciers E, & Kinsbergen J (2020, June 4). Test-retest analysis of aggregated audiometry testing data using Jacoti Hearing Center self-testing application. Proceedings of the VCCA2020 conference. https://computationalaudiology.com/test-retest-analysis-of-aggregated-audiometry-testing-data-using-jacoti-hearing-center-self-testing-application/
  27. Heutink F, Koch V, Verbist B, van der Woude WJ, Mylanus E, Huinck W, Sechopoulos I, & Caballo M (2020). Multi-Scale deep learning framework for cochlea localization, segmentation and analysis on clinical ultra-high-resolution CT images. Computer Methods and Programs in Biomedicine, 191, 105387. 10.1016/j.cmpb.2020.105387
  28. Hildebrand MS, DeLuca AP, Taylor KR, Hoskinson DP, Hur IA, Tack D, McMordie SJ, Huygen PLM, Casavant TL, & Smith RJH (2009). AudioGene Audioprofiling: A Machine-based Candidate Gene Prediction Tool for Autosomal Dominant Non-syndromic Hearing Loss. The Laryngoscope, 119(11), 2211–2215. 10.1002/lary.20664
  29. Hosny A, Parmar C, Quackenbush J, Schwartz LH, & Aerts HJ (2018). Artificial intelligence in radiology. Nature Reviews Cancer, 18(8), 500–510. 10.1038/s41568-018-0016-5
  30. Huang N, Slaney M, & Elhilali M (2018). Connecting Deep Neural Networks to Physical, Perceptual, and Electrophysiological Auditory Signals. Frontiers in Neuroscience, 12. 10.3389/fnins.2018.00532
  31. Johansen B, Petersen MK, Korzepa MJ, Larsen J, Pontoppidan NH, & Larsen JE (2018). Personalizing the fitting of hearing aids by learning contextual preferences from internet of things data. Computers, 7(1), 1. 10.3390/computers7010001
  32. Jonsson P, Carson S, Blennerud G, Shim J, Arendse B, Husseini A, Lindberg P, & Öhman K (2019). Ericsson Mobility Report November 2019 (Mobility Reports, p. 36). Ericsson. https://www.ericsson.com/en/mobility-report/reports/november-2019
  33. Kahneman D (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
  34. Kell AJ, & McDermott JH (2019). Deep neural network models of sensory systems: Windows onto the role of task constraints. Current Opinion in Neurobiology, 55, 121–132. 10.1016/j.conb.2019.02.003
  35. Kollmeier B, & Kiessling J (2018). Functionality of hearing aids: State-of-the-art and future model-based solutions. International Journal of Audiology, 57(sup3), S3–S28. 10.1080/14992027.2016.1256504
  36. Konečný J, McMahan HB, Ramage D, & Richtárik P (2016). Federated Optimization: Distributed Machine Learning for On-Device Intelligence. ArXiv:1610.02527 [Cs]. http://arxiv.org/abs/1610.02527
  37. Kramer SE, Kapteyn TS, & Houtgast T (2006). Occupational performance: Comparing normally-hearing and hearing-impaired employees using the Amsterdam Checklist for Hearing and Work. International Journal of Audiology, 45(9), 503–512. 10.1080/14992020600754583
  38. Leese M (2014). The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union. Security Dialogue, 45(5), 494–511. 10.1177/0967010614544204
  39. Livingston G, Sommerlad A, Orgeta V, Costafreda SG, Huntley J, Ames D, Ballard C, Banerjee S, Burns A, & Cohen-Mansfield J (2017). Dementia prevention, intervention, and care. The Lancet, 390(10113), 2673–2734. 10.1016/S0140-6736(17)31363-6
  40. Maddox TM, Rumsfeld JS, & Payne PRO (2019). Questions for Artificial Intelligence in Health Care. JAMA, 321(1), 31. 10.1001/jama.2018.18932
  41. Maharani A, Dawes P, Nazroo J, Tampubolon G, & Pendleton N (2018). Longitudinal Relationship Between Hearing Aid Use and Cognitive Function in Older Americans. Journal of the American Geriatrics Society, 66(6), 1130–1136. 10.1111/jgs.15363
  42. Masterson EA, Tak S, Themann CL, Wall DK, Groenewold MR, Deddens JA, & Calvert GM (2013). Prevalence of hearing loss in the United States by industry. American Journal of Industrial Medicine, 56(6), 670–681. 10.1002/ajim.22082
  43. Meeuws M, Pascoal D, Bermejo I, Artaso M, Ceulaer GD, & Govaerts PJ (2017). Computer-assisted CI fitting: Is the learning capacity of the intelligent agent FOX beneficial for speech understanding? Cochlear Implants International, 18(4), 198–206. 10.1080/14670100.2017.1325093
  44. Mellor J, Stone MA, & Keane J (2018). Application of Data Mining to a Large Hearing-Aid Manufacturer’s Dataset to Identify Possible Benefits for Clinicians, Manufacturers, and Users. Trends in Hearing, 22, 2331216518773632. 10.1177/2331216518773632
  45. Metz C (2018, November 26). Efforts to Acknowledge the Risks of New A.I. Technology. The New York Times. https://www.nytimes.com/2018/10/22/business/efforts-to-acknowledge-the-risks-of-new-ai-technology.html
  46. Moore DR, Zobay O, & Ferguson MA (2020). Minimal and Mild Hearing Loss in Children: Association with Auditory Perception, Cognition, and Communication Problems. Ear and Hearing, Publish Ahead of Print. 10.1097/AUD.0000000000000802
  47. Mościcki EK, Elkins EF, Baum HM, & McNamara PM (1985). Hearing loss in the elderly: An epidemiologic study of the Framingham Heart Study Cohort. Ear and Hearing, 6(4), 184–190.
  48. Motlagh Zadeh L, Silbert NH, Sternasty K, Swanepoel DW, Hunter LL, & Moore DR (2019). Extended high-frequency hearing enhances speech perception in noise. Proceedings of the National Academy of Sciences, 116(47), 23753–23759. 10.1073/pnas.1903315116
  49. Myburgh HC, Jose S, Swanepoel DW, & Laurent C (2018). Towards low cost automated smartphone- and cloud-based otitis media diagnosis. Biomedical Signal Processing and Control, 39, 34–52. 10.1016/j.bspc.2017.07.015
  50. Nielsen JBB, Nielsen J, & Larsen J (2014). Perception-based personalization of hearing aids using Gaussian processes and active learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(1), 162–173. 10.1109/TASLP.2014.2377581
  51. O’Brien E (2020, May 14). The critical role of computing infrastructure in computational audiology. Computational Audiology. https://computationalaudiology.com/the-critical-role-of-computing-infrastructure-in-computational-audiology/
  52. Olusanya BO, Neumann KJ, & Saunders JE (2014). The global burden of disabling hearing impairment: A call to action. Bulletin of the World Health Organization, 92(5), 367–373. 10.2471/BLT.13.128728
  53. Palacios G, Noreña A, & Londero A (2020). Assessing the Heterogeneity of Complaints Related to Tinnitus and Hyperacusis from an Unsupervised Machine Learning Approach: An Exploratory Study. Audiology and Neurotology, 25(4), 173–188. 10.1159/000504741
  54. Potgieter J-M, Swanepoel DW, & Smits C (2018). Evaluating a smartphone digits-in-noise test as part of the audiometric test battery. The South African Journal of Communication Disorders = Die Suid-Afrikaanse Tydskrif Vir Kommunikasieafwykings, 65(1), e1–e6. 10.4102/sajcd.v65i1.574
  55. Pragt L, van Hengel P, Grob D, & Wasmann JW (2020, April 21). Speech recognition apps for the hearing impaired and deaf. Proceedings of the VCCA2020 conference. https://computationalaudiology.com/ai-speech-recognition-apps-for-hearing-impaired-and-deaf/
  56. Rajkomar A, Dean J, & Kohane I (2019). Machine Learning in Medicine. New England Journal of Medicine, 380(14), 1347–1358. 10.1056/NEJMra1814259
  57. Rocher L, Hendrickx JM, & de Montjoye Y-A (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10(1), 1–9. 10.1038/s41467-019-10933-3
  58. Sanchez Lopez R, Bianchi F, Fereczkowski M, Santurette S, & Dau T (2018). Data-driven approach for auditory profiling and characterization of individual hearing loss. Trends in Hearing, 22, 2331216518807400. 10.1177/2331216518807400
  59. Saunders GH, Bott A, & Tietz LH (2020). Hearing care providers’ perspectives on the utility of datalogging information. American Journal of Audiology, 29(3S), 610–622. 10.1044/2020_AJA-19-00089
  60. Schlittenlacher J, Turner RE, & Moore BC (2018a). A hearing-model-based active-learning test for the determination of dead regions. Trends in Hearing, 22, 2331216518788215. 10.1177/2331216518788215
  61. Schlittenlacher J, Turner RE, & Moore BCJ (2018b). Audiogram estimation using Bayesian active learning. The Journal of the Acoustical Society of America, 144(1), 421–430. 10.1121/1.5047436
  62. Schwab K (2016). The Fourth Industrial Revolution: What it means and how to respond. World Economic Forum. https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
  63. Sekhri N, Feachem R, & Ni A (2011). Public-private integrated partnerships demonstrate the potential to improve health care access, quality, and efficiency. Health Affairs, 30(8), 1498–1507. 10.1377/hlthaff.2010.0461
  64. Shield B (2019). Hearing Loss Numbers and Costs. Evaluation of the social and economic costs of hearing impairment (pp. 1–249). Hear it AISBL. https://m.hear-it.org/sites/default/files/BS%20-%20report%20files/HearitReportHearingLossNumbersandCosts.pdf
  65. Shillingford B, Assael Y, Hoffman MW, Paine T, Hughes C, Prabhu U, Liao H, Sak H, Rao K, Bennett L, Mulville M, Coppin B, Laurie B, Senior A, & de Freitas N (2018). Large-Scale Visual Speech Recognition. ArXiv:1807.05162 [Cs]. http://arxiv.org/abs/1807.05162
  66. Shuren J, & Califf RM (2016). Need for a National Evaluation System for Health Technology. JAMA, 316(11), 1153. 10.1001/jama.2016.8708
  67. Sininger YS, Hunter LL, Hayes D, Roush PA, & Uhler KM (2018). Evaluation of Speed and Accuracy of Next-Generation Auditory Steady State Response and Auditory Brainstem Response Audiometry in Children With Normal Hearing and Hearing Loss. Ear and Hearing, 39(6), 1207. 10.1097/AUD.0000000000000580
  68. Søgaard Jensen N, Hau O, Bagger Nielsen JB, Bundgaard Nielsen T, & Vase Legarth S (2019). Perceptual effects of adjusting hearing-aid gain by means of a machine-learning approach based on individual user preference. Trends in Hearing, 23, 2331216519847413. 10.1177/2331216519847413
  69. Stepanov I (2020). Introducing a property right over data in the EU: The data producer’s right – an evaluation. International Review of Law, Computers & Technology, 34(1), 65–86. 10.1080/13600869.2019.1631621
  70. Swanepoel DW, & Clark JL (2019). Hearing healthcare in remote or resource-constrained environments. The Journal of Laryngology & Otology, 133(1), 11–17. 10.1017/S0022215118001159
  71. Swanepoel DW, De Sousa KC, Smits C, & Moore DR (2019). Mobile applications to detect hearing impairment: Opportunities and challenges. Bulletin of the World Health Organization, 97(10), 717–718. 10.2471/BLT.18.227728
  72. Swanepoel DW, & Hall JW (2020). Making Audiology Work During COVID-19 and Beyond. The Hearing Journal, 73(6), 20–22. 10.1097/01.HJ.0000669852.90548.75
  73. Szatmari T-I, Petersen MK, Korzepa MJ, & Giannetsos T (2020). Modelling Audiological Preferences using Federated Learning. Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization, 187–190. 10.1145/3386392.3399560
  74. Thabit H, & Hovorka R (2016). Coming of age: The artificial pancreas for type 1 diabetes. Diabetologia, 59(9), 1795–1805. 10.1007/s00125-016-4022-4
  75. Individuals’ Right under HIPAA to Access their Health Information (2016). https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/access/index.html
  76. Valentino-DeVries J (2019, April 13). Tracking Phones, Google Is a Dragnet for the Police. The New York Times. https://www.nytimes.com/interactive/2019/04/13/us/google-location-tracking-police.html
  77. Verhulst S, Altoè A, & Vasilkov V (2018). Computational modeling of the human auditory periphery: Auditory-nerve responses, evoked potentials and hearing loss. Hearing Research, 360, 55–75. 10.1016/j.heares.2017.12.018
  78. Vos T, Allen C, Arora M, Barber RM, Bhutta ZA, Brown A, Carter A, Casey DC, Charlson FJ, Chen AZ, Coggeshall M, Cornaby L, Dandona L, Dicker DJ, Dilegge T, Erskine HE, Ferrari AJ, Fitzmaurice C, Fleming T, … Murray CJL (2016). Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990–2015: A systematic analysis for the Global Burden of Disease Study 2015. The Lancet, 388(10053), 1545–1602. 10.1016/S0140-6736(16)31678-6
  79. Wells HRR, Freidin MB, Zainul Abidin FN, Payton A, Dawes P, Munro KJ, Morton CC, Moore DR, Dawson SJ, & Williams FMK (2019). GWAS Identifies 44 Independent Associated Genomic Loci for Self-Reported Adult Hearing Difficulty in UK Biobank. The American Journal of Human Genetics, 105(4), 788–802. 10.1016/j.ajhg.2019.09.008
  80. Wilson BS, Tucci DL, Merson MH, & O’Donoghue GM (2017). Global hearing health care: New findings and perspectives. The Lancet, 390(10111), 2503–2515. 10.1016/S0140-6736(17)31073-5
  81. Wilson BS, Tucci DL, O’Donoghue GM, Merson MH, & Frankish H (2019). A Lancet Commission to address the global burden of hearing loss. The Lancet. 10.1016/S0140-6736(19)30484-2
  82. Wilson HJ, & Daugherty PR (2018, July 1). Collaborative Intelligence: Humans and AI Are Joining Forces. Harvard Business Review, July-August 2018. https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces
  83. Wolfgang K (2019). Artificial Intelligence and Machine Learning: Pushing New Boundaries in Hearing Technology. The Hearing Journal, 72(3), 26. 10.1097/01.HJ.0000554346.30951.8d
  84. World Health Organization. (2013). Multi-country assessment of national capacity to provide hearing care. https://www.who.int/pbd/publications/WHOReportHearingCare_Englishweb.pdf
  85. World Health Organization. (2017). Global costs of unaddressed hearing loss and cost-effectiveness of interventions. http://apps.who.int/iris/bitstream/10665/254659/1/9789241512046-eng.pdf
  86. World Health Organization. (2019, March 31). Global estimates on prevalence of hearing loss. Deafness Prevention. http://www.who.int/deafness/estimates/en/
  87. Wu Y-H, Stangl E, Zhang X, & Bentler RA (2015). Construct validity of the ecological momentary assessment in audiology research. Journal of the American Academy of Audiology, 26(10), 872–884. 10.3766/jaaa.15034
