Abstract
Compared with other health disciplines, there is a stagnation in technological innovation in the field of clinical neuropsychology. Traditional paper-and-pencil tests have a number of shortcomings, such as low-frequency data collection and limitations in ecological validity. While computerized cognitive assessment may help overcome some of these issues, current computerized paradigms do not address the majority of these limitations. In this paper, we review recent literature on the applications of novel digital health approaches, including ecological momentary assessment, smartphone-based assessment and sensors, wearable devices, passive driving sensors, smart homes, voice biomarkers, and electronic health record mining, in neurological populations. We describe how each digital tool may be applied to neurologic care and overcome limitations of traditional neuropsychological assessment. Ethical considerations, limitations of current research, as well as our proposed future of neuropsychological practice are also discussed.
Keywords: Technology, Digital biomarkers, Digital phenotyping, mHealth, Teleneuropsychology, Ecological momentary assessment
INTRODUCTION
Neuropsychological services are vastly important in the diagnosis and treatment of neurological disorders and injuries (Donders, 2020). However, compared with other medical disciplines, there has been limited technological innovation in the field of clinical neuropsychology in the past century (Bilder & Reise, 2019; Miller & Barr, 2017). Paper-and-pencil tests that were developed before the invention of personal computers remain the most popular tests used by practitioners today (Rabin et al., 2005, 2016). With increasing emphasis on “biomarkers” for neurological conditions (Blennow & Zetterberg, 2018), neuropsychology has become less critical in differential diagnosis. Instead, neuropsychologists are uniquely positioned to characterize patients’ cognitive and behavioral strengths and weaknesses and real-world functioning, a role that has been hampered by the stagnation in technological advancement. This review will first discuss the limitations of traditional neuropsychological assessment and the value of computerized cognitive assessment, followed by an introduction of novel digital technologies that may be adopted to enhance the current standard of neuropsychological practice.
Limitations of Traditional Neuropsychological Assessment
The low frequency of data collection is one of the major limitations of traditional neuropsychological tests. The current standard is to capture a snapshot of a patient’s cognitive functioning in a single in-person visit. Even when there are repeated assessments, they are typically conducted months to years apart in outpatient settings due to the labor-intensive nature of the exams and efforts to minimize practice effects. This limits the ability to detect changes within a shorter time period, such as monthly, weekly, or even daily fluctuations (e.g., cognitive fluctuations observed in Lewy body dementia [Matar et al., 2020]). For disorders with a relapsing–remitting course such as multiple sclerosis (Lublin et al., 2014), being able to detect instantaneous changes may lead to more timely interventions. In addition, neuropsychological assessments are usually conducted in artificially controlled environments (i.e., one-on-one in a quiet room), which restricts ecological validity—one of the most commonly cited limitations by clinical neuropsychologists (Rabin et al., 2016). Traditional clinic-based assessments do not account for contextual factors or elucidate cause-and-effect relationships among clinical variables. For example, real-time stressors, fatigue level, momentary mood, and time of day can all impact cognitive performance (e.g., older adults tend to have better cognitive performance in the morning compared with the afternoon or evening [Walters & Lesk, 2015]), which is difficult to measure within a single neuropsychological assessment session. The inability to comprehensively capture these variables, which are important caveats, may partially contribute to the discrepancy observed between subjective cognitive complaints and objective neuropsychological test scores (Vos et al., 2020). Thus, there is a need to collect high-frequency clinical data in the real-world setting, which may be facilitated by digital technologies.
Moreover, traditional neuropsychological assessment is time-consuming, which limits the number of patients who can be seen in a timely manner and results in extraordinarily long wait lists (Loumidis & Shropshire, 1997). Besides needing better triage and screening systems, finding ways to shorten testing without compromising reliability and validity will make this valuable service more accessible. Furthermore, while neuropsychological instruments have standardized administration procedures, subtle variations inevitably exist among different human raters (Overton et al., 2016). Linguistic and cultural biases are another major limitation since there are very few well-validated non-English instruments with representative norms (Bender et al., 2010). Even in nonverbal measures with limited linguistic demands, biases can still exist (Rosselli & Ardila, 2003) given that most of these tests were developed within very homogeneous samples. Last but not least, a comprehensive neuropsychological evaluation usually contains an assessment of psychopathology. However, this is an area where there has been the least amount of innovation, with most measures solely relying on a patient’s retrospective self-reports (e.g., recalling symptom severity over the past week or two), which may be confounded by recall bias, especially for patients with memory impairment (Hill et al., 2019).
Computerized Cognitive Assessment
As the personal computer became more ubiquitous, computerized versions of some paper-and-pencil neuropsychological tests were developed. These computerized measures may combat some of the aforementioned limitations of traditional neuropsychological tests. For example, computerized paradigms may make testing more efficient by reducing examiner labor and shortening test batteries using computerized adaptive testing in which the difficulty level of test items is automatically adjusted based on previous item performance (Gibbons et al., 2008). Shorter test sessions may free up the neuropsychologist’s time and permit more patients to be seen, further increasing access to testing. Computerized assessment may facilitate group testing, which can enhance access because multiple patients can be tested by one (or no) examiner at once (Schatz, 2017; Sternin et al., 2019). Another benefit of computerized assessment is automated scoring, which saves time and may enable calculation of multiple scores of interest without exorbitant effort, such as comparing to multiple normative samples, applying reliable change (Stein et al., 2010) and regression-based formulas (Duff, 2012), and calculating various process scores to understand patients’ approaches (Milberg et al., 2009). Because the data are already stored in a digital format, they can be easily aggregated into large normative databases (e.g., National Neuropsychology Network [Loring et al., 2022]). Furthermore, computerized paradigms have the potential to be more precise in their presentation of stimuli and measurement of response times, down to the millisecond, compared with human raters (Bauer et al., 2012; Sternin et al., 2019), depending on latency and response time of the hardware and software being used as well as speed and quality of Internet connectivity if required (Nicosia, Wang, et al., 2023; Passell et al., 2021). 
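To make the reliable change approach mentioned above concrete, the sketch below implements the widely used Jacobson–Truax reliable change index (RCI). The function name and the example scores are hypothetical; a real application would draw the test's standard deviation and test–retest reliability from published normative data.

```python
import math

def reliable_change_index(score_1, score_2, sd_baseline, reliability):
    """Jacobson & Truax reliable change index (RCI).

    A change is conventionally considered reliable when |RCI| > 1.96,
    i.e., unlikely to reflect measurement error alone at p < .05.
    """
    sem = sd_baseline * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)                # standard error of the difference
    return (score_2 - score_1) / s_diff

# Hypothetical example: retest score drops from 50 to 44 on a test
# with SD = 10 and test-retest reliability of .80
rci = reliable_change_index(50, 44, sd_baseline=10, reliability=0.80)
print(round(rci, 2))  # -0.95 -> change does not exceed the 1.96 cutoff
```

Automated scoring pipelines could apply this calculation, alongside regression-based formulas, to every retest without added examiner effort.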
Unlike immutable physical stimulus booklets, computerized tests may enable creation of almost unlimited numbers of alternative forms (Zygouris & Tsolaki, 2015) using randomly selected items from a large stimuli bank, which can minimize practice effects. Finally, because computerized testing is automated, test instructions can be translated into different languages (Zygouris & Tsolaki, 2015), allowing testing of non-English speakers without requiring examiners who speak those languages.
Despite these potential advantages, there is limited uptake of computerized testing among neuropsychologists (Rabin et al., 2014). One major concern involves psychometric properties; existing studies show only moderate levels of reliability and low concurrent validity with their paper-and-pencil counterparts or clinician diagnoses (Feenstra et al., 2017; Schmand, 2019). Lack of appropriate norms or validation in the clinical populations in question also precludes consistent usage of computerized measures (Rabin et al., 2014). More importantly, most computerized paradigms validated for clinical use are essentially computerized adaptations of the original paper-and-pencil tests and do not address many limitations of traditional neuropsychological assessment as noted above. It is time for neuropsychologists to consider more radical innovations. In this review paper, we will outline novel digital tools and their potential applications to neurologic care. Specifically, we will discuss ecological momentary assessment (EMA), smartphone-based assessment and sensors, wearable devices, passive driving sensors, smart homes, voice biomarkers, and electronic health record (EHR) mining.
OVERVIEW OF NOVEL DIGITAL TECHNOLOGIES
Ecological Momentary Assessment
EMA is a method of repeatedly sampling an individual’s state, behavior, and context in real time within the individual’s real-world environment (Shiffman et al., 2008). While EMA is not a new method, research using EMA has proliferated in the past several years, as it has become more easily conducted using mobile technology (e.g., smartphone-based administration). Using an EMA framework, information about a participant can be collected actively (e.g., surveys, smartphone-based tasks) and/or passively (using sensors from mobile or wearable devices). EMA enables the collection of longitudinal data (i.e., multiple time points throughout the day or week using active measures, continuous data streams from passive sensors), which can be used to establish real-time temporal associations among symptom self-reports, performances on cognitive and motor tasks, and contextual factors (e.g., surveys about location and activity of participants, time of day based on timestamp, location data from global positioning system [GPS]; Chen, Cherian, et al., 2022; Torous et al., 2018; Weizenbaum et al., 2022). The instantaneous nature of EMA responses reduces recall bias that is often associated with retrospective self-report inventories (Colombo et al., 2019) because patients are asked to report their states and behaviors in real time. In order to gather multiple samples within a relatively short time frame (e.g., within a day, a week, or a month), active EMA surveys/tasks are usually brief (taking only a few minutes at a time). Studies have found convergence between brief EMA symptom reports/task performances and longer gold standard measures administered at baseline in neurologic and psychiatric populations (Harvey et al., 2021; Moore et al., 2022; Sliwinski et al., 2018), validating the use of EMA to study these processes.
EMA complements traditional gold standard assessment in its ability to gather data at higher frequency and within the participant’s naturalistic setting (Shiffman et al., 2008). Such longitudinal and real-world data can help clinicians and patients better understand real-world outcomes, symptom fluctuations, and trajectories over time (Hufford et al., 2001; Schmitter-Edgecombe et al., 2020). Indeed, emerging EMA research has shown significant within-person variability in mood, fatigue, and cognitive and motor functioning that is not captured by traditional single-time-point assessments, among various populations such as healthy aging, multiple sclerosis, and Parkinson’s disease (Chen, Cherian, et al., 2022; Schmitter-Edgecombe et al., 2020; Weizenbaum et al., 2022). It has been proposed that intraindividual variability may be a more sensitive marker for cognitive decline than absolute levels of performance (Anderson et al., 2018; Halliday et al., 2019). Hackett and Giovannetti (2022) illustrated how this intraindividual variability and everyday functioning may be operationalized using digital technologies across the cognitive aging spectrum in their Variability in Everyday Behavior (VIBE) model (see Fig. 1; Hackett & Giovannetti, 2022). According to their model, cognitive abilities decrease subtly in early stages of decline, which may not be detectable by traditional clinical tools but may be captured by high dimensional digital technologies (e.g., decreased social contact or movement via EMA or passive smartphone sensors). During these early stages, intraindividual variability increases, which may be captured by repeated EMA (e.g., surveys, smartphone-based cognitive assessment).
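As a minimal illustration of how repeated EMA scores can be summarized into the intraindividual variability metrics discussed above, the sketch below computes a within-person mean, intraindividual standard deviation (iSD), and coefficient of variation. The participants and scores are invented for illustration.

```python
import statistics

def intraindividual_variability(scores):
    """Summarize one person's repeated EMA scores: mean level plus
    intraindividual standard deviation (iSD) and coefficient of variation."""
    mean = statistics.fmean(scores)
    isd = statistics.stdev(scores)  # within-person SD across sessions
    return {"mean": mean, "isd": isd, "cv": isd / mean}

# Two hypothetical participants with the same average but different stability;
# a single-time-point assessment would not distinguish them
stable   = [52, 50, 51, 49, 50, 48]
variable = [60, 40, 58, 42, 61, 39]
print(intraindividual_variability(stable)["isd"] <
      intraindividual_variability(variable)["isd"])  # True
```

In the VIBE framework, a rising iSD across repeated sessions, rather than the mean itself, may be the earlier signal of decline.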
Because EMA is conducted in the individual’s real-world environment, external influences (e.g., location, stressors, mood, social context) on clinical outcomes such as cognitive performance, psychotic symptoms, and physical fatigue can be directly examined (Harvey et al., 2021; Powell et al., 2017; Shvetz et al., 2021) instead of qualitatively hypothesized. Studies investigating compliance rates of elderly individuals using EMA suggest good compliance in this population, averaging above 80% (Cain et al., 2009; Rullier et al., 2014; Schweitzer et al., 2017; Yao et al., 2023). It is also possible that these rates may be improved through the use of passive EMA methods (outlined below in the discussion of passive sensors), which do not require active engagement from the patient.
Fig. 1.

The Variability in Everyday Behavior (VIBE) model proposed by Hackett and Giovannetti (2022). According to this model, cognitive abilities decrease gradually in early stages of decline and more rapidly in later stages. Intraindividual variability follows an inverse U-shape pattern and peaks during the MCI stage. As it is often difficult to detect subtle cognitive changes during the prodromal and MCI stages, the authors described the advantage of high dimensional data from digital sources in characterizing cognitive functioning and variability in the real world through repeated data points (e.g., EMA, passive sensors). Figure reprinted from the open-access article by Hackett and Giovannetti (2022), distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium.
Smartphone-Based Cognitive Assessments
Given their increasing ubiquity (Pew Research Center, 2021), smartphones have become the ideal vehicle with which to examine real-world health outcomes. Using an EMA framework, brief cognitive tasks can now be self-administered through an individual’s personal smartphone, providing multiple brief snapshots of real-world cognitive functioning. There are a variety of mobile adaptations of gold standard neuropsychological tests such as the Stroop Color and Word Test (Moore et al., 2020), the Trail-Making Test (Fellows et al., 2017), and the Symbol Digit Modalities Test (Lam, Van Oirschot, et al., 2022), which have shown moderate to strong correlations with their paper-and-pencil counterparts. While these tests suffer from the same limits in innovation as existing computerized programs, the brief and mobile nature of smartphone-based tests allows more frequent sampling of the individual’s cognitive functioning in more diverse settings. However, unlike many more mature computerized assessments, research in smartphone-based cognitive assessment is still in its infancy and these tests cannot yet be used for diagnostic purposes. There are ongoing efforts to validate smartphone-based assessments in large normative cohorts. One example is the National Institute on Aging (NIA)-funded Mobile Toolbox (Gershon et al., 2022), which is a smartphone adaptation of the well-established NIH Toolbox (Weintraub et al., 2013).
The biggest advantage of smartphone-based assessment is the ability to gather high frequency, ecologically valid data, which enables the characterization of individual-level changes over time (Liu et al., 2019; Nicosia, Aschenbrenner, Balota, et al., 2022). The high-frequency data may establish a stronger baseline (e.g., the mean of five or ten measurements across multiple days is less influenced by confounders such as sleep deprivation and high stress compared with single-time-point measurements) against which changes may be more reliably measured, especially in cases where there are limited normative comparator samples. That being said, due to the high sampling frequency, there can be significant practice and ceiling effects (Liu et al., 2019), which may be attenuated by incorporating mass practice during the pre-baseline period until performance plateaus or utilizing many alternate forms/stimuli (Goldberg et al., 2015). With any self-administered remote data collection, compliance may be of concern. Recent studies utilizing smartphone-based cognitive assessments have shown compliance rates of greater than 70% (Cerino et al., 2021; Lancaster et al., 2019; Moore et al., 2020; Thompson et al., 2022), demonstrating the feasibility of this approach. However, of note, individuals with lower baseline cognitive scores and greater symptom burden (e.g., pain, fatigue) tend to have lower response rates (Weizenbaum et al., 2022).
Because smartphone-based assessments are typically administered in unsupervised settings, task performance may be influenced by various contextual factors, including location, motivation, ambient temperature, and recent physical and social activities (Weizenbaum et al., 2020). Unsurprisingly, stronger cognitive performances are observed at home compared with unfamiliar locations, where there may be more distractions in the environment (Weizenbaum et al., 2022). Higher ambient temperatures have been shown to negatively impact cognitive performance among populations with known heat sensitivity, such as multiple sclerosis (Pratap et al., 2020). Attention and working memory may be especially vulnerable to contextual factors (Öhman et al., 2022). While these external effects may be considered confounds, they may also provide valuable clinical information. For example, for patients with cognitive complaints who score within the normal range on traditional neuropsychological tests, the feedback session can be greatly enhanced if the clinician can incorporate real-world data, identifying situations where cognitive performance may be suboptimal. Not only can this validate the patient’s concerns, but a personalized recommendation plan can also be developed to minimize negative impact from acute stressors on cognition and maximize productivity by taking advantage of periods of peak cognitive performance.
Passive Smartphone Sensors
Besides being a platform to deliver active assessments, smartphones contain many passive sensors that may be used to characterize real-world functioning. Smartphone studies have used their inertial measurement units (IMUs; measuring movement; consisting of accelerometer, gyroscopes, and magnetometer), GPS, microphones, cameras, keyboard typing features, and general phone usage metrics such as frequency of calls and texts to predict motor, cognitive, and mood symptoms (Chen, Leow, et al., 2022; Dagum, 2018; Jacobson et al., 2020; Lipsmeier et al., 2018; Omberg et al., 2022; Torous et al., 2018). Because smartphone sensor data are multi-dimensional, there is potential to develop highly accurate predictive algorithms that distinguish individuals with conditions such as multiple sclerosis, Parkinson’s disease, Alzheimer’s disease, human immunodeficiency virus (HIV), amyotrophic lateral sclerosis (ALS), schizophrenia, and mild cognitive impairment (MCI) from healthy controls (Hackett & Giovannetti, 2022; Kourtis et al., 2019; Lam, Twose, Lissenberg-Witte et al., 2022; Lipsmeier et al., 2018; McKinney et al., 2020).
Smartphone sensor metrics have been termed digital phenotypes or digital biomarkers, both of which refer to real-time quantification of states and behavior using digital devices including, but not limited to, smartphones (Coravos et al., 2019; Prakash et al., 2021). To facilitate medical discoveries using smartphone sensors, there are open-source frameworks such as Apple’s ResearchKit and CareKit for iOS devices (Apple, 2023) and ResearchStack for Android devices (ResearchStack.org) available for app development. For those who do not have the expertise or resources to develop their own apps, there are open-source solutions available for collecting active and passive smartphone data, including mindLAMP, which was developed by the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center (Vaidyam et al., 2022), and mCerebrum from the Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K; Kumar et al., 2017), with options for integrating with wearable devices. The Research Electronic Data Capture (REDCap) platform now has a participant-facing mobile app called MyCap (Harris et al., 2022) that includes both survey capability and performance-based tasks through Apple’s ResearchKit (e.g., finger tapping).
The biggest advantage of smartphone sensors over active smartphone-based tasks is the unobtrusive nature of data collection, which does not rely on participant adherence. Since users are not required to engage with active tasks, they may be more willing to be monitored over longer periods of time, which may be more conducive to early detection of disease (Kourtis et al., 2019) compared with traditional clinic visits, which do not typically take place until more apparent symptoms manifest. One example is the use of keyboard dynamics and smartphone IMUs to passively collect data that can indicate the presence of conditions such as essential tremor and multiple sclerosis (Brindha et al., 2022; Lam, Twose, McConchie et al., 2022). Another example is the potential to continuously assess aspects of cognitive functioning via speech dynamics using the smartphone microphone. These methods allow for continuous (or near-continuous) data collection, which provides a richer picture of daily functioning, compared with discrete time points from active smartphone-based tasks or single time points from clinic-based assessment. The highly longitudinal data can provide a more reliable picture of the individual’s functioning. Outliers due to unique circumstances (e.g., higher-than-usual stress levels, sleep deprivation) will have less influence on the overall mean. When outliers truly represent transient disease activity and processes (e.g., relapses), they will be more easily noticeable as they stand out from the average range of data points. Furthermore, the process of actively taking an assessment alters the environment and may thus alter a person’s performance. Passive smartphone sensors make the act of “measuring” less obvious, and the individual is more likely to exhibit their usual behavior.
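The idea of flagging transient disease activity against a person's own longitudinal baseline can be sketched simply: compare each day's value with the trailing mean and standard deviation. The step counts, window length, and cutoff below are hypothetical choices for illustration, not parameters from any published algorithm.

```python
import statistics

def flag_outlier_days(daily_values, window=14, z_cutoff=3.0):
    """Flag days whose value deviates markedly from the person's own
    trailing baseline (mean over the prior `window` days)."""
    flags = []
    for i, value in enumerate(daily_values):
        baseline = daily_values[max(0, i - window):i]
        if len(baseline) < 7:  # need enough history to judge
            flags.append(False)
            continue
        mean = statistics.fmean(baseline)
        sd = statistics.stdev(baseline)
        flags.append(sd > 0 and abs(value - mean) > z_cutoff * sd)
    return flags

# Hypothetical daily step counts: a sudden drop stands out from the baseline
steps = [8000, 8200, 7900, 8100, 8050, 7950, 8000, 8100, 2000]
print(flag_outlier_days(steps))  # only the last day is flagged
```

Because each person serves as their own comparator, this kind of within-person flagging needs no normative sample, though in practice more robust statistics would be used to handle missing days and gradual drift.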
Moreover, smartphone sensors may provide a more objective measure of constructs traditionally reliant on self-report, such as mood symptoms and neuropsychiatric behavioral disruptions (Barnett et al., 2018; Jacobson et al., 2019) or nonobservable physiological symptoms such as pain and fatigue (Jacobson & O'Cleirigh, 2021; Qiao et al., 2016). Metrics such as frequency of calls and messaging, time spent on calls and social phone applications, and movement in and out of the home based on GPS location can help infer mood and behavior (Barnett et al., 2018; Jacobson et al., 2019). Changes in facial expressions based on video recordings and reduction in physical activity may elucidate pain and fatigue (Jacobson & O'Cleirigh, 2021; Qiao et al., 2016). Since self-report regarding activity level and changes in social contacts can vary among individuals and within persons over time, developing objective metrics based on smartphone sensor data may minimize such bias (Harari et al., 2016; Mohr et al., 2017).
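A common GPS-derived behavioral metric of the kind described above is "homestay", the fraction of location samples falling near the home. Below is a minimal sketch, assuming the home coordinates are known and using an arbitrary 100 m radius; the sample coordinates are invented.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def home_time_fraction(gps_samples, home, radius_m=100.0):
    """Fraction of GPS samples within `radius_m` of home -- a simple
    proxy for homestay used in digital phenotyping studies."""
    at_home = sum(haversine_m(lat, lon, *home) <= radius_m for lat, lon in gps_samples)
    return at_home / len(gps_samples)

# Hypothetical day: 3 of 4 evenly spaced samples fall near home
home = (40.0000, -83.0000)
samples = [(40.0001, -83.0001), (40.0000, -83.0000),
           (40.0500, -83.0500), (40.0002, -83.0000)]
print(home_time_fraction(samples, home))  # 0.75
```

A sustained rise in homestay, computed this way from passive location data, could objectively corroborate self-reported social withdrawal or fatigue.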
For smartphone studies that examine activity and location, one should be cognizant that smartphone sensor data may underestimate activity level as users may not always have their phones with them. Moreover, in order to observe participants’ natural behavior and to reduce costs, many smartphone studies utilize participants’ own devices (instead of providing study devices). While this allows results to be more generalizable across platforms, there can be additional noise from differences in hardware and software which may need to be corrected. For example, older devices or operating systems may be slower and their sensors may be less accurate (Nicosia, Wang, et al., 2023). Furthermore, the impact the immediate context has on phone sensor quality must be considered, such as high ambient temperature causing potential overheating and slowed processing. Studies that establish norms for various devices and operating systems will be vital to ensuring that assessments using these sensors are valid and reliable.
Wearable Devices
To gather more accurate activity and location data or physiological signals that are unavailable through a smartphone, wearable devices may be more suitable. There is a myriad of research/medical-grade and commercially available wearable devices that contain sensors such as IMUs, portable electrocardiograms or photoplethysmography to detect heart rate, portable electroencephalograms, skin temperature sensors, pedometers, pressure sensors, GPS, and cameras (Iqbal et al., 2021). These sensors can be placed on various parts of the body depending on the signal source of interest, such as the wrist, ankles, chest, head, lower back, or neck (Iqbal et al., 2021). Wearable devices have been used to study a variety of neurologic conditions including the MCI and dementia spectrum, movement disorders such as Parkinson’s disease and essential tremor, and mixed neurologic samples such as stroke, multiple sclerosis, and epilepsy (Bowman et al., 2021; Brindha et al., 2022; Khosroazad et al., 2023; Silva et al., 2017; Zhou et al., 2020). Wearable devices that can be used to monitor physiological states (e.g., heart rate, skin conductance) can further augment smartphone sensors in objectively measuring mood and psychopathology. For example, heart rate variability and skin temperature, in addition to activity levels, may help distinguish between depressive and manic episodes among individuals with bipolar disorder (Maatoug et al., 2022). For those interested in examining wearable device data, the National Institutes of Health (NIH)-funded All of Us research program contains publicly available Fitbit data, in addition to surveys, physical measurements, and EHRs (All of Us Research Program Investigators, 2019).
Like smartphone sensors, wearable devices yield high dimensional, continuous data that can be used as inputs for machine learning models with high accuracy. For example, movement and respiratory sensor-derived measures have been used to distinguish individuals with MCI from those with normal cognition (Khosroazad et al., 2023). Abnormal motor outcomes such as tremors and falls can be detected using IMU (movement)-based wearable devices (Brindha et al., 2022; Zhou et al., 2020). Many of these wearable devices and algorithms are ready for prime time, including hardware and software currently authorized by the Food and Drug Administration (FDA) to assess for sleep disorders, seizures, and Parkinson’s disease symptoms (U.S. Food and Drug Administration, 2023a, 2023b, 2023c). Besides assessment, wearable devices may be integrated into interventions. Wearable cameras have been used as memory aids, which have led to patient reports of reduced depression, higher levels of functioning in daily activities, and improved autobiographical memory (Dassing et al., 2020; Silva et al., 2017). Wearable devices may augment biofeedback interventions, most commonly explored during exercise by providing participants with real-time feedback about their gait and balance (Bowman et al., 2021). While passive sensing does not face the same compliance issue that active assessment does, it is still worth considering the role of the user in data collection and quality. For example, users will still have to remember to wear the devices, charge the devices if necessary, and ensure that the devices are worn and used correctly.
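As a toy illustration of IMU-based motor event detection, the sketch below flags accelerometer samples whose magnitude exceeds an impact threshold. Validated fall-detection algorithms are far more sophisticated (e.g., modeling a free-fall phase, impact, and post-fall posture); this is only a schematic, with made-up thresholds and samples.

```python
import math

def detect_fall_candidates(accel_samples, g=9.81, impact_g=3.0):
    """Flag indices of (x, y, z) accelerometer samples (in m/s^2) whose
    magnitude exceeds `impact_g` multiples of gravity -- a crude proxy
    for an impact event such as a fall."""
    flagged = []
    for i, (x, y, z) in enumerate(accel_samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > impact_g * g:
            flagged.append(i)
    return flagged

# Hypothetical stream: quiet standing (~1 g), then a sharp impact spike
samples = [(0.1, 0.2, 9.8), (0.0, 0.1, 9.7), (5.0, 30.0, 25.0), (0.1, 0.1, 9.8)]
print(detect_fall_candidates(samples))  # [2]
```

In a deployed system, candidate events like these would feed a downstream classifier rather than directly triggering an alert, to keep false alarms manageable.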
Passive Sensing of Real-World Driving
An emerging application of passive sensing technology involves monitoring of real-world driving abilities. GPS devices, cameras, radar, accelerometers, yaw rate sensors, and custom devices made to capture vehicle operation metrics can be placed in vehicles to collect information related to driving performance every time the vehicle is operated. This includes the average/total distance traveled, travel settings (e.g., highway vs. local roads), destinations, number of trips, trip timestamps, hand brake usage, sudden acceleration or deceleration, and over/under speeding (Babulal et al., 2016, 2021; Bayat et al., 2021; Eby et al., 2012; Marshall et al., 2013). Alterations in real-world driving performance have been found to be associated with symptomatic Alzheimer’s disease, such as driving shorter distances, visiting fewer unique destinations, and having a smaller driving space (Bayat et al., 2021; Davis et al., 2012; Kostyniuk & Molnar, 2008). Due to their high-frequency nature, sensor-derived driving data may elucidate subtle cognitive decline in preclinical stages, during which time the patient may not exhibit clinical impairment on traditional neuropsychological tests (Allison et al., 2016; Bayat et al., 2021; Coughlan et al., 2018). Studies have found alterations in real-world driving behavior and increased driving errors in people with preclinical Alzheimer’s disease when compared with healthy controls, demonstrating the feasibility of using driving performance as a neurobehavioral marker for early stages of decline (Babulal et al., 2021; Roe et al., 2017, 2019).
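Several of the driving metrics above (number of trips, total distance, unique destinations) can be derived from simple trip logs. Below is a sketch with hypothetical data, approximating "unique destinations" by rounding trip endpoint coordinates to roughly 100 m so repeat visits to the same place collapse together.

```python
def driving_metrics(trips):
    """Summarize sensor-logged trips, each given as
    (distance_km, dest_lat, dest_lon)."""
    total_km = round(sum(d for d, _, _ in trips), 1)
    # Round coordinates to ~100 m so repeat visits count as one destination
    destinations = {(round(lat, 3), round(lon, 3)) for _, lat, lon in trips}
    return {"n_trips": len(trips),
            "total_km": total_km,
            "unique_destinations": len(destinations)}

# Hypothetical week of trips: grocery store visited twice, clinic once
trips = [(3.2, 40.012, -83.051),   # grocery store
         (3.1, 40.012, -83.051),   # grocery store again
         (8.7, 40.101, -83.140)]   # clinic
print(driving_metrics(trips))
# {'n_trips': 3, 'total_km': 15.0, 'unique_destinations': 2}
```

Tracked over months, shrinking values on metrics like these are exactly the kind of gradual change in driving space that single clinic visits cannot observe.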
Detecting driving behavior using sensors is advantageous over traditional methods of assessing driving performance, such as on-the-road tests, driving simulators, and self-report inventories and diaries (Babulal et al., 2016; Bayat et al., 2021), which suffer from limitations in objectivity, cost-effectiveness, availability, reliability, and generalizability (Bayat et al., 2021; Eby et al., 2012). By repeatedly collecting data points on metrics such as speed, acceleration, and location throughout operation of the vehicle, driving behaviors and performance can be assessed continuously in a naturalistic environment, with high ecological validity and more precise characterization of intraindividual performance over time (Babulal et al., 2016). Beyond assessment, real-time monitoring of driving may help protect older or impaired drivers. Abnormal driving behaviors can be flagged in real-time to alert caregivers, and GPS allows locating drivers who have become disoriented and lost (Babulal et al., 2021). Furthermore, as some car manufacturers have already begun to do, sensor-derived driving metrics may inform real-time feedback and assistance features (e.g., warning drivers of oncoming obstacles, guiding the driver during difficult lane changes).
While promising, there are still many methodological issues that need to be worked out in this research. Depending on budget and feasibility, sensors and analytic approaches vary significantly across studies. There is no consensus regarding common data elements for assessing real-world driving. Another major limitation is that older drivers and drivers in the beginning stages of decline often compensate by driving less frequently and avoiding more difficult driving situations (e.g., highways, nighttime; Crizzle & Myers, 2013; Seelye et al., 2017). Consequently, subtle changes in driving ability (e.g., slower reaction time when braking, not seeing oncoming traffic when turning or yielding) require a closer examination of specific driving behaviors and situations, which has not been well explored in the extant literature.
Smart Homes
Smart homes refer to residences that integrate various automated electronic systems into the home (Robles & Kim, 2010). This can range from a home containing intelligent objects and systems that an individual can interact with using an electronic home controller (Robles & Kim, 2010) to homes with sensors built into any number of appliances or items, ultimately allowing for unobtrusive gathering of data (Martin et al., 2008). Examples include smart carpets and pressure mats that passively record aspects of a person’s gait, automated pill boxes that track medication adherence, sensors such as accelerometers built into the bed to monitor sleep quality, motion detectors that monitor movement between rooms and in/out of the house, and sensors that track power usage and ambient temperature (Ahamed et al., 2020; Gali et al., 2020; Khosroazad et al., 2023; Kim et al., 2020; Santos et al., 2022). As a result, smart homes can simultaneously capture multiple streams of information about the environment, the people within it, and how people interact with the environment and each other. Not only do the capabilities of smart homes allow information to be passively collected, but they also allow for active interaction with tenants for interventional purposes (Martin et al., 2008). Automated processes can activate smart light bulbs, audio from smart speakers, or other types of notification systems to alert and interact with those in the home (Ault et al., 2020; Boumpa et al., 2019; Gali et al., 2020), and robotic systems can physically assist people in daily activities (Wilson et al., 2019).
Studies using smart home technology have demonstrated utility in detecting MCI, dementia, and sleep-related disorders in elderly individuals (Ahamed et al., 2020; Khosroazad et al., 2023; Kim et al., 2020). Technologies such as smart carpets/mats, smart lights, smart speakers, and robots have been shown to be effective tools for rehabilitation, assistance with activities of daily living, and emergency intervention for individuals with gait- or walking-related issues and dementias (Ault et al., 2020; Boumpa et al., 2019; Gali et al., 2020; Santos et al., 2022; Wilson et al., 2019). Additionally, smart home technologies can improve caregivers’ quality of life (Ault et al., 2020), through robotics that perform caregiving tasks in place of the caregiver and automated alert systems that reduce the amount of active oversight of patients required from the caregiver (Ault et al., 2020; Boumpa et al., 2019; Gali et al., 2020; Santos et al., 2022; Wilson et al., 2019). For the interested reader, the Oregon Center for Aging and Technology at Oregon Health & Science University has made its smart home data available to external researchers (Lyons et al., 2015). Of note, the comprehensiveness of smart homes in capturing an individual’s daily life can also be a disadvantage, as it can feel invasive for the user. Privacy-protecting measures should be considered, such as blurring faces in video data, clearly indicating to individuals in the household when video and/or audio recording is occurring, allowing users to turn recordings on or off, and password protection. The large quantities of data from smart homes may also introduce analytic challenges. Future research on how best to synthesize and interpret these large amounts of multimodal data will be important for the practical use of smart homes in continuous patient assessment and monitoring.
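As a minimal illustration of the alerting capability described above, the sketch below implements a simple inactivity rule over motion-sensor timestamps: if no event is seen for longer than a configurable window, a caregiver notification could be triggered. The event format and the four-hour threshold are hypothetical, not drawn from the cited systems.

```python
# Hypothetical smart-home alerting rule: report silent gaps between
# motion-sensor events that exceed a configurable maximum.
from datetime import datetime, timedelta

def inactivity_alerts(events, max_gap=timedelta(hours=4)):
    """events: chronologically sorted motion-sensor datetimes.
    Returns (start, end) pairs for every silent gap exceeding max_gap."""
    alerts = []
    for prev, curr in zip(events, events[1:]):
        if curr - prev > max_gap:
            alerts.append((prev, curr))
    return alerts

# Motion at 7:00 and 8:30, then nothing until 15:00 — a 6.5-hour gap.
day = [datetime(2024, 5, 1, 7, 0), datetime(2024, 5, 1, 8, 30),
       datetime(2024, 5, 1, 15, 0)]
print(inactivity_alerts(day))  # one alert, for the 8:30–15:00 gap
```

A deployed system would of course need to model expected routines (e.g., sleep, outings) before treating inactivity as a safety signal.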
Voice Biomarkers
Voice biomarkers refer to the features captured from audio signals of human voices (Fagherazzi et al., 2021). Voice biomarkers can elucidate speech alterations (i.e., changes in duration of unvoiced segments, speech rate, duration of syllables), which may be among the earliest indicators of cognitive decline (Ambrosini et al., 2019; Calzà et al., 2021). This is meaningful because studies suggest that neuropathological changes could begin many years before the onset of apparent clinical symptoms (Hajjar et al., 2023; Mahon & Lachman, 2022) and may be difficult to detect by the patients themselves or through traditional neuropsychological assessment. For example, using its rich longitudinal data, researchers from the Framingham Heart Study (Lin et al., 2020) found that early speech alterations (e.g., more hesitations and pauses) during the asymptomatic phase were predictive of the later development of dementia. Since voice biomarkers can easily be recorded, they can be used to monitor patients’ cognitive status in a simple, rapid, and reproducible way and assist with clinical diagnosis (Alhanai et al., 2017). Speech and language analyses usually require large corpora, or collections of written or spoken words. Some of the most commonly used publicly available corpora come from the TalkBank project (MacWhinney, 2007), which includes speech recordings and transcripts from a myriad of clinical populations, including dementia, aphasia, and traumatic brain injury.
There is a variety of methods to collect voice biomarker data, ranging from recording spontaneous speech to structured language tasks (Robin et al., 2020), which may be analyzed using speech and language algorithms (e.g., acoustic analysis of speech, automatic speech recognition, natural language processing [NLP]). Regarding spontaneous speech, an experienced clinician may be able to qualitatively detect subtle speech abnormalities that reflect cognitive deficits (e.g., paraphasias, word-finding difficulties, abnormalities in pitch, clarity, and intensity) during the clinical interview, and these may be quantified using automatic speech analysis techniques (Martínez-Nicolás et al., 2021). Even with existing language tasks (e.g., verbal fluency, confrontation naming), speech algorithms may provide additional process information (e.g., speed or pattern of word retrieval, error types) that can greatly complement total summed scores (Putcha et al., 2020). In terms of limitations, cross-language studies are scarce, with the majority of research in this area conducted with English-speaking samples, although some studies have demonstrated the feasibility of using automatic speech analysis across different languages (Chien et al., 2018; Espinoza-Cuadros et al., 2014; Mirzaei et al., 2018; Satt et al., 2013). For example, a multilingual Parkinson’s disease study identified certain voice features that were considered language-independent, at least among the English, Italian, Spanish, German, and Czech speakers in the study. These language-independent features included pitch variation, speech rhythm, pause time, silence duration, length of speech, and number of nouns and auxiliaries (Favaro et al., 2023).
Due to the early stage of this type of research, there are currently no standard definitions of voice biomarker features, but some of the most common features can be classified as lexical, syntactic, acoustic, and paralinguistic (Beltrami et al., 2018). Lexical features are derived from the words and meanings that people use when they speak. Some common lexical features include vocabulary size, content density (i.e., the ratio of open-class words [nouns, verbs, and descriptive terms] to closed-class [grammatical function] words) (Land Jr et al., 2020), and lexical content (i.e., proportions of pronouns and verbs) (Ahmed et al., 2013). People with cognitive impairment may experience word-finding difficulties, which may manifest as less frequent use of high-frequency words and word modifiers (i.e., adjectives) (Yeung et al., 2021). Syntactic features capture grammatical correctness, coherence, and syntactic complexity (i.e., mean length of utterance, number of embedded clauses, syntactic errors, nouns preceded by determiners, and verbs with inflections). Patients with cognitive deficits may show more syntactic errors and general simplification (e.g., fewer complex phrases and embedded structures) in their speech compared with people with normal cognition (Zanini et al., 2010). Acoustic features are the characteristics of speech sounds, including language fluency (i.e., production rate, number of pauses, hesitation), response time, length of segments, and false starts of sentences (Alhanai et al., 2017; Ammar & Ayed, 2021). More unique features such as relative variation in energy and vocal pitch can also be extracted from sound files (Rapcan et al., 2010). Extant research indicates that, compared with acoustic features, more impairments are observed in lexical and syntactic features in the early stages of cognitive decline (Ahmed et al., 2013; Alhanai et al., 2017).
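To ground one of the lexical features described above, the toy sketch below computes content density, here taken as the ratio of open-class words to closed-class words, over a transcript. The closed-class word list is a tiny illustrative subset, not a validated lexicon, and real pipelines would use part-of-speech tagging rather than a fixed list.

```python
# Toy sketch of the content-density lexical feature: open-class words
# per closed-class (grammatical function) word. CLOSED_CLASS is a small
# illustrative subset, not a complete or validated lexicon.
CLOSED_CLASS = {"the", "a", "an", "of", "to", "and", "in", "is", "it",
                "that", "for", "on", "with", "was", "he", "she", "they"}

def content_density(transcript):
    """Ratio of open-class to closed-class tokens in a transcript."""
    tokens = [w.strip(".,!?").lower() for w in transcript.split()]
    closed = sum(1 for w in tokens if w in CLOSED_CLASS)
    open_class = len(tokens) - closed
    return open_class / closed if closed else float("inf")

# Picture-description-style utterance (5 open-class / 6 closed-class tokens).
sample = "The boy is reaching for the cookie jar on the shelf."
print(round(content_density(sample), 2))  # 0.83
```

Lower values on such a measure would reflect the grammatical-word-heavy, content-sparse speech sometimes seen in cognitive decline.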
Electronic Healthcare Record (EHR) Mining
As of 2021, 88% of hospitals in the United States had adopted EHRs (Office of the National Coordinator for Health Information Technology, 2022). EHRs have enabled the aggregation of complex patient data into easily accessible databases that better inform diagnosis and treatment compared with paper-based medical record systems (Payne et al., 2015). These databases, which include structured fields on patients’ demographics, diagnoses, procedures, medications, and test results, have allowed researchers to study large samples of clinical populations without the expense of collecting their own data (Casey et al., 2016). Beyond quality improvement within a particular healthcare system, researchers can use the EHR for public health inquiries, identifying risk factors for disease, determining the effects of treatments through natural experiments, and evaluating risk for various clinical/functional outcomes (Casey et al., 2016). For example, EHR data have helped uncover clinical outcomes associated with coronavirus disease (COVID-19) infection, including the risk of developing neurologic and psychiatric conditions (Taquet et al., 2021). There are publicly available EHR data that can be leveraged by external researchers, including the Medical Information Mart for Intensive Care III (MIMIC-III; Johnson et al., 2016), the All of Us research program (All of Us Research Program Investigators, 2019), and the National COVID Cohort Collaborative (N3C; Haendel et al., 2021). In recent years, researchers have also begun exploring the possibility of integrating patient-generated data from digital systems with real-time monitoring functions into EHRs. For example, sleep data from smartphone apps can be imported into the EHR for clinicians to review, which may be more reliable than patients’ retrospective self-report data (Zulueta et al., 2020).
In addition to the structured fields that can be relatively easy to extract and analyze, there is a large proportion of unstructured clinician notes that require more advanced analytic methods such as NLP (Koleck et al., 2019). The most common approach is to identify frequencies of particular words and phrases that are related to the population or construct of interest within the free text notes. These features may then be used as inputs for algorithms to predict a clinical outcome of interest (e.g., diagnosis). Studies have shown the feasibility of predicting risk for neurologic conditions such as dementia, multiple sclerosis, and delirium by searching for condition-related terms (e.g., symptoms) within unstructured notes (Chase et al., 2017; McCoy Jr. et al., 2019; Pichon et al., 2021; Wong et al., 2018). EHR algorithms have the potential to identify onset of disease before formal diagnosis, which may lead to more timely interventions (Chase et al., 2017; McCoy Jr. et al., 2019; Wong et al., 2018). Besides utilizing symptom keywords, EHR data have also been used to identify lifestyle risk factors for cognitive decline, which can further refine risk algorithms (Zhou et al., 2019).
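The keyword-frequency approach described above can be sketched in a few lines: count occurrences of condition-related terms in a free-text note and use the counts as a feature vector for a downstream risk model. The term list and note below are entirely made up for illustration; real studies use curated clinical vocabularies and far more sophisticated NLP.

```python
# Illustrative sketch of keyword-frequency feature extraction from
# unstructured clinician notes. Term list and note text are hypothetical.
import re

DEMENTIA_TERMS = ["memory loss", "disorientation", "word-finding",
                  "getting lost", "repeats questions"]

def keyword_features(note, terms=DEMENTIA_TERMS):
    """Return a dict mapping each term to its (case-insensitive)
    occurrence count in the note."""
    text = note.lower()
    return {t: len(re.findall(re.escape(t), text)) for t in terms}

note = ("Patient's spouse reports memory loss over the past year; "
        "patient repeats questions and had one episode of getting lost.")
print(keyword_features(note))
```

The resulting counts would typically be combined with structured fields (age, diagnoses, medications) as inputs to a classifier predicting the outcome of interest.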
Because EHR data are collected for clinical purposes rather than for systematic research, there are several limitations in using them to answer questions for which the data were not collected. There can be large variance across providers and health systems in data completeness, accuracy, and consistency (Kataria & Ravindran, 2020; Sarwar et al., 2022). There are also natural biases in the EHR based on patients’ specific medical circumstances; for example, sicker patients tend to have more health records than healthy patients (Graham et al., 2020; Weiskopf et al., 2013). These factors are potential confounds that need to be controlled.
DISCUSSION
The digital tools reviewed in this paper have tremendous promise in bringing neuropsychology into the 21st century. Singh and Germine’s (2021) hybrid neuropsychology model provides a useful road map for integrating digital tools into neuropsychological practice, including gradually incorporating digital methods of data collection, leveraging data science in the collection, storage, and retrieval of multimodal sources of data, and engaging in interdisciplinary collaborations (see Table 1). However, there is still much work to be done before these applications are ready for standard clinical use. In the following sections, we describe ethical considerations, limitations of current research, and our vision of the future of neuropsychology.
Table 1.
Hybrid neuropsychology model proposed by Singh and Germine (2021).
| | Action item 1 | Action item 2 | Action item 3 |
|---|---|---|---|
| Action item | Develop a technology-based practice | Integrate data science | Engage with innovators in other fields |
| Goal | Incorporate multiple modalities when collecting patient data | Aggregate data collected across patients and multiple modalities | Collaborate with innovators to develop, share, and implement new digital tools |
| Benefits | • Timely delivery • Increased precision and accuracy • Ability to monitor changes in cognition • Access to state- and context-dependent functioning • Greater scalability of the field • Improved health equity | • Improved methods for accessing, storing, and sharing patient data • Greater consistency and integration of data points collected across clinics • More straightforward development of a national data repository and new tests | • Better integration with technology-based fields • Ability to fill gaps in evaluations by measuring constructs not currently assessed using traditional tests • Greater exposure to medical and scientific community |
| Barriers | • Validity of novel digital tools • Resistance to adopt innovative technologies | • Less familiarity with data science • Reaching a consensus regarding types of data management tools to use | • Risk aversion |
| How to address barriers | • Cross-validate new tools with traditional tests • Ensure validity through empirical studies and strengthen communication within the field | • Collaborate with individuals in information technology and data science • Increase communication within the field regarding best practices | • Recognize that neuropsychology can be innovative without eliminating traditional, trusted measures |
Note: Table from Singh and Germine (2021), with permission to reprint from the source journal.
Ethical Considerations/Limitations of Digital Technologies
Patient privacy and confidentiality are major concerns for any remote monitoring approach. To prevent unauthorized access to patient data, proper encryption infrastructure for both storage and transmission of data (e.g., uploading from device to cloud-based storage) should be in place. Collection of identifying information through these technologies should be avoided when possible. If a participant has already enrolled in a study or clinic, an anonymized research or clinic ID can be used to log into the apps instead of their name, minimizing the impact of a breach of confidentiality. If age is required for an algorithm, age rather than date of birth should be used when this does not interfere with the research or clinical protocol. Collection of information that is not essential for the research or clinical question should be avoided to protect privacy. For example, in most keystroke dynamics studies (Alfalahi et al., 2022), the content of what is typed (i.e., exact words and sentences) is not gathered, since researchers are primarily interested in the pattern of typing (e.g., typing speed, rate of typing errors) and not the content. It is also worth being cautious when using third-party software and hardware to collect data, which may lead to personal identification in unexpected ways; for example, sensor data may reveal IP addresses (Martinez-Martin, Greely, et al., 2021; Martinez-Martin, Luo, et al., 2021). Furthermore, it is important to verify the identity of the user when they log into the apps, both to protect any demographic or clinical information that can be accessed through the app dashboards and to ensure that any intervention triggered by information collected through the devices is directed at the actual user (not someone else using their device). User accounts should be password-protected at a minimum, and more advanced biometric verification (e.g., fingerprint, facial features) may be considered as well.
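The keystroke dynamics example above suggests a simple privacy-preserving pattern: retain timing, discard content. The sketch below illustrates that idea; the event format is hypothetical, and real studies capture richer features (hold times, error rates) via dedicated keyboard instrumentation.

```python
# Hedged sketch of content-free keystroke capture: only inter-key timing
# is retained; the typed characters are discarded, never stored.
def timing_only(key_events):
    """key_events: list of (timestamp_ms, character) pairs.
    Returns inter-key intervals in ms, dropping the characters."""
    times = [t for t, _ in key_events]
    return [b - a for a, b in zip(times, times[1:])]

# Five keystrokes; the output reveals typing rhythm but not the word typed.
events = [(0, "h"), (120, "e"), (260, "l"), (395, "l"), (540, "o")]
print(timing_only(events))  # [120, 140, 135, 145]
```

Structuring the capture layer this way means sensitive content never leaves the device, which is stronger than collecting full keystrokes and deleting them later.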
For remote monitoring methods that may involve people other than the patient in question (e.g., other household members in a smart home, bystanders captured by wearable cameras), consent from these external individuals may need to be obtained. If obtaining consent is not feasible, measures need to be taken to ensure privacy of these individuals (e.g., giving participants the option of turning off a wearable camera as needed, blurring the faces of bystanders automatically during preprocessing of videos before analysis).
Technological/digital literacy is another important consideration. Some of the aforementioned technologies require more active engagement from the user (e.g., smartphone-based assessments), which may not be appropriate for individuals with limited familiarity with these devices and software, illiterate or linguistically diverse populations, or cognitively impaired clinical groups. As a result, it will be important to account for factors such as comfort and familiarity with these devices and whether variance in these factors contributes to reduced reliability in measurements; steps can then be taken to reduce these confounding factors, such as using more passive monitoring approaches (e.g., smart homes) with some individuals. Of note, research has shown that lower levels of technological literacy do not inherently lead to reduced adherence (Nicosia, Aschenbrenner, Adams, et al., 2022), highlighting the importance of considering more than technological literacy when assessing whether a particular technology is appropriate for a person. While smartphones and many wearable devices are becoming increasingly ubiquitous in the United States even among lower resourced populations (Pew Research Center, 2021), this may not be true for all parts of the world, which may have limited access to these technologies. Even among groups who own their own devices such as smartphones and wearables, research has demonstrated differences in quality (e.g., lag times) among devices of varying prices; unsurprisingly, less expensive smartphones have longer display and touch latencies than more expensive smartphones (Nicosia, Wang, et al., 2023). This is an important caveat, especially when response time is a primary outcome. Without appropriate adjustments for hardware/software, individuals with low socioeconomic status may be unfairly penalized.
Limitations of Current Research/Future Directions
Before these technologies can be used in standard neurological/neuropsychological practice, it is important to address their current limitations. Further psychometric studies are needed to demonstrate the reliability and validity of these new metrics (Bauer et al., 2012). However, simply validating these novel technologies against traditional paper-and-pencil tests is not enough. Since the goal of the novel technologies is to address the limitations of traditional neuropsychological measures, it is critical to utilize ground truths that more accurately represent the patient’s real-world functioning, such as subjective (especially informant-based) and objective measures of activities of daily living and clinician diagnoses. Also, one of the largest limitations present in all of the novel digital tools reviewed is the restricted generalizability of existing research due to small and homogenous samples. Thus, it will be important for future research to verify the validity of these technologies with larger and more diverse samples. In portable active assessment technologies, such as smartphone-based cognitive assessment or symptom questionnaires via EMA, potential confounders in the environment can continuously arise, such as distracting surroundings and device/connectivity failure. Thus, it will be important to gather contextual information through device metadata or additional sensors. This will help elucidate not only unintentional confounders but also intentional cheating, which may be difficult to monitor without human supervision (Nicosia, Wang, et al., 2023).
Furthermore, some studies have demonstrated limitations in specificity when distinguishing among different diseases and syndromes (Hajjar et al., 2023; Lin et al., 2020), necessitating further studies using modern psychometrics and machine learning methods to identify more specific phenotypes. This is an important step toward overcoming the specificity issues already present in traditional neuropsychological assessment, through the use of higher dimensional data that allow for more accurate predictive models. Another potential limitation is the between-person reliability of various hardware and software. Because manufacturers update devices rapidly and device quality varies widely, between-person differences in factors such as processing speed or quality of recorded information pose a limitation to the reliability of measurements performed by these devices (Germine et al., 2019). Thus, despite potential improvements in precision over human raters, variables such as differences in device display and latencies (Nicosia, Wang, et al., 2023; Passell et al., 2021) must be controlled. Hardware- and software-specific norms will also be helpful. Practice effects are another limitation that will have to be accounted for in these continuous or high dimensional data streams (Hackett & Giovannetti, 2022; McKinney et al., 2020). Digital stimuli are uniquely equipped to address this limitation by creating an almost unlimited number of alternate forms.
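The alternate-forms idea above can be sketched simply: draw non-overlapping item sets from a large calibrated pool with a reproducible seed, so repeat testing never reuses items. The item pool below is a placeholder; a real test would sample from a bank of items matched on difficulty and other psychometric properties.

```python
# Sketch of digitally generating alternate test forms: disjoint item lists
# drawn from a larger pool with a reproducible seed. POOL is a stand-in
# for a calibrated item bank, not a real test.
import random

POOL = [f"word{i:03d}" for i in range(200)]

def alternate_forms(pool, n_forms, items_per_form, seed=0):
    """Shuffle the pool reproducibly and slice it into disjoint forms."""
    rng = random.Random(seed)
    shuffled = pool[:]
    rng.shuffle(shuffled)
    return [shuffled[i * items_per_form:(i + 1) * items_per_form]
            for i in range(n_forms)]

forms = alternate_forms(POOL, n_forms=3, items_per_form=15)
# Forms share no items, so retesting reuses nothing.
print(all(set(a).isdisjoint(b) for a in forms for b in forms if a is not b))  # True
```

Because the slices never overlap, each administration sees fresh items, directly mitigating the practice effects noted above (though parallel-form equivalence would still need empirical verification).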
Future of Neuropsychology
We envision future neuropsychological practice to incorporate some if not most of these novel digital technologies. Unobtrusive monitoring technologies (e.g., smart home and wearable sensors) may be easily integrated into the lives of at-risk populations (e.g., older adults) to monitor for possible decline. This addresses a common problem encountered in neuropsychological evaluations, where it is often difficult to estimate the patient’s premorbid functioning. Diagnostic algorithms embedded within the EHR may serve as another early detection system and alert providers when their patient may be at risk of cognitive decline and appropriate referrals for neurological or neuropsychological services can be made. Risk scores may also be developed using these EHR algorithms to help with triage decisions.
For the actual neuropsychological evaluation, it is not implausible for clinicians, with appropriate informed consent, to tap into the patient’s existing devices (e.g., smartphones, wearables) to get a sense of their real-life functioning, which may complement the in-clinic assessment. Computerized assessment may make testing more efficient (and capture more precise response times) and reduce burden for both the patient and the clinician. For those concerned about losing human observation of important qualitative information during computerized assessment, many behavioral indicators can be easily programmed into the testing software. For example, time spent on each screen can be recorded, which may help detect inattentiveness to task instructions and/or impulsivity. Additional sensors (e.g., cameras, eye-tracking glasses, activity and physiological monitoring) may be deployed concurrently to monitor for distractibility, hyperactivity, and anxious mood states that may negatively affect cognitive performance. Voice biomarkers derived from the clinical interview and other language-based tasks may help detect more subtle changes in speech. If the clinician is concerned about a specific instrumental activity of daily living, they may order remote monitoring of that particular activity in the patient’s real life (e.g., driving [Seelye et al., 2017]); no longer do clinicians have to rely primarily on self- or informant-reports. For individuals diagnosed with dementia, it will be crucial to monitor for functional decline and safety (e.g., risk of falls). Smart home sensors may help keep these patients out of institutions for as long as possible, especially those without 24/7 caregivers.
Besides detection, smart homes may consist of interventional elements, such as voice-based reminders for medications and appointments and contacting emergency services if a possible fall is detected. Wearable and smartphone apps may be used for just-in-time adaptive interventions, which are interventions that are delivered at the right time based on the person’s internal state or context (Nahum-Shani et al., 2018), such as an app that prompts the user to exercise after long sedentary periods (based on activity monitoring) and delivers a mindfulness exercise when heightened anxiety is detected (through heartrate monitoring). We posit that the field of neuropsychology should embrace these technological advancements. Due to financial incentives, these technologies are already being (and will continue to be) applied to neuropsychological care by industry. It is imperative for neuropsychologists to have a seat at the table during the development and validation of these digital tools, in order to maintain scientific rigor, maximize impact on patient care, and minimize potential harm to patients.
Supplementary Material
Contributor Information
Che Harris, Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA; Department of Neurology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ, USA.
Yingfei Tang, Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA; Department of Neurology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ, USA.
Eliana Birnbaum, Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA.
Christine Cherian, Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA.
Dinesh Mendhe, Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA.
Michelle H Chen, Institute for Health, Health Care Policy and Aging Research, Rutgers University, New Brunswick, NJ, USA; Department of Neurology, Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ, USA.
FUNDING
This article was partially supported by the National Academy of Neuropsychology and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (K23HD104855).
CONFLICT OF INTEREST
None declared.
REFERENCES
- Ahamed, F., Shahrestani, S., & Cheung, H. (2020). Internet of things and machine learning for healthy ageing: Identifying the early signs of dementia. Sensors, 20(21), 6031. 10.3390/s20216031.
- Ahmed, S., Haigh, A.-M. F., de Jager, C. A., & Garrard, P. (2013). Connected speech as a marker of disease progression in autopsy-proven Alzheimer’s disease. Brain, 136(12), 3727–3737. 10.1093/brain/awt269.
- Alfalahi, H., Khandoker, A. H., Chowdhury, N., Iakovakis, D., Dias, S. B., Chaudhuri, K., et al. (2022). Diagnostic accuracy of keystroke dynamics as digital biomarkers for fine motor decline in neuropsychiatric disorders: A systematic review and meta-analysis. Scientific Reports, 12(1), 1–24. 10.1038/s41598-022-11865-7.
- Alhanai, T., Au, R., & Glass, J. (2017). Spoken language biomarkers for detecting cognitive impairment. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Okinawa, Japan, pp. 409–416. 10.1109/ASRU.2017.8268965.
- All of Us Research Program Investigators (2019). The “All of Us” research program. New England Journal of Medicine, 381(7), 668–676. 10.1056/NEJMsr1809937.
- Allison, S. L., Fagan, A. M., Morris, J. C., & Head, D. (2016). Spatial navigation in preclinical Alzheimer’s disease. Journal of Alzheimer's Disease, 52(1), 77–90. 10.3233/JAD-150855.
- Ambrosini, E., Caielli, M., Milis, M., Loizou, C., Azzolino, D., Damanti, S., et al. (2019). Automatic speech analysis to early detect functional cognitive decline in elderly population. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, pp. 212–216. 10.1109/EMBC.2019.8856768.
- Ammar, R. B., & Ayed, Y. B. (2021). Evaluation of acoustic features for early diagnosis of Alzheimer disease. In Intelligent Systems Design and Applications: 19th International Conference on Intelligent Systems Design and Applications (ISDA 2019), December 3–5, 2019 (pp. 172–181). Springer International Publishing.
- Anderson, A. E., Jones, J. D., Thaler, N. S., Kuhn, T. P., Singer, E. J., & Hinkin, C. H. (2018). Intraindividual variability in neuropsychological performance predicts cognitive decline and death in HIV. Neuropsychology, 32(8), 966–972. 10.1037/neu0000482.
- Apple (2023). ResearchKit and CareKit: Empowering medical researchers, doctors, and you. Retrieved Feb 23 from https://www.apple.com/lae/researchkit/.
- Ault, L., Goubran, R., Wallace, B., Lowden, H., & Knoefel, F. (2020). Smart home technology solution for night-time wandering in persons with dementia. Journal of Rehabilitation and Assistive Technologies Engineering, 7, 205566832093859. 10.1177/2055668320938591.
- Babulal, G. M., Johnson, A., Fagan, A. M., Morris, J. C., & Roe, C. M. (2021). Identifying preclinical Alzheimer’s disease using everyday driving behavior: Proof of concept. Journal of Alzheimer's Disease, 79(3), 1009–1014. 10.3233/JAD-201294.
- Babulal, G. M., Traub, C. M., Webb, M., Stout, S. H., Addison, A., Carr, D. B., et al. (2016). Creating a driving profile for older adults using GPS devices and naturalistic driving methodology. F1000Research, 5, 2376. 10.12688/f1000research.9608.2.
- Barnett, I., Torous, J., Staples, P., Sandoval, L., Keshavan, M., & Onnela, J.-P. (2018). Relapse prediction in schizophrenia through digital phenotyping: A pilot study. Neuropsychopharmacology, 43(8), 1660–1666. 10.1038/s41386-018-0030-z.
- Bauer, R. M., Iverson, G. L., Cernich, A. N., Binder, L. M., Ruff, R. M., & Naugle, R. I. (2012). Computerized neuropsychological assessment devices: Joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Archives of Clinical Neuropsychology, 27(3), 362–373. 10.1093/arclin/acs027.
- Bayat, S., Babulal, G. M., Schindler, S. E., Fagan, A. M., Morris, J. C., Mihailidis, A., et al. (2021). GPS driving: A digital biomarker for preclinical Alzheimer disease. Alzheimer's Research & Therapy, 13(1), 1–9. 10.1186/s13195-021-00852-1.
- Beltrami, D., Gagliardi, G., Rossini Favretti, R., Ghidoni, E., Tamburini, F., & Calzà, L. (2018). Speech analysis by natural language processing techniques: A possible tool for very early detection of cognitive decline? Frontiers in Aging Neuroscience, 10, 369. 10.3389/fnagi.2018.00369.
- Bender, H. A., García, A. M., & Barr, W. B. (2010). An interdisciplinary approach to neuropsychological test construction: Perspectives from translation studies. Journal of the International Neuropsychological Society, 16(2), 227–232. 10.1017/S1355617709991378.
- Bilder, R. M., & Reise, S. P. (2019). Neuropsychological tests of the future: How do we get there from here? The Clinical Neuropsychologist, 33(2), 220–245. 10.1080/13854046.2018.1521993.
- Blennow, K., & Zetterberg, H. (2018). Biomarkers for Alzheimer's disease: Current status and prospects for the future. Journal of Internal Medicine, 284(6), 643–663. 10.1111/joim.12816.
- Boumpa, E., Gkogkidis, A., Charalampou, I., Ntaliani, A., Kakarountas, A., & Kokkinos, V. (2019). An acoustic-based smart home system for people suffering from dementia. Technologies, 7(1), 29. 10.3390/technologies7010029.
- Bowman, T., Gervasoni, E., Arienti, C., Lazzarini, S. G., Negrini, S., Crea, S., et al. (2021). Wearable devices for biofeedback rehabilitation: A systematic review and meta-analysis to design application rules and estimate the effectiveness on balance and gait outcomes in neurological diseases. Sensors, 21(10), 3444. 10.3390/s21103444.
- Brindha, A., Sunitha, K., & Wilson, S. R. (2022). Tremor classification using wearable IoT based sensors. IOP Conference Series: Materials Science and Engineering, 1219, 012024. 10.1088/1757-899X/1219/1/012024.
- Cain, A. E., Depp, C. A., & Jeste, D. V. (2009). Ecological momentary assessment in aging research: A critical review. Journal of Psychiatric Research, 43(11), 987–996. 10.1016/j.jpsychires.2009.01.014.
- Calzà, L., Gagliardi, G., Favretti, R. R., & Tamburini, F. (2021). Linguistic features and automatic classifiers for identifying mild cognitive impairment and dementia. Computer Speech & Language, 65, 101113. 10.1016/j.csl.2020.101113. [DOI] [Google Scholar]
- Casey, J. A., Schwartz, B. S., Stewart, W. F., & Adler, N. E. (2016). Using electronic health records for population health research: A review of methods and applications. Annual Review of Public Health, 37(1), 61–81. 10.1146/annurev-publhealth-032315-021353. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cerino, E. S., Katz, M. J., Wang, C., Qin, J., Gao, Q., Hyun, J., et al. (2021). Variability in cognitive performance on mobile devices is sensitive to mild cognitive impairment: Results from the einstein aging study. Frontiers in Digital Health, 3. 10.3389/fdgth.2021.758031. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chase, H. S., Mitrani, L. R., Lu, G. G., & Fulgieri, D. J. (2017). Early recognition of multiple sclerosis using natural language processing of the electronic health record. BMC Medical Informatics and Decision Making, 17(1), 1–8. 10.1186/s12911-017-0418-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen, M. H., Cherian, C., Elenjickal, K., Rafizadeh, C. M., Ross, M. K., Leow, A., et al. (2022). Real-time associations among MS symptoms and cognitive dysfunction using ecological momentary assessment. Frontiers in Medicine, 9. 10.3389/fmed.2022.1049686. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen, M. H., Leow, A., Ross, M. K., DeLuca, J., Chiaravalloti, N., Costa, S. L., et al. (2022). Associations between smartphone keystroke dynamics and cognition in MS. Digital Health, 8, 205520762211432. 10.1177/20552076221143234. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chien, Y.-W., Hong, S.-Y., Cheah, W.-T., Fu, L.-C., & Chang, Y.-L. (2018). An Assessment System for Alzheimer's Disease Based on Speech Using a Novel Feature Sequence Design and Recurrent Neural Network. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). Miyazaki, Japan, pp. 3289–3294. 10.1109/SMC.2018.00557. [DOI]
- Colombo, D., Suso-Ribera, C., Fernandez-Álvarez, J., Felipe, I. F., Cipresso, P., Palacios, A. G., et al. (2019). Exploring affect recall bias and the impact of mild depressive symptoms: an ecological momentary study. In Pervasive Computing Paradigms for Mental Health: 9th International Conference, MindCare 2019, Buenos Aires, Argentina, April 23–24, 2019, Proceedings, 9, vol 288. Springer, Cham. 10.1007/978-3-030-25872-6_17. [DOI] [Google Scholar]
- Coravos, A., Khozin, S., & Mandl, K. D. (2019). Developing and adopting safe and effective digital biomarkers to improve patient outcomes. NPJ Digital Medicine, 2(1), 14. 10.1038/s41746-019-0090-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Coughlan, G., Laczó, J., Hort, J., Minihane, A.-M., & Hornberger, M. (2018). Spatial navigation deficits—Overlooked cognitive marker for preclinical Alzheimer disease? Nature Reviews Neurology, 14(8), 496–506. 10.1038/s41582-018-0031-x. [DOI] [PubMed] [Google Scholar]
- Crizzle, A. M., & Myers, A. M. (2013). Examination of naturalistic driving practices in drivers with Parkinson's disease compared to age and gender-matched controls. Accident Analysis & Prevention, 50, 724–731. 10.1016/j.aap.2012.06.025. [DOI] [PubMed] [Google Scholar]
- Dagum, P. (2018). Digital biomarkers of cognitive function. npj Digital Medicine, 1(1), 10. 10.1038/s41746-018-0018-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dassing, R., Allé, M. C., Cerbai, M., Obrecht, A., Meyer, N., Vidailhet, P., et al. (2020). Cognitive intervention targeting autobiographical memory impairment in patients with schizophrenia using a wearable camera: A proof-of-concept study. Frontiers in Psychiatry, 11, 397. 10.3389/fpsyt.2020.00397. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Davis, J. D., Papandonatos, G. D., Miller, L. A., Hewitt, S. D., Festa, E. K., Heindel, W. C., et al. (2012). Road test and naturalistic driving performance in healthy and cognitively impaired older adults: Does environment matter? Journal of the American Geriatrics Society, 60(11), 2056–2062. 10.1111/j.1532-5415.2012.04206.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Donders, J. (2020). The incremental value of neuropsychological assessment: A critical review. The Clinical Neuropsychologist, 34(1), 56–87. 10.1080/13854046.2019.1575471. [DOI] [PubMed] [Google Scholar]
- Duff, K. (2012). Evidence-based indicators of neuropsychological change in the individual patient: Relevant concepts and methods. Archives of Clinical Neuropsychology, 27(3), 248–261. 10.1093/arclin/acr120. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eby, D. W., Silverstein, N. M., Molnar, L. J., LeBlanc, D., & Adler, G. (2012). Driving behaviors in early stage dementia: A study using in-vehicle technology. Accident Analysis & Prevention, 49, 330–337. 10.1016/j.aap.2011.11.021. [DOI] [PubMed] [Google Scholar]
- Espinoza-Cuadros, F., Garcia-Zamora, M. A., Torres-Boza, D., Ferrer-Riesgo, C. A., Montero-Benavides, A., Gonzalez-Moreira, E., et al. (2014). A spoken language database for research on moderate cognitive impairment: design and preliminary analysis. In Advances in Speech and Language Technologies for Iberian Languages: Second International Conference, IberSPEECH 2014, Las Palmas de Gran Canaria, Spain, November 19–21, 2014. Proceedings, (pp. 219–228). Springer International Publishing.
- Fagherazzi, G., Fischer, A., Ismael, M., & Despotovic, V. (2021). Voice for health: The use of vocal biomarkers from research to clinical practice. Digital Biomarkers, 5(1), 78–88. 10.1159/000515346. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Favaro, A., Moro-Velázquez, L., Butala, A., Motley, C., Cao, T., Stevens, R. D., et al. (2023). Multilingual evaluation of interpretable biomarkers to represent language and speech patterns in Parkinson's disease. Frontiers in Neurology, 14, 1142642. 10.3389/fneur.2023.1142642. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Feenstra, H. E., Vermeulen, I. E., Murre, J. M., & Schagen, S. B. (2017). Online cognition: Factors facilitating reliable online neuropsychological test results. The Clinical Neuropsychologist, 31(1), 59–84. 10.1080/13854046.2016.1190405. [DOI] [PubMed] [Google Scholar]
- Fellows, R. P., Dahmen, J., Cook, D., & Schmitter-Edgecombe, M. (2017). Multicomponent analysis of a digital trail making test. The Clinical Neuropsychologist, 31(1), 154–167. 10.1080/13854046.2016.1238510. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gali, R. L., Sushma, S., Madhuri, S., & Tejaswini, N. (2020). Automated Medicine Box for Geriatrics. In 2020 International Conference on System, Computation, Automation and Networking (ICSCAN). Pondicherry, India, 2020, pp. 1–4. 10.1109/ICSCAN49426.2020.9262358. [DOI]
- Germine, L., Reinecke, K., & Chaytor, N. S. (2019). Digital neuropsychology: Challenges and opportunities at the intersection of science and software. The Clinical Neuropsychologist, 33(2), 271–286. 10.1080/13854046.2018.1535662. [DOI] [PubMed] [Google Scholar]
- Gershon, R. C., Sliwinski, M. J., Mangravite, L., King, J. W., Kaat, A. J., Weiner, M. W., et al. (2022). The mobile toolbox for monitoring cognitive function. Lancet Neurology, 21(7), 589–590. 10.1016/S1474-4422(22)00225-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gibbons, R. D., Weiss, D. J., Kupfer, D. J., Frank, E., Fagiolini, A., Grochocinski, V. J., et al. (2008). Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatric Services, 59(4), 361–368. 10.1176/ps.2008.59.4.361. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Goldberg, T. E., Harvey, P. D., Wesnes, K. A., Snyder, P. J., & Schneider, L. S. (2015). Practice effects due to serial cognitive assessment: Implications for preclinical Alzheimer's disease randomized controlled trials. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, 1(1), 103–111. 10.1016/j.dadm.2014.11.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Graham, S. A., Lee, E. E., Jeste, D. V., Van Patten, R., Twamley, E. W., Nebeker, C., et al. (2020). Artificial intelligence approaches to predicting and detecting cognitive decline in older adults: A conceptual review. Psychiatry Research, 284, 112732. 10.1016/j.psychres.2019.112732. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hackett, K., & Giovannetti, T. (2022). Capturing cognitive aging in vivo: Application of a neuropsychological framework for emerging digital tools. JMIR aging, 5(3), e38130. 10.2196/38130. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Haendel, M. A., Chute, C. G., Bennett, T. D., Eichmann, D. A., Guinney, J., Kibbe, W. A., et al. (2021). The national COVID cohort collaborative (N3C): Rationale, design, infrastructure, and deployment. Journal of the American Medical Informatics Association, 28(3), 427–443. 10.1093/jamia/ocaa196. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hajjar, I., Okafor, M., Choi, J. D., Moore, E., Abrol, A., Calhoun, V. D., et al. (2023). Development of digital voice biomarkers and associations with cognition, cerebrospinal biomarkers, and neural representation in early Alzheimer's disease. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, 15(1), e12393. 10.1002/dad2.12393. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Halliday, D. W., Gawryluk, J. R., Garcia-Barrera, M. A., & MacDonald, S. W. (2019). White matter integrity is associated with intraindividual variability in neuropsychological test performance in healthy older adults. Frontiers in Human Neuroscience, 13, 352. 10.3389/fnhum.2019.00352. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harari, G. M., Lane, N. D., Wang, R., Crosier, B. S., Campbell, A. T., & Gosling, S. D. (2016). Using smartphones to collect behavioral data in psychological science: Opportunities, practical considerations, and challenges. Perspectives on Psychological Science, 11(6), 838–854. 10.1177/1745691616650285. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harris, P. A., Swafford, J., Serdoz, E. S., Eidenmuller, J., Delacqua, G., Jagtap, V., et al. (2022). MyCap: A flexible and configurable platform for mobilizing the participant voice. JAMIA Open, 5(2), ooac047. 10.1093/jamiaopen/ooac047. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harvey, P. D., Miller, M. L., Moore, R. C., Depp, C. A., Parrish, E. M., & Pinkham, A. E. (2021). Capturing clinical symptoms with ecological momentary assessment: Convergence of momentary reports of psychotic and mood symptoms with diagnoses and standard clinical assessments. Innovations in Clinical Neuroscience, 18(1–3), 24. [PMC free article] [PubMed] [Google Scholar]
- Hill, N. L., Mogle, J., Whitaker, E. B., Gilmore-Bykovskyi, A., Bhargava, S., Bhang, I. Y., et al. (2019). Sources of response bias in cognitive self-report items: “which memory are you talking about?”. The Gerontologist, 59(5), 912–924. 10.1093/geront/gny087. [DOI] [PubMed] [Google Scholar]
- Hufford, M. R., Shiffman, S., Paty, J., & Stone, A. A. (2001). Ecological momentary assessment: Real-world, real-time measurement of patient experience. In J. Fahrenberg & M. Myrtek (Eds.), Progress in ambulatory assessment: Computer-assisted psychological and psychophysiological methods in monitoring and field studies (pp. 69–92). Hogrefe & Huber Publishers.
- Iqbal, S. M. A., Mahgoub, I., Du, E., Leavitt, M. A., & Asghar, W. (2021). Advances in healthcare wearable devices. npj Flexible Electronics, 5(1), 9. 10.1038/s41528-021-00107-x. [DOI] [Google Scholar]
- Jacobson, N. C., & O'Cleirigh, C. (2021). Objective digital phenotypes of worry severity, pain severity and pain chronicity in persons living with HIV. The British Journal of Psychiatry, 218(3), 165–167. 10.1192/bjp.2019.168. [DOI] [PubMed] [Google Scholar]
- Jacobson, N. C., Summers, B., & Wilhelm, S. (2020). Digital biomarkers of social anxiety severity: Digital phenotyping using passive smartphone sensors. Journal of Medical Internet Research, 22(5), e16875. 10.2196/16875. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jacobson, N. C., Weingarden, H., & Wilhelm, S. (2019). Using digital phenotyping to accurately detect depression severity. The Journal of Nervous and Mental Disease, 207(10), 893–896. 10.1097/nmd.0000000000001042. [DOI] [PubMed] [Google Scholar]
- Johnson, A. E., Pollard, T. J., Shen, L., Lehman, L. W. H., Feng, M., Ghassemi, M., et al. (2016). MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1), 1–9. 10.1038/sdata.2016.35. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kataria, S., & Ravindran, V. (2020). Electronic health records: A critical appraisal of strengths and limitations. Journal of the Royal College of Physicians of Edinburgh, 50(3), 262–268. 10.4997/jrcpe.2020.309. [DOI] [PubMed] [Google Scholar]
- Khosroazad, S., Abedi, A., & Hayes, M. J. (2023). Sleep signal analysis for early detection of Alzheimer's disease and related dementia (ADRD). IEEE Journal of Biomedical and Health Informatics, 27(5), 2264–2275. 10.1109/JBHI.2023.3235391. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim, J.-Y., Chu, C.-H., & Kang, M.-S. (2020). IoT-based unobtrusive sensing for sleep quality monitoring and assessment. IEEE Sensors Journal, 21(3), 3799–3809. 10.1109/JSEN.2020.3022915. [DOI] [Google Scholar]
- Koleck, T. A., Dreisbach, C., Bourne, P. E., & Bakken, S. (2019). Natural language processing of symptoms documented in free-text narratives of electronic health records: A systematic review. Journal of the American Medical Informatics Association, 26(4), 364–379. 10.1093/jamia/ocy173. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kostyniuk, L. P., & Molnar, L. J. (2008). Self-regulatory driving practices among older adults: Health, age and sex effects. Accident Analysis & Prevention, 40(4), 1576–1580. 10.1016/j.aap.2008.04.005. [DOI] [PubMed] [Google Scholar]
- Kourtis, L. C., Regele, O. B., Wright, J. M., & Jones, G. B. (2019). Digital biomarkers for Alzheimer’s disease: The mobile/wearable devices opportunity. NPJ Digital Medicine, 2(1), 9. 10.1038/s41746-019-0084-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kumar, S., Abowd, G., Abraham, W. T., al'Absi, M., Chau, D. H., Ertin, E., et al. (2017). Center of excellence for mobile sensor data-to-knowledge (MD2K). IEEE Pervasive Computing, 16(2), 18–22. 10.1109/mprv.2017.29. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lam, K.-H., Twose, J., Lissenberg-Witte, B., Licitra, G., Meijer, K., Uitdehaag, B., et al. (2022). The use of smartphone keystroke dynamics to passively monitor upper limb and cognitive function in multiple sclerosis: Longitudinal analysis. Journal of Medical Internet Research, 24(11), e37614. 10.2196/37614. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lam, K. H., Twose, J., McConchie, H., Licitra, G., Meijer, K., de Ruiter, L., et al. (2022). Smartphone-derived keystroke dynamics are sensitive to relevant changes in multiple sclerosis. European Journal of Neurology, 29(2), 522–534. 10.1111/ene.15162. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lam, K.-H., Van Oirschot, P., Den Teuling, B., Hulst, H., de Jong, B., Uitdehaag, B., et al. (2022). Reliability, construct and concurrent validity of a smartphone-based cognition test in multiple sclerosis. Multiple Sclerosis Journal, 28(2), 300–308. 10.1177/13524585211018103. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lancaster, C., Blane, J., Chinner, A., Wolters, L., Koychev, I., & Hinds, C. (2019). The Mezurio smartphone application: Evaluating the feasibility of frequent digital cognitive assessment in the PREVENT dementia study. MedRxiv, 19005124. 10.1101/19005124. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Land, W. H., Jr., Schaffer, J. D. (2020). Alzheimer’s disease and speech background. In The art and science of machine intelligence: With an innovative application for Alzheimer’s detection from speech (pp. 107–135). Springer Cham. 10.1007/978-3-030-18496-4. [DOI]
- Lin, H., Karjadi, C., Ang, T. F., Prajakta, J., McManus, C., Alhanai, T. W., et al. (2020). Identification of digital voice biomarkers for cognitive health. Exploration of Medicine, 1(6), 406–417. 10.37349/emed.2020.00028. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lipsmeier, F., Taylor, K. I., Kilchenmann, T., Wolf, D., Scotland, A., Schjodt-Eriksen, J., et al. (2018). Evaluation of smartphone-based testing to generate exploratory outcome measures in a phase 1 Parkinson's disease clinical trial. Movement Disorders, 33(8), 1287–1297. 10.1002/mds.27376. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu, G., Henson, P., Keshavan, M., Pekka-Onnela, J., & Torous, J. (2019). Assessing the potential of longitudinal smartphone based cognitive assessment in schizophrenia: A naturalistic pilot study. Schizophrenia Research: Cognition, 17, 100144. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Loring, D. W., Bauer, R. M., Cavanagh, L., Drane, D. L., Enriquez, K. D., Reise, S. P., et al. (2022). Rationale and design of the National Neuropsychology Network. Journal of the International Neuropsychological Society, 28(1), 1–11. 10.1017/S1355617721000199. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Loumidis, K. S., & Shropshire, J. M. (1997). Effects of waiting time on appointment attendance with clinical psychologists and length of treatment. Irish Journal of Psychological Medicine, 14(2), 49–54. 10.1017/S0790966700002986. [DOI] [Google Scholar]
- Lublin, F. D., Reingold, S. C., Cohen, J. A., Cutter, G. R., Sørensen, P. S., Thompson, A. J., et al. (2014). Defining the clinical course of multiple sclerosis: The 2013 revisions. Neurology, 83(3), 278–286. 10.1212/WNL.0000000000000560. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lyons, B. E., Austin, D., Seelye, A., Petersen, J., Yeargers, J., Riley, T., et al. (2015). Pervasive computing technologies to continuously assess Alzheimer’s disease progression and intervention efficacy [methods]. Frontiers in Aging Neuroscience, 7, 102. 10.3389/fnagi.2015.00102. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Maatoug, R., Oudin, A., Adrien, V., Saudreau, B., Bonnot, O., Millet, B., et al. (2022). Digital phenotype of mood disorders: A conceptual and critical review. Frontiers in Psychiatry, 13, 895860. 10.3389/fpsyt.2022.895860. [DOI] [PMC free article] [PubMed] [Google Scholar]
- MacWhinney, B. (2007). The talkbank project. In Creating and digitizing language corpora: Volume 1: Synchronic databases (pp. 163–180), London: Palgrave Macmillan UK. 10.1057/9780230223936_7. [DOI] [Google Scholar]
- Mahon, E., & Lachman, M. E. (2022). Voice biomarkers as indicators of cognitive changes in middle and later adulthood. Neurobiology of Aging, 119, 22–35. 10.1016/j.neurobiolaging.2022.06.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Marshall, S. C., Man-Son-Hing, M., Bedard, M., Charlton, J., Gagnon, S., Gelinas, I., et al. (2013). Protocol for Candrive II/Ozcandrive, a multicentre prospective older driver cohort study. Accident Analysis & Prevention, 61, 245–252. 10.1016/j.aap.2013.02.009. [DOI] [PubMed] [Google Scholar]
- Martin, S., Kelly, G., Kernohan, W. G., McCreight, B., Nugent, C., & Cochrane Effective Practice and Organisation of Care Group (2008). Smart home technologies for health and social care support. Cochrane Database of Systematic Reviews, (4). 10.1002/14651858.CD006412.pub2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Martinez-Martin, N., Greely, H. T., & Cho, M. K. (2021). Ethical development of digital phenotyping tools for mental health applications: Delphi study. JMIR mHealth and uHealth, 9(7), e27343. 10.2196/27343. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Martinez-Martin, N., Luo, Z., Kaushal, A., Adeli, E., Haque, A., Kelly, S. S., et al. (2021). Ethical issues in using ambient intelligence in health-care settings. lancet Digital Health, 3(2), e115–e123. 10.1016/S2589-7500(20)30275-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Martínez-Nicolás, I., Llorente, T. E., Martínez-Sánchez, F., & Meilán, J. J. G. (2021). Ten years of research on automatic voice and speech analysis of people with Alzheimer's disease and mild cognitive impairment: A systematic review article. Frontiers in Psychology, 12, 620251. 10.3389/fpsyg.2021.620251. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Matar, E., Shine, J. M., Halliday, G. M., & Lewis, S. J. (2020). Cognitive fluctuations in Lewy body dementia: Towards a pathophysiological framework. Brain, 143(1), 31–46. 10.1093/brain/awz311. [DOI] [PubMed] [Google Scholar]
- McCoy, T. H., Jr., Han, L., Pellegrini, A. M., Tanzi, R. E., Berretta, S., & Perlis, R. H. (2019). Stratifying risk for dementia onset using large-scale electronic health record data: A retrospective cohort study. Alzheimer's & Dementia. 10.1016/j.jalz.2019.09.084. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McKinney, T. L., Euler, M. J., & Butner, J. E. (2020). It’s about time: The role of temporal variability in improving assessment of executive functioning. Clinical Neuropsychologist, 34(4), 619–642. 10.1080/13854046.2019.1704434. [DOI] [PubMed] [Google Scholar]
- Milberg, W. P., Hebben, N., Kaplan, E., Grant, I., & Adams, K. (2009). The Boston process approach to neuropsychological assessment. Neuropsychological Assessment of Neuropsychiatric and Neuromedical Disorders, 3, 42–65. [Google Scholar]
- Miller, J. B., & Barr, W. B. (2017). The technology crisis in neuropsychology. Archives of Clinical Neuropsychology, 32(5), 541–554. 10.1093/arclin/acx050. [DOI] [PubMed] [Google Scholar]
- Mirzaei, S., El Yacoubi, M., Garcia-Salicetti, S., Boudy, J., Kahindo, C., Cristancho-Lacroix, V., et al. (2018). Two-stage feature selection of voice parameters for early Alzheimer's disease prediction. Irbm, 39(6), 430–435. 10.1016/j.irbm.2018.10.016. [DOI] [Google Scholar]
- Mohr, D. C., Zhang, M., & Schueller, S. M. (2017). Personal sensing: Understanding mental health using ubiquitous sensors and machine learning. Annual Review of Clinical Psychology, 13(1), 23–47. 10.1146/annurev-clinpsy-032816-044949. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moore, R. C., Ackerman, R. A., Russell, M. T., Campbell, L. M., Depp, C. A., Harvey, P. D., et al. (2022). Feasibility and validity of ecological momentary cognitive testing among older adults with mild cognitive impairment. Frontiers in Digital Health, 4. 10.3389/fdgth.2022.946685. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moore, R. C., Campbell, L. M., Delgadillo, J. D., Paolillo, E. W., Sundermann, E. E., Holden, J., et al. (2020). Smartphone-based measurement of executive function in older adults with and without HIV. Archives of Clinical Neuropsychology, 35(4), 347–357. 10.1093/arclin/acz084. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., et al. (2018). Just-in-time adaptive interventions (JITAIs) in mobile health: Key components and design principles for ongoing health behavior support. Annals of Behavioral Medicine, 52(6), 446–462. 10.1007/s12160-016-9830-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nicosia, J., Aschenbrenner, A. J., Adams, S. L., Tahan, M., Stout, S. H., Wilks, H., et al. (2022). Bridging the technological divide: Stigmas and challenges with technology in digital brain health studies of older adults. Frontiers in Digital Health, 4, 880055. 10.3389/fdgth.2022.880055. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nicosia, J., Aschenbrenner, A. J., Balota, D. A., Sliwinski, M. J., Tahan, M., Adams, S., et al. (2022). Unsupervised high-frequency smartphone-based cognitive assessments are reliable, valid, and feasible in older adults at risk for Alzheimer’s disease. Journal of the International Neuropsychological Society, 29(5), 459–471. 10.1017/S135561772200042X. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nicosia, J., Wang, B., Aschenbrenner, A. J., Sliwinski, M. J., Yabiku, S. T., Roque, N. A., et al. (2023). To BYOD or not: Are device latencies important for bring-your-own-device (BYOD) smartphone cognitive testing? Behav Res 55, 2800–2812. 10.3758/s13428-022-01925-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Office of the National Coordinator for Health Information Technology . (2022). Office-based physician electronic health record adoption . Retrieved March 2 from. https://www.healthit.gov/data/quickstats/office-based-physician-electronic-health-record-adoption.
- Öhman, F., Berron, D., Papp, K. V., Kern, S., Skoog, J., Hadarsson Bodin, T., et al. (2022). Unsupervised mobile app-based cognitive testing in a population-based study of older adults born 1944. Frontiers in Digital Health, 4, 227. 10.3389/fdgth.2022.933265. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Omberg, L., Chaibub Neto, E., Perumal, T. M., Pratap, A., Tediarjo, A., Adams, J., et al. (2022). Remote smartphone monitoring of Parkinson’s disease and individual response to therapy. Nature Biotechnology, 40(4), 480–487. 10.1038/s41587-021-00974-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Overton, M., Pihlsgård, M., & Elmståhl, S. (2016). Test administrator effects on cognitive performance in a longitudinal study of ageing. Cogent Psychology, 3(1), 1260237. 10.1080/23311908.2016.1260237. [DOI] [Google Scholar]
- Passell, E., Strong, R. W., Rutter, L. A., Kim, H., Scheuer, L., Martini, P., et al. (2021). Cognitive test scores vary with choice of personal digital device. Behavior Research Methods, 53(6), 2544–2557. 10.3758/s13428-021-01597-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Payne, T. H., Corley, S., Cullen, T. A., Gandhi, T. K., Harrington, L., Kuperman, G. J., et al. (2015). Report of the AMIA EHR-2020 task force on the status and future direction of EHRs. Journal of the American Medical Informatics Association, 22(5), 1102–1110. 10.1093/jamia/ocv066. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pew Resarch Center . (2021). Mobile fact sheet. Retrieved May 23 from https://www.pewresearch.org/internet/fact-sheet/mobile/.
- Pichon, A., Idnay, B., Marder, K., Schnall, R., & Weng, C. (2021). Cognitive Function Characterization Using Electronic Health Records Notes. In AMIA Annual Symposium Proceedings. AMIA Symposium, 999–1008. [PMC free article] [PubMed]
- Powell, D. J., Liossi, C., Schlotz, W., & Moss-Morris, R. (2017). Tracking daily fatigue fluctuations in multiple sclerosis: Ecological momentary assessment provides unique insights. Journal of Behavioral Medicine, 40(5), 772–783. 10.1007/s10865-017-9840-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Prakash, J., Chaudhury, S., & Chatterjee, K. (2021). Digital phenotyping in psychiatry: When mental health goes binary. Industrial Psychiatry Journal, 30(2), 191–192. 10.4103/ipj.ipj_223_21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pratap, A., Grant, D., Vegesna, A., Tummalacherla, M., Cohan, S., Deshpande, C., et al. (2020). Evaluating the utility of smartphone-based sensor assessments in persons with multiple sclerosis in the real-world using an app (elevateMS): Observational, prospective pilot digital health study. JMIR mHealth and uHealth, 8(10), e22108. 10.2196/22108. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Putcha, D., Dickerson, B. C., Brickhouse, M., Johnson, K. A., Sperling, R. A., & Papp, K. V. (2020). Word retrieval across the biomarker-confirmed Alzheimer's disease syndromic spectrum. Neuropsychologia, 140, 107391. 10.1016/j.neuropsychologia.2020.107391. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Qiao, Y., Zeng, K., Xu, L., & Yin, X. (2016). A smartphone-based driver fatigue detection using fusion of multiple real-time facial features. In 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, pp. 230–235. 10.1109/CCNC.2016.7444761. [DOI]
- Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA division 40 members. Archives of Clinical Neuropsychology, 20(1), 33–65. 10.1016/j.acn.2004.02.005. [DOI] [PubMed] [Google Scholar]
- Rabin, L. A., Paolillo, E., & Barr, W. B. (2016). Stability in test-usage practices of clinical neuropsychologists in the United States and Canada over a 10-year period: A follow-up survey of INS and NAN members. Archives of Clinical Neuropsychology, 31(3), 206–230. 10.1093/arclin/acw007. [DOI] [PubMed] [Google Scholar]
- Rabin, L. A., Spadaccini, A. T., Brodale, D. L., Grant, K. S., Elbulok-Charcape, M. M., & Barr, W. B. (2014). Utilization rates of computerized tests and test batteries among clinical neuropsychologists in the United States and Canada. Professional Psychology: Research and Practice, 45(5), 368–377. 10.1037/a0037987. [DOI] [Google Scholar]
- Rapcan, V., D’Arcy, S., Yeap, S., Afzal, N., Thakore, J., & Reilly, R. B. (2010). Acoustic and temporal analysis of speech: A potential biomarker for schizophrenia. Medical Engineering & Physics, 32(9), 1074–1079. [DOI] [PubMed] [Google Scholar]
- ResearchStack.org ResearchStack: An SDK for building research study apps on android. Retrieved Feb 23 from http://researchstack.org/.
- Robin, J., Harrison, J. E., Kaufman, L. D., Rudzicz, F., Simpson, W., & Yancheva, M. (2020). Evaluation of speech-based digital biomarkers: Review and recommendations. Digital Biomarkers, 4(3), 99–108. 10.1159/000510820.
- Robles, R. J., & Kim, T. H. (2010). Applications, systems and methods in smart home technology: A review. International Journal of Advanced Science and Technology, 15, 37–48.
- Roe, C. M., Babulal, G. M., Head, D. M., Stout, S. H., Vernon, E. K., Ghoshal, N., et al. (2017). Preclinical Alzheimer's disease and longitudinal driving decline. Alzheimer's & Dementia: Translational Research & Clinical Interventions, 3(1), 74–82. 10.1016/j.trci.2016.11.006.
- Roe, C. M., Stout, S. H., Rajasekar, G., Ances, B. M., Jones, J. M., Head, D., et al. (2019). A 2.5-year longitudinal assessment of naturalistic driving in preclinical Alzheimer’s disease. Journal of Alzheimer's Disease, 68(4), 1625–1633. 10.3233/JAD-181242.
- Rosselli, M., & Ardila, A. (2003). The impact of culture and education on non-verbal neuropsychological measurements: A critical review. Brain and Cognition, 52(3), 326–333. 10.1016/S0278-2626(03)00170-2.
- Rullier, L., Atzeni, T., Husky, M., Bouisson, J., Dartigues, J. F., Swendsen, J., et al. (2014). Daily life functioning of community-dwelling elderly couples: An investigation of the feasibility and validity of ecological momentary assessment. International Journal of Methods in Psychiatric Research, 23(2), 208–216. 10.1002/mpr.1425.
- Santos, J., Postolache, O., & Mendes, D. (2022). Ambient assisted living using non-intrusive smart sensing and IoT for gait rehabilitation. In 2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, pp. 489–494. 10.1109/MetroXRAINE54828.2022.9967674.
- Sarwar, T., Seifollahi, S., Chan, J., Zhang, X., Aksakalli, V., Hudson, I., et al. (2022). The secondary use of electronic health records for data mining: Data characteristics and challenges. ACM Computing Surveys (CSUR), 55(2), 1–40. 10.1145/3490234.
- Satt, A., Sorin, A., Toledo-Ronen, O., Barkan, O., Kompatsiaris, I., Kokonozi, A., et al. (2013). Evaluation of speech-based protocol for detection of early-stage dementia. In Interspeech (pp. 1692–1696).
- Schatz, P. (2017). Computer-based assessment: Current status and next steps. In R. L. Kane & T. D. Parsons (Eds.), The role of technology in clinical neuropsychology. Oxford University Press. 10.1093/oso/9780190234737.003.0007.
- Schmand, B. (2019). Why are neuropsychologists so reluctant to embrace modern assessment techniques? The Clinical Neuropsychologist, 33(2), 209–219. 10.1080/13854046.2018.1523468.
- Schmitter-Edgecombe, M., Sumida, C., & Cook, D. J. (2020). Bridging the gap between performance-based assessment and self-reported everyday functioning: An ecological momentary assessment approach. The Clinical Neuropsychologist, 34(4), 678–699. 10.1080/13854046.2020.1733097.
- Schweitzer, P., Husky, M., Allard, M., Amieva, H., Pérès, K., Foubert-Samier, A., et al. (2017). Feasibility and validity of mobile cognitive testing in the investigation of age-related cognitive decline. International Journal of Methods in Psychiatric Research, 26(3), e1521. 10.1002/mpr.1521.
- Seelye, A., Mattek, N., Sharma, N., Witter, P., IV, Brenner, A., Wild, K., et al. (2017). Passive assessment of routine driving with unobtrusive sensors: A new approach for identifying and monitoring functional level in normal aging and mild cognitive impairment. Journal of Alzheimer's Disease, 59(4), 1427–1437. 10.3233/JAD-170116.
- Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annual Review of Clinical Psychology, 4(1), 1–32. 10.1146/annurev.clinpsy.3.022806.091415.
- Shvetz, C., Gu, F., Drodge, J., Torous, J., & Guimond, S. (2021). Validation of an ecological momentary assessment to measure processing speed and executive function in schizophrenia. NPJ Schizophrenia, 7(1), 64. 10.1038/s41537-021-00194-9.
- Silva, A. R., Pinho, M. S., Macedo, L., Moulin, C., Caldeira, S., & Firmino, H. (2017). It is not only memory: Effects of SenseCam on improving well-being in patients with mild Alzheimer disease. International Psychogeriatrics, 29(5), 741–754. 10.1017/S104161021600243X.
- Singh, S., & Germine, L. (2021). Technology meets tradition: A hybrid model for implementing digital tools in neuropsychology. International Review of Psychiatry, 33(4), 382–393. 10.1080/09540261.2020.1835839.
- Sliwinski, M. J., Mogle, J. A., Hyun, J., Munoz, E., Smyth, J. M., & Lipton, R. B. (2018). Reliability and validity of ambulatory cognitive assessments. Assessment, 25(1), 14–30. 10.1177/1073191116643164.
- Stein, J., Luppa, M., Brähler, E., König, H.-H., & Riedel-Heller, S. G. (2010). The assessment of changes in cognitive functioning: Reliable change indices for neuropsychological instruments in the elderly–a systematic review. Dementia and Geriatric Cognitive Disorders, 29(3), 275–286. 10.1159/000289779.
- Sternin, A., Burns, A., & Owen, A. M. (2019). Thirty-five years of computerized cognitive assessment of aging—Where are we now? Diagnostics, 9(3), 114. 10.3390/diagnostics9030114.
- Taquet, M., Geddes, J. R., Husain, M., Luciano, S., & Harrison, P. J. (2021). 6-month neurological and psychiatric outcomes in 236 379 survivors of COVID-19: A retrospective cohort study using electronic health records. The Lancet Psychiatry, 8(5), 416–427. 10.1016/S2215-0366(21)00084-5.
- Thompson, L. I., Harrington, K. D., Roque, N., Strenger, J., Correia, S., Jones, R. N., et al. (2022). A highly feasible, reliable, and fully remote protocol for mobile app-based cognitive assessment in cognitively healthy older adults. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, 14(1), e12283. 10.1002/dad2.12283.
- Torous, J., Staples, P., Barnett, I., Sandoval, L. R., Keshavan, M., & Onnela, J.-P. (2018). Characterizing the clinical relevance of digital phenotyping data quality with applications to a cohort with schizophrenia. npj Digital Medicine, 1(1), 1–9. 10.1038/s41746-018-0022-8.
- U.S. Food and Drug Administration. (2023a). 510(k) premarket notification: Embrace. Retrieved March 6 from https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K172935.
- U.S. Food and Drug Administration. (2023b). 510(k) premarket notification: Rune Labs tremor transducer system. Retrieved March 6 from https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K213519.
- U.S. Food and Drug Administration. (2023c). 510(k) premarket notification: X8 system - sleep profiler (SP40), X8 system - sleep profiler PSG2 (SP29), X8 system - stat X8 (XS29). Retrieved March 6 from https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K152040.
- Vaidyam, A., Halamka, J., & Torous, J. (2022). Enabling research and clinical use of patient-generated health data (the mindLAMP platform): Digital phenotyping study. JMIR mHealth and uHealth, 10(1), e30557. 10.2196/30557.
- Vos, L., Williams, M. W., Poritz, J. M., Ngan, E., Leon-Novelo, L., & Sherer, M. (2020). The discrepancy between cognitive complaints and neuropsychological test findings in persons with traumatic brain injury. The Journal of Head Trauma Rehabilitation, 35(4), E382–E392. 10.1097/HTR.0000000000000557.
- Walters, E. R., & Lesk, V. E. (2015). Time of day and caffeine influence some neuropsychological tests in the elderly. Psychological Assessment, 27(1), 161–168. 10.1037/a0038213.
- Weintraub, S., Dikmen, S. S., Heaton, R. K., Tulsky, D. S., Zelazo, P. D., Bauer, P. J., et al. (2013). Cognition assessment using the NIH Toolbox. Neurology, 80(11, Supplement 3), S54–S64. 10.1212/WNL.0b013e3182872ded.
- Weiskopf, N. G., Rusanov, A., & Weng, C. (2013). Sick patients have more data: The non-random completeness of electronic health records. In AMIA Annual Symposium Proceedings. AMIA Symposium, 1472–1477.
- Weizenbaum, E., Torous, J., & Fulford, D. (2020). Cognition in context: Understanding the everyday predictors of cognitive performance in a new era of measurement. JMIR mHealth and uHealth, 8(7), e14328. 10.2196/14328.
- Weizenbaum, E. L., Fulford, D., Torous, J., Pinsky, E., Kolachalama, V. B., & Cronin-Golomb, A. (2022). Smartphone-based neuropsychological assessment in Parkinson’s disease: Feasibility, validity, and contextually driven variability in cognition. Journal of the International Neuropsychological Society, 28(4), 401–413. 10.1017/S1355617721000503.
- Wilson, G., Pereyda, C., Raghunath, N., de la Cruz, G., Goel, S., Nesaei, S., et al. (2019). Robot-enabled support of daily activities in smart home environments. Cognitive Systems Research, 54, 258–272. 10.1016/j.cogsys.2018.10.032.
- Wong, A., Young, A. T., Liang, A. S., Gonzales, R., Douglas, V. C., & Hadley, D. (2018). Development and validation of an electronic health record–based machine learning model to estimate delirium risk in newly hospitalized patients without known cognitive impairment. JAMA Network Open, 1(4), e181018. 10.1001/jamanetworkopen.2018.1018.
- Yao, L., Yang, Y., Wang, Z., Pan, X., & Xu, L. (2023). Compliance with ecological momentary assessment programmes in the elderly: A systematic review and meta-analysis. BMJ Open, 13(7), e069523. 10.1136/bmjopen-2022-069523.
- Yeung, A., Iaboni, A., Rochon, E., Lavoie, M., Santiago, C., Yancheva, M., et al. (2021). Correlating natural language processing and automated speech analysis with clinician assessment to quantify speech-language changes in mild cognitive impairment and Alzheimer’s dementia. Alzheimer's Research & Therapy, 13(1), 109. 10.1186/s13195-021-00848-x.
- Zanini, S., Tavano, A., & Fabbro, F. (2010). Spontaneous language production in bilingual Parkinson’s disease patients: Evidence of greater phonological, morphological and syntactic impairments in native language. Brain and Language, 113(2), 84–89. 10.1016/j.bandl.2010.01.005.
- Zhou, X., Wang, Y., Sohn, S., Therneau, T. M., Liu, H., & Knopman, D. S. (2019). Automatic extraction and assessment of lifestyle exposures for Alzheimer’s disease using natural language processing. International Journal of Medical Informatics, 130, 103943. 10.1016/j.ijmedinf.2019.08.003.
- Zhou, Y., Zia Ur Rehman, R., Hansen, C., Maetzler, W., Del Din, S., Rochester, L., et al. (2020). Classification of neurological patients to identify fallers based on spatial-temporal gait characteristics measured by a wearable device. Sensors, 20(15), 4098. 10.3390/s20154098.
- Zulueta, J., Leow, A. D., & Ajilore, O. (2020). Real-time monitoring: A key element in personalized health and precision health. Focus, 18(2), 175–180. 10.1176/appi.focus.20190042.
- Zygouris, S., & Tsolaki, M. (2015). Computerized cognitive testing for older adults: A review. American Journal of Alzheimer's Disease & Other Dementias, 30(1), 13–28. 10.1177/1533317514522852.