Journal of the American Heart Association: Cardiovascular and Cerebrovascular Disease
2024 Jan 16;13(2):e031348. doi: 10.1161/JAHA.123.031348

Design and Feasibility Analysis of a Smartphone‐Based Digital Cognitive Assessment Study in the Framingham Heart Study

Preeti Sunderaraman 1,2,3, Ileana De Anda‐Duran 4, Cody Karjadi 3, Julia Peterson 3, Huitong Ding 3,5, Sherral A Devine 3,5, Ludy C Shih 1,3, Zachary Popp 2,5, Spencer Low 2,5,6, Phillip H Hwang 6, Kriti Goyal 1,3, Lindsay Hathaway 3, Jose Monteverde 3, Honghuang Lin 7, Vijaya B Kolachalama 2,8,9, Rhoda Au 1,2,3,5,6,8
PMCID: PMC10926817  PMID: 38226510

Abstract

Background

Smartphone‐based digital technology is increasingly being recognized as a cost‐effective, scalable, and noninvasive method of collecting longitudinal cognitive and behavioral data. Accordingly, a state‐of‐the‐art 3‐year longitudinal project focused on collecting multimodal digital data for early detection of cognitive impairment was developed.

Methods and Results

A smartphone application collected 2 modalities of cognitive data, digital voice and screen‐based behaviors, from the FHS (Framingham Heart Study) multigenerational Generation 2 (Gen 2) and Generation 3 (Gen 3) cohorts. To understand the feasibility of conducting a smartphone‐based study, participants completed a series of questions about their smartphone and app use, as well as about sensory and environmental factors that they encountered while completing the tasks on the app. Baseline data collected to date were from 537 participants (mean age=66.6 years, SD=7.0; 58.47% female). Across the younger participants from the Gen 3 cohort (n=455; mean age=60.8 years, SD=8.2; 59.12% female) and the older participants from the Gen 2 cohort (n=82; mean age=74.2 years, SD=5.8; 54.88% female), an average of 76% of participants agreed or strongly agreed that they felt confident about using the app, 77% agreed or strongly agreed that they were able to use the app on their own, and 81% rated the app as easy to use.

Conclusions

Based on participant ratings, the study findings are promising. At baseline, the majority of participants are able to complete the app‐related tasks, follow the instructions, and encounter minimal barriers to completing the tasks independently. These data provide evidence that designing and collecting smartphone application data in an unsupervised, remote, and naturalistic setting in a large, community‐based population is feasible.

Keywords: aging, Alzheimer's disease, digital health, feasibility, mobile health

Subject Categories: Aging, Epidemiology, Digital Health


Nonstandard Abbreviations and Acronyms

FHS: Framingham Heart Study
Gen 2: Generation 2
Gen 3: Generation 3

Clinical Perspective.

What Is New?

  • The majority of participants were able to complete the app‐related tasks and follow the instructions, and they encountered minimal barriers to completing the tasks independently.

  • Specifically, baseline data from both younger and older participants (n=537) revealed that an average of 76% of participants agreed or strongly agreed with feeling confident about using the app, 77% agreed or strongly agreed with being able to use the app independently, and 81% rated the app as easy to use.

What Are the Clinical Implications?

  • Based on the current evidence, it is feasible to collect smartphone application‐based cognitive data in an unsupervised and remote setting among community‐dwelling individuals; it remains to be studied whether longitudinal data collected via smartphone application can be used to understand and monitor cognitive abilities in the community.

The limited success of Alzheimer's disease clinical trials has highlighted the importance of identifying individuals at risk for cognitive impairment or in the early stages of the disease. 1 Enrolling these individuals in future trials can improve the chances of developing effective treatments or interventions for Alzheimer's disease. 2 Moreover, relying on the assessment of a single aspect of cognition or annualized assessment intervals may not be sufficient to detect early dementia or mild cognitive impairment due to the clinical heterogeneity and fluctuations that can occur in the course of the disease. 3 , 4 , 5 , 6 It has been observed that factors like lifestyle and comorbid medical conditions can influence cognitive trajectories later in life. 7 Traditional neuropsychological assessments and neuroimaging, which are commonly used, may not capture the fluctuating nature of cognition and can be costly and burdensome for participants. Therefore, there is a pressing need for improved methods that can accurately detect changes in cognitive function in real time.

Smartphones present a promising alternative method for assessing cognitive abilities in a more fluid manner. Given their widespread usage globally, 8 they offer an ideal platform for unsupervised cognitive assessment in a naturalistic setting. In contrast with a clinic or laboratory setting, a naturalistic setting refers to the environment wherein the participant is normally present while completing the tasks, for example, at home or at work. By leveraging the ubiquity and user‐friendly nature of smartphones, we can potentially overcome the limitations of traditional assessment methods and gather valuable insights into cognitive function in an effective and cost‐efficient manner. Understanding participant engagement and perceptions as they perform the tasks without supervision will facilitate early detection and monitoring of cognitive impairment. Additionally, a smartphone study allows multimodal data to be captured, such as speech through embedded voice recorders and time‐stamped screen coordinates. Numerous cognition‐related behavioral metrics can then be derived from these different digital data streams.

The FHS (Framingham Heart Study) has been administering a set of standardized neuropsychological tests to its community‐based multigenerational cohorts since 1999. In 2005, FHS began digitally recording all spoken responses to neuropsychological test questions, and in 2011 it began using a digital pen to record written responses. In recent years, digital voice research has been gaining momentum, primarily because, unlike written language and many currently used neuropsychological tests, which can be biased by culture and education, spoken language is a universal phenomenon, and data suggest that digital voice has the potential to be used as an indicator of cognitive impairment in the early stages of Alzheimer's disease. 9 , 10 In addition, time‐stamped coordinates that were previously obtained from the digital pen can now also be obtained from a tablet or smartphone, and measures of cognition have been derived from these data as well. 11 , 12 , 13 , 14 Obtaining cognitive data through the smartphone, whether through voice recordings or through screen‐based behaviors from embedded sensors, allows the possibility of self‐administered longitudinal assessments that can provide insights into interindividual comparability and enable analyses using inter‐ and intrasession data, all of which will facilitate detecting cognitive changes more accurately and earlier than previous methods have allowed.

Evidence suggests that digital smartphone technology, although in its nascent stage, demonstrates associations with analogous in‐person versions of cognitive tests 15 , 16 , 17 and can distinguish individuals with mild cognitive impairment and early Alzheimer's disease from cognitively healthy individuals. 18 , 19 , 20 Studies have also found that smartphone‐based assessments correlate with neuroimaging biomarkers 15 , 20 , 21 , 22 , 23 and thus show promise for identifying individuals in the early stages of the dementing process. Before exploring the use of smartphones for examining cognition in epidemiological studies, it is essential to understand the feasibility of participant engagement and participants' perceptions of using such technology while they perform tasks unsupervised and remotely. A better understanding of participants' experiences is integral to ensuring the effectiveness and user acceptability of digital assessment tools. Moreover, a small number of studies have found relatively high adherence rates for completing mobile apps, 20 with some finding that older participants are willing to engage for longer durations than younger participants. 24 Data also suggest participants enjoy mobile tasks and find them acceptable. 17 , 25 This paper investigates FHS participants' experiences with a smartphone app.

Methods

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Participants

Located in Framingham, Massachusetts, the FHS is a community‐based cohort study that began collecting data to examine cardiovascular disease in 1948. Data from a brief cognitive screener began to be collected in 1976 and from a more comprehensive battery of cognitive tests in 1999. In 1990, FHS began to enroll participants from diverse racial and ethnic backgrounds into multiethnic cohorts (the Omni cohorts). For the current study, digital data are being collected from 2 broad FHS cohorts categorized largely on the basis of the time they were originally enrolled by FHS. The first cohort, called the Gen 2 cohort, comprises relatively older participants from Generation 2 (Gen 2) and Omni Generation 1 (Omni 1) who are currently at an age of high risk for Alzheimer's disease (see Table 1). The second cohort, Gen 3, includes relatively younger participants from Generation 3 (Gen 3), consisting of predominantly White participants, along with the Omni Generation 2 (Omni 2) and New Offspring Spouse cohorts. Data collection for the digital study began on August 18, 2022; data collected through March 21, 2023 were considered for the current paper. Altogether, 689 participants consented to the study and registered the app on their smartphones (n=119 from the Gen 2 cohort and n=570 from the Gen 3 cohort). Of these, 140 participants successfully downloaded the app but did not complete the tasks and questionnaires, and 12 participants withdrew from the study. Table S1 shows the breakdown of the demographic and questionnaire data for each subcohort of participants (n=537). All participants are invited to participate in the digital assessment study. This study was approved by the Boston Medical Center/Boston University Medical Campus Institutional Review Board.

Table 1. Demographic Information for 2 FHS Cohorts (N=537)

| | Gen 2* | Gen 3† |
| --- | --- | --- |
| No. of participants | 82 | 455 |
| Age, y, mean±SD | 74.2±5.8 | 60.8±8.2 |
| Sex, n (%) | | |
| Male | 37 (45.12) | 186 (40.88) |
| Female | 45 (54.88) | 269 (59.12) |
| Education, n (%) | | |
| High school, nongraduate | 2 (2.44) | 2 (0.44) |
| High school, graduate | 8 (9.76) | 43 (9.45) |
| Some college | 14 (17.07) | 99 (21.76) |
| College graduate | 56 (68.29) | 311 (68.35) |
| Race and ethnicity, n (%) | | |
| White | 70 (85.37) | 405 (89.01) |
| Black | 7 (8.54) | 10 (2.20) |
| Asian | 4 (4.88) | 14 (3.08) |
| Other | 1 (1.22) | 26 (5.71) |

*Gen 2 includes the Offspring cohort and Omni 1 cohort.

†Gen 3 includes the Third Generation cohort, Omni 2 cohort, and New Offspring Spouse cohort.

In FHS, given the historical nature of the data, race and ethnicity information are combined. The Other category consists of 1 individual who self‐reported as Hispanic for Gen 2 and 16 Hispanic participants for Gen 3; it also includes 8 individuals who reported belonging to more than 1 race and 2 individuals with unknown or missing race and ethnicity data.

Measures

The smartphone application consists of a set of tasks organized into 3 blocks of assessments, lasting ≈15 to 20 minutes in total. Block 1: Participants answer questions to ensure that they can see the stimuli on the phone screen and hear the instructions adequately. Block 2: Participants complete a series of cognitive tasks and respond to open‐ended questions, which are voice recorded. Block 3: To ensure that performance is not influenced by external factors, participants respond to questions about, for example, their experiences with using the smartphone application. Data on all the blocks were collected at FHS from the initiation of the study. The order of overlapping tasks and the administration schedule over time (every 3 months for a period of 3 years) are fixed. In this way, both the study design and the type and order of tasks have been harmonized.
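To make the fixed structure concrete, the following minimal Python sketch models the 3 assessment blocks and the quarterly schedule described above. The class and field names are illustrative assumptions, not part of the study's actual software.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AssessmentProtocol:
    # The 3 fixed blocks described in the Measures section
    blocks: tuple = (
        "Block 1: sensory checks (vision, hearing)",
        "Block 2: voice-recorded cognitive tasks",
        "Block 3: experience and environment questions",
    )
    interval_days: int = 91   # approximately every 3 months
    duration_years: int = 3   # total follow-up period

    def session_dates(self, start: date) -> list:
        """Enumerate the fixed quarterly administration dates."""
        n_sessions = (self.duration_years * 365) // self.interval_days + 1
        return [start + timedelta(days=i * self.interval_days) for i in range(n_sessions)]

protocol = AssessmentProtocol()
for d in protocol.session_dates(date(2022, 8, 18)):  # study launch date from Methods
    print(d.isoformat())
```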

To understand the feasibility of conducting a smartphone‐based study, participants completed a series of questions in different categories that are described in detail here. See Data S1 for the actual items used along with a summary of the responses for each cohort.

  1. Screening questions: these questions were designed to assess problems with vision and hearing and pain in the fingers. They were designed to gauge the presence of sensory‐related issues that could have affected testing while using the app; a dichotomous response format was used for items related to vision and hearing (yes/no), and a Likert scale ranging from 0 (no pain) to 10 (severe pain) was used to assess the pain level in the fingers. If participants responded affirmatively to problems with vision or hearing, they saw a message on their screen encouraging them to increase the brightness or volume on their devices. For a pain level above 6, participants were encouraged to discontinue the task and return to it after the pain level attenuated (see the sketch after this list).

  2. Frequency of smartphone use: baseline patterns of smartphone use were determined from 2 questions in this section that focused on the length of smartphone use (ranging from <6 months to ≥5 years) and the frequency of use in hours per week (<1 hour to ≥10 hours).

  3. App‐related questions: this section consisted of 6 items in which participants rate their comfort in using the smartphone application. The items gauge ease of app use, confidence in using the app, ability to use the app without support of another individual, concerns about privacy while using the app, the clarity of the task instructions, and comfort with task completion schedule. Participants rated these items on a 5‐point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree).

  4. Smartphone‐related questions: this section consisted of 4 questions. There were 3 items that mirrored the app‐related items: those related to ease of use, confidence in using the smartphone, and ability to use the smartphone without others' support. One other item was related to participants' ability to figure out high‐tech products and services using their smartphone with help from others. Participants rated these items on the same 5‐point Likert scale.

  5. Task‐based environment: four questions in this section were designed to understand whether environmental factors may have affected task performance. The first question had participants select from among 4 types of environments in which they had completed the tasks (public/private location×quiet/loud noise level). The second set of questions asked participants whether their performance was affected by their difficulty with seeing the screen due to issues in the environment (eg, glare from sunlight), issues in the app (eg, font size), vision/eye problems (eg, glaucoma, myopia), or some other issue (a free text response field was provided). A similar third question examined whether hearing issues might have affected performance due to environmental factors (eg, distraction, background noise), app‐related issues (eg, low volume, glitches), prompts being read too fast, hearing problems (eg, tinnitus), or other issues wherein they could type in the free text field. The last question asked for their general comments/concerns/suggestions, if any.
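As an illustration of the screening logic in item 1, here is a minimal Python sketch assuming simple yes/no vision and hearing items, a 0 to 10 pain scale, and the "above 6" pause rule stated above. The function name, threshold constant, and message wording are hypothetical, not the app's actual implementation.

```python
# Hypothetical screening gate; all names and messages are illustrative.
PAIN_DISCONTINUE_THRESHOLD = 6  # item 1 describes pausing for pain above 6

def screening_gate(vision_problem: bool, hearing_problem: bool, pain_level: int) -> list:
    """Return the on-screen messages a participant would see after screening."""
    messages = []
    if vision_problem:
        messages.append("Try increasing the screen brightness on your device.")
    if hearing_problem:
        messages.append("Try increasing the volume on your device.")
    if pain_level > PAIN_DISCONTINUE_THRESHOLD:
        messages.append("Please stop for now and return once the pain has eased.")
    return messages

# Example: a participant with a vision issue and pain level 7
print(screening_gate(vision_problem=True, hearing_problem=False, pain_level=7))
```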

Statistical Analysis

Descriptive and frequency‐based analyses for the questions examining feasibility were conducted using scripts written in Python (version 3.6.8), primarily using the Matplotlib, seaborn, pandas, and NumPy libraries. The raw JSON data files were retrieved from Amazon S3 cloud storage via the AWS CLI s3 sync command, stored on FHS storage servers, and then parsed and prepared for analysis.
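The following is a hedged sketch of this pipeline under stated assumptions: the bucket name, local paths, one-response-per-file layout, and JSON field names (cohort, app_confidence) are all invented for illustration; only the general approach (AWS CLI sync, JSON parsing, pandas frequency tables) follows the description above.

```python
import json
import subprocess
from pathlib import Path

import pandas as pd

# Pull raw JSON files from S3 (requires the AWS CLI to be installed and configured;
# the bucket and prefix here are placeholders).
subprocess.run(
    ["aws", "s3", "sync", "s3://example-fhs-digital-bucket/raw/", "data/raw/"],
    check=True,
)

# Parse each file into a record (assumes one questionnaire response per file).
records = []
for path in Path("data/raw").glob("*.json"):
    with open(path) as f:
        records.append(json.load(f))

df = pd.DataFrame(records)

# Frequency of responses to a hypothetical app-confidence item, by cohort,
# expressed as within-cohort percentages.
freq = (
    df.groupby("cohort")["app_confidence"]
      .value_counts(normalize=True)
      .mul(100)
      .round(1)
)
print(freq)
```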

Results

The following sections characterize in detail participants' perceptions and their experiences pertaining to their app and smartphone usage.

Screening Questions

Almost all the participants across the 2 FHS cohorts (>99%) reported no problems with reading what was on the screen. A high proportion of participants reported no pain at all or minimal pain (ranging from 1 to 3) in their fingers (see Figure 1). Use of assistive devices for vision was reported in about 90% and 85% of participants in Gen 2 and Gen 3 cohorts, respectively. Use of assistive devices for hearing was reported in about 10% and 5% of participants in Gen 2 and Gen 3 cohorts, respectively.

Figure 1. Participant responses by cohort about their current level of pain in fingers.

Frequency of Smartphone Use

The majority of the participants reported using their smartphone for 5 years or more (88% for Gen 2 cohort, 93% for Gen 3 cohort), with <5% reporting smartphone use for <1 year (see Figure 2). Similarly, the majority of participants (80% for Gen 2 cohort, 84% for Gen 3 cohort) reported using their phone 5 or more hours per week (see Figure 3).

Figure 2. Participant responses by cohort about the frequency of their smartphone use.

Figure 3. Participant responses by cohort about the number of hours of smartphone use per week.

App‐Related Questions and Smartphone‐Related Questions

Participants rated items about the smartphone and the app. As depicted in Figure 4, most participants strongly agreed or agreed with feeling very confident using the smartphone (92% and 88% for the Gen 2 and Gen 3 cohorts, respectively). The majority of participants also strongly agreed or agreed with feeling very confident about using the app (73% for the Gen 2 cohort, 80% for the Gen 3 cohort). Most participants rated the smartphone (72% and 84% for the Gen 2 and Gen 3 cohorts, respectively) and the app (79% and 83%, respectively) as easy to use (see Figure 5).

Figure 4. Participant responses by cohort about their confidence in smartphone and app use.

Figure 5. Participant responses by cohort about ease of smartphone and app use.

Regarding their ability to use the smartphone without any support, the majority of participants strongly agreed or agreed with the item (83% and 85% for the Gen 2 and Gen 3 cohorts, respectively; see Figure 6). For an analogous item about using the app without any support, the ratings were 73% and 81% for the Gen 2 and Gen 3 cohorts, respectively. Fewer than 20% of the participants (19% and 10% for the Gen 2 and Gen 3 cohorts, respectively) strongly disagreed or disagreed with the item that they could independently figure out novel products and services using their smartphone.

Figure 6. Participant responses by cohort about their ability to use the smartphone and app without support.

A majority of participants agreed or strongly agreed that the schedule for completion of tasks was reasonable (90% for the Gen 2 cohort, 87% for the Gen 3 cohort; see Figure 7). Similarly, most participants rated the task instructions on the app as clear and easy to follow (88% for the Gen 2 cohort, 91% for the Gen 3 cohort; see Figure 8). For the item related to not being concerned about their privacy while using the app, the ratings showed more variability, although most participants strongly agreed or agreed with this item (73% for the Gen 2 cohort, 63% for the Gen 3 cohort). Relatively fewer participants (20% and 30% for the Gen 2 and Gen 3 cohorts, respectively) rated this item as neutral, and only a small proportion (<8% for each cohort) disagreed or strongly disagreed (see Figure 9).
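As an illustration of how cohort-by-response summaries like those in Figures 4 through 9 can be produced with the libraries named in the Statistical Analysis section, here is a short sketch; the data frame contents and column names are invented for demonstration and do not reproduce the study's figures.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

likert_order = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]

# Toy data standing in for parsed questionnaire responses.
df = pd.DataFrame({
    "cohort": ["Gen 2"] * 3 + ["Gen 3"] * 3,
    "response": ["Agree", "Strongly agree", "Neutral"] * 2,
})

# Side-by-side counts per Likert category, split by cohort.
sns.countplot(data=df, x="response", hue="cohort", order=likert_order)
plt.title("Privacy concern item by cohort (illustrative data)")
plt.xticks(rotation=30)
plt.tight_layout()
plt.show()
```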

Figure 7. Participant responses by cohort about their perception of the schedule of tasks.

Figure 8. Participant responses by cohort about their perception of app‐related task instructions.

Figure 9. Participant responses by cohort about their level of concern regarding their privacy.

Task‐Based Environment

A majority of the participants indicated that their performance on the app was not affected by hearing difficulty (>97% for both cohorts) or by their ability to see the screen (>90% for both cohorts). Most participants (>92% for both cohorts) completed the tasks in a quiet, private location (see Figure 10).

Figure 10. Type of environment selected by the participants where the app‐based task was completed.

Among those who reported performance as having been affected by hearing difficulty, a total of 4 participants (1 from the Gen 2 cohort and 3 from the Gen 3 cohort) responded affirmatively that environmental factors such as distractions and background noise affected their performance, and 3 participants (1 from the Gen 2 cohort and 2 from the Gen 3 cohort) responded affirmatively that the cause may have been sensory problems such as tinnitus. Only 5 of 537 responded that issues with the app such as audio quality may have affected their performance, and only 2 participants responded that prompts were read too fast.

Among those who reported performance as having been affected by their ability to see the screen, the reasons given were (1) vision/eye problems such as myopia or cataract (n=2 from the Gen 2 cohort, n=13 from the Gen 3 cohort), (2) issues with the app in terms of font size or brightness (n=1 from the Gen 2 cohort, n=15 from the Gen 3 cohort), and (3) environmental factors such as glare from sunlight (none from the Gen 2 cohort, n=1 from the Gen 3 cohort). Fewer than 10 participants reported a combination of some or all of these reasons (see Figure 11).

Figure 11. Reasons selected by participants who reported that their ability to see the screen might have affected their app‐related performance.

Discussion

The findings from the current study suggest that self‐administered remote assessments of traditional and novel cognitive tests are feasible and will allow new approaches to longitudinal epidemiologic study design and analysis. This study was also in keeping with good research practice, as suggested by the Food and Drug Administration, 26 and it provides objective participant feedback about their experience with the app. A recent paper highlighted that participant feedback is important for developing strategies to enhance adherence to study protocols. 27

The initial set of questions was designed to screen whether participants could see and hear the app adequately and whether they were experiencing high levels of pain in their fingers that could compromise task performance. There is evidence suggesting that the adoption of novel technology can be affected by physical barriers, such as poor design features affecting visibility. 28 , 29 Therefore, it was important to ascertain that the app used in the current study did not pose challenges to technology adoption that could potentially affect adherence. Almost none of the participants reported any difficulty seeing the stimuli on the screen. Relatively few participants reported problems with hearing, and only about 6% of participants reported requiring hearing aids. The use of glasses for corrective vision was reported by most participants. Most participants reported no pain or minimal pain. In the app, we inserted a notification when displaying the pain level question: if the pain level was 8 or greater (on a 10‐point scale with 10 being the worst level of pain), participants were requested to discontinue the task and return to it once their pain had subsided. The findings from these questions are encouraging and indicate that the app features do not in themselves pose a risk to technology adoption. These data provide a baseline for the nature and extent of basic sensory and pain problems among the participants.

After the tasks on the app were completed, participants were asked a series of questions about the challenges they encountered while performing the tasks and their perception of the extent to which these challenges could have affected their performance. Relatively few participants, 8 of 537 (1%), reported having hearing difficulty while performing the tasks. Fifteen participants (≈3%) reported difficulties with vision or issues within the application as factors that could have affected their performance. As with the initial set of screening questions on pain and sensory issues, it will be important to conduct subgroup analyses when investigating task performance data to ensure that cognition, and not visual/hearing problems or environmental factors, is the source of differences in performance. Gathering this information ensures that the effects of such factors are accounted for during data analysis and when interpreting the results. Interestingly, visual problems might indicate the onset of some neurological conditions and need to be tracked and monitored carefully, as they might serve as early diagnostic markers. 30 , 31 Moreover, as hearing and visual problems have been associated with cognitive decline, identifying such sensory problems via the app might also enable early intervention to resolve or mitigate them.

Regarding smartphone ownership and usage, the majority of participants in both the younger and older cohorts reported owning a smartphone for 5 or more years and using it for 5 or more hours per week. In future task‐based analyses, it will be interesting to examine whether task performance is affected by the number of years or the amount of time participants spend using their smartphone.

In general, participants reported positive experiences with app usage, and they found the schedule of tasks to be reasonable (see Figure 7). The finding that most participants in this study rated the task instructions in an unsupervised remote protocol as clear and easy to follow, and the high degree of confidence reported in using such an app, is reassuring. This finding is consistent with another FHS digital study, which found that >80% of participants using mobile apps reported that the tasks were enjoyable. 25 Notably, in the current study, confidence in independently using the smartphone aligned closely in both cohorts (Gen 2 and Gen 3), whereas there was a relatively larger discrepancy in the confidence ratings for independently using the app (see side‐by‐side graphs, Figure 6). A closer examination of the data reveals 2 subtle patterns: (1) participants in both cohorts reported higher confidence in independently using their smartphones relative to the app, perhaps due to the newness of the app; and (2) ratings of smartphone and app use without any support aligned closely for the younger cohort (85% and 81%, respectively), whereas for the older cohort there was a larger discrepancy between smartphone (83%) and app (73%) use. This difference suggests that older adults might take longer to acclimatize to new aspects of their smartphone, such as a new app. Indeed, there are data to suggest that older adults are hesitant to try out technology due to an unfamiliar user interface or lower levels of knowledge about the technology. 32 , 33 , 34 Future studies should probe participants about the barriers they experience while using the app independently. In the current study, participants will be asked to rate their confidence about using the app at various study intervals, and participants' confidence may be expected to increase over time as they become more familiar with the app's interface and content.

Interestingly, most participants were not concerned about their privacy while using the app, with the older cohort slightly less concerned than the younger cohort (see Figure 9); conversely, only a small proportion of participants were concerned about their privacy. However, it is unclear what aspects of privacy concerned these participants (eg, loss of their personally identifiable data to a third party, privacy of their browsing history, or task‐performance data not being secure). Future iterations of this study and other studies can collect more nuanced data in this regard, as it will be important to allay participants' worries early on. Moreover, it will be important to understand whether this aspect deters compliance with task completion.

Our current findings indicate that an overwhelming majority of participants, including older adults (>92%), are highly compliant and perform the tasks in a quiet, private location as per the examiner instructions. This is similar to the 2022 study by Öhman et al, 35 in which 90% of participants stated that they were not distracted while performing cognitive tasks on a smartphone app. However, that study did not describe the type of distractions in those who reported being distracted. In the current study, we found that 17 of 537 (3%) participants did the tasks in a quiet setting but in a shared space (eg, a library or an office setting). Fewer than 3% of the participants completed the tasks in nonideal settings, such as at home with loud distractions or in public locations with loud noise (see Figure 10). Although environmental context is important to potentially prevent obvious distractions (such as loud noises), a participant could still be distracted while working in a quiet setting. Nevertheless, the type of setting ensures, at least to some degree, that participants are performing the tasks with a relatively higher degree of concentration compared with working in a public and noisy setting. However, it is also important to note that performance collected in a noisy setting does not necessarily invalidate the data, as digital technologies are meant to capture data in a naturalistic setting.

The notion that older adults are resistant or unable to participate in digital studies has been argued to be based on biases related to ageism. 35 , 36 There is evidence that older adults are using technology increasingly to perform everyday tasks such as those related to money management. 37 In line with previous studies, the current study (see Table S1) suggests that demographic variables such as age and race may not have an impact on participation in digital studies. 24 , 36 , 38 In the Jackson Heart Study, 2564 of 4024 (≈64%) middle‐aged and older Black participants completed a survey using a digital platform and expressed further interest in participating in research studies using digital technologies to track health data. 38

In general, previous studies have found that participants, including older adults, are highly adherent to using mobile apps and wearable devices, such as smartwatches. 25 , 35 , 36 One study found that, despite older adults' lower knowledge about and familiarity with technology compared with young adults, ≈86% of older adults elected to participate in a digital study and showed a high rate (≈86%) of adherence to completing the sessions. 25 Previous studies have found that older adults are willing to complete tasks on a smartwatch over a period of 1 year. 25 , 39 Another recent study found that older adults with knee osteoarthritis wore a smartwatch for 75% of a study duration lasting 3 months, 39 and another study found an overall compliance rate of 83% over a 2‐week interval. 40 Overall, recent trends indicate that older adults can and do engage with technology and comply with the task demands of an app.

Collecting digital data in its raw native format is a critical component of our study protocol and is generating a tremendous amount of multimodal data. This high‐dimensional data resource will require use and interpretation by a wide range of content experts from clinical neuropsychology, cognitive neuroscience, computer science and engineering, artificial intelligence, biostatistics, bioinformatics, data management, and related fields who can assist with the management, analysis, and meaningful interpretation of results derived from these digital data streams. We foresee that the application of machine learning and other advanced computational methods will be needed to identify and validate digital metrics as digital biomarkers for early detection of disease processes. 41 By tracking data longitudinally, diagnostic and prognostic information can be individualized, with preventive interventions implemented earlier in the disease process, when they are more likely to be effective. For example, if individuals display cognitive decline, they can be monitored early on for daily functional activities with potentially dangerous consequences, such as financial management (to prevent monetary loss) and medication management (to prevent unintended side effects from erroneous medication intake). This new digital era will accelerate the reality of precision medicine for the treatment of cognition‐related disorders, as well as help usher in a new precision brain health vision of optimizing an individual's cognitive capacity along the entire normal‐to‐disease spectrum.

Limitations

Because of the widespread penetration of smartphones across the population, our digital approach could potentially help offset some of the persistent selection bias that has led to underrepresentation of individuals on the basis of race or ethnicity, geography, and income, 42 but it nonetheless still leads to selection bias for a variety of reasons. Participants not comfortable with technology, or those who develop cognitive impairments, may not participate in the entire study, resulting in missing data that could affect the generalizability of the results. The issue of selection bias and missing data is inherent in any epidemiological study, including those that monitor cognition and health. Although FHS boasts an overall low attrition rate over its 7+ decades of longitudinal characterization, most participants are unable to participate in all invited examinations. It is rare within a longitudinal study with many years of follow‐up for participants to come to every examination or, within an examination, complete all of its components. Thus, as has been the case with nondigital assessments, we monitor and document missing data for the length of the study.

Before data analysis, we will characterize missing data. For analyses using conventional epidemiologic and biostatistical methods, we will conduct a sensitivity analysis to examine the impact of missing data on the results, as well as consider using multiple imputation to assess how the results may change based on data imputed for missing observations (a minimal sketch of such a check appears at the end of this section). We will also use novel data science approaches to consider whether missing data may be a metric of interest. The promise of digital assessment is not limited to within‐test performance: the multidimensional nature of the data being tracked, including patterns of use, enables breaking out of the conventional study design and analytic mindset and considering whether data lost to follow‐up, depending on the prior pattern of use, may itself be a clinically meaningful indicator.

Another limitation is that the FHS Digital Study consists of predominantly White, well‐educated participants residing in the New England area who own smartphones. Thus, the generalizability of the feasibility and usability data is restricted to this population. However, through an American Heart Association funded initiative, the smartphone data collected in this FHS study are also being collected in 2 other more racially and ethnically diverse cohorts, the Bogalusa Heart Study and the Boston University Alzheimer's Disease Research Center. The harmonization of smartphone data collection across these 3 study sites will create a digital data resource with wider ranges of socioeconomic, educational, racial, and ethnic representation. Table S1 shows the breakdown of the demographic and questionnaire data for each subcohort of participants. Thus, although the findings are overall highly promising, they will need replication across non‐FHS cohorts. Moreover, it remains to be seen whether similar trends in participants' experience with the app will be found in international cohorts and low‐resource settings.
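The following is a minimal sketch of the kind of multiple‐imputation sensitivity check described above, assuming numeric questionnaire scores with missing values; the column names, simulated data, and number of imputations are illustrative choices, not the study's analysis plan.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

# Simulate a small dataset with a missing-at-random ease-of-use score.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(66, 7, 200),
    "app_ease_score": rng.integers(1, 6, 200).astype(float),  # 1-5 Likert coded numerically
})
df.loc[rng.choice(200, 30, replace=False), "app_ease_score"] = np.nan  # inject missingness

# Create several imputed datasets and pool the estimate of interest,
# mirroring the spirit of multiple imputation.
estimates = []
for seed in range(5):
    imputer = IterativeImputer(random_state=seed, sample_posterior=True)
    imputed = imputer.fit_transform(df)
    estimates.append(imputed[:, 1].mean())  # column 1 = app_ease_score

print(f"Pooled mean ease score: {np.mean(estimates):.2f} (SD {np.std(estimates):.2f})")
```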

Conclusions

The findings from the current study based on participant feedback are encouraging and point to the feasibility of conducting unsupervised, remote, mobile‐based assessments in a large, community‐based population. Overall, the study findings provide empirical evidence for designing and implementing an unsupervised, smartphone‐based remote study to collect brain health data in large, diverse epidemiological cohorts.

Sources of Funding

Funding support for this project comes from the American Heart Association (20SFRN35360180), Gates Ventures, and Alzheimer's Research UK. Additional funding support was provided by grants from the National Institutes of Health (R00AG062783, RF1AG072654, U19AG068753, U01AG068221, R01AG062109, R01HL159620, R43DK134273, and R21CA253498), the Framingham Heart Study's National Heart, Lung, and Blood Institute contract (N01‐HC‐25195), the Alzheimer's Drug Discovery Foundation (201902–2017835), and the Karen Toffler Charitable Trust.

Disclosures

Rhoda Au is a scientific advisor to Signant Health, Biogen, and Novo Nordisk and a consultant to the Davos Alzheimer's Collaborative (DAC) for the global cohort effort. Vijaya B. Kolachalama serves as a consultant to AstraZeneca. All other authors report no conflicts of interest.

Supporting information

Data S1

Table S1

Acknowledgments

We thank the Framingham Heart Study participants for all their patience, volunteer time, and commitment to our research programs and to this project.

This article was sent to Francoise A. Marvel, MD, Guest Editor, for review by expert referees, editorial decision, and final disposition.


References

1. Asher S, Priefer R. Alzheimer's disease failed clinical trials. Life Sci. 2022;306:120861. doi: 10.1016/j.lfs.2022.120861
2. Yiannopoulou KG, Anastasiou AI, Zachariou V, Pelidou S‐H. Reasons for failed trials of disease‐modifying treatments for Alzheimer disease and their contribution in recent research. Biomedicines. 2019;7:97. doi: 10.3390/biomedicines7040097
3. Sperling RA, Karlawish J, Johnson KA. Preclinical Alzheimer disease—the challenges ahead. Nat Rev Neurol. 2013;9:54–58. doi: 10.1038/nrneurol.2012.241
4. Escandon A, Al‐Hammadi N, Galvin JE. Effect of cognitive fluctuation on neuropsychological performance in aging and dementia. Neurology. 2010;74:210–217. doi: 10.1212/WNL.0b013e3181ca017d
5. Walker MP, Ayre GA, Cummings JL, Wesnes K, McKeith IG, O'Brien JT, Ballard CG. Quantifying fluctuation in dementia with Lewy bodies, Alzheimer's disease, and vascular dementia. Neurology. 2000;54:1616–1625. doi: 10.1212/WNL.54.8.1616
6. Ballard CG, Aarsland D, McKeith I, O'Brien J, Gray A, Cormack F, Burn D, Cassidy T, Starfeldt R, Larsen JP. Fluctuations in attention: PD dementia vs DLB with parkinsonism. Neurology. 2002;59:1714–1720. doi: 10.1212/01.WNL.0000036908.39696.FD
7. Livingston G, Huntley J, Sommerlad A, Ames D, Ballard C, Banerjee S, Brayne C, Burns A, Cohen‐Mansfield J, Cooper C. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet. 2020;396:413–446. doi: 10.1016/S0140-6736(20)30367-6
8. Mobile fact sheet. Pew Research Center. 2021. Accessed May 16, 2023. https://www.pewresearch.org/internet/fact‐sheet/mobile/
9. Asgari M, Kaye J, Dodge H. Predicting mild cognitive impairment from spontaneous spoken utterances. Alzheimers Dement (NY). 2017;3:219–228. doi: 10.1016/j.trci.2017.01.006
10. Ostrand R, Gunstad J. Using automatic assessment of speech production to predict current and future cognitive function in older adults. J Geriatr Psychiatry Neurol. 2021;34:357–369. doi: 10.1177/0891988720933358
11. Piers RJ, Devlin KN, Ning B, Liu Y, Wasserman B, Massaro JM, Lamar M, Price CC, Swenson R, Davis R. Age and graphomotor decision making assessed with the digital clock drawing test: the Framingham Heart Study. J Alzheimers Dis. 2017;60:1611–1620. doi: 10.3233/JAD-170444
12. Davoudi A, Dion C, Formanski E, Frank BE, Amini S, Matusz EF, Wasserman V, Penney D, Davis R, Rashidi P. Normative references for graphomotor and latency digital clock drawing metrics for adults age 55 and older: operationalizing the production of a normal appearing clock. J Alzheimers Dis. 2021;82:59–70. doi: 10.3233/JAD-201249
13. Lamar M, Ajilore O, Leow A, Charlton R, Cohen J, GadElkarim J, Yang S, Zhang A, Davis R, Penney D. Cognitive and connectome properties detectable through individual differences in graphomotor organization. Neuropsychologia. 2016;85:301–309. doi: 10.1016/j.neuropsychologia.2016.03.034
14. Cohen J, Penney DL, Davis R, Libon DJ, Swenson RA, Ajilore O, Kumar A, Lamar M. Digital clock drawing: differentiating "thinking" versus "doing" in younger and older adults with depression. J Int Neuropsychol Soc. 2014;20:920–928. doi: 10.1017/S1355617714000757
15. Koo BM, Vizer LM. Mobile technology for cognitive assessment of older adults: a scoping review. Innov Aging. 2019;3:igy038. doi: 10.1093/geroni/igy038
16. Cerino ES, Katz MJ, Wang C, Qin J, Gao Q, Hyun J, Hakun JG, Roque NA, Derby CA, Lipton R. Variability in cognitive performance on mobile devices is sensitive to mild cognitive impairment: results from the Einstein Aging Study. Front Digit Health. 2021;3:758031. doi: 10.3389/fdgth.2021.758031
17. Brewster PW, Rush J, Ozen L, Vendittelli R, Hofer S. Feasibility and psychometric integrity of mobile phone‐based intensive measurement of cognition in older adults. Exp Aging Res. 2021;47:303–321. doi: 10.1080/0361073X.2021.1894072
18. Wu Y‐H, Vidal J‐S, de Rotrou J, Sikkes SA, Rigaud A‐S, Plichart M. A tablet‐PC‐based cancellation test assessing executive functions in older adults. Am J Geriatr Psychiatry. 2015;23:1154–1161. doi: 10.1016/j.jagp.2015.05.012
19. Freedman M, Leach L, Tartaglia M. The Toronto Cognitive Assessment (TorCA): normative data and validation to detect amnestic mild cognitive impairment. Alzheimers Res Ther. 2018;10:65. doi: 10.1186/s13195-018-0382-y
20. Belleville S, LaPlume AA, Purkart R. Web‐based cognitive assessment in older adults: where do we stand? Curr Opin Neurol. 2023;36:491–497. doi: 10.1097/WCO.0000000000001192
21. Snitz BE, Tudorascu DL, Yu Z, Campbell E, Lopresti BJ, Laymon CM, Minhas DS, Nadkarni NK, Aizenstein HJ, Klunk W, et al. Associations between NIH toolbox cognition battery and in vivo brain amyloid and tau pathology in non‐demented older adults. Alzheimers Dement (Amst). 2020;12:e12018. doi: 10.1002/dad2.12018
22. Nicosia J, Aschenbrenner AJ, Balota DA, Sliwinski MJ, Tahan M, Adams S, Stout SS, Wilks H, Gordon BA, Benzinger T. Unsupervised high‐frequency smartphone‐based cognitive assessments are reliable, valid, and feasible in older adults at risk for Alzheimer's disease. J Int Neuropsychol Soc. 2023;29:459–471. doi: 10.1017/S135561772200042X
23. Papp KV, Samaroo A, Chou HC, Buckley R, Schneider OR, Hsieh S, Soberanes D, Quiroz Y, Properzi M, Schultz A, et al. Unsupervised mobile cognitive testing for use in preclinical Alzheimer's disease. Alzheimers Dement (Amst). 2021;13:e12243. doi: 10.1002/dad2.12243
24. Pratap A, Neto EC, Snyder P, Stepnowsky C, Elhadad N, Grant D, Mohebbi MH, Mooney S, Suver C, Wilbanks J. Indicators of retention in remote digital health studies: a cross‐study evaluation of 100,000 participants. NPJ Digit Med. 2020;3:21. doi: 10.1038/s41746-020-0224-8
25. McManus DD, Trinquart L, Benjamin EJ, Manders ES, Fusco K, Jung LS, Spartano NL, Kheterpal V, Nowak C, Sardana M. Design and preliminary findings from a new electronic cohort embedded in the Framingham Heart Study. J Med Internet Res. 2019;21:e12143. doi: 10.2196/12143
26. Policy for device software functions and mobile medical applications. U.S. Food and Drug Administration. 2022. Accessed May 16, 2023. https://www.fda.gov/regulatory‐information/search‐fda‐guidance‐documents/policy‐device‐software‐functions‐and‐mobile‐medical‐applications
27. Badawy SM, Thompson AA, Kuhns L. Medication adherence and technology‐based interventions for adolescents with chronic health conditions: a few key considerations. JMIR Mhealth Uhealth. 2017;5:e8310. doi: 10.2196/mhealth.8310
28. Wang S, Bolling K, Mao W, Reichstadt J, Jeste D, Kim H‐C, Nebeker C. Technology to support aging in place: older adults' perspectives. Healthcare. 2019;7:60. doi: 10.3390/healthcare7020060
29. Pang C, Wang ZC, McGrenere J, Leung R, Dai J, Moffatt K. Technology adoption and learning preferences for older adults: evolving perceptions, ongoing challenges, and emerging design opportunities. In: CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; May 8–13, 2021; Yokohama, Japan. New York, NY: Association for Computing Machinery; 2021:1–13.
30. Gaynor LS, Cid REC, Penate A, Rosselli M, Burke SN, Wicklund M, Loewenstein DA, Bauer R. Visual object discrimination impairment as an early predictor of mild cognitive impairment and Alzheimer's disease. J Int Neuropsychol Soc. 2019;25:688–698. doi: 10.1017/S1355617719000316
31. Davies‐Kershaw HR, Hackett RA, Cadar D, Herbert A, Orrell M, Steptoe A. Vision impairment and risk of dementia: findings from the English Longitudinal Study of Ageing. J Am Geriatr Soc. 2018;66:1823–1829. doi: 10.1111/jgs.15456
32. Lee L, Maher M. Factors affecting the initial engagement of older adults in the use of interactive technology. Int J Environ Res Public Health. 2021;18:2847. doi: 10.3390/ijerph18062847
33. Rochmawati E, Kamilah F, Iskandar AC. Acceptance of e‐health technology among older people: a qualitative study. Nurs Health Sci. 2022;24:437–446. doi: 10.1111/nhs.12939
34. Choi H, Lee S. Failure mode and effects analysis of telehealth service of minority elderly for sustainable digital transformation. Comput Biol Med. 2022;148:105950. doi: 10.1016/j.compbiomed.2022.105950
35. Öhman F, Berron D, Papp KV, Kern S, Skoog J, Hadarsson Bodin T, Zettergren A, Skoog I, Schöll M. Unsupervised mobile app‐based cognitive testing in a population‐based study of older adults born 1944. Front Digit Health. 2022;4:227. doi: 10.3389/fdgth.2022.933265
36. Nicosia J, Aschenbrenner AJ, Adams SL, Tahan M, Stout SH, Wilks H, Balls‐Berry JE, Morris JC, Hassenstab J. Bridging the technological divide: stigmas and challenges with technology in digital brain health studies of older adults. Front Digit Health. 2022;4:880055. doi: 10.3389/fdgth.2022.880055
37. Sunderaraman P, Ho S, Chapman S, Joyce JL, Colvin L, Omollo S, Pleshkevich M, Cosentino S. Technology use in everyday financial activities: evidence from online and offline survey data. Arch Clin Neuropsychol. 2020;35:385–400. doi: 10.1093/arclin/acz042
38. Anugu P, Ansari MAY, Min Y‐I, Benjamin EJ, Murabito J, Winters K, Turner E, Correa A. Digital connectedness in the Jackson Heart Study: cross‐sectional study. J Med Internet Res. 2022;24:e37501. doi: 10.2196/37501
39. Beukenhorst AL, Howells K, Cook L, McBeth J, O'Neill TW, Parkes MJ, Sanders C, Sergeant JC, Weihrich KS, Dixon W, et al. Engagement and participant experiences with consumer smartwatches for health research: longitudinal, observational feasibility study. JMIR Mhealth Uhealth. 2020;8:e14368. doi: 10.2196/14368
40. Rouzaud Laborde C, Cenko E, Mardini MT, Nerella S, Kheirkhahan M, Ranka S, Fillingim RB, Corbett DB, Weber E, Rashidi P. Satisfaction, usability, and compliance with the use of smartwatches for ecological momentary assessment of knee osteoarthritis symptoms in older adults: usability study. JMIR Aging. 2021;4:e24553. doi: 10.2196/24553
41. Au R, Kolachalama VB, Paschalidis I. Redefining and validating digital biomarkers as fluid, dynamic multi‐dimensional digital signal patterns. Front Digit Health. 2022;3:208. doi: 10.3389/fdgth.2021.751629
42. Barnes LL. Alzheimer disease in African American individuals: increased incidence or not enough data? Nat Rev Neurol. 2022;18:56–62. doi: 10.1038/s41582-021-00589-3
