Abstract
The field of behavioral medicine has a long and successful history of leveraging digital health tools to promote health behavior change. Our 2019 summary of the history and future of digital health in behavioral medicine (Arigo et al. in J Behav Med 42:67–83, 2019) was one of the most highly cited articles in the Journal of Behavioral Medicine from 2010 to 2020; here, we provide an update on the opportunities and challenges we identified in 2019. We address the impact of the COVID-19 pandemic on behavioral medicine research and practice and highlight some of the digital health advances it prompted. We also describe emerging challenges and opportunities in the evolving ecosystem of digital health in the field of behavioral medicine, including the emergence of new evidence, research methods, and tools to promote health and health behaviors. Specifically, we offer updates on advanced research methods, the science of digital engagement, dissemination and implementation science, and artificial intelligence technologies, including examples of uses in healthcare and behavioral medicine. We also provide recommendations for next steps in these areas with attention to ethics, training, and accessibility considerations. The field of behavioral medicine has made meaningful advances since 2019 and continues to evolve with impressive pace and innovation.
Keywords: Digital health, Mobile applications, Behavior change intervention, Artificial intelligence, Wearable technology
Introduction
For the 40th anniversary of the Journal of Behavioral Medicine (2019), we presented a “history and future” of digital health tools in behavioral medicine (Arigo et al., 2019). We outlined key successes, challenges, and opportunities with respect to technologies: wearable devices, mobile apps, and social media (see Table 1 for a summary of opportunities identified). We described the contributions of wearables, biofeedback, telehealth, social media platforms, and ambulatory assessment methods to the field of behavioral medicine. We highlighted the need for high-quality evidence, new research methods, an emphasis on ethics, and industry-academic collaboration. The science and practice of digital behavioral medicine have seen exponential and exciting growth over the past 5 years. Here we provide a narrative update on progress toward addressing the challenges we identified in 2019 and highlight recent advancements in these areas as well as in the emerging area of artificial intelligence.
Table 1.
Updates on challenges to advancing digital health in behavioral medicine and areas for continued work
| Challenge in 2019 | Progress by 2024 | Suggestions for future directions |
|---|---|---|
| Growing an evidence base for commercial devices/platforms | Growth in the commercial mobile health markets continues to accelerate and industry-academic partnerships continue to be key to success | Dissemination and increased use of pathways from industry-academic partnerships to commercialization of technologies |
| | Modest progress made in incorporating BCTs into commercial apps; additional trials have been conducted to establish the efficacy of some commercial apps | Improved understanding of how best to use digital health tools to support BCTs in the context of interventions |
| Advances in research methods to support the complexity of digital health research questions | Expansion of research using factorial designs, SMART trials, N-of-1, micro-randomization, and hybrid designs has opened avenues to pursue more nuanced questions about how, when, and where to intervene and for whom | Research to capture real-time, contextualized effects of exposures and the use of skills promoted by digital tools (including between exposures) |
| | Advances in research on statistical methods and trial considerations for the use of these designs | Ongoing work to disentangle between-person versus within-person effects of digital health tools and their components |
| Lack of a science of engagement | Advances in the conceptualization of both micro- and macro-engagement, including measurement of both | Research to understand the impacts of engagement as measured through objective versus subjective measures and the impact of engagement quality on behavior change outcomes |
| | Expansion of research about the predictors of general engagement | Research to identify predictors of different types of engagement and testing of intervention strategies to facilitate them |
| | Emerging conceptualization of engagement as a dynamic process that may change over time in response to habit formation or other processes | Research to determine when and how often to measure engagement |
| Limited focus on principles versus technologies | Expansion of no-code platforms as a method to expedite app development | Exploration of the relative cost and time associated with development pathways to provide evidence of the potential benefits of each |
| | Use of open APIs and large research consortia as methods to work collaboratively towards large-scale data collection and interoperability | Development of more open APIs to access health data for behavioral medicine tools, including use of APIs by researchers to provide access to the tools they develop |
Sequential Multiple Assignment Randomized Trial (SMART); behavior change techniques (BCTs); application programming interfaces (APIs)
Progress on 2019 challenges and opportunities
New evidence supporting commercial devices/platforms
Since 2019, the use of commercial digital health apps and devices has continued to rise. The global digital health market was valued at $180 billion in 2023 and is expected to grow to $549 billion by 2028 (Markets & Markets, 2023). The Apple App Store and Google Play now offer over 119,000 health apps (Kalinin, 2024). In 2023, 49% of US adults said they had spent money on a health app in the past year (Bashir, 2024) and one-third said they had used a wearable device (Dhingra et al., 2023). Behavioral medicine professionals have an opportunity to take the lead in communicating to the public which health apps and devices are grounded in good science.
In 2019, we noted that the use of evidence-based behavior change techniques (BCTs) in commercial health apps and devices was sparse. Recent reviews reveal some, albeit modest, improvement. For example, one review reported that asthma self-management apps included an average of 4 BCTs, with a range of 1–11 BCTs across apps (Ramsey et al., 2019); other reviews have found that physical activity apps for pregnant women (Hayman et al., 2021) and breast cancer survivors (Cooper et al., 2023) include 2–10 and 2–13 BCTs, respectively. Some popular commercial apps have added BCTs since they were initially reviewed in published articles, suggesting that frequent reviews may be necessary to capture the evolution of commercial products. Notably, in a 2013 review of commercial weight loss apps’ use of BCTs from the Diabetes Prevention Program (DPP) lifestyle intervention, the popular app Noom included only 25% of those BCTs (Pagoto et al., 2013). In 2017, Noom began offering human-delivered lifestyle coaching based on the DPP, which brought the program into full alignment with the DPP. In 2023, Noom launched Noom Med, a program that pairs lifestyle coaching with GLP-1 receptor agonist medications, which produce weight loss that exceeds what can be achieved via lifestyle interventions alone (Noom, 2024). Other digital health companies are designing products using evidence-based behavioral interventions. For example, at least 6 commercial apps leverage cognitive behavioral therapy for insomnia (Erten Uyumaz et al., 2021) and several use evidence-based approaches to weight control such as the DASH diet (n = 7) and Mediterranean diet (n = 55; McAleese et al., 2022). These reviews offer important contributions by examining commercial health apps that were designed to disseminate evidence-based interventions.
Randomized controlled trials (RCTs) are the gold standard for establishing intervention efficacy and are increasingly being used to test commercial health apps, particularly those that already have large user bases. Such trials have established the efficacy of the apps Calm, Headspace, Noom, WW (WeightWatchers), and Talkspace on various clinical outcomes (Huberty et al., 2022; Song et al., 2023; Taylor et al., 2022; Thomas et al., 2017; Toro-Ramos et al., 2020). WeightWatchers is a notable example of a commercial program that has been tested in numerous clinical trials over the past 10 years as it has evolved from in-person delivery to hybrid (in-person and digital) (Ahern et al., 2017; Johnston et al., 2013) to digital only (Pagoto et al., 2023; Thomas et al., 2017). As we described in 2019, clinical trials for commercial products are often conducted via industry-academic partnerships, though many are conducted independently by academics. Researchers who do not have industry partnerships may consider the latter approach as an efficient alternative to attempting to develop new products. Given the sheer volume of commercial digital health products on the market, a great need exists for research on those that are already in the hands of millions of users. Research is also needed on the extent to which academia-produced digital health products reach the marketplace and attract users.
Some commercial tools are now routinely used in evidence-based behavioral interventions, given how effectively they assist users in enacting BCTs. For example, MyFitnessPal is a commercial calorie tracking app that is commonly used in weight control clinical trials to enable dietary self-monitoring and goal setting (e.g., Hoerster et al., 2020; Pagoto et al., 2022; Patel et al., 2020; Wang et al., 2017). Fitbit’s activity tracking devices, mobile apps, and scales are also used in clinical trials to enable self-monitoring (e.g., Miller et al., 2023). Further research is needed to determine how commercial digital health tools can be leveraged to support the execution of BCTs in interventions.
Advances in research methods to support the complexity of digital health research questions
Traditional RCTs are the gold standard because they uniquely offer the advantage of random assignment to condition and thus allow for causal conclusions about the efficacy of interventions. However, RCTs are often inadequate and/or inefficient for evaluating digital health interventions. As we noted in 2019, behavioral interventions often provide “packages” of strategies to change behaviors that affect health outcomes. Unlike traditional face-to-face programs, digital interventions are often self-guided with respect to the frequency, sequence, and combination of exposure to various skills or components. Thus, in experimental conditions where participants receive the digital intervention, heterogeneity in users’ experiences of the intervention can make it impossible to draw strong conclusions about its effects. In trials that compare digital interventions to no intervention or to other interventions, this heterogeneity is often not reported, though it can mask meaningful effects for some participants. Thus, even when a digital intervention outperforms a comparator, we often don’t know the effective component(s), dose(s), sequence(s), or corresponding mechanism(s) of action.
New experimental designs can address these challenges. Factorial designs randomize a single participant to more than one condition across multiple factors at the start of treatment. As each participant “counts” toward more than one condition, this approach is efficient in that it maximizes power for tests of distinct components. Factorial designs may be uniquely useful for testing digital interventions, as they can efficiently test different combinations of components, doses, or sequences. In one ongoing trial, for example, Butryn and colleagues test the presence versus absence of sharing data from digital monitors of weight, dietary intake, and physical activity with 3 different support sources: the interventionist, a friend or family member, and a group of other participants in the trial (peers; Miller et al., 2023). As each participant is randomized to sharing ON versus OFF for each support source, it is possible to test the individual, additive, and synergistic effects of each type of data sharing. Factorial designs are now commonly used to optimize digital interventions using the Multiphase Optimization Strategy (MOST) framework (Collins & Guastaferro, 2021; Szeszulski & Guastaferro, 2024), an approach that has also seen impressive growth in popularity since 2019.
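To make the factorial logic concrete, the following Python sketch assigns participants to the 8 cells of a hypothetical 2 × 2 × 2 factorial design in which each factor is a data-sharing component toggled ON or OFF, loosely mirroring the data-sharing example above. The factor names, block-randomization scheme, and seed are illustrative assumptions, not the cited trial's actual procedure.

```python
# Illustrative sketch only: balanced assignment to the 8 cells of a 2x2x2 factorial
# design, where each factor is an intervention component set to ON (1) or OFF (0).
import itertools
import random

FACTORS = ["share_with_interventionist", "share_with_friend_or_family", "share_with_peers"]
CELLS = list(itertools.product([0, 1], repeat=len(FACTORS)))  # 8 unique ON/OFF combinations

def assign(participant_ids, seed=2024):
    """Randomize participants across factorial cells in shuffled blocks of 8."""
    rng = random.Random(seed)
    assignments, block = {}, []
    for pid in participant_ids:
        if not block:              # refill and reshuffle once a full block has been used
            block = CELLS.copy()
            rng.shuffle(block)
        assignments[pid] = dict(zip(FACTORS, block.pop()))
    return assignments

for pid, condition in assign([f"P{i:03d}" for i in range(1, 9)]).items():
    print(pid, condition)
```

Because every participant contributes data to the ON and OFF levels of all three factors, main effects and interactions can be estimated from the same sample, which is the source of the efficiency described above.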
Sequential Multiple Assignment Randomized Trials (SMARTs) also use more than one randomization, but at different points in treatment, to test dynamic treatment regimens (Kidwell, 2015). This can help us understand how to proceed at specific decision points in treatment, such as when patients do not respond to the initial package of skills. In a SMART, participants are re-randomized if they meet prespecified criteria at the decision point (or at multiple points), to determine the best overall approach to treatment for different participant trajectories. For example, an ongoing SMART by Zhao et al. (2022) tests whether adding personalized text messages improves smoking cessation rates among non-responders to generic messages, and whether adding other digital components can improve cessation rates among non-responders to personalized text messages. Many trials that use this design are not yet complete, so their impact remains to be determined; still, there is clear interest in the promise of SMARTs to generate personalized digital interventions.
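As a schematic of the SMART logic (not the protocol of the cited trial), the sketch below assigns a first-stage tactic, checks a prespecified response criterion at the decision point, and re-randomizes non-responders to an augmented option; all tactic names, thresholds, and second-stage options are hypothetical.

```python
# Hypothetical SMART-style decision logic; tactic names, response criterion, and
# second-stage options are illustrative assumptions.
import random

rng = random.Random(7)

def stage1_assign():
    return rng.choice(["generic_messages", "personalized_messages"])

def responded(quit_attempts_past_2_weeks, threshold=1):
    # Prespecified response criterion at the decision point (assumed for illustration)
    return quit_attempts_past_2_weeks >= threshold

def stage2_assign(stage1_tactic, response):
    if response:
        return stage1_tactic + " (continue)"
    # Non-responders are re-randomized to an augmented second-stage tactic
    options = {
        "generic_messages": ["add_personalized_messages", "add_coach_calls"],
        "personalized_messages": ["add_chatbot_support", "add_coach_calls"],
    }
    return rng.choice(options[stage1_tactic])

participant = {"stage1": stage1_assign(), "quit_attempts_past_2_weeks": 0}
participant["stage2"] = stage2_assign(
    participant["stage1"], responded(participant["quit_attempts_past_2_weeks"])
)
print(participant)
```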
N-of-1 and micro-randomization can reveal the effects of digital health tools as they are being used, within-person (cf. Walton et al., 2018). Most trial designs compare people to each other with respect to treatment response, aggregating across weeks, months, or years to determine a single estimate of response for each person in an assigned condition. The resulting information about who experiences long-term change is invaluable. But behavioral treatments are intended to induce a set of changes in cognitions, emotions, and behaviors through practice in daily life; we don’t yet know much about the translation of skills taught during intervention exposures to behavior change between exposures in daily life, because we don’t use methods that can appropriately assess these processes. Worse, although digital tools are meant to reach participants wherever they are, in their natural environments, we rarely capture what happens when participants are exposed to digital components or the differences in a given person’s response to different components (in the moment or in the near future).
This is a missed opportunity that may have negative consequences for health. For example, although exposure to social media has shown a negative association with well-being between-person, there is little (if any) within-person association (Stavrova & Denissen, 2021). Moreover, among adolescents, although greater social media use is positively associated with relationship well-being between-person, it is negatively associated with well-being within-person (Pouwels et al., 2021). Similar divergence may exist for common behavior change techniques in digital interventions (e.g., activating social comparison processes via leaderboards; Arigo et al., 2020), though our methods overlook them. Disentangling these associations is critical to improving the effectiveness of digital tools for health behavior change. Intensive longitudinal assessment designs such as ecological momentary assessment and ambulatory daily diaries capture psychological experiences and behaviors as they occur in real time, using technologies such as smartphones and wearables (Smyth et al., 2017). These designs are increasingly popular and can be combined with experimental methods (e.g., micro-randomization) to test the immediate and short-term effects of digital components on their purported mechanisms of action and longer-term outcomes (Arigo et al., 2024).
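The following simulation sketches the core idea of micro-randomization: a prompt is delivered with a fixed probability at each of many within-person decision points, and the proximal outcome is compared across prompted and unprompted occasions for the same person. All values are simulated, and the analysis shown is a simple difference in means rather than a full causal excursion effect model.

```python
# Simulated micro-randomized trial for one participant; all numbers are assumptions.
import random

def simulate_participant(n_points=200, p_prompt=0.5, seed=1):
    rng = random.Random(seed)
    records = []
    for t in range(n_points):
        prompted = rng.random() < p_prompt                        # micro-randomization at each decision point
        steps = rng.gauss(400, 120) + (80 if prompted else 0)     # proximal outcome: steps in the next hour
        records.append({"t": t, "prompted": prompted, "steps_next_hour": steps})
    return records

data = simulate_participant()
prompted = [r["steps_next_hour"] for r in data if r["prompted"]]
unprompted = [r["steps_next_hour"] for r in data if not r["prompted"]]
estimate = sum(prompted) / len(prompted) - sum(unprompted) / len(unprompted)
print(f"Within-person proximal effect estimate: {estimate:.1f} steps")
```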
Although advanced experimental and intensive research designs existed in 2019 and we referenced them in our earlier paper, databases showed that very little research was published and funding agencies listed very few funded grants using these designs. Fortunately, we have made considerable progress since 2019: PubMed, Google Scholar, and NIH RePORTER now contain multiple pages of studies using these designs for tests of digital tools to support physical activity, weight management, smoking and substance use cessation, oral health care, and medical decision making, as well as to improve engagement with digital resources that support these behavior changes. Recent developments provide even more reason for optimism. These include published guidance on statistical methods for advanced trial designs (Cohn et al., 2023; Montoya et al., 2023; Yeaton, 2024), as well as guidance on the advantages, design considerations, and evaluation of these trials written for diverse audiences (e.g., SMART; Kidwell & Almirall, 2023). Finally, hybrid experimental designs have been developed to address complex hypotheses about both human-delivered and digital components of an intervention (Nahum-Shani et al., 2022). Specifically, these designs can combine traditional group-level randomization (single or multiple randomizations) and intensive longitudinal assessment to understand the effects of intervention packages at different timescales (Nahum-Shani et al., 2024). Such designs offer unique opportunities to determine when and how exposure to intervention components leads to behavior change between exposures, elucidating the pathways linking digital interventions to health outcomes.
Advances in the science of engagement
The utility of digital behavior change interventions rests on their ability to engage users. Many studies show that greater user engagement predicts better outcomes (Donkin et al., 2011; Lehmann et al., 2024; Power et al., 2019), though promoting engagement can be challenging. Engagement is also measured in highly variable ways, and we know little about how much engagement is optimal (Nahum-Shani & Yoon, 2024). Conceptual frameworks that define engagement and its measurement are emerging to address this need (Nahum-Shani & Yoon, 2024). Broadly, researchers must consider what (specifically) users engage with and how they engage. In terms of what, engagement can be thought of as either micro-engagement (i.e., “little e”) or macro-engagement (i.e., “big E”; Cole-Lewis et al., 2019). Micro-engagement refers to user engagement with the intervention interface (e.g., clicks, pages visited) and/or technology-facilitated behavior change strategies (e.g., self-monitoring), whereas macro-engagement refers to user engagement in the target health behavior (e.g., physical activity; Cole-Lewis et al., 2019).
Emerging frameworks for micro-engagement in digital behavior change interventions propose that engagement is multifaceted, including behavioral, cognitive, and affective components. Specifically, Perski et al. (2017) operationalize engagement using objective (e.g., duration of use) and subjective (e.g., perceived interest) measures. Objective measures are more commonly used and typically involve assessing the frequency (i.e., number of uses), intensity (i.e., amount of behavior recorded, such as the number of diet logs or exercises), time spent using the technology and each feature, and type of engagement (Bijkerk et al., 2023). Subjective measures are newer and typically employ self-report via surveys and/or qualitative interviews (Kelders et al., 2020a, 2020b). Engagement has also been conceptualized as active, referring to any engagement in which the user is interacting with the technology, or passive, referring to the user consuming information from the technology but not interacting with it (Perski et al., 2017). Of note, an underexplored aspect is quality, or the degree to which a user engages with the technology as intended (Bijkerk et al., 2023). Little is known about how these facets of engagement affect behavior change.
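As a concrete (hypothetical) example of the objective micro-engagement metrics described above, the pandas sketch below computes frequency, intensity, and time spent from simulated app log data; the column names and metric definitions are illustrative rather than a standardized measure.

```python
# Hypothetical app log data; metric definitions follow the frequency/intensity/duration
# distinctions described above and are illustrative only.
import pandas as pd

logs = pd.DataFrame({
    "user_id": ["A", "A", "A", "B", "B"],
    "session_start": pd.to_datetime([
        "2024-05-01 08:00", "2024-05-01 20:15", "2024-05-03 07:55",
        "2024-05-01 12:30", "2024-05-04 18:40",
    ]),
    "session_minutes": [3.5, 6.0, 2.0, 10.0, 1.5],
    "entries_logged": [1, 2, 1, 4, 0],   # e.g., diet logs recorded during the session
})

metrics = logs.groupby("user_id").agg(
    frequency=("session_start", "count"),        # number of uses
    intensity=("entries_logged", "sum"),         # amount of behavior recorded
    total_minutes=("session_minutes", "sum"),    # time spent using the technology
    days_active=("session_start", lambda s: s.dt.date.nunique()),
)
print(metrics)
```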
Objective measures such as frequency of use are important to include in efficacy trials of digital health tools, but such measures cannot always be compared across different technologies. For this reason, 2 self-report measures have been developed since 2019 that assess engagement across a wide range of digital behavior change interventions. The Digital Behavior Change Intervention (DBCI) Engagement Scale (Perski et al., 2020) uses 10 items to capture the user’s reported amount and depth of use, interest, attention, and enjoyment. Scores on this scale were not associated with objective measures of current or future use of the technology, but the extent to which objective and subjective measures of engagement should be related is unclear. Building on this work, Kelders et al. (2020a) developed the Twente Engagement with eHealth Technologies Scale (TWEETS), a 9-item measure of behavioral (e.g., “This technology is part of my daily routine”), cognitive (e.g., “This technology makes it easier for me to work on my goal”), and affective (e.g., “I enjoy using this technology”) engagement with a digital tool. Scores on this scale are associated with scores on the DBCI Engagement Scale and overall perceived behavior change, but not with self-reported frequency of technology use. Because self-report measures capture subjective engagement, using them alongside objective measures of engagement with the technology and the target behavior might be the most comprehensive approach.
Research on predictors of engagement with digital health tools may also be useful for increasing our understanding of engagement. Predictors of high engagement include motivation, self-efficacy, expectations, personal relevance of the technology, receipt of social support via the technology, access to human-delivered counseling, novelty, personalization, aesthetically pleasing design features, and credibility of the technology; predictors of poor engagement include stress, depression, greater symptom severity, and limited access to healthcare (Bijkerk et al., 2023; Perski et al., 2017). Additional work is needed to identify predictors of different facets of engagement (e.g., behavioral, cognitive, affective) and to test intervention strategies that facilitate engagement. For example, a recent meta-analysis of strategies to improve engagement in obesity interventions revealed that social support, shaping knowledge, repetition and substitution, natural consequences, and email or text messages improved engagement (Grady et al., 2023). However, engagement was typically defined narrowly (i.e., frequency of use of the technology), with only 54% of studies using subjective measures of engagement.
Nahum-Shani and Yoon (2024) propose that digital interventions be conceptualized as a collection of stimuli and tasks that may be digital or non-digital, and that engagement can be considered a process of evolving reactions to those stimuli and tasks. As an example of this evolution, high engagement may facilitate habit formation, and to the extent that this occurs, engagement with digital stimuli may eventually decline because the stimuli are no longer needed to cue the behavior. On the other hand, declining engagement with digital stimuli over time may signal habituation, intervention burden, or other barriers to engagement, and thus, poor outcomes. Conceptualizing engagement as a process may shed light on mechanisms by which engagement influences outcomes in both positive and negative ways.
Although the science of engagement is growing, many questions remain. For example, little is known about when and how often to measure engagement during an intervention. Engagement is dynamic such that it varies over the course of an intervention based on contextual factors, experience with the intervention, and need for further intervention (Bijkerk et al., 2023). Frequent assessment of engagement can be used to identify points during the intervention when users are most likely to disengage. Dynamic or adaptive interventions may then be useful in providing additional intervention to users before they are likely to disengage. Research is also needed to understand how distinct facets of engagement (e.g., behavioral, cognitive, affective) change over time and how different trajectories of engagement are related to behavior change and health outcomes. In addition, it is imperative to better understand which intervention components influence different types of engagement (Milne-Ives et al., 2023). The answers to these questions may differ based on the intervention type, target behaviors, and target population. As such, progress on the science of engagement will require researchers to use advanced research methods and comparable measures and metrics of engagement across studies.
Progress toward a focus on principles versus technologies
In 2019, we noted that technology evolution far outpaces research on these tools, and that tools developed by researchers rarely reach the commercial market. While these problems persist, we have made progress with respect to innovative approaches for efficient and scalable mobile app development. The first is “no-code” platforms, which use intuitive drag-and-drop formatting and pre-built module components; thus, researchers do not need extensive training or a programming background to build app prototypes (Liu et al., in press). Early studies using no-code platforms included the development of an app to reduce sedentary time (Bond et al., 2014). A recent scoping review of no-code tools used to design physical activity apps found 11 platforms to date (e.g., Avicenna, Expiwell, LifeData; Liu et al., in press). Of these, 8 were available with both iOS and Android versions and 7 had multilanguage support. Direct cost and time comparisons between no-code and traditional development methods are needed to determine the full potential of no-code platforms.
A second innovation is open application programming interfaces (APIs), which allow data access without custom programming. For example, the Substitutable Medical Apps and Reusable Technology Health IT project facilitates connection to electronic health record systems. Funded by the U.S. Office of the National Coordinator for Health Information Technology, this project has supported the development of over 100 apps (Smarthealth IT, 2024). Associated federal regulations now require that all electronic health record systems in the U.S. embed two APIs to provide standardized access to health data across systems, ensuring interoperability (Mandl et al., 2024). This broad national approach provides a powerful infrastructure for data accessibility and app development through APIs. In behavioral medicine, open APIs are less common, but are offered through select commercial health apps, helping to facilitate participant data access (see Fitbit, 2024; Fatsecret, 2024). For example, a researcher can develop an app to deliver a behavior change intervention in conjunction with providing a Fitbit, such that participant data collected by the Fitbit device (e.g., physical activity, sleep) are seamlessly integrated. Researchers and commercial health app developers should consider expanding the use of standardized systems and APIs that expedite app development and data exchange when building future technologies.
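As an example of this kind of integration, the sketch below pulls a participant's daily step counts from the Fitbit Web API using an OAuth 2.0 access token. The endpoint path and response fields reflect our reading of Fitbit's public API documentation and should be verified against the current reference before use.

```python
# Minimal sketch of retrieving daily step counts via the Fitbit Web API.
# Assumes a valid OAuth 2.0 access token obtained with the participant's consent;
# verify the endpoint and response format against Fitbit's current documentation.
import requests

def fetch_daily_steps(access_token: str, period: str = "7d") -> dict:
    url = f"https://api.fitbit.com/1/user/-/activities/steps/date/today/{period}.json"
    response = requests.get(url, headers={"Authorization": f"Bearer {access_token}"}, timeout=30)
    response.raise_for_status()
    # Expected shape: {"activities-steps": [{"dateTime": "2024-05-01", "value": "8421"}, ...]}
    return {day["dateTime"]: int(day["value"]) for day in response.json()["activities-steps"]}

# Example (requires a real token):
# print(fetch_daily_steps("YOUR_ACCESS_TOKEN"))
```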
A third innovation is the establishment of large research consortia that are unified around common goals of collecting similar outcomes data across studies. Similar to the use of large-scale APIs, this approach has not yet been applied fully to behavioral outcomes but stands as a potential model. An example is the Remote Assessment of Disease and Relapse—Central Nervous System (RADAR-CNS), developed in Europe to support remote monitoring of major depressive disorder, epilepsy, and multiple sclerosis using wearable devices and smartphone technology (Ranjan et al., 2019). The program was funded by the Innovative Medicines Initiative (a public–private partnership between the European Federation of Pharmaceutical Industries and Associations and the European Union; King’s College London, 2024). The platform supported the recruitment of 1,450 participants and collection of 62 terabytes of data, which were recently released under an open-source license to promote use by the broader community. Overall, innovative approaches to the development and efficient scale-up of mobile technologies are increasingly available but underutilized, given their potential.
New frontiers, challenges, and opportunities for digital health in behavioral medicine
Dissemination & implementation science and technology scalability
Despite the development of countless digital health technologies by researchers, few have made the leap from research to broad use at scale. The next era of digital health interventions should leverage commercialization opportunities, as well as dissemination and implementation (D&I) science, to ensure adoption in real-world settings. D&I science offers 2 key directions for digital health research. First, it provides frameworks by which to consider which existing technologies with proven efficacy should be used in real-world settings (e.g., healthcare, education, employer systems) and how to evaluate implementation in those settings, with consideration of the multilevel factors that make them more or less likely to be adopted (Shelton et al., 2020). Second, it challenges researchers to consider the context in which tools will be used and the resources available in those contexts, suggesting that future technologies should be designed for dissemination and real-world implementation from the start (see Table 2).
Table 2.
New frontiers, challenges, and opportunities for digital health in behavioral medicine
| New frontiers and challenges | Key points | Suggestions for future directions |
|---|---|---|
| D&I science and technology scalability | Most digital interventions are difficult to deliver at scale and are rarely commercialized, as there are barriers to integration with existing systems; D&I represent distinct disciplines focused on the translation of efficacious approaches to real-world settings | Emphasis on scalability, dissemination, and implementation concerns from the development stage; use of tools such as no-code platforms and APIs to promote interoperability and facilitate broader use |
| Remote data collection and telehealth | Collecting data and delivering treatment remotely increases accessibility, reduces transmission of infectious diseases, and shows similar treatment outcomes to in-person approaches | Additional work to establish best practices for remote data collection protocols and high-quality evidence for the efficacy of remote intervention; advocacy for the continued reimbursement of remote treatment delivery and parity with in-person services |
| AI | Tools such as machine learning and large language models have been leveraged to summarize large datasets, generate intervention content, and provide two-way patient education | Continued exploration of opportunities to leverage these technologies in behavioral medicine |
| Ethical considerations | Privacy, data security, and health equity continue to present challenges, in part due to shifting legal landscapes and identification of biases in digital systems | Increased attention to these issues, the limitations they present for behavioral medicine, and opportunities they present for behavioral medicine professionals to lead improvements; wider use of tools such as the Digital Health Checklist (ReCODE Health) |
| Training and collaboration | Despite the availability of new, complex digital technologies and advanced research methods to generate needed evidence, behavioral medicine professionals rarely have support for staying up to date in these areas | Emphasis on digital technologies, D&I and ethical concerns, and advanced research methods in behavioral medicine training programs; greater availability of continuing education resources and the protected time to use them; frequent and close collaboration between professionals with and without expertise in these areas |
Dissemination & Implementation (D&I); Artificial Intelligence (AI)
To date, few trials have focused on the implementation of digital health tools in real-world settings. A notable exception is the US Veterans Health Administration (VA), where myriad digital tools have been adopted and routinely implemented for a variety of health issues (e.g., smoking cessation, weight management; Blok et al., 2019; US Department of Veterans Affairs, 2024). The VA’s centralized electronic health record system and payment structures across this clinical context have greatly facilitated research and implementation (Jackson et al., 2011). However, despite the wide availability of health apps specifically for veterans, a recent survey found that uptake is low and the strongest predictor of veteran use of VA-created apps is provider encouragement, which results in nearly 3 times higher odds of use (Hogan et al., 2022). Thus, even in a large, established, and digitally integrated health system, health app uptake in routine practice is low. Additional research and support are needed to encourage providers to prescribe these tools to patients.
Beyond the VA, implementation of digital health tools in real-world settings has been sparse. One notable example is a study of digital referrals to web-assisted tobacco interventions in community-based primary care practices, which found that digital referrals produced referral rates similar to those of a paper system but threefold greater conversion to intervention registrations (Houston et al., 2015). Implementation facilitators included ease of using the system and perceived intervention efficacy (Houston et al., 2015). Similarly, the Home BP trial tested a digital intervention for hypertension management in primary care. Researchers first undertook a systematic intervention planning process to consider multilevel factors impacting potential implementation, including feedback from patients and health professionals. In a subsequent randomized trial, they tested their digital intervention versus usual care and collected implementation data (e.g., cost effectiveness) to inform clinical rollout, finding that the intervention led to better hypertension management than usual care with minimal incremental costs (McManus et al., 2021). Here too, more research is needed on barriers and facilitators to implementation of digital tools in routine practice.
Implementation trials that focus solely on testing strategies to implement digital tools in existing settings and structures are also needed. One such study is the ongoing DIGITS Trial, which tests strategies to integrate prescription digital therapeutics for substance use disorders (Glass et al., 2023). Using a factorial design, clinics are randomized to receive different combinations of implementation techniques to identify the optimal overall approach. Additional insights to facilitate implementation, particularly its sustainability in the clinical setting, will come from the extensive ongoing work with digital mental health treatments (Meyerhoff et al., 2023; Mohr et al., 2021). For example, an interdisciplinary international group of healthcare experts convened in 2019 to consider the barriers and facilitators to broad adoption of digital mental health tools (Mohr et al., 2021). They found that although there is consensus that these tools are effective and cost-effective, complications with reimbursement and the lack of an established way to evaluate the tools continue to prevent broader clinical implementation (Mohr et al., 2021).
Yet another limit to the implementation of digital behavioral medicine tools is that the quality of evidence thus far has not been strong enough to move many digital solutions to clinical application. For example, a recent review identified 721 studies that describe virtual reality technologies for mental health, yet weaknesses in study design have hindered progression toward clinical adoption (Wiebe et al., 2022). Specifically, few studies use rigorous and evidence-based processes at both the technology development and initial clinical testing phases, resulting in data that cannot support broad implementation (Selaskowski et al., 2024). A protocol-based dual publication model, which is similar to a registered report but specific to the development and evaluation of digital technologies for clinical application, has been proposed to improve methodological quality (Selaskowski et al., 2024). This is a promising approach, as it would encourage more detailed description of technology development methods and foster greater replicability of digital tools, while requiring robust clinical studies to provide the efficacy data needed to justify further use and testing.
Remote data collection and telehealth
The COVID-19 pandemic accelerated a shift to remote data collection and treatment delivery in an effort to reduce the transmission of infectious disease. Remote options are also more accessible than in-person approaches and may offer needed flexibility for hard-to-reach and underprivileged groups, as the burdens of transportation and childcare are minimized or removed altogether. Remote methods typically leverage Bluetooth or wireless connected devices, wearable devices, mobile apps, online platforms, and/or video teleconferencing software. Some of these are freely available (e.g., Zoom) and as noted, some are already in widespread use (e.g., Fitbit), and evidence to support the validity of remote methods is growing. Specifically, evidence shows that weight, waist circumference, and movement assessments can be conducted with cancer survivors (Hoenemeyer et al., 2022), older adults (Villar et al., 2024), and veterans (Ogawa et al., 2021) via Zoom video call, with high reliability and high concurrence with in-person methods. Similar trials are in progress to assess the validity of remote assessments of physical performance and mobility among older adult cancer survivors (Blair et al., 2020).
For those who do not already use these technologies, however, remote protocols may be expensive for researchers and clinics, and poor execution may result in suboptimal patient engagement. For example, Hoenemeyer et al. (2022) note that high shipping costs for the equipment necessary to conduct remote arm curl and grip strength tests (i.e., mailing dumbbells to participants’ homes) prevented the team from including these typical tests in their remote trial. Even when technology or equipment is available (e.g., Zoom), technical difficulties such as poor internet connectivity and environmental conditions such as poor lighting, incorrect camera angles, and distractions in the home can result in low engagement and lower-quality data, relative to in-person procedures. Participants may also perceive researchers to be less directly engaged in remote meetings than in person, as researchers often have to manage multiple tasks simultaneously (e.g., screenshare, recording responses; McClelland et al., 2024).
Studies conducted during the transition from in-person to remote treatment during the pandemic revealed additional challenges, such as declines in the use of behavioral strategies such as self-monitoring (Bernhart et al., 2022) and suboptimal acceptability of remote procedures (at least initially) in certain subgroups (e.g., older adults; Pisu et al., 2021; Ross et al., 2021), possibly due to low technology literacy. Recommendations to address such barriers include encouraging participants to have cameras on during video calls (to promote attention and engagement), training research staff to look at the camera rather than the screen (for more direct eye contact) and to limit distractions such as electronic notifications, and encouraging both participants and research staff to log into meetings early to troubleshoot any technical problems (McClelland et al., 2024). Researchers can also build in fallback options such as phone calls (if technical difficulties cannot be resolved) and offer breaks during longer sessions (McClelland et al., 2024), writing these options into protocols from the start (rather than treating them as deviations from the expected protocol).
Yet, recent evidence for treatment outcomes is highly encouraging: in 2 trials that pivoted from in-person to videoconference-delivered behavioral weight loss intervention, weight loss was comparable between groups that received hybrid treatment (in-person pre-pandemic, then remote during the pandemic) and groups that received remote-only treatment (Ross et al., 2022; Tchang et al., 2022). Similarly, studies show little (if any) difference between in-person and remote (telehealth) treatment for mental health outcomes (Bulkes et al., 2022; Lin et al., 2022), and attrition does not differ between modalities (Giovanetti et al., 2022). Thus, research increasingly demonstrates that fully remote treatment protocols can produce meaningful change in clinical outcomes with the potential for less burden on participants and patients.
Remotely delivered telehealth services were available before the COVID-19 pandemic, though adoption in routine clinical practice occurred mostly in rural and other settings where access was limited, and reimbursement policies varied greatly across states (Brotman & Kotloff, 2021). Reimbursement barriers were lifted during the pandemic to address the critical public health need. Federal programs such as Medicare have maintained reimbursement for telehealth services through 2024 (Department of Health & Human Services, 2024), but the future is uncertain: some insurance providers offer lower financial compensation for telehealth than for in-person visits (Aremu et al., 2022) and the sustainability of legislative and budgetary support for telehealth is unclear. The field of behavioral medicine should prioritize establishing the efficacy of remotely delivered interventions, given that such data are needed to inform reimbursement policy, and should continue to advocate for telehealth reimbursement.
Artificial intelligence (AI)
AI, or “technology that simulates human intelligence and problem-solving capabilities” (Stryker & Kavlakoglu, 2024), is increasingly used in daily life and healthcare (e.g., GPS, digital assistants, social media algorithms, ChatGPT) and is a relatively new frontier for behavioral medicine. AI has myriad applications in behavioral medicine and has the potential to improve measurement and prediction of behavior and clinical outcomes with far greater precision than traditional methods (Bucher et al., 2024). AI also has potential to help us design more personalized and effective interventions, which are urgently needed given the rapid evolution of data sources during the 21st century. Traditional data sources have included biological assays, surveys, and focus group/interviews, but in recent years, intensive longitudinal assessment methods, wearable devices, mobile applications, and online platforms have been used to collect high volumes of data (i.e., big data). AI is well-suited for handling big data and its use offers new ways to understand, predict, and intervene on health behavior. Machine learning, natural language processing, generative AI and large language models, and computer vision are four types of AI methods that have great potential to revolutionize behavioral medicine research.
Machine learning (ML) uses algorithms that learn from data to make predictions, identify patterns, and/or make decisions (Shalev-Shwartz & Ben-David, 2014). Applications in healthcare include predicting patient outcomes, risk for disease, and disease outbreaks; creating tailored treatment plans; and improving the efficiency of healthcare systems (Dixon et al., 2024). In behavioral medicine, ML has been used to predict intervention outcomes (Khalilnejad et al., 2024), diet lapses during weight loss treatment (Goldstein et al., 2018), smoking behavior (Yu et al., 2024), patient adherence (Masiero et al., 2024), and depressive symptoms (De la Barrera et al., 2024). ML has also been used to predict behavior in real time and to optimize and personalize behavioral interventions (Forman et al., 2019; Presseller et al., 2023; Rocha et al., 2023; Scodari et al., 2023). Natural language processing (NLP) uses algorithms to understand, process, and analyze human language (AlShehri et al., 2024; Vaniukov, 2024), and is leveraged in conversational agents and large language models. NLP has been used in medicine to analyze speech and text from a range of sources, including clinic notes, patient comments in an electronic health record, and the research literature, to aid in diagnosis, prevention, and patient engagement (AlShehri et al., 2024; Petti et al., 2020). Behavioral medicine researchers use NLP to study qualitative and/or social media data (Cha & Lee, 2024; Lau et al., 2024; Patra et al., 2023), assist with dietary self-monitoring via voice or text entry (Chikwetu et al., 2023), and identify patient-reported outcomes via clinical notes in electronic health records (Ebrahimi et al., 2024; Sim et al., 2023).
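To illustrate the basic workflow of such prediction models, the sketch below trains a logistic regression classifier on simulated momentary data to predict dietary lapses; the features, effect sizes, and model choice are assumptions for demonstration and do not replicate any cited study.

```python
# Simulated example of predicting dietary lapses from momentary features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(5, 2, n),     # hours since last meal
    rng.normal(3, 1, n),     # momentary stress rating (assumed 1-5 scale)
    rng.integers(0, 2, n),   # currently at a social event (0/1)
])
logits = -3 + 0.3 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logits))        # simulated lapse outcomes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Held-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))
```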
Generative AI (genAI) is the use of AI to generate content of all types (e.g., text, images). Large language models (LLMs) are a form of genAI that produces text (Yu et al., 2023). LLMs perform tasks in response to text queries and generate human-like responses via a computer program that has been trained on vast amounts of data to interpret human language. They can generate content, engage in conversations, and answer questions, and they are used in conversational agents, text analysis, code generation, language translation, and text summarization (Thirunavukarasu et al., 2023). Conversational agents, a popular application of these technologies, use ML and NLP to engage in conversations with humans (Laranjo et al., 2018). Chatbots are one type of conversational agent in which a software program automates specific conversational tasks, like answering a finite set of questions or providing information or assistance to a user (Tudor Car et al., 2020). In behavioral medicine, chatbots have been used for weight management, vaccine communication, smoking cessation, and chronic disease management (Aggarwal et al., 2023; Bak & Chin, 2024; Noh et al., 2023; Passanante et al., 2023). Conversational agents can be used to counsel and educate patients about these and other topics, which may reduce the intensity of behavioral interventions that rely on human delivery.
ChatGPT, often used as a conversational agent, is perhaps the most notable LLM, as it exploded in popularity after its release by OpenAI in late 2022 (OpenAI, 2023). In medicine, ChatGPT and other LLMs assist with clinical notes and summaries, answer medical exam or patient questions, provide patient education, augment medical training (Omiye et al., 2024), conduct cancer screening and genetic counseling, assess symptoms, and support caregivers (Jiang et al., 2024). In behavioral medicine, researchers have examined the accuracy of LLMs in providing patient education (Kozaily et al., 2024), debunking health misinformation, developing exercise programs, recommending evidence-based treatments, identifying motivational states, and even conducting systematic reviews (Amin et al., 2023). Behavioral medicine researchers are also using LLMs to create intervention content (Willms et al., in press).
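For example, a researcher might use an LLM API to draft message content for later review by an interventionist. The sketch below uses the OpenAI Python SDK (v1-style client); the model name and prompt wording are assumptions, and any generated content would need expert review before use with participants.

```python
# Sketch of drafting intervention message content with an LLM for human review.
# The model name is an assumption; replace with a currently available model.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def draft_message(goal: str, barrier: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a health coach assistant. Draft one brief, supportive "
                        "text message (under 50 words) grounded in behavior change techniques."},
            {"role": "user",
             "content": f"Participant goal: {goal}. Reported barrier: {barrier}."},
        ],
    )
    return response.choices[0].message.content

# Example (requires an API key); drafts should always be reviewed by an interventionist:
# print(draft_message("walk 8,000 steps per day", "long work hours"))
```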
Computer vision may also have myriad uses in behavioral medicine, as it uses videos and/or images to train a computer program to recognize and interpret visual content. These data are gathered via sensors and/or cameras and ML algorithms perform object detection and image classification. For example, computer vision has been used to diagnose skin cancer by detecting characteristics of skin lesions not visible to the naked eye (Akilandasowmya et al., 2024), to classify different types of back pain based on movements (Hartley et al., 2024), to detect pain and acute patient deterioration via changes in facial expression, to monitor mobility in intensive care patients, to detect falls in the elderly, and to assist in the diagnosis of autism via head motion and facial expression data (Lindroth et al., 2024). In behavioral medicine, computer vision has been used to identify foods based on pictures taken by users and use this information to tailor intervention messages (Chew et al., 2024). Together, AI tools have enormous potential to increase the efficiency, accuracy, and reach of behavioral medicine interventions, making this an exciting new area for growth.
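As a simplified stand-in for the food-recognition use case described above, the sketch below classifies a meal photo with a pretrained ImageNet model from torchvision; a production system would use a model trained on food-specific images, and the file path shown is hypothetical.

```python
# Minimal image-classification sketch with a pretrained torchvision model; this is a
# generic ImageNet classifier, not a purpose-built food-recognition system.
import torch
from torchvision import models
from torchvision.io import read_image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def classify(image_path: str, top_k: int = 3):
    img = preprocess(read_image(image_path)).unsqueeze(0)   # load and normalize the photo
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], round(p.item(), 3))
            for p, i in zip(top.values, top.indices)]

# Example (hypothetical file): print(classify("meal_photo.jpg"))
```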
Although AI has extraordinary potential to improve individual and public health, it also has extraordinary potential to negatively impact health. For example, AI can be used to produce deepfakes and large volumes of disinformation, in the form of authentic-appearing online articles with fictional medical references, social media posts and comments, and patient and physician testimonials (Menz et al., 2024). The implications of the ability to rapidly develop and disseminate disinformation are likely to be far-reaching, and evidence suggests that AI-powered disinformation campaigns are already infiltrating low- and middle-income nations (Hotez, 2024). In the past few years, the World Health Organization (WHO) has issued statements urging the judicious use of AI for health (World Health Organization, 2023) and laid out guidance on the ethical use of AI for health (World Health Organization, 2021). However, the WHO’s own health chatbot, S.A.R.A.H. (Smart AI Resource Assistant for Health; World Health Organization, 2024), has come under fire for providing outdated information (Nix, 2024) and includes a disclaimer stating that “answers may not always be accurate” (World Health Organization, 2024). The WHO is calling for researchers to help identify ways its chatbot can be used to disseminate accurate health information (World Health Organization, 2024). Given the chatbot’s focus on healthy lifestyle topics such as quitting smoking, physical activity, healthy diet, and stress reduction, behavioral medicine researchers are well-positioned to lead this charge. Generally, much research is needed on both the benefits and dangers of genAI to public health.
Ethical considerations
In 2019 we called attention to issues of privacy and data security, to promote the responsible use of digital health tools in behavioral medicine research and practice. Groups such as ReCODE Health (2024) provide a wealth of resources on this topic, including the Digital Health Checklist (Nebeker et al., 2021), which provides guidance to researchers and ethics committees with respect to the use of digital tools in behavioral research. Such resources are invaluable and support for their ongoing revision is essential as the digital health landscape continues to evolve. An important example comes from the 2022 US Supreme Court decision Dobbs v. Jackson Women’s Health Organization, which overturned Roe v. Wade. This decision had immediate implications for digital health. With states moving to outlaw abortion and related reproductive healthcare, data from self-monitoring tools such as Fitbit and menstrual cycle tracking apps could be subpoenaed by law enforcement to aid in the prosecution of those who perform and/or receive restricted services (Kim, 2022). In the wake of announcements that companies would make such data available, many users of these tools reported concerns about their use in research; some indicated that they would decline to participate in such research if use of these tools were required, particularly users who identified with minoritized groups (Salvatore et al., 2024). Legislation restricting reproductive healthcare presents new ethical issues in women’s health research, and protecting the privacy and security of digital health data in behavioral medicine research is paramount.
The use of AI also comes with ethical considerations related to bias, privacy, and informed consent. Algorithmic bias can occur when data used to train AI are biased, when the AI is used in a different context or population than the one for which it was originally designed, or when the AI’s results are interpreted in a biased way (National Library of Medicine, 2024). Such biases may widen health inequities rather than help to close them. The Agency for Healthcare Research and Quality and the National Institute for Minority Health and Health Disparities recommend the following principles to reduce bias in AI: (1) promote health equity during all phases of the algorithm life cycle, (2) ensure algorithms and their use are transparent and explainable, (3) engage patients and communities during all phases, (4) identify algorithm fairness issues, and (5) establish accountability for equity and fairness in outcomes emanating from algorithms (Chin et al., 2023).
Privacy is another ethical consideration in the use of AI, given the risk that protected health information ends up being used to train algorithms. AI uses must be HIPAA-compliant, and privacy protections must be built in and resistant to data breaches. In 2023, the American Psychiatric Association issued an advisory to clinicians and researchers against the use of patient information in AI systems (American Psychiatric Association, 2023). Researchers must be transparent about the data being used in AI systems and their potential biases, and they must be mindful about informed consent, which can only be obtained when researchers can explain the technology used, how patient data will be used, and the potential limitations and biases of the technology (Diaz-Asper et al., 2024). This is also relevant in clinical settings when AI is used for diagnosis and/or treatment decision making (Park, 2024). Because AI evolves faster than ethical best practices can be established, research is needed on potential harms and ethical issues emanating from the use of AI in behavioral medicine research and practice.
Training and collaboration
To continue to advance behavioral medicine via the use of digital tools, and to ensure high ethical standards and dissemination, 2 efforts are critical: ensuring that training curricula stay up to date and fostering expert consultation and collaboration. For training programs, emerging digital technologies and research methods that can address complex research questions must be included in standard curricula. This will not only ensure that trainees use these tools and designs in their work, but also that they will be equipped to train future generations. For example, behavioral medicine training programs should include emphasis on advanced research methods and AI, including specific skill sets such as machine learning and prompt engineering (i.e., crafting generative AI prompts that produce high-quality output; Amazon Web Services, 2024). Continuing education for researchers who have not received such training is also needed, and professionals need protected time and resources to take advantage of such training opportunities. For researchers interested in developing digital health tools, we strongly recommend seeking out training in commercialization and entrepreneurship, to learn how to bring a product to market and thereby maximize the resources invested in development. Efforts are also needed to educate researchers about alternative development pathways such as no-code platforms. Finally, the need for transdisciplinary teams has never been greater; teams benefit from representing and integrating expertise from behavioral science, computer science, human–computer interaction, and technology ethics.
Conclusion
Digital health tools present exciting opportunities to revolutionize how we conceptualize, study, and intervene on health behavior. The field of behavioral medicine has made impressive advances since 2019, clearly capitalizing on these opportunities. As digital technologies continue to evolve, the field needs to keep pace so we can continue to offer our unique expertise in the broad landscape of healthcare and public health. This requires specific attention to research methods, engagement, remote protocols and services, D&I efforts, and ethics, as well as to training and collaboration. The next 5 years are likely to involve increasing use of advanced research designs, digital health tools, and AI in the field of behavioral medicine. This increase has the potential to accelerate our progress toward the development and testing of more effective, engaging, and personalized interventions that have high potential for dissemination and implementation.
Note: Methods such as the Design Sprint process, a 5-day exercise based on agile and user-centered design principles, can accelerate development and implementation regardless of the technology selected. We recommend the following resources for more information about this process:
Knapp, J., Zeratsky, J., & Kowitz, B. (2016). Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days. Simon and Schuster.
Jake-Schoffman, D.E., & McVay, M.A. (2021). Using the Design Sprint process to enhance and accelerate behavioral medicine progress: a case study and guidance, Translational Behavioral Medicine, 11(5), 1099–1106, https://doi.org/10.1093/tbm/ibaa100
Funding
Open access funding provided by Rowan University. Support for the authors’ time during the preparation of this manuscript was provided by the National Institutes of Health: grant numbers DP2HL173857 and K23HL136657 (PI: Danielle Arigo), and K24HL124366 (PI: Sherry L. Pagoto).
Declarations
Conflict of interest
Danielle Arigo and Danielle E. Jake-Schoffman declare that they have no conflicts of interest. Sherry L. Pagoto serves as scientific adviser to Fitbit.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Human and Animal Rights and Informed Consent
Not applicable.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Aggarwal, A., Tam, C. C., Wu, D., Li, X., & Qiao, S. (2023). Artificial intelligence–based chatbots for promoting health behavioral changes: Systematic review. Journal of Medical Internet Research,25, e40789. 10.2196/40789 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ahern, A. L., Wheeler, G. M., Aveyard, P., Boyland, E. J., Halford, J. C. G., Mander, A. P., Woolston, J., Thomson, A. M., et al. (2017). Extended and standard duration weight-loss programme referrals for adults in primary care (WRAP): A randomised controlled trial. The Lancet,389(10085), 2214–2225. 10.1016/S0140-6736(17)30647-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Akilandasowmya, G., Nirmaladevi, G., Suganthi, Su., & Aishwariya, A. (2024). Skin cancer diagnosis: Leveraging deep hidden features and ensemble classifiers for early detection and classification. Biomedical Signal Processing and Control,88, 105306. 10.1016/j.bspc.2023.105306 [Google Scholar]
- AlShehri, Y., Sidhu, A., Lakshmanan, L. V. S., & Lefaivre, K. A. (2024). Applications of natural language processing for automated clinical data analysis in orthopaedics. Journal of the American Academy of Orthopaedic Surgeons,32(10), 439–446. 10.5435/JAAOS-D-23-00839 [DOI] [PubMed] [Google Scholar]
- Amazon Web Services. (2024). What is prompt engineering? - AI prompt engineering explained - AWS. Amazon Web Services, Inc. https://aws.amazon.com/what-is/prompt-engineering/
- American Psychiatric Association. (2023). The basics of augmented intelligence: Some factors psychiatrists need to know now. https://www.psychiatry.org/news-room/apa-blogs/the-basics-of-augmented-intelligence
- Amin, S., Kawamoto, C. T., & Pokhrel, P. (2023). Exploring the ChatGPT platform with scenario-specific prompts for vaping cessation. Tobacco Control. 10.1136/tc-2023-058009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aremu, T. O., Oluwole, O. E., Adeyinka, K. O., & Schommer, J. C. (2022). Medication adherence and compliance: Recipe for improving patient outcomes. Pharmacy,10(5), 106. 10.3390/pharmacy10050106 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Arigo, D., Bercovitz, I., Lapitan, E., & Gular, S. (2024). Social comparison and mental health. Current Treatment Options in Psychiatry,11(2), 17–33. 10.1007/s40501-024-00313-0 [Google Scholar]
- Arigo, D., Brown, M. M., Pasko, K., & Suls, J. (2020). Social comparison features in physical activity promotion apps: Scoping meta-review. Journal of Medical Internet Research,22(3), e15642. 10.2196/15642 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Arigo, D., Jake-Schoffman, D. E., Wolin, K., Beckjord, E., Hekler, E. B., & Pagoto, S. L. (2019). The history and future of digital health in the field of behavioral medicine. Journal of Behavioral Medicine,42(1), 67–83. 10.1007/s10865-018-9966-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bak, M., & Chin, J. (2024). The potential and limitations of large language models in identification of the states of motivations for facilitating health behavior change. Journal of the American Medical Informatics Association,31(9), 2047–2053. 10.1093/jamia/ocae057 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bashir, U. (2024, March 13). Health app users in selected countries worldwide 2024. Statista. https://www.statista.com/forecasts/1452648/share-of-health-app-users-in-selected-countries-worldwide
- Bernhart, J. A., Fellers, A. W., Turner-McGrievy, G., Wilson, M. J., & Hutto, B. (2022). Socially distanced data collection: Lessons learned using electronic Bluetooth scales to assess weight. Health Education & Behavior,49(5), 765–769. 10.1177/10901981221104723 [DOI] [PubMed] [Google Scholar]
- Bijkerk, L. E., Oenema, A., Geschwind, N., & Spigt, M. (2023). Measuring engagement with mental health and behavior change interventions: An integrative review of methods and instruments. International Journal of Behavioral Medicine,30(2), 155–166. 10.1007/s12529-022-10086-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Blair, C. K., Harding, E., Herman, C., Boyce, T., Demark-Wahnefried, W., Davis, S., Kinney, A. Y., & Pankratz, V. S. (2020). Remote assessment of functional mobility and strength in older cancer survivors: Protocol for a validity and reliability study. JMIR Research Protocols,9(9), e20834. 10.2196/20834 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Blok, A. C., Sadasivam, R. S., Hogan, T. P., Patterson, A., Day, N., & Houston, T. K. (2019). Nurse-driven mHealth implementation using the technology inpatient program for smokers (TIPS): Mixed methods study. JMIR mHealth and uHealth,7(10), e14331. 10.2196/14331 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bond, D. S., Thomas, J. G., Raynor, H. A., Moon, J., Sieling, J., Trautvetter, J., Leblond, T., & Wing, R. R. (2014). B-MOBILE - A smartphone-based intervention to reduce sedentary time in overweight/obese individuals: A within-subjects experimental trial. PLoS ONE,9(6), e100821. 10.1371/journal.pone.0100821 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brotman, J. J., & Kotloff, R. M. (2021). Providing outpatient telehealth services in the United States. Chest,159(4), 1548–1558. 10.1016/j.chest.2020.11.020 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bucher, A., Blazek, E. S., & Symons, C. T. (2024). How are machine learning and artificial intelligence used in digital behavior change interventions? A scoping review. Mayo Clinic Proceedings: Digital Health,2(3), 375–404. 10.1016/j.mcpdig.2024.05.007 [Google Scholar]
- Bulkes, N. Z., Davis, K., Kay, B., & Riemann, B. C. (2022). Comparing efficacy of telehealth to in-person mental health care in intensive-treatment-seeking adults. Journal of Psychiatric Research,145, 347–352. 10.1016/j.jpsychires.2021.11.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cha, E., & Lee, S. (2024). Identifying main themes in diabetes management interviews using natural language processing–based text mining. CIN: Computers, Informatics, Nursing. 10.1097/CIN.0000000000001114 [DOI] [PubMed] [Google Scholar]
- Chew, H. S. J., Chew, N. W., Loong, S. S. E., Lim, S. L., Tam, W. S. W., Chin, Y. H., Chao, A. M., Dimitriadis, G. K., Gao, Y., So, J. B. Y., Shabbir, A., & Ngiam, K. Y. (2024). Correction: Effectiveness of an artificial intelligence-assisted app for improving eating behaviors: Mixed methods evaluation. Journal of Medical Internet Research,26, e62767. 10.2196/62767 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chikwetu, L., Daily, S., Mortazavi, B. J., & Dunn, J. (2023). Automated diet capture using voice alerts and speech recognition on smartphones: Pilot usability and acceptability study. JMIR Formative Research,7, e46659. 10.2196/46659 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chin, M. H., Afsar-Manesh, N., Bierman, A. S., Chang, C., Colón-Rodríguez, C. J., Dullabh, P., Duran, D. G., Fair, M., et al. (2023). Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Network Open,6(12), e2345050. 10.1001/jamanetworkopen.2023.45050 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cohn, E. R., Qian, T., & Murphy, S. A. (2023). Sample size considerations for micro-randomized trials with binary proximal outcomes. Statistics in Medicine,42(16), 2777–2796. 10.1002/sim.9748 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cole-Lewis, H., Ezeanochie, N., & Turgiss, J. (2019). Understanding health behavior technology engagement: Pathway to measuring digital behavior change interventions. JMIR Formative Research,3(4), e14052. 10.2196/14052 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cooper, K. B., Lapierre, S., Carrera Seoane, M., Lindstrom, K., Pritschmann, R., Donahue, M., Christou, D. D., McVay, M. A., & Jake-Schoffman, D. E. (2023). Behavior change techniques in digital physical activity interventions for breast cancer survivors: A systematic review. Translational Behavioral Medicine,13(4), 268–280. 10.1093/tbm/ibac111 [DOI] [PubMed] [Google Scholar]
- De La Barrera, U., Arrigoni, F., Monserrat, C., Montoya-Castilla, I., & Gil-Gómez, J.-A. (2024). Using ecological momentary assessment and machine learning techniques to predict depressive symptoms in emerging adults. Psychiatry Research,332, 115710. 10.1016/j.psychres.2023.115710 [DOI] [PubMed] [Google Scholar]
- Department of Health and Human Services. (2024). Medicare payment policies. https://telehealth.hhs.gov/providers/billing-and-reimbursement/medicare-payment-policies
- Dhingra, L. S., Aminorroaya, A., Oikonomou, E. K., Nargesi, A. A., Wilson, F. P., Krumholz, H. M., & Khera, R. (2023). Use of wearable devices in individuals with or at risk for cardiovascular disease in the US, 2019 to 2020. JAMA Network Open,6(6), e2316634. 10.1001/jamanetworkopen.2023.16634 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Diaz-Asper, C., Hauglid, M. K., Chandler, C., Cohen, A. S., Foltz, P. W., & Elvevåg, B. (2024). A framework for language technologies in behavioral research and clinical applications: Ethical challenges, implications, and solutions. American Psychologist,79(1), 79–91. 10.1037/amp0001195 [DOI] [PubMed] [Google Scholar]
- Dixon, D., Sattar, H., Moros, N., Kesireddy, S. R., Ahsan, H., Lakkimsetti, M., Fatima, M., Doshi, D., Sadhu, K., & Junaid Hassan, M. (2024). Unveiling the influence of AI predictive analytics on patient outcomes: A comprehensive narrative review. Cureus. 10.7759/cureus.59954 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Donkin, L., Christensen, H., Naismith, S. L., Neal, B., Hickie, I. B., & Glozier, N. (2011). A systematic review of the impact of adherence on the effectiveness of e-therapies. Journal of Medical Internet Research,13(3), e52. 10.2196/jmir.1772 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ebrahimi, A., Henriksen, M. B. H., Brasen, C. L., Hilberg, O., Hansen, T. F., Jensen, L. H., Peimankar, A., & Wiil, U. K. (2024). Identification of patients’ smoking status using an explainable AI approach: A Danish electronic health records case study. BMC Medical Research Methodology,24(1), 114. 10.1186/s12874-024-02231-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Erten Uyumaz, B., Feijs, L., & Hu, J. (2021). A review of digital cognitive behavioral therapy for insomnia (CBT-I Apps): Are they designed for engagement? International Journal of Environmental Research and Public Health,18(6), 2929. 10.3390/ijerph18062929 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fatsecret. (2024). Fatsecret API. https://platform.fatsecret.com/platform-api
- Fitbit. (2024). Fitbit developer. https://www.fitbit.com/dev
- Forman, E. M., Goldstein, S. P., Crochiere, R. J., Butryn, M. L., Juarascio, A. S., Zhang, F., & Foster, G. D. (2019). Randomized controlled trial of OnTrack, a just-in-time adaptive intervention designed to enhance weight loss. Translational Behavioral Medicine,9(6), 989–1001. 10.1093/tbm/ibz137 [DOI] [PubMed] [Google Scholar]
- Giovanetti, A. K., Punt, S. E. W., Nelson, E.-L., & Ilardi, S. S. (2022). Teletherapy versus in-person psychotherapy for depression: A meta-analysis of randomized controlled trials. Telemedicine and E-Health,28(8), 1077–1089. 10.1089/tmj.2021.0294 [DOI] [PubMed] [Google Scholar]
- Glass, J. E., Dorsey, C. N., Beatty, T., Bobb, J. F., Wong, E. S., Palazzo, L., King, D., Mogk, J., et al. (2023). Study protocol for a factorial-randomized controlled trial evaluating the implementation, costs, effectiveness, and sustainment of digital therapeutics for substance use disorder in primary care (DIGITS Trial). Implementation Science,18(1), 3. 10.1186/s13012-022-01258-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Goldstein, S. P., Zhang, F., Thomas, J. G., Butryn, M. L., Herbert, J. D., & Forman, E. M. (2018). Application of machine learning to predict dietary lapses during weight loss. Journal of Diabetes Science and Technology,12(5), 1045–1052. 10.1177/1932296818775757 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grady, A., Pearson, N., Lamont, H., Leigh, L., Wolfenden, L., Barnes, C., Wyse, R., Finch, M., et al. (2023). The effectiveness of strategies to improve user engagement with digital health interventions targeting nutrition, physical activity, and overweight and obesity: Systematic review and meta-analysis. Journal of Medical Internet Research,25(1), e47987. 10.2196/47987 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Guastaferro, K., & Collins, L. M. (2021). Optimization methods and implementation science: An opportunity for behavioral and biobehavioral interventions. Implementation Research and Practice,2, 263348952110543. 10.1177/26334895211054363 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hartley, T., Hicks, Y., Davies, J. L., Cazzola, D., & Sheeran, L. (2024). BACK-to-MOVE: Machine learning and computer vision model automating clinical classification of non-specific low back pain for personalised management. PLoS ONE,19(5), e0302899. 10.1371/journal.pone.0302899 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hayman, M., Alfrey, K.-L., Cannon, S., Alley, S., Rebar, A. L., Williams, S., Short, C. E., Altazan, A., et al. (2021). Quality, features, and presence of behavior change techniques in mobile apps designed to improve physical activity in pregnant women: Systematic search and content analysis. JMIR mHealth and uHealth,9(4), e23649. 10.2196/23649 [DOI] [PMC free article] [PubMed] [Google Scholar]
- ReCODE Health. (2024). ReCODE Health. https://recode.health/
- Hoenemeyer, T. W., Cole, W. W., Oster, R. A., Pekmezi, D. W., Pye, A., & Demark-Wahnefried, W. (2022). Test/Retest reliability and validity of remote vs in person anthropometric and physical performance assessments in cancer survivors and supportive partners. Cancers,14(4), 1075. 10.3390/cancers14041075 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hoerster, K. D., Collins, M. P., Au, D. H., Lane, A., Epler, E., McDowell, J., Barón, A. E., Rise, P., et al. (2020). Testing a self-directed lifestyle intervention among veterans: The D-ELITE pragmatic clinical trial. Contemporary Clinical Trials,95, 106045. 10.1016/j.cct.2020.106045 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hogan, T. P., Etingen, B., Lipschitz, J. M., Shimada, S. L., McMahon, N., Bolivar, D., Bixler, F. R., Irvin, D., Wacks, R., Cutrona, S., Frisbee, K. L., & Smith, B. M. (2022). Factors associated with self-reported use of web and mobile health apps among US military veterans: Cross-sectional survey. JMIR mHealth and uHealth,10(12), e41767. 10.2196/41767 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hotez, P. J. (2024). Health disinformation-gaining strength, becoming infinite. JAMA Internal Medicine,184(1), 96–97. 10.1001/jamainternmed.2023.5946 [DOI] [PubMed] [Google Scholar]
- Houston, T. K., Sadasivam, R. S., Allison, J. J., Ash, A. S., Ray, M. N., English, T. M., Hogan, T. P., & Ford, D. E. (2015). Evaluating the QUIT-PRIMO clinical practice ePortal to increase smoker engagement with online cessation interventions: A national hybrid type 2 implementation study. Implementation Science,10(1), 154. 10.1186/s13012-015-0336-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Huberty, J. L., Espel-Huynh, H. M., Neher, T. L., & Puzia, M. E. (2022). Testing the pragmatic effectiveness of a consumer-based mindfulness mobile app in the workplace: Randomized controlled trial. JMIR mHealth and uHealth,10(9), e38903. 10.2196/38903 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jackson, G. L., Krein, S. L., Alverson, D. C., Darkins, A. W., Gunnar, W., Harada, N. D., Helfrich, C. D., Houston, T. K., et al. (2011). Defining core issues in utilizing information technology to improve access: Evaluation and research agenda. Journal of General Internal Medicine,26(S2), 623–627. 10.1007/s11606-011-1789-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jiang, Z., Huang, X., Wang, Z., Liu, Y., Huang, L., & Luo, X. (2024). Embodied conversational agents for chronic diseases: Scoping review. Journal of Medical Internet Research,26, e47134. 10.2196/47134 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Johnston, C. A., Rost, S., Miller-Kovach, K., Moreno, J. P., & Foreyt, J. P. (2013). A randomized controlled trial of a community-based behavioral counseling program. The American Journal of Medicine,126(12), 1143-e19. 10.1016/j.amjmed.2013.04.025 [DOI] [PubMed] [Google Scholar]
- Kalinin, K. (2024). Healthcare app development in 2024: The ultimate guide. Topflight. https://topflightapps.com/ideas/5-steps-to-build-a-healthcare-app/
- Kelders, S. M., Kip, H., & Greeff, J. (2020a). Psychometric evaluation of the Twente Engagement with eHealth Technologies Scale (TWEETS): Evaluation study. Journal of Medical Internet Research,22(10), e17757. 10.2196/17757 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kelders, S. M., van Zyl, L. E., & Ludden, G. D. S. (2020b). The concept and components of engagement in different domains applied to eHealth: A systematic scoping review. Frontiers in Psychology. 10.3389/fpsyg.2020.00926 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khalilnejad, A., Sun, R.-T., Kompala, T., Painter, S., James, R., & Wang, Y. (2024). Proactive identification of patients with diabetes at risk of uncontrolled outcomes during a diabetes management program: Conceptualization and development study using machine learning. JMIR Formative Research,8, e54373. 10.2196/54373 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kidwell, K. M. (2015). Chapter 2: DTRs and SMARTs: Definitions, designs, and applications. In Adaptive Treatment Strategies in Practice (Vol. 1–0, pp. 7–23). Society for Industrial and Applied Mathematics. 10.1137/1.9781611974188.ch2
- Kidwell, K. M., & Almirall, D. (2023). Sequential, multiple assignment, randomized trial designs. JAMA,329(4), 336. 10.1001/jama.2022.24324 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim, J. (2022, July 2). Data privacy concerns make the post-Roe era uncharted territory. NPR. https://www.npr.org/2022/07/02/1109565803/data-privacy-abortion-roe-apps
- King’s College London. (2024). RADAR-CNS (Remote Assessment of Disease and Relapse – Central Nervous System). King’s College London. https://www.kcl.ac.uk/research/radarcns
- Kozaily, E., Geagea, M., Akdogan, E. R., Atkins, J., Elshazly, M. B., Guglin, M., Tedford, R. J., & Wehbe, R. M. (2024). Accuracy and consistency of online large language model-based artificial intelligence chat platforms in answering patients’ questions about heart failure. International Journal of Cardiology,408, 132115. 10.1016/j.ijcard.2024.132115 [DOI] [PubMed] [Google Scholar]
- Laranjo, L., Dunn, A. G., Tong, H. L., Kocaballi, A. B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi, F., Lau, A. Y. S., & Coiera, E. (2018). Conversational agents in healthcare: A systematic review. Journal of the American Medical Informatics Association,25(9), 1248–1258. 10.1093/jamia/ocy072 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lau, N., Zhao, X., O’Daffer, A., Weissman, H., & Barton, K. (2024). Pediatric cancer communication on Twitter: Natural language processing and qualitative content analysis. JMIR Cancer,10, e52061. 10.2196/52061 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lehmann, M., Jones, L., & Schirmann, F. (2024). App engagement as a predictor of weight loss in blended-care interventions: Retrospective observational study using large-scale real-world data. Journal of Medical Internet Research,26, e45469. 10.2196/45469 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lin, T., Heckman, T. G., & Anderson, T. (2022). The efficacy of synchronous teletherapy versus in-person therapy: A meta-analysis of randomized clinical trials. Clinical Psychology: Science and Practice,29(2), 167–178. 10.1037/cps0000056 [Google Scholar]
- Lindroth, H., Nalaie, K., Raghu, R., Ayala, I. N., Busch, C., Bhattacharyya, A., Moreno Franco, P., Diedrich, D. A., Pickering, B. W., & Herasevich, V. (2024). Applied artificial intelligence in healthcare: A review of computer vision technology application in hospital settings. Journal of Imaging,10(4), 81. 10.3390/jimaging10040081 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Liu, S., Wilms, A., Rush, J., Hofer, S., & Rhodes, R. E. (in press). Advancing physical activity research methods using real-time and adaptive technology: A scoping review of “No-Code” mobile health app research tools. Sport, Exercise, and Performance Psychology.
- Mandl, K. D., Gottlieb, D., & Mandel, J. C. (2024). Integration of AI in healthcare requires an interoperable digital data ecosystem. Nature Medicine,30(3), 631–634. 10.1038/s41591-023-02783-w [DOI] [PubMed] [Google Scholar]
- Markets and Markets. (2023, November). Digital Health Market by Revenue Model (Subscription, Pay per service, Free apps), Technology (Wearables, mHealth, Telehealthcare, RPM, LTC monitoring, Population Health management, DTx), EHR, Healthcare Analytics, ePrescribing & Region—Global Forecast to 2028. Markets and Markets. https://www.marketsandmarkets.com/Market-Reports/digital-health-market-45458752.html
- Masiero, M., Spada, G. E., Sanchini, V., Munzone, E., Pietrobon, R., Teixeira, L., Valencia, M., Machiavelli, A., Fragale, E., Pezzolato, M., & Pravettoni, G. (2024). Correction: A machine learning model to predict patients’ adherence behavior and a decision support system for patients with metastatic breast cancer: Protocol for a randomized controlled trial. JMIR Research Protocols,13, e55928. 10.2196/55928 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McAleese, D., Linardakis, M., & Papadaki, A. (2022). Quality and presence of behaviour change techniques in mobile apps for the Mediterranean diet: A content analysis of Android Google Play and Apple App Store apps. Nutrients,14(6), 1290. 10.3390/nu14061290 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McClelland, B., Ponting, C., Levy, C., Mah, R., Moran, P., Sobhani, N. C., & Felder, J. (2024). Viewpoint: Challenges and strategies for engaging participants in videoconferencing appointments. Contemporary Clinical Trials,137, 107425. 10.1016/j.cct.2023.107425 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McManus, R. J., Little, P., Stuart, B., Morton, K., Raftery, J., Kelly, J., Bradbury, K., Zhang, J., et al. (2021). Home and online management and evaluation of blood pressure (HOME BP) using a digital intervention in poorly controlled hypertension: Randomised controlled trial. BMJ. 10.1136/bmj.m4858 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Menz, B. D., Modi, N. D., Sorich, M. J., & Hopkins, A. M. (2024). Health disinformation use case highlighting the urgent need for artificial intelligence vigilance: Weapons of mass disinformation. JAMA Internal Medicine,184(1), 92–96. 10.1001/jamainternmed.2023.5947 [DOI] [PubMed] [Google Scholar]
- Meyerhoff, J., Kornfield, R., Lattie, E. G., Knapp, A. A., Kruzan, K. P., Jacobs, M., Stamatis, C. A., Taple, B. J., et al. (2023). From formative design to service-ready therapeutic: A pragmatic approach to designing digital mental health interventions across domains. Internet Interventions,34, 100677. 10.1016/j.invent.2023.100677 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Miller, N. A., Ehmann, M. M., Hagerman, C. J., Forman, E. M., Arigo, D., Spring, B., LaFata, E. M., Zhang, F., Milliron, B.-J., & Butryn, M. L. (2023). Sharing digital self-monitoring data with others to enhance long-term weight loss: A randomized controlled trial. Contemporary Clinical Trials,129, 107201. 10.1016/j.cct.2023.107201 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Milne-Ives, M., Homer, S. R., Andrade, J., & Meinert, E. (2023). Potential associations between behavior change techniques and engagement with mobile health apps: A systematic review. Frontiers in Psychology,14, 1227443. 10.3389/fpsyg.2023.1227443 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mohr, D. C., Azocar, F., Bertagnolli, A., Choudhury, T., Chrisp, P., Frank, R., Harbin, H., Histon, T., et al. (2021). Banbury forum consensus statement on the path forward for digital mental health treatment. Psychiatric Services,72(6), 677–683. 10.1176/appi.ps.202000561 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Montoya, L. M., Kosorok, M. R., Geng, E. H., Schwab, J., Odeny, T. A., & Petersen, M. L. (2023). Efficient and robust approaches for analysis of sequential multiple assignment randomized trials: Illustration using the ADAPT-R trial. Biometrics,79(3), 2577–2591. 10.1111/biom.13808 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nahum-Shani, I., Dziak, J. J., Venera, H., Pfammatter, A. F., Spring, B., & Dempsey, W. (2024). Design of experiments with sequential randomizations on multiple timescales: The hybrid experimental design. Behavior Research Methods,56(3), 1770–1792. 10.3758/s13428-023-02119-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nahum-Shani, I., Dziak, J. J., Walton, M. A., & Dempsey, W. (2022). Hybrid experimental designs for intervention development: What, why, and how. Advances in Methods and Practices in Psychological Science,5(3), 251524592211142. 10.1177/25152459221114279 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nahum-Shani, I., & Yoon, C. (2024). Toward the science of engagement with digital interventions. Current Directions in Psychological Science. 10.1177/09637214241254328 [DOI] [PMC free article] [PubMed] [Google Scholar]
- National Library of Medicine. (2024). Algorithmic Bias. NNLM. https://www.nnlm.gov/guides/data-thesaurus/algorithmic-bias
- Nebeker, C., Gholami, M., Kareem, D., & Kim, E. (2021). Applying a digital health checklist and readability tools to improve informed consent for digital health research. Frontiers in Digital Health,3, 690901. 10.3389/fdgth.2021.690901 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nix, J. (2024). AI-powered world health chatbot is flubbing some answers. Bloomberg. https://www.bloomberg.com/news/articles/2024-04-18/who-s-new-ai-health-chatbot-sarah-gets-many-medical-questions-wrong?leadSource=uverify%20wall
- Noh, E., Won, J., Jo, S., Hahm, D.-H., & Lee, H. (2023). Conversational agents for body weight management: Systematic review. Journal of Medical Internet Research,25, e42238. 10.2196/42238 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Noom. (n.d.). Noom Med. Retrieved August 15, 2024, from https://www.noom.com/med/
- Ogawa, E. F., Harris, R., Dufour, A. B., Morey, M. C., & Bean, J. (2021). Reliability of virtual physical performance assessments in veterans during the COVID-19 pandemic. Archives of Rehabilitation Research and Clinical Translation,3(3), 100146. 10.1016/j.arrct.2021.100146 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Omiye, J. A., Gui, H., Rezaei, S. J., Zou, J., & Daneshjou, R. (2024). Large language models in medicine: The potentials and pitfalls: A narrative review. Annals of Internal Medicine,177(2), 210–220. 10.7326/M23-2772 [DOI] [PubMed] [Google Scholar]
- OpenAI. (2023). ChatGPT release notes. OpenAI Help Center. https://help.openai.com/en/articles/6825453-chatgpt-release-notes
- Pagoto, S., Schneider, K., Jojic, M., DeBiasse, M., & Mann, D. (2013). Evidence-based strategies in weight-loss mobile apps. American Journal of Preventive Medicine,45(5), 576–582. 10.1016/j.amepre.2013.04.025 [DOI] [PubMed] [Google Scholar]
- Pagoto, S. L., Schroeder, M. W., Xu, R., Waring, M. E., Groshon, L., Goetz, J. M., Idiong, C., Troy, H., DiVito, J., & Bannor, R. (2022). A Facebook-delivered weight loss intervention using open enrollment: Randomized pilot feasibility trial. JMIR Formative Research,6(5), e33663. 10.2196/33663 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pagoto, S., Xu, R., Bullard, T., Foster, G. D., Bannor, R., Arcangel, K., DiVito, J., Schroeder, M., & Cardel, M. I. (2023). An evaluation of a personalized multicomponent commercial digital weight management program: Single-arm behavioral trial. Journal of Medical Internet Research,25, e44955. 10.2196/44955 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Park, H. J. (2024). Patient perspectives on informed consent for medical AI: A web-based experiment. Digital Health,10, 20552076241247936. 10.1177/20552076241247938 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Passanante, A., Pertwee, E., Lin, L., Lee, K. Y., Wu, J. T., & Larson, H. J. (2023). Conversational AI and vaccine communication: Systematic review of the evidence. Journal of Medical Internet Research,25, e42758. 10.2196/42758 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Patel, M. L., Brooks, T. L., & Bennett, G. G. (2020). Consistent self-monitoring in a commercial app-based intervention for weight loss: Results from a randomized trial. Journal of Behavioral Medicine,43(3), 391–401. 10.1007/s10865-019-00091-8 [DOI] [PubMed] [Google Scholar]
- Patra, B. G., Sun, Z., Cheng, Z., Kumar, P. K. R. J., Altammami, A., Liu, Y., Joly, R., Jedlicka, C., Delgado, D., Pathak, J., Peng, Y., & Zhang, Y. (2023). Automated classification of lay health articles using natural language processing: A case study on pregnancy health and postpartum depression. Frontiers in Psychiatry,14, 1258887. 10.3389/fpsyt.2023.1258887 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Perski, O., Blandford, A., Garnett, C., Crane, D., West, R., & Michie, S. (2020). A self-report measure of engagement with digital behavior change interventions (DBCIs): Development and psychometric evaluation of the “DBCI Engagement Scale.” Translational Behavioral Medicine,10(1), 267–277. 10.1093/tbm/ibz039 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Perski, O., Blandford, A., West, R., & Michie, S. (2017). Conceptualising engagement with digital behaviour change interventions: A systematic review using principles from critical interpretive synthesis. Translational Behavioral Medicine,7(2), 254–267. 10.1007/s13142-016-0453-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Petti, U., Baker, S., & Korhonen, A. (2020). A systematic literature review of automatic Alzheimer’s disease detection from speech and language. Journal of the American Medical Informatics Association,27(11), 1784–1797. 10.1093/jamia/ocaa174 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pisu, M., Omairi, I., Hoenemeyer, T., Halilova, K. I., Schoenberger, Y.-M.M., Rogers, L. Q., Kenzik, K. M., Oster, R. A., et al. (2021). Developing a virtual assessment protocol for the AMPLIFI randomized controlled trial due to COVID-19: From assessing participants’ preference to preparing the team. Contemporary Clinical Trials,111, 106604. 10.1016/j.cct.2021.106604 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pouwels, J. L., Valkenburg, P. M., Beyens, I., Van Driel, I. I., & Keijsers, L. (2021). Social media use and friendship closeness in adolescents’ daily lives: An experience sampling study. Developmental Psychology,57(2), 309–323. 10.1037/dev0001148 [DOI] [PubMed] [Google Scholar]
- Power, J. M., Phelan, S., Hatley, K., Brannen, A., Muñoz-Christian, K., Legato, M., & Tate, D. F. (2019). Engagement and weight loss in a web and mobile program for low-income postpartum women: Fit Moms/Mamás Activas. Health Education & Behavior,46(2_suppl), 114S–123S. 10.1177/1090198119873915 [DOI] [PubMed] [Google Scholar]
- Presseller, E. K., Lampe, E. W., Zhang, F., Gable, P. A., Guetterman, T. C., Forman, E. M., & Juarascio, A. S. (2023). Using wearable passive sensing to predict binge eating in response to negative affect among individuals with transdiagnostic binge eating: Protocol for an observational study. JMIR Research Protocols,12, e47098. 10.2196/47098 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ramsey, R. R., Carmody, J. K., Voorhees, S. E., Warning, A., Cushing, C. C., Guilbert, T. W., Hommel, K. A., & Fedele, D. A. (2019). A systematic evaluation of asthma management apps examining behavior change techniques. The Journal of Allergy and Clinical Immunology: In Practice,7(8), 2583–2591. 10.1016/j.jaip.2019.03.041 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ranjan, Y., Rashid, Z., Stewart, C., Conde, P., Begale, M., Verbeeck, D., Boettcher, S., Hyve, T., et al. (2019). RADAR-Base: Open source mobile health platform for collecting, monitoring, and analyzing data using sensors, wearables, and mobile devices. JMIR mHealth and uHealth,7(8), e11734. 10.2196/11734 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rocha, P., Pinheiro, D., De Paula Monteiro, R., Tubert, E., Romero, E., Bastos-Filho, C., Nuno, M., & Cadeiras, M. (2023). Adaptive content tuning of social network digital health interventions using control systems engineering for precision public health: Cluster randomized controlled trial. Journal of Medical Internet Research,25, e43132. 10.2196/43132 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ross, K. M., Carpenter, C. A., Arroyo, K. M., Shankar, M. N., Yi, F., Qiu, P., Anthony, L., Ruiz, J., & Perri, M. G. (2022). Impact of transition from face-to-face to telehealth on behavioral obesity treatment during the COVID-19 pandemic. Obesity,30(4), 858–863. 10.1002/oby.23383 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ross, K. M., Hong, Y.-R., Krukowski, R. A., Miller, D. R., Lemas, D. J., & Cardel, M. I. (2021). Acceptability of research and health care visits during the COVID-19 pandemic: Cross-sectional survey study. JMIR Formative Research,5(6), e27185. 10.2196/27185 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Salvatore, G. M., Bercovitz, I., & Arigo, D. (2024). Women’s comfort with mobile applications for menstrual cycle self-monitoring following the overturning of Roe v. Wade. mHealth,10, 1–1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Scodari, B. T., Chacko, S., Matsumura, R., & Jacobson, N. C. (2023). Using machine learning to forecast symptom changes among subclinical depression patients receiving stepped care or usual care. Journal of Affective Disorders,340, 213–220. 10.1016/j.jad.2023.08.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Selaskowski, B., Wiebe, A., Kannen, K., Asché, L., Pakos, J., Philipsen, A., & Braun, N. (2024). Clinical adoption of virtual reality in mental health is challenged by lack of high-quality research. Npj Mental Health Research,3(1), 24. 10.1038/s44184-024-00069-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding machine learning: From theory to algorithms. Cambridge University Press. [Google Scholar]
- Shelton, R. C., Lee, M., Brotzman, L. E., Wolfenden, L., Nathan, N., & Wainberg, M. L. (2020). What Is dissemination and implementation science?: An introduction and opportunities to advance behavioral medicine and public health globally. International Journal of Behavioral Medicine,27(1), 3–20. 10.1007/s12529-020-09848-x [DOI] [PubMed] [Google Scholar]
- Sim, J., Huang, X., Horan, M. R., Stewart, C. M., Robison, L. L., Hudson, M. M., Baker, J. N., & Huang, I.-C. (2023). Natural language processing with machine learning methods to analyze unstructured patient-reported outcomes derived from electronic health records: A systematic review. Artificial Intelligence in Medicine,146, 102701. 10.1016/j.artmed.2023.102701 [DOI] [PMC free article] [PubMed] [Google Scholar]
- SmarthealthIT. (2024). Smart App Gallery. Smarthealth IT. https://apps.smarthealthit.org/apps?sort=name-asc
- Smyth, J. M., Juth, V., Ma, J., & Sliwinski, M. (2017). A slice of life: Ecologically valid methods for research on social relationships and health across the lifespan. Social and Personality Psychology Compass,11(10), e12356. 10.1111/spc3.12356 [Google Scholar]
- Song, J., Litvin, B., Allred, R., Chen, S., Hull, T. D., & Areán, P. A. (2023). Comparing message-based psychotherapy to once-weekly, video-based psychotherapy for moderate depression: Randomized controlled trial. Journal of Medical Internet Research,25, e46052. 10.2196/46052 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Stavrova, O., & Denissen, J. (2021). Does using social media jeopardize well-being? The importance of separating within- from between-person effects. Social Psychological and Personality Science,12(6), 964–973. 10.1177/1948550620944304 [Google Scholar]
- Stryker, C., & Kavlakoglu, E. (2024). What is artificial intelligence (AI)? IBM. https://www.ibm.com/topics/artificial-intelligence
- Szeszulski, J., & Guastaferro, K. (2024). Optimization of implementation strategies using the multiphase optimization STrategy (MOST) framework: Practical guidance using the factorial design. Translational Behavioral Medicine. 10.1093/tbm/ibae035 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Taylor, H., Cavanagh, K., Field, A. P., & Strauss, C. (2022). Health care workers’ need for Headspace: Findings from a multisite definitive randomized controlled trial of an unguided digital mindfulness-based self-help app to reduce healthcare worker stress. JMIR mHealth and uHealth,10(8), e31744. 10.2196/31744 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tchang, B. G., Morrison, C., Kim, J. T., Ahmed, F., Chan, K. M., Alonso, L. C., Aronne, L. J., & Shukla, A. P. (2022). Weight loss outcomes with telemedicine during COVID-19. Frontiers in Endocrinology,13, 793290. 10.3389/fendo.2022.793290 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thirunavukarasu, A. J., Ting, D. S. J., Elangovan, K., Gutierrez, L., Tan, T. F., & Ting, D. S. W. (2023). Large language models in medicine. Nature Medicine,29(8), 1930–1940. 10.1038/s41591-023-02448-8 [DOI] [PubMed] [Google Scholar]
- Thomas, J. G., Raynor, H. A., Bond, D. S., Luke, A. K., Cardoso, C. C., Foster, G. D., & Wing, R. R. (2017). Weight loss in weight watchers online with and without an activity tracking device compared to control: A randomized trial. Obesity,25(6), 1014–1021. 10.1002/oby.21846 [DOI] [PubMed] [Google Scholar]
- Toro-Ramos, T., Michaelides, A., Anton, M., Karim, Z., Kang-Oh, L., Argyrou, C., Loukaidou, E., Charitou, M. M., Sze, W., & Miller, J. D. (2020). Mobile delivery of the diabetes prevention program in people with prediabetes: Randomized controlled trial. JMIR mHealth and uHealth,8(7), e17842. 10.2196/17842 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tudor Car, L., Dhinagaran, D. A., Kyaw, B. M., Kowatsch, T., Joty, S., Theng, Y.-L., & Atun, R. (2020). Conversational agents in health care: Scoping review and conceptual analysis. Journal of Medical Internet Research,22(8), e17158. 10.2196/17158 [DOI] [PMC free article] [PubMed] [Google Scholar]
- US Department of Veterans Affairs. (2024). MOVE! Coach. U.S. Department of Veterans Affairs. https://mobile.va.gov/app/move-coach
- Vaniukov, S. (2024). NLP vs LLM: A comprehensive guide to understanding key differences. Medium. https://medium.com/@vaniukov.s/nlp-vs-llm-a-comprehensive-guide-to-understanding-key-differences-0358f6571910
- Villar, R., Beltrame, T., Ferreira dos Santos, G., Zago, A. S., Bocalini, D. S., & Pontes Júnior, F. L. (2024). Test–retest reliability and agreement of remote home-based functional capacity self-administered assessments in community-dwelling, socially isolated older adults. Digital Health,10, 20552076241254904. 10.1177/20552076241254904 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Walton, A., Nahum-Shani, I., Crosby, L., Klasnja, P., & Murphy, S. (2018). Optimizing digital integrated care via micro-randomized trials. Clinical Pharmacology & Therapeutics,104(1), 53–58. 10.1002/cpt.1079 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang, M. L., Waring, M. E., Jake-Schoffman, D. E., Oleski, J. L., Michaels, Z., Goetz, J. M., Lemon, S. C., Ma, Y., & Pagoto, S. L. (2017). Clinic versus online social network–delivered lifestyle interventions: Protocol for the get social noninferiority randomized controlled trial. JMIR Research Protocols,6(12), e243. 10.2196/resprot.8068 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wiebe, A., Kannen, K., Selaskowski, B., Mehren, A., Thöne, A.-K., Pramme, L., Blumenthal, N., Li, M., et al. (2022). Virtual reality in the diagnostic and therapy for mental disorders: A systematic review. Clinical Psychology Review,98, 102213. 10.1016/j.cpr.2022.102213 [DOI] [PubMed] [Google Scholar]
- Willms, A., Rush, J., Hofer, S., Rhodes, R. E., & Liu, S. (2024). Advancing physical activity research methods using real-time and adaptive technology: A scoping review of “No-Code” mobile health app research tools. Sport, Exercise, and Performance Psychology.
- World Health Organization (2021). Ethics and governance of artificial intelligence for health: WHO guidance. https://www.who.int/publications/i/item/9789240029200
- World Health Organization (2023). WHO calls for safe and ethical AI for health. https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health
- World Health Organization (2024). S.A.R.A.H, a smart AI resource assistant for health. https://www.who.int/campaigns/s-a-r-a-h
- World Health Organization (2024). WHO unveils a digital health promoter harnessing generative AI for public health. https://www.who.int/news/item/02-04-2024-who-unveils-a-digital-health-promoter-harnessing-generative-ai-for-public-health
- Yeaton, W. H. (2024). Re-conceptualizing SMART designs as a hybrid of randomized and regression discontinuity designs: Opportunities, cautions. International Journal of Research & Method in Education,47(2), 140–155. 10.1080/1743727X.2023.2220649 [Google Scholar]
- Yu, H., Kotlyar, M., Thuras, P., Dufresne, S., & Pakhomov, S. V. (2024). Towards predicting smoking events for just-in-time interventions. AMIA Joint Summits on Translational Science,2024, 468–477. [PMC free article] [PubMed] [Google Scholar]
- Yu, P., Xu, H., Hu, X., & Deng, C. (2023). Leveraging generative AI and large language models: A comprehensive roadmap for healthcare integration. Healthcare,11(20), 2776. 10.3390/healthcare11202776 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhao, S. Z., Weng, X., Luk, T. T., Wu, Y., Cheung, D. Y. T., Li, W. H. C., Tong, H., Lai, V., Lam, T. H., & Wang, M. P. (2022). Adaptive interventions to optimise the mobile phone-based smoking cessation support: Study protocol for a sequential, multiple assignment, randomised trial (SMART). Trials,23(1), 681. 10.1186/s13063-022-06502-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
