Author manuscript; available in PMC: 2018 Mar 27.
Published in final edited form as: Int J Behav Med. 2017 Oct;24(5):673–682. doi: 10.1007/s12529-016-9627-y

Return of the JITAI: Applying a Just-in-Time Adaptive Intervention Framework to the Development of m-Health Solutions for Addictive Behaviors

Stephanie P Goldstein, Brittney C Evans, Daniel Flack, Adrienne Juarascio, Stephanie Manasse, Fengqing Zhang, Evan M Forman
PMCID: PMC5870794  NIHMSID: NIHMS950854  PMID: 28083725

Abstract

Purpose

Lapses are strong indicators of later relapse among individuals with addictive disorders, and thus are an important intervention target. However, lapse behavior has proven resistant to change due to the complex interplay of lapse triggers that are present in everyday life. It could be possible to prevent lapses before they occur by using m-Health solutions to deliver interventions in real-time.

Method

Just-in-time adaptive intervention (JITAI) is an intervention design framework that could be delivered via mobile app to facilitate in-the-moment monitoring of triggers for lapsing, and deliver personalized coping strategies to the user to prevent lapses from occurring. An organized framework is key for successful development of a JITAI.

Results

Nahum-Shani and colleagues (2014) set forth six core elements of a JITAI and guidelines for designing each: distal outcomes, proximal outcomes, tailoring variables, decision points, decision rules, and intervention options. The primary aim of this paper is to illustrate the use of this framework as it pertains to developing a JITAI that targets lapse behavior among individuals following a weight control diet.

Conclusion

We will detail our approach to various decision points during the development phases, report on preliminary findings where applicable, identify problems that arose during development, and provide recommendations for researchers who are currently undertaking their own JITAI development efforts. Issues such as missing data, the rarity of lapses, advantages/disadvantages of machine learning, and user engagement are discussed.

Keywords: m-Health, Just-in-time adaptive interventions, Lapses, Addictions

Just-In-Time Adaptive Interventions for Addictive Behaviors

One core manifestation of addiction is lapse behavior (i.e., instances in which an individual acts in a way that jeopardizes their goal of abstinence). It can be difficult to prevent lapses via the traditional therapy structure (e.g., once weekly for one hour) because they are driven by powerful physiological and psychological reinforcement processes [1, 2]. Lapses are regularly occurring behaviors that do not happen at random; in fact, they are predictable from a variety of internal and external factors [1]. The complex and dynamic nature of lapse behavior calls for the development of more accessible methods for addictions treatment, such as just-in-time adaptive interventions (JITAIs) [3]. JITAIs are a type of intervention design in which skill building (e.g., coping strategies, decision-making, planning behavior), emotional support (e.g., encouragement, empathy), and instrumental support (e.g., feedback, reminders) occur in an adaptive manner to facilitate support in the exact moment of need [4, 5]. Though JITAIs can be administered through several means (e.g., in person, computer, smartwatch), advancements in smartphone technology that allow for continuous in-the-moment participant monitoring and delivery of personalized coping strategies make mobile devices particularly well suited for delivering JITAIs that are feasible and scalable [4, 6]. For instance, a JITAI for addictions could operate in an application (“app”) on a user’s smartphone to automatically generate personally tailored messages based on a variety of possible lapse triggers (e.g., self-reported moods and urges, or phone sensor data), thus increasing the ecological validity of treatment [7–9]. In this way, JITAIs may help individuals (a) learn factors associated with lapsing, (b) become aware of the risk of lapsing earlier in the cycle (when it is easier to intervene), and (c) identify relevant coping strategies for use during high-risk situations.

A Case Study of JITAI Development

A structured development framework for both researchers and developers is necessary to fully harness the capabilities of the JITAI approach and the associated smartphone technology [10]. One popular design framework for JITAI development, proposed by Nahum-Shani and colleagues, uses the flexible structure of identifying core JITAI elements (distal outcomes, proximal outcomes, tailoring variables, decision points, decision rules, and intervention options) that can be satisfied using a variety of creative development and design strategies [5]. Though we will briefly mention and define these elements, several reports published by Nahum-Shani and colleagues [4, 5, 10] contain complete reviews of each element and its role within the JITAI development framework. In conjunction with the technical report and extant JITAI studies, an in-depth case study of the decisions, methods, and design tools used during development could be highly useful for scientists aiming to develop JITAIs for lapse behavior.

Overview of DietAlert

As such, we will outline the steps taken to develop DietAlert, an app that targets lapses from a weight control diet among overweight and obese individuals. Obesity shares many properties with addictive behaviors, such as endangerment of physical and/or psychological health, compulsivity driven by the neurobiological reward system and cravings, resistance to change despite strong intentions, frequent violations of intended behavior (lapses from a weight loss diet), and lapses that are driven by specific internal and external cues [11]. Therefore, it is likely that our methods for intervening on weight control lapses could be generalized to other types of behavioral lapses and are thus applicable to the treatment of addictions (and other disorders of self-control).

DietAlert is designed to help individuals following a weight control diet to lose or maintain weight through the prevention of dietary lapses. Users are repeatedly prompted to enter information about lapses from their diets and an array of potentially triggering factors (e.g., mood, food environment, social interactions) using a repeated sampling method called ecological momentary assessment (EMA) [12]. When a user reports the presence of potential triggers, DietAlert uses a predictive learning algorithm to calculate the level of risk for lapsing and determine the top three factors contributing to that risk. If risk is detected, a series of “micro-interventions” (i.e., brief, text-based modules) is delivered to the user, providing strategies designed to help prevent lapses. The full suite of these interventions is also housed within the app as a library that the user can access at any time.

To date, we have completed a preliminary testing phase of DietAlert in which a small sample (n = 12) used the app for 6 weeks. This initial version of the app only contained EMA functions so that we were able to (1) collect information regarding feasibility and acceptability prior to evaluating all app features (e.g., EMA and interventions) with a larger sample and (2) build a predictive algorithm from existing user data (described further below). Currently, we are conducting an open trial (target n = 30; current users = 10) in which participants will use the entire suite of app features for 8 weeks. No comparison condition is included, as we continue to iterate procedures and functions based on participant feedback in order to optimize DietAlert prior to a comparative trial.

The subsequent sections define the core functions of DietAlert and are organized by the design elements specified by Nahum-Shani and colleagues [4, 5, 10]. Our primary goal is to describe the way in which these principles guided the development of DietAlert and detail our approach to various decisions during the development phases. We will report on preliminary findings where applicable, identify problems that arose during development, and provide recommendations for researchers. Though study investigations are ongoing, it is our hope that adopting an “early and often” sharing approach will guide and encourage other researchers in this field [13].

Identification of Proximal and Distal Outcomes

Initial suggested steps for JITAI development, after identifying a target population, are to select a meaningful desired outcome and key factors that impact this outcome [10]. In the case of DietAlert and the majority of behavioral weight loss programs, the distal outcome (i.e., the primary behavior change target [10]) of interest is weight loss. Programs typically achieve weight loss by prescribing dietary guidelines that involve some form of calorie restriction (i.e., limiting calorie intake so that the body expends more energy than it takes in), in which individuals either abstain from or substantially limit intake of high-calorie foods [14]. Treatment failure is frequent [15–21] and likely occurs at least in part due to difficulty making and maintaining suggested changes to dietary intake [22].

Any instance in which an individual violates dietary recommendations can be referred to as a “lapse.” Similar to addictions, weight control failure can be conceptualized fundamentally as a problem of lapses, as these eating episodes are the source of calories that lead to weight regain [23]. Therefore, one possible proximal outcome (i.e., a target that has the potential to impact the distal outcome) would be dietary lapses because (a) lapses cause weight gain/inability to lose weight [22], (b) lapses lead to abandonment of weight loss goals (relapse) [1], and (c) dietary lapses recur regularly and are theoretically and empirically associated with specific triggers that could serve as tailoring variables (reviewed below) [24–26]. It should also be noted that we chose dietary lapses as the proximal outcome for DietAlert because we believed that targeting lapses directly would have the most impact on the distal outcome of weight loss; however, there are other behaviors associated with weight loss (e.g., frequency of self-weighing, frequency of food logging, attendance at group meetings, physical activity) that could serve as proximal outcomes for other weight loss JITAIs [27–29].

Operationalizing the Proximal Outcome

One notable challenge to this research is the broad operational definition of a lapse, which could leave DietAlert open to bias and subjectivity in reporting. To combat this risk, we provided individuals with daily caloric targets for meals and snacks; they were instructed to report in DietAlert whenever they exceeded a meal or snack target (regardless of the magnitude). Further, participants were provided with the opportunity to classify the lapse, as there are different ways in which an individual could lapse from a diet (e.g., eating a forbidden food, eating too much at one time, purposefully exceeding the target on a special occasion). Ideally, future lapse identification would be facilitated by direct and real-time comparison of caloric intake to calorie goals in order to further reduce subjectivity and human error.

Selection of Tailoring Variables

Once the target outcomes for DietAlert were identified, we selected tailoring variables that would assist in determining intervention provision. First, we sought to enumerate possible predictors of lapses; a literature review revealed a variety of internal and external states that can cue lapse behavior. For instance, lapses have been prospectively associated with increases in self-reported positive and negative moods (e.g., lonely, bored) [24], socializing and eating with others [25], TV watching [25, 30], the presence of high-calorie foods in the environment [31], location, cravings, and exposure to food cues [30]. Based on an extensive review of the obesity, weight control, and dietary lapse literatures, 21 potential triggers for lapsing (see Table 1 for the complete list) were identified as DietAlert tailoring variables.

Table 1.

DietAlert tailoring variables and assessment schedule

Variable name | Question frequency | Time of day rules
Affect | ~3–4 per day | All available times
Boredom | ~3–4 per day | All available times
Hunger | ~3–4 per day | All available times
Cravings | ~3–4 per day | All available times
Tiredness | ~3–4 per day | All available times
Unhealthy food availability | ~3–4 per day | All available times
Temptations | ~3–4 per day | All available times
Missed meals/snacks | ~3–4 per day | All available times
Self-efficacy (confidence) | ~1–2 per day | No nighttime
Motivation | ~1–2 per day | All available times
Socializing (with or without food present) | ~1–2 per day | Afternoons and evenings
Watching TV | ~1–2 per day | Afternoons and evenings
Negative interpersonal interactions | ~1–2 per day | All available times
Healthy food presence | ~1–2 per day | All available times
Cognitive load | ~1–2 per day | All available times
Food cues (advertisements) | ~1–2 per day | All available times
Hours of sleep | Once | Morning
Exercise | Once | Evenings
Alcohol consumption | Once | Afternoons and evenings
Planning food intake | Once | Morning and afternoon
Time of day | Continuous measurement | All available times

Depending on the variables of interest and design, JITAIs may include active assessment methods, passive assessment methods (i.e., phone-based or wearable sensors), or both [10]. For feasibility purposes, the majority of DietAlert’s tailoring variables (with the exception of time) were assessed using active assessment. An EMA protocol was utilized to ensure enhanced validity, accuracy of self-report measurement, and minimal assessment reactivity (i.e., the process by which monitoring a target behavior results in changes in the expression of that behavior) [12]. DietAlert was designed to utilize both event-based sampling (e.g., collecting data around a specific, discrete event) and time-based EMA sampling (e.g., semi-randomly prompting users to input states) [32]. Users were prompted to enter information into the app at six semi-random time intervals throughout the day, spaced approximately 2–3 h apart to adequately capture change in the tailoring variables [32, 33]. Users were allowed 90 min to complete an EMA report in order to accommodate the potential for unavailability (e.g., driving, working) during EMA prompt times.
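To make the time-based sampling scheme concrete, the sketch below generates a hypothetical day's prompt schedule under the constraints described above (six prompts, roughly 2–3 h apart, each with a 90-min response window). The wake time, function name, and scheduling details are illustrative assumptions, not DietAlert's actual implementation.

```python
import random
from datetime import datetime, timedelta

def generate_ema_schedule(wake_time, n_prompts=6, min_gap_h=2.0, max_gap_h=3.0,
                          response_window_min=90):
    """Generate semi-random prompt times spaced 2-3 h apart, each with a
    90-minute response window before the report expires."""
    prompts = []
    t = wake_time
    for _ in range(n_prompts):
        t = t + timedelta(hours=random.uniform(min_gap_h, max_gap_h))
        prompts.append((t, t + timedelta(minutes=response_window_min)))
    return prompts

for prompt_time, expires in generate_ema_schedule(datetime(2017, 1, 9, 8, 0)):
    print(f"prompt at {prompt_time:%H:%M}, expires {expires:%H:%M}")
```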

Addressing User Burden

Assessment frequency must also be balanced against the relative user burden of reporting [34]. This is of special concern with DietAlert, as the app is almost solely reliant on self-report data. To reduce user burden, participants were asked eight questions per prompt (rather than questions assessing all 21 tailoring variables). Question repetition varied systematically throughout the day such that participants did not answer the same questions at each prompt, but each of the 21 variables was assessed at least once per day (with time automatically recorded). Question administration was largely dictated by the frequency with which variables were likely to change (see Table 1 for approximate question frequencies and time-of-day rules). For example, users were only asked about alcohol once in the evening but were asked about mood several times throughout the day. A rotation of this kind could be scheduled as in the sketch below.
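The following is a minimal sketch of one way to rotate question subsets, assuming a simplified, hypothetical subset of Table 1's rules; the variable names, daily targets, slot assignments, and selection heuristic are illustrative rather than the app's actual logic.

```python
import random

# Simplified, hypothetical subset of Table 1: variable -> (daily target, allowed prompt slots 0-5).
VARIABLE_RULES = {
    "affect": (4, {0, 1, 2, 3, 4, 5}),
    "cravings": (4, {0, 1, 2, 3, 4, 5}),
    "self_efficacy": (2, {0, 1, 2, 3, 4}),   # no nighttime prompt
    "watching_tv": (2, {2, 3, 4, 5}),        # afternoons and evenings only
    "hours_of_sleep": (1, {0}),              # morning only
    "alcohol": (1, {4, 5}),                  # evenings only
}

def questions_for_prompt(slot, asked_counts, max_items=8):
    """Pick up to max_items variables eligible at this prompt slot, favoring
    those furthest below their daily target frequency."""
    eligible = [v for v, (target, slots) in VARIABLE_RULES.items()
                if slot in slots and asked_counts.get(v, 0) < target]
    eligible.sort(key=lambda v: (asked_counts.get(v, 0) / VARIABLE_RULES[v][0],
                                 random.random()))
    chosen = eligible[:max_items]
    for v in chosen:
        asked_counts[v] = asked_counts.get(v, 0) + 1
    return chosen

counts = {}
for slot in range(6):
    print(slot, questions_for_prompt(slot, counts, max_items=3))
```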

Though our assessment methods created systematic missing data (further discussed below), the procedures resulted in high compliance and reduced burden. For example, in our preliminary assessment phase, participants were highly compliant with app use, completing between 85.2 and 98.9% of delivered EMA prompts (M = 94.6%) over a period of 6 weeks. These rates remained relatively stable over time; participants averaged 92.66% in the first study week, 96.23% at week three, and 99.60% during the final study week (week 6). Of note, participants were compensated for completion of EMA prompts and the implications of this procedure are further discussed below. During qualitative interviews, participants described the app as “non-intrusive” and “routine.” In fact, many reported the frequent prompting as helpful in promoting awareness of factors influencing dietary intake.

Future Directions in Assessment

Though EMA procedures are the gold-standard method for self-reporting time-varying states [12], they are still subject to a degree of recall bias and rely on participant compliance with reporting procedures. EMA protocols are thought to minimize assessment reactivity; however, repeated prompting to monitor a target behavior over long periods of time (e.g., 6–8 weeks) may still induce behavior change via assessment reactivity. Reactivity has typically been viewed as a disadvantage of this assessment method because it precludes obtaining accurate information about the sole impact of interventions on behavior [32]. An alternative view is that reactivity, to the extent that it facilitates desired behavior, is actually a beneficial outcome of EMA. In the context of a JITAI, assessment reactivity could be helpful in that EMA becomes part of a system (including in-the-moment interventions) that is driving behavior change.

Future iterations of DietAlert could also incorporate more passive data streams (e.g., accelerometer for exercise, ambient noise detection for social interactions, GPS for possible location triggers). Passive monitoring could be used in conjunction with feedback to promote user awareness of behavior without the burden of actively entering information into an app. Further, these methods could reduce user burden and provide greater contextual information for intervention. For example, passive sensing could also be used to better detect states of availability to engage with assessment and/or intervention (e.g., driving, exercising, and sleeping).

Decision Rules: Utilizing Machine Learning

Information from the tailoring variables was utilized in the decision rules for DietAlert to specify which intervention should be delivered to the user and when. Many extant JITAIs rely on decision rules that are grounded in comprehensive theoretical models and typically consist of a series of conditional statements (e.g., if smoking urge > [threshold], then recommend an urge surfing intervention) [10]. However, this process becomes difficult with 21 tailoring variables. Intervening on all (or even just a portion) of the 21 variables at any given time could reduce intervention tolerability and salience. Therefore, we concluded that DietAlert should intervene on the factors that are most important to the individual during times when a lapse is likely. Further empirical work, especially using advanced statistical models, was necessary to determine the relative importance of each tailoring variable by identifying the factors that emerged as the most robust prospective predictors of lapses.
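For contrast with the machine learning approach described next, the sketch below shows what a purely hand-crafted, threshold-based decision rule of the kind referenced above might look like; the thresholds, variable names, and module names are hypothetical.

```python
from typing import Optional

def rule_based_intervention(report: dict) -> Optional[str]:
    """One hand-crafted decision rule of the 'if trigger > threshold' form.
    Variable names, thresholds, and module names are illustrative only."""
    if report.get("craving", 0) >= 4:  # e.g., craving rated 4+ on a 1-5 scale
        return "urge_surfing_module"
    if report.get("unhealthy_food_nearby", False) and report.get("hunger", 0) >= 3:
        return "environment_change_module"
    return None  # no intervention at this decision point

print(rule_based_intervention({"craving": 5, "hunger": 2}))  # -> urge_surfing_module
```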

In this respect, one useful statistical method is machine learning, a subfield of artificial intelligence that involves the development of computational systems that can learn and adapt from their experiences over time [35]. Machine learning methods can employ historical data to model general response patterns that predict the proximal outcome (e.g., lapses) [36]. These models can be used in multiple phases of JITAI development, such as when initially exploring previously collected data to narrow down a subset of salient tailoring variables (i.e., variable selection). We employed variable selection procedures on data from our preliminary testing phase (n = 12) to reduce the number of tailoring variables by examining their relative importance to lapse occurrence. Ideally, this procedure would allow us to create decision rules that more adequately target factors that contribute to lapses for each individual.
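As a generic illustration of variable selection by relative importance, the sketch below ranks predictors of a synthetic lapse outcome using a random forest's feature importances (scikit-learn assumed). It is not the authors' actual procedure, which is detailed in Goldstein (2016) [37]; the feature names and data are stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for EMA data: lapses driven mainly by craving and negative affect.
rng = np.random.default_rng(0)
feature_names = ["negative_affect", "craving", "hunger", "tv_watching", "socializing"]
X = rng.normal(size=(2000, len(feature_names)))
logits = 1.5 * X[:, 1] + 1.0 * X[:, 0] - 2.5
y = (rng.random(2000) < 1 / (1 + np.exp(-logits))).astype(int)

# Rank tailoring variables by their contribution to predicting the (synthetic) lapses.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(feature_names, forest.feature_importances_),
                               key=lambda kv: kv[1], reverse=True):
    print(f"{name:16s} {importance:.3f}")
```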

To examine the ability of group-selected variables to generalize to individuals, variables of importance from the entire dataset were compared to each individual’s selected variables. Unfortunately, only 48.6% (range, 14.3–50.0%) of variables selected for the individual were also selected for the group (e.g., see Goldstein, 2016) [37]. In other words, variables that were predictive for the group were not necessarily predictive for an individual. Based on this analysis, we concluded that (1) attempting to reduce tailoring variables could exclude factors that are important for some individuals and (2) creating overly broad conditional statements that apply to a group of users could lead to inappropriate intervention provision for any given individual. In sum, we opted to maximize the use of DietAlert’s tailoring variables to capitalize on the most important factors for each individual.

In this regard, one promising solution was to use machine learning to identify risk for lapse and select relevant variables for individual users in a live, online fashion. DietAlert was designed to employ online supervised classification algorithms to determine risk for lapse based on the selected tailoring variables. Supervised learning is a type of machine learning in which an algorithm is informed by previously collected data and subsequently predicts outcomes by modeling the function relating predictors to the outcome. To develop this algorithm, data from our preliminary testing phase (n = 12) were used to train several different classification models to predict dietary lapses and to test model accuracy on a previously unexamined validation subset of the data (the testing set) [38]. Participants reported a total of 326 lapses and 2572 non-lapses (i.e., a survey was completed and a lapse had not occurred) from which to model relationships. Strategies for managing the unbalanced ratio of lapses to non-lapses (approximately 1:12) will be discussed further below. The models were evaluated by comparing classification accuracy (the proportion of correctly classified cases), sensitivity (the true positive rate), and specificity (the true negative rate). The combination of classification models with the most promising results was selected for use within the app. These procedures allowed us to retain all tailoring variables and create decision rules that constantly adapt to new user information to tailor intervention provision. For more information regarding algorithm development and evaluation, see Goldstein (2016) [37].
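The sketch below illustrates this general train/test workflow on synthetic data of roughly the same size and imbalance, comparing two candidate classifiers on accuracy, sensitivity, and specificity (scikit-learn assumed). The models, features, and data are placeholders rather than the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic data sized roughly like the preliminary phase (326 lapses, 2572 non-lapses).
rng = np.random.default_rng(1)
X = rng.normal(size=(2898, 5))
y = (rng.random(2898) < 1 / (1 + np.exp(-(2.0 * X[:, 0] - 2.4)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("random_forest", RandomForestClassifier(n_estimators=200, random_state=1))]:
    y_pred = model.fit(X_train, y_train).predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    print(f"{name}: accuracy={accuracy_score(y_test, y_pred):.2f} "
          f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```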

Decision Points for Intervention

The potential for providing intervention (i.e., a decision point) using the above-described algorithm occurs whenever a user replies to an EMA prompt. When new data are entered into the app, they are run through the algorithm and a prediction (lapse vs. no lapse) at the next assessment point (in approximately 2–3 h) is generated. If the algorithm predicts that a lapse will occur, an additional variable selection model is utilized to determine the likelihood of risk. To make the final decision, predictions from both the classification and variable selection models are taken into consideration. When a lapse is predicted by the classification model, the likelihood of lapse can be high (above 70% lapse likelihood), medium (between 40 and 70% lapse likelihood), or low (below 40% lapse likelihood). If the probability is less than 40%, the risk is considered low due to the low consistency between the two types of prediction. If the probability is greater than 70%, the lapse predicted by the classification model is corroborated by the high probability of lapse identified by the variable selection method, and risk level is therefore considered high due to the strong consistency between these two models.

If the classification algorithm predicts that no lapse will occur, the variable selection model is not utilized to determine risk level and users do not receive any app notifications until their next survey. Risk thresholds were chosen based on early stages of app prototyping and the desired frequency of alerts (e.g., no more than 2–3 times per day to reduce burden and enhance receptivity). Risk alerts were available for users to read for 75 min to preserve the momentary nature of intervention while accounting for possible unavailability of the user at the initial notification time. Pilot testing with DietAlert’s algorithm is ongoing, and incoming data will be used to continuously adjust thresholds and optimize the algorithm based on accuracy and user reports.
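A minimal sketch of the two-model thresholding logic described above follows; the function signature is hypothetical, but the 40% and 70% cut points mirror those reported in the text.

```python
from typing import Optional

def risk_level(classifier_predicts_lapse: bool, lapse_probability: float) -> Optional[str]:
    """Combine the classification model's prediction with the lapse probability
    from the variable selection model, using the 40%/70% cut points from the text.
    Returns None when no lapse is predicted (no alert until the next EMA report)."""
    if not classifier_predicts_lapse:
        return None
    if lapse_probability > 0.70:
        return "high"    # both models agree strongly
    if lapse_probability >= 0.40:
        return "medium"
    return "low"         # models disagree; risk treated as low

print(risk_level(True, 0.82))   # -> high
print(risk_level(False, 0.90))  # -> None
```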

Personalized Intervention Options

At any given decision point (i.e., when data are entered and the algorithm calculates the level of risk), there are myriad potential interventions that could be employed based on the tailoring variables and decision rules (including providing no intervention at all). When there is an opportunity for intervention, DietAlert achieves a tailored effect by evaluating the top three factors placing an individual at risk using the variable selection strategies described above [37]. For each risk factor, a bank of 7–10 micro-interventions was developed. When a factor is identified by the algorithm as a top contributor to lapse risk, one micro-intervention is randomly selected from each relevant intervention bank and delivered to the user. This procedure is commonly used to enhance feelings of personalization and reduce repetition [39].
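The selection step might be sketched as follows, assuming hypothetical intervention banks (the sample texts are drawn from Table 2); DietAlert's actual banks contain 7–10 micro-interventions per risk factor.

```python
import random

# Hypothetical intervention banks; sample texts are drawn from Table 2.
INTERVENTION_BANKS = {
    "boredom": ["Make a plan to end boredom",
                "Think of something fun to try right now"],
    "urges": ["Develop a mantra or a phrase to help resist urges",
              "Reward yourself for resisting a tempting food"],
    "affect": ["Try deep breathing to reduce anxiety or stress"],
    "motivation": ["When you're feeling less motivated, try to set smaller goals until you get back on track"],
}

def select_interventions(top_risk_factors, n_factors=3):
    """Randomly draw one micro-intervention from the bank of each top risk factor."""
    chosen = []
    for factor in top_risk_factors[:n_factors]:
        bank = INTERVENTION_BANKS.get(factor)
        if bank:
            chosen.append((factor, random.choice(bank)))
    return chosen

print(select_interventions(["urges", "boredom", "affect"]))
```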

DietAlert contains a total of 157 brief interventions written by a team of clinical psychology doctoral students and licensed clinical psychologists. Intervention strategies were guided by a behavior change taxonomy, thus ensuring empirically supported content (see Table 2 for examples of how behavior change strategies within the taxonomy guided intervention content) [40]. To enhance engagement, the majority of DietAlert interventions contain simple reflective exercises that require participants to write a brief response (e.g., set an intention, define a goal, identify a barrier) or select items from a list (e.g., which strategies can you commit to trying). Non-psychologists on our development team reviewed intervention text to avoid the use of jargon or overly complex psychological concepts. One challenge was that the intervention content had to be easily digestible without the assistance of a clinician given the self-guided nature of the app. In this respect, it may have been useful to obtain feedback on intervention text from target users before participants began using the full suite of DietAlert interventions in the open trial phase [41]. Interventions are being formally evaluated for acceptability and preliminary effectiveness during our open trial through in-the-moment user helpfulness ratings and qualitative interviews.

Table 2.

Examples of interventions guided by behavior change taxonomy

Triggers | Behavioral strategy | Sample intervention text
Boredom | Prompt intention formation | Make a plan to end boredom
Boredom | Provide instruction | Think of something fun to try right now
Missed meals/snacks | Prompt barrier identification | Think about what gets in the way of eating regularly, and what you can do about it
Missed meals/snacks | Prompt self-monitoring | Track your food regularly
Unhealthy foods available | Goal-setting | Take action to change your food environment and/or to avoid high-risk food environments
Unhealthy foods available | Plan social support | Create a buddy system where you and a friend can keep each other accountable when the food environment is going to be tempting
Urges | Prompt self-talk | Develop a mantra or a phrase to help resist urges
Urges | Provide contingent rewards | Reward yourself for resisting a tempting food
Motivation | Set graded tasks | When you’re feeling less motivated, try to set smaller goals until you get back on track
Motivation | Encouragement | Don’t let low motivation get in the way of your success!
Affect | Stress management | Try deep breathing to reduce anxiety or stress
Affect | Provide info on the behavioral health link | Understand why emotional eating occurs so you can prevent it in the future

Only a small sample of triggers and behavioral strategies is presented. Full intervention text is not provided.

Problems and Future Considerations

Developing JITAIs brings about many logistical and methodological challenges, and DietAlert was no exception. For this reason, it is important for researchers to incorporate novel approaches based on their own concurrent findings, as well as others’ work [42, 43]. Below, we describe several methodological issues that arose while developing DietAlert, and detail recommendations for future researchers.

Tailoring Variables and Missing Data

Problem

Tailoring variables were semi-randomly spread across the EMA prompts to reduce participant burden. This approach created large amounts of data missing by design, making imputation and algorithm development difficult (e.g., [44, 45]). Because traditional approaches for handling missing data assume that data are missing at random or missing as a function of an unrecorded variable (i.e., missing not at random), we took a machine learning-specific approach in which missing data were coded as categorically different from observed data [46]. This approach allows data missing by design (i.e., due to systematic collection restrictions, as in the current problem) to be modeled as predictors, maintaining the completeness of cases, which is necessary for many variable selection and predictive procedures [47]. Unfortunately, the approach also allows for the possibility of the algorithm identifying a systematically missing category as a salient predictor, which makes intervention difficult. For example, an individual could be identified as at risk for lapse because data on their mood were not collected at that particular decision point; although missingness of mood has been identified as a predictor, we cannot provide an intervention tailored to mood because the participant did not report on mood. In sum, there are several statistical and clinical implications for a JITAI that features data missing by design.
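As a simple illustration of coding missingness as its own category (rather than imputing), the sketch below recodes items that were not asked at a given prompt; the toy data and pandas-based recoding are assumptions for illustration only, not the authors' implementation.

```python
import pandas as pd

# Toy EMA matrix with data missing by design (not every item is asked at every prompt).
df = pd.DataFrame({
    "craving": [3, None, 5, None],
    "boredom": [None, 2, None, 4],
})

# Rather than imputing, recode "not asked" as its own category so that models
# (e.g., tree-based classifiers) can treat missingness itself as a predictor.
encoded = df.astype(object).fillna("not_asked")
print(encoded)
```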

Recommendation

We chose to use missingness as a meaningful factor for prediction in the present project, but only use known variables (i.e., the questions that were answered) when delivering interventions to the individual. Our open trial, currently underway, will allow us to determine the utility of these procedures. However, given the problems with imputation and interpretation, we would caution against collection procedures (such as rotating EMA questions) that result in approximately 50% missing data. If there is a large set of potential predictors, even assessing each predictor with a single EMA question may result in an EMA procedure that is too lengthy (e.g., greater than 15–20 questions). In this respect, datasets from larger EMA studies can be examined for strong factors or latent predictors, and questions touching on similar constructs can be combined into single items. Iterative development steps could include subsetting samples into groups that receive different question sets and analyzing the predictive capabilities of each question set.

Lapses are Uncommon Occurrences

Problem

Lapses occur much less often than non-lapses. This unbalanced distribution means that data must be collected over a long enough time frame to capture enough lapses to create a mathematical model. These time windows are heavily dependent on the behaviors of interest, and they can be estimated from group-level data (i.e., aggregates across all participants from previous studies). Unfortunately, collecting extensive longitudinal data can create concerns related to time and resource allocation that should be carefully considered.

Recommendation

In order to efficiently utilize limited resources, one solution may be to include fewer participants across a longer time span, which may produce a similar number of lapse reports to running many participants across a shorter window of time [48]. Additionally, the inclusion of long periods of time within individuals increases the chance of capturing the infrequent behavior of interest. We aimed to collect about 25 cases of interest within each participant. Because these data still comprised substantially more non-lapse than lapse cases (unbalanced data), non-lapse cases were randomly sampled in a 1:1 ratio to lapse cases via the ROSE package in the R statistical computing software (e.g., see [49, 50]). This 1:1 sampling resulted in oversampling of the lapse cases and undersampling of the non-lapse cases in order to create a balanced data set, which is suggested for optimal classification accuracy in these types of models [51]. Future research focusing on lapsing should consider these options. Based on previous work and theory, estimates can be made about behavior frequency, and phase length can be defined accordingly so as to capture an adequate sample. Resource limitations can be addressed through careful design and iterative refinement.
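The authors used the R package ROSE for this balancing step; purely as an illustration of the same 1:1 over/undersampling idea, here is a small Python sketch on synthetic data with the reported 326:2572 class ratio.

```python
import numpy as np

def balance_one_to_one(X, y, seed=0):
    """Create a 1:1 lapse/non-lapse dataset by oversampling lapse cases (with
    replacement) and undersampling non-lapse cases (assumed to be the majority)."""
    rng = np.random.default_rng(seed)
    idx_lapse, idx_nonlapse = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    n_per_class = (len(idx_lapse) + len(idx_nonlapse)) // 2
    keep = np.concatenate([
        rng.choice(idx_lapse, n_per_class, replace=True),       # oversample lapses
        rng.choice(idx_nonlapse, n_per_class, replace=False),   # undersample non-lapses
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2898, 5))
    y = np.r_[np.ones(326, dtype=int), np.zeros(2572, dtype=int)]  # ~1:12 lapse ratio
    X_bal, y_bal = balance_one_to_one(X, y)
    print(y_bal.mean())  # ~0.5 after balancing
```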

Complexities of Machine Learning

Problem

Developers of JITAIs should be mindful when employing machine learning techniques because their statistical and practical application can be time-consuming. Statistically, there are a myriad of (1) methods to handle missing data, (2) cross-validation procedures, (3) model parameters to optimize, (4) combinations of models to test, and (5) variable extraction procedures. As a result, there is a functionally infinite number of models to examine, and there may be no such thing as a “right” model. Rather, one is artfully selected based on the above-mentioned factors, and an appropriate stopping point for analyses is chosen by the research team. Practically, the integration of the chosen model into the JITAI app is logistically challenging, as data analysis packages often utilize different coding languages than app development packages.
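The combinatorial burden can be seen even in a toy cross-validated search over two models and a handful of parameter values, sketched below with scikit-learn on synthetic data; the candidate models and grids are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data; the point is the size of the search space, not the result.
rng = np.random.default_rng(2)
X = rng.normal(size=(600, 5))
y = (rng.random(600) < 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0)))).astype(int)

# Two models x a few parameter values x 5 CV folds already yields dozens of fits;
# adding missing-data strategies and variable extraction steps multiplies further.
candidates = {
    "logistic": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "random_forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [3, None]}),
}
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5, scoring="balanced_accuracy").fit(X, y)
    print(f"{name}: best={search.best_params_} score={search.best_score_:.2f}")
```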

Recommendation

Machine learning can be a potent tool to enhance the effectiveness of a JITAI. However, researchers should consider the ultimate goal of the JITAI before deciding to implement a machine learning solution. Utilizing machine learning methods would be contraindicated when enough theoretical and empirical evidence is available to construct an effective decision rule a priori. For example, an app for alcohol use disorders that is based on evidence that sensitivity to alcohol-related stimuli is related to relapse [52] could trigger an intervention any time the user is within a 1000-m radius of a liquor store (regardless of any other variables). An app of this nature would not need to use machine learning to guide decision rules, as the relationship between the chosen tailoring variable and the proximal outcome has been specified a priori based on available research support. A thorough examination of timeline and resources should be conducted prior to JITAI development. The implementation of a machine learning algorithm within a JITAI involves tedious management of a multidisciplinary team of experts (e.g., biostatisticians, computer scientists, program developers). Principal investigators should also consider staffing teams with individuals who have dual expertise (e.g., computer science and behavioral health; data mining and computer programming) so as to further reduce communication barriers between disciplines.
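Such a fixed, theory-driven rule is simple to implement; for example, a geofence check like the hypothetical sketch below (coordinates and radius illustrative) requires no learned model at all.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_intervene(user_loc, store_locations, radius_m=1000.0):
    """Fire the intervention whenever the user is within radius_m of any store."""
    return any(haversine_m(*user_loc, *store) <= radius_m for store in store_locations)

stores = [(39.9566, -75.1899)]  # hypothetical coordinates
print(should_intervene((39.9600, -75.1850), stores))  # True: roughly 560 m away
```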

Engagement and Receptivity

Problem

One of the most pervasive issues in the field of mobile app development is the “law of attrition,” i.e., the tendency of individuals to stop using novel technologies and/or drop out of research studies [53]. Though this concept is a significant concern for all app developers, DietAlert’s frequent assessment procedures created high amounts of user burden relative to many other types of apps, and thus created the potential for a steep attrition curve.

Recommendations

To combat the law of attrition, participants in the preliminary phase of DietAlert were paid $180 (with $0.50 deductions for missed surveys). These procedures resulted in high compliance (reported above), with only one participant lost to follow-up. The monetary compensation, though necessary to ensure adherence and minimize missing data for the machine learning procedures, likely incentivized behavior change and user response to the app. Therefore, any impact of the intervention is confounded by the monetary incentive. Continuing to utilize monetary compensation in future iterations would severely limit the scalability of DietAlert and, as such, is not a feasible long-term option. We therefore chose not to pay the users in our open trial for completing EMA prompts in order to isolate the effect of the intervention and set feasible expectations for compliance within future real-world implementations of DietAlert. Without payment, compliance remains high (M = 85.4%) and there have been no dropouts to date (of the 10 current users).

There are a myriad of additional factors that could impact the slope of DietAlert’s attrition curve. It is likely that our participant management procedures, such as managing expectations and frequently obtaining feedback, had a profound influence on the shape and steepness of our attrition curve [53]. Additionally, qualitative data indicated that the frequent DietAlert prompting procedures were viewed as beneficial by participants. During our preliminary studies in which participants were only receiving EMA prompts (no alerts to lapse risk), users reported that, “using DietAlert kept me on my toes and conscious of what my meals were for the day. The program helped me to be disciplined in food decisions” and “it helped me to see where I had breakdowns with my eating habits and taking ownership of my selections in food.” These reports, combined with feedback that the app was “non-intrusive,” lead us to believe that we have been striking a balance between perceived benefit and low burden that is contributing to our flattened attrition curve. It is recommended that researchers strongly consider factors related to attrition when developing JITAIs and designing study protocols. Further, analyzing attrition curves using differing measures of usage can assist in identifying factors that influence attrition rates [53].

Conclusions

The methodologies we utilized to develop DietAlert are likely to be applicable to other JITAIs targeting addictive behaviors given the similarities of these behaviors to dietary lapse. JITAIs appear to be a promising framework for developing mHealth interventions for addictions; however, this research is still in its infancy, meaning that effective and efficient design methods are moving targets due to the quick pace of technology growth. The development of DietAlert illustrates the utility of a semi-structured design framework, as well as salient concerns and recommendations for future researchers. Continued communication regarding the use of unified and innovative development methods is essential for progress of JITAI development and evaluation.

Acknowledgments

The current study was funded by the Karen Miller-Kovach research grant from Weight Watchers and The Obesity Society awarded to Dr. Forman.

Footnotes

Compliance with Ethical Standards

Funding

The study was funded by the Karen Miller-Kovach research grant from Weight Watchers and The Obesity Society.

Conflict of Interest

Evan Forman received a research grant from Weight Watchers to support the development of DietAlert, a companion smartphone application to assist with dietary adherence.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

References

1. Marlatt GA, Donovan DM. Relapse prevention: maintenance strategies in the treatment of addictive behaviors. Guilford Press; 2005.
2. McKay JR. Is there a case for extended interventions for alcohol and drug use disorders? Addiction. 2005;100(11):1594–610. doi: 10.1111/j.1360-0443.2005.01208.x.
3. Murphy SA. Workshop on Just In Time Adaptive Interventions. Ann Arbor: University of Michigan; 2013. https://community.isr.umich.edu/public/jitai/Workshop.aspx
4. Nahum-Shani I, Hekler EB, Spruijt-Metz D. Building health behavior models to guide the development of just-in-time adaptive interventions: a pragmatic framework. Health Psychol. 2015;34(S):1209. doi: 10.1037/hea0000306.
5. Nahum-Shani I, Smith SN, Spring BJ, Collins LM, Witkiewitz K, Tewari A, et al. Just-in-time adaptive interventions (JITAIs) in mobile health: key components and design principles for ongoing health behavior support. Ann Behav Med. 2016:1–17. doi: 10.1007/s12160-016-9830-8.
6. Aldhaban F. Exploring the adoption of smartphone technology: literature review. In: Proceedings of PICMET ’12: Technology Management for Emerging Technologies. IEEE; 2012.
7. Depp CA, Mausbach B, Granholm E, Cardenas V, Ben-Zeev D, Patterson TL, et al. Mobile interventions for severe mental illness: design and preliminary data from three approaches. J Nerv Ment Dis. 2010;198(10):715. doi: 10.1097/NMD.0b013e3181f49ea3.
8. King AC, Ahn DK, Oliveira BM, Atienza AA, Castro CM, Gardner CD. Promoting physical activity through hand-held computer technology. Am J Prev Med. 2008;34(2):138–42. doi: 10.1016/j.amepre.2007.09.025.
9. Wei J, Hollin I, Kachnowski S. A review of the use of mobile phone text messaging in clinical and healthy behaviour interventions. J Telemed Telecare. 2011;17(1):41–8. doi: 10.1258/jtt.2010.100322.
10. Nahum-Shani I, Smith SN, Tewari A, Witkiewitz K, Collins LM, Spring B, et al. Just in time adaptive interventions (JITAIs): an organizing framework for ongoing health behavior support. Methodol Cent Tech Report. 2014:14–126.
11. Hebebrand J, Albayrak Ö, Adan R, Antel J, Dieguez C, de Jong J, et al. “Eating addiction”, rather than “food addiction”, better captures addictive-like eating behavior. Neurosci Biobehav Rev. 2014;47:295–306. doi: 10.1016/j.neubiorev.2014.08.016.
12. Stone AA, Shiffman S. Ecological momentary assessment (EMA) in behavioral medicine. Ann Behav Med. 1994;16(3):199–202.
13. Hekler EB, Klasnja P, Riley WT, Buman MP, Huberty J, Rivera DE, et al. Agile science: creating useful products for behavior change in the real world. Transl Behav Med. 2016:1–12. doi: 10.1007/s13142-016-0395-7.
14. Franz MJ, VanWormer JJ, Crain AL, Boucher JL, Histon T, Caplan W, et al. Weight-loss outcomes: a systematic review and meta-analysis of weight-loss clinical trials with a minimum 1-year follow-up. J Am Diet Assoc. 2007;107(10):1755–67. doi: 10.1016/j.jada.2007.07.017.
15. Wadden TA, Butryn ML. Behavioral treatment of obesity. Endocrinol Metab Clin North Am. 2003;32(4):981–1003. doi: 10.1016/s0889-8529(03)00072-0.
16. Wadden TA, Frey DL. A multicenter evaluation of a proprietary weight loss program for the treatment of marked obesity: a five-year follow-up. Int J Eat Disord. 1997;22(2):203–12. doi: 10.1002/(sici)1098-108x(199709)22:2<203::aid-eat13>3.0.co;2-1.
17. Wadden TA, Sternberg JA, Letizia KA, Stunkard AJ, Foster GD. Treatment of obesity by very low calorie diet, behavior therapy, and their combination: a five-year perspective. Int J Obes. 1989;13(Suppl 2):39–46.
18. Jeffery RW, Epstein LH, Wilson GT, Drewnowski A, Stunkard AJ, Wing RR. Long-term maintenance of weight loss: current status. Health Psychol. 2000;19(1S):5. doi: 10.1037/0278-6133.19.suppl1.5.
19. Fontaine KR, Cheskin LJ. Self-efficacy, attendance, and weight loss in obesity treatment. Addict Behav. 1997;22:567–70. doi: 10.1016/s0306-4603(96)00068-8.
20. Kramer FM, Jeffery RW, Forster JL, Snell MK. Long-term follow-up of behavioral treatment for obesity: patterns of weight regain among men and women. Int J Obes. 1989;13:123–36.
21. Stalonas PM, Perri MG, Kerzner AB. Do behavioral treatments of obesity last? A five-year follow-up investigation. Addict Behav. 1984;9:175–83. doi: 10.1016/0306-4603(84)90054-6.
22. Lowe MR. Self-regulation of energy intake in the prevention and treatment of obesity: is it feasible? Obes Res. 2003;11(Suppl):44S–59. doi: 10.1038/oby.2003.223.
23. Wilson GT. Behavioral treatment of obesity: thirty years and counting. Adv Behav Res Ther. 1994;16(1):31–75.
24. Carels RA, Douglass OM, Cacciapaglia HM, O’Brien WH. An ecological momentary assessment of relapse crises in dieting. J Consult Clin Psychol. 2004;72(2):341. doi: 10.1037/0022-006X.72.2.341.
25. Carels RA, Hoffman J, Collins A, Raber AC, Cacciapaglia H, O’Brien WH. Ecological momentary assessment of temptation and lapse in dieting. Eat Behav. 2002;2(4):307–21. doi: 10.1016/s1471-0153(01)00037-x.
26. McKee HC, Ntoumanis N, Taylor IM. An ecological momentary assessment of lapse occurrences in dieters. Ann Behav Med. 2014;48(3):300–10. doi: 10.1007/s12160-014-9594-y.
27. Butryn ML, Phelan S, Hill JO, Wing RR. Consistent self-monitoring of weight: a key component of successful weight loss maintenance. Obesity. 2007;15(12):3091–6. doi: 10.1038/oby.2007.368.
28. Burke LE, Conroy MB, Sereika SM, Elci OU, Styn MA, Acharya SD, et al. The effect of electronic self-monitoring on weight loss and dietary intake: a randomized behavioral weight loss trial. Obesity. 2011;19(2):338–44. doi: 10.1038/oby.2010.208.
29. Wadden TA, West DS, Neiberg RH, Wing RR, Ryan DH, Johnson KC, et al. One-year weight losses in the Look AHEAD study: factors associated with success. Obesity. 2009;17(4):713–22. doi: 10.1038/oby.2008.637.
30. Grenard JL, Stacy AW, Shiffman S, Baraldi AN, MacKinnon DP, Lockhart G, et al. Sweetened drink and snacking cues in adolescents: a study using ecological momentary assessment. Appetite. 2013;67:61–73. doi: 10.1016/j.appet.2013.03.016.
31. Thomas JG. Toward a better understanding of the development of overweight: a study of eating behavior in the natural environment using ecological momentary assessment. Drexel University; 2009.
32. Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1–32. doi: 10.1146/annurev.clinpsy.3.022806.091415.
33. Foreyt JP, Goodrick GK. Evidence for success of behavior modification in weight loss and control. Ann Intern Med. 1993;119:698–701. doi: 10.7326/0003-4819-119-7_part_2-199310011-00014.
34. Witkiewitz K, Desai SA, Bowen S, Leigh BC, Kirouac M, Larimer ME. Development and evaluation of a mobile intervention for heavy drinking and smoking among college students. Psychol Addict Behav. 2014;28(3):639. doi: 10.1037/a0034747.
35. Domingos P. A few useful things to know about machine learning. Commun ACM. 2012;55(10):78–87.
36. Pereira F, Mitchell T, Botvinick M. Machine learning classifiers and fMRI: a tutorial overview. Neuroimage. 2009;45(1):S199–209. doi: 10.1016/j.neuroimage.2008.11.007.
37. Goldstein SP. A preliminary investigation of a personalized risk alert system for weight control lapses. Drexel University; 2016.
38. Kotsiantis SB, Zaharakis I, Pintelas P. Supervised machine learning: a review of classification techniques. 2007.
39. Mulvaney SA, Ritterband LM, Bosslet L. Mobile intervention design in diabetes: review and recommendations. Curr Diab Rep. 2011;11(6):486–93. doi: 10.1007/s11892-011-0230-y.
40. Abraham C, Michie S. A taxonomy of behavior change techniques used in interventions. Health Psychol. 2008;27(3):379. doi: 10.1037/0278-6133.27.3.379.
41. Tang J, Abraham C, Stamp E, Greaves C. How can weight-loss app designers’ best engage and support users? A qualitative investigation. Br J Health Psychol. 2015;20(1):151–71. doi: 10.1111/bjhp.12114.
42. Collins LM, Murphy SA, Nair VN, Strecher VJ. A strategy for optimizing and evaluating behavioral interventions. Ann Behav Med. 2005;30(1):65–73. doi: 10.1207/s15324796abm3001_8.
43. Whittaker R, Merry S, Dorey E, Maddison R. A development and evaluation process for mHealth interventions: examples from New Zealand. J Health Commun. 2012;17(sup1):11–21. doi: 10.1080/10810730.2011.649103.
44. Lee JH, Huber J Jr. Multiple imputation with large proportions of missing data: how much is too much? In: United Kingdom Stata Users’ Group Meetings 2011. Stata Users Group; 2011.
45. Ziegler ML. Variable selection when confronted with missing data. University of Pittsburgh; 2006.
46. Quinlan JR. Induction of decision trees. Mach Learn. 1986;1(1):81–106.
47. Liu WZ, White AP, Thompson SG, Bramer MA. Techniques for dealing with missing values in classification. In: Advances in Intelligent Data Analysis Reasoning about Data. Springer; 1997. p. 527–36.
48. Guo Y, Logan HL, Glueck DH, Muller KE. Selecting a sample size for studies with repeated measures. BMC Med Res Methodol. 2013;13(1):100. doi: 10.1186/1471-2288-13-100.
49. Lunardon N, Menardi G, Torelli N. ROSE: a package for binary imbalanced learning. The R Journal. 2014:79.
50. R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015.
51. Batista GE, Prati RC, Monard MC. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explorations Newsletter. 2004;6(1):20–9.
52. Cooney NL, Litt MD, Morse PA, Bauer LO, Gaupp L. Alcohol cue reactivity, negative-mood reactivity, and relapse in treated alcoholic men. J Abnorm Psychol. 1997;106(2):243. doi: 10.1037//0021-843x.106.2.243.
53. Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11. doi: 10.2196/jmir.7.1.e11.
