Abstract
Background: Understanding mobile health (mHealth) user engagement patterns and their association with adherence in chronic disease populations is critical for data quality and clinical outcomes. Yet this remains unexplored, a missed opportunity given the unique value of mHealth data for traditionally under-documented conditions. Methods: We analyzed 13,997 total days of data from 131 female participants with chronic pelvic pain disorders (CPPDs) and 72 healthy controls. Participants tracked daily symptoms using a research App for 90 days. Using mixed-effects regression, we investigated predictors of adherence and consistency in App engagement, with CPPD burden and temporal accessibility as moderators. Results: Later-day engagement was the strongest predictor of user consistency, independent of pain interference or CPPD status. Habitual evening tracking was associated with better adherence for participants with CPPDs versus healthy controls. Consistency was the strongest predictor of adherence. Conclusion: Prioritizing habit and preference over fixed-time assessments could improve temporal accessibility and data completeness.
Additional Key Words and Phrases: digital health, mHealth, engagement, temporal adherence, chronic pain
1. Introduction
Mobile health applications (mHealth Apps) have emerged as promising tools for longitudinal health and disease symptom monitoring, especially for conditions that are poorly understood and under-documented in traditional data sources such as electronic health records (EHRs) [8]. Adherence remains a challenge, with engagement rates often declining substantially over time in multi-week studies [17, 29], as well as from initial uptake to longer-term follow-up (e.g., from 73.6% to 28% after 7 months among individuals with chronic pain [3]). This attrition threatens both data quality and clinical benefit, as engagement frequency has been linked to improved pain outcomes [27]. Existing work on mHealth self-tracking has focused on frequency, duration, or completion, leaving the timing of engagement largely unexplored. Unlike fixed-timing ecological momentary assessment (EMA) studies, self-initiated tracking allows flexible time windows [25, 29]. Accordingly, it remains unknown whether these temporal patterns carry predictive information about adherence and whether fixed timing creates systematic barriers for clinical populations. This aligns with recent calls to design technology that accommodates the variable ability and temporal experiences of chronically ill users [20].
We herein investigate this gap in women with conditions commonly associated with chronic pelvic pain (e.g., endometriosis, adenomyosis, fibroids) due to their high symptom burden. CPPDs affect approximately 20% of the global female population and account for 40% of laparoscopies and 12% of hysterectomies in the US annually [19]. Yet, they remain poorly understood due to the difficulty of capturing symptom fluctuations over time. Temporal patterns in engagement offer a novel lens for understanding adherence facilitators and barriers, and can guide personalized management strategies, thus representing a high-value intervention target. Habit formation literature suggests morning routines may achieve automaticity faster than evening routines in general populations [2, 11, 12], with morning practices reaching habit strength approximately 45% sooner in experimental studies [11]. However, these findings primarily rely on general population samples and may not account for symptom burden in clinical populations (e.g., CPPDs), thus limiting our understanding of mHealth feasibility and benefit. In contrast, user-centered mHealth designs that tailor recommendations and messages based on motivation level could enhance adherence and self-efficacy in chronic pain populations [7]. Similarly, patient response patterns can be strongly influenced by the environment they are in at the time of an alert, which can be leveraged to optimize when a participant is most likely to complete tasks [4].
Research Questions and Aims
We used data from a 14-week observational study involving daily App-based self-tracking of health symptoms and behaviors among participants with CPPDs and matched healthy controls to investigate: RQ1: Temporal Consistency Effects: Is self-tracking timing consistency associated with adherence beyond chronic symptom burden? We hypothesized that lower variability in response times would predict higher adherence, reflecting successful habit formation. RQ2: Population-Specific Temporal Accessibility: Do daily engagement patterns over time and their relationships with adherence differ by CPPD diagnosis? We hypothesized that CPPD-related pain may disrupt daily routines and impose variable symptom burden, thus acting as a differential barrier.
2. Methods
2.1. Study Sample and Design
Analyses included 13,997 person-level days (i.e., 2659 person-weeks) of data from 203 participants (131 with a CPPD diagnosis, 72 healthy controls) across 90 days (14 weeks) of tracking. Cases had a confirmed CPPD diagnosis, and controls were healthy volunteers with no history of CPPDs, matched on age, race/ethnicity, education, and employment status. Participants used the ehive research mHealth app [15] to self-track pain, quality of life (QoL), sleep, and mood daily, along with the weekly administered PROMIS Pain Interference Questionnaire (PIQ). Surveys were self-initiated and could be completed at any time during the day. All participants provided informed consent, and the study protocol was approved by the Institutional Review Board.
2.2. Study Outcomes and Variables
Temporal Consistency.
We calculated: (1) Dominant Time-of-Day, defined as the person-level mode of the tracking period each week (Morning: until 11:59; Afternoon: 12:00–17:59; Evening: 18:00 and later); and (2) Temporal Variability, defined as the standard deviation (SD) of the tracking hour within each week. Consistent responders were defined as those with a weekly SD < 4 hours. This 4-hour threshold was selected based on a preliminary sensitivity analysis showing similar person-level SDs for cases (M=4.68h) and controls (M=4.48h), rounded down to a conservative boundary distinguishing consistent from variable behavior.
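These two metrics can be computed directly from per-day timestamps. Below is a minimal sketch (in Python for illustration only; the study's analyses were conducted in R, and the toy data and function names here are hypothetical):

```python
from statistics import mode, stdev

def time_of_day(hour):
    """Map a 24h tracking hour to the study's three windows."""
    if hour < 12:
        return "Morning"    # until 11:59
    if hour < 18:
        return "Afternoon"  # 12:00-17:59
    return "Evening"        # 18:00 and later

def weekly_consistency(hours, threshold=4.0):
    """Within-week SD of tracking hour; 'consistent' if SD < 4h."""
    sd = stdev(hours)
    return sd, sd < threshold

# Hypothetical participant: one week of tracking hours
week = [21, 22, 20, 22, 21, 23, 22]
dominant = mode(time_of_day(h) for h in week)  # -> "Evening"
sd, consistent = weekly_consistency(week)      # sd ~ 0.98h -> consistent
```

The dominant time-of-day entered in the models is then the person-level mode of this weekly category across the study.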
Overall Adherence.
Overall adherence was calculated as the percentage of enrolled days on which participants self-tracked at least one daily symptom: (days with ≥1 response / total days enrolled) × 100, where total days enrolled = (end-of-study date - informed consent date) + 1. To avoid ecological fallacy and multicollinearity in models, we decomposed weekly PIQ T-scores into between-person mean (chronic burden) and within-person centered (acute weekly fluctuation) [1].
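The two computations above can be made concrete with a short sketch (Python for illustration only; the study's analyses were run in R, and the example values are hypothetical):

```python
from datetime import date

def adherence_pct(response_days, consent_date, end_date):
    """Percent of enrolled days with at least one tracked symptom."""
    total_enrolled = (end_date - consent_date).days + 1
    return 100 * len(response_days) / total_enrolled

def decompose(weekly_scores):
    """Split weekly PIQ T-scores into a between-person mean
    (chronic burden) and within-person centered deviations
    (acute weekly fluctuations)."""
    person_mean = sum(weekly_scores) / len(weekly_scores)
    return person_mean, [s - person_mean for s in weekly_scores]

# Hypothetical participant: 63 tracked days over a 90-day enrollment
adh = adherence_pct(set(range(63)), date(2024, 1, 1), date(2024, 3, 30))  # 70.0
chronic, acute = decompose([55, 58, 52, 61])  # 56.5, [-1.5, 1.5, -4.5, 4.5]
```

Entering the between- and within-person components as separate predictors is what keeps chronic burden and acute fluctuation from being conflated in the models.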
2.3. Statistical Analyses
Tracking Adherence and Consistency.
We used mixed-effects models (via the lmerTest package [18]) on weekly aggregated data (e.g., number of days tracked per week) to account for the repeated-measures data structure. For RQ1, we used a generalized linear mixed model (GLMM) to estimate weekly adherence as a binomial outcome (days tracked vs. days missed per week). For RQ2, we used a linear mixed model (LMM) to estimate temporal consistency (within-week SD) from the following predictors: dominant time-of-day category (person-level mode), CPPD status, and PIQ scores. To assess disease specificity, we included CPPD × Time-of-Day and CPPD × Pain interactions. Both models adjusted for study week and included participant as a random intercept. All analyses were conducted in R [23] with α = .05.
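The weekly aggregation feeding the binomial GLMM can be sketched as follows (Python for illustration only; the models themselves were fit in R, and the daily-log format here is hypothetical):

```python
from collections import defaultdict

def weekly_binomial(tracked_days, n_weeks):
    """Collapse a set of tracked study-day indices (0-based) into
    per-week (days_tracked, days_missed) pairs, i.e. the binomial
    outcome of the weekly adherence GLMM."""
    counts = defaultdict(int)
    for d in tracked_days:
        counts[d // 7] += 1
    return [(counts[w], 7 - counts[w]) for w in range(n_weeks)]

# Hypothetical participant who tracked on days 0-5 and 7-8
pairs = weekly_binomial({0, 1, 2, 3, 4, 5, 7, 8}, n_weeks=2)
# -> [(6, 1), (2, 5)]

# An R call consistent with the description (a sketch, not the exact code):
# glmer(cbind(tracked, missed) ~ week + track_sd + pain_chronic + pain_acute
#       + time_window * group + (1 | participant), family = binomial)
```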
Sensitivity Analysis: Longitudinal Clustering.
To corroborate our a priori time-of-day categorization and identify emergent temporal engagement phenotypes, we conducted unsupervised clustering via functional mixture models (FMM) [5, 6]. Following published guidelines and other similar studies using this method [5, 28], we converted each participant’s 90 days into functional trajectories (“curves”) by smoothing via Fourier basis expansion. The FMM clusters these smooth functional curves, which represent each participant’s temporal engagement pattern over time, and selects the optimal number of clusters using the Bayesian Information Criterion (BIC). We then evaluated cluster-level summary statistics.
3. Results
3.1. Overall Sample Characterization
Participant demographics are provided in Appendix Table 2. There were no significant demographic differences between participants with versus without CPPDs. The median weekly adherence was 73.7%, consistent with digital engagement trajectories reported in similar studies [17, 29]. Daily tracking times showed a bimodal distribution (See Appendix Fig. 3a), with a morning peak (around 10am) and an evening peak (around 10pm). Evening (vs afternoon) responders had higher adherence (71.9% vs 59.1%; Tukey HSD p = .010; Figure 1a). Morning completion showed the largest case–control adherence gap (See Figure 1b). Timing variability was comparable between CPPD cases and controls (M=4.86h vs 4.48h; t=1.55, p=.123), and 69.4% of participants exhibited variable timing (i.e., weekly SD ≥ 4h).
Fig. 1.

(a) Daily adherence by timing consistency and participant group. Consistent responders (weekly SD<4h) achieved 20 percentage points higher adherence than variable responders across both cases and controls (F(1,214)=27.04, p<.001). (b) Daily adherence by time-of-day and participant group. For CPPD cases, evening tracking yielded 15.3 percentage points higher adherence than morning (69.7% vs 54.4%), suggesting symptom-related barriers during earlier windows. Error bars represent 95% confidence intervals.
3.2. RQ1: Temporal Consistency Predicts Adherence
Full GLMER results are provided in Table 1. There was a significant CPPD × Time Window interaction whereby evening engagement yielded a unique adherence amplification for CPPD cases (OR = 1.32) relative to controls (Figure 4b). Temporal variability independently predicted lower adherence after controlling for PIQ and study duration (OR = 0.96 per 1-SD increase, p = .025). Neither chronic nor weekly PIQ scores were significant predictors (See Table 1), suggesting that temporal consistency and symptom burden operate as separable influences on engagement.
Table 1.
Mixed-Effects Regression Results. Left: Generalized Linear Mixed-Effects Regression (GLMER) predicting weekly adherence (days tracked per week), reported as Odds Ratios (OR). Right: Linear Mixed Model (LMM) predicting weekly tracking time variability (SD), reported as unstandardized B estimates.
| GLMM – Weekly Adherence | | | |
|---|---|---|---|
| Predictors | OR | CI | p |
| (Intercept) | 5.50 | 3.94 – 7.67 | <0.001 |
| Week on Trial | 0.92 | 0.91 – 0.93 | <0.001 |
| Track Time SD | 0.96 | 0.92 – 0.99 | 0.025 |
| Pain (Chronic Mean) | 0.86 | 0.71 – 1.05 | 0.140 |
| Pain (Acute Centered) | 1.00 | 0.99 – 1.00 | 0.384 |
| Afternoon (vs. Morning) | 1.10 | 0.89 – 1.36 | 0.360 |
| Evening (vs. Morning) | 1.14 | 0.94 – 1.37 | 0.181 |
| Case Group (vs. Control) | 0.82 | 0.53 – 1.30 | 0.402 |
| Afternoon × Case | 1.13 | 0.87 – 1.47 | 0.364 |
| Evening × Case | 1.32 | 1.05 – 1.66 | 0.019 |
| Random Effects | | | |
| σ² | 0.47 | | |
| τ00 Participant | 0.99 | | |
| ICC | 0.68 | | |
| N participants | 203 | | |
| Observations | 2659 | | |
| Marginal R² / Conditional R² | 0.103 / 0.712 | | |

| LMM – Tracking Variability | | | |
|---|---|---|---|
| Predictors | Est. | CI | p |
| (Intercept) | 6.17 | 4.68 – 7.66 | <0.001 |
| Pain Int. (Person Mean) | −0.00 | −0.03 – 0.04 | 0.793 |
| Pain Int. (Acute Ctrd) | −0.02 | −0.06 – 0.02 | 0.297 |
| Tracking Hour Mode | −0.19 | −0.21 – −0.17 | <0.001 |
| CPPD Group (vs. Ctrl) | 0.34 | −0.24 – 0.92 | 0.253 |
| Week on Trial | −0.03 | −0.05 – −0.01 | 0.006 |
| Pain Int × CPPD | 0.03 | 0.01 – 0.07 | 0.205 |
| Random Effects | | | |
| σ² | 5.32 | | |
| τ00 Participant | 1.51 | | |
| ICC | 0.22 | | |
| N participants | 203 | | |
| Observations | 2477 | | |
| Marginal R² / Conditional R² | 0.202 / 0.379 | | |
Note: Est. = Estimates; Ctrd = Centered; Ctrl = Control
3.3. RQ2: Time-of-Day Effects on Timing Variability
Results of the LMM are provided in Table 1 and Figure 4a. The LMM indicated that for every 1-hour shift toward evening, within-week variability decreased by about 11 minutes (β = −0.191, p < .001). This corresponds to about 2.3 hours less variability for a participant who tracks at 20:00 versus 8:00. The interaction between evening engagement and CPPD status was significant (OR = 1.32, 95% CI [1.05, 1.66], p = .019), indicating that CPPD patients who tracked predominantly in the evening had substantially higher adherence odds than expected from the additive effects of CPPD status and time-of-day alone (See Figure 4b).
3.4. FMM-Identified Temporal Engagement Patterns
The FMMs indicated a 4-cluster solution as optimal (BIC=−8769.1; See Appendix Table 3), with cluster sizes of 18.3% (n=37), 38.1% (n=77), 25.9% (n=53), and 17.5% (n=36). These four temporal engagement patterns are depicted in Figure 2. Clusters 1 and 2 (high SD: 2.9–4.1h) had 41–60% adherence, while Clusters 3 and 4 (low SD: 1.7–2.0h) had 85–93% adherence. This pattern held across CPPD status, corroborating the GLMER finding that consistency benefits both populations. Within Cluster 1, CPPD cases had 18-point higher pain interference (i.e., PIQ scores) than controls (60.7 vs 42.6), but higher adherence (53.3% vs 40.7%), supporting the LMM null finding on pain interference and temporal variability. Cluster-level summary statistics are provided in Appendix Table 4.
Fig. 2.

FMM-identified clusters based on time of App-based self-tracking (y-axis) over 90 days (x-axis).
4. Discussion
This work contributes novel evidence on how temporal stability and time-of-day patterns relate to adherence in a chronic pain mHealth context, and articulates corresponding mHealth design principles. We identify three key insights that point to self-tracking consistency as a behavioral marker:
Routine Stability Predicts Engagement. The first key finding is that how consistently a user engages matters more than how much pain they are in. Contrary to the common assumption that high symptom burden drives attrition, our models indicated that within-week temporal consistency, but not pain, was a significant predictor of adherence. That is, users with established stable timing habits (lower SD) maintained higher adherence regardless of their symptom status. This is in line with previous reports that user consistency predicts sustained compliance better than demographic or clinical characteristics [17, 29]. Further, routine consistency improved (β = −0.031, p = .006) while adherence declined (β = −0.085, p < .001) over the 14 weeks, likely reflecting attrition bias whereby less motivated participants dropped out, leaving more consistent responders. It is possible that mHealth fatigue ("habituation decay") outweighs the benefits of routine learning over time, and thus incentive structures and engagement refreshers are needed to sustain adherence beyond the initial weeks.
Chrono-Ergonomics and Evening Anchoring. The LMM results suggest a possible routine anchoring effect of later-day tracking that reduced within-week variability (See Table 1). However, this stability translated into adherence differently for those with vs without a CPPD, with a unique adherence amplification for the former. Unsupervised clustering analysis independently identified an "Evening-Stable" cluster where adherence was similar between participants with vs without a CPPD (See Appendix Table 4), in contrast to the "Morning-Moderate" cluster, which showed a 12.6% group difference. This convergent evidence from a method blind to group status and our hypotheses supports a flexible chrono-ergonomic framework to accommodate those with symptom-related constraints [14, 21].
Population-Specific Temporal Accessibility. Third, pain interference did not significantly predict either adherence or routine consistency. Morning periods may be challenging for individuals with chronic pain [21], explaining why general population principles fail to transfer, and this could be due to various possible mechanisms. The relationship between pain and adherence could be indirect and mediated by temporal factors, e.g., adherence is hindered because patient symptoms disrupt the consistent daily routines that enable habit formation. These findings suggest separability of pain management from adherence management, and that targeting routine stability may be more effective than simply addressing pain intensity in interventions. In sum, optimal timing could be population-specific, though future work can investigate whether this framework applies to other conditions with similar symptomology [9, 13, 14, 26].
Design Implications. These findings provide actionable levers for designers of mHealth apps (timing of prompts, onboarding), researchers setting mHealth-based tracking protocols (flexible vs fixed times), and clinicians using self-tracking as part of chronic symptom management. First, rather than prescribing times, onboarding algorithms can help users identify their natural "stability window": the time of day when they can most consistently engage, even on their worst symptom days. Second, systems could provide consistency-based feedback (e.g., visualizing timing precision [22, 24]) rather than completion rates alone, reinforcing habit loops and enhancing self-efficacy through individualized messages that highlight each participant's specific strengths and support intrinsic motivation [7]. Third, adaptive quiet periods could be implemented to determine participants' optimal engagement window based on contextual factors [4]. For example, just-in-time adaptive interventions could use temporal pattern recognition to detect a user's emerging stability window and provide increasingly targeted support during that window.
4.1. Limitations and Future Directions
Several measurement limitations warrant consideration. First, observational data preclude causal claims about the direction of the consistency–adherence relationship; future work can examine mechanisms underlying consistency benefits and experimentally manipulate notification timing to test causal effects. Second, the 14-week duration limits assessment of whether consistency patterns persist long-term or reflect initial habit formation phases. Third, the 4-hour consistency threshold requires validation as a meaningful boundary versus a context-specific one. Finally, generalizability may be limited to female populations and those with chronic pain conditions, so whether findings extend to other symptomatic populations should be tested.
5. Conclusion
These findings support reframing mHealth accessibility in research to accommodate users with varying temporal availability or agency and thereby reduce systematic bias. Personalization of mHealth design can support routine formation at personally sustainable times rather than prescribing "optimal" windows derived from general populations. Such "chrono-ergonomic" designs can reduce the burden of self-tracking and improve the inclusivity of mHealth research [10, 16]. As digital health tools become integral to clinical care and research, attention to temporal accessibility represents a critical dimension of inclusive design.
CCS Concepts:
• Software and its engineering → Software design tradeoffs; • Applied computing → Health care information systems; Health informatics; • Social and professional topics → Women; Remote medicine; • Human-centered computing → Interactive systems and tools; Accessibility.
Acknowledgments
We thank all study participants for their time and engagement. This study was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) of the National Institutes of Health (NIH) under Award Number R01HD108263 (PI: Ensari).
A. Appendix
A.1. Sample Characteristics
Table 2.
Participant demographics by study group. Values are Mean (SD) or N (%).
| Characteristic | CPPD Case (n=131) | Control (n=72) |
|---|---|---|
| Age (years) | 36.6 (9.2) | 36.2 (9.5) |
| Race/Ethnicity a | ||
| White | 86 (50%) | 40 (53%) |
| Black | 44 (26%) | 17 (23%) |
| Asian | 17 (10%) | 6 (8.0%) |
| Hispanic/Latino | 47 (28%) | 19 (27%) |
| Other/Unknown | 24 (14%) | 12 (16%) |
| Education Level | ||
| College degree+ | 144 (83%) | 61 (84%) |
| Some college or less | 28 (17%) | 14 (19%) |
| Employment Status | ||
| Employed | 138 (78%) | 62 (83%) |
| Student | 9 (5.1%) | 4 (5.3%) |
| Inactiveb | 25 (14%) | 9 (12%) |
| Household Income a | ||
| Under $50,000 | 27 (18%) | 13 (19%) |
| $50,000–$149,999 | 62 (41%) | 32 (46%) |
| $150,000 and over | 51 (34%) | 25 (36%) |
Missing data: Age n=35 (14%); Race/Ethnicity n=17 (7%); Education n=4 (1.6%); Income n=33 (13%).
b "Inactive" includes Unemployed (n=21), Unable to Work (n=10), Caregiver (n=4), and Retired (n=1).
Fig. 3.

(a) Cohort-wide distribution of survey completion times (N=14,235 surveys) showing a bimodal pattern, with morning peaks at 10–11am (~10% of surveys) and evening peaks from 9pm–midnight (11–12% per hour). This distribution motivated categorization of participants into morning, afternoon, and evening phenotypes based on their dominant tracking time. (b) Supplementary visualization of the consistency–adherence relationship. The box plot shows adherence distributions for consistent (SD<4h) versus variable (SD≥4h) responders. Timing variability was negatively correlated with adherence (r = −0.36, p < .0001). Consistent responders had a median adherence of 92.4%, while variable responders had a median of 62.6% with wider spread, consistent with the group comparison shown in Figure 1a.
Fig. 4.

(a) Fixed-effects (i.e., group-level) point estimates of the LMM. (b) GLMER interaction effects (CPPD Status × Pain Interference). Both models include participant as a random intercept and are adjusted for study duration. Pain Int. = PROMIS Pain Interference score.
A.2. Sensitivity Analysis: Functional Clustering of Temporal Trajectories
A.2.1. Methods.
For the clustering analysis, we implemented the funFEM algorithm by Bouveyron et al. [5, 6], which employs a discriminative functional mixture model. For each participant, daily tracking hours across the 90-day study period are treated as discrete observations of an underlying continuous function. After smoothing these into functional curves via a Fourier basis expansion, the funFEM algorithm clusters them using expectation-maximization (EM), alternating between estimating posterior cluster probabilities, determining the subspace orientation, and updating mixture parameters. We tested K=2–6 clusters using the AkjBk model (i.e., cluster-specific subspace dimensions and noise variances) available in the funFEM R package [6].
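The smoothing step, converting each participant's 90 daily tracking hours into a functional curve, can be approximated with an ordinary least-squares fit of a Fourier basis. The sketch below uses Python with NumPy for illustration only; the study used the funFEM R package, and the harmonic count here is an arbitrary choice:

```python
import numpy as np

def fourier_smooth(hours, n_harmonics=3):
    """Least-squares projection of a daily tracking-hour series onto
    a Fourier basis (constant + sine/cosine harmonics), mirroring the
    smoothing step that precedes funFEM clustering."""
    t = np.linspace(0.0, 1.0, len(hours))
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    basis = np.column_stack(cols)  # 90 x (2K+1) design matrix
    coef, *_ = np.linalg.lstsq(basis, np.asarray(hours, float), rcond=None)
    return basis @ coef            # smoothed curve, length 90

# Hypothetical stable evening tracker: hours near 20:00 plus noise
rng = np.random.default_rng(0)
noisy = 20 + rng.normal(0, 1.5, 90)
smooth = fourier_smooth(noisy)     # lower-variance curve, same mean
```

funFEM then clusters these smoothed curves in a discriminative subspace via EM and selects K by BIC.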
A.2.2. Results.
The model-identified clusters ("phenotypes") are depicted in Figure 2. BIC values for the K=2–6 cluster solutions are provided in Table 3. The four distinct clusters are as follows:
Cluster 1: “Morning-Moderate” (n=37, 18.3%).
This cluster exhibited earlier daily tracking patterns compared to other clusters (See Table 4) with the highest temporal variability (SD=4.1h) for cases. Adherence was lowest among all clusters. CPPD cases in this cluster exhibited the highest pain interference (M=60.7) and lowest pain self-efficacy scores (M=32.5).
Cluster 2: “Afternoon-Variable” (n=77, 38.1%).
This was the largest cluster, characterized by afternoon tracking (mean hour: 16.2–16.8) with moderate-to-high variability (SD=2.9–3.5h). Adherence was moderate (57.2–59.9%), with minimal case-control differences. Pain interference was moderate for both groups.
Cluster 3: “Evening-Stable, High-Adherence” (n=53, 25.9%).
This cluster exhibited later daily tracking times (mean hour: 18.5–18.9) with low variability (SD=1.9–2.0h) and the highest adherence across all clusters regardless of CPPD status (88.7–89.5%, See Table 4). Case-control adherence was nearly identical (89.5% vs 88.7%), both with low temporal variability and high retention (14.7–14.9 weeks).
Cluster 4: “Midday-Stable” (n=36, 17.5%).
This smaller cluster tracked around midday-to-early-afternoon (mean hour: 12.4–13.7) with low variability (SD=1.7–2.6h) and high adherence (84.2–92.9%). Control participants in this cluster had the highest adherence (92.9%, See Table 4), with somewhat lower but still strong adherence for CPPD cases (84.2%). Of note, this cluster was associated with the highest pain self-efficacy scores for the CPPD cases.
Table 3.
BIC and ΔBIC for funFEM cluster solutions.
| K | BIC | ΔBIC |
|---|---|---|
| 2 | −9036.7 | – |
| 3 | −8838.5 | 69.4 |
| 4 | −8769.1 | 0.0 |
| 5 | −8777.0 | −7.9 |
| 6 | −8810.9 | – |
Table 4.
Key Cluster Characteristics by CPPD Status
| Cluster | Group | n | Adh. (%) | Pain Int. | Track Hour | Track SD (h) | Self-Eff. |
|---|---|---|---|---|---|---|---|
| Cluster 1: Morning-Moderate | Control | 15 | 40.7 | 42.6 | 13.3 | 1.4 | 58.4 |
| | Case | 22 | 53.3 | 60.7 | 10.8 | 4.1 | 32.5 |
| Cluster 2: Afternoon-Variable | Control | 29 | 57.2 | 45.0 | 16.2 | 3.5 | 53.9 |
| | Case | 48 | 59.9 | 56.1 | 16.8 | 2.9 | 39.8 |
| Cluster 3: Evening-Stable | Control | 18 | 88.7 | 43.6 | 18.5 | 1.9 | 57.5 |
| | Case | 35 | 89.5 | 57.1 | 18.9 | 2.0 | 36.9 |
| Cluster 4: Midday-Stable | Control | 10 | 92.9 | 44.6 | 12.4 | 1.7 | 53.9 |
| | Case | 26 | 84.2 | 54.9 | 13.7 | 2.6 | 43.0 |
Adh. = Adherence (% days tracked); Pain Int. = PROMIS Pain Interference; Track Hour = Mean tracking hour (24h); Track SD = Within-person variability; Self-Eff. = PROMIS Pain self-efficacy score.
Contributor Information
SAMIA SHAHNAWAZ, Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, United States.
RHYANN CLARKE, Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, United States.
SUZANNE BAKKEN, School of Nursing, Columbia University, United States and Department of Biomedical Informatics, Columbia University, United States.
NOEMIE ELHADAD, Department of Biomedical Informatics, Columbia University, United States.
KYLE LANDELL, Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, United States.
JOVITA RODRIGUES, Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, United States.
MATTEO DANIELETTO, Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, United States.
IPEK ENSARI, Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, United States and Department of Obstetrics, Gynecology, and Reproductive Science, Icahn School of Medicine at Mount Sinai, United States.
References
- [1] Amtmann Dagmara, Cook Karon F., Jensen Mark P., Chen Wen-Hung, Choi Seung, Revicki Dennis, Cella David, Rothrock Nan, Keefe Francis, Callahan Leigh, and Lai Jin-Shei. 2010. Development of a PROMIS item bank to measure pain interference. Pain 150, 1 (2010), 173–182. 10.1016/j.pain.2010.04.025
- [2] Berardi Vincent, Fowers Richard, Rubin Greg, and Stecher Chad. 2023. Time of day preferences and daily temporal consistency for predicting the sustained use of a commercial meditation app: Longitudinal observational study. JMIR mHealth and uHealth 11 (2023), e42482. 10.2196/42482
- [3] Bhatia Anuj, Kara Jamil, Janmohamed Tazeen, Prabhu Abhimanyu, Lebovic Gerald, Katz Joel, and Clarke Hance. 2021. User engagement and clinical impact of the Manage My Pain app in patients with chronic pain: A real-world, multi-site trial. JMIR mHealth and uHealth 9, 3 (2021), e26528. 10.2196/26528
- [4] Boukhechba Mehdi, Cai Lihua, Chow Philip I., Fua Karl, Gerber Matthew S., Teachman Bethany A., and Barnes Laura E. 2018. Contextual analysis to understand compliance with smartphone-based ecological momentary assessment. In Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth '18). ACM, New York, NY, USA, 232–238. 10.1145/3240925.3240967
- [5] Bouveyron Charles, Côme Etienne, and Jacques Julien. 2015. The discriminative functional mixture model for a comparative analysis of bike sharing systems. The Annals of Applied Statistics 9, 4 (2015), 1726–1760. 10.1214/15-AOAS861
- [6] Bouveyron Charles and Jacques Julien. 2021. funFEM: Clustering in the Discriminative Functional Subspace. https://CRAN.R-project.org/package=funFEM R package version 1.1.
- [7] Debackere Florian, Clavel Céline, Roren Alexandra, Rannou François, Nguyen Christelle, Tran Viet Thi, Messai Yosra, and Martin Jean-Claude. 2025. Evaluation of a tailored mobile application for self-management of low back pain: Towards a metamodel for designing behavior change technologies. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). ACM, New York, NY, USA, Article 516, 17 pages. 10.1145/3706598.3713224
- [8] Ensari Ipek, Pichon Adrienne, Lipsky-Gorman Sharon, Bakken Suzanne, and Elhadad Noémie. 2020. Augmenting the clinical data sources for enigmatic diseases: A cross-sectional study of self-tracking data and clinical documentation in endometriosis. Applied Clinical Informatics 11, 5 (2020), 769–784. 10.1055/s-0040-1718755
- [9] Finan Patrick H., Goodin Burel R., and Smith Michael T. 2013. The association of sleep and pain: An update and a path forward. The Journal of Pain 14, 12 (2013), 1539–1552. 10.1016/j.jpain.2013.08.007
- [10] Flaherty Michael G. 2011. The Textures of Time: Agency and Temporal Experiences. Temple University Press, Philadelphia, PA.
- [11] Fournier Marie, d'Arripe-Longueville Julie, and Radel Rémi. 2017. Effects of circadian cortisol on the development of a health habit. Health Psychology 36, 11 (2017), 1059–1064. 10.1037/hea0000510
- [12] Gardner Benjamin, Lally Phillippa, and Wardle Jane. 2012. Making health habitual: The psychology of 'habit-formation' and general practice. British Journal of General Practice 62, 605 (2012), 664–666. 10.3399/bjgp12X659466
- [13] Gibbs Julie E., Blaikley John, Beesley Stephen, Matthews Laura, Simpson KD, Boyce Susan H., Farrow SN, Else Kathryn J., Singh Dave, Ray David W., and Loudon Andrew S. I. 2016. The circadian clock regulates inflammatory arthritis. Proceedings of the National Academy of Sciences 113, 33 (2016), 9433–9438. 10.1073/pnas.1519774113
- [14] Hagenauer Megan H. and Lee Jinkyung. 2022. Circadian pain patterns in human pain conditions: A systematic review. Pain Reports 7, 5 (2022), e1028. 10.1097/PR9.0000000000001028
- [15] Hirten Robert P., Danieletto Matteo, Landell Kyle, Zweig Micol, Golden Eddye, Orlov Georgy, Rodrigues Jovita, Alleva Eugenia, Ensari Ipek, Bottinger Erwin, and Nadkarni Girish. 2023. Development of the ehive digital health app: Protocol for a centralized research platform. JMIR Research Protocols 12, 1 (2023), e49204.
- [16] Hvidt Elisabeth Assing. 2024. "Time work": An analysis of temporal experiences and agentic practices in the "good" doctor-patient relationship in general practice. Health (London) 28, 1 (2024), 144–160. 10.1177/13634593221116504
- [17] Jones Christopher J., Smith Adam M., and Brown Laura K. 2024. Compliance trends in a 14-week ecological momentary assessment study of chronic pain: Predictors and patterns. Assessment 31, 2 (2024), 456–470. 10.1177/10731911231159937
- [18] Kuznetsova Alexandra, Brockhoff Per B., and Christensen Rune Haubo Bojesen. 2017. lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software 82, 13 (2017), 1–26. 10.18637/jss.v082.i13
- [19] Lamvu Georgine, Carrillo Jessica, Ouyang C., and Rapkin Andrea. 2021. Chronic pelvic pain in women: A review. JAMA 325, 23 (2021), 2381–2391. 10.1001/jama.2021.2631
- [20] Mack Kelly, McDonnell Emma J., Findlater Leah, and Evans Heather D. 2022. Chronically under-addressed: Considerations for HCI accessibility practice with chronically ill people. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '22). ACM, New York, NY, USA, Article 9, 1–15. 10.1145/3517428.3544803
- [21] Mayo Clinic. 2024. Chronic pelvic pain - Symptoms and causes. Retrieved January 8, 2026 from https://www.mayoclinic.org/diseases-conditions/chronic-pelvic-pain/symptoms-causes/syc-20354368
- [22] Nahum-Shani Inbal, Smith Shawna N., Spring Bonnie J., Collins Linda M., Witkiewitz Katie, Tewari Ambuj, and Murphy Susan A. 2018. Just-in-time adaptive interventions (JITAIs) in mobile health: Key components and design principles for ongoing health behavior support. Annals of Behavioral Medicine 52, 6 (2018), 446–462. 10.1007/s12160-016-9830-8
- [23] R Core Team. 2023. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
- [24] Rabbi Mashfiqui, Aung Min Hane, Zhang Mi, and Choudhury Tanzeem. 2018. MyBehavior: Automatic personalized health feedback from user behaviors and preferences using smartphones. In Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '18). ACM, New York, NY, USA, 707–716. 10.1145/3267305.3267335
- [25] Shiffman Saul, Stone Arthur A., and Hufford Michael R. 2008. Ecological momentary assessment. Annual Review of Clinical Psychology 4 (2008), 1–32. 10.1146/annurev.clinpsy.3.022806.091415
- [26] Smith Michael T. and Haythornthwaite Jennifer A. 2019. Sleep deficiency and chronic pain: Potential underlying mechanisms and clinical implications. Neuropsychopharmacology 44, 1 (2019), 167–179. 10.1038/s41386-018-0179-x
- [27] Thomson Cameron J., Pahl Holly, and Giles Luisa V. 2024. Randomized controlled trial investigating the effectiveness of a multimodal mobile application for the treatment of chronic pain. Canadian Journal of Pain 8, 1 (2024), Article 2352399. 10.1080/24740527.2024.2352399
- [28] Tricoche Bryan T., Caceres Billy A., Shaw Leslee J., Garber Carol Ewing, Konigorski Stefan, Kolli Sahiti, Fuchs Thomas J., and Ensari Ipek. 2025. Physical activity phenotypes in endometriosis using unsupervised learning via functional mixture models. BMC Women's Health 25 (2025), Article 45. 10.1186/s12905-025-03536-5
- [29] Wen Cheng K., Schneider Stefan, Stone Arthur A., and Spruijt-Metz Donna E. 2023. Evaluating declines in compliance with ecological momentary assessment protocols: A mixed-methods study. JMIR mHealth and uHealth 11 (2023), e43826. 10.2196/43826
