Abstract
Based on the literature and collective experience and consensus of the authors, this article provides a definition for an event-driven electronic diary (EDeD), outlines considerations for the use of EDeDs, and provides best practice recommendations for their design and implementation in clinical trials. This is a much-needed resource to optimize data quality in clinical trials and to support a necessary level of standardization using this form of data capture.
Subject terms: Outcomes research, Clinical trials, Randomized controlled trials
Introduction
In the context of clinical research, understanding how a patient is feeling and functioning, through patient-reported outcome measures (PROMs) or other clinical outcome assessments (COAs), is often an important part of assessing the efficacy of new treatments1. Collecting data in an event-driven manner is especially beneficial and appropriate for clinical study designs in which a change in the number of occurrences of a given event informs the endpoint, or in which high-resolution information about each occurrence is wanted. The primary driver of data capture is the occurrence of an event, hence the term “event-driven”.
To enhance the successful capture of robust, high-quality data using Event-driven Electronic Diaries (EDeDs), this article proposes aligned terminology around EDeDs and provides best practice recommendations and considerations for the design and implementation of EDeDs in clinical trials (summarized in Table 1), including examples of how these recommendations can be applied to a range of indications (Tables 2–5).
Table 1.
Key Recommendations and Considerations for the Design and Implementation of an EDeD
| Design steps and decision points | Recommendations and considerations |
|---|---|
| Define what data to capture | − Align items in the diary with the endpoints in the protocol – consider not adding items that do not ultimately support the research question − Collecting the date and time of each event may help reporters distinguish one event from another, especially in scenarios where data can be entered for prior days (see data entry time window) − Length of time of events may be important − Plan for any additional functionality required based on the specific disease/condition |
| Specify the reporter(s) | − Prioritize self-reporting − Avoid having more than three reporters − Use reporter-agnostic item wording in trials where there are multiple reporters − Have a unique account log-in to the system for each reporter, with the data associated with a single participant ID, and data from all reporters being stored in the same dataset − Assign a role to each reporter − Multiple reporters per participant have implications for numerous elements of the EDeD implementation, which are detailed within their respective sections |
| Define how the event will be captured | − Self-reported (or observer-reported) or passive sensor trigger − If only one event can occur at a time, when the EDeD is initiated and there is an existing event open, ensure the reporter is prompted to close the event before it is possible to report a new event − If multiple events can occur in parallel, remind reporters to close any ongoing events when they have ended but do not restrict them from reporting a new event − If using a passive sensor triggered approach, ask for confirmation of an event and always incorporate the self-reported event trigger option too |
| Plan your device strategy | − BYOD is beneficial for the event-driven nature of data collection. If BYOD is not feasible, a provisioned device should be provided, and a provisioned option should always be offered alongside BYOD − If an app solution is used, reporting by multiple reporters could be facilitated by allowing more than one app download per trial participant − If a sensor-based DHT is being utilized with the EDeD, ensure the appropriate level of integration between systems and clearly document the dataflow and workflow − There are pros and cons to offline versus online solutions which should be considered |
| Display a summary of previous entries on diary opening and prior to new event reporting | − On opening of the EDeD or any catch-up diary, and prior to the entry of a new event, display a summary of previously recorded events to mitigate against duplicate data entry − Limit entries displayed to the allowed time window for data entry − Only display information required to identify a unique event − Display in order of event occurrence (not time of data entry) − Do not include previous data that could bias any responses being entered as part of the current event, e.g. the severity of previously reported events − Where there are multiple reporters, include who reported each event − If events can be ongoing, clearly display which events are still ‘open’ |
| Schedule a regular reminder to report events | − Schedule a regular reminder to record events that were not entered in real-time or confirm the absence of events in a given reporting window |
| Determine the data entry time window | − Define a clear maximum allowed time interval between the event occurring and reporting it − Do not allow changes to previous entries within the EDeD, but have a data change request process in place for when a reporter realizes they need to make a change/correction to the recorded data |
| Adhere to electronic implementation best practices | − Always follow general best practices for implementing PROMs electronically − Ensure comprehensive training specific to the context of event-driven data capture, including clear expectations about entering data contemporaneously |
| Ensure usability | − Document usability of the eCOA system or the specific EDeD implementation |
| Implement compliance monitoring | − Use the scheduled reminder system to facilitate the calculation and tracking of compliance rates − Define a threshold for the number of consecutive days of missing entries that will trigger a notification to site for follow-up resolution |
BYOD bring your own device; DHT digital health technology; eCOA electronic clinical outcome assessment; PROM patient-reported outcome measure.
Table 2.
An example of applying the best practice recommendations for EDeD design and implementation to Developmental and Epileptic Encephalopathy (DEE)
| Design steps | Features of data capture | Implications for design and implementation |
|---|---|---|
| Define what data to capture | Seizure occurrence1 | − Where seizure frequency is high within a day, it may not be feasible for all events to be entered in an event-driven way and so providing a means to enter multiple events at once may be required (i.e., entering multiple events within one diary entry, rather than having to enter each event separately which could be burdensome if, for example, 20 events were being entered at once) |
| | Type of seizure | − The medical term for the seizure is not normally used by participants/caregivers, and so the ‘nickname’ for the seizure type is required to be tailored to the individuals (e.g., in seizure diaries, programming the medical term for a seizure to the participant’s/caregiver’s nickname for it at baseline)8 − If there is an adjudication process for confirmation of seizure type, then there should be a mechanism for applying any changes to all the data points impacted5 (i.e., if a seizure type was classified by the site which is then modified after adjudication, any reports of this type should be modified to represent the correct classification) − New seizure types (with nicknames) should be able to be added to the diary during the study |
| | Was rescue medication taken | − Recommend this is kept simple and the name of the medication is recorded in the eCRF. If the name does have to be recorded in the EDeD, use a pre-populated list rather than free text |
| | Duration | − If multiple seizures can be entered at once, consideration should be given to whether duration should be averaged as opposed to captured at a per-event level − If data can be entered for prior days, consider whether the duration of each event can be accurately recalled |
| Specify the reporter(s) | Often caregiver report for DEE2 | − If there are multiple potential reporters, multiple user accounts may be required or this could be captured as an explicit question prior to completion of the EDeD − Ensure all reporters complete any training − Consider what can be observed and recalled by caregivers |
| Define how the event will be captured | Caregiver report (and potentially a sensor as exploratory) | − Whilst use of a sensor to detect when a seizure event is occurring could be incorporated, based on current evidence this would be exploratory3. If it is included, ensure the level of integration required is possible with the sensor and the EDeD |
| Display a summary of previous entries on diary opening and prior to new event reporting | List of individual events or number of each type of seizure | − If events are not always being entered contemporaneously, time of each event can help differentiate events on the summary screen − Where there are multiple reporters, include who reported each event − Number of entries displayed should be in line with the allowed time window for data entry − If events are high frequency an alternative is to include the number of each type of event |
| Schedule a regular reminder to report events | Daily | − As events can be high frequency, a reminder sent daily to report events that have not been reported yet or to confirm the absence of events should be implemented − Where there are multiple reporters, it also reminds the given reporter for that day to enter data and keeps all user accounts/devices up to date |
| Determine the data entry time window | Often data can be entered up to 7 days after the event8 | − Capturing date will ensure events are associated with the correct date when entered for prior days and help distinguish events − If data on occurrences can be entered for prior days, then entering multiple seizures at once may be beneficial. However, it should be encouraged that events are entered contemporaneously − Limit the dates that the reporter can select based on the time window |
| Implement compliance monitoring | Trigger based on data entry time window | − Trigger a notification to site that would allow enough time for them to mitigate the amount of missing data in the given data entry time window period (i.e., at which point the reporter could no longer add data for a given date), and be practicable and least burdensome to site staff |
DEE developmental and epileptic encephalopathy; eCRF electronic case report form.
Table 5.
An example of applying the best practice recommendations for EDeD design and implementation to Hypoglycemic events
| Design steps | Features of data capture | Implications for design and implementation |
|---|---|---|
| Define what data to capture | Hypoglycemic events | − When a sensor is used to trigger the EDeD, the initial question in an EDeD should confirm an event occurred − If the sensor did not trigger the EDeD, consider how blood glucose value will be captured and if branching logic in the EDeD is needed for these scenarios where it is not automatically transmitted from the sensor |
| | Additional signs and symptoms | − If a caregiver can report, ensure that any signs and symptoms in the EDeD are observable |
| | Treatment | − Consider a pre-populated list, or a Yes/No rather than free text |
| Specify the reporter(s) | Participant or caregiver | − If a caregiver, consider how they would confirm the event based on the sensor trigger − If there are multiple potential reporters, multiple user accounts may be required or this could be captured as an explicit question prior to completion of the EDeD − Ensure all reporters complete any training − Consider what can be observed and recalled by caregivers |
| Define how the event will be captured | Passive-sensor and self-report43 | − As the sensor will trigger the EDeD, ensure level of integration required is possible − Should also be possible to report an event via self-reported initiation of the EDeD and the diary should be continuously available to enter events |
| Display a summary of previous entries on diary opening and prior to new event reporting | Time | − Time can help differentiate events on the summary screen − Number of entries displayed should be in line with the allowed time window for data entry − Determine if events captured by the sensor but missing from the EDeD entry are displayed − Consider including how the EDeD was triggered and in the case of multiple reporters, who reported each event |
| Schedule a regular reminder to report events | Daily and linked with sensor-identified events that have not been confirmed in the EDeD | − Would also require the ability for a trigger to be programmed on a non-scheduled basis − Where there are multiple reporters, it also reminds the given reporter for that day to enter data and keep all user accounts/devices up to date − Consider if the lack of sensor-identified events can be used as the confirmation of the absence of events or if active confirmation from the reporter is required |
| Determine the data entry time window | Consider frequency of event | − As the sensor is the main source of capturing the event and should trigger the EDeD in real-time, if events are high-frequency, consider a small time-window due to recall bias, as opposed to if events are less frequent and therefore potentially more memorable and a larger time window could be used − Limit the dates that the reporter can select based on the time window |
| Implement compliance monitoring | Based on sensor-identified events | − If the sensor is the main trigger for the diary, consider if compliance monitoring should be based on if the sensor detected an event that was then not confirmed in the EDeD or if it should be based on missing EDeD days |
What is an EDeD?
An event may be a manifestation of a disease or treatment, or another pre-defined, clinically relevant occurrence, such as a bowel movement, or a longer-duration event, such as a sickle cell crisis. EDeDs can be triggered by the reporter (i.e., the participant themself or an observer, such as a caregiver) recognizing a defined event, or by a passive sensor-based digital health technology (DHT), such as a smartwatch. The use of digital sensors to identify an event and trigger assessments presents emerging opportunities to better understand patient experiences2–4 (Box 1).
Electronic diaries increase data quality and better ensure contemporaneous data collection compared with paper-based methods5–8, and so we suggest electronic reporting as the preferred method of capturing event-driven data. EDeDs are not necessarily substitutes for scheduled COAs, but rather an approach that can provide more immediate and granular information about individual events, potentially reducing the burden associated with recalling details of an event some time after its occurrence while increasing the accuracy of reported data9–11.
It is important to note that this is distinct from ecological momentary assessment (EMA; also referred to as ambulatory assessment). While EMA also involves real-time data capture in naturalistic settings, it is characterized by repeated sampling intended to assess experiences or behaviors over time, independent of event occurrences. In contrast, EDeDs focus on capturing data immediately following the occurrence of a specific, predefined event.
EDeDs have been used across a wide range of therapeutic areas (TAs) and data from EDeDs have supported regulatory approvals and labeling statements3,9,12–17. For example, in the case of both overactive bladder (OAB) and epilepsy, pivotal trials have utilized EDeDs as the basis for primary endpoints and drug approvals8,18. However, although the literature highlights the benefits of EDeDs10, and FDA refers to the use of “event-triggered data collection” for some conditions (e.g., urination events, asthma exacerbations) in Patient-Focused Drug Development Draft Guidance 419, there is a lack of consensus about how to define this type of electronic diary and best practices for implementation in clinical trials10.
Box 1: Definition of an Event-Driven eDiary (EDeD).
An EDeD is defined here as an electronic data collection tool used to capture the occurrence of a pre-defined event of interest and any data associated with that specific event, as close as possible to the time the event occurs. This approach contrasts with scheduled assessments that are made at fixed frequencies (e.g., once daily, once weekly).
When is an EDeD appropriate?
Many patient-data capture scenarios will be best suited to a scheduled diary to record event occurrence and associated information; however, the nature of some disease-related activity can drive the decision to use an EDeD, especially where timely and precise recording of events is desired. While it may not be possible to provide a singular, comprehensive recommendation on whether to use an EDeD, the main considerations around its appropriateness are the expected frequency of the events of interest and whether event duration must be measured. Most events will fall broadly into one of two categories: high-frequency events or low-frequency events.
When an event may occur multiple times within a day and it is important to have detailed information about each occurrence, it may be cognitively burdensome, if not impossible, for reporters to accurately recall specific details for each event using a scheduled data collection tool, such as an end-of-day diary9,11. Factors such as recency effects, severity, and mood may all affect accurate recall20,21. Use of an EDeD can reduce some biases, omissions, or difficulties ‘averaging’ a report across heterogeneous experiences and thus improve data quality and reduce the loss of information10,11,20.
For example, in irritable bowel syndrome (IBS), the recent development of a series of measures to capture bowel movements in real-time using an EDeD rather than a daily diary was based on patient feedback indicating concerns about accurately reporting the frequency and characteristics of their bowel movements over a 24 h recall period15. In contrast, if only the most severe instance of an event is of interest (e.g., worst pain) or events are relatively uncommon and easily aggregated within a given recall period (e.g., number of headaches), an EDeD may not be necessary11,22.
Consideration should also be given to attributes of disease events that would inform an endpoint definition. For events in which duration or resolution (e.g., bleeding in hemophilia) would be crucial to defining a study endpoint, real-time data capture may ensure greater accuracy of these data.
Finally, consideration should be given to who is recording the data. The preference is always for participants to self-report, but characteristics of the population under study, such as age and physical or cognitive ability, and whether these impact the accuracy of reporting, will factor into the decision to allow an observer to record data about the participant5. Of note, as with any observer-reported outcome (ObsRO) measure, due consideration must be given to whether clinical events are observable, ensuring that the EDeD only includes questions about things the reporter can witness (e.g., coughing) and report accurately, as opposed to concepts, such as pain and migraine symptoms, that can only be self-assessed. The feasibility of contemporaneous reporting of event details should be evaluated in circumstances when the reporter might not have access to a device (e.g., during work or school) or, in the case of an observer report, if there are long periods when the observer is not with the participant (e.g., when a child is in school).
It is crucial that the event of interest is clearly defined from the outset, including what is and is not classified as an occurrence, using terminology appropriate to the reporter. This is true for both the event-driven and scheduled approach to data capture.
Best practice recommendations for the electronic design and implementation
The design and implementation of EDeDs for use in clinical trials must balance the need for accurate reporting without placing undue burden on study participants. Here, we provide a set of recommendations for the design and implementation of an EDeD (See Table 1 for a summary of each recommendation and the key considerations, and Tables 2–5 for practical examples of how these can be applied to specific indications). Note that the order is not necessarily fixed, and some steps will occur in parallel.
Table 3.
An example of applying the best practice recommendations for EDeD design and implementation to Hemophilia
| Design steps | Features of data capture | Implications for design and implementation |
|---|---|---|
| Define what data to capture | Bleed occurrence | − As events are not discrete point-in-time events, the ability to open and close an event is likely to be required − If multiple events can occur in parallel and treatment is also being captured, consider whether treatment needs to be (or can be) associated with a specific event or if it can be captured in a separate EDeD − Consider if there is a limit on the number of events that can be ongoing at one time |
| | Location | − Decide what response type will be used to capture this (e.g., using a pre-populated list or a selectable body image) − If multiple events can occur in parallel, consider if they can occur at the same location, and if not, ensure the previous event at this location has been closed/ended before a new one can be reported |
| | Duration | − Consider if this will be done once the event is closed in the EDeD (e.g., report the length of the event) or if a start and stop time will be reported − Consider if any limits on the duration an event can last should be implemented |
| Specify the reporter(s) | Participant or caregiver | − If there are multiple potential reporters, multiple user accounts may be required or this could be captured as an explicit question prior to completion of the EDeD − Ensure all reporters complete any training − Consider what can be observed and recalled by caregivers |
| Define how the event will be captured | Self- or caregiver report | − Remind users to close any events that have ended prior to reporting a new event but do not inhibit a new one being reported if multiple events can occur in parallel − If there are multiple reporters, consider if it must be the same reporter that opens and closes a single event |
| Display a summary of previous entries on diary opening and prior to new event reporting | Start and end time of an event, location | − Number of entries displayed should be in line with the allowed time window for data entry and clearly display which events are still ‘open’ − Where there are multiple reporters, consider including who reported each event |
| Schedule a regular reminder to report events | Daily | − As events can span multiple days, consider if additional reminders are required about closing ongoing events − Where there are multiple reporters, it also reminds the given reporter for that day to enter data and keep all user accounts/devices up to date |
| Determine the data entry time window | Date | − As events can span multiple days, and there can be multiple events occurring in parallel, consider how the data entry time window will account for this if an event that has not been closed is outside of this window |
| Implement compliance monitoring | Trigger based on days of missing data and events that are ongoing | − Trigger a notification to site that would allow enough time for them to mitigate the amount of missing data in the given data entry time window period (i.e., at which point the reporter could no longer add data for a given date) − If a compliance alert is linked with ongoing events that have not been closed, consider the duration that would trigger this |
Table 4.
An example of applying the best practice recommendations for EDeD design and implementation to Overactive Bladder (OAB)
| Design steps | Features of data capture | Implications for design and implementation |
|---|---|---|
| Define what data to capture | Micturition and incontinence events | − As additional data on events is often required to be linked with each event (such as level of urgency), the ability to report multiple events at once (i.e., with one EDeD entry) is not recommended |
| | Volume voided over 24 h | − Where the volume is only required to be reported over 24 h, consider if branching logic will be required to display the additional diary items on these days or if a separate diary will be implemented for these items |
| | Level of urgency with each event | − This is subjective and so would require self-report |
| Specify the reporter(s) | Participant | − One user account |
| Define how the event will be captured | Self-report | − Continuously available EDeD but only for the specified time-period (generally 3 consecutive days within a 7-day window prior to a site visit) − Consider if the system will need to know when a site visit is expected to trigger the availability. Since visit dates can change, this can be risky − Consider other mechanisms to limit the access of the participant to the diary until the appropriate time, such as a code provided by the site |
| Display a summary of previous entries on diary opening and prior to new event reporting | Time of event | − If events are not always being entered contemporaneously, this can help differentiate events on the summary screen, especially given their high frequency − Consider the balance between burden of having to enter this and the use it provides for recall given the high-frequency nature of events − Number of entries displayed should be in line with the allowed time window for data entry or as event numbers can be high, consider if past X number of entries is more suitable so it fits on a single screen in the diary |
| Schedule a regular reminder to report events | Twice daily | − If three consecutive days of entry are achieved, reminders should stop − The daily reminder to report events will be dependent on if data entry was initiated − As event occurrence overnight is often important to capture, consider a reminder at the end of the day and the beginning of the day |
| Determine the data entry time window | 24 h | − Consider how memorable the events are and how accurate recall would be for recall beyond 24 h − Limit the dates that the reporter can select based on the time window |
| Implement compliance monitoring | Daily | − May need to be linked to when data entry is initiated and if the 3 consecutive days in the 7 days prior to the visit could be missed |
Define what data to capture
A critical step when designing and implementing an EDeD is to specify exactly what data will be captured, and items should align with the endpoints in the protocol8,10. Strong consideration should be given to how much detail is required per event, which can impact respondent burden19. For example, for a seizure EDeD it would be possible to ask about the type, severity, time, length, use of rescue medication, and related hospitalization occurrence. It is important to ask which of this information actually answers the research question, as well as to explore whether the data could be captured from other sources, such as site reports of patient hospitalization.
Although EDeD data entry is timestamped, this may not be the exact time of event occurrence, as not all events will be entered strictly contemporaneously. Thus, collecting the time of each event may be important—even if not essential to the endpoints—as it can serve as a reference tool for the reporter to distinguish one event from another (particularly in the case of high frequency events). It may also provide meaningful context, such as whether the event occurs more frequently at night. Capturing the date of an event will also be required for EDeD designs that allow the reporting of events from previous days (See Section 7).
For certain indications, the event is not a discrete point-in-time occurrence and can last a significant length of time (such as a migraine), sometimes extending over a day. Additionally, more than one event can occur at once (such as a bleed in hemophilia). When the duration of events is required, it can be captured by reporting both the start and end time (sometimes referred to as ‘opening’ an event when it starts and ‘closing’ it when it ends), by entering approximate days/hours/minutes, or by selecting from prespecified ranges (which can also identify events that span across days). This EDeD feature creates additional complexity for electronic implementation, because in addition to recording the occurrence of the event, it requires the capture of the ‘opening/starting’ and ‘closing/ending’ of each discrete event.
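To make the open/close pattern concrete, the minimal sketch below (in Python, using hypothetical names such as `DiaryEvent` and `close_event` that are not drawn from any particular eCOA platform) illustrates one way an event record could remain ‘open’ until an end time is captured, with duration derived rather than entered directly and a simple edit check on implausible end times.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional
import uuid

@dataclass
class DiaryEvent:
    """One reported event; it stays 'open' until an end time is recorded."""
    event_type: str
    opened_at: datetime                     # reported start of the event
    closed_at: Optional[datetime] = None    # remains None while the event is ongoing
    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    @property
    def is_open(self) -> bool:
        return self.closed_at is None

    @property
    def duration(self) -> Optional[timedelta]:
        """Derived duration, only available once the event has been closed."""
        return None if self.closed_at is None else self.closed_at - self.opened_at

def close_event(event: DiaryEvent, ended_at: datetime, max_duration: timedelta) -> None:
    """Close an open event, rejecting implausible end times (an edit-check analogue)."""
    if not event.is_open:
        raise ValueError("Event is already closed.")
    if ended_at < event.opened_at:
        raise ValueError("End time cannot precede the start time.")
    if ended_at - event.opened_at > max_duration:
        raise ValueError("Duration exceeds the configured limit; ask the reporter to confirm.")
    event.closed_at = ended_at

# Example: a bleed opened in the morning and closed the next day
bleed = DiaryEvent("bleed", datetime(2024, 5, 1, 9, 30))
close_event(bleed, datetime(2024, 5, 2, 7, 0), max_duration=timedelta(days=14))
print(bleed.duration)  # 21:30:00
```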
At this stage, understanding if any of the data being captured require additional functionality in the EDeD that is specific to a given indication is also important. For example, in epileptic conditions where seizure type is being captured, it is common to have a personalized list available in the EDeD to select based on the participant’s and/or caregiver’s own words, which map back to the medical seizure names in the dataset8, or in acute migraine studies participants report their most bothersome symptom prior to treatment and are subsequently asked about the impact of treatment on that specific symptom (which will differ per participant)23.
Specify the reporter(s) (i.e., the individual(s) completing data capture)
It is important to specify who will be using the EDeD and how many reporters are allowed to enter data. As mentioned previously, where possible participant self-report is preferred. However, when it is appropriate to garner reports from others, the most common combinations of reporters are a) the participant or caregiver only; b) the participant and a single caregiver; c) the participant and two caregivers; d) two caregivers only. The authors do not recommend more than three reporters, as it could create challenges for data analyses and interpretation. Where there are multiple reporters entering data into a single version of an EDeD, the authors recommend using reporter-agnostic item wording; for example, “Select the type of seizure”, instead of “Select the type of seizure you had” or “Select the type of seizure your child had”. This removes the need for multiple versions of an EDeD with slightly different wording for the same items, programming logic to display the correct version based on who is completing the EDeD, as well as additional translation work. As previously mentioned, for observer-report, items should not include concepts that can only be self-assessed (such as pain).
In the case of multiple reporters, the authors recommend that each reporter has a unique account log-in for the system, with the data associated with a single participant ID, and stored within the same dataset8. The authors recommend that each reporter is assigned a specific role, for example, a primary role being assigned to the main caregiver, and a secondary role to the additional reporter. It may not always be possible to create separate accounts in the system or feasible for reporters to have separate accounts (e.g., a young child); in instances of a shared account, ensure attributability via a question that captures who the reporter is prior to entering data into the EDeD.
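As an illustration only, assuming hypothetical structures (`ReporterAccount`, `DiaryEntry`) rather than any specific vendor data model, the sketch below shows how unique reporter log-ins with assigned roles could all feed a single participant’s dataset while keeping each entry attributable to the reporter who made it.

```python
from dataclasses import dataclass
from enum import Enum

class ReporterRole(Enum):
    SELF = "participant"
    PRIMARY_CAREGIVER = "primary_caregiver"
    SECONDARY_CAREGIVER = "secondary_caregiver"

@dataclass(frozen=True)
class ReporterAccount:
    """A unique log-in, always tied back to a single participant ID."""
    account_id: str
    participant_id: str
    role: ReporterRole

@dataclass
class DiaryEntry:
    """Each entry is attributed to the reporter who made it but stored
    under the participant's single dataset."""
    participant_id: str
    reporter_account_id: str
    reporter_role: ReporterRole
    payload: dict

# Example: two caregivers reporting for the same participant
primary = ReporterAccount("acct-001", "PT-0042", ReporterRole.PRIMARY_CAREGIVER)
secondary = ReporterAccount("acct-002", "PT-0042", ReporterRole.SECONDARY_CAREGIVER)
entry = DiaryEntry(primary.participant_id, primary.account_id, primary.role,
                   {"event_type": "seizure", "nickname": "drop"})
```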
The risk of the same event being recorded multiple times is greater with multiple reporters; therefore, ensuring that all devices and accounts are in sync is important to keep the EDeD up to date for all reporters. Of note, if standardized, scheduled ObsROs involving ratings are also part of the study, data from multiple reporters can be problematic for analysis and interpretation24,25, and the authors recommend that the same reporter responds throughout the study, with the electronic system flagging when a change of reporter occurs.
Multiple reporters per participant have implications for numerous elements of the EDeD implementation, which are addressed throughout.
Define how the event will be captured
Another consideration in the implementation of an EDeD is how the event of interest is captured and what triggers the diary being available for completion. This could occur in several ways:
Self-reported (or observer-reported) event trigger
The most common way to initiate an EDeD is for the participant (or a caregiver) to report within the eDiary the occurrence of an event. The system must be set up for continuously available data entry (i.e., not within a certain time window as is common with a scheduled daily diary). The initial question in an EDeD should involve confirmation that there is a new event to report. Consideration should be given to whether events must be entered one-by-one or if multiple can be entered at a time – the approach will depend on the indication and the level of data required to be captured per event.
If only one ongoing event can occur at a time (such as a migraine), when the EDeD is initiated again, an ongoing event must be closed before a new event can be reported (in this instance, the reporter must record that their previous migraine has resolved before they can record that a new migraine has begun). The reporter should be informed of this requirement with instructional text and as part of initial training (see Section 8).
For indications where multiple events can occur in parallel (such as bleeds), when the EDeD is initiated again, reporters should be prompted to close any ongoing events that have ended, but they should not be inhibited from reporting that a new event has started. Additional edit checks can be programmed to confirm that the reporter wishes to report a new event, rather than close an existing one (See Fig. 1).
Fig. 1. Screenshots illustrating an example of the summary screen and workflow for events that can occur in parallel.
Upon opening of the EDeD, the user would see a summary screen listing the previous closed entries and ongoing entries and be able to choose whether they wish to report a new event or end an ongoing event. If they select to report a new event and have open events that have already ended, they are reminded to close them before reporting the new event.
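A minimal sketch of the decision logic described above and in Fig. 1, assuming a hypothetical helper `begin_new_event` and simplified event dictionaries, might look as follows; the returned strings stand in for the UI actions an actual system would perform.

```python
from datetime import datetime

def begin_new_event(open_events: list, allow_parallel: bool) -> str:
    """Decide what the diary should do when the reporter chooses to report a new event.
    Returns a named UI action rather than performing it, to keep the sketch simple."""
    if open_events and not allow_parallel:
        # e.g. migraine: the previous event must be closed before a new one is reported
        return "require_close_before_new"
    if open_events and allow_parallel:
        # e.g. hemophilia bleeds: remind the reporter to close ended events, but do not block
        return "remind_to_close_then_allow_new"
    return "allow_new"

# Example: one bleed still open when a second one starts
open_events = [{"event_type": "bleed", "location": "left knee",
                "opened_at": datetime(2024, 5, 1, 9, 30)}]
print(begin_new_event(open_events, allow_parallel=True))   # remind_to_close_then_allow_new
print(begin_new_event(open_events, allow_parallel=False))  # require_close_before_new
```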
Passive sensor trigger
With the rapid evolution of consumer- and medical-grade DHTs, an EDeD can also be triggered by a passive sensor-based DHT. Such an approach requires integrating sensor data with the EDeD such that the triggering event data synchronizes in near real-time with the smartphone, leading to the reporter receiving a notification to enter data into the EDeD. For example, a wrist-worn sensor detects a potential fall that triggers a notification on the reporter’s smartphone to respond in the EDeD, or a continuous glucose monitor automatically transmits to the smartphone when a diabetic hypoglycemic event has been detected and initiates the participant’s reporting of more details in the EDeD.
As there may be scenarios where the sensor data has identified the occurrence of an event and triggered the EDeD, but the reporter does not believe an event has taken place3, the initial question in an EDeD should confirm an event occurred. The reporter may also want to report an event that has not been identified by the sensor3,14. Further, whilst sensors may be used to trigger an EDeD or provide useful contextual data, such as what a sensor was recording at the time of an event (regardless of whether it was the trigger3), they are not themselves an EDeD. Therefore, the authors recommend that the passive sensor triggered approach always incorporates the self-reported event trigger option too.
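The sketch below illustrates this dual-trigger principle under stated assumptions: the `notify` and `launch_eded` hooks are hypothetical placeholders for whatever notification and diary-launch mechanisms a given eCOA system provides, and the sensor detection itself is taken as given. The key points are that the sensor never auto-records an event and that the self-initiated route remains available.

```python
from datetime import datetime

def on_sensor_trigger(detected_at: datetime, notify, launch_eded) -> None:
    """When the wearable flags a possible event, notify the reporter and open the
    EDeD with a confirmation question first; the event is never auto-recorded."""
    notify("A possible event was detected. Please open your diary to confirm.")
    launch_eded(initial_question="Did an event occur?",
                context={"sensor_detected_at": detected_at.isoformat()})

def on_self_report(launch_eded) -> None:
    """The self-initiated route always remains available, even when a sensor
    is the primary trigger for the diary."""
    launch_eded(initial_question="Did an event occur?",
                context={"sensor_detected_at": None})

# Example with simple stand-ins for the notification and diary-launch hooks
on_sensor_trigger(datetime(2024, 5, 1, 14, 3), notify=print,
                  launch_eded=lambda initial_question, context: print(initial_question, context))
```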
Plan your device strategy
Once it is determined what data will be captured, who will be completing the EDeD, and how data capture will be initiated, it is important to plan a supporting device strategy. An app that reporters can install on their own smartphone (“bring your own device [BYOD]”26,27) may be a particularly beneficial strategy given the event-driven, real-time nature of data capture, and the fact that people usually have their personal device on hand. However, not all individuals will have a suitable personal device or wish to use their own device, and in such circumstances a provisioned device should be provided (or multiple in the case of more than one reporter). Where there are multiple reporters using different devices, installation of the app must be available for all reporters.
Where a sensor is included as part of the event data capture strategy, ensure the level of appropriate integration between the relevant systems exists (e.g., the EDeD software can receive a trigger from the wearable software in a timely manner for near-real time collection), and clearly document the data flow.
It is also important to consider whether the solution can be online or offline; if events are intended to be recorded at any time but connectivity could be unreliable, offline capabilities for data capture are essential to ensure the EDeD can still be completed.
Display a summary of previous entries on diary opening and prior to new event reporting
Ensuring the same event is not recorded multiple times is an important consideration for high quality implementation of EDeDs. On opening of the EDeD, and prior to the entry of a new event, the authors recommend displaying a summary of previously recorded events in order of event occurrence; this reminder acts as a risk mitigation against duplicate data entry28. This should also be the first screen seen in any scheduled catch-up diary19 (See Section 6).
The number of previous data entries and the information displayed should be limited and not list all events ever reported, nor all item responses. The authors recommend that entries displayed on the summary screen align with the allowed time window for data entry into the EDeD and only display information required to identify a unique event. For example, if the time window for data entry is 3 days between the event occurring and reporting it (See Section 7), then only events from the past 3 days should be displayed, along with information required to identify a unique event, such as the time of occurrence and type of seizure or location of a bleed. It is especially important not to include previous data that could bias any responses being entered as part of the current event (e.g., a severity rating). Where the EDeD can be completed by multiple reporters (e.g., two parents), including who reported each event is also helpful.
For events that require opening and closing, the summary screen should also provide information on which events are ongoing (i.e., which are still open; Fig. 1). Careful consideration must be exercised when there are multiple reporters, including specifying whether a reporter other than the one who opened an event can close it.
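As a sketch of how such a summary screen could be assembled (hypothetical field names; not a prescription for any particular system), the function below limits entries to the allowed data entry window, orders them by occurrence, and retains only the fields needed to identify a unique event, including open/closed status and the reporter, while omitting potentially biasing responses such as severity.

```python
from datetime import datetime, timedelta

def summary_screen_entries(events: list, now: datetime, entry_window_days: int) -> list:
    """Build the pre-entry summary: only events within the allowed data entry window,
    ordered by when they occurred (not when they were entered), and limited to the
    fields needed to identify a unique event (no severity or other potentially
    biasing responses)."""
    cutoff = now - timedelta(days=entry_window_days)
    recent = sorted((e for e in events if e["occurred_at"] >= cutoff),
                    key=lambda e: e["occurred_at"])
    identifying_fields = ("occurred_at", "event_type", "reported_by", "is_open")
    return [{k: e.get(k) for k in identifying_fields} for e in recent]
```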
Schedule a regular reminder to report events or confirm absence of events
The authors recommend scheduling a regular reminder to record any events that were not entered in real-time19 (i.e., that were not entered in an event-driven way) or confirm the absence of events in a given reporting window. For events that require ‘closing’, this can also act as a reminder to record the end of any open events or confirm that the event is ongoing. For most indications, scheduling this at the end of the day is appropriate; however, where events are low frequency, a less frequent schedule could be considered (e.g., weekly).
Given a primary purpose of this design feature is to distinguish between the lack of any events in a given period, and someone forgetting to report the event8, it is important to have active confirmation that no events occurred. That is, the reminder should not just notify the reporter to open the EDeD to report any missed events, but explicitly ask them to confirm the absence of events and record this entry in the dataset.
All options for designing this workflow include a scheduled notification being sent to the device to inform reporters they have tasks to complete. They can then open the EDeD and add events or actively select that they have no events to report. Alternatively, a separate “catch-up diary” can be implemented on a fixed schedule and the absence of events can be specifically captured in this catch-up session. If utilizing the catch-up diary option, this should be scheduled within a specific time window; as is standard practice with scheduled diaries, if data are not entered after the first reminder, further reminders are utilized8, and once complete the catch-up diary is no longer available for that time period. A design choice must be made regarding whether the EDeD remains available during these time periods or is hidden while the catch-up diary is available. In some circumstances it may also be appropriate to implement a morning reminder to enter any events that occurred overnight. Where there are multiple reporters, quick syncing of data across user accounts/devices to mitigate against duplicate data will be critical.
Some options for the design and flow of this are laid out in Fig. 2. All require consideration of how the data are represented in the backend (i.e., whether they will be stored within the same event set or separately and merged at a later date), depending on the capabilities of the given system. There is currently a lack of published evidence to make a recommendation about the optimal design. If the reporter enters that no events have occurred that day in the catch-up diary but goes on to report an event in the EDeD before the end of the day, this should supersede the prior (time-stamped) non-occurrence report8.
Fig. 2. Options for setting up a scheduled reminder workflow.
Upon the user receiving a scheduled notification on the device reminding them they have tasks to complete, Options 1 and 2 outline potential designs for ensuring all events are captured or obtaining active confirmation of the absence of events that day. In Option 1, the reporter would open the app and see a “Catch-up diary” on the home screen; on selecting this, they would be shown a summary screen and asked if they have an event to report or to confirm that no events occurred. If they have no events to report, they would select this and then be taken to the end of the diary. If they select that they do have events to report, they could either: 1a) be asked all items from the EDeD as part of the “Catch-up diary”; 1b) be instructed to open the EDeD from the home screen and report the events; or 1c) have the EDeD auto-launch. In Option 2, the reporter would open the app and see the EDeD on the home screen; on selecting this, they would be shown a summary screen and asked if they have an event to report, and if so, they would proceed to complete the EDeD data entry as usual; if not, the diary would end.
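However the workflow in Fig. 2 is configured, the day-level resolution logic described above can be summarized in a short sketch (hypothetical function and field names), in which a later event report supersedes an earlier ‘no events’ confirmation and a day with neither is treated as missing rather than event-free.

```python
def day_status(events_on_day: list, no_event_confirmations: list) -> str:
    """Resolve what a given calendar day represents in the dataset."""
    if events_on_day:
        return "events_reported"       # supersedes any earlier 'no events' confirmation
    if no_event_confirmations:
        return "no_events_confirmed"   # active confirmation from the catch-up diary
    return "missing"                   # neither reported nor confirmed absent

# Example: 'no events' confirmed at 18:00, then a seizure reported at 21:00 the same day
print(day_status(events_on_day=[{"event_type": "seizure", "reported_at": "21:00"}],
                 no_event_confirmations=[{"confirmed_at": "18:00"}]))  # events_reported
```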
Determine the data entry time window
The intention of an EDeD is for contemporaneous reporting of events and reducing recall bias; however, reporters may need some flexibility in when they enter data. For example, certain events can impact a reporter’s willingness or ability to contemporaneously report the event in the EDeD due to pain, fatigue, hospitalization, or other symptoms8,9. The authors recommend defining a clear data entry time window (i.e., time interval allowed between the event and the report).
The data entry time window requires careful consideration based on the context of use1,11,20, keeping in mind the balance between mitigating the risk of missing data whilst still being confident in reliable recall. Previous research has shown that the majority of entries in EDeDs are made in a relatively timely manner (e.g., with only 5.4% of entries made 3 days or more after the event8), and that longer intervals between an event occurring and the reporting of the event show greater rating variance, with more difficult and less accurate recall20,21, and a trend towards higher symptom scores and worse quality of life29.
Multiple factors should be considered, including (but not limited to) the condition under study, the severity of the event of interest, the expected frequency of the event and how memorable it is11,20,22, and the data being collected about the event. For example, if a diary is collecting urinary events which are high frequency and difficult to distinguish over a longer time period, a short time window for data entry, such as 24 h could be appropriate. In contrast, if the event is a sickle cell crisis, which can last for days or weeks, then allowing the reporting of events within a week-long window may be a reasonable time frame.
Defining the data entry time window is an important part of the electronic implementation and requires the system to support dynamically programmed limits (based on the current date and time, which are constantly changing) and validation checks if a date or time is entered that is outside the defined data entry time window (where the system has not entirely disabled the dates outside of the specified window, such as in a calendar selection). Additional considerations are required for events that require opening and closing; depending on the indication and how long a given event is expected to last, it is recommended that limits on the time frame for ‘closing’ an event, or an edit check confirming the length of the event, be programmed.
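A minimal sketch of such a dynamic validation check, assuming a hypothetical `validate_event_datetime` helper and a window expressed in days, is shown below; a real system would typically also disable out-of-window dates in the calendar control rather than rely on messages alone.

```python
from datetime import datetime, timedelta

def validate_event_datetime(reported: datetime, now: datetime, window_days: int) -> list:
    """Return validation messages for an event date/time entered into the EDeD."""
    errors = []
    if reported > now:
        errors.append("Event date/time cannot be in the future.")
    if reported < now - timedelta(days=window_days):
        errors.append(f"Events more than {window_days} days old can no longer be entered; "
                      "contact the site if a correction is needed.")
    return errors

# Example: a 3-day data entry window
now = datetime(2024, 5, 10, 20, 0)
print(validate_event_datetime(datetime(2024, 5, 6, 8, 0), now, window_days=3))  # too old
print(validate_event_datetime(datetime(2024, 5, 9, 8, 0), now, window_days=3))  # []
```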
Of note, reporters should not be able to make any changes to previous entries made directly in the EDeD once that entry has been submitted8,30; however, there should be a data change request (DCR) procedure in place to address the situation where the reporter realizes they need to make a correction to the recorded data30,31.
Adhere to electronic implementation best practices
Best practices for electronic implementation of PROMs should be followed28.
Further, there are some additional considerations specific to EDeDs with regard to training, which build upon the guidelines outlined in Mowlem et al.28 to provide on-device instruction and training on system use, including “comprehensive instructions on how and when to use the system, and incorporate a training module containing sample questions composed of (at least) the individual response scale types used in the study”. As with any study, ensuring that participants and anyone contributing to data entry understand the importance of the data to the trial is a core component of effective training. In the case of EDeDs, ensuring consistency in reporting across all reporters, through training on what constitutes a discrete event and how to record it, will be a core component of training. Further, the authors recommend setting clear expectations around entering data as close to the event as possible32, including the time interval allowed between the event occurring and reporting it when it is not entered contemporaneously. This includes providing guidance that reporters are not expected to enter events in real-time when they are not well enough to do so. Regarding safety monitoring, reporters must be informed that the EDeD data are not monitored in real-time and are not used for AE detection, and they must be trained on the correct procedures to follow in instances of a safety concern.
As with any clinical technology, well trained and supported sites have a significant positive impact on the participant’s experience33. Dedicating the time and resources to ensure sites have a complete understanding of the EDeD and the enabling eCOA system (relevant software and device) is required for successful deployment.
Ensure usability
Usability testing aims to examine whether a target population can use the hardware and software being implemented in a clinical trial appropriately, including navigating the electronic platform, following instructions, and reading and responding to questions34,35. While recent guidelines have suggested that there is generalizable evidence of usability for many electronic implementations of PROMs28,36, there still may be a desire to confirm usability of the data collection mode in a specific participant population, with a specific technology, or with a specific study design. This may be a particularly important consideration in the context of EDeD, where reporters are expected to proactively engage with the eCOA system and submit data outside the context of a traditional, scheduled questionnaire10. Whether usability testing of the specific EDeD within a specific study is required or usability at the eCOA system level is sufficient should be judged on a case-by-case basis, but the authors recommend having documented usability of a given eCOA system in some capacity. Detailed information on usability testing is provided elsewhere, and we direct the reader to these resources34,36–38.
Implement compliance monitoring
Published data on compliance rates for EDeDs demonstrate high percentages8,13,39, in line with electronic data collection more widely. Ensuring an adequate method to monitor and calculate compliance for any data entry method is important19; however, a significant challenge for EDeDs lies in the unknown quantity of expected reports. Unlike a scheduled assessment, where the number of expected entries within a given time frame is known, for an EDeD the number of expected entries reflects how many events actually occur. If a diary is not submitted on any given day, this could indicate non-compliance, forgetting to report event(s), or the absence of events. This highlights the value of a scheduled reminder to reporters to submit events or to obtain their explicit confirmation that no events occurred during a given time period (See Section 6), and the authors recommend using the scheduled reminder system to facilitate the calculation and tracking of compliance rates.
In addition, the authors recommend defining a threshold for the number of consecutive days of missing entries that will trigger a notification to the site for follow-up resolution. For example, in studies where daily entry is expected, it has been shown that, within a 7-day period, at least 4 days of data entry (i.e., up to three days missing) is robust to missingness across a range of day-to-day variability and maintains the interpretability of the data40 (highlighting the importance of planning for sensitivity analysis). Therefore, in this scenario, in the interests of balancing site burden (by not alerting them to every single day of missing entries) and data quality, and given the likelihood that most studies will allow event reporting for a certain number of days after the event, the authors recommend that the site is notified when two consecutive days of data are missing, hopefully providing adequate time to resolve the issue before more than 3 days of data entry are missed. The threshold should be applicable to the use case and determined based on the frequency of expected event reporting, and so EDeDs where data entry is less frequent would use a different threshold.
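As one way of operationalizing such a threshold (a sketch under the stated assumptions, with a hypothetical `site_alert_needed` helper; the two-day value is the example threshold discussed above, not a fixed rule), consecutive missing days can be checked against the set of days that have either an event report or an active ‘no events’ confirmation.

```python
from datetime import date, timedelta

def site_alert_needed(days_with_entries: set, today: date, threshold: int = 2) -> bool:
    """Flag the site when the most recent `threshold` days each lack both an event
    report and an active 'no events' confirmation."""
    recent = [today - timedelta(days=offset) for offset in range(1, threshold + 1)]
    return all(day not in days_with_entries for day in recent)

# Example: nothing entered yesterday or the day before, so the site is notified
entries = {date(2024, 5, 5), date(2024, 5, 6)}
print(site_alert_needed(entries, today=date(2024, 5, 9), threshold=2))  # True
```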
Illustrative examples of indications that commonly employ EDeDs
Tables 2–5 provide examples of how the best practice recommendations in this paper could be applied to some indications that commonly use EDeDs. We provide examples of the features of data capture for these indications and the types of implications this can have on the EDeD. The examples of the features of data capture should not be taken as recommendations for what data should be captured for a specific indication and are not exhaustive; as outlined in this paper, items in the EDeD should align with the endpoints in the protocol. As the device strategy, adherence to electronic implementation best practices, and ensuring usability remain largely similar as a function of indication, these have not been included.
Conclusions
Despite Gater et al. 10 noting almost a decade ago that “while symptom-related label claims are those most frequently granted by regulatory authorities, no guidance specific to support the development, psychometric evaluation, and interpretation of endpoints derived from patient diaries exists”, there still remains a distinct lack of guidance specifically addressing this data collection approach. Although EDeDs tend to be bespoke designs to meet a specific trial protocol’s requirements, application of general guidelines offers scope for improvements in how these data are captured. Here, based on the collective experience of the authors — representing pharmaceutical companies, eCOA technology and allied service companies, Critical Path Institute staff, and regulatory agencies — and through collating available evidence, we outline best practice recommendations for the design and implementation of EDeDs in clinical trials. The authors believe that filling this gap is a critical contribution to the literature and a much-needed resource to support optimizing data quality in clinical trials and provide a necessary level of standardization when using this form of data capture.
Though EDeDs can be complex in design, implementation, and operational execution, the literature shows that participants (and caregivers/study partners) are willing to complete data capture in a contemporaneous way using EDeDs and that EDeDs can be easy and intuitive to use12. Successful use within clinical trials should focus data capture on the most critical outcomes so that the EDeD solution is not overly long or burdensome and is as simple as possible for the reporter.
However, there is currently a lack of publicly available information to assess the adoption of an event-driven approach to data capture in clinical trials, and an understanding of the design and implementation methods adopted across TAs is hindered by a lack of published resources. Even when published resources are identified, the data collection approach may be described only as “an electronic diary” rather than specifying whether it was event-driven or completed at defined timepoints, while providing sparse details about the actual design and implementation (including the parameters outlined in this article). This sparsity creates challenges for ascertaining what approaches have been taken, including what has and has not been successful. We encourage those using EDeDs to provide more details on the design choices undertaken with this data capture method in published material. For example, understanding how much data are entered in real-time compared to at a later point would help inform the appropriate time window allowed between event occurrence and event reporting in the EDeD.
There is also a lack of shared understanding on the expectation of the required level of evidence to be submitted to regulators to demonstrate that the COA formed by the items in an EDeD is fit-for-purpose for its specific context of use. Whilst guidance on developing COAs in general exists, it does not explicitly address assessments that are capturing specific event information using an event-driven data capture approach. It is widely known that the items within event-driven diaries tend to be created on an as-needed basis for a given study or program, and a perception exists amongst many that event-driven diaries collect more objective data and therefore do not need to be subject to the full development process, including empirical evaluation and documentation of their measurement capabilities1. Others, however, believe EDeDs should follow the same development and validation process as other COAs, demonstrating that they are fit-for-purpose in the target population.
Regardless, an EDeD requires the same thoughtful preparation and rigor regarding implementation as any other COA, especially given they often collect data that are used to derive key endpoints in support of medical product labeling claims.
A clearer expectation around the evidence required to show that an EDeD is fit-for-purpose would be beneficial to support alignment and next steps. Regardless, the authors encourage a level of standardization in the approach taken by those implementing EDeDs, and we hope the best practice recommendations outlined here support this. Additionally, sponsors are advised to engage with regulatory bodies early on what they will accept as evidence of a measure being fit-for-purpose for use to support a labeling claim.
With the future in mind, EDeDs offer an exciting demonstration of the potential of technologies in clinical research. Electronic data capture systems, particularly those paired with wearable sensors, can capture patient insights that are impossible with paper. To date, there has been limited research utilizing this approach, although we are starting to see early work in this area41,42, and we hope this manuscript helps advance the conversation in this regard.
As is the case for all technology-based capture of participant experience in a clinical trial, careful consideration must be given to the needs of participants whose characteristics, or the complexity of whose condition, may make using the technology more difficult. Importantly, the design of an EDeD will benefit from obtaining input from multiple stakeholders, including clinicians, patients, caregivers, data managers, and statisticians, to ensure that the data are easily collectable, clinically relevant, meaningful to patients, comprehensive, can be analyzed as planned, and produce score changes that can be interpreted appropriately.
Acknowledgements
The co-authors would like to acknowledge David Reasner for his contributions to the genesis of this article and articulation of its potential value. In addition, we thank Valdo Arnera, Nikala Kwech, Steve Hwang, Maham Idrees, Joshua Maher and David Reasner for their comments and suggestions. Support for the Electronic Clinical Outcome Assessment (eCOA) Consortium comes from membership fees paid by members of the eCOA Consortium (https://c-path.org/programs/ecoac/). Support for the Patient-Reported Outcome (PRO) Consortium comes from membership fees paid by members of the PRO Consortium (https://c-path.org/programs/proc/). The Critical Path Institute is supported by the Food and Drug Administration (FDA) of the Department of Health and Human Services (HHS) and is 56% funded by the FDA/HHS, totaling $23,740,424, and 44% funded by non-government source(s), totaling $18,881,611. The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, FDA/HHS or the U.S. Government.
Author contributions
F.D.M. led and was responsible for the design of the work. F.D.M., J.J.T., J.V.P., J.B., E.F., J.L.A., K.D., P.O.D., S.G., R.N.C., O.I., S.D.K., J.J., M.F., D.O., R.W., M.B., N.S., S.E., L.H., A.F., L.R., M.C., and S.K. drafted the work or reviewed it critically for important intellectual content, and have read and provided final approval of the version to be published, and agree to be accountable for all aspects of the work.
Data Availability
No datasets were generated or analyzed during the current study.
Competing interests
FDM is a full-time employee of uMotif. JT and RNC are full-time employees of Eli Lilly and Company. JP was a full-time employee of Suvoda. JB is a full-time employee and shareholder of UCB. EF is a full-time employee and stockholder of AstraZeneca. JLA is a full-time employee of AbbVie and owns stock/stock options. KD is a full-time employee of Clario. POD is a full-time employee of Medidata Solutions. SG was a full-time employee of Parexel International and has no competing interests to declare. SDK is a full-time employee and stockholder of GSK. DO is a full-time employee of Otsuka. RW is a full-time employee of Novartis and stockholder of Novartis and Pfizer. MB was a full-time employee of Signant Health. NS is a full-time employee of Parexel International and has no competing interests to declare. LH is a full-time employee of IQVIA. AF is a full-time employee of Medrio. LR is a full-time employee of AbbVie. SE and SK are full-time employees of Critical Path Institute and have no competing interests to declare. OI, JJ, MF and MC are full-time employees of the US Food and Drug Administration and have no competing interests to declare.
Footnotes
^JP is no longer at Suvoda, SG is no longer at Parexel, and MB is no longer at Signant Health, but they were at the time of this project.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Lists of authors and their affiliations appear at the end of the paper.
Change history
1/27/2026
A Correction to this paper has been published: 10.1038/s41746-026-02396-w
Contributor Information
Florence D. Mowlem, Email: florence.mowlem@umotif.com
The Patient-Reported Outcome Consortium:
Jos Bloemers, Emuella Flood, Jessica L. Abel, Ryan Naville-Cook, Randall Winnette, Sonya Eremenco, Luisana Rojas, and Jeremiah J. Trudeau
References
- 1. U.S. Department of Health and Human Services Food and Drug Administration. Patient-Focused Drug Development: Selecting, Developing, or Modifying Fit-for-Purpose Clinical Outcome Assessments. (FDA, 2025).
- 2. Izmailova, E. S. & Ellis, R. D. When work hits home: the cancer-treatment journey of a clinical scientist driving digital medicine. JCO Clin. Cancer Inform. 10.1200/CCI.22.00033 (2022).
- 3. Brinkmann, B. H. et al. Seizure diaries and forecasting with wearables: epilepsy monitoring outside the clinic. Front. Neurol. 12, 690404 (2021).
- 4. Regalia, G., Onorati, F., Lai, M., Caborni, C. & Picard, R. W. Multimodal wrist-worn devices for seizure detection and advancing research: focus on the Empatica wristbands. Epilepsy Res. 153, 79–82 (2019).
- 5. Fisher, R. S. et al. Seizure diaries for clinical research and practice: limitations and future prospects. Epilepsy Behav. 24, 304–310 (2012).
- 6. Abrams, P. et al. Electronic bladder diaries of differing duration versus a paper diary for data collection in overactive bladder. Neurourol. Urodyn. 35, 743–749 (2016).
- 7. Quinn, P., Goka, J. & Richardson, H. Assessment of an electronic daily diary in patients with overactive bladder. BJU Int. 91, 647–652 (2003).
- 8. Patel, J. et al. Use of an electronic seizure diary in a randomized, controlled trial of natalizumab in adult participants with drug-resistant focal epilepsy. Epilepsy Behav. 118, 107925 (2021).
- 9. Miller, K. R., Barnard, S., Juarez-Colunga, E., French, J. A. & Pellinen, J. Long-term seizure diary tracking habits in clinical studies: evidence from the Human Epilepsy Project. Epilepsy Res. 203, 107379 (2024).
- 10. Gater, A., Coon, C. D., Nelsen, L. M. & Girman, C. Unique challenges in development, psychometric evaluation, and interpretation of daily and event diaries as endpoints in clinical trials. Ther. Innov. Regul. Sci. 49, 813–821 (2015).
- 11. Norquist, J. M., Girman, C., Fehnel, S., DeMuro-Mercon, C. & Santanello, N. Choice of recall period for patient-reported outcome (PRO) measures: criteria for consideration. Qual. Life Res. 21, 1013–1020 (2012).
- 12. Dellon, E. S. et al. Mo1184 An episode-based patient-reported outcome measure of dysphagia experience is feasible and not burdensome for patients with eosinophilic esophagitis. Gastroenterology 158, S-818 (2020).
- 13. Khurana, L. & Emerson, J. The impact of trial design on diary compliance in clinical trials for acute and preventive migraine treatments (P7-12.001). Neurology 102, 6532 (2024).
- 14. Bidwell, J., Khuwatsamrit, T., Askew, B., Ehrenberg, J. A. & Helmers, S. Seizure reporting technologies for epilepsy treatment: a review of clinical information needs and supporting technologies. Seizure 32, 109–117 (2015).
- 15. Fehnel, S. E. et al. Development of the diary for irritable bowel syndrome symptoms to assess treatment benefit in clinical trials: foundational qualitative research. Value Health 20, 618–626 (2017).
- 16. Birring, S. S. et al. Efficacy and safety of gefapixant in women with chronic cough and cough-induced stress urinary incontinence: a phase 3b, randomised, multicentre, double-blind, placebo-controlled trial. Lancet Respir. Med. 12, 855–864 (2024).
- 17. US Food and Drug Administration, Center for Drug Evaluation and Research. Application Number: 212102Orig1s000. https://www.accessdata.fda.gov/drugsatfda_docs/nda/2020/212102Orig1s000OtherR.pdf (FDA, 2020).
- 18. Center for Drug Evaluation and Research. Application Number: 202611Orig1s000 Summary Review. https://www.accessdata.fda.gov/drugsatfda_docs/nda/2012/202611Orig1s000SumR.pdf (FDA, 2012).
- 19. U.S. Food and Drug Administration. Patient-Focused Drug Development: Incorporating Clinical Outcome Assessments into Endpoints for Regulatory Decision-Making. (FDA, 2023).
- 20. Stull, D. E., Leidy, N. K., Parasuraman, B. & Chassany, O. Optimal recall periods for patient-reported outcomes: challenges and potential solutions. Curr. Med. Res. Opin. 25, 929–942 (2009).
- 21. Miller, V. E. et al. Comparing prospective headache diary and retrospective four-week headache questionnaire over 20 weeks: secondary data analysis from a randomized controlled trial. Cephalalgia 40, 1523–1531 (2020).
- 22. Bensink, M. et al. Tracking migraine digitally: the future of migraine management. J. Nurse Practitioners 17, 462–470 (2021).
- 23. US Food and Drug Administration. Migraine: Developing Drugs for Acute Treatment Guidance for Industry. (FDA, 2018).
- 24. Morris, C., Gibbons, E. & Fitzpatrick, R. Child and Parent Reported Outcome Measures: A Scoping Report Focusing on Feasibility for Routine Use in the NHS. (NHS, 2009).
- 25. Matza, L. S. et al. Pediatric patient-reported outcome instruments for research to support medical product labeling: report of the ISPOR PRO good research practices for the assessment of children and adolescents task force. Value Health 16, 461–479 (2013).
- 26. Mowlem, F. D., Tenaerts, P., Gwaltney, C. & Oakley-Girvan, I. Regulatory acceptance of patient-reported outcome (PRO) data from bring-your-own-device (BYOD) solutions to support medical product labeling claims. Ther. Innov. Regul. Sci. 56, 531–535 (2022).
- 27. Byrom, B., Gwaltney, C., Slagle, A., Gnanasakthy, A. & Muehlhausen, W. Measurement equivalence of patient-reported outcome measures migrated to electronic formats: a review of evidence and recommendations for clinical trials and bring your own device. Ther. Innov. Regul. Sci. 53, 426–430 (2019).
- 28. Mowlem, F. D. et al. Best practices for the electronic implementation and migration of patient-reported outcome measures. Value Health 27, 79–94 (2024).
- 29. Peasgood, T., Caruana, J. M. & Mukuria, C. Systematic review of the effect of a one-day versus seven-day recall duration on patient reported outcome measures (PROMs). Patient Patient-Centered Outcomes Res. 16, 201–221 (2023).
- 30. Delong, P. S. et al. Best practice recommendations for electronic clinical outcome assessment data changes. J. Soc. Clin. Data Manage. 1, 249 (2023).
- 31. European Medicines Agency. Guideline on Computerised Systems and Electronic Data in Clinical Trials. https://www.ema.europa.eu/en/documents/regulatory-procedural-guideline/draft-guideline-computerised-systems-electronic-data-clinical-trials_en.pdf (EMA, 2021).
- 32. Bolger, N., Davis, A. & Rafaeli, E. Diary methods: capturing life as it is lived. Annu. Rev. Psychol. 54, 579–616 (2003).
- 33. Ly, J. J. et al. Training on the use of technology to collect patient-reported outcome data electronically in clinical trials: best practice recommendations from the ePRO consortium. Ther. Innov. Regul. Sci. 53, 431–440 (2019).
- 34. Coons, S. J. et al. Recommendations on evidence needed to support measurement equivalence between electronic and paper-based patient-reported outcome (PRO) measures: ISPOR ePRO good research practices task force report. Value Health 12, 419–429 (2009).
- 35. Muehlhausen, W. et al. Standards for instrument migration when implementing paper patient-reported outcome instruments electronically: recommendations from a qualitative synthesis of cognitive interview and usability studies. Value Health 21, 41–48 (2018).
- 36. O’Donohoe, P. et al. Updated recommendations on evidence needed to support measurement comparability among modes of data collection for patient-reported outcome measures: a good practices report of an ISPOR task force. Value Health 26, 623–633 (2023).
- 37. Aiyegbusi, O. L. Key methodological considerations for usability testing of electronic patient-reported outcome (ePRO) systems. Qual. Life Res. 29, 325–333 (2020).
- 38. US Food and Drug Administration. Applying Human Factors Testing and Usability Engineering to Medical Devices. https://www.fda.gov/media/80481/download (FDA, 2016).
- 39. McKenzie, S. et al. Proving the eDiary dividend: eDiary data can reduce recall bias and enhance the sensitivity of PRO data, paying off in reduced error variance and significant financial savings in future clinical trials. Appl. Clin. Trials 13, https://www.appliedclinicaltrialsonline.com/view/proving-ediary-dividend (2004).
- 40. Griffiths, P., Williams, A. & Brohan, E. How do the number of missing daily diary days impact the psychometric properties and meaningful change thresholds arising from a weekly average summary score? Qual. Life Res. 31, 3433–3445 (2022).
- 41. Delobelle, J. et al. Fitbit’s accuracy to measure short bouts of stepping and sedentary behaviour: validation, sensitivity and specificity study. Digit. Health 10, 20552076241262710 (2024).
- 42. Schuler, T. et al. Wearable-triggered ecological momentary assessments are feasible in people with advanced cancer and their family caregivers: feasibility study from an outpatient palliative care clinic at a cancer center. J. Palliat. Med. 26, 980–985 (2023).
- 43. Bastyr, E. J. et al. Performance of an electronic diary system for intensive insulin management in global diabetes clinical trials. Diab. Technol. Ther. 17, 571–579 (2015).