Abstract
Data collection is an important component of evidence-based behavioral interventions for children with autism, but many one-to-one aides (i.e., behavioral support staff) do not systematically collect the quantitative data that are necessary for best-practice client progress monitoring. Data collection of clients’ behaviors often involves labor-intensive pen-and-paper practices. In addition, the solitary nature of one-to-one work limits opportunities for timely supervisor feedback, potentially reducing motivation to collect data. We incorporated principles from behavioral economics and user-centered design to develop a phone-based application, Footsteps, to address these challenges. We interviewed nine one-to-one aides working with children with autism and seven supervisors to ask for their app development ideas. We then developed the Footsteps app prototype and tested it with 10 one-to-one aides and supervisors across three testing cycles. At each cycle, one-to-one aides rated app usability. Participants provided 76 discrete suggestions for improvement, including 29 new app features (e.g., behavior timer), 20 feature modifications (e.g., numeric type-in option for behavior frequency), four flow modifications (e.g., deleting a redundant form), and 23 out-of-scope suggestions. Of the participants who tested the app, 90% rated its usability as good or excellent. Results support continuing to develop Footsteps and testing its impact in a clinical trial.
Keywords: Digital mental health, m-Health applications, Behavioral data collection, Autism spectrum disorder, Behavioral economics, User-centered design, Participatory design
The use of technology in the delivery, coordination, and monitoring of therapeutic or behavioral health interventions continues to grow (Raney et al., 2017). However, few mobile applications are available for one-to-one aides (i.e., behavioral support staff) or therapists to track clients’ progress. Two main barriers limit client data tracking. First, data collection often relies on pen-and-paper methods, which are subject to error and impede the delivery of services in fast-paced and busy therapeutic environments (Dale & Hagen, 2007; Le Jeannic et al., 2014; Riggleman, 2021). This reliance on pen-and-paper methods also complicates behavior tracking over time and data sharing among team members (Riggleman, 2021). Managing children’s challenging behaviors while keeping up with data collection requirements can be extremely difficult and stressful for one-to-one aides and therapists (Riggleman, 2021). Indeed, data show that challenging behaviors in children with autism contribute to burnout among staff (Hastings & Brown, 2002). Quantitative data collection (e.g., tracking the frequency of a child’s challenging behaviors) in autism treatment programs is necessary for high-quality treatment planning but is challenging to achieve. Data collection is consistent with best practice guidelines for autism because it ensures that progress is tracked and that goals and strategies are updated accordingly (Steinbrenner et al., 2020). However, many agencies still use paper-and-pen data collection systems (Marcu et al., 2013) or only require qualitative summary notes of sessions, both of which impede accuracy and efficiency in progress monitoring.
Second, the solitary nature of therapy and behavioral health work, which is often delivered in a one-to-one format, limits opportunities for timely supervisor feedback regarding data collection (Melamed et al., 2001). The complex nature of challenging behavior in children with autism (Cohen et al., 2011), along with the complex nature of autism behavioral interventions (Steinbrenner et al., 2020), necessitates timely supervisor feedback (Rispoli et al., 2011), but large caseloads and often inadequate resources mean that agencies cannot provide appropriate supervision. Specific to the current study, supervisor oversight of data collection on children’s behaviors and progress over time is a critical component of effectively understanding children’s changing support needs.
Digital technology could not only replace pen-and-paper data collection methods but also address the need for timely feedback on data collection at a lower cost than traditional supervision. Although many technologies have been applied to therapy for individuals with autism, from augmentative and alternative communication devices to robots designed to improve social interaction (Goldsmith & LeBlanc, 2004; Kientz et al., 2013; Sutherland et al., 2018), few technologies are designed to support their one-to-one aides or therapists (Nuske & Mandell, 2021). Digital technology has supported the implementation of evidence-based practices in other sectors such as healthcare. For example, data show that use of computerized clinical decision support systems with individualized real-time reminders is associated with higher fidelity to evidence-based practices and better patient outcomes among providers who work in chaotic environments, such as ambulatory clinics (Hunt et al., 1998; Saleem et al., 2005, 2007; Vashitz et al., 2007). Therefore, digitizing data collection in one-to-one programs and incorporating timely digital feedback on data collection has the potential to form an elegant solution to the barriers identified above.
To address these barriers, user-centered design practices are critical for ensuring a genuine problem-solution fit: digital technology that is endorsed by key stakeholders while also being feasible and usable. User-centered design practices include engaging with key community stakeholders early and often at every step of the technology development process, from conception to prototype development to refinement to testing and dissemination (Abras et al., 2004). Several user-centered design methods are available to gather stakeholders’ needs and design ideas, including interviews, field observations, field tests, surveys, focus groups, and community advisory boards (Dopp et al., 2019).
Another approach to addressing implementation barriers is to use principles from behavioral economics. Behavioral economics incorporates findings from social and cognitive psychology into the study of decision making, with a particular focus on irrational heuristics and biases (Mullainathan & Thaler, 2000; Samson, 2014). Behavioral economics principles have previously been applied to health technology development to improve physical health outcomes using smartphone mobile applications and wearable devices (Case et al., 2015; Cotton & Patel, 2019; Kim & Patel, 2018; Patel et al., 2017, 2020). However, there is limited work on integrating behavioral economics with mental health (Beidas et al., 2019), and interest in integrating behavioral economics with education is still growing (Jabbar, 2011; Koch et al., 2015; Lavecchia et al., 2016; Levitt et al., 2016; List et al., 2018). To enact behavioral change, we used the behavioral economics framework developed by the international Behavioural Insights Team: EAST (Easy, Attractive, Social and Timely; EAST, n.d.).
Current Study
We employed several user-centered design approaches and incorporated behavioral economics principles (EAST, see above) to adapt an existing mobile application and associated web portal for one-to-one aides who support children with autism in reaching their educational and behavioral goals. First, we conducted interviews with one-to-one aides and supervisors to understand how to improve data collection procedures. This feedback directly informed the development of the app. Second, once we had a working prototype of the app, we iteratively improved it by gathering detailed stakeholder feedback from newly recruited one-to-one aides and supervisors across three iterative app testing cycles.
Materials and Methods
Setting and Context
We partnered with three behavioral health agencies in Philadelphia, Pennsylvania, USA that employ one-to-one aides (usually bachelors-prepared individuals) and clinical supervisors (usually board-certified behavior analysts) who work with children with autism in community settings. Children with autism in Pennsylvania often qualify for a one-to-one aide because they present with challenging behaviors that require additional support. Although one-to-one aides often work in schools, they are hired from an outside agency and do not report to the classroom teacher. Instead, they have a clinical supervisor who provides periodic (often monthly) supervision at their agency rather than at the school.
Participants
Interview participants.
The research team visited the partnering community behavioral health agencies in the Philadelphia region to recruit one-to-one aides and their supervisors. The team presented the study to behavioral health support staff and their supervisors and invited them to participate. Our inclusion criterion was that staff and their supervisors currently worked with at least one student diagnosed with or educationally classified as having autism on their caseload who attended a Philadelphia public school (Pre-K through grade 12). Behavioral health support staff who worked only in home settings or daycares were ineligible. Research staff conducted 16 interviews in total, nine with one-to-one aides and seven with supervisors. All participants gave informed consent before participating in the study.
App testing cycles participants.
Research staff recruited 10 behavioral support workers to participate in three app testing cycles. All participants from the first two cycles were invited to participate in later cycles; however, due to the COVID-19 pandemic there was a substantial delay between cycles 2 and 3, resulting in only one participant carrying over to cycle 3 (none participated in all cycles, five participated in two cycles, five participated in one cycle). Inclusion criteria for the one-to-one aides were as follows: 1) they had to have supported children with autism in a school, daycare, home, or community setting in February or March of 2020, before the shutdown due to the COVID-19 pandemic, or to be currently working with clients with autism in any capacity, including remotely over web conferencing software (for cycles 1 and 2; see Procedure section); and 2) they had to currently work in a school, daycare, home, or community setting (for cycle 3 only). See Table 1 for all participant demographic characteristics. All participants gave informed consent before participating in the study.
Table 1.
Participant Demographic Characteristics
| | Interviews (n = 16) | App Testing Cycles (n = 10) |
|---|---|---|
| Race | | |
| Black or African American | 7 | 7 |
| White | 5 | 1 |
| Asian | 1 | 0 |
| Native Hawaiian or Pacific Islander | 1 | 0 |
| Prefer not to disclose | 0 | 2 |
| Other | 1 | 0 |
| Missing | 1 | 0 |
| Hispanic or Latino/a/x | | |
| No | 11 | 7 |
| Yes | 3 | 1 |
| Prefer not to disclose | 0 | 2 |
| Missing | 2 | 0 |
| Gender | | |
| Female | 10 | 8 |
| Male | 5 | 0 |
| Prefer not to disclose | 0 | 2 |
| Missing | 1 | 0 |
Materials
Interview guides.
The research team developed two semi-structured interview guides adapted from the ‘Theory Informed Topic Guide’ (Potthoff et al., 2019), one to interview behavioral health support staff and one to interview their clinical supervisors. Both interview guides included questions about current assessment practices and how to make data collection easier, and the one-to-one aide version also included questions about their beliefs and attitudes around data collection.
App development.
The study team researched autism data collection applications available for download in the United States and identified six for consideration. The research team discussed the strengths and weaknesses of each application and created a list of basic requirements, including HIPAA compliance, offline availability, and the option to download data as a .csv file. This narrowed the list down to three applications. The team scheduled meetings with the developers of these three applications to gauge their interest in the study and learn more about each application’s capabilities. Based on a variety of factors (e.g., cost, ease of use, user interface, data export options, device compatibility [i.e., iPhone and Android, iPad and tablet], interest and availability to build in new features as per the aims of the study), the team partnered with a digital health technology company that focuses on customized digital platforms to support complex healthcare needs.
Our partnering behavioral health agency leaders advised on app design and facilitated recruitment across studies. Prior to the current study, we also hosted an innovation tournament, a mechanism to crowdsource ideas on a topic (Terwiesch & Ulrich, 2009), through the University of Pennsylvania’s Your Big Idea platform (About – Your Big Idea, n.d.). Our group has found this method to be ideal for gathering ideas as a starting place for solving a specific problem in behavioral health (Beidas et al., 2019; Stewart et al., 2019). In this innovation tournament, we asked one-to-one aides and their supervisors for ideas on how to make data collection for behavioral support workers easier and more motivating, so as to identify the most innovative solutions (Terwiesch & Ulrich, 2009). Seventy-one percent of the ideas submitted in our innovation tournament suggested some form of technology or could be implemented via a data collection application to aid one-to-one aides in sessions with children, which provided support for the app development idea. After completing the innovation tournament, research staff conducted the interviews with one-to-one aides and their supervisors. We then completed three app testing cycles (see above) with one-to-one aides. Feedback gathered through each of these user-centered design methods informed the final development of the Footsteps client data collection app. See Figure 1 for a schematic of the app development process.
Figure 1.
App development process.
Footsteps: Client data collection app.
Our application was designed to address barriers inherent to data collection by making it easier and more attractive to take data on therapy programs, applying the EAST behavioral economics principles described above. The app was intended to be Easy: featuring basic digital data collection features; Attractive: including client and provider data graphs; Social: showing comparisons to agency expectations and including a supervisor-provider messaging platform; and Timely: giving start-of-session feedback messages on the previous session’s data collection performance (percentage of intervals in which data were collected, either synchronously or asynchronously), as well as in-session and end-of-week pop-up reminders. The basic data collection forms are personalized for each client to include the behaviors and/or skills being tracked in their therapy program. The basic data collection features and behavioral economics features are described in more detail in Table 2. See Figure 2 for images of the app’s main features.
Table 2.
Footsteps App: Basic Data Collection and Behavioral Economics Features
| App Tile* | Basic Data Collection Features | Behavioral Economics Features |
|---|---|---|
| Start Session | Start session, edit session length, or end session early. Choose location (e.g., School, Home, Community, Daycare, Other). | Encouraging or celebratory feedback message upon starting a session, depending on whether the agency’s threshold for the percentage of intervals in which data should be taken was reached in the previous session (e.g., “Well done for logging data for 100% of intervals in your last session. Let’s keep it up today!”). |
| Track Behavior/Skills Form | Behavior name, definition, and associated goal. Time the behavior occurred (in case data are collected asynchronously). Behavior metrics (chosen upon account set-up), including frequency, duration, intensity (three levels with editable descriptors), % of opportunities, and context. Additional notes. | “Behavior did not occur” quick button (as a nudge to record the absence of a behavior). In-session push notification reminders if data have not been collected during a time interval (e.g., at the end of the hour; interval set on account set-up): “Hello! Just reminding you to take data for this hour”. |
| See Data Graphs | Data graph of the client’s behavioral data based on data entry, available on the associated web platform. | Data graph of the client’s behavioral data based on data entry, available in the app and on the associated web platform. Two data graphs of the one-to-one aide’s data entry (% of intervals in which data were collected): 1) comparing the current week with the previous week, with an encouraging or celebratory feedback message based on the current week’s data collection performance, and 2) comparing the current week with the agency expectation threshold (% of intervals in which data should be taken), with encouraging (“Room for Improvement”) or celebratory feedback messages (“Good”, “Great”, “You are a top performer!”) and associated graphics (e.g., smiley faces, fireworks GIF). End-of-week (Friday, 4pm) push notification reminders to check the week-to-week comparison graphs: “Let’s review your data this week compared to last week”. |
| Summary Note | Free-form text field, available once at least one quantitative behavior form has been entered during a session. | |
| Timer | Stopwatch timer to make it easier to collect behavior duration data. | |
| Messaging Platform | Basic messaging platform available via the app and/or associated web platform, which can be used for supervisor interaction. | |

Note. *As per the Start Page shown in Figure 2.
Figure 2.
App Design Incorporating Behavioral Economics EAST (Easy, Attractive, Social and Timely) Principles.
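To make the feedback logic described in Table 2 concrete, the minimal sketch below (in Python) shows one way a start-of-session message could be selected from the previous session’s data collection performance. The threshold values, message wording, and function name are illustrative assumptions, not the exact implementation used in Footsteps.

```python
# Illustrative sketch only (not the production Footsteps code): selecting a
# start-of-session feedback message from the percentage of intervals in the
# previous session in which data were logged. Threshold and wording are
# assumptions for demonstration.
def start_session_message(pct_intervals_logged: float, agency_threshold: float = 80.0) -> str:
    """pct_intervals_logged: % of intervals with data in the previous session.
    agency_threshold: agency expectation for % of intervals with data."""
    if pct_intervals_logged >= 100:
        return ("Well done for logging data for 100% of intervals in your "
                "last session. Let's keep it up today!")
    if pct_intervals_logged >= agency_threshold:
        return (f"Great job! You logged data in {pct_intervals_logged:.0f}% "
                "of intervals last session. Let's keep it up today!")
    return "Room for improvement. Let's try to log data in every interval today!"

# Example: an aide who logged data in 85% of intervals last session
print(start_session_message(85))
```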
Procedure
Interviews.
Screening calls were conducted to determine eligibility. All interviews were audio recorded and transcribed once completed. Any app update suggestions were flagged and compiled for consideration by the research team and the mobile developer for future app iterations.
App testing.
After the interviews were completed, we incorporated participant feedback into the app design (see Footsteps: Client Data Collection App section above) and prepared the app for testing. In each app testing cycle, we showed one-to-one aides a beta version and had them complete exercises to give us feedback on how to improve the app. In each cycle, we conducted screening calls with participants to determine eligibility, gather details to create a customized participant account, and familiarize the participant with the application. Although we had planned for app testing to take place in schools, due to COVID-19 restrictions the first two of the three testing cycles took place in the one-to-one aides’ home offices, where they used the app as they would usually do in schools with their clients.
In cycle 1, four participants viewed screenshots of the app and participated in two “think aloud” exercises on how they would take data in real time (Jaspers et al., 2004). Each think aloud exercise required the participant to think about a common behavior they often observe and report how they would take data on it using the basic data entry features of the app, highlighting anything about the app that was confusing or that they would change.
In cycle 2, six participants downloaded the app onto their phones. They then completed two think aloud exercises, this time based on two videos of therapy sessions with children with autism. After each video, the participant took data on the child’s behaviors using the application while telling the research staff how they were navigating the basic data entry features of the app in real time. Participants then provided feedback on the first draft of the behavioral economics features, including usability and layout of the child and data collection graphs, and impressions on receiving reminders to take data via push notifications, again highlighting anything about the app that was confusing or that they would change.
In cycle 3, five participants tested the live app prototype. Research staff met with each participant via a video conferencing platform to complete a brief training on how to use the application and the web portal to download child data entered into the app. Staff asked for details of the participant’s client with autism to create an individualized app account. Once the account was set up, participants used the application with their client for at least two sessions. After this was completed, research staff conducted post-test interviews with participants to gather feedback on the basic data collection features and the behavioral economics features. This included the layout, features, and functionality of each form and feature in the application: 1) the start session form with feedback on the previous session’s data collection performance (percentage of intervals in which data were collected, either synchronously or asynchronously); 2) behavior forms; 3) child data graphs; 4) data collection performance graphs; 5) in-session reminders to take data; 6) the data collection feedback message at the end of the week; and 7) the messaging platform. The interview guide asked semi-structured questions about feedback and improvement ideas on the basic data collection and the behavioral economics features. The research team also included a set of questions on the general experience of using the app in each cycle that assessed usability, acceptability, feasibility, appropriateness, burden, and comparison to other systems or applications. As with the interviews, any app update suggestions were flagged and compiled for consideration by the research team and the mobile developer for future app iterations.
To measure the usability of the app in each testing cycle, participants also completed the System Usability Scale (SUS; Brooke, 1996). The SUS is a 10-item questionnaire on the usability of a system or product with five Likert scale response options ranging from strongly disagree (1) to strongly agree (5), with higher total scores (calculated as a proportion score out of 100) indicating higher usability: ≥ 85 = Excellent; ≥ 71 = Good; ≥ 51 = Okay (Bangor et al., 2008). We adapted the measure for this study by replacing “the/this system” with “the/this app” across all items.
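For readers unfamiliar with SUS scoring, the brief sketch below (in Python) illustrates the standard calculation (Brooke, 1996) and the usability categories from Bangor et al. (2008) used here; the function names are ours and the example responses are hypothetical.

```python
# Standard SUS scoring (Brooke, 1996): odd-numbered items are positively
# worded (contribution = rating - 1); even-numbered items are negatively
# worded (contribution = 5 - rating); the summed contributions are
# multiplied by 2.5 to yield a 0-100 score. Example data are hypothetical.
def sus_score(responses):
    """responses: ten Likert ratings (1-5), in item order 1-10."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    contributions = [
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    ]
    return sum(contributions) * 2.5

def usability_category(score):
    """Categories used in this study (Bangor et al., 2008)."""
    if score >= 85:
        return "Excellent"
    if score >= 71:
        return "Good"
    if score >= 51:
        return "Okay"
    return "Not acceptable"

example = [5, 2, 4, 1, 5, 2, 4, 2, 4, 1]  # hypothetical responses
print(sus_score(example), usability_category(sus_score(example)))  # 85.0 Excellent
```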
Supervisors of one-to-one aides were asked for feedback during cycle 1 and cycle 3 during agency advisory meetings. As supervisors do not work directly with clients, we gathered their feedback during these meetings by showing them the versions of the app prototype using Figma, an interactive web-based design tool that allowed supervisors to view the app as it would appear on their phone (e.g., with scroll capability). We did not ask them to rate the app on the SUS because they were not directly using the app with clients, so the 10 one-to-one aides who gave usability ratings all had experience with the live prototype. We did, however, ask supervisors for their app improvement ideas given their wealth of experience in the field so we could further improve the app.
Reliability on categorizing app improvement ideas.
Two coders categorized app improvement ideas from interview participants as either 1) a feature to consider adding or 2) out of scope for the goals of the project (100% overlap on coding, with M = 95% agreement on categories, range 92%–100%). App improvement ideas from the app testing cycles were categorized as either 1) a new feature or 2) a feature modification. App improvement suggestions were discussed with the entire research team and with the mobile developer for consideration of adding them to the app.
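As a point of reference, the short sketch below (with an illustrative function name and hypothetical data, not the study’s actual coding) shows how percent agreement between two coders can be computed when each improvement idea receives one category label per coder.

```python
# Illustrative only: percent agreement between two coders who each assign one
# category label per improvement idea. Labels and data below are hypothetical.
def percent_agreement(coder_a, coder_b):
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of ideas")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

coder_a = ["new feature", "feature modification", "out of scope", "new feature"]
coder_b = ["new feature", "feature modification", "out of scope", "feature modification"]
print(percent_agreement(coder_a, coder_b))  # -> 75.0
```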
Results
Below we present the app improvement ideas results and app usability results as gathered via interviews and app testing.
App Improvement Idea Results
One-to-one aides and supervisors provided 76 discrete suggestions for improvement; 53 were actionable suggestions, including 29 new app features (e.g., interval data collection form), 20 feature modifications (e.g., numeric type-in option for behavior frequency), and four flow modifications (e.g., deleting a redundant behavior form submission confirmation). Twenty-three other suggestions were not actionable (e.g., they conflicted with the core project aim of incorporating motivational messages grounded in behavioral economic theory, or fell outside the project’s scope of creating an app for one-to-one aides rather than for youth). As shown in Tables 3 and 4, one-to-one aides and supervisors helped to design and refine the application, and largely had consistent ideas for improving the application.
Table 3.
App Improvement Ideas: Interviews
| Category | Participants | Improvement Idea/Feedback |
|---|---|---|
| Feature | Supervisors | Taking frequency and duration data by pushing a button in the app every time a behavior occurs. |
| | | Take duration of behavior using an app. |
| | | Positive messaging when collecting data. |
| | | Include a visual of the data. |
| | | A recorder button to record a behavior when it occurs and take duration data. |
| | | Take data on replacement behavior(s). |
| | | Make the app work with and without WiFi or cellular data. |
| | | Add a timer in the app. |
| | | Sync the app to determine reliability when the support worker and supervisor are taking data. |
| | One-to-one aides | Customizing the app with client behaviors. |
| | | Make it simple to take data, such as clicking a + or – sign to indicate if a client exhibited a behavior. |
| | | Add a note section. |
| | | Customize the behavior, including the time interval per behavior. |
| | | A form or device to take data by pushing a button. |
| | | Include a list of behaviors to take data on in an app. |
| | | Include duration and intensity to take data on in an app. |
| | | Show visual data with a graph. |
| | | Add percentage option for frequency. |
| | | Take data in an app by including a button per behavior that you can click to take frequency. |
| | | Take data on a handheld device. |
| | | Take data in an app by including a button per behavior that you can click to take frequency and duration. |
| | | Add a note section. |
| | | User interface should be easy to use. |
| | | Take data on a phone app. |
| | | Add a timer in an app. |
| | | Take data on a device that does not require internet. |
| | | Have the option to record behaviors in real time. |
| Out of scope | Supervisors | Be able to record interventions and outcomes. |
| | | Be able to copy the data into the electronic health record (EHR). |
| | | Be able to customize the app to match the format of the agency’s EHR. |
| | | Provide WiFi. |
| | | An interactive component to teach how to use the app (e.g., Clippy from Microsoft Word). |
| | | Be able to choose between frequency and partial interval data. |
| | One-to-one aides | Record the data by using Bluetooth technology. |
| | | Record the data by using a microphone. |
| | | Provide multiple different ways to take data through technology. |
| | | A clicker to record data for behaviors that happen frequently. |
| | | Be able to customize the app in real time (add behaviors). |
| Flow modification | Supervisors | Add a way to log multiple behaviors at once (for behaviors that often happen concurrently). |
| | One-to-one aides | Add a way to log multiple behaviors at once (for behaviors that often happen concurrently). |
| | | Remove the need to press the "close" button after submitting a behavior. |
| Out of scope | Supervisors | Tally button for each behavior on start page. |
| | One-to-one aides | Tally button for each behavior on start page. |
| | | Include the agency's progress note form within the app. |
| | | Incorporate client-facing features in the app (e.g., show the data graphs to the client to show progress and show positive messages on behalf of the client). |
Table 4.
App Improvement Ideas: App Testing Cycles
| Cycle | Category | Participants | Improvement Idea/Feedback |
|---|---|---|---|
| Cycle 1 | New Feature | Supervisors | Line graph to visually display patterns in behavior. |
| | | | Interval data collection form. |
| | | One-to-one aides | A function to compile individual notes at the end of the session. |
| | Feature modification | Supervisors | Percentage of opportunities button in 10% increments. |
| | | | Numeric type-in option for frequency (type in number). |
| | | One-to-one aides | Numeric type-in option for frequency (type in number). |
| | Flow modification | One-to-one aides | Remove "Behavior occurred" button. |
| | Out of scope | Supervisors | Link the app to the agency's scheduler to make the app align with billing requirements. |
| | | One-to-one aides | Intervention data collection tab. |
| | | | E-sign the data submitted after each session. |
| | | | Add behavior form section to capture the outcome of the behavior (mood/redirection/outcome). |
| Cycle 2 | New Feature | One-to-one aides | N/A |
| | Feature modification | One-to-one aides | Reviewing data in the graphs by month would be more useful than by week. |
| | | | Wants a more detailed description of the behaviors along with the goal description. |
| | | | Only view one day's worth of intervals in the child data graph instead of multiple days at one time. |
| | | | Have additional details fields (frequency, duration, etc.) pop up automatically after selecting "yes, behavior occurred". |
| | | | Include an option in the app to note that the client "left early" or was a "no show". |
| | | | BHT has a super flexible schedule that changes frequently, so "start" and "end" session buttons would be useful instead of having a set schedule. |
| | Flow modification | One-to-one aides | N/A |
| | Out of scope | One-to-one aides | Include a component in the app that the child can engage with. |
| | | | Include a way to track antecedents and interventions. |
| | | | Have the app match the one-to-one aides’ data sheet identically. |
| Cycle 3 | New Feature | Supervisors | Add a method to easily take note of the absence of a behavior. |
| | | One-to-one aides | N/A |
| | Feature modification | Supervisors | Timer button on start page. |
| | | | Add the option to specify location (home, daycare, school, community) since some kids receive services in multiple locations. |
| | | | Change “significant” to “severe” when describing levels of severity. |
| | | | Make the severity definitions appear before a BHT selects a severity level option (e.g., “expand all definitions” option, “view definitions” option). |
| | | | Change % of opportunities to a drop-down where the user can input the # of successful opportunities and the total # of opportunities. |
| | | | Add option on behavior form to add context (e.g., whole class instruction, 1:1, option to edit list). |
| | | | Change the client data graph to a line graph instead of a bar graph. |
| | | One-to-one aides | On Start Page, have option to manually enter the exact start time. |
| | | | Opportunity to record more behaviors, specifically positive behaviors. |
| | | | Make the app compile the notes from the behavior forms and summary note(s) together into one compilation of notes at the end of a session. |
| | | | Add a button you can push that says the behavior did not occur. |
App Usability Results
As shown in Figure 3, most participants (70%) rated the Footsteps client data collection app as “Excellent” on usability (total SUS score) by their final testing cycle. Two participants rated the app as “Good” and one rated it as not acceptable (< 51) by their final testing cycle.
Figure 3.
System Usability Scale (SUS) Scores Across App Testing Cycles. Each dot color represents one participant. Higher total scores (calculated as a proportion score /100) indicate higher usability: ≥ 85 = Excellent; ≥ 71 = Good; ≥ 51 = Okay (Bangor et al., 2008).
Discussion
We developed an app to track client behaviors and therapy progress in partnership with behavioral health agencies, incorporating feedback from supervisors and one-to-one aides at every development stage, from conception to prototyping to field testing. In each of these stages, we relied on principles and methods from user-centered design and behavioral economics. We conducted a preliminary test of the app with one-to-one aides who work with clients with autism. Most one-to-one aides rated the app as highly usable, suggesting the app is ready for more definitive testing in a randomized controlled trial.
The community partnerships strengthened as part of this project were vital to the success of the app development. New technologies or programs are often challenging to implement in community settings due to many barriers including lack of leadership buy-in and limited resources (Iadarola et al., 2015; Langley et al., 2010). One way to address these barriers is to develop meaningful partnerships with key community stakeholders, including those who will be responsible for supporting, implementing and consuming the technology or program (Pellecchia et al., 2018). One-to-one aides and supervisors from our partnering behavioral health agencies gave us a multitude of app improvement ideas. These allowed us to design and refine the application with the knowledge that we were fulfilling the needs of the community for data collection and progress monitoring on their therapy programs.
Limitations and Future Directions
This study is not without limitations. First, the sample sizes for each part of the project were relatively small. A larger-scale study is needed to ensure that the app’s design and features are palatable to the broader community behavioral health workforce. However, involving supervisors in the interviews and app testing cycles allowed us to learn from the wealth of expertise they have gained from working across many settings and clients.
Second, there were several app improvement ideas that were out of scope for the current project, including client facing features, tracking triggers/antecedents and interventions/consequences to behavior, and integration into existing electronic health records. These form excellent future directions for the educational and therapy app design field.
Third, no data were collected on whether the behavioral economics features of the application help to increase the quantity or quality of data collection by one-to-one aides. The app has the potential to improve data collection practices and therefore clinical care. We are currently running a pilot randomized controlled trial comparing the Footsteps app with a basic data collection app to examine this question, and plan to follow up with a fully powered randomized controlled trial.
Acknowledgements
We would like to thank our wonderful community partners, including our three partnering agencies (Gemma Services, Children's Crisis Treatment Center, and NET Centers) and their agency directors and supervisors (Amy Wasersztein, Tristan Dahl, Michelle Ruppert-Daly, Emmeline Williamson, Bridget Donohue, Patrick Bevenour) and all the other supervisors and one-to-one aides working at these agencies who were involved in the project. Without them we could not have completed this project and developed the application to such a high standard. We would also like to thank our industry partner for their commitment to incorporating community feedback and seeing through the development of the application to its final state.
Funding
This research was funded by National Institute of Mental Health, grant number P50MH113840 (PIs: Rinad Beidas, David Mandell, and Kevin Volpp).
Footnotes
Conflicts of Interest
The authors declare no conflict of interest.
Ethics Statement
All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the City of Philadelphia (2019-32).
References
- About – Your Big Idea. (n.d.). Retrieved July 28, 2021, from https://bigidea.pennmedicine.org/about
- Abras C, Maloney-Krichmar D, & Preece J (2004). User-centered design. In Bainbridge W (Ed.), Encyclopedia of Human-Computer Interaction. Thousand Oaks: Sage Publications, 37(4), 445–456.
- Bangor A, Kortum PT, & Miller JT (2008). An Empirical Evaluation of the System Usability Scale. International Journal of Human–Computer Interaction, 24(6), 574–594. 10.1080/10447310802205776
- Beidas RS, Volpp KG, Buttenheim AN, Marcus SC, Olfson M, Pellecchia M, Stewart RE, Williams NJ, Becker-Haimes EM, Candon M, Cidav Z, Fishman J, Lieberman A, Zentgraf K, & Mandell D (2019). Transforming Mental Health Delivery Through Behavioral Economics and Implementation Science: Protocol for Three Exploratory Projects. JMIR Research Protocols, 8(2), e12121. 10.2196/12121
- Brooke J (1996). SUS: A “Quick and Dirty” Usability Scale. In Usability Evaluation in Industry. CRC Press.
- Case MA, Burwick HA, Volpp KG, & Patel MS (2015). Accuracy of Smartphone Applications and Wearable Devices for Tracking Physical Activity Data. JAMA, 313(6), 625. 10.1001/jama.2014.17841
- Cohen IL, Yoo JH, Goodwin MS, & Moskowitz L (2011). Assessing challenging behaviors in Autism Spectrum Disorders: Prevalence, rating scales, and autonomic indicators. In International Handbook of Autism and Pervasive Developmental Disorders (pp. 247–270). Springer. http://link.springer.com/chapter/10.1007/978-1-4419-8065-6_15
- Cotton V, & Patel MS (2019). Gamification Use and Design in Popular Health and Fitness Mobile Applications. American Journal of Health Promotion, 33(3), 448–451. 10.1177/0890117118790394
- Dale O, & Hagen KB (2007). Despite technical problems personal digital assistants outperform pen and paper when collecting patient diary data. Journal of Clinical Epidemiology, 60(1), 8–17.
- Dopp AR, Parisi KE, Munson SA, & Lyon AR (2019). A glossary of user-centered design strategies for implementation experts. Translational Behavioral Medicine, 9(6), 1057–1064.
- EAST: Four Simple Ways to Apply Behavioural Insights. (n.d.). Retrieved May 24, 2021, from https://www.bi.team/publications/east-four-simple-ways-to-apply-behavioural-insights/
- Goldsmith TR, & LeBlanc LA (2004). Use of technology in interventions for children with autism. Journal of Early and Intensive Behavior Intervention, 1(2), 166.
- Hastings RP, & Brown T (2002). Coping strategies and the impact of challenging behaviors on special educators’ burnout. Mental Retardation, 40(2), 148–156.
- Hunt DL, Haynes RB, Hanna SE, & Smith K (1998). Effects of computer-based clinical decision support systems on physician performance and patient outcomes: A systematic review. JAMA, 280(15), 1339–1346.
- Iadarola S, Hetherington S, Clinton C, Dean M, Reisinger E, Huynh L, Locke J, Conn K, Heinert S, & Kataoka S (2015). Services for children with autism spectrum disorder in three, large urban school districts: Perspectives of parents and educators. Autism, 19(6), 694–703. 10.1177/1362361314548078
- Jabbar H (2011). The Behavioral Economics of Education: New Directions for Research. Educational Researcher, 40(9), 446–453. 10.3102/0013189X11426351
- Jaspers MW, Steen T, Van Den Bos C, & Geenen M (2004). The think aloud method: A guide to user interface design. International Journal of Medical Informatics, 73(11–12), 781–795.
- Kientz JA, Goodwin MS, Hayes GR, & Abowd GD (2013). Interactive Technologies for Autism. Synthesis Lectures on Assistive, Rehabilitative, and Health-Preserving Technologies, 2(2), 1–177. 10.2200/S00533ED1V01Y201309ARH004
- Kim RH, & Patel MS (2018). Barriers and Opportunities for Using Wearable Devices to Increase Physical Activity Among Veterans: Pilot Study. JMIR Formative Research, 2(2), e10945. 10.2196/10945
- Koch A, Nafziger J, & Nielsen HS (2015). Behavioral economics of education. Journal of Economic Behavior & Organization, 115, 3–17. 10.1016/j.jebo.2014.09.005
- Langley AK, Nadeem E, Kataoka SH, Stein BD, & Jaycox LH (2010). Evidence-based mental health programs in schools: Barriers and facilitators of successful implementation. School Mental Health, 2(3), 105–113. 10.1007/s12310-010-9038-1
- Lavecchia AM, Liu H, & Oreopoulos P (2016). Chapter 1 - Behavioral Economics of Education: Progress and Possibilities. In Hanushek EA, Machin S, & Woessmann L (Eds.), Handbook of the Economics of Education (Vol. 5, pp. 1–74). Elsevier. 10.1016/B978-0-444-63459-7.00001-4
- Le Jeannic A, Quelen C, Alberti C, & Durand-Zaleski I (2014). Comparison of two data collection processes in clinical studies: Electronic and paper case report forms. BMC Medical Research Methodology, 14(1), 7. 10.1186/1471-2288-14-7
- Levitt SD, List JA, Neckermann S, & Sadoff S (2016). The Behavioralist Goes to School: Leveraging Behavioral Economics to Improve Educational Performance. American Economic Journal: Economic Policy, 8(4), 183–219. 10.1257/pol.20130358
- List JA, Samek A, & Suskind DL (2018). Combining behavioral economics and field experiments to reimagine early childhood education. Behavioural Public Policy, 2(1), 1–21. 10.1017/bpp.2017.6
- Marcu G, Tassini K, Carlson Q, Goodwyn J, Rivkin G, Schaefer KJ, Dey AK, & Kiesler S (2013). Why do they still use paper? Understanding data collection and use in Autism education. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3177–3186. 10.1145/2470654.2466436
- Melamed Y, Szor H, & Bernstein E (2001). The loneliness of the therapist in the public outpatient clinic. Journal of Contemporary Psychotherapy, 31(2), 103–112.
- Mullainathan S, & Thaler RH (2000). Behavioral Economics (No. w7948). National Bureau of Economic Research. 10.3386/w7948
- Nuske HJ, & Mandell DS (2021). Digital health should augment (not replace) autism treatment providers. Autism, 13623613211043368. 10.1177/13623613211043368
- Patel MS, Foschini L, Kurtzman GW, Zhu J, Wang W, Rareshide CAL, & Zbikowski SM (2017). Using Wearable Devices and Smartphones to Track Physical Activity: Initial Activation, Sustained Use, and Step Counts Across Sociodemographic Characteristics in a National Sample. Annals of Internal Medicine, 167(10), 755–757. 10.7326/M17-1495
- Patel MS, Polsky D, Kennedy EH, Small DS, Evans CN, Rareshide CAL, & Volpp KG (2020). Smartphones vs Wearable Devices for Remotely Monitoring Physical Activity After Hospital Discharge: A Secondary Analysis of a Randomized Clinical Trial. JAMA Network Open, 3(2), e1920677. 10.1001/jamanetworkopen.2019.20677
- Pellecchia M, Mandell DS, Nuske HJ, Azad G, Benjamin Wolk C, Maddox BB, Reisinger EM, Skriner LC, Adams DR, & Stewart R (2018). Community–academic partnerships in implementation research. Journal of Community Psychology, 46(7), 941–952.
- Potthoff S, Presseau J, Sniehotta FF, Breckons M, Rylance A, & Avery L (2019). Exploring the role of competing demands and routines during the implementation of a self-management tool for type 2 diabetes: A theory-based qualitative interview study. BMC Medical Informatics and Decision Making, 19(1), 23. 10.1186/s12911-019-0744-9
- Raney L, Bergman D, Torous J, & Hasselberg M (2017). Digitally Driven Integrated Primary Care and Behavioral Health: How Technology Can Expand Access to Effective Treatment. Current Psychiatry Reports, 19(11), 86. 10.1007/s11920-017-0838-y
- Riggleman S (2021). Using Data Collection Applications in Early Childhood Settings to Support Behavior Change. Journal of Special Education Technology, 36(3), 175–182. 10.1177/0162643420942763
- Rispoli M, Neely L, Lang R, & Ganz J (2011). Training paraprofessionals to implement interventions for people with autism spectrum disorders: A systematic review. Developmental Neurorehabilitation, 14(6), 378–388.
- Saleem JJ, Patterson ES, Militello L, Anders S, Falciglia M, Wissman JA, Roth EM, & Asch SM (2007). Impact of Clinical Reminder Redesign on Learnability, Efficiency, Usability, and Workload for Ambulatory Clinic Nurses. Journal of the American Medical Informatics Association, 14(5), 632–640. 10.1197/jamia.M2163
- Saleem JJ, Patterson ES, Militello L, Render ML, Orshansky G, & Asch SM (2005). Exploring Barriers and Facilitators to the Use of Computerized Clinical Reminders. Journal of the American Medical Informatics Association, 12(4), 438–447. 10.1197/jamia.M1777
- Samson A (2014). The Behavioral Economics Guide 2014. London, UK: Behavioral Economics Group.
- Steinbrenner JR, Hume K, Odom SL, Morin KL, Nowell SW, Tomaszewski B, Szendrey S, McIntyre NS, Yücesoy-Özkan S, & Savage MN (2020). Evidence-based practices for children, youth, and young adults with Autism. The University of North Carolina at Chapel Hill, Frank Porter Graham Child Development Institute, National Clearinghouse on Autism Evidence and Practice Review Team.
- Stewart RE, Williams N, Byeon YV, Buttenheim A, Sridharan S, Zentgraf K, Jones DT, Hoskins K, Candon M, & Beidas RS (2019). The clinician crowdsourcing challenge: Using participatory design to seed implementation strategies. Implementation Science, 14(1), 1–8.
- Sutherland R, Trembath D, & Roberts J (2018). Telehealth and autism: A systematic search and review of the literature. International Journal of Speech-Language Pathology, 20(3), 324–336. 10.1080/17549507.2018.1465123
- Terwiesch C, & Ulrich KT (2009). Innovation Tournaments: Creating and Selecting Exceptional Opportunities. Harvard Business Press.
- Vashitz G, Meyer J, & Gilutz H (2007). General Practitioners’ Adherence with Clinical Reminders for Secondary Prevention of Dyslipidemia. AMIA Annual Symposium Proceedings, 2007, 766–770.