Behavior Analysis in Practice. 2021 Oct 1;15(3):804–814. doi: 10.1007/s40617-021-00651-y

Evaluating the Effects of Technology-Based Self-Monitoring on Positive Staff–Consumer Interactions in Group Homes

Sandra A. Ruby, Florence D. DiGennaro Reed
PMCID: PMC9582100  PMID: 36457829

Abstract

The quality and frequency of positive interactions between staff and consumers are related to reductions in consumer problem behavior and increases in other desired outcomes, such as leisure and self-help skills. Unfortunately, the frequency with which group home staff positively interact with consumers is often low and regularly requires intervention. We evaluated the effects of technology-based self-monitoring on positive interactions between staff and consumers during consumer leisure time. Participant data were collected off-site through video recordings from cameras already present in the group homes. During baseline, participant interactions were low. Upon introduction of an intervention containing self-monitoring completed via a tablet device, staff interactions increased and maintained when the intervention was in effect. Supplemental feedback via text message was provided to two of the three participants to reach criterion. These findings demonstrate the utility of technology-based self-monitoring for some individuals to increase positive staff–consumer interactions in group homes.

Keywords: Self-monitoring, Positive interactions, Text-message feedback, Group homes, Technology


Of the estimated 7.38 million individuals with intellectual and developmental disabilities (IDD) in the United States, about 1.28 million receive long-term supports; 296,097 of whom receive services in group homes (Larson et al., 2020). Nearly 473,000 individuals are on waitlists for home and community-based services (United Cerebral Palsy & ANCOR Foundation, 2020). The approximately 1.3 million direct support professionals who work in service agencies (President’s Committee for People with Intellectual Disabilities, 2017) are not sufficient to meet the growing demand for services. Thus, it is essential that interventions and other services within group homes are effective, efficient, and produce meaningful outcomes for consumers.

Ensuring that positive interactions between staff and consumers occur is one approach to achieving meaningful outcomes for consumers (e.g., Burg et al., 1979; Baldwin & Hattersley, 1984; Felce & Emerson, 2001; Manente et al., 2010; Zoder-Martell et al., 2014). The importance of positive interactions has been demonstrated through the active treatment literature (Medicaid, n.d.; Parsons et al., 1989). For example, Parsons et al. (1989) showed positive interactions as part of active treatment correlated with increased consumer engagement and self-help, leisure, social, and community living skills. Research also shows higher consumer compliance, on-task behavior, and indices of happiness when consumers interact with familiar staff with whom a relationship exists than with unfamiliar staff (Parsons et al., 2016). Taken together, these findings suggest it is worthwhile to ensure that positive interactions between staff and consumers occur.

Unfortunately, positive interactions occur at disappointingly low levels. Chan and Yau (2002) observed staff–consumer interactions in an institutional care setting. Results showed staff–consumer interactions occurred in only 37% of intervals. In addition, multiple studies conducted in various settings have shown interactions do not occur frequently during baseline (e.g., Burg et al., 1979; Parsons et al., 1989; Zoder-Martell et al., 2014). Based on these patterns of results, it appears there is a need for interventions that increase and maintain positive staff–consumer interactions.

Researchers have evaluated ways to increase positive interactions between staff and consumers across a small number of studies (i.e., Burg et al., 1979; Burgio et al., 1983; Baldwin & Hattersley, 1984; Doerner et al., 1989; Kamana et al., 2021; Montegar et al., 1977; Mowery et al., 2010; Zoder-Martell et al., 2014). For example, Kamana et al. (2021) evaluated the effects of behavioral skills training (i.e., instructions, modeling, rehearsal, and feedback) and on-the-job feedback on several recommended practices for staff behavior, including positive interactions, in group homes. Results showed staff–consumer positive interactions increased and were more stable during intervention compared to baseline. In another study, Zoder-Martell et al. (2014) evaluated the use of real-time prompts delivered via an in-ear headphone on the rate of positive verbal interactions initiated by participating direct support professionals. During baseline, participants were uninformed of the dependent variable and did not receive training. Upon implementation of direct training using prompts, all participants increased their positive interactions with consumers, which were maintained for at least five sessions. Results maintained during follow-up probes conducted 2 and 6 weeks after the maintenance phase. These findings suggest common staff training procedures can increase and maintain positive staff–consumer interactions. Although promising, both studies required significant supervisory resources to achieve desired outcomes. Thus, additional research should evaluate interventions that are resource-sensitive and do not require the presence of a supervisor.

One intervention worthy of investigation is self-monitoring, which has the potential to reduce resources and mitigate supervisor presence. Self-monitoring is a procedure where an individual records the occurrence or nonoccurrence of their own behavior (Olson & Winchester, 2008). Self-monitoring has been implemented across various workplace settings (e.g., Rinaldi-Miles et al., 2019; Rodriguez et al., 2006; Rose & Ludwig, 2009) and with a range of treatment packages that include prompts (Petscher & Bailey, 2006), feedback (Normand, 2008), and goal setting (Calpin et al., 1988). Workplace applications of self-monitoring suggest it is useful for schedule adherence (Richman et al., 1988), safety (Heckman & Geller, 2003), and treatment integrity (Belfiore et al., 2008; Mouzakitis et al., 2015). Taken together, these findings suggest self-monitoring can be flexibly used to achieve desired staff performance.

Self-monitoring has been implemented with various types of employees (e.g., Hillman et al., 2021; Mouzakitis et al., 2015), including employees working in human-service settings. For example, Petscher and Bailey (2006) used a multiple baseline design across participants and behaviors to evaluate a treatment package consisting of self-monitoring, a tactile prompt, and feedback on token economy implementation by teachers working with students with disabilities. The self-monitoring package effectively increased accurate token economy implementation. In addition, video self-monitoring has been researched in human-service settings (Aherne & Beaulieu, 2018; Belfiore et al., 2008; Pelletier et al., 2010). Belfiore et al. (2008) successfully used video self-monitoring to improve the accuracy with which staff implemented discrete trial instruction with children with autism. The effects maintained for up to 4 weeks.

Researchers have targeted positive interactions between staff and consumers using intervention packages containing self-monitoring (e.g., Baldwin & Hattersley, 1984; Burg et al., 1979; Burgio et al., 1983; Doerner et al., 1989; Parsons et al., 1989). Burgio et al. (1983) evaluated staff’s use of self-monitoring, goal setting, self-evaluation, and self-reinforcement procedures on positive interactions. Others have shown improvements in positive interactions with self-monitoring in addition to supervision (Burg et al., 1979) and group meetings (Baldwin & Hattersley, 1984). Given the packaged interventions in these studies, the extent to which self-monitoring alone contributed to improvements is unknown. Evaluating the effects of self-monitoring in isolation, and incorporating additional performance management interventions, such as feedback, only if necessary, addresses this limitation and has the potential to aid organizational leaders in adopting resource-efficient procedures.

Researchers have shown self-monitoring’s effectiveness using a range of modalities such as golf clickers (Doerner et al., 1989), wrist-worn response counters (Burgio et al., 1983), clipboards with removable stickers (Burg et al., 1979; Baldwin & Hattersley, 1984), paper and pen (Plavnick et al., 2010), and technology (Sigurdsson et al., 2011). Advances in technology in recent decades necessitate additional research investigating the efficacy of technology-based self-monitoring interventions on the behavior of staff working in group homes. For example, many organizations have transitioned to electronic data recording rather than paper data sheets (e.g., Sleeper et al., 2017). Leveraging the technology already available in organizations makes a technology-based self-monitoring intervention potentially more feasible.

Given the literature documenting the beneficial outcomes associated with staff positively interacting with consumers living in group homes (e.g., Burg et al., 1979; Burgio et al., 1983) and repeatedly observed low levels of interactions (e.g., Chan & Yau, 2002), additional research evaluating ways to foster positive interactions is worthwhile. Studies measuring positive interactions commonly evaluate self-monitoring as a packaged performance management intervention. However, packaged interventions are not always ideal for human-service settings given the need to develop effective interventions with as few resources as possible (President’s Committee for People with Intellectual Disabilities, 2017). Moreover, group home self-monitoring research has not included advances in technology as other aspects of the job of group home staff have evolved over time (e.g., electronic data collection). Therefore, the purpose of the current study is to evaluate a technology-based self-monitoring intervention on positive staff–consumer interactions in group homes.

Method

Participants and Setting

Approval from the university’s Institutional Review Board and the organization’s Human Rights Committee was obtained before starting the experiment. Participants included three staff who worked in different group homes for adults with IDD. Sierra was a 49-year-old woman with an associate’s degree who worked at the organization as a home supervisor for almost 1 year. Billie was a 27-year-old woman with some college education who worked at the organization as a direct support professional for almost 2 years. Carson was a 23-year-old man with some college education who worked at the organization as a direct support professional for almost 1 year. The setting consisted of three different three-bed, one-bath townhomes with a kitchen, dining room, living room, hallway, and fenced-in outdoor area. Homes had built-in cameras with continuous video and audio recording capabilities in the public areas of the homes. Recordings could be viewed at the organization’s office suites.

Upon hire, participants experienced a 5-day preservice workshop where they were informed of job responsibilities, including the organization’s expectations for how to implement various practices to prevent problem behavior and enhance the quality of life of consumers. An hour-long presentation focused on describing and modeling several research-supported practices to help participants meet this organization expectation. One of these practices included initiating positive interactions with consumers. In this training, participants received information about the rationale for positive interactions (i.e., positive interactions promote healthy relationships, decrease problem behavior, and increase appropriate behavior) and that they should interact at least once every 5 min per consumer. Although the organization communicated this standard to its employees, its leaders recognized that service-delivery responsibilities may compete with this standard and were satisfied if employees interacted with consumers at least once approximately every 5 min per consumer.

Supervisors were asked to nominate staff who could benefit from participating in a project that targeted positive interactions between staff and consumers. The first author contacted nominated staff through text message on their work phone and provided a brief explanation of the study. Staff who tentatively agreed to participate in the study met with the experimenter to sign the consent form and review job responsibilities pertaining to interacting with consumers. Participants were informed that observations would begin on the day of consent, would occur during evening leisure time, and would be conducted remotely via the cameras in the home. Because observations required participants to be in the home and consumer and staff schedules varied, observations depended on availability and participant assent1 prior to each session. At most, one observation (i.e., session) was possible each day participants worked. Upon completion of the study, participants were paid $50.00 via a prepaid debit card.

Materials

Participants received a tablet (i.e., Samsung Tab A6), charger, and laminated written instructions. The Samsung Tab A6 had two preinstalled applications, Countee© (Peić & Hernández, 2021) and an interval timer. Laminated instructions included the tablet password, the experimenter’s contact information, examples of positive interactions, and a task analysis for using the applications.

Data Collection and Response Measurement

All observations were recorded via in-home cameras and microphones and were saved to a secure network hosted by the group-home organization. Observations occurred during evening leisure activities and were not conducted during meals or medication administration lasting longer than 10 min.

Primary Dependent Variable

The primary dependent variable was the percentage of 5-min intervals with a positive interaction. We used 5-min partial interval recording during each 30-min session. No more than one session was scheduled per day. Organizational leaders desired employees to engage in positive interactions with consumers approximately once every 5 min; thus, the measurement system adopted in this experiment reflected that expectation. Definitions for positive interactions were identical to Kamana (2019) and are shown in Table 1. A positive interaction included several behaviors such as a compliment, conversation, greeting, appropriate physical interaction, expression of care, or praise. A positive interaction did not include participants teaching individualized programs, implementing procedures from a behavior plan, or giving redirection statements. If a participant was absent for three or more 1-min intervals within a 5-min interval, the data from that interval were excluded for that session. The percentage of 5-min intervals with a positive interaction was calculated by dividing the total number of intervals with a positive interaction by the total number of intervals, multiplied by 100.
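The scoring rule described above, including the exclusion criterion for participant absences, can be sketched in a few lines of code (a hypothetical illustration; the function and field names are ours, not part of the study's materials):

```python
# Hypothetical sketch of the interval-scoring rule; names are ours.

def score_session(intervals):
    """intervals: one dict per 5-min interval of a 30-min session, with keys
    'interaction' (True if any positive interaction occurred in the interval)
    and 'absent_minutes' (number of 1-min intervals the participant was absent).
    Intervals with three or more absent minutes are excluded, per the rule."""
    included = [iv for iv in intervals if iv["absent_minutes"] < 3]
    if not included:
        return None  # no scorable intervals in this session
    hits = sum(iv["interaction"] for iv in included)
    return 100 * hits / len(included)
```

For example, a six-interval session with four interactions, one interval without an interaction, and one excluded interval would be scored as 4 of 5 included intervals, or 80%.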

Table 1.

Definition and Examples of Positive Interactions (Kamana, 2019)

| Interaction | Definition | Example |
| --- | --- | --- |
| Compliment | Saying something favorable about the consumer to the consumer. | “You look nice today!” |
| Conversation | Talking about topics that consumers may prefer or commenting on an activity in which consumers were engaged. | “Did you enjoy the movie?” |
| Greet consumer | A salutation to the consumer. | “Hi, great to see you!” |
| Appropriate physical interaction | Making physical contact that is appropriate for adults. | High fives or a pat on the back |
| Expression of care | Acknowledging when consumers appeared sad, tired, upset, or needed help. | “It looks like you are sad. Is everything okay?” |
| Praise | Acknowledging appropriate consumer behavior. | “Excellent job (specify behavior).” |

Secondary Dependent Variables

We also measured two secondary dependent variables. First, we determined whether participants showed scalloped patterns of interacting with consumers by analyzing how they allocated their interactions throughout the 5-min intervals. To conduct this analysis, we summed the total number of 1-min intervals with an interaction2 for each minute of every 5-min interval (i.e., 1st, 2nd, 3rd, 4th, and 5th min) across the entire 30-min session. Thus, the maximum possible value for each minute is six (a 30-min session contains six 5-min intervals, so each minute position could be scored at most six times).
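This minute-allocation analysis amounts to a small summation over the session, sketched below (a hypothetical illustration with names of our own choosing):

```python
def minute_allocation(session_minutes):
    """session_minutes: 30 booleans, one per 1-min interval of a 30-min
    session, True if an interaction occurred in that minute.
    Returns, for minute positions 1-5 within the 5-min intervals, how many
    of the six intervals contained an interaction at that position (max 6)."""
    counts = [0] * 5
    for i, hit in enumerate(session_minutes):
        if hit:
            counts[i % 5] += 1  # i % 5 maps the minute to its position
    return counts
```

A strongly scalloped session, with interactions only in the 5th minute of every interval, would yield `[0, 0, 0, 0, 6]`.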

Second, we measured participant recording accuracy to determine if participants correctly recorded their behavior. To determine accuracy, experimenter observation data were compared to participant output data from the Countee© app. An accurate response occurred when the experimenter and participant recorded the same response within the last 30 s (+15 s) of each 5-min interval. A +15 s buffer period was included because participants were instructed to do their best to record interactions every 5 min, but other household and clinical needs could take priority temporarily. Thus, this brief window recognized that participants might not record at the exact end of the interval. An error of omission occurred when the experimenter recorded that a positive interaction occurred, but the participant recorded that a positive interaction did not occur or did not record any interaction. An error of commission occurred when the experimenter recorded a positive interaction did not occur, but the participant recorded a positive interaction occurred. The percentage of accurate responses was calculated by dividing the total number of accurate responses by the total number of 5-min intervals, multiplied by 100. The percentages of omission and commission errors were calculated in the same manner, dividing the total number of each error type by the total number of 5-min intervals and multiplying by 100.
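These accuracy definitions can be expressed as a simple per-interval classification (a sketch under our own naming assumptions; the case where the observer scored no interaction and the participant recorded nothing is not classified in the definitions above, so the sketch leaves it untallied):

```python
def tally_recording(observer, participant):
    """observer: per-5-min-interval booleans from experimenter data.
    participant: matching entries from the participant's recorded data;
    True/False if a response was recorded, None if nothing was recorded.
    Returns (accuracy, omission, commission) percentages of all intervals."""
    n = len(observer)
    acc = omit = commit = 0
    for obs, part in zip(observer, participant):
        if obs == part:
            acc += 1             # same response recorded: accurate
        elif obs and part is not True:
            omit += 1            # interaction occurred but not recorded as such
        elif part is True and not obs:
            commit += 1          # recorded an interaction that did not occur
    return tuple(100 * v / n for v in (acc, omit, commit))
```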

Interobserver Agreement (IOA)

A second observer recorded data on participants’ positive interactions for a minimum of 30% of sessions for each phase and participant to calculate interobserver agreement. For an agreement to occur, both observers must have recorded that an interaction did or did not occur in each 1-min interval.3 Agreement was calculated using the interval-by-interval method by dividing the number of intervals with agreement by the total number of intervals, multiplied by 100. Mean agreement for Sierra was 91.9% (range: 86.7%–100%), for Billie was 89.6% (range: 80%–100%), and for Carson was 89.6% (range: 80%–96.7%).
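The interval-by-interval method reduces to a straightforward proportion, sketched here for clarity (function name is hypothetical):

```python
def interval_by_interval_ioa(primary, secondary):
    """primary, secondary: per-1-min-interval booleans from two observers,
    True if that observer scored an interaction in the interval.
    Returns percentage agreement across intervals."""
    agreements = sum(a == b for a, b in zip(primary, secondary))
    return 100 * agreements / len(primary)
```

For instance, observers disagreeing on 4 of 30 intervals yields 26/30, or roughly 86.7%, the low end of Sierra's reported range.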

Procedure

An ABAB design was used to evaluate the effects of self-monitoring on positive staff–consumer interactions for Sierra. Although we had intended to withdraw the intervention for Billie and Carson, participant attrition prevented that analysis. As a result, we used a nonconcurrent multiple baseline design across participants for Billie and Carson to show experimental control.

Baseline and Withdrawal

During baseline and withdrawal, participants performed their job responsibilities as they typically would. When they provided informed consent, participants were reminded that interactions with consumers should occur at least once every 5 min. They did not receive experimenter feedback about their performance or have access to the tablet. Both phases continued until participant behavior data were stable or decreasing based upon visual inspection.

Self-Monitoring

Self-monitoring training

We trained participants to self-monitor using an evidence-based training procedure, behavioral skills training (Parsons et al., 2012), including instructions, modeling, rehearsal, and feedback. Training was conducted in one session lasting no more than 40 min. Participants received both written and vocal instructions for a positive interaction, which included exemplars and nonexemplars. Participants also received written and vocal instructions on how to use the tablet, start the applications, and send the data to the researcher via encrypted email. Instructions included information about the procedures. In particular, participants were instructed to start the applications, and self-monitor their interactions with consumers as described below. To prompt participants to self-monitor, an interval timer was used. The interval timer beeped twice: after 4 min, 30 sec of each 5-min interval and at the end of the 5-min interval. Participants were instructed to record the occurrence or nonoccurrence of an interaction during the last 30 sec of the 5-min interval (i.e., between the two beeps). After providing instructions, a model was provided on how to use the tablet, start the applications, and send the data. Participants then had the opportunity to imitate the model and ask questions. Participants received corrective feedback (e.g., “Remember to press start on the interval timer.”) and/or praise (e.g., “Great job double-checking the session name.”) until they independently set up a self-monitoring session on the tablet. Once participants started a session independently, they rehearsed two 5-min intervals and recorded the occurrence or nonoccurrence of positive interactions. The experimenter provided corrective feedback if participants made an error and praise for accurate self-monitoring.

During training, researchers instructed participants to start a self-monitoring session by opening both the interval timer and Countee© apps. Participants used Countee© to record their interactions (i.e., to self-monitor). Participants started a new session by clicking a button labeled “start new session” and labeled the session with the date. They would then switch to the interval timer app and press the play button to begin the 30-min timer. The interval timer had three audible beeps that sounded before the interval began. During the three audible beeps, participants would return to Countee© and press the “start” button. After 30 min, the interval timer stopped and Countee© closed. Data were automatically saved within the Countee© app. Participants then sent their data to the experimenter through encrypted email by pressing the “share data” icon (i.e., airplane icon in the top right corner of the Countee© app), selecting that day’s data set, typing in the correct email address, and pressing send.

Self-monitoring

The purpose of this phase was to evaluate the effects of self-monitoring on the percentage of 5-min intervals in which participants engaged in positive interactions with consumers. Once training was complete and the self-monitoring phase began, participants were contacted each workday morning via text message to confirm they were available to self-monitor. Approximately 5 min before the start of each session, a text message was sent informing participants they could start the session. Participants used both the interval timer and Countee© apps to self-monitor. During this phase, participants did not receive performance feedback. This phase continued until participant data maintained at or above the 80% criterion across several sessions, or until participant data were stable or decreasing upon visual inspection (in the latter case, we introduced the next phase).

Self-Monitoring and Feedback

The purpose of this phase was to evaluate the effects of both self-monitoring and feedback on the percentage of intervals participants positively interacted with consumers. We introduced this phase if participants did not meet criterion with self-monitoring alone based on visual inspection. Experimenter feedback included a text message that contained the participant’s percentage of intervals with an interaction for the previous session. In addition, if participants reached or surpassed 80%, the text message contained a praise statement. If participants did not reach 80%, they were reminded of the goal and were asked if they needed additional information or help. For example, participants would receive a text message containing the following information if they surpassed the 80% goal: “Good morning, are you able to self-monitor this afternoon? In your last session, you interacted with consumers in six out of six intervals, so 100%. Fantastic job interacting with consumers!” If participants did not meet the 80% goal, they received a text message containing the following information: “Good morning, are you able to self-monitor this afternoon? In your last session, you interacted with consumers in four out of six intervals, so 67%. Remember it would be great to have a positive interaction in at least 80% of intervals. Please let me know if you have further questions or need any help.”
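The feedback rule above follows a fixed template, which can be sketched as a small function (the wording is taken from the example messages in the text; the function name and goal parameter are our own):

```python
def feedback_text(hits, total, goal=80):
    """Compose the morning text message from the previous session's data,
    following the praise/reminder rule described above (names are ours)."""
    pct = round(100 * hits / total)
    msg = ("Good morning, are you able to self-monitor this afternoon? "
           f"In your last session, you interacted with consumers in {hits} "
           f"out of {total} intervals, so {pct}%. ")
    if pct >= goal:
        msg += "Fantastic job interacting with consumers!"
    else:
        msg += ("Remember it would be great to have a positive interaction "
                f"in at least {goal}% of intervals. Please let me know if "
                "you have further questions or need any help.")
    return msg
```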

Participants did not ask follow-up questions or request additional information. Feedback on their last performance was delivered before the next scheduled session, which occurred one day to one week following the previous session. This phase continued until participant data maintained at or above 80% for three consecutive sessions.

Training Integrity

An independent observer reviewed 100% of the self-monitoring training sessions at the organization’s office suite and collected data on experimenter training integrity. To measure training integrity, the observer completed a checklist consisting of training activities that included instructions, modeling, rehearsal, and feedback for each self-monitoring step. Training integrity was calculated by dividing the number of correctly implemented steps by the total number of steps, multiplied by 100. Training integrity ranged from 96% to 100%.

Social Validity

Upon completion of the study, participants had the opportunity to complete an anonymous social validity questionnaire that could be returned via the postal service. The questionnaire was a modified version of the Intervention Rating Profile-15 (Martens et al., 1985) that consisted of 11 items asking participants to rate the acceptability of the self-monitoring training and self-monitoring intervention on a 6-point Likert-type scale where 1 indicated “strongly disagree” and 6 indicated “strongly agree.” Higher scores represent higher acceptability. Other questions included one yes or no question that asked if participants used self-monitoring outside of the research study and one open-ended question that asked for additional comments.

Results

Figure 1 depicts the percentage of 5-min intervals with a positive interaction for each participant (left y-axis) and when participants interacted within each 5-min interval (right y-axis). The percentages for all participants were variable, low, and decreasing during baseline. In particular, the data for Sierra (M = 40%; range: 16.7%–66.7%), Billie (M = 22.2%; range: 5%–50%), and Carson (M = 38.6%; range: 0%–100%) were below the criterion of 80% (i.e., the organization’s expectation of approximately one interaction every 5 min).

Fig. 1. Time-Series Data of 5-min Intervals and Heat Map of 1-min Intervals

Upon the introduction of self-monitoring, Sierra’s behavior immediately increased and maintained above the 80% criterion for 8 of 10 consecutive sessions (M = 84.5%; range: 50%–100%). When self-monitoring was discontinued, the percentage of intervals in which Sierra engaged in a positive interaction decreased to baseline levels by the end of this phase (M = 60.7%; range: 16.7%–100%). High levels of responding were observed upon reintroduction of self-monitoring (M = 87.8%; range: 80%–100%).

Billie and Carson showed initial increases in their percentages at the start of self-monitoring, but performance did not maintain. Both Billie’s (M = 67.3%; range: 33.3%–100%) and Carson’s (M = 65%; range: 25%–100%) performance was well below the organization’s expectation. With the addition of feedback, performance increased to criterion levels (Billie M = 83.3%; range: 66.7%–100%; Carson M = 95%; range: 80%–100%).

The heat map, also shown in Figure 1 (right y-axis), depicts when positive interactions occurred by minute. The purpose of this analysis was to determine how participants allocated their interactions for each minute of every 5-min interval across a 30-min session. Darker shading indicates a higher number of 1-min intervals with positive interactions. Results showed no clear scalloping pattern; that is, interactions occurred across each minute of the 5-min interval for most sessions. However, there are minutes in which interactions never occurred.

Accuracy

Accuracy varied by participant as did errors of omission and commission. Sierra’s accuracy and omission errors averaged 49% with few commission errors (1%). Billie’s accuracy averaged 21% with a high percentage of omission errors (71%) and relatively few commission errors (8%). Carson’s accuracy was higher than other participants averaging 68% with relatively few omission errors (10%) and commission errors averaging 22%. We conducted a supplemental analysis to determine if omission errors were due to self-monitoring errors or if participants failed to self-monitor altogether. The analysis revealed that participants were more likely to fail to self-monitor than to incorrectly self-monitor. That is, Sierra, Billie, and Carson failed to self-monitor in 76%, 97%, and 88% of omission errors, respectively.

Social Validity

Of the three participants, only one returned a completed questionnaire. Given this low response rate, we are unable to draw valid conclusions about the social validity of the self-monitoring procedure.

Discussion

The purpose of the present study was to evaluate the effects of technology-based self-monitoring on positive staff–consumer interactions in group homes. Results show technology-based self-monitoring (alone or with feedback) produced increases in the percentage of intervals with positive interactions between three participating staff and consumers. These data are similar to the findings of previous studies and support the use of self-monitoring, or self-monitoring as part of a packaged intervention, to achieve and maintain desired employee performance (Baldwin & Hattersley, 1984; Burg et al., 1979; Burgio et al., 1983; Richman et al., 1988). In addition, during the most effective intervention for each participant (i.e., self-monitoring for Sierra; self-monitoring and feedback for Billie and Carson), participants tended to have more than one interaction with consumers in the 5-min interval. These results are promising as participants improved performance during intervention relative to baseline, during which they had few, if any, positive interactions. Because self-monitoring was effective as a standalone intervention for only one participant, these findings offer the most support for the use of self-monitoring as part of an intervention package.

Previous research has documented that human-service staff demonstrate scalloped patterns of responding for some job tasks (e.g., conducting observations; Reed et al., 2010). Although research is lacking indicating a scalloped pattern of positive interactions with consumers is problematic, ideally staff interactions would occur at various times and as natural social opportunities present themselves. In the current study, participants varied the minute in which they interacted throughout the 5-min interval. Previous self-monitoring research has not assessed temporal patterns of responding, so our data are novel and show that the current participants engaged in interactions throughout a 5-min interval rather than demonstrating a scalloped pattern of responding (i.e., waiting until the end of the interval to interact). Participants were not instructed to avoid scalloping, and these data suggest it may be unnecessary to instruct some staff to do so.

The findings from the present study also showed participants had low self-monitoring accuracy. Previous studies demonstrated socially significant improvements in target behaviors when participants implemented self-monitoring with higher accuracy (Gravina et al., 2013; Sasson & Austin, 2004). It may not always be feasible for staff to implement self-monitoring with high levels of accuracy in human-service settings given competing responsibilities, such as implementing teaching programs, behavior intervention plans, or assisting with self-care skills. Thus, determining that the intervention may produce changes in behavior despite lower self-monitoring accuracy is important. The present finding is preliminary, so additional research is necessary before firm conclusions for the applied setting can be made.

These findings contribute to the literature on increasing positive staff–consumer interactions in group homes in several ways. Although research has shown that the quality and frequency of positive interactions influence consumer outcomes (e.g., Burg et al., 1979; Burgio et al., 1983; Felce & Emerson, 2001; Manente et al., 2010), only a handful of studies have addressed how to increase positive interactions (e.g., Kamana et al., 2021; Zoder-Martel et al., 2014). The present findings contribute to this small body of research and support the beneficial effects of self-monitoring as part of a packaged intervention.

The results of this study also contribute to the technology-based self-monitoring literature. This study adopted technology to assist participants with self-monitoring, which reflects recent trends of using technology in human-service settings (LeBlanc et al., 2020). Previous self-monitoring research in group homes relied on golf clickers (Doerner et al., 1989) and clipboards with removable stickers (Baldwin & Hattersley, 1984; Burg et al., 1979), which were effective but may not reflect current data collection methods in human-service settings. Embedding the intervention into currently used technology-based methods could streamline procedures and possibly remove barriers to self-monitoring. Because incorporating technology into service-delivery settings is gaining popularity (DiGennaro Reed & Reed, 2013), continued evaluation of technology-based interventions, such as the current one, is a worthwhile endeavor for future research.

An additional contribution to the self-monitoring literature is the evaluation of self-monitoring as a standalone intervention. Previous research used interventions containing two to four components and did not evaluate the effects of self-monitoring alone (e.g., Burg et al., 1979; Burgio et al., 1983; Calpin et al., 1988; Doerner et al., 1989; Mowery et al., 2010; Parsons et al., 1989). Without a component analysis, evaluating packaged interventions makes it difficult to determine whether all components are necessary. Some components may be unnecessary to produce desired behavior change and, as a result, may waste resources. The current study evaluated the effects of self-monitoring alone and incorporated an additional component (experimenter feedback delivered via text message) only if needed. This sequential approach has precedent in the literature (e.g., Erath et al., 2020; Howard & DiGennaro Reed, 2014) and was intentionally designed to ensure careful use of resources. One participant met criterion with self-monitoring alone. Two participants showed improved performance with self-monitoring but required feedback to reach criterion levels. These data suggest that, under certain circumstances and for some individuals, self-monitoring alone may yield desired outcomes. Future research should investigate these variables further to aid in the design of resource-efficient interventions.

This study also contributes to the feedback literature. Feedback is a common performance management intervention because it is effective and can be relatively inexpensive to administer (e.g., Sleiman et al., 2020). Technology-based feedback has been delivered in a variety of formats, such as emails (Barton & Wolery, 2007; Brown & Woods, 2011; Hemmeter et al., 2011), telecommunication technologies (Marturana & Woods, 2012; Zhu et al., 2020), and text messages (Warrilow et al., 2020). Despite its popularity, we are unaware of any research that has evaluated text-message feedback on the performance of staff working in group homes. Feedback via text message was selected because of its potential advantages. First, if adopted by the organization, text-message feedback could be cost-effective for supervisors to implement. The organization in this study had continuous audio and video recording capabilities, so supervisors could observe and provide text-message feedback at a time that was convenient for them. This method reduces time spent traveling to homes, freeing valuable personnel time and reducing mileage reimbursement. Second, delivering feedback via text message could reduce the response effort associated with giving in-person feedback or delivering feedback via video or telephone conferencing; these latter formats require the supervisor and supervisee to be available at the same time. Third, text-message feedback did not require additional training and was quick to deliver. The extent to which staff become distracted by other alerts and apps on their personal phones when receiving text-message feedback is presently unknown. One way to mitigate the phone's potential for distraction is to deliver the feedback using a text-messaging app on a work tablet with only preapproved apps installed.

A final contribution to the feedback literature is the finding that feedback was effective even though it was delayed relative to when the participant performed the behavior. Some research recommends that supervisors provide feedback immediately (i.e., within 60 s) after a target behavior is performed (Goomas & Ludwig, 2007; Luke & Alavosius, 2011; Sleiman et al., 2020). However, other research suggests that providing feedback immediately before a task can improve performance more than feedback delivered after a task (Aljadeff-Abergel et al., 2017; Bechtel et al., 2015). Henley and DiGennaro Reed (2015) found no significant differences when comparing pre- and post-session feedback. In the current study, we provided feedback 1 day to 1 week following a session but several hours before the next session. Although we did not systematically compare pre- and post-session feedback, the timing of our feedback suggests pre-session feedback can increase interactions.

The results of this study suggest technology-based self-monitoring may have utility in human-service settings, but it contains limitations to be addressed in future research. For instance, an assessment (e.g., Performance Diagnostic Checklist—Human Services; Carr et al., 2013) was not administered to assist in developing an intervention to increase interactions. Future studies addressing staff–consumer interactions may benefit from using this type of assessment to develop interventions that are unique to an organization or individual.

Due to time constraints and participant attrition, components of the intervention were not systematically withdrawn. As a result, the separate effects of individual components, such as the beep that prompted participants to self-monitor or the feedback provided after each session, were not evaluated. Future studies could evaluate the effects of fading strategies, such as removing the beep or delivering feedback intermittently, on positive interactions. If effective, fading could make the intervention more manageable for staff and supervisors.

Limitations regarding the intervention's maintenance, generalization, and social validity should also be addressed. Future research should measure long-term maintenance of performance as well as generalization of the intervention's effects to other times of the day. An important area to address is the intervention's effectiveness when group-home supervisors or other relevant employees, rather than researchers, deliver performance feedback. The intervention's social validity should also be examined in future research, as only one participant returned the social validity questionnaire. The lack of responses may indicate participants did not find the intervention acceptable or that self-monitoring was not easy to implement.

Additional limitations concern the extent to which self-monitoring alone increases interactions and the observation time the intervention requires. Results show only one participant reached criterion using self-monitoring alone. Although Billie's and Carson's percentages increased when self-monitoring was implemented, their performance did not meet criterion. Thus, if an individual or organization were to implement self-monitoring as a standalone intervention, we recommend observing performance to ensure desired outcomes. Because observations (and possibly feedback) are necessary, adopting self-monitoring does not absolve supervisors of their responsibilities to conduct observations and deliver timely feedback.

The results of the present study provide some support for technology-based self-monitoring, and self-monitoring plus feedback, as a potentially resource-efficient way to increase interactions between staff and consumers in group homes. It is important for research to continue evaluating interventions that are resource-sensitive and use technology in human-service settings, because resources are scarce (President's Committee for People with Intellectual Disabilities, 2017) and consumer outcomes may be affected (Burg et al., 1979; Burgio et al., 1983). Using technology-based self-monitoring, with or without feedback, can be one way to address such organizational challenges and still deliver quality services.

Declarations

Conflict of Interest

The first author does not have a conflict of interest. The second author serves on the Board of Directors for the organization where the research was conducted. The second author did not interact directly with participants.

Ethical Approval

The study was approved by both the University IRB and the Human Rights Committee of the organization where the research was conducted.

Informed Consent

All participants provided written informed consent. Moreover, participants could assent or not assent to a research session being conducted on any given day.

Footnotes

1. At the beginning of the day, participants were contacted via text message to confirm they would be in the home and available to self-monitor.

2. We did not record frequency per interval; we used partial-interval recording.

3. Although the primary dependent measure involved 5-min intervals, data collectors scored behavior in 1-min intervals. Thus, the agreement percentages were calculated using 1-min intervals.
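The agreement calculation described in footnote 3 can be sketched in code. This is a hypothetical illustration, assuming the common interval-by-interval method (agreements divided by total intervals, multiplied by 100) applied to two observers' 1-min partial-interval records; the article does not specify the exact formula, and the function name and sample records below are invented for illustration.

```python
# Hypothetical sketch: interval-by-interval agreement for 1-min
# partial-interval records (1 = interaction scored, 0 = not scored).
def interval_agreement(observer_a, observer_b):
    """Return the percentage of intervals in which two observers agree."""
    assert len(observer_a) == len(observer_b), "records must be equal length"
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100 * agreements / len(observer_a)

# Invented example: two observers scoring the same fifteen 1-min intervals.
primary   = [0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
secondary = [0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
print(interval_agreement(primary, secondary))  # one disagreement in 15 intervals
```

Scoring agreement on the finer 1-min grain, as the authors did, yields a more conservative estimate than scoring on the 5-min intervals used for the dependent measure.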

This study was conducted by the first author in partial fulfillment of the requirements for the M.A. in behavior analysis at the University of Kansas. The authors thank Maranda J. Scheller, Andrew Widener, and Allie Heiner for their contributions to this project.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Aherne CM, Beaulieu L. Assessing long-term maintenance of staff performance following behavior skills training in a home-based setting. Behavioral Interventions. 2018;35(1):79–88. doi: 10.1002/bin.1642. [DOI] [Google Scholar]
  2. Aljadeff-Abergel E, Peterson SM, Wiskirchen RR, Hagen KK, Cole ML. Evaluating the temporal location of feedback: Providing feedback following performance vs. prior to performance. Journal of Organizational Behavior Management. 2017;37(2):171–195. doi: 10.1080/01608061.2017.1309332. [DOI] [Google Scholar]
  3. Baldwin S, Hattersley J. Use of self-recording to maintain staff-resident interaction. Journal of Mental Deficiency Research. 1984;28(1):57–66. doi: 10.1111/j.1365-2788.1984.tb01602.x. [DOI] [PubMed] [Google Scholar]
  4. Barton EE, Wolery M. Evaluation of e-mail feedback on the verbal behaviors of pre-service teachers. Journal of Early Intervention. 2007;30(1):55–72. doi: 10.1177/105381510703000105. [DOI] [Google Scholar]
  5. Bechtel NT, McGee HM, Huitema BE, Dickinson AM. The effects of the temporal placement of feedback on performance. The Psychological Record. 2015;65(3):425–434. doi: 10.1007/s40732-015-0117-4. [DOI] [Google Scholar]
  6. Belfiore PJ, Fritts KM, Herman BC. The role of procedural integrity: Using self-monitoring to enhance discrete trial instruction (DTI). Focus on Autism & Other Developmental Disabilities. 2008;23(2):95–102. doi: 10.1177/1088357607311445. [DOI] [Google Scholar]
  7. Brown JA, Woods JJ. Performance feedback to support instruction with speech-language pathology students on a family-centered interview process. Infants & Young Children. 2011;24(1):42–55. doi: 10.1097/iyc.0b013e3182001bf4. [DOI] [Google Scholar]
  8. Burg MM, Reid DH, Lattimore J. Use of a self-recording and supervision program to change institutional staff behavior. Journal of Applied Behavior Analysis. 1979;12(3):363–375. doi: 10.1901/jaba.1979.12-363. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Burgio LD, Whitman TL, Reid DH. A participative management approach for improving direct-care staff performance in an institutional setting. Journal of Applied Behavior Analysis. 1983;16(1):39–53. doi: 10.1901/jaba.1983.16-37. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Calpin JP, Edelstein B, Redmon WK. Performance feedback and goal setting to improve mental health center staff productivity. Journal of Organizational Behavior Management. 1988;9(2):35–58. doi: 10.1300/J075v09n02_04. [DOI] [Google Scholar]
  11. Carr, J. E., Wilder, D. A., Majdalany, L., Mathisen, D., & Strain, L. A. (2013). An assessment-based solution to a human-service employee performance problem: An initial evaluation of the performance diagnostic checklist—Human services. Behavior Analysis in Practice, 6(1), 16–32. 10.1007/BF03391789 [DOI] [PMC free article] [PubMed]
  12. Chan JS, Yau MK. A study on the nature of interactions between direct-care staff and persons with developmental disabilities in institutional care. British Journal of Developmental Disabilities. 2002;48(94):39–51. doi: 10.1179/096979502799104274. [DOI] [Google Scholar]
  13. DiGennaro Reed FD, Reed DD. HomeLink support technologies at Community Living Opportunities. Behavior Analysis in Practice. 2013;6(1):80–81. doi: 10.1007/BF03391794. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Doerner M, Miltenberger RG, Bakken J. The effects of staff self-management on positive social interactions in a group home setting. Behavioral Interventions. 1989;4(4):313–330. doi: 10.1002/bin.2360040404. [DOI] [Google Scholar]
  15. Erath TG, DiGennaro Reed FD, Sundermeyer HW, Brand D, Novak MD, Harbison MJ, Shears R. Enhancing the training integrity of human service staff using pyramidal behavioral skills training. Journal of Applied Behavior Analysis. 2020;53(1):449–464. doi: 10.1002/jaba.608. [DOI] [PubMed] [Google Scholar]
  16. Felce D, Emerson E. Living with support in a home in the community: Predictors for behavioral development and household and community activity. Mental Retardation & Developmental Disabilities. 2001;7(2):75–83. doi: 10.1002/mrdd.1011. [DOI] [PubMed] [Google Scholar]
  17. Goomas DT, Ludwig TD. Enhancing incentive programs with proximal goals and immediate feedback: Engineered labor standards and technology enhancements in stocker replenishment. Journal of Organizational Behavior Management. 2007;27(1):33–68. doi: 10.1300/J075v27n01_02. [DOI] [Google Scholar]
  18. Gravina NE, Loewy S, Rice A, Austin J. Evaluating behavioral self-monitoring with accuracy training for changing computer work postures. Journal of Organizational Behavior Management. 2013;33(1):68–76. doi: 10.1080/01608061.2012.729397. [DOI] [Google Scholar]
  19. Heckman JS, Geller ES. A safety self-management intervention for mining operations. Journal of Safety Research. 2003;34(5):299–308. doi: 10.1016/s0022-4375(03)00032-x. [DOI] [PubMed] [Google Scholar]
  20. Henley AJ, DiGennaro Reed FD. Should you order the feedback sandwich? Efficacy of feedback sequence and timing. Journal of Organizational Behavior Management. 2015;35(3–4):321–335. doi: 10.1080/01608061.2015.1093057. [DOI] [Google Scholar]
  21. Hemmeter, M. L., Snyder, P., Kinder, K., & Artman, K. (2011). Impact of performance feedback delivered via electronic mail on preschool teachers’ use of descriptive praise. Early Childhood Research Quarterly, 26(1), 96–109. 10.1016/j.ecresq.2010.05.004
  22. Hillman CB, Lerman DC, Kosel ML. Discrete-trial training performance of behavior interventionists with autism spectrum disorder: A systematic replication and extension. Journal of Applied Behavior Analysis. 2021;54(1):374–388. doi: 10.1002/jaba.755. [DOI] [PubMed] [Google Scholar]
  23. Howard VJ, DiGennaro Reed FD. Training shelter volunteers to teach dog compliance. Journal of Applied Behavior Analysis. 2014;47(2):344–359. doi: 10.1002/jaba.120. [DOI] [PubMed] [Google Scholar]
  24. Kamana, B. U. (2019). Increasing staff healthy behavioral practices in programs for adults with intellectual and developmental disabilities (Publication No. 22589620). [Doctoral dissertation, University of Kansas]. ProQuest Dissertations and Theses Global.
  25. Kamana, B. U., Dozier, C. L., Kanaman, N. A., DiGennaro Reed, F. D., Glaze, S. M., Markowitz, A. M., Hangen, M. M., Harrison, K. L., Bernstein, A. M., Jess, R. L., & Erath, T. G. (2021). Increasing staff healthy behavioral practices in programs for adults with intellectual and developmental disabilities [Manuscript submitted for publication]. Department of Applied Behavioral Science, University of Kansas.
  26. Larson, S. A., Eschenbacher, H. J., Taylor, B., Pettingell, S., Sowers, M., & Bourne, M. L. (2020). In-home and residential long-term supports and services for persons with intellectual or developmental disabilities: Status and trends through 2017. Minneapolis: University of Minnesota, Research and Training Center on Community Living, Institute on Community Integration. https://ici-s.umn.edu/files/aCHyYaFjMi/risp_2017
  27. LeBlanc LA, Lerman DC, Normand MP. Behavior analytic contributions to public health and telehealth. Journal of Applied Behavior Analysis. 2020;53(3):1208–1218. doi: 10.1002/jaba.749. [DOI] [PubMed] [Google Scholar]
  28. Luke MM, Alavosius M. Adherence with universal precautions after immediate personalized performance feedback. Journal of Applied Behavior Analysis. 2011;44(4):967–971. doi: 10.1901/jaba.2011.44-967. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Manente CJ, Maraventano JC, LaRue RH, Delmolino L, Sloan D. Effective behavioral intervention for adults on the autism spectrum: Best practices in functional assessment and treatment development. The Behavior Analyst Today. 2010;11(1):36–48. doi: 10.1037/h0100687. [DOI] [Google Scholar]
  30. Martens BK, Witt JC, Elliott SN, Darveaux DX. Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research & Practice. 1985;16(2):191–198. doi: 10.1037/0735-7028.16.2.191. [DOI] [Google Scholar]
  31. Marturana ER, Woods JJ. Technology-supported performance-based feedback for early intervention home visiting. Topics in Early Childhood Special Education. 2012;32(1):14–23. doi: 10.1177/0271121411434935. [DOI] [Google Scholar]
  32. Medicaid. (n.d.). Intermediate care facilities for individuals with intellectual disability. Centers for Medicare and Medicaid Services. https://www.medicaid.gov/medicaid/long-term-services-supports/institutional-long-term-care/intermediate-care-facilities-individuals-intellectual-disability/index.html
  33. Montegar CA, Reid DH, Madsen CH, Ewell MD. Increasing institutional staff to resident interactions through in-service training and supervisor approval. Behavior Therapy. 1977;8(4):533–540. doi: 10.1016/S0005-7894(77)80182-2. [DOI] [Google Scholar]
  34. Mouzakitis A, Codding RS, Tryon G. The effects of self-monitoring and performance feedback on the treatment integrity of behavior intervention plan implementation and generalization. Journal of Positive Behavior Interventions. 2015;17(4):223–234. doi: 10.1177/1098300715573629. [DOI] [Google Scholar]
  35. Mowery JM, Miltenberger RG, Weil TM. Evaluating the effects of reactivity to supervisor presence on staff response to tactile prompts and self-monitoring in a group home setting. Behavioral Interventions. 2010;25(1):21–35. doi: 10.1002/bin.296. [DOI] [Google Scholar]
  36. Normand MP. Increasing physical activity through self-monitoring, goal setting, and feedback. Behavioral Interventions. 2008;23(4):227–236. doi: 10.1002/bin.267. [DOI] [Google Scholar]
  37. Olson R, Winchester J. Behavioral self-monitoring of safety and productivity in the workplace: A methodological primer and quantitative literature review. Journal of Organizational Behavior Management. 2008;28(1):9–75. doi: 10.1080/01608060802006823. [DOI] [Google Scholar]
  38. Parsons MB, Cash VB, Reid DH. Improving residential treatment services: Implementation and norm-referenced evaluation of a comprehensive management system. Journal of Applied Behavior Analysis. 1989;22(2):143–156. doi: 10.1901/jaba.1989.22-143. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Parsons MB, Bentley E, Solari T, Reid DH. Familiarizing new staff for working with adults with severe disabilities: A case for relationship building. Behavior Analysis in Practice. 2016;9(3):211–222. doi: 10.1007/s40617-016-0129-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Parsons MB, Rollyson JH, Reid DH. Evidence-based staff training: A guide for practitioners. Behavior Analysis in Practice. 2012;5(2):2–11. doi: 10.1007/BF03391819. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Peić, D., & Hernández, V. (2021). Countee—Data collection for BA (Version 2.2.1) [Mobile app]. https://apps.apple.com/us/app/countee/id982547332
  42. Pelletier K, McNamara B, Braga-Kenyon P, Ahearn WH. Effect of video self-monitoring on procedural integrity. Behavioral Interventions. 2010;25(4):261–274. doi: 10.1002/bin.316. [DOI] [Google Scholar]
  43. Petscher ES, Bailey JS. Effects of training, prompting, and self-monitoring on staff behavior in a classroom for students with disabilities. Journal of Applied Behavior Analysis. 2006;39(2):215–226. doi: 10.1901/jaba.2006.02-05. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Plavnick JB, Ferreri SJ, Maupin AN. The effects of self-monitoring on the procedural integrity of a behavioral intervention for young children with developmental disabilities. Journal of Applied Behavior Analysis. 2010;43(2):315–320. doi: 10.1901/jaba.2010.43-315. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. President’s Committee for People with Intellectual Disabilities. (2017). America’s direct support workforce crisis: Effects on people with intellectual disabilities, families, communities and the U.S. economy. https://www.acl.gov/sites/default/files/programs/2018-02/2017%20PCPID%20Full%20Report_0.PDF
  46. Reed DD, Fienup DM, Luiselli JK, Pace GM. Performance improvement in behavioral health care: Collateral effects of planned treatment integrity observations as an applied example of schedule-induced responding. Behavior Modification. 2010;34(5):367–385. doi: 10.1177/0145445510383524. [DOI] [PubMed] [Google Scholar]
  47. Richman GS, Riordan MR, Reiss ML, Pyles DA, Bailey JS. The effects of self-monitoring and supervisor feedback on staff performance in a residential setting. Journal of Applied Behavior Analysis. 1988;21(4):401–409. doi: 10.1901/jaba.1988.21-401. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Rinaldi-Miles A, Das BM, Kakar RS. Evaluating the effectiveness of implementation intentions in a pedometer worksite intervention. Work: Journal of Prevention, Assessment & Rehabilitation. 2019;64(4):777–785. doi: 10.3233/WOR-193039. [DOI] [PubMed] [Google Scholar]
  49. Rodriguez, M., Wilder, D. A., Therrien, K., Wine, B., Miranti, R., Daratany, K., Salume, G., Baranovsky, G., & Rodrigues, M. (2006). Use of the Performance Diagnostic Checklist to select an intervention designed to increase the offering of promotional stamps at two sites of a restaurant franchise. Journal of Organizational Behavior Management, 25(3), 17–35. 10.1300/J075v25n03_02
  50. Rose HS, Ludwig TD. Swimming pool hygiene: Self-monitoring, task clarification, and performance feedback increase lifeguard cleaning behaviors. Journal of Organizational Behavior Management. 2009;29(1):69–79. doi: 10.1080/01608060802660157. [DOI] [Google Scholar]
  51. Sasson JR, Austin J. The effects of training, feedback, and participant involvement in behavioral safety. Journal of Organizational Behavior Management. 2004;24(4):1–30. doi: 10.1300/J075v24n04_01. [DOI] [Google Scholar]
  52. Sigurdsson SO, Ring BM, Needham M, Boscoe JH, Silverman K. Generalization of posture training to computer workstations in an applied setting. Journal of Applied Behavior Analysis. 2011;44(1):157–161. doi: 10.1901/jaba.2011.44-157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Sleeper JD, LeBlanc LA, Mueller J, Valentino AL, Fazzio D, Raetz PB. The effects of electronic data collection on the percentage of current clinician graphs and organizational return on investment. Journal of Organizational Behavior Management. 2017;37(1):83–95. doi: 10.1080/01608061.2016.1267065. [DOI] [Google Scholar]
  54. Sleiman AA, Sigurjonsdottir S, Elnes A, Gage NA, Gravina NE. A quantitative review of performance feedback in organizational settings (1998–2018). Journal of Organizational Behavior Management. 2020;40(3–4):303–332. doi: 10.1080/01608061.2020.1823300. [DOI] [Google Scholar]
  55. United Cerebral Palsy & ANCOR Foundation. (2020). The case for inclusion 2020 key findings report. https://caseforinclusion.org/application/files/5015/8179/5128/Case_for_Inclusion_2020_Key_Findings_021420web.pdf
  56. Warrilow GD, Johnson DA, Eagle LM. The effects of feedback modality on performance. Journal of Organizational Behavior Management. 2020;40(3–4):233–248. doi: 10.1080/01608061.2020.1784351. [DOI] [Google Scholar]
  57. Zhu J, Hua Y, Yuan C. Effects of remote performance feedback on procedural integrity of early intensive behavioral intervention programs in China. Journal of Behavioral Education. 2020;29(2):339–353. doi: 10.1007/s10864-020-09380-8. [DOI] [Google Scholar]
  58. Zoder-Martell KA, Dufrene BA, Tingstrom DH, Olmi DJ, Jordan SS, Biskie EM, Sherman JC. Training direct care staff to increase positive interactions with individuals with developmental disabilities. Research & Developmental Disabilities. 2014;35(9):2180–2189. doi: 10.1016/j.ridd.2014.05.016. [DOI] [PubMed] [Google Scholar]
