Published in final edited form as: J Behav Educ. 2014 Sep 6;23(4):421–434. doi: 10.1007/s10864-014-9204-x

Implementation of a self-monitoring application to improve on-task behavior: A high school pilot study

Howard P Wills 1, Benjamin A Mason 1

Abstract

Technological innovations offer promise for improving intervention implementation in secondary, inclusive classrooms. A withdrawal design was employed with two high school students to assess the effectiveness of a technologically delivered self-monitoring intervention in improving on-task behavior in a science classroom. The two students, ages 14 and 15, with diagnoses of specific learning disability (Student 1) and attention deficit hyperactivity disorder (ADHD; Student 2), were selected by case manager referral because of difficulties with on-task behavior despite long-term administration of psychostimulant medication. After baseline data were collected, both students were trained in the use of a self-monitoring application (I-Connect) delivered via a handheld tablet. On-task prompts were delivered at 5-min intervals in an ABAB withdrawal design. The intervention resulted in positive, stable improvements in the primary dependent variable of on-task behavior for both students and less clear improvement in the generalization variable of disruptive behavior.

Keywords: self management, self monitoring, intervention, emotional/behavioral disorders, high school


Self management represents a broad array of skills and strategies individuals use to assess and regulate their behavior (Cooper, Heron, & Heward, 1987; Mooney, Ryan, Uhing, Reid, & Epstein, 2005; Hansen, Wills, Kamps, & Greenwood, 2013; Snyder, 1979). While multiple strategies are subsumed within the broader category of self management, self monitoring (SM) involves assessing and recording one’s own behavior and is the most frequently investigated component within the school-based intervention literature (Fantuzzo & Polite, 1990). Self monitoring is defined as a multiple-step process in which the student observes the occurrence or non-occurrence of the behavior and records features of the observed behavior (Mace, Belfiore, & Hutchinson, 2001). Self-monitoring interventions have been successfully implemented for a broad array of behaviors, including increasing on-task behavior (Amato-Zech, Hoff, & Doepke, 2006; DiGangi, Maag, & Rutherford, 1991; Holifield, Goodman, Hazelkorn, & Heflin, 2010; Legge, DeBar, & Alber-Morgan, 2010), increasing social behaviors (Parker & Kamps, 2011), and improving academic performance (Paquin, 1978; Varni & Henker, 1978).

However, studies of self-management and self-monitoring interventions are not typically conducted in general education settings. In a review of 22 self-management intervention studies that included 78 participants with emotional and behavioral disorders (EBD), Mooney and colleagues (2005) found that 73% of studies had been conducted in public schools but that none of the studies meeting inclusion criteria had been conducted in general education classrooms. In an earlier review, McDougall (1998) also supported the need for research conducted in general education settings, noting that of 240 self-management studies consisting largely of self-monitoring interventions, only 14 had targeted students with disabilities in general education settings. Given that the majority of high school students with EBD (Mooney et al., 2005), ADHD, and specific learning disabilities are placed in general education classrooms, particularly for science instruction (Vannest et al., 2009), additional research exploring the efficacy of SM interventions in this setting is warranted.

Additionally, while self-monitoring procedures hold promise for adolescents (Young, 1991), the majority of intervention studies supporting self-monitoring interventions in the school setting have focused on younger students. For example, in Fantuzzo and Polite’s (1990) meta-analysis of 42 self-management intervention studies, no study that met inclusion criteria had an average participant age above 12, and more than two-thirds had an average age below 10. McDougall (1998) noted that only 5 of the 14 studies in general education settings included participants 13–18 years of age. In a review of self-regulatory strategies (self monitoring being a primary component) for children with ADHD (Reid, Trout, & Schartz, 2005), none of the 51 participants across 16 studies were of high school age. While early intervention continues to be critical in improving academic and behavioral performance for students with and without disabilities, interventions must also be developed and validated for adolescents, especially those at greatest risk for school failure.

One potential strategy to support self monitoring for adolescents in general education classrooms is the use of technology such as tablets or mobile devices. The use of such technology has only recently begun to appear in the self-management and self-monitoring literature. For example, Gulchak (2008) used an electronic self-monitoring form delivered via a mobile, handheld computer (i.e., a Palm Zire 72) to teach self monitoring to a young student with EBD. Fidelity for use of the system by the third grader was 100%, and on-task behavior improved from a mean of 64% in baseline to 90% during self monitoring. Gulchak stressed that mobile computing devices and handhelds are highly motivating to students and thus more likely to be used for behavioral progress monitoring.

Despite the potential of technology-enabled self monitoring demonstrated above, technological availability has to date had little impact on the self-monitoring literature. In their review of 62 single-case, peer-reviewed articles using self-monitoring interventions with cueing components, Mason and Davis (2013) found that adoption of more current technology in self-monitoring interventions has not kept pace with technological availability. Of the 21 single-case studies that met What Works Clearinghouse standards for single-case design and otherwise met inclusion criteria, only one used a cell phone (Quillivan et al., 2011), and no studies used tablets, smartphones, or mobile devices. Additionally, the cell phone in the Quillivan study was used to prompt paper-based self monitoring for a kindergarten student, reducing the device to a notification system. Similarly, Blood, Johnson, Ridenour, and Simmons (2011) used an iPod Touch and video modeling to assist a fifth grader with EBD and ADHD in learning how to self-monitor his on-task behavior. The iPod Touch cued the student every two min to mark his on/off-task behavior on a piece of paper; despite positive results, the device functioned only as an alarm, or a signal to record in paper format. Innovations in handheld technology provide the opportunity to implement self-monitoring interventions in inclusive settings in an unobtrusive manner. However, research is needed to determine whether technologically based self monitoring yields similar or greater results than traditional formats. The present study addressed both the limited number of self-monitoring interventions for older students in general education settings and the lack of technology-based self-monitoring interventions that do not rely on paper-based recording.

In the current study, students used an application loaded on a mobile device (a Samsung Galaxy Player 5.0, a smartphone-sized tablet) to monitor on-task behavior. The application used in this study was the self-monitoring component of the I-Connect intervention package. The full package includes the self-monitoring application in conjunction with a school-based mentor who meets weekly with students to review monitoring of homework, attendance, on-task behavior, and/or appropriate behavior. The current study investigated the self monitoring of on-task behavior alone, without the mentoring and weekly problem-solving meetings included in the full treatment package. The purpose of this study was to investigate the use of the I-Connect self-monitoring intervention for two high school students receiving special education services in a general education science classroom. The following research questions were addressed: (a) what is the effect of the intervention on the primary dependent variable of students’ on-task behavior; (b) what is the effect of the intervention on the generalization variable of disruptive behavior; (c) can the intervention be implemented with high fidelity; and (d) what is the treatment acceptability of the intervention as reported by the students and their teacher?

Method

Setting and Participants

The study was conducted at a suburban high school in the Midwest United States. The high school’s total 9th- through 12th-grade population was 1,450 students, with 21% minority enrollment and 25% qualifying as low socioeconomic status. The classroom in which the study was conducted was a general education, 9th-grade remedial-level science classroom that included 14 students in the general and special education programs. The classroom contained roughly even percentages of students with and without disabilities, and approximately a third of the students were repeating the class after prior course failure. A teacher and a co-teacher (serving almost exclusively in a disciplinary rather than instructional role) staffed the classroom. Experimental sessions were conducted during the regularly scheduled 55-min period. The class structure consisted of lecture, individual seatwork, and small group assignments, with the first two formats accounting for all but two of the observed baseline and intervention sessions.

A high school teacher who served as a case manager for students receiving special education services facilitated recruitment of participants. Participants were eligible for study inclusion if the following criteria were met: they received special education services, exhibited off-task behavior and/or classroom disruptions, were not currently served by any other university study, were struggling academically in a general education classroom, and, in the case manager’s opinion, would be willing to engage in self monitoring of behavior in a classroom setting. Of three nominated students, one was omitted due to concurrent participation in another university study in which the case manager was not involved.

The two remaining students and their parents were provided information on the study and asked to participate. Students and parents were informed of acceptable-use guidelines for the tablet, including appropriate internet access and acceptable use within classrooms. The science classroom was chosen as the intervention setting because both students had failing grades in the class at the time of recruitment. The classroom teacher consented to allow the classroom observations and to provide a measure of perceived intervention efficacy once the experiment concluded, but was otherwise not required to engage in any direct intervention, measurement, or prompting of students. Both students entered the study upon parental consent and after completing an assent procedure that described the purpose of the study, the benefits of participation, and the limited potential risks.

Student 1, a male of Native American descent aged 15 years and 7 months, was served in both special and general education classrooms at study onset. His classification for special education purposes was specific learning disability, although he had previously received services under the category of emotional disturbance due to a diagnosed mood disorder as well as significant anxiety in elementary and middle school. Although he had been medicated for ADHD for multiple years, up to and including the time of the current study, difficulties with off-task behavior were contributing to school failure. Work initiation was reported as problematic by the classroom teacher and school-based case manager. Disruptions were not noted as a concern; rather, the classroom teacher described Student 1 as either completely disengaging and “shutting down” when upset or anxious, or as being distracted and off task. The teacher reported that both disengaging and being distracted contributed to Student 1’s poor classroom performance. Student 1 was repeating the course after failing it in the prior academic year, although the teacher stated he had shown he was capable of meeting course requirements (in-class assignments, tests).

Student 2, a Caucasian male aged 14 years and 11 months, was served in a similar educational arrangement as Student 1. His classification for special education purposes was other health impaired (OHI) due to a diagnosis of ADHD. Student 2 received medication for ADHD prior to and throughout the study and exhibited both inattention and impulsivity in regular and special education settings. Off-task behavior and poor work completion were reported as contributing to school failure by the classroom teacher, and classroom disruptions were noted as a shared concern by both the teacher and the student’s case manager. Although Student 2 was taking the course for the first time, the teacher reported that, while he passed exams, his off-task behavior was greatly impacting his in-class assignment completion.

Materials

I-Connect is an Android application designed to provide scheduled prompts for participants to self-evaluate and self-monitor targeted behaviors. The mobile device used in the study was a Samsung Galaxy Player 5.0 tablet with a five-inch screen, the size of a typical smartphone. The tablet had wireless internet capability but no cellular (phone or data plan) connectivity. The tablet was preconfigured to restrict availability of internet content, loaded with the I-Connect application, and tested prior to intervention use to ensure compatibility with school firewalls. The application allowed text cues such as “Are you on task?” to appear on the tablet at customizable intervals, with “yes” and “no” response buttons the student could touch to record. The application offered three options for notification when the question “Are you on task?” was presented: a flashing screen, a chime tone, or vibration. The application was set to remove the question after 6 s and initiate the next interval. Upon recording a response, the application sent the data through a wireless network connection to a secure, password-protected database housed on a remote server.
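The prompt cycle just described (a fixed prompt interval, a 6-s response window, and a record of each yes/no or missed response) can be summarized in code. The following is a minimal, illustrative sketch of that logic only; the function names, the simulated responses, and the console logging are assumptions added for demonstration and are not part of the I-Connect application itself.

```python
"""Minimal sketch of the self-monitoring prompt cycle described above.

Illustrative only: the actual I-Connect app is an Android program, and all
names, the simulated student responses, and the logging target below are
hypothetical assumptions rather than details from the study.
"""
import time
from datetime import datetime

PROMPT_INTERVAL_S = 5 * 60   # prompt every 5 min, as in the study
RESPONSE_WINDOW_S = 6        # prompt disappears after 6 s without a response


def run_session(get_response, record, num_intervals=3):
    """Run a fixed number of prompt intervals.

    get_response(timeout) -> "yes", "no", or None if no answer in time.
    record(entry) receives a dict for each prompt (answered or missed).
    """
    for i in range(num_intervals):
        time.sleep(PROMPT_INTERVAL_S)             # wait out the interval
        answer = get_response(RESPONSE_WINDOW_S)  # show "Are you on task?"
        record({
            "time": datetime.now().isoformat(timespec="seconds"),
            "interval": i + 1,
            "on_task": answer,                    # None means the prompt timed out
        })


if __name__ == "__main__":
    # Simulated responses stand in for screen taps, for demonstration only.
    scripted = iter(["yes", None, "no"])
    PROMPT_INTERVAL_S = 1                         # shorten the interval for the demo
    run_session(get_response=lambda timeout: next(scripted),
                record=lambda entry: print(entry))
```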

Dependent Measures

The primary dependent variable was the percentage of time participants were on task during each 15-min observation period. On task was defined as being passively (e.g., attending to the teacher during lecture) or actively engaged with instructional content via choral response, raising a hand, responding to teacher instruction, writing, reading, or otherwise actively completing an assigned task (e.g., typing on a computer, engaging with assigned materials). Duration recording was used, with on-task behavior recorded continuously throughout the entire observation period. All phase-change decisions were made using the on-task variable.
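As an illustration of how the on-task percentage follows from duration recording, the short sketch below sums on-task durations over a 15-min (900-s) observation. The interval values are invented example data, not data from the study.

```python
"""Illustrative computation of the on-task percentage from duration records.

The on-task intervals below are made-up example data; the calculation simply
expresses the summed on-task time as a percentage of the observation period.
"""

def percent_on_task(on_task_intervals_s, observation_length_s=15 * 60):
    """Sum on-task durations (in seconds) and express them as a percentage
    of the 15-min (900-s) observation period."""
    return 100.0 * sum(on_task_intervals_s) / observation_length_s


if __name__ == "__main__":
    # Example: three on-task runs totaling 459 s of a 900-s observation.
    print(round(percent_on_task([120.0, 250.5, 88.5]), 1))  # -> 51.0
```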

Disruptive behavior was a generalization variable recorded as a frequency count during observation periods and was defined as the participant engaging in behavior that was contrary to the behavioral expectations of the assigned task and that could potentially disrupt instruction. This variable was included primarily because of the teacher’s concern about Student 2’s often disruptive behavior in the classroom. Examples of disruptive behaviors included talking during individual seatwork or moving around the room during lecture. A disruption was tallied only once until either (a) the behavior ceased and a new behavior was coded, or (b) the teacher redirected the behavior but the behavior continued (e.g., a disruption followed by two ignored redirections to the participant’s desk would result in three total disruptive behaviors).
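The tally rule can be made concrete with a brief sketch. The event labels below are a hypothetical simplification for illustration only and do not correspond to the observation codes actually used; the example reproduces the case described above, in which one disruptive episode with two ignored redirections is counted as three disruptions.

```python
"""Illustrative tally of the disruption-counting rule described above.

The event labels ("disruption_start", "redirect_ignored", "behavior_ceased")
are a hypothetical simplification for this sketch, not the study's codes.
"""

def count_disruptions(events):
    """Count one disruption per episode plus one for each teacher
    redirection that the student ignored while the behavior continued."""
    total = 0
    in_episode = False
    for event in events:
        if event == "disruption_start" and not in_episode:
            total += 1               # new disruptive episode
            in_episode = True
        elif event == "redirect_ignored" and in_episode:
            total += 1               # behavior continued after a redirection
        elif event == "behavior_ceased":
            in_episode = False       # the next disruption starts a new episode
    return total


if __name__ == "__main__":
    # One episode with two ignored redirections -> three disruptions.
    print(count_disruptions(
        ["disruption_start", "redirect_ignored", "redirect_ignored",
         "behavior_ceased"]))  # -> 3
```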

Dependent variables, including student on-task and disruptive behaviors, were recorded using the Multiple Option Observation System for Experimental Studies (MOOSES; Tapp, Wehby, & Ellis, 1995) on a Dell Windows-based tablet with a 5 × 8 in. screen. MOOSES allows simultaneous duration and event recording of teacher and student behavior along a real-time continuum. Observation sessions using the MOOSES program were 15 min in duration and were conducted from the back of the room. The investigators and a research assistant conducted all observations and were trained to a 90% interobserver agreement criterion on each behavior code prior to collecting baseline data. Training consisted of studying the definitions, independent practice coding videos, coaching and feedback while coding videos, and in-class coaching and feedback prior to study onset, following procedures outlined in Wills, Kamps, Abbott, Bannister, and Kaufman (2010).

Interobserver Agreement

Interobserver agreement (IOA) data were collected for on-task and disruptive behavior on 20% of the observations across all phases of the study for each participant. IOA was calculated using methods outlined by MacLean, Tapp, and Johnson (1985), with a 5-s window around each frequency code recorded by the primary observer. Percentage agreement scores were calculated using the formula [agreements / (agreements + disagreements)] × 100. Duration codes were calculated with second-by-second reliability estimates using the same formula. The overall average IOA was 92% (range = 78%–98%) for on-task behavior and 88% (range = 82%–94%) for disruptions.
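For readers unfamiliar with these conventions, the sketch below illustrates both calculations: windowed agreement for frequency codes (each primary-observer event matched to a secondary-observer event within 5 s) and second-by-second agreement for duration codes, each expressed with the agreements/(agreements + disagreements) × 100 formula. The matching details (a greedy nearest match within the window) and all data values are illustrative assumptions, not the MOOSES software procedures.

```python
"""Illustrative IOA calculations following the formulas described above.

Both the event timestamps and the duration streams are made-up example data;
the windowed-matching logic is an assumed realization of the 5-s window rule,
not the software actually used in the study.
"""

def event_ioa(primary_times, secondary_times, window_s=5):
    """Percent agreement for frequency codes: each primary-observer event
    counts as an agreement if the secondary observer recorded the same code
    within +/- window_s seconds (each secondary event may be used once)."""
    unmatched = list(secondary_times)
    agreements = 0
    for t in primary_times:
        match = next((s for s in unmatched if abs(s - t) <= window_s), None)
        if match is not None:
            agreements += 1
            unmatched.remove(match)
    disagreements = (len(primary_times) - agreements) + len(unmatched)
    return 100.0 * agreements / (agreements + disagreements)


def duration_ioa(primary_seconds, secondary_seconds):
    """Second-by-second agreement for duration codes (e.g., on task = True)."""
    agreements = sum(p == s for p, s in zip(primary_seconds, secondary_seconds))
    disagreements = len(primary_seconds) - agreements
    return 100.0 * agreements / (agreements + disagreements)


if __name__ == "__main__":
    print(round(event_ioa([10, 75, 200], [12, 80, 300]), 1))       # -> 50.0
    print(round(duration_ioa([True] * 90 + [False] * 10,
                             [True] * 85 + [False] * 15), 1))      # -> 95.0
```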

Fidelity

Fidelity of the intervention was directly observed for all sessions. Observers used a structured I-Connect fidelity and implementation form that assessed proper use of the self-monitoring application throughout the intervention period and appropriate device usage (i.e., no other use of the device during instruction). The measure also included two items regarding obtrusiveness of the intervention for the teacher and fellow students (e.g., “the I-Connect application is not impacting the attention of the teacher in the classroom”). Fidelity data were also collected during baseline and withdrawal sessions to ensure that no other self-monitoring procedures were in use.

Treatment Acceptability

Upon completion of the intervention, both participants completed an I-Connect intervention rating form. The researcher-developed form consisted of five Likert-style items rating perceived ease of use and intervention efficacy, with one qualitative item allowing an open-ended response. Additionally, the teacher completed the Intervention Rating Profile (IRP-15; Martens, Witt, Elliott, & Darveaux, 1985), a frequently used, 6-point, 15-item scale assessing treatment acceptability, with higher scores denoting greater acceptability. Total scores on the IRP-15 range from 15 to 90, with scores above 60 representing overall acceptability (i.e., the average response across all items is positive). Factor analysis of the IRP-15 reported in Martens et al. (1985) yielded one primary factor (general acceptability) and reliable scores (Cronbach’s alpha of .98).

Design and Procedure

An ABAB withdrawal design (Kazdin, 1982) was used to evaluate the effects of the I-Connect self-monitoring intervention on the participants’ on-task behavior in a science classroom and on the generalization measure of disruptions. The phases of this study included baseline, the I-Connect self-monitoring intervention, withdrawal, and return to intervention. As previously stated, all phase-change decisions were based on the on-task data. A rule of a minimum of five data points per condition was established, and additional data points were collected within a condition if visual analysis suggested trending or variable data.

Baseline

Baseline data collection occurred during science class and consisted of one or two 15-min observations per day, with the instructional activity determining the number of data collection sessions. If independent work or lecture was the instructional activity and sufficient time remained, a second observation was completed. If a transition to video was anticipated or had occurred prior to the second observation, only one observation was conducted, as teacher expectations for on-task behavior were minimal during video presentation. Throughout baseline, students were given no requests to self-monitor and were not provided the tablet used for I-Connect self monitoring.

I-Connect Self-Monitoring Intervention

After a stable baseline of five observations (with IOA) was established, I-Connect self monitoring was implemented. Students were trained by investigators in one 20-min session to open the I-Connect application on the mobile device, enter classroom information, and begin behavioral monitoring. Training focused on both the use of the application and the discrimination of on-task from off-task behavior, and consisted of demonstration, guided practice, and then independent completion of application use. Application use required selecting the program on the device screen, entering an ID and password, and responding “yes” to the question “Do you want to monitor your citizenship goals?” After training in accessing the I-Connect self-monitoring application, participants completed the process successfully twice. Discrimination training included examples and non-examples of on-task behavior, and each participant correctly identified two examples and two non-examples. Following training, participants were instructed to begin monitoring daily in the science classroom with the device placed on the upper right corner of their desk. Self-monitoring periods included all classroom activities in the science classroom and consisted of the prompted question “Are you on task?” with a binary yes/no response option presented automatically at fixed 5-min intervals. To minimize potential peer distraction, prompts were visual (flashing screen), and tone/vibration notifications were disabled during the study. If no response was entered within 6 s, the response option disappeared and the application initiated the subsequent 5-min interval. The students were provided the mobile devices for use during class and were allowed to keep the devices for self monitoring when research personnel were not present, with the agreement that internet content viewing remained appropriate and within school guidelines. During intervention, students began each class period by setting the device on the corner of their desk and initiating the application. If they failed to initiate the application, the researcher prompted them to begin; Student 1 was prompted once and Student 2 four times over the course of the study. Throughout intervention, students did not receive feedback or contingencies on the accuracy of their self-monitoring data, nor did they have access to their recorded data.

Withdrawal

Following improvement in the targeted behavior and stability of the data, the withdrawal phase was instituted and conducted in the same manner as the baseline phase. Participants were informed that monitoring with the I-Connect application would no longer occur and that the device would be withdrawn during these sessions. No specific guidelines regarding behavior were given to participants for these sessions. Although students were not prohibited from self monitoring with other methods, such as paper and pencil, during withdrawal, neither student initiated any observable form of monitoring.

Reintroduction of I-Connect Self-Monitoring Intervention

Following the collection of a minimum of five data points showing a return to baseline levels and demonstrated stability, the intervention was reintroduced. Each student was provided the mobile device with the I-Connect application and instructed to resume monitoring. Additional training and instruction on how to use the application and mobile device were not provided, and students did not request assistance. With the reintroduction of the intervention, all procedures remained consistent with the initial implementation protocol.

Results

Results are presented for (a) the functional relationship between the introduction of the intervention and change in students’ on-task behavior, (b) the level of implementation fidelity of the I-Connect self-monitoring intervention, and (c) the social validity of the intervention as reported by the students and their teacher. Disruptive behavior is also included as a secondary data comparison.

Effects of I-Connect Self-monitoring on On-Task Behavior

Student 1’s percentage of time on task per observation session across baseline and intervention conditions is displayed in Figure 1. Visual analysis of baseline data indicates some variability; during baseline, Student 1 was on task an average of 51% of the time (range = 41%–71%). When the I-Connect intervention was implemented, his on-task behavior immediately improved to an average of 95% (range = 77%–100%). When the intervention was withdrawn, Student 1’s percentage of on-task behavior decreased to below baseline levels, averaging 41% (range = 32%–51%). The reintroduction of I-Connect yielded an immediate increase in on-task behavior, averaging 94% (range = 84%–100%), consistent with levels demonstrated during the first introduction of I-Connect.

Figure 1. Percentage of time on task during baseline and intervention conditions for Student 1 (15-min observations).

Student 2’s percentage of time on task per observation session across baseline and intervention conditions is displayed in Figure 2. Visual analysis indicates a steady decline in percentage of on-task behavior for Student 2 during baseline, averaging 18% of time on task (range = 0%–80%). As can be seen in Figure 2, the introduction of I-Connect resulted in a clear positive mean shift. Student 2’s percentage of on-task behavior remained above 88% for 7 of the 8 sessions, with one session at 59% (average = 91%, range = 59%–100%). Again, with the removal of the I-Connect self-monitoring system, a distinct negative mean shift is noted (see Figure 2, withdrawal phase), with an immediate decline of nearly 30%. A steadily decreasing trend followed, similar to baseline performance levels (average = 42%, range = 16%–71%). When I-Connect was reintroduced, Student 2’s on-task behavior recovered to previous intervention levels, again averaging 91% (range = 81%–97%).

Figure 2. Percentage of time on task during baseline and intervention conditions for Student 2 (15-min observations).

Student 1’s disruptions per observation averaged 2.2 (range = 1–4) during baseline (see Figure 3), with disruptions during intervention reduced to an average of 1 (range = 0–5). When the intervention was withdrawn, Student 1’s disruptions rose slightly to an average of 2 (range = 1–3). When I-Connect was reintroduced, his disruptions averaged 0.4 (range = 0–2).

Figure 3. Frequency of disruptions during baseline and intervention conditions for Student 1 (15-min observations).

The teacher had been more concerned with Student 2’s disruptions, which averaged 4.3 per observation during baseline (range = 0–9). With the implementation of I-Connect, his disruptions immediately decreased to zero; only 2 of 8 intervention observations contained more than one disruption, compared with 5 of 9 during baseline, and the intervention average was 1.5 (range = 0–7). Upon removal of the intervention, his disruptions initially remained low (two observations with no disruptions) and then returned to previous baseline levels, averaging 3.8 per observation during withdrawal (range = 0–9). With the reintroduction of I-Connect, Student 2’s disruptions immediately decreased to an average of 0.6 (range = 0–1).

Fidelity of Implementation

Fidelity of implementation was assessed for 100% of baseline and intervention sessions. Positive responses were obtained on all three fidelity and implementation items for both students during intervention. Additionally, there were no observable instances in which the application’s use was distracting to peers or the teacher during intervention. No alternative self-monitoring system was observed during baseline or withdrawal phases.

Treatment Acceptability

Both participants completed the researcher-developed treatment acceptability form at the end of the study, with generally positive responses. Obtained scores for the five treatment acceptability items were 33/35 for Student 1 and 31/35 for Student 2. Both reported high levels of satisfaction with the intervention, with the lowest obtained scores (6/7 for Student 1 and 5/7 for Student 2) reported for the item “When using the device to monitor, how do you think the teacher would rate you being on task?” Upon completion of the study, the classroom teacher completed the IRP-15 social validity rating scale and rated the intervention as a 66, with consistently positive ratings on items related to intervention benefits. Ten of the 15 items were answered “agree” (a positive rating). The only item with a “strongly disagree” response related to past intervention use (“this intervention is consistent with those I have used in classroom settings”).

Discussion

The purpose of this study was to examine the effects of a technologically based self-monitoring intervention on the on-task and disruptive behavior of two secondary students with diagnosed ADHD. This study also examined participant and teacher perceptions of intervention effects. The intervention resulted in clear improvements in on-task behavior and improved but more variable results for disruptive behavior, despite no added teacher workload associated with the intervention and no specific reinforcement for on-task behavior by the classroom teacher or project staff. Both students and the classroom teacher reported improvements in on-task behavior, and the classroom teacher rated the intervention positively on all categories related to intervention benefit.

Results of the self-monitoring intervention support prior literature in this area (Harris, Friedlander, Saddler, Frizzelle, & Graham, 2005; Reid, Trout, & Schartz, 2005) and extend the literature base in five ways. First, inclusion of secondary students as participants in the self-monitoring literature is rare. This study serves to expand the interventional knowledge base for secondary students receiving special education services. Second, no reinforcement strategies were incorporated in the design apart from the use of the self-monitoring application and the mobile device. Third, longer intervals were used in this study than are typical of self-monitoring interventions in general, with five-min intervals serving to extend the face validity of an intervention that has historically tended to use intervals of one min or less. Fourth, the use of wireless, internet-ready technology to deliver the self-monitoring intervention offered a novel application of a well-researched intervention. Last, the study was conducted in a general education, high school classroom - a rarely used setting for self-monitoring research.

Although the intervention resulted in improved on-task behavior for participating students, the generalization measure of disruptive behavior was less clearly improved. Increased disruptive behavior and lower on-task behavior (particularly for Student 2) occurred during one unusual observation period (session 11 for Student 1; session 14 for Student 2). During this observation, three snakes maintained in the classroom became audibly agitated and active in their terrariums, and both students became more disruptive and less on task. If these data points were excluded, the improvement in classroom disruptions would be more visually convincing.

Several limitations may have affected the results and their interpretation. First, direct observation and evaluation of self monitoring occurred in only one class, so it is unknown whether results generalized to other classes or whether monitoring in additional settings might have further improved on-task behavior in the observed classroom. Second, while withdrawal resulted in a noticeable mean shift across phases, no programming for generalization occurred by fading the intervention. Third, as interval length was not varied within the study, it is not known whether the 5-min intervals selected were optimal for the behaviors assessed; because this was not a variable of focus for the present study, no conclusion regarding interval frequency can be made. Fourth, no academic data were collected to measure achievement outcomes. Last, although What Works Clearinghouse standards for single-case research were followed for intervention sessions (i.e., three demonstrations of effect, staggered intervention onset to control for classroom effects), additional participants should be included in future research on the I-Connect intervention.

Future research using technologically based self monitoring will likely prove most beneficial if it addresses the gaps identified in the self-monitoring and secondary school-based literature. Studies that investigate generalization to non-intervention classes, and evidence of generalizability to those classes within a tiered prevention model, are sorely needed. As the use of technological applications to prompt self monitoring is in its infancy, additional work conducted in general and special education settings would benefit from investigation of critical components of self-monitoring research, including the interval frequency and format (e.g., tones, visual cues) of prompts. If optimal interval length could be approximated through repeated studies across student populations, practitioners could maximize productive class time while minimizing self-monitoring task demands. Last, future investigations of technologically based self monitoring should incorporate achievement measures within the intervention design, including both proximal (i.e., assignment completion) and distal (i.e., course grades) effects, while meeting WWC standards for single-case research. Answering these critical questions would assist researchers and practitioners in making informed choices about technological adoption.

Figure 4. Frequency of disruptions during baseline and intervention conditions for Student 2 (15-min observations).

References

1. Amato-Zech NA, Hoff KE, Doepke KJ. Increasing on-task behavior in the classroom: Extension of self-monitoring strategies. Psychology in the Schools. 2006;43(2):211–221.
2. Blood E, Johnson J, Ridenour L, Simmons K. Using an iPod Touch to teach social and self-management skills to an elementary student with emotional/behavioral disorders. Education and Treatment of Children. 2011;34:299–321.
3. Cooper JO, Heron TE, Heward WL. Applied behavior analysis. 2nd ed. Upper Saddle River, NJ: Prentice Hall; 2006.
4. DiGangi SA, Maag JW, Rutherford RB Jr. Self-graphing of on-task behavior: Enhancing the reactive effects of self-monitoring on on-task behavior and academic performance. Learning Disability Quarterly. 1991:221–230.
5. Fantuzzo JW, Polite K. School-based, behavioral self-management: A review and analysis. School Psychology Quarterly. 1990;5(3):180–198.
6. Gulchak DJ. Using a mobile handheld computer to teach a student with an emotional and behavioral disorder to self-monitor attention. Education and Treatment of Children. 2008;31(4):567–581.
7. Hansen BD, Wills HP, Kamps DM, Greenwood CR. The effects of function-based self-management interventions on student behavior. Journal of Emotional and Behavioral Disorders. 2013. Advance online publication. doi:10.1177/1063426613476345.
8. Harris KR, Friedlander BD, Saddler B, Frizzelle R, Graham S. Self-monitoring of attention versus self-monitoring of academic performance: Effects among students with ADHD in the general education classroom. The Journal of Special Education. 2005;39(3):145–157.
9. Holifield C, Goodman J, Hazelkorn M, Heflin LJ. Using self-monitoring to increase attending to task and academic accuracy in children with autism. Focus on Autism and Other Developmental Disabilities. 2010;25(4):230–238.
10. Kazdin AE. Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press; 1982.
11. Legge DB, DeBar RM, Alber-Morgan SR. The effects of self-monitoring with a MotivAider on the on-task behavior of fifth and sixth graders with autism and other disabilities. Journal of Behavior Assessment and Intervention in Children. 2010;1(1):43–52.
12. Mace FC, Belfiore PJ, Hutchinson JM. Operant theory and research on self-regulation. In: Zimmerman BJ, Schunk DH, editors. Self-regulated learning and academic achievement: Theoretical perspectives. Mahwah, NJ: Lawrence Erlbaum Associates; 2001. pp. 39–66.
13. MacLean WE Jr, Tapp JT, Johnson WL. Alternate methods and software for calculating interobserver agreement for continuous observation data. Journal of Psychopathology and Behavioral Assessment. 1985;7(1):65–73.
14. Martens BK, Witt JC, Elliott SN, Darveaux DX. Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice. 1985;16:191–198.
15. Mason BA, Davis JL. Cueing strategies for self-monitoring interventions: A meta-analysis of effects. Poster presented at: Midwest Symposium for Leadership in Behavior Disorders; February 2013; Kansas City, KS.
16. McDougall D. Research on self-management techniques used by students with disabilities in general education settings: A descriptive review. Remedial and Special Education. 1998;19(5):310–320.
17. Mooney P, Ryan JB, Uhing BM, Reid R, Epstein MH. A review of self-management interventions targeting academic outcomes for students with emotional and behavioral disorders. Journal of Behavioral Education. 2005;14(3):203–221.
18. Paquin MJ. The effects of pupil self-graphing on academic performance. Education and Treatment of Children. 1978;1(2):5–16.
19. Parker D, Kamps D. Effects of task analysis and self-monitoring for children with autism in multiple social settings. Focus on Autism and Other Developmental Disabilities. 2011;26(3):131–142.
20. Quillivan CC, Skinner CH, Hawthorn ML, Whited D, Ballard D. Using a cell phone to prompt a kindergarten student to self-monitor off-task/disruptive behavior. Journal of Evidence-Based Practices in the Schools. 2011;12:131–146.
21. Reid R, Trout AL, Schartz M. Self-regulation interventions for children with attention deficit/hyperactivity disorder. Exceptional Children. 2005;71(4):361–377.
22. Snyder M. Self-monitoring processes. Advances in Experimental Social Psychology. 1979;12:85–128.
23. Tapp J, Wehby J, Ellis D. A multiple option observation system for experimental studies: MOOSES. Behavior Research Methods, Instruments, & Computers. 1995;27(1):25–31.
24. Vannest KJ, Mason BA, Brown L, Dyer N, Maney S, Adiguzel T. Instructional settings in science for students with disabilities: Implications for teacher education. Journal of Science Teacher Education. 2009;20(4):353–363.
25. Varni JW, Henker B. A self-regulation approach to the treatment of three hyperactive boys. Child Behavior Therapy. 1979;1(2):171–192.
26. Wills HP, Kamps DM, Abbott M, Bannister H, Kaufman J. Classroom observations and effects of reading interventions for students at risk for emotional and behavioral disorders. Behavioral Disorders. 2010;35(2):103–119.
27. Young KR. Teaching self-management strategies to adolescents. Longmont, CO: Sopris West; 1991.
