Behavior Analysis in Practice. 2020 Jun 2;14(1):1–10. doi: 10.1007/s40617-020-00421-2

Using Gamification to Promote Accurate Data Entry of Practicum Experience Hours in Graduate Students

Diana Parry-Cruwys, Jacquelyn MacDonald
PMCID: PMC7900260  PMID: 33732573

Abstract

The current study evaluated the effects of a gamified package intervention on the accurate data entry of practicum experience hours by 15 behavior analysis graduate students at a small private university. The gamified intervention (Practicum Slayer) included feedback, added reinforcement in the form of points to access putative primary reinforcement, badges, and thematic enhancement. We compared this intervention to a feedback-only condition using a multiple-baseline design across classes. We collected weekly data on the percentage of students per class who entered data into the Behavior Analyst Certification Board (BACB) Fieldwork Tracker with 100% accuracy. The intervention was effective at increasing accurate data entry for all classes, and 93.33% (14 of 15) of participants reached 100% accuracy in their data entry by the end of the study. Social validity data indicated that participants received the gamification package with moderate positivity. We discuss limitations of the study and areas for future research.

Keywords: BACB, Data entry, Experience, Fieldwork, Game, Gamification, Practicum


Graduate students preparing to sit for the Behavior Analyst Certification Board (BACB) exam must complete documented experience hours in their field of study. Recent changes from the BACB in March 2018 (BACB, 2018) significantly increased the response effort required of fieldwork participants to complete the documentation related to their practicum experience, regardless of experience type. In particular, the BACB provided an Excel-based Fieldwork Tracker in which all students are strongly encouraged to log their hours. Each hour counted toward practicum experience must be logged individually (i.e., each entry must specify the date, time, setting, supervisor, and one of several categories of experience type), and the total hours per month must meet several strict and complex overlapping requirements. Although a completed Fieldwork Tracker is not required during the initial application to sit for the BACB exam, if a trainee’s materials are audited as part of the application process, submission of a completed Fieldwork Tracker may be required within 7 days of the request. Therefore, the BACB strongly recommends accurate and thorough completion of the Fieldwork Tracker by all BACB trainees. Trainees must also track their hours in a unique secondary documentation system, which must be submitted upon request; however, the format of this system is not prescribed by the BACB (BACB, 2018).

This study focused on the context of the intensive practicum model. The intensive practicum model requires that (a) no more than 50% of students’ experience occur while working with clients (i.e., restricted activity); (b) 10% of the total time occur in the presence of a supervisor, with no more than half of that supervision in a group format; (c) four observations occur per month; and (d) at least 50% of students’ total experience be categorized as unrestricted activity, toward which only some of the supervisory time counts. Additionally, some of these requirements must be managed monthly, and some need only be met over the course of the entire experience. Given the complexity of these requirements, the researchers anecdotally observed that many trainees in previous sections of the intensive practicum course had difficulty accurately recording their practicum experience hours. For example, at the end of each workday, trainees may enter the restricted and unrestricted time they have accrued. At the end of the month, trainees may need to go back and delete restricted hours so that restricted time represents no more than 50% of the time accrued for that month. Trainees may also need to adjust hours based on the total amount of supervision provided for that month. This process can lead to errors and frustration on the part of the trainee. If trainees fail to accurately enter data at the end of each month, they may not accrue the percentages of restricted, unrestricted, and supervisory hours needed to sit for the BACB exam, postponing their exam until these percentages are correct. We were interested in identifying ways to increase the likelihood of accurate entry of practicum experience hours by manipulating potential motivational variables for entering experience hours and adding reinforcement for accurate data entry. We selected gamification of the practicum experience as a method to explore this issue.
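To make the interplay of these caps concrete, the following minimal sketch expresses the monthly checks as we have paraphrased them above. The function name, data layout, and simplifications are our illustrative assumptions, not official BACB validation logic.

```python
# A minimal sketch of the monthly checks paraphrased above. The function name,
# data layout, and simplifications (e.g., treating supervision hours as
# concurrent with experience hours) are our illustrative assumptions, not
# official BACB validation logic.

def check_month(restricted, unrestricted, indiv_sup, group_sup,
                observations, weeks_in_month):
    """Return pass/fail results for one month of logged hours."""
    total = restricted + unrestricted    # experience hours logged for the month
    supervision = indiv_sup + group_sup  # supervision hours for the month
    return {
        "restricted <= 50% of total": restricted <= 0.5 * total,
        "supervision >= 10% of total": supervision >= 0.10 * total,
        "group <= 50% of supervision": group_sup <= 0.5 * supervision,
        "one observation per week": observations >= weeks_in_month,
        # Applied monthly in this study, although it is an across-experience
        # requirement in the BACB standards (see Method).
        "unrestricted >= 50% of total": unrestricted >= 0.5 * total,
    }

# A month that fails several of the caps despite 70 hr of logged experience:
print(check_month(restricted=50, unrestricted=20, indiv_sup=1.5,
                  group_sup=3.0, observations=4, weeks_in_month=4))
```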

Although gamelike elements have been applied in varying capacities related to consumer behavior (e.g., earning points and badges for driving using the Waze app), Deterding, Dixon, Khaled, and Nacke (2011) first defined gamification from an academic standpoint as “the use of game-designed elements in non-game contexts” (p. 10). Some of the common elements used in gamification can be categorized by their level of abstraction from the game-play experience. As defined by Deterding et al., these include game interface design patterns, which are the components through which a player directly interacts with the game or sees game progress, such as points, badges, or leaderboards. Also included are game design patterns and mechanics, which are elements that recur during game play and affect player behavior, such as taking turns or having a time or resource constraint, and game design principles and heuristics, which relate to the overall conceit and “feel” of the game, such as the goals the players are trying to achieve and the design of the game. The game model is also important and includes the gaming experience for the player and the concept of the game (e.g., a role-playing game, a quest game, a question-and-answer game). Finally, the game design method should be considered; this element occurs behind the scenes as the game is being created rather than on the consumer’s end.

Morford, Witts, Killingsworth, and Alavosius (2014) added that the definition of gamification should consider the function of game play and, from that, build a definition of game playing as a class of behavior distinct from other behavior. This view of game playing is based on the behavior of the player rather than on the elements of the game, as player behavior is targeted for change in a gamification model. In doing this, Morford et al. established six characteristics that define functional game play. First, player behavior has a direct impact on the game and can influence the results of that game. Second, the player can indicate clear goals within the game or at the end of the game (e.g., what the objective of the game is and how a player knows when the game is over). Third, there are rules and barriers established within the game that the player adheres to when playing. Fourth, the game has a probabilistic outcome that prevents the player from knowing exactly how the game will end. Fifth, the player develops strategies and heuristics for game play as the game develops and in response to consequences (resultant from rules or from other players’ actions) within the game. Finally, behavior should only be classified as game playing if the initiation of the game occurred without coercion.

Although there are relatively few peer-reviewed publications related to gamification, the number is increasing, with the term gamification only gaining ground around 2010 (Hamari, Koivisto, & Sarsa, 2014; Morford et al., 2014). Gamification research has developed in a few key areas, specifically health behavior and education (see Johnson et al., 2016, for a summary of the health and well-being gamification literature). In the field of education, overall positive results from the use of gamification have been found thus far, although the definition of gamification, the gamified elements functioning as the independent variable, and the dependent variables targeted have varied widely.

Dichev and Dicheva (2017) completed a thorough review of the existing research on gamification in education and categorized the studies by the topic being gamified, the behavior being measured, the game elements included, and the outcome. They found 41 studies that reported measurable affective, behavioral, or cognitive outcomes. The outcomes of these studies were then classified as conclusive and positive (29%), conclusive and negative (7%), or inconclusive (63%). The authors note that their classification of outcomes as conclusive or inconclusive was largely based on the categories provided by the original studies’ authors. Many studies reported limitations, such as a small sample size, the lack of a control group, and a limited length of study. Other variables that reflect the varied nature of gamification research and may have influenced the outcomes include (a) a wide range of topics, from computer science classes to literature classes; (b) the format of the game elements, which ranged from online homework to lab projects; (c) the number of game elements included, which ranged from one to three or more; and (d) the type of game element included, which ranged from points only to points, badges, and leaderboards combined. Dichev and Dicheva summarized their extensive analysis by stating,

While the gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach. (p. 1)

These findings point to the importance of continuing to research the benefits of gamification in educational settings given the inconclusive results. Lister (2015) found only 19 studies that addressed the use of gamification in postsecondary (i.e., college and above) settings. Huang, Hew, and Lo (2018) compared gamified and nongamified sections of a flipped-model information management undergraduate class with 48 students in each section. Gamification included badges and levels to be earned, as well as leaderboards and progress trackers to measure student progress individually and relative to the group. The gamified group was more likely to complete their homework and follow-up class activities and scored higher on their final exam.

Examples specifically related to students at the master’s level or above are sparse. Meyer (2008) conducted a study in which a gamified point system was added to a 6-week master’s- and doctoral-level class that contained five weekly online discussions. The researchers asked the students to post two to three times per week on the discussion board and then, at the end of the week, to select what they felt was the “best post” of the class. The student whose post was selected as “best post” for the week was awarded two points toward his or her final grade for the class. Meyer analyzed the quality of the posts using Bloom’s taxonomy and categorized the posts as pertaining to the levels of create, evaluate, apply, understand, and/or know. Although the taxonomic complexity of the posts did not change over the course of the 6-week class, participants self-reported that the posts became more complex over time. This study highlighted a way in which gamification could be applied to an upper-level critical thinking seminar. However, because the results were inconclusive, more work at the graduate level should be done to determine whether gamification is a useful component of graduate school classes.

Skinner (1984) noted that game playing likely produces automatic reinforcement for many individuals, and modern games often have variable schedules of reinforcement built into them to promote continued play (Morford et al., 2014). Therefore, a well-designed gamified class should produce self-satisfaction while playing and make students more likely to continue to engage with class materials. Several studies have supported this general hypothesis. Chapman and Rich (2018) conducted a follow-up survey of undergraduates who participated in a gamified class. Of the 124 respondents, 83 (67%) said that the gamified course was more motivating to participate in than a regular, nongamified course. Lister (2015) found that, in 12 of 19 studies on postsecondary-level gamification, students self-reported higher motivation in the gamified classes. However, she also reported that 10% of students noted negative motivation or affect in response to gamification. Similarly, Haaranen, Ihantola, Hakulinen, and Korhonen (2014) evaluated the use of badges as part of a gamification package for an online bachelor’s-level course. Participants earned badges for learning, time management, and carefulness, and badges were ranked bronze, silver, and gold. In a social validity follow-up survey, only 52% of participants wanted badges included in the course were they to take a gamified class again, and 32% of participants reported that they did not see the usefulness of the badges. These negative responses to gamification may be specific to college-level respondents; more research is needed to determine whether gamification is off-putting to those pursuing higher education.

Limited research has been conducted in the field of gamification of education, and even less research has been conducted with graduate students. We are of the opinion that increasing the accuracy of tracking the hours and activities related to the fieldwork experience is a socially significant endeavor and that gamification may produce favorable outcomes. Therefore, the purpose of the current study was to extend the research on gamification in education by applying gamified elements to an intensive practicum class to determine its effects on the accuracy of data entry meeting BACB practicum experience requirements. We also collected social validity data on the acceptability of gamification in a graduate-level class.

Method

Participants and Materials

Participants

The participants were 15 graduate students (aged 23–32 years) receiving their master’s degree in applied behavior analysis at a small private university. None of the participants had any prior experience with the Fieldwork Tracker outside of the prepracticum training, described later. Participants were enrolled across three sections of a practicum course. The study authors taught two sections of the course, and an instructor with experience teaching practicum and familiarity with the Fieldwork Tracker and BACB experience standards taught the third section. Each class consisted of seven to eight students. We implemented the gamification intervention for all students in the class; however, data were included in this study only for those students who met inclusion criteria and gave informed consent. Any students who were enrolled in practicum before (n = 1) or were not collecting intensive practicum hours (n = 2) were excluded from the study. Of the students meeting inclusion criteria (n = 18), three did not provide consent to participate. Any student in the class could earn points and badges and experienced all elements of the game, regardless of their inclusion in the study.

Classes were conducted simultaneously, in separate classrooms. Although the progress of each class was not kept secret, the game was not structured as a competition, so the progress of other classes in their level of the game or their progress on coloring in their dragon was not highlighted during class time.

Materials

Each participant had access to Microsoft Excel or Google Sheets and Moodle and used a laptop, tablet, or computer to collect fieldwork experience data and review feedback from the instructor. We provided participants with the BACB Fieldwork Tracker. Copies of the Fieldwork Tracker were saved in individual student folders on the secured OneDrive at the university. Each participant also had access to a self-study PowerPoint presentation that reviewed the necessary components of data collection for the intensive practicum experience and linked to the BACB experience standards (BACB, 2018). All instructional materials were posted on Moodle and available at least 1 week before class began. Additionally, for the gamification portion of the class, a poster (described later) and markers were provided during class time.

Dependent Variables and Measurements

The dependent variable was the accurate entry of data into the Fieldwork Tracker according to the requirements of the intensive practicum experience as laid out by the BACB. We evaluated input accuracy by analyzing the participant’s permanent product of the Fieldwork Tracker that was uploaded to a secure OneDrive folder at the beginning of each week. We measured input accuracy on the requirements listed in Table 1. To aid in clarifying expectations for the participants, the requirement that unrestricted time be 50% or more of the total time was applied to each month’s experience accrual, even though it is an across-experience requirement rather than a monthly requirement according to the BACB monthly standards (2018). If a participant did not enter any new data into the Fieldwork Tracker since the previous week, he or she received a “−” on Requirements 4–10 of the Fieldwork Tracker Requirements Checklist, giving the participant a total possible percentage independence of 30% for the week. Percentage accuracy was determined by dividing the number of correct components by the total number of components for that week and multiplying by 100. Data were collected by the first and second authors for the purpose of calculating interobserver agreement in 39.44% of sessions for a mean of 99.01% agreement (range 80%–100%). As the BACB requires perfect accuracy in data collection of experience hours, the dependent measure used to determine the effectiveness of the intervention was the percentage of participants in each class who completed 100% of the checklist items independently that week. This was determined by dividing the number of participants who received a 100% on their checklist by the total number of participants within the class and multiplying by 100.
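For clarity, the scoring just described can be expressed as a short computation. The sketch below is our reconstruction under the rules stated above; the names and data layout are hypothetical. Note that items scored “−” still count toward the denominator, which is why a week with no new data can score at most 30%.

```python
# Our reconstruction of the weekly scoring described above (names hypothetical).
# Each weekly checklist has the 10 items in Table 1; an item is True (correct),
# False (error), or None (scored "-" because no new data were entered).

def weekly_accuracy(checklist):
    """Percentage accuracy: correct items divided by all 10 items, times 100.
    Items scored "-" still count in the denominator, so a week with no new
    data can score at most 30% (Requirements 1-3 only)."""
    correct = sum(1 for item in checklist if item is True)
    return 100 * correct / len(checklist)

def percent_at_100(class_checklists):
    """Class-level dependent measure: the percentage of participants in the
    class whose checklist for that week was 100% correct."""
    perfect = sum(1 for c in class_checklists if weekly_accuracy(c) == 100)
    return 100 * perfect / len(class_checklists)

week = [[True] * 10, [True] * 9 + [False], [True, True, True] + [None] * 7]
print([weekly_accuracy(c) for c in week])  # [100.0, 90.0, 30.0]
print(percent_at_100(week))                # 33.33... (1 of 3 at 100%)
```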

Table 1.

Fieldwork Tracker Requirements Checklist and Number (Percentage) of Errors of This Type Out of Total Errors (n = 179)

Step Requirement Number (%)
1 Name on Fieldwork Tracker 9 (5.03)
2 Supervisor tab information completed 21 (11.73)
3 Setting tab information completed 10 (5.59)
4 Data entered for current week or new data entered N/A (incomplete samples removed from this comparison)
5 For the current month, restricted time is 50% or less of total time 34 (18.99)
6 For the current month, supervision time is 10% or more of total time 18 (10.06)
7 For the current month, group supervision time is 50% or less of total supervision time 37 (20.67)
8 For the current month, minimum of one observation has occurred equal to each of the weeks that have occurred in the month 17 (9.50)
9 Unrestricted time has notes in the “Other” column on the Fieldwork tab 16 (8.94)
10 Fieldwork tab has no negative numbers, observation “yes/no” has been entered, and no other obvious errors 17 (9.50)

We assessed procedural integrity in multiple ways. First, we assessed whether the instructor provided the instruction to color the appropriate number of dragon squares to the participants in each section; the instructor provided the instruction 100% of the time. This was assessed via the permanent product of the total number of dragon squares colored at the end of the session. After each class, all dragon squares were colored, except after one class session in one section, when the students did not want to complete the coloring; they completed it during the following class. Second, prior to each class session, the primary author posted the appropriate badges and the next addition to the storyline to each section’s Moodle page 100% of the time, as determined via digital time stamps.

Experimental Design and Conditions

We used a multiple-baseline design across classes (Baer, Wolf, & Risley, 1968).

Prepracticum training

Participants were provided with a self-study PowerPoint prior to starting the practicum course, which outlined the requirements for collecting and tracking experience hours. As part of the PowerPoint activity, we instructed participants to watch a 30-min video provided by the BACB on the Fieldwork Tracker and enter five to six sample lines of data into the Fieldwork Tracker Excel spreadsheet. Completion of the self-study PowerPoint was encouraged but not required before starting the practicum course.

Baseline

Throughout all conditions, participants received instruction to enter their experience hours data into the BACB Fieldwork Tracker Excel spreadsheet daily. Data entry could occur at any time of the day. We checked data weekly on a set day of the week. Feedback was provided digitally through a “Notes” section of the Fieldwork Tracker and individually in person during class time. During baseline, we provided only corrective and/or constructive feedback (e.g., “Your unrestricted time needs to be at least 50% of your total time,” “Nice job this week!”) in written form on the Fieldwork Tracker. Throughout the study, the first author provided all written feedback on the Fieldwork Tracker. The class instructor provided additional verbal feedback and assistance during class if needed. The baseline phase occurred for 2, 3, or 4 weeks depending on the class. Determination of which class received which length of baseline was randomized.

Gamification intervention

Following baseline, we introduced a “gamified” practicum class. The gamified class was in place for the remainder of the term, for 8, 9, or 10 weeks depending on the length of baseline for that class section. The game was called Practicum Slayer. Students were given an introductory story, posted to the discussion board on Moodle, that described them joining a medieval group made up of knights, mages, damsels, and stable helpers on a mission to slay a dragon. Participants were given the rules sheet for the game on Moodle as well. The rules sheet detailed how points were earned during the game, how badges were earned, and the award available for winning the game and defeating the dragon (appetizers ordered of the class’s choice). During the game, each class worked as a team to gain enough points to “slay” their dragon, which was represented by a 46 cm × 61 cm poster with a dragon printed on it. Each poster was divided into squares of approximately 4 cm × 5 cm. Each square was worth five points. As the class earned points according to the rules of the game, they could color in the corresponding number of squares on the dragon poster. Once all squares were colored in, the dragon was defeated, and the class won the Practicum Slayer game. Gamified components of Practicum Slayer included points; badges; levels, story, and discussion board; and reinforcement, each described next.

  • Points. The class could color in one square of the dragon poster for every five points earned. Points could be earned for filling in the Fieldwork Tracker, even if the data were not perfect (five points for each participant each week); filling in the Fieldwork Tracker correctly (each correct hour equaling one point); and completing assignments for class (three assignments across the term, each worth 20 points per participant for an on-time submission and five points per participant for a late submission). The total points earned by the class for the week were verbally communicated to the participants at the start of each class (a sketch of this weekly tally appears after this list). Individual participants were not told how many points they contributed to the total score during class, although the total number of points they accrued for the week was noted for them in their Fieldwork Tracker notes. The participants could then color the dragon during class time while engaging in group work and class discussion.

  • Badges. Each week, each badge was awarded to one participant (or more, in the event of a numeric tie) on the Moodle Practicum Slayer discussion page for each class. The badges were “Top Performer” (most experience hours acquired that week) and “So Supervised” (most supervisory hours acquired that week).

  • Levels, story, and discussion board. For every 150–200 points earned, participants moved up a level and received the next part of the Practicum Slayer story, uploaded to the Practicum Slayer discussion page on Moodle. The story provided a detailed account of the party moving through the forest and closer to the dragon, meeting creatures and having adventures along the way. In each level of the story, the protagonists also collected a new item. Participants were encouraged to interact with each other on the discussion board following story postings to think about the new items they had received and how they might be useful in dragon slaying/survival. At the final level (Level 14), participants had all the items necessary to slay their dragon and were encouraged to communicate on the discussion board as to how they would like to slay it (multiple solutions would have been accepted). When a class gained the criterion number of points, the final chapter of the story, in which the dragon was slain by the story characters, was released. The criterion number of points was determined individually for each class by calculating the total possible points and then determining the number of points nearest to 90% of the total that could also be divided evenly by similarly valued whole numbers for the purposes of drawing the grid on the dragon poster. For example, Class B received 9 weeks of the gamification intervention. The total points this class could receive was 1,995. The total points required to defeat the dragon was 1,800, as this number was 90.2% of the total and could be divided into an 11 × 15 grid for the dragon poster. All criterion numbers ranged between 87.9% and 90.2% of the possible class totals.

  • Reinforcement. All classes had the opportunity to gain enough points to slay their dragon and win the game (classes with shorter gamification intervention time had the number of points needed adjusted accordingly). Classes that slew their dragon before the last day of class had appetizers of their choice delivered for the last day of class. Classes that slew their dragon up to 1 week after the last day of class (to allow for accrual and tallying of the last week of experience hours) received one point added to their final grades. The addition of the extra point incentive was announced in Week 8 for Class A and Week 9 for Classes B and C in response to a perceived wavering of motivation and variable responding across participants.
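As a concrete illustration of the point rules above, the following sketch tallies one week’s class points and converts them to poster squares. The data layout and field names are hypothetical.

```python
# A sketch of the weekly class point tally under the Practicum Slayer rules
# described above; the data layout and field names are hypothetical.

def weekly_class_points(entries):
    """entries: one dict per class member for the week."""
    points = 0
    for e in entries:
        if e.get("entered_data"):
            points += 5                      # Tracker filled in, even imperfectly
        points += e.get("correct_hours", 0)  # one point per correctly entered hour
        if e.get("assignment") == "on_time":
            points += 20                     # on-time class assignment
        elif e.get("assignment") == "late":
            points += 5                      # late submission
    return points

week = [
    {"entered_data": True, "correct_hours": 12, "assignment": "on_time"},
    {"entered_data": True, "correct_hours": 8},
    {"entered_data": False},
]
print(weekly_class_points(week))       # 50 points earned this week
print(weekly_class_points(week) // 5)  # 10 dragon squares to color
```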

Maintenance

The study was designed such that if any class met the criteria for reinforcement and defeated their dragon before the end of the class (Week 14), correct data entry would be measured for any weeks after these criteria were met to determine maintenance of effect. The maintenance condition was implemented for one class, Class A, for 1 week.

Social Validity and Demographic Information Follow-Up

At the end of the study, all participants were e-mailed a brief (3–5 min), anonymous social validity survey. Questions and results of the survey are shown in Table 3. The survey collected basic demographic information (e.g., age, prior experience with a practicum tracker) and participants’ ratings of acceptability of and preference for the gamified class model. Eleven of 15 participants (73.33%) completed the survey. Following the close of the survey, we debriefed all participants on the study.

Table 2.

Mean Percentage of Independent Completion of Steps of the Fieldwork Tracker Checklist for All Baseline Sessions and Final Three Treatment Sessions per Participant

Group Participant Mean Percentage in Baseline Mean Percentage in Last Three Treatment Sessions
A Deirdre 55 93.33
Sasha 50 93.33
Gabi 35 100
Colleen 90 100
B Emma 20 83.33
Suzy 63.33 76.67
Declan 63.33 100
Loretta 63.33 100
Dani 53.33 96.67
Irene 80 100
C Regina 72.5 100
Ava 85 100
Matt 40 100
Henry 17.5 90
Cameron 40 93.33
Total Average 55.22 95.11

Results

Figure 1 displays the results from the gamification intervention. Although winning the game was determined by the total number of class points, these results are not displayed here as they include points earned by classmates who did not give their consent for their data to be included in the study. Rather, Fig. 1 shows the percentage of participants in the study who completed their Fieldwork Trackers with 100% accuracy each week. In baseline, no participants completed their Fieldwork Trackers with 100% accuracy when receiving only experimenter and instructor feedback. With the introduction of the gamification intervention, all classes showed an immediate though small increase in percentage of correct responding, with an increasing trend over subsequent weeks of the intervention. Both Classes A and B reached 100% of participants completing 100% of their Fieldwork Trackers correctly by the end of the study. Class C reached 80% of participants completing their Fieldwork Trackers with 100% accuracy at the end of the study. The addition of the extra point incentive during Weeks 9 and 10 did not produce a reliable change in responding. Additionally, all three classes “won” the game, with Class A earning enough points to reach Level 14 after Week 13 (earning appetizers and receiving an extra point on their final grades) and Classes B and C doing so after Week 14 (receiving an extra point on their final grades). Because Class A completed the game in Week 13 and accessed reinforcement before the end of the study, we assessed whether responding would maintain in the absence of the gamification components in Week 14. Eighty percent of the participants maintained their responding at 100% accuracy in this maintenance condition for Class A.

Fig. 1.

Percentage of participants meeting 100% of Fieldwork Tracker criteria across classes. The asterisk denotes the addition of an extra credit incentive; BL = baseline

The mean percentages of independent completion of steps of the Fieldwork Tracker Requirements Checklist for all baseline sessions and the final three treatment sessions per participant are displayed in Table 2. The average mean percentage of independent completion of the checklist in baseline was 55.22% (range 17.5%–90%). The average mean percentage of independent completion of the checklist in the final three treatment sessions was 95.11% (range 76.67%–100%). All participants showed improvement in mean score from baseline to the end of treatment, although for some participants (notably Colleen, Irene, and Ava), baseline scores were high preceding the intervention, and for one participant (Suzy), the change in mean score from baseline to final treatment was relatively small (63.33%–76.67%).

Table 1 also includes an error analysis showing the percentage of occurrence of each type of error out of all errors on the components of the Fieldwork Tracker Requirements Checklist. Out of 180 Fieldwork Tracker checks that occurred over the course of the study, 27 samples (15%) had no newly entered data; those samples were excluded from the error analysis. In the remaining 153 samples, 179 categorical errors occurred. The two most common errors were that too much group supervision time was recorded (20.67% of errors) and too much restricted time was recorded (18.99% of errors). The third most common error was the supervisor’s tab of the Fieldwork Tracker being incomplete (11.73% of errors), and the fourth most common error was too little supervision time recorded (10.06% of errors).

Eleven of 15 participants (73.33%) completed the social validity survey. Results are shown in Table 3. We structured all questions using a 7-point Likert-type scale in which a score of 1 indicated strongly disagree with the presented statement and a score of 7 indicated strongly agree with the presented statement. Results of the social validity survey were moderately positive. Mean scores on the social validity questions ranged from 4.3 as the lowest (in response to the statement “Playing Practicum Slayer helped me work cooperatively with my class”) to 5.1 as the highest (in response to the statements “I liked earning points to defeat the dragon” and “I liked earning badges [if applicable]”). One participant scored all questions as 1 (strongly disagree). Some participants also supplied optional comments. These comments were overall positive and included suggestions for improving the game. Two participants mentioned they thought the game would be more fun as a competition between or within classes, one participant thought the badges should be printed out, and two participants commented that they did not read the story.

Table 3.

Social Validity Questions and Answers Categorized by Percentage Positive, Neutral, and Negative and Mean Score (n = 11)

Question % Positive (5 to 7) % Neutral (4) % Negative (1 to 3) Mean (1 to 7)
1 I liked playing Practicum Slayer as part of my practicum experience. 81.8 0 18.2 5.0
2 Playing Practicum Slayer increased my motivation to correctly input my experience hours. 45.5 27.3 27.3 4.4
3 Playing Practicum Slayer helped me work cooperatively with my class. 36.4 9.1 36.4 4.3
4 I liked earning points to defeat the dragon. 81.8 0 18.2 5.1
5 I liked earning badges (if applicable). 63.6 18.2 18.2 5.1
6 I liked reading the Practicum Slayer story. 63.6 18.2 18.2 4.6
7 I liked coloring in the dragon to defeat it. 54.5 9.1 27.3 4.7
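For readers wishing to replicate this summary, the sketch below derives Table 3’s columns from raw 7-point responses. The response distribution shown is hypothetical, chosen only to be consistent with Question 1’s reported values; it is not the study’s raw data.

```python
# How Table 3's summary columns can be derived from raw 7-point Likert scores
# (positive = 5-7, neutral = 4, negative = 1-3). The distribution below is
# hypothetical, chosen only to be consistent with Question 1's reported summary.
from statistics import mean

def summarize(responses):
    n = len(responses)
    return {
        "% positive": round(100 * sum(r >= 5 for r in responses) / n, 1),
        "% neutral": round(100 * sum(r == 4 for r in responses) / n, 1),
        "% negative": round(100 * sum(r <= 3 for r in responses) / n, 1),
        "mean": round(mean(responses), 1),
    }

print(summarize([7, 7, 6, 6, 6, 5, 5, 5, 5, 2, 1]))
# {'% positive': 81.8, '% neutral': 0.0, '% negative': 18.2, 'mean': 5.0}
```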

Discussion

The purpose of this study was to examine the effects of gamification on behavior that has anecdotally proven challenging for graduate students in behavior analysis: the accurate data entry of practicum experience hours according to the BACB’s requirements. Results suggest that the gamification intervention was effective at increasing accurate Fieldwork Tracker data collection for 93.33% (14 of 15) of the participants. The application of gamelike elements to a practicum course is a new approach to the socially significant problem of accurate collection of BACB Fieldwork Tracker data. The baseline condition alone, which included instructions and individualized weekly written and/or verbal feedback, was not enough to produce data collection with 100% accuracy for the participants. The addition of a class-wide treatment package that included a gamified structure, however, produced a meaningful increase in data entry accuracy across classes.

The gamification intervention was a package intervention structured as an interdependent group contingency. Several components common to gamified education—points, levels, and badges—were included (Sailer, Hense, Mayr, & Mandl, 2017). Game elements included likely conditioned reinforcers (points) earned as a class and accrued toward putative backup reinforcers (appetizers and an extra credit point on the final grade), individual recognition in the form of weekly badges, and thematic enhancements (the Practicum Slayer story and dragon poster for coloring). Because we presented these components at the same time, it is unknown which elements of the gamification intervention were most effective at producing behavior change. Each of these components may be independently sufficient to promote behavior change through gamification, or they may only be interdependently sufficient. Future research should examine the components of a gamification intervention to determine the most effective elements, both to maximize the efficiency of gamification implementation and to ensure the intervention is well received by the participants or consumers.

Additionally, it is unclear why written feedback was insufficient to produce behavior change in baseline. It may be that the students did not read the written feedback or did not understand it because they had not completed the self-study module prior to class. Future research should examine whether making the self-study module compulsory and providing additional practice examples within it would increase accuracy without the use of gamification, making the treatment package more cost and time efficient.

Questions remain regarding the role of reinforcement in this study. One limitation is that the participants were not surveyed to determine a preferred reward for winning Practicum Slayer; appetizers were chosen arbitrarily as a putative reinforcer and, although seemingly met with enthusiasm by the participants, their effectiveness as a reinforcer remains unknown. In the future, asking participants to rank order potentially effective putative reinforcers before implementing the intervention (e.g., Waldvogel & Dixon, 2008) could be a useful extension of this line of research. Additionally, the extra credit incentive added during Weeks 9 and 10 did not appear to modify the establishing operation (EO) for accurate Fieldwork Tracker completion, as no notable change in behavior was seen in any class following its introduction. As research with graduate students is limited, effective reinforcers for this group remain unknown. Whether these incentives functioned as potential reinforcers or EO modifiers should be evaluated in future research with graduate students and future gamification studies. The addition of putative reinforcers (i.e., extra point in class, appetizers) may not have been necessary for all students. For example, three participants (Colleen, Irene, Ava) engaged in high accuracy across baseline sessions, demonstrating that other variables may have affected their performance. It is possible that their responding was sensitive to the feedback provided in baseline or that their responding may have been maintained by the threat of an audit or a rejection of their hours by the BACB and the increased time between graduation and the opportunity to take the BACB exam. One participant (Suzy) demonstrated a limited change from baseline to posttreatment sessions, resulting from a failure to record any new data in the Fieldwork Tracker during 1 week of the last three intervention sessions. It is possible that (a) the naturally occurring reinforcers of having a “perfect” Fieldwork Tracker and avoiding potential censure by a BACB audit were more powerful motivators for some participants and/or (b) the magnitude or schedule of reinforcement presented within the game was too small or thin for other participants to be motivated to accurately enter data into their Fieldwork Trackers. For these reasons, it is not possible to allocate change in behavior to the reinforcement component of the gamification package without a further component analysis.

There are also procedural limitations to the current study. First, not all of the practicum students provided consent; three of 18 potential participants (16.67%) did not. One of these three was not present when we presented the opportunity to provide informed consent to the class and did not return the consent form that we left for him or her. The other two were present and chose not to participate. More research should be conducted with this population to determine if this level of opting out is standard among graduate students, and if so, why this may be.

Second, due to the time constraints of this study, only one class experienced the maintenance condition, and only for one class session. This limited examination of maintenance does not allow any strong conclusions to be drawn regarding the durability of accurate Fieldwork Tracker completion following a gamification package. In the maintenance data collected in this study, responding maintained at 100% for three of four participants (75%). However, this is a lower percentage than demonstrated during treatment, and the trend of completion accuracy during maintenance is unknown given the time limitations. This limitation should be addressed in future iterations of this line of research.

The error analysis displayed in Table 1 allowed for a detailed look at some of the challenges that participants experienced with accurate data entry according to the BACB experience standards requirements. The four most commonly occurring errors were entering too much restricted time, entering too much group supervision time, not entering supervisors’ information, and entering too little supervision time. Three of these errors (all except the supervisors’ error, which can be easily resolved) are extremely important to catch early, as they can greatly impact how many hours the trainee can claim as accrued experience. It was not uncommon during a check to see an example such as this: A trainee enters 50 hr of restricted time, 20 hr of unrestricted time, 1.5 hr of individual supervision, and 3 hr of group supervision for the month. The trainee might think based on what had been entered that he or she was on track to accrue 70 hr of experience for the month; however, the restrictions on accrual type would only put the trainee on track to accrue 30 hr of experience for the month. We also recognize that the requirement of having 50% or more of the time for each month be unrestricted time is self- rather than BACB imposed; the BACB only requires that 50% or more of the total accrued time be unrestricted, not per month. However, given that the trainees were working to complete their experience in the relatively short duration of the intensive practicum, and that anecdotally the trainees appeared confused by this distinction (monthly requirement vs. total experience requirement), the decision was made to make this a monthly requirement to eliminate trainees accruing very little unrestricted time each month during the intensive practicum and therefore being unable to complete their experience within the time frame. Additionally, we should note that the graduate students were accruing hours under the standards set forth by the intensive practicum experience. Although the intensive practicum experience will be withdrawn with the implementation of the new BACB standards, all fieldwork hours require the same 100% adherence to data accuracy and complex requirements. The methods of this study could be applied to future fieldwork hour requirements.
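The arithmetic behind this example can be traced explicitly, assuming (as the monthly totals suggest) that supervision hours run concurrently with experience hours. The sketch below is our reconstruction of the caps, not official BACB audit logic; it reproduces the 30-hr figure from the example above.

```python
# Our reconstruction of the arithmetic in the example above, assuming
# supervision hours run concurrently with experience hours; this is not
# official BACB audit logic.

def max_countable_hours(restricted, unrestricted, indiv_sup, group_sup):
    # Group supervision may be at most 50% of supervision time, so usable
    # group hours cannot exceed individual hours (here min(3.0, 1.5) = 1.5).
    usable_supervision = indiv_sup + min(group_sup, indiv_sup)   # 3.0 hr
    # Supervision must be >= 10% of the total, capping countable time at
    # ten times the usable supervision (here 30 hr).
    cap_from_supervision = usable_supervision / 0.10
    # Restricted time may be at most 50% of the total, so countable time is
    # also capped at twice the unrestricted hours (here 40 hr).
    cap_from_restricted = 2 * unrestricted
    logged = restricted + unrestricted                           # 70 hr entered
    return min(logged, cap_from_supervision, cap_from_restricted)

print(max_countable_hours(restricted=50, unrestricted=20,
                          indiv_sup=1.5, group_sup=3.0))         # 30.0
```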

Another limitation of the current study was that we did not collect procedural integrity on how the story was delivered during each class section. The story and badges were posted prior to each class, but it is possible that each instructor dealt with the badges and story differently (i.e., read the story aloud, had students quietly read the story). Future research should collect procedural integrity on the behavior of the instructor in each class section to strengthen the procedures of future gamification research. Additionally, the first and second authors of this study collected data, preventing blinded ratings of participant accuracy. Although the authors were careful to accurately summarize data in accordance with participant performance on the Fieldwork Tracker Requirements Checklist, in the future, best practice should allow for an outside observer to collect data to prevent unforeseen bias.

In the current study, social validity scores were moderately positive, with mean scores for the social validity statements averaging between 4.3 and 5.1, or between neutral and agree. In response to the general statement “I liked playing Practicum Slayer as part of my practicum experience,” 81.8% of participants responded positively, and 18.2% responded negatively. These results were in line with previous postsecondary student social validity responses to gamification (67% positive, Chapman & Rich, 2018; 10% negative, Lister, 2015). To date, no other studies that we reviewed conducted social validity assessments on gamification specifically with a graduate-level population. This group may respond differently to thematized gamification than other groups based on age, stress, expectations of their studies, or additional factors that have yet to be explored. The participants in the current study also responded more positively to the inclusion of individually earned badges than was anticipated based on previous research (Haaranen et al., 2014). Modifying elements of the gamification component of the course based on participant feedback (more emphasis on competition and badges, less emphasis on thematic elements) should be considered to determine if such modifications will make the game structure more positively received by all participants in future studies.

Anecdotally, graduate students enrolled in practicum self-report high rates of stress around gathering practicum hours and properly inputting those hours. However, we did not examine levels of stress in the current study. Additional exploration of graduate students’ perceptions of the practicum experience is warranted. If this experience is found to be self-reported as stressful, examining methods to make the practicum data collection experience less effortful or stressful for graduate students, while simultaneously increasing the accuracy of data entry, would be a meaningful and socially significant problem to pursue.

This study adds to the limited literature on gamification in postsecondary education. Although findings on gamification in education have been generally positive thus far, and this study adds to them, additional exploration of this topic is warranted to determine which elements of gamification are most effective in postsecondary education and how socially valid its use is among graduate students.

Funding

The authors did not receive any funding for this research.

Compliance with Ethical Standards

Conflict of Interest

The authors declare they do not have any conflicts of interest.

Ethical Approval

This research was completed with human subjects and approved through a university-affiliated institutional review board.

Informed Consent

All participants gave their informed consent before participation in the study and were debriefed following the conclusion of the study.

Footnotes

Research Highlights

• Many practitioners also supervise BACB candidates completing their practicum or fieldwork experience.

• This research highlights the need for additional systems for assisting practicum or fieldwork students in entering their data according to BACB guidelines.

• Supervisors may find the gamification model helpful in incentivizing correct data entry for their practicum or fieldwork students.

• Supervisors may find the table useful as a checklist for the required components of practicum/fieldwork data entry.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97. 10.1901/jaba.1968.1-91.
  2. Behavior Analyst Certification Board. (2018). BCBA/BCaBA experience standards: Monthly system. Retrieved from https://www.bacb.com/wp-content/uploads/BACB_Experience-Standards_190730.pdf.
  3. Chapman, J. R., & Rich, P. J. (2018). Does educational gamification improve students’ motivation? If so, which game elements work best? Journal of Education for Business, 93(7), 315–322. 10.1080/08832323.2018.1490687.
  4. Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining “gamification.” Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, 9–15.
  5. Dichev, D., & Dicheva, D. (2017). Gamifying education: What is known, what is believed, and what remains uncertain: A critical review. International Journal of Educational Technology in Higher Education, 14(9). 10.1186/s41239-017-0042-5.
  6. Haaranen, L., Ihantola, P., Hakulinen, L., & Korhonen, A. (2014). How (not) to introduce badges to online exercises. Proceedings of the 45th ACM Technical Symposium on Computer Science Education, 33–38. 10.1145/2538862.2538921.
  7. Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work? A literature review of empirical studies on gamification. Proceedings of the 47th Annual Hawaii International Conference on System Sciences. 10.1109/HICSS.2014.377.
  8. Huang, B., Hew, K. F., & Lo, C. K. (2018). Investigating the effects of gamification-enhanced flipped learning on undergraduate students’ behavioral and cognitive engagement. Interactive Learning Environments, 27, 1106–1126. 10.1080/10494820.2018.1495653.
  9. Johnson, D., Deterding, S., Kuhn, K., Staneva, A., Stoyanov, S., & Hides, L. (2016). Gamification for health and wellbeing: A systematic review of the literature. Internet Interventions, 6, 89–106. 10.1016/j.invent.2016.10.002.
  10. Lister, M. (2015). Gamification: The effect on student motivation and performance at the post-secondary level. Issues and Trends in Educational Technology, 3(2), 1–22. Retrieved from https://www.learntechlib.org/p/171075/.
  11. Meyer, K. A. (2008). Do rewards shape online discussions? Journal of Interactive Online Learning, 7(2), 126–138. Retrieved from http://www.ncolr.org/jiol/issues/pdf/7.2.3.pdf.
  12. Morford, Z. H., Witts, B. N., Killingsworth, K. J., & Alavosius, M. P. (2014). Gamification: The intersection between behavior analysis and game design technologies. The Behavior Analyst, 37, 25–40. 10.1007/s40614-014-0006-1.
  13. Sailer, M., Hense, J. U., Mayr, S. K., & Mandl, H. (2017). How gamification motivates: An experimental study of the effects of specific game design elements on psychological need satisfaction. Computers in Human Behavior, 69, 371–380. 10.1016/j.chb.2016.12.033.
  14. Skinner, B. F. (1984). The shame of American education. American Psychologist, 39, 947–954. 10.1037/0003-066X.39.9.947.
  15. Waldvogel, J. M., & Dixon, M. R. (2008). Exploring the utility of preference assessment in organizational behavior management. Journal of Organizational Behavior Management, 28, 76–87. 10.1080/01608060802006831.
