Behavior Analysis in Practice
. 2018 Apr 9;11(2):129–138. doi: 10.1007/s40617-018-0243-y

An Evaluation of the Performance Diagnostic Checklist—Human Services (PDC–HS) Across Domains

David A Wilder 1, Joshua Lipschultz 1, Chana Gehrman 1
PMCID: PMC5959821  PMID: 29868337

Abstract

The Performance Diagnostic Checklist—Human Services (PDC–HS) is an informant-based tool designed to assess the environmental variables that contribute to poor employee performance in human service settings. Although the PDC–HS has been shown to effectively identify variables contributing to problematic performance, interventions based on only two of the four PDC–HS domains have been evaluated to date. In addition, the extent to which PDC–HS-indicated interventions are more effective than nonindicated interventions for two domains remains unclear. In the current study, we administered the PDC–HS to supervisors to assess the variables contributing to infrequent teaching of verbal operants and use of a timer by therapists at a center-based autism treatment program. Each of the four PDC–HS domains was identified as contributing to poor performance for at least one therapist. We then evaluated PDC–HS-indicated interventions for each domain. In addition, to assess the predictive validity of the tool, we evaluated various nonindicated interventions prior to implementing a PDC–HS-indicated intervention for two of the four domains. Results suggest that the PDC–HS-indicated interventions were effective across all four domains and were more effective than the nonindicated interventions for the two domains for which they were evaluated. Results are discussed in terms of the utility of the PDC–HS to identify appropriate interventions to manage therapist performance in human service settings.

Keywords: Performance assessment, Performance Diagnostic Checklist—Human Services, Performance management, Staff evaluation, Verbal operants


Performance analysis is the organizational equivalent to functional assessment in clinical applications of behavior analysis. In performance analysis, the goal is to identify the variables contributing to employee performance problems. Three methods of performance analysis exist: informant, descriptive, and experimental. The most popular method of performance analysis, the informant method, has been used to assess a variety of target performances, including poor customer service (Eikenhout & Austin, 2005), the offering of promotional stamps (Rodriguez et al., 2005), cash register shortages (Rohn, Austin, & Lutrey, 2003), and problems completing maintenance tasks (Austin, Weatherly, & Gravina, 2005). The most common type of informant-based assessment is the Performance Diagnostic Checklist, which was used in the aforementioned examples to identify variables contributing to poor performance in restaurants and retail stores (Austin, 2000).

Recently, the Performance Diagnostic Checklist was modified specifically for use in human service settings such as clinics, schools, and residential treatment facilities. The Performance Diagnostic Checklist—Human Services (PDC–HS) consists of 20 items across four domains:

  1. Training.

  2. Task clarification and prompting.

  3. Resources, materials, and processes.

  4. Performance consequences, effort, and competition.

Although the PDC–HS is primarily an informant-based tool, it does include a small direct observation component; the person completing the PDC–HS is prompted to answer a few questions by directly observing the employee performing the task, if possible (Carr & Wilder, 2016).

The training domain consists of four questions designed to identify the extent to which the employee has been taught to correctly perform the task (e.g., Is there evidence that the employee has accurately completed the task in the past?). Possible interventions based on this domain include training and improved personnel selection. The task clarification and prompting domain consists of five questions designed to identify the clarity of the task and the presence of prompts that may occasion task performance (e.g., Is the employee ever verbally, textually, or electronically reminded to complete the task?). Possible interventions based on this domain include task clarification, prompting, and altering the task location. The resources, materials, and processes domain consists of six questions designed to determine the availability of any resources necessary for task completion and the extent to which any existing organizational processes interfere with task completion (e.g., If materials such as teaching stimuli or preferred items are required for task completion, are they readily available?). Possible interventions based on this domain include adjusting staffing and improving access to task materials. Finally, the performance consequences, effort, and competition domain consists of five questions on the amount of effort required to perform the task and the extent to which automatic and programmed consequences support task completion (e.g., Does the employee receive feedback about the performance?). Possible interventions based on this domain include increasing supervisor presence, providing feedback, and reducing task effort.

To date, the PDC–HS has been evaluated in two studies. Carr, Wilder, Majdalany, Mathisen, and Strain (2013) used the PDC–HS to evaluate poor upkeep of therapy rooms by staff at a clinic-based treatment facility. The authors found that poor upkeep was due to a lack of training and inadequate delivery of consequences (i.e., the training domain and the performance consequences, effort, and competition domain were identified as problematic). A PDC–HS-indicated package intervention—training and graphed feedback—was effective at increasing performance. The authors also evaluated a non-PDC–HS-indicated intervention with task clarification and increased access to materials and found that it was ineffective at increasing performance.

Ditzian, Wilder, King, and Tanz (2015) used the PDC–HS to examine the variables responsible for infrequent closing of therapy room doors at a clinic-based autism treatment facility. Using a multiple-baseline design across participants, the authors found that the consequences delivered by supervisors were inadequate to support high levels of staff performance. Based on this, a PDC–HS-indicated intervention—graphed feedback—was implemented and produced large increases in performance across participants. The authors also evaluated a nonindicated treatment that consisted of a visual prompt. The nonindicated treatment was ineffective at increasing staff performance.

Although these studies suggest that the PDC–HS may be useful to identify the variables supporting problematic performance, only two of the four PDC–HS domains were indicated in these studies. Interventions based on these domains were effective at increasing performance, but interventions based on the other two domains—task clarification and prompting and resources, materials, and processes—have yet to be examined. Thus, the purpose of the current study is to illustrate the contribution of all four PDC–HS domains to two employee performance problems and to evaluate interventions based on each of these indicated domains. In addition, as in previous PDC–HS studies, we assessed the predictive validity of the tool by evaluating the utility of interventions derived without the benefit of the PDC–HS; that is, we compared arbitrary, nonindicated interventions to indicated interventions for two PDC–HS domains. In Experiment 1, we examined the utility of the PDC–HS to identify the variables contributing to infrequent teaching of verbal operants by therapists at a center-based treatment facility. In Experiment 2, we evaluated the utility of the PDC–HS to identify the variables contributing to irregular use of a timer by therapists when conducting a skill acquisition program at the same facility.

Experiment 1

Method

Participants, Setting, and Materials

Supervisors, staff members, and clients at a university-based clinic for children with autism participated. Supervisors who were Board Certified Behavior Analysts (BCBAs) with at least 3 years of post-BCBA experience served as respondents for the PDC–HS. Therapist–client dyads consisted of an individual therapist who worked with one specific client. The specific therapist and the client with whom she worked remained constant throughout the experiment. Four therapists and clients participated in Experiment 1. Therapists were employees who had worked at the center for at least 3 months and had received basic training in the application of behavior-analytic principles to teach language to young children with autism. They were all women between the ages of 22 and 25, and all had earned a bachelor’s degree. Clients were all boys between the ages of 4 and 8. All clients had a diagnosis of autism and had varying verbal operant repertoires; all were capable of emitting one- to two-word vocalizations.

The study took place at a university-based center for children with autism located in the Southeastern United States. The materials used in the study included a clipboard with a data sheet, a pen, and various toys and objects typically found in a children’s clinic. These toys and objects were used in the context of the presentation of opportunities to emit various verbal operants.

Dependent Variables

Data on three dependent variables were collected during Experiment 1: (a) rate of therapist-presented mand opportunities during a session, (b) rate of therapist-presented tact opportunities during a session, and (c) rate of therapist-presented listener responding opportunities during a session. These dependent variables were selected because manding, tacting, and listener responding were part of the children’s curriculum and therapists were not providing sufficient opportunities for these to occur. A mand opportunity was defined as the therapist withholding an item, interrupting an activity, or blocking access or movement to an activity or item to evoke a response after a child had initially approached or interacted with the activity or item. A tact opportunity was defined as the therapist saying “What is it?” to the client in the presence of a nonverbal stimulus. Although a client’s response was technically under joint tact and intraverbal control in this arrangement, we describe it as a tact for brevity. A listener responding opportunity was defined as the therapist providing a vocal antecedent (instruction) that required the client to engage in the behavior described in the instruction. For example, the therapist might have said “Walk to the door.” The dependent variable was therapist presentation of these mand, tact, and listener responding opportunities; data on client responding (correct or incorrect) were not the focus of the study.

Before each session began, therapists worked with their clients at a small table doing discrete trial teaching (DTT). Sessions were conducted once therapists and clients took a break from DTT. Before the study began, supervisors had instructed therapists to conduct natural environment teaching (NET) during these breaks by focusing on teaching mands, tacts, and appropriate listener responding. During breaks, therapists took clients to the playground. The specific mand, tact, and listener responding opportunities presented by therapists were part of each client’s individualized curriculum. All sessions were 5 min in duration and began immediately after the therapist said “Time for a break” (or some variation of that statement). The session ended 5 min later. The rate per minute of presentation of verbal operant opportunities was calculated by dividing the number of presentations in a given session by 5. For each therapist–client dyad, one to three sessions were conducted per day 3 to 5 days per week. Sessions were conducted over a 12-week period.
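The rate calculation described above can be sketched as a short computation (a minimal illustration; the function name and sample count are ours, not from the study):

```python
def rate_per_minute(presentations, session_minutes=5):
    """Rate of verbal operant opportunities presented per minute.

    Each session in Experiment 1 lasted 5 min, so the rate is the count
    of opportunities presented divided by the session duration.
    """
    return presentations / session_minutes

# e.g., a therapist who presents 4 mand opportunities in one 5-min session
print(rate_per_minute(4))  # 0.8 opportunities per minute
```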

Independent Variables

Four independent variables were evaluated during Experiment 1. Although the PDC–HS lists more than one intervention possibility for each domain (e.g., a task clarification intervention and a prompting intervention for the task clarification and prompting domain), we selected only one intervention from each domain. Specific interventions from each domain were selected based on the answers to questions in each domain (e.g., if respondents reported that the task had been adequately described to employees but no prompts were provided, we selected a prompting intervention).

Graphed feedback, which was the intervention based on the performance consequences, effort, and competition domain of the PDC–HS, was implemented with Therapists 1 and 2. Graphed feedback consisted of the experimenter (a trained graduate student) presenting a line graph depicting individual therapist performance (baseline and intervention) to the therapist at the beginning of each day. The experimenter showed the therapist a graph depicting her performance and briefly discussed the data every three sessions during this phase.

Increased availability of materials, an intervention based on the resources, materials, and processes domain of the PDC–HS, was implemented with Therapists 2 and 3. The materials delivered included items involved in presenting mand and tact opportunities during sessions (e.g., preferred stimuli such as toys for mand opportunity sessions and preference-neutral stimuli such as a key and a hat for tact opportunity sessions). Specifically, a hands-free bag (strapped around the waist) was given to the therapists. The bag contained items for use throughout the session. When the experimenter gave the therapist the bag, he did not mention presenting mand or tact opportunities; the experimenter only noted that the purpose of the hands-free bag was to help make it easier to carry items during sessions.

Task clarification, which was the intervention based on the task clarification and prompting domain of the PDC–HS, was implemented with Therapists 2 and 4. Task clarification consisted of the presentation of information about the relevant type of verbal operant (i.e., mand, tact, or listener responding). For each verbal operant, the experimenter read a brief (one paragraph) description of the operant to the therapist at the beginning of the workday on the first day of this phase. The description included a sentence noting that it was important to provide clients with opportunities to practice the verbal operant at the clinic. However, the experimenter did not model how verbal operant opportunities should be presented and provided no feedback to therapists. The experimenter then gave the task clarification sheet, which included the aforementioned paragraph, to the therapist at the beginning of each workday by placing it on her clipboard in this phase; this sheet was available on the therapist’s clipboard throughout the day.

Behavioral skills training (BST), which was the intervention based on the training domain of the PDC–HS, was implemented with Therapist 3. BST included four components: instruction, modeling, rehearsal, and feedback. The instruction component included a description of the purpose of presenting mand and/or tact opportunities to children with autism and describing the procedures for presenting these opportunities to children. The modeling component involved the trainer demonstrating how to present mand and/or tact opportunities to children with autism. The rehearsal component involved the trainee demonstrating how to present mand and/or tact opportunities to children with autism. The feedback component consisted of the experimenter presenting information about how the trainee performed during the rehearsal component. The criterion for mastery was 80% correct responses across ten consecutive opportunities. If the trainee did not reach the mastery criterion, the experimenter provided feedback on the steps that were missed and gave the trainee another ten opportunities to present mand and/or tact opportunities. Following the implementation of the BST package at the start of this phase, no additional feedback was provided to the therapist. That is, the feedback component was only presented in the context of the training and did not continue during the remainder of the phase.

Various experimental designs were used to evaluate the effect of the PDC–HS-indicated and nonindicated interventions on therapist presentation of verbal operant opportunities. A multiple-baseline, across-participants (therapists) design with an embedded withdrawal was used for Therapists 1 and 2. A multiple-baseline, across-verbal operants design with an embedded withdrawal was used for Therapist 3. A multiple-baseline, across-verbal operants design was used for Therapist 4.

Procedure

Baseline data on therapist presentation of verbal operant opportunities with specific clients were first collected by trained graduate students. As with the intervention data, baseline data were collected during 5-min sessions in which NET was supposed to be conducted. Data collectors discreetly observed therapists during these breaks. Next, two supervisors for each therapist–client dyad were interviewed separately in small conference rooms by the second and third authors using the PDC–HS. The PDC–HS does not require interviewing two supervisors; we interviewed two in order to obtain an informal measure of interrater reliability. Each interview took approximately 20 min. The clinic assigned one senior-level supervisor and one junior-level supervisor to cases for training purposes. Interviews were recorded for interobserver agreement (IOA) purposes. For PDC–HS items that involved direct observation, the second or third author discreetly observed the therapist and recorded the relevant information. After the interview, the second author analyzed both supervisors’ responses and calculated the mean score for each PDC–HS domain. For each supervisor, a domain score was calculated by dividing the number of questions answered “no” by the total number of questions in that domain and multiplying by 100; the two supervisors’ scores were then averaged to yield the domain mean. The domain with the highest mean score was deemed the most relevant domain, and an intervention based on that domain (i.e., a PDC–HS-indicated intervention) was then implemented. For Therapists 2 and 3, non-PDC–HS-indicated interventions were also evaluated; the effectiveness of these nonindicated interventions was compared to that of indicated interventions. Because of time constraints, Therapists 1 and 4 were not exposed to nonindicated interventions.
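The scoring procedure can be sketched as follows (a minimal illustration; the function name and the sample answers are hypothetical, not the study’s data):

```python
def domain_scores(answers):
    """Mean percentage of 'no' answers per PDC-HS domain.

    `answers` maps each domain to one list of 'yes'/'no' responses per
    supervisor. Each supervisor's score is the percentage of questions in
    the domain answered 'no'; the domain mean averages across supervisors.
    """
    scores = {}
    for domain, respondents in answers.items():
        per_supervisor = [100 * r.count("no") / len(r) for r in respondents]
        scores[domain] = sum(per_supervisor) / len(per_supervisor)
    return scores

# Hypothetical answers from two supervisors (4 questions in the training
# domain, 5 in task clarification and prompting)
answers = {
    "Training": [["no", "yes", "yes", "yes"], ["yes", "yes", "yes", "yes"]],
    "Task clarification and prompting": [["yes"] * 5, ["no"] + ["yes"] * 4],
}
scores = domain_scores(answers)
print(scores)  # {'Training': 12.5, 'Task clarification and prompting': 10.0}

# The domain with the highest mean score indicates the intervention
indicated = max(scores, key=scores.get)
```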

IOA and Treatment Integrity

Data on IOA were collected for the PDC–HS administration; a second rater scored the PDC–HS using the recorded interviews. IOA was 100% for all administrations of the PDC–HS. IOA data were also collected for all four therapists. A second observer discreetly observed the number of verbal operant opportunities presented by the therapist to the client in each dyad during 50, 61, 67, and 53% of all sessions for Therapists 1, 2, 3, and 4, respectively. IOA data were calculated by dividing the smaller number of opportunity presentations by the larger number of opportunity presentations and multiplying by 100. IOA for Therapists 1 and 2 was 100%. IOA for Therapist 3 was 96% (range of 92 to 100%), and IOA for Therapist 4 was 94% (range of 90 to 100%).
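This total-count agreement computation might be sketched as (function names are ours; the sample counts are illustrative):

```python
def session_ioa(count_a, count_b):
    """Total-count IOA for one session: smaller count / larger count * 100."""
    if count_a == count_b:  # covers the 0-vs-0 case without dividing by zero
        return 100.0
    smaller, larger = sorted((count_a, count_b))
    return 100 * smaller / larger

def mean_ioa(session_pairs):
    """Mean IOA across sessions observed by a second observer."""
    return sum(session_ioa(a, b) for a, b in session_pairs) / len(session_pairs)

# e.g., the two observers count 23 and 25 opportunities in one session
print(session_ioa(23, 25))  # 92.0
```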

Treatment integrity data were also collected during 56, 50, 54, and 63% of intervention sessions for Therapists 1, 2, 3, and 4, respectively. The experimenter and a second observer noted whether or not the experimenter provided graphed feedback, increased availability of materials, task clarification, or training each time it was scheduled to be implemented. Integrity data were calculated by dividing the smaller number of independent variable implementations by the larger number of independent variable implementations and multiplying by 100. Treatment integrity was 100% across implementation of all independent variables.

Results

PDC–HS Results

Figure 1 depicts the PDC–HS results for all dyads. The PDC–HS was administered to two BCBA respondents (i.e., supervisors) for each dyad. For Dyad 1 (upper panel), the PDC–HS identified the highest level of deficits in the performance consequences, effort, and competition domain. A mean of 40% of questions in this domain suggested a problem. In contrast, a mean of 12.5% of questions in the training domain, a mean of 10% of questions in the task clarification and prompting domain, and a mean of 0% of questions in the resources, materials, and processes domain suggested a problem. The direct observations (prompted by a few questions on the PDC–HS) were consistent with supervisor responses to questions.

Fig. 1 Performance Diagnostic Checklist—Human Services (PDC–HS) results for each therapist in the four dyads across Supervisors 1 and 2 in Experiment 1

For Dyad 2 (upper middle panel), the performance consequences, effort, and competition domain was also identified as being potentially responsible for performance problems. A mean of 50% of questions in this domain suggested a problem. A mean of 40% of questions in the task clarification and prompting domain suggested a problem, a mean of 25% of questions in the training domain suggested a problem, and a mean of 25% of questions in the resources, materials, and processes domain suggested a problem. Because the same domain was identified as problematic for both Dyads 1 and 2, graphed feedback was selected as an intervention for these employees. The direct observations (prompted by a few questions on the PDC–HS) were consistent with supervisor responses to questions.

The PDC–HS identified a lack of training as contributing to poor performance for Dyad 3 (lower middle panel). A mean of 75% of questions in this domain suggested a problem. A mean of 45% of questions in each of the resources, materials, and processes domain and the performance consequences, effort, and competition domain suggested a problem. A mean of 20% of questions in the task clarification and prompting domain suggested a problem. Training was implemented as an intervention for Dyad 3. The direct observations (prompted by a few questions on the PDC–HS) were consistent with supervisor responses to questions.

For Dyad 4 (lower panel), the PDC–HS identified the task clarification and prompting domain as contributing most to the performance problem. A mean of 40% of questions in this domain were identified as problematic. A mean of 30% of questions in the performance consequences, effort, and competition domain, a mean of 12.5% of questions in the training domain, and a mean of 10% of questions in the resources, materials, and processes domain suggested a problem. Task clarification was implemented as an intervention for Dyad 4. The direct observations (prompted by a few questions on the PDC–HS) were consistent with supervisor responses to questions.

Treatment Evaluation

Figure 2 depicts the treatment evaluation for Dyads 1 and 2. During baseline, the therapist in Dyad 1 (upper panel) delivered a mean of 0.01 mand opportunities per minute. The therapist in Dyad 2 (lower panel) delivered no mand opportunities. Therapist 2 was then exposed to two non-PDC–HS-indicated interventions—increased availability of materials and task clarification—in separate phases, with baseline phases separating the introduction of these interventions. Her rate of mand opportunity presentation remained at zero across both nonindicated intervention phases and increased only slightly, to 0.13/min, during the final baseline phase. During the graphed feedback intervention, which was indicated by PDC–HS results, mand opportunities per minute for Therapist 1 increased to 1.02/min. Mand opportunities per minute for the therapist in Dyad 2 increased to 0.78/min.

Fig. 2 Therapist rate of mand opportunities presented per minute across baseline, graphed feedback (PDC–HS-indicated intervention), increased availability of materials, and task clarification (non-PDC–HS-indicated intervention) phases for Dyads 1 and 2 in Experiment 1. PDC–HS Performance Diagnostic Checklist—Human Services, BL baseline, TC task clarification

Figure 3 depicts the treatment evaluation for Dyad 3. The upper panel depicts presentation of mand opportunities per minute, and the lower panel depicts presentation of tact opportunities per minute. During baseline, the therapist in Dyad 3 delivered a mean of 0.02 mand opportunities per minute; she delivered no tact opportunities. Therapist 3 was then exposed to a nonindicated intervention—increased availability of materials—across teaching of both mands and tacts. This intervention resulted in the delivery of no mand or tact opportunities. BST, a PDC–HS-indicated intervention, was then conducted for mands, and after a brief return to the baseline phase in which Therapist 3 presented no tact opportunities, BST was implemented for tacts. During BST, the therapist’s rate of mand opportunities increased to 0.68/min. Her rate of tact opportunities increased to 0.92/min.

Fig. 3 Therapist rate of mand and tact opportunities presented per minute across baseline, increased availability of materials (non-PDC–HS-indicated intervention), and behavioral skills training (PDC–HS-indicated intervention) phases for Dyad 3 in Experiment 1. PDC–HS Performance Diagnostic Checklist—Human Services, BL baseline

Figure 4 depicts the treatment evaluation for the therapist in Dyad 4. The upper panel depicts therapist presentation of mand opportunities per minute, the middle panel depicts presentation of tact opportunities per minute, and the lower panel depicts presentation of opportunities for listener responding per minute. During baseline, Therapist 4 presented a mean of 0.04 mand opportunities per minute. She presented no tact opportunities and a mean of 0.01 opportunities for listener responding per minute. During task clarification, a PDC–HS-indicated intervention, Therapist 4 presented a mean of 0.78 mand opportunities per minute, a mean of 1 tact opportunity per minute, and a mean of 1 listener responding opportunity per minute.

Fig. 4 Therapist rate of mand, tact, and listener responding opportunities presented per minute across baseline and task clarification (PDC–HS-indicated intervention) phases for Dyad 4 in Experiment 1. PDC–HS Performance Diagnostic Checklist—Human Services

Discussion

The PDC–HS was used to identify the environmental variables contributing to poor performance by therapists teaching verbal operants during breaks from DTT in a center-based autism treatment program. PDC–HS-indicated interventions were then evaluated for each of the therapists in the four dyads. In addition, non-PDC–HS-indicated interventions were implemented for two of the four therapists. The results show that the PDC–HS-indicated interventions were effective at increasing the rate of mand, tact, and listener responding opportunities delivered by therapists in all dyads. In addition, all non-PDC–HS-indicated interventions were ineffective at increasing the presentations of verbal operant opportunities by therapists. These data suggest that the three PDC–HS domains identified in Experiment 1 have some degree of predictive validity.

Although three of the four PDC–HS domains were identified as contributing to performance problems in Experiment 1, one domain—resources, materials, and processes—was not identified. In addition, this domain has not been identified as contributing to poor performance in previous research. Thus, the purpose of Experiment 2 was to illustrate the use of the PDC–HS to identify a performance problem (irregular use of a timer by therapists when conducting a skill acquisition program) attributable to the resources, materials, and processes domain. An intervention based on PDC–HS results was also evaluated.

Experiment 2

Method

Participants, Setting, and Materials

Two supervisors, two staff members (one assigned to each of two shifts: morning and afternoon), and one client participated in Experiment 2. This resulted in two shift–client dyads. Shift–client dyads consisted of a therapist who was part of either the morning (AM) or the afternoon (PM) shift and worked with a specific client. The client with whom he or she worked was the same person across both shifts and all sessions. The therapists’ shifts never overlapped; each worked either mornings or afternoons. None of the supervisors, therapists, or clients in Experiment 1 participated in Experiment 2. The supervisors were two BCBAs responsible for overseeing the implementation of the client’s programming. The client attended the same university-based clinic for children with autism described for Experiment 1; Experiment 2 was also conducted at this site. The materials for Experiment 2 included a clipboard with a data sheet and a MotivAider (a digital timer that can be programmed to make a sound at various intervals), which was used during the intervention.

Dependent Variable

The dependent variable in Experiment 2 was the percentage of opportunities in which a MotivAider was used during a specific DTT program. The DTT program focused on teaching the client to make eye contact with the therapist; the MotivAider was used to prompt the therapist to check client eye contact. An opportunity was defined as any instance in which the DTT program was conducted. An instance of the program being conducted correctly was defined as the program being run with the use of the MotivAider. If the MotivAider was not used, the program was implemented incorrectly. Both therapists had been previously informed by a supervisor that they should always use the MotivAider when running this program. The percentage of opportunities in which a MotivAider was used during the eye contact program was calculated by dividing the number of times a MotivAider was used during implementation of the program by the number of times the eye contact program was run during that shift (AM or PM) and multiplying by 100.

Independent Variable

The independent variable in Experiment 2 was increased availability of materials, which in this case was a MotivAider device. This intervention was based on the resources, materials, and processes domain of the PDC–HS. A MotivAider was placed on the therapist’s clipboard at the beginning of each session, specifically for use during the eye contact program. A multiple-baseline, across-therapists (AM and PM) design was used to evaluate the effects of the intervention.

Procedure

Baseline data were collected on therapist use of a MotivAider during the eye contact program. A trained graduate student discreetly recorded whether the therapists implemented the program using the MotivAider during each shift. After baseline data collection, the second author administered the PDC–HS to two supervisors. The PDC–HS was administered and the data were analyzed in a manner identical to that in Experiment 1. For PDC–HS items that involved direct observation, the second or third author discreetly observed the therapist and recorded the relevant information. Based on the results of the PDC–HS, increased availability of materials (i.e., the MotivAider) was implemented. That is, before each session, a MotivAider was placed on the clipboard on which data sheets were kept. Intervention data on therapist use of the MotivAider were then collected.

IOA and Treatment Integrity

IOA data were collected for PDC–HS administrations. A second rater scored the PDC–HS from a recorded interview. IOA data were calculated in a manner identical to that in Experiment 1; mean IOA was 100% for all administrations of the PDC–HS. IOA data were also collected for both therapists (AM and PM). A second observer independently recorded whether the program was implemented with the MotivAider during 33% of all sessions. IOA data were calculated in a manner identical to that in Experiment 1; mean IOA was 100%.

Treatment integrity data (i.e., the presence of the MotivAider on the clipboard at the start of the session) were collected during implementation of the intervention. Treatment integrity was calculated in a manner identical to that in Experiment 1; mean integrity was 100% across all sessions.

Results

PDC–HS Results

The upper panel of Fig. 5 depicts the PDC–HS results for Experiment 2. The PDC–HS identified the highest level of deficits in the resources, materials, and processes domain. A mean of 66.5% of questions in this domain suggested a problem. In contrast, a mean of 40% of questions in the performance consequences, effort, and competition domain, a mean of 12.5% of questions in the training domain, and a mean of 20% of questions in the task clarification and prompting domain suggested a problem.

Fig. 5 Performance Diagnostic Checklist—Human Services (PDC–HS) results for the therapist across Supervisors 1 and 2 in Experiment 2 (upper panel). Percentage of opportunities in which the therapist conducted the program during baseline and increased availability of materials (PDC–HS-indicated intervention) phases in Experiment 2 (lower panel)

Treatment Evaluation

The lower panel of Fig. 5 depicts the treatment evaluation for Experiment 2. During baseline, neither the morning (AM) therapist (upper panel) nor the afternoon (PM) therapist (lower panel) implemented the program correctly (i.e., with the use of the MotivAider). When the availability of materials was increased (a PDC–HS-indicated intervention), the program was conducted correctly during every session by both the morning and afternoon therapists.

Discussion

The PDC–HS results of Experiment 2 illustrate that a lack of resources or materials was responsible for the targeted performance deficit (i.e., failure to use a timer when running a DTT program for children with autism). The PDC–HS-indicated intervention (increased availability of materials) was effective at increasing performance across both morning and afternoon therapists at the center. These data represent the first example of the PDC–HS identifying a lack of resources or materials as responsible for a performance deficit and further support the utility of the PDC–HS as a tool to identify the variables contributing to performance problems in human service settings.

General Discussion

In Experiment 1, PDC–HS-indicated interventions were effective at increasing the teaching of verbal operants by therapists at a clinic for children with autism. Interventions based on three PDC–HS domains were effective; nonindicated interventions were ineffective. In Experiment 2, an intervention based on the resources, materials, and processes domain, the one domain not evaluated in Experiment 1, was effective at increasing correct program implementation across two therapists at the same clinic. These data suggest that the tool is useful for identifying interventions to improve staff performance in human service agencies.

This study advances the literature on the PDC–HS and, more broadly, on informant assessment tools in organizational behavior management (OBM) by identifying performance problems in all four PDC–HS domains and demonstrating the effectiveness of an intervention based on each of these domains. Previous research (Carr et al., 2013; Ditzian et al., 2015) had illustrated the utility of PDC–HS interventions for only two of the four domains. In addition, this study provides further evidence of the predictive validity of the PDC–HS: non-PDC–HS-indicated interventions may be less likely to be effective than PDC–HS-indicated interventions. This study also introduces two additional dependent variables that are common in human service settings (therapist teaching of verbal operants and therapist use of a timer during a skill acquisition program) for which the PDC–HS can be useful.

This study is not without limitations. First, for each therapist, scores in multiple domains of the PDC–HS were elevated; we selected and targeted the single most elevated domain. It is possible that a package intervention based on more than one domain would have been more effective at improving performance. However, the present study provides a clear evaluation of the effects of a single indicated intervention on performance. Given that OBM interventions often employ multiple components, which can prevent identification of the most effective component, a single-component intervention seems like a good starting point for evaluating PDC–HS-indicated interventions; indeed, previous research (Carr et al., 2013) has suggested the evaluation of single-component procedures. Nevertheless, future researchers should consider comparing a package intervention to a single-component intervention to address performance targets.

Second, in some cases (e.g., Therapist 2; listener responding for Therapist 4), we had to terminate the intervention before the data became stable. Future research should continue intervention phases until a stability criterion is met. Third, although the non-PDC–HS-indicated interventions we evaluated were ineffective, other nonindicated interventions may be as effective as the PDC–HS-indicated interventions employed in this study. For example, a feedback and goal-setting intervention might have been as effective as, or even more effective than, the indicated interventions we evaluated. Future research should compare other nonindicated interventions to indicated interventions in terms of both effectiveness and social validity. Fourth, the task clarification intervention implemented with Dyad 4 produced the least robust effects: the rate of presentation of two verbal operants appeared to be decreasing toward the end of the intervention phase. Future researchers might consider implementing a booster session once a decreasing trend becomes evident.

Finally, we did not assess the effect of changed therapist performance on child behavior. It would have been interesting to assess the extent to which the children’s verbal behavior changed when the indicated intervention was implemented. Although the intervention was implemented for only a short time, it is possible that the change in therapist performance affected child behavior. Future PDC–HS research should evaluate interventions over an extended period and examine child behavior.

Now that research has demonstrated the utility of all four PDC–HS domains, future research should examine the reliability and content validity of the PDC–HS. Both test–retest and interrater reliability should be examined. The content validity of the tool might be assessed by preparing a video-based scenario depicting a performance problem attributable to a specific PDC–HS domain. Participants could complete the PDC–HS after watching the video; if their conclusions regarding the variables contributing to the performance problem matched the domain implicated in the scenario, the validity of the tool would be supported.

Future research should also more closely examine the direct observation components of the PDC–HS and their utility relative to the tool’s informant items. Because many performance problems are behavioral deficits, which can be difficult to observe directly, the direct observation components may be less useful. Other variations of direct observation (e.g., analog-style testing) might be incorporated into future versions of the PDC–HS. Future research should also examine the PDC–HS in other human service settings (e.g., schools, group homes, residential facilities), as well as other performance targets, such as absenteeism, tardiness, timely submission of billing sheets, and the integrity of intervention procedures. Finally, a large-scale evaluation of the PDC–HS (e.g., 10 or more supervisors responding about performance problems exhibited by 30 or more direct care staff members) in an institutional setting might provide data on how common the various environmental determinants of performance problems are among direct care staff in human service settings.

Compliance with Ethical Standards

The authors have complied with all ethical standards. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.

Conflict of Interest

The authors declare that they have no conflicts of interest.

References

  1. Austin J. Performance analysis and performance diagnostics. In: Austin J, Carr JE, editors. Handbook of applied behavior analysis. Reno, NV: Context Press; 2000. pp. 321–349.
  2. Austin J, Weatherly NL, Gravina NE. Using task clarification, graphic feedback, and verbal feedback to increase closing-task completion in a privately owned restaurant. Journal of Applied Behavior Analysis. 2005;38:117–120. doi: 10.1901/jaba.2005.159-03.
  3. Carr JE, Wilder DA. The Performance Diagnostic Checklist—Human Services: a correction. Behavior Analysis in Practice. 2016;9:63. doi: 10.1007/s40617-015-0099-3.
  4. Carr JE, Wilder DA, Majdalany L, Mathisen D, Strain L. An assessment-based solution to a human-service employee performance problem: an initial evaluation of the Performance Diagnostic Checklist–Human Services. Behavior Analysis in Practice. 2013;6:16–32. doi: 10.1007/BF03391789.
  5. Ditzian K, Wilder D, King A, Tanz J. An evaluation of the Performance Diagnostic Checklist–Human Services to assess an employee performance problem in a center-based autism treatment facility. Journal of Applied Behavior Analysis. 2015;48:199–203. doi: 10.1002/jaba.171.
  6. Eikenhout N, Austin J. Using goals, feedback, reinforcement, and a performance matrix to improve customer service in a large department store. Journal of Organizational Behavior Management. 2005;24:27–64. doi: 10.1300/J075v24n03_02.
  7. Rodriguez M, Wilder DA, Therrien K, Wine B, Miranti R, Daratany K, Rodriguez M. Use of the performance diagnostic checklist to select an intervention designed to increase the offering of promotional stamps at two sites of a restaurant franchise. Journal of Organizational Behavior Management. 2005;25:17–35. doi: 10.1300/J075v25n03_02.
  8. Rohn D, Austin J, Lutrey S. Using feedback and performance accountability to decrease cash register shortages. Journal of Organizational Behavior Management. 2003;22:33–46. doi: 10.1300/J075v22n01_03.

Articles from Behavior Analysis in Practice are provided here courtesy of Association for Behavior Analysis International
