Abstract
Direct Instruction (DI) teaches challenging academic content to a range of diverse learners. In order to do so, DI includes a complex system for organizing and directing teacher–student interactions to maximize learning. This system includes: instructional formats that specify the interactions between teacher and student, flexible skills-based groupings, active student responding, responsive interactions between students and teachers, ongoing data-based decision making, and mastery teaching. In this article, we describe each of these main features of the system, define their functions, reveal how they are interwoven throughout all DI lessons, and provide specific examples of their application during instruction. Our goal is to describe and clarify critical features of DI lesson presentation and teacher–student interaction so that instructional designers, teachers, and other practitioners can use existing DI programs effectively and include these features in newly developed programs.
Keywords: Direct Instruction, instructional design, teacher, student interactions, teaching
Direct Instruction (DI) sets for itself the ambitious goal of effectively and efficiently teaching challenging academic content to a wide range of learners including typically developing students and those with significant disabilities. This requires attention to numerous aspects of the design and process of instruction. The design of instruction includes content analysis to find powerful generative relations that can be taught (Slocum & Rolf, this issue), analysis of concepts to develop sets of examples and nonexamples that demonstrate their range and limits (Johnson, this issue), development of faultless communication through juxtaposition of examples and nonexamples to ensure that communication is logically consistent with one, and only one, interpretation (Twyman, this issue), and other aspects of instructional program development such as sequencing and interconnection among topics (Watkins & Slocum, 2004). But even the most carefully and elegantly designed program cannot change student behavior without powerful systems for teacher–student interactions. An instructional program, at best, represents potential. For its potential to be realized, the instructional program must influence the actual interactions that take place between teachers and students.
DI employs a complex system of interactions between teachers and students. Instructional formats are central to this system; they provide the structure of activities that occur during a lesson. They include highly specific directions to the teacher that specify the question–answer sequences between teacher and students. Instructional formats are the building blocks with which lessons are constructed. A format is applied to multiple items (e.g., words to be read, mathematics problems to be solved); together, the format and the specific items comprise an exercise. A DI lesson consists of multiple exercises. Throughout a program, the formats within each lesson are thoughtfully sequenced and developed to bring students to mastery on each skill. DI lessons are designed to be delivered to homogeneous skills-based groups of students using frequent, active student responding. However, they can also be easily adapted to working with clients individually in homes or clinics, which is a common service delivery context for many behavior analysts. DI includes multiple methods for frequent assessment of learning and means for teachers to respond to the needs of individual students through ongoing data-based decision making and ensuring mastery learning. The teaching scripts found in DI programs can obscure some of these critical features. Those who are unfamiliar with DI programs may not discern the elegance and power of instructional design elements that are not superficially obvious when examining scripts. Throughout this article we will describe how DI employs each of these critical features to shape teacher and student behavior in a way that optimizes student learning.
Formats
Clear Communication
The basic unit of instruction in DI programs is the format—a series of teacher–student exchanges that lead to a complex correct response. Formats have several important functions. First, they are a means of clear communication from the instructional designer to the teacher by specifying teacher wording and other critical behaviors. They provide teacher wording that is carefully crafted and extensively field tested to ensure that it is effective and does not add artificial difficulty to instructional tasks. The difficulty of an instructional task should come from the target skills, not from unfamiliar vocabulary, complex sentences, or other demands that are not inherent in the skill that is being taught. Without clearly defined wording, teachers often add a great deal of language that is not helpful for learning the task and may confuse students. The carefully controlled language specified in formats is particularly important for learners with language-related disabilities and students whose first language is not English, but is helpful for any learner. Carefully controlled language also helps promote appropriate stimulus control by eliminating spurious cues that may gain control of student responding.
Figure 1 shows an example of clear communication found in a DI format to teach students to read words that end with the vowel-consonant-e pattern and discriminate them from those that end with the vowel-consonant pattern (e.g., hope vs. hop). Prior to this exercise, students would have already learned (1) to read words with the basic consonant-vowel-consonant pattern (e.g., sat, dog, pin); (2) to say both the “sound” (i.e., short sound) and the “name” (i.e., long sound) for each vowel under discriminative control; and (3) to say the relevant rule. The rule for making this discrimination and reading these words is given in Fig. 1, Version 1, Step 1: “When there is an ‘e’ on the end, this letter [point to medial vowel] says its name.” The wording of this rule was developed using logic and empirical evidence in order to make the discrimination and the proper response clear while minimizing irrelevant prerequisite skills and vocabulary. The key feature of the stimulus, “When there is an ‘e’ on the end,” does not require students to discriminate consonants and vowels. Likewise, saying “this letter” and pointing to the relevant letter precludes the requirement for complex wording that describes the relevant letter. Empirical evidence has shown that students do not make errors related to identifying which letter is relevant. In addition, by specifying that the letter “says its name” (rather than its “sound”), the wording indicates the long vowel sound without requiring students to learn the terms “long vowel” and “short vowel.” Most important, this wording has been extensively field tested and found to be highly effective—students learn to make the discrimination with minimal errors. Of course, many students would not be able to apply this rule without support. Steps 2–5 of Version 1 in Fig. 1 prompt the application of each component of the rule.
Fig. 1.
Language of Instruction
To achieve clear communication, formats also specify the antecedent stimuli and form of student response. This is critical for teaching appropriate stimulus control over responses. In each of the four responses specified in Fig. 1, Version 1, the response is under the joint control of the teacher’s question and the relevant letters in the word. The first student response is a yes/no response form, but the other three responses are specific to the content. A general feature of DI formats is preference for response forms that are content-specific rather than yes/no. For example, when teaching the concept red, a student would be presented with an item (i.e., example or nonexample) and would be asked to say “red” or “not red” rather than “yes” or “no.” A yes/no task may require critical discrimination, but does not require a response that is specific to the discrimination. That is, in the yes/no task, the learner practices saying “yes” in the presence of red objects; in the task with the content-specific response, the learner practices saying “red” in the presence of red objects—a more relevant response. Using the first type of response may exacerbate the well-known problem of students’ receptive vocabulary (i.e., discrimination) outstripping their productive vocabulary (i.e., tacting). Returning to Fig. 1, Version 1, on Step 3 the students respond by saying either “sound” or “name” for the medial vowel. This prepares them for Step 4 in which they say the sound or name for that vowel. That response leaves them well-prepared to read the word correctly. As with teacher language, student responses are simplified to their instructionally relevant essentials in order to reduce irrelevant demands on the student.
Reducing Scaffolding
A second function of formats is to carefully fade supports by systematically reducing scaffolding across lessons. Scaffolding must provide substantial support to produce accurate responding to newly introduced complex situations. When first introduced, formats often contain multiple, specific questions and statements designed to draw students’ attention to the critical features of a concept or task (see Fig. 1, Version 1). As students master these component responses, the formats reduce the amount of support provided to the students by the instructor, systematically increasing student independence over time.
The progression in Fig. 1 from Version 1 through Version 4 illustrates this process of gradually and systematically reducing scaffolding across lessons. Version 1 is the initial, fully scaffolded version used to introduce application of the rule for reading words that end with the vowel-consonant-e pattern. This is an early reading lesson, and many steps are required to support students to accurately read the words. Version 2 of Fig. 1 presents a later iteration of the format. In this version, the instructor no longer reminds the students of the rule for reading these words nor asks the students if the medial vowel will say its name or its sound. It is typical that, after only a few days, students master the rule for words following this pattern and no longer need the teacher to remind them of the rule or support their application of it. Version 3 presents the next iteration of the format in which the teacher no longer prompts the students to attend to the presence of an “e” at the end of the word. Instead, the teacher presents the word to be read, asks the students to verbalize the medial vowel sound, and prompts the students to read the word. Version 4 presents the final iteration of the format. In this version, the teacher merely prompts the students to read the word. Subsequent to Version 4 of the format, students will encounter words following this pattern in lists mixed with a wide variety of other previously mastered word-types and then in prose passages.
DI programs do not present formats to teachers in the outline form seen in Fig. 1; instead, they are presented as scripts. Scripts integrate the items into the format and may call for changes in the format from item to item. Scripts allow for great specification and subtlety in providing and systematically reducing scaffolding. For example, a more heavily prompted version of a format may be applied to an entire set of items in an exercise, then a less heavily prompted version may be applied to that same set of items in that same exercise. In another variation, heavier prompting may be applied to the first item, with prompting reduced for later items. Full scripting gives program designers extremely fine-grained control over format-related details, such as examples and nonexamples and juxtaposition of items. Unfortunately, complex scripts can also obscure the elegant underlying structure of lessons, making it difficult for teachers and others to detect the critical features of DI lessons.
Flexible Skills-Based Groups
DI is designed to be used in the context of school classrooms where academic content is taught. As such, it must address the issue of teaching groups of students. DI programs are designed to be implemented with flexible skills-based groups of 2–12 students, although individual presentation and groups larger than 12 are possible depending on circumstances. The term skills-based groups indicates that groups are formed of students whose skills in the relevant content area are as homogeneous as possible—in other words, students with highly similar instructional needs are grouped together. This supports both the effectiveness of teaching each student at their instructional level and the efficiency of teaching multiple students simultaneously. A great deal of emphasis is placed on carefully assessing students prior to beginning instruction to create appropriately homogeneous groups. In reading, for example, flexible skills-based groups of students would be formed based on program-specific placement tests that are designed to evaluate the specific skills necessary to succeed at a specified level in a program. Placement would also consider each student’s performance in previous reading programs and other indicators of likely rate of learning. (See below for more information on placement testing.)
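If this grouping step were automated—say, in a computer-delivered implementation—the logic could be sketched as follows. The function name, the score representation, and the chunking strategy are our own illustrative assumptions, not features of any published DI program.

```python
# Hypothetical sketch: forming skills-based groups from placement scores.
# Sorting by score places students with the most similar skills adjacent,
# and chunking respects the group-size range of 2-12 described above.

def form_groups(scores, max_size=12):
    """scores maps student name -> placement score; returns lists of names."""
    ranked = sorted(scores, key=scores.get)  # similar skills end up adjacent
    return [ranked[i:i + max_size] for i in range(0, len(ranked), max_size)]
```

In practice, of course, placement also weighs prior program performance and likely rate of learning, which a single test score cannot capture.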
In general, group size depends on the intensity of the students’ needs and the likelihood of needing to modify the programs to support student learning. Most students without disabilities are likely to succeed in DI programs even if the match to their instructional level is only approximate, and they are likely to proceed successfully through the program with relatively few adjustments. These students can be taught in relatively larger groups to increase efficiency. Other students—especially those with disabilities—are likely to require more precise tuning of placement and more extensive decision making to succeed. These students benefit from being taught in smaller DI groups. Complete individualization (i.e., one-to-one instruction) is not necessarily ideal, even for students with intensive needs. Additional students can provide beneficial models, and small DI groups may support students’ development of social skills. This is especially true when students who receive special education services are transitioning from more to less restrictive environments. For example, students who received primarily one-to-one instruction may benefit from receiving instruction in a structured small group before receiving instruction in larger groups and less structured settings. The student can learn the behaviors that are expected in most classrooms while in the small group setting.
Optimal grouping of students at one point in the academic year does not imply that this same grouping will be optimal weeks or months later. Students progress at different rates, and group membership should not inhibit growth nor require a student to move at a pace that does not support mastery of all skills. Therefore, DI groups must be flexible. DI teachers and support staff must collect relevant data and actively reconsider appropriate grouping frequently. We will elaborate on creating and maintaining optimal groups in the discussion of data-based decision making below.
Responsive Interactions
Although DI programs are highly specified through formats and scripts, they are extremely flexible and responsive to individual student needs. No two groups have exactly the same experience in completing a given program. Further, because of targeted individual turns and supplemental supports, no two students within the same group have the same experience. In the following sections, we will describe the main ways that DI programs facilitate responding to the unique needs of instructional groups and individual students.
Active Student Responding
DI programs include frequent, active student responding throughout every lesson. This feature has two main functions: (1) enhancing learning directly through active practice of relevant tasks and (2) providing the teacher with highly relevant moment-to-moment information about students’ skills. The form of response depends on the students’ skills, content relevance, and lesson efficiency. As we described above, it is important to minimize instructionally irrelevant demands on the students that could interfere with their performance of instructionally relevant tasks. Student responses should be of a form that is fluent for the students (unless, of course, the form of response is the target of instruction). DI programs frequently use vocal responses because many students are fluent with this type of response. However, if vocal responses are problematic—as with preschoolers with language-related disabilities—pointing, touching, or other response forms could be used while vocal responses are developed. Response forms must also match the content of instruction. For example, during early reading instruction, vocal responding by reading words in passages is highly relevant. But in higher level reading comprehension and written composition, silent reading and written responding are more relevant. Lesson efficiency and responsive feedback are enhanced through response forms that students can produce rapidly and teachers can discriminate quickly. Because most DI programs are designed to teach academic content to elementary-age students, short vocal responding is the predominant, but by no means exclusive, response form.
DI is designed to be delivered to groups of students, which makes individual vocal responses challenging for three main reasons. First, when one student answers individually, all other students are unlikely to actively respond. In fact, teachers usually expect other students to remain silent when calling on one individual. The number of opportunities for each student to respond is diminished as a function of group size—in a group of 10 students, each student can respond to only 1/10 of the total response opportunities. Second, when one student responds, other students can hear that response. If the same question is asked of a second student, their response may be partially (or fully) under echoic control—that is, they simply respond to what the previous student said rather than the content-relevant question. Other shifts of stimulus control can occur with repetition of small sets of items with the same result—the second student may not respond under the same control as the first student. Third, as a result of the first two limitations, the teacher obtains information on each student only rarely, and the responses they are judging may not represent the targeted skill. The teacher’s information about student performance is greatly compromised.
To avoid these issues and enhance the potential efficiency of teaching groups of students, DI programs frequently use group unison vocal responses. If all students respond vocally in precise unison, each student responds to every question, each student must respond to the teacher-presented stimulus rather than to other students’ responses, and the teacher can assess each student’s accuracy on each item. Thus, group unison vocal responses are a tremendously efficient and powerful instructional tool when used skillfully with students grouped homogeneously according to their current skills. In order to produce effective unison responses, the questions and prompts presented by a teacher must be crafted so that all students use exactly the same words in their responses. Figure 1 shows how formats enable group unison vocal responses by providing teacher questions and prompts that produce the same response for all students (assuming mastery of the previous lessons).
In addition to carefully worded questions, producing group unison vocal responses also requires a means of cueing students to answer at the same moment. If vocal responses are not in precise unison, they are much less useful. Students who answer a fraction of a second later than the others, for example, may be partially controlled by hearing the first part of other students’ responses. Over time, students may gain extraordinary skill at responding based on other students’ responses rather than their own skills in the targeted content area. Also, if vocal responses are not in unison, a single error is not as easily detectable by the teacher. When oral responses are given in unison, a skilled teacher can reliably detect a single error in a group response.
DI programs call for a variety of signals to cue group unison vocal responses depending on the nature of the stimuli and responses. If the students are expected to read words from a whiteboard, the teacher would use a signal that draws the students’ attention to the critical stimulus, such as pointing to the word. If students were reading those same words from a textbook, the teacher would use an auditory signal that would allow the students to continue looking at the book, such as a finger snap or a tap on the table. If the critical stimulus is not visual (e.g., saying a math fact from memory), the teacher may use an auditory signal or a visual signal such as dropping their hand. The format in Fig. 1 calls for several different signals. Assuming that the words were written on a whiteboard, the teacher would mostly signal by pointing to prompt student attention to the relevant letters and support unison responding. In Step 2, the teacher would point to the beginning of the word and tap their finger on the board to signal the response. In Steps 3 and 4, the instructor would point and tap under the medial vowel. In Step 5, the instructor would again point and tap at the beginning of the word. If the words were presented in a textbook, the instructor would use an auditory signal, such as a snap or tap, to avoid drawing the students’ attention away from the words.
An important element of presenting items and signaling is providing appropriate time for students to construct their responses—that is, think time. After presenting an item to the students, the teacher would pause before signaling to allow the students to construct their responses. The timing of this pause is extremely important. If the pause is too brief, student errors will increase. If the pause is too lengthy, student attention can wander, causing errors; fluency is not built; and the overall pace of the lesson is unnecessarily slowed. Think time should be just long enough to enable all students to respond correctly. The proper length of the pause changes as students gain fluency with a task. When a new version of a format is introduced, think time should be increased because of the additional demands on the students. Then, as students become skilled with that version of the format, think time should be gradually reduced. This builds fluency in the current version of the task, which prepares students to succeed in the subsequent, leaner versions. In the least prompted version (e.g., Fig. 1, Version 4) the teacher would continue to reduce think time to build fluency in the terminal task in preparation for reading these words fluently in passages.
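These contingencies governing think time can be summarized in a small sketch. All numeric values below (the accuracy threshold, step sizes, and floor) are purely illustrative assumptions; in actual DI teaching, timing is tuned by the teacher's moment-to-moment judgment of student performance.

```python
# Hypothetical sketch of the think-time contingencies described above.
# The threshold, step sizes, and floor are illustrative only.

def adjust_think_time(pause, recent_accuracy, new_format_version=False):
    """Return the pause (in seconds) before the next response signal."""
    if new_format_version:
        return pause * 1.5          # leaner format adds demands: lengthen pause
    if recent_accuracy < 0.95:
        return pause + 0.5          # errors suggest the pause is too brief
    return max(0.5, pause - 0.25)   # firm responding: shorten to build fluency
```

The key relations—errors lengthen the pause, firm responding shortens it, and a new format version resets it upward—mirror the paragraph above.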
Although group unison responses give DI tremendous efficiency in group instruction, build accuracy and fluency, and provide frequent assessment opportunities, individual turns are also important. In general, individual turns are used after a series of group unison vocal responses to strategically spot check and verify the assessment of mastery that was based on the group responses. The teacher would strategically select the items and students to be targeted for individual turns. Most individual turns should target items that caused errors during group responses, students who made errors, and students who tend to come to mastery more slowly. Individual turns are instructionally expensive in that they reduce overall student engagement and rate of opportunities to respond. To make individual turns more engaging for those who are not selected to respond, teachers should present the question or prompt, pause, and then call on an individual by name. This procedure increases the likelihood that students respond covertly even if they are not called upon to respond overtly.
Although group unison vocal responses are common in DI, they are not a critical feature. This form of response is driven by the critical functions described above. In cases where group unison vocal responses are not appropriate, other forms of response are used. First, as we mentioned above, the group unison vocal response form requires a convergent response—one for which all students will make precisely the same response. They are less useful for divergent responses—those for which students will make different responses. For example, the question, “What is the main idea of this paragraph?” would generally produce widely differing responses and, as a result, would not be a good candidate for group unison response. This type of question could be handled as an individual vocal response, a written response, or in other ways. Second, group unison vocal responses are also particularly convenient when students can respond with relatively short latency. Solving complex math problems and writing paragraphs do not lend themselves to group unison vocal responses. For responses of this type, students may be given longer periods of time to produce their responses in writing. The teacher may spot-check these responses as the students produce them, by calling on individuals to share their work, and/or by reviewing the students’ responses after the lesson. Third, vocal responses are convenient with a human teacher. If a DI program is presented on a computer, other response forms are more suitable. Fourth, in some cases, the response form is part of the instructional objective. For example, in early literacy programs, students write responses even though this is highly effortful and slow because the response form itself is the instructional objective.
Lesson Presentation and Pacing
The scripted lessons found in DI programs lead some to the incorrect conclusion that the programs deskill the teacher role and enable anyone who can read to deliver a DI program with equal effectiveness. Nothing could be further from the truth. Shakespearean plays are fully scripted, yet they require highly developed skills from the actors. The actors do not have to be playwrights or poets to effectively portray a Shakespearean character, but they do have to be able to make words come alive in a way that captures the attention of the audience and clearly conveys subtleties of the characters—a very different and highly skilled role. Likewise, a DI teacher must present the instructional formats with a range of expressivity, emphasis on key words and phrases, and changes of speed of speech all while interacting with students. Skilled DI teachers closely examine lessons to determine where to strategically insert pauses and to identify which words in the script would benefit from additional emphasis. These presentation features are not mere window dressing; they are necessary to make the words and gestures maximally comprehensible to the students, to support correct responding, and to motivate students. The teacher’s rate of speech is also critical. When formats are new and complex (e.g., the first time Fig. 1, Version 1 is introduced) or specific items are particularly difficult for the students, teachers must be careful to speak clearly and at a rate that students can respond to. They must also strongly emphasize key words in the script. When a format is well-known, material is at a practice level, or is otherwise easy for the students, the teacher should speak more quickly to save time, keep students active and alert, and provide variety. Teachers can cause errors by speaking too quickly, too slowly, in a monotone, or by emphasizing the wrong words.
The teacher’s rate of speech and other features of prosody must be controlled by close moment-to-moment attention to student performance. Excellent presentation involves fine-grained reciprocal control between the teacher and students—the teacher’s presentation behavior is controlled by student responses, and therefore can effectively control and build student behavior.
Error Corrections
Although DI programs are designed to thoroughly prepare students for each task by building prerequisite skills and providing extensive scaffolding for complex tasks, mistakes are an inevitable part of learning. Correcting mistakes efficiently and in a manner that will produce correct responses in the future is important. As previously mentioned, DI programs include specific procedures for correcting errors. The generic correction procedure in DI programs is composed of three steps: model, test, and delayed test. When using this type of correction, the teacher models the correct response, tests the item by repeating the question and having the student respond, and then retests by presenting the item again after several other items. Many formats include more specific correction procedures depending on the instructional task. When an item involves application of a rule, the correction may begin with the student saying the relevant rule, followed by a series of steps guiding the student through its application, and then a test and delayed test. In other situations, the correction involves giving a partial model followed by a test and delayed test. In cases where the student error is in the vocal response rather than discrimination (e.g., an error in pronouncing a word correctly after a model), the correction could include a step of leading the student (i.e., making the response simultaneously with the student). In this case, the correction would follow the sequence of model, lead, test, delayed test.
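To make the sequence concrete, the generic correction could be sketched as follows, for instance in a computer-delivered program. The function, the optional lead step flag, and the delayed-test distance of three items are our assumptions for illustration, not prescriptions from any DI program.

```python
# Hypothetical sketch of the generic DI correction: model, (lead,) test,
# delayed test. The three-item delay before the retest is illustrative.

def correct_error(item, upcoming_items, lead=False):
    """Return the correction steps and queue the item for a delayed test."""
    steps = [("model", item)]            # model the correct response
    if lead:
        steps.append(("lead", item))     # respond simultaneously with the student
    steps.append(("test", item))         # repeat the question immediately
    # Delayed test: present the item again after several other items.
    upcoming_items.insert(min(3, len(upcoming_items)), item)
    return steps
```

Re-queuing the missed item a few positions ahead captures the delayed test: the student must respond again after intervening items, when the immediate model can no longer prompt the response.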
Pacing
DI programs are made up of lessons that are designed to be presented in a single day. Typically developing students can usually complete one lesson per day and thereby finish one level of a program in a school year. Students with unique instructional needs may require additional time on particular skills. With any group, the rate of progress across lessons must be based on the group’s performance and not on what is convenient for the teacher (e.g., keeping two or more groups on the same lesson to reduce time spent planning) or an artificial external benchmark (e.g., moving ahead in the program to keep pace with another teacher). DI includes mechanisms for meeting the needs of groups that require either more or fewer practice items to reach mastery. When groups require less practice and, therefore, can move more quickly through a program, the teacher can skip specified exercises and/or lessons (see discussion, below, in “Data-Based Decision Making”). DI also includes mechanisms to meet the needs of groups that require more practice to reach mastery. Many formats include the instruction “repeat until firm” (see Fig. 1). This directs the teacher to provide additional practice by repeatedly presenting the set of items until the entire group of students responds correctly to all of the items. Some groups will require even more practice on specific exercises. A simple accommodation is to repeat a difficult exercise from the previous lesson prior to beginning a new lesson on the subsequent day. If the difficulty is more pervasive, the teacher could repeat an entire lesson to build mastery. If the need to repeat exercises or entire lessons becomes frequent, an additional practice session can be scheduled at a different time of the day. This practice session can be devoted to additional practice on exercises that were problematic during the previous lesson. 
If the group is somewhat heterogeneous in their needs—that is, some students frequently need extra practice and others do not—additional practice sessions can be scheduled only for those who need it. These adjustments are not modifications to the program; they are adjustments driven by student performance and a decision-making structure built into the programs that allow implementation with fidelity while responding to the needs of individual students.
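The "repeat until firm" direction amounts to a simple loop over the item set. In the sketch below, present_item is a stand-in for the teacher's format-driven exchange on a single item and reports whether the whole group responded correctly; the cap on repeats is our own assumption, marking the point at which the additional practice described above would be scheduled instead.

```python
# Hypothetical sketch of "repeat until firm": re-present the whole item set
# until the group makes an error-free pass. The cap on passes is illustrative.

def repeat_until_firm(items, present_item, max_passes=5):
    """Return the number of passes needed, or None if never firm."""
    for attempt in range(1, max_passes + 1):
        results = [present_item(item) for item in items]  # present every item
        if all(results):
            return attempt  # firm: the group was correct on every item
    return None  # persistent difficulty: schedule extra practice or reteach
```

Note that every item is presented on each pass, mirroring the direction to repeat the full set rather than only the missed items.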
Data-Based Decision Making
DI calls for teachers to make frequent data-based decisions using a variety of data sources. Data-based decision making occurs prior to beginning instruction, when teachers determine where to place students in a program. It continues throughout the course of a program at multiple levels. In this section, we describe the various forms of data-based decision making that are necessary for supporting students to achieve mastery. Table 1 presents the timeframes, data sources, and decisions/adjustments involved in using DI programs.
Table 1.
Data-Based Decision Making in Direct Instruction
Timeframe | Data Source | Decisions/Adjustments
---|---|---
1. Prior to beginning instruction | Placement tests | Whether the level of the program is appropriate; possible lesson at which to begin
 | Mastery tests (for placement purposes) | Specific lesson at which to begin
2. During lessons | Student responses (oral and written) | Corrections; targeted individual turns; repeating exercises until firm; adjusting rate of speech and emphasis on key words; repeating exercises from the previous lesson; reteaching the previous lesson
3. Ongoing (e.g., weekly, monthly) | Accumulated information from daily lesson performance | Repeating exercises and/or lessons; scheduling additional instructional time
 | Mastery tests (every 10 lessons) | Accelerating the program; repeating prescribed exercises from a set of lessons; repeating a prescribed set of lessons
Placement Decisions
Every DI program has a placement test. As explained in the section on flexible skills-based groups, a great deal of emphasis is placed on ensuring that each student is working at their instructional level. This is primarily to optimize learning but also to address motivation. When students are placed below their instructional level, they do not need instruction on the material and have little to learn from the program. Their instructional time is wasted on material they have already mastered. In addition, because they are not challenged by the instructional tasks, they may engage in problematic behavior. When students are placed above their instructional level, they are not fluent in prerequisite skills and are likely to make frequent errors. This requires frequent and extensive corrections by the teacher and slows the group’s progress. Frequent corrections and slow progress can both serve as establishing operations for problem behavior.
Placement tests assess whether students have the requisite skills to enter a level of a program at a specific point, usually at or near the beginning. They do not indicate whether students have the skills to enter a program at a much later lesson. The mastery tests included in each program may be used in conjunction with placement tests to determine a more specific point at which to begin instruction. Every DI program includes mastery tests that are presented approximately every 10 lessons. Teachers can obtain the data necessary to make a more nuanced placement decision by administering the mastery tests in a program, beginning with the first mastery test and progressing until the student does not meet the mastery criteria specified for that test. For example, a student may place into lesson 11 of the first level of Reading Mastery (Engelmann & Bruner, 2003) according to the placement test. These data do not, however, indicate whether the student possesses the skills to begin instruction at a later lesson. To make this determination, the teacher would administer the first mastery test and continue until the student did not meet the test's criteria. If this happened on the mastery test that followed lesson 90, but the student met the criteria for the mastery test that followed lesson 80, then the teacher could confidently decide to start the student at lesson 81.
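The placement logic described above can be summarized as a simple rule: begin at the lesson after the last mastery test passed. The following sketch is purely illustrative (the function name and data layout are our own, not part of any DI program), assuming each mastery test is identified by the lesson it follows.

```python
# Illustrative sketch of the placement decision rule: administer mastery
# tests in program order until the student first fails to meet criterion,
# then begin instruction at the lesson after the last test passed.
# Names and data are hypothetical, not from any published DI program.

def placement_lesson(initial_placement, test_results):
    """Return the lesson at which to begin instruction.

    initial_placement: lesson indicated by the program's placement test.
    test_results: list of (lesson_the_test_follows, passed) pairs, in order.
    """
    start = initial_placement
    for lesson, passed in test_results:
        if not passed:
            break                 # first failed mastery test: stop testing
        start = lesson + 1        # passed: student can begin after this lesson
    return start

# Worked example from the text: the placement test indicates lesson 11; the
# student passes every mastery test through lesson 80 but fails the test
# following lesson 90, so instruction begins at lesson 81.
results = [(10, True), (20, True), (30, True), (40, True), (50, True),
           (60, True), (70, True), (80, True), (90, False)]
print(placement_lesson(11, results))  # -> 81
```

If the student fails the very first mastery test, the rule simply falls back to the lesson indicated by the placement test.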
Within-Lesson Decisions
In the discussion of individual turns, we described some of the data-based decisions that occur when providing targeted individual turns; teachers monitor student performance during group unison responses and use that information to provide additional targeted assessment and practice to individuals. Teachers also engage in data-based decision making within lessons by immediately correcting student errors. When a teacher obtains data in the form of a group unison response, the teacher must decide at that moment how to proceed. If the data suggest that the students mastered the item (i.e., they provided a correct response), the teacher moves on to the next item. If the data suggest that the students have not mastered the item (i.e., they provided an incorrect response), the teacher engages in the specified procedure for correcting the error and provides additional practice on that type of item until the students’ performance is “firm.” Teachers’ decisions about the number of repetitions within each exercise, what words to emphasize during a lesson, the speed with which to deliver instruction, and how expressively they vocalize the instruction also result from data obtained from student responses during lessons.
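The moment-to-moment decision rule described above (correct response: move on; error: correct and re-present until firm) can be sketched as a small simulation. Everything here is hypothetical (the `Item` structure, the simulated group responses, and the correction callback are our own illustrations, not a DI procedure specification).

```python
# Minimal illustrative sketch of the within-lesson decision rule: after each
# group unison response, a correct response moves instruction to the next
# item; an error triggers the correction procedure, and the item is
# re-presented until the group responds correctly ("firm").
from dataclasses import dataclass

@dataclass
class Item:
    prompt: str
    answer: str

def present_exercise(items, get_group_response, correct_error):
    """Present each item; on an error, correct and re-present until firm."""
    for item in items:
        while get_group_response(item) != item.answer:
            correct_error(item)   # e.g., model-lead-test correction, then retry

# Hypothetical simulation: the group errs once on the second item and
# responds correctly after the correction.
log = []                          # records which items needed correction
errors = {"7 x 8": 1}             # number of errors before the group is firm

def respond(item):
    if errors.get(item.prompt, 0) > 0:
        errors[item.prompt] -= 1
        return "wrong"
    return item.answer

present_exercise([Item("4 x 5", "20"), Item("7 x 8", "56")],
                 respond, lambda item: log.append(item.prompt))
print(log)  # -> ['7 x 8']  (one correction was needed)
```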
Teachers also analyze data at the end of each day. In addition to considering student responses during the highly interactive parts of the lesson, they evaluate responses from independent work exercises. In higher levels of reading, composition, and mathematics programs, students make extended written responses. These responses are spot-checked as the teacher circulates among the students while they are working, but reading and evaluating paragraphs and systematically checking complex math problems must wait until after the lesson. The programs include procedures for giving students feedback on these extended responses. In addition, the teacher may identify patterns of errors that require remediation and/or additional practice the next day.
Mastery Tests
As previously mentioned, DI programs support teachers to make data-based decisions on a weekly or monthly basis using data that are more formally collected and analyzed. The in-program mastery tests given approximately every 10 lessons assess specific skills targeted in previous lessons. Each program provides highly specific guidance for analyzing mastery test data and making instructional decisions. When students make a pattern of errors that suggest a persistent misunderstanding (rather than an isolated error), the programs prescribe specific remediation exercises. For example, Connecting Math Concepts (Engelmann & Carnine, 2003) directs teachers to reteach specific exercises from multiple lessons when students make repeated errors on the same type of problem on a mastery test. The scoring guide for the mastery tests in Connecting Math Concepts specifies the criteria for determining when the remediation exercises need to be presented.
DI programs are designed to bring students to mastery on all material. In fact, the theory behind the design of the instruction relies on students progressing in a program only after they have demonstrated mastery of that day’s material (Engelmann & Carnine, 1982/2016). Ongoing data-based decision making at the micro-level (i.e., within each lesson) and at the macro-level (i.e., mastery tests, placement tests) ensures that all students master the presented material. DI programs include both the means of contacting the data and clear, structured guidance on decision making.
For students who are not making sufficient progress with the data-based decision-making features that are built into DI programs, practitioners can add components of Precision Teaching to increase the intensity of measurement, decision making, and enhanced practice procedures (Johnson & Street, 2004; Lindsley, 1972; Potts et al., 1993).
Fast Cycle
In addition to the many levels of data-based decision making to ensure that the students receive sufficient instruction and practice to achieve mastery, DI programs also support data-based decision making to accelerate the progress of students who are learning at a faster rate. Specific mechanisms for implementing this “fast cycle” vary across programs. In many DI programs, placement tests can identify students likely to benefit from a fast cycle. In some DI programs, certain exercises are marked with a star—these are designated as fast cycle exercises. Groups that can move more quickly through the program without sacrificing mastery can skip other exercises and only do these fast cycle exercises. By completing the fast cycle exercises, students get all the introductory exercises for each skill and each transition to a leaner version of a format, and they skip some exercises that provide extra practice on the same versions of the formats. Thus, fast cycle provides a complete program with smooth skill development but leaner practice. If teachers find that student errors increase when only the fast cycle exercises are taught, they can begin including some or all of the non-fast cycle exercises. Mastery tests also may provide data suggesting that learning can be accelerated. In beginning reading, for example, students who consistently perform well on mastery tests may skip certain exercises or lessons that provide more practice than their instructional needs require. In this way, the teacher can accelerate the pace of progress when appropriate and slow to a standard pace when needed. Entering fast cycle need not be a long-term decision; teachers can adjust fast cycle to respond quickly to student needs and strengths.
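The fast-cycle decision described above amounts to selecting only the starred exercises while mastery holds, and restoring the full exercise set when errors rise. The sketch below is a hypothetical illustration (the threshold value and data layout are our own assumptions, not part of any DI program).

```python
# Illustrative sketch of fast-cycle exercise selection: starred exercises
# form the lean track; if the group's error rate rises above a chosen
# threshold, the non-starred practice exercises are restored.
# The 10% threshold is an arbitrary placeholder, not a DI specification.

def select_exercises(lesson, error_rate, threshold=0.10):
    """lesson: list of (exercise_id, is_starred) pairs, in lesson order."""
    if error_rate <= threshold:
        return [ex for ex, starred in lesson if starred]   # fast cycle only
    return [ex for ex, starred in lesson]                  # full practice

lesson = [("A", True), ("B", False), ("C", True), ("D", False)]
print(select_exercises(lesson, 0.05))  # -> ['A', 'C']
print(select_exercises(lesson, 0.20))  # -> ['A', 'B', 'C', 'D']
```

Because the rule is evaluated lesson by lesson, entering or leaving the fast cycle is a reversible, short-term decision, as the text notes.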
Conclusion
DI uses an elaborate set of strategies and procedures to organize and deliver instruction. Building on the underlying content analysis (Slocum & Rolf, this issue), concept analysis (Johnson, this issue), and juxtaposition of items (Twyman, this issue), effective lesson design and delivery ensures that the products of instructional design contact learners in a powerful and interactive form. DI strategies and procedures for lesson delivery ensure that communication is clear, skills are built smoothly and systematically, and teachers respond effectively to student performance. The sequence of formats, items selected, and fading of scaffolds across a program create a choreography of steps that teachers and students perform to build elaborate repertoires of student behaviors. The systems for interaction, assessment, and decision making enable teachers and students to enter a dance in which each partner is responding to the other. By becoming exquisitely sensitive to student performance, teachers can lead students through the dance of learning. By coming under the control of student behavior, teachers can bring student behavior under the control of complex academic content.
Many of the strategies, procedures, and techniques that make up DI are similar to those used by ABA practitioners with individual clients. DI programs specify and coordinate a coherent system made up of these elements to teach challenging academic content to groups of students. ABA practitioners who are new to DI programs will find modified forms of familiar strategies. This is both a blessing and a curse. ABA practitioners often recognize and understand the importance of these strategies (e.g., the emphasis on mastery learning and using prompt fading sequences). However, the functions of some of the exercises in DI programs are not obvious. A practitioner who is too quick to modify exercises may fail to build fluency on important prerequisites for future skills. DI programs elaborately implement highly refined behavioral principles to positively affect individuals’ lives. We hope that this article will increase the widespread use of DI programs by ABA practitioners, support behavior analysts in their ethical obligation to develop competency when implementing new approaches (Behavior Analyst Certification Board, 2014), and inspire the incorporation of DI features into new instructional programs that address previously overlooked domains.
Funding
Production of this manuscript was supported in part by a grant from the Department of Education, H325D170080.
Data Availability
Not applicable
Declarations
Conflicts of interest/competing interests
Ms. Rolf is a coauthor of the textbook Direct Instruction Mathematics, 5th edition; Dr. Slocum is a coauthor of the textbook Direct Instruction Reading, 6th edition.
Code availability
Not applicable
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Behavior Analyst Certification Board. (2014). Professional and ethical compliance code for behavior analysts. Author.
- Engelmann, S., & Bruner, E. C. (2003). Reading mastery classic: Level 1. SRA/McGraw-Hill.
- Engelmann, S., & Carnine, D. (2003). Connecting math concepts: Level A. SRA/McGraw-Hill.
- Engelmann, S., & Carnine, D. (2016). Theory of instruction: Principles and applications (Rev. ed.). NIFDI Press. (Original work published 1982)
- Johnson, K. (this issue). Creating the components for teaching concepts. Morningside Academy.
- Johnson, K., & Street, L. M. (2004). The Morningside model of generative instruction. Cambridge Center for Behavioral Studies.
- Lindsley, O. R. (1972). From Skinner to precision teaching: The child knows best. In J. B. Jordan & L. S. Robbins (Eds.), Let's try doing something else kind of thing (pp. 1–12). Council for Exceptional Children.
- Potts, L., Eshleman, J. W., & Cooper, J. O. (1993). Ogden R. Lindsley and the historical development of precision teaching. The Behavior Analyst, 16, 177–189. https://doi.org/10.1007/BF03392622
- Slocum, T. A., & Rolf, K. R. (this issue). Analysis of the content domain. Department of Special Education & Rehabilitative Counseling, Utah State University.
- Twyman, J. S. (this issue). You have the big idea, concept, and examples: Now what? BLAST: A Learning Sciences Company.
- Watkins, C. L., & Slocum, T. A. (2004). The components of Direct Instruction. In N. Marchand-Martella, T. Slocum, & R. Martella (Eds.), Introduction to Direct Instruction. Allyn & Bacon.