Campbell Systematic Reviews. 2024 Mar 3;20(2):e1389. doi: 10.1002/cl2.1389

Protocol: Strategy instruction for improving short‐ and long‐term writing performance on secondary and upper‐secondary students: A systematic review

André Kalmendal, Ida Henriksson, Thomas Nordström, Rickard Carlsson
PMCID: PMC10909389  PMID: 38434535

Abstract

This is the protocol for a Campbell systematic review. The objectives are as follows. This review aims to investigate the effectiveness of all types of teacher‐delivered, classroom‐based strategy instruction aimed at students in the general population (all students), including struggling students (with or at risk of academic difficulties), ages 12–19, for increasing writing performance. The majority of previous reviews scoped all outcomes presented in the primary studies. This review will focus solely on the three most common outcomes: story quality, story elements, and word count/length.

1. BACKGROUND

1.1. The problem, condition or issue

National assessments from several Western countries, such as the Netherlands, the USA, and the UK, reveal poor writing proficiency among students (De Smedt & Van Keer, 2018; Inspectie van het Onderwijs, 2010; National Center for Education Statistics, 2012; Ofsted, 2000). The importance of writing in modern societies cannot be overstated; even though we communicate in writing more than ever, students still struggle with formal writing procedures such as planning, revising, and editing texts. The complexity of writing is well known, and previous research shows that one promising way to enhance writing is through interventions consisting of self‐regulation training (Harris et al., 2003; Klein et al., 2021). In recent years, strategy instruction, a method that builds on self‐regulation as one of its core components, has received increasing attention. The consensus that self‐regulated learning is important for academic and lifelong learning is well established, and Dignath and Veenman (2021) emphasize the role of direct strategy instruction as a framework for activating self‐regulated learning in the classroom. Strategy instruction is a collective name for interventions focusing on self‐regulation, self‐efficacy, and meta‐cognitive strategies. Olson and Land (2007) argue that we do "students a disservice when we offer a reductionist curriculum focusing primarily on skill and drill." Instead of focusing on spelling and handwriting, students need to understand what experienced writers do when they write. It is therefore crucial that students are introduced to the cognitive strategies that underlie writing in a meaningful context. They argue that teachers should provide sustained and guided practice that can be internalized and performed independently by the students (Olson & Land, 2007).
There are several types of strategy instruction that utilize the concept of internalizing cognitive strategies to reach a higher level of writing. To date, strategy instruction has been researched and tested across all school grades. This review aims to investigate the effectiveness of strategy instruction on the writing performance of all students, including struggling writers, in secondary and upper‐secondary school.

Previous literature (Dunn, 2012; Graham & Harris, 2005) shows that students with writing difficulties usually do not acquire (meta‐)cognitive strategies unless explicit and detailed information is provided by the teacher. Struggling writers usually spend less time in the pre‐writing phase, such as reflecting on the topic, choosing the audience, and developing ideas, and instead start writing immediately after the assignment is given. There are also students who have trouble getting started with writing and who need support in the form of modeled texts. The idea of strategy instruction is to engage students to actively create and understand the constructs of their personal strategies in order to develop self‐regulated learning (Chalk et al., 2005). Students who struggle with writing might therefore benefit even more from this intervention than typical students. Since trouble with writing can manifest itself in many ways (e.g., poor planning or difficulty getting started), it is reasonable to believe that strategy instruction is suitable as a whole‐class intervention for writers of different proficiency levels.

1.2. The intervention

Strategy instruction is not a standardized, manual‐based method, but it typically addresses two core components of early‐stage writing development: discourse knowledge and self‐regulation (Kim, 2020). It can be described as a goal‐oriented mental activity, often with an aim to solve a problem in writing within a learning situation (De Silva, 2015).

Discourse knowledge includes information on a specific genre of texts that the students are about to write, such as reports, speeches, and narrative or persuasive writing (Klein et al., 2021). To be self‐regulated learners, students need knowledge about the subject, the task, and the context in which they will apply their learning in order for them to reflect on their own learning processes (Woolfolk, 2021). The knowledge can be remembered using mnemonic strategies or more advanced maps of the key ideas.

Self‐regulation is the process we use to activate and sustain our thoughts, behaviors, and emotions to reach our goals (Perry & Rahim, 2011). According to Zimmerman (2002), self‐regulated learning is based on three phases: forethought, performance, and reflection (see Figure 1).

  • Phase 1: forethought phase – The students need to set clear and reasonable goals, together with a plan to accomplish them. The students need motivation and high self‐efficacy to reach their goals.

  • Phase 2: performance phase – In the performance phase, the students must apply self‐control and learning strategies to stay engaged with the task. This might include mnemonics, imagery, attention focusing, and other techniques (Woolfolk, 2021). This stage also includes self‐observation, so that the students understand how things are going and how they can change strategies if needed.

  • Phase 3: reflection phase – The students now look back at their work and reflect on and evaluate their performance. Questions like "What strategies worked?", "What did not work out?", and "Were the goals reasonably set?" are useful for increasing the students' self‐efficacy for the next similar task.

Figure 1. Phases and subprocesses of self‐regulation (Zimmerman, 2002).

The concept of strategy instruction allows the creation of different interventions based on the two core components: discourse knowledge and self‐regulation. The most researched intervention is Self‐Regulated Strategy Development, which is presented more thoroughly below, followed by examples of other strategy instruction interventions.

Self‐Regulated Strategy Development has been the most researched type of strategy instruction. This intervention addresses multiple processes of self‐regulation, such as self‐instruction, self‐evaluation, goal‐setting, and self‐reinforcement (Klein et al., 2021). The intervention's primary focus lies in teaching students strategies for successfully completing an academic task by increasing knowledge and self‐regulatory procedures like goal setting, self‐instruction, and self‐monitoring (Harris et al., 2006). According to Graham and Harris (1993, 2005), Self‐Regulated Strategy Development also shows promising effects on students who are considered to be struggling with writing.

In Self‐Regulated Strategy Development, the lessons typically range from 20 to 60 min three times a week, and the intervention is based on six stages: background knowledge, discussion, modeling, memorizing, supporting, and independent performance (Harris et al., 2003). The stages can be combined and may take more than one lesson to complete. Below follow the Self‐Regulated Strategy Development stages of instruction as presented by Harris et al. (2013) and Graham and Harris (1993):

  1. Background knowledge – In the first stage, students read and discuss works in the genre being addressed (e.g., reports, persuasive essays, etc.) to increase declarative, conditional, and procedural knowledge. Students may also be introduced to goal setting and self‐monitoring to develop self‐regulation.

  2. Discussion – In this stage, it is important to establish students' commitment to learning strategies and to become collaborative partners. Now it is time to discuss the students' self‐regulation and writing abilities to learn more about the purpose, the benefits, and how they can use them in their writing. Self‐monitoring (graphing) may be introduced to assist goal setting but might be skipped if the students are likely to react negatively to it.

  3. Modeling – The teacher shows how the model is used and connects it to useful self‐instructions. Together, the teacher and the student discuss and analyze the strategies and the model performance and make changes if necessary; for example, a new mnemonic may be developed.

  4. Memorizing – Start requiring and confirming memorization of strategies, mnemonics, and self‐instructions fitted to the students. The teacher makes sure that the students have memorized the strategies before independent performance takes place.

  5. Supporting – Students and teachers together use writing and self‐regulation strategies to succeed in composing, using strategy charts, self‐instruction sheets, and graphic organizers. Additional self‐regulation components, such as managing the writing environment or use of imagery, may be introduced. The criterion levels are increased gradually until the goals are met.

  6. Independent performance – In the last stage, students should be able to use writing and self‐regulation strategies independently, with some support and monitoring by teachers if necessary. Plans for maintenance and generalization are discussed and implemented.

Cognitive Self‐Regulation Interventions aim to develop declarative knowledge about skills or procedures to a stage where the students can apply strategies to their own writing (Torrance et al., 2007). The intervention is adapted for use with typically developing students and therefore aims for less direct teacher oversight.

The Cognitive Self‐Regulation Intervention program is built upon 10 weekly sessions, with lessons lasting between 60 and 75 min, as well as several homework tasks (Torrance et al., 2007). The sessions include interactive instruction in planning, setting rhetorical goals, generating content, and developing a structure for the students' writing processes. Different mnemonics are used to remember the strategies, and students are asked to emulate the teacher's modeled strategies in their homework. Feedback is given continuously by the teachers during the last three sessions, and students are expected to write and produce their own list of self‐regulatory statements.

Tekster (Bouwer et al., 2018) is an example of a strategy instruction intervention developed for Dutch students based on students' grades and level of writing proficiency. The focus is on teaching students a general writing strategy along with the self‐regulation skills needed to use the strategy successfully. Tekster consists of three design principles: writing strategies, text structures, and self‐regulation skills (Bouwer et al., 2018). The intervention includes a series of 16 lessons based on grade level. The lessons typically last between 45 and 60 min and aim to guide the students through all steps of the writing process. All three principles come with three modes of instruction: observational learning, explicit instruction, and practice. The interactive learning activities begin with observing and discussing, and then applying, different models of writing strategy at several stages of the writing process. Teachers introduce specific characteristics of text types through modeling, comparing texts, and explicit instruction. Further, the teacher introduces an assignment with a clear communicative goal and intended audience. Acronyms for the strategy are named, and content is generated as keywords and gradually worked into an organized text. Students read each other's texts and evaluate them using questions and feedback resources. The last step of the intervention is revising the text based on the feedback received.

There are also other interventions based on the concept of strategy instruction that include goal‐setting, self‐evaluation, self‐instruction, self‐reinforcement, and so forth, and we aim to include all interventions in this review where the authors explicitly use strategy instruction as a method for improving writing performance.

1.3. How the intervention might work

The idea of strategy instruction is to increase students' discourse knowledge and self‐regulation in the domain of writing. A major purpose of the strategy is to deliver explicit instruction on the content of the genre the students are supposed to write within. Explicit instruction is one way to ease the processing demands associated with incorporating new procedures into an already heavily taxed cognitive system (Graham & Harris, 2005). Further, the students are treated as active collaborators in the learning process, and the students' role and effort in learning the new strategies are emphasized and rewarded. The instructional components of the intervention aim to help the students be creative, plan their writing, use a number of different writing strategies, and reinforce their own ideas to increase the level of their writing. The goal of the intervention is to support the students in all aspects of the writing process. A good writer has the knowledge and knows how to approach the writing process as a whole. This includes generating the content, organizing and creating a structure for the composition, formulating goals and plans, efficiently executing the mechanical aspects of writing, and revising and reformulating goals in the text (Chalk et al., 2005).

1.3.1. Intervention in practice

Strategy instruction may be referred to as a constructivist approach to learning (Olson & Land, 2007). Together, the students and the teacher choose a strategy to solve a category of tasks and map it to a step‐by‐step plan in the classroom (Blik et al., 2016). The students progress through the stages as they fulfill the criteria in each stage; hence, the instruction is criterion‐based rather than time‐based. The feedback and support are individualized by the instructor to adapt and be responsive to each student's needs.

Strategy instruction uses mnemonic devices to help students remember and apply different writing strategies. The mnemonic is usually adapted to the target language and might look like this: TREE prompts students to Tell what you believe (state your topic sentence), give three or more Reasons (Why do I believe this?), End it (wrap it up right), and Examine (Do I have all of my parts?) (Graham & Harris, 2005). Another example is EKSTER (magpie in Dutch), a strategy that addresses several aspects of the writing process specific to Dutch students (Bouwer et al., 2018): Eerst nadenken (think first), Kiezen & ordenen (choose & organize), Schrijven (write), Teruglezen (reread), Evalueren (evaluate), Reviseren (revise).

1.4. Why it is important to do this review

1.4.1. Prior reviews

Several reviews have been conducted regarding strategy instruction (or similar interventions) and student performance over the past 10 years. In our search, we found eight reviews (see Table 1) on the topic since 2011. We conducted a minor ROBIS check on the available reviews to evaluate the quality of the papers. We found that the previous reviews are of varying quality; no review meets the requirements of a standardized systematic review (Higgins et al., 2022), and none were pre‐registered. Only one review (de Boer et al., 2018) provided information about long‐term effects; however, it was not solely focused on strategy instruction. The majority of the reviews do not differentiate between teacher‐delivered interventions in the classroom and external implementation of the intervention (e.g., by trained research staff outside of the classroom). Hence, they conflate the efficacy and the effectiveness of the intervention (Flay et al., 2005). Not separating teacher‐delivered from externally implemented interventions makes it harder to evaluate whether the intervention works in authentic, applied settings. Several of the previous reviews were conducted without any statistical analysis or standardized meta‐analytic methods (Finlayson & McCrudden, 2020; See & Gorard, 2020). Previous reviews also mixed reading and writing outcomes in the same analysis, which makes the evidence hard to interpret (de Boer et al., 2018; Graham et al., 2018; Plonsky, 2011). The majority of the reviews we found in this area did not use any risk of bias tools and had more in common with a scoping review than with a meta‐analysis (Gillespie & Graham, 2014; de Boer et al., 2018; Donker et al., 2014). A summary of descriptives for these eight reviews can be found in Table 1.

Table 1.

Descriptives of previous literature.

| Review | Population | Intervention | Comparison | Outcome | Included study design | Instructor | Pre‐reg. | Risk of bias |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| de Boer et al. (2018) | K‐12 | Cognitive and meta‐cognitive strategies | Teaching as usual | Academic achievement | RCT | Teacher, researcher, or computers | No | No |
| Donker et al. (2014) | K‐12 | Cognitive and meta‐cognitive strategies | Teaching as usual | Student performance | RCT/QES | Not specified | No | No |
| Finlayson and McCrudden (2020) | K‐6 | Writing instructions^a | Not specified | Writing achievement | RCT/QES | Teacher implemented | No | No |
| Gillespie and Graham (2014) | 1–12 with LD | Strategy instruction | Not specified | Writing quality | RCT/QES | Not specified | No | Quality score |
| Graham et al. (2012) | 1–6 | Strategy instruction | Not specified | Writing quality | RCT/QES | Not specified | No | No |
| Graham et al. (2018) | K‐12 | Literacy programs^a | Not specified | Writing and reading | RCT | Teacher or researcher | No | No |
| Plonsky (2011) | K‐University | Cognitive and meta‐cognitive strategies | Teaching as usual | Strategy instruction effectiveness | QES | Not specified | No | No |
| See and Gorard (2020) | K‐12 | Writing interventions^a | Teaching as usual | Academic performance | RCT/QES/Longitudinal | Not specified | No | Quality assessment |

Abbreviations: LD, learning disability; QES, Quasi‐Experimental study; RCT, randomized controlled trial.

^a Included interventions labeled as Strategy Instruction.

1.4.2. Outcomes related to strategy instruction

Three common outcomes are primarily focused on in evaluations of strategy instruction: story quality, story elements or components, and length or word count (Harris et al., 2012; McKeown et al., 2018; Torrance et al., 2007; Gillespie & Graham, 2014; Chalk et al., 2005; Collins et al., 2021; Ennis, 2016; Graham & Harris, 2005; Klein et al., 2021; Sundeen, 2007). In assessing these outcomes, it is common that students' manuscripts are typed into a word‐processing document in which spelling, punctuation, and capitalization errors are corrected so as not to bias the rater (Harris et al., 2012). This also removes the possibility of identifying students by their handwriting.

It is common for studies to also include auxiliary measures that do not measure writing performance per se, but instead focus on different aspects of the intervention, such as adherence, preferences, whether students use the strategy, and how much time each student spends on different intervention stages. We will not assess these types of auxiliary measurements in this review; however, such measures are of course interesting for understanding how an intervention works (e.g., whether students are using more meta‐cognitive strategies) and might be applicable to other types of research questions.

The three most common outcomes related to writing performance are presented below.

Story quality

Story quality, or holistic quality, measures writing development by scoring the overall quality of a text. The outcome is typically rated on a rubric‐designed point scale, where higher scores represent essays of better quality. The rubric usually consists of 3–4 elements that vary depending on the topic of the essay; each element is scored separately and then aggregated into one quality outcome. We will include story quality as an outcome as long as it is assessed quantitatively, even if not scored as described above. Elements seen in most articles are: organization, fluency, development of support, coherence, and conventions. The raters vary across studies, ranging from school teachers and special educators to researchers.

Story elements or components

Story elements or components are usually measured on a rubric‐scored point scale. Minimum and maximum scores vary between studies. Students earn points by including different elements or components such as topic sentences, supporting details, transition words, explanations, and endings (e.g., a student can get 0 points if an element is missing, 1 point if the element is included, and 2 points if the element is elaborated or highly developed). If the outcomes are graded in a way other than a rubric‐scored point scale, we aim to include the outcome as long as it is measured quantitatively. As for story quality, the raters vary between studies and include teachers, special educators, and researchers.

Length and word count

Length and word count are measured as raw scores that represent the total number of words written in the essay without regard to grammatical accuracy or spelling.
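As a minimal illustration of how these two scoring schemes might be computed, the sketch below aggregates hypothetical element ratings and counts words. The element names and point values are invented for illustration and are not taken from any cited study.

```python
def word_count(essay: str) -> int:
    """Length score: raw number of words, ignoring spelling or grammar."""
    return len(essay.split())


def element_score(ratings: dict[str, int]) -> int:
    """Sum rubric points across story elements.

    Each element is rated 0 (missing), 1 (included), or 2 (elaborated).
    """
    if not all(0 <= r <= 2 for r in ratings.values()):
        raise ValueError("each element rating must be 0, 1, or 2")
    return sum(ratings.values())


# Hypothetical rating of one persuasive essay
ratings = {"topic sentence": 2, "supporting details": 1,
           "transition words": 0, "ending": 1}
total = element_score(ratings)  # 2 + 1 + 0 + 1 = 4 points
```

In practice, rubric elements and point ranges differ between studies; the sketch only shows the aggregation step common to them.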

Long‐ and short‐term measurements

According to the Swedish Agency for Health Technology Assessment and Assessment of Social Services (Statens beredning för medicinsk och social utvärdering [SBU], 2014), there is no standardized time for a follow‐up measurement in studies of literacy. However, we argue that a 6‐month follow‐up would be of interest to teachers, because it would also provide information on whether the intervention still shows effects going into the next semester.

1.4.3. The contribution of this review

This study aims to review evidence‐based research on strategy instruction interventions for improving student performance in writing. This includes all students in the classroom as well as students who are struggling with writing. We focus on students ages 12–19, as this is the critical period in writing development where strategy instruction can be expected to be most beneficial. By the age of 12, most students will already have mastered the basics of writing (e.g., spelling, basic grammar), and writing is used as a tool for learning and demonstrating knowledge. Thus, the skills needed to start producing longer and more advanced texts become an essential part of students' academic success.

In line with the Salamanca declaration, the field of education has made historic changes toward more inclusive teaching, but there are challenges in delivering classroom‐based interventions that target all students, including struggling students (Shaw & Pecsi, 2021). One important aspect of educational research is therefore to provide teachers with evidence‐based guidelines for teaching writing in daily practice that target all students in the classroom (De Smedt & Van Keer, 2018; UNESCO, 2014).

2. OBJECTIVES

This review aims to investigate the effectiveness of all types of teacher‐delivered, classroom‐based strategy instruction aimed at students in the general population (all students), including struggling students (with or at risk of academic difficulties), ages 12–19, for increasing writing performance. The majority of previous reviews scoped all outcomes presented in the primary studies. This review will focus solely on the three most common outcomes: story quality, story elements, and word count/length.

2.1. Primary research questions

What are the short‐term effects of teacher‐delivered strategy instruction on all students' (ages 12–19) writing performance when compared to teaching as usual?

What are the short‐term effects of teacher‐delivered strategy instruction on struggling writers' (ages 12–19) writing performance when compared to teaching as usual?

2.2. Secondary research questions

The review will also include the following secondary research questions. We consider them secondary as we are unsure whether there is sufficient research in this area to answer them.

What are the long‐term effects of teacher‐delivered strategy instruction on all students' (ages 12–19) writing performance when compared to teaching as usual?

What are the long‐term effects of teacher‐delivered strategy instruction on struggling writers' (ages 12–19) writing performance when compared to teaching as usual?

2.3. Moderator analysis

We will conduct a subgroup analysis for the different types of strategy instruction (SRSD, CSRI, Tekster, Other).
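To illustrate what such a subgroup analysis involves, the sketch below pools effect sizes within one subgroup using inverse‐variance weighting with a DerSimonian–Laird between‐study variance estimate. The effect sizes and variances are invented placeholders, not data from any study; in the actual review, dedicated meta‐analysis software would perform this step.

```python
import math


def pooled_effect(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird tau^2)."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    est = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return est, se


# Hypothetical subgroup of SRSD studies (standardized mean differences
# and their sampling variances)
est, se = pooled_effect([0.45, 0.60, 0.30], [0.04, 0.05, 0.03])
```

The same pooling would be repeated for each subgroup (SRSD, CSRI, Tekster, Other), and the subgroup estimates then compared.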

3. METHODS

3.1. Criteria for considering studies for this review

3.1.1. Types of studies

  • Studies must have a randomized controlled trial (RCT) design and/or cluster‐RCT design. We will also include quasi‐experimental designs and/or cluster quasi‐experimental designs that use both control groups and pretests.

Controls should be carefully matched (e.g., same‐year students, same or similar school) with demonstrated baseline equivalence. Single group pre‐post comparisons are excluded as well as studies that compare with norm data or similar types of statistical controls.

We want the review to be as comprehensive as possible and therefore include both (cluster) RCTs and quasi‐experimental studies. In educational research, it is hard to conduct blinded RCTs (e.g., to keep parents or teachers unaware that the control group students did not receive an intervention).
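For context, effects from such two‐group designs are typically extracted as standardized mean differences. The sketch below computes Hedges' g, a standard small‐sample‐corrected variant; the group statistics are invented placeholders, not data from any study.

```python
import math


def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                   / (n1 + n2 - 2))
    d = (m1 - m2) / sp                         # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)    # correction factor J
    return j * d


# Hypothetical posttest story-quality scores:
# intervention group vs. teaching-as-usual group, 30 students each
g = hedges_g(5.2, 1.1, 30, 4.6, 1.2, 30)
```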

3.1.2. Types of participants

  • Studies should include students (age 12–19) attending regular, private, or public, schools in grades 7–12 in typical classroom settings in OECD countries.

The population eligible for the review includes all students (ages 12–19) attending regular, private, or public schools in grades 7–12. We will only include students in typical classroom settings and will not focus on classes in special schools. Students attending special schools, reading clinics, or equivalent are thus not included in this study; they should be the focus of another systematic review, because that population warrants other approaches than ours.

To make the selected studies comparable across included studies, we will only include studies carried out in OECD countries due to the similarity in school settings and teaching as usual conditions.

An important sub‐population of interest for our secondary research questions in this review is students who struggle with writing. We will include all struggling writers regardless of cause (i.e., different learning difficulties or disabilities) if they are participating in regular classrooms. Because the field lacks agreed‐upon criteria for who is considered a struggling writer (Dunn, 2012), we will include students who have documented difficulties with writing (e.g., details provided by the school or screened by the researchers) and who perform at or below the 25th percentile on norm‐referenced tests (a typical cutoff for struggling readers, which is fairly comparable) in at least one of the following areas: vocabulary, spelling, sentence combination, or story composition, as provided by the standardized writing test TOWL, the Comprehensive Receptive and Expressive Vocabulary Test, the Receptive One‐Word Picture Vocabulary Test‐Revised, or similar.
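The 25th‐percentile cutoff described above can be sketched as follows, assuming standard scores that are approximately normal with mean 100 and SD 15, as is common for norm‐referenced tests. The function names are our own illustration, not part of any cited instrument.

```python
import math


def percentile_rank(score, mean=100.0, sd=15.0):
    """Percentile rank of a standard score under a normal norm distribution."""
    z = (score - mean) / sd
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))


def meets_struggling_cutoff(score, cutoff=25.0):
    """True if the score falls at or below the cutoff percentile."""
    return percentile_rank(score) <= cutoff


# A standard score of 85 (one SD below the mean) falls near the
# 16th percentile, so it meets a 25th-percentile cutoff
flagged = meets_struggling_cutoff(85)
```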

3.1.3. Types of interventions

  • Studies should have teacher‐delivered classroom‐based explicitly stated strategy instruction intervention for improving writing performance.

To be included, studies should focus on teacher‐delivered, classroom‐based strategy instruction for improving writing performance. Strategy instruction includes the specific use of strategies that involve students planning, writing, revising, and/or editing texts (Gillespie & Graham, 2014). There may be other interventions based on the concept of strategy instruction that include goal‐setting, planning, revising, and so forth, and we aim to include all such interventions.

The comparison/control group will be teaching as usual, which does not include strategy instruction or another subtype of strategy instruction. We will code what the comparison/control group does to the extent possible, when this information is given by the authors.

The intervention is restricted to implementations in classroom settings, during the regular school year.

3.1.4. Types of outcome measures

Primary outcomes
  • Studies should provide at least one of the following outcomes: story quality, story elements or components, and length or word count.

The outcomes included will be writing performance, with both short‐term and follow‐up measurement points. Writing performance is typically systematically assessed and/or scored using standardized tests (Finlayson & McCrudden, 2020; Harris et al., 2006; Torrance et al., 2007).

For short‐term outcomes, we will use measures collected immediately after the intervention has finished. For long‐term outcomes, a minimum of 6 months after the intervention is required to be regarded as long‐term in this study.

Secondary outcomes

There are no planned secondary outcomes for this review. If we find outcomes other than our core outcomes, we will present them in a table but not include them in our main analysis. However, if we find another outcome that exists in the majority of our selected studies, we will include it in our analysis.

3.1.5. Types of settings

  • Studies that focus on students attending special schools, reading clinics, or equivalent, as well as summer schools and after‐school programs, will be excluded.

  • Studies with an active control group (e.g., other writing interventions) will be excluded.

3.2. Search methods for identification of studies

The search strategy follows the recommendations in the Campbell information‐retrieval guide by Kugley et al. (2017). Our strategy for finding relevant studies is described below; additional information about the search can be found in Supporting Information: Appendix A.

3.2.1. Limitations and restrictions

The Self‐Regulated Strategy Development intervention was developed in 1992 (Graham & Harris, 1993); therefore, all searches will be restricted to publications after 1990.

Because of the review team's language limitations, we will only include studies written in English. In alignment with the Cochrane Handbook, our search will be updated within 6 months of publication (Higgins et al., 2022).

3.2.2. Filters

As stated in Kugley et al. (2017), predefined filters should be used with caution outside the medical and health sciences. However, to retrieve studies with specific designs, we will use the recommended strategies for finding methodologically sound studies in each database (e.g., the strategies developed by Cochrane specifically for PsycInfo, https://work.cochrane.org/psycinfo).

3.3. Electronic searches

3.3.1. Databases

We will search different databases through Linnaeus University access. Potentially relevant studies will be identified through these databases:

  • APA PsycInfo (ProQuest)

  • ERIC (ProQuest)

  • Linguistics and Language Behavior Abstracts (ProQuest)

  • MLA International Bibliography (EBSCO)

  • Web of Science: Core collection (Clarivate)

    • SCI‐Expanded (1900–present)

    • SSCI – (1955–present)

    • AHCI – (1975–present)

    • ESCI – (2005–present)

3.3.2. Search terms

The search string provided below is tailored to PsycInfo. A full report of the search strings will be presented in Supporting Information: Appendix A. The search is divided into the categories Study type, Intervention, and Population. Our preliminary search in PsycInfo resulted in 1444 hits without filtering for peer‐reviewed articles (search date: 2023‐04‐28).

(MAINSUBJECT.EXACT(“High School Education”) OR MAINSUBJECT.EXACT(“High School Students”) OR MAINSUBJECT.EXACT(“High Schools”) OR MAINSUBJECT.EXACT(“Junior High School Students”) OR MAINSUBJECT.EXACT(“Junior High Schools”) OR MAINSUBJECT.EXACT(“Middle School Education”) OR MAINSUBJECT.EXACT(“Middle School Students”) OR MAINSUBJECT.EXACT(“Middle Schools”) OR MAINSUBJECT.EXACT(“Secondary Education”) OR ti,ab,if(“grade* 7” OR “grade* seven” OR “seventh grade”) OR ti,ab,if(“grade* 8” OR “grade* eight” OR “eighth grade”) OR ti,ab,if(“grade* 9” OR “grade* nine” OR “ninth grade”) OR ti,ab,if(“grade* 10” OR “grade* ten” OR “tenth grade”) OR ti,ab,if(“grade* 11” OR “grade* eleven” OR “eleventh grade”) OR ti,ab,if(“grade* 12” OR “grade* twelve” OR “twelfth grade”) OR ti,ab,if(gymnasium*) OR ti,ab,if(“high school*”) OR ti,ab,if(“highschool*”) OR ti,ab,if(“juniorhigh”) OR ti,ab,if(K‐12) OR ti,ab,if(K12) OR ti,ab,if(“lower secondary”) OR ti,ab,if(“middle” PRE/1 school*) OR ti,ab,if(pupil OR pupils) OR ti,ab,if(“secondary” PRE/1 (school* OR education* OR level OR grade)) OR ti,ab,if(student OR students) OR ti,ab,if(“upper secondary”)) AND (MAINSUBJECT.EXACT(“Literacy”) OR MAINSUBJECT.EXACT(“Literacy Programs”) OR MAINSUBJECT.EXACT(“Writing Skills”) OR MAINSUBJECT.EXACT.EXPLODE(“Written Communication”) OR mainsubject.Exact(“writing”) OR ti,ab,if(“writing”)) AND (MAINSUBJECT.EXACT(“Educational Objectives”) OR MAINSUBJECT.EXACT(“Goals”) OR MAINSUBJECT.EXACT(“Goal Orientation”) OR MAINSUBJECT.EXACT(“Goal Setting”) OR MAINSUBJECT.EXACT(“Individualized Instruction”) OR MAINSUBJECT.EXACT(“Learning Strategies”) OR MAINSUBJECT.EXACT(“Metacognition”) OR MAINSUBJECT.EXACT(“Mnemonic Learning”) OR MAINSUBJECT.EXACT(“Self‐Efficacy”) OR MAINSUBJECT.EXACT(“Self‐Evaluation”) OR MAINSUBJECT.EXACT(“Self‐Monitoring”) OR MAINSUBJECT.EXACT(“Self‐Regulated Learning”) OR MAINSUBJECT.EXACT(“Self‐Regulation”) OR MAINSUBJECT.EXACT(“Strategies”) OR ti,ab,if((Goal OR goals) NEAR/1 setting) OR 
ti,ab,if(“learning skill”) OR ti,ab,if(“learning skills”) OR ti,ab,if(“learning strat*”) OR ti,ab,if(metacognit*) OR ti,ab,if(meta‐cognit*) OR ti,ab,if(Mnemonic OR Mnemonics) OR ti,ab,if(self‐instruction) OR ti,ab,if(self‐instructions) OR ti,ab,if(self‐instructional) OR ti,ab,if(self‐evaluat*) OR ti,ab,if(self‐monitor*) OR ti,ab,if(selfmonitor*) OR ti,ab,if(self PRE/1 effic*) OR ti,ab,if(selfeffic*) OR ti,ab,if(self‐regulat*) OR ti,ab,if(selfregulat*) OR ti,ab,if(“strat* instruction*”) OR ti,ab,if(“strat*use”) OR ti,ab,if(“study skill”) OR ti,ab,if(“study skills”) OR ti,ab,if(“study strat*”)) AND (ti,ab,su(experiment*) OR ti,ab,su(“group”) OR ti,ab,su(“groups”) OR ti,ab,su(intervention*) OR ti,ab,su(random NEAR/1 (distributed OR assigned OR sampling)) OR ti,ab,su(randomized) OR ti,ab,su(randomly) OR ti,ab,su(“trial”) OR ti,ab,su(“trials”) OR ti,ab,su(quasi*) OR ti,ab,su(quasiexperiment*)).

3.4. Searching other resources

3.4.1. Hand search

A hand search of issues published from 2018 to 2023 will be performed to capture recently published relevant studies that might not yet be indexed in any database. The following journals were selected because they frequently publish literature on strategy instruction and writing performance.

  • Intervention in School and Clinic

  • Reading & Writing Quarterly

  • Journal of Learning Disabilities Quarterly

3.4.2. Gray literature

Search for working papers/conference proceedings

To identify relevant conference papers and working papers, additional searches will be conducted in academic clearinghouses and repositories, covering the past 10 years (2013–2023). Given the nature of our research question, government documents are unlikely to provide studies with experimental research designs; therefore, none will be sought.

  • European Educational Research Association

  • American Educational Research Association

  • What Works Clearinghouse

  • Campbell Collaboration

  • Education Endowment Foundation

Search for dissertations

The following databases from our electronic search also index dissertations and theses in their catalogs:

  • APA PsycInfo (ProQuest)

  • ERIC (ProQuest)

Additional searches in national repositories will also be conducted to find unpublished or non‐peer‐reviewed content:

  • DIVA – Swedish repository for research publications and theses

  • EThOS – British repository for doctoral theses

  • NDLTD – Networked Digital Library of Theses and Dissertations

  • OATD – Open Access Theses and Dissertations

Citation searching

To identify both published studies and gray literature we will use forward and backward citation‐tracking strategies. The primary strategy is to cite‐track related systematic reviews and meta‐analyses. We also plan to check the citations and reference list of included primary studies for new leads.

Contacts to international experts

Based on our initial screening of existing meta‐analyses mentioned in previous literature, we plan to contact the following international experts in the related area to identify unpublished and ongoing studies:

  • Steve Graham

  • Karen R. Harris

  • Radhika De Silva

  • Mark Torrance

3.5. Data collection and analysis

3.5.1. Selection of studies

The screening procedure will be conducted independently in Rayyan (Ouzzani et al., 2016) by at least two reviewers from the systematic review team. A third reviewer will help resolve any conflicts. Our search and screening procedure will be presented in a flow diagram in accordance with PRISMA (Page et al., 2021).

The screening process will be divided into two stages. The first stage is based on titles and abstracts; the second stage is based on full‐text reads. In the title‐and‐abstract stage, a study will be excluded if the answer to any of questions 1–6 in Supporting Information: Appendix B is "No." If the answers to these questions are "Yes" or "Uncertain," the study will move forward to full‐text reading in the second stage. These questions are based on our PICOS.

In the full‐text screening, each report will be screened again against the first six questions, as well as three additional ones. A study will only be included in the review if the answers to all questions are "Yes." If a study receives the answer "Uncertain," or if there is a disagreement regarding eligibility in the full‐text screening stage, it will be resolved by the review authors. If deemed necessary, the study authors will be contacted to provide more information.

3.5.2. Data extraction and management

Two review authors will independently code and extract data from the included studies. Any disagreement will be resolved through discussion. The extracted data will comprise bibliographical data, participant characteristics, sample size, intervention, control group, research design, and outcomes. All data will be coded in Google Sheets and stored on Google Drive, and the extracted data will also be uploaded to the Open Science Framework. Our codebook is presented in Supporting Information: Appendix C. Before starting the data extraction for the review, we will pilot the codebook on five articles.

In cases where studies have multiple assessors (e.g., two independent raters), the aggregated score will be extracted. If studies report standardized test scores only at an aggregated level (e.g., reading and spelling combined) and we are not able to extract the writing outcome exclusively, the study will be excluded.

If a study has a broader PICOS, for example, students in grades K‐12, only the subset of the sample that is eligible for our review will be extracted. If we cannot extract the eligible subset, the data will be considered missing, and the study will not be included in this review.

3.5.3. Assessment of risk of bias in included studies

As recommended by Cochrane for assessing randomized trials, RoB‐2 will be used to assess the risk of bias (Sterne et al., 2019). Two reviewers from the research group will independently assess the risk of bias; any disagreement will be resolved between the two reviewers. We will use the original version, as well as the test version for cluster‐randomized trials when needed. This tool includes five domains that cover all types of bias currently understood to affect the results:

  1. Bias arising from the randomization process

  2. Bias due to deviations from intended interventions

  3. Bias due to missing outcome data

  4. Bias in the measurement of the outcome

  5. Bias in the selection of the reported result

For quasi‐experimental studies, we will use the tool ROBINS‐I (Sterne et al., 2016). This tool is similar to RoB‐2 but includes seven domains:

  1. Bias due to confounding

  2. Bias in the selection of participants for the study

  3. Bias in the classification of interventions

  4. Bias due to deviations from intended interventions

  5. Bias due to missing data

  6. Bias in the measurement of outcomes

  7. Bias in the selection of the reported results

The response options for each question regarding bias consist of Yes, Probably yes, Probably no, No, and No information. In RoB‐2, the responses add up to a domain‐level judgment about the risk of bias, with the levels low risk of bias, some concerns, and high risk of bias. In ROBINS‐I, the responses add up to low, moderate, serious, or critical risk of bias. We do not expect to find many studies with a low risk of bias; therefore, we will conduct our main analysis on studies with low and moderate risk of bias. However, a sensitivity analysis will be performed that also includes studies with a serious risk of bias in ROBINS‐I and a high risk of bias in RoB‐2. Any primary study assessed as having a critical risk of bias will be removed from the analysis, in line with the recommendations (Sterne et al., 2016).

3.5.4. Measures of treatment effect

Based on our initial screening of the literature, we expect all core outcomes to be continuous variables reported as means and standard deviations for the pre‐test, post‐test, and, for some studies, follow‐up(s). We will extract a single mean and standard deviation for each core outcome: aggregated across all students (i.e., the main reporting of all studies), for typically performing students (for studies that report this), and for the subgroup of struggling writers separately (for studies that report this). If several assessments of the same core outcome are presented, we will extract the focal test; if a focal test cannot be determined, one randomly selected test score will be extracted. The reason is to remove the dependency that can occur when several effect sizes are extracted for the same outcome (Hedges et al., 2010).

Next, we will calculate the standardized mean difference (SMD) between the intervention and control groups at post‐test, as well as at any follow‐up test, using the escalc function in metafor with the measure argument "SMD" (i.e., Hedges' g; Viechtbauer, 2010). Effect sizes will be coded so that positive values favor the intervention group. The full R code can be found in Supporting Information: Appendix D.
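The full analysis code is in R (metafor; Supporting Information: Appendix D). As an illustrative sketch only, the effect size computation can be expressed as follows in Python, with hypothetical summary statistics; the correction factor is the common approximation 1 − 3/(4(n₁ + n₂) − 9).

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction.

    Positive values favor group 1 (here, the intervention group).
    """
    # Pooled standard deviation, weighted by degrees of freedom
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # small-sample correction factor
    return j * d

# Hypothetical post-test means/SDs for an intervention vs. control arm
g = hedges_g(m1=14.2, sd1=4.0, n1=25, m2=12.0, sd2=4.0, n2=25)
```

Coding effects this way means a study where the intervention group scores higher yields a positive g, matching the sign convention stated above.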

3.5.5. Unit of analysis issues

Because the interventions included in this review are always conducted in the regular classroom setting, the studies will most likely be cluster‐randomized trials or clustered quasi‐experimental trials. However, the clustering may be at the level of classrooms or of schools. According to the Cochrane Handbook, the ideal information to extract from a cluster‐randomized trial is a direct estimate of the required measure from an appropriate analysis, such as multilevel modeling or generalized estimating equations. However, there is currently no explicit reporting standard in the literature, meaning that we cannot expect to find comparable metrics to extract (e.g., studies might use different standardizers when calculating effect sizes from multilevel models). Instead, we will use the alternative approach of adjusting for the design effect (Higgins et al., 2008). The main idea is to correct the sample size of each clustered trial by dividing it by the design effect. The design effect is calculated as: 1 + (M − 1) × ICC; where M = average cluster size and ICC = intracluster correlation coefficient (Hedges, 2007).
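The design‐effect adjustment above can be sketched in a few lines; the numbers below are hypothetical and serve only to show the arithmetic.

```python
def effective_sample_size(n, avg_cluster_size, icc):
    """Adjust a clustered trial's sample size by the design effect:
    DE = 1 + (M - 1) * ICC (Hedges, 2007)."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n / design_effect

# Hypothetical arm: 200 students in classes of about 25, default ICC = 0.10
# DE = 1 + 24 * 0.10 = 3.4, so the effective sample size is 200 / 3.4
n_eff = effective_sample_size(200, 25, 0.10)
```

With an ICC of zero the design effect is 1 and the sample size is unchanged, which is why the correction only bites when clustering is substantial.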

In cases where the ICC is not reported, we will use an external ICC. The closest estimates we could find are provided by Hedges and Hedberg (2007) and are based on reading performance at the school level. The ICC varies considerably between school grades, and there is no exact figure matching grades 7–12, but based on the range of reported ICCs we consider 0.1 a reasonable, albeit rough, default estimate; it is also consistent with the findings of Ahn et al. (2012). We will use the same estimate for classroom clusters. We will also conduct sensitivity analyses based on assumed ICCs of 0.05 and 0.2.

In cases where a study consists of several groups that use different types of interventions using strategy instruction, we will analyze these groups separately.

3.5.6. Dealing with missing data

If the selected studies do not provide the statistics (M and SD) needed to estimate the effect size, or other information that is part of our code sheet (e.g., the country in which the study was conducted), we will request the missing information from the authors of the primary studies. If we still cannot obtain the information, the study will be included in the review but excluded from any analysis that requires that information (e.g., missing outcomes mean exclusion from the meta‐analytical synthesis).

3.5.7. Assessment of heterogeneity

Heterogeneity will be assessed based on the random‐effects model (in metafor; see Supporting Information: Appendix D for the full code). In the random‐effects model, the studies are assumed to be sampled from a larger population of possible studies. This population is assumed to be normally distributed with variance τ², that is, N(µ, τ²). Hence, τ² is a direct estimate of the heterogeneity. This also means that τ is the standard deviation of this population of possible studies, which is, in our opinion, easier to interpret because it is on the same metric as the outcome (SMD). For example, if the SMD is 0.5 with a τ of 0.25, the between‐study standard deviation is, on average, half as large as the effect itself. We will also calculate the I² statistic, as it is informative to know how much of the variance across studies is due to heterogeneity rather than sampling variance.
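To make the τ²/I² distinction concrete, here is an illustrative sketch with hypothetical effect sizes. It uses the DerSimonian–Laird moment estimator for simplicity; the review's actual analysis will use REML in metafor, which generally gives slightly different estimates.

```python
import numpy as np

def dl_tau2_i2(effects, variances):
    """DerSimonian-Laird estimates of tau^2 (between-study variance)
    and I^2 (% of total variance due to heterogeneity).

    Illustration only; the review itself fits a REML random-effects
    model in metafor.
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)        # inverse-variance pooled mean
    q = np.sum(w * (y - mu_fixed) ** 2)         # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # truncated at zero
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return tau2, i2

# Hypothetical SMDs with equal sampling variances
tau2, i2 = dl_tau2_i2([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
```

Note that τ (the square root of τ²) is on the SMD scale, whereas I² is a percentage of total observed variance, which is why the two answer different questions.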

3.5.8. Assessment of reporting biases

To examine possible publication bias, we will use a contour‐enhanced funnel plot, Egger's test (Egger et al., 1997), and PET‐PEESE (Stanley & Doucouliagos, 2014) to test and adjust for small‐study bias (which is often, but not always, due to publication bias). We will also use 3PSM (Iyengar & Greenhouse, 1988), which, in contrast, is a selection model (Hedges, 1984; Hedges & Vevea, 1996; Vevea & Woods, 2005) that adjusts study weights to account for the possibility that non‐significant studies are missing from the literature.

Simulation studies made by Carter et al. (2019), Stanley (2017), and McShane et al. (2016), suggest that these publication bias methods require a large number of studies (e.g., K > 20) to perform well in terms of accuracy of adjustment and power. We will thus not apply these methods (except the funnel plot) if K < 10, and still remain cautious about the result if K < 20. It might be tempting to resolve this by pooling across both types of designs. However, we argue that would be a mistake, as it would risk confounding small‐sample bias with design, as RCT studies usually have smaller sample sizes, and are also arguably more likely to be published regardless of whether results are statistically significant or not. Further, that would also mean that we would estimate an effect size that does not properly match our research questions.
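As a sketch of the regression‐based adjustments above, the following Python code computes PET and PEESE intercepts from hypothetical effect sizes and standard errors. PET regresses effects on standard errors and PEESE on variances, both by inverse‐variance weighted least squares; each intercept is a small‐study‐adjusted estimate of the mean effect, and testing the PET slope against zero corresponds to Egger's test. No inference is implemented here; the review will use established R implementations.

```python
import numpy as np

def wls_fit(y, x, w):
    """Weighted least-squares fit of y = b0 + b1 * x; returns (b0, b1)."""
    sw = np.sqrt(w)
    X = np.column_stack([np.ones_like(x), x]) * sw[:, None]
    b, *_ = np.linalg.lstsq(X, y * sw, rcond=None)
    return b[0], b[1]

def pet_peese(effects, ses):
    """PET and PEESE intercepts (Stanley & Doucouliagos, 2014), plus the
    PET slope (the small-study-asymmetry term tested by Egger's test)."""
    y = np.asarray(effects, dtype=float)
    se = np.asarray(ses, dtype=float)
    w = 1.0 / se**2                      # inverse-variance weights
    pet_b0, pet_slope = wls_fit(y, se, w)      # PET: regress on SE
    peese_b0, _ = wls_fit(y, se**2, w)         # PEESE: regress on variance
    return pet_b0, peese_b0, pet_slope

# Hypothetical data with no effect-SE relation (no small-study bias)
pet, peese, slope = pet_peese([0.5, 0.5, 0.5], [0.1, 0.2, 0.3])
```

When effect sizes are unrelated to their standard errors, as in the toy data, both intercepts recover the unadjusted mean and the slope is zero.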

3.5.9. Data synthesis

We will conduct separate analyses for RCTs and quasi‐experimental designs to estimate the effects for each study design. As we expect heterogeneity in the underlying effect sizes across studies (e.g., due to different class sizes, grades, and resources), our main synthesis will be a random‐effects meta‐analysis (Borenstein et al., 2009). All effect sizes will be presented as Hedges' g because we expect a small number of included studies (K < 20); Hedges' g applies a small‐sample correction to the standardized mean difference, which is computed with the pooled standard deviation (Borenstein et al., 2009).

Hedges' g = (M1 − M2) / SDpooled,

where:

M1 − M2 = the difference in group means,

SDpooled = the pooled, weighted standard deviation.

We will use the restricted maximum likelihood (REML) estimator, and our alpha level will be set to 0.05. The results will also be illustrated in a forest plot along with 95% confidence intervals. We will conduct the meta‐analysis in the R package metafor (Viechtbauer, 2010). The full code can be found in Supporting Information: Appendix D.

3.5.10. Subgroup analysis and investigation of heterogeneity

There will be a separate contrast analysis for all students, typically developing students, as well as struggling writers.

For our moderator analysis question, we aim to conduct a subgroup analysis of the different types of interventions. However, we consider this exploratory since the analysis is dependent on how many studies use a specific method, and we'll have to decide in a later stage on what type of comparison (if any) is meaningful.

3.5.11. Sensitivity analysis

We plan to compile a summary table of our sensitivity analyses, including those mentioned above: the SMD under assumed ICC values of 0.05 and 0.2, and the analysis that includes studies assessed as having a high risk of bias. We will also perform a leave‐one‐out analysis to examine whether the estimated results are robust to the influence of any single study.
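The leave‐one‐out logic can be sketched as below with hypothetical effect sizes. For brevity the sketch pools with fixed‐effect inverse‐variance weights; the actual analysis will use metafor's leave1out() on the fitted random‐effects model.

```python
import numpy as np

def leave_one_out_means(effects, variances):
    """Inverse-variance pooled estimate recomputed with each study
    removed in turn. A large shift for one omission flags an
    influential study."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # drop study i
        pooled.append(np.sum(w[mask] * y[mask]) / np.sum(w[mask]))
    return pooled

# Hypothetical SMDs with equal variances
loo = leave_one_out_means([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
```

If all leave‐one‐out estimates stay close to the full‐sample estimate, the synthesis is not driven by any single study.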

3.5.12. Summary of findings and assessment of the certainty of the evidence

A summary of findings will be presented in the review including:

  • Population

  • Setting

  • Intervention

  • Comparison

  • All core outcomes for all students (% improvement)

  • All core outcomes for struggling writers (% improvement)

  • Number of participants

  • Bias Check overview: RoB2 or ROBINS‐I

  • Comments

  • Explanations

CONTRIBUTIONS OF AUTHORS

  • Content: André Kalmendal

  • Systematic review methods: André Kalmendal, Thomas Nordström, Rickard Carlsson

  • Statistical analysis: André Kalmendal, Rickard Carlsson

  • Information retrieval: Ida Henriksson

DECLARATIONS OF INTEREST

No conflict of interest.

Preliminary timeframe

Approximate date for submission of the systematic review: May 2025.

Supporting information

Supporting information.

CL2-20-e1389-s001.docx (273.9KB, docx)

Supporting information.

CL2-20-e1389-s002.docx (272.1KB, docx)

Supporting information.

CL2-20-e1389-s004.docx (271.8KB, docx)

Supporting information.

CL2-20-e1389-s003.docx (271.4KB, docx)

Kalmendal, A. , Henriksson, I. , Nordström, T. , & Carlsson, R. (2024). Protocol: Strategy instruction for improving short‐ and long‐term writing performance on secondary and upper‐secondary students: A systematic review. Campbell Systematic Reviews, 20, e1389. 10.1002/cl2.1389

REFERENCES

  1. Ahn, S. , Myers, N. D. , & Jin, Y. (2012). Use of the estimated intraclass correlation for correcting differences in effect size by level. Behavior Research Methods, 44(2), 490–502. 10.3758/s13428-011-0153-1 [DOI] [PubMed] [Google Scholar]
  2. Blik, H. , Harskamp, E. G. , & Naayer, H. M. (2016). Strategy instruction versus direct instruction in the education of young adults with intellectual disabilities. Journal of Classroom Interaction, 51(2), 20–35. [Google Scholar]
  3. de Boer, H. , Donker, A. S. , Kostons, D. D. N. M. , & van der Werf, G. P. C. (2018). Long‐term effects of metacognitive strategy instruction on student academic performance: A meta‐analysis. Educational Research Review, 24, 98–115. 10.1016/j.edurev.2018.03.002 [DOI] [Google Scholar]
  4. Borenstein, M. , Hedges, L. V. , Higgins, J. P. , & Rothstein, H. R. (2009). Introduction to meta‐analysis. John Wiley & Sons, Ltd. [Google Scholar]
  5. Bouwer, R. , Koster, M. , & van den Bergh, H. (2018). Effects of a strategy‐focused instructional program on the writing quality of upper elementary students in the Netherlands. Journal of Educational Psychology, 110(1), 58–71. 10.1037/edu0000206 [DOI] [Google Scholar]
  6. Carter, E. C. , Schönbrodt, F. D. , Gervais, W. M. , & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta‐analytic methods. Advances in Methods and Practices in Psychological Science . 10.1177/2515245919847196 [DOI]
  7. Chalk, J. C. , Hagan‐Burke, S. , & Burke, M. D. (2005). The effects of self‐regulated strategy development on the writing process for high school students with learning disabilities. Learning Disability Quarterly, 28(1), 75–87. 10.2307/4126974 [DOI] [Google Scholar]
  8. Collins, A. A. , Ciullo, S. , Graham, S. , Sigafoos, L. L. , Guerra, S. , David, M. , & Judd, L. (2021). Writing expository essays from social studies texts: A self‐regulated strategy development study. Reading and Writing, 34(7), 1623–1651. 10.1007/s11145-021-10157-2 [DOI] [Google Scholar]
  9. Dignath, C. , & Veenman, M. V. J. (2021). The role of direct strategy instruction and indirect activation of self‐regulated learning—Evidence from classroom observation studies. Educational Psychology Review, 33(2), 489–533. 10.1007/s10648-020-09534-0 [DOI] [Google Scholar]
  10. Donker, A. S. , de Boer, H. , Kostons, D. , Dignath van Ewijk, C. C. , & van der Werf, M. P. C. (2014). Effectiveness of learning strategy instruction on academic performance: A meta‐analysis. Educational Research Review, 11, 1–26. 10.1016/j.edurev.2013.11.002 [DOI] [Google Scholar]
  11. Dunn, M. (2012). Response to intervention: Employing a mnemonic‐strategy with art media to help struggling writers. Journal of International Education and Leadership, 2(3), 12. [Google Scholar]
  12. Egger, M. , Smith, G. D. , Schneider, M. , & Minder, C. (1997). Bias in meta‐analysis detected by a simple, graphical test. BMJ, 315(7109), 629–634. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Ennis, R. P. (2016). Using self‐regulated strategy development to help high school students with EBD summarize informational text in social studies. Education and Treatment of Children, 39(4), 545–568. 10.1353/etc.2016.0024 [DOI] [Google Scholar]
  14. Finlayson, K. , & McCrudden, M. T. (2020). Teacher‐implemented writing instruction for elementary students: A literature review. Reading & Writing Quarterly, 36(1), 1–18. 10.1080/10573569.2019.1604278 [DOI] [Google Scholar]
  15. Flay, B. R. , Biglan, A. , Boruch, R. F. , Castro, F. G. , Gottfredson, D. , Kellam, S. , Mościcki, E. K. , Schinke, S. , Valentine, J. C. , & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6(3), 151–175. 10.1007/s11121-005-5553-y [DOI] [PubMed] [Google Scholar]
  16. Gillespie, A. , & Graham, S. (2014). A meta‐analysis of writing interventions for students with learning disabilities. Exceptional Children, 80(4), 454–473. 10.1177/0014402914527238 [DOI] [Google Scholar]
  17. Graham, S. , & Harris, K. R. (1993). Self‐regulated strategy development: Helping students with learning problems develop as writers. The Elementary School Journal, 94(2), 169–181. [Google Scholar]
  18. Graham, S. , & Harris, K. R. (2005). Improving the writing performance of young struggling writers: Theoretical and programmatic research from the center on accelerating student learning. The Journal of Special Education, 39(1), 19–33. 10.1177/00224669050390010301 [DOI] [Google Scholar]
  19. Graham, S. , Liu, X. , Aitken, A. , Ng, C. , Bartlett, B. , Harris, K. R. , & Holzapfel, J. (2018). Effectiveness of literacy programs balancing reading and writing instruction: A meta‐analysis. Reading Research Quarterly, 53(3), 279–304. 10.1002/rrq.194 [DOI] [Google Scholar]
  20. Graham, S. , McKeown, D. , Kiuhara, S. , & Harris, K. R. (2012). A meta‐analysis of writing instruction for students in the elementary grades. Journal of Educational Psychology, 104(4), 879–896. 10.1037/a0029185 [DOI] [Google Scholar]
  21. Harris, K. R. , Graham, S. , Friedlander, B. , & Laud, L. (2013). Bring powerful writing strategies into your classroom! Why and how. The Reading Teacher, 66(7), 538–542. 10.1002/TRTR.1156 [DOI] [Google Scholar]
  22. Harris, K. R. , Graham, S. , & Mason, L. H. (2003). Self‐regulated strategy development in the classroom: Part of a balanced approach to writing instruction for students with disabilities. Focus on Exceptional Children, 35(7), 1–16. [Google Scholar]
  23. Harris, K. R. , Graham, S. , & Mason, L. H. (2006). Improving the writing, knowledge, and motivation of struggling young writers: Effects of self‐regulated strategy development with and without peer support. American Educational Research Journal, 43(2), 295–340. 10.3102/00028312043002295 [DOI] [Google Scholar]
  24. Harris, K. R. , Lane, K. L. , Graham, S. , Driscoll, S. A. , Sandmel, K. , Brindle, M. , & Schatschneider, C. (2012). Practice‐based professional development for self‐regulated strategies development in writing: A randomized controlled study. Journal of Teacher Education, 63(2), 103–119. 10.1177/0022487111429005 [DOI] [Google Scholar]
  25. Hedges, L. V. (1984). Estimation of effect size under nonrandom sampling: The effects of censoring studies yielding statistically insignificant mean differences. Journal of Educational Statistics, 9(1), 61–85. [Google Scholar]
  26. Hedges, L. V. (2007). Effect sizes in cluster‐randomized designs. Journal of Educational and Behavioral Statistics, 32(4), 341–370. 10.3102/1076998606298043 [DOI] [Google Scholar]
  27. Hedges, L. V. , & Hedberg, E. C. (2007). Intraclass correlation values for planning group‐randomized trials in education. Educational Evaluation and Policy Analysis, 29(1), 60–87. 10.3102/0162373707299706 [DOI] [Google Scholar]
  28. Hedges, L. V. , Tipton, E. , & Johnson, M. C. (2010). Robust variance estimation in meta‐regression with dependent effect size estimates. Research Synthesis Methods, 1(1), 39–65. 10.1002/jrsm.5 [DOI] [PubMed] [Google Scholar]
  29. Hedges, L. V. , & Vevea, J. L. (1996). Estimating effect size under publication bias: Small sample properties and robustness of a random effects selection model. Journal of Educational and Behavioral Statistics, 21(4), 299–332. 10.3102/10769986021004299 [DOI] [Google Scholar]
  30. Higgins, J.P. T. , Thomas, J. , Chandler, J. , Cumpston, M. , Li, T. , Page, M. J. , & Welch, V. A. (Eds.). (2022). Cochrane handbook for systematic reviews of interventions version 6.3 (updated February 2022).  Cochrane. http://www.training.cochrane.org/handbook [Google Scholar]
  31. Higgins, J. P. T. , White, I. R. , & Anzures‐Cabrera, J. (2008). Meta‐analysis of skewed data: Combining results reported on log‐transformed or raw scales. Statistics in Medicine, 27(29), 6072–6092. 10.1002/sim.3427 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Inspectie van het Onderwijs . (2010). Het onderwijs in het schrijven van teksten, De kwaliteit van het schrijfonderwijs in het basisonderwijs. Ministerie van Onderwijs, Cultuur en Wetenschap. [Google Scholar]
  33. Iyengar, S., & Greenhouse, J. B. (1988). Selection models and the file drawer problem. Statistical Science, 3(1), 109–117.
  34. Kim, Y.‐S. G. (2020). Structural relations of language and cognitive skills, and topic knowledge to written composition: A test of the direct and indirect effects model of writing. British Journal of Educational Psychology, 90(4), 910–932. 10.1111/bjep.12330
  35. Klein, P., Bildfell, A., Dombroski, J. D., Giese, C., Sha, K. W.‐Y., & Thompson, S. C. (2021). Self‐regulation in early writing strategy instruction. Reading & Writing Quarterly, 38, 101–125. 10.1080/10573569.2021.1919577
  36. Kugley, S., Wade, A., Thomas, J., Mahood, Q., Jørgensen, A. M. K., Hammerstrøm, K., & Sathe, N. (2017). Searching for studies: A guide to information retrieval for Campbell systematic reviews. Campbell Systematic Reviews, 13(1), 1–73. 10.4073/cmg.2016.1
  37. McKeown, M. G., Crosson, A. C., Moore, D. W., & Beck, I. L. (2018). Word knowledge and comprehension effects of an academic vocabulary intervention for middle school students. American Educational Research Journal, 55(3), 572–616. 10.3102/0002831217744181
  38. McShane, B. B., Böckenholt, U., & Hansen, K. T. (2016). Adjusting for publication bias in meta‐analysis: An evaluation of selection methods and some cautionary notes. Perspectives on Psychological Science, 11(5), 730–749. 10.1177/1745691616662243
  39. National Center for Education Statistics. (2012). The nation's report card: Writing 2011. Institute of Education Sciences, US Department of Education.
  40. Ofsted. (2000). Teaching of writing in primary schools: Could do better. Ofsted.
  41. Olson, C. B., & Land, R. (2007). A cognitive strategies approach to reading and writing instruction for English language learners in secondary school. Research in the Teaching of English, 41(3), 269–303.
  42. Ouzzani, M., Hammady, H., Fedorowicz, Z., & Elmagarmid, A. (2016). Rayyan—A web and mobile app for systematic reviews. Systematic Reviews, 5(1), 210. 10.1186/s13643-016-0384-4
  43. Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo‐Wilson, E., McDonald, S., … McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ, 372, n160. 10.1136/bmj.n160
  44. Perry, N. E., & Rahim, A. (2011). Studying self‐regulated learning in classrooms. In B. J. Zimmerman & D. H. Schunk (Eds.), Handbook of self‐regulation of learning and performance (pp. 122–136). Routledge/Taylor & Francis Group.
  45. Plonsky, L. (2011). The effectiveness of second language strategy instruction: A meta‐analysis. Language Learning, 61(4), 993–1038. 10.1111/j.1467-9922.2011.00663.x
  46. See, B. H., & Gorard, S. (2020). Effective classroom instructions for primary literacy: A critical review of the causal evidence. International Journal of Educational Research, 102, 101577. 10.1016/j.ijer.2020.101577
  47. Shaw, S. R., & Pecsi, S. (2021). When is the evidence sufficiently supportive of real‐world application? Evidence‐based practices, open science, clinical readiness level. Psychology in the Schools, 58(10), 1891–1901. 10.1002/pits.22537
  48. De Silva, R. (2015). Writing strategy instruction: Its impact on writing in a second language for academic purposes. Language Teaching Research, 19(3), 301–323. 10.1177/1362168814541738
  49. De Smedt, F., & Van Keer, H. (2018). Fostering writing in upper primary grades: A study into the distinct and combined impact of explicit instruction and peer assistance. Reading and Writing, 31(2), 325–354. 10.1007/s11145-017-9787-4
  50. Stanley, T. D. (2017). Limitations of PET‐PEESE and other meta‐analysis methods. Social Psychological and Personality Science, 8(5), 581–591. 10.1177/1948550617693062
  51. Stanley, T. D., & Doucouliagos, H. (2014). Meta‐regression approximations to reduce publication selection bias. Research Synthesis Methods, 5(1), 60–78. 10.1002/jrsm.1095
  52. Statens beredning för medicinsk och social utvärdering (SBU) [The Swedish Council on Health Technology Assessment]. (2014). Dyslexi hos barn och ungdomar: Tester och insatser; en systematisk översikt [Dyslexia in children and adolescents: Tests and interventions; a systematic review]. Report 225. ISBN 978‐91‐85413‐66‐9.
  53. Sterne, J. A., Hernán, M. A., Reeves, B. C., Savović, J., Berkman, N. D., Viswanathan, M., Henry, D., Altman, D. G., Ansari, M. T., Boutron, I., Carpenter, J. R., Chan, A.‐W., Churchill, R., Deeks, J. J., Hróbjartsson, A., Kirkham, J., Jüni, P., Loke, Y. K., Pigott, T. D., … Higgins, J. P. (2016). ROBINS‐I: A tool for assessing risk of bias in non‐randomised studies of interventions. BMJ, 355, i4919. 10.1136/bmj.i4919
  54. Sterne, J. A. C., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., Cates, C. J., Cheng, H.‐Y., Corbett, M. S., Eldridge, S. M., Emberson, J. R., Hernán, M. A., Hopewell, S., Hróbjartsson, A., Junqueira, D. R., Jüni, P., Kirkham, J. J., Lasserson, T., Li, T., … Higgins, J. P. T. (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ, 366, l4898. 10.1136/bmj.l4898
  55. Sundeen, T. (2007). The effect of prewriting strategy instruction on the written products of high school students with learning disabilities. University of Central Florida.
  56. Torrance, M., Fidalgo, R., & García, J.‐N. (2007). The teachability and effectiveness of cognitive self‐regulation in sixth‐grade writers. Learning and Instruction, 17(3), 265–285. 10.1016/j.learninstruc.2007.02.003
  57. UNESCO. (2014). Teaching and learning: Achieving quality for all. UNESCO.
  60. Woolfolk, A. (2021). Educational psychology (14th ed.). Pearson Education.
  61. Zimmerman, B. J. (2002). Becoming a self‐regulated learner: An overview. Theory Into Practice, 41(2), 64–70. 10.1207/s15430421tip4102_2

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Supporting information.

CL2-20-e1389-s001.docx (273.9KB, docx)

CL2-20-e1389-s002.docx (272.1KB, docx)

CL2-20-e1389-s004.docx (271.8KB, docx)

CL2-20-e1389-s003.docx (271.4KB, docx)

Articles from Campbell Systematic Reviews are provided here courtesy of Wiley