PLOS One. 2022 Aug 4;17(8):e0271621. doi: 10.1371/journal.pone.0271621

Instructing children to construct ideas into products alters children’s creative idea selection in a randomized field experiment

Kim van Broekhoven*, Barbara Belfi, Lex Borghans
Editor: Sergio Agnoli
PMCID: PMC9352014  PMID: 35925913

Abstract

Many popular pedagogical approaches instruct children to construct their ideas into tangible and physical products. With the prospect of implementation, do children decide to go for the most creative ideas, or do they shift towards ideas that are perhaps less creative but easier to construct? We conducted a field experiment to test whether expected construction affects children’s creative idea selection. In this experiment, 403 children were asked to select the most original ideas to make a toy elephant more fun to play with. We randomly assigned them to a treatment condition, in which they were informed they had to construct one of the original ideas that they selected, or a control group, in which children were informed that, after idea selection, they had to perform another task. Children who were instructed to construct the selected idea into a tangible product turned a blind eye to original ideas and preferred the more feasible ideas. Thus, pedagogical approaches that aim to stimulate creativity by instructing children to construct original ideas into tangible and physical products may unintentionally change children’s choices for creative ideas. This finding highlights the importance for educators of guiding children’s decision-making process in creative problem solving, and of being aware of children’s bias against original ideas when designing creative assignments for them.

Introduction

To develop children’s creativity, constructivist pedagogies have become increasingly widespread in primary schools (e.g., Montessori education and project- or research-based learning [1]). Constructivism emphasizes the importance of the learner being actively involved in the learning process, and educational science has established that children learn better when they develop external representations or products of their constructed knowledge [2, 3]. As such, an important characteristic of constructivist pedagogies is that children conclude their projects by reflecting their understanding, knowledge, and ideas in the construction of a final, concrete product, such as a prototype [4–6]. Further, the resulting products are popular means to assess creativity, as they may be seen as the universal language of all children, irrespective of their literacy skills, age, nationality, or intelligence [7–9]. The prospect of having to build a product based on an original idea might, however, make children hesitant to select such ideas, because original ideas may be more of a challenge to actually build; this may be detrimental to children’s creativity development. To investigate this, we tested whether instructing children to construct original ideas into tangible products alters children’s creative idea selection.

In this study, creativity was conceptualized as a set of specific characteristics of the ideas that children selected; these characteristics include originality and usefulness [10]. Originality refers to an idea being new and unusual, and usefulness refers to an idea being potentially feasible or valuable [11]. Originality is seen as the most important aspect of creativity because something must be original in some way to be considered creative [12]. While psychological and educational studies on creativity focus mainly on the generation of original and useful ideas [12], management and business studies on innovation focus mainly on the implementation of such ideas [13]. The present study takes a multidisciplinary perspective by linking these two approaches, acknowledging that creativity starts with the generation of ideas, which are then typically implemented. Yet, to move from creativity to innovation, the most creative ideas must be selected, and the prospect of implementation itself may affect idea selection. The terms implementation, construction, and building are used interchangeably in this article. This leads to the question of how the expected implementation of ideas affects children’s idea selection.

Several findings in the social psychology literature suggest that extrinsic constraints, such as expected evaluation of ideas, hinder people’s creativity [e.g., 14–17]. This may be so for several reasons. First, it has been argued that exerting external pressure and constraints may reduce the fun of performing a task, which in turn may reduce intrinsic motivation and, subsequently, creativity [15]. This line of research has shown that people are more likely to produce creative work when they are intrinsically motivated, because they are free of extraneous concerns about contextual conditions and are able to concentrate their attention on the task itself [16]. Arguably, the expected implementation of ideas is a form of extrinsic constraint as well, because children observe each other’s attempts to build ideas into concrete products. Consequently, they may fear building unorthodox ideas that may fail or be ridiculed by their peers. Hence, expected implementation may exert external pressure on children’s attempt to implement an idea in practice, and subsequently on their idea selection. Further, it has been found that people have a deep-seated desire to maintain a sense of certainty and to preserve the familiar [18]. In idea selection, people often favor the more common ideas due to the perceived risk associated with creative ideas, as creative ideas are by definition uncertain because they are often new and untested [19]. According to the novelty-usefulness trade-off, creative ideas are often original, and the more original an idea is, the greater the perceived risk that it will not work in practice [20], creating doubt as to whether the idea can be realized [12]. As such, children may fear failure in building physical products out of original ideas. For these reasons, simply instructing children to select original ideas—without having to build their chosen idea into a tangible product—may already be a challenging task.
We further theorize that this may become even more problematic if teachers additionally instruct children to build original ideas into tangible products, because this invites the question of whether their idea can actually be built in practice.

Based on previous literature, we hypothesize that (i) children who are instructed to implement ideas are more likely to reject highly original ideas, and (ii) these children instead select more feasible ideas.

A prior laboratory experiment showed that another extrinsic constraint, expected evaluation, caused undergraduates to make their ideas more feasible [17]. While laboratory experiments elicit the pressure of extrinsic constraints, they generally have low ecological validity because the experiment is done in an artificial environment. Field settings, in contrast, allow extrinsic constraints to be elicited in a more realistic context: the actual classroom. Therefore, we ran a field experiment in which children in their natural school environment were not only asked to select the most original ideas from a set of ideas, but were also asked to actually build their selected ideas after selection. Research in developmental psychology has shown that children aged 10 to 13 show an increasing degree of conformist thinking that continues through high school [21–23]. Due to this increasingly conformist way of thinking, the consequences of extrinsic constraints such as actually constructing an idea into a concrete product could be even more detrimental to children’s creativity in a natural school classroom environment (where children observe each other’s attempts to build ideas into concrete products).

Methods and materials

Participants

Before data collection, an a priori power analysis revealed that a minimum of 388 participants would be required to obtain a statistical power of 0.99 with an independent-samples t-test [G*Power 3.1; 24]. We recruited slightly more participants to compensate for drop-outs due to potential technical issues (e.g., problems with internet connection in the school). Data were collected from 403 children from 13 primary schools in the Netherlands between February and June of 2019. The children (49.9% girls) attended grade 6 (last grade of primary school) and were aged between 10 and 13 years (M = 11.6, SD = 0.48).
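An a priori power analysis of this kind can be reproduced in statsmodels. The paper reports only the required N (388) and power (0.99) from G*Power 3.1; the effect size (d = 0.44) and alpha (0.05, two-sided) below are illustrative assumptions chosen to show the shape of the computation, not the authors' actual inputs.

```python
# Illustrative re-creation of an a priori power analysis for an
# independent-samples t-test. Effect size and alpha are assumed values.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.44, alpha=0.05,
                                          power=0.99,
                                          alternative='two-sided')
total_n = 2 * n_per_group  # two equal-sized groups
```

With these assumed inputs the required total sample lands near the 388 the paper reports; in practice the computation is run before data collection with the smallest effect size the researchers care to detect.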

This study was reviewed and approved by the Ethical Review Committee Inner City Faculties from Maastricht University (ERIC_090_14_06_2018). Families of children in participating schools received a letter describing the project, and their consent was obtained through passive consent, wherein they had the opportunity to ask for their child not to be included in the project. The research team received withdrawal requests for two children.

Procedure

For our field experiment, children filled out an online assignment in their natural school environment as part of their daily school program (see S1 Appendix in S1 File for online assignment). Both the teacher and a researcher—who had been introduced as a substitute teacher—were present. The assignment consisted of three tasks and took in total 20 minutes to complete (see Fig 1 for experimental design). Both the researcher and the teacher walked around to answer questions about possible ambiguities.

Fig 1. The experimental design.


In the online assignment, children were first asked to evaluate 20 ideas for improving a stuffed toy elephant from a pre-defined list, such as enlarging the toy elephant or creating a toy elephant that is able to spit fire, in terms of feasibility and originality. Next, all children were asked to generate as many ideas as possible for toys for monkeys in the zoo. After this task, children were randomly assigned to either the experimental condition or the control condition. Children in the experimental condition (N = 201) were told, through written instruction on the computer screen, to expect future implementation of their selected ideas:

“A toy factory needs your help! The toy factory makes toy animals, such as elephants, dogs, rabbits and so on. They would like to receive original ideas to change a toy elephant. They will first test these ideas on a toy elephant made of paper. You will build these ideas.”

In contrast, children in the control condition (N = 202) were told:

“A toy factory needs your help! The toy factory makes toy animals, such as elephants, dogs, rabbits and so on. They would like to receive original ideas to change a toy elephant. They will first test these ideas on a toy elephant made of paper. You will NOT build these ideas, because you will be building ideas for monkey toys.”

After reading these instructions, the children were asked to select five original ideas to improve a toy elephant. From these five ideas, they had to select the two most original ideas. Thus, the manipulation was that some children were told that they later had to implement (i.e., build) these ideas, while other children were told that they would not have to implement these ideas. The control condition received an additional task to select ideas to build a toy for monkeys in the zoo (see S1 Appendix in S1 File for complete materials for both conditions).

Teachers were provided with building materials for the children. This pack of materials included colored pencils, paper, scissors, glue, foam balls, magnets, iron wire, string, wool, paper clips, water, bouncy balls, bandages, plastic straws, tubes, and bags. At the end of this online assignment, children had to build their selected ideas (see Fig 2 for two examples of final products for the stuffed toy elephant). After this building exercise, the experiment ended and the children went back to their normal school program.

Fig 2. Two examples of final products for the toy elephant.


Picture A presents the idea of ‘make the elephant soft’, and picture B presents the idea of ‘make the elephant in such a way that it can fly’.

Measures

Idea pool

As part of this study, 36 grade-6 pupils from a previous cohort had generated ideas to improve a stuffed toy elephant as part of the Torrance Test of Creative Thinking [TTCT; 25]. This product improvement task resulted in 438 ideas. This number was reduced to 369 by excluding ideas that involved non-play uses, such as making the elephant alive or using it as a pincushion. The ideas were then further reduced to a list of 62 ideas by excluding ideas that were similar (i.e., the ideas “make it bigger” and “making an XL elephant” were collapsed into one idea, “enlarge the toy elephant”). Next, the remaining 62 ideas were rated by seven experts (i.e., four primary school teachers and three creativity researchers). The experts were instructed to rate each idea on feasibility and originality using a Likert scale ranging from 1 (not at all feasible/original) to 5 (very feasible/original). To reduce the list of 62 ideas even further, a random set of 20 ideas was selected to be presented to the children in the experiment (see S2 Appendix in S1 File). These 20 pre-defined ideas varied in creativity in a 2 (originality: low, high) × 2 (feasibility: low, high) design, as a creative idea has to be (a) original and (b) feasible [10]. Interrater reliability for this list of 20 pre-defined ideas was high: The overall intraclass correlation coefficient (ICC, two-way random, consistency analysis) was 0.90, and the single interrater reliabilities were excellent (feasibility ICC = 0.94 and originality ICC = 0.86). By averaging the scores of the seven experts, each single idea received a feasibility and originality score.

Idea evaluation task

From this list of 20 pre-defined ideas for the stuffed toy elephant, children were asked to evaluate each idea on its feasibility (with a 5-point rating scale). As in Charles and Runco [26], children were asked to select the one of five faces that best showed what they thought of the idea: (1) frown = very difficult to build this idea; (2) slight frown = difficult to build this idea; (3) no expression = not difficult but also not easy to build this idea; (4) slight smile = easy to build this idea; (5) smile = very easy to build this idea. Next, the originality evaluation task was administered with a 10-point rating scale. The children were asked to estimate a hypothetical number of children (between 1 and 10) able to generate a given idea [26]. Hence, the originality of an idea was operationalized as the degree to which children thought that a number of children, out of 10 children in total, would have generated a similar idea. This measure is often used in creativity research to determine the originality of ideas [26–31], and the pilot study showed that children were better able to assess the originality of ideas in this way.

Idea selection task

After the evaluation task, children were asked to select the five most original ideas from the list of pre-defined ideas for the stuffed toy elephant. From these five ideas, they had to select the two most original ideas (original ideas were defined as ideas that children would rarely see). Idea selection performance was measured as the children’s own feasibility and originality ratings of each selected idea. In addition to children’s own ratings, we also have the average feasibility and originality ratings of the seven experts.

Control variables

To test whether our findings are consistent across all children, we measured demographic and psychological variables that prior studies reported as relevant [32, 33]. Children’s demographic variables (i.e., children’s gender, age, ethnicity, socioeconomic status, prior level of achievement) originate from data from the Onderwijs Monitor Limburg. This is a large cooperative project between Maastricht University and schools, school boards, and local government. This data collection aims in particular to collect and analyze information about the educational development of students to foster educational improvement. The data contain school administrative data—concerning each child’s gender, date of birth, ethnicity, parental educational level, and school site—and report grade for grade 6. Prior to the experiment, children’s psychological variables were measured (i.e., risk preference and personality traits) in another online questionnaire (see S3 Appendix in S1 File). Risk preference was measured using the Risk Taking 10-item scale from the Jackson Personality Inventory [JPI; 34]. Children received adjusted statements suitable for children (presented in random order). Sample statements include: “I take risks” and “I like adventure.” Children rated how well each statement describes themselves on a Likert scale, ranging from 1 (very inaccurate) to 5 (very accurate). Scale reliability (Cronbach’s alpha) in this study was good (α = 0.84). Personality traits were measured using the 50-item version of the International Personality Item Pool [IPIP; 35]. For each personality trait, children received 10 adjusted statements suitable for children (presented in random order). Sample statements include: “I am bursting with ideas” and “I am always prepared.” Children rated how well each statement describes themselves on a Likert scale, ranging from 1 (very inaccurate) to 5 (very accurate). 
Scale reliability (Cronbach’s alpha) in this study was good: openness to experience (α = 0.75), conscientiousness (α = 0.80), agreeableness (α = 0.74), extraversion (α = 0.71) and emotional stability (α = 0.82).

Data analysis

To investigate whether children select less original, but more feasible ideas, we needed both variables on the same five-point scale. For this, we transformed the originality rating for each idea from the ten-point Likert scale to a five-point Likert scale. This was done by first dividing the originality evaluation ten-point Likert scale by two (range from 0.5 to 5). Next, we reversed these values so that higher values indicated higher originality. Subsequently, we tested whether children’s idea selection performance varied by condition in univariate ordinary least squares regressions, with children’s average rating for the two selected ideas as outcome variables separately for feasibility and originality:

Y_ija = β0 + β1·treatment_ij + e_ij

where Y_ija is the outcome of child i in group j (j = 0 for control, j = 1 for treated) and a refers to the average feasibility or originality rating by child i for the two selected ideas. As such, Y_ija indicates children’s average own rating of the two selected ideas, separately for feasibility and originality. The variable of interest, treatment_ij, is a binary variable that takes the value 1 for children who expected implementation of their selected ideas, and 0 for children who did not. Lastly, e_ij is a normally distributed residual with zero mean and constant variance σ_e² [e.g., 36–38]. As a robustness check, the average rating of the seven experts was also used to measure idea selection performance.

To test whether these findings were consistent across children’s gender, age, ethnicity, socioeconomic status, prior level of achievement, and psychological variables (i.e., risk preference and personality traits), we performed multivariate ordinary least squares regressions with children’s average rating for the two selected ideas as outcome variables separately for feasibility and originality, where we controlled for demographic and psychological variables:

Y_ija = β0 + β1·treatment_ij + β2·D_ij + β3·P_ij + e_ij,

where Y_ija is the outcome of child i in group j (j = 0 for control, j = 1 for treated) and a refers to feasibility or originality. In addition to the terms in the univariate ordinary least squares regression, D_ij refers to demographic variables (i.e., children’s gender, age, ethnicity, socioeconomic status, prior level of achievement) and P_ij refers to psychological variables (i.e., risk preference and personality traits).

Since there is a negative correlation between feasibility and originality, a reduction in the originality of selected ideas under expected implementation might be a direct result of increased feasibility. To check whether children really reduce the originality of the idea more than would be expected based on this correlation alone, we ran a conditional logit model that simultaneously estimates the effects of children’s feasibility and originality ratings on their selection of ideas [39]. In this way, we partial out the feasibility rating from the originality rating. In brief, each child chooses two ideas from a set of 18 ideas to improve a stuffed toy elephant. The probability that child i chooses k among J alternatives is:

Pr(i chooses k) = Pr(V_ik > V_ij)  ∀ j ≠ k, j = 1, …, J

In general, the utility of alternative j for child i is given by:

V_ij = x_ij′β + v_ij,  i = 1, …, n; j = 1, …, J

where x_ij represents the variation of the feasibility and originality ratings across ideas. These ratings interact with the treatment (i.e., expected implementation or non-expected implementation). All variables must vary across ideas (or alternatives) to achieve identification in the conditional logit model. Therefore, as treatment is alternative-invariant, it can be included in the model only as an interaction with the characteristics of the alternative (i.e., feasibility and originality). Specifically, the interaction terms included are:

  • Treatment_i × Feasibility_ij

  • Treatment_i × Originality_ij

Results are reported as odds ratios. These should be interpreted as the proportional change in the odds of child i selecting idea k for a unit increase in the treatment variable, holding all other variables constant. This means that we can draw conclusions about the probability for children in the expected implementation condition and children in the non-expected implementation condition to select an idea given its feasibility and originality. It is important to be clear about what is meant by “change in the odds” in these models. This is based on the number of children making a particular choice (i.e., selection of two ideas) while accounting for the number of alternatives available within that choice set (i.e., 18 other ideas). Thus, when we say that the probability of selecting a feasible idea is higher among children in the expected implementation condition than children with no such expectation, this is after accounting for the total number of ideas in the total set.
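A conditional logit of this form can be fit with statsmodels' ConditionalLogit, which conditions out child-specific factors by grouping observations per child. The sketch below is a hedged illustration on simulated data, not the authors' analysis: the utility weights, group sizes, and idea counts are invented, and the top-2 choice rule is a simplification of the actual design.

```python
# Sketch of a conditional logit choice model: each simulated child picks
# 2 of several ideas; feasibility is preferred, originality penalized.
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(7)
n_children, n_ideas = 200, 6
y, X, groups = [], [], []
for child in range(n_children):
    feas = rng.uniform(1, 5, n_ideas)
    orig = rng.uniform(1, 5, n_ideas)
    # assumed true utility: positive weight on feasibility, negative on
    # originality, plus Gumbel taste shocks (logit assumption)
    util = 1.0 * feas - 0.5 * orig + rng.gumbel(size=n_ideas)
    chosen = np.zeros(n_ideas)
    chosen[np.argsort(util)[-2:]] = 1          # each child selects 2 ideas
    y.extend(chosen)
    groups.extend([child] * n_ideas)
    X.extend(np.column_stack([feas, orig]))

model = ConditionalLogit(np.array(y), np.array(X), groups=np.array(groups))
res = model.fit()
odds = np.exp(res.params)  # odds ratios, reported as in Table 1
```

Note that no constant or alternative-invariant regressor appears in X: as the text explains, such terms drop out of the conditional likelihood, which is why treatment enters the real model only through interactions.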

Results

The manipulation was successful: 93% of children in the expected implementation condition expected to construct ideas for a toy elephant in contrast to 10% in the non-expected implementation condition (F(1, 211) = 466.49, p < 0.001, Cohen’s d = 2.969) (S1 Table in S1 File).

To investigate whether expected construction affects children’s creative idea selection, we compared children’s self-rated levels of feasibility and originality of their two selected ideas between the treatment and control group. We analyzed differences between the experimental and control group using ordinary least squares regression, and analyzed differences between the experimental and control group controlling for demographic and psychological variables as a robustness check (S2 Table in S1 File). In these analyses, degrees of freedom varied slightly across analyses because some children did not fill in the questionnaire containing the psychological variables. Furthermore, to understand children’s trade-off between feasibility and originality when choosing a creative idea, we ran a conditional logit model that simultaneously estimated the effects of children’s feasibility and originality ratings on their selection of ideas.

Instructional effects on creative behavior were found to be reliable and large in magnitude [40, 41]. Compared with those in the control group, children who expected implementation of ideas selected more feasible ideas; the Cohen’s d effect size is 1.028 (B = 1.16, SDtreatment = 1.12, SE = 0.11, 95% CI = 0.94 to 1.38, N = 403, t(401) = 10.31, p < 0.001) (Fig 3). However, these children who expected implementation selected significantly less original ideas; the Cohen’s d effect size is small to medium: 0.377 (B = -0.43, SDtreatment = 1.23, SE = 0.12, 95% CI = -0.66 to -0.21, N = 403, t(401) = -3.78, p < 0.001) (Fig 3). These results are robust for the selection of five original ideas as well.

Fig 3. Estimated effect of expected implementation on children’s feasibility and originality ratings.


This figure summarizes the intervention’s effect on children’s feasibility and originality ratings. Error bars reflect 95% confidence intervals.

More specifically, the conditional logit models show that, when making a decision about the level of creativity of an idea, the expectation of idea implementation increased the odds of choosing a feasible idea by 103%, while the odds of choosing an original idea declined by 14% (Table 1).

Table 1. Idea selection conditional on alternative ideas (odds ratios and Z-statistics).

                          Model 1     Model 2     Model 3
Feasibility               0.604 ***               0.621 ***
                          (-11.05)                (-10.33)
Treatment*Feasibility     2.073 ***               2.026 ***
                          (12.29)                 (11.78)
Originality                           1.296 ***   1.202 ***
                                      (5.94)      (4.13)
Treatment*Originality                 0.771 ***   0.859 *
                                      (-4.41)     (-2.52)

Notes: Z-statistics are reported in parentheses to indicate statistical significance. Odds ratios are interpreted as the multiplicative change in the odds of favoring idea k for a one-unit increase in that variable. Estimates greater than 1 indicate positive effects, while estimates smaller than 1 indicate negative effects.

*** Statistical significance at the 0.10% level (Z-statistics > 3.10)

** Statistical significance at the 1% level (Z-statistics > 2.58)

* Statistical significance at the 5% level (Z-statistics > 1.96).
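The percentage changes quoted in the text follow directly from the Model 3 odds ratios in Table 1, since the percent change in odds is (OR − 1) × 100:

```python
# Converting Table 1 (Model 3) odds ratios into percent changes in odds.
or_feasibility = 2.026   # Treatment*Feasibility, Model 3
or_originality = 0.859   # Treatment*Originality, Model 3
pct_feasibility = (or_feasibility - 1) * 100   # about +103%
pct_originality = (or_originality - 1) * 100   # about -14%
```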

Notably, the detrimental effect of instructing children to transform their ideas into tangible and physical products was consistent among children with different background characteristics (i.e., gender, age, ethnicity, socioeconomic status, prior level of achievement) and psychological characteristics (i.e., risk preference and personality traits, with the exception of conscientiousness). Hence, these findings are broadly applicable to children independent of demographic or psychological characteristics (S2 Table in S1 File). Further, results did not change when we used expert ratings as the outcome (S3 Table in S1 File).

Discussion

Many constructivist pedagogies aim to develop children’s creativity by instructing them to actually construct their ideas into tangible and physical products. Yet, little is known about whether children go for more or less creative ideas when facing the prospect of implementation. The aim of this study was to examine whether children select less creative ideas when they expect to implement these ideas at a later moment. More specifically, the current study investigated the effect of expected implementation of ideas on the selection of original and feasible ideas among primary school children.

We found that children inhibit themselves in selecting original ideas once there is an expectation of idea implementation. Hence, this research shows that, in moving from creativity to innovation, children may put more focus on the practicality of ideas. Glăveanu, Gillespie, and Karwowski [42] compared, in brainstorms between two people (i.e., dyads) about ways to improve a stuffed toy elephant, the practicality of ideas recorded on paper with ideas that were only verbally expressed. In line with our findings, they found that dyads were more likely to write down practical ideas than original ideas. As such, it seems that the practicality of ideas becomes more important in the selection and implementation of ideas than in their generation.

These findings are broadly applicable to children independent of demographic or psychological characteristics. We found an exception only for the personality trait conscientiousness. The significant interaction effect illustrates that more conscientious children tend to choose more feasible ideas even when they are not in the treatment group (S2 Table in S1 File). Yet, their choice changes much less when they are assigned to the treatment group. Hence, untreated conscientious children already show the behavior that other children display only when treated.

Furthermore, we found a trade-off between novelty and usefulness: children who expected implementation selected less original, but more feasible ideas than children who did not have an expectation of later implementing the selected ideas. According to the novelty-usefulness trade-off, highly original ideas are more likely to be judged as less feasible because they involve, by definition, a step into the unknown [e.g., 43–48]. Several scholars argue that these two criteria of creativity are often seen as incompatible and represent a fundamental tension or paradox [e.g., 49–54]. In line with von Thienen, Ney and Meinel [55], we found a negative correlation between originality and feasibility of -0.51 among experts and -0.36 among children. As such, the novelty-usefulness trade-off explains our finding that children select less original, but more feasible ideas once expecting implementation of those ideas. This implies that the expectation of idea implementation causes children to play it even safer with regard to practicality and to choose more feasible ideas rather than more original ones.

In addition, our finding can also partly be explained by the bias against originality [52]. The most original ideas are often those that are radically different from existing solutions or practices, which often causes people to have ambivalent feelings towards novel ideas, because people often prefer the status quo or familiar ideas. Blair and Mumford [56], for example, found that managers concerned with idea implementation preferred non-original ideas even when offered ideas that were both original and useful. Similarly, we found that although children without expected implementation selected more original ideas, they did not totally abandon feasible ideas.

Practical implications

The results of this study have important implications for educational practice. First and foremost, our finding that children inhibit themselves in selecting original ideas once there is an expectation of idea implementation suggests that this instructional approach may lead to the loss of potentially original ideas. This means that by focusing on end-products only, educators may run the risk of applauding the creativity of a small group of children who succeeded in constructing original ideas into tangible products, while at the same time not recognizing the creative potential of a group of equally creative children who tried, but failed, to construct original ideas into physical products. As such, educators should focus not only on the end-product, but particularly on the decision-making process children go through in selecting creative ideas [4]. In stimulating children’s creativity, it may be worthwhile for educators to support children in their intuitive judgments of original ideas, to help them resist peer pressure to conform, and to guide children with highly original but seemingly unrealistic ideas to think of ways for the idea to be made feasible. There are several strategies that educators can use to render wild-sounding ideas more useful or feasible, and this may help children to pursue original ideas [e.g., 57–59]. More specifically, wild-sounding ideas can be rendered more effective by means of parallel prototyping [57]. For instance, educators could encourage children to construct multiple ideas into tangible products in parallel. Another highly effective approach to render original ideas more effective is iterative prototype testing [57, 60]. In iterative prototyping, educators could encourage children to test and refine their ideas multiple times, moving over time from non-refined prototypes (such as rough sketches) to refined prototypes (such as refined paper models or CAD drawings).
In addition to these strategies, educators could also explicitly instruct children to try out at least one daring, wild “dark horse” solution. This “dark horse” strategy calls for the exploration of wild ideas that might otherwise never be explored, and ensures that highly original ideas do not get lost in the implementation phase [59]. For this, it is important for teachers to foster a psychologically safe classroom environment in which all children feel safe to play and experiment with ideas and materials, to take sensible risks, and to make mistakes. In such an environment, novel and unorthodox ideas are valued and failures are seen as a necessary and positive part of the learning process in support of creativity [61–63].

Limitations and future directions

To the best of our knowledge, this is the first study to demonstrate the effect of expected implementation on children’s idea selection. To test this effect, a relatively large sample of 403 children was asked, in a randomized design, to select two innovative ideas with or without the expectation of having to implement these ideas in the classroom.

Despite its strengths, our study inevitably has limitations that future research may address. First, we investigated the effect of expected implementation on idea selection among children aged 10–13, because several studies show that children around this age begin a trend of increasingly conformist thinking that continues through high school [21, 22]. This manifests itself in children wanting to be as ‘normal’ as possible and preferring to do everything the same way as their peers. As a result, what same-age peers think of them and their behavior is often more important to them than what their teachers or parents think [64]. Accordingly, our findings may be expected to become even stronger among young adolescents. However, it remains unclear how younger children or adults would perform under similar experimental conditions. Thus, while the present study provides a starting point for research on the effect of expected implementation on creativity, the question remains how expected construction of ideas into tangible and physical products affects creativity in younger and older age groups.

We also note that while this study investigated whether the expectation of idea implementation affects idea selection among primary school children, it did not investigate the underlying mechanisms of why this happens (e.g., emotional aspects of idea selection). For instance, children’s emotional reactions to expected implementation, such as fear of failure, might explain why children have a bias against original ideas in their selection [65, 66]. Prior research has shown that people have a natural bias against creativity in favor of feasibility because of uncertainty [15, 52]. As such, children may select different types of ideas to reduce uncertainty in the implementation phase [67]. Future research should aim to investigate these underlying mechanisms.

Finally, it is possible that the creativity of the children’s final selected ideas was lowered because children may have been constrained by their own building capacity in their selection of original ideas [68]. Children may have wondered which ideas they could actually build, rather than selecting original ideas irrespective of their building skills. Further, several researchers have shown that multiple iterations and parallel prototyping contribute to an increase in both the quantity and creativity of prototypes produced, as they allow children to try out constructing multiple ideas into tangible products in parallel [e.g., 57, 60]. Therefore, instead of giving children only one opportunity to translate their idea into a tangible product, letting them repeatedly work on a prototype in parallel sessions to further refine their idea may give them more experience and boost the creativity of their ideas.

Conclusion

In sum, the present study investigated the effect of expected implementation of ideas on children’s selection of original and feasible ideas. The results showed that expected implementation exerted different effects on the two dimensions of final product creativity. Children who expected implementation selected fewer original ideas, but more feasible ideas, than did children in the non-expected implementation condition. Thus, pedagogical approaches that aim to stimulate creativity by instructing children to construct original ideas into tangible and physical products may unintentionally change children’s choices for creative ideas. This finding highlights the importance for educators of guiding children’s decision-making process in creative problem solving, and of being aware of children’s bias against original ideas when designing creative assignments for them.

Supporting information

S1 File

(DOCX)

Acknowledgments

We are grateful to all of the teachers and children who participated in our research. We thank Sylvie Beckers and Maurice Thelen for their assistance with data collection, and their expertise on how to integrate our experimental design in the grade 6-curriculum. Research reported in this publication was supported by the Educatieve Agenda Limburg and the Kindante board of primary schools.

Data Availability

The data that support the findings of this study are publicly available in the Center for Open Science (OSF) at https://osf.io/thnyu/. DOI 10.17605/OSF.IO/THNYU.

Funding Statement

The authors received no specific funding for this work.

References

  • 1.Vincent-Lancrin S, González-Sancho C, Bouckaert M, Fernández-Barrerra M, Jacotin G, Urgel J, et al. Fostering Students’ Creativity and Critical Thinking: What it means in School. Paris (France). OECD Publishing; 2019. p. 1–356. [Google Scholar]
  • 2.Scardamalia M, Bereiter C. Knowledge building: Theory, pedagogy, and technology. In: Sawyer K, editor. Cambridge Handbook of the Learning Sciences. UK: Cambridge University Press; 2006. p. 97–118. [Google Scholar]
  • 3.Chi MTH, Wylie R. The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist. 2014; 49(4): 219–243. [Google Scholar]
  • 4.Beghetto RA, Kaufman JC. Nurturing Creativity in the Classroom. UK: Cambridge University Press; 2010. [Google Scholar]
  • 5.Cardarello R. Enhancing scientific thinking in children: Suggestions based on studies about creativity. In Pixel, editor. 3rd Annual Meeting of the New Perspectives in Science Education. Padova: Libreria Universitaria; 2014. p. 249–251. [Google Scholar]
  • 6.Davies D, Jindal-Snape D, Collier C, Digby R, Hay P, Howe A. Creative learning environments in education—A systematic literature review. Thinking Skills and Creativity. 2013; 8: 80–91. [Google Scholar]
  • 7.Ritchie SJ, Luciano M, Hansell NK, Wright MJ, Bates TC. The relationship of reading ability to creativity: Positive, not negative associations. Learning Individual Differences. 2013; 26: 171–176. [Google Scholar]
  • 8.Alfonso-Benlliure V, Santos MR. Creativity development trajectories in elementary education: Differences in divergent and evaluative skills. Thinking Skills and Creativity. 2016; 19: 160–174. [Google Scholar]
  • 9.Jellen HG, Urban KK. The TCT-DP (Test for Creative Thinking-Drawing Production): An instrument that can be applied to most age and ability groups. Creative Child and Adult Quarterly. 1986; 11: 138–155. [Google Scholar]
  • 10.Lucas BJ, Nordgren LF. The creative cliff illusion. Proceedings of the National Academy of Sciences of the United States of America. 2020; 117(33): 19830–19836. doi: 10.1073/pnas.2005620117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Corazza GE. Potential originality and effectiveness: The dynamic definition of creativity. Creativity Research Journal. 2016; 28(3): 258–267. [Google Scholar]
  • 12.Runco MA, Jaeger GJ. The standard definition of creativity. Creativity Research Journal. 2012; 24(1): 92–96. [Google Scholar]
  • 13.Anderson N, De Dreu CKW, Nijstad BA. The routinization of innovation research: A constructively critical review of the state-of-the-science. Journal of Organizational Behavior. 2004; 25(2): 147–173. [Google Scholar]
  • 14.Amabile TM. Effects of external evaluation on artistic creativity. Journal of Personality and Social Psychology. 1979; 37(2): 221–233. [Google Scholar]
  • 15.Amabile TM, Pratt MG. The dynamic componential model of creativity and innovation in organizations: Making progress, making meaning. Research in Organizational Behavior. 2016; 36: 157–183. [Google Scholar]
  • 16.Shalley CE, Perry-Smith JE. Effects of social-psychological factors on creative performance: The role of informational and controlling expected evaluation and modelling experience. Organizational Behavior and Human Decision Processes. 2001; 84(1): 1–12. doi: 10.1006/obhd.2000.2918 [DOI] [PubMed] [Google Scholar]
  • 17.Yuan F, Zhou J. Differential effects of expected external evaluation on different parts of the creative idea production process and final product creativity. Creativity Research Journal. 2002; 20(4): 391–403. [Google Scholar]
  • 18.Sorrentino RM, Roney CJ. The uncertain mind: Individual differences in facing the unknown. UK: Psychology Press; 2013. [Google Scholar]
  • 19.Rietzschel EF, Nijstad BA, Stroebe W. The selection of creative ideas after individual idea generation: Choosing between creativity and impact. British Journal of Psychology. 2010; 101(1): 47–68. doi: 10.1348/000712609X414204 [DOI] [PubMed] [Google Scholar]
  • 20.Rubenson DL, Runco MA. The psychoeconomic view of creative work in groups and organizations. Creativity and Innovation Management. 1995; 4(4): 232–241. [Google Scholar]
  • 21.Kim KH. The creativity crisis: The decrease in creative thinking scores on the Torrance Tests of Creative Thinking. Creativity Research Journal. 2011; 23(4): 285–295. [Google Scholar]
  • 22.Said-Metwaly S, Fernández-Castilla B, Kyndt E, Barbot B. Does the fourth-grade slump in creativity actually exist? A meta-analysis of the development of divergent thinking in school-age children and adolescents. Educational Psychology Review. 2020; 33(1): 275–298. [Google Scholar]
  • 23.Jastrzębska D, Limont W. Not only jumps, slumps, but also mini plateau. Creative potential assessed by the test for creative thinking-drawing production. A cross-sectional study of Polish students aged from 7 to 18. Creativity Research Journal. 2017; 29(3): 337–342. [Google Scholar]
  • 24.Faul F, Erdfelder E, Buchner A, Lang AG. Statistical power analyses using G* Power 3.1: Tests for correlation and regression analyses. Behavior research methods, 2009; 41(4): 1149–1160. doi: 10.3758/BRM.41.4.1149 [DOI] [PubMed] [Google Scholar]
  • 25.Torrance E. Torrance Tests of Creative Thinking: Technical-Norms Manual. US: Scholastic Testing Service; 1974. [Google Scholar]
  • 26.Charles RE, Runco MA. Developmental trends in the evaluative and divergent thinking of children. Creativity Research Journal. 2001; 13(3–4): 417–437. [Google Scholar]
  • 27.Grohman M, Wodniecka Z, Klusak M. Divergent thinking and evaluation skills: Do they always go together? The Journal of Creative Behavior. 2006; 40(2): 125–145. [Google Scholar]
  • 28.Runco MA, Dow GT. Assessing the accuracy of judgments of originality on three divergent thinking tests. The International Journal of Creativity & Problem Solving. 2014; 14(2): 5–14. [Google Scholar]
  • 29.Silvia PJ. Discernment and creativity: How well can people identify their most creative ideas? Psychology of Aesthetics, Creativity, and the Arts. 2008; 2(3): 139–146. [Google Scholar]
  • 30.Wagner HL. On measuring performance in category judgment studies of nonverbal behavior. Journal of Nonverbal Behavior. 1993; 17(1): 3–28. [Google Scholar]
  • 31.Wallach MA, Kogan N. Modes of thinking in young children: A study of the creativity-intelligence distinction. New York: Holt, Rinehart & Winston; 1965. [Google Scholar]
  • 32.Puente-Díaz R, Cavazos-Arroyo J, Puerta-Sierra L, Vargas-Barrera F. The contribution openness to experience and its two aspects to the explanation of idea generation, evaluation and selection: A metacognitive perspective. Personality and Individual Differences. 2022; 185: 1–7. [Google Scholar]
  • 33.Rodriguez WA, Cheban Y, Shah S, Watts LL. The general factor of personality and creativity: Diverging effects on intrapersonal and interpersonal idea evaluation. Personality and Individual Differences. 2020; 167: 1–7. [Google Scholar]
  • 34.Jackson DN. Jackson Personality Inventory-Revised Manual. US: Sigma Assessment Systems; 1994. [Google Scholar]
  • 35.Goldberg LR, Johnson JA, Eber HW, Hogan R, Ashton MC, Cloninger CR, et al. The international personality item pool and the future of public-domain personality measures. Journal of Research in Personality. 2006; 40(1): 84–96. [Google Scholar]
  • 36.Porter AC, Raudenbush ST. Analysis of covariance: Its model and use in psychological research. Journal of Counseling Psychology. 1987; 34(4): 383–392. [Google Scholar]
  • 37.Reichardt CS. The statistical analysis of data from nonequivalent group designs. Quasi-experimentation: Design and Analysis Issues for Field Settings. 1979; 13: 147–169. [Google Scholar]
  • 38.van Breukelen GJ. ANCOVA versus CHANGE from baseline in nonrandomized studies: The difference. Multivariate Behavioral Research. 2013; 48(6): 895–922. doi: 10.1080/00273171.2013.831743 [DOI] [PubMed] [Google Scholar]
  • 39.McFadden D. The measurement of urban travel demand. Journal of Public Economics. 1974; 3(4): 303–328. [Google Scholar]
  • 40.Sawilowsky SS. New effect size rules of thumb. Journal of Modern Applied Statistical Methods. 2009; 8(2): 597–599. [Google Scholar]
  • 41.Cohen J. Statistical power analysis for the behavioral sciences. UK: Routledge; 2013. [Google Scholar]
  • 42.Glăveanu VP, Gillespie A, Karwowski M. Are people working together inclined towards practicality? A process analysis of creative ideation in individuals and dyads. Psychology of Aesthetics, Creativity, and the Arts. 2019; 13(4): 388–401. [Google Scholar]
  • 43.Manske ME, Davis GA. Effects of simple instructional biases upon performance in the unusual uses test. The Journal of General Psychology. 1968; 79(1): 25–33. doi: 10.1080/00221309.1968.9710449 [DOI] [PubMed] [Google Scholar]
  • 44.Rietzschel EF, Nijstad BA, Stroebe W. Productivity is not enough: A comparison of interactive and nominal brainstorming groups on idea generation and selection. Journal of Experimental Social Psychology. 2006; 42(2): 244–251. [Google Scholar]
  • 45.Ward TB. The role of domain knowledge in creative generation. Learning and Individual Differences. 2008; 18(4): 363–366. [Google Scholar]
  • 46.van Broekhoven K, Belfi B, Borghans L, Seegers P. Creative idea forecasting: The effect of task exposure on idea evaluation. Psychology of Aesthetics, Creativity, and the Arts. 2021: 1–10. [Google Scholar]
  • 47.van Broekhoven K, Cropley D, Seegers P. Differences in creativity across Art and STEM students: We are more alike than unalike. Thinking Skills and Creativity. 2020; 38: 1–13. [Google Scholar]
  • 48.Baer M. Putting creativity to work: The implementation of creative ideas in organizations. Academy of Management Journal. 2012; 55(5): 1102–1119. [Google Scholar]
  • 49.Frederiksen MH, Knudsen MP. From creative ideas to innovation performance: The role of assessment criteria. Creativity and Innovation Management. 2017; 26(1): 60–74. [Google Scholar]
  • 50.Miron-Spektor E, Beenen G. Motivating creativity: The effects of sequential and simultaneous learning and performance achievement goals on product novelty and usefulness. Organizational Behavior and Human Decision Processes. 2015; 127: 53–65. [Google Scholar]
  • 51.Miron-Spektor E, Gino F, Argote L. Paradoxical frames and creative sparks: Enhancing individual creativity through conflict and integration. Organizational Behavior and Human Decision Processes. 2011; 116(2): 229–240. [Google Scholar]
  • 52.Mueller JS, Melwani S, Goncalo JA. The bias against creativity: Why people desire but reject creative ideas. Psychological Science. 2012; 23(1): 13–17. doi: 10.1177/0956797611421018 [DOI] [PubMed] [Google Scholar]
  • 53.Nijstad BA, De Dreu CKW, Rietzschel EF, Baas M. The dual pathway to creativity model: Creative ideation as a function of flexibility and persistence. European Review of Social Psychology. 2010; 21(1): 34–77. [Google Scholar]
  • 54.Zacher H, Robinson AJ, Rosing K. Ambidextrous leadership and employees’ self-reported innovative performance: The role of exploration and exploitation behaviors. The Journal of Creative Behavior. 2016; 50(1): 24–46. [Google Scholar]
  • 55.von Thienen J, Ney S, Meinel C. Estimator socialization in design thinking: The dynamic process of learning how to judge creative work. In: Beghetto RA, Corazza GE, editors. Dynamic Perspectives on Creativity. Cham: Springer; 2019. p. 67–99. [Google Scholar]
  • 56.Blair CS, Mumford MD. Errors in idea evaluation: Preference for the unoriginal? The Journal of Creative Behavior. 2007; 41(3): 197–222. [Google Scholar]
  • 57.Dow SP, Glassco A, Kass J, Schwarz M, Schwartz DL, Klemmer SR. Parallel prototyping leads to better design results, more divergence, and increased self-efficacy. ACM Transactions on Computer-Human Interaction (TOCHI). 2010; 17(4): 1–24. [Google Scholar]
  • 58.Dow S. How prototyping practices affect design results. Interactions. 2011; 18(3): 54–59. [Google Scholar]
  • 59.Vetterli C, Hoffmann F, Brenner W, Eppler MJ, Uebernickel F. Designing innovation: Prototypes and team performance in design thinking. 23rd International Society of Professional Innovation Management. Barcelona: ISPIM; 2012. p. 1–11. [Google Scholar]
  • 60.Taranu M, Andersen MM, Bojesen AB, Roepstorff A. Trust the process: The effects of iteration in children’s creative processes on their creative products. Psychology of Aesthetics, Creativity, and the Arts. In press. [Google Scholar]
  • 61.Sawyer K. A call to action: The challenges of creative teaching and learning. Teachers College Record. 2015; 117(10): 1–34. [Google Scholar]
  • 62.Olivant KF. “I am not a format”: Teachers’ experiences with fostering creativity in the era of accountability. Journal of Research in Childhood Education. 2015; 29(1): 115–129. [Google Scholar]
  • 63.Chan S, Yuen M. Personal and environmental factors affecting teachers’ creativity-fostering practices in Hong Kong. Thinking Skills Creativity. 2014; 12: 69–77. [Google Scholar]
  • 64.Laursen B, Veenstra R. Toward understanding the functions of peer influence: a summary and synthesis of recent empirical research. Journal of Research on Adolescence. 2021; 31(4): 889–907. doi: 10.1111/jora.12606 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Wang X, Li Y, Li X, Duan H, Li Y, Hu W. Role of avoidance-motivation intensity in creative thinking: Similar and differential effects across creative idea generation and evaluation. Creativity Research Journal. 2021; 33(3): 284–301. [Google Scholar]
  • 66.Zhang M, Wang F, Zhang D. Individual differences in trait creativity moderate the state-level mood-creativity relationship. PloS one. 2020; 15(8): 1–15. doi: 10.1371/journal.pone.0236987 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67.Zielinska A, Lebuda I, Ivcevic Z, Karwowski M. How adolescents develop and implement their ideas? On self-regulation of creative action. Thinking Skills and Creativity. 2022; 43: 1–16. [Google Scholar]
  • 68.Sadler J, Shluzas L, Blikstein P, Katila R. Building blocks of the maker movement: Modularity enhances creative confidence during prototyping. In: Plattner H, Meinel C, Leifer L, editors. Design Thinking Research. Cham: Springer, 2016. p. 141–154. [Google Scholar]

Decision Letter 0

Sergio Agnoli

4 Apr 2022

PONE-D-22-04429
Instructing children to construct ideas into products decreases creativity in a randomized field experiment
PLOS ONE

Dear Dr. van Broekhoven,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I've now received comments from two experts. Both see merit in this work and appreciated the methodological and research design approach described in the paper. However, both reviewers highlighted specific concerns with the theoretical and interpretative approach of the work, which need to be addressed by the authors. After my own reading of the manuscript, I agree with the comments of the reviewers. In particular, I would suggest integrating into the introductory section the abundant literature on implementation, as suggested by Reviewer 2. Moreover, a more precise and careful interpretation of the results is suggested to the authors. Both reviewers included thoughtful and insightful comments and suggestions (Reviewer 1 as an attachment), which I strongly recommend following in the revision of the manuscript.

Please submit your revised manuscript by May 19 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Sergio Agnoli

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review the text entitled "Instructing children to construct ideas into products decreases creativity in a randomized field experiment." In the field experiment presented in this article, 403 children aged 10 to 13 were asked to select the most original ideas to make a toy more fun to play with. A random half of the children were told that after the idea selection, they were expected to craft the chosen ideas, transforming them into tangible products. The other half were informed that they would engage in another task after idea selection (so they did not expect to construct anything based on the selection made). The analyses showed that children who anticipated implementing the chosen ideas tended to pick more feasible yet less original ideas as compared to their peers who did not expect to craft the ideas. Using conditional logit models, the Authors showed that the probability of choosing a feasible idea increased by 103% and that of choosing an original idea declined by 14% when crafting the ideas was expected. These results were consistent irrespective of children’s demographic characteristics, their personality, and risk preference. Furthermore, the conclusions stayed the same with ideas’ originality self-rated by children (prior to idea selection) as well as when expert judges scored the ideas for originality.

Overall, I enjoyed reading this paper very much. It is clearly written, thought-provoking, and brings important message for both creativity and education research and practice. The performed analyses are elegant and well-suited for the research questions stated. The technical aspects of the presented study are of high quality making this work a nice fit for PLOS ONE. I have mostly minor suggestions for the Authors’ consideration that would make the paper even clearer and more understandable for the reader.

Please have a look at the attached review to see my specific comments.

Reviewer #2: This is a great study in terms of testing a large sample of children, with a clear-cut experimental manipulation and a good test battery to assess control variables. The topic is highly relevant, as creativity is an important learning objective at school, and the development of suitable education programmes is an enduring challenge.

To improve this paper even further prior to the final publication, a couple of suggestions follow.

First, there has been a lot of research on how implementation impacts ideation, and the outcomes of creative processes. Some key findings from this research should be integrated in the paper. Most notably, the kind of implementation that has been required from children in this experiment is a particular kind of implementation, which indeed would be expected to foster the opposite of creatively diverse and daring ideation. This form of implementation can be characterised as follows: (i) the children’s prototypes have a relatively high resolution, (ii) they require manual building skills, (iii) there is only one prototype construction round by the end of the process, (iv) there is no explicit use of multiple prototypes to try out diverse ideas including at least one “wild” approach. In all four regards, more creatively diverse and daring solutions would have been expected if the implementation process had been different.

(i) A well-replicated finding in creativity and innovation research is that of Edelman & Currano (2011) in their Media-Models Framework: Prototypes with a high resolution/refinement/granularity (e.g. CAD drawings, or refined paper models) foster convergent thinking, and little changes are made to already existing concepts. By contrast, rough prototypes (e.g., rough sketches or improvised, non-detailed 3D depictions) foster divergent thinking and invite deviance from existing concepts. The two sample elephants depicted in the present experiment show a notably high amount of refinement.

Ref:

Edelman, J., & Currano, R. (2011). Re-representation: Affordances of shared models in team-based design. In Design thinking (pp. 61-79). Springer, Berlin, Heidelberg.

(ii) There is a huge range of methods to implement creative ideas, and research on the impact of different implementation strategies. Building tangible prototypes is known to be delicate, especially when this requires physical building skills and thereby draws attention away from the ideas at stake. Alternative approaches include storytelling, role-playing, drawing sketches, wizard-of-oz-prototyping, building critical-function-prototypes or designing news-of-the-future. Whenever a skill-intensive form of implementation is required from people, thorough training in the specific implementation method is recommended beforehand, because only with the necessary skill for expressing ideas in a particular medium are people free to express any kind of idea. The two sample elephants from this study required a notable amount of paperwork skill to be built like this.

Refs:

McKim, R. H. (1972). Experiences in visual thinking. Wadsworth Publishing, Belmont.

d.school (2010) Bootcamp bootleg. https://hpi.de/fileadmin/user_upload/fachgebiete/d-school/documents/01_GDTW-Files/bootcampbootleg2010.pdf

Meinel, C., Rhinow, H., & Köppen, E. (2013). Design Thinking Prototyping Cardset. Hasso Plattner Institut für Softwaresystemtechnik, Potsdam.

Sadler, J., Shluzas, L., Blikstein, P., & Katila, R. (2016). Building blocks of the maker movement: Modularity enhances creative confidence during prototyping. In Design Thinking Research (pp. 141-154). Springer, Cham.

(iii) Even when the time for implementing a creative idea remains the same, there is a huge difference between implementation as a “one-shot operation” and a process of “building and testing, building and testing…”. Via repeated tests and prototype improvements, even implementations that start with little efficacy can be rendered high-performing over time. The prospect of being able to test and improve designs throughout the implementation phase therefore allows people to select new and daring ideas, because these can be rendered feasible prior to the final submission. By contrast, when the implementation is (expected to be) a one-time operation, there is little hope for improvement over time, so that well-known and easy-to-implement concepts seem obviously preferable. In this particular study, no mention was made of the possibility of tests and refinements in the implementation phase, thereby rendering a “play-safe strategy” a logically good choice for the production of somewhat effective prototypes.

Ref:

Dow, S. & Klemmer, S. R. (2011). The efficacy of prototyping under time constraints. In Design thinking (pp. 111-128). Springer, Berlin, Heidelberg.

(iv) There can be a great difference between the instruction to implement one idea and the explicit instruction to implement several ideas in parallel to probe a bandwidth of creative opportunities. In the latter case, there is also the possibility of instructing students to select at least one idea that seems clearly feasible and another that is highly original and would seem to yield very good effects if only it could be implemented (the “dark horse” method). In this particular study, children appear to have been instructed to select (and implement?) two ideas. However, there was evidently no instruction to use this parallel-prototyping opportunity to enhance more diverse thinking, including the implementation of at least one more extreme and daring idea.

Refs:

Dow, S. P., Glassco, A., Kass, J., Schwarz, M., Schwartz, D. L. & Klemmer, S. R. (2010). Parallel prototyping leads to better design results, more divergence, and increased self-efficacy. ACM Transactions on Computer-Human Interaction, 17(4), 18:1-24.

Dow, S., Fortuna, J., Schwartz, D., Altringer, B., Schwartz, D., & Klemmer, S. (2011, May). Prototyping dynamics: sharing multiple designs improves exploration, group rapport, and results. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 2807-2816).

Dow, S. (2011). How prototyping practices affect design results. Interactions, 18(3), 54-59.

https://www.designthinking-methods.com/en/4Prototypen/darkhorse.html

In their paper, the authors need to be very careful in interpreting their results. This experiment has not tested the impact of students expecting to implement, or even expecting to construct, creative ideas in general terms. It has tested a very specific implementation procedure that aims at (i) refined prototypes, which (ii) require considerable building skills, (iii) are constructed in a “one-time” implementation procedure, (iv) without using parallel designs as a means to foster divergent, creative thinking in the implementation phase. Rather than being warned that implementation steps create a bias against creativity in general terms, educators should be encouraged to delve deeper into the existing literature on how different implementation strategies affect creative thinking.

Specific considerations:

In their discussion of practical implications, the authors currently write: “In stimulating children’s creativity, it may be worthwhile for educators to […] guide children with highly original but seemingly unrealistic ideas to think of ways for the idea to be made feasible.” In this context, it is important to note that this question does not require answers to be developed from scratch in a case-by-case approach, let alone by the students. There is strong empirical evidence on strategies for rendering original ideas more effective in the implementation phase, and this could be taught by educators in the first place. A highly effective approach is iterative prototype testing, moving from rough (concept-based) prototypes to more refined prototypes over time.

Furthermore, the authors posit: “In stimulating children’s creativity, it may be worthwhile for educators to support children in their intuitive judgments of original ideas […]”. Here again, there are strategies that educators can use to help students pursue original ideas. In particular, parallel prototyping with the explicit instruction to choose at least one daring, wild “dark horse” solution is an empirically substantiated method of ensuring that highly original ideas do not get lost in the implementation phase.

Second, the authors have compiled an impressive battery of control variables. However, the statistical analysis in this regard does not seem very sensitive. It yields the highly surprising outcome that not a single control variable has a notable impact on the children’s choices of ideas, not even “risk preference” as measured with the 10-item Risk Taking scale from the Jackson Personality Inventory. The authors write: “Notably, the detrimental effect of instructing children to transform their ideas into tangible and physical products was consistent among children with different background characteristics (i.e., gender, age, ethnicity, socioeconomic status, prior level of achievement, school) and psychological characteristics (i.e., risk preference and personality traits). Hence, these findings are broadly applicable to children independent of demographic or psychological variables (S2).” Here, a more pinpointed statistical analysis would seem helpful. For instance, what is the average originality of selected ideas among the 25% of children who are most risk-averse, compared with the 25% who are most open to risk-taking? After all, later on the authors wish to conclude:

“This means that by focusing on end-products only, educators may run the risk of applauding the creativity of a small group of children who had the courage to try out original ideas, while at the same time failing to recognize the creative potential of a group of equally creative children who were perhaps too afraid to bring their ideas into practice.” This statement seems to imply that there IS a differential impact of instruction on students: especially the more cautious students would choose feasible solutions, while the more daring students might still implement highly original ideas.

“Further, it is important for teachers to foster a psychologically safe classroom environment where children feel safe in playing and experimenting with ideas and materials, taking sensible risks, and making mistakes.” Again, this passage suggests that there IS a differential impact of psychological attributes, such that students who feel safe (e.g., safe to fail) will be in a better position to develop their creativity, from initial ideation up to the implementation of final project outcomes.

Overall, and especially in the context of education, it would be good to show the differential impact of psychological variables such as willingness to take risks on creative performance. After all, education could help foster a mindset that enables students to be creative, e.g., by helping them cope with the uncertainty of trying out something so unique that no one has tried it before.
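
The suggested sensitivity check could be sketched, for example, as a treatment-by-risk-preference interaction model plus a quartile comparison. The following is a minimal illustration with simulated data; all variable names (`treated`, `risk`, `originality`) and effect sizes are hypothetical assumptions, not the authors’ data or analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Hypothetical stand-ins for the study's variables:
# treated: 1 = instructed to construct the selected idea, 0 = control
# risk:    standardized risk-preference score
treated = rng.integers(0, 2, n)
risk = rng.normal(size=n)

# Simulated outcome with a built-in treatment x risk interaction,
# so recovery of the moderation effect can be checked.
originality = (3.0 - 0.5 * treated + 0.3 * treated * risk
               + rng.normal(scale=0.5, size=n))

# OLS with an interaction term: originality ~ treated * risk.
X = np.column_stack([np.ones(n), treated, risk, treated * risk])
beta, *_ = np.linalg.lstsq(X, originality, rcond=None)
print(f"interaction coefficient: {beta[3]:.2f}")

# Complementary quartile view: treatment effect among the most
# risk-averse vs. the most risk-taking quarter of children.
q1, q4 = np.quantile(risk, [0.25, 0.75])
for label, mask in [("bottom 25%", risk <= q1), ("top 25%", risk >= q4)]:
    effect = (originality[mask & (treated == 1)].mean()
              - originality[mask & (treated == 0)].mean())
    print(f"{label}: treatment effect = {effect:.2f}")
```

A non-zero interaction coefficient, or a visibly larger treatment effect in the risk-averse quartile than in the risk-taking quartile, would be exactly the kind of differential impact the quoted passages imply.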

Third, it would be good to increase the conceptual clarity regarding the relationship between “originality” and “usefulness”, the two key dimensions characterising creative products. It is not logically necessary that highly original ideas be less useful or feasible than non-original ideas. In this sense, the Idea Assessment Probes were constructed as idea sets for testing purposes in creativity research. These sets include original and non-original ideas as well as useful/feasible and non-useful/infeasible ideas, with the two dimensions varying independently of each other. Even with these idea sets, empirical research has found that people concerned with idea implementation (in a managerial context) preferred non-original ideas. This was the case even though the highly original ideas would have been equally feasible. Therefore, the feasibility of implementation does not seem to be the only factor that drives people to shy away from original ideas in the idea selection phase.

Ref:

von Thienen, J. P. A., Ney, S. & Meinel, C. (2019). Estimator socialization in design thinking: The dynamic process of learning how to judge creative work. In R. Beghetto and G. E. Corazza (eds.), Dynamic perspectives on creativity: New directions for theory, research and practice in education (pp. 67-99). Springer.

In the present paper, it would be interesting to see the statistical correlation between the “originality” and “usefulness” ratings of the ideas, as judged by the children and by the experts.

Moreover, it would be interesting to know whether there might be a preference for unoriginal ideas that is independent even of feasibility considerations (e.g., a negative correlation between “originality” and “choice for implementation” while partialling out the “feasibility” rating).
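
The analysis suggested here is a standard partial correlation, which can be computed by residualising both variables on the covariate and correlating the residuals. A minimal sketch (the variable names in the usage comment are hypothetical, not taken from the study’s dataset):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear influence of z:
    regress x on z and y on z, then correlate the two residual series."""
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])  # intercept + covariate
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical usage with per-idea ratings:
#   partial_corr(originality, chosen_for_implementation, feasibility)
# A negative value would indicate a bias against original ideas
# over and above feasibility considerations.
```

The same residualisation logic extends to several covariates by adding columns to `Z`.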

Fourth, the paper discusses the impact of “conformist thinking” in children. For instance: “several studies show that children around this age begin a trend of increasingly conformist thinking that continues through high school (20-22). Accordingly, it can be expected that our findings may become even stronger among young adolescents.” Here it is not clear why conformist thinking should lead to a bias against original ideas. After all, the role models with whom children or adolescents might want to conform could be teachers or peers who explicitly love wild and daring creative ideas. Perhaps the logic of the argument could be explained in more detail here. Alternatively, there might be a discussion of the importance of role models, and of the impact of educators as role models, in this context.

All in all, this paper conveys a lot of important information in a condensed format; it is written in good English and is very understandable. The authors achieve a good integration of a controlled experimental approach in an ecologically valid school setting, with a large sample size and a sophisticated testing battery. It is a very good piece of research, definitely recommended for publication. The required revisions range between minor and major; in particular, the theoretical background, interpretation, and conclusion warrant further attention. Most importantly, varied methods exist for the implementation of creative ideas, and research has already demonstrated that these methods and strategies have hugely varying impacts on creative outcomes.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: PLOS-24-03.pdf

Decision Letter 1

Sergio Agnoli

5 Jul 2022

Instructing children to construct ideas into products alters children's creative idea selection in a randomized field experiment

PONE-D-22-04429R1

Dear Dr. van Broekhoven,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sergio Agnoli

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Congratulations on your nice work and on the careful revision. I agree with both reviewers that the work is now ready for publication in Plos One. As a last suggestion, I would recommend to take into account the suggestion by Reviewer 1 for the rephrasing of the highlighted sentence (during the proofs correction stage). Good luck for you future work!

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have done a great job in revising their work. All of the initially raised issues were clearly addressed and I thank the authors for their responsiveness. As this is an even better version of an already nice paper, I am happy to recommend it for publication in PLOS ONE.

Just one very small suggestion: Seems like the following sentence needs a slight rephrasing: "Hence, both untreated and treated conscientiousness children show the behavior of others when they are treated."

Reviewer #2: Thank you for the careful consideration of feedback, the line of argumentation now seems strong and consistent. Looking forward to seeing this article in print.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Acceptance letter

Sergio Agnoli

26 Jul 2022

PONE-D-22-04429R1

Instructing children to construct ideas into products alters children’s creative idea selection in a randomized field experiment

Dear Dr. van Broekhoven:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sergio Agnoli

Academic Editor

PLOS ONE

Associated Data


    Supplementary Materials

    S1 File

    (DOCX)

    Attachment

    Submitted filename: PLOS-24-03.pdf

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    The data that support the findings of this study are publicly available in the Center for Open Science (OSF) at https://osf.io/thnyu/. DOI 10.17605/OSF.IO/THNYU.

