The Cochrane Database of Systematic Reviews
. 2006 Apr 19;2006(2):CD001471. doi: 10.1002/14651858.CD001471.pub2

Educational games for mental health professionals

Paranthaman Sethupathi Bhoopathi 1, Rajeev Sheoran 2
Editor: Cochrane Schizophrenia Group
PMCID: PMC7028004  PMID: 16625545

Abstract

Background

In traditional didactic teaching, the learner has a passive role, digesting the knowledge presented by the teacher. Stimulating and active teaching processes may be better at instilling information than more pedestrian approaches. Games involving repetition, reinforcement, association and use of multiple senses have been proposed as part of experiential learning.

Objectives

To assess the effects of educational games on the knowledge attainment and clinical skills of mental health professionals compared to the effects of standard teaching approaches.

Search methods

We searched the Cochrane Schizophrenia Group Trials Register (November 2005), AMED (1998 ‐ November 2005), British Nursing Index (November 2005), Cochrane Library (Issue 3, 2005), CINAHL (November 2005), EMBASE (November 2005), Educational Resources Information Centre on CSA (1966 ‐ November 2005), MEDLINE (November 2005) and PsycINFO (November 2005).

Selection criteria

We included randomised controlled trials comparing any educational game aiming at increasing knowledge and/or skills with a standard educational approach for mental health professionals.

Data collection and analysis

We extracted data independently. For individual person data we calculated the Odds Ratio (OR) and its 95% confidence interval (CI) based on a fixed effects model. For dichotomous data we calculated relative risks (RR) and their 95% confidence intervals (CI) on an intention‐to‐treat basis, based on a fixed effects model. We calculated numbers needed to treat/harm (NNT/NNH) where appropriate. For continuous data, we calculated weighted mean differences (WMD), again based on a fixed effects model.

Main results

We identified one trial (n=34) of an educational game for mental health nursing students, with only a few hours' follow‐up. For an outcome we arbitrarily defined ('no academically important improvement', a 10% improvement in scores), those allocated to educational games fared considerably better than students in the standard education techniques group (OR 0.06 CI 0.01 to 0.3, NNT 3 CI 2 to 4). On average, those in the games group scored six more points than the control students on a test of questions relevant to psychosis, set to the standard of the mental health nursing curriculum of the day (WMD 6.00 CI 2.6 to 9.4).

Authors' conclusions

Current limited evidence suggests educational games could help mental health students gain more points in their tests, especially if they have left revision to the last minute. This salient study should be refined and repeated.

Keywords: Humans; Games, Experimental; Mental Health; Problem-Based Learning; Problem-Based Learning/methods; Teaching; Teaching/methods

Plain language summary

Educational games for mental health professionals

Standard teaching techniques in health care often contain traditional didactic elements. Learning from traditional didactic teaching has never been a very active process and can subsequently be tedious and tiring. In this review we wished to investigate the effects of more interactive ways of teaching mental health professionals. We identified one relevant trial which, although very small and short, did suggest quite a considerable short term positive effect for the more interactive teaching approach. On average, mental health nursing students who had been taught using this method scored six points more in a follow‐up test than students allocated to the standard teaching techniques.

This interesting experiment should be reproduced to determine whether the increased knowledge is sustained and, if so, whether this produces better skills and attitudes.

Background

Description of the intervention

Standard education in healthcare often comprises traditional didactic elements. Traditional didactic teaching has been described as "the presentation of the entire content of what is to be learned to the learner in its final form" (Ausubel 1968). The learner often has a passive role, digesting knowledge presented by the teacher. This process necessitates a considerable level of concentration and motivation and can be monotonous. Pure didactic teaching has been considered a poor investment in time and money (Becchetti 1995). The adult learner retains only 5% to 10% of what is taught in a lecture (McCahan 2002). Revision classes can be even more tedious for both the teacher and the learner because of the repetitive nature of the process.

Stimulating and active teaching processes should be better at helping people learn than more passive approaches. Group participation can increase this rate to 50% (McCahan 2002). Adults learn best in an informal setting and most of them (83%) are visually oriented; only 11% learn primarily by listening (McCahan 2002). A game is an instructional method requiring learners to participate and apply knowledge in a competitive activity with preset rules (Bastable 1997). Games are interactive processes that involve the application of cognitive, affective and psychomotor skills and knowledge (Lewis 1989). It has been suggested that new methods of teaching/learning should be adopted to make the learning process interesting and imaginative and to stimulate critical reflection (McEvoy 1989).

Pfieffer 1980 defines experiential learning as "occur[ring] when a person engages in some activity, looks back at the activity critically, abstracts some useful insight from the analysis and puts the results to work". Essentially experiential learning is about doing rather than listening to other people or reading. Some of the key characteristics of this type of learning are active involvement of the students, student centeredness, a degree of interaction, some measure of autonomy and flexibility and a high degree of relevance (Quinn 1988). Academic games are learner oriented and an interactive method of developing and assessing the competencies of participants. They reduce anxiety, provide variety, improve the desire to learn and promote group learning (Gruending 1991).

Hoffman 1985 describes a game as an exercise involving the essential characteristics of both competition and rules. Wolkenheim 1990 proposed that games, as part of experiential learning, allow for the incorporation of the classic principles of learning (Newstrom 1980). They involve repetition, reinforcement, association and use of multiple senses. Repetition within games allows important points to be reiterated in various fashions to increase the probability of retention and application. Games provide an opportunity for success that is likely to reinforce behaviour and provide a way for learners to make the transition from the game process to the underlying principles (association). Games are generally built on use of all the senses except smell.

Why it is important to do this review

Currently there is limited evidence available to determine the benefits of educational games for health care professionals. With the increasing use of problem‐based learning for health care professionals (Dwinnell 1998) the evidence for the value of this type of approach comes into question. This review seeks to investigate evidence for the value of one type of experiential learning ‐ the educational game.

Objectives

To assess the effects of educational games compared with standard teaching approaches for mental health professionals.

Methods

Criteria for considering studies for this review

Types of studies

We included all relevant randomised controlled trials. Where trials were described as 'double‐blind' but only implied as being randomised, then we included these only where the participant's demographic details were similar in each group. Quasi‐randomised studies, such as those allocating treatment depending on the day of the week, were excluded.

Types of participants

We included any mental health professional at any stage of training. Those clearly starting their training were defined as 'early stage' and all others, including those at postgraduate courses, as 'late stage'.

Types of interventions

1. Any educational game: a game is defined as any form of play or sport, usually competitive, played according to rules, and decided by skill, strength, or luck (COD 1990). The aim of the game had to be to educate the players with a subsequent increase of knowledge and skill.

2. Standard educational approach: this included any package of training usually provided for the particular course.

Types of outcome measures

All outcomes were divided into immediate (just after the end of the game), short term (1 day to 12 weeks), medium term (13‐26 weeks) and long term (over 26 weeks).

Primary outcomes

1. Knowledge 
 1.1 No important (large) positive change in knowledge

Secondary outcomes

1. Knowledge 
 1.1 No positive change in knowledge 
 1.2 Deterioration in knowledge 
 1.3 Average end scores for knowledge 
 1.4 Average change in knowledge

2. Acceptability 
 2.1 Not acceptable 
 2.2 Average scores on acceptance 
 2.3 Leaving the study early

3. Enjoyment/confidence 
 3.1 No positive change in enjoyment/confidence 
 3.2 Deterioration in enjoyment/confidence

4. Practical skills regarding patient care 
 4.1 No positive change in skills 
 4.2 Deterioration in skills

5. Motivation/concentration 
 5.1 No positive change in motivation/concentration 
 5.2 Deterioration in motivation/concentration

Search methods for identification of studies

Electronic searches

1. We searched the Cochrane Schizophrenia Group Trials Register (November 2005) using the phrase:

[(*game* in title, abstract and index terms in REFERENCE) and (*game* in outcomes in STUDY)] 
 
 This register is compiled by systematic searches of major databases, hand searches and conference proceedings (see Group Module). 
 
 2. We searched AMED (Allied and Complementary Medicine) on Ovid (1998 ‐ November 2005) using the phrase:

[(exp education/ or education$) and game$] 
 
 3. We searched the British Nursing Index (BNI) on Ovid (1998 ‐ November 2005) using the phrase:

[(exp education/ or education$) and game$]

4. We searched CENTRAL (The Cochrane Library 2005, Issue 4) using the phrase:

[exp games, experimental/ or (game* and educat* in title, abstract or keyword fields)]

5. We searched CINAHL (Cumulative Index to Nursing and Allied Health Literature) on OVID (1999 ‐ November 2005) using the Cochrane Schizophrenia Group terms for randomised controlled trials combined with the phrase:

[exp education/ and game$]

6. We searched EMBASE (1999 ‐ November 2005) using the Cochrane Schizophrenia Group terms for randomised controlled trials combined with the phrase:

[exp education/ and (exp GAME/ or game$.mp)]

7. We searched ERIC (Educational Resources Information Centre) on CSA (1966 ‐ November 2005) using the phrase:

[((randomi* or cross*ver or (clinic* near trial*) or ((singl* or doubl* or tripl* or trebl*) near (mask* or blind*)) or (random* near (allocat* or assign* or assort*))) and (medic* or nurs* or health*)) and game*]

8. We searched MEDLINE (1999 ‐ November 2005) using the Cochrane Schizophrenia Group terms for randomised controlled trials combined with the phrase:

[exp education/ and (exp GAME/ or game$.mp)]

9. We searched PsycINFO (1998 ‐ November 2005) using the Cochrane Schizophrenia Groups terms for randomised controlled trials combined with the phrase:

[(exp games/ or game$.mp) and (exp education/ or exp teaching methods/)]

We inspected all citations identified in this way for additional terms, and, if found, we added these to the above searches and repeated the process.

Searching other resources

1. Reference searching 
We examined the references cited in all included trials in order to identify further relevant studies.

2. Personal contact 
 When possible, we also tried to contact the authors of studies initially selected for inclusion in order to identify further relevant trials.

3. Hand Searching 
 If high yield journals had been identified using the electronic searches and if they had not already been hand searched we would have chosen one for full page by page inspection.

Data collection and analysis

Selection of studies

We (PSB) and (RS) independently inspected all reports identified. We resolved any disagreement by discussion, and where doubt remained, we acquired the full article for further inspection. Once the full articles were obtained, we independently decided whether the studies met the review criteria. If disagreement could not be resolved by discussion, we sought further information and these trials were added to the list of those awaiting assessment.

Data extraction and management

We independently extracted data. Where disagreement occurred, we attempted to resolve this by discussion; where doubt still remained, we sought further information from the study authors to resolve the dilemma and added the trial to the list of those awaiting assessment.

Assessment of risk of bias in included studies

We assessed the methodological quality of included studies using the criteria described in the Cochrane Handbook (Higgins 2008), which are based on the degree of allocation concealment. Poor concealment has been associated with overestimation of treatment effect (Schulz 1995). Category A includes studies in which allocation has been randomised and concealment is explicit. Category B studies are those which have randomised allocation but in which concealment is not explicit. Category C studies are those in which allocation has neither been randomised nor concealed. Only trials stated to be randomised (categories A or B of the Handbook) were included in this review. The categories are defined below:

A. Low risk of bias (adequate allocation concealment) 
 B. Moderate risk of bias (some doubt about the results) 
 C. High risk of bias (inadequate allocation concealment).

When disputes arose as to which category a trial should be allocated, attempts were made to resolve this by discussion. When doubt still remained we did not enter the data and added the trial to the list of those awaiting assessment until further information could be obtained.

Measures of treatment effect

1. Binary data 
 For binary outcomes, for example 'important improvement in knowledge' or 'no important improvement in knowledge', we estimated a fixed effects Peto Odds Ratio (OR), with the 95% confidence interval (CI). Where possible, we also calculated the number needed to teach statistic (NNT). Normally Cochrane Schizophrenia Group reviews employ Relative Risk as this has been shown to be more intuitive (Boissel 1999) than odds ratios and odds ratios tend to be interpreted as RR by clinicians (Deeks 2000). However, as this review only contains one study, and we were able to extract individual person data and enter these into RevMan, the calculator within this package limits us to presentation of the Peto Odds Ratio.

Where possible, efforts were made to convert outcome measures to binary data. This can be done by identifying cut‐off points on rating scales and dividing participants accordingly into "clinically improved" or "not clinically improved". It is generally assumed that a 50% reduction in a scale‐derived score such as the Brief Psychiatric Rating Scale (BPRS, Overall 1962) or the Positive and Negative Syndrome Scale (PANSS, Kay 1986) can be considered a clinically significant response (Leucht 2005a, Leucht 2005b). It is recognised that for many people, especially those with chronic or severe illness, a less rigorous definition of important improvement (e.g. 25% on the BPRS) would be equally valid. If individual patient data were available, we used the 50% cut‐off point for non‐chronically ill people and a 25% cut‐off point for those with chronic illness. If data based on these thresholds were not available, we used the primary cut‐off presented by the original authors.
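The dichotomisation rule above can be sketched in code. This is a minimal illustration under stated assumptions, not the review's analysis code; the function name and the scores are hypothetical.

```python
# Hypothetical sketch of converting a scale score to a binary outcome:
# a participant counts as 'clinically improved' when the score falls
# by at least the chosen fraction of baseline (50% by default, 25% for
# chronic illness). Scores below are invented for illustration.
def clinically_improved(baseline: float, endpoint: float,
                        cutoff: float = 0.5) -> bool:
    return (baseline - endpoint) >= cutoff * baseline

print(clinically_improved(80, 38))               # 42-point fall >= 40 -> True
print(clinically_improved(80, 50))               # 30 < 40 -> False
print(clinically_improved(80, 50, cutoff=0.25))  # 30 >= 20 -> True
```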

2. Continuous data 
 2.1 Normal distribution

Continuous data on outcomes in trials relevant to mental health issues are often not normally distributed. To avoid the pitfall of applying parametric tests to non‐parametric data, we applied the following standards to continuous final value endpoint data before inclusion: (a) standard deviations and means were reported in the paper or were obtainable from the authors; (b) when a scale started from zero, the standard deviation, when multiplied by two, should be less than the mean (otherwise the mean is unlikely to be an appropriate measure of the centre of the distribution ‐ Altman 1996). Where twice the standard deviation was greater than the mean, the data were entered into the 'Other data' table as skewed data. If a scale starts from a positive value (such as the PANSS, which can have values from 30 to 210), the calculation described in (b) should be modified to take the scale starting point into account: in these cases skewness is present if 2SD>(S‐Smin), where S is the mean score and Smin is the minimum score. We reported non‐normally distributed (skewed) data in the 'other data types' tables.
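The skewness screen above, generalised for a positive scale minimum, can be sketched as follows; the function name and the example values are invented for illustration.

```python
# Sketch of the skewness screen: a final-value mean S on a scale with
# minimum Smin is flagged as skewed when 2*SD > (S - Smin); with
# Smin = 0 this reduces to check (b) (Altman 1996). Values invented.
def looks_skewed(mean: float, sd: float, scale_min: float = 0.0) -> bool:
    return 2 * sd > (mean - scale_min)

# PANSS-style scale starting at 30:
print(looks_skewed(60, 20, scale_min=30))  # 40 > 30 -> True (skewed)
print(looks_skewed(60, 10, scale_min=30))  # 20 > 30 -> False
```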

For change data (mean change from baseline on a rating scale) it is impossible to tell whether data are non‐normally distributed (skewed) or not, unless individual patient data are available. After consulting the ALLSTAT electronic statistics mailing list, we entered change data in RevMan analyses and reported the finding in the text to summarise available information. In doing this, we assumed either that data were not skewed or that the analysis could cope with the unknown degree of skew.

2.2 Final endpoint value versus change data 
 Where both final endpoint data and change data were available for the same outcome category, only final endpoint data were presented. We acknowledge that by doing this much of the published change data may be excluded, but argue that endpoint data is more clinically relevant and that if change data were to be presented along with endpoint data, it would be given undeserved equal prominence. Where studies reported only change data we contacted authors for endpoint figures.

For continuous outcomes we estimated a weighted mean difference (WMD) between groups based on a fixed effects model.

3. Rating scales 
 A wide range of instruments are available to measure mental health outcomes. These instruments vary in quality and many are not valid, and are known to be subject to bias in trials of treatments for schizophrenia (Marshall 2000). Therefore continuous data from rating scales were included only if the measuring instrument had been described in a peer‐reviewed journal.

4. Tables and figures 
 Where possible, data were entered into RevMan so the area to the left of the line of no effect indicated a favourable outcome for educational games.

Unit of analysis issues

Cluster trials 
Studies increasingly employ cluster randomisation (such as randomisation by clinician or practice), but analysis and pooling of clustered data pose problems. Firstly, authors often fail to account for intra‐class correlation in clustered studies, leading to a unit‐of‐analysis error (Divine 1992) whereby p values are spuriously low, confidence intervals unduly narrow and statistical significance overestimated. This causes Type I errors (Bland 1997, Gulliford 1999).

Where clustering had not been accounted for in primary studies, we presented the data in a table, with a (*) symbol to indicate the presence of a probable unit of analysis error. In subsequent versions of this review we will seek to contact first authors of studies to obtain intra‐class correlation co‐efficients of their clustered data and to adjust for this using accepted methods (Gulliford 1999). Where clustering has been incorporated into the analysis of primary studies, we will also present these data as if from a non‐cluster randomised study, but adjusted for the clustering effect.

We have sought statistical advice and have been advised that the binary data as presented in a report should be divided by a design effect. This is calculated using the mean number of participants per cluster (m) and the intraclass correlation co‐efficient (ICC) [Design effect=1+(m‐1)*ICC] (Donner 2002). If the ICC is not reported we assumed it to be 0.1 (Ukoumunne 1999). If cluster studies had been appropriately analysed taking into account intra‐class correlation coefficients and relevant data documented in the report, we synthesised these with other studies using the generic inverse variance technique.
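The design‐effect adjustment described above can be sketched in code; the cluster size and sample size in the example are invented for illustration.

```python
# Sketch of the cluster adjustment: the design effect
# 1 + (m - 1) * ICC (Donner 2002) deflates the effective sample size,
# with the ICC assumed to be 0.1 when unreported (Ukoumunne 1999).
# The numbers below are invented for illustration.
def design_effect(mean_cluster_size: float, icc: float = 0.1) -> float:
    return 1 + (mean_cluster_size - 1) * icc

def effective_n(n: float, mean_cluster_size: float, icc: float = 0.1) -> float:
    """Divide a raw count by the design effect."""
    return n / design_effect(mean_cluster_size, icc)

print(round(design_effect(10), 2))     # 1.9
print(round(effective_n(190, 10), 1))  # 100.0
```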

Dealing with missing data

Intention to treat analysis 
 Unless 90% of people were included in the final report of immediate outcomes, 80% for short‐term outcomes and 70% for the rest, we did not include such data as we considered such losses would have been too prone to bias.

We presented data on a 'once‐randomised‐always‐analyse' basis. For outcomes, such as 'important improvement in knowledge' we assumed that those who were lost to follow up had no important improvement in knowledge.
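The 'once‐randomised‐always‐analyse' rule above can be sketched as follows; the record format and function name are a hypothetical simplification.

```python
# Sketch of the imputation rule: anyone lost to follow-up is counted
# as having the negative outcome ('no important improvement').
# The records below are hypothetical.
def itt_event_count(outcomes):
    """Count negative outcomes; None (lost to follow-up) counts as one.
    outcomes: list of True (no improvement), False (improved), None (lost)."""
    return sum(1 for o in outcomes if o is None or o)

print(itt_event_count([True, False, None, False]))  # 2
```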

Assessment of heterogeneity

Firstly, we considered all the included studies within any comparison to judge for clinical heterogeneity. Then we visually inspected graphs to investigate the possibility of statistical heterogeneity. We supplemented this by using primarily the I‐squared statistic. This provides an estimate of the percentage of variability due to heterogeneity rather than chance alone. Where the I‐squared estimate is greater than or equal to 75%, we interpreted this as indicating the presence of considerable levels of heterogeneity (Higgins 2003). If inconsistency remained high, and substantially altered the results we did not add those studies responsible for heterogeneity to the main body of homogeneous trials. The heterogeneous studies were summated and presented separately and reasons for heterogeneity investigated.
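The I‐squared statistic described above can be sketched in code: the percentage of variability attributable to heterogeneity rather than chance, computed from Cochran's Q and its degrees of freedom. The Q value in the example is invented.

```python
# Sketch of I-squared (Higgins 2003): max(0, (Q - df) / Q) * 100,
# where Q is Cochran's heterogeneity statistic and df the degrees of
# freedom (number of studies minus one). Example values are invented.
def i_squared(q: float, df: int) -> float:
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

print(i_squared(8.0, 2))  # 75.0 -> at the 'considerable' threshold
print(i_squared(1.0, 2))  # 0.0
```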

Assessment of reporting biases

Reporting biases arise when the dissemination of research findings is influenced by the nature and direction of results. These are described in the Cochrane Handbook (Higgins 2008). We are aware that funnel plots may be useful in investigating reporting biases but are of limited power to detect small‐study effects. Where only three to four studies reported an outcome, or there was little variety in sample size (or precision estimates) between studies, we did not use funnel plots to test for reporting biases.

Data synthesis

Where possible we employed a fixed‐effects model for analyses. We understand that there is no closed argument for preference for use of fixed or random‐effects models. The random‐effects method incorporates an assumption that the different studies are estimating different, yet related, intervention effects. This does seem true to us, however, random‐effects does put added weight onto the smaller of the studies ‐ those trials that are most vulnerable to bias.

Results

Description of studies

Results of the search

We identified 235 reports and selected 15 citations for further inspection. Thirteen were added to the list of excluded studies, and one study (two citations) was added to the list of included studies.

Included studies

1. Included studies 
 We included one study reported in one paper and one dissertation (Kelly 1992).

1.1 Setting 
 The study utilised a classroom for students on a Registered Mental Nurse (RMN) training programme in Northern Ireland, UK. The students had completed at least 18 months of their training and the games and tests took place in their classroom on a normal working day.

1.2 Participants 
 The trialists identified five senior groups of nursing students from a recognised Registered Mental Nurse training programme as being available during the research period. They selected two of these groups randomly for two runs of the experiment. The group members were then randomly allocated to the experimental or control group by an independent observer.

1.3 Study size 
 Thirty four participants were included. The first run of the study involved 16 students (n=16) of which eight were in the control group and eight in the experimental group. The second run included 18 students (n=18) of which ten were in the experimental group and eight in the control group. We found no power calculations.

1.4 Interventions 
 1.4.1 Experimental intervention 
The game, called 'Trivia Psychotica', is based on the popular 'Trivial Pursuit' game and the rules were left unchanged. The game involved the drawing of question cards for one of two teams. The questions were divided into five categories (schizophrenia, affective psychosis, organic psychosis, pharmacology and pot luck), each containing 50 questions. The players were allowed 15 seconds to answer each question. The rules of the game encouraged players in each team to work together to help answer questions, and points were awarded for correct answers. Although the game is essentially competitive, 'winning' is not a desired endpoint, merely the completion of at least two hours of play.

In both runs, the experimental group played the game after taking the initial multiple choice test on the signs, symptoms and nursing management of people with psychosis. Questions were taken from the curriculum for the RMN training course of the day. A total of 60 questions were used in the test and participants were allowed 45 minutes to complete it. All participants repeated the test after a few hours. The evaluator remained blind to the participants' identity.

1.4.2 Control intervention 
Over the same period, the students allocated to the control group took the same tests as those given to students who had participated in the intervention game. Instead of participating in the game, they were involved in a period of non‐directed study.

1.5 Outcomes 
 The primary outcome for Kelly 1992 is knowledge change, as measured by test scores.

1.5.1 Knowledge about psychosis 
 We do not have a copy of the test. The questions were in keeping with the curriculum of the day for RMN training in Northern Ireland. We were able to extract the scores for each individual student from bar‐charts included in the dissertation we acquired and we can therefore present individual person data.

1.5.2 Other ratings 
 A further semi‐structured interview on only three randomly selected students from each of the two experimental groups explored other qualities of the study (enjoyment, confidence, motivation / concentration and practical skills regarding patient care). As these ratings were on such a small sample of people taken solely from the experimental group, and no numerical data were reported we have not used the findings.

1.6 Duration 
 The initial test lasted for forty five minutes. The educational game lasted for another two hours then the students had a break and other classes. Finally all students took the second test for another forty five minutes.

Excluded studies

1. Excluded studies 
We excluded 13 of the 15 papers we selected for acquisition from the search. Four of these were randomised but did not involve mental health professionals (Burke 2001, Haselhuhn 2005, Ingram 1998, Leblanc 1995). Of these studies, two did involve educational games. Haselhuhn 2005 randomised 80 first year undergraduate business students to a series of economic educational games. Ingram 1998 compared the test scores of an experimental group that used a game with those of a control group that did not, in 217 third year BSc nursing students (not mental health care professionals). Burke 2001 randomised healthcare workers employed in a community facility to a self test module versus a video learning module to increase knowledge; educational games were not involved in this study. Leblanc 1995 randomly allocated first year nursing students to study the effect of lecture discussion versus simulation as a teaching strategy; this study also involved no educational games. 
 
 The other nine were not randomised studies (Gifford 2001, Glendon 2005, McCahan 2002, Metcalf 2003, Skinner 2000, Stokamer 2000, Tankel 1999, Wargo 2000, Youseffi 2000).

2. Awaiting assessment 
 There are no studies that await assessment.

3. Ongoing studies 
 We know of no ongoing studies.

Risk of bias in included studies

1. Loss to follow up 
 Many of the outcomes were rendered unusable because only six participants from the experimental groups provided data. This loss of data is too prone to bias to use, as pre‐stated in the methods section of this review.

Allocation

Kelly 1992 stated that students were randomly allocated a number and then assigned to the experimental or control group. The author, however, is not explicit about the precise methods used, so the reader cannot be fully assured that allocation was truly random or that the sequence of allocation was appropriately concealed.

Blinding

The trialist stated that the people marking the papers were unaware of the students' identity, although the success of this single blinding method was not tested.

Incomplete outcome data

No student withdrew from the study, but they may not have had the choice to refuse either the intervention or the test.

Selective reporting

We are not aware of data being reported selectively.

Other potential sources of bias

The quality of description of randomisation is poor and blinding untested so we rated this study as Category 'B' ‐ likely to be prone to bias. It may therefore overestimate the effect of educational games on learning.

Effects of interventions

1. COMPARISON: EDUCATIONAL GAME + STANDARD LEARNING versus STANDARD LEARNING

1.1 Knowledge of psychosis 
 1.1.1 Not improved to an important extent 
 We arbitrarily defined 'academically important improvement' as a 10% improvement in scores. We have no empirical basis for this, but thought that students would appreciate a gain of such magnitude in their exams. The individual person data function of RevMan allows only calculation of fixed effects Peto Odds Ratio. Those allocated to the educational game were statistically significantly less likely to have 'no academically important improvement' by our definition in the period immediately following the game (n=34, 1 RCT, OR 0.06 CI 0.01 to 0.3, NNT 3 CI 2 to 4).
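The fixed effects Peto Odds Ratio reported above can be sketched for a single 2x2 table. The cell counts below are illustrative only, NOT the trial's actual individual person data (which we do not reproduce here), so the printed values differ from the review's reported OR 0.06 and NNT 3; `peto_or` and `nnt` are hypothetical helper names.

```python
import math

# Sketch of the Peto odds ratio, exp((O - E) / V), for one 2x2 table,
# plus a number needed to teach from the absolute risk reduction.
# Cell counts are invented for illustration.
def peto_or(a: int, n1: int, c: int, n2: int) -> float:
    """a events of n1 in the game group; c events of n2 in the control."""
    N = n1 + n2
    m = a + c                                       # total events
    E = n1 * m / N                                  # expected in game group
    V = n1 * n2 * m * (N - m) / (N ** 2 * (N - 1))  # hypergeometric variance
    return math.exp((a - E) / V)

def nnt(a: int, n1: int, c: int, n2: int) -> float:
    """1 / absolute risk reduction."""
    return 1 / (c / n2 - a / n1)

print(round(peto_or(1, 18, 9, 16), 3))  # 0.093
print(round(nnt(1, 18, 9, 16), 1))      # 2.0
```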

1.1.2 Deterioration 
 The scores of six of the nine students in the control group who showed no improvement actually deteriorated (n=34, 1 RCT, OR 0.08 CI 0.01 to 0.5, NNT 4 CI 3 to 7).

1.1.3 Average test score at the end of the study 
 On average, students taking the test after the game scored six points more than those who did not play the game (n=34, 1 RCT, MD 6.00 CI 2.6 to 9.4). 
 
 1.2 Acceptability: leaving the study early 
 No participants left the trial early.

1.3 Missing or unusable outcomes 
 The trialists did measure enjoyment/confidence, motivation/concentration and improvement of patient management skills, but only on six people in the experimental group. 
 
 1.4 Heterogeneity, sensitivity analyses and tests of publication bias 
 We were unable to carry these out on a single included trial.

Discussion

1. The search 
 We found several studies of educational games in fields other than mental health, indicating that such trials are rare but not unheard of. Kelly 1992 is an innovative and pioneering experiment. However, to our knowledge, no further trials have been conducted and therefore the intriguing results have not been replicated.

2. COMPARISON 1. EDUCATIONAL GAMES + STANDARD LEARNING versus STANDARD LEARNING

2.1 Knowledge of psychosis 
We recognise that our choice of a 10% gain in score as an important one is not based on knowing the actual test or its standards. We arrived at 10% by discussion and agreement with others not aware of the data. We thought that presenting data as binary as well as continuous would be helpful. Both the binary and continuous scores from the trial suggest a statistically significant gain for people learning through the game. If the NNT of 3 truly reflects the potency of this simple technique, it could represent a valuable tool for teachers of mental health professionals. Put another way, we expect that many student nurses would appreciate an average of six additional points in their tests, and for their retention of facts not to deteriorate.

Of course these findings do need to be replicated and we do not know if the early effects of the game are still evident in the medium or long term.

2.2 Acceptability: leaving the study early 
 We found no evidence of the game being unacceptable, but it may have been difficult for the students to refuse to take part either in the tests or the game.

2.3 Missing or unusable outcomes 
 This thoughtfully designed trial also considered issues of enjoyment, confidence, motivation, concentration and improvement of patient management skills. Results were not numerical. In a larger, longer, better funded trial these could be looked at again (see below).

Summary of main results

To our knowledge, this is the only systematic review on educational games for mental health professionals. We may have missed trials, but the Cochrane Schizophrenia Group's register of trials is one of the most comprehensive of its kind. We included only one trial, with limited outcome data, but overall the data favoured educational games as a tool for health care professionals.

Overall completeness and applicability of evidence

We were able to include only one small study of short duration, with limited usable outcome data. Further evidence is needed to support these initial positive findings.

Quality of the evidence

We do not know how randomisation was performed or whether blinding was maintained throughout the study. No power calculations were made, and we do not know whether these improvements in knowledge were retained by the nursing staff.

Authors' conclusions

Implications for practice.

1. For students 
 Educational games could help students gain more points in their tests, especially if they have left revision to the last minute. These games may even prevent forgetting of salient facts during the test.

2. For teachers 
 This group learning approach to gaining knowledge of psychosis may be a valuable last minute addition to standard learning packages and, at the very least, could help students to gain a few more points in their examinations.

3. For manager/policy makers 
 People designing or managing courses may want to consider these innovative approaches to teaching.

Implications for research.

1. General 
 With the guidance of the CONSORT recommendations (Moher 2001), we would hope that future studies would present all methods and numerical data more clearly.

2. Specific 
 This small study demonstrates how playing educational games may increase the short term level of knowledge. This finding should be replicated across many mental health care specialties. It would not seem impossible to randomise hundreds of students to educational games, with both short and longer term follow up (see PICO Table 1).

1. PICO table for future research in educational games.
Type of study: Allocation: randomised. 
 Blindness: single. 
 Design: parallel group. 
 Duration: 8 weeks.
Participants: Any mental health professional at any stage of training, for example, trainee psychiatrists taking exams for higher training. 
 Stage of training: those clearly starting their training were defined as 'early stage' and all others, including those at postgraduate courses, as 'late stage'. 
 N=300. 
 Age: not important. 
 Sex: not important.
Intervention: General: any educational game. A game was defined as any form of play or sport, usually competitive, played according to rules, and decided by skill, strength, or luck. The aim of the game had to be to educate the players with a subsequent increase in knowledge and skill. 
 Specific: a game based on 'Trivial Pursuit' adapted to cover the topics needed by the participants for the examinations they are due to be taking.
Control: Standard educational approach: this included any package of training usually provided for the particular course.
Outcomes: 1. 10% change in knowledge and/or skills in target test paper* 
 2. Average endpoint scores in knowledge or skills in target test paper 
 3. Loss to follow up 
 4. Confidence: measured by Likert scale 
 5. Motivation to revise: measured by Likert scale 
 6. Pass rate of target exam
Notes: * Primary outcome. PICO = participants, interventions, control, outcomes

What's new

Date Event Description
11 November 2009 Amended Contact details updated.

History

Protocol first published: Issue 2, 1999
 Review first published: Issue 2, 2006

Date Event Description
5 August 2009 Amended Contact details updated.
22 December 2008 Amended Edited to new review format and some minor changes.
13 February 2006 New citation required and conclusions have changed Substantive amendment

Notes

None

Acknowledgements

The first version of the protocol was undertaken by Mariana Gomes and Sean Kelly but both have moved on to do other things. The current reviewers have expanded their work and completed the review, but without the idea and thought already invested in the protocol by Mariana Gomes and Sean Kelly this review would have been of poorer quality. The reviewers would also like to thank Clive Adams for his immense support and advice.

Data and analyses

Comparison 1. EDUCATIONAL GAME + STANDARD TRAINING vs STANDARD TRAINING ALONE (all immediate‐term data).

Outcome or subgroup title No. of studies No. of participants Statistical method Effect size
1 Knowledge 1   Peto Odds Ratio (95% CI) Subtotals only
1.1 no significant improvement in test scores (> 10%) 1 34 Peto Odds Ratio (95% CI) 0.06 [0.01, 0.27]
1.2 deterioration in score 1 34 Peto Odds Ratio (95% CI) 0.08 [0.01, 0.47]
2 Knowledge: 2. Average test score (high = good) 1 34 Mean Difference (IV, Fixed, 95% CI) 6.0 [2.63, 9.37]
3 Acceptability: leaving the study early 1 34 Peto Odds Ratio (95% CI) 2.72 [0.00, ∞]

Analysis 1.1. Comparison 1 EDUCATIONAL GAME + STANDARD TRAINING vs STANDARD TRAINING ALONE (all immediate‐term data), Outcome 1 Knowledge.

Analysis 1.2. Comparison 1 EDUCATIONAL GAME + STANDARD TRAINING vs STANDARD TRAINING ALONE (all immediate‐term data), Outcome 2 Knowledge: 2. Average test score (high = good).

Analysis 1.3. Comparison 1 EDUCATIONAL GAME + STANDARD TRAINING vs STANDARD TRAINING ALONE (all immediate‐term data), Outcome 3 Acceptability: leaving the study early.

Characteristics of studies

Characteristics of included studies [ordered by study ID]

Kelly 1992.

Methods Allocation: randomised. 
 Blindness: single. 
 Design: parallel group. 
 Setting: UK. 
 Duration: 8 hours.
Participants Students in Registered Mental Nurse training. 
 N=34. 
 Age: no details. 
 Sex: no details.
Interventions 1. Educational game played for two hours. N=18. 
 2. No educational game. N=16.
Outcomes Knowledge about psychosis: test scores.
Unable to use ‐ 
 Enjoyment: semi‐structured interview (only six people in experimental group). 
 Concentration: semi‐structured interview (only six people in experimental group). 
 Motivation: semi‐structured interview (only six people in experimental group).
Notes  
Risk of bias
Bias Authors' judgement Support for judgement
Adequate sequence generation? Unclear risk Randomised, no further details
Allocation concealment? Unclear risk B ‐ Unclear, no further details
Blinding? 
 All outcomes Unclear risk Single (assessor), untested
Incomplete outcome data addressed? 
 All outcomes Low risk All participants completed the study
Free of selective reporting? Unclear risk No further details
Free of other bias? Unclear risk Unclear

Characteristics of excluded studies [ordered by study ID]

Study Reason for exclusion
Burke 2001 Allocation: randomised. 
 Participants: health care workers employed in a community facility. 
 Interventions: self test module versus video learning module to increase knowledge, not educational games.
Gifford 2001 Allocation: not randomised.
Glendon 2005 Allocation: not randomised.
Haselhuhn 2005 Allocation: randomised. 
 Participants: first year undergraduate business students, not mental health care professionals.
Ingram 1998 Allocation: randomised. 
 Participants: third year BSc nursing students, not mental health care professionals.
Leblanc 1995 Allocation: randomised. 
 Participants: first year nursing students. 
 Interventions: lecture discussion versus simulation as a teaching strategy, not educational game.
McCahan 2002 Allocation: not randomised.
Metcalf 2003 Allocation: not randomised.
Skinner 2000 Allocation: not randomised.
Stokamer 2000 Allocation: not randomised.
Tankel 1999 Allocation: not randomised.
Wargo 2000 Allocation: not randomised.
Youseffi 2000 Allocation: not randomised.

Differences between protocol and review

We upgraded the review into RevMan 5 format with additional headings added, but made no substantial changes to the review.

Contributions of authors

Paranthaman Bhoopathi ‐ updated the protocol, undertook the search, selected material, extracted and assimilated data and wrote the final report.

Rajeev Sheoran ‐ undertook the search, selected the material and extracted data.

Sources of support

Internal sources

  • Bradford District Care Trust, UK.

External sources

  • Cochrane Schizophrenia Group General Fund, UK.

Declarations of interest

None.

Edited (no change to conclusions)

References

References to studies included in this review

Kelly 1992 {published data only}

  1. Kelly LS. Trivia‐Psychotica: the development and evaluation of an educational game for the revision of psychotic disorders in an RMN training programme. Journal of Psychiatric and Mental Health Nursing 1995;2(6):366‐7. [DOI] [PubMed] [Google Scholar]

References to studies excluded from this review

Burke 2001 {published data only}

  1. Burke CT. The influences of teaching strategies and reinforcement techniques on health care workers' learning and retention. The University of Southern Mississippi 2001:176.

Gifford 2001 {published data only}

  1. Gifford KE. Using instructional games: a teaching strategy for increasing student participation and retention. Occupational Therapy in Health Care 2001;15:13‐21. [DOI] [PubMed] [Google Scholar]

Glendon 2005 {published data only}

  1. Glendon K, Ulrich D. Using games as a teaching strategy. Journal of Nursing Education 2005;44(7):338‐9. [DOI] [PubMed] [Google Scholar]

Haselhuhn 2005 {published data only}

  1. Haselhuhn MP, Mellers BA. Emotions and cooperation in economic games. Brain Research Cognitive Brain Research 2005;23(1):24‐33. [DOI] [PubMed] [Google Scholar]

Ingram 1998 {published data only}

  1. Ingram C, Ray K, Landeen J, Keane DR. Evaluation of an educational game for health sciences students. Journal of Nursing Education 1998;37(6):240‐6. [ISSN‐0148‐4834 (Print)] [DOI] [PubMed] [Google Scholar]

Leblanc 1995 {published data only}

  1. Leblanc PA. Attitudes of nursing students towards the elderly as influenced by lecture discussion with and without simulation. The University of Southern Mississippi 1995:91.

McCahan 2002 {published data only}

  1. McCahan C. Improving CNA education with a game show. Geriatric Nursing 2002;23(4):200‐2. [DOI] [PubMed] [Google Scholar]

Metcalf 2003 {published data only}

  1. Metcalf B, Yankou D. Using games to help students understand ethics. Journal of Nursing Education 2003;42(5):212‐15. [DOI] [PubMed] [Google Scholar]

Skinner 2000 {published data only}

  1. Skinner K. Creating a game for sexuality and aging: the Sexual Dysfunction Trivia Game. Journal of Continuing Education in Nursing 2000;31(4):185‐87. [DOI] [PubMed] [Google Scholar]

Stokamer 2000 {published data only}

  1. Stokamer C, Soccio D. Reinvigorating mandatory safety training. Journal of Continuing Education in Nursing 2000;31(4):169‐73. [DOI] [PubMed] [Google Scholar]

Tankel 1999 {published data only}

  1. Tankel K, Wissmann J. Psychopharmacology RACE: a fun road to competence. Journal of Psychosocial Nursing 1999;37(2):30‐5. [ISSN‐0279‐3695 (Print)] [PubMed] [Google Scholar]

Wargo 2000 {published data only}

  1. Wargo C. Blood Clot: gaming to reinforce learning about disseminated intravascular coagulation. Journal of Continuing Education in Nursing 2000;31(4):149‐51. [DOI] [PubMed] [Google Scholar]

Youseffi 2000 {published data only}

  1. Youseffi F, Caldwell R, Hadnot P. Recall Rummy: learning can be fun. Journal of Continuing Education in Nursing 2000;31(4):161‐2. [DOI] [PubMed] [Google Scholar]

Additional references

Altman 1996

  1. Altman DG, Bland JM. Detecting skewness from summary information. BMJ 1996;313:1200. [EDU020600] [DOI] [PMC free article] [PubMed] [Google Scholar]

Ausubel 1968

  1. Ausubel DP. Educational psychology: a cognitive view. New York: Holt, Rinehart and Winston, 1968. [Google Scholar]

Bastable 1997

  1. Bastable SB. Nursing as Educator. Principles of Teaching and Learning. Boston: Jones and Bartlett, 1997. [Google Scholar]

Becchetti 1995

  1. Becchetti R. Shriners' challenge. A game format for a mandatory inservice program. 1995;11(2):83‐7. [ISSN‐0882‐0627 (Print)] [PubMed] [Google Scholar]

Bland 1997

  1. Bland JM. Statistics notes. Trials randomised in clusters. BMJ 1997;315:600. [DOI] [PMC free article] [PubMed] [Google Scholar]

Boissel 1999

  1. Boissel JP, Cucherat M, Li W, Chatellier G, Gueyffier F, Buyse M, et al. The problem of therapeutic efficacy indices. 3. Comparison of the indices and their use. Therapie 1999;54(4):405‐11. [PubMed] [Google Scholar]

COD 1990

  1. Allen RE (ed). The concise Oxford dictionary. Oxford: Clarendon Press, 1990. [Google Scholar]

Deeks 2000

  1. Deeks J. Issues in the selection for meta‐analyses of binary data. Abstracts of 8th International Cochrane Colloquium; 2000 Oct 25‐28th; Cape Town, South Africa. 2000.

Divine 1992

  1. Divine GW, Brown JT, Frazier LM. The unit of analysis error in studies about physicians' patient care behavior. Journal of General Internal Medicine 1992;7(6):623‐9. [DOI] [PubMed] [Google Scholar]

Donner 2002

  1. Donner A, Klar N. Issues in the meta‐analysis of cluster randomized trials. Statistics in Medicine 2002;21:2971‐80. [DOI] [PubMed] [Google Scholar]

Dwinnell 1998

  1. Dwinnell BG, Adams L. Problem‐based learning in medical education. Hospital Practice Office Edition 1998;33(11):15‐6. [DOI] [PubMed] [Google Scholar]

Gruending 1991

  1. Gruending DL, Fenty D, Hogan T. Fun and games in nursing staff development. Journal of Continuing Education in Nursing 1991;22(6):259‐62. [ISSN‐0022‐0124 (Print)] [DOI] [PubMed] [Google Scholar]

Gulliford 1999

  1. Gulliford MC. Components of variance and intraclass correlations for the design of community based surveys and intervention studies: data from the health survey for England 1994. American Journal of Epidemiology 1999;149:876‐83. [DOI] [PubMed] [Google Scholar]

Higgins 2003

  1. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta‐analyses. BMJ 2003;327:557‐60. [DOI] [PMC free article] [PubMed] [Google Scholar]

Higgins 2008

  1. Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions. The Cochrane Collaboration 2008.

Hoffman 1985

  1. Hoffman SB, Brand FR, Beatty PG, Hamill LA. Geriatrix: a role playing game. The Gerontologist 1985;25(6):568‐72. [DOI] [PubMed] [Google Scholar]

Kay 1986

  1. Kay SR, Opler LA, Fiszbein A. Positive and negative syndrome scale (PANSS) manual. North Tonawanda (NY): Multi‐Health Systems, 1986. [Google Scholar]

Leucht 2005a

  1. Leucht S, Kane JM, Kissling W, Hamann J, Etschel E, Engel R. What does the PANSS mean?. Schizophrenia Research 2005;79:231‐8. [DOI] [PubMed] [Google Scholar]

Leucht 2005b

  1. Leucht S, Kane JM, Kissling W, Hamann J, Etschel E, Engel R. Clinical implications of Brief Psychiatric Rating Scale Scores. British Journal of Psychiatry 2005;187:366‐71. [DOI] [PubMed] [Google Scholar]

Lewis 1989

  1. Lewis DJ, Saydak SJ, Mierzwa IP, Robinson JA. Gaming: A teaching strategy for adult learners. Journal of Continuing Education in Nursing 1989;20(2):80‐3. [DOI] [PubMed] [Google Scholar]

Marshall 2000

  1. Marshall M, Lockwood A, Bradley C, Joy C, Fenton M. Unpublished rating scales ‐ a major source of bias in randomised controlled trials of treatments for schizophrenia. British Journal of Psychiatry 2000;176:249‐52. [DOI] [PubMed] [Google Scholar]

McEvoy 1989

  1. McEvoy P. Teaching and learning: a climate of change. Nursing Standard 1989;9(3):36‐8. [DOI] [PubMed] [Google Scholar]

Moher 2001

  1. Moher D, Jones A, Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before‐and‐after evaluation. JAMA 2001;285(15):1992‐5. [ISSN‐0098‐7484 (Print)] [DOI] [PubMed] [Google Scholar]

Newstrom 1980

  1. Newstrom WJ, Scannell EE. Games Trainers Play. New York: McGraw‐Hill, 1980. [Google Scholar]

Overall 1962

  1. Overall JE, Gorham DR. The Brief Psychiatric Rating Scale. Psychological Reports 1962;10:799‐812. [Google Scholar]

Pfieffer 1980

  1. Pfieffer JW, Jones JE. Structured Experience Kit. The Curriculum in Nursing Education. London: Croom Helm, 1980. [Google Scholar]

Quinn 1988

  1. Quinn FM. The Principles and Practice of Nurse Education. 2nd Edition. Suffolk: St Edmunds Press, 1988. [Google Scholar]

Schulz 1995

  1. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408‐12. [DOI] [PubMed] [Google Scholar]

Ukoumunne 1999

  1. Ukoumunne OC, Gulliford MC, Chinn S, Sterne JAC, Burney PGJ. Methods for evaluating area‐wide and organisation‐based interventions in health and health care: a systematic review. Health Technology Assessment 1999;3(5):iii‐92. [MEDLINE: ] [PubMed] [Google Scholar]

Wolkenheim 1990

  1. Wolkenheim BJ, Westdrop J. Games that teach: a practical approach. Journal of Nursing Staff Development 1990;6(1):45‐7. [PubMed] [Google Scholar]
