Psychiatry, Psychology, and Law
2022 Feb 14;30(3):229–248. doi: 10.1080/13218719.2021.2006097

Assessment of self-report response bias in high functioning autistic people

Marilyn A Sher a,b, Caroline Oliver a
PMCID: PMC10281395  PMID: 37346057

Abstract

The study aimed to establish a normative data set for the Paulhus Deception Scales (PDS) and Structured Inventory of Malingered Symptomology (SIMS) in a community adult sample of high functioning autistic (HFA) people. Assessments were administered anonymously online. Seventy surveys were completed, with respondents contributing from 16 countries. Mean scores on the majority of PDS and SIMS subscales, and on both total scores, fell above the cut-off for self-report response bias, suggesting that completion of these measures by HFA individuals may lead to conclusions of intentional response distortion, even when this is not the case. Significant relationships were found between high scores and education level, as well as psychological distress. The findings of the study raise concerns about the use of these measures with HFA people, particularly in ‘high stakes’ situations.

Keywords: high functioning autism, impression management, malingering, response bias, socially desirable responding

Introduction

In prisons, hospitals, probation services and medico-legal assessment contexts across the UK, psychometrics are routinely utilised. These can be used to explore and evaluate treatment efficacy as well as to determine an individual’s risk factors at various points of their offence pathway through the court, detention and community (Wakeling & Barnett, 2014). When faced with offenders or victims who have neurodevelopmental difficulties, psychologists will typically be asked by the court to assess their specific deficits and establish how those relate to their abilities in response to specific legal questions (Salekin et al., 2010), such as their understanding of the trial process and their ability to instruct solicitors or take part in a trial.

In forensic psychology, we often consider risk factors as either static (generally stable) or dynamic (variable/changeable), recognising that multiple variables can impact on dynamic risk (Wakeling & Barnett, 2014). Many psychometrics used in forensic psychology to evaluate individual risk factors, beliefs, personality and other factors rely on self-report, which naturally introduces the risk of response bias. Many of these psychometrics have been criticised as being so transparent that people can easily lie, fake or present themselves in a biased way, and further bias can be introduced by variations or limitations in insight, shame and sense of guilt (Wakeling & Barnett, 2014). One way this has been addressed is by utilising questionnaires that explore typical response biases, such as socially desirable responding (SDR), impression management (IM) and malingering.

In terms of self-report measures of response bias, these tend to fall into two main categories: those that explore malingering – that is, faking or over-reporting psychopathology (Furnham, 1986) – and those that explore socially desirable responding, whereby the person tries to portray themselves in a more favourable light and manage impressions others may form of them (Rogers, 2018a). As such, various tools have been developed that focus on these two key areas. Popular tools exploring self-reported malingering include the Structured Inventory of Malingered Symptomology (SIMS: Widows & Smith, 2005), which is a standalone measure. Conversely, the Paulhus Deception Scales (PDS: Paulhus, 1998) is an example of a standalone measure of socially desirable responding and impression management. Measures that explore both over-reporting/‘faking bad’ as well as under-reporting/‘faking good’ of symptoms and psychopathology by incorporating validity scales within the full assessment include: the Minnesota Multiphasic Personality Inventory (MMPI: Butcher et al., 2001), the Millon Clinical Multiaxial Inventory–Version 4 (MCMI–IV: Millon et al., 2015) and the Multiphasic Sex Inventory (MSI: Nichols & Molinder, 1984). For example, the Debasement (Z scale) on the MCMI–IV provides a measure of the exaggeration of symptoms, and the Desirability scale (Y scale) a measure of socially desirable responding.

Response bias is recognised as situational and context driven, falling on a continuum (Hart, 1995; Rogers et al., 2010; Tan & Grace, 2008; Young, 2017). People may over- or under-report abilities, traits or symptoms to gain something, such as parole, to mitigate risk or to obtain a job (Archer et al., 2016; Cassano & Grattagliano, 2019; Leite, 2015). Thus, it is important to understand the motivations and incentives that may contribute to a person displaying response bias (Ohlsson & Ireland, 2011). At the same time, factors such as IQ (Jobson et al., 2013), personality (Hart, 1995; Impelen et al., 2017; Paulhus, 1998; Tully & Bailey, 2017) and age (Mathie & Wakeling, 2011) have been found to play a contributory role, and are important to understand.

Although measures of response bias have been in use for many decades now, they are not without their shortcomings. The various self-report response bias questionnaires that are frequently used by clinicians as part of their assessments have not been developed with specialist populations in mind. Whilst the British Psychological Society (BPS) does acknowledge the need to consider the impact, for instance, of neurodevelopmental difficulties in relation to response bias, effort and malingering (BPS, 2009), the guidance provides little information on how to do this. Some conditions, such as learning difficulties and social communication disorders, may render people particularly susceptible to producing raised scores on assessments of malingering and effort (Lerner et al., 2012), or conversely to coming across in a socially desirable way (Langdon et al., 2010), and so this remains an important area to address.

Therefore, it is important to develop normative data, bearing in mind different contexts and different populations, so that it is possible to contextualise an individual’s scores with comparable population norms. This may be particularly relevant for those with neurodevelopmental disorders who may inadvertently produce distorted responses on these measures, due to factors such as cognitive inflexibility, suggestibility and social communication deficits, leading assessors to conclude that they are trying to ‘fake good’ or ‘fake bad’, when this is not actually the case.

Varying base rates and differences in cut-off scores for different populations, settings, contexts and even clinicians create problems with the meaningfulness of results from such measures (Drob et al., 2009). Attempting to generalise results from purely clinical contexts to forensic contexts can also have a seriously adverse effect on the quality and accuracy of such assessment (Cassano & Grattagliano, 2019). A significant factor to consider is that response bias may look different in forensic groups as opposed to non-forensic populations (Tan & Grace, 2008). This picture may be further complicated when considered in the context of those who have pre-existing conditions that may render them more likely to appear as if they are malingering or, conversely, engaging in SDR when they are not – for example, those with learning disabilities and autism.

Autistic spectrum disorder (ASD) is a lifelong condition and refers to a spectrum or dyad of neurodevelopmental difficulties that can vary considerably between people, but encompasses impairments with social communication, repetitive behaviours, cognitive flexibility/rigidity, restrictive/specialist interests and problems understanding and making sense of other people’s feelings, thoughts and behaviours (Ali, 2018; Attwood, 2015; Baron-Cohen, 2008; National Autistic Society, NAS, 2019; Wing, 1997). Alongside these, they can also have difficulties with executive function and language, sensory sensitivities, and problems with emotion recognition and regulation (Lai et al., 2014). A type of autism, often described as high functioning or Asperger’s, refers to those who are of average, or above average, intelligence and who may also have better language skills than other autistic people, but may have certain learning difficulties or problems with processing and making sense of communication and social information (NAS, 2019). This can mean that their difficulties may not be as obvious, particularly when considering those who find themselves in trouble with the law (Browning & Caulfield, 2011).

In terms of prevalence, according to the NAS (NAS, 2020a), approximately 1.1% of the UK population has a diagnosis of ASD. However, many go undiagnosed. Lai and Baron-Cohen (2015) highlighted that many autistic females may also have been missed due to variations in how they present. Only with improvements in diagnostic approaches over the last five years have we begun to see an increase in the prevalence of ASD amongst the female population (NAS, 2020b). However, there has been very little research on the prevalence of ASD in criminal justice contexts, which means that many may not have been identified (Ali, 2018; Underwood et al., 2013). Population differences do seem to exist based on the small pockets of research conducted. For instance, Scragg and Shah (1994) found between 1.5% and 2.3% of inpatients in a high secure hospital in the UK had ASD, and Siponmaa et al. (2001) found a prevalence rate of 3%, but a further 15% were identified as having a pervasive developmental disorder. A further study of all UK high secure hospitals conducted by Hare et al. (1999) found a prevalence rate of 1.6%, with the additional finding that those with a diagnosis of ASD tended to be detained over 10 years longer than those with other diagnoses. This suggests that prevalence rates of autism in forensic contexts may be higher than they appear on the surface. As a result, it becomes important to have appropriate normative data for response bias measures that can be used in forensic assessments of autistic offenders across community, secure hospital and prison contexts.

In relation to response bias specifically, some conditions (such as learning difficulties and social communication disorders, such as autism) may render people particularly susceptible to inadvertently producing raised scores on assessments of malingering and may also be seen as displaying less effort (Lerner et al., 2012). This is because autistic people are often described as being ‘black and white’ in their thinking (and so may be inclined to endorse extreme ends of a Likert scale when rating questionnaires) and can find it hard to perspective take, possibly causing difficulty when endorsing scenario-based questions (Mazefsky & White, 2014), both of which could potentially impact on how they respond to questionnaires.

Autistic people also show a greater tendency to be compliant and/or suggestible (Chandler, Russell, & Maras, 2019; Lerner et al., 2012; O’Mahony, 2012) – a further factor that can impact on their response styles when completing questionnaires. Gudjonsson and Clark (1986, p. 84) define suggestibility as ‘The extent to which, within a closed social interaction, people come to accept messages communicated during formal questioning, as the result of which their subsequent behavioural response is affected’. As Gudjonsson (2003) goes on to explain, the definition considers that, in the context of an assessment or interview, five factors are playing a role: ‘a social interaction; a questioning procedure; a suggestive stimulus; acceptance of the stimulus; and a behavioural response’ (p. 345). This demonstrates how many stages can be relevant to, and impact on, autistic people being suggestible in forensic contexts. For example, if an autistic person is asked whether they have experienced something (e.g. felt criticised by other people), they may endorse this as if they had, even if they had not thought so prior to being faced with the question. They may also over-endorse symptoms (Lerner et al., 2012), with the level of suggestibility being very much related to situational demand characteristics (Gudjonsson, 2003). That is, if an autistic individual is suspected of malingering and is informed during assessment that they are being asked about their symptoms of psychosis (using the SIMS), they may feel that they should be reporting symptoms and therefore endorse psychosis-related items, due to a tendency to be suggestible.

NHS Guidance (2019) aims to ensure that autistic individuals can be held responsible for criminal behaviours whilst still ensuring fair and just treatment by the Criminal Justice System (CJS), in line with the Equality Act (2010) and the NHS Long-term Plan (2019). Advocates for people with ASD emphasise the importance of the courts having a better understanding of ASD so that a person’s journey through the CJS, from arrest to sentencing, takes into account the person’s particular needs, vulnerabilities and difficulties (Ali, 2018; Mouridsen, 2012). This is particularly key when considering sentencing, risk management, and treatment options (Ali, 2018; Freckleton, 2013; B. P. Murphy, 2010), and when ensuring that approaches are consistent and fair (Browning & Caulfield, 2011). On that basis, it is crucial that clinicians can approach assessments with relative confidence, and that the information they gain is accurate and reliable. Thus, it is crucial that when examining honesty and accuracy of self-report information, this is done in line with good practice and includes the use of valid and reliable assessment measures that have appropriate normative data.

The current study aims to address some of the gaps in the understanding of self-report response bias amongst an autistic client group. A key focus of the research is to obtain normative data on the SIMS and PDS and explore any differences that emerge between the scores for the general population in the manuals and the scores of high functioning autistic (HFA) people. A high functioning autistic sample was chosen on the basis that it was important that participants were able to read and understand items on questionnaires, and that cognitive functioning was not a confounding factor in the research. It is important to note here that the term ‘high functioning autism’ was used in this research given its prevalence within the literature. It is recognised, however, that this is not without its problems, and the term is increasingly being seen as pejorative, in that it is considered ‘ableist’ (Bottema-Beutel et al., 2021), and that it is better to refer to autistic people and their levels of support, as opposed to functioning. However, there remains no clear consensus on what is considered appropriate terminology (Kenny et al., 2016) in autism research, and for the purposes of the current research, in terms of sampling, the researchers decided to continue to use the term high functioning autism, given its common usage.

It is hypothesised that HFA people will endorse items differently to the general population on measures that assess malingering (such as the SIMS) and on those that assess SDR (such as the PDS), in a context where there should be no motivation to display response bias. Additionally, the relationship between actual psychological distress in people with HFA, as measured using the Clinical Outcomes in Routine Evaluation–Outcome Measure (CORE–OM), a commonly-used generic measure of psychological distress, and subsequent scores on a relevant subscale (Affective Disorders) of the SIMS, is also explored, bearing in mind the tendency towards suggestibility. Therefore, the research is exploratory in nature.

Method

Sample

The final study sample consisted of 70 adults in the community who reported being on the HFA spectrum. A total of 79 respondents consented to complete the survey; however, eight were removed from the data set as they only partially completed the overall questionnaire. One further respondent was excluded after disclosing a diagnosis of learning disability, and so did not meet the criterion for ‘high functioning’. Data were collected anonymously via an online survey from males and females aged 18 or over who confirmed that they had an established diagnosis of autism (high functioning). Confirmation of a diagnosis of autism was established by asking respondents to confirm this when they emailed for the survey link, and again when consenting to participate upon entering the online survey. A section was added to the survey for them to disclose any other diagnoses they had. This enabled a decision to be made as to whether these other diagnoses would have had a marked impact on their responses and required their data to be excluded from the analysis.

The mean age of the sample was 34.01 years (range = 18–69, SD = 13.19). There were slightly more males (53.4%) than females (45.1%) in the sample, and one respondent (1.4%) identified themselves as ‘other’. The majority were employed (43.8%) and had an undergraduate or postgraduate qualification (54.8%). Respondents came from a variety of different countries, with the majority residing in the UK (46.6%) followed by the USA (26%). In relation to diagnoses other than HFA, the majority (57.5%) did not have other diagnoses, followed by a depressive disorder (21.9%), anxiety disorder (6.8%), attention-deficit/hyperactivity disorder (ADHD; 6.8%), personality disorder (2.7%), eating disorder (2.7%) and a trauma-related disorder (1.4%). Although there is high comorbidity of mental health difficulties in people with autism, particularly low mood and anxiety, not all would have sought a formal diagnosis or had symptoms severe enough to meet diagnostic criteria. This may account for the high percentage that did not disclose any other diagnoses in the current community sample.

Procedure/data collection

The study collected data anonymously from respondents in 16 countries, including the UK, USA and a number of European countries, as well as South America and Australia, between 25 February 2020 and 13 April 2020. It is important to note that the data were collected during the COVID-19 pandemic, though this was unintentional. The decision was made to collect data beyond the UK for two reasons: firstly, this study was the first to explore self-report response bias in HFA people. As a result, it was important to establish a general sense of how HFA people approach such measures; the focus could then be narrowed in future research to look at specific groups, for instance by country, gender/sex or other variables. Secondly, expanding the study beyond the UK would generate a larger sample size. ASD organisations and online social media platforms were specifically targeted to try to maximise sample sizes, and these platforms are not country-specific (though limited to English-speaking users). An incentive to participate was offered whereby respondents could opt into a prize draw for a gift voucher once they had completed the study. The advert directed them to email the principal researcher, at which point they were given a link and password to enter the online survey. This was a requirement of the test publishers as the SIMS and PDS are restricted tests.

On accessing the website, participants were asked to confirm that they were over 18 and had a formal diagnosis of autism (high functioning). Once confirmed, they were directed to the next page where they were provided with an information sheet giving details of the study and asking for consent to participate. On completion of the questionnaires, participants were asked to create a unique identifying code, which they were instructed they could use should they wish to withdraw their data at a later date, rather than supplying personally identifiable information. Withdrawal from the study was limited to a specific date, at which point their data were merged with all the other respondents’ data, and so it was not possible to remove their specific responses. None of the respondents requested that their data be withdrawn.

The survey collected demographic data and included three questionnaires: the CORE–OM, PDS and SIMS, details of which are summarised below.

Measures

Clinical Outcomes in Routine Evaluation–Outcome Measure (CORE–OM)

The CORE–OM (Evans et al., 2000) is a 34-item self-report questionnaire of psychological distress. The measure explores subjective well-being (four items), commonly experienced problems or symptoms (12 items) and life/social functioning (12 items). The measure includes six items on risk to self and to others. For the purposes of this study, the risk items were excluded. It has a five-level response choice, ranging from not at all to most or all of the time. The time frame rated is the preceding seven days. Normative data are available for clinical and non-clinical populations as well as males and females separately. The CORE–OM has been used extensively with a range of populations. Evans et al. (2002) undertook an extensive evaluation of the psychometric properties of the CORE–OM on a UK sample. According to the CORE–OM user manual (CORE, 1998), the non-clinical norms were based on a convenience sample of 1106 people, and the clinical norms on a population of 890 people receiving a range of psychological interventions across the UK. The CORE–OM was selected to gauge the level of respondents’ psychological distress as a means to determine whether the SIMS Affective subscale was able to distinguish genuine from malingered mental health symptomology, bearing in mind the problems of suggestibility in those with HFA.

Paulhus Deception Scales (PDS)

The PDS (Paulhus, 1998) is a 40-item self-report questionnaire consisting of two scales – the Self-Deceptive Enhancement Scale (SDE) and the Impression Management Scale (IM). The author states, ‘the PDS captures the two principal forms of socially desirable responding with two (relatively) independent subscales’ (Paulhus, 1998, p. 1). A critique of the PDS (Sher, 2020) concluded that the PDS has demonstrated adequate reliability and validity. A systematic review (Sher, 2020) identified that the PDS is one of the most widely used socially desirable response bias measures in the UK.

Structured Inventory of Malingered Symptomology (SIMS)

The SIMS (Widows & Smith, 2005) consists of 75 items that are rated as true or false. It screens for malingered psychopathology across five scales: Psychosis, Low Intelligence, Neurologic Impairment, Affective Disorders and Amnestic Disorders, as well as providing an overall score of malingering. Impelen et al. (2014) undertook a meta-analysis of the SIMS. Their main findings were that the SIMS could distinguish between honest responders and instructed feigners, could detect elevated scores in groups predicted to be high scorers, and produced false positives in those with diagnoses of schizophrenia, cognitive deficits and psychogenic seizures; when the manualised cut-off scores were applied, they showed adequate sensitivity but poor specificity.
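The sensitivity/specificity trade-off reported by Impelen et al. (2014) can be made concrete with a small sketch. The function and data below are purely illustrative (they are not taken from the SIMS manual): a respondent ‘screens positive’ when their total score exceeds a cut-off, sensitivity is the proportion of genuine malingerers flagged, and specificity is the proportion of honest responders correctly cleared.

```python
def cutoff_performance(scores_malingering, scores_honest, cutoff):
    """Sensitivity and specificity of a screening cut-off (illustrative).

    scores_malingering: total scores from known/instructed feigners.
    scores_honest: total scores from honest responders.
    A score above the cut-off counts as a positive screen.
    """
    # True positives: malingerers correctly flagged by the cut-off.
    tp = sum(s > cutoff for s in scores_malingering)
    # True negatives: honest responders correctly below the cut-off.
    tn = sum(s <= cutoff for s in scores_honest)
    sensitivity = tp / len(scores_malingering)
    specificity = tn / len(scores_honest)
    return sensitivity, specificity

# Hypothetical data: honest responders with genuine symptoms can still
# exceed the cut-off, which is what drives poor specificity.
sens, spec = cutoff_performance([20, 30, 40], [5, 10, 25], cutoff=14)
```

In this toy example sensitivity is perfect (1.0) but specificity is only 2/3, mirroring the pattern Impelen et al. describe: honest responders with genuine difficulties are misclassified as malingering.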

Ethical considerations

Ethics approval for the current study was received from the University Research Ethics Committee on 25 February 2020. No personally identifiable information was collected throughout data collection and data analysis. At various points in the survey, respondents were signposted to organisations that could provide emotional support should the questions being asked cause distress.

Data analysis

All data were analysed using SPSS Version 25. As several of the subscales deviated from normality, it was not possible to use parametric tests (with asymptotic significance) of association and difference. Instead, bootstrap tests of association and difference were employed (as these do not rely upon distributional assumptions), and significance testing was undertaken using bootstrap probability estimates, standard errors and confidence intervals. Bootstrap confidence intervals and significance tests have been shown to be robust for small samples and do not depend on normal distribution assumptions. When comparing groups on multiple occasions there is the risk of finding differences by chance, and thus a false-positive result (Ranganathan et al., 2016). To control for this, analyses were undertaken using the Bonferroni correction.
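The bootstrap and Bonferroni steps described above can be sketched as follows. This is a minimal illustration, not the SPSS procedure the authors actually ran; the function names and data are hypothetical, and a percentile confidence interval stands in for SPSS’s bias-corrected output.

```python
import random
import statistics

def bootstrap_mean_diff_test(a, b, n_boot=1000, seed=0):
    """Two-sample bootstrap test of a difference in means.

    Resamples each group with replacement n_boot times to build a
    bootstrap distribution of the mean difference, returning the
    observed difference and a 95% percentile confidence interval.
    No normality assumption is required.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    diffs = []
    for _ in range(n_boot):
        resample_a = [rng.choice(a) for _ in a]
        resample_b = [rng.choice(b) for _ in b]
        diffs.append(statistics.mean(resample_a) - statistics.mean(resample_b))
    diffs.sort()
    lo = diffs[int(0.025 * n_boot)]
    hi = diffs[int(0.975 * n_boot)]
    return observed, (lo, hi)

def bonferroni(alpha, n_tests):
    """Bonferroni-corrected per-test significance threshold."""
    return alpha / n_tests
```

For example, running ten comparisons at an overall α of .05 gives a Bonferroni-corrected per-test threshold of .005.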

Results

Distribution of CORE, PDS and SIMS total scores and subscales

Table 1 provides descriptive statistics and tests of normality of distribution for each of the CORE–OM, PDS and SIMS subscales. CORE–OM mean scores for all the subscales and total score fell in the ‘clinical range’ when compared to the norms provided in the CORE–OM manual. Normative data in the manual for the CORE–OM corresponded more closely with the 5th and 25th percentile scores found in the current study.

Table 1.

Mean and percentile scores for CORE–OM, PDS and SIMS.

| Scale | Valid N | Manual M | M | Mdn | SD | 5th %ile | 25th %ile | 75th %ile | 95th %ile | 99th %ile | One-sample Kolmogorov–Smirnov statistic | Asymp. p (2-tailed) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CORE | | | | | | | | | | | | |
| Wellbeing | 73 | 3.64 | 8 | 8 | 4 | 1 | 4 | 11 | 14 | 15 | 0.102 | .06 |
| Psych distress | 73 | 10.80 | 24 | 23 | 10 | 8 | 18 | 33 | 39 | 42 | 0.085 | .20 |
| Feelings | 73 | 10.20 | 22 | 23 | 8 | 8 | 16 | 28 | 37 | 39 | 0.066 | .20 |
| Total | 73 | 24.64a | 54 | 55 | 20 | 21 | 42 | 70 | 86 | 91 | 0.061 | .20 |
| PDS | | | | | | | | | | | | |
| IM (t score) | 70 | 50 | 62 | 62 | 9 | 44 | 57 | 67 | 78 | 80 | 0.112 | .03* |
| SDE (t score) | 70 | 50 | 53 | 49 | 10 | 42 | 46 | 57 | 72 | 76 | 0.22 | .01* |
| Total (t score) | 70 | 50b | 64 | 62 | 12 | 46 | 54 | 70 | 86 | 97 | 0.127 | .01* |
| SIMS | | | | | | | | | | | | |
| Psychosis | 70 | 0.82 | 3 | 1 | 4 | 0 | 0 | 5 | 12 | 13 | 0.279 | .01* |
| Neurologic Impairment | 70 | 1 | 5 | 3 | 4 | 0 | 2 | 7 | 13 | 15 | 0.191 | .02* |
| Amnestic Disorders | 70 | 1.15 | 5 | 4 | 4 | 0 | 1 | 8 | 12 | 15 | 0.17 | .03* |
| Low Intelligence | 70 | 1.42 | 2 | 2 | 2 | 0 | 1 | 4 | 6 | 7 | 0.214 | .04* |
| Affective Disorders | 70 | 3.27 | 6 | 7 | 2 | 3 | 4 | 8 | 11 | 12 | 0.101 | .08 |
| Total | 70 | 7.67c | 21 | 18 | 13 | 6 | 11 | 28 | 53 | 56 | 0.146 | .01* |

Note: CORE–OM = Clinical Outcomes in Routine Evaluation–Outcome Measure; PDS = Paulhus Deception Scales; SIMS = Structured Inventory of Malingered Symptomology; IM = Impression Management Scale; SDE = Self-Deceptive Enhancement Scale; asymp. = asymptotic.

aBased on 1084 non-clinical population (CORE System Group, 1998); bBased on 441 general population (Paulhus, 1998); cBased on 238 honest responders (Widows & Smith, 2005).

*p < .05.

One of the primary aims of the study was to generate normative data for the SIMS and PDS for a HFA population. The above scores were therefore compared with the norms provided in each respective manual.

The normative data reported in the SIMS manual (Widows & Smith, 2005) consisted of 476 undergraduate students from four universities in the USA, who participated in an ‘analogue simulation study’ (p. 12). Ages ranged from 17 to 66 years (mean age 24.43 years), and the sample comprised mainly females (71%). Compared to the normative data reported in the SIMS manual, the mean score of the respondents in this study fell above cut-off on all subscales and total scores of the SIMS, indicating that the HFA respondents, as a whole, would be considered to be malingering their symptomology in relation to psychosis, neurologic impairment, amnestic disorders and affective disorders. The exception was the Low Intelligence subscale, where the mean score of 2 fell at the cut-off (>2).

The normative data in the PDS manual (Paulhus, 1998) consisted of 441 American and Canadian urban and rural respondents between the ages of 21 and 75 years. Compared to the normative data in the PDS manual, mean scores for the PDS SDE subscale and Total PDS scores for the current sample fell in the ‘average’ and ‘above average’ range, respectively. This indicates that HFA people are no more likely than the general population to demonstrate SDE; their IM scores most likely accounted for the ‘elevation’ of the Total score. In relation to the PDS IM score, the current sample’s mean scores fell in the ‘above average’ range. This suggests that HFA people score more highly on impression management than the general population, under conditions of low demand characteristics. In terms of the invalidity cut-off score for the IM scale, the mean of 10.86 (SD = 3.432) was identified according to the manual as ‘may be faking good’, suggesting that HFA people as a whole show a tendency towards presenting themselves in a positive light.

Within-group differences

Age

Correlations were undertaken to explore any associations between age and scores on the CORE–OM, PDS and SIMS. Only the SIMS Psychosis subscale evidenced a significant correlation with age (r = −.304, p < .01), indicating that scores on this subscale go down as age increases. Age differences are not provided in any of the measure manuals, but the finding that endorsement of psychosis items reduces with age is in line with other studies, as previously discussed, in which the SIMS produced false positives in those with a diagnosis of schizophrenia (Impelen et al., 2014).

Gender differences

In order to explore differences in gender, a series of bootstrapped t tests between males and females were undertaken. The only significant differences found were on the SIMS Psychosis (p < .01) and Neurologic Impairment (p < .001) subscales and the SIMS Total score (p < .05), where males scored higher than females. Differences in scores across males and females are not reported in the PDS and SIMS manuals. In relation to the CORE–OM, the manual reports some small significant differences between the non-clinical male and female sample, with females scoring higher than males. However, differences were not evident between males and females in their clinical sample, with the exception of the Wellbeing subscale (p < .001).

Differences in country of origin

Only the UK (n = 34) and the USA (n = 19) had more than three valid responses. Accordingly, it was not possible to assess differences involving respondents from the remaining countries, each of which contributed fewer than three valid responses. Table 2 summarises differences between the UK and USA samples on the CORE–OM, PDS and SIMS subscale and total scores.

Table 2.

Bootstrap t test of differences between the UK and the USA samples.

Bootstrap statistics based on 1000 repetitions.

| Scale | UK M | UK SD | UK N | USA M | USA SD | USA N | M Diff | t | Bias | SE | p (2-tailed) | 95% CI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CORE–OM | | | | | | | | | | | | |
| Wellbeing | 8.41 | 3.47 | 34 | 5.53 | 3.78 | 19 | 2.716 | 2.677 | −0.003 | 1.063 | .007* | [0.531, 4.606] |
| Psych distress | 26.24 | 9.65 | 34 | 20.47 | 9.83 | 19 | 5.526 | 1.969 | −0.001 | 2.792 | .05* | [−0.077, 10.755] |
| Feelings | 24.41 | 7.65 | 34 | 19.74 | 8.42 | 19 | 4.233 | 1.902 | 0.067 | 2.285 | .077 | [−0.415, 8.696] |
| Total | 59.06 | 18.59 | 34 | 45.74 | 19.84 | 19 | 12.475 | 2.303 | 0.064 | 5.542 | .024* | [1.428, 22.914] |
| PDS | | | | | | | | | | | | |
| IM t score | 59.91 | 7.82 | 33 | 66.05 | 8.87 | 19 | −6.144 | −2.597 | −0.063 | 2.431 | .018* | [−10.914, −1.308] |
| SDE t score | 50.24 | 7.42 | 33 | 54.32 | 10.41 | 19 | −4.073 | −1.641 | −0.024 | 2.746 | .135 | [−9.904, 1.025] |
| Total t score | 60.27 | 8.84 | 33 | 69.53 | 13.69 | 19 | −9.254 | −2.964 | −0.061 | 3.512 | .017* | [−15.965, −1.873] |
| SIMS | | | | | | | | | | | | |
| Psychosis | 3.55 | 4.19 | 33 | 3.63 | 4.37 | 19 | −0.086 | −0.07 | −0.015 | 1.23 | .005* | [−2.504, 2.199] |
| Neurologic Impairment | 4.88 | 4.34 | 33 | 5.32 | 4.18 | 19 | −0.437 | −0.355 | −0.023 | 1.206 | .944 | [−2.802, 1.897] |
| Amnestic Disorders | 5.24 | 4.47 | 33 | 6.21 | 4.44 | 19 | −0.968 | −0.753 | −0.013 | 1.285 | .724 | [−3.61, 1.564] |
| Low Intelligence | 3 | 2.18 | 33 | 1.26 | 1.37 | 19 | 1.737 | 3.13 | 0.006 | 0.498 | .003* | [0.834, 2.683] |
| Affective Disorders | 7.03 | 2.42 | 33 | 5.58 | 2.78 | 19 | 1.451 | 1.975 | −0.017 | 0.766 | .054 | [−0.12, 2.988] |
| Total | 23.7 | 14.37 | 33 | 22 | 14.66 | 19 | 1.697 | 0.407 | −0.06 | 4.186 | .708 | [−6.545, 9.757] |

Note: CORE–OM = Clinical Outcomes in Routine Evaluation–Outcome Measure; PDS = Paulhus Deception Scales; SIMS = Structured Inventory of Malingered Symptomology; IM = Impression Management Scale; SDE = Self-Deceptive Enhancement Scale; CI = confidence interval.

*p < .05.

Seven of the measures (CORE Wellbeing, CORE Psych distress, CORE Total, PDS IM, PDS Total, SIMS Psychosis and SIMS Low Intelligence) showed significant differences between the UK and the USA respondents. In relation to the CORE–OM Wellbeing and Distress subscales and CORE–OM Total score, UK mean scores were significantly higher than USA mean scores (p < .05), indicating higher levels of psychological distress and lower levels of wellbeing among UK participants. The PDS IM mean subscale score was lower in the UK than in the USA (p < .05), with a similar finding on the PDS Total score (p < .01), indicative of the UK sample being less likely to impression manage than the USA sample, with USA scores falling in the ‘above average’ range, according to the PDS manual. The SIMS Psychosis mean subscale score was also higher in the USA than in the UK (p < .01), indicating that USA respondents reported more malingered psychotic symptoms than UK respondents, though according to the SIMS manual both groups still scored above cut-off for malingering psychotic symptoms. In contrast, the Low Intelligence mean subscale scores were higher in the UK than in the USA (p < .01), suggesting that UK respondents reported more malingered symptoms associated with cognitive deficits, also scoring above cut-off, indicative of malingering of learning problems. A t test was undertaken to explore whether age was a factor in the differences in scores between the USA and the UK, but no significant difference in mean age was found. In addition, a chi-square analysis of the gender distribution between the two countries showed no difference, suggesting that gender was not underpinning these differences in scores either.

Education status

Differences in the CORE–OM, PDS and SIMS subscale and total scores by educational status were examined using analysis of variance (ANOVA). A significant difference was found for education level in relation to the CORE–OM Wellbeing subscale, F(6, 66) = 3.271, p < .05, the CORE–OM Feelings subscale, F(6, 66) = 3.844, p < .01, and the CORE–OM Total score, F(6, 66) = 2.875, p < .05. Post hoc comparisons using the Games–Howell test indicated that, in relation to the CORE–OM Total score, respondents with up to a GCSE/O-level qualification had higher scores (possibly reflecting higher psychological distress) than undergraduates (p < .05). In addition, in relation to the CORE–OM Wellbeing subscale, GCSE/O-level respondents showed higher scores than undergraduates (p < .01) and postgraduates (p < .05), which may imply lower levels of wellbeing amongst the GCSE/O-level respondents. A similar finding was evident in relation to the CORE–OM Feelings subscale, with GCSE/O-level respondents showing higher scores than undergraduates (p < .01) and postgraduates (p < .01), potentially indicating greater levels of psychological distress amongst the GCSE/O-level respondents. However, these results should be interpreted with caution given the small sample sizes.
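The omnibus comparison above can be sketched as a one-way ANOVA; the groups and scores below are invented stand-ins, and the Games–Howell post hoc test is not part of SciPy itself (it is available in, for example, the pingouin package as `pairwise_gameshowell`).

```python
# One-way ANOVA across three hypothetical education groups, analogous to
# the education-level comparison on CORE-OM scores described above.
# All numbers are made up for illustration only.
from scipy.stats import f_oneway

gcse = [18, 21, 19, 24, 22]        # hypothetical CORE-OM totals per group
undergrad = [11, 9, 13, 10, 12]
postgrad = [12, 10, 14, 11, 13]

f_stat, p_value = f_oneway(gcse, undergrad, postgrad)
print(p_value < 0.05)  # clearly separated group means -> significant
```

A significant omnibus F would then justify pairwise post hoc comparisons such as Games–Howell, which (unlike Tukey's test) does not assume equal group variances or sizes.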

No differences were evident on any of the measures in relation to secondary diagnosis and work status.

Reliability of the measures

In order to establish whether the PDS, SIMS and CORE–OM are psychometrically sound when used with a HFA population, internal consistency of all measures and convergent validity of the SIMS were explored.

Internal consistency

The internal consistency of the scales was calculated using Cronbach’s α. Table 3 provides a summary of the internal reliability of the CORE–OM, PDS and SIMS total and subscale scores.

Table 3.

 Internal reliability of the CORE, PDS and SIMS total scores and subscales.

Scale                    Number of items   Cronbach’s α coefficient   Cronbach’s α per manual
CORE
  Total                  28                .929                       .94
  Wellbeing              4                 .822                       .77
  Psych distress         12                .889                       .90
  Feelings               12                .803                       .86
PDS
  Total (t score)        40                .712                       .85
  IM (t score)           20                .668                       .84
  SDE (t score)          20                .654                       .75
SIMS
  Total                  75                .936                       .88
  Psychosis              15                .899                       .82
  Neurologic Impairment  15                .853                       .83
  Amnestic Disorders     15                .882                       .83
  Low Intelligence       15                .624                       .85
  Affective Disorders    15                .498                       .86

Note: CORE–OM = Clinical Outcomes in Routine Evaluation–Outcome Measure; PDS = Paulhus Deception Scales; SIMS = Structured Inventory of Malingered Symptomology; IM = Impression Management Scale; SDE = Self-Deceptive Enhancement Scale.

All total scores were well within the acceptable range (above .7): CORE–OM (n = 73), Cronbach’s α = .929; PDS (n = 70), α = .712; SIMS (n = 70), α = .936. However, at subscale level, Cronbach’s α fell below the acceptable range on the PDS SDE scale (.654), the SIMS Low Intelligence subscale (.624) and the SIMS Affective subscale (.498), indicating that there are items on those subscales that are not correlating well and are thus not measuring the same construct. This raises concerns about the use of these subscales with HFA people.
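For readers unfamiliar with the statistic, Cronbach's α can be computed directly from item-level scores as (k / (k − 1)) × (1 − Σ item variances / variance of total scores). A minimal sketch with invented data (not the study's item responses):

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency statistic
# reported in Table 3. The item scores below are hypothetical.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across respondents."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]       # per-respondent total
    item_var = sum(pvariance(item) for item in items)  # sum of item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three hypothetical items answered by five respondents
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 5],
]
print(round(cronbach_alpha(items), 3))  # -> 0.886
```

Values near 1 indicate that the items covary strongly (measure the same construct); the low α values above for the SIMS Low Intelligence and Affective subscales mean their items do not hang together in this sample.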

Convergent validity

Given that the SIMS explores malingering of symptomology, it was considered useful to explore how actual levels of psychological distress (measured using the CORE–OM) in HFA people relate to their scores on the relevant SIMS subscale (Affective Disorders), possibly due to suggestibility. Convergent validity between the CORE–OM Total and the SIMS Affective scale was assessed using Pearson correlation. A significant relationship was found, with high total scores on the CORE–OM correlated with high scores on the Affective scale of the SIMS (r = .604, p < .001). This suggests, firstly, that the SIMS Affective subscale measures a construct similar to the CORE–OM. Secondly, it may also suggest that genuinely experienced psychological distress would be identified as malingering of affective symptoms when using the SIMS with a HFA population.

The CORE–OM Cronbach’s α values in the current study were largely in line with those reported in the manual for the non-clinical population. For the PDS, the Cronbach’s α values in the current study were lower than those in the manual for the general population, on both the subscales and the total score. The SIMS fared similarly in the current study compared to the manual’s general population, with the exception of the Low Intelligence and Affective subscales, both of which, particularly the Affective subscale, demonstrated poor internal consistency in the current study. These findings suggest that the CORE–OM and PDS could be used with a HFA population, but that certain scales on the SIMS would not be appropriate to use with HFA people.

Qualitative feedback

Although not specifically requested, some respondents provided feedback, highlighting certain difficulties they experienced when completing the measures. For instance, some respondents felt that some items were ambiguous. At times they were unsure how to endorse an item as they expressed that their feelings may differ in different situations and so, in order to answer a question accurately, felt they needed a scenario or context around it. Additionally, some experienced frustration at not knowing what something meant, having some initial thoughts and then doubting themselves. All of these factors are likely to have impacted on how they approached their responses to items on the survey.

Discussion

High functioning autistic (HFA) people sometimes find themselves involved in the criminal justice system, whether as a victim, witness or offender. As part of this process, they are likely to be assessed in some form, in terms of their ability to give evidence or participate in a trial, their level of risk and responsibility, or treatment responsivity. BPS (2009) guidance recommends that assessments in criminal justice contexts should include consideration of response bias, more specifically socially desirable responding, impression management and malingering. However, none of the self-report measures frequently used by psychologists have been designed or validated for use with an autistic population. This means that conclusions may be drawn on the role of response bias in assessments that are inaccurate and lack empirical support. This study aimed to address this gap in the evidence base.

The study hypothesised that HFA people, under conditions of low situational demand, would score differently on two commonly used tools in the UK – the PDS and the SIMS – when compared to the normative data in the published manuals. This was hypothesised because previous research had identified that HFA people can display response biases to certain types of questioning styles (Lerner et al., 2012; O’Mahony, 2012). This hypothesis was largely supported in the current study. In relation to the SIMS, HFA respondents scored above cut-off on the Psychosis, Neurologic Impairment, Amnestic Disorders and Affective Disorders subscales, as well as the Total score. The only subscale on which HFA respondents did not score above the cut-off was the Low Intelligence subscale. In relation to the PDS, the scores of HFA people were in line with the general population for Self-Deceptive Enhancement (SDE), but fell in the above-average ranges on the Total score and Impression Management (IM) subscale. Their elevated IM scores were reflective of possible ‘faking good’ as measured by this test. This is an important finding, as it may suggest that the IM scale on the PDS, when used with HFA people, can lead to a false positive for impression management, with significant negative implications for how their reporting is viewed by assessing clinicians. It is also important to consider that respondents in the current study had complete anonymity and still produced elevated scores, and to ask what the impact would be if they were assessed in person.

This general pattern of elevated scores on measures of malingering and socially desirable responding is important to consider in the context of respondents having no clear motive for displaying response bias, as in the current study, where their responses were anonymous and outside of a ‘high stakes’ context. The current study’s findings are in line with Lerner et al. (2012), who found an increased likelihood of elevated scores on assessments of effort and malingering amongst those with neurodevelopmental conditions, including autism. The reason for this is likely related to some of the key attributes of HFA people, which include difficulties with flexible cognitive processing, perspective taking, processing social and emotional information, and theory of mind more generally (Ali, 2018; Baron-Cohen, 2008; Lerner et al., 2012; NAS, 2019; Wing, 1997). The tendency for those with neurodevelopmental conditions such as learning disability and autism to be suggestible is also a likely contributory factor to these findings (Chandler et al., 2019; Lerner et al., 2012; O’Mahony, 2012).

The two measures explored, however, deserve closer inspection. The SIMS is a self-report measure of malingered psychopathology (Widows & Smith, 2005). One explanation for elevated scores amongst a HFA population is that they may actually be experiencing higher levels of co-morbid mental health disorders, which fits with previous research (C. M. Murphy et al., 2016). However, this would suggest that the SIMS is not effectively discriminating between genuine and faked symptomology. A further explanation may relate to HFA people being found to be more suggestible than the general population (Chandler et al., 2019): the way in which ‘symptoms’ are presented in the SIMS may serve to ‘prime’ HFA people to endorse a symptom as something they have experienced. These factors may explain the elevated scores on most of the subscales, but do not explain the finding that the Low Intelligence subscale was not above cut-off in this population. This is likely related to the nature of the Low Intelligence questions at an individual level. The items that make up the Low Intelligence subscale consist mainly of basic general knowledge or mathematical calculations that are very obviously either correct or incorrect. This subscale therefore taps into an alternative cognitive structure – one that is factually based, as opposed to the somatic or psychological experiences that the other subscales measure. Thus, in the context of not having any clear incentive or motive to malinger, respondents correctly identified which items were true and false.

The patterns of scores for the PDS were interesting and also deserve closer inspection. In contrast to the SIMS, the PDS assesses socially desirable responding and separately considers SDE and IM (Paulhus, 1998). The HFA respondents in the current study scored in line with the general population for SDE, indicating that they are no more likely than the general population to display rigid over-confidence. This finding makes sense if we consider that, whilst HFA people may effectively ‘mask’ to compensate for social and communication difficulties (Kaland et al., 2007; Tager-Flusberg, 2007), they largely lack confidence and experience anxiety about their social selves. This also fits with Baron-Cohen’s (1992) findings that autistic people struggle to be deceptive in more complex contexts.

This may go some way towards explaining the elevated IM scores, and is suggestive of HFA people portraying themselves in a more positive light. However, a further reason for the elevated IM scores may relate to the way the PDS is designed and scored. The PDS presents a statement that is rated on a scale of 1 to 5, with the extreme ends of the continuum attracting a score. This lends itself to higher item endorsement, because HFA people are more prone to rigid or black-and-white thinking (Mazefsky & White, 2014) and have a propensity to adhere to social and moral rules that are explicitly outlined (Grant et al., 2018), particularly when worded in the way they are in the PDS. As a result, they may be mistakenly seen as ‘faking good’ when in fact they are responding honestly, or are at least unaware that they are endorsing distorted responses.

The analysis of scores also identified some individual differences amongst the HFA sample, which need further consideration. Respondents in this study did not differ significantly on the PDS and SIMS with regard to secondary diagnoses, level of education or work status. In terms of age, the only significant finding was that scores on the SIMS Psychosis subscale decreased with age. This is perhaps unsurprising given that the actual experience of psychotic symptomology in the general population decreases with age (Auslander & Jeste, 2004), but that the SIMS Psychosis scale produces false positives with those diagnosed with schizophrenia (Impelen et al., 2014). However, the lack of variability with age across all other subscales of the SIMS, as well as the PDS, contrasts with the finding in the forensic study by Mathie and Wakeling (2011) that age correlated with high IM scores on the PDS. This again may be explained by the differing motives in this context compared to offending populations, where extrinsic factors may contribute to respondents displaying self-report response bias.

Despite the growing evidence that there are key differences between autistic males and females (NAS, 2020b), the only significant differences in scores in this study were found on the SIMS Total score and the Psychosis and Neurological Impairment subscales. This generally suggests that HFA males and females are no more or less likely to over-endorse psychopathology on self-report measures. However, when using the SIMS, some alternative means and percentiles should be used for the specific scales where differences between genders were identified.

Differences in response styles on the PDS and SIMS were evident when comparisons were made between the UK and the USA. The direction of these differences contrasted with the previous general-population findings for the PDS reported by Tully and Bailey (2017), whose study found slightly lower SDE scores but higher IM scores in the UK than in USA/Canadian norms, whereas the current study found lower UK IM scores than USA scores. Despite this, the UK norms for the current sample were more closely matched to Tully and Bailey’s (2017) mean scores. This may suggest that, at least on the PDS, UK norms may differ from those in the manual, but that HFA scores in the UK are relatively similar to those for the UK general population. The only significant differences evident on the SIMS were for the Psychosis and Low Intelligence scales. These differences may reflect actual differences, but may also be confounded by other variables. In the current study, however, t tests in relation to age and chi-square analysis of gender did not identify differences between the UK and USA, suggesting that other factors contribute to the differences on the Psychosis and Low Intelligence subscales of the SIMS.

Given the variation in norms in the current HFA sample compared to the normative data in the respective manuals of the PDS and SIMS, these measures may still be usable with an HFA population, but higher cut-off scores would need to be utilised. The measures themselves demonstrated good internal consistency, with the exception of the SIMS Low Intelligence and Affective subscales and, to some extent, the PDS SDE subscale. This introduces some caution when interpreting these subscales with a HFA population.

In relation to ratings on the CORE–OM, whilst the respondents’ scores all fell well within the clinical range, indicating high levels of psychological distress, the impact of data collection taking place during the COVID-19 pandemic cannot be ignored. This may be particularly relevant given how change in routine and structure can increase anxiety in autistic people (NAS, 2019). Scores may therefore have been elevated as a result.

Other variations in CORE–OM scores may also be worth noting, again with some caution due to the period when data collection took place. If SIMS scores were elevated amongst HFA people compared to the general population, this may be because they genuinely experience higher levels of psychological distress than the general population, in line with previous studies (Anckarsäter et al., 2008; C. M. Murphy et al., 2016). Their CORE–OM scores falling in the clinical ranges may be evidence of this. However, this would mean that the SIMS was producing false positives – that it was not able to distinguish between real and malingered symptomology.

No differences were evident in relation to age, secondary diagnosis or work status. However, differences were evident between the UK and USA scores, with UK means being higher. In addition, CORE–OM Total scores appeared significantly lower amongst those with higher levels of education. However, this may be an artefact of the COVID pandemic (such as greater financial difficulties for those with lower levels of education due to the economic impact of the pandemic), thus confounding the results. Other potential reasons for these differences are beyond the scope of this study and would require larger samples.

Conclusions and limitations

The use of measures of malingering and socially desirable responding has its place as part of wider clinical assessments in medico-legal contexts. However, it is imperative that we utilise measures that are valid and reliable in order to have confidence in the conclusions that we draw. This study provides evidence that scores on the PDS and SIMS do differ amongst a HFA population, and that relying purely on normative data in the respective manuals may increase the likelihood of false-positive outcomes. Whilst the PDS and SIMS would not be used alone to determine malingering or SDR, their contribution to the process must be evidence based or not included at all. This study provides a starting point for establishing the evidence base for the use of these tools with HFA people, though interpretation of certain subscales, particularly the SIMS subscales and the PDS SDE subscale, may need to be approached with caution due to poor internal consistency when used with HFA people. Whilst the current study provides some evidence that these measures operate differently in a HFA population, the data collected are not sufficient to provide alternative normative data for a HFA population, and further research is needed. In forensic and clinical contexts, however, the current findings give clinicians some supporting evidence that these measures should ideally not be used with HFA people. This is especially important as we are still in the early stages of understanding how and why these measures operate differently amongst HFA populations.

Furthermore, whilst the PDS and SIMS are relatively quick and cost-effective psychometrics to add to an assessment, there is growth in alternative performance-based measures, particularly for assessing malingering, which may be more useful to rely on than self-report measures such as the SIMS. In relation to SDR, alternatives to the PDS are limited, and the general concepts of SDE and IM are difficult to measure accurately; in these cases, exploring motive and context and drawing on collateral information will go a long way towards substantiating or disproving these types of response biases.

The study is not without its limitations. Small sample sizes impacted on the extent to which meaningful analyses could be undertaken in terms of exploring individual differences within the current sample. Future research, using larger sample sizes, can more fully explore the impact of individual differences, such as ethnicity and culture, on the way in which HFA individuals respond to assessments of response bias.

Perhaps most importantly, it was not possible to ascertain with certainty that a respondent did, in fact, have a formal diagnosis of autism (high-functioning type) in line with the rigorous approach recommended by the National Institute for Health and Care Excellence (NICE). Although the researcher specifically targeted ASD online groups, ultimately the research relied on the honesty and understanding of participants. Diagnostic approaches to ASD/HFA may also differ between countries; hence some individuals may be considered to meet the criteria in one country and not in another. These factors mean that some respondents may have completed the survey even though they did not have the appropriate diagnosis, impacting on the validity of the results. Future research on those with a confirmed diagnosis is required.

The study was also undertaken during the COVID-19 pandemic, and the pandemic’s impact on how respondents approached certain measures requires consideration. The most obvious impact was on the CORE–OM, as it is a measure of psychological distress; there was likely less impact on the PDS and SIMS, given the nature of their items and content. Further research exploring differences in CORE–OM scores before, during and after the pandemic may shed light on the level of impact the pandemic had on the current data. In addition, expanding the normative data set for the PDS and SIMS with a HFA sample would increase the applicability of the results of this study.

Acknowledgements

The study formed part of a doctoral dissertation at the University of Birmingham. Marilyn Sher would like to acknowledge the guidance and support of her doctoral supervisor, Caroline Oliver. We would also like to thank all the wonderful participants who took part in this study, despite the challenging conditions we all faced during the COVID-19 pandemic.

Availability of data and material

Anonymised data are available in SPSS format.

Ethical standards

Declaration of conflicts of interest

Marilyn A. Sher has declared no conflicts of interest.

Caroline Oliver has declared no conflicts of interest.

The authors did not receive any financial support from any organisation for the submitted work. The first author was a student at the University of Birmingham, and the study was undertaken as part of a practitioner doctorate. The second author works for the University of Birmingham and is the course director. The second author supervised the study as part of the doctorate.

All authors certify that they have no affiliations with or involvement in any organisation or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee (the University of Birmingham Research Ethics Committee, which granted approval on 25 February 2020) and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Consent to participate and for publication

Informed consent was obtained from all individual participants included in the study. Respondents with a diagnosis of high functioning autism were approached via social media platforms and autism support groups and were asked to express an interest in taking part. The advert directed them to email the principal researcher, at which point they were given a link and password to enter the online survey. This was a requirement of the test publishers as the SIMS and PDS are restricted tests. Once respondents arrived at the website, they were asked some preliminary questions to confirm they were over 18 and had a diagnosis of HFA. Once confirmed, they were directed to the next page where they were provided with an information sheet giving details of the study and asking for consent to participate. Once consented, the survey opened. As soon as respondents completed the questionnaires, they were asked to create a unique identifying code. They could use this code, rather than any personally identifiable information, to withdraw their data from the study, should they subsequently wish to do so. Respondents were given the option to withdraw from the research, but opportunity to withdraw was limited to a specific date, at which point their data were merged with all the other respondents’ data, and so it was not possible to remove their specific responses.

References

1. Ali, S. (2018). Autistic spectrum disorder and offending behaviour – a brief review of the literature. Advances in Autism, 4(3), 109–121. 10.1108/AIA-05-2018-0015
2. Anckarsäter, H., Nilsson, T., Saury, J.-M., Råstam, M., & Gillberg, C. (2008). Autism spectrum disorders in institutionalized subjects. Nordic Journal of Psychiatry, 62(2), 160–167. 10.1080/08039480801957269
3. Archer, R. P., Wheeler, E. M. A., & Vauter, R. A. (2016). Empirically supported forensic assessment. Clinical Psychology: Science and Practice, 23(4), 348–364. 10.1111/cpsp.12171
4. Attwood, T. (2015). The complete guide to Asperger’s syndrome. Jessica Kingsley Publishers.
5. Auslander, L. A., & Jeste, D. V. (2004). Sustained remission of schizophrenia among community-dwelling older outpatients. The American Journal of Psychiatry, 161(8), 1490–1493. 10.1176/appi.ajp.161.8.1490
6. Baron-Cohen, S. (1992). Out of sight or out of mind? Another look at deception in autism. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 33(7), 1141–1155. 10.1111/j.1469-7610.1992.tb00934.x
7. Baron-Cohen, S. (2008). Autism. British Journal of Psychiatry, 193(4), 321. 10.1192/bjp.193.4.321
8. Bottema-Beutel, K., Kapp, S. K., Lester, J. N., Sasson, N. J., & Hand, B. N. (2021). Avoiding ableist language: Suggestions for autism researchers. Autism in Adulthood, 3(1), 18–29. 10.1089/aut.2020.0014
9. British Psychological Society. (2009). Assessment of effort in clinical testing of cognitive functioning for adults. British Psychological Society (BPS).
10. Browning, A., & Caulfield, L. (2011). The prevalence and treatment of people with Asperger’s syndrome in the criminal justice system. Criminology & Criminal Justice, 11(2), 165–180. 10.1177/1748895811398455
11. Butcher, J. N., Graham, J. R., Ben-Porath, Y. S., Tellegen, A., & Dahlstrom, W. G. (2001). MMPI-2: Minnesota Multiphasic Personality Inventory–2. University of Minnesota Press.
12. Cassano, A., & Grattagliano, I. (2019). Lying in the medicolegal field: Malingering and psychodiagnostic assessment. Clinical Therapeutics, 170(2), 134–141. 10.7417/CT.2019.2123
13. Chandler, R., Russell, A., & Maras, K. (2019). Compliance in autism: Self-report in action. Autism: The International Journal of Research and Practice, 23(4), 1005–1017. 10.1177/1362361318795479
14. Core System Group. (1998). CORE system (information management) handbook. Core System Group. http://www.coreims.co.uk
15. Drob, S. L., Meehan, K. B., & Waxman, S. E. (2009). Clinical and conceptual problems in the attribution of malingering in forensic evaluations. Journal of the American Academy of Psychiatry and Law, 37, 98–106.
16. Evans, C., Connell, J., Barkham, M., Margison, F., McGrath, G., Mellor-Clark, J., & Audin, K. (2002). Towards a standardised brief outcome measure: Psychometric properties and utility of the CORE-OM. The British Journal of Psychiatry, 180(1), 51–60. 10.1192/bjp.180.1.51
17. Evans, C., Mellor-Clark, J., Margison, F., Barkham, M., Audin, K., Connell, J., & McGrath, G. (2000). CORE: Clinical Outcomes in Routine Evaluation. Journal of Mental Health, 9(3), 247–255.
18. Freckleton, G. (2013). Education at the Royal Courts of Justice. The Law Teacher, 47(2), 269–270. 10.1080/03069400.2013.790151
19. Furnham, A. (1986). Response bias, social desirability and dissimulation. Personality and Individual Differences, 7(3), 385–400. 10.1016/0191-8869(86)90014-0
20. Grant, T., Furlano, R., Hall, L., & Kelley, E. (2018). Criminal responsibility in autism spectrum disorder: A critical review examining empathy and moral reasoning. Canadian Psychology/Psychologie Canadienne, 59(1), 65–75. 10.1037/cap0000124
21. Gudjonsson, G. H. (2003). The psychology of interrogations and confessions: A handbook. Wiley Series in the Psychology of Crime, Policing and Law. John Wiley & Sons Ltd.
22. Gudjonsson, G. H., & Clark, N. K. (1986). Suggestibility in police interrogation: A social psychological model. Social Behaviour, 1(2), 83–104.
23. Hare, D. J., Gould, J., Mills, R., & Wing, L. (1999). A preliminary study of individuals with autistic spectrum disorders in three special hospitals in England. The National Autistic Society. www.aspires-relaationships.com/3hospitals.pdf
24. Hart, K. J. (1995). The assessment of malingering in neuropsychological evaluations: Research-based concepts and methods for consultants. Consulting Psychology Journal: Practice and Research, 47(4), 246–254. 10.1037/1061-4087.47.4.246
25. Impelen, A., Merckelbach, H., Niesten, I. J. M., Jelicic, M., Huhnt, B., & Campo, J. á. (2017). Biased symptom reporting and antisocial behaviour in forensic samples: A weak link. Psychiatry, Psychology and Law, 24(4), 530–548. 10.1080/13218719.2016.1256017
26. Impelen, A. V., Merckelbach, H., Jelicic, M., & Merten, T. (2014). The Structured Inventory of Malingered Symptomatology (SIMS): A systematic review and meta-analysis. The Clinical Neuropsychologist, 28(8), 1336–1365. 10.1080/13854046.2014.984763
27. Jobson, L., Stanbury, A., & Langdon, P. E. (2013). The Self- and Other-Deception Questionnaires–Intellectual Disabilities (SDQ-ID and ODQ-ID): Component analysis and reliability. Research in Developmental Disabilities, 34(10), 3576–3582. 10.1016/j.ridd.2013.07.004
28. Kaland, N., Mortensen, E., & Smith, L. (2007). Disembedding performance in children and adolescents with Asperger syndrome or high-functioning autism. Autism: The International Journal of Research and Practice, 11(1), 81–92. 10.1177/1362361307070988
29. Kenny, L., Hattersley, C., Molins, B., Buckley, C., Povey, C., & Pellicano, E. (2016). Which terms should be used to describe autism? Perspectives from the UK autism community. Autism: The International Journal of Research and Practice, 20(4), 442–462. 10.1177/1362361315588200
30. Lai, M. C., Lombardo, M. V., & Baron-Cohen, S. (2014). Autism. Lancet, 383(9920), 896–910. 10.1016/S0140-6736(13)61539-1
31. Lai, M. C., & Baron-Cohen, S. (2015). Identifying the lost generation of adults with autism spectrum conditions. The Lancet Psychiatry, 2(11), 1013–1027. 10.1016/S2215-0366(15)00277-1
32. Langdon, P. E., Clare, I. C., & Murphy, G. H. (2010). Measuring social desirability amongst men with intellectual disabilities: The psychometric properties of the Self- and Other-Deception Questionnaire–Intellectual Disabilities. Research in Developmental Disabilities, 31(6), 1601–1608. 10.1016/j.ridd.2010.05.001
33. Leite, V. (2015). The MMPI-2 criminal offender infrequency scale and PAI negative distortion scale: A comparison study of malingering scales within a forensic sample [Unpublished doctoral dissertation]. Alliant International University.
34. Lerner, M. D., Haque, Q. S., Northrup, E. C., Lawer, L., & Bursztajn, H. J. (2012). Emerging perspectives on adolescents and young adults with high-functioning autism spectrum disorders, violence, and criminal law. Journal of the American Academy of Psychiatry and the Law, 40, 177–190.
35. Mathie, N. L., & Wakeling, H. C. (2011). Assessing socially desirable responding and its impact on self-report measures among sexual offenders. Psychology, Crime & Law, 17(3), 215–237. 10.1080/10683160903113681
36. Mazefsky, C. A., & White, S. W. (2014). Emotion regulation: Concepts & practice in autism spectrum disorder. Child and Adolescent Psychiatric Clinics of North America, 23(1), 15–24. 10.1016/j.chc.2013.07.002
37. Millon, T., Grossman, S., & Millon, C. (2015). MCMI-IV. Pearson.
38. Mouridsen, S. E. (2012). Current status of research on autism spectrum disorders and offending. Research in Autism Spectrum Disorders, 6(1), 79–86. 10.1016/j.rasd.2011.09.003
39. Murphy, B. P. (2010). Beyond the first episode: Candidate factors for a risk prediction model of schizophrenia. International Review of Psychiatry, 22(2), 202–223. 10.3109/09540261003661833
40. Murphy, C. M., Wilson, C. E., Robertson, D. M., Ecker, C., Daly, E. M., Hammond, N., Galanopoulos, A., Dud, I., Murphy, D. G., & McAlonan, G. M. (2016). Autism spectrum disorder in adults: Diagnosis, management, and health services development. Neuropsychiatric Disease and Treatment, 12, 1669–1686. 10.2147/ndt.s65455
  41. National Autistic Society (2019). Asperger syndrome. https://www.autism.org.uk/about/what-is/asperger.aspx
  42. National Autistic Society (2020b). Women and girls. https://www.autism.org.uk/professionals/training-consultancy/online/women-and-girls.aspx
  43. National Autistic Society:NAS (2020a). Autism facts and history. Retrieved from https://www.autism.org.uk/about/what-is/myths-facts-stats.aspx
  44. NHS England and NHS Improvement (2019). People with a learning disability, autism or both: Liaison and Diversion Managers and Practitioner resources (2019). Publishing number 000948. [Google Scholar]
  45. NHS England: NHS Long Term Plan. (2019). Retrieved May 8, 2020, from https://www.england.nhs.uk/long-term-plan/
  46. Nichols, H. R., & Molinder, I. (1984). Manual for the multiphasic sex inventory. Crime and Victim Psychology Specialist. [Google Scholar]
  47. Ohlsson, I. M., & Ireland, J. L. (2011). Aggression and offence motivation in prisoners: Exploring the components of motivation in an adult male sample. Aggressive Behavior, 37(3), 278–288. 10.1002/ab.20386 [DOI] [PubMed] [Google Scholar]
  48. O’Mahony, B. M. (2012). Accused of murder: Supporting the communication needs of a vulnerable defendant at court and at the police station. Journal of Learning Disabilities and Offending Behaviour, 3(2), 77–84. 10.1108/20420921211280060 [DOI] [Google Scholar]
  49. Paulhus, D. L. (1998). Paulhus deception scales: User manual. MHS. [Google Scholar]
  50. Ranganathan, P., Pramesh, C. S., & Buyse, M. (2016). Common pitfalls in statistical analysis: The perils of multiple testing. Perspectives in clinical research, 7(2), 106–107. 10.4103/2229-3485.179436 [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Rogers, R. (2018a). An introduction to response styles. In Rogers R. & Bender (Eds.) S., Clinical assessment of malingering and deception (4th ed., pp. 3–17). Guildford Press. [Google Scholar]
  52. Rogers, R., Vitacco, M. J., & Kurus, S. J. (2010). Assessment of malingering with repeat forensic evaluations: Patient variability and possible misclassification on the SIRS and other feigning measures. Journal of the American Academy of Psychiatry and the Law, 38(1), 108–114. [PubMed] [Google Scholar]
  53. Salekin, K. L., Olley, J. G., & Hedge, K. A. (2010). Offenders with intellectual disability: Characteristics, prevalence, and issues in forensic assessment. Journal of Mental Health Research in Intellectual Disabilities, 3(2), 97–116. 10.1080/19315861003695769 [DOI] [Google Scholar]
  54. Scragg, P., & Shah, A. (1994). Prevalence of Asperger’s syndrome in a secure hospital. The British Journal of Psychiatry: The Journal of Mental Science, 165(5), 679–682. 10.1192/bjp.165.5.679 [DOI] [PubMed] [Google Scholar]
  55. Sher, M. A. (2020). Understanding self-report response bias in high-functioning autism [Unpublished doctoral dissertation]. University of Birmingham. [Google Scholar]
  56. Siponmaa, L., Kristiansson, M., Jonson, C., Nydén, A., & Gillberg, C. (2001). Juvenile and young adult mentally disordered offenders: The role of child neuropsychiatric disorders. Journal of the American Academy of Psychiatry and the Law, 29(4), 420–426. [PubMed] [Google Scholar]
  57. Tager-Flusberg, H. (2007). Evaluating the theory-of-mind hypothesis of autism. Current Directions in Psychological Science, 16(6), 311–315. 10.1111/j.1467-8721.2007.00527.x [DOI] [Google Scholar]
  58. Tan, L., & Grace, R. C. (2008). Social desirability and sexual offenders: A review. Sexual Abuse: A Journal of Research and Treatment, 20(1), 61–87. 10.1177/1079063208314820 [DOI] [PubMed] [Google Scholar]
  59. Tully, R. J., & Bailey, T. (2017). Validation of the Paulhus Deception Scales (PDS) in the UK and examination of the links between PDS and personality. Journal of Criminological Research, Policy and Practice, 3(1), 38–50. 10.1108/JCRPP-10-2016-0027 [DOI] [Google Scholar]
  60. Underwood, L., Forrester, A., Chaplin, E., & Mccarthy, J. (2013). Prisoners with neurodevelopmental disorders. Journal of Intellectual Disabilities and Offending Behaviour, 4(1/2), 17–23. 10.1108/JIDOB-05-2013-0011 [DOI] [Google Scholar]
  61. Wakeling, H., & Barnett, G. (2014). The relationship between psychometric test scores and reconviction in sexual offenders undertaking treatment. Aggression and Violent Behavior, 19(2), 138–145. 10.1016/j.avb.2014.01.002 [DOI] [Google Scholar]
  62. Widows, M. R., & Smith, G. P. (2005). Sims: Structured inventory of malingered symptomatology: Professional manual. Psychological Assessment Resources. [Google Scholar]
  63. Wing, L. (1997). The autistic spectrum. Lancet (London, England), 350(9093), 1761–1766. 10.1016/S0140-6736(97)09218-0 [DOI] [PubMed] [Google Scholar]
  64. Young, G. (2017). PTSD in Court III: Malingering, assessment, and the law. International Journal of Law and Psychiatry, 52, 81–102. 10.1016/j.ijlp.2017.03.001 [DOI] [PubMed] [Google Scholar]

Data Availability Statement

Anonymised data are available in SPSS format.


Articles from Psychiatry, Psychology, and Law are provided here courtesy of Taylor & Francis