Clinical Neuropsychiatry. 2023 Dec;20(6):511–522. doi: 10.36131/cnfioritieditore20230606

Preliminary Development and Psychometric Evaluation of the Metacognition Brief Rating Scale: An Informant form of the Metacognition Self-Assessment Scale

Roberto Pedone a,b, Antonio Semerari c
PMCID: PMC10852410  PMID: 38344460

Abstract

Objective

Metacognition has been conceptualized as the ability to reflect on self and others' mental states and representations, including affects, beliefs, and intentions. The Metacognition Self-Assessment Scale (MSAS) was developed to assess various aspects of metacognition, aiming to leverage its potential applications in fields like clinical psychology and psychotherapy. However, a concern associated with MSAS is whether individuals can accurately self-report difficulties in identifying and describing mental states, both their own and others', when they lack these abilities. In response to this challenge, we aimed to develop and validate an alternative reporting tool, the Metacognition Brief Rating Scale (MBRS), which serves as an informant form of MSAS.

Method

The MBRS was administered to 384 individuals randomly recruited from the general population. We employed a methodological strategy based on three successive steps. In the preliminary step, items from the MSAS were rewritten into a third-person version by the authors. In the second step, we examined whether the four-factor structure was congruent between the informant-report (MBRS) and the self-report (MSAS) using exploratory and confirmatory factor analysis. In the last step, we examined and compared the psychometric properties of the MBRS and MSAS items, including item characteristics and internal reliability analyses.

Results

The psychometric properties (items and scales) of both versions were found to be adequate, and the four-factor structure of the MBRS was supported. The correlation between the two versions was statistically significant, and the factor structures were similar.

Conclusions

The results support the psychometric properties of the MBRS. However, further research is needed, especially in larger non-clinical and clinical samples, to replicate and extend these findings.

Keywords: metacognition, assessment, informant rating scale

Introduction

Researchers have extensively explored the phenomenon often referred to as "mind-reading" from various theoretical perspectives, employing diverse terminologies and lexicons (Pedone et al., 2017; Semerari et al., 2014). However, the terms "mentalization" (Bateman and Fonagy, 2004; Bateman et al., 2013; Bouchard et al., 2008; Choi-Kain and Gunderson, 2008) and "metacognition" (Carcione et al., 2011; Dimaggio and Lysaker, 2010; Semerari et al., 2003, 2007) are more commonly used within the clinical domain (Moritz and Lysaker, 2018; Lysaker, 2020). It is noteworthy that these terms have been employed in numerous personality studies, with many authors concurring that they represent essentially the same psychological construct (Lysaker et al., 2020; Bo et al., 2015; Fonagy and Bateman, 2016; Semerari et al., 2014).

In this context, we refer to metacognition as a comprehensive set of psychological and neuropsychological mental functions (Frith and Frith, 2006; Monticelli et al., 2021; Gilead et al., 2021) that enable individuals to comprehend and process various aspects of the mind, including knowledge, beliefs, wishes, intentions, actions, and problem-solving (Dimaggio et al., 2007a, 2007b; Faustino et al., 2021; Semerari et al., 2003).

Numerous lines of evidence underscore the critical importance of metacognition and its dysfunctions as a significant clinical construct. Metacognition plays a pivotal role in psychopathology, primarily due to its impact on one's capacity to understand their own internal experiences and those of others. These impairments have far-reaching implications for the development of a cohesive and stable self-concept and self-other representations (Dimaggio et al., 2007; Dimaggio et al., 2015).

Metacognitive deficits are observed in various clinical conditions, such as schizophrenia (Dimaggio and Lysaker, 2010; Lysaker et al., 2020), autism spectrum disorders (Grainger et al., 2014), neurocognitive impairments in verbal and visual memory and processing speed (Nicolò et al., 2012), executive dysfunctions (Lysaker et al., 2008), poor social functioning (Bo et al., 2015), difficulties in affect regulation (Harder and Folke, 2012; Solbakken et al., 2012), and personality disorders (Dimaggio et al., 2015). As such, metacognition may be considered a transdiagnostic factor in psychology, as it potentially underlies the comorbidity of symptoms across various psychological disorders (Gumley, 2011).

In the domain of personality pathology, several authors have underscored the importance of identifying common factors that underlie the general pathology of personality to enhance our comprehension and conceptualization of personality disorders (PDs) (Clark, 2007; Hopwood et al., 2011; Semerari et al., 2014; Sharp et al., 2015; Widiger and Simonsen, 2005). Among these factors, the impaired ability to understand one's own mind and that of others appears particularly significant (Fonagy et al., 2002; Bateman and Fonagy, 2004, 2009; Dimaggio et al., 2007a, 2007b; Fonagy, 1991; Gullestad et al., 2013; Minzenberg et al., 2006; Semerari et al., 2003, 2007, 2014, 2015).

Furthermore, the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) (American Psychiatric Association [APA], 2022) emphasizes the role of self-reflective abilities. The Alternative Model for PDs (APA, 2022), Section III, establishes that evaluating and diagnosing a PD requires an assessment of the patient's capacity to both a) self-reflect, promoting a stable self-concept and self-direction, and b) understand the minds of others to establish and maintain intimate and empathetic relationships (APA, 2022).

Pedone et al. (2017) underscore the shared aspects among these constructs, particularly the functional processing of mental contents and cognitive representations, as described in the Metacognitive Multi-Function Model (MMFM; Semerari et al., 2003; Pedone et al., 2021).

Metacognitive Multi-Function Model

In the MMFM, metacognitive functioning is a construct originally derived from clinical observations in the field of Personality Disorders (PD). Semerari and colleagues (2003) describe metacognition as a multifaceted construct consisting of four prominent features: Self-Reflexivity; Critical Distance; Understanding Others' Minds; and Mastery. Although these characteristics are conceptually distinct, they are intricately interrelated.

Self-Reflexivity (SR): This term refers to the general ability to understand one's own mind, comprising monitoring and integration abilities. Monitoring involves identifying and defining the components that make up an inner state in terms of thoughts, images, and emotions (identification) and the associated variables (relating variables). Integration pertains to reflecting on various mental states, providing a comprehensive and coherent description of their components, including their evolution over time. It also relates to the capacity to construct a cohesive narrative.

Critical Distance (CD): This refers to the general ability to differentiate and decenter, involving distancing the mind's point of view. Differentiation is the ability to distinguish between different classes of representations (e.g., dreams, fantasies, beliefs) and between representations and reality, recognizing their subjectivity. Decentration involves the ability to define others' mental states by forming hypotheses independent of one's own perspective, mental functioning, or involvement in the relationship.

Understanding Others' Minds (UOM): This relates to the ability to monitor, recognize, and define the emotions underlying others' behaviors, expressions, and actions and make plausible inferences about their thoughts.

Mastery (M): This encompasses the use of psychological information to cope with problems of increasing complexity, involving regulation and control activities.

In the field of personality disorders, various studies have highlighted the utility of a multi-function model. For instance, Semerari and colleagues (Semerari et al., 2014) have demonstrated that as the general severity of PDs increases, there is an overall impairment in all metacognitive functions. At the same time, the results confirm that impairment in specific functions has an effect on particular aspects of the PDs' style of expression. In another study, it was highlighted that individuals diagnosed with borderline personality disorder exhibited worse integration and differentiation compared to those without borderline PD (Semerari et al., 2015). Moreover, individuals diagnosed with avoidant personality disorder showed impaired functioning in terms of monitoring and decentering (Moroni et al., 2016). More recently, Pedone and colleagues (Pedone et al., 2021) demonstrated that the relationship between personality traits and personality functioning is partially mediated by global and specific metacognitive abilities. These findings support the hypothesis that both global and specific metacognitive abilities play a significant role in predicting the levels of impairment in personality functioning (Carcione et al., 2021; Semerari et al., 2014; Pedone et al., 2021).

Finally, the role of specific metacognitive functions, as formalized through the MMFM, has also been assessed in areas closely linked to personality psychopathology. For example, Aloi and colleagues demonstrated how the metacognitive monitoring function is related to the severity of binge eating disorder (Aloi et al., 2020), and how the differentiation function is associated with the severity of gambling disorder (Aloi et al., 2022).

Metacognitive Multi-Function Model Assessment Measures

Over the past years, three different instruments have been developed to assess the metacognition construct in accordance with the Metacognitive Multi-Function Model (Semerari et al., 2003). In the order of their development, these instruments are: the Metacognition Assessment Scale-Revised (MAS-R, Carcione et al., 2010); the Metacognition Assessment Interview (MAI; Semerari et al., 2012); and the Metacognitive Self-Assessment Scale (MSAS, Pedone et al., 2017). The following provides a brief description of each tool:

Metacognition Assessment Scale-Revised (MAS-R)

The Metacognition Assessment Scale-Revised is a coding system comprising 29 items used to rate therapy sessions, structured interviews, and other narratives about close relationships based on the MMFM (Semerari et al., 2003). The narratives are transcribed, divided into shorter fragments, and then rated according to the coding manual. To carry out the coding process, it is essential that the evaluators have clinical experience and are trained in the coding procedure (Carcione et al., 2010). Specifically, through the MAS-R, a trained rater indicates whether and how the participant has successfully used a particular metacognitive function. For each sub-scale, the rater assigns a score ranging from one to five on a Likert scale, describing how well the client employed that aspect of metacognitive functioning with respect to the assessed unit. A score of one signifies "very poor functioning," while a score of five signifies "very good functioning." The global score on the MAS-R represents the individual's general metacognitive ability. The MAS-R is a valid and reliable tool and has been employed in numerous studies in the field of schizophrenia (Lysaker et al., 2014) and personality disorders (Semerari et al., 2005, 2015; Maillard et al., 2017, 2020; Carcione et al., 2011; Dimaggio et al., 2009). Although the use of this instrument demands a substantial amount of time and specific training for the evaluators, it offers the advantage of high measurement accuracy.

Metacognition Assessment Interview (MAI)

The Metacognition Assessment Interview (Semerari et al., 2012) was developed based on the indicators described in the MMFM (Semerari et al., 2003) and was designed to be more user-friendly than the MAS-R. The MAI is a semi-structured clinical interview used to assess the level of metacognitive abilities. The participant is asked to describe one of the most challenging relational experiences involving another person that they have personally experienced in the last six months. At the end of the narration provided by the subject, the examiner asks specific questions that investigate the contents of the construct. Examples include "What do you think?" and "How are you feeling?" (monitoring); "How has your mental state changed?" (integration); "Have you considered other explanations for what happened?" (differentiation); and "What do you think she was thinking?" (decentration). Each function is assessed on a 5-point Likert scale, from 1 (poor metacognition) to 5 (excellent metacognition). The MAI captures an overall evaluation of the four basic functions of the MMFM (Semerari et al., 2003) but does not assess the Mastery component. The interview takes approximately 45 minutes, and the interviewer should have clinical experience and be trained in the use of the instrument. The MAI is a valid and reliable tool and has been used in numerous studies in the field of personality disorders (Semerari et al., 2014, 2015; Bilotta et al., 2018; Moroni et al., 2016; Pellecchia et al., 2018; Carcione et al., 2019; Colle et al., 2020). Although the use of this instrument requires specific training for the evaluators, it offers a relatively short interview time with a high degree of measurement accuracy.

Metacognition Self-Assessment Scale (MSAS)

The Metacognition Self-Assessment Scale (Pedone et al., 2017) is the most recent instrument developed to meet the need for a brief assessment of the MMFM (Semerari et al., 2003) for screening and research purposes. The MSAS consists of 18 items, each scored on a 5-point Likert scale (from 1 = never to 5 = almost always). The time required to complete the questionnaire is 10 to 15 minutes. Raw scores range from 18 to 90, with higher scores indicating a higher level of metacognition. The MSAS has a four-factor structure, with factors named Self-Reflexivity (SR), Critical Distance (CD), Understanding Others' Mind (UOM), and Mastery (M). These components are composed of the five MMFM functional abilities (monitoring, integration, differentiation, decentration, and mastery), reflecting specific skills. In particular, the four factors are:

Self-Reflexivity (SR): Comprising monitoring (three items) and integration (two items)

Critical Distance (CD): Comprising differentiation (two items) and decentration (three items)

Understanding the Mental States of Others (UOM): Focusing on monitoring/identification (three items)

Mastery (M): Reflecting the ability to use mental representations to solve personal and interpersonal problems (five items).

The MSAS has shown good psychometric properties (Pedone et al., 2017). Validity and reliability were satisfactory in the original validation study and subsequent research (Pedone et al., 2021, Aloi et al., 2020, 2022). The factor structure was also replicated in a Portuguese version of the MSAS (Faustino et al., 2021).

However, it is important to note that metacognitive abilities are often better assessed by external observers rather than by self-report measures. This is due to potential issues related to social desirability bias (Lezak, 2015) and limitations intrinsic to the psychopathology itself. Concerns have been raised about the accuracy of self-assessment in individuals with high levels of maladaptive personality traits, as they may struggle to accurately evaluate their ability to monitor and integrate their own mental states, maintain a critical distance from their mental contents, and master their problematic mental states. One approach to address these concerns is to create an informant version of the MSAS, which would provide a direct assessment of the features of the construct without self-filtering. This approach has been considered optimal for measuring alexithymia (Nemiah, 1976), a multifaceted construct with features analogous to the "metacognition" of affective states (Bagby et al., 2021).

In this preliminary study, our aim is to develop and evaluate the psychometric properties of an informant report version of the MSAS, the 18-item Metacognition Brief Rating Scale (MBRS – Informant Form).

Method

Participants and procedure

Participants in the role of informants (N = 75) were recruited at the Department of Psychology of the University of Campania “Luigi Vanvitelli”. The study was presented during the course on psychological testing in Clinical Psychology and involved students enrolled in the master’s degree program in Clinical Psychology, who took on the role of conducting the assessments. All the students who participated in the study were trained to administer the inventories within the activities of the Personality Measurement and Assessment Laboratory (MIVAP) of the Psychology Department and received training credits at the end of the work.

These participants served as “informants” and completed the paper-and-pencil MBRS for a number of “target” people. They were instructed to select as targets people they knew well (e.g., a friend, romantic partner, spouse, sibling, or parent). For each target, the MBRS form with the appropriate third-person gender wording was used. The informants also completed a demographics questionnaire and provided information about their relationship to the person (i.e., the target) they rated on the MBRS. Moreover, in this phase of the study, the selected targets completed a demographics questionnaire and the self-report MSAS. All subjects (informants and targets) volunteered to participate after being presented with a detailed study description, and all were treated in accordance with the “Ethical Principles of Psychologists and Code of Conduct” (CNOP, Italy). Informed consent was obtained from all individual participants included in the study. To be included in the study, subjects were required to have a level of education equal to or higher than primary school and to have never been treated for any psychiatric disorder. We excluded individuals who indicated that they had a history of psychiatric diagnoses, psychiatric or psychological treatment, severe brain injury, and/or substance-related disorders. None of the participants were taking psychotropic drugs, nor had they used them during the month preceding the study.

Informants and Target Sample

All informant participants (N=75) were white and of Italian background; 8 participants (10.7%) were male and 67 (89.3%) were female. The average age was 24.45 years (SD = 4.12); 71 participants (94.7%) were unmarried, 4 (5.3%) were married.

All target subjects (N=384) were white and of Italian background; 165 (43%) were male and 219 (57%) were female. The average age was 33.70 years (SD = 14.45); 258 participants (67.2%) were unmarried, 107 (27.9%) were married, 4 (1.0%) were separated, 11 (2.9%) were divorced, and 4 (1.0%) were widowed. In addition, 195 participants (50.8%) had attended college and 151 (39.3%) held a higher educational degree.

The informant participants reported that they had known the targets they rated on the MBRS for an average of 12.41 years (SD = 8.08); 267 (69.5%) of the rated individuals were a “partner/friend”, 78 (20.31%) a “parent”, and 39 (10.15%) a “sibling”. As required by the procedure, informants were in regular contact with their target, reporting that they spoke with or saw the target daily (46.4%), weekly (36.3%), monthly (14.3%), or annually (6.0%). Furthermore, all informants reported having a “close” relationship with their target, rating its closeness as low (13%), medium (24%), or high (63%).

Procedure and Analysis

In order to maintain fidelity to the constructs of the MSAS as a validated instrument for assessing metacognition, as outlined by the MMFM (Semerari et al., 2003; Pedone et al., 2017), we adhered to established guidelines (Rosellini and Brown, 2021) within the framework of a three-step methodological approach. In the first step, we developed the MBRS by rewriting the first-person items of the MSAS in the third person. In step two, we sought to corroborate that these newly written items formed four factors consistent with the validated components of the metacognition construct, as had been done with the original self-report items of the MSAS. To do so, we examined whether the factor structure was congruent between the informant-report (MBRS) and the self-report (MSAS). In step three, we examined and compared the psychometric properties of the MBRS and MSAS items, including item characteristics and internal reliability analyses.

2.3.1. Step 1. Item development

The candidate items for the MBRS were constructed by rewriting the 18 items of the self-report MSAS in the third person. Consistent with the self-report version of the MSAS, a five-point Likert response format ranging from 1 (never) to 5 (almost always) was maintained in the MBRS. Scoring remains the same, with higher scores indicating a greater degree of metacognitive ability. The final task in this step of scale development was for the authors to reach consensus on the item wording, which was achieved in an approximately two-hour meeting. No other changes were made in adapting the MSAS items into the MBRS informant version.

2.3.2. Step 2. Factor structure confirmation

Exploratory and confirmatory factor analysis (EFA and CFA) were used to assess the four-factor structure of the MBRS items, a structure that had been obtained for the MSAS (Pedone et al., 2017; Faustino et al., 2021). An EFA was conducted using the principal axis factoring extraction method and an oblique rotation. Subsequently, using CFA, we tested the similarity of the factor structure across the MBRS and the MSAS. An oblique four-factor model was specified, with items loading on one of the four respective factors (i.e., Self-Reflexivity - Monitoring/Integration; Critical Distance - Differentiation/Decentration; Understanding Others' Mind; and Mastery). The robust weighted least squares mean- and variance-adjusted (WLSMV) estimator was used to estimate the model from the polychoric item correlation matrix. Polychoric correlations were computed given that the response scales of all items in the tool consisted of ordinal data (i.e., 5-point Likert scales) (Özdemir et al., 2019; Holgado-Tello et al., 2010). The use of polychoric correlations in the factor analysis of ordinal data, compared to Pearson correlations, which require at least interval-level data, yields results with less error and better alignment with the originally proposed theoretical models (Özdemir et al., 2019). The means and variances of the factors were fixed to zero and one, respectively. The CFA was conducted using the lavaan R package (Rosseel, 2012). CFA generates an array of goodness-of-fit indices to assess how adequately the obtained structure fits the underlying theoretical model. We used four fit indices to evaluate the model (Bentler, 1990): the standardized root-mean-square residual (SRMR), the root-mean-square error of approximation (RMSEA), the comparative fit index (CFI), and the Tucker-Lewis Index (TLI). Fit is considered acceptable if CFI and TLI are 0.90 or greater and SRMR and RMSEA are 0.10 or less (Finch and West, 1997; Hu and Bentler, 1999). Internal reliability for the MBRS was estimated using coefficients alpha (α) and omega (ω), with estimates considered adequate when greater than 0.80 (Clark and Watson, 2019; Dunn et al., 2014).
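
As a minimal sketch of this step, the code below fits the oblique four-factor model in the lavaan R package with the items declared as ordered, so that polychoric correlations and the WLSMV estimator are used. The data frame mbrs and its column names are illustrative assumptions, not the variable names actually used in the study.

```r
# Hedged sketch of the CFA described above; `mbrs` is assumed to be a data
# frame containing only the 18 MBRS items scored 1-5 (column names illustrative).
library(lavaan)

model <- '
  SR  =~ mon1 + mon2 + mon3 + int1 + int2
  CD  =~ dif1 + dif2 + dec1 + dec2 + dec3
  UOM =~ uom1 + uom2 + uom3
  MST =~ mst1 + mst2 + mst3 + mst4 + mst5
'

# Declaring the items as ordered makes lavaan work from polychoric correlations
# with a WLSMV-type estimator for ordinal data; std.lv = TRUE fixes the factor
# variances to one (factor means are zero by default).
fit <- cfa(model, data = mbrs, ordered = names(mbrs),
           estimator = "WLSMV", std.lv = TRUE)

fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))
standardizedSolution(fit)  # standardized loadings and inter-factor correlations
```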

We also used Tucker’s congruence coefficient (TCC; φ) to empirically quantify the similarity of factor loadings across the MBRS and the MSAS. The TCC is an established statistical method for factor structure comparisons (McCrae et al., 1996; Lorenzo-Seva and Ten Berge, 2006; Somma et al., 2019). To determine the level of congruence across the factor structures of the MBRS and the self-report MSAS, we used the criteria recommended by Lorenzo-Seva and Ten Berge (2006) and Lovik et al. (2017), with congruence estimates of φ ≥ 0.95 indicating “equal” or “good” similarity, φ between 0.85 and 0.95 suggesting “fair” similarity, and φ < 0.85 indicating “no similarity”.
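
A minimal sketch of this computation is shown below, assuming loadings_mbrs and loadings_msas are the two 18 × 4 matrices of standardized loadings (hypothetical object names); for each matched pair of factors, φ = Σxᵢyᵢ / √(Σxᵢ² Σyᵢ²).

```r
# Column-wise Tucker's congruence between two loading matrices of equal size
# (rows = items, columns = matched factors); object names are assumptions.
tucker_phi <- function(x, y) {
  colSums(x * y) / sqrt(colSums(x^2) * colSums(y^2))
}
tucker_phi(loadings_mbrs, loadings_msas)

# An equivalent cross-table of congruences is available in the psych package:
# psych::factor.congruence(loadings_mbrs, loadings_msas)
```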

2.3.3. Step 3. Psychometric properties

We examined the item and scale characteristics of the MBRS and the MSAS. Item characteristics were examined using the mean, standard deviation, and standardized estimates of skewness and kurtosis for each of the 18 items. The degree of association between a scale item score and the total scale score was estimated with the corrected item-total correlation. Items with skewness and kurtosis estimates between -1.5 and +1.5 were considered to be in the acceptable range (Tabachnick et al., 2018). Corrected item-total correlations were considered acceptable if greater than 0.30 (Cristobal et al., 2007). Mean-level differences between self-reports and informant-reports for each item, and for the MBRS and MSAS total scores, were examined using paired t-tests. These tests were supplemented with Cohen’s d effect size estimates, with d = 0.2, 0.5, and 0.8 considered small, medium, and large effects, respectively (Cohen, 1988). For comparative purposes, at the scale level, we calculated estimates of the internal reliability of the MBRS and MSAS using the alpha and omega coefficients.
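
The item-level analyses could be reproduced along the following lines; mbrs and msas are assumed to be 384 × 18 data frames with items in matching order (objects and names are illustrative, not the authors' scripts).

```r
# Hedged sketch of the item-level analyses: descriptives, corrected item-total
# correlations, reliability, and paired self- vs informant-report comparisons.
library(psych)

describe(mbrs)[, c("mean", "sd", "skew", "kurtosis")]  # item descriptives

alpha(mbrs)$item.stats$r.drop        # corrected item-total correlations
alpha(mbrs)$total$raw_alpha          # coefficient alpha for the total scale
omega(mbrs, nfactors = 4)$omega.tot  # omega total for the total scale

# Per-item paired t-tests (MBRS vs MSAS), Cohen's dz (mean difference divided
# by the SD of the differences), and Benjamini-Hochberg corrected p-values.
res <- t(sapply(seq_len(ncol(mbrs)), function(i) {
  d  <- mbrs[[i]] - msas[[i]]
  tt <- t.test(mbrs[[i]], msas[[i]], paired = TRUE)
  c(t = unname(tt$statistic), p = tt$p.value, dz = mean(d) / sd(d))
}))
cbind(res, p.BH = p.adjust(res[, "p"], method = "BH"))
```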

Results

Item development

Two gender-specific forms of the MBRS (masculine and feminine; i.e., he/she, him/her) were created. The following are some examples.

MSAS item 1, "I am able to define and distinguish my mental activities such as: remembering, imagining, fantasizing, dreaming, wishing, deciding, foreseeing and thinking", was changed to "He is able to define and distinguish his own mental activities such as: remembering, imagining, fantasizing, dreaming, desiring, deciding, foreseeing and thinking" (masculine) and "She is able to define and distinguish her own mental activities such as: remembering, imagining, fantasizing, dreaming, desiring, deciding, foreseeing and thinking" (feminine).

MSAS item 5, "I am aware that what I want or expect may not come true and that I have limited power to influence things", was changed to "He is aware that what he desires or expects may not come true and that he has limited power to influence things" and "She is aware that what she desires or expects may not come true and that she has limited power to influence things".

MSAS item 12, "I am aware that others can perceive facts and events differently from me and interpret them differently", was changed to "He is aware that others may perceive facts and events differently from him and interpret them differently" and "She is aware that others may perceive facts and events differently from her and interpret them differently".

MSAS item 16, "I approach problems by trying to question or enrich my assessments and beliefs about the problems themselves", was changed to "He addresses problems by trying to question or enrich his assessments and beliefs about the problems themselves" and "She addresses problems by trying to question or enrich her assessments and beliefs about the problems themselves".

In the present study, informants employed the gender-specific version of the MBRS based on the gender of the target.

Table 1.

Exploratory factor analysis: model factor loadings for the MBRS (N=384)

ITEM NAME Self-Reflexivity Critical Distance Understanding Others’ Mind Mastery Communalities
1 MON 1 0.420 0.154 0.051 0.376 .626
2 MON 2 0.786 -0.062 0.143 -0.011 .790
3 MON 3 0.732 -0.287 -0.007 -0.032 .748
6 INT 1 0.814 -0.017 0.019 0.062 .729
7 INT 2 0.766 -0.075 0.087 0.060 .753
4 DIF 1 0.241 -0.680 0.054 -0.056 .662
5 DIF 2 0.312 -0.640 -0.053 -0.073 .583
11 DEC 1 0.015 -0.778 0.080 0.002 .675
12 DEC 2 -0.150 -0.829 0.056 0.188 .745
13 DEC3 -0.042 -0.610 0.240 0.091 .569
8 UOM 1 -0.090 -0.043 0.920 -0.093 .773
9 UOM 2 0.017 -0.035 0.668 0.289 .676
10 UOM 3 0.075 -0.054 0.698 0.164 .683
14 MST 1 0.315 -0.092 -0.292 0.594 .402
15 MST 2 0.330 0.168 0.196 0.457 .495
16 MST 3 0.140 -0.310 0.065 0.575 .665
17 MST 4 -0.036 -0.205 0.031 0.734 .645
18 MST 5 0.206 -0.403 0.007 0.413 .600

Note: Monitoring: MON; Integration: INT; Differentiation: DIF; Decentring: DEC; Understanding Others’ Mind: UOM; Mastery: MST. Factor loadings resembling the MMFM structure are marked.

Factor Structure

Exploratory factor analysis was applied with the principal axis factoring extraction method and oblimin rotation, without imposing any constraints on potential solutions within the data sample. To ascertain the optimal number of factors to retain, we employed multiple techniques, including an evaluation of the scree plot, parallel analysis (Horn, 1965), and the minimum average partial (MAP) test (Velicer, 1976). All three methods consistently indicated that a four-factor solution provided the best fit for the data, explaining 65.66% of the total variance. This factor structure closely resembled that of the Metacognition Self-Assessment Scale (Pedone et al., 2017). The largest proportion of variance was accounted for by the first factor (44.66%); the remaining factors explained 8.67%, 6.51%, and 5.80% of the variance, respectively. The first factor, related to Self-Reflexivity, consisted of 5 items with loadings ranging from 0.81 to 0.42. The second factor, composed of Critical Distance items, included 5 items with loadings ranging from 0.82 to 0.61. The third factor, representing Mastery, consisted of 5 items with loadings ranging from 0.73 to 0.41. Finally, the fourth factor, associated with Understanding Others' Minds, comprised 3 items with loadings ranging from 0.92 to 0.66. The EFA factor loadings for the MBRS are displayed in table 1, following the MMFM description.
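
These retention checks and the EFA itself could be run as in the sketch below, using the psych R package on an assumed item data frame mbrs (object names and argument choices are illustrative).

```r
# Factor-retention checks (scree plot, parallel analysis, Velicer's MAP) and
# principal-axis EFA with oblimin rotation on polychoric correlations.
library(psych)

fa.parallel(mbrs, fm = "pa", fa = "fa", cor = "poly")  # scree + parallel analysis
vss(mbrs, n = 8, fm = "pa", cor = "poly")              # reports the MAP criterion

efa <- fa(mbrs, nfactors = 4, fm = "pa", rotate = "oblimin", cor = "poly")
print(efa$loadings, cutoff = 0.30)  # pattern loadings (cf. table 1)
efa$communality                     # item communalities
efa$Vaccounted                      # variance accounted for by each factor
```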

Table 2.

CFA Model Standardized factor loadings for the MBRS and the MSAS, and congruence coefficients (N=384)

ITEM ITEM NAME Self-Reflexivity [MBRS/MSAS] Critical Distance [MBRS/MSAS] Understanding Others’ Mind [MBRS/MSAS] Mastery [MBRS/MSAS]
1 MON 1 0.571 / 0.558
2 MON 2 0.864/ 0.770
3 MON 3 0.825/ 0.724
6 INT 1 0.804 / 0.818
7 INT 2 0.843 / 0.824
4 DIF 1 0.764 / 0.463
5 DIF 2 0.675 / 0.539
11 DEC 1 0.762 / 0.696
12 DEC 2 0.781 / 0.844
13 DEC3 0.701 / 0.674
8 UOM 1 0.641 / 0.750
9 UOM 2 0.821 / 0.735
10 UOM 3 0.852 / 0.863
14 MST 1 0.462 / 0.441
15 MST 2 0.548 / 0.605
16 MST 3 0.777 / 0.740
17 MST 4 0.612 / 0.598
18 MST 5 0.752 / 0.684
Congruence 0.91 0.94 0.90 0.85

Note: Monitoring: MON; Integration: INT; Differentiation: DIF; Decentring: DEC; Understanding Others’ Mind: UOM; Mastery: MST. MSAS self-reports N = 384; MBRS informant-reports N = 384. Factor loading values are significant at p < 0.001.

Confirmatory factor analysis was then applied to test whether the four-factor structure obtained for the MSAS also held for the MBRS; the resulting structure demonstrated acceptable fit on the selected indices: CFI = 0.993, TLI = 0.991, SRMR = 0.058, and RMSEA = 0.062. The CFA standardized factor loadings for the MBRS and MSAS are displayed in table 2. All loadings were statistically significant (p < 0.01). Visual inspection of the pattern of item loadings across the two scales suggested an overall high level of similarity. The inter-factor parameter estimates showed a pattern of significant associations among the factors, suggesting that the results are consistent with the theoretical definition of the metacognitive functions described by the MMFM. In particular, the association estimates were 0.699 between SR and CD, 0.746 between SR and UOM, 0.767 between SR and MST, 0.671 between CD and UOM, 0.748 between CD and MST, and 0.699 between UOM and MST. All associations were significant (p < 0.01).

The TCC results (see the bottom row of table 2) indicated that the Self-Reflexivity, Critical Distance, Understanding Others' Mind, and Mastery factors had “fair” similarity across the forms (i.e., φ between 0.85 and 0.94); no congruence coefficient fell below 0.85, the threshold that would indicate “no similarity” (Lorenzo-Seva and Ten Berge, 2006).

Psychometric properties

The results of the item-level analyses for both MBRS and MSAS items are displayed in table 3. For all 18 items on both scales, the distributions of item scores were well within the acceptable range. For the MBRS items, skewness ranged from -1.16 to -0.01 (median = -0.41) and kurtosis from -0.73 to 0.49 (median = -0.54). Similar values were obtained for the MSAS items: skewness ranged from -1.10 to -0.26 (median = -0.47) and kurtosis from -0.37 to 0.98 (median = -0.04). The corrected item-to-total correlations ranged from 0.42 to 0.73 (median = 0.65) for the MBRS and from 0.18 to 0.50 (median = 0.37) for the MSAS. As estimates of internal reliability, these values are adequate to good.

Table 3.

Item-level and total score descriptive statistics for the MBRS and MSAS (N = 384)

ITEM ITEM NAME MBRS: M (SD) Skew / Kurtosis r item-total/α/ω MSAS: M (SD) Skew / Kurtosis r item-total/α/ω dz
1 MON 1 4.49 (0.7) -1.16 / 0.49 0.55 4.51 (0.63) -1.1 / 0.98 0.36 0.02
2 MON 2 3.99 (0.82) -0.46 / -0.24 0.73 4.16 (0.76) -0.66 / 0.09 0.37 0.16**
3 MON 3 3.9 (0.83) -0.28 / -0.62 0.73 4.09 (0.75) -0.55 / 0.04 0.38 0.19***
4 INT 1 3.9 (0.96) -0.5 / -0.49 0.66 3.65 (0.97) -0.47 / -0.1 0.25 -0.20***
5 INT 2 4.04 (0.89) -0.69 / -0.05 0.59 3.77 (0.98) -0.44 / -0.37 0.31 -0.21***
6 DIF 1 3.9 (0.89) -0.36 / -0.72 0.65 3.97 (0.86) -0.62 / -0.05 0.44 0.07
7 DIF 2 3.66 (0.93) -0.26 / -0.55 0.71 3.63 (0.83) -0.36 / -0.26 0.46 -0.03
8 DEC 1 4.09 (0.9) -0.74 / 0.01 0.54 3.54 (0.86) -0.29 / 0.07 0.45 -0.48***
9 DEC 2 3.86 (0.84) -0.27 / -0.63 0.65 3.77 (0.72) -0.66 / 0.75 0.34 -0.09
10 DEC3 3.53 (0.94) -0.01 / -0.73 0.66 3.27 (0.82) -0.26 / 0.07 0.42 -0.22***
11 UOM 1 4.1 (0.93) -0.71 / -0.43 0.60 4.14 (0.82) -0.77 / 0.24 0.36 0.04
12 UOM 2 3.98 (0.97) -0.6 / -0.54 0.59 4.37 (0.71) -0.79 / -0.18 0.46 0.35***
13 UOM 3 4.41 (0.77) -1.08 / 0.31 0.60 4.49 (0.67) -1.05 / 0.35 0.35 0.09
14 MST 1 3.35 (0.98) -0.04 / -0.52 0.42 3.04 (0.94) -0.28 / -0.31 0.18 -0.25***
15 MST 2 3.93 (0.95) -0.54 / -0.58 0.49 3.67 (0.89) -0.46 / -0.12 0.40 -0.22***
16 MST 3 3.57 (1.03) -0.28 / -0.63 0.67 3.6 (0.88) -0.37 / -0.04 0.50 0.03
17 MST 4 3.31 (1.09) -0.22 / -0.68 0.50 3.19 (0.94) -0.37 / -0.2 0.33 -0.09
18 MST 5 3.65 (0.98) -0.32 / -0.56 0.67 3.63 (0.97) -0.48 / -0.13 0.41 -0.02
Total 69.67 (10.85) -0.42 / -0.21 0.93 a / 0.94 b 68.48 (7.10) -0.04 / -0.01 0.80 a / 0.84 b -0.10*

Note:

Monitoring: MON; Integration: INT; Differentiation: DIF; Decentring: DEC; Understanding Others’ Mind: UOM; Mastery: MST. r item-total: corrected item-total correlation. Cohen’s dz values correspond to effect sizes from paired t-tests between respective items from the MSAS and MBRS. P-values have been corrected for multiple testing using the Benjamini-Hochberg procedure. a Corresponds to coefficient alpha (α) for the total scale. b Corresponds to coefficient categorical omega (ω) for the total scale. *p < 0.05; **p < 0.01; ***p < 0.001.

Across all items, Cohen’s dz effect sizes ranged from -0.48 to 0.35 (median = -0.08). Overall, 55.55% of the MBRS items had a higher mean rating than the corresponding MSAS items, while 44.44% of the MSAS items had a higher mean rating than the corresponding MBRS items. The alpha and omega estimates suggested a good level of internal consistency for the MBRS (α = 0.93; ω = 0.94) and an adequate level of internal consistency for the MSAS (α = 0.83; ω = 0.87).

The difference in total scores between the scales was small but statistically significant, producing an essentially negligible effect size (dz = -0.10) and suggesting that total scale scores are largely comparable across the two versions. More specifically, with respect to mean differences between the scales’ factor components, results showed that informants rated the targets slightly higher on Critical Distance (informant-report mean = 20.42, SD = 3.60; self-report mean = 20.41, SD = 2.79; t(383) = -0.25, n.s.), Understanding Others’ Mind (informant-report mean = 11.48, SD = 2.28; self-report mean = 10.57, SD = 1.98; t(383) = -6.38, p < 0.001), and Mastery (informant-report mean = 17.81, SD = 2.28; self-report mean = 17.12, SD = 2.96; t(383) = -3.29, p < 0.01), whereas they rated the targets slightly lower on Self-Reflexivity (informant-report mean = 19.94, SD = 3.47; self-report mean = 20.36, SD = 3.86; t(383) = 2.07, p < 0.05). The mean correlation between corresponding MBRS and MSAS items was 0.15 (average p = 0.03), while the correlation between the MBRS and MSAS total scores was 0.24 (p < 0.001). All correlations are reported in table 4.

Table 4.

Correlations between MBRS and MSAS, Global Scores and subscales (N =384)

MBRS Global Score MBRS Self Reflexivity MBRS Critical Distance MBRS Others Mind MBRS Mastery MSAS α; ω
MSAS Global Score 0.24** 0.21** 0.17** 0.14** 0.25** 0.80 / 0.84
MSAS Self Reflexivity 0.17** 0.23** 0.12* 0.09 0.12* 0.79 / 0.81
MSAS Critical Distance 0.17** 0.11* 0.21** 0.10* 0.14** 0.68 / 0.67
MSAS Others Mind Understanding 0.13* 0.10 0.03 0.14** 0.17** 0.76 / 0.77
MSAS Mastery 0.15** 0.11* 0.07 0.05 0.23** 0.64 / 0.63
MBRS α; ω 0.93 / 0.94 0.88 / 0.89 0.85 / 0.85 0.80 / 0.81 0.76 / 0.77

Note: Cronbach's alpha coefficient: α; Coefficient omega: ω. * p < 0.05; ** p < 0.001.

This result is consistent with the literature on the measurement of personality and psychopathology in which correlations between self- and informant-report measures are typically modest (0.20 - 0.30) to moderate (0.40 - 0.60) (Clark and Watson, 2019).

Discussion

In this study, we developed an informant version of the self-report MSAS, which we named the MBRS, and evaluated its psychometric properties. The internal consistency, as estimated by α and ω, was not only good for the MBRS but, notably, higher than that obtained for the MSAS in the same reference sample. Confirmatory factor analysis (CFA) was conducted on both scales, providing evidence for the factorial validity of the MBRS with respect to its counterpart items in the MSAS. The factor structure, based on the metacognition construct outlined in the MMFM (Semerari et al., 2003), was effectively replicated in both the MBRS and MSAS items, producing a good fit for both scales. In essence, the factor structures of the MBRS and MSAS demonstrated a high degree of congruence, indicating that the two measurement forms effectively assess the same theoretical constructs.

An intriguing finding from this study is that, on average, informants rated the targets as slightly more capable in metacognition than the targets rated themselves. In fact, the overall average score, though the difference was modest, was statistically significantly higher in the informant-report version than in the self-report version. Notably, this pattern of score differences varied across the components of the construct: informant ratings were significantly higher for Understanding Others’ Mind and Mastery, slightly lower for Self-Reflexivity, and did not differ for Critical Distance. While these score differences might seem small on average, their statistical significance highlights the need for further investigation.

Moreover, the magnitude of the correlation between the MBRS and MSAS total scores was relatively modest (r = 0.24, p < 0.001). However, this level of correlation, falling within the range of r = 0.20 to 0.30, is a common finding for self-informant correlations (Clark and Watson, 2019). It is important to recognize that metacognitive abilities often exhibit low external visibility, particularly in a behavioral context, when compared to more overt personality traits such as impulsivity or extraversion. Furthermore, the strength of the correlation between self-reported and informant-reported measures can be influenced by the depth of knowledge that informants possess about the targets. Informants who have observed the target in various situations can offer a more accurate assessment, and they provide the added benefit of reducing the impact of social desirability response bias (McDonald, 2008), which can be associated with self-assessment measures. Combining evaluations from multiple informants could further enhance the reliability of the results, as noted by Bagby and colleagues (Bagby et al., 2021) in the development of the informant form of the Toronto Alexithymia Scale (Bagby et al., 1994).

Strengths and limitations of this study are worth mentioning. The study has two main strengths. The first is the successful development of an empirically valid and practical method for evaluating metacognition. The second is its contribution as a valuable addition to the set of MMFM assessment tools (Semerari et al., 2003). In the context of a multi-method approach, the MBRS can be used alongside the MSAS and other validated measures of the model, such as the MAS-R (Carcione et al., 2010) or the MAI (Semerari et al., 2012), to assess metacognitive abilities. Importantly, the MBRS combines the convenience of screening provided by the MSAS with the precision generally achievable in informant-reported (hetero-reported) evaluations (Madeira et al., 2013). This is particularly beneficial for individuals with compromised metacognitive or mentalizing abilities (Lieberman, 2007; Nosek et al., 2011).

From a clinical perspective, the development and validation of the Metacognition Brief Rating Scale as an informant form of the Metacognition Self-Assessment Scale have significant implications. Having two complementary tools for evaluating metacognitive abilities provides clinicians with a distinct advantage in the field of clinical psychology and psychotherapy. While self-report assessments, such as the MSAS, offer valuable insights into a patient's self-perception, the introduction of the MBRS allows for a critical distinction between self-report evaluations and those conducted by clinicians or informants. This dual perspective offers a nuanced and comprehensive understanding of metacognition. Clinicians can assess not only how individuals perceive their own mental states and processes but also compare these self-perceptions to external evaluations. This comparative aspect holds promise in identifying discrepancies, facilitating targeted interventions, and enhancing the accuracy of metacognitive assessments. Having both self-report and informant instruments allows for a thorough and nuanced understanding of metacognition in the clinical context, a valuable asset for tailoring therapeutic strategies and interventions.

Finally, in comparison to session-transcript analysis (MAS-R) or clinical interviews (MAI), the MBRS is a quick and convenient assessment tool that can be easily administered, with data collected over the internet (Daraz et al., 2019; Sherifali et al., 2018). The results of this study suggest a difference between self-reported and informant-reported metacognitive abilities within the sample considered. This finding is of great importance and warrants further empirical studies to cross-validate the MBRS in both community and clinical samples.

A limitation of this study is that it relied on a target sample identified by friends and family members of students acting as informant evaluators. It is important to note that while this approach leaves aspects of clinical settings unexamined, it does account for significant factors related to the relational knowledge between informants and targets. Nonetheless, we acknowledge that the level of agreement between informant and self-report versions of personality trait measures might differ between clinical and nonclinical samples, potentially influenced by various variables as well as by the specific training and profession of the informants.

It is essential to underscore for the reader that each informant collected multiple target data points, potentially resulting in a clustering bias within the dataset. As Stochl and colleagues (2016) have pointed out, this factor may introduce random effects when conducting Confirmatory Factor Analysis in psychiatric research, which should not be disregarded. Therefore, future studies examining the structure of MBRS should duly consider these insights.

Another limitation is that this study did not evaluate the validity of the MBRS using external criteria. As an initial step, the focus of this research was primarily on establishing the internal structure's similarity, item characteristics across the two versions, and evidence of psychometric properties. As mentioned in the introduction, the self-report MSAS has already been translated into Portuguese and is used by researchers (Faustino, et al., 2021). Future investigations might consider translating the MBRS into the same language in an effort to ensure linguistic and instrumental congruence.

In conclusion, our preliminary results suggest that the MBRS is a reliable and valid measure of metacognition. It assesses the construct in a manner consistent with the self-report MSAS, offering promise for both clinical and research applications. While replication and further validation studies are necessary, we believe that the Metacognition Brief Rating Scale can be a valuable component of a multi-method approach to assessing metacognition.

Acknowledgments

The authors thank the colleagues of Psicometrica.it who contributed to the recruitment of the subjects who kindly participated in this study.

MBRS - APPENDIX

The following questionnaire concerns what people think about their ability to identify and describe their thoughts, emotions, and the social relationships in which they are involved. For each of the statements listed below, please indicate your judgment of how well it describes the person you are evaluating. MBRS-18_2023_EN TPMF
Please answer each statement by marking a cross in the appropriate box. Thank you for your cooperation!
A WITH RESPECT TO HIMSELF/HERSELF, ... Never Rarely Sometimes Frequently Almost always
1 IT1. MON 1 He/She is able to define and distinguish his/her own mental activities such as: remembering, imagining, fantasizing, dreaming, desiring, deciding, foreseeing and thinking. 1 2 3 4 5
2 IT2. MON 2 He/She is able to define and distinguish his/her emotions. 1 2 3 4 5
3 IT3. MON 3 He/She is aware of what thoughts or emotions are driving him/her to perform certain actions. 1 2 3 4 5
4 IT4. DIF 1 He/She is aware that what he/she thinks of himself/herself, of others and of things are ideas and representations that are not necessarily true. He/She realizes that his/her views can be temporary and can change. 1 2 3 4 5
5 IT5. DIF 2 He/She is aware that what he/she desires or expects may not come true and that he/she has limited power to influence things. 1 2 3 4 5
6 IT6. INT 1 He/She is able to perceive and clearly describe his/her thoughts, emotions and the relationships he/she is involved in. 1 2 3 4 5
7 IT7. INT 2 He/She is able to describe the thread that binds his/her thoughts and emotions even when they change from moment to moment. 1 2 3 4 5
B WITH RESPECT TO OTHERS, ... Never Rarely Sometimes Frequently Almost always
1 IT8. UOM 1 He/She is able to understand and distinguish the different mental activities of the people he/she knows; like when, for example: they remember, imagine, have fantasies, dream, desire, foresee and think. 1 2 3 4 5
2 IT9. UOM 2 He/She is able to identify and understand the emotions of the people he/she knows. 1 2 3 4 5
3 IT10. UOM 3 He/She is able to describe the thread that binds the thoughts and emotions of the people he/she knows even when they change from moment to moment. 1 2 3 4 5
C WITH RESPECT TO "PUTTING HIMSELF/HERSELF IN THE SHOES OF OTHERS", ... Never Rarely Sometimes Frequently Almost always
1 IT11. DEC 1 He/She is aware that he/she is not necessarily at the center of the thoughts, feelings and emotions of others and that their actions derive from motives and goals that may be independent of the relationship they have with him/her. 1 2 3 4 5
2 IT12. DEC 2 He/She is aware that others may perceive facts and events differently from him/her and interpret them differently. 1 2 3 4 5
3 IT13. DEC 3 He/She is aware that aspects such as age and experience influence people's thinking, emotions and behavior. 1 2 3 4 5
D WITH RESPECT TO DEALING WITH PROBLEMS, ... Never Rarely Sometimes Frequently Almost always
1 IT14. MST 1 He/She faces problems voluntarily by trying to impose or inhibit some of his/her behaviors. 1 2 3 4 5
2 IT15. MST 2 He/She faces problems voluntarily by trying to follow his/her own mental order. 1 2 3 4 5
3 IT16. MST 3 He/She addresses problems by trying to question or enrich his/her assessments and beliefs about the problems themselves. 1 2 3 4 5
4 IT17. MST 4 When problems are related to relationships with other people, he/she tries to solve them on the basis of what he/she believes to be their mental functioning. 1 2 3 4 5
5 IT18. MST 5 He/She faces the problem, recognizing and accepting his/her limitations in managing himself/herself and influencing events. 1 2 3 4 5

Note: MBRS informant-report: Monitoring: MON; Integration: INT; Differentiation: DIF; Decentring: DEC; Understanding Others’ Mind: UOM; Mastery: MST.

SCORING

Self-reflexivity = Monitoring (IT1 + IT2 + IT3) + Integration (IT6 + IT7)

Critical Distance = Differentiation (IT4+IT5) + Decentering (IT11 + IT12 + IT13)

Understanding Others’ Mind = IT8 + IT9 + IT10.

Mastery = IT14 + IT15 + IT16 + IT17 + IT18.
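
For illustration, the scoring rules above could be applied in R as follows, assuming an item data frame with columns it1 through it18 (the function and column names are hypothetical).

```r
# Hedged sketch: compute MBRS subscale and total scores from 18 item columns.
score_mbrs <- function(d) {
  data.frame(
    self_reflexivity     = rowSums(d[, c("it1", "it2", "it3", "it6", "it7")]),
    critical_distance    = rowSums(d[, c("it4", "it5", "it11", "it12", "it13")]),
    understanding_others = rowSums(d[, c("it8", "it9", "it10")]),
    mastery              = rowSums(d[, paste0("it", 14:18)]),
    total                = rowSums(d[, paste0("it", 1:18)])  # raw range 18-90
  )
}
```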

References

  1. American Psychiatric Association. (2022). Diagnostic and statistical manual of mental disorders (5th ed., text rev.). 10.1176/appi.books.9780890425787 [DOI] [Google Scholar]
  2. Aloi, M., Rania, M., Carbone, E. A., Calabrò, G., Caroleo, M., Carcione, A., Nicolò, G., Semerari, A., & Segura-Garcia, C. (2020). The role of self-monitoring metacognition sub-function and negative urgency related to binge severity. European Eating Disorders Review, 28(5), 580–586. [DOI] [PubMed] [Google Scholar]
  3. Aloi, M., Riccelli, C., Piterà, F., Notaro, M., Curcio, V., Pullia, L., Sorrentino, C., Audino, M.G., Carcione, A., Segura-Garcia, C. & De Fazio, P. (2022). Impaired Metacognitive Differentiation, High Difficulty in Controlling Impulses and Non-acceptance of Emotions are Associated With the Severity of Gambling Disorder. Journal of Gambling Studies, 1–11. [DOI] [PubMed] [Google Scholar]
  4. Bagby, R. M., Parker, J. D., & Taylor, G. J. (1994). The twenty-item Toronto Alexithymia Scale. Item selection and cross-validation of the factor structure. Journal of psychosomatic research, 38(1), 23–32. [DOI] [PubMed] [Google Scholar]
  5. Bagby, R. M., Parker, J. D., Onno, K. A., Mortezaei, A., & Taylor, G. J. (2021). Development and psychometric evaluation of an informant form of the 20-item Toronto alexithymia scale. Journal of psychosomatic research, 141, 110329. [DOI] [PubMed] [Google Scholar]
  6. Bateman, A. W., & Fonagy, P. (2004). Mentalization-based treatment of BPD. Journal of personality disorders, 18(1), 36–51. [DOI] [PubMed] [Google Scholar]
  7. Bateman, A. W., & Fonagy, P. (2009). Randomized Controlled Trial of Outpatient Mentalization-Based Treatment Versus Structured Clinical Management for Borderline Personality Disorder. Am J. Psychiatry, 166, 1355–1364. [DOI] [PubMed] [Google Scholar]
  8. Bateman, A., Bolton, R., & Fonagy, P. (2013). Antisocial personality disorder: A mentalizing framework. Focus, 11(2), 178–186. [Google Scholar]
  9. Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological bulletin, 107(2), 238. [DOI] [PubMed] [Google Scholar]
  10. Bilotta, E., Carcione, A., Fera, T., Moroni, F., Nicolò, G., Pedone, R., & Colle, L. (2018). Symptom severity and mindreading in narcissistic personality disorder. PloS one, 13(8). [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Bo, S., Kongerslev, M., Dimaggio, G., Lysaker, P. H., & Abu-Akel, A. (2015). Metacognition and general functioning in patients with schizophrenia and a history of criminal behavior. Psychiatry research, 225(3), 247–253. [DOI] [PubMed] [Google Scholar]
  12. Bouchard, M. A., Target, M., Lecours, S., Fonagy, P., Tremblay, L. M., Schachter, A., & Stein, H. (2008). Mentalization in adult attachment narratives: Reflective functioning, mental states, and affect elaboration compared. Psychoanalytic Psychology, 25(1), 47. [Google Scholar]
  13. Carcione, A., Dimaggio, G., Conti, L., Fiore, D., Nicolò, G., & Semerari, A. (2010). Metacognition Assessment Scale v. 4.0. Unpublished manuscript, Rome. [Google Scholar]
  14. Carcione, A., Nicolò, G., Pedone, R., Popolo, R., Conti, L., Fiore, D., & Dimaggio, G. (2011). Metacognitive mastery dysfunctions in personality disorder psychotherapy. Psychiatry Research, 190(1), 60–71. [DOI] [PubMed] [Google Scholar]
  15. Carcione, A., Nicolo, G., & Semerari, A. (Eds.). (2021). Complex cases of personality disorders: Metacognitive interpersonal therapy. Springer International Publishing. [Google Scholar]
  16. Carcione, A., Riccardi, I., Bilotta, E., Leone, L., Pedone, R., Conti, L., & Procacci, M. (2019). Metacognition as a Predictor of Improvements in Personality Disorders. Frontiers in psychology, 10, 170. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Choi-Kain, L. W., & Gunderson, J. G. (2008). Mentalization: ontogeny, assessment, and application in the treatment of borderline personality disorder. Am J Psychiatry, 165, 1127–1135. [DOI] [PubMed] [Google Scholar]
  18. Clark, L. A. (2007). Assessment and diagnosis of personality disorder: perennial issues and an emerging reconceptualization. Annual review of psychology, 58, 227–57. [DOI] [PubMed] [Google Scholar]
  19. Clark, L. A., & Watson, D. (2019). Constructing validity: New developments in creating objective measuring instruments. Psychological assessment, 31(12), 1412. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers. [Google Scholar]
  21. Colle, L., Dimaggio, G., Carcione, A., Nicolò, G., Semerari, A., & Chiavarino, C. (2020). Do Competitive Contexts Affect Mindreading Performance? Frontiers in Psychology, 11, 1284. 10.3389/fpsyg.2020.01284 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Cristobal, E., Flavian, C., & Guinaliu, M. (2007). Perceived e-service quality (PeSQ): Measurement validation and effects on consumer satisfaction and web site loyalty. Managing Service Quality: An International Journal, 17(3), 317–340. [Google Scholar]
  23. Daraz, L., Morrow, A. S., Ponce, O. J., Beuschel, B., Farah, M. H., Katabi, A., Alsawas, M., Majzoub, A. M., Benkhadra, R., Seisa, M. O., Ding, J. F., Prokop, L., & Murad, M. H. (2019). Can patients trust online health information? A meta-narrative systematic review addressing the quality of health information on the internet. Journal of General Internal Medicine, 34(9), 1884–1891. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Dimaggio, G., Procacci, M., Nicolò, G., Popolo, R., Semerari, A., Carcione, A., & Lysaker, P.H. (2007). Poor metacognition in narcissistic and avoidant personality disorders: Four psychotherapy patients analysed using the Metacognition Assessment Scale. Clinical Psychology & Psychotherapy, 14(5), 386–401. [Google Scholar]
  25. Dimaggio, G., Semerari, A., Carcione, A., Nicolò, G., & Procacci, M. (2007). Psychotherapy of personality disorders: Metacognition, states of mind and interpersonal cycles. Routledge. [Google Scholar]
  26. Dimaggio, G., Carcione, A., Nicolò, G., Conti, L., Fiore, D., & Pedone, R. (2009). Impaired decentration in personality disorder: A series of single cases analysed with the metacognition assessment scale. Clinical Psychology & Psychotherapy, 16(5), 450–462. [DOI] [PubMed] [Google Scholar]
  27. Dimaggio, G., & Lysaker, P. H. (Eds.) (2010). Metacognition and severe adult mental disorders: From research to treatment. Routledge. [Google Scholar]
  28. Dimaggio, G., Montano, A., Popolo, R., & Salvatore, G. (2015). Metacognitive interpersonal therapy for personality disorders: A treatment manual. London, UK: Routledge. [Google Scholar]
  29. Dunn, T. J., Baguley, T., & Brunsden, V. (2014). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105(3), 399–412. [DOI] [PubMed] [Google Scholar]
  30. Faustino, B., Branco Vasco, A., Oliveira, J., Lopes, P., & Fonseca, I. (2021). Metacognitive self-assessment scale: Psychometric properties and clinical implications. Applied Neuropsychology: Adult, 28(5), 596–606. 10.1080/23279095.2019.1671843 [DOI] [PubMed] [Google Scholar]
  31. Finch, J. F., & West, S. G. (1997). The investigation of personality structure: Statistical models. Journal of Research in Personality, 31(4), 439–485. [Google Scholar]
  32. Fonagy, P. (1991). Thinking about thinking: Some clinical and theoretical considerations in the treatment of a borderline patient. International Journal of Psychoanalysis, 72, 639–656. [PubMed] [Google Scholar]
  33. Fonagy, P., Gergely, G., Jurist, E. L., & Target, M. (2002). Affect regulation, mentalization, and the development of the self. New York, NY: Other Press. [Google Scholar]
  34. Fonagy, P., & Bateman, A. W. (2016). Adversity, attachment, and mentalizing. Comprehensive Psychiatry, 64, 59–66. [DOI] [PubMed] [Google Scholar]
  35. Frith, C. D., & Frith, U. (2006). The neural basis of mentalizing. Neuron, 50(4), 531–534. [DOI] [PubMed] [Google Scholar]
  36. Gilead, M., & Ochsner, K. N. (Eds.). (2021). The neural basis of mentalizing. Springer International Publishing. [Google Scholar]
  37. Grainger, C., Williams, D. M., & Lind, S. E. (2014). Metacognition, metamemory, and mindreading in high-functioning adults with autism spectrum disorder. Journal of Abnormal Psychology, 123(3), 650–659. [DOI] [PubMed] [Google Scholar]
  38. Gullestad, F. S., Johansen, M. S., Høglend, P., Karterud, S., & Wilberg, T. (2013). Mentalization as a moderator of treatment effects: Findings from a randomized clinical trial for personality disorders. Psychotherapy Research, 23(6), 674–689. [DOI] [PubMed] [Google Scholar]
  39. Gumley, A. (2011). Metacognition, affect regulation and symptom expression: A transdiagnostic perspective. Psychiatry Research, 190, 72–78. [DOI] [PubMed] [Google Scholar]
  40. Harder, S., & Folke, S. (2012). Affect regulation and metacognition in psychotherapy of psychosis: An integrative approach. Journal of Psychotherapy Integration, 22(4), 330–343. [Google Scholar]
  41. Holgado-Tello, F. P., Chacón-Moscoso, S., Barbero-García, I., & Vila-Abad, E. (2010). Polychoric versus Pearson correlations in exploratory and confirmatory factor analysis of ordinal variables. Quality & Quantity, 44, 153–166. [Google Scholar]
  42. Hopwood, C. J., Malone, J. C., Ansell, E. B., Sanislow, C. A., Grilo, C. M., McGlashan, T. H., & Morey, L. C. (2011). Personality assessment in DSM-5: Empirical support for rating severity, style, and traits. Journal of Personality Disorders, 25, 305–320. [DOI] [PubMed] [Google Scholar]
  43. Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185. [DOI] [PubMed] [Google Scholar]
  44. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. [Google Scholar]
  45. Lezak, M. D. (2015). Neuropsychological assessment (5th ed.). New York, NY: Oxford University Press. [Google Scholar]
  46. Lieberman, M. D. (2007). Social cognitive neuroscience: A review of core processes. Annual Review of Psychology, 58, 259–289. [DOI] [PubMed] [Google Scholar]
  47. Lorenzo-Seva, U., & Ten Berge, J. M. (2006). Tucker's congruence coefficient as a meaningful index of factor similarity. Methodology, 2(2), 57–64. [Google Scholar]
  48. Lovik, A., Nassiri, V., Verbeke, G., & Molenberghs, G. (2017). Combining factors from different factor analyses based on factor congruence. In The Annual Meeting of the Psychometric Society (pp. 211–219). Springer, Cham. [Google Scholar]
  49. Lucangeli, D., Cornoldi, C., & Tellarini, M. (1998). Metacognition and learning disabilities in mathematics. In Scruggs T. E. & Mastropieri M. A. (Eds.), Advances in learning and behavioral disabilities (Vol. 12, pp. 219–244). Atlanta, GA: Elsevier Science/JAI Press. [Google Scholar]
  50. Lysaker, P. H., Warman, D. M., Dimaggio, G., Procacci, M., La Rocco, V. A., & Clark, L. K. (2008). Metacognition in schizophrenia: Associations with multiple assessments of executive functions. The Journal of Nervous and Mental Disease, 196(5), 384–389. [DOI] [PubMed] [Google Scholar]
  51. Lysaker, P. H., Leonhardt, B. L., Pijnenborg, M., van Donkersgoed, R., de Jong, S., & Dimaggio, G. (2014). Metacognition in schizophrenia spectrum disorders: Methods of assessment and associations with neurocognition, symptoms, cognitive style and function. The Israel Journal of Psychiatry and Related Sciences, 51(1), 54–62. [PubMed] [Google Scholar]
  52. Lysaker, P. H., Minor, K. S., Lysaker, J. T., Hasson-Ohayon, I., Bonfils, K., Hochheiser, J., & Vohs, J. L. (2020). Metacognitive function and fragmentation in schizophrenia: Relationship to cognition, self-experience and developing treatments. Schizophrenia Research: Cognition, 19, 100142. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Madeira, N., Roque, C., Pereira, A. T., Nogueira, V., Soares, M. J., Macedo, A., Marques, M., Valente, J., & Bós, S. (2013). 1211-Self-report and hetero-evaluation of insight and medication adherence in severe mental illness-correlation and clinical interest. European Psychiatry, 28(S1), 1. [Google Scholar]
  54. Maillard, P., Dimaggio, G., de Roten, Y., Berthoud, L., Despland, J.-N., & Kramer, U. (2017). Metacognition as a predictor of change in the treatment for borderline personality disorder: A preliminary pilot study. Journal of Psychotherapy Integration, 27(4), 445. [Google Scholar]
  55. Maillard, P., Dimaggio, G., Berthoud, L., de Roten, Y., Despland, J.-N., & Kramer, U. (2020). Metacognitive improvement and symptom change in a 3-month treatment for borderline personality disorder. Psychology and Psychotherapy: Theory, Research and Practice, 93(2), 309–325. 10.1111/papt.12219 [DOI] [PubMed] [Google Scholar]
  56. McCrae, R. R., Zonderman, A. B., Costa, P. T., Jr., Bond, M. H., & Paunonen, S. V. (1996). Evaluating replicability of factors in the Revised NEO Personality Inventory: Confirmatory factor analysis versus Procrustes rotation. Journal of Personality and Social Psychology, 70(3), 552. [Google Scholar]
  57. McDonald, J. D. (2008). Measuring personality constructs: The advantages and disadvantages of self-reports, informant reports and behavioural assessments. Enquire, 1(1), 1–19. [Google Scholar]
  58. Minzenberg, M. J., Poole, J. H., & Vinogradov, S. (2006). Social-emotion recognition in borderline personality disorder. Comprehensive Psychiatry, 47(6), 468–474. [DOI] [PubMed] [Google Scholar]
  59. Monticelli, M., Zeppa, P., Mammi, M., Penner, F., Melcarne, A., Zenga, F., & Garbossa, D. (2021). Where we mentalize: Main cortical areas involved in mentalization. Frontiers in Neurology, 12, 712532. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Moritz, S., & Lysaker, P. H. (2018). Metacognition–what did James H. Flavell really say and the implications for the conceptualization and design of metacognitive interventions. Schizophrenia Research, 201, 20–26. [DOI] [PubMed] [Google Scholar]
  61. Moroni, F., Procacci, M., Pellecchia, G., Semerari, A., Nicolò, G., Carcione, A., & Colle, L. (2016). Mindreading dysfunction in avoidant personality disorder compared with other personality disorders. The Journal of Nervous and Mental Disease, 204(10), 752–757. [DOI] [PubMed] [Google Scholar]
  62. Nemiah, J. C. (1976). Alexithymia: A view of the psychosomatic process. Modern Trends in Psychosomatic Medicine, 3, 430–439. [Google Scholar]
  63. Nicolò, G., Dimaggio, G., Popolo, R., Carcione, A., Procacci, M., Hamm, J., & Lysaker, P. H. (2012). Associations of metacognition with symptoms, insight, and neurocognition in clinically stable outpatients with schizophrenia. The Journal of Nervous and Mental Disease, 200, 644–647. [DOI] [PubMed] [Google Scholar]
  64. Nosek, B. A., Hawkins, C. B., & Frazier, R. S. (2011). Implicit social cognition: From measures to mechanisms. Trends in Cognitive Sciences, 15(4), 152–159. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Özdemir, H. F., Toraman, Ç., & Kutlu, Ö. (2019). The use of polychoric and Pearson correlation matrices in the determination of construct validity of Likert type scales. Turkish Journal of Education, 8(3), 180–195. [Google Scholar]
  66. Pedone, R., Semerari, A., Riccardi, I., Procacci, M., Nicolò, G., & Carcione, A. (2017). Development of a self-report measure of metacognition: The Metacognition Self-Assessment Scale (MSAS). Instrument description and factor structure. Clinical Neuropsychiatry, 14(3), 185–194. [Google Scholar]
  67. Pedone, R., Barbarulo, A. M., Colle, L., Semerari, A., & Grimaldi, P. (2021). Metacognition mediates the relationship between maladaptive personality traits and levels of personality functioning: A general investigation on a nonclinical sample. The Journal of Nervous and Mental Disease, 209(5), 353–361. [DOI] [PubMed] [Google Scholar]
  68. Pellecchia, G., Moroni, F., Colle, L., Semerari, A., Carcione, A., Fera, T., & Procacci, M. (2018). Avoidant personality disorder and social phobia: Does mindreading make the difference? Comprehensive Psychiatry, 80, 163–169. [DOI] [PubMed] [Google Scholar]
  69. Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48(2), 1–36. [Google Scholar]
  70. Rosellini, A. J., & Brown, T. A. (2021). Developing and validating clinical questionnaires. Annual Review of Clinical Psychology, 17, 55–81. [DOI] [PubMed] [Google Scholar]
  71. Semerari, A., Carcione, A., Dimaggio, G., Falcone, M., Nicolò, G., Procacci, M., & Alleva, G. (2003). How to evaluate metacognitive functioning in psychotherapy? The metacognition assessment scale and its applications. Clinical Psychology & Psychotherapy, 10, 238–261. [Google Scholar]
  72. Semerari, A., Carcione, A., Dimaggio, G., Nicolò, G., Pedone, R., & Procacci, M. (2005). Metarepresentative functions in borderline personality disorder. Journal of Personality Disorders, 19(6), 690–710. [DOI] [PubMed] [Google Scholar]
  73. Semerari, A., Carcione, A., Dimaggio, G., Nicolò, G., & Procacci, M. (2007). Understanding minds: Different functions and different disorders? The contribution of psychotherapy research. Psychotherapy Research, 17(1), 106–119. [Google Scholar]
  74. Semerari, A., Cucchi, M., Dimaggio, G., Cavadini, D., Carcione, A., Bottelli, V., & Smeraldi, E. (2012). The development of the metacognition assessment interview: Instrument description, factor structure and reliability in a nonclinical sample. Psychiatry Research, 200, 890–895. [DOI] [PubMed] [Google Scholar]
  75. Semerari, A., Colle, L., Pellecchia, G., Buccione, I., Carcione, A., Dimaggio, G., & Pedone, R. (2014). Metacognitive dysfunctions in personality disorders: correlations with disorder severity and personality styles. Journal of Personality Disorders, 28(6), 751–766. [DOI] [PubMed] [Google Scholar]
  76. Semerari, A., Colle, L., Pellecchia, G., Carcione, A., Conti, L., Fiore, D., & Pedone, R. (2015). Personality disorders and mindreading: Specific impairments in patients with borderline personality disorder compared to other PDs. The Journal of Nervous and Mental Disease, 203(8), 626–631. [DOI] [PubMed] [Google Scholar]
  77. Sharp, C., Wright, A. G., Fowler, J. C., Frueh, B. C., Allen, J. G., Oldham, J., & Clark, L. A. (2015). The structure of personality pathology: Both general (‘g’) and specific (‘s’) factors? Journal of Abnormal Psychology, 124(2), 387–398. [DOI] [PubMed] [Google Scholar]
  78. Sherifali, D., Ali, M. U., Ploeg, J., Markle-Reid, M., Valaitis, R., Bartholomew, A., Fitzpatrick-Lewis, D., & McAiney, C. (2018). Impact of internet-based interventions on caregiver mental health: Systematic review and meta-analysis. Journal of Medical Internet Research, 20(7), e10668. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Solbakken, O. A., Hansen, R. S., Havik, O. E., & Monsen, J. T. (2012). Affect integration as a predictor of change: Affect consciousness and treatment response in open-ended psychotherapy. Psychotherapy Research, 22(6), 656–672. [DOI] [PubMed] [Google Scholar]
  80. Stochl, J., Jones, P. B., Perez, J., Khandaker, G. M., Böhnke, J. R., & Croudace, T. J. (2016). Effects of ignoring clustered data structure in confirmatory factor analysis of ordered polytomous items: a simulation study based on PANSS. International Journal of Methods in Psychiatric Research, 25(3), 205–219. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Tabachnick, B. G., & Fidell, L. S. (2018). Using multivariate statistics (7th ed.). Toronto, Canada: Pearson. [Google Scholar]
  82. Somma, A., Krueger, R. F., Markon, K. E., & Fossati, A. (2019). The replicability of the personality inventory for DSM-5 domain scale factor structure in US and non-US samples: A quantitative review of the published literature. Psychological Assessment, 31(7), 861. [DOI] [PubMed] [Google Scholar]
  83. Velicer, W. F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41(3), 321–327. [Google Scholar]
  84. Wells, A. (2000). Emotional disorders and metacognition: Innovative cognitive therapy. Chichester, UK: Wiley. [Google Scholar]
  85. Widiger, T. A., & Simonsen, E. (2005). Alternative dimensional models of personality disorder: Finding a common ground. Journal of Personality Disorders, 19, 110–130. [DOI] [PubMed] [Google Scholar]
