PLOS One. 2025 Oct 28;20(10):e0332724. doi: 10.1371/journal.pone.0332724

Validation of the Mentalization Scale (MentS) in francophone control and clinical samples

Flora Descartes 1,*, Vincent Besch 1, Margaux Bouteloup 1,2, Rosetta Nicastro 3,4, Eléonore Pham 3,4, Eva Rüfenacht 3,4, Nader Ali Perroud 3,4, Martin Debbané 1,5,*
Editor: Marco Innamorati
PMCID: PMC12561985  PMID: 41150688

Abstract

Aims

The Mentalization Scale (MentS) is a 28-item self-report measure of mentalizing capacities, yielding a three-factor structure: self-mentalizing, mentalizing others, and motivation to mentalize. Its anglophone version has been validated for use in both research and clinical contexts. The present study explores the psychometric properties of the francophone translation of the MentS in both control and clinical samples.

Method

A total of 711 participants were enrolled in this study. The MentS was administered to a community sample (N = 302, 161 females, Mage = 37.1, SDage = 12.3) and to a clinical sample composed of participants diagnosed with borderline personality disorder (BPD), attention deficit hyperactivity disorder (ADHD), or co-occurring BPD and ADHD (N = 409, 266 females, Mage = 32.5, SDage = 11.9). Confirmatory factor analysis was used to assess the fit of the model in our data, followed by reliability and validity analyses.

Results

Results from confirmatory factor analysis (CFA) revealed that a 27-item model best fit the data for both control and clinical samples. In the control sample, good internal consistency was found for the total scale (α = 0.856, ω = 0.867) as well as for the three subscales MentS-Motivation (α = 0.789, ω = 0.801), MentS-Others (α = 0.792, ω = 0.798) and MentS-Self (α = 0.824, ω = 0.828). Similarly, good internal consistency was found in the clinical sample for the total scale (α = 0.871, ω = 0.879) and subscales (Motivation: α = 0.770, ω = 0.783; Others: α = 0.842, ω = 0.847; Self: α = 0.808, ω = 0.815). The MentS demonstrated good temporal stability over a one-year interval, with excellent average-measures ICC for the total scale (ICC = .877, 95% CI [.843, .904], p < .001), and strong reliability for the Motivation (.837), Others (.806), and Self (.837) subscales (all ps < .001). The validity of the scale was confirmed using additional measures showing a coherent pattern of associations with components underlying the construct of mentalization: reflective functioning, childhood trauma, cognitive emotion regulation, overall psychopathology distress and borderline symptomatology.

Conclusion & Clinical Implications

The MentS can be used for research and clinical purposes in francophone samples. Our results suggest that among French-speaking samples, a 27-item solution may be optimal in both clinical and control populations. Evidence shows the scale can be employed across diagnostic entities, and that participants scoring low in self-mentalizing (MentS-Self subscale) may be more likely to report increased manifestations of psychopathology.

Introduction

Mentalization, or reflective functioning, is a capacity based on imaginative mental activity, defined as the awareness an individual has of mental states (feelings, thoughts, beliefs and wishes in oneself and others) underlying human behavior [1]. Mentalization supports self-regulation by fostering integrated and updated mental models of the affective and cognitive dimensions of human experience; these mental models critically assist the navigation of interpersonal and social interactions [2]. The centrality of mentalizing in human functioning extends to mental health and illness, positioning mentalizing as a key transdiagnostic psychological process [3,4]. Indeed, impairments in mentalizing have been linked to a range of mental health disorders, including borderline personality disorder (BPD) [5], attention deficit hyperactivity disorder (ADHD) [6], psychosis-spectrum disorders [7] and post-traumatic stress disorder (PTSD) [8]. Mentalizing is also targeted by clinical psychotherapeutic interventions, namely within the framework of mentalization based treatments (MBT), developed for personality disorders, such as BPD [9], antisocial personality disorder (ASPD) [10] or narcissistic personality disorder (NPD) [11], as well as other disorders such as eating disorders [12], drug addiction comorbidity [13,14], psychotic spectrum disorders [15,16] or complex trauma [17,18]. Such clinical evidence accentuates the importance of developing a fine-grained understanding of the different components of mentalizing, to better understand the specific mechanisms sustaining improvements in mental health.

In this vein, the evaluation of mentalization has evolved significantly since the initial assessment of mentalizing based on the Adult Attachment Interview (AAI) [19–21], which employed an hour-long semi-structured interview to explore attachment-related autobiographical memories. In this context, the Reflective Functioning Scale (RFS) was applied to provide specific scores on the basis of the participant’s AAI transcript [22]. While highly informative and widely used as a gold-standard tool for assessing reflective functioning, the RFS is time-intensive, resource-heavy, and requires specialized training for clinicians, limiting its feasibility for large-scale research and daily clinical applications [23–25]. Additionally, although the RFS assesses multiple qualitative aspects of reflective functioning such as plausibility, complexity, and consistency, its scoring system remains unidimensional, thus limiting the interpretability of scores, and preventing factorial statistical examination of its underlying structure [23].

A significant development in the assessment of mentalization was introduced through the conceptualization of the dimensions of mentalizing, operationalized on the basis of neurofunctional dynamic systems involved in thinking about mental states [26]. Specific dimensions of mentalizing and their articulation reflect the involvement of distinct but coordinated systems: cognitive/affective mentalizing, self/other mentalizing, automatic/controlled mentalizing, internal/external mentalizing. This conceptual development fueled the creation of self-report measures to evaluate mentalizing. One of the first and most widely used self-report measures is the Reflective Functioning Questionnaire (RFQ), developed to assess the way in which an individual employs mental state information when mentalizing [27]. Among its currently validated versions (in at least 7 languages), the 8-item RFQ (RFQ-8) is thought to reflect both increased (certainty) and decreased (uncertainty) use of mental state information when mentalizing. The francophone validation of the RFQ-8 confirmed its two-factor structure and psychometric robustness, demonstrating relevance in studies of typical and atypical development and links to clinical phenomena such as non-suicidal self-injury [28]. In addition to the RFQ, other self-report instruments have been developed to assess distinct facets or domains of mentalizing. The Mentalization Questionnaire (MQ) explores facets of inefficient mentalizing (such as refusing self-reflection and psychic equivalence mode) as well as general factors such as emotional awareness and regulation of affect through mentalizing [29]. The Mentalized Affectivity Scale (MAS) examines the identification, processing and expression of affectively-laden mental states [30]. These instruments all attempt to operationalize mentalization in ways that could address some of the previously observed limitations of the RFS.
More recently, other self-report tools have also emerged and include the Cognitive and Affective Mentalization Scale Questionnaire (CAMSQ) [31], the Multidimensional Mentalizing Questionnaire (MMQ) [32], the Mentalizing Emotions Questionnaire [33], the Failure to Mentalize Trauma Questionnaire [34], and the Interactive Mentalizing Questionnaire [35].

In the context of self-report assessments of mentalization, although several tools have been developed, few instruments comprehensively fulfill the combined requirements of being time-efficient, easy to administer, reliable and dimension-specific. Furthermore, few have been validated across both community and clinical samples. The Mentalization Scale (MentS) is one of the only self-report instruments fulfilling these requirements [36]. The rationale behind the development of the MentS items aligns with what the authors have identified as key priorities in both research and clinical applications of mentalization. The items capture indicators of well-developed mentalizing capacity (e.g., explicit efforts to identify mental states or awareness of their subjective nature) [37], as well as markers of impaired or distorted mentalization (e.g., lack of interest in or imagination about mental states) [38,39]. Furthermore, the scale also considers global mentalizing, reflecting its developmental roots, connection to attachment quality, and relevance to overall mental health [38,40]. Consistent with the multidimensional nature of mentalization described above, factor analyses revealed three distinct dimensions: self-mentalizing, mentalizing others, and the motivational component underlying the drive to mentalize. The Motivation subscale (MentS-M; 10 items) assesses interest in understanding mental states, one’s own and others’, including curiosity and willingness to reflect (e.g., “I find it important to understand reasons for my behavior”). The Others subscale (MentS-O; 10 items) measures the ability to recognize and interpret others’ emotions and intentions (e.g., “I can recognize other people’s feelings”). The Self subscale (MentS-S; 8 items) captures the ability to access and reflect on one’s own thoughts and feelings, including avoidance (e.g., “I do not like to think of my problems”).

The MentS has been validated across several languages and populations, though the sample sizes and clinical groups in these studies vary widely. Among non-clinical populations, the MentS has been widely validated across various languages, including studies conducted in Iran [41,42], Poland [43], Italy [44], Korea [45], Turkey [46] and Japan [47], attesting to the robustness of the scale. All these studies consistently confirmed the three-factor structure of the scale, though some reported differences in factor loadings on specific items [45–47]. As far as studies with clinical samples are concerned, only two studies aside from the original MentS study [36] have included clinical samples. The Austrian validation assessed 26 psychiatric inpatients with various diagnoses [48]; their small sample size limits the validity and generalizability of the findings, underscoring the need for larger clinical samples. Similarly, the Chinese validation of the MentS in a sample of patients with schizophrenia (n = 200) demonstrated good internal consistency and reliability [49]. However, the focus on a single diagnostic group highlights the need for more diverse clinical populations and broader clinical validation.

The francophone validation of the MentS appears as a particularly relevant choice for the present study, given its multidimensional structure, existing cross-cultural validation, and focus on both self- and other-oriented mentalizing as well as the motivation towards mentalization. A further strength of the present study lies in the validation of the MentS within a large and diverse clinical sample. Most notably, individuals with borderline personality disorder (BPD) are included, thereby addressing a critical need for confirmatory evidence in this clinical population surveyed only in the original validation of the scale. Moreover, the sample encompassed individuals with other diagnostic profiles, namely ADHD and comorbid BPD and ADHD, which arguably augments the scale’s potential for clinical application. The focus on a francophone population further contributes to the growing body of cross-linguistic validation efforts and underscores the cultural adaptability of the instrument.

Building on existing validation studies, specific constructs were selected in the present study to assess the validity of the scale and its dimensions [36]. First, the total mentalizing score (MentS-Tot) reflects an individual’s overall capacity for reflective functioning. Lower scores have been associated with both temporary (e.g., under emotional stress) and more enduring impairments in mentalization. These associations are well documented in borderline personality disorder (BPD) [3,50], and also reported in other conditions such as attention-deficit/hyperactivity disorder (ADHD) and comorbid ADHD-BPD [6]. Given this, the MentS-Total score is expected to correlate negatively with borderline symptoms and overall psychopathology, as lower mentalizing capacity is a hallmark of various psychiatric conditions, particularly those involving emotion dysregulation and interpersonal dysfunction [3,36].

Second, the motivation to mentalize (MentS-M) has been associated with emotion regulation capacity [2,51]. Given that previous research has documented how the willingness to engage with one’s own and others’ mental states supports more flexible and constructive emotional processing and regulation, the MentS-Motivation subscale’s score is expected to positively correlate with adaptive emotion regulation strategies, in line with other reports [2,37].

Third, mentalizing others (MentS-O) has been linked to RFQ-8 scores [44], and more specifically to its certainty subscale [41]. This relation underlines a balanced use of mental state information for social cognitive functioning [3]. This balance, neither overconfident nor excessively uncertain, is thought to support effective and adaptive mentalizing of self and others [1,28], and a recent longitudinal study reports its developmental association with prosociality and mental health [52]. Therefore, the MentS-Others score is expected to show positive associations with the certainty subscale of the Reflective Functioning Questionnaire, in line with prior work linking accurate social mentalizing with a confident, though not overly rigid, grasp of others’ mental states [27,30].

Finally, self-mentalizing (MentS-S) has gained traction in recent mentalization research, particularly in understanding trauma-related disturbances [5355]. Findings suggest that difficulties in self-mentalizing could be particularly evident in clinical populations [36,56]. For instance, individuals with a history of childhood trauma often present with fragmented internal representations, impairing their ability to make sense of their own experiences [57,58]. Impaired self-mentalizing can hinder coherent self-understanding and may lead to maladaptive self-attributions, such as excessive self-blame [59]. As a result, lower MentS-Self scores are anticipated to negatively correlate with self-reported childhood trauma.

Based on these considerations, we hypothesized that the original three-factor, 28-item model would fit the data well, with acceptable to good fit indices and good internal consistencies for both the total scale and subscales in both clinical and control samples. In terms of reliability, we expected the scores of control subjects to exhibit good temporal stability. For validity analyses, we predicted that MentS-Tot scores would correlate negatively with general psychopathology expression in the control sample, and with borderline symptom severity in the clinical sample. We predicted that MentS-M scores would correlate positively with adaptive emotion regulation scores. We further expected that MentS-O scores would correlate positively with the certainty subscale of reflective functioning. Finally, we hypothesized that MentS-S scores would correlate negatively with self-reported childhood trauma.

Method

Participants

A total of 711 participants were enrolled in this study. First, a control sample comprised N = 302 adult participants (161 females, Mage = 37.1, SDage = 12.3, age range 19–75). Participants, whose data were fully anonymized, were recruited on the Prolific website (https://www.prolific.com) during March 2022 and provided informed written consent. Inclusion criteria were being older than 18, being fluent in French, and never having been hospitalized for psychiatric reasons. To ensure data quality, several inclusion and exclusion controls were applied. Participants were required to have satisfactorily submitted at least 15 online surveys beforehand, to ensure familiarity with the online platform and reduce data entry errors. Nonsensical (bogus) items were randomly inserted in the survey and response times were monitored to check participants’ understanding and attention [60]. Participants were excluded if they provided incorrect responses to more than two bogus items or if their total response time was more than two standard deviations below the sample mean. In addition, participants who gave only one incorrect response on bogus items were also excluded if their total response time was at least one standard deviation below the sample mean. As a result of these combined rules, N = 70 participants were excluded.
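The combination of exclusion rules above can be sketched as follows. This is an illustration only, not the authors' actual screening code; the record structure (the `bogus_errors` and `rt` keys) is hypothetical, chosen to make the two rules explicit.

```python
from statistics import mean, stdev

def flag_exclusions(records):
    """Apply the two screening rules described above.

    Each record is a dict with (hypothetical) keys 'bogus_errors'
    (number of incorrect bogus-item responses) and 'rt' (total
    response time).
    """
    times = [r["rt"] for r in records]
    m, sd = mean(times), stdev(times)
    excluded = []
    for r in records:
        very_fast = r["rt"] < m - 2 * sd       # more than 2 SD below the mean
        somewhat_fast = r["rt"] <= m - sd      # at least 1 SD below the mean
        if r["bogus_errors"] > 2 or very_fast:
            excluded.append(r)
        elif r["bogus_errors"] == 1 and somewhat_fast:
            excluded.append(r)
    return excluded
```

Note that both rules combine an attention check (bogus items) with a speed check, so a participant is only excluded for a single bogus-item error when their response time is also suspiciously fast.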

Participants from the control group were contacted one year after the first survey; 257 replied and one was excluded for failing attention control, resulting in a retest subsample of n = 256 (84.77% of the initial sample; 133 females, Mage = 38.9, SDage = 12.7), which did not differ from the baseline group in age (Mann-Whitney U = 5149, p = 0.175) or gender (χ²(1) = 1.93, p = 0.165).

Second, a clinical sample comprised N = 409 participants (266 females, Mage = 32.5, SDage = 11.9, age range 16–77). They were recruited at the emotional regulation disorder unit (ERD) at the University Hospitals of Geneva, a second- and third-line service specialized in the assessment and treatment of adult attention deficit hyperactivity disorder (ADHD) and borderline personality disorder (BPD) through evidence-based programs. The inclusion criteria for the present study were being at least 18 years old, having a diagnosis of ADHD, BPD, or co-occurring ADHD and BPD, and providing informed consent for participation in the study and use of health data for research purposes. Participants carried one of three principal diagnoses: BPD (N = 133), ADHD (N = 207), or co-occurring BPD and ADHD (N = 69). Participants were assigned a battery of tests at arrival in the unit, including the MentS self-report questionnaire. Patient data were collected between 01/10/2020 and 29/01/2021 and accessed on 20/02/2022 for research purposes. Authors had access to information that could identify individual participants during and after data collection. Informed written consent was obtained from all participants at the time of admission to the unit. According to Swiss law, parental agreement for participation of minors above 14 years of age is not necessary. The study was approved by the Ethics Committee of the Geneva University Hospitals (no. 2021-00694) and by the Swiss Ethics Commission in Geneva under project BASEC id 2021-01100.

Measures

The Mentalization Scale (MentS; Dimitrijević et al. [36]) is a self-report questionnaire designed to assess the capacity to mentalize. The original measure contains 28 items, which participants rate on a 5-point Likert scale (1 = completely untrue; 5 = completely true). As mentioned above, the instrument yields three subscales reflecting domain-specific scores (MentS-Self, MentS-Others, MentS-Motivation), and a total score reflecting global mentalization capacity, with higher scores indicating stronger reflective functioning. The anglophone MentS demonstrated good internal consistency in control samples (total-scale α = 0.84) and acceptable internal consistency in clinical samples (α = 0.75); subscales showed acceptable reliability (α = 0.74 to 0.79), except the Motivation subscale, which fell below the acceptable threshold (α = 0.60). The original English MentS was translated to French by independent French and English native speakers using the forward-backward-forward procedure [61]. The French version of the scale can be found in the Supporting Information file S1 Data.
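The Likert-based scoring just described can be sketched generically. The item-to-subscale assignment and any reverse-keyed items belong to the published scale, so they are passed in as parameters here rather than hard-coded; nothing in this sketch should be read as the actual MentS key.

```python
def score_ments(responses, subscale_items, reverse_items=()):
    """Score a 5-point Likert scale such as the MentS.

    `responses` maps item number -> rating (1-5).
    `subscale_items` maps subscale name -> list of item numbers
    (the true assignment is given in the published scale).
    Reverse-keyed items, if supplied, are recoded as 6 - rating.
    """
    def value(i):
        return 6 - responses[i] if i in reverse_items else responses[i]
    scores = {name: sum(value(i) for i in items)
              for name, items in subscale_items.items()}
    # Subscales partition all items, so the total is their sum.
    scores["Total"] = sum(scores.values())
    return scores
```

With a hypothetical mapping of items 1-10 to Motivation, 11-20 to Others, and 21-28 to Self, a participant answering 3 everywhere would score 30, 30, 24, and 84 overall.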

The Symptom Checklist-90-Revised (SCL-90-R; [62]): is a 90-item, self-report questionnaire that utilizes a 5-point Likert scale. It assesses nine primary symptom dimensions: Somatization, Obsessive-Compulsive, Interpersonal sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation and Psychoticism. Additionally, it provides three scores reflecting global distress: Global Severity Index (GSI), Positive Symptom Distress Index and Positive Symptom Total. This scale has been validated among French-speaking populations, and was employed with the control group in this study. The scale has demonstrated stability and reliability for its main factors, including Depression, Somatization, and Panic-Agoraphobia, with Cronbach’s alphas ranging from 0.77 to 0.90 across different subscales.

The Borderline Symptom List (BSL-23; [63]): is a short, 23-item self-rating assessment designed to measure borderline personality disorder typical symptomatology, employed in the clinical group in this study. The French version of the BSL-23 has demonstrated good psychometric properties, indicating its validity for both clinical and research purposes as it shows excellent internal consistency with Cronbach’s α of 0.94 for the total score [64].

The Cognitive Emotion Regulation Questionnaire (CERQ; [65]): is a 36-item self-report measuring nine cognitive emotion regulation strategies, classified as maladaptive or adaptive, that individuals use after experiencing negative events or situations. The CERQ focuses on an individual’s thoughts rather than their actions. It is widely used for both research and diagnostic purposes. In this study, we focus on the adaptive strategies (CERQ_Ad_ER): acceptance, positive refocusing, refocus on planning, positive reappraisal, and putting into perspective. The French version of the CERQ has demonstrated good psychometric properties, making it suitable for both research and clinical purposes, with Cronbach’s alphas ranging from 0.68 to 0.87 for the five adaptive strategies subscales [66].

The 8-item Reflective Functioning Questionnaire (RFQ-8; [27]): is a self-report, 8-item questionnaire. It consists of two subscales: certainty (RFQ_c) and uncertainty (RFQ_u) regarding mental states, with extreme scores on either subscale indicating impairments in mentalization. The French version of the questionnaire was validated in a non-clinical sample with α coefficients ranging from 0.70 to 0.83 for different subscales [28]. In this study, we employ a single dimension approach [67] based on the certainty subscale (RFQ_c) scoring scheme [52].

The Childhood Trauma Questionnaire – Short Form (CTQ-SF; [68]): is a brief, 28-item screening instrument, rated on a 5-point Likert scale, designed to assess histories of maltreatment. Through five subscale scores, it evaluates the presence and severity of physical, emotional, and sexual abuse, as well as emotional and physical neglect. The French version of the CTQ-SF has been validated with good psychometric properties suitable for both clinical and research purposes, with alphas ranging from 0.77 to 0.95 for the different subscales [69].

Statistical analyses

Descriptive statistics and data analyses.

Statistical analyses were conducted using Jamovi (desktop version 2.3.21.0) and IBM SPSS Statistics software (version 29.0.2.0). The characteristics of participants from both clinical and control samples were collected, including their gender, age, and diagnostic condition.

All variables were screened for missing data. A pragmatic, researcher-defined criterion was applied, consistent with prior research practices [70]: participants were excluded if more than 30% of items were missing on a given scale, and mean substitution was used when missingness was below this threshold. Twenty-five participants were excluded from analyses due to missing responses on more than 30% of items on one or more scales in the clinical sample; there were no missing values in the control sample.
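The 30% rule and mean substitution can be sketched per participant and per scale. The paper does not specify whether the substituted mean is the item mean or the participant's own scale mean; this sketch assumes the latter, and that assumption should not be read back into the authors' procedure.

```python
from statistics import mean

def handle_missing(responses, max_missing=0.30):
    """responses: one participant's item responses on one scale,
    with None marking missing items.

    Returns None (participant excluded on this scale) if more than
    30% of items are missing; otherwise fills missing items with the
    participant's own scale mean (one plausible reading of
    "mean substitution").
    """
    n = len(responses)
    missing = sum(1 for v in responses if v is None)
    if missing / n > max_missing:
        return None
    if missing == 0:
        return list(responses)
    m = mean(v for v in responses if v is not None)
    return [m if v is None else v for v in responses]
```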

Visual inspection of histograms and boxplots was also performed to assess the shape of distributions and detect potential outliers. Extreme values were identified in the control group but not in the clinical sample. Specifically, one extreme low value was observed on the self-mentalizing subscale (MentS-Self), while no outliers were detected on the MentS-Others or MentS-Motivation subscales. On the MentS total scale, six extreme values were found: three on the lower end and three on the higher end. All data points, including these extreme cases, were retained in the analyses, as they were considered to reflect meaningful individual variation rather than error. Accordingly, nonparametric Spearman’s rho correlations were computed among the MentS subscales in the control group to assess potential multicollinearity. A moderate positive correlation was found between MentS-Others and MentS-Motivation (ρ = .703, p < .001), indicating substantial overlap between these two dimensions. Weaker but significant correlations were also observed between MentS-Self and MentS-Others (ρ = .175, p = .002), and between MentS-Self and MentS-Motivation (ρ = .120, p = .037). These results suggest that while the three subscales are related, they capture distinct facets of mentalizing. All correlations fell within acceptable ranges [71], indicating no substantial concerns regarding multicollinearity.

Confirmatory factor analysis.

Confirmatory factor analyses (CFA) were conducted using the SEMLj module as an interface to the lavaan R package [72] in Jamovi (desktop version 2.3.21.0), with the robust Weighted Least Squares Mean and Variance adjusted (WLSMV) estimation method. The WLSMV estimator is well suited for ordinal data and provides robust estimates in the presence of non-normality and potential heteroscedasticity. CFA was first applied to the control sample to evaluate the proposed three-factor, 28-item model. Guided by the theoretical framework of mentalization and by modification indices, alternative models were tested. These included versions with correlated residuals between items with similar wording or meaning, consistent with recent research using the same approach [27,73], as well as models with item reduction. The final best-fitting model was subsequently tested in the clinical sample to assess model generalizability and to conduct a split-sample cross-validation.

For each model tested, the goodness-of-fit indices considered were the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), the Root Mean Square Error of Approximation (RMSEA) with its 90% confidence interval (90% CI), and the Standardized Root Mean Square Residual (SRMR). These descriptive indices were considered alongside the χ² statistic, as the latter is sensitive to sample size [74,75]. Following established recommendations, models were evaluated using the following thresholds: CFI and TLI values of at least 0.90 indicate an acceptable fit and values of at least 0.95 a good fit [76]; RMSEA values below 0.06 and SRMR values below 0.08 indicate a good fit, and values below 0.05 an excellent fit [74,76,77].
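These cutoffs can be encoded as a simple lookup. The sketch labels each index separately, since the paper gives per-index thresholds; how to combine them into a single overall verdict is a judgment call left to the analyst, not something this function decides.

```python
def fit_labels(cfi, tli, rmsea, srmr):
    """Label each fit index using the thresholds stated above."""
    def comparative(v):                 # CFI / TLI: >= .90 acceptable, >= .95 good
        return "good" if v >= 0.95 else "acceptable" if v >= 0.90 else "poor"
    def residual(v, good_cutoff):       # RMSEA < .06 / SRMR < .08 good, < .05 excellent
        if v < 0.05:
            return "excellent"
        return "good" if v < good_cutoff else "above threshold"
    return {"CFI": comparative(cfi), "TLI": comparative(tli),
            "RMSEA": residual(rmsea, 0.06), "SRMR": residual(srmr, 0.08)}
```

Applied to the final control-sample model reported below (CFI = .954, TLI = .949, RMSEA = .071, SRMR = .079), CFI rates "good", TLI "acceptable", SRMR "good", while RMSEA sits above the stated .06 cutoff, which is why fit judgments weigh the indices jointly.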

Reliability analyses.

Normality of score distributions for all MentS subscales and total scores was evaluated through skewness, kurtosis, and the Shapiro-Wilk test. These indices were computed separately for the clinical and control samples. Following common guidelines, distributions were considered approximately normal when skewness and kurtosis values fell within the ± 2 range [78], and when the Shapiro-Wilk test was non-significant (p > .05) [79]. Shapiro-Wilk tests indicated no violation of normality for the total scores in either group. All subscales were normally distributed in the control group, whereas in the clinical group, the Others and Self subscales significantly deviated from normality, while the Motivation subscale did not.
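The ±2 screening rule can be illustrated with simple moment-based estimators. This is a sketch: statistical packages such as Jamovi apply small-sample corrections (adjusted Fisher-Pearson coefficients), so their values differ slightly from these raw moments.

```python
def skew_kurtosis(x):
    """Raw moment-based sample skewness and excess kurtosis.

    Values within roughly +/-2 on both are treated as consistent
    with approximate normality in the screening rule above.
    """
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n   # variance (2nd central moment)
    m3 = sum((v - m) ** 3 for v in x) / n   # 3rd central moment
    m4 = sum((v - m) ** 4 for v in x) / n   # 4th central moment
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0        # 0 for a normal distribution
    return skew, excess_kurt
```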

Internal consistency of the obtained subscales and total scale was estimated using Cronbach’s α and McDonald’s ω coefficients in both control and clinical samples, according to the model found to best fit our data. Cronbach’s α is deemed acceptable for exploratory research at 0.70 to 0.79, good for research or clinical purposes at 0.80 to 0.89, and excellent and suitable for diagnostic purposes at 0.90 or above [80,81]. McDonald’s ω is considered acceptable at 0.70 or above, good at 0.80 or above, and excellent at 0.90 or above [82].

Temporal stability of the scale in the control sample was evaluated through intraclass correlation coefficients between test and 1-year retest measures for the total scale and subscales, using average-measures coefficients with a two-way mixed model and absolute agreement.
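For readers who want the computation behind this coefficient, here is a minimal sketch of ICC(A,k) in the McGraw and Wong convention (two-way model, absolute agreement, average measures), built from a standard two-way ANOVA decomposition. Software implementations add missing-data handling and confidence intervals on top of this.

```python
def icc_a_k(data):
    """ICC(A,k): two-way model, absolute agreement, average measures.

    `data` is a list of rows, one per subject, each containing the
    k repeated measurements (here, test and 1-year retest).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares for rows (subjects), columns (occasions), residual.
    msr = k * sum((rm - grand) ** 2 for rm in row_means) / (n - 1)
    msc = n * sum((cm - grand) ** 2 for cm in col_means) / (k - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)
```

Because the type is absolute agreement, a constant shift between occasions (e.g., every retest score one point higher) lowers the coefficient even though the rank ordering is perfect.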

Convergent validity.

Convergent validity of the total-scale scores was examined with different criterion measures in each sample. In the control sample, correlations with global psychopathology severity were inspected, whereas in the clinical sample, correlations with borderline symptomatology were used. Subscale validity was examined similarly in both samples, with correlations between MentS-Motivation and adaptive emotion regulation, MentS-Others and certainty in reflective functioning, and MentS-Self and childhood trauma.

All correlations are Spearman’s rho coefficients, applied consistently across both samples to ensure methodological consistency and facilitate direct comparison of correlation strengths. This choice was motivated by violations of normality observed in the clinical group and the presence of extreme values in the control group. Full correlation tables are provided in the S1 Data.
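Spearman's rho is simply the Pearson correlation computed on ranks, which is what makes it robust to the outliers and non-normality noted above. A self-contained sketch with mid-rank handling of ties:

```python
def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of average (mid) ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend the block over all tied values.
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            mid_rank = (i + j) / 2 + 1
            for idx in order[i:j + 1]:
                r[idx] = mid_rank
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only ranks enter the computation, any monotone transformation of the scores (and any single extreme value) leaves rho unchanged.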

Results

Descriptive statistics

The control sample included 302 participants (161 females, 141 males). The clinical sample comprised 409 participants diagnosed with borderline personality disorder (BPD), attention-deficit/hyperactivity disorder (ADHD), or both. Beyond these principal diagnoses, the clinical sample showed substantial diagnostic heterogeneity, with BPD and/or ADHD diagnoses often accompanied by additional psychiatric comorbidities, particularly mood, anxiety, and substance use disorders. Most participants in this group were receiving at least one psychiatric medication at the time of participation. Full demographic and diagnostic information, including age statistics by gender and diagnosis, is presented in Table 1.

Table 1. Descriptive statistics–number of participants, gender, diagnostic, age.

Sample    Gender  Diagnostic  N (/Ntot)    Age Mean  Age SD  Age Min  Age Max
Control   Female  N/A         161 (/302)   37.3      13.6    19.6     74.9
Control   Male    N/A         141 (/302)   36.8      10.7    20.2     73.4
Clinical  Female  BPD         121 (/409)   26.5      9.11    16       58
Clinical  Female  BPD/ADHD    51 (/409)    30.4      10.27   16       55
Clinical  Female  ADHD        94 (/409)    35.7      12.59   17       77
Clinical  Male    BPD         12 (/409)    27.8      6.93    20       42
Clinical  Male    BPD/ADHD    18 (/409)    33.1      9.63    18       50
Clinical  Male    ADHD        113 (/409)   37.5      12.41   16       74

N/A = Not Applicable (control sample); BPD = Borderline Personality Disorder; ADHD = Attention Deficit Hyperactivity Disorder; N = Number of Participants.

Confirmatory factor analysis

Confirmatory factor analysis (CFA) was first conducted on the control sample to evaluate the fit of the originally proposed 3-factor and 28-item model [36].

As initial model fit was not satisfactory (CFI = .868; TLI = .856; SRMR = .110; RMSEA = .116, 90% CI [.110, .121]; χ²(347) = 1745.0, p < .001), modification indices for residual covariances, together with theoretical considerations, were used to improve the model. Specifically, we allowed the residuals of items similar in formulation or meaning to correlate [27,73]. A revised model with 28 items and three correlated residual pairs (item 3 with items 5, 22, and 4) showed improved fit (CFI = .923; TLI = .915; SRMR = .093; RMSEA = .092, 90% CI [.086, .097]; χ²(343) = 1209.0, p < .001). However, inspection of the modification indices revealed that item 25 showed a large modification index of 1224.85 for its loading on the MentS-Self factor (MentS-S) and of 1045.84 for the MentS-Others factor (MentS-O), indicating substantial cross-loadings. These extremely high values persisted even after residual correlations were added (e.g., 1214.50 for MentS-S in Model 2), suggesting that item 25 did not fit the underlying structure.

Given this persistent misfit and its disproportionate influence on the model, item 25 was removed in Model 3. The resulting 27-item model, retaining only the three residual correlations, yielded the best fit in the control sample (CFI = .954; TLI = .949; SRMR = .079; RMSEA = .071, 90% CI [.065, .077]; χ²(318) = 800.0, p < .001) and was retained.

This final model was subsequently tested in the clinical sample to assess its generalizability and conduct a split-sample cross-validation. Results also indicated an acceptable fit in the clinical data (CFI = .902; TLI = .891; SRMR = .077; RMSEA = .069, 90% CI [.064, .074]; χ²(318) = 937.0, p < .001), supporting the replicability of the model across both samples. All fit indices are reported in Table 2.

Table 2. Goodness of fit indices–confirmatory factor analyses.

Model  CFI  TLI  SRMR (scaled)  RMSEA (scaled)  χ²(df) (scaled-user)
1) 3-factor, 28 items  0.868  0.856  0.110  0.116 (.110–.121)  χ²(347) = 1745, p < .001
2) 3-factor, 28 items, CR  0.923  0.915  0.093  0.092 (.086–.097)  χ²(343) = 1209, p < .001
3) 3-factor, 27 items, CR  0.954  0.949  0.079  0.071 (.065–.077)  χ²(318) = 800, p < .001
4) Model 3 applied to the clinical sample: 3-factor, 27 items, CR  0.902  0.891  0.077  0.069 (.064–.074)  χ²(318) = 937, p < .001

CR = Correlated Residuals; CFI = Comparative Fit Index; TLI = Tucker-Lewis Index; SRMR = Standardized Root Mean Square Residual; RMSEA = Root Mean Square Error of Approximation.

Model 3 (27 items) showed improved fit over Model 2 (28 items), with ΔCFI = 0.031 and ΔRMSEA = 0.021, both exceeding recommended thresholds for meaningful change [83,84]. This supports the selection of the more parsimonious 27-item model.
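The model-comparison arithmetic can be checked directly from the Table 2 values. The numeric cut-offs below (ΔCFI ≥ .010, ΔRMSEA ≥ .015) are the commonly cited rules of thumb for meaningful change and are an assumption of this sketch, since the text does not restate them numerically:

```python
# Fit indices reported in Table 2 for the 28-item (Model 2)
# and 27-item (Model 3) solutions in the control sample.
cfi = {"model2": 0.923, "model3": 0.954}
rmsea = {"model2": 0.092, "model3": 0.071}

delta_cfi = round(cfi["model3"] - cfi["model2"], 3)        # improvement in CFI
delta_rmsea = round(rmsea["model2"] - rmsea["model3"], 3)  # reduction in RMSEA

# Commonly cited thresholds for a meaningful change in fit
# (an assumption of this illustration): ΔCFI >= 0.010, ΔRMSEA >= 0.015.
meaningful = delta_cfi >= 0.010 and delta_rmsea >= 0.015
```

Both deltas (0.031 and 0.021) clear these conventional thresholds, consistent with retaining the 27-item solution.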

Subsequent analyses therefore use the 27-item version of the MentS, omitting item 25. Scoring is as follows: Self subscale = items [8, 11, 14, 18, 19, 21, 22, 26]; Others subscale = items [2, 3, 5, 6, 10, 12, 20, 23, 28]; Motivation subscale = items [1, 4, 7, 9, 13, 15, 16, 17, 24, 27]. The total score is the sum of the three subscales (MentS-S + MentS-O + MentS-M). Reverse-coded items are: 8, 9, 11, 14, 18, 19, 21, 22, 26, 27.
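The scoring rule above can be sketched as a small function. The item-to-subscale assignment follows the lists in the text; the 1–5 Likert response format and the 6 − x reversal rule are assumptions of this illustration, not restated in the article:

```python
# MentS-27 scoring sketch. Item assignments follow the subscale composition
# given in the text; the 1-5 Likert format and the 6 - x reversal rule are
# assumptions of this illustration.
SELF_ITEMS = [8, 11, 14, 18, 19, 21, 22, 26]
OTHERS_ITEMS = [2, 3, 5, 6, 10, 12, 20, 23, 28]
MOTIVATION_ITEMS = [1, 4, 7, 9, 13, 15, 16, 17, 24, 27]
REVERSED = {8, 9, 11, 14, 18, 19, 21, 22, 26, 27}  # item 25 is dropped entirely

def score_ments27(responses):
    """responses: dict mapping item number (1-28, excluding 25) -> rating 1-5."""
    def item(i):
        r = responses[i]
        return 6 - r if i in REVERSED else r

    scores = {
        "self": sum(item(i) for i in SELF_ITEMS),
        "others": sum(item(i) for i in OTHERS_ITEMS),
        "motivation": sum(item(i) for i in MOTIVATION_ITEMS),
    }
    scores["total"] = scores["self"] + scores["others"] + scores["motivation"]
    return scores
```

Under these assumptions the maximum total is 27 × 5 = 135, consistent with the observed score range reported in Table 3.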

Reliability analyses

Table 3 presents the MentS mean scores, results of normality testing, and reliability analyses for the total scale and subscales in both samples.

Table 3. MentS scores, normality testing and reliability analyses.

MentS scores
Scale      Sample    Mean  SD    Min  Max  Skewness  SE     Kurtosis  SE     Shapiro-Wilk W  p      Cronbach’s α  McDonald’s ω
MentS-Tot  Control   95.2  13.5  55   135  0.177     0.140  0.215     0.280  0.992           0.117  0.856         0.867
MentS-Tot  Clinical  97.0  14.6  59   131  −0.013    0.121  −0.308    0.241  0.995           0.160  0.871         0.879
MentS-M    Control   35.6  6.28  18   50   −0.263    0.140  −0.263    0.280  0.990           0.046  0.789         0.801
MentS-M    Clinical  39.2  6.38  21   50   −0.411    0.121  −0.326    0.241  0.977           <.001  0.770         0.783
MentS-O    Control   33.0  5.27  18   45   −0.179    0.140  −0.149    0.280  0.991           0.070  0.792         0.798
MentS-O    Clinical  35.4  5.82  17   45   −0.393    0.121  −0.390    0.241  0.976           <.001  0.842         0.847
MentS-S    Control   26.6  6.61  8    40   −0.168    0.140  −0.265    0.280  0.990           0.031  0.824         0.828
MentS-S    Clinical  22.4  6.76  8    40   0.301     0.121  −0.352    0.241  0.987           0.001  0.808         0.815

MentS-Tot = Total MentS score; MentS-M = MentS-Motivation subscale; MentS-O = MentS-Others subscale; MentS-S = MentS-Self subscale; Mean = MentS mean scores; SD = Standard Deviation in MentS scores; Min = Minimum MentS scores; Max = Maximum MentS scores; SE = Standard Error; W = Shapiro-Wilk test statistic; p = Shapiro-Wilk test p-value.

Reliability estimates were acceptable for the Motivation subscale in the clinical group and for the Others subscale in the control group, and good for all other subscales and for the full scale in both samples.
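For reference, Cronbach’s α can be computed from item-level responses as below; this is a generic pure-Python sketch of the coefficient, not the study’s analysis code:

```python
def cronbach_alpha(items):
    """items: list of k lists, each holding all n participants' responses to
    one item. alpha = k/(k-1) * (1 - sum(item variances) / var(total scores))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n-1 denominator), as in most stats software
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

When items are perfectly parallel, the item variances sum to 1/k of the total-score variance and α reaches 1; α rises toward the .77–.87 range reported above as inter-item covariance grows.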

Our francophone version of the scale demonstrated good one-year test–retest reliability, assessed with average-measures intraclass correlation coefficients (ICC). The total MentS scale showed excellent reliability (ICC = .877, 95% CI [.843, .904], p < .001). Subscale reliability coefficients were also strong: MentS-Motivation (ICC = .837, 95% CI [.792, .873], p < .001), MentS-Others (ICC = .806, 95% CI [.752, .848], p < .001), and MentS-Self (ICC = .837, 95% CI [.791, .872], p < .001).
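As a sketch of the test–retest statistic, the following implements a two-way random-effects, absolute-agreement, average-measures ICC (often labelled ICC(2,k)); the specific ICC variant is an assumption of this illustration, since the text specifies only “average measures”:

```python
def icc_a_k(data):
    """Two-way random-effects, absolute-agreement, average-measures ICC:
    (MSR - MSE) / (MSR + (MSC - MSE) / n).
    data: list of n rows (subjects), each a list of k scores (time points)."""
    n, k = len(data), len(data[0])
    gm = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - gm) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - gm) ** 2 for m in col_means) / (k - 1)
    sse = sum(
        (data[i][j] - row_means[i] - col_means[j] + gm) ** 2
        for i in range(n) for j in range(k)
    )
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)
```

Because this form penalizes absolute disagreement, a uniform drift in scores between the two administrations lowers the coefficient even when the rank ordering of participants is preserved.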

Convergent validity analyses

Correlations between MentS mean scores and neighboring constructs were examined in both samples, displayed in Tables 4 and 5 for the control and clinical samples, respectively.

Table 4. Spearman’s rho correlations in the control sample.

MentS_Tot MentS_M MentS_O MentS_S
SCL90_GSI −.164** .111 .078 −.520**
CERQ_Ad_ER .282** .183** .184** .254**
RFQ_c .418** .117* .218** .602**
CTQ_Tot −.045 .074 .070 −.201**
CTQ_EmAb .020 .161** .161** −.221**
CTQ_PhAb .035 .062 .057 −.052
CTQ_SexAb .042 .124* .100 −.120*
CTQ_EmNeg −.072 .008 .028 −.149**
CTQ_PhNeg −.144* −.068 −.089 −.156**

Hypothesized correlations in bold font.

**. Correlation is significant at the 0.01 level (2-tailed).

*. Correlation is significant at the 0.05 level (2-tailed).

Table 5. Spearman’s rho Correlations in the clinical sample.

MentS_Tot MentS_M MentS_O MentS_S
BSL_23 −.145** .005 .085 −.425**
CERQ_Ad_ER .265** .208** .112* .294**
RFQ_c .431** .237** .266** .521**
CTQ_Tot −.026 .060 .082 −.199**
CTQ_EmAb .033 .122* .144** −.183**
CTQ_PhAb −.039 −.018 .046 −.128**
CTQ_SexAb .066 .110* .053 −.036
CTQ_EmNeg −.099* −.014 .000 −.206**
CTQ_PhNeg −.145** −.092 −.013 −.206**

Hypothesized correlations in bold font.

**. Correlation is significant at the 0.01 level (2-tailed).

*. Correlation is significant at the 0.05 level (2-tailed).

Associations in the control sample

Table 4 displays associations in the control sample. The mentalization total-scale score (MentS_Tot) shows a significant negative correlation with general psychological distress (SCL90_GSI, ρ = −.164, p < .01), indicating that higher mentalization is associated with lower distress. Motivation to mentalize (MentS_M) is significantly correlated with adaptive emotion regulation (CERQ_Ad_ER, ρ = .183, p < .01). Mentalizing of others (MentS_O) is significantly correlated with reflective functioning certainty (RFQ_c, ρ = .218, p < .01). Mentalizing of the self (MentS_S) correlates negatively with several trauma measures, including emotional abuse (CTQ_EmAb, ρ = −.221, p < .01), emotional neglect (CTQ_EmNeg, ρ = −.149, p < .01), and physical neglect (CTQ_PhNeg, ρ = −.156, p < .01). The full association table can be found in the Supporting Information file as S1 Table in S1 Data.

Associations in the clinical sample

Table 5 displays associations in the clinical sample. The mentalization total-scale score (MentS_Tot) is significantly negatively correlated with borderline symptomatology (BSL_23, ρ = −.145, p < .01), suggesting that higher mentalizing capacity is associated with fewer borderline symptoms. Motivation to mentalize (MentS_M) is significantly correlated with adaptive emotion regulation (CERQ_Ad_ER, ρ = .208, p < .01). Mentalizing of others (MentS_O) shows a positive correlation with reflective functioning certainty (RFQ_c, ρ = .266, p < .01). Mentalizing of the self (MentS_S) shows negative correlations with several trauma measures, including emotional abuse (CTQ_EmAb, ρ = −.183, p < .01), emotional neglect (CTQ_EmNeg, ρ = −.206, p < .01), physical neglect (CTQ_PhNeg, ρ = −.206, p < .01), and total childhood trauma (CTQ_Tot, ρ = −.199, p < .01). The full association table can be found in the Supporting Information file as S2 Table in S1 Data.

Discussion

The main purpose of this study was to test the validity of the MentS self-report questionnaire among francophone control and clinical participants. The large sample size provides robust, well-powered psychometric results. To the best of our knowledge, this is only the second study to include a substantial sample of individuals with borderline personality disorder, making the findings comparable to the original development and validation study of the MentS [36].

The three-factor structure of the MentS scale has been consistently replicated in prior validation studies [41–44,46,48,49]. Among these, three recommended removing items [45–47]. In our study, the decision to recommend excluding item 25, “I can easily describe what I feel”, was empirically driven by confirmatory factor analysis results in both samples. We therefore hypothesize that this item is not interpreted consistently by participants. This observation is consistent with the original MentS study [36], in which item 25 loaded differently in the principal component analyses across samples: more on the “mentalizing others” dimension in the control sample, but more on the “self-mentalizing” dimension in the clinical sample.

Regarding the discriminative power of the scale, our descriptive findings suggest that low scores on the MentS-Self subscale may most clearly differentiate the clinical from the non-clinical sample. This observation aligns with Dimitrijević and colleagues [36], who reported lower self-mentalizing scores in individuals with borderline personality disorder (BPD) compared to controls.

Reliability analyses yielded good overall internal consistency. The total scale and the MentS-Self subscale can be used for both clinical and research purposes in clinical and non-clinical populations. However, caution is warranted when using the MentS-Others and especially the MentS-Motivation subscales for clinical purposes in francophone populations. The Motivation subscale was notably found to be below the acceptable threshold in the clinical sample of the original MentS study [36] and in the clinical and community samples of the Chinese validation study [49]. This underscores the importance of further research into the measurement of the motivation to mentalize.

The validity analyses yielded results that align with our hypotheses. First, higher levels of psychopathology in the control sample and elevated borderline symptomatology in the clinical sample were both associated with lower overall mentalization scores. This finding supports the well-documented relationship between impaired mentalization and psychopathological expression, particularly in borderline personality disorder [85]. Second, a positive association was observed between motivation to mentalize and adaptive emotion regulation, underscoring the role of curiosity and motivation as facilitators of emotional processing and regulation. Indeed, the curiosity-driven stance central to the motivation to mentalize fosters effective emotion regulation [86,87]. Third, a positive relationship between mentalization of others and reflective functioning certainty was identified, consistent with the idea that maintaining a balance between certainty and uncertainty about mental states is critical for effective mentalizing [1]. Finally, the validity analyses revealed significant negative associations between self-mentalizing and childhood trauma in both samples. This finding is in line with a recent meta-analysis reporting significant negative associations between childhood maltreatment and mentalizing capacities [88]. Both the present validity analyses and the meta-analytic findings support the conceptualization of mentalization as a protective factor against the development of psychopathology [89]. In this vein, self-mentalizing has been found to moderate the link between psychopathological manifestations and self-functioning [53,90]. Self-mentalizing was also found to mediate the relationship between childhood adversity and psychosis [91]. Additionally, self-mentalizing was found to be negatively and significantly associated with narcissistic vulnerability [55]. In this context, further investigation into the protective role of self-mentalizing appears warranted.

Furthermore, there is growing recognition of the self–other distinction as critical to accurate mental state attribution [92], which the MentS is particularly well suited to investigate given its dimensional structure distinguishing self- and other-mentalizing. Research highlights the clinical relevance of contrasting dimensions of mentalizing, particularly in conditions such as borderline personality disorder (BPD) [93]. More recently, the self–other distinction was established to be of transdiagnostic relevance [94]. Our findings in clinical populations with BPD and/or ADHD tend to align with these observations. Beyond diagnostic categories, the relevance of the self–other distinction applies acutely to cases of severe abuse, where victims can become hypersensitive to the perpetrator’s emotions, internalizing the perpetrator’s wants and needs as a means of self-protection in abusive and neglectful developmental environments. This phenomenon, referred to as “identification with the aggressor”, recently prompted the development of a dedicated scale [95]. Overall, these strands of research point to the importance of further exploring the interplay between self- and other-mentalizing as a key factor in understanding mental state attribution. This nuanced approach could provide deeper insight into the mechanisms underlying mentalization impairments and guide the refinement of therapeutic strategies targeting specific mentalization dimensions.

The findings of the present study offer preliminary insights for clinicians interested in working with the concept of mentalization. First, the scale can now be employed by francophone professionals, with both clinical and non-clinical populations. Second, although the MentS is not intended for diagnostic use, initial results suggest that it may be useful as a screening tool for reduced global mentalizing capacity and, more particularly, for evaluating self- and other-mentalizing as well as motivation to mentalize. Our results on impairments in self-oriented mentalizing point to the clinical sensitivity of this subscale, although further work should examine both low and very high scores in relation to the expression of clinical symptoms. Finally, no formal cut-off values have been established to date, as further research is required to define clinically meaningful thresholds. However, one preliminary study proposed a total score of 100.5 as a potential benchmark for distinguishing between individuals with schizophrenia and non-clinical populations [49]. From a dimensional point of view, the scale and subscale scores may help clinician and patient formulate strengths and difficulties in mentalizing at the outset of treatment, and later in the intervention to assess potential therapeutic effects. Here, test–retest clinical intervention studies are needed to evaluate whether the MentS constitutes a valuable scale for measuring mechanisms of therapeutic change.

This study should be considered in light of the following limitations. First, the systematic and informed exclusion of professional participants from the control sample was not possible, which constitutes a limitation of the present study. Second, the use of correlated errors in structural equation modeling may have introduced bias or inaccuracies in the model estimates, necessitating caution in interpreting these findings [96]. Third, childhood trauma data relied on a retrospective self-report measure, the CTQ, which is subject to important limitations. Although the CTQ is widely used and psychometrically validated, retrospective reporting of early adversity is inherently vulnerable to various sources of bias, including memory distortions, repression or dissociation related to traumatic content, and underreporting due to stigma, guilt, or shame. Furthermore, the study did not incorporate clinical interviews, collateral reports, or biological markers, which could have enhanced the reliability and ecological validity of trauma assessment. Future research would benefit from combining self-report tools with multimodal assessments, such as clinician-administered interviews or corroborating evidence from health or forensic records, to capture trauma exposure more accurately. Finally, the use of the RFQ has limitations, as recent studies suggest it may in fact measure mentalization as a unidimensional [67] or bidimensional self- and other-mentalizing construct [97], calling for further refinement. Future research should address these limitations and further explore the MentS scale’s clinical applications. Specifically, self-mentalizing and the self–other distinction have emerged as critical factors in understanding mentalization impairments, particularly in the context of trauma. Future studies could investigate the discriminative clinical power of the MentS and examine the role of MentS-Self scores across diagnostic conditions.

In conclusion, the francophone validation of the MentS demonstrates strong psychometric properties, supporting its use in both clinical and non-clinical populations. It is recommended to exclude item 25 from score interpretation when using the francophone version, as its meaning may vary across populations. By distinguishing mentalization dimensions, the scale facilitates a nuanced understanding of mentalization impairments. Notably, impairments in self-mentalizing emerge as a key marker of psychopathological risk, particularly in trauma-affected clinical populations. This highlights the critical role of the self-other distinction, where reduced self-mentalizing, coupled with heightened focus on others, may blur identity boundaries and negatively affect the embodied sense of self, underscoring its relevance for clinical applications. These findings collectively validate the utility of the MentS scale for capturing key dimensions of mentalization and highlight its potential for furthering our understanding of psychopathology and therapeutic interventions.

Supporting information

S1 Data

S1 Table. Full Table 4 correlations in the control sample (Spearman’s rho). S2 Table. Full Table 5 correlations in the clinical sample (Spearman’s rho). S3 File. French translation of the Mentalization Scale (MentS). S4 File. List of researchers who contributed to this work as part of the RF-TBM Consortium.

(DOCX)

pone.0332724.s001.docx (37.3KB, docx)

Acknowledgments

The PI (Martin Debbané) was funded by the Swiss National Science Foundation (Grant No. 100014_179033). The funders had no role in study design and data collection, analyses, or interpretation.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

The PI (Martin Debbané) was funded by the Swiss National Science Foundation (Grant No. 100014_179033). The funders had no role in study design and data collection, analysis, or interpretation.

References

  • 1. Bateman A, Fonagy P. Handbook of Mentalizing in Mental Health Practice. American Psychiatric Pub; 2012.
  • 2. Fonagy P, Gergely G, Jurist EL, Target M. Affect regulation, mentalization, and the development of the self. New York, NY: Other Press; 2002.
  • 3. Luyten P, Campbell C, Allison E, Fonagy P. The Mentalizing Approach to Psychopathology: State of the Art and Future Directions. Annu Rev Clin Psychol. 2020;16:297–325. doi: 10.1146/annurev-clinpsy-071919-015355
  • 4. Luyten P, Campbell C, Moser M, Fonagy P. The role of mentalizing in psychological interventions in adults: Systematic review and recommendations for future research. Clin Psychol Rev. 2024;108:102380. doi: 10.1016/j.cpr.2024.102380
  • 5. Fonagy P, Bateman A. The development of borderline personality disorder – a mentalizing model. J Pers Disord. 2008;22(1):4–21. doi: 10.1521/pedi.2008.22.1.4
  • 6. Perroud N, Badoud D, Weibel S, Nicastro R, Hasler R, Küng A-L, et al. Mentalization in adults with attention deficit hyperactivity disorder: Comparison with controls and patients with borderline personality disorder. Psychiatry Res. 2017;256:334–41. doi: 10.1016/j.psychres.2017.06.087
  • 7. Debbané M, Salaminios G, Luyten P, Badoud D, Armando M, Solida Tozzi A. Attachment, neurobiology, and mentalizing along the psychosis continuum. Frontiers in Human Neuroscience. 2016;10.
  • 8. Stevens JS, Jovanovic T. Role of social cognition in post-traumatic stress disorder: A review and meta-analysis. Genes Brain Behav. 2019;18(1):e12518. doi: 10.1111/gbb.12518
  • 9. Bateman A, Fonagy P. Mentalization based treatment for borderline personality disorder. World Psychiatry. 2010;9(1):11–5. doi: 10.1002/j.2051-5545.2010.tb00255.x
  • 10. Fonagy P, Simes E, Yirmiya K, Wason J, Barrett B, Frater A, et al. Mentalisation-based treatment for antisocial personality disorder in males convicted of an offence on community probation in England and Wales (Mentalization for Offending Adult Males, MOAM): a multicentre, assessor-blinded, randomised controlled trial. Lancet Psychiatry. 2025;12(3):208–19. doi: 10.1016/S2215-0366(24)00445-0
  • 11. Choi-Kain LW, Simonsen S, Euler S. A Mentalizing Approach for Narcissistic Personality Disorder: Moving From “Me-Mode” to “We-Mode”. Am J Psychother. 2022;75(1):38–43. doi: 10.1176/appi.psychotherapy.20210017
  • 12. Morando S, Robinson P, Skårderud F, Sommerfeldt B. Mentalization Based Therapy for Eating Disorders. In: Robinson P, Wade T, Herpertz-Dahlmann B, Fernandez-Aranda F, Treasure J, Wonderlich S, editors. Eating Disorders: An International Comprehensive View. Cham: Springer International Publishing; 2023. p. 1–24.
  • 13. Suchman NE. Mothering from the Inside Out: A mentalization-based therapy for mothers in treatment for drug addiction. Int J Birth Parent Educ. 2016;3(4):19–24.
  • 14. Philips B, Wennberg P, Konradsson P, Franck J. Mentalization-Based Treatment for Concurrent Borderline Personality Disorder and Substance Use Disorder: A Randomized Controlled Feasibility Study. Eur Addict Res. 2018;24(1):1–8. doi: 10.1159/000485564
  • 15. Weijers J, Ten Kate C, Viechtbauer W, Rampaart LJA, Eurelings EHM, Selten JP. Mentalization-based treatment for psychotic disorder: a rater-blinded, multi-center, randomized controlled trial. Psychol Med. 2021;51(16):2846–55. doi: 10.1017/S0033291720001506
  • 16. Debbané M, Benmiloud J, Salaminios G, Solida-Tozzi A, Armando M, Fonagy P, et al. Mentalization-Based Treatment in Clinical High-Risk for Psychosis: A Rationale and Clinical Illustration. J Contemp Psychother. 2016;46(4):217–25. doi: 10.1007/s10879-016-9337-4
  • 17. Smits ML, de Vos J, Rüfenacht E, Nijssens L, Shaverin L, Nolte T. Breaking the cycle with trauma-focused mentalization-based treatment: theory and practice of a trauma-focused group intervention. Frontiers in Psychology. 2024;15.
  • 18. Rüfenacht E, Shaverin L, Stubley J, Smits ML, Bateman A, Fonagy P, et al. Addressing dissociation symptoms with trauma-focused mentalization-based treatment. Psychoanalytic Psychotherapy. 2023;37(4):467–91. doi: 10.1080/02668734.2023.2272765
  • 19. Malcorps S, Fonagy P, Ensink K. Handbook of mentalizing in mental health practice. In: Luyten P, editor. Handbook of mentalizing in mental health practice. American Psychiatric Association Publishing; 2019. p. 37–62.
  • 20. George C, Main M, Kaplan N. Adult Attachment Interview (AAI). 1985.
  • 21. Hesse E. The adult attachment interview: Protocol, method of analysis, and empirical studies. In: Handbook of attachment: Theory, research, and clinical applications. 2nd ed. New York, NY: The Guilford Press; 2008. p. 552–98.
  • 22. Fonagy P, Target M, Steele H, Steele M. Reflective-functioning manual version 5 for application to adult attachment interviews. 1998.
  • 23. Choi-Kain LW, Gunderson JG. Mentalization: ontogeny, assessment, and application in the treatment of borderline personality disorder. Am J Psychiatry. 2008;165(9):1127–35. doi: 10.1176/appi.ajp.2008.07081360
  • 24. Katznelson H. Reflective functioning: a review. Clin Psychol Rev. 2014;34(2):107–17. doi: 10.1016/j.cpr.2013.12.003
  • 25. Taubner S, Hörz S, Fischer-Kern M, Doering S, Buchheim A, Zimmermann J. Internal structure of the Reflective Functioning Scale. Psychol Assess. 2013;25(1):127–35. doi: 10.1037/a0029138
  • 26. Fonagy P, Luyten P. A developmental, mentalization-based approach to the understanding and treatment of borderline personality disorder. Dev Psychopathol. 2009;21(4):1355–81. doi: 10.1017/S0954579409990198
  • 27. Fonagy P, Luyten P, Moulton-Perkins A, Lee Y-W, Warren F, Howard S, et al. Development and Validation of a Self-Report Measure of Mentalizing: The Reflective Functioning Questionnaire. PLoS One. 2016;11(7):e0158678. doi: 10.1371/journal.pone.0158678
  • 28. Badoud D, Luyten P, Fonseca-Pedrero E, Eliez S, Fonagy P, Debbané M. The French Version of the Reflective Functioning Questionnaire: Validity Data for Adolescents and Adults and Its Association with Non-Suicidal Self-Injury. PLoS One. 2015;10(12):e0145892. doi: 10.1371/journal.pone.0145892
  • 29. Hausberg MC, Schulz H, Piegler T, Happach CG, Klöpper M, Brütt AL, et al. Is a self-rated instrument appropriate to assess mentalization in patients with mental disorders? Development and first validation of the mentalization questionnaire (MZQ). Psychother Res. 2012;22(6):699–709. doi: 10.1080/10503307.2012.709325
  • 30. Greenberg DM, Kolasi J, Hegsted CP, Berkowitz Y, Jurist EL. Mentalized affectivity: A new model and assessment of emotion regulation. PLoS One. 2017;12(10):e0185264. doi: 10.1371/journal.pone.0185264
  • 31. Müller S, Wendt LP, Zimmermann J. Development and Validation of the Certainty About Mental States Questionnaire (CAMSQ): A Self-Report Measure of Mentalizing Oneself and Others. Assessment. 2023;30(3):651–74. doi: 10.1177/10731911211061280
  • 32. Gori A, Topino E. The Multidimensional Mentalizing Questionnaire (MMQ). 2023.
  • 33. Kasper LA, Hauschild S, Berning A, Holl J, Taubner S. Development and validation of the Mentalizing Emotions Questionnaire: A self-report measure for mentalizing emotions of the self and other. PLoS One. 2024;19(5):e0300984. doi: 10.1371/journal.pone.0300984
  • 34. Berthelot N, Savard C, Lemieux R, Garon-Bissonnette J, Ensink K, Godbout N. Development and validation of a self-report measure assessing failures in the mentalization of trauma and adverse relationships. Child Abuse Negl. 2022;128:105017. doi: 10.1016/j.chiabu.2021.105017
  • 35. Wu H, Fung BJ, Mobbs D. Mentalizing during social interaction: The development and validation of the interactive mentalizing questionnaire. Frontiers in Psychology. 2022;12.
  • 36. Dimitrijević A, Hanak N, Altaras Dimitrijević A, Jolić Marjanović Z. The Mentalization Scale (MentS): A Self-Report Measure for the Assessment of Mentalizing Capacity. J Pers Assess. 2018;100(3):268–80. doi: 10.1080/00223891.2017.1310730
  • 37. Holmes J. Mentalizing from a psychoanalytic perspective: What’s new? 2008. p. 31–49.
  • 38. Fonagy P, Target M. Attachment, trauma, and psychoanalysis: Where psychoanalysis meets neuroscience. In: Mind to mind: Infant research, neuroscience, and psychoanalysis. New York, NY: Other Press; 2008. p. 15–49.
  • 39. Luyten P, Fonagy P, Lowyck B, Vermote R. Assessment of mentalization. In: Handbook of mentalizing in mental health practice. American Psychiatric Publishing; 2012. p. 43–65.
  • 40. Fonagy P, Target M, Steele H, Steele M. Reflective-functioning manual version 5 for application to adult attachment interviews. 1998.
  • 41. Asgarizadeh A, Vahidi E, Seyed Mousavi PS, Bagherzanjani A, Ghanbari S. Mentalization Scale (MentS): Validity and reliability of the Iranian version in a sample of nonclinical adults. Brain Behav. 2023;13(8):e3114. doi: 10.1002/brb3.3114
  • 42. Ahmadian Z, Ghamarani A. Reliability and validity of Persian version of mentalization scale in university students. Journal of Fundamentals of Mental Health. 2021;23(4):233–40.
  • 43. Jańczak MO. Polish adaptation and validation of the Mentalization Scale (MentS) - a self-report measure of mentalizing. Psychiatr Pol. 2021;55(6):1257–74. doi: 10.12740/PP/125383
  • 44. Cosenza M, Pizzini B, Sacco M, D’Olimpio F, Troncone A, Ciccarelli M, et al. Italian validation of the mentalization scale (MentS). Curr Psychol. 2024;43(29):24205–15. doi: 10.1007/s12144-024-06071-9
  • 45. Su-lim L, Mun-hee L. A Validation Study of the Korean Version of The Mentalization Scale. Korea Journal of Counseling. 2018;19(5):117–35. doi: 10.15703/KJC.19.5.201810.117
  • 46. Törenli Kaya Z, Alpay EH, Türkkal Yenigüç Ş, Özçürümez Bilgili G. Validity and Reliability of the Turkish Version of the Mentalization Scale (MentS). Turk Psikiyatri Derg. 2023;34(2):118–24. doi: 10.5080/u25692
  • 47. Matsuba Y, Lee SK, Haraguchi M, Iwasaki M, Ohtsuki T, Katsuragawa T. The development of the Japanese version of the mentalization scale and the examination of its reliability and validity. The Japanese Journal of Developmental Psychology. 2022;33(3):137–45.
  • 48. Richter F, Steinmair D, Löffler-Stastka H. Construct Validity of the Mentalization Scale (MentS) Within a Mixed Psychiatric Sample. Frontiers in Psychology. 2021;12.
  • 49. Wen Y, Fang W, Wang Y, Du J, Dong Y, Zu X, et al. Reliability and validity of the Chinese version of the Mentalization Scale in the general population and patients with schizophrenia: A multicenter study in China. Curr Psychol. 2022;42(35):30747–56. doi: 10.1007/s12144-022-04093-9
  • 50. Fonagy P, Luyten P. A Multilevel Perspective on the Development of Borderline Personality Disorder. In: Developmental Psychopathology. Wiley; 2016. p. 1–67. doi: 10.1002/9781119125556.devpsy317
  • 51. Fonagy P, Luyten P. Conduct problems in youth and the RDoC approach: A developmental, evolutionary-based view. Clin Psychol Rev. 2018;64:57–76. doi: 10.1016/j.cpr.2017.08.010
  • 52. Morosan L, Ghisletta P, Badoud D, Toffel E, Eliez S, Debbané M. Longitudinal Relationships Between Reflective Functioning, Empathy, and Externalizing Behaviors During Adolescence and Young Adulthood. Child Psychiatry & Human Development. 2020;51(1):59–70.
  • 53. Ballespí S, Vives J, Sharp C, Chanes L, Barrantes-Vidal N. Self and Other Mentalizing Polarities and Dimensions of Mental Health: Association With Types of Symptoms, Functioning and Well-Being. Frontiers in Psychology. 2021;12.
  • 54. Ballespí S, Nonweiler J, Sharp C, Vives J, Barrantes-Vidal N. Self- but not other-mentalizing moderates the association between BPD symptoms and somatic complaints in community-dwelling adolescents. Psychol Psychother. 2022;95(4):905–20. doi: 10.1111/papt.12409
  • 55. Blay M, Bouteloup M, Duarte M, Hasler R, Pham E, Nicastro R, et al. Association between pathological narcissism and emotion regulation: The role of self-mentalizing? Personal Ment Health. 2024;18(3):227–37. doi: 10.1002/pmh.1613
  • 56. Belvederi Murri M, Ferrigno G, Penati S, Muzio C, Piccinini G, Innamorati M, et al. Mentalization and depressive symptoms in a clinical sample of adolescents and young adults. Child Adolesc Ment Health. 2017;22(2):69–76. doi: 10.1111/camh.12195
  • 57. Berthelot N, Ensink K, Bernazzani O, Normandin L, Luyten P, Fonagy P. Intergenerational transmission of attachment in abused and neglected mothers: the role of trauma-specific reflective functioning. Infant Ment Health J. 2015;36(2):200–12. doi: 10.1002/imhj.21499
  • 58.Ensink K, Bégin M, Normandin L, Godbout N, Fonagy P. Mentalization and dissociation in the context of trauma: Implications for child psychopathology. J Trauma Dissociation. 2017;18(1):11–30. doi: 10.1080/15299732.2016.1172536 [DOI] [PubMed] [Google Scholar]
  • 59.Tanzer M, Salaminios G, Morosan L, Campbell C, Debbané M. Self-Blame Mediates the Link between Childhood Neglect Experiences and Internalizing Symptoms in Low-Risk Adolescents. J Child Adolesc Trauma. 2021;14(1):73–83. doi: 10.1007/s40653-020-00307-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Meade AW, Craig SB. Identifying careless responses in survey data. Psychol Methods. 2012;17(3):437–55. doi: 10.1037/a0028085 [DOI] [PubMed] [Google Scholar]
  • 61.Morin G, Meilleur D. Traduction francophone du Mentalization Scale. Université de Montréal. 2018. [Google Scholar]
  • 62.Derogatis LR, Unger R. Symptom Checklist-90-Revised. The Corsini Encyclopedia of Psychology. 2010. 1.–2. [Google Scholar]
  • 63.Bohus M, Kleindienst N, Limberger MF, Stieglitz RD, Domsalla M, Chapman AL. The Short Version of the Borderline Symptom List (BSL-23): Development and Initial Data on Psychometric Properties. Psychopathology. 2008;42(1):32–9. [DOI] [PubMed] [Google Scholar]
  • 64.Nicastro R, Prada P, Kung A-L, Salamin V, Dayer A, Aubry J-M, et al. Psychometric properties of the French borderline symptom list, short form (BSL-23). Borderline Personal Disord Emot Dysregul. 2016;3:4. doi: 10.1186/s40479-016-0038-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Garnefski N, Kraaij V, Spinhoven P. Negative life events, cognitive emotion regulation and emotional problems. Personality and Individual Differences. 2001;30(8):1311–27. doi: 10.1016/s0191-8869(00)00113-6 [DOI] [Google Scholar]
  • 66.Jermann F, Van der Linden M, d’Acremont M, Zermatten A. Cognitive Emotion Regulation Questionnaire (CERQ): Confirmatory factor analysis and psychometric properties of the French translation. European Journal of Psychological Assessment. 2006;22(2):126–31. [Google Scholar]
  • 67.Müller S, Wendt LP, Spitzer C, Masuhr O, Back SN, Zimmermann J. A Critical Evaluation of the Reflective Functioning Questionnaire (RFQ). J Pers Assess. 2022;104(5):613–27. doi: 10.1080/00223891.2021.1981346 [DOI] [PubMed] [Google Scholar]
  • 68.Bernstein DP, Stein JA, Newcomb MD, Walker E, Pogge D, Ahluvalia T, et al. Development and validation of a brief screening version of the Childhood Trauma Questionnaire. Child Abuse Negl. 2003;27(2):169–90. doi: 10.1016/s0145-2134(02)00541-0 [DOI] [PubMed] [Google Scholar]
  • 69.Paquette D, Laporte L, Bigras M, Zoccolillo M. Validation de la version française du CTQ et prévalence de l’histoire de maltraitance. Santé mentale au Québec. 2004;29(1):201–20. [DOI] [PubMed] [Google Scholar]
  • 70.Cohen S, Salamin V, Perroud N, Dieben K, Ducasse D, Durpoix A, et al. Group intervention for family members of people with borderline personality disorder based on Dialectical Behavior Therapy: Implementation of the Family Connections® program in France and Switzerland. Borderline Personal Disord Emot Dysregul. 2024;11(1):16. doi: 10.1186/s40479-024-00254-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Kim JH. Multicollinearity and misleading statistical results. Korean J Anesthesiol. 2019;72(6):558–69. doi: 10.4097/kja.19087 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72.Rosseel Y. lavaan: AnRPackage for Structural Equation Modeling. J Stat Soft. 2012;48(2). doi: 10.18637/jss.v048.i02 [DOI] [Google Scholar]
  • 73.Campbell C, Tanzer M, Saunders R, Booker T, Allison E, Li E, et al. Development and validation of a self-report measure of epistemic trust. PLoS One. 2021;16(4):e0250264. doi: 10.1371/journal.pone.0250264 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.MacCallum RC. The Need for Alternative Measures of Fit in Covariance Structure Modeling. Multivariate Behav Res. 1990;25(2):157–62. doi: 10.1207/s15327906mbr2502_2 [DOI] [PubMed] [Google Scholar]
  • 75.Marsh HW, Balla JR, McDonald RP. Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin. 1988;103(3):391–410. doi: 10.1037/0033-2909.103.3.391 [DOI] [Google Scholar]
  • 76.Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal. 1999;6(1):1–55. doi: 10.1080/10705519909540118 [DOI] [Google Scholar]
  • 77.MacCallum RC, Browne MW, Sugawara HM. Power analysis and determination of sample size for covariance structure modeling. Psychological Methods. 1996;1(2):130–49. doi: 10.1037/1082-989x.1.2.130 [DOI] [Google Scholar]
  • 78.George D, Mallery P. IBM SPSS statistics 29 step by step: A simple guide and reference. Routledge. 2024. [Google Scholar]
  • 79.Mohd Razali N, Yap B. Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. J Stat Model Analytics. 2011;2. [Google Scholar]
  • 80.Nunnally JC. Psychometric Theory 3E. Tata McGraw-Hill Education. 1994. [Google Scholar]
  • 81.DeVellis RF. Scale Development: Theory and Applications. SAGE Publications. 2016. [Google Scholar]
  • 82.McDonald RP. Test theory: A unified treatment. L. Erlbaum Associates. 1999. [Google Scholar]
  • 83.Cheung GW, Rensvold RB. Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance. Structural Equation Modeling: A Multidisciplinary Journal. 2002;9(2):233–55. doi: 10.1207/s15328007sem0902_5 [DOI] [Google Scholar]
  • 84.Chen FF. Sensitivity of Goodness of Fit Indexes to Lack of Measurement Invariance. Structural Equation Modeling: A Multidisciplinary Journal. 2007;14(3):464–504. doi: 10.1080/10705510701301834 [DOI] [Google Scholar]
  • 85.Fonagy P, Bateman AW. Adversity, attachment, and mentalizing. Compr Psychiatry. 2016;64:59–66. doi: 10.1016/j.comppsych.2015.11.006 [DOI] [PubMed] [Google Scholar]
  • 86.Bateman A, Fonagy P. Mentalization-Based Treatment for Personality Disorders: A Practical Guide. Oxford University Press. 2016. [Google Scholar]
  • 87.Luyten P, Mayes LC, Nijssens L, Fonagy P. The parental reflective functioning questionnaire: Development and preliminary validation. PLoS One. 2017;12(5):e0176218. doi: 10.1371/journal.pone.0176218 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 88.Yang L, Huang M. Childhood maltreatment and mentalizing capacity: A meta-analysis. Child Abuse Negl. 2024;149:106623. doi: 10.1016/j.chiabu.2023.106623 [DOI] [PubMed] [Google Scholar]
  • 89.Ballespí S, Vives J, Debbané M, Sharp C, Barrantes-Vidal N. Beyond diagnosis: Mentalization and mental health from a transdiagnostic point of view in adolescents from non-clinical population. Psychiatry Res. 2018;270:755–63. doi: 10.1016/j.psychres.2018.10.048 [DOI] [PubMed] [Google Scholar]
  • 90.Ballespí S, Vives J, Nonweiler J, Perez-Domingo A, Barrantes-Vidal N. Self- but Not Other-Dimensions of Mentalizing Moderate the Impairment Associated With Social Anxiety in Adolescents From the General Population. Frontiers in Psychology. 2021;12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91.Nonweiler J, Torrecilla P, Kwapil TR, Ballespí S, Barrantes-Vidal N. I don’t understand how I feel: mediating role of impaired self-mentalizing in the relationship between childhood adversity and psychosis spectrum experiences. Front Psychiatry. 2023;14:1268247. doi: 10.3389/fpsyt.2023.1268247 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92.Quesque F, Apperly I, Baillargeon R, Baron-Cohen S, Becchio C, Bekkering H, et al. Defining key concepts for mental state attribution. Commun Psychol. 2024;2(1):29. doi: 10.1038/s44271-024-00077-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93.De Meulemeester C, Lowyck B, Luyten P. The role of impairments in self-other distinction in borderline personality disorder: A narrative review of recent evidence. Neurosci Biobehav Rev. 2021;127:242–54. doi: 10.1016/j.neubiorev.2021.04.022 [DOI] [PubMed] [Google Scholar]
  • 94.Eddy CM. The transdiagnostic relevance of self-other distinction to psychiatry spans emotional, cognitive and motor domains. Frontiers in Psychiatry. 2022;13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95.Lahav Y, Talmon A, Ginzburg K. Knowing the abuser inside and out: The development and psychometric evaluation of the identification with the aggressor scale. Journal of Interpersonal Violence. 2021;36(19–20):9725–48. [DOI] [PubMed] [Google Scholar]
  • 96.Hermida R. The problem of allowing correlated errors in structural equation modeling: concerns and considerations. Computational Methods in Social Sciences. 2015;3:1–17. [Google Scholar]
  • 97.Rogoff S, Moulton-Perkins A, Warren F, Nolte T, Fonagy P. “Rich” and “poor” in mentalizing: Do expert mentalizers exist?. PLoS One. 2021;16(10):e0259030. doi: 10.1371/journal.pone.0259030 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Marco Innamorati

3 May 2025

PONE-D-25-06596
Validation of Mentalization Scale (MENT-S) in francophone control and clinical samples
PLOS ONE

Dear Dr. Descartes,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 17 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Marco Innamorati

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have indicated that there are restrictions to data sharing for this study. For studies involving human research participant data or other sensitive data, we encourage authors to share de-identified or anonymized data. However, when data cannot be publicly shared for ethical reasons, we allow authors to make their data sets available upon request. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

Before we proceed with your manuscript, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., a Research Ethics Committee or Institutional Review Board, etc.). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories. You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible.

Please update your Data Availability statement in the submission form accordingly.

3. We note that your Data Availability Statement is currently as follows: All relevant data are within the manuscript and its Supporting Information files.

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

4. Please amend your list of authors on the manuscript to ensure that each author is linked to an affiliation. Authors’ affiliations should reflect the institution where the work was done (if authors moved subsequently, you can also list the new affiliation stating “current affiliation:….” as necessary).

5. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 1 in your text; if accepted, production will need this reference to link the reader to the Table.

6. We notice that your supplementary tables 5, and 6 are included in the manuscript file. Please remove them and upload them with the file type 'Supporting Information'. Please ensure that each Supporting Information file has a legend listed in the manuscript after the references list.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This study evaluated the psychometric properties of the French version of the Mentalization Scale (MentS) in both community and clinical samples. A total of 711 participants, including individuals with borderline personality disorder (BPD), ADHD, and co-occurring BPD and ADHD, completed the scale. Confirmatory factor analysis supported a 27-item, three-factor structure (mentalizing self, mentalizing others, and motivation to mentalize) as optimal for both groups. According to the Authors, the French version of the MentS is suitable for research and clinical use.

The topic of the paper is both timely and interesting. However, there are several important points that require attention. I recommend a thorough and very careful revision of the manuscript to address these issues and strengthen the overall quality of the work.

In the Introduction, the Authors present the RFS scale in a somewhat cursory manner. It would have been beneficial to provide a more comprehensive overview of the RFS, including discussion of related instruments such as the RFQ-8. Notably, the authors do not address several critical issues—frequently highlighted in the literature—regarding the challenges of measuring mentalization with the RFS. As a result, the transition to the MentS scale feels abrupt and insufficiently justified. Furthermore, it is important to acknowledge that, both before and after the development of the MentS, other instruments for assessing mentalization have been introduced. Including references to these alternative measures would have strengthened the context and provided a more balanced perspective.

In the Introduction (lines 80–92), the authors assert that Fonagy and Luyten have addressed and resolved the methodological limitations outlined earlier (lines 77–79). They also cite encouraging data supporting the francophone version of the RFQ. However, the subsequent transition to introducing the Mentalization Scale (MentS) feels abrupt and insufficiently justified. While the MentS is indeed a valuable tool for multidimensional mentalization assessment, the authors do not adequately articulate its comparative advantages over existing measures. A more thorough discussion is needed to clarify why the MentS was selected over alternative instruments, particularly in light of its strengths (e.g., multidimensional structure) and potential limitations. Highlighting how the MentS addresses gaps left by other tools would provide a stronger rationale for its use and enhance the coherence of this section.

In my view, the description of the MentS in the manuscript is insufficient and does not offer readers a clear understanding of the instrument. The structure and dimensions of the MentS are not adequately defined, while greater emphasis is placed on the interpretation of scores. Providing a more detailed explanation of the scale’s underlying structure and its specific dimensions would significantly enhance the reader’s comprehension and better contextualize the meaning and relevance of the reported scores.

I found the section of the Introduction addressing convergent validity (lines 128 to 134) to be somewhat confusing, possibly due to an oversimplification of the authors’ argument. The formulation of multiple hypotheses in the latter part of the introduction is particularly perplexing, as several of these hypotheses, especially the one regarding a negative correlation between the MentS-S scale and self-reported childhood trauma, are not logically substantiated within the text. Additionally, I am concerned by the decision to assess childhood trauma using a self-report measure, as this approach raises questions about the

A seemingly minor point concerns the way references are cited within the text. While I understand that the authors may have chosen this citation format to simplify the preparation of the reference list, it does not conform to any recognized citation styles typically used in scientific writing. Adhering to a standard citation style is important for clarity and consistency throughout the manuscript.

Method

In describing the sample, the authors state that they recruited participants for the control group via the Prolific website. Based on my understanding, Prolific is a platform that compensates research participants for their time and contributions. While this approach facilitates rapid and diverse recruitment, it does raise the question of whether such a method increases the likelihood of enrolling so-called “professional participants”—individuals who frequently participate in online studies for compensation. It is still possible for samples to include individuals with substantial prior experience in online research, which may influence their responses or introduce certain biases. While Prolific offers valuable tools to mitigate the risk of recruiting predominantly “professional participants,” researchers should remain mindful of this potential limitation and consider reporting the level of participant experience in their sample description.

Including a brief description of the MentS and reporting Cronbach's alpha values at the end of the Participants paragraph is unusual and disrupts the logical flow of the manuscript. The Participants section should focus on describing the sample's characteristics and recruitment methods, as well as any relevant demographic information. Details about the instruments used—including their structure, dimensions, and psychometric properties such as reliability (e.g., Cronbach's alpha)—are more appropriately placed in the Materials or Measures section.

In summary, the inclusion of the MentS description and its Cronbach's alpha values in the Participants paragraph is misplaced. This information would be more appropriately presented in the section dedicated to describing the study's materials or measures, where readers expect to find details about the instruments and their psychometric properties.

Additional measures

The section describing additional measures administered alongside the MentS lacks clarity regarding the rationale for selecting these instruments. It is important for the authors to explain why each measure was included and how it relates to the study’s objectives or hypotheses. Without this context, readers may find it difficult to understand the relevance and contribution of these additional assessments.

Furthermore, the information about the translation into French, which appears at the end of this paragraph, seems out of place. Details about translation procedures are typically presented in a dedicated section on instrument adaptation or within the description of the specific measure being translated. Placing this information in the "Additional Measures" section disrupts the logical flow and organization of the manuscript.

To improve clarity and coherence, the authors should clearly justify the inclusion of each additional measure and relocate the translation details to a more appropriate section of the manuscript.

As a minor point, it should be noted that the authors used the RFQ-8, the short form of the Reflective Functioning Questionnaire, rather than the longer version of the instrument.

Statistical analysis

Based on the results of the confirmatory factor analysis (CFA) conducted on both the control and clinical samples, the authors decided to eliminate item 25 from their version of the MentS. After removing item 25, the authors used this abbreviated version of the MentS for subsequent reliability analyses. Then they computed the associations between the measures of interest in both samples. The lack of clarity regarding the MentS scoring system in the manuscript creates significant ambiguity, particularly in interpreting the meaning of positive or negative values.

The fact that the authors used the Jamovi software for statistical analyses should have been stated at the beginning of the paragraph, not at the end of the section dedicated to the CFA.
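For readers following the reliability discussion above, internal consistency of the MentS subscales is reported as Cronbach's alpha. A minimal, illustrative sketch of the coefficient in Python — this is not the authors' actual Jamovi/lavaan pipeline, just the standard formula applied to an items matrix:

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```

For example, a set of perfectly redundant items yields alpha = 1, while weakly related items pull the coefficient toward 0.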

Discussion

In this section of the manuscript, the authors' extensive focus on findings derived from the Childhood Trauma Questionnaire (CTQ) short form—a 28-item self-report measure—warrants further scrutiny. Although the CTQ is a validated instrument, its brevity and reliance on retrospective self-reporting raise concerns about its ability to comprehensively assess complex and multifaceted experiences of childhood maltreatment. The CTQ evaluates five subscales (physical, emotional, and sexual abuse, along with emotional and physical neglect), yet such a condensed format may lack the depth needed to capture the nuances of traumatic experiences, including contextual factors, chronicity, and subjective impact. Given these limitations, the heavy emphasis on CTQ-based results in the discussion may inadvertently oversimplify the interpretation of childhood trauma's role in the study's outcomes.

It is worth noting that self-report measures are vulnerable to recall bias, social desirability bias, and underreporting, especially for stigmatized experiences like abuse. Participants may consciously or unconsciously minimize or deny traumatic events. The CTQ quantifies maltreatment severity but provides no qualitative insights into the lived experience of trauma. Critical factors such as developmental timing, relational dynamics, and coping mechanisms remain unaddressed.

Finally, the CTQ’s subscales (physical/emotional/sexual abuse, physical/emotional neglect) show high intercorrelations, making it difficult to isolate specific trauma types. This overlap complicates interpretations of how distinct maltreatment experiences relate to outcomes like mentalization deficits.

The brief mention of the study’s limitations in the final section of the discussion is inadequate. A thorough and transparent discussion of limitations is essential for contextualizing the findings, acknowledging potential biases, and guiding future research. Simply referencing these issues without elaboration does not provide readers with a clear understanding of how the study’s design, measures, or sample characteristics may have influenced the results. Expanding this section to address specific methodological constraints, such as the reliance on self-report measures, sample representativeness, or the generalizability of the findings, would strengthen the manuscript and enhance its scientific rigor.

Reviewer #2: Although the authors justify model refinements (e.g., residual correlations and removal of item 25), a brief expanded discussion on clinical interpretation and usability of the shortened 27-item scale would benefit readers.

Implications for practitioners using this tool in diagnostic or therapeutic settings could be elaborated—especially given the growing interest in mentalization in clinical psychology.

A few language polishing points (minor grammar or flow) may help improve readability, though these are not major.

Reviewer #3: The present study explores the psychometric properties of the francophone translation of the MentS, performing a confirmatory factor analysis and evaluating test-retest reliability.

The study is sound. However, I would like to draw your attention to some areas for improvement.

First of all, the acronym MENT-S appears in the title, while MentS is used throughout the manuscript. I recommend choosing one version and using it consistently.

In general, I suggest reviewing spelling according to a consistent language style (British or American English, e.g., behavior / behaviour).

I also recommend that the authors revise the writing and the English language throughout the manuscript, as some sentences are difficult to understand or seem incomplete, and there are a few mistakes. Below are some examples:

- Line 22 ("it's" should be "its"?)

- Lines 33–34 (revise sentence structure)

- Line 48 ("may more likely to report increased")

- Line 58 ("The centrality of mentalizing in human" → "mentalization")

- Lines 150–151 (combine the two sentences)

- Lines 161–162 (combine the two sentences)

- Line 165 ("sample composed of" → "sample was composed of")

- Line 252 ("analysis were" → "analysis was")

- Lines 308–309 (split the sentence using a full stop → ". Results ...")

- Line 352 (some values are reported as r=xxx, others as r = xxx. Decide on spacing and ensure consistency)

I also recommend avoiding the use of asterisks in the text to indicate p-values (it is acceptable in tables, but in the text it is always preferable to report values fully, e.g., p < 0.001).

Furthermore, in table- section "note", when acronyms are explained, the initial letters of the words should be capitalized (e.g., ADHD = Attention Deficit Hyperactivity Disorder; RMSEA = Root Mean Square Error of Approximation).

Abstract: I suggest reporting the test-retest reliability values in parentheses and naming at least the constructs that were assessed when referring to "additional measures" (line 40).

Methods:

- Lines 173–175 would be more appropriate in the Results section.

- I would move the following line (inclusion criteria at the end of line 169).

- Before describing the various measurement instruments, insert a subheading titled Measures.

- The section on missing data should be placed under Data Analysis and should specify whether any missing data were present. Also, provide a reference for the rule “If less than 30% of item values were missing”.

- Line 291–292: specify why the non-parametric Spearman coefficient was used (were the variables not normally distributed?). Additionally, there is a lack of detail regarding outliers, multicollinearity, heteroscedasticity, etc.
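As an illustration of the kind of rule queried above, a common implementation of an "impute if less than 30% of items are missing" criterion is person-mean imputation. The sketch below is a generic Python example under that assumption; the item names and data are hypothetical and not taken from the study:

```python
import numpy as np
import pandas as pd

def score_subscale(items: pd.DataFrame, max_missing: float = 0.30) -> pd.Series:
    """Person-mean imputation: score a subscale only for respondents with
    fewer than `max_missing` of its items missing; otherwise return NaN."""
    n_items = items.shape[1]
    missing_frac = items.isna().sum(axis=1) / n_items
    # Mean of the available items, rescaled to the full item count
    total = items.mean(axis=1, skipna=True) * n_items
    return total.where(missing_frac < max_missing)

# Hypothetical 4-item subscale for three respondents
df = pd.DataFrame({
    "i1": [3, 4, np.nan],
    "i2": [2, np.nan, np.nan],
    "i3": [4, 5, 2],
    "i4": [3, 4, np.nan],
})
scores = score_subscale(df)
# Respondent 1 has 25% missing (scored); respondent 2 has 75% missing (NaN)
```

Whatever cut-off is used, reporting it with a supporting reference, as the reviewer requests, lets readers judge how much of each total score rests on imputed values.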

While the general approach to the statistical analyses is sound and a comprehensive set of fit indices was reported, several statistical concerns and methodological considerations should be addressed to enhance the rigor and interpretability of the results.

- The use of ML estimation may not be optimal given that questionnaire data are typically ordinal in nature. ML assumes multivariate normality and continuous data, which is often violated in Likert-type scales. A more appropriate estimation method would be Weighted Least Squares Mean and Variance adjusted (WLSMV), which is robust for ordinal variables and commonly recommended in such contexts.

- The final model incorporates 24 correlated residuals, which is a substantial number relative to the total number of items (28). While the justification provided (i.e., similar item wording) is acknowledged, such an extensive modification raises concerns about overfitting and may artificially inflate model fit. Correlated errors should be added sparingly and only when strong theoretical justification is present. I note that you have acknowledged this point in the limitations of the study.

- After introducing model modifications and removing item 25, it would be important to statistically compare the models (e.g., using Chi-square difference tests or ΔCFI/ΔRMSEA) to support the decision to retain the revised structure. Additionally, cross-validation using split samples or bootstrapping could help assess the stability of the modified model.

- You briefly mention the use of Spearman’s rho due to non-normal distributions, but, as previously mentioned, other essential assumptions such as outliers, multicollinearity, and heteroscedasticity are not addressed. Clarification on these points would strengthen the statistical validity of the analyses.
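For readers weighing these suggestions, a minimal sketch of the kind of checks discussed here might look as follows. The data are simulated, and the specific choices (Shapiro-Wilk for normality, a z-score outlier screen on residuals) are illustrative, not the authors' actual pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.standard_t(df=3, size=200)  # heavy-tailed residuals

# Normality check motivating a rank-based coefficient
_, p_norm = stats.shapiro(y)

# Spearman's rho works on ranks, so it is robust to monotone
# transformations and less sensitive to extreme values than Pearson's r
rho, p_rho = stats.spearmanr(x, y)

# A simple outlier screen: z-scores of the residuals from a linear fit
resid = y - np.polyval(np.polyfit(x, y, 1), x)
z = (resid - resid.mean()) / resid.std()
n_outliers = int(np.sum(np.abs(z) > 3))
```

Reporting the outcome of such screens (even when no cases are flagged) is a cheap way to address the reviewer's concern about unexamined assumptions.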

Given the large number of residual correlations (24 pairs) needed to improve model fit in the confirmatory factor analysis, I wonder whether you considered using an Exploratory Structural Equation Modeling (ESEM) approach. ESEM could have allowed for more flexibility in modeling item cross-loadings without relying on post-hoc correlated error terms, and might have provided a better representation of the underlying structure. Including a rationale for not choosing this approach, or a brief discussion of its potential relevance, would strengthen the methodological justification of the CFA strategy adopted.
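The model-comparison step suggested above (a chi-square difference test between nested CFA models) can be sketched as follows. The fit statistics below are hypothetical placeholders, and note that with robust estimators such as WLSMV a scaled difference test would be required instead of this plain ML version:

```python
from scipy.stats import chi2

def chisq_difference_test(chi2_nested, df_nested, chi2_full, df_full):
    """Likelihood-ratio (chi-square difference) test for nested CFA models
    fitted with ML. The nested (more constrained) model has the larger
    chi-square and more degrees of freedom. Returns (delta_chi2, delta_df, p)."""
    d_chi2 = chi2_nested - chi2_full
    d_df = df_nested - df_full
    p = chi2.sf(d_chi2, d_df)  # upper-tail probability of the chi-square
    return d_chi2, d_df, p

# Hypothetical fit statistics: original model vs. the revised 27-item model
d_chi2, d_df, p = chisq_difference_test(chi2_nested=812.4, df_nested=347,
                                        chi2_full=645.9, df_full=321)
```

Alongside the formal test, a change in CFI of more than about .01 is a commonly cited practical criterion for preferring one nested model over another.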

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2025 Oct 28;20(10):e0332724. doi: 10.1371/journal.pone.0332724.r002

Author response to Decision Letter 1


17 Jul 2025

PONE-D-25-06596

Validation of Mentalization Scale (Ment-S) in francophone control and clinical samples

PLOS ONE

Rebuttal letter – Answer to the comments

Please include the following items when submitting your revised manuscript:

• A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

• A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

• An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

Each of these items has been uploaded with the labels mentioned above.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

We have reviewed the journal’s style requirements and made modifications accordingly, including to meet the journal’s requirements for file naming.

2. We note that you have indicated that there are restrictions to data sharing for this study. For studies involving human research participant data or other sensitive data, we encourage authors to share de-identified or anonymized data. However, when data cannot be publicly shared for ethical reasons, we allow authors to make their data sets available upon request. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

We hereby confirm that our data cannot be publicly shared, in accordance with the decision of the study’s ethics committee (Project-ID 2021-00694), but we do make them available upon request for a specific collaboration, which entails a special permission granted by the local ethics committee.

Before we proceed with your manuscript, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., a Research Ethics Committee or Institutional Review Board, etc.). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

There are ethical and legal restrictions preventing the sharing of a de-identified dataset, as such sharing is not permitted under the ethics agreement approved by the cantonal ethics committee (Project-ID 2021-00694). Although authorization was granted to collect and analyze the data, sharing the dataset is prohibited due to the sensitive nature of the personal information involved and the rules governing data collection in the local hospital setting. As a result, the dataset cannot be made publicly available; it can only be shared upon request for a specific collaboration, which entails a special permission granted by the local ethics committee.

Address of the cantonal authority on human medical research:

Commission cantonale d'éthique de la recherche (CCER)

Rue Adrien-Lachenal 8

1207 Genève

Phone: +41 22 546 51 01

E-mail: ccer@etat.ge.ch

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories. You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible.

Please update your Data Availability statement in the submission form accordingly.

There are restrictions as stated above. Therefore, we did not update the Data Availability statement. As already stated in the original submission: All relevant data are within the manuscript and its Supporting Information files.

3. We note that your Data Availability Statement is currently as follows: All relevant data are within the manuscript and its Supporting Information files.

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

There are ethical and legal restrictions on sharing a de-identified data set, as sharing data is prohibited under the current ethics agreement approved by the cantonal ethics committee. While authorization was granted to collect and analyze the data, sharing the dataset is not allowed due to the sensitive nature of the personal information involved. Consequently, as stated above, the dataset must remain confidential and cannot be shared.

Address of the cantonal authority on human medical research:

Commission cantonale d'éthique de la recherche (CCER)

Rue Adrien-Lachenal 8

1207 Genève

Phone: +41 22 546 51 01

E-mail: ccer@etat.ge.ch

4. Please amend your list of authors on the manuscript to ensure that each author is linked to an affiliation. Authors’ affiliations should reflect the institution where the work was done (if authors moved subsequently, you can also list the new affiliation stating “current affiliation:….” as necessary).

We have amended our list of authors to ensure that it reflects the institution where the work was done. Additionally, we have added Dr Eva Rüfenacht from Geneva University Hospitals, who was also part of the data collection, as well as the RFTBM consortium (Réseau Francophone de Thérapie Basée sur la Mentalisation) in the Supporting Information file. These additions more precisely and exhaustively reflect the involvement in the work done and the data collected.

5. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 1 in your text; if accepted, production will need this reference to link the reader to the Table.

We thank you for your attention to this error and have made modifications accordingly.

6. We notice that your supplementary tables 5, and 6 are included in the manuscript file. Please remove them and upload them with the file type 'Supporting Information'. Please ensure that each Supporting Information file has a legend listed in the manuscript after the references list.

We thank you for your attention to this error and have made modifications accordingly.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

Reviewer #3: Partly

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: No

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1:

• This study evaluated the psychometric properties of the French version of the Mentalization Scale (MentS) in both community and clinical samples. A total of 711 participants, including individuals with borderline personality disorder (BPD), ADHD, and co-occurring BPD and ADHD, completed the scale. Confirmatory factor analysis supported a 27-item, three-factor structure (mentalizing self, mentalizing others, and motivation to mentalize) as optimal for both groups. According to the Authors, the French version of the MentS is suitable for research and clinical use.

The topic of the paper is both timely and interesting. However, there are several important points that require attention. I recommend a thorough and very careful revision of the manuscript to address these issues and strengthen the overall quality of the work.

In the Introduction, the Authors present the RFS scale in a somewhat cursory manner. It would have been beneficial to provide a more comprehensive overview of the RFS, including discussion of related instruments such as the RFQ-8. Notably, the authors do not address several critical issues—frequently highlighted in the literature—regarding the challenges of measuring mentalization with the RFS. As a result, the transition to the MentS scale feels abrupt and insufficiently justified. Furthermore, it is important to acknowledge that, both before and after the development of the MentS, other instruments for assessing mentalization have been introduced. Including references to these alternative measures would have strengthened the context and provided a more balanced perspective.

In the Introduction (Lines 80–92), the authors assert that Fonagy and Luyten have addressed and resolved the methodological limitations outlined earlier (Lines 77–79). They also cite encouraging data supporting the francophone version of the RFQ. However, the subsequent transition to introducing the Mentalization Scale (MentS) feels abrupt and insufficiently justified. While the MentS is indeed a valuable tool for multidimensional mentalization assessment, the authors do not adequately articulate its comparative advantages over existing measures. A more thorough discussion is needed to clarify why the MentS was selected over alternative instruments, particularly in light of its strengths (e.g., multidimensional structure) and potential limitations. Highlighting how the MentS addresses gaps left by other tools would provide a stronger rationale for its use and enhance the coherence of this section.

We thank the reviewer for this helpful and constructive comment.

The line numbers mentioned in our responses refer to the clean version of the manuscript.

In the revised manuscript, we expanded the paragraph originally spanning lines 78 to 90 to provide a more comprehensive and fine-grained presentation of the Reflective Functioning Scale (RFS), also mentioning the well-documented limitations in assessing mentalization (lines 74 to 82).

This section is now complemented by a similarly concise yet thorough overview of the Reflective Functioning Questionnaire (RFQ), focusing specifically on the 8-item version (lines 90 to 97)

We then briefly introduce earlier self-report measures developed prior to the MentS, namely the Mentalization Questionnaire (2012) and the Mentalized Affectivity Scale (MAS) (2017), as well as other, more recent self-report tools developed since the MentS (lines 97 to 108).

Finally, at the end of this revised section, once all relevant tools have been introduced and the MentS is appropriately situated among them, we inserted a new paragraph discussing elements specific to the MentS in comparison with existing instruments (lines 109 to 113).

• In my view, the description of the MentS in the manuscript is insufficient and does not offer readers a clear understanding of the instrument. The structure and dimensions of the MentS are not adequately defined, while greater emphasis is placed on the interpretation of scores. Providing a more detailed explanation of the scale’s underlying structure and its specific dimensions would significantly enhance the reader’s comprehension and better contextualize the meaning and relevance of the reported scores.

In the revised version of the manuscript, we have added elements to provide a better understanding of the MentS instrument. Specifically, we now include a clear explanation of each of the dimensions composing the structure of the MentS, as well as example items for each subscale. We further make explicit the number of items per subscale (lines 113 to 129) and provide the full French-translated instrument as supplementary information to the article.

• I found the section of the Introduction addressing convergent validity (lines 128 to 134) to be somewhat confusing, possibly due to an oversimplification of the authors’ argument. The formulation of multiple hypotheses in the latter part of the introduction is particularly perplexing, as several of these hypotheses, especially the one regarding a negative correlation between the MentS-S scale and self-reported childhood trauma, are not logically substantiated within the text.

We thank the reviewer for this constructive observation. We acknowledge that the section on convergent validity may have appeared confusing due to an overly condensed presentation of our rationale.

In the revised manuscript we have clarified and expanded this section.

Attachment

Submitted filename: Response to reviewers.docx

pone.0332724.s002.docx (61.9KB, docx)

Decision Letter 1

Marco Innamorati

3 Sep 2025

Validation of the Mentalization Scale (Ment-S) in francophone control and clinical samples.

PONE-D-25-06596R1

Dear Dr. Descartes,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Marco Innamorati

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewer #1:

Reviewer #3:

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

Reviewer #3: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #3: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #3: No

**********

Acceptance letter

Marco Innamorati

PONE-D-25-06596R1

PLOS ONE

Dear Dr. Descartes,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Marco Innamorati

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Data

    S1 Table. Full Table 4 Correlations in control sample - Spearman’s rho Correlations.

    S2 Table. Full Table 5 Correlations in clinical sample - Spearman’s rho Correlations.

    S3 File. French translation of The Mentalization Scale (MentS).

    S4 List of researchers who contributed to this work as part of the RF-TBM Consortium.

    (DOCX)

    pone.0332724.s001.docx (37.3KB, docx)
    Attachment

    Submitted filename: Response to reviewers.docx

    pone.0332724.s002.docx (61.9KB, docx)

    Data Availability Statement

    All relevant data are within the manuscript and its Supporting Information files.


    Articles from PLOS One are provided here courtesy of PLOS
