Table 2.
Authors (year) | Participants: N | Participants: Age | Measures: ER | Measures: Alex | Main results (ER and Alex) | QA (JBI)
---|---|---|---|---|---|---
Bani et al. (2023) | 342 | 46.4 (12.2) | Modified version of the DANVA2‐AF | TAS‐20 | A significant association was found between alex and ER accuracy in the unmasked condition (r = 0.15, p < .05); no significant association emerged in the masked condition (r = 0.06, p = NS). | 100%
Bègue et al. (2019) | 34 | 23.8 (N/A) | Ad hoc constructed paradigm | TAS-20 | No significant correlation was found between alex and metacognitive ability index (values N/A). | 75% |
Brewer et al. (2015) | 34 (15 alex; 19 controls) | Alex: 28.7 (14.9); Controls: 22.68 (3.13) | Pictures of Facial Affect; Karolinska Directed Emotional Faces Database | TAS‐20 | The alex group exhibited lower sensitivity than the control group to changes in facial emotion [5.6 ± 2.3 vs. 8.8 ± 3.7, t(32) = 2.94, p = .003] (Experiment 1). Alex participants showed reduced inter-rater consistency when judging the character traits, F(3, 861) = 18.49, p < .001, η2 = .037 (Experiment 2), and emotions, F(10, 2790) = 4.83, p < .001, η2 = .017 (Experiment 3), of emotionally neutral models. | 70%
Coll et al. (2019) | 42 (20 alex; 22 controls) | Alex: 29.4 (12.0); Controls: 30.1 (10.9) | Radboud Faces Database; Implicit facial expression discrimination task | TAS-20 | Alex participants were able to detect physical differences between facial expressions in the explicit emotion discrimination task (p = .66). Conversely, alex individuals showed no difference between oddball responses to upright and inverted faces in the mixed-emotions paradigm (p = .21, Cohen’s d = .47; 95% CI: -1.12, -0.18). | 70%
Connolly et al. (2020a) | 308 | 38.1 (N/A) | FEEST; Point-light displays; Montreal Affective Voices | TAS‐20 | A negative correlation between supramodal ER ability (measured with faces, bodies, and voices) and alex (r = -0.33, p < .001) was found. | 63%
Connolly et al. (2020b) | Study 1a: 389; Study 1b: 318 | Study 1a: 37.0 (11.7); Study 1b: 35.9 (N/A) | FEEST; Point-light displays; Montreal Affective Voices | TAS-20 | Negative correlations between supramodal ER ability and alex (Study 1a: r = -0.18, p = .003; Study 1b: r = -0.36, p < .001) were detected. | 63%
Di Tella et al. (2020) | 260 | 21.23 (2.06) | MPAFC | TAS-20 | In the regression model, alex (β = -0.22, p = .005) was the only significant predictor of ER accuracy. | 100%
Hakala et al. (2015) | 40 | 28.1 (9.5) | Stereoscopic photographs | TAS-20 | Facial expression was significantly associated with alex, both for negative, F(2, 3476) = 10.1, p < .001, and positive valences, F(2, 3475) = 32.2, p < .001. | 63%
Halberstadt et al. (2021) | Sample 1: 183; Sample 2: 74; Sample 3: 177; Sample 4: 43 | Sample 1: 19.31 (N/A); Sample 2: 19.77 (N/A); Sample 3: 22.48 (N/A); Sample 4: 37.42 (N/A) | PerCEIVED; Increasingly Clear Emotions Task; DANVA-2-CF | TAS-20 | Performance on the Increasingly Clear Emotions task was found to be associated with the DIF subscale of the TAS-20 [r(183) = 0.16, p = .03]. No other significant correlations were found. | 37.5%
Hovey et al. (2018) | 492 (182 men; 310 women) | Men: 23.7 (3.1); Women: 23.0 (3.2) | ERAM | TAS-20 | A significant association between alex and the audiovisual ER score was found only in female (r = -0.14, p = .01), but not in male (r = -0.11, p = .16) participants. | 88%
Hsinga et al. (2013) | 115 | 18.95 (N/A) | Emostroop task | TAS-20 | No significant differences were found between high and low alex groups on either reaction time or accuracy in classifying emotional faces (angry and sad) [F(1, 113) = .10, p = .749; F(1, 113) < .20, p > .60, respectively]. | 50% |
Jessimer & Markham (1997) | 180 | N/A | Chimeric stimuli (Pictures of Facial Affect) | TAS-20 | A significant difference between high and low alex groups was found on the ER test [F(1, 34) = 15.24, p < .001], with high alex participants showing poorer ER than low alex ones. The high alex group showed poorer performance on all six basic emotions [happiness, t(34) = 2.45, p < .01; surprise, t(34) = 2.57, p < .01; sadness, t(34) = 3.5, p < .001; fear, t(34) = 3.5, p < .001; disgust, t(34) = 3.15, p < .01; and anger, t(34) = 3.06, p < .01] compared to low alex participants. | 50%
Jongen et al. (2014) | 40 (20 alex; 20 non-alex) | Alex: 26.5 (7.7) Non-alex: 25.8 (6.7) | FEEL | TAS-20 | Participants high in alex performed significantly worse than individuals low in alex (t = -2.40; p = .022) in the facial ER task. | 70% |
Kafetsios & Hess (2019) | 108 | 25.87 (5.04) | ACE | TAS-20 | Emotion perception bias (perceiving emotions additional to those communicated), but not accuracy (perceiving the emotions communicated), was associated with alex [r(108) = 0.30, p < .01]. Emotion perception bias and accuracy were also associated with DIF [r(108) = 0.34, p < .01; r(108) = −0.49, p < .001] and DDF [r(108) = 0.26, p < .01; r(108) = −0.26, p < .01] scores. | 25%
Keightley et al. (2006) | 60 (30 younger adults; 30 older adults) | Younger adults: 25.7 (5.1); Older adults: 72.5 (7.8) | JACNeuF | TAS-20 | In older adults only, better recognition of angry faces was associated with lower alex scores (β = -0.17, p < .01). | 80%
Koelkebeck et al. (2015) | 42 | 30.0 (9.5) | Noh mask test | TAS-26 | A significant positive association between alex scores and mean reaction times on the ER test was found (r = 0.32, p = .038). | 90% |
Kyranides et al. (2022) | 110 | 24.9 (2.8) | MPAFC | TAS-20 | Individuals high in alex did not perform worse in the facial ER task (in either accuracy or response times) than participants low in alex (p = .58). | 63%
Lane et al. (1995) | 318 | N/A | Perception of Affect Task | TAS-20 | A significant association was found between higher alex levels and lower accuracy rates at the ER task (r = -0.32, p < .001). | 63% |
Lane et al. (2000) | 379 | N/A | Perception of Affect Task | TAS-20 | A significant difference between participants high vs. low in alex in ER accuracy was found (F = 19.8, p < .001). | 75% |
Larwood et al. (2021) | 162 | 21.5 (1.9) | Musical stimuli | TAS-20 | Alex was not associated with the number of emotion words generated (r = -0.05, p = NS), but the DDF factor was related to valence-specific affect judgements of music (r = -0.19, p < .05). Participants higher in alex rated at least sad, angry, and fearful pieces as more neutral in valence and arousal. | 100%
Laukka et al. (2021) | 593 (226 men; 367 women) | Men: 23.4 (3.3) Women: 22.9 (3.2) | ERAM | TAS-20 | A significant negative correlation between overall ER accuracy and alex was found (r = -0.19, p < .001). | 50% |
Lewis et al. (2016) | 389 | 37 (11.7) | FEEST | TAS-20 | Facial ER accuracy was negatively associated with alex total score (r = -0.32, p < .001), DIF (r = -0.21, p = .004), DDF (r = -0.24, p = .01), and EOT (r = -0.36, p < .001) subscale scores. | 63%
Maiorana et al. (2022) | 31 | 32 (11) | NimStim Face Stimulus Set | TAS‐20 | Mean reaction times correlated with alex in the mouth-only (r = 0.48, p = .006), unmasked (r = 0.48, p = .007), and eyes-only (r = 0.37, p = .038) conditions. No correlation was found in the masked condition (r = 0.08, p = .656). | 37.5% |
Malykhin et al. (2023) | 140 | 48.3 (18.4) | Penn Emotion Recognition task | TAS-20 | Alex negatively correlated with the accurate recognition of sad images (r = -0.17, p = .04). This association was driven by an increased number of errors when sad images were assigned to the neutral (r = 0.20, p = .016) and happy (r = 0.16, p = .057) categories. | 75% |
Mann et al. (1994) | 62 | 31.5 (10.3) | Pictures of Facial Affect | TAS-26 | Participants high in alex performed less accurately overall on the ER test compared to individuals low in alex (top third: 25.9 ± 2.6; second third: 27.2 ± 2.2; lowest third: 27.7 ± 1.3; χ2(2) = 7.2, p < .05). | 50%
Martingano et al. (2022) | 1253 | 27.6 (N/A) | FACS-verified | TAS-20 | No significant associations between the performance on the ER test and alex total, DIF, DDF, and EOT scores were found (all p-values = NS). | 88% |
Mayer et al. (1990) | 139 | N/A | Pictures of Facial Affect | TAS-26 | Alex was associated with a greater emotional range [r(128) = 0.16, p < .05] and a higher perception of emotion (generally negative) [r(128) = 0.20, p < .01] in response to the emotional stimuli. | 50% |
McCubbin et al. (2014) | 96 | 22.4 (6.81) | Perception of Affect Task | TAS-20 | A significant association between alex and ER accuracy was found [r(88) = -0.34, p = .001]. ER intensity was not correlated with alex (r = -0.12, p = NS). | 88% |
Montebarocci et al. (2011) | 91 | 25.3 (4.7) | Pictures of Facial Affect | TAS-20 | The high alex group obtained a significantly lower ER accuracy score than the low alex group [F(1, 33) = 4.35, p < .05]. | 90%
Murphy et al. (2019) | 134 | 55.0 (19.5) | Emotion-identity recognition task | TAS-20 | A negative association between alex and the performance on the ER task was found (r = -0.21, p < .05). | 75% |
Nook et al. (2015) * | 82 | 22.9 (5.72) | NimStim; IASLab | TAS-20 | Higher alex was associated with impaired performance for face-face trials [r(35) = 0.34, p = .04] but not for face-word trials [r(32) = 0.01, p = .96]. | 50%
Parker et al. (1993) | 216 (131 women; 85 men) | Women: 20.6 (2.1) Men: 21.1 (1.8) | Photographs from Izard | TAS-20 | A main effect for alex group (low, moderate, high alex) was found [F(2,210) = 4.73, p = .010], as well as a significant interaction between the alex group and the type of emotion [F(8,1680) = 2.16, p = .005]. The low alex group reported significantly higher ER total scores than the high alex sample. | 63% |
Parsons et al. (2021) | 610 | 32 (4.6) | Infant Facial Emotion Perception Task KDEF-dyn Database | TAS-20 | No significant associations between alex and ratings of arousal or valence across the infant emotion categories were found (all r < .08), with the only exception of a negative correlation between arousal ratings for the muted negative faces and EOT scores (r = 0.11, p = .009). Conversely, the correlations between alex and accuracy for adult faces were significant for the sad (r = 0.10, p = .02) and angry faces (r = 0.14, p < .001), and the overall accuracy scores (r = 0.09, p = .02). Also, EOT scores correlated with accuracy for the sad (r = 0.19, p = .0001) and angry faces (r = 0.16, p = .0001), and overall accuracy (r = 0.15, p = .001). | 63% |
Radoš et al. (2021) | 426 | 22.5 (4.6) | City Infant Faces Database | TAS-20 | Greater total accuracy on the ER test was related to lower levels of alex total (r = -0.15, p = .009) and EOT (r = -0.19, p = .001) scores. | 63% |
Ridout et al. (2010) | 45 (23 high EDI; 22 low EDI) | High EDI: 19.6 (1.7); Low EDI: 19.1 (0.9) | TASIT - Emotion Evaluation | TAS-20 | A significant negative correlation between ER accuracy and alex scores was detected [r(45) = -0.54, p < .001]. | 80%
Ridout et al. (2021) | Study 1: 39; Study 2: 38 | Study 1: 19.5 (1.1); Study 2: 19.63 (2.7) | Karolinska and NimStim face sets; TASIT | TAS-20 | Alex did not predict ER accuracy in either task (p > .05). | 88%
Rosenberg et al. (2020) | 49 | 23.3 (2.8) | Pictures of Facial Affect | TAS-20; BVAQ; TSIA | The TAS-20 and BVAQ total scores were significantly correlated with the priming score for angry faces (r = -0.30, p < .05; r = -0.29, p < .05, respectively), whereas no significant association emerged between the TSIA and any of the emotions. Also, the BVAQ Identifying subscale was associated with fearful faces (r = -0.34, p < .05), while the TSIA imaginal processes subscale correlated with happy faces (r = -0.38, p < .01). | 63%
Rus-Calafell et al. (2013) | 98 | 32.6 (9.2) | Penn Emotion Recognition Test; Virtual Faces | TAS-20 | Positive correlations were found between alex and committed errors in both presentation conditions (static images: r = 0.32, p < .01; virtual reality: r = 0.43, p < .01). | 75%
Schlegel et al. (2019) | 70 | 26.0 (4.9) | GERT | TAS-20 | Accuracy in facial emotion recognition was negatively correlated with alex (r = -0.20, p < .01). | 63%
Senior et al. (2020) | 83 | 19.7 (N/A) | Pictures of Facial Affect | TAS-20 | Accuracy in facial ER was negatively correlated with alex [r(75) = -0.4, p < .001]. | 75% |
Sharpe et al. (2016) | 52 | 22.1 (2.5) | BU-3DFE database | TAS-20 | Alex was not a significant predictor of ER accuracy (p > .05) in the regression model. | 80% |
Sunahara et al. (2022) | Biological task: 1756; Penn test: 384 | Biological task: 24.8 (10.9); Penn test: 19.7 (1.8) | Biological Motion Task; Penn Emotion Recognition Test | TAS-20 | Higher alex levels predicted lower ER accuracy on the biological motion test (b = -0.07, 95% CI [-0.12, -0.02]), but not on the Penn Emotion Recognition test (b = -0.04, 95% CI [-0.15, 0.07]). | 88%
Swart et al. (2009) | 34 (16 alex; 18 non-alex) | Alex: 20.1 (1.7); Non-alex: 19.3 (1.0) | Micro Expression Training Tool; Affective Prosody task | BVAQ | Alex participants scored significantly lower on recognizing brief emotional expressions [F(1, 31) = 9.60, p = .004] compared to non-alex individuals. No difference between alex and non-alex participants on accuracy in either the prosody or semantic task was found [F(4, 29) = 1.77, p = .16; F(4, 29) = 0.32, p = .86, respectively]. | 70%
Taruffi et al. (2017) | 120 | 30.4 (9.5) | Musical stimuli | TAS-20 | Only the EOT subscale of the TAS-20 was a significant predictor of musical emotion recognition total score (β = 0.21, p < .05). | 63% |
QA (JBI) = Quality Assessment (Joanna Briggs Institute); DANVA2‐AF = Diagnostic Analysis of Nonverbal Accuracy FACES 2‐Adult Faces; TAS = Toronto Alexithymia Scale; TAS DIF = Difficulty Identifying Feelings; TAS DDF = Difficulty Describing Feelings; TAS EOT = Externally Oriented Thinking; FEEST = Facial Expressions of Emotion: Stimuli and Tests; MPAFC = Montréal Pain and Affective Face Clips; PerCEIVED = Perceptions of Children’s Emotions in Videos, Evolving and Dynamic task; DANVA2‐CF = Diagnostic Analysis of Nonverbal Accuracy FACES 2‐Children Faces; ERAM = Emotion Recognition Assessment in Multiple modalities; FEEL = Facially Expressed Emotion Labelling; ACE = Assessment of Contextualized Emotions-faces; JACNeuF = Japanese and Caucasian Facial Expressions of Emotions and Neutral Faces; FACS-verified = Facial Action Coding System–verified University of California set of Emotion Expressions; KDEF = Karolinska Directed Emotional Faces; EDI = Eating Disorder Inventory; TASIT = The Awareness of Social Inference Test; BVAQ = Bermond-Vorst Alexithymia Questionnaire; TSIA = Toronto Structured Interview for Alexithymia; GERT = Geneva Emotion Recognition Test.
* Only the data from Study 2 were considered, as in Study 1 the total alexithymia score was calculated by summing only the “identifying emotions” and “describing emotions” subscales of the TAS-26.
Note. Age is expressed in years; NS = not significant.