2022 Mar 16:1–17. Online ahead of print. doi: 10.1007/s12207-022-09447-z

Can the Rorschach be Administered Remotely? A Review of Options and a Pilot Study Using a Newly Developed R-PAS App

Francesca Ales 1, Gregory J Meyer 2, Joni L Mihura 2, Andrea Corgiat Loia 1, Sara Pasqualini 1, Alessandro Zennaro 1, Luciano Giromini 1
PMCID: PMC8923744  PMID: 35308458

Abstract

The ongoing COVID-19 pandemic has required psychologists to adopt measures like physical distancing and mask wearing, while other safety procedures, such as travel restrictions or prohibitions on in-person practice and research, have fostered the use of tele-health tools. In this article, we review options for using the Rorschach task via videoconference and provide preliminary data from a new electronic app for remote R-PAS administration, to determine whether remote administration in an electronic format yields different information than in-person administration with the cards in hand. As a pilot study, our focus is on the “first factor” of all Rorschach scores, i.e., Complexity. Data were collected from 60 adult Italian community volunteers, and statistical analyses evaluated the extent to which the average Complexity score significantly departed from R-PAS normative expectations (SS = 100), accompanied by Bayesian likelihoods for supporting the null hypothesis. Results suggest that the general level of complexity shown by test-takers administered the Rorschach remotely with the new R-PAS app closely resembles that previously observed using “standard” in-person procedures. Tentative analyses of other R-PAS scores suggested normative departures that could be due to the effects of the app, testing at home, or responses to the pandemic. We offer recommendations for future research and discuss practical implications.

Keywords: Rorschach, R-PAS, Tele-health, Personality, Assessment

Introduction

The spread of COVID-19 has dramatically changed the landscape of mental health services around the world, strongly affecting the ways in which psychological assessment is conducted. Over the past 2 years, preventive and corrective measures to control the COVID-19 outbreak have caused difficulties in delivering basic mental health care services. In May 2020, the Society for Personality Assessment conducted a survey whose results were quite discouraging: 26% of practitioners conducted assessment procedures virtually, via videoconferencing, but 52% paused their psychological evaluations, waiting until in-person activities could be resumed. Even with vaccine dissemination, the landscape of mental health services seems to have changed radically and permanently, so psychological services must adapt to the new tele-assessment context. In fact, psychological assessment has had difficulty adapting quickly to the pandemic, especially compared to psychotherapeutic treatment, which moved online quite easily thanks to studies that empirically supported it (Batastini et al., 2016; Bolton & Dorstyn, 2015; Reese et al., 2015; Varker et al., 2019). Therefore, in the last 2 years, increasing attention has been paid to the development of tele-health1 practices (i.e., delivery of health care services via remote technologies).

Although tele-health research has been more prolific recently, and the health emergency has certainly increased interest in it, researchers have been studying these procedures for the past 20 years (Barnett et al., 2018; Spivak et al., 2020; Wilson et al., 2017). Initially, the appeal of tele-health was linked to the desire to improve equity and access for those who could not easily travel (e.g., the elderly, people living in rural areas). In fact, prior to the ongoing pandemic, best-practice guidelines for virtual psychological assessment had been published (Joint Task Force, 2013; Luxton et al., 2014), and some psychological measures had been tested with equivalence analyses comparing in-person and remote assessment.

Overall, research on tele-assessment has produced encouraging results regarding the reliability, validity, and utility of psychological data collected remotely. For example, several studies have shown that structured interviews conducted remotely are equivalent to traditional interviews conducted in person, both in clinical and forensic settings (Garb, 2007; Grady et al., 2011; Hyler et al., 2005; Lexcen et al., 2006; Luxton et al., 2014; Manguno-Mire et al., 2007; Schopp et al., 2000; Shore et al., 2007; Singh et al., 2007). This is likely because the success of a clinical interview is largely related to the degree of therapeutic alliance (COVID-19 Task Force to Support Personality Assessment, 2020), which appears to be undiminished in tele-health practices (Germain et al., 2010; Morgan et al., 2008; Simpson, 2001). Similarly, fairly strong evidence has demonstrated equivalence between self-report measures administered remotely and in person (Garb, 2007; Giromini et al., 2021; Luxton et al., 2014), although it is necessary to ensure that test integrity is maintained (Corey & Ben-Porath, 2020). For example, the Millon Clinical Multiaxial Inventory 4th Edition (MCMI-IV; Millon et al., 2015) has been found to have good equivalency when administered electronically (Finger & Ones, 1999). Finally, several studies have focused on equivalency analyses of neuropsychological tests (Cullum et al., 2006, 2014; Galusha-Glasscock et al., 2016; Grosch et al., 2015; Harrell et al., 2014; Loh et al., 2007; Temple et al., 2010; Turkstra et al., 2012). In a recent meta-analysis, Brearly et al. (2017) observed that videoconferencing administration does not result in significantly different outcomes compared to in-person administration.

According to the SPA survey (2020), the main pitfall many clinicians reported was the remote administration of some psychological measures, especially performance-based ones such as cognitive tests. A few attempts were made to assess possible differences between online and traditional in-person administration of cognitive tests, but most focused on specific tasks (e.g., WAIS-IV subtests; Brearly et al., 2017; Temple et al., 2010). In addition, most of these studies were conducted in highly controlled environments in which, for example, a facilitator was present to assist with test administration. Nonetheless, these studies represent the first efforts to demonstrate equivalence between tests administered in person and tests administered remotely. To date, then, there is arguably more empirical support for online assessment than for in-person assessment with social distancing measures (e.g., masks, wider distance between assessor and client), on which no research has yet been conducted; and although research on tele-assessment is still young, it offers some empirical bases to build on.

Finally, it is worth mentioning the current status of tele-assessment in forensic contexts. The field lacks robust guidelines for online forensic assessment. Drogin (2020) pointed out that the courts have not yet become part of the debate about the use of tele-assessment in forensic evaluations, but he anticipated that this will happen soon. Thus, best practices and guidelines should be developed as soon as possible so that all parties involved (e.g., forensic psychologists, judges, attorneys) are able to handle these new tele-assessment practices.

Rorschach and Tele-Assessment

Assessment instruments adapted from in-person to online administration inevitably introduce a risk of error, as the instrument was validated under different assumptions, testing environments, and administration standards, all of which could affect its psychometric accuracy (Kline, 2015). In particular, performance-based tests, which often use visual, tangible, or interactive stimuli instead of verbal items, have greater difficulty adapting to the online setting. To date, the only recommendations on how to conduct remote administration of performance-based personality assessment measures relate to the Rorschach task (Meyer et al., 2020). Meyer and colleagues’ (2020) guidelines refer specifically to the Rorschach but can be applied to other performance-based tests in which the examinee interacts with visual stimuli.

Meyer et al. (2020) noted that online administration of the Rorschach test generates a number of foreseeable challenges: the assessor cannot simply hold the cards and show them to the respondent via video camera, as the size of the stimuli and the respondent’s ability to rotate them are crucial features of standardized administration. Sending the cards to the examinee also poses challenges: this option opens up the risk of violations of test security, the cards might not be returned to the clinician, who would then incur a financial loss, and the examinee would be wholly responsible for the entire test administration process. Other options for remote administration were proposed, such as the presence of an onsite facilitator (e.g., a professional, quasi-professional, family member, or other cohabitant) who can receive the material and prepare the setting. More information about the potential issues with remote administration of the Rorschach and the solutions proposed by Meyer et al. (2020) is provided below (see the section “R-PAS at the Time of COVID-19”).

Rorschach Performance Assessment System (R-PAS)

Despite a long-standing debate on its validity and usefulness (e.g., Cronbach, 1949; Jensen, 1965; Lilienfeld et al., 2000; Meyer & Archer, 2001; Mihura et al., 2013, 2015; Society for Personality Assessment, 2005; Viglione, 1999; Viglione et al., 2022; Wood et al., 2015), the Rorschach inkblot task remains one of the most widely used (Wright et al., 2017), taught (Belter & Piotrowski, 2001; Childs & Eyde, 2002; Mihura et al., 2017), and researched2 assessment instruments. According to Meyer et al. (2011), one of the reasons for its popularity is that it offers a unique range of information about one’s personality features and processing style, giving the assessor the possibility to observe performance-related characteristics that are typical, rather than maximum.3

It should be underscored that from a psychometric standpoint, the claim that the Rorschach “is invalid” has been refuted by the most recent and extensive reviews and meta-analyses on this topic (Bornstein, 1999; Diener et al., 2011; Graceffo et al., 2014; Jørgensen, 2000; Meyer & Archer, 2001; Mihura et al., 2013, 2015; Viglione et al., 2022). In particular, Mihura et al.’s (2013) meta-analytic examination of all interpreted variables included in the popular Comprehensive System (CS; Exner, 2003) led to the conclusion that 26 of them demonstrated excellent (r ≥ .33, p < .001, FSN > 50) or good (r ≥ .21, p < .05, FSN ≥ 10) validity when evaluated against externally assessed criteria rather than self-report.4

Moreover, during the past two decades, several Rorschach variables (e.g., human movement, complexity, Vista) have received extensive empirical validation using (a) neuroscientific techniques such as EEG (Andò et al., 2018; Giromini et al., 2010; Pineda et al., 2011; Porcelli et al., 2013), fMRI (Asari et al., 2010; Giromini et al., 2017, 2019a, 2019b; Vitolo et al., 2020), and rTMS (Andò et al., 2015, 2018), or (b) psychophysiological research methods involving the measurement of electrodermal activity (Giromini et al., 2016), eye movements (Ales et al., 2019; Minassian et al., 2005; Perry et al., 1995), 5-hydroxyindoleacetic acid (5-HIAA) (Lundbäck et al., 2006), and other physiological criteria (Meaney, 2011; Meyer et al., 2018).

Therefore, selected scores from the Rorschach task may now be considered useful tools in the field of psychological injury and law (Erard, 2012; Erard et al., 2014; Erard & Viglione, 2014; Viglione et al., 2022). Especially in forensic settings, it is crucial to ensure that a test is empirically supported and has good psychometric properties, so that forensic examiners can draw valid and accurate conclusions from methods that meet current requirements for admissibility in court (Erard et al., 2014; Meyer & Eblin, 2012; Viglione et al., 2022). These selected scores are included in the Rorschach Performance Assessment System (R-PAS; Meyer et al., 2011, 2014; Meyer & Eblin, 2012), which is the most up-to-date method for administration, scoring, and interpretation and was designed as a replacement for the CS.

R-PAS in Psycho-Legal Contexts

The use of the Rorschach in forensic practice is well documented in the literature (Erard, 2012; Khadivi & Barton Evans, 2012; Meyer & Eblin, 2012; Mihura, 2012), and its use in the evaluation of psychological injury cases in particular is known to offer many advantages (Gacono & Evans, 2008). One of the strengths of using the Rorschach in forensic settings is that it is a viable alternative to self-reports. Most interviews (e.g., the Structured Clinical Interview for DSM-5: SCID-5; First et al., 2015) and personality inventories (e.g., the Minnesota Multiphasic Personality Inventory-3: MMPI-3; Ben-Porath & Tellegen, 2020; the Personality Assessment Inventory: PAI; Morey, 1991) rely on the self-awareness of the defendant, on their ability to reflect on their own experiences, and on their willingness to communicate their own personality characteristics honestly. The Rorschach, instead of asking how respondents see themselves, allows the assessor to observe how they see, communicate about, and interact with the inkblots, and thus does not rely on their capacity for self-observation and self-awareness.

In this regard, it is worth specifying that the forensic context itself can be a stressful factor for the defendant. However willing one may be to present one’s memories and symptoms honestly, it can be difficult not to emphasize one’s discomfort and underestimate one’s faults, if any. The Rorschach, as a free-response performance test, assesses what the person does, not what the person says they do (Meyer et al., 2011). It is not clear to defendants how to deliberately appear cognitively disturbed or emotionally distressed during the Rorschach task. The opposite is also true: some individuals have extremely resistant defensive barriers that cannot be overcome with self-report measures but that can be penetrated by methods such as the Rorschach, which are capable of revealing psychopathology that is not overtly manifest (Ganellen, 2008; Ganellen et al., 1996; Grossman et al., 2002). Furthermore, self-reports can be redundant due to their shared mono-method variance (Meyer, 1999; Meyer et al., 2000). The Rorschach, by confirming or disconfirming the results of self-reports, instead provides incremental validity (Weiner, 1999) and protects against the possibility that the defendant intentionally exaggerates or downplays certain aspects of their personality.

Another advantage of the Rorschach in forensic settings is that it is an implicit measure of personality traits and behavioral tendencies (Bornstein, 2002; McClelland et al., 1989; Shedler et al., 1993). Traditionally, implicit measures have been found to be useful in predicting how a person might behave in daily life, outside of structured settings such as psycho-legal evaluations, in which one might be inclined to meet particular expectations (Bornstein, 2002; Finn, 2011; McGrath, 2008). Finally, the Rorschach offers an idiographically rich and multifaceted representation of the defendant’s personality, which would be difficult to obtain through self-reports only (Erard, 2012).

In light of these strengths, the Rorschach is one of the ten most frequently used tools in forensic assessments of various kinds (Ackerman & Ackerman, 1997; Archer et al., 2006; Boccaccini & Brodsky, 1999; Borum & Grisso, 1995; Neal & Grisso, 2014; Quinnell & Bow, 2001) and ranks second in child custody evaluations.5 Furthermore, it is used in nearly a third of assessments investigating criminal responsibility and competency to stand trial (Borum & Grisso, 1995), and nearly a third of forensic psychologists currently use it in their daily practice (Archer et al., 2006). Moreover, it is important to underline that even after the numerous controversies over the Rorschach, almost all federal and state courts have considered it a sufficient tool for expert testimony (Meloy et al., 1987; Meloy, 2008; Viglione et al., 2022; Weiner et al., 1996).

Some might argue that R-PAS is too young a method to be accepted in legal processes. However, there are many reasons to conclude that R-PAS is commonly accepted in psycho-legal contexts. R-PAS has more than 9000 registered account holders, with more than 600 of them approved for teaching purposes. Account holders reside in every US state and 56 other countries. The R-PAS manual (Meyer et al., 2011) is in its tenth printing and is accompanied by a 19-chapter book illustrating case interpretation, including in forensic practice (Mihura & Meyer, 2018). The manual has been translated into four languages (Italian, Japanese, Portuguese, and Spanish), with five others in progress (Czech, Complex Chinese, Hungarian, Korean, and Thai). The online scoring program and resource center are available in 14 languages, and R-PAS has contracted local distributors and account brokers in seven countries. Finally, R-PAS has offered more than 180 official training workshops throughout the USA and in 17 other countries. Thus, given this scope of active use, it appears that R-PAS meets a standard of general acceptance with respect to its clinical use. This is in addition to the many ongoing studies and published research supporting R-PAS and the continuous increase in citations of the method in clinical and forensic textbooks (e.g., Ackerman & Kane, 2011; Archer & Smith, 2014).

Lastly, US federal law provides for the use of the Daubert standard as a rule of evidence for the admissibility of expert witness testimony. What has become known as the Daubert trilogy (i.e., the three US Supreme Court cases that articulated the Daubert standard: Daubert, 1993; General Electric v. Joiner, 1997; Kumho Tire, 1999) stipulates seven non-exclusive and non-mandatory criteria that can be applied in a rather flexible manner when assessing the scientific reliability of expert testimony. In light of the seven Daubert criteria:

  1. R-PAS is a testable technique (1st criterion). It is an evidence-based method whose results can be tested by means of countless techniques (e.g., convergent and discriminant correlations with behavioral measures).

  2. R-PAS has been developed and tested (Mihura et al., 2013; also, see the Technical Manual section in Meyer et al., 2011) through extensive and supportive empirical testing of validity and reliability (2nd criterion).

  3. R-PAS has been, and still is, subject to peer review (3rd criterion). The Rorschach is the second most studied assessment tool after the MMPI, and R-PAS specifically is based on peer-reviewed research (Meyer et al., 2012).

  4. R-PAS variables have been tested to estimate a potential error rate (4th criterion). Interrater reliability is excellent on average (Schneider et al., 2020), indicating generally minimal error when classifying Rorschach responses, and Mihura et al. (2013) report an overall validity effect size (r = .27) within the typical range of variable-to-criterion error rates for psychological testing (Hemphill, 2003).

  5. R-PAS uses standardized rules of administration (e.g., R-optimized administration) and, in order to maximize validity and reliability, has eliminated all variables that are not scientifically supported (5th criterion).

  6. R-PAS is a relatively new method, but its popularity and use in training and assessment contexts are extensive (6th criterion). The Comprehensive System (Exner, 1974), R-PAS’ predecessor, is generally accepted in nearly all courts; therefore, R-PAS, which originated from the CS but improves on its scientific foundations, can reasonably anticipate receiving the acceptance from which the CS has already benefited (Erard et al., 2014). Additionally, Daubert standards are open to innovations relative to older methods.

  7. Chapter 10 of the R-PAS Manual (Meyer et al., 2011) offers specific guidelines on what inferences can be made using the test. Thus, with respect to whether the expert’s conclusions reasonably follow from applying the technique (7th criterion), it is partly up to the expert in the particular legal case to assess whether the use of R-PAS can be helpful to the judge.

R-PAS at the Time of COVID-19

Like all assessment measures involving interaction between an assessor and a respondent, the Rorschach has faced challenges in applied settings during the ongoing COVID-19 pandemic, due to the need for physical distancing and other safety procedures (e.g., wearing masks, prohibitions on in-person practice or research). To provide Rorschach users with strategies to continue using it during the pandemic, the R-PAS authors developed two sets of guidelines (available at www.r-pas.org). The first consists of slightly modified administration guidelines for clinicians and researchers who are able to conduct in-person psychological testing with physical distancing. The second set of guidelines is for completing an assessment remotely using a videoconferencing platform. These guidelines require the respondent to have the inkblots in hand and to position themselves, potentially with a third camera, in such a way that the assessor can observe the respondent and the inkblots simultaneously. The guidelines discuss five possible scenarios. One involves just the respondent and the assessor; the others also involve a facilitator, who could be a member of the household or a quasi-professional aide to the assessor. Depending on circumstances, the facilitator could be in the room with the respondent during testing, which is less desirable, or on site and available if needed but not in the room. The assessor can implement these four facilitator-assisted options either in the respondent’s residence or in a clinical setting near the respondent. Each option has its own challenges and benefits, though all involve physical inkblots in the respondent’s hands and videoconferencing software linking the assessor with the respondent. Research has not explored whether these kinds of remote administration modify the assessment experience sufficiently to alter normative expectations. However, the R-PAS developers suggested that these designs sufficiently mirror in-person assessment to support their cautious use in practice.

A final option departs more notably from traditional in-person assessment with the respondent holding the inkblots in hand. It entails a self-contained method for remote administration in which the inkblot stimuli are presented to the respondent electronically on an appropriately sized device, while the assessor and the respondent are linked via videoconferencing software. The R-PAS authors developed a new electronic app for this purpose. The app addresses several challenges. First, in partnership with Hogrefe, the publisher of the inkblots, it uses official electronic versions of the original stimuli, in their precise color and shading. Second, it provides a screen calibration tool to ensure the inkblot images display at the correct size on the respondent’s device. Third, it protects intellectual property and test security by ensuring the images are not available in the browser cache or history and become inaccessible to both the respondent and the assessor once the session has ended. Fourth, it provides the respondent with options for card turning. Fifth, the assessor controls movement from one card to another.

Finally, remote administration is one option within an electronic app that can also be used for traditional in-person assessment. The app provides a fully encrypted interface that links with user accounts on the R-PAS site, progresses through the phases of administration, allows the assessor to move around within or across phases, provides an option for speech-to-text transcription, and allows the assessor to add or delete responses and to score response behaviors related to card turning. It also provides image mark-up and annotation tools to complete an electronic version of a location chart, as well as a place for the assessor to take notes on each response. Once the assessment is complete, the app securely sends all the response information to the R-PAS site, where it pre-fills the coding interface with relevant information, such as the structure of the responses, codes already assigned, the response and clarification phase communications, and, if present, notes from the assessor and card image annotations.

In the current study, we examine the remote use of the online app, given its relevance for potential use during the pandemic as well as its remarkable flexibility: it allows the assessor and the respondent to be located anywhere, rather than requiring them to be in the same room for the assessment. Because our sample was non-clinical, the procedure did not require help from facilitators.

This Study

Although the remote administration app solves many technical hurdles, an important empirical question remains. Does the Rorschach administered remotely in a fully electronic format yield different information than when it is administered in-person with the cards in hand? Our pilot study begins to address this question by focusing on complexity, which is the “first factor” of all Rorschach scores and “the most important thing that makes one person look different from another person” on the Rorschach (Meyer et al., 2011, p. 319). In R-PAS, this variable is scored based on Viglione’s (1999) conceptualization of it as “the amount of productivity, precision, differentiation, and integration involved in the aggregate of all the responses” (p. 259). It also is the dimension of engagement observed by Meyer (1992) and historically by others, which he defined as “cognitive and emotional investment in the task as opposed to more simplistic or efficient responding” (p. 129).

Research supports the R-PAS approach to interpreting Complexity scores. Mihura et al.’s (2013) meta-analytic review found that the Rorschach variables closely related to Complexity, i.e., those that assess cognitive synthesis, richness, or engagement, were among those with the strongest empirical support. Additionally, an eye-tracking study by Ales et al. (2019) found that Complexity correlated at r = .53, p = .000002, with the number of fixations occurring during the response phase of administration. As the number of fixations is a well-established proxy marker of cognitive engagement in the eye-tracking literature (e.g., Chen & Proctor, 2017; Jarodzka et al., 2010; Laeng et al., 2011), Ales et al.’s study strongly corroborates the R-PAS approach to interpreting Complexity scores. Also consistent with R-PAS interpretive guidelines, a recent fMRI study by Vitolo et al. (2020) found that delivering more complex as opposed to simpler Rorschach responses was associated with increased activity in the dorsal attention network (d = .43, p < .01), a brain pathway deemed responsible for goal-directed, top-down attentional processes (Corbetta & Shulman, 2002; Ptak, 2012; Vossel et al., 2014).

Given that complexity is the Rorschach variable that defines the largest source of variability in the test, and considering its strong evidence base, we determined that it should be the primary focus of our first attempt to study whether administering the Rorschach remotely rather than in-person influences the respondent’s overall approach to the test. This study thus administered the Rorschach remotely to Italian adult community volunteers and evaluated the extent to which the complexity scores departed from R-PAS normative expectations derived from in-person administrations before COVID-19. Additionally, for exploratory purposes, we also examined the average scores of all other Rorschach variables included in the R-PAS profile pages of scoring output.

Ideally, to test possible differences between the remote and in-person administration formats, the R-PAS scores generated by our sample would be compared against those produced by an equivalent but independent in-person sample matched on key control variables (e.g., overall level of wellbeing). Alternatively, the same individuals could be administered the Rorschach twice, once using the standard, in-person administration and once using the remote format. However, when we designed and conducted this study, neither of these options was available because of COVID-19-related restrictions (lockdown, etc.). Accordingly, we opted to compare our newly collected data against R-PAS normative reference values, which—as noted above—were derived from in-person administrations before COVID-19. Although sub-optimal, this choice was justified by the fact that the vast majority of Rorschach variables are presumably influenced by stable personality traits rather than by short-lasting and unstable psychological states (Exner, 2003; Meyer et al., 2011; Sultan et al., 2006; Viglione & Hilsenroth, 2001). In fact, a meta-analytic study by Grønnerød (2003) found that, on average, Rorschach scores generate a test–retest stability coefficient of r = .64 over an interval of slightly more than 3 years. Along similar lines, a more recent study by Freitas and Pasian (2018) found that the scores generated by 88 volunteers over an interval of 15 years were similar across the two administrations. This was particularly true for variables more closely associated with overall engagement with the task, the main focus of the current study. For instance, the number of responses correlated at r = .72, the highest correlation among all presented results. Therefore, since the Rorschach appears to be related to stable personality traits rather than situational conditions and temporary affects, we believe that, in the absence of a sample collected with in-person administration during the COVID-19 outbreak, comparison with pre-pandemic R-PAS normative reference values is a viable alternative.

As no published study had ever investigated the effects of administering the Rorschach remotely, we did not have any sound a priori hypotheses for this study. Thus, we speculated that individuals taking the Rorschach remotely would show a level of engagement—and thereby a complexity score—similar to that found with the standard in-person administration format, simply because we did not have any data suggesting otherwise. On the other hand, we also anticipated that R-PAS variables reflecting body-related preoccupations, social isolation, or reduced mental activity (e.g., anatomy-related content, passive movements) could perhaps slightly depart from normative expectations as a result of the ongoing pandemic.

Method

Participants

The sample consisted of 60 Italian young adults who were recruited across Italy. Table 1 provides basic demographic information for them and for the R-PAS international reference sample that established the normative expectations to which they are being compared. As the table suggests, our sample can be characterized as younger, more educated, generally single, and of one ethnic background relative to the norms, which are older, less educated, generally married, and multi-ethnic. In addition, unlike the R-PAS norms, about half of our sample (53.3%) were students. Nevertheless, it should be pointed out that empirical research shows that adulthood age, ethnicity, and gender have minimal to no influence on R-PAS scores (Meyer et al., 2014).

Table 1.

Descriptive data in this study sample (N = 60) and in the reference sample (N = 640)

Variable             Study participants (N = 60)    Reference sample (N = 640)
                     M        SD                     M        SD
Age                  25.75    4.51                   37.3     13.4
Years of education   16.05    2.03                   13.3     3.6

                     Proportions (%)                 Proportions (%)
Gender
  Male               43.3                            44.7
  Female             56.7                            55.3
Ethnicity
  White              100                             66.8
  Black              -                               2.6
  Hispanic           -                               8.7
  Asian              -                               2.6
  Other or mixed     -                               19.4
Marital status
  Single             86.7                            35.5
  With partner       10                              1.6
  Married            1.7                             52.3
  Separated          -                               1.6
  Divorced           1.7                             6.8
  Widowed            -                               2.3

As with the R-PAS norms, individuals in our sample may be characterized as “non-clinical”: none had a history of psychological or psychiatric disorders, with the exception of one person who reported a previous diagnosis of post-partum depression. Inclusion and exclusion criteria required that participants (a) had never been administered the Rorschach and had no prior knowledge of the test; (b) were not regularly taking psychoactive or psychotropic drugs; (c) did not have dyschromatopsia, achromatopsia, and/or color blindness; and (d) had an electronic device (e.g., PC, laptop) with a screen large enough to display the Rorschach cards at their correct size (i.e., 9.5 × 6.75 inches, or 24.13 × 17.145 cm). All of the above information was collected through a dedicated socio-demographic form that participants completed before taking part in the study. All participants used their own devices and were in their own homes during Rorschach administration.

Measures and Interrater Reliability

The R-PAS interpretive output reports scores for 60 Rorschach variables, organized into two profile (summary) pages. Variables with the strongest research and behavioral support are listed on page 1; those with less support or behavioral foundation are listed on page 2. The scoring program (www.r-pas.org) calculates all protocol-level summary scores from the codes the examiner enters at the response level (i.e., response by response); it then assigns percentiles based on normative reference data and converts those to a standard score (SS) metric (M = 100; SD = 15) for visual plotting. As such, the closer a score is to 100, the less it departs from normative expectations. The normative reference sample is heterogeneous in terms of culture and geography: approximately one-fifth of the sample came from the USA, two-thirds from nine different European countries including Italy, and the remainder from Israel, Argentina, or Brazil. Thus, Western countries, cultures, and languages are well represented within the sample.
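
As an illustration of the percentile-to-SS conversion just described, the sketch below assumes the conventional normal-curve mapping of percentiles onto a standard-score metric; the scoring program itself may use table-based lookups, so the function is illustrative rather than the actual R-PAS implementation.

```python
from scipy import stats

def percentile_to_ss(percentile):
    """Map a normative percentile (0-100) onto the standard-score metric
    plotted on the R-PAS profile pages (M = 100, SD = 15). Illustrative
    normal-curve conversion, not the scoring program's actual code."""
    return 100 + 15 * stats.norm.ppf(percentile / 100)

print(percentile_to_ss(50))  # 100.0 -- exactly at normative expectations
print(percentile_to_ss(84))  # ~114.9 -- roughly one SD above the mean
```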

To assess interrater reliability (IRR), 20 protocols were randomly selected and independently recoded by a rater who was blind to the original coding. The reliability of summary scores was quantified using an exact-agreement intraclass correlation coefficient (ICC) for most variables. However, non-clinical samples generate very few codes representative of severe psychological problems (e.g., cognitive codes indicative of serious thinking problems; see Viglione et al., 2012), so we inspected the summary score distributions to identify those with scores of just 0 or 1 (i.e., dichotomous variables) for either coder. For such variables, there is enough information to assess whether the raters could reliably code the absence of the variable, but too few data points to test whether they could reliably code its presence (for more information, see Lewey et al., 2018). Accordingly, for them, a contingency table was produced, and IRR was established via percentage of agreement and Gwet’s AC (Gwet, 2002), a chance-corrected alternative to Cohen’s kappa.
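
For the dichotomous variables, both agreement indices can be computed directly from the two raters’ codes. A minimal sketch follows, assuming binary presence/absence codes and Gwet’s first-order coefficient (AC1); the data in the usage example are hypothetical, not the study’s actual codes.

```python
def percent_agreement_and_ac1(rater_a, rater_b):
    """Agreement indices for a dichotomous (0/1) variable coded by two
    raters: raw percent agreement and Gwet's AC1, a chance-corrected
    coefficient that, unlike kappa, stays stable when one category is rare."""
    n = len(rater_a)
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    pi = (sum(rater_a) + sum(rater_b)) / (2 * n)            # pooled prevalence of "present"
    pe = 2 * pi * (1 - pi)                                  # chance agreement under AC1
    return pa, (pa - pe) / (1 - pe)

# Hypothetical codes for one rarely occurring variable across 20 protocols:
a = [0] * 18 + [1, 0]
b = [0] * 18 + [1, 1]
print(percent_agreement_and_ac1(a, b))  # -> (0.95, ~0.94)
```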

The target variable of the current study, Complexity, demonstrated excellent IRR, with ICC = .95 (for common standards for characterizing ICC values, see Cicchetti, 1994). For the other 50 R-PAS variables whose distributions were non-dichotomous, ICC ranged from .47 to 1.00, with an average of .77 (SD = .14) and a median of .81. Specifically, 30 (60%) had excellent IRR (ICC > .74), 11 (22%) had good IRR (ICC > .60), and 9 (18%) had fair IRR (ICC > .40); none had poor IRR (ICC ≤ .40). For the remaining 9 dichotomous variables, percentage of agreement ranged from 75.0% to 100.0% (M = 89.2%; SD = 7.2%), and Gwet’s AC values ranged from .70 to 1.00 (M = .87; SD = .09). For these variables, our raters showed good to excellent agreement in coding their absence; however, there were too few positive scores in our data set to also test whether they could reliably code their presence.

Procedure

Prior to beginning recruitment and data collection, the bioethical committee of the University of Turin formally approved the research proposal. Prospective participants were recruited through word-of-mouth. They were first contacted (via email or phone) to make sure inclusion and exclusion criteria were met, and to obtain written consent6 and socio-demographic information. Next, the examiners—graduate students who had been trained by a member of the R-PAS Research and Development Group (last author) and supervised by an experienced R-PAS user (first author) in their data collection—scheduled an administration appointment.

Rorschach administration procedures followed standard R-PAS guidelines as closely as the online administration via the newly developed app allowed. Once the video meeting with the respondent started through an online platform (e.g., WebEx, Zoom), the examiner shared a link to the remote card viewer with the respondent, and the respondent then shared their screen with the assessor. The assessor guided the participant through the test, since the card shown onscreen is controlled by the assessor. The remote card viewer allowed both the assessor and the participant to see the cards and the respondent’s cursor on the card during clarification. So that the respondent would view the cards at their true physical size, the participant was guided through a screen calibration process using a credit card, ID card, or piece of letter-size or A4 paper.7 R-PAS was administered in Italian, since both the examiner and the examinee were native Italian speakers.
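
The article does not detail the app’s calibration mechanics, but the general idea can be sketched: the respondent adjusts an on-screen outline until it matches a physical reference object of known size (e.g., a credit card), which yields the screen’s pixels-per-millimeter and thus the pixel dimensions needed to render the inkblots at true size. The function name and example numbers below are illustrative assumptions, not the app’s actual code.

```python
CREDIT_CARD_WIDTH_MM = 85.6                  # ISO/IEC 7810 ID-1 card width
INKBLOT_W_MM, INKBLOT_H_MM = 241.3, 171.45   # true card size (9.5 x 6.75 in)

def inkblot_size_px(card_width_px):
    """Given how many pixels wide the physical credit card appears onscreen
    (the value the respondent settles on during calibration), return the
    pixel dimensions that render an inkblot at its true physical size.
    Illustrative sketch of the calibration idea, not the app's code."""
    px_per_mm = card_width_px / CREDIT_CARD_WIDTH_MM
    return round(INKBLOT_W_MM * px_per_mm), round(INKBLOT_H_MM * px_per_mm)

print(inkblot_size_px(324))  # on a ~96-DPI display -> (913, 649)
```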

Consistent with standard R-PAS administration, each participant was asked to give two, or maybe three, responses per card. During the response phase (RP), the examiner encouraged a second response if only one was given and moved on to the next card if four responses were given, while providing a reminder of the desired number per card. On completion of the RP, the clarification phase (CP) was conducted as usual, with the participant seeing the cards again and helping the examiner see what they saw during the RP. All protocols were typed, as with in-person administration, and no examiner used the speech-to-text option that is also available in the app.
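
The prompt-and-pull rule just described can be summarized in a few lines; this is a simplified sketch of the decision rule as stated above, not the app’s implementation.

```python
def response_phase_action(num_responses):
    """R-optimized flow per card, as described in the text: encourage a
    second response after only one; move on (with a reminder of the
    desired number) once a fourth response is given."""
    if num_responses == 1:
        return "prompt for another response"
    if num_responses >= 4:
        return "pull the card and remind of the desired number per card"
    return "let the respondent continue"
```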

Data Analysis

The primary aim of this study was to test whether the average complexity score significantly departed from SS = 100, which is the average score one should see if our data perfectly matched R-PAS normative expectations. Because null hypothesis significance testing cannot provide support for a true null hypothesis and can only prove it wrong when that is the case (Altman & Bland, 1995), we also implemented Bayesian statistics. Specifically, we used Rouder et al.’s (2009) JZS Bayes factor (JZS B) to estimate the relative posterior probability of the null and alternative hypotheses, given the data. We then interpreted this odds ratio based on Jeffreys’ (1961) criteria, which suggest that JZS B values of > 3, > 10, and > 30 should be characterized, respectively, as “some evidence,” “strong evidence,” and “very strong evidence” for the prevailing hypothesis.
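
For readers who wish to reproduce this computation, a minimal sketch of the one-sample JZS Bayes factor follows, using the inverse-gamma mixture representation from Rouder et al. (2009). The article does not report the Cauchy prior scale used, so r = 1 (the original JZS default) is an assumption here, and the resulting value may differ somewhat from the reported JZS B.

```python
import numpy as np
from scipy import integrate

def jzs_bf01(t, n, r=1.0):
    """One-sample JZS Bayes factor in favor of the null (Rouder et al., 2009):
    Cauchy(0, r) prior on effect size under H1, expressed as an
    inverse-gamma(1/2, r^2/2) mixture over the variance ratio g."""
    v = n - 1  # degrees of freedom
    # Marginal likelihood under H0 (up to a constant shared with H1)
    like_h0 = (1 + t**2 / v) ** (-(v + 1) / 2)
    # Marginal likelihood under H1: average the t likelihood over the g prior
    def integrand(g):
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5 * np.exp(-r**2 / (2 * g)))
    like_h1, _ = integrate.quad(integrand, 0, np.inf)
    return like_h0 / like_h1

# The study's headline result: t(59) = .525 for Complexity vs. SS = 100.
# Compare with the JZS B = 6.205 reported in the Results; the exact value
# depends on the prior scale r, which is an assumption here.
print(jzs_bf01(t=0.525, n=60))
```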

Next, we examined, for exploratory purposes, the average scores of the other 59 variables on page 1 and page 2. To account for the multiple comparisons problem (Herzog et al., 2019), a Bonferroni-Holm correction was used for these one-sample t-test analyses. That is, we ordered our 59 results by their p value and, at the first step, applied the “pure” Bonferroni correction (e.g., .05/59 = .000847 for α = .05) to the most statistically significant finding. We then sequentially adjusted the critical p value for the number of potentially true null hypotheses remaining in the set of analyses whenever the previous step was significant (for instance, if the first result survived the “pure” Bonferroni correction, the second result was corrected for 58 potentially true nulls, e.g., .05/58 = .000862 for α = .05).
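
The stepwise rule just described is straightforward to implement; here is a minimal sketch (the helper name and example p values are illustrative).

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Sequentially rejective Bonferroni-Holm procedure: compare the k-th
    smallest p value (k = 0, 1, ...) to alpha / (m - k) and stop at the
    first non-significant result."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = [False] * m
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):  # .05/59, then .05/58, ...
            rejected[i] = True
        else:
            break  # all larger p values are likewise retained
    return rejected

print(holm_bonferroni([.0002, .0009, .20]))  # -> [True, True, False]
```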

To characterize the effect sizes of the differences between our sample and normative expectations, we computed Glass’s delta, using the standard deviation of the R-PAS norms (rather than a pooled standard deviation from both samples) as the denominator for d. Doing so provides a more accurate index of the extent to which remote administration departs from normative expectations. When assessing normative equivalence for remote assessment, it is customary to consider differences of less than three-tenths of a SD to be equivalent (e.g., Wright & Raiford, 2021), thus d ≤ .30. It should be noted that these analyses were exploratory, as our study did not have the power to investigate such a large number of variables, given the relatively small sample size.8
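
On the standard-score metric the normative SD is fixed at 15, so the computation reduces to a one-liner; a minimal sketch using the Complexity result reported in the Results section:

```python
NORM_MEAN, NORM_SD = 100.0, 15.0   # R-PAS standard-score metric

def glass_delta(sample_mean):
    """Glass's delta: departure from normative expectations expressed in
    units of the normative SD rather than a pooled SD."""
    return (sample_mean - NORM_MEAN) / NORM_SD

d = glass_delta(100.83)    # Complexity: d ~= 0.055 (see Results)
print(d, abs(d) <= 0.30)   # within the d <= .30 equivalence band
```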

For these exploratory analyses, the R-PAS interpretive output includes seven proportion scores, i.e., with scores other than the number of responses (R) as their denominator. For these scores, a value is computed only if there are at least three relevant codes, so that a respondent may produce a missing value on one or more of these scores. In this study, six of the seven proportion scores had a valid score on at least 40 of the 60 cases in our sample. However, the Mutuality of Autonomy Pathology Proportion (MAP/MAHP) only generated seven valid cases. For that specific proportion score, we substituted its numerator (i.e., Mutuality of Autonomy Pathology; MAP), so that all cases could be included in this analysis using the standard scores that allow comparison to the R-PAS reference norms.9
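
The minimum-codes rule for proportion scores can be expressed compactly; this sketch is illustrative of the rule described above, not the scoring program’s code.

```python
def proportion_score(numerator, denominator, min_codes=3):
    """R-PAS-style proportion score: computed only when at least three
    relevant codes form the denominator; otherwise treated as missing."""
    if denominator < min_codes:
        return None   # missing value, as with MAP/MAHP for most cases here
    return numerator / denominator

print(proportion_score(1, 2))  # None -- too few relevant codes
print(proportion_score(2, 5))  # 0.4
```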

Results

The average Complexity score produced by our sample was SS = 100.83 (SD = 12.29; range = 73.00–130.00) and was not significantly different from the value of SS = 100.00 that one would see if our data perfectly matched R-PAS normative expectations, t(59) = .525, p = .601, d = .055. This result produced a JZS B value of 6.205, which indicates that, given the data, the null hypothesis is more than six times as likely as the alternative. Based on Jeffreys’ (1961) characterization of B, these data yield “some evidence” in support of the hypothesis that using the R-PAS app to administer the Rorschach remotely produces the same Complexity score, on average, as in-person administration.

Results of the exploratory analyses of the other 59 scores are presented in Tables 2 and 3. Twenty-four variables produced statistically significant differences at a Bonferroni-Holm corrected alpha of .05. Effect sizes for these variables ranged from small (d = .29) to medium-large (d = .67). Thirty-two variables were within the range that suggests equivalence (i.e., d ≤ .30). However, the average absolute value of d among these 59 variables was .30 (SD = .20), and 23 variables had both a significant difference and an effect size suggesting non-equivalence.

Table 2.

One-sample t-test and effect size results for page 1 R-PAS variables under investigation

Variable N Min Max M SD t df p d
Administration behaviors and observations
Pr 60 89 119 96.9 9.0  − 2.64 59 .01  − 0.21
Pu 60 96 138 103.2 11.1 2.21 59 .03 0.21
CT 60 86 132 89.7 10.7  − 7.41 59  < .01**  − 0.68
Engagement and cognitive processing
R 60 73 130 104.7 8.5 4.29 59  < .01** 0.31
F% 60 83 120 103.9 13.0 2.31 59 .02 0.26
Blend 60 66 129 95.4 14.6  − 2.46 59 .02  − 0.31
Sy 60 73 135 99.7 14.1  − 0.19 59 .85  − 0.02
MC 60 64 130 100.4 13.4 0.23 59 .82 0.03
MC-PPD 60 68 134 103.9 13.8 2.19 59 .03 0.26
M 60 80 131 105.3 13.6 3.01 59  < .01 0.35
M/MC 57 83 141 107.7 13.7 4.24 56  < .01** 0.51
(CF + C)/SumC 40 84 135 93.9 13.6  − 2.87 39 .01  − 0.41
Perception and thinking problems
EII-3 60 75 126 106.9 14.2 3.76 59  < .01* 0.46
TP-Comp 60 74 143 106.2 14.3 3.36 59  < .01* 0.41
WSumCog 60 72 142 97.2 12.1  − 1.79 59 .08  − 0.19
SevCog 60 79 135 95.6 5.3  − 6.46 59  < .01**  − 0.29
FQ-% 60 94 113 110.1 14.9 5.27 59  < .01** 0.67
WD-% 60 78 143 107.9 14.0 4.35 59  < .01** 0.52
FQo% 60 82 143 90.6 12.2  − 5.96 59  < .01**  − 0.62
Popular 60 66 111 95.1 10.6  − 3.58 59  < .01*  − 0.33
Stress and distress
YTVC' 60 73 119 97.4 13.3  − 1.50 59 .14  − 0.17
m 60 73 133 100.0 13.1 0.01 59 .99 0.00
Y 60 84 131 98.0 12.9  − 1.21 59 .23  − 0.13
MOR 60 85 133 100.3 12.4 0.18 59 .86 0.02
SC-Comp 60 86 123 93.4 12.9  − 3.96 59  < .01**  − 0.44
Self and other representation
ODL% 60 64 117 98.2 12.6  − 1.12 59 .27  − 0.12
SR 60 74 124 96.9 11.1  − 2.16 59 .03  − 0.21
MAP 60 87 127 98.9 11.0  − 0.75 59 .45  − 0.07
PHR/GPHR 56 90 126 108.5 12.5 5.09 55  < .01** 0.57
M- 60 75 136 106.7 13.3 3.89 59  < .01* 0.45
AGC 60 95 143 100.3 14.7 0.17 59 .87 0.02
H 60 74 136 102.4 14.1 1.30 59 .20 0.16
COP 60 75 135 101.5 8.6 1.36 59 .18 0.10
MAH 60 88 120 98.0 8.7  − 1.79 59 .08  − 0.13

*Statistically significant at .05 after applying Bonferroni-Holm correction

**Statistically significant at .01 after applying Bonferroni-Holm correction

Table 3.

One-sample t-test and effect size results for page 2 R-PAS variables under investigation

Variable N Min Max M SD t df p d
Engagement and cognitive processing
W% 60 63 131 99.8 14.2  − 0.12 59 .91  − 0.01
Dd% 60 75 122 97.4 12.1  − 1.66 59 .10  − 0.17
SI 60 74 123 92.6 13.1  − 4.37 59  < .01**  − 0.49
IntCont 60 81 112 91.1 9.9  − 6.96 59  < .01**  − 0.59
Vg% 60 86 117 92.9 8.7  − 6.35 59  < .01**  − 0.48
V 60 92 126 102.7 12.8 1.62 59 .11 0.18
FD 60 88 122 93.2 8.8  − 5.98 59  < .01**  − 0.45
R8910% 60 77 115 98.0 9.8  − 1.62 59 .11  − 0.14
WSumC 60 70 116 93.8 12.9  − 3.72 59  < .01*  − 0.41
C 60 95 114 97.5 6.5  − 2.93 59  < .01  − 0.16
Mp/(Ma + Mp) 45 89 130 109.3 12.2 5.10 44  < .01** 0.62
Perception and thinking problems
FQu% 60 83 128 105.3 12.1 3.42 59  < .01* 0.36
Stress and distress
PPD 60 67 127 96.3 14.0  − 2.03 59 .05  − 0.24
CBlend 60 91 126 97.4 10.2  − 2.00 59 .05  − 0.18
C' 60 84 128 101.9 12.3 1.17 59 .24 0.12
CritCont% 60 78 128 103.3 13.0 1.97 59 .05 0.22
Self and other representation
SumH 60 71 131 104.7 14.2 2.53 59 .01 0.31
NPH/SumH 56 65 127 101.1 13.5 0.58 55 .56 0.07
V-Comp 60 77 129 104.2 11.8 2.77 59 .01 0.28
r (Reflections) 60 95 144 102.7 12.7 1.66 59 .10 0.18
p/(a + p) 58 80 137 108.6 12.8 5.12 57  < .01** 0.57
AGM 60 93 131 101.3 11.5 0.89 59 .38 0.09
T 60 91 107 91.8 3.5  − 18.06 59  < .01**  − 0.55
PER 60 92 109 93.1 4.3  − 12.44 59  < .01**  − 0.46
An 60 85 136 109.7 13.8 5.46 59  < .01** 0.65

Vista (V) is also part of the stress and distress domain. However, we examined it just once in the engagement and cognitive processing domain

*Statistically significant at .05 after applying Bonferroni-Holm correction

**Statistically significant at .01 after applying Bonferroni-Holm correction

Discussion

The ongoing COVID-19 pandemic has posed challenges to psychological and legal evaluations in applied settings. Recent studies have laid the groundwork in support of several remotely administered psychological measures (e.g., Brearly et al., 2017; Cullum et al., 2006; Galusha-Glasscock et al., 2016; Harrell et al., 2014; Parmanto et al., 2013; Smith et al., 2017; Wadsworth et al., 2018; Wright, 2018), identifying valid tests that could be used in a digital format (Corey & Ben-Porath, 2020; Wright, 2020; Wright & Raiford, 2021; Wright et al., 2020). Nevertheless, many clinicians encountered difficulties in transitioning from in-person to online practice, particularly for assessment using performance-based measures, as the online presentation of stimuli (e.g., Rorschach cards) poses additional challenges compared to self-report measures. These difficulties certainly contributed to mental health professionals being unsure of the feasibility of tele-assessment and more reluctant to use it. On the other hand, psychological and legal evaluations could not stop, especially since so many individuals were likely experiencing some degree of distress due to isolation and the spread of COVID-19. In this respect, the state of emergency has served as a fuse to ignite applied interest in tele-assessment. However, no studies had assessed equivalence between standard and remote administration for performance-based personality measures.

Therefore, to update practitioners and researchers on how to use the Rorschach during the pandemic, the current article described the guidelines developed by the R-PAS authors for administering the Rorschach in-person with physical distancing or remotely with the inkblots in hand. To extend those options further, this study pilot-tested a newly developed app for conducting remote administrations using electronic inkblot stimuli developed by Hogrefe, the publisher of the original inkblots. Our findings may be summarized as follows: the general level of engagement shown by test-takers administered the Rorschach remotely with the new R-PAS app closely resembles that previously observed in the general population with “standard” in-person procedures. However, additional research is needed to determine the extent to which currently available R-PAS normative reference values are applicable to this new administration method.

Many studies have reported on the psychometric equivalence of administering tests via paper-and-pencil and computer formats using an in-person administration (e.g., Daniel & Wahlstrom, 2019; Daniel et al., 2014; Finger & Ones, 1999; Forbey & Ben-Porath, 2007; Menton et al., 2019; Pinsoneault, 1996; Roper et al., 1995). Fewer studies have compared in-person and remote administration formats of psychological tests, particularly with respect to performance tasks (e.g., Brearly et al., 2017; Chuah et al., 2006; Marra et al., 2020; Wright, 2018). Because COVID-19 forced many practitioners to conduct their assessments remotely, we directed our research efforts to evaluating a newly developed R-PAS app aimed at allowing remote Rorschach administrations using electronic stimuli. We did so also because we believe that assessment at a distance will likely remain a permanent part of psychological assessment even once the ongoing pandemic has subsided. Notably, the app could also be used for in-person assessments, so future studies could investigate its applicability in the context of face-to-face administrations as well.

The most striking result of our study is that the R-PAS variable Complexity, i.e., “the most important thing that makes one person look different from another person” on the Rorschach (Meyer et al., 2011, p. 319), generated a virtually identical average score when compared to normative reference values derived from standard, face-to-face administration. Both the small effect size of this comparison (d = .055) and its relatively large Bayes factor (JZS B = 6.205) suggest that, overall, performance on the Rorschach task should not change dramatically when one takes it in person at an office with the cards in hand versus electronically and remotely from home via video link. This finding is consistent with emerging research suggesting that other performance-based tests yield similar results when administered in person vs online and remotely (Brearly et al., 2017; Wright, 2018; Wright & Raiford, 2021). It is worth mentioning, however, that while previously published studies focused on tests of maximum performance, ours is, as far as we know, the first to examine a typical-performance measure.

Nevertheless, our study should not be taken as evidence that one can use the newly developed R-PAS remote app with no need to make any adjustments or refinements to existing R-PAS norms. Indeed, a first issue to keep in mind is that our comparison against R-PAS normative reference values is not optimal, as not only the administration format (in-person vs remote) but also the general context in which the data were collected (before vs during COVID-19) differ between the two data sets under examination. As such, even though Rorschach variables—especially those related to complexity—should not be dramatically affected by the different administration contexts, additional studies adopting a test–retest approach or random assignment to administration format are necessary before making any determination with regard to the suitability of extant Rorschach norms for the newly developed R-PAS remote app. The results of our pilot study should therefore be considered as preliminary and our conclusions as tentative. Additionally, our exploratory analyses inspecting all other scores from the R-PAS interpretive profile pages revealed that 23 variables generated a significant difference and an effect size suggesting non-equivalence (d > .30). Although we do not have any conclusive evidence to support our opinion, we believe that some of these discrepancies could be related to using the app, to testing being conducted in the comfort of one’s own home, or to the psychological consequences of the ongoing pandemic and related lifestyle changes.

For instance, relative to the R-PAS norms collected before COVID-19, our sample was more prone to concerns about physical integrity (An), showed more idiosyncratic perceptions (FQ-%, WD-%, FQo%), was more cognitively ideational as opposed to reactive to bright and provocative stimuli (M/MC, WSumC), and generated more representations of passive activity (Mp/[Ma + Mp], p/[a + p]). These qualities could suggest that people in our sample were staying inside, wary of contact with others, ruminative rather than buoyant, and seeing their preoccupations in the cards rather than the conventional things people often notice during more normal times.

Other differences might reflect the modified format used to present the stimuli remotely. This could explain, for example, why our sample was less likely to act on the perceptual environment by modifying the presented orientation of the inkblot stimuli (CT) and less prone to touch-related tactile representations (T). Finally, testing completed at home and at a distance from the assessor, rather than in an office adjacent to the assessor, may have contributed to a reduction in defensive assertions of personal knowledge (PER). All these considerations, however, are quite speculative at this time, given the small size of our sample. As such, additional research would be beneficial to clarify the extent to which these discrepancies from normative expectations represent a true effect and the extent to which any true (i.e., replicating) effect is due to the mode of administration rather than the pandemic or the modified testing setting.

It is also important to underscore that even if non-clinical volunteers were to produce nearly identical R-PAS scores when administered the Rorschach in person vs remotely with electronic stimuli, empirical evidence attesting to the validity of this remote administration format would still be needed. The generalizability of our findings to other cultural contexts also might be questioned, given that our pilot study included only a relatively small group of young and largely single Italian volunteers. As such, although R-PAS scores seem to be unaffected by the nationality and ethnicity of the test-taker or adult age (Meyer et al., 2015), additional research conducted remotely using this app in different cultural environments would be beneficial. Moreover, although adult age has not shown an association with R-PAS scores, our sample consisted mostly of students, and our average age (25.7, SD = 4.5) was much lower than in the R-PAS norms (37.3, SD = 13.4), which may have played a role in our findings.

Furthermore, another aspect that our pilot investigation could not address is the extent to which individuals suffering from cognitive or psychiatric deficits or unfamiliar with computers or videoconferencing (e.g., elderly patients) could comply with remote administration requirements. To answer this and many other similar questions, more research is clearly needed.

Overall, the pandemic has boosted the growth of tele-assessment, given the pressing need for professional services delivered remotely to the benefit of many people (Wosik et al., 2020). Being able to administer the Rorschach remotely holds promise for similarly benefiting those in need of assessment during the pandemic. However, administering the Rorschach via videoconference would be useful beyond this moment of health crisis, encompassing other circumstances such as assessing individuals with limited mobility, those for whom travel would involve more cost than benefit, inmates, or patients living in areas with few or no professional assessors.

The past year and a half has also seen courts shifting to forensic mental tele-health assessment (FMTA; Drogin, 2020). Pandemic limitations have affected private clinics, hospitals, prisons, and the offices of attorneys and consultants (Wright et al., 2020), which has encouraged forensic evaluators to practice remotely (Levy, 2020). Drogin (2020) predicts that FMTA will not disappear once COVID-19 is brought under control but rather will become increasingly prevalent in the years to come, as it offers many benefits, such as reduced travel expenses, more flexible scheduling, and service to rural or remote areas. Hence, it is crucial that psycho-legal evaluations adapt to this ongoing change.

Rorschach assessment should be similarly adaptable. Potential limitations need to be considered, such as access to the technology needed for administration (e.g., a laptop and high-speed internet), cultural and personal factors (e.g., familiarity with the technology), and the environment in which the examinee is located during administration. Nevertheless, developing an online method for administering the Rorschach is essential as assessment becomes increasingly oriented toward an “online” methodology. In light of this, our study represents the beginning of a systematic effort to demonstrate that the Rorschach can be administered online. Although our preliminary results are far from conclusive, they appear promising in addressing the important question, “Can the Rorschach yield interpretively useful information when administered remotely?”

Funding

Open access funding provided by Università degli Studi di Torino within the CRUI-CARE Agreement.

Declarations

Ethics Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Conflict of Interest

Gregory Meyer (second author) and Joni Mihura (third author) own a share in the limited liability company (LLC) that holds the rights to the Rorschach Performance Assessment System. All other authors declare no competing interests.

Footnotes

1

The term tele-health refers to physical and mental health assessment, prevention, and intervention services provided remotely. Tele-health thus covers a broader spectrum than tele-assessment, which is defined as the practice of administering, via tele-communication, psychological measures traditionally administered face-to-face or side-by-side in the same room (Krach et al., 2020).

2

On October 11, 2021, a PsycINFO (EBSCO) search limited to the last decade (2010–2021) for articles using the keywords “Rorschach” or “Rorschach test” found 1424 results.

3

Maximum performance measures provide a clear and correct solution, guidance for successfully achieving that end, limited response options, and instructional sets that motivate behavior. Typical performance measures, in contrast, provide a context for engaging with a task, but the task lacks clear standards for success, offers many response options, and leaves it to the respondent to determine how to engage it.

4

FSN = fail-safe N, which is the number of effect sizes with null results required to bring the observed significance of the summary effect size to a level above p = .05.
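
For readers unfamiliar with the statistic, the classic formulation is Rosenthal’s (1979) fail-safe N, sketched below; we note as an assumption that the meta-analyses summarized in this article may have computed a different variant.

```latex
% Rosenthal's (1979) fail-safe N: the number of unpublished null studies
% needed to raise the combined one-tailed p above .05 (z_{.05} = 1.645).
% k = number of aggregated effect sizes; Z_i = standard normal deviate of study i.
N_{fs} = \frac{\left( \sum_{i=1}^{k} Z_i \right)^{2}}{1.645^{2}} - k
```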

5

Between 40 and 48% of assessors use the Rorschach in their daily practice (Ackerman & Ackerman, 1997; Quinnell & Bow, 2001).

6

Participants were asked to carefully read an information sheet regarding the research they were about to participate in and, if they agreed, to sign the informed consent form.

7

Participants select one of these commonly available objects of standard size and hold it against the screen, aligning it with a representative image of that object displayed there. By moving a slide bar, the participant adjusts the size of the on-screen image until it matches the size of the physical object.
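
As a rough illustration of the arithmetic behind this calibration step, the sketch below assumes a credit-card-sized reference object (ISO/IEC 7810 ID-1, 85.60 mm wide); the function names, the slider value, and the target stimulus size are hypothetical and not taken from the app.

```python
# Minimal sketch (assumptions noted above; not the app's actual code) of the
# screen-calibration arithmetic behind the slide-bar procedure.

CARD_WIDTH_MM = 85.60  # ISO/IEC 7810 ID-1 width (e.g., a credit card)

def pixels_per_mm(matched_image_px: float) -> float:
    """Once the slider makes the on-screen image exactly cover the physical
    card held against the screen, the ratio of the image's pixel width to
    the card's physical width gives the display's true pixel density."""
    return matched_image_px / CARD_WIDTH_MM

def stimulus_width_px(target_mm: float, px_per_mm: float) -> float:
    """Pixel width needed to render a stimulus at a given physical size."""
    return target_mm * px_per_mm

# Example: the participant matched the card image at 340 px, so an inkblot
# meant to appear 170 mm wide (a hypothetical target) needs about 675 px.
scale = pixels_per_mm(340.0)
print(round(stimulus_width_px(170.0, scale)))  # -> 675
```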

8

This study was designed to investigate complexity. We calculated that, with a power of .80, a small-to-medium effect size of d = .35, and an alpha level of .05, a sample of about 60 to 70 participants was needed for a one-sample t-test (Cohen, 1988). To test a notably larger number of variables, however, a larger sample would be necessary to account for the multiple comparisons problem.
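
As a quick check of this computation, the sketch below solves for the required sample size with statsmodels; this is an illustrative re-computation of the stated parameters, not necessarily the software the authors used.

```python
# Required n for a one-sample t-test with d = .35, alpha = .05, power = .80
# (an illustrative re-computation; not necessarily the authors' tool).
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.35, alpha=0.05, power=0.80)
print(round(n))  # ~66 participants, consistent with the reported 60 to 70
```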

9

Previous studies have followed the R-PAS recommendation to use difference scores as substitutes for research purposes (e.g., Schneider et al., 2020). We did not do so because standard scores are not provided for them.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Change history

7/21/2022

Springer Nature’s version of this paper was updated: Missing Open Access funding information has been added in the Funding Note.

Contributor Information

Francesca Ales, Email: francesca.ales@unito.it.

Gregory J. Meyer, Email: gregory.meyer@utoledo.edu.

Joni L. Mihura, Email: joni.mihura@utoledo.edu.

Andrea Corgiat Loia, Email: andrea.corgiatloia@edu.unito.it.

Sara Pasqualini, Email: sara.pasqualini@edu.unito.it.

Alessandro Zennaro, Email: alessandro.zennaro@unito.it.

Luciano Giromini, Email: luciano.giromini@unito.it.

References

  1. Ackerman MJ, Ackerman MC. Child custody evaluation practices: A survey of experienced professionals (revisited). Professional Psychology: Research and Practice. 1997;28(2):137–145. doi: 10.1037/0735-7028.28.2.137. [DOI] [Google Scholar]
  2. Ackerman MJ, Kane AW. Psychological experts in divorce actions. 5. Aspen; 2011. [Google Scholar]
  3. Ales, F., Giromini, L., & Zennaro, A. (2019). Complexity and cognitive engagement in the Rorschach task: An eye-tracking study. Journal of Personality Assessment, 538–550. 10.1080/00223891.2019.1575227 [DOI] [PubMed]
  4. Altman DG, Bland JM. Absence of evidence is not evidence of absence. British Medical Journal. 1995;311:485. doi: 10.1136/bmj.311.7003.485. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Ando’, A., Pineda, J. A., Giromini, L., Soghoyan, G., Yang, Q., Bohm, M., Maryanovsky, D., & Zennaro, A. Effects of repetitive transcranial magnetic stimulation (rTMS) on attribution of movement to ambiguous stimuli and EEG mu suppression. Brain Research. 2018;1680:69–76. doi: 10.1016/j.brainres.2017.12.007. [DOI] [PubMed] [Google Scholar]
  6. Ando’, A., Salatino, A., Giromini, L., Ricci, R., Pignolo, C., Cristofanelli, S., & Ferro, L., Viglione, D. J., & Zennaro, A. Embodied simulation and ambiguous stimuli: The role of the mirror neuron system. Brain Research. 2015;1629:135–142. doi: 10.1016/j.brainres.2015.10.025. [DOI] [PubMed] [Google Scholar]
  7. Archer RP, Buffington-Vollum JK, Stredney RV, Handel RW. A survey of psychological test use patterns among forensic psychologists. Journal of Personality Assessment. 2006;87:84–95. doi: 10.1207/s15327752jpa8701_07. [DOI] [PubMed] [Google Scholar]
  8. Archer RP, Smith SR, editors. Personality assessment. Routledge; 2014. [Google Scholar]
  9. Asari T, Konishi S, Jimura K, Chikazoe J, Nakamura N, Miyashita Y. Amygdalar enlargement associated with unique perception. Cortex. 2010;46:94–99. doi: 10.1016/j.cortex.2008.08.001. [DOI] [PubMed] [Google Scholar]
  10. Barnett ML, Ray KN, Souza J, Mehrotra A. Trends in telemedicine use in a large commercially insured population. JAMA. 2018;320(20):2147–2149. doi: 10.1001/jama.2018.12354. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Batastini AB, King CM, Morgan RD, McDaniel B. Telepsychological services with criminal justice and substance abuse clients: A systematic review and meta-analysis. Psychological Services. 2016;13(1):20–30. doi: 10.1037/ser0000042. [DOI] [PubMed] [Google Scholar]
  12. Belter RW, Piotrowski C. Current status of doctoral-level training in psychological testing. Journal of Clinical Psychology. 2001;57(6):717–726. doi: 10.1002/jclp.1044. [DOI] [PubMed] [Google Scholar]
  13. Ben-Porath YS, Tellegen A. Minnesota Multiphasic Personality Inventory-3 (MMPI-3): Manual for administration, scoring, and interpretation. University of Minnesota Press; 2020. [Google Scholar]
  14. Boccaccini MT, Brodsky SL. Diagnostic test usage by forensic psychologists in emotional injury cases. Professional Psychology: Research and Practice. 1999;30(3):253–259. doi: 10.1037/0735-7028.30.3.253. [DOI] [Google Scholar]
  15. Bolton AJ, Dorstyn DS. Telepsychology for posttraumatic stress disorder: A systematic review. Journal of Telemedicine and Telecare. 2015;21(5):254–267. doi: 10.1177/1357633X15571996. [DOI] [PubMed] [Google Scholar]
  16. Bornstein RF. Criterion validity of objective and projective dependency tests: A meta-analytic assessment of behavioral prediction. Psychological Assessment. 1999;11:48–57. doi: 10.1037/1040-3590.11.1.48. [DOI] [Google Scholar]
  17. Bornstein RF. A process dissociation approach to objective-projective test score interrelationships. Journal of Personality Assessment. 2002;78:47–68. doi: 10.1207/S15327752JPA7801_04. [DOI] [PubMed] [Google Scholar]
  18. Borum, R. & Grisso, T. (1995). Psychological tests and criminal forensic evaluations. Professional Psychology: Research and Practice, 26, 465–473. 10.1037/0735-7028.26.5.465
  19. Brearly TW, Shura RD, Martindale SL, Lazowski RA, Luxton DD, Shenal BV, Rowland JA. Neuropsychological test administration by videoconference: A systematic review and meta-analysis. Neuropsychology Review. 2017;27(2):174–186. doi: 10.1007/s11065-017-9349-1. [DOI] [PubMed] [Google Scholar]
  20. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment. 1994;6(4):284–290. doi: 10.1037/1040-3590.6.4.284. [DOI] [Google Scholar]
  21. Chen, J., & Proctor, R. W. (2017). Role of accentuation in the selection/rejection task framing effect. Journal of Experimental Psychology: General, 146, 543–568. 10.1037/xge0000277 [DOI] [PubMed]
  22. Childs RA, Eyde LD. Assessment training in clinical psychology doctoral programs: What should we teach? What do we teach? Journal of Personality Assessment. 2002;78(1):130–144. doi: 10.1207/S15327752JPA7801_08. [DOI] [PubMed] [Google Scholar]
  23. Chuah SC, Drasgow F, Roberts BW. Personality assessment: Does the medium matter? Journal of Research in Personality. 2006;40(4):359–376. doi: 10.1016/j.jrp.2005.01.006. [DOI] [Google Scholar]
  24. Corbetta M, Shulman G. Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience. 2002;3:201–215. doi: 10.1038/nrn755. [DOI] [PubMed] [Google Scholar]
  25. Corey DM, Ben-Porath YS. Practical guidance on the use of the MMPI instruments in remote psychological testing. Professional Psychology: Research and Practice. 2020;51(3):199–204. doi: 10.1037/pro0000329. [DOI] [Google Scholar]
  26. Cronbach LJ. Essentials of psychological testing. Harper; 1949. [Google Scholar]
  27. Cohen J. Statistical power analysis for the behavioral sciences. 2. Erlbaum; 1988. [Google Scholar]
  28. Cullum CM, Hynan LS, Grosch M, Parikh M, Weiner MF. Teleneuropsychology: Evidence for video teleconference-based neuropsychological assessment. Journal of the International Neuropsychological Society. 2014;20(10):1028–1033. doi: 10.1017/S1355617714000873. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Cullum CM, Weiner MF, Gehrmann HR, Hynan LS. Feasibility of telecognitive assessment in dementia. Assessment. 2006;13(4):385–390. doi: 10.1177/1073191106289065. [DOI] [PubMed] [Google Scholar]
  30. Daniel M, Wahlstrom D. Raw-score equivalence of computer-assisted and paper versions of WISC-V. Psychological Services. 2019;16(2):213–220. doi: 10.1037/ser0000295. [DOI] [PubMed] [Google Scholar]
  31. Daniel, M. H., Wahlstrom, D., & Zhang, O. (2014). Equivalence of Q-interactive™ and paper administrations of cognitive tasks: WISC®-V (Q-interactive Technical Report 8).
  32. Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993). 509, U.S. 579, 113 S.Ct. 2786.
  33. Diener MJ, Hilsenroth MJ, Shaffer SA, Sexton JE. A meta-analysis of the relationship between the Rorschach Ego Impairment Index (EII) and psychiatric severity. Clinical Psychology & Psychotherapy. 2011;18:464–485. doi: 10.1002/cpp.725. [DOI] [PubMed] [Google Scholar]
  34. Drogin EY. Forensic mental telehealth assessment (FMTA) in the context of COVID-19. International Journal of Law and Psychiatry. 2020;71:101595. doi: 10.1016/j.ijlp.2020.101595. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Erard RE. Expert testimony using the Rorschach performance assessment system in psychological injury cases. Psychological Injury and Law. 2012;5:122–134. doi: 10.1007/s12207-012-9126-7. [DOI] [Google Scholar]
  36. Erard RE, Meyer GJ, Viglione DJ. Setting the record straight: Comment on Gurley, Piechowski, Sheehan, and Gray (2014) on the admissibility of the rorschach performance assessment system (R-PAS) in court. Psychological Injury and Law. 2014;7:165–177. doi: 10.1007/s12207-014-9195-x. [DOI] [Google Scholar]
  37. Erard, R. E., & Viglione, D. J. (2014). The Rorschach performance assessment system (R-PAS) in child custody evaluations. Journal of Child Custody. Applying Research to Parenting and Assessment Practice and Policies, 11(3), 159–180. 10.1080/15379418.2014.943449
  38. Exner JE. The Rorschach: A comprehensive system, vol: 1: Basic foundations. Wiley; 1974. [Google Scholar]
  39. Exner, J. E. (2003). The Rorschach: A comprehensive system: Vol. 1. Basic foundations (4th ed.). New York, NY: Wiley.
  40. Finger MS, Ones DS. Psychometric equivalence of the computer and booklet forms of the MMPI: A meta-analysis. Psychological Assessment. 1999;11(1):58–66. doi: 10.1037/1040-3590.11.1.58. [DOI] [Google Scholar]
  41. Finn SE. Journeys through the valley of death: Multimethod psychological assessment and personality transformation in long-term psychotherapy. Journal of Personality Assessment. 2011;93:123–141. doi: 10.1080/00223891.2010.542533. [DOI] [PubMed] [Google Scholar]
  42. First, M. B., Williams, J. B. W., Karg, R. S., & Spitzer, R. L. (2015), Structured clinical interview for DSM-5. Arlington, VA: American Psychiatric Association.
  43. Forbey JD, Ben-Porath YS. Computerized adaptive personality testing: A review and illustration with the MMPI-2 computerized adaptive version. Psychological Assessment. 2007;19(1):14–24. doi: 10.1037/1040-3590.19.1.14. [DOI] [PubMed] [Google Scholar]
  44. Freitas FR, Pasian SR. Reassessment (after 15 years) of non-patient adults by the Rorschach method. The Spanish Journal of Psychology. 2018;21:1–10. doi: 10.1017/sjp.2018.36. [DOI] [PubMed] [Google Scholar]
  45. Gacono CB, Evans FB, editors. The handbook of forensic Rorschach assessment. Taylor & Francis; 2008. pp. 561–566. [Google Scholar]
  46. Galusha-Glasscock JM, Horton DK, Weiner MF, Cullum CM. Video teleconference administration of the repeatable battery for the assessment of neuropsychological status. Archives of Clinical Neuropsychology. 2016;31(1):8–11. doi: 10.1093/arclin/acv058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Ganellen RJ. Rorschach assessment of malingering and response sets. In: Gacono CB, Evans FB, editors. The handbook of forensic Rorschach assessment. Taylor & Francis; 2008. pp. 89–120. [Google Scholar]
  48. Ganellen RJ, Wasyliw OW, Haywood TW, Grossman LS. Can psychosis be malingered on the Rorschach? An empirical study. Journal of Personality Assessment. 1996;66:65–80. doi: 10.1207/s15327752jpa6601_5. [DOI] [PubMed] [Google Scholar]
  49. Garb HN. Computer-administered interviews and rating scales. Psychological Assessment. 2007;19(1):4–13. doi: 10.1037/1040-3590.19.1.4. [DOI] [PubMed] [Google Scholar]
  50. General Electric v. Joiner. (1997). 522 US 136, 118, S. Ct. 512, 139 L Ed 2d 508.
  51. Germain V, Marchand A, Bouchard S, Guay S, Drouin M-S. Assessment of the therapeutic alliance in face-to-face or videoconference treatment for posttraumatic stress disorder. Cyberpsychology, Behavior, and Social Networking. 2010;13(1):29–35. doi: 10.1089/cyber.2009.0139. [DOI] [PubMed] [Google Scholar]
  52. Giromini, L., Ando’, A., Morese, R., Salatino, A., Di Girolamo, M., Viglione, D. J., & Zennaro, A. (2016). Rorschach performance assessment system (R-PAS) and vulnerability to stress: A preliminary study on electrodermal activity during stress. Psychiatry Research, 246, 166–172. 10.1016/j.psychres.2016.09.036 [DOI] [PubMed]
  53. Giromini L, Pignolo C, Young G, Drogin EY, Zennaro A, Viglione DJ. Comparability and validity of the online and in-person administrations of the inventory of problems-29. Psychological Injury and Law. 2021;14:77–88. doi: 10.1007/s12207-021-09406-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Giromini L, Porcelli P, Viglione DJ, Parolin L, Pineda JA. The feeling of movement: EEG evidence for mirroring activity during the observations of static, ambiguous stimuli in the Rorschach cards. Biological Psychology. 2010;85:233–241. doi: 10.1016/j.biopsycho.2010.07.008. [DOI] [PubMed] [Google Scholar]
  55. Giromini L, Viglione DJ, Pineda JA, Porcelli P, Hubbard D, Zennaro A, Cauda F. Human movement responses to the Rorschach and mirroring activity: An fMRI study. Assessment. 2019;26:56–69. doi: 10.1177/1073191117731813. [DOI] [PubMed] [Google Scholar]
  56. Giromini L, Viglione DJ, Zennaro A, Cauda F. Neural activity during production of Rorschach responses: An fMRI study. Psychiatry Research: Neuroimaging. 2017;262:25–31. doi: 10.1016/j.pscychresns.2017.02.001. [DOI] [PubMed] [Google Scholar]
  57. Giromini L, Viglione DJ, Vitolo E, Cauda F, Zennaro A. Introducing the concept of neurobiological foundation of Rorschach responses using the example of oral dependent language. Scandinavian Journal of Psychology. 2019;60:528–538. doi: 10.1111/sjop.12585. [DOI] [PubMed] [Google Scholar]
  58. Graceffo RA, Mihura JL, Meyer GJ. A meta-analysis of an implicit measure of personality functioning: The mutuality of autonomy scale. Journal of Personality Assessment. 2014;96:581–595. doi: 10.1080/00223891.2014.919299. [DOI] [PubMed] [Google Scholar]
  59. Grady B, Myers KM, Nelson E-L, Belz N, Bennett L, Carnahan L, Voyles D. Evidence-based practice for telemental health. Telemedicine and e-Health. 2011;17(2):131–148. doi: 10.1089/tmj.2010.0158. [DOI] [PubMed] [Google Scholar]
  60. Grønnerød C. Temporal stability in the Rorschach method: A meta-analytic review. Journal of Personality Assessment. 2003;80:272–293. doi: 10.1207/S15327752JPA8003_06. [DOI] [PubMed] [Google Scholar]
  61. Grosch MC, Weiner MF, Hynan LS, Shore J, Cullum CM. Video teleconference-based neurocognitive screening in geropsychiatry. Psychiatry Research. 2015;225(3):734–735. doi: 10.1016/j.psychres.2014.12.040. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Grossman LS, Wasyliw OE, Benn AF, Gyoerkoe KL. Can sex offenders who minimize on the MMPI conceal psychopathology on the Rorschach? Journal of Personality Assessment. 2002;78:484–501. doi: 10.1207/S15327752JPA7803_07. [DOI] [PubMed] [Google Scholar]
  63. Gwet K. Kappa statistic is not satisfactory for assessing the extent of agreement between raters. Statistical Methods for Inter-Rater Reliability Assessment. 2002;1(6):1–6. [Google Scholar]
  64. Harrell KM, Wilkins SS, Connor MK, Chodosh J. Telemedicine and the evaluation of cognitive impairment: The additive value of neuropsychological assessment. Journal of the American Medical Directors Association. 2014;15(8):600–606. doi: 10.1016/j.jamda.2014.04.015. [DOI] [PubMed] [Google Scholar]
  65. Hemphill, J. F. (2003). Interpreting the magnitudes of correlation coefficients. American Psychologist, 58, 78–79. 10.1037/0003-066X.58.1.78 [DOI] [PubMed]
  66. Herzog, M. H., Francis, G., Clarke, A. (2019). The multiple testing problem. In: Understanding Statistics and Experimental Design. Learning Materials in Biosciences. Springer, Cham. 10.1007/978-3-030-03499-3_5
  67. Hyler S, Gangure D, Batchelder S. Can telepsychiatry replace in-person psychiatric assessments? A review and meta-analysis of comparison studies. CNS Spectrums. 2005;10(5):403–415. doi: 10.1017/S109285290002277X. [DOI] [PubMed] [Google Scholar]
  68. Jarodzka H, Scheiter S, Gerjets P, Van Gog T. In the eyes of the beholder: How experts and novices interpret dynamic stimuli. Learning and Instruction. 2010;20:146–154. doi: 10.1016/j.learninstruc.2009.02.019. [DOI] [Google Scholar]
  69. Jeffreys H. Theory of probability. 3. Oxford University Press, Clarendon Press; 1961. [Google Scholar]
  70. Jensen AR. Scoring the Stroop test. Acta Psychologica. 1965;24:398–408. doi: 10.1016/0001-6918(65)90024-7. [DOI] [PubMed] [Google Scholar]
  71. Joint Task Force for the Development of Telepsychology Guidelines for Psychologists. Guidelines for the practice of telepsychology. American Psychologist. 2013;68(9):791–800. doi: 10.1037/a0035001. [DOI] [PubMed] [Google Scholar]
  72. Jørgensen K, Andersen TJ, Dam H. The diagnostic efficiency of the Rorschach depression index and the schizophrenia index: A review. Assessment. 2000;7:259–280. doi: 10.1177/107319110000700306. [DOI] [PubMed] [Google Scholar]
  73. Khadivi A, Barton Evans F. The brave new world of forensic Rorschach assessment: Comments on the Rorschach special section. Psychological Injury and Law. 2012;5:145–149. doi: 10.1007/s12207-012-9134-7. [DOI] [Google Scholar]
  74. Kline P. A handbook of test construction (psychology revivals): Introduction to psychometric design. Routledge; 2015. [Google Scholar]
  75. Kumho Tire v. Carmichael. (1999). 526 US, 199 S Ct 1167.
  76. Krach SK, Paskiewicz TL, Monk MM. Testing our children when the world shuts down: Analyzing recommendations for adapted tele-assessment during COVID-19. Journal of Psychoeducational Assessment. 2020;38(8):923–941. doi: 10.1177/0734282920962839. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Laeng, B., Ørbo, M., Holmlund, T., & Miozzo, M. (2011). Pupillary stroop effects. Cognitive Processing, 12, 13–21. 10.1007/s10339-010-0370-z [DOI] [PMC free article] [PubMed]
  78. Levy, M. I. (2020). Virtual forensic psychiatric practice: A lawyer’s guide. Forensic psychiatric associates medical corporation. https://fpamed.com/virtual-forensicpsychiatric-practice-a-lawyers-guide
  79. Lexcen FJ, Hawk GL, Herrick S, Blank MB. Use of video conferencing for psychiatric and forensic evaluations. Psychiatric Services. 2006;57(5):713–715. doi: 10.1176/ps.2006.57.5.713. [DOI] [PubMed] [Google Scholar]
  80. Lewey JH, Kivisalu TM, Giromini L. Coding with RPAS: Does prior training with the Exner comprehensive system impact interrater reliability compared to those examiners with only R-PAS based training? Journal of Personality Assessment. 2018;14:1–9. doi: 10.1080/00223891.2018.1476361. [DOI] [PubMed] [Google Scholar]
  81. Lilienfeld, S. O., Wood, J. M., Garb, H. N. (2000). The scientific status of projective techniques. Psychological Science in the Public Interest, 1(2), 27–66. 10.1111/1529-1006.002 [DOI] [PubMed]
  82. Loh P-K, Donaldson M, Flicker L, Maher S, Goldswain P. Development of a telemedicine protocol for the diagnosis of Alzheimer’s disease. Journal of Telemedicine and Telecare. 2007;13:90–94. doi: 10.1258/135763307780096159. [DOI] [PubMed] [Google Scholar]
  83. Lundbäck E, Forslund K, Rylander G, Jokinen J, Nordström P, Nordström AL, Asberg M. CSF 5-HIAA and the Rorschach test in patients who have attempted suicide. Archives of Suicide Research. 2006;10(4):339–345. doi: 10.1080/13811110600790942. [DOI] [PubMed] [Google Scholar]
  84. Luxton DD, Pruitt LD, Osenbach JE. Best practices for remote psychological assessment via telehealth technologies. Professional Psychology: Research and Practice. 2014;45(1):27–35. doi: 10.1037/a0034547. [DOI] [Google Scholar]
  85. Manguno-Mire GM, Thompson JW, Shore JH, Croy CD, Artecona JF, Pickering JW. The use of telemedicine to evaluate competency to stand trial: A preliminary randomized controlled study. Journal of the American Academy of Psychiatry and the Law. 2007;35:481–489. [PubMed] [Google Scholar]
  86. Marra DE, Hamlet KM, Bauer RM, Bowers D. Validity of teleneuropsychology for older adults in response to COVID-19: A systematic and critical review. The Clinical Neuropsychologist. 2020;34(7–8):1411–1452. doi: 10.1080/13854046.2020.1769192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. McClelland, D. C., Koestner, R. &Weinberger, J. (1989). How do self-attributed and implicit motives differ? Psychological Review, 96, 690–702. 10.1037/0033-295X.96.4.690
  88. McGrath RE. The Rorschach in the context of performance-based personality assessment. Journal of Personality Assessment. 2008;90:465–475. doi: 10.1080/00223890802248760. [DOI] [PubMed] [Google Scholar]
  89. Meaney MJ. Maternal care, gene expression, and the transmission of individual differences in stress reactivity across generations. Annual Review of Neuroscience. 2001;24:1161–1192. doi: 10.1146/annurev.neuro.24.1.1161. [DOI] [PubMed] [Google Scholar]
  90. Meloy JR. The authority of the Rorschach: An update. In: Gacono CB, Evans FB, editors. The handbook of forensic Rorschach assessment. Taylor & Francis; 2008. pp. 79–87. [Google Scholar]
  91. Meloy JR, Hansen TL, Weiner IB. Authority of the Rorschach: Legal citations during the past 50 years. Journal of Personality Assessment. 1987;69:53–82. doi: 10.1207/s15327752jpa6901_3. [DOI] [Google Scholar]
  92. Menton, W. H., Crighton, A. H., Tarescavage, A. M., Marek, R. J., Hicks, A. D., & Ben-Porath, Y. S. (2019). Equivalence of laptop and tablet administrations of the Minnesota Multiphasic Personality Inventory-2 Restructured Form. Assessment, 26(4), 661–669. 10.1177/1073191117714558 [DOI] [PubMed]
  93. Meyer, G. J. (1992). Response frequency problems in the Rorschach: Clinical and research implications with suggestions for the future. Journal of Personality Assessment, 58, 231–244. 10.1207/s15327752jpa5802_2 [DOI] [PubMed]
  94. Meyer GJ. The convergent validity of MMPI and Rorschach scales: An extension using profile scores to define response and character styles on both methods and a reexamination of simple Rorschach response frequency. Journal of Personality Assessment. 1999;72:1–35. doi: 10.1207/s15327752jpa7201_1. [DOI] [PubMed] [Google Scholar]
  95. Meyer GJ, Archer RP. The hard science of Rorschach research: What do we know and where do we go? Psychological Assessment. 2001;13(4):486–502. doi: 10.1037/1040-3590.13.4.486. [DOI] [PubMed] [Google Scholar]
  96. Meyer GJ, Eblin JJ. An overview of the Rorschach performance assessment system (R-PAS) Psychological Injury and Law. 2012;5(2):107–121. doi: 10.1007/s12207-012-9130-y. [DOI] [Google Scholar]
  97. Meyer GJ, Giromini L, Viglione DJ, Reese JB, Mihura JL. The association of gender, ethnicity, age, and education with Rorschach scores. Assessment. 2015;22(1):46–64. doi: 10.1177/1073191114544358. [DOI] [PubMed] [Google Scholar]
  98. Meyer GJ, Katko NJ, Mihura JL, Klag MJ, Meoni LA. The incremental validity of self-report and performance-based methods for assessing hostility to predict cardiovascular disease in physicians. Journal of Personality Assessment. 2018;100(1):68–83. doi: 10.1080/00223891.2017.1306780. [DOI] [PubMed] [Google Scholar]
  99. Meyer GJ, Riethmiller RJ, Brooks RD, Benoit WA, Handler L. A replication of Rorschach and MMPI-2 convergent validity. Journal of Personality Assessment. 2000;74:175–215. doi: 10.1207/S15327752JPA7402_3. [DOI] [PubMed] [Google Scholar]
  100. Meyer GJ, Viglione DJ, Giromini L. In: Personality assessment. 2. Archer RP, Smith SR, editors. Routledge; 2014. pp. 1–36. [Google Scholar]
  101. Meyer, G. J., Viglione, D. J., Mihura, J. L., Erard, R. E., & Erdberg, P. (2011) Rorschach performance assessment system: Administration, coding, interpretation, and technical manual, Toledo, OH, Rorschach Performance Assessment System.
  102. Meyer, G., Viglione, D. J., Mihura, J. L., Erdberg, P., Bram, A., Giromini, L., Grønnerød, C., Kleiger, J., Lipkind, J., de Ruiter, C., Pianowski, G., & Vanhoyland, M. (2020). Recommendations concerning remote administration of the Rorschach. Retrieved from https://rpas.org/Docs/Remote%20Administration%20of%20the%20Rorschach.pdf
  103. Mihura JL. The necessity of multiple test methods in conducting assessments: The role of the Rorschach and self-report. Psychological Injury and Law. 2012;6:97–106. doi: 10.1007/s12207-012-9132-9. [DOI] [Google Scholar]
  104. Mihura, J. L., & Meyer, G. J. (Eds.). (2018). Using the Rorschach Performance Assessment System®(R-PAS®). The Guilford Press.
  105. Mihura, J. L., Meyer, G. J., Bombel, G., & Dumitrascu, N. (2015). Standards, accuracy, and questions of bias in Rorschach meta-analyses: Reply to Wood, Garb, Nezworski, Lilienfeld, and Duke (2015). Psychological Bulletin, 141, 250–260. 10.1037/a0038445 [DOI] [PubMed]
  106. Mihura JL, Meyer GJ, Dumitrascu N, Bombel G. The validity of individual Rorschach variables: Systematic reviews and meta-analyses of the comprehensive system. Psychological Bulletin. 2013;139:548–605. doi: 10.1037/a0029406. [DOI] [PubMed] [Google Scholar]
  107. Mihura JL, Roy M, Graceffo RA. Psychological assessment training in clinical psychology doctoral programs. Journal of Personality Assessment. 2017;99(2):153–164. doi: 10.1080/00223891.2016.1201978. [DOI] [PubMed] [Google Scholar]
  108. Millon T, Grossman S, & Millon C. (2015) MCMI-IV. Pearson
  109. Minassian, A., Granholm, E., Verney, S., & Perry, W. (2005). Visual scanning deficits in schizophrenia and their relationship to executive functioning impairment. Schizophrenia Research, 74, 69–79. 10.1016/j.schres.2004.07.008 [DOI] [PubMed]
  110. Morey, L. C. (1991). Personality assessment inventory-professional manual. Lutz: Psychological Assessment Resources.
  111. Morgan RD, Patrick AR, Magaletta PR. Does the use of telemental health alter the treatment experience? Inmates’ perceptions of telemental health versus face-to-face treatment modalities. Journal of Consulting and Clinical Psychology. 2008;76(1):158–162. doi: 10.1037/0022-006X.76.1.158. [DOI] [PubMed] [Google Scholar]
  112. Neal TM, Grisso T. Assessment practices and expert judgment methods in forensic psychology and psychiatry: An international snapshot. Criminal Justice and Behavior. 2014;41(12):1406–1421. doi: 10.1177/0093854814548449. [DOI] [Google Scholar]
  113. Parmanto B, Pulantara IW, Schutte JL, Saptono A, McCue MP. An integrated telehealth system for remote administration of an adult autism assessment. Telemedicine and e-Health. 2013;19(2):88–94. doi: 10.1089/tmj.2012.0104. [DOI] [PubMed] [Google Scholar]
  114. Perry W., Sprock J., Schaible D., McDougall A., Minassian A., Jenkins M., & Braff D. (1995) Amphetamine on Rorschach measures in normal subjects. Journal of Personality Assessment, 64(3), 456–465. 10.1207/s15327752jpa6403_5 [DOI] [PubMed]
  115. Pineda, J. A., Giromini, L., Porcelli, P., Parolin, L., & Viglione, D. J. (2011). Mu suppression and human movement responses to the Rorschach test. NeuroReport, 22, 223–226. 10.1097/WNR.0b013e328344f45c [DOI] [PubMed]
  116. Pinsoneault TB. Equivalency of computer-assisted and paper-and-pencil administered versions of the Minnesota Multiphasic Personality Inventory-2. Computers in Human Behavior. 1996;12(2):291–300. doi: 10.1016/0747-5632(96)00008-8. [DOI] [Google Scholar]
  117. Porcelli P, Giromini L, Parolin L, Pineda JA, Viglione DJ. Mirroring activity in the brain and movement determinant in the Rorschach test. Journal of Personality Assessment. 2013;95(5):444–456. doi: 10.1080/00223891.2013.775136. [DOI] [PubMed] [Google Scholar]
  118. Ptak, R. (2012). The frontoparietal attention network of the human brain: Action, saliency, and a priority map of the environment. The Neuroscientist, 18(5), 502–515. [DOI] [PubMed]
  119. Quinnell FA, Bow JN. Psychological tests used in child custody evaluations. Behavioral Sciences and the Law. 2001;19(14):491–501. doi: 10.1002/bsl.452. [DOI] [PubMed] [Google Scholar]
  120. Reese RJ, Slone NC, Soares N, Sprang R. Using telepsychology to provide a group parenting program: A preliminary evaluation of effectiveness. Psychological Services. 2015;12(3):274–282. doi: 10.1037/ser0000018. [DOI] [PubMed] [Google Scholar]
  121. Roper BL, Ben-Porath YS, Butcher JN. Comparability and validity of computerized adaptive testing with the MMPI-2. Journal of Personality Assessment. 1995;65(2):358–371. doi: 10.1207/s15327752jpa6502_10. [DOI] [PubMed] [Google Scholar]
  122. Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review. 2009;16:225–237. doi: 10.3758/PBR.16.2.225. [DOI] [PubMed] [Google Scholar]
  123. Schneider AM, Bandeira DR, Meyer GJ. Rorschach Performance Assessment System (R-PAS) interrater reliability in a Brazilian adolescent sample and comparisons with three other studies. Assessment. 2020 doi: 10.1177/1073191120973075. [DOI] [PubMed] [Google Scholar]
  124. Schopp L, Johnstone B, Merrell D. Telehealth and neuropsychological assessment: New opportunities for psychologists. Professional Psychology: Research and Practice. 2000;31(2):179–183. doi: 10.1037/0735-7028.31.2.179. [DOI] [Google Scholar]
  125. Shedler, J., Mayman, M. & Manis, M. (1993). The illusion of mental health. American Psychologist, 48, 1117–1131. 10.1037/0003-066X.48.11.1117 [DOI] [PubMed]
  126. Shore JH, Savin MPHD, Orton H, Beals J, Manson SM. Diagnostic reliability of telepsychiatry in American Indian veterans. The American Journal of Psychiatry. 2007;164(1):115–118. doi: 10.1176/ajp.2007.164.1.115. [DOI] [PubMed] [Google Scholar]
  127. Simpson, S. (2001). The provision of a telepsychology service to Shetland: Client and therapist satisfaction and the ability to develop a therapeutic alliance. Journal of Telemedicine and Telecare, 7, 34–36. 10.1258/1357633011936633 [DOI] [PubMed]
  128. Singh, S.P., Arya, D. & Peters, T. (2007). Accuracy of telepsychiatric assessment of new routine outpatient referrals. BMC Psychiatry, 7(55). 10.1186/1471-244X-7-55 [DOI] [PMC free article] [PubMed]
  129. Smith CJ, Rozga A, Matthews N, Oberleitner R, Nazneen N, Abowd G. Investigating the accuracy of a novel telehealth diagnostic approach for autism spectrum disorder. Psychological Assessment. 2017;29(3):245–252. doi: 10.1037/pas0000317. [DOI] [PMC free article] [PubMed] [Google Scholar]
  130. Society for Personality Assessment. (2005). The status of the Rorschach in clinical and forensic practice: An official statement by the Board of Trustees of the Society for Personality Assessment. Journal of Personality Assessment, 85, 219–237. 10.1207/s15327752jpa8502_16 [DOI] [PubMed]
  131. Society for Personality Assessment (SPA). (2020). SPA COVID-19 survey results. https://personality.org/publications/covid-19-survey-results/
  132. Spivak, S., Spivak, A., Cullen, B., Meuchel, J., Johnston, D., Chernow, R., Green, C., & Mojtabai, R. (2020). Telepsychiatry use in U.S. mental health facilities, 2010–2017. Psychiatric Services, 71(2), 121–127. 10.1176/appi.ps.201900261 [DOI] [PubMed]
  133. Sultan S, Andronikof A, Réveillère C, Lemmel G. A Rorschach stability study in a nonpatient adult sample. Journal of Personality Assessment. 2006;87(3):330–348. doi: 10.1207/s15327752jpa8703_13. [DOI] [PubMed] [Google Scholar]
  134. Temple V, Drummond C, Valiquette S, Jozsvai E. A comparison of intellectual assessments over video conferencing and in-person for individuals with ID: Preliminary data. Journal of Intellectual Disability Research. 2010;54(6):573–577. doi: 10.1111/j.1365-2788.2010.01282.x. [DOI] [PubMed] [Google Scholar]
  135. Turkstra LS, Quinn-Padron M, Johnson JE, Workinger MS, Antoniotti N. In-person versus telehealth assessment of discourse ability in adults with traumatic brain injury. The Journal of Head Trauma Rehabilitation. 2012;27(6):424. doi: 10.1097/HTR.0b013e31823346fc. [DOI] [PMC free article] [PubMed] [Google Scholar]
  136. Varker T, Brand RM, Ward J, Terhaag S, Phelps A. Efficacy of synchronous telepsychology interventions for people with anxiety, depression, posttraumatic stress disorder, and adjustment disorder: A rapid evidence assessment. Psychological Services. 2019;16(4):621–635. doi: 10.1037/ser0000239. [DOI] [PubMed] [Google Scholar]
  137. Viglione DJ. A review of recent research addressing the utility of the Rorschach. Psychological Assessment. 1999;11(3):251–265. doi: 10.1037/1040-3590.11.3.251. [DOI] [Google Scholar]
  138. Viglione, D. J., Blume-Marcovici, A. C., Miller, H. L., Giromini, L., & Meyer, G. (2012). An inter-rater reliability study for the Rorschach performance assessment system. Journal of Personality Assessment, 94, 607–612. 10.1080/00223891.2012.684118 [DOI] [PubMed]
  139. Viglione DJ, Hilsenroth MJ. The Rorschach: Facts, fictions, and future. Psychological Assessment. 2001;13:452–471. doi: 10.1037//1040-3590.13.4.452. [DOI] [PubMed] [Google Scholar]
  140. Viglione, D. J., de Ruiter, C., King, C. M., Meyer, G. J., Kivisto, A. J., Rubin, B. A., & Hunsley, J. (2022). Legal admissibility of the Rorschach and R-PAS: A review of research, practice, and case law. Journal of Personality Assessment, 1–25. [DOI] [PubMed]
  141. Vitolo, E., Giromini, L., Viglione, D. J., Cauda, F., & Zennaro, A. (2020). Complexity and cognitive engagement in the Rorschach task: An fMRI study. Journal of Personality Assessment, 1–11. 10.1080/00223891.2020.1842429 [DOI] [PubMed]
  142. Vossel, S., Geng, J. J., & Fink, G. R. (2014). Dorsal and ventral attention systems: Distinct neural circuits but collaborative roles. The Neuroscientist, 20(2), 150–159. 10.1177/1073858413494269 [DOI] [PMC free article] [PubMed]
  143. Wadsworth HE, Dhima K, Womack KB, Hart J, Jr, Weiner MF, Hynan LS, Cullum CM. Validity of teleneuropsychological assessment in older patients with cognitive disorders. Archives of Clinical Neuropsychology. 2018;33(8):1040–1045. doi: 10.1093/arclin/acx140. [DOI] [PMC free article] [PubMed] [Google Scholar]
  144. Weiner, I. B. (1999). What the Rorschach can do for you: Incremental validity in clinical applications. Assessment, 6, 327–339. 10.1177/107319119900600404 [DOI] [PubMed]
  145. Weiner I, Exner J, Sciara A. Is the Rorschach welcome in the courtroom? Journal of Personality Assessment. 1996;67:422–424. doi: 10.1207/s15327752jpa6702_15. [DOI] [PubMed] [Google Scholar]
  146. Wilson FA, Rampa S, Trout KE, Stimpson JP. Telehealth delivery of mental health services: An analysis of private insurance claims data in the United States. Psychiatric Services. 2017;68(12):1303–1306. doi: 10.1176/appi.ps.201700017. [DOI] [PubMed] [Google Scholar]
  147. Wood JM, Garb HN, Nezworski MT, Lilienfeld SO, Duke MC. A second look at the validity of widely used Rorschach indices: Comment on Mihura, Meyer, Dumitrascu, and Bombel (2013) Psychological Bulletin. 2015;141(1):236–249. doi: 10.1037/a0036005. [DOI] [PubMed] [Google Scholar]
  148. Wosik J, Fudim M, Cameron B, Gellad ZF, Cho A, Phinney D, Tcheng J. Telehealth transformation: COVID-19 and the rise of virtual care. Journal of the American Medical Informatics Association. 2020;27(6):957–962. doi: 10.1093/jamia/ocaa067. [DOI] [PMC free article] [PubMed] [Google Scholar]
  149. Wright AJ. Equivalence of remote, online administration and traditional, face-to-face administration of the Woodcock-Johnson IV cognitive and achievement tests. Archives of Assessment Psychology. 2018;8(1):23–35. [Google Scholar]
  150. Wright, A. J. (2020). Equivalence of remote, digital administration and traditional, in-person administration of the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V). Psychological Assessment, 32(9), 809–817. 10.1037/pas0000939 [DOI] [PubMed]
  151. Wright CV, Beattie SG, Galper DI, Church AS, Bufka LF, Brabender VM, Smith BL. Assessment practices of professional psychologists: Results of a national survey. Professional Psychology: Research and Practice. 2017;48(2):73–78. doi: 10.1037/pro0000086. [DOI] [Google Scholar]
  152. Wright, A. J., Mihura, J. L., Pade, H., & McCord, D. M. (2020). Guidance on psychological tele-assessment during the COVID-19 crisis. https://www.apaservices.org/practice/reimbursement/health-codes/testing/tele-assessment-covid-19
  153. Wright AJ, Raiford SE. Essentials of psychological tele-assessment. Wiley; 2021. [Google Scholar]
