American Journal of Speech-Language Pathology. 2024 Jun 19;33(5):2280–2290. doi: 10.1044/2024_AJSLP-24-00085

Applications of the R.A.I.S.E. Assessment Framework to Support the Process of Assessment in Primary Progressive Aphasia

Jeanne Gallée a,b,c, Anna Volkmer d, Anne Whitworth e, Deborah Hersh f, Jade Cartwright e
PMCID: PMC11427735  PMID: 38896883

Abstract

Purpose:

To establish the extent to which person-centered processes are integrated into assessment procedures, the Relationship, Assessment, Inclusion, Support, Evolve (R.A.I.S.E.) Assessment framework was used to evaluate measures typically used when assessing people living with primary progressive aphasia (PPA).

Method:

Forty-five assessment tools were evaluated through the lens of the five R.A.I.S.E. principles: building the client–clinician Relationship, making Assessment choices, Including the client and care partners, providing Support, and Evolving procedures to match client capability and progression. Each principle was operationalized as questions that raters used to evaluate whether a measure met that aspect of the R.A.I.S.E. Assessment framework.

Results:

Ten measures commonly used in the assessment of people living with PPA met all R.A.I.S.E. principles. These measures centered upon the elicitation of naturalistic discourse, conversation, client self-report, and clinician ratings. Thirteen measures did not meet any of the criteria; these represented standardized evaluation procedures that provide no opportunity to connect with the client, to elicit or provide feedback or support, or to adapt in response to need or performance.

Conclusions:

Whether using standardized or informal assessment tools, a relational and qualitative approach to assessment is paramount to promoting client success and therapeutic engagement. Through the R.A.I.S.E. framework, we provide guidance on practices that cultivate person-centered processes of assessment in the care of people living with PPA.


The recently introduced R.A.I.S.E. Assessment framework (Gallée et al., 2023) provides a multidimensional, person-centered approach to the comprehensive assessment of people living with primary progressive aphasia (PPA). The framework highlights the need for the clinician to consider the following when conducting assessments: (a) build the Relationship with the client, (b) make conscious choices about the formality and standardization of Assessment approaches and types, (c) Include and incorporate the client's and care partner's feedback, (d) provide Support to the client and actively advocate to enhance their agency, and (e) ensure that the assessment provided appropriately Evolves over time as the condition progresses and the needs of the client change (Gallée et al., 2023). In this article, we apply the R.A.I.S.E. Assessment framework to provide an in-depth analysis of assessment tools routinely used in the evaluation of speech, language, and communication symptoms in PPA. Through this analysis, we aim to establish the extent to which the principles of R.A.I.S.E. are addressed by each tool and the relative overlap between the framework's principles and the assessments as they exist independently (e.g., based on the instructions provided and original formatting). This discussion will inform clinical decision making by offering clinicians guidance on the adaptations that can be made to achieve a better fit with the R.A.I.S.E. Assessment framework and by informing the future development of new assessment tools.

The R.A.I.S.E. Assessment framework was developed to provide clinical guidance for the assessment process when working with people living with PPA. Speech-language assessment in PPA is indispensable for establishing a diagnosis, monitoring change in symptoms, and informing treatment targets (Gallée & Volkmer, 2023). When assessment is diagnostic in nature, symptomatologic profiles can be evaluated through validated screening tools, including the Progressive Aphasia Rating Scale (Epelbaum et al., 2021), the Sydney Language Battery (Janssen et al., 2022), and the Screening for Aphasia in NeuroDegeneration (Catricalà et al., 2017). For the purposes of monitoring change in symptoms and informing treatment targets, progress or decline following intervention is evaluated by comparing pre- and posttherapy assessment outcomes. In a review by Volkmer et al. (2020), measures typically used to examine the effects of functional interventions for PPA broadly fell into the following categories: interviews and questionnaires, formal tests of language, conversation analysis, and rating scales largely based on clinician judgment. For a progressive condition marked by both variability and uncertainty, selecting efficacious and person-centered assessment tools consistent with the principles of the R.A.I.S.E. Assessment framework is essential to ensure that assessment itself is supportive and of therapeutic value (Hersh et al., 2013). There is therefore a direct need to evaluate the extent to which assessment tools commonly used in PPA, and components of the assessment process (e.g., case history), align with the framework's principles. In addition to what is evaluated in assessment, we set out to evaluate the process of assessment and how we assess, including how these elements are considered in commonly used tools. When we read an assessment manual, we need to go beyond the standardized protocol and consider both the "tool" and the "process" from a relational and supportive perspective. This study further aimed to examine how assessment protocols explicitly involve the person and their caregivers in discussion around the "process" of assessment and whether this process is delivered in a supportive manner. As the R.A.I.S.E. Assessment framework centers on providing person-centered evaluation, we hypothesized that a predictor of alignment with its principles would be the extent to which an assessment measures a client's participation rather than impairment.

Method

The study was not subject to an approval process for institutional review. The assessments analyzed in this article were drawn from those provided in Henry and Grasso (2018), Gallée et al. (2023), and Volkmer et al. (2024), as exemplars of tools that are commonly used to diagnose, evaluate, and monitor speech, language, and communication outcomes in PPA. Assessments were selected based on author agreement of their relevance to capturing linguistic function and their likelihood of being used in the evaluation of people with a suspected diagnosis of PPA. The assessments were organized by the domains of speech, language, and communication that are typically of interest when providing a diagnosis of PPA and identifying one of the three established subtypes (i.e., semantic, nonfluent, or logopenic). Of note, certain assessments are subtests drawn from comprehensive assessments of aphasia, such as the Picnic Scene task from the Western Aphasia Battery–Revised (WAB-R; Kertesz, 2020) and the Aphasia Impact Questionnaire–21 (AIQ-21; Swinburn et al., 2019). All assessments were evaluated against the criteria outlined in Table 1. Responses to each criterion were coded dichotomously (yes = 1, no = 0), yielding a maximum possible score of 2 for each principle; a total score was calculated as the sum of all principle scores, with a possible range of 0–10 (a minimal computational sketch of this scoring scheme follows Table 1). Consensus was first established through discussion of 15% of the assessment protocols by two authors (J.G. and J.C.). Once joint reliability on this subset was established, one author (J.G.) coded the remaining 85% of the assessments. Codes were then reviewed by the other author (J.C.); no corrections were required.

Table 1.

The Relationship, Assessment, Inclusion, Support, Evolve (R.A.I.S.E.) Assessment framework principles and associated criteria used to evaluate assessment protocols.

R.A.I.S.E. principle Criterion
Relationship Does the assessment contain questions that allow the clinician to better understand the client as a person?
Do the instructions of the assessment allow for the clinician to meaningfully respond to what the client is communicating?
Assessment Does the administration allow the clinician to tailor scripts or instructions to the client?
Does the administration allow the clinician to provide cues or prompts in order to identify strengths and support needs?
Inclusion Does the assessment allow for the clinician to provide feedback to the client?
Can the client provide feedback to the clinician for the clinician to adjust their prompts or explain their purpose?
Support Does the assessment promote advocacy for the client by asking questions that determine the client's personal strengths, challenges, or needs?
Will the assessment results provide the clinician with information that helps advocate for services and supports for the client and family?
Evolve Can the instructions of the assessment be modified for a client's needs?
Does the assessment remain valid if the client's response modality changes?
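
To make this scoring scheme concrete, the following minimal sketch (in Python; the function and variable names are ours and illustrative, not part of the published protocol) computes principle and total scores from the ten binary criterion ratings, using the WAB-R Picnic Scene ratings from Table 2 as a worked example:

    # Minimal sketch of the R.A.I.S.E. scoring scheme described in the Method:
    # each principle has two binary criteria (yes = 1, no = 0), so a principle
    # scores 0-2 and the total scores 0-10. Names and structure are illustrative.

    PRINCIPLES = ["Relationship", "Assessment", "Inclusion", "Support", "Evolve"]

    def score_assessment(ratings: dict[str, tuple[int, int]]) -> dict:
        """Sum the two criterion ratings per principle, then total across principles."""
        principle_scores = {p: sum(ratings[p]) for p in PRINCIPLES}
        return {"principles": principle_scores, "total": sum(principle_scores.values())}

    # Worked example: the WAB-R Picnic Scene row from Table 2.
    picnic_scene = {
        "Relationship": (0, 0),  # R1, R2
        "Assessment": (0, 1),    # A1, A2
        "Inclusion": (1, 1),     # I1, I2
        "Support": (0, 1),       # S1, S2
        "Evolve": (1, 0),        # E1, E2
    }
    print(score_assessment(picnic_scene)["total"])  # 5, matching Table 2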

Results

A total of 45 assessment tools were evaluated in the context of the R.A.I.S.E. Assessment framework using the criteria described above. Of the 45 assessments, 13 did not meet the criteria of any of the five components. Of the remaining 32 tools, 15 met both criteria for Relationship; 17 met at least one criterion for tailoring or adapting instructions to the person during Assessment (14 involved formal, standardized administration, which precluded this); 15 met all criteria for Inclusion; 16 met all criteria for Support; and 13 met all criteria for Evolve (see Table 2).

Table 2.

Relationship, Assessment, Inclusion, Support, Evolve evaluation outcomes of tools typically used to assess primary progressive aphasia.

Assessment Total R1 R2 Sum A1 A2 Sum I1 I2 Sum S1 S2 Sum E1 E2 Sum
(Column groups: R1–R2 = Relationship; A1–A2 = Assessment; I1–I2 = Inclusion; S1–S2 = Support; E1–E2 = Evolve; Sum = principle subtotal.)
Boston Diagnostic Examination of Aphasia–Third Edition (BDAE-3), Responsive Naming (Goodglass et al., 2001) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Timed categorical (animals; Tombaugh et al., 1999) or letter fluency (F, A, S; Monsch et al., 1992; Rees et al., 1998) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Cognitive Linguistic Quick Test-Plus (CLQT+), Generative Naming (Helm-Estabrooks, 2017) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Psycholinguistic Assessment of Language Processing in Aphasia (PALPA), Spoken Word–Naming (Kay et al., 1996) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Cambridge Semantic Battery (CSB), Category Comprehension (Adlam et al., 2010) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
PALPA, Spoken Word–Picture Matching (Kay et al., 1996) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Western Aphasia Battery–Revised (WAB-R), Auditory Verbal Comprehension: Sequential Commands (Kertesz, 2007) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Comprehensive Aphasia Test–Second Edition (CAT-2), Language Comprehension: Comprehension of Spoken Sentences (Swinburn et al., 2019) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
WAB-R, Repetition (Kertesz, 2007) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CAT-2, Repetition subtests (Swinburn et al., 2019) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
WAB-R, Reading (Kertesz, 2007) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CAT-2, Reading Out Loud (Swinburn et al., 2019) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
CAT-2, Language Comprehension: Comprehension of Written Sentences (Swinburn et al., 2019) 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Peanut butter and jelly sandwich (Stark, 2019) 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0
CLQT+, Story Retelling (Helm-Estabrooks, 2017) 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0
Boston Naming Test (BNT; Kaplan et al., 2001) 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
CLQT+, Confrontation Naming (Helm-Estabrooks, 2017) 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
WAB-R, Word Fluency subtest (Kertesz, 2007) 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
Northwestern Anagram Test (NAT; Weintraub et al. 2009) 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0
Make A Sentence Test (MAST; Billette et al., 2015) 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0
Northwestern Assessment of Verbs and Sentences (NAVS; Cho-Reyes & Thompson, 2012) 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
CLQT+, Semantic Comprehension (Helm-Estabrooks, 2017) 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
Pyramids and Palm Trees (PPT; Howard & Patterson, 1992) 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
WAB-R, Constructional, Visuospatial, and Calculation: Calculation (Kertesz, 2007) 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
CAT-2, Written Picture Description (Swinburn et al., 2019) 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0
The Arizona Battery of Reading and Spelling (ABRS), Reading/Spelling List (Beeson & Rising, 2010) 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0
Communication Activities of Daily Living–Third Edition (CADL-3; Holland et al., 2018) 2 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1
Curtin University Discourse Protocol (CUDP; Whitworth et al., 2015) 4 1 0 1 0 1 1 0 0 0 1 1 2 0 0 0
WAB-R, Picnic Scene (Kertesz, 2020) 5 0 0 0 0 1 1 1 1 2 0 1 1 1 0 1
CLQT+, Personal Facts (Helm-Estabrooks, 2017) 5 1 0 1 0 1 1 0 0 0 0 1 1 1 1 2
Aphasia Needs Assessment (ANA; Garrett & Beukelman, 2006) 7 1 1 2 0 0 0 1 1 2 1 1 2 0 1 1
Aphasia Severity Rating (ASR; Simmons-Mackie et al., 2018) 8 1 1 2 1 1 2 0 0 0 1 1 2 1 1 2
The Progressive Aphasia Language Scale (PALS), Spontaneous Speech (Leyton et al., 2011) 9 1 1 2 1 1 2 1 1 2 1 1 2 1 0 1
Communication Confidence Rating Scale for Aphasia (CCRSA; Babbitt et al., 2011; Cherney et al., 2011) 9 1 1 2 1 1 2 1 1 2 1 1 2 0 1 1
Measure of Participation in Conversation (MPC; Kagan et al., 2004) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
Case history (e.g., “Tell me what brings you here”) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
Personal narratives elicited by a prompt (e.g., “Tell me what you do for work” or “Tell me about a typical Sunday”) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
WAB-R, Conversational Questions (Kertesz, 2020) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
A conversation between the client, clinician and/or familiar conversational partner (Gallée et al., 2023; Henry & Grasso, 2018) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
Communicative Effectiveness Index (CETI; Lomas et al., 1989) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
Clinical Dementia Rating (Knopman et al., 2011) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
Progressive Aphasia Severity Scale (PASS; Sapolsky et al., 2014) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
Measure of Skill in Conversation (MSC; Kagan et al., 2004) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
Assessment for Living with Aphasia (ALA; Simmons-Mackie et al., 2014) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2
CAT-2, Aphasia Impact Questionnaire–21 (AIQ-21; Swinburn et al., 2019) 10 1 1 2 1 1 2 1 1 2 1 1 2 1 1 2

Note. The individual scores and summative scores for each R.A.I.S.E. principle, as well as the total score, are indicated. For a single criterion, an assessment could score either 0 (criterion unfulfilled) or 1 (criterion fulfilled). As such, the range of possible scores for a single R.A.I.S.E. principle was 0–2. Finally, the total score reflects the sum of all principle scores, with a possible range of 0–10. The criteria for each of the R.A.I.S.E. principles were as follows:

R1: Does the assessment contain questions that allow the clinician to better understand the client as a person?

R2: Do the instructions of the assessment allow for the clinician to meaningfully respond to what the client is communicating?

A1: Does the administration allow the clinician to tailor scripts or instructions to the client?

A2: Does the administration allow the clinician to provide cues or prompts in order to identify strengths and support needs?

I1: Does the assessment allow for the clinician to provide feedback to the client?

I2: Can the client provide feedback to the clinician for the clinician to adjust their prompts or explain their purpose?

S1: Does the assessment promote advocacy for the client by asking questions that determine the client's personal strengths, challenges, or needs?

S2: Will the assessment results provide the clinician with information that helps advocate for services and supports for the client and family?

E1: Can the instructions of the assessment be modified for a client's needs?

E2: Does the assessment remain valid if the client's response modality changes?

More than half of the assessments partially met the criteria of specific R.A.I.S.E. principles, that is, scored 1 of the 2 possible points for a given principle. For example, the Cognitive Linguistic Quick Test–Plus (CLQT+; Helm-Estabrooks, 2017) Personal Facts subtest partially met the criteria for Assessment, as the clinician has some flexibility to probe when a client's response is incomplete or delayed while sticking to scripted prompts; this same assessment also met criterion E1 but not E2 for Evolve, as only the verbal modality is scored as accurate. In total, 27 assessments met partial criteria for certain principles. A total of 10 assessments met all evaluated aspects of R.A.I.S.E. These consisted of assessment tools and components that elicited naturalistic language samples (e.g., conducting a targeted case history and eliciting self-reports about communicative abilities), clinician rating scales (e.g., the PASS; Sapolsky et al., 2014), and self-rating scales (e.g., the Comprehensive Aphasia Test–Second Edition [CAT-2] Aphasia Impact Questionnaire–21; Swinburn et al., 2019). More broadly, 13 assessment tools were in strong alignment with the R.A.I.S.E. Assessment framework, where strong alignment was defined as meeting at least one of the criteria for each of the five principles (see Figure 1).
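
With the ratings in Table 2, this rule (at least one fulfilled criterion per principle) identifies the same 13 tools as the total-score threshold of 9 or above used in Figure 1. A minimal sketch of the check, reusing the illustrative representation from the scoring sketch after Table 1, follows:

    def is_strong_alignment(ratings: dict[str, tuple[int, int]]) -> bool:
        # Strong alignment: at least one of the two criteria met for every principle.
        return all(sum(pair) >= 1 for pair in ratings.values())

    # Worked example: the PALS Spontaneous Speech row from Table 2 (total = 9).
    pals_spontaneous_speech = {
        "Relationship": (1, 1),
        "Assessment": (1, 1),
        "Inclusion": (1, 1),
        "Support": (1, 1),
        "Evolve": (1, 0),  # Evolve met only partially, yet alignment is still strong
    }
    print(is_strong_alignment(pals_spontaneous_speech))  # True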

Figure 1.

[Figure 1: assessments color-coded by total score along a spectrum from red (0) through yellow (5) to aqua (10). Red: BDAE-3, timed fluency, and WAB-R Repetition. Pink: BNT. Orange: CADL-3. Yellow: WAB-R Picnic Scene and CLQT+ Personal Facts. Green: PALS Spontaneous Speech. Aqua: AIQ-21, personal narratives elicited by a prompt, and MPC.]

Examples of assessments along the spectrum of alignment with the R.A.I.S.E. principles: Relationship, Assessment, Inclusion, Support, and Evolve. Strong alignment is indicated by a score of 9 or above (represented by green to aqua), whereas the absence of alignment is equivalent to a score of 0 (represented by red). BDAE-3 = Boston Diagnostic Examination of Aphasia–Third Edition; WAB-R = Western Aphasia Battery–Revised; BNT = Boston Naming Test; CADL-3 = Communication Activities of Daily Living–Third Edition; CLQT+ = Cognitive-Linguistic Quick Test PLUS; PALS = Progressive Aphasia Language Scale; AIQ-21 = Aphasia Impact Questionnaire–21; MPC = Measure of Participation in Conversation.

Relationship

Of all 45 assessments, two met partial criteria for Relationship and 15 met both R1 and R2. Both the CUDP (Whitworth et al., 2015) and the CLQT+ Personal Facts subtest (Helm-Estabrooks, 2017) met R1: clients were asked questions that would allow their clinician to understand them as a person, but without the opportunity for the clinician to respond meaningfully to this information (see Table 2). Notably, assessment tools that met full criteria for Relationship predominantly consisted of observational scales for clinicians to complete based on a variety of conversation-based activities (e.g., the ASR [Simmons-Mackie et al., 2018] or the PASS [Sapolsky et al., 2014]). Outliers to this trend were the AIQ-21 (Swinburn et al., 2019) and the Conversational Questions subtest of the WAB-R (Kertesz, 2020).

Assessment

Three assessments met partial criteria and 14 met both criteria for Assessment. Partial credit was assigned to the CUDP (Whitworth et al., 2015), the WAB-R Picnic Scene (Kertesz, 2020), and the CLQT+ Personal Facts subtest (Helm-Estabrooks, 2017), as these measures allowed clinicians to provide cues or prompts to identify clients' strengths and support needs (A2). With the exception of the Aphasia Needs Assessment (ANA; Garrett & Beukelman, 2006), the same measures that met criteria for Relationship also met criteria for Assessment, additionally allowing scripts to be tailored to the client's unique needs (A1).

Inclusion

While no tools met partial criteria, a total of 15 met both criteria for Inclusion. Consistent with the outcomes for Relationship and Assessment, the majority of these measures consisted of observational rating scales, alongside one self-report scale (AIQ-21; Swinburn et al., 2019).

Support

Nine measures met partial criteria for Support, providing information that helps the clinician advocate for services and supports for the client and family (S2). Sixteen measures met full criteria for Support, largely overlapping with the assessment tools that met all components of Relationship, Assessment, and Inclusion.

Evolve

Twelve assessment tools met partial criteria for Evolve, and an additional 13 met all criteria (see Table 2).

Co-Occurrence of R.A.I.S.E. Principles

Beyond the assessments that met all criteria, there were nine assessment tools that met at least one criterion for two or more principles without meeting full criteria. Seven of these assessment tools met criteria for both Relationship and Support. Of these nine, only the Spontaneous Speech subtest of the Progressive Aphasia Language Scale (Leyton et al., 2011) and the Communication Confidence Rating Scale for Aphasia (Babbitt et al., 2011; Cherney et al., 2011) met all five principles of R.A.I.S.E. Notably, for the remaining seven assessments, each respective combination of R.A.I.S.E. principles for which criteria were met occurred only once (see Table 2).

Discussion

The purpose of this study was to evaluate how commonly used assessment tools, and components of the assessment process, align with the principles of the R.A.I.S.E. Assessment framework. Our results demonstrated that many standardized assessments, when used on their own, do not fulfill the principles of R.A.I.S.E. and risk undermining the therapeutic relationship. Conversely, approximately a quarter of assessment tasks showed strong alignment with the R.A.I.S.E. framework (i.e., at least partial criteria were met for each of the five principles) and were more inherently equipped to enable therapeutic assessment, that is, "assessment of support, with support, and as support" (Hersh et al., 2013, p. 162). Differentiating assessment tools and components in this way provides valuable insights into assessment practices. Importantly, the evaluation process stepped beyond consideration of the psychometric properties of assessment tools to consider the more relational, supportive, and therapeutic aspects of assessment. We discuss key findings and how they can be used to guide assessment practice, propose modifications to existing assessment processes, and position the development of new assessment tools as a priority for action in the PPA field.

Using R.A.I.S.E. to Guide Assessment Practice

The assessment tools and components evaluated fell along a continuum of low to high alignment with the R.A.I.S.E. Assessment framework, providing an objective basis for guiding assessment practice and the judicious selection of assessment tools. Tools at the lower end of the continuum tended to be standardized in nature and designed for diagnostic and classification purposes, while those at the higher end were frequently informal and highly oriented toward client participation in more naturalistic paradigms. Knowing where different assessment tools fall along the continuum and how they align with R.A.I.S.E. principles allows a more considered approach to planning and facilitating assessment sessions and interactions. For example, a small number of assessments were identified that intrinsically promote Relationship, such as a case history and personal narratives. These assessments have value in the early stages of the assessment process to build rapport and relationships, before more standardized assessments, such as the WAB-R (Kertesz, 2020) and the Comprehensive Aphasia Test–Second Edition (CAT-2; Swinburn et al., 2022), are administered. The importance of establishing rapport prior to administration is recommended in the CAT-2 manual (Swinburn et al., 2022), affirming that what comes before and after standardized assessment is essential.

Inclusion of client and clinician feedback in the assessment process was also evaluated. As anticipated, many formal assessments constrained the provision of feedback during the assessment process to comply with standardization of administration. This is important for allowing comparison to a norm but restricts opportunities within the assessment for support and mutual benefit. This is especially true of assessments that do not allow the clinician to provide tailored cueing when the client is challenged, produces errors, or does not provide a response. For example, in the CLQT+ Generative Naming task (Helm-Estabrooks, 2017), for a client who has expressed concern over their performance, only the following direction is deemed acceptable: "I'm not allowed to help you. Just do the best you can." Although many clinicians will naturally provide additional support through personalized commentary (e.g., "After we have finished, we can talk this through"), such guidance is rarely presented or discussed in test manuals. The CAT-2 (Swinburn et al., 2022) is one exception, where the need for care when administering standardized assessments is explicitly acknowledged, encouraging responsiveness to a person's needs while adhering to the task instructions. Swinburn et al. (2022) also address giving feedback or a summary of performance at the end of the assessment, highlighting the need to emphasize the positives and to acknowledge any negative emotions that were expressed during the assessment, for example, by acknowledging that those feelings are commonly experienced by people with aphasia. Examples of phrases are provided, drawing on the work of Cheng et al. (2020), for example, "I know it's tough now. We're here to support you. We'll do everything we can to help" (p. 46). If the person demonstrates engagement with the results, providing a summary is recommended; however, a template for this is not provided. Such forms of feedback acknowledge, include, and support the client, setting the foundation for a long-term relationship between client and clinician.

Explicit opportunities to Support and advocate for the client, such as those described above, were rarely considered in assessment protocols, as this was dependent on the clinician's ability to determine a client's unique strengths, challenges, needs, and goals based on assessment prompts. This principle is fundamental to clinicians supporting clients and their networks to "use" the assessment information gathered (assessment as support; Hersh et al., 2013). Social network analysis is an example of an assessment tool that allows clients and clinicians to work collaboratively (promoting Relationship) to create an accessible output and resource through the assessment process (aligning with Support and advocacy; Hilari & Northcott, 2017; Vickers, 2010). The relative size and quality of a person's social network is visualized, supporting functional and person-centered goal setting and outcome measurement, while helping the person with PPA and their family advocate for the services and supports they need to "grow" their social network and strengthen connectedness.

Finally, the extent to which an assessment Evolves over time was analyzed. Certain standardized assessments of select modalities, such as confrontation naming, help the clinician track more nuanced change over time. A positive example of a standardized assessment that is amenable to changes in naming ability is the CLQT+ Confrontation Naming subtest (Helm-Estabrooks, 2017), in which the clinician has the opportunity to give credit for partially correct responses. Such a scoring modification can easily be, and anecdotally often is, implemented by clinicians in practice.

An exemplar of an assessment with strong R.A.I.S.E. alignment is the AIQ-21 (Swinburn et al., 2019), which provides clear direction to the clinician, noting explicitly how the "manner" of AIQ administration should feel "qualitatively different" in mood and tone from the standardized components of the assessment. Importantly, the authors highlight that "… as much support, encouragement, and feedback as possible" should be provided during administration (p. 41). Examples of supportive features are provided, including rewording and repeating questions, using gesture, and smiling during administration (aligning with Relationship and Support). As demonstrated in Table 2, informal assessments of naturalistic language can also meet all components of the R.A.I.S.E. Assessment framework. Finally, both client- and clinician-based rating scales, such as the PASS (Sapolsky et al., 2014), are amenable to the principles of the framework in that they comprehensively capture a client's unique strengths, challenges, wants, and needs in a manner that establishes a relationship, is inclusive of client and care partner feedback, supports the client, and is adaptable to the client and over time. Examining assessments with strong alignment through the R.A.I.S.E. lens highlights the relational and supportive aspects of assessment and provides useful directions for enhancing assessment practices and developing new assessment tools in the future.

Using R.A.I.S.E. to Enhance Assessment Practices

The evaluation process allowed examination of every aspect of the R.A.I.S.E. framework and revealed a paucity of existing assessments that align with all elements. As such, we see significant potential for using the R.A.I.S.E. evaluation framework in a principle-based way to enhance assessment practices. Knowing how well an assessment aligns with R.A.I.S.E. can inform how it might best be administered and the supports or scaffolds that may need to surround the assessment process. For example, when using assessments that score at the lower end of the R.A.I.S.E. continuum, and when meeting people with PPA for the first time, the clinician must go beyond standardized assessment protocols to determine (and reveal) a person's strengths, rather than focusing on impairments, to create a comprehensive and mutually beneficial assessment process. As a further example, when administering assessments that do not allow feedback or the provision of tailored cues or instructions during administration, clear expectations can be provided for the client and their family. Using the R.A.I.S.E. framework to drive assessment practice promotes reflection on why standardized tasks are required and why they need to be delivered in constrained ways (e.g., to ensure a reliable picture of performance to support diagnosis and/or to allow sensitive tracking of maintenance or decline over time) and ensures that we provide this context to the person and their family. Furthermore, constrained assessment tasks can be carefully balanced with more flexible, responsive, and supportive tasks that allow a person's strengths and effective strategies to be identified and revealed, promoting a sense of competence as well as an understanding of support needs. As such, using the R.A.I.S.E. ratings in this way allows us to plan the aims, structure, flow, and "feel" of our assessment sessions in a more considered and sensitive way, ensuring we never assess to "destruction" (Gallée et al., 2023).

Alternatively, rather than abandon the instructions of standardized assessments, the relative rigidity of these assessment tools can frequently be softened by adding strengths-based modifications. For example, this could include offering an alternative response modality and providing cues, or opportunities to complete items outside of the official protocol or formal administration, particularly if a person has been anxious about one aspect of their performance. Where formal outcomes of a psychometrically established test are required and the assessment cannot be modified midprocedure, care can be taken to set clear expectations and prepare the client for the assessment process. Furthermore, appropriate debriefing and opportunities to repeat items can be created afterward. For example, after providing the test instructions, allowing the client to ask questions and adapting prompts to elicit the targeted response permits the clinician to not only follow test protocols and conduct standardized assessment, but also to collect a separate, and arguably richer, set of data related to the modifications and scaffolds that allow a client to flourish in communication. This combined manner of data collection can result in a dynamic and person-centered process of assessment while using readily available, commonly used standardized assessment materials.

Other ideas for using R.A.I.S.E. to enhance assessment practices include identifying ways to transform assessment outcomes into accessible and useable formats, whether to advocate for funding or extended hospital stays or to improve how well family and friends understand PPA. Such additional layers of support ensure that assessments are empowering and useful for all stakeholders. As advocated by Hersh and Boud (2024), it would be promising to see these supportive and R.A.I.S.E.-aligned elements more formally embedded within assessment protocols in the future. Furthermore, the Evolve principle ensures that we select assessment tools in the early stages of the continuum of care that can be used over time to track maintenance and evolution; conversely, having to continuously change the assessment tool restricts interpretation of the rate and nature of decline. Discourse assessments are a good example of a tool that offers longevity and sensitivity over time. Clinician ratings, such as those collected by the PASS, provide similar flexibility in that the suite of measures or tools may change in response to progression and capability, while the interpretation and "classification" of performance is documented in a standardized and trackable manner. Consistency in tool use allows the clinician to document performance in a more coherent and meaningful way over time, serving to maintain the Relationship while also Evolving as necessary.

Development of New Assessment Tools and Approaches

The outcomes of this work provide direction for the development of new assessment tools in the PPA field that align with R.A.I.S.E. and support more person-centered and therapeutic assessment practices (Hersh & Boud, 2024; Hersh et al., 2013). It is critical to develop assessment approaches that prioritize the relational, supportive, and therapeutic aspects of assessment while also maintaining attention to robust psychometric properties, particularly when individualized, person-centered practices are emphasized. Given the progressive nature of PPA, the principles of Relationship and Evolve need to be explicitly considered and integrated into assessment tools and the assessment process. Based on the results of this evaluation, priority should be given to developing tools that draw on naturalistic language elicitation and clinician or self-report scales, and that result in accessible language that is easily transferable between clinicians, evaluation timepoints, clients, and care partners.

To reliably share information and allow this relationship to flourish, there is a strong need for the clinician to use common terminology across the disease trajectory that is accessible yet flexible to changing symptoms. Although measures such as the PASS closely address the need for flexibility and the use of common or consistent terminology and scoring (e.g., scores of 0–3), there remains room for measurement tools with a strengths-based, rather than impairment-based, scale and built-in supports that create objective ratings and boost interrater and intrarater reliability. The development of a scale, for example, that asks objective questions that can be reliably tracked over time would be a positive step forward in meeting this need. Finally, our findings motivate the need for tools that facilitate immediate feedback and provide accessible language for the clinician to share with the client and care partners to contextualize the outcomes of the assessment.

Conclusions

Evaluation tools are core features of assessment. Broadly, two forms of measures are readily available: standardized measures, which allow client performance to be compared to normative scores, and personalized tasks, which evaluate functional performance. Through this analysis using the R.A.I.S.E. Assessment framework, we have aimed to draw clinicians' attention to the relational and qualitative aspects of assessment that are essential for a client's well-being and therapeutic engagement. The clinician's role is then to build on the context of the clinician–client relationship, be purposeful in the choice of tools while remaining mindful of the implications of their use, and consider how these tools are introduced, explained, and used to prompt further intervention.

Limitations

This study did not include all assessment tools that are used in the evaluation of people living with PPA, such as the American Speech-Language-Hearing Association Functional Assessment of Communication Skills for Adults, the Cookie Theft picture description, and the Apraxia Battery for Adults, nor were all subtests of comprehensive evaluations analyzed. Despite this, we believe we have presented analysis outcomes for a representative array of assessment tools that illustrate the range of approaches clinicians can take in evaluation. Furthermore, our analysis did not evaluate psychometric properties. Including a review of the psychometric properties of assessment tools used with this population would contribute to a more comprehensive audit of assessments.

Data Availability Statement

The data sets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

This research was supported by the National Institute on Aging (U24AG074855 to J.G., Role: Postdoctoral Fellow).


References

1. Adlam, A.-L. R., Patterson, K., Bozeat, S., & Hodges, J. R. (2010). The Cambridge Semantic Memory Test Battery: Detection of semantic deficits in semantic dementia and Alzheimer's disease. Neurocase, 16(3), 193–207. 10.1080/13554790903405693
2. Babbitt, E. M., Heinemann, A. W., Semik, P., & Cherney, L. R. (2011). Psychometric properties of the Communication Confidence Rating Scale for Aphasia (CCRSA): Phase 2. Aphasiology, 25(6–7), 727–735. 10.1080/02687038.2010.537347
3. Beeson, P. M., & Rising, K. (2010). Arizona Battery for Reading and Spelling (ABRS).
4. Billette, O. V., Sajjadi, S. A., Patterson, K., & Nestor, P. J. (2015). SECT and MAST: New tests to assess grammatical abilities in primary progressive aphasia. Aphasiology, 29(10), 1135–1151. 10.1080/02687038.2015.1037822
5. Catricalà, E., Gobbi, E., Battista, P., Miozzo, A., Polito, C., Boschi, V., Esposito, V., Cuoco, S., Barone, P., Sorbi, S., Cappa, S. F., & Garrard, P. (2017). SAND: A Screening for Aphasia in NeuroDegeneration. Development and normative data. Neurological Sciences, 38(8), 1469–1483. 10.1007/s10072-017-3001-y
6. Cheng, B. B. Y., Worrall, L. E., Copland, D. A., & Wallace, S. J. (2020). Prognostication in post-stroke aphasia: How do speech pathologists formulate and deliver information about recovery? International Journal of Language & Communication Disorders, 55(4), 520–536. 10.1111/1460-6984.12534
7. Cherney, L. R., Babbitt, E. M., Semik, P., & Heinemann, A. W. (2011). Psychometric properties of the Communication Confidence Rating Scale for Aphasia (CCRSA): Phase 1. Topics in Stroke Rehabilitation, 18(4), 352–360. 10.1310/tsr1804-352
8. Cho-Reyes, S., & Thompson, C. K. (2012). Verb and sentence production and comprehension in aphasia: Northwestern Assessment of Verbs and Sentences (NAVS). Aphasiology, 26(10), 1250–1277. 10.1080/02687038.2012.693584
9. Epelbaum, S., Saade, Y. M., Flamand Roze, C., Roze, E., Ferrieux, S., Arbizu, C., Nogues, M., Azuar, C., Dubois, B., Tezenas du Montcel, S., & Teichmann, M. (2021). A reliable and rapid language tool for the diagnosis, classification, and follow-up of primary progressive aphasia variants. Frontiers in Neurology, 11. 10.3389/fneur.2020.571657
10. Gallée, J., Cartwright, J., Volkmer, A., Whitworth, A., & Hersh, D. (2023). "Please don't assess him to destruction": The R.A.I.S.E. Assessment framework for primary progressive aphasia. American Journal of Speech-Language Pathology, 32(2), 391–410. 10.1044/2022_AJSLP-22-00122
11. Gallée, J., & Volkmer, A. (2023). Role of the speech-language therapist/pathologist in primary progressive aphasia. Neurology: Clinical Practice, 13(4), Article e200178. 10.1212/CPJ.0000000000200178
12. Garrett, K. L., & Beukelman, D. R. (2006). Aphasia Needs Assessment. https://cehs.unl.edu/documents/secd/aac/assessment/aphasianeeds.pdf
13. Goodglass, H., Kaplan, E., & Weintraub, S. (2001). BDAE: The Boston Diagnostic Aphasia Examination. Lippincott Williams & Wilkins.
14. Helm-Estabrooks, N. (2017). Cognitive Linguistic Quick Test–Plus. The Psychological Corporation.
15. Henry, M. L., & Grasso, S. M. (2018). Assessment of individuals with primary progressive aphasia. Seminars in Speech and Language, 39(3), 231–241. 10.1055/s-0038-1660782
16. Hersh, D., & Boud, D. (2024). Reassessing assessment: What can post stroke aphasia assessment learn from research on assessment in education? Aphasiology, 38(1), 123–143. 10.1080/02687038.2022.2163462
17. Hersh, D., Worrall, L., O'Halloran, R., Brown, K., Grohn, B., & Rodriguez, A. (2013). Assess for success: Evidence for therapeutic assessment. In N. Simmons-Mackie, J. King, & D. Beukelman (Eds.), Supporting communication for adults with acute and chronic aphasia (pp. 145–164). Brookes.
18. Hilari, K., & Northcott, S. (2017). "Struggling to stay connected": Comparing the social relationships of healthy older people and people with stroke and aphasia. Aphasiology, 31(6), 674–687. 10.1080/02687038.2016.1218436
19. Holland, A. L., Frattali, C., & Fromm, D. (2018). CADL-3: Communication Activities of Daily Living–Third Edition. Pro-Ed.
20. Howard, D., & Patterson, K. E. (1992). The Pyramids and Palm Trees Test. Pearson.
21. Janssen, N., Roelofs, A., van den Berg, E., Eikelboom, W. S., Holleman, M. A., in de Braek, D. M. J. M., Piguet, O., Piai, V., & Kessels, R. P. C. (2022). The diagnostic value of language screening in primary progressive aphasia: Validation and application of the Sydney Language Battery. Journal of Speech, Language, and Hearing Research, 65(1), 200–214. 10.1044/2021_JSLHR-21-00024
22. Kagan, A., Winckel, J., Black, S., Felson Duchan, J., Simmons-Mackie, N., & Square, P. (2004). A set of observational measures for rating support and participation in conversation between adults with aphasia and their conversation partners. Topics in Stroke Rehabilitation, 11(1), 67–83. 10.1310/CL3V-A94A-DE5C-CVBE
23. Kaplan, E., Goodglass, H., & Weintraub, S. (2001). Boston Naming Test. Pro-Ed.
24. Kay, J., Lesser, R., & Coltheart, M. (1996). Psycholinguistic Assessments of Language Processing in Aphasia (PALPA): An introduction. Aphasiology, 10(2), 159–180. 10.1080/02687039608248403
25. Kertesz, A. (2007). Western Aphasia Battery–Revised. Pearson.
26. Kertesz, A. (2020). The Western Aphasia Battery: A systematic review of research and clinical applications. Aphasiology, 36(1), 21–50. 10.1080/02687038.2020.1852002
27. Knopman, D. S., Weintraub, S., & Pankratz, V. S. (2011). Language and behavior domains enhance the value of the clinical dementia rating scale. Alzheimer's & Dementia, 7(3), 293–299. 10.1016/j.jalz.2010.12.006
28. Leyton, C. E., Villemagne, V. L., Savage, S., Pike, K. E., Ballard, K. J., Piguet, O., Burrell, J. R., Rowe, C. C., & Hodges, J. R. (2011). Subtypes of progressive aphasia: Application of the international consensus criteria and validation using β-amyloid imaging. Brain, 134(10), 3030–3043. 10.1093/brain/awr216
29. Lomas, J., Pickard, L., Bester, S., Elbard, H., Finlayson, A., & Zoghaib, C. (1989). The Communicative Effectiveness Index: Development and psychometric evaluation of a functional communication measure for adult aphasia. Journal of Speech and Hearing Disorders, 54(1), 113–124. 10.1044/jshd.5401.113
30. Monsch, A. U., Bondi, M. W., Butters, N., Salmon, D. P., Katzman, R., & Thal, L. J. (1992). Comparisons of verbal fluency tasks in the detection of dementia of the Alzheimer type. Archives of Neurology, 49(12), 1253–1258. 10.1001/archneur.1992.00530360051017
31. Rees, L., Tombaugh, T. N., & Kozak, J. (1998). Normative data for two verbal fluency tests (FAS and "animals") for 1300 cognitively intact individuals aged 16–90 years. Archives of Clinical Neuropsychology, 31(1), 101. 10.1016/S0887-6177(98)90552-2
32. Sapolsky, D., Domoto-Reilly, K., & Dickerson, B. C. (2014). Use of the Progressive Aphasia Severity Scale (PASS) in monitoring speech and language status in PPA. Aphasiology, 28(8–9), 993–1003. 10.1080/02687038.2014.931563
33. Simmons-Mackie, N., Kagan, A., & Shumway, E. (2018). Aphasia Severity Rating. Aphasia Institute. https://aphasia-institute.s3.amazonaws.com/uploads/2021/03/ASR-Rating-Scale.pdf
34. Simmons-Mackie, N., Savage, M. C., & Worrall, L. (2014). Conversation therapy for aphasia: A qualitative review of the literature. International Journal of Language & Communication Disorders, 49(5), 511–526. 10.1111/1460-6984.12097
35. Stark, B. C. (2019). A comparison of three discourse elicitation methods in aphasia and age-matched adults: Implications for language assessment and outcome. American Journal of Speech-Language Pathology, 28(3), 1067–1083. 10.1044/2019_AJSLP-18-0265
36. Swinburn, K., Best, W., Beeke, S., Cruice, M., Smith, L., Pearce Willis, E., Ledingham, K., Sweeney, J., & McVicker, S. J. (2019). A concise patient reported outcome measure for people with aphasia: The Aphasia Impact Questionnaire 21. Aphasiology, 33(9), 1035–1060. 10.1080/02687038.2018.1517406
37. Swinburn, K., Porter, G., & Howard, D. (2022). Comprehensive Aphasia Test–Second Edition. Routledge.
38. Tombaugh, T. N., Kozak, J., & Rees, L. (1999). Normative data stratified by age and education for two measures of verbal fluency: FAS and animal naming. Archives of Clinical Neuropsychology, 14(2), 167–177. 10.1016/S0887-6177(97)00095-4
39. Vickers, C. P. (2010). Social networks after the onset of aphasia: The impact of aphasia group attendance. Aphasiology, 24(6–8), 902–913. 10.1080/02687030903438532
40. Volkmer, A., Copland, D. A., Henry, M. L., Warren, J. D., Varley, R., Wallace, S. J., & Hardy, C. J. (2024). COS-PPA: Protocol to develop a core outcome set for primary progressive aphasia. BMJ Open, 14(5), Article e078714. 10.1136/bmjopen-2023-078714
41. Volkmer, A., Spector, A., Meitanis, V., Warren, J. D., & Beeke, S. (2020). Effects of functional communication interventions for people with primary progressive aphasia and their caregivers: A systematic review. Aging & Mental Health, 24(9), 1381–1393. 10.1080/13607863.2019.1617246
42. Weintraub, S., Mesulam, M.-M., Wieneke, C., Rademaker, A., Rogalski, E. J., & Thompson, C. K. (2009). The Northwestern Anagram Test: Measuring sentence production in primary progressive aphasia. American Journal of Alzheimer's Disease & Other Dementias, 24(5), 408–416. 10.1177/1533317509343104
43. Whitworth, A., Claessen, M., Leitão, S., & Webster, J. (2015). Beyond narrative: Is there an implicit structure to the way in which adults organise their discourse? Clinical Linguistics & Phonetics, 29(6), 455–481. 10.3109/02699206.2015.1020450
