Author manuscript; available in PMC: 2014 Dec 8.
Published in final edited form as: Generations. 1997 Mar;21(1):10.

Attuning Assessment to the Client: Recent Advances in Theory and Methodology

Mark Luborsky 1
PMCID: PMC4259285  NIHMSID: NIHMS637200  PMID: 25505359

Show how long you take to rise from a chair and walk ten feet. Circle just one answer on a standardized test question. Spell the word world backwards. Count down from twenty to one. Describe a problem you have with incontinence or sexual performance. Ordered not to move, talk, or breathe, remain calm while strapped to a plank squeezed inside a narrow, thrumming PET scan machine.

This is some of what you can expect as the subject of a gerontological assessment. You could be a passive biological specimen whose brain or organ function is recorded, a subject who must show rather than say what you can do, or a credible “self-reporter” on a health survey—or all three.

The purpose of this article is to describe some of what has been learned about the client’s experience of the assessment process. The issues addressed relate to the experiences of people in person-to-person assessments (face-to-face or administered over the phone), but they also apply to self-administered, “paper and pencil” instruments that clients fill out themselves.

Practitioners and clinicians have long been attentive to the actual events and social interactions within the assessment process, understanding that they shape, by intention or otherwise, how clients respond to assessment tools. Fortunately, research and science are now catching up. Recent advances in theory and methodology now permit greater attuning of tests to research subjects. Advances on several fronts have revealed the need for a multidisciplinary approach. New understanding of the assessment experience has been provided by sociolinguistics and social communication (Coupland, Coupland, and Giles, 1991; Gumperz, 1982; Sacks, Schegloff, and Jefferson, 1978; Mishler, 1986). These areas of study provide a framework for examining the language expressed in social settings and also for understanding dialogue as the means of resolving differences between the “agendas” of speakers. Medical anthropology (Scheer and Luborsky, 1991; Luborsky, 1995; Luborsky, 1993) has helped to identify how “self-reports” and the meaning and experience of assessment are shaped by historical, sociocultural, developmental, and individual factors. The ways that people process information and perform during assessment situations have been identified by psychology research in both cognitive and social realms (Beisecker, 1996; Haug and Ory, 1987; Jobe, Keller, and Smith, 1995; Roter and Hall, 1992; Sudman and Schwarz, 1995). From another quarter, the advent of the Participatory Action Research model (Whyte, 1991; NIDRR, 1989), led by advocacy groups arguing for an increased voice in research, has broadened our knowledge by mandating the incorporation of consumers as part of the team in each phase of the assessment process, from design to dissemination.

To determine how research participants make sense of an assessment would require application of current scientific knowledge about social, interactional, developmental, cultural and ethnic, and historical processes. Such a Herculean task is beyond the scope of this paper. However, there are some basic areas in which a summary of current knowledge about the client’s perspective on the assessment process can provide insight and useful information for practitioners, and these areas are the focus of this paper. The discussion and accompanying examples are drawn from work in which we have identified, through close observation and analysis of verbatim transcripts of assessments, issues that critically affect the experience, and sometimes the behavior, of people who are being assessed.

ASSESSMENT IS NOT A NEUTRAL PROCESS

It should come as no surprise that the people we assess have their own ideas about the kinds of data we collect and how we go about doing it. For subjects, the translation of individual experience or knowledge into assessment data may engender strong feelings, in part because the assessment is viewed as an evaluation of the individual, often with important, even life-changing, consequences. Similarly, the person conducting the assessment acts in ways that reflect professional norms but also his or her own personal traits. In short, assessment is not a neutral process, unaffected by subjective interpretations, values, and emotions, for either the client or the assessor.

Subjects view their answers as a direct reflection of their social skills, competence, and self-image (Sudman and Schwarz, 1995; Luborsky, 1993); they know that the assessment will determine whether they will receive treatment or services, and they know that to receive them usually requires determination of some kind of deficit, whether functional, medical, or financial. As a practical point, the best assessment instruments are designed and pretested by including subjects’ evaluations of the scales and survey items, and that knowledge is built into staff training procedures for data collection.

Responses to questions, even when they are scored objectively, may be misunderstood for simple or complex reasons. At the simple level, subjects may find it hard to locate an answer on the abstract answer scale they must use (e.g., 1 to 5, or “not at all,” “somewhat,” “completely”). Each person injects a meaning or concept to define the anchors or endpoints. For example, in one study we observed that subjects create categorical contrasts to anchor their decisions about how to report pain or disability (Luborsky, 1993):

  • RESEARCHER: OK. Is that pain extreme pain, moderate, or…?

  • SUBJECT: Moderate amount.

  • RESEARCHER: Moderate amount, OK.

  • SUBJECT: As long as I can walk, it is moderate.

Thus, the decision about which reply category best describes one’s condition or situation includes judgments about what constitutes an important real-world distinction and how that distinction fits into the range of categories provided.

A more complex view is that the replies are enmeshed in and express the subjects’ subjective interpretation of the question and of the implications their answer may have for current functioning and self-image (Luborsky, 1993; Mishler, 1986). Indeed, researchers have shown that subjects use judgment and estimation even on quantitative scales (Jobe, Keller, and Smith, 1995). This fact points to one of the inherent differences in how subjects and assessors approach the assessment. While many assessors view these data as somewhat abstract knowledge, isolated from the concrete social occasions in which they are put to use, for the subjects the data have wider implications. A subject’s answers (e.g., amount of pain, physical disability, sadness or happiness, flexibility or rigidity) are perceived within a host of contexts, including the meaning of the answer for their own sense of social and personal identity, their overall feelings of wellbeing, and the settings, conditions, and wishes of daily life.

The next excerpt from an assessment interview illustrates these processes. The example is drawn from a longitudinal study of daily mood, which provided an opportunity to observe consecutive clinical diagnostic interviews with elderly subjects (Luborsky, 1993). Twelve standardized questions were used to rate pain, health, and mood (e.g., happy, anxious, sad). Subjects were instructed to answer the questions by telling the researcher their choice of five standardized answers ranging from “not at all” to “extremely.” The subject is a cognitively intact 78-year-old man. The cancer deaths of his sister, father, and wife remained keenly felt. He described much strife between his daughter, son, and himself. Two years earlier a mild stroke had left him with residual weakness in one arm. Continuing symptoms of depression led to a clinical diagnosis of dysphoria (feeling unwell or unhappy). The interviewer was a psychologist in her mid-thirties. The excerpt below began just after the subject was asked how much pain he was feeling and he had answered that he was “having excruciating pain.”

  • RESEARCHER: Right now, are you feeling sad?

  • SUBJECT: Yes.

  • RESEARCHER: You answered you felt sad. Is it extremely sad, a little bit?

  • SUBJECT: It ain’t bad. You know—when you, you ask me questions but, but when something hurts you, how could you feel happy?

  • RESEARCHER: It depends on the person. It is such a personal thing.

  • SUBJECT: Must be you are a stupid person to feel good.

  • RESEARCHER: Not necessarily. There are many different reasons. Some people have a lot of reasons to feel bad.

  • SUBJECT: You know I am not stupid!

This dialogue illustrates the complex and multiple agendas of the assessment process. Using parody, the subject challenged the researcher’s authority and even the basic rationality of the questions after he had said he was in pain: “How could you feel happy …” and “Must be you are a stupid person to feel good.” At the same time, he claimed his reply was intelligent and sensible in the context of his whole life, saying “You know I am not stupid!” He resisted the reduction of his experience and self-representation into the abstract labels needed for quantitative measurement, such as degrees of pain or of sadness (e.g., some, a little). This example shows the importance of acknowledging that subjects do not perceive answers to specific assessment questions as separate from the contexts of the setting or their life experiences.

COMMENTS PROVIDE VALUABLE INFORMATION

In the course of an assessment, subjects comment about the process they are engaged in, and these comments can provide valuable information. When asked, subjects can deliver very direct and explicit comments and reflections about their assessment experiences. Sociolinguists call this type of language metacommentary. The metacommentary from subjects provides many basic insights about an assessment topic that would not be captured with the assessment tool or might be hard to evaluate.

Unfortunately, practitioners and researchers often fail to pay attention to or record these types of comments. For the assessment session excerpted above, we compared analyses of the verbatim transcripts with the replies the clinician scored on the standardized test form. The two kinds of records of the interview show very different pictures of the subject. The view from the standardized test form suggests a nonproblematic test using questions that captured facts about the subject’s mental state. In contrast, the transcripts depict lively talk about the questions and answers. The transcribed dialogues show how individual beliefs may be forced into standardized categories and that the process of doing so involves problematic interactions even within the constraints imposed by standardized test procedures.

In the same assessment, despite repeated instructions to report his “feelings at the moment,” the subject often switched the topic in order to assert a lifelong identity and self-image rooted in his past. He also redefined the meaning of the question or the standardized answers prior to replying. In one session he ignored the instruction to state how he felt at the moment by asserting that for his whole life, he “never knew a dull moment,” and “always had a lot of energy.” These redefinitions of the questions and the answer categories reveal the personal and cultural meanings that were guiding his self-appraisal. The subject labeled his experience using personalized frames of reference instead of the fixed answer categories of the test; but his intended meanings were not encoded into the permanent record. Thus, important aspects of the intended self-representation were routinely neglected or “forgotten” in the completion of the assessment tool.

Attending to the informal side remarks that subjects make during the course of the interview can be essential to gaining an understanding of the way subjects make sense of the assessment; to providing us with important information on the validity of the assessment tool; and to identifying important areas for clinical intervention. In a study of the Bradburn Well-Being Scale (Perkinson et al., 1994), for example, we examined subjects’ comments about the questions on the tool in order to gain insight into the subjects’ interpretations of the questions. Subjects repeatedly said “no” to one item on the measure (are you “feeling on top of the world today?”). When asked about their responses, subjects said the question was too extreme or they were not sure what it meant. The subjects’ comments helped us to see that the connotation of the phrase used, which was once a popular expression and had a great deal of saliency at the time the scale was developed, had changed over time.

In another study, we went inside a rehabilitation clinic to examine how recent stroke patients were learning to function using adaptive equipment before they were discharged to their homes (Gitlin, Luborsky, and Schemm, 1997). We collected and analyzed comments the patients made about the equipment while answering the structured assessment questions about the functionality and quality of the equipment. These comments revealed not just an evaluation by the patients of the objective quality of the equipment, but that they thought it was “ugly” and it made them feel “ugly” and very visibly different or deformed by virtue of having to use it (Luborsky, 1994). Thus, comments outside the scope of the structured assessment helped us direct clinicians to be more sensitive, to better anticipate and understand people’s reactions. These patients were concerned about appearing in public with equipment that made them feel deformed or different—regardless of how “functional” it might have been.

ASSESSMENT IS A SOCIAL EVENT

An assessment needs to be viewed in part as a social event involving at least two people. The behavior of subjects and assessors is influenced not only by the specific assessment task but also by other social and cultural factors. Mishler (1986) has examined and described the social bases of research interactions with particular attention to the conversational aspects of the research process. The contribution of his work is to show the systematic ways that the participants in the room (in this case, the scientist and the subject) behave that demonstrate they are attentive to a range of ideals and constraints beyond that of merely collecting raw data by asking questions and recording answers.

This observation is also amply illustrated in our own research and that of others, in which subjects debate and reframe the task, the questions, and even the meaning or kinds of set answers provided (Luborsky, 1993; Sudman and Schwarz, 1995). Interviewees actively monitor a variety of features of the interview, including how much sense their answer makes in light of what was asked and answered before, and they often ask to be reminded of, or even seek to change, an earlier answer. Self-reports are influenced by the subject’s monitoring of previously mentioned topics, claims to a lifetime identity as opposed to a current identity, self-image management, and clumsy interviewing in which the interviewer switches topics in ways that break coherence, interfere with turn-taking, or juxtapose topics abrasively (as in the excerpt above, in which the interviewer asked a mood question immediately after the client had reported being in a great deal of pain). Typically, respondents’ ongoing reinterpretations of the questions and answers are not recorded as “data,” even though they are prominent in the active give-and-take process of translating individual experiences into standardized reply categories.

Again, because the assessment interview is enmeshed within the social and personal life of the subject, social conventions, norms, and cultural values come into play (Coupland, Coupland, and Giles, 1991). Participants focus not just on the explicit test questions; they also monitor cultural expectations for conversation and social interaction. For example, an important aspect of the assessment interview is the way the participant’s identity, social roles, and status outside of the interview influence the interview itself. This aspect is especially evident when an older person is being interviewed by a younger person, and patterns of deference and demeanor appropriate between youth and elders in the community may influence the interaction between assessor and subject.

These issues are heightened because in assessments, the assessor wields a disproportionate amount of power; standardized assessments are relatively lopsided engagements. The assessor defines the topic, the response options, and the form of the interaction, and the subjects often have little choice but to participate—in many cases, with a great deal at stake.

For this reason, the formal assessment process using standardized questions is subject to the same criticism raised about quantitative research. That is, at times assessors may seem to “dog” the informants to talk about things they do not want to discuss, or may handle the data recording and analysis in ways that erase the imprint of the informant’s own sense of what is important. For this reason, some of the better assessments, even those using structured interviews, provide avenues for the subject to contribute to assessment in ways that are culturally appropriate and that follow social conventions—for example, allowing subjects to ask questions or offer opinions, including suggesting answers suitable for a question, not interrupting as the subject speaks, and explaining the results of the assessment when known.

PERCEIVED DIFFERENCES BETWEEN SUBJECTS AND ASSESSORS INFLUENCE RESPONSES

In the 1940s, E. Evans-Pritchard (1949) formulated the principle of segmentary opposition to explain how people decide how to define their identities and roles and interact with one another. For example, two strangers meeting will work to see what social group they have in common and their relative places in it.

In our research, we have found that assessment subjects are concerned about revealing private information to people they perceive as strangers but less reluctant if they perceive themselves as having something in common with the interviewer. For this reason, it is important to understand the way the interviewers are perceived by assessment subjects.

It is common in research projects to actively recruit members of minority communities to serve as interviewers for other participants from the same community; for example, African Americans will be sought to interview African Americans, and Latinos and Hispanics will be trained to conduct the assessments of those populations. Racial/ethnic matching is done to minimize cultural differences between interviewer and research subject and has proven to be effective. However, matching subject and assessor on race or ethnicity does not guarantee that assessors will be welcomed and that assessors and subjects will understand each other, even when assessors have been trained properly.

For some subjects, racial/ethnic matching may be less relevant than other characteristics of the assessors that form the basis for perceived social differences. A study in the journal Ethos illustrates the consequences of ignoring social differences even when race or ethnicity is matched. A refugee resettlement program hired Cambodian and Vietnamese immigrants in order to ease the transition of the refugees by providing them with someone who spoke their language and knew more about their culture and the experience of resettlement. The agency directors failed to take into account that the immigrants carried with them long-standing clan prejudices from their own countries, which were exploited by the more recent immigrants hired as staff.

Similarly, if the interviewer has a nice watch and expensive clothes and is interviewing subjects in a community where these things are rare, the socioeconomic difference may lead respondents to feel that they are being judged, that their views are not being taken seriously, or that the interviewer is the one benefiting from the study. Even when obvious social class differences are minimized, subjects may distrust assessors if they are seen as part of a research project, or if the assessor comes into their room with a uniform or research materials or equipment.

CONCLUSION

Giving reports and taking tests are not new to most people. Questions and answers, interrogations, discussions, and investigations are a routine and regular part of everyday life. We routinely question each other, frequently asking one another for an evaluation or discussion of an event that just happened. At a doctor’s office, most of us have been familiar since infancy with the questions “How do you feel?” and “Where does it hurt?” From our earliest days, we are schooled to answer questions. Parents have asked, “Why did you do this?” “What happened?” “How are you?”

In some respects, there is nothing unique or even unusual about the assessment process. Some components of the subject’s experience of the assessment process will be readily recognizable in any social interaction with another person. For example, all interactions are influenced by cultural norms and expectations for behavior: taking turns in speaking, being able to ignore or change a topic that makes one uncomfortable, and other conventions by which the parties in a conversation demonstrate that they are paying attention, such as following up with appropriate questions and answers. The difference for participants in structured assessments is that many of these conventions seem to be held in abeyance, partly because of the status inequities intrinsic to the relationship between assessment subject and interviewer, and partly because of the requirements of collecting standardized information.

The purpose of this paper has been to describe some of the factors that influence the responses of subjects to the assessment process. Assessment subjects’ responses to assessment questions are influenced by their attempts to maintain or understand the assessment as part of regular, culturally determined patterns of interaction. Participants do not view assessment questions in isolation. Rather, they seek to answer specific questions in the context of their entire life experience. Practitioners need to recognize the importance of listening to all of the subject’s comments, not just those that respond specifically to a required question. Too often, the assessor exercises authority to direct a process of what can be called “selective forgetting,” dismembering experiences from their personal and social contexts in order to make “usable” data. These contexts are important aspects of the self-representation and should not be neglected or “forgotten” in the completion of the assessment tool; rather, they should be treated as useful and insightful information. Often, what the subject says outside of the research question can be more important than his or her responses to the questionnaire item. As we have seen, failure to listen to these comments and to see them in the context of the subject’s life deprives the assessor of clinically valuable information and may invalidate the data collected.

Science has learned through experience to acknowledge the voice of the client in the assessment process. For example, the first major survey to gauge the health and functioning of Americans included a long list of health-related questions. As an afterthought, to ease the transition into the formal questions, the researchers decided to start the interview with a simple, friendly question to get people oriented to the more objective survey questions about their health. This question was “How do you rate your health today? Would you say it is excellent, very good, fair, poor, or really bad?” In the many subsequent administrations of the survey since then, all of the questions that the scientists thought would be important indicators of health have proved unreliable. What has remained is that simple, relatively unassuming self-evaluation question about health status. We now know that people who say their health is poor or bad have a three-fold greater likelihood of dying than people who rate their health as good to excellent, and we know that subjects are more accurate than physicians or objective measures of health conditions (Mossey, 1995; Idler and Kasl, 1991).

Attention to such issues as the subject’s understanding and experiences of the assessment process is invaluable. Researchers are now trying to determine exactly what it is that people have in mind and are attending to when they construct and answer assessment questions. It is hoped that these kinds of efforts will continue and expand in the future.

SECTION I: THE ROLE OF CLIENTS IN THE ASSESSMENT PROCESS

The first section of the issue is designed to bring into greater clarity the client’s point of view in the assessment process. Too often assessment measures fail to do justice to the person who is assessed. This is only partly a measurement issue in the traditional sense (the validity and reliability of the assessment tool). Rather, it reflects more on measurement design and training and illustrates our own values about the role of the client in the assessment process. For example, even experienced practitioners sometimes fail to consider the client as an important source of information, instead relying on caregivers or their own judgment. This situation is also reflected in the design of many assessment measures, which usually do not clearly identify the source of information and often are not designed to record the consumer’s opinion—serious mistakes, in my view. The hope is that the articles in this section will promote greater understanding of and respect for the client in the assessment process.

S.M.G.


Footnotes

The magazine publisher is the copyright holder of this article and it is reproduced with permission. Further reproduction of this article in violation of the copyright is prohibited.

References

  1. Beisecker A. Older Persons’ Medical Encounters and Their Outcomes. Research on Aging. 1996;18(1):9–31.
  2. Coupland N, Coupland J, Giles H. Language, Society and the Elderly. Oxford: Blackwell; 1991.
  3. Evans-Pritchard E. The Nuer. Cambridge: Cambridge University Press; 1949.
  4. Gitlin L, Luborsky M, Schemm R. Emerging Concerns of Assistive Device Use by Old Stroke Patients in Rehabilitation. 1997. Unpublished manuscript. doi:10.1093/geront/38.2.169.
  5. Gumperz J. Discourse Strategies. New York: Cambridge University Press; 1982.
  6. Haug M, Ory M. Issues in Elderly Patient-Provider Interactions. Research on Aging. 1987;9:3–44. doi:10.1177/0164027587009001001.
  7. Idler E, Kasl S. Health Perceptions and Survival: Do Global Evaluations of Health Status Really Predict Mortality? Journal of Gerontology: Social Sciences. 1991;46:S289–300. doi:10.1093/geronj/46.2.s55.
  8. Jobe J, Keller D, Smith A. Cognitive Interviewing Techniques in Interviewing Old People. In: Sudman S, Schwarz N, editors. Answering Questions: Methodology for Determining Cognitive and Communication Processes in Survey Research. San Francisco: Jossey-Bass; 1995.
  9. Luborsky M. Sociocultural Factors Shaping Technology Usage: Fulfilling the Promise. Technology and Disability. 1993;2(1):71–8. doi:10.3233/TAD-1993-2110.
  10. Luborsky M. The Cultural Adversity of Physical Disability: Erosion of Full Adult Personhood. Journal of Aging Studies. 1994;8(3):239–53. doi:10.1016/0890-4065(94)90002-7.
  11. Luborsky M. The Process of Self-Report of Impairment in Clinical Research. Social Science and Medicine. 1995;40(11):1447–59. doi:10.1016/0277-9536(94)00359-2.
  12. Mishler E. Research Interviewing. Boston: Harvard University Press; 1986.
  13. Mossey J. Importance of Self-Perceptions for Health Status Among Elderly Persons. In: Gatz M, editor. Emerging Issues in Mental Health and Aging. New York: American Psychological Association; 1995.
  14. National Institute on Disability and Rehabilitation Research. Evolving Methodology in Disability Research. Rehab Brief. 1989;12(5).
  15. Perkinson M, et al. Exploring the Validity of the Affect Balance Scale with a Sample of Family Caregivers. Journal of Gerontology: Social Sciences. 1994;49(5):S264–75. doi:10.1093/geronj/49.5.s264.
  16. Roter D, Hall J. Doctors Talking with Patients, Patients Talking with Doctors: Improving Communication in Medical Visits. Westport, Conn.: Auburn House; 1992.
  17. Sacks H, Schegloff E, Jefferson G. A Simplest Systematics for the Organization of Turn-taking for Conversation. In: Schenkein J, editor. Studies in the Organization of Conversational Interaction. New York: Academic Press; 1978.
  18. Scheer J, Luborsky M. The Cultural Context of Polio Biographies. Orthopedics. 1991;14(11):1173–81. doi:10.3928/0147-7447-19911101-05.
  19. Sudman S, Schwarz N, editors. Answering Questions: Methodology for Determining Cognitive and Communication Processes in Survey Research. San Francisco: Jossey-Bass; 1995.
  20. Whyte W, editor. Participatory Action Research. Newbury Park, Calif.: Sage; 1991.
