Alzheimer's & Dementia. 2025 Aug 20;21(8):e70552. doi: 10.1002/alz.70552

Contextualization of Harmonized Cognitive Assessment Protocol (HCAP) in an aging population in rural low‐resource settings in Africa: Experiences and strategies adopted to optimize effective adaption of cognitive tests in Kenya

Roselyter Monchari Riang'a 1, Eunice Muthoni Mwangi 1, Niranjani Nagarajan 2, Felix Agoi 1,3, Patrick N Mwangala 4, Alden L Gross 5, Jean Ikanga 6,7, Kenneth M Langa 8,9, Edward Miguel 10,11, Muthoni Gichu 12, Joshua R Ehrlich 2,8, Anthony K Ngugi 1
PMCID: PMC12367447  PMID: 40835849

Abstract

Cross‐cultural adaptation of cognitive assessments is crucial for detecting Alzheimer's disease (AD) and related dementias (ADRD) in aging populations. This study documents the adaptation of the Harmonized Cognitive Assessment Protocol (HCAP) for a pilot study of the Longitudinal Study of Health and Aging in Kenya (LOSHAK) in rural Kilifi County, Kenya, highlighting challenges and strategies for optimizing outcomes. As part of the LOSHAK feasibility phase, cognitive tests, including the Swahili Mental State Examination, 10‐word recall, animal naming, story recall, clock drawing, and making change, were administered to 202 participants (≥45 years) from the Kaloleni/Rabai Health and Demographic Surveillance System (KRHDSS). Measures were adapted culturally and linguistically, and trained local enumerators conducted home‐based assessments. Low literacy (60.1% had no schooling), linguistic diversity, cultural norms, and infrastructure limitations influenced assessments. Key adaptations included translation, culturally relevant modifications, flexible administration, and community engagement. Contextualized cognitive assessments improve validity in rural resource‐limited settings, offering insights for future research.

Highlights

  • This narrative qualitative study documents the adaptation and contextualization of the Harmonized Cognitive Assessment Protocol (HCAP) for a pilot study of health and aging, the Longitudinal Study of Health and Aging in Kenya (LOSHAK), in rural Kilifi County, Kenya, focusing on the challenges encountered and the strategies adopted to optimize outcomes.

  • HCAP tests were administered as part of the feasibility and pilot phase of the LOSHAK, the aim of which was to validate measures and optimize data collection procedures.

  • The median age of the 202 study participants was 64, with 57.4% being female. A majority (62.5%) were not currently working, and nearly 70% fell within the two poorest wealth quintiles.

  • The rural setting presented unique challenges including:

    1. low literacy rates (60.1% of participants had no schooling),

    2. diverse language use (primarily Giriama and Swahili),

    3. limited infrastructure (e.g., 44.8% of households had electricity),

    4. restrictive cultural norms that influenced data collection (e.g., in‐home interviews were often conducted outdoors, with frequent distractions).

  • Key adaptations included:

    1. translating and culturally adapting test items (e.g., using local Swahili dialects and culturally relevant examples),

    2. recruiting and training local enumerators who were familiar with the community, culture and language,

    3. conducting iterative pre‐tests and using role‐play to ensure that enumerator scoring was consistent, accurate, and reliable, and

    4. contextualizing the tools and data collection strategies, including adjusting data collection methods to accommodate cultural practices and environmental limitations; for example, allowing respondents to use their preferred language for animal naming, using paper and pen for test items that participants found difficult to complete on tablets, paying close attention to the time and season of administration, and adopting strategies to minimize background noise and other environmental distractions (e.g., due to rain, lunch hour, planting and harvesting, and school holidays). Prior to data collection, it was essential to engage local community health volunteers (CHVs), build rapport with participants, explain the study, and describe the idiosyncrasies of cognitive testing.

  • The study emphasizes the importance of cultural sensitivity, linguistic appropriateness, and community engagement in cognitive assessment in diverse, resource‐constrained settings.

  • The findings offer practical recommendations for researchers aiming to conduct cognitive assessments in similar populations, contributing to the development of culturally sensitive and effective tools for understanding and addressing cognitive health in aging populations.

Keywords: aging, Alzheimer's disease, assessment adaptation, cognitive assessment, cultural adaptation, dementia, field experiences, HCAP, Kenya, low‐resource settings, Sub‐Saharan Africa

1. INTRODUCTION

Age is the number one risk factor for cognitive decline and dementia, and today, adults aged 60 and older comprise 13.6% of the total global population. 1 In particular, Sub‐Saharan Africa (SSA) is experiencing a rapid demographic transition. 1 Currently, 5.6% of Kenya's population is 60 years and over, though this proportion is expected to nearly quadruple by 2050, making it one of the most rapidly aging countries in SSA. 1

Cognitive impairment and dementia are major public health concerns, especially among a rapidly aging population, 2 as they can profoundly impact the quality of life and well‐being of older adults and their communities. 3 , 4 The prevalence of cognitive dysfunction among older persons is increasing globally, especially in low‐resource settings 5 where access and resource allocation to healthcare are limited. 6 , 7 Risk factors for cognitive impairment such as chronic illnesses, psychological stress and trauma, and malnutrition, are also high among older adults in low‐ and middle‐income countries (LMICs). 8 Early diagnosis and treatment of cognitive impairment are key to improving quality of life, decreasing the risk of progression to dementia, and promoting optimal aging. 9

Harmonized test batteries such as the Harmonized Cognitive Assessment Protocol (HCAP) are commonly used to assess cognitive functioning and identify suspected cases of dementia. 9 These tests identify persons who need a full dementia evaluation. They can also inform interventions aimed at preserving cognitive health. The Mini‐Mental State Examination (MMSE), one of the assessments within the HCAP battery, is among the most widely used tools to screen for cognitive impairment, including dementia. 10 It features 11 tasks or questions that assess seven key cognitive domains and typically takes about 5 min to complete. The test yields a maximum score of 30, with healthcare professionals evaluating the score based on their observation of the individual's performance on each task. The results help clinicians categorize the severity of dementia as follows: 25–30, no impairment; 20–24, mild dementia; 13–20, moderate dementia; 12 or below, severe dementia.
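
For illustration only, the short sketch below (Python is used for all sketches in this article) encodes these severity bands as a simple lookup. The boundaries follow the text above, which places a score of 20 in both the mild and moderate ranges; the sketch assigns 20 to the mild band as an arbitrary choice.

```python
def mmse_severity(score: int) -> str:
    """Map a total MMSE score (0-30) to the severity bands quoted above.

    Illustrative only: the text's bands overlap at 20, and here 20 is
    treated as mild dementia. Clinical interpretation also weighs
    education, language, and observed task performance.
    """
    if not 0 <= score <= 30:
        raise ValueError("MMSE total scores range from 0 to 30")
    if score >= 25:
        return "no impairment"
    if score >= 20:
        return "mild dementia"
    if score >= 13:
        return "moderate dementia"
    return "severe dementia"


print(mmse_severity(27), mmse_severity(16))  # no impairment, moderate dementia
```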

While the MMSE is known to be influenced by participants’ educational attainment, potentially leading to misclassification of cognitive status in low‐literacy populations, 11 it remains widely used. We adopted it in this study because one of our primary objectives was to develop cognitive tests that are harmonized with other Health and Retirement Study (HRS) studies globally. Thus, these studies were a starting point for choosing and adapting measures in the Longitudinal Study of Health and Aging in Kenya (LOSHAK). We are also confident that our robust piloting, adaptation, and iterative field pre‐testing of the modules (as detailed in this article) mitigated misclassification of cognitive status in our population. In addition, our validation results (excellent internal consistency of each scale [McDonald's omega ≥ 0.75] and robust confirmatory factor analysis models with strong factor loadings and good fit statistics) 12 provide evidence of good performance of the adapted Swahili Short Mental State Examination (SMSE) module in this setting.
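
For readers unfamiliar with the statistic, the sketch below shows the usual single‐factor formula for McDonald's omega, ω = (Σλ)² / [(Σλ)² + Σθ]. The loadings shown are hypothetical illustrations, not LOSHAK estimates, which are reported in the separate validation manuscript. 12

```python
def mcdonalds_omega(loadings, error_variances=None):
    """Single-factor McDonald's omega: (sum of loadings)^2 divided by
    (sum of loadings)^2 plus the sum of error variances.

    If error variances are not supplied, loadings are assumed to be
    standardized and errors are taken as 1 - loading**2.
    """
    if error_variances is None:
        error_variances = [1 - lam ** 2 for lam in loadings]
    common = sum(loadings) ** 2
    return common / (common + sum(error_variances))


# Hypothetical standardized loadings, not the study's estimates:
print(round(mcdonalds_omega([0.70, 0.65, 0.60, 0.72, 0.55]), 2))  # ~0.78
```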

The effective administration of HCAP batteries in diverse cultural and socio‐economic contexts presents unique challenges, as these tools are rooted in Western paradigms. Current HCAP batteries often rely heavily on English, require written responses, reflect formal educational models, and assume the presence of infrastructure like electricity and furniture. 9 , 11 , 13 , 14 Administering these tests in low‐resource, low‐literacy communities—characterized by limited English proficiency, minimal schooling, and distinct cultural norms—complicates interpretation and cross‐population comparisons of cognition.

Despite these barriers, adapting cognitive tests for low‐resource settings is essential for population screening, early diagnosis, and timely intervention. Harmonized test batteries like HCAP are critical for cross‐contextual comparisons, highlighting the importance of cultural contextualization.

Growing attention has focused on the limitations of cognitive assessments in non‐Western populations. 11 , 13 , 14 , 15 , 16 , 17 Prior studies emphasize selecting or adapting tests with content that is relevant, familiar, and minimally dependent on language, literacy, or numeracy. 15 , 18 Recommendations include clear instructions, the use of prompts and feedback, performance‐based assessments with demonstrations and practice, and tests that are brief and portable. 11 , 14 , 15 , 16 , 18 Cultural sensitivity and linguistic variation are also vital for ensuring the reliability and validity of test outcomes. 11

However, limited research exists on the practical challenges of administering cognitive tests in culturally diverse and resource‐constrained settings, especially among older adults in rural SSA.

This article documents the adaptation of HCAP batteries in rural, low‐resource, and culturally diverse settings in Kenya, within the LOSHAK pilot and feasibility study. 19 Using Bernal et al.’s Ecological Validity Model (EVM), 20 we describe how the batteries were contextualized and offer practical recommendations to enhance their cultural relevance, validity, and reliability—ultimately informing evidence‐based interventions for aging populations in similar environments.

While one of the overarching goals of this project was to validate the newly culturally adapted version of the HRS survey instruments, the detailed methodology and findings related to the validation process are beyond the scope of the current paper. Details on the psychometric validation of the measures, including reliability and factor structure analyses, have been submitted for publication in a separate manuscript. 12 The current paper focuses on the implementation and field testing of the adapted instruments, with an emphasis on experiences and strategies adopted to optimize effective implementation.

1.1. Project description

LOSHAK is a small‐scale (n = 202) feasibility and pilot study conducted in Kenya, aimed at adapting, validating, and piloting existing HRS measures. 19 Its purpose was to inform the development of new survey modules and data collection protocols for a future full‐scale, population‐based LOSHAK. This cross‐sectional survey targeted adults aged ≥45 years. The broader goal of the LOSHAK parent study is to establish a longitudinal panel examining socio‐economic and environmental aspects of health and aging in Kenya. It seeks to understand critical domains and track trends, trajectories, and risk factors for late‐life disease, disability, and economic distress at the population level. 19

The pilot study employed HCAP batteries, widely used in other HRS studies, including in LMICs. These cognitive measures covered orientation to time and place, memory, language fluency, executive function, and visuospatial orientation. The tests piloted included the Swahili SMSE, 10‐word list recall (immediate and delayed), Animal Naming, Brave Man story recall, Clock Drawing, and Making Change (a basic arithmetic task).

Several tests were adapted from the Longitudinal Aging Study in India (LASI), 21 including the SMSE, derived from the Hindi Mental State Examination (HMSE), and the 10‐word recall test. We selected culturally and linguistically relevant terms—for example, using “Milk” and “Car” instead of “Butter” or “Boat.” India, an LMIC with diverse linguistic and cultural contexts and lower literacy levels, shares similarities with Kenya, making these tools more readily adaptable.

Additionally, the battery included globally validated tests such as Animal Naming, Brave Man Recall, and Clock Drawing, which have demonstrated strong reliability in cognitive assessments worldwide.

2. METHODS

2.1. Research design

This study used a narrative qualitative research design documenting the experiences and strategies adopted to optimize effective adaptation of cognitive tests in LOSHAK.

2.2. Theoretical framework and study setting

This study applied Bernal et al.’s (1995) EVM 20 to contextualize the HCAP batteries for the LOSHAK study in Kenya. EVM emphasizes adapting innovations—like cognitive assessments—to the cultural, linguistic, and socioeconomic context of ethnocultural groups, avoiding a one‐size‐fits‐all approach. The model outlines eight cultural dimensions: language, persons, metaphors, content, concepts, goals, methods, and context. These dimensions guided how the HCAP tools were tailored to older adults in rural Kenya.

Specifically, we used language‐appropriate assessment tools and considered the relationships among participants, study staff, and the community. Metaphors relevant to local culture, content reflecting values and traditions, culturally relevant goals, adaptive methods, and the local context—such as economic and social realities—were all incorporated into the study design and implementation.

The study was conducted in Kilifi County, a rural, semi‐arid region along the Kenyan coast. It spans 12,246 km2 with a population of over 1.1 million people, most of whom live in sparsely distributed households. 22 According to the 2019 Census, 22 70% of Kilifi residents live below the poverty line, 81% rely on rain‐fed subsistence farming, and 68% are illiterate. The region also faces high food insecurity (48%) and poor road infrastructure, with access becoming difficult during rains due to flooding and landslides caused by loose sandy soil. The predominant community is Giriama‐speaking, with Swahili also commonly spoken, though English proficiency is low.

Participants were selected from the Kaloleni/Rabai Health and Demographic Surveillance System (KRHDSS) 23 in coastal Kenya. Trained enumerators conducted home‐based interviews and cognitive assessments using culturally adapted tools in participants’ native language.

2.3. Study participants, sampling, and data collection procedures

Participants were drawn from the KRHDSS 23 in coastal Kenya, and data collection was performed in participants’ homes by trained enumerators. KRHDSS operates in two administrative units (sub‐counties) in Kilifi County on the coast of Kenya, mapped onto 10 Community Health Units (CHUs) of the Ministry of Health (MoH), the community‐level tier of the health services. We purposively selected two CHUs in each sub‐county, one rural and one urban/semi‐urban, and sampled proportionate to the population distribution across the age categories 45–54, 55–64, and 65+ years, which also ensured a near‐even male:female split within each sub‐county. This facilitated comparisons of measures between rural and urban/peri‐urban settings, between age groups, and by sex. More details on the study participants, sampling approach, and data collection procedures of this project are presented in the protocol manuscript for the main LOSHAK feasibility study. 19 In total, the study included 202 participants, whose demographics are detailed in Table 1.
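
The exact sampling frame and counts are given in the LOSHAK protocol paper; 19 purely as an illustration of the allocation logic described above, the sketch below distributes a target sample proportionately across age strata and splits each stratum near‐evenly by sex, using hypothetical population counts.

```python
def allocate_sample(target_n, stratum_counts):
    """Proportional allocation of target_n across age strata, then a
    near-even male/female split within each stratum.

    stratum_counts maps age band -> population count in the sampling
    frame (hypothetical figures below). Rounding may leave the total a
    person or two off target_n and would need a final adjustment.
    """
    total = sum(stratum_counts.values())
    allocation = {}
    for band, count in stratum_counts.items():
        n = round(target_n * count / total)
        allocation[band] = {"male": n // 2, "female": n - n // 2}
    return allocation


# Hypothetical counts for one sub-county, not KRHDSS figures:
print(allocate_sample(100, {"45-54": 1200, "55-64": 800, "65+": 600}))
```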

TABLE 1.

Demographic characteristics of the study participants

Parameter Level Overall (n = 202)
Age range (y) 54.00–73.00 (Median 64.00)
Gender Male 86 (42.6%)
Female 116 (57.4%)
Marital status Married 136 (67.0%)
Widowed 56 (27.5%)
Divorced/single 11 (5.4%)
Schooling Attended school 81 (40.1%)
Did not attend any school 121 (59.9%)
Household size Mean (SD) 6.4 (3.2)
Has electricity in the household Yes 91 (44.8%)
Wealth quintile 1 69 (34.2%)
2 71 (35.1%)
3 27 (13.4%)
4 24 (11.9%)
5 11 (5.4%)
Currently working Yes 76 (37.6%)
No 126 (62.4%)

3. RESULTS

3.1. Demographic characteristics of the study participants

The median age of the 202 study participants was 64 years, and 116 (57.4%) were female. The majority of participants (62.4%) were not involved in any economic activity, and almost 70% were in the two poorest wealth quintiles.

3.2. Field experiences, learned lessons, and recommendations

We specifically focused on the field experiences encountered during the enumerator recruitment and training, home set‐up where the data collection took place, administration of cognitive tests, and data quality checks.

3.2.1. Enumerator recruitment and training

Recruitment of enumerators: It was important to recruit enumerators from the local community from which the sampled persons were drawn; accordingly, the four enumerators were recruited locally to enhance cultural and linguistic concordance. During training, these enumerators were helpful in ensuring that the linguistic translation and the intended meaning of the cognitive tests and other survey measures were locally contextualized.

Training of enumerators: Enumerators underwent intensive specialized training on how to administer and interpret the cognitive tests. They were certified in the administration of each of the eight cognitive tests before field implementation. The training covered potential pitfalls of administering cognitive tests in low‐resource settings, and the sensitivities involved in administering the tests to ensure reliability and validity. It also covered the cultural expectations of the local communities on matters such as dress code, gender relations, and home setup for an optimal interview experience. Ensuring that enumerators understood the goals of each test and what was being measured was vital for accurate data collection.

We found that working with enumerators who had health‐related training and experience (e.g., nurses, clinical officers, and laboratory technicians), particularly those providing care in the local health facilities, also enabled a smooth training experience (e.g., in administering clinically oriented items, health assessments, and biomarker sample collection). In addition, the fact that the enumerators were known and respected healthcare workers enhanced endorsement and acceptance of the study by the local community, and their involvement was highly encouraged by the local health authorities, who viewed it as a form of research capacity building.

The time taken to train enumerators was crucial for the successful administration of cognitive tests. Conducting several pre‐tests and using role‐play helped ensure enumerator scoring was consistent, accurate, and standardized. Harmonization of scoring across enumerators was done to ensure consistency and accuracy in how different enumerators recorded and interpreted responses during cognitive test administration. This was achieved during the training phase through structured activities, including repeated pre‐tests and role‐plays, where all enumerators observed and independently scored the same respondent. This allowed for direct comparison and discussion of scoring differences.

Through facilitated debriefing sessions, discrepancies were identified and addressed, fostering a shared understanding of scoring criteria and interpretation. This iterative process helped standardize their scoring approaches. The effectiveness of this harmonization was assessed informally by observing the convergence of scores over successive practice rounds and through consensus‐building discussions led by the training facilitators. The enumerator training and pre‐test exercise took twelve working days.

Enumerator training team: The enumerator training team comprised a diverse array of professionals spanning disciplines such as social science, epidemiology, cognitive psychology, health systems, and linguistics. This multidisciplinary approach ensured comprehensive training, enriched by diverse perspectives, thereby enhancing the overall effectiveness of the training package. Before conducting enumerator training, a member of the training team had participated in a comprehensive 2‐week HRS enumerator training workshop in Michigan, United States. This invaluable experience provided insights into the meticulous process of training enumerators for cognitive tests.

3.2.2. Home set‐up experiences and cultural issues

Culturally adapted cognitive tests: The cognitive tests were adapted to the local language and culture of study participants by translating the tests into the local language (Swahili) and using examples relevant to the participants' culture. For instance, words and phrases were adapted to align with local experiences: “butter” was replaced with “milk,” and “home address” was modified to “describe the location of your home from the nearby town,” among other context‐appropriate substitutions. The tools were also pre‐tested in the community with a small sample (10 households) of target participants. The objective was to assess the cultural appropriateness, clarity, and practical feasibility of the assessments in real‐world conditions. Modifications to the tools were considered if community participants consistently misinterpreted questions, if items appeared culturally inappropriate or irrelevant, or if the administration process presented logistical challenges.

Selecting enumerators from the local community was important, as they consistently provided feedback during training that was used to further contextualize the study tools. Importantly, there are variations of the Swahili language across different parts of Kenya; for example, in urban areas like Nairobi, Sheng (a fusion of Swahili with English and native languages) is common, while a purer form of Swahili is spoken in the coastal region (where the study was conducted). This necessitated the use of local Swahili‐speaking healthcare workers to validate the translation of the tests and the contextualization of the language.

3.2.3. Sitting area, taboos, and in‐home interview set‐up

Houses: Most of the houses in the local Giriama community have no windows and lack electricity, and thus rely on the doorway for lighting, making them dark and unconducive to data collection. It is also a taboo for visitors (including enumerators) to be invited inside the houses. Therefore, the appropriate and designated sitting area for visitors is typically outside the house under the shade of a tree. This sitting arrangement also provides relief from the intense heat in the region. As a result, in most cases data collection was conducted in this setting, which exposed the respondents to distractions from passersby, grazing animals, and noise from animals and playing children.

In cases where data collection was conducted outside and under the shade of a tree (nearly all cases), many approaches were taken to reduce distractions. Among other strategies, the enumerators requested the respondent to sit facing the less distracting side (away from the gate, kitchen area, animals, or children playing) and made alternate arrangements for children to play far from home during data collection.

Understanding and respecting local cultural norms, including dress code and gender interactions, was essential. During the pre‐test study, it was noted that male partners were sensitive and suspicious when male enumerators administered questionnaires to their female partners/spouses, and vice versa. Partners were observed walking aimlessly around the interviewing area, which often distracted the respondents. Conducting data collection inside the house in such circumstances would have been taboo. Therefore, during the actual data collection, same‐gender enumerators were used.

Female enumerators were highly encouraged to wear long, loose, free‐flowing dresses (locally known as Dera) as opposed to pants (trousers) or tightly fitting clothes, as the former are considered respectful in the local culture.

Building rapport: It was important to take time to build rapport with the participants and to explain the study and the sensitivity involved in conducting cognitive tests before the day of the actual interview. We contacted the Community Health Unit (CHU) leaders and community health volunteers (CHVs), who are well known and highly regarded by the local community, to locate households whose members had been selected for inclusion in the study and to create awareness and sensitize the participants to the study measures before the actual day of data collection. This helped to ensure that participants were comfortable and understood what their participation would entail. It also enabled the spouse and other members of the household to give space and avoid distracting the study participant. On the day of data collection, the CHVs assisted in locating the homesteads and approached the head of the household; after introducing the enumerator, they requested that he/she allow his/her spouse or other selected family member to be interviewed. The enumerator then proceeded to explain the purpose of the study and seek consent from the study participant.

Seasons and household considerations: Administering cognitive tests requires a distraction‐free environment to obtain accurate responses. Therefore, administering tests in people's homes requires consideration of the time of day and the season of the year that are most conducive to successful data collection. We opted to avoid administering cognitive tests during busy seasons such as during the rains, planting, harvesting, lunch time, or school holidays.

Data collection during the rainy season should be avoided where possible. During the data collection period (April–May), the county experienced rainfall, a time during which farming households planted their crops, which decreased the availability of respondents. In such instances, enumerators had to arrange with the sampled persons to return at a more appropriate time, which delayed study completion and increased field travel costs. Flexibility in scheduling and the involvement of CHVs, who were familiar with household schedules, helped mitigate these difficulties and supported participant retention in the study. Rain not only makes roads impassable and homesteads inaccessible, but also causes noise from iron‐sheet roofs, poses challenges for sheltering from the rain (since enumerators were typically not allowed inside the houses), and diverts participants’ attention, as they may need to rescue their assets from potential rain damage or to complete agricultural activities during this time.

Similarly, this study was conducted during school holidays. Visitors (enumerators) attracted the curiosity of many children from the neighborhood, who would often hover around the interview scene, causing distraction and respondent discomfort. Therefore, cognitive tests should be scheduled to minimize overlap with school holidays and be conducted during school hours.

We also avoided conducting cognitive tests during the lunch hour when the respondent may be hungry or expected to prepare lunch for other household members. Mid‐morning and mid‐afternoon were the ideal times to collect data when children were at school and the sampled persons had typically completed household and farming chores.

3.2.4. Administering cognitive tests

The animal naming question: Enumerators needed to write down animal names quickly to avoid missing animals mentioned, as well as repetitions and incorrect responses. Training and practice were crucial. Using abbreviations for animal names and writing them out in full after the module was completed was more effective. Allowing respondents to use their preferred language for the animal naming question was necessary, as some animals were more easily recalled in local languages. During scoring, great care was taken to check whether an animal was referred to multiple times using different languages. Enumerators formed a WhatsApp group, which allowed them to consult and identify or confirm animals named in languages other than Swahili.
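
As a minimal sketch of the cross‐language scoring concern described above, the snippet below maps responses given in different languages onto one canonical animal so that the same animal named twice (say, once in Swahili and once in English) is credited only once. The lookup table is a small hypothetical example, not the study's actual reference list.

```python
# Hypothetical cross-language lookup (response -> canonical animal);
# in practice such a list would be built and agreed during training.
SYNONYMS = {
    "simba": "lion", "lion": "lion",
    "ng'ombe": "cow", "cow": "cow",
    "mbuzi": "goat", "goat": "goat",
}


def score_animal_naming(responses):
    """Count unique valid animals, ignoring repeats across languages and
    responses that are not recognized as animals."""
    named = set()
    for word in responses:
        canonical = SYNONYMS.get(word.strip().lower())
        if canonical:
            named.add(canonical)
    return len(named)


print(score_animal_naming(["simba", "lion", "mbuzi", "kiti"]))  # 2 ("kiti", a chair, is not credited)
```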

Administration of tests requiring writing: Most households did not have tables that could be used by participants to undertake cognitive tests that required writing or drawing. In this case, carrying a clipboard for conducting the tests was necessary.

While tests were administered using the ODK app on the tablet, drawing tests necessitated the use of paper. Enumerators needed easy access to materials like plain paper, a watch, and a pencil to avoid delays in retrieval that could distract the attention and memory of the respondent.

Adjustments, such as printing the interlocking pentagon image on paper and having respondents copy the drawing below it, reduced challenges related to screen time‐outs, poor visibility due to glare on the screen, and distractions. In some cases, respondents unfamiliar with electronic tablets were nervous about breaking the screen and thus drew very faint images. In other cases, some had callused fingers that made it difficult to draw on the screen. These issues might have been resolved by using a stylus, which was not procured for the study.

Constant reminder of test expectations: Our survey included cognitive tests for which the enumerators could not repeat instructions (e.g., tests requiring uninterrupted instructions, such as the 10‐word recall and story recall tests). Requesting respondents' full attention and informing them before starting the test that the instructions would not be repeated helped to address this challenge.

Tests that required timing: For timed tests, programming a timer into the tablet‐based module that the enumerator could activate when the respondent started and ended the test was important. This enhanced uniformity in the timing of cognitive function tests, minimizing errors and distractions. We found that using a cell phone timer disrupted test administration due to incoming calls or text messages.

SMSE: Which season of the year is this? Understanding the diverse seasons in Kenya, and particularly in the study location, was key. In this study, cultural seasons such as circumcision, farming (harvesting or planting), and religious seasons (Ramadhan, Christmas, Easter), events such as election season or school holidays, and weather seasons (short rains, dry season, long rains) were all considered potentially appropriate responses.

SMSE: What is your physical address? In rural Kenya, an address is normally associated with a letter‐mailing address (post office box number). During the pre‐test, most respondents reported that they do not have a postal mailing box. In addition, there are no street or house numbers in rural areas, which made it difficult for respondents to provide an exact physical address. In this context, the question about physical address for rural households was rephrased, and the respondent was asked to describe directions to the household location from the nearest town, market, or school.

SMSE: Word List Recall question: The method for ranking and recording words in the 10‐Word List recall test needed to be discussed and aligned with the administration process for efficiency and accuracy. In this case, ODK was programmed to order the words as they were mentioned during the interview.
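
Independently of how the ODK form itself was programmed, the sketch below illustrates the underlying record: recalled words are kept in the order mentioned, and the score counts distinct words that appear on the administered list. The English word list shown is illustrative, as the study used an adapted Swahili list.

```python
def score_word_recall(administered, recalled_in_order):
    """Keep the recall order as recorded and count distinct correctly
    recalled words; intrusions and repeats receive no credit."""
    administered_set = {w.lower() for w in administered}
    credited = set()
    for word in recalled_in_order:
        w = word.lower()
        if w in administered_set:
            credited.add(w)
    return {"order": list(recalled_in_order), "score": len(credited)}


# Illustrative English words; the study used an adapted Swahili list.
print(score_word_recall(["milk", "car", "house", "tree"],
                        ["car", "milk", "car", "dog"]))
# {'order': ['car', 'milk', 'car', 'dog'], 'score': 2}
```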

Clock drawing: Clock drawing in the study was aligned with the local linguistic context. Specifically, it was tailored for the Swahili‐speaking participants. In this adaptation, the conventional process of clock reading was reversed to accommodate linguistic nuances. For instance, in Swahili, 7 o'clock translates to 1 o'clock, and vice versa. This modification ensured that the assessment accurately captured cognitive abilities within the cultural and linguistic nuances of the participants, promoting a more culturally sensitive and contextually relevant evaluation.
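
Swahili time counts hours from sunrise rather than midnight, so clock hours are offset by six from the Western convention (Western 7 o'clock is saa moja, “1 o'clock”). A minimal sketch of the conversion that administration and scoring had to keep in mind:

```python
def western_to_swahili_hour(hour: int) -> int:
    """Convert a Western clock hour (1-12) to the Swahili clock hour.

    The two conventions are offset by six hours on a 12-hour dial, so
    Western 7 -> Swahili 1, Western 1 -> Swahili 7, and noon -> 6.
    The mapping is its own inverse.
    """
    if not 1 <= hour <= 12:
        raise ValueError("expected an hour between 1 and 12")
    return (hour + 6 - 1) % 12 + 1


assert western_to_swahili_hour(7) == 1
assert western_to_swahili_hour(1) == 7
assert western_to_swahili_hour(12) == 6
```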

Story memory: Brave Man Recall: The East Boston Memory Test, featuring recall of a story about a brave man, underwent contextualization. The narrative adopted native names, and we integrated elements that were familiar and meaningful within the local context to enhance cultural resonance. For example, instead of referencing a man “walking down the street when he saw fire,” we described how “three children were alone at home and the house caught fire.” Similarly, rather than using the phrase “the brave man ran to a nearby alarm and called the fire department,” we adapted it to “a brave man managed to climb in a back window and carry them to safety,” actions that are more recognizable in rural settings.

3.2.5. Other observations

Low literacy rates: A majority of respondents in this study did not have any formal schooling and were unable to read or write with any degree of proficiency. This made it difficult to administer some cognitive tests that required literacy. In this case, we used symbolic communication rather than telling the respondent to read. One such example was in some of the SMSE questions, where a respondent was asked to “look at me and do as I do” instead of “read and do this.” Another adaptation of SMSE was to ask participants to “tell me something about your house” instead of “write any complete sentence on this piece of paper describing your house.”

Difficult access to homesteads: Homesteads were sparsely distributed, with poor road access for vehicles or motorbikes (boda boda), and roads are often impassable, especially during the rainy season. Enumerators were compelled to walk for 30 minutes to 1 hour to reach some homesteads. In such cases, we planned for adequate travel time during data collection.

4. DISCUSSION

4.1. Using the EVM to adapt HCAP batteries in LOSHAK

4.1.1. Person

The LOSHAK study considered participant and enumerator characteristics, emphasizing the importance of recruiting local enumerators. Local enumerators were recruited to enhance cultural and linguistic concordance, ensuring that cognitive tests and survey measures were appropriately contextualized.

Interviewer‐respondent familiarity can yield better outcomes in sensitive topics such as contraceptive use and age at first sex. 24 However, its impact on data quality remains debated—some studies suggest familiarity lowers item non‐response. 25 , 26 Interviewers themselves perceive familiarity to affect data collection but not responses, with respondent personality and willingness to disclose information playing a more significant role. 27 These findings underscore the importance of strategic interviewer selection, particularly in low‐ and middle‐income country surveys.

4.2. Language

Cognitive test questions in LOSHAK were translated into Swahili and adapted with culturally relevant examples by a native Swahili speaker. The Swahili translation was pre‐tested in the study community to validate linguistic and cultural appropriateness, with refinements made based on feedback from local enumerators.

Effective assessment requires culturally sensitive language to ensure tools are communicated as intended. Translation alone is insufficient; careful attention must be given to culturally resonant terminology, particularly when working with regional or sub‐cultural communities. 28 Scholars emphasize that without appropriate language use, implementation of assessment tools becomes challenging. 29 , 30

4.3. Metaphors

Contextualizing metaphors is essential in study implementation. In LOSHAK, culturally relevant metaphors were integrated, particularly in the Story Memory: Brave Man Recall test, where the East Boston Memory Test was adapted to include native names and locally meaningful elements to enhance cultural resonance.

Studies indicate that incorporating familiar metaphors, such as traditional sayings, improves engagement and effectiveness of interventions, reduces resistance, increases motivation, and reframes problems in culturally meaningful ways. 31 By aligning study tools with culturally embedded symbols, access to effective services is improved for underserved communities.

4.4. Content

Content, or cultural knowledge, 20 was integral to the LOSHAK study. Familiarization with cultural values such as personal space, time orientation, and gender roles ensured respectful engagement with participants. Enumerators adhered to local norms, including dress codes, gender interactions, and seating arrangements.

Cultural knowledge plays a crucial role in health interventions and research design for underrepresented populations. 32 , 33 Researchers emphasize that standard methods may not align well with diverse communities, necessitating the integration of cultural factors for increased relevance, impact, and sustainability. 34 Cross‐cultural principles, rather than ethnic‐specific knowledge, are essential for effective study execution. 32 , 35

4.5. Concepts

Concepts refer to constructs within study tools. 20 In LOSHAK, concept consonance was carefully evaluated for cultural sensitivity before adoption. A key adaptation involved the SMSE Clock Drawing test, where Swahili time conventions were considered—for example, 7 o'clock translating to 1 o'clock—to ensure accurate assessment and scoring.

4.6. Goals

Framing study goals within local values, customs, and traditions is crucial for effective outcomes. The LOSHAK study's goals aligned with those of participants and the Ministry of Health. Participants were eager to understand their cognitive health, while the Ministry sought to identify individuals at risk of cognitive impairment.

Cultural values significantly shape beliefs about health and illness, often contrasting with biomedical perspectives. 36 Misalignment between cultural and clinical perspectives can hinder therapeutic goals, particularly for elderly and non‐Western populations. Ensuring that study objectives resonate with community priorities fosters engagement and meaningful participation.

4.7. Methods

Methods refer to procedures for achieving study goals. 20 In LOSHAK, techniques were aligned with participants’ cultural contexts to improve data collection effectiveness. Strategies included building rapport with the community through health promoters, intensive enumerator training, and modifying tools and implementation modalities for language congruency and reliable responses.

Aligning scientific methods with cultural contexts enhances study validity. Community‐based participatory research and the integration of Indigenous knowledge with Western paradigms are increasingly recognized as essential. 37 Common strategies include engaging stakeholders, reviewing materials, and adapting contextualized metaphors and concepts. 38 Such culturally tailored approaches improve data quality and support health equity. 34

4.8. Context

The social, economic, and political contexts of a study site are critical considerations in research and health innovations. 20 In LOSHAK, contextual parameters—such as literacy levels, household economic status, seasonal and environmental factors, community engagement, and cultural considerations—were carefully addressed during data collection to minimize disruptions and enhance data reliability.

Research indicates that contextual elements such as physical surroundings, staff interactions, and community atmosphere significantly influence interview results. 39 A multiple case study of seven randomized controlled trials (RCTs) identified four key context elements—personal, organizational, trial, and problem context—that are essential for assessing study effectiveness and generalizability. 40 Local institutions also play a critical role in the success of community‐based research, as studies that engage with them tend to be more effective. 41 Understanding these contextual factors enhances the internal validity and applicability of research findings.

By applying the EVM framework, LOSHAK successfully contextualized the HCAP batteries, ensuring cultural relevance and improving data quality in cognitive health assessments. This approach highlights the necessity of integrating local cultural knowledge into cognitive assessments for meaningful and impactful research outcomes.

5. RECOMMENDATIONS

Based on our experiences administering cognitive tests during the pilot phase of the LOSHAK project, we present the following general recommendations for conducting cognitive tests, as well as recommendations specific to the Kenyan context.

5.1. General recommendations for conducting cognitive tests

  1. It is necessary to contextualize measures by translating the tests into the local languages and using examples that are relevant to the participants' culture and environment. This was particularly important for the SMSE, animal naming, and story recall measures.

  2. Having enumerators from the local community is useful in contextualizing the tools during the training and pre‐testing. Detailed enumerator training is vital to ensure that they understand the goals and expectations of each cognitive test for accurate data collection. Local enumerators also have a better understanding of cultural expectations and taboos.

  3. Conducting several pre‐tests and using role‐play helps to ensure enumerator scoring is consistent, accurate, and reliable.

5.2. Recommendations for conducting cognitive tests in Kenya

  1. We recommend allowing respondents to use their preferred language for animal naming tests, as some animals were more easily recalled in local languages. Using abbreviations for animal names allows enumerators to keep pace with the responses.

  2. In rural areas, administering cognitive tests also requires consideration of the time of day and the season of the year, so as to minimize distractions. For example, avoiding administering cognitive tests during busy times such as the rainy season, planting or harvesting, lunch hours, and school holidays is essential to minimize distractions.

  3. It is important to take time to build rapport with the participants and their community. Explaining the study and the sensitivity involved in conducting cognitive tests before the actual study day is recommended.

  4. Enumerators need to understand and respect local cultural norms and taboos and be prepared to adjust to the homestead set‐up to avoid compromising data quality.

  5. Be cognizant of the limitations of cognitive testing in low‐resource settings and ensure that these are adapted and interpreted appropriately in order to promote high‐quality data.

6. CONCLUSION

Administering cognitive tests among older individuals in low‐resource settings can be challenging, but doing so is both possible and important. By following the lessons learned and recommendations provided in this manuscript, researchers and clinicians can improve the accuracy and reliability of cognitive testing in LMICs. In general, effective administration of cognitive tests requires meticulous planning and contextualization of study instruments to best align with the culture, language, literacy, numeracy, and environment of the communities and participants being surveyed. Our findings reinforce the need for further development and validation of culturally appropriate tools that minimize educational bias in cognitive testing, particularly in SSA settings.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflict of interest. Author disclosures are available in the supporting information.

ETHICAL CONSIDERATIONS

Ethical approval was obtained from both the Institutional Scientific and Ethical Review Committees (ISERC) of Aga Khan University (AKU) (Ref: 2022/ISERC‐109‐V3) and the University of Michigan (UM), and the study was performed in accordance with the tenets of the Declaration of Helsinki and ISERC requirements. Community engagement with community representatives (village elders) and local government administration (chiefs and their assistants) was conducted during community entry. Research permits and approvals were also obtained from the County Department of Health, and licensing was obtained from the Kenyan research regulator, the National Commission for Science, Technology and Innovation (NACOSTI), before implementation. Participation in this project was voluntary through informed consent. Respondents were free to ask any questions regarding the information given to them before consenting and were free to withdraw from the study at any time. To ensure confidentiality of participant information, unique identifiers were used to identify participants and their households. All data were transferred to a protected online database via encrypted, password‐protected devices. Access to the deidentified data was restricted to the PI and the research team.

Supporting information

Supporting Information

ALZ-21-e70552-s001.pdf (1.1MB, pdf)

ACKNOWLEDGMENTS

The authors acknowledge the entire LOSHAK study team for their invaluable and continued contributions. The authors also thank the LOSHAK study advisory board members for their guidance and support: Jinkook Lee, Zul Merali, Stephen Tollman, and David Weir. The authors also acknowledge Brenda Ochieng, Eric Ochieng, and Matthew Krupoff of the Kenya Life Panel Survey (KLPS) project for sharing experiences with some cognitive tests that we found useful for adoption in our setting. The authors also extend gratitude to Dr. Shahirose Premji of Queen's University for reviewing the manuscript and providing invaluable guidance on restructuring it, which greatly enhanced the quality of this work. This study was supported by the National Institute on Aging of the National Institutes of Health (R21AG077042); the University of Michigan Center for Global Health Equity; the Harmonized Cognitive Assessment Protocol Network (U24AG065182); and the Michigan Center on the Demography of Aging's HRS Partner Studies Network (P30AG012846).

Riang'a RM, Mwangi EM, Nagarajan N, et al. Contextualization of Harmonized Cognitive Assessment Protocol (HCAP) in an aging population in rural low‐resource settings in Africa: Experiences and strategies adopted to optimize effective adaption of cognitive tests in Kenya. Alzheimer's Dement. 2025;21:e70552. 10.1002/alz.70552

Joshua R. Ehrlich and Anthony K. Ngugi served as co‐senior authors and PI for this project.

REFERENCES

  • 1. He W, Aboderin I, Adjaye‐Gbewonyo D. International Population Reports, P95/20‐1, Africa Aging: 2020. U.S. Government Printing Office; 2020. [Google Scholar]
  • 2. Lin FR, Yaffe K, Xia J, et al. Hearing loss and cognitive decline in older adults. JAMA Intern Med. 2013;173(4):293‐299. doi: 10.1001/jamainternmed.2013.1868 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Khan TS, Hirschman KB, McHugh MD, Naylor MD. Self‐efficacy of family caregivers of older adults with cognitive impairment: a concept analysis. In: Nursing Forum. Vol 56. Wiley Online Library; 2021:112‐126. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Landeiro F, Mughal S, Walsh K, et al. Health‐related quality of life in people with predementia Alzheimer's disease, mild cognitive impairment or dementia measured with preference‐based instruments: a systematic literature review. Alzheimers Res Ther. 2020;12:1‐14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. McGrattan AM, Zhu Y, Richardson CD, et al. Prevalence and risk of mild cognitive impairment in low and middle‐income countries: a systematic review. J Alzheimer's Dis. 2021;79(2):743‐762. [DOI] [PubMed] [Google Scholar]
  • 6. Lancet Global Mental Health Group . Scale up services for mental disorders: a call for action. Lancet. 2007;370(9594):1241‐1252. [DOI] [PubMed] [Google Scholar]
  • 7. Prince M, Patel V, Saxena S, et al. No health without mental health. Lancet. 2007;370(9590):859‐877. [DOI] [PubMed] [Google Scholar]
  • 8. Brinda EM, Rajkumar AP, Attermann J, Gerdtham UG, Enemark U, Jacob KS. Health, Social, and Economic Variables Associated with Depression Among Older People in Low and Middle Income Countries: World Health Organization Study on Global AGEing and Adult Health. Am J Geriatr Psychiatry. 2016;24(12):1196‐1208. doi: 10.1016/j.jagp.2016.07.016 [DOI] [PubMed] [Google Scholar]
  • 9. Leifer BP. Early diagnosis of Alzheimer's disease: clinical and economic benefits. J Am Geriatr Soc. 2003;51(5s2):S281‐S288. [DOI] [PubMed] [Google Scholar]
  • 10. Tombaugh TN, McIntyre NJ. The Mini‐Mental State Examination: A comprehensive review. J Am Geriatr Soc. 1992;40(9):922‐935. doi: 10.1111/j.1532-5415.1992.tb01992.x [DOI] [PubMed] [Google Scholar]
  • 11. Czerwinski‐Alley NC, Chithiramohan T, Subramaniam H, Beishon L, Mukaetova‐Ladinska EB. The effect of translation and cultural adaptations on diagnostic accuracy and test performance in dementia cognitive screening tools: a systematic review. J Alzheimer's Dis Reports. 2024;8(1):659‐675. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Mani S, Ehrlich J, Ngugi A, et al. Validation of Cognitive and Psychosocial Instruments for the Kenyan Context: Results from LOSHAK. Innov Aging. 2024;8(Suppl 1):635. [Google Scholar]
  • 13. Davidson G. Cognitive assessment of indigenous Australians: Towards a multiaxial model. Aust Psychol. 1995;30(1):30‐34. [Google Scholar]
  • 14. Westerman T. Engagement of indigenous clients in mental health services: what role do cultural differences play [editorial]. Aust EJ Adv Ment Heal. 2004;3(3). [Google Scholar]
  • 15. Dingwall KM, Lindeman MA, Cairney S. “You've got to make it relevant”: barriers and ways forward for assessing cognition in Aboriginal clients. BMC Psychol. 2014;2:1‐11. [Google Scholar]
  • 16. Kearins JM. Cultural elements in testing: The test, the tester and the tested. Ethn Cogn Assess Aust Perspect. 1988;60:70. [Google Scholar]
  • 17. Woodford HJ, George J. Cognitive assessment in the elderly: a review of clinical methods. QJM An Int J Med. 2007;100(8):469‐484. [DOI] [PubMed] [Google Scholar]
  • 18. Dingwall KM, Gray AO, McCarthy AR, Delima JF, Bowden SC. Exploring the reliability and acceptability of cognitive tests for Indigenous Australians: a pilot study. BMC Psychol. 2017;5(1):1‐16. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Nagarajan N, Burns SD, Riang'a RM, et al. Development of the Longitudinal Study of Health and Ageing in Kenya (LOSHAK). Innov Aging. 2024;8(4):igad111. doi: 10.1093/geroni/igad111 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Bernal G, Bonilla J, Bellido C. Ecological validity and cultural sensitivity for outcome research: Issues for the cultural adaptation and development of psychosocial treatments with Hispanics. J Abnorm Child Psychol. 1995;23:67‐82. [DOI] [PubMed] [Google Scholar]
  • 21. Lee J, Banerjee J, Khobragade PY, Angrisani M, Dey AB. LASI‐DAD study: a protocol for a prospective cohort study of late‐life cognition and dementia in India. BMJ Open. 2019;9(7). doi: 10.1136/bmjopen-2019-030300 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Kenya National Bureau of Statistics . 2019 Kenya Population and Housing Census Volume 1: Population by County and Sub‐County. Vol I.; 2019. https://www.knbs.or.ke/?wpdmpro=2019‐kenya‐population‐and‐housing‐census‐volume‐i‐population‐by‐county‐and‐sub‐county [Google Scholar]
  • 23. Ngugi AK, Odhiambo R, Agoi F, et al. Cohort profile: the Kaloleni/Rabai community health and demographic surveillance system. Int J Epidemiol. 2020;49(3):758‐759e. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Anglewicz P, Akilimali P, Eitmann LP, Hernandez J, Kayembe P. The relationship between interviewer‐respondent familiarity and family planning outcomes in the Democratic Republic of Congo: A repeat cross‐sectional analysis. BMJ Open. 2019;9(1):1‐9. doi: 10.1136/bmjopen-2018-023069 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Johnson TP, Fendrich M, Shaligram C, Garcy A, Gillespie S. An evaluation of the effects of interviewer characteristics in an RDD telephone survey of drug use. J Drug Issues. 2000;30(1):77‐101. [Google Scholar]
  • 26. Rodriguez LA, Sana M, Sisk B. Self‐administered Questions and Interviewer–Respondent Familiarity. Field methods. 2015;27(2):163‐181. doi: 10.1177/1525822X14549315 [DOI] [Google Scholar]
  • 27. Greenleaf AR, Turke SR, Bazié F, Sawadogo N, Guiella G, Moreau C. Interviewing the interviewers: Perceptions of interviewer–respondent familiarity on survey process and error in Burkina Faso. Field methods. 2021;33(2):107‐124. [Google Scholar]
  • 28. Guttfreund DG. Effects of language usage on the emotional experience of Spanish‐English and English‐Spanish bilinguals. J Consult Clin Psychol. 1990;58(5):604. [DOI] [PubMed] [Google Scholar]
  • 29. Snowling MJ, West G, Fricke S, et al. Delivering language intervention at scale: promises and pitfalls. J Res Read. 2022;45(3):342‐366. doi: 10.1111/1467-9817.12391 [DOI] [Google Scholar]
  • 30. Srivastava SB. Language: A Powerful Tool in Promoting Healthy Behaviors. Am J Lifestyle Med. 2019;13(4):359‐361. doi: 10.1177/1559827619839995 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Rahill G, Jean‐Gilles M, Thomlison B, Pinto‐Lopez E. Metaphors as contextual evidence for engaging Haitian clients in practice: A case study. Am J Psychother. 2011;65(2):133‐149. [DOI] [PubMed] [Google Scholar]
  • 32. Kehoe KA, Melkus GD, Newlin K. Culture Within The Context of Care. Ethn Dis. 2003;13(3):344‐353. [PubMed] [Google Scholar]
  • 33. Iwelunmor J, Newsome V, Airhihenbuwa CO. Framing the impact of culture on health: A systematic review of the PEN‐3 cultural model and its application in public health research and interventions. Ethn Heal. 2014;19(1):20‐46. doi: 10.1080/13557858.2013.857768 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Whitesell NR, Sarche M, Keane E, Mousseau AC, Kaufman CE. Advancing Scientific Methods in Community and Cultural Context to Promote Health Equity: Lessons From Intervention Outcomes Research With American Indian and Alaska Native Communities. Am J Eval. 2018;39(1):42‐57. doi: 10.1177/1098214017726872 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Manson SM. The role of culture in effective intervention design, implementation, and research: Its universal importance. Prev Sci. 2020;21(Suppl 1):93‐97. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Helman CG. Culture, Health and Illness. 5th ed. Hodder Arnold; 2007. [Google Scholar]
  • 37. Dickerson D, Baldwin JA, Belcourt A, et al. Encompassing cultural contexts within scientific research methodologies in the development of health promotion interventions. Prev Sci. 2020;21:33‐42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38. Brown C, Maggin DM, Buren M. Systematic review of cultural adaptations of school‐based social, emotional, and behavioral interventions for students of color. Educ Treat Child. 2018;41(4):431‐456. [Google Scholar]
  • 39. Hansen HP, Tjørnhøj‐Thomsen T, Johansen C. Rehabilitation interventions for cancer survivors: The influence of context. Acta Oncol (Madr). 2011;50(2):259‐264. [DOI] [PubMed] [Google Scholar]
  • 40. Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in‐depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13:1‐17. doi: 10.1186/1745-6215-13-95 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41. Waylen KA, Fischer A, McGowan PJK, Thirgood SJ, Milner‐Gulland EJ. Effect of local cultural context on the success of community‐based conservation interventions. Conserv Biol. 2010;24(4):1119‐1129. [DOI] [PubMed] [Google Scholar]
