Abstract
Objective
The aim of this study was to explore the state of health information technology (HIT) usability evaluation in Africa.
Materials and Methods
We searched three electronic databases: PubMed, Embase, and Association for Computing Machinery. We categorized the stage of evaluations, the type of interactions assessed, and methods applied using Stead’s System Development Life Cycle (SDLC) and Bennett and Shackel’s usability models.
Results
Analysis of 73 of 1002 articles that met inclusion criteria reveals that HIT usability evaluations in Africa have increased in recent years and have mainly focused on later-stage SDLC evaluations (stages 4 and 5) in sub-Saharan Africa. Forty percent of the articles examined system-user-task-environment (type 4) interactions. Most articles used mixed methods to measure usability. Interviews and surveys were often used at each development stage, while other methods, such as quality-adjusted life year analysis, were only found at stage 5. Sixty percent of articles did not include a theoretical model or framework.
Discussion
The use of multistage evaluation and mixed methods approaches to obtain a comprehensive understanding of HIT usability is critical to ensure that HIT meets user needs.
Conclusions
Developing and enhancing usable HIT is critical to promoting equitable health service delivery and high-quality care in Africa. Early-stage evaluations (stages 1 and 2) and interactions (types 0 and 1) should receive special attention to ensure HIT usability prior to implementing HIT in the field.
Keywords: health information technology, usability, evaluations, Africa, scoping review
BACKGROUND AND SIGNIFICANCE
Promoting good health and well-being is a crucial priority for all countries and is a sustainable development goal set by the United Nations.1 Africa is an economically diverse continent, encompassing countries with high gross national income, like Seychelles, and others with much lower gross national income, such as Burundi.2 Health outcomes also vary widely throughout the continent. For example, Chad had the highest maternal mortality rate among African countries, with 1140 deaths per 100 000 live births in 2017, while Egypt reported only 37 deaths per 100 000 live births.3
Despite the economic diversity within the continent, Africa has the largest proportion of low-income countries with 23 of the 27 poorest countries in the world and 21 of the world’s 55 lower-middle-income countries.2 In comparison to other continents, Africa has greater health inequities, worse health outcomes, the lowest average life expectancy, and the highest proportion of under-five mortality and maternal mortality ratios.2,4,5 Furthermore, a 2018 World Health Organization review found that most countries in Africa were unable to achieve their targets for sustainable development goals of improving health outcomes.4 Thus, it is imperative that research and health policies focus on removing barriers to equitable health and improving health outcomes in this region.
Development and utilization of health information technology (HIT) can contribute to improving health outcomes in both high- and low-resource areas. HIT is the “processing, storage, and exchange of health information in an electronic environment”6 and has been shown to improve the quality and delivery of care, decrease costs, improve patient safety and health outcomes, assist healthcare providers in efficiently completing tasks, improve patient-provider interactions, and decrease inadvertent errors.4,6–12
Over the last decade, the growth and use of internet services and mobile devices have rapidly expanded in both urban and rural communities in Africa.9 This uptake of mobile devices and information technology has led to the rapid development of HIT and increased utilization of technology by healthcare workers and patients for communication, collaboration, and access to health-related information.7,9 Previous research on the impact of HIT in low- to middle-income countries has found the technology to be useful in enhancing the efficiency of task completion, which is in line with the positive effects of HIT in high-income countries.12
However, HIT must be used deliberately. If the creation, implementation, and use of HIT are not carefully enacted, unintentional consequences may occur.13 Tomasi et al12 also highlighted the importance of conscientious implementation and stakeholder involvement to ensure the optimal impact of system implementation in a specific environment. System usability is critical to ensuring positive influence of HIT on outcomes of interest.10 The International Organization for Standardization (ISO) defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specific context of use.”6 ISO definitions for the usability components of effectiveness, efficiency, and satisfaction are shown in Table 1.
Table 1.
ISO definitions of usability components
| Component of usability | Definition |
|---|---|
| Effectiveness | Accuracy and completeness with which users achieve specified goals |
| Efficiency | Resources expended in relation to the accuracy and completeness with which users achieve goals |
| Satisfaction | Freedom from discomfort and positive attitudes toward the use of the product |
Note: Definitions from the International Organization for Standardization (2013).
Poor usability can increase the time needed to complete tasks, increase the risk of errors occurring, lead to increased costs related to implementation and use, and discourage users from incorporating the technology into their daily activities, among many other negative consequences.8,10,14–16 Thus, researchers must perform usability evaluations to ensure that they are cognizant of the preexisting environment and have support from local stakeholders and targeted end-users of the HIT and that sufficient resources exist to promote uptake of the system.17 Moreover, usability evaluations should occur at multiple stages of system development rather than a single point in time to ensure adequate fit between the technology and task requirements.
We conducted a scoping review to explore the state of HIT usability evaluation in Africa. Specifically, we applied Bennett and Shackel’s usability models15,18 to characterize the types of usability interactions assessed and methods used and categorized the timing of the evaluation according to the stages of the System Development Life Cycle (SDLC).16
MATERIALS AND METHODS
This scoping review follows the Joanna Briggs Institute scoping review framework.19 We used the Population-Concept-Context framework20 to inform the inclusion criteria (Supplementary Table S1) and developed an unpublished a priori protocol based on the PROSPERO guidelines (Supplementary Table S2).21 We report results according to Stead’s SDLC stages16 and Bennett and Shackel’s usability models.15,18
Eligibility criteria
We included articles in this scoping review if their main objective was HIT usability evaluation in African countries. We excluded articles that explored general informatics topics, such as information-seeking behavior or evaluations of personal technology usage, studies that evaluated specific methods, models, or frameworks, and studies focused on information systems for security, bioinformatics, or research purposes not related to health outcomes or the promotion of positive health behaviors (Supplementary Table S2). Our eligibility criteria comprised quantitative, qualitative, and mixed-method studies reported in journals, conference proceedings, and gray literature. We did not limit the date of publication, but we limited our search to the English language. We removed non-English articles during title and abstract screening; one additional article had an English abstract, but during full-text review the reviewers found that the full article was not available in English and removed it.
Information sources and search strategy
We consulted an informationist from the Columbia University Irving Medical Center library to develop the search strategy for the review. We searched PubMed, Embase, and the Association for Computing Machinery (ACM) (Supplementary Table S3). We also performed an ancestry search of reference lists in the retrieved papers.
Study selection and data extraction
Our literature search retrieved a total of 1088 results comprising articles reporting HIT evaluation findings, study protocols, and conference papers. Two authors (KD and MH) screened, evaluated, and extracted data using Covidence.22 We removed 94 duplicates and added 11 articles from the search of reference lists resulting in 1002 titles and abstracts that were screened for inclusion. We subsequently reviewed 183 full-text articles resulting in 73 that met our inclusion criteria (Figure 1).
Figure 1.
PRISMA flow diagram.
We extracted data about author information, date of publication, study sample and size, study location, type of HIT evaluated, study design, methods used to explore usability, and framework used (if stated). Supplementary Table S4 summarizes study descriptions.
Data extraction from individual sources of evidence
To extract and categorize the SDLC stage and types of interactions studied in Africa, we applied Yen and Bakken’s23 evaluation framework, which combined the SDLC with Bennett and Shackel’s usability models to determine the when and the what of usability studies in the United States. The when refers to the point along the SDLC at which the evaluation occurs. Table 2 presents specific study categorization criteria for each of the SDLC stages.
Table 2.
Criteria for study categorization based on the SDLC stages
| SDLC stage | Study categorization criteria |
|---|---|
| Stage 1 | Needs assessments with design methods described |
| Stage 2 | System validation evaluations, such as sensitivity and specificity analyses, ROC curve, and observer variation |
| Stage 3 | Human-computer interaction evaluations focusing on outcome quality, user perception, and user performance in the laboratory setting |
| Stage 4 | Field testing; experimental or quasi-experimental designs with control groups in one setting |
| Stage 5 | Field testing; experimental or quasi-experimental designs with control groups in multiple sites; postimplementation evaluation only; self-control, such as evaluation before and after implementation |
Note: Criteria from Yen et al23.
Abbreviations: ROC: receiver operating characteristic; SDLC: System Development Life Cycle.
Bennett and Shackel’s usability models explore the what of usability evaluations, that is, the components being explored in the study. These components are the task to be accomplished, the system or HIT, the users of the system, and the environment in which all of this occurs.15,18
Figure 2 depicts how these four components interact with one another. The five component interaction types included in these models are (type 0) task alone, (type 1) user-task interaction, (type 2) system-task interaction, (type 3) system-user-task interaction, and (type 4) system-user-task-environment interaction.15,18 Table 3 displays the combined evaluation framework used in this literature review.
Figure 2.
Usability interaction diagram.
Table 3.
Usability evaluation frameworks
| Stead SDLC | Bennett and Shackel evaluation type |
|---|---|
| Stage 1: specification | Type 0: task |
| Type 1: user-task | |
| Stage 2: component development | Type 2: system-task |
| Type 3: system-user-task | |
| Stage 3: combination of components into a system | Type 2: system-task |
| Type 3: system-user-task | |
| Stage 4: integration of system into environment | Type 2: system-task |
| Type 3: system-user-task | |
| Type 4: system-user-task-environment | |
| Stage 5: routine use | Type 2: system-task |
| Type 3: system-user-task | |
| Type 4: system-user-task-environment |
Note: Integrated usability framework from Yen et al23.
Abbreviation: SDLC: System Development Life Cycle.
RESULTS
Description of included articles
Of the 73 articles included in this review, 20 (27%) are from the last two years and 31 (42%) from the last 3 to 5 years. The earliest study reviewed was published in 2010, and 2019 had the most publications (n = 14, 19%). Healthcare workers providing direct patient care were the most frequently targeted HIT end-users (n = 48, 66%). Others included health administrators and leadership, patients, caregivers, and content experts, such as HIV treatment experts (see Supplementary Table S4). Sample sizes varied widely among the studies from 524 to 2327.25 The studies reflected the implementation of HIT in multiple different settings including clinics (n = 16), hospitals (n = 17), communities (n = 19), long-term care facilities (n = 1), academia (n = 1), and health departments (n = 1). In addition, 16 studies described the implementation of HIT in multiple settings.
We describe the details of the 73 articles in this scoping review including the type of HIT, SDLC stage, usability interactions, and theories supporting the studies in the following sections. Some publications describe multistage usability evaluations, so the number of publications at each stage does not match the total number of studies included in the review.
The countries with the most publications were Kenya and Uganda (n = 10, 14% for each) followed by South Africa (n = 9, 12%). The remaining countries had 1–6 publications (Table 4). Two studies occurred in more than one country, one in Kenya and Uganda, and another in Kenya and Mozambique.
Table 4.
Number of articles from each African country
| Country | Number of articles |
|---|---|
| Kenya | 10 |
| Uganda | 10 |
| South Africa | 9 |
| Ethiopia | 6 |
| Ghana | 6 |
| Malawi | 6 |
| Nigeria | 6 |
| Tanzania | 4 |
| Botswana | 3 |
| Mozambique | 3 |
| Sierra Leone | 3 |
| Madagascar | 2 |
| Rwanda | 2 |
| Burkina Faso | 1 |
| Burundi | 1 |
| Central African Republic | 1 |
| Gabon | 1 |
| Zimbabwe | 1 |
Most articles are from the southeast region of the continent (Figure 3). Thirty-six countries had no publications that met eligibility criteria; notably, no articles came from north or southwest Africa.
Figure 3.
Map of countries in Africa that have studies included in this review.
Types of HIT evaluated
The most common types of HIT evaluated were clinical decision support systems (CDSS) (n = 24, 33%) and clinical information systems (n = 13, 18%), which include electronic health records, information systems, and documentation systems (Table 5). The least common type of HIT was adverse event reporting (n = 1). In terms of application domain, HIV care was the most common focus (n = 15, 21%). Very few studies utilized or evaluated preexisting HIT in their work. The exception is Lim et al,26 who leveraged two health applications that were previously evaluated for usability to enhance the design of their CDSS, which allowed them to move quickly to stage 3 and 4 usability evaluations rather than duplicate the previous work.
Table 5.
Number of health IT systems evaluated by type
| Category | Number |
|---|---|
| 1. Population-based system | 9 |
| a. Registry | 2 |
| b. Population surveys | 2 |
| c. Disease control | 5 |
| 2. Clinical information system | 13 |
| a. Electronic health record | 7 |
| b. Information system | 5 |
| c. Documentation system | 1 |
| 3. CDSS | 24 |
| a. For information management (eg, information needs) | 1 |
| b. For focusing attention (eg, reminder/alerts) | 2 |
| c. For providing patient-specific recommendations | 21 |
| 4. Patient/consumer facing decision support system | 6 |
| a. For information management (eg, information needs) | 1 |
| b. For focusing attention (eg, reminder/alerts) | 5 |
| 5. Telehealth system—provider-patient consultation | 8 |
| 6. Training system | 7 |
| a. Provider education | 3 |
| b. Patient education | 4 |
| 7. Adverse event reporting | 1 |
| 8. Logistics system | 5 |
| a. Patient admissions | 2 |
| b. Medical supplies | 1 |
| c. Pharmaceuticals | 2 |
Note: HIT categories are based on Gremy and Degoulet’s classifications; CDSS classifications follow those described by Musen and AHRQ; logistics systems are categorized by the item they track.27–29
Abbreviations: CDSS: clinical decision support system; HIT: health information technology; AHRQ: Agency for Healthcare Research and Quality.
Stages of SDLC
Most publications (n = 48, 66%) described single-stage evaluations. While many methods were similar across stages, some were unique to an individual stage, such as cost-benefit analysis and quality-adjusted life year (QALY) score calculations at stage 5 evaluations (Table 6). Mixed methods was the most common study design (n = 43, 59%), typically including some form of qualitative interviews or focus groups as well as administration of quantitative questionnaires.
Table 6.
Methods used at each SDLC stage
| SDLC stage | Methods Used |
|---|---|
| Stage 1: specify needs and setting | Data collection/data sources; data analysis |
| Stage 2: system component development | Data collection/data sources; data analysis |
| Stage 3: combination of components | Data collection/data sources; data analysis |
| Stage 4: integrate health IT into a real environment | Data collection/data sources; data analysis |
| Stage 5: routine use | Data collection/data sources; data analysis |
Abbreviations: CDSS: clinical decision support system; QALY: quality-adjusted life year; SDLC: System Development Life Cycle; HIT: health information technology.
Stage 1: specify needs and setting
Stage 1 focuses on determining the specific tasks and needs that the technology will support, along with user preferences. Evaluations at this stage can occur in the laboratory or the field. A total of six articles (8%) included stage 1 evaluations. To determine specific user needs and knowledge, some studies reviewed the current literature.30,31 To identify task requirements and workflow needs, two publications used consultations with domain experts.32,33 These experts included professionals from implementation science, informatics, and disease-specific treatments, such as HIV/TB. The studies recruited health administrators and officers for their understanding of the health technology infrastructure and policy requirements to create HIT that can be easily implemented in the community.31,34,35 All stage 1 evaluations focused on the emic, or insider, perspective and recruited healthcare workers and patients as the intended HIT end-users.30–35 Most studies included qualitative data collection strategies such as interviews, focus groups, and brainstorming design sessions, and used inductive and/or deductive coding strategies to analyze their data.31,32,35,36 Bagayoko et al25 was the only study to use quantitative surveys to elicit targeted end-user knowledge during stage 1 development; they used the survey responses in logistic regression models to determine which variables were associated with healthcare provider perspectives on a national electronic health system.25
Stage 2: system component development
The purpose of stage 2 evaluations is to determine the validity and reliability of the HIT or its individual components. These evaluations occur in the laboratory, that is, a controlled environment not influenced by real-world conditions. Three articles reported performing stage 2 evaluations and calculated sensitivity and specificity, inter-rater reliability, and test-retest reliability to ensure the accuracy and validity of the HIT or its components.37–39 For example, to ensure the validity of their HIT, Azfar et al37 held traditional face-to-face dermatology appointments with participants and made clinical diagnoses and treatment decisions based on these appointments to create a gold standard for comparison with a teledermatology system. The researchers then compared the provider decisions in the two appointment modes to determine congruence. To explore test-retest reliability, they provided healthcare providers with blinded versions of the old cases using the information obtained from the teledermatology visits and asked them to make clinical decisions based on that information.37 By performing these tests, the authors could ensure the reliability and validity of their tool before exploring usability concerns or real-world performance.
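The validation metrics named above have standard formulations. As a minimal illustrative sketch (all labels and data below are hypothetical examples, not values from the reviewed studies), sensitivity, specificity, and Cohen’s kappa for inter-rater agreement can be computed as:

```python
# Illustrative stage 2 validation metrics: sensitivity, specificity, and
# Cohen's kappa. All diagnosis labels below are hypothetical example data.

def sensitivity_specificity(gold, test, positive="abnormal"):
    """Compare a system's diagnoses against a gold standard."""
    pairs = list(zip(gold, test))
    tp = sum(g == positive and t == positive for g, t in pairs)  # true positives
    tn = sum(g != positive and t != positive for g, t in pairs)  # true negatives
    fn = sum(g == positive and t != positive for g, t in pairs)  # false negatives
    fp = sum(g != positive and t == positive for g, t in pairs)  # false positives
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement if the raters labeled cases independently
    expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical gold-standard (face-to-face) vs teledermatology diagnoses:
face_to_face = ["abnormal", "abnormal", "normal", "normal", "abnormal", "normal"]
telederm = ["abnormal", "normal", "normal", "normal", "abnormal", "normal"]
sens, spec = sensitivity_specificity(face_to_face, telederm)
kappa = cohens_kappa(face_to_face, telederm)
```

Framing the comparison this way makes the gold standard explicit, which is the core of the validation design described by Azfar et al.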
Stage 3: combination of components
Stage 3 evaluations occur in the laboratory setting and aim to determine whether the HIT can decrease human error and improve the user’s experience of performing the task. Twenty-eight articles (38%) included stage 3 evaluations. To explore users’ perceptions of HIT, researchers used both quantitative and qualitative methods. The most common quantitative method for obtaining user perceptions was usability surveys. Many studies used existing tools, such as the International Business Machines Corporation (IBM) usability questionnaires, the System Usability Scale, and the Health Information Technology Usability Evaluation Scale.26,38,40–43 Other studies created their own surveys or modified existing ones.44–50 Typically, authors used descriptive statistics to summarize these subjective usability measures. In addition, 18 of the 28 articles with stage 3 evaluations used focus groups and interviews to gather additional user perceptions, analyzing the transcripts with both inductive and deductive coding.
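The System Usability Scale mentioned above follows a fixed scoring rule: odd-numbered (positively worded) items score as the response minus 1, even-numbered (negatively worded) items as 5 minus the response, and the 0–40 sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch, with a hypothetical respondent:

```python
def sus_score(responses):
    """Score a single System Usability Scale questionnaire.

    `responses` is a list of 10 Likert ratings (1-5). Odd-numbered items
    are positively worded and even-numbered items negatively worded,
    per the standard SUS scoring rules.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 sum to a 0-100 score

# Ratings from one hypothetical respondent:
score = sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])
```

Per-respondent scores are then typically summarized with the descriptive statistics noted above (eg, mean and standard deviation across participants).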
Researchers used several different simulated activities to gain a better understanding of how targeted end-users might complete required tasks using the HIT. These simulated activities include think-aloud protocol, hybrid lab-live software evaluation, low-fidelity prototyping, direct observations, and one-to-one theoretical usability workshops. The most common of these methods was the think-aloud protocol.24,26,40,44,47,51 During think-aloud protocols, participants verbalized their thought processes as they completed tasks.52 Researchers are also interested in ensuring their HIT not only has high usability with end-users but also meets best practices (ie, heuristics) for user-interface design. Two studies used heuristic evaluations by HIT experts to identify usability violations in their HIT.43,44
Stage 4: integrate HIT into a real environment
Most articles included stage 4 evaluations (n = 53, 73%). Stage 4 evaluations occur in the field and assess how the HIT will perform under real-world conditions. A defining characteristic of stage 4 evaluations is that the HIT is typically still under the control of the individuals who developed it. Most articles describing stage 4 evaluations were pilot studies evaluating the ease of implementation and use of the HIT at one or a handful of sites. To identify barriers to implementation, researchers performed cause-of-error analyses.53 To evaluate user satisfaction, researchers used questionnaires (n = 34), focus groups (n = 32), and interviews (n = 28). In addition, chart review, user-engagement reports, frequency of IT support ticket submission, guideline adherence, and outcome comparison with the previously used system were all leveraged to evaluate both the efficiency and effectiveness of the HIT in real-world use. As in stage 3 laboratory evaluations, researchers measured system efficiency through time-on-task, counts of errors occurring during HIT use, and think-aloud protocols performed during direct observation.26,44,47,48,54 One study compared clinical care decisions made by nurses using a novel telehealth system with those made in traditional face-to-face appointments to determine the reliability of nursing hypothesized diagnoses and feedback.55
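Objective efficiency measures such as time-on-task and error counts reduce to simple descriptive statistics over observed sessions. A minimal sketch with hypothetical session data (not values from the reviewed studies):

```python
# Sketch of objective efficiency measures used in stage 3 and 4 evaluations:
# mean time-on-task and errors per session. Timings and counts are hypothetical.
from statistics import mean, stdev

sessions = [
    {"participant": "P1", "seconds": 112, "errors": 1},
    {"participant": "P2", "seconds": 98, "errors": 0},
    {"participant": "P3", "seconds": 140, "errors": 3},
    {"participant": "P4", "seconds": 105, "errors": 0},
]

mean_time = mean(s["seconds"] for s in sessions)      # average time-on-task
time_sd = stdev(s["seconds"] for s in sessions)       # spread across participants
error_rate = sum(s["errors"] for s in sessions) / len(sessions)  # errors/session
```

Comparing these summaries before and after a design change, or against a legacy system, gives the kind of objective efficiency evidence the stage 4 studies report.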
Stage 5: routine use
The focus of stage 5 evaluations is to determine the long-term impact of the HIT. These evaluations occur in the field, typically when the HIT is no longer under the control of its original developers, and explore the effect of HIT on health outcomes, individuals, and communities.16 Nineteen (26%) articles had stage 5 evaluations. Data sources included user-engagement reports, weekly stock reports, patient health reports, and observations. For example, Abejirinde et al56 used facility inventory checklists to examine the influence of HIT on facilities’ capacity to provide antenatal care. The follow-up period for these studies ranged from 1 month to 1 year after HIT implementation. Most of the studies applied mixed methods, while four were only quantitative57–60 and five were only qualitative.61–65 Overall, 14 articles used qualitative interviews to explore the influence of HIT on patient outcomes, user experiences, and workflow. Additional methods used in stage 5 evaluations included cost-benefit analysis, QALY analysis, and multivariable regression models to evaluate the influence of the HIT on treatment adherence and success.30,59,66
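A QALY calculation of the kind used in stage 5 evaluations sums utility-weighted time spent in each health state, optionally discounting future years. A minimal sketch with hypothetical utility weights and durations (not values from the reviewed studies):

```python
# Minimal sketch of a quality-adjusted life year (QALY) calculation.
# Utility weights (0 = death, 1 = full health) and durations are hypothetical.

def qalys(health_states, discount_rate=0.0):
    """Sum utility-weighted years, optionally discounting future intervals.

    `health_states` is a sequence of (utility, years) pairs in
    chronological order.
    """
    total, elapsed = 0.0, 0.0
    for utility, years in health_states:
        # discount each interval at its starting year (a simplification)
        total += utility * years / (1 + discount_rate) ** elapsed
        elapsed += years
    return total

# Two years at utility 0.8 followed by three years at 0.6:
undiscounted = qalys([(0.8, 2), (0.6, 3)])
discounted = qalys([(0.8, 2), (0.6, 3)], discount_rate=0.03)
```

Comparing QALYs gained per unit cost between an HIT-supported pathway and usual care is the basis of the cost-effectiveness analyses cited above.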
Bennett and Shackel’s usability interactions
None of the articles included in this review performed a type 0 (task) evaluation. The remaining interaction evaluations were: type 1, user-task (n = 2, 3%) during SDLC stage 1; type 2, system-task (n = 13, 18%); type 3, system-user-task (n = 29, 40%); and type 4, system-user-task-environment (n = 29, 40%). Type 4 interactions were assessed during stage 4 and 5 evaluations, since these studies occur in the field, where researchers can explore the environmental component of usability as it relates to the technology.
Theories and frameworks guiding the usability studies
Theoretical models and frameworks guide study activities and support study rationale. Most articles did not include any framework or theory to support their work (n = 44, 60%). Of the 29 articles that used theories, the most common was the Technology Acceptance Model (n = 6, 8%). More than one study used the Unified Theory of Acceptance and Use of Technology and the Consolidated Framework for Implementation Research. The frameworks used in each study are listed in Supplementary Table S4.
DISCUSSION
Overview
This scoping review of HIT usability evaluations conducted in African countries identified and described when evaluations were occurring and what components of usability were evaluated. The review included more articles (n = 73) than a recent review about the presence of HIT in Africa (n = 14)67 but fewer than a similar HIT usability literature review focusing on articles in the United States (n = 346).23 However, this reflects a significant number of such studies in Africa given that Africa only accounts for approximately 1.1% of global investments in research and development.68
SDLC stages and Bennett and Shackel’s usability interactions
Most articles in the review consisted of later-stage evaluations where HIT was about to be deployed into the field or was already in real-world use. While several articles provided details on the methods they used to perform early-stage evaluations, many simply alluded to prior research or earlier development without providing any details on the methods used or samples for usability assessments.31,32,58,69–71 Review of those articles’ reference lists revealed no additional publications describing the methods in greater detail. Without additional detail, it is difficult to determine the rigor of the methods applied, or the fit of HIT produced to the user or task.
Furthermore, most articles did not describe early-stage evaluations. By ignoring early-stage usability concerns, researchers may develop and deploy HIT that does not perform its required tasks effectively. Additionally, it is more difficult to articulate and address specific usability concerns in later-stage evaluations because the interactions are more complex at later development stages.23 These findings align with the recent review exploring HIT usability in the United States that also found a predominance of HIT evaluations at stages 4 and 5.23 This consistency across the two reviews may reflect a broader trend in reporting only later-stage evaluations even if earlier-stage evaluations were conducted. Publication bias may play a role in that there are fewer dissemination venues for formative and typically small sample user-centered studies that may be specific to a single system or site despite the critical role of such evaluations in creating usable HIT.
Methodological characteristics
The majority of the articles in this review applied mixed methods approaches. By leveraging multiple types of data sources, evaluation methods, and analyses, researchers were able to obtain the objective and subjective measures of usability and identify barriers to efficient HIT utilization. Use of mixed methods also enabled the exploration of multiple usability components (eg, efficiency, effectiveness, and satisfaction).8
Theoretical frameworks provide a rationale for studies, a lens for interpreting findings, and a basis for establishing how findings advance knowledge generation in the field.72 However, fewer than half of the articles in this scoping review explicitly included theoretical frameworks or models. This finding is consistent with Yen and Bakken’s23 review of HIT usability studies in the United States, so it does not reflect a trend specific to Africa. To enhance the quality of HIT usability research and enable study findings to advance knowledge generation, researchers should incorporate theoretical frameworks or models into their work. Furthermore, future research should identify theoretical frameworks that are appropriate for evaluations at each SDLC stage.
Gaps in the literature and future research implications
This review found several significant gaps in the literature related to HIT usability evaluations in Africa that parallel those identified in a comparable review in the United States.23 As previously mentioned, much of the literature did not include early-stage or preliminary interaction-type evaluations. Without rigorous evaluations across the SDLC stages, HIT has limited potential to influence health outcomes and reduce health disparities.17
Furthermore, few studies evaluated or incorporated preexisting HIT into their work. This gap may be associated with the lack of early-stage evaluations, which would have identified preexisting tools that could answer researchers’ questions or meet health task needs. The use of preexisting tools can yield robust and flexible technology with proven usability in a wide range of situations. Researchers should perform environmental scans to identify potential solutions that meet user-task-environment needs. Such a scan can save researchers time and effort if they can incorporate components of preexisting HIT or adopt strategies to address common HIT development barriers encountered in prior research. Lim et al26 reported the only study that leveraged health applications that had already undergone usability evaluations. If researchers can incorporate preexisting HIT components that have undergone usability evaluation into their HIT product, they can focus on later-stage usability evaluations exploring the short- and long-term influence of the HIT on health outcomes.
Limitations
This review has several limitations. It excluded non-English literature. Since English is not the national or common language in half of the countries in Africa, this may have led to the unintentional exclusion of articles without an English version. This single-language selection may explain why the review did not include any North African countries, such as the Arabic-speaking countries of Egypt and Morocco, which have more developed healthcare systems and infrastructure and publish the majority of Africa’s scientific research.73,74 However, 27 of the 54 countries that make up Africa include English as an official language, so the exclusion of non-English publications is not a major limitation of this review.75 Furthermore, the review is based on a search of only three databases: PubMed, Embase, and ACM. However, the use of ACM, an informatics-focused database, along with a thorough reference list review, supported the decision not to search additional databases.
CONCLUSION
Developing and enhancing usable HIT is critical to promoting equitable health service delivery and high-quality care in Africa. The trend toward more usability evaluations in recent years aligns with the increased development and utilization of HIT in low-resource settings. The use of mixed methods to obtain a comprehensive understanding of HIT usability is critical and provides researchers with rich results that they can leverage to further tailor their technology. This scoping review aimed to determine when usability evaluations occur and which components of usability they explore; it found gaps in the current literature and minimal advancement of theoretical approaches for HIT usability evaluation. In future studies, researchers need to incorporate theoretical frameworks to provide a stronger rationale for their work and to enhance the rigor of usability evaluation research. The current literature shows that most articles focused on later-stage evaluations right before or at the point of HIT implementation. Early-stage evaluations (stages 1 and 2) and interactions (types 0 and 1) should receive special attention in future research. HIT that is rigorously tested, highly usable, and tailored to cultural, geographic, and disease-specific needs is critical to overcoming the health inequalities found in Africa.30,33,38,53,56,57,59–66,76–106
Supplementary Material
Contributor Information
Kylie Dougherty, School of Nursing, Columbia University, New York, New York, USA.
Mollie Hobensack, School of Nursing, Columbia University, New York, New York, USA.
Suzanne Bakken, School of Nursing, Columbia University, New York, New York, USA.
FUNDING
This work was supported by the Reducing Health Disparities Through Informatics (RHeaDI) Pre- and Postdoctoral Training Program (T32NR007969).
AUTHOR CONTRIBUTIONS
All authors contributed significantly to this work. KD and SB conceptualized the study. KD and MH searched for and retrieved relevant articles, screened, and extracted data from included articles. KD interpreted the data. KD drafted the article, and MH and SB made substantive revisions to the article. All authors gave final approval of and accepted accountability for the article.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
CONFLICT OF INTEREST STATEMENT
None declared.
DATA AVAILABILITY
This article does not include a data set. The list of articles analyzed is available in the references and in Supplementary Table S4.
REFERENCES
- 1. United Nations. Do You Know All 17 SDGs?. United Nations; 2020. https://sdgs.un.org/goals. Accessed November 28, 2022.
- 2. World Population Review. Poorest Countries in Africa 2021. World Population Review; 2021. https://worldpopulationreview.com/country-rankings/poorest-countries-in-africa. Accessed November 28, 2022.
- 3. CIA World Factbook. Maternal Mortality Rate – Africa. Index Mundi; 2017. https://www.indexmundi.com/map/?v=2223&r=af&l=en. Accessed November 28, 2022.
- 4. World Health Organization. State of Health in the WHO African Region: An Analysis of the Status of Health, Health Services and Health Systems in the Context of the Sustainable Development Goals. World Health Organization; 2018.
- 5. Statista. Average Life Expectancy at Birth in 2020, by Continent and Gender. Statista; 2021. https://www.statista.com/statistics/270861/life-expectancy-by-continent/. Accessed November 28, 2022.
- 6. U.S. Department of Health & Human Services. Health Information Technology. HHS.gov; 2020, updated August 31, 2021. https://www.hhs.gov/hipaa/for-professionals/special-topics/health-information-technology/index.html. Accessed November 28, 2022.
- 7. Bukachi F, Pakenham-Walsh N.. Information technology for health in developing countries. Chest 2007; 132 (5): 1624–30. [DOI] [PubMed] [Google Scholar]
- 8. International Organization for Standardization. Usability of Consumer Products and Products for Public Use—Part 2: Summative Test Method. ISO; 2013. https://www.iso.org/obp/ui/#iso:std:iso:ts:20282:-2:ed-2:v1:en. Accessed November 28, 2022.
- 9. Lewis T, Synowiec C, Lagomarsino G, Schweitzer J.. E-health in low- and middle-income countries: findings from the Center for Health Market Innovations. Bull World Health Organ 2012; 90 (5): 332–40. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Nielsen J. Usability 101: Introduction to Usability. Nielsen Norman Group; 2012, updated January 3, 2021. https://www.nngroup.com/articles/usability-101-introduction-to-usability/. Accessed November 28, 2022.
- 11. Office of Disease Prevention and Health Promotion. Health Communication and Health Information Technology. HealthyPeople; 2020. https://wayback.archive-it.org/5774/20220413183358/https://www.healthypeople.gov/2020/topics-objectives/topic/health-communication-and-health-information-technology. Accessed November 28, 2022.
- 12. Tomasi E, Facchini LA, Maia MF.. Health information technology in primary health care in developing countries: a literature review. Bull World Health Organ 2004; 82 (11): 867–74. [PMC free article] [PubMed] [Google Scholar]
- 13. Ratwani RM, Savage E, Will A, et al. Identifying electronic health record usability and safety challenges in pediatric settings. Health Aff (Millwood) 2018; 37 (11): 1752–9. [DOI] [PubMed] [Google Scholar]
- 14. Friedman CP, Wyatt J.. Evaluation Methods in Biomedical Informatics. 2nd ed. New York, NY: Springer; 2006. [Google Scholar]
- 15. Shackel B. Usability-context, framework, definition, design and evaluation. In: Shackel B, Richardson SJ, eds. Human Factors for Informatics Usability. New York, NY: Cambridge University Press; 1991: 21–37. [Google Scholar]
- 16. Stead WW, Haynes RB, Fuller S, et al. Designing medical informatics research and library–resource projects to increase what is learned. J Am Med Inform Assoc 1994; 1 (1): 28–33. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Kaufman D, Roberts WD, Merrill J, Lai TY, Bakken S.. Applying an evaluation framework for health information system design, development, and implementation. Nurs Res 2006; 55 (2 Suppl): S37–42. [DOI] [PubMed] [Google Scholar]
- 18. Bennett J. Visual Display Terminals: Usability Issues and Health Concerns. Englewood Cliffs, NJ: Prentice-Hall; 1984. [Google Scholar]
- 19. Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H.. JBI Manual for Evidence Synthesis. JBI; 2020. https://synthesismanual.jbi.global. Accessed November 28, 2022.
- 20. Peters MDJ, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB.. Guidance for conducting systematic scoping reviews. JBI Evidence Implementation 2015; 13 (3): 141–6. [DOI] [PubMed] [Google Scholar]
- 21. University of York Center for Reviews and Dissemination. Guidance Notes for Registering a Systematic Review Protocol with PROSPERO; 2016. https://www.crd.york.ac.uk/PROSPERO/documents/Registering%20a%20review%20on%20PROSPERO.pdf. Accessed November 28, 2022.
- 22. Veritas Health Innovation. Covidence Systematic Review Software. 2021. www.covidence.org. Accessed November 28, 2022.
- 23. Yen P-Y, Bakken S.. Review of health information technology usability study methodologies. J Am Med Inform Assoc 2012; 19 (3): 413–22. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Vedanthan R, Blank E, Tuikong N, et al. Usability and feasibility of a tablet-based Decision-Support and Integrated Record-keeping (DESIRE) tool in the nurse management of hypertension in rural western Kenya. Int J Med Inform 2015; 84 (3): 207–19. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25. Bagayoko CO, Tchuente J, Traoré D, et al. Implementation of a national electronic health information system in Gabon: a survey of healthcare providers' perceptions. BMC Med Inform Decis Mak 2020; 20 (1): 202. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Lim J, Cloete G, Dunsmuir DT, et al. Usability and feasibility of PIERS on the move: an mHealth app for pre-eclampsia triage. JMIR Mhealth Uhealth 2015; 3 (2): e37. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Grémy F, Degoulet P.. Assessment of health information technology: which questions for which systems? Proposal for a taxonomy. Med Inform (Lond) 1993; 18 (3): 185–93. [DOI] [PubMed] [Google Scholar]
- 28. Musen M, Middleton B, Greenes R.. Biomedical Informatics. London: Springer; 2013. [Google Scholar]
- 29. AHRQ. Clinical Decision Support Systems. Patient Safety Network AHRQ; 2019. https://psnet.ahrq.gov/primer/clinical-decision-support-systems. Accessed November 28, 2022.
- 30. Tochukwu Arinze I, Rita O.. Persuasive Technology for Reducing Waiting and Service Cost: A Case Study of Nigeria Federal Medical Centers. Nairobi, Kenya: Association for Computing Machinery; 2016: 24–35. [Google Scholar]
- 31. Oza S, Wing K, Sesay AA, et al. Improving health information systems during an emergency: lessons and recommendations from an Ebola treatment centre in Sierra Leone. BMC Med Inform Decis Mak 2019; 19 (1): 100. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32. Gbadamosi SO, Eze C, Olawepo JO, et al. A patient-held smartcard with a unique identifier and an mHealth platform to improve the availability of prenatal test results in rural Nigeria: demonstration study. J Med Internet Res 2018; 20 (1): e18. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33. Nhavoto JA, Grönlund Å, Chaquilla WP.. SMSaúde: design, development, and implementation of a remote/mobile patient management system to improve retention in care for HIV/AIDS and tuberculosis patients. JMIR Mhealth Uhealth 2015; 3 (1): e26. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Bagayoko CO, Niang M, Anne A, et al. The delegation of tasks in the era of e-health to support community interventions in maternal and child health: lessons learned from the PACT-Denbaya project. Med Sante Trop 2017; 27 (4): 354–9. [DOI] [PubMed] [Google Scholar]
- 35. Kabukye JK, Koch S, Cornet R, Orem J, Hagglund M.. User requirements for an electronic medical records system for oncology in developing countries: a case study of Uganda. AMIA Annu Symp Proc 2017; 2017: 1004–13. [PMC free article] [PubMed] [Google Scholar]
- 36. Nhavoto JA, Grönlund Å, Klein GO.. Mobile health treatment support intervention for HIV and tuberculosis in Mozambique: Perspectives of patients and healthcare workers. PLoS One 2017; 12 (4): e0176051. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Azfar RS, Lee RA, Castelo-Soccio L, et al. Reliability and validity of mobile teledermatology in human immunodeficiency virus-positive patients in Botswana: a pilot study. JAMA Dermatol 2014; 150 (6): 601–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38. Dumisani N, Janet W.. Using Mobile Computing to Support Malnutrition Management in South Africa. Centurion, South Africa: Association for Computing Machinery; 2014: 352–60. [Google Scholar]
- 39. Hudson J, Nguku SM, Sleiman J, et al. Usability testing of a prototype Phone Oximeter with healthcare providers in high- and low-medical resource environments. Anaesthesia 2012; 67 (9): 957–67. [DOI] [PubMed] [Google Scholar]
- 40. Coppock D, Zambo D, Moyo D, et al. Development and usability of a smartphone application for tracking antiretroviral medication refill data for human immunodeficiency virus. Methods Inf Med 2017; 56 (5): 351–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41. Heys M, Crehan C, Kesler E, et al. The acceptability, feasibility and usability of the neotree application in Malawi: an integrated m-health solution to improve quality of newborn care and survival in health facilities in resource-poor settings. Arch Dis Child 2018; 103 (Suppl 2): A22–A3. [Google Scholar]
- 42. Crehan C, Kesler E, Nambiar B, et al. The NeoTree application: developing an integrated mHealth solution to improve quality of newborn care and survival in a district hospital in Malawi. BMJ Glob Health 2019; 4 (1): e000860. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43. Vélez O, Okyere PB, Kanter AS, Bakken S.. A usability study of a mobile health application for rural Ghanaian midwives. J Midwifery Womens Health 2014; 59 (2): 184–91. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44. Ezeanolue EE, Gbadamosi SO, Olawepo JO, et al. An mHealth framework to improve birth outcomes in Benue State, Nigeria: a study protocol. JMIR Res Protoc 2017; 6 (5): e100. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45. Heather U, Sterling SR, John KB.. The PartoPen in Practice: evaluating the Impact of Digital Pen Technology on Maternal Health in Kenya. Cape Town, South Africa: Association for Computing Machinery; 2013: 274–83. [Google Scholar]
- 46. Vanosdoll M, Ng N, Ho A, et al. A novel mobile health tool for home-based identification of neonatal illness in Uganda: formative usability study. JMIR Mhealth Uhealth 2019; 7 (8): e14540. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47. Mawji A, Li E, Komugisha C, et al. Smart triage: triage and management of sepsis in children using the point-of-care Pediatric Rapid Sepsis Trigger (PRST) tool. BMC Health Serv Res 2020; 20 (1): 493. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48. Ginsburg AS, Agyemang CT, Ambler G, et al. MPneumonia, an innovation for diagnosing and treating childhood pneumonia in low-resource settings: a feasibility, usability and acceptability study in Ghana. PLoS One 2016; 11 (10): e0165201. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49. Feldacker C, Murenje V, Barnhart S, et al. Reducing provider workload while preserving patient safety via a two-way texting intervention in Zimbabwe's voluntary medical male circumcision program: study protocol for an un-blinded, prospective, non-inferiority, randomized controlled trial. Trials 2019; 20 (1): 451. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50. Fischer AE, Phatsoane M, Majam M, et al. Uptake of the Ithaka mobile application in Johannesburg, South Africa, for human immunodeficiency virus self-testing result reporting. South Afr J HIV Med 2021; 22 (1): 1197. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51. Kawakyu N, Nduati R, Munguambe K, et al. Development and implementation of a mobile phone-based prevention of mother-to-child transmission of HIV cascade analysis tool: usability and feasibility testing in Kenya and Mozambique. JMIR Mhealth Uhealth 2019; 7 (5): e13963. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52. Odukoya OK, Chui MA.. Using think aloud protocols to assess e-prescribing in community pharmacies. Innov Pharm 2012; 3 (3): 88. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53. Desrosiers A, Schafer C, Esliker R, Jambai M, Betancourt T.. mHealth-supported delivery of an evidence-based family home-visiting intervention in Sierra Leone: protocol for a pilot randomized controlled trial. JMIR Res Protoc 2021; 10 (2): e25443. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54. Ha YP, Tesfalul MA, Littman-Quinn R, et al. Evaluation of a mobile health approach to tuberculosis contact tracing in Botswana. J Health Commun 2016; 21 (10): 1115–21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55. Qin R, Dzombak R, Amin R, Mehta K.. Reliability of a telemedicine system designed for rural Kenya. J Prim Care Community Health 2013; 4 (3): 177–81. [DOI] [PubMed] [Google Scholar]
- 56. Abejirinde IO, Zweekhorst M, Bardají A, et al. Unveiling the black box of diagnostic and clinical decision support systems for antenatal care: realist evaluation. JMIR Mhealth Uhealth 2018; 6 (12): e11468. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57. Habtamu E, Bastawrous A, Bolster NM, et al. Development and validation of a smartphone-based contrast sensitivity test. Transl Vis Sci Technol 2019; 8 (5): 13. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58. Liang L, Wiens MO, Lubega P, Spillman I, Mugisha S.. A locally developed electronic health platform in Uganda: development and implementation of Stre@mline. JMIR Form Res 2018; 2 (2): e20. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59. Martindale S, Mableson HE, Kebede B, et al. Comparison between paper-based and m-Health tools for collating and reporting clinical cases of lymphatic filariasis and podoconiosis in Ethiopia. Mhealth 2018; 4: 49. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60. Muhindo M, Bress J, Kalanda R, et al. Implementation of a newborn clinical decision support software (NoviGuide) in a Rural District Hospital in Eastern Uganda: feasibility and acceptability study. JMIR Mhealth Uhealth 2021; 9 (2): e23737. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61. Ide N, Hardy V, Chirambo G, et al. People welcomed this innovation with two hands: a qualitative report of an mHealth intervention for community case management in Malawi. Ann Glob Health 2019; 85 (1). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 62. Mwaisaka J, Gonsalves L, Thiongo M, et al. Young people's experiences using an on-demand mobile health sexual and reproductive health text message intervention in Kenya: qualitative study. JMIR Mhealth Uhealth 2021; 9 (1): e19109. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63. Thomsen CF, Barrie AMF, Boas IM, et al. Health workers' experiences with the safe delivery app in West Wollega Zone, Ethiopia: a qualitative study. Reprod Health 2019; 16 (1): 50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64. Landis-Lewis Z, Manjomo R, Gadabu OJ, et al. Barriers to using eHealth data for clinical performance feedback in Malawi: a case study. Int J Med Inform 2015; 84 (10): 868–75. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65. Janssen R, Engel N, Esmail A, et al. Alone but supported: a qualitative study of an HIV self-testing app in an observational cohort study in South Africa. AIDS Behav 2020; 24 (2): 467–74. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66. Byonanebye DM, Mackline H, Sekaggya-Wiltshire C, et al. Impact of a mobile phone-based interactive voice response software on tuberculosis treatment outcomes in Uganda (CFL-TB): a protocol for a randomized controlled trial. Trials 2021; 22 (1): 391. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67. Koumamba AP, Bisvigou UJ, Ngoungou EB, Diallo G.. Health information systems in developing countries: case of African countries. BMC Med Inform Decis Mak 2021; 21 (1): 232. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68. Simpkin V, Namubiru-Mwaura E, Clarke L, Mossialos E.. Investing in health R&D: where we are, what limits us, and how to make progress in Africa. BMJ Glob Health 2019; 4 (2): e001047. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69. Hollander C, Joubert K, Schellack N.. An ototoxicity grading system within a mobile app (OtoCalc) for a resource-limited setting to guide grading and management of drug-induced hearing loss in patients with drug-resistant tuberculosis: prospective, cross-sectional case series. JMIR Mhealth Uhealth 2020; 8 (1): e14036. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70. Morse RS, Lambden K, Quinn E, et al. A mobile app to improve symptom control and information exchange among specialists and local health workers treating Tanzanian cancer patients: human-centered design approach. JMIR Cancer 2021; 7 (1): e24062. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71. Velloza J, Ngure K, Kiptinness C, et al. A clinic-based tablet application to support safer conception among HIV serodiscordant couples in Kenya: feasibility and acceptability study. Mhealth 2019; 5: 4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72. Grant C, Osanloo A.. Understanding, selecting, and integrating a theoretical framework in dissertation research: developing a blueprint for your “house”. Adm Issues J 2015; 4 (2):12–26. [Google Scholar]
- 73. Duermeijer C, Amir M, Schoombee L.. Africa Generates Less than 1% of the World’s Research; Data Analytics Can Change That. Elsevier; 2018, cited December 6, 2021. https://www.elsevier.com/connect/africa-generates-less-than-1-of-the-worlds-research-data-analytics-can-change-that. Accessed November 28, 2022. [Google Scholar]
- 74. Dworkin A. A Return to Africa: Why North African States Are Looking South. European Council on Foreign Relations; 2020, cited December 6, 2021. https://ecfr.eu/publication/a_return_to_africa_why_north_african_states_are_looking_south/. Accessed November 28, 2022. [Google Scholar]
- 75. Oluwole V. A Comprehensive List of All the English-Speaking Countries in Africa. Business Insider Africa; 2021, cited January 24, 2022. https://africa.businessinsider.com/local/lifestyle/a-comprehensive-list-of-all-the-english-speaking-countries-in-africa/hdp1610. Accessed November 28, 2022.
- 76. Benski AC, Stancanelli G, Scaringella S, et al. Usability and feasibility of a mobile health system to provide comprehensive antenatal care in low-income countries: PANDA mHealth pilot study in Madagascar. J Telemed Telecare 2017; 23 (5): 536–43. [DOI] [PubMed] [Google Scholar]
- 77. Brown S, Mickelson A.. Smart phones as a viable data collection tool in low-resource settings: case study of Rwandan community health workers. NBE 2016; 4 (2): 132–9. [Google Scholar]
- 78. Chirambo GB, Muula AS, Thompson M, et al. End-user perspectives of two mHealth decision support tools: electronic community case management in Northern Malawi. Int J Med Inform 2021; 145: 104323. [DOI] [PubMed] [Google Scholar]
- 79. Hicks JP, Allsop MJ, Akaba GO, et al. Acceptability and potential effectiveness of eHealth tools for training primary health workers from Nigeria at scale: mixed methods, uncontrolled before-and-after study. JMIR Mhealth Uhealth 2021; 9 (9): e24182. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 80. Klingberg A, Wallis LA, Hasselberg M, Yen PY, Fritzell SC.. Teleconsultation using mobile phones for diagnosis and acute care of burn injuries among emergency physicians: mixed-methods study. JMIR Mhealth Uhealth 2018; 6 (10): e11076. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 81. Lodhia V, Karanja S, Lees S, Bastawrous A.. Acceptability, usability, and views on deployment of peek, a mobile phone mhealth intervention for eye care in Kenya: qualitative study. JMIR Mhealth Uhealth 2016; 4 (2): e30. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 82. Martin T, Florian S, Martin K, Anders B, Isabella W, Thomas G.. Stories from the Field: Mobile Phone Usage and Its Impact on People’s Lives in East Africa. London, UK: Association for Computing Machinery; 2010: 49. [Google Scholar]
- 83. Mauka W, Mbotwa C, Moen K, et al. Development of a mobile health application for HIV prevention among at-risk populations in urban settings in East Africa: a participatory design approach. JMIR Form Res 2021; 5 (10): e23204. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 84. Ngabo F, Nguimfack J, Nwaigwe F, et al. Designing and implementing an innovative SMS-based alert system (RapidSMS-MCH) to monitor pregnancy and reduce maternal and child deaths in Rwanda. Pan Afr Med J 2012; 13: 31. [PMC free article] [PubMed] [Google Scholar]
- 85. Oyetunde OO, Ogidan O, Akinyemi MI, Ogunbameru AA, Asaolu OF.. Mobile authentication service in Nigeria: an assessment of community pharmacists' acceptance and providers' views of successes and challenges of deployment. Pharm Pract (Granada) 2019; 17 (2): 1449. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 86. Sabben G, Mudhune V, Ondeng'e K, et al. A smartphone game to prevent HIV among young Africans (Tumaini): assessing intervention and study acceptability among adolescents and their parents in a randomized controlled trial. JMIR Mhealth Uhealth 2019; 7 (5): e13049. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 87. van Heerden A, Norris S, Tollman S, Richter L, Rotheram-Borus MJ.. Collecting maternal health information from HIV-positive pregnant women using mobile phone-assisted face-to-face interviews in Southern Africa. J Med Internet Res 2013; 15 (6): e116. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 88. Zeleke AA, Worku AG, Demissie A, et al. Evaluation of electronic and paper-pen data capturing tools for data quality in a public health survey in a health and demographic surveillance site, Ethiopia: randomized controlled crossover health care information technology evaluation. JMIR Mhealth Uhealth 2019; 7 (2): e10995. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 89. Sarfo FS, Adusei N, Ampofo M, Kpeme FK, Ovbiagele B.. Pilot trial of a tele-rehab intervention to improve outcomes after stroke in Ghana: a feasibility and user satisfaction study. J Neurol Sci 2018; 387: 94–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 90. El-Khatib Z, Shah M, Zallappa SN, et al. SMS-based smartphone application for disease surveillance has doubled completeness and timeliness in a limited-resource setting – evaluation of a 15-week pilot program in Central African Republic (CAR). Confl Health 2018; 12: 42. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 91. Gallay C, Girardet A, Viviano M, et al. Cervical cancer screening in low-resource settings: a smartphone image application as an alternative to colposcopy. Int J Womens Health 2017; 9: 455–61. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 92. Otu A, Okuzu O, Effa E, et al. Training health workers at scale in Nigeria to fight COVID-19 using the InStrat COVID-19 tutorial app: an e-health interventional study. Ther Adv Infect Dis 2021; 8: 20499361211040704. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 93. Jarvis MA, Padmanabhanunni A, Chipps J.. An evaluation of a low-intensity cognitive behavioral therapy mhealth-supported intervention to reduce loneliness in older people. Int J Environ Res Public Health 2019; 16 (7). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 94. Ginsburg AS, Delarosa J, Brunette W, et al. MPneumonia: development of an innovative mHealth application for diagnosing and treating childhood pneumonia and other childhood illnesses in low-resource settings. PLoS One 2015; 10 (10): e0139625. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 95. Kruger D, Dlamini NN, Meyer JC, et al. Development of a web-based application to improve data collection of antimicrobial utilization in the public health care system in South Africa. Hosp Pract 2021; 49 (3): 184–93. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 96. Weierstall R, Crombach A, Nandi C, Bambonyé M, Probst T, Pryss R.. Effective adoption of tablets for psychodiagnostic assessments in Rural Burundi: evidence for the usability and validity of mobile technology in the example of differentiating symptom profiles in AMISOM soldiers 1 year after deployment. Front Public Health 2021; 9: 490604. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 97. Brinkel J, May J, Krumkamp R, et al. Mobile phone-based interactive voice response as a tool for improving access to healthcare in remote areas in Ghana – an evaluation of user experiences. Trop Med Int Health 2017; 22 (5): 622–30. [DOI] [PubMed] [Google Scholar]
- 98. Aw M, Ochieng BO, Attambo D, et al. Critical appraisal of a mHealth-assisted community-based cardiovascular disease risk screening program in rural Kenya: an operational research study. Pathog Glob Health 2020; 114 (7): 379–87. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 99. Barrington J, Wereko-Brobby O, Ward P, Mwafongo W, Kungulwe S.. SMS for Life: a pilot project to improve anti-malarial drug supply management in rural Tanzania using standard technology. Malar J 2010; 9: 298. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 100. Crehan C, Kesler E, Nambiar B, et al. G286(P) The acceptability, feasibility and usability of the neotree application in Malawi: an integrated data collection, clinical management and education mHealth solution to improve quality of newborn care and thus newborn survival in health facilities in resource-poor settings. Arch Dis Child 2018; 103 (Suppl 1): A117.1–A117. [Google Scholar]
- 101. Ellington LE, Najjingo I, Rosenfeld M, et al. Health workers' perspectives of a mobile health tool to improve diagnosis and management of paediatric acute respiratory illnesses in Uganda: a qualitative study. BMJ Open 2021; 11 (7): e049708. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 102. English LL, Dunsmuir D, Kumbakumba E, et al. The PAediatric Risk Assessment (PARA) mobile app to reduce postdischarge child mortality: design, usability, and feasibility for health care workers in Uganda. JMIR Mhealth Uhealth 2016; 4 (1): e16. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 103. King JD, Buolamwini J, Cromwell EA, et al. A novel electronic data collection system for large-scale surveys of neglected tropical diseases. PLoS One 2013; 8 (9): e74570. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 104. Musiimenta A, Atukunda EC, Tumuhimbise W, et al. Acceptability and feasibility of real-time antiretroviral therapy adherence interventions in rural Uganda: mixed-method pilot randomized controlled trial. JMIR Mhealth Uhealth 2018; 6 (5): e122. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 105. Shiferaw S, Spigt M, Tekie M, Abdullah M, Fantahun M, Dinant G-J.. The effects of a locally developed mHealth intervention on delivery and postnatal care utilization; a prospective controlled evaluation among health centres in Ethiopia. PLoS One 2016; 11 (7): e0158600. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 106. Thomas DS, Daly K, Nyanza EC, Ngallaba SE, Bull S.. Health worker acceptability of an mHealth platform to facilitate the prevention of mother-to-child transmission of HIV in Tanzania. Digit Health 2020; 6: 2055207620905409. [DOI] [PMC free article] [PubMed] [Google Scholar]