PLOS Digit Health. 2022 Sep 1;1(9):e0000096. doi: 10.1371/journal.pdig.0000096

Leveraging mHealth usage logs to inform health worker performance in a Resource-Limited setting: Case example of mUzima use for a chronic disease program in Western Kenya

Simon Savai 1,*, Jemimah Kamano 2,3, Lawrence Misoi 3, Peter Wakholi 4, Md Kamrul Hasan 5, Martin C Were 6,*
Editor: J Mark Ansermino
PMCID: PMC9931325  PMID: 36812583

Abstract

Background

Health systems in low- and middle-income countries (LMICs) can be strengthened when quality information on health worker performance is readily available. With increasing adoption of mobile health (mHealth) technologies in LMICs, there is an opportunity to improve work performance and supportive supervision of workers. The objective of this study was to evaluate the usefulness of mHealth usage logs (paradata) to inform health worker performance.

Methodology

This study was conducted at a chronic disease program in Kenya. It involved 23 health providers serving 89 facilities and 24 community-based groups. Study participants, who already used an mHealth application (mUzima) during clinical care, were consented and equipped with an enhanced version of the application that captured usage logs. Three months of log data were used to determine work performance metrics, including: (a) number of patients seen; (b) days worked; (c) work hours; and (d) length of patient encounters.

Principal findings

The Pearson correlation coefficient for days worked per participant, as derived from usage logs and from records in the Electronic Medical Record (EMR) system, showed a strong positive correlation between the two data sources (r(11) = .92, p < .0005), indicating that mUzima logs could be relied upon for analyses. Over the study period, only 13 (56.3%) participants used mUzima, recording 2,497 clinical encounters. A total of 563 (22.5%) encounters were entered outside regular work hours, with five health providers working on weekends. On average, providers saw 14.5 patients per day (range 1–53).

Conclusions / Significance

mHealth-derived usage logs can reliably inform work patterns and augment supervision mechanisms made particularly challenging during the COVID-19 pandemic. Derived metrics highlight variability in work performance between providers. Log data also highlight areas of suboptimal application use, such as retrospective data entry in an application meant to be used during the patient encounter, where built-in clinical decision support can best be leveraged.

Author summary

Mobile health (mHealth) applications have gained significant penetration in supporting health care in low- and middle-income countries. Beyond improving care, these applications can help strengthen the health system, but they are currently not optimally employed for this goal. We explored whether usage log data, captured as health providers used a mobile application called mUzima, could inform health worker performance in Western Kenya. We were able to reliably demonstrate that the log data can provide detailed information on the days and hours worked, the number of patients seen per day, and how the application was used, informing many areas of improvement related to health worker performance. We also observed large differences in work performance between health providers, as well as work performed outside official work hours and on weekends. Some health providers also used the application sub-optimally, entering patient data after the patient visit rather than during the encounter, where built-in clinical decision support functionality can best be leveraged. This study offers an approach to cost-effectively augment health worker supervision mechanisms, which have been particularly challenging during the COVID-19 pandemic and for providers distributed over wide geographical areas.

Introduction

Health service delivery in low- and middle-income countries (LMICs) has often been characterized by inadequate health-worker performance, resulting in poor quality and low uptake of health services [1]. Poor health-worker performance is reflected in under-achievement of clinical targets, absenteeism, low motivation, poor service quality, and fabrication of health data [2–5]. Reasons cited as contributors to this poor performance include inadequate numbers of health personnel leading to work overload [6], poor working conditions [2], and inadequate supervision [6].

While supportive supervision is considered an essential intervention for improving health-worker performance [1,7,8], such interventions are only effective with consistent and targeted interactions between workers and their supervisors. Unfortunately, in many LMIC settings, supportive supervision can be difficult to achieve. Oftentimes, the number of supervisors is inadequate to support all workers, with supervisors typically responsible for covering multiple facilities and wide geographical areas. The challenges around direct monitoring of work performance and supportive supervision have been further exacerbated by the emergence of the COVID-19 pandemic, which has limited travel and in-person contact between personnel. With restrictions and lockdowns imposed in many localities, it becomes difficult for supervisors to travel to communities to support workers, and vice versa. It is imperative that innovative mechanisms for effective performance monitoring and supportive supervision are explored. These approaches would require judgments to be made on a continuous basis, with availability of good-quality and timely information at both the individual health worker and group levels.

mHealth technologies are now in broad use to support health workers in LMICs [9–12]. In many settings, workers and their supervisors use smartphone-based applications (apps) primarily for care coordination, data capture, retrieval of patient data, and decision support [9,10,13]. The apps can also support many other functions such as secure clinical messaging, tele-consultation, and geo-location services. These mHealth solutions have the ability to collect paradata, defined as “process data documenting users’ access, participation, and navigation through an mHealth application” [14–16]. Unfortunately, despite the rich paradata that can be securely collected through usage logs from mHealth apps used by health workers, these data have not been leveraged to inform health worker performance and supportive supervision. In fact, to date, the use of mHealth paradata has been limited to evaluations of users’ engagement with mHealth applications [14,17].

We hypothesized that mHealth applications, through collected paradata, could be used to improve performance monitoring and supportive supervision. In this paper, we describe an evaluation that uses mHealth application-derived logs (paradata) from a demonstrative application, mUzima [18], to inform work patterns for health care workers in an LMIC setting who do not have frequent direct in-person contact with their supervisors. The employed approach has broad applicability across mHealth applications, extending the use of mHealth solutions to support health systems strengthening initiatives in LMICs.

Materials and methods

Study overview

This study involved several steps as outlined in Fig 1. In the first step, the widely deployed mUzima mobile application was enhanced to enable capture of usage logs during use of the application by providers [18]. Health workers, who were already familiar with mUzima, were then recruited and consented to use the enhanced mUzima application. Usage logs were collected for a study period of three months. At the end of this period, data were extracted from the collected usage logs and from the associated electronic medical record (EMR) system at the study sites. The logs were analysed to gain insight into health worker performance, including: (a) days and times worked by providers, (b) workday length, (c) clinical encounter length, and (d) number of patients seen. Details for the steps are provided below.

Fig 1. Study overview.


Software development

mUzima is a robust open-source Android application used by health providers primarily to capture data as part of patient care. It also has features for reviewing patients’ historical data, as well as for computerized clinical decision support, among others (Fig 2) [18]. The application works on Android-based smartphones and tablets. mUzima is configured to interoperate with the OpenMRS EMR system, which is widely used in over 40 LMICs [19,20]. mUzima has offline capability and is thus used widely in communities and within facilities that do not have consistent online access to the EMR system.

Fig 2. Sample screenshot of the mUzima mobile application.


For this study, the mUzima application was enhanced to generate, store, and transmit event logs as the application was in use. As the goal of logging was to capture activities conducted by users on the mobile application, an event log was generated whenever a user navigated to a user interface (UI) page on the mobile device. Each event log had a standard structure, with key elements including: (a) event/activity type, (b) a unique identifier of the UI page, (c) a unique identifier of the mobile device, (d) a unique identifier of the mobile app user, (e) GPS geolocation information, (f) timestamp of the device, (g) event timestamp, (h) transmission server timestamp, and (i) other context details such as a machine-generated patient identifier and recorded-data unique identifier. A full list of the elements that constituted an event log, as well as a description of each element, can be found in S1 Appendix. S2 Appendix details all 76 types of event logs that were captured as a user navigated through the application; examples of these event logs included ‘view_client_list’, ‘save_complete_encounter_form’ and ‘open_registration_form’.
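As an illustration, a single event log record might be structured as in the sketch below. This is a hypothetical, simplified rendering; the field names and values are assumptions and do not reproduce the exact schema, which is documented in S1 Appendix.

    # Hypothetical event log record (simplified; see S1 Appendix for the actual schema).
    event_log = {
        "event_type": "save_complete_encounter_form",     # one of the 76 event types (S2 Appendix)
        "ui_page_id": "encounter_form_page",               # unique identifier of the UI page
        "device_id": "device-0031",                        # unique identifier of the mobile device
        "user_id": "USER_02",                              # unique identifier of the mobile app user
        "gps": {"lat": 0.46, "lon": 34.11},                # GPS geolocation information
        "device_timestamp": "2019-12-10T14:32:05+03:00",   # timestamp set on the device
        "event_timestamp": "2019-12-10T14:32:05+03:00",    # when the event occurred
        "server_timestamp": "2019-12-10T18:01:44+03:00",   # when the server received the log
        "context": {"patient_id": "patient-uuid-example", "record_id": "record-uuid-example"},
    }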

Recognizing that not all users and programs would want usage logs captured, a setting was added to the mUzima application to easily turn the logging feature on or off. An Event Log Server was configured to securely receive generated event logs transmitted from the mUzima application. Transmission of these logs used the same secure HTTPS-based mechanism already in place to transmit clinical data from mUzima to the EMR system. The Event Log Server was based on the MongoDB NoSQL database, which provided flexibility and scalability in storing large volumes of data (Fig 3) [21].
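A minimal sketch of how a received event log might be persisted on such a server using MongoDB’s Python driver is shown below; the connection URI, database name, and collection name are assumptions for illustration, not the project’s actual configuration.

    # Hypothetical server-side persistence of a received event log in MongoDB.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")    # assumed connection URI
    events = client["event_log_server"]["usage_logs"]    # assumed database/collection names

    def store_event_log(log_record: dict) -> None:
        # Insert one event log received over the HTTPS endpoint.
        events.insert_one(log_record)

    store_event_log({
        "event_type": "view_client_list",
        "user_id": "USER_05",
        "event_timestamp": "2019-12-10T14:32:05+03:00",
    })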

Fig 3. Illustration of the interaction between components of the developed software system.


Study setting

The Academic Model Providing Access To Healthcare (AMPATH) was established in Kenya in 2001 and has developed an HIV care system in Western Kenya that serves over 100,000 patients, with a robust data system, the AMPATH Medical Record System (AMRS) [22]. Initially started to support HIV care, the AMPATH program has expanded its clinical scope of work in several counties in western Kenya to address comprehensive primary care, including non-communicable diseases in its chronic disease management programs. Under the Primary-health Integrated Care project for Chronic diseases (PIC4C), AMPATH uses task shifting of chronic disease management to nurses and clinical officers. The goal of this task shifting is to improve access to chronic disease care services across geographically decentralized facilities [23]. This study was conducted in two AMPATH-supported counties, Busia and Trans Nzoia, which were at the time supported by the PIC4C program that sought to integrate hypertension and diabetes care into the established primary healthcare platforms while strengthening referral systems. Within the study setting, official clinician work hours were 8 am to 5 pm, Monday to Friday.

Study participants

Participants for this study included health providers (nurses and clinical officers) involved in the PIC4C program at AMPATH and working in rural clinics and communities. These providers offered hypertension care to patients and were already using Samsung Galaxy Tab A7 tablets (Samsung Electronics Co., Ltd.) running the Android operating system, with the mUzima application installed. The providers used mUzima primarily for: (a) data entry and validation; (b) retrieval and display of historical data from the electronic AMPATH Medical Record System (AMRS, an instance of OpenMRS) [24]; and (c) decision support that provided prompts and reminders to guide providers through the hypertension management algorithms [25]. The providers were expected to use mUzima as a point-of-care application during the patient encounter.

This study aimed to recruit all health providers in the PIC4C program who had been assigned mobile devices on which they used mUzima for patient data. A total of 23 providers were recruited, representing 89 facilities and 24 community-based groups in two Kenyan counties. It was common for a provider to serve multiple facilities and communities.

Study approval and informed consent

This study was approved by the Institutional Review and Ethics Committee (IREC) at Moi University School of Medicine in Eldoret, Kenya. To avoid a possible change in behavior by providers aware of being observed (the Hawthorne effect) [26], the study was granted a partial waiver by the IREC. This waiver allowed the study to recruit and consent participants to using mUzima, with the commitment and obligation of making participants aware of the recording of log data at the end of the data collection period. Prior to participation in the study, written informed consent (IC) was obtained. Each consenting participant signed two copies of the IC form. One copy of the signed form was given to the study participant, while the second was stored by the study team. All data collected were used exclusively for the study and made available only to approved study team members.

Study procedures

A trained research assistant recruited participants into the study, after which mobile devices used by the participants were upgraded with the new version of the mobile application that incorporated the logging capability. Usage logs were collected for a period of three months between Dec 2nd, 2019 and Mar 2nd, 2020, with collected log data stored in the secure Event Log Server. At the end of the study period, data were extracted from the collected usage logs and the AMRS EMR system. At the time of the study, no clear workday schedules were available for the clinicians, and individuals took two weeks off around the Christmas and New Year’s holidays.

Extracted log data were filtered to exclude any test users. Necessary data cleaning of the logs was done prior to analyses and included: (a) removal of any duplicate log records; (b) exclusion of logs that were not relevant to participants’ app usage; (c) imputation of correct timestamps (using operator and geolocation timestamps) for records from devices that had the wrong time set; and (d) exclusion of records that fell outside the study period. After data cleaning, the log data were transformed into patient-provider encounter records for analyses. Data from the EMR included the count and dates of relevant clinical encounter forms submitted by the study participants from the mUzima application.
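The sketch below illustrates, with pandas, what cleaning steps (a)–(d) might look like; the column names, the sample records, and the simple fallback-to-server-timestamp rule are assumptions for illustration. Excerpts of the actual scripts are provided in S3 and S4 Appendices.

    # Illustrative cleaning of raw log records with pandas; column names, sample
    # records, and the timestamp-imputation rule shown here are assumptions.
    import pandas as pd

    raw_records = [
        {"user_id": "USER_05", "device_id": "device-0031",
         "event_timestamp": "2019-12-10 14:32:05", "server_timestamp": "2019-12-10 18:01:44"},
        {"user_id": "USER_05", "device_id": "device-0031",
         "event_timestamp": "2019-12-10 14:32:05", "server_timestamp": "2019-12-10 18:01:44"},
    ]
    test_users = {"test_user_01"}               # assumed identifiers of test accounts
    devices_with_wrong_time = {"device-0099"}   # assumed devices with a mis-set clock

    logs = pd.DataFrame(raw_records)

    # (a) remove duplicate log records
    logs = logs.drop_duplicates()

    # (b) exclude logs not relevant to participants' app usage (e.g., test users)
    logs = logs[~logs["user_id"].isin(test_users)]

    # (c) impute a corrected timestamp for devices with a wrong clock, here by
    #     simply falling back to the server timestamp
    logs["event_time"] = pd.to_datetime(logs["event_timestamp"])
    bad_clock = logs["device_id"].isin(devices_with_wrong_time)
    logs.loc[bad_clock, "event_time"] = pd.to_datetime(logs.loc[bad_clock, "server_timestamp"])

    # (d) restrict to the three-month study period
    start, end = pd.Timestamp("2019-12-02"), pd.Timestamp("2020-03-02")
    logs = logs[(logs["event_time"] >= start) & (logs["event_time"] <= end)]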

Data cleaning and analyses

Data cleaning and analyses were done using Python and Jupyter notebooks [27–29]. Scripts were developed in the Python programming language to extract data from the MongoDB database, clean and transform the data into encounter records, generate datasets for the various metrics, plot graphs for visualization, and export the datasets into CSV format for analyses (see S3 Appendix and S4 Appendix for sample scripts used). Python’s pandas library was used to clean the data and generate the various datasets, while the Matplotlib library was used for visualization.
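A minimal sketch of this pipeline is shown below: log records are pulled from MongoDB into a pandas DataFrame, an example dataset is derived, and the result is exported to CSV and plotted with Matplotlib. The connection details, collection name, and column names are assumptions; the actual scripts are excerpted in the appendices.

    # Illustrative extraction and visualization pipeline; database, collection,
    # and column names are assumed rather than taken from the project configuration.
    import matplotlib.pyplot as plt
    import pandas as pd
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    cursor = client["event_log_server"]["usage_logs"].find({}, {"_id": 0})
    logs = pd.DataFrame(list(cursor))
    logs["event_time"] = pd.to_datetime(logs["event_timestamp"])

    # Example dataset: completed encounter forms by day of week, exported and plotted.
    encounters = logs[logs["event_type"] == "save_complete_encounter_form"]
    by_weekday = encounters["event_time"].dt.day_name().value_counts()

    by_weekday.to_csv("encounters_by_weekday.csv")
    by_weekday.plot(kind="bar")
    plt.ylabel("Completed encounter forms")
    plt.tight_layout()
    plt.savefig("encounters_by_weekday.png")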

The objective of this study was to leverage mobile app log data to better inform work patterns for care providers who do not work under the direct presence of their supervisors. The metrics of interest derived from the collected data therefore included: (a) number of patients seen per day, the total count of patients registered or whose encounter data were recorded in mUzima each day by the study providers; (b) number of days worked by providers during the study period, with dates lacking at least one record of a patient encounter taken as an indicator that no work was performed on that day (days worked were disaggregated per study participant and by day of week); (c) work hours per day, represented by the duration of time participants actively used the mUzima mobile application to record patient clinical data, with the primary goal of capturing work-related activities conducted outside official work hours; and (d) length of patient encounters, representing the start-to-end recording of each patient encounter within mUzima. Prior to deriving these indicators, we assessed the relative usefulness of the log data by computing the Pearson correlation coefficient between the number of completed clinical encounters captured in the logs and the number stored in the EMR, as these two should correlate well.
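For illustration, the sketch below derives the four metrics from encounter-level records (one row per patient encounter with a provider identifier and start/end times); the column names and sample values are assumptions, not study data.

    # Illustrative derivation of the four work-performance metrics from encounter
    # records; column names and values are hypothetical.
    import pandas as pd

    enc = pd.DataFrame({
        "user_id": ["USER_01", "USER_01", "USER_02"],
        "start": pd.to_datetime(["2019-12-10 09:05", "2019-12-10 09:40", "2019-12-11 19:30"]),
        "end":   pd.to_datetime(["2019-12-10 09:25", "2019-12-10 09:55", "2019-12-11 19:34"]),
    })
    enc["work_date"] = enc["start"].dt.date

    # (a) number of patients seen per day, per provider
    patients_per_day = enc.groupby(["user_id", "work_date"]).size()

    # (b) number of days worked during the study period
    days_worked = enc.groupby("user_id")["work_date"].nunique()

    # (c) work hours per day: span between first and last recorded activity each workday
    daily_span = enc.groupby(["user_id", "work_date"]).agg(first=("start", "min"), last=("end", "max"))
    daily_span["work_hours"] = (daily_span["last"] - daily_span["first"]).dt.total_seconds() / 3600

    # (d) length of each patient encounter, in minutes
    enc["encounter_minutes"] = (enc["end"] - enc["start"]).dt.total_seconds() / 60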

Results

Over the study period of 2nd December 2019 to 2nd March 2020, relevant usage logs were available for 13 of the 23 (56.3%) study participants, meaning that only this subset of study participants used the mobile application during the study period. In total, 39,229 log records were collected for the 13 study participants, and these included 2,497 clinical encounters.

Distribution of work activity by number of days worked

A workday was defined as a unique date with evidence of logs related to a patient clinical encounter. Comparison of the total number of workdays for each participant, as derived from the logs and from records in the EMR, showed a strong positive correlation between the two data sources (r(11) = .92, p < .0005). Workdays derived from mobile logs also included days on which clinicians used mUzima to fill an encounter form but did not complete it; because incomplete forms were not transmitted to the EMR until completed, workday counts were higher in the log-based data than in the EMR data.
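As a sketch of this comparison, assuming two per-participant counts of days worked (one log-derived, one EMR-derived) aligned by provider, the correlation can be computed as below; the values shown are placeholders rather than study data.

    # Illustrative agreement check between log-derived and EMR-derived workdays;
    # the values below are placeholders, not study data.
    import pandas as pd
    from scipy.stats import pearsonr

    days = pd.DataFrame({
        "log_days": [40, 35, 3, 10, 22, 5, 2, 12, 20, 25, 1, 15, 18],
        "emr_days": [38, 34, 3, 9, 20, 5, 2, 11, 19, 24, 1, 14, 17],
    })
    r, p = pearsonr(days["log_days"], days["emr_days"])
    print(f"r({len(days) - 2}) = {r:.2f}, p = {p:.4g}")  # degrees of freedom = n - 2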

Fig 4 below presents the distribution of the total number of days worked by each provider as derived from the log data. There was wide variability in the number of days worked by providers, with only two of the thirteen providers (USER_01 and USER_02) working for over 50% of the expected workdays. Five providers (USER_03, USER_04, USER_06, USER_07, USER_11) had very few days recorded during the study period.

Fig 4. Bar graph showing the number of days worked per study participant as per usage logs.


Distribution of work activity by day of week

The work patterns by day of week were also analyzed (Fig 5). The breakdown of days worked by each participant did not show any clear pattern of consistently missing work on particular workdays (e.g., Fridays). It was noted that five of the participants engaged in work during weekends, contrary to the expectation that they were scheduled to work only from Monday to Friday (Fig 5).

Fig 5. Bar graph showing the total number of workdays by day of week, as derived from usage logs.


Average number of patients seen per day

The number of patients seen comprised the total count of patients registered or whose encounter records were logged on a daily basis by each of the study participants. In total, 2,497 clinical encounters were registered during the study period, with 1,608 (64.4%) coming from just two users (USER_01 and USER_02). On average, 14.5 (range 1–53) patients were seen per day by the providers (Table 1).

Table 1. Total number of patients seen per study participant as derived from usage logs.

USER ID Total number of patients seen
USER_01 371
USER_02 1069
USER_03 5
USER_04 25
USER_05 186
USER_06 29
USER_07 7
USER_08 50
USER_09 162
USER_10 232
USER_11 3
USER_12 74
USER_13 95

Mobile application usage times and length of use

Fig 6 provides a visualization of the work activities of all study participants by time of day, while Fig 7 shows this visualization for one of the participants (USER_02). Through the logs, it was evident that many of the providers continued to use the application into the late evening and early night, signifying that work was done by these providers beyond official work hours. A total of 563 (22.5%) of the encounter forms were entered after regular work hours, signifying that some providers used the application to enter data retrospectively, as opposed to using the application as a point-of-care system, where they would be able to take advantage of the provided decision support.
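A minimal sketch of how such after-hours and weekend entries might be flagged is shown below, assuming official work hours of 8 am to 5 pm, Monday to Friday, and a form-submission timestamp per encounter; the column names and sample values are assumptions.

    # Illustrative flagging of encounter forms entered outside official work hours
    # (8 am-5 pm, Monday-Friday); column names and sample values are hypothetical.
    import pandas as pd

    forms = pd.DataFrame({
        "user_id": ["USER_02", "USER_05"],
        "submitted": pd.to_datetime(["2019-12-14 10:15", "2020-01-08 20:40"]),
    })

    hour = forms["submitted"].dt.hour
    weekday = forms["submitted"].dt.dayofweek        # Monday = 0 ... Sunday = 6
    forms["weekend"] = weekday >= 5
    forms["after_hours"] = forms["weekend"] | (hour < 8) | (hour >= 17)

    print(forms[["user_id", "submitted", "weekend", "after_hours"]])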

Fig 6. Visualization of the distribution of workday activity by all study participants on days worked as derived from usage log data.


Fig 7. Visualization of the distribution of workday activity by one study participant (USER_02) on days worked as derived from usage log data.


Length to complete an encounter form

The duration of app usage per patient encounter session, and the time interval between app usage sessions, were analyzed to determine patterns of app usage as a surrogate for whether encounters were entered during actual encounter sessions or retrospectively. The results show that numerous encounter form entry sessions were completed in a very short time, with short intervals between entry sessions (Fig 8). These short timeframes could only be achieved through retrospective data entry, rather than real-time entry during an actual patient encounter, as typical patient encounters take longer.
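The sketch below illustrates how per-session entry duration and the gap between consecutive entry sessions might be computed per provider; the column names, sample values, and the 3-minute threshold used to flag likely retrospective entry are assumptions.

    # Illustrative computation of encounter-entry duration and the gap between
    # consecutive entry sessions; values and the 3-minute threshold are hypothetical.
    import pandas as pd

    sessions = pd.DataFrame({
        "user_id": ["USER_02"] * 3,
        "start": pd.to_datetime(["2019-12-12 19:01", "2019-12-12 19:04", "2019-12-12 19:08"]),
        "end":   pd.to_datetime(["2019-12-12 19:03", "2019-12-12 19:07", "2019-12-12 19:10"]),
    }).sort_values(["user_id", "start"])

    sessions["duration_min"] = (sessions["end"] - sessions["start"]).dt.total_seconds() / 60
    prev_end = sessions.groupby("user_id")["end"].shift()
    sessions["gap_min"] = (sessions["start"] - prev_end).dt.total_seconds() / 60

    # Very short entries packed back-to-back suggest retrospective data entry
    # rather than point-of-care use.
    likely_retrospective = sessions[(sessions["duration_min"] < 3) & (sessions["gap_min"] < 3)]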

Fig 8. Encounter entry duration and time between encounters.


Discussion

To our knowledge, this is the first study to leverage mHealth usage logs from healthcare providers in LMICs to inform work patterns and work performance of health workers. Comparison of mHealth-derived usage log metrics against EMR-derived metrics showed a strong positive correlation for the number of days worked per participant, supporting the reliability of leveraging usage logs for performance evaluation. mHealth-derived usage logs have particular relevance for metrics that are usually not collected or available in EMRs, such as length of patient encounter, workday length, and work hours by providers. Even for metrics such as number of patients seen, log data perform better, as they also capture incomplete encounters.

The described work provides an additional approach that can be used to evaluate work performance. Historically, work performance evaluations for health providers have involved time-and-motion studies, which often require human observers and/or manual recording of activities by the providers being assessed [30–32]. Further, use of mHealth paradata for work performance assessment innovatively extends the role of mHealth solutions to better support a key function of information technology in LMICs, which is to strengthen health systems [13,33,34]. By re-using paradata collected as providers use the mHealth application, this approach promises to be cheaper and more scalable than traditional approaches that require human involvement to assess work patterns. However, use of paradata must give strong consideration to the ethical, legal, and social implications (ELSI) of paradata use. Of particular importance is the need for informed consent from the providers, with mechanisms adopted to ensure that the paradata and the knowledge derived from them are used strictly to inform and incentivize care providers and for supportive supervision, and never for punitive purposes.

In the current study, we observed that healthcare providers often worked outside regular work hours and during weekends. Having this knowledge on hand equips care programs to investigate factors that contribute to these work patterns and to seek remedies. It is possible that some health workers are simply overwhelmed during the day and resort to taking work home with them. Alternatively, some workers might be slow or uncomfortable using mHealth technologies as a point-of-care system during patient visits. As such, these providers might prefer to enter clinical data retrospectively in the application, which would lead to longer work hours and risk provider burnout. The finding that the mHealth application was not being used as expected (i.e., as a point-of-care system) has important implications: it means that care providers are not taking advantage of the real-time computerized clinical decision support features within the application that are relevant to improving quality of care [25]. These decision-support features have particular relevance in LMIC settings that employ task-shifting of care services to lower-cadre staff.

Differences between providers in the number of days worked, average number of patients seen per day, and hours worked offer insights into areas where individual health worker performance can be improved. In the age of the COVID-19 pandemic, when in-person supervision can be particularly challenging in some settings, it is important to have mechanisms that provide insights into worker performance for timely supportive interventions. Further, the paradata-derived work performance metrics (e.g., patients seen, days worked, and workday length) can be innovatively leveraged to improve the performance, motivation, and self-efficacy of health workers in remote facilities through approaches such as gamification and ecological momentary interventions [35–37].

Several limitations of our study deserve mention. The generalizability of our findings is limited by the fact that the study involved one mHealth application, a few facilities in a single country, and a limited number of healthcare providers. Our evaluation also lasted only a short period of time, and we cannot account for possible changes in work performance occasioned by other factors. However, the study achieved its main goal of evaluating the feasibility of using mHealth-derived paradata to evaluate health worker performance and provides a re-usable approach for other mHealth applications and settings. Results of this study have been shared with the clinical team to help inform work patterns and approaches for improving them. As next steps, we will employ qualitative approaches to better understand the reasons for the observed work patterns and for the variations in performance between providers. We will also leverage the paradata-derived metrics to provide real-time visualizations to providers for insights into their own individual performance and their performance relative to their colleagues. Finally, we plan to incorporate timely automated feedback and support mechanisms based on the derived performance metrics to supplement support where direct supervision is a challenge.

Conclusions

mHealth paradata can be used to derive work-performance metrics for providers working in disconnected LMIC settings. This approach extends the use of mHealth applications in the area of health systems strengthening and is easily scalable for supportive supervision.

Supporting information

S1 Appendix. Description of fields comprised in a usage log.

(DOCX)

S2 Appendix. Types of usage logs recorded.

(DOCX)

S3 Appendix. Excerpts of python scripts used for data extraction.

(DOCX)

S4 Appendix. Excerpts of python scripts used for data cleaning.

(DOCX)

S5 Appendix. Excerpts of python scripts used for generating encounter records.

(DOCX)

Acknowledgments

The authors thank the participating healthcare providers and clinics, as well as the Primary-health Integrated Care project for Chronic diseases (PIC4C) project at AMPATH, Kenya.

The contents are solely the responsibility of the authors and do not necessarily represent the official views of USAID, Norad, or the United States Government.

Data Availability

All data are contained in the manuscript and Supporting Information files.

Funding Statement

This work was made possible by the support of the American people through the United States Agency for International Development (USAID, grant number 7200AA18CA00019) and the Norwegian Agencies for Development Cooperation under the NORHED program (Norad: Project QZA-0484). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Rowe AK, de Savigny D, Lanata CF, Victora CG. How can we achieve and maintain high-quality performance of health workers in low-resource settings? Lancet (London, England). 2005;366(9490):1026–35. doi: 10.1016/S0140-6736(05)67028-6 [DOI] [PubMed] [Google Scholar]
  • 2.Chaudhury N, Hammer J, Kremer M, Muralidharan K, Rogers FH. Missing in action: teacher and health worker absence in developing countries. J Econ Perspect. 2006;20(1):91–116. doi: 10.1257/089533006776526058 [DOI] [PubMed] [Google Scholar]
  • 3.Tumlinson K, Gichane MW, Curtis SL, LeMasters K. Understanding healthcare provider absenteeism in Kenya: a qualitative analysis. BMC health services research. 2019;19(1):660. doi: 10.1186/s12913-019-4435-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Kisakye AN, Tweheyo R, Ssengooba F, Pariyo GW, Rutebemberwa E, Kiwanuka SN. Regulatory mechanisms for absenteeism in the health sector: a systematic review of strategies and their implementation. J Healthc Leadersh. 2016;8:81–94. doi: 10.2147/JHL.S107746 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Tweheyo R, Daker-White G, Reed C, Davies L, Kiwanuka S, Campbell S. ’Nobody is after you; it is your initiative to start work’: a qualitative study of health workforce absenteeism in rural Uganda. BMJ global health. 2017;2(4):e000455. doi: 10.1136/bmjgh-2017-000455 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Dieleman M, Harnmeijer JW. Improving health worker performance: in search of promising practices. Geneva: World Health Organization. 2006. Sep;1(01). [Google Scholar]
  • 7.Frimpong JA, Helleringer S, Awoonor-Williams JK, Yeji F, Phillips JF. Does supervision improve health worker productivity? Evidence from the Upper East Region of Ghana. Trop Med Int Health. 2011;16(10):1225–33. doi: 10.1111/j.1365-3156.2011.02824.x [DOI] [PubMed] [Google Scholar]
  • 8.Rowe AK, Labadie G, Jackson D, Vivas-Torrealba C, Simon J. Improving health worker performance: an ongoing challenge for meeting the sustainable development goals. Bmj. 2018;362:k2813. doi: 10.1136/bmj.k2813 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Early J, Gonzalez C, Gordon-Dseagu V, Robles-Calderon L. Use of Mobile Health (mHealth) Technologies and Interventions Among Community Health Workers Globally: A Scoping Review. Health Promot Pract. 2019;20(6):805–17. doi: 10.1177/1524839919855391 [DOI] [PubMed] [Google Scholar]
  • 10.Feroz A, Jabeen R, Saleem S. Using mobile phones to improve community health workers performance in low-and-middle-income countries. BMC Public Health. 2020;20(1):49. doi: 10.1186/s12889-020-8173-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Braun R, Catalani C, Wimbush J, Israelski D. Community health workers and mobile technology: a systematic review of the literature. PLoS One. 2013;8(6):e65772. doi: 10.1371/journal.pone.0065772 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Källander K, Tibenderana JK, Akpogheneta OJ, Strachan DL, Hill Z, ten Asbroek AH, et al. Mobile health (mHealth) approaches and lessons for increased performance and retention of community health workers in low- and middle-income countries: a review. Journal of medical Internet research. 2013;15(1):e17. doi: 10.2196/jmir.2130 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Labrique AB, Vasudevan L, Kochi E, Fabricant R, Mehl G. mHealth innovations as health system strengthening tools: 12 common applications and a visual framework. Glob Health Sci Pract. 2013;1(2):160–71. doi: 10.9745/GHSP-D-13-00031 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Hightow-Weidman LB, Bauermeister JA. Engagement in mHealth behavioral interventions for HIV prevention and care: making sense of the metrics. Mhealth. 2020;6:7. doi: 10.21037/mhealth.2019.10.01 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Bonett S, Connochie D, Golinkoff JM, Horvath KJ, Bauermeister JA. Paradata Analysis of an eHealth HIV testing intervention for young men who have sex with men. AIDS Education and Prevention. 2018;30(5):434–47. doi: 10.1521/aeap.2018.30.5.434 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Couper MP, Alexander GL, Maddy N, Zhang N, Nowak MA, McClure JB, et al. Engagement and retention: measuring breadth and depth of participant use of an online intervention. Journal of medical Internet research. 2010;12(4):e52. doi: 10.2196/jmir.1430 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Baltierra NB, Muessig KE, Pike EC, LeGrand S, Bull SS, Hightow-Weidman LB. More than just tracking time: Complex measures of user engagement with an internet-based health promotion intervention. J Biomed Inform. 2016;59:299–307. doi: 10.1016/j.jbi.2015.12.015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Were MC, Savai S, Mokaya B, Mbugua S, Ribeka N, Cholli P, et al. mUzima Mobile Electronic Health Record (EHR) System: Development and Implementation at Scale. Journal of medical Internet research. 2021;23(12):e26381. doi: 10.2196/26381 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Verma N, Mamlin B, Flowers J, Acharya S, Labrique A, Cullen T. OpenMRS as a global good: Impact, opportunities, challenges, and lessons learned from fifteen years of implementation. Int J Med Inform. 2021;149:104405. doi: 10.1016/j.ijmedinf.2021.104405 [DOI] [PubMed] [Google Scholar]
  • 20.Mamlin BW, Biondich PG, Wolfe BA, Fraser H, Jazayeri D, Allen C, et al. Cooking up an open source EMR for developing countries: OpenMRS—a recipe for successful collaboration. AMIA Annu Symp Proc. 2006;2006:529–33. [PMC free article] [PubMed] [Google Scholar]
  • 21.Bradshaw S, Brazil E, Chodorow K. MongoDB: the definitive guide: powerful and scalable data storage: O’Reilly Media; 2019. [Google Scholar]
  • 22.Einterz RM, Kimaiyo S, Mengech HN, Khwa-Otsyula BO, Esamai F, Quigley F, et al. Responding to the HIV pandemic: the power of an academic medical partnership. Acad Med. 2007;82(8):812–8. doi: 10.1097/ACM.0b013e3180cc29f1 [DOI] [PubMed] [Google Scholar]
  • 23.Bloomfield GS, Kimaiyo S, Carter EJ, Binanay C, Corey GR, Einterz RM, et al. Chronic noncommunicable cardiovascular and pulmonary disease in sub-Saharan Africa: an academic model for countering the epidemic. Am Heart J. 2011;161(5):842–7. doi: 10.1016/j.ahj.2010.12.020 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Tierney WM, Rotich JK, Hannan TJ, Siika AM, Biondich PG, Mamlin BW, et al. The AMPATH medical record system: creating, implementing, and sustaining an electronic medical record system to support HIV/AIDS care in western Kenya. Studies in health technology and informatics. 2007;129(Pt 1):372–6. [PubMed] [Google Scholar]
  • 25.Vedanthan R, Blank E, Tuikong N, Kamano J, Misoi L, Tulienge D, et al. Usability and feasibility of a tablet-based Decision-Support and Integrated Record-keeping (DESIRE) tool in the nurse management of hypertension in rural western Kenya. Int J Med Inform. 2015;84(3):207–19. doi: 10.1016/j.ijmedinf.2014.12.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Adair JG. The Hawthorne effect: a reconsideration of the methodological artifact. Journal of applied psychology. 1984;69(2):334. [Google Scholar]
  • 27.Van Rossum G, Drake FL. Python 3 Reference Manual. Scotts Valley, CA: CreateSpace; 2009. [Google Scholar]
  • 28.Kluyver T, Ragan-Kelley B, Pérez F, Granger B, Bussonnier M, Frederic J, et al. Jupyter Notebooks–a publishing format for reproducible computational workflows. In: Loizides F, Schmidt B, editors. Positioning and Power in Academic Publishing: Players, Agents and Agendas; 2016. pp. 87–90. [Google Scholar]
  • 29.Savai SM, Hasan MK, Kamano J, Misoi L, Wakholi P, Were MC. Data Cleaning Process for mHealth Log Data to Inform Health Worker Performance. Studies in health technology and informatics. 2022;295:75–8. doi: 10.3233/SHTI220664 [DOI] [PubMed] [Google Scholar]
  • 30.Ogunfiditimi F, Takis L, Paige VJ, Wyman JF, Marlow E. Assessing the productivity of advanced practice providers using a time and motion study. J Healthc Manag. 2013;58(3):173–85; discussion 85–6. [PubMed] [Google Scholar]
  • 31.Singh S, Upadhyaya S, Deshmukh P, Dongre A, Dwivedi N, Dey D, et al. Time motion study using mixed methods to assess service delivery by frontline health workers from South India: methods. Hum Resour Health. 2018;16(1):17. doi: 10.1186/s12960-018-0279-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Were MC, Kessler J, Shen C, Sidle J, Macharia S, Lizcano J, et al. Implementation and Operational Research: A Time-Motion Analysis of HIV Transmission Prevention Counseling and Antiretroviral Adherence Messages in Western Kenya. J Acquir Immune Defic Syndr. 2015;69(4):e135–41. doi: 10.1097/QAI.0000000000000666 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Otto K, Shekar M, Herbst CH, Mohammed R. Information and Communication Technologies for Health Systems Strengthening: Opportunities, Criteria for Success, and Innovation for Africa and Beyond. Health, Nutrition, and Population Discussion Paper. Washington, DC: World Bank; 2015. https://openknowledge.worldbank.org/handle/10986/21710 License: CC BY 3.0 IGO. [Google Scholar]
  • 34.InfoDev. “Improving Health, Connecting People: The Role of ICT in the Health Sector of Developing Countries.” 31 May 2006. Available at https://www.infodev.org/infodev-files/resource/InfodevDocuments_84.pdf. Last accessed on Feb-28-2022.
  • 35.Landers RN, Bauer KN, Callan RC. Gamification of task performance with leaderboards: A goal setting experiment. Computers in Human Behavior. 2017;71:508–15. [Google Scholar]
  • 36.Zhu M, Huang Y, Contractor NS. Motivations for self-assembling into project teams. Social networks. 2013;35(2):251–64. [Google Scholar]
  • 37.Oprescu F, Jones C, Katsikitis M. I PLAY AT WORK—ten principles for transforming work processes through gamification. Frontiers in psychology. 2014;5:14. doi: 10.3389/fpsyg.2014.00014 [DOI] [PMC free article] [PubMed] [Google Scholar]
PLOS Digit Health. doi: 10.1371/journal.pdig.0000096.r001

Decision Letter 0

Benjamin P Geisler, J Mark Ansermino

22 Jun 2022

PDIG-D-22-00108

Leveraging mHealth Usage Logs to Inform Health Worker Performance in a Resource-Limited Setting: Case Example of mUzima use for a Chronic Disease Program in Western Kenya

PLOS Digital Health

Dear Dr. Were,

Thank you for submitting your manuscript to PLOS Digital Health. After careful consideration, we feel that it has merit but does not fully meet PLOS Digital Health's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript within 30 days . If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at digitalhealth@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pdig/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

* A rebuttal letter that responds to each point raised by the editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

* A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

* An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript.

Kind regards,

J Mark Ansermino, MBBCh

Section Editor

PLOS Digital Health

Journal Requirements:

1. Please amend your detailed Financial Disclosure statement. This is published with the article. It must therefore be completed in full sentences and contain the exact wording you wish to be published.

Please state what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

2. Please send a completed 'Competing Interests' statement, including any COIs declared by your co-authors. If you have no competing interests to declare, please state "The authors have declared that no competing interests exist". Otherwise please declare all competing interests beginning with the statement "I have read the journal's policy and the authors of this manuscript have the following competing interests:"

3. In the online submission form, you indicated that "Data will be availed with appropriate approvals.". All PLOS journals now require all data underlying the findings described in their manuscript to be freely available to other researchers, either 1. In a public repository, 2. Within the manuscript itself, or 3. Uploaded as supplementary information.

This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If your data cannot be made publicly available for ethical or legal reasons (e.g., public availability would compromise patient privacy), please explain your reasons by return email and your exemption request will be escalated to the editor for approval. Your exemption request will be handled independently and will not hold up the peer review process, but will need to be resolved should your manuscript be accepted for publication. One of the Editorial team will then be in touch if there are any issues.

4. Please provide separate figure files in .tif or .eps format and remove any figures embedded in your manuscript file. Please also ensure that all files are under our size limit of 10MB.

For more information about how to convert your figure files please see our guidelines: https://journals.plos.org/digitalhealth/s/figures

5. We do not publish any copyright or trademark symbols that usually accompany proprietary names, eg (R), (C), or TM (e.g. next to drug or reagent names). Please remove all instances of trademark/copyright symbols throughout the text, including © World Bank on page 25.

6. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does this manuscript meet PLOS Digital Health’s publication criteria? Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe methodologically and ethically rigorous research with conclusions that are appropriately drawn based on the data presented.

Reviewer #1: No

Reviewer #2: Yes

--------------------

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

--------------------

3. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

--------------------

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS Digital Health does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

--------------------

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors report the results of a study conducted at a chronic disease program in Kenya. The study participants used an mHealth application (mUzima) during clinical care. In addition to having a decision support system, the app also captured usage logs. The objective of this study was to evaluate the usefulness of mHealth usage logs to inform health worker performance.

I find the paper interesting and useful. However, I believe the paper is incomplete. Although the authors’ stated objective is to evaluate the usefulness of mHealth usage logs to inform health worker performance, they did not achieve that. After presenting the data analysis, the authors leave it to future work to explain the reasons for the findings and how the findings could be used to improve the health delivery process. (They state in lines 345-351 that “As the next steps, we will employ qualitative approaches to better understand reasons for the observed work patterns and for the variations in performance between providers. We will also leverage the paradata-derived metrics to provide real-time visualization to providers for insights on their own individual performance, and their performance.”) Please note that the study was completed more than two years ago. Although the results are generalizable, the results would be more valuable as a case study if the authors could explain how and why they did or did not use the findings to improve the care delivery process.

In addition to this, it would be useful if they provide the following information:

• What are the work requirements for each worker? Do the workers use the app only outside of their facilities? It is mentioned (line 229) that, in total, 39,229 log records were collected for the 13 study participants, and these included 2,497 clinical encounters. It does not mention how many workdays are included in this three-month study period. My rough calculation leads to 115 workdays per worker, giving a total of about 1485 workdays. However, from figure 5, the total number of resulting workdays is only about 160, which is only 10% of the workdays. It is important that the authors justify this.

• Based on figure 3, newly collected patient data from the app is automatically uploaded to EMR. If so, the correlation coefficient should be 1, not 0.92.

• Figure 6 supersedes figure 4 and hence delete Figures 4. Also, it is not necessary to present user numbers sequentially. A Pareto chart is much more meaningful here.

• Figures 8 and 9 are not informative as presented. If the objective is to visualize the variation in length of workdays, a simple relative frequency distribution of length of workdays would be useful. Note that the actual dates are unimportant. You can also have two relative frequency distributions in figures 8 and 9 in one figure.

Reviewer #2: The authors present a novel use of usage log data to track health provider activity (a surrogate for performance) while using a mobile application (mUzima) that is linked to a larger electronic medical record system within the primary health care system in several counties in Western Kenya. The manuscript is well written, and the authors have provided adequate information regarding their methods and analyses. The additional appendices were also helpful to understand the scope of the data they gathered from the system for their research.

The authors conclude that they were able to reliably demonstrate that the log data can provide detailed information of the days and hours worked, number of patients seen per day, and how the application was being used to inform many areas of improvement related to health worker performance. They also observed large differences in work performance between health providers, and work performed outside of official work hours and on weekends and the authors also determined that many (if not all) of the health providers also used the application sub-optimally for entering patient data after the patient visit, as opposed to using the application during the patient encounter to best leverage built-in clinical decision support functionality.

The research would have been strengthened by using a mixed-methods approach to also interview the providers who participated in the study to better understand what factors impacted their work performance (barriers such as workload, lack of access to the mobile network/mobile data credit to utilize the application in real time, and/or functionality of the mUzima application itself, etc.); however, the COVID-19 pandemic may have made this complementary data collection impossible (physical distancing etc.), as the study data were collected in the 3 months leading up to the declaration of the global pandemic (March 2020). The authors state in their discussion that they recognize that the lack of qualitative data is a weakness/limitation and that they plan to explore qualitative approaches to better understand their research findings.

The authors used the usage log data as a surrogate for ‘work performance’. While this is a reasonable surrogate to track clinical activities (as mUzima links to the EMR) and using the usage log data can serve as a surrogate for work performance, they are not entirely equal. The authors have addressed this adequately within their discussion and they have also highlighted the ethical implications of using these types of approaches strictly to inform, support, and incentivize care providers and not use the data for punitive purposes.

Overall, the research is novel and serves as an excellent jumping off point for further research in this area. It has significant potential to be utilized and leveraged further for future efforts and strategies to improve health worker performance and strengthen the use of mHealth approaches for health system strengthening, especially within chronic disease management in primary care.

I did note one error/oversight in the reference list. Reference #13 and #35 are the same reference. This needs to be corrected.

--------------------

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public.

For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

--------------------

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLOS Digit Health. doi: 10.1371/journal.pdig.0000096.r003

Decision Letter 1

J Mark Ansermino

25 Jul 2022

Leveraging mHealth Usage Logs to Inform Health Worker Performance in a Resource-Limited Setting: Case Example of mUzima use for a Chronic Disease Program in Western Kenya

PDIG-D-22-00108R1

Dear Dr Were,

We are pleased to inform you that your manuscript 'Leveraging mHealth Usage Logs to Inform Health Worker Performance in a Resource-Limited Setting: Case Example of mUzima use for a Chronic Disease Program in Western Kenya' has been provisionally accepted for publication in PLOS Digital Health.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email from a member of our team. 

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact digitalhealth@plos.org.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Digital Health.

Best regards,

J Mark Ansermino, MBBCh

Section Editor

PLOS Digital Health

***********************************************************

Thank you for your revision. We look forward to more detail on the qualitative component!

Reviewer Comments (if any, and for reference):


    Attachment

    Submitted filename: 20220627 PLoS Digital Response Cover Letter.docx
