PLOS One. 2021 Sep 7;16(9):e0256799. doi: 10.1371/journal.pone.0256799

A multivariate statistical evaluation of actual use of electronic health record systems implementations in Kenya

Philomena Ngugi 1,2,*, Ankica Babic 1,3, Martin C Were 4,5
Editor: Chaisiri Angkurawaranon
PMCID: PMC8423313  PMID: 34492070

Abstract

Background

Health facilities in developing countries are increasingly adopting Electronic Health Records systems (EHRs) to support healthcare processes. However, only limited studies assess the actual use of EHRs once adopted in these settings. We assessed the state of the 376 KenyaEMR system (national EHRs) implementations in healthcare facilities offering HIV services in Kenya.

Methods

The study focused on seven EHRs use indicators. Six of the seven indicators were programmed and packaged into a query script for execution within each KenyaEMR system (KeEMRs) implementation to collect monthly server-log data for each indicator for the period 2012–2019. The indicators included: Staff system use, observations (clinical data volume), data exchange, standardized terminologies, patient identification, and automatic reports. The seventh indicator (EHR Variable Completeness) was derived from the routine data quality report within the EHRs. Data were analysed using descriptive statistics, and multiple linear regression analysis was used to examine how individual facility characteristics affected the use of the system.

Results

213 facilities spanning 19 counties participated in the study. The mean proportion of authorized users who actively used the KeEMRs was 18.1% (SD = 13.1%, p<0.001) across the facilities. On average, the volume of clinical data (observations) captured in the EHRs was 3363 (SD = 4259). Only a few facilities (14.1%) had health data exchange capability. 97.6% of EHRs concept dictionary terms mapped to standardized terminologies such as CIEL. Within the facility EHRs, only 50.5% (SD = 35.4%, p< 0.001) of patients had the nationally-endorsed patient identifier number recorded. Multiple regression analysis indicated the need for improvement in the mode of EHRs use in the implementations.

Conclusion

The standard EHRs use indicators can effectively measure EHRs use and consequently determine success of the EHRs implementations. The results suggest that most of the EHRs use areas assessed need improvement, especially in relation to active usage of the system and data exchange readiness.

Introduction

Electronic Health Records systems (EHRs) have been introduced widely into medical processes in many countries worldwide, making patient data readily available for treatment, care and analysis [1–3]. These EHRs implementations promise to improve quality of patient care and patient safety, and to reduce costs [4–6]. For instance, the introduction of Electronic Medical Records systems (EMRs) in health care has shown improvement in time-dependent events such as patient waiting time and laboratory specimen processing time from test request to results reporting, among other benefits [7,8]. Moreover, a systematic review on utilization of EHRs for public health in Asia revealed their ability to help identify and predict seasonal outbreaks and high-risk areas and prevent infections or diseases, leading to better health outcomes [9]. Schoen et al. noted an overall increase in EHR adoption and a significant variation in the growth rate across countries in their survey of primary care doctors in health reforms [10]. Despite the infrastructural and technical challenges experienced and reported in developing countries, the uptake of EHRs in healthcare processes has also been on the rise [2,11]. However, adoption of EHRs in Sub-Saharan Africa is largely driven by HIV treatment international programs, such as the President's Emergency Plan for AIDS Relief (PEPFAR), to support patient data management [11,12].

EHRs implementations involve a significant up-front investment in software design and development, infrastructure, implementation, training and IT support [13]. Sponsors, donors and management demand demonstrated value of EHRs implementations to inform investments and the sustainability of the implementations [14,15]. Furthermore, EHRs implementations are complex, multi-faceted and impact healthcare organizations on many levels [15,16]. Consequently, the chances of poor system performance are high and may go undetected, especially in public healthcare facilities. It therefore becomes necessary to evaluate information systems to provide evidence on system functional status and fitness for purpose, with a view to informing future deployments. Maximum benefits of information systems (IS) implementation can only be realized if the systems are deeply used in the post-adoption phase [17]. As such, evaluation of actual use of EHRs once implemented provides vital information for improving the success of existing and subsequent implementations.

Assessment of information system (IS) implementation success is complex and rarely straightforward [18]. Thus, a range of evaluation methodologies and frameworks have emerged with divergent approaches, strengths, and limitations [19,20]. The DeLone & McLean (D&M) IS success model is a mature and validated model for measuring health information systems success, established in 1992 and revised in 2003 [21]. The model has been used to evaluate implementation success for a wide range of health information systems. Berhe et al. recently used the model to evaluate EMRs effectiveness from a user's perspective in Ayder Referral Hospital in Ethiopia [22]. Cho et al. also used the model to evaluate the performance of newly-developed information systems in three public hospitals in Korea [23].

The revised D&M model has seven dimensions used to measure IS implementation success, namely: System quality, Information quality, Service quality, System use, Intention to use, User satisfaction and Net benefits. Of these dimensions, 'System Use' was identified as the most appropriate variable for measuring the success of IS [21,24]. System use is the utilization of an IS in work processes by individuals, groups or organizations [11]. A number of studies have measured actual EHRs use in terms of extent, frequency, duration of use and functions used, based largely on behavioural responses of users gathered through questionnaires, interviews and/or focus group discussions [2,11,17,25,26]. However, only limited evaluation studies utilizing computer-generated data to assess EHRs use are available. This study was conducted to fill this gap by evaluating actual use of a national-level EHR system implemented in healthcare facilities in Kenya, as a demonstration of how similar approaches could be applied across other low- and middle-income countries (LMICs) to evaluate use.

In most LMICs, the measure of success of EHRs scale-up often relies on simple counts of the number of EHRs implementations. This study demonstrates that: (a) through the use of standardized indicators [27], key new insights and gaps in the actual status of EHRs use within countries can be identified; (b) aspects of national-level EHRs usage assessments need not be time- or resource-intensive, as assessments can be automated using data already within the EHRs; and (c) mechanisms that allow efficient EHRs usage assessments offer insights that enable any identified EHRs usage gaps to be addressed in a timely manner.

Materials and methods

Study setting

This evaluation was conducted in Kenya, a country in East Africa with approximately 50 million people [28]. Recognizing the role that EHRs play in patient data management, the government of Kenya, through the Ministry of Health (MoH) and in collaboration with its development partners, namely the Centers for Disease Control and Prevention (CDC) and the United States Agency for International Development (USAID), has implemented EHRs in over 1,000 public health facilities countrywide [29]. These implementations mainly support HIV care and treatment programs. While two EHRs (KenyaEMR and IQCare) by different vendors were initially endorsed for national deployment in support of HIV care, the country has since 2019 transitioned to supporting and deploying only the KenyaEMR system (KeEMRs). In Kenya, KeEMRs is implemented in facilities spread across 22 counties with varying numbers of sites per county (S1 Appendix). This study evaluated the actual use of KeEMRs within the facilities in which the system is deployed, based on computer-generated data, to inform actual EHRs usage across the country. The study used a census method, with all 376 facilities that had KeEMRs implemented between 2012–2019 eligible to participate. For efficiency in care delivery, these public facilities are organised into Kenya Essential Package for Health (KEPH) service levels as follows: Level 1—community level; Level 2—dispensaries and clinics; Level 3—health centres, maternity homes and sub-district hospitals; Level 4—primary facilities, which include district hospitals; Level 5—secondary facilities/provincial hospitals; and Level 6—tertiary/national hospitals.

EHR system

KeEMRs is an implementation and adaptation of the open-source OpenMRS system platform, which is widely deployed in many countries in Africa [30]. KeEMRs supports both retrospective and point-of-care data entry (RDE & POC), with most of the facilities equipped for POC implementation. It was designed, developed and customized by the International Training and Education Center for Health (I-TECH) in 2012 to support care and treatment of HIV/AIDS [31]. Currently, the Kenya Health Management Information System II (KeHMIS II) project supports the implementation of KeEMRs in over 370 health facilities throughout Kenya [32]. Fig 1 shows the homepage of the EHRs under study.

Fig 1. Screenshot of KeEMRs home page.

Fig 1

Reprinted from [33] under a CC BY license, with permission from The Palladium Group- KeHMIS II Project, original copyright 2012.

KeEMRs uses a communication layer referred to as the interoperability layer (IL) to enable health data exchange with other health information systems, such as the pharmacy system (ADT). From 2017, KeEMRs version 16.0.2 and above enforced the use of a nationally-endorsed 10-digit patient identifier number (five digits representing the master facility list (MFL) code and a five-digit comprehensive care clinic number (CCCNo)) for unique patient identification.
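As a purely illustrative sketch (not code from KeEMRs), the 10-digit identifier format described above can be composed and checked as follows; the function names and the facility/clinic numbers are made up:

```python
import re

# Nationally-endorsed identifier: 5-digit MFL code + 5-digit CCC number.
IDENTIFIER_PATTERN = re.compile(r"\d{5}\d{5}")

def compose_identifier(mfl_code: int, ccc_no: int) -> str:
    """Zero-pad each part to five digits and concatenate."""
    if not (0 <= mfl_code <= 99999 and 0 <= ccc_no <= 99999):
        raise ValueError("MFL code and CCC number must each fit in five digits")
    return f"{mfl_code:05d}{ccc_no:05d}"

def is_valid_identifier(identifier: str) -> bool:
    """Check conformity with the nationally-endorsed 10-digit format."""
    return IDENTIFIER_PATTERN.fullmatch(identifier) is not None

# Example with a made-up facility (MFL 13023) and clinic number (42):
pid = compose_identifier(13023, 42)  # "1302300042"
```

The Patient Identification indicator in this study effectively measures what fraction of patient records would pass a check like `is_valid_identifier`.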

EHRs usage indicators

The EHRs use indicators used for this study are detailed in Ngugi et al. [27]. The 15 rigorously derived indicators are modelled after the HIV Monitoring, Evaluation and Reporting (MER) indicators, with which facilities and implementing partners providing HIV care are well familiar [34]. This study specifically focussed on the subset of indicators that could be generated from within the implemented EHRs, because the ultimate goal is to have a module within the EHRs that automatically generates indicators, without human input, for reporting and sharing with relevant stakeholders. The seven included EHRs indicators are outlined in Table 1. Three of the eight excluded indicators (namely Reporting rate, Report timeliness and Report completeness) rely on data in the national aggregate data system, the Kenya Health Information System (KHIS), and had already been evaluated and reported in a different study [29]. The other five excluded indicators (namely Data entry statistics, System Uptime, EHR Variable concordance, Report Concordance and Clinical Data timeliness) required a level of human input to generate, based on how those indicators are defined [27].

Table 1. EHRs usage indicators evaluated.

# | Indicator (variable) | Domain | Indicator measure | Indicator query description | Source of data
1 | Staff system use | System use | Percentage of facility staff members who used the EHRs during the reporting period | Defined by create, update, and delete actions around a patient record by an authorized EHRs user | EHRs
2 | Observations (Clinical Volume) | System use | Number of mandatory HIV-related clinical data elements recorded for patients in the EHRs during the reporting period | A count of the data captured by the 23 data elements* per patient encounter per month | EHRs
3 | EHR Variable completeness | Data quality | The extent to which all required data elements for a patient are contained within the EHRs | No query; data elements* captured from the RDQA report generated from the EHRs | EHRs
4 | Data Exchange | Interoperability | Percentage of specified systems with which the EHRs can automatically exchange all required data | Count of unique data exchange messages between the EHRs and other sub-systems through the IL | EHRs
5 | Standardized Terminologies | Interoperability | The proportion of key terminologies mapped to standard terminology services or using a nationally endorsed concept dictionary | % mapping of EHRs concepts in the concept_reference_map table | EHRs
6 | Patient Identification | Interoperability | Use of a nationally accepted patient identification method | Patient visits identified using the 10-digit identifier vs total active patients during the reporting period | EHRs
7 | Automatic Reports | Reporting | The proportion of expected reports and sub-reports to the national level that are automatically generated and transmitted to the national reporting system | A count of report generation requests | EHRs

*The 23 data elements include: Patient ID, sex, date of birth, date confirmed positive, enrolment date, initiation date, initial regimen, Current regimen, BMI at last visit date, TB screening at last visit, TB screening outcomes, IPT start date, IPT status, IPT outcome date, Second last VL result, second last VL date, most recent VL result, most recent VL date, last clinical encounter date, next visit date, Pregnancy assessment last date, Initial EID within 8 weeks, Infant prophylaxis.
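To make the first indicator concrete, below is a minimal sketch of how Staff system use (row 1 of Table 1) could be computed from audit-log rows. The log structure, field names, and usernames are assumptions for illustration, not the actual KeEMRs schema:

```python
from datetime import date

# Hypothetical audit-log rows: (username, action, date). The real EHRs
# records these actions in its own audit tables.
log = [
    ("awanjiru", "create", date(2019, 5, 3)),
    ("awanjiru", "update", date(2019, 5, 10)),
    ("bodhiambo", "create", date(2019, 5, 7)),
]
authorized_users = {"awanjiru", "bodhiambo", "cmutiso", "dnjeri"}
TRACKED_ACTIONS = {"create", "update", "delete"}

def staff_system_use(log, authorized, year, month):
    """Percentage of authorized users with at least one create/update/delete
    action around a patient record in the given month."""
    active = {
        user for user, action, day in log
        if action in TRACKED_ACTIONS and (day.year, day.month) == (year, month)
    }
    return 100.0 * len(active & authorized) / len(authorized)

print(staff_system_use(log, authorized_users, 2019, 5))  # 2 of 4 users -> 50.0
```

Dormant accounts inflate the denominator here, which is exactly the mechanism the Discussion section identifies for the low observed means.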

EHRs indicators queries

Queries were developed using MySQL to generate monthly indicator reports for the evaluated indicators, except EHR Variable Completeness. These queries were programmed to run within each EHRs implementation and were tested for accuracy on a training server prior to deployment. The data generated from the testing phase were reviewed by the researchers together with a data analyst to ensure validity of the indicator outputs, and needed revisions were made to the queries. The resulting six queries were then combined and packaged into a script comprising the queries and a Linux bash script that creates a zipped archive file as output. Pilot testing of the script was conducted in six randomly selected facilities in two counties to confirm the feasibility of data collection within facilities. The final script was distributed to the study healthcare facilities with accompanying instructions detailing the step-by-step execution process (S2 Appendix). Data for the EHR Variable Completeness indicator (key data elements related to HIV care and treatment) were derived from the routine data quality assessment (RDQA) report already being generated from the EHRs.
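The packaging step can be sketched in outline as follows, here in Python rather than the authors' Linux bash script; the directory layout and file names are placeholders:

```python
import zipfile
from pathlib import Path

def package_reports(report_dir: Path, archive_path: Path) -> Path:
    """Bundle all per-indicator CSV reports in a directory into one zipped
    archive, mirroring the bash step that zipped query outputs for
    transmission to the research team."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for report in sorted(report_dir.glob("*.csv")):
            zf.write(report, arcname=report.name)  # store without directory path
    return archive_path
```

A system champion would run the equivalent script locally and transmit the resulting archive electronically, as described under data collection.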

Data collection and analysis

All 376 facilities with KeEMRs implementations were approached to participate in the study. However, the data collection script was distributed only to the 312 sites that gave authority for the commencement of the study and had used the EHRs for at least six months. Experienced system champions at each facility ran the query script as per the outlined protocol (S2 Appendix). Further support on running the query and generating the report was provided, as needed, through a toll-free line to the EHRs developers' helpdesk. Monthly indicator data were generated from each EHRs-implementing facility from January 2012 (the earliest possible time for system deployment) to December 2019. Generated reports (data) were transmitted electronically to the research team for consolidation and data cleaning, thereby supporting data quality. No personally identifiable information was contained in the resulting indicator reports. All the EHRs implementations used the same terminology service; hence assessment of the Standardized Terminologies indicator evaluated the proportion of terms in this dictionary that mapped to standard terminologies such as SNOMED and ICD [35,36]. Data collection for this study occurred over a period of eight weeks between April and June 2020.

Facility characteristics (KEPH level, facility type category, ownership, services and mode of EHRs use) were derived from the Master Facility List (MFL) website maintained by the MoH. These data were summarized using descriptive statistics. Mean values and standard deviations of the collective performance by facilities on each indicator were calculated. One-way ANOVA (with Tukey's b post-hoc test) was performed to measure the variance in variable means (Staff System Use, clinical volume, and Patient Identification indicators) across the counties. Correlation analysis was also performed to measure the relationship between the Staff System Use indicator and the volume of clinical data, for insight into user productivity. A weighted mean of the Staff System Use and Patient Identification indicators was computed to determine the overall performance of each facility. The weights summed to 1, with each indicator assigned a weight of 0.5 so that both contributed equally to the combined mean. The weighted means of the two indicators were then summed for each facility, and facilities were ranked in descending order. These two indicators were chosen because they are the key variables reflecting EHRs utilization in a facility. Data exchange indicator data were treated and analysed as dichotomous (presence or absence of the interoperability layer (IL) software that facilitates data exchange with external systems).
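The weighted-mean ranking described above can be sketched as follows; the facility names and indicator scores are invented for illustration:

```python
# Each indicator weighted 0.5 so the weights sum to 1 (equal influence).
WEIGHTS = {"staff_system_use": 0.5, "patient_identification": 0.5}

# Hypothetical per-facility mean indicator scores (fractions of 1).
facilities = {
    "Facility A": {"staff_system_use": 0.47, "patient_identification": 0.83},
    "Facility B": {"staff_system_use": 0.07, "patient_identification": 0.15},
    "Facility C": {"staff_system_use": 0.18, "patient_identification": 0.51},
}

def weighted_score(scores):
    """Sum of weight-times-score across the two indicators."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Rank facilities from best performer to worst performer.
ranking = sorted(facilities, key=lambda f: weighted_score(facilities[f]),
                 reverse=True)
```

With these invented scores, Facility A (combined score 0.65) ranks first, matching the "best performer"/"worst performer" benchmarking used in the Results.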

Finally, we fitted a multiple linear regression model to establish how individual facility characteristics affected the use of the system. The dependent variable was the number of active system users, while the covariates were the facility characteristics (KEPH level, ownership, services and mode of EHRs use). All analyses were performed using IBM SPSS Statistics 25 [37].
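The regression itself was run in SPSS; purely to illustrate the model form (active users regressed on dummy-coded facility characteristics), here is a small ordinary-least-squares sketch with invented data, solved via the normal equations:

```python
def ols(X, y):
    """Solve the normal equations (X'X)b = X'y by Gaussian elimination."""
    k = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)]
           for i in range(k)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[col])]
            xty[r] -= f * xty[col]
    beta = [0.0] * k                          # back substitution
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta

# Design matrix rows: [intercept, is_POC, is_RDE] (Hybrid as the reference
# category); y = number of active system users. All values are invented.
X = [[1, 1, 0], [1, 0, 1], [1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 0, 0]]
y = [8, 2, 6, 10, 3, 5]
beta = ols(X, y)  # [hybrid baseline, POC effect, RDE effect]
```

With this toy data the fitted coefficients are the group means relative to the Hybrid baseline; a negative RDE coefficient would correspond to the negative effect of RDE mode reported in Table 3.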

The primary outcome of interest for this study was the collective performance by facilities on each of the seven indicators over the period of KeEMRs implementation in Kenya, as a measure of overall EHRs usage. In addition, the study had several secondary outcomes of interest, namely: (a) variability in EHRs usage between counties, (b) the relationship between the number of active system users and clinical volume, for insight into user productivity, and (c) the effect of facility characteristics on EHRs use.

Ethical statement

The study was approved by the Institutional Review and Ethics Committee at Moi University, Eldoret (MU/MTRH-IREC approval number FAN: 0003348). Permission to collect data was also obtained from the Ministry of Health (MoH), the County Directors of Health of each county, and the Service Delivery Partners (SDPs) responsible for EHRs implementations and HIV data at the facility level. Permission to collect data from 312 (out of 376) facilities in 19 counties was granted. All participants completed a consent form before taking part in the study. No personal identification data were collected from patient records, system databases, the healthcare facilities, or the personnel who executed the queries.

Results

Organizational characteristics of the responding facilities

Out of the 312 facilities that assented to participate in the study, 213 (68.3%) spanning 19 counties responded. Characteristics of the responding facilities are detailed in Table 2. The responding facilities were largely between KEPH levels 2 and 4, as these were the ones offering HIV services and in which the EHRs was deployed. Most of these facilities, 161 (72.3%), offered the care and treatment (C&T) service. Over 86% were either health centres or hospitals, and most were owned and run by the Ministry of Health (88.7%). Only 9.4% of the facilities were completely paperless, while slightly over a third (38.0%) still performed retrospective data entry (RDE) exclusively.

Table 2. Frequency distribution for the facility characteristics (n = 213).

Characteristics | Count | % | P-value
KEPH Level | | |
    Level 2 | 28 | 13.1% | 0.092
    Level 3 | 100 | 46.9% |
    Level 4 | 85 | 39.9% |
    Total | 213 | 100.0% |
Facility type category | | |
    Dispensary | 26 | 12.2% | 0.057
    Health Centre | 99 | 46.5% |
    Hospitals | 86 | 40.4% |
    Medical Clinic | 2 | 0.9% |
    Total | 213 | 100.0% |
Ownership | | |
    Faith Based Organizations | 21 | 9.9% | 0.001
    Ministry of Health | 189 | 88.7% |
    Non-Governmental Organizations/Private | 3 | 1.4% |
    Total | 213 | 100.0% |
Services | | |
    CT* | 161 | 72.3% | <0.001
    CT&HTS** | 52 | 13.6% |
    Total | 213 | 100.0% |
Mode of use | | |
    HYBRID | 112 | 52.6% | <0.001
    POC | 20 | 9.4% |
    RDE | 81 | 38.0% |
    Total | 213 | 100.0% |

* Care & Treatment service (CT)

**HTS–HIV counselling and testing service.

The total number of responding facilities with EHRs implementation varied by county, with the lowest county having three while the highest had 25. Most of these implementations occurred in 2014 (113 implementations, 53.1%) followed by 2013 (91 implementations, 42.7%) (S3 Appendix). No implementations occurred in the period 2015–2017 whilst there were only four new implementations (1.8%) between 2018 and 2019 in line with the country’s planned implementation strategy.

EHRs usage indicator results

Staff system use

An average of 18.1% (SD = 13.1%) of staff members with EHRs access rights used the system in any given period. The best and worst facilities had mean usage of 46.8% (SD = 23.3%) and 7.3% (SD = 3.3%), respectively (p < 0.001) (S4 Appendix).

Observations (clinical data volume)

On average, the facilities captured 3,363 (SD = 4,249) patient-related data elements (clinical data volume) monthly, based on the 23 mandatory data types of interest for HIV reporting in Kenya [38]; the large standard deviation indicates high dispersion in the data collected (S4 Appendix). The facility with the highest mean monthly volume captured 28,937 (SD = 11,356) data elements, while the lowest captured 251 (SD = 167). There was a weak positive correlation between the Observations (clinical data volume) and Staff System Use indicators (r = 0.01).
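The reported coefficient is a standard Pearson correlation; a minimal sketch with invented paired monthly values is shown below:

```python
from math import sqrt

# Hypothetical paired values per facility-month:
# active staff (%) vs clinical data volume.
staff_use = [12.0, 18.5, 25.0, 9.0, 30.0]
clinical_volume = [3100, 2900, 3400, 3300, 3000]

def pearson(xs, ys):
    """Pearson's r: covariance over the product of the deviation norms."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

r = pearson(staff_use, clinical_volume)  # near-zero, i.e. a weak correlation
```

A value of r near zero, as in the study's result (r = 0.01), indicates essentially no linear relationship between staff activity and clinical data volume.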

EHR variable completeness

We observed that all the 23 data elements required for HIV patients by the MoH were contained within the records for each patient in the EHRs. Hence the EHR Variable Completeness indicator as per the country’s standard operating procedures (SOP) was 100% across the study facilities.

Data exchange

The majority of the facilities (183/213, 85.9%) lacked the interoperability layer (IL) module and hence had no capability to exchange health data with external systems (S5 Appendix). Of the 30 facilities (14.1%) with data exchange capability, 56.7% were located in one county. None of the facilities (n = 108) in 13 of the 19 counties had data exchange capability.

Standardized terminologies

On average 97.6% (52,098 out of 53,353) of KeEMR system concepts were mapped to the standardized (international) terminologies/concept dictionaries such as CIEL and SNOMED.

Patient identification

Only 50.5% (SD = 35.4%, p < 0.001) of patient records contained identifiers in the nationally-endorsed format (10-digit number = 5-digit MFL code + 5-digit CCCNo.) (S4 Appendix). Conformity ranged widely from 3% to 100% across the facilities, indicating the need for further investigation into the causes of such low conformity rates. Three of the healthcare facilities had fully adopted the approved patient identifier (100%), while 28 facilities had a mean conformity of < 10% in the use of the national patient identifier.

Automatic reports

KeEMRs is configured to generate monthly Ministry of Health routine reports (MoH 731) for transmission to the national reporting system (KHIS). However, by the time of this study, we could not capture the data needed to compute the Automatic Reports indicator (the proportion of expected reports to the national level that are automatically generated and transmitted to the national reporting system). This was because records of generated reports and their transmission are not saved; the relevant tables are refreshed on a daily basis.

Performance of the facilities

Using the weighted mean of the mean scores of the Staff System Use and Patient Identification indicators, facilities were benchmarked against each other using a "best performer" and "worst performer" approach. The weighted mean ranged from 9% to 65% across the 213 facilities. S6 Appendix presents the facility performance list from highest to lowest. The top ten performing facilities had an average weighted mean of 61% (range 59–65%), while the bottom ten had an average of 11% (range 9–12%).

EHRs use against facility characteristics

The relationship between the facility characteristics and the number of active system users, assessed by multiple linear regression analysis, was statistically significant (p < 0.001) for all the covariates (Table 3). The characteristics influenced system usage positively, with the exception of the mode of EHRs use: the RDE mode had the largest negative impact on use of the system.

Table 3. Multiple linear regression model for staff system use and facility characteristics.

Facility Characteristics | B (Unstandardized) | Std. Error | Beta (Standardized) | t | P-value
(Constant) | 0.354 | 0.084 | | 4.213 | 0.000
KEPH Level (Level 2, Level 3, Level 4) | 0.445 | 0.019 | 0.194 | 23.929 | 0.000
Ownership (Faith-Based Organisation, Ministry of Health, Non-Governmental Organization) | 0.401 | 0.035 | 0.092 | 11.308 | 0.000
Services (CT, CT&HTS) | 0.392 | 0.015 | 0.206 | 25.351 | 0.000
Mode of EHRs use (Hybrid, POC, RDE) | -0.124 | 0.014 | -0.074 | -9.176 | 0.000

Dependent variable: number of active system users; independent variables: KEPH level, ownership, mode of EHRs use, and services. A p-value ≤ 0.05 indicates a statistically significant difference. B (coefficient) gives the change in the dependent variable attributable to a one-unit change in the independent variable.

Discussion

To our knowledge, this is the first national-level study to systematically evaluate actual EHRs use post-implementation utilizing computer-generated real-time data based on robustly developed EHRs usage indicators. A systematic review on measuring EHRs use in primary care revealed that most studies measured use through assessing the utilization of individual EHRs functions [26]. The findings from our study highlight the fact that simply counting the number of EHRs implementations is highly inadequate for determining IS implementation success. The multidimensional set of indicators for evaluating EHRs use in this study aligns with the three main components of EHRs meaningful use, namely: (1) EHRs must be used in care processes such as prescribing, (2) EHRs must encompass electronic health data exchange for improved health care quality, and (3) EHRs must support reporting of clinical measures [39,40]. In this study, indicators in the system use and interoperability domains showed low measures, suggesting the need for further improvement.

Measuring system use at the application level sheds light on how fully or effectively organizations are using IT [41]. In our study, overall Staff System Use was very low across all facilities, regardless of the period of EHRs implementation. The study established the existence of many dormant user accounts in the EHRs across all facilities, inflating the number of users authorized to use the system (the denominator) relative to the actual number of users (the numerator), hence the low mean. Another possible cause of the low mean is shared login credentials or shared computers, resulting in multiple users operating one account. This presents as though only one user performed the activities around patients' files (create, update or delete) that constitute the assessed Staff System Use measures, compromising the accuracy of the numerator count. A study investigating users' behaviour in password utilization revealed that users share passwords for convenience as well as a show of trust [42]. This finding warrants deeper assessment of user credentialing processes and account usage patterns (such as sharing of credentials). It also highlights the need to re-emphasize good password practices to system users and active monitoring of user accounts by system administrators. We also recommend further research to establish the user-to-computer ratio in healthcare facilities.

While our results show KeEMRs' readiness to interoperate with external systems, given the high mapping rate of its concepts to standard terminology services like CIEL [38,43], the study established slow incorporation of the interoperability layer (IL) within the EHRs. Integration with other systems is one of the system quality measures, alongside ease of use, functionality, reliability and flexibility [44]. The low data exchange indicator findings from this study suggest the need to investigate other system quality measures. Technological barriers, such as functionality and compatibility issues and poor user-friendliness, can limit system use [45]. Actual uptake of the nationally-accepted patient identifier was moderate, although with large variations in uptake levels between facilities and between counties. Several studies identify lack of interoperability as a well-known impediment to successful EHRs adoption and use [46–49]. As such, an interoperability layer should be incorporated into all EHRs implementations, alongside concerted efforts towards nationwide adoption and use of a unique patient identifier, which promises to improve patient safety and care efficiency [50].

The study expected a strong positive correlation between Staff System Use and Observations (clinical data volume) recorded in the EHRs, which was not the case. This could be attributed to the possibility of users sharing login credentials, as intimated earlier. Several factors determine facility clinical volume, such as patient volume, frequency of patient visits (encounters), EHRs mode of use and active usage of the system during care, all unique to each facility. Ideally, facilities entering data retrospectively should transfer paper records into the EHRs efficiently and in a timely fashion for 100% concordance. However, a study on EHRs use and user satisfaction by Tilahun and Fritz revealed retrospective data entry to be a major cause of dissatisfaction among EHRs users, especially when the same individuals who collect the data are tasked with entering it into the system later [2]. Indeed, our study revealed that point-of-care (POC) and hybrid modes of data capture were associated with increased system usage compared to retrospective data entry. Thus, EHRs implementers should aim for a point-of-care mode of operation right from initiation.

Study strengths and limitations

The key strength of the study was the use of empirical data extracted directly from the EHRs, which is not subject to the bias normally introduced by human judgment in self-reports such as questionnaires. Boon et al., in their study on antecedents of continued and extended use of enterprise systems, strongly recommended the use of system log file data to overcome human-related response bias [51]. Secondly, the study period (2012–2019) was long enough to reveal the state of EHRs use in the healthcare facilities. The results are also reliable due to the use of a census method in the collection of the primary data. Furthermore, the facilities had a diverse range of characteristics in terms of ownership and facility levels, and covered a broad geographic area of Kenya. The study does, however, acknowledge a few limitations. It was conducted in only one country (Kenya), and the findings do not necessarily translate directly to other countries. However, the study provides a demonstration case that other countries can model to inform similar EHRs usage evaluations. Finally, this study focused only on facilities where the EHRs were in actual use, without examining locations where EHRs were implemented and subsequently failed. Attention needs to be paid to failed implementations, to ensure that usage rates are not over-reported.

In the next step of our research, we will conduct qualitative assessments to better understand the observed findings. This will be done through Focus Group Discussions (FGD) and semi-structured interviews with EHRs users and key stakeholders. Further, we will work with relevant partners to help integrate outputs and visualizations of the usage reports within the EHRs, and to provide various visualizations and dashboards for managers and decision-makers to increase visibility on system usage within and across facilities. It is also recognized that continued usage of EHRs in patient care processes does not necessarily lead to better work performance or improved care quality. Further research is needed to investigate the impact of EHRs usage on care quality and outcomes.

Conclusion

Assessment of actual use of implemented EHRs within LMICs is important. The systematically generated standard EHRs usage indicators can be adopted and used successfully within facilities across countries. Results from this study demonstrate that there are many areas for improvement in EHRs use, as well as a need for continuous monitoring of EHRs use to inform timely interventions. Simply counting the number of implementations, as is currently done in many settings, remains a highly inadequate measure for evaluating the success of EHRs implementations.

Supporting information

S1 Appendix. Distribution of KeEMRs implementations as of June 2020.

(PDF)

S2 Appendix. Standard operating procedures for query extraction.

(PDF)

S3 Appendix. KeEMRs implementations distribution in the period 2012–2019 across the counties (n = 19).

(PDF)

S4 Appendix. Facilities descriptive statistics for staff system use, observations & patient identification indicators.

(XLSX)

S5 Appendix. Interoperability layer (IL) module (data exchange) presence/absence in facilities across the counties.

(PDF)

S6 Appendix. Facilities performance using weighted means.

(XLSX)

S7 Appendix.

(XLSX)

Acknowledgments

The authors would like to acknowledge the KeEMR system developers for providing input into the testing of the study instrument (query script) and helpdesk support to study participants. We also appreciate the logistical support of the Kenya Ministry of Health, County health directors, AMPATH Plus and FACES service development partners, and County Health Records Information Officers (CHRIOs). Much appreciation also goes to all the healthcare facilities and system champions for their participation in the study.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

This work was supported in part by the NORHED program (Norad: Project QZA-0484). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Norwegian Agency for Development Cooperation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Petersen C., “Visualization for All: The Importance of Creating Data Representations Patients Can Use,” in Proceedings of the 2016 Workshop on Visual Analytics in Healthcare in conjunction with AMIA, 2016, pp. 46–49.
  • 2.Tilahun B. and Fritz F., “Comprehensive Evaluation of Electronic Medical Record System Use and User Satisfaction at Five Low-Resource Setting Hospitals in Ethiopia,” JMIR Med. Informatics, vol. 3, no. 2, p. e22, 2015. doi: 10.2196/medinform.4106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Nguyen L., Bellucci E., and Nguyen L. T., “Electronic health records implementation: An evaluation of information system impact and contingency factors,” Int. J. Med. Inform., vol. 83, no. 11, pp. 779–796, 2014. doi: 10.1016/j.ijmedinf.2014.06.011 [DOI] [PubMed] [Google Scholar]
  • 4.King J., Patel V., Jamoom E. W., and Furukawa M. F., “Clinical benefits of electronic health record use: National findings,” Health Serv. Res., vol. 49, no. 1 PART 2, pp. 392–404, 2014. doi: 10.1111/1475-6773.12135 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Hydari M. Z., Telang R., and Marella W. M., “Electronic health records and patient safety,” Commun. ACM, vol. 58, no. 11, pp. 30–32, 2015. [Google Scholar]
  • 6.Health Infoway C, “The emerging benefits of EMR use in ambulatory care in Canada: Benefits Evaluation Study,” 2016. [Google Scholar]
  • 7.Alamo S. T. et al., “Electronic medical records and same day patient tracing improves clinic efficiency and adherence to appointments in a community based HIV/AIDS care program, in Uganda,” AIDS Behav., vol. 16, no. 2, pp. 368–374, 2012. doi: 10.1007/s10461-011-9996-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Jawhari B., Ludwick D., Keenan L., Zakus D., and Hayward R., “Benefits and challenges of EMR implementations in low resource settings: A state-of-the-art review,” BMC Med. Inform. Decis. Mak., vol. 16, no. 1, pp. 1–12, 2016. doi: 10.1186/s12911-016-0354-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Dornan L., Pinyopornpanish K., Jiraporncharoen W., Hashmi A., Dejkriengkraikul N., and Angkurawaranon C., “Utilisation of Electronic Health Records for Public Health in Asia: A Review of Success Factors and Potential Challenges,” Biomed Res. Int., vol. 2019, 2019. doi: 10.1155/2019/7341841 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Schoen C. et al., “A Survey Of Primary Care Doctors In Ten Countries Shows Progress In Use Of Health Information Technology, Less In Other Areas,” Health Aff., vol. 12, no. 31, pp. 2805–2816, 2012. doi: 10.1377/hlthaff.2012.0884 [DOI] [PubMed] [Google Scholar]
  • 11.Akanbi M. O. et al., “Use of Electronic Health Records in sub-Saharan Africa: Progress and challenges.,” J. Med. Trop., vol. 14, no. 1, pp. 1–6, 2012. [PMC free article] [PubMed] [Google Scholar]
  • 12.Kang’a S. G., Muthee V. M., Liku N., Too D., and Puttkammer N., “People, Process and Technology: Strategies for Assuring Sustainable Implementation of EMRs at Public-Sector Health Facilities in Kenya.,” AMIAAnnu. Symp. proceedings. AMIA Symp., vol. 2016, pp. 677–685, 2016. [PMC free article] [PubMed] [Google Scholar]
  • 13.Kruse C. S., Kristof C., Jones B., Mitchell E., and Martinez A., “Barriers to Electronic Health Record Adoption: a Systematic Literature Review,” J. Med. Syst., vol. 40, no. 12, 2016. doi: 10.1007/s10916-016-0628-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Piette J. D. et al., “Impacts of e-health on the outcomes of care in low- and middle-income countries: where do we go from here?,” Bull. World Health Organ., vol. 90, no. 5, pp. 365–372, 2012. doi: 10.2471/BLT.11.099069 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Bardhan I. R. and Thouin M. F., “Health information technology and its impact on the quality and cost of healthcare delivery,” Decis. Support Syst., vol. 55, no. 2, pp. 438–449, 2013. [Google Scholar]
  • 16.Alolayyan M. N., Alyahya M. S., Alalawin A. H., Shoukat A., and Nusairat F. T., “Health information technology and hospital performance the role of health information quality in teaching hospitals,” Heliyon, vol. 6, no. 10, p. e05040, 2020. doi: 10.1016/j.heliyon.2020.e05040 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Tubaishat A. and AL-Rawajfah O. M., “The use of electronic medical records in Jordanian hospitals: A nationwide survey,” CIN—Comput. Informatics Nurs., vol. 35, no. 10, pp. 538–545, 2017. doi: 10.1097/CIN.0000000000000343 [DOI] [PubMed] [Google Scholar]
  • 18.Sligo J., Gauld R., Roberts V., and Villac L., “A literature review for large-scale health information system project,” Int. J. Med. Inform., vol. 97, pp. 86–97, 2017. doi: 10.1016/j.ijmedinf.2016.09.007 [DOI] [PubMed] [Google Scholar]
  • 19.Eslami Andargoli A., Scheepers H., Rajendran D., and Sohal A., “Health information systems evaluation frameworks: A systematic review,” Int. J. Med. Inform., vol. 97, pp. 195–209, 2017. doi: 10.1016/j.ijmedinf.2016.10.008 [DOI] [PubMed] [Google Scholar]
  • 20.Stylianides A., Mantas J., Roupa Z., and Yamasaki E. N., “Development of an evaluation framework for health information systems (DIPSA),” Acta Inform. Medica, vol. 26, no. 4, pp. 230–234, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Mardiana S., Tjakraatmadja J. H., and Aprianingsih A., “DeLone–McLean Information System Success Model Revisited: The Separation of Intention to Use -Use and the Integration of Technology Acceptance Models,” Int. J. Econ. Financ. Issues, vol. 5, no. 5, pp. 172–182, 2015. [Google Scholar]
  • 22.Berhe M., Tadesse K., Berhe G., and Gebretsadik T., “Evaluation of Electronic Medical Record Implementation from User’s Perspectives in Ayder Referral Hospital Ethiopia,” J. Heal. Med. Informatics, vol. 08, no. 01, pp. 1–13, 2017. [Google Scholar]
  • 23.Cho K. W., Bae S. K., Ryu J. H., Kim K. N., An C. H., and Chae Y. M., “Performance evaluation of public hospital information systems by the information system success model,” Healthc. Inform. Res., vol. 21, no. 1, pp. 43–48, 2015. doi: 10.4258/hir.2015.21.1.43 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Delone W. H. and Mclean E. R., “Information Systems Success Measurement,” Found. Trends®in Inf. Syst., vol. 2, no. 1, pp. 1–116, 2016. [Google Scholar]
  • 25.Maillet E., Sicotte C., and Mathieu L., “The actual use of an electronic medical record (EMR) by acute care nurses: Examining a multidimensional measure at different adoption stages,” Stud. Health Technol. Inform., vol. 250, pp. 241–242, 2018. [PubMed] [Google Scholar]
  • 26.Huang M. Z., Gibson C. J., and Terry A. L., “Measuring Electronic Health Record Use in Primary Care: A Scoping Review,” Appl. Clin. Inform., vol. 9, no. 1, pp. 15–33, 2018. doi: 10.1055/s-0037-1615807 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Ngugi P. N., Babic A., Kariuki J., Santas X., Naanyu V., and Were M., “Development of Standard Indicators to Assess Use of Electronic Health Record Systems Implemented in Low- and Medium-Income Countries,” PLoS One, no. 4, pp. 1–15, 2021. doi: 10.1371/journal.pone.0244917 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Kenya National Bureau of Statistics, “2019 Kenya Population and Housing Census Volume 1: Population by County and Sub-County,” 2019.
  • 29.Ngugi P. N., Gesicho M. B., Babic A., and Were M. C., “Assessment of HIV Data Reporting Performance by Facilities During EMR Systems Implementations in Kenya,” Stud. Health Technol. Inform., vol. 272, pp. 167–170, 2020. doi: 10.3233/SHTI200520 [DOI] [PubMed] [Google Scholar]
  • 30.“KenyaEMR Distribution—Documentation—OpenMRS Wiki.” [Online]. Available: https://wiki.openmrs.org/display/docs/KenyaEMR+Distribution. [Accessed: 16-Sep-2020].
  • 31.“KenyaEMR Implemented at More Than 340 Sites in Under Two Years–I-TECH.” [Online]. Available: http://www.go2itech.org/2014/11/kenyaemr-implemented-at-more-than-340-sites-in-under-two-years/. [Accessed: 22-Nov-2017].
  • 32.“KeHMIS Applications.” [Online]. Available: https://data.kenyahmis.org:8181/. [Accessed: 16-Sep-2020].
  • 33.“KenyaEMR.” [Online]. Available: https://data.kenyahmis.org:8500/openmrs/kenyaemr/userHome.page? [Accessed: 16-Sep-2020].
  • 34.PEPFAR, “Monitoring, Evaluation, and Reporting Indicator Reference Guide. MER 2.0 (Version 2.4),” no. September. 2019.
  • 35.Randorff Højen A. and Rosenbeck Gøeg K., “SNOMED CT implementation. Mapping guidelines facilitating reuse of data,” Methods Inf. Med., 2012. doi: 10.3414/ME11-02-0023 [DOI] [PubMed] [Google Scholar]
  • 36.WHO, “International Statistical Classification of Diseases and Related Health Problems, 10th Revision ICD-10: Tabular List,” World Heal. Organ., vol. 1, pp. 332–345, 2016. [Google Scholar]
  • 37.“SPSS Software—United Kingdom | IBM.” [Online]. Available: https://www.ibm.com/uk-en/analytics/spss-statistics-software. [Accessed: 16-Sep-2020].
  • 38.Keny A., Wanyee S., Kwaro D., Mulwa E., and Were M. C., “Developing a National-Level Concept Dictionary for EHR Implementations in Kenya,” Stud. Health Technol. Inform., vol. 216, pp. 780–784, 2015. [PubMed] [Google Scholar]
  • 39.Bowens F. M., Frye P. A., and Jones W. A., “Health information technology: integration of clinical workflow into meaningful use of electronic health records.,” Perspect. Health Inf. Manag., vol. 7, 2010. [PMC free article] [PubMed] [Google Scholar]
  • 40.“United States Department of Health and Human Services. Centers for Medicare and Medicaid Services. CMS HER meaningful use overview.” [Online]. Available: https://www.cms.gov/EHRIncentivePrograms/30_Meaningful_Use.asp#TopOfPage. [Accessed: 04-Sep-2020].
  • 41.Maillet É., Mathieu L., and Sicotte C., “Modeling factors explaining the acceptance, actual use and satisfaction of nurses using an Electronic Patient Record in acute care settings: An extension of the UTAUT,” Int. J. Med. Inform., vol. 84, no. 1, pp. 36–47, 2015. doi: 10.1016/j.ijmedinf.2014.09.004 [DOI] [PubMed] [Google Scholar]
  • 42.Morris R. and Thompson K., “Password security,” Int. J. Secur., vol. 8, no. 1, 2014. [Google Scholar]
  • 43.“SNOMED—Columbia International eHealth Laboratory concept dictionary for OpenMRS.” [Online]. Available: http://www.snomed.org/snomed-ct/case-studies/columbia-international-ehealth-laboratory-concept. [Accessed: 16-Sep-2020].
  • 44.DeLone W. H. and Mclean E. R., “The DeLone and McLean Model of Information Systems Success: A Ten-Year Update,” J. Manag. Inf. Syst./Spring, vol. 19, no. 4, pp. 9–30, 2003. [Google Scholar]
  • 45.Mohamadali N. A. and Aziz N. F. A., “The Technology Factors as Barriers for Sustainable Health Information Systems (HIS)-A Review,” Procedia Comput. Sci., vol. 124, pp. 370–378, 2017. [Google Scholar]
  • 46.Ngugi P., Were M. C., and Babic A., “Facilitators and Barriers of Electronic Medical Records Systems Implementation in Low Resource Settings: A Holistic View,” Stud. Heal. Technol. Informatics IOS Press, vol. 251, pp. 187–190, 2018. [PubMed] [Google Scholar]
  • 47.Kruse C. S., Mileski M., Alaytsev V., Carol E., and Williams A., “Adoption factors associated with electronic health record among longterm care facilities: A systematic review,” BMJ Open, vol. 5, no. 1, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Farzianpour F., Amirian S., and Byravan R., “An Investigation on the Barriers and Facilitators of the Implementation of Electronic Health Records (EHR),” Health (Irvine. Calif)., vol. 7, no. December, pp. 1665–1670, 2015. [Google Scholar]
  • 49.Jawhari B. et al., “Barriers and facilitators to Electronic Medical Record (EMR) use in an urban slum,” Int. J. Med. Inform., vol. 94, pp. 246–254, 2016. doi: 10.1016/j.ijmedinf.2016.07.015 [DOI] [PubMed] [Google Scholar]
  • 50.Waruhari P., Babic A., Nderu L., and Were M. C., “A review of current patient matching techniques,” Stud. Health Technol. Inform., vol. 238, pp. 205–208, 2017. [PubMed] [Google Scholar]
  • 51.See B. P., Yap C. S., and Ahmad R., “Antecedents of continued use and extended use of enterprise systems,” Behav. Inf. Technol., vol. 38, no. 4, pp. 384–400, 2019. [Google Scholar]

Decision Letter 0

Chaisiri Angkurawaranon

30 Mar 2021

PONE-D-21-01911

A multivariate statistical evaluation of actual use of electronic health records systems implementations in Kenya

PLOS ONE

Dear Dr. Ngugi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by May 14 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Chaisiri Angkurawaranon

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that S2 Appendix in your submission contains map images which may be copyrighted.

All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (a) present written permission from the copyright holder to publish this figure specifically under the CC BY 4.0 license, or (b) remove the figure from your submission:

a. You may seek permission from the original copyright holder of S2 Appendix to publish the content specifically under the CC BY 4.0 license. 

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b. If you are unable to obtain permission from the original copyright holder to publish this figure under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

The following resources for replacing copyrighted map figures may be helpful:

USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/

The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/

Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html

NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/

Landsat: http://landsat.visibleearth.nasa.gov/

USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#

Natural Earth (public domain): http://www.naturalearthdata.com/

3. Thank you for stating the following in the Acknowledgments Section of your manuscript:

'Authors would like to acknowledge KeEMR system developers for providing input into the testing of the study instrument (query script), and helpdesk support to study participants. We also appreciate the support by Kenya Ministry of Health, County health directors, AMPATH Plus and FACES service development partners, and County Health Records Information Officers (CHRIOs). Much appreciation also to all the healthcare facilities and the system champions for their participation in the study.'

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

a. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

'This work was supported in part by the NORHED program (Norad: Project QZA-0484). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Norwegian Agency for Development Cooperation.  The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.'

Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now 

Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement.

b. Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. We note that Figure 1 in your submission contains copyrighted images.

All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (a) present written permission from the copyright holder to publish this figure specifically under the CC BY 4.0 license, or (b) remove the figure from your submission:

a. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. 

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b. If you are unable to obtain permission from the original copyright holder to publish this figure under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I read the paper with interest and I can see that there is some interesting data here. However I was not really convinced by some aspects of the work as presented here and found the paper rather too long and a bit repetitious while not really focusing on a clear set of issues and a strong message for the reader.

In the introductory section a number of older references are used to set the scene. The paper would be stronger if it focused on more recent studies in its literature review and on literature that serves the needs of this particular study, e.g. that details specific metrics of system use, assessment of system value, and the analysis of log files. Some of what was here felt a bit out-of-date (references over 10 years old). It hardly mentioned core models of system usage and success, e.g. DeLone and McLean get a very brief mention but their model – first and revised versions – may be very useful to underpin this work.

I would be more interested in the data that is collected here if it was not described (on a couple of occasions) as ‘objective’ or ‘unbiased’. From my position this data is useful and interesting, but it is not a higher truth than data collected in other ways. Nor is it (or any other data) unbiased – computer systems and their data are biased from the day they are developed! I would like more discussion of the validity and potential of the data and independent variables, and how they might be interpreted or add value. But this cannot be asserted or taken for granted.

Similarly I felt that the focus on the GLM model was the weakest part of the paper. To reduce the complex reality of clinical work and use of EHR over years to a single model seems a rush to judgement. The discussion of Table 4’s findings seemed to be more description than evaluation or analysis of the results (although the discussion section did take this a little further – e.g. page 17).

Before any such model I would suggest more attention paid to the data itself and to the various correlations that it reveals.

I was not clear what the section on ‘Implementation by county’ (pages 11 and 12) added to the paper – just a large table. The data on implementation year is interesting but could find a home elsewhere in the paper.

I was puzzled by the definition of some variables. For example, is the ‘EHR Variable completeness’ (page 13) really 100% in any meaningful way? Is the fact that the data fields are defined in the software really enough to say this? A real measure might be the number of these fields that are actually in use – e.g. populated in over x% of patient records?

I was also puzzled by the Observations measure (page 13). I would have thought this would need a denominator to be of any interest – and what the denominator should be (patient encounters, clinical staff-users, etc.) is an interesting issue to address in the paper at some depth, but it only appears briefly in the discussion (page 17).

In summary I believe that the paper could be significantly improved by tightening the presentation, reducing some of the repetition of findings, and focusing more on interpreting the data and less on the GLM. I would also suggest a stronger discussion section that takes the reader further and focuses on the ‘so-what’. The present version seems to repeat the main findings, but not take the argument further.

Reviewer #2: Thank you for the opportunity to review this manuscript on actual use of electronic health records systems. Since the advent of electronic health records, there have been concerns about the benefits of using EHRs and the success of implementation of EHRs. In the research conducted, the authors examined seven indicators that might reflect the actual use of electronic health records.

Upon reading the manuscript, the question arises whether the authors are really identifying the actual use of EHRs by health care staff with the seven indicators they have chosen.

The first indicator is ‘staff system use’ and was measured by the percentage of facility staff members who used the EHRs during the reporting period. In their study, the authors found a low average of 18.2%. However, this number has little meaning without providing more information about the context. For instance, are the health care facilities using a few stationary computers or are they using a shared or individual iPad? With a stationary computer health care professionals often log in with one user account and subsequently use the EHRs with multiple professionals on one user account. This is also the case when working with a shared iPad. Therefore, the ‘staff system use’ as measured by the authors is not an accurate indicator for actual use of the EHRs.

In addition, another indicator is ‘standardized terminologies’. This indicator is measured through the proportion of key terminologies that are mapped to standard terminologies. Mapping of terminologies is mostly relevant for the exchange of health care information and for the use of routine care data for research purposes. However, this mapping has no direct influence on the use of EHRs by health care staff in daily care practice. So, how does this indicator say anything about the actual use of EHRs? That remains unclear in this study. Moreover, there are different standardized terminologies for doctors, nursing staff and other health care staff. The authors fail to explain by which health care staff the EHRs are being used. Therefore, the indicator standardized terminology remains vague and unclear.

Besides, it can also be questioned whether the other five indicators are accurate indicators of actual use of the EHRs by health care staff. It is therefore very doubtful whether the authors in this study really assessed the state of the implementations of EHRs.

The authors already mentioned themselves that the study findings are not generalizable to other countries. They suggest that their approach for analysing EHRs use can be generalizable to other countries. However as explained before, this approach seems very questionable. Therefore, the added value of this manuscript for the international research community is not clear.

More detailed comments:

- Use of abbreviations. The authors are not consistent in use of abbreviations for the electronic health records. They use both EHR, EHRs and EMRs which causes confusion. Besides, the authors use a lot of other abbreviations which is not beneficial for the readability of the manuscript. The authors should only use universally used abbreviations and only when necessary. This applies both to the body of text and the tables.

- In the ‘Introduction’ section the authors write that EHRs improve quality of care and support HIV programs at a national level. Authors should explain these suggested relations further, since it remains unclear.

- In the ‘Introduction’ section the IS success model is introduced without further describing the model. For readers who are not familiar with this model this paragraph is unclear. Further explanation is needed.

- In the ‘Material and Methods’ section the authors write there are two types of EHRs endorsed for national development. It is unclear what was meant by ‘two types’. Does this mean EHRs from two different vendors?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Tony Cornford

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Decision Letter 1

Chaisiri Angkurawaranon

29 Jun 2021

PONE-D-21-01911R1

A multivariate statistical evaluation of actual use of electronic health records systems implementations in Kenya

PLOS ONE

Dear Dr. Ngugi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Aug 13 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Chaisiri Angkurawaranon

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I found this revised manuscript better in many respects. It is tighter in its focus, less repetitious and has lost some extraneous material. I was however disappointed to see that the rebuttal letter was so brief and indicated so little about how the authors had chosen to revise the paper.

The way that the authors cite and use the DeLone and McLean model is better in this version, but is still rather unimaginative. To say that the model has 6 dimensions and then only focus on one (‘system use’) is to underplay the model and the data available here. For example, is there not data on information quality here, and perhaps system quality too? Clearly this is not a ‘full’ D&M data set – but their work can be really helpful in interpreting and validating the data and analysis (more so than an ad hoc GLM).

I am still confused as how the ‘Observations’ measure is to be interpreted without some denominator. As a raw value it seems to be a proxy for so many things (size, enthusiasm, patient type, resources etc. etc.) as to be of little use.

As I said in my earlier review, I am not convinced by the GLM analysis, and I am not sure that the findings here tell us very much. If the authors think otherwise then I encourage them to draw out the importance of these findings to convince me or other readers. I note that in this version of the paper relatively little discussion is devoted to the GLM findings.

Overall, I believe that the paper does have interesting information to convey, and has potential for an analysis of this data that can make a contribution to the study of EHR in LMICs and in particular to studies of systems use. I do however suggest a further revision and edit to catch a number of small language issues and to strengthen the discussion of the data and its ability to reflect use. I would also tone down claims such as (on page 18) to ‘conclusively give a true state of EHRs use’. That is a claim that no researcher across the world can make!! Equally I don’t think you can claim ‘highly reliable data’ (same page). I suggest fewer of such claims and more time addressing the subtleties of the data, set in the context. This will impress a reader far more.

Reviewer #2: Thank you for addressing the comments raised in a previous round of review and adjusting your manuscript. I believe the adjustments have strengthened the manuscript.

Yet, I have one remark left about the adjustments made regarding the indicator ‘staff system use’. It was good to read that you now address the issue of sharing individual passwords. However, you very quickly draw the conclusion that training on appropriate use of account credentials is needed for staff. Thereby you assume that a lack of knowledge is the underlying cause, yet how do you know such a lack of knowledge exists? Can you refer to studies that indicate such a lack of knowledge? Other studies often point towards problems with the user-friendliness of EHRs, instead of a lack of knowledge among staff. Therefore, I believe you should look again at whether you can substantiate your conclusion/recommendation.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Tony Cornford

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Sep 7;16(9):e0256799. doi: 10.1371/journal.pone.0256799.r004

Author response to Decision Letter 1


3 Aug 2021

Dear Academic Editor,

Re: Manuscript PONE-D-21-01911R1: A multivariate statistical evaluation of actual use of electronic health record systems implementations in Kenya.

We appreciate the second review by PLOS ONE Journal of our Manuscript entitled “A multivariate statistical evaluation of actual use of electronic health record systems implementations in Kenya.” We are grateful for the opportunity to respond comprehensively to the reviewers’ comments. Please find our responses to all the comments by the reviewers below:

Editor’s Comments

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice

Response: We confirm that we have reviewed our reference list and ensured that it is complete and correct.

We have effected the changes on the reference list as follows:

i) Reference no.6 - Revised by adding details to the reference

ii) Reference no. 10 - Corrected name of author

iii) Reference 34 – corrected the reference details

iv) Reference 37 – removed from the list

v) Added references 41,42, 44 & 45

Reviewers’ Comments

Reviewer #1:

I found this revised manuscript better in many respects. It is tighter in its focus, less repetitious and has lost some extraneous material. I was however disappointed to see that the rebuttal letter was so brief and indicated so little about how the authors had chosen to revise the paper.

Response: We really appreciate the positive feedback on the revisions of our manuscript. Your comments have strengthened our manuscript. We do sincerely apologize where we were too brief in explaining our approach in addressing the raised concerns. Our intent was to answer comprehensively and to the point.

1. The way that the authors cite and use the DeLone and McLean model is better in this version, but is still rather unimaginative. To say that the model has 6 dimensions and then only focus on one (‘system use’) is to underplay the model and the data available here. For example, is there not data on information quality here, and perhaps system quality too? Clearly this is not a ‘full’ D&M data set – but their work can be really helpful in interpreting and validating the data and analysis (more so than an ad hoc GLM).

Response: We appreciate the reviewer’s feedback and also take note of the concerns/suggestions. The main study on which our manuscript is based was a summative evaluation to assess the success of EHRs implementations in healthcare facilities after eight years of national rollout at different stages (implementation periods). The choice of only one success variable in the DeLone & McLean IS success model, in this case ‘system use’, was informed by IS researchers’ argument that “no single variable is intrinsically better than another, so the choice of success variables is often a function of the objective of the study, the organizational context . . . etc.” (Page 17 [1]). We found the ‘system use’ success variable very suitable for our study objective, which was to establish actual use of the implemented EHRs. The study also forms part of the information system (IS) effectiveness success measure, which is evaluated by the use, user satisfaction and net benefits variables of the D&M model.

The data collected in our study is not clinically related but reveals the EHRs use patterns (e.g. user activities in the system assessed by the Staff system use indicator). Indicators assessing data quality (information quality), i.e. whether the data in the EHRs are relevant, comprehensive, precise, and provide an adequate overview of clinical work, were not considered in this phase of the study. However, EHRs readiness to capture quality data was assessed by the EHR variable completeness indicator (Page 12/13, lines 264-267).

We have also strengthened the discussion part in relation to system quality and the discussion on the data as follows:

“Integration with other systems is one of the system quality measures, alongside ease-of-use, functionality, reliability and flexibility [43]. The low data exchange indicator findings from this study suggest the need for investigation of other system quality measures. Technological barriers, such as functionality and compatibility issues, and poor user-friendliness can limit system use [44].” (Page 16, lines 340-344)

2. I am still confused as how the ‘Observations’ measure is to be interpreted without some denominator. As a raw value it seems to be a proxy for so many things (size, enthusiasm, patient type, resources etc. etc.) as to be of little use.

Response: We appreciate the reviewer’s comments on the Observations indicator. We agree a denominator may well help in better interpretation of the data. However, as explained in our earlier rebuttal letter, in this study we followed the approved usage indicators as defined in the Ngugi et al. study (https://doi.org/10.1371/journal.pone.0244917). Therefore, we are limited to collecting the clinical data as described in that manuscript. However, revisions of the indicator to incorporate a denominator may be considered in a future study (or revision of the indicators). For example, the ‘observations’ measure data can be matched against the national data warehouse data to see if the required observational data is sent to the data warehouse as defined (i.e. concordance). We hope that this explanation clarifies and satisfactorily addresses the observation indicator.
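The denominator question discussed in this exchange can be made concrete with a small sketch. Assuming hypothetical per-facility counts (the facility names and all numbers below are fabricated for illustration, not study data), a normalized indicator such as observations per patient encounter could be computed as:

```python
# Illustrative sketch only: dividing each facility's raw 'observations' count
# by a candidate denominator (patient encounters). All values are fabricated.
facilities = {
    "Facility A": {"observations": 3400, "encounters": 850},
    "Facility B": {"observations": 1200, "encounters": 150},
}

# Observations per encounter for each facility
rates = {
    name: d["observations"] / d["encounters"]
    for name, d in facilities.items()
}

for name, rate in rates.items():
    print(f"{name}: {rate:.1f} observations per encounter")
```

This illustrates the reviewer's point: the facility with the larger raw count (3400 vs 1200) records fewer observations per encounter (4.0 vs 8.0), a distinction the raw value alone cannot reveal.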

3. As I said in my earlier review, I am not convinced by the GLM analysis, and I am not sure that the findings here tell us very much. If the authors think otherwise then I encourage them to draw out the importance of these findings to convince me or other readers. I note that in this version of the paper relatively little discussion is devoted to the GLM findings.

Response: We have taken note of the comments on the GLM analysis and apologize for the confusion. We have replaced the GLM analysis with a multiple linear regression model to establish how individual facility characteristics (level of hospital, ownership, services offered and mode of EHRs use) affected the use of the system. The result from the analysis shows the units by which each facility characteristic influenced system use (positively or negatively). We believe this is vital information for the system implementers and the Ministries of Health, pointing to areas that need attention to improve the success of existing and subsequent implementations.

We have revised the respective sections of the manuscript as follows:

Data analysis:

“Finally, we fitted a multiple linear regression model to establish how individual facility characteristics affected the use of the system. The dependent variable was the number of active system users, while the covariates were the facility characteristics (KEPH level, ownership, services and mode of EHRs use).” (Page 10, Lines 211-214)

Results:

“The relationship between the facility characteristics and the number of active system users assessed by the multiple linear regression analysis was statistically significant (p=0.000) for all the covariates (Table 3). The characteristics influenced system usage positively, with the exception of the Mode of EHRs use characteristic. The RDE mode of EHRs use had the highest negative impact on the use of the system.

Table 3. Multiple linear regression model for staff system use and facility characteristics

Facility Characteristics                                      B        Std. Error   Beta     t         P-value
(Constant)                                                    0.354    0.084                 4.213     0.000
KEPH Level (Level 2, Level 3, Level 4)                        0.445    0.019        0.194    23.929    0.000
Ownership (Faith-Based Organisation, Ministry of Health,
  Non-Governmental Organization)                              0.401    0.035        0.092    11.308    0.000
Services (CT, CT&HTS)                                         0.392    0.015        0.206    25.351    0.000
Mode of EHRs use (Hybrid, POC, RDE)                          -0.124    0.014       -0.074    -9.176    0.000

Dependent Variable: Number of active system users; Independent Variables: KEPH level, ownership, mode of EHRs use, and services. p-value: when p<=0.05, there is a statistically significant difference. B (coefficient) explains the change in the dependent variable that can be attributed to a change of one unit in the independent variable.”

(Page 14/15, lines 297-307)
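For readers wishing to see the shape of such an analysis, the dummy-coded multiple linear regression described in this response can be sketched as follows. This is a minimal illustration under assumed conditions: the data are fabricated, the variable names (keph_level, mode) are hypothetical stand-ins for the facility characteristics, and the coefficients produced bear no relation to the study's reported values.

```python
import numpy as np

# Fabricated example of a dummy-coded OLS regression of facility
# characteristics on system use. NOT the study's data or results.
rng = np.random.default_rng(0)
n = 200

keph_level = rng.integers(0, 3, n)  # 0 = Level 2 (reference), 1 = Level 3, 2 = Level 4
mode = rng.integers(0, 3, n)        # 0 = Hybrid (reference), 1 = POC, 2 = RDE

def dummies(codes, n_levels):
    """One-hot columns for levels 1..n_levels-1; level 0 is the reference."""
    return np.column_stack([(codes == k).astype(float) for k in range(1, n_levels)])

# Design matrix: intercept plus dummy columns for each categorical characteristic
X = np.column_stack([np.ones(n), dummies(keph_level, 3), dummies(mode, 3)])

# Fabricated outcome: more active users at Level 4 facilities, fewer under RDE
y = 5.0 + 2.0 * (keph_level == 2) - 1.5 * (mode == 2) + rng.normal(0, 0.5, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficient estimates
# beta order: [intercept, Level 3, Level 4, POC, RDE]
print(np.round(beta, 2))
```

Each coefficient is the estimated change in the dependent variable for that category relative to its reference level, which is how a negative coefficient for a data-entry mode (such as RDE in the authors' Table 3) is read.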

Discussion:

“Indeed, our study revealed that point of care (POC) and hybrid modes of data capture were associated with increased system usage compared to retrospective data entry. Thus, EHRs implementors should aim at point of care mode of operation right from initiation.” (Page 17, lines 358-361)

4. Overall, I believe that the paper does have interesting information to convey, and has potential for an analysis of this data that can make a contribution to the study of EHR in LMICs and in particular to studies of systems use. I do however suggest a further revision and edit to catch a number of small language issues and to strengthen the discussion of the data and its ability to reflect use.

Response: Thank you for this feedback. Based on the above comment, we have thoroughly reworked the manuscript to address language issues, as shown in the track changes document.

5. I would also tone down claims such as (on page 18) to ‘conclusively give a true state of EHRs use’. That is a claim that no researcher across the world can make!! Equally I don’t think you can claim ‘highly reliable data’ (same page). I suggest fewer of such claims and more time addressing the subtleties of the data, set in the context. This will impress a reader far more.

Response: We have taken note of the comment and agree that the claims can be misleading. We have accordingly revised that part of the manuscript as follows:

“Secondly, the study period (2012 – 2019) was long enough to reveal the state of EHRs use in the health care facilities. Also, the study results are reliable due to the use of the census method in the collection of the primary data.” (Page 17, lines 368-370)

Reviewer #2:

Thank you for addressing the comments raised in a previous round of review and adjusting your manuscript. I believe the adjustments have strengthened the manuscript.

Response: We really appreciate the positive feedback on the revisions of our manuscript. Your comments have surely strengthened our manuscript.

1. Yet, I have one remark left about the adjustments made regarding the indicator ‘staff system use’. It was good to read that you now address the issue of sharing individual passwords. However, you very quickly draw the conclusion that training on appropriate use of account credentials is needed for staff. Thereby you assume that a lack of knowledge is the underlying cause, yet how do you know such a lack of knowledge exists? Can you refer to studies that indicate such a lack of knowledge? Other studies often point towards problems with the user-friendliness of EHRs, instead of a lack of knowledge among staff. Therefore, I believe you should look again at whether you can substantiate your conclusion/recommendation.

Response: We appreciate the reviewer pointing out the need to support our claims. Our claim was informed by findings from a focus group discussion with system users in the setting, which was part of this study. The publication on those findings is under review in another journal. However, we have revised that section of the manuscript as follows, with relevant literature referenced:

“A study investigating users’ behaviour in password utilization revealed that users share passwords for convenience as well as a show of trust [41]. The finding from our study warrants deeper assessment of user credentialing processes and account usage patterns (such as sharing of credentials). It also highlights the need to re-emphasize good password practices to the system users and active monitoring of user accounts by the system administrators. We also recommend further research to establish the user-computer ratio in the healthcare facilities.” (Page 15/16, lines 330-336)

We truly appreciate the feedback from the reviewers, which has strengthened our manuscript. We hope that you will find our responses satisfactory. Thank you once again for considering our manuscript for PLOS One Journal.

Sincerely,

Philomena Ngugi

waruharip@gmail.com

Corresponding author

[1] W. H. DeLone and E. R. McLean, “The DeLone and McLean Model of Information Systems Success: A Ten-Year Update,” J. Manag. Inf. Syst., vol. 19, no. 4, pp. 9–30, 2003.

Attachment

Submitted filename: S4_Appendix.xlsx

Decision Letter 2

Chaisiri Angkurawaranon

17 Aug 2021

A multivariate statistical evaluation of actual use of electronic health records systems implementations in Kenya

PONE-D-21-01911R2

Dear Dr. Ngugi,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Chaisiri Angkurawaranon

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Chaisiri Angkurawaranon

27 Aug 2021

PONE-D-21-01911R2

A multivariate statistical evaluation of actual use of electronic health record systems implementations in Kenya

Dear Dr. Ngugi:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Chaisiri Angkurawaranon

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Distribution of KeEMRs implementations as of June 2020.

    (PDF)

    S2 Appendix. Standard operating procedures for query extraction.

    (PDF)

    S3 Appendix. KeEMRs implementations distribution in the period 2012–2019 across the counties (n = 19).

    (PDF)

    S4 Appendix. Facilities descriptive statistics for staff system use, observations & patient identification indicators.

    (XLSX)

    S5 Appendix. Interoperability layer (IL) module (data exchange) presence/absence in facilities across the counties.

    (PDF)

    S6 Appendix. Facilities performance using weighted means.

    (XLSX)

    S7 Appendix

    (XLSX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Attachment

    Submitted filename: S4_Appendix.xlsx

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.

