PLoS One. 2021 Jan 11;16(1):e0244917. doi: 10.1371/journal.pone.0244917

Development of standard indicators to assess use of electronic health record systems implemented in low-and medium-income countries

Philomena Ngugi 1,2,*, Ankica Babic 1,3, James Kariuki 4, Xenophon Santas 5, Violet Naanyu 6, Martin C Were 2,7
Editor: Chaisiri Angkurawaranon
PMCID: PMC7799790  PMID: 33428656

Abstract

Background

Electronic Health Record Systems (EHRs) are being rolled out nationally in many low- and middle-income countries (LMICs), yet assessing actual system usage remains a challenge. We employed a nominal group technique (NGT) process to systematically develop high-quality indicators for evaluating actual usage of EHRs in LMICs.

Methods

An initial set of 14 candidate indicators was developed by the study team by adapting the Human Immunodeficiency Virus (HIV) Monitoring, Evaluation, and Reporting indicators format. A multidisciplinary team of 10 experts was convened in a two-day NGT workshop in Kenya to systematically evaluate, rate (using Specific, Measurable, Achievable, Relevant, and Time-Bound (SMART) criteria), prioritize, refine, and identify new indicators. NGT steps included introduction to candidate indicators, silent indicator ranking, round-robin indicator rating, and silent generation of new indicators. A 5-point Likert scale was used to rate the candidate indicators against the SMART components.

Results

Candidate indicators were rated highly on SMART criteria (4.05/5). NGT participants settled on 15 final indicators, categorized as system use (4), data quality (3), system interoperability (3), and reporting (5). Data entry statistics, system uptime, and EHRs variable concordance indicators were rated highest.

Conclusion

This study describes a systematic approach to develop and validate quality indicators for determining EHRs use and provides LMICs with a multidimensional tool for assessing success of EHRs implementations.

Introduction

Electronic Health Record Systems (EHRs) are increasingly being implemented within low- and middle-income country (LMIC) settings, with the goal of improving clinical practice, supporting efficient health reporting, and improving the quality of care provided [1,2]. System implementation is the installation and customization of information systems in organizations, making them available for use to support service delivery, e.g. EHRs in healthcare [3,4]. National-level implementations of EHRs in many LMICs primarily aim to support HIV care and treatment, with funding for these systems coming from programs such as the US President’s Emergency Plan for AIDS Relief (PEPFAR) [5,6]. Several countries, such as Rwanda, Uganda, Mozambique, and Kenya, have gone beyond isolated and pilot implementations of EHRs to large-scale national rollout of systems within government-run public facilities [7]. For example, Kenya has progressively implemented over 1,000 electronic medical record (EMR) systems since 2012 in both private and public facilities, supporting patient data management mainly in HIV care and treatment [8]. With such large-scale EHRs implementations, developing countries are finding themselves in the unenviable position of being unable to easily track the status of each implementation, especially given that most of the EHRs implementations are standalone and are distributed over large geographical areas. A core consideration is the extent to which the EHRs implemented are actually in use to support patient care, program monitoring, and reporting. Without robust evidence of use of the implemented EHRs, it becomes difficult to justify continued financial support of these systems within these resource-constrained settings and to realize the anticipated benefits of these systems.

In LMICs, implementation of EHRs within a clinical setting does not automatically translate to use of the system. While the evidence is mounting on the benefits of EHRs in improving patient care and reporting in these settings, a number of studies reveal critical challenges to realizing these benefits [9–11]. Some of these challenges include: poor infrastructure (lack of stable electricity, unreliable Internet connectivity, inadequate computer equipment), inadequate technical support, limited computer skills and training, and limited funding [12–17]. Additionally, implementation of EHRs is complex and can be highly disruptive to conventional workflows. Disruption caused by the EHRs can affect its acceptance and use; this is more likely to happen if the implementation was not carefully planned and if end-users were not adequately involved during all stages of the implementation [18–21]. The use of the EHRs can also be affected by data quality issues, such as completeness, accuracy, and timeliness [22]. This is a particular risk in LMICs given the lack of adequate infrastructure, human capacity, and EHRs interoperability across healthcare facilities [23].

Although LMICs have embraced national-level EHRs implementations, little evidence exists to systematically evaluate actual success of these implementations, with success largely defined as a measure of effectiveness of the EHRs in supporting care delivery and health system strengthening [24–26]. Success of EHRs implementation depends on numerous factors, and these often go beyond simple consideration of the technology used [19,20]. Many information system (IS) success frameworks and models incorporate a diverse set of success measures, such as “effectiveness, efficiency, organizational attitudes and commitment, users’ satisfaction, patient satisfaction, and system use” [27–34]. Among numerous IS success frameworks and models, “system use” is considered an important measure in evaluating IS success; IS usage being “the utilization of information technology (IT) within users’ processes either individually, or within groups or organizations” [29,31]. There are several proposed measures for system use, such as frequency of use, extent of use, and number of system accesses, but these tend to differ between models. The system use measures are either self-reported (subjective) or computer-recorded (objective) [22,29,30].

There is compelling evidence that IS success models need to be carefully specified for a given context [34]. EHRs implementations within LMICs have unique considerations, hence system use measures need to be defined so that they are relevant and meet EHRs monitoring needs, while not being too burdensome to collect accurately. Carefully developed EHRs use indicators and metrics are needed to regularly monitor the status of the EHRs implementations, in order to identify and rectify challenges to advance effective use. A common set of EHRs indicators and metrics would allow for standardized aggregation of performance of implementations across locations and countries. This is similar to the systems currently in use for monitoring the success of HIV care and treatment through a standard set of HIV Monitoring, Evaluation and Reporting (MER) indicators [35].

All care settings providing HIV care through the PEPFAR program, across all countries, are required to report the HIV indicators per the MER indicator definitions. Developing EHRs indicators along the same lines and format as the HIV MER indicators assures that the resulting EHRs system use indicators are in a format already familiar to most care settings within LMICs. This approach reduces the learning curve to understanding and applying the developed indicators. In this paper, we present the development and validation of a detailed set of EHRs use indicators that follows the HIV MER format, using the nominal group technique (NGT) and group validation. Although developed for Kenya, the indicator set is applicable across LMICs and similar contexts.

Materials and methods

Identification of candidate set of EHRs use indicators

Using desk review, literature review, and discussions with subject matter experts, the study team (PN, MW, JK, XS, AB) identified an initial set of 14 candidate indicators for EHRs use [36–39]. The candidate set of indicators was structured around four main thematic areas, namely: system use, data quality, interoperability, and reporting. The system use and data quality dimensions broadly reflect IS system use aspects contained in the DeLone and McLean IS success model, while the interoperability and reporting dimensions enhance system availability and use [39]. The focus was on practical indicators that were specific, measurable, achievable, relevant, and time-bound (SMART) [40]. This would allow the developed indicators to be collected easily, reliably, accurately, and in a timely fashion within the resource constraints of the clinical settings where the information systems are implemented.

Each of the 14 candidate indicators was developed to clearly outline the description of the indicator, the data elements constituting the numerator and denominator, how the indicator data should be collected, and what data sources would be used for the indicator. These details were developed using a template adapted from the HIV MER 2.0 indicator reference guide, given that information systems users in most of these implementation settings were already familiar with this template (S1 Appendix) [35]. Nevertheless, given the simplicity of the format, only a short training period should be needed for those unfamiliar with it.
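
To make the template concrete, the reference sheet can be thought of as a small structured record. Below is a minimal sketch in Python of one possible representation; the field names are assumptions drawn from the description above and the MER-style template (S1 Appendix), not the authors' actual schema.

```python
from dataclasses import dataclass

# Hypothetical representation of an indicator reference sheet; the fields
# mirror the elements described above (description, numerator, denominator,
# collection method, data sources, frequency).
@dataclass
class IndicatorReferenceSheet:
    name: str                 # e.g. "System uptime"
    description: str          # what the indicator measures
    numerator: str            # data element(s) forming the numerator
    denominator: str          # data element(s) forming the denominator
    how_to_collect: str       # procedure for collecting the measure data
    data_sources: list        # e.g. ["EHRs system logs", "facility registers"]
    reporting_frequency: str  # e.g. "Monthly", "Quarterly"

# Illustrative instance based on the "System uptime" indicator in Table 3.
uptime = IndicatorReferenceSheet(
    name="System uptime",
    description="% of time the system is up when needed during care",
    numerator="Hours the EHRs was available during care hours",
    denominator="Total care hours in the reporting period",
    how_to_collect="Derived from system logs where possible",
    data_sources=["EHRs system logs"],
    reporting_frequency="Monthly",
)
```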

Nominal group technique (NGT)

NGT is a ranking method that enables a controlled group of nine or ten subject matter experts to generate and prioritize a large number of issues within a structure that gives the participants an equal voice [41]. The NGT involves several steps, namely: 1) silent, written generation of responses to a specific question, 2) round-robin recording of ideas, 3) serial discussion for clarification, and 4) voting on item importance. It allows for equal participation of members and generates data that are quantitative, objective, and prioritized [42,43]. NGT was used in this study to reach consensus on the final set of indicators for monitoring EHRs use.

NGT participants

Indicator development requires consultation with a broad range of subject matter experts with knowledge of the development, implementation, and use of EHRs. With guidance from the Kenya Ministry of Health (MoH), a heterogeneous group of 10 experts was invited to a two-day workshop led by two of the researchers (M.W. and P.N.) and a qualitative researcher (V.N.). Inclusion in the NGT team was based on the ability of the participant to inform the conversation around EHRs usage metrics and indicators, with an emphasis on assuring that multiple perspectives were represented in the deliberations. The NGT participants’ average age was 40 years, and the majority were male (69%). The participants included: the researchers acting as facilitators; one qualitative researcher (an associate professor and lecturer); two MoH representatives from the Division of Health Informatics and M&E (health information systems management experts); one Service Development Partners (SDPs) representative (who oversees EHRs implementations and training of users); four users of the EHRs (clinical officers (2) and health records information officers (2)); a CDC funding agency representative (an informatics service fellow in Health Information Systems); and two representatives from the EHRs development and implementing partners (Palladium and the International Training and Education Center for Health (I-TECH)), who have been involved in the EHRs implementations and who selected sites for EHRs implementations [44,45]. The study participants were consenting adults, and participation in the group discussion was voluntary. All participants completed a printed consent form before taking part in the study. Discussions were conducted in English, with which all participants were conversant. For analysis and reporting purposes, demographic data and roles of participants were collected, but no personal identifiers were captured. The study was approved by the Institutional Review and Ethics Committee at Moi University, Eldoret (MU/MTRH-IREC approval number FAN:0003348).

NGT process

The NGT exercise was conducted on April 8–9, 2019, in Naivasha, Kenya. After providing informed consent, the NGT participants were informed about the purpose of the session through a central theme question: “How can we determine the actual use of EHRs implemented in our healthcare facilities?” Participants were first given an overview of the NGT methodology and how it has been used in the past. Given that candidate indicators had already been defined in a separate process, we did not include the first stage of silent generation of ideas. Ten NGT participants (excluding research team members) rated the quality of the candidate indicators against the SMART criteria, using a 5-point Likert scale for each of the five quality components. The NGT exercise was conducted using the following five specific steps:

  • Step 1: Clarification of indicators. For each of the 14 candidate indicators, the facilitator took five minutes to introduce and clarify details of the candidate indicator to ensure all participants understood what each indicator was meant to measure and how it would be generated. Where needed, participants asked questions and facilitators provided clarifications.

  • Step 2: Silent indicator rating. The participants were given 10 minutes per indicator and were asked to: (1) individually and anonymously rate each candidate indicator on each of the SMART dimensions using a 5-point Likert scale for each dimension where 1 = Very Low, 2 = Low, 3 = Neutral, 4 = High, and 5 = Very high level of quality; (2) provide an overall rating of each indicator on a scale from 1–10, with 10 being the highest overall rating for an indicator; (3) indicate whether the indicator should be included in the final list of indicators or removed from consideration; and (4) provide written comments on any aspect regarding the indicator and their rating process. To help with this process, a printed standardized indicator ranking form was provided (S2 Appendix), and the indicator details were projected on a screen.

  • Step 3: Round-robin recording of indicator rating. Each participant in turn was asked to give their overall rating of each indicator and these were recorded on a frequency table. No discussions, questions, or comments were allowed until all the participants had given their ratings. At the end of the round-robin, each participant in turn elucidated his/her criteria for the indicator overall rating score. At this stage, open discussions, questions and comments on the indicator were allowed. The discussions were recorded verbatim. The participants were not allowed to revise their individual rating score after the discussion.

  • Step 4: Silent generation of new indicators. After steps 2 and 3 were repeated for all 14 candidate indicators, the participants were given ten minutes to think and write down any missing indicators in line with the central theme question. The new indicator ideas were shared in a round-robin without repeating what had been shared by other participants. These new proposed indicators were written on a flip chart and discussed to ensure all participants understood and approved any new indicator suggestions. The facilitator ensured that all participants were given an opportunity to contribute. From this exercise, new indicators were generated and details defined collectively by the team.

  • Step 5: Ranking and sequencing the indicators. After Step 4, with exclusion of some of the original candidate indicators and addition of new ones based on team discussions, a final list of 15 indicators was generated. Each participant was asked to individually and anonymously rank the final list of the 15 indicators in order of importance, with rank 1 being the most important and rank 15 the least important. The participants were also asked to group the 15 indicators by the implementation priority and sequence into Phase 1 or 2. Phase 1 indicators would be those deemed as not requiring much work to collect, while Phase 2 indicators would require more human input and resources to collect.

Selection of final indicators

All the individual rankings for each indicator were summed across participants, and the final list of prioritized consensus-based EHRs use indicators was derived from the rank order based on the average scores. The ranked indicator list was shared for final discussion and approval by the full team of NGT participants. The relevant indicator reference sheets for every indicator were also updated based on discussions from the NGT exercise. No fixed threshold number was used to select the indicators for inclusion. Finally, the indicator details were reviewed (including the indicator definition, how data elements are collected, and how the indicator is calculated) as guided by the NGT session discussions, resulting in the final consensus-based EHRs use reference sheets with details for each indicator.
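
As an illustration of this score-and-rank procedure, the sketch below (Python) sums individual ranks and orders indicators by average score. The rank values are hypothetical stand-ins, since the per-participant rank sheets are not reproduced in the paper; Table 4 reports the actual resulting means and SDs.

```python
import statistics

# Hypothetical per-participant ranks (1 = most important) for three of the
# 15 indicators, from nine raters; actual summary values appear in Table 4.
ranks = {
    "Data Entry Statistics": [1, 1, 2, 2, 2, 3, 3, 3, 8],
    "System Uptime": [1, 2, 2, 3, 4, 4, 5, 6, 14],
    "Reporting Rate": [9, 10, 11, 12, 12, 13, 14, 15, 15],
}

# Lower average rank score = higher consensus priority.
prioritized = sorted(ranks.items(), key=lambda item: statistics.mean(item[1]))
for name, r in prioritized:
    print(f"{name}: mean rank {statistics.mean(r):.2f} "
          f"(SD {statistics.stdev(r):.2f})")
```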

Data analysis

Descriptive statistics were computed to investigate differences in the rating of the 14 candidate indicators among the participants. A Chi-square test was used to determine whether there were statistically significant differences in the rating of indicators across each of the SMART dimensions. The rating totals per SMART dimension from the crosstab analysis output were summarized in a table (Table 1), together with the p-value from the Chi-square output for each dimension. The variability between the SMART dimensions and the ratings was tested using Chi-square because the parameters under investigation were categorical (non-parametric) variables. The totals include the rating count and its percentage. The weighted mean for each SMART dimension across all 14 indicators was calculated to identify how the participants rated the various candidate indicators. For the final indicator list, descriptive statistics were computed to determine the average rank score for each indicator and to assign priority numbers from the lowest average score to the highest. As such, the indicator with the lowest average score was considered the most important per the participants’ consensus. All analyses were performed in SPSS version 25 (IBM, https://www.ibm.com/analytics/spss-statistics-software). The indicators were also grouped according to the implementation phase number assigned by the participants (either 1 or 2) to form the implementation order phases.
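
For readers who want to replicate the analysis outside SPSS, the sketch below (Python, using numpy and scipy) shows the two computations described above. The crosstab passed to the Chi-square test is hypothetical, since Table 1 reports only the per-dimension column totals; the weighted mean, by contrast, is reproducible directly from the Table 1 counts.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical indicator-by-rating crosstab for one SMART dimension
# (rows = indicators, columns = Likert ratings 1..5); illustration only.
crosstab = np.array([
    [0, 1, 2, 4, 3],
    [1, 0, 3, 4, 2],
    [0, 2, 1, 3, 4],
])
chi2, p, dof, expected = chi2_contingency(crosstab)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")

# Weighted mean for the "Specific" dimension, from the Table 1 counts.
counts = np.array([7, 7, 18, 54, 48])            # ratings 1..5
mean = (counts * np.arange(1, 6)).sum() / counts.sum()
print(f"Specific: mean = {mean:.2f}")            # 3.96, as in Table 1
```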

Table 1. Summary of the indicators rating on the various SMART quality dimensions.

SMART Quality   Responses   Rating(a) of SMART Survey               Total    Mean(b)   P-value
                            1      2      3      4      5
Specific        Count       7      7      18     54     48          134      3.96      0.141
                Percent     5.2%   5.2%   13.4%  40.3%  35.8%       100.0%
Measurable      Count       6      12     19     52     43          132      3.86      0.009
                Percent     4.5%   9.1%   14.4%  39.4%  32.6%       100.0%
Achievable      Count       4      8      24     42     53          131      4.01      0.039
                Percent     3.1%   6.1%   18.3%  32.1%  40.5%       100.0%
Relevant        Count       5      6      11     37     74          133      4.27      0.023
                Percent     3.8%   4.5%   8.3%   27.8%  55.6%       100.0%
Time-bound      Count       5      3      15     51     59          133      4.17      0.228
                Percent     3.8%   2.3%   11.3%  38.3%  44.4%       100.0%

(a) Rating scale: 1 = Very low; 2 = Low; 3 = Neutral; 4 = High; 5 = Very high.

(b) Mean range: 1.0–2.5 = Low; 2.6–3.5 = Neutral; 3.6–5.0 = High.

Results

SMART criteria rating for candidate indicators

The participants rated the collective set of 14 candidate indicators highly (i.e., 4 or 5) across all the SMART dimensions (Table 1). Variation in the totals across the SMART components reflects non-response by some participants on some components.

From the analysis, the indicators were rated high on the specific and time-bound SMART quality dimensions, with means of 3.96 (p = 0.141) for specific and 4.17 (p = 0.228) for time-bound; these two dimensions showed no statistically significant differences in how the various participants rated them. The measurable, achievable, and relevant dimensions were also rated high, with means of 3.86 (p = 0.009), 4.01 (p = 0.039), and 4.27 (p = 0.023), respectively, and these showed statistically significant differences in how the participants rated them across all the indicators.

Individual indicator ratings

Table 2 shows the participants’ overall ratings for each of the 14 candidate indicators on a scale of 1 to 10 (lowest to highest). Overall, the participants rated the candidate set of indicators highly, with a mean rating of 6.6. Data concordance and automatic reports were rated highest, with means of 8.0 and 8.1, respectively. However, the participants rated the observations indicator low, with a mean of 3.8, while the staff system use, system uptime, and report completeness indicators were moderately rated, with means of 4.4, 5.9, and 5.8, respectively. The individual indicator ratings and the ratings against the SMART criteria served as a validation metric for the candidate indicators.

Table 2. Candidate indicators overall rating.

#    Indicator                     Indicator overall rating (1–10)                  Total   Mean*
                                   1    2    3    4    5    6    7    8    9    10
1    Data entry statistics         0    0    1    0    1    1    1    2    3    0    9      7.1
2    Staff system use              0    0    3    0    5    1    0    0    0    0    9      4.4
3    Observations                  1    2    2    3    0    0    1    1    0    0    10     3.8
4    System uptime                 0    0    1    1    2    3    1    1    1    0    10     5.9
5    Data timeliness               0    0    0    0    0    1    7    2    0    0    10     7.1
6    Data concordance              0    0    0    0    1    0    2    4    1    2    10     8.0
7    Data completeness             0    0    0    0    0    2    3    4    1    0    10     7.4
8    Automatic reports             0    0    0    1    0    0    0    5    3    1    10     8.1
9    Report timeliness             1    0    0    0    2    3    0    1    2    1    10     6.5
10   Report completeness           0    0    1    1    1    4    2    1    0    0    10     5.8
11   Reporting rate                0    0    1    0    1    0    4    1    0    2    9      7.1
12   Data exchange                 0    0    0    0    1    2    4    2    0    0    9      6.8
13   Standardized terminologies    0    0    0    1    1    2    3    0    2    0    9      6.7
14   Patient identification        0    0    0    0    0    1    5    2    1    0    9      7.3
     Total                         2    2    9    7    15   20   33   26   14   6    134    6.6

* Mean ranges: 1.0–4.0 = Low; 4.1–6.0 = Neutral; 6.1–10.0 = High.
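
Each mean in Table 2 is simply the frequency-weighted average of the 1–10 ratings. A quick check in Python, using the “Data concordance” row:

```python
# Counts of participants giving each overall rating 1..10 (Table 2, row 6).
counts = [0, 0, 0, 0, 1, 0, 2, 4, 1, 2]
total = sum(counts)  # 10 participants rated this indicator
mean = sum(score * n for score, n in zip(range(1, 11), counts)) / total
print(total, round(mean, 1))  # -> 10 8.0, matching Table 2
```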

Final indicators list

The NGT team reached a consensus to include all 14 candidate indicators in the final list of indicators, and added one additional indicator, report concordance, for a total of 15 EHRs usage indicators. The final set of indicators fell into four categories, namely (Fig 1 and Table 3):

Fig 1. Infographic of key domains for EHRs use indicators.


Table 3. The set of validated reporting indicators on EHR system use.

# Domain Indicator Name Description Frequency
1 System Use Data entry statistics Number and % of patient records entered into system during reporting period Monthly
2 System Use Staff system use % of providers who entered data into system as expected for at least 90% of encounters Quarterly
3 System Use Observations Number of observations recorded during period Quarterly
4 System Use System Uptime % of time system is up when needed during care Monthly
5 Data Quality Clinical data Timeliness % of clinical provider encounters entered into the EHRs within agreed time period. Monthly
6 Data Quality Variable Concordance % concordance of data in paper form vs data in EHRs Quarterly
7 Data Quality Variable Completeness % of required data elements contained in EHRs Quarterly
8 Interoperability Data Exchange Automatic exchanging of data with different systems Quarterly
9 Interoperability Standardized Terminologies % of terms that are mapped to standardized terminologies or national dictionary. Yearly
10 Interoperability Patient identification % of nationally accepted patient identification instances in the EHRs. Quarterly
11 Reporting Automatic Reports Proportion of expected reports generated automatically by system In line with PEPFAR(a) reports
12 Reporting Reporting Rate Proportion of expected reports that are actually submitted Monthly
13 Reporting Report Timeliness Timeliness of expected reports to national reporting system Monthly
14 Reporting Report Completeness Completeness of expected reports to national reporting system In line with PEPFAR reports
15 Reporting Report Concordance % concordance of data contained in paper-derived reports compared to report data derived from the EHRs Biannual

(a) In line with Monitoring, Evaluation, and Reporting (MER) indicator reporting by PEPFAR-initiated HIV programs.

  1. System Use—these indicators are used to identify how actively the EHRs is being used based on the amount of data, number of staff using the system, and uptime of the system.

  2. Data Quality—these indicators are used to highlight the proportion and timeliness of relevant clinical data entered into the EHRs. They also capture how accurately the EHRs data reflect the patient’s clinical picture.

  3. Interoperability—given that a major perceived role of EHRs is to improve sharing of health data, these indicators are used to measure maturity level of implemented EHRs to support interoperability.

  4. Reporting—aggregation and submission of reports is a major goal of the implemented EHRs, and these indicators capture how well the EHRs are actively used to support the various reporting needs.

As part of the NGT exercise, the details of each of the indicators were also refined. S3 Appendix presents the detailed EHRs MER document, with the agreed details for each indicator. In this document, we also highlight the changes that were suggested for each indicator as part of the NGT discussions.

Indicator ranking

The score and rank procedure generated a prioritized consensus-based list of EHRs use indicators, with ranks from 1 (highest rated) to 15 (lowest rated). As such, a low average score meant that the particular indicator was on average rated higher by the NGT participants. Table 4 presents the ordered ranking of the indicators as rated by nine of the NGT participants, as one participant was absent during this NGT activity. The Data Entry Statistics and System Uptime indicators were considered the most relevant in determining EHRs usage, while the Reporting Rate indicator was rated as least relevant.

Table 4. Ranking of finalized EHRs use indicators.

Ranking   Indicator Name                Average Score, Mean (SD)
1 Data Entry Statistics 2.78 (2.33)
2 System Uptime 4.56 (5.22)
3 EHR Variable concordance 6.44 (2.80)
4 EHR Variable Completeness 6.56 (3.32)
5 Report Concordance 6.67 (4.66)
6 Staff system use 6.78 (4.64)
7 Clinical Data Timeliness 7.33 (4.61)
8 Report Completeness 7.89 (2.98)
9 Patient Identification 8.00 (4.33)
10 Data exchange 8.67 (4.12)
11 Reporting timeliness 9.00 (3.61)
12 Automatic Reports 10.33 (2.83)
13 Observations 11.56 (4.12)
14 Standardized Terminologies 11.56 (2.87)
15 Reporting Rate 11.89 (2.76)

Indicator implementation sequence

Nine of the 15 indicators were recommended for implementation in the first phase of the indicator tool rollout, while the other six were recommended for Phase 2 (Table 5). The implementation sequence largely aligns with the indicator priority ranking by the participants (Table 4). The indicators proposed for Phase 1 implementation are a blend from the four indicator categories but are dominated by the System Use category.

Table 5. Recommended implementation sequence of the EHRs use indicators.

Implementation sequence   Phase 1                      Phase 2
1                         Data Entry Statistics        Standardized Terminologies
2                         System Uptime                Observations
3                         EHRs data concordance        Automatic Reports
4                         EHRs Data Completeness       Report timeliness
5                         Staff system use             Reporting Rates
6                         Clinical Data Timeliness     Data Exchange
7                         Report Concordance
8                         Reporting Completeness
9                         Patient Identification

Discussion

To the best of our knowledge, this is the first set of systematically developed indicators to evaluate the actual status of EHRs usage once an implementation is in place within LMIC settings. At the completion of the modified NGT process, we identified 15 potential indicators for monitoring and evaluating the status of actual EHRs use. These indicators take into consideration constraints within the LMIC setting, such as system availability, human resource constraints, and infrastructure needs. Ideally, an IS implementation is considered successful if the system is available to the users whenever and wherever it is needed for use [46]. Clear measures of system availability, use, data quality, and reporting capabilities will ensure that decision makers have clear and early visibility into successes and challenges facing system use. Further, the developed indicators allow for aggregation of usage measures to evaluate performance of systems by type, region, facility level, and implementing partner.
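
As a sketch of the aggregation this paragraph envisions, the snippet below (Python/pandas) rolls a hypothetical per-facility indicator table up by region and EHR type; all column names, facilities, and values are illustrative, not data from the study.

```python
import pandas as pd

# Hypothetical monthly indicator results for four facilities.
df = pd.DataFrame({
    "facility": ["A", "B", "C", "D"],
    "region": ["Rift Valley", "Rift Valley", "Nyanza", "Nyanza"],
    "ehr_type": ["KenyaEMR", "IQCare", "KenyaEMR", "IQCare"],
    "system_uptime_pct": [97.1, 88.4, 92.0, 75.5],
})

# Aggregate a usage indicator by region and by system type, as a national
# program might do for dashboards or cross-site comparison.
print(df.groupby("region")["system_uptime_pct"].mean())
print(df.groupby("ehr_type")["system_uptime_pct"].mean())
```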

An important consideration for these indicators is the source of measure data. Most published studies on evaluating the success of information systems focus on IS use indicators or variables such as ease of use, frequency of use, extent of use, and ease of learning, mostly evaluated by means of self-reporting tools (questionnaires and interviews) [19,39,47]. As such, the resulting data can be subjective and prone to bias. We tailored our indicators to ensure that most can be computer-generated through queries, hence incorporating objectivity into the measurement. However, a few of these indicators, such as data entry statistics and those on concordance (variable concordance and report concordance), derive their measure data from facility records in addition to computer log data.
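
As an illustration of an indicator that combines an objective, computer-generated numerator with a facility-record denominator, consider the data entry statistics indicator; the numbers below are hypothetical.

```python
# Numerator: count of patient records in the EHRs for the reporting period
# (e.g. obtained via a database query). Denominator: records in the facility
# paper register for the same period (manually counted). Values are made up.
records_in_ehr = 842
records_in_register = 910

pct_entered = 100 * records_in_ehr / records_in_register
print(f"Data entry statistics: {records_in_ehr}/{records_in_register} = "
      f"{pct_entered:.1f}% of patient records entered")
```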

Although the NGT expert panel was national, we are convinced the emerging results are of global interest. First, we developed the indicators in line with the internationally renowned PEPFAR Monitoring, Evaluation, and Reporting (MER) Indicator Reference Guide [35]. Second, the development process was mainly based on methodological criteria that are valid everywhere [48,49]. Furthermore, the indicators are not system-specific and hence can be used to evaluate usage of other types of EHRs, including other clinical information system implementations such as laboratory, radiology, and pharmacy systems. However, we recognize that differences exist in systems’ database structures; hence, the queries that derive indicator measure data from within each system will need to be customized and system-specific. It is also important to point out that these indicators are not based on real-time measures and can be applied both to point-of-care and non-point-of-care systems.

The selected set of indicators has a high potential to determine the status of EHRs implementations, considering that over 70% of the participants’ ratings on each of the five SMART dimensions were high across all the indicators. Further, the indicator reference guide provides details on how to collect each indicator and the sources of its measure data (S3 Appendix). This diminishes ambiguity regarding the measurability of the indicators. Nonetheless, some of the indicators require countries to define their own thresholds and reporting frequencies. For instance, a country would need to define the acceptable time window within which a clinical encounter should be entered into the EHRs for that encounter to be considered as having been entered in a timely fashion. As such, the indicators and reference guide need to be adapted to the specific country and use context. Despite the low overall ratings of the staff system use and observations indicators (4.4 and 3.8, respectively), they were included in the final list of indicators after consensus-based discussions as part of the NGT exercise. We believe this is due to the indicators’ direct role in determining system usage and the fact that they scored highly in the SMART assessment. Further assessment with a wider group of intermediate system users would be beneficial to estimate the value of the indicators in question before rendering them irrelevant.
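
For example, once a country fixes its timeliness threshold, the clinical data timeliness indicator reduces to a simple computation over encounter dates. The sketch below assumes a 7-day threshold and hypothetical encounter data.

```python
from datetime import date

THRESHOLD_DAYS = 7  # country-defined; assumed here for illustration

# (date of clinical encounter, date the encounter was entered into the EHRs)
encounters = [
    (date(2019, 4, 1), date(2019, 4, 3)),
    (date(2019, 4, 1), date(2019, 4, 15)),
    (date(2019, 4, 2), date(2019, 4, 2)),
]

timely = sum(1 for seen, entered in encounters
             if (entered - seen).days <= THRESHOLD_DAYS)
pct = 100 * timely / len(encounters)
print(f"{pct:.1f}% of encounters entered within {THRESHOLD_DAYS} days")
```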

This study has several limitations. It was based on a multidisciplinary panel of 10 experts, which is adequate for most NGT exercises but still a limited number of individuals who might not reflect all perspectives. On average, 5–15 participants per group are recommended, depending on the nature of the study [50,51]. The low ranking of the Data Exchange and Standardized Terminologies indicators indicates that the participants might have limited knowledge or appreciation of certain domains and their role in enhancing system use. Further, all participants were drawn from one country. Nevertheless, a notable strength was the incorporation of participants from more than one EHRs (the KenyaEMR and IQCare systems) and a diverse set of expertise. In addition, the derived indicators do not assess the "satisfaction of use" dimension outlined in the DeLone & McLean model [39], and future work should extend the indicators to explore this dimension.

A next step in our research is to conduct an evaluation of actual system use status for an information system rolled out nationally, using the developed set of indicators. We will also evaluate the real-world challenges of implementing the indicators and refine them based on the findings. We also anticipate sharing these indicators with a global audience for input, validation, and evaluation. We are cognizant that the indicators and reference guides are living documents, bound to evolve over time given the changing nature of the IS field and the maturing of EHRs implementations.

Conclusion

An NGT approach was used to generate and prioritize a list of consensus-based indicators to assess actual EHRs usage status in Kenya; the indicators are also applicable to other LMICs and similar contexts. This list of indicators allows for monitoring and aggregation of EHRs usage measures to ensure that appropriate and timely actions are taken at institutional, regional, and national levels to assure effective use of EHRs implementations.

Supporting information

S1 Appendix. System usage indicator template.

(PDF)

S2 Appendix. Indicator rating form.

(PDF)

S3 Appendix. Monitoring, Evaluation and Reporting (MER v1.0): Electronic Health Record (EHR) system usage indicator reference guide.

(DOCX)

S1 File

(XLSX)

Acknowledgments

The authors would like to acknowledge the US Centers for Disease Control and Prevention (CDC) for providing input into the candidate set of indicators. We also appreciate the insights and contributions from all the workshop participants drawn from CDC-Kenya, the Kenyan Ministry of Health, Palladium (EHRs development partners), EHRs implementing partners, Moi University, and EHRs users.

Data Availability

All relevant data are within the paper and its Supporting information files.

Funding Statement

Author: MCW. Norwegian Programme for Capacity Development in Higher Education and Research for Development (NORAD: Project QZA-0484), through the HITRAIN program (https://norad.no/en/front/funding/norhed/projects/#&sort=date). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Ludwick DA, Doucette J. Adopting electronic medical records in primary care: Lessons learned from health information systems implementation experience in seven countries. Int J Med Inform. 2009;78(1):22–31. doi:10.1016/j.ijmedinf.2008.06.005
  • 2. Blaya JA, Fraser HSF, Holt B. E-health technologies show promise in developing countries. Health Aff. 2010;29(2):244–51. doi:10.1377/hlthaff.2009.0894
  • 3. Hillestad R, Bigelow J, Bower A, Girosi F, Meili R, Scoville R, et al. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff. 2005;24(5):1103–17.
  • 4. Laudon KC, Laudon JP. Information systems, organizations, and strategy. In: Management Information Systems: Managing the Digital Firm. 2015. p. 81–123.
  • 5. Akanbi MO, Ocheke AN, Agaba PA, Daniyam CA, Agaba EI, Okeke EN, et al. Use of Electronic Health Records in sub-Saharan Africa: Progress and challenges. J Med Trop. 2012;14(1):1–6. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4167769&tool=pmcentrez&rendertype=abstract
  • 6. The U.S. President's Emergency Plan for AIDS Relief: Seventh Annual Report to Congress. 2019.
  • 7. Tierney WM, Sidle JE, Diero LO, Sudoi A, Kiplagat J, Macharia S, et al. Assessing the impact of a primary care electronic medical record system in three Kenyan rural health centers. J Am Med Inform Assoc. 2016;23(3):544–52. doi:10.1093/jamia/ocv074
  • 8. Ngugi PN, Gesicho MB, Babic A, Were MC. Assessment of HIV Data Reporting Performance by Facilities During EMR Systems Implementations in Kenya. Stud Health Technol Inform. 2020;272:167–70. doi:10.3233/SHTI200520
  • 9. Zlabek JA, Wickus JW, Mathiason MA. Early cost and safety benefits of an inpatient electronic health record. J Am Med Inform Assoc. 2011;18(2):169–72. doi:10.1136/jamia.2010.007229
  • 10. Singer A, Yakubovich S, Kroeker AL, Dufault B, Duarte R, Katz A. Data quality of electronic medical records in Manitoba: Do problem lists accurately reflect chronic disease billing diagnoses? J Am Med Inform Assoc. 2016;23(6):1107–12. doi:10.1093/jamia/ocw013
  • 11. Wang SJ, Middleton B, Prosser LA, Bardon CG, Spurr CD, Carchidi PJ, et al. A cost-benefit analysis of electronic medical records in primary care. Am J Med. 2003;114(5):397–403. doi:10.1016/s0002-9343(03)00057-3
  • 12. Odekunle FF, Odekunle RO, Shankar S. Why sub-Saharan Africa lags in electronic health record adoption and possible strategies to increase its adoption in this region. Int J Health Sci (Qassim). 2017;11(4):59–64. Available from: http://www.ncbi.nlm.nih.gov/pubmed/29085270
  • 13. Ngugi P, Were MC, Babic A. Facilitators and Barriers of Electronic Medical Records Systems Implementation in Low Resource Settings: A Holistic View. Stud Health Technol Inform. 2018;251:187–90.
  • 14. Khalifa M. Barriers to health information systems and electronic medical records implementation: A field study of Saudi Arabian hospitals. Procedia Comput Sci. 2013;21:335–42. doi:10.1016/j.procs.2013.09.044
  • 15. Farzianpour F, Amirian S, Byravan R. An Investigation on the Barriers and Facilitators of the Implementation of Electronic Health Records (EHR). Health (Irvine Calif). 2015;7:1665–70.
  • 16. Sood SP, Nwabueze SN, Mbarika VWA, Prakash N, Chatterjee S, Ray P, et al. Electronic medical records: A review comparing the challenges in developed and developing countries. In: Proceedings of the Annual Hawaii International Conference on System Sciences. 2008.
  • 17. Jawhari B, Ludwick D, Keenan L, Zakus D, Hayward R. Benefits and challenges of EMR implementations in low resource settings: A state-of-the-art review. BMC Med Inform Decis Mak. 2016;16(1):1–12. doi:10.1186/s12911-016-0354-8
  • 18. Abraham C, Junglas I. From cacophony to harmony: A case study about the IS implementation process as an opportunity for organizational transformation at Sentara Healthcare. J Strateg Inf Syst. 2011;20(2):177–97. doi:10.1016/j.jsis.2011.03.005
  • 19. Landis-Lewis Z, Manjomo R, Gadabu OJ, Kam M, Simwaka BN, Zickmund SL, et al. Barriers to using eHealth data for clinical performance feedback in Malawi: A case study. Int J Med Inform. 2015;84(10):868–75.
  • 20. Zviran M, Erlich Z. Measuring IS User Satisfaction: Review and Implications. Commun Assoc Inf Syst. 2003;12:81–103. Available from: http://aisel.aisnet.org/cais/vol12/iss1/5
  • 21. Boonstra A, Broekhuis M. Barriers to the acceptance of electronic medical records by physicians from systematic review to taxonomy and interventions. BMC Health Serv Res. 2010;10:231. doi:10.1186/1472-6963-10-231
  • 22. Barkhuysen P, De Grauw W, Akkermans R, Donkers J, Schers H, Biermans M. Is the quality of data in an electronic medical record sufficient for assessing the quality of primary care? J Am Med Inform Assoc. 2014;21(4):692–8.
  • 23. Kihuba E, Gheorghe A, Bozzani F, English M, Griffiths UK. Opportunities and challenges for implementing cost accounting systems in the Kenyan health system. Glob Health Action. 2016;9. Available from: https://www.tandfonline.com/doi/full/10.3402/gha.v9.30621
  • 24. Delone WH, Mclean ER. Information Systems Success Measurement. Found Trends Inf Syst. 2016;2(1):1–116.
  • 25. Van Der Meijden MJ, Tange HJ, Troost J, Hasman A. Determinants of Success of Inpatient Clinical Information Systems: A Literature Review. J Am Med Inform Assoc. 2003;10(3):235–43. doi:10.1197/jamia.M1094
  • 26. Erlirianto LM, Ali AHN, Herdiyanti A. The Implementation of the Human, Organization, and Technology-Fit (HOT-Fit) Framework to Evaluate the Electronic Medical Record (EMR) System in a Hospital. Procedia Comput Sci. 2015;72:580–7. doi:10.1016/j.procs.2015.12.166
  • 27. Ammenwerth E, Gräber S, Herrmann G, Bürkle T, König J. Evaluation of health information systems—Problems and challenges. Int J Med Inform. 2003;71(2–3):125–35. doi:10.1016/s1386-5056(03)00131-x
  • 28. Heeks R. Health information systems: Failure, success and improvisation. Int J Med Inform. 2006;75(2):125–37. doi:10.1016/j.ijmedinf.2005.07.024
  • 29. Prijatelj V. Success factors of hospital information system implementation: What must go right? Stud Health Technol Inform. 1999;68:197–200.
  • 30. Seddon PB. A Respecification and Extension of the DeLone and McLean Model of IS Success. Inf Syst Res. 1997;8(3):240–53.
  • 31. Yusof MM, Paul RJ, Stergioulas LK. Towards a Framework for Health Information Systems Evaluation. In: Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06). 2006. p. 95a. http://ieeexplore.ieee.org/document/1579480/
  • 32. Cuellar MJ, McLean ER, Johnson RD. The measurement of information system use: Primary considerations. In: Proceedings of the 2006 ACM SIGMIS CPR Conference on Computer Personnel Research. 2006. p. 164–8. http://dl.acm.org/citation.cfm?id=1125170.1125214
  • 33. Szajna B. Determining information system usage: Some issues and examples. Inf Manag. 1993;25(3):147–54.
  • 34. Eslami Andargoli A, Scheepers H, Rajendran D, Sohal A. Health information systems evaluation frameworks: A systematic review. Int J Med Inform. 2017;97:195–209. doi:10.1016/j.ijmedinf.2016.10.008
  • 35. PEPFAR. Monitoring, Evaluation, and Reporting (MER 2.0) Indicator Reference Guide. 2017.
  • 36. Straub D, Limayem M, Karahanna-Evaristo E. Measuring System Usage: Implications for IS Theory Testing. Manage Sci. 1995;41(8):1328–42. Available from: http://pubsonline.informs.org/doi/abs/10.1287/mnsc.41.8.1328
  • 37. Boland MR, Trembowelski S, Bakken S, Weng C. An Initial Log Analysis of Usage Patterns on a Research Networking System. Clin Transl Sci. 2012;5(4):340–7. doi:10.1111/j.1752-8062.2012.00418.x
  • 38. Iivari J. An empirical test of the DeLone–McLean model of information system success. ACM SIGMIS Database. 2005;36(2):8–27. Available from: http://portal.acm.org/citation.cfm?doid=1066149.1066152
  • 39. DeLone WH, McLean ER. Information systems success revisited. In: Proceedings of the 35th Hawaii International Conference on System Sciences (HICSS). 2002. p. 238–49. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=994345
  • 40. Bours D. A good start with S.M.A.R.T. (indicators). Adaptation and Resilience M&E. 2014.
  • 41. Delp P, Thesen A, Motiwalla J, Seshadri N. Nominal Group Technique. In: Systems Tools for Project Planning. Bloomington, Indiana: International Development Institute; 1977.
  • 42. Gallagher M, Hares T, Spencer J, Bradshaw C, Webb I. The nominal group technique: A research tool for general practice? Fam Pract. 1993;10(1):76–81. doi:10.1093/fampra/10.1.76
  • 43. Delbecq AL, Van de Ven AH, Gustafson DH. Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes. Glenview, Illinois: Scott, Foresman and Company; 1975.
  • 44. I-TECH. Health Information Systems in Kenya. 2019 [cited 2019 Jan 18]. www.go2itech.org/2017/08/health-information-systems-in-kenya/
  • 45. Palladium Group International. 2018 [cited 2018 Oct 20]. https://en.m.wikipedia.org/wiki/Palladium_International
  • 46. Xu J, Quaddus M. Managing Information Systems: Ten Essential Topics. 2013. p. 27–41. http://link.springer.com/10.2991/978-94-91216-89-3
  • 47. Dirksen CD. Nominal group technique to select attributes for discrete choice experiments: an example for drug treatment choice in osteoporosis. 2019.
  • 48. Berhe M, Tadesse K, Berhe G, Gebretsadik T. Evaluation of Electronic Medical Record Implementation from User's Perspectives in Ayder Referral Hospital, Ethiopia. J Health Med Inform. 2017;8(1):1–13. Available from: https://www.omicsonline.org/open-access/evaluation-of-electronic-medical-record-implementation-from-usersperspectives-in-ayder-referral-hospital-ethiopia-2157-7420-1000249.php?aid=85647
  • 49. Despont-Gros C, Mueller H, Lovis C. Evaluating user interactions with clinical information systems: A model based on human-computer interaction models. J Biomed Inform. 2005;38(3):244–55. doi:10.1016/j.jbi.2004.12.004
  • 50. Harvey N, Holmes CA. Nominal group technique: An effective method for obtaining group consensus. Int J Nurs Pract. 2012;18(2):188–94. doi:10.1111/j.1440-172X.2012.02017.x
  • 51. Lennon R, Glasper A, Carpenter D. Nominal Group Technique: Its utilisation to explore the rewards and challenges of becoming a mental health nurse, prior to the introduction of the all graduate nursing curriculum in England. Working Papers in Health Sciences 1:2, ISSN 2051-6266. 2012. http://www.southampton.ac.uk/assets/centresresearch/documents/wphs/NominalGroupTechnique.pdf

Decision Letter 0

Chaisiri Angkurawaranon

10 Dec 2020

PONE-D-20-26805

Development of standard indicators to assess use of electronic health record systems implemented in low-and medium-income countries

PLOS ONE

Dear Dr. Ngugi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 21 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Chaisiri Angkurawaranon

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1.) Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.) Thank you for including your ethics statement:  "The study obtained written approval by the Institutional Review and Ethics Committee at Moi University, Eldoret (MU/MTRH-IREC approval Number FAN:0003348).".   

Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

3.) We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this manuscript. In this manuscript the authors describe a study to develop and validate a set of electronic health record systems use indicators for LMICs. This is a well-constructed and reported study with sound methodology. The reported consensus-based set of indicators will be of interest to a broad, international readership. Further, the manuscript is well supported by the provision of a comprehensive indicator reference guide in the supplementary material.

Reviewer #2: This manuscript addresses an interesting topic, the evaluation of the implementation of EHRs in Low- or Middle-income countries (LMICs). It relies on a consensus method to define 15 indicators and a timeline to use them.

This paper seems to me to ask the following questions:

- The choice of the 14 initial indicators seems to be inspired by the well-known analytical framework of DeLone & McLean. One point of this model is to emphasize the "satisfaction of use" dimension. However, this does not appear in the article, the 'use' and 'ease of use' dimensions being the only ones mentioned. This is a problem when you consider that part of the issues lies in user acceptability, regardless of the architectural and technical quality of the EHR. This is emphasized in the introduction. Authors should position themselves more clearly on this point in the introduction and discussion parts.

- The description of the participants is a key point in these consensus methods. The number of participants is quite low, coming from a single country if we refer to what is specified in the limit part. It is mentioned they have various expertise in this part. The description of the participants and their profile should be further developed in the method section.

- The interest of the analysis of variability in rating based on chi-square test is not explained.

- The current development context of EHR in these countries is not sufficiently described. It would be important to specify whether the EHR has been already developed, in part or completely, its objectives of use (accountability, coordination between professionals), and its relationship with a paper sheet record. Understanding the maturity of EHRs' implementation remains a key point, several countries facing this question until recently (Couralet M, et al. Method for developing national quality indicators based on manual data extraction from medical records. BMJ Quality and Safety. 2012;22(2):155–162).

- Two indicators are rated very low but are nevertheless selected, which suggests that the rating had no influence. It is important to justify why they were selected:

Indicator           Ratings 1-10 (counts)            N    Mean
Staff system use    0  0  3  0  5  1  0  0  0  0     9    4.4
Observations        1  2  2  3  0  0  1  1  0  0    10    3.8

These questions deserve to be addressed to improve the manuscript.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jan 11;16(1):e0244917. doi: 10.1371/journal.pone.0244917.r002

Author response to Decision Letter 0


17 Dec 2020

Dear Academic Editor,

Re: Manuscript PONE-D-20-26805: Development of standard indicators to assess use of electronic health record systems implemented in low-and medium-income countries.

We appreciate the review by the PLOS ONE Journal Scientific Program Committee of our manuscript entitled “Development of standard indicators to assess use of electronic health record systems implemented in low-and medium-income countries” and are grateful for the opportunity to respond comprehensively to the reviewers’ comments.

Please find our responses to all the comments by the reviewers below:

Reviewers’ comments

Reviewer 1:

Thank you for the opportunity to review this manuscript. In this manuscript the authors describe a study to develop and validate a set of electronic health record systems use indicators for LMICs. This is a well-constructed and reported study with sound methodology. The reported consensus-based set of indicators will be of interest to a broad, international readership. Further, the manuscript is well supported by the provision of a comprehensive indicator reference guide in the supplementary material.

We appreciate the positive comments on our work and manuscript.

Reviewer 2:

This manuscript addresses an interesting topic, the evaluation of the implementation of EHRs in Low or Middle-income countries (LMICs). It relies on a consensus method to define 15 indicators and a timeline to use them.

This paper raises the following questions for me:

1. The choice of the 14 initial indicators seems to be inspired by the well-known analytical framework of DeLone & McLean. One point of this model is its emphasis on the "satisfaction of use" dimension. However, this does not appear in the article; only the 'use' and 'ease of use' dimensions are mentioned. This is a problem when one considers that part of the issue lies in user acceptability, regardless of the architectural and technical quality of the EHR, a point emphasized in the introduction. The authors should position themselves more clearly on this point in the introduction and discussion sections.

We appreciate the reviewer pointing out the need to include the “satisfaction of use” dimension, which is one of the constructs of the D&M IS success model. Satisfaction of use is an important dimension to evaluate, and we agree that systems should be designed to allow for feedback on user satisfaction. However, this component was outside the scope of this project, which focused on actual system use components. In response to the reviewer’s comment, we have added the following sentence in the Discussion:

“In addition, the derived indicators do not assess the ‘satisfaction of use’ dimension outlined in the DeLone & McLean model [39], and future work should extend the indicators to explore this dimension.” – Lines 360-362.

2. The description of the participants is a key point in these consensus methods. The number of participants is quite low, and they come from a single country, judging from what is specified in the limitations section, where it is also mentioned that they have varied expertise. The description of the participants and their profiles should be further developed in the methods section.

We have taken note of the comments. We have added further description of the participants as follows:

The NGT participants’ average age was 40 years, and the majority were male (69%). The participants included: the researchers, acting as facilitators; one qualitative researcher (an associate professor and lecturer); two MoH representatives from the Division of Health Informatics and M&E (health information systems management experts); one Service Development Partners (SDPs) representative, who oversees EHRs implementations and the training of users; four users of the EHRs (two clinical officers and two health records information officers); one CDC funding agency representative (an informatics service fellow in Health Information Systems); and two representatives from the EHRs development and implementing partners (Palladium and the International Training and Education Center for Health (I-TECH)), who have been involved in the EHRs implementations and who selected the sites for implementation.

The manuscript was updated accordingly (lines 145-155).

3. The rationale for the analysis of variability in ratings based on a chi-square test is not explained.

We appreciate and agree with the reviewer’s comment. We have added an explanation on this as follows:

“The variability between the SMART dimensions and the ratings was tested using a chi-square test, since the parameters under investigation were categorical variables (non-parametric data).”

The manuscript was updated accordingly (lines 226-227).
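As an illustration only (the counts below are entirely hypothetical, and this is not the authors' analysis), the test described above amounts to a chi-square test of independence on a contingency table of SMART dimension by Likert rating category, which can be run with SciPy:

```python
# Illustrative sketch with hypothetical counts (not the authors' data):
# rows are SMART dimensions, columns are 5-point Likert rating categories,
# and each cell counts how many indicator ratings fell in that category.
from scipy.stats import chi2_contingency

ratings = [
    [0, 1, 2, 6, 5],  # Specific
    [0, 0, 3, 7, 4],  # Measurable
    [1, 1, 4, 5, 3],  # Achievable
    [0, 2, 2, 6, 4],  # Relevant
    [1, 2, 3, 5, 3],  # Time-bound
]

# Chi-square test of independence: is the rating distribution the same
# across the SMART dimensions?
chi2, p, dof, expected = chi2_contingency(ratings)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

A large p-value here would be consistent with no detectable variability in ratings across the SMART dimensions.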

4. The current development context of EHRs in these countries is not sufficiently described. It would be important to specify whether the EHRs have already been developed, in part or completely, their intended uses (accountability, coordination between professionals), and their relationship with paper-based records. Understanding the maturity of EHRs implementation remains a key point, as several countries have faced this question until recently (Couralet M, et al. Method for developing national quality indicators based on manual data extraction from medical records. BMJ Quality and Safety. 2012;22(2):155-162).

We appreciate the reviewer pointing out the need to add more literature on EHRs in the context of our study. In the introduction of the manuscript, we mention that implementations of EHRs exist and have continued to grow, and we describe the driving factors, but note that their success has not been investigated. We have, however, added an example of implementations in Kenya as follows:

“For example, Kenya has had over 1000 electronic medical systems (EMRs) implementations progressively since 2012 in both private and public facilities supporting patient data management mainly in HIV care and treatment.” (Introduction, paragraph 1, Line 52)

5. Two indicators are rated very low but are nevertheless selected, which suggests that the rating had no influence. It is important to justify why they were selected:

Indicator           Ratings 1-10 (counts)            N    Mean
Staff system use    0  0  3  0  5  1  0  0  0  0     9    4.4
Observations        1  2  2  3  0  0  1  1  0  0    10    3.8

These questions deserve to be addressed to improve the manuscript.

We have taken note of the comments. We have added an explanation on this as follows:

“Despite the low overall ratings of the staff system use and observations indicators (4.4 and 3.8, respectively), they were included in the final list of indicators after consensus-based discussions as part of the NGT exercise. We believe this is due to the indicators’ direct role in determining system usage and the fact that they scored highly in the SMART assessment. Further assessment with a wider group of intermediate system users would be beneficial to estimate the value of the indicators in question before deeming them irrelevant.”

We have updated the manuscript accordingly (lines 346-351).
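For readers cross-checking the table above: each overall rating is simply the count-weighted mean of the rating values, assuming (as the counts imply) a scale running from 1 to 10. A minimal sketch, offered as an editorial illustration rather than the authors' code:

```python
# Editorial illustration: recover the overall ratings in the table above
# from the per-value counts. counts[i] holds the number of participants
# who gave the rating value i + 1 (scale assumed to run from 1 to 10).
def mean_rating(counts):
    n = sum(counts)
    return sum((i + 1) * c for i, c in enumerate(counts)) / n

staff_system_use = [0, 0, 3, 0, 5, 1, 0, 0, 0, 0]  # N = 9
observations     = [1, 2, 2, 3, 0, 0, 1, 1, 0, 0]  # N = 10

print(round(mean_rating(staff_system_use), 1))  # -> 4.4
print(round(mean_rating(observations), 1))      # -> 3.8
```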

Editors comments:

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

We have followed the provided guidelines and conformed to the manuscript style requirements.

2. Thank you for including your ethics statement: "The study obtained written approval by the Institutional Review and Ethics Committee at Moi University, Eldoret (MU/MTRH-IREC approval Number FAN:0003348)."

Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”).

For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research.

We have taken note of the comments. We have amended the ethics statement in the methods section of the manuscript as follows:

“All participants filled in a printed consent form before taking part in the study.” (lines 156-157)

We also added the text in the Ethics statement field of the submission form.

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

We appreciate your clarification on data availability. We have revised the cover letter to include this as guided. We have also uploaded the study data as supporting information (S4_File.xlsx).

Thank you once again for considering our manuscript in PLOS ONE.

Sincerely,

Philomena Ngugi

waruharip@gmail.com

Corresponding author

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Chaisiri Angkurawaranon

21 Dec 2020

Development of standard indicators to assess use of electronic health record systems implemented in low-and medium-income countries

PONE-D-20-26805R1

Dear Dr. Ngugi,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Chaisiri Angkurawaranon

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Chaisiri Angkurawaranon

23 Dec 2020

PONE-D-20-26805R1

Development of standard indicators to assess use of electronic health record systems implemented in low-and medium-income countries

Dear Dr. Ngugi:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Chaisiri Angkurawaranon

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. System usage indicator template.

    (PDF)

    S2 Appendix. Indicator rating form.

    (PDF)

    S3 Appendix. Monitoring, Evaluation and Reporting (MER v1.0): Electronic Health Record (EHR) system usage indicator reference guide.

    (DOCX)

    S1 File

    (XLSX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    All relevant data are within the paper and its Supporting information files.

