Abstract
Objective
Toolkits are an important knowledge translation strategy for implementing digital health. We studied how toolkits for the implementation and evaluation of digital health were developed, tested, and reported.
Materials and Methods
We conducted a systematic review of toolkits that had been used, field tested, or evaluated in practice and published in English from 2009 to July 2019. We searched several electronic literature sources to identify both peer-reviewed and gray literature, and records were screened as per systematic review conventions.
Results
Thirteen toolkits were included, all of which were developed in North America, Europe, or Australia. All reported their intended purpose and their development process. Eight of the 13 toolkits involved a literature review, 3 did not, and 2 were unclear. Twelve reported an underlying conceptual framework, theory, or model: 3 cited normalization process theory and 3 others cited the World Health Organization and International Telecommunication Union eHealth Strategy. Seven toolkits were reportedly evaluated, but detailed evaluation results were available for only 3. Forty-three otherwise relevant toolkits were excluded for lack of field testing.
Discussion
Despite a plethora of published toolkits, few were tested, and even fewer were evaluated. Methodological rigor was of concern, as several did not include an underlying conceptual framework, literature review, or evaluation and refinement in real-world settings. Reporting was often inconsistent and unclear, and toolkits rarely reported being evaluated.
Conclusion
Greater attention needs to be paid to rigor and reporting when developing, evaluating, and reporting toolkits for implementing and evaluating digital health so that they can effectively function as a knowledge translation strategy.
Keywords: digital health, toolkit, framework, implementation, evaluation, eHealth
INTRODUCTION
The 2018 World Health Assembly Resolution 71.7 on digital health recognized the role of digital technologies in achieving universal health coverage and other targets of the Sustainable Development Goals, urging stakeholders “to assess their use of digital technologies for health… and to prioritise [their] development, evaluation, implementation, scale-up and greater use…”1 To this end, toolkits are being increasingly used to guide the implementation and evaluation of digital health interventions and systems. Toolkits are a knowledge translation (KT) strategy used to communicate messages or share decision aids, tools, or goods to improve health, educate, or change practice or behavior among diverse populations—including patients, carers, clinical and managerial health professionals, policymakers, community and health organizations—and the health system.2
In 2012, the World Health Organization (WHO) and International Telecommunication Union (ITU) developed the National eHealth Strategy Toolkit, which provides guidance to governments on the tools and processes to be considered in the development, implementation, monitoring, and evaluation of a national strategy.3 This WHO-ITU toolkit has prompted many public and private organizations, including global aid agencies, to design domain-specific toolkits, interactively mapping and addressing the digital health strategy, universal health coverage, and Sustainable Development Goals. Examples of such toolkits are the COCIR eHealth Toolkit by the European Coordination Committee of the Radiological, Electromedical and Healthcare IT Industry4 and the Global Digital Health Index Indicator Guide.5 However, little is known about how successfully these toolkits have been used. Producing such toolkits requires significant resources, underscoring the need for them to be of adequate quality and to have evidence from user and utility testing and evaluation that they actually “work” in the real world. The WHO Collaborating Centre for eHealth (AUS-135) was tasked with conducting a systematic review on implementation and evaluation of toolkits for digital health, given the emerging evidence and lack of syntheses on this topic. The review is the first part of a wider project to support WHO activities in digital health strategy development, implementation, capacity building, and evaluation.6
Background
A 2014 scoping review of toolkits used to disseminate health knowledge or support practice change found that only 31 (37%) of the 83 toolkits included had been evaluated to any extent.2 The majority (70%) did not specify the evidence base from which they drew, and their effectiveness as a KT strategy was rarely assessed. To truly inform health and health care, toolkits should include comprehensive descriptions of their content, be explicit regarding content that is evidence based, and include an evaluation of their effectiveness as a KT strategy, addressing both clinical and implementation outcomes.2 This message is reinforced by a 2015 systematic review that concluded that toolkits should be informed by high-quality evidence and theory, and should be evaluated using rigorous study designs to explain the factors underlying their effectiveness and successful implementation.7 A recent qualitative study of clinic and community perceptions of a general intervention toolkit asserted that “unless the toolkit is used, it won’t help solve the problem.” The authors recommended that studies be conducted to determine when and how toolkits are used, and that funders, policymakers, researchers, and leaders in primary care and public health allocate resources to foster toolkit development, testing, implementation, and evaluation.8
A cursory search of the literature found no reviews of digital health toolkits, including those for implementation or evaluation. To the best of our knowledge, there are currently no defined standards or critical appraisal checklists for measuring or evaluating the quality of digital health toolkits.
Defining key terms
The Agency for Healthcare Research and Quality (AHRQ) defines a toolkit as “a collection of related information, resources, or tools that together can guide users to develop a plan or organise efforts to follow evidence-based recommendations or meet evidence-based specific practice standards.”9 The AHRQ also defines a tool as “an instrument (e.g., survey, guidelines, or checklist) that helps users accomplish a specific task that contributes to meeting a specific evidence-based recommendation or practice standard.” As a KT strategy, toolkits should be concise, understandable and clearly focused, to enable the user to understand and operationalize or evaluate digital health strategies through a clear step-by-step process.
As defined by the WHO Global Strategy on Digital Health 2020-2025, digital health is “the field of knowledge and practice associated with the development and use of digital technologies to improve health.”10 It is “a broad umbrella term encompassing eHealth (which includes mHealth)” and “expands the concept of eHealth to include digital consumers, with a wider range of smart and connected devices, such as the Internet of Things, advanced computing, big data analytics, artificial intelligence (AI) including machine learning, and robotics.”10
Review objectives
For toolkits to function as a robust and trustworthy KT strategy, toolkits should (1) be developed with adequate methodological rigor and (2) transparently report the process of their development, testing, and refinement.
This review focused on knowledge-to-action digital health toolkits, examining the rigor and reporting of their development processes, their intended use, and their level of application. We also considered whether toolkit evaluations provided evidence for their effectiveness as a KT strategy, and thus identified determinants of successful toolkits. The review addressed the following questions about toolkits for implementation and evaluation of digital health that have been practically used, tested, or evaluated in any way:
What is the purpose of the toolkit?
Who is it meant to be used by?
Which operational level is it intended to be applied at?
Is the toolkit development process methodologically rigorous and adequately reported?
Is the toolkit informed by a literature review or evidence synthesis?
Is the toolkit based on an underlying conceptual framework?
Is the toolkit informed by expert consensus?
Has the toolkit been evaluated?
Has the toolkit reported the process of its development, testing, evaluation, and refinement?
MATERIALS AND METHODS
This systematic review was conducted from March 2019 to May 2020. Our methods were informed by the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, and modified to suit this review’s meta-research objectives.11
Search methods and strategy
We searched several electronic databases for literature published from 2009 to July 2019, including Global Health, Scopus, ProQuest, Web of Science, and PubMed. Relevant gray literature (eg, technical reports, dissertations, patents, meeting reports, annual reports, government publications) was also identified using Google Advanced Search, and the first 10 pages of relevance-sorted results were perused for relevant material. Other highly relevant sources in the public domain that were specifically searched included: WHO Institutional Repository for Information Sharing, Digital Impact Alliance,12 Asian Development Bank, Asia eHealth Information Network, MEASURE Evaluation and Health Data Collaborative, Digital Square Global Goods,13 Health Metrics Network, AHRQ, COCIR, Joint Learning Network, Healthcare Information and Management Systems Society, U.S. Office of the National Coordinator, and The Open Group. Gray literature is being identified on an ongoing basis, as part of the WHO Collaborating Centre activities.6 Records were pooled using Covidence to create a single literature corpus, and duplicates were removed.14 The search strategy comprised groups of similar search terms combined into strings using Boolean operators, for example, (toolkit OR tool) AND (eHealth OR mhealth OR digital health) AND (evaluation OR assessment). The search strings are listed in Supplementary Appendix 1.
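As a minimal illustrative sketch (not part of the review protocol), the structure of the example search string above can be expressed programmatically; the `build_query` helper is hypothetical:

```python
def build_query(*groups):
    """Join synonyms within each group with OR, then combine groups with AND."""
    return " AND ".join("(" + " OR ".join(terms) + ")" for terms in groups)

# Term groups taken from the example in the text
query = build_query(
    ["toolkit", "tool"],
    ["eHealth", "mhealth", "digital health"],
    ["evaluation", "assessment"],
)
print(query)
# (toolkit OR tool) AND (eHealth OR mhealth OR digital health) AND (evaluation OR assessment)
```

The same pattern generalizes to the full strings listed in Supplementary Appendix 1, with database-specific field tags added as needed.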
Screening process
Search results were pooled in an EndNote X7 library (Clarivate, Philadelphia, PA) for reference management and exported to Covidence for screening. Duplicates were removed using both EndNote X7 and Covidence. All records pertaining to the same toolkit and its testing or evaluation were grouped together. Records were screened for inclusion in 2 stages: (1) title and abstract and (2) full text. Using Covidence, each record was screened by 2 reviewers working independently, choosing to include or exclude records based on selection criteria. During title and abstract screening, records were excluded or progressed to full-text screening if both reviewers agreed. During full-text screening, records were excluded or included if both reviewers agreed. Disagreements between reviewers at either screening stage were resolved through arbitration and consensus with a third reviewer.
Selection criteria
We included any toolkit for the implementation and evaluation of digital health that had been used, field tested, or evaluated in practice. We excluded non-English literature, conference abstracts, reviews, practice guidelines, and checklists. Conceptual frameworks are sometimes described as tools or toolkits because they are used to guide approaches to implementation and evaluation; we restricted our selection to the AHRQ instrumental definitions of tool and toolkit and excluded conceptual frameworks from this review.
Data extraction and quality appraisal
A data extraction template was developed to capture data of relevance to the review’s questions. Two independent reviewers extracted data from each full text, and all authors reviewed completed data extraction sheets and resolved disagreements through discussion and consensus. As there are currently no critical appraisal checklists for toolkit quality, during data extraction we captured any available information regarding toolkit development methods to serve as an indicator of methodological rigor. Specifically, this included the conduct of a literature review, the incorporation of expert consensus, and the use of toolkit field testing (eg, pilot testing, use, evaluation). Recognizing that there are currently no reporting criteria for toolkits, we extracted the reporting characteristics of toolkits that reported their development processes, an approach informed by our prior work on reporting completeness.15 The details captured by the data extraction form fields are outlined in Table 1.
Table 1.
Data extraction form fields and description
Field | Description |
---|---|
Toolkit | What is the toolkit called? |
Year | In which year was the toolkit published? |
Organization(s) | Which organization(s) developed the toolkit? |
Summary (rationale and scope) | A brief summary of the toolkit's rationale, purpose, aim/scope, and tools/components, if described. |
Purpose: Implementation | Is the toolkit intended for use in implementation? |
Purpose: Evaluation | Is the toolkit intended for use in evaluation? |
Target user group | Who are the target user group, as specified by the toolkit itself? |
Toolkit development overview | Description of the stages and processes involved in toolkit development |
Reported process | Did the toolkit report its development process? |
Literature review | Did the development process include a literature review or evidence synthesis? |
Conceptual framework | Did the development process include a conceptual framework? |
Expert consensus | Did the development process include expert consensus? |
Field testing | Did the development process include field testing? |
Evaluation | Was the toolkit evaluated? |
Organizational level | Which organizational level does the toolkit address? (ie, the national health system level, health facility level, or operational/staff/user level). |
RESULTS
Of the 1473 records sourced from searches of electronic databases, gray literature, and perusal of relevant journals, 138 duplicates were removed and 1335 records were screened at the title and abstract stage (Figure 1). Of these, 1231 records were excluded and 104 articles proceeded to full-text screening. During full text screening, 88 full-text articles were excluded (reasons for these are outlined in Figure 1) and 16 full-text articles (13 toolkits) were included for data extraction.
Figure 1.
Flow diagram of study selection process. Ab: Abstract; DH: digital health; DHI: digital health implementation; Ti: Title.
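The selection counts above can be checked with a few lines of arithmetic (an illustrative sketch, not part of the review methods):

```python
# Check of the study-selection flow reported in the text and Figure 1
sourced = 1473
duplicates = 138
screened = sourced - duplicates            # title-and-abstract stage
assert screened == 1335

excluded_ti_ab = 1231
full_text = screened - excluded_ti_ab      # records advancing to full-text screening
assert full_text == 104

excluded_full_text = 88
included = full_text - excluded_full_text  # full-text articles included (13 toolkits)
assert included == 16
```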
The earliest toolkit we included was developed in 2010, with more being developed each subsequent year (Table 2). All toolkits reported their intended purpose: 7 were for digital health implementation, 11 for digital health evaluation, and 5 addressed both. While toolkits for use at the national or sectoral level were developed by international agencies, the U.S. Agency for International Development developed most of the toolkits for health organization use. Almost all the toolkits were developed in North America or Europe; the exception was one from Australia.
Table 2.
Characteristics of included toolkits (listed chronologically)
Year | Toolkit, Case Examples, and Evaluation | Institution | Purpose | Summary |
---|---|---|---|---|
2010-2019 | Performance of Routine Information System Management (PRISM) Toolkit16–18 | MEASURE Evaluation | Implementation and evaluation | Conceptual framework and associated data collection and analysis tools to assess, design, strengthen, and evaluate RHIS |
2011 | eHealth Implementation toolkit19,20 | eHealth Unit, University College London | Implementation and evaluation | Toolkit to enable senior staff to analyze challenges likely to arise when implementing an e-health initiative. Intended to promote critical thinking, not to replace it (ie, not a “tick-box” tool). |
2012 | National eHealth Strategy Toolkit3,21–24 | WHO-ITU | Implementation | Comprehensive practical guidance for development of a national eHealth strategy, action plan and monitoring framework. |
2015 | mHealth Health Assessment and Planning for Scale (MAPS) toolkit25,26 | WHO | Implementation and evaluation | Comprehensive self-assessment and planning guide to improve the potential for scaling up and achieving long-term sustainability in mHealth |
2016 | HIS assessment support tool27,28 | WHO Regional Office for Europe | Evaluation | European-specific version of the National HIS Assessment toolkit,29 which uses the HMN framework to guide HIS evaluation, for the achievement of HMN goals. |
2017 | Evaluating Person-Centred Digital Health and Wellness at Scale30 | Computer and Information Sciences, University of Strathclyde | Implementation and evaluation | Flexible toolkit used to evaluate an evolving large scale, national digital health project. |
2017 | Informatics Capability Maturity Toolkit31 | Academic GP Unit, UNSW Medicine | Evaluation | Assisted self-assessment tool to reflect on and document the informatics capability maturity of a health facility.
2018 | Routine HIS Rapid Assessment Tool32,33 | MEASURE Evaluation | Evaluation | Toolkit for assisting health information system managers identify RHIS gaps using global standards to identify where resources should be invested for system strengthening. |
2018 | eHealth Literacy Assessment Toolkit34,35 | Department of Public Health, University of Copenhagen | Implementation and evaluation | Toolkit for assessing individuals’ health literacy and digital literacy across 7 dimensions, using a mix of existing and newly developed scales. |
2019 | Coordinating Digital Transformation Toolkit36–40 | Digital Square, PATH | Implementation | Toolkit for implementing practical strategies, enabling approaches and best practices for successfully coordinating the digital health sector.
2019 | Global Digital Health Index46 | Global Digital Health Index | Evaluation | Index for tracking, monitoring, and evaluating progress in digital health technology at the country level across the 7 components of the WHO-ITU eHealth strategy framework.
2019 | HIS Interoperability Maturity (HISIM) Toolkit41–43 | Health Data Collaborative and MEASURE Evaluation | Evaluation | Self-administered toolkit designed to monitor, evaluate, and report on domains and components required for a country’s digital HIS to exchange data (interoperate) with other health systems, to inform a plan for a strong, responsive, and sustainable national HIS. |
2019 | HIS Stages of Continuous Improvement (HISSCI) Toolkit44,45 | Health Data Collaborative and MEASURE Evaluation | Evaluation | Toolkit to help countries or organizations holistically assess, plan, and prioritize interventions and investments to strengthen an HIS. |
HIS: health information system; HMN: Health Metrics Network; ITU: International Telecommunication Union; mHealth: mobile health; RHIS: routine health information system; WHO: World Health Organization.
Methods of development
All 13 included toolkits reported their development process (Table 3), and all development processes involved expert consensus. Eight of the 13 toolkits reported that their development process involved a literature review, while 3 did not and a further 2 were unclear. Twelve reported basing the toolkit on an underlying conceptual framework, theory, or model: of these, 3 cited the use of normalization process theory and 3 others cited the WHO-ITU eHealth Strategy; only 1 toolkit did not report using an underlying framework. The intended user groups for each toolkit included government policymakers, implementing partners, donors, system managers, system users, and care program managers. Because evidence of use or field testing in practice was an inclusion criterion, all the toolkits we included necessarily reported this. Seven toolkits were reportedly evaluated,16–20,27,28,32–35,41–45 but detailed evaluation results were available for only 3,16–20,27,28 and, when available, addressed usability, relevance, and facilitators of and barriers to toolkit use. The PRISM (Performance of Routine Information System Management) toolkit is a good example of toolkit development and reporting: it is based on a literature review, an underlying framework, and stakeholder consensus,18 and has been evaluated in several real-world settings, with the findings reported.16,17 The extracted data for included toolkits are available in Supplementary Appendix 2.
Table 3.
Toolkit development approach and methods
Year | Toolkit | Literature Review | Expert Consensus | Underlying Framework | Intended User Group | Evaluated |
---|---|---|---|---|---|---|
2010-2019 | PRISM Toolkit16–18 | Yes | Yes | PRISM framework | Managers, system evaluators, and policymakers | Yes |
2011 | eHealth Implementation toolkit19,20 | Yes | Yes | Normalization process theory | Senior staff and managers of eHealth implementation | Yes |
2012 | National eHealth Strategy Toolkit3,21–24 | Unclear | Yes | WHO-ITU framework | Policymakers | No |
2015 | MAPS toolkit25,26 | Yes | Yes | MAPS framework | Project managers and project teams | No |
2016 | HIS assessment support tool27,28 | No | Yes | European Health Information Initiative framework | HIS evaluators | Yes |
2017 | Evaluating Person-Centred Digital Health and Wellness at Scale30 | No | Yes | Normalization process theory | Care managers | No |
2017 | Informatics Capability Maturity Toolkit31 | Yes | Yes | Capability maturity models | Care providers and managers of health organizations | No |
2018 | Routine HIS Rapid Assessment Tool32,33 | Yes | Yes | Health Facility and Community Information System Standards | Managers of HIS, programs, and data | Yes |
2018 | eHealth Literacy Assessment Toolkit34,35 | No | Yes | The eHealth Literacy Framework | eHealth intervention users | Yes |
2019 | Coordinating Digital Transformation Toolkit36–40 | Yes | Yes | Not reported | Governments, implementing partners, donors | No |
2019 | Global Digital Health Index46 | Unclear | Yes | WHO-ITU framework | Policymakers | No |
2019 | HIS Interoperability Maturity (HISIM) Toolkit41–43 | Yes | Yes | Principles of digital development; several maturity models and assessment tools | Policymakers in low-resource settings | Yes |
2019 | HIS Stages of Continuous Improvement (HISSCI) Toolkit44,45 | Yes | Yes | National eHealth Strategy Toolkit (WHO); Health Metrics Network Assessment (WHO); HIS Strengthening Model (MEASURE Evaluation); Demand and Readiness Tool | Policymakers | Yes |
HIS: health information system; ITU: International Telecommunication Union; MAPS: mHealth Health Assessment and Planning for Scale; PRISM: Performance of Routine Information System Management; WHO: World Health Organization.
DISCUSSION
What we learned
Very few of the potentially relevant toolkits met the inclusion criterion of field testing, let alone evaluation; this could indicate either a genuine lack of field testing or poor reporting of it. Descriptions of toolkit development were often inconsistent and unclear, possibly reflecting inconsistent methodology.
The toolkits for implementing and evaluating digital health broadly fell into 3 categories, based on the organizational level they applied to: the national health system level, health organization or facility level (eg, health information system), or user level (eg, individual or health professional) (Table 4). These align closely with the organizational levels identified by the “Framework of e-health for improved health service delivery,”47 which might offer a common conceptual framework to enable a multiperspective and interprofessional approach to implementation and evaluation across the levels. Furthering this line of thinking, more coordinated implementation of digital health could be achieved by better conceptual synchronicity between toolkits of all levels.
Table 4.
Generic classification of toolkits by organizational level
Organizational level | Toolkit |
---|---|
Health system level | Coordinating Digital Transformation Toolkit36–40 |
Global Digital Health Index46 | |
National eHealth Strategy Toolkit3,21–23 | |
Health organization/facility level | HIS Interoperability Maturity (HISIM) Toolkit41–43 |
HIS Stages of Continuous Improvement (HISSCI) Toolkit44,45 | |
Routine HIS Rapid Assessment Tool32,33 | |
HIS assessment support tool27,28 | |
Performance of Routine Information System Management (PRISM) Toolkit16,17 | |
Informatics Capability Maturity Toolkit31 | |
Users (care providers and patients) level | eHealth Literacy Assessment Toolkit34,35 |
Evaluating Person-Centred Digital Health and Wellness at Scale30 | |
mHealth Health Assessment and Planning for Scale (MAPS) toolkit25,26 | |
eHealth Implementation toolkit19,20 |
HIS: health information system.
Following the release of the WHO-ITU National eHealth Strategy Toolkit, many domain-specific operational and technical toolkits have emerged and continue to do so. The plethora of largely untested toolkits we excluded risks “cognitive overload” among intended users; a limited number of focused, instructive, well-tested, and clearly reported toolkits would likely be a more effective KT strategy for guiding the implementation of digital health interventions and systems. Our findings indicate that more toolkits have become available over time. In light of the burgeoning demand for health services,48,49 many more such toolkits will likely be developed as emerging digital technologies (eg, artificial intelligence, robotics, “-omics”) are more widely adopted to support increasingly integrated models of care.50–53 However, generic frameworks or guidance for methodologically rigorous toolkit development are few and far between. The AHRQ offers some guidelines,9 but these mainly address reporting and communication quality rather than rigor.
Recommendations for developing and reporting a digital health toolkit
For toolkits to successfully fulfill their purpose as a tool for KT, it is imperative that they demonstrate adequate methodological rigor, to convey “true” knowledge (epistemology) about a given implementation reality (ontology). This process should also include evaluation, which ought to test and demonstrate the toolkit’s real-world applicability and usability. Furthermore, toolkits should be trustworthy, transparently and completely reporting their development process; users need to understand how a toolkit was developed in order to meaningfully appraise its relevance and applicability to their own context and situation.
Starting from the question, “What makes a good toolkit?,” we build on this logic, pooling our study findings with guidance on guideline development,54 to propose a preliminary standard approach to developing, testing, and reporting toolkits for implementing and evaluating digital health interventions (Table 5). Developing a good toolkit requires both methodological rigor and complete, transparent reporting. These, in turn, comprise several attributes (Table 5, left column) that can be operationalized through the recommendations provided (Table 5, right column). These recommendations are not intended as gospel, but rather as a starting point for discourse among the digital health research and implementation community on this important matter. By this review’s own critical standards, we recommend that they be subjected to expert review and consensus, to create guiding criteria for developing and reporting DHI toolkits. As there are currently no critical appraisal checklists for toolkit quality, we recommend that similar principles be used to inform the development of a standardized tool for assessing toolkits’ quality and rigor, with a scoring system, for example (Table 5).
Table 5.
Preliminary considerations in developing toolkits for implementing and evaluating digital health
What Makes a Good Toolkit? | Recommendations for Digital Health Intervention Toolkits |
---|---|
Rigor | |
| Include all stakeholders in development process, especially intended user groups (eg, during needs assessment, establishing expert consensus). |
Reporting | |
Limitations
Our study did not include non-English literature; including literature published in other languages would have added relevance and validity to the findings.
CONCLUSION
The findings of this review raise concerns regarding the potential of toolkits to effectively facilitate digital health implementation and evaluation. Despite a plethora of published digital health toolkits, very few demonstrated their application in real-world testing, and even fewer had any evidence of having been evaluated. Of those that included demonstrated use cases, methodological rigor was of concern, as several did not include an underlying conceptual framework, a literature review, or evaluation and refinement in real-world settings. Reporting of approaches and methods was often inconsistent and unclear, and toolkits rarely reported being evaluated. Toolkit development should exhibit greater methodological rigor, whether in the evidence base (eg, literature review), theoretical grounding (ie, underlying conceptual framework), participatory approach (eg, co-creation, consensus processes), or in the testing, evaluation, and refinement of the toolkits in real-world settings.
As a vital component of the widespread, global rollout of digital health, it is imperative that, as a knowledge translation strategy, toolkits fulfill their function efficiently and effectively. Greater attention needs to be paid to developing, evaluating, and reporting toolkits to ensure that they effectively perform their intended function.
FUNDING
None.
AUTHOR CONTRIBUTIONS
MAG led the review and wrote the first draft of the manuscript. SA coordinated the review and conducted the database searches. All authors conceptualized the work, contributed to data collection, participated in screening search results, critically revised the manuscript, approved the final version to be published, and agree to be accountable for all aspects of the work. MAG, S-TL, and SA participated in data extraction. S-TL guided the overall direction of the work.
DATA AVAILABILITY STATEMENT
The data underlying this article are available in the article and in its online supplementary material.
Supplementary Material
ACKNOWLEDGMENTS
We thank Dr Jitendra Jonnagaddala and Dr Padmanesan Narasimhan for their insights toward conceptualizing the review, and Ms Donna Medeiros for her recommendations on sources of toolkits.
CONFLICT OF INTEREST STATEMENT
This review was conducted to meet the Terms of Reference for the WHO Collaborating Centre for eHealth (AUS-135).
REFERENCES
- 1. World Health Organization. Seventy-First World Health Assembly, Agenda Item 12.4: Digital Health. 2018. https://apps.who.int/gb/ebwha/pdf_files/WHA71/A71_R7-en.pdf Accessed September 15, 2020.
- 2. Barac R, Stein S, Bruce B, Barwick M. Scoping review of toolkits as a knowledge translation strategy in health. BMC Med Inform Decis Mak 2014; 14 (1): 121.
- 3. World Health Organization and International Telecommunication Union. National eHealth Strategy Toolkit. 2012. https://apps.who.int/iris/bitstream/handle/10665/75211/9789241548465_eng.pdf?sequence=1&isAllowed=y Accessed September 15, 2020.
- 4. European Coordination Committee of the Radiological, Electromedical, and Healthcare IT Industry. COCIR eHealth Toolkit. Integrated Care: Breaking the Silos. 2015. https://www.cocir.org/uploads/media/15013.COC_2.pdf Accessed September 15, 2020.
- 5. Global Digital Health Index. Global Digital Health Index Indicator Guide. https://static1.squarespace.com/static/5ace2d0c5cfd792078a05e5f/t/5c1153d1352f53f8337b8dfb/1544639443105/GDHI+Indicator+Guide.pdf Accessed September 15, 2020.
- 6. WHO Collaborating Centre for eHealth. Terms of Reference: AUS 135 WHO Collaborating Centre for eHealth. 2019. https://sphcm.med.unsw.edu.au/sites/default/files/sphcm/Centres_and_Units/eHealth_terms_reference.pdf Accessed March 20, 2019.
- 7. Yamada J, Shorkey A, Barwick M, Widger K, Stevens BJ. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review. BMJ Open 2015; 5 (4): e006808.
- 8. Davis MM, Howk S, Spurlock M, McGinnis PB, Cohen DJ, Fagnan LJ. A qualitative study of clinic and community member perspectives on intervention toolkits: “Unless the toolkit is used it won’t help solve the problem.” BMC Health Serv Res 2017; 17 (1): 497.
- 9. Agency for Healthcare Research and Quality (AHRQ). AHRQ Publishing and Communications Guidelines, Section 6: Toolkit Guidance. 2013. https://www.ahrq.gov/sites/default/files/publications/files/pcguide6.pdf Accessed September 15, 2020.
- 10. World Health Organization. Global Strategy for Digital Health 2020-2024. 2019. https://extranet.who.int/dataform/upload/surveys/183439/files/Draft%20Global%20Strategy%20on%20Digital%20Health.pdf Accessed September 15, 2020.
- 11. Ansari S, Godinho MA, Jonnagaddala J, Narasimhan P, Guo G, Liaw S-T. A systematic review of toolkits for implementation and evaluation of digital health interventions. 2019. https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42019147273 Accessed September 15, 2020.
- 12. Digital Impact Alliance. Principles for Digital Development. 2019. https://digitalprinciples.org/ Accessed September 15, 2020.
- 13. Digital Square. Global Goods Guidebook. PATH; 2019. https://digitalsquare.org/s/Global-Goods-Guidebook_V1.pdf Accessed September 15, 2020.
- 14. Veritas Health Innovation. Covidence systematic review software. 2019. www.covidence.org Accessed September 15, 2020.
- 15. Godinho MA, Gudi N, Milkowska M, Murthy S, Bailey A, Nair NS. Completeness of reporting in Indian qualitative public health research: a systematic review of 20 years of literature. J Public Health 2019; 41 (2): 405–11.
- 16. Hotchkiss DR, Aqil A, Lippeveld T, Mukooyo E. Evaluation of the performance of routine information system management (PRISM) framework: evidence from Uganda. BMC Health Serv Res 2010; 10 (1): 188.
- 17. MEASURE Evaluation. PRISM Case Studies: Strengthening and Evaluating RHIS. Chapel Hill, NC: University of North Carolina at Chapel Hill; 2008.
- 18. Aqil A, Lippeveld T, Hozumi D. PRISM framework: a paradigm shift for designing, strengthening and evaluating routine health information systems. Health Policy Plan 2009; 24 (3): 217–28.
- 19. MacFarlane A, Clerkin P, Murray E, et al. The e-health implementation toolkit: qualitative evaluation across four European countries. Implement Sci 2011; 6 (1): 122.
- 20. Murray E, May C, Mair F. Development and formative evaluation of the e-Health Implementation Toolkit (e-HIT). BMC Med Inform Decis Mak 2010; 10 (1): 61.
- 21. Darcy N, Elias M, Swai A, Danford H, Rulagirwa H, Perera S. eHealth strategy development: a case study in Tanzania. J Health Inform Africa 2014; 2 (2). doi: 10.12856/JHIA-2014-v2-i2-107
- 22. Ali S. Formulation of a National e-Health Strategy Development Framework for Pakistan [master’s thesis]. Calgary, Alberta, Canada: University of Calgary; 2013.
- 23. Riazi H, Jafarpour M, Bitaraf E. Towards National eHealth Implementation--a comparative study on WHO/ITU National eHealth Strategy Toolkit in Iran. Stud Health Technol Inform 2014; 205: 246–50.
- 24. Hamilton C. The WHO-ITU national eHealth strategy toolkit as an effective approach to national strategy development and implementation. Stud Health Technol Inform 2013; 192: 913–6.
- 25. World Health Organization. The MAPS Toolkit: mHealth Assessment and Planning for Scale. Geneva, Switzerland: World Health Organization; 2015.
- 26. Labrique AB, Wadhwani C, Williams KA, et al. Best practices in scaling digital health in low and middle income countries. Global Health 2018; 14 (1): 103.
- 27. Verschuuren M, Diallo K, Calleja N, Burazeri G, Stein C. First experiences with a WHO tool for assessing health information systems. Public Health Panor 2016; 2 (3): 379–82.
- 28. World Health Organization Regional Office for Europe. Support Tool to Assess Health Information Systems and Develop and Strengthen Health Information Strategies. Copenhagen, Denmark: WHO Regional Office for Europe; 2015.
- 29. World Health Organization. Assessing the National Health Information System: An Assessment Tool. Version 4.00. Geneva, Switzerland: World Health Organization; 2008.
- 30. McGee-Lennon M, Bouamrane M-M, Grieve E, et al. A flexible toolkit for evaluating person-centred digital health and wellness at scale. In: Duffy VG, Lightner N, eds. Advances in Human Factors and Ergonomics in Healthcare. New York, NY: Springer; 2017: 105–18.
- 31. Liaw S-T, Kearns R, Taggart J, et al. The informatics capability maturity of integrated primary care centres in Australia. Int J Med Inform 2017; 105: 89–97.
- 32. MEASURE Evaluation. Validating the Effectiveness of a Rapid Assessment Tool for Routine Health Information Systems. Chapel Hill, NC: MEASURE Evaluation, University of North Carolina; 2018.
- 33. MEASURE Evaluation. Routine Health Information System Rapid Assessment Tool: Implementation Guide. Chapel Hill, NC: MEASURE Evaluation; 2018.
- 34. Karnoe A, Furstrand D, Christensen KB, Norgaard O, Kayser L. Assessing competencies needed to engage with digital health services: development of the eHealth literacy assessment toolkit. J Med Internet Res 2018; 20 (5): e178.
- 35. Knudsen AK, Kayser L. Validation of the eHealth Literacy Assessment tool (eHLA). Int J Integr Care 2016; 16 (6): 349.
- 36. PATH. Coordinating Digital Transformation: Ethiopia. Seattle, WA: Digital Square; 2019.
- 37. PATH. Coordinating Digital Transformation: Nepal. Seattle, WA: Digital Square; 2019.
- 38. PATH. Coordinating Digital Transformation: Tanzania. Seattle, WA: Digital Square; 2019.
- 39. PATH. Coordinating Digital Transformation: Replication Guide. Seattle, WA: Digital Square; 2019.
- 40. PATH. Coordinating Digital Transformation: Overview. Seattle, WA: Digital Square; 2019.
- 41. MEASURE Evaluation. Health Information Systems Interoperability Maturity Toolkit. 2019. https://www.measureevaluation.org/resources/tools/health-information-systems-interoperability-toolkit Accessed September 15, 2020.
- 42. MEASURE Evaluation. Building a Strong and Interoperable Digital Health Information System for Uganda. Chapel Hill, NC: MEASURE Evaluation; 2018.
- 43. MEASURE Evaluation. Building a Strong and Interoperable Health Information System for Ghana. Chapel Hill, NC: MEASURE Evaluation; 2018.
- 44. MEASURE Evaluation. HIS Stages of Continuous Improvement Toolkit. Chapel Hill, NC: MEASURE Evaluation; 2017.
- 45. MEASURE Evaluation. Mapping a Path to Improve Uganda’s Health Information System Using the Stages of Continuous Improvement Toolkit: Workshop Report. Chapel Hill, NC: MEASURE Evaluation; 2019.
- 46. Mechael P, Ke Edelman J. The State of Digital Health 2019. Global Development Incubator; 2019. https://static1.squarespace.com/static/5ace2d0c5cfd792078a05e5f/t/5d4dcb80a9b3640001183a34/1565379490219/State+of+Digital+Health+2019.pdf Accessed September 15, 2020.
- 47. World Health Organization Regional Office for South-East Asia. Regional Strategy for Strengthening eHealth in the South-East Asia Region (2014-2020). 2015. https://apps.who.int/iris/bitstream/handle/10665/160760/SEA-HSD-366%20Rev.pdf?sequence=1&isAllowed=y Accessed September 15, 2020.
- 48. Lozano R, Fullman N, Mumford JE, et al. Measuring universal health coverage based on an index of effective coverage of health services in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet 2020; 396 (10258): 1250–84. doi: 10.1016/S0140-6736(20)30750-9
- 49. Murray CJL, Abbafati C, Abbas KM, et al. Five insights from the Global Burden of Disease Study 2019. Lancet 2020; 396 (10258): 1135–59.
- 50. Marcelo A, Medeiros D, Ramesh K, Roth S, Wyatt P. Transforming Health Systems Through Good Digital Health Governance. 2018. www.adb.org/sites/default/files/publication/401976/sdwp-051-transforming-health-systems.pdf Accessed September 15, 2020.
- 51. Godinho MA, Borda A, Kostkova P, Molnar A, Liaw S-T. ‘Serious Games’ for unboxing Global Digital Health policymaking. BMJ Simul Technol Enhanc Learn 2020; 6: 255–6.
- 52. Godinho MA, Ashraf MM, Narasimhan P, Liaw S-T. Community health alliances as social enterprises that digitally engage citizens and integrate services: a case study in Southwestern Sydney (protocol). Digit Health 2020; 6: 205520762093011.
- 53. Godinho MA, Jonnagaddala J, Gudi N, Islam R, Narasimhan P, Liaw S-T. mHealth for integrated people-centred health services in the Western Pacific: a systematic review. Int J Med Inform 2020; 142: 104259.
- 54. Logullo P, MacCarthy A, Kirtley S, Collins GS. Reporting guideline checklists are not quality evaluation forms: they are guidance for writing. Health Sci Rep 2020; 3 (2): e165. doi: 10.1002/hsr2.165