Digital Health. 2022 Jul 11;8:20552076221113396. doi: 10.1177/20552076221113396

Time to act mature—Gearing eHealth evaluations towards technology readiness levels

Stephanie Jansen-Kosterink1,2, Marijke Broekhuis1,2, Lex van Velsen1,2

Abstract

It is challenging to design a proper eHealth evaluation. In our opinion, the evaluation of eHealth should be a continuous process, wherein increasingly mature versions of the technology are put to the test. In this article, we present a model for continuous eHealth evaluation, geared towards technology maturity. Technology maturity is best determined via Technology Readiness Levels, of which there are nine, divided into three phases: the research, development, and deployment phases. For each phase, we list and discuss applicable activities and outcomes on the end-user, clinical, and societal fronts. Instead of focusing on a single perspective, we recommend blending the end-user, health, and societal perspectives. With this article, we aim to contribute to the methodological debate on how to create the optimal eHealth evaluation design.

Keywords: eHealth, evaluation, design, technology readiness level, continuous process, perspectives

Introduction

In its Digital Health guidelines, the World Health Organization (WHO) stressed the need for rigorous evaluation of eHealth, in order to generate evidence and to promote the appropriate integration and use of technologies for improving health and reducing health inequalities. 1 In the scientific community that focuses on eHealth evaluation, there is no consensus on how to create the best evaluation design.2,3 According to the standards of evidence-based medicine, large prospective randomized controlled trials (RCTs) are considered the gold standard for evaluating the safety and effectiveness of medical interventions. 4 As the characteristics of an RCT do not match well with the evaluation of eHealth, it is currently acknowledged among experts that there is an urgent need for other evaluation designs.5–7 This makes it challenging to perform a proper eHealth evaluation, which hampers the subsequent implementation of eHealth in daily clinical practice.8,9 In this paper, we define eHealth according to Eysenbach (2001), 10 not just as a technology, but as a concept: “An emerging field in the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state-of-mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology.” 10

To streamline the set-up of eHealth evaluations, various frameworks have been developed.3,11 The most widely used eHealth evaluation framework in European eHealth studies is the Model for Assessment of Telemedicine (MAST). 12 This model is based on the principles of Health Technology Assessment (HTA) 13 and is used to assess the effects and costs of eHealth from a multidimensional perspective. The strong points of MAST are the involvement of all relevant actors and the assessment of outcomes in seven domains: (1) health problem and description of the application; (2) safety; (3) clinical effectiveness; (4) patient perspectives; (5) economic aspects; (6) organizational aspects; and (7) socio-cultural, ethical, and legal aspects. Another commonly used framework is the five-stage model for comprehensive research on telehealth by Fatehi et al. 14 This framework outlines five important stages for eHealth intervention research: concept development, service design, pre-implementation, implementation, and post-implementation. By outlining these stages, this framework addresses the difference between the assessment of prototypes and the evaluation of mature technology. The assessment of prototypes helps to identify the required improvements, while the evaluation of a mature technology aims to measure the overall success factors and performance after implementation.

The endorsement of an iterative approach and the focus on multiple perspectives are strong points of these current frameworks. While these frameworks are useful, we foresee three major limitations. The first is that the current frameworks are only applicable to fully mature technologies and offer no solution for technology that is still in development, which limits their applicability. The second is that these frameworks do not provide a clear method for determining technology maturity. When these frameworks are used to evaluate the value of an eHealth service built on immature technology, results are likely to be overly negative or, at the very least, biased. The third is the over-representation of the clinical perspective. Most of the articles that report on the use of these frameworks only present the results of a single perspective. 15 In previous eHealth evaluation studies, the clinical perspective is over-represented, and findings related to usability, user experience, technology acceptance, and costs are rarely addressed. 5 To overcome these limitations, and based on our experience within the field of eHealth evaluation, we present our position towards eHealth evaluations and a model for the continuous evaluation of eHealth, aligned with technology maturity levels and incorporating different evaluation perspectives.

Our position towards eHealth evaluation

Evaluation, defined here as the collection, interpretation, and presentation of information in order to determine the value of a result or process, 16 becomes both a possibility and a necessity as soon as technology development starts. Evaluation should be a continuous process, whereby the evaluation set-up is geared towards the maturity of the technology. There is no need to postpone evaluation until the technology is mature; evaluation can start from the first concept. In other disciplines, such as software development, where agile approaches like Scrum are widespread, continuous evaluation is standard practice.

Next, we think that, instead of focusing on a single perspective, evaluations should incorporate multiple complementary evaluation perspectives. In our opinion, these are (1) the end-user, (2) the health, and (3) the societal perspective. The end-user perspective focuses on the task-technology fit (which differs per type of end-user), usability, user experience (UX), and technology acceptance, to ensure that a technology is suitable for the intended end-users and their context; the health (or clinical) perspective should safeguard the health benefits that one derives from using the technology; and the societal perspective should ensure that the technology can be implemented with the support of relevant stakeholders, and is durable.

Technology readiness levels

The maturity of a technology can be determined based on technology readiness levels (TRLs). TRLs are a widely accepted method to assess the maturity of a technology, also in the context of eHealth.17–19 These levels (Figure 1) were developed by NASA in the early 1970s as a means to determine whether an emerging technology was suitable for space exploration. In total, there are nine levels, divided into three phases: the research, development, and deployment phases. With TRLs, we can clearly communicate the maturity of a technology and determine whether it is ready for tests or evaluations in a real-world setting. When a technology consists of different modules, the weakest (most immature) module determines the TRL.

Figure 1. Technology readiness level scale.
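The two rules stated above, that the least mature module determines the overall TRL and that each TRL falls in one of three phases, can be made concrete in a few lines of code. The following is a minimal sketch (Python; the function names and structure are ours, illustrative only, and not part of the TRL standard or this article):

```python
def system_trl(module_trls: list[int]) -> int:
    """The weakest (most immature) module determines the overall TRL."""
    if not module_trls or any(t < 1 or t > 9 for t in module_trls):
        raise ValueError("each module TRL must be between 1 and 9")
    return min(module_trls)


def trl_phase(trl: int) -> str:
    """Map a TRL (1-9) onto the research, development, or deployment phase."""
    if trl <= 3:
        return "research"
    if trl <= 6:
        return "development"
    return "deployment"


# Example: a service with a mature web portal (TRL 8) and an experimental
# sensing module (TRL 4) is, as a whole, still in the development phase.
overall = system_trl([8, 4])
print(overall, trl_phase(overall))  # -> 4 development
```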

A model for continuous eHealth evaluation

Our model for continuous eHealth evaluation takes the maturity of the technology as the starting point for eHealth evaluations and incorporates the different evaluation perspectives. An overview of the suggested activities for the three perspectives in each phase is provided in Table 1, with a sketch of how to operationalize it after the table.

Table 1. An overview of the activities on the end-user, health, and societal perspectives for the research, development, and deployment phases.

Research phase (all perspectives)
  TRL 1: Testing of basic principles with relevant stakeholders.
  TRL 2: Testing of the basic concept to obtain a fit between use and technology.
  TRL 3: Identifying the merits of the technology.

Development phase
  TRL 4 (end-user perspective): Small-scale usability/UX studies in a lab setting, to test prototype components.
  TRL 5 (end-user perspective): Small-scale usability/UX studies in a lab setting, to test the integrated system.
  TRL 6 (health perspective): Clinical study into the use, acceptance, and potential health benefits of the technology within the daily clinical context.

Deployment phase
  TRL 7 (health perspective): Large-scale clinical study into the use, acceptance, health benefits, and safety of the technology within the daily clinical context. (Societal perspective): Discussions with relevant stakeholders to assess the forecast of financial and extra-financial value.
  TRL 8 (health perspective): Long-term monitoring of the health benefits and safety of the technology within the broad clinical context. (Societal perspective): Strengthening the model of financial and extra-financial value with the outcomes of clinical studies.
  TRL 9 (societal perspective): Strengthening the model of financial and extra-financial value with the outcomes of long-term monitoring.
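To illustrate how Table 1 can guide the planning of an evaluation, the following is a minimal sketch that encodes the table as a lookup from TRL to suggested activities per perspective (Python; the data structure and function name are ours and purely illustrative, while the activity descriptions follow the table):

```python
# Suggested evaluation activities per perspective, keyed by TRL range,
# encoded from Table 1. Illustrative only; not part of the model itself.
TABLE_1 = {
    (1, 3): {"all perspectives": "Test basic principles and the basic concept "
                                 "with relevant stakeholders; identify the "
                                 "merits of the technology."},
    (4, 5): {"end-user": "Small-scale usability/UX studies in a lab setting "
                         "(prototype components at TRL 4, integrated system "
                         "at TRL 5)."},
    (6, 6): {"health": "Clinical study into the use, acceptance, and potential "
                       "health benefits within the daily clinical context."},
    (7, 7): {"health": "Large-scale clinical study into use, acceptance, health "
                       "benefits, and safety within the daily clinical context.",
             "societal": "Stakeholder discussions to forecast financial and "
                         "extra-financial value."},
    (8, 8): {"health": "Long-term monitoring of health benefits and safety "
                       "within the broad clinical context.",
             "societal": "Strengthen the value model with clinical study "
                         "outcomes."},
    (9, 9): {"societal": "Strengthen the value model with long-term monitoring "
                         "outcomes."},
}


def suggested_activities(trl: int) -> dict[str, str]:
    """Return the suggested activities per perspective for a given TRL."""
    for (low, high), activities in TABLE_1.items():
        if low <= trl <= high:
            return activities
    raise ValueError("TRL must be between 1 and 9")


# Example: what should an evaluation at TRL 7 cover?
for perspective, activity in suggested_activities(7).items():
    print(f"{perspective}: {activity}")
```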

Research phase

During the research phase, the technology is immature and the new concept, often in the form of a low-fidelity prototype, is discussed with potential end-users (end-user perspective) (e.g. as in van Velsen et al. 20 and Jansen-Kosterink et al. 21 ). These discussions aim to gauge the end-users’ reactions towards the basic concepts and main functionality of the prototype. The main aim of the continuous evaluations in the research phase is to optimize the new concept and technology. As the technology mainly consists of ideas and simple prototypes at this stage, applying an iterative approach in this phase is crucial. Quick rounds of testing, redesign, and renewed testing should ensure the proper focus of the innovation. The work of Schnall and colleagues22–24 on the Health Information Technology Usability Evaluation Scale (Health-ITUES) fits very well with this phase.

Development phase

Within this phase, the technology evolves from a prototype towards a more mature application. At this point, end-users can interact with a high-fidelity prototype. Small-scale usability tests and short-term clinical studies in a controlled setting should be conducted to identify usability issues and to assess use, acceptance, and potential health benefits (e.g. as in Olde Keizer et al. 25 ). The outcomes of the technology-oriented evaluations (e.g. usability tests) should feed an iterative redesign process, in which the technology is optimized. The outcomes of the short-term clinical studies help to compose hypotheses concerning health benefits for subsequent evaluations. Alongside these activities, discussions with relevant stakeholders can be started to assess the forecast of financial and extra-financial value.

Deployment phase

At this stage, the technology is almost ready for market launch. There are no critical usability issues left, and the next step is a large-scale clinical study combined with a summative usability study in a real-life setting. This clinical study could be an RCT to assess the safety and clinical effectiveness of the technology in comparison to usual care in daily clinical practice (e.g. as in Kosterink et al. 26 ). To comply with national or international legislation, the technology needs to be certified based on the outcomes of these studies (e.g. CE marking in Europe). In addition, based on the outcomes of these studies, the forecast of financial and extra-financial value can be validated and finalized. During the deployment phase, there is little remaining focus on research and development, although it is important to keep monitoring the long-term health benefits and safety of the technology within the broad clinical context, for instance through a large cohort study. During this monitoring, the long-term financial and extra-financial value also needs to be assessed (e.g. as in Talboom-Kamp et al. 27 ), so as to become aware of additional exploitation opportunities.

Discussion

The evaluation of eHealth should be a continuous process, based on the maturity of the technology, and should focus on the end-user perspective, the health perspective, and the societal perspective. The focus of an evaluation should be aligned with the maturity of the technology that is being put to the test. The use of TRLs and their alignment with evaluation perspectives is what mainly distinguishes our model from other evaluation models for eHealth. These models focus on only one perspective,22–24 are only applicable to mature technology,12,15 or do not specify how to assess the maturity of a technology.14,28

Our model for the continuous evaluation of eHealth is based on our experience within the field of eHealth evaluation and the lessons we have learned during our involvement in various national and international eHealth projects. However, since this model reflects a vision on eHealth evaluation, its truth cannot be proven directly. Therefore, case studies should inform us of its worth and of opportunities for improvement. Additionally, the environment and technical infrastructure in which an eHealth technology is embedded play a role. 29 How do environmental and infrastructure maturity affect evaluation? While we consider these factors to be aspects of technology maturity, it would be interesting to see studies that aim to distinguish among the different types of maturity. We hope that the research community sees this article as a source of inspiration to combine evaluation approaches with TRLs and will share their experiences with us.

Footnotes

Contributorship: All authors (SJK, MB, and LvV) contributed substantially to this article and all participated in drafting the article and revising it critically for important intellectual content.

Declaration of conflicting interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iDs: Stephanie Jansen-Kosterink https://orcid.org/0000-0002-2095-7104

Lex van Velsen https://orcid.org/0000-0003-0599-8706

References

1. World Health Organization. WHO guideline: recommendations on digital interventions for health system strengthening. Geneva: World Health Organization, 2019.
2. Enam A, Torres-Bonilla J, Eriksson H. Evidence-based evaluation of eHealth interventions: systematic literature review. J Med Internet Res 2018; 20: e10971.
3. Bonten TN, Rauwerdink A, Wyatt JC, et al. Online guide for electronic health evaluation approaches: systematic scoping review and concept mapping study. J Med Internet Res 2020; 22: e17774.
4. Hobbs N, Dixon D, Johnston M, et al. Can the theory of planned behaviour predict the physical activity behaviour of individuals? Psychol Health 2013; 28: 234–249.
5. Kairy D, Lehoux P, Vincent C, et al. A systematic review of clinical outcomes, clinical process, healthcare utilization and costs associated with telerehabilitation. Disabil Rehabil 2009; 31: 427–447.
6. Ekeland AG, Bowes A, Flottorp S. Methodologies for assessing telemedicine: a systematic review of reviews. Int J Med Inf 2012; 81: 1–11.
7. Laplante C, Peng W. A systematic review of e-health interventions for physical activity: an analysis of study design, intervention characteristics, and outcomes. Telemed J E Health 2011; 17: 509–523.
8. Granja C, Janssen W, Johansen MA. Factors determining the success and failure of eHealth interventions: systematic review of the literature. J Med Internet Res 2018; 20: e10235.
9. Broens TH, Huis in 't Veld RM, Vollenbroek-Hutten MM, et al. Determinants of successful telemedicine implementations: a literature study. J Telemed Telecare 2007; 13: 303–309.
10. Eysenbach G. What is e-health? J Med Internet Res 2001; 3: e20.
11. van Dyk L. A review of telehealth service implementation frameworks. Int J Environ Res Public Health 2014; 11: 1279–1298.
12. Kidholm K, Ekeland AG, Jensen LK, et al. A model for assessment of telemedicine applications: MAST. Int J Technol Assess Health Care 2012; 28: 44–51.
13. Lampe K, Mäkelä M, Garrido MV, et al. The HTA core model: a novel method for producing and reporting health technology assessments. Int J Technol Assess Health Care 2009; 25: 9–20.
14. Fatehi F, Smith AC, Maeder A, et al. How to formulate research questions and design studies for telehealth assessment and evaluation. J Telemed Telecare 2016; 23: 759–763.
15. Kidholm K, Clemensen J, Caffery LJ, et al. The model for assessment of telemedicine (MAST): a scoping review of empirical studies. J Telemed Telecare 2017; 23: 803–813.
16. Shaw I, Greene JC, et al. The SAGE handbook of evaluation. Thousand Oaks, CA: Sage, 2006.
17. Liu L, Stroulia E, Nikolaidis I, et al. Smart homes and home health monitoring technologies for older adults: a systematic review. Int J Med Inf 2016; 91: 44–59.
18. Roach DP, Neidigk S. Does the maturity of structural health monitoring technology match user readiness? Albuquerque, NM: Sandia National Laboratories, 2011.
19. Lyng KM, Jensen S, Bruun-Rasmussen M. A paradigm shift: sharing patient reported outcome via a national infrastructure. In: MEDINFO 2019: health and wellbeing e-networks for all. IOS Press, 2019, pp. 694–698.
20. van Velsen L, Evers M, Bara C-D, et al. Understanding the acceptance of an eHealth technology in the early stages of development: an end-user walkthrough approach and two case studies. JMIR Form Res 2018; 2: e10474.
21. Jansen-Kosterink S, van Velsen L, Cabrita M. Clinician acceptance of complex clinical decision support systems for treatment allocation of patients with chronic low back pain. BMC Med Inform Decis Mak 2021; 21: 37.
22. Brown W 3rd, Yen PY, Rojas M, et al. Assessment of the health IT usability evaluation model (Health-ITUEM) for evaluating mobile health (mHealth) technology. J Biomed Inform 2013; 46: 1080–1087.
23. Cho H, Yen PY, Dowding D, et al. A multi-level usability evaluation of mobile health applications: a case study. J Biomed Inform 2018; 86: 79–89.
24. Schnall R, Rojas M, Bakken S, et al. A user-centered model for designing consumer mobile health (mHealth) applications (apps). J Biomed Inform 2016; 60: 243–251.
25. Olde Keizer RACM, van Velsen L, Moncharmont M, et al. Using socially assistive robots for monitoring and preventing frailty among older adults: a study on usability and user experience challenges. Health Technol (Berl) 2019; 9: 595–605.
26. Kosterink SM, Huis in 't Veld RM, Cagnie B, et al. The clinical effectiveness of a myofeedback-based teletreatment service in patients with non-specific neck and shoulder pain: a randomized controlled trial. J Telemed Telecare 2010; 16: 316–321.
27. Talboom-Kamp E, Ketelaar P, Versluis A. A national program to support self-management for patients with a chronic condition in primary care: a social return on investment analysis. Clin eHealth 2021; 4: 45–49.
28. Sadegh SS, Khakshour Saadat P, Sepehri MM, et al. A framework for m-health service development and success evaluation. Int J Med Inf 2018; 112: 123–130.
29. Scherr TF, Moore CP, Thuma P, et al. Evaluating network readiness for mHealth interventions using the Beacon mobile phone app: application development and validation study. JMIR Mhealth Uhealth 2020; 8: e18413.
