BMJ. 1999 Nov 13;319(7220):1291. doi: 10.1136/bmj.319.7220.1291

Keeping pace with new technologies: systems needed to identify and evaluate them

Andrew Stevens a, Ruairidh Milne b, Richard Lilford a, John Gabbay c
PMCID: PMC1129069  PMID: 10559044

Of the three major pressures on health services worldwide—changing demography, growing expectations, and new healthcare interventions (technologies)—the last is generating the most concern and the most dramatic responses. New healthcare technologies are becoming more numerous, more expensive, and possibly more effective than ever before. About 50 new drugs are launched each year, and the number of new devices, procedures, and ways of providing care is growing all the time.

Summary points

  • New and changing technologies are a major pressure on health services, challenging cost control and research capacity

  • The NHS system for identifying and evaluating new technologies needs to select the most important ones for assessment, develop a range of suitable research methods, and provide the means to disseminate knowledge and implement the technologies

  • A pragmatic research solution is evolving, with rapid reviews, modelling, economic evaluation, and pragmatic trials as well as the mainstream efficacy trials and Cochrane reviews

  • Closer contact between research managers and the research team and new tools such as tracker trials may need to be developed too

Effectiveness: clinical and cost

The challenge relates not only to cost control but also, more compellingly, to determining effectiveness.1 For example, it is often not clear:

  • Which patients will benefit most

  • What the balance of benefits and harms is

  • What value for money technologies offer

  • How affordable they are

  • Whether it is appropriate for them to be provided by the NHS.

Without this information there is a risk of distorted priorities, as political pressure to keep a lid on budgets creates tension between the competing claims of different technologies on the public purse.

In 1992 the Advisory Group on Health Technology Assessment suggested that the “costs of providing unevaluated new forms of care within the NHS should be met only if they are being offered within the context of properly designed research to assess their effects.” Despite the growth of evidence based health care we are still some way from achieving this.1

System required

If the NHS is to make further progress, it needs a system for managing the four stages involved in identifying and evaluating new technologies:

  • Identifying new and emerging technologies

  • Selecting the most important topics for assessment

  • Responding appropriately with suitable research methods, and

  • Operating a system for knowledge dissemination, and implementation.

Progression through the four stages is not simple and linear but iterative, cycling through stages two, three, and four. This paper focuses mainly on the first three stages; it includes some brief remarks on the fourth, which is considered in more detail in the editorial by Rosen and Gabbay.2

Identifying new technologies

Identifying technologies, whether new, diffusing, or established, presents surprising challenges. There are great numbers of technologies. Some 10 000 distinguishable diseases are included in the 10th revision of the International Classification of Diseases, with at least 10 important interventions per disease, implying that there are about 100 000 existing interventions to which the annual new “intake” needs to be added. In addition, technologies are diverse; the NHS Health Technology Assessment Programme recognises drugs, devices, procedures, professions, screening programmes, and diffuse technologies. The relative balance of these in health technology assessment has changed over recent years. Fifteen years ago health technology assessment in the United Kingdom was about lithotripsy, ultrasound screening, and computed tomography. Today drugs loom so large that some fear that they will swamp the work of the new National Institute for Clinical Excellence (NICE).

Different sources

Different sources are required to spot the most important of the various new technologies. Drugs can be identified in phase II and phase III trials, but surgical interventions often emerge in practice with no obvious warning. The Health Technology Assessment Programme identifies new (and existing) technologies with a mixture of methods that includes widespread consultation (among clinicians, scientists, policymakers, and managers); noting the recommendations of existing systematic reviews; and a variety of processes collectively described as horizon scanning.3

Horizon scanning

There are three sources for horizon scanning—word of mouth (talking to manufacturers and clinicians); published reports (scanning scientific, medical, and pharmaceutical journals); and the world wide web. These sources can also be characterised by their proximity to the technology's invention as primary (the manufacturers, inventors, and patenters); secondary (written and conference material); and tertiary (others involved in horizon scanning).

(Illustration by Sue Sharples)

Collective scanning

Robert et al concluded that a combination of scanning specialist journals accompanied (iteratively) by regular meetings and surveys of “sentinel” groups of experts was both sensitive and specific.4 In horizon scanning it is critically important to network with related activities such as the monitoring of pharmaceutical licensing and the informal registration of procedures; in the United Kingdom this would involve the Drug Information Pharmacists' Group and the Safety and Efficacy Register of New Interventional Procedures. A number of Western countries have recently set up horizon scanning or early warning systems to inform their health technology assessment and other health service processes. The National Horizon Scanning Centre in England, which serves the needs of NICE and the Health Technology Assessment Programme, is typical. Six countries have brought their processes together in the EuroScan collaboration, which, it is proposed, will share intelligence on new and emerging technologies across Europe and Canada.

Selecting the most important topics

The second stage, choosing which topics to assess, encompasses two distinct but interlinked processes. One is to narrow down the number of possibilities; the other is to undertake preliminary evaluation that may help determine whether further, more expensive research (a full systematic review or randomised controlled trial) would be value for money. The Health Technology Assessment Programme reduces the 1500 possible assessments it identifies each year by means of the steps outlined in the box.

Choosing topics for assessment

  • A preliminary in-house elimination discards “trivial” topics

  • Five panels (acute sector, primary and community care, pharmaceuticals, diagnostic and imaging devices, and screening) produce a short list of 20 topics each (from around 80) for more detailed consideration

  • Details of each technology are summarised by a research secretariat and this enables the panels to pick 12 topics from the shortlisted 20

  • Up to 50 topics are finally selected from a review of the recommendations of all the panels; about half require systematic reviews and the remainder new primary research5

Research criteria

The criteria for selecting research, particularly at the last two stages, include judgments on (a) the degree of uncertainty about the effectiveness (including side effects) of the technology, (b) the size of the potential client group, (c) the unit cost (including knock-on effects), (d) the rate of diffusion of the technology, and (e) the likelihood of the research having an impact.6 The Institute of Medicine in the United States has adopted a formal quantified application of similar criteria to address research priorities.7 The adoption of these criteria, once formalised and documented, amounts to preliminary economic evaluation, for which a variety of models have been proposed. These are exemplified by Eddy's technology assessment priority setting system, which provides a framework for estimating the expected impact of an assessment on health and economic outcomes for a population.8 Explicit calculations use estimates such as the number of people affected and the probability of particular results. Similarly, Buxton and Hanney consider how “payback” from health services research can be assessed, including a suggestion of how it could be assessed before the research is commissioned.9 Preliminary economic evaluation and modelling can sometimes provide enough information to inform not just the urgency and design of a trial but also policy, and can be considered part of the research response.
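The spirit of such explicit calculations can be sketched in a few lines of code. The following is a purely illustrative back-of-envelope model, not Eddy's published system or the Institute of Medicine's method: the multiplicative score, the field names, and all figures are hypothetical assumptions chosen only to show how the five selection criteria above might be combined.

```python
# Hypothetical sketch of priority setting for technology assessment.
# NOT Eddy's actual model: the multiplicative score and every number
# below are illustrative assumptions only.

def priority_score(population, unit_cost, uncertainty, diffusion, p_impact):
    """Crude expected-impact score: a larger affected population, a higher
    unit cost (more budget at stake), greater uncertainty about
    effectiveness, faster diffusion, and a higher chance that the research
    changes practice all raise the priority of assessing a technology."""
    budget_at_stake = population * unit_cost  # annual spend if widely adopted
    return budget_at_stake * uncertainty * diffusion * p_impact

# Two hypothetical candidate technologies
candidates = {
    "new dementia drug": priority_score(
        population=500_000, unit_cost=1_000,  # many patients, moderate cost each
        uncertainty=0.8, diffusion=0.9, p_impact=0.7),
    "niche surgical device": priority_score(
        population=2_000, unit_cost=5_000,    # few patients, costlier per case
        uncertainty=0.5, diffusion=0.3, p_impact=0.5),
}

# Topic with the greatest expected payoff from assessment
top = max(candidates, key=candidates.get)
```

On these made-up figures the dementia drug dominates, echoing the observation above that drugs now loom large in assessment programmes. A real exercise would replace the point estimates with documented ranges and, as Eddy's framework does, model the health and economic outcomes of the assessment explicitly rather than multiplying crude weights.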

Responding appropriately

The methods available for health technology assessment are those available for health services research concerned with the efficacy, effectiveness, and wider impact of healthcare interventions. They include modelling and review methods, ranging from preliminary economic evaluation and modelling to the synthesis of existing data by systematic review, and various methods of collecting new primary data, from large randomised controlled trials (at the top end of the hierarchy of efficacy evidence) through to case series.10 Primary data collection and systematic review may or may not be accompanied by economic data collection (in the case of primary research) and economic modelling (accompanying primary research or systematic review). The fact that many of the currently available methods of commissioning and delivering research do not deliver what is needed in a timely and reliable way creates a problem for health technology assessment. Even a one year systematic review, let alone a further trial taking several years to determine cost effectiveness, can fail to provide information in time to help manage the introduction of a fast growing, new technology.

Pragmatic sequence

Although the case for randomised controlled trials (where new data are needed) and systematic reviews (synthesising existing evidence) has been made unequivocally, the staged process for handling new technologies is less clear. In practice in the United Kingdom, the handling of important (expensive) new technologies often follows the pragmatic sequence of events outlined in the box (and see table). The degree to which each of these stages happens varies with circumstances such as the importance of the technology, its rate of diffusion, and, most importantly, the findings of each stage.

Staged process for new technologies

  • Primary data sufficient for launch or licensing assembled by the manufacturer or pharmaceutical company

  • Brief report on the advantages and disadvantages of the new technology, who might prescribe, and the need for research

  • Rapid systematic review (based on published and unpublished primary research) and cost effectiveness modelling (making various assumptions on cost and longer term outcomes)11

  • Longer term health technology assessment, Cochrane review, or other systematic review

  • Pragmatic randomised controlled trials, the results of which should be set in the context of updating existing systematic reviews

In the case of donepezil, a drug for Alzheimer's disease, for example, only one randomised controlled trial had been published in full at the time of its launch in 1997. The Health Technology Assessment Programme had at that time considered, but not been able to commission, suitable research in this area, but an editorial was published in the BMJ the week after its launch.12 A “quick and clean” review was considered by the NHS's south and west development and evaluation system within three months of the drug's launch.13 A conventional systematic review was published in the Cochrane Library,14 and the NHS Executive has now funded a large pragmatic trial (“AD 2000”) on new drugs for dementia.

Change and advance

The need for a sequence of steps in evaluating healthcare technologies, such as this pragmatic sequence, has been recognised and refined in various ways, with suggestions of a series of expanding economic evaluations over time, as necessary, and of modelling in advance of clinical trials.15,16 We need, however, to address not just the speed of arrival of new technologies, particularly expensive new drugs in high demand such as interferon beta and sildenafil, but also technologies that change and advance during the course of a trial, such as coronary artery stents, mechanisms for endovascular aneurysm repair, and stereotactic radiosurgery for Parkinson's disease. A further problem is that framing a precise research question is sometimes a research exercise in itself: for example, studying which categories of elderly people are best placed in which categories of post-acute care (community hospitals, local authority homes, etc).

Tracker trials

The sequence of brief reports, rapid systematic review, and longer term review is well adapted to rapidly emerging technologies, particularly those that have been “horizon scanned” before their launch. For changing technologies, however, Lilford et al have proposed “tracker trials” to ensure that high quality randomised controlled trial evidence is not postponed until a technology stabilises (which may never happen) and resistance to randomised trials on the basis of a supposed lack of equipoise becomes entrenched.17 A tracker trial would include all variants of a technology (including new ones as they emerge) through both development and stable phases. For complicated evaluation questions, closer research management will be needed, in which the research questions are rethought iteratively as preliminary work is undertaken.18 Both this “iterative commissioning” and tracker trials are still pre-experimental, and the costs and benefits of the different strategies have yet to be worked out. It is clear, however, that a multifaceted approach with proactive research management will be increasingly necessary in health technology assessment.

Knowledge dissemination and implementation

No matter how sophisticated the evaluation or assessment process is, it is unlikely ever to be sufficient for managing the introduction of new technologies without some mechanism for ensuring knowledge of, and adherence to, its findings. This is covered in more detail elsewhere in this issue of the BMJ. What is important is that any system for identifying and evaluating new technologies is linked to the system for knowledge dissemination and implementation. It must not stand in splendid isolation. Those with responsibility for disseminating knowledge will have special insights into where the gaps in knowledge are. Those working in developing clinical fields, witnessing or fostering the development of new technologies, need to see that it is in their interest that new technologies do not spring on an unprepared NHS. All those with special responsibility for developing and implementing clinical policy—those in the NHS Executive, the royal colleges, health authorities, and trusts—need to be involved in identifying technologies and assisting with appropriate assessment and research. Only in this way can introducing new technologies to the NHS be handled in an informed as well as a managed way.

Table.

Stages in handling of new technologies

Stage: Primary research
Purpose: To support licensing and marketing decisions
Examples: Trials sponsored by pharmaceutical companies; research (often case series) sponsored by other manufacturers

Stage: Brief report by an NHS agency or independent agent
Purpose: To give early advice on use of the technology
Examples: National Prescribing Centre bulletins; HTA programme briefings or vignettes5; editorials in medical journals; Drug and Therapeutics Bulletin reports9

Stage: Rapid and rigorous systematic review
Purpose: To support timely guidance and guidelines (for example, from the National Institute for Clinical Excellence)
Examples: Development and Evaluation Systems reports; assessments externally commissioned by NICE

Stage: Longer term systematic review
Purpose: To bring together all high quality evidence about the effects, costs, and broader impact of the technology
Examples: Products of the Cochrane Collaboration; reviews commissioned by the NHS Health Technology Assessment Programme

Stage: Pragmatic randomised controlled trial
Purpose: To provide more definitive answers to the question: “For whom does it work, and is it worth it in practice?”
Examples: Trials occasionally funded by the NHS Health Technology Assessment Programme, by other research and development programmes, or by the Medical Research Council

Footnotes

Competing interests: None declared.

References

  • 1.Advisory Group on Health Technology Assessment for the Director of Research and Development. Assessing the effects of health technologies: principles, practice, proposals. London: Department of Health; 1992. [Google Scholar]
  • 2.Rosen R, Gabbay J. Linking health technology assessment to practice. BMJ. 1999;319:1292. doi: 10.1136/bmj.319.7220.1292. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Chase D, Milne R, Stein K, Stevens A. What are the relative merits of the services used to identify potential research priorities for the UK's health technology assessment programme? Int J Tech Assess Health Care (in press.) [DOI] [PubMed]
  • 4.Robert G, Stevens A, Gabbay J. Methods and information sources for identifying new health care technologies and predicting their impact. Int J Tech Assess Health Care. 1999;3:13. [Google Scholar]
  • 5.Health Technology Assessment Programme. The annual report of the NHS Health Technology Assessment Programme 1998. London: NHS Executive; 1998. [Google Scholar]
  • 6.Milne R, Stein K. The NHS R&D Health Technology Assessment Programme. In: Baker MR, Kirk S, editors. Research and development for the NHS: evidence, evaluation and effectiveness. Oxford: Radcliffe Medical Press; 1998. [Google Scholar]
  • 7.Donaldson MS, Sox HC, editors. Setting priorities for health technology assessment. A model process. Washington: National Academy Press; 1992. [PubMed] [Google Scholar]
  • 8.Eddy D. Selecting technologies for assessment. Int J Tech Assess Health Care. 1989;5:484–501. doi: 10.1017/s0266462300008424. [DOI] [PubMed] [Google Scholar]
  • 9.Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1:35–47. [PubMed] [Google Scholar]
  • 10.Woolf SH, Battista RN, Anderson GM, Logan AG, Wang E the Canadian Task Force on the Periodic Health Examination. Assessing the clinical effectiveness of preventive manoeuvres: analytic principles and systematic methods in reviewing evidence and developing clinical practice recommendations. J Clin Epidemiol. 1990;43:891–905. doi: 10.1016/0895-4356(90)90073-x. [DOI] [PubMed] [Google Scholar]
  • 11.Stevens A, Colin-Jones D, Gabbay J. Quick and clean: authoritative health technology assessment for local health care contracting. Health Trends. 1995;27:1994. [PubMed] [Google Scholar]
  • 12.Kelly CA, Harvey RJ, Cayton H. Drug treatments for Alzheimer's disease. BMJ. 1997;314:693. doi: 10.1136/bmj.314.7082.693. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Stein K. Donepezil in the treatment of mild to moderate dementia of the Alzheimer type (SDAT). Southampton: Wessex Institute for Health Research and Development; 1997. (Development and Evaluation Committee report No 69.) [Google Scholar]
  • 14.Birks JS, Melzer D. Donepezil for mild and moderate Alzheimer's disease. Cochrane Library. Issue 4. Oxford: Update Software, 1999. [DOI] [PubMed]
  • 15.Sculpher MS, Drummond M F, Buxton M. The iterative use of economic evaluation as part of the process of health technology assessment. J Health Serv Res Policy. 1997;2:26–30. doi: 10.1177/135581969700200107. [DOI] [PubMed] [Google Scholar]
  • 16.Lilford RJ, Royston G. Decision analysis in the selection, design and application of clinical and health services research. J Health Serv Res Policy. 1998;3:159–166. doi: 10.1177/135581969800300307. [DOI] [PubMed] [Google Scholar]
  • 17.Lilford R, Braunholtz D, Greenhalgh RM, Edwards S. Trials and fast changing technologies: the case for tracker studies. BMJ (in press). [DOI] [PMC free article] [PubMed]
  • 18.Lilford R, Jecock R, Shaw H, Chard J, Morrison B. Commissioning health services research: an iterative method. J Health Serv Res Policy. 1999;3:164–167. doi: 10.1177/135581969900400308. [DOI] [PubMed] [Google Scholar]
