Abstract
Objective
OpenClinical.net is a platform for disseminating clinical guidelines to improve quality of care. Its distinctive feature is to combine the benefits of clinical guidelines and other human-readable material with the power of artificial intelligence to give patient-specific recommendations. A key objective is to empower healthcare professionals to author, share, critique, trial and revise these ‘executable’ models of best practice.
Design
OpenClinical.net Alpha (www.openclinical.net) is an operational publishing platform that uses a class of artificial intelligence techniques called knowledge engineering to capture human expertise in decision-making, care planning and other cognitive skills in an intuitive but formal language called PROforma.3 PROforma models can be executed by a computer to yield patient-specific recommendations, explain the reasons and provide supporting evidence on demand.
Results
PROforma has been validated in a wide range of applications in diverse clinical settings and specialties, with trials published in high impact peer-reviewed journals. Trials have included patient workup and risk assessment; decision support (eg, diagnosis, test and treatment selection, prescribing); adaptive care pathways and care planning. The OpenClinical software platform presently supports authoring, testing, sharing and maintenance. OpenClinical’s open-access, open-source repository Repertoire currently carries more than 50 diverse examples (https://openclinical.net/index.php?id=69).
Conclusion
OpenClinical.net is a showcase for a PROforma-based approach to improving care quality, safety, efficiency and patient experience in many kinds of routine clinical practice. This human-centred approach to artificial intelligence will help to ensure that it is developed and used responsibly and in ways that are consistent with professional priorities and public expectations.
Keywords: BMJ Health Informatics, computer methodologies
Introduction
The knowledge crisis
Every week, hundreds of papers by expert medical researchers are published in high-quality journals and this research fuels continuous improvements in treatments and medical practice. New knowledge produced through research is often summarised and disseminated in the form of clinical practice guidelines (CPGs) which give short, evidence-based summaries of best practice for specific medical conditions. CPGs are generally considered to be a vital way of disseminating up-to-date recommendations for high quality and safe clinical practice.
However, there are important difficulties in achieving all the potential benefits of CPGs: they take time to read and absorb; they are difficult to keep up to date; and they provide only general guidance, not patient-specific recommendations. Chidgey et al1 reviewed the successes and issues facing the UK National Institute for Health and Care Excellence (NICE) in improving the delivery of medical services in the National Health Service. The focus was the NICE programme of guideline development, ‘arguably the largest in the world’, whose goal is to carry out rigorous and up-to-date reviews of evidence for alternative treatments and develop recommendations for practice. Chidgey et al discussed successes in changing practice but concluded that the record of translating guidance into successful implementation was ‘mixed’.
The NICE guidance programme has grown to cover nearly 500 areas of medical practice and has diversified into a wide range of products: CPGs, clinical pathways, quality standards, technology assessments, evidence summaries and more. NICE is currently reviewing its guidance development processes in a programme called NICE Connect (https://www.nice.org.uk/about/who-we-are/nice-connect), whose goal is to facilitate more effective use of NICE content in clinical practice.
In response to the emergence of ‘Rapid Learning Systems’ in healthcare and ‘Computable Biomedical Knowledge’, NICE is also investigating options for disseminating knowledge of best practice in a computable form that can support individualised care. The options range from restructuring and enriching guideline documents so that they can be read by computers and applied in more context-specific ways (eg, the GEM Guideline Elements Model (https://www.astm.org/DATABASE.CART/HISTORICAL/E2210-02.htm) and MAGIC enriched guideline content (http://magicproject.org/)), through statistical and knowledge-based decision-support systems (https://en.wikipedia.org/wiki/Clinical_decision_support_system) and rule-based systems for making patient-specific recommendations (eg, Arden syntax (https://en.wikipedia.org/wiki/Arden_syntax) and Clinical Quality Language (https://ecqi.healthit.gov/cql)), to Computer Interpretable Clinical Guidelines or CIGs (Peleg et al2), which can also be integrated with the other methods if required.3
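To make the idea of a rule-based, patient-specific recommendation concrete, the sketch below applies a single condition-action rule to structured patient data. It is a minimal illustration in plain Python, not Arden Syntax or CQL, and the field names and threshold are hypothetical.

```python
# Minimal sketch of a rule-based, patient-specific recommendation.
# Illustrative only: the field names and the threshold are hypothetical,
# and real systems would express such logic in Arden Syntax, CQL or a CIG.

def check_renal_prescribing_alert(patient: dict) -> str | None:
    """Fire an alert when the rule's condition holds for this patient."""
    egfr = patient.get("egfr")                       # estimated GFR, mL/min/1.73 m2
    on_metformin = "metformin" in patient.get("medications", [])
    if egfr is not None and egfr < 30 and on_metformin:
        return "Review metformin: eGFR below 30 (illustrative threshold)."
    return None

if __name__ == "__main__":
    alert = check_renal_prescribing_alert(
        {"egfr": 24, "medications": ["metformin", "ramipril"]}
    )
    print(alert or "No alert triggered.")
```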
Computer interpretable clinical guidelines: publets
OpenClinical was conceived as a way of developing CIGs using a class of artificial intelligence techniques called knowledge engineering.3 It is a web-based knowledge-sharing platform which offers a radically different approach to summarising knowledge of best practice in the form of executable models of practice that we call publets. Publets include structured data models and executable logic (rules, decisions, pathways) as well as traditional CPG content. When a publet is consulted in a patient’s care, it can request and interpret information about the patient and offer personalised recommendations, explaining its reasoning and providing the supporting evidence if required. OpenClinical (https://www.openclinical.net/) was originally developed at Cancer Research UK and later at Oxford University and UCL/Royal Free Hospital in London. It was launched in its first ‘alpha’ version at the Royal Free in 2013. We plan to deploy a significantly improved technology and publishing platform in 2020.
Publets can guide a clinician through just the parts of the CPG that are relevant to each patient, summarise the patient-specific pros and cons of each decision option and provide the supporting evidence. Users can either act on or critique suggestions as they see fit, in light of the patient-specific rationale provided by the publet and their own professional judgement.
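The sketch below illustrates this idea of a decision with candidate options, patient-specific arguments for and against each option, and linked evidence. It is written in plain Python for illustration, not in PROforma, and all of the clinical content (option names, arguments, evidence pointers) is hypothetical.

```python
# A minimal sketch of the publet idea described above: a decision with
# candidate options, patient-specific arguments for and against each, and
# linked evidence. Plain Python, not PROforma; all content is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Argument:
    supports: bool                  # True = argument for, False = against
    reason: str                     # human-readable rationale
    evidence: str                   # pointer to the source guideline/trial

@dataclass
class Candidate:
    name: str
    arguments: list[Argument] = field(default_factory=list)

    def net_support(self) -> int:
        return sum(1 if a.supports else -1 for a in self.arguments)

def decide(candidates: list[Candidate]) -> Candidate:
    """Suggest the candidate with the strongest net support; the clinician
    remains free to inspect the arguments and choose differently."""
    return max(candidates, key=lambda c: c.net_support())

# Hypothetical patient-specific arguments attached to two treatment options.
options = [
    Candidate("Option A", [
        Argument(True,  "Meets guideline criterion X for this patient", "CPG section 4.2"),
        Argument(False, "Interacts with a current medication",          "Drug A product information"),
    ]),
    Candidate("Option B", [
        Argument(True,  "No interaction with current medications",      "CPG section 4.3"),
    ]),
]

recommended = decide(options)
print(f"Suggested: {recommended.name}")
for c in options:
    for a in c.arguments:
        sign = "+" if a.supports else "-"
        print(f"  {sign} {c.name}: {a.reason} [{a.evidence}]")
```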
The OpenClinical concept is that clinicians, researchers and other authors develop models of best practice and submit them for review and publication. Authors will submit publets to OpenClinical in much the same way that researchers submit research papers to conventional research journals; each publet can be peer reviewed by independent experts and validated against cases before publication. Several publet demonstrations can be accessed at https://openclinical.net/index.php?id=68 and a repository of diverse examples is at https://openclinical.net/index.php?id=69.
Publets empower healthcare professionals and clinical researchers to develop and share models of practice in any area of medicine. Colleagues in other institutions, countries and specialties can download, assess and adapt publets to meet local requirements and constraints.
Methods for creating publets: knowledge engineering
The publet development tools provided by OpenClinical exploit a range of artificial intelligence techniques known collectively as knowledge engineering. This discipline emerged from artificial intelligence and computer science but has also drawn on insights from cognitive science, in that it exploits our understanding of human decision-making and the cognitive skills needed to carry out complex tasks. Medicine and clinical practice have been among the most important targets and challenges for this branch of artificial intelligence research.
An important use of knowledge engineering is to formalise human-readable CPGs (traditionally text, tables, flow charts, etc) in a logical or other symbolic form rather than as a mathematical function or algorithm. Like algorithms, knowledge models can be executed on a computer to assist clinicians in routine tasks, but symbolic task models are easier for healthcare professionals to understand and critique than conventional algorithms.
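As a simple illustration of what a symbolic representation offers, the sketch below holds a small guideline fragment as an explicit structure that can be both executed step by step and printed for human review. The task types loosely echo the kind of task ontology used by CIG languages, but this is not PROforma syntax and the fragment itself is hypothetical.

```python
# Illustrative only: a fragment of a guideline held as an explicit, symbolic
# structure rather than buried in procedural code. The same object can be
# traversed by a program and rendered for a clinician to review. Task names
# and ordering are hypothetical; this is not PROforma syntax.

guideline_fragment = [
    {"type": "enquiry",  "name": "collect_history", "asks": ["symptom_duration", "red_flags"]},
    {"type": "decision", "name": "triage",
     "options": {"urgent_referral": "red flags present",
                 "routine_review":  "no red flags"}},
    {"type": "action",   "name": "book_appointment", "detail": "per triage outcome"},
]

def describe(model: list[dict]) -> None:
    """Render the symbolic model in a form a non-programmer can read."""
    for step in model:
        print(f"{step['type'].upper():9} {step['name']}: "
              f"{step.get('asks') or step.get('options') or step.get('detail')}")

describe(guideline_fragment)
```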
In a classic paper, Peleg et al4 reviewed a number of approaches to modelling clinical guidelines that draw on concepts from knowledge engineering including EON (USA), ASBRU (Israel), GUIDE (Italy), GLIF (USA) and PROforma (UK5). Peleg et al systematically compared the abilities of these different methods in capturing practice guidelines. OpenClinical uses the PROforma language model and CIG authoring tools for modelling decision-making and pathways, but we wish to accommodate other approaches as they emerge.
There are many differences between publets and other CIG models that have been proposed. PROforma is based on a theory of human expertise, and it is easily understood and written by clinicians, so they can create, validate and share publets with their peers. Despite its naturalistic form, PROforma has sound foundations in logic and decision theory. In practical terms, it has proved to be a versatile, scalable and effective tool for improving decision-making in many clinical settings and specialties (see table). Tools for authoring and testing publets are available from OpenClinical.
Publets and other CIG models

| Application | Reference |
| --- | --- |
| Prescribing by GPs | Walton et al6 |
| Mammography screening | Taylor et al7 |
| Genetic risk assessment | Emery et al8 |
| Prescribing antiretrovirals from genotype | Tural et al9 |
| Chemotherapy prescribing for ALL | Bury et al10 |
| Early referrals of suspected cancer | Bury et al (internal report 2006)11 |
| Diagnosis and investigation of breast cancer | Patkar et al12 |
| Hospitalisation of children with acute asthma | Best Practice Advocacy Centre, NZ, 2009 |
| Genetic risk management | Glasspool et al13 |
| Support for multidisciplinary teams | Patkar et al14 |
| Diagnosis and treatment of thyroid nodules | Peleg et al15 |
| Guideline adherence in acute stroke | Ranta et al16 |
| Shared decision-making in chemotherapy | Miles et al17 |
| Diagnosis of hyponatremia (training) | González-Ferrer et al18 |
| Detection and diagnosis of ophthalmic disease | Chandrasekaran G, PhD thesis, 2017 |
| Kidney transplant donor eligibility | Knight et al (Transplantation 2018) |
The PROforma approach to designing and deploying artificial intelligence at the point of care differs from the currently popular machine learning and data science approaches in that it is grounded in an understanding of human expertise and in getting the best from a combination of natural and artificial intelligence. This experience has shown that knowledge engineering methods are effective and versatile and that publets can be deployed at scale in a way that healthcare professionals can understand, critically engage with and, if warranted, challenge.
Discussion
Despite the current excitement around artificial intelligence, many clinicians struggle with the claims of ‘medical revolution’ scenarios which are promulgated by journalists, politicians and some healthcare professionals. However, until recently, the bulk of the medical community has stayed largely silent; claims of ‘breakthroughs’ come and go in medicine, with many technologies eventually falling by the wayside despite early promise.
Although the current focus of public and business attention is on the use of data science and machine learning in medicine, the executable knowledge approach is in our view key to the acceptance of artificial intelligence as a useful tool in supporting clinicians and other healthcare professionals who are trying to cope with the knowledge crisis. Not only does published evidence show that knowledge engineering methods are effective and clinically acceptable at the point of care; they can also empower healthcare professionals who are not programmers to understand the content being used in an artificial intelligence system and, when appropriate, to discuss and overrule its recommendations.
This is not to say that data science and machine learning algorithms are not important; they will surely find an important role in delivering precise and personalised care, but this is far better done within a framework in which abstract data can be understood in terms of human concepts and practice.
What is arguably even more revolutionary about knowledge engineering is not only that it can improve the consistency, quality and safety of care, but that healthcare professionals can themselves be the authors of artificial intelligence models, which can be debated critically with other clinicians and shared fully with their patients. In our view, people who are primarily technologists should not ‘own’ artificial intelligence in medicine. Our role is to provide healthcare practitioners with the tools to deliver better care and to be able to assess and audit the services that AI technologies provide.
The mission of OpenClinical is to provide a means to support the creation and dissemination of effective, appropriate services for improving the quality and safety of patient care in an ethical, transparent and trustworthy way. The OpenClinical technology is now mature enough to deliver a practical service, and we hope to transform it from a demonstration of capability into a scalable publishing platform that delivers the benefits of artificial intelligence and knowledge sharing to healthcare professionals.
Footnotes
Contributors: JF wrote and revised the paper. Other authors contributed to OpenClinical technical development and project management.
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests: None declared.
Patient consent for publication: Not required.
Provenance and peer review: Commissioned; externally peer reviewed.
Data availability statement: There are no data associated with this paper.
Author note: JF is founder and director of Deontics Ltd., a for-profit medical AI company in London.
References
- 1. Chidgey J, Leng G, Lacey T. Implementing NICE guidance. J R Soc Med 2007;100:448–52. doi:10.1177/014107680710001012
- 2. Peleg M, Tu S, Bury J, et al. Comparing computer-interpretable guideline models: a case-study approach. J Am Med Inform Assoc 2003;10:52–68. doi:10.1197/jamia.M1135
- 3. Fox J, Gutenstein M, Khan O, et al. OpenClinical.net: a platform for creating and sharing knowledge and promoting best practice in healthcare. Computers in Industry 2015;66:63–72. doi:10.1016/j.compind.2014.10.001
- 4. Peleg M. Computer-interpretable clinical guidelines: a methodological review. J Biomed Inform 2013;46:744–63. doi:10.1016/j.jbi.2013.06.009
- 5. Sutton DR, Fox J. The syntax and semantics of the PROforma guideline modeling language. J Am Med Inform Assoc 2003;10:433–43. doi:10.1197/jamia.M1264
- 6. Walton RT, Gierl C, Yudkin P, et al. Evaluation of computer support for prescribing (CAPSULE) using simulated cases. BMJ 1997;315:791–5.
- 7. Taylor P, Fox J, Pokropek AT. The development and evaluation of CADMIUM: a prototype system to assist in the interpretation of mammograms. Med Image Anal 1999;3:321–37.
- 8. Emery J, Walton R, Murphy M, et al. Computer support for interpreting family histories of breast and ovarian cancer in primary care: comparative study with simulated cases. BMJ 2000;321:28–32.
- 9. Tural C, Ruiz L, Holtzer C, et al. Clinical utility of HIV-1 genotyping and expert advice: the Havana trial. AIDS 2002;16:209–18.
- 10. Bury J, Hurt C, Roy A, et al. LISA: a web-based decision-support system for trial management of childhood acute lymphoblastic leukaemia. Br J Haematol 2005;129:746–54.
- 11. Early referrals in primary care for patients with suspected cancer. London, UK: Tenth World Congress on Health and Medical Informatics; 2001.
- 12. Patkar V, Hurt C, Steele R, et al. Evidence-based guidelines and decision support services: a discussion and evaluation in triple assessment of suspected breast cancer. Br J Cancer 2006;95:1490–6.
- 13. Glasspool DW, Oettinger A, Braithwaite D, et al. Interactive decision support for risk management: a qualitative evaluation in cancer genetic counselling sessions. J Cancer Educ 2010;25:312–6.
- 14. Patkar V, Acosta D, Davidson T, et al. Using computerised decision support to improve compliance of cancer multidisciplinary meetings with evidence-based guidance. BMJ Open 2012;2. doi:10.1136/bmjopen-2011-000439
- 15. Peleg M, Fox J, Patkar V, et al. A computer-interpretable version of the AACE, AME, ETA medical guidelines for clinical practice for the diagnosis and management of thyroid nodules. Endocr Pract 2014;20:352–9.
- 16. Ranta A, Dovey S, Weatherall M, et al. Cluster randomized controlled trial of TIA electronic decision support in primary care. Neurology 2015;84:1545–51.
- 17. Miles A, Chronakis I, Fox J, et al. Use of a computerised decision aid (DA) to inform the decision process on adjuvant chemotherapy in patients with stage II colorectal cancer: development and preliminary evaluation. BMJ Open 2017;7:e012935.
- 18. González-Ferrer A, Valcárcel MÁ, Cuesta M, et al. Development of a computer-interpretable clinical guideline model for decision support in the differential diagnosis of hyponatremia. Int J Med Inform 2017;103:55–64.