Bone & Joint Research. 2018 May 5;7(3):223–225. doi: 10.1302/2046-3758.73.BJR-2017-0147.R1

Artificial intelligence, machine learning and the evolution of healthcare

A bright future or cause for concern?

L D Jones1, D Golan2, S A Hanna2, M Ramachandran2
PMCID: PMC5987686  PMID: 29922439

First proposed by Professor John McCarthy at Dartmouth College in the summer of 1956,1 Artificial Intelligence (AI) – human intelligence exhibited by machines – has occupied the lexicon of successive generations of computer scientists, science fiction fans, and medical researchers. The aim of countless careers has been to build intelligent machines that can interpret the world as humans do, understand language, and learn from real-world examples. In the early part of this century, two events coincided that transformed the field of AI. The advent of widely available Graphics Processing Units (GPUs) meant that parallel processing was faster, cheaper, and more powerful. At the same time, the era of ‘Big Data’ – images, text, bioinformatics, medical records, and financial transactions, among others – was moving firmly into the mainstream, along with almost limitless data storage. These factors led to a dramatic resurgence in interest in AI in both academic circles and industries outside traditional computer science. Once again, AI occupies the zeitgeist, and is poised to transform medicine at a basic science, clinical, healthcare management, and financial level.

Terminology surrounding these technologies continues to evolve and can be a source of confusion for non-computer scientists. AI is broadly classified as: general AI, machines that replicate human thought, emotion, and reason (and remain, for now, in the realm of science fiction); and narrow AI, technologies that can perform specific tasks as well as, or better than, humans. Machine learning (ML) is the study of computer algorithms that can learn complex relationships or patterns from empirical data and make accurate decisions.2 Rather than coding a specific set of instructions to accomplish a task, the machine is ‘trained’ using large amounts of data and algorithms that give it the ability to learn how to perform the task. Unlike conventional algorithms, it is the data that ‘tell’ the machine what the ‘good answer’ is, and learning occurs without explicit programming. ML problems can be classified as supervised learning or unsupervised learning.3 In a supervised machine learning algorithm, such as face recognition, the machine is shown several examples labelled ‘face’ or ‘non-face’ and the algorithm learns to predict whether an unseen image is a face or not. In unsupervised learning, the images shown to the machine are not labelled as ‘face’ or ‘non-face’; instead, the algorithm must discover structure in the data on its own.
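
To make the supervised/unsupervised distinction concrete, the minimal sketch below (Python with scikit-learn, our own illustrative choice rather than anything used in the cited work) trains a classifier on synthetic, labelled feature vectors standing in for ‘face’/‘non-face’ examples, and then clusters the same vectors without labels.

```python
# Minimal sketch of supervised vs unsupervised learning.
# All data here are synthetic stand-ins, not real image features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 200 hypothetical examples, each reduced to 64 numeric features;
# label 1 = "face", label 0 = "non-face" (a toy labelling rule).
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Supervised: learn the mapping from labelled examples...
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# ...then predict labels for unseen examples.
print("accuracy on unseen examples:", clf.score(X_test, y_test))

# Unsupervised: the same feature vectors without labels, grouped into two
# clusters purely from structure in the data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```

The key difference is where the ‘good answer’ comes from: labelled examples in the first case, structure discovered in the data itself in the second.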

Artificial Neural Networks (ANNs)4 are one group of algorithms used for machine learning. While ANNs have existed for over 60 years, they fell out of favour during the 1990s and 2000s. In the last half-decade, ANNs have had a resurgence under a new name: deep neural networks (or ‘Deep Learning’). ANNs are uniquely poised to take full advantage of the computational boost offered by GPUs, allowing them to crunch through data sets of enormous size. Applications range from computer vision tasks, such as image classification, object detection, face recognition, and optical character recognition (OCR), to natural language processing and even game-playing problems (from mastering simple Atari games to the recent AlphaGo victory against human grandmasters).5

ANNs work by constructing layers upon layers of simple processing units (often referred to as ‘neurons’), interconnected via many differentially weighted connections. ANNs are ‘trained’ by using backpropagation algorithms, essentially telling the machine how to alter the internal parameters that are used to compute the representation in each layer from the representation in the previous layer. As such, deep learning can be largely automatic once set in motion, learning intricate patterns from even high-dimensional raw data with little guidance6 and continuously improving.
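
As a rough illustration of those mechanics, the sketch below builds a two-layer network of ‘neurons’ in plain NumPy and adjusts its weights by backpropagation on a toy task; the architecture, learning rate, and data are arbitrary assumptions chosen only to show the forward and backward passes, not a faithful reproduction of any system described here.

```python
# Toy two-layer neural network trained by backpropagation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                          # 256 examples, 8 raw features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary target

W1, b1 = rng.normal(scale=0.1, size=(8, 16)), np.zeros((1, 16))   # input -> hidden
W2, b2 = rng.normal(scale=0.1, size=(16, 1)), np.zeros((1, 1))    # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: each layer computes its representation from the previous layer.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error back to every weight.
    d_out = (p - y) / len(X)                # gradient at the output (cross-entropy loss)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0, keepdims=True)
    d_hid = (d_out @ W2.T) * h * (1 - h)    # chain rule through the hidden layer
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0, keepdims=True)

    # Update the internal parameters by gradient descent.
    W1, b1 = W1 - lr * dW1, b1 - lr * db1
    W2, b2 = W2 - lr * dW2, b2 - lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```

In practice, frameworks such as PyTorch or TensorFlow compute these gradients automatically, which is much of what makes deep learning ‘largely automatic once set in motion’.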

How might machine learning and deep learning transform the current medical, and specifically the musculoskeletal, landscape? Search engines, spam filters, voice recognition software, and autonomous driving vehicles all depend on ML technologies and are now part of our daily lives, irrespective of the industry sector we occupy. Medicine seems particularly amenable to ML solutions and has been the focus of much interest in thriving technological economies, such as Silicon Valley.

The impact of AI can be considered in two main themes: first, extracting meaning from ‘Big Data’ in the research domain; and second, aiding clinicians in delivering care to patients.4 Machine learning has already been used to extract information on treatment patterns and diagnoses from large digital databases of Electronic Health Records in the United Kingdom, enabling data-driven prediction of drug effects and interactions, identification of type 2 diabetes subgroups, and discovery of comorbidity clusters in autism spectrum disorders.7
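
A hedged sketch of the subgroup-discovery flavour of this work might look like the following; the patient feature matrix is synthetic and the choice of k-means is ours for illustration, whereas a real pipeline would first derive features from coded diagnoses, prescriptions, and laboratory results in the record.

```python
# Illustrative only: discovering patient subgroups by unsupervised clustering
# of (synthetic) EHR-derived features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
patients = rng.normal(size=(500, 12))      # 500 patients x 12 hypothetical features

features = StandardScaler().fit_transform(patients)
subgroups = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(features)

# Each patient is assigned to one of three data-driven subgroups, which could
# then be compared for outcomes or drug response.
print("patients per subgroup:", np.bincount(subgroups))
```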

In the United States, the IBM Watson Health cognitive computing system (IBM Corp., Armonk, New York) has used machine learning approaches to create a decision support system for physicians treating cancer patients, with the intention of improving diagnostic accuracy and reducing costs using large volumes of patient cases and over one million scholarly articles.

Within musculoskeletal medicine, machine learning and active shape modelling have proven influential in understanding biomechanics, orthopaedic implant design,8 bone tumour resection,9 prediction of progression of osteoarthritis based on anatomical shape assessment,10,11 and robotic surgery.12 The analysis of complex physiological data via ML has been used in patients with spinal degenerative changes. Hayashi et al13 focused on gait analysis as a classification method to improve diagnostic accuracy in patients with multilevel spinal stenosis. By identifying gait characteristics of those with confirmed L4 or L5 radiculopathy, support vector machine (SVM) analysis was used to contrast patient motion with normal controls, allowing the development of a gait model to aid diagnosis. Similar work has been done with knee osteoarthritis, using ML to analyse massive data inputs from complex gait analysis to develop models that provide an estimate of the presence of disease.14 This engineering approach to a medical diagnostic problem is not isolated. In the lower limb, deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so computationally expensive that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. Eskinazi et al15 created an ANN-based contact model of the tibiofemoral joint using over 75 000 evaluations of a fine-grid elastic foundation (EF) contact model. The contact model computed contact forces and torques about 1000 times faster than a less accurate coarse-grid EF contact model, removing an important computational bottleneck from musculoskeletal simulations incorporating deformable joint contact models. Similar approaches have been used in the analysis of preoperative images to help the surgeon define intraoperative bone resection levels in upper limb arthroplasty.16
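
In the spirit of the SVM gait classification described above, a minimal sketch might look like the following; the gait features, labels, and sample size are invented stand-ins rather than the published parameters, and a regression analogue of the same pattern would apply to surrogate contact models such as the ANN described by Eskinazi et al.15

```python
# Illustrative SVM classification of (synthetic) gait-derived features:
# 1 = radiculopathy, 0 = control.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
gait_features = rng.normal(size=(120, 10))   # e.g. step length, cadence, joint angles
labels = rng.integers(0, 2, size=120)        # toy labels, not real diagnoses

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, gait_features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```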

Major improvements have been seen in all stages of the medical imaging pathway, from acquisition and reconstruction to analysis and interpretation. Segmentation, the division of digitized images into homogeneous partitions with respect to specific borders of regions of interest, is commonly used in the assessment of cartilage lesions. Traditionally performed manually, it is a difficult and time-consuming task with limited standardization. Fully automated ML segmentation of hip, knee, and wrist cartilage MRI17 has transformed this process and promises to bring automated segmentation into the mainstream of research and clinical practice. Complex, user-dependent image analysis techniques, such as ultrasound for developmental dysplasia of the hip, are particularly amenable to deep learning techniques.18 Individuals in geographically underserved or remote locations can be imaged by unskilled users, accurately diagnosed, and then directed to expert care at an earlier stage in the natural history of disease, potentially transforming outcomes.
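
As a toy illustration of segmentation as partitioning, the sketch below splits a synthetic two-dimensional ‘image’ into intensity-homogeneous regions with k-means; the published cartilage pipelines use far richer, learned models (for example, convolutional networks), but the goal of assigning every pixel to a region is the same.

```python
# Toy automated segmentation: partition a synthetic image into two
# intensity-homogeneous regions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
image = rng.normal(loc=0.2, scale=0.05, size=(128, 128))
image[40:80, 40:80] += 0.6                   # a brighter "region of interest"

pixels = image.reshape(-1, 1)                # one intensity feature per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=3).fit_predict(pixels)
mask = labels.reshape(image.shape)           # the resulting partition of the image

print("pixels per partition:", np.bincount(mask.ravel()))
```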

As both the number of imaging studies and the number of images per study grow, radiology has become threatened by its own success: the workload of radiologists has increased dramatically, the number of radiologists is limited, and healthcare costs related to imaging continue to increase. With 40 million mammograms and 38 million MRI scans19 performed each year in the United States alone, and a trend to extend the indications for scans containing huge amounts of data,20 ML has had, and will continue to have, an important role to play in image interpretation. Several studies have suggested that the incorporation of computer-aided detection (CADe) systems into the diagnostic process can improve the performance of image interpretation by providing quantitative support for clinical decision-making, particularly in the differentiation of malignant and benign tumours.21 CADe provides an effective way to reduce reading time, increase detection sensitivity, and improve diagnostic accuracy, thus supporting rather than usurping the need for specialist musculoskeletal radiologists. In the future, these technologies may progress from clinical decision support to diagnostic decision-making (computer-aided diagnosis (CADx)). Currently, regulatory bodies such as the United States Food and Drug Administration do not permit this, and it is unlikely that doctors’ representative groups will embrace it enthusiastically either. However, as the accuracy and speed of the technology increase, clinicians should consider their role in repetitive manual tasks that involve pattern recognition and reporting, and understand how they can incorporate technology into their practice rather than resist it.

Not all commentators share unbridled enthusiasm for the adoption of AI technologies. In 2016, Professor Stephen Hawking declared AI to be “either the best, or the worst thing, ever to happen to humanity”.22 The Royal Society, in an attempt to address public concern on the topic, commissioned a report on AI and specifically on machine learning. While championing the role of the technology in the management of Big Data, it also highlighted legitimate anxiety regarding the implications for governance of data, as well as the role of AI in automation and, subsequently, the future of work and employment.23 What does the future hold for clinicians and researchers as machine learning and deep learning technologies advance? Data will continue to increase, and there can be little doubt that machine learning will be integral to their interpretation and utilization. While it is easy to consider medicine a rational, evidence-based activity focused on well-defined conditions, in reality it is ambiguous and emphasizes relationships, advice, and reassurance. So, while medicine is much more than simply a diagnosis and treatment algorithm, machine learning looks set to both transform and complement the way we deliver care.

Footnotes

Author Contributions: L. D. Jones: Conceived editorial, Manuscript drafting.

D. Golan: Conceived editorial, Manuscript drafting, Approved the manuscript.

S. A. Hanna: Conceived editorial, Manuscript drafting, Approved the manuscript.

M. Ramachandran: Conceived editorial, Manuscript drafting, Approved the manuscript.

Conflicts of Interest Statement: Two authors (DG and RM) declare shares in or take a salary from an artificial intelligence company. None of the authors declare any conflicts of interest relevant to this paper.


Funding Statement

None declared

References

1. Moor J. The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years. AI Mag 2006;27:87-91.
2. Wang S, Summers RM. Machine learning and radiology. Med Image Anal 2012;16:933-951.
3. Kassahun Y, Yu B, Tibebu AT, et al. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int J CARS 2016;11:553-568.
4. Sheikhtaheri A, Sadoughi F, Hashemi Dehaghi Z. Developing and using expert systems and neural networks in medicine: a review on benefits and challenges. J Med Syst 2014;38:110.
5. Mozur P. Google's AlphaGo Beats Chinese Go Master in Win for AI. New York Times. https://www.nytimes.com/2017/05/23/business/google (date last accessed 20 December 2017).
6. Mamoshina P, Vieira A, Putin E, Zhavoronkov A. Applications of Deep Learning in Biomedicine. Mol Pharm 2016;13:1445-1454.
7. Wang Z, Shah AD, Tate AR, et al. Extracting diagnoses and investigation results from unstructured text in electronic health records by semi-supervised machine learning. PLoS One 2012;7:e30412.
8. Kozic N, Weber S, Büchler P, et al. Optimisation of orthopaedic implant design using statistical shape space analysis based on level sets. Med Image Anal 2010;14:265-275.
9. Cho HS, Park YK, Gupta S, et al. Augmented reality in bone tumour resection: an experimental study. Bone Joint Res 2017;6:137-143.
10. van IJsseldijk EA, Valstar ER, Stoel BC, et al. Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models. Bone Joint Res 2016;5:320-327.
11. Agricola R, Leyland KM, Bierma-Zeinstra SMA, et al. Validation of statistical shape modelling to predict hip osteoarthritis in females: data from two prospective cohort studies (Cohort Hip and Cohort Knee and Chingford). Rheumatology (Oxford) 2015;54:2033-2041.
12. Karthik K, Colegate-Stone T, Dasgupta P, Tavakkolizadeh A, Sinha J. Robotic surgery in trauma and orthopaedics: a systematic review. Bone Joint J 2015;97-B:292-299.
13. Hayashi H, Toribatake Y, Murakami H, et al. Gait Analysis Using a Support Vector Machine for Lumbar Spinal Stenosis. Orthopedics 2015;38:959-964.
14. Kotti M, Duffell LD, Faisal AA, McGregor AH. Detecting knee osteoarthritis and its discriminating parameters using random forests. Med Eng Phys 2017;43:19-29.
15. Eskinazi I, Fregly BJ. Surrogate modeling of deformable joint contact using artificial neural networks. Med Eng Phys 2015;37:885-891.
16. Tschannen M, Vlachopoulos L, Gerber C, Székely G, Fürnstahl P. Regression forest-based automatic estimation of the articular margin plane for shoulder prosthesis planning. Med Image Anal 2016;31:88-97.
17. Pedoia V, Majumdar S, Link TM. Segmentation of joint and musculoskeletal tissue in the study of arthritis. MAGMA 2016;29:207-221.
18. Golan D, Donner Y, Mansi C, Jaremko J, Ramachandran M. Fully Automating Graf's Method for DDH Diagnosis Using Deep Convolutional Neural Networks. In: Deep Learning and Data Labeling for Medical Applications. Springer International Publishing, 2016:130-141.
19. No authors listed. Magnetic resonance imaging (MRI) exams. https://data.oecd.org/healthcare/magnetic-resonance-imaging-mri-exams.htm (date last accessed 20 December 2017).
20. Thompson MJ, Ross J, Domson G, Foster W. Screening and surveillance CT abdomen/pelvis for metastases in patients with soft-tissue sarcoma of the extremity. Bone Joint Res 2015;4:45-49.
21. Cheng J-Z, Ni D, Chou Y-H, et al. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci Rep 2016;6:24454.
22. Hern A. Stephen Hawking: AI will be ‘either best or worst thing’ for humanity. https://www.theguardian.com/science/2016/oct/19/stephen-hawking-ai-best-or-worst-thing-for-humanity-cambridge (date last accessed 20 December 2017).
23. Baker DR, Said C. Major job losses feared when self-driving cars take to the road. http://www.sfchronicle.com/business/article/Major-job-losses-feared-when-self-driving-cars-9203301.php (date last accessed 20 December 2017).

