Abstract
Artificial intelligence (AI) is a commonly used term in daily life, and two subconcepts now divide the entire range of meanings the term encompasses. The coexistence of the concepts of strong and weak AI can be seen as a result of the recognition of the limits of the mathematical and engineering concepts that have dominated its definition. This review covers the concept and history of AI and its current applications in daily life, medicine, and dentistry. Applications of AI are becoming commonplace in all areas of modern human life, and efforts to develop robots controlled by AI have been carried out continuously to maximize human convenience. AI has also been applied to the medical decision-making process, where such systems can help nonspecialists obtain expert-level information. Artificial neural networks are highly interconnected networks of computer processors inspired by biological nervous systems, and they may help connect dental professionals all over the world. As the use of AI across the medical field increases, the role of AI in dentistry will expand greatly. The use of AI is already advancing rapidly beyond text- and image-based dental practice: in addition to the diagnosis of visually confirmed dental caries and impacted teeth, studies applying machine learning based on artificial neural networks to dental treatment through analysis of dental magnetic resonance imaging, computed tomography, and cephalometric radiography are actively underway, and some visible results are emerging rapidly toward commercialization.
Keywords: Artificial Intelligence, decision-making, dentistry, machine learning, neural networks
INTRODUCTION
Artificial intelligence (AI) has become a commonly used term as a result of its overly generalized usage. The main problem lies in defining "intelligence," since definitions often misrepresent the practical notions the term indicates. The word "artificial," from medical and biological points of view, quite naturally designates a nonnatural property. A proper definition of the term cannot be achieved simply by applying a mathematical, engineering, or logical approach; it requires an approach linked to deep cognitive scientific inquiry.[1] The aim of this review was to describe the concept and history of AI and its current applications in daily life.
DEFINITION OF STRONG ARTIFICIAL INTELLIGENCE
There are now two subconcepts that divide the entire range of meanings currently encompassed by the term "AI." The coexistence of the concepts of strong and weak AI can be seen as a result of the recognition of the limits of the mathematical and engineering concepts that dominated definitions of AI in the first place. When the term "AI" was introduced, it meant a system that operated in the same way as human intelligence through nonnatural, artificial hardware and software construction; this is strong AI. The concept of strong AI first requires an adjustment of the definition of intelligence. Here, intelligence is defined as "the capacity of a system to act appropriately in an uncertain environment."[2] From the point of view of this definition, human intelligence is sometimes imperfect, but in general, natural intelligence has the ability to cope with the most widespread, uncertain environments. Strong AI is also referred to as artificial general intelligence, where "general" indicates intelligence with a universal ability to cope with uncertain environments. For a machine to function in the same way as human intelligence and to replace a person's intellect, one premise is required: that human intelligence has a structure that can be fully digitized in computing.[3] If every human thought can be implemented in a conditional and propositional way that can be unambiguously synthesized in a formal, logical manner, then, in principle, a computer has the potential to completely replace a person's mind. In other words, the computing machine could self-consciously reach the stage of recognizing and "understanding" objects in an autonomous and active way.
DEFINITION OF WEAK ARTIFICIAL INTELLIGENCE
On the other hand, weak AI is a concept that intends to build a cognitive and judgmental system within computing, refusing the unreasonable reduction and reproduction of human intelligence that strong AI expects and intends. Weak AI means a system in which human beings take advantage of some of the mechanical and logical mechanisms by which intelligence works in order to execute efficiently the intellectual activities that a human can perform.[3] The definition of weak AI acknowledges that its implementation in computing is fundamentally different from the intelligence of a person. It is a fundamental precept of those who lead the development of weak AI that it is not necessary to implement comprehensive human intelligence to obtain a desired functional system.
Initial attempts to implement AI were based on the concept of strong AI. This concept derives from the conceptual project of the universal Turing machine proposed by the English mathematician Alan Turing in 1936.[4] A universal Turing machine is a virtual machine that can solve any problem that can be solved mathematically. If all problem-solving processes can be simplified to a form that can be expressed in mathematical formulas, then even a computing machine capable of performing only relatively simple computations could solve any problem, given appropriate behavioral commands. What is needed is an infinite amount of storage that can be filled with a variety of simplified behavioral commands, even for extremely complex problems.[5]
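As a rough illustration of this idea, the sketch below simulates a tiny Turing machine in Python: a fixed, simple execution loop reads "behavioral commands" from a rule table. The machine, its rules, and the binary-increment task are hypothetical examples for illustration, not drawn from Turing's paper.

```python
# A minimal Turing machine simulator: a generic loop plus a table of
# transition rules of the form (state, symbol) -> (new_state, new_symbol, move).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape approximates unbounded storage
    head = max(tape)               # start at the rightmost (least significant) bit
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Hypothetical rule table for binary increment:
# flip trailing 1s to 0 until a 0 or blank is found, then write 1.
rules = {
    ("start", "1"): ("start", "0", "L"),
    ("start", "0"): ("halt", "1", "L"),
    ("start", "_"): ("halt", "1", "L"),
}

print(run_turing_machine(rules, "1011"))  # -> "1100"
```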
However, Turing's vision of the universal Turing machine has a fundamental cognitive-scientific limitation, arising from the simplest existential truth: human cognition is not purely mathematical. In reality, most human thoughts and actions are performed in an arbitrary and improvisational manner, regardless of mathematical calculation.[6] The requirement of storage with infinite capacity is also a limiting factor in the implementation of strong AI. This condition is currently being approximated by parallel computing and cloud computing, which manage big data, but this does not mean that the fundamental limitations of the concept of strong AI have been surmounted. The concept of weak AI accepts the fundamental impossibility of completely imitating and reproducing human intelligence through the universal Turing machine, and it rejects the idea that such problems can be solved by mathematical reduction alone. Weak AI instead attempts to implement a system that develops its problem-solving ability by itself through learning, using some of the sensory and thinking mechanisms of people. The realization of machine learning through the construction of bio-inspired artificial neural networks is a widely used method of applying weak AI.[7]
MACHINE LEARNING
The basic concept of machine learning is also based on the definition of intelligence adopted in the agent-environment model originally intended for implementing strong AI. It is notable that artificial neural networks and machine learning were conceived on the basis of the Kantian concept of transcendentality. According to Kant, the human intellect has the ability to synthesize sensory data using certain a priori categories.[8] In an artificial neural network, this corresponds to the ability to process data obtained through a specific recognition sensor or input tool according to a predetermined problem-solving and learning algorithm.
The scholars who established and developed the concepts of artificial neural networks and machine learning drew a lesson from the development of aviation technology in the early 20th century. Although humanity had aspired to flight for thousands of years, it was not actually achieved until the early 20th century. The aeronautical achievements of humanity were obtained through a clear transformation of thought. As can be seen in the Icarus narrative of ancient Greek mythology, mankind's early exploration of aviation began by imitating the body structure and flight patterns of birds.
However, this kind of attempt was never successful, and actual aeronautical technology was achieved through a combination of aerodynamics and internal combustion engines, which is totally different from the flight method of birds.
Just as aeronautical technology does not need to mimic the bird itself to achieve its purpose functionally, the basic idea of weak AI is that it is not necessary to make machines that understand and think like human beings to obtain the necessary performance.[9] Weak AI researchers believe that AI does not need to be prepared for every indefinite existential environment, as a human being is. They are dedicated to building practical agents suited to environmental conditions that computing can cope with, and this is the main trend in AI research and development today. Of course, researchers and developers have not completely abandoned the implementation of strong AI.
HISTORY OF ARTIFICIAL INTELLIGENCE
There are several internal tributaries in the history of the formulation, research, and development of AI. Here, our primary concern is with two of them: the history of strong AI and that of weak AI. Although the two tributaries are inseparably associated, a historical review centering on the distinction between them helps in reflecting on the overall history of AI more systematically. Some scholars point to Aristotle as the first in history to present a concept of AI. He did not directly envision the emergence of machinery that could replace human thinking. However, his attempt to identify man's method of thinking with a form of logic centered on the syllogism has since become a source of the belief that computing can completely replace human thought mechanisms.[10]
Based on this idea of Aristotle, Ramon Llull, the Catalan poet and missionary theologian, published a book called Ars generalis ultima (The Ultimate General Art) in 1308. In this book, he devised a mechanical means of recreating the mind of man through logical combinations of concepts based on Aristotle's logic.[11] In 1666, the German mathematician and philosopher Gottfried Leibniz published a book entitled Dissertatio de arte combinatoria (On the Combinatorial Art), in which he claimed that every human thought is implemented as a relatively simple combination of simple concepts.[12] In 1854, George Boole asserted that logical reasoning is performed in the same way as solving a system of equations, reinforcing confidence in the possibility of completely replacing logical thinking with computing.
TURING MACHINE OVER THE YEARS
This conviction was inherited by Alan Turing, who established today's notions of the universal computer and AI.[13] The universal Turing machine project is considered the beginning of the modern AI concept. Turing introduced the concept of the imitation game, or Turing test, in 1950.[14] If a person conducting a conversation cannot tell whether the contact is another person or an AI, then the AI has reached the intelligence level of a person; this is what the imitation game means. In August 1955, the term "AI" was adopted by the personnel who would play a leading role in contemporary AI studies in the United States: John McCarthy at Dartmouth College, Marvin Minsky at Harvard University, Nathaniel Rochester at IBM, and Claude Shannon at Bell Telephone Laboratories introduced the term in their proposal for a workshop. The workshop they organized the following year, held in July and August of 1956, is recognized as the event in which the conceptual project of AI developed into an academic field in earnest.[15] At that time, most scholars thought of AI as strong AI. However, some of the pioneers of AI became aware of weak points in the ideals that the universal Turing machine was seeking, and from the earliest days there were those who conducted research aimed at implementing weak AI.
NEURAL NETWORKS
In 1943, Warren McCulloch and Walter Pitts published a paper suggesting neural networks as a way to imitate the human brain.[16] In 1951, Minsky and Dean Edmunds developed the stochastic neural analog reinforcement calculator, which is recognized as the first neural network in history.[17] In 1955, Allen Newell and Herbert Simon developed the first AI program in history.[18] The program, called the Logic Theorist, proved 38 of the first 52 theorems of Principia Mathematica, co-authored by Whitehead and Russell.[11]
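The McCulloch-Pitts model reduces a neuron to a weighted threshold unit: the output fires when the weighted sum of binary inputs reaches a threshold. The minimal sketch below shows such a unit realizing a logic gate; the particular weights and threshold are illustrative choices, not values from the original paper.

```python
# A McCulloch-Pitts threshold neuron: fires (1) when the weighted sum of
# binary inputs reaches the threshold, otherwise stays silent (0).

def mcculloch_pitts(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Hand-chosen weights realizing logical AND: both inputs must be active.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts((a, b), weights=(1, 1), threshold=2))
```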
While the prospects for the implementation of strong AI were unclear, the evolution of weak AI through artificial neural network construction continued. In 1959, Arthur Samuel accelerated the development of weak AI by introducing the term “machine learning.”[19]
In the mid-1960s, meaningful research pointing out the fundamental problems of strong AI was carried out in succession. In 1965, Hubert Dreyfus emphasized in his book that there is an area of the human mind that works in a way computers cannot reach.[20] Joseph Weizenbaum developed an AI program called ELIZA, revealing the illusion behind Turing's imitation game.[21] ELIZA was an AI program that allowed people to communicate with a machine in English. Experiments with the program showed that a machine could communicate with people at a superficial level without any self-consciousness or deep understanding of the person in the conversation. It was suggested that the imitation game can succeed not because the entire human mind works in the same way as computing but because the minds of those participating in the game become emotionally assimilated with the machine in dialog.[22]
DEEP LEARNING AND COGNITIVE SCIENCE
Since then, the dominant thrust of research and development has been the implementation of weak AI, a trend accelerated by Arthur Bryson and Yu-Chi Ho.[23] By developing the backpropagation algorithm in 1969, they made a decisive contribution to the implementation of today's deep learning. The backpropagation algorithm uses a partial-derivative approach to refine the results of AI execution and is designed to improve the AI's self-execution algorithm.[24] Through machine learning, AI moved from a stage used for Turing test implementation and mathematical and logical verification to the higher level of real-life use.
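To make the partial-derivative idea concrete, the following minimal sketch trains a tiny two-layer network with backpropagation: the derivative of the error with respect to each weight is computed via the chain rule and used to nudge the weights downhill. The XOR task, network size, and learning rate are arbitrary illustrative choices, not details from Bryson and Ho.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])   # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # error derivative through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)       # derivative propagated backward
    W2 -= 0.5 * h.T @ d_out                  # gradient-descent weight updates
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```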
In 1972, early expert systems emerged; an expert system enables nonspecialists to use knowledge by organizing and processing the knowledge of experts in a specific field.[25] The expert system MYCIN, developed at Stanford University in 1972, was intended to identify the bacteria causing serious infections and to recommend suitable antibiotics.[26] The fact that the application of early expert systems focused on the medical field reflects the high utility of AI in medicine.
Although the prospects for implementing strong AI have not completely vanished, AI research is now achieving a significant level of performance with weak AI alone, and the dominant expectation is that this situation will continue for the foreseeable future.
However, AI research aimed at mimicking humans has contributed greatly to pioneering the new discipline of cognitive science. Cognitive science studies how the human mind works, building on philosophical epistemology and psychology and using the various computing languages and concepts introduced by AI research.[27] Although AI and cognitive science differ clearly in their academic interests, whenever robust public awareness of strong AI is raised, cognitive science, which attempts to explain the human mind in computing languages and concepts, naturally gains public attention.
ARTIFICIAL INTELLIGENCE IN DAILY LIFE
The application of AI is becoming commonplace in all areas of modern human life. Twenty years ago, at the start of the 21st century, the reach of AI was still limited. The development of AI research over the past two decades has been remarkable, and it is now difficult to find any ordinary or professional activity involving digital or mobile devices in which AI is not utilized at all. Machine learning is absolutely essential for this to be possible. Therefore, considering the general tendencies of AI, it is useful to understand how AI is used in daily life.
AI's dominant area of activity nowadays is in fields that require solving the satisfiability problem. Satisfiability is simply the problem of finding values of variables that make a given logical expression true. Dependence on AI is especially high in areas that must overcome the complexity of the logical expression and resolve extreme uncertainty in the variables, particularly in simulations requiring high precision.
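As a concrete illustration, the sketch below solves a toy satisfiability instance by brute force; the formula is invented for illustration, and the exponential search the code performs is exactly the complexity problem noted above that practical solvers must overcome.

```python
from itertools import product

# Brute-force satisfiability: try all 2^n truth assignments until the
# logical expression evaluates to true.

def satisfy(formula, variables):
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None  # unsatisfiable

# Example formula: (x or y) and (not x or z) and (not y or not z)
formula = lambda v: ((v["x"] or v["y"])
                     and (not v["x"] or v["z"])
                     and (not v["y"] or not v["z"]))

print(satisfy(formula, ["x", "y", "z"]))  # e.g. {'x': False, 'y': True, 'z': False}
```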
Improved methods of handling large amounts of data to find parameters that satisfy logical equations have made this field more popular in recent years. Advances in hardware and algorithms have greatly accelerated the widespread use of AI in all types of simulation activities. Representative examples include forecasting activities (forecasts and simulations of weather, the natural environment, specific chemical effects, management variables, the risks of financial institutions, etc.), design and management work (architectural and civil engineering project design, programming system design, human resources and customer management, etc.), and the synthesis of information (database construction and the processing and synthesis of specific data).[28]
Most of today's AI is designed as an agent that self-enhances its power to process and control environmental variables. Typical examples are AI applications for recognition and identification (image, voice, and body recognition) and activities that substitute for human effort in daily life (translation, autonomous navigation, etc.).[29] At present, the World Wide Web (Web), along with machine learning, has contributed significantly to the rapid increase in the utilization of AI. AI is required not only for the quantitative efficiency of data processing in daily use of the Web but also for its qualitative efficiency. Specifically, various AI programs are actively developed and utilized to provide personalized network services (presenting and optimizing online behavior such as search, shopping, and network management).
Another remarkable feature of recent AI applications is the combination of AI with robotic engineering, which is intended to implement AI physically and ultimately aims to realize through AI all the professional activities that humans can perform. Efforts to develop robots controlled by AI have been carried out continuously to maximize human convenience and safety and the efficiency of activities such as smart factories, unmanned agriculture, and unmanned defense robots.[30] In particular, various types of diagnostic robots have already appeared in the medical field, and the development of semi-automatic surgical AI robots, which assist physicians in various aspects, has become increasingly apparent.[31]
THE USE OF ARTIFICIAL INTELLIGENCE IN THE MEDICAL FIELD
AI has been applied in the medical decision-making process, and these systems can help nonspecialists obtain expert-level information.[32] Artificial neural networks are highly interconnected networks of computer processors inspired by biological nervous systems.[33] These systems may help connect dental professionals all over the world.[34] The following sections review the history of artificial neural networks in the medical and dental fields, as well as their current applications in dentistry.
The prospects for AI in the medical field are immense, but this rosy outlook has not yet been realized.[35] This is because the field of AI itself has a relatively short history compared with other disciplines.[36] The implementation of AI was first attempted in the mid-1950s. Academic achievement in this field has grown at an incomparably faster rate than in other disciplines, but it is still in its infancy. At present, one of the areas in which AI is most powerful is expert systems.[37] Expert systems, which primarily aim to organize and classify the knowledge of experts in a particular field so that others can use it, are sophisticated in that they rest on highly refined data. Because AI deals mainly with expertise that has already been systematized through research, AI research in this area has begun to bear fruit.
In this context, medicine, theoretically and practically, can be considered to offer optimal conditions for expert systems based on AI. The affinity between AI and the medical field was established in the early 1970s with the launch of the Stanford University Medical Experimental Computer for Artificial Intelligence in Medicine (SUMEX-AIM) project, as well as the early expert systems MYCIN, INTERNIST, CASNET, and so on.[38] Since the "democratization" of the Internet began in 1991, global biomedical network infrastructures have formed beyond the national level, and there has been breakthrough development of AI in the medical sector.[39]
So far, the most active areas of medical AI are diagnostics and the prediction of prognosis.[40] AI in the medical sector is contributing significantly to decisions related to medical practice while showing considerable potential for sound diagnosis and prediction.
Data mining and machine learning, which process cumulative medical data through backpropagation and Bayesian inference methods, are used in such AI.[41] The emergence of expert systems as infrastructure accounts for the largest share of AI's contribution to the medical sector to date.[42]
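As a minimal illustration of the Bayesian inference step, the sketch below applies Bayes' rule to a hypothetical diagnostic test; all probabilities are invented for illustration and are not clinical data.

```python
# Bayes' rule for a hypothetical diagnostic test:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

prevalence = 0.10        # P(disease) in the patient population (assumed)
sensitivity = 0.90       # P(positive test | disease) (assumed)
false_positive = 0.05    # P(positive test | no disease) (assumed)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {posterior:.2f}")  # ~0.67
```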
APPLICATION OF ARTIFICIAL NEURAL NETWORKS IN THE DENTAL FIELD
In the dental field, although applications are clearly still at a basic stage, AI technology is progressing remarkably.[43] Clinical decision support systems are one example: computer programs designed to provide expert support for health professionals.[43]
In a previous study, artificial neural network analysis was applied to construct a toothache prediction model and to explore the relationship between dental pain and daily toothbrushing frequency, toothbrushing timing (before or after meals, and so on), experience of toothbrushing instruction, use of dental floss, toothbrush replacement cycle, receipt of scaling, and other factors, including nutrition and exercise.[44] A predictive model of toothache development with a fitness of about 80% was derived. The model identified proper eating habits, education related to oral hygiene, and stress prevention as the most important factors in preventing toothache.
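A hedged sketch of this kind of model appears below: a small neural network classifier trained on oral-hygiene habit variables. The feature list paraphrases the study's variables, but the data are randomly generated, and the architecture (scikit-learn's MLPClassifier) is an assumption for illustration, not the method of the cited study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 4, n),    # daily toothbrushing frequency
    rng.integers(0, 2, n),    # brushes after meals (0/1)
    rng.integers(0, 2, n),    # uses dental floss (0/1)
    rng.integers(1, 13, n),   # toothbrush replacement cycle (months)
    rng.integers(0, 2, n),    # received scaling (0/1)
])
# Invented rule linking poor habits to toothache, just to give the model signal.
y = ((X[:, 0] < 2) & (X[:, 2] == 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```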
A Bayesian network analysis of the factors that influence the clinical approach to impacted maxillary canines has also been conducted.[45] This study included 168 patients with impacted maxillary canines who underwent a combined surgical and orthodontic procedure, and the data were gathered by comparing pretreatment and posttreatment findings. Patient-related quantitative, metric, and nominal variables were collected, and the causal relationships between the variables were identified through Bayesian network analysis. Considering that the Bayesian network employed in this study had no domain-specific prior algorithms built in, the study suggests that AI can assist dental professionals in decision-making and that the possibility of such substitution is considerably high.
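To illustrate what a Bayesian network computes, the minimal sketch below factorizes a two-variable network and answers a diagnostic query. The variables and every probability are invented and far simpler than the cited analysis.

```python
# A two-node Bayesian network: P(A, S) = P(A) * P(S | A), where A is a
# hypothetical clinical finding and S is treatment success.

p_ankylosis = {True: 0.15, False: 0.85}          # P(A), assumed
p_success_given = {True: 0.40, False: 0.90}      # P(S=True | A), assumed

# Joint distribution from the network factorization.
joint = {(a, s): p_ankylosis[a] * (p_success_given[a] if s else 1 - p_success_given[a])
         for a in (True, False) for s in (True, False)}

# Predictive query by marginalization: overall probability of success.
p_success = sum(p for (a, s), p in joint.items() if s)
print(f"P(success) = {p_success:.3f}")  # 0.15*0.40 + 0.85*0.90 = 0.825

# Diagnostic query via Bayes' rule: P(ankylosis | failure).
print(f"P(ankylosis | failure) = {joint[(True, False)] / (1 - p_success):.3f}")
```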
The necessity of extraction before orthodontic treatment has been modeled using an artificial neural network.[46] This study specifically selected a way to increase the accuracy of the decision-making model significantly through pretraining. Pretraining here is a variable-processing step in which the input and output variables of the artificial neural network are each normalized to values between 0 and 1. The artificial neural network achieved 80% accuracy without pretraining and 100% accuracy with pretraining, proving that machine learning based on data derived from the decisions of dental professionals can achieve significant performance.
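A minimal sketch of this normalization step appears below, assuming simple min-max scaling, the usual way to map a variable onto [0, 1]; the cited study's exact procedure may differ, and the sample values are hypothetical.

```python
import numpy as np

# Min-max scaling: map each variable linearly onto the interval [0, 1].
def min_max_scale(column):
    lo, hi = column.min(), column.max()
    return (column - lo) / (hi - lo)

ages = np.array([12.0, 14.0, 11.0, 16.0, 13.0])  # hypothetical patient ages
print(min_max_scale(ages))  # [0.2, 0.6, 0.0, 1.0, 0.4]
```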
Data mining of a large body of restorative data has been performed to reveal whether differences in restoration material determine the lifespan of the restoration.[47] While machine learning focuses on making predictions based on analytic learning of existing data, data mining focuses on finding causal relationships and comparisons inherent in existing data.[48] The median survival time (MST) of amalgam restorations on the occlusal surface was 16.8 years for restorations placed in the 1960s, 13.6 years for the 1970s, and 7.9 years for the 1980s. The MST of glass ionomer and composite resin restorations on the occlusal surface was 4.9 years for the 1970s and 7.3 years for the 1980s.[47] All these observations were derived by data mining; the role of the researchers was to collect and organize existing data and apply it to artificial neural network algorithms. The study clearly suggests that large amounts of information accumulated over many years but never formally analyzed can be extracted and examined through data mining.
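To show the arithmetic behind such a summary, the sketch below computes mean and median survival times over a hypothetical sample of restoration lifespans; real survival analysis would also have to handle censored data (restorations still in service), which this toy calculation ignores.

```python
import statistics

# Hypothetical restoration lifespans in years for one placement cohort.
lifespans = [13.6, 15.2, 9.8, 14.1, 12.7, 16.3, 11.9]

print(f"mean   survival: {statistics.mean(lifespans):.1f} years")    # 13.4
print(f"median survival: {statistics.median(lifespans):.1f} years")  # 13.6
```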
Data mining of digital dental records also makes it possible to analyze variation between dentists in diagnosing caries.[49] The authors analyzed a large amount of electronic patient data to compare differences in the detection of dental caries between initial and re-examinations of the same participants. For the input variables, all patients first diagnosed with caries at each health center were classified as "new patients," and those diagnosed again by the same dentist were classified as "old patients." The results show that dentists pay more attention to patients at the initial visit than at the re-examination visit.
In another previous study, data mining was applied to analyze the indirect causes of extraction based on a large volume of electronic medical records.[50] The direct causes of extraction in 5257 cases were dental caries (43.8%), periodontal disease (37.2%), fracture (6.8%), prosthetics (4.3%), impaction (3.1%), orthodontics (2.7%), and deciduous teeth (0.3%). Data mining of the selected subjects' electronic medical records with a specific AI algorithm confirmed that the presence of extraction experience and the number of teeth extracted were statistically affected by gender, age, and occupation. The algorithm not only processed the existing data statistically but also revealed significant causal factors affecting candidates for extraction. The number of extractions estimated by age and occupation group was close to the number actually observed.
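A hypothetical sketch of this style of record mining appears below, using pandas to tabulate extraction causes; the column names and rows are invented, and the cited study's actual algorithm is not reproduced here.

```python
import pandas as pd

# Toy stand-in for a table of electronic dental records; a real analysis
# would load thousands of rows from a clinical database.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "age_group":  ["40s", "60s", "20s", "60s", "40s", "70s"],
    "cause":      ["caries", "periodontal", "impaction",
                   "periodontal", "caries", "caries"],
})

# Distribution of direct causes, analogous to the percentages reported above.
print(records["cause"].value_counts(normalize=True))

# Cross-tabulation: how causes vary across age groups.
print(pd.crosstab(records["age_group"], records["cause"]))
```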
The above examples show how AI is currently utilized in dentistry: machine learning is applied to diagnose and make predictions through the extraction of meaningful information from large amounts of medical records. Work is mainly focused on building expert systems to help dental professionals make decisions, as well as to help patients understand their diseases.
Although these achievements are not trivial, considering that autonomous and active diagnosis and treatment are the ultimate goals of using AI in the medical field, the present research results are only a beginning. The machine learning, data mining, and expert systems used in the dental field to date are based on large amounts of previous data on dental diagnosis, treatment, and professional judgment, converted into text and numbers. In other words, actual clinical judgment and treatment are still carried out by the dentist, and AI plays a role in assisting that judgment and treatment.
CONCLUSION
The current presentation has reviewed the concept, history, and current applications of AI in daily life. The application of AI is becoming commonplace in all areas of modern human life, and efforts to develop robots controlled by AI have been carried out continuously to maximize human convenience. As the use of AI across the medical field increases, the role of AI in dentistry will expand greatly. The use of AI is already advancing rapidly beyond text- and image-based dental practice. In addition to the diagnosis of visually confirmed dental caries and impacted teeth, studies applying machine learning based on artificial neural networks to dental treatment through analysis of dental magnetic resonance imaging, computed tomography, and cephalometric radiography are actively underway, and some visible results are emerging rapidly toward commercialization.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
REFERENCES
1. Legg S, Hutter M. Universal intelligence: A definition of machine intelligence. Minds Mach. 2007;17:391–444.
2. Albus JS. Outline for a theory of intelligence. IEEE Trans Syst Man Cybern. 1991;21:473–509.
3. Zackova E. Intelligence explosion quest for humankind. In: Romportl J, Zackova E, Kelemen J, editors. Beyond Artificial Intelligence: The Disappearing Human-Machine Divide. Dordrecht: Springer; 2015. p. 34.
4. Copeland BJ, Sylvan R. Beyond the universal Turing machine. Australas J Philos. 1999;77:46–66.
5. Pradhan T. Enhancement of Turing machine to universal Turing machine to halt for recursive enumerable language and its JFLAP simulation. Int J Hybrid Inf Technol. 2015;8:193–202.
6. Dreyfus H. The myth of the pervasiveness of the mental. In: Schear J, editor. Mind, Reason, and Being-in-the-World. New York: Routledge; 2013. pp. 17–23.
7. Weinbaum D, Veitas V. Open ended intelligence: The individuation of intelligent agents. J Exp Theor Artif Intell. 2017;29:371–96.
8. Kant I. Kritik der reinen Vernunft. Hamburg: Felix Meiner; 1956. pp. A76–80, B102–6.
9. Floridi L. Philosophy and Computing: An Introduction. New York: Routledge; 1999. p. 148.
10. Perlovsky LI. Neural mechanisms of the mind, Aristotle, Zadeh, and fMRI. IEEE Trans Neural Netw. 2010;21:718–33. doi: 10.1109/TNN.2010.2041250.
11. Nilsson N. The Quest for Artificial Intelligence. New York: Cambridge University Press; 2010.
12. Uckelman SL. Computing with Concepts, Computing with Numbers: Llull, Leibniz, and Boole. Berlin: Springer; 2010. pp. 427–37.
13. Turing A. On computable numbers, with an application to the Entscheidungsproblem. Proc London Math Soc. 1936;42:230–65.
14. Turing A. Computing machinery and intelligence. Mind. 1950;59:433–60.
15. Kline R. Cybernetics, automata studies, and the Dartmouth conference on artificial intelligence. IEEE Ann Hist Comput. 2011;33:5–16.
16. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. 1943. Bull Math Biol. 1990;52:99–115.
17. Poulton MM. A Brief History. Oxford: Elsevier Science; 2001. p. 10.
18. Newell A, Simon HA. Computer science as empirical inquiry: Symbols and search. Commun ACM. 1976;19:113–26.
19. Bowling M, Fürnkranz J, Graepel T, Musick R. Machine learning and games. Mach Learn. 2006;63:211–5.
20. Dreyfus H. Alchemy and Artificial Intelligence. Santa Monica: Rand Corporation; 1965.
21. Weizenbaum J. Computer Power and Human Reason: From Judgment to Calculation. New York: W.H. Freeman and Company; 1976.
22. Weizenbaum J. ELIZA: A computer program for the study of natural language communication between man and machine. Commun ACM. 1966;9:36–45.
23. Ho Y, Bryson A, Baron S. Differential games and optimal pursuit-evasion strategies. IEEE Trans Autom Control. 1965;10:385–9.
24. Bryson A, Ho Y. Applied Optimal Control: Optimization, Estimation, and Control. New York: Taylor and Francis; 1975.
25. Jackson P. Introduction to Expert Systems. London: Addison Wesley; 1986.
26. Shortliffe EH, Buchanan BG. A model of inexact reasoning in medicine. Math Biosci. 1975;23:351–79.
27. Forbus KD. AI and cognitive science: The past and next 30 years. Top Cogn Sci. 2010;2:345–56. doi: 10.1111/j.1756-8765.2010.01083.x.
28. Balint A, Belov A, Matti J, Sinz C. Overview and analysis of the SAT Challenge 2012 solver competition. Artif Intell. 2015;223:120–55.
29. Zinchenko K, Wu CY, Song KT. A study on speech recognition control for a surgical robot. IEEE Trans Ind Inform. 2017;13:607–15.
30. Dirican C. The impacts of robotics, artificial intelligence on business and economics. Procedia Soc Behav Sci. 2015;195:564–73.
31. Alessandri E, Gasparetto A, Garcia RV, Béjar RM. An application of artificial intelligence to medical robotics. J Intell Robot Syst. 2005;41:225–43.
32. White SC. Decision-support systems in dentistry. J Dent Educ. 1996;60:47–63.
33. Steimann F. On the use and usefulness of fuzzy sets in medical AI. Artif Intell Med. 2001;21:131–7. doi: 10.1016/s0933-3657(00)00077-4.
34. Ramesh AN, Kambhampati C, Monson JR, Drew PJ. Artificial intelligence in medicine. Ann R Coll Surg Engl. 2004;86:334–8. doi: 10.1308/147870804290.
35. Barbieri C, Molina M, Ponce P, Tothova M, Cattinelli I, Ion Titapiccolo J, et al. An international observational study suggests that artificial intelligence for clinical decision support optimizes anemia management in hemodialysis patients. Kidney Int. 2016;90:422–9. doi: 10.1016/j.kint.2016.03.036.
36. Maes P. Artificial life meets entertainment: Lifelike autonomous agents. Commun ACM. 1995;38:108–14.
37. Wong BK, Monaco JA. Expert system applications in business: A review and analysis of the literature (1977–1993). Inf Manage. 1995;29:141–52.
38. Collen MF, Shortliffe EH. The creation of a new discipline. In: Collen MF, Ball MJ, editors. The History of Medical Informatics in the United States. London: Springer London; 2015. pp. 75–120.
39. Best ML, Wade KW. The internet and democracy: Global catalyst or democratic dud? Bull Sci Technol Soc. 2009;29:255–71.
40. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc Neurol. 2017;2:230–43. doi: 10.1136/svn-2017-000101.
41. Salari N, Shohaimi S, Najafi F, Nallappan M, Karishnarajah I. A novel hybrid classification model of genetic algorithms, modified k-nearest neighbor and developed backpropagation neural network. PLoS One. 2014;9:e112987. doi: 10.1371/journal.pone.0112987.
42. Zhu J, Wen D, Yu Y, Meudt HM, Nakhleh L. Bayesian inference of phylogenetic networks from bi-allelic genetic markers. PLoS Comput Biol. 2018;14:e1005932. doi: 10.1371/journal.pcbi.1005932.
43. Khanna S. Artificial intelligence: Contemporary applications and future compass. Int Dent J. 2010;60:269–72.
44. Kim EY, Lim KO, Rhee HS. Predictive modeling of dental pain using neural network. Stud Health Technol Inform. 2009;146:745–6.
45. Nieri M, Crescini A, Rotundo R, Baccetti T, Cortellini P, Prato GP. Factors affecting the clinical approach to impacted maxillary canines: A Bayesian network analysis. Am J Orthod Dentofacial Orthop. 2010;137:755–62. doi: 10.1016/j.ajodo.2008.08.028.
46. Xie X, Wang L, Wang A. Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment. Angle Orthod. 2010;80:262–6. doi: 10.2319/111608-588.1.
47. Kakilehto T, Salo S, Larmas M. Data mining of clinical oral health documents for analysis of the longevity of different restorative materials in Finland. Int J Med Inform. 2009;78:e68–74. doi: 10.1016/j.ijmedinf.2009.04.004.
48. Jackson J. Data mining: A conceptual overview. Commun Assoc Inf Syst. 2002;8:267–96.
49. Korhonen M, Gundagar M, Suni J, Salo S, Larmas M. A practice-based study of the variation of diagnostics of dental caries in new and old patients of different ages. Caries Res. 2009;43:339–44. doi: 10.1159/000231570.
50. Miladinović M, Mihailović B, Janković A, Tošić G, Mladenović D, Živković D, et al. Reasons for extraction obtained by artificial intelligence. Acta Fac Med Naissensis. 2010;27:143–58.