Abstract
Material science has historically evolved in tandem with advancements in technologies for characterization, synthesis, and computation. Another type of technology to add to this mix is machine learning (ML) and artificial intelligence (AI). Increasingly sophisticated AI models are now solving progressively harder problems across a variety of fields. From a material science perspective, it is indisputable that machine learning and artificial intelligence offer a potent toolkit with the potential to substantially accelerate research efforts in areas such as the development and discovery of new functional materials. Less clear is how to best harness this development, what new skill sets will be required, and how it may affect established research practices. In this paper, those questions are explored with respect to increasingly sophisticated ML/AI approaches. To structure the discussion, a conceptual framework of an AI ladder is introduced. This AI ladder ranges from basic data-fitting techniques to more advanced functionalities such as semi-autonomous experimentation, experimental design, knowledge generation, hypothesis formulation, and the orchestration of specialized AI modules as stepping-stones toward general artificial intelligence. This ladder metaphor provides a hierarchical framework for contemplating the opportunities, challenges, and evolving skill sets required to stay competitive in the age of artificial intelligence.
Keywords: artificial intelligence, closed‐loop experimentation, machine learning, material science
In this perspective, the implications of adopting increasingly advanced AI technologies in materials science are discussed. We consider how best to utilize these technologies, identify the new skills that will be necessary, and examine the impact they may have on traditional research methodologies. Central to the discussion is a conceptual AI ladder that spans from elementary data fitting to general artificial intelligence.
1. Introduction
Throughout human history, the discovery of new materials has transformed and reshaped societies. Given the needs we have and the vanishingly small part of the chemical space that has been explored, there is no reason to believe the future will be any different. New functional materials will be pivotal in enabling breakthroughs in applications such as renewable energy, clean air and water, space exploration, next-generation nuclear power, energy storage, catalysis, and quantum computing. Advances in material science may also give us room-temperature superconductors and other innovations still belonging to the realm of science fiction.
Unfortunately, discovering and developing new functional materials is an inherently complex challenge that is often slow and labor-intensive. Theories that could guide rational material design are scarce, and there is no counterpart to the Schrödinger equation for predicting synthesizability. Each potential material is further associated with a nearly infinite parameter space of variables that can affect the synthesis, and once synthesized, the properties are influenced by a multitude of factors such as microstructure, imperfections, defects, and impurities. Moreover, feedback frequently depends on a range of highly specialized characterization techniques, each necessitating time, resources, and specialized training to operate effectively.
Great needs and infinite possibilities provide strong incentives for improving the rate of development. Historically, advancements in material science have been driven by human ingenuity, curiosity, and experimental expertise. This has been further augmented by increasingly advanced characterization techniques, more powerful computing capabilities, and an ever‐expanding body of knowledge. Recently, machine learning (ML) and artificial intelligence (AI) have emerged as vital components of this toolkit, showing great potential for playing an increasingly important role in accelerating the pace of materials discovery and development. This paper will discuss the potential impact that increasingly capable ML/AI systems may have in material science.
In some sense, ML parallels traditional statistics: often useful, sometimes misinterpreted, occasionally pivotal for generating new insights, but generally only a small part of the scientific narrative. However, we are now witnessing a rapid evolution in artificial intelligence where increasingly capable systems are solving problems that were until recently considered to be the stuff of science fiction. AI systems have already surpassed human abilities in games such as chess,[ 1 ] Jeopardy,[ 2 ] and Go,[ 3 ] can predict how proteins fold,[ 4 ] and are even capable of autonomous driving.[ 5 ] Furthermore, large language models (LLMs) like ChatGPT can produce text nearly indistinguishable from human-generated content,[ 6 ] and text-to-image systems based on latent diffusion models can create visually stunning art.[ 7 ] What's more, we are beginning to see how different AI algorithms and subsystems are being integrated to tackle increasingly complex problems.[ 8 ] This synergy among AI components has the potential to start a new industrial revolution,[ 9 , 10 ] but may also render a significant portion of existing jobs obsolete.[ 11 , 12 ]
From the perspective of material science, this rapid development of artificial intelligence prompts a series of compelling questions. To what extent can AI accelerate the development of new materials? Does materials science present unique challenges for AI, or can generalized algorithms suffice? Could AI fundamentally revolutionize the way materials science is conducted? What new skill sets will be required, and which existing practices will need to evolve for that to happen? How much of the research process could potentially be delegated to AI entities? Might there even be a conceivable future in which today's materials researchers and their skill sets become obsolete, replaced entirely by AI systems? Or is the perceived significance, importance, and future impact of AI greatly inflated and merely contemporary hype?
In this perspective, we delve into those questions by examining current trends and making informed projections into the near future. AI is a broad concept, ranging from relatively simple algorithms to sophisticated universal function approximators[ 13 ] that, when integrated with robotics, can autonomously interact with the physical world. We can conceptualize the complexity of AI systems with a ladder (see Figure 1 ) where each rung represents increasingly advanced capabilities – spanning from basic data fitting to semi-autonomous experimentation, experimental design, knowledge creation, general artificial intelligence, and beyond. We frame our discussion on the use of AI in materials science around this ladder metaphor, which provides a hierarchical framework for contemplating the opportunities, challenges, and evolving skill sets that may be required. While the primary focus and examples in this paper relate to materials science, much of the analysis is likely applicable also to other scientific disciplines.
Figure 1.
The increasing complexity and sophistication of AI systems can be thought about in terms of an AI ladder stretching from basic linear regression all the way up to general artificial intelligence and beyond. In this paper, we discuss AI in the five broad categories illustrated in the figure, but one can imagine an arbitrary number of rungs on this ladder.
2. The First Rung Up the Ladder
2.1. ML‐Models and What to Do with Them
The initial rung of the ML/AI ladder resembles traditional statistics, albeit approached with a somewhat different mindset. At its most basic, this includes straightforward techniques like linear regression. More broadly, this stage often entails employing models trained on limited datasets to accomplish specific tasks, which are usually oriented toward regression or classification. The overarching aim is to create a statistical model that can serve as a surrogate for a physical model. This is especially useful when a physical model is either too intricate to derive or entirely elusive. In essence, this first step is about leveraging statistical inference to provide an alternative way of understanding and predicting outcomes in situations where traditional physical models may not be practical.
There is a plethora of machine learning models, with some of the more widely used ones being linear regression, decision trees, Extra Trees, Random Forest (RF), AdaBoost (ABoost), Gradient Boosting (GBoost), Extreme Gradient Boosting (XGBoost), Support Vector Machines (SVM), and Multi-Layer Perceptrons (MLP), among others. Even for those with minimal experience in a programming language such as Python, utilizing these models has become increasingly accessible thanks to well-maintained open-source libraries such as Scikit-learn,[ 14 ] TensorFlow,[ 15 ] Keras,[ 16 ] and PyTorch.[ 17 ] For those interested in delving into the mathematical underpinnings of these algorithms or learning how to implement them in code, there are numerous high-quality resources available.[ 18 , 19 ] Here we will instead focus on the applications and implications of trained models.
Regression and classification models essentially serve as shortcuts, enabling a reasonable prediction of the outcomes of experiments, or the properties of materials, without synthesizing the material and conducting the experiments. This becomes particularly useful when comprehensive physics-based models are not available but data has been collected. One can then train models that establish relationships between, for example, material composition and solar cell efficiency,[ 20 , 21 , 22 , 23 ] or molecular structure and attributes such as solubility,[ 24 ] toxicity,[ 25 ] or antibacterial effect.[ 26 ] Once trained, such models can be used for virtual screening of new molecules and materials to identify promising candidates for further detailed investigation, thus dramatically reducing the number of experiments needed. A recent notable example is given by Stokes et al., who employed such an approach to discover a new type of antibiotic.[ 27 ]
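To make this concrete, the sketch below trains a random forest on placeholder data and uses it to rank a large pool of hypothetical candidates. The descriptors, dataset, and property are stand-ins for whatever composition-property data is at hand, not any particular published dataset.

```python
# Minimal virtual-screening sketch. The features and target are
# placeholders for real material descriptors (composition, structure,
# process parameters) paired with a measured property.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(seed=0)
X = rng.random((500, 10))  # 500 known materials, 10 descriptors each
y = 2 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.1, 500)  # stand-in property

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Test R^2: {r2_score(y_test, model.predict(X_test)):.2f}")

# Virtual screening: predict the property for a large candidate pool
# and keep the best-ranked materials for detailed investigation.
candidates = rng.random((100_000, 10))
predicted = model.predict(candidates)
top = np.argsort(predicted)[-10:]  # indices of the ten best candidates
print("Top candidates:", top)
```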
A trained model can also serve as a tool for introspection, enabling deeper understanding of the data and the relationships within it. With techniques such as associative rule mining,[ 28 ] SHAP analysis (SHapley Additive exPlanations),[ 29 ] correlation plots, and feature weighting, it is possible to assess the significance of individual variables or clusters of variables. This knowledge can then guide the formulation of new heuristics and hypotheses for subsequent experiments, potentially paving the way for more robust physics-based models and transferable insights. Utilizing machine learning models in this manner aligns well with the current academic publishing paradigm, wherein a high-quality study typically introduces a new material, proposes a novel synthesis route, or offers insights into the material's behavior under specific conditions.
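A minimal sketch of such an introspection step, assuming the open-source `shap` package and reusing the `model` and `X_test` from the screening sketch above:

```python
# Model introspection with SHAP: which descriptors drive the
# predictions, and in which direction?
import shap

explainer = shap.TreeExplainer(model)        # efficient for tree ensembles
shap_values = explainer.shap_values(X_test)  # per-sample feature attributions

# Summary plot of feature importance across the test set.
shap.summary_plot(shap_values, X_test)
```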
Another valuable application for machine learning models arises in scenarios where physics‐based models do exist but are computationally expensive, such as in the case of quantum mechanical simulations. For example, Density Functional Theory (DFT), which is the workhorse of molecular and materials simulations, is relatively affordable for limited‐scale screening but is still constrained by computational costs. By using existing DFT data, a neural network can be trained to act as a surrogate for DFT computations. The advantage here is that running a forward pass through a neural network can be orders of magnitude faster than executing the corresponding DFT computation,[ 30 , 31 ] which enables screening over far larger compositional spaces. It is important to note that a neural network cannot be expected to produce results more accurate than the DFT data upon which it is trained. However, what it can offer is computational speed and strategic guidance for identifying scenarios that warrant more in‐depth analysis with more rigorous physics‐based models. A similar case can be made for the applicability of machine learning in molecular dynamics simulations.[ 32 , 33 ]
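The surrogate idea can be illustrated with a small neural network; in the sketch below the descriptors and "DFT" targets are synthetic placeholders rather than real quantum-mechanical data, which in practice would come from a database such as the Materials Project.

```python
# Sketch of a neural-network surrogate for an expensive calculation.
# X_dft / y_dft stand in for material descriptors and DFT-computed
# target values (e.g., formation energies).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=1)
X_dft = rng.random((20_000, 32))  # placeholder descriptors
y_dft = X_dft @ rng.normal(size=32) + rng.normal(0, 0.05, 20_000)

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=1),
).fit(X_dft, y_dft)

# A forward pass over a million candidates takes seconds, orders of
# magnitude faster than running DFT for each candidate would be.
candidates = rng.random((1_000_000, 32))
energies = surrogate.predict(candidates)
```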
Even when the procedure for a task is well-understood, ML models can still offer value by enabling automation and faster workflows. Image recognition serves as a good example. While it is relatively straightforward for a human to take a photo of a reaction outcome and evaluate whether large crystals have formed, the task is monotonous and time-consuming. A convolutional neural network can perform the same task but without a human in the loop,[ 34 ] which is both cheaper and more time-efficient, even if not necessarily more accurate. Another illustrative example is the automation of X-ray diffraction (XRD) analysis for high-throughput combinatorial experiments.[ 35 , 36 ]
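A minimal sketch of such an image classifier, here written with Keras; the input shape and the binary "crystals / no crystals" labeling are illustrative assumptions, and a real application would require a curated set of labeled photographs.

```python
# Small convolutional network for classifying photos of reaction
# outcomes (e.g., crystals present or not). Shapes are illustrative.
from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Input(shape=(128, 128, 3)),      # RGB photos of vials
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of crystals
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# cnn.fit(images, labels, epochs=10, validation_split=0.2)
```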
Yet another use case is clustering. When dealing with a large volume of unlabelled data, algorithms can be employed to group similar items together, revealing connections and patterns that might not be immediately apparent. These insights can serve as the foundation for subsequent studies aimed at developing more accurate physics‐based models.
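A minimal clustering sketch with k-means on placeholder feature vectors, standing in for whatever unlabelled materials data is at hand:

```python
# Group unlabelled samples into clusters to reveal hidden structure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=2)
X_unlabelled = rng.random((1_000, 8))  # placeholder feature vectors

X_scaled = StandardScaler().fit_transform(X_unlabelled)
labels = KMeans(n_clusters=5, n_init=10, random_state=2).fit_predict(X_scaled)
print(np.bincount(labels))  # how many samples fall in each cluster
```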
2.2. Under the Hood: Data, Features, and Models
Machine learning models at the initial rung of the ML/AI‐ladder generally follow a similar workflow: data collection, data cleaning, feature selection, model selection, model training, and hyperparameter tuning using a subset of the data (i.e., the training and/or validation set), followed by model evaluation using the remaining data (i.e., the test set).[ 37 , 38 , 39 ] What distinguishes the application of machine learning in the field of materials science from other domains is primarily the nature of the data collected and the specific features that are of importance.[ 40 , 41 , 42 , 43 , 44 ]
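The sketch below illustrates this canonical workflow on synthetic data: a train/test split, hyperparameter tuning by cross-validation on the training set, and a single final evaluation on the held-out test set.

```python
# Canonical supervised-learning workflow on synthetic data.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_regression(n_samples=400, n_features=12, noise=0.2, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=3)

# Hyperparameter tuning via 5-fold cross-validation on the training set.
search = GridSearchCV(
    GradientBoostingRegressor(random_state=3),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3, 4]},
    cv=5,
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
# The test set is touched exactly once, for the final evaluation.
print("Test-set R^2:", search.best_estimator_.score(X_test, y_test))
```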
When discussing data in the realm of materials science, it is useful to differentiate among theoretical data, publicly available experimental data, and in-house generated experimental data. Generally, datasets within the materials field tend to be relatively small, with a few notable exceptions. Among the exceptions are DFT databases like the Materials Project,[ 45 ] NOMAD,[ 46 ] and Aflow,[ 47 ] which may have data for a few hundred thousand compounds.
These DFT databases are interesting not only because of their large size but also because they contain data on the materials' crystal structure, from which many of the intrinsic properties of a material are derived. From an ML perspective, a current challenge is how to develop featurization schemes that effectively utilize the information contained within the DFT data. When working with truly large datasets, it may be possible to get away with using very simple features, such as various one-hot encoding schemes. One could, for example, imagine using only atomic numbers as features. This is because more complex, expressive features can be learned during the training process, an approach commonly employed in, for example, image recognition.[ 48 ] While DFT databases may be large in the context of materials science, they are still relatively small when compared to typical ML datasets. This necessitates the creation of richer, more informative features. Additionally, most ML models require feature vectors of consistent lengths for each material. Simpler featurization schemes are often based on the material's composition, with atomic features being averaged based on the stoichiometry of the compounds. Various versions of these exist,[ 49 ] such as Magpie[ 50 ] and Oliynyk.[ 51 ] While easy to compute, these featurization schemes are position-independent and thus overlook valuable structural data. By using the atomic coordinates, it is possible to construct more sophisticated and expressive features. Examples include sine matrices, aimed at generalizing the concept of molecular Coulomb matrices to periodic crystals;[ 52 ] the Smooth Overlap of Atomic Positions fingerprint (SOAP);[ 53 ] Many-Body Tensor Representations (MBTR);[ 53 ] and Partial Radial Distribution Functions (PRDF).[ 54 ] Yet another approach is the use of Graph Neural Networks (GNNs), which focus on the bonding information between atoms in the unit cell rather than their atomic coordinates.[ 55 , 56 , 57 ] Developing functional featurization schemes for materials remains an open field of research, and there will be reasons to return to that topic in later papers.
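To make the composition-based featurization idea concrete, the sketch below shows a deliberately simplified scheme in the spirit of Magpie: tabulated elemental properties (here a tiny illustrative subset) are combined with stoichiometric weights into a fixed-length vector per compound.

```python
# Simplified composition-based featurization. Real schemes such as
# Magpie use dozens of tabulated elemental properties and statistics.
import numpy as np

# (atomic number, atomic mass, Pauling electronegativity)
ELEMENT_PROPS = {
    "Fe": (26, 55.845, 1.83),
    "O":  (8, 15.999, 3.44),
    "Ti": (22, 47.867, 1.54),
}

def featurize(composition: dict) -> np.ndarray:
    """Stoichiometry-weighted mean and spread of elemental properties."""
    total = sum(composition.values())
    props = np.array([ELEMENT_PROPS[el] for el in composition])
    weights = np.array([n / total for n in composition.values()])
    mean = weights @ props                   # weighted average per property
    spread = props.max(axis=0) - props.min(axis=0)
    return np.concatenate([mean, spread])    # fixed length for any compound

print(featurize({"Fe": 2, "O": 3}))  # Fe2O3
print(featurize({"Ti": 1, "O": 2}))  # TiO2
```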
Another valuable source of data comes from experimental results collected in large databases. Historically, the field of materials science has not excelled at creating open‐access experimental databases. A notable exception exists within the crystallographic community, which early on established standards for formatting, reporting, and storing crystallographic data. This proactive approach has led to the creation of databases such as the Crystallographic Open Database (COD)[ 58 ] and the Cambridge Structural Database (CSD),[ 59 ] each housing hundreds of thousands of crystal structures derived from diffraction measurements. These databases greatly complement the theoretical DFT databases discussed above.
Several factors contribute to the limited availability of experimental materials databases. First, experiments are not only challenging to execute but also costly and time-consuming. Materials data is also highly heterogeneous, encompassing a wide array of synthesis and characterization techniques, each of which requires extensive metadata ontologies to be interpretable. Moreover, there are numerous different applications for materials, each emphasizing a distinct set of properties, which further complicates the data landscape. This has not been an environment that encourages strong cultures of open data sharing. Instead, the prevailing practice has been to visualize and describe selected data in academic papers without providing easy access to the raw data. Practices are, however, now gradually changing for the better. In part, this is a consequence of more researchers seeing the value in what is known as FAIR data treatment, i.e., that data should be made findable, accessible, interoperable, and reusable.[ 60 , 61 ] There is also an increasing number of funding agencies, governmental bodies, and publishers demanding that data be shared openly. In both cases, the popularization of ML/AI modelling and the associated need for open data is catalyzing the process.
In addition to publicly available and proprietary databases, there is also in-house data. Although gathering new experimental data demands effort and resources, and the resulting datasets are limited in size, such data is often easier to work with. One advantage is internal consistency; it can be uniformly formatted from the start, the parameter space is well defined, missing values can be complemented, and data from failed experiments are accessible, which can significantly enhance model performance.[ 59 ] Models derived from this type of data may be good for solving specific problems, but they are typically narrow in scope and often not very generalizable. Regarding model selection, a common practice is to explore a range of models available in frameworks like Scikit-learn,[ 14 ] or other frameworks that offer high-level implementations of a wide array of traditional ML algorithms. There are plenty of excellent sources discussing the mathematics and implementation of such models in detail.[ 18 , 19 ]
2.3. Consequences
Utilizing ML models in the way described in this section has the potential to accelerate research, uncover hidden patterns, and simplify the screening of new materials. At this stage, machine learning serves as a set of tools that, when properly implemented, is an indispensable part of modern research practice. Consequently, mastering these tools should be an essential part of any STEM education. However, while valuable, ML modelling at this level is not revolutionary in nature. It primarily involves employing robust statistical methods, translated into computer code, and adopting a mindset that treats all data – both positive and negative – as valuable assets. While perhaps not transformative, those who adopt machine learning techniques and this data-centric mindset are likely to experience increased productivity and be able to tackle more complex research questions.
3. The Second Rung: Robots, Automation, and Physical Manipulations
3.1. The Case for Automation
The next step up the complexity ladder occurs when machine learning models gain the ability to directly interface with physical laboratory equipment. At this stage, the models begin to use their predictive capabilities to autonomously manipulate the physical environment, by for example synthesizing new samples or generating new measurement data. The enabler for this direct interaction is robotics, which is intrinsically tied to the concept of automation.
Automation has, since the industrial revolution, served as a catalyst for enhancing efficiency, increasing throughput, reducing cost, and liberating humans from repetitive tasks. While academic research has not been immune to this trend, the complex and ever-shifting nature of research activities has made them more challenging to automate compared to standardized industrial processes. Human dexterity and adaptability are hard to outcompete when it comes to moving samples around and manipulating vials, pipettes, bottles, powders, and other items designed for human operation. Consequently, automation in academic settings has largely been confined to specialized instruments capable of executing well-defined, repetitive tasks, with sample exchangers and pipetting robots being prime examples. The investment cost, the skillset, and the commitment required for complete lab automation have also been limiting factors.
With cheaper hardware and better software, we are now witnessing a growing number of examples of lab automation, with high-throughput synthesis and characterization platforms that can automate an increasing number of consecutive steps in the research process.[ 36 , 62 , 63 , 64 , 65 , 66 , 67 ] These systems are often referred to as Materials Acceleration Platforms (MAPs),[ 64 , 68 ] and they can vary in complexity and in the number of tasks they can execute.
One type of MAP is based on microfluidic systems, which allow for precise mixing of small liquid volumes and high-time-resolution monitoring of reactions through optical methods.[ 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 , 77 ] These systems offer the advantages of minimal sample volumes, high precision, and high throughput, with potentially thousands of experiments per day. However, they are constrained in terms of the types of chemistries that can be investigated and the in-line characterization techniques that can be applied.
A more versatile approach involves the use of pipetting robots or robotic arms for manipulation of vials and pipettes and standard liquid-based synthesis, as well as transfer of samples between various measurement stations.[ 63 , 78 ] This enables the exploration of a broader range of chemistries and allows for workflows that incorporate a variety of standard equipment. Essentially anything that fits on a lab bench could be integrated into such workflows. At the even higher end of the complexity spectrum are autonomous, self-navigating collaborative robots that can be integrated into standard lab environments. These advanced robots are capable, in principle, of executing any manual task that a human researcher could perform.[ 66 , 79 ] Burger et al. have provided a nice example of such a system exploring new photocatalysts.[ 64 ]
When executed effectively, robot-assisted lab automation can substantially increase sample throughput compared to traditional manual experimentation. Moreover, it enhances data consistency by minimizing human variability, and it simplifies automatic logging of data and related metadata. Robot-assisted lab work is at its core not fundamentally different from traditional artisanal lab work. However, the sheer increase in data output made possible within given time and budget constraints can cause this quantitative advantage to morph into a qualitative change as well. A parallel to such a transformation can be seen in computing, where more powerful computers have not only accelerated computations but also unlocked entirely new possibilities. Lab automation may be transformative in the same way. If you can suddenly synthesize and characterize samples at a rate 1000 times faster than before, it opens the door to exploring entirely new research questions.
While lab automation offers significant advantages, it is not a one‐size‐fits‐all solution. High‐quality robotic systems targeting laboratory work remain costly and are relatively scarce. Moreover, the learning curve to fully utilize these systems can be steep. In a dynamic lab setting where research focus frequently shifts, the cost‐effectiveness of robotic automation may also be questionable for short‐term projects. However, the trajectory is promising. The cost of robotic solutions is gradually decreasing, while their availability, user‐friendliness, and adaptability are on the rise. As these trends continue, robot‐assisted experimentation is poised to become an increasingly appealing option for accelerating materials research.
Even though robotic automation offers several advantages, it is important to remember that in traditional setups, robots only execute tasks explicitly programmed by humans. Moreover, even with the most efficient robots, we can only explore a tiny fraction of the synthetic parameter space, except for the most constrained problems. Lab automation therefore does not remove the intellectual challenges inherent in experimental research. It is still up to the human researcher to formulate relevant questions, define the boundary of the parameter space to explore, decide which experiments should be conducted, and interpret the data generated.
3.2. Combining Robots with Machine Learning
Another step up in complexity involves integrating robotics and lab automation with machine learning and artificial intelligence. This has the potential to augment not just the manual but also the intellectual aspects of research. One emerging concept in this realm is closed‐loop experimentation, which aims to minimize human involvement in the research process as much as possible (Figure 2 ).[ 63 , 64 ] The core idea behind this concept is the recognition that the development of new functional materials often resembles an optimization problem. Typical research objectives include identifying material compositions with specific properties, as well as determining the synthetic conditions that enable these materials to achieve the desired microstructure and how to incorporate them into devices. These challenges usually involve navigating large, nonlinear, multidimensional parameter spaces under the hypothesis that a specific region within these spaces will yield the desired results. Even with a relatively small number of variables and a coarse grid, conducting an exhaustive search becomes impractical within any reasonable budget. A critical task, therefore, is to wisely select experiments to minimize the path travelled toward the goal while navigating these large multidimensional parameter spaces.
Figure 2.
Workflow for Bayesian optimization combined with robotics for accelerated experimentation. It is up to the human researcher to formulate hypotheses and set the experimental boundaries (1). With initial data and insights (2), a Gaussian process (GP) can be used to generate a prior (3), here illustrated for 1D data. Based on the Gaussian process, an acquisition function (AQ) is computed and optimized (4), which guides a robot system (5) to perform a new synthesis and set of measurements. The generated data is then automatically analyzed (6), after which the prior is updated (3). The process is repeated until a stopping criterion is reached. A final model is then presented (7), which can be used as the basis for new models and theory (8), or more dedicated experiments (9).
Several strategies exist for automating optimization, with Bayesian optimization[ 78 , 80 , 81 , 82 , 83 , 84 ] being a popular example; genetic algorithms are another.[ 85 ] A core idea behind Bayesian optimization is to initiate the process with a few randomly selected experiments, or to leverage prior experience, to construct a preliminary model of the system. This model is often termed the hypothesis function, or the prior. Gaussian processes are a popular choice for these functions as they provide not only interpolated estimates but also uncertainty estimates. The goal in designing the prior is to ensure it can be easily optimized to achieve the overarching research objective. This optimized prior then serves as a guide for selecting the next experiment to conduct. After executing the recommended experiment, the newly acquired data can be used to refine the existing prior model. By iteratively performing these steps, researchers can dramatically reduce the number of necessary experiments, enabling more efficient navigation through the parameter space compared to traditional design-of-experiments methods.[ 86 , 87 ]
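A minimal sketch of one such iteration is given below, using a Gaussian process from scikit-learn and an expected-improvement acquisition function over a single synthesis parameter; the observations and parameter range are placeholders, and a real closed-loop system would wrap this in a loop with the robot executing `next_x` and appending the measured result.

```python
# One Bayesian-optimization iteration: fit a GP to the experiments done
# so far, compute expected improvement over candidates, pick the next
# experiment.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Experiments so far: 1D synthesis parameter -> measured objective.
X_obs = np.array([[0.1], [0.4], [0.7]])
y_obs = np.array([0.3, 0.8, 0.5])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_obs, y_obs)

# Candidate grid spanning the allowed parameter range.
X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
mu, sigma = gp.predict(X_cand, return_std=True)

# Expected improvement over the best observation (maximization).
best = y_obs.max()
z = (mu - best) / np.maximum(sigma, 1e-9)
ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

next_x = X_cand[np.argmax(ei)]  # the experiment the robot runs next
print("Next experiment at parameter value:", next_x)
```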
Traditional iterative experimental development often follows a similar logic, even if this process is not always formalized or consciously acknowledged. An automated and mathematically formalized approach eliminates human ambiguity and removes the bottleneck caused by manual data evaluation and experimental planning after each test. However, the application of robot-assisted Bayesian optimization in materials science is still a rather new practice. Much development remains in terms of best practices, user-friendliness, and cost-effectiveness before closed-loop systems become standard equipment. As robots become more affordable, successful case studies increase in number, and software integration grows increasingly sophisticated, we can anticipate that these methods will eventually become standard practice in the academic research toolkit.
3.3. Consequences
Closed‐loop experimentation represents a qualitative leap forward in the ongoing quest to conduct more research with fewer resources. By combining the high‐throughput capabilities of automation with the efficiency of Bayesian optimization, which automates data analysis and guides subsequent experiments, substantial advantages can be realized. While this approach may not be applicable to every research problem, when it is effective, it has the potential to dramatically accelerate the pace of discovery.[ 67 ] Compared to traditional methods of experimentation and conventional automation, this approach represents more than just an increase in throughput. It marks a significant qualitative shift by introducing autonomous decision‐making. Here we are not only replacing and/or expanding the human capacity for manual labor and number crunching. We are also augmenting intellectual aspects of the research process. This is particularly evident in how the system autonomously hypothesizes the best subsequent experiment after each round of measurements. We may today be at the initial stages of this development, but continued progress may from a laboratory perspective fundamentally change the relationship between humans and machines. The dynamic may shift from one where machines serve to augment human scientific capabilities to one where humans assist the machine to work as efficiently as possible. This transition will undoubtedly occur in incremental steps, but it is worth contemplating how these changes could reshape the skill sets required for competitive materials research.
The human experience of research may change at this level of artificial intelligence. Even so, it will not make humans obsolete. The closed-loop experimentation paradigm can accelerate optimization processes, handle the practical aspects of experiments, and even automate intermediate data analysis and decision-making. However, the intellectual underpinnings of the research – identifying what is worth exploring, formulating research questions, and deciding what to optimize – will still rely on human vision and ingenuity. It will also be up to humans to set the boundaries for the optimization and to interpret the significance of the results. The intellectual load placed upon the human researcher could actually be expected to increase. While machines may handle an increasing share of the operational workload, there will be an increased demand for generating hypotheses and formulating research questions. There will also be an increased demand for the strategic and interpretive aspects of research.
Operating within this new paradigm will require a certain skillset. Programming, tinkering with robot equipment, advanced data analysis, and strategic experimental planning are already valuable competencies, but they are likely to become even more important for researchers aiming to stay competitive. These skills should therefore be given greater emphasis in research education. The pace of hypothesis testing will also intensify. Gone are the days when a single good idea could fuel months of data collection and analysis in the lab. With automated systems, preliminary answers could arrive in a matter of days, or even hours, necessitating a continuous stream of new ideas for exploration. This quick turnover will place greater demands on researchers to generate hypotheses and adapt more rapidly to the results. The ability to think fast, broadly, and innovatively will thus become even more valuable in the research landscape of the future.
Closed-loop experimentation may also require roles that could be classified as less skilled, although they are essential for system functioning. In principle, everything could be automated with enough resources. However, a cost-benefit analysis will probably often favor flexible humans with dexterous hands for tasks like supplying clean substrates, vials, and pipettes, preparing stock solutions, weighing dry chemicals, unpacking new deliveries, and waste management.
Another significant shift that closed‐loop experimentation could catalyze is the transformation of the types of services that laboratories can provide. Currently, it is common to offer what can be called “analysis as a service,” where samples are sent to an external lab for specialized testing. In the future, we may instead see the rise of what can be called “optimization as a service.” In this evolved model, instead of sending a sample, clients would provide the lab with specific boundary conditions and objectives. The lab would then use automated systems to identify the optimal conditions within the provided parameter space given the stated objectives. This could dramatically expand the scope and efficiency of laboratory services.
4. The Third Rung: Generative Models and Hypothesis Generation
The next rung up the AI/ML ladder encompasses a broader development of artificial intelligence with potentially far-reaching implications for numerous aspects of human life, including material science. At this level of sophistication, we encounter large language models (LLMs) like GPT-4,[ 6 ] LaMDA,[ 88 ] and LLaMA,[ 89 ] which have recently attracted much attention for their ability to generate text of human quality, based on neural networks utilizing the transformer architecture.[ 90 , 91 ] This technology is still in its early stages and is evolving rapidly, and its future potential remains exciting but uncertain. Nevertheless, we can already today start to see how these models could be utilized in material science research.[ 92 ]
The core strength of large language models lies in text generation, making them particularly useful for writing applications.[ 93 ] They already excel at condensing complex text into more digestible formats, such as educational materials or public communication documents.[ 94 , 95 ] They have a tendency to be factually incorrect, and they are not yet capable of writing scientific papers that would pass peer review – we think. However, they are sufficiently good at simpler writing exercises to cause some panic in the educational sector, and when used as a writing assistant, they could improve the text quality of most average writers.
If trained on comprehensive scientific literature databases, these models have the capability to mine, scan, summarize, and analyse the academic literature.[ 96 , 97 , 98 , 99 ] While they are not designed to replace human experts with specialized domain knowledge, these models could significantly streamline and simplify the process of conducting literature reviews.[ 98 , 100 ] This could become an invaluable tool for researchers trying to understand a field and identify emerging trends and key discoveries within it.
One of the most intriguing possibilities, however, lies in the potential for these AI systems to generate new hypotheses based on existing knowledge, which can then be explored experimentally. Such generative AI could recommend novel material systems, suggest alterations to existing systems, or assist in brainstorming innovative methodologies, techniques, or experiments worth pursuing. At the time of writing, state-of-the-art models, like ChatGPT, still find it challenging to produce hypotheses robust enough to serve as the foundation for a scholarly article. It is not impossible, but succeeding requires both domain knowledge and a bit of luck. However, they are not all that far from being there,[ 101 , 102 ] and they can quite consistently provide topics, questions, and ideas that could form a good basis for a Ph.D. research project.[ 103 ] Given the rapid pace of advancements in this field, we can anticipate that these models will mature into highly effective tools for academic research.
This marks a significant step into what has traditionally been an exclusively human intellectual domain. However, in its current form, rather than posing a risk of replacing humans, this technology has the potential to significantly amplify our capacity for generating hypotheses. A plausible workflow might involve employing generative AI as an assistant to brainstorm a list of ten novel hypotheses, followed by utilizing its capabilities to sift through existing literature to assess the plausibility, originality, and significance of each hypothesis. Armed with this groundwork, the human researcher can then make an informed decision about which hypotheses seem most promising and design new experiments accordingly.
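A hypothetical sketch of the first step of such a workflow is given below. It assumes the `openai` client library, and the model name, prompt, and research topic are placeholders to be adapted to whatever provider and research question is at hand.

```python
# Hypothetical brainstorming step with an LLM API (placeholder model
# name and prompt; adapt to the provider actually used).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

brainstorm = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": ("Suggest ten novel, experimentally testable hypotheses "
                    "for improving the stability of perovskite solar cells."),
    }],
)
hypotheses = brainstorm.choices[0].message.content

# A follow-up call could ask the model to rank the hypotheses for
# plausibility, originality, and significance against the literature,
# before a human decides which ones to pursue experimentally.
print(hypotheses)
```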
One distinguishing feature of ML/AI‐models at this level of abstraction, compared to lower rungs on the AI ladder, is model size and generalizability. Earlier, we discussed smaller models that typically are trained on a moderate volume of data generated either in‐house or extracted from specialized databases. For these models, a basic understanding of the underlying mathematics and code implementation is essential to unlock their full potential. For generative models, like LLMs or text‐to‐image generation, the situation is different. These are instead expansive models trained on vast datasets – essentially, a substantial portion of the text or image data available on the internet. Consequently, what the average user interacts with is not the complex mathematical underpinnings, but the trained model and its user interface. While a considerable amount of effort will be invested in refining and evolving these models, the primary concern for material scientists will be how to harness the capabilities of these models and how to use them as building blocks in new workflows.
Large language models are not the only generative technology to have recently attracted massive attention outside academic circles. The field of text-to-image generation, exemplified by techniques such as stable diffusion, is another. To date, these technologies have primarily been employed for artistic endeavours, generating stunning images that, among other things, have sparked conversations about the essence of art.[ 104 , 105 , 106 ] While the potential applications within material science remain unclear at the moment, it is not unreasonable to anticipate that compelling use-cases will eventually be found also for this technology.
4.1. Consequences
Sophisticated generative artificial intelligence is still a relatively recent development, and we are in the process of discovering all the ways it can augment, enhance, and accelerate our research efforts. The rapid advancements these technologies are currently experiencing add another layer of uncertainty, making it challenging to foresee the full extent of their future capabilities. However, one thing is clear: these systems have the potential to become invaluable and transformative research tools. These AI systems could streamline the process of identifying patterns and trends in scientific literature, enhance the quality of scientific writing and public dissemination, assist in coding for data analysis and visualization, and, perhaps most crucially, amplify our ability to generate research hypotheses. For research groups aiming to maintain long-term competitiveness, it would be highly advisable to closely monitor these technological advancements and experiment with how they could be used to augment, improve, and extend the research process. At this stage of sophistication, researchers will not lose their work to an AI agent, but they may lose it to researchers who have figured out how to use AI systems to improve the quality and throughput of their own research.
5. The Fourth Rung: Orchestration and Autonomy
We are currently witnessing rapid growth in increasingly capable AI systems, each designed to handle specific tasks. The next step up the AI ladder is less about inventing new technologies and more about creatively combining already existing ML/AI modules. This can be conceptualized as orchestrating sub‐modules into larger, more versatile systems. For example, if a large language model is integrated with a voice‐to‐text module, a language translation module, domain‐specific dictionaries, a physics engine, a mathematics program, a web crawler, a CAD program, and a text‐to‐voice module, the creation of a comprehensive personal digital assistant becomes an achievable goal.
To envision a hypothetical use case, let's say a user speaks to the computer, asking if there is any substance that could be used as a dye with strong absorption in the green wavelength range, is soluble in toluene, non‐toxic, and is not too expensive. The voice‐to‐text module would first transcribe the spoken question into text. Then, the language model would interpret the meaning of the query. Specialized algorithms would mine scientific literature for potential candidates, while web crawlers would scrape commercial websites for pricing and availability data. Additional specialized modules, including a physics engine, could perform the necessary computations to evaluate the suitability of the target molecules. Finally, the text‐to‐voice module would present the user with a suggestion and inquire whether an order for the selected molecule should be placed.
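To hint at how such an orchestration could be wired together, the sketch below chains a few stub functions standing in for the specialized modules of the dye example; every function, name, and threshold here is a hypothetical placeholder rather than any existing system.

```python
# Hypothetical orchestration of stub modules behind a single query.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    absorption_peak_nm: float
    soluble_in_toluene: bool
    toxic: bool
    price_per_gram: float

def mine_literature(query: str) -> list:
    """Stand-in for a literature-mining module returning candidate dyes."""
    return [Candidate("dye A", 530.0, True, False, 12.0),
            Candidate("dye B", 545.0, True, True, 4.0)]

def check_price(c: Candidate, max_price: float) -> bool:
    """Stand-in for a web crawler checking commercial availability."""
    return c.price_per_gram <= max_price

def answer(query: str) -> list:
    # The orchestrator chains the modules and applies the constraints.
    candidates = mine_literature(query)
    return [c for c in candidates
            if 495 <= c.absorption_peak_nm <= 570  # green wavelength range
            and c.soluble_in_toluene and not c.toxic
            and check_price(c, max_price=20.0)]

print(answer("green-absorbing, toluene-soluble, non-toxic dye"))
```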
Integrating various modules to operate cohesively is no easy task, but it may be less daunting than developing all the specialized narrow AI models was. Progress in this area is already underway, exemplified by initiatives from companies like Hugging Face, which has shown that it is possible to use an LLM as a controller to manage existing AI models and solve sophisticated AI tasks in different modalities and domains.[ 8 ] Examples in material science where LLMs are connected to robotic experimentation are still few, but they include recrystallization experiments,[ 107 ] successful performance of catalyzed cross-coupling reactions,[ 108 ] and synthesis of humidity colorimetric sensors.[ 109 ] The reports currently available on this topic have the character of initial proofs of concept but provide an indication of where we may be heading.
A valuable distinction to make is between the orchestration of digital and physical systems. The integration of digital components is likely to precede their physical counterparts, primarily because it doesn't necessitate the development of new robotic hardware. The closed‐loop materials platforms mentioned earlier serve as an example of how digital models can be integrated with robotics. For the foreseeable future, robotics will likely remain the most significant challenge in these types of integrative efforts. However, in principle, there are no inherent limitations preventing us from expanding these systems to incorporate increasingly sophisticated and capable models, both in the realms of robotics and AI.
Robots are generally specialized to excel in a narrow set of tasks. For example, a pipetting robot is adept at pipetting but cannot do anything more. Even a robotic arm, while somewhat more flexible, has its own set of limitations. The problem for robots is the bar set by humans, whose repertoire of motions is incredibly diverse and flexible, guided by complex sensory input and computational power – most of which we take for granted. Steve Wozniak's "coffee test" serves as a popular illustration of this challenge. A human can easily walk into an unfamiliar kitchen and make a cup of coffee, a feat that is extraordinarily difficult for a robot unless the kitchen has been specifically designed for robotic coffee preparation. This limitation is referred to as the problem of universal robotics. To fully automate a physical lab environment and eliminate the need for human intervention, significant advancements in universal robotics will be necessary.
5.1. Consequences
In a future with affordable universal robotics, we could envision these technologies to be fully integrated with the AI models previously discussed (Figure 3 ). This would enable fully autonomous scientific facilities. At such facilities, all we would need to provide are ideas, hypotheses, objectives, and capital. In return, we would receive data, materials, and insights. This would significantly alter the role of human researchers. For instance, the hands‐on, practical skills that currently constitute a large part of many Ph.D. students' daily work would become a thing of the past. Instead, we would need a stronger emphasis on data science, a deeper understanding of the theoretical aspects of our chosen field, a more comprehensive view of the bigger picture, and a clear sense of what we seek to discover – and why we want it. Particularly, this last point may define our enduring role in the scientific process. As automation takes over many tasks that today are considered intellectual labor, what remains uniquely human is the ability to weave the broader narrative of “why.” Ultimately, the core purpose of research is to cater to human needs, aspirations, and curiosity. As long as we can articulate those objectives, there will be a place for humans in the research process, albeit different from what it is today.
Figure 3.
Illustration of the human-AI interaction in an orchestrated system with a central AI unit, using an LLM as an interface, which has access to the scientific literature and to physics and mathematics engines, can search the internet, store data in databases, and control robot-driven experimentation.
6. The Fifth Rung and Beyond: Toward the Singularity
Even with the remarkable advancements covered in the preceding sections, there are still numerous rungs to ascend on the AI ladder toward ever-greater complexity, sophistication, and capability. At the heart of this discussion is the concept of artificial general intelligence (AGI): an AI system that has reached the cognitive flexibility to perform any intellectual task that a human can. Whether AGI is an unattainable goal or an imminent reality is a subject under lively debate.[ 110 , 111 ] However, one thing is certain: if AGI becomes a reality, it will open a Pandora's box of unknowns, with strong arguments suggesting it could be one of the most transformative developments in human history. The core argument posits that an AGI, unencumbered by the biological constraints that limit human intelligence, could initiate a positive feedback loop.[ 112 ] In this loop, the AGI would continually use its computational prowess to refine and enhance its own algorithms, potentially leading to a state of superintelligence.[ 113 ] Once this self-amplifying cycle is established, the concept of a technological singularity – the hypothetical point in time at which technological growth becomes uncontrollable and irreversible – is not an implausible scenario.[ 114 ] Under those conditions, the question we set out with in the beginning – "What role will material scientists play in the era of artificial intelligence?" – transitions from a subject suitable for educated speculation to one more appropriately confined to the realm of science fiction.
7. Concluding Remarks
We are currently witnessing rapid advancements in increasingly sophisticated machine learning and artificial intelligence systems. Even if general artificial intelligence may not be imminent, these technologies provide invaluable tools that can significantly accelerate efforts in material science, for example in developing and discovering new functional materials aimed at addressing urgent global challenges. In this paper, we have arranged ML/AI approaches based on their level of sophistication, spanning from simple regression analysis to AI-guided robotic systems, generative models for hypothesis generation, and the orchestration of specialized AI modules as a stepping-stone toward general artificial intelligence. As these models increase in sophistication, so does their potentially transformative impact on material development. However, this also necessitates a shift in the skill sets required by researchers. We anticipate that the skills that will increase in value include data science, programming, a deep understanding of the theoretical aspects of the chosen field, and a clear vision of what we aim to discover – and why it is important to do so.
Looking forward, it is unlikely that the typical materials researcher will be replaced by AI agents within the next few decades. However, they may find themselves outperformed by researchers who have successfully harnessed the power of AI to enhance both the quality and efficiency of their work. Therefore, the overarching advice for those wishing to stay competitive is to invest in understanding and mastering the emerging ML/AI methods and models, and to experiment with how one can leverage their capabilities to improve both the quantity and quality of research.
Conflict of Interest
The authors declare no conflict of interest.
Use of AI
Perhaps somewhat ironically, given that this is a perspective about the usefulness of AI, the intellectual work in this perspective was done entirely by humans. However, ChatGPT 4 was used as a writing assistant to suggest improvements to the grammar and flow of the text, which is a great use for non-native English speakers.
Acknowledgements
The author would like to acknowledge the Ministry of Science and Technology in China via the National Key Research and Development Program of China (Grant No. 2021YFF0500501), Applied Basic Research Projects in Tianjin (Grant No. 22JCYBJC01530), and Åforsk (Grant No. 23–629).
Maqsood A., Chen C., Jacobsson T. J., The Future of Material Scientists in an Age of Artificial Intelligence. Adv. Sci. 2024, 11, 2401401. 10.1002/advs.202401401
References
- 1. Campbell M., Hoane A. J. Jr., Hsu F.‐H., Artif. Intell. 2002, 134, 57. [Google Scholar]
- 2. Ferrucci D. A., IBM J. Res. Dev. 2012, 56, 1:1. [Google Scholar]
- 3. Silver D., Schrittwieser J., Simonyan K., Antonoglou I., Huang A., Guez A., Hubert T., Baker L., Lai M., Bolton A., Nature 2017, 550, 354. [DOI] [PubMed] [Google Scholar]
- 4. Jumper J., Evans R., Pritzel A., Green T., Figurnov M., Ronneberger O., Tunyasuvunakool K., Bates R., Žídek A., Potapenko A., Nature 2021, 596, 583. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5. Badue C., Guidolini R., Carneiro R. V., Azevedo P., Cardoso V. B., Forechi A., Jesus L., Berriel R., Paixao T. M., Mutz F., Expert Syst. Appl. 2021, 165, 113816. [Google Scholar]
- 6. Achiam J., Adler S., Agarwal S., Ahmad L., Akkaya I., Aleman F. L., Almeida D., Altenschmidt J., Altman S., S. Anadkat , arXiv: 2303.08774, 2023.
- 7. Rombach R., Blattmann A., Lorenz D., Esser P., Ommer B., presented at Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, June 2022. [Google Scholar]
- 8. Shen Y., Song K., Tan X., Li D., Lu W., Zhuang Y., Advances in Neural Information Processing Systems 2024, 36. [PMC free article] [PubMed] [Google Scholar]
- 9. Brynjolfsson E., Mcafee A., Harvard Bus. Rev. 2017, 1, 1. [PubMed] [Google Scholar]
- 10. Oosthuizen R. M., Front. Artif. Intell. 2022, 5, 913168. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11. Wang W., Siau K., J. Database Manage. (JDM) 2019, 30, 61.
- 12. Susskind D., Susskind R., Proc. Am. Philos. Soc. 2018, 162, 125.
- 13. Hornik K., Stinchcombe M., White H., Neural Networks 1989, 2, 359.
- 14. Pedregosa F., Varoquaux G., Gramfort A., Michel V., Thirion B., Grisel O., Blondel M., Prettenhofer P., Weiss R., Dubourg V., J. Mach. Learn. Res. 2011, 12, 2825.
- 15. Abadi M., Agarwal A., Barham P., Brevdo E., Chen Z., Citro C., Corrado G. S., Davis A., Dean J., Devin M., arXiv: 1603.04467, 2016.
- 16. Gulli A., Pal S., Deep Learning with Keras, Packt Publishing Ltd, Birmingham: 2017.
- 17. Paszke A., Gross S., Massa F., Lerer A., Bradbury J., Chanan G., Killeen T., Lin Z., Gimelshein N., Antiga L., Advances in Neural Information Processing Systems, Curran Associates, Inc., 2019, p. 32.
- 18. Géron A., Hands‐On Machine Learning with Scikit‐Learn, Keras, and TensorFlow, O'Reilly Media, Inc., Sebastopol, CA: 2022.
- 19. Raschka S., Liu Y. H., Mirjalili V., Dzhulgakov D., Machine Learning with PyTorch and Scikit‐Learn: Develop Machine Learning and Deep Learning Models with Python, Packt Publishing Ltd, Birmingham: 2022.
- 20. Odabaşı Ç., Yıldırım R., Nano Energy 2019, 56, 770.
- 21. She C., Huang Q., Chen C., Jiang Y., Fan Z., Gao J., J. Mater. Chem. A 2021, 9, 25168.
- 22. Li J., Pradhan B., Gaur S., Thomas J., Adv. Energy Mater. 2019, 9, 1901891.
- 23. Liu Y., Yan W., Han S., Zhu H., Tu Y., Guan L., Tan X., Sol. RRL 2022, 6, 2101100.
- 24. Boobier S., Hose D. R., Blacker A. J., Nguyen B. N., Nat. Commun. 2020, 11, 5753.
- 25. Wu Y., Wang G., Int. J. Mol. Sci. 2018, 19, 2358.
- 26. Jukič M., Bren U., Front. Pharmacol. 2022, 13, 864412.
- 27. Stokes J. M., Yang K., Swanson K., Jin W., Cubillos‐Ruiz A., Donghia N. M., MacNair C. R., French S., Carfrae L. A., Bloom‐Ackermann Z., Cell 2020, 180, 688.
- 28. Zhao Q., Bhowmick S. S., Association Rule Mining: A Survey, Nanyang Technological University, Singapore: 2003, p. 135.
- 29. Lundberg S. M., Lee S.‐I., Advances in Neural Information Processing Systems, Curran Associates, Inc., 2017, p. 30.
- 30. Gubaev K., Podryabinkin E. V., Hart G. L., Shapeev A. V., Comput. Mater. Sci. 2019, 156, 148.
- 31. Fung V., Ganesh P., Sumpter B. G., Chem. Mater. 2022, 34, 4848.
- 32. Botu V., Ramprasad R., Int. J. Quantum Chem. 2015, 115, 1074.
- 33. Noé F., Tkatchenko A., Müller K.‐R., Clementi C., Annu. Rev. Phys. Chem. 2020, 71, 361.
- 34. Kirman J., Johnston A., Kuntz D. A., Askerka M., Gao Y., Todorović P., Ma D., Privé G. G., Sargent E. H., Matter 2020, 2, 938.
- 35. Massuyeau F., Broux T., Coulet F., Demessence A., Mesbah A., Gautier R., Adv. Mater. 2022, 34, 2203879.
- 36. Sun S., Hartono N. T., Ren Z. D., Oviedo F., Buscemi A. M., Layurova M., Chen D. X., Ogunfunmi T., Thapa J., Ramasamy S., Joule 2019, 3, 1437.
- 37. Badillo S., Banfai B., Birzele F., Davydov I. I., Hutchinson L., Kam‐Thong T., Siebourg‐Polster J., Steiert B., Zhang J. D., Clin. Pharmacol. Ther. 2020, 107, 871.
- 38. Alpaydin E., Introduction to Machine Learning, 4th ed., MIT Press, Cambridge: 2020.
- 39. Mueller J. P., Massaron L., Machine Learning for Dummies, 2nd ed., John Wiley & Sons, Hoboken, NJ: 2021.
- 40. Westermayr J., Gastegger M., Schütt K. T., Maurer R. J., J. Chem. Phys. 2021, 154, 230903.
- 41. Artrith N., Butler K. T., Coudert F.‐X., Han S., Isayev O., Jain A., Walsh A., Nat. Chem. 2021, 13, 505.
- 42. Chen C., Maqsood A., Jacobsson T. J., J. Alloys Compd. 2023, 960, 170824.
- 43. Wang A. Y.‐T., Murdock R. J., Kauwe S. K., Oliynyk A. O., Gurlo A., Brgoch J., Persson K. A., Sparks T. D., Chem. Mater. 2020, 32, 4954.
- 44. Schmidt J., Marques M. R., Botti S., Marques M. A., npj Comput. Mater. 2019, 5, 83.
- 45. Jain A., Ong S. P., Hautier G., Chen W., Richards W. D., Dacek S., Cholia S., Gunter D., Skinner D., Ceder G., APL Mater. 2013, 1, 011002.
- 46. Draxl C., Scheffler M., J. Phys.: Mater. 2019, 2, 036001.
- 47. Curtarolo S., Setyawan W., Hart G. L., Jahnatek M., Chepulskii R. V., Taylor R. H., Wang S., Xue J., Yang K., Levy O., Comput. Mater. Sci. 2012, 58, 218.
- 48. He K., Zhang X., Ren S., Sun J., presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016.
- 49. Murdock R. J., Kauwe S. K., Wang A. Y.‐T., Sparks T. D., Integr. Mater. Manuf. Innovation 2020, 9, 221.
- 50. Ward L., Agrawal A., Choudhary A., Wolverton C., npj Comput. Mater. 2016, 2, 16028.
- 51. Oliynyk A. O., Antono E., Sparks T. D., Ghadbeigi L., Gaultois M. W., Meredig B., Mar A., Chem. Mater. 2016, 28, 7324.
- 52. Faber F., Lindmaa A., von Lilienfeld O. A., Armiento R., Int. J. Quantum Chem. 2015, 115, 1094.
- 53. Bartók A. P., Kondor R., Csányi G., Phys. Rev. B 2013, 87, 184115.
- 54. Schütt K. T., Glawe H., Brockherde F., Sanna A., Müller K.‐R., Gross E. K., Phys. Rev. B 2014, 89, 205118.
- 55. Xie T., Grossman J. C., Phys. Rev. Lett. 2018, 120, 145301.
- 56. Chen C., Ye W., Zuo Y., Zheng C., Ong S. P., Chem. Mater. 2019, 31, 3564.
- 57. Merchant A., Batzner S., Schoenholz S. S., Aykol M., Cheon G., Cubuk E. D., Nature 2023, 624, 80.
- 58. Gražulis S., Chateigner D., Downs R. T., Yokochi A., Quirós M., Lutterotti L., Manakova E., Butkus J., Moeck P., Le Bail A., J. Appl. Crystallogr. 2009, 42, 726.
- 59. Groom C. R., Bruno I. J., Lightfoot M. P., Ward S. C., Acta Crystallogr., Sect. B: Struct. Sci., Cryst. Eng. Mater. 2016, 72, 171.
- 60. Wilkinson M. D., Dumontier M., Aalbersberg I. J., Appleton G., Axton M., Baak A., Blomberg N., Boiten J.‐W., da Silva Santos L. B., Bourne P. E., Sci. Data 2016, 3, 160018.
- 61. Draxl C., Scheffler M., MRS Bull. 2018, 43, 676.
- 62. Langner S., Häse F., Perea J. D., Stubhan T., Hauch J., Roch L. M., Heumueller T., Aspuru‐Guzik A., Brabec C. J., Adv. Mater. 2020, 32, 1907801.
- 63. MacLeod B. P., Parlane F. G., Morrissey T. D., Häse F., Roch L. M., Dettelbach K. E., Moreira R., Yunker L. P., Rooney M. B., Deeth J. R., Sci. Adv. 2020, 6, eaaz8867.
- 64. Wagner J., Berger C. G., Du X., Stubhan T., Hauch J. A., Brabec C. J., J. Mater. Sci. 2021, 56, 16422.
- 65. Du X., Lüer L., Heumueller T., Wagner J., Berger C., Osterrieder T., Wortmann J., Langner S., Vongsaysy U., Bertrand M., Joule 2021, 5, 495.
- 66. Burger B., Maffettone P. M., Gusev V. V., Aitchison C. M., Bai Y., Wang X., Li X., Alston B. M., Li B., Clowes R., Nature 2020, 583, 237.
- 67. Szymanski N. J., Rendy B., Fei Y., Kumar R. E., He T., Milsted D., McDermott M. J., Gallant M., Cubuk E. D., Merchant A., Nature 2023, 624, 86.
- 68. Flores‐Leonar M. M., Mejía‐Mendoza L. M., Aguilar‐Granda A., Sanchez‐Lengeling B., Tribukait H., Amador‐Bedolla C., Aspuru‐Guzik A., Curr. Opin. Green Sustainable Chem. 2020, 25, 100370.
- 69. Epps R. W., Bowen M. S., Volk A. A., Abdel‐Latif K., Han S., Reyes K. G., Amassian A., Abolhasani M., Adv. Mater. 2020, 32, 2001626.
- 70. Abdel‐Latif K., Epps R. W., Bateni F., Han S., Reyes K. G., Abolhasani M., Adv. Intell. Syst. 2021, 3, 2000245.
- 71. Li S., Baker R. W., Lignos I., Yang Z., Stavrakis S., Howes P. D., deMello A. J., Mol. Syst. Des. Eng. 2020, 5, 1118.
- 72. Epps R. W., Volk A. A., Abdel‐Latif K., Abolhasani M., React. Chem. Eng. 2020, 5, 1212.
- 73. Bateni F., Epps R. W., Abdel‐Latif K., Dargis R., Han S., Volk A. A., Ramezani M., Cai T., Chen O., Abolhasani M., Matter 2021, 4, 2429.
- 74. Lignos I., Maceiczyk R. M., Kovalenko M. V., Stavrakis S., Chem. Mater. 2019, 32, 27.
- 75. Bateni F., Epps R. W., Antami K., Dargis R., Bennett J. A., Reyes K. G., Abolhasani M., Adv. Intell. Syst. 2022, 4, 2200017.
- 76. Bezinge L., Maceiczyk R. M., Lignos I., Kovalenko M. V., deMello A. J., ACS Appl. Mater. Interfaces 2018, 10, 18869.
- 77. Lignos I., Morad V., Shynkarenko Y., Bernasconi C., Maceiczyk R. M., Protesescu L., Bertolotti F., Kumar S., Ochsenbein S. T., Masciocchi N., ACS Nano 2018, 12, 5504.
- 78. Liu Z., Rolston N., Flick A. C., Colburn T. W., Ren Z., Dauskardt R. H., Buonassisi T., Joule 2022, 6, 834.
- 79. Li J., Lu Y., Xu Y., Liu C., Tu Y., Ye S., Liu H., Xie Y., Qian H., Zhu X., J. Phys. Chem. A 2018, 122, 9142.
- 80. Chen X., Wang C., Li Z., Hou Z., Yin W.‐J., Sci. China Mater. 2020, 63, 1024.
- 81. Heimbrook A., Higgins K., Kalinin S. V., Ahmadi M., Nanophotonics 2020, 10, 1977.
- 82. Wang K., Dowling A. W., Curr. Opin. Chem. Eng. 2022, 36, 100728.
- 83. Lei B., Kirk T. Q., Bhattacharya A., Pati D., Qian X., Arroyave R., Mallick B. K., npj Comput. Mater. 2021, 7, 194.
- 84. Greenhill S., Rana S., Gupta S., Vellanki P., Venkatesh S., IEEE Access 2020, 8, 13937.
- 85. Mitchell M., An Introduction to Genetic Algorithms, MIT Press, Cambridge: 1998.
- 86. Kumar P. V., Jin Y., Nanoscale 2023, 15, 10975.
- 87. Li C., Rubín de Celis Leal D., Rana S., Gupta S., Sutti A., Greenhill S., Slezak T., Height M., Venkatesh S., Sci. Rep. 2017, 7, 1.
- 88. Thoppilan R., De Freitas D., Hall J., Shazeer N., Kulshreshtha A., Cheng H.‐T., Jin A., Bos T., Baker L., Du Y., arXiv: 2201.08239, 2022.
- 89. Touvron H., Lavril T., Izacard G., Martinet X., Lachaux M.‐A., Lacroix T., Rozière B., Goyal N., Hambro E., Azhar F., arXiv: 2302.13971, 2023.
- 90. Lin T., Wang Y., Liu X., Qiu X., AI Open 2022, 3, 111.
- 91. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A. N., Kaiser Ł., Polosukhin I., Advances in Neural Information Processing Systems (Eds.: Guyon I., Von Luxburg U., Bengio S., Wallach H., Fergus R., Vishwanathan S., Garnett R.), Curran Associates, Inc., New York: 2017, p. 30.
- 92. Guo T., Guo K., Liang Z., Guo Z., Chawla N. V., Wiest O., Zhang X., arXiv: 2305.18365, 2023.
- 93. Lund B. D., Wang T., Mannuru N. R., Nie B., Shimray S., Wang Z., J. Assoc. Inf. Sci. Technol. 2023, 74, 570.
- 94. AlAfnan M. A., Dishari S., Jovic M., Lomidze K., J. Artif. Intell. Technol. 2023, 3, 60.
- 95. Lo C. K., Educ. Sci. 2023, 13, 410.
- 96. Hu Y., Buehler M. J., APL Mach. Learn. 2023, 1, 010901.
- 97. Thirunavukarasu A. J., Ting D. S. J., Elangovan K., Gutierrez L., Tan T. F., Ting D. S. W., Nat. Med. 2023, 29, 1930.
- 98. Rahman M. M., Terano H. J., Rahman M. N., Salamzadeh A., Rahaman M. S., J. Educ., Manage. Dev. Stud. 2023, 3, 1.
- 99. Dönmez İ., Sahin I., Gülen S., J. Steam Educ. 2023, 6, 101.
- 100. Alshami A., Elsayed M., Ali E., Eltoukhy A. E., Zayed T., Systems 2023, 11, 351.
- 101. Park Y. J., Kaplan D., Ren Z., Hsu C.‐W., Li C., Xu H., Li S., Li J., arXiv: 2304.12208, 2023.
- 102. Qi B., Zhang K., Li H., Tian K., Zeng S., Chen Z.‐R., Zhou B., arXiv: 2311.05965, 2023.
- 103. This has been explored by the authors and will be described in detail in a forthcoming paper.
- 104. Cetinic E., She J., ACM Trans. Multimedia Comput. Commun. Appl. 2022, 18, 66.
- 105. Epstein Z., Hertzmann A., Investigators of Human Creativity, Akten M., Farid H., Fjeld J., Frank M. R., Groh M., Herman L., Leach N., Mahari R., Pentland A. S., Russakovsky O., Schroeder H., Smith A., Science 2023, 380, 1110.
- 106. Elgammal A., Am. Sci. 2019, 107, 18.
- 107. Yoshikawa N., Skreta M., Darvish K., Arellano‐Rubach S., Ji Z., Bjørn Kristensen L., Li A. Z., Zhao Y., Xu H., Kuramshin A., Aspuru‐Guzik A., Shkurti F., Garg A., Autonomous Rob. 2023, 47, 1057.
- 108. Boiko D. A., MacKnight R., Gomes G., arXiv: 2304.05332, 2023.
- 109. Qin X., Song M., Chen Y., Ai Z., Jiang J., arXiv: 2309.16721, 2023.
- 110. Fjelland R., Humanit. Soc. Sci. Commun. 2020, 7, 10.
- 111. Müller V. C., Bostrom N., in Fundamental Issues of Artificial Intelligence (Ed.: Müller V. C.), Springer International Publishing, Cham: 2016, pp. 555–572.
- 112. Tegmark M., Life 3.0: Being Human in the Age of Artificial Intelligence, Knopf Doubleday Publishing Group, New York: 2018.
- 113. Bostrom N., Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Oxford: 2014.
- 114. Kurzweil R., Ethics and Emerging Technologies, Springer, Cham: 2005, pp. 393–406.