Author manuscript; available in PMC: 2022 Oct 1.
Published in final edited form as: PET Clin. 2021 Oct;16(4):525–532. doi: 10.1016/j.cpet.2021.06.012

Potential Applications of Artificial Intelligence and Machine Learning in Radiochemistry and Radiochemical Engineering

E William Webb 1, Peter JH Scott 1,*
PMCID: PMC9168959  NIHMSID: NIHMS1721073  PMID: 34537128

INTRODUCTION

Radiochemistry for PET applications is a complex amalgam of different areas of expertise. The field combines fundamental organic chemistry and analytical sciences, all under the constraint of timely production of short-lived isotopes (11C, 18F, and 68Ga) to meet medical demand with sufficient activity and purity. Taken as a whole, these constraints have barred all but a few small molecules from being studied in animals and/or being commercialized. Although the generation of novel molecules for further study is addressed elsewhere in this issue, the role of radiochemists in the radiotracer pipeline (Fig. 1) is to identify which site in the molecule is best for labeling, to determine the ideal strategy for labeling at that site, to optimize the chemistry to effectively produce the radiolabeled product, and finally, to develop an appropriate analytical technique to verify the identity and purity of the labeled molecule. To date, the main approach to achieving these ends has been substantial trial and error, consuming a great deal of time (both human and instrument) and resources. As many of the tools used in artificial intelligence (AI) and machine learning (ML) become accessible to researchers, there is mounting potential to turn these tools to the problems encountered in the production of radiolabeled molecules for PET applications.1 Although a commonly bandied “buzzword” meant to evoke a superhuman ability to understand a system, AI is simply the “intelligence” displayed by machines that emulates the “natural intelligence” of animals or humans through the application of mathematical and computer science algorithms for evaluating data (“machine learning”) and executing decisions. These tools do not replace humans in the scientific process; rather, they can be thought of as convenient “experts” that complement and enhance chemists in the field. In this perspective, the authors outline some of the potential applications of AI in the field of radiochemistry.

Fig. 1. Radiotracer development and production workflow and areas of radiochemist involvement.

HOW MACHINE LEARNING WORKS

There is some disagreement as to whether ML is a subfield of AI or a separate field that merely overlaps with AI.2,3 Regardless of this disagreement, a functional definition is that AI is a nonbiological system that displays human-like intelligence through rules, whereas ML comprises algorithms that learn from data and examples.3 An AI could be developed by an expert to run through a set of encoded decisions (a series of if-then statements), mirroring that expert’s logical and experiential workflow. Once encoded, the AI agent can apply the same logic to a novel input and return the prediction the expert would have provided based on their logic and experience. Alternatively, ML could be used to develop a similar set of encoded decisions starting from data or examples using a variety of algorithms, just as the expert once learned.3 The focus of this perspective is on developing such systems for radiochemistry directly from data, without the intermediacy of an expert, and it is thus most aptly described as ML for radiochemistry and radiochemical engineering.
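As a minimal sketch of this distinction (in Python with scikit-learn; the reaction variables, thresholds, and data are hypothetical), the example below contrasts a hand-encoded expert rule with a decision tree that learns a comparable rule from labeled examples.

from sklearn.tree import DecisionTreeClassifier

def expert_rule(precursor_mg: float, base_equiv: float) -> int:
    """Hand-encoded if-then logic mimicking an expert's decision (hypothetical thresholds)."""
    if precursor_mg >= 2.0 and base_equiv <= 3.0:
        return 1  # predict "labeling succeeds"
    return 0      # predict "labeling fails"

# The same decision boundary learned from (made-up) labeled examples.
X = [[1.0, 4.0], [2.5, 2.0], [3.0, 1.5], [0.5, 5.0], [2.0, 3.0], [1.5, 3.5]]
y = [expert_rule(a, b) for a, b in X]              # labels supplied by the "expert"
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(expert_rule(2.2, 2.5), model.predict([[2.2, 2.5]])[0])  # both predict 1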

ML has been traditionally broken into 3 categories: (1) supervised learning, in which the algorithms are presented with data containing example inputs (features) and corresponding desired outputs (labels); (2) unsupervised learning, in which the labels are not given, and algorithms discover underlying structure; and (3) reinforcement learning, in which an algorithm interacts with an environment and is provided with rewards to maximize or losses (penalties) to minimize.3,4 Depending on the question under study by the researcher, the appropriate mode of ML may be different. For example, examining a large set of chest radiograph images using unsupervised learning techniques may identify certain characteristics consistent with a pathologic condition (perhaps an increase in localized densities or increased heart size) without knowing those attributes were a part of the diagnosis. Alternatively, radiographic images labeled with a diagnosis may be used to train a supervised model to diagnose on the basis of an image. However, for the purposes of radiochemical engineering and radiochemistry, supervised and reinforcement learning methods are the most easily applicable.

Whatever the approach and the problem under study, care must be taken to verify that the model generalizes.4 With small data sets and a large number of features, overfitting can readily occur in supervised learning:4,5 the model effectively “memorizes” the training data rather than learning to perform the desired function. In unsupervised learning contexts, clusters and salient attributes may be identified that are simply artifacts of the original data set. Testing an additional data set composed of similar data, to verify that the same features are recognized, prevents making assertions that will not hold.5,6
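A minimal sketch of such a generalization check is shown below, using scikit-learn on synthetic data (the data set and model choice are illustrative assumptions, not taken from the present work): performance on held-out data and under cross-validation, rather than on the training set, is the relevant measure.

import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))             # small data set, many features
y = X[:, 0] + 0.1 * rng.normal(size=60)   # only the first feature actually matters

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("train R2:", model.score(X_tr, y_tr))              # high on data the model has seen
print("test  R2:", model.score(X_te, y_te))              # noticeably lower on unseen data
print("5-fold CV R2:", cross_val_score(model, X, y, cv=5).mean())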

IDENTIFICATION OF OPTIMAL SITE AND STRATEGY FOR LABELING

After identifying an appropriate target molecule, a key part of radiochemistry is to determine which site in the molecule is optimal for labeling, from the perspectives of both metabolic stability and radiochemical accessibility.7 Automated retrosynthetic analysis dates back to proposals by Corey and colleagues8,9 in the late 1960s, and as accessibility has risen with hardware capabilities, implementations of ML and AI applied to retrosynthetic analysis have seen an upswing.10 Schematically, older versions of the programs developed for automated retrosynthetic analysis sought to “disconnect” molecules according to template reactions encoded by an expert.10,11 For example, in 2′-methoxyphenyl-(N-2′-pyridinyl)-p-fluorobenzamidoethylpiperazine (MPPF), the amide bond may be retrosynthetically broken into an acyl chloride and an amine according to one template, whereas another template breaks the molecule down into an amide and an electrophile, and still another disconnects the same amide bond via a palladium-catalyzed carbonylation-amination (Fig. 2A). The program iteratively applies these templates to the building blocks produced by each disconnection until arriving at a set of building-block molecules that are commercially available or cannot be disconnected into simpler species using the template set. This produces an entire “tree” of routes that converge on the target molecule (Fig. 2B). However, this “tree” contains both effective and ineffective routes.10 To identify the most viable path for synthetic efforts, the various routes need to be ranked by some criteria: routes may be scored by “greenness,” by length of route, by commercial availability of building-block materials, or by some other criterion developed alongside the program.10–12 More recent advances have used ML and reaction databases to eliminate the need for an expert in the construction of template sets and to provide more sophisticated scoring systems that measure the feasibility of the forward reactions to determine the viability of a given retrosynthetic route.10
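As a minimal sketch of how one such expert-encoded template operates (here using RDKit, which is an assumption of this example rather than the software of refs. 10–12, and acetanilide as a simple stand-in target), an amide disconnection can be expressed as a reaction SMARTS and applied programmatically:

from rdkit import Chem
from rdkit.Chem import AllChem

# Retro-template: amide >> acyl chloride + amine (one of the disconnections in Fig. 2A)
retro_amide = AllChem.ReactionFromSmarts("[C:1](=[O:2])[NX3:3]>>[C:1](=[O:2])Cl.[N:3]")

target = Chem.MolFromSmiles("CC(=O)Nc1ccccc1")   # acetanilide as a stand-in target
for products in retro_amide.RunReactants((target,)):
    for p in products:
        Chem.SanitizeMol(p)
    print(" + ".join(Chem.MolToSmiles(p) for p in products))
# prints: CC(=O)Cl + Nc1ccccc1  (acetyl chloride + aniline)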

Fig. 2. (A) Template-based retrosynthetic analysis of a complex molecule. (B) Exhaustive deconstruction and generation of multiple potentially viable routes. (C) Retro-radio-synthetic analysis of [11C]UCB-J.

In contrast to typical multistep organic synthesis, radiochemists are focused on just one step: the incorporation of the radioisotope, ideally as the last step in the synthetic pathway. This last step is further complicated because different radiolabeling strategies may require completely different starting materials, conditions, workup, or purification strategies, all of which must be managed while limiting radiation exposure, using standardized equipment, and incorporating sufficient activity for transport to the scanning suite and completion of the imaging study. For example, consider applying a retro-radio-synthetic approach to [11C]UCB-J as a prospective compound for labeling (Fig. 2C).13 Any of the aryl fluorides (highlighted in green) could potentially undergo effective labeling via SNAr14 or a transition metal-mediated radiofluorination with [18F]fluoride.15 Alternatively, at another site, an iodopyridyl moiety could be labeled with [11C]MeLi,16 whereas the strategy actually used, methylation of a pyridyl trifluoroborate with [11C]MeI,13 offers still another labeling chemistry (highlighted in red). Ab initio, the selection of which strategy to pursue is an extremely difficult problem. An SNAr approach would necessitate the formation of one of several highly reactive electrophiles, whereas a transition metal-mediated approach, such as copper-mediated radiofluorination, may not tolerate ortho-fluorine substitution.14,15 Taken altogether, this obligates either a brute-force approach to radiosynthesis, testing all possible methodologies presented in the literature and variations thereof in a limited-throughput manner, or testing the most easily accessed methodology, which, even if modestly successful, may not be optimal. If either of those approaches fails to provide sufficient activity in a timely fashion, all too often the target molecule is discarded as “unlabelable.”

Just as ML provides additional tools for retrosynthetic analysis, it has potential as the foundation of a retro-radio-synthesis tool. Similar to traditional retrosynthetic analysis tools, template reactions for radiolabeling can be developed. However, the difference between traditional retrosynthetic tools and any potential tool for retro-radio-synthesis lies in the scoring function defined for radiochemistry. Any program for this purpose will seek to maximize feasibility, activity, and specific activity while minimizing the time required for the all-important radiolabeling step and metabolic breakdown,7,17,18 rather than the economics or “greenness” of the route (a minimal sketch of such a scoring function follows the list below). Foundationally, this will require a change in the way radiochemistry methodology development is conducted. Methodology studies will need to be conducted with the intent of translating the resulting data sets into ML models that can be further augmented as additional methodologies are developed.10 For this to occur, substrate scopes, the main data sets of methodologies, must be redefined. This redefinition of substrate scope evaluations must include the following:

  1. Not just successful reactions but also unsuccessful reactions

  2. Substrates that span the chemical space of accessible or pertinent substrates

  3. “Clean” data
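Returning to the scoring function described before the list, the minimal sketch below ranks candidate radiolabeling routes with a hypothetical weighted score; the field names, weights, and half-life discounting (18F shown) are illustrative assumptions rather than a validated metric.

from dataclasses import dataclass
import math

@dataclass
class Route:
    feasibility: float         # predicted probability the labeling step works (0-1)
    molar_activity: float      # predicted relative molar/specific activity (0-1)
    synthesis_min: float       # time from end of bombardment to formulated dose, minutes
    metabolic_lability: float  # predicted lability at the labeled position (0-1)

def score(route: Route, half_life_min: float = 109.7) -> float:
    """Higher is better; the decay factor discounts long syntheses (18F half-life shown)."""
    decay = math.exp(-math.log(2) * route.synthesis_min / half_life_min)
    return (0.5 * route.feasibility
            + 0.3 * route.molar_activity
            + 0.2 * (1.0 - route.metabolic_lability)) * decay

routes = [Route(0.8, 0.6, 60, 0.2), Route(0.5, 0.9, 30, 0.1)]
best = max(routes, key=score)
print(best, round(score(best), 3))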

Only by following these constraints can robust ML models be developed to define the feasibility of any proposed reaction. Reaction chemical space is complex, and although two substrates may “seem” similar to a radiochemist, they may perform radically differently in a given methodology. As demonstrated by Taylor and colleagues,19 two seemingly similar aryl boronic acid pinacol esters undergo labeling with extremely different efficacy (Fig. 3A). Without actively conducting the experiment, this effect would be difficult to predict, as humans would identify these two substrates as near in chemical space. In contrast, with appropriate featurization, most ML algorithms could identify whether these two substrates are neighbors in feature space or not. Given a novel substrate that could potentially be reactive, unreactive model substrates may be nearer in n-dimensional chemical feature space than successful scope substrates (Fig. 3B). This would lead to the prediction that the novel substrate is more likely to perform similarly to poorly performing substrates; however, without substrate scopes that contain unsuccessful substrates, models cannot make such a comparison. Unfortunately, the inclusion of ineffective substrates in the literature remains a rarity, except in the case of clearly instructive examples.
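A minimal sketch of such a featurization-based comparison is shown below, using RDKit Morgan fingerprints and Tanimoto similarity; the structures and their success/failure labels are illustrative stand-ins, not the substrates of ref. 19.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fp(smiles: str):
    """Morgan (circular) fingerprint as a simple featurization."""
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

successful   = {"4-methoxyphenyl-Bpin":    "COc1ccc(B2OC(C)(C)C(C)(C)O2)cc1"}
unsuccessful = {"2-aminopyridin-5-yl-Bpin": "Nc1ccc(B2OC(C)(C)C(C)(C)O2)cn1"}
candidate    = fp("Cc1ccc(B2OC(C)(C)C(C)(C)O2)cc1")   # 4-methylphenyl-Bpin, the "novel" substrate

for name, smi in {**successful, **unsuccessful}.items():
    sim = DataStructs.TanimotoSimilarity(candidate, fp(smi))
    print(f"{name}: Tanimoto similarity = {sim:.2f}")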

Fig. 3. (A) Similar substrates that display drastically different activity.19 (B) Literature bias of methodology scope toward highly reactive substrates around optimal reactivity rather than potentially reactive or unreactive substrates. (C) Comparison of how different screening strategies span chemical space.

Unsuccessful substrates help define the radiolabeling reaction efficiency surface (Fig. 3B). To best define that surface, substrates would be selected to fully span and represent the whole of chemical space. However, spanning chemical space is a daunting and, in truth, impossible prospect: the number of molecules theoretically accessible in small-molecule chemical space exceeds 10⁶⁰.20 Even when the set of methodology input molecules is reduced to only those that are functionally pertinent, such as aryl boronate derivatives for a Chan-Lam coupling,21 and then reduced further to those that are commercially available (as an approximate estimate of actual accessibility), one is still left with a regime of ~10⁶ molecules. This remains beyond the synthetic reach of most laboratories that lack high-throughput experimentation equipment. Because this scale remains synthetically out of reach for most laboratories, two approaches have become commonplace: additive screening with a model reaction22 and identification of representative sets23,24 (Fig. 3C). Additive screening can readily identify functional groups that act as poisons using a single analytical method, but it fails to capture the effect those functional groups may have when more proximate to the reactive center.19,22,23 For the identification of representative libraries, after property calculations are performed on a large number of potential molecules, the set may be reduced through principal components analysis23 or by finding the most diverse small subset using the Kennard-Stone algorithm.25,26 These informer sets may not include all potential functionality and may also be combined with additive screening to provide an information-rich data set for model construction.24 To date, a standardized set of representative functional group molecules for additive screening has not been demonstrated, but the introduction of a common set would provide a better evaluation of differences between literature methodologies. By attempting to span a wider range of chemical space algorithmically, ML models will demonstrate higher generalizability to novel molecules, in particular for the substrates of labeling interest.
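The Kennard-Stone selection mentioned above can be expressed in a few lines; the sketch below (NumPy, Euclidean distances on hypothetical descriptor vectors) is a bare-bones illustration rather than a production implementation: seed with the two most distant points, then repeatedly add the candidate farthest from everything already selected.

import numpy as np

def kennard_stone(X: np.ndarray, k: int) -> list:
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)               # two most distant points
    selected, remaining = [int(i), int(j)], set(range(len(X))) - {int(i), int(j)}
    while len(selected) < k and remaining:
        # for each candidate, its distance to the nearest already-selected point
        min_d = {c: d[c, selected].min() for c in remaining}
        nxt = max(min_d, key=min_d.get)                          # farthest-from-selected candidate
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(200, 8))    # e.g., computed molecular descriptors (hypothetical)
print(kennard_stone(descriptors, 10))      # indices of a diverse 10-member informer set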

“Clean” data are, unfortunately, the most ambiguous and potentially most important requirement for the construction of ML data sets for predicting labeling efficiency. There are few standardized, tabulated data sets of radiochemical reactions and conditions fit to be parsed and mined for AI development; even the wider organic chemistry field notes the absence of tabulated reaction data.10 Minute variations in multiple variables that are not clearly annotated in the literature can combine to produce a very large “hidden” variable. Differences in the amount of precursor, the amount of activity used, the preparation of that activity, or even the amounts of different counterions may all affect labeling efficiency. In a supervised learning problem, when two data sets that share a common point are combined without clear annotation of these differences, one input may map to two outputs in the training set, increasing the error rate of the model simply because of this “hidden” difference. To achieve so-called “clean” data, as many variables as possible need to be annotated and kept consistent across all experiments. With data sets designed for ML, the major hurdle to the implementation and use of ML can be overcome and the potential of AI for radiolabeling chemistry realized.
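One practical step toward “clean” data is to audit a tabulated data set for identical annotated inputs that map to very different outcomes, which usually signals an unannotated (“hidden”) variable. The sketch below uses pandas with hypothetical column names, values, and a hypothetical 20-point yield threshold.

import pandas as pd

df = pd.DataFrame({
    "precursor_mg": [2.0, 2.0, 1.0],
    "base":         ["K2CO3", "K2CO3", "KOTf"],
    "temp_C":       [110, 110, 90],
    "rcy_percent":  [55.0, 12.0, 40.0],    # radiochemical yield
})

inputs = ["precursor_mg", "base", "temp_C"]
spread = df.groupby(inputs)["rcy_percent"].agg(["count", "min", "max"])
conflicts = spread[(spread["count"] > 1) & (spread["max"] - spread["min"] > 20)]
print(conflicts)   # rows here signal an unannotated variable worth tracking down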

REACTION OPTIMIZATION

After determining an appropriate target and labeling strategy, radiochemistry becomes an optimization problem, in both reaction development and analytical chemistry. For optimization of a specific reaction, chemist-led or design-of-experiments (DoE) approaches have been most prevalently applied.27 Fully automated systems have the brute-force capability to conduct many more experiments than a chemist; however, a chemist’s intuition may be more efficient at identifying the best experiment to run for optimization.28,29 Although a chemist’s instinct and flexibility may be invaluable, they cannot be parallelized and depend on the individual chemist, who, although perhaps an expert on one type of reaction, may not be an expert on the reaction that is specifically needed. In the case where the chemist is not an expert, a large number of variable conditions may be readily identified from the literature, leading to an exponentially increasing number of potential experiments (“the curse of dimensionality”).

As a complement to chemical intuition and to speed navigation of this high-dimensional optimization problem, efforts have been undertaken to automate the decision-making process of a chemist so that an optimal set of conditions can be identified in a minimal number of experiments. These algorithms most closely resemble a reinforcement learning approach: an initial set of experiments is defined within a defined environment, and on the basis of their outcomes, additional experiments are selected to maximize (optimize) a reward function (Fig. 4).
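The skeleton of such a closed loop is sketched below; the simulated “experiment,” the condition variables, and the simple greedy proposal step are all illustrative assumptions standing in for an automated synthesis module and a more sophisticated selection algorithm.

import random

def run_experiment(temp_C: float, equiv_base: float) -> float:
    """Stand-in for an automated radiolabeling run; returns a reward (e.g., RCY)."""
    return 60 - 0.02 * (temp_C - 110) ** 2 - 8 * (equiv_base - 2.5) ** 2 + random.gauss(0, 1)

best = {"temp_C": 80.0, "equiv_base": 1.0, "reward": run_experiment(80.0, 1.0)}
for _ in range(20):                                    # budget of 20 automated runs
    # propose a candidate near the current best (the "policy"; here just a random step)
    cand = {"temp_C": best["temp_C"] + random.uniform(-10, 10),
            "equiv_base": max(0.5, best["equiv_base"] + random.uniform(-0.5, 0.5))}
    cand["reward"] = run_experiment(cand["temp_C"], cand["equiv_base"])
    if cand["reward"] > best["reward"]:                # keep it only if it improves the reward
        best = cand
print(best)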

Fig. 4. Algorithmic and automated optimization.

For continuous, single-objective optimization, for example, identifying optimal amounts of reactants or chromatography gradients, various algorithms have been demonstrated, such as the Nelder-Mead simplex method, Stable Noisy Optimization by Branch and Fit (SNOBFIT), and gradient descent.29,30 However, chemistry is rife with categorical variables as well as continuous variables. For these, alternative approaches, such as mixed-integer linear programming, as demonstrated by the Jensen laboratory,28–31 Bayesian optimization,32,33 or deep reinforcement learning,34 may be used to optimize across mixed variables. Even multi-objective optimization has been demonstrated to be feasible.33
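For the continuous single-objective case, a minimal sketch using SciPy’s Nelder-Mead implementation on a simulated response surface is shown below; in practice the simulated function would be replaced by an automated experiment and its analytical readout, and the surface shown here is purely hypothetical.

import numpy as np
from scipy.optimize import minimize

def simulated_yield(x):
    """Hypothetical response surface: yield as a function of temperature and gradient slope."""
    temp_C, gradient_slope = x
    return 70 - 0.03 * (temp_C - 120) ** 2 - 50 * (gradient_slope - 0.8) ** 2

# minimize the negative yield = maximize the yield
result = minimize(lambda x: -simulated_yield(x), x0=np.array([90.0, 0.3]),
                  method="Nelder-Mead")
print(result.x, -result.fun)   # converges near [120, 0.8] with a yield near 70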

Taken as a whole, the opportunities for optimization algorithms in radiochemistry are plentiful. Ideal analytical methods that most efficiently and effectively characterize the target molecule can be found in an automated fashion, and optimization of reaction conditions can be treated as a multi-objective maximization problem for yield, purity, molar activity, and ease of purification. The high degree of automation currently used in radiochemistry will facilitate the implementation of these techniques. However, further efforts will be needed to adapt fully automated systems (both hardware and software) into a radiochemical optimization workflow. At present, commercially available automated synthesis boxes (both cassette-based modules and fixed-tube systems)35 balance current good manufacturing practice (cGMP) features against flexibility for the production of various radiotracers, but they are poorly adapted to sequential, fully automated testing. With appropriate equipment and software application programming interfaces (APIs), implementation of AI into the workflow for radiochemistry methodology and tracer development becomes accessible to general radiochemistry laboratories.

PRODUCTION AND OTHER CONCERNS

The workflow of radiochemistry extends past the identification and development of a tracer to long-term, repeated production.36 In the course of long-term production, additional issues may arise that impact effective production, ranging from maintenance of automated equipment such as cyclotrons and synthesis boxes to changes in the quality of production reagents. Provided a problem in these areas can be defined and reduced to several input factors (for example, electrical current draw or coolant temperature may provide alarms for cyclotron maintenance), or sufficient tabulated data can be collected for unsupervised learning and identification of underlying patterns, the potential for implementing ML in these other areas may be readily realized.
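As one hypothetical example of the first scenario, the sketch below applies scikit-learn’s IsolationForest to simulated cyclotron sensor logs; the variables, values, and contamination setting are assumptions for illustration only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_runs = np.column_stack([rng.normal(65, 2, 300),      # beam current (uA), simulated
                               rng.normal(21, 0.5, 300)])   # coolant temperature (C), simulated
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_runs)

todays_runs = np.array([[64.8, 21.2],     # typical run
                        [58.0, 25.5]])    # drifting run: worth a maintenance check
print(detector.predict(todays_runs))      # 1 = normal, -1 = flagged as anomalous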

SUMMARY

Radiochemistry, as a field, has readily embraced automated technology to solve various problems, including radiation safety and cGMP compliance. For both radiochemistry and radiochemical engineering, ML and AI offer an additional, powerful tool for evaluating data. This tool will be applied with increasing regularity and offers many time- and cost-saving advantages over traditional resource-intensive laboratory approaches. This perspective sheds light on the potential problems to which ML may be applied, and how to begin approaching those problems. This article is far from exhaustive, and the only limit as to what problems are fit for ML and AI is how well scientists and engineers can define their problems to apply these techniques.

CLINICS CARE POINTS.

  • ML/AI programs and models are not infallible: they must undergo adequate robustness testing via external validation, and appropriate controls to prevent information leakage and bias must be in place at all points in model development. These controls include the use of cross-validation and external validation data sets to prevent information leakage, and appropriately large and varied data sets to prevent bias. Any developed model or program must demonstrate real-world performance prior to implementation.

  • A number of “black box” algorithms exist, but for medical applications and cGMP compliance, the development of interpretable (non-“black box”) models is paramount for transparency, standardization, quality control, trustworthiness, and long-term security.

KEY POINTS.

  • Selecting an appropriate radiolabeling strategy and optimizing it for a new radiotracer has historically been a resource-intensive task.

  • Machine learning has potential as a fundamental tool for designing radiochemical syntheses.

  • Efforts are underway to use machine learning for identification of optimal labeling strategies and radiochemistry reaction optimization.

ACKNOWLEDGMENTS

This work was supported by the NIH (Award Number R01EB021155). The authors gratefully thank Dr. J.S. Wright, Dr. A.F. Brooks, and K. Cheng, as well as Prof. Melanie S. Sanford and her group members for helpful discussions.

Footnotes

DISCLOSURE

The authors declare that they have no conflicts of interest relating to the subject matter of the present review.

ADDITIONAL RESOURCES

Given the rapidly developing nature of artificial intelligence and machine learning, references quickly become outdated. There are several readily accessible manuals from O’Reilly and other publishers for beginning to construct the coding framework for ML models. These should remain useful provided the programming languages currently in vogue (presently Python, R, or MATLAB) do not change substantially. For further concepts and up-to-date applied research on chemistry and AI, the Web sites of the following professors will likely provide the most current information: Prof. Alán Aspuru-Guzik, Prof. Connor W. Coley, Prof. Klavs F. Jensen, Prof. Abigail G. Doyle, Prof. Leroy Cronin, Prof. Timothy A. Cernak, and Prof. Scott E. Denmark.

REFERENCES

1. Webb EW, Wright JS, Sharninghausen LS, et al. Machine learning for translation of published methodologies. J Nucl Med 2021;62(Suppl 1):13.
2. Garbade MJ. Clearing the confusion: AI vs machine learning vs deep learning differences. 2018. Available at: https://towardsdatascience.com/clearing-the-confusion-ai-vs-machine-learning-vs-deep-learning-differences. Accessed February 23, 2021.
3. Raschka S. Introduction to machine learning and deep learning. 2020. Available at: https://sebastianraschka.com/blog/2020/intro-to-dl-ch01.html. Accessed February 23, 2021.
4. Géron A. The landscape of machine learning. In: Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: concepts, tools, and techniques to build intelligent systems. Sebastopol (CA): O'Reilly; 2019. p. 1–33.
5. Roberts M, Driggs D, Thorpe M, et al. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat Mach Intell 2021;3(3):199–217.
6. US Food and Drug Administration (FDA). Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD): discussion paper and request for feedback. 2019:1–20. Available at: https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf. Accessed July 23, 2021.
7. Liang SH, Vasdev N. Total radiosynthesis: thinking outside "the box". Aust J Chem 2015;68(9):1319–28.
8. Corey EJ, Long AK, Rubenstein SD. Computer-assisted analysis in organic synthesis. Science 1985;228(4698):408–18.
9. Corey EJ. General methods for the construction of complex molecules. Pure Appl Chem 1967;14(1):19–38.
10. Coley CW, Green WH, Jensen KF. Machine learning in computer-aided synthesis planning. Acc Chem Res 2018;51(5):1281–9.
11. Szymkuć S, Gajewska EP, Klucznik T, et al. Computer-assisted synthetic planning: the end of the beginning. Angew Chem Int Ed Engl 2016;55(20):5904–37.
12. Coley CW, Rogers L, Green WH, et al. Computer-assisted retrosynthesis based on molecular similarity. ACS Cent Sci 2017;3(12):1237–45.
13. Nabulsi NB, Mercier J, Holden D, et al. Synthesis and preclinical evaluation of 11C-UCB-J as a PET tracer for imaging the synaptic vesicle glycoprotein 2A in the brain. J Nucl Med 2016;57(5):777–84.
14. Cole E, Stewart M, Littich R, et al. Radiosyntheses using fluorine-18: the art and science of late stage fluorination. Curr Top Med Chem 2014;14(7):875–900.
15. Wright JS, Kaur T, Preshlock S, et al. Copper-mediated late-stage radiofluorination: five years of impact on preclinical and clinical PET imaging. Clin Transl Imaging 2020;8(3):167–206.
16. Helbert H, Antunes IF, Luurtsema G, et al. Cross-coupling of [11C]methyllithium for 11C-labelled PET tracer synthesis. Chem Commun 2021;57(2):203–6.
17. Miller PW, Long NJ, Vilar R, et al. Synthesis of 11C, 18F, 15O, and 13N radiolabels for positron emission tomography. Angew Chem Int Ed Engl 2008;47(47):8998–9033.
18. Djoumbou-Feunang Y, Fiamoncini J, Gil-de-la-Fuente A, et al. A comprehensive computational tool for small molecule metabolism prediction and metabolite identification. J Cheminform 2019;11(1):1–25.
19. Taylor NJ, Emer E, Preshlock S, et al. Derisking the Cu-mediated 18F-fluorination of heterocyclic positron emission tomography radioligands. J Am Chem Soc 2017;139(24):8267–76.
20. Virshup AM, Contreras-García J, Wipf P, et al. Stochastic voyages into uncharted chemical space produce a representative library of all possible drug-like compounds. J Am Chem Soc 2013;135(19):7296–303.
21. Chen JQ, Li JH, Dong ZB. A review on the latest progress of Chan-Lam coupling reaction. Adv Synth Catal 2020;362(16):3311–31.
22. Collins KD, Glorius F. Intermolecular reaction screening as a tool for reaction evaluation. Acc Chem Res 2015;48(3):619–27.
23. Kutchukian PS, Dropinski JF, Dykstra KD, et al. Chemistry informer libraries: a chemoinformatics enabled approach to evaluate and advance synthetic methods. Chem Sci 2016;7(4):2604–13.
24. Ahneman DT, Estrada JG, Lin S, et al. Predicting reaction performance in C–N cross-coupling using machine learning. Science 2018;360(6385):186–90.
25. Kennard RW, Stone LA. Computer aided design of experiments. Technometrics 1969;11(1):137–48.
26. Zahrt AF, Henle JJ, Rose BT, et al. Prediction of higher-selectivity catalysts by computer-driven workflow and machine learning. Science 2019;363(6424):eaau5631.
27. Bowden GD, Pichler BJ, Maurer A. A design of experiments (DoE) approach accelerates the optimization of copper-mediated 18F-fluorination reactions of arylstannanes. Sci Rep 2019;9(1):1–10.
28. Reizman BJ, Wang Y-M, Buchwald SL, et al. Suzuki–Miyaura cross-coupling optimization enabled by automated feedback. React Chem Eng 2016;1:658–66.
29. Baumgartner LM, Coley CW, Reizman BJ, et al. Optimum catalyst selection over continuous and discrete process variables with a single droplet microfluidic reaction platform. React Chem Eng 2018;3(3):301–11.
30. Reizman BJ, Jensen KF. Feedback in flow for accelerated reaction development. Acc Chem Res 2016;49(9):1786–96.
31. Reizman BJ, Jensen KF. Simultaneous solvent screening and reaction optimization in microliter slugs. Chem Commun 2015;51(68):13290–3.
32. Shields BJ, Stevens J, Li J, et al. Bayesian reaction optimization as a tool for chemical synthesis. Nature 2021;590:89.
33. Schweidtmann AM, Clayton AD, Holmes N, et al. Machine learning meets continuous flow chemistry: automated optimization towards the Pareto front of multiple objectives. Chem Eng J 2018;352:277–82.
34. Zhou Z, Li X, Zare RN. Optimizing chemical reactions with deep reinforcement learning. ACS Cent Sci 2017;3(12):1337–44.
35. Bruton L, Scott PJH. Automated synthesis modules for PET radiochemistry. In: Kilbourn MR, Scott PJH, editors. Handbook of radiopharmaceuticals: methodology and applications. 2nd edition. Hoboken (NJ): Wiley; 2021. p. 437–56.
36. Thompson S, Kilbourn MR, Scott PJH. Radiochemistry, PET imaging, and the internet of chemical things. ACS Cent Sci 2016;2(8):497–505.
