Abstract
Physics‐driven discovery in an autonomous experiment has emerged as a dream application of machine learning in physical sciences. Here, this work develops and experimentally implements a deep kernel learning (DKL) workflow that combines correlative prediction of the target functional response and its uncertainty from the structure with physics‐based selection of the acquisition function, which autonomously guides the navigation of the image space. Compared to classical Bayesian optimization (BO) methods, this approach captures the complex spatial features present in images of realistic materials and dynamically learns structure–property relationships. In combination with a flexible scalarizer function that ascribes a degree of physical interest to predicted spectra, this enables physical discovery in automated experiments. Here, this approach is illustrated for nanoplasmonic studies of nanoparticles and experimentally implemented in a truly autonomous fashion for bulk‐ and edge‐plasmon discovery in MnPS3, a lesser‐known beam‐sensitive layered 2D material. This approach is universal, can be used directly as‐is with any specimen, and is expected to be applicable to any probe‐based microscopic technique, including other STEM modalities, scanning probe microscopies, and chemical and optical imaging.
Keywords: automated experiment, electron energy loss spectroscopy, machine learning, plasmonics, scanning transmission electron microscopy
The scanning transmission electron microscope is autonomously operated using the live electron energy loss spectroscopy data stream. During operation and without any pretraining, deep kernel learning (DKL) is used to build structure–property relationships between annular dark field image patches and their corresponding spectral signatures. The knowledge gained on‐the‐fly guides each future measurement throughout the experiment, here leading to physical discovery.

1. Introduction
The emergence of advanced electron and scanning probe microscopies has opened fundamentally new opportunities for exploring the physics of nanoscale and atomic systems.[ 1 , 2 , 3 ] Over the last decades, multiple advances, ranging from mapping the structure of ferroic[ 4 , 5 ] and charge density wave systems,[ 6 , 7 ] to exploring the nature of electronic and superconductive order parameters in quantum materials,[ 8 , 9 ] to probing vibrational and plasmonic properties on the nanometer and atomic level,[ 10 , 11 , 12 ] have become commonplace. The advent of monochromated and aberration‐corrected electron microscopy[ 13 ] combined with beam engineering[ 14 ] and low‐temperature environments[ 15 ] opens the pathway toward further advancements.
In scanning transmission electron microscopy (STEM), one of the most prominent recent experimental directions is high‐energy‐resolution low‐loss electron energy loss spectroscopy (EELS) for probing phononic and plasmonic properties.[ 16 , 17 ] These responses contain information on the local dielectric function convolved with the local geometry,[ 18 , 19 , 20 ] and hence hold the key to understanding the physics of collective excitations in nanoscale systems. Similarly, plasmons couple strongly to local chemical reactivity and form the basis for chemical characterization or even beam‐controlled chemical reactions. Finally, the development of multiple classes of quantum devices hinges on the capability to understand and harness phonon and plasmon functionalities on the nanometer level.
These considerations have stimulated a large body of experimental effort in EEL spectroscopy and EELS imaging of 2D materials, nano‐optical systems, semiconductors, and many more.[ 13 , 21 , 22 ] Currently, experimental studies in STEM‐EELS, much like virtually all other areas of modern physical imaging, are based on the human operator paradigm. The regions for spectral imaging, whether single‐point spectroscopy or a grid of points, are selected based on operator intuition, and data analysis is performed after (in some cases, long after) the data have been acquired. At the same time, generative theoretical models for many of these systems are not well understood, have a large number of poorly known parameters, or are absent. Hence, experimental discovery of new physical relationships and behaviors is often limited by chance, i.e., whether the object or behaviors of interest happened to be observed during the experiment and whether they contain features identifiable by the operator and collaborating team. Note that the calls, now prevalent in multiple scientific communities, for storing full experimental data streams and making the data open[ 23 , 24 ] seek to address the second aspect.
These factors have generated much interest in automated and autonomous experiments as an approach to accelerate the rate of physical discovery, in the context of electron microscopy and scanning probe microscopy methods as well as scattering techniques. Over the last several years, multiple opinion pieces toward the development of automated experiment (AE) in microscopy have been published.[ 25 , 26 ] However, the targets of the automated experiment have generally not been analyzed in detail, with the vast majority of applications to date reporting either probe‐based manipulation based on rigid discovery and manipulation rules,[ 27 , 28 ] implementation of deep neural network‐based computer vision algorithms for identification of objects of interest over large scan regions,[ 29 , 30 , 31 ] or dynamic sampling routines determined in the feature space.[ 32 , 33 , 34 ] At the same time, automated discovery of novel physical phenomena has remained elusive. Currently, the primary bottleneck toward implementation of the autonomous experiment in microscopy is the lack of an active learning algorithm targeting physical discovery.
Indeed, in many areas of scientific research, including X‐ray scattering, automated synthesis, and recently SPM and STEM, the primary paradigm for the AE is Gaussian processes (GP) and GP‐based Bayesian optimization (BO).[ 35 , 36 , 37 , 38 ] However, the extant implementations of BO use only the information acquired during the experiment to select new locations for exploration. Similarly, the relationship between the explored locations is defined by the functional form of the kernel function (somewhat equivalent to a parametrized correlation function discovered during the experiment), thus offering only limited flexibility. While new forms of kernel functions are becoming available and nonparametric and analytic forms of acquisition functions can be balanced, the lack of prior knowledge utilization is common to BO methods, which are also usually limited to low‐dimensional parameter spaces. Secondly, learning the kernel structure is effectively equivalent to discovering a correlation function parametrized via a certain functional form, and hence its capability to represent the complex, nonperiodic spatial structures prevalent in microscopic data is extremely limited. Structural discovery was recently explored in detail using BO,[ 39 ] but only insignificant improvements were realized compared to rectilinear grid scans.
At the same time, in a typical microscopy experiment, the locations for detailed spectroscopic or imaging studies are selected based on specific features in the structural dataset. For example, in nanoplasmonic structures we want to collect EEL spectra at “special” locations associated with certain positions with respect to geometric structures. This information is not available to conventional BO. Ideally, we would like to balance this feature‐based exploration with the perceived value of the already‐discovered behaviors, for example, intensities in a specific energy region, the functional form of the spectrum, the presence of features that are expected to be signatures of physical phenomena of interest, or exploration targeting the discovery of new behaviors (i.e., curiosity learning). It is also important to mention that very often scientists operating the microscope have a degree of anticipation—based on prior knowledge and experience, comparisons with other similar systems, or general physical considerations—of how the interesting behaviors are likely to manifest in the observed data. These considerations are often formulated in a somewhat flippant form as “we know interesting physics when we see it.” However, this level of analysis has until now been unavailable for autonomous microscopy.
Here, we use the combined power of BO and feedforward neural networks in the form of deep kernel learning (DKL) to enable autonomous physical discovery in microscopy and implement it with STEM‐EELS of nanoplasmonic structures. In this approach, we perform active learning of structure–property relationships as single‐shot learning, rather than reconstruction of an image or spectrum (either data driven or using physical priors) from preacquired sparse data.[ 40 , 41 , 42 ] In other words, the algorithm actively interrogates the microscope during operation, rather than working with data sampled according to a certain pattern after it is acquired. We further develop an approach to incorporate physical knowledge in the discovery in the form of a scalarizer function that explicitly defines what physics is of interest. Note that the scalarizer can even be trained by a human—and then the experiment will discover interesting physical signatures following the model set by a trained physicist. We develop the associated workflow, quantify its performance on ground truth data of plasmonic nanoparticles, and deploy it on an operational microscope such that it operates autonomously, without any additional hardware, to actively discover the edge plasmon functionality in MnPS3,[ 43 , 44 , 45 , 46 ] a relatively unexplored beam‐sensitive 2D antiferromagnet.
2. Results and Discussion
As a first model system, we have chosen fluorine and tin co‐doped indium oxide nanoparticles whose plasmon response lies in the near infrared. This is the same material system studied in previous works[ 47 – 51 ] and we find it to be an excellent choice for visualizing the spatial dependence of a variety of plasmonic features that depend on geometry and local particle organization. A typical structural image obtained via high‐angle annular dark field (HAADF) scattering and several spectra are shown in Figure 1 . It is common to visualize the locality of the spectral behavior by showing deconvolved components obtained with exploratory data techniques such as non‐negative matrix factorization (NMF).[ 52 , 53 , 54 , 55 ] Here, the NMF results and code are provided in the Supporting Information and the accompanying notebook, but are not discussed further. Figure 1A,B largely illustrates the rich plasmonic features possible in this system.
Figure 1.

System overview and interpretation of data by processing techniques. Non‐negative matrix factorization (NMF) deconvolution for N = 4 components, where spectral endmembers and abundance maps are shown in A and B, respectively. Sub‐image formation from the structural image in D is shown in E and F, where the electron energy loss (EEL) spectra in C correspond to the response from the center pixel of the sub‐images in E and F. The dataset of the plasmonic cubes is used.[ 47 , 48 ] Scale bar in D is 20 nm.
A crucial aspect of NMF and other spectrum‐based linear and manifold learning methods is that they do not consider any relationships to the structural data or the spatial location from which the spectra are acquired. However, it is these relationships which guide the human operator in deciding where to perform measurements. Previously, we have introduced the im2spec approach,[ 47 , 56 ] using an encoder‐decoder neural network architecture to establish the correlative relationship between local structure and functional responses. However, the im2spec approach requires a full dataset to perform the analysis. Furthermore, transferring a pretrained deterministic neural network to an autonomous experiment is limited both by out‐of‐distribution effects and by the capability of the network to “discover” only the phenomena it has been trained on. In other words, the approach in which the first part of the experiment is used to train the network and the second uses the trained network to explore (A/B testing[ 57 ]) is known to be the least data efficient and the most sensitive to changes in the data generation process.
With this in mind, and using the same dataset, we illustrate the DKL approach, where the fully available structural data is used to actively navigate the spectrum “acquisition” process. Similar to the im2spec method, we split the structural image into a set of patches of size w at every spatial coordinate and pair these with the EEL spectrum at that coordinate. The patches are hence the structural descriptors, whereas the EEL spectra are the target functionalities (see Figure 2 ). Figure 1C–F illustrates the patch formation and corresponding spectra. Here, we assume that the structural information is available everywhere, whereas spectra become available only sequentially as the algorithm actively interrogates the microscope to perform spectrum acquisition. In this process, subsequent locations in the image plane are chosen based on the results obtained from previous locations and a predefined criterion that defines the degree of interest in the prediction.
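As a concrete illustration of this patch-and-spectrum pairing, a minimal NumPy sketch is given below. The function name, the even-window cropping convention, and the assumption that a full ground-truth spectral cube is available (as in this model dataset) are illustrative choices rather than the authors' exact implementation.

```python
import numpy as np

def make_patches(haadf, spectra, w=8):
    """Pair w x w structural patches with the EEL spectrum at each patch center.

    haadf   : (m, n) HAADF-STEM image (structural data, available everywhere)
    spectra : (m, n, e) ground-truth EELS cube (in an experiment, spectra are
              instead measured one location at a time)
    Returns flattened patches (descriptors), spectra (targets), and coordinates.
    """
    m, n, _ = spectra.shape
    half = w // 2
    feats, targets, coords = [], [], []
    # skip a border of half a window so that every patch fits inside the image
    for i in range(half, m - half):
        for j in range(half, n - half):
            patch = haadf[i - half:i + half, j - half:j + half]
            feats.append(patch.ravel())       # 64-dimensional descriptor for w = 8
            targets.append(spectra[i, j])     # spectrum at the patch center
            coords.append((i, j))
    return np.asarray(feats), np.asarray(targets), np.asarray(coords)

# Example: X, Y, xy = make_patches(haadf, spectra, w=8)
```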
Figure 2.

Schematic illustration of a single deep kernel learning (DKL) step for active learning of system properties, where the input is a structural image and the output is either a spectrum or a specific scalar property derived from it. Computationally, a feedforward neural network takes structural images and embeds them into a low‐dimensional feature space in which a standard (base) Gaussian process kernel operates. At the training step, the weights (w) of the neural network and the hyperparameters (θ) of the base kernel are optimized jointly using a small number of available measurements. Once a DKL model is trained, it can be used to compute the expected function value (μ) and the associated uncertainty (σ) for all structural data (for which there are no measured spectra). These are used to derive the acquisition function, acq, for selecting the next measurement point according to x_next = argmax(acq).
To illustrate the behavior of DKL in this system, we develop several examples demonstrating its potential to reconstruct the data for full sampling, followed by sparse (random) sampling, and subsequently for active sampling. These examples hence cover a range of applications such as denoising and pansharpening, batch updates, and particularly active learning with physics‐based discovery. Note that the post‐acquisition tasks (pansharpening) can also be implemented with different algorithms such as compressed sensing and Gaussian processes.[ 58 , 59 , 60 , 61 ] However, DKL is crucial in enabling physics‐based active learning, as demonstrated here. In the latter case, we assume that we have access to the full structural information within the image and to the EEL spectra at several locations. We postulate the existence of an (unknown) relationship between local structure and EEL spectrum and seek to reconstruct it for the full dataset, assuming that only partial information on the functionality is available.
The DKL reconstruction of structure–property relationships for a random selection of sampling points is shown in Figure 3 . Individual spectral reconstructions, using all known points, for the coordinates selected in A are shown in B and compared to the ground truth spectra. The Supporting Information contains additional spectral comparisons. We note that the DKL reconstructs the spectra from images. However, for visualization purposes we further calculate the energy‐averaged images across the entire spectrum, which are shown in Figure 3C for 10%, 3%, and 1% fractions of known points, together with the corresponding uncertainties in the spectral reconstructions. Even for only 1% known points, the reconstruction is strikingly good, indicating that using structural similarity makes it possible to harness the wealth of information contained in the structure–spectra relationships. It is important to consider here that the partial data are chosen randomly, and the model is trained all at once for each percentage of partial data. In other words, this is not a scenario of active learning where the model is continuously retrained with each new measurement, and therefore random initializations may have an impact on the behavior of the model.
Figure 3.

Deep kernel learning (DKL) model performance for full data reconstruction. A) Ground truth image representing the average over all energies (normalized), with selected ground truth spectra and DKL spectral predictions using all data shown in (B). DKL predictions and associated uncertainties using partial data (indicated by percentages) are shown in (C). 10%, 3%, and 1% partial data represent 280, 85, and 28 measurements, respectively. Predictions are on a relative scale. All scale bars are 20 nm.
To rationalize these observations, we note that there is a relationship between local structures and EEL spectra. Generally, it can be expected that similar structures have similar spectra, and discovering the relationship between the two constitutes the correlative model. Deviations from such relationships signify the presence of unknown mechanisms, with the uncertainty maps providing likely spatial locations where they manifest. However, the points in this case are chosen randomly (i.e., absent any knowledge of the system) and hence behaviors of interest localized in a small number of spatial locations remain poorly explored.
The analysis above illustrates the power of DKL to reconstruct spectra from structure based on partial data and to yield corresponding uncertainties after the experiment. However, the ultimate target of the autonomous experiment is not only reconstruction of behaviors, but also the search for specific behaviors of interest. To understand the fundamental principles behind this process, we note that each human‐led experiment is guided by a combination of exploratory and exploitative strategies. Exploratory strategies generally target the discovery of possible behaviors in the system, whereas exploitative strategies seek to refine the parameters of known models or to test a certain hypothesis.
The flexibility of the DKL allows exploration of the image space based on the combination of the predicted target function and its uncertainty, where both are vectors. This allows significant flexibility in defining exploration and exploitation strategies, based for example on the maximum value of a function, total integrated uncertainty, or other criteria. However, this also provides a unique opportunity to define physics‐based acquisition functions that target the behaviors of interest as reflected in spectra. In other words, much like human‐based exploration of data after acquisition (we know interesting physics when we see it), here we can formulate the exploration criterion based on the physical behavior of interest, whether it is specific peak intensity or width, peak intensity ratios, presence of certain features, similarity to previously known behaviors, etc.
Here, we first illustrate this approach on the model data, with the full spectral dataset (invisible to the DKL algorithm) being available as ground truth. As a test case, after removing the zero‐loss peak (ZLP) in the usual manner[ 62 , 63 ] by fitting a power law to the ZLP, the maximum spectral intensity is used as the optimization parameter for DKL. For the plasmonic nanoparticles under test, it turns out that the maximum intensity generally corresponds to a collective plasmon resonance, which is most strongly observed when the electron probe is near—but not on—particles. By merely attempting to maximize spectral intensity, in this case, DKL can determine the relationship between intense spectral response and local geometries that partially contain particles. In comparison to Figure 3, where the structure–property relationship is learned from a given segment of the full data all at once, Figure 4 illustrates the discovery process at several different stages (numbers of measurement points), where the red points overlaid on the HAADF–STEM images in Figure 4C are the locations where the EELS measurements are taken. A structure–property relationship is established with each new measurement, where the subsequent measurement location depends on the previously accumulated structure–property knowledge. We emphasize that these four images show snapshots of the active learning pathway through the locations at which points are measured. Each EEL spectrum is converted into a scalar—in this case, the maximum spectral intensity after ZLP removal. This scalar is treated as the measurement for DKL, and with each new measurement, the DKL model is retrained. Figure 4C illustrates this at four different stages throughout the active learning process, where the maximum‐valued pixel in the map of the acquisition function is the location of the next measurement. Strikingly, even at very few measured points, the model has learned the relationship between structure in the HAADF image and maximal plasmon intensity, as illustrated by the acquisition functions and model predictions in Figure 4C. If individual reconstructed spectra at different percentages of acquired points are compared to one another, there is in general no considerable difference even between the 0.1% and 10% spectral predictions. This is shown in more detail in the Supporting Information; the primary message remains the same: the spectral relationship to the local structure is determined very early in the experiment and is only refined with more sampling points and training.
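A minimal sketch of this scalarizer is given below: a power law is fit to the tail of the ZLP and subtracted, and the maximum of the remaining intensity is returned. The specific fit window and the use of scipy.optimize.curve_fit are illustrative assumptions; the text only specifies power-law ZLP removal followed by taking the maximum spectral intensity.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(E, A, r):
    """Power-law model commonly used for the ZLP/background tail."""
    return A * E ** (-r)

def max_intensity_scalarizer(energy, spectrum, fit_window=(0.2, 0.35)):
    """Maximum EELS intensity after power-law removal of the zero-loss peak tail.

    energy     : (e,) energy-loss axis in eV (positive within the fit window)
    spectrum   : (e,) EELS counts
    fit_window : energy range (eV) on the ZLP tail used for the power-law fit;
                 placeholder values, chosen per dataset in practice
    """
    mask = (energy >= fit_window[0]) & (energy <= fit_window[1])
    popt, _ = curve_fit(power_law, energy[mask], spectrum[mask],
                        p0=(spectrum[mask][0], 2.0))
    # evaluate only above the fit window, where the power law is extrapolated
    above = energy > fit_window[1]
    signal = spectrum[above] - power_law(energy[above], *popt)
    return float(signal.max())
```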
Figure 4.

Deep kernel learning (DKL) discovery pathway for edge plasmon search. A normalized two‐dimensional map of the maximum electron energy loss (EEL) spectral intensity (after zero‐loss peak (ZLP) removal) is shown in (A), where the model's goal is to optimize the peak near 0.4 eV in (B). Additional spectral comparisons are presented in the Supporting Information. The DKL pathway is shown in panel (C) for increasing numbers of measured points; sampled points are shown in red on the high‐angle annular dark field (HAADF) images and on the (scalarized) acquisition functions, with DKL predictions and associated uncertainties also shown, indicating where maximization of the plasmon peak is best achieved. Acquisition function and prediction maps are both relative quantities. Percentages in (C) indicate the total number of points that have been measured and used in training up to that point, where 0.1%, 1%, 3%, and 10% represent 3, 28, 85, and 280 measurements, respectively. Scale bar in A is 20 nm.
Finally, we proceed to implement the DKL‐based discovery in an autonomous experiment on an operational scanning transmission electron microscope. Here, we used the NION Monochromated Aberration Corrected STEM (MACSTEM) and the Python scripting environment enabled by the SWIFT software suite[ 64 ] to establish communication between the DKL control routines and the beam position. A link between the EELS camera and SWIFT is already present since we use the NION IRIS spectrometer and camera system; therefore, all data modalities (EELS, 4D‐STEM, ADF) are accessible directly via a Python interface. Additional details are available in the Experimental Section.
Here, we illustrate this approach for physics‐driven discovery of plasmonic behavior in MnPS3, a lesser‐known layered 2D van der Waals antiferromagnet. By analogy with other layered materials, we hypothesize that this material system should demonstrate specific plasmonic behaviors in the vicinity of interfaces, i.e., develop edge plasmons. The physical criterion for edge plasmon detection is the presence of a peak in the EEL spectrum below the bulk plasmon. Hence, here we choose to optimize a peak ratio in the EEL spectrum, where we posit the existence of a low‐energy electronic feature as well as a higher‐energy bulk plasmon resonance. The EEL spectra predicted from the structural descriptors are fit with two Lorentzian functions whose peak positions can fluctuate in energy but cannot overlap. The ratio of the amplitudes of the two is then used as the guiding principle in the autonomous experiment, where we seek to maximize the low‐energy feature and simultaneously minimize the bulk plasmon. We emphasize that a priori we have no knowledge of any edge plasmon behavior in this material—we merely suggest a physical criterion in which we are interested. Note that this scalarizer function, which defines the degree of physical interest, can be considerably more complex and be realized as a decision tree, a neural network trained on human‐labeled data, or even reflect human decisions made during the experiment. In other words, it makes it possible to incorporate the full spectrum of methods available to the well‐trained human operator.
For this specific experiment, it is equally important that this material is rather sensitive to electron beam irradiation, making high‐fidelity hyperspectral imaging difficult. This lends itself well to the paradigm of autonomous experiment, which irradiates only regions that are deemed to be of interest, limiting the total dose the specimen receives. Lastly, it is crucial to convey that the DKL algorithm has not been pretrained and that no sample information has been supplied to guide or otherwise assist its performance. In other words, the following experiment applies to practically any never‐before‐seen specimen.
The autonomous experiment (AE) using the DKL workflow is illustrated in Figure 5 , where a HAADF–STEM image and selected EEL spectra of an exfoliated 2D flake consisting of several layers are shown. We first demonstrate in Figure 5C the DKL pathway, prediction, and acquisition function as a function of the number of measured points when we choose to optimize the peak ratio discussed previously. For clarity, the acquisition function effectively combines the knowledge of the DKL prediction with minimizing the uncertainty. Fairly quickly, the DKL recognizes the relationship between the boundary of the flake and vacuum, at which a strong localized edge plasmon is found to exist while the higher‐energy bulk plasmon is weak or nonexistent. Note that the top edge is more favorable than other edges, presumably due to a sharper boundary. After measuring between 10 and 25 EEL spectra (which corresponds to only 1% of the entire space), it is apparent that the DKL predicts the location of edge modes rather well (Figure 5C, DKL Prediction). This implies that after only a fraction of points are initially sampled, the model either need not be trained further or can be trained less frequently, allowing a substantial increase in speed.
Figure 5.

Autonomous experimentation (AE) enabled by deep kernel learning (DKL). Autonomous experiment with the structural (high‐angle annular dark field, HAADF) image of a 2D section of suspended MnPS3 shown in (A); autonomous peak ratio optimization is shown in (C), while autonomous optimization of the maximum spectral intensity is shown in (D). The average spectrum from all visited points in the autonomous experiment in (C) is shown in (B), where arrows indicate the peaks from which the peak ratio is determined. In most of the space, the maximum spectral intensity corresponds to the high‐energy peak near 20 eV. The process is shown for 3, 10, 25, and 100 acquired measurements for the two different physics search criteria employed: peak ratio optimization (C) and maximization of spectral intensity (D). Scale bar in (A) is 50 nm. Acquisition function and prediction maps are both relative quantities.
Note that the choice of the physical model influences the discovery pathway and consequently the features that are discovered. Once again, this is similar to the human‐based workflow, where, depending on the specific hypothesis, the operator will choose a different sequence of measurements. To illustrate this, we change the criterion from the peak ratio to the maximum peak intensity, as shown in Figure 5D. In this case, the maximum peak intensity appears to correspond to the bulk plasmon resonance, which generally increases in strength with thickness. As we would expect, since the intensity in the HAADF can be treated as a measurement of relative thickness (at these scales), we observe the DKL pathway to predominantly explore the regions on the flake. This second search criterion supports the fact that the autonomous experiment enabled by DKL indeed searches for physics according to the built‐in models that are provided. Comparing the two pathways in Figure 5C,D, a different number of acquired points is needed to satisfactorily learn the structure–property relationships. We chose to stop both experiments after acquisition of 100 points only for demonstration and comparison purposes; in a practical setting, one might choose to halt the training after a threshold level of uncertainty has been reached and proceed to acquire additional points (without further training), which also allows a dramatic increase in the speed of the remaining experiment. Videos showing the algorithm in action in more detail are provided in the Supporting Information.
3. Conclusion
To summarize, we have implemented a physics discovery workflow for electron energy loss spectroscopy in STEM. This approach uses the structural information as initial data for scientific discovery and uses active learning to build the structure–property relationships in the system under investigation. The discovery process is guided by a physics‐based acquisition function that allows incorporation of physical models, target functionalities, or discovery of novel phenomena. Depending on the experimental goals, this approach can provide a full hyperspectral reconstruction (so‐called vector DKL), as is performed in Figures 3 and 4; or, as in Figure 5, it can focus solely on the specified structure–property relationship (scalar DKL) to conduct autonomous microscopy. Note that one may choose to combine these and perform the full vector DKL within the autonomous experiment, at the expense of slightly increased computation costs, although there is not necessarily a benefit to performing the full hyperspectral reconstruction during the experiment. Here, this approach enabled the detection and realization of edge plasmon behavior in the 2D layered antiferromagnet MnPS3. No edge plasmon activity in this material had been reported in the literature, nor was it expected to occur; while it is true that such a phenomenon could have been discovered without DKL, it was not, which precisely demonstrates the power of using DKL for physical discovery.
Having illustrated the active learning approach with DKL for MnPS3, we stress once again that DKL is not limited to certain material systems—on the contrary, any material that can be placed in the microscope can be used without any prior knowledge of the material system. This active learning of structure–property relationships (with quantified uncertainties) that are valid for the observed material system is the first distinctive aspect of this approach. The second key point about DKL discovery is that almost any physical criterion can be put forth—this may be as simple or as complicated as desired. In STEM, the autonomous experimentation can be readily extended beyond spectral measurements such as EELS and EDS toward 4D‐STEM modes. Here, scalar quantities derived from a diffraction pattern, such as lattice parameters, strain, and electric fields, along with their uncertainties, can be used as part of the exploration criteria. With the myriad of different modalities that are possible in the electron microscope and in other probe‐based imaging such as variants of scanning probe microscopy, DKL holds the key to opening the floodgates of autonomous experimentation.
Perhaps even more importantly, similar principles (sequential scanning of feature‐rich parameter spaces) underpin multiple other areas of scientific discovery, from instrumental characterization methods to the discovery of new materials, molecules, etc. Hence, we believe that the proposed principle, combining correlative ML prediction of complex responses with physics‐based navigation, can be of interest for multiple other areas of scientific research.
4. Experimental Section
Materials
F,Sn:In2O3 nanoparticles were chemically synthesized by Shin‐Hum Cho and Delia J. Milliron using standard Schlenk line techniques with a modification of previously reported methods for continuous slow injection synthesis of indium oxide nanoparticles as described.[ 50 ]
MnPS3 crystals were grown by Nan Huang and David G. Mandrus using a chemical vapor transport (CVT) method as described.[ 65 ] Single crystals were mechanically exfoliated directly onto Au Quantifoil TEM grids.
Scanning Transmission Electron Microscopy
All STEM experiments were carried out using the NION MACSTEM with an accelerating voltage of 60 kV and a probe semiconvergence angle of 30 mrad. EEL spectra were acquired using a 100 ms dwell time per pixel, while HAADF–STEM images were acquired with 16 µs per pixel. For the F,Sn:In2O3 nanoparticle system, the electron source was monochromated such that the full width at half maximum (FWHM) of the ZLP was approximately 40 meV, and a dispersion of 0.8 meV ch−1 (2048 channels) was selected. This resulted in a nominal probe current of 30 pA. With MnPS3, the source was not monochromated due to the presence of higher‐energy excitations, giving a FWHM of 350 meV and a nominal probe current of 200 pA, using a dispersion of 20 meV ch−1.
Autonomous Microscopy
As part of the NION Swift package, direct access via a Python scripting interface exists to both monitor and control various aspects of the NION MACSTEM. In our case, the electron probe position, EELS camera, and Ronchigram camera are all accessible with appropriate commands. This implies that there is no additional hardware requirement for autonomous experiments using the NION systems. The required Python packages (e.g., PyTorch and SciPy) were installed such that the model training is performed on the microscope hardware PC. Either the CPU or the GPU can be used for training; for scalar DKL with small/sparse data, use of a GPU presents no significant benefit. For vector DKL (i.e., full spectrum prediction), use of a GPU is practically required due to the parallelization needed over many spectral targets.
The autonomous experiment begins with the collection of a HAADF–STEM (structural) image of size m × n pixels, followed by N randomly sampled EEL spectra in the same space, where here m × n is 50 × 50 and N is usually two or three. The probe is then immediately moved to a predefined safe location (e.g., for beam‐sensitive samples) or is blanked. Image patches (features) are created for all pixels in the HAADF image using a user‐specified window size, e.g., 8 × 8 pixels, which should be appropriately selected based on the field of view (FOV) and the structural details that may be contained in each image patch.
Searching for particular physical effects is facilitated by specification of search criteria—in scalar DKL, the EEL spectrum is reduced to a single scalar quantity in any imaginable way. Note that in vector DKL, the entire spectrum does not need to be reduced to a scalar quantity, though depending on the GPU, the number of spectral channels may be limited. In this work, we showed examples of two relatively simple criteria with scalar DKL—the maximum spectral intensity and a peak ratio of fitted curves. Curve fitting is performed by defining a Lorentzian function and using the SciPy package in Python.
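A minimal sketch of the peak-ratio criterion is shown below, assuming two Lorentzians fitted with scipy.optimize.curve_fit. The initial guesses and the non-overlapping energy windows for the two peak centers are placeholder values that would need to be tuned to the actual spectra; only the general two-Lorentzian, amplitude-ratio construction follows the description in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, A, E0, g):
    """Single Lorentzian peak with amplitude A, center E0, and half-width g."""
    return A * g**2 / ((E - E0)**2 + g**2)

def two_lorentzians(E, A1, E1, g1, A2, E2, g2):
    return lorentzian(E, A1, E1, g1) + lorentzian(E, A2, E2, g2)

def peak_ratio_scalarizer(energy, spectrum):
    """Ratio of the low-energy feature amplitude to the bulk plasmon amplitude.

    The bounds keep the two peak centers in separate, non-overlapping energy
    windows (placeholder ranges below), mirroring the constraint in the text.
    """
    p0 = (spectrum.max(), 10.0, 2.0, spectrum.max(), 20.0, 3.0)  # A1, E1, g1, A2, E2, g2
    bounds = ([0.0, 5.0, 0.1, 0.0, 15.0, 0.1],
              [np.inf, 15.0, 10.0, np.inf, 30.0, 10.0])
    popt, _ = curve_fit(two_lorentzians, energy, spectrum, p0=p0, bounds=bounds)
    A_low, A_bulk = popt[0], popt[3]
    # large values favor a strong low-energy (edge) feature and a weak bulk plasmon
    return float(A_low / A_bulk)
```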
The DKL model is trained with each additional EELS measurement such that it continuously learns structure–property relationships. This is adjustable to the point where training may be halted after a certain number of measurements, or a pretrained model may be used; however, the use of pretrained models is not discussed here, as our focus is active learning. A built‐in exploration parameter also exists in the model, which allows balancing exploitation and exploration of the data space, i.e., helping to avoid becoming trapped in a local minimum.
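The overall acquisition loop is summarized by the sketch below. The microscope-facing calls (measure_eels, move_probe_safe) are hypothetical placeholders for the Nion Swift scripting interface, train_dkl and predict_dkl stand in for the DKL model discussed in the Deep Kernel Learning section, and a simple upper-confidence-bound rule is used in place of a specific acquisition function; only the loop structure follows the workflow described above.

```python
import numpy as np

def autonomous_dkl_loop(X, coords, measure_eels, move_probe_safe,
                        scalarizer, train_dkl, predict_dkl,
                        n_seed=3, n_steps=100, beta=2.0):
    """Scalar-DKL acquisition loop (all instrument and model calls are placeholders).

    X      : (P, d) flattened image patches for every pixel of the HAADF image
    coords : (P, 2) pixel coordinates corresponding to each patch
    """
    # seed with a few randomly located spectra
    idx = list(np.random.choice(len(coords), n_seed, replace=False))
    y = [scalarizer(measure_eels(*coords[i])) for i in idx]
    move_probe_safe()  # park or blank the beam between measurements

    for _ in range(n_steps):
        model = train_dkl(X[idx], np.asarray(y))   # retrain on all measurements so far
        mean, std = predict_dkl(model, X)          # prediction and uncertainty everywhere
        acq = mean + beta * std                    # exploration parameter beta balances the two
        acq[idx] = -np.inf                         # do not revisit measured locations
        nxt = int(np.argmax(acq))
        y.append(scalarizer(measure_eels(*coords[nxt])))
        idx.append(nxt)
        move_probe_safe()
    return np.asarray(idx), np.asarray(y)
```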
Gaussian Process Regression and Bayesian Optimization
In the Gaussian process regression setting, given inputs X = (x_1, …, x_N) and targets y = (y_1, …, y_N), we assume that each target point is generated according to[ 66 ]

y_i = f(x_i) + ε_i   (1)

where f is drawn from a Gaussian process (GP) prior, f ∼ GP(0, k(x, x′)), and ε_i ∼ N(0, σ_n²) is observation noise. The covariance (kernel) function k defines the strength of correlation between points in the input space, and its hyperparameters are learned from the data (along with the noise variance) by performing a gradient ascent on the log marginal likelihood. The most common choice of kernel is the radial basis function (RBF), which on one hand supports a large class of functions with various shapes but on the other hand has an inductive bias toward very simple solutions.
Once a GP model is trained (i.e., the kernel hyperparameters and the noise variance are learned), it can be used to compute the predictive mean (μ*) and variance (Σ*) for a new (test) point x* as

μ* = k*ᵀ(K + σ_n²I)⁻¹ y   (2a)

Σ* = k(x*, x*) − k*ᵀ(K + σ_n²I)⁻¹ k*   (2b)
where K is the n‐by‐n matrix of covariances computed for all pairs of training points and k* is the vector of covariances between the test and training points. In the context of microscopy/spectroscopy, the training data (X and y) correspond to sparse measurements over a selected FOV defined on a rectangular grid of size i × j, whereas the new/test points (X*) correspond to yet unmeasured regions of the same FOV. The goal of BO is to discover regions where a particular functionality or structure is maximized using a minimal number of measurements. The next measurement points are selected using the maxima of so‐called acquisition function(s) derived from the GP‐predicted mean, μ*, and uncertainty, σ*² = diag(Σ*), on the test data. A popular choice of acquisition function is the expected improvement (EI), which tells the likelihood of the highest improvement over the current “best measurement”[ 67 ] and is defined as

EI(x) = (μ*(x) − y⁺ − ξ) Φ(Z) + σ*(x) φ(Z),  with Z = (μ*(x) − y⁺ − ξ)/σ*(x)   (3)

where Φ is the standard normal cumulative distribution function, y⁺ is the best predicted value, φ is the standard normal probability density function, and ξ balances exploration and exploitation (here it was set to 0.01).
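For reference, Equations (2) and (3) can be written out directly in NumPy/SciPy, as in the bare-bones sketch below. It uses a fixed RBF kernel with hand-set hyperparameters (i.e., no marginal-likelihood training) and is only meant to make the formulas concrete, not to reproduce the implementation used in the experiments.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """RBF (squared-exponential) kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4, **kern):
    """Predictive mean and (diagonal) variance, Equations (2a) and (2b)."""
    K = rbf_kernel(X_train, X_train, **kern) + noise * np.eye(len(X_train))
    k_star = rbf_kernel(X_train, X_test, **kern)               # cov(train, test)
    K_inv = np.linalg.inv(K)
    mu = k_star.T @ K_inv @ y_train                            # Eq. (2a)
    var = rbf_kernel(X_test, X_test, **kern).diagonal() \
          - np.einsum("ij,ik,kj->j", k_star, K_inv, k_star)    # Eq. (2b), diagonal only
    return mu, np.clip(var, 1e-12, None)

def expected_improvement(mu, var, y_best, xi=0.01):
    """Expected improvement acquisition function, Equation (3)."""
    sigma = np.sqrt(var)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# next point: x_next = X_test[np.argmax(expected_improvement(mu, var, y_train.max()))]
```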
Unfortunately, the classical GP has several major limitations that preclude its widespread adoption for real‐time microscopy and spectroscopy measurements. First, it does not scale well to high‐dimensional (D ≥ 3) inputs, making it unsuitable for hyperspectral measurements in online settings. Second, the classical GP does not actually learn representations from the data. As such, it does not allow using information obtained through different experimental modalities for more efficient navigation during the BO. For example, it is not uncommon to see associations between features observed in easy‐to‐acquire structural 2D images and those seen in spectroscopic data (the existence and strength of such associations depend on the particular system and experimental method), but it is not possible to exploit such associations in the BO setting with the classical GP approach.
Deep Kernel Learning
One solution to the aforementioned limitations of the classical GP is to combine a deep neural network with a Gaussian process. The former is well known to be a powerful deterministic tool for representation learning that easily scales to high‐dimensional inputs, whereas the latter provides reliable uncertainty estimates. This hybrid solution is known as DKL,[ 68 ] which is schematically illustrated in Figure 2. Its central idea is straightforward. The high‐dimensional input data is embedded by a feed‐forward neural network g_ω into a (latent) feature space in which a standard (“base”) GP kernel operates. The new effective kernel is expressed as k_DKL(x, x′) = k(g_ω(x), g_ω(x′)), where ω are the parameters (weights and biases) of the neural network. The parameters of the base kernel and the weights of the neural network are trained jointly by maximizing the log marginal likelihood. Once trained, the DKL‐GP model can make predictions with quantified uncertainty on new (high‐dimensional) input data for selecting the next measurement point according to Equations (2) and (3).
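The construction can be sketched as follows using PyTorch and GPyTorch; this is a generic illustration of a DKL‐GP regressor (MLP feature extractor feeding a base RBF kernel, trained jointly by maximizing the log marginal likelihood), with arbitrary layer sizes and latent dimensionality, and is not the specific code run on the microscope.

```python
import torch
import gpytorch

class DKLRegression(gpytorch.models.ExactGP):
    """Exact GP whose kernel operates on an MLP embedding of image patches."""
    def __init__(self, train_x, train_y, likelihood, input_dim, latent_dim=2):
        super().__init__(train_x, train_y, likelihood)
        # g_w: feed-forward network embedding patches into a low-dimensional space
        self.feature_extractor = torch.nn.Sequential(
            torch.nn.Linear(input_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, latent_dim))
        self.mean_module = gpytorch.means.ConstantMean()
        # base RBF kernel acting in the latent space
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(ard_num_dims=latent_dim))

    def forward(self, x):
        z = self.feature_extractor(x)
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

def train_dkl(train_x, train_y, steps=200, lr=0.01):
    """Jointly optimize NN weights and base-kernel hyperparameters."""
    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = DKLRegression(train_x, train_y, likelihood, train_x.shape[-1])
    model.train(); likelihood.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # includes likelihood noise
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = -mll(model(train_x), train_y)
        loss.backward()
        optimizer.step()
    return model, likelihood

# Prediction with uncertainty on all patches X_all (torch float tensors):
# model.eval(); likelihood.eval()
# with torch.no_grad():
#     post = likelihood(model(X_all))
#     mu, sigma = post.mean, post.variance.sqrt()
```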
In the DKL‐GP, the inputs can be image patches extracted at the (k, l) locations of the 2D grid, and the targets can be either the full spectra at those locations or some specific characteristic(s) of the spectra, such as the intensity of selected peaks (note that the dimensionality of each input patch is equal to the total number of pixels contained in the patch; hence, even for a patch of 8 × 8 the input dimensionality is 64). In this case, the DKL‐GP will learn a correlative structure–property relationship, whereas the uncertainty in the property prediction from patches of structural data at the (k, l) points on the grid can be used for selecting the next measurement point using a predefined acquisition function. We note that while predicting property from structure can also be done with im2spec‐type encoder‐decoder networks,[ 47 , 56 ] these do not output predictive uncertainty and hence cannot be used for BO‐based autonomous experimentation.
We note that during exact GP training, the entire dataset must be passed “through” the model at once, which may not be possible for modern neural network architectures containing millions and sometimes billions of parameters, even when using state‐of‐the‐art graphics processing units (GPUs). One possible solution is to use stochastic variational (i.e., approximate) inference, which allows mini‐batching of the training data. At the same time, we found that in many cases the experimental observations can be well modeled by simple multilayer perceptrons (MLPs) with a few layers and a much smaller number of parameters, which can accommodate the entire dataset and allow exact GP training and inference. We also note that it is possible to use a neural network (pre‐)trained in an offline setting (e.g., using previously collected data) and “freeze” its weights during the DKL training on new data from a similar experiment, which could save computational time/cost in the online setting.
Conflict of Interest
The authors declare no conflict of interest.
Supporting information
Supporting Information
Supplemental Video 1
Acknowledgements
This research is sponsored by the INTERSECT Initiative as part of the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT‐Battelle, LLC, for the US Department of Energy (DOE) under contract DE‐AC05‐00OR22725. This research used resources of the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility. This effort is based upon work supported by the U.S. DOE, Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division (S.V.K.). The authors acknowledge Shin‐Hum Cho and Delia J. Milliron for supplying semiconducting nanoparticles as well as Nan Huang and David G. Mandrus for the MnPS3 used in this work.
Roccapriore K. M., Kalinin S. V., Ziatdinov M., Physics Discovery in Nanoplasmonic Systems via Autonomous Experiments in Scanning Transmission Electron Microscopy. Adv. Sci. 2022, 9, 2203422. 10.1002/advs.202203422
Contributor Information
Sergei V. Kalinin, Email: sergei2@utk.edu.
Maxim Ziatdinov, Email: ziatdinovma@ornl.gov.
Data Availability Statement
The data that support the findings of this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.5167958, reference number 5167958.
References
1. Pennycook S. J., Ultramicroscopy 2017, 180, 22.
2. Gerber C., Lang H. P., Nat. Nanotechnol. 2006, 1, 3.
3. Pennycook S. J., Ultramicroscopy 2012, 123, 28.
4. Chisholm M. F., Luo W., Oxley M. P., Pantelides S. T., Lee H. N., Phys. Rev. Lett. 2010, 105, 197602.
5. Jia C.‐L., Nagarajan V., He J.‐Q., Houben L., Zhao T., Ramesh R., Urban K., Waser R., Nat. Mater. 2007, 6, 64.
6. Dai Z., Slough C. G., Coleman R. V., Phys. Rev. Lett. 1991, 66, 1318.
7. Tsen A. W., Hovden R., Wang D., Kim Y. D., Okamoto J., Spoth K. A., Liu Y., Lu W., Sun Y., Hone J. C., Kourkoutis L. F., Kim P., Pasupathy A. N., Proc. Natl. Acad. Sci. U.S.A. 2015, 112, 15054.
8. Roushan P., Seo J., Parker C. V., Hor Y. S., Hsieh D., Qian D., Richardella A., Hasan M. Z., Cava R. J., Yazdani A., Nature 2009, 460, 1106.
9. Moler K. A., Nat. Mater. 2017, 16, 1049.
10. Idrobo J. C., Lupini A. R., Feng T., Unocic R. R., Walden F. S., Gardiner D. S., Lovejoy T. C., Dellby N., Pantelides S. T., Krivanek O. L., Phys. Rev. Lett. 2018, 120, 095901.
11. Venkatraman K., Levin B. D. A., March K., Rez P., Crozier P. A., Nat. Phys. 2019, 15, 1237.
12. Hage F. S., Radtke G., Kepaptsoglou D. M., Lazzeri M., Ramasse Q. M., Science 2020, 367, 1124.
13. Krivanek O. L., Lovejoy T. C., Dellby N., Aoki T., Carpenter R. W., Rez P., Soignard E., Zhu J., Batson P. E., Lagos M. J., Egerton R. F., Crozier P. A., Nature 2014, 514, 209.
14. Grillo V., Karimi E., Gazzadi G. C., Frabboni S., Dennis M. R., Boyd R. W., Phys. Rev. X 2014, 4, 011013.
15. Zachman M. J., Tu Z., Choudhury S., Archer L. A., Kourkoutis L. F., Nature 2018, 560, 345.
16. Lagos M. J., Trügler A., Hohenester U., Batson P. E., Nature 2017, 543, 529.
17. Yin P., Radovanovic P. V., ChemSusChem 2020, 13, 4885.
18. Kociak M., Henrard L., Stéphan O., Suenaga K., Colliex C., Phys. Rev. B 2000, 61, 13936.
19. Bigelow N. W., Vaschillo A., Iberi V., Camden J. P., Masiello D. J., ACS Nano 2012, 6, 7497.
20. Cherqui C., Bigelow N. W., Vaschillo A., Goldwyn H., Masiello D. J., ACS Photonics 2014, 1, 1013.
21. Nelayah J., Kociak M., Stéphan O., García de Abajo F. J., Tencé M., Henrard L., Taverna D., Pastoriza‐Santos I., Liz‐Marzán L. M., Colliex C., Nat. Phys. 2007, 3, 348.
22. Zhou W., Lee J., Nanda J., Pantelides S. T., Pennycook S. J., Idrobo J.‐C., Nat. Nanotechnol. 2012, 7, 161.
23. Vasilevsky N. A., Minnier J., Haendel M. A., Champieux R. E., PeerJ 2017, 5, e3208.
24. Blaiszik B., Chard K., Pruyne J., Ananthakrishnan R., Tuecke S., Foster I., JOM 2016, 68, 2045.
25. Kalinin S. V., Ziatdinov M., Hinkle J., Jesse S., Ghosh A., Kelley K. P., Lupini A. R., Sumpter B. G., Vasudevan R. K., ACS Nano 2021, 15, 12604.
26. Dyck O., Jesse S., Kalinin S. V., MRS Bull. 2019, 44, 669.
27. Kelley K. P., Ren Y., Morozovska A. N., Eliseev E. A., Ehara Y., Funakubo H., Giamarchi T., Balke N., Vasudevan R. K., Cao Y., Jesse S., Kalinin S. V., ACS Nano 2020, 14, 10569.
28. Krull A., Hirsch P., Rother C., Schiffrin A., Krull C., Commun. Phys. 2020, 3, 54.
29. Doty C., Gallagher S., Cui W., Chen W., Bhushan S., Oostrom M., Akers S., Spurgeon S. R., Comp. Mater. Sci. 2022, 203, 111121.
30. Trentino A., Madsen J., Mittelberger A., Mangler C., Susi T., Mustonen K., Kotakoski J., Nano Lett. 2021, 21, 5179.
31. Sotres J., Boyd H., Gonzalez‐Martinez J. F., Nanoscale 2021, 13, 9193.
32. Scarborough N. M., Godaliyadda G. M. D. P., Ye D. H., Kissick D. J., Zhang S., Newman J. A., Sheedlo M. J., Chowdhury A., Fischetti R. F., Das C., Buzzard G. T., Bouman C. A., Simpson G. J., IS&T Int. Symp. Electron. Imaging 2017, 2017, 6.
33. Zhang Y., Godaliyadda G. M. D., Ferrier N., Gulsoy E. B., Bouman C. A., Phatak C., Ultramicroscopy 2018, 184, 90.
34. Helminiak D., Hu H., Laskin J., Ye D. H., IS&T Int. Symp. Electron. Imaging 2021, 2021, 2901.
35. Snoek J., Rippel O., Swersky K., Kiros R., Satish N., Sundaram N., Patwary M., Prabhat Mr, Adams R., in Proc. 32nd Int. Conf. Machine Learning (Eds: Bach F., Blei D.), PMLR, Lille, France 2015, p. 2171.
36. Noack M. M., Doerk G. S., Li R., Streit J. K., Vaia R. A., Yager K. G., Fukuto M., Sci. Rep. 2020, 10, 17663.
37. Noack M. M., Zwart P. H., Ushizima D. M., Fukuto M., Yager K. G., Elbert K. C., Murray C. B., Stein A., Doerk G. S., Tsai E. H. R., Li R., Freychet G., Zhernenkov M., Holman H.‐Y. N., Lee S., Chen L., Rotenberg E., Weber T., Goc Y. L., Boehm M., Steffens P., Mutti P., Sethian J. A., Nat. Rev. Phys. 2021, 3, 685.
38. Vasudevan R. K., Kelley K. P., Hinkle J., Funakubo H., Jesse S., Kalinin S. V., Ziatdinov M., ACS Nano 2021, 15, 11253.
39. Creange N., Dyck O., Vasudevan R. K., Ziatdinov M., Kalinin S. V., Mach. Learn.: Sci. Technol. 2021, 3, 015024.
40. Chen X., Yao Z., Xu S., McLeod A. S., Gilbert Corder S. N., Zhao Y., Tsuneto M., Bechtel H. A., Martin M. C., Carr G. L., Fogler M. M., Stanciu S. G., Basov D. N., Liu M., ACS Photonics 2021, 8, 2987.
41. Ede J. M., Beanland R., Sci. Rep. 2020, 10, 8332.
42. Yang Y., Ji C., Deng K., Ann. Appl. Stat. 2021, 15, 768.
43. Susner M. A., Chyasnavichyus M., McGuire M. A., Ganesh P., Maksymovych P., Adv. Mater. 2017, 29, 1602852.
44. Kumar R., Jenjeti R. N., Austeria M. P., Sampath S., J. Mater. Chem. C 2019, 7, 324.
45. Long G., Henck H., Gibertini M., Dumcenco D., Wang Z., Taniguchi T., Watanabe K., Giannini E., Morpurgo A. F., Nano Lett. 2020, 20, 2452.
46. Neal S. N., Kim H.‐S., O'Neal K. R., Haglund A. V., Smith K. A., Mandrus D. G., Bechtel H. A., Carr G. L., Haule K., Vanderbilt D., Musfeldt J. L., Phys. Rev. B 2020, 102, 085408.
47. Roccapriore K. M., Ziatdinov M., Cho S. H., Hachtel J. A., Kalinin S. V., Small 2021, 17, 2100181.
48. Kalinin S. V., Roccapriore K. M., Cho S. H., Milliron D. J., Vasudevan R., Ziatdinov M., Hachtel J. A., Adv. Opt. Mater. 2021, 9, 2001808.
49. Cho S. H., Ghosh S., Berkson Z. J., Hachtel J. A., Shi J., Zhao X., Reimnitz L. C., Dahlman C. J., Ho Y., Yang A., Liu Y., Idrobo J.‐C., Chmelka B. F., Milliron D. J., Chem. Mater. 2019, 31, 2661.
50. Cho S. H., Roccapriore K. M., Dass C. K., Ghosh S., Choi J., Noh J., Reimnitz L. C., Heo S., Kim K., Xie K., Korgel B. A., Li X., Hendrickson J. R., Hachtel J. A., Milliron D. J., J. Chem. Phys. 2020, 152, 014709.
51. Roccapriore K. M., Cho S.‐H., Lupini A. R., Milliron D. J., Kalinin S. V., Small 2021, 18, 2105099.
52. Shiga M., Tatsumi K., Muto S., Tsuda K., Yamamoto Y., Mori T., Tanji T., Ultramicroscopy 2016, 170, 43.
53. Zhang S., Scheu C., Microscopy 2018, 67, i133.
54. Agrawal A., Singh A., Yazdi S., Singh A., Ong G. K., Bustillo K., Johns R. W., Ringe E., Milliron D. J., Nano Lett. 2017, 17, 2611.
55. Barrow S. J., Collins S. M., Rossouw D., Funston A. M., Botton G. A., Midgley P. A., Mulvaney P., ACS Nano 2016, 10, 8552.
56. Kalinin S. V., Kelley K., Vasudevan R. K., Ziatdinov M., ACS Appl. Mater. Interfaces 2021, 13, 1693.
57. Young S. W. H., Weave: J. Libr. User Exper. 2014, 10.3998/weave.12535642.0001.101.
58. Krull A., Buchholz T.‐O., Jug F., arXiv:1811.10980 [cs] 2019.
59. Kelley K. P., Ziatdinov M., Collins L., Susner M. A., Vasudevan R. K., Balke N., Kalinin S. V., Jesse S., Small 2020, 16, 2002878.
60. Kalinin S. V., Lupini A. R., Vasudevan R. K., Ziatdinov M., npj Comput. Mater. 2021, 7, 154.
61. Oxley M. P., Ziatdinov M., Dyck O., Lupini A. R., Vasudevan R., Kalinin S. V., npj Comput. Mater. 2021, 7, 65.
62. Kimoto K., Ishizuka K., Matsui Y., Micron 2008, 39, 257.
63. Hachtel J. A., Lupini A. R., Idrobo J. C., Sci. Rep. 2018, 8, 5637.
64. Meyer C., Dellby N., Hachtel J. A., Lovejoy T., Mittelberger A., Krivanek O., Microsc. Microanal. 2019, 25, 122.
65. Nitsche R., Wild P., Mater. Res. Bull. 1970, 5, 419.
66. Rasmussen C. E., Williams C. K. I., Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA 2005.
67. Shahriari B., Swersky K., Wang Z., Adams R. P., de Freitas N., Proc. IEEE 2016, 104, 148.
68. Wilson A. G., Hu Z., Salakhutdinov R., Xing E. P., in Artificial Intelligence and Statistics, PMLR, Hamilton, New Zealand 2016, p. 370.