Scientific Reports. 2020 Aug 13;10:13772. doi: 10.1038/s41598-020-70558-1

Accelerated spin dynamics using deep learning corrections

Sojeong Park 1,2, Wooseop Kwak 1, Hwee Kuan Lee 2,3,4,5
PMCID: PMC7426868  PMID: 32792674

Abstract

Theoretical models capture very precisely the behaviour of magnetic materials at the microscopic level, so computer simulations of magnetic materials, such as spin dynamics simulations, can accurately mimic experimental results. New approaches to efficient spin dynamics simulations are limited by the integration time-step barrier in solving the equations of motion of many-body problems. Using a short time step leads to an accurate but inefficient simulation regime, whereas using a large time step leads to an accumulation of numerical errors that renders the whole simulation useless. In this paper, we use a Deep Learning method to compute the numerical errors of each large time step and use these computed errors to make corrections that achieve higher accuracy in our spin dynamics. We validate our method on the 3D ferromagnetic Heisenberg cubic lattice over a range of temperatures. Here we show that the Deep Learning method can accelerate the simulation speed by 10 times while maintaining simulation accuracy, overcoming the limitation of requiring small time steps in spin dynamics simulations.

Subject terms: Phase transitions and critical phenomena, Statistical physics, Magnetic properties and materials, Computational science

Introduction

Magnetic materials have a wide range of industrial applications, such as Nd–Fe–B-type permanent magnets used for motors in hybrid cars1,2, magnetoresistive random access memory (MRAM) based on the storage of data in stable magnetic states3, ultrafast spin dynamics in magnetic nanostructures4,5, heat-assisted magnetic recording and ferromagnetic resonance methods for increasing the storage density of hard disk drives6,7, exchange bias related to magnetic recording8, and magnetocaloric materials for refrigeration technologies1. Understanding the underlying physics of magnetic materials enables us to develop better applications. Experimentally, the properties of these magnetic materials are studied using neutron scattering9; theoretically, they are studied using computational methods. Spin dynamics simulations10 are powerful tools for understanding fundamental properties of magnetic materials that can be verified by experimental methods. In spin dynamics simulations, the classical equations of motion of spin systems are solved numerically using well-known integrators such as the leapfrog, Verlet, predictor–corrector, and Runge–Kutta methods11–13. The accuracy of these simulations depends on the integration time-step size: if a large time step is used, the accumulated truncation error becomes large; conversely, using a short time step is computationally demanding. It is therefore important to find a trade-off between speed and accuracy.

Symplectic methods14,15 are among the most useful time integrators for spin dynamics simulations; their numerical solutions have the properties of time reversibility and energy conservation. For example, the high-order Suzuki–Trotter decomposition method, one of the symplectic methods, allows a larger time step with limited error in its computation. For the second-order Suzuki–Trotter decomposition method, the integration time step is limited to about $\tau = 0.04/J$, and for the fourth-order method, to about $\tau = 0.2/J$16. In this paper, we seek to enhance the time integration step of the Suzuki–Trotter decomposition method further using Deep Learning techniques.

Recently, Machine Learning techniques have been used to enhance simulation efficiency in condensed matter physics. Applications include characterizing phase transitions17–22 and accelerating Monte Carlo simulations23. A crucial issue in molecular dynamics simulations24 is that generating samples from equilibrium distributions is time consuming; the Boltzmann generator machine25 addresses this long-standing rare-event (e.g. transition) sampling problem. In addition, Machine Learning has been applied to quantum many-body systems: simulating quantum spin dynamics26,27, identifying phase transitions28, and tackling the exponential complexity of the many-body problem in quantum systems29.

In this paper, we show that a speed-up is achieved by combining spin dynamics simulation with Deep Learning that learns the error corrections. The first condition for this speed-up is that the Deep Learning model has enough capacity to learn the associations between spin configurations generated by large time steps and spin configurations generated by accurate short time steps. The second condition is enough training data, so that the Deep Learning model sees sufficiently many pairs of spin configurations for large and short time steps. We propose to use Deep Learning to estimate the error correction terms of the Suzuki–Trotter decomposition method and then add the correction terms back to the spin dynamics results, making them more accurate. As a result of this correction, a larger time step can be used for the Suzuki–Trotter decomposition method, with corrections made at each time step. To evaluate our Deep Learning method, we analyze the spin-spin correlation as a stringent measure, and we also use thermal averages to benchmark performance. We compare the Deep Learning results with those from spin dynamics simulations without Deep Learning at short time steps.

Methods

Heisenberg model

The ferromagnetic Heisenberg model on a cubic lattice is used to demonstrate the efficiency of our method. The Hamiltonian for this model is $H = -J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j$, where the vector $\mathbf{S}_i = (S_i^x, S_i^y, S_i^z)$ is a unit vector, $|\mathbf{S}_i| = 1$. We formalize our spin dynamics following the notation of Tsai et al.16. We write the equations of motion for all spins as

$$\frac{d\sigma(t)}{dt} = \hat{R}\,\sigma(t), \qquad (1)$$

where $\sigma(t) = (\mathbf{S}_1(t), \mathbf{S}_2(t), \ldots, \mathbf{S}_n(t))$ is the spin configuration at time t. The integration of the equations of motion in Eq. (1) is done using the second-order Suzuki–Trotter decomposition method as in Tsai et al.16. Following their mathematical notation, we decompose the evolution operator $\hat{R}$ into $\hat{R}_A$ and $\hat{R}_B$ on the sublattices A and B, respectively, and obtain

$$e^{(\hat{R}_A + \hat{R}_B)\tau} = e^{\hat{R}_B \tau/2}\, e^{\hat{R}_A \tau}\, e^{\hat{R}_B \tau/2} + O(\tau^3). \qquad (2)$$

The ferromagnetic Heisenberg model is considered on a cubic lattice of dimensions $L \times L \times L$ with periodic boundary conditions. This model undergoes a phase transition at the temperature $k_B T_c/J = 1.442$ (Ref. 30), where $k_B$ is Boltzmann's constant. In the spin dynamics approach, the equation of motion for the Heisenberg model is

$$\frac{d\mathbf{S}_i}{dt} = -\mathbf{S}_i \times \mathbf{H}_{\mathrm{eff}}^i = \begin{pmatrix} 0 & -H_{\mathrm{eff},z}^i & H_{\mathrm{eff},y}^i \\ H_{\mathrm{eff},z}^i & 0 & -H_{\mathrm{eff},x}^i \\ -H_{\mathrm{eff},y}^i & H_{\mathrm{eff},x}^i & 0 \end{pmatrix} \mathbf{S}_i = R_i \mathbf{S}_i. \qquad (3)$$

Here, $\mathbf{H}_{\mathrm{eff}}^i$ is the effective field acting on the ith spin. The k component of the effective field is $H_{\mathrm{eff},k}^i = -\sum_{j = \mathrm{nn}(i)} S_k^j$, where the sum runs over the nearest-neighbour sites of site i and $k = x, y, z$.
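For concreteness, the following is a minimal sketch of this update on a periodic cubic lattice, assuming $J = 1$ and spins stored as a NumPy array of shape (L, L, L, 3); the function names (`effective_field`, `precess`, `suzuki_trotter_step`) are ours, not from the paper. Each Suzuki–Trotter substep of Eq. (2) holds one checkerboard sublattice fixed and rotates the spins of the other exactly about their local fields.

```python
import numpy as np

def effective_field(spins):
    """H_eff,k^i = -sum_{j in nn(i)} S_k^j on a periodic cubic lattice.
    spins: array of shape (L, L, L, 3)."""
    h = np.zeros_like(spins)
    for axis in range(3):                         # +/- neighbours along x, y, z
        h -= np.roll(spins, 1, axis=axis) + np.roll(spins, -1, axis=axis)
    return h

def precess(spins, field, tau):
    """Rotate each spin about its local field by angle |H|*tau (Rodrigues formula).
    This solves dS/dt = -S x H exactly when H is held fixed during the substep."""
    norm = np.linalg.norm(field, axis=-1, keepdims=True)
    n = field / np.where(norm > 0.0, norm, 1.0)   # unit axis; avoid division by zero
    theta = norm * tau
    cos, sin = np.cos(theta), np.sin(theta)
    dot = np.sum(n * spins, axis=-1, keepdims=True)
    return spins * cos + np.cross(n, spins) * sin + n * dot * (1.0 - cos)

def suzuki_trotter_step(spins, tau):
    """One second-order step, Eq. (2): half-step on sublattice B, full step on A,
    half-step on B. On the bipartite cubic lattice the field on one sublattice
    depends only on the other, so each substep is an exact rotation."""
    x, y, z = np.indices(spins.shape[:3])
    a = ((x + y + z) % 2 == 0)[..., None]         # checkerboard sublattice A
    def rotate(s, sub, dt):
        return np.where(sub, precess(s, effective_field(s), dt), s)
    spins = rotate(spins, ~a, tau / 2)
    spins = rotate(spins, a, tau)
    return rotate(spins, ~a, tau / 2)
```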

Deep Learning approach

A fully supervised Deep Learning method is developed to reduce the simulation errors of spin dynamics performed with the second-order Suzuki–Trotter decomposition method. To produce training data for our supervised Deep Learning, initial spin configurations are considered in ordered, near-critical, and disordered states in the temperature range $k_B T/J \in [0.5, 2.4]$, sampling $9.1 \times 10^5$ independent spin configurations using Monte Carlo simulations with the Metropolis–Hastings algorithm30–33. The initial spin configurations are prepared by a simulated annealing method, with 300,000 samples in ordered states, 210,000 samples in near-critical states, and 400,000 samples in disordered states. The temperature annealing scheme is described in more detail in the supplementary information; the annealing temperatures are gradually lowered from high to low, and Monte Carlo data are always obtained from equilibrium configurations. For each sampled initial spin configuration $\sigma_i$, two sets of spin dynamics simulations are performed, with time steps $\tau_1 = 10^{-1}$ and $\tau_3 = 10^{-3}$, as illustrated in Fig. 1a. The second-order Suzuki–Trotter method uses $\tau = 0.04$ as a typical integration time step, so $\tau_3 = 10^{-3}$ gives an accurate simulation. For the large time step, we tried $\tau = 10^{-2}$ and $\tau = 10^{-1}$; with our Deep Learning corrections, the large time step $\tau_1 = 10^{-1}$ gives the best speed-up with good accuracy. One hundred time steps of the $\tau_3 = 10^{-3}$ simulation pair with a single time step of the $\tau_1 = 10^{-1}$ simulation. Formally, we represent the updated spin configurations $\sigma_i(10^{-1})$ and $\sigma_i(10^{-3})$ produced by the Suzuki–Trotter decomposition method as

$$\begin{aligned} \sigma_i(10^{-1}) &= e^{\hat{R}_B \tau_1/2}\, e^{\hat{R}_A \tau_1}\, e^{\hat{R}_B \tau_1/2}\, \sigma_i, && \tau_1 = 10^{-1}, \\ \sigma_i(10^{-3}) &= \left( e^{\hat{R}_B \tau_3/2}\, e^{\hat{R}_A \tau_3}\, e^{\hat{R}_B \tau_3/2} \right)^{100} \sigma_i, && \tau_3 = 10^{-3}, \end{aligned} \qquad i = 1, \ldots, D, \qquad (4)$$

where $\sigma_i$ is an initial spin configuration and D is the number of training data. The difference between the spin configuration $\sigma_i(10^{-3})$ generated using $\tau_3 = 10^{-3}$ and the spin configuration $\sigma_i(10^{-1})$ generated using $\tau_1 = 10^{-1}$ is captured by

$$\sigma_i^{(\mathrm{res})} = \sigma_i(10^{-3}) - \sigma_i(10^{-1}), \qquad i = 1, \ldots, D, \qquad (5)$$

where $\sigma_i^{(\mathrm{res})}$ is the residue. For our Deep Learning, the initial spin configuration $\sigma_i$ and the spin configuration $\sigma_i(10^{-1})$ are used as inputs to a U-Net34, a kind of convolutional neural network. The U-Net is a proven architecture for image segmentation as well as for extracting subtle features; its detailed structure is shown in Fig. 1b. In the architecture used for the $8 \times 8 \times 8$ cubic lattice, convolutional layers form an encoder (upper left side) followed by a decoder (upper right side) consisting of upsamplings and concatenations with the corresponding feature maps from the encoder. We add fully connected (FC) layers at the bottom of the network, between the encoder and the decoder, to efficiently weight particular features in the encoder's feature map, such as capturing more information about spin-spin interactions. There are C = 6 input channels, obtained by concatenating the spin components $S_x$, $S_y$, and $S_z$ of both $\sigma_i$ and $\sigma_i(10^{-1})$. The inputs to the U-Net are reshaped to dimensions [D, L, L, L, C] as a cubic grid vector map, where D is the total number of training data, L is the lattice size, and C is the number of input channels. The encoder consists of repeated pairs of convolutional layers with $3 \times 3 \times 3$ filters, each pair followed by $2 \times 2 \times 2$ max pooling. The FC output is reshaped from dimensions $[D, \frac{L}{4} \times \frac{L}{4} \times \frac{L}{4} \times C_4]$ to $[D, \frac{L}{4}, \frac{L}{4}, \frac{L}{4}, C_4]$, where $C_4$ is the number of channels at the bottleneck. Every step in the decoder consists of an upsampling layer with a $2 \times 2 \times 2$ filter followed by a repeated pair of convolutional layers with $3 \times 3 \times 3$ filters, concatenated with the correspondingly cropped feature maps from the encoder layers. Periodic boundary conditions are also applied in the convolutional layers. The output activation function is a sigmoid that predicts the residue values with dimensions $[D, L, L, L, C_o]$, where the number of output channels $C_o$ is 3. A simpler U-Net architecture is used for the $4 \times 4 \times 4$ cubic lattice (see supplementary information).
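As an illustration of this encoder–FC–decoder shape, a minimal Keras sketch for the $8 \times 8 \times 8$ lattice might look as follows. The filter count f = 32 and the use of zero padding (`padding="same"`) are our simplifying assumptions; the paper's network applies periodic padding and its own channel schedule (Fig. 1b).

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(L=8, C=6, Co=3, f=32):
    """Sketch of the 3D U-Net: encoder -> FC bottleneck -> decoder with skips."""
    inp = layers.Input(shape=(L, L, L, C))       # S_x,S_y,S_z of sigma_i and sigma_i(1e-1)

    # Encoder: pairs of 3x3x3 convolutions, each pair followed by 2x2x2 max pooling
    c1 = layers.Conv3D(f, 3, padding="same", activation="relu")(inp)
    c1 = layers.Conv3D(f, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling3D(2)(c1)              # L/2
    c2 = layers.Conv3D(2 * f, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv3D(2 * f, 3, padding="same", activation="relu")(c2)
    p2 = layers.MaxPooling3D(2)(c2)              # L/4

    # Fully connected bottleneck between encoder and decoder, then reshape back
    fc = layers.Dense((L // 4) ** 3 * 2 * f, activation="relu")(layers.Flatten()(p2))
    b = layers.Reshape((L // 4, L // 4, L // 4, 2 * f))(fc)

    # Decoder: 2x2x2 upsampling + concatenation with encoder feature maps
    u1 = layers.Concatenate()([layers.UpSampling3D(2)(b), c2])
    c3 = layers.Conv3D(2 * f, 3, padding="same", activation="relu")(u1)
    c3 = layers.Conv3D(2 * f, 3, padding="same", activation="relu")(c3)
    u2 = layers.Concatenate()([layers.UpSampling3D(2)(c3), c1])
    c4 = layers.Conv3D(f, 3, padding="same", activation="relu")(u2)
    c4 = layers.Conv3D(f, 3, padding="same", activation="relu")(c4)

    # Sigmoid output predicts the normalized residue, shape (L, L, L, Co)
    out = layers.Conv3D(Co, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inp, out)
```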

Figure 1.

Deep Learning for the Heisenberg model. (a) Spin configurations for training data preparation: $\sigma_i$ is the initial spin configuration, $\sigma_i(10^{-1})$ is the spin configuration after one time step of $\tau_1 = 10^{-1}$ from $\sigma_i$, and $\sigma_i(10^{-3})$ is the spin configuration after 100 time steps of $\tau_3 = 10^{-3}$ from $\sigma_i$. $\sigma_i^{(\mathrm{res})}$ is the residue between $\sigma_i(10^{-3})$ and $\sigma_i(10^{-1})$. (b) Illustration of the U-Net architecture. Each vertical black line represents a multi-channel feature map; the number of channels is denoted at the top of each line and the map's dimension is indicated on the left edge. Vertical dashed black lines correspond to the feature maps copied from each encoder layer. (c) A sequence of spin dynamics for testing the trained U-Net model: (a) conduct one time step $\tau_1 = 10^{-1}$ of spin dynamics simulation; (b) use $\sigma_i(10^{-1})$ to predict the spin configuration $\sigma_i(10^{-3})$ by estimating the predicted residue $\hat{\sigma}_i^{(\mathrm{res})}$ using Eq. (6). Steps (a) and (b) are repeated up to time $t_{\max}$.

Deployment of our U-Net for spin dynamics

To deploy the trained U-Net for spin dynamics, a spin dynamics simulation is carried out with one large time step $\tau_1 = 10^{-1}$, and this simulation result $\sigma_i(10^{-1})$ is used to predict $\sigma_i(10^{-3})$ as follows:

$$\hat{\sigma}_i(10^{-3}) = \sigma_i(10^{-1}) + \hat{\sigma}_i^{(\mathrm{res})} \approx \sigma_i(10^{-3}), \qquad (6)$$

where $\hat{\sigma}_i(10^{-3})$ is the predicted spin configuration for 100 time steps of $\tau_3 = 10^{-3}$ and the predicted residue $\hat{\sigma}_i^{(\mathrm{res})}$ is the correction term from Deep Learning. A sequence of spin dynamics steps is conducted at $\tau_1 = 10^{-1}$, and at each step Eq. (6) is used to perform the correction, as shown in Fig. 1c. This new time integration scheme is repeated up to the maximum time $t_{\max}$. The scheme requires only forward propagation on the GPU, implemented with the TensorFlow library35, so its computing time is negligible.
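Putting the pieces together, below is a minimal sketch of this corrected integration loop (Fig. 1c), assuming the `suzuki_trotter_step` helper sketched earlier and a trained U-Net `model`; the λ values used here are those quoted in the next subsection for the L = 8 lattice.

```python
import numpy as np

def corrected_run(sigma0, model, n_steps, tau1=1e-1,
                  lam_min=-0.25472, lam_max=0.25472):
    """Sketch of the corrected time-integration loop of Fig. 1c and Eq. (6)."""
    trajectory = [sigma0]
    sigma = sigma0
    for _ in range(n_steps):
        sigma_large = suzuki_trotter_step(sigma, tau1)       # step (a): one large step
        x = np.concatenate([sigma, sigma_large], axis=-1)    # 6 channels: sigma_i, sigma_i(1e-1)
        res_norm = model.predict(x[np.newaxis], verbose=0)[0]  # forward pass only
        res = res_norm * (lam_max - lam_min) + lam_min       # invert the normalization, Eq. (7)
        sigma = sigma_large + res                            # step (b): correction, Eq. (6)
        trajectory.append(sigma)
    return trajectory
```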

Normalization of residue

The difference between the spin configuration generated with $\tau_3 = 10^{-3}$ and that generated with $\tau_1 = 10^{-1}$ is captured by the residue $\sigma_i^{(\mathrm{res})}$ in Eq. (5). Let $(\sigma_i^{(\mathrm{res})})_k^j$ be the k component of the residual spin at site j of the lattice, where k denotes the x, y, and z components. The values of $(\sigma_i^{(\mathrm{res})})_k^j$ can be quite small for some simulations, so to maintain numerical stability we normalize them as follows. Each component $(\sigma_i^{(\mathrm{res})})_k^j$ over the D samples of training data is normalized to the range [0, 1] by fitting a Gaussian distribution and finding the mean and standard deviation of each k component.

For lattice size L = 4, $\lambda_{\min} = -0.22455$ and $\lambda_{\max} = 0.22455$ are defined by taking 11 times the largest standard deviation of the k components; 11 standard deviations translates to a p-value of $1.911 \times 10^{-28}$, which ensures that during inference the normalized residue $(\sigma_i^{(\mathrm{res})})_k^j$ is always within the range [0, 1]. For lattice size L = 8, $\lambda_{\min} = -0.25472$ and $\lambda_{\max} = 0.25472$ are defined by taking 13 times the largest standard deviation of the k components. Finally, each component $(\sigma_i^{(\mathrm{res})})_k^j$ is normalized to the range [0, 1], which guarantees stable convergence of the weights and biases in Deep Learning, as follows:

$$(\sigma_i^{\mathrm{norm}})_k^j = \frac{(\sigma_i^{(\mathrm{res})})_k^j - \lambda_{\min}}{\lambda_{\max} - \lambda_{\min}}, \qquad (k = x, y, z;\; i = 1, \ldots, D). \qquad (7)$$

During prediction, $(\sigma_i^{(\mathrm{res})})_k^j$ from test data is normalized to the range [0, 1] using the values of $\lambda_{\min}$ and $\lambda_{\max}$ already obtained from the training data.
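In code, the normalization of Eq. (7) and its inverse are one-liners; the sketch below assumes residues stored as an array of shape (D, L, L, L, 3), and the λ-fitting helper only approximates the Gaussian-fitting procedure detailed in the supplementary information.

```python
import numpy as np

def fit_lambda(residues, n_sigma=13):
    """Approximate (lambda_min, lambda_max): a multiple (11 for L = 4, 13 for L = 8)
    of the largest per-component standard deviation over all samples and sites."""
    sigma_max = residues.reshape(-1, 3).std(axis=0).max()
    return -n_sigma * sigma_max, n_sigma * sigma_max

def normalize(res, lam_min, lam_max):
    """Map residues into [0, 1] as in Eq. (7)."""
    return (res - lam_min) / (lam_max - lam_min)

def denormalize(res_norm, lam_min, lam_max):
    """Inverse mapping, used at prediction time."""
    return res_norm * (lam_max - lam_min) + lam_min
```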

Loss function and training

The loss function for one data point $(\sigma_i, \sigma_i(10^{-1}), \sigma_i^{\mathrm{norm}})$ is the mean-square error between the normalized residue $\sigma_i^{\mathrm{norm}}$ and the predicted normalized residue $\hat{\sigma}_i^{\mathrm{norm}}$, defined as

$$\mathcal{L}(\sigma_i, \sigma_i(10^{-1}), \sigma_i^{\mathrm{norm}}) = \frac{1}{L^3} \sum_{j=1}^{L^3} \left\| (\sigma_i^{\mathrm{norm}})^j - (\hat{\sigma}_i^{\mathrm{norm}})^j \right\|_2^2, \qquad (8)$$

where j is the index of lattice sites. The distance function between the jth site of $\sigma_i^{\mathrm{norm}}$ and the jth site of $\hat{\sigma}_i^{\mathrm{norm}}$ is the sum of the squared differences of all spin components:

$$\left\| (\sigma_i^{\mathrm{norm}})^j - (\hat{\sigma}_i^{\mathrm{norm}})^j \right\|_2^2 = \sum_{k=x,y,z} \left[ (\sigma_i^{\mathrm{norm}})_k^j - (\hat{\sigma}_i^{\mathrm{norm}})_k^j \right]^2, \qquad (9)$$

where i is the index of training data.
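Eqs. (8)–(9) reduce to a sum of squared differences over the three spin components followed by an average over the $L^3$ sites. In TensorFlow (the library used here), a custom loss of this form could be written as the sketch below; pairing it with the Adam optimizer in the usage line is our assumption, not stated in the paper.

```python
import tensorflow as tf

def residue_mse(y_true, y_pred):
    """Per-site mean-square error of Eqs. (8)-(9) for normalized residues
    of shape (batch, L, L, L, 3)."""
    sq = tf.reduce_sum(tf.square(y_true - y_pred), axis=-1)   # sum over k = x, y, z
    return tf.reduce_mean(sq, axis=[1, 2, 3])                 # average over L^3 sites

# Example usage (optimizer choice is an assumption):
# model.compile(optimizer="adam", loss=residue_mse)
```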

Converting $\hat{\sigma}_i^{\mathrm{norm}}$ to $\hat{\sigma}_i^{(\mathrm{res})}$

For our Deep Learning, the inputs to the U-Net are the initial spin configurations $\sigma_i$ and the spin configurations $\sigma_i(10^{-1})$ generated by spin dynamics simulations, and the output is $\hat{\sigma}_i^{\mathrm{norm}}$. We finally predict the spin configuration for 100 time steps of $\tau_3 = 10^{-3}$ using the trained Deep Learning model as $\hat{\sigma}_i(10^{-3}) = \sigma_i(10^{-1}) + \hat{\sigma}_i^{(\mathrm{res})}$, where the predicted residue is obtained by the converting formula $\hat{\sigma}_i^{(\mathrm{res})} = \hat{\sigma}_i^{\mathrm{norm}} (\lambda_{\max} - \lambda_{\min}) + \lambda_{\min}$.

Results

The effectiveness of our proposed Deep Learning method is evaluated at $k_B T/J = 0.4 < k_B T_c/J$, $k_B T/J = 1.44 \approx k_B T_c/J$, and $k_B T/J = 2.4 > k_B T_c/J$. Note that at $k_B T/J = 2.4$ the system is in a disordered state and spatial correlations between spins are very short ranged. One hundred independent spin configurations are generated by Monte Carlo simulation for use as test data sets at each temperature $k_B T/J = 0.4$, 1.44, and 2.4. The second-order Suzuki–Trotter decomposition method is used for all experiments in this paper.

To evaluate the accuracy of simulation results, the correlation is investigated by comparing the spin dynamics trajectory $\sigma(t)$ with a highly accurate spin dynamics trajectory $\rho(t)$ performed with $\tau = 10^{-6}$, which is used as the reference time step because we found that it gives accurate trajectories. The correlation $\xi(t)$, as a function of time t, between $\sigma(t)$ and $\rho(t)$ is given by

$$\xi(\sigma, t) = \frac{1}{L^3} \sum_{j=1}^{L^3} \left[ (\rho^j(t))_x (\sigma^j(t))_x + (\rho^j(t))_y (\sigma^j(t))_y + (\rho^j(t))_z (\sigma^j(t))_z \right], \qquad (10)$$

where the index j denotes the lattice site, L is the linear dimension of the lattice, and $L^3$ is the total number of lattice sites. Since the initial spin configurations are the same, $\rho(0)$ is identical to $\sigma(0)$. We compute one hundred correlations $\xi(\sigma_i, t)$ for spin configurations $\sigma_i(t)$, where i runs from 1 to 100, and then estimate the mean $\mu_\xi(t)$ and the standard deviation $\mathrm{std}_\xi(t)$ of $\xi(\sigma_i, t)$ as a function of time at each temperature.
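Eq. (10) is simply the site-averaged dot product between the two trajectories at time t; a short sketch (array shapes as in the earlier snippets, names ours):

```python
import numpy as np

def correlation(sigma_t, rho_t):
    """xi(sigma, t) of Eq. (10) for snapshots of shape (L, L, L, 3)."""
    return np.sum(sigma_t * rho_t) / np.prod(sigma_t.shape[:3])

# mu_xi(t) and std_xi(t) then follow from the 100 test trajectories, e.g.
# xis = [correlation(traj[t], rho[t]) for traj in trajectories]
# mu, std = np.mean(xis), np.std(xis)
```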

The Suzuki–Trotter decomposition method preserves important properties such as time reversibility and the conservation of the energy per site $e = -L^{-3} \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j$ and the magnetization per site $m = L^{-3} \sqrt{ \left( \sum_i S_i^x \right)^2 + \left( \sum_i S_i^y \right)^2 + \left( \sum_i S_i^z \right)^2 }$. We wish to compare the conservation of energy and magnetization across one hundred samples, but their starting spin configurations differ. In order to take statistics across the samples, we shift the energy and magnetization of the initial spin configurations to zero. Eqs. (11) and (12) show how we shift the energy per site e(t) and the magnetization per site m(t) at each time step t. Here Q represents the number of samples at each temperature; we use Q = 100.

$$\tilde{e}_i(t) = e_i(t) - e_i(0), \qquad i = 1, \ldots, Q, \qquad (11)$$
$$\tilde{m}_i(t) = m_i(t) - m_i(0), \qquad i = 1, \ldots, Q. \qquad (12)$$

With this shifting of energy and magnetization, we can compute the mean of the absolute energy per site $\mu_{|\tilde{e}(t)|}$, the mean of the absolute magnetization per site $\mu_{|\tilde{m}(t)|}$, the standard deviation of the energy per site $\mathrm{std}(\tilde{e}(t))$, and the standard deviation of the magnetization per site $\mathrm{std}(\tilde{m}(t))$ over the independent samples.
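The observables and the shifted statistics of Eqs. (11)–(12) can be computed directly from the trajectories; below is a sketch assuming J = 1, a periodic lattice, and the array conventions used above.

```python
import numpy as np

def energy_per_site(spins):
    """e = -L^{-3} sum_<i,j> S_i . S_j; each nearest-neighbour bond counted once."""
    e = 0.0
    for axis in range(3):
        e -= np.sum(spins * np.roll(spins, 1, axis=axis))
    return e / np.prod(spins.shape[:3])

def magnetization_per_site(spins):
    """m = L^{-3} |sum_i S_i|."""
    return np.linalg.norm(spins.sum(axis=(0, 1, 2))) / np.prod(spins.shape[:3])

def shifted_stats(obs_runs):
    """Eqs. (11)-(12): subtract each run's initial value, then take the mean of
    the absolute value and the standard deviation over the Q runs per time step.
    obs_runs: array of shape (Q, T) of e_i(t) or m_i(t) values."""
    tilde = obs_runs - obs_runs[:, :1]
    return np.abs(tilde).mean(axis=0), tilde.std(axis=0)
```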

In Fig. 2, the spin-spin correlation plots are shown using the reference trajectory generated with the reference time step $\tau = 10^{-6}$, for $k_B T/J = 0.4 < k_B T_c/J$ [Fig. 2a,d], $k_B T/J = 1.44 \approx k_B T_c/J$ [Fig. 2b,e], and $k_B T/J = 2.4 > k_B T_c/J$ [Fig. 2c,f]. Below $k_B T_c/J$, correlations remain high (red, yellow, and blue lines) except at $\tau = 10^{-1}$ without Deep Learning corrections (black line), where the correlation drops around t = 2 due to the accumulation of errors at large time steps. The correlation is recovered with Deep Learning corrections (blue line). Indeed, correlations at $\tau = 10^{-1}$ with Deep Learning corrections are as good as those at $\tau = 10^{-2}$ without corrections (yellow line), demonstrating a 10 times speed-up. At and above $k_B T_c/J$, the spin-spin correlation drops faster than below $k_B T_c/J$, even for the short time steps $\tau = 10^{-4}$ (green line) and $\tau = 10^{-5}$ (violet line), due to disorder in the spin lattices.

Figure 2.

Spin-spin correlation using the reference trajectory generated at $\tau = 10^{-6}$. Mean correlation $\mu_\xi(t)$ as a function of time on the $4 \times 4 \times 4$ cubic lattice at (a) $k_B T/J = 0.4$, (b) $k_B T/J = 1.44 \approx k_B T_c/J$, and (c) $k_B T/J = 2.4$, and on the $8 \times 8 \times 8$ cubic lattice at (d) $k_B T/J = 0.4$, (e) $k_B T/J = 1.44 \approx k_B T_c/J$, and (f) $k_B T/J = 2.4$. The blue line presents the Deep Learning (DL) result while the black, yellow, and red lines are the simulation results for $\tau = 10^{-1}$, $\tau = 10^{-2}$, and $\tau = 10^{-3}$, respectively. At $k_B T/J = 1.44$ and $k_B T/J = 2.4$, the green and violet lines additionally show the simulation results for $\tau = 10^{-4}$ and $\tau = 10^{-5}$, respectively. (g) Threshold time $t_{\mathrm{thres}}$ as a function of temperature. Filled blue rhombi represent the Deep Learning result while filled black triangles, filled yellow circles, filled red squares, filled green inverted triangles, and filled violet pentagons are the simulation results without DL corrections for $\tau = 10^{-1}$, $\tau = 10^{-2}$, $\tau = 10^{-3}$, $\tau = 10^{-4}$, and $\tau = 10^{-5}$, respectively.

We define the threshold time $t_{\mathrm{thres}}$ as the average time required for the spin-spin correlation $\mu_\xi(t)$ to drop from 1 to 0.99. In Fig. 2g, $t_{\mathrm{thres}}$ is plotted as a function of temperature $k_B T/J$ with a logarithmic y-axis. Simulations at $\tau = 10^{-3}$ (red squares) have a higher threshold time at each temperature than those at $\tau = 10^{-1}$ without Deep Learning corrections. The threshold time at $\tau = 10^{-1}$ with Deep Learning corrections (filled blue diamonds) approaches almost the same threshold time as at $\tau = 10^{-2}$ without Deep Learning corrections (yellow circles) at each temperature.

Figure 3 (L = 4) and Fig. 4 (L = 8) show $\mu_{|\tilde{e}(t)|}$, $\mathrm{std}(\tilde{e}(t))$, $\mu_{|\tilde{m}(t)|}$, and $\mathrm{std}(\tilde{m}(t))$ as a function of time at $k_B T/J = 0.4 < k_B T_c/J$ [Figs. 3a and 4a], $k_B T/J = 1.44 \approx k_B T_c/J$ [Figs. 3b and 4b], and $k_B T/J = 2.4 > k_B T_c/J$ [Figs. 3c and 4c]. For time steps $\tau = 10^{-2}$ (yellow line) and $\tau = 10^{-3}$ (red line), the conservation of both energy and magnetization is good, as shown by the relatively constant mean plots ($\mu_{|\tilde{e}(t)|}$ and $\mu_{|\tilde{m}(t)|}$) and the small standard deviations ($\mathrm{std}(\tilde{e}(t))$ and $\mathrm{std}(\tilde{m}(t))$) across independent simulations. Below and near $k_B T_c/J$, both energy and magnetization fail to be conserved in simulations without Deep Learning corrections at time step $\tau = 10^{-1}$ (black line); conservation is recovered using Deep Learning corrections (blue line). In Fig. 3c, above $k_B T_c/J$, the system is disordered and the mean absolute energy $\mu_{|\tilde{e}(t)|}$ and mean absolute magnetization $\mu_{|\tilde{m}(t)|}$ become more constant, simply due to averaging over disordered spins. In particular, Fig. 4c shows that above $k_B T_c/J$ the effect of averaging over disordered spins is stronger for L = 8 than for L = 4. At high temperature, the number of possible states increases exponentially, and hence fitting by Deep Learning corrections is more difficult.

Figure 3.

Conservation of energy and magnetization on the $4 \times 4 \times 4$ cubic lattice. The mean absolute energy per site $\mu_{|\tilde{e}(t)|}$, the standard deviation of the energy per site $\mathrm{std}(\tilde{e}(t))$, the mean absolute magnetization per site $\mu_{|\tilde{m}(t)|}$, and the standard deviation of the magnetization per site $\mathrm{std}(\tilde{m}(t))$ as a function of time at (a) $k_B T/J = 0.4$, (b) $k_B T/J = 1.44 \approx k_B T_c/J$, and (c) $k_B T/J = 2.4$. The black, yellow, and red lines represent data obtained from spin dynamics simulations with $\tau = 10^{-1}$, $\tau = 10^{-2}$, and $\tau = 10^{-3}$, respectively, while the blue line represents data from the Deep Learning (DL) correction.

Figure 4.

Conservation of energy and magnetization on the $8 \times 8 \times 8$ cubic lattice. $\mu_{|\tilde{e}(t)|}$, $\mathrm{std}(\tilde{e}(t))$, $\mu_{|\tilde{m}(t)|}$, and $\mathrm{std}(\tilde{m}(t))$ as a function of time at (a) $k_B T/J = 0.4$, (b) $k_B T/J = 1.44 \approx k_B T_c/J$, and (c) $k_B T/J = 2.4$. The black, yellow, and red lines represent data obtained from spin dynamics simulations with $\tau = 10^{-1}$, $\tau = 10^{-2}$, and $\tau = 10^{-3}$, respectively, while the blue line represents data from the Deep Learning (DL) correction. These figures show that the effect of averaging over disordered spins is stronger for L = 8 than for L = 4 above the critical temperature.

Discussion

Our results demonstrate that Deep Learning corrections enhance the time integration step of the Suzuki–Trotter method, achieving a 10 times computational speed-up while maintaining accuracy compared to the original Suzuki–Trotter decomposition method. Because the interactions in the lattice are local, nearest-neighbour interactions, a convolutional Deep Neural Network is a natural choice of architecture. Since convolution is translationally invariant, the effect of lattice size on training our U-Net is not a major concern; for example, between the L = 4 and L = 8 lattices, the time required for training the U-Net parameters increases by about 4 times, which is sub-linear in the number of lattice sites. Our Deep Learning model was trained on simulation data at $\tau = 10^{-3}$, yet its accuracy is equivalent to that of simulations at $\tau = 10^{-2}$. This shows that our Deep Learning training has not reached the theoretical limit of a perfect prediction, which could be achieved only by training a model of infinite capacity on an infinite amount of data. In practice, Deep Learning methods cannot be perfect because the amount of data and the capacity of the U-Net are finite. The main sources of inaccuracy in our method are that the U-Net's output does not fit the labelled data generated at $\tau = 10^{-3}$ exactly and that, even if the U-Net fits its training data, it may not predict perfectly on data it has never seen. For future work, we will explore the effects of Deep Learning corrections on higher-order Suzuki–Trotter decompositions. We will also apply Deep Learning corrections to other settings, such as off-lattice systems and other integrators such as velocity Verlet.

Acknowledgements

We are grateful to Mustafa Umit Oner for useful suggestions. We would like to thank Kaicheng Liang, Mahsa Paknezhad, Connie Kou, Shier Nee Saw, Liu Wei, and Kenta Shiina for proofreading our paper and giving valuable comments. This work was supported by the Biomedical Research Council of A*STAR (Agency for Science, Technology and Research), Singapore, and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1003743). S.J.P. is grateful to the A*STAR Research Attachment Programme (ARAP) of Singapore for financial support.

Author contributions

S.J.P., W.S.K., and H.K.L. contributed to the discussion and development of project. All authors approved the final manuscript.

Data availability

The data and code that support the findings of this study are available from corresponding authors upon request.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary information is available for this paper at 10.1038/s41598-020-70558-1.

References

1. Gutfleisch O, et al. Magnetic Materials and Devices for the 21st Century: Stronger, Lighter, and More Energy Efficient. Adv. Mater. 2011;23:821–842. doi: 10.1002/adma.201002180.
2. Sugimoto S. Current status and recent topics of rare-earth permanent magnets. J. Phys. D Appl. Phys. 2011;44:064001. doi: 10.1088/0022-3727/44/6/064001.
3. Slaughter J. Materials for Magnetoresistive Random Access Memory. Annu. Rev. Mater. Res. 2009;39:277–296. doi: 10.1146/annurev-matsci-082908-145355.
4. Bigot J-Y, Vomir M. Ultrafast magnetization dynamics of nanostructures. Ann. Phys. 2013;525:2–30. doi: 10.1002/andp.201200199.
5. Walowski J, Münzenberg M. Perspective: Ultrafast magnetism and THz spintronics. J. Appl. Phys. 2016;120:140901. doi: 10.1063/1.4958846.
6. Lee HK, Yuan Z. Studies of the magnetization reversal process driven by an oscillating field. J. Appl. Phys. 2007;101:033903. doi: 10.1063/1.2426381.
7. Kryder M, et al. Heat Assisted Magnetic Recording. Proc. IEEE. 2008;96:1810–1835. doi: 10.1109/JPROC.2008.2004315.
8. Lee HK, Okabe Y. Exchange bias with interacting random antiferromagnetic grains. Phys. Rev. B. 2006;73:140403. doi: 10.1103/PhysRevB.73.140403.
9. Lynn JW. Temperature dependence of the magnetic excitations in iron. Phys. Rev. B. 1975;11:2624–2637. doi: 10.1103/PhysRevB.11.2624.
10. Landau DP, Krech M. Spin dynamics simulations of classical ferro- and antiferromagnetic model systems: comparison with theory and experiment. J. Phys. Condens. Matter. 1999;11:R179–R213. doi: 10.1088/0953-8984/11/18/201.
11. Frenkel D, Smit B. Understanding Molecular Simulation: From Algorithms to Applications. 2nd edn, Vol. 50 (Elsevier, Amsterdam, 1996).
12. Beeman D. Some multistep methods for use in molecular dynamics calculations. J. Comput. Phys. 1976;20:130–139. doi: 10.1016/0021-9991(76)90059-0.
13. Allen MP, Tildesley DJ. Computer Simulation of Liquids. Oxford: Clarendon Press; 1988.
14. Kim S. Time step and shadow Hamiltonian in molecular dynamics simulations. J. Korean Phys. Soc. 2015;67:418–422. doi: 10.3938/jkps.67.418.
15. Engle RD, Skeel RD, Drees M. Monitoring energy drift with shadow Hamiltonians. J. Comput. Phys. 2005;206:432–452. doi: 10.1016/j.jcp.2004.12.009.
16. Tsai S-H, Lee HK, Landau DP. Molecular and spin dynamics simulations using modern integration methods. Am. J. Phys. 2005;73:615–624. doi: 10.1119/1.1900096.
17. Carrasquilla J, Melko RG. Machine learning phases of matter. Nat. Phys. 2017;13:431–434. doi: 10.1038/nphys4035.
18. Zhang W, Liu J, Wei T-C. Machine learning of phase transitions in the percolation and XY models. Phys. Rev. E. 2019;99:032142. doi: 10.1103/PhysRevE.99.032142.
19. Li Z, Luo M, Wan X. Extracting critical exponents by finite-size scaling with convolutional neural networks. Phys. Rev. B. 2019;99:075418. doi: 10.1103/PhysRevB.99.075418.
20. van Nieuwenburg E, Liu Y-H, Huber S. Learning phase transitions by confusion. Nat. Phys. 2017;13:435–439. doi: 10.1038/nphys4037.
21. Morningstar A, Melko RG. Deep learning the Ising model near criticality. J. Mach. Learn. Res. 2017;18:5975–5991.
22. Greitemann J, Liu K, Pollet L. Probing hidden spin order with interpretable machine learning. Phys. Rev. B. 2019;99:060404. doi: 10.1103/PhysRevB.99.060404.
23. Huang L, Wang L. Accelerated Monte Carlo simulations with restricted Boltzmann machines. Phys. Rev. B. 2017;95:035105. doi: 10.1103/PhysRevB.95.035105.
24. Rapaport DC. The Art of Molecular Dynamics Simulation. Cambridge: Cambridge University Press; 2004.
25. Noé F, Olsson S, Köhler J, Wu H. Boltzmann generators: sampling equilibrium states of many-body systems with deep learning. Science. 2019;365:eaaw1147. doi: 10.1126/science.aaw1147.
26. Fabiani G, Mentink JH. Investigating ultrafast quantum magnetism with machine learning. SciPost Phys. 2019;7:4. doi: 10.21468/SciPostPhys.7.1.004.
27. Weinberg P, Bukov M. QuSpin: a Python package for dynamics and exact diagonalisation of quantum many body systems part I: spin chains. SciPost Phys. 2017;2:003. doi: 10.21468/SciPostPhys.2.1.003.
28. Kharkov YA, Sotskov VE, Karazeev AA, Kiktenko EO, Fedorov AK. Revealing quantum chaos with machine learning. Phys. Rev. B. 2020;101:064406. doi: 10.1103/PhysRevB.101.064406.
29. Carleo G, Troyer M. Solving the quantum many-body problem with artificial neural networks. Science. 2017;355:602–606. doi: 10.1126/science.aag2302.
30. Chen K, Ferrenberg AM, Landau DP. Static critical behavior of three-dimensional classical Heisenberg models: a high-resolution Monte Carlo study. Phys. Rev. B. 1993;48:3249–3256. doi: 10.1103/PhysRevB.48.3249.
31. Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953;21:1087–1092. doi: 10.1063/1.1699114.
32. Binder K. The Monte Carlo method for the study of phase transitions: a review of some recent progress. J. Comput. Phys. 1985;59:1–55. doi: 10.1016/0021-9991(85)90106-8.
33. Paauw T, Compagner A, Bedeaux D. Monte-Carlo calculation for the classical F.C.C. Heisenberg ferromagnet. Physica A. 1975;79:1–17. doi: 10.1016/0378-4371(75)90084-9.
34. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. arXiv:1505.04597 (2015).
35. Abadi M, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467 (2016).
