Abstract
Theoretical models capture the behaviour of magnetic materials at the microscopic level very precisely, so computer simulations of magnetic materials, such as spin dynamics simulations, can accurately mimic experimental results. However, approaches to efficient spin dynamics simulations are limited by the integration time step barrier in solving the equations of motion of many-body problems. Using a short time step leads to an accurate but inefficient simulation regime, whereas using a large time step leads to an accumulation of numerical errors that render the whole simulation useless. In this paper, we use a Deep Learning method to compute the numerical errors of each large time step and use these computed errors as corrections to achieve higher accuracy in our spin dynamics. We validate our method on the 3D ferromagnetic Heisenberg cubic lattice over a range of temperatures. We show that the Deep Learning method can accelerate the simulation speed by 10 times while maintaining simulation accuracy, overcoming the small-time-step limitation of spin dynamics simulations.
Subject terms: Phase transitions and critical phenomena, Statistical physics, Magnetic properties and materials, Computational science
Introduction
Magnetic materials have a wide range of industrial applications, such as Nd–Fe–B-type permanent magnets used for motors in hybrid cars1,2, magnetoresistive random access memory (MRAM) based on the storage of data in stable magnetic states3, ultrafast spin dynamics in magnetic nanostructures4,5, heat-assisted magnetic recording and ferromagnetic resonance methods for increasing the storage density of hard disk drives6,7, exchange bias related to magnetic recording8, and magnetocaloric materials for refrigeration technologies1. Understanding the underlying physics of magnetic materials enables us to develop better applications. Experimentally, the properties of these magnetic materials are studied using neutron scattering9; theoretically, they are studied using computational methods. Spin dynamics simulations10 are powerful tools for understanding fundamental properties of magnetic materials that can be verified by experimental methods. In spin dynamics simulations, the classical equations of motion of spin systems are solved numerically using well-known integrators such as the leapfrog, Verlet, predictor-corrector, and Runge-Kutta methods11–13. The accuracy of these simulations depends on the time integration step size: if a large time step is used, the accumulated truncation error grows, whereas a short time step is very computationally demanding. It is therefore important to find a trade-off between speed and accuracy.
Symplectic methods14,15 are among the most useful time integrators for spin dynamics simulations; their numerical solutions exhibit time reversibility and good conservation of energy. For example, the high-order Suzuki–Trotter decomposition method, one of the symplectic methods, allows a larger time step with limited error in its computation. For the second-order Suzuki–Trotter decomposition method the integration time step is limited to a small value, and for the fourth-order method it is limited to a somewhat larger value16. In this paper, we seek to extend the integration time step of the Suzuki–Trotter decomposition method further using Deep Learning techniques.
Recently, Machine Learning techniques have been used to enhance simulation efficiency in condensed matter physics. Applications include the study of phase transitions17–22 and the acceleration of Monte-Carlo simulations23. A crucial issue in molecular dynamics simulations24 is that generating samples from equilibrium distributions is time consuming; the Boltzmann generator25 addresses this long-standing rare-event (e.g. transition) sampling problem. In addition, Machine Learning has been applied to quantum many-body systems, including the simulation of quantum spin dynamics26,27, the identification of phase transitions28, and taming the exponential complexity of the quantum many-body problem29.
In this paper, we show that a speed up is achieved by combining spin dynamics simulation with Deep Learning that learns the error corrections. The first condition for speed up is that the Deep Learning model has enough capacity to learn the associations between spin configurations generated with large time steps and those generated with accurate short time steps. The second condition is enough training data, that is, showing the network enough pairs of spin configurations for large and short time steps. We propose to use Deep Learning to estimate the error correction terms of the Suzuki–Trotter decomposition method, and then add the correction terms back to the spin dynamics results, making them more accurate. As a result of this correction, a larger time step can be used for the Suzuki–Trotter decomposition method, with corrections made at each time step. To evaluate our Deep Learning method, we analyze the spin-spin correlation as a stringent measure. We also use thermal averages to benchmark the performance of our method. We compare the Deep Learning results with those from spin dynamics simulations without Deep Learning at short time steps.
Methods
Heisenberg model
The ferromagnetic Heisenberg model on a cubic lattice is used to demonstrate the efficiency of our method. The Hamiltonian for this model is given as $\mathcal{H} = -J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j$, where each spin $\mathbf{S}_i$ is a vector with three components and unit length, $J > 0$, and the sum runs over nearest-neighbour pairs. We formalize our spin dynamics following the notation of Tsai et al.16. We write the equations of motion for all spins as
$$\frac{d}{dt}\,\mathbf{S}(t) = \hat{L}\,\mathbf{S}(t) \tag{1}$$
where $\mathbf{S}(t)$ is the spin configuration at time $t$ and $\hat{L}$ is the operator generating the time evolution. The integration of the equations of motion in Eq. (1) is done using the second-order Suzuki–Trotter decomposition method as in Tsai et al.16. Following their mathematical notation, we decompose the evolution operator $e^{\hat{L}\Delta t}$ into $e^{\hat{L}_A \Delta t}$ and $e^{\hat{L}_B \Delta t}$ acting on the sublattices A and B, respectively, and obtain
$$e^{\hat{L}\Delta t} = e^{\hat{L}_A \Delta t/2}\, e^{\hat{L}_B \Delta t}\, e^{\hat{L}_A \Delta t/2} + O(\Delta t^3) \tag{2}$$
The ferromagnetic Heisenberg model is considered on a cubic lattice of dimensions $L \times L \times L$ with periodic boundary conditions. This model undergoes a phase transition at a temperature $k_B T_c / J \approx 1.443$30, where $k_B$ is Boltzmann's constant. In the spin dynamics approach, the equations of motion for the Heisenberg model are governed by the following equation:
$$\frac{d\mathbf{S}_i}{dt} = \mathbf{S}_i \times \mathbf{H}^{\mathrm{eff}}_i \tag{3}$$
Here, $\mathbf{H}^{\mathrm{eff}}_i$ is the effective field acting on the $i$th spin. The $k$ component of the effective field can be specified as $H^{\mathrm{eff},k}_i = J \sum_{j \in \mathrm{nn}(i)} S^k_j$, where the sum runs over the nearest neighbours $j$ of site $i$ and $k = x$, $y$, and $z$.
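To make these equations concrete, the sketch below (our illustration, not the authors' published code) implements the effective field and a second-order Suzuki–Trotter step on the cubic lattice using NumPy. On a checkerboard decomposition into sublattices A and B, each spin precesses about a field that is frozen during its sub-rotation, so each sub-rotation can be evaluated exactly with Rodrigues' formula and spin lengths are conserved. The function names and the default coupling $J = 1$ are our assumptions, and an even lattice size is assumed for the checkerboard split.

```python
import numpy as np

def effective_field(S, J=1.0):
    """k-component of the effective field: H_i^k = J * sum of the six
    nearest-neighbour spins' k-components, with periodic boundaries.
    S has shape (L, L, L, 3)."""
    H = np.zeros_like(S)
    for axis in range(3):                     # the three lattice directions
        H += np.roll(S, +1, axis=axis) + np.roll(S, -1, axis=axis)
    return J * H

def rotate_spins(S, H, dt):
    """Precess spins about their local fields for time dt using Rodrigues'
    rotation; dS/dt = S x H corresponds to angle -|H|*dt about the axis
    H/|H|. Conserves |S_i| exactly."""
    h = np.linalg.norm(H, axis=-1, keepdims=True)
    n = H / np.where(h == 0.0, 1.0, h)        # rotation axis (safe divide)
    theta = -h * dt                           # rotation angle per site
    cos, sin = np.cos(theta), np.sin(theta)
    n_dot_S = np.sum(n * S, axis=-1, keepdims=True)
    return S * cos + np.cross(n, S) * sin + n * n_dot_S * (1.0 - cos)

def st2_step(S, dt, J=1.0):
    """Second-order Suzuki-Trotter step (Eq. 2): rotate sublattice A for
    dt/2, sublattice B for dt, then A for dt/2. Assumes even L so the
    checkerboard split is bipartite under periodic boundaries."""
    i, j, k = np.indices(S.shape[:3])
    A = ((i + j + k) % 2 == 0)[..., None]     # checkerboard mask
    for mask, step in ((A, dt / 2), (~A, dt), (A, dt / 2)):
        S = np.where(mask, rotate_spins(S, effective_field(S, J), step), S)
    return S
```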
Deep Learning approach
A fully supervised Deep Learning method is developed to correct the spin dynamics performed with the second-order Suzuki–Trotter decomposition method, reducing simulation errors. To produce training data for our supervised Deep Learning, initial spin configurations are sampled in ordered, near-critical, and disordered states over a range of temperatures, using Monte-Carlo simulations with the Metropolis–Hastings algorithm30–33. The initial spin configurations comprise 300,000 samples in ordered states, 210,000 samples near critical states, and 400,000 samples in disordered states, prepared by the simulated annealing method. The temperature annealing scheme is described in more detail in the supplementary information. The annealing temperatures are gradually lowered from high to low, and Monte Carlo data are always obtained from equilibrium configurations. For each sampled initial spin configuration $S(t_0)$, two sets of spin dynamics simulations are performed, with a short time step $\delta t$ and a large time step $\Delta t$, as illustrated in Fig. 1a. The second-order Suzuki–Trotter method typically uses a small integration time step, so we use a short $\delta t$ that gives accurate simulations. For the large time step we tried several candidate values; with our Deep Learning corrections, a large time step of $\Delta t = 100\,\delta t$ gives the best speed up with good accuracy. The simulation with time step $\delta t$ therefore needs 100 integration steps to pair with a single step of size $\Delta t$. Formally, we represent the updated spin configurations $S_{\Delta t}$ and $S_{\delta t}$ obtained with the Suzuki–Trotter decomposition method as
$$S^{(i)}_{\Delta t}(t_0 + \Delta t) = e^{\hat{L}\Delta t}\, S^{(i)}(t_0), \qquad S^{(i)}_{\delta t}(t_0 + \Delta t) = \left(e^{\hat{L}\delta t}\right)^{100} S^{(i)}(t_0), \qquad i = 1, \dots, D \tag{4}$$
where $S^{(i)}(t_0)$ is an initial spin configuration and $D$ represents the number of training data. The difference between the spin configuration generated using $\delta t$ and the spin configuration generated using $\Delta t$ is captured by
$$r = S_{\delta t}(t_0 + \Delta t) - S_{\Delta t}(t_0 + \Delta t) \tag{5}$$
where $r$ is the residue. For our Deep Learning, the initial spin configuration $S(t_0)$ and the spin configuration $S_{\Delta t}(t_0 + \Delta t)$ are used as the inputs into a U-Net34, a kind of convolutional neural network. The U-Net is a proven architecture for image segmentation as well as for extracting subtle features. The detailed structure of the U-Net is shown in Fig. 1b. In the architecture used for the larger cubic lattice, convolutional layers form an encoder on the upper left side, followed by a decoder on the upper right side that consists of upsamplings and concatenations with the corresponding feature maps from the encoder. We add fully connected (FC) layers at the bottom of the network, between the encoder and the decoder, to efficiently weight particular features in the encoder's feature maps, such as capturing more information about spin-spin interactions. There are $C = 6$ input channels, obtained by concatenating the spin components $x$, $y$, and $z$ of both $S(t_0)$ and $S_{\Delta t}(t_0 + \Delta t)$. The inputs to the U-Net are reshaped into a cubic grid vector map of dimensions $[D, L, L, L, C]$, where $D$ is the total number of training data, $L$ is the lattice size, and $C$ is the number of input channels. The encoder consists of repeated blocks of two convolutional layers followed by a max pooling. A reshaping function maps between the flattened FC representation and the spatial dimensions of the bottleneck feature map. Every step in the decoder consists of an upsampling layer followed by two convolutional layers and a concatenation with the correspondingly cropped feature maps from the encoder. Periodic boundary conditions are also applied in the convolutional layers. The activation function of the output is a sigmoid, predicting the values of the residue with 3 output channels, one for each spin component. A simpler U-Net architecture is used for the smaller cubic lattice (see supplementary information).
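A minimal Keras sketch of a 3D U-Net with this overall shape is given below. It is an illustration only: the lattice size $L = 8$, the base filter count, the kernel sizes, and the zero ("same") padding are our assumptions; the published network instead applies periodic padding in the convolutional layers (see Fig. 1b and the supplementary information for the actual hyperparameters).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(L=8, C=6, base=32):
    """3D U-Net sketch: encoder -> fully connected bottleneck -> decoder.
    Input: [batch, L, L, L, C] with C = 6 (x, y, z of S(t0) and of
    S_dt(t0 + dt)); output: 3 channels of normalized residue in [0, 1]."""
    inp = layers.Input(shape=(L, L, L, C))
    # Encoder: blocks of two convolutions followed by max pooling.
    c1 = layers.Conv3D(base, 3, padding="same", activation="relu")(inp)
    c1 = layers.Conv3D(base, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling3D(2)(c1)                       # L/2
    c2 = layers.Conv3D(2 * base, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv3D(2 * base, 3, padding="same", activation="relu")(c2)
    p2 = layers.MaxPooling3D(2)(c2)                       # L/4
    # Fully connected layers between encoder and decoder, reshaped back
    # to a spatial feature map.
    s, ch = L // 4, 4 * base
    f = layers.Dense(s**3 * ch, activation="relu")(layers.Flatten()(p2))
    b = layers.Reshape((s, s, s, ch))(f)
    # Decoder: upsampling, concatenation with encoder features, convolutions.
    u2 = layers.Concatenate()([layers.UpSampling3D(2)(b), c2])
    u2 = layers.Conv3D(2 * base, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling3D(2)(u2), c1])
    u1 = layers.Conv3D(base, 3, padding="same", activation="relu")(u1)
    out = layers.Conv3D(3, 1, activation="sigmoid")(u1)   # normalized residue
    return Model(inp, out)
```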
Deployment of our U-Net for spin dynamics
To deploy the trained U-Net for spin dynamics, a spin dynamics simulation is carried out with one large time step $\Delta t$, and this simulation result is used to predict the corrected configuration as follows:
$$S'(t_0 + \Delta t) = S_{\Delta t}(t_0 + \Delta t) + \hat{r} \tag{6}$$
where $S'(t_0 + \Delta t)$ is the predicted spin configuration for 100 time steps of $\delta t$, and the predicted residue $\hat{r}$ is the correction term produced by Deep Learning. A sequence of spin dynamics steps is conducted at $\Delta t$, and after each step Eq. (6) is used to apply the correction, as shown in Fig. 1c. This new time integration scheme is repeated up to the maximum time $t$. The scheme requires only forward propagation on the GPU, implemented with the TensorFlow library35, so the added computing time is negligible.
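The corrected integration loop can be sketched as follows. It assumes the trained `model` above, the `st2_step` integrator from the Methods sketch, and the `denormalize` helper defined with the normalization sketch below; the final renormalization of spin lengths is our own practical safeguard, not a documented part of the published scheme.

```python
import numpy as np

def corrected_trajectory(model, S0, Dt, n_steps, a, b):
    """Repeat: one large Suzuki-Trotter step of size Dt, then add the
    U-Net's predicted residue (Eq. 6)."""
    S, traj = S0, []
    for _ in range(n_steps):
        S_big = st2_step(S, Dt)                         # one large step
        x = np.concatenate([S, S_big], axis=-1)[None]   # the 6 input channels
        r_hat = denormalize(model.predict(x)[0], a, b)  # predicted residue
        S = S_big + r_hat                               # Eq. (6)
        # Re-normalize to unit spins (our safeguard, not from the paper).
        S = S / np.linalg.norm(S, axis=-1, keepdims=True)
        traj.append(S)
    return traj
```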
Normalization of residue
The difference between the spin configuration generated with $\delta t$ and that generated with $\Delta t$ is captured by the residue $r$ in Eq. (5). Let $r^k_j$ be the $k$ component of the residual spin at site $j$ of the lattice, where $k$ denotes the $x$, $y$, and $z$ components. The values of $r^k_j$ can be quite small for some simulations; to maintain numerical stability, we normalize them as follows. Each component over the $D$ samples of training data is fitted with a Gaussian distribution to obtain the mean $\mu^k$ and standard deviation $\sigma^k$ of each component $k$.
The bounds are defined as $a^k = \mu^k - n\,\sigma_{\max}$ and $b^k = \mu^k + n\,\sigma_{\max}$, where $\sigma_{\max}$ is the largest standard deviation among the $k$ components, and $n = 11$ for one lattice size and $n = 13$ for the other. Eleven standard deviations translates to a vanishingly small p-value, which ensures that during inference the normalized residue always lies within the range [0, 1]. Finally, each component is normalized to the range [0, 1], which helps guarantee stable convergence of the weights and biases in Deep Learning, as follows:
$$\tilde{r}^k_j = \frac{r^k_j - a^k}{b^k - a^k} \tag{7}$$
During prediction, $r$ from test data is normalized to the range [0, 1] by using the $a^k$ and $b^k$ that have already been obtained from the training data.
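A sketch of this normalization, under the assumption that the bounds are placed symmetrically about the per-component means at the stated multiple of the largest standard deviation, is:

```python
import numpy as np

def fit_normalizer(residues, n_sigma=11.0):
    """From D training residues of shape (D, L, L, L, 3), compute the
    per-component mean and place bounds n_sigma times the largest
    component standard deviation around it (n_sigma = 11 or 13,
    depending on lattice size)."""
    mu = residues.mean(axis=(0, 1, 2, 3))           # mean of each k component
    sigma_max = residues.std(axis=(0, 1, 2, 3)).max()
    return mu - n_sigma * sigma_max, mu + n_sigma * sigma_max  # a_k, b_k

def normalize(r, a, b):
    """Eq. (7): map residues into [0, 1]."""
    return (r - a) / (b - a)

def denormalize(r_norm, a, b):
    """Inverse of Eq. (7), used to recover predicted residues."""
    return r_norm * (b - a) + a
```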
Loss function and training
The loss function for one data point $i$ is the mean-square error between the normalized residue $\tilde{r}$ and the predicted normalized residue $\hat{\tilde{r}}$, and is defined as
$$\mathcal{L}^{(i)} = \frac{1}{N} \sum_{j} d\!\left(\tilde{r}^{(i)}_j,\, \hat{\tilde{r}}^{(i)}_j\right) \tag{8}$$
where $j$ is the index of lattice sites. The distance function $d$ between site $j$ of $\tilde{r}$ and site $j$ of $\hat{\tilde{r}}$ is the sum of the squared differences over all spin components $k$:
$$d\!\left(\tilde{r}^{(i)}_j,\, \hat{\tilde{r}}^{(i)}_j\right) = \sum_{k \in \{x,y,z\}} \left(\tilde{r}^{k,(i)}_j - \hat{\tilde{r}}^{k,(i)}_j\right)^2 \tag{9}$$
where i is the index of training data.
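A direct TensorFlow transcription of Eqs. (8) and (9) could look like the following; pairing it with the Adam optimizer is our assumption, as the optimizer is not specified here.

```python
import tensorflow as tf

def residue_loss(r_true, r_pred):
    """Eq. (9): per-site sum of squared differences over the x, y, z
    components; Eq. (8): mean over the lattice sites."""
    d = tf.reduce_sum(tf.square(r_true - r_pred), axis=-1)
    return tf.reduce_mean(d, axis=[1, 2, 3])

# model.compile(optimizer="adam", loss=residue_loss)  # optimizer assumed
```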
Converting the predicted normalized residue $\hat{\tilde{r}}$ to the residue $\hat{r}$
For our Deep Learning, the inputs into the U-Net are the initial spin configurations $S(t_0)$ and the spin configurations $S_{\Delta t}(t_0 + \Delta t)$ generated by spin dynamics simulations, and the output is the predicted normalized residue $\hat{\tilde{r}}$. We finally predict the spin configuration for 100 time steps of $\delta t$ using the trained Deep Learning model as $S'(t_0 + \Delta t) = S_{\Delta t}(t_0 + \Delta t) + \hat{r}$, where the predicted residue is obtained by inverting Eq. (7): $\hat{r}^k_j = \hat{\tilde{r}}^k_j\,(b^k - a^k) + a^k$.
Results
The effectiveness of our proposed Deep Learning method is evaluated at three temperatures (in units of $J/k_B$): one in the ordered phase, one near criticality at $T = 1.44$, and one in the disordered phase at $T = 2.4$. Note that at $T = 2.4$ the system is in a disordered state and spatial correlations between spins are very short ranged. One hundred independent spin configurations are generated by Monte-Carlo simulation for use as test data sets at each temperature. The second-order Suzuki–Trotter decomposition method is used for all experiments in this paper.
To evaluate the accuracy of the simulation results, the correlation is investigated by comparing each spin dynamics trajectory with a highly accurate trajectory computed with a much smaller reference time step, which we found gives accurate trajectories. The correlation as a function of time $t$, comparing the configuration $S(t)$ with the reference configuration $S_{\mathrm{ref}}(t)$, is given by
$$C(t) = \frac{1}{N} \sum_{j=1}^{N} \mathbf{S}_j(t) \cdot \mathbf{S}^{\mathrm{ref}}_j(t) \tag{10}$$
where the index $j$ denotes the lattice site, $L$ is the linear dimension of the lattice, and $N = L^3$ is the total number of spins. Since the initial spin configurations are the same, $C(0)$ is identical to 1. We compute one hundred correlations $C_i(t)$, where $i$ runs from 1 to 100, and then estimate the mean $\langle C(t) \rangle$ and the standard deviation std $C(t)$ of the correlation as functions of time at each temperature.
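In code, Eq. (10) is simply a per-site dot product averaged over the lattice; a small NumPy sketch (our illustration) is:

```python
import numpy as np

def correlation(S, S_ref):
    """Eq. (10): C(t) = (1/N) sum_j S_j(t) . S_j^ref(t); equals 1 at t = 0
    for identical initial configurations and decays as errors accumulate."""
    return np.mean(np.sum(S * S_ref, axis=-1))
```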
The Suzuki–Trotter decomposition method provides important properties such as conservation of the energy $E$ and the magnetization $\mathbf{M}$, and time reversibility. We wish to compare the conservation of energy and magnetization across one hundred samples, but their starting spin configurations differ. In order to take statistics across the samples, we shift the energy and magnetization of the initial spin configurations to zero. Eqs. (11) and (12) show how we shift the energy per site $e(t)$ and the magnetization per site $m(t)$ at each time step $t$. Here, $Q$ represents the number of samples at each temperature; we use $Q = 100$.
$$\Delta e_q(t) = e_q(t) - e_q(0), \qquad q = 1, \dots, Q \tag{11}$$
$$\Delta m_q(t) = m_q(t) - m_q(0), \qquad q = 1, \dots, Q \tag{12}$$
With the shifting of energy and magnetization, we can compute the mean of the absolute energy per site $\langle |\Delta e(t)| \rangle$, the mean of the absolute magnetization per site $\langle |\Delta m(t)| \rangle$, and the standard deviations std $\Delta e(t)$ and std $\Delta m(t)$ over the independent samples.
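The sketch below (ours, with the energy and magnetization estimators written for the Hamiltonian above) illustrates the shifting and the statistics across the $Q$ samples:

```python
import numpy as np

def energy_per_site(S, J=1.0):
    """e = -(J/N) sum over nearest-neighbour bonds of S_i . S_j,
    counting each bond once, with periodic boundaries."""
    N = S[..., 0].size
    return -J * sum(np.sum(S * np.roll(S, -1, axis=ax)) for ax in range(3)) / N

def magnetization_per_site(S):
    """m = |sum_i S_i| / N."""
    return np.linalg.norm(S.sum(axis=(0, 1, 2))) / S[..., 0].size

def shifted_stats(series):
    """Eqs. (11)-(12): shift each sample's series by its t = 0 value, then
    take the mean absolute value and standard deviation over the Q samples.
    `series` has shape (Q, T)."""
    d = series - series[:, :1]
    return np.abs(d).mean(axis=0), d.std(axis=0)
```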
In Fig. 2, the spin-spin correlation plots, computed against the reference trajectory generated at the reference time step, are shown for the ordered-phase temperature [Fig. 2a,d], $T = 1.44$ [Fig. 2b,e], and $T = 2.4$ [Fig. 2c,f]. At the lowest temperature, correlations remain high (red, yellow, and blue lines) except for the large time step $\Delta t$ without Deep Learning corrections (black line), where the correlation drops after a short time. This is due to the accumulation of errors at large time steps. The correlation is recovered with Deep Learning corrections (blue line). Indeed, the correlations for $\Delta t$ with Deep Learning corrections are as good as those for a ten times smaller time step without Deep Learning corrections (yellow line), demonstrating a 10 times speed up. At $T = 1.44$ and $T = 2.4$, the spin-spin correlation drops faster than at the lowest temperature even for short time steps (green and violet lines), due to disorder in the spin lattices.
We define the threshold time $\tau$ as the average time required for the spin-spin correlation to drop from 1 to 0.99. In Fig. 2g, $\tau$ is plotted as a function of temperature with a logarithmic scale on the y-axis. Simulations with the short time step $\delta t$ have a higher threshold time (red squares) at each temperature than those with the large time step $\Delta t$ without Deep Learning corrections. The threshold time for $\Delta t$ with Deep Learning corrections (filled blue diamonds) approaches almost the same threshold time as that of the ten times smaller step without Deep Learning corrections (yellow circles) at each temperature.
Figure 3 (energy) and Fig. 4 (magnetization) show $\langle |\Delta e(t)| \rangle$, std $\Delta e(t)$, $\langle |\Delta m(t)| \rangle$, and std $\Delta m(t)$ as functions of time at the three temperatures [Figs. 3a and 4a; 3b and 4b; 3c and 4c]. For the short time steps (yellow and red lines), conservation of both energy and magnetization is good, as shown by the relatively constant mean plots and the small standard deviations across independent simulations. At $T = 1.44$ and $T = 2.4$, both energy and magnetization are not conserved in simulations without Deep Learning corrections at the large time step (black line). On the other hand, conservation is recovered using Deep Learning corrections (blue line). In Fig. 3c, at $T = 2.4$, the system is disordered, and the mean of the absolute energy and the mean of the absolute magnetization become more constant, simply due to averaging over disordered spins. In particular, Fig. 4c shows that at $T = 2.4$ this averaging effect is stronger for the magnetization than for the energy. At high temperature, the number of possible states increases exponentially, and hence fitting by Deep Learning corrections becomes more difficult.
Discussion
Our results demonstrate that Deep Learning corrections enhance the time integration step of the original Suzuki–Trotter method, achieving a 10 times computational speed up while maintaining accuracy compared to the original Suzuki–Trotter decomposition method. Because the interactions in the lattice are local, between nearest neighbours, a convolutional Deep Neural Network is a natural choice of architecture. Since convolution is translationally invariant, the effect of lattice size on training our U-Net is not a major concern; for example, between the smaller and larger lattices, the time required for training the U-Net parameters increases by about 4 times, which is sub-linear with respect to the number of lattice sites. Our Deep Learning model was trained on simulation data generated at the short time step, yet its accuracy is equivalent to that of simulations at a ten times larger time step. This shows that our Deep Learning training has not reached its theoretical limit of a perfect prediction. This limit could be achieved exactly only by training a model of infinite capacity on an infinite amount of data. In practice, Deep Learning methods cannot be perfect because the amount of data and the capacity of the U-Net are finite. The main sources of inaccuracy in our method are that the U-Net's output does not fit the labeled data exactly, and that even where the U-Net fits its training data, it may not predict perfectly on data it has never seen. For future work, we will explore the effects of Deep Learning corrections on higher-order Suzuki–Trotter decompositions. We will also apply Deep Learning corrections to other settings, such as off-lattice systems and integrators such as velocity-Verlet.
Acknowledgements
We are grateful to Mustafa Umit Oner for useful suggestions. We would like to thank Kaicheng Liang, Mahsa Paknezhad, Connie Kou, Shier Nee Saw, Liu Wei, and Kenta Shiina for proofreading our paper and giving valuable comments. This work was supported by the Biomedical Research Council of A*STAR (Agency for Science, Technology and Research), Singapore, and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1003743). S.J.P. is grateful to the A*STAR Research Attachment Programme (ARAP) of Singapore for financial support.
Author contributions
S.J.P., W.S.K., and H.K.L. contributed to the discussion and development of the project. All authors approved the final manuscript.
Data availability
The data and code that support the findings of this study are available from corresponding authors upon request.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information is available for this paper at 10.1038/s41598-020-70558-1.
References
1. Gutfleisch, O. et al. Magnetic materials and devices for the 21st century: stronger, lighter, and more energy efficient. Adv. Mater. 23, 821–842 (2011). doi:10.1002/adma.201002180
2. Sugimoto, S. Current status and recent topics of rare-earth permanent magnets. J. Phys. D Appl. Phys. 44, 064001 (2011). doi:10.1088/0022-3727/44/6/064001
3. Slaughter, J. Materials for magnetoresistive random access memory. Annu. Rev. Mater. Res. 39, 277–296 (2009). doi:10.1146/annurev-matsci-082908-145355
4. Bigot, J.-Y. & Vomir, M. Ultrafast magnetization dynamics of nanostructures. Ann. Phys. 525, 2–30 (2013). doi:10.1002/andp.201200199
5. Walowski, J. & Münzenberg, M. Perspective: ultrafast magnetism and THz spintronics. J. Appl. Phys. 120, 140901 (2016). doi:10.1063/1.4958846
6. Lee, H. K. & Yuan, Z. Studies of the magnetization reversal process driven by an oscillating field. J. Appl. Phys. 101, 033903 (2007). doi:10.1063/1.2426381
7. Kryder, M. et al. Heat assisted magnetic recording. Proc. IEEE 96, 1810–1835 (2008). doi:10.1109/JPROC.2008.2004315
8. Lee, H. K. & Okabe, Y. Exchange bias with interacting random antiferromagnetic grains. Phys. Rev. B 73, 140403 (2006). doi:10.1103/PhysRevB.73.140403
9. Lynn, J. W. Temperature dependence of the magnetic excitations in iron. Phys. Rev. B 11, 2624–2637 (1975). doi:10.1103/PhysRevB.11.2624
10. Landau, D. P. & Krech, M. Spin dynamics simulations of classical ferro- and antiferromagnetic model systems: comparison with theory and experiment. J. Phys. Condens. Matter 11, R179–R213 (1999). doi:10.1088/0953-8984/11/18/201
11. Frenkel, D. & Smit, B. Understanding Molecular Simulation: From Algorithms to Applications 2nd edn, Vol. 50 (Elsevier, Amsterdam, 1996).
12. Beeman, D. Some multistep methods for use in molecular dynamics calculations. J. Comput. Phys. 20, 130–139 (1976). doi:10.1016/0021-9991(76)90059-0
13. Allen, M. P. & Tildesley, D. J. Computer Simulation of Liquids (Clarendon Press, Oxford, 1988).
14. Kim, S. Time step and shadow Hamiltonian in molecular dynamics simulations. J. Korean Phys. Soc. 67, 418–422 (2015). doi:10.3938/jkps.67.418
15. Engle, R. D., Skeel, R. D. & Drees, M. Monitoring energy drift with shadow Hamiltonians. J. Comput. Phys. 206, 432–452 (2005). doi:10.1016/j.jcp.2004.12.009
16. Tsai, S.-H., Lee, H. K. & Landau, D. P. Molecular and spin dynamics simulations using modern integration methods. Am. J. Phys. 73, 615–624 (2005). doi:10.1119/1.1900096
17. Carrasquilla, J. & Melko, R. G. Machine learning phases of matter. Nat. Phys. 13, 431–434 (2017). doi:10.1038/nphys4035
18. Zhang, W., Liu, J. & Wei, T.-C. Machine learning of phase transitions in the percolation and XY models. Phys. Rev. E 99, 032142 (2019). doi:10.1103/PhysRevE.99.032142
19. Li, Z., Luo, M. & Wan, X. Extracting critical exponents by finite-size scaling with convolutional neural networks. Phys. Rev. B 99, 075418 (2019). doi:10.1103/PhysRevB.99.075418
20. van Nieuwenburg, E., Liu, Y.-H. & Huber, S. Learning phase transitions by confusion. Nat. Phys. 13, 435–439 (2017). doi:10.1038/nphys4037
21. Morningstar, A. & Melko, R. G. Deep learning the Ising model near criticality. J. Mach. Learn. Res. 18, 5975–5991 (2017).
22. Greitemann, J., Liu, K. & Pollet, L. Probing hidden spin order with interpretable machine learning. Phys. Rev. B 99, 060404 (2019). doi:10.1103/PhysRevB.99.060404
23. Huang, L. & Wang, L. Accelerated Monte Carlo simulations with restricted Boltzmann machines. Phys. Rev. B 95, 035105 (2017). doi:10.1103/PhysRevB.95.035105
24. Rapaport, D. C. The Art of Molecular Dynamics Simulation (Cambridge University Press, Cambridge, 2004).
25. Noé, F., Olsson, S., Köhler, J. & Wu, H. Boltzmann generators: sampling equilibrium states of many-body systems with deep learning. Science 365, eaaw1147 (2019). doi:10.1126/science.aaw1147
26. Fabiani, G. & Mentink, J. H. Investigating ultrafast quantum magnetism with machine learning. SciPost Phys. 7, 004 (2019). doi:10.21468/SciPostPhys.7.1.004
27. Weinberg, P. & Bukov, M. QuSpin: a Python package for dynamics and exact diagonalisation of quantum many body systems. Part I: spin chains. SciPost Phys. 2, 003 (2017). doi:10.21468/SciPostPhys.2.1.003
28. Kharkov, Y. A., Sotskov, V. E., Karazeev, A. A., Kiktenko, E. O. & Fedorov, A. K. Revealing quantum chaos with machine learning. Phys. Rev. B 101, 064406 (2020). doi:10.1103/PhysRevB.101.064406
29. Carleo, G. & Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 355, 602–606 (2017). doi:10.1126/science.aag2302
30. Chen, K., Ferrenberg, A. M. & Landau, D. P. Static critical behavior of three-dimensional classical Heisenberg models: a high-resolution Monte Carlo study. Phys. Rev. B 48, 3249–3256 (1993). doi:10.1103/PhysRevB.48.3249
31. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092 (1953). doi:10.1063/1.1699114
32. Binder, K. The Monte Carlo method for the study of phase transitions: a review of some recent progress. J. Comput. Phys. 59, 1–55 (1985). doi:10.1016/0021-9991(85)90106-8
33. Paauw, T., Compagner, A. & Bedeaux, D. Monte-Carlo calculation for the classical F.C.C. Heisenberg ferromagnet. Physica A 79, 1–17 (1975). doi:10.1016/0378-4371(75)90084-9
34. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. Preprint at arXiv:1505.04597 (2015).
35. Abadi, M. et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. Preprint at arXiv:1603.04467 (2016).