Significance
Understanding the evolution of the Universe requires a concerted effort of accurate observation of the sky and fast prediction of structures in the Universe. N-body simulation is an effective approach to predicting the structure formation of the Universe, though computationally expensive. Here, we build a deep neural network to predict the structure formation of the Universe. It outperforms the traditional fast analytical approximation and accurately extrapolates far beyond its training data. Our study shows that deep learning can generate complex 3D simulations in cosmology, providing a practical, accurate, and powerful alternative to the traditional way of producing approximate cosmological simulations.
Keywords: cosmology, deep learning, simulation
Abstract
Matter evolved under the influence of gravity from minuscule density fluctuations. Nonperturbative structure formed hierarchically over all scales and developed non-Gaussian features in the Universe, known as the cosmic web. To fully understand the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and use a large ensemble of computer simulations to compare with the observed data in order to extract the full information of our own Universe. However, to evolve billions of particles over billions of years, even with the simplest physics, is a daunting task. We build a deep neural network, the Deep Density Displacement Model (D³M), which learns from a set of prerun numerical simulations, to predict the nonlinear large-scale structure of the Universe with the Zel'dovich Approximation (ZA), an analytical approximation based on perturbation theory, as the input. Our extensive analysis demonstrates that D³M outperforms second-order Lagrangian perturbation theory (2LPT), the commonly used fast-approximate simulation method, in predicting cosmic structure in the nonlinear regime. We also show that D³M is able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. Our study proves that deep learning is a practical and accurate alternative to approximate 3D simulations of the gravitational structure formation of the Universe.
Astrophysicists require a large number of simulations to extract information from observations (1–8). At its core, modeling the structure formation of the Universe is a computationally challenging task; it involves evolving billions of particles with the correct physical model over a large volume and over billions of years (9–11). To simplify this task, we either simulate a large volume with simpler physics or a smaller volume with more complex physics. To produce the cosmic web (12) in a large volume, we select gravity, the most important component of the theory at large scales, to simulate. A gravity-only N-body simulation is the most popular and effective numerical method to predict the full 6D phase-space distribution of a large number of massive particles whose positions and velocities evolve over time in the Universe (13). Nonetheless, N-body simulations are relatively computationally expensive, which makes comparing the N-body-simulated large-scale structure (for different underlying cosmological parameters) with the observed Universe a challenging task. We propose a deep-learning model that predicts the structure formation as an alternative to N-body simulations.
Deep learning (14) is a fast-growing branch of machine learning, where recent advances have led to models that reach and sometimes exceed human performance across diverse areas, from analysis and synthesis of images (15–17), sound (18, 19), text (20, 21), and videos (22, 23) to complex control and planning tasks as they appear in robotics and game play (24–26). This new paradigm is also significantly impacting a variety of domains in the sciences, from biology (27, 28) to chemistry (29, 30) and physics (31, 32). In particular, in astronomy and cosmology, a growing number of recent studies are using deep learning for a variety of tasks, ranging from analysis of cosmic microwave background (33–35), large-scale structure (36, 37), and gravitational lensing effects (38, 39) to classification of different light sources (40–42).
The ability of these models to learn complex functions has motivated many to use them to understand the physics of interacting objects, leveraging image, video, and relational data (43–53). However, modeling the dynamics of billions of particles in N-body simulations poses a distinct challenge.
In this work, we show that a variation on the architecture of a well-known deep-learning model (54) can efficiently transform the first-order approximations of the displacement field and approximate the exact solutions, thereby producing accurate estimates of the large-scale structure. Our key objective is to prove that this approach is an accurate and computationally efficient alternative to expensive cosmological simulations, and, to this end, we provide an extensive analysis of the results in the following section.
The outcome of a typical N-body simulation depends on both the initial conditions and on cosmological parameters which affect the evolution equations. A striking discovery is that the Deep Density Displacement Model (D³M), trained by using a single set of cosmological parameters, generalizes to new sets of significantly different parameters, minimizing the need for training data on a diverse range of cosmological parameters.
Setup
We build a deep neural network, D³M, with the same input and output as an N-body simulation. The input to our D³M is the displacement field from the Zel'dovich Approximation (ZA) (55). A displacement vector is the difference between a particle's position at the target redshift z = 0 (i.e., the present time) and its Lagrangian position on a uniform grid. ZA evolves the particles on linear trajectories along their initial displacements. It is accurate when the displacement is small; therefore, ZA is frequently used to construct the initial conditions of N-body simulations (56). As for the ground truth, the target displacement field is produced by using FastPM (57), a recent approximate N-body–simulation scheme that is based on a particle-mesh (PM) solver. FastPM quickly approaches a full N-body simulation with high accuracy and provides a viable alternative to direct N-body simulations for the purpose of our study.
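As an illustration of the ZA input (not the authors' pipeline, which relied on standard cosmology software), the following is a minimal numpy sketch of a Zel'dovich displacement field computed from a linear overdensity grid; the function name and conventions here are ours:

```python
import numpy as np

def za_displacement(delta, box_size):
    """Zel'dovich displacement field from a linear overdensity grid.

    Solves delta = -div(Psi) in Fourier space via Psi(k) = i k / k^2 * delta(k).
    delta: real array of shape (N, N, N); returns a (3, N, N, N) displacement
    field in the same length units as box_size (periodic box assumed).
    """
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)    # full-axis wavenumbers
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)  # last (real-FFT) axis
    kx, ky, kz = np.meshgrid(k, k, kz, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                    # avoid division by zero at k = 0
    psi = np.empty((3, n, n, n))
    for i, ki in enumerate((kx, ky, kz)):
        psi_k = 1j * ki / k2 * delta_k
        psi_k[0, 0, 0] = 0.0                             # no mean displacement
        psi[i] = np.fft.irfftn(psi_k, s=delta.shape)
    return psi
```

For a single plane-wave perturbation this reproduces the analytic result Ψ_x = −(A/k) sin(kx) for δ = A cos(kx).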
A significantly faster approximation of N-body simulations is produced by second-order Lagrangian perturbation theory (2LPT), which bends each particle's trajectory with a quadratic correction (58). 2LPT is used in many cosmological analyses to generate a large number of cosmological simulations for comparing astronomical datasets against the physical model (59, 60) or for computing the covariance of the dataset (61–63). We regard 2LPT as an effective way to efficiently generate a relatively accurate description of the large-scale structure, and we therefore select 2LPT as the reference model for comparison with D³M.
We generate 10,000 pairs of ZA approximations as input and accurate FastPM approximations as the target. We use simulations of 32³ N-body particles in a volume of 128 h⁻¹Mpc (600 million light years, where h is the dimensionless Hubble parameter). The particles have a mean separation of 4 h⁻¹Mpc per dimension.
An important choice in our approach is training with a displacement field rather than a density field. The displacement field and the density field are two ways of describing the same distribution of particles. An equivalent way to describe a density field $\rho$ is the overdensity field, defined as $\delta = \rho/\bar{\rho} - 1$, with $\bar{\rho}$ denoting the mean density. The displacement field and overdensity field are related by Eq. 1.
$$1 + \delta(\mathbf{x}) = \int \mathrm{d}^3 q \,\delta_{\rm D}\!\left(\mathbf{x} - \mathbf{q} - \boldsymbol{\Psi}(\mathbf{q})\right) \tag{1}$$
When the displacement field is small and has zero curl, the choice of overdensity vs. displacement field for the output of the model is irrelevant, as there is a bijective map between these two representations, described, to linear order, by the equation:
$$\delta(\mathbf{x}) = -\nabla \cdot \boldsymbol{\Psi}(\mathbf{x}). \tag{2}$$
However, as the displacements grow into the nonlinear regime of structure formation, different displacement fields can produce identical density fields (e.g., ref. 64). Therefore, providing the model with the target displacement field during the training eliminates the ambiguity associated with the density field. Our inability to produce comparable results when using the density field as our input and target attests that relevant information resides in the displacement field (SI Appendix, Fig. S1).
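To make the displacement-to-density map of Eq. 1 concrete, here is a minimal numpy sketch of its discretized version: one particle per grid cell is moved from its Lagrangian position by the displacement and painted back onto the grid with nearest-grid-point assignment. This toy helper is ours, not part of the released code:

```python
import numpy as np

def overdensity_from_displacement(psi, box_size):
    """Discretized Eq. 1: displace one particle per grid cell by Psi(q),
    paint mass with nearest-grid-point assignment on a periodic box, and
    convert counts to overdensity delta = rho / rho_bar - 1."""
    _, n, _, _ = psi.shape
    cell = box_size / n
    q = np.indices((n, n, n)) * cell            # Lagrangian positions, shape (3, n, n, n)
    x = (q + psi) % box_size                    # final positions, periodic wrap
    idx = np.rint(x / cell).astype(int) % n     # nearest-grid-point indices
    flat = np.ravel_multi_index(
        (idx[0].ravel(), idx[1].ravel(), idx[2].ravel()), (n, n, n)
    )
    counts = np.bincount(flat, minlength=n**3).reshape(n, n, n)
    return counts / counts.mean() - 1.0
```

As a sanity check, a vanishing displacement (or a uniform shift by a whole cell) leaves the density field exactly uniform.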
Results and Analysis
Fig. 1 shows the displacement vector field as predicted by D³M (Left) and the associated point-cloud representation of the structure formation (Right). Structures such as clusters, filaments, and voids are identifiable in this point-cloud representation. We proceed to compare the accuracy of D³M and 2LPT against the ground truth.
Fig. 1.
The displacement vector field (Left) and the resulting density field (Right) produced by D³M. The vectors in Left are uniformly scaled down for better visualization.
Point-Wise Comparison.
Let $\boldsymbol{\Psi}$ denote a displacement field on the $N^3$ grid, where $N = 32$ is the number of spatial-resolution elements in each dimension. A natural measure of error is the relative error $|\boldsymbol{\Psi}_{\rm pred} - \boldsymbol{\Psi}_{\rm t}| / |\boldsymbol{\Psi}_{\rm t}|$, where $\boldsymbol{\Psi}_{\rm t}$ is the true displacement field (FastPM) and $\boldsymbol{\Psi}_{\rm pred}$ is the prediction from 2LPT or D³M. Fig. 2 compares this error for the different approximations in a 2D slice of a single simulation. We observe that D³M predictions are very close to the ground truth, with a maximum relative error of 1.10 over all 1,000 test simulations. For 2LPT, this number is significantly higher, at 4.23. On average, the D³M result comes with a 2.8% relative error, while for 2LPT it equals 9.3%.
Fig. 2.
The columns show 2D slices of the full particle distribution (Upper) and displacement vectors (Lower) from various models: FastPM, the target ground truth, a recent approximate N-body simulation scheme based on a PM solver (A); ZA, a simple linear model that evolves particles along their initial velocity vectors (B); 2LPT, a commonly used analytical approximation (C); and our deep-learning model D³M as presented in this work (D). FastPM (A) serves as the ground truth, and the points and vectors in B–D are colored by the relative difference between the predicted positions or displacement vectors and the target ones (A). The color map shows that denser regions have a higher error for all methods, which suggests that highly nonlinear regions are harder to predict correctly for all models: D³M, 2LPT, and ZA. Our D³M has the smallest differences from the ground truth among models B–D.
Two-Point Correlation Comparison.
As suggested by Fig. 2, denser regions seem to have a higher error for all methods; that is, more nonlinearity in structure formation creates larger errors for both D³M and 2LPT. We quantify the dependence of the error on scale with two- and three-point correlation analyses.
Cosmologists often use compressed summary statistics of the density field in their studies. The most widely used of these statistics are the two-point correlation function (2PCF) $\xi(r)$ and its Fourier transform, the power spectrum $P(k)$:
$$\xi(r) = \left\langle \delta(\mathbf{x})\,\delta(\mathbf{x}+\mathbf{r}) \right\rangle, \qquad \left\langle \delta(\mathbf{k})\,\delta^{*}(\mathbf{k}') \right\rangle = (2\pi)^3\,\delta_{\rm D}(\mathbf{k}-\mathbf{k}')\,P(k), \tag{3}$$
where the ensemble average is taken over all possible realizations of the Universe. Our Universe is observed to be both homogeneous and isotropic on large scales—i.e., without any special location or direction. This allows one to drop the dependence on $\mathbf{x}$ and on the direction of $\mathbf{r}$, leaving only the amplitude $r = |\mathbf{r}|$ in the final definition of $\xi(r)$. In the second equation, $\delta(\mathbf{k})$ is simply the Fourier transform of $\delta(\mathbf{x})$ and captures the dispersion of the plane-wave amplitudes at different scales in Fourier space. $\mathbf{k}$ is the 3D wavevector of the plane wave, and its amplitude $k$ (the wavenumber) is related to the wavelength $\lambda$ by $k = 2\pi/\lambda$. Due to the isotropy of the Universe, we drop the vector form of $\mathbf{k}$ and write $P(k)$.
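For readers who want to reproduce this kind of measurement, the following is a simple numpy estimator of the spherically averaged power spectrum of a gridded overdensity field. Production analyses (the clustering analysis in this paper used nbodykit) also deconvolve the mass-assignment window and handle bin weighting more carefully, so treat this as a sketch:

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=16):
    """Spherically averaged power spectrum of an overdensity grid: the
    discrete analog of Eq. 3, averaging |delta(k)|^2 in shells of k = |k|."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) * (box_size / n) ** 3   # approximate continuum FT
    kx = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2)
    power = np.abs(delta_k) ** 2 / box_size ** 3          # P(k) estimate, mode by mode
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    which = np.clip(np.digitize(kmag.ravel(), edges) - 1, 0, n_bins - 1)
    pk_sum = np.bincount(which, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, pk_sum / np.maximum(counts, 1)
```

A single plane-wave field then yields power in exactly one $k$ shell, as expected.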
Because FastPM, 2LPT, and D³M take the displacement field as both input and output, we also study the two-point statistics of the displacement field. The displacement power spectrum $P_\Psi(k)$ is defined as:
$$\sum_{i=x,y,z} \left\langle \Psi_i(\mathbf{k})\,\Psi_i^{*}(\mathbf{k}') \right\rangle = (2\pi)^3\,\delta_{\rm D}(\mathbf{k}-\mathbf{k}')\,P_\Psi(k). \tag{4}$$
We focus on the Fourier-space representation of the two-point correlation. Because the matter and the displacement power spectra take the same form, in what follows, we drop the subscript for the matter and displacement fields and use $P(k)$ to stand for both the matter and displacement power spectrum. We use the transfer function $T(k)$ and the correlation coefficient $r(k)$ as metrics to quantify the model performance against the ground truth (FastPM) in the two-point correlation. We define the transfer function as the square root of the ratio of the two power spectra,
$$T(k) = \sqrt{\frac{P_{\rm pred}(k)}{P_{\rm true}(k)}}, \tag{5}$$
where $P_{\rm pred}(k)$ is the density or displacement power spectrum as predicted by 2LPT or D³M, and, analogously, $P_{\rm true}(k)$ is the ground truth predicted by FastPM. The correlation coefficient $r(k)$ is a form of normalized cross-power spectrum,
$$r(k) = \frac{P_{\rm pred \times true}(k)}{\sqrt{P_{\rm pred}(k)\,P_{\rm true}(k)}}, \tag{6}$$
where $P_{\rm pred \times true}(k)$ is the cross-power spectrum between the 2LPT or D³M predictions and the ground-truth (FastPM) simulation result. The transfer function captures the discrepancy between amplitudes, while the correlation coefficient indicates the discrepancy between phases as a function of scale. For a perfectly accurate prediction, $T(k)$ and $r(k)$ are both 1. In particular, $1 - r^2(k)$ describes the stochasticity, the fraction of the variance in the prediction that cannot be explained by the true model.
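Both metrics can be computed directly from binned auto- and cross-spectra. The following numpy helper is our own simplified estimator of Eqs. 5 and 6 (grid units, no window deconvolution), intended only to make the definitions concrete:

```python
import numpy as np

def transfer_and_corr(pred, true, n_bins=8):
    """Binned transfer function T(k) and correlation coefficient r(k) of
    Eqs. 5 and 6: T = sqrt(P_pred / P_true), r = P_cross / sqrt(P_pred * P_true),
    with every spectrum summed in spherical shells of k = |k|."""
    n = pred.shape[0]
    fp, ft = np.fft.rfftn(pred), np.fft.rfftn(true)
    kx = np.fft.fftfreq(n)
    kz = np.fft.rfftfreq(n)
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2)
    edges = np.linspace(0.0, kmag.max(), n_bins + 1)
    which = np.clip(np.digitize(kmag.ravel(), edges) - 1, 0, n_bins - 1)

    def shell_sum(weights):
        return np.bincount(which, weights=weights.ravel(), minlength=n_bins)

    p_pred = shell_sum(np.abs(fp) ** 2)
    p_true = shell_sum(np.abs(ft) ** 2)
    p_cross = shell_sum((fp * np.conj(ft)).real)
    with np.errstate(divide="ignore", invalid="ignore"):
        T = np.sqrt(p_pred / p_true)
        r = p_cross / np.sqrt(p_pred * p_true)
    return T, r
```

For a prediction that is an exact rescaling of the truth, $T(k)$ equals the rescaling factor and $r(k) = 1$ in every shell.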
Fig. 3A shows the average power spectrum, transfer function $T(k)$, and stochasticity $1 - r^2(k)$ of the displacement field and the density field over 1,000 test simulations. The density transfer function of the 2LPT predictions is 2% smaller than that of FastPM on large scales (low $k$). This is expected, since 2LPT performs accurately only on very large scales ($k \to 0$). The displacement transfer function of 2LPT rises above 1 at intermediate $k$ and then drops sharply; the rise occurs because 2LPT overestimates the displacement power at small scales (e.g., ref. 65). There is a sharp drop of power near the voxel scale because smoothing over voxel scales in our predictions automatically erases power at scales smaller than the voxel size.
Fig. 3.
(A) Displacement and density power spectra of FastPM (orange), 2LPT (blue), and D³M (green) (Top); the transfer function $T(k)$—i.e., the square root of the ratio of the predicted power spectrum to the ground truth (Middle); and $1 - r^2(k)$, where $r(k)$ is the correlation coefficient between the predicted fields and the true fields (Bottom). Results are the averaged values of 1,000 test simulations. The transfer function and correlation coefficient of the D³M predictions are nearly perfect from large to intermediate scales and significantly outperform our benchmark 2LPT. (B) The ratios of the multipole coefficients $\zeta_\ell(r_1, r_2)$ of the two predicted 3PCFs to the target for several triangle configurations. The results are averaged over 10 test simulations. The error bars (padded regions) are the SDs derived from the 10 test simulations. The ratio shows that the 3PCF of D³M is closer than that of 2LPT to our target FastPM, with lower variance.
Now, we turn to the D³M predictions: Both the density and displacement transfer functions of the D³M differ from 1 by a mere 0.4% on large and intermediate scales, and this discrepancy only increases to 2% and 4% for the density field and displacement field, respectively, as $k$ increases to the Nyquist frequency. The stochasticity remains small over most scales; in other words, for both the density and displacement fields, the correlation coefficient between the D³M predictions and the FastPM simulations, all the way down to small scales, is >90%. The transfer function and correlation coefficient of the D³M predictions show that D³M reproduces the structure formation of the Universe from large to seminonlinear scales and significantly outperforms our benchmark model 2LPT in the two-point-function analysis. D³M only starts to deviate from the ground truth at fairly small scales. This is not surprising, as the deeply nonlinear evolution at these scales is more difficult to simulate accurately and appears to be intractable by current analytical theories (66).
Three-Point Correlation Comparison.
The three-point correlation function (3PCF) expresses the correlation of the field of interest among three locations in configuration space; its Fourier-space counterpart is the bispectrum. Here, we concentrate on the 3PCF for computational convenience:
$$\zeta(r_1, r_2, \theta) = \left\langle \delta(\mathbf{x})\,\delta(\mathbf{x}+\mathbf{r}_1)\,\delta(\mathbf{x}+\mathbf{r}_2) \right\rangle, \tag{7}$$
where $r_1 = |\mathbf{r}_1|$ and $r_2 = |\mathbf{r}_2|$. Translation invariance guarantees that $\zeta$ is independent of $\mathbf{x}$. Rotational symmetry further eliminates all direction dependence except the dependence on $\theta$, the angle between $\mathbf{r}_1$ and $\mathbf{r}_2$. The multipole moments of $\zeta$, $\zeta_\ell(r_1, r_2)$, with respect to the Legendre polynomials $P_\ell(\cos\theta)$ of degree $\ell$, can be efficiently estimated with pair counting (67). While the D³M inputs (computed by ZA) do not contain significant correlations beyond the second order (power-spectrum level), we expect D³M to generate densities with a 3PCF that mimics that of the ground truth.
We compare the 3PCF calculated from FastPM, 2LPT, and D³M by analyzing its multipole moments $\zeta_\ell(r_1, r_2)$. Fig. 3B shows the ratio of the binned multipole coefficients of the two predicted 3PCFs to the target for several triangle configurations, $\zeta_\ell^{\rm pred}(r_1, r_2)/\zeta_\ell^{\rm true}(r_1, r_2)$, where the prediction can be the 3PCF for D³M or 2LPT and the target is the 3PCF for FastPM. We used 10 radial bins for $r_1$ and $r_2$. The results are averaged over 10 test simulations, and the error bars are the SD. The ratio shows that the 3PCF of D³M is closer to FastPM than that of 2LPT, with smaller error bars. To further quantify our comparison, we calculate the relative 3PCF residual, defined by
$$\frac{1}{N_{\rm bins}} \sum_{\ell, r_1, r_2} \frac{\left| \zeta_\ell^{\rm pred}(r_1, r_2) - \zeta_\ell^{\rm true}(r_1, r_2) \right|}{\left| \zeta_\ell^{\rm true}(r_1, r_2) \right|}, \tag{8}$$
where $N_{\rm bins}$ is the number of $(\ell, r_1, r_2)$ bins. The mean relative 3PCF residual of the D³M predictions compared with FastPM is an order of magnitude smaller than that of 2LPT, which indicates that the D³M is far better at capturing non-Gaussian structure formation.
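Eq. 8 amounts to a mean of bin-wise relative errors. A one-line numpy sketch (the array layout is our assumption, with axes for $\ell$, $r_1$, and $r_2$):

```python
import numpy as np

def relative_3pcf_residual(zeta_pred, zeta_true):
    """Relative 3PCF residual of Eq. 8: the mean, over all (l, r1, r2) bins,
    of |zeta_pred - zeta_true| / |zeta_true|. Both arrays hold the binned
    multipole coefficients zeta_l(r1, r2) with matching shapes."""
    zeta_pred = np.asarray(zeta_pred, dtype=float)
    zeta_true = np.asarray(zeta_true, dtype=float)
    return float(np.mean(np.abs(zeta_pred - zeta_true) / np.abs(zeta_true)))
```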
Generalizing to New Cosmological Parameters
So far, we have trained our model using a "single" choice of the cosmological parameters $A_s$ (hereafter $A_0$) and $\Omega_m^0$ (68). $A_s$ is the primordial amplitude of the scalar perturbations from cosmic inflation, and $\Omega_m$ is the fraction of the total energy density that is matter at the present time; we call it the matter density parameter for short. The true exact values of these parameters are unknown, and different choices of these parameters change the large-scale structure of the Universe (Fig. 4).
Fig. 4.
We show the differences in the particle distributions and displacement fields when we change the cosmological parameters $A_s$ and $\Omega_m$. (A) The color map shows the difference in the particle distribution (Upper) and displacement fields (Lower) between the training amplitude $A_0$ and the two extremes, $A_s = 0.2\,A_0$ (Center) and $A_s = 1.8\,A_0$ (Right). (B) A similar comparison showing the difference in the particle distributions (Upper) and displacement fields (Lower) for smaller and larger values of $\Omega_m$ with regard to $\Omega_m^0$, which was used for training. While the difference for the smaller value ($\Omega_m = 0.1$) is larger, the displacement for the larger value ($\Omega_m = 0.5$) is more nonlinear. This nonlinearity is due to the concentration of mass and makes the prediction more difficult.
Here, we report an interesting observation: The D³M trained on a single set of parameters, in conjunction with ZA (which depends on $A_s$ and $\Omega_m$) as input, can predict the structure formation for widely different choices of $A_s$ and $\Omega_m$. From a computational point of view, this suggests the possibility of producing simulations for a diverse range of parameters with minimal training data.
Varying the Primordial Amplitude of Scalar Perturbations $A_s$.
After training the D³M using $A_s = A_0$, we change $A_s$ in the input of our test set by nearly one order of magnitude: $A_s = 0.2\,A_0$ and $A_s = 1.8\,A_0$. Again, we use 1,000 simulations for the analysis of each test case. The average relative displacement error of D³M per voxel remains close to the value obtained when the training and test data share the same parameters, and it stays well below the error for 2LPT at both the larger and smaller values of $A_s$.
Fig. 5A shows the transfer function and correlation coefficient for both D³M and 2LPT. The D³M performs much better than 2LPT for the larger $A_s$. For the smaller $A_s$, 2LPT does a better job than D³M at predicting the density transfer function and correlation coefficient at the largest scales; otherwise, D³M predictions are more accurate than 2LPT. We observe a similar trend in the 3PCF analysis: The 3PCFs of D³M predictions are notably better than the 2LPT ones for the larger $A_s$, compared with the smaller $A_s$, where they are only slightly better. These results confirm our expectation that increasing $A_s$ increases the nonlinearity of the structure-formation process. While 2LPT can predict fairly well in linear regimes, compared with D³M, its performance deteriorates with increased nonlinearity. It is interesting to note that the D³M prediction maintains its advantage, despite being trained on data from more linear regimes.
Fig. 5.
Similar plots as in Fig. 3A, except that we test the two-point statistics when we vary the cosmological parameters without changing the training set (which has different cosmological parameters) or the trained model. We show predictions from D³M and 2LPT when tested on different $A_s$ (A) and $\Omega_m$ (B). We show the transfer function—i.e., the square root of the ratio of the predicted power spectrum to the ground truth (Upper)—and $1 - r^2(k)$, where $r(k)$ is the correlation coefficient between the predicted fields and the true fields (Lower). The D³M prediction outperforms the 2LPT prediction at all but the largest scales, where perturbation theory works well (the linear regime).
Varying the Matter Density Parameter $\Omega_m$.
We repeat the same experiments, this time changing $\Omega_m$ to 0.5 and 0.1, while the model is trained on the fiducial $\Omega_m^0 = 0.3089$, which is quite far from both of the test sets. For $\Omega_m = 0.5$, the relative residual displacement errors of the D³M and 2LPT, averaged over 1,000 simulations, are 3.8% and 15.2%, and for $\Omega_m = 0.1$, they are 2.5% and 4.3%. Fig. 5 C and D show the two-point statistics for the density field predicted by using different values of $\Omega_m$. For $\Omega_m = 0.5$, the results show that the D³M outperforms 2LPT at all scales, while for the smaller $\Omega_m$, D³M outperforms 2LPT on all but the largest scales. As for the 3PCF of simulations with different values of $\Omega_m$, the mean relative 3PCF residuals of the D³M for $\Omega_m = 0.5$ and $\Omega_m = 0.1$ are 1.7% and 1.2%, respectively, and for 2LPT, they are 7.6% and 1.7%, respectively. The D³M prediction performs better, relative to 2LPT, at $\Omega_m = 0.5$ than at $\Omega_m = 0.1$. This is again because the Universe is much more nonlinear at $\Omega_m = 0.5$ than at $\Omega_m = 0.1$: The D³M learns more nonlinearity than is encoded in the formalism of 2LPT.
Conclusions
To summarize, our deep-learning model D³M can accurately predict the large-scale structure of the Universe, as represented by FastPM simulations, at all scales, as seen in Table 1. Furthermore, D³M learns to predict cosmic structure in the nonlinear regime more accurately than our benchmark model 2LPT. Finally, our model generalizes well to test simulations with cosmological parameters ($A_s$ and $\Omega_m$) significantly different from those of the training set. This suggests that our deep-learning model can potentially be deployed for a range of simulations beyond the parameter space covered by the training data (Table 1). Our results demonstrate that the D³M successfully learns the nonlinear mapping from first-order perturbation theory to FastPM simulations, beyond what higher-order perturbation theories currently achieve.
Table 1.
A summary of our analysis
| Data | Point-wise rel. error | T(k = 0.11) | r(k = 0.11) | T (high k) | r (high k) | Relative 3PCF residual |
| Test phase (same parameters as training) | | | | | | |
| 2LPT density | N/A | 0.96 | 1.00 | 0.74 | 0.94 | 0.0782 |
| D³M density | N/A | 1.00 | 1.00 | 0.99 | 1.00 | 0.0079 |
| 2LPT displacement | 0.093 | 0.96 | 1.00 | 1.04 | 0.90 | N/A |
| D³M displacement | 0.028 | 1.00 | 1.00 | 0.99 | 1.00 | N/A |
| $A_s = 1.8\,A_0$ | | | | | | |
| 2LPT density | N/A | 0.93 | 1.00 | 0.49 | 0.78 | 0.243 |
| D³M density | N/A | 1.00 | 1.00 | 0.98 | 1.00 | 0.039 |
| 2LPT displacement | 0.155 | 0.97 | 1.00 | 1.07 | 0.73 | N/A |
| D³M displacement | 0.039 | 1.00 | 1.00 | 0.97 | 0.99 | N/A |
| $A_s = 0.2\,A_0$ | | | | | | |
| 2LPT density | N/A | 0.99 | 1.00 | 0.98 | 0.99 | 0.024 |
| D³M density | N/A | 1.00 | 1.00 | 1.03 | 1.00 | 0.022 |
| 2LPT displacement | 0.063 | 0.99 | 1.00 | 0.95 | 0.98 | N/A |
| D³M displacement | 0.036 | 1.00 | 1.00 | 1.01 | 1.00 | N/A |
| $\Omega_m = 0.5$ | | | | | | |
| 2LPT density | N/A | 0.94 | 1.00 | 0.58 | 0.87 | 0.076 |
| D³M density | N/A | 1.00 | 1.00 | 1.00 | 1.00 | 0.017 |
| 2LPT displacement | 0.152 | 0.97 | 1.00 | 1.10 | 0.80 | N/A |
| D³M displacement | 0.038 | 1.00 | 1.00 | 0.98 | 0.99 | N/A |
| $\Omega_m = 0.1$ | | | | | | |
| 2LPT density | N/A | 0.97 | 1.00 | 0.96 | 0.99 | 0.017 |
| D³M density | N/A | 0.99 | 1.00 | 1.04 | 1.00 | 0.012 |
| 2LPT displacement | 0.043 | 0.97 | 1.00 | 0.97 | 0.98 | N/A |
| D³M displacement | 0.025 | 0.99 | 1.00 | 1.02 | 1.00 | N/A |
The unit of k is hMpc−1. N/A, not applicable.
Looking forward, we expect that replacing FastPM with exact N-body simulations would improve the performance of our method. As the complexity of our model is linear in the number of voxels, we expect to be able to further improve our results if we replace the FastPM simulations with higher-resolution simulations. Our work suggests that deep learning is a practical and accurate alternative to the traditional way of generating approximate simulations of the structure formation of the Universe.
Materials and Methods
Dataset.
The full simulation dataset consists of 10,000 ZA–FastPM input–output simulation pairs in boxes of 128 h⁻¹Mpc, with an effective volume of 20 (Gpc h⁻¹)³, comparable to the volume of a large spectroscopic sky survey like the Dark Energy Spectroscopic Instrument or Euclid. We split the full simulation dataset into 80%, 10%, and 10% for training, validation, and testing, respectively. We also generated 1,000 2LPT simulations for each set of tested cosmological parameters.
Model and Training.
The D³M adopts the U-Net architecture (54) with 15 convolution or deconvolution layers. Our D³M generalizes the standard U-Net architecture to work with 3D data (69–71). The details of the architecture are described in the following sections, and a schematic figure of the architecture is shown in SI Appendix, Fig. S2. In the training phase, we use the Adam optimizer (72) with a learning rate of 0.0001 and first- and second-moment exponential decay rates equal to 0.9 and 0.999, respectively. We use the mean-squared error as the loss function (Loss Function) and L2 regularization with a regularization coefficient of 0.0001.
Details of the Architecture.
The contracting path follows the typical architecture of a convolutional network. It consists of two blocks, each of which comprises two successive convolutions of stride 1 and a down-sampling convolution with stride 2. The convolution layers use 3×3×3 filters with a periodic padding of size 1 (Padding and Periodic Boundary) on both sides of each dimension. Note that at each of the two down-sampling steps, we double the number of feature channels. At the bottom of the D³M, another two successive convolutions with stride 1 and the same periodic padding as above are applied. The expansive path of our D³M is an inverted version of the contracting path of the network: It consists of two repeated applications of the expansion block, each of which comprises one up-sampling transposed convolution with stride 1/2 and two successive convolutions of stride 1. The transposed convolutions and the convolutions are constructed with 3×3×3 filters.
We take special care in the padding and cropping procedure to preserve the shifting and rotation symmetry in the up-sampling layer in expansive path. Before the transposed convolution, we apply a periodic padding of length 1 on the right, down, and back sides of the box [padding = (0,1,0,1,0,1) in pytorch], and after the transposed convolution, we discard one column on the left, up, and front sides of the box and two columns on the right, down, and back sides [crop = (1,2,1,2,1,2)].
A special feature of the U-Net is the concatenation procedure, in which the up-sampling layer halves the number of feature channels and then concatenates them with the corresponding feature channels on the contracting path, doubling the number of feature channels.
The expansive building blocks are then followed by a 1×1×1 convolution without padding, which converts the 64 features into the final 3D displacement field. All convolutions in the network except the last one are followed by a rectified linear unit activation and batch normalization.
Padding and Periodic Boundary.
It is common to use constant or reflective padding in deep models for image processing. However, these approaches are not suitable for our setting. The physical model we are learning is constructed on a spatial volume with a periodic boundary condition. This is sometimes also referred to as a torus geometry, where the boundaries of the simulation box are topologically connected—that is, $f(\mathbf{x} + L\hat{\mathbf{e}}_i) = f(\mathbf{x})$, where $\mathbf{x}$ is the spatial location and $L$ is the periodicity (the size of the box). Constant or reflective padding strategies break the connection between physically nearby points separated across the box boundary, which not only loses information but also introduces noise during the convolution, an effect further aggravated as the number of layers increases.
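The periodic boundary condition translates into "wrap-around" padding of the feature maps. A minimal numpy sketch (using np.pad with mode='wrap'; pytorch exposes the same behavior through its circular padding mode) illustrates the idea:

```python
import numpy as np

def periodic_pad(x, pad=1):
    """Pad a 3D field by `pad` cells on every side with its periodic images,
    implementing f(x + L) = f(x) at the box boundary (np.pad mode='wrap')."""
    return np.pad(x, pad, mode="wrap")
```

Cells just outside one face of the box are filled with the values from the opposite face, so a convolution sees the same neighborhood everywhere on the torus.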
We find that the periodic padding strategy significantly improves the performance and expedites the convergence of our model, compared with the same network using a constant padding strategy. This is not surprising, as one expects it to be easier to train a model whose structure can explain the data than one whose structure cannot.
Loss Function.
We train the D³M to minimize the mean square error on the particle displacements,
$$L = \frac{1}{N} \sum_{i=1}^{N} \left| \boldsymbol{\Psi}_{\rm pred}(\mathbf{q}_i) - \boldsymbol{\Psi}_{\rm true}(\mathbf{q}_i) \right|^2, \tag{9}$$
where $i$ labels the particles and $N$ is the total number of particles. This loss function is proportional to the integrated squared error, and, by using a Fourier transform and Parseval's theorem, it can be rewritten as
$$L \propto \int \frac{\mathrm{d}^3 k}{(2\pi)^3} \left[ \left( T(k) - r(k) \right)^2 + 1 - r^2(k) \right] P_{\rm true}(k), \tag{10}$$
where $\mathbf{q}$ is the Lagrangian-space position and $\mathbf{k}$ its corresponding wavevector. $T(k)$ is the transfer function defined in Eq. 5, and $r(k)$ is the correlation coefficient defined in Eq. 6, which characterize the similarity between the predicted and true fields in amplitude and phase, respectively. Eq. 10 shows that our simple loss function jointly captures both of these measures: As $T(k)$ and $r(k)$ approach 1, the loss function approaches 0.
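The equivalence between Eqs. 9 and 10 rests on Parseval's theorem, which can be checked numerically on a toy displacement field (the grid size and the normalization convention of numpy's unnormalized FFT are our choices here):

```python
import numpy as np

# Numerical check of the Parseval step that takes Eq. 9 to Eq. 10: the summed
# squared displacement error on the grid equals the same sum over Fourier
# modes, up to the 1/N^3 normalization of numpy's unnormalized FFT.
rng = np.random.default_rng(1)
psi_pred = rng.standard_normal((3, 8, 8, 8))   # toy predicted displacement field
psi_true = rng.standard_normal((3, 8, 8, 8))   # toy true displacement field

loss_real = np.sum((psi_pred - psi_true) ** 2)
diff_k = np.fft.fftn(psi_pred - psi_true, axes=(1, 2, 3))
loss_fourier = np.sum(np.abs(diff_k) ** 2) / 8**3  # Parseval: sum|f|^2 = sum|F|^2 / N^3
```

The two quantities agree to machine precision, so minimizing the real-space loss indeed drives the Fourier-space integrand of Eq. 10 toward zero at every scale.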
Data Availability.
The source code of our implementation is available at https://github.com/siyucosmo/ML-Recon. The code to generate the training data is also available at https://github.com/rainwoodman/fastpm.
Supplementary Material
Acknowledgments
We thank Angus Beane, Peter Braam, Gabriella Contardo, David Hogg, Laurence Levasseur, Pascal Ripoche, Zack Slepian, and David Spergel for useful suggestions and comments; Angus Beane for comments on the paper; and Nick Carriero for help on Center for Computational Astrophysics (CCA) computing clusters. The work was supported partially by the Simons Foundation. The FastPM simulations were generated on the computer cluster Edison at the National Energy Research Scientific Computing Center, a US Department of Energy Office of Science User Facility operated under Contract DE-AC02-05CH11231. The training of the neural network model was performed on the CCA computing facility and the Carnegie Mellon University AutonLab computing facility. The open-source software toolkit nbodykit (73) was used for the clustering analysis. Y.L. was supported by the Berkeley Center for Cosmological Physics and the Kavli Institute for the Physics and Mathematics of the Universe, established by the World Premier International Research Center Initiative of the MEXT, Japan. S. Ho was supported by NASA Grants 15-WFIRST15-0008 and Research Opportunities in Space and Earth Sciences Grant 12-EUCLID12-0004; and the Simons Foundation.
Footnotes
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
Data deposition: The source code of our implementation is available at https://github.com/siyucosmo/ML-Recon. The code to generate the training data is available at https://github.com/rainwoodman/fastpm.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1821458116/-/DCSupplemental.
References
- 1. Colless M., et al., The 2dF galaxy redshift survey: Spectra and redshifts. Mon. Not. R. Astron. Soc. 328, 1039–1063 (2001).
- 2. Eisenstein D. J., et al., SDSS-III: Massive spectroscopic surveys of the distant universe, the Milky Way galaxy, and extra-solar planetary systems. Astron. J. 142, 72 (2011).
- 3. Jones H. D., et al., The 6dF galaxy survey: Final redshift release (DR3) and southern large-scale structures. Mon. Not. R. Astron. Soc. 399, 683–698 (2009).
- 4. Liske J., et al., Galaxy and Mass Assembly (GAMA): End of survey report and data release 2. Mon. Not. R. Astron. Soc. 452, 2087–2126 (2015).
- 5. Scodeggio M., et al., The VIMOS Public Extragalactic Redshift Survey (VIPERS): Full spectroscopic data and auxiliary information release (PDR-2). arXiv:1611.07048 (21 November 2016).
- 6. Ivezić Ž., et al., LSST: From science drivers to reference design and anticipated data products. arXiv:0805.2366 (15 May 2008).
- 7. Amendola L., et al., Cosmology and fundamental physics with the Euclid satellite. Living Rev. Relativ. 21, 2 (2018).
- 8. Spergel D., et al., Wide-Field InfraRed Survey Telescope–Astrophysics Focused Telescope Assets WFIRST-AFTA 2015 report. arXiv:1503.03757 (12 March 2015).
- 9. MacFarland T., Couchman H. M. P., Pearce F. R., Pichlmeier J., A new parallel P3M code for very large-scale cosmological simulations. New Astron. 3, 687–705 (1998).
- 10. Springel V., Yoshida N., White S. D. M., GADGET: A code for collisionless and gasdynamical cosmological simulations. New Astron. 6, 79–117 (2001).
- 11. Bagla J. S., TreePM: A code for cosmological N-body simulations. J. Astrophys. Astron. 23, 185–196 (2002).
- 12. Bond J. R., Kofman L., Pogosyan D., How filaments of galaxies are woven into the cosmic web. Nature 380, 603–606 (1996).
- 13. Davis M., Efstathiou G., Frenk C. S., White S. D. M., The evolution of large-scale structure in a universe dominated by cold dark matter. Astrophys. J. 292, 371–394 (1985).
- 14. LeCun Y., Bengio Y., Hinton G., Deep learning. Nature 521, 436–444 (2015).
- 15. Huang G., Liu Z., Van Der Maaten L., Weinberger K. Q., “Densely connected convolutional networks” in Proceedings: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, Piscataway, NJ, 2017), pp. 2261–2269.
- 16. Karras T., Aila T., Laine S., Lehtinen J., Progressive growing of GANs for improved quality, stability, and variation. arXiv:1710.10196 (27 October 2017).
- 17. Gulrajani I., Ahmed F., Arjovsky M., Dumoulin V., Courville A. C., “Improved training of Wasserstein GANs” in Advances in Neural Information Processing Systems 30 (NIPS 2017), Guyon I., et al., Eds. (Neural Information Processing Systems Foundation, Inc., San Diego, CA, 2017), pp. 5767–5779.
- 18. Van Den Oord A., et al., WaveNet: A generative model for raw audio. arXiv:1609.03499 (12 September 2016).
- 19. Amodei D., et al., “Deep speech 2: End-to-end speech recognition in English and Mandarin” in Proceedings of the 33rd International Conference on Machine Learning, Balcan M. F., Weinberger K. Q., Eds. (Association for Computing Machinery, New York, NY, 2016), pp. 173–182.
- 20. Hu Z., Yang Z., Liang X., Salakhutdinov R., Xing E. P., Toward controlled generation of text. arXiv:1703.00955 (2 March 2017).
- 21. Vaswani A., et al., “Attention is all you need” in Advances in Neural Information Processing Systems 30 (NIPS 2017), Guyon I., et al., Eds. (Neural Information Processing Systems Foundation, Inc., San Diego, CA, 2017), pp. 5998–6008.
- 22. Denton E., Fergus R., Stochastic video generation with a learned prior. arXiv:1802.07687 (21 February 2018).
- 23. Donahue J., et al., “Long-term recurrent convolutional networks for visual recognition and description” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, Piscataway, NJ, 2015), pp. 2625–2634.
- 24. Silver D., et al., Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).
- 25. Mnih V., et al., Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015).
- 26. Levine S., Finn C., Darrell T., Abbeel P., End-to-end training of deep visuomotor policies. J. Mach. Learn. Res. 17, 1334–1373 (2016).
- 27. Ching T., et al., Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interf. 15, 20170387 (2018).
- 28. Alipanahi B., Delong A., Weirauch M. T., Frey B. J., Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat. Biotechnol. 33, 831–838 (2015).
- 29. Segler M. H. S., Preuss M., Waller M. P., Planning chemical syntheses with deep neural networks and symbolic AI. Nature 555, 604–610 (2018).
- 30. Gilmer J., Schoenholz S. S., Riley P. F., Vinyals O., Dahl G. E., Neural message passing for quantum chemistry. arXiv:1704.01212 (4 April 2017).
- 31. Carleo G., Troyer M., Solving the quantum many-body problem with artificial neural networks. Science 355, 602–606 (2017).
- 32. Adam-Bourdarios C., et al., “The Higgs boson machine learning challenge” in Proceedings of the NIPS 2014 Workshop on High-Energy Physics and Machine Learning (Neural Information Processing Systems Foundation, Inc., San Diego, CA, 2015), pp. 19–55.
- 33. He S., Ravanbakhsh S., Ho S., “Analysis of cosmic microwave background with deep learning” in Proceedings of the 33rd International Conference on Machine Learning (Journal of Machine Learning Research, 2016), vol. 48.
- 34. Perraudin N., Defferrard M., Kacprzak T., Sgier R., DeepSphere: Efficient spherical convolutional neural network with HEALPix sampling for cosmological applications. arXiv:1810.12186 (29 October 2018).
- 35. Caldeira J., et al., DeepCMB: Lensing reconstruction of the cosmic microwave background with deep neural networks. arXiv:1810.01483 (2 October 2018).
- 36. Ravanbakhsh S., et al., Estimating cosmological parameters from the dark matter distribution. arXiv:1711.02033 (6 November 2017).
- 37. Mathuriya A., et al., CosmoFlow: Using deep learning to learn the universe at scale. arXiv:1808.04728 (14 August 2018).
- 38. Hezaveh Y. D., Levasseur L. P., Marshall P. J., Fast automated analysis of strong gravitational lenses with convolutional neural networks. Nature 548, 555–557 (2017).
- 39. Lanusse F., et al., CMU DeepLens: Deep learning for automatic image-based galaxy-galaxy strong lens finding. Mon. Not. R. Astron. Soc. 473, 3895–3906 (2018).
- 40. Kennamer N., Kirkby D., Ihler A., Sanchez-Lopez F. J., “ContextNet: Deep learning for star galaxy classification” in Proceedings of the 35th International Conference on Machine Learning, Dy J., Krause A., Eds. (Proceedings of Machine Learning Research, Journal of Machine Learning Research, 2018), vol. 80, pp. 2582–2590.
- 41. Kim E. J., Brunner R. J., Star-galaxy classification using deep convolutional neural networks. Mon. Not. R. Astron. Soc. 464, 4463–4475 (2016).
- 42. Lochner M., McEwen J. D., Peiris H. V., Lahav O., Winter M. K., Photometric supernova classification with machine learning. Astrophys. J. Suppl. Ser. 225, 31 (2016).
- 43. Battaglia P. W., Hamrick J. B., Tenenbaum J. B., Simulation as an engine of physical scene understanding. Proc. Natl. Acad. Sci. U.S.A. 110, 18327–18332 (2013).
- 44. Battaglia P., et al., “Interaction networks for learning about objects, relations and physics” in Proceedings of the 30th International Conference on Neural Information Processing Systems, Lee D. D., von Luxburg U., Garnett R., Sugiyama M., Guyon I., Eds. (Association for Computing Machinery, New York, NY, 2016), pp. 4502–4510.
- 45. Mottaghi R., Bagherinezhad H., Rastegari M., Farhadi A., “Newtonian scene understanding: Unfolding the dynamics of objects in static images” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, Piscataway, NJ, 2016), pp. 3521–3529.
- 46. Chang M. B., Ullman T., Torralba A., Tenenbaum J. B., A compositional object-based approach to learning physical dynamics. arXiv:1612.00341 (1 December 2016).
- 47. Wu J., Yildirim I., Lim J. J., Freeman B., Tenenbaum J., “Galileo: Perceiving physical object properties by integrating a physics engine with deep learning” in Proceedings of the 28th International Conference on Neural Information Processing Systems, Cortes C., Lee D. D., Sugiyama M., Garnett R., Eds. (Association for Computing Machinery, New York, NY, 2015), pp. 127–135.
- 48. Wu J., Lim J. J., Zhang H., Tenenbaum J. B., Freeman W. T., “Physics 101: Learning physical object properties from unlabeled videos” in Proceedings of the British Machine Vision Conference 2016, Wilson R. C., Hancock E. R., Smith W. A. P., Eds. (BMVA Press, Durham, UK, 2016), pp. 39.1–39.12.
- 49. Watters N., et al., “Visual interaction networks: Learning a physics simulator from video” in Advances in Neural Information Processing Systems (Neural Information Processing Systems Foundation, Inc., 2017), vol. 30, pp. 4539–4547.
- 50. Lerer A., Gross S., Fergus R., Learning physical intuition of block towers by example. arXiv:1603.01312 (3 March 2016).
- 51. Agrawal P., Nair A. V., Abbeel P., Malik J., Levine S., “Learning to poke by poking: Experiential learning of intuitive physics” in Advances in Neural Information Processing Systems, Lee D. D., Sugiyama M., Luxburg U. V., Guyon I., Garnett R., Eds. (Neural Information Processing Systems Foundation, Inc., 2016), vol. 29, pp. 5074–5082.
- 52. Fragkiadaki K., Agrawal P., Levine S., Malik J., Learning visual predictive models of physics for playing billiards. arXiv:1511.07404 (23 November 2015).
- 53. Tompson J., Schlachter K., Sprechmann P., Perlin K., Accelerating Eulerian fluid simulation with convolutional networks. arXiv:1607.03597 (13 July 2016).
- 54. Ronneberger O., Fischer P., Brox T., “U-Net: Convolutional networks for biomedical image segmentation” in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Navab N., Hornegger J., Wells W., Frangi A., Eds. (Lecture Notes in Computer Science, Springer, Cham, Switzerland, 2015), vol. 9351, pp. 234–241.
- 55. Zel’dovich Y. B., Gravitational instability: An approximate theory for large density perturbations. Astron. Astrophys. 5, 84–89 (1970).
- 56. White M., The Zel’dovich approximation. Mon. Not. R. Astron. Soc. 439, 3630–3640 (2014).
- 57. Feng Y., Chu M.-Y., Seljak U., McDonald P., FASTPM: A new scheme for fast simulations of dark matter and haloes. Mon. Not. R. Astron. Soc. 463, 2273–2286 (2016).
- 58. Buchert T., Lagrangian theory of gravitational instability of Friedman-Lemaitre cosmologies—a generic third-order model for nonlinear clustering. Mon. Not. R. Astron. Soc. 267, 811–820 (1994).
- 59. Jasche J., Wandelt B. D., Bayesian physical reconstruction of initial conditions from large-scale structure surveys. Mon. Not. R. Astron. Soc. 432, 894–913 (2013).
- 60. Kitaura F.-S., The initial conditions of the Universe from constrained simulations. Mon. Not. R. Astron. Soc. 429, L84–L88 (2013).
- 61. Dawson K. S., et al., The Baryon oscillation spectroscopic survey of SDSS-III. Astron. J. 145, 10 (2013).
- 62. Dawson K. S., et al., The SDSS-IV extended Baryon oscillation spectroscopic survey: Overview and early data. Astron. J. 151, 44 (2016).
- 63. DESI Collaboration et al., The DESI Experiment part I: Science, targeting, and survey design. arXiv:1611.00036 (31 October 2016).
- 64. Feng Y., Seljak U., Zaldarriaga M., Exploring the posterior surface of the large scale structure reconstruction. J. Cosmol. Astropart. Phys. 7, 043 (2018).
- 65. Chan K. C., Helmholtz decomposition of the Lagrangian displacement. Phys. Rev. D 89, 083515 (2014).
- 66. Perko A., Senatore L., Jennings E., Wechsler R. H., Biased tracers in redshift space in the EFT of large-scale structure. arXiv:1610.09321 (28 October 2016).
- 67. Slepian Z., Eisenstein D. J., Computing the three-point correlation function of galaxies in O(N^2) time. Mon. Not. R. Astron. Soc. 454, 4142–4158 (2015).
- 68. Planck Collaboration et al., Planck 2015 results. XIII. Cosmological parameters. Astron. Astrophys. 594, A13 (2016).
- 69. Milletari F., Navab N., Ahmadi S.-A., V-Net: Fully convolutional neural networks for volumetric medical image segmentation. arXiv:1606.04797 (15 June 2016).
- 70. Berger P., Stein G., A volumetric deep convolutional neural network for simulation of mock dark matter halo catalogues. Mon. Not. R. Astron. Soc. 482, 2861–2871 (2019).
- 71. Aragon-Calvo M. A., Classifying the large scale structure of the universe with deep neural networks. arXiv:1804.00816 (3 April 2018).
- 72. Kingma D., Ba J., Adam: A method for stochastic optimization. arXiv:1412.6980 (22 December 2014).
- 73. Hand N., et al., nbodykit: An open-source, massively parallel toolkit for large-scale structure. Astron. J. 156, 160 (2018).
Associated Data
Data Availability Statement
The source code of our implementation is available at https://github.com/siyucosmo/ML-Recon. The code to generate the training data is also available at https://github.com/rainwoodman/fastpm.