Proceedings of the National Academy of Sciences of the United States of America
2023 Jul 19;120(30):e2305765120. doi: 10.1073/pnas.2305765120

Forecasting small-scale dynamics of fluid turbulence using deep neural networks

Dhawal Buaria a,b,1, Katepalli R Sreenivasan a,c,1
PMCID: PMC10372621  PMID: 37467268

Significance

In recent decades, direct numerical simulations (DNS) of the Navier–Stokes equations have become a prominent tool for studying the turbulent flows ubiquitously encountered in nature and technology. However, both practical and theoretical requirements for increasing the Reynolds number present a challenge for DNS: As the Reynolds number increases, cutting-edge simulations need to be larger and longer, rendering them prohibitively expensive. Alternatively, can one leverage deep neural networks to learn from existing simulations at lower Reynolds numbers and thus cheaply and accurately predict small-scale properties at higher, unseen Reynolds numbers? This study represents a significant step toward answering this fundamental question.

Keywords: fluid dynamics, turbulence, deep learning, intermittency, extreme events

Abstract

Turbulence in fluid flows is characterized by a wide range of interacting scales. Since the scale range increases as some power of the flow Reynolds number, a faithful simulation of the entire scale range is prohibitively expensive at high Reynolds numbers. The most expensive aspect concerns the small-scale motions; thus, major emphasis is placed on understanding and modeling them, taking advantage of their putative universality. In this work, using physics-informed deep learning methods, we present a modeling framework to capture and predict the small-scale dynamics of turbulence, via the velocity gradient tensor. The model is based on obtaining functional closures for the pressure Hessian and viscous Laplacian contributions as functions of the velocity gradient tensor. This task is accomplished using deep neural networks that are consistent with physical constraints and explicitly incorporate Reynolds number dependence to account for small-scale intermittency. We then utilize a massive direct numerical simulation database, spanning two orders of magnitude in the large-scale Reynolds number, for training and validation. The model learns from low to moderate Reynolds numbers and successfully predicts velocity gradient statistics at both seen and higher (unseen) Reynolds numbers. The success of our present approach demonstrates the viability of deep learning over traditional modeling approaches in capturing and predicting small-scale features of turbulence.


Turbulent fluid flows, ubiquitous in nature and technology, are characterized by strong and chaotic fluctuations across a wide range of interacting scales in space and time. Such multiscale interactions are highly nonlinear, leading to mathematical intractability of the governing equations. Consequently, turbulence has defied an adequate framework despite a sustained effort in physics, mathematics, and engineering, and our present understanding remains incomplete, often relying on phenomenological approaches (1–3). An essential notion in this regard is that of small-scale universality (4), which forms the backbone of turbulence theories and models. It stipulates that, while the large scales are nonuniversal because of their dependence on flow geometry and energy injection mechanisms, such dependencies become progressively weaker as energy cascades to smaller scales, ultimately endowing them with some form of universality that depends only on a few parameters of the flow. This, of course, is not guaranteed to be true.

From this perspective, this putative universality requires sufficiently large separation between the scales at which the energy is injected and those at which it is dissipated into molecular motion. This scale separation is determined by the Reynolds number, Re (1); thus, investigating universality requires data at high Re. While such high Reynolds numbers are attainable in some laboratory flows and all geophysical flows, most quantities pertaining to small scales are still very difficult to measure (5). Alternatively, direct numerical simulations (DNS) of the governing equations, where the entire range of scales is resolved on a computational mesh (6), provide information (at least in principle) on every quantity desired. However, DNS is extremely expensive, with recent studies showing that its cost scales even faster than the traditional estimate of Re³ (7–9). Thus, despite the rapid advances in high-performance computing, high-Re DNS, representative of natural and engineering flows, remains unlikely for the foreseeable future.

Motivated by these considerations, we devise here an alternative approach based on machine learning techniques to characterize the small scales of turbulence. In recent years, the use of machine learning, especially deep learning, has ushered in a new paradigm in various scientific fields (10). The field of turbulence is no different, and there has been a flurry of machine learning methods to improve turbulence modeling (11, 12). A vast majority of them utilize the framework of supervised learning (13), where neural networks are trained on input data against labeled output data, although other paradigms have also been used (14, 15). The learning is also often “physics informed,” i.e., neural networks are designed to satisfy some physical constraints, enabling efficient learning including significantly improved accuracy and stability (16). The approach utilized here follows a broadly similar paradigm, but in a framework specifically designed for small scales of turbulence. In particular, we capture the small-scale dynamics of turbulence by training deep neural networks on existing DNS data at low and moderate Re and demonstrate the capability for predicting their dynamics at both seen Re and higher unseen Re, with important consequences for turbulence simulations.

The small scales of turbulence can be conveniently studied via the velocity gradient tensor A = ∇u, where u is the turbulent velocity field. The tensor A encodes various structural and statistical properties of turbulence, which are known to be universal to various degrees. The non-Gaussianity of its fluctuations and the associated extreme events (8, 17–20), the negative skewness of the longitudinal (or diagonal) components associated with the energy cascade (from large to small scales) (21, 22), and the preferential alignment of vorticity with the intermediate strain eigenvector (23, 24) are a few notable examples. Taking the gradient of the incompressible Navier–Stokes equations gives the evolution equation for A:

DA/Dt = −A² − H + ν∇²A,  [1]

where D/Dt is the material (or Lagrangian) derivative, H = ∇∇P is the Hessian tensor of the kinematic pressure P, and ν is the kinematic viscosity. The above equation, studied by a number of authors from different perspectives, dictates that the velocity gradient tensor changes along a fluid element according to the quadratic nonlinearity, pressure effects, and viscous diffusion. Since Tr(A) = 0 by incompressibility, it follows that

∇²P = Tr(H) = −Tr(A²).  [2]

That is, the pressure field is related to A through a Poisson equation, implying that the pressure Hessian is nonlocal, essentially coupling all scales of the flow.

In DNS, Eq. 1 is numerically solved on a large computational mesh by resolving all dynamically relevant scales (6), whereas other simulation paradigms resolve only a range of scales. For instance, in large-eddy simulation (LES), the large scales are resolved on a mesh, and the effects of the small scales are directly modeled. In contrast, our approach here is to directly develop a reduced-order closure model for A. This can be accomplished by modeling the pressure Hessian and viscous Laplacian terms explicitly in terms of A, leading to a fully local description (25), i.e., the dynamics of A can be modeled by an ordinary differential equation (ODE), whereby statistical quantities of interest can be obtained, for example, by running Monte Carlo simulations of the ODE with arbitrary initial conditions. Note that this approach models the Lagrangian dynamics of A, as opposed to Eulerian results obtained on a computational mesh. The main benefit of this approach is that it allows us to directly obtain the statistical properties of A, which otherwise can be obtained only from DNS.
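To make the ODE viewpoint concrete, the following minimal sketch integrates only the closed part of Eq. 1, i.e., the classical restricted-Euler approximation in which the deviatoric pressure Hessian and viscous contributions are dropped; this is an illustration, not the paper's model, and all parameter values are ours:

```python
import numpy as np

def restricted_euler_rhs(A):
    """Closed part of Eq. 1 with deviatoric pressure Hessian and viscous
    term neglected: dA/dt = -A^2 + (1/3) Tr(A^2) I. The isotropic term
    is exactly what keeps Tr(A) = 0 (incompressibility) during evolution."""
    return -A @ A + np.trace(A @ A) / 3.0 * np.eye(3)

def rk4_step(A, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = restricted_euler_rhs(A)
    k2 = restricted_euler_rhs(A + 0.5 * dt * k1)
    k3 = restricted_euler_rhs(A + 0.5 * dt * k2)
    k4 = restricted_euler_rhs(A + dt * k3)
    return A + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A -= np.trace(A) / 3.0 * np.eye(3)   # enforce Tr(A) = 0 initially
A /= np.linalg.norm(A)               # unit amplitude; RE blows up in finite time
for _ in range(100):                 # integrate a short Lagrangian trajectory
    A = rk4_step(A, 1e-3)
```

Restricted Euler is known to develop a finite-time singularity, which is precisely why closures for the neglected terms, like those developed below, are needed; the sketch therefore integrates only a short trajectory.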

Several attempts of this sort—see, for example, refs. 26–32—have been made over the years, including the use of neural networks more recently (33, 34). While these models have enjoyed reasonable success, they still struggle to capture various crucial aspects of velocity gradient dynamics. In particular, they have not been able to capture the Re dependencies of velocity gradient statistics, which is a crucial aspect of small-scale intermittency (2, 3). To rectify such shortcomings is a primary motivation of the current work. In the following section, we first provide the general framework for developing the closure model and describe the deep learning tools required for our purposes.

Modeling Using Tensor Representation Theory

To obtain a functional closure, the pressure Hessian and viscous Laplacian terms in Eq. 1 need to be specified as tensor functions of A. This can be most generally achieved by using tensor representation theory (35, 36), which has often been used in various modeling contexts (32–34, 37, 38). Below, we briefly summarize the basic theory and the changes introduced in this work to better capture the dynamics of A.

Basic Framework.

Tensor representation theory allows us to express any desired (second-order) tensor as a function of A. This is achieved by expressing the desired tensor as a linear combination of tensors in an appropriate tensor basis constructed from A, with coefficients that are functions of the scalar basis of A (35, 36). To obtain the tensor and scalar bases, the first step is to decompose A into its symmetric and skew-symmetric parts, the strain-rate and rotation-rate tensors, respectively:

S = (1/2)(A + Aᵀ),  R = (1/2)(A − Aᵀ).  [3]

T(1) = S,  T(2) = SR − RS,  T(3) = S² − (1/3)Tr(S²)I,
T(4) = R² − (1/3)Tr(R²)I,  T(5) = RS² − S²R,
T(6) = SR² + R²S − (2/3)Tr(SR²)I,  T(7) = RSR² − R²SR,
T(8) = SRS² − S²RS,  T(9) = R²S² + S²R² − (2/3)Tr(S²R²)I,
T(10) = RS²R² − R²S²R.  [4]

B(1) = R,  B(2) = SR + RS,  B(3) = S²R + RS²,
B(4) = R²S − SR²,  B(5) = R²S² − S²R²,  B(6) = SR²S² − S²R²S.  [5]

λ1 = Tr(S²),  λ2 = Tr(R²),  λ3 = Tr(S³),  λ4 = Tr(R²S),  λ5 = Tr(S²R²).  [6]

Using S and R, we can construct general bases for tensors and scalars, given in Eqs. 4–6. The ten T(i) in Eq. 4 form the basis for symmetric tensors and the six B(i) in Eq. 5 for skew-symmetric tensors; the λi in Eq. 6 form the basis of scalar invariants required to determine the necessary coefficients. Since incompressibility gives Tr(S) = 0 (and Tr(R) = 0 trivially), it is easy to show that Tr(T(i)) = 0. While the symmetric tensor basis T(i) is formulated to be trace-free owing to incompressibility, a symmetric tensor basis does not in general have to be trace-free; indeed, such a basis will be somewhat different from T(i) (36). Likewise, there are six scalar invariants for a general tensor basis (36), but incompressibility reduces the number to five, given in Eq. 6 (37).
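As a concrete illustration, the bases of Eqs. 3–6 can be assembled in a few lines of numpy; this is a minimal sketch (the function name is ours), and the stated symmetry, skew-symmetry, and trace-free properties can be verified numerically on a random traceless A:

```python
import numpy as np

def tensor_bases(A):
    """Build the symmetric basis T(i), skew basis B(i), and scalar
    invariants lambda_i of Eqs. 3-6 from a traceless gradient tensor A."""
    S = 0.5 * (A + A.T)           # strain rate, Eq. 3
    R = 0.5 * (A - A.T)           # rotation rate, Eq. 3
    I = np.eye(3)
    tr = np.trace
    S2, R2 = S @ S, R @ R
    T = [                         # ten symmetric, trace-free tensors, Eq. 4
        S,
        S @ R - R @ S,
        S2 - tr(S2) / 3 * I,
        R2 - tr(R2) / 3 * I,
        R @ S2 - S2 @ R,
        S @ R2 + R2 @ S - 2.0 / 3 * tr(S @ R2) * I,
        R @ S @ R2 - R2 @ S @ R,
        S @ R @ S2 - S2 @ R @ S,
        R2 @ S2 + S2 @ R2 - 2.0 / 3 * tr(S2 @ R2) * I,
        R @ S2 @ R2 - R2 @ S2 @ R,
    ]
    B = [                         # six skew-symmetric tensors, Eq. 5
        R,
        S @ R + R @ S,
        S2 @ R + R @ S2,
        R2 @ S - S @ R2,
        R2 @ S2 - S2 @ R2,
        S @ R2 @ S2 - S2 @ R2 @ S,
    ]
    lam = [tr(S2), tr(R2), tr(S @ S2), tr(R2 @ S), tr(S2 @ R2)]   # Eq. 6
    return T, B, lam
```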

Using the above framework, we can functionally model the pressure Hessian and viscous Laplacian tensors. However, note that while the pressure Hessian tensor is symmetric, it has a nonzero trace given by Eq. 2. Thus, we have to model the deviatoric part of the pressure Hessian tensor Hd as

Hd ≡ H − (1/3)Tr(H)I = Σ_{i=1..10} c1(i)(λ1, …, λ5) T(i),  [7]

where the ten coefficients c1(i) have to be determined as functions of the scalar invariants. The term Tr(H) (and hence the isotropic part of H) does not pose a closure problem, since it can be written exactly in terms of A as in Eq. 2. The viscous Laplacian can be decomposed simply into symmetric and skew-symmetric contributions as ν∇²A = ν∇²S + ν∇²R, which can be modeled using the respective tensor bases:

ν∇²A = Σ_{i=1..10} c2(i)(λ1, …, λ5) T(i) + Σ_{i=1..6} c3(i)(λ1, …, λ5) B(i).  [8]

Here, we need to evaluate the 16 coefficients c2(i) and c3(i) as functions of the scalar invariants.

This brief description nominally captures all previous attempts to model the velocity gradient dynamics. For example, the pressure Hessian models developed in refs. 29–32 retain up to second-order terms in Eq. 4, whereas the models for the viscous Laplacian retain (in the same works) just the first-order term. In refs. 33 and 34, the pressure Hessian tensor was modeled in the same fashion as here. (However, no previous model accounted for the Re dependence of velocity gradient statistics, as we shall discuss in the next subsection.) In all these methods, the required scalar coefficients c1(i) to c3(i) were obtained as nonlinear scalar functions of A, satisfying a small set of physical constraints. However, one can more generally utilize the power of deep learning to directly obtain the coefficients c1(i) to c3(i), thus, in principle, learning all the necessary physical constraints from the data itself (33, 34).

Nondimensionalization and Reynolds Number Dependence.

To utilize the above framework in conjunction with neural networks, it is important to nondimensionalize all quantities. There are several reasons. First, it is simply more convenient to work with nondimensional quantities in numerics. Second, it facilitates efficient learning (of network weights and biases) since the tensors in the bases are otherwise of different orders in A. For instance, while the pressure Hessian is second order in A, the symmetric tensor basis spans first to fifth order in A, implying that the coefficients c1(i) vary from order 1 to −3 in A. Appropriate nondimensionalization renders all coefficients to be of the same order, which leads to better and faster learning (13). Finally, nondimensionalization also allows us to appropriately introduce Re as a parameter, whereby the model system can be run at any chosen Re to obtain desired (nondimensional) statistics of velocity gradients.

However, given the multiscale nature of turbulence, the choice of variables for nondimensionalization is not unique. Since velocity gradients characterize small scales, a natural choice is to utilize the Kolmogorov time and length scales given as

τK = (ν/⟨ϵ⟩)^{1/2},  ηK = (ν³/⟨ϵ⟩)^{1/4}.  [9]

Here, ϵ = 2νSijSij is the energy dissipation rate and ⟨ ⋅ ⟩ denotes averaging over space and time. In homogeneous turbulence, we have ⟨SijSij⟩ = ⟨RijRij⟩ = ⟨AijAij⟩/2. Thus, ⟨ϵ⟩ = ν⟨AijAij⟩, giving ⟨AijAij⟩τK² = 1, implying that 1/τK quantifies the rms amplitude of A. This justifies the choice of Kolmogorov variables to nondimensionalize A; in fact, the above relations between the mean quantities and τK allow us to impose convenient constraints while running the model system (as described in Materials and Methods). However, the choice is not obvious if one considers the extreme events and hence higher-order statistics of A, and also of the pressure Hessian and viscous Laplacian terms, since, due to intermittency, they cannot be expected to scale on Kolmogorov variables (7, 8). We will persist with Kolmogorov normalization but introduce a phenomenological procedure to account for intermittency.
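The Kolmogorov normalization of Eq. 9 and the constraint ⟨AijAij⟩τK² = 1 it implies can be sketched as follows; the numerical values of ν and ⟨ϵ⟩ here are illustrative assumptions, not taken from the DNS database:

```python
import numpy as np

# Kolmogorov scales of Eq. 9, for assumed (illustrative) flow parameters
nu = 1e-5        # kinematic viscosity, assumed value
eps_mean = 0.1   # mean energy dissipation rate <eps>, assumed value

tau_K = (nu / eps_mean) ** 0.5        # Kolmogorov time scale
eta_K = (nu**3 / eps_mean) ** 0.25    # Kolmogorov length scale

# In homogeneous turbulence <eps> = nu <A_ij A_ij>, so by construction
# the rms gradient amplitude is 1/tau_K and <A_ij A_ij> tau_K^2 = 1:
A_sq_mean = eps_mean / nu
check = A_sq_mean * tau_K**2
```

This is the normalization under which the model system is run; the same identity is what supplies the convenient constraints mentioned in the text.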

In summary, the following nondimensionalization is used: t* = t/τK, x* = x/ηK (i.e., ∇* = ηK∇), A* = AτK, H* = HτK² (and Hd* = HdτK²). We then obtain the following equation for A*:

DA*/Dt* = −(A*² − (1/3)Tr(A*²)I) − Hd* + ∇*²A*.  [10]

The terms Hd* and ∇*2A* can now be modeled in terms of A* using the previously described framework, i.e., utilizing Eqs. 7 and 8, where the tensor bases are appropriately replaced by their nondimensional counterparts, i.e., T*(i) and B*(i), and the coefficients c1(i), c2(i), c3(i) are dimensionless. It can be seen immediately that the above system does not have any Reynolds number dependence. This is not surprising since we utilized Kolmogorov scales for nondimensionalization, temporarily choosing to ignore intermittency. Thus, the Reynolds number dependence has to be reintroduced by hand (and validated a posteriori). Previous modeling attempts do not recognize this aspect and have consequently not captured the Reynolds number dependence; see, for example, refs. 25, 31, and 34.

It is worth stressing that there is no foolproof way of introducing the Re dependence into the system. For instance, one could use the large scales, say L for length and U for velocity (and L/U for time) for nondimensionalization, but it would be an incorrect choice for many aspects of the gradients. Instead, we devise the following pragmatic way to introduce the Reynolds number dependence and intermittency effects. We maintain the nondimensionalization by Kolmogorov variables and rescale the tensor bases as

Hd* = Σ_{i=1..10} c1(i) Rλ^β1(i) T*(i),  [11]

∇*²A* = Σ_{i=1..10} c2(i) Rλ^β2(i) T*(i) + Σ_{i=1..6} c3(i) Rλ^β3(i) B*(i).  [12]

Here, Rλ is the Reynolds number based on the Taylor length scale [note that Rλ ∼ Re^{1/2} (1)], and the exponents β1(i), β2(i), and β3(i) are additional model parameters which will be determined. This choice is motivated by two main reasons. First, the well-known multifractal description of turbulence suggests that velocity gradient statistics scale as power laws (or combinations of power laws) in Rλ (1–3). Second, the tensors in the bases span various orders of A, all of which feel the intermittency effects differently. Thus, the Reynolds number factors can rescale them to the same order, allowing for more efficient learning, essentially acting as additional physics-informed constraints to accommodate intermittency. If our physical understanding improves, it may well be possible to improve upon our present formulation.

Reynolds-Number-Scaled Tensor-Based Neural Network (ReS-TBNN).

We now consider the neural network architecture utilized to model the unclosed terms. The tensor-based neural network (TBNN), utilizing only the symmetric basis from Eq. 4, was first proposed by Ling et al. (38) for turbulence modeling of the Reynolds stress tensor. More recently, it was extended to modeling the pressure Hessian in refs. 33 and 34, while continuing to ignore Reynolds number effects. Unlike traditional neural networks, TBNN utilizes two input layers. The architecture to model the pressure Hessian is shown in Fig. 1. The first input layer takes the scalar basis λi, which is fed forward through multiple hidden layers to obtain the scalar coefficients c1(i) in the first output layer. The essence of this step is to model the scalar coefficients as strongly nonlinear functions of the scalar basis, an exercise traditionally performed by humans (29–31). This is precisely the step where deep neural networks are advantageous. The second input layer uses the rescaled tensor basis, for instance, Rλ^β1(i) T*(i) for the pressure Hessian. This second input layer is contracted with the first output layer to obtain the predicted pressure Hessian tensor in the final output layer, in accordance with Eq. 11. We reiterate that only the deviatoric part of the pressure Hessian needs to be modeled. The architecture for the viscous Laplacian is essentially identical to that shown in Fig. 1, with the difference that the first output layer and the second input layer both have 16 nodes, corresponding to the coefficients c2(i) and c3(i) and the tensors T(i) and B(i), with appropriate prefactors corresponding to the Reynolds number scaling.
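A stripped-down numpy sketch of this forward pass (Eq. 11) may help fix ideas; the layer sizes, initializations, and stand-in basis tensors below are all illustrative choices of ours, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, weights, biases):
    """Plain fully connected network: tanh hidden layers, linear output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)
    return weights[-1] @ x + biases[-1]

def pressure_hessian(T, lam, Re_lam, weights, biases, beta):
    """Forward pass of Eq. 11: MLP coefficients c1(lambda) contracted
    with the Reynolds-rescaled tensor basis. Shapes are illustrative."""
    c1 = mlp(np.asarray(lam), weights, biases)          # 10 coefficients
    return sum(c * Re_lam**b * t for c, b, t in zip(c1, beta, T))

# toy network: 5 invariants -> two hidden layers of 50 -> 10 coefficients
sizes = [5, 50, 50, 10]
weights = [0.1 * rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
beta = np.zeros(10)   # learned exponents beta1(i); zero = no Re dependence

# stand-ins for the normalized basis T*(i): symmetric, trace-free tensors
def sym_traceless(M):
    Ms = 0.5 * (M + M.T)
    return Ms - np.trace(Ms) / 3 * np.eye(3)

T = [sym_traceless(rng.standard_normal((3, 3))) for _ in range(10)]
lam = rng.standard_normal(5)
Hd = pressure_hessian(T, lam, 650.0, weights, biases, beta)
```

Because each basis tensor is symmetric and trace-free, the predicted Hd inherits those properties by construction, which is the physics-informed aspect of the architecture.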

Fig. 1.


Reynolds-number-scaled tensor-based neural network (ReS-TBNN) architecture utilized for modeling the deviatoric pressure Hessian, based on Eq. 11. The first output layer and the second input layer both have 10 nodes. Note that the exponents β(i) are not fixed inputs but are obtained from network training using standard backpropagation. A similar network is utilized for the viscous Laplacian, utilizing both the symmetric and skew-symmetric tensor bases, as in Eq. 12. The first output layer and the second input layer both have 16 nodes in that case.

Training and Validation of the ReS-TBNN Model

DNS Data.

To train the ReS-TBNN model, the “ground truth” data are obtained from a massive DNS database corresponding to forced stationary isotropic turbulence in a periodic domain (39). The simulations were performed using Fourier pseudospectral methods (40), allowing us to obtain the data with the highest accuracy practicable. A key aspect of our data is that we have simultaneously achieved a wide range of Reynolds numbers and the small-scale resolution necessary to accurately resolve extreme events (41, 42). Both of these conditions are indispensable for successful model development. The Taylor-scale-based Reynolds number Rλ of our database ranges from 140 to 1,300. The data have since been utilized and validated in several recent studies (43–47). A brief account of the DNS and the database is provided in Materials and Methods, and more details can be found in the references just mentioned. In order to train our network, only the data for Rλ = 140 to 650 are utilized; subsequently, we will demonstrate that the trained network can predict with reasonable success the statistics at the higher (unseen) Rλ = 1,300. Though this Rλ is only twice as large as the largest one used in training, its usefulness should be assessed in the context of the computational expense of DNS, which would easily be 100 times larger. This is because the cost of DNS increases at least as strongly as Rλ⁶, going up to Rλ⁸ in the limit of large Rλ, to accurately resolve the smallest scales (7, 9).

We point out that the model can also be trained over the range Rλ = 140 to 390 and used to predict results at Rλ = 650 and 1,300. However, learning from this smaller range of Rλ is suboptimal for capturing trends that can be extrapolated to significantly higher Rλ. Additionally, learning from low Rλ alone is not helpful because many features of turbulence are not fully developed under those conditions. For the present, learning from the range Rλ = 140 to 650 and assessing the predictions at Rλ = 1,300 offers an optimal compromise. As remarked in the previous paragraph, the corresponding DNS effort would be much more expensive. Clearly, prediction over a wider range would be desirable. One can also use the full available range of Rλ = 140 to 1,300 for learning and predict at a still higher, unseen Rλ. Since DNS data at higher Rλ are not yet available, such predictions would be unverifiable, so we leave this task for the future.

Training of the ReS-TBNN Model.

The training of the ReS-TBNN model is implemented in FORTRAN using a massively parallel in-house deep-learning library. We utilize the distributed training paradigm with data parallelism, i.e., the training data are split across many processors, with each processor having access to the same model. The model parameters are synchronized via interprocessor communication after each training epoch (executed using MPI collective communication calls). To update the parameters of the neural network, the quadratic loss function is minimized using the standard backpropagation algorithm (13). For example, for the pressure Hessian tensor, the loss function is given by

ℒ = (1/2Ndata) Σ_{m=1..Ndata} ‖Ĥd(m) − Hd(m)‖F²,  [13]

where Ĥd is the model output and ‖ ⋅ ‖F denotes the Frobenius norm. A similar loss function can also be written for the viscous Laplacian term. The network weights and biases, as well as the exponents β(i) in Eqs. 11 and 12, are updated using gradient descent: x ← x − α(∂ℒ/∂x), where x is the variable being updated and α is the learning rate.
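The update rule can be illustrated with a deliberately simplified version of this optimization, in which the network is replaced by bare trainable coefficients contracted with fixed basis tensors, and the data are synthetic; the analytic gradient of the Frobenius loss then drives plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: per-sample tensor bases and realizable targets
N, nb = 64, 10
T = rng.standard_normal((N, nb, 3, 3))          # basis tensors (stand-ins)
c_true = rng.standard_normal(nb)                # "ground truth" coefficients
H_target = (np.einsum('i,nijk->njk', c_true, T)
            + 0.1 * rng.standard_normal((N, 3, 3)))   # targets plus noise

def loss(c):
    """Frobenius loss of Eq. 13 for a linear-in-coefficients model."""
    H_hat = np.einsum('i,nijk->njk', c, T)
    return 0.5 / N * np.sum((H_hat - H_target) ** 2)

# gradient descent: c <- c - alpha * dL/dc, with the gradient in closed form
c = np.zeros(nb)
alpha = 1e-2
for _ in range(200):
    H_hat = np.einsum('i,nijk->njk', c, T)
    grad = np.einsum('njk,nijk->i', H_hat - H_target, T) / N
    c -= alpha * grad
```

In the actual model the coefficients are themselves outputs of a deep network, so the same chain rule is propagated through the hidden layers by backpropagation; the quadratic toy problem above merely shows the loss and the update step.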

The training data are compiled from DNS runs corresponding to Rλ = 140 to 650. About one billion data points are utilized for training, split evenly across all Rλ in the range. The training is performed on about a thousand processors (with each processor handling one million data points). Note that each data point corresponds to a combination of the tensors A, Hd, and ∇²A and the particular value of Rλ. As discussed earlier, all variables are nondimensionalized by the Kolmogorov scales. The training is performed for many thousands of epochs, until the loss function reaches a plateau (Materials and Methods). The learning rate is always kept low, at 10⁻⁶. As one might expect, the choice of hyperparameters, such as the number of hidden layers and the number of nodes per layer, plays a crucial role in obtaining the best model. To select the optimal network configuration, we do not utilize a validation dataset but directly compare the velocity gradient statistics with DNS results. We found that a network with about 25 layers, each with about 50 nodes, provides optimal results; increasing the number of layers and the number of nodes per layer results in only marginal improvement [and can also lead to overfitting (13)].

Comparison of the ReS-TBNN Model with DNS

The effectiveness of the trained ReS-TBNN model will now be evaluated by comparing its outcome with DNS results. We first focus on the Reynolds number trend of velocity gradient statistics (this being a key contribution of the model). We particularly consider the PDFs that display increasingly non-Gaussian tails with an increasing Reynolds number because of intermittency. All components of A exhibit intermittency, but it is convenient to consider scalar quantities of direct physical significance, such as the energy dissipation rate, whose mean value is the net energy flux from large to small scales. As is well known, the instantaneous energy transfers are highly intermittent, leading to extreme dissipation events (48).

Fig. 2 shows comparisons of the PDFs of the energy dissipation rate, normalized by its mean value, from DNS and the model. Panels A–C illustrate the comparison on log–log scales at Rλ = 140, 650, and 1,300, respectively, showing excellent agreement between the two results. The ReS-TBNN model has been trained only up to Rλ = 650 and has not seen any data for Rλ = 1,300. For a closer inspection, Panels D and E show the same comparisons on linear-log scales for all Rλ available. The model captures the intermittent tails qualitatively well, though the extreme events are overpredicted (see below). We believe that this overprediction occurs because of the Reynolds number scaling of the tensor bases; essentially, the rescaling serves to normalize the tensors, and the extreme events have a slightly stronger influence on the weights and biases. Note that, similar to the dissipation rate, one can also consider other scalar measures derived from A, such as the enstrophy Ω = ωiωi, where ωi = ϵijkAjk is the vorticity vector (with ϵijk being the Levi-Civita symbol). Although not shown here, the agreement observed for enstrophy is similar.
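The vorticity construction just mentioned, together with the pointwise identity ωiωi = 2RijRij that connects enstrophy to the rotation-rate tensor, is easy to verify numerically; the gradient tensor below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))            # synthetic gradient tensor

# Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

omega = np.einsum('ijk,jk->i', eps, A)     # vorticity: omega_i = eps_ijk A_jk
R = 0.5 * (A - A.T)                        # rotation-rate tensor

enstrophy = omega @ omega                  # Omega = omega_i omega_i
```

Only the skew-symmetric part of A contributes to ω, which is why the identity holds regardless of the strain content of A.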

Fig. 2.


Comparisons of probability density functions (PDFs) of the energy dissipation rate, nondimensionalized by the mean value, as obtained from the network model and DNS. Panels A–C show the comparison at Rλ = 140, 650, and 1,300, respectively, on log–log scales. Panel D shows the PDFs from DNS for various Rλ on lin–log scales, highlighting the intermittency of the PDF tails. Panel E shows the PDFs obtained from the network model for the same set of Rλ as Panel D, showing the effectiveness of the model in predicting the PDF tails. We reiterate that Rλ = 1,300 is never seen by the model.

The overprediction by the model occurs principally for events with a probability less than about 10⁻⁹. Such events are obviously very important for high-order moments, but the reliability of such very high moments is not quite assured for the DNS data itself. For example, suppose we compute the sixth moment of the energy dissipation rate. This is equivalent to obtaining the 12th-order moment of velocity gradients, which would stretch one’s credulity even for the large size of the present database. To better understand how well the model and the DNS agree, we directly compare some moments from the PDFs. In Table 1, we list the second- and fourth-order moments of dissipation and enstrophy from both the model and the DNS. We also compare the third- and fourth-order moments of individual components of A. The results for the skewness of A12 are not shown, since it is zero (within statistical error) for both the DNS and the model. Clearly, the results from the model are very satisfactory for the second-order moments of dissipation and enstrophy but less so for the fourth.
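Moments such as those compared here are obtained by integrating the tabulated PDFs. A short sketch of that quadrature, with a lognormal standing in for the dissipation PDF (the distribution, grid, and parameters are illustrative choices, not the DNS data):

```python
import numpy as np

# Normalized moments <eps^n>/<eps>^n from a tabulated PDF by quadrature;
# a lognormal is used as a stand-in for the DNS dissipation PDF.
mu, sigma = 0.0, 0.5
x = np.linspace(1e-6, 50.0, 200_000)       # uniform grid over the support
dx = x[1] - x[0]
pdf = (np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma**2))
       / (x * sigma * np.sqrt(2 * np.pi)))

def moment(n):
    """n-th raw moment of the tabulated PDF (simple Riemann sum)."""
    return np.sum(x**n * pdf) * dx

ratio = moment(2) / moment(1) ** 2         # <eps^2>/<eps>^2
```

For the lognormal stand-in this ratio has the analytic value exp(sigma²), which provides a check on the quadrature; with real data, the far tail of the PDF must be resolved well enough for the highest moment of interest, which is precisely the concern raised above about 12th-order gradient moments.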

Table 1.

Comparison of various statistics from DNS and the model

Rλ                       140      240      390      650      1,300
⟨ϵ²⟩/⟨ϵ⟩²      DNS      2.75     3.28     3.86     4.67     6.27
               model    2.49     2.78     3.74     5.16     7.72
⟨Ω²⟩/⟨Ω⟩²      DNS      4.89     6.22     7.49     9.26     12.6
               model    4.48     5.50     7.25     9.80     14.4
⟨ϵ⁴⟩/⟨ϵ²⟩²     DNS      37.8     83.8     171      387      1,298
               model    101      260      434      628      2,355
⟨Ω⁴⟩/⟨Ω²⟩²     DNS      150      345      741      1,636    6,917
               model    316      812      1,666    3,101    13,210
skewness A11   DNS      -0.52    -0.55    -0.59    -0.63    -0.70
               model    -0.49    -0.54    -0.60    -0.66    -0.75
flatness A11   DNS      5.73     6.82     8.02     9.87     13.1
               model    5.20     5.80     7.80     10.9     15.2
flatness A12   DNS      8.71     10.9     13.2     16.5     22.2
               model    8.07     9.90     13.0     17.8     25.8

Shown are the second- and fourth-order moments of dissipation (ϵ) and enstrophy (Ω), and the skewness and flatness factors of longitudinal (A11) and transverse (A12) velocity gradient components. Note that the skewness of A12 is not shown, since it is zero (within statistical uncertainty), from both DNS and the model.

It is worth noting that the second-order moments of dissipation and enstrophy can be directly related to fourth-order moments of velocity gradient components in the following manner (49):

⟨ϵ²⟩/⟨ϵ⟩² = (7/15) F(A11),  ⟨Ω²⟩/⟨Ω⟩² = (5/9) F(A12).  [14]

Here, F(A11) and F(A12) are the flatness factors of the components A11 and A12, respectively. These results are exact for isotropic turbulence at any Reynolds number and are nominally satisfied far from solid boundaries in other turbulent flows at high Reynolds numbers, i.e., when local isotropy holds (2, 4). It can be seen from Table 1 that both these relations are well satisfied in the model and DNS results. In fact, many such isotropic relations exist for various moment orders of velocity gradients. For instance, for the second-order moments, they are

⟨Aαα²⟩ = ⟨Aββ²⟩, for α ≠ β;  ⟨Aαβ²⟩ = 2⟨Aαα²⟩, for α ≠ β;
⟨AααAββ⟩ = ⟨AαβAβα⟩ = −⟨Aαα²⟩/2, for α ≠ β,  [15]

where repeated indices in α and β do not imply summation. Essentially, all second-order moments can be described by the single moment ⟨A11²⟩. From the above, it also follows that

⟨ϵ⟩/ν = 2⟨SijSij⟩ = 15⟨A11²⟩,  ⟨Ω⟩ = 2⟨RijRij⟩ = 15⟨A11²⟩.  [16]

Although not shown explicitly, we note that the above relations are all satisfied in our model results (and, of course, DNS) at all Reynolds numbers.
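As a quick numerical illustration, the DNS entries of Table 1 can be checked against the isotropic relations connecting the second-order dissipation and enstrophy moments to the gradient flatness factors; the prefactors 7/15 and 5/9 used below are those implied by the tabulated values:

```python
import numpy as np

# DNS values from Table 1, for Rlambda = 140, 240, 390, 650, 1300
eps2 = np.array([2.75, 3.28, 3.86, 4.67, 6.27])     # <eps^2>/<eps>^2
omega2 = np.array([4.89, 6.22, 7.49, 9.26, 12.6])   # <Omega^2>/<Omega>^2
F_A11 = np.array([5.73, 6.82, 8.02, 9.87, 13.1])    # flatness of A11
F_A12 = np.array([8.71, 10.9, 13.2, 16.5, 22.2])    # flatness of A12

# relative departures from the isotropic relations
err_eps = np.abs(eps2 - 7 / 15 * F_A11) / eps2
err_omega = np.abs(omega2 - 5 / 9 * F_A12) / omega2
```

Across the full Rλ range the departures stay within a few percent, consistent with the statement that the relations are well satisfied up to statistical sampling error.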

The comparisons in Fig. 2 and Table 1 predominantly focused on the Reynolds number scaling of various individual statistics. However, it is equally important to capture the structure of the velocity gradient tensor. We examine two well-known universal results to this end, the first being the alignment of the vorticity vector with the eigenvectors of the strain tensor, shown in Fig. 3. Panel A shows the PDFs of the cosines of the alignment angles from DNS. Consistent with the well-known result from the literature (23), vorticity preferentially aligns with the second eigenvector of strain and is weakly orthogonal to the third eigenvector, whereas there is no preferential alignment with the first eigenvector. There is virtually no Rλ dependence of these PDFs (as noted in ref. 24). In Fig. 3B, the corresponding result is shown from the ReS-TBNN model. The model captures the trends very well, with a slight enhancement of the respective alignments. This trend is consistent with the result in Fig. 2, where the model slightly overpredicts extreme events [note that the alignments are enhanced when considering extreme events (24)]. We also note that the model shows only a very weak Reynolds number dependence of the alignment properties, which is inconsequential for all practical purposes.

Fig. 3.


Comparison of the PDFs of the cosine of the angles between the vorticity unit vector ω^ and the eigenvectors of the strain tensor ei, corresponding to eigenvalues λi, where λ1 ≥ λ2 ≥ λ3. Panel A shows the result from DNS corresponding to Rλ = 1,300 in solid lines and Rλ = 140 in dashed lines. Panel B shows the result from the network model corresponding to the same Rλ values in solid and dashed lines. The alignment PDFs have almost no Rλ dependence in DNS; that shown by the model is also negligible.

The second structural aspect concerns the local critical point analysis of A, which identifies the flow topology using the second and third invariants of A (50): Q = −Tr(A²)/2 and R = −Tr(A³)/3. (The first invariant of A, i.e., its trace, is zero from incompressibility.) The joint PDF of these two invariants is known to exhibit a universal tear-drop shape (25, 51). For a final comparison between DNS and the model, we compare the joint PDFs in Fig. 4 A and B obtained from DNS and the model, respectively, at Rλ = 140; the corresponding results for Rλ = 1,300 are shown in Fig. 4 C and D. In both cases, the model predicts the joint PDF quite well in all four quadrants. Notably, the joint PDFs also exhibit stronger intermittency with increasing Reynolds number; this aspect is again well captured by the model, similar to the result in Fig. 2.

Fig. 4.

Comparison of joint PDFs of the invariants of the velocity gradient tensor, defined as Q = −(1/2)Tr(A²)τK² and R = −(1/3)Tr(A³)τK³. Panel A shows the result from DNS at Rλ = 140, and Panel B shows the corresponding result from the network model. Panel C shows the DNS result at Rλ = 1,300, and Panel D shows the corresponding result from the model.
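A short sketch of how the invariants underlying Fig. 4 are obtained, following the definitions Q = −Tr(A²)/2 and R = −Tr(A³)/3 given above (the τK normalization is passed in as a parameter):

```python
import numpy as np

def qr_invariants(A, tau_K=1.0):
    """Normalized invariants Q = -Tr(A^2) tau_K^2 / 2 and
    R = -Tr(A^3) tau_K^3 / 3 for a batch of traceless velocity
    gradients A of shape (n, 3, 3)."""
    A2 = np.einsum('nik,nkj->nij', A, A)
    A3 = np.einsum('nik,nkj->nij', A2, A)
    Q = -0.5 * np.trace(A2, axis1=1, axis2=2) * tau_K**2
    R = -(1.0 / 3.0) * np.trace(A3, axis1=1, axis2=2) * tau_K**3
    return Q, R

# The tear-drop plot of Fig. 4 is then a joint histogram over (R, Q),
# e.g. via np.histogram2d(R, Q, bins=200, density=True).
```

A useful consistency check: for a traceless 3 × 3 tensor, Tr(A³)/3 = det(A), so R = −det(A).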

As a final remark, we note that one can compare many other quantities to evaluate the performance of the model (with respect to DNS results). For instance, one can consider the temporal dynamics of velocity gradients projected onto the Q–R plane (29, 31, 34) or how they influence the trajectories of particles in turbulence (32). Such comprehensive studies, including detailed comparisons of our model with previous ones, will be presented in a subsequent paper. In the current study, we have highlighted the most important contribution of our model, which is to capture the intermittency and Reynolds number trends of velocity gradient statistics.

Discussion

In studying the dynamics of turbulence, DNS of the Navier–Stokes equations on massive supercomputers is now an established avenue for gaining a fuller understanding of flow physics, leading to more reliable predictions. However, both theoretical and practical needs demand computations of ever-increasing size, so fluid turbulence will remain, for the foreseeable future, one of the frontier computational problems, no matter how large supercomputers become.

In this regard, the major bottleneck is the need to simulate small scales of turbulence with high fidelity (adequate resolution in space and time, convergence, etc.). To make progress on real problems, one needs to model small scales well, for instance, in LES where large scales are resolved, but small scales are modeled assuming a degree of universality. This modeling approach has been largely guided by “human learning,” often resulting in ad hoc considerations depending on the flow. As modern methods of deep machine learning have expanded, it appears possible for them to aid in modeling by directly learning from a vast amount of high-fidelity data that are already available over some range of Reynolds numbers. In this scenario, deep neural networks are allowed to do the fitting at a deeper level of instantaneous data, in the process satisfying a substantially larger set of constraints than possible by “human learning.” If this attempt succeeds, we will have a powerful tool in assimilating the lower Reynolds number data for predicting flow properties at higher unseen Reynolds numbers. This is a difficult problem given the nature of turbulence.

In this paper, we have made a ground-level attempt toward our stated goal. We have demonstrated that the small-scale dynamics of turbulence, as captured by velocity gradients, can be modeled reasonably well using deep neural networks. The deep neural networks are set up to functionally model the nonlocal pressure and viscous contributions to velocity gradient dynamics. The networks are then trained on a range of Reynolds numbers available from DNS, and the training is leveraged to predict results at higher Reynolds numbers, whose properties the network does not know in advance. The effort is very encouraging, not only in predicting the intermittency of velocity gradients with increasing Reynolds number but also in reproducing various signature topological properties of the velocity gradient tensor, such as the alignment of vorticity with the strain-rate eigenvectors and the tear-drop shape of the joint PDFs of the invariants. Overall, the modeling effort developed here provides a substantial improvement upon prior work, especially with respect to the robustness of the local functional modeling of the pressure and viscous terms across a range of Reynolds numbers.

There are certain shortcomings of the trained model when considering truly extreme events; fortunately, such events contribute significantly only to higher-order moments. It should be possible to improve this aspect further by embedding the current deep learning approach in alternative frameworks for velocity gradient dynamics that account for Reynolds number dependencies more naturally; see, for example, refs. 52 and 53. Likewise, it would be worth utilizing recurrent neural networks (13) to enable direct learning of both spatial and temporal correlations in the data. While the current model already captures many temporal correlations well (although they are not reported here), the training is performed purely on spatial data. Note that in the stationary homogeneous turbulence explored here, one-point and one-time statistics are identical. Nevertheless, incorporating both spatial and temporal learning could greatly improve the effectiveness of the current framework.

Finally, it would also be worth expanding the current effort in a more concerted way to other modeling paradigms such as LES (54)—allowing one to tackle more complex turbulent flows at Reynolds numbers of practical interest in nature and engineering. Such an extension can be accomplished, for instance, by considering filtered velocity gradient tensor, which would be amenable to the same tensor framework as utilized here (34, 38). Likewise, the framework developed here can also be extended to study the dynamics of scalar gradients in turbulent mixing problems (55), especially in the high Schmidt number regime; though these conditions are even more challenging for DNS (56), recent efforts have led to generation of high-fidelity data at reasonably high Reynolds numbers (57, 58). Efforts in these directions are under way and will be reported as future work.

Materials and Methods

Direct Numerical Simulations.

The data utilized here are obtained by the DNS of incompressible Navier–Stokes equations

∂u/∂t + u·∇u = −∇P + ν∇²u + f, [17]

where u is the velocity, satisfying ∇ ⋅ u = 0, P is the kinematic pressure, and ν is the kinematic viscosity. The term f corresponds to large-scale forcing required to maintain a statistically stationary state. The simulations correspond to the canonical setup of isotropic turbulence with periodic boundary conditions in a cubic domain of side length L0 = 2π. It is well known that such a setup allows one to reach the highest Reynolds numbers in DNS and is ideal for studying small scales (39). Taking the gradient of Eq. 17 leads to Eq. 1, without the forcing term. When performing the Monte-Carlo simulations of the ReS-TBNN model, a forcing term is reintroduced to mimic this effect and achieve stationary statistics (as described in the subsection after the next).

The DNS domain consists of N³ grid points with uniform grid spacing Δx = L0/N in each direction. The equations are solved using a massively parallelized version of the well-known Fourier pseudospectral algorithm of Rogallo (40); the resulting aliasing errors are controlled by a combination of grid shifting and spherical truncation (59). For time integration, the second-order Runge–Kutta method is used, with the time step Δt subject to the Courant number C constraint for numerical stability: Δt = CΔx/‖u‖∞ (where ‖⋅‖∞ is the L∞ norm). An important consideration in studying velocity gradients and associated extreme events is that of spatial resolution, captured by the ratio Δx/ηK (ηK being the Kolmogorov length scale, defined earlier in Eq. 9). For pseudospectral DNS, spatial resolution is also prescribed by the parameter kmaxηK, where kmax = √2N/3 is the maximum resolved wavenumber. It can be easily shown that Δx/ηK ≈ 3/(kmaxηK). All our runs correspond to high spatial resolution, going up to kmaxηK ≈ 6, to accurately resolve extreme events. The DNS database, along with various simulation parameters, is summarized in Table 2.
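The resolution bookkeeping above can be sketched in a few lines; the Courant number and velocity bound below are hypothetical placeholder values, not the values used in the actual runs:

```python
import numpy as np

def grid_resolution(N, kmax_eta, courant=0.6, u_max=1.0):
    """Resolution parameters for a pseudospectral DNS on an N^3 box of side 2*pi.

    kmax_eta is the prescribed k_max * eta_K; `courant` and `u_max` are
    hypothetical stand-ins for the Courant number C and ||u||_inf.
    """
    dx = 2 * np.pi / N
    kmax = np.sqrt(2.0) * N / 3.0   # grid shifting + spherical truncation
    eta = kmax_eta / kmax           # Kolmogorov scale implied by kmax_eta
    dt = courant * dx / u_max       # CFL-limited time step
    return dx / eta, dt

# e.g. the Rlambda = 1,300 run of Table 2: N = 12288, kmax*eta_K ~ 2.95.
# Note dx/eta = (2*pi/N) * kmax / kmax_eta = 2*sqrt(2)*pi/3 / kmax_eta,
# i.e. approximately 3 / kmax_eta.
dx_over_eta, dt = grid_resolution(12288, 2.95)
```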

Table 2.

Various simulation parameters for the DNS runs utilized here: the Taylor-scale Reynolds number (Rλ), the number of grid points (N³), spatial resolution (kmaxηK), ratio of the large-eddy turnover time (TE) to the Kolmogorov time scale (τK), and length of simulation (Tsim) in the statistically stationary state

Rλ      N³        kmaxηK   TE/τK   Tsim
140     1,024³    5.82     16.0    6.5 TE
240     2,048³    5.70     30.3    6.0 TE
390     4,096³    5.81     48.4    4.0 TE
650     8,192³    5.65     74.4    2.0 TE
1,300   12,288³   2.95     147.4   20 τK

We note that one can also utilize Lagrangian data, obtained by following fluid particle trajectories alongside the Eulerian DNS (60), to train and validate the deep learning network (34). However, since the network relies on obtaining a local functional closure, it makes no difference whether Eulerian or Lagrangian data are utilized, provided both are statistically stationary. Note that Lagrangian data are obtained from Eulerian data using spline interpolation (61, 62) and thus are less accurate, especially for higher-order moments (46, 60). Even with this caveat, it would be desirable to construct recurrent neural networks (13) to enable direct learning of both spatial and temporal dependencies in the data, possibly leading to improved predictive capabilities.

ReS-TBNN Loss Function.

Fig. 5 shows the behavior of the loss functions, for the training of both the pressure Hessian and viscous Laplacian terms, versus the number of epochs elapsed during training. Evidently, they become flat beyond a certain point. An early stopping criterion, as marked in Fig. 5, is utilized when training the networks to avoid overfitting (63).

Fig. 5.

Decay of the loss function during training of the ReS-TBNN for pressure Hessian and viscous Laplacian terms. One epoch corresponds to the entire set of available training data spanning Rλ= 140 to 650.
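The early stopping criterion can be sketched as a generic loop; `run_epoch` and `val_loss` below are hypothetical callables standing in for one training epoch and the validation loss, not the actual ReS-TBNN training code:

```python
import numpy as np

def train_with_early_stopping(run_epoch, val_loss, max_epochs=500,
                              patience=20, min_delta=1e-6):
    """Stop training once the validation loss has plateaued.

    Schematic sketch: `run_epoch` runs one training epoch and `val_loss`
    evaluates the validation loss (both hypothetical callables)."""
    best, best_epoch, history = np.inf, 0, []
    for epoch in range(max_epochs):
        run_epoch()
        loss = val_loss()
        history.append(loss)
        if loss < best - min_delta:        # meaningful improvement: keep going
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break                          # loss flat: stop to avoid overfitting
    return best, history
```

Training halts roughly `patience` epochs past the point where the loss curve flattens, mirroring the criterion marked in Fig. 5.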

Monte-Carlo Simulations of the ReS-TBNN Model.

Once the pressure Hessian and viscous Laplacian terms are modeled as functions of A, we obtain a closed system in A, which can be solved for arbitrary initial conditions. However, a forcing term must also be added to the model (26), mimicking the effects of large-scale forcing, to achieve stationary statistics. The forcing term additionally reproduces some effects of nonlocality (of the pressure Hessian) that are lost in a local functional closure (28, 29, 31). Thus, the closed system is posed as a stochastic differential equation, given as

dA* = ℱ(A*) dt* + dF*. [18]

Here, ℱ(A*) is the deterministic tensor function, obtained from Eqs. 10–12 and given as

ℱ(A*) = −(A*² − (1/3)Tr(A*²)I) − Σ_{i=1}^{10} c1(i) Rλ^{β1(i)} T(i) + Σ_{i=1}^{10} c2(i) Rλ^{β2(i)} T(i) + Σ_{i=1}^{6} c3(i) Rλ^{β3(i)} B(i), [19]

and dF* is the stochastic forcing term

dF*ij = bijkl dWkl, [20]

built on the tensorial Wiener process, i.e., ⟨dWij⟩ = 0 and ⟨dWij dWkl⟩ = δik δjl dt*, with the diffusion tensor Dijkl = bijmn bklmn.

For the diffusion tensor bijkl, we utilize the result of ref. 31

bijkl = −(1/3)DS δij δkl + (1/2)(DS + DR) δik δjl + (1/2)(DS − DR) δil δjk, [21]

where DS and DR are free parameters that can be tuned to appropriately force the symmetric and skew-symmetric parts of A*, i.e., S* and R*, respectively, allowing us to impose consistency conditions for stationarity: ⟨Sij*Sij*⟩=1/2 and ⟨Rij*Rij*⟩=1/2. Note that the former follows directly from the definition of Kolmogorov time scale, whereas the latter follows from statistical homogeneity. It also readily follows that ⟨Aij*Aij*⟩=1. The Monte-Carlo simulations are performed starting from random Gaussian initial conditions of A*, until a stationary state as prescribed by the above conditions is reached. Thereafter, the simulations are extended for a desired duration to obtain converged statistics.

We note that given the complex functional form of the deep learning closure, we encounter some rogue trajectories (since the deep learning–based closure does not guarantee stability). The encounter rate is only about one in a million. Such trajectories are simply discarded from the ensemble but, if desired, can be regularized as described in ref. 32.
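The Monte-Carlo integration of Eqs. 18–21 can be sketched with an Euler–Maruyama step. This is a minimal illustration, not the production solver: the drift below is a placeholder linear damping standing in for the trained ReS-TBNN closure ℱ(A*), and the diffusion-tensor signs follow our reading of Eq. 21:

```python
import numpy as np

rng = np.random.default_rng(2)
I3 = np.eye(3)

def diffusion_tensor(DS, DR):
    """b_ijkl of Eq. 21 (signs as reconstructed here); DS and DR are the
    free parameters forcing the symmetric and antisymmetric parts of A*."""
    return (-(DS / 3.0) * np.einsum('ij,kl->ijkl', I3, I3)
            + 0.5 * (DS + DR) * np.einsum('ik,jl->ijkl', I3, I3)
            + 0.5 * (DS - DR) * np.einsum('il,jk->ijkl', I3, I3))

def euler_maruyama_step(A, drift, b, dt):
    """One step of dA* = F(A*) dt* + b : dW, with dW_ij ~ N(0, dt)."""
    dW = rng.standard_normal((3, 3)) * np.sqrt(dt)
    return A + drift(A) * dt + np.einsum('ijkl,kl->ij', b, dW)

# Placeholder linear-damping drift standing in for the learned closure.
damping = lambda A: -A
b = diffusion_tensor(1.0, 0.5)
A = np.zeros((3, 3))
for _ in range(1000):
    A = euler_maruyama_step(A, damping, b, 1e-3)
```

A check on the construction: contracting b over its first two indices gives zero, so the stochastic forcing (and hence A*, given a traceless initial condition and trace-preserving drift) remains traceless, consistent with incompressibility.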

Acknowledgments

We gratefully acknowledge the Gauss Centre for Supercomputing e.V. (http://www.gauss-centre.eu) for providing computing time on the supercomputers JUQUEEN and JUWELS at Jülich Supercomputing Centre (JSC), where the simulations and analyses reported in this paper were primarily performed.

Author contributions

D.B. and K.R.S. designed research; D.B. and K.R.S. performed research; D.B. analyzed data; and D.B. and K.R.S. wrote the paper.

Competing interests

The authors declare no competing interest.

Footnotes

Reviewers: R.B., Universita degli Studi di Roma Tor Vergata; M.C., The University of Arizona; and P.H., University of Colorado, Boulder.

Contributor Information

Dhawal Buaria, Email: dhawal.buaria@nyu.edu.

Katepalli R. Sreenivasan, Email: katepalli.sreenivasan@nyu.edu.

Data, Materials, and Software Availability

All study data are included in the article.

References

1. Monin A. S., Yaglom A. M., Statistical Fluid Mechanics, Vol. II (MIT Press, 1975).
2. Frisch U., Turbulence: The Legacy of Kolmogorov (Cambridge University Press, Cambridge, 1995).
3. Sreenivasan K. R., Antonia R. A., The phenomenology of small-scale turbulence. Annu. Rev. Fluid Mech. 29, 435–77 (1997).
4. Kolmogorov A. N., The local structure of turbulence in an incompressible fluid for very large Reynolds numbers. Dokl. Akad. Nauk. SSSR 30, 299–303 (1941).
5. Wallace J. M., Twenty years of experimental and direct numerical simulation access to the velocity gradient tensor: What have we learned about turbulence? Phys. Fluids 21, 021301 (2009).
6. Moin P., Mahesh K., Direct numerical simulation: A tool in turbulence research. Annu. Rev. Fluid Mech. 30, 539–578 (1998).
7. Yakhot V., Sreenivasan K. R., Anomalous scaling of structure functions and dynamic constraints on turbulence simulation. J. Stat. Phys. 121, 823–841 (2005).
8. Buaria D., Pumir A., Bodenschatz E., Yeung P. K., Extreme velocity gradients in turbulent flows. New J. Phys. 21, 043004 (2019).
9. Buaria D., Pumir A., Vorticity–strain rate dynamics and the smallest scales of turbulence. Phys. Rev. Lett. 128, 094501 (2022).
10. LeCun Y., Bengio Y., Hinton G., Deep learning. Nature 521, 436–444 (2015).
11. Duraisamy K., Iaccarino G., Xiao H., Turbulence modeling in the age of data. Annu. Rev. Fluid Mech. 51, 357–377 (2019).
12. Pandey S., Schumacher J., Sreenivasan K. R., A perspective on machine learning in turbulent flows. J. Turb. 21, 567–584 (2020).
13. Goodfellow I., Bengio Y., Courville A., Deep Learning (MIT Press, 2016).
14. Kim H., Kim J., Won S., Lee C., Unsupervised deep learning for super-resolution reconstruction of turbulence. J. Fluid Mech. 910, A29 (2021).
15. Novati G., de Laroussilhe H. L., Koumoutsakos P., Automating turbulence modelling by multi-agent reinforcement learning. Nat. Mach. Intell. 3, 87–96 (2021).
16. Karniadakis G. E., et al., Physics-informed machine learning. Nat. Rev. Phys. 3, 422–440 (2021).
17. Siggia E. D., Numerical study of small-scale intermittency in three-dimensional turbulence. J. Fluid Mech. 107, 375–406 (1981).
18. Zeff B. W., et al., Measuring intense rotation and dissipation in turbulent flows. Nature 421, 146–149 (2003).
19. Ishihara T., Kaneda Y., Yokokawa M., Itakura K., Uno A., Small-scale statistics in high-resolution direct numerical simulation of isotropic turbulence. J. Fluid Mech. 592, 335–366 (2007).
20. Schumacher J., et al., Small-scale universality in fluid turbulence. Proc. Natl. Acad. Sci. U.S.A. 111, 10961–10965 (2014).
21. Batchelor G. K., The Theory of Homogeneous Turbulence (Cambridge University Press, 1953).
22. Kerr R. M., Higher-order derivative correlations and the alignment of small-scale structures in isotropic numerical turbulence. J. Fluid Mech. 153, 31–58 (1985).
23. Ashurst W. T., Kerstein A. R., Kerr R. M., Gibson C. H., Alignment of vorticity and scalar gradient with strain rate in simulated Navier–Stokes turbulence. Phys. Fluids 30, 2343–2353 (1987).
24. Buaria D., Bodenschatz E., Pumir A., Vortex stretching and enstrophy production in high Reynolds number turbulence. Phys. Rev. Fluids 5, 104602 (2020).
25. Meneveau C., Lagrangian dynamics and models of the velocity gradient tensor in turbulent flows. Annu. Rev. Fluid Mech. 43, 219–245 (2011).
26. Girimaji S. S., Pope S. B., A diffusion model for velocity gradients in turbulence. Phys. Fluids A: Fluid Dyn. 2, 242–256 (1990).
27. Chertkov M., Pumir A., Shraiman B. I., Lagrangian tetrad dynamics and the phenomenology of turbulence. Phys. Fluids 11, 2394–2410 (1999).
28. Chevillard L., Meneveau C., Lagrangian dynamics and statistical geometric structure of turbulence. Phys. Rev. Lett. 97, 174501 (2006).
29. Wilczek M., Meneveau C., Pressure Hessian and viscous contributions to velocity gradient statistics based on Gaussian random fields. J. Fluid Mech. 756, 191–225 (2014).
30. Lawson J. M., Dawson J. R., On velocity gradient dynamics and turbulent structure. J. Fluid Mech. 780, 60–98 (2015).
31. Johnson P. L., Meneveau C., A closure for Lagrangian velocity gradient evolution in turbulence using recent-deformation mapping of initially Gaussian fields. J. Fluid Mech. 804, 387–419 (2016).
32. Leppin L. A., Wilczek M., Capturing velocity gradients and particle rotation rates in turbulence. Phys. Rev. Lett. 125, 224501 (2020).
33. Parashar N., Srinivasan B., Sinha S. S., Modeling the pressure-Hessian tensor using deep neural networks. Phys. Rev. Fluids 5, 114604 (2020).
34. Tian Y., Livescu D., Chertkov M., Physics-informed machine learning of the Lagrangian dynamics of velocity gradient tensor. Phys. Rev. Fluids 6, 094607 (2021).
35. Smith G. F., On isotropic functions of symmetric tensors, skew-symmetric tensors and vectors. Int. J. Eng. Sci. 9, 899–916 (1971).
36. Zheng Q. S., Theory of representations for tensor functions—A unified invariant approach to constitutive equations. Appl. Mech. Rev. 47, 545–587 (1994).
37. Pope S. B., A more general effective-viscosity hypothesis. J. Fluid Mech. 72, 331–340 (1975).
38. Ling J., Kurzawski A., Templeton J., Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, 155–166 (2016).
39. Ishihara T., Gotoh T., Kaneda Y., Study of high-Reynolds number isotropic turbulence by direct numerical simulations. Annu. Rev. Fluid Mech. 41, 165–80 (2009).
40. Rogallo R. S., Numerical experiments in homogeneous turbulence. NASA Technical Memo (1981).
41. Buaria D., Pumir A., Bodenschatz E., Self-attenuation of extreme events in Navier–Stokes turbulence. Nat. Commun. 11, 5852 (2020).
42. Buaria D., Sreenivasan K. R., Dissipation range of the energy spectrum in high Reynolds number turbulence. Phys. Rev. Fluids 5, 092601(R) (2020).
43. Buaria D., Pumir A., Nonlocal amplification of intense vorticity in turbulent flows. Phys. Rev. Res. 3, 042020 (2021).
44. Buaria D., Pumir A., Bodenschatz E., Generation of intense dissipation in high Reynolds number turbulence. Philos. Trans. R. Soc. A 380, 20210088 (2022).
45. Buaria D., Sreenivasan K. R., Intermittency of turbulent velocity and scalar fields using three-dimensional local averaging. Phys. Rev. Fluids 7, L072601 (2022).
46. Buaria D., Sreenivasan K. R., Scaling of acceleration statistics in high Reynolds number turbulence. Phys. Rev. Lett. 128, 234502 (2022).
47. Buaria D., Sreenivasan K. R., Lagrangian acceleration in fully developed turbulence and its Eulerian decompositions. Phys. Rev. Fluids 8, L032601 (2023).
48. Sreenivasan K. R., Meneveau C., Singularities of the equations of fluid motion. Phys. Rev. A 38, 6287–6295 (1988).
49. Siggia E. D., Invariants for the one-point vorticity and strain rate correlation functions. Phys. Fluids 24, 1934–1936 (1981).
50. Perry A. E., Chong M. S., A description of eddying motions and flow patterns using critical-point concepts. Annu. Rev. Fluid Mech. 19, 125–155 (1987).
51. Tsinober A., An Informal Conceptual Introduction to Turbulence (Springer, Berlin, Germany, 2009).
52. Johnson P. L., Meneveau C., Turbulence intermittency in a multiple-time-scale Navier–Stokes-based reduced model. Phys. Rev. Fluids 2, 072601 (2017).
53. Das R., Girimaji S. S., On the Reynolds number dependence of velocity-gradient structure and dynamics. J. Fluid Mech. 861, 163–179 (2019).
54. Beck A., Flad D., Munz C. D., Deep neural networks for data-driven LES closure models. J. Comput. Phys. 398, 108910 (2019).
55. Buaria D., Sreenivasan K. R., Capturing small scale dynamics of turbulent velocity and scalar fields using deep learning. Bull. Am. Phys. Soc. (2022).
56. Yeung P. K., Donzis D. A., Sreenivasan K. R., High-Reynolds-number simulation of turbulent mixing. Phys. Fluids 17, 081703 (2005).
57. Buaria D., Clay M. P., Sreenivasan K. R., Yeung P. K., Small-scale isotropy and ramp–cliff structures in scalar turbulence. Phys. Rev. Lett. 126, 034504 (2021).
58. Buaria D., Clay M. P., Sreenivasan K. R., Yeung P. K., Turbulence is an ineffective mixer when Schmidt numbers are large. Phys. Rev. Lett. 126, 074501 (2021).
59. Patterson G. S., Orszag S. A., Spectral calculations of isotropic turbulence: Efficient removal of aliasing interactions. Phys. Fluids 14, 2538–2541 (1971).
60. Yeung P. K., Pope S. B., Lamorgese A. G., Donzis D. A., Acceleration and dissipation statistics of numerically simulated isotropic turbulence. Phys. Fluids 18, 065103 (2006).
61. Yeung P. K., Pope S. B., An algorithm for tracking fluid particles in numerical simulations of homogeneous turbulence. J. Comput. Phys. 79, 373–416 (1988).
62. Buaria D., Yeung P. K., A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations. Comput. Phys. Commun. 221, 246–258 (2017).
63. Girosi F., Jones M., Poggio T., Regularization theory and neural networks architectures. Neural Comput. 7, 219–269 (1995).



Articles from Proceedings of the National Academy of Sciences of the United States of America are provided here courtesy of National Academy of Sciences
