National Science Review. 2023 Dec 29;11(4):nwad336. doi: 10.1093/nsr/nwad336

Deciphering and integrating invariants for neural operator learning with various physical mechanisms

Rui Zhang 1, Qi Meng 2, Zhi-Ming Ma 3
PMCID: PMC10939376  PMID: 38487494

ABSTRACT

Neural operators have been explored as surrogate models for simulating physical systems to overcome the limitations of traditional partial differential equation (PDE) solvers. However, most existing operator learning methods assume that the data originate from a single physical mechanism, limiting their applicability and performance in more realistic scenarios. To this end, we propose the physical invariant attention neural operator (PIANO) to decipher and integrate the physical invariants for operator learning from PDE series with various physical mechanisms. PIANO employs self-supervised learning to extract physical knowledge and an attention mechanism to integrate it into dynamic convolutional layers. Compared to existing techniques, PIANO can reduce the relative error by 13.6%–82.2% on PDE forecasting tasks across varying coefficients, forces or boundary conditions. Additionally, varied downstream tasks reveal that the PI embeddings deciphered by PIANO align well with the underlying invariants in the PDE systems, verifying the physical significance of PIANO.

Keywords: neural operator, PDE solver, contrastive learning, physical invariants


PIANO: a new operator learning framework that deciphers and incorporates invariants from PDE series via self-supervised learning and an attention technique, achieving superior performance in scenarios with various physical mechanisms.

INTRODUCTION

Partial differential equations (PDEs) provide a fundamental mathematical framework to describe a wide range of natural phenomena and physical processes, such as fluid dynamics [1], life science [2] and quantum mechanics [3], among others. Accurate and efficient solutions of PDEs are essential for understanding and predicting the behavior of these physical systems. However, due to the inherent complexity of PDEs, analytical solutions are often unattainable, necessitating the development of numerical methods for their approximation [4]. Over the years, numerous numerical techniques have been proposed for solving PDEs, such as the finite difference method, finite element method and spectral method [5]. These methods have been widely used in practice, providing valuable insights into the behavior of complex systems governed by PDEs [6,7]. Despite the success of classical numerical methods in solving a wide range of PDEs, there are several limitations associated with these techniques, such as the restriction on step size, difficulties in handling complex geometries and the curse of dimensionality for high-dimensional PDEs [8–10].

In recent years, machine learning (ML) methods have emerged as a disruptive alternative to classical numerical methods for scientific computing problems involving PDEs. By leveraging the power of data-driven techniques or the expressive power of neural networks, ML-based methods have the potential to overcome some of the shortcomings of traditional numerical approaches [8,11–15]. In particular, by using a deep neural network to represent the solution of the PDE, ML methods can efficiently handle complex geometries and solve high-dimensional PDEs [16]. Representative works include the DeepBSDE method, which can solve parabolic PDEs in 100 dimensions [8]; the random feature model, which can easily handle complex geometries and achieve spectral accuracy [17]; and ML-based reduced-order modeling, which can improve the accuracy and efficiency of traditional reduced-order modeling for nonlinear problems [18–20]. However, these methods are tailored to a fixed initial field (or external force field), and they require retraining of the neural networks when solving PDEs with changing high-dimensional initial fields.

In addition to these developments, neural operators have emerged as a more promising approach to simulating physical systems with deep learning, using neural networks as surrogate models to learn the PDE operator between function spaces from data [9,21,22], which can significantly accelerate the simulation process. Most studies along this line focus on network architecture design to ensure both simulation accuracy and inference efficiency. For example, DeepONet [21] and its variants [23–25], Fourier neural operators [9,26,27] and transformer-based operators [28,29] have been proposed to deal with continuous input and output spaces, different frequency components and complex geometries, respectively. Compared to traditional methods, neural operators break the restriction on spatiotemporal discretization and enjoy a speed-up of thousands of times, demonstrating enormous potential in areas such as inverse design and physical simulation [9,30]. However, these methods by default only consider PDEs generated from a single formula, limiting the applicability of neural operators to multi-physical scenarios, e.g. datasets of PDE systems sampled under different conditions (boundary conditions, parameters, etc.).

To address this issue, message-passing neural networks (MPNNs) incorporate an indicator of the scenario (i.e. the PDE parameters) into the inputs to improve the generalization capabilities of the model [10]. DyAd learns the physical information through a supervised encoder and automatically adapts to different scenarios [31]. Although incorporating physical knowledge can enhance the performance of the neural operator, these methods still require access to high-level PDE information in the training or test stage [10,31]. However, in many real-world applications, collecting the high-level physical information that governs the behavior of PDE systems can be infeasible or prohibitively expensive. For example, in fluid dynamics or ocean engineering, scientists can gather abundant flow field data controlled by varying and unknown Reynolds numbers, and determining these numbers would require numerous calls to PDE solvers [12,32].

To this end, we propose the physical invariant attention neural operator (PIANO), a novel operator learning framework for deciphering and integrating physical knowledge from PDE series with various PIs, such as varying coefficients and boundary conditions. PIANO has two branches: a PI encoder that extracts physical invariants and a personalized operator that predicts the complementary field representation of each PDE system (Fig. 1(a)). As illustrated in Fig. 1, PIANO employs two key designs: a contrastive learning stage for training the PI encoder and an attention mechanism that incorporates this knowledge into neural operators through dynamic convolutional (DyConv) layers [33]. On the one hand, contrastive learning extracts the PI representation through a similarity loss defined on augmented spatiotemporal patches cropped from the dataset (Fig. 1(b)). To enhance consistency with physical priors, we propose three physics-aware cropping techniques adapted to the different PI properties of different PDE systems, such as the spatiotemporal invariant, the boundary invariant, etc. (Fig. 1(b)(iii)). This physics-aware contrastive learning technique extracts the PI representation without the need for labels of the PDE conditions, thus providing the corresponding PI information for each PDE series (Fig. 1(b)). On the other hand, after the PI encoder is trained by contrastive learning, we compute the attention weights (Fig. 1(c)) from the PI representation extracted by the PI encoder and reweight the convolutional kernels in the DyConv layer to obtain a personalized operator (Fig. 1(c)). This personalized operator, incorporating the PI information as an indicator of the PDE condition, can predict the evolution of each PDE field in a mixed dataset with guaranteed generalization performance.

Figure 1.

Illustration of PIANO. (a) The overall framework for PIANO when forecasting the PDE series. Given the ith PDE initial fields ui0, t, PIANO first infers the PI embedding hi via the PI encoder 𝓔, and then integrates hi into the neural operator 𝓕 to obtain a personalized operator 𝓕i for ui. After that, PIANO predicts the subsequent PDE fields with this personalized operator. (b) Training stage of the PI encoder. (i) Illustration of contrastive learning. We crop two patches from each PDE series in a mini-batch according to the physical priors. The PI encoder and the projector are trained to maximize the similarity of two homologous patches. (ii) The effect of the SimCLR loss, which brings closer (pushes apart) the representations governed by the same (different) physical parameters. (iii) Physics-aware cropping strategy of contrastive learning in PIANO. The cropping strategy should align with the physical prior of the PDE system. We illustrate the cropping strategies for spatiotemporal, temporal and boundary invariants. We also show the global cropping strategy for comparison, which does not consider the more detailed physical priors and feeds the entire spatial fields directly. (c) Integration of the PI embedding into the neural operator. We use a split-merge trick to obtain the PI embedding hi for the PDE field ui, and feed hi into a multi-layer perceptron (MLP) to obtain K non-negative scales π1, …, πK with ∑k πk = 1. We use {πk} as the attention to reweight the DyConv layer in the neural operator and thus obtain a personalized operator for ui, which incorporates the physical knowledge in hi.

We demonstrate our method’s effectiveness and physical meaning on several benchmark problems, including Burgers’ equation, the convection-diffusion equation (CDE) and the Navier-Stokes equation (NSE). Our results show that PIANO achieves superior accuracy and generalization compared to existing methods for solving PDEs with various physical mechanisms. According to the results of four experiments, PIANO can reduce the relative error rate by 13.6%–82.2% by deciphering and integrating the PIs of PDE systems. Furthermore, we conduct experiments to evaluate the quality of the PI embeddings through downstream tasks, such as unsupervised dimensionality reduction and supervised classification (regression). These results indicate that the manifold structures of the PI embeddings align well with the underlying PIs hidden in the PDE series (e.g. Reynolds numbers in the NSE and external forces in Burgers’ equation), thereby demonstrating their physical significance.

THE FRAMEWORK OF PIANO

In this section, we introduce the framework of PIANO, including how PIANO deciphers PIs from unlabeled multi-physical datasets and the procedure to incorporate them into the neural operator.

Description of the PDE system

Consider the time-dependent PDE system, which can be expressed as

∂tu(x, t) = 𝓛θ1[u](x, t), (x, t) ∈ Ω × [0, T], u(x, 0) = u0(x), (1)

where 𝓛θ1 is the differential operator with parameter θ1 ∈ Θ1, Ω is a bounded domain and u0 represents the initial conditions. Let 𝓑θ2 be the boundary condition governed by the parameter θ2 ∈ Θ2. Let Θ = Θ1 × Θ2 be the product space between Θ1 and Θ2, and let θ = (θ1, θ2) ∈ Θ be the global parameters of the PDE system. We utilize uk, t[Ω] ≔ [uk[Ω], …, uk+t−1[Ω]] to denote the t frame (k + t − 1 ≤ T) PDE series defined in Ω.
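In implementation terms, a t-frame series uk, t[Ω] is simply a sliding window over a discretized trajectory stored as an array. The following sketch (NumPy; the array shapes are illustrative assumptions, not the paper’s code) shows the indexing convention:

```python
import numpy as np

def frame_window(traj, k, t):
    """Extract u_{k,t} = [u_k, ..., u_{k+t-1}] from a discretized trajectory.

    traj has shape (T, *spatial); the returned window has shape (t, *spatial).
    """
    assert k + t <= traj.shape[0], "window must stay inside the recorded frames"
    return traj[k:k + t]

# Example: a trajectory with T = 200 frames on a 64-point 1D grid.
traj = np.random.rand(200, 64)
u_in = frame_window(traj, 0, 20)    # 20-frame input series
u_out = frame_window(traj, 20, 20)  # the next 20 frames (prediction target)
```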

In this paper, we consider the scenario where θ ∈ Θ is a time-invariant parameter. In other words, the parameters θ that govern the PDE system in Equation (1) do not change over time, which includes the following three scenarios.

  • Spatiotemporal invariant: uk1, t[Ω1] and uk2, t[Ω2] share the same θ for all k1, k2 ∈ [0, T] and Ω1, Ω2 ⊂ Ω.

  • Temporal invariant: given Ω′ ⊂ Ω, uk1, t[Ω′] and uk2, t[Ω′] share the same θ for all k1, k2 ∈ [0, T].

  • Boundary invariant: uk1, t[∂Ω] and uk2, t[∂Ω] share the same θ for all k1, k2 ∈ [0, T].

In Table 1, we give some examples of one-dimensional (1D) heat equations to illustrate the above three types of PI.

Table 1.

Examples of three types of PIs on 1D heat equations (Ω = [− 1, 1]).

| PDE formula | Type of PI | PI θ | PI space Θ |
| --- | --- | --- | --- |
| ∂tu = κΔu, u(±1, t) = 0 | Spatiotemporal invariant | κ | [0, 1] |
| ∂tu = 0.1Δu + f(x), u(±1, t) = 0 | Temporal invariant | f(x) | {a sin (x): a ∈ [0, 1]} |
| ∂tu = 0.1Δu, u(±1, t) = c | Boundary invariant | c | [−1, 1] |
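As a concrete illustration of the three invariance types, the physics-aware patch cropping used by the contrastive stage can be sketched as follows for a 1D series (NumPy; patch sizes and function names are hypothetical, not taken from the paper’s released code):

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_spatiotemporal(traj, t, w):
    """Spatiotemporal invariant: theta holds everywhere, so crop a random
    (t, w) patch in both time and space."""
    T, nx = traj.shape
    k = rng.integers(0, T - t + 1)
    x = rng.integers(0, nx - w + 1)
    return traj[k:k + t, x:x + w]

def crop_temporal(traj, t, x, w):
    """Temporal invariant: theta is tied to a spatial location, so fix the
    spatial window [x, x + w) and randomize only the start time."""
    T, _ = traj.shape
    k = rng.integers(0, T - t + 1)
    return traj[k:k + t, x:x + w]

def crop_boundary(traj, t, w):
    """Boundary invariant: theta lives on the boundary, so crop a strip of
    width w next to either end of the 1D domain at a random time."""
    T, nx = traj.shape
    k = rng.integers(0, T - t + 1)
    return traj[k:k + t, :w] if rng.integers(0, 2) == 0 else traj[k:k + t, -w:]

traj = np.random.rand(200, 64)  # one PDE series: 200 frames on a 64-point grid
patch1 = crop_spatiotemporal(traj, 20, 16)  # two homologous views of the same
patch2 = crop_spatiotemporal(traj, 20, 16)  # series form a positive pair
```

Two patches cropped from the same series by the strategy matching the invariant are guaranteed to share the same θ, which is what makes them a valid positive pair.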

The learning regime

Given the t frame PDE series uk, t[Ω] governed by Equation (1), an auto-regressive neural operator 𝓕 acts as a surrogate model, which produces the next t frame PDE solution as follows:

uk+t, t[Ω] = 𝓕(uk, t[Ω]). (2)

We assume that the neural operator 𝓕 is trained under the supervision of the dataset 𝓓 = {u1, …, uN}, where ui is the ith PDE series defined in Ω × [0, Mt] and governed by the parameter θi ∈ Θ. Existing methods typically assume that all ui in 𝓓 share the same θ [9,21] or have known different parameters θi [10,31]. However, we consider a more challenging scenario where data are generated from various physical systems (with varying but unknown θi in the training set 𝓓train and test set 𝓓test) and no additional knowledge of θi is provided during the training and test stages.
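Equation (2) is applied auto-regressively at inference time: each predicted block is fed back as the next input. A minimal sketch (NumPy, with a toy stand-in for the learned operator):

```python
import numpy as np

def rollout(operator, u0, steps):
    """Auto-regressive forecasting: u_{k+t,t} = F(u_{k,t}), repeated `steps` times.

    operator maps a t-frame block of shape (t, nx) to the next t-frame block;
    u0 is the initial t-frame series. Returns the concatenated forecast.
    """
    blocks, current = [], u0
    for _ in range(steps):
        current = operator(current)
        blocks.append(current)
    return np.concatenate(blocks, axis=0)

# Toy stand-in operator (a real surrogate would be a trained neural network).
toy_operator = lambda u: 0.9 * u

u0 = np.random.rand(20, 64)               # 20 input frames on a 64-point grid
forecast = rollout(toy_operator, u0, 9)   # 9 blocks -> 180 forecast frames
```

Because errors compound through the feedback loop, accuracy on late blocks (the future domain in the experiments below) is a much harder test than one-step prediction.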

Forecasting stage of PIANO

As shown in Fig. 1(a), given the initial PDE fields ui0, t, the forecasting stage of PIANO includes three steps: (1) infer the PI embedding hi via the PI encoder 𝓔; (2) integrate hi into the neural operator 𝓕 to obtain a personalized operator 𝓕i for ui; (3) predict the subsequent PDE fields with the personalized operator 𝓕i. Two key technical problems arise when carrying out this plan. On the one hand, we need to decipher the PI information behind the PDE system without the supervision of known labels. To this end, we utilize contrastive learning to pre-train the PI encoder in a self-supervised manner and propose the physics-aware cropping strategy to constrain the learned representation to align with the physical prior. On the other hand, we need to integrate the PI embedding into the neural operator to obtain the personalized operator. In this paper, we borrow the DyConv technique [33] and propose the split-merge trick to fully exploit the PI embedding.

Contrastive training stage of the PI encoder

In this section we introduce how to train an encoder 𝓔 for extracting the PI information from the training set 𝓓train, which is generated from various PDE fields, without the supervision of {θi}. We begin by considering the scenario where θi is a spatiotemporal invariant, i.e. uik1, t[Ω1] and uik2, t[Ω2] share the same θi for all k1, k2 ∈ [0, T] and Ω1, Ω2 ⊂ Ω. When θ ∈ Θ is identifiable, there exists a mapping 𝓔* satisfying the property

𝓔*(uik, t[Ω′]) = θi for all k ∈ [0, T] and Ω′ ⊂ Ω. (3)

However, the mapping 𝓔* that can directly output θi is not available due to the absence of θ. To decipher the information implying θi, we adopt the technique from SimCLR [34] to train 𝓔 in a self-supervised manner. In each mini-batch we sample training data {ui}i∈I from 𝓓train with index set I and randomly intercept two patches from each PDE sample, i.e. ui, 1 and ui, 2. The PI encoder 𝓔 maps each patch to a representation vector, denoted hi, v = 𝓔(ui, v) for v ∈ {1, 2}. Subsequently, we employ a two-layer MLP g as a projection head to obtain zi, v = g(hi, v) (Fig. 1(b)(i)). Considering the PDE patches cropped from the same/different PDE series as positive/negative samples, the SimCLR loss can be expressed as

ℒ = −(1/(2|I|)) ∑i∈I ∑v∈{1, 2} log[exp(sim(zi, 1, zi, 2)/τ) / ∑(j, w)≠(i, v) exp(sim(zi, v, zj, w)/τ)], (4)

where sim(u, v) denotes the cosine similarity between u and v, and τ > 0 denotes a temperature parameter. As shown in Fig. 1(b)(ii), the SimCLR loss brings the representations governed by the same physical parameters closer, while pushing apart those with different parameters. After the training stage of contrastive learning, we discard the projector g and only utilize the encoder 𝓔 to extract PI information from PDE fields, which is in line with the SimCLR method [34]. See the Method section for more details on the architecture of the PI encoder and the physics-aware cropping strategy.
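A compact sketch of the NT-Xent loss in Equation (4) (the paper trains with PyTorch; this NumPy version only illustrates the computation for a batch of B positive patch pairs):

```python
import numpy as np

def simclr_loss(z1, z2, tau=0.5):
    """NT-Xent loss for B positive pairs (z1[i], z2[i]); z1, z2 have shape (B, d).

    Projections of patches cropped from the same PDE series are positives;
    all other patches in the mini-batch act as negatives.
    """
    B = z1.shape[0]
    Z = np.concatenate([z1, z2], axis=0)              # 2B projected embeddings
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = (Zn @ Zn.T) / tau                           # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    pos = np.concatenate([np.arange(B, 2 * B), np.arange(B)])  # positive indices
    logits = sim - sim.max(axis=1, keepdims=True)     # stabilized log-softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * B), pos].mean()

rng = np.random.default_rng(0)
za = rng.normal(size=(8, 16))
loss_aligned = simclr_loss(za, za)                    # perfectly aligned pairs
loss_random = simclr_loss(za, rng.normal(size=(8, 16)))
```

Aligned views yield a lower loss than unrelated views, which is exactly the pressure that pulls representations of the same θ together and pushes different θ apart.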

Integrate the PI representation

In this section we introduce how PIANO integrates the pre-trained PI representation into the neural operator. Given the pre-trained PI encoder 𝓔 and an initial PDE field ui0, t, we first obtain the PI embedding hi via a split-merge trick (see the Method section for more details), and then we adopt the DyConv [33] technique to incorporate the PI information into the neural operator 𝓕. In the first layer of 𝓕 there are K convolutional matrices of the same size, denoted W1, …, WK. In detail, we transform the first Fourier or convolutional layer into a DyConv layer in the Fourier-based or convolutional-based neural operators, respectively. All other layers maintain the same structure as the original neural operators. When predicting the PDE fields for a specific instance ui, we use an MLP to transform its PI representation hi into K non-negative scales π1, …, πK with ∑k πk = 1. The normalization of {πk} is implemented by a softmax layer. We use {πk} as the attention to reweight the K convolution matrices, i.e. W̃i = ∑k πkWk. We replace the first layer of 𝓕 with W̃i and denote this new operator as 𝓕i, which can be considered as the personalized operator for ui (Fig. 1(c)). It is worth mentioning that the parameters W̃i in 𝓕i are obtained by a weighted summation, whose computational cost is almost negligible compared with the convolutional operation. Therefore, when aligning the parameters of PIANO and other neural operators, PIANO enjoys a comparable or faster inference speed, even considering the calculation of the PI representation h.
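The attention-reweighting step can be sketched as follows (NumPy; the MLP that produces the K attention logits is reduced here to a single linear layer, an illustrative simplification rather than the paper’s architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def personalize_kernel(h, W_bank, A, b):
    """Build the personalized kernel W~ = sum_k pi_k W_k from the PI embedding h.

    h: PI embedding, shape (d,); W_bank: K candidate kernels, shape (K, ...);
    A (K, d) and b (K,) parameterize the attention head.
    """
    pi = softmax(A @ h + b)                      # K non-negative scales, sum to 1
    W_tilde = np.tensordot(pi, W_bank, axes=1)   # attention-weighted kernel sum
    return pi, W_tilde

rng = np.random.default_rng(0)
d, K = 32, 4
h = rng.normal(size=d)                           # PI embedding of one instance
W_bank = rng.normal(size=(K, 16, 16, 3, 3))      # K conv kernels of equal size
A, b = 0.1 * rng.normal(size=(K, d)), np.zeros(K)
pi, W_tilde = personalize_kernel(h, W_bank, A, b)
```

The weighted summation touches each kernel entry once, so its cost is negligible next to applying the convolution itself, which is why the personalized operator adds almost no inference overhead.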

EXPERIMENTS

In this section, we conduct a series of numerical experiments to assess the performance of our proposed PIANO method and other baseline techniques in simulating PDE systems governed by diverse PIs.

Experimental setup

We divide the temporal intervals into 200 frames for training and validation. The numbers of input and output frames are set to 20 for the neural operators and PI encoders in the experiments. In order to assess the out-of-distribution generalization capabilities of the trained operator, we set the test temporal intervals to 240 frames, with the last 40 frames occurring exclusively in the test set. We refer to the temporal interval in the training set as the training domain, and the temporal interval that only occurs in the test set as the future domain. The spatial intervals are partitioned into 64 grid points for the 1D case and 64 × 64 grid points for the 2D case. The training, test and validation set sizes for all tasks are 1000, 200 and 200, respectively. All experiments are carried out using the PyTorch package [35] on an NVIDIA A100 GPU. We repeat each experiment with three random seeds from the set {0, 1, 2} and report the mean value and variance. The performance of the model is evaluated using the average relative ℓ2 error and the ℓ∞ error over all frames in the training domain and the future domain, respectively.
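For reference, the two evaluation metrics can be computed as follows (a straightforward sketch; the per-frame averaging convention is our reading of the setup above):

```python
import numpy as np

def relative_l2(pred, true):
    """Average relative l2 error over frames: mean_t ||pred_t - true_t|| / ||true_t||."""
    diff = (pred - true).reshape(pred.shape[0], -1)
    ref = true.reshape(true.shape[0], -1)
    return (np.linalg.norm(diff, axis=1) / np.linalg.norm(ref, axis=1)).mean()

def l_inf(pred, true):
    """l-infinity error: maximum pointwise deviation per frame, averaged over frames."""
    return np.abs(pred - true).reshape(pred.shape[0], -1).max(axis=1).mean()

true = np.ones((5, 64))   # 5 frames on a 64-point grid
pred = 1.01 * true        # a uniform 1% over-prediction
```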

Dataset

In this section, we introduce the PDE dataset utilized in this paper, including two kinds of Burgers’ equation, the 1D CDE and three kinds of 2D NSEs.

Experiment E1: Burgers’ equation with varying external forces f. We simulate the 1D Burgers’ equation with varying external forces f, defined as

∂tu + u∂xu = ν∂xxu + f(x), (5)

where ν is a fixed viscosity and f(x) is a smooth function representing the external force. In this experiment, we select 14 different f to evaluate the performance of PIANO and other baseline methods under varying external forces; these forces are uniformly sampled from a fixed set of smooth functions. The ground-truth data are generated using the Python package ‘py-pde’ [36] with a fixed step size of 10−4. The final time T is set to 5 for the training set and 6 for the test set.

Experiment E2: Burgers’ equation with varying diffusivities D. We simulate the 1D Burgers’ equation with spatially varying diffusivities, defined as

∂tu + u∂xu = ∂x(D(x)∂xu), (6)

where D(x) is a smooth and non-negative function representing the spatially varying diffusivity. In this experiment, we select 10 different diffusivities to evaluate the performance of PIANO and other baseline methods under varying spatial fields. The 10 types of diffusivities are uniformly sampled from the set {1, 2, 1 ± cos (x), 1 ± sin (x), 1 ± cos (2x), 1 ± sin (2x)}. The data generation scheme and the final time T are aligned with experiment E1.

Experiment E3: CDE with varying boundary conditions 𝓑. We simulate the 1D CDEs with varying boundary conditions, defined as

∂tu + v∂xu = D∂xxu, x ∈ Ω, with 𝓑[u] = 0 on ∂Ω, (7)

where 𝓑 represents the boundary conditions. In this experiment, we select four types of 𝓑 to evaluate the generalizability of PIANO and other baseline methods under varying boundary conditions. In this dataset, the four types of boundary conditions include the Dirichlet condition (u = 0.2), the Neumann condition (∂nu = 0.2), the curvature condition and the Robin condition (∂nu + u = 0.2). The data generation scheme and the final time T align with experiment E1.

Experiment E4: NSE with varying viscosity terms ν. We simulate the vorticity fields for 2D flows within a periodic domain Ω = [0, 1] × [0, 1], governed by the NSEs:

∂tω + (u · ∇)ω = νΔω + f(x), ∇ · u = 0, ω(x, 0) = ω0(x), (8)

where ω = ∇ × u is the vorticity, and f(x) = 0.1 sin (2π(x1 + x2)) + 0.1 cos (2π(x1 + x2)) and ν represent the forcing function and viscosity term, respectively. The viscosity is a crucial component in NSEs that determines the turbulence of flows [37,38]. We generate NSE data with varying viscosity coefficients, ranging from 10−2 to 10−5, to simulate heterogeneity. The vorticity fields become more complicated as ν decreases because the nonlinear term −(u · ∇)ω gradually governs the motion of the fluids. The data generation process employs the pseudo-spectral method with a time step of 10−4 and a 256 × 256 grid size. The data are then downsampled to a grid size of 64 × 64, which aligns with the settings in [9]. The final time T is 20 and 24 for the training and test sets, respectively.
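The 256 × 256 → 64 × 64 downsampling can be realized by strided subsampling of the high-resolution solution (a sketch; the assumption that plain subsampling is used follows the convention of the FNO benchmarks [9]):

```python
import numpy as np

def downsample(field, factor):
    """Subsample a uniformly gridded 2D field by an integer factor."""
    assert field.shape[0] % factor == 0 and field.shape[1] % factor == 0
    return field[::factor, ::factor]

omega_hi = np.random.rand(256, 256)   # vorticity on the simulation grid
omega_lo = downsample(omega_hi, 4)    # 64 x 64 training resolution
```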

Experiment E5: NSE with varying viscosity terms ν and external forces f. In this experiment, we aim to simulate the 2D NSE as shown in Equation (8), with varying viscosity terms ν and external forces f. The viscosity coefficients ν range from 10−2 to 10−5. The form of the forcing function is given by f(x) = a sin (2π(x1 + x2)) + a cos (2π(x1 + x2)), where the coefficient a is uniformly sampled from [0, 0.2]. All other experimental settings are consistent with those described in experiment E4.

Experiment E6: Kolmogorov flow with varying viscosity terms ν. We simulate the vorticity fields for 2D NSEs within a periodic domain Ω = [0, 1] × [0, 1] driven by Kolmogorov forcing [39]:

∂tω + (u · ∇)ω = νΔω + f(x), ∇ · u = 0, ω(x, 0) = ω0(x). (9)

The fluid fields in Equation (9) result in much more complex trajectories due to the involvement of Kolmogorov forcing. We generate NSE data with varying viscosity coefficients to simulate heterogeneity, ranging from 10−2 to 10−4. All other experimental settings are consistent with those described in experiment E4.

Baselines

We consider several representative baselines from operator learning models, including the following.

  • Fourier neural operator (FNO) [9]: a classical neural operator that uses the Fourier transform to handle PDE information in the frequency domain.

  • Unet [40,41]: a classic architecture for semantic segmentation in biomedical imaging recently utilized as a surrogate model for PDE solvers.

  • Low-rank decomposition network (LordNet) [42]: a convolutional-based neural PDE solver that learns a low-rank decomposition layer to extract dominant patterns.

  • Multiwavelet-based (MWT) model [43]: a neural operator that compresses the kernel of the corresponding operator using a fine-grained wavelet transform.

  • Factorized Fourier neural operators (FFNOs) [27]: an FNO variant that improves performance using a separable spectral layer and enhanced residual connections.

For PIANO we conduct experiments on PIANO + X, where X represents the backbone models. For the neural operator X and PIANO + X, we align the critical parameters of X and adjust the widths of the networks to match the number of parameters between X and PIANO + X, thereby ensuring a fair comparison.

Results

Table 2 presents the performance of various models for the PDE simulation on the experiments (E1–E6), as well as their computational costs. PIANO achieves the best prediction results across most metrics and experiments. When compared with the backbone models X (FNO, Unet and FFNO), the three variants of PIANO + X consistently outperform their backbone models on all tasks for both ℓ2 and ℓ∞ errors, demonstrating that the PI embedding can enhance the robustness and accuracy of neural operators’ prediction capabilities. Specifically, PIANO + FNO, compared to FNO, reduces the relative error ℓ2 by 26.5%–63.1% in the training domain and by 35.7%–51.7% in the future domain over four experiments. PIANO + Unet, compared to Unet, reduces the relative error ℓ2 by 32.9%–76.8% in the training domain and by 36.7%–82.2% in the future domain over four experiments. PIANO provides a more significant enhancement to Unet than to FNO in most tasks. One potential explanation is that the Fourier layer within the PI encoder introduces additional frequency domain information to the convolution-based Unet, whereas FNO is already based on a Fourier layer network. We compare the vorticity fields (in E4 and E6) predicted by FNO and PIANO + FNO from T = 4 to T = 24 in Fig. 2. Within the training domain, PIANO demonstrates a superior ability to capture the intricate details of fluid dynamics compared to FNO. As for the future domain, where supervised data are lacking, both PIANO and FNO struggle to provide exact predictions in E4. However, PIANO still forecasts the corresponding trends of the fluids more accurately than FNO.

Table 2.

Results of the PDE simulation for experiments E1, E2, E3, E4, E5 and E6: relative errors (%; mean over three random seeds) and computational costs for baseline methods and PIANO. The computational cost and numbers of parameters for PIANO reported in this table include both the PI encoder and the neural operator. The best results in each task are highlighted in bold.

| Data | Model | Training ℓ2 (%) | Training ℓ∞ (%) | Future ℓ2 (%) | Future ℓ∞ (%) | Train (s) | Infer (s) | Param (million) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E1: Burgers’ equation with varying external forces f | FNO | 0.669 | 0.978 | 1.062 | 1.340 | 0.128 | 0.018 | 0.757 |
| | LordNet | 1.660 | 2.406 | 2.782 | 3.529 | 0.317 | 0.138 | 0.810 |
| | MWT | 1.962 | 2.737 | 2.764 | 3.572 | 0.460 | 0.111 | 0.789 |
| | Unet | 2.576 | 4.205 | 3.280 | 4.687 | 0.256 | 0.041 | 0.860 |
| | PIANO + FNO | **0.492** | **0.611** | **0.536** | **0.700** | 0.147 | 0.022 | 0.762 |
| | PIANO + Unet | 1.605 | 3.130 | 1.796 | 2.946 | 0.299 | 0.039 | 0.766 |
| E2: Burgers’ equation with varying diffusivities D | FNO | 6.328 | 10.847 | 13.111 | 19.379 | 0.128 | 0.018 | 0.757 |
| | LordNet | 8.471 | 22.016 | 23.786 | 62.977 | 0.317 | 0.138 | 0.810 |
| | MWT | 6.381 | 12.355 | 12.013 | 18.952 | 0.460 | 0.111 | 0.789 |
| | Unet | 7.087 | 12.592 | 13.593 | 20.221 | 0.256 | 0.041 | 0.860 |
| | PIANO + FNO | 4.559 | 8.932 | 8.421 | 13.680 | 0.147 | 0.022 | 0.762 |
| | PIANO + Unet | **4.149** | **8.879** | **7.342** | **12.330** | 0.299 | 0.039 | 0.766 |
| E3: CDE with varying boundary conditions | FNO | 1.127 | 1.742 | 1.468 | 2.041 | 0.128 | 0.018 | 0.757 |
| | LordNet | 0.605 | 0.990 | 0.901 | **0.832** | 0.317 | 0.138 | 0.810 |
| | MWT | 0.662 | 1.232 | 0.781 | 1.385 | 0.460 | 0.111 | 0.789 |
| | Unet | 12.565 | 20.786 | 20.335 | 22.686 | 0.256 | 0.041 | 0.860 |
| | PIANO + FNO | **0.416** | **0.893** | **0.708** | 1.098 | 0.148 | 0.022 | 0.763 |
| | PIANO + Unet | 2.921 | 5.773 | 3.611 | 5.446 | 0.299 | 0.039 | 0.767 |
| E4: NSE with varying viscosity terms ν | FNO | 10.433 | 16.937 | 30.702 | 56.563 | 0.384 | 0.182 | 2.085 |
| | LordNet | 8.469 | 15.574 | 30.348 | 57.728 | 1.031 | 0.547 | 2.069 |
| | MWT | 10.135 | 17.917 | 32.232 | 61.572 | 1.067 | 0.229 | 2.295 |
| | Unet | 9.054 | 18.483 | 31.830 | 60.106 | 0.335 | 0.089 | 3.038 |
| | FFNO | 3.698 | 6.943 | 15.845 | 35.766 | 1.964 | 1.008 | 2.013 |
| | PIANO + FNO | 4.652 | 9.191 | 17.393 | 39.953 | 0.395 | 0.138 | 2.020 |
| | PIANO + Unet | 6.070 | 15.356 | 20.132 | 47.079 | 0.440 | 0.111 | 1.941 |
| | PIANO + FFNO | **3.140** | **5.935** | **12.155** | **28.985** | 1.364 | 0.682 | 1.888 |
| E5: NSE with varying viscosity terms ν and external forces f | FNO | 19.277 | 26.354 | 44.467 | 57.912 | 0.384 | 0.182 | 2.085 |
| | LordNet | 27.675 | 39.617 | 76.273 | 111.628 | 1.031 | 0.547 | 2.069 |
| | MWT | 18.908 | 25.361 | 40.919 | 53.123 | 1.067 | 0.229 | 2.295 |
| | Unet | 25.374 | 37.916 | 52.505 | 73.183 | 0.335 | 0.089 | 3.038 |
| | FFNO | 8.032 | 11.607 | 20.750 | 28.939 | 1.964 | 1.008 | 2.013 |
| | PIANO + FNO | 9.082 | 12.731 | 21.795 | 29.912 | 0.457 | 0.144 | 2.071 |
| | PIANO + Unet | 12.829 | 23.184 | 24.060 | 40.415 | 0.491 | 0.115 | 2.158 |
| | PIANO + FFNO | **6.937** | **9.736** | **18.062** | **25.411** | 1.424 | 0.686 | 1.997 |
| E6: Kolmogorov flow with varying viscosity terms ν | FNO | 4.017 | 5.250 | 5.241 | 6.842 | 0.384 | 0.182 | 2.085 |
| | LordNet | 6.559 | 8.159 | 11.343 | 17.940 | 1.031 | 0.547 | 2.069 |
| | MWT | 4.663 | 5.769 | 6.511 | 8.062 | 1.067 | 0.229 | 2.295 |
| | Unet | 9.807 | 19.449 | 13.949 | 27.505 | 0.335 | 0.089 | 3.038 |
| | FFNO | 1.727 | 2.194 | 2.608 | 3.357 | 1.964 | 1.008 | 2.013 |
| | PIANO + FNO | 1.908 | 2.419 | 2.840 | 3.552 | 0.395 | 0.138 | 2.020 |
| | PIANO + Unet | 6.704 | 12.143 | 9.676 | 16.495 | 0.440 | 0.111 | 1.941 |
| | PIANO + FFNO | **1.491** | **1.876** | **2.277** | **3.040** | 1.364 | 0.682 | 1.888 |

Figure 2.

Comparison of the vorticity fields in E4 (a) and E6 (b) between FNO and PIANO + FNO from T = 4 to 24 in the periodic domain [0, 1]2 for a 2D turbulent flow. Note that the times T = {4, 8, 12, 16, 20} are in the training domain, while T = 24 is in the future domain. The vorticity fields in the bounding boxes indicate that PIANO can capture more details than FNO.

Regarding computational costs, it is worth noting that the PI encoder is a significantly lighter network (0.053 and 0.184 million parameters for the Burgers and NSE cases, respectively) than the neural operator. As a result, the inference time added by the PI encoder is generally negligible: 0.002 and 0.004 s for the Burgers and NSE data, respectively. Furthermore, in situations where the computational cost of the convolutional layers in the backbone is substantial, PIANO can considerably enhance computation speed with the help of dynamic convolutional techniques. For example, PIANO reduces the inference time by 24.2% and 32.3% for FNO and FFNO, respectively, when simulating 2D NSEs. More detailed discussions of computational costs are given in the online supplementary material.
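The dynamic convolution technique [33] mentioned above attends over a bank of candidate kernels, so only one aggregated convolution is applied per layer. A minimal 1D numpy sketch, in which all names, shapes and the attention projection are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv1d(u, pi_embed, kernels, attn_W):
    """Aggregate K candidate kernels with attention weights computed from
    the PI embedding, then apply a single 1D convolution."""
    w = softmax(attn_W @ pi_embed)              # attention over the K kernels
    kernel = np.tensordot(w, kernels, axes=1)   # one aggregated kernel
    return np.convolve(u, kernel, mode="same")

rng = np.random.default_rng(0)
u = rng.standard_normal(64)            # 1D field
pi = rng.standard_normal(8)            # PI embedding from the encoder
kernels = rng.standard_normal((4, 5))  # K = 4 candidate kernels of width 5
attn_W = rng.standard_normal((4, 8))   # attention projection (illustrative)
out = dynamic_conv1d(u, pi, kernels, attn_W)
print(out.shape)  # (64,)
```

Because the K kernels collapse into one before the convolution, the per-step cost matches a single convolution, which is consistent with the speedups reported above.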

Physical explanation of the PI encoder

In this section, we describe experiments that investigate the physical significance of the PI encoder on the Burgers (E1) and NSE (E4) data; specifically, whether the learned representation reflects the PI information hidden within the PDE system. We consider two kinds of downstream tasks, unsupervised dimensionality reduction and supervised classification (regression), to analyze the properties of the PI embeddings produced by PIANO. Furthermore, we compare several corresponding baselines to study the effect of each component of PIANO, as follows.

  • PIANO-CL (without contrastive learning): in this model, we jointly train the PI encoder and the neural operator without the contrastive pre-training, which can be regarded as an FNO version of the DyConv technique. We train this model to reveal the impact of contrastive learning in PIANO.

  • PIANO-SM (without the split-merge trick): in PIANO, we utilize the split-merge trick to divide the PDE fields Ω into several patches and then input them into the PI encoder during the training and testing phases (Fig. 1(b) and (c)). In PIANO-SM, we directly feed the entire PDE field into the PI encoder.

  • PIANO-PC (without physics-aware cropping): we assert that cropping strategies should align with the physical prior of the PDE system and propose physics-aware cropping methods for contrastive learning (Fig. 1(b)). In PIANO-PC, we discard the physics-aware cropping technique and swap the two corresponding augmentation methods for the Burgers and NSE data, respectively.

For the dimensionality reduction tasks, we utilize UMAP [44] to project the PI embeddings onto a 2D and a 1D manifold for the Burgers and NSE data, respectively (Fig. 3). For Burgers’ data, PIANO-CL fails to obtain a meaningful representation, highlighting the importance of contrastive learning. PIANO-SM and PIANO-PC can distinguish half of the external force types, but struggle to separate some similar functions, such as −tanh (kx) for k ∈ {1, 2, 3}. Only PIANO achieves remarkable clustering results (Fig. 3(a)). We also calculate four clustering metrics to quantitatively evaluate the clustering performance (Fig. 3(b)), where the clustering results are obtained via K-means [45] on the PI representations. These four metrics are the silhouette coefficient, the adjusted Rand index, normalized mutual information and the Fowlkes–Mallows index, which assess clustering quality by measuring intra-cluster similarity, agreement between partitions, shared information between partitions and the similarity of pairs within clusters, respectively. The larger their values, the better the clustering quality. As shown in Fig. 3(b), PIANO is the only method that achieves a silhouette coefficient greater than 0.65, with the other three metrics exceeding 0.90; thus, PIANO significantly outperforms the other methods. For the NSE data, PIANO is the only method for which the first component of the PI embeddings exhibits a strong correlation with the logarithmic viscosity term (with correlation coefficients greater than 98%), while the other three PIANO variants fail to distinguish viscosity terms ranging from 10−3 to 10−5 (Fig. 3(c)).
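Of the four clustering metrics, the silhouette coefficient is the only one that requires no ground-truth labels; the other three are standard label-based metrics (e.g. as implemented in scikit-learn). A numpy-only sketch of the silhouette computation on a toy two-cluster dataset:

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette coefficient over all samples (numpy-only sketch)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False  # exclude the point itself from its own cluster
        a = D[i, same].mean() if same.any() else 0.0          # intra-cluster distance
        b = min(D[i, labels == c].mean()                       # nearest other cluster
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# two well-separated clusters -> silhouette close to 1
rng = np.random.default_rng(0)
X = np.vstack([np.zeros((5, 2)), 10 + np.zeros((5, 2))]) + 0.01 * rng.standard_normal((10, 2))
labels = np.array([0] * 5 + [1] * 5)
print(silhouette_score(X, labels))
```

A value above 0.65 on real PI embeddings, as PIANO achieves, indicates clusters that are compact relative to their separation.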

Figure 3.

The performances of the learned representation on the unsupervised dimensionality reduction tasks. CL, SM and PC denote contrastive learning, the split-merge trick and the physics-aware cropping strategy, respectively. (a) The dimensionality reduction results of the PI embeddings via UMAP for Burgers’ data. The horizontal and vertical axes represent the two main components of UMAP, and each color represents a different external force in the dataset. Colors numbered from 0 to 13 correspond to the 14 types of external forces: 0, 1, cos (x), sin (x), −tanh (x), tanh (x), cos (2x), sin (2x), tanh (2x), −tanh (2x), cos (3x), sin (3x), tanh (3x) and −tanh (3x). (b) Four metrics evaluating the quality of clustering via the representation vectors given by different methods: the silhouette coefficient, the adjusted Rand index, normalized mutual information and the Fowlkes–Mallows index. For all four metrics, larger values indicate better clustering performance. (c) The dimensionality reduction results of the PI embeddings via UMAP for the NSE data. The horizontal axis and the vertical axis represent the first component of UMAP and the logarithmic viscosity term log ν in the dataset. We also calculate the Spearman and Pearson correlation coefficients between the first component and the logarithmic viscosity term log ν, which measure the rank-order and linear relationships between two variables, respectively.
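The two correlation coefficients in panel (c) can be computed directly; a small numpy sketch, where `comp1` and `log_nu` are synthetic stand-ins for the first UMAP component and the log-viscosities (the linear relation between them is assumed purely for illustration):

```python
import numpy as np

def pearson(a, b):
    # linear correlation between two 1D arrays
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    # rank-order correlation = Pearson correlation of the ranks
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return pearson(rank(a), rank(b))

rng = np.random.default_rng(0)
log_nu = np.linspace(-5, -3, 50)                       # stand-in log-viscosities
comp1 = 2.0 * log_nu + 0.01 * rng.standard_normal(50)  # stand-in UMAP component
print(pearson(comp1, log_nu), spearman(comp1, log_nu))
```

A coefficient above 0.98 on both measures, as reported for PIANO, indicates an essentially monotone, near-linear relationship between the first component and log ν.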

For supervised tasks, we train a linear predictor that maps the learned representation hi to the corresponding PDE parameters θi under the supervision of ground-truth labels (Table 3). For the dataset of Burgers’ equation, which involves 14 types of external force, this training naturally becomes a softmax regression problem. In the case of NSE, where the viscosity term varies continuously, we treat the training as a ridge regression problem. According to the supervised downstream tasks, the PI encoder trained in PIANO exhibits the best ability to predict the PIs in Burgers’ equation and NSE among the compared methods, which aligns with the experimental results in the unsupervised part.
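For the NSE case, the ridge-regression probe has a closed form. A numpy sketch on synthetic embeddings, where the exact linear relation between embeddings `H` and labels `y` is assumed only so that the probe has something to recover:

```python
import numpy as np

def ridge_fit(H, y, lam=1e-3):
    """Closed-form ridge regression mapping PI embeddings H to a scalar
    PDE parameter y (e.g. the log-viscosity): w = (H'H + lam I)^-1 H'y."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ y)

rng = np.random.default_rng(0)
H = rng.standard_normal((200, 8))   # frozen PI embeddings (illustrative)
w_true = rng.standard_normal(8)
y = H @ w_true                      # synthetic labels with a linear relation
w = ridge_fit(H, y)
rel_err = np.linalg.norm(H @ w - y) / np.linalg.norm(y)
print(rel_err)
```

The Burgers case is analogous, with the ridge objective replaced by softmax regression over the 14 force classes; in both cases the PI encoder stays frozen and only the linear head is fitted.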

Table 3.

The performances of the learned representation on the supervised tasks. Accuracy (relative ℓ2 error) of the PI encoder in PIANO and other baselines using linear evaluation on Burgers’ equation (NSE). CL, SM and PC denote contrastive learning, the split-merge trick and the physics-aware cropping strategy, respectively. The best results in each task are highlighted in bold.

Method Burgers’ equation (accuracy, ↑) NSE (ℓ2 error, ↓)
PIANO-CL 0.078 0.161
PIANO-SM 0.988 0.086
PIANO-PC 0.955 0.092
PIANO 0.997 0.033

The results of downstream tasks indicate that PIANO can represent the physical knowledge via a low-dimensional manifold and predict corresponding PDE parameters, thus demonstrating the physical meaning of PIANO.

CONCLUSION

In this paper, we introduce PIANO, an innovative operator learning framework designed to decipher the PI information from PDE series with various physical mechanisms and integrate them into neural operators to conduct forecasting tasks. We propose the physics-aware cropping technique to enhance consistency with physical priors and the split-merge trick to fully utilize the physical information across the spatial domain. According to our numerical results, PIANO successfully overcomes the limitations of current neural operator learning methods, thereby demonstrating its capability to process PDE data from a diverse range of sources and scenarios. Furthermore, the results of a series of downstream tasks verify the physical significance of the extracted PI representation by PIANO.

We propose the following directions for future work to further enhance the capabilities and applications of PIANO.

  • Expanding PIANO to PDE types with varying geometries. In this study, we primarily focused on 1D equations when simulating PDEs with varying boundary conditions. However, it would be valuable to explore the extension of PIANO to more complex PDEs, such as PDEs with 2D and 3D complex geometries.

  • Addressing large-scale challenges using PIANO. In large-scale real-world problems, such as weather forecasting, PIANO can potentially extract meaningful PI representations, such as geographical information of various regions. This capability could enhance the accuracy and reliability of forecasting tasks and other large-scale applications.

  • Integrating additional physical priors into PIANO. Our current study assumes that the underlying PI in the PDE system is time invariant. However, real-world systems often exhibit other physical properties, such as periodicity and spatial invariance. By incorporating these additional physical priors into the contrastive learning stage, PIANO could be applied to a broader range of problems.

METHOD

Architecture of the PI encoder

In this paper, the architecture of the PI encoder consists of six layers: two Fourier layers [9], two convolutional layers and two fully connected layers. The Fourier layers extract the PDE information in frequency space, and the remaining layers downsample the feature map to a low-dimensional vector. We employ the GeLU function as the activation function. It is important to note that we feed only a sub-patch of the PDE field to the PI encoder and that its output is a low-dimensional vector. Furthermore, the amount of information required to infer PIs is significantly less than that needed to forecast the physical fields of a PDE system. Consequently, compared with the main branch of the neural operator, this component is a lightweight network that extracts PIs and enjoys fast inference speed.
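A minimal 1D numpy sketch of this six-layer structure. Average pooling plus a dense map stands in for the strided convolutions, and all widths and mode counts are illustrative assumptions; the actual encoder operates on 2D fields with learned convolutions:

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GeLU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def fourier_layer(u, w_modes):
    """Filter the lowest Fourier modes with learned complex weights,
    in the spirit of the Fourier layers of FNO [9]."""
    U = np.fft.rfft(u)
    U[:len(w_modes)] *= w_modes
    return np.fft.irfft(U, n=len(u))

def pi_encoder(patch, p):
    """Two Fourier layers -> two downsampling layers (pooling + dense,
    standing in for convolutions) -> two fully connected layers."""
    h = gelu(fourier_layer(patch, p["W1"]))
    h = gelu(fourier_layer(h, p["W2"]))
    h = h.reshape(-1, 2).mean(axis=1)   # downsample x2
    h = gelu(p["C1"] @ h)
    h = h.reshape(-1, 2).mean(axis=1)   # downsample x2
    h = gelu(p["C2"] @ h)
    h = gelu(p["F1"] @ h)
    return p["F2"] @ h                  # low-dimensional PI embedding

rng = np.random.default_rng(0)
p = {"W1": rng.standard_normal(8) + 1j * rng.standard_normal(8),
     "W2": rng.standard_normal(8) + 1j * rng.standard_normal(8),
     "C1": rng.standard_normal((32, 32)),
     "C2": rng.standard_normal((16, 16)),
     "F1": rng.standard_normal((16, 16)),
     "F2": rng.standard_normal((8, 16))}
z = pi_encoder(rng.standard_normal(64), p)  # patch of length 64
print(z.shape)  # (8,)
```

The point of the sketch is the shape flow: a patch enters at full resolution and leaves as a short embedding vector, which is why this branch is cheap relative to the neural operator.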

Physics-aware cropping strategy

The cropping of the PDE series can be interpreted as data augmentation in contrastive learning. Unlike previous augmentation methods in vision tasks [46–49], those for PDE representation should comply with the corresponding physical prior. We have previously discussed cases where the PI represents a spatiotemporal invariant. When the PI is only a temporal invariant and exhibits spatial variation, such as an external force, it is necessary to align spatial positions when applying the crop operator; as a result, we extract two patches from the same spatial location of each PDE sample. For boundary invariants, we need to crop the PDE patches near the boundary to encode the boundary conditions. We illustrate all three cropping methods in Fig. 1(b)(iii). Note that we also illustrate another cropping approach, called the global cropping technique, which directly selects PDE patches across the entire spatial field as augmentation samples. This global cropping strategy exploits the time-invariant property of PIs while ignoring the more detailed physical priors of the different types of PI.
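A numpy sketch of the two non-global cropping rules for a 1D trajectory. The array layout, patch size and the exact boundary rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal((20, 128))  # one PDE trajectory: (time, space)

def crop_force_invariant(u, patch=32):
    """Spatially varying but time-invariant PI (e.g. external force):
    two patches from the SAME spatial window at two different times."""
    x = rng.integers(0, u.shape[1] - patch)
    t1, t2 = rng.choice(u.shape[0], size=2, replace=False)
    return u[t1, x:x + patch], u[t2, x:x + patch]

def crop_boundary_invariant(u, patch=32):
    """Boundary-condition PI: patches adjacent to the domain boundary
    (here one from each end; the exact rule is an assumption)."""
    t1, t2 = rng.choice(u.shape[0], size=2, replace=False)
    return u[t1, :patch], u[t2, -patch:]

p1, p2 = crop_force_invariant(u)
b1, b2 = crop_boundary_invariant(u)
print(p1.shape, b1.shape)  # (32,) (32,)
```

Each returned pair is a positive pair for contrastive learning: both patches come from the same trajectory, so they share the same underlying PI.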

Split-merge trick

We split the PDE fields according to the physical prior in the contrastive training stage. Compared to global cropping, such a splitting strategy encodes the physical knowledge into the PI encoder more accurately. In the forecasting stage, we split the initial PDE field into V uniform and disjoint patches, which match the patch size used in the pre-training stage. We feed all patches into the PI encoder to obtain the corresponding representations, and merge them together as the PI vector of the sample (Fig. 1(c)). This merge operation makes full use of the PDE information. In practice, we fix the parameters of the pre-trained PI encoder and only optimize the neural operator in the training stage.
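The split-merge step at forecasting time can be sketched in a few lines of numpy. The toy encoder and the averaging merge rule are assumptions for illustration; the paper's exact merge operator may differ:

```python
import numpy as np

def split_merge_embed(u0, encoder, patch=32):
    """Split the initial field into V disjoint patches matching the
    pre-training patch size, encode each, and merge into one PI vector.
    Averaging is an assumed merge rule for illustration."""
    V = u0.shape[-1] // patch
    patches = u0[: V * patch].reshape(V, patch)     # V uniform, disjoint patches
    embeds = np.stack([encoder(p) for p in patches])
    return embeds.mean(axis=0)                      # merged PI vector

toy_encoder = lambda p: np.array([p.mean(), p.std()])  # stand-in PI encoder
u0 = np.arange(128, dtype=float)                       # initial PDE field
z = split_merge_embed(u0, toy_encoder)
print(z.shape)  # (2,)
```

Because the pre-trained encoder is frozen, this embedding can be computed once per initial condition and reused throughout the rollout.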

MATERIALS AND ADDITIONAL EXPERIMENTS

Detailed experimental settings and additional experiments are reported in the online supplementary material. The source code is publicly available at https://github.com/optray/PIANO.

Supplementary Material

nwad336_Supplemental_File

Contributor Information

Rui Zhang, Academy of Mathematics and Systems Science, Chinese Academy of Sciences (CAS), Beijing 100190, China.

Qi Meng, Microsoft Research, Beijing 100080, China.

Zhi-Ming Ma, Academy of Mathematics and Systems Science, Chinese Academy of Sciences (CAS), Beijing 100190, China.

FUNDING

This work was supported by the National Key R&D Program of China (2020YFA0712700).

AUTHOR CONTRIBUTIONS

R.Z. contributed to the primary idea, software designs, experiments and manuscript writing. Q.M. contributed to the original idea, supervised the whole project and revised the paper. Q.M. and Z.-M.M. led the related project and directed the study.

Conflict of interest statement. None declared.

REFERENCES

  • 1. Qin  Z, Qian  C, Shen  L  et al.  Superscattering of water waves. Natl Sci Rev  2022; 10: nwac255. 10.1093/nsr/nwac255 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Shi  J, Aihara  K, Chen  L. Dynamics-based data science in biology. Natl Sci Rev  2021; 8: nwab029. 10.1093/nsr/nwab029 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Liu  S, Zhang  Y, Malomed  BA  et al.  Experimental realisations of the fractional Schrödinger equation in the temporal domain. Nat Commun  2023; 14: 222. 10.1038/s41467-023-35892-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Marion  M, Temam  R. Navier-Stokes equations: theory and approximation. In: Handbook of Numerical Analysis. Amsterdam: Elsevier, 1998, 503-689. [Google Scholar]
  • 5. Tadmor  E. A review of numerical methods for nonlinear partial differential equations. Bull Amer Math Soc  2012; 49: 507–54. 10.1090/S0273-0979-2012-01379-4 [DOI] [Google Scholar]
  • 6. Córdoba  D, Fontelos  MA, Mancho  AM  et al.  Evidence of singularities for a family of contour dynamics equations. Proc Natl Acad Sci USA  2005; 102: 5949–52. 10.1073/pnas.0501977102 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Shi  J, Aihara  K, Li  T  et al.  Energy landscape decomposition for cell differentiation with proliferation effect. Natl Sci Rev  2022; 9: nwac116. 10.1093/nsr/nwac116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Han  J, Jentzen  A, E  W. Solving high-dimensional partial differential equations using deep learning. Proc Natl Acad Sci USA  2018; 115: 8505–10. 10.1073/pnas.1718942115 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Li  Z, Kovachki  NB, Azizzadenesheli  K  et al.  Fourier neural operator for parametric partial differential equations. The Ninth International Conference on Learning Representations (ICLR 2021), Virtual Event, 3–7 May 2021. [Google Scholar]
  • 10. Brandstetter  J, Worrall  DE, Welling  M. Message passing neural PDE solvers. The Tenth International Conference on Learning Representations (ICLR 2022), Virtual Event, 25–29 April 2022. [Google Scholar]
  • 11. Raissi  M, Perdikaris  P, Karniadakis  GE. Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J Comput Phys  2019; 378: 686–707. 10.1016/j.jcp.2018.10.045 [DOI] [Google Scholar]
  • 12. Raissi  M, Yazdani  A, Karniadakis  GE. Hidden fluid mechanics: learning velocity and pressure fields from flow visualizations. Science  2020; 367: 1026–30. 10.1126/science.aaw4741 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Chen  Z, Liu  Y, Sun  H. Physics-informed learning of governing equations from scarce data. Nat Commun  2021; 12: 6136. 10.1038/s41467-021-26434-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Zhang  R, Hu  P, Meng  Q  et al.  DRVN (deep random vortex network): a new physics-informed machine learning method for simulating and inferring incompressible fluid flows. Phys Fluids  2022; 34: 107112. 10.1063/5.0110342 [DOI] [Google Scholar]
  • 15. Gong  S, Hu  P, Meng  Q  et al.  Deep latent regularity network for modeling stochastic partial differential equations. In: Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023). AAAI Press, 2023, 7740–7. [Google Scholar]
  • 16. Karniadakis  GE, Kevrekidis  IG, Lu  L  et al.  Physics-informed machine learning. Nat Rev Phys  2021; 3: 422–40. 10.1038/s42254-021-00314-5 [DOI] [Google Scholar]
  • 17. Chen  J, Chi  X, E W  et al. Bridging traditional and machine learning-based algorithms for solving PDEs: the random feature method. J Mach Learn  2022; 1: 268–98. 10.4208/jml.220726 [DOI] [Google Scholar]
  • 18. Xie  X, Mohebujjaman  M, Rebholz  LG  et al.  Data-driven filtered reduced order modeling of fluid flows. SIAM J Sci Comput  2018; 40: B834–57. 10.1137/17M1145136 [DOI] [Google Scholar]
  • 19. Chen  W, Wang  Q, Hesthaven  JS  et al.  Physics-informed machine learning for reduced-order modeling of nonlinear problems. J Comput Phys  2021; 446: 110666. 10.1016/j.jcp.2021.110666 [DOI] [Google Scholar]
  • 20. Fresca  S, Dede’  L, Manzoni  A. A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs. J Sci Comput  2021; 87: 1–36. 10.1007/s10915-021-01462-7 [DOI] [Google Scholar]
  • 21. Lu  L, Jin  P, Pang  G  et al.  Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. Nat Mach Intell  2021; 3: 218–29. 10.1038/s42256-021-00302-5 [DOI] [Google Scholar]
  • 22. Kovachki  N, Li  Z, Liu  B  et al.  Neural operator: learning maps between function spaces with applications to PDEs. J Mach Learn Res  2023; 24: 1–97. [Google Scholar]
  • 23. Seidman  JH, Kissas  G, Perdikaris  P  et al.  NOMAD: nonlinear manifold decoders for operator learning. Annual Conference on Neural Information Processing Systems 2022 (NeurIPS 2022), New Orleans, LA, 12-16 December 2022. [Google Scholar]
  • 24. Venturi  S, Casey  T. SVD perspectives for augmenting deeponet flexibility and interpretability. Comput Meth Appl Mech Eng  2023; 403: 115718. 10.1016/j.cma.2022.115718 [DOI] [Google Scholar]
  • 25. Lee  JY, Cho  SW, Hwang  HJ. HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork. The Eleventh International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  • 26. Rahman  MA, Ross  ZE, Azizzadenesheli  K. U-NO: U-shaped neural operators. Trans Mach Learn Res  2023; 2023: 1–17. 10.48550/arXiv.2204.11127 [DOI] [Google Scholar]
  • 27. Tran  A, Mathews  AP, Xie  L  et al.  Factorized fourier neural operators. The Eleventh International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  • 28. Cao  S. Choose a transformer: Fourier or Galerkin. In: Advances in Neural Information Processing Systems, Vol. 34. Red Hook, NY: Curran Associates, 2021, 24924–40. 10.48550/arXiv.2105.14995 [DOI] [Google Scholar]
  • 29. Li  Z, Meidani  K, Farimani  AB. Transformer for partial differential equations’ operator learning. Trans Mach Learn Res  2023; 2023: 1–34. 10.48550/arXiv.2205.13671 [DOI] [Google Scholar]
  • 30. Li  Z, Peng  W, Yuan  Z  et al.  Fourier neural operator approach to large eddy simulation of three-dimensional turbulence. Theor Appl Mech Lett  2022; 12: 100389. 10.1016/j.taml.2022.100389 [DOI] [Google Scholar]
  • 31. Wang  R, Walters  R, Yu  R. Meta-learning dynamics forecasting using task inference. In: Advances in Neural Information Processing Systems, Vol. 35. Red Hook, NY: Curran Associates, 2022, 21640–53. [Google Scholar]
  • 32. Molinaro  R, Yang  Y, Engquist  B  et al.  Neural inverse operators for solving PDE inverse problems. In: Proceedings of the 40th International Conference on Machine Learning, Vol. 2. JMLR, 2023, 25105–39. [Google Scholar]
  • 33. Chen  Y, Dai  X, Liu  M  et al.  Dynamic convolution: attention over convolution kernels. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2020, 11027–36. [Google Scholar]
  • 34. Chen  T, Kornblith  S, Norouzi  M  et al.  A simple framework for contrastive learning of visual representations. In: Proceedings of the 37th International Conference on Machine Learning, Vol. 119. PMLR, 2020, 1597–607. [Google Scholar]
  • 35. Paszke  A, Gross  S, Massa  F  et al.  PyTorch: an imperative style, high-performance deep learning library. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates, 2019, 8026–37. [Google Scholar]
  • 36. Zwicker  D. py-pde: a python package for solving partial differential equations. J Open Source Softw  2020; 5: 2158. 10.21105/joss.02158 [DOI] [Google Scholar]
  • 37. Smith  F. On the high Reynolds number theory of laminar flows. IMA J Appl Math  1982; 28: 207–81. 10.1093/imamat/28.3.207 [DOI] [Google Scholar]
  • 38. Smits  AJ, McKeon  BJ, Marusic  I. High-Reynolds number wall turbulence. Annu Rev Fluid Mech  2011; 43: 353–75. 10.1146/annurev-fluid-122109-160753 [DOI] [Google Scholar]
  • 39. Smaoui  N, El-Kadri  A, Zribi  M. On the control of the 2D Navier–Stokes equations with Kolmogorov forcing. Complexity  2021; 2021: 1–18. 10.1155/2021/3912014 [DOI] [Google Scholar]
  • 40. Ronneberger  O, Fischer  P, Brox  T. U-Net: convolutional networks for biomedical image segmentation. In: Navab  N, Hornegger  J, Wells  W  et al. (eds ) Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015. Cham: Springer, 2015, 234–41. [Google Scholar]
  • 41. Takamoto  M, Praditia  T, Leiteritz  R  et al.  PDEBench: an extensive benchmark for scientific machine learning. In: Advances in Neural Information Processing Systems, Vol. 35. Red Hook, NY: Curran Associates, 2022, 1596–611. [Google Scholar]
  • 42. Huang  X, Shi  W, Meng  Q  et al.  NeuralStagger: accelerating physics-constrained neural PDE solver with spatial-temporal decomposition. In: Proceedings of the 40th International Conference on Machine Learning. JMLR, 2023, 13993–4006. [Google Scholar]
  • 43. Gupta  G, Xiao  X, Bogdan  P. Multiwavelet-based operator learning for differential equations. In: Advances in Neural Information Processing Systems, Vol. 34. Red Hook, NY: Curran Associates, 2021, 24048–62. [Google Scholar]
  • 44. McInnes  L, Healy  J, Saul  N  et al.  UMAP: uniform manifold approximation and projection. J Open Source Softw  2018; 3: 861. 10.21105/joss.00861 [DOI] [Google Scholar]
  • 45. Hartigan  JA, Wong  MA. Algorithm AS 136: a K-means clustering algorithm. J R Stat Soc Ser C-Appl Stat  1979; 28: 100–8. 10.2307/2346830 [DOI] [Google Scholar]
  • 46. Qian  R, Meng  T, Gong  B  et al.  Spatiotemporal contrastive video representation learning. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2021, 6960–70. [Google Scholar]
  • 47. Ma  S, Zeng  Z, McDuff  D  et al. Contrastive learning of global and local video representations. In: Ranzato  M, Beygelzimer  A, Dauphin  YN  et al. (eds ) Advances in Neural Information Processing Systems, Vol. 34. Red Hook, NY: Curran Associates, 2021, 7025–40. [Google Scholar]
  • 48. Pan  T, Song  Y, Yang  T  et al.  VideoMoCo: contrastive video representation learning with temporally adversarial examples. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE Press, 2021, 11200–9. [Google Scholar]
  • 49. Dorkenwald  M, Xiao  F, Brattoli  B  et al.  SCVRL: shuffled contrastive video representation learning. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ: IEEE Press, 2022, 4131–40. [Google Scholar]
