. 2026 Feb 13;42(2):e70147. doi: 10.1002/cnm.70147

Physics‐Informed Emulation of Systemic Circulation for Fast Parameter Estimation and Uncertainty Quantification

William Ryan 1, Alyssa Taylor‐LaPole 2, Mette Olufsen 3, Dirk Husmeier 1, Vladislav Vyshemirskiy 1
PMCID: PMC12905477  PMID: 41689204

ABSTRACT

There are many computational models set up to predict blood flow and pressure in vascular networks. Methods for a single forward solution of such models are well established but become problematic in clinical applications, where model calibration and patient‐specific parameter estimation call for repeated forward simulations of the model requiring substantial computational costs. A potential workaround is emulation, which approximates the original mathematical model by a statistical or machine learning surrogate model. Our methodological framework is based on physics‐informed neural networks, with a particular focus on patient‐specific model calibration. Once fully trained, our machine learning model predicts flow and pressure waveforms in a fraction of the time required by the numerical solver, enabling fast parameter inference and inverse uncertainty quantification. The proposed framework is applied to clinical data from four patients diagnosed with a Double Outlet Right Ventricle (DORV), a congenital heart defect where both the aorta and main pulmonary artery connect to the right ventricle, potentially leading to insufficient oxygen delivery to the body and hence requiring careful blood flow monitoring. We assess the performance of our method in a comparative evaluation study that includes several alternative state‐of‐the‐art machine learning methods, and we quantify the improvement achieved in terms of accuracy and efficiency gains.

Keywords: computational fluid dynamics, FALD, Fontan, perfusion, wall shear stress


This work presents an innovative approach to modelling 1D fluid dynamics in complex networks using physics‐informed neural networks as surrogate models. By integrating physics‐based constraints with data‐driven learning, we develop an efficient and generalisable framework for uncertainty quantification and parameter estimation in real‐world applications.


1. Introduction

One‐dimensional (1D) computational fluid dynamics (CFD) models have long been used for simulating haemodynamics in vascular networks including studies by Reymond et al. [1], Mariscal‐Harana et al. [2], Johnson et al. [3], and Olufsen et al. [4]. They represent key haemodynamic properties such as pressure, flow rate, wave propagation, and shear stress, making them relevant to clinical decision‐making. However, while the computational efficiency gain over 3D models enables forward simulations for fixed model parameters in real time [5], 1D CFD models are computationally too complex for real‐time model calibration and parameter estimation (the so‐called inverse problem). This would require a substantial number of forward simulations with different model parameters as part of an iterative optimisation or sampling scheme. Such patient‐specific model calibration and physical parameter inference are of critical importance for clinical applications, for instance, as part of a clinical decision support system based on proper inverse uncertainty quantification (IUQ).

Surrogate models can be introduced to approximate the outputs of 1D fluid dynamics simulations, resulting in a significant reduction in computation time. After incurring an initial time cost in the form of a training period, such models can make rapid predictions. This allows for extensive simulations in terms of performing IUQ for potential surgical outcomes, such as the construction of a Fontan circuit in single‐ventricle infants [6, 7]. Applications of data‐driven surrogate models to CFD (in general) and cardiovascular modelling (in particular) have been extensively researched, for example, using polynomial chaos expansions (PCE) [8, 9], Gaussian Process (GP) regression [10], and deep learning [11, 12]. A disadvantage of these purely data‐driven surrogate models is that they may not fully capture underlying dynamics. As a result, their performance may suffer when extrapolating to conditions outside the training domain. This shortcoming can be addressed by explicitly integrating physics‐based information into the surrogate modelling process.

One of the first approaches to modelling fluid dynamics using physics‐informed machine learning was introduced by Raissi et al. [13], who tackled 2D and 3D Navier–Stokes problems. In that study, the networks were trained using (synthetic) observational data and the governing equations, with Reynolds and Péclet numbers inferred as part of the training process. Their work included an application to cardiovascular fluid dynamics, namely a 2D stenosis toy problem. In a similar vein, Garay et al. [14] used physics‐informed neural networks to infer Windkessel parameters, again as part of the network optimisation. Their study used 2D MRI flow and pressure data observed at vessel midpoints over one period to inform these parameters, which defined the model's boundary condition. Of particular interest is the work carried out by Kissas et al. [15], where PINNs were applied to a 1D model of blood flow in both synthetic and patient‐specific vessels. The authors utilised observational flow and cross‐sectional area data at vessel midpoints to train a physics‐informed neural network, which could then be used to predict blood pressure, flow, and vessel cross‐sectional area at any time point or location in their respective domains, with a particular focus on non‐invasive aortic pressure predictions. The applications ranged from a three‐vessel system with one bifurcation and synthetic data to a seven‐vessel network with three bifurcations applied to real patient data. Windkessel parameters were inferred post‐training using predictions of flow and pressure at the ends of the outflow vessels. Yang et al. [16] introduced a general class of Bayesian physics‐informed neural networks (b‐PINNs) designed to handle noisy observational data. By modelling uncertainty in the network weights, their approach yielded predictive intervals rather than a single deterministic solution.
The authors employed both Markov Chain Monte Carlo (MCMC) and variational inference (VI) methods to infer the network parameters. The methodology was tested on a variety of 1D and 2D PDE toy problems. Sun and Wang [17] approached the problem of modelling 2D fluid flow from noisy or sparse data by using b‐PINNs and a slightly modified VI method. Their application included a toy problem of an idealised stenosis. They demonstrated that hybrid data‐driven and physics‐informed models (i.e., models that utilise both observational (simulation) data and PDE residuals) learn solutions faster and more accurately than purely PDE‐residual‐based learning methods. More recently, Huang et al. [18] tackled a one‐fluid cavitation model problem using a joint data‐driven and physics‐informed approach, finding that combining the two sources of information led to more accurate results than previously achieved. Note that in this case, data‐driven means using simulation data obtained by numerically solving the equations. Investigations into fully data‐free (i.e., fully physics‐informed) modelling have also been carried out. Sun et al. [19] introduced a surrogate model for steady‐state 2D flow, which did not require data to fit, as long as boundary conditions were imposed correctly. The work focused on a toy problem of idealised stenotic flow. This approach included an additional input parameter ν in the model, alongside the spatial coordinates, corresponding to the fluid viscosity.

Isaev et al. [20] modelled flow in a steady‐state system of four vessels resulting from the Fontan procedure in a data‐driven manner to investigate how perfusion changed with changes in geometry. Their model accepted geometry parameters (vessel radii and bifurcation angles) and boundary conditions (pressure values at the inlet and outlets) as inputs and outputted flow predictions at single points in each of the four vessels. Physics‐informed machine learning remains a major focus in both the machine learning literature (sampling strategies [21, 22], avoiding failure modes [23], architectures [24]) and applied engineering problems [25, 26, 27].

When making a model patient‐specific for medical applications, each parameter adaptation calls for another numerical integration of the PDEs. The accumulated time can reach days or weeks. However, since predictions made using a trained machine‐learning model can be done in a fraction of the time required by a standard numerical solver, the overall computational costs of Bayesian inference can be reduced by several orders of magnitude to time frames acceptable for clinical decision making. The focus of the present work is on implementing physics‐informed machine learning for emulation in the context of Bayesian inference and inverse uncertainty quantification to understand aortic remodelling in infants with a Fontan circulation. The novelty lies in a substantial upscaling of the model complexity to a 1D fluid dynamics model of a large haemodynamic network of up to 17 interconnected blood vessels. In addition to predicting flow and pressure as a function of space and time, our model accepts biophysical parameters as inputs. These parameters define the elasticity of the vessel walls in the large and small vessels, as well as the geometry of the network of small vessels. Given noisy, clinical flow waveforms extracted from 4D‐magnetic resonance images (MRIs), our aim is to carry out Bayesian inference and inverse uncertainty quantification of the physical parameters using posterior sampling. To this end, we need model outputs for repeatedly varying parameters as part of an established sampling routine, such as Hamiltonian Monte Carlo [28] (HMC).

This study is organised as follows: Section 2 introduces the multiscale 1D fluid dynamics model and Section 3 the physics‐informed neural networks used as surrogate models. The implementation workflow is presented in Figure 1. Numerical results are reported in Section 4, demonstrating predictive performance compared to proven surrogate models and the ability to perform uncertainty quantification in two arterial network models: a proof‐of‐concept application in a smaller network consisting of 9 vessels (the aorta and those immediately branching off), as well as a more realistic model composed of 17 vessels (extension of the 9 vessel model including head and neck vessels) with accompanying real patient data. Finally, findings are summarised in Section 5.

FIGURE 1.

Overview of the modelling and inference pipeline. The process begins with the 1D fluid dynamics model (top‐left panel) and patient‐specific geometry extracted from MRA data. A PINN is then trained as a surrogate model using both clinical inflow boundary conditions from 4D MRI and embedded physical constraints such as conservation laws and boundary conditions (bottom‐left panel). Once trained, the PINN is used to perform fast forward predictions of flow and pressure at any coordinate in space, time and physical parameter space (top‐right panel) and inverse inference of biophysical parameters, enabling uncertainty quantification and non‐invasive pressure predictions (bottom‐right panel). The top‐right panel displays flow and pressure fields predicted by the trained PINN model alongside fields obtained via a high‐fidelity PDE solver in two different vessels.

2. Fluid Dynamics Model and Data

2.1. Vascular Network and Haemodynamic Data

The aorta, head, and neck vessel geometry is taken from Taylor‐LaPole et al. [29] and Paun et al. [30]. Vascular dimensions are determined by first segmenting the vasculature from patient MRA images, creating a 3D‐rendered volume of the vessels. Then, the centerlines are generated throughout each vessel. From the centerlines, a directed labelled tree is defined including vessel connectivity, length, and radii.

Flow and cuff pressure data are collected from four double outlet right ventricle (DORV) patients. These patients are born with both the pulmonary artery and the aorta connected to the functioning right ventricle. To prevent the mixing of systemic (oxygenated) and pulmonary (deoxygenated) blood flow, the DORV patients included in this study have previously undergone surgeries to obtain a single‐ventricle Fontan circulation, that is, the pulmonary arteries are detached from the right ventricle and connected to the inferior and superior vena cava. The flow waveforms are extracted from 4D MRIs, and these 4D‐MRI images are then registered to the MRA images used to segment the networks, as described in [29, 31, 32]. The time‐resolved 4D flow MRI data in the ascending aorta is used as the inflow boundary condition for the model. Flow data is also determined in the innominate artery, left common carotid, left subclavian artery, and descending aorta (Vessels 3, 5, 6 and 7 in Figure 2b). Finally, diastolic and systolic pressure measurements were obtained via cuff measurements in the supine position with a sphygmomanometer.

FIGURE 2.

Diagrams of the two models considered in this work. (a) Layout of the arteries in the 9‐vessel model. Structured trees, which describe the outflow boundary condition, are attached to the terminal vessels. (b) Diagram depicting the 17‐vessel network. The numbers corresponding to each vessel are coloured by group, where each group is assigned its own vessel stiffness value. (c) The terminal vessels end in a structured tree of small vessels that determines the outflow boundary condition. The red lines intersecting Vessels 3, 5, 6 and 7 in both (a) and (b) indicate the approximate locations at which flow data were available for inferring physical parameters.

2.2. One‐Dimensional (1D) Arterial Model With Structured Tree Boundary Conditions

The 1D model integrates the axisymmetric Navier–Stokes equations for an incompressible fluid to predict periodic blood flow, pressure, and cross‐sectional area in each vessel as functions of time. Each vessel is assumed to be cylindrical with a thin, impermeable wall, and the flow is assumed to be Newtonian, incompressible, and axisymmetric.

The computational domain is represented by a network of interconnected vessels, and has two parts: large vessels and small vessels. The connectivity and geometry (length and radius) of the large vessels can be extracted from images, while a self‐similar structured tree represents the small vessel network. In the large vessels, blood flow and pressure are predicted by solving the non‐linear 1D equations, whereas in the small vessels, we solve a linear 1D model. From the perspective of the large vessels, haemodynamic predictions in the small vessels form the boundary conditions. We prescribe a flow waveform at the inlet to the network, and assume that flow is conserved and pressure is continuous at each junction (both in the large and small networks). The non‐linear system of PDEs in the large vessels is hyperbolic; hence, for each vessel, we require a boundary condition at each end. The equations are non‐dimensionalised and solved using an in‐house, two‐step Lax–Wendroff solver [33]. This algorithm is suitable for studying blood flow in arteries, where we do not anticipate shocks under physiological flow conditions.

2.2.1. Networks—Computational Domain

2.2.1.1. Large Vessel Networks

The present study examines haemodynamics in two vascular networks. Network 1 (Figure 2a) includes 9 large vessels: the ascending aorta, the aortic arch, the descending aorta, and vessels branching into the neck. The length $l$ (cm) and unstressed vessel radius $r_0$ (cm) of each vessel are obtained from the skeletonised network data. It is important to note that this smaller network includes only those vessels that are visible and reliably segmented in the medical images. Network 2 (Figure 2b) extends Network 1 to incorporate vessels outside of the imaged region. This larger network includes 17 large vessels, with vessels extending into the arms and brain. Note that all junctions, in both the large and small networks, are bifurcations, that is, a given vessel always has exactly two daughter vessels. Detailed descriptions of these networks are provided in the modelling study by Taylor‐LaPole et al. [29], which we follow closely, apart from applying a stronger tapering effect to the descending aorta.

2.2.1.2. Small Vessel Networks

The small vessel networks (Figure 2c), attached at the terminal vessels of the large networks (Figure 2a,b), consist of self‐similar bifurcating trees in which the unstressed radii $r_0$ (cm) of the daughter vessels are scaled relative to the parent vessel as

$$r_{0,d_1} = \alpha\, r_{0,p}, \qquad r_{0,d_2} = \beta\, r_{0,p}. \quad (1)$$

The scaling factors are chosen such that the vessel labelled daughter 1 ($d_1$) is always bigger than daughter 2 ($d_2$), and the radii of the daughter vessels are smaller than those of the parent vessel ($0 < \beta < \alpha < 1$). The structured tree continues to bifurcate until the radii fall below a set minimum radius $r_{\min}$. Self‐similarity is also reflected in the length of the vessel, which is set for each vessel as $L = l_{rr}\, r_0$, where $l_{rr}$ is the length‐to‐radius ratio.
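The recursion in Equation (1) can be sketched in a few lines; the parameter values in the test are placeholders, not patient‐specific estimates.

```python
def structured_tree(r_root, alpha, beta, lrr, r_min):
    """Enumerate (radius, length) pairs of a self-similar structured tree.

    Each vessel of radius r spawns daughters with radii alpha*r and beta*r
    (Equation 1); branching stops once a radius falls below r_min.
    Vessel length follows the length-to-radius ratio: L = lrr * r.
    """
    vessels = []
    stack = [r_root]
    while stack:
        r = stack.pop()
        if r < r_min:
            continue  # terminate this branch of the tree
        vessels.append((r, lrr * r))
        stack.append(alpha * r)  # daughter 1 (larger, since beta < alpha)
        stack.append(beta * r)   # daughter 2 (smaller)
    return vessels
```

Because every generation shrinks the radius by at least a factor alpha < 1, the enumeration always terminates.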

2.2.2. Large Vessel Equations

The system of equations solved in each large vessel relates volumetric flow $q(x,t)$ (mL/s), pressure $p(x,t)$ (mmHg), and cross‐sectional area $A(x,t) = \pi R(x,t)^2$ (cm²). For each cardiac cycle, $0 < t < T$ and $x \in [0, L]$, where $L$ is the length of the vessel. The system of equations satisfying conservation of mass and balance of momentum is given by

$$\frac{\partial A}{\partial t} + \frac{\partial q}{\partial x} = 0, \quad (2)$$
$$\frac{\partial q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{q^2}{A}\right) + \frac{A}{\rho}\frac{\partial p}{\partial x} = -\frac{2 \pi \nu R}{\delta}\frac{q}{A}, \quad (3)$$

where $\rho = 1.057$ (g/cm³) is the density, $\mu = 0.032$ (g/cm/s) is the viscosity, and $\nu = \mu/\rho$ (cm²/s) denotes the kinematic viscosity. In the present study, we kept these quantities constant, with values obtained from the literature [34], as they were not expected to change much from patient to patient. The equations are derived by assuming a Stokes boundary layer with thickness $\delta = \sqrt{\nu T / 2\pi}$.

To close the system of equations, a linear stress–strain model relating pressure to area is imposed.

px,tp0=43Ehr0AA01, (4)
Ehr0=fr0=f1expf2r0+f3, (5)

where $p_0$ (mmHg) is the unstressed pressure, and $r_0$ (cm) and $A_0 = \pi r_0^2$ (cm²) are the corresponding unstressed radius and cross‐sectional area, respectively; the latter are obtained from the segmentation of the vascular networks. To account for increased stiffening as the vessel radius decreases, Young's modulus $E$ (g/cm/s²) and wall thickness $h$ (cm) are related to the reference radius $r_0$ through the exponentially tapering function $f(r_0)$, parametrised by the three parameters $f_1$, $f_2$ and $f_3$.
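The wall model in Equations (4) and (5) can be evaluated in a few lines. The $f_1$, $f_2$, $f_3$ values used in the test are order‐of‐magnitude values from the structured‐tree literature, included purely for illustration and not taken from this paper.

```python
import math

def stiffness(r0, f1, f2, f3):
    """Exponentially tapering stiffness Eh/r0 = f1*exp(f2*r0) + f3 (Equation 5)."""
    return f1 * math.exp(f2 * r0) + f3

def pressure(A, r0, p0, f1, f2, f3):
    """Linear stress-strain wall model (Equation 4):
    p - p0 = (4/3) * (Eh/r0) * (sqrt(A/A0) - 1)."""
    A0 = math.pi * r0 ** 2
    return p0 + (4.0 / 3.0) * stiffness(r0, f1, f2, f3) * (math.sqrt(A / A0) - 1.0)
```

At the unstressed area A = A0 the pressure equals p0, and with f2 < 0 the stiffness grows as the radius shrinks, reproducing the stiffening of small vessels.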

2.2.3. Boundary Conditions

From the perspective of the large networks (shown in Figure 2a,b), three types of boundary conditions are imposed: at the inlet, at each junction, and at the terminal vessels.

2.2.3.1. Inflow

At the network inlet, we impose a smooth periodic flow waveform $q(0,t)$ (mL/s), obtained by fitting a smooth curve to the flow measured in the ascending aorta and discretising it at the number of time points required by the numerical solver.

2.2.3.2. Junctions (Bifurcations)

At the junctions, we impose conservation of flow and continuity of pressure, that is, the flow and pressure at the end of each parent vessel (subscript p) are related to the values at the inlet of the two daughter vessels (subscript d1 and d2). These conditions give

$$q_p(L, t) = q_{d_1}(0, t) + q_{d_2}(0, t), \quad (6)$$
$$p_p(L, t) = p_{d_1}(0, t) = p_{d_2}(0, t). \quad (7)$$
2.2.3.3. Outflow Conditions

At the end of the terminal vessels (within the tree of large vessels), we impose an outlet condition via an impedance obtained by solving a system of linearised equations in the structured tree model described in detail in [4].

Within each vessel, the linearised equations can be solved analytically in the frequency domain, generating expressions for the impedance $Z(x, \omega) = P(x, \omega) / Q(x, \omega)$. To generate a solution in the tree, we determine the impedance at the beginning of each vessel ($x = 0$) as a function of the impedance at the end of the vessel ($x = L$) as

$$Z(0, \omega) = \frac{i g(\omega)^{-1} \sin(\omega L / c) + Z(L, \omega) \cos(\omega L / c)}{\cos(\omega L / c) + i g(\omega) Z(L, \omega) \sin(\omega L / c)}, \quad \omega \neq 0, \quad (8)$$
$$Z(0, 0) = \frac{8 \mu l_{rr}}{\pi r_0^3} + Z(L, 0), \quad (9)$$

where $g = \sqrt{C A_0 (1 - F_J) / \rho}$ (n.d.) and the wave speed $c = \sqrt{A_0 (1 - F_J) / (\rho C)}$ (n.d.). In these equations $A_0$ is the unstressed cross‐sectional area, $C = \partial A / \partial p = 3 A_0 r_0 / (2 E h)$ is the vessel compliance, and $F_J = 2 J_1(w_0) / \left( w_0 J_0(w_0) \right)$, with $w_0 = \sqrt{i^3 r_0^2 \omega / \nu}$ the Womersley number, is a ratio of the Bessel functions $J_i$, $i = 0, 1$, arising from the solution of the linearised PDE [4]. Similar to the large vessel network, the equation in the interior is combined with junction conditions, which in the frequency domain are given by

$$\frac{1}{Z_p(L, \omega)} = \frac{1}{Z_{d_1}(0, \omega)} + \frac{1}{Z_{d_2}(0, \omega)}. \quad (10)$$

Using a recursive algorithm (described in detail in [4]), the solution is propagated to the root of the structured tree, where the impedance defines the relationship between flow and pressure as follows:

$$p(L, t) = \int_{t - T}^{t} q(L, t - \tau)\, z(L, \tau)\, d\tau, \quad (11)$$

where $z(x, t)$ is obtained by an inverse periodic Fourier transform of $Z(x, \omega)$. Alternatively, by the convolution theorem,

pL,t=qL,tzLτ, (12)

where $\mathcal{F}$ denotes the Fourier transform.
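The convolution‐theorem relation in Equation (12) can be sketched with a naive discrete Fourier transform (a real implementation would use an FFT); treating the integral as a Riemann sum with weight dt is an assumption of this sketch rather than a detail stated above.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(m^2); fine for small m)."""
    m = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / m) for k in range(m))
            for j in range(m)]

def idft(X):
    """Inverse DFT; returns the real part (inputs here are real signals)."""
    m = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / m) for j in range(m)).real / m
            for k in range(m)]

def outflow_pressure(q, z, dt):
    """Terminal pressure via the convolution theorem (Equation 12):
    a periodic convolution of the flow waveform with the impedance kernel,
    with dt approximating the convolution integral as a Riemann sum."""
    Q, Z = dft(q), dft(z)
    return [p * dt for p in idft([a * b for a, b in zip(Q, Z)])]
```

The spectral product is exactly equivalent to the periodic (circular) convolution sum, which the test checks directly.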

2.2.4. Model Parameters

The fluid dynamics model has three types of parameters specifying: the computational domains (the large and small vessel networks), fluid properties, and physiological properties. Nominal values and a priori bounds for all parameters, as well as a subset of identifiable parameters via a sensitivity analysis, are determined from data and results reported in previous studies [29, 30]. We examine haemodynamics in two networks (Figure 2a,b) with N=9 and N=17 large vessels, respectively. For the N=9 vessel network, the vessel parameters are the same for all vessels, while for the N=17 network, we include four groups denoting vessels with varying properties (see Figure 2b for the groupings). Below we summarise all model parameters noting identifiable parameters for each type.

  • Large vessel networks: Large vessel parameters include estimates of the vessel radii $r_0^i$ and lengths $l^i$, $i = 1, \ldots, N$, and the vessel connectivity. For both networks, estimates for these parameters are obtained from segmentation of the MRA images.

  • Small vessel networks: Terminal vessels of the large vessel networks are connected to structured trees characterised by four parameters: $\alpha$, $\beta$, $l_{rr}$, and $r_{\min}$. The parameter $r_{\min}$, specifying the size of the smallest vessels, is kept fixed: $r_{\min} = 0.001$ cm for the structured tree attached to the descending aorta and $r_{\min} = 0.0001$ cm for the trees attached to the terminal vessels in the head and neck. The parameters $\alpha$, $\beta$ and $l_{rr}$ are correlated. As in the original study [29], $\alpha$ is inferred, while $\beta$ and $l_{rr}$ are kept fixed at their nominal values.

  • Fluid properties include blood density ρ and viscosity μ. Parameters associated with these quantities are fixed for all simulations (see above), as they are easy to measure in vivo and have been extensively reported in haemodynamics studies.

  • Biophysical parameters include parameters denoting vessel compliance, that is, those required to specify $Eh/r_0$: $(f_{i1}, f_{i2}, f_{i3})$, $i = s, l$. These parameters appear in both the large ($i = l$) and small ($i = s$) networks and are correlated between the two. For both networks we fix $f_{i1}$ for all large vessels and structured trees, infer one value of $f_{i2}$, $i = l, s$, for the large and small vessels, respectively, and infer group‐specific values $f_{i3,j}$ (one value, $j = 1$, for the 9‐vessel network and four values, $j = 1, \ldots, 4$, for the 17‐vessel network).

In summary, for the 9‐vessel network (Figure 2a), we infer 5 parameters

$$\theta = \left( f_{l2}, f_{s2}, f_{l3}, f_{s3}, \alpha \right), \quad (13)$$

and for the 17‐vessel network (Figure 2b), we infer 10 parameters

$$\theta = \left( f_{l2}, f_{s2}, f_{l3,j}, f_{s3,j}, \alpha \right), \quad j = 1, \ldots, 4. \quad (14)$$

3. Methods

3.1. Training the Physics‐Informed Emulator

An independent, fully connected feed‐forward neural network, also known as a multilayer perceptron (MLP), is assigned separately to each vessel in the system. A comprehensive comparison of network architectures for blood flow modelling was carried out by Moser et al. [35], who found that fully connected feedforward networks provide a favourable balance between prediction accuracy and training time. The networks accept as inputs the axial spatial location $x \in [0, L]$, where $L$ (cm) is the vessel‐specific length, the time point $t$, and the biophysical parameters $\theta$, and output flow and pressure values

$$u_{NN}: (x, t, \theta) \mapsto (\hat{q}, \hat{p}). \quad (15)$$

Note that the domain of the spatial input x is different for each vessel, reflecting the vessel‐specific lengths. The parameters belonging to the vector of biophysical parameters θ are the quantities to be inferred and remain a modelling choice.

A neural network with L layers is defined as a composition of functions

$$u_{NN}(x, t, \theta) = \left( u_L \circ u_{L-1} \circ \cdots \circ u_1 \right)(x, t, \theta). \quad (16)$$

Each layer $u_\ell: \mathbb{R}^{n_{\ell - 1}} \to \mathbb{R}^{n_\ell}$ is given by

$$u_\ell(z) = \sigma\left( W_\ell z + b_\ell \right), \quad (17)$$

where $W_\ell \in \mathbb{R}^{n_\ell \times n_{\ell - 1}}$ is the weight matrix, $b_\ell \in \mathbb{R}^{n_\ell}$ is the bias vector, and $\sigma$ is a (typically non‐linear) activation function applied elementwise. The complete set of parameters is $\omega = \left\{ W_{\ell, m}, b_{\ell, m} \mid \ell = 1, \ldots, L;\ m = 1, \ldots, M \right\}$, where $M$ denotes the total number of vessels in the system. Note that the cross‐sectional vessel area $A$ is an additional quantity present in the equations of the 1D model and can easily be found by rearranging Equation (4), given a predicted pressure value and the vessel stiffness parameters:

$$\hat{A}(x, t, \theta) = A_0 \left( \frac{3 r_0}{4 E h} \left( \hat{p}(x, t, \theta) - p_0 \right) + 1 \right)^2. \quad (18)$$

The 1D model outlined in Section 2.2 imposes several constraints that the flow and pressure solutions must adhere to. These form a collection of loss functions that must be satisfied for the neural network to provide the correct solution to the problem.
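Equation (18) amounts to inverting the wall model of Equation (4) for a predicted pressure; a minimal sketch, with `stiff` standing for the combination $Eh/r_0$:

```python
import math

def area_from_pressure(p_hat, r0, p0, stiff):
    """Recover cross-sectional area from a predicted pressure (Equation 18).

    Inverts p - p0 = (4/3)*stiff*(sqrt(A/A0) - 1), where stiff = Eh/r0,
    giving A = A0 * ((3/(4*stiff))*(p - p0) + 1)^2.
    """
    A0 = math.pi * r0 ** 2
    return A0 * (0.75 / stiff * (p_hat - p0) + 1.0) ** 2
```

The test below round-trips a pressure computed from the forward wall model back to the original area.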

3.2. Exact Imposition of Inlet Boundary Condition and Periodicity

Solving the patient‐specific fluid dynamics problem requires a set of geometry parameters (lengths and radii of arteries) and an inflow profile. The inflow profile imposed at the inlet of the first vessel in the system remains fixed and periodic, with period $T$ (s), irrespective of the additional physiological input parameters. Furthermore, since the system is assumed periodic, flow and pressure should be equal at times $t$ and $t + kT$ for all $k \in \mathbb{Z}$. Both of these conditions are imposed exactly, as described in the subsequent sections, rather than being learnt through loss functions.

3.2.1. Inflow

Embedding the inflow boundary condition into the solution requires representing flow as a continuous, at least once‐differentiable function of time. When numerically solving the PDEs, the inflow exists as a vector of flow data on a sufficiently fine grid to ensure numerically stable solutions. Incorporating a deterministic inflow into the PINN solution requires a continuous inflow profile. Gaussian Process regression [36] is an excellent choice here, for the following reasons: GPs are fast to train and evaluate given the number of points expected in this application (<100); they satisfy the differentiability requirement given an appropriate choice of covariance function (kernel); and periodicity can be enforced through a periodic kernel. In our study, a Matérn 5/2 kernel wrapped inside a periodic kernel [37] was chosen for modelling the inflow. This kernel is twice differentiable, which appears more realistic than the more common infinitely differentiable (and hence potentially over‐smooth) Squared Exponential (SE) kernel. Having fitted the GP and optimised the hyperparameters by maximising the marginal likelihood, the posterior predictive mean was used as the surrogate inlet boundary condition.
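One standard way to realise a Matérn 5/2 kernel "wrapped inside" a periodic kernel is to apply the Matérn form to the chordal distance of a periodic warping of time; the construction in reference [37] may be parametrised differently, so the following is a hedged sketch with placeholder hyperparameters.

```python
import math

def periodic_matern52(t1, t2, period, lengthscale):
    """Matern 5/2 kernel on a periodic warping of time.

    Inputs are mapped onto a circle traversed once per period, so that
    k(t1, t2 + k*period) = k(t1, t2) for any integer k, while the Matern 5/2
    form keeps the resulting GP twice differentiable.
    """
    # Chordal distance between the two points on the unit circle.
    d = 2.0 * abs(math.sin(math.pi * (t1 - t2) / period))
    a = math.sqrt(5.0) * d / lengthscale
    return (1.0 + a + a * a / 3.0) * math.exp(-a)
```

By construction the kernel attains its maximum of 1 at zero lag and is exactly periodic in the lag.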

Let $q_{in}(t)$ be the inflow profile over one period and $\hat{q}(x, t, \theta)$ the neural network's flow prediction for the first vessel in the system. The corresponding PINN can be augmented to satisfy the inflow profile exactly at the location $x = 0$ by mixing the two outputs [38]:

$$\tilde{q}(x, t, \theta) = \cos\left( \frac{\pi x}{2 L} \right) q_{in}(t) + \sin\left( \frac{\pi x}{2 L} \right) \hat{q}(x, t, \theta), \quad (19)$$

where $L$ (cm) denotes the length of the vessel and $x \in [0, L]$.
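The mixing in Equation (19) can be sketched directly; `q_nn` below is an arbitrary stand‐in for the raw network output, used only to illustrate the construction.

```python
import math

def blend_inflow(x, t, L, q_in, q_nn):
    """Exactly impose the inlet flow (Equation 19).

    At x = 0 the output equals the measured inflow q_in(t); the network
    contribution q_nn grows smoothly towards the distal end of the vessel.
    """
    w = math.pi * x / (2.0 * L)
    return math.cos(w) * q_in(t) + math.sin(w) * q_nn(x, t)
```

Whatever the network predicts, the boundary condition at the inlet is satisfied by construction rather than being learnt through a loss term.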

3.2.2. Periodicity

Satisfying one‐dimensional periodicity exactly requires imposing the condition

$$u_{NN}(x, t, \theta) = u_{NN}(x, t + kT, \theta), \quad k \in \mathbb{Z}, \quad (20)$$

on the neural network, where T is the period of the system. A change of variable is introduced in the first layer of the neural network for the variable t, which naturally satisfies the constraint in Equation (20)

$$v: t \mapsto \left( \cos\frac{2 \pi t}{T},\ \sin\frac{2 \pi t}{T} \right). \quad (21)$$

The input‐augmented neural network is then represented as follows:

$$\tilde{u}_{NN}: (x, v(t), \theta) \mapsto (\tilde{q}, \tilde{p}), \quad (22)$$

where $\tilde{q}$ is given by Equation (19) in the inflow vessel; otherwise flow and pressure are the raw neural network outputs, $\tilde{q} = \hat{q}$, $\tilde{p} = \hat{p}$.
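The change of variable in Equation (21) is a simple periodic feature map:

```python
import math

def periodic_features(t, period):
    """Map time onto the unit circle (Equation 21).

    Any network applied to these two features is exactly T-periodic in t,
    so the constraint in Equation (20) holds by construction.
    """
    angle = 2.0 * math.pi * t / period
    return (math.cos(angle), math.sin(angle))
```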

3.3. Loss Functions

3.3.1. Bifurcations

The outputs of the neural network representing flow and pressure in each vessel in the artery tree must satisfy the same continuity conditions as presented in Section 2.2. This is incorporated into the training process via a soft constraint, using mean‐squared error loss functions. Given a bifurcating parent vessel of length $L_p$ with two daughter vessels, for a time point $t$ and vector of biophysical parameters $\theta$, the residual between flow predictions is defined as

$$r_q(t, \theta) \triangleq \tilde{q}_p(L_p, t, \theta) - \tilde{q}_{d_1}(0, t, \theta) - \tilde{q}_{d_2}(0, t, \theta). \quad (23)$$

The pressure continuity residuals are split into two equations:

$$r_{p_1}(t, \theta) \triangleq \tilde{p}_p(L_p, t, \theta) - \tilde{p}_{d_1}(0, t, \theta), \quad (24)$$
$$r_{p_2}(t, \theta) \triangleq \tilde{p}_p(L_p, t, \theta) - \tilde{p}_{d_2}(0, t, \theta). \quad (25)$$

Given an n‐length vector of training time points and biophysical parameters, the bifurcation loss function is then

$$\mathcal{L}_{bif} \triangleq \frac{1}{n} \sum_{i=1}^{n} \left[ r_q(t_i, \theta_i)^2 + r_{p_1}(t_i, \theta_i)^2 + r_{p_2}(t_i, \theta_i)^2 \right]. \quad (26)$$

3.3.2. PDE Residuals

Satisfying the PDEs represents the bulk of the networks' training. Given Equations (2) (conservation of mass) and (3) (conservation of momentum), residual functions are introduced as functions of space, time, and biophysical parameters

$$r_{mass}(x, t, \theta) \triangleq \frac{\partial \tilde{A}}{\partial t} + \frac{\partial \tilde{q}}{\partial x}, \quad (27)$$
$$r_{mom}(x, t, \theta) \triangleq \frac{\partial \tilde{q}}{\partial t} + \frac{\partial}{\partial x}\left( \frac{\tilde{q}^2}{\tilde{A}} \right) + \frac{\tilde{A}}{\rho} \frac{\partial \tilde{p}}{\partial x} + \frac{2 \pi \nu \tilde{R}}{\delta} \frac{\tilde{q}}{\tilde{A}}, \quad (28)$$

where $\tilde{q}$, $\tilde{p}$ and $\tilde{A}$ denote the predicted flow, pressure and cross‐sectional area at $(x, t, \theta)$.

The loss function that captures the errors in the PDEs for an n‐length vector of training spatial locations, time points and biophysical parameters is hence

$$\mathcal{L}_{PDE} \triangleq \frac{1}{n} \sum_{i=1}^{n} \left[ r_{mass}(x_i, t_i, \theta_i)^2 + r_{mom}(x_i, t_i, \theta_i)^2 \right]. \quad (29)$$
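In the PINN, the residuals of Equations (27) and (28) are evaluated with automatic differentiation; the sketch below substitutes central finite differences to stay dependency‐free, and the parameter values (e.g., the boundary‐layer thickness `delta`) are placeholders rather than values from this study.

```python
import math

# Central finite differences stand in for the automatic differentiation
# used in the actual PINN training.
def ddx(f, x, t, h=1e-5):
    return (f(x + h, t) - f(x - h, t)) / (2.0 * h)

def ddt(f, x, t, h=1e-5):
    return (f(x, t + h) - f(x, t - h)) / (2.0 * h)

def mass_residual(q, A, x, t):
    """Conservation-of-mass residual (Equation 27): dA/dt + dq/dx."""
    return ddt(A, x, t) + ddx(q, x, t)

def momentum_residual(q, p, A, x, t, rho=1.057, nu=0.032 / 1.057, delta=0.1):
    """Balance-of-momentum residual (Equation 28); delta is a placeholder
    boundary-layer thickness."""
    R = math.sqrt(A(x, t) / math.pi)
    return (ddt(q, x, t)
            + ddx(lambda xx, tt: q(xx, tt) ** 2 / A(xx, tt), x, t)
            + A(x, t) / rho * ddx(p, x, t)
            + 2.0 * math.pi * nu * R / delta * q(x, t) / A(x, t))
```

The test uses a manufactured pair of fields chosen so that the mass residual vanishes analytically.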

3.3.3. Outflow

The loss function corresponding to the outflow boundary condition follows from the convolution relationship between flow and pressure at the end of the terminal vessels presented in Equation (12). Implementing the outflow loss requires pre‐computing a set of impedance vectors $z^{imp}$ for an $n_{imp}$‐length training set of biophysical parameters $\theta^{imp}$, generated from a Latin hypercube sampling (LHS) design that sufficiently densely covers the parameters' (patient‐specific) viable regions. The impedances are calculated at $m$ equidistant time points in one period, $t^{imp}$. For a vector of biophysical parameters $\theta$ with corresponding impedance vector $z$, the outflow residual in a terminal vessel of length $L$ is given by

$$r_{imp}(\theta, z) \triangleq \frac{1}{m} \sum_{j=1}^{m} \left( \tilde{p}\left(L, t_j^{imp}, \theta\right) - \mathcal{F}^{-1}\left[ \mathcal{F}\left\{ \tilde{q}\left(L, t^{imp}, \theta\right) \right\} \mathcal{F}\{z\} \right]_j \right)^2, \quad (30)$$

which follows from Equation (12), and where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse, respectively. The outflow loss function for the $n_{imp}$‐length full batch of biophysical parameters and corresponding impedance vectors is then given by

$$\mathcal{L}_{imp} \triangleq \frac{1}{n_{imp}} \sum_{i=1}^{n_{imp}} r_{imp}\left( \theta_i^{imp}, z_i^{imp} \right). \quad (31)$$

3.3.4. Data‐Driven Loss

Lastly, a loss function that corresponds to a data‐driven loss term is introduced. In this case, the data represent simulation results of flow and pressure at the mid‐points of the vessels, obtained by numerically solving the partial differential equations for a set of biophysical parameters. For an n‐length vector of training spatial locations, time points and biophysical parameters, the data‐driven (simulation) loss is given by

$$\mathcal{L}_{sim} \triangleq \frac{1}{n} \sum_{i=1}^{n} \left[ \left( \tilde{q}(x_i, t_i, \theta_i) - q_i \right)^2 + \left( \tilde{p}(x_i, t_i, \theta_i) - p_i \right)^2 \right], \quad (32)$$

where $q_i$ and $p_i$ denote the numerical solutions of flow and pressure, respectively.

3.3.5. Total Loss

Summing the individual loss terms provides the total loss function to be minimised. Previous research has shown that appropriately weighting the individual terms helps reach the optimum more consistently and in fewer iterations [39, 40, 41, 42]:

$$\mathcal{L}(\omega) \triangleq \lambda_{bif} \mathcal{L}_{bif} + \lambda_{PDE} \mathcal{L}_{PDE} + \lambda_{imp} \mathcal{L}_{imp} + \lambda_{sim} \mathcal{L}_{sim}. \quad (33)$$

The goal is to find the networks' parameters which minimise Equation (33):

$$\omega^{*} = \operatorname*{arg\,min}_{\omega} \mathcal{L}(\omega), \quad (34)$$

which is performed using stochastic gradient descent. More details on optimisation can be found in Section 3.8.

3.4. Weighting Scheme

Manually tuning the individual loss component weights in Equation (33) would prove arduous. For this reason, there has been keen interest in automatic tuning methods in the existing literature. The weighting scheme used in this work to set the loss weights in Equation (33) follows the method introduced by Wang et al. [39], who investigate a failure mode in training PINNs caused by stiffness in the gradient flow dynamics, leading to unbalanced back‐propagated gradients across the loss terms. Their algorithm updates the loss weights based on the $\ell_2$ norms of the gradients with respect to the network parameters $\omega$, via a moving average computed from gradient statistics obtained every $m$ training steps:

$$\lambda_{i,\mathrm{new}} = \frac{\big\|\nabla_\omega \mathcal{L}_{\mathrm{bif}}(\omega)\big\| + \big\|\nabla_\omega \mathcal{L}_{\mathrm{PDE}}(\omega)\big\| + \big\|\nabla_\omega \mathcal{L}_{\mathrm{imp}}(\omega)\big\| + \big\|\nabla_\omega \mathcal{L}_{\mathrm{sim}}(\omega)\big\|}{\big\|\nabla_\omega \mathcal{L}_{i}(\omega)\big\|}, \quad (35)$$

$$\lambda_i = \gamma\lambda_{i,\mathrm{old}} + (1-\gamma)\lambda_{i,\mathrm{new}}, \quad (36)$$

for $i \in \{\mathrm{bif}, \mathrm{PDE}, \mathrm{imp}, \mathrm{sim}\}$, where $\|\cdot\|$ denotes the $\ell_2$ norm. In this work, $\gamma$ was set to 0.9 and $m$ to 5, following the general guideline proposed by Wang et al. [39]. The weighting scheme is computationally cheap, since the gradients are readily available as part of a single gradient descent step.
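The update in Equations (35)–(36) can be sketched in a few lines of Python. The dictionary keys and the way the gradient norms are obtained are illustrative assumptions, not the authors' implementation:

```python
def update_weights(lams, grad_norms, gamma=0.9):
    """One update of Eqs. (35)-(36): each lambda_i moves towards the ratio of
    the summed gradient norms to that term's own gradient norm, smoothed by
    an exponential moving average with factor gamma.
    lams, grad_norms: dicts keyed by loss term ('bif', 'PDE', 'imp', 'sim'),
    holding the current weights and the l2 norms of each term's gradient."""
    total = sum(grad_norms.values())
    return {k: gamma * lams[k] + (1.0 - gamma) * total / grad_norms[k]
            for k in lams}
```

When all four gradient norms are equal, each ratio is 4, so the weights relax towards 4 at rate `1 - gamma`.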

3.5. Multi‐Task Data‐Driven and Physics‐Informed Learning

The modelling approach investigated here is a mixed learning approach, whereby the physics‐informed loss functions presented in the previous sections are combined with a standard data‐driven approach to neural network learning. In the context of emulation, this entails running numerical simulations of the physical model for a set of input parameters and using the simulator's results as an additional loss objective alongside the physics‐informed loss terms. A data‐free approach may also be taken, in which case the neural networks are trained using a modified total loss (Equation 33) that excludes the data‐driven term $\mathcal{L}_{\mathrm{sim}}$. It is of interest to quantify the information gained by including the physics‐based loss terms over a purely data‐driven model, that is, a model trained using only $\mathcal{L}_{\mathrm{sim}}$ in Equation (33). The out‐of‐sample predictive accuracy of the joint data‐driven and physics‐informed emulator is compared to that of both purely data‐driven surrogate models (deep neural networks and Gaussian processes) and strictly physics‐informed neural networks via an ablation study in Section 4.2.2. We demonstrate that introducing even a small amount of simulator output, combined with physics‐informed constraints, yields more accurate out‐of‐sample predictions than typical data‐driven surrogate models trained on several times as much data. This may be especially useful in a clinical setting, for example when transfer‐learning a pre‐existing, well‐trained model to a new patient given only a limited amount of time to run forward simulations. Further transfer learning approaches are investigated in Section 4.3.

3.6. Neural Network Enhancements and Activation Function

Two other methods to improve the learning of the underlying physical system by PINNs were also considered: a modified MLP that allows residual connections between the input and each hidden layer [39], and random weight factorisation (RWF) [43], a method of initialising and parameterising the weights of the hidden layers that has been shown to improve performance in some applications of PINNs [44]. A comparison of network architectures and training approaches was conducted for the 9‐vessel, 5‐biophysical‐parameter model (Figure 2); the results are displayed in Section 4.2.2. Hyperbolic tangent (tanh) activation functions were used for all hidden layers, a reliable choice in applications of physics‐informed machine learning [35, 44, 45].

3.7. Transfer‐Learning

For the analysis of real patient data in Section 4.3, the quality of emulation was evaluated through the lens of its application in a clinical setting. For all patient‐specific applications, the PINN emulator is trained solely on numerical solutions of the 1D model (using the patient‐specific geometry and inflow) together with the physics‐based loss terms. The measured flows and brachial pressures enter the workflow only through the likelihood in the Bayesian calibration step, and are not used as direct targets when optimising the PINN weights. CFD simulations were run for varying amounts of time using new patients' vascular geometries and inflow profiles, which were then used to build patient‐specific surrogate models. Instead of learning the weights and biases of the PINNs from scratch, a network trained on a previous patient was used as the starting point before fine‐tuning the weights using a stochastic gradient descent method with a small learning rate for a maximum of 30 min. In this work, the same pre‐existing PINN model was used as the base for all new patients; however, building a large set of emulators for pseudo‐patients with varying vascular geometries, boundary conditions, and physiological features and then choosing one similar to a new patient based on the features as the starting point may be an attractive route to take for larger scale applications.

3.8. Implementation

Optimisation of PINN network weights was performed using Adam [46], a variant of stochastic gradient descent (SGD), with a learning rate starting at $1\times10^{-3}$ and following a cosine‐shaped decay function to a lowest value of $1\times10^{-4}$, based on optimisation of similar problems [15, 38], and using mini‐batches consisting of 512 points.
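As a minimal sketch of such a schedule (assuming a simple cosine interpolation between the stated endpoints; the exact schedule used may differ), the learning‐rate decay could look like:

```python
import math

def cosine_lr(step, total_steps, lr_max=1e-3, lr_min=1e-4):
    """Cosine-shaped decay from lr_max at step 0 down to lr_min at
    total_steps, as described in Section 3.8."""
    cos_term = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos_term
```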

3.9. Inverse Uncertainty Quantification of Biophysical Parameters

To quantify the uncertainty of the estimated biophysical parameters $\theta$ given noisy flow data, a Bayesian framework was adopted where the likelihood model used for inference was multivariate normal. The covariance structure aimed to capture the discrepancy between the fluid dynamics model and the true underlying physical model in a patient [47, 48]. This modelling choice has two benefits: firstly, it takes into account that the mathematical model used to predict flow may not be representative of the underlying system (model discrepancy), and secondly, it captures observation errors that are correlated in time (noise model mismatch). Model discrepancy may be explained by several factors and modelling choices, such as a flawed patient geometry due to errors in the MRA measurements, an incorrect inflow profile, or incorrect fixed parameter values [49]. Assuming correlated errors instead of an i.i.d. noise model was also motivated by the temporal nature of the data; that is, a measurement error affecting one time point in the period may depend on the previous measurement and affect the following. The approach taken in this study addresses both of these potential problems at once, using the GP model laid out by Kennedy and O'Hagan [48]. As indicated in Figure 2, flow data was measured in four vessels at a fixed spatial location $x_i$ $(i=1,2,3,4)$ over $n_T$ time points $t_1,\ldots,t_{n_T}$ spanning one period. For each vessel, we define the vector of observed flow values as

$$y_i = \big(y_i(t_1),\, y_i(t_2),\, \ldots,\, y_i(t_{n_T})\big)^{T}, \quad (37)$$

and the corresponding vector of flow values predicted by the physics‐informed neural network, parametrised by θ, as

$$f_i(\theta) = \big(\tilde{q}(x_i, t_1, \theta),\, \tilde{q}(x_i, t_2, \theta),\, \ldots,\, \tilde{q}(x_i, t_{n_T}, \theta)\big)^{T}. \quad (38)$$

We assume for each vessel

$$y_i \mid \theta \sim \mathcal{N}\big(f_i(\theta),\, \Sigma_i\big). \quad (39)$$

The correlated errors of vessel i were modelled via a zero‐mean GP prior with a Matérn 3/2 covariance function, motivated by existing works in the literature which account for model mismatch during inference using similar GPs [50],

$$\Sigma_i = K_{\mathrm{disc},i} + \tau_i I_{n_T}, \quad (40)$$

$$\big[K_{\mathrm{disc},i}\big]_{j,j'} = \sigma_i^2\left(1 + \frac{\sqrt{3}\,|t_j - t_{j'}|}{\ell_i}\right)\exp\!\left(-\frac{\sqrt{3}\,|t_j - t_{j'}|}{\ell_i}\right), \quad (41)$$

where $\ell_i$, $\sigma_i^2$ and $\tau_i$ are vessel‐specific length‐scale, discrepancy variance, and noise variance parameters, respectively.
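A minimal NumPy sketch of the covariance construction in Equations (40)–(41); the function and argument names are illustrative:

```python
import numpy as np

def matern32_cov(t, ell, sigma2, tau):
    """Builds Sigma_i of Eqs. (40)-(41): a Matern 3/2 discrepancy kernel over
    the time points t, plus white observation noise tau on the diagonal."""
    d = np.abs(t[:, None] - t[None, :])        # pairwise |t_j - t_j'|
    s = np.sqrt(3.0) * d / ell
    K = sigma2 * (1.0 + s) * np.exp(-s)        # Eq. (41)
    return K + tau * np.eye(len(t))            # Eq. (40)
```

The resulting matrix is symmetric with diagonal `sigma2 + tau`, and the added noise term keeps it strictly positive definite.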

Uniform proper priors over physiologically realistic parameter ranges, as shown in Tables 1 and 4, were placed on the biophysical parameters $\theta$. For the Matérn 3/2 kernel parameters, $\mathrm{Unif}(0, T)$ priors were placed on the lengthscale terms, where $T$ denotes the period of a patient's circulation. Exponential priors were placed on the variance terms corresponding to each vessel, with the scale chosen equal to the range of observed data (maximum observed minus minimum observed) in that vessel. This was motivated by the idea that the mismatch between model outputs and observed data was expected to increase at a steady rate with increasing flow magnitude. Finally, log‐normal priors were placed on the vessel‐specific observation noise terms with means of 2 and standard deviations of 1:

$$\ell_i \sim \mathrm{Unif}(0, T), \quad \sigma_i^2 \sim \mathrm{Exp}\Big(\mathrm{scale} = \max_j y_i(t_j) - \min_j y_i(t_j)\Big), \quad \log\tau_i \sim \mathcal{N}(2, 1), \quad \theta \sim \mathrm{Uniform}(\text{physiological bounds}).$$

TABLE 1.

9‐vessel model parameter descriptions, and values and ranges for fixed and inferrable parameters.

Parameter | Value or range | Description
f1 | 2×10⁸ (fixed) | Large vessel stiffness
fs1 | 2×10⁸ (fixed) | Small vessel stiffness
f2 | [25, 45] | Large vessel stiffness
fs2 | [25, 45] | Small vessel stiffness
f3 | [2×10⁵, 9×10⁵] | Large vessel stiffness
fs3 | [2×10⁵, 9×10⁵] | Small vessel stiffness
α | [0.85, 0.94] | Structured tree scaling term
β | 0.60 (fixed) | Structured tree scaling term
lrr | 50 (fixed) | Small vessel length:radius ratio
rmin | 0.001 (Vessel 7), 0.0001 (rest) | Radius cut‐off of structured tree

Note: The stiffness parameters, which are used in Equation (5), relate either to the large vessels (f) or the small vessels which describe flow in the structured trees (fs).

TABLE 4.

Patient 1: 17‐vessel model parameter descriptions, values and ranges for fixed and inferrable parameters.

Parameter | Value or range | Description
f1 | 2×10⁷ (fixed) | Large vessel stiffness
fs1 | 2×10⁷ (fixed) | Small vessel stiffness
f2 | 35 (fixed) | Large vessel stiffness
fs2 | [16, 88] | Small vessel stiffness
f3 | [4×10⁵, 2.2×10⁶] | Large vessel stiffness
fs3 | [3.52×10⁴, 1.936×10⁵] | Small vessel stiffness
α | [0.90, 0.94] | Structured tree scaling term
β | 0.60 (fixed) | Structured tree scaling term
lrr | 50 (fixed) | Small vessel length:radius ratio
rmin | 0.001 (Vessel 7), 0.0001 (rest) | Radius cut‐off of structured tree

Note: There are four unique, group‐specific parameters each for f3 and fs3; a dot in place of the group subscript indicates that the parameter bounds are shared between groups.

The log‐likelihood for vessel i was

$$\log p\big(y_i \mid \theta, \ell_i, \sigma_i, \tau_i\big) = -\frac{n_T}{2}\log 2\pi - \frac{1}{2}\log\big|\Sigma_i\big| - \frac{1}{2}\big(y_i - f_i(\theta)\big)^{T} \Sigma_i^{-1}\big(y_i - f_i(\theta)\big), \quad (42)$$

and combining all vessels,

$$p\big(\theta, \{\ell_i, \sigma_i^2, \tau_i\} \mid \{y_i\}\big) \propto \prod_{i=1}^{M}\prod_{j=1}^{n_{\mathrm{obs}}} p\big(y_{i,j} \mid \theta, \ell_i, \sigma_i^2, \tau_i\big)\, p(\theta)\, p(\ell_i)\, p(\sigma_i^2)\, p(\tau_i), \quad (43)$$

where the second product's subscript $j=1,\ldots,n_{\mathrm{obs}}$ applies when there are multiple observed flow waveforms per vessel for a patient. Samples were obtained from this posterior using Hamiltonian Monte Carlo (HMC). Specifically, the No‐U‐Turn sampler (NUTS) [51] was used, eliminating the need to pre‐specify the number of leapfrog steps. Four chains were run in parallel with starting points sampled from the parameters' respective prior distributions. Convergence was tested by monitoring the potential scale reduction factor (PSRF) [52] until it fell below 1.1, after which 2000 further samples were obtained.
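The per‐vessel log‐likelihood of Equation (42) is typically evaluated through a Cholesky factorisation for numerical stability; the following NumPy sketch illustrates this route (a standard choice, not necessarily the authors' implementation):

```python
import numpy as np

def log_likelihood(y, f_theta, Sigma):
    """Eq. (42): multivariate normal log-density of observed flows y given
    emulator predictions f_theta and covariance Sigma. Using the Cholesky
    factor L (Sigma = L L^T), log|Sigma| = 2 * sum(log(diag(L))) and the
    quadratic form is computed via a triangular solve."""
    r = y - f_theta
    L = np.linalg.cholesky(Sigma)
    alpha = np.linalg.solve(L, r)                 # L^{-1} r
    n = len(y)
    return (-0.5 * n * np.log(2.0 * np.pi)
            - np.sum(np.log(np.diag(L)))          # = 0.5 * log|Sigma|
            - 0.5 * alpha @ alpha)                # = 0.5 * r^T Sigma^{-1} r
```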

4. Results

4.1. Evaluation Strategy

The performance of the proposed PINN framework is assessed using the three‐level evaluation strategy introduced in Figure 3: (1) testing the surrogate model's accuracy in reproducing outputs from the high‐fidelity CFD solver, (2) assessing the quality of parameter inference in synthetic settings with known ground truth, and (3) evaluating predictive performance in function space via push‐forward simulations.

FIGURE 3.

FIGURE 3

Illustration of the three levels of model evaluation. (1) Forward prediction (emulator verification): The PINN surrogate is compared to the high‐fidelity numerical solver to assess prediction accuracy in function or output space (bottom subpanel) for different combinations of physical parameters θ (note that θ1 and θ2 on the axes in the top subpanel denote two arbitrary input physical parameters). (2) Parameter‐space inference: Given noisy observational data (bottom subpanel), posterior parameter samples of the biophysical parameters are inferred (top subpanel), and their distributions are compared against known ground truth values. (3) Push‐forward validation: Posterior samples (top subpanel) are re‐inserted into the high‐fidelity solver to assess predictive accuracy in function space (bottom subpanel), comparing resulting waveforms against observed data to validate both emulator and inference performance.

4.2. 9‐Vessel Network, 5 Biophysical Parameters

The PINN framework laid out above was first applied to a 9‐vessel network (Figure 2a) beginning in the ascending aorta. Five biophysical parameters were included as inputs to the model: stiffness parameters relating to both the 9 large vessels and the microvasculature that makes up the structured tree, and an additional term that determines the shape of the structured trees (defined in Equation 13). The chosen parameter ranges are listed in Table 1. They were chosen to allow for typical wave speeds and realistic pressure ranges [53]. More details on parameter bounds, sensitivities and variable selection can be found in Paun et al. [30] and Taylor‐LaPole et al. [29].

4.2.1. Training, Test and Noisy Data

$2^{14}$ combinations of spatial locations, time points, and biophysical parameter vectors were obtained using uniform sampling for space and time, and a Latin hypercube sampling (LHS) design for the biophysical parameters, with the aim of densely covering all input coordinates. These vectors were then used as inputs to the loss functions, where applicable. For example, the bifurcation loss function accepts time points and biophysical parameters as inputs, whereas the PDE residuals loss function accepts spatial locations, time points and biophysical parameters. Numerical simulations of the physical model described in Section 2.2 were performed using biophysical parameters generated from the same LHS design consisting of roughly 5000 points. The outputs (flow and pressure) of the numerical solver consisted of 512 time points at the midpoint of each vessel. These input–output pairs were then subsampled using the following training data set sizes: {128, 512, 1024, 2048, 4096}, to be used in the data‐driven loss (Equation 32). Each set of simulation data was then used to build separate surrogate models.
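An LHS design of the kind described above can be sketched as follows. This is a generic stratified‐permutation construction rescaled to given parameter bounds, not the specific design tool used in this work:

```python
import numpy as np

def latin_hypercube(n, bounds, rng=None):
    """Simple LHS design: n samples over len(bounds) dimensions, one sample
    per stratum in each dimension, with an independent random permutation of
    strata per dimension. bounds is a list of (low, high) pairs, e.g. the
    viable parameter ranges of Table 1."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # stratified uniforms in [0, 1): (permuted stratum index + jitter) / n
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
         + rng.random((n, d))) / n
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)
```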

A test set of 128 input–output numerical simulation pairs was created by randomly sampling pairs from the training set outlined in the previous paragraph and removing them from it. Both training and test sets used only 20 time‐wise equidistant flow data points over one period from the midpoints of each vessel, instead of the full 512, to mirror what the data‐driven surrogates would typically use (see Section 4.2.2). Predictive accuracies of the trained PINN models are compared to purely data‐driven methods in Section 4.2.2. For each vessel and each test simulation, the vessel‐specific test loss was defined as the root mean squared error (RMSE) of predictions against the test data:

$$\mathrm{RMSE}_i = \sqrt{\frac{1}{20}\sum_{j=1}^{20}\big(q_{ij} - \tilde{q}_{ij}\big)^2}, \quad (44)$$

where the indices $i$ and $j$ denote the $i$th element of the test set and the $j$th time point, respectively. The total loss was calculated by summing the RMSE scores of all individual vessels, and the distributions of $\mathrm{RMSE}_i$ over the 128 test simulations are shown in Figure 4a.

FIGURE 4.

FIGURE 4

(a) Comparison of the models' predictive accuracy on an out‐of‐sample test set: Number of neurons per hidden layer versus sum over 9 vessels of mean RMSE scores. The colours and shapes refer to the number of hidden layers and neural network properties, respectively. Both the number of neurons per hidden layer and the number of hidden layers have a strong impact on predictive performance, with performance plateauing when using at least 5 hidden layers, 256 neurons per layer, and the modified MLP architecture. Note that values for the 7‐layer 256‐neuron Base and RWF models were left out, as they were not able to converge during training, and further investigation of appropriate learning rates was considered outside the scope of this work. (b) Traceplots describing the evolution of loss component weights over the course of training the best performing model in the architecture comparison.

To give a more concrete sense of emulator accuracy in physical units and relative terms, mean absolute error (MAE), maximum absolute error, and a relative L2 error for flow and pressure were computed for each vessel and test simulation. These are defined for flow as

$$\mathrm{MAE}_i = \frac{1}{20}\sum_{j=1}^{20}\big|q_{ij} - \tilde{q}_{ij}\big|, \qquad \mathrm{MaxAE}_i = \max_{1\le j\le 20}\big|q_{ij} - \tilde{q}_{ij}\big|, \quad (45)$$

and

$$\mathrm{rel}_i^{L_2} = \sqrt{\frac{\sum_{j=1}^{20}\big(q_{ij} - \tilde{q}_{ij}\big)^2}{\sum_{j=1}^{20} q_{ij}^2}}. \quad (46)$$

The same quantities are defined for pressure by replacing $q_{ij}$ and $\tilde{q}_{ij}$ with $p_{ij}$ and $\tilde{p}_{ij}$.
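The error metrics of Equations (44)–(46) for a single vessel and test simulation can be computed as in the following sketch (function and key names are illustrative):

```python
import numpy as np

def waveform_errors(q_true, q_pred):
    """Per-vessel error metrics of Eqs. (44)-(46) for one test simulation:
    RMSE, mean absolute error, maximum absolute error, and relative L2.
    q_true, q_pred: length-20 flow (or pressure) waveforms."""
    diff = q_true - q_pred
    return {
        "rmse":   np.sqrt(np.mean(diff ** 2)),
        "mae":    np.mean(np.abs(diff)),
        "max_ae": np.max(np.abs(diff)),
        "rel_l2": np.sqrt(np.sum(diff ** 2) / np.sum(q_true ** 2)),
    }
```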

In order to test the PINNs' ability to perform uncertainty quantification, 4 sets of synthetic, noisy flow data were generated using biophysical parameters sampled from their viable ranges, as presented in Section 4.2. The ground truth data consisted of 20 time‐wise equidistant flow measurements in one period, obtained using the CFD solver introduced in Section 2.2 at the midpoints of Vessels 3, 5, 6 and 7. Correlated errors were produced and added to the CFD solutions by sampling from the zero‐mean GP prior outlined in Section 3.9. The variance parameter $\sigma_i^2$, $i \in \{3,5,6,7\}$, was set per vessel to achieve similar signal‐to‐noise ratios, while a single lengthscale $\ell$ and noise variance $\tau$ were shared between all vessels when generating the data. These values were considered known and fixed during inference in order to isolate the performance of the emulator in recovering the physical parameters. While this would not be realistic in a clinical setting, it was chosen for this synthetic validation study to ensure a controlled and interpretable comparison to ground‐truth posteriors.
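Generating correlated synthetic noise of this kind amounts to drawing from a zero‐mean Gaussian with the Matérn 3/2 covariance of Section 3.9, via its Cholesky factor. A sketch, with illustrative parameter values:

```python
import numpy as np

def add_correlated_noise(q_clean, t, ell, sigma2, tau, rng=None):
    """Adds one draw from the zero-mean GP prior of Section 3.9 to a clean
    CFD flow waveform: build the Matern 3/2 covariance over the time points
    t, factorise it, and colour a standard normal draw with the factor."""
    rng = np.random.default_rng(rng)
    d = np.abs(t[:, None] - t[None, :])
    s = np.sqrt(3.0) * d / ell
    Sigma = sigma2 * (1.0 + s) * np.exp(-s) + tau * np.eye(len(t))
    return q_clean + np.linalg.cholesky(Sigma) @ rng.standard_normal(len(t))
```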

4.2.2. Emulation Accuracy

A comparison of architectures was conducted using a simulation batch consisting of 128 input and output pairs. The performance with respect to out‐of‐sample predictive accuracy of the various architectures is presented in Figure 4a. This figure illustrates the importance of deeper networks, with performance further improving as more neurons are added. Figure 5 displays predicted pressure and flow waveforms at the midpoint of each of the 9 vessels of the 5‐layer, 128‐neuron PINN model utilising both the modified MLP and RWF, and the corresponding ground truth obtained via CFD simulations for 3 vectors of biophysical parameters included in the test set.

FIGURE 5.

FIGURE 5

Flow and pressure waveform predictions versus ground truths for three different vectors of physical parameters in the test set. The predicted flow (a) and pressure (b) were obtained using the best performing model in the architecture comparison.

To compare the emulation performance of PINNs against standard data‐driven surrogate modelling techniques, deep neural networks and Gaussian processes were assigned to each of the 9 vessels and trained on varying amounts of simulation data. This corresponds to the first level of evaluation in Figure 3, focusing on the forward prediction accuracy of the surrogate model. The data‐driven methods were trained using vectors consisting of 20 equidistant points of the original 512, stretching across the whole temporal domain. This was chosen based on a typical amount of patient data available in a clinical setting for inference (see Section 4.3) [29]. The data‐driven neural networks (DDNNs) were trained in a multi‐output fashion with 20 output neurons, one per time point, whereas 20 independent Gaussian processes were fit using the numerical simulation flow output at each of the time points. The total data budget was fixed for all models. Neural networks used a 90:10 train/validation split of the simulation data for early stopping and underwent a grid search to find the optimal architecture, while GPs were trained using the full set. This design ensured that no method benefitted from access to more simulation outputs. Accompanying PINN models were fit using the same amount of numerical simulation output as the data‐driven methods. The architecture chosen was based on the comparison above: 5 hidden layers with 128 neurons per layer, utilising both the modified MLP and RWF, which provided a good balance between predictive performance and training time.

The same test set was used as in the architecture analysis. RMSE scores for each of the 20‐length test set vectors were obtained using the fully trained DDNNs, PINNs and GPs, and the results are summarised in Figure 6. The PINNs averaged better scores than the data‐driven methods across all tests (48% ± 16% and 40% ± 15% improvement in mean RMSE (± one standard deviation) across all vessels and numbers of simulations over GPs and DDNNs, respectively), since more information about the underlying physical process is built into them. Using as few as 512 simulations provided roughly the same level of predictive performance as the competing data‐driven methods achieved when using 4096 simulations. This stark difference illustrates the impact that the physics‐based loss function has on out‐of‐sample predictive accuracy. Table 2 provides further metrics for the PINN trained with 128 simulations, displaying the mean and standard deviation of the MAE and relative L2 error across the 128 test simulations, together with the corresponding maximum absolute error. Maximum absolute errors for pressure are within one unit across the network, showing strong performance.

FIGURE 6.

FIGURE 6

Comparison of physics‐informed neural networks, data‐driven neural networks and Gaussian processes' predictive performances in the 9‐vessel model given by RMSE error scores (displayed on the log scale) on an out‐of‐sample test set of 128 simulations using varying amounts of data to train each model (displayed on the x‐axis). The extra information provided by the physics‐informed constraints contained within the PINNs resulted in lower averages (59% and 49% improvement in mean RMSE across all vessels and number of simulations over GPs and DDNNs, respectively) across the entire test set compared to either of the other two methods when trained on the same amount of simulation data. Training the PINNs using 512 forward simulations yields comparable performance to the data‐driven methods trained using 8 times as many (4096) simulations.

TABLE 2.

Flow and pressure mean (standard deviation) error scores (mean and max absolute, and relative L2) across the test set per vessel for the PINN model trained using 128 simulations.

Vessel Flow MAE Flow Max AE Flow Rel L2 Pressure MAE Pressure Max AE Pressure Rel L2
1 0.089 (0.066) 0.232 (0.172) 0.001 (0.001) 0.185 (0.133) 0.545 (0.398) 0.003 (0.002)
2 0.222 (0.163) 0.641 (0.470) 0.003 (0.002) 0.182 (0.132) 0.537 (0.404) 0.003 (0.002)
3 0.126 (0.084) 0.365 (0.258) 0.010 (0.007) 0.181 (0.129) 0.530 (0.384) 0.003 (0.002)
4 0.279 (0.201) 0.805 (0.614) 0.004 (0.003) 0.176 (0.130) 0.511 (0.394) 0.003 (0.002)
5 0.089 (0.102) 0.248 (0.291) 0.030 (0.032) 0.242 (0.184) 0.700 (0.567) 0.004 (0.003)
6 0.220 (0.152) 0.675 (0.442) 0.016 (0.010) 0.182 (0.135) 0.526 (0.405) 0.003 (0.002)
7 0.365 (0.230) 1.042 (0.691) 0.006 (0.004) 0.136 (0.098) 0.392 (0.295) 0.002 (0.002)
8 0.110 (0.063) 0.332 (0.200) 0.010 (0.006) 0.177 (0.130) 0.499 (0.371) 0.003 (0.002)
9 0.081 (0.089) 0.200 (0.179) 0.030 (0.033) 0.253 (0.193) 0.705 (0.537) 0.004 (0.003)

Note: Mean and max absolute error scores are in the corresponding units (mL/s and mmHg), while relative L2 is in percentage points.

4.2.3. Inverse Uncertainty Quantification of Biophysical Parameters Using the Emulator

Given the noisy flow observational data generated as outlined in Section 4.2.1, ground truth posteriors were obtained using Bayesian inference, where forward runs of the numerical CFD solver were used within a Markov Chain Monte Carlo (MCMC) sampling procedure to evaluate the likelihood at each parameter setting. This approach provides a high‐fidelity posterior against which the PINN‐based posteriors can be compared. This analysis addresses the second level of model assessment shown in Figure 3, examining how well the inferred parameter posterior distributions align with the ground‐truth posterior. Due to the computational cost of repeatedly solving the full 1D fluid dynamics model, this procedure required a timescale on the order of days. In contrast, the PINN emulators produced comparable posteriors in under an hour, highlighting the practical advantage of emulation in time‐constrained settings. Boxplots of the marginal posteriors are displayed in Figure 7, and KL‐divergence scores summarising the differences between the PINN and ground truth posteriors are given in Table 3.
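The KL‐divergence computation between sampled posteriors is not fully specified here; one common approach, shown as a sketch below under that assumption, fits Gaussian approximations to the two sample sets and applies the closed‐form KL formula for multivariate normals:

```python
import numpy as np

def gaussian_kl(samples_p, samples_q):
    """KL(P || Q) between Gaussian approximations fitted to two posterior
    sample sets (rows = draws, columns = parameters). A common way to score
    the closeness of an emulator posterior to a ground-truth posterior."""
    mu_p, mu_q = samples_p.mean(axis=0), samples_q.mean(axis=0)
    S_p = np.cov(samples_p, rowvar=False)
    S_q = np.cov(samples_q, rowvar=False)
    k = len(mu_p)
    S_q_inv = np.linalg.inv(S_q)
    dmu = mu_q - mu_p
    # closed-form KL between N(mu_p, S_p) and N(mu_q, S_q)
    return 0.5 * (np.trace(S_q_inv @ S_p) + dmu @ S_q_inv @ dmu - k
                  + np.log(np.linalg.det(S_q) / np.linalg.det(S_p)))
```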

FIGURE 7.

FIGURE 7

Boxplots comparing the PINNs' and ground truth marginal posterior quartiles. Note that the outputs have been scaled to lie between 0 and 1 for ease of comparison. The models' boxplots are in the following order: N = 128, 512, 1024, 2048, 4096, and finally the ground truth obtained via the numerical solver. Posterior quartiles differ little once at least 512 runs of the numerical solver are used to train the models.

TABLE 3.

Summary of KL‐divergences between the ground truth posterior obtained via simulations and the posteriors found using PINNs with varying amounts of simulation data used during training.

Number of simulations | 128 | 512 | 1024 | 2048 | 4096
Sample 1 | 0.072 | 0.033 | 0.022 | 0.012 | 0.0093
Sample 2 | 0.46 | 0.064 | 0.057 | 0.051 | 0.032
Sample 3 | 0.062 | 0.069 | 0.12 | 0.043 | 0.033
Sample 4 | 0.059 | 0.015 | 0.0101 | 0.016 | 0.009

Note: The KL‐divergences tend to decrease when PINNs are trained on more simulation data; for every sample, the smallest KL‐divergence is achieved by the PINN trained on 4096 simulations.

4.3. 17‐Vessel Network, 10 Biophysical Parameters

For the 17‐vessel network, the arteries are divided into 4 groups by location, with each group having a unique vessel stiffness parameter, thereby increasing the number of parameters in the model from 5 to 10 (Equation 14; see Table 4 for inferrable parameter bounds and descriptions). This allows for more flexible modelling of blood circulation, at the cost of a larger parameter space that must be accurately emulated. The modelling methods closely follow the study conducted by Taylor‐LaPole et al. [29], including variable selection and most of the parameter bounds.

In this study, bounds for these parameters were estimated using Equation (5), inserting the patient's pulse pressure obtained from systolic and diastolic cuff pressure measurements and typical expansion rates of arterial cross‐sectional area reported in the literature. For example, Thijssen et al. [54] and Green et al. [55] noted that the typical arterial expansion ranges from 3% to 10% depending on the artery.

4.3.1. Uncertainty Quantification of Vessel‐Stiffness Parameters

For the 17‐vessel model, the parameters to be inferred using the likelihood model defined in Section 3.9 were: 10 biophysical parameters (4 large and 5 small vessel stiffness parameters and a structured tree scaling parameter), 8 parameters (2 per vessel) defining the vessel‐specific Matérn 3/2 kernel used for the covariance matrices to model correlated error/model discrepancy structure, and 4 vessel‐specific observation noise parameters. Results for Patient 1 are discussed in the following section, and results for the 3 further patients can be found in Supporting Information.

4.3.2. Patient 1

Physics‐informed neural networks, data‐driven neural networks, and Gaussian processes were implemented and trained using varying amounts of simulation data for different vectors of physical parameters. Simulations were run in parallel and performed using Dual Intel Xeon Gold 6138 processors. Table 5 displays the time (in minutes) to run 4 different batches of input data. Figure 8a highlights the out‐of‐sample predictive performance of each of the emulation methods trained on each of the batches of simulation data.

TABLE 5.

Time taken to run varying amounts of simulations.

Number of simulations | 128 | 512 | 2048 | 8148
Time taken (minutes) | 30 | 120 | 480 | 1920
FIGURE 8.

FIGURE 8

Comparing predictive and inference performance of physics‐informed and data‐driven neural networks trained on varying amounts of simulator runs: (a) RMSE scores on a held‐out test set of flow data from vessel midpoints generated using 512 different vectors of biophysical parameters. Similarly to the 9‐vessel network, the PINNs in the 17‐vessel network exhibit better predictive performance compared to the other methods when trained on the same amount of data. This is explained by the fact that PINNs have more information about the underlying process built into them. Notably, the PINNs require roughly 4 times fewer simulations to achieve similar predictive accuracy as the data‐driven methods trained on 2048 simulations. (b) Marginal posterior quartiles of the large vessel stiffness parameters inferred using DDNNs and PINNs as surrogate models trained with varying amounts of simulation training data. The posteriors obtained using the PINNs and the DDNN with the most training data appear to approximate the ground truth posterior quite accurately, apart from bias in the posterior mean of f34. In contrast, the DDNNs with the least training data cannot approximate the posterior. The PINNs' outputs are regularised strongly given input stiffness parameters due to the relationship between area and pressure in Equation (5).

As outlined in the evaluation strategy in Figure 3, we assess the inference performance of the emulators not only in parameter space but also in output space via push‐forward validation. Figure 8b shows the marginal posterior distributions inferred using both data‐driven emulators and physics‐informed neural networks (PINNs), each trained on simulation datasets of increasing size. To assess how well each model captured the true posterior distribution under noisy observational data, we computed a ground‐truth posterior using the numerical solver embedded within an adaptive Metropolis algorithm. Achieving sufficient convergence across all parameters (PSRF < 1.1) required slightly over a week of computation. As shown in Table 6, PINNs achieved lower KL‐divergences to the ground‐truth posterior than their data‐driven counterparts, despite requiring roughly four times fewer training simulations to reach comparable predictive accuracy on the test set. Contour plots of posterior samples of the stiffness parameters corresponding to each of the four vessel groups, obtained using the PINNs and DDNNs with the varying training set sizes and the numerical solver, are displayed in Figure 9.

TABLE 6.

KL‐divergences between the ground truth posterior obtained using the numerical solver and the posteriors found using both DDNNs and PINNs, with varying amounts of simulation data used during training.

Number of simulations | 128 | 512 | 2048
DDNN | 18.46 | 5.94 | 1.71
PINN | 2.11 | 1.44 | 1.13
FIGURE 9.

FIGURE 9

Pairplots and densities comparing posteriors obtained via PINNs and DDNNs using varying amounts of training data (128, 512 and 2048). The green densities and contours correspond to the ground truth posterior and hence remain constant between plots.

Figure 10 displays posterior predictive intervals for both flow and pressure. The posterior samples were subsampled down to 50 sets of physical parameters, which were then inserted into the PDE solver to obtain the set of high‐fidelity pressure solutions for the inferred posterior displayed in Figure 10b. This push‐forward validation corresponds to the third level of evaluation in Figure 3, where we assess whether posterior uncertainty over parameters translates into plausible uncertainty in predicted pressure waveforms. The flow credible intervals were generated by passing each of the samples from the inferred posterior distribution through the trained PINN model. This push‐forward step propagates parametric uncertainty into the output space, producing distributions over predicted physiological quantities such as flow and pressure waveforms.
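The pointwise credible bands described above can be obtained from push‐forward samples by taking quantiles across draws; a generic sketch, where the array layout (rows as posterior draws, columns as time points) is an assumption:

```python
import numpy as np

def credible_band(pushed_samples, level=0.95):
    """Pointwise credible interval from push-forward samples: each row is one
    posterior draw passed through the emulator or solver, each column one
    time point of the predicted waveform. Returns (lower, upper) envelopes."""
    lo = (1.0 - level) / 2.0
    return (np.quantile(pushed_samples, lo, axis=0),
            np.quantile(pushed_samples, 1.0 - lo, axis=0))
```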

FIGURE 10.

Patient 1 inference results: (a) 95% posterior predictive credible and prediction intervals in each of the four vessels used for inference. (b) 95% posterior predictive credible interval in the brachial artery. Pressure predictions based on the inferred biophysical parameters show relatively good agreement with the measured systolic and diastolic pressures. The shaded regions indicate ±3 mmHg measurement error.

In Vessels 3, 5 and 6, the predicted flow envelopes show close agreement with the observed 4D‐MRI data. In contrast, Vessel 7 exhibits a discrepancy between predicted and observed waveforms, which may be attributed to model mismatch, measurement noise, or location‐specific dynamics missing from the model. In the absence of invasive pressure measurements, we use cuff‐based brachial pressure as a surrogate validation target. While this is an indirect metric, it reflects a clinically meaningful endpoint and provides a practical means of validating output‐space predictions. The predicted pressure distribution at the midpoint of the brachial artery (Vessel 10; see Figure 2b) demonstrates the model's ability to estimate central blood pressure non‐invasively. Dashed red and blue lines represent the measured systolic and diastolic pressures, respectively, with the shaded band indicating a ±3 mmHg tolerance corresponding to the reported accuracy of clinical cuff‐based measurements [56, 57]. The predicted range lies well within this band, supporting the model's potential for practical use in pressure estimation.

All predictive simulations used posterior samples obtained from a PINN trained on 512 simulation points. The full inference workflow required approximately three hours per patient: two hours to generate simulation data (see Table 5), 30 min for patient‐specific fine‐tuning via transfer learning, and 30 min for posterior inference.

4.3.3. Patients 2–4

To test the generalisability of the method across a broader cohort, we applied the same workflow to three additional patients. In all three cases, the predicted brachial pressure intervals aligned well with the measured systolic and diastolic values, falling within the ±3 mmHg clinical tolerance. These results, shown in the Supporting Information, provide further evidence that the proposed approach can generalise across patients and produce clinically relevant predictions with minimal tuning. Additionally, compliance distributions in the four vessels where flow data was measured are reported in Supporting Information. The values match the range of inferred compliances in the original study.

4.3.4. Information Provided by Physics‐Based Constraints

The results of the previous sections show that the constraints provided by the physics‐based loss functions have a significant impact on creating an accurate predictive model. In this section, we attempt to quantify the amount of information that the physics‐informed loss functions contribute towards finding the solutions to the physical problem. The predictive performance in Section 4.2.2 (specifically Figure 6) highlighted that introducing even a small amount of data, in the form of solutions to the physical problem for a small set of physical parameters, substantially improved predictive performance, despite the remaining loss functions being unchanged. In other words, introducing the solutions obtained via the numerical solver as a loss function may aid the neural networks in minimising the remaining loss functions, and hence in finding the solutions to the PDEs.

To validate this idea, a second set of PINNs was fitted to the 17‐vessel network corresponding to Patient 1 (Section 4.3.2). Instead of using an uninformative set of initial weights as the starting point for fitting the model, the weights were initialised at the final weights of a well‐trained model, that is, the emulator trained on the largest amount of numerical solver data (see Figure 8a; 8148 runs of the numerical solver, each with a different vector of physical parameters). The predictive performance of the models trained from these best available initialisations is compared to the previous results (presented in Section 4.3.2) in Figure 11. The models trained on fewer data retain much of the information provided by the initialisation, as the physical constraints provided by the loss functions are still being met. In summary, the physics‐informed loss functions offer a wealth of information for obtaining solutions to the PDEs. One drawback of the PINNs compared to the data‐driven models, however, is an increase in training time due to the much more complex loss surface resulting from the sum of individual losses. The weighting scheme introduced in Section 3.4 was one attempt to balance the loss terms and allow for efficient training. For the 1D fluid dynamics problem of branching networks tackled in this work, future work may focus on finding an optimisation routine that achieves the encouraging results in Figure 11 without relying on a fully informative initialisation.
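The warm‐start strategy described above amounts to copying trained weights into a fresh model before resuming training on the physics‐informed losses. A minimal sketch, with the layer sizes and Xavier‐style initialisation as illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def init_mlp(sizes, rng):
    """Random (Xavier-style) initialisation of one per-vessel MLP, stored as
    a list of (weight, bias) pairs; layer sizes are illustrative."""
    return [(rng.normal(0.0, np.sqrt(2.0 / (m + n)), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def warm_start(source_weights):
    """Warm start: copy the weights of a well-trained source emulator so a
    new model begins optimisation from an informative initialisation."""
    return [(W.copy(), b.copy()) for W, b in source_weights]

rng = np.random.default_rng(0)
trained = init_mlp([3, 128, 128, 2], rng)   # stand-in for the well-trained model
new_model = warm_start(trained)             # continue training from here
```

The copies are independent, so subsequent training of the new model leaves the source emulator untouched; in a deep learning framework the same step is a state‐dict copy.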

FIGURE 11.

Comparing RMSE scores on an out‐of‐sample test set given different PINN initialisations. The boxplots in orange display scores obtained by models which used the final weights of the model trained on 8148 data points as their starting points, whereas the boxplots in blue display scores of models initialised on a separate patient's (less informative) model weights.

5. Discussion

5.1. Physics‐Informed Emulator

In this work, a method for implementing physics‐informed neural networks as emulators of a complex fluid dynamics problem involving a system of vessels around the heart was introduced and compared to standard emulation techniques. In comparative experiments, the physics‐informed surrogate consistently outperformed standard data‐driven emulators in both forward prediction accuracy and inverse UQ while using substantially fewer high‐fidelity simulations. The PINN emulator achieved comparable RMSE performance when trained on 512 CFD simulations, whereas the purely data‐driven deep neural networks and GP surrogates required 4096 simulations to reach a similar error level (an 8× larger training budget). Across the tested vessels, the PINN models yielded mean RMSE improvements on the order of 40%–48% relative to DDNNs and GPs, and produced posteriors with low KL divergence to the simulator‐based ground truth (see Table 3 and Figures 7 and 8b). These quantitative gains highlight that embedding conservation laws and boundary constraints directly into the learning objective substantially improves out‐of‐sample generalisation. Compared to other applications in the field that use physics‐informed models to infer physiological parameters from simplified or smaller‐scale models, we extend the framework to a larger vessel network with multiple interconnected arteries, vessel compliance, and structured outflow boundary conditions. For instance, Naghavi et al. [58] employ a closed‐loop 0D circulation model of the left ventricle and its surrounding components (aorta, peripheral arteries, vena cava, and left atrium), using PINNs to estimate contractility parameters from single‐beat pressure–volume waveforms. While accurate and fast, their approach does not capture spatial heterogeneity or flow in multiple vessels. Similarly, Jara et al. [59] use a 1D Navier–Stokes model together with MRI imaging to estimate pulmonary artery pressures, which brings in spatial flow/area data but remains limited in network complexity and number of vessels (three vessels with one bifurcation). In contrast, we model a 17‐vessel network with full enforcement of mass conservation and flow continuity, and crucially, infer both systolic and diastolic pressures in unobserved vessels (i.e., vessels lacking direct pressure measurements).

A practical concern is whether a PINN‐based approach remains tractable as the vascular network grows. The architecture used throughout the paper (five hidden layers of 128 neurons each, with the modified MLP) has a total of 68,096 parameters. In our formulation, a separate fully connected network is assigned to each large vessel, all sharing the same architecture. Consequently, both the total number of trainable parameters and the associated training cost scale linearly with the number of vessels. For example, moving from 9 to 17 vessels increased the total parameter count from 6.13 × 10⁵ to 1.16 × 10⁶, with a proportional change in training time and no degradation in optimisation stability thanks to the loss‐scaling scheme, which has negligible computational overhead (Section 3.4). Cross‐vessel coupling enters only through bifurcation constraints, whose number is also linear in network size. Importantly, once trained, inference for biophysical parameters remains constant‐time (depending only on the number of vessels in which data were observed), as predictions are vessel‐specific and do not depend on the total network size. This linear scaling suggests that extending the framework to larger arterial networks is computationally feasible, especially given the natural parallelism of the branching structure and the potential for transfer learning demonstrated in this study.
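The linear scaling argument can be checked with simple arithmetic, using the per‐vessel parameter count stated above:

```python
# One independent network per large vessel; the per-vessel parameter count
# (modified MLP, 5 hidden layers of width 128) is taken from the text.
PARAMS_PER_VESSEL = 68_096

def total_params(n_vessels, per_vessel=PARAMS_PER_VESSEL):
    """Total trainable parameters across the network: linear in vessel count."""
    return n_vessels * per_vessel

print(total_params(9))   # 612,864  (~6.13e5)
print(total_params(17))  # 1,157,632 (~1.16e6)
```

Doubling the vessel count therefore doubles both the parameter budget and (to first order) the training cost, with the bifurcation constraints adding only a linearly growing number of coupling terms.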

Several alternative strategies have been proposed concerning patient‐specific parameter inference in haemodynamic networks. A common approach is surrogate‐assisted Bayesian sampling, for example GP emulators used to approximate a likelihood in MCMC to accelerate posterior exploration [10, 50], or as emulators of the forward model outputs themselves [60, 61, 62]. For example, Schiavazzi et al. [60] use a GP surrogate for virtual Fontan surgery to propagate uncertainty in geometric and boundary‐condition parameters through 3D haemodynamic simulations for single‐ventricle palliation. Their surrogate, however, is constructed in a very low‐dimensional parameter space and targets a small set of scalar outputs, whereas the present work proposes an emulator for time‐resolved 1D haemodynamics over a higher‐dimensional biophysical parameter space. More recently, graph neural networks have been proposed as surrogates for cardiovascular simulations, such as left ventricular displacement by Dalton et al. [63, 64] and arterial blood flow by Pegolotti et al. [65]. While the GNN‐based models achieve very low errors for fixed anatomies and boundary conditions, extending them to emulate solutions over a biophysical parameter space (e.g., vessel wall stiffness and structured‐tree outflow parameters) would require explicitly embedding these parameters into the graph representation. Our PINN formulation instead accepts them as inputs at each prediction, while enforcing the governing 1D PDEs and network constraints. In parallel, amortised simulation‐based inference methods have been developed for lumped‐parameter haemodynamic models. For example, the InVAErt framework [66, 67] trains a conditional generative model intended to approximate the posterior p(θ | y), enabling fast approximate posterior sampling and identifiability analyses across many patients. However, that work does not explicitly assess the accuracy of the inferred posterior in parameter space.
These approaches also require specifying a noise/likelihood model a priori in order to generate synthetic training pairs. In the present work we deliberately retain an explicit Bayesian calibration step with a flexible, GP model discrepancy term, allowing the observation noise and model mismatch to be inferred jointly with θ rather than fixed at training time. Ensemble‐based methods such as the ensemble Kalman filter/inversion [68] require forward evaluations of the full model for every ensemble member at each assimilation step; this is computationally inexpensive for 0D or lumped‐parameter haemodynamic models and thus commonly used in clinical pipelines, but it becomes prohibitive for expensive 1D/3D solvers unless paired with a fast surrogate. Conversely, adjoint‐based gradient approaches [69, 70] are attractive because they yield gradients at a cost roughly independent of parameter dimension, but deriving and implementing either the continuous or discrete adjoint for complex coupled blood flow models is non‐trivial and often a significant engineering effort. PINNs sit between these classes: by embedding the PDE structure they retain computational efficiency (fewer numerical solver runs required to achieve a similar level of accuracy to a data‐driven surrogate) while enabling full posterior inference using the full‐order model (FOM). In this study the PINN‐based emulator reduced the simulator budget needed for reasonable posterior recovery by roughly an order of magnitude compared with purely data‐driven surrogates, making it an attractive compromise for networks where both fidelity and computational cost matter.

5.2. Model Parameters and Predictions

Parameter inference was performed using flow data from four patients, and the emulated fluid dynamics model's pressure predictions were shown to accurately match systolic and diastolic cuff pressure measurements as shown in Figure 10b and in Supporting Information. Inference was performed first by transfer‐learning an existing model to the new patient, reducing the time needed to fit the model and allowing for rapid parameter estimation. The Supporting Information contains further comparisons to the point estimates of the stiffness values for the DORV patients obtained in a previous study by Taylor‐LaPole et al. [29]. The posterior flow predictions displayed in Figure 10a as well as in Supporting Information indicate a strong ability to match most reflective waves as well as peak flows. However, even with patient‐specific geometry and inflow, we do not assume that the 1D model can reproduce all measured 4D‐MRI flows exactly. The GP discrepancy term in the likelihood is intended to capture structural model error and noise‐model mismatch arising from, for example, uncertainty in segmented radii and lengths [49], imperfect registration between MRI planes and 1D vessel locations [71], simplified outflow boundary conditions, and the reduction from 3D haemodynamics to a 1D elastic‐tube model.
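The role of the GP discrepancy term can be sketched as a Gaussian marginal likelihood whose covariance sums a smooth discrepancy kernel and i.i.d. measurement noise. The squared‐exponential kernel and the hyperparameter values below are illustrative assumptions, not the settings inferred in the paper:

```python
import numpy as np

def log_likelihood(y_obs, y_model, times, amp=1.0, length=0.1, noise=1.0):
    """Gaussian likelihood with a GP discrepancy term:
    y_obs = y_model(theta) + delta(t) + eps,
    delta ~ GP(0, k_SE), eps ~ N(0, noise^2 I).
    Hyperparameters (amp, length, noise) would in practice be inferred
    jointly with the biophysical parameters."""
    d = times[:, None] - times[None, :]
    K = amp**2 * np.exp(-0.5 * (d / length)**2)   # squared-exponential kernel
    cov = K + noise**2 * np.eye(len(times))
    resid = y_obs - y_model
    _, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, resid)
    return -0.5 * (resid @ alpha + logdet + len(times) * np.log(2 * np.pi))
```

Because the discrepancy is correlated in time, structured misfit (such as a systematically mis‐predicted reflective wave) is absorbed by the GP term rather than biasing the biophysical parameter estimates.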

To provide a physiologically meaningful comparison, we also translate inferred f‐parameters into effective compliance as in Section 2.2.3 (C = 3A0r0/(2Eh)). The results for all four patients are shown in the Supporting Information.
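Assuming the stiffness parameter follows the Olufsen‐style definition f = 4Eh/(3r0) (an assumption here; Section 2.2.3 of the paper gives the exact relation), the compliance formula C = 3A0r0/(2Eh) reduces algebraically to C = 2A0/f, which is straightforward to evaluate per vessel:

```python
import math

def compliance_from_stiffness(f, r0):
    """Effective compliance from an inferred stiffness parameter.

    Assumes f = 4Eh/(3 r0), so C = 3 A0 r0 / (2 E h) = 2 A0 / f,
    with reference cross-sectional area A0 = pi * r0**2. A sketch only;
    units follow whatever system f and r0 are expressed in.
    """
    a0 = math.pi * r0 ** 2
    return 2.0 * a0 / f
```

Stiffer vessels (larger f) thus map to lower compliance, which is the direction of the comparison made against the ranges reported in the original study.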

5.3. Limitations

The inference framework presented in this work has several limiting factors. The PINN model requires patient‐specific inputs, including vascular geometry and inflow waveforms, which are themselves obtained from data subject to measurement error. In this study, these quantities were treated as known and fixed, meaning that uncertainty in their measurement was not propagated through the inference procedure. While this may not strongly bias the inferred parameters in the absence of systematic error, it can lead to an underestimation of posterior uncertainty. Including the inflow function or geometry parameters as inputs into the model, using, for example, neural operators [72], would allow their uncertainty to be taken into account.

Furthermore, the parameter ranges that were used in building the emulator for each of the four patients used expert knowledge from a previous study by Taylor‐LaPole et al. [29]. Extending the emulator to a new patient may require additional expert knowledge, or a new method to ascertain nominal parameter values and ranges. Lastly, the framework laid out in this paper was only applied to four DORV patients and would benefit from a larger case–control study including both healthy (control) and diverse pathological cohorts to assess the robustness of the inference pipeline in realistic clinical settings.

6. Conclusion

Physics‐informed machine learning is a powerful tool for incorporating valuable information about an underlying process into a model. When clinicians make surgical decisions about constructing the Fontan circuit, the limited time available to create a patient‐specific model must be used efficiently. The PINNs presented in this paper were shown to outperform two widely used, purely data‐driven surrogate modelling methods, deep neural networks and Gaussian processes, in terms of predictive accuracy (output space) and inverse UQ (parameter space) in two different models of systemic circulation, requiring up to eight times fewer CFD simulations to reach the same level of predictive accuracy. A potential avenue for future work is to use the PINNs as efficient emulators to quickly infer pressure waveforms in the aorta and use these as initial conditions in complex 3D Navier–Stokes simulations.

Author Contributions

William Ryan: conceptualisation, methodology, formal analysis, software, visualisation, investigation, writing – original draft, writing – review and editing. Alyssa Taylor‐LaPole: conceptualisation, visualisation, software, writing – review and editing. Mette Olufsen: conceptualisation, visualisation, software, formal analysis, writing – review and editing. Dirk Husmeier: conceptualisation, methodology, formal analysis, writing – review and editing. Vladislav Vyshemirskiy: conceptualisation, methodology, formal analysis, writing – review and editing.

Funding

This work was supported by Engineering and Physical Sciences Research Council (EPSRC), grant reference no. EP/T017899/1; National Science Foundation (NSF), grant reference no. DGE‐2137100, DMS‐2231482, and 2342344; Additional Ventures, award number 1449780.

Conflicts of Interest

The authors declare no conflicts of interest.

Supporting information

Data S1: cnm70147‐sup‐0001‐Supplementary.pdf.

CNM-42-e70147-s001.pdf (979KB, pdf)

Acknowledgements

This work has been funded by EPSRC, grant reference no. EP/T017899/1 (research hub for statistical inference in complex cardiovascular and cardiomechanic systems). Work carried out by A.T.‐L. was funded by the National Science Foundation (NSF) grant reference no. DGE‐2137100 and DMS‐2231482. Work carried out by M.O. was funded by the NSF, award number 2342344, and by Additional Ventures, award number 1449780. We also would like to acknowledge collaboration with Charles Puelz, University of Houston (cepuelz@central.uh.edu) and Justin Weigand, Baylor College of Medicine (justin.weigand@bcm.edu), who were primary contributors to obtaining data and developing the computational model used in this study.

Ryan W., Taylor‐LaPole A., Olufsen M., Husmeier D., and Vyshemirskiy V., “Physics‐Informed Emulation of Systemic Circulation for Fast Parameter Estimation and Uncertainty Quantification,” International Journal for Numerical Methods in Biomedical Engineering 42, no. 2 (2026): e70147, 10.1002/cnm.70147.

Data Availability Statement

Waveform and geometry data for DORV patients included in this study can be found at https://doi.org/10.5061/dryad.zpc866tj0 and in Supporting Information of Taylor‐LaPole et al. [29].

References

  • 1. Reymond P., Crosetto P., Deparis S., Quarteroni A., and Stergiopulos N., “Physiological Simulation of Blood Flow in the Aorta: Comparison of Hemodynamic Indices as Predicted by 3d Fsi, 3d Rigid Wall and 1d Models,” Medical Engineering & Physics 35, no. 6 (2013): 784–791, 10.1016/j.medengphy.2012.08.009. [DOI] [PubMed] [Google Scholar]
  • 2. Mariscal‐Harana J., Charlton P. H., Vennin S., et al., “Estimating Central Blood Pressure From Aortic Flow: Development and Assessment of Algorithms,” American Journal of Physiology. Heart and Circulatory Physiology 320, no. 2 (2021): H494–H510, 10.1152/ajpheart.00241.2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Johnson D. A., Rose W. C., Edwards J. W., Naik U. P., and Beris A. N., “Application of 1d Blood Flow Models of the Human Arterial Network to Differential Pressure Predictions,” Journal of Biomechanics 44, no. 5 (2011): 869–876, 10.1016/j.jbiomech.2010.12.003. [DOI] [PubMed] [Google Scholar]
  • 4. Olufsen M. S., Peskin C. S., Kim W. Y., Pedersen E. M., Nadim A., and Larsen J., “Numerical Simulation and Experimental Validation of Blood Flow in Arteries With Structured‐Tree Outflow Conditions,” Annals of Biomedical Engineering 28, no. 11 (2000): 1281–1299, 10.1114/1.1326031. [DOI] [PubMed] [Google Scholar]
  • 5. Blanco P. J., Bulant C. A., Müller L. O., et al., “Comparison of 1d and 3d Models for the Estimation of Fractional Flow Reserve,” Scientific Reports 8 (2018): 17275, 10.1038/s41598-018-35344-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Rychik J., Atz A. M., Celermajer D. S., et al., “Evaluation and Management of the Child and Adult With Fontan Circulation: A Scientific Statement From the American Heart Association,” Circulation 140, no. 6 (2019): e234–e284. [DOI] [PubMed] [Google Scholar]
  • 7. Fontan F. and Baudet E., “Surgical Repair of Tricuspid Atresia,” Thorax 26 (1971): 240–248. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Najm H. N., “Uncertainty Quantification and Polynomial Chaos Techniques in Computational Fluid Dynamics,” Annual Review of Fluid Mechanics 41, no. 1 (2009): 35–52. [Google Scholar]
  • 9. Colebank M. J. and Chesler N. C., “Efficient Uncertainty Quantification in a Spatially Multiscale Model of Pulmonary Arterial and Venous Hemodynamics,” Biomechanics and Modeling in Mechanobiology 23, no. 6 (2024): 1909–1931, 10.1007/s10237-024-01875-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Paun L. M. and Husmeier D., “Markov Chain Monte Carlo With Gaussian Processes for Fast Parameter Estimation and Uncertainty Quantification in a 1d Fluid‐Dynamics Model of the Pulmonary Circulation,” International Journal for Numerical Methods in Biomedical Engineering 37, no. 2 (2021): e3421, 10.1002/cnm.3421. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Du P., Zhu X., and Wang J.‐X., “Deep Learning‐Based Surrogate Model for Three‐Dimensional Patient‐Specific Computational Fluid Dynamics,” Physics of Fluids 34, no. 8 (2022): 081906, 10.1063/5.0101128. [DOI] [Google Scholar]
  • 12. Cai L., Ren L., Wang Y., Xie W., Zhu G., and Gao H., “Surrogate Models Based on Machine Learning Methods for Parameter Estimation of Left Ventricular Myocardium,” Royal Society Open Science 8, no. 1 (2021): 201121, 10.1098/rsos.201121. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Raissi M., Yazdani A., and Karniadakis G. E., “Hidden Fluid Mechanics: A Navier–Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data,” 2018.
  • 14. Garay J., Dunstan J., Uribe S., and Sahli Costabal F., “Physics‐Informed Neural Networks for Blood Flow Inverse Problems,” 2023. [DOI] [PubMed]
  • 15. Kissas G., Yang Y., Hwuang E., Witschey W. R., Detre J. A., and Perdikaris P., “Machine Learning in Cardiovascular Flows Modeling: Predicting Arterial Blood Pressure From Non‐Invasive 4d Flow Mri Data Using Physics‐Informed Neural Networks,” Computer Methods in Applied Mechanics and Engineering 358 (2020): 112623, 10.1016/j.cma.2019.112623. [DOI] [Google Scholar]
  • 16. Yang L., Meng X., and Karniadakis G. E., “B‐Pinns: Bayesian Physics‐Informed Neural Networks for Forward and Inverse Pde Problems With Noisy Data,” Journal of Computational Physics 425 (2021): 109913, 10.1016/j.jcp.2020.109913. [DOI] [Google Scholar]
  • 17. Sun L. and Wang J.‐X., “Physics‐Constrained Bayesian Neural Network for Fluid Flow Reconstruction With Sparse and Noisy Data,” Theoretical and Applied Mechanics Letters 10, no. 3 (2020): 161–169, 10.1016/j.taml.2020.01.031. [DOI] [Google Scholar]
  • 18. Huang M., Yao C., Wang P., Cheng L., and Ying W., “Physics‐Informed Data‐Driven Cavitation Model for a Specific Mie–grüneisen Equation of State,” Journal of Computational Physics 524 (2025): 113703, 10.1016/j.jcp.2024.113703. [DOI] [Google Scholar]
  • 19. Sun L., Gao H., Pan S., and Wang J.‐X., “Surrogate Modeling for Fluid Flows Based on Physics‐Constrained Deep Learning Without Simulation Data,” Computer Methods in Applied Mechanics and Engineering 361 (2020): 112732, 10.1016/j.cma.2019.112732. [DOI] [Google Scholar]
  • 20. Isaev A., Dobroserdova T., Danilov A., and Simakov S., “Physically Informed Deep Learning Technique for Estimating Blood Flow Parameters in Four‐Vessel Junction After the Fontan Procedure,” Computation 12, no. 3 (2024): 41, 10.3390/computation12030041. [DOI] [Google Scholar]
  • 21. Wu C., Zhu M., Tan Q., Kartha Y., and Lu L., “A Comprehensive Study of Non‐Adaptive and Residual‐Based Adaptive Sampling for Physics‐Informed Neural Networks,” Computer Methods in Applied Mechanics and Engineering 403 (2023): 115671, 10.1016/j.cma.2022.115671. [DOI] [Google Scholar]
  • 22. Gao Z., Yan L., and Zhou T., “Failure‐Informed Adaptive Sampling for Pinns,” SIAM Journal on Scientific Computing 45, no. 4 (2023): A1971–A1994, 10.1137/22M1527763. [DOI] [Google Scholar]
  • 23. Daw A., Bu J., Wang S., Perdikaris P., and Karpatne A., “Mitigating Propagation Failures in Physics‐Informed Neural Networks Using Retain‐Resample‐Release (r3) Sampling,” in Proceedings of the 40th International Conference on Machine Learning (PMLR, 2023), 7264–7302.
  • 24. Wang S., Li B., Chen Y., and Perdikaris P., “Piratenets: Physics‐Informed Deep Learning With Residual Adaptive Networks,” Journal of Machine Learning Research 25, no. 402 (2024): 1–51. [Google Scholar]
  • 25. Immordino G., Vaiuso A., Da Ronch A., and Righi M., “Predicting Transonic Flowfields in Non‐Homogeneous Unstructured Grids Using Autoencoder Graph Convolutional Networks,” Journal of Computational Physics 524 (2025): 113708, 10.1016/j.jcp.2024.113708. [DOI] [Google Scholar]
  • 26. Yadav V., Casel M., and Ghani A., “Rf‐Pinns: Reactive Flow Physics‐Informed Neural Networks for Field Reconstruction of Laminar and Turbulent Flames Using Sparse Data,” Journal of Computational Physics 524 (2025): 113698, 10.1016/j.jcp.2024.113698. [DOI] [Google Scholar]
  • 27. Bao G., Ma C., and Gong Y., “Pfwnn: A Deep Learning Method for Solving Forward and Inverse Problems of Phase‐Field Models,” Journal of Computational Physics 527 (2025): 113799, 10.1016/j.jcp.2025.113799. [DOI] [Google Scholar]
  • 28. Neal R. M., “MCMC Using Hamiltonian Dynamics,” 2011, 10.1201/b10905. [DOI]
  • 29. Taylor‐LaPole A. M., Paun L. M., Lior D., Weigand J. D., Puelz C., and Olufsen M. S., “Parameter Selection and Optimization of a Computational Network Model of Blood Flow in Single‐Ventricle Patients,” Journal of the Royal Society, Interface 22, no. 223 (2025): 20240663. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Paun L. M., Colebank M. J., Taylor‐LaPole A., et al., “Secret: Statistical Emulation for Computational Reverse Engineering and Translation With Applications in Healthcare,” Computer Methods in Applied Mechanics and Engineering 430 (2024): 117193, 10.1016/j.cma.2024.117193. [DOI] [Google Scholar]
  • 31. Soulat G., McCarthy P., and Markl M., “4d Flow With Mri,” Annual Review of Biomedical Engineering 22 (2020): 103–126, 10.1146/annurev-bioeng-100219-110055. [DOI] [PubMed] [Google Scholar]
  • 32. Stankovic Z., Allen B. D., Garcia J., Jarvis K. B., and Markl M., “4d Flow Imaging With MRI,” Cardiovascular Diagnosis & Therapy 4, no. 2 (2014): 173. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Lax P. and Wendroff B., “Systems of Conservation Laws,” in Selected Papers Volume I (Springer, 2005), 263–283. [Google Scholar]
  • 34. Taylor‐LaPole A. M., Colebank M. J., Weigand J. D., Olufsen M. S., and Puelz C., “A Computational Study of Aortic Reconstruction in Single Ventricle Patients,” Biomechanics and Modeling in Mechanobiology 22, no. 1 (2023): 357–377. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Moser P., Fenz W., Thumfart S., Ganitzer I., and Giretzlehner M., “Modeling of 3d Blood Flows With Physics‐Informed Neural Networks: Comparison of Network Architectures,” Fluids 8, no. 2 (2023): 46, 10.3390/fluids8020046. [DOI] [Google Scholar]
  • 36. Rasmussen C. E. and Williams C. K. I., Gaussian Processes for Machine Learning (MIT Press, 2006). [Google Scholar]
  • 37. MacKay D. J. C., “Introduction to Gaussian Processes,” NATO ASI Series F Computer and Systems Sciences 168 (1998): 133–166. [Google Scholar]
  • 38. Liu X., Xie B., Zhang D., Zhang H., Gao Z., and de Albuquerque V. H. C., “Unsupervised Physics‐Informed Deep Learning for Assessing Pulmonary Artery Hemodynamics,” Expert Systems With Applications 257 (2024): 125079, 10.1016/j.eswa.2024.125079. [DOI] [Google Scholar]
  • 39. Wang S., Teng Y., and Perdikaris P., “Understanding and Mitigating Gradient Flow Pathologies in Physics‐Informed Neural Networks,” SIAM Journal on Scientific Computing 43, no. 5 (2021): A3055–A3081, 10.1137/20M1318043. [DOI] [Google Scholar]
  • 40. Wang S., Yu X., and Perdikaris P., “When and Why Pinns Fail to Train: A Neural Tangent Kernel Perspective,” Journal of Computational Physics 449 (2022): 110768, 10.1016/j.jcp.2021.110768. [DOI] [Google Scholar]
  • 41. Liu D. and Wang Y., “A Dual‐Dimer Method for Training Physics‐Constrained Neural Networks With Minimax Architecture,” Neural Networks 136 (2021): 112–125, 10.1016/j.neunet.2020.12.028. [DOI] [PubMed] [Google Scholar]
  • 42. McClenny L. and Braga‐Neto U., “Self‐Adaptive Physics‐Informed Neural Networks Using a Soft Attention Mechanism,” 2024, arXiv:2009.04544, http://arxiv.org/abs/2009.04544.
  • 43. Wang S., Wang H., Seidman J. H., and Perdikaris P., “Random Weight Factorization Improves the Training of Continuous Neural Representations,” 2022.
  • 44. Wang S., Sankaran S., Wang H., and Perdikaris P., “An Expert's Guide to Training Physics‐Informed Neural Networks,” 2023.
  • 45. Raissi M., Perdikaris P., and Karniadakis G. E., “Physics‐Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations,” Journal of Computational Physics 378 (2019): 686–707, 10.1016/j.jcp.2018.10.045. [DOI] [Google Scholar]
  • 46. Kingma D. P. and Ba J., “Adam: A Method for Stochastic Optimization,” 2017, https://arxiv.org/abs/1412.6980.
  • 47. Brynjarsdóttir J. and O'Hagan A., “Learning About Physical Parameters: The Importance of Model Discrepancy,” Inverse Problems 30, no. 11 (2014): 114007, 10.1088/0266-5611/30/11/114007. [DOI] [Google Scholar]
  • 48. Kennedy M. C. and O'Hagan A., “Bayesian Calibration of Computer Models,” Journal of the Royal Statistical Society. Series B, Statistical Methodology 63, no. 3 (2001): 425–464, 10.1111/1467-9868.00294. [DOI] [Google Scholar]
  • 49. Bartolo M. A., Taylor‐LaPole A. M., Gandhi D., et al., “Computational Framework for the Generation of One‐Dimensional Vascular Models Accounting for Uncertainty in Networks Extracted From Medical Images,” Journal of Physiology 602, no. 16 (2024): 3929–3954, 10.1113/JP286193. [DOI] [PubMed] [Google Scholar]
  • 50. Paun L. M., Colebank M. J., Olufsen M. S., Hill N. A., and Husmeier D., “Assessing Model Mismatch and Model Selection in a Bayesian Uncertainty Quantification Analysis of a Fluid‐Dynamics Model of Pulmonary Blood Circulation,” Journal of the Royal Society, Interface 17, no. 173 (2020): 20200886, 10.1098/rsif.2020.0886. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Hoffman M. D. and Gelman A., “The No‐u‐Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo,” Journal of Machine Learning Research 15, no. 47 (2014): 1593–1623. [Google Scholar]
  • 52. Gelman A. and Rubin D. B., “Inference From Iterative Simulation Using Multiple Sequences,” Statistical Science 7, no. 4 (1992): 457–472, 10.1214/ss/1177011136. [DOI] [Google Scholar]
  • 53. Mynard J. P. and Smolich J. J., “One‐Dimensional Haemodynamic Modeling and Wave Dynamics in the Entire Adult Circulation,” Annals of Biomedical Engineering 43, no. 6 (2015): 1443–1460, 10.1007/s10439-015-1313-8. [DOI] [PubMed] [Google Scholar]
  • 54. Thijssen D. H. J., Black M. A., Pyke K. E., et al., “Assessment of Flow‐Mediated Dilation in Humans: A Methodological and Physiological Guideline,” American Journal of Physiology. Heart and Circulatory Physiology 300, no. 1 (2011): H2–H12, 10.1152/ajpheart.00471.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55. Green D. J., Jones H., Thijssen D., Cable N. T., and Atkinson G., “Flow‐Mediated Dilation and Cardiovascular Event Prediction: Does Nitric Oxide Matter?,” Hypertension 57, no. 3 (2011): 363–369, 10.1161/HYPERTENSIONAHA.110.167015. [DOI] [PubMed] [Google Scholar]
  • 56. A'Court C., Stevens R., Sanders S., Ward A., McManus R., and Heneghan C., “Type and Accuracy of Sphygmomanometers in Primary Care: A Cross‐Sectional Observational Study,” British Journal of General Practice 61, no. 590 (2011): e598–e603, 10.3399/bjgp11X593884. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57. Shahbabu B., Dasgupta A., Sarkar K., and Sahoo S. K., “Which Is More Accurate in Measuring the Blood Pressure? A Digital or an Aneroid Sphygmomanometer,” Journal of Clinical and Diagnostic Research: JCDR 10, no. 3 (2016): LC11–LC14, 10.7860/JCDR/2016/14351.7458. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58. Naghavi E., Wang H., Fan L., et al., “Rapid Estimation of Left Ventricular Contractility With a Physics‐Informed Neural Network Inverse Modeling Approach,” Artificial Intelligence in Medicine 157 (2024): 102995, 10.1016/j.artmed.2024.102995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59. Jara S., Sotelo J., Ortiz‐Puerta D., et al., “Physics‐Informed Neural Network for Modeling the Pulmonary Artery Blood Pressure From Magnetic Resonance Images: A Reduced‐Order Navier–Stokes Model,” Biomedicines 13, no. 9 (2025): 2058, 10.3390/biomedicines13092058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60. Schiavazzi D. E., Arbia G., Baker C., et al., “Uncertainty Quantification in Virtual Surgery Hemodynamics Predictions for Single Ventricle Palliation,” International Journal for Numerical Methods in Biomedical Engineering 32, no. 3 (2016): e02737, 10.1002/cnm.2737. [DOI] [PubMed] [Google Scholar]
  • 61. Davies V., Noè U., Lazarus A., et al., “Fast Parameter Inference in a Biomechanical Model of the Left Ventricle by Using Statistical Emulation,” Journal of the Royal Statistical Society. Series C, Applied Statistics 68, no. 5 (2019): 1555–1576, 10.1111/rssc.12374. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Kachabi A., Correa S. A., Chesler N. C., and Colebank M. J., “Bayesian Parameter Inference and Uncertainty Quantification for a Computational Pulmonary Hemodynamics Model Using Gaussian Processes,” Computers in Biology and Medicine 194 (2025): 110552, 10.1016/j.compbiomed.2025.110552. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63. Dalton D., Husmeier D., and Gao H., “Physics‐Informed Graph Neural Network Emulation of Soft‐Tissue Mechanics,” Computer Methods in Applied Mechanics and Engineering 417 (2023): 116351, 10.1016/j.cma.2023.116351. [DOI] [Google Scholar]
  • 64. Dalton D., Gao H., and Husmeier D., “Emulation of Cardiac Mechanics Using Graph Neural Networks,” Computer Methods in Applied Mechanics and Engineering 401 (2022): 115645, 10.1016/j.cma.2022.115645. [DOI] [Google Scholar]
  • 65. Pegolotti L., Pfaller M. R., Rubio N. L., et al., “Learning Reduced‐Order Models for Cardiovascular Simulations With Graph Neural Networks,” Computers in Biology and Medicine 168 (2024): 107676, 10.1016/j.compbiomed.2023.107676. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66. Tong G. G., Sing‐Long C. A., and Schiavazzi D. E., “InVAErt Networks for Amortized Inference and Identifiability Analysis of Lumped‐Parameter Haemodynamic Models,” Philosophical Transactions of the Royal Society of London, Series A: Mathematical, Physical and Engineering Sciences 383, no. 2293 (2025): 20240215, 10.1098/rsta.2024.0215. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67. Tong G. G., Sing Long C. A., and Schiavazzi D. E., “InVAErt Networks: A Data‐Driven Framework for Model Synthesis and Identifiability Analysis,” Computer Methods in Applied Mechanics and Engineering 423 (2024): 116846, 10.1016/j.cma.2024.116846. [DOI] [Google Scholar]
  • 68. Canuto D., Pantoja J., Han J., Dutson E., and Eldredge J., “An Ensemble Kalman Filter Approach to Parameter Estimation for Patient‐Specific Cardiovascular Flow Modeling,” Theoretical and Computational Fluid Dynamics 34 (2020): 521–544, 10.1007/s00162-020-00530-2. [DOI] [Google Scholar]
  • 69. Balaban G., Alnæs M. S., Sundnes J., and Rognes M. E., “Adjoint Multi‐Start‐Based Estimation of Cardiac Hyperelastic Material Parameters Using Shear Data,” Biomechanics and Modeling in Mechanobiology 15, no. 6 (2016): 1509–1521, 10.1007/s10237-016-0780-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70. Löhner R., Antil H., Mut F., and Cebral J., “Adjoint‐Based Estimation of Sensitivity of Clinical Measures to Boundary Conditions for Arteries,” Journal of Computational Physics 497 (2024): 112619, 10.1016/j.jcp.2023.112619. [DOI] [Google Scholar]
  • 71. Lior D., Rusin C. G., Weigand J., et al., “Deformable Registration of MRA and 4D Flow Images to Facilitate Accurate Estimation of Flow Properties Within Blood Vessels,” arXiv, 2025, 312, 03116.
  • 72. Lu L., Jin P., Pang G., Zhang Z., and Karniadakis G. E., “Learning Nonlinear Operators via DeepONet Based on the Universal Approximation Theorem of Operators,” Nature Machine Intelligence 3, no. 3 (2021): 218–229, 10.1038/s42256-021-00302-5. [DOI] [Google Scholar]

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

Data S1: cnm70147‐sup‐0001‐Supplementary.pdf.

CNM-42-e70147-s001.pdf (979KB, pdf)

Data Availability Statement

Waveform and geometry data for DORV patients included in this study can be found at https://doi.org/10.5061/dryad.zpc866tj0 and in Supporting Information of Taylor‐LaPole et al. [29].


Articles from International Journal for Numerical Methods in Biomedical Engineering are provided here courtesy of Wiley
