Published in final edited form as: Eng Comput. 2022 Sep 16;38(5):4167–4182. doi: 10.1007/s00366-022-01733-3

Data-driven Modeling of the Mechanical Behavior of Anisotropic Soft Biological Tissue

Vahidullah Tac 1, Vivek D Sree 1, Manuel K Rausch 2, Adrian B Tepole 1,3
PMCID: PMC10686525  NIHMSID: NIHMS1891552  PMID: 38031587

Abstract

Closed-form constitutive models are the standard to describe soft tissue mechanical behavior. However, inherent pitfalls of an explicit functional form include poor fits to the data, non-uniqueness of fit, and sensitivity to parameters. Here we design deep neural networks (DNN) that satisfy desirable physics constraints in order to replace expert models of tissue mechanics. To guarantee stress-objectivity, the DNN takes strain (pseudo)-invariants as inputs, and outputs the strain energy and its derivatives. Polyconvexity of strain energy is enforced through the loss function. Direct prediction of both energy and derivative functions enables the computation of the elasticity tensor needed for a finite element implementation. We showcase the DNN ability to learn the anisotropic mechanical behavior of porcine and murine skin from biaxial test data. A multi-fidelity scheme that combines high fidelity experimental data with a low fidelity analytical approximation yields the best performance. Finite element simulations of tissue expansion with the DNN model illustrate the potential of this method to impact medical device design for skin therapeutics. We expect that the open data and software from this work will broaden the use of data-driven constitutive models of tissue mechanics.

Keywords: Machine Learning, Nonlinear finite elements, Constitutive modeling, Abaqus User Subroutine UMAT, multi-fidelity models, Skin mechanics

Introduction

Skin is the largest organ in the body and understanding its mechanical properties is a crucial step in many biomedical applications, from prosthesis design to surgical intervention [1]. The tissue microstructure is characterized by the presence of semi-flexible biopolymer fiber networks such as collagen and elastin, which endow skin with nonlinear and anisotropic behavior [2]. The mechanical properties of skin are actually common across many soft connective tissues [3, 4]. Traditionally, the mechanics of skin and other soft tissues has been modelled using expert-constructed constitutive equations [5, 6, 7]. In this approach, a closed-form expression describing the main features of the mechanics of a family of materials is constructed first. Then, the free parameters in the equations are fitted to a specific material in the family to obtain a calibrated model. Inherent restrictions of the explicit functional form can result in poor fitting and high sensitivity to parameters [8]. Unfortunately, even considering just skin out of all connective soft tissue, there is currently no consensus on the choice of model that is most suitable in a particular application [9, 10, 11].

A new, emergent approach to material modeling is the use of data-driven methods [12, 13]. Among them, deep neural networks (DNN) have been successfully employed to describe the mechanical behavior of several materials [14, 15, 16, 17]. In this approach, there is no need to limit the model to an analytical representation, which results in more accurate predictions than traditional models [18, 19, 20]. Physics constraints such as objectivity of the stress and convexity of the hyperelastic strain energy potential are naturally satisfied by most closed-form constitutive models [21]. These constraints are embedded into data-driven methods either during the design of the algorithm itself [22], or as a penalty [23]. Drawbacks of existing approaches stem from scarcity of high fidelity test data to train the data-driven models [19]. This limitation is particularly prevalent in soft tissue mechanics. Additionally, there is a lack of data-driven material model software that can function in standard finite element solvers, which severely limits the applicability of these emerging methods to biomedical applications.

The route followed here lies between the purely data-driven approach and the expert modeling approach. Closed-form material models already include knowledge of physics relevant to soft tissue, observations of the underlying microstructure, and intuition from the modeller regarding the main features of the material response. For example, to model skin, we have assumed hyperelasticity and used expert-designed strain energy functions to fit murine and porcine skin data [24]. However, the error in the fits can be undesirable, the parameters non-unique, and the predictions can be highly sensitive to the parameters [25]. Here we design DNN constitutive models and train them on multi-fidelity data: analytical strain energy functions serve as low fidelity approximations, while high fidelity experimental measurements complement the data set. This approach is based on the recent literature that shows the advantage of multi-fidelity schemes over single fidelity approaches [17, 16, 26].

The proposed DNNs output the strain energy and its derivatives with respect to the isochoric strain invariants, including anisotropy, satisfying stress-objectivity a priori. The loss function is designed to impose convexity constraints on the strain energy. The multi-output design, in which both the energy and derivatives are predicted independently by the DNN but coupled through additional loss terms, provides more flexibility during training and enables the computation of the stress and elasticity tensors. As a result, we are able to implement a DNN user material (UMAT) subroutine for the widely used nonlinear finite element package Abaqus [27], and showcase its potential to impact skin therapeutics through simulations of tissue expansion. The work shown here will extend the reach of machine learning tools to improve the modeling of soft tissue mechanics, in particular through improved constitutive models ready to be used in commercial finite element codes.

Materials and Methods

Constitutive equations for a hyperelastic material with two families of fibers

In this study we use a Helmholtz free energy, Ψ, that is a function of the right Cauchy-Green deformation tensor, C, and two material direction vectors in the reference configuration, v0 and w0. This form of the Helmholtz free energy function allows for greater flexibility in recreating the mechanical behavior of materials where more than one family of fibers is present or even when the orientation of fibers is random, which is usually the case in biological tissues. For soft tissues, which we assume to be nearly incompressible, the additive split into isochoric and volumetric parts is used [5],

$\Psi = \Psi_{iso}(\hat{I}_1, \hat{I}_2, \hat{I}_{4v}, \hat{I}_{4w}) + \Psi_{vol}(J),$  (1)

where $J = \det F$ is the volume change, and the isochoric strain invariants, $\hat{I}_1$, $\hat{I}_2$, $\hat{I}_{4v}$ and $\hat{I}_{4w}$, are defined as

$\hat{I}_1 = \hat{C} : I = \mathrm{tr}(\hat{C}), \quad \hat{I}_2 = \frac{1}{2}\left[\hat{I}_1^2 - \mathrm{tr}(\hat{C}^2)\right], \quad \hat{I}_{4v} = \hat{C} : v_0 \otimes v_0 = \hat{C} : V_0, \quad \hat{I}_{4w} = \hat{C} : w_0 \otimes w_0 = \hat{C} : W_0.$  (2)

The isochoric right Cauchy Green deformation, Cˆ, can be defined in terms of the isochoric part of the deformation gradient,

$\hat{F} = J^{-1/3} F,$  (3)
$\hat{C} = \hat{F}^T \hat{F} = J^{-2/3} C.$  (4)
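For illustration, the kinematic quantities in Eqs. (2)–(4) can be evaluated numerically as in the following minimal NumPy sketch; the function and variable names are ours and are not part of the released code.

```python
import numpy as np

def isochoric_invariants(F, v0, w0):
    """Volume change J and isochoric invariants of Eqs. (2)-(4).

    F      : (3,3) deformation gradient
    v0, w0 : unit fiber direction vectors in the reference configuration
    """
    J = np.linalg.det(F)
    C = F.T @ F                       # right Cauchy-Green deformation tensor
    C_hat = J**(-2.0 / 3.0) * C       # isochoric part, Eq. (4)
    I1 = np.trace(C_hat)
    I2 = 0.5 * (I1**2 - np.trace(C_hat @ C_hat))
    I4v = v0 @ C_hat @ v0             # C_hat : (v0 x v0)
    I4w = w0 @ C_hat @ w0
    return J, I1, I2, I4v, I4w

# Example: equibiaxial stretch of an incompressible material (J = 1)
lam = 1.1
F = np.diag([lam, lam, 1.0 / lam**2])
print(isochoric_invariants(F, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))
```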

The second Piola-Kirchhoff stress tensor, S, follows from the Doyle-Ericksen formula by differentiating the strain energy Ψ with respect to C and following the procedure outlined by Coleman and Noll [28, 29], arriving at

$S = 2\frac{\partial \Psi}{\partial C} = S_{iso} + S_{vol},$  (5)
$S_{iso} = \hat{S} : \frac{\partial \hat{C}}{\partial C} = J^{-2/3}\, \hat{S} : \mathbb{P}_1, \qquad S_{vol} = 2p\frac{\partial J}{\partial C} = J p\, C^{-1}.$  (6)

The following definition of the pressure has been introduced: $p = \mathrm{d}\Psi_{vol}/\mathrm{d}J$. Additionally, the fictitious second Piola-Kirchhoff stress tensor, $\hat{S}$, results from differentiating the isochoric part of the strain energy with respect to the isochoric invariants, i.e. $\hat{\Psi}_1 = \partial\Psi_{iso}/\partial\hat{I}_1$, $\hat{\Psi}_2 = \partial\Psi_{iso}/\partial\hat{I}_2$, $\hat{\Psi}_{4v} = \partial\Psi_{iso}/\partial\hat{I}_{4v}$ and $\hat{\Psi}_{4w} = \partial\Psi_{iso}/\partial\hat{I}_{4w}$. The full expansion of the fictitious stress tensor is

$\hat{S} = 2\frac{\partial \Psi_{iso}}{\partial \hat{C}} = 2\left[\hat{\Psi}_1 I + \hat{\Psi}_2(\hat{I}_1 I - \hat{C}) + \hat{\Psi}_{4v} V_0 + \hat{\Psi}_{4w} W_0\right].$  (7)

The term fictitious originates from the fact that derivatives with respect to the full right Cauchy-Green deformation tensor require the projection with the fourth order tensor

$\mathbb{P}_1 = \mathbb{I} - \frac{1}{3}\, C^{-1} \otimes C,$

which relates derivatives with respect to the isochoric part of the deformation to derivatives with respect to the total deformation. Note that if the material under consideration is incompressible, i.e. $J = 1$, then $\hat{C} = C$, and Eq. (5) reduces to

$S = 2\left[\hat{\Psi}_1 I + \hat{\Psi}_2(\hat{I}_1 I - C) + \hat{\Psi}_{4v} V_0 + \hat{\Psi}_{4w} W_0\right] + p\, C^{-1},$  (8)

and the pressure p becomes an unknown Lagrange multiplier field. In certain cases, p can be solved from boundary conditions. In this study, the nearly incompressible formulation is used in the finite element formulation, while the incompressible formulation is used during neural network training since this constraint can be easily enforced for the plane stress biaxial deformations considered.

The UMAT subroutine requires the computation of the Cauchy stress tensor, σ. Thus, for completeness, we state the standard push forward operation for the stress

$\sigma = \frac{1}{J}\, F S F^T.$  (9)
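As a concrete example of Eqs. (8) and (9), the sketch below assembles the Cauchy stress for the incompressible case used during training, with the pressure p eliminated through the plane-stress condition $\sigma_{zz} = 0$ of the biaxial tests; this is a minimal illustration with our own function names, not the published implementation.

```python
import numpy as np

def cauchy_stress_incompressible(F, v0, w0, dPsi):
    """Cauchy stress from Eqs. (8)-(9) for an incompressible material (J = 1).

    dPsi : (Psi1, Psi2, Psi4v, Psi4w), derivatives of the isochoric strain
           energy with respect to the invariants (e.g. the DNN outputs).
    The pressure p is eliminated with the plane-stress condition sigma_zz = 0.
    """
    Psi1, Psi2, Psi4v, Psi4w = dPsi
    C = F.T @ F                        # with J = 1, C_hat = C
    I = np.eye(3)
    I1 = np.trace(C)
    V0 = np.outer(v0, v0)
    W0 = np.outer(w0, w0)
    S_bar = 2.0 * (Psi1 * I + Psi2 * (I1 * I - C) + Psi4v * V0 + Psi4w * W0)
    sigma_bar = F @ S_bar @ F.T        # push-forward of the deviatoric part
    p = -sigma_bar[2, 2]               # plane stress: sigma_zz = 0
    return sigma_bar + p * I           # note: p C^{-1} pushes forward to p I
```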

The finite element subroutine also requires the computation of the elasticity tensor, $\mathbb{c}^{abaqus}$ [30]. For ease of derivation, the material version of the elasticity tensor, $\mathbb{C}$, is introduced first,

$\mathbb{C} = 2\frac{\partial S}{\partial C} = \mathbb{C}_{iso} + \mathbb{C}_{vol},$  (10)
$\mathbb{C}_{iso} = 2\frac{\partial S_{iso}}{\partial C}, \qquad \mathbb{C}_{vol} = 2\frac{\partial S_{vol}}{\partial C}.$  (11)

The expressions for the volumetric and isochoric parts of the elasticity tensor, $\mathbb{C}_{vol}$ and $\mathbb{C}_{iso}$, can be further expanded,

$\mathbb{C}_{vol} = J\tilde{p}\, C^{-1} \otimes C^{-1} - 2Jp\, C^{-1} \odot C^{-1},$  (12)
$\mathbb{C}_{iso} = -\frac{2}{3}\, S_{iso} \otimes C^{-1} + J^{-4/3}\, \mathbb{P}_1 : \hat{\mathbb{C}} : \mathbb{P}_1^T - \frac{2}{3}\, C^{-1} \otimes S_{iso} + \frac{2}{3} J^{-2/3}\, \mathrm{tr}(\hat{S})\, \mathbb{P}_2,$  (13)

where the modified pressure term, $\tilde{p} = p + J\, \mathrm{d}p/\mathrm{d}J$, has been introduced, as well as the special product denoted by $\odot$ and defined as $(A \odot B)_{ijkl} = [A_{ik} B_{jl} + A_{il} B_{jk}]/2$, and an additional fourth order projection tensor $\mathbb{P}_2$,

$\mathbb{P}_2 = C^{-1} \odot C^{-1} - \frac{1}{3}\, C^{-1} \otimes C^{-1}.$

Finally, the fictitious elasticity tensor, $\hat{\mathbb{C}}$, is obtained from differentiating the fictitious stress tensor with respect to the isochoric part of the deformation tensor,

$\hat{\mathbb{C}} = 2\frac{\partial \hat{S}}{\partial \hat{C}}.$

The full expansion of $\hat{\mathbb{C}}$ is available in the Supplement. The only remark needed in the main text is that the tensor $\hat{\mathbb{C}}$ requires the second derivatives of the strain energy function with respect to the isochoric invariants: $\hat{\Psi}_{11} = \partial^2\Psi_{iso}/\partial\hat{I}_1^2$, $\hat{\Psi}_{12} = \partial^2\Psi_{iso}/\partial\hat{I}_1\partial\hat{I}_2$, $\hat{\Psi}_{14v} = \partial^2\Psi_{iso}/\partial\hat{I}_1\partial\hat{I}_{4v}$, etc. This point will become important in the design of the neural network later on.

As stated above, the elasticity tensor needed in the UMAT subroutine is associated with the deformed configuration. The push-forward operation for the elasticity tensor yields

$\mathbb{c} = \frac{1}{J}\, (F \,\bar{\otimes}\, F) : \mathbb{C} : (F \,\bar{\otimes}\, F)^T,$  (14)

where we have introduced the modified dyadic product defined as $(A \,\bar{\otimes}\, B)_{ijkl} = A_{ik} B_{jl}$. The tensor $\mathbb{c}$ is related to the Truesdell stress rate; however, Abaqus increments employ the Jaumann stress rate. Therefore, the consistent tangent for Abaqus is not Eq. (14) but rather

$\mathbb{c}^{abaqus} = \mathbb{c} + \frac{1}{2}\left(\sigma \,\bar{\otimes}\, I + \sigma \,\underline{\otimes}\, I + I \,\bar{\otimes}\, \sigma + I \,\underline{\otimes}\, \sigma\right),$  (15)

with the modified dyadic product $(A \,\underline{\otimes}\, B)_{ijkl} = A_{il} B_{jk}$.

As remarked before, incompressibility is imposed exactly during the training of the neural network, with p determined from boundary conditions. However, in the UMAT we use a volumetric strain energy which leads to the following expression for p,

$p = K(J - 1),$  (16)

with K the bulk modulus. In this study we set K=1 MPa.

Neural network structure and training

We use a fully connected DNN to learn the mechanical behavior of skin. The neural network takes four inputs, the isochoric strain invariants in Eq. (2), and produces five outputs, the strain energy, $\Psi_{iso}^p$, and its derivatives with respect to the invariants, $\hat{\Psi}_1^p$, $\hat{\Psi}_2^p$, $\hat{\Psi}_{4v}^p$ and $\hat{\Psi}_{4w}^p$. Note that the notation $(\cdot)^p$ is used to denote the values predicted by the DNN. The network architecture is summarized in Table 1.

Table 1:

Neural network architecture

Layer Number of nodes Activation function
Input 4 None
Hidden layer 1 4 Sigmoid
Hidden layer 2 8 Sigmoid
Hidden layer 3 8 Sigmoid
Output 5 Linear/Quadratic
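A minimal Keras sketch of the architecture in Table 1 is shown below for orientation; the layer names are ours, and the quadratic activation is applied only to the four derivative outputs, as done when the convexity constraint is active.

```python
import tensorflow as tf
from tensorflow import keras

# Architecture of Table 1: 4 invariants in, strain energy + 4 derivatives out.
inputs = keras.Input(shape=(4,), name="invariants")          # I1, I2, I4v, I4w
x = keras.layers.Dense(4, activation="sigmoid")(inputs)
x = keras.layers.Dense(8, activation="sigmoid")(x)
x = keras.layers.Dense(8, activation="sigmoid")(x)
energy = keras.layers.Dense(1, activation="linear", name="Psi")(x)
derivs = keras.layers.Dense(4, name="dPsi_raw")(x)
# Quadratic output activation keeps the predicted derivatives non-negative.
derivs = keras.layers.Lambda(lambda y: tf.square(y), name="dPsi")(derivs)
model = keras.Model(inputs, [energy, derivs])
model.summary()
```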

Training data for the DNN is in the form of stretch and stress data, as well as the values of strain energy obtained by integration of the stretch-stress curves. Therefore, the first component of the loss function is simply the comparison of the predicted strain energy, $\Psi_{iso}^p$, against the observed $\Psi_{iso}^d$, where $(\cdot)^d$ is used to refer to data:

$\mathcal{L}_1 = \frac{1}{N}\sum_{n=1}^{N}\left[\left((\Psi_{iso}^p)^{(n)} - (\Psi_{iso}^d)^{(n)}\right)^2 + \sum_{i=1,2,4v,4w}\left((\hat{\Psi}_i^{bp})^{(n)} - (\hat{\Psi}_i^{p})^{(n)}\right)^2\right],$  (17)

where $(\cdot)^{(n)}$ denotes the $n$th training point, out of a total of $N$ training points. The second term in Eq. (17) is a regularization term that is added to the loss function to enforce that the predicted derivatives are consistent with the strain energy. In other words, the derivatives that are a direct output of the neural network, $\hat{\Psi}_i^{p}$, should coincide with the derivatives of $\Psi_{iso}^p$ calculated using back-propagation, denoted as $\hat{\Psi}_i^{bp}$ in Eq. (17), with $i = 1, 2, 4v, 4w$, and where $(\cdot)^{bp}$ stands for “back-propagation”.

The second component of the loss results from comparing the stress computed with the neural network outputs against the observed stress $\sigma^d$. The stress, defined in Eqs. (8) and (9), is computed based on the direct strain energy derivatives output by the neural network, $\hat{\Psi}_i^{p}$, to produce $\sigma^p$. The loss for the stress data can then be simply stated as

$\mathcal{L}_2 = \frac{1}{N}\sum_{n=1}^{N}\left\| (\sigma^p)^{(n)} - (\sigma^d)^{(n)} \right\|_F,$  (18)

where $\|\cdot\|_F$ denotes the Frobenius norm.

To guarantee existence of a solution to the boundary value problem, a suitable constraint on the strain energy is that of polyconvexity with respect to the deformation gradient, F [21, 31, 32]. An alternative approach is to enforce convexity of the strain energy with respect to C [33]. It has been shown that convexity with respect to C can lead to the existence of global minima in boundary value problems under certain conditions [34]. Convexity with respect to C has been employed in constitutive modeling of biological tissues [35] and in numerous studies on data-driven models of hyperelastic materials [33, 19]. For instance, the popular constitutive model by Holzapfel, Gasser and Ogden to capture the mechanical behavior of collagenous tissues was developed to fulfill this condition [5]. However, convexity with respect to C, and polyconvexity with respect to F are not equivalent. Polyconvexity of the strain energy function with respect to F, together with some growth conditions on the strain energy, guarantees the existence of global minimizers to the total potential energy functional [31].

In the current study we enforce convexity of the strain energy function with respect to the deformation invariants of C, with a global minimum at or below the point $I_1 = 3$, $I_2 = 3$, $I_{4v} = 1$ and $I_{4w} = 1$. The invariants are already convex functions of $F$ or $\mathrm{cof}\,F$ [36, 21]. Thus, a non-decreasing convex function of the invariants, for $I_1 \geq 3$, $I_2 \geq 3$, $I_{4v} \geq 1$ and $I_{4w} \geq 1$, together with a suitable volumetric energy convex in $J$, results in polyconvexity of the strain energy with respect to $F$.

For a function to be convex with respect to its arguments, its Hessian matrix, $H$, must be positive semi-definite [37]. The Hessian matrix of the strain energy as a function of the invariants is

$H = \begin{pmatrix} \Psi_{11}^{bp} & \Psi_{12}^{bp} & \Psi_{14v}^{bp} & \Psi_{14w}^{bp} \\ \Psi_{21}^{bp} & \Psi_{22}^{bp} & \Psi_{24v}^{bp} & \Psi_{24w}^{bp} \\ \Psi_{4v1}^{bp} & \Psi_{4v2}^{bp} & \Psi_{4v4v}^{bp} & \Psi_{4v4w}^{bp} \\ \Psi_{4w1}^{bp} & \Psi_{4w2}^{bp} & \Psi_{4w4v}^{bp} & \Psi_{4w4w}^{bp} \end{pmatrix}.$  (19)

The notation $\Psi_{ij}^{bp}$ indicates the second derivative of the strain energy computed with the neural network by differentiating the outputs $\hat{\Psi}_i^{p}$ with respect to the $j$th input using back-propagation. We impose positive-definiteness of the Hessian matrix using the principal minor test [38]. For a matrix to be positive definite, it has to be symmetric and all its leading principal minors, $\Delta_k$, must be positive. This condition is imposed in terms of an additional loss term,

$\mathcal{L}_3 = \frac{1}{N}\sum_{n=1}^{N}\left\| H^{(n)} - (H^T)^{(n)} \right\|_F + \frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{4}\max\left(-\Delta_k^{(n)},\, 0\right).$  (20)

Note that non-negative derivatives are obtained by passing the outputs $\hat{\Psi}_i^{p}$ through the quadratic function $y_i = x_i^2$ as indicated in Table 1. The choice of Linear or Quadratic activation functions in the last layer is selected in the examples below depending on whether convexity is imposed or not. The non-negative outputs do not directly enforce convexity. Rather, given that the invariants considered are convex functions of $F$ or $\mathrm{cof}\,F$, convex non-decreasing functions of these invariants are needed for polyconvexity. The quadratic activation functions for the $\hat{\Psi}_i^{p}$ outputs in the last layer enforce the non-decreasing condition for the strain energy by restricting the derivatives to be non-negative.
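The convexity penalty can be assembled with automatic differentiation, as in the following TensorFlow sketch; it assumes a model in the form of Table 1 returning (Ψ, derivatives), and the exact bookkeeping of the published loss may differ from this reconstruction of Eq. (20).

```python
import tensorflow as tf

def convexity_loss(model, inv_batch):
    """Sketch of the convexity penalty of Eq. (20).

    `model` is assumed to map a batch of invariants to (Psi, dPsi),
    with dPsi of shape (batch, 4).
    """
    inv = tf.convert_to_tensor(inv_batch, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(inv)
        _, dPsi = model(inv)
    # Hessian via back-propagation: H[n, i, j] = d(dPsi_i)/d(invariant_j)
    H = tape.batch_jacobian(dPsi, inv)                        # (batch, 4, 4)
    sym_penalty = tf.norm(H - tf.transpose(H, perm=[0, 2, 1]), axis=(1, 2))
    minor_penalty = 0.0
    for k in range(1, 5):                                     # leading principal minors
        det_k = tf.linalg.det(H[:, :k, :k])
        minor_penalty += tf.nn.relu(-det_k)                   # penalize negative minors
    return tf.reduce_mean(sym_penalty + minor_penalty)
```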

The total loss is a weighted sum of the terms discussed so far,

$\mathcal{L} = a_1 \mathcal{L}_1 + a_2 \mathcal{L}_2 + a_3 \mathcal{L}_3.$  (21)

If training data from sources with different fidelities are used, the total loss of the multi-fidelity (mf) dataset is given as a weighted sum of the losses of the high fidelity (hf) and low fidelity (lf) datasets,

$\mathcal{L}_{mf} = \mathcal{L}_{lf} + a_{hf}\, \mathcal{L}_{hf}.$  (22)

The training of the DNN was performed using the Adam optimization algorithm [39]. The initial learning rate was set to $4.0\times 10^{-5}$. The exponential decay rates for the first and second moment estimates, $\beta_1$ and $\beta_2$, were set to 0.9 and 0.99, respectively. The DNN was trained for 100,000 epochs without the use of batching. The training was implemented using Keras [40] with a Tensorflow [41] back-end on a workstation with the following specifications: Intel Xeon E5–1630 3.70 GHz CPU, 16 GB DDR4/2400 MHz random access memory, and Nvidia GeForce GTX 1080 GPU. The values of the three weights $a_1$, $a_2$ and $a_3$ are set to 0.1, 1.0 and 0.008, respectively, after performing a hyperparameter study as reported in the Supplement.
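For reference, the training configuration reported above can be written as follows; the individual loss terms are assumed to be computed elsewhere (Eqs. 17, 18 and 20), and the 50:1 high- to low-fidelity weighting is taken from the training-data section below.

```python
from tensorflow import keras

# Optimizer and loss weights as reported in the text.
optimizer = keras.optimizers.Adam(learning_rate=4.0e-5, beta_1=0.9, beta_2=0.99)
a1, a2, a3 = 0.1, 1.0, 0.008      # weights from the hyperparameter study
a_hf = 50.0                       # high- vs low-fidelity weight ratio, Eq. (22)

def total_loss(l1, l2, l3):
    """Weighted sum of the energy, stress and convexity terms, Eq. (21)."""
    return a1 * l1 + a2 * l2 + a3 * l3

def multifidelity_loss(loss_lf, loss_hf):
    """Combination of low- and high-fidelity losses, Eq. (22)."""
    return loss_lf + a_hf * loss_hf
```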

Synthetic data generation

In the majority of biomedical applications it is difficult to obtain sufficient high fidelity data to train a neural network. The number of measurements might be limited, or the data points may be constrained to a narrow region of the input space. It is then beneficial to make use of low fidelity data if available.

In this study, high fidelity data is in the form of biaxial stress-stretch measurements. However, only two or three curves within the four-dimensional input space defined by the invariants are explored. Therefore, we generate synthetic data using the Gasser-Ogden-Holzapfel (GOH) [6] material model. The GOH model proposes an isochoric strain energy of the form

$\hat{\Psi}(C, a_0) = \hat{\Psi}_{iso}(C) + \hat{\Psi}_{aniso}(C, a_0),$  (23)

where $a_0 = (\sin\theta, \cos\theta, 0)$ is a vector denoting the mean fiber direction, parameterized by the angle $\theta$. The functional forms for the GOH strain energy are

$\hat{\Psi}_{iso}(C) = \mu(\hat{I}_1 - 3),$  (24)
$\hat{\Psi}_{aniso}(C, a_0) = \frac{k_1}{4 k_2}\left[\exp(k_2 E^2) - 1\right],$  (25)

with the generalized fiber strain

$E = \left[\kappa\hat{I}_1 + (1 - 3\kappa)\hat{I}_{4v} - 1\right].$  (26)

The volumetric term is the same as the one we used to penalize volume changes in our formulation [5, 6, 42],

$\Psi_{vol} = \frac{K}{2}(J - 1)^2.$  (27)

The derivation of the stress tensor for the GOH strain energy is not repeated here; the interested reader is referred to [25, 6].

Synthetic data with the GOH model is generated by fitting the free parameters to the experimental data using the BFGS optimization algorithm in SciPy [43].
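The fitting step can be sketched as follows, assuming incompressible, plane-stress biaxial deformations and the GOH strain energy of Eqs. (23)–(27); the initial parameter guess and the data-loading step are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def goh_biaxial_stress(params, lx, ly):
    """In-plane Cauchy stresses of the GOH model (Eqs. 23-27) for an
    incompressible biaxial deformation (lambda_x, lambda_y), plane stress in z.
    params = (mu, k1, k2, kappa, theta)."""
    mu, k1, k2, kappa, theta = params
    lz = 1.0 / (lx * ly)                         # incompressibility, J = 1
    C = np.diag([lx**2, ly**2, lz**2])
    I1 = np.trace(C)
    a0 = np.array([np.sin(theta), np.cos(theta), 0.0])
    I4 = a0 @ C @ a0
    E = kappa * I1 + (1.0 - 3.0 * kappa) * I4 - 1.0
    dpsi_dE = 0.5 * k1 * E * np.exp(k2 * E**2)   # derivative of Eq. (25) w.r.t. E
    Psi1 = mu + kappa * dpsi_dE
    Psi4 = (1.0 - 3.0 * kappa) * dpsi_dE
    F = np.diag([lx, ly, lz])
    S_bar = 2.0 * (Psi1 * np.eye(3) + Psi4 * np.outer(a0, a0))
    sigma = F @ S_bar @ F.T
    sigma -= sigma[2, 2] * np.eye(3)             # enforce sigma_zz = 0
    return sigma[0, 0], sigma[1, 1]

def residual(params, lx, ly, sx, sy):
    """Sum of squared stress errors over the experimental points."""
    err = 0.0
    for i in range(len(lx)):
        sxm, sym = goh_biaxial_stress(params, lx[i], ly[i])
        err += (sxm - sx[i])**2 + (sym - sy[i])**2
    return err

# lx, ly, sx, sy = ...  # measured stretches and stresses (not shown)
# fit = minimize(residual, x0=[10.0, 1.0, 1.0, 0.1, 0.0],
#                args=(lx, ly, sx, sy), method="BFGS")
```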

Finite element method implementation

We implemented a general neural network material model in a user material subroutine (UMAT) in the nonlinear finite element package Abaqus. The subroutine was written with minimal assumptions to allow for maximal flexibility. The neural network structure, weights and biases, activation functions, etc. are all imported into the subroutine through the input file.

The subroutine performs the following tasks:

  1. Read in the architecture, weights and biases, activation function types, etc., as a set of material properties.

  2. Pre-process the deformation gradient to obtain the isochoric invariants in Eqs. (2)–(4).

  3. Perform the forward propagation of the neural network to obtain the predicted strain energy Ψp and its first derivatives Ψip.

  4. Calculate the stress using Eqs. (5)–(7) and (9).

  5. Calculate the second derivatives Ψijbp with back-propagation.

  6. Compute the consistent tangent 𝕔abaqus using Eq. (15).

For the forward propagation, let $y_{i-1} \in \mathbb{R}^m$ be the output of layer $i-1$ of the neural network with $m$ nodes. Then the output of layer $i$ is given as

$y_i = g_i\left(W_i^T y_{i-1} + B_i\right), \qquad y_i \in \mathbb{R}^n,$  (28)

where $g_i : \mathbb{R} \to \mathbb{R}$ is the element-wise activation function, $W_i \in \mathbb{R}^{m \times n}$ is the weights matrix, and $B_i \in \mathbb{R}^n$ is the biases vector of the $i$th layer of the network. We use the sigmoid activation function in the hidden layers,

$g(y) = \frac{1}{1 + e^{-y}}, \qquad g'(y) \equiv \frac{\mathrm{d}g(y)}{\mathrm{d}y} = g(y)\left(1 - g(y)\right).$  (29)

For the derivatives, let $J_{i-1} \in \mathbb{R}^{m \times m_0}$ be the matrix containing the derivatives of the nodes of layer $i-1$ with respect to the inputs of the neural network. Then

$J_i = \mathrm{diag}\left(g'(y_i)\right) W_i^T J_{i-1}, \qquad J_i \in \mathbb{R}^{n \times m_0},$  (30)

where $m_0$ is the number of inputs to the neural network and $\mathrm{diag}(\cdot)$ denotes a diagonal matrix.
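The two recursions in Eqs. (28)–(30) can be summarized in a short NumPy sketch; the actual UMAT carries them out in Fortran inside Abaqus, and the quadratic output activation used for the derivative outputs (Table 1) is omitted here for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_with_derivatives(x, weights, biases):
    """Forward pass (Eq. 28) and derivative propagation (Eq. 30).

    weights[i] has shape (m, n) (inputs x outputs, Keras convention) and
    biases[i] has shape (n,); x holds the four invariants.
    Returns the network outputs and their Jacobian with respect to x.
    """
    y = np.asarray(x, dtype=float)
    J = np.eye(y.size)                        # d(layer output)/d(network inputs)
    last = len(weights) - 1
    for i, (W, B) in enumerate(zip(weights, biases)):
        z = W.T @ y + B                       # Eq. (28)
        if i < last:                          # sigmoid hidden layers, Eq. (29)
            y = sigmoid(z)
            J = np.diag(y * (1.0 - y)) @ (W.T @ J)   # Eq. (30)
        else:                                 # linear output layer
            y, J = z, W.T @ J
    return y, J
```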

Biaxial stress-stretch experiments on porcine and murine skin

We use experimental data from biaxial stress-stretch experiments performed on murine [24] and porcine skin for the training and validation of neural networks. The data is collected in up to 5 different experimental protocols which are defined in Table 2.

Table 2:

Experimental loading protocols.

Loading λ x λ y σ z
Off-x λ λ 0
Off-y λ λ 0
Equibiaxial λ λ 0
Strip-x λ 1 0
Strip-y 1 λ 0

Training data

High fidelity training data used in this study consists of 13 sets of experimental data obtained from 2 pigs and 11 mice. The first porcine dataset consists of 122 data points in the off-x and off-y loading protocols; the second porcine dataset consists of 402 data points encompassing all 5 loading protocols in Table 2. The murine dataset consists of 549 points in the off-x, off-y and equibiaxial protocols.

Low fidelity data was generated using the GOH material model. For each of the 3 high fidelity datasets, first the free parameters of the GOH model were fitted to the data. Then the model was used to generate 225 synthetic data points for each of the porcine datasets and 165 points for the murine dataset.

During training of the neural networks, the contribution of the high fidelity data is weighted higher than the low fidelity data. This guides the neural network to adhere to the experimental data more closely while approximating the low fidelity data in regions with no high fidelity data. In this study the ratio between the weights was set to 50 : 1.

Results

Performance of the neural network against synthetic data

To test the DNN material model, we first train the network using synthetic data only. We generate eleven curves in the $\lambda_x, \lambda_y$ space by first holding $\lambda_x = 1$ while $\lambda_y$ is increased gradually to $\lambda_y^{(i)}$, with $i = 1, \ldots, 11$. The values for the y stretch are $\lambda_y^{(i)} \in \{1, 1.025, 1.05, \ldots, 1.25\}$. After reaching the corresponding $\lambda_y^{(i)}$ value, $\lambda_y$ is held constant while $\lambda_x$ is gradually increased (Figure 2A). These loading curves are representative of the type of test that can be performed experimentally. On the other hand, the DNN takes as inputs the isochoric strain invariants. The invariant space is 4-dimensional, but we plot a 3-dimensional projection in Figure 2B. We use the GOH material model to generate synthetic stress data points and train the neural network. Various components of the loss are plotted in Figure 2C. The predictions of the trained network are plotted against the training data in Figure 2D–F. These results indicate that the DNN is able to recreate almost perfectly the expert constitutive models within the training region.

Figure 2:

Synthetic data generated to train the neural network and the performance of the neural network compared to the training data. (A) Training data was generated by creating curves in the λx,λy stretch space. (B) The corresponding training data in the invariant space, which is the actual input space for the neural network. The invariant space is four-dimensional but only a three-dimensional projection is shown. The colors of the curves indicate the value of the fourth invariant. (C) Various components of the loss during training. (D) Predicted and ground truth strain energy values throughout the input space. Predicted and ground truth planar stress values in the x (E) and y (F) directions. The colors of the curves in (D), (E) and (F) indicate the value of λx.

We also test if the DNN performs well outside the training region. We generate three validation datasets. The first validation dataset is built by randomly sampling $\lambda_x \in [1, 1.25]$, $\lambda_y \in [1, 1.25]$ to construct diagonal deformation gradients of biaxial deformations not seen during training. Then, to test predictions under shear, which is not directly part of the training data, we construct a second dataset of deformation gradients that combine the biaxial stretches with an in-plane shear component $\gamma_{xy}$; this validation dataset is generated by randomly sampling $\lambda_x \in [1, 1.25]$, $\lambda_y \in [1, 1.25]$, $\gamma_{xy} \in [0, 0.3]$. Lastly, we are interested in the potential of the neural network to extrapolate. An additional validation set is constructed by sampling outside the training region: $\lambda_x \in [1, 1.25]$ but $\lambda_y \in [1.25, 1.35]$; $\lambda_y \in [1, 1.25]$ but $\lambda_x \in [1.25, 1.35]$; and $\lambda_x \in [1.25, 1.35]$ and $\lambda_y \in [1.25, 1.35]$. The errors for the validation datasets are shown in Figure 3.

Figure 3:

Validation of the neural network trained on synthetic data. Performance of the neural network on points randomly sampled from: (A) $\lambda_x \in [1, 1.25]$ and $\lambda_y \in [1, 1.25]$, (B) $\lambda_x \in [1, 1.25]$, $\lambda_y \in [1, 1.25]$, $\gamma_{xy} \in [0, 0.3]$, and (C) $\lambda_x \in [1, 1.25]$ but $\lambda_y \in [1.25, 1.35]$; $\lambda_y \in [1, 1.25]$ but $\lambda_x \in [1.25, 1.35]$; and $\lambda_x \in [1.25, 1.35]$ but $\lambda_y \in [1.25, 1.35]$. The colorbar indicates the error of each point as defined at the bottom of the figure.

The stresses $\hat{\sigma}^p$ and $\hat{\sigma}^d$ are normalized such that each entry of these tensors is obtained by subtracting the mean and dividing by the standard deviation of the stress values over the validation dataset, e.g. $\hat{\sigma}_{ij}^p = (\sigma_{ij}^p - \sigma_{ij}^{d,avg})/\Sigma_{ij}^d$, with $\sigma_{ij}^{d,avg}$ the mean of that stress component over the validation data, and $\Sigma_{ij}^d$ the corresponding standard deviation. If the data were normal, then the normalized quantities would be almost entirely in the range [−3, 3]. Even though the data is not normal, this is a useful scaling of the stress. Relative errors are high in the low strain region for which the stress is negligible (orders of magnitude lower than in the high stress regions), and absolute errors are higher in regions of high stress even if the relative error is small. The normalized stress better captures the performance of the DNN over the input space. It can be seen that the DNN performs well within the training region but worse toward the boundary of the training region.

Performance against experimental data: multi-fidelity data and convexity constraints

Next, we start training the DNN using experimental data. We want to test the effect of using the experimental data alone (sparse high fidelity data), or combining these data with the low fidelity approximation of the GOH model fit (multi-fidelity data). Concurrently, we want to test whether the convexity constraint is required to regularize the fits of the DNN. In Figs. 4 and 5, we show the results for murine skin data and porcine skin data, respectively, together with the error in the predictions, which is defined as the average Frobenius norm of the error in stress, $\mathrm{mean}(\|\sigma^p - \sigma^d\|_F)$.

Figure 4:

Performance of the DNN on the murine skin data and average prediction error, E. The first two and the third panels of each row show the predicted stress vs actual stress on the training (Off-x and Off-y) and validation (Equibiaxial) sets, respectively, while the fourth panel shows the convexity loss throughout the input space. Each row corresponds to a separate DNN trained differently. Predictions of neural networks trained with (top to bottom): single-fidelity data and no convexity constraints, single-fidelity data and convexity constraints, multi-fidelity data and no convexity constraints, multi-fidelity data and convexity constraints.

Figure 5:

Performance of DNNs trained on single-fidelity (first and third columns) and multi-fidelity (second and fourth columns) training data. The scatter plots compare predicted strain energy and stress values to experimental data as well as GOH model outputs. The contour plots show the difference between the corresponding outputs of the GOH model and the DNN.

The first row of Figure 4 corresponds to a neural network that is trained using sparse high-fidelity data where the convexity constraints are not imposed. Figure 4A and B show the DNN ability to fit the off-x and off-y data, achieving average errors of 4.56 kPa and 5.25 kPa, respectively. A biaxial test, not used in training, is used to test the predictive capability of the network (Figure 4C). The average error in the validation is 7.33 kPa. Because no convexity constraint is used, we can see that convexity is not satisfied (Figure 4D). Keeping only the high-fidelity data but imposing convexity changes the performance. The training and validation losses are poorer (Figure 4E–G), but the function is convex over the input space (Figure 4H).

The third and fourth rows of Figure 4 show the results of DNNs trained with multi-fidelity data. It is notable that even though the network of the third row is trained without any convexity constraints, the fact that it is trained on the GOH synthetic data (which is an inherently convex model) helps it achieve better convexity (Figure 4L). The average error in the validation set for the multi-fidelity case without convexity constraint is 11.75 kPa (Figure 4K), which is worse than the sparse high fidelity case without convexity requirements (Figure 4C). The results for the neural network trained with multi-fidelity data and with the convexity constraint are shown in the last row of Figure 4. The training errors are 8.21 kPa and 7.27 kPa (Figure 4M and N), and the validation error is 11.47 kPa (Figure 4O). Since convexity is imposed, the convexity loss is close to zero over the entire input space (Figure 4P). In summary, polyconvexity is a useful framework to guarantee existence of minimizers for problems in elasticity (with some additional growth conditions), but it can slightly increase the error of the neural network over the training set. This can be reflective of the uncertainties in the experimental data collection, or of limitations of the hyperelastic framework. Ten additional murine datasets are shown in the Supplemental Material, showing that the DNN can easily fit a wide variety of skin samples.

In Figure 5 we further study the effects of augmenting the training data and how the neural network differs from relying solely on the expert model. For this we focus on porcine skin. We train two DNNs: one is trained with the experimental data only (first and third columns of Figure 5), whereas the other is trained on the augmented data (second and fourth columns of Figure 5). In Figure 5C and I it can be seen that, trained only on experimental data, the neural network achieves a low average error of 5.13 kPa, compared to the GOH fit which is 22.54 kPa. Thus, the neural network outperforms the GOH model. This is not surprising since the only task of the neural network is to interpolate the experimental data and satisfy convexity. The contour plots in Figure 5G–L show the difference between the neural network and the GOH material model throughout the input space. It can be seen that the two models differ toward the boundary of the deformation space considered. Surprisingly, the models agree with each other in large portions of the input space even though the DNN and GOH were trained independently on only a small region of the deformation space. Based on the validation examples with the synthetic dataset, we know that the DNN does not extrapolate well outside of the training region. On the other hand, the GOH model has been developed and trained against thousands of tissue biaxial data. It is reasonable to expect that the GOH model, even though it cannot fit any particular dataset as well as the DNN, can be trusted to guide the neural network away from the training region. We show that, training the neural network on the augmented data, the loss is on average 12.81 kPa against the experimental data (Figure 5C and E), which is higher than the single-fidelity DNN but still lower than using the GOH model alone. However, looking at the contours in Figure 5H, J and L, we see that the neural network now follows the GOH model even more closely over the entire input space.

Therefore, the DNN trained with augmented data is at the very least the best version of the GOH model. It performs better than the GOH material model around the high fidelity data points while approximating the GOH model elsewhere.

The last test of the DNN material model is also done with porcine experimental data. In this case we have five different biaxial experiments (see Table 2). We are interested in determining which biaxial tests are the most informative for the DNN material model. Thus, we train the DNN with different combinations of experimental data and validate against the rest of the data (Figure 6). We do the same training and testing with the GOH model. In Figure 6A and B we train the material models with only two datasets, and test against the other three. In the training set, as expected from our previous result, the neural network outperforms the GOH model. In the validation data we see that the neural network performs similarly to the GOH model in the case in which it is trained on Off-x and Off-y data, but is outperformed by the GOH model in the case in which it is trained with Strip-x and Strip-y data. Figure 6C and D show the result of training the models with three of the five biaxial curves, and validating against the remaining two. Again, the DNN clearly outperforms the GOH model in the training set. Regarding validation, when the DNN is trained on strip biaxial data as well as equi-biaxial data it is able to outperform the GOH model in both validation cases (Figure 6D). In Figure 6E we train against four tests, and validate against the equibiaxial test. The validation and training errors are lower for the neural network compared to the GOH model. The superior performance of the DNN in this last case confirms that data-driven models are a preferable alternative to expert-constructed constitutive models when sufficient training data is available.

Figure 6:

Comparison of predicted stress vs actual stress vs GOH fits for various training/validation splits of porcine experimental data. Predictions of a neural network trained on (A) Off-x and Off-y data, (B) Off-x, Off-y and Equibiaxial data, (C) Off-x, Off-y, Strip-x and Strip-y data, (D) Strip-x and Strip-y data, and (E) Equibiaxial, Strip-x and Strip-y data.

Finite element method implementation

We show a number of basic finite element simulations to test the capabilities of the DNN material model as a UMAT subroutine for Abaqus. The neural network trained on porcine skin data (with convexity constraint) is defined in the input file. We first consider a rectangular block of 5×5×1 cm³. Boundary conditions, mesh and results for a uniaxial extension simulation with $\lambda_x = 1.2$ are depicted in Figure 7A. The result is a homogeneous stress distribution of $\sigma_x = 29.9$ kPa, $\sigma_y = \sigma_z = 0$, consistent with the results in Fig. 6, confirming that the UMAT subroutine is working as intended.

Figure 7:

Finite element method simulations using the DNN material model in UMAT. Boundary conditions, deformed geometry and contours of σx under uniaxial loading (A), Boundary conditions, deformed geometry and contours of σx and σy under shear loading (B), and, Boundary conditions, deformed geometry and contours of σx and σy under torsional loading (C).

A shearing simulation is shown in Figure 7B. In this analysis the −x surface of the prism is clamped and a displacement boundary condition of Ux = Uy = 5 is applied on the right surface. The contours of the resulting stress components, σx and σy, are shown in Figure 7. The Supplement shows a simulation with the GOH fit. As discussed in the previous section, the neural network model with the augmented data is, in a way, the best extension of the GOH model: it retains some of the expert model features but does not suffer from the constraints of an explicit functional form.

The last simulation in Figure 7 is a torsional loading scenario. In this simulation the −x surface of the rectangular prism is clamped and a rotation boundary condition of URx = 1 rad is imposed on the +x surface. The resulting stresses are presented in Figure 7C. This loading scenario is different from the previous two because it involves significant deformations in the out-of-plane direction. The UMAT subroutine executes without any problems. The three simulations in Figure 7 showcase the robustness and versatility of our DNN UMAT. It should also be noted that the DNN material model usually requires approximately twice as much computational time as the built-in GOH model. For example, for the torsion problem, the execution time for GOH is 00:01:42, while for the DNN UMAT it is 00:02:24.

Next, we perform a simulation that is much more closely related to skin biomechanics. Tissue expansion is a widely used technique in reconstructive surgery in which a balloon-like device is inserted and inflated subcutaneously to stretch and grow skin [42]. The domain is a 10×10×0.3 cm³ patch of skin modeled with 3200 brick elements. A rectangular expander of dimensions 8×8 cm² underneath the skin mesh is modeled with the fluid cavity feature in Abaqus. The expander is inflated to 20, 40 and 60 cm³ resulting in the principal strain distributions shown in Figure 8. Once again, the simulation converged without issues and the results align with our previous experimental observations of higher deformation at the apex and less toward the periphery of the expander [44]. The simulation in Figure 8 showcases the ability of our neural network model to be used in realistic finite element simulations through our UMAT.

Figure 8:

Finite element method simulations of tissue expansion using the DNN material model in UMAT. From left to right: Undeformed geometry, and, contours of maximum principal stress on deformed geometry after the expander is expanded to 20, 40 and 60 cm3, respectively.

Note that the simulation in Figure 8 evidences the anisotropy of the model. The fiber directions v and w are aligned with the Cartesian basis vectors [1, 0, 0] and [0, 1, 0]. The tissue is stiffer in the v direction, which is why there is a band of higher stress along that direction in Figure 8. To further showcase the anisotropy in the deformation we also plot the corresponding strains (Figure 9).

Figure 9:

Finite element method simulations of tissue expansion using the DNN material model in the UMAT. From left to right: model setup, and, contours of strain on deformed geometry after the expander is expanded to 20, 40 and 60 cm3, respectively.

Discussion

In this study we propose a deep neural network (DNN) material model to replace conventional constitutive equations for nonlinear materials, in particular soft collagenous tissues. The neural network takes isochoric strain invariants as inputs and produces the isochoric strain energy and its derivatives as outputs. With this design, objectivity is satisfied a priori. Other efforts in data-driven modeling of materials and structures rely on training directly on the stress data, which requires additional steps to ensure objectivity [45, 46, 47]. For instance, additional loss functions to deal with the violation of objectivity have been proposed [48]. Efforts using invariants or principal stretches as inputs with energy as the output have also been shown by others [19, 49, 50], and by us as well but for isotropic materials [18]. Intermediate approaches that map deformation invariants to principal stresses have also been sought, but they are limited to isotropic materials and even in that case require regularization schemes [51]. The key ideas introduced in this paper are the consideration of anisotropy, training with multi-fidelity data (including experimental data), convexity constraints, and the design of the DNN architecture to compute not only the stress but also the consistent tangent needed in finite element simulations.

Training data for the neural network can consist of both expensive (or hard to get) high fidelity data such as results of laboratory experiments, or a combination of high fidelity data supplemented by synthetic data from expert models. A control case shown here is to train the neural network on synthetic data alone. The performance of the neural network with the synthetic data shows that the network can interpolate the expert model almost perfectly (Fig. 2). This is not surprising since neural networks are universal approximators [52]. Other work using neural network-based material models has also shown excellent performance against synthetic data [53]. When enough data is available, recent efforts in data-driven computational mechanics have shown that model-free approaches can be used [47, 54]. However, for applications in biomechanics, the anisotropy and high nonlinearity in the materials necessitate large amounts of data from deformations that can cover the entire input space [55, 56]. This is often out of reach for soft tissue characterization. Thus, we propose a DNN model that captures the experimental data, but does so constrained by a hyperelastic framework and the condition of polyconvexity of the strain energy.

High fidelity data of soft tissue mechanics is sparse in most applications. In our previous work, typically only three protocols have been performed: off-biaxial x, off-biaxial y and equi-biaxial [24]. Two other tests, strip-biaxial tests in the x- and y-direction, are explored here as well. Liu et al. [19] generated datasets by subjecting tissues to seven biaxial tests. Clearly, more coverage of the input space is always better for data-driven approaches. However, this is a challenge: it requires the establishment of multiple repeatable protocols and extensive testing of individual specimens, which can introduce unforeseen uncertainties. Within the hyperelastic framework, expert models of soft tissue mechanics have been developed over the past few decades and reflect our growing understanding of soft tissues. For example, expert models are often based on microstructure observations [9, 57], satisfy physics constraints [21, 35], and have been carefully designed based on observations of many data [7]. On the other hand, expert models have many limitations, such as non-uniqueness of fit, high sensitivity to parameters, and inability to fit the data due to the inherent constraints of the functional form [8]. By combining the high fidelity data with an expert model as a low fidelity approximation we aim at getting the best of both: data-centric models that can capture the experimental data with great accuracy, while maintaining relatively good performance in regions with scarce high fidelity data. Of course, this raises the question of how to balance between the two. Here we set a much higher priority for the experimental data, but future studies should quantify the uncertainty of both the data and the models in regions with little experimental data in order to rigorously weight the high and low fidelity models. With the current weights, we showcase the ability of the DNN material model to capture the mechanical response of skin based on data obtained from 2 pigs and 11 mice, demonstrating the applicability of our approach to realistic datasets.

Imposing polyconvexity through the loss function ensures a stable material model suitable for finite element applications, but comes at the expense of fitting error (see Figure 4). This result points to polyconvexity as a potentially restrictive condition on the data, possibly due to the existence of dissipative phenomena such as viscoelasticity or damage which were not accounted for in the model [2, 58]. Noise can also affect the performance of machine learning approaches [59]. The Supplement shows that, imposing the convexity constraint, the DNN model can capture the synthetic data even in the presence of noise. Other data-driven approaches have considered different convexity constraints. For example, Vlassis et al. [33] check for convexity of the strain energy with respect to the right Cauchy Green deformation tensor C. While convexity with respect to C is widely used [34], it is not equivalent to polyconvexity with respect to F [60]. Polyconvexity of the strain energy, together with some growth conditions on the energy, ensures the existence of minimizers for boundary value problems in elasticity [31]. While it is true that polyconvexity is a sufficient but not a necessary condition, it provides enough flexibility, is compatible with phenomena such as buckling [61], and is desirable for finite element implementation. Data-driven work enforcing polyconvexity has also been explored with different approaches by us and others [53, 62].

A strong motivation behind the development of data-driven constitutive models of soft tissues based on experimental tissue testing data is to use the model in predictive finite element simulations to guide device design or treatment planning [63]. Previous work on data-driven modeling has fallen short in this regard [19, 64, 53]. The DNN design shown here, including the use of invariants as inputs and prediction of energy and energy derivatives as outputs, allows us to compute not only the stresses but also the consistent tangent. Together with the polyconvexity loss, our data-driven framework is uniquely suited for finite element simulations. While it would be possible to predict the energy alone, this introduces noise in the derivatives that needs to be regularized as shown in [33]. An alternative framework is to use integrable neural networks [22]. Another method we have explored recently is the use of neural ordinary differential equations to learn the energy derivatives, ignoring the underlying energy function entirely [62]. In [62], the model architecture guarantees that the derivative functions do indeed come from differentiation of an underlying potential even if this potential is not explicitly modeled. The approach followed here is more akin to multifield formulations in elasticity or enhanced strain methods, for which additional degrees of freedom are added together with suitable constraints [65]. We implemented the DNN model in a UMAT subroutine for Abaqus, a popular finite element package in both academia and industry. The UMAT subroutine code was implemented with maximum flexibility in mind. The definition and all parameters of the neural network are provided to the UMAT through the input file. We showcased finite element simulations with the neural network trained on the porcine data, from simple deformations to realistic applications such as tissue expansion.

Of course, this work is not without limitations. While it is common to model soft tissues within the hyperelastic framework, other physical phenomena will be included in future work, namely viscoelasticity, interstitial flow, and damage. Additionally, a Bayesian framework is needed to account for the inherent uncertainty in material behavior of biological materials. Nevertheless, we anticipate that the general framework introduced here will open up new avenues in data-driven finite element models that balance high-fidelity experimental data with expert knowledge of soft tissue mechanics.

Conclusions

The work presented in this study shows that neural network material models can reliably replace or augment conventional constitutive material models in tissue mechanics analyses. If enough high fidelity data is available, data-driven models can eliminate the burden of choosing a specific functional form and the inherent limitations that come with this choice. However, in most applications, high fidelity data is scarce. Our work demonstrates that a multi-fidelity approach can leverage expert knowledge in the form of synthetic data, while achieving a better fit to the experimental observations. A strong motivation to develop accurate material models of soft tissue is to build predictive finite element models. We designed the neural network with this application in mind, and implemented a DNN UMAT subroutine for Abaqus, a widely used finite element package.

Supplementary Material

Supplement1
Supplement2

Figure 1:

Diagram depicting the training and inference processes of the deep neural network material model.

Acknowledgements

This work was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institute of Health, United States under award R01AR074525 to Adrian Buganza Tepole and the National Science Foundation through awards 1916663 and 1916665 to Manuel K. Rausch and Adrian Buganza Tepole, respectively.

Data Availability

Murine biaxial stress-strain data are available through Manuel K. Rausch’s dataverse (together with other mechanical raw data): https://dataverse.tdl.org/dataverse/STBML Code associated with this manuscript is available at: https://github.com/abuganza/NN_aniso_UMAT

References

  • [1].Lee T, Turin SY, Stowers C, Gosain AK, Tepole AB, Personalized computational models of tissue-rearrangement in the scalp predict the mechanical stress signature of rotation flaps, The Cleft PalateCraniofacial Journal 58 (2021) 438–445. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [2].Sherman V, Tang Y, Zhao S, Yang W, Meyers M, Structural characterization and viscoelastic constitutive modeling of skin, Acta Biomaterialia 53 (2017) 460–469. [DOI] [PubMed] [Google Scholar]
  • [3].Kakaletsis S, Meador WD, Mathur M, Sugerman GP, Jazwiec T, Malinowski M, Lejeune E, Timek TA, Rausch MK, Right ventricular myocardial mechanics: Multi-modal deformation, microstructure, modeling, and comparison to the left ventricle, Acta Biomaterialia 123 (2021) 154–166. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].Meador WD, Mathur M, Sugerman GP, Jazwiec T, Malinowski M, Bersi MR, Timek TA, Rausch MK, A detailed mechanical and microstructural analysis of ovine tricuspid valve leaflets, Acta biomaterialia 102 (2020) 100–113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [5].Holzapfel GA, Nonlinear Solid Mechanics; A Continuum Approach for Engineering, John Wiley & Sons, LTD, 2000. [Google Scholar]
  • [6].Gasser TC, Ogden RW, Holzapfel GA, Hyperelastic modelling of arterial layers with distributed collagen fibre orientations, Journal of the royal society interface 3 (2005) 15–35. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].Humphrey JD, Strumpf RK, Yin FCP, Determination of a constitutive model for passive myocardium: I. a new functional form, Journal of Biomechanical Engineering 112 (1990) 333–339. [DOI] [PubMed] [Google Scholar]
  • [8].Tonge TK, Voo LM, Nguyen TD, Full-field bulge test for planar anisotropic tissues: Part ii – a thin shell method for determining material parameters and comparison of two distributed fiber modeling approaches, Acta Biomaterialia 9 (2013) 5926–5942. [DOI] [PubMed] [Google Scholar]
  • [9].Limbert G, Skin Biophysics: From Experimental Characterisation to Advanced Modelling, volume 22, Springer, 2019. [Google Scholar]
  • [10].Jor JW, Parker MD, Taberner AJ, Nash MP, Nielsen PM, Computational and experimental characterization of skin mechanics: identifying current challenges and future directions, Wiley Interdisciplinary Reviews: Systems Biology and Medicine 5 (2013) 539–556. [DOI] [PubMed] [Google Scholar]
  • [11].Mueller B, Elrod J, Distler O, Schiestl C, Mazza E, On the reliability of suction measurements for skin characterization, Journal of Biomechanical Engineering 143 (2021) 021002. [DOI] [PubMed] [Google Scholar]
  • [12].Peng GC, Alber M, Tepole AB, Cannon WR, De S, Dura-Bernal S, Garikipati K, Karniadakis G, Lytton WW, Perdikaris P, et al. , Multiscale modeling meets machine learning: What can we learn?, Archives of Computational Methods in Engineering (2020) 1–21. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [13].Han Z, De S, et al. , A deep learning-based hybrid approach for the solution of multiphysics problems in electrosurgery, Computer methods in applied mechanics and engineering 357 (2019) 112603. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Zhang X, Garikipati K, Machine learning materials physics: Multi-resolution neural networks learn the free energy and nonlinear elastic response of evolving microstructures, Computer Methods in Applied Mechanics and Engineering 372 (2020) 113362. [Google Scholar]
  • [15].Vlassis NN, Sun W, Sobolev training of thermodynamic-informed neural networks for interpretable elasto-plasticity models with level set hardening, Computer Methods in Applied Mechanics and Engineering 377 (2021). [Google Scholar]
  • [16].Lu L, Dao M, Kumar P, Ramamurty U, Karniadakis GE, Suresh S, Extraction of mechanical properties of materials through deep learning from instrumented indentation, Proceedings of the National Academy of Sciences of the United States of America 117 (2020) 7052–7062. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [17].Lejeune E, Zhao B, Exploring the potential of transfer learning for metamodels of heterogeneous material deformation, Journal of the Mechanical Behavior of Biomedical Materials 117 (2021) 104276. [DOI] [PubMed] [Google Scholar]
  • [18].Leng Y, Calve S, Tepole AB, Predicting the mechanical properties of fibrin using neural networks trained on discrete fiber network data, arXiv preprint arXiv:2101.11712 (2021). [Google Scholar]
  • [19].Liu M, Liang L, Sun W, A generic physics-informed neural network-based constitutive model for soft biological tissues, Computer Methods in Applied Mechanics and Engineering 372 (2020). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Reimann D, Chandra K, Vajragupta N, Glasmachers T, Junker P, Hartmaier A, et al. , Modeling macroscopic material behavior with machine learning algorithms trained by micromechanical simulations, Frontiers in Materials 6 (2019) 181. [Google Scholar]
  • [21].Ehret AE, Itskov M, A polyconvex hyperelastic model for fiber-reinforced materials in application to soft tissues, Journal of Materials Science 42 (2007) 8853–8863. [Google Scholar]
  • [22].Teichert GH, Natarajan A, Van der Ven A, Garikipati K, Machine learning materials physics: Integrable deep neural networks enable scale bridging by learning free energy functions, Computer Methods in Applied Mechanics and Engineering 353 (2019) 201–216. [Google Scholar]
  • [23].Nguyen LTK, Keip M-A, A data-driven approach to nonlinear elasticity, Computers & Structures 194 (2018) 97–115. [Google Scholar]
  • [24].Meador WD, Sugerman GP, Story HM, Steifert AW, Bersi MR, Tepole AB, Rausch MK, The regional-dependent biaxial behavior of young and aged mouse skin: A detailed histomechanical characterization, residual strain analysis, and constitutive model, Acta Biomaterialia 101 (2020) 403–413. [DOI] [PubMed] [Google Scholar]
  • [25].Lee T, Turin SY, Gosain AK, Bilionis I, Tepole AB, Propagation of material behavior uncertainty in a nonlinear finite element model of reconstructive surgery, Biomechanics and Modeling in Mechanobiology 17 (2018) 1857–1873. [DOI] [PubMed] [Google Scholar]
  • [26].Bonfiglio L, Perdikaris P, Brizzolara S, Multi-fidelity bayesian optimization of swath hull forms, Journal of Ship Research 64 (2020) 154–170. [Google Scholar]
  • [27].Smith M, ABAQUS/Standard User’s Manual, Version 6.9, Dassault Systèmes Simulia Corp, United States, 2009. [Google Scholar]
  • [28].Doyle T, Ericksen J, Nonlinear elasticity, volume 4 of Advances in Applied Mechanics, Elsevier, 1956, pp. 53–115. [Google Scholar]
  • [29].Coleman BD, Noll W, The Foundations of Mechanics and Thermodynamics, Springer, 1974. [Google Scholar]
  • [30].Fehervary H, Maes L, Vastmans J, Kloosterman G, Famaey N, How to implement user-defined fiber-reinforced hyperelastic materials in finite element software, Journal of the Mechanical Behavior of Biomedical Materials 110 (2020) 103737. [DOI] [PubMed] [Google Scholar]
  • [31].Ball JM, Convexity conditions and existence theorems in nonlinear elasticity, Archive for rational mechanics and Analysis 63 (1976) 337–403. [Google Scholar]
  • [32].Schröder J, Anisotropic polyconvex energies, in: Poly-, quasi-and rank-one convexity in applied mechanics, Springer, 2010, pp. 53–105. [Google Scholar]
  • [33].Vlassis NN, Ma R, Sun W, Geometric deep learning for computational mechanics part i: anisotropic hyperelasticity, Computer Methods in Applied Mechanics and Engineering 371 (2020) 113299. [Google Scholar]
  • [34].Gao DY, Neff P, Roventa I, Thiel C, On the convexity of nonlinear elastic energies in the right cauchy-green tensor, Journal of Elasticity 127 (2017) 303–308. [Google Scholar]
  • [35].Holzapfel GA, Gasser TC, Ogden RW, A new constitutive framework for arterial wall mechanics and a comparative study of material models, Journal of elasticity and the physical science of solids 61 (2000) 1–48. [Google Scholar]
  • [36].Schröder J, Neff P, Balzani D, A variational approach for materially stable anisotropic hyperelasticity, International journal of solids and structures 42 (2005) 4352–4371. [Google Scholar]
  • [37].Boyd S, Vandenberghe L, Convex Optimization, Cambridge University Press, 2004. [Google Scholar]
  • [38].Prussing JE, The principal minor test for semidefinite matrices, Journal of Guidance, Control and Dynamics 9 (1985) 121–122. [Google Scholar]
  • [39].Kingma DP, Ba J, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014). [Google Scholar]
  • [40].Chollet F, et al. , Keras: The python deep learning library, ascl (2018) ascl-1806.
  • [41].Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, et al. , Tensorflow: Large-scale machine learning on heterogeneous distributed systems, arXiv preprint arXiv:1603.04467 (2016). [Google Scholar]
  • [42].Lee T, Turin SY, Gosain AK, Bilionis I, Tepole AB, Propagation of material behavior uncertainty in a nonlinear finite element model of reconstructive surgery, Biomechanics and Modeling in Mechanobiology 17 (2018) 1857–1873. [DOI] [PubMed] [Google Scholar]
  • [43].Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov N, Nelson ARJ, Jones E, Kern R, Larson E, Carey CJ, Polat İ, Feng Y, Moore EW, VanderPlas J, Laxalde D, Perktold J, Cimrman R, Henriksen I, Quintero EA, Harris CR, Archibald AM, Ribeiro AH, Pedregosa F, van Mulbregt P, SciPy 1.0 Contributors, SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, Nature Methods 17 (2020) 261–272. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [44].Tepole AB, Gart M, Gosain AK, Kuhl E, Characterization of living skin using multi-view stereo and isogeometric analysis, Acta biomaterialia 10 (2014) 4822–4831. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [45].Ghaboussi J, Pecknold DA, Zhang M, Haj-Ali RM, Autoprogressive training of neural network constitutive models, International Journal for Numerical Methods in Engineering 42 (1998) 105–126. [Google Scholar]
  • [46].Ghaboussi J, Sidarta D, New nested adaptive neural networks (nann) for constitutive modeling, Computers and Geotechnics 22 (1998) 29–52. [Google Scholar]
  • [47].Kirchdoerfer T, Ortiz M, Data-driven computational mechanics, Computer Methods in Applied Mechanics and Engineering 304 (2016) 81–101. [Google Scholar]
  • [48].Heider Y, Wang K, Sun W, So (3)-invariance of informed-graph-based deep neural network for anisotropic elastoplastic materials, Computer Methods in Applied Mechanics and Engineering 363 (2020) 112875. [Google Scholar]
  • [49].Mihai LA, Woolley TE, Goriely A, Stochastic isotropic hyperelastic materials: constitutive calibration and model selection, Proceedings of the Royal Society A 474 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [50].Dabiri Y, Van der Velden A, Sack KL, Choy JS, Kassab GS, Guccione JM, Prediction of left ventricular mechanics using machine learning, Frontiers in Physics 7 (2019) 117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Kalina KA, Linden L, Brummund J, Metsch P, Kästner M, Automated constitutive modeling of isotropic hyperelasticity based on artificial neural networks, Computational Mechanics 69 (2022) 213–232. [Google Scholar]
  • [52].Leshno M, Lin VY, Pinkus A, Schocken S, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural networks 6 (1993) 861–867. [Google Scholar]
  • [53].Klein DK, Fernández M, Martin RJ, Neff P, Weeger O, Polyconvex anisotropic hyperelasticity with neural networks, Journal of the Mechanics and Physics of Solids 159 (2022) 104703. [Google Scholar]
  • [54].Mora-Macías J, Ayensa-Jiménez J, Reina-Romo E, Doweidar MH, Domínguez J, Doblaré M, Sanz-Herrera JA, A multiscale data-driven approach for bone tissue biomechanics, Computer Methods in Applied Mechanics and Engineering 368 (2020) 113136. [Google Scholar]
  • [55].He Q, Laurence DW, Lee C-H, Chen J-S, Manifold learning based data-driven modeling for soft biological tissues, Journal of Biomechanics 117 (2021) 110124. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [56].He X, He Q, Chen J-S, Deep autoencoders for physics-constrained data-driven nonlinear materials modeling, Computer Methods in Applied Mechanics and Engineering 385 (2021) 114034. [Google Scholar]
  • [57].Kassab GS, Sacks MS, Structure-based mechanics of tissues and organs, Springer, 2016. [Google Scholar]
  • [58].Kumaraswamy N, Khatam H, Reece GP, Fingeret MC, Markey MK, Ravi-Chandar K, Mechanical response of human female breast skin under uniaxial stretching, Journal of the mechanical behavior of biomedical materials 74 (2017) 164–175. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [59].Wang Z, Estrada JB, Arruda EM, Garikipati K, Inference of deformation mechanisms and constitutive response of soft material surrogates of biological tissue by full-field characterization and data-driven variational system identification, Journal of the Mechanics and Physics of Solids 153 (2021) 104474. [Google Scholar]
  • [60].Sivaloganathan J, Spector SJ, On the uniqueness of energy minimizers in finite elasticity, Journal of Elasticity 133 (2018) 73–103. [Google Scholar]
  • [61].Bonet J, Gil AJ, Ortigosa R, A computational framework for polyconvex large strain elasticity, Computer Methods in Applied Mechanics and Engineering 283 (2015) 1061–1094. [Google Scholar]
  • [62].Tac V, Sahli Costabal F, Tepole AB, Data-driven tissue mechanics with polyconvex neural ordinary differential equations, Computer Methods in Applied Mechanics and Engineering 398 (2022) 115248. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [63].Baillargeon B, Rebelo N, Fox DD, Taylor RL, Kuhl E, The living heart project: a robust and integrative simulator for human heart function, European Journal of Mechanics-A/Solids 48 (2014) 38–47. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [64].Cilla M, Pérez-Rey I, Martínez MA, Peña E, Martínez J, On the use of machine learning techniques for the mechanical characterization of soft biological tissues, International Journal for Numerical Methods in Biomedical Engineering 34 (2018) e3121. E3121 cnm.3121. [DOI] [PubMed] [Google Scholar]
  • [65].Wriggers P, Korelc J, On enhanced strain methods for small and finite deformations of solids, Computational Mechanics 18 (1996) 413–428. [Google Scholar]
