Author manuscript; available in PMC: 2018 Nov 1.
Published in final edited form as: Acta Biomater. 2017 Sep 20;63:227–235. doi: 10.1016/j.actbio.2017.09.025

A Deep Learning Approach to Estimate Chemically-Treated Collagenous Tissue Nonlinear Anisotropic Stress-Strain Responses from Microscopy Images

Liang Liang 1, Minliang Liu 1, Wei Sun 1
PMCID: PMC5653437  NIHMSID: NIHMS908359  PMID: 28939354

Abstract

Biological collagenous tissues comprised of networks of collagen fibers are suitable for a broad spectrum of medical applications owing to their attractive mechanical properties. In this study, we developed a noninvasive approach to estimate collagenous tissue elastic properties directly from microscopy images using Machine Learning (ML) techniques. Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of bioprosthetic heart valves and vascular patches, was chosen to develop a representative application. A Deep Learning model was designed and trained to process second harmonic generation (SHG) images of collagen networks in GLBP tissue samples, and directly predict the tissue elastic mechanical properties. The trained model is capable of identifying the overall tissue stiffness with a classification accuracy of 84%, and predicting the nonlinear anisotropic stress-strain curves with average regression errors of 0.021 and 0.031. Thus, this study demonstrates the feasibility and great potential of using the Deep Learning approach for fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images.

Keywords: Deep Learning, convolutional neural network, elastic property, collagenous tissue

Graphical abstract


1. INTRODUCTION

Biological collagenous tissues are composed of networks of collagen fibers embedded in a ground substance [1, 2], which provide the pliability and strength important for many normal physiological functions. The attractive biological and mechanical properties [3] also make collagenous tissues, mostly derived from animals as xenografts, suitable for a broad spectrum of medical applications such as bioprosthetic heart valve (BHV) [4, 5], cardiovascular grafting/patch [6, 7], tendon [8] and hernia [9] repair. However, due to the heterogeneity and inherent variability of biological tissues, the mechanical properties of collagenous tissues obtained at different locations, even within the same individual (whether animal or human), may differ, and may impact tissue-derived device function.

Many studies [10–16] have shown that the microstructure of soft tissues, particularly the collagen fiber network structure, is the key determinant of the tissue elastic properties at the macroscopic level. Advanced microscopy imaging techniques, such as second harmonic generation (SHG) imaging, have enabled noninvasive visualization of soft tissue collagen networks at the microstructural level. The elastic properties of collagenous tissues are traditionally obtained through destructive mechanical testing of harvested tissue samples (Figure 1). Ideally, the nonlinear anisotropic elastic properties of collagenous tissues could be directly estimated from noninvasive images (e.g., SHG images) of the tissue microstructure, such that xenografts could be carefully selected based on their mechanical properties, and optimal, more predictable, tissue-derived device function could be ensured.

Figure 1.


Two approaches to obtain the elastic properties of a tissue sample: 1) the traditional approach utilizing mechanical testing of a physical test sample and 2) noninvasive microscopy imaging coupled with a trained Deep Learning model.

Recently, Deep Learning [17–19], a branch of Machine Learning utilizing deep neural networks, has garnered enormous attention in the fields of artificial intelligence and image analysis. A special type of neural network, namely the convolutional neural network (CNN) [17, 20, 21], has become the state-of-the-art approach for computer vision and image analysis applications (e.g., face recognition), reaching, and even surpassing, human performance in some cases [22–25]. A CNN provides an end-to-end solution from input image to output target value by automatically extracting image features, thus eliminating the need for hand-crafted image features.

In this study, we developed, to our best knowledge, the first Deep Learning approach to estimate the elastic properties of collagenous tissues from SHG images (Figure 1). Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of BHVs [5] and vascular patches, was chosen to develop a representative application for collagenous tissue elastic property estimation. A multi-layer CNN was designed and trained on a dataset of SHG images and corresponding mechanical testing results (i.e., equi-biaxial stress-strain curves). The trained CNN can automatically extract features from input SHG images of GLBP tissues and predict the nonlinear anisotropic elastic properties (Figure 1).

2. METHODS

2.1 Tissue preparation and mechanical testing

The GLBP tissue samples used in this study were collected and mechanically tested through previous work by our group aimed at evaluating transcatheter heart valve biomaterials [26]. The tissue preparation and mechanical testing protocols are well documented in the published works [27–29]. Briefly, testing samples were cut into 20×20 mm squares, and four graphite markers delimiting a square approximately 2×2 mm in size were glued to the central region of each sample for optical strain measurements. Samples were then mounted in a trampoline fashion to a planar biaxial tester in aqueous 0.9% NaCl solution at 37 °C. A stress-controlled test protocol [27] was utilized to obtain the biaxial stress-strain response curves of each testing sample. In this study, 48 GLBP tissue samples were tested in total.

2.2 Tissue imaging

Upon completion of biaxial mechanical testing, the tissue samples were imaged using the SHG technique at the unloaded state. We utilized a Zeiss 710 NLO inverted confocal microscope (Carl Zeiss Microscopy, LLC, Thornwood, NY, USA), equipped with a mode-locked Ti:Sapphire Chameleon Ultra laser (Coherent Inc., Santa Clara, CA), a non-descanned detector (NDD), and a Plan-Apochromat 40× oil immersion objective. The laser was set to 800 nm and emission was filtered from 380–430 nm. Samples were kept hydrated with saline solution during imaging to prevent drying artifacts and covered with #1.5 coverslips. Samples were imaged inside the area delimited by the graphite markers, and 2D image slices were collected in the thickness direction from the smooth side of each sample. Each 2D slice has between 512×512 and 1024×1024 pixels, and the number of slices per sample varied to cover the full thickness. In total, we obtained 3D SHG images (sizes from 512×512×N to 1024×1024×N) of 48 tissue samples from different animal subjects, and the corresponding mechanical testing data. Representative SHG images of a GLBP sample are shown in Figure 2, with a total of 18 slices (N=18) through the thickness. It can be seen in Figure 2 that the geometry variation (e.g., fiber waviness) in each imaging plane is much larger than that in the thickness direction.

Figure 2.


Representative SHG image slices of a tissue sample. n denotes the index of each slice in the thickness direction.

2.3 PCA-based Parameterization of GLBP stress-strain curves

Two stress-strain curves were obtained from the equi-biaxial mechanical testing (section 2.1) of each tissue sample (Figure 3a&b): 1) strain E11 and stress S11 along the X1-direction, and 2) strain E22 and stress S22 along the X2-direction. As shown in Figure 3, the two curves of each sample are very different, indicating anisotropic mechanical behaviors. Each stress-strain curve was uniformly sampled along the stress axis within the range of 10 to 630 kPa. The cutoff of 630 kPa was chosen because different ranges of external stresses were applied to the tissue samples and 630 kPa was the minimum peak stress value [30]. For each tissue sample, the resampled strain values from the two curves were assembled as a vector of 126 numbers, Y. By using principal component analysis (PCA) [31–33], the vector Y of a tissue sample can be decomposed as

Y \approx Y_{PCA} = \bar{Y} + \alpha_1 V_1 + \alpha_2 V_2 + \alpha_3 V_3 \quad (1)

where Y¯ is the population mean, {Vi} are the modes of variation, and {αi} are the coefficients. Here, {αi} can vary, while Y¯ and {Vi} are the same for all tissue samples. From the PCA calculation [33], the first three modes of variation {V1, V2, V3} with {α1, α2, α3} can describe 99% of the total variation of the stress-strain curves, which means each stress-strain curve can be almost perfectly reconstructed by using Eq.(1) as shown in Figure 3b. Furthermore, the reconstruction error was measured by the mean absolute error (MAE), given by

MAE = \frac{1}{L_2 - L_1 + 1} \sum_{j=L_1}^{L_2} \left| Y_{PCA}(j) - Y(j) \right| \quad (2)

where j is the index of a component in a vector; if L1 = 1 and L2 = 63, MAE is the error of the reconstructed S11~E11 curve; and if L1 = 64 and L2 = 126, MAE is the error of the reconstructed S22~E22 curve. The normalized mean absolute error (NMAE) was also calculated for the reconstructed S11~E11 and S22~E22 curves by dividing MAE by the maximum value of E11 and E22, respectively.
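The PCA parameterization and error metrics above can be sketched in a few lines of numpy (the authors' implementation used MATLAB; the function names here are hypothetical):

```python
import numpy as np

def fit_pca(Y, n_modes=3):
    """Fit PCA to stress-strain vectors.

    Y: (n_samples, 126) resampled strain values (63 per direction).
    Returns the population mean, the first n_modes modes of variation,
    and the per-sample coefficients {alpha_i} of Eq. (1).
    """
    Y_bar = Y.mean(axis=0)
    # SVD of the centered data gives the principal directions.
    _, _, Vt = np.linalg.svd(Y - Y_bar, full_matrices=False)
    V = Vt[:n_modes]                      # modes of variation V1..V3
    alpha = (Y - Y_bar) @ V.T             # coefficients per sample
    return Y_bar, V, alpha

def reconstruct(Y_bar, V, alpha):
    """Eq. (1): Y_PCA = Y_bar + sum_i alpha_i * V_i."""
    return Y_bar + alpha @ V

def mae(Y_pca, Y, L1, L2):
    """Eq. (2): mean absolute error over components L1..L2 (1-based)."""
    return np.mean(np.abs(Y_pca[L1 - 1:L2] - Y[L1 - 1:L2]))
```

When the data are well described by three modes (as reported here, 99% of the variation), the reconstruction is nearly exact.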

Figure 3.


(a) The orientation definition of a tissue sample: X1 direction and X2 direction. (b) The open circles represent the stress-strain curves of a tissue sample from equi-biaxial mechanical testing experiments. The reconstructed stress-strain curves are shown by the red lines (S11~E11) and blue lines (S22~E22). (c)&(d) The stress-strain curves in the two directions of the 48 tissue samples color-coded by the corresponding α1. The dashed lines are the mean curves, Y¯.

As shown in Figure 3c&d, a material is softer, i.e. more compliant, than the mean material if α1 < 0, and stiffer than the mean material if α1 > 0. Thus the sign of α1 can be used to describe the overall tissue stiffness.

2.4 Deep learning model

As shown in Figure 4, we designed a deep convolutional neural network (CNN) as the deep learning model, consisting of 6 blocks in a pipeline. The 1st block takes an input image of size 256×256×N pixels. The 6th block can be configured either as a classifier of the overall tissue stiffness (sign of α1), or as a regressor to predict the PCA parameters {α1, α2, α3}, which can be used to reconstruct the stress-strain curves by Eq.(1). The CNN (Figure 4) learns the relationship between the tissue SHG images and elastic properties from the training dataset, and can then infer the elastic properties from a new tissue image.

Figure 4.


Architecture of the deep convolutional neural network used in this study.

Usually, convolutional neural networks (CNNs) consist of many layers that are sequentially connected, e.g., the output from the first layer is the input to the second layer. A layer performs a specific operation, such as convolution, normalization, or max pooling, and it has parameters either prescribed or to be learned from data. For a detailed explanation of the layers and related theories, we refer the reader to the reference papers [17, 20, 34, 35] and the book [36]. The network structure should be designed for specific applications, e.g., choosing the types and sizes of layers and determining their combinations. For our application, the designed CNN consists of 6 blocks in a pipeline, where each block has one or more layers. Given an input 3D image of 256×256×N pixels, the 1st block, with only one preprocessing layer, performs local contrast normalization and uniformly resamples the input 3D image into the first feature map of 256×256×3 pixels. The 1st block does not have any trainable/free parameters. The 2nd block contains a convolution layer with 64 filters (a.k.a. kernels) of 33×33×3 pixels, a batch-normalization layer, a ReLu (rectified linear unit) layer, and a max pooling layer; the output from the 2nd block is a feature map of 32×32×64 pixels. The 3rd to 5th blocks are very similar to the 2nd block, and output feature maps of 16×16×64, 7×7×64, and 1×1×64 pixels, respectively. All of the max-pooling layers use a 2×2 pooling window. The 1st to 5th blocks can be considered image-feature extractors which output a feature vector of 64 numbers. The 6th block is used for classification with a softmax classifier, or for regression with a linear model. The CNN was implemented using MatConvNet [37], an open source MATLAB toolbox, and custom MATLAB functions; it can process an input 3D image within 10 seconds on a PC with an Intel i7-4770 CPU and 32 GB of RAM.
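The 1st block's preprocessing can be illustrated with a minimal numpy sketch (hypothetical function names; the paper's layer performs local contrast normalization, for which per-slice standardization is used here as a simplified stand-in):

```python
import numpy as np

def preprocess(volume, out_slices=3, eps=1e-6):
    """Block-1 sketch: resample an H x W x N SHG stack to H x W x 3
    by picking evenly spaced slices, then standardize each slice.
    NOTE: the paper uses local contrast normalization; global per-slice
    standardization here is a simplifying assumption."""
    h, w, n = volume.shape
    idx = np.linspace(0, n - 1, out_slices).round().astype(int)
    out = volume[:, :, idx].astype(float)
    out -= out.mean(axis=(0, 1), keepdims=True)
    out /= out.std(axis=(0, 1), keepdims=True) + eps
    return out
```

The output shape, 256×256×3, matches the first feature map described above.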

2.5 Learning of the deep convolutional neural network

The CNN (Figure 4) parameters were learned from the training data. To overcome the challenge of training the CNN with a small dataset [31] (i.e., 48 test samples, which is an acceptable sample size for material testing of biological tissues), the CNN was trained by combining: 1) unsupervised deep learning to determine the parameters in the 2nd to 5th blocks, 2) supervised learning to determine the parameters in the 6th block, and 3) data augmentation to generate more training data.

2.5.1 Unsupervised Deep Learning from the 2nd to 5th blocks

To determine the filter parameters of a convolution layer, generally we could use encoder-decoder based unsupervised learning strategies [38–41]. The input feature map to the convolution layer can be divided into small patches, where each patch has the same size as a filter (all filters in the same layer have the same size). Each patch can be converted to a vector, X, and the vectorized patches can be stacked together as the columns of a data matrix X. The filters of the convolution layer can also be vectorized and stacked together as the columns of a filter matrix A. Let h(x) denote the ReLu function: h(x) = x if x > 0, and h(x) = 0 if x ≤ 0. The encoder performs convolution followed by ReLu on each patch, which outputs the code matrix h(A′X), close to the optimal (yet unknown) code matrix Z. Given the optimal code matrix Z, the decoder tries to recover the input patches X by using a linear combination of the atoms/columns in a dictionary/matrix D, i.e., using DZ to approximate X. Then the goal is to find the optimal variables {A, D, Z} such that the encoding error and the decoding error are both minimized, which is to minimize the following objective function:

\mathcal{L} = \| h(A'X) - Z \|^2 + \| X - DZ \|^2 + g(A, D, Z) \quad (3)

where g(A, D, Z) defines some constraints on the variables, and A′ denotes the matrix-transpose of A. The matrix norm ‖.‖ is the Frobenius norm. Obviously, by using different constraints, we can obtain different solutions of {A, D, Z}. We proposed an algorithm with three steps to directly obtain a solution under the low rank constraint [42]:

Step-1

Perform low rank approximation (LRA) [42] on the patches X, then a vectorized patch X can be approximated by

X \approx \sum_{m=1}^{M} z_m d_m = DZ, \quad (4)

where D = [d1, …, dM], the vector dm has the same size as X, and M ≤ M̂, where M̂ is the number of pixels in the patch X. dm is the product of the mth largest singular value, λm, and the corresponding left-singular vector obtained by LRA. D is the same for every single patch X. Also obtained by LRA, the code vector, Z = [z1, …, zM]′, is a column vector of scalars, which is different for different patches. The percentage error of approximation for the patches X is given by

\mathrm{Error} = \frac{\sum_{n=M+1}^{\hat{M}} \lambda_n^2}{\sum_{m=1}^{\hat{M}} \lambda_m^2} \times 100\%. \quad (5)

If M = M̂, then the error is zero. By controlling the number of retained singular values and singular vectors, i.e., M, the approximation error and the computation cost (proportional to M) can be controlled. In this study, M is fixed to 32, and the error is less than 30%. The low rank approximation essentially obtains the D and Z that minimize ‖X − DZ‖² under the low rank constraint [42]. Since the singular vectors in D are orthogonal to each other, the code vector Z can be simply approximated by D′X, i.e., Z ≅ D′X, which is obtained by multiplying D′ on both sides of Eq.(4). After this step, the code matrix Z and the dictionary D are determined.
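Step-1 can be sketched with numpy's SVD (hypothetical function name; here the least-squares code is computed exactly, whereas the text approximates it by a direct matrix product):

```python
import numpy as np

def low_rank_dictionary(X, M=32):
    """Step-1 sketch: learn a dictionary D by low rank approximation.

    X: data matrix whose columns are vectorized patches (pixels x patches).
    Returns D (columns d_m = lambda_m * u_m), the code matrix Z such
    that DZ is the rank-M approximation of X, and the percentage
    approximation error of Eq. (5).
    """
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    D = U[:, :M] * s[:M]                    # d_m = lambda_m * u_m
    # Columns of D are orthogonal, so the least-squares code has a
    # closed form; the paper further approximates Z by D'X.
    Z = np.linalg.solve(D.T @ D, D.T @ X)
    err = np.sum(s[M:] ** 2) / np.sum(s ** 2) * 100.0
    return D, Z, err
```

If M equals the rank of X, the error vanishes and DZ recovers X exactly, matching the limiting case described above.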

Step-2

Define the filter matrix A by using the learned dictionary D, given by

A = [D, -D] = [d_1, \dots, d_M, -d_1, \dots, -d_M]. \quad (6)

Also, we define a new code vector Z̃ as

\tilde{Z} = [h(Z)', h(-Z)']'. \quad (7)

Then the objective function Eq.(3) is equivalent to

\mathcal{L} = \| h(A'X) - \tilde{Z} \|^2 + \| X - A\tilde{Z} \|^2 + g(A, \tilde{Z}), \quad (8)

where Z̃ is the stack of the new code vectors corresponding to Z, and Z̃ ≅ h(A′X) because Z ≅ D′X. Then, a vectorized patch X can be encoded as a vector Z̃ by the encoder h(A′X). For example, if X ≅ 2d1 − 3d2, then the code vector is [2, 0]′ if A = [d1, d2], and it is [2, 0, 0, 3]′ if A = [d1, d2, −d1, −d2], which clearly shows that the longer code vector preserves more information about X. The rationale of Eq.(6)&(7) is that the ReLu layer rejects any negative signal (i.e., code) output from the convolution layer; without the negated filters, nearly half of the signals would be lost in each block, harming the performance of the CNN. After this step, the filters of the convolution layer are determined.
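The worked example in the text can be reproduced numerically with a toy two-atom dictionary (a sketch; the standard basis vectors below are illustrative, not the learned atoms):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Toy illustration of Eq. (6)-(7) with an orthonormal 2-atom dictionary.
d1 = np.array([1.0, 0.0, 0.0])
d2 = np.array([0.0, 1.0, 0.0])
D = np.column_stack([d1, d2])
A = np.hstack([D, -D])                 # Eq. (6): A = [D, -D]
X = 2 * d1 - 3 * d2                    # the example patch from the text

code_short = relu(D.T @ X)             # ReLu after A = D       -> [2, 0]
code_long = relu(A.T @ X)              # ReLu after A = [D, -D] -> [2, 0, 0, 3]

# Since h(Z) - h(-Z) = Z, the signed code (and hence X) is recoverable
# from the long code but not from the short one.
Z_recovered = code_long[:2] - code_long[2:]
```

This makes the rationale concrete: the negated filter copies preserve the signal that ReLu would otherwise discard.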

Step-3

Perform feature map normalization. The output from the ReLu layer is a feature map serving as the input to the next layer. The size of the feature map is K1 × K2 × K3 (i.e. height × width × channel). The values of the feature map at one spatial location can be assembled to a code vector Z of length K3. By assembling all of the code vectors from the training dataset, a data matrix is obtained, and each row of this matrix is normalized by subtracting the mean and dividing by the standard deviation. The rows of the code matrix Z from a single input image will also be normalized in the same way by using the same values of mean and standard deviation. This normalization is essentially equivalent to batch-normalization [34] which has been shown to improve CNN accuracy. After this step, the parameters (i.e. mean and standard deviation values) of the normalization layer are determined.
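Step-3's per-channel normalization can be sketched as follows (hypothetical function names; statistics are fit once on the training feature maps and then reused, as described above):

```python
import numpy as np

def fit_normalization(feature_maps):
    """Step-3 sketch: per-channel mean/std over all spatial locations
    of all training feature maps (each K1 x K2 x K3)."""
    codes = np.concatenate([f.reshape(-1, f.shape[-1]) for f in feature_maps])
    return codes.mean(axis=0), codes.std(axis=0)

def apply_normalization(feature_map, mean, std, eps=1e-6):
    """Normalize every length-K3 code vector with the stored statistics."""
    return (feature_map - mean) / (std + eps)
```

After fitting, the training feature maps have per-channel mean near 0 and standard deviation near 1, the same effect as batch normalization at inference time.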

2.5.2 Supervised learning in the 6th block

The 6th block can be configured either as a classifier or regressor. In the classification configuration, a softmax function is used to predict class membership based on the feature vector from the 5th block. Since it is a binary (soft vs. stiff) classification task, the softmax function reduces to a logistic function, given by

y = \frac{1}{1 + \exp\left( -\sum_{i=1}^{64} w_i x_i - b \right)} \quad (9)

where {w1, …, w64, b} are the unknown scalar parameters and [x1, …, x64] is the feature vector from the 5th block. Usually, a discrimination threshold (e.g., 0.5) is specified for binary classification. If y is greater than or equal to the threshold, the input is classified as stiff; if y is smaller than the threshold, it is classified as soft. With the labeled training data (i.e., image data with known mechanical properties), the 65 parameters of Eq.(9) can be determined through supervised learning using the cross-entropy loss function and the conjugate gradient optimization algorithm.
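The inference side of Eq.(9) is just the logistic function plus a threshold, sketched here (training by cross-entropy/conjugate gradient is omitted; function names are hypothetical):

```python
import math

def stiffness_score(x, w, b):
    """Eq. (9): logistic output in (0, 1) from a 64-number feature
    vector x, weights w, and bias b."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-s))

def classify(x, w, b, threshold=0.5):
    """Threshold the score: >= threshold -> stiff, else soft."""
    return "stiff" if stiffness_score(x, w, b) >= threshold else "soft"
```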

In the regression configuration, a multiple output linear regressor predicts the values of {α1, α2, α3} in Eq.(1) based on the feature vector from the 5th block, which is given by

\alpha_i = \sum_{j=1}^{64} w_{ij} x_j + b_i, \quad i = 1, 2, 3 \quad (10)

where {wij, bi, i = 1, 2, 3, j = 1, …, 64} are the unknown scalar parameters. With the labeled training data, the 195 parameters of this regressor can be learned by using the least squares regression algorithm. Once the parameters {α1, α2, α3} are predicted by the regressor, the stress-strain curves can be reconstructed by using Eq.(1).
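The least-squares fit of the 195 parameters (3×64 weights plus 3 biases) can be sketched with numpy (hypothetical function names):

```python
import numpy as np

def fit_regressor(X, alphas):
    """Eq. (10) sketch: least-squares fit of the linear map from
    feature vectors to {a1, a2, a3}.

    X: (n_samples, 64) feature vectors; alphas: (n_samples, 3).
    Returns a (65, 3) parameter matrix (weights + bias row).
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    W, *_ = np.linalg.lstsq(Xb, alphas, rcond=None)
    return W

def predict(W, x):
    """Predict (a1, a2, a3) for one 64-number feature vector."""
    return np.append(x, 1.0) @ W
```

The predicted coefficients are then plugged into Eq.(1) to reconstruct the two stress-strain curves.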

2.5.3 Data augmentation

Data augmentation methods are extensively used in Deep Learning applications [20, 43–45] to generate more training data. In this study, two data augmentation methods were used: image splitting and flipping (Figure 5). A 3D image of N slices can be split into patches using a sliding window with a stride of 128, where each patch has size 256×256×N. As a result of image splitting, 1678 patches were generated. Furthermore, each patch was flipped along the horizontal and/or vertical direction, producing 6712 patches in total. The elastic properties corresponding to image patches from the same GLBP tissue sample were assumed to be identical.
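The splitting-and-flipping scheme can be sketched as follows (hypothetical function name; the fourfold expansion matches the reported 1678 → 6712 patch counts):

```python
import numpy as np

def augment(volume, patch=256, stride=128):
    """Split an H x W x N image into patch x patch x N windows with
    the given stride, then add horizontal, vertical, and combined
    flips of every window (4 variants per window)."""
    h, w, _ = volume.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            p = volume[i:i + patch, j:j + patch, :]
            patches += [p, p[::-1], p[:, ::-1], p[::-1, ::-1]]
    return patches
```

For a 512×512×N image, the sliding window yields 3×3 = 9 positions, so 36 patches after flipping.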

Figure 5.


An example of data augmentation to generate image patches.

2.6 A comparative study of network structures

Given the relatively large size of the CNN compared to the dataset, a natural question is whether reducing the number of layers or filters will significantly impact the performance. Given the huge design space, it would be impractical to evaluate all possible simplifications of the CNN structure. In this study, we chose to investigate two simplified CNNs for comparison, named CNN-s1 and CNN-s2, respectively. CNN-s1: in Step-2 of the unsupervised learning in section 2.5.1, the filter matrix A was simplified as A = D, which reduces the number of filters. CNN-s2: the ReLu and normalization layers were removed, and the filter matrix A was simplified as A = D. The structure of CNN-s2 is similar to that in [39].

3. RESULTS

3.1 Unsupervised deep learning

The learned filters of the CNN are visualized in Figure 6. The filters in the 2nd block (Fig. 6a) are local image feature detectors, resembling the local fiber network structures. The filters in the other blocks (Fig. 6b–d) are more abstract, essentially representing various combinations of the local structures at different length scales and locations. For example, the filters in the 2nd block all have size 33×33×3; they identify local structural patterns in the input image, producing a feature map of size 32×32×64 that records the results of the local structural pattern match. Then the filters in the 3rd block, each of size 5×5×64, operate on the feature map from the 2nd block to produce a new feature map of size 16×16×64, which also records the results of pattern match. The filters in the 3rd block resemble the patterns of the feature map from the 2nd block, and therefore they evaluate combinations of local structural patterns in the input image. Taking into account the 2×2 max-pooling operation in the 2nd block, the effective size (a.k.a. receptive field size) of the filters in the 3rd block is 69×69×3 measured in the space of the input image, much larger than the size of the filters in the 2nd block.

Figure 6.


Examples of the learned filters. (a) The 32 filters in the convolution layer of the 2nd block, the 32 opposites of these filters are not shown. The red box contains one filter (size is 33×33×3). (b) One of the filters in the convolution layer of the 3rd block. (c) One of the filters in the convolution layer of the 4th block. (d) One of the filters in the convolution layer of the 5th block.

3.2 Classification

Classification performance was evaluated through ten-fold cross validation using the image patch data. In each round of cross validation, 90% of the image patches and corresponding overall stiffness values (i.e., sign of α1) were randomly selected as the training data; the remaining 10% of the data were used as the testing data to test whether the trained classifier can predict the sign of α1, i.e., identify whether the tissue sample (corresponding to an image patch) is soft or stiff. The classification accuracy, defined as (TP+TN)/(TP+TN+FP+FN), the sensitivity, defined as TP/(TP+FN), and the specificity, defined as TN/(TN+FP), were calculated to assess performance. Here, true positive (TP) is the number of stiff tissue patches correctly identified as stiff; false negative (FN) is the number of stiff tissue patches incorrectly identified as soft; true negative (TN) is the number of soft tissue patches correctly identified as soft; and false positive (FP) is the number of soft tissue patches incorrectly identified as stiff. In addition, AUC, defined as the area under a receiver operating characteristic (ROC) curve, was calculated as a measure of the overall classification performance. For comparison, a baseline softmax classifier using the skewness of the image histogram [46] as the only feature was also trained and tested. Since the histograms of an image and its flipped version are identical, the flipped image patches were not used in the classification experiment. Two simplified versions of the CNN, CNN-s1 and CNN-s2, with fewer filters and fewer layers (details in the Methods section), were also tested.
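The three confusion-matrix metrics defined above can be computed directly (a sketch with a hypothetical function name and arbitrary example counts):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity as defined in the text
    (stiff = positive class)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of stiff patches caught
    specificity = tn / (tn + fp)   # fraction of soft patches caught
    return accuracy, sensitivity, specificity
```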

ROC curves, as shown in Figure 7, were obtained by varying the discrimination threshold for each classifier. The performances of the proposed CNN, CNN-s1, CNN-s2, and the skewness-based softmax classifier using 0.5 as the discrimination threshold for classification, are reported in Table-1. The proposed CNN achieved the best performance, the skewness-based softmax classifier had the worst performance, and the two simplified CNNs had moderate performance.

Figure 7.


ROC curves of different classifiers. The ROC curve of an ideal classifier passes through three points: the origin (0, 0), the top-left corner (0, 1), and the top-right corner (1, 1). The ROC curve of a random-guess classifier (e.g., tossing a coin to determine the stiffness) is a straight line connecting the origin (0, 0) and the top-right corner (1, 1). The red curve, produced by the proposed CNN, is the closest to the ideal curve.

Table-1.

Classification Performance

Method                              Accuracy    Sensitivity   Specificity   AUC
proposed CNN                        84±2.5%     82±4.1%       86±3.6%       0.92
CNN-s1                              78±2.8%     73±5.5%       80±3.7%       0.86
CNN-s2                              75±3.5%     67±5.6%       80±4.9%       0.84
skewness-based softmax classifier   71±3.2%     51±5.9%       84±3.9%       0.76

3.3 Regression

Regression performance was evaluated using a leave-one-out cross validation approach to test whether the trained regressor can predict the values of {α1, α2, α3}, which were used to reconstruct the stress-strain curve of each tissue sample by Eq.(1). In each round of the cross validation, the image patches and the stress-strain curves from one of the 48 tissue samples were used as the testing data to evaluate the accuracy of the regressor, and the remaining data were used as the training data to determine the parameters of the regressor. The predicted {α1, α2, α3} values for each of the image patches from the test tissue sample were averaged to obtain the final {α1, α2, α3} predictions for the whole tissue sample.
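The leave-one-out protocol with per-sample averaging can be sketched generically (hypothetical function names; `fit` and `predict` stand in for the regressor of section 2.5.2):

```python
import numpy as np

def leave_one_out(patch_features, patch_alphas, sample_ids, fit, predict):
    """LOOCV sketch: hold out every patch of one tissue sample, train on
    the rest, and average the per-patch predictions to obtain the
    whole-sample {a1, a2, a3}."""
    preds = {}
    for sid in np.unique(sample_ids):
        test = sample_ids == sid
        model = fit(patch_features[~test], patch_alphas[~test])
        preds[sid] = predict(model, patch_features[test]).mean(axis=0)
    return preds
```

Averaging over a sample's patches reduces the variance of the final prediction relative to any single patch.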

From the cross validation, the MAE errors (Eq.(2)) in the predicted stress-strain curves were 0.021±0.015 and 0.031±0.029, and NMAE errors were 12.6±9.0% and 10.9±10%, compared to the actual S11~E11 and S22~E22 curves, respectively. Figure 8a shows an exemplary set of experimentally measured and predicted curves for one sample, and the error distribution across all of the samples is given in Figure 8b. The full set of predicted curves for all 48 samples are provided in the Supplementary Material.

Figure 8.


(a) Representative stress-strain curves predicted by the deep learning model shown as dashed lines, and the stress-strain curves obtained from mechanical testing shown as solid lines. S11~E11 curves are shown in red. S22~E22 curves are shown in blue. (b) The mean absolute error (MAE) distribution of all samples.

4. DISCUSSION

In this study, we developed a Deep Learning approach utilizing a deep CNN to estimate the elastic properties of collagenous tissues directly from noninvasive microscopy images. To our best knowledge, this is the first study in which Deep Learning techniques were used to derive nonlinear anisotropic elastic properties directly from tissue microscopy images. This work was motivated by the lengthy, complex, and destructive nature of traditional tissue mechanical testing. While it takes only about 10–30 minutes to obtain SHG images of a tissue sample, it takes much longer (hours) to prepare testing samples, set up testing and measurement instruments, and perform the actual mechanical test on each sample to obtain the stress-strain response curves. It took several months to obtain the data from the 48 samples used in this study. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many biological materials as long as tissue mechanics are structurally determined and the structure can be noninvasively imaged (not necessarily using SHG).

Traditional machine learning methods [17] require hand-engineered features (i.e. features defined by human experts), which are difficult to obtain for this application. Rezakhaniha and co-workers [47] have defined intuitive texture features of tissue fibers, such as waviness, straightness, bundle size, etc., but this requires time-consuming manual annotation. Moreover, it is unclear whether these hand-engineered features could fully describe the fiber network structural information. As an end-to-end solution, CNN eliminates the need for hand-engineered features. One factor limiting the use of CNN and Deep Learning methods in biomechanics applications, is that they generally require a large amount of training data [48, 49], while the sample size for mechanical testing of biological tissues is typically small, on the order of 10 – 100 samples. However, in this study, it is shown that the deep CNN can also work well with a small dataset by combining supervised and unsupervised learning methods, and utilizing data augmentation methods. As more images and mechanical testing data are collected, the performance of the CNN can be further improved.

The CNN architecture used in this study was specifically designed for this application. As explained in section 3.1, the 1st to 5th blocks of the CNN serve as automatic feature extractors that encode the input image into a feature vector of 64 numbers. The filters in the first convolution layer represent different local fiber network patterns, while the filters in the remaining convolution layers represent various combinations of these patterns at different locations and length scales. The 6th block of the CNN relates the feature vector (i.e., a code of the structure) to the elastic properties for classification and regression. Since the nonlinear relationship between tissue structure and elastic properties is described by the entire CNN (Figure 4), this approach is a structural approach: inferring mechanics from structural features extracted by the CNN. Two simplified versions of the CNN were tested, i.e., CNN-s1 and CNN-s2, with fewer filters and fewer layers. The results show that these simplifications led to a significant decrease in accuracy, which may result from signal loss during signal propagation due to the reduced number of filters, and from disruption of the encoding mechanism due to the removed layers, respectively.

The CNN also demonstrated superiority over a simple image-feature based method to estimate the overall stiffness of collagen-based materials. Raub and co-workers [46] showed that the skewness of an image histogram was correlated to the collagen concentration and the Young’s modulus of collagen gels. Therefore, a softmax classifier was built by using the skewness as the only input feature in this study. As demonstrated in the results (Figure 7), the CNN outperformed the softmax classifier by a large margin; and even the two simplified versions of the CNN performed better than the softmax classifier, which underscores the superiority of CNNs for automatically extracting fiber network features.

More importantly, we demonstrated that the CNN can predict the PCA parameters of the stress-strain curves, such that the entire anisotropic stress-strain response of GLBP tissues can be estimated. For a nonlinear elastic response, it is well known that the Young’s modulus or stiffness cannot fully describe the tissue mechanical behavior, since the tangential value changes at different stress/strain levels along the nonlinear stress-strain curve. Thus, the PCA parameters offer a much more comprehensive look at the tissue elastic properties. Interestingly, we found that for this application, the overall “shape” of a stress-strain curve can be described with a single parameter, α1 in Eq.(1). The novel PCA based approach to represent stress-strain curves developed in this study may facilitate more thorough analysis and comparison of tissue stress-strain responses over basic stiffness metrics.

The performance of the CNN can be further improved. Some poor predictions occurred, which are most likely due to a common problem in machine learning: if the patterns of an input (i.e., a tissue image in this application) are very different from those of the training data, the trained model will not recognize the input. As suggested by recent work on the relationship between deep learning and training data [50], this problem can be addressed by using more training data and fine-tuning the neural network in our future work. For extremely atypical samples, a rejection option [51, 52] can be incorporated into the machine learning models, so that the enhanced models refuse to make a decision when the patterns of an input differ greatly from those of the training data.
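A minimal confidence-threshold rule is one simple instance of the reject-option idea; the more principled formulations in [51, 52] learn the rejection rule jointly with the classifier, but the sketch below (with an arbitrary threshold) conveys the behavior.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_reject(scores, threshold=0.8):
    """Return the predicted class index, or None when the classifier is
    not confident enough (a simple confidence-threshold rejection rule)."""
    p = softmax(scores)
    return int(np.argmax(p)) if p.max() >= threshold else None

confident = predict_with_reject(np.array([4.0, 0.0, 0.0]))  # clear winner
rejected = predict_with_reject(np.array([0.1, 0.0, 0.0]))   # ambiguous input
```

Rejected inputs could then be routed to conventional mechanical testing instead of receiving an unreliable automatic estimate.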

We note that there are many works in the literature on estimating the nonlinear mechanical properties of biological tissues by noninvasive or minimally invasive techniques. Most of these works are based on optimization strategies and constitutive models, with the goal of obtaining constitutive model parameters that represent the material mechanical properties. An energy-based method [53] was proposed to estimate myocardial stiffness from reduced 2D deformation data. An optimized virtual field method [54] was developed to identify myocardial stiffness from magnetic resonance elastography. Iterative finite element simulation and optimization methods [55, 56] were developed to determine human aorta material properties from strain measurements. Our group has also proposed a stress-matching based method [57] to estimate human aorta material properties. Recently, a machine learning based method [58], utilizing recurrent neural networks, was proposed for modeling the indentation force response of soft tissue; it used a robotic system to measure tissue forces and deformations. These methods recover mechanical properties from multiple deformed states of the subject by using noninvasive or minimally invasive measurement data from force sensors and motor controllers [58], CT [57], MR [54], or ultrasound scanners [55, 56]. All of them require temporal deformation measurements of the subject under external loading, which contain stress-strain information implicitly or explicitly. Unlike these prior efforts, the approach proposed in this study does not rely on deformation measurement data; it uses microscopy imaging to noninvasively reveal the biological tissue structure at the unloaded state, from which the mechanical properties can be inferred by deep learning techniques.
This approach suggests a new avenue for the fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images, enabling many potential applications such as serving as a quality control tool to screen tissues [59] for the manufacturing of BHVs.

The method was only tested on one type of collagenous tissue (i.e., GLBP), which is a limitation of this study; additional tissue types should be tested in future studies. However, based on the well-known universal approximation theorem (UAT) and its variants [36, 60–63] in machine learning, it is theoretically feasible to apply this approach to other types of tissues. The original UAT [60] shows that any continuous function (i.e., input-output relationship) can be uniformly approximated by a neural network with sigmoid units. Recently, it was shown that a neural network with unbounded activation functions is still a universal approximator [61]. Thus, deep neural networks (e.g., CNNs) have the capacity to model the complex relationship between a structural image (input) and stress-strain curves (output) under two conditions: 1) there is a deterministic relationship between tissue structure and mechanics, and 2) the tissue structure can be imaged. Although theoretically justified, implementing this approach for other tissues is non-trivial; it requires a substantial amount of work on tissue preparation, mechanical testing, imaging, and development of CNN models, and therefore warrants a future study.

5. CONCLUSION

In conclusion, this study demonstrated the feasibility of using Deep Learning techniques for fast and noninvasive assessment of GLBP elastic properties from microscopy images. The main contributions of this study are: 1) the use of PCA to parameterize equi-biaxial stress-strain curves and quantify the overall stiffness; 2) a custom deep convolutional neural network designed to automatically extract structural patterns of collagenous tissues, perform classification to identify the overall stiffness, and perform regression to predict the PCA parameters of the nonlinear anisotropic stress-strain curves; and 3) the combination of unsupervised deep learning with supervised learning and data augmentation to overcome the challenge that small datasets pose for Deep Learning in the field of biomechanics. The developed approach was evaluated through cross validation, in which an average classification accuracy of 84% and average regression errors of 0.021 and 0.031 were achieved. This study clearly demonstrates the great potential of Machine Learning techniques to estimate tissue mechanical properties solely from noninvasive microscopy images.

Supplementary Material

supplement

Statement of Significance.

In this study, we developed, to the best of our knowledge, the first Deep Learning-based approach to estimate the elastic properties of collagenous tissues directly from noninvasive second harmonic generation images. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many structure-based biological materials, and it enables many potential applications, such as serving as a quality control tool to select tissues for the manufacturing of medical devices (e.g., bioprosthetic heart valves).

Acknowledgments

Research for this project is funded in part by NIH grant R01 HL104080. Liang Liang is supported by an American Heart Association Post-doctoral fellowship 16POST30210003. The authors thank Fatiesa Sulejmani and Andres Caballero for assisting in the collection of biaxial testing data and SHG images used in this study, as well as Caitlin Martin for comments and suggestions.


CONFLICT OF INTEREST STATEMENT

None.

References

  • 1.Fomovsky GM, Thomopoulos S, Holmes JW. Contribution of Extracellular Matrix to the Mechanical Properties of the Heart. Journal of molecular and cellular cardiology. 2010;48(3):490–496. doi: 10.1016/j.yjmcc.2009.08.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Sacks MS. Incorporation of Experimentally-Derived Fiber Orientation into a Structural Constitutive Model for Planar Collagenous Tissues. Journal of Biomechanical Engineering. 2003;125(2):280–287. doi: 10.1115/1.1544508. [DOI] [PubMed] [Google Scholar]
  • 3.Nimni ME, Cheung D, Strates B, Kodama M, Sheikh K. Chemically modified collagen: A natural biomaterial for tissue replacement. Journal of Biomedical Materials Research. 1987;21(6):741–771. doi: 10.1002/jbm.820210606. [DOI] [PubMed] [Google Scholar]
  • 4.Khor E. Methods for the treatment of collagenous tissues for bioprostheses. Biomaterials. 1997;18(2):95–105. doi: 10.1016/s0142-9612(96)00106-8. [DOI] [PubMed] [Google Scholar]
  • 5.Vesely I. The evolution of bioprosthetic heart valve design and its impact on durability. Cardiovascular Pathology. 2003;12(5):277–286. doi: 10.1016/s1054-8807(03)00075-9. [DOI] [PubMed] [Google Scholar]
  • 6.Lam MT, Wu JC. Biomaterial applications in cardiovascular tissue repair and regeneration. Expert review of cardiovascular therapy. 2012;10(8):1039–1049. doi: 10.1586/erc.12.99. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Brown P. Abdominal Wall Reconstruction Using Biological Tissue Grafts. AORN Journal. 2009;90(4):513–524. doi: 10.1016/j.aorn.2009.05.024. [DOI] [PubMed] [Google Scholar]
  • 8.Demange MK, de Almeida AM, Rodeo SA. Updates in biological therapies for knee injuries: tendons. Current Reviews in Musculoskeletal Medicine. 2014;7(3):239–246. doi: 10.1007/s12178-014-9230-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Huerta S, Varshney A, Patel PM, Mayo HG, Livingston EH. Biological mesh implants for abdominal hernia repair: Us food and drug administration approval process and systematic review of its efficacy. JAMA Surgery. 2016;151(4):374–381. doi: 10.1001/jamasurg.2015.5234. [DOI] [PubMed] [Google Scholar]
  • 10.Zhang L, Lake SP, Lai VK, Picu CR, Barocas VH, Shephard MS. A coupled fiber-matrix model demonstrates highly inhomogeneous microstructural interactions in soft tissues under tensile load. Journal of biomechanical engineering. 2013;135(1):011008–011008. doi: 10.1115/1.4023136. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Jin T, Stanciulescu I. Computational modeling of the arterial wall based on layer-specific histological data. Biomech Model Mechanobiol. 2016;15(6):1479–1494. doi: 10.1007/s10237-016-0778-1. [DOI] [PubMed] [Google Scholar]
  • 12.Jin T, Stanciulescu I. Numerical simulation of fibrous biomaterials with randomly distributed fiber network structure. Biomech Model Mechanobiol. 2016;15(4):817–830. doi: 10.1007/s10237-015-0725-6. [DOI] [PubMed] [Google Scholar]
  • 13.D’Amore A, Amoroso N, Gottardi R, Hobson C, Carruthers C, Watkins S, Wagner WR, Sacks MS. From single fiber to macro-level mechanics: A structural finite-element model for elastomeric fibrous biomaterials. Journal of the Mechanical Behavior of Biomedical Materials. 2014;39:146–161. doi: 10.1016/j.jmbbm.2014.07.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Picu RC. Mechanics of random fiber networks-a review. Soft Matter. 2011;7(15):6768–6785. [Google Scholar]
  • 15.Liu Q, Lu Z, Hu Z, Li J. Finite element analysis on tensile behaviour of 3D random fibrous materials: Model description and meso-level approach. Materials Science and Engineering: A. 2013;587:36–45. [Google Scholar]
  • 16.Wicker BK, Hutchens HP, Wu Q, Yeh AT, Humphrey JD. Normal basilar artery structure and biaxial mechanical behaviour. Comput Methods Biomech Biomed Engin. 2008;11(5):539–51. doi: 10.1080/10255840801949793. [DOI] [PubMed] [Google Scholar]
  • 17.LeCun Y, Bengio Y, Hinton GE. Deep Learning. Nature. 2015;521:436–444. doi: 10.1038/nature14539. [DOI] [PubMed] [Google Scholar]
  • 18.Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A Survey on Deep Learning in Medical Image Analysis. arXiv:1702.05747. 2017 doi: 10.1016/j.media.2017.07.005. [DOI] [PubMed] [Google Scholar]
  • 19.Shen D, Wu G, Suk HI. Deep Learning in Medical Image Analysis. Annual Review of Biomedical Engineering. 2017;19(1):221–248. doi: 10.1146/annurev-bioeng-071516-044442. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. Neural Information Processing Systems. 2012 [Google Scholar]
  • 21.LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 1998;86(11):2278–2324. [Google Scholar]
  • 22.He K, Zhang X, Ren S, Sun J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. IEEE International Conference on Computer Vision. 2015 [Google Scholar]
  • 23.Kokkinos I. Pushing the Boundaries of Boundary Detection using Deep Learning. Intl Conf on Learning Representations. 2016 [Google Scholar]
  • 24.Taigman Y, Yang M, Ranzato MA, Wolf L. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. IEEE Conference on Computer Vision and Pattern Recognition. 2014 [Google Scholar]
  • 25.He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition. 2016 [Google Scholar]
  • 26.Caballero A, Sulejmani F, Martin C, Pham T, Sun W. Evaluation of Transcatheter Heart Valve Biomaterials: Biomechanical Characterization of Bovine and Porcine Pericardium. Journal of the Mechanical Behavior of Biomedical Materials (under review) 2017 doi: 10.1016/j.jmbbm.2017.08.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Sacks MS, Sun W. Multiaxial Mechanical Behavior of Biological Materials. Annual Review of Biomedical Engineering. 2003;5(1):251–284. doi: 10.1146/annurev.bioeng.5.011303.120714. [DOI] [PubMed] [Google Scholar]
  • 28.Sun W, Sacks M, Fulchiero G, Lovekamp J, Vyavahare N, Scott M. Response of heterograft heart valve biomaterials to moderate cyclic loading. Journal of Biomedical Materials Research Part A. 2004;69A(4):658–669. doi: 10.1002/jbm.a.30031. [DOI] [PubMed] [Google Scholar]
  • 29.Sun W, Abad A, Sacks MS. Simulated Bioprosthetic Heart Valve Deformation under Quasi-Static Loading. Journal of Biomechanical Engineering. 2005;127(6):905–914. doi: 10.1115/1.2049337. [DOI] [PubMed] [Google Scholar]
  • 30.Sellaro TL, Hildebrand D, Lu Q, Vyavahare N, Scott M, Sacks MS. Effects of collagen fiber orientation on the response of biologically derived soft tissue biomaterials to cyclic loading. Journal of Biomedical Materials Research Part A. 2007;80A(1):194–205. doi: 10.1002/jbm.a.30871. [DOI] [PubMed] [Google Scholar]
  • 31.Devijver PA. Pattern Recognition: A Statistical Approach. Prentice-Hall; London, GB: 1982. [Google Scholar]
  • 32.Heimann T, Meinzer HP. Statistical shape models for 3D medical image segmentation: a review. Medical Image Analysis. 2009;13(4):543–563. doi: 10.1016/j.media.2009.05.004. [DOI] [PubMed] [Google Scholar]
  • 33.Liang L, Liu M, Martin C, Elefteriades JA, Sun W. A machine learning approach to investigate the relationship between shape features and numerically predicted risk of ascending aortic aneurysm. Biomech Model Mechanobiol. 2017 doi: 10.1007/s10237-017-0903-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Ioffe S, Szegedy C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Proceedings of The 32nd International Conference on Machine Learning. 2015:448–456. [Google Scholar]
  • 35.Glorot X, Bordes A, Bengio Y. Deep Sparse Rectifier Neural Networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 2011 [Google Scholar]
  • 36.Goodfellow I, Bengio Y, Courville A. Deep Learning. The MIT Press; 2016. [Google Scholar]
  • 37.Vedaldi A, Lenc K. MatConvNet: Convolutional Neural Networks for MATLAB. Proceedings of the 23rd ACM international conference on Multimedia; ACM, Brisbane, Australia. 2015. pp. 689–692. [Google Scholar]
  • 38.Jarrett K, Kavukcuoglu K, Ranzato MA, LeCun Y. What is the Best Multi-Stage Architecture for Object Recognition? International Conference on Computer Vision. 2009 [Google Scholar]
  • 39.Lei Z, Yi D, Li SZ. Learning Stacked Image Descriptor for Face Recognition. IEEE Transactions on Circuits and Systems for Video Technology. 2016;26(9):1685–1696. [Google Scholar]
  • 40.Bengio Y, Lamblin P, Popovici D, Larochelle H. Greedy layer-wise training of deep networks. Proceedings of the 19th International Conference on Neural Information Processing Systems; MIT Press, Canada. 2006. pp. 153–160. [Google Scholar]
  • 41.Hinton GE, Osindero S, Teh YW. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation. 2006;18(7):1527–1554. doi: 10.1162/neco.2006.18.7.1527. [DOI] [PubMed] [Google Scholar]
  • 42.Markovsky I. Structured low-rank approximation and its applications. Automatica. 2008;44(4):891–909. [Google Scholar]
  • 43.Kooi T, Litjens G, van Ginneken B, Gubern-Mérida A, Sánchez CI, Mann R, den Heeten A, Karssemeijer N. Large scale deep learning for computer aided detection of mammographic lesions. Medical Image Analysis. 2017;35:303–312. doi: 10.1016/j.media.2016.07.007. [DOI] [PubMed] [Google Scholar]
  • 44.Isensee F, Kickingereder P, Bonekamp D, Bendszus M, Wick W, Schlemmer HP, Maier-Hein K. Brain Tumor Segmentation Using Large Receptive Field Deep Convolutional Neural Networks. In: Maier-Hein GFKH, Deserno GLTM, Handels H, Tolxdorff T, editors. Bildverarbeitung für die Medizin 2017: Algorithmen - Systeme - Anwendungen Proceedings des Workshops vom 12 bis 14 März 2017 in Heidelberg. Springer Berlin Heidelberg; Berlin, Heidelberg: 2017. pp. 86–91. [Google Scholar]
  • 45.Liu S, Zheng H, Fengc Y, Lid W. Prostate Cancer Diagnosis using Deep Learning with 3D Multiparametric MRI, SPIE Medical Imaging. International Society for Optics and Photonics [Google Scholar]
  • 46.Raub CB, Putnam AJ, Tromberg BJ, George SC. Predicting bulk mechanical properties of cellularized collagen gels using multiphoton microscopy. Acta Biomaterialia. 2010;6(12):4657–4665. doi: 10.1016/j.actbio.2010.07.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Rezakhaniha R, Agianniotis A, Schrauwen JTC, Griffa A, Sage D, Bouten CVC, van de Vosse FN, Unser M, Stergiopulos N. Experimental investigation of collagen waviness and orientation in the arterial adventitia using confocal laser scanning microscopy. Biomech Model Mechanobiol. 2012;11(3):461–473. doi: 10.1007/s10237-011-0325-z. [DOI] [PubMed] [Google Scholar]
  • 48.Deng J, Dong W, Socher R, Li LJ, Kai L, Li FF. ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009:248–255. [Google Scholar]
  • 49.Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision. 2015;115(3):211–252. [Google Scholar]
  • 50.Sun C, Shrivastava A, Singh S, Gupta A. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. 2017 https://arxiv.org/abs/1707.02968.
  • 51.Bartlett PL, Wegkamp MH. Classification with a Reject Option using a Hinge Loss. Journal of Machine Learning Research. 2008;9:1823–1840. [Google Scholar]
  • 52.Geifman Y, El-Yaniv R. Selective Classification for Deep Neural Networks. 2017 https://arxiv.org/abs/1705.08500.
  • 53.Nasopoulou A, Nordsletten DA, Niederer SA, Lamata P. Feasibility of the Estimation of Myocardial Stiffness with Reduced 2D Deformation Data. In: Pop M, Wright GA, editors. Functional Imaging and Modelling of the Heart: 9th International Conference, FIMH 2017; Toronto, ON, Canada. June 11–13, 2017; Cham: Proceedings, Springer International Publishing; 2017. pp. 357–368. [Google Scholar]
  • 54.Miller R, Kolipaka A, Nash MP, Young AA. Identification of Transversely Isotropic Properties from Magnetic Resonance Elastography Using the Optimised Virtual Fields Method. In: Pop M, Wright GA, editors. Functional Imaging and Modelling of the Heart: 9th International Conference, FIMH 2017; Toronto, ON, Canada. June 11–13, 2017; Cham: Proceedings Springer International Publishing; 2017. pp. 421–431. [Google Scholar]
  • 55.Wittek A, Karatolios K, Bihari P, Schmitz-Rixen T, Moosdorf R, Vogt S, Blase C. In vivo determination of elastic properties of the human aorta based on 4D ultrasound data. Journal of the Mechanical Behavior of Biomedical Materials. 2013;27:167–183. doi: 10.1016/j.jmbbm.2013.03.014. [DOI] [PubMed] [Google Scholar]
  • 56.Wittek A, Derwich W, Karatolios K, Fritzen CP, Vogt S, Schmitz-Rixen T, Blase C. A finite element updating approach for identification of the anisotropic hyperelastic properties of normal and diseased aortic walls from 4D ultrasound strain imaging. Journal of the Mechanical Behavior of Biomedical Materials. 2016;58:122–138. doi: 10.1016/j.jmbbm.2015.09.022. [DOI] [PubMed] [Google Scholar]
  • 57.Liu M, Liang L, Sun W. A new inverse method for estimation of in vivo mechanical properties of the aortic wall. Journal of the Mechanical Behavior of Biomedical Materials. 2017;72:148–158. doi: 10.1016/j.jmbbm.2017.05.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Nowell R, Shirinzadeh B, Smith J, Zhong Y. Modelling the indentation force response of non-uniform soft tissue using a recurrent neural network. 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob) 2016:377–382. [Google Scholar]
  • 59.Nguyen T, Lam HL, Zhou J, Romero CM, Kafesjian R, Guo XG, Huynh VL. Method of testing bioprosthetic heart valve leaflets. US Patent 6,245,105 B1. 2001.
  • 60.Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems. 1989;2(4):303–314. [Google Scholar]
  • 61.Sonoda S, Murata N. Neural network with unbounded activation functions is universal approximator. Applied and Computational Harmonic Analysis. 2017;43(2):233–268. [Google Scholar]
  • 62.Shaham U, Cloninger A, Coifman RR. Provable approximation properties for deep neural networks. Applied and Computational Harmonic Analysis. 2016 [Google Scholar]
  • 63.Hornik K. Approximation capabilities of multilayer feedforward networks. Neural Networks. 1991;4(2):251–257. [Google Scholar]
