Abstract
H-scan ultrasound (US) is a new imaging technology that estimates the relative size of acoustic scattering objects and structures. The purpose of this study was to introduce a three-dimensional (3D) H-scan US imaging approach for scatterer size estimation in volume space. Using a programmable research scanner (Vantage 256, Verasonics Inc, Kirkland, WA, USA) equipped with a custom volumetric imaging transducer (4DL7, Vermon, Tours, France), raw radiofrequency (RF) data was collected for offline processing to generate H-scan US volumes. A deep convolutional neural network (CNN) was modified and used to achieve voxel mapping from the input H-scan US image to underlying scatterer size. Preliminary studies were conducted using homogeneous gelatin-based tissue-mimicking phantom materials embedded with acoustic scatterers of varying sizes (15 to 250 μm) and concentrations (0.1 to 1.0%). Two additional phantoms were embedded with 63 or 125 μm-sized microspheres and used to test CNN estimation accuracy. In vitro results indicate that 3D H-scan US imaging can visualize the spatial distribution of acoustic scatterers of varying size at different concentrations (R2 > 0.85, p < 0.03). The scatterer size estimation results reveal that a CNN can achieve an average mapping accuracy of 93.3%. Overall, our preliminary in vitro findings reveal that 3D H-scan US imaging allows visualization of tissue scatterer patterns and that incorporating a CNN helps estimate the size of the acoustic scattering objects.
Keywords: Acoustic scatterer size, convolutional neural network, H-scan ultrasound, tissue characterization, volumetric imaging
INTRODUCTION
The use of noninvasive ultrasound (US) imaging for quantitative tissue characterization has been the focus of research efforts for several decades (Steifer and Lewandowski 2019). The overarching challenge is to find hidden patterns in the US data to reveal more information about tissue function and pathology (Kelly et al. 2018; Opacic et al. 2018; Steifer and Lewandowski 2019). Several promising US-based tissue characterization methods have been introduced, namely, backscatter classification (Kurokawa et al. 2016), integrated backscatter (Takami et al. 2019), spectral feature extraction (Fang et al. 2018), and tissue elasticity imaging (Hoyt et al. 2008a; Hoyt et al. 2008b). A potential limitation for some of these tissue characterization methods is that they use a relatively large kernel (window) of US data during quantification, which can impact spatial resolution and make in vivo measurement of local changes problematic. Since the visual criteria for scatterer size estimation are highly subjective, a deep convolutional neural network (CNN) can play an essential role in extracting image features and simplifying the estimation task, especially in vivo where scatterer size may be unknown (Bhonsle et al. 2018; Zhang et al. 2019; Zheng et al. 2018).
Recently, a new tissue characterization modality has emerged for the US classification of acoustic scatterers. Termed H-scan US (where the ‘H’ stands for Hermite or hue), this imaging approach links the mathematics of Gaussian-weighted Hermite (GH) functions to the physics of scattering and reflection from different tissue structures within a standard convolutional model of US pulse-echo systems (Ge et al. 2018; Khairalseed et al. 2017a; Khairalseed et al. 2018; Khairalseed et al. 2019a; Khairalseed and Hoyt 2019a; Parker 2016a; Parker 2016b). Specific integer orders, termed GHn, are related to the nth derivative of a Gaussian function. Matched filters employing specific orders of GHn functions are then used to analyze the spectral content of US backscattered echo signals and to colorize the display, providing visual discrimination between the major tissue scattering classes at high resolution. In general, lower frequency spectral content is generated from larger scattering structures whereas higher frequency echo content is produced by the US wave interacting with small scatterers of scale below the wavelength of the US transmit pulse (i.e. Rayleigh scatterers). Therefore, H-scan US is capable of estimating the relative size and spatial distribution of cellular structures and has shown promise in applications of monitoring cancer response to treatment (Khairalseed et al. 2017b; Khairalseed et al. 2019b). More recently, the H-scan US concept was modified to perform postbeamforming nonlinear filtering of backscattered US data and enhanced contrast-enhanced US imaging (Khairalseed et al. 2019c; Khairalseed and Hoyt 2019b).
To help improve the scatterer size estimation strategy, we developed a novel 3D H-scan US imaging system. The aim of this study was to validate this new US system using tissue-mimicking phantom materials. We investigate the feasibility of using 3D H-scan US volumes for tracking relative changes in scatterer size throughout the entire volume space. A comprehensive comparison of several machine learning methods was completed to find the most efficient one for the task, namely, GoogleNet, AlexNet, Visual Geometry Group (VGG), support vector machine (SVM) and linear regression (LR) models. Then a CNN architecture based on a modified VGG regression model was introduced to map the H-scan US image to scatterer size and enable real-time tissue characterization. Overall, it is hypothesized that the proposed US imaging technology can make scatterer size estimation in volume space possible.
MATERIALS AND METHODS
Phantom Material Fabrication
A series of homogeneous tissue-mimicking phantoms (N = 16) were prepared to contain a range of acoustic scatterers of varying size and concentration. Each phantom contained a base mixture of 75 g of gelatin (300 Bloom, Sigma Aldrich, St. Louis, MO, USA) and spherical US scatterers (U.S. Silica, Pacific, MO, USA) in 1 L of H2O. The diameter and concentration of the spherical scatterers were varied for each phantom produced: 15, 30, 40 or 250 μm and 0.1, 0.3, 0.5 or 1.0 % mass fraction, respectively. Two separate phantoms containing spherical scatterers that were 63 or 125 μm in diameter (0.3 % concentration) were then made to test the H-scan US imaging system estimation accuracy after training the CNN architecture. Phantom blocks were formed by heating the gelatin solution to at least 50 °C, pouring it into a rigid rectangular mold, and allowing it to cool overnight. The final material size was about 35 × 35 × 40 mm (depth × width × elevation) and all 3D H-scan US imaging was performed at room temperature (25 °C).
US Data Acquisition
Imaging locations were randomly selected on all sides of each phantom so that each one was imaged 14 different times, allowing repeated measurements from each phantom. Volumetric H-scan US data was collected using a programmable research scanner (Vantage 256, Verasonics Inc, Kirkland, WA, USA) integrated with a custom imaging transducer (4DL7, Vermon, Tours, France). This 192-element (0.2 mm pitch) transducer has an 8.5 MHz center frequency and a motor-controlled mechanism to rapidly sweep the linear array for 3D data acquisitions. The total scan angle was 27° (maximum negative and positive displacements of −13.5° and 13.5°, respectively) with the angular step size set to 0.135°, yielding 200 frames per volume. At each frame position, backscattered radiofrequency (RF) data was acquired using ultrafast plane wave imaging. Although the spatial resolution of US plane wave imaging is known to be inferior to focused US approaches, each plane wave frame exposes the entire image field with nearly uniform acoustic intensity and avoids resolution differences at depth and away from any focal zone. Spatial angular compounding improves the spatial resolution of plane wave-based H-scan US imaging (Khairalseed et al. 2018).
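For orientation, the sweep geometry described above (27° total span, 0.135° angular step, 200 frames per volume) can be sketched as follows; this is a bookkeeping illustration only, and the helper names are ours rather than the acquisition software's.

```python
import numpy as np

# Sweep parameters from the acquisition protocol
TOTAL_ANGLE_DEG = 27.0                   # full mechanical sweep of the array
N_FRAMES = 200                           # frames acquired per volume
STEP_DEG = TOTAL_ANGLE_DEG / N_FRAMES    # 0.135 degrees per frame

def frame_angle(frame_idx):
    """Mechanical tilt of the linear array for a given frame (degrees),
    starting at the maximum negative displacement of -13.5 degrees."""
    return -TOTAL_ANGLE_DEG / 2 + frame_idx * STEP_DEG

angles = np.array([frame_angle(i) for i in range(N_FRAMES)])
```

Stacking the 200 scan-converted frames at these angles (one per motor step) yields the 3D data set that the H-scan processing operates on.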
3D H-scan US Image Processing
After scan conversion, two parallel convolution filter kernels were applied to the acquired RF data sequences to measure the strength of the received signals relative to the GHn functions after normalization by the signal energy, Figure 1. To minimize correlation between the GH spectra and increase image contrast, we used more disparate functions for the convolution filtering, namely GH2 and GH8 (Khairalseed et al. 2018; Tai et al. 2019a). The signal envelopes for each of the filtered data sequences were then calculated using a Hilbert transformation. Using an RGB colormap scheme, the relative strength of these filter outputs was color-coded whereby the lower frequency (GH2) backscattered signals are assigned to the red (R) channel and the higher frequency (GH8) components to the blue (B) channel. The envelope of the original unfiltered compounded B-scan US image is assigned to the green (G) channel to complete the colormap and the 3D H-scan US image display.
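The colorization step above can be sketched for a single RF line as follows. This is a minimal numpy/scipy illustration, assuming unit-energy GH kernels on a dimensionless time axis; the function names (`gh_kernel`, `hscan_rgb`) are ours and not from the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert

def hermite_poly(n, t):
    """Physicists' Hermite polynomial H_n(t) via the standard recurrence."""
    h_prev, h = np.ones_like(t), 2.0 * t
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * t * h - 2.0 * k * h_prev
    return h

def gh_kernel(n, t):
    """Gaussian-weighted Hermite (GH) filter kernel, normalized to unit energy."""
    k = hermite_poly(n, t) * np.exp(-t**2 / 2)
    return k / np.sqrt(np.sum(k**2))

def hscan_rgb(rf, t):
    """Color-code one RF line: R = GH2 (lower frequency), B = GH8 (higher
    frequency), G = envelope of the unfiltered signal."""
    low = np.abs(hilbert(np.convolve(rf, gh_kernel(2, t), mode='same')))
    high = np.abs(hilbert(np.convolve(rf, gh_kernel(8, t), mode='same')))
    env = np.abs(hilbert(rf))
    return np.stack([low, env, high], axis=-1)

# Demo: an echo shaped like a GH2 pulse should light up the red channel
t = np.linspace(-6, 6, 241)
rf = np.zeros(2000)
rf[900:900 + len(t)] = gh_kernel(2, t)
img = hscan_rgb(rf, t)
```

Because the GH functions are nearly orthogonal, a low-frequency (GH2-like) echo produces much more energy in the red channel than in the blue, which is the discrimination mechanism the H-scan display relies on.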
Figure 1.
Flowchart of three-dimensional (3D) H-scan ultrasound (US) imaging. (A) By processing the 3D US image data stack, the H-scan US volume is reconstructed. (B) An adaptive k-means clustering algorithm is used to adjust the spectral location of the Gaussian-weighted Hermite (GH) kernels to account for frequency-dependent attenuation. (C) H-scan US image processing applies two different-order GH polynomials as parallel convolution filters. The low and high frequency information is then color-coded as red (R) and blue (B) to make the contrast among different-sized scatterers more pronounced. The envelope of the received US echo is assigned to the green (G) channel to complete the RGB map.
Attenuation Correction
The Hermite functions have been well defined since Pierre-Simon Laplace (Tang 1993), and the nth-order Hermite polynomial can be modeled as follows:

$H_n(t) = (-1)^n \, e^{t^2} \, \frac{d^n}{dt^n} e^{-t^2}$    (1)

where $\frac{d^n}{dt^n} e^{-t^2}$ is the nth-order derivative of a Gaussian pulse $e^{-t^2}$. A pulse-echo US system operating at an 8.5 MHz center frequency has a round-trip impulse response that correlates highest with a GH4 polynomial. Therefore, a Gaussian envelope is used to weight the Hermite polynomials, and the resulting GHn functions are applied as bandpass filters to associate the received US backscattered signal with the major signal types from tissue. According to H-scan US imaging theory (Parker 2016b), the US backscattered signals from larger and smaller acoustic scatterers can be modeled as follows:
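The "correlates highest with a GH4 polynomial" statement is a matched-filter selection: the measured round-trip pulse is correlated against a bank of GH orders and the best-matching order is kept. The sketch below illustrates the mechanism using a GH4 pulse as a stand-in for the measured impulse response (so the selection of order 4 is by construction); the helper names are ours.

```python
import numpy as np

def hermite_poly(n, t):
    """Physicists' Hermite polynomial H_n(t) via the standard recurrence."""
    h_prev, h = np.ones_like(t), 2.0 * t
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * t * h - 2.0 * k * h_prev
    return h

def gh(n, t):
    """Gaussian-weighted Hermite function, normalized to unit energy."""
    g = hermite_poly(n, t) * np.exp(-t**2 / 2)
    return g / np.sqrt(np.sum(g**2))

t = np.linspace(-6, 6, 481)
pulse = gh(4, t)  # stand-in for the 8.5 MHz round-trip impulse response

# Peak of the matched-filter output for each candidate GH order
score = {n: np.max(np.abs(np.correlate(pulse, gh(n, t), mode='full')))
         for n in range(1, 9)}
best_order = max(score, key=score.get)
```

With unit-energy kernels, the autocorrelation peak of the true order equals 1.0 while all mismatched orders score strictly lower, so `best_order` recovers the pulse order.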
$e_L(t) = A(d)\, Z \,\mathrm{GH}_4(t - t_0)$    (2)

$e_S(t) = A(d)\, Z \,\frac{d^2}{dt^2}\mathrm{GH}_4(t - t_0) \propto A(d)\, Z \,\mathrm{GH}_6(t - t_0)$    (3)

where A(d) is the US attenuation coefficient as a function of image depth d, Z is used to represent the acoustic impedance, and t0 is the time delay at position Z0. In this model, a large smooth reflector returns the round-trip pulse largely unchanged, whereas a Rayleigh scatterer acts as a second-derivative operator on the incident pulse and shifts the echo toward higher GH orders. To isolate frequency information properly, the GH functions in the frequency domain (spectrum centers) must be adjusted based on a particular RF signal spectrum. The Fourier transform of GHn was defined as:
$\widetilde{\mathrm{GH}}_n(f) = (j 2\pi f)^n \sqrt{\pi}\, e^{-(\pi f)^2}$    (4)

where $j = \sqrt{-1}$, and f0 is the center frequency of the transmitted pulse; the magnitude spectrum $|\widetilde{\mathrm{GH}}_n(f)| \propto f^n e^{-(\pi f)^2}$ is bandpass in shape and must be positioned near f0. The adjustment of GHn in the frequency domain can be controlled via a scaling factor in the time domain as follows:
$\mathrm{GH}_n(\alpha t) \;\longleftrightarrow\; \frac{1}{|\alpha|}\, \widetilde{\mathrm{GH}}_n\!\left(\frac{f}{\alpha}\right)$    (5)

where $\omega_\rho$ is the frequency shift controlled by the attenuation coefficient function at depth d and $\alpha$ is the scaling factor. The adjusted frequency center is $f_c = f_0 - \omega_\rho(d)$, which can be calculated by a depth-adaptive K-means clustering algorithm (Huang 1998; Juang and Rabiner 1990; Krishna and Murty 1999; Tai et al. 2019b). The center frequencies of the GHn kernels were independently and continuously adjusted at all depths to maximize spectral coverage (see Figure 1), and the filtered signals were combined via an overlap-add method (Crochiere 1980; Narasimha 2006).
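The depth-adaptive idea can be illustrated in a simplified 1-D form: estimate a spectral centroid per depth window of an attenuated RF line, group the windows with a small k-means, and derive a per-cluster kernel scaling factor relative to the 8.5 MHz transducer center frequency. The window length, synthetic chirp, and helper names below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def spectral_centroid(segment, fs):
    """Power-weighted mean frequency of one depth window."""
    spec = np.abs(np.fft.rfft(segment))**2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

def kmeans_1d(x, k=2, n_iter=50):
    """Tiny 1-D k-means used to group depth windows by spectral centroid."""
    centers = np.linspace(x.min(), x.max(), k)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

# Synthetic RF line whose frequency content falls with depth (attenuation-like)
fs = 40e6
t = np.arange(0, 20e-6, 1.0 / fs)
f_inst = 8e6 - (4e6 / t[-1]) * t           # instantaneous frequency, 8 -> ~4 MHz
rf = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

win = 100
centroids = np.array([spectral_centroid(rf[i:i + win], fs)
                      for i in range(0, len(rf) - win + 1, win)])
centers, labels = kmeans_1d(centroids, k=2)
alpha = np.sort(centers) / 8.5e6           # per-cluster GH kernel scaling factors
```

Each cluster's scaling factor would then dilate the GH kernels in time (Eq. 5) so their spectral centers track the depth-dependent downshift before the filtered segments are recombined by overlap-add.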
Training and Testing Protocols
All US images were of unsigned 8-bit portable network graphics (PNG) format. Training was performed on a Windows 10 Pro operating system platform (Microsoft Corporation, Redmond, WA, USA) installed with MATLAB (Mathworks Inc, Natick, MA, USA) and the Neural Network and Parallel Processing Toolboxes (Ji et al. 2013). To adequately handle the processing, the computer had an NVIDIA TESLA K80 24 GB graphics processor (Nvidia Corporation, Santa Clara, CA, USA) and 64 GB of RAM installed.
Training data for each category (i.e. scatterer size and concentration, 16 categories total) were collected by imaging an assortment of tissue-mimicking phantom materials at different locations. The US data collected from imaging the phantoms embedded with 15, 30, 40 or 250 μm-sized scatterers were randomly split into either training (70%) or validation (30%) sets. During the CNN training process, a modified VGG convolution operator was applied to extract image features (Ji et al. 2013), Figure 2. It contained 16 layers, and the last layer was replaced with a regression layer to spatially map output images with estimates of scatterer size. Each image patch was resized to 227 × 227 × 3 pixels for training purposes. The initial learning rate was set to 0.001 to update weights every epoch. During final testing, H-scan US volumes collected from phantom materials embedded with 63 or 125-μm scatterers were utilized. The average scatterer size (standard deviation, SD) was estimated by the trained CNN.
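The data bookkeeping above (70/30 random split, patch resize to the 227 × 227 × 3 network input) can be sketched as follows. This is a numpy-only illustration with a nearest-neighbor resize standing in for the toolbox's resize routine; the function names and the sample count are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_train_val(n_samples, train_frac=0.7):
    """Random 70/30 split of sample indices, as used for the phantom data."""
    idx = rng.permutation(n_samples)
    n_train = int(round(train_frac * n_samples))
    return idx[:n_train], idx[n_train:]

def resize_nn(img, out_hw=(227, 227)):
    """Nearest-neighbor resize of an RGB patch to the CNN input size."""
    h, w = img.shape[:2]
    rows = (np.arange(out_hw[0]) * h / out_hw[0]).astype(int)
    cols = (np.arange(out_hw[1]) * w / out_hw[1]).astype(int)
    return img[rows][:, cols]

train_idx, val_idx = split_train_val(100)      # 100 patches as a toy example
patch = rng.random((64, 64, 3))                # one hypothetical RGB patch
resized = resize_nn(patch)                     # shape (227, 227, 3)
```

Keeping the split disjoint per category guards against near-duplicate patches from the same phantom location appearing in both sets.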
Figure 2.
The architecture of the deep convolutional neural network (CNN) with modified Visual Geometry Group (VGG) kernels for downsampling and feature extraction. The architecture consists of 13 convolutional layers with rectified linear unit (ReLU) activations and 3 fully connected layers. The input data include H-scan US images of phantom materials embedded with US scatterers of various sizes (15 to 250 μm) and concentrations (0.1 to 1.0 %). All image data were downsampled to small patches for CNN training.
Deep Convolutional Neural Network (CNN) Implementation
During the feature extraction procedure, the convolution operator was applied and formulated as follows (Yang et al. 2018):

$v_{ij}^{xyz} = \mathrm{ReLU}\!\left(b_{ij} + \sum_{m}\sum_{p}\sum_{q}\sum_{r} w_{ijm}^{pqr}\, v_{(i-1)m}^{(x+p)(y+q)(z+r)}\right)$    (6)

where $v_{ij}^{xyz}$ is the value at position (x, y, z) of the jth feature map in the ith layer; T is the depth of the CNN model, S is the total number of kernels in one particular layer, and R denotes the number of mini-batches for each kernel. $w_{ijm}^{pqr}$ is the kernel weight at the (p, q, r)th position connected to the mth feature map of the preceding layer, and $b_{ij}$ is a bias term. The convolution operator is applied to a patch at every step; in each step, a convolution weight is calculated and placed at the corresponding position of the feature map. This operation then produces a feature map to represent the image dataset. During CNN model training, the probability for each prediction is computed via the cross-entropy loss function (van Ark et al. 2018):
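A naive numpy version of the convolution-plus-ReLU operation in Eq. (6), reduced to a single 2-D feature map for clarity, is sketched below. This is an illustration of the operator, not the trained VGG model; the toy input and bias are ours.

```python
import numpy as np

def conv2d_relu(x, w, b=0.0):
    """Valid 2-D convolution (correlation form, as used in CNNs) + ReLU."""
    H, W = x.shape
    P, Q = w.shape
    out = np.zeros((H - P + 1, W - Q + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # One kernel weight placed at the corresponding feature-map position
            out[i, j] = np.sum(x[i:i + P, j:j + Q] * w) + b
    return np.maximum(out, 0.0)   # ReLU activation

x = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
w = np.ones((2, 2))               # 2 x 2 summing kernel
fmap = conv2d_relu(x, w, b=-10.0)
```

Each output value is one patch-by-kernel inner product plus bias, clipped at zero; stacking many such maps over the input channels gives the layer output described by Eq. (6).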
$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{ic}\, \log \hat{y}_{ic}$    (7)

Moreover, we formulate the training process as an optimization problem by maximizing the log-likelihood and reducing the loss function via L2 regularization of the training data as follows (Lee et al. 2018):

$\theta^{*} = \arg\min_{\theta} \; L_{CE}(\theta) + \lambda\, \lVert \theta \rVert_2^2$    (8)

where N is the number of training samples, C is the number of output categories, $\hat{y}_{ic}$ is the predicted probability for sample i and category c, and $\lambda$ is the regularization weight.
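Eqs. (7) and (8) together define a regularized objective minimized by gradient descent. The toy softmax-regression sketch below shows the loss decreasing under gradient steps; it is illustrative only (a linear model, not the VGG training loop), and the data, learning rate and regularization weight are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def ce_l2_loss(W, X, y, lam):
    """Cross-entropy (Eq. 7) plus L2 penalty (Eq. 8)."""
    p = softmax(X @ W)
    nll = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
    return nll + lam * np.sum(W**2)

def grad_step(W, X, y, lam, lr=0.5):
    """One gradient-descent step on the regularized objective."""
    p = softmax(X @ W)
    onehot = np.eye(W.shape[1])[y]
    g = X.T @ (p - onehot) / len(y) + 2 * lam * W
    return W - lr * g

X = rng.normal(size=(60, 5))
y = (X[:, 0] > 0).astype(int)          # simple separable toy labels
W = np.zeros((5, 2))
loss0 = ce_l2_loss(W, X, y, lam=1e-3)  # ln(2) at the uniform initialization
for _ in range(50):
    W = grad_step(W, X, y, lam=1e-3)
loss1 = ce_l2_loss(W, X, y, lam=1e-3)
```

The L2 term shrinks the weights toward zero, which is the mechanism the authors use to limit overfitting on the relatively small phantom dataset.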
Statistical Analysis
For each experimental group, US volumes were summarized as the mean ± SD from the weighted summation of the individual RGB channel components. The data variance between each measurement was used to assess heterogeneity in the spatial image intensity. To evaluate the impact of US scatterer size and concentration on 3D H-scan US imaging, a two-way random effects analysis of variance (ANOVA) test was performed. Furthermore, the interaction effects were included to show robustness. A p-value less than 0.05 was considered statistically significant.
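For a balanced layout, the two-way ANOVA decomposition behind this analysis can be sketched as fixed-effects sums of squares (the study itself used a random-effects model, so this is a simplified illustration; the synthetic effect sizes are ours).

```python
import numpy as np

def two_way_anova_ss(data):
    """Sums of squares for a balanced two-way layout.
    data has shape (levels_A, levels_B, replicates)."""
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))    # factor A means (e.g., scatterer size)
    mean_b = data.mean(axis=(0, 2))    # factor B means (e.g., concentration)
    mean_ab = data.mean(axis=2)        # cell means
    ss_a = b * n * np.sum((mean_a - grand)**2)
    ss_b = a * n * np.sum((mean_b - grand)**2)
    ss_ab = n * np.sum((mean_ab - mean_a[:, None] - mean_b[None, :] + grand)**2)
    ss_err = np.sum((data - mean_ab[:, :, None])**2)
    return ss_a, ss_b, ss_ab, ss_err

rng = np.random.default_rng(2)
# Synthetic intensities: factor A has a real effect, factor B does not
effect_a = np.array([0.0, 0.5, 1.0, 2.0])
data = effect_a[:, None, None] + rng.normal(0, 0.1, size=(4, 4, 5))
ss_a, ss_b, ss_ab, ss_err = two_way_anova_ss(data)
```

Dividing each sum of squares by its degrees of freedom and forming F ratios against the error term gives the per-factor and interaction p-values reported in the Results.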
RESULTS
A representative set of 3D H-scan US volume reconstructions is depicted in Figure 3. Collectively, the results reveal a progressive red color shift (with diminishing blue channel signal strength) as the size of the US scatterers increased from 15 to 250 μm. This agrees with H-scan US theory, whereby received backscattered US signals from larger scatterers dominate the red channel and those from smaller scattering objects dominate the blue channel. Recall that the red and blue channels are derived from the lower and higher frequency signals, respectively. The spatial distribution of the acoustic scatterers could be clearly detected throughout the entire 3D H-scan US volume space. Importantly, the H-scan volume color appears unchanged when the scatterer concentration in the phantom materials was varied. To study the impact of scatterer size and concentration on H-scan US image intensity, the mean value for each red and blue channel was summarized, Figure 4. Review of these measurements confirms the visual trends noted above. Since the H-scan US signal amplitude is a weighted sum of the RGB channel components, the image intensity variation (listed in Table 1) is more pronounced in volume reconstructions.
Figure 3.
3D H-scan images from tissue-mimicking phantom materials embedded with 15, 30, 40 or 250 μm scatterers at concentrations of 0.1, 0.3, 0.5 or 1.0 %. The red channel (lower frequency US signal information) becomes dominant as scatterer size progressively increases while the blue channel (higher frequency information) diminishes.
Figure 4.
Spatial analysis of the (A) blue and (B) red channel signals used to generate the H-scan US images acquired from different phantom materials. Note that the blue channel intensity decreases while the red image intensity increases as scatterer size increases. (C) Mean voxel values of the H-scan US images revealing that average image intensity increases with corresponding increases in scatterer size.
Table 1.
Mean image intensity values calculated by averaging US data collected in phantom materials of varying concentrations.
Mean (SD) US image intensity

| Size (μm) | Red | Blue | H-scan |
|---|---|---|---|
| 15 | 0.001 (0.001) | 0.98 (0.02) | 0.01 (0.01) |
| 30 | 0.19 (0.01) | 0.81 (0.02) | 0.25 (0.04) |
| 40 | 0.27 (0.03) | 0.66 (0.04) | 0.42 (0.02) |
| 250 | 0.91 (0.07) | 0.07 (0.06) | 0.94 (0.05) |
Statistical analysis of the red channel data used to produce the final H-scan US image suggests that intensity is significantly affected by scatterer size (p < 0.03), but not concentration (p = 0.24). Scatterer size also had a significant impact on the blue channel data (p < 0.03), which was not the case for scatterer concentration (p = 0.40). Analysis of 3D H-scan US image intensities reveals the same data patterns and supports their use for scatterer size estimation (p < 0.001) independent of scatterer density (p = 0.62). Overall, the ANOVA results for the H-scan US images did not show a significant interaction between the effects of concentration and scatterer size (p > 0.05).
To further illustrate scatterer size estimation using 3D H-scan US, different volume subregions from matched B-scan US were randomly selected for comparison. As Figure 5 reveals, the H-scan US images clearly highlight the capacity to characterize different-sized acoustic scatterer populations, which is not possible using conventional B-scan US imaging alone. More specifically, after embedding spherical scatterers of increasing size in the phantom materials, the H-scan US images exhibited an overall intensity increase (R2 = 0.94, p = 0.02). However, analysis of B-scan US image intensity values reveals that this mode is less sensitive to changes in scatterer size (R2 = 0.89, p = 0.08). As also illustrated in Figure 5, the H-scan US image shows a more rapid change in image intensity with increasing scatterer size (278.6%) as compared to B-scan US (27.8%), which suggests H-scan US imaging is more sensitive to the scatterer size changes evaluated during the in vitro phantom studies (p < 0.03).
Figure 5.
Representative H-scan US images of tissue-mimicking phantom materials with coregistered B-scan US for comparison (grayscale subimages). Homogeneous phantoms were prepared using either (A) 15 μm, (B) 30 μm, (C) 40 μm, or (D) 250 μm acoustic scatterers. (E) Mean image intensity and the corresponding changes reveals that 3D H-scan US is considerably more sensitive to scatterer size changes than traditional B-scan US imaging.
The H-scan US images obtained from homogeneous phantoms were modeled via several machine learning methodologies, and the 3D H-scan US images were used to train a CNN architecture based on a modified VGG regression model for accurate scatterer size estimation. The validation results reported in Figure 6 reveal that the CNN can yield a low average root-mean-square error (RMSE) of 0.95. Lastly, the trained CNN was used to estimate scatterer size from H-scan US data collected by imaging phantoms that contained either 63 or 125-μm sized scatterers. Note that neither these phantom materials nor same-sized acoustic scatterer types were used in any of the CNN training sessions. Overall, the scatterer size predicted by the VGG model was 59.7 (± 0.6) μm when the input images were collected from homogeneous phantom materials embedded with the 63-μm sized scatterers. The estimated result was 124.6 (± 0.6) μm when the phantoms containing the 125-μm sized scatterers were studied.
Figure 6.
Box and whisker plots representing the root-mean-square error (RMSE) of (A) the four predicted versus actual scatterer sizes during the CNN training process and (B) the scatterer size estimates produced by the trained CNN. Note that the scatterers used for testing were not used during CNN training. The mean (standard deviation, SD) estimation error is reported to show the robustness of the model, namely, −0.98 (0.40), −0.40 (0.34), 0.59 (0.79) and 0.94 (0.60). H-scan US-based scatterer size estimates in (B) are 59.7 (0.6) μm and 124.6 (0.6) μm.
DISCUSSION
Analysis of the 3D H-scan US imaging data demonstrated the feasibility of qualitatively detecting changes in acoustic scatterer size among different phantoms. Furthermore, the trained CNN architecture made scatterer size estimation possible. Hence, the hypothesis of accurate scatterer size estimation using H-scan US and a CNN was supported by these findings. Compared to other scatterer size estimation algorithms, presented herein is the first study to demonstrate use of a 3D H-scan US technology to improve image contrast between different-sized US scatterers (in the range of 15 to 250 μm), independent of concentration (0.1 to 1.0 %). Scatterer size was provided by the manufacturer and assumed monodisperse with no variance. While the present study only used phantom materials with scatterer concentrations less than 1%, future studies will evaluate higher concentrations and heterogeneous phantom materials that could better represent real tissue.
Several state-of-the-art machine learning models were tested to obtain the best model for H-scan US image classification. The training data show that the VGG model provides higher sensitivity and specificity scores, due in part to VGG requiring relatively little training data to achieve high prediction accuracy; our limited data could cause underfitting issues when a deeper CNN, such as GoogleNet, is used.
Tracking scatterer changes can provide useful quantitative assessments for understanding the condition of healthy and diseased tissue. The scatterer size estimation methods proposed by Kurokawa et al. (2016) and Oelze and O'Brien (2002) use a moving region-of-interest (ROI) to calculate the correlation between a theoretical and an acquired backscattered US signal. The calculation of signal correlation from the power spectrum can improve results but increases the computational cost. In contrast, it has been shown that the physics of reflection and scattering are linked to the mathematics of GH functions, so the overall tissue characterization task can be simplified yet accurate (Parker 2016b). Compared to the study conducted by Khairalseed et al. (2019a), 3D H-scan US imaging captures data from volume space and permits an extended view of any tissue patterns or heterogeneities. Also, 3D imaging provides a more comprehensive view of the whole tissue structure and yields more robust statistics if needed, which can further increase the accuracy of the scatterer size estimation strategy. Our approach was shown to be robust to changing scatterer concentrations, whereas statistical analysis of matched B-scan US data reveals that image intensity may differ when scatterer density changes (p < 0.05). Thus, the use of B-scan US images alone for acoustic scatterer size tracking may be problematic and more work is needed.
The feature extraction model described by Al-Kadi et al. (2016) appears to be less efficient than the method used and detailed herein, since the VGG model appears to help improve scatterer size estimation. Specifically, mapping accuracy using the Al-Kadi-based method was lower when we used the same H-scan US images to train their model (86.3 versus 99.3 %). The estimation accuracy also increases by 15.1 % when H-scan US images are utilized as compared to B-scan US images (Biswas et al. 2018). These improvements are attributed in part to increasing the dimension of the data for the input layer (Zhang et al. 2017) and the size of each kernel (Lee et al. 2018; Zhang et al. 2018). Moreover, most previous studies have been performed using two-dimensional (2D) B-scan US images (Brown et al. 2011; Kelly et al. 2018; Opacic et al. 2018; Steifer and Lewandowski 2019; Takami et al. 2019). Data collection throughout the entire volume (3D) space can highlight the heterogeneous microenvironment and further help improve tissue characterization in vivo (Sayed et al. 2013). A custom 3D US transducer was used to generate high resolution H-scan US images and allows the user to pan through the entire tissue volume, making the monitoring of scatterer size in different subregions possible. Our image analysis revealed that when the phantom scatterer concentration was low (0.1 %), H-scan US image intensity was relatively lower than intensities calculated from phantom materials embedded with higher concentrations of same-sized acoustic scatterers, but still valid for our regression task. This confirms that 3D H-scan US imaging is based on clusters of scattering objects rather than an individual acoustic scatterer. The H-scan US technology could yield acceptable image quality from a medium containing a range of scatterer sizes and concentrations (i.e., heterogeneous tissue), but this needs to be verified with a detailed in vivo imaging study including spatial correlation of matched physical measurements from tissue microscopy sections.
CONCLUSION
3D H-scan US is a novel bioimaging technology for estimating the size of acoustic scattering objects and structures. The H-scan US image intensity was considerably impacted by variations in scatterer size but not concentration. Use of a CNN architecture based on a modified VGG regression model allowed voxel-level mapping of H-scan US data to scatterer sizes (accuracy of 93.3 %). Overall, preliminary phantom studies using 3D H-scan US imaging were encouraging and future work will explore use for in vivo tissue characterization.
ACKNOWLEDGEMENTS
This work was supported in part by NIH grant R01EB025841 and Texas CPRIT award RP180670. The authors would like to thank Lokesh Basavarajappa, Aditi Bellary, Katherine Brown, Hassan Jahanandish, and Ipek Ozdemir for their insightful suggestions on this project and help with manuscript preparation and review.
REFERENCES
- Al-Kadi OS, Chung DYF, Coussios CC, Noble JA. Heterogeneous tissue characterization using ultrasound: A comparison of fractal analysis backscatter models on liver tumors. Ultrasound Med Biol 2016;42:1612–1626. [DOI] [PubMed] [Google Scholar]
- Bhonsle S, Lorenzo MF, Safaai-Jazi A, Davalos RV. Characterization of nonlinearity and dispersion in tissue impedance during high-frequency electroporation. IEEE Trans Biomed Eng 2018;65:2190–2201. [DOI] [PubMed] [Google Scholar]
- Biswas M, Kuppili V, Edla DR, Suri HS, Saba L, Marinhoe RT, Sanches JM, Suri JS. Symtosis: A liver ultrasound tissue characterization and risk stratification in optimized deep learning paradigm. Comput Methods Programs Biomed 2018;155:165–177. [DOI] [PubMed] [Google Scholar]
- Crochiere R A weighted overlap-add method of short-time Fourier analysis/synthesis. IEEE Trans Acoust Speech Signal Process 1980;28:99–102. [Google Scholar]
- Fang L, He N, Li S, Plaza AJ, Plaza J. A new spatial–spectral feature extraction method for hyperspectral images using local covariance matrix representation. IEEE Trans Geosci Remote Sens 2018;56:3534–3546. [Google Scholar]
- Ge GR, Laimes R, Pinto J, Guerrero J, Chavez H, Salazar C, Lavarello RJ, Parker KJ. H-scan analysis of thyroid lesions. J Med Imaging 2018;5:013505. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hoyt K, Castaneda B, Parker KJ. Two-dimensional sonoelastographic shear velocity imaging. Ultrasound Med Biol 2008a;34:276–288. [DOI] [PubMed] [Google Scholar]
- Hoyt K, Castaneda B, Zhang M, Nigwekar P, di Sant’agnese PA, Joseph JV, Strang J, Rubens DJ, Parker KJ. Tissue elasticity properties as biomarkers for prostate cancer. Cancer Biomark 2008b;4:213–225. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Huang Z Extensions to the K-means algorithm for clustering large data sets with categorical values. Data Min Knowl Discov 1998;2:283–304. [Google Scholar]
- Ji S, Xu W, Yang M, Yu K. 3D convolutional neural networks for human action recognition. IEEE Trans Pattern Anal Mach Intell 2013;35:221–231. [DOI] [PubMed] [Google Scholar]
- Juang BH, Rabiner LR. The segmental K-means algorithm for estimating parameters of hidden Markov models. IEEE Trans Acoust Speech Signal Process 1990;38:1639–1641. [Google Scholar]
- Kelly JP, Koppenhaver SL, Michener LA, Proulx L, Bisagni F, Cleland JA. Characterization of tissue stiffness of the infraspinatus, erector spinae, and gastrocnemius muscle using ultrasound shear wave elastography and superficial mechanical deformation. J Electromyogr Kinesiol 2018;38:73–80. [DOI] [PubMed] [Google Scholar]
- Khairalseed M, Brown K, Parker KJ, Hoyt K. Real-time H-scan ultrasound imaging using a Verasonics research scanner. Ultrasonics 2019a;94:28–36. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khairalseed M, Hoyt K. Integration of a CMUT linear array for wideband H-scan ultrasound imaging. Proc IEEE Ultrason Symp 2019a;1519–1522. [Google Scholar]
- Khairalseed M, Hoyt K. Real-time contrast-enhanced ultrasound imaging using pulse inversion spectral deconvolution. Proc IEEE Ultrason Symp 2019b;1:2291–2294. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khairalseed M, Hoyt K, Ormachea J, Terrazas A, Parker KJ. H-scan sensitivity to scattering size. J Med Imaging 2017a;4:043501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khairalseed M, Javed K, Jashkaran G, Kim J-W, Parker KJ, Hoyt K. Monitoring early breast cancer response to neoadjuvant therapy using H-scan ultrasound imaging: Preliminary preclinical results. J Ultrasound Med 2019b;38:1259–1268. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khairalseed M, Oezdemir I, Brown K, Hoyt K. Contrast-enhanced ultrasound imaging using pulse inversion spectral deconvolution. J Acoust Soc Am 2019c;146:2466–2474. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khairalseed M, Xiong F, Kim J-W, Mattrey RF, Parker KJ, Hoyt K. Spatial Angular Compounding Technique for H-Scan Ultrasound Imaging. Ultrasound Med Biol 2018;44:267–277. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Khairalseed M, Xiong F, Mattrey R, Parker K, Hoyt K. Detection of early tumor response to abraxane using H-scan imaging: Preliminary results in a small animal model of breast cancer. Proc IEEE Ultrason Symp 2017b;1–4.
- Krishna K, Murty MN. Genetic K-means algorithm. IEEE Trans Syst Man Cybern Part B Cybern 1999;29:433–439.
- Kurokawa Y, Taki H, Yashiro S, Nagasawa K, Ishigaki Y, Kanai H. Estimation of size of red blood cell aggregates using backscattering property of high-frequency ultrasound: In vivo evaluation. Jpn J Appl Phys 2016;55:07KF12.
- Lee K, Lam M, Pedarsani R, Papailiopoulos D, Ramchandran K. Speeding up distributed machine learning using codes. IEEE Trans Inf Theory 2018;64:1514–1529.
- Narasimha MJ. Modified overlap-add and overlap-save convolution algorithms for real signals. IEEE Signal Process Lett 2006;13:669–671.
- Oelze ML, O'Brien WD. Method of improved scatterer size estimation and application to parametric imaging using ultrasound. J Acoust Soc Am 2002;112:3053–3063.
- Opacic T, Dencks S, Theek B, Piepenbrock M, Ackermann D, Rix A, Lammers T, Stickeler E, Delorme S, Schmitz G, Kiessling F. Motion model ultrasound localization microscopy for preclinical and clinical multiparametric tumor characterization. Nat Commun 2018;9:1527.
- Parker KJ. Scattering and reflection identification in H-scan images. Phys Med Biol 2016a;61:L20–28.
- Parker KJ. The H-scan format for classification of ultrasound scattering. OMICS J Radiol 2016b;5:1–7.
- Sayed A, Layne G, Abraham J, Mukdadi O. Nonlinear characterization of breast cancer using multicompression 3D ultrasound elastography in vivo. Ultrasonics 2013;53:979–991.
- Steifer T, Lewandowski M. Ultrasound tissue characterization based on the Lempel–Ziv complexity with application to breast lesion classification. Biomed Signal Process Control 2019;51:235–242.
- Tai H, Khairalseed M, Hoyt K. 3D H-scan ultrasound imaging system and method for acoustic scatterer size estimation: Preliminary studies using phantom materials. Proc IEEE Ultrason Symp 2019a;1515–1518.
- Tai H, Khairalseed M, Hoyt K. Adaptive attenuation correction during H-scan ultrasound imaging using K-means clustering. Ultrasonics 2019b;105987.
- Takami H, Sonoda S, Muraoka Y, Miura T, Shimizu A, Anai R, Sanuki Y, Miyamoto T, Oginosawa Y, Fujino Y, Tsuda Y, Araki M, Otsuji Y. Comparison between minimum lumen cross-sectional area and intraluminal ultrasonic intensity analysis using integrated backscatter intravascular ultrasound for prediction of functionally significant coronary artery stenosis. Heart Vessels 2019;34:208–217.
- Tang T. The Hermite spectral method for Gaussian-type functions. SIAM J Sci Comput 1993;14:594–606.
- van Ark M, Rio E, Cook J, van den Akker-Scheek I, Gaida JE, Zwerver J, Docking S. Clinical improvements are not explained by changes in tendon structure on ultrasound tissue characterization after an exercise program for patellar tendinopathy. Am J Phys Med Rehabil 2018;97:708.
- Yang X, Ye Y, Li X, Lau RYK, Zhang X, Huang X. Hyperspectral image classification with deep learning models. IEEE Trans Geosci Remote Sens 2018;56:5408–5423.
- Zhang L, Gooya A, Pereanez M, Dong B, Piechnik S, Neubauer S, Petersen S, Frangi AF. Automatic assessment of full left ventricular coverage in cardiac cine magnetic resonance imaging with Fisher discriminative 3D CNN. IEEE Trans Biomed Eng 2019;66:1975–1986.
- Zhang L, Nogues I, Summers RM, Liu S, Yao J. DeepPap: Deep convolutional networks for cervical cell classification. IEEE J Biomed Health Inform 2017;21:1633–1643.
- Zhang Y-D, Muhammad K, Tang C. Twelve-layer deep convolutional neural network with stochastic pooling for tea category classification on GPU platform. Multimed Tools Appl 2018;77:22821–22839.
- Zheng Q, Yang M, Yang J, Zhang Q, Zhang X. Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process. IEEE Access 2018;6:15844–15869.