Author manuscript; available in PMC: 2023 Apr 10.
Published in final edited form as: IEEE Trans Med Imaging. 2022 Aug 1;41(8):1938–1947. doi: 10.1109/TMI.2022.3152396

Super-Resolution Ultrasound Through Sparsity-Based Deconvolution and Multi-Feature Tracking

Jipeng Yan 1,#, Tao Zhang 2,#, Jacob Broughton-Venner 3, Pintong Huang 4, Meng-Xing Tang 5
PMCID: PMC7614417  EMSID: EMS173147  PMID: 35171767

Abstract

Ultrasound super-resolution imaging through localisation and tracking of microbubbles can achieve sub-wave-diffraction resolution in mapping both micro-vascular structure and flow dynamics in deep tissue in vivo. Currently, it is still challenging to achieve high accuracy in localisation and tracking, particularly with limited imaging frame rates and in the presence of high bubble concentrations. This study introduces microbubble image features into a Kalman tracking framework, and makes the framework compatible with sparsity-based deconvolution to address these key challenges. The performance of the method is evaluated on both simulations using individual bubble signals segmented from in vivo data and experiments on a mouse brain and a human lymph node. The simulation results show that the deconvolution not only significantly improves the accuracy of isolating overlapping bubbles, but also preserves some image features of the bubbles. The combination of such features with a Kalman motion model achieves a significant improvement in tracking precision at a low frame rate over that using the distance measure alone, while the improvement is not significant at the highest frame rate. The in vivo results show that the proposed framework generates SR images that are significantly different from, and visually improved over, those of current methods, and is more robust to high bubble concentrations and low frame rates.

Index Terms: Ultrasound super-resolution imaging, ultrasound localization microscopy, deconvolution, graph-based tracking, Kalman filter, motion model, feature-based pairing

I. Introduction

Localisation of microbubbles was first presented in vitro in [1] and in vivo in [2]. Super-resolution ultrasound (SRUS) imaging through localisation, also known as ultrasound localisation microscopy (ULM) and capable of achieving resolution beyond the wave diffraction limit, has been demonstrated in vitro [3], [4] and in vivo [5], [6]. Subsequently it has been studied in a range of in vivo models [7]–[13] and in humans in the breast [14], limbs [15], and brain [16].

For microbubble- and localisation-based SRUS imaging, a significant challenge is the long data acquisition time required to accumulate a sufficient number of isolated bubbles. Most existing studies address this issue by increasing the concentration of bubbles [11]. However, too high a concentration makes it difficult to isolate bubbles that are close to each other, resulting in either significant noise due to errors in localisation, or reduced signals due to too many bubble signals being discarded [17]. Another significant challenge is the tracking of these bubbles, particularly when they are close to each other and when the imaging frame rate is not high enough to avoid ambiguity in bubble pairing.

Methods have been studied to reduce the acquisition time by localising bubbles at high concentrations. Sparse recovery-based methods such as [18], [19], and methods using Richardson-Lucy (RL) deconvolution [12], [20], [21], have been used to localise overlapped bubbles to obtain super-resolution images with reduced acquisition time. Since the point spread function (PSF) of a US imaging system varies spatially, using a fixed approximation to the PSF cannot deconvolve each bubble image into a single point, but it can significantly shrink the bubble images, based on which localisation can be performed [21]. Huang et al. [22] have shown that by using flow velocity filtering, dense bubble signals can be separated into lower-density groups to facilitate localisation and tracking. More recently, deep learning has shown promise in localising bubbles at high concentration [23]–[25] and can perform well as long as the training datasets are representative of the tasks. It remains a challenge, however, to obtain data sets that are sufficiently realistic while having ground truth.

The key reason for the long acquisition time with microbubble-based super-resolution imaging is that new bubbles cannot be generated in situ, so injected bubbles must be allowed to flow over time while localisations are accumulated. Phase-change nano-droplets do not have such a limitation: they are invisible to ultrasound when injected but can be switched on using ultrasound [26]. We have recently demonstrated on a cross-tube micro-flow phantom that super-resolution can be achieved in the so-called Acoustic Wave Sparsely Activated Localisation Microscopy (AWSALM) by activation and deactivation of phase-change nanodroplets [27], mimicking the switching of fluorophores in [28] and STORM [29] in optical super-resolution. We have also demonstrated that such super-resolution image acquisition can be achieved within a fraction of a second in fast-AWSALM [30], though flow tracking becomes difficult due to the fast change of state of the agents.

Cross correlation of bubble images has been used to track bubbles at low frame rate [5]. Motion models have also been shown to be effective for tracking bubbles at low frame rates, by taking the probability estimated by Kalman filtering in the Markov Chain Monte Carlo data association algorithm (MCMCDA) [14], [17] or the multiple hypothesis tracking (MHT) procedure [19]. Both the cross correlation and the data association can work well but have relatively high computational cost. In order to reduce the computation, extracted image features of the microbubbles, such as intensity, eccentricity, and solidity, have been used for individual bubble tracking [8], [15]. Furthermore, the Hungarian algorithm [31] has been used for bubble tracking in high-frame-rate data [9], [16] by minimising the distances of all bubble pairs in order to achieve global optimisation. The method is faster than the cross-correlation and data association tracking, but since only the bubble distance is used, the tracking accuracy is susceptible to imaging frame rates and bubble concentrations, while high-frame-rate imaging is currently not available in most commercial ultrasound systems.

In this study we have introduced microbubble image features into a Kalman tracking framework, and made the framework compatible with sparsity-based deconvolution, a powerful image pre-processor, in order to address the key challenges of tracking bubbles of high concentration at low frame rate.

II. Methods

In this section, we firstly describe an SRUS processing framework for motion-corrected contrast-enhanced ultrasound (CEUS) images, as shown in Fig. 1. In the framework, individual bubble signals are isolated by sparsity-based image deconvolution/reconstruction followed by localisation, and then the localised bubbles between consecutive frames are paired using a graph-based global optimisation cost function that consists of both bubble image features and Kalman motion model. Next we describe the implementation of the proposed methods, and their evaluation.

Fig. 1.

Fig. 1

Diagram of SR processing. To improve the computational efficiency of isolation, CEUS images are segmented by the background noise threshold. The PSF is estimated by fitting bubble images with 2D Gaussian functions. Bubbles are localised by the centroid of the shrunk regions obtained after deconvolution. Then, region positions and features are used for tracking. Finally, SR images are painted by drawing the bubble trajectories that are sufficiently long.

A. Isolation and localisation

Image deconvolution has been shown to shrink the bubble signals and help isolate overlapping bubbles, where bubbles are assumed to be point scatterers and their images are approximated as the convolution of those points with the PSF of the imaging system. In this study, the deconvolution is performed by solving a linear inverse problem:

b=Ax+e (1)

where b is the vectorised two-dimensional (2D) discrete Fourier transform (DFT) of the bubble image, of size M²×1; x is the vectorised high-resolution image, of size N²×1; e denotes noise; A is an M²×N² matrix defined by

A = U(F ⊗ F) (2)

where U is an M²×M² diagonal matrix whose diagonal elements are the vectorised 2D Fourier transform of the PSF; F is the low-frequency part of the N×N square DFT matrix, of size M×N; ⊗ stands for the Kronecker product, which arises from the vectorisation [19]. When N>M, Eq. (1) becomes underdetermined. In practice, given the relatively low bubble concentrations used in super-resolution imaging, a sparsity assumption can be used to solve Eq. (1) by solving

min ‖b − Ax‖₂² + λ‖x‖₁ (3)

with the fast iterative shrinkage-thresholding algorithm (FISTA) [32]. In the FISTA, it is essential to calculate the gradient ∇f = A*Az − A*b and the Lipschitz constant (Lf) of A*A, where * denotes the conjugate transpose of a matrix. Note that A has size M²×N² and A*A has size N²×N². Direct calculation of Lf and ∇f demands high computational resources. ∇f and Lf can instead be calculated according to the following equations

A*Az = vec(F*(conj(H) ⊙ H ⊙ (F mat(z) Fᵀ)) conj(F)),
A*b = vec(F*(conj(H) ⊙ B) conj(F)),
Lf = ‖FF*‖₂² max(abs(H))² (4)

where B is the 2D DFT of the image, H the 2D DFT of the defined PSF, ⊙ the Hadamard (element-wise) product, T the matrix transpose, z the approximation of x in the FISTA, vec() the vectorisation and mat() the inverse vectorisation, i.e. transforming a vector into a matrix. The matrices inside vec() all have dimensions no larger than N×N and the matrix FF* inside norm() has size M×M. A*b is constant across the FISTA iterations. Therefore, the required memory and computation are significantly reduced by Eq. (4). The derivation of Eq. (4) can be found in the Appendix. After the deconvolution the centres of the bubbles are then localised through centroiding.
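The memory saving of Eq. (4) can be checked numerically at toy sizes. The sketch below (Python/NumPy standing in for the authors' MATLAB implementation; the random H and the small values of M and N are illustrative assumptions) builds the explicit A = U(F ⊗ F), then confirms that the matrix form of the gradient term and the Lipschitz bound agree with direct computation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 8  # toy sizes; real images are far larger

# F: the low-frequency M x N part of the N-point DFT matrix (per Eq. (2))
F = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(N)) / N)
# H: 2D DFT of the PSF (a random stand-in here)
H = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

# Explicit A = U(F (x) F), with U = diag(vec(H)); only feasible at toy sizes
A = np.diag(H.flatten(order='F')) @ np.kron(F, F)

z = rng.standard_normal(N * N)

# Direct gradient term A*Az (memory-hungry)
direct = A.conj().T @ (A @ z)

# Matrix form of Eq. (4): vec(F*(conj(H) . H . (F mat(z) F^T)) conj(F))
Z = z.reshape((N, N), order='F')            # mat(z): inverse vectorisation
W = np.conj(H) * H * (F @ Z @ F.T)          # Hadamard products
efficient = (F.conj().T @ W @ np.conj(F)).flatten(order='F')
assert np.allclose(direct, efficient)

# Lipschitz bound of Eq. (4), third line: an upper bound on the largest
# eigenvalue of A*A, which is what FISTA needs
Lf = np.linalg.norm(F @ F.conj().T, 2) ** 2 * np.max(np.abs(H)) ** 2
assert np.max(np.linalg.eigvalsh(A.conj().T @ A)) <= Lf + 1e-8
```

All matrices handled in the efficient form are at most N×N, whereas the explicit A*A would be N²×N².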

B. Tracking

The graph-based tracking is used as the framework to pair bubbles in two adjacent frames, i.e. the reference frame and the searching frame, as shown in Fig. 2. After building the cost function to quantify the similarity between each pair of bubbles, the total cost of all candidate pairs of bubbles is minimised. Moreover, any pair of bubbles with a cost over a predefined upper limit is not regarded as the same bubble, to accelerate the minimisation. The tracking algorithm obeys the topology restriction, i.e. each bubble in one frame must be paired with a single bubble in the other frame. To achieve this, unpaired bubbles are paired to dummy bubbles. Bubbles in the reference frame paired with dummy bubbles are considered to have disappeared in the new frame. Bubbles in the searching frame paired with dummy bubbles are considered to be newly appeared. More explanations of the minimisation process can be found in [33]. This study focuses on constructing and analysing the cost function of the graph-based tracking approach. A simple form of the cost function can be constructed by only using the distance d between a pair of bubbles

c1 = d²/d²lim (5)

Fig. 2.

Fig. 2

Diagram of the graph-based tracking, taking the first and second frames as an example. Step 1: Numbers below ‘m’ or ‘n’ give IDs to bubbles in each frame. Numbers in the subscript of K denote the different Kalman parameter sets, including parameters in Eq. (7). Kalman parameters are initialised with the corresponding bubble positions when associated with each ‘m’. Step 2: A cost matrix of pairing candidate bubbles is obtained by calculating the cost function. Step 3: ‘1’ denotes that the bubbles corresponding to the row and column are paired. Step 4: Bubbles in the searching frame paired with the reference frame are associated with updated Kalman parameter sets. New bubbles are associated with newly initialised Kalman parameter sets.

where c1 is the cost and dlim is the maximum distance, which can be defined according to the maximal blood velocity and the frame rate. Minimising the above cost function has been shown to track bubble movements well at high frame rates of no less than 500 Hz [9], [16]. Linear Kalman filtering has been applied to bubble tracking [17], [19] and bubble trajectory smoothing [34]. In this study, the linear Kalman motion model is incorporated as part of the overall cost function for tracking between two consecutive frames. The filtering distribution o ∼ 𝒩(μ, Σ) can be computed with the Kalman parameters as

μ = Hk ŝk|k,  Σ = Hk Pk|k Hkᵀ + Rk (6)

where ŝ is a vector containing the bubble’s position and velocity; the subscript n|m denotes the estimate of the value in frame n given observations up to and including frame m; ŝk|k and Pk|k are the state and estimate covariance matrix obtained by updating ŝk|k-1 and Pk|k-1, predicted by the linear motion model, with the observation ok of the real bubble state sk at frame k; Hk is the observation model; Rk is the covariance of the observation noise. The probability (pkal) that a candidate pair of bubble images belongs to the same bubble is defined by the Gaussian distribution:

pkal = (1 / (2π|Σ|^0.5)) exp(−0.5 (ok − μ)ᵀ Σ⁻¹ (ok − μ)) (7)

Namely, pkal is calculated from the location of a bubble in the searching frame and the Gaussian distribution built from a bubble in the reference frame. A covariance matrix with a large determinant is used to initialise the Kalman filter, as the direction in which the bubble will move is unknown before pairing; the determinant then decreases with the number of pairings. The more the location of a bubble in the searching frame deviates from the updated location of the bubble in the reference frame, the less likely the two are the same bubble and the higher the cost. Hence, the distance-based pairing cost function can be replaced by

c2=1pkal. (8)
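The Kalman machinery behind Eqs. (6)–(8) can be sketched as follows (Python/NumPy standing in for the authors' MATLAB code; the constant-velocity transition matrix, dt, and the noise covariances Q and Rk are illustrative assumptions, not values from the paper):

```python
import numpy as np

dt = 1 / 20.0                       # frame interval for a 20 Hz acquisition (assumed)
Fk = np.array([[1, 0, dt, 0],
               [0, 1, 0, dt],
               [0, 0, 1,  0],
               [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
Hk = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.]])             # only positions are observed
Q = 1e-4 * np.eye(4)                # process noise covariance (assumed)
Rk = 1e-3 * np.eye(2)               # observation noise covariance (assumed)

def predict(s, P):
    """Propagate the state and covariance to the next frame."""
    return Fk @ s, Fk @ P @ Fk.T + Q

def update(s, P, o):
    """Update the prediction with an observed bubble position o."""
    S = Hk @ P @ Hk.T + Rk
    K = P @ Hk.T @ np.linalg.inv(S)
    return s + K @ (o - Hk @ s), (np.eye(4) - K @ Hk) @ P

def cost_c2(s, P, o):
    """Pairing cost of Eq. (8) from the Gaussian of Eqs. (6)-(7)."""
    mu, Sigma = Hk @ s, Hk @ P @ Hk.T + Rk
    d = o - mu
    p_kal = np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) \
            / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))
    return 1 - p_kal
```

As described above, each new track would be initialised with the localised position, zero velocity, and a covariance P with a large determinant; candidates close to the predicted position receive a lower cost.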

In order to further improve the tracking performance, the cost function can further incorporate bubble image features. As bubble images have been processed by the deconvolution before tracking, it is necessary to investigate which features of the bubble images are preserved after the deconvolution. This is investigated in the next subsection, with results shown in Fig. 6. Linear correlation is found between the following two image-intensity-based features of the original bubble images and those after deconvolution.

M0 = Σ(xp,zp)∈r I(zp, xp),  M2 = Σ(xp,zp)∈r I(zp, xp)[(zp − czr)² + (xp − cxr)²] (9)

Fig. 6.

Fig. 6

Results of feature analysis. The abscissa is the normalised feature value calculated on the original images. The ordinate is the normalised feature value calculated on the images after deconvolution. Blue dots and red circles denote features of regions obtained by the small and the large PSFs respectively. (a) M0. (b) M2. Dashed and solid lines denote the linear regressions between the features of images before and after deconvolution with different PSFs.

where xp and zp are pixel positions in the x and z directions on either the original or super-resolution maps; I is the pixel intensity; M0 is the sum of pixel intensities and M2 is the second-order central moment around the centre (czr, cxr) of each bubble. The differences in the two image features (M0diff and M2diff) are calculated for each candidate bubble pair. As there are two image-feature differences but only one cost to be generated from them, principal component analysis (PCA) is applied to the feature differences and the first principal component (P1st) is used, so that the dimensionality reduction loses as little information as possible. While PCA relies on a linear model, pkal is an exponential function of distance. Therefore, the first principal component of the two image-feature differences is multiplied with pkal to obtain the cost function

c3 = P1st ⋅ pkal. (10)
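The feature side of Eq. (10) can be sketched as below (a Python/NumPy illustration of Eq. (9) and the PCA step; the function names and the sign convention of the principal axis are implementation choices of this sketch, not from the paper):

```python
import numpy as np

def intensity_moments(patch):
    """M0 and M2 of Eq. (9) for one segmented bubble region."""
    I = np.asarray(patch, dtype=float)
    M0 = I.sum()
    z, x = np.indices(I.shape)
    cz, cx = (I * z).sum() / M0, (I * x).sum() / M0   # intensity-weighted centre
    M2 = (I * ((z - cz) ** 2 + (x - cx) ** 2)).sum()
    return M0, M2

def first_principal_component(diffs):
    """Project (M0diff, M2diff) pairs onto their first principal axis."""
    X = np.asarray(diffs, dtype=float)
    X = X - X.mean(axis=0)                 # centre the feature differences
    _, V = np.linalg.eigh(X.T @ X)         # eigh sorts eigenvalues ascending
    return X @ V[:, -1]                    # scores along the largest-variance axis
```

Each candidate pair's score would then be multiplied with its pkal to give c3 as in Eq. (10).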

The evaluation and comparisons of the performance of three cost functions are presented in the next two subsections.
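The pairing step itself, including the dummy bubbles described above, can be illustrated with a minimal sketch. The paper relies on an established minimisation procedure [33]; the exhaustive search and the choice of charging c_lim/2 per unpaired bubble below are simplifying assumptions for small frames only:

```python
from itertools import permutations

def pair_bubbles(cost, c_lim):
    """Globally minimise the total pairing cost between a reference frame
    (rows of `cost`) and a searching frame (columns). Unpaired bubbles are
    matched to dummy bubbles at c_lim / 2 each, so a real pair survives
    only if its cost is below c_lim."""
    m = len(cost)
    n = len(cost[0]) if cost else 0
    size = m + n                                  # one dummy per real bubble
    aug = [[c_lim / 2] * size for _ in range(size)]
    for i in range(m):
        for j in range(n):
            aug[i][j] = cost[i][j]                # real-real pairing cost
    for i in range(m, size):
        for j in range(n, size):
            aug[i][j] = 0.0                       # dummy-dummy pairs are free
    best, best_perm = float('inf'), None
    for perm in permutations(range(size)):        # exact search; toy sizes only
        total = sum(aug[i][j] for i, j in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return [(i, best_perm[i]) for i in range(m) if best_perm[i] < n]
```

A production implementation would replace the exhaustive search with a polynomial-time assignment solver such as the Hungarian algorithm [31].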

C. Evaluation based on Simulations

Simulations are implemented in Matlab (2019a, Mathworks, MA, USA) to analyse and evaluate the performance of the proposed framework.

1). Bubble image generation

Simulations in this study are based on bubble images manually segmented from in vivo CEUS images of a mouse brain by a constant threshold. Through a self-developed Matlab graphical user interface (GUI), 224 bubbles were manually tracked to identify the same bubbles across 138 image frames. 1310 patches, each containing an individual bubble image, were segmented and saved for the corresponding bubbles.

2). Evaluation of bubble isolation

Signals of ten segmented bubbles are randomly selected for this simulation. Because phase information is lost in the contrast-specific images, two bubble images are combined by maximal intensity projection to generate one simulation image, as shown in Fig. 4. There are in total 45 (the binomial coefficient C(10,2)) pair combinations. As the largest size of the bubble signals is 23 pixels (pixel size = 0.1 wavelength), distances between the centroids of the two signals in the same image are varied from 4 to 24 pixels in increments of 1 pixel, and the orientations of their centroids are varied in eight steps covering 360 degrees. Therefore in total 45×21×8 = 7560 frames are generated.
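The maximal-intensity combination of two patches can be sketched as below (a Python/NumPy stand-in for the MATLAB simulation; the function name and the non-negative integer offset are assumptions of this sketch):

```python
import numpy as np

def overlay_max(p1, p2, dz, dx):
    """Combine two bubble patches by maximum-intensity projection,
    offsetting p2 by (dz, dx) pixels (non-negative offsets assumed)."""
    H = max(p1.shape[0], dz + p2.shape[0])
    W = max(p1.shape[1], dx + p2.shape[1])
    frame = np.zeros((H, W))                      # common canvas for both patches
    frame[:p1.shape[0], :p1.shape[1]] = p1
    sub = frame[dz:dz + p2.shape[0], dx:dx + p2.shape[1]]
    frame[dz:dz + p2.shape[0], dx:dx + p2.shape[1]] = np.maximum(sub, p2)
    return frame
```

Sweeping dz and dx over the distance and angle grid described above yields the 7560 simulated frames.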

Fig. 4.

Fig. 4

Visualized demonstration of the simulations for isolation evaluation. The first-column figures show four different pairs of bubbles are overlapped in different distances and angles. Red lines are the boundaries detected by the constant threshold. The second-column figures show the images after deconvolution. The red crosses in the third-column figures denote the localised positions of bubbles.

Bubble signals in each frame are first isolated by a constant background noise threshold and then the isolated regions are processed by the sparsity-based deconvolution. Isolation accuracy is defined as the percentage of successfully isolated bubble pairs per distance and per angle. The isolation accuracies are averaged across angles, with the standard deviation calculated. Moreover, isolation accuracies obtained by the constant background noise threshold alone are also calculated for comparison.

3). Image feature analysis

In this study, a constant PSF is used as the deconvolution kernel. As the in vivo bubble images vary in size and shape, deconvolution with a fixed kernel will shrink the bubble signals but will not reduce them to individual points. Therefore, we hypothesise that some image features might be preserved after the deconvolution and can be used for tracking. The correlation between features of the original images and those of the shrunk regions is analysed. The calculated features are area, eccentricity, solidity, M0 and M2 (Eq. (9)). Nine PSFs defined by the 2D Gaussian function, with standard deviations of 0.5, 0.7, and 0.9 wavelengths in either the lateral (x) or axial (z) direction or both, are used for deconvolution in the correlation analysis to check whether the correlation holds for different PSFs.
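The correlation analysis reduces to fitting a least-squares line between each feature before and after deconvolution and reporting the coefficient of determination. A minimal sketch (Python/NumPy standing in for the MATLAB analysis; the function name is an assumption):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a least-squares line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)        # degree-1 polynomial fit
    residuals = y - (slope * x + intercept)
    ss_res = float((residuals ** 2).sum())        # residual sum of squares
    ss_tot = float(((y - y.mean()) ** 2).sum())   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

Features whose before/after values fall on a line give R² near 1, which is the criterion used in the results below to decide which features survive the deconvolution.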

4). Evaluation of bubble tracking

The performance of the different cost functions in Section II is evaluated on a simulated micro-vascular model, as shown in Fig. 3. As the image of the same bubble can vary over different imaging frames, the simulation takes this into account by using the segmented images of the same bubble at different image frames. The flow speeds in the vessels are randomly generated between 5 and 15 mm/s, and a random perturbation within 10% of the true flow speed is introduced into the simulated flow speed between two frames. The frame rate is set to 200 Hz and bubble positions in each frame are then calculated. Moreover, the number of bubbles in each frame varies from 10 to 50 in increments of 10 to simulate different bubble concentrations. Furthermore, the segmented bubble signals are separated into two groups, used to build two groups of simulations respectively, to enhance the robustness of the evaluation. Therefore in total ten simulation data sets are generated, each containing 800 frames.

Fig. 3.

Fig. 3

Simulation setting for tracking evaluation. Bubbles are moved along two bifurcated vessels. Each vessel has five branches. Disks with same face color in two frames denote the same bubbles and the different edge colors indicate different features of a bubble are used in different frames.

All three cost functions, i.e. c1 in Eq. (5), c2 in (8) and c3 in (10), are used in the graph-based approach to track bubbles in the simulated data. Parameters such as the upper limit of cost for pairing are kept the same. The nearest-neighbour tracking method without the graph-based approach is also implemented for comparison. Each whole data set is then downsampled in time to generate data at 50 and 20 frames per second. Both the positive predictive value (precision) and the false negative rate (miss rate) of pairing are calculated.
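With ground-truth pairings available from the simulation, the two metrics can be computed as below (a small Python sketch; the representation of pairs as (reference ID, searching ID) tuples is an assumption):

```python
def pairing_metrics(predicted_pairs, true_pairs):
    """Positive predictive value (precision) and false negative rate
    (miss rate) of bubble pairing."""
    predicted, truth = set(predicted_pairs), set(true_pairs)
    tp = len(predicted & truth)        # correctly paired bubbles
    fp = len(predicted - truth)        # spurious pairs
    fn = len(truth - predicted)        # missed pairs
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    miss_rate = fn / (tp + fn) if (tp + fn) else 0.0
    return precision, miss_rate
```

These are the quantities reported per frame rate and per bubble concentration in the results below.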

D. In vivo experiment

1). Mouse brain

A BALB/c mouse (female, 7 weeks old) was anesthetized with isoflurane and fixed on a thermal insulation blanket. After performing craniectomy, ultrasound contrast agent (Sonazoid, GE Healthcare, Amersham, UK) was injected via the tail vein by an infusion pump at a rate of 5 μl/min. All animal studies were approved by the Animal Laboratory of the Second Affiliated Hospital of Zhejiang University (permit No. 2019-070).

An ultrasound imaging system (ZONARE ZS3, Mindray, Shenzhen, China) and a linear array probe (L30-8, Mindray) were used to obtain the contrast specific images. Ultrasound was transmitted at 9 MHz and the centre frequency of the received harmonic filter was set to 18 MHz; the frame rate was 95 Hz; the mechanical index was 0.29. Moreover, the probe was translated along its elevation direction by a mechanical scanning system with a step of 20 μm to obtain CEUS images at 34 different slices. 800 frames were acquired for each slice.

To evaluate the sparsity-based deconvolution, bubbles isolated by the background noise threshold and bubbles isolated by the sparsity-based deconvolution are first tracked by the same graph-based approach with the same c3 cost function before comparison.

To evaluate the proposed tracking method and investigate the influence of frame rates on the tracking methods, the original data at 95 Hz and the downsampled data at frame rates of 47.5 Hz and 19 Hz, are isolated by the deconvolution and then tracked by the graph-based tracking algorithm with the three different cost functions. Taking results obtained at 95 Hz as reference, the absolute errors in measuring bubble density and velocity at the two low frame rates are averaged on each slice and then averaged across slices, with the standard deviations across slices calculated.

The SR bubble density map is obtained by maximum intensity projection of 34 2D image slices and the velocity map is obtained by the vector mean of the slices.

2). Human lymph node

The CEUS data of one patient were acquired at the Second Affiliated Hospital of Zhejiang University. Ethics approval was granted by the university institutional ethics committee, and the patient gave written informed consent. Ultrasound examinations were performed using a clinical scanner (Resona 7S, Mindray) and an L11-3U transducer. 1.2 mL of SonoVue (Bracco, Milan, Italy) microbubbles was administered intravenously as a 2 s bolus, which is a typical clinical dose for CEUS imaging of the cervical lymph nodes. 30 s of data used for SR were acquired at the end of the clinical scan, after a flash, in order to ensure the bubble concentration was not too high. Images were acquired at a frame rate of 20 Hz using an MI of 0.085. The data are processed by different combinations of isolation and tracking methods, following the same procedure used for the mouse brain data.

III. Results

A. Simulations

A demonstration of the isolation performance is shown in Fig. 4. It can also be seen quantitatively from Fig. 5 that isolation accuracy is significantly improved by using deconvolution compared to the constant threshold, even when the bubbles are close. For instance, the isolation accuracy using deconvolution reaches 98.1% when the distance between two bubbles is one wavelength, compared to 40.6% by noise thresholding.

Fig. 5.

Fig. 5

Results of the isolation evaluation. Isolation accuracy for each distance is demonstrated with the average and standard deviation of values across eight angles.

Regarding which image features are preserved after deconvolution, the coefficients of determination (R²) of the linear regressions are 0.173±0.132, 0.004±0.006, 0.452±0.187, 0.987±0.011, and 0.961±0.015 for eccentricity, solidity, area, M0, and M2 respectively. Only the two pixel-intensity-related features (M0 and M2) of the original images and those after deconvolution are linearly correlated. The linear correlation holds for the nine different sizes of PSFs. Only results corresponding to two PSFs are shown in Fig. 6.

Results of the tracking evaluation are shown in Fig. 7. It can be seen that frame rates and bubble concentrations can significantly affect the pairing precision. There is no obvious difference in performance between the nearest-neighbour algorithm and the graph-based tracking algorithm with the three different cost functions when the frame rate is the highest (200 Hz). The tracking precisions are over 90% for all the bubble concentrations and almost 100% when there are only 10 bubbles in the simulated frames. The nearest-neighbour tracking is found to significantly underperform compared with the graph-based tracking when the frame rate is 50 Hz. When the frame rate goes down to 20 Hz, the proposed cost function c3 outperforms all the other tracking cost functions. Results of simulations based on the two different bubble sets are consistent. Moreover, using c3 for tracking can also significantly reduce the miss rate at the frame rate of 20 Hz. Compared to c1, the increase in precision and the decrease in miss rate obtained by c3 are up to 52.6% and 50.5% respectively.

Fig. 7.

Fig. 7

Results of simulations for tracking evaluation at different frame rates, 200 Hz, 50 Hz, and 20 Hz. NN denotes the nearest-neighbour tracking algorithm. c1 denotes the graph-based tracking method whose cost function only consists of bubble distance. c2 denotes the cost function with the motion model. c3 denotes the cost function with features added to c2. When the number of bubbles in each frame increases from 10 to 50, the average distances between a bubble and its nearest neighbour are 4.00, 2.48, 1.94, 1.59, and 1.39 wavelengths respectively. As the PSF of a bubble usually spans 2 or 3 wavelengths, such average distances between bubbles correspond to high bubble concentrations.

B. In vivo experiment

SR velocity maps for the mouse brain data obtained with and without deconvolution-based isolation are shown in Fig. 8. Comparing Fig. 8 (c) and (d), clearer vasculature is detected with the sparsity-based image deconvolution than with the constant threshold. It can also be seen from Fig. 8 (e) and (f) that two closely spaced vessels were mistaken for a single vessel, but were separated when the sparsity-based image deconvolution was applied before bubble localisation and tracking. Moreover, two apparent vessels with a distance of 28.7 μm are clearly separated, as shown in Fig. 8 (g).

Fig. 8. Velocity maps obtained by different isolation methods but the same tracking method, i.e. graph-based tracking with the cost function c3.

Fig. 8

(a) Isolation based on the constant threshold. (b) Isolation based on deconvolution. Velocity map is displayed by a two-dimensional color map, where hue denotes the velocity magnitude and intensity increases with the microbubble (MB) density. (c) Magnified top-right region of interest (ROI) from (a). (d) Same top-right ROI from (b). It can be seen that the deconvolution can help detect clearer vasculature than the constant threshold. (e) Magnified mid-left ROI from (a). (f) Same mid-left ROI from (b). It can be seen that the deconvolution helps detect the two vessels (labeled by red arrows), while the constant threshold can only detect one vessel for the overlapped bubbles. (g) Another magnified ROI from (b) and the color bar denotes the normalized MB density. (h) Normalized MB density on white line in (g).

It can be seen from Fig. 9 that images obtained with the three different cost functions at the frame rate of 95 Hz are broadly similar. When the frame rate decreases, images obtained with the c1 function become noisy, with a significant number of spurious tracks at 19 Hz. There appear to be fewer such spurious tracks after incorporating the motion model in c2. The c3 cost function generates the most consistent SR images across the different frame rates among the three cost functions. Quantified changes are shown in Table I, where results obtained by the three cost functions at 95 Hz are taken as the references. It can be seen that the proposed cost function c3 has the smallest average errors.

Fig. 9.

Fig. 9

SR images obtained by processing the mouse brain data at different frame rates using different cost functions after deconvolution. Brightness of each figure is self-normalised by the maximal density respectively.

Table I.

Mean Errors in Bubble Localisation Density (unit: a.u.) and Velocity (unit: m/s) Obtained by Tracking at the Low Frame Rates, With the Corresponding Standard Deviations Across Slices.

                            c1           c2           c3
Density Error    47.5 Hz    23.7±12.6    18.1±8.7     16.7±9.2
(×10⁻³)          19 Hz      32.0±16.5    20.7±9.5     19.0±10.1
Velocity Error   47.5 Hz    10.8±6.81    7.42±4.25    7.18±4.68
(×10⁻⁵)          19 Hz      13.1±7.67    7.63±4.32    7.42±4.59

The maximum intensity projection image of all the human lymph node data is shown in Fig. 10. It can be seen from Fig. 10 (b) that the bubble concentration is too high for conventional SR imaging: constant thresholding cannot isolate clustered bubbles, resulting in clearly erroneous localisations. In contrast, the deconvolution method can mitigate this problem. Velocity maps obtained by different combinations of methods are shown in Fig. 11 and are significantly different. A supplementary video demonstrates localisation with the deconvolution and bubble trajectories linked by the c3 cost function.

Fig. 10.

Fig. 10

Maximum intensity projection image of the human lymph node data and localisation demonstrations with different isolation methods. Red cross denotes the positions of localised bubbles.

Fig. 11. Velocity maps obtained by different combinations of methods.

Fig. 11

(a) Constant threshold & cost function c3. (b) Deconvolution & cost function c1. (c) Deconvolution & cost function c2. (d) Deconvolution & cost function c3.

IV. Discussion

A. Findings

In this study we have introduced microbubble image features and Kalman filtering into a graph-based bubble tracking framework, and made the framework compatible with sparsity-based deconvolution, a powerful image pre-processor, in order to address the key challenges of tracking bubbles of high concentration at low frame rate. Both simulation and experimental results demonstrated significant improvements over existing methods.

Firstly, an improvement of up to 73.6% in bubble isolation accuracy by the sparsity-based deconvolution is shown in Fig. 4 and Fig. 5. Next, preserved image intensity features (M0 and M2) after the deconvolution, which were not considered in previous sparse recovery studies [18], [19], [35], are identified and incorporated into the bubble pairing cost function, together with the Kalman motion model as a weighting factor in Eq. (10). The results of the tracking show that while existing methods perform well at high frame rates and low bubble concentrations, the proposed approach outperforms existing ones when the frame rate decreases and the bubble concentration is high. At a low frame rate of 20 Hz, the tracking precision for the three cost functions c1, c2 and c3 increases in order, demonstrating the benefits of integrating the motion model and the bubble image features into the cost function for tracking.

The proposed method is also shown to be robust to overlapping bubbles and low frame rates on the in vivo mouse brain data. Isolating bubbles by deconvolution before tracking can distinguish two apparent vessels that are not separable by the existing constant-threshold approach, as shown in Fig. 8. Furthermore, the proposed cost function combining the Kalman motion model and image features improves the robustness of tracking to low frame rates, as shown in Fig. 9 and Table I. The methods are also compared on a clinical dataset of a lymph node with high bubble concentration and a low frame rate. Significant differences can be seen among the SR images produced by the different isolation and tracking methods in Fig. 11, although a common limitation of such in vivo evaluation is that ground truth is difficult to obtain.

B. Differences from previous work

Tracking individual bubble movements allows super-resolved mapping of the flow velocity in small vessels. Some existing studies, including but not limited to machine-learning-based approaches that aim to generate SR images at high bubble concentrations, only reconstructed the vasculature without flow velocity mapping [12], [18], [24], [25]. A number of existing studies have investigated improved localisation and tracking using techniques including sparsity-based deconvolution, graph-based approaches and Kalman motion models. The graph-based tracking method used in [9], [16], [34] minimises the total distance between bubble pairs. A Kalman motion model was implemented in [17], [19] for bubble tracking by solving the data association problem over a number of consecutive frames, which is known to be time-consuming [36]. A Kalman filter was also used to regularise and smooth the bubble tracking path [34]. Compared with previous work, which reported separately on using 1) bubble image similarity measures [5], 2) Kalman filters [14], [17], [19], and 3) graph-based methods [9], [16] for tracking bubbles, in this study we have adapted these three elements and combined them into a single cost function within a global optimisation framework for bubble tracking. The framework is designed to minimise the total cost of all proposed bubble pairs, which includes the cost due to image feature mismatch and the cost due to any mismatch between the data and the Kalman motion model prediction. Furthermore, a challenge in using sparsity-based deconvolution for isolation is that image features are changed by the processing. As a related contribution, we have therefore identified, through feature analysis, the bubble image features that are preserved after the deconvolution. Finally, we have also proposed the use of PCA to combine the multiple features into a single feature cost as part of our graph-based tracking optimisation framework.
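The global pairing step can be sketched as follows. For brevity this brute-force search over assignments stands in for the Hungarian algorithm [31] actually used in graph-based tracking, and the simple cost-limit gating is a simplified version of the upper cost limit on pairs; function and variable names are illustrative.

```python
from itertools import permutations

def pair_bubbles(cost, cost_limit):
    """Globally pair bubbles between two frames by minimising total cost.

    cost[i][j] is the cost of pairing bubble i in frame t with bubble j in
    frame t+1 (assumed square here, i.e. equal bubble counts). Pairs whose
    individual cost exceeds cost_limit are rejected, leaving those bubbles
    unpaired. Brute force for clarity; real implementations should use the
    Hungarian algorithm, which scales polynomially rather than factorially.
    """
    n = len(cost)
    best_total, best_assign = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_assign = total, perm
    # Gate out implausible pairs after the global assignment.
    return [(i, j) for i, j in enumerate(best_assign) if cost[i][j] <= cost_limit]
```

For example, with `cost = [[1, 10], [10, 2]]` the diagonal assignment wins, and tightening the cost limit below 2 drops the second pair, trading a higher miss rate for higher precision.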

Bubble localisation is performed with sparse recovery on each frame both in [19] and in our work. It is worth noting that, with the optimised computation, none of the matrices used in our sparsity-based deconvolution is larger than N×N, which is smaller and hence more computationally efficient than the weighted sparse recovery in [19], where the weighting matrix had dimensions of N²×N².
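As a sketch of how operator-based sparse recovery avoids forming matrices larger than the image itself, the loop below applies the point spread function and its adjoint as functions rather than explicit matrices. It is a plain ISTA iteration for the l1-regularised deconvolution problem min 0.5‖Ax − y‖² + λ‖x‖₁; the paper uses the faster FISTA variant [32]. All names are illustrative and the two operators are assumed to be supplied by the caller.

```python
import numpy as np

def ista_deconv(y, psf_conv, psf_conv_t, lam, lip, n_iter=100):
    """Minimal ISTA sketch for sparsity-based deconvolution.

    y          : observed image (any array shape)
    psf_conv   : callable applying the PSF operator A (e.g. a convolution)
    psf_conv_t : callable applying the adjoint A^T
    lam        : l1 regularisation weight
    lip        : Lipschitz constant of A^T A (step size is 1/lip)

    No matrix larger than the image is ever formed; A acts as an operator.
    """
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = psf_conv_t(psf_conv(x) - y)   # gradient of the data term
        z = x - grad / lip                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / lip, 0.0)  # soft-threshold
    return x
```

With an identity PSF this reduces to soft-thresholding of the observed image, which is a convenient sanity check of the implementation.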

C. Limitation and future work

Bubble images used for the simulations are manually and individually segmented from in vivo images. While such images come from real data and hence have realistic appearance for individual bubbles, the phase information is lost, and the coherent interference between bubble signals when bubbles get close to each other cannot be properly simulated. Simulations incorporating the ultrasound and bubble physics, with known ground truth, would be more realistic.

Compared with data association methods that operate over multiple frames, the graph-based tracking pairs bubbles in two adjacent frames. Since the motion model predicts bubble positions across multiple frames of the imaging sequence, pairing mistakes in past frames could impact the prediction of bubble positions in the current and future frames.
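The way pairing errors propagate can be seen from the prediction step of a constant-velocity Kalman motion model: a wrong pairing corrupts the velocity estimate, and every subsequent prediction inherits that error. The state layout below is an assumption, and the covariance propagation and measurement update are omitted for brevity.

```python
import numpy as np

def kalman_predict(state, dt=1.0):
    """One deterministic prediction step of a constant-velocity Kalman model.

    state = [x, y, vx, vy]. Position is advanced by velocity * dt while the
    velocity is held constant. In a full Kalman filter this is followed by a
    covariance update and a measurement correction using the paired bubble.
    """
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return F @ state
```

Applying this repeatedly without a correcting measurement makes the predicted position drift linearly with any velocity error, which is why a mispairing in one frame biases the candidate costs in the frames that follow.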

Moreover, it is worth noting that the upper limit on the pairing cost in the graph-based approach also affects the tracking results. Generally, a lower cost limit produces higher tracking precision but also a higher miss rate.

Sparsity-based deconvolution is time-consuming. While we have already used a fast algorithm for the optimisation, future work is required to further reduce the computational cost. Exploring phase information in the RF or IQ data, adopting non-linear motion models [37], and developing algorithms that combine multiple motion models [38] within the proposed framework could further improve bubble tracking performance and should be studied in the future.

V. Conclusion

We have developed a super-resolution processing framework, combining microbubble image features with Kalman tracking and compatible with sparsity-based deconvolution, and shown that the framework is robust to low imaging frame rates and high bubble concentrations.

Supplementary Material

Appendix

Acknowledgements

We would like to thank Prof. Chris Dunsby and Mr. Qingyuan Tan from Imperial College London and Prof. Yuanyi Zheng from Shanghai Jiao Tong University for their insightful discussion.

Contributor Information

Jipeng Yan, Email: j.yan19@imperial.ac.uk, Ultrasound Lab for Imaging and Sensing, Department of Bioengineering, Imperial College London, London, UK, SW7 2AZ.

Tao Zhang, Email: zhangtao-us@zju.edu.cn, Second Affiliated Hospital, Zhejiang University, Hangzhou, China, 313000.

Jacob Broughton-Venner, Ultrasound Lab for Imaging and Sensing, Department of Bioengineering, Imperial College London, London, UK, SW7 2AZ.

Pintong Huang, Second Affiliated Hospital, Zhejiang University, Hangzhou, China, 313000.

Meng-Xing Tang, Ultrasound Lab for Imaging and Sensing, Department of Bioengineering, Imperial College London, London, UK, SW7 2AZ.

References

  • [1] Couture O, Besson B, Montaldo G, Fink M, Tanter M. Microbubble ultrasound super-localization imaging (MUSLI); 2011 IEEE International Ultrasonics Symposium; 2011. pp. 1285–1287.
  • [2] Siepmann M, Schmitz G, Bzyl J, Palmowski M, Kiessling F. Imaging tumor vascularity by tracing single microbubbles; 2011 IEEE International Ultrasonics Symposium; 2011. pp. 1906–1909.
  • [3] Viessmann O, Eckersley R, Christensen-Jeffries K, Tang M-X, Dunsby C. Acoustic super-resolution with ultrasound and microbubbles. Physics in Medicine & Biology. 2013;58(18):6447. doi: 10.1088/0031-9155/58/18/6447.
  • [4] Desailly Y, Couture O, Fink M, Tanter M. Sono-activated ultrasound localization microscopy. Applied Physics Letters. 2013;103(17):174107.
  • [5] Christensen-Jeffries K, Browning RJ, Tang M-X, Dunsby C, Eckersley RJ. In vivo acoustic super-resolution and super-resolved velocity mapping using microbubbles. IEEE Transactions on Medical Imaging. 2014;34(2):433–440. doi: 10.1109/TMI.2014.2359650.
  • [6] Errico C, Pierre J, Pezet S, Desailly Y, Lenkei Z, Couture O, Tanter M. Ultrafast ultrasound localization microscopy for deep super-resolution vascular imaging. Nature. 2015;527(7579):499–502. doi: 10.1038/nature16066.
  • [7] Lin F, Shelton SE, Espíndola D, Rojas JD, Pinton G, Dayton PA. 3-D ultrasound localization microscopy for identifying microvascular morphology features of tumor angiogenesis at a resolution beyond the diffraction limit of conventional ultrasound. Theranostics. 2017;7(1):196. doi: 10.7150/thno.16899.
  • [8] Zhu J, Rowland EM, Harput S, Riemer K, Leow CH, Clark B, Cox K, Lim A, Christensen-Jeffries K, Zhang G, et al. 3D super-resolution US imaging of rabbit lymph node vasculature in vivo by using microbubbles. Radiology. 2019;291(3):642–650. doi: 10.1148/radiol.2019182593.
  • [9] Song P, Trzasko JD, Manduca A, Huang R, Kadirvel R, Kallmes DF, Chen S. Improved super-resolution ultrasound microvessel imaging with spatiotemporal nonlocal means filtering and bipartite graph-based microbubble tracking. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2017;65(2):149–167. doi: 10.1109/TUFFC.2017.2778941.
  • [10] Andersen SB, Hoyos CAV, Taghavi I, Gran F, Hansen KL, Sørensen CM, Jensen JA, Nielsen MB. Super-resolution ultrasound imaging of rat kidneys before and after ischemia-reperfusion; 2019 IEEE International Ultrasonics Symposium (IUS); 2019. pp. 1169–1172.
  • [11] Christensen-Jeffries K, Couture O, Dayton PA, Eldar YC, Hynynen K, Kiessling F, O'Reilly M, Pinton GF, Schmitz G, Tang M-X, et al. Super-resolution ultrasound imaging. Ultrasound in Medicine & Biology. 2020;46(4):865–891. doi: 10.1016/j.ultrasmedbio.2019.11.013.
  • [12] Chen Q, Yu J, Rush BM, Stocker SD, Tan RJ, Kim K. Ultrasound super-resolution imaging provides a noninvasive assessment of renal microvasculature changes during mouse acute kidney injury. Kidney International. 2020;98(2):355–365. doi: 10.1016/j.kint.2020.02.011.
  • [13] Kanoulas E, Butler M, Rowley C, Voulgaridou V, Diamantis K, Duncan WC, McNeilly A, Averkiou M, Wijkstra H, Mischi M, et al. Super-resolution contrast-enhanced ultrasound methodology for the identification of in vivo vascular dynamics in 2D. Investigative Radiology. 2019;54(8):500. doi: 10.1097/RLI.0000000000000565.
  • [14] Opacic T, Dencks S, Theek B, Piepenbrock M, Ackermann D, Rix A, Lammers T, Stickeler E, Delorme S, Schmitz G, et al. Motion model ultrasound localization microscopy for preclinical and clinical multiparametric tumor characterization. Nature Communications. 2018;9(1):1–13. doi: 10.1038/s41467-018-03973-8.
  • [15] Harput S, Christensen-Jeffries K, Brown J, Li Y, Williams KJ, Davies AH, Eckersley RJ, Dunsby C, Tang M-X. Two-stage motion correction for super-resolution ultrasound imaging in human lower limb. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2018;65(5):803–814. doi: 10.1109/TUFFC.2018.2824846.
  • [16] Demené C, Robin J, Dizeux A, Heiles B, Pernot M, Tanter M, Perren F. Transcranial ultrafast ultrasound localization microscopy of brain vasculature in patients. Nature Biomedical Engineering. 2021;5(3):219–228. doi: 10.1038/s41551-021-00697-x.
  • [17] Ackermann D, Schmitz G. Detection and tracking of multiple microbubbles in ultrasound B-mode images. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2015;63(1):72–82. doi: 10.1109/TUFFC.2015.2500266.
  • [18] Bar-Zion A, Solomon O, Tremblay-Darveau C, Adam D, Eldar YC. SUSHI: Sparsity-based ultrasound super-resolution hemodynamic imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2018;65(12):2365–2380. doi: 10.1109/TUFFC.2018.2873380.
  • [19] Solomon O, van Sloun RJ, Wijkstra H, Mischi M, Eldar YC. Exploiting flow dynamics for superresolution in contrast-enhanced ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2019;66(10):1573–1586. doi: 10.1109/TUFFC.2019.2926062.
  • [20] Yu J, Lavery L, Kim K. Super-resolution ultrasound imaging method for microvasculature in vivo with a high temporal accuracy. Scientific Reports. 2018;8(1):1–11. doi: 10.1038/s41598-018-32235-2.
  • [21] Qian X, Kang H, Li R, Lu G, Du Z, Shung KK, Humayun MS, Zhou Q. In vivo visualization of eye vasculature using super-resolution ultrasound microvessel imaging. IEEE Transactions on Biomedical Engineering. 2020;67(10):2870–2880. doi: 10.1109/TBME.2020.2972514.
  • [22] Huang C, Lowerison MR, Trzasko JD, Manduca A, Bresler Y, Tang S, Gong P, Lok U-W, Song P, Chen S. Short acquisition time super-resolution ultrasound microvessel imaging via microbubble separation. Scientific Reports. 2020;10(1):1–13. doi: 10.1038/s41598-020-62898-9.
  • [23] Park JH, Choi W, Yoon GY, Lee SJ. Deep learning-based super-resolution ultrasound speckle tracking velocimetry. Ultrasound in Medicine & Biology. 2020;46(3):598–609. doi: 10.1016/j.ultrasmedbio.2019.12.002.
  • [24] Liu X, Zhou T, Lu M, Yang Y, He Q, Luo J. Deep learning for ultrasound localization microscopy. IEEE Transactions on Medical Imaging. 2020;39(10):3064–3078. doi: 10.1109/TMI.2020.2986781.
  • [25] van Sloun RJ, Solomon O, Bruce M, Khaing ZZ, Wijkstra H, Eldar YC, Mischi M. Super-resolution ultrasound localization microscopy through deep learning. IEEE Transactions on Medical Imaging. 2020;40(3):829–839. doi: 10.1109/TMI.2020.3037790.
  • [26] Sheeran PS, Matsunaga TO, Dayton PA. Phase change events of volatile liquid perfluorocarbon contrast agents produce unique acoustic signatures. Physics in Medicine & Biology. 2013;59(2):379. doi: 10.1088/0031-9155/59/2/379.
  • [27] Zhang G, Harput S, Lin S, Christensen-Jeffries K, Leow CH, Brown J, Dunsby C, Eckersley RJ, Tang M-X. Acoustic wave sparsely activated localization microscopy (AWSALM): Super-resolution ultrasound imaging using acoustic activation and deactivation of nanodroplets. Applied Physics Letters. 2018;113(1):014101.
  • [28] Betzig E, Patterson GH, Sougrat R, Lindwasser OW, Olenych S, Bonifacino JS, Davidson MW, Lippincott-Schwartz J, Hess HF. Imaging intracellular fluorescent proteins at nanometer resolution. Science. 2006;313(5793):1642–1645. doi: 10.1126/science.1127344.
  • [29] Rust MJ, Bates M, Zhuang X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nature Methods. 2006;3(10):793–796. doi: 10.1038/nmeth929.
  • [30] Zhang G, Harput S, Hu H, Christensen-Jeffries K, Zhu J, Brown J, Leow CH, Eckersley RJ, Dunsby C, Tang M-X. Fast acoustic wave sparsely activated localization microscopy: Ultrasound super-resolution using plane-wave activation of nanodroplets. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2019;66(6):1039–1046. doi: 10.1109/TUFFC.2019.2906496.
  • [31] Kuhn HW. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly. 1955;2(1-2):83–97.
  • [32] Beck A, Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences. 2009;2(1):183–202.
  • [33] Sbalzarini IF, Koumoutsakos P. Feature point tracking and trajectory analysis for video imaging in cell biology. Journal of Structural Biology. 2005;151(2):182–195. doi: 10.1016/j.jsb.2005.06.002.
  • [34] Tang S, Song P, Trzasko JD, Lowerison M, Huang C, Gong P, Lok U-W, Manduca A, Chen S. Kalman filter-based microbubble tracking for robust super-resolution ultrasound microvessel imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2020;67(9):1738–1751. doi: 10.1109/TUFFC.2020.2984384.
  • [35] Solomon O, Eldar YC, Mutzafi M, Segev M. SPARCOM: Sparsity based super-resolution correlation microscopy. SIAM Journal on Imaging Sciences. 2019;12(1):392–419.
  • [36] Oh S, Russell S, Sastry S. Markov chain Monte Carlo data association for multi-target tracking. IEEE Transactions on Automatic Control. 2009;54(3):481–497.
  • [37] Piepenbrock M, Dencks S, Schmitz G. Microbubble tracking with a nonlinear motion model; 2020 IEEE International Ultrasonics Symposium (IUS); 2020. pp. 1–4.
  • [38] Roudot P, Ding L, Jaqaman K, Kervrann C, Danuser G. Piecewise-stationary motion modeling and iterative smoothing to track heterogeneous particle motions in dense environments. IEEE Transactions on Image Processing. 2017;26(11):5395–5410. doi: 10.1109/TIP.2017.2707803.
