Abstract
A conventional ultrasound image is formed by transmitting a focused wave into tissue, time-shifting the backscattered echoes received on an array transducer, and summing the resulting signals. The van Cittert-Zernike theorem predicts a particular similarity, or coherence, of these focused signals across the receiving array. Many groups have used an estimate of the coherence to augment or replace the B-mode image in an effort to suppress noise and stationary clutter echo signals, but this measurement requires access to individual receive channel data. Most clinical systems have efficient pipelines for producing focused and summed RF data without any direct way to individually address the receive channels. We describe a method for performing coherence measurements that is more accessible for a wide range of coherence-based imaging. The reciprocity of the transmit and receive apertures in the context of coherence is derived and the equivalence of the coherence function is validated experimentally using a research scanner. The proposed method is implemented on a Siemens ACUSON SC2000™ ultrasound system and in vivo short-lag spatial coherence imaging is demonstrated using only summed RF data. The components beyond the acquisition hardware and beamformer necessary to produce a real-time ultrasound coherence imaging system are discussed.
I. Introduction
Ultrasound images are formed by transmitting sound waves from an array transducer and recording echoes backscattered from the body using the same array. A typical “delay-and-sum” beamformer applies receive focal delays to the individual channel signals and coherently sums them to form a single RF line. This operation is conventionally performed early in the signal processing chain to reduce the total bandwidth required downstream and therefore reduce system cost and complexity. While the summed RF data provide the necessary information for performing many clinical imaging techniques, systems that process and provide access to each receive channel separately enable alternative beamforming methods that examine the incoming signal across the receiving array.
The van Cittert-Zernike (VCZ) theorem describes the expected coherence of these returned signals scattered from diffuse media [1], [2]. For a uniform, diffuse medium, the coherence of echoes backscattered from the focal point as a function of receive element separation, or “lag”, is given by the Fourier transform of the square of the transmit pressure field magnitude. Therefore, the expected coherence of an aperture with unity weighting across the array is a ramp function that predicts decreasing covariance for increasing lag value. The measured coherence has been used as a metric for evaluating phase error in aberration correction schemes as an alternative to measuring speckle or point target brightness [3], [4]. More recently, coherence measurements have been used to augment B-mode images by suppressing signal in regions with low coherence [5], [6]. Similarly, short-lag spatial coherence (SLSC) imaging takes advantage of this measurement by estimating the coherence curve as a function of lag and integrating the curve up to a small fraction of the aperture length to form an image [7].
Previous implementations of a real-time SLSC imaging system utilizing a research scanner with access to receive channel data achieved frame rates of up to 6.7 frames per second [8]. Powerful research scanners with access to receive channel data have been developed [9], but translation of coherence methods to more widely-available clinical scanners with more developed post-processing pipelines would improve the accessibility of these techniques and allow them to be combined with adaptive imaging methods to overcome in vivo challenges such as motion and aberration. Although most clinical scanners do not provide access to receive channel data, alternative beamforming methods can be implemented on these scanners by taking advantage of acoustic reciprocity. We hypothesize that the proposed technique provides a coherence measurement that is equivalent to the conventional receive channel coherence and that these data could be used for coherence imaging applications such as the generalized coherence factor [5], phase coherence imaging [6] and SLSC imaging [7]. This technique also provides the added benefit of coherence measurement throughout the full field of view, unlike the limited depth of field provided by conventional SLSC images [10].
Section II presents a derivation of the acoustic reciprocity of coherence by interchanging the transmit sources and receive elements. The theory is tested against experimental data by examining both averaged coherence curves and SLSC images for transmit channel data versus receive channel data. Section III describes possible imaging sequences to measure coherence on a clinical scanner using the conventional hardware delay-and-sum beamformer by taking advantage of reciprocity. The selected sequence is implemented on the Siemens ACUSON SC2000™ ultrasound system and evaluated for in vivo SLSC imaging of the human liver. Section IV discusses the possibility of real-time clinical quality SLSC imaging.
II. Acoustic Reciprocity
A. Theory
The following is a modification of the derivation performed by Mallart and Fink [1], exchanging the roles of the transmit and receive apertures to derive the expected coherence as a function of transmit channel separation, or lag. The same assumptions of a sub-wavelength incoherent scattering medium and separability are made here. The desired function is the spatial covariance of the pressure field transmitted from two points X1 and X2 at the frequency f, scattered by the medium and received on the same array (note the change in notation from the referenced derivation).
The measured pressure P is described by a linear system consisting of three parts – forward propagation, scattering, and backward propagation. A pressure wave with frequency f is transmitted from a single element at location X1, interacts with a scatterer at location X0, and returns an echo to the transducer:
P(X_1, X_0, f) = H_{tx}(X_1, X_0, f) \, \chi(X_0, f) \, H_{rx}(X_0, f)    (1)
A constant representing the transmit pressure amplitude is omitted from this expression. The incident pressure at point X0 due to a single point source on the aperture with unity weighting is described by a spherical wave:
H_{tx}(X_1, X_0, f) = \frac{e^{jk|X_0 - X_1|}}{|X_0 - X_1|}    (2)
The pressure field reflected by a single scatterer is a spherical wave described by the scattering function χ(X, f). The pressure is received at an array located a distance r away and summed over the entire array with aperture weighting function O(X):
H_{rx}(X_0, f) = \int_{\text{aperture}} O(X) \, \frac{e^{jk|X - X_0|}}{|X - X_0|} \, dX    (3)
The aperture function includes appropriate phasing to focus the array such that the wavefront arriving from the point X0 is constant across the receiving aperture. Integrating the scattered pressure field from the entire medium gives the total received pressure signal:
P(X_1, f) = \int_{\text{medium}} H_{tx}(X_1, X_0, f) \, \chi(X_0, f) \, H_{rx}(X_0, f) \, dX_0    (4)
The spatial covariance Rp(X1,X2, f) is the expected value of the inner product between the pressure from two source points and is given as:
R_p(X_1, X_2, f) = \langle P(X_1, f) \, P^*(X_2, f) \rangle    (5)

R_p(X_1, X_2, f) = \iint \langle \chi(X_0, f) \, \chi^*(X_0', f) \rangle \, H_{rx}(X_0, f) \, H_{rx}^*(X_0', f) \, \frac{e^{jk|X_0 - X_1|}}{|X_0 - X_1|} \, \frac{e^{-jk|X_0' - X_2|}}{|X_0' - X_2|} \, dX_0 \, dX_0'    (6)
The scattering terms are assumed to be spatially incoherent such that ⟨χ(X0, f) χ*(X0′, f)⟩ = χ0(X0, f) δ(X0 − X0′). The resulting delta function is used to simplify the integral, reducing the expression to a single volume integral. The spatial covariance is therefore expressed as a function of the frequency-dependent scattering function, the squared magnitude of the receive transfer function, and a phase term:
R_p(X_1, X_2, f) = \int \chi_0(X_0, f) \, |H_{rx}(X_0, f)|^2 \, \frac{e^{jk(r_1 - r_2)}}{r_1 r_2} \, dX_0, \qquad r_1 = |X_0 - X_1|, \; r_2 = |X_0 - X_2|    (7)
The Fresnel approximation assumes that the axial distance between the source, X1 = (x1, 0), and the scattering point, X0 = (x, z), is large compared to the lateral distance. This allows the approximation r ≈ z in the amplitude terms and
r \approx z + \frac{(x - x_1)^2}{2z}    (8)
in the phase term. Taking the depth z to be constant, the final expression shows the familiar form of the van Cittert-Zernike theorem:
R_p(X_1, X_2, f) = \frac{e^{jk(x_1^2 - x_2^2)/2z}}{z^2} \int \chi_0(x, f) \, |H_{rx}(x, z, f)|^2 \, e^{-jk\,x\,(x_1 - x_2)/z} \, dx    (9)
Like the expression derived by Mallart and Fink, when the two points are symmetrical with respect to the center of the transducer (x1 = −x2), this expression reduces to a two-dimensional Fourier transform of the squared magnitude of the receive transfer function term Hrx taken with respect to the frequency (x1 − x2)/λz. Comparing (11) in [1] with (9), the expected spatial covariance of ultrasound in tissue demonstrates reciprocity of the transmit and receive apertures. Both the conventional receive channel configuration and the proposed transmit channel configuration are shown in Fig. 1. In this case, the covariance as a function of transmit element separation depends on the receive aperture function. Given a uniform weighting function for both transmit and receive apertures, the expected coherence between either receive channels or transmit channels is a ramp function.
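For a uniform aperture, this prediction can be computed directly as the normalized autocorrelation of the aperture weighting function. The following MATLAB sketch is purely illustrative, with an assumed element count and weighting vector, and reproduces the ramp described above for comparison against measured curves.

    % Predicted coherence versus lag for a focused, diffuse target (VCZ theorem):
    % the normalized autocorrelation of the aperture weighting function, which is
    % a ramp (triangle) for uniform weighting. Element count and weights are
    % assumed values for illustration.
    N     = 128;                          % elements in the focused aperture
    apod  = ones(N, 1);                   % uniform weighting (any window may be used)
    Rfull = conv(apod, flipud(apod));     % autocorrelation over lags -(N-1)..(N-1)
    R     = Rfull(N:end) / Rfull(N);      % non-negative lags, normalized so R(0) = 1
    plot(0:N-1, R);                       % ramp from 1 at lag 0 toward 0 at lag N-1
    xlabel('Lag (elements)'); ylabel('Predicted coherence');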
Fig. 1.
(a) Receive channel coherence configuration. A signal is transmitted from a focused group of transmit sources, scattered and received at two different receive elements. (b) Proposed transmit channel coherence configuration. A signal is transmitted from two different transmit sources, scattered and received using a focused group of receive elements.
B. Experimental Results
An experimental demonstration of the reciprocity principle can be made using a single-channel, synthetic-aperture data set. An RF channel data set was collected using the Verasonics ultrasound scanner (Verasonics, Inc., Redmond, WA) and an ATL L12-5 256-element linear array (multiplexed for 128 channels in transmit and receive) with a pitch of 0.195 mm and a transmit frequency of 5 MHz. All data were stored for offline processing. The imaging target was an ATS Model 549 phantom (ATS Laboratories, Inc., Bridgeport, CT). The scan was performed by transmitting on and recording from individual elements among the center 128 elements of the array, imaging the same location in the phantom.
The recorded data set f[t, rx, tx] has three dimensions – range (time), receive channel, and transmit channel. Synthetic aperture beamforming was performed by calculating the delays necessary for a wave traveling from each transmit element to the points in the field and back to each receive element, creating a focused data set of subimages s[z, x, rx, tx] with four dimensions – axial position, lateral position, receive channel, and transmit channel. A conventional “receive channel” data set, s[z, x, rx], was formed by summing the beamformed data across the transmit channel dimension. A “transmit channel” data set, s[z, x, tx], was formed by summing across the receive channel dimension. Coherence curves were calculated from each data set by computing the normalized cross-correlation across the channels as a function of element spacing. Synthetic aperture focusing provides the focusing necessary for the VCZ theorem to be valid at all points in the field of view, allowing analysis of the coherence to be performed anywhere in the resulting image [10].
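As a concrete illustration of this step, the MATLAB sketch below forms the two channel-domain data sets from the focused synthetic aperture set; the variable s holds the four-dimensional data described above, and the names are illustrative rather than taken from any particular beamforming package.

    % Form the "receive channel" and "transmit channel" data sets from the
    % focused synthetic aperture set s[z, x, rx, tx] (assumed already delay-
    % and-sum focused). Summing over the transmit dimension yields the
    % conventional receive channel set; summing over the receive dimension
    % yields the proposed transmit channel set used throughout this work.
    s_rx = squeeze(sum(s, 4));   % s[z, x, rx]: conventional receive channel data
    s_tx = squeeze(sum(s, 3));   % s[z, x, tx]: reciprocal transmit channel data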
In addition to the raw coherence curves, an SLSC image can be formed to visually compare the measurements made over the entire field of view. Given either of the focused channel data sets, the first step in calculating an SLSC image is to compute the normalized cross-correlation between each pair of channel signals and average over all pairs with the same separation, giving a single value ρ[z, x, n] for each lag value n at every axial and lateral image location:
\rho[z, x, n] = \frac{1}{N - n} \sum_{i=1}^{N-n} \frac{\sum_{k=k_1}^{k_2} s_i[k] \, s_{i+n}[k]}{\sqrt{\sum_{k=k_1}^{k_2} s_i^2[k] \, \sum_{k=k_1}^{k_2} s_{i+n}^2[k]}}    (10)

where s_i[k] is the focused signal on channel i at axial sample k for the image line at lateral position x, and N is the number of channels.
Axial kernels on the order of a wavelength, from sample k1 to k2, are used in the cross-correlation and are centered on the sample at depth z. Each pixel Rsl[z, x] in the SLSC image is created by summing the corresponding coherence curve up to M lags; normalization is then applied to the resulting image [7]:
R_{sl}[z, x] = \sum_{n=1}^{M} \rho[z, x, n]    (11)
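A minimal MATLAB sketch of (10) and (11) for a single pixel is given below; the function and variable names (slsc_pixel, kernel, M) are illustrative, and the channel data may come from either the receive or transmit channel set.

    % Compute the SLSC value of (10)-(11) at one pixel of a focused channel
    % data set s[z, x, ch]. "kernel" is the axial half-length in samples and
    % "M" is the maximum lag included in the short-lag sum.
    function Rsl = slsc_pixel(s, z, x, kernel, M)
        ch = squeeze(s(z-kernel:z+kernel, x, :));   % [samples x channels] kernel data
        N  = size(ch, 2);                           % number of channels
        Rsl = 0;
        for n = 1:M                                 % short lags only
            rho_n = 0;
            for i = 1:N-n                           % all pairs at separation n
                a = ch(:, i);  b = ch(:, i+n);
                rho_n = rho_n + (a.'*b) / sqrt((a.'*a) * (b.'*b));   % eq. (10)
            end
            Rsl = Rsl + rho_n / (N - n);            % average per lag, sum over lags, eq. (11)
        end
    end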
The lag value M is often selected as a fraction of the total array length and is a parameter that can be varied to optimize image quality. Fig. 2 shows sample coherence profiles for the transmit and receive data sets, averaged over a 5 mm × 5 mm square region at a depth of 13 mm. The curves for the transmit channel and receive channel coherence are almost identical at all lags across the aperture. Both show suppression of the coherence due to random noise and the same deviation from the expected linear coherence profile. This shape is often seen in practice due to effects such as effective apodization of the array, reverberation, and phase aberration [11], [12]. These linear effects all obey reciprocity and only the channel noise, added after wave propagation, is expected to cause differences. To test this, the plot also shows a coherence profile measured from a matching Field II simulation [13], [14], run using the same transducer parameters as the experimental setup. As expected, the coherence measured from both the transmit channel and receive channel data sets in the absence of noise are identical.
Fig. 2.
Sample coherence curves taken from experimental data for both transmit and receive channel data sets using a kernel length of one wavelength. The experimental curves show nearly identical coherence with slight differences due to the summation of random noise. The Field II simulation shows a similar profile, exactly the same for both data sets in the absence of noise.
Fig. 3 shows the resulting SLSC image for each experimental case. The background texture present in both images is the same, confirming that the measured coherence is effectively the same between the two cases. For coherence applications such as SLSC imaging, either the transmit or receive channel dimension can be selected for processing without any penalty to the resulting data quality.
Fig. 3.
(left) Receive channel data SLSC image and (right) transmit channel data SLSC image. Peak coherence is observed around 15 mm due to the fixed elevation lens on the array. The two images are nearly identical with slight differences due to the summation of random noise. The displayed dynamic range is a linear mapping from 0 to 1.
III. Clinical Scanner Implementation
A. Beamforming
The transmit channel data set can be directly acquired by performing a conventional synthetic aperture scan, transmitting on a single element and recording data for all desired receive beams either over multiple transmit events or using parallel receive beamforming [15], [16]. The system performs receive focusing and sums the receive channel data before the user can access any signals, producing the transmit channel data set s[z, x, tx]. Two main acquisition sequences and beamforming approaches compatible with a clinical scanner architecture are presented and are compared with a conventional full synthetic aperture implementation. All methods would also be compatible with using a virtual source element located behind the array to increase the channel signal-to-noise ratio [17].
1) Rectilinear field of view, transmit correction
The most straightforward approach to data acquisition is to specify the same rectilinear field of view for each of the transmit events such that the resulting data completely overlap. This requires unique receive delays corresponding to every transmit element. It is assumed that the scanner calculates delays along the specified receive beamforming line and back to each receive element individually. The field of view and beamforming geometry are shown in Fig. 4(a). Even though receive beamforming is performed on the scanner, this data set still requires transmit focusing to compensate for the difference in path length between the true transmit distance (the line from the transmit element to the point of interest) and the receive beam line. This additional spatial shift is described by
Fig. 4.
Focusing schemes to acquire synthetic aperture transmit channel data. Each case shows the active transmit element and a sample receive element with the assumed focusing path (which may differ from the true path of propagation). (a) Rectilinear acquisition grid with transmit focusing performed in post-processing. The transmit path length for focusing is incorrectly chosen to lie along the receive beam instead of radially from the transmit element, so correction is required. (b) Fan-beam grid with scan conversion performed in post-processing. Each transmit element corresponds to a different fan-beam grid. (c) Full synthetic aperture focusing. The transmit path is not collinear with the specified receive beam.
\Delta d(z, x) = \sqrt{z^2 + (x - x_t)^2} - z    (12)
where z is axial depth, x is lateral receive beam location and xt is the current transmit element position.
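A hedged MATLAB sketch of applying (12) to one summed receive beam follows. It assumes the beamformed line rf is stored as a function of depth along the axis z_axis and that the echo from true depth z appears approximately at depth z + Δd in the uncorrected line; the variable names (rf, z_axis, x, xt) are illustrative.

    % Transmit-path correction of (12) for one receive beam at lateral position
    % x, for the transmit event from element position xt. The corrected line is
    % read out at the shifted depth z + dd; queries beyond the recorded line
    % return zero.
    dd     = sqrt(z_axis.^2 + (x - xt)^2) - z_axis;        % extra transmit path, eq. (12)
    rf_fix = interp1(z_axis, rf, z_axis + dd, 'linear', 0);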
Once the receive channels have been summed, each sample describes a particular receive focus. Resampling the summed data to perform the necessary transmit delay adjustment introduces errors in receive delay curvature by selecting a sample that describes a different receive focus. This creates focal errors that are more severe closer to the transducer, where the receive delay curvature changes more quickly through depth. The single channel data set used to produce Fig. 3 was beamformed to mimic this geometry and the necessary transmit refocusing was applied to the summed receive signals. Fig. 5(left) shows the resulting SLSC image and demonstrates artifacts that obscure the speckle texture close to the transducer and the shallowest set of lesions due to the incorrect receive focus curvature. The artifact is less noticeable after the first few centimeters and no artifact is apparent in the middle set of lesions. A compromise must be made between transmit and receive focal quality in this case.
Fig. 5.
Experimental single channel data used to mimic two potential implementation schemes and create SLSC images, compared to conventional full synthetic aperture. Inset shows shallow anechoic lesion. (left) Rectilinear field of view with complete overlap between transmit events, transmit focus correction applied after receive focusing and summation. Notice the clutter in the lesion displayed in the inset. (center) Fan-beam field of view centered on each transmit element, shifting with each transmit event, and scan converted with bilinear interpolation after receive focusing and summation. (right) Full synthetic aperture acquisition. The displayed dynamic range is a linear mapping from 0 to 1.
2) Shifting fan-beam field of view, scan conversion
A more complete synthetic aperture calculation can be made by positioning the receive beamforming lines such that they are collinear with the line drawn from the active transmit element to the point of interest in the field. This requires positioning the receive beams in a fan-beam around the active transmit element, as if performing phased array focusing. Phased array scanning is supported by many scanners, but the complication of this scheme is that it is necessary to translate the field of view to be centered on the active transmit element. This geometry is shown in Fig. 4(b).
While no additional focusing is required, it is necessary to scan convert the RF data onto the same rectilinear grid for post-processing. This can be done quickly using bilinear interpolation and speed can be increased further by running the computation in parallel for each frame acquired. The signal must be adequately sampled both in range and angle in order to maintain coherence in the scan conversion process from a polar coordinate system to a Cartesian grid. The region where the fan-beams overlap is the effective field of view. As before, a single channel data set was used to mimic this geometry and the necessary scan conversion was applied before forming an SLSC image. Fig. 5(center) demonstrates that the synthetic aperture focusing is performed correctly and that the scan conversion has only a small effect on the image quality. In this example, 128 receive beams over an 80 degree field of view were used, creating a masking effect outside this region in the resulting image.
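The scan conversion for one fan-beam subimage might be sketched in MATLAB as follows, assuming the subimage beam is sampled in range r and steering angle th with its vertex at the active transmit element position xt; the grid vectors x_grid and z_grid and all variable names are illustrative.

    % Bilinear scan conversion of one fan-beam subimage onto the common
    % rectilinear grid. Points outside the recorded fan are set to zero,
    % producing the masking effect described in the text.
    [X, Z]   = meshgrid(x_grid, z_grid);                   % common Cartesian grid
    R        = sqrt((X - xt).^2 + Z.^2);                   % range from the beam vertex
    TH       = atan2(X - xt, Z);                           % steering angle from vertical
    sub_cart = interp2(th, r, beam, TH, R, 'linear', 0);   % bilinear interpolation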
3) Rectilinear field of view, full synthetic aperture focusing
The best-case focusing method specifies independent receive delays for each transmit and receive element both axially and laterally. Full synthetic aperture delays are calculated by the system and the data are all recorded on the same rectilinear grid, removing the need for scan conversion. This method is not compatible with the standard receive beamforming line definition since the transmit and receive paths are not collinear. The configuration is shown in Fig. 4(c). This method is identical to the one used in Fig. 3(right) and the image is shown again in Fig. 5(right).
The contrast and contrast-to-noise ratio (CNR) of the anechoic lesion in the inset were measured in each case to highlight differences in the images:
\text{Contrast} = 20 \log_{10}\!\left(\frac{\mu_i}{\mu_o}\right)    (13)

\text{CNR} = \frac{|\mu_o - \mu_i|}{\sqrt{\sigma_o^2 + \sigma_i^2}}    (14)
where μ is the mean, σ2 is the variance, and the subscripts indicate values outside (o) and inside (i) the lesion.
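A short MATLAB sketch of these measurements is given below, assuming env_i and env_o are vectors of detected pixel values drawn from matched regions inside and outside the lesion (illustrative names only).

    % Contrast (13) and CNR (14) from pixel values inside (i) and outside (o)
    % the target region.
    mu_i = mean(env_i);   mu_o = mean(env_o);
    contrast = 20 * log10(mu_i / mu_o);                           % eq. (13), in dB
    cnr      = abs(mu_o - mu_i) / sqrt(var(env_o) + var(env_i));  % eq. (14)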
From left to right, the contrast was −8.6 dB, −50.8 dB, and −61.7 dB. The CNR was 3.38, 8.05 and 7.71 respectively. The fan-beam field of view with scan conversion much more accurately reproduces the synthetic aperture image than the rectilinear field of view technique.
B. Methods
The shifting fan-beam field of view with scan conversion was chosen for implementation on the Siemens ACUSON SC2000™ ultrasound system (Siemens Medical Solutions USA, Inc., Mountain View, CA). I/Q data sets were acquired with an Acuson 4V1c phased array transducer (Siemens Medical Solutions USA, Inc.) with a transmit center frequency of 3.2 MHz and stored for offline post-processing. For a single-element transmit event with each of the 112 transmit elements (170 μm pitch), 64 equally-spaced receive beams were collected over an 80 degree field of view with the vertex located at the active transmit element, using all 112 receive elements. 16:1 parallel receive beamforming was used to collect the receive beams for a given transmit element over the course of four transmit events. A single complete frame of data for each imaging location was stored for offline processing.
On a separate PC, the I/Q data were upsampled, converted to RF data, and scan-converted onto a single rectilinear grid to create a “transmit channel” data set. The focused receive signals from each transmit event were summed, envelope detected and log-compressed to create a synthetic aperture B-mode image. Normalized cross-correlation was performed across the transmit channels to measure coherence curves throughout the field of view. As a sample application, the correlation curves were summed up to 18% of the array length to produce an SLSC image. Coherence was also measured from a data set produced in post-processing using only one-quarter of the total transmit events to simulate performing a faster scan using a sparsely sampled array, integrating up to 11% of the array length when producing the SLSC image. This procedure is described in Fig. 6.
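The conversion from stored baseband I/Q data back to RF prior to scan conversion might look like the following MATLAB sketch, assuming a single beam stored as a complex column vector iq sampled at fs_iq after demodulation at the center frequency f0; the upsampling factor and variable names are assumptions for illustration.

    % Upsample baseband I/Q and remodulate to RF before scan conversion.
    up    = 4;                                        % assumed upsampling factor
    iq_up = interpft(iq, up * numel(iq));             % Fourier-domain upsampling
    t     = (0:numel(iq_up)-1).' / (up * fs_iq);      % fast-time axis [s]
    rf    = real(iq_up .* exp(1j * 2 * pi * f0 * t)); % remodulate to the carrier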
Fig. 6.
Explanation of experimental methods for in vivo imaging. (a) Acquisition sequence using clinical scanner. Data for a single frame (112 images) are collected and stored in memory as beamformed I/Q data before being transferred to disk for offline processing. (b) Post-processing on an external PC. Focused images are loaded and scan-converted onto a common grid to produce either a B-mode image or an SLSC image using the “transmit channel” data set.
In an effort to move towards clinical scanning rates, a sparsely sampled synthesized transmit aperture was also evaluated by selecting only one-quarter of the transmit elements, spaced evenly across the array. In the case of transmit channel coherence, the receive aperture is the component that determines the spatial coherence function and is the same between the two cases despite the sparse transmit aperture. In this study, the collected synthetic aperture subimages were downsampled across the transmit dimension in post-processing rather than performing a second acquisition in order to produce matched images.
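In post-processing, this downsampling amounts to keeping a regular subset of the transmit dimension of the transmit channel data set; a one-line MATLAB sketch, using the names from the earlier sketches, is:

    % Emulate a 4x sparser transmit aperture by keeping every fourth transmit
    % element of the transmit channel data set s_tx[z, x, tx]; the receive
    % aperture, and therefore the predicted coherence function, is unchanged.
    s_sparse = s_tx(:, :, 1:4:end);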
SLSC and B-mode images are presented using a procedure to match the displayed resolution and dynamic range in order to provide a visual comparison between the two methods. Based on the speckle cell size term in the contrast-to-noise ratio proposed by Smith and Wagner [18], resolution and texture size are first matched between the two images by filtering using a rectangular blur kernel. The size of this kernel can be determined by comparing the two-dimensional Fourier transform of the background texture from each image, but in practice the kernel should have a width of 1 pixel and a length equal to the SLSC kernel length. The SLSC image is then normalized to values between 1 and N for display, where N is the number of gray values used. Using a reference speckle area near the region of interest, the mean and standard deviation of the B-mode image are adjusted to match the SLSC image and values outside the displayed gray values are clipped to the range [1, N]. It is important to note that the linear rescaling of the displayed values does not change the contrast-to-noise ratio, while the filtering and thresholding operations can improve it. This algorithm is automatically applied to a given pair of images to be matched and is not optimized for any particular imaging condition.
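The display-matching steps can be sketched in MATLAB as follows, assuming a log-compressed B-mode image bmode and an SLSC image slsc on the same pixel grid, an axial blur length klen in pixels equal to the SLSC kernel length, a logical mask ref over the reference speckle region, and N display gray levels (all names illustrative).

    % Match B-mode display properties to the SLSC image: axial blur, then shift
    % and scale the gray values to match the reference-region statistics, then clip.
    bmode_f = conv2(bmode, ones(klen, 1) / klen, 'same');          % axial rectangular blur
    slsc_d  = 1 + (N - 1) * (slsc - min(slsc(:))) / (max(slsc(:)) - min(slsc(:)));
    mu_s = mean(slsc_d(ref));   sd_s = std(slsc_d(ref));           % SLSC reference statistics
    mu_b = mean(bmode_f(ref));  sd_b = std(bmode_f(ref));          % B-mode reference statistics
    bmode_d = (bmode_f - mu_b) * (sd_s / sd_b) + mu_s;             % match mean and standard deviation
    bmode_d = min(max(bmode_d, 1), N);                             % clip to the displayed range [1, N]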
Sample results of this procedure are shown in Fig. 7 using the same data as the previous figures, demonstrating that the processed B-mode is more visually comparable to the SLSC image than the original B-mode. After blurring with a 0.2 mm axial kernel, the displayed dynamic range was changed from [−40, 0] dB to [−29.6, 0.8] dB to match to the SLSC image brightness distribution. Qualitatively, both the brightness and background texture of the images are well matched. The contrast measured from the displayed grayscale data using a 2 mm-diameter circular region inside and above the anechoic lesion improves from −12.4 dB to −34.3 dB due to the processing method, compared to −31.5 dB for the SLSC image. CNR improves from 4.16 to 4.78 due to the processing method, compared to 5.13 for the SLSC image.
Fig. 7.
Verasonics experimental data. (left) Original B-mode image, 40 dB dynamic range displayed. (center) SLSC image. The displayed dynamic range is a linear mapping from 0 to 1. (right) B-mode processed to match axial resolution and displayed dynamic range. Processing includes low pass filtering via convolution with a rectangular kernel to match resolution due to the axial SLSC kernel, shifting and scaling the log-compressed values to match displayed mean and variance, and thresholding to restrict the dynamic range.
C. In Vivo Results
Images of the human liver were collected using the sequence described above from a subject recruited after IRB approval and with informed consent. Fig. 8 shows matched B-mode and SLSC images for a single acquisition from the human liver. The SLSC image demonstrates the extended depth of field expected in a synthetic aperture image compared to a conventional SLSC image and shows a reduction of clutter in the vasculature compared to the B-mode image. The B-mode image has been blurred using a 0.5 mm axial kernel to match the SLSC image resolution, removing some uncorrelated noise from the image. The image frame was acquired in approximately 111 ms (as reported by the SC2000). This acquisition time would be reduced by a factor of four for the sparse transmit sequences.
Fig. 8.
In vivo human liver vasculature acquired on the Siemens Acuson SC2000 scanner using the shifting fan-beam technique. (left) Log-compressed, filtered synthetic aperture B-mode image, (center) synthetic aperture SLSC image and (right) synthetic aperture SLSC image with one-quarter the number of transmit elements (4× frame rate). Data were focused by the existing delay-and-sum focusing pipeline on the SC2000 scanner, only requiring scan conversion and image formation in post-processing. Arrow indicates target for image metric calculation. The displayed dynamic range is a linear mapping from 0 to 1.
The contrast and CNR were measured from the image-matched data for the vessel indicated by the white arrow in the B-mode image using a 2.4 mm-diameter circle inside the region and in the speckle region immediately to the left. From left to right, the contrast measured for the images was −8.8 dB, −13.0 dB and −21.2 dB. The CNR was 2.03, 3.02 and 3.32. The SLSC images improve the visibility of the target compared to the B-mode, and moving to a sparse aperture adds noise and further increases contrast (at the expense of speckle SNR) [19]. This effect due to noise is limited to good imaging conditions, because the loss of speckle SNR would eventually outweigh the gains made in contrast and reduce the CNR.
Fig. 9 shows another sample B-mode image of liver vasculature masked by clutter and the corresponding SLSC images produced using the method described above. The texture in the vessels in the B-mode image shows a distinct speckle pattern that does not disappear even after the applied axial blur filter. This indicates that the clutter is most likely reverberation and off-axis scatter, overwriting a hypoechoic vessel with echoes produced elsewhere. The coherence signal as applied in SLSC can reduce the appearance of these echoes and restore the visibility of the hypoechoic region.
Fig. 9.
Separate acquisition of in vivo human liver vasculature using same configuration as in Fig. 8. Images cropped to show vessels with strong clutter in B-mode and the reduction of clutter in the SLSC image. Arrow indicates target for image metric calculation. The displayed dynamic range is a linear mapping from 0 to 1.
The contrast and CNR were measured for the circular vessel cross-section indicated by the white arrow and the speckle region immediately shallower using a 6 mm-diameter circular region. From left to right, the contrast measured for the images was −6.4 dB, −9.5 dB and −13.8 dB. The CNR was 1.01, 1.27 and 1.43, respectively. As before, the SLSC images improve visibility of the structure and adding noise by using a sparse aperture happens to further improve contrast.
There are artifacts present due to the method of implementation. The images displayed are masked to preserve only the region of overlap between the individual fan-beam acquisitions. Outside of this region, more computationally expensive calculations would need to be done to normalize the B-mode image and to calculate a subaperture coherence to account for only using a subset of transmit channels. The angular field of view could be enlarged at the expense of acquisition frame rate or the rectilinear grid acquisition scheme could be implemented to produce completely overlapping subimages. Additionally, there is a radial blur that is noticeable at depth in the SLSC images. The extent of this blur, the perceived resolution and the speckle texture depend on the choice of M, the lag value for integration [7]. Integration to longer lags reduces the blurring effect at the expense of signal-to-noise ratio.
IV. Discussion
This work presents an imaging sequence and focusing scheme that would be useful in a real-time coherence imaging mode on a clinical imaging system. Real-time implementation would require a low-level development effort on the scanner, analogous to the existing B-mode infrastructure, to access the data in memory and to create a post-processing pipeline for beamformed frames using parallel CPU and GPU resources.
Some sequence performance improvements have already been presented, including sparsely sampling the transmit aperture and using parallel receive beamforming. The amount of sparsity chosen presents a trade-off between faster acquisition and more thorough sampling of the coherence function. The parallel receive beamforming does not impact image quality and should be increased as much as the system allows. The scan geometry can also be adjusted to either include more receive beams or perform faster scans with reduced sampling or a narrower field of view.
The interpolation and scan conversion process used here took around 12 milliseconds per transmit element (CPU time) to produce a 3000 × 150 pixel image on a single core of a 3.07 GHz Intel Xeon W3550 CPU using a MATLAB and MEX file implementation of bilinear interpolation. This total time would be a bottleneck for real-time processing, but the computations for each transmit event are independent and could be run in parallel on either the CPU or GPU. After the channel data set is constructed, the GPU could be used to calculate the correlation values and to create the coherence curve as in previous SLSC work [8].
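One way to exploit that independence, assuming the Parallel Computing Toolbox and a hypothetical scan_convert helper along the lines of the earlier interpolation sketch, is a parfor loop over transmit events:

    % Scan convert each fan-beam subimage independently and assemble the
    % transmit channel data set. "beams", "xt_pos", "x_grid", "z_grid", and
    % "scan_convert" are illustrative names, not part of any scanner API.
    sub = cell(1, numel(beams));
    parfor tx = 1:numel(beams)
        sub{tx} = scan_convert(beams{tx}, xt_pos(tx), x_grid, z_grid);
    end
    s_tx = cat(3, sub{:});      % [z, x, tx] transmit channel data set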
The post-processing on a commercial clinical scanner could be reconfigured to support coherence-based imaging techniques. For example, an important adaptive beamforming step for improving coherence is aberration correction, removing partially-correlated phase offsets across the array caused by structures in the tissue with varying sound speed. The individual transmit subimages have previously been shown to be useful in iterative estimation of the aberration profile [20]. The phase offset can then be removed from the recorded data in order to restore the measured coherence that would otherwise be suppressed. In environments such as cardiac imaging where tissue motion may be a concern, the subimages can be used for motion estimation and the images may be realigned before further processing [21]. Current clinical B-mode images are augmented with proprietary non-linear filtering and compounding techniques to increase resolution, enhance contrast and suppress speckle texture. With a real-time, synthetic-aperture channel data stream available, similar tools could be developed to enhance coherence imaging.
V. Conclusion
We have presented a new technique to make coherence measurement compatible with a larger number of ultrasound scanner architectures and to allow more widespread use of coherence-based algorithms. Once SLSC imaging and other coherence algorithms are integrated into an existing clinical signal processing pipeline, future work can focus on developing post-processing steps to improve final image quality. Providing experimental images that compete with modern clinical B-mode images is essential to fairly evaluating the performance of these methods and to speeding their adoption by commercial manufacturers.
Acknowledgments
This work is supported by NIH grants R01-EB017711 and T32-EB001040 from the National Institute of Biomedical Imaging and Bioengineering.
The authors wish to thank the ultrasound division at Siemens Medical Solutions USA, Inc. for their in-kind and technical support. The authors also want to thank Gregg Trahey and Jeremy Dahl for their support and feedback.
Contributor Information
Nick Bottenus, Email: nick.bottenus@duke.edu, Department of Biomedical Engineering, Duke University, Durham, NC; previously a Student Intern/Co-Op at Siemens Medical Solutions USA, Inc.
Kutay F. Üstüner, Siemens Medical Solutions USA, Inc., Ultrasound Division
References
- 1. Mallart R, Fink M. The van Cittert-Zernike theorem in pulse echo measurements. The Journal of the Acoustical Society of America. 1991 Nov;90.
- 2. Liu DL, Waag RC. About the application of the van Cittert-Zernike theorem in ultrasonic imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 1995 Jul;42(4):590–601.
- 3. Mallart R, Fink M. Adaptive focusing in scattering media through sound-speed inhomogeneities: The van Cittert Zernike approach and focusing. The Journal of the Acoustical Society of America. 1994;96(6):3721–3732.
- 4. Liu DL, Waag RC. Estimation and correction of ultrasonic wavefront distortion using pulse-echo data received in a two-dimensional aperture. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 1998 Jan;45(2):473–490. doi: 10.1109/58.660157.
- 5. Li PC, Li ML. Adaptive imaging using the generalized coherence factor. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2003;50(2):128–141. doi: 10.1109/tuffc.2003.1182117.
- 6. Camacho J, Parrilla M, Fritsch C. Phase coherence imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2009 May;56(5):958–974. doi: 10.1109/TUFFC.2009.1128.
- 7. Lediju MA, Trahey GE, Byram BC, Dahl JJ. Short-lag spatial coherence of backscattered echoes: imaging characteristics. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2011 Jul;58(7):1377–1388. doi: 10.1109/TUFFC.2011.1957.
- 8. Hyun D, Trahey GE, Dahl JJ. In vivo demonstration of a real-time simultaneous B-mode/spatial coherence GPU-based beamformer. 2013 IEEE International Ultrasonics Symposium (IUS); Jul. 2013. pp. 1280–1283.
- 9. Jensen J, Holten-Lund, Nilsson RH, Hansen M, Larsen U, Domsten R, Tomov B, Stuart M, Nikolov S, Pihl M, Du Y, Rasmussen J, Rasmussen M. SARUS: A synthetic aperture real-time ultrasound system. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2013;60(9):1838–1852. doi: 10.1109/TUFFC.2013.2770.
- 10. Bottenus N, Byram B, Dahl J, Trahey G. Synthetic aperture focusing for short-lag spatial coherence imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2013;60(9):1816–1826. doi: 10.1109/TUFFC.2013.2768.
- 11. Bottenus N, Dahl J, Trahey G. Apodization schemes for short-lag spatial coherence imaging. 2013 IEEE International Ultrasonics Symposium (IUS); Jul. 2013. pp. 1276–1279.
- 12. Pinton GF, Trahey GE, Dahl JJ. Spatial coherence in human tissue: implications for imaging and measurement. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 2014;61(12):1976–1987. doi: 10.1109/TUFFC.2014.006362.
- 13. Jensen JA, Svendsen NB. Calculation of pressure fields from arbitrarily shaped, apodized, and excited ultrasound transducers. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 1992;39(2):262–267. doi: 10.1109/58.139123.
- 14. Jensen JA. Field: A program for simulating ultrasound systems. Medical & Biological Engineering & Computing. 1996;34(Suppl. 1, Part 1):351–353.
- 15. Corl P, Grant P, Kino G. A digital synthetic focus acoustic imaging system for NDE. 1978 Ultrasonics Symposium; 1978. pp. 263–268.
- 16. Frazier CH, O’Brien WD Jr. Synthetic aperture techniques with a virtual source element. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 1998;45(1):196–207. doi: 10.1109/58.646925.
- 17. Karaman M, Li PC, O’Donnell M. Synthetic aperture imaging for small scale systems. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 1995;42(3):429–442.
- 18. Smith SW, Wagner RF, Sandrik JMF, Lopez H. Low contrast detectability and contrast/detail analysis in medical ultrasound. IEEE Transactions on Sonics and Ultrasonics. 1983;3(3):164–173.
- 19. Bottenus N, Trahey GE. Equivalence of time and aperture domain additive noise in ultrasound coherence. The Journal of the Acoustical Society of America. 2015;137(1):132–138. doi: 10.1121/1.4904530.
- 20. Liu D, Ustuner K. Aberration correction using broad transmit beams. 2012 IEEE International Ultrasonics Symposium; Oct. 2012. pp. 2270–2273.
- 21. Nikolov S, Jensen J. K-space model of motion artifacts in synthetic transmit ultrasound imaging. 2003 IEEE Ultrasonics Symposium; 2003. pp. 1824–1828.