Biomedical Optics Express
. 2020 May 28;11(6):3331–3345. doi: 10.1364/BOE.388816

Target depth-regularized reconstruction in diffuse optical tomography using ultrasound segmentation as prior information

Menghao Zhang 1, K M Shihab Uddin 2, Shuying Li 2, Quing Zhu 1,2,*
PMCID: PMC7316021  PMID: 32637258

Abstract

Ultrasound (US)-guided diffuse optical tomography (DOT) is a promising non-invasive functional imaging technique for diagnosing breast cancer and monitoring breast cancer treatment response. However, because larger lesions are highly absorbing, reconstructions of these lesions in reflection geometry may exhibit light shadowing, which leads to inaccurate quantification of their deeper portions. Here we propose a depth-regularized reconstruction algorithm combined with a semi-automated interactive convolutional neural network (CNN) for depth-dependent reconstruction of the absorption distribution. The CNN segments the co-registered US image to extract both spatial and depth priors, and the depth-regularized algorithm incorporates these parameters into the reconstruction. Through simulation and phantom data, the proposed algorithm is shown to significantly improve the depth distribution of the reconstructed absorption maps of large targets. Evaluated on 26 patients with larger breast lesions, the algorithm shows a 2.4- to 3-fold improvement in the top-to-bottom homogeneity of the reconstructed absorption maps of these lesions.

1. Introduction

Diffuse optical tomography (DOT) is a non-invasive functional imaging technique that reconstructs the optical properties of tissue [1]. Utilizing multiple wavelengths in the near-infrared (NIR) spectrum enables quantitative estimation of oxygenated, deoxygenated, and total hemoglobin concentrations, which are directly related to tumor angiogenesis [2,3]. However, due to intense light scattering inside biological tissue, localizing a lesion is challenging, and the DOT reconstruction problem is ill-posed and ill-conditioned. Other imaging modalities, such as ultrasound (US) [4-6], x-ray CT [7,8], and magnetic resonance imaging (MRI) [9,10], have been introduced to provide prior information to guide DOT reconstruction. As a result, the reconstruction is less ill-posed, due to the significantly reduced number of imaging voxels. Several clinical studies of breast cancer diagnosis have been reported [4-7,9,10].

However, to reliably reconstruct the optical properties of MRI-, CT-, or US-identified breast lesions, a regularization scheme should be incorporated to constrain the solution space and obtain a quality image. The general approach is based on prior anatomical information from high-resolution imaging modalities. Spatially encoded regularization using images obtained from MRI has been investigated, requiring a two-step image reconstruction procedure [11,12]: first, a suspicious lesion is segmented from the rest of the tissue; then this prior information is imposed to regularize the matrix inversion. Newer approaches include direct regularization imaging (DRI) in DOT [13] and depth compensation in fluorescence molecular tomography [14]. In DRI, the gray-scale values of MRI images are used directly to regularize the reconstruction, without the need for image segmentation [13]. For transmission-mode fluorescence molecular tomography, a depth compensation algorithm has been introduced to reconstruct inclusions in deep tissue [14].

Our group has developed a US-guided DOT system and imaging algorithms that use co-registered US images to locate a breast lesion in reflection geometry and to segment the lesion into a fine-mesh lesion region of interest (ROI) and a coarse-mesh background tissue region [15]. This dual-mesh scheme improves the ill-posed reconstruction by significantly reducing the total number of imaging voxels. Regularization is empirically implemented as λ = p·σ_1 to stabilize the solution, where p is proportional to the US-measured tumor size and σ_1 is the largest eigenvalue of W^H W, where W is the weight matrix [16]. However, images reconstructed using these algorithms, without regularization in depth, may show light shadowing due to the high absorption of the upper portion of the lesion, which may lead to inaccurate quantification of the lower portion [17]. Althobaiti et al. investigated DRI using US gray-scale images, and their method improved the malignant-to-benign lesion contrast as well as the lesion shape and distribution [18]. However, DRI using US gray-scale images is limited because the contrast of US images is degraded by speckle noise and by the posterior acoustic shadowing of larger tumors.

In recent years, deep learning-based image segmentation has been widely used in many areas, and several segmentation neural networks have been introduced in medical imaging [19-21]. In this study, we have implemented a convolutional neural network (CNN) to segment each US image into lesion and background regions [21]. The 2D target shape, in one spatial dimension and in depth, is automatically extracted, and target symmetry in the other spatial dimension is used to obtain an approximate 3D target shape. From the 3D target shape, coarse and fine meshes are defined, and the regularization matrix in depth for image reconstruction is constructed. This approach allows near real-time image segmentation and speeds up image reconstruction, bringing US-guided DOT one step closer to clinical use.

Recently, sparse regularization has been shown to improve DOT image reconstruction, because sparsity is robust against noise and preserves edges in images. In this study, the fast iterative shrinkage-thresholding algorithm (FISTA) [22] is used with an L1 regularization term to constrain the reconstruction [23,24,26]. We call this new approach "depth-regularized reconstruction". The resulting absorption maps show a more homogeneous absorption distribution in depth. The performance of the depth-regularized reconstruction has been evaluated on simulated targets, phantom targets, and clinical data, in each case compared with the results of a regularized reconstruction without depth-dependent regularization [17].

2. Methods

2.1. Diffuse optical tomography

Photon migration inside a highly scattering medium, such as breast tissue, can be modeled by a diffusion equation of the photon-density wave [25]. Assuming the optical properties within each small voxel are constant and change smoothly across voxels, the diffusion equation can be reformulated as a Helmholtz wave equation. Then, by linearizing the problem with the Born approximation and discretizing the imaging space into N disjoint voxels, the equation becomes

[U_SC]_{M×1} = [W]_{M×N} [δμ_a]_{N×1}, (1)

where U_SC represents the perturbation measurements and M is the number of measurements, i.e., the number of sources multiplied by the number of detectors. W is the weight matrix computed from the analytic solution of the diffusion equation for a semi-infinite medium, using the optical properties of the background breast tissue, and δμ_a is the absorption distribution to be reconstructed. We solve the inverse problem of Eq. (1) with a regularization term R(δμ_a) = ||diag(λ_i) δμ_a||_1:

δμ_a = argmin_{δμ_a} ||U_SC − W δμ_a||² + R(δμ_a). (2)

Ultrasound has been introduced to provide prior information about the lesion's location and size. With co-registered US images, a dual-mesh method segments the medium into a fine-mesh ellipsoid lesion region and a coarse-mesh background tissue region [15]. Mathematically, we separate W and δμ_a into two categories, W = [W_f, W_C] and δμ_a = [δμ_a^f, δμ_a^C], where W_f and δμ_a^f are the weight matrix and the changes of the absorption distribution inside the lesion region, and W_C and δμ_a^C are the weight matrix and the changes of the absorption distribution in the background region. This simple ellipsoid dual-mesh scheme is used for both the σ_1-regularized reconstruction (defined below) and the depth-regularized reconstruction.
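The column split W = [W_f, W_C] can be sketched as follows. This is a minimal illustration, not the authors' implementation; the boolean `fine_mask` marking lesion-ROI voxels is an assumed input, in practice derived from the co-registered US segmentation:

```python
import numpy as np

def split_dual_mesh(W, fine_mask):
    """Split the weight matrix into fine-mesh (lesion) and coarse-mesh
    (background) blocks, W = [W_f, W_C].

    W         : (M, N) weight matrix
    fine_mask : (N,) boolean, True for voxels inside the lesion ROI
    """
    W_f = W[:, fine_mask]    # lesion-region columns
    W_c = W[:, ~fine_mask]   # background columns
    return W_f, W_c

# toy example: 4 measurements, 6 voxels, first 2 voxels in the lesion ROI
W = np.arange(24, dtype=float).reshape(4, 6)
mask = np.array([True, True, False, False, False, False])
W_f, W_c = split_dual_mesh(W, mask)
```

The same mask partitions the unknown vector δμ_a into δμ_a^f and δμ_a^C, so the fine and coarse blocks can be recombined after reconstruction.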

2.2. σ1-regularized reconstruction using FISTA

The inverse problem of solving for the absorption distribution is described in Eq. (2), where ||U_SC − W δμ_a||² measures how well the data fit the model and R(δμ_a) = ||diag(λ) δμ_a||_1 regularizes the reconstruction; the ℓ1 norm in R(δμ_a) yields a sparse solution, improving the accuracy of the reconstruction. Regularization is implemented as λ = p·σ_1, where p is proportional to the US-measured tumor size and σ_1 is the largest eigenvalue of W^H W [16]. We refer to this algorithm as the σ_1-regularized reconstruction in the rest of the manuscript. FISTA with a constant step size was used to solve the inverse problem [22,26]. The proximal gradient method solves the optimization problem with a gradient step followed by a proximal step.

The reconstruction method for the inverse problem is encapsulated in Algorithm 1, given below. We use a zero initialization for the absorption coefficient distribution, δμ_a = [0]_{N×1}. The intermediate variables s and q are also initialized accordingly, as described in step 2 [22]. We then iterate. The gradient of ||U_SC − W δμ_a||², used in step 4, can be computed as

∂||U_SC − W δμ_a||² / ∂(δμ_a) = W^H (W δμ_a − U_SC), (3)

where W^H is the Hermitian adjoint of the weight matrix W. The step size τ is chosen as 2/||W^H W|| in this paper. In step 5 of the algorithm, we compute δμ_a^t using the proximal operator associated with the regularization term R(δμ_a), where the proximal operator is defined as

prox_R(z) = argmin_{δμ_a} { R(δμ_a) + (1/2) ||z − δμ_a||₂² }. (4)

This proximal operator can be calculated efficiently with the soft-threshold function S_γ(z), defined as [22,26]

S_γ(z) = sign(z) ⊙ max(0, |z| − γ), (5)

where ⊙ is the element-wise multiplication operator and γ = τλ. Here, sign(z) is the sign function, which extracts the sign of each element of z, and |z| is the element-wise absolute value of z. We then update the intermediate variables s_t and q_t, following steps 6 and 7, which accelerate the convergence [22].

Algorithm 1. Fast iterative shrinkage-threshold algorithm with constant step size τ.

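The FISTA iteration described above (zero initialization, gradient step, soft-threshold proximal step, and momentum updates of s and q) can be sketched as follows. This is a minimal illustration, not the authors' implementation; for guaranteed convergence it uses a conservative constant step size of 1/||W^H W||, whereas the paper quotes τ = 2/||W^H W|| with its gradient convention:

```python
import numpy as np

def soft_threshold(z, gamma):
    # S_gamma(z) = sign(z) * max(0, |z| - gamma), element-wise (Eq. (5))
    return np.sign(z) * np.maximum(0.0, np.abs(z) - gamma)

def fista(W, usc, lam, n_iter=200):
    """Minimal FISTA sketch for min ||usc - W x||^2 + ||diag(lam) x||_1.

    W   : (M, N) weight matrix (complex in the paper; W.conj().T is W^H)
    usc : (M,) perturbation measurements
    lam : (N,) per-voxel regularization weights lambda_i
    """
    N = W.shape[1]
    WH = W.conj().T
    # conservative step size; the paper uses tau = 2/||W^H W||
    tau = 1.0 / np.linalg.norm(WH @ W, 2)
    x = np.zeros(N)           # step 1: zero initialization
    s, q = x.copy(), 1.0      # step 2: intermediate variables
    for _ in range(n_iter):
        grad = np.real(WH @ (W @ s - usc))                 # step 4: gradient (Eq. (3))
        x_new = soft_threshold(s - tau * grad, tau * lam)  # step 5: prox, gamma = tau*lambda
        q_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * q * q))   # step 6: momentum parameter
        s = x_new + ((q - 1.0) / q_new) * (x_new - x)      # step 7: momentum step
        x, q = x_new, q_new
    return x
```

On a small synthetic problem with a single absorbing voxel, the iteration recovers a sparse δμ_a; the step numbers in the comments are our reading of the FISTA steps in [22].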

2.3. Ultrasound segmentation using CNN and depth-regularized reconstruction

Co-registered US images help to localize the lesion region and reduce the number of voxels with unknown optical properties, easing the ill-posed problem. However, due to the low contrast, speckle noise, and posterior shadowing in most breast US images, accurate automated lesion segmentation remains challenging [20,21]. Typically, an ellipsoid is used to approximate the lesion, and more than twice the lesion size in the spatial dimensions is used to define the dual-mesh region, which accounts for the larger target region seen in low-resolution DOT imaging. The depth is manually measured from co-registered US images. Here we take a further step and extract lesion shape information from US images as a prior for depth-regularized DOT reconstruction.

The automated convolutional neural network (CNN) for breast cancer detection was developed by Yap et al. [20]. Their CNN has 33 convolution layers: one convolution layer with a 7×7 kernel, one pooling layer, 6 convolution layers with 3×3 kernels, one pooling layer, 8 convolution layers with 3×3 kernels, one pooling layer, 12 convolution layers with 3×3 kernels, one pooling layer, and 6 convolution layers with 3×3 kernels. Instead of regular convolutions, the last two stages use dilated (atrous) convolutions with a dilation rate of 2. A pyramid scene parsing network [21] extracts the feature map from the output of the final convolution layer. The CNN model is pre-trained on ImageNet, an image database with 1.2 million images. The input RGB image has three channels (red, green, and blue). We fine-tuned the adapted CNN by freezing the weights of the first two convolutional layers, which capture universal features such as boundaries and edges, and then trained the model using PASCAL, a well-known standard dataset with US images [21], with a small learning rate to adapt the parameters to our task.

Segmentation of US images using the pre-trained CNN was performed semi-automatically by selecting four markers, two each in the lateral and depth dimensions. These four markers serve as boundary points. We introduced an extra channel in the image, containing four 2D Gaussian functions centered at the markers. After the four markers were selected, the CNN automatically generated the 2D shape of the lesion. Target symmetry in the other spatial dimension was used to obtain an approximate 3D target shape. By performing this semi-automatic interactive segmentation, we could efficiently localize the lesion and extract its approximate shape, including its width at each depth layer and its depth, as priors, and could incorporate these parameters into the depth-regularized DOT reconstruction with R(δμ_a) = ||diag(λ_i) δμ_a||_1, as given below.
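The extra marker channel described above (four 2D Gaussians centered at the user-selected boundary points) might be constructed as below. This is a sketch, not the authors' code; the Gaussian width `sigma` is a hypothetical parameter not specified in the text:

```python
import numpy as np

def marker_channel(shape, markers, sigma=10.0):
    """Build the extra input channel: 2D Gaussians centered at the
    user-selected boundary markers.

    shape   : (H, W) of the US image, in pixels
    markers : list of four (row, col) marker positions
    sigma   : Gaussian width in pixels (assumed value)
    """
    H, W = shape
    rows, cols = np.mgrid[0:H, 0:W]
    chan = np.zeros(shape)
    for r0, c0 in markers:
        chan += np.exp(-((rows - r0) ** 2 + (cols - c0) ** 2) / (2 * sigma ** 2))
    return chan

# two markers in depth and two in the lateral dimension of a 64x64 image
chan = marker_channel((64, 64), [(10, 32), (54, 32), (32, 8), (32, 56)])
```

This channel is stacked with the gray-scale US image before it is fed to the segmentation network.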

The depth-dependent regularization matrix diag(λ_i) is computed as follows. At target layer i within the dual-mesh region, which in the spatial dimensions is generally 2 to 3 times the target size measured by co-registered US with the CNN, the regularization weight is computed as λ_i = c / (width(i) × depth_i^i), where the exponent i raises the absolute depth of the i-th target layer to the i-th power: i is 1 for the top target layer in the dual-mesh, 2 for the second layer, and so on for the i-th target layer. Here, width(i) is the width of the i-th target layer, automatically measured by the CNN, and c is a constant, empirically chosen as 4 based on simulations. The units of width and depth are cm. λ_i is set to zero outside the dual-mesh region. Thus, the regularization is tighter for shallow target layers and looser for deeper layers.
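A sketch of the depth-dependent regularization weights, reading the expression above as λ_i = c / (width(i) × depth_i^i), with the layer index i starting at 1 for the top target layer (our reading of the garbled formula, so treat it as an assumption):

```python
import numpy as np

def depth_lambda(widths, depths, c=4.0):
    """Depth-dependent regularization weights,
    lambda_i = c / (width_i * depth_i**i), i = 1 for the top target layer.

    widths : per-layer target widths in cm (from the CNN segmentation)
    depths : per-layer absolute depths in cm
    c      : empirical constant (4 in the paper)
    """
    lam = []
    for i, (w, d) in enumerate(zip(widths, depths), start=1):
        lam.append(c / (w * d ** i))
    return np.array(lam)

# example: three target layers of a lesion between 1.5 and 2.5 cm depth
lam = depth_lambda(widths=[1.5, 2.0, 2.5], depths=[1.5, 2.0, 2.5])
```

Because depth_i^i grows rapidly with the layer index, λ decreases with depth, i.e., the regularization is tighter at the top and looser at the bottom, as stated in the text.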

To quantify the light shadowing effect, we calculate the ratio of the summed reconstructed absorption in the top depth layer to that in the bottom layers, R = Σ δμ_a(top layer) / Σ δμ_a(bottom layers).
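The top-to-bottom ratio R can be computed directly from the reconstructed layer values:

```python
import numpy as np

def shadowing_ratio(mu_a_top, mu_a_bottom):
    """Top-to-bottom ratio R = sum(delta mu_a, top layer) /
    sum(delta mu_a, bottom layers). R near 1 indicates a
    depth-homogeneous reconstruction; a large R indicates light shadowing."""
    return np.sum(mu_a_top) / np.sum(mu_a_bottom)

# toy example: equal total absorption in top and bottom layers gives R = 1
R = shadowing_ratio([0.2, 0.2], [0.1, 0.1, 0.2])
```

When the bottom layers reconstruct to zero, as in some σ_1-regularized cases reported below, R is infinite.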

2.4. Phantom and patient experiments

To evaluate the performance of the proposed algorithm, phantom and patient data were acquired from the US-guided DOT system. The DOT system uses four laser diodes at wavelengths of 740, 780, 808, and 830 nm, and it incorporates 10 or 14 parallel photomultiplier tube (PMT) detectors and a commercial US system [6,27]. Each laser diode is modulated at 140 MHz and delivers light sequentially to nine source locations on a hand-held probe. The sources are placed on one side of the ultrasound transducer and the detectors on the other, with source-to-detector separations ranging from 3 cm to 7.6 cm. The probe diameter is 9 cm, and the volume underneath the probe is discretized into voxels. The patient study protocol was approved by the local Institutional Review Boards and was compliant with the Health Insurance Portability and Accountability Act. All patients signed an informed consent form, and all patient data used in this study were de-identified.

A total of 26 patients with large lesions were studied to evaluate the algorithms. Based on biopsy results, 12 patients had malignant lesions (mean age 61 years; range 47-73 years) and 14 had benign lesions (mean age, 44 years; range 22-71 years). For malignant lesions, the average size was 1.8 ± 0.63 cm and average depth was 2.2 ± 0.39 cm; while for benign lesions, the average size was 2.2 ± 0.92 cm and average depth was 1.6 ± 0.43 cm.

3. Results

3.1. Ultrasound segmentation

Ultrasound segmentation, the first step in the proposed algorithm, provides the lesion location and shape information. With accurate segmentation, we can then achieve more accurate reconstruction results. The pre-trained CNN model is applied to our co-registered US images to perform US segmentation.

To evaluate the performance of the US segmentation, an experienced US imager delineated the boundaries of all breast lesions studied. One example is presented in Fig. 1. Based on the evaluations by US experts, the semi-automated interactive neural network gives results very similar to the manual measurements.

Fig. 1.


(a) US image of a breast tumor with manually marked lesion boundaries. (b) US image segmentation using the semi-automated interactive CNN model.

3.2. Reconstruction results

3.2.1. Monte Carlo simulation

To evaluate the performance of the proposed algorithm, MC simulations were performed using known optical properties for the target and background tissue. In the MC simulations, photons propagated in a 10 cm × 10 cm × 6 cm volume with background optical properties μ_a0 = 0.02 cm⁻¹ and μ′_s0 = 7 cm⁻¹. Three target shapes were simulated, all with the same target optical properties, μ_a = 0.20 cm⁻¹ and μ′_s = 7 cm⁻¹. For the first shape, the target was set as two cylinders of different diameters, stacked symmetrically with their centers aligned, to mimic a lesion whose width varies with depth. The top cylinder was 1.5 cm in diameter and 0.75 cm high, and the bottom one was 3 cm in diameter and 0.75 cm high. The center of this stacked target was located at (0, 0, 2 cm) inside the background volume. Together, the stacked cylinders measured 1.5 cm in height, and the interfacial plane between the cylinders lay at a depth of 2 cm. Imaging two planes 2.5 mm above and below that depth, as is typical practice in our DOT, yields reconstructions at depths of 1.75 and 2.25 cm. Two sets of reconstructed images were generated, using the σ_1-regularized reconstruction and the proposed depth-regularized reconstruction. For the second target shape, both the top and bottom cylinders were 1.5 cm in diameter and 0.75 cm high, while for the third shape, the larger cylinder was placed on top and the smaller one below.
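The first simulated target (a 1.5 cm diameter cylinder stacked on a 3 cm diameter cylinder, interface at 2 cm depth) can be built on a voxel grid as follows. This is a sketch of the geometry only, not the MC simulation itself; the 0.25 cm voxel pitch is an assumed discretization:

```python
import numpy as np

def cylinder_mask(x, y, z, center, diameter, height):
    """Boolean mask of a z-axis-aligned cylinder (all lengths in cm)."""
    cx, cy, cz = center
    radial = (x - cx) ** 2 + (y - cy) ** 2 <= (diameter / 2) ** 2
    axial = np.abs(z - cz) <= height / 2
    return radial & axial

# 10 x 10 x 6 cm volume, 0.25 cm voxels, background mu_a = 0.02 cm^-1
step = 0.25
x, y, z = np.meshgrid(np.arange(-5, 5, step), np.arange(-5, 5, step),
                      np.arange(0, 6, step), indexing="ij")
mu_a = np.full(x.shape, 0.02)

# first target shape: top cylinder spans 1.25-2.0 cm in depth, bottom
# cylinder spans 2.0-2.75 cm, so the interface sits at z = 2 cm
target = (cylinder_mask(x, y, z, (0, 0, 1.625), 1.5, 0.75) |
          cylinder_mask(x, y, z, (0, 0, 2.375), 3.0, 0.75))
mu_a[target] = 0.20
```

The second and third shapes follow by changing the two diameters, and the resulting μ_a volume serves as the ground truth against which the reconstructions are compared.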

Longitudinal sections of the simulated targets and the reconstructed absorption maps from both algorithms are presented in Fig. 2. For the first shape, the σ_1-regularized reconstruction has a maximum reconstructed value of μ_a = 0.199 cm⁻¹ in (d), but only the top portion of the lesion is resolved. The reconstructed FWHM width is 3 cm, twice the true target dimension of 1.5 cm. The depth-regularized reconstruction provides a similar maximum reconstructed value of μ_a = 0.197 cm⁻¹, along with a second reconstructed depth layer with a 2.25 cm FWHM, closer to the real target shape, as shown in (e). Similar results, (f) and (g), are shown for the second shape, where the σ_1-regularized reconstruction has a maximum reconstructed value of μ_a = 0.170 cm⁻¹ but fails to resolve the target in the second depth layer. The depth-regularized reconstruction provides a maximum reconstructed value of μ_a = 0.171 cm⁻¹, along with a second reconstructed target layer, giving R = 1.65. The second reconstructed target layer has a FWHM width similar to the first layer's target width, which is closer to the real target size. However, for the third target shape, shown in (h) and (i), due to the large absorbing top cylinder, both algorithms failed to resolve the target in the second depth layer.

Fig. 2.


(a), (b), and (c) are cross-section illustrations of three differently shaped simulated targets. In each target, two cylinders with the same optical properties are concentrically stacked. (d), (f), and (h) are reconstructed images from the σ_1-regularized reconstruction. Only the top layer of the target is resolved for all three shapes. (e), (g), and (i) are reconstructed images from the proposed depth-regularized algorithm. For targets (a) and (b), both targets are resolved at the deeper layer, with ratios R = 1.32 in (e) and 1.65 in (g). For target shape (c), the top portion absorbed most of the light, and the deeper target portion could not be resolved. Each image slice is 9 cm by 9 cm in the spatial dimensions, and the depth spacing between slices is 0.5 cm. The absorption coefficient is in cm⁻¹, with range 0 to 0.2 cm⁻¹.

The importance of accurate US measurement in resolving the target depth distribution was also evaluated by simulations. A simple ellipsoid fit was applied to all three simulated target shapes to approximate the width of each target at different depths. Two sets of images were reconstructed for each shape: one based on the depth-regularized algorithm with ellipsoidal fitting, and one based on the same algorithm with the actual target widths. Panels (f) and (g) in Fig. 3 show that the simple ellipsoid fit works well when the target width is close to the ellipsoid measurement. However, (d) and (e) show that the ellipsoid fit leads to a larger ratio, R = 2.45, compared with R = 1.32 when the more accurate width measurement is used. For target shape (c), due to the highly absorbing target on top, both methods failed to resolve the bottom portion.

Fig. 3.


(a), (b), and (c) are cross sections of three differently shaped simulated targets. For all three shapes, the ellipsoid fitting of the target is marked in the red. (d), (f), and (h) are reconstructed images from the depth-regularized algorithm with ellipsoidal fitting, with the ratio R = 2.45 in (d) and R = 1.59 in (f). (e), (g), and (i) are reconstructed images from the proposed algorithm with the actual target size, with the ratio R = 1.32 in (e) and R = 1.65 in (g). For shape (c), both methods failed to resolve the bottom portion.

3.2.2. Phantom experiments

Phantom experiments were conducted at a wavelength of 780 nm to evaluate the performance of the depth-regularized algorithm. Solid ball phantoms made of polyester resin were used to mimic a high contrast malignant lesion, with calibrated optical properties of μ_a = 0.23 cm⁻¹ and μ′_s = 7 cm⁻¹. These target phantoms were submerged 2 cm deep in a background intralipid solution with μ_a0 in the range of 0.02 to 0.03 cm⁻¹ and μ′_s0 in the range of 7 to 8 cm⁻¹, typical bulk optical properties for normal breast tissue. An example with a 2 cm diameter target is shown in Fig. 4. Figure 4(a) is a co-registered US image showing the spherical phantom in the center of the image. The absorption distribution maps (b) reconstructed with the σ_1-regularized algorithm have a maximum μ_a = 0.214 cm⁻¹ but resolve the target only at the 1.5 cm depth layer. (c) The depth-regularized algorithm yields a similar maximum, μ_a = 0.204 cm⁻¹, and resolves the target in both depth layers. However, it cannot recover the third target layer at 2.5 cm.

Fig. 4.


(a) Co-registered US image of a solid ball phantom 2 cm below the surface. (b) Reconstructed images from the σ_1-regularized algorithm, with a maximum reconstructed μ_a = 0.214 cm⁻¹. Only the target's top layer was resolved in depth, which leads to an infinite top-to-bottom ratio. (c) Reconstructed images from the proposed depth-regularized algorithm, with a maximum reconstructed μ_a = 0.204 cm⁻¹. The proposed algorithm resolves both target layers, which is closer to the phantom shape, and has a top-to-bottom ratio R = 1.071. Note that each image slice is 9 cm by 9 cm, the depth spacing between layers is 0.5 cm, and the absorption coefficient is in cm⁻¹. The target is marked by a circle; some sound reverberations from the hard resin ball are visible outside the circle.

Three high contrast phantoms, with μ_a = 0.23 cm⁻¹, and three low contrast phantoms, with μ_a = 0.11 cm⁻¹, were reconstructed using both algorithms. The σ_1-regularized algorithm failed to resolve the second target layer in the high contrast cases and resolved two of the three low contrast cases, with R = 1.948 ± 0.276. The low contrast case in which the σ_1-regularized algorithm could not resolve the second target layer had its target located at 3 cm depth. The proposed depth-regularized algorithm provides an average ratio R = 1.022 ± 0.155 for the low contrast phantoms and R = 1.269 ± 0.280 for the high contrast phantoms.

3.2.3. Clinical study

To evaluate the performance of the proposed algorithm, clinical data were acquired from both the lesion side of a breast, as target data, and the contralateral position on the normal breast, as reference data. Background breast tissue optical properties were computed by fitting the reference data. Absorption distribution maps at all four wavelengths were reconstructed for each case and then used to compute hemoglobin concentrations.

Figure 5 shows example reconstructed 780 nm absorption distribution maps of a stage 2 invasive cancer with mixed ductal and lobular features. A co-registered US image is shown in Fig. 5(a), and the resulting image after applying the CNN, with the lesion shape marked, is shown in Fig. 5(b). The images in (c), reconstructed with the σ_1-regularized algorithm, show that the top portion of the lesion in slice 4 (2 cm depth) has a much higher μ_a, 0.307 cm⁻¹, and is much larger than the bottom portion in slice 5 (2.5 cm depth). The depth-regularized algorithm (d) improves the reconstruction of the second layer, with R = 1.577. One example of a benign fibroadenoma, imaged at 780 nm, is presented in Fig. 6 to demonstrate the performance of the proposed algorithm. Image (c), reconstructed with the σ_1-regularized algorithm, has R = 2.556. The depth-regularized algorithm (d) improves the reconstruction of the second layer (slice 3), with R = 1.085.

Fig. 5.


Segmented US image and reconstructed absorption maps of a stage 2 cancer at 780 nm. (a) US image from a co-registered commercial US system. (b) Segmented US image from the CNN. (c) Reconstructed absorption map from the σ_1-regularized algorithm, with a maximum μ_a = 0.307 cm⁻¹ and infinite R. (d) Reconstructed absorption map from the depth-regularized algorithm, with a maximum μ_a = 0.299 cm⁻¹ and R = 1.57. (c) and (d) each consist of 7 sub-images, each showing one 9 cm by 9 cm cross section of the reconstructed absorption map, from 0.5 cm to 3.5 cm below the tissue surface in 0.5 cm steps. The absorption coefficient is in cm⁻¹, with range 0 to 0.2 cm⁻¹.

Fig. 6.


Segmented US images and reconstructed absorption maps of a benign fibroadenoma imaged at 780 nm. (a) US image from a co-registered commercial US system. (b) Segmented US image from the segmentation CNN. (c) Reconstructed absorption map from the σ_1-regularized algorithm, with a maximum μ_a = 0.084 cm⁻¹. (d) Reconstructed absorption map from the depth-regularized algorithm, with a maximum μ_a = 0.079 cm⁻¹. The units of the absorption coefficients are cm⁻¹, and the range is 0 to 0.1 cm⁻¹.

Finally, after applying both the σ_1-regularized algorithm and the depth-regularized algorithm to all the patient data, the total hemoglobin concentrations were computed from the reconstructed absorption distributions at all four wavelengths. The boxplot in Fig. 7 shows the maximum total hemoglobin concentration (HbT) for all 26 cases for both algorithms. For the benign cases, the total hemoglobin from the σ_1-regularized algorithm is 63.44 ± 26.16 μM, and from the depth-regularized algorithm it is 50.32 ± 24.19 μM. For the malignant cases, the values are 122.76 ± 26.29 μM for the σ_1-regularized algorithm and 117.18 ± 28.79 μM for the depth-regularized algorithm. From the reconstructed values, we conclude that the depth-regularized reconstruction algorithm performs similarly to the σ_1-regularized algorithm for both absorption coefficients and total hemoglobin, while providing an improved depth distribution.

Fig. 7.


Boxplots of reconstructed maximum total hemoglobin values among 14 benign and 12 malignant cases, using either the σ1-regularized algorithm (σ1-regularized) or the proposed depth-regularized algorithm (depth). HbT is reported in μM.

3.2.4. Quantitative analysis of the light shadowing effect

The top-to-bottom ratio quantifying the light shadowing effect was analyzed for the 26 patient cases. Due to light shadowing, some clinical cases have no reconstructed values in their bottom portions when the σ_1-regularized algorithm is used, and therefore have an infinite ratio. After excluding those cases, we reconstructed the absorption distribution maps of 12 benign and 11 malignant cases and computed the ratios R. From the boxplots in Fig. 8, we can see clear differences between the σ_1-regularized and depth-regularized algorithms. The proposed depth-regularized algorithm has a smaller ratio, meaning more homogeneously reconstructed absorption distribution maps in depth. Statistically, the σ_1-regularized algorithm has mean ratios of 3.53 in benign cases and 3.82 in malignant cases. These values are expected, because benign lesions are less absorbing than malignant ones, which reduces light shadowing. The depth-regularized algorithm reconstructs absorption distributions with a reduced light shadowing effect, with mean ratios of 1.19 for benign cases and 1.59 for malignant cases, corresponding to 3.0- and 2.4-fold improvements in top-to-bottom reconstructed homogeneity for benign and malignant large lesions, respectively.

Fig. 8.


Boxplot of R for 12 benign and 11 malignant cases.

3.2.5. Evaluation on small targets

Although the proposed algorithm focuses on large lesions, we also verified that the depth-regularized algorithm does not change reconstruction results for small targets. We evaluated the proposed algorithm's performance on 5 benign and 5 malignant cases with small targets, where the lesion diameters were smaller than 1 cm. One example, a small 6 mm ductal carcinoma in situ, is presented in Fig. 9, with the lesion located 1.3 cm beneath the skin. The σ_1-regularized algorithm and the depth-regularized algorithm provide similar results in both reconstructed values and reconstructed image shape.

Fig. 9.


Images of a small malignant lesion imaged at 780 nm, reconstructed with both the σ_1-regularized algorithm and the depth-regularized algorithm. (a) Co-registered US image. (b) Segmented US image. (c) Reconstructed absorption distribution from the σ_1-regularized algorithm, with a maximum μ_a = 0.227 cm⁻¹. (d) Reconstructed absorption distribution from the depth-regularized algorithm, with a maximum μ_a = 0.226 cm⁻¹. The absorption coefficient is in cm⁻¹, and the range is 0 to 0.2 cm⁻¹.

For the five benign lesions, the average size was 1.0 ± 0.33 cm and the average depth was 1.33 ± 0.38 cm; for the five malignant lesions, the average size was 1.0 ± 0.44 cm and the average depth was 1.6 ± 0.44 cm. For the benign cases, the total hemoglobin from the σ_1-regularized algorithm is 47.16 ± 12.27 μM, and from the depth-regularized algorithm it is 46.27 ± 14.97 μM. For the malignant cases, the total hemoglobin from the σ_1-regularized algorithm is 137.73 ± 21.93 μM, and from the depth-regularized algorithm it is 138.53 ± 21.37 μM.

3.2.6. Robustness test

The depth-regularized algorithm has shown promising results in reducing the light shadowing effect and improving target depth distribution. The key is the construction of the depth-dependent regularization matrix, which relies on the US segmentation. Due to the low contrast and speckle noise of US images and the user dependency of our semi-automated segmentation algorithm, different users may realize different segmentation results. Here we evaluated the influence of segmentation errors in reconstruction of a simulated target, where the target shape was known. The simulated target was a 2 cm diameter sphere with an absorption coefficient of μa=0.2cm1, located 2 cm below the surface of the tissue. Three target layers of 1.5 cm, 2 cm, and 2.5 cm were reconstructed. One set of images, reconstructed using the ground truth target shape, has a maximum reconstructed absorption coefficient of 0.184 cm−1. The segmentation error of each layer was defined as the percentage error of the width of the segmented region in the corresponding reconstructed layer. The error is positive when the segmented region is larger than the true size of the lesion and negative otherwise. We manually added 5%, 10%, 15%, and 20% positive and negative errors to each individual reconstructed layer and to all reconstructed layers simultaneously. Using the proposed depth-regularized algorithm, 24 sets of images were reconstructed. The reconstructed maximum absorption coefficients, versus segmentation errors, are presented in Fig. 10. The largest error differs by only 11% from the original reconstructed absorption coefficients, which means the proposed algorithm is fairly robust against small errors in segmentation. It is interesting to note that when the error of top target portion is positive, the width is larger, the λ1 is smaller, and the regularization is looser. Therefore the reconstructed μa is larger than the use of the true width at the same depth layer. 
When the error is negative, the trend is reversed. For the second target layer, a positive error likewise loosens the regularization and therefore increases the reconstructed μa in that layer. Since the maximum is always located in the top target layer, this reduces the maximum μa, while a negative error in the second target layer increases the reconstructed maximum target μa. The overall error follows the trend of the first-target-layer errors. The highest error is 11%, when the top layer is measured as 20% smaller than the true width, and the error is about 5% when the top layer is measured as 20% larger than the true width at the same depth. The second-layer error is about 5% whether the second layer is measured as smaller or larger than the true width at the same depth. These findings suggest that when the segmentation uncertainty is high, it is best to use a larger width for the top target layer, which could be done by providing guidelines to operators.
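The error sweep described above can be sketched as follows. This is a minimal illustration, not the paper's code: we assume, consistent with the stated trend, that the per-layer regularization weight λ varies inversely with the segmented width at that layer, so a positive width error loosens the regularization. The layer widths are the geometric chords of a 2 cm sphere centered 2 cm deep, which is our reading of the simulation set-up.

```python
import numpy as np

# Assumed true segmented widths (cm) at the three depth layers: chords of a
# 2 cm diameter sphere centered at 2 cm depth (an assumption for illustration).
TRUE_WIDTHS_CM = {1.5: 2 * np.sqrt(1 - 0.5**2), 2.0: 2.0, 2.5: 2 * np.sqrt(1 - 0.5**2)}

def layer_lambda(width_cm, lambda0=1.0):
    """Per-layer regularization weight, assumed to scale as 1/width."""
    return lambda0 / width_cm

def perturbed_lambdas(errors):
    """Apply fractional segmentation errors {depth: error} to the true widths
    and return the resulting per-layer regularization weights."""
    return {d: layer_lambda(w * (1.0 + errors.get(d, 0.0)))
            for d, w in TRUE_WIDTHS_CM.items()}

# A +20% width error on the top (1.5 cm) layer loosens its regularization,
# leaving the other layers unchanged:
base = perturbed_lambdas({})
loose = perturbed_lambdas({1.5: 0.20})
assert loose[1.5] < base[1.5]   # wider segmentation -> smaller lambda
assert loose[2.0] == base[2.0]  # other layers unaffected
```

Under this assumed λ ∝ 1/width form, the sketch reproduces the qualitative behavior reported in Fig. 10: positive errors loosen and negative errors tighten the regularization of the perturbed layer only.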

Fig. 10.

Reconstructed absorption coefficients versus segmentation errors. Positive errors mean segmented regions are larger than the true size of the lesion, and negative errors mean the opposite. First and second target layer widths were changed to evaluate the errors. Segmentation errors have minimal influence on reconstructed absorption coefficients.

4. Discussion and summary

As presented here, deep learning-based US segmentation extracts the lesion location and the lesion shape from co-registered US images as a prior, which is incorporated into a regularization matrix in the depth-regularized reconstruction algorithm. The choice of regularization matrix is both important and difficult. With the help of an accurate and efficient US segmentation neural network, a lesion’s width and its reconstruction depth can both be used for constructing a depth-dependent regularization matrix. This adaptive regularization matrix gives us a better estimate of the true target absorption distribution in depth.
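The construction described above can be sketched in a few lines. This is an assumed, simplified form of the depth-dependent regularization, not the paper's exact implementation: lesion voxels identified by the US segmentation receive a weight that shrinks with the segmented width at their depth layer, background voxels receive a large fixed weight, and the regularized normal equations are then solved directly. The function names and the 1/width scaling are our assumptions for illustration.

```python
import numpy as np

def build_reg_matrix(in_lesion, layer_width, lambda_bg=10.0, lambda0=1.0):
    """Diagonal depth-dependent regularization matrix (assumed form).

    in_lesion   : (n_vox,) bool, True where a voxel lies inside the US-segmented lesion
    layer_width : (n_vox,) segmented lesion width (cm) at each voxel's depth layer
    """
    diag = np.full(in_lesion.shape, lambda_bg, dtype=float)
    # Lesion voxels: weight shrinks as the segmented layer width grows,
    # so deeper/wider layers are penalized less (counteracting light shadowing).
    diag[in_lesion] = lambda0 / layer_width[in_lesion]
    return np.diag(diag)

def reconstruct(W, y, L):
    """Regularized least squares: argmin_x ||W x - y||^2 + x^T L x."""
    return np.linalg.solve(W.T @ W + L, W.T @ y)

# Tiny 4-voxel example: two lesion voxels at layers of width 2 cm and 1 cm.
in_lesion = np.array([True, True, False, False])
widths = np.array([2.0, 1.0, 1.0, 1.0])
L = build_reg_matrix(in_lesion, widths)
# Lesion voxels are lightly regularized (0.5 and 1.0), background heavily (10.0).
```

A usage sketch: with a sensitivity matrix `W` and measurements `y` from the forward model, `reconstruct(W, y, L)` returns the voxel absorption perturbations with the depth-adaptive penalty applied.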

The depth-regularized reconstruction does not change the maximum reconstructed target absorption coefficient, and therefore does not change the maximum total hemoglobin concentration compared with the σ1-regularized algorithm. Thus, it does not change the quantification of the malignant-to-benign lesion contrast ratio. However, in this work it improved the lesion depth distribution more in the benign group, by a ratio of 3, than in the malignant group, by a ratio of 2.4. The improved depth distribution could therefore be used as another parameter to distinguish benign from malignant lesions, i.e., benign lesions have a more homogeneous depth distribution than malignant lesions. However, this initial result will need to be further tested in a larger patient population.

Depth-regularized reconstruction is limited in recovering the depth distribution when the top of the target mass is much larger than its bottom, as seen for target shape 3 in the simulations (Figs. 2 and 3). Additionally, the algorithm cannot resolve the deep portion of the target beyond the second layer, which in our reconstruction mesh set-up lies more than one centimeter below the top target layer, as seen in the Fig. 4 phantom data. In general, the target depth sensitivity is about 3 cm when the center of the target is located at this depth, which is adequate to image almost all breast lesions. Note that we can always position the patient differently, for example by placing a wedge underneath the patient in a supine position, to make sure the lesion is within this depth range.

The semi-automated CNN segmentation algorithm provides the lesion shape used to extract the spatial and depth priors, reducing the need to manually measure these parameters. It also minimizes user dependence in the reconstruction of lesion absorption maps, an important step toward the clinical translation of US-guided DOT technology. However, the CNN algorithm still requires users to input the boundary points, so the user must have basic knowledge of US images; this task can be readily accomplished by ultrasound sonographers, who are trained to assist radiologists in breast exams. Ideally, the 3D shape of the target, obtained from several co-registered 2D US images, could refine the regularization matrix in depth and further improve the accuracy of the reconstructed absorption distribution. This approach would require scanning the lesion in another dimension using a linear array; in future work, we will evaluate the additional improvement and its clinical utility.

In conclusion, we have demonstrated a depth-regularized reconstruction algorithm combined with a semi-automated interactive neural network to reconstruct the absorption distribution and total hemoglobin concentration. Co-registered US images are segmented to provide both spatial and depth priors, which are incorporated into the reconstruction. The depth-regularized algorithm is validated using simulations, phantom data, and clinical data. The light shadowing effect is reduced by a factor of 2 to 3 in large lesions. The algorithm improved the depth distribution more in the benign group than in the malignant group, a difference that could provide another parameter for breast cancer classification. We believe our work can be generalized to US-guided diffuse optical tomography and fluorescence tomography in reflection geometry.

Acknowledgements

We acknowledge the funding support for this work from NCI (R01CA228047) and NIBIB (R01EB002136). The authors thank Professor Jim Ballard for reviewing and editing the manuscript.

Funding

National Institutes of Health (NIBIB R01EB002136, NCI R01CA228047).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

  • 1.Boas D. A., Brooks D. H., Miller E. L., DiMarzio C. A., Kilmer M., Gaudette R. J., Zhang Q., “Imaging the body with diffuse optical tomography,” IEEE Signal Process. Mag. 18(6), 57–75 (2001). 10.1109/79.962278 [DOI] [Google Scholar]
  • 2.Tromberg B. J., Pogue B. W., Paulsen K. D., Yodh A. G., Boas D. A., Cerussi A. E., “Assessing the future of diffuse optical imaging technologies for breast cancer management,” Med. Phys. 35(6), 2443–2451 (2008). 10.1118/1.2919078 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Grosenick D., Rinneberg H., Cubeddu R., Taroni P., “Review of optical breast imaging and spectroscopy,” J. Biomed. Opt. 21(9), 091311 (2016). 10.1117/1.JBO.21.9.091311 [DOI] [PubMed] [Google Scholar]
  • 4.Zhu Q., Cronin E. B., Currier A. A., Vine H. S., Huang M., Chen N., Xu C., “Benign versus malignant breast masses: optical differentiation with us-guided optical imaging reconstruction,” Radiology 237(1), 57–66 (2005). 10.1148/radiol.2371041236 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Zhu Q., Hegde P. U., Ricci Jr A., Kane M., Cronin E. B., Ardeshirpour Y., Xu C., Aguirre A., Kurtzman S. H., Deckers P. J., “Early-stage invasive breast cancers: potential role of optical tomography with us localization in assisting diagnosis,” Radiology 256(2), 367–378 (2010). 10.1148/radiol.10091237 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Zhu Q., Ricci Jr A., Hegde P., Kane M., Cronin E., Merkulov A., Xu Y., Tavakoli B., Tannenbaum S., “Assessment of functional differences in malignant and benign breast lesions and improvement of diagnostic accuracy by using us-guided diffuse optical tomography in conjunction with conventional us,” Radiology 280(2), 387–397 (2016). 10.1148/radiol.2016151097 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Fang Q., Selb J., Carp S. A., Boverman G., Miller E. L., Brooks D. H., Moore R. H., Kopans D. B., Boas D. A., “Combined optical and X-ray tomosynthesis breast imaging,” Radiology 258(1), 89–97 (2011). 10.1148/radiol.10082176 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Zimmermann B. B., Deng B., Singh B., Martino M., Selb J., Fang Q., Sajjadi A. Y., Cormier J., Moore R. H., Kopans D. B., Boas D. A., Saksena M. A., Carp S. A., “Multimodal breast cancer imaging using coregistered dynamic diffuse optical tomography and digital breast tomosynthesis,” J. Biomed. Opt. 22(4), 046008 (2017). 10.1117/1.JBO.22.4.046008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Brooksby B., Pogue B. W., Jiang S., Dehghani H., Srinivasan S., Kogel C., Tosteson T. D., Weaver J., Poplack S. P., Paulsen K. D., “Imaging breast adipose and fibroglandular tissue molecular signatures by using hybrid MRI-guided near-infrared spectral tomography,” Proc. Natl. Acad. Sci. U. S. A. 103(23), 8828–8833 (2006). 10.1073/pnas.0509636103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Feng J., Xu J., Jiang S., Yin H., Zhao Y., Gui J., Wang K., Lv X., Ren F., Pogue B. W., Paulsen K. D., “Addition of T2-guided optical tomography improves noncontrast breast magnetic resonance imaging diagnosis,” Breast Cancer Res. 19(1), 117 (2017). 10.1186/s13058-017-0902-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Brooksby B., Jiang S., Dehghani H., Pogue B. W., Paulsen K. D., Weaver J. B., Kogel C., Poplack S. P., “Combining near-infrared tomography and magnetic resonance imaging to study in vivo breast tissue: implementation of a Laplacian-type regularization to incorporate magnetic resonance structure,” J. Biomed. Opt. 10(5), 051504 (2005). 10.1117/1.2098627 [DOI] [PubMed] [Google Scholar]
  • 12.Yalavarthy P. K., Pogue B. W., Dehghani H., Carpenter C. M., Jiang S., Paulsen K. D., “Structural information within regularization matrices improves near infrared diffuse optical tomography,” Opt. Express 15(13), 8043–8058 (2007). 10.1364/OE.15.008043 [DOI] [PubMed] [Google Scholar]
  • 13.Zhang L., Jiang S., Zhao Y., Feng J., Pogue B. W., Paulsen K. D., “Direct Regularization From Co-Registered Contrast MRI Improves Image Quality of MRI-Guided Near-Infrared Spectral Tomography of Breast Lesions,” IEEE Trans. Med. Imaging 37(5), 1247–1252 (2018). 10.1109/TMI.2018.2794548 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Liu F., Li M., Zhang B., Luo J., Bai J., “Weighted depth compensation algorithm for fluorescence molecular tomography reconstruction,” Appl. Opt. 51(36), 8883–8892 (2012). 10.1364/AO.51.008883 [DOI] [PubMed] [Google Scholar]
  • 15.Zhu Q., Chen N., Kurtzman S. H., “Imaging tumor angiogenesis by use of combined near-infrared diffusive light and ultrasound,” Opt. Lett. 28(5), 337–339 (2003). 10.1364/OL.28.000337 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Uddin K. S., Mostafa A., Anastasio M., Zhu Q., “Two-step imaging reconstruction using truncated pseudoinverse as a preliminary estimate in ultrasound guided diffuse optical tomography,” Biomed. Opt. Express 8(12), 5437–5449 (2017). 10.1364/BOE.8.005437 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Xu C., Zhu Q., “Light shadowing effect of large breast lesions imaged by optical tomography in reflection geometry,” J. Biomed. Opt. 15(3), 036003 (2010). 10.1117/1.3431086 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Althobaiti M., Vavadi H., Zhu Q., “Diffuse optical tomography reconstruction method using ultrasound images as prior for regularization matrix,” J. Biomed. Opt. 22(2), 026002 (2017). 10.1117/1.JBO.22.2.026002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Huang Q., Luo Y., Zhang Q., “Breast ultrasound image segmentation: a survey,” Int. J. CARS 12(3), 493–507 (2017). 10.1007/s11548-016-1513-1 [DOI] [PubMed] [Google Scholar]
  • 20.Yap M. H., Pons G., Martí J., Ganau S., Sentís M., Zwiggelaar R., Davison A. K., Martí R., “Automated breast ultrasound lesions detection using convolutional neural networks,” IEEE J. Biomed. Health Inform. 22(4), 1218–1226 (2018). 10.1109/JBHI.2017.2731873 [DOI] [PubMed] [Google Scholar]
  • 21.Maninis K.-K., Caelles S., Pont-Tuset J., Van Gool L., “Deep extreme cut: From extreme points to object segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2018), pp. 616–625. [Google Scholar]
  • 22.Beck A., Teboulle M., “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009). 10.1137/080716542 [DOI] [Google Scholar]
  • 23.Lu W., Lighter D., Styles I. B., “L1-norm based nonlinear reconstruction improves quantitative accuracy of spectral diffuse optical tomography,” Biomed. Opt. Express 9(4), 1423–1444 (2018). 10.1364/BOE.9.001423 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Mohajerani P., Eftekhar A. A., Huang J., Adibi A., “Optimal sparse solution for fluorescent diffuse optical tomography: theory and phantom experimental results,” Appl. Opt. 46(10), 1679–1685 (2007). 10.1364/AO.46.001679 [DOI] [PubMed] [Google Scholar]
  • 25.Zacharopoulos A., Schweiger M., Kolehmainen V., Arridge S., “3d shape based reconstruction of experimental data in diffuse optical tomography,” Opt. Express 17(21), 18940–18956 (2009). 10.1364/OE.17.018940 [DOI] [PubMed] [Google Scholar]
  • 26.Xu S., Uddin K. S., Zhu Q., “Improving DOT reconstruction with a Born iterative method and US-guided sparse regularization,” Biomed. Opt. Express 10(5), 2528–2541 (2019). 10.1364/BOE.10.002528 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Vavadi H., Mostafa A., Zhou F., Uddin K. M. S., Althobaiti M., Xu C., Bansal R., Ademuyiwa F., Poplack S., Zhu Q., “Compact ultrasound-guided diffuse optical tomography system for breast cancer imaging,” J. Biomed. Opt. 24(2), 1–9 (2018). 10.1117/1.JBO.24.2.021203 [DOI] [PMC free article] [PubMed] [Google Scholar]

Articles from Biomedical Optics Express are provided here courtesy of Optica Publishing Group
