Abstract
In a photoacoustic (PA) camera, an acoustic lens-based system forms a focused image of an object plane. A real-time C-scan PA image can be formed by simply time gating the transducer response. While most of the focusing action is done by the lens, residual refocusing is needed to image multiple depths simultaneously with high resolution. However, a refocusing algorithm for a PA camera has not yet been studied in the literature. In this work, we reformulate this residual refocusing problem for a PA camera as a two-sided wave propagation from a planar sensor array. One part of the problem deals with forward wave propagation while the other deals with time reversal. We have chosen a Fast Fourier Transform (FFT) based wave propagation model for the refocusing to maintain the real-time nature of the system. We have conducted Point Spread Function (PSF) measurement experiments at multiple depths and refocused the signals using the proposed method. The Full Width at Half Maximum (FWHM), peak value and Signal to Noise Ratio (SNR) of the refocused PSF are analyzed to quantify the effect of refocusing. We believe that a two-dimensional transducer array combined with the proposed refocusing can lead to real-time volumetric imaging with a PA camera.
1. Introduction
A lens-based photoacoustic (PA) imaging system, known as a PA camera, uses an acoustic lens for imaging. In a PA camera, a pulsed laser source excites ultrasound (US) waves from an object. The US waves are then focused by an acoustic lens onto an imaging plane, where they are detected by a US transducer array [1]. The working of the system is comparable to its optical counterpart, as it forms an image of the object plane in focus [2]. While similar behavior can be expected due to the wave nature of both acoustics and optics, the differences can be attributed to the widely differing spatial scales of wavelength and object size. The PA camera also differs from the conventional point-by-point scanning PA imaging system, which uses raster scanning with a focused single-element transducer; the main difference is that the acoustic lens is separate from the transducer surface. The separate use of a lens and an unfocused US transducer array has several advantages: (i) a real-time C-scan image of all absorbers located on a single depth plane is possible without any need for computer-algorithm-based image reconstruction, (ii) the A-line data acquired with lens-based focusing can have a higher signal strength compared to other conventional methods that acquire data on a planar surface prior to reconstruction, and (iii) with a 2D transducer array and using time information, a focused volumetric image of the object can be formed from a single view. The practical value of this technology lies in significant cost reduction and design simplicity compared to conventional PA imaging, especially for C-scan and volumetric imaging scenarios.
In a PA camera, the signal generated at different depths inside the object gets focused by the lens and detected by a transducer array through time gating (see Fig. 1). The time gate for a specific depth can be calculated from the acoustic travel time using the lens equation 1/O + 1/I = 1/F (where O and I are the distances from the lens to the object plane and from the lens to the imaging plane, respectively, and F is the focal length of the lens). The time-gated A-line signals corresponding to the depth O, with a fixed transducer array at I, are fully focused, and no further refocusing is needed for this plane. However, the A-line signals acquired through the lens also carry information about other depths, although they are defocused in proportion to their distance from the focused object plane. Fully focused volumetric imaging is possible from the data acquired with such a PA camera using a residual refocusing algorithm. In this paper, we present the concept, theory and experimental demonstration of this residual refocusing idea. The proposed method can be viewed as a fast post-processing step on the PA camera data. PA imaging has potential in human prostate, thyroid, breast and skin cancer diagnosis and disease management. We anticipate that this technology will be useful to commercial device developers as well as to medical research communities that need inexpensive imaging devices for PA imaging exploration.
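As a concrete illustration of the time-gating step, the short sketch below (our own illustration, not part of the acquisition software) computes the conjugate image distance from the lens equation and the approximate sample index of the time gate for a given object depth. The focal length and sampling rate are taken from the experimental setup described in the Methods section; the water sound speed is assumed, and the lens thickness and the higher sound speed inside the lens are neglected.

```python
# Minimal sketch: time gate and conjugate image distance for a given depth.
c  = 1480.0      # sound speed in water [m/s] (assumed)
F  = 39.4e-3     # lens focal length [m], from the experimental setup
fs = 60e6        # sampling frequency [Hz], from the experimental setup

def conjugate_distance(O, F=F):
    """Image distance I for an object plane at O (lens equation 1/O + 1/I = 1/F)."""
    return 1.0 / (1.0 / F - 1.0 / O)

def time_gate_sample(O, array_distance=2 * F, c=c, fs=fs):
    """Approximate sample index at which the signal from depth O arrives
    at a transducer array fixed at `array_distance` from the lens."""
    travel_time = (O + array_distance) / c   # axial travel time [s]
    return int(round(travel_time * fs))      # sample index

O = 2 * F + 5e-3                  # an object plane 5 mm beyond 2F
I = conjugate_distance(O)         # its image forms slightly before 2F
print(I, time_gate_sample(O))     # residual refocus distance is |2*F - I|
```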
Figure 1.

Lens-based PA system. (a) Wave propagation from a point source at 2F + ΔO through the acoustic lens, detected by the detector array at a distance 2F from the lens. (b) Back propagation to refocus the detected waveform from (a) at 2F − ΔI. (c) Wave propagation from a point source at 2F − ΔO through the acoustic lens, detected by the detector array at a distance 2F from the lens. (d) Forward propagation to refocus the measured wavefront from (c) at 2F + ΔI.
A lens-based PA system was first proposed by He et al. [1]. A system with unit magnification and 4F geometry, designed to image a single object plane, can be found in [2], [3]. A hand-held probe using this lens-based design was introduced by Valluru et al., who coined the name PA camera [4]. This system can be used for several applications, including in vivo imaging. An ex vivo tissue imaging study using the system demonstrated the ability to classify malignant, benign and normal tissue [5]. In all these works, only the object plane at 2F is imaged, as the other depths are defocused. There have been attempts to image multiple depths using the PA camera, with limited success [6]. A PSF characterization of the PA camera at various depths and its analysis can be found in [7]. Several PA image reconstruction methods are popular for situations where no lens is used and the sensor array lies on a plane, such as delay-and-sum [8] and FFT based back-projection [9]. The refocusing idea behind our method is similar to the one used in point-by-point scanning of the PA signal by a single-element focused transducer [10]. Our method is a combination of several ideas that are uniquely appropriate for our PA camera geometry.
The concept (Fig. 1) is based on the observation that a distinct wavefront originates from each object plane in the PA camera and, after passing through the lens, propagates in time and space through a uniform medium towards its focal plane. If the sensor array is not located in the appropriate focal plane, the wavefront is captured either before or after it has come to full focus, depending on whether the sensor plane is in front of or behind the focal plane. The captured wavefront carries full amplitude and phase information that can be used to either forward or backward propagate it and thus bring it to full focus. This approach, uniquely suited to a PA camera setup, has not been used before to the best of our knowledge.
2. Methods
Consider a PA camera with a lens of focal length F placed at the center of the system, as in Fig. 1. Further, consider an object of finite volume centered at a distance 2F from the lens and a transducer array at a distance 2F on the other side of the lens. The object and image distances are chosen to be 2F to maintain unit magnification. The object, consisting of optical absorbers, is uniformly illuminated using a pulsed laser. Due to the thermoelastic expansion caused by the laser heating, there is an instantaneous pressure rise in the absorber, proportional to the optical absorption coefficient. This pressure discontinuity then propagates as a US wave through the medium (water) to reach the surface of the acoustic lens. The acoustic lens is made of a material having a higher sound speed than the surrounding medium. Additionally, the density of the lens material is selected such that its acoustic impedance is close to that of water, to maximize transmission of US energy through the lens. A bi-concave lens is used for focusing the acoustic waves. The radius of curvature of the lens, along with its refractive index, fixes its focal length.
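Under stress and thermal confinement, this initial pressure rise is given by the standard photoacoustic relation

p0(r) = Γ μa(r) Φ(r),

where Γ is the Grüneisen parameter, μa is the optical absorption coefficient and Φ is the local optical fluence; this is the sense in which the pressure rise is proportional to the optical absorption coefficient.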
Experimental setup
The PA camera system consists of a pulsed laser source, a bi-concave acoustic lens and a transducer array. A tunable pulsed laser (EKSPLA Inc. NT-352A) with a pulse duration of 5 ns was used in our experiment. We chose a graphite ball as the PA source. The laser wavelength was set to 790 nm, where the optical absorption of graphite is high. The laser fluence was set to approximately 18 mJ/cm². The acoustic lens was manufactured from the plastic material DSM18420 using 3D printing technology. For hand-held imaging applications, we designed the lens with a radius of curvature of 33.5 mm and a diameter of 32 mm, resulting in a focal length of 39.4 mm. These parameters were chosen primarily to limit the length of the probe (4F) to 16 cm and its diameter to 4 cm. The lens material has a sound speed of 2590 m/s and a density of 884.17 kg/m³, which enables impedance matching with water. With a transmission coefficient of 0.95 and a reflection coefficient of 0.043, more than 90% of the incident acoustic energy is transmitted through both sides of the lens. A 16-element linear transducer array with an element size of 1 mm × 0.5 mm, a center frequency of 5 MHz and a 55% bandwidth at −6 dB was used in the experiment. The diameter of the graphite ball was selected to be 0.2 mm to ensure that the dominant frequency generated by the PA source falls within the transducer bandwidth. Signals acquired by the transducer are amplified by a custom-made preamplifier stage with 50 dB gain and digitized using a National Instruments PXI-5105 at a sampling frequency of 60 MHz. The transducer array was placed at a distance of 2F from the lens. The PA source, lens center and transducer array are collinear. To maintain constant optical energy, the PA source (graphite ball) is kept fixed with respect to the laser. To change the object distance from the lens, the whole lens and transducer assembly is moved with respect to the point source using a linear motor stage (Zaber Technologies Inc.).
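The quoted lens parameters can be cross-checked with the thin-lens (lensmaker) equation and the normal-incidence transmission formulas. The short sketch below is illustrative (water properties are assumed) and is not the design code used for the lens; because the acoustic refractive index n = c_water/c_lens is less than one, the bi-concave shape acts as a converging lens.

```python
# Sanity check of the lens parameters (illustrative; water properties assumed).
c_water, rho_water = 1480.0, 1000.0     # assumed water sound speed and density
c_lens,  rho_lens  = 2590.0, 884.17     # DSM18420 values quoted above
R = 33.5e-3                             # radius of curvature [m]

n = c_water / c_lens                    # acoustic refractive index (~0.57)
# Symmetric bi-concave lens: 1/f = (n - 1) * (1/R1 - 1/R2), with R1 = -R, R2 = +R.
f = 1.0 / ((n - 1.0) * (-1.0 / R - 1.0 / R))
print(f"focal length ~ {f * 1e3:.1f} mm")          # ~39 mm vs the quoted 39.4 mm

# Normal-incidence intensity transmission and reflection at a water-lens interface.
Z1, Z2 = rho_water * c_water, rho_lens * c_lens    # acoustic impedances [Rayl]
T = 4 * Z1 * Z2 / (Z1 + Z2) ** 2                   # ~0.95
Rc = ((Z2 - Z1) / (Z2 + Z1)) ** 2                  # ~0.05
print(f"transmission ~ {T:.2f}, reflection ~ {Rc:.2f}")
```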
We conducted ex vivo tissue imaging to demonstrate the application of the refocusing algorithm in volumetric imaging. A 3D-printed probe with the lens at its center and the transducer plugged in was used for imaging the tissue. The tissue specimen measured 24 mm × 40 mm × 3 mm. A C-scan was conducted using a stepper motor, and the measured A-line signals were stacked to form 3D data [4]. The tissue was kept off focus, at 2F + 5 mm from the lens center. This gave a defocused 3D image, which was then refocused for comparison.
3. Theory
Consider three point PA sources inside the object, at 2F, 2F + ΔO and 2F − ΔO. The PA signal generated at 2F is focused at a distance 2F on the other side of the lens. A B-scan is formed by acquiring A-line data with the linear array transducer and displaying it after envelope detection (Fig. 2c). Similarly, the entire object plane at 2F can be imaged as a C-scan with a transducer array. Fig. 1(c) shows the case where the signal from a point source at 2F − ΔO gets focused beyond the 2F distance, in a plane located at 2F + ΔI. The distances ΔO and ΔI are related by the lens equation. Since the sensor array is at 2F, the US wavefront is detected before it reaches its optimal focal point. This detected wavefront needs to be forward propagated to its correct focal point, as indicated in Fig. 1(d). The opposite happens for a point source at 2F + ΔO, as in Fig. 1(a). The optimal focal point for this point source, according to the lens equation, is at 2F − ΔI, but the wave is detected at 2F, after it has passed through focus. This wavefront needs to be back propagated to its correct focal point, as indicated in Fig. 1(b).
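For completeness, substituting the object distance 2F + ΔO and the image distance 2F − ΔI into the lens equation makes the relation between the two shifts explicit:

1/(2F + ΔO) + 1/(2F − ΔI) = 1/F  ⟹  ΔI = F·ΔO / (F + ΔO) ≈ ΔO for ΔO ≪ F,

so, near the unit-magnification plane, the residual propagation distance on the image side is approximately equal to the depth offset on the object side.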
Figure 2.
(a) PSF at FBF − 7 mm. (b) Refocused PSF at FBF − 7 mm. (c) PSF at FBF. (d) PSF at FBF + 7 mm. (e) Refocused PSF at FBF + 7 mm.
The two cases with point sources at 2F − ΔO and 2F + ΔO can be considered as two separate refocusing problems, and two different algorithms are used to deal with this two-sided refocusing problem. In the former case, a fast and convenient FFT based forward propagation model is used, while in the latter case, time reversal followed by FFT based propagation is used.
4. Proposed Algorithm
Based on the preceding discussion, the whole refocusing problem boils down to reconstructing the initial pressure from the pressure time series detected by a planar sensor. An FFT based time reversal method proposed by Köstli et al. [11] is used to preserve the possibility of real-time imaging. The proposed algorithm is described next. The entire transducer time series is divided into two halves with respect to the 2F time gate. Each half of the time series is separately refocused, and the results are combined to get the final image. Let pI(x, y, t) be one half of the measured transducer time series, with I ∈ {2F+, 2F−} indicating the right and left halves with respect to the 2F time gate, respectively. A symmetric pressure profile p(x, y, t) = [pI(x, y, −t) pI(x, y, t)] is formed by stacking the mirror image of the time series pI(x, y, t). This satisfies the symmetry condition of the initial pressure, p0(x, y, z) = p0(x, y, −z), since it is unknown from the transducer measurement whether the wave approached the sensing plane from the left or the right. The time reversal is performed in the Fourier domain. To use the FFT algorithm, the time component of the measured pressure time series is treated as the z axis. To obtain the angular frequency ω, a weighting based on the dispersion relation is used [11]:
P(kx, ky, ω) = (c² kz / ω) 𝔽{p(x, y, t)},    (1)

where {kx, ky, kz} are the wave number components, the scaling factor c²kz/ω (in which c is the sound speed) accounts for the change of variables from the temporal frequency ω to the spatial wave number kz, and 𝔽 is the 3D Fourier transform, which can be computed using the FFT algorithm. The inhomogeneous (evanescent) part of the wave field should be set to zero. Using the dispersion relation ω = c√(kx² + ky² + kz²), we can map the angular frequency ω in P(kx, ky, ω) to the spatial wave number component kz to obtain P(kx, ky, kz). This mapping can be performed using an interpolation [9]. The final image can then be reconstructed by performing an inverse Fourier transform 𝔽⁻¹:
p0(x, y, z) = 𝔽⁻¹{P(kx, ky, kz)}.    (2)
This step can also be performed using the FFT algorithm. The complexity of the algorithm is estimated to be 𝒪(n³ log n) [12]. The refocusing could also be addressed with back-projection (𝒪(n⁵)) [8] or iterative time reversal (𝒪(n⁴)) [13]; however, we prefer this method because of its low complexity. Once p0(x, y, z) is computed for both the 2F+ and 2F− time series, the two halves are concatenated to form the final pressure profile.
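As an illustration of the complete procedure, a minimal NumPy sketch of the 2D (line-detector) variant of Eqs. (1)–(2) and of the two-sided splitting is given below. The function and variable names, the interpolation scheme and the amplitude normalization are illustrative choices rather than the exact implementation used for the results that follow; constant scaling factors may differ depending on conventions.

```python
import numpy as np

def kspace_line_recon(p_xt, dx, dt, c=1480.0):
    """FFT-based reconstruction for a line detector (2D sketch of the
    k-space method of Köstli et al. [11]).  p_xt has shape (Nx, Nt);
    the returned array maps the time axis to the depth axis z = c*t."""
    Nx, Nt = p_xt.shape
    # 2D FFT: x -> kx, t -> omega (shifted so both axes are monotonic)
    P  = np.fft.fftshift(np.fft.fft2(p_xt))
    kx = np.fft.fftshift(np.fft.fftfreq(Nx, d=dx)) * 2 * np.pi   # [rad/m]
    w  = np.fft.fftshift(np.fft.fftfreq(Nt, d=dt)) * 2 * np.pi   # [rad/s]
    kz = w / c                                                   # output depth grid

    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    # Dispersion relation omega = c*sqrt(kx^2 + kz^2); the sign follows kz.
    W_map = c * np.sign(KZ) * np.sqrt(KX**2 + KZ**2)

    # Interpolate P(kx, omega) onto P(kx, kz) column by column (the Eq. (1)
    # mapping); frequencies outside the measured band are set to zero.
    P_kxkz = np.zeros_like(P)
    for i in range(Nx):
        P_kxkz[i] = (np.interp(W_map[i], w, P[i].real, left=0.0, right=0.0)
                     + 1j * np.interp(W_map[i], w, P[i].imag, left=0.0, right=0.0))

    # Weighting c^2*kz/omega from the change of variables omega -> kz.
    with np.errstate(divide="ignore", invalid="ignore"):
        weight = np.where(W_map != 0.0, c**2 * KZ / W_map, 0.0)
    P_kxkz *= weight

    # Inverse FFT gives the refocused pressure (Eq. (2)).
    return np.real(np.fft.ifft2(np.fft.ifftshift(P_kxkz)))

def two_sided_refocus(data, i2F, dx, dt, c=1480.0):
    """Split the measured time series (Nx, Nt) at the 2F gate index i2F,
    refocus each half with the symmetric extension, and concatenate."""
    halves = []
    for pI in (data[:, :i2F], data[:, i2F:]):
        # Symmetric extension p(x, t) = [pI(x, -t), pI(x, t)].
        p_sym = np.concatenate([pI[:, ::-1], pI], axis=1)
        p0 = kspace_line_recon(p_sym, dx, dt, c)
        halves.append(p0[:, p0.shape[1] // 2:])   # keep one symmetric half
    # Flip the pre-gate half so the depth axis runs consistently away from
    # the lens (this orientation convention is an assumption of the sketch).
    return np.concatenate([halves[0][:, ::-1], halves[1]], axis=1)
```

In the 3D case the same steps apply, with a 3D FFT over (x, y, t) and a lateral wave number pair (kx, ky) in place of kx alone.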
5. Results and Discussion
We conducted PSF experiments with a point target placed at multiple depths from the lens, ranging from 2F − 10 mm to 2F + 10 mm in steps of 1 mm. Ideally, for the 4F geometry, the optimal focus is at a distance 2F from the lens center. Experimentally, however, based on the minimum Full Width at Half Maximum (FWHM) of the PSF, we found the best focus at 2F + 2.5 mm from the lens. Let this best focal point be denoted by FBF. Consider the three cases mentioned earlier, i.e., objects at FBF, FBF − ΔO and FBF + ΔO. Fig. 2(c) shows the PSF at FBF. Fig. 2(a) shows the PSF at FBF − 7 mm; note that the curvature of the PSF represents a wavefront that was still converging towards its focal point when it was detected. Fig. 2(d) shows the PSF at FBF + 7 mm; it has the opposite curvature, indicating a wavefront that was diverging after passing its focal point. The lateral and axial FWHM at FBF were found to be 1.6 mm and 0.3 mm respectively. The axial FWHM was constant for all depths, since it is determined by the transducer bandwidth. The lateral FWHM is determined by the amount of defocus at each depth, which in turn depends on the lens diameter, focal length, depth of field and the transducer center frequency [7].
It is important to note a difference here. The theoretical arguments were presented assuming that the signal detection takes place in a two-dimensional (2D) x–y plane located at 2F; therefore, a 3D Fourier transform, as prescribed in Eq. 1, was needed. In the experiments, the 2D wavefield in the x–y plane was captured only along the x-axis with the linear array. We therefore used a 2D version of the algorithm described earlier; the justification can be found in [14]. 3D sensing will produce better results, but as we have found, significant refocusing is possible even with 1D sensing. Fig. 2(b) shows the PSF after refocusing the detected wavefront originating from the point source at FBF − 7 mm. Similarly, Fig. 2(e) shows the PSF obtained by refocusing the signal from the point source at FBF + 7 mm. The degree of refocusing is visually apparent, as the PSFs shrink in their lateral extent and begin to look more like the PSF at FBF.
To quantitatively evaluate the effect of the refocusing algorithm at different depths, we focused on three parameters of the PSF, namely (i) lateral FWHM, (ii) peak value and (iii) Signal to Noise Ratio (SNR). Significant changes are expected in these parameters before and after the application of refocusing. Furthermore, these parameters define the final image quality metrics, and as such, any demonstrated improvements support the efficacy of the procedure.

The lateral FWHM before refocusing, shown with the bold blue line in Fig. 3, is asymmetric. At 2F − 10 mm the FWHM is 3.97 mm; the value falls monotonically to 1.61 mm at the best focal point FBF and then increases to 1.91 mm at 2F + 10 mm. The asymmetric nature of the FWHM has two reasons: (i) for all points from 2F − 10 mm to FBF the transducer can detect the entire converging wavefront from the lens, whereas for points from FBF to 2F + 10 mm the measured signal is generated from the virtual image formed by the lens, which then propagates further to the transducer surface; and (ii) attenuation of the acoustic signal with distance results in a faster fall-off from the center to the sides of the PSF. After applying the refocusing algorithm, the lateral FWHM, denoted by the dashed blue line in Fig. 3, shows varying degrees of improvement for the different depth planes. However, the fact that the values are now very close to the 1.61 mm observed at FBF indicates that most of the defocusing effect has been reversed. The mean lateral FWHM after refocusing is 1.6045 mm with a variance of 0.0057 mm. The small residual slope of the lateral FWHM plot can be explained by attenuation with increasing distance of the source from the lens.

The normalized peak value plot (bold red) is shown in Fig. 3. The maximum of the plot is at the best focal point FBF; as we move away from the focal point on either side, the peak value drops. After refocusing, there is a significant gain in the peak value for off-focal points. The peak value at FBF remains unchanged, while there is a gain of more than 1.5 times the original value at the 2F − 10 mm and 2F + 10 mm depths.

Consider the PSF at the best focal point in Fig. 2(c). A rectangular area that covers the majority of the PSF is used to compute the signal energy, and the area outside it is used for the noise computation. The patch under consideration is 0.8 mm × 0.6 mm, corresponding approximately to the −6 dB extent of the PSF at FBF. Both signal and noise energy are normalized by their respective areas before computing the SNR. The red plot in Fig. 4 is the SNR before refocusing and the blue plot is the SNR after refocusing. At 2F − 10 mm the SNR is lower due to the wider PSF; it gradually increases towards the best focal point FBF and falls off similarly on the other side. The SNR depends on the fraction of the wavefront originating from a given depth that is intercepted by the lens; hence, it is a function of the distance of the source from the lens. After refocusing, the SNR plot is almost constant, with only a slight variation with depth.
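For reference, the three metrics can be computed as in the following illustrative helpers (not the exact analysis code used for Figs. 3 and 4; the example patch indices are hypothetical):

```python
import numpy as np

def lateral_fwhm(profile, dx):
    """Coarse FWHM of a 1D lateral PSF profile sampled at spacing dx,
    measured between the outermost samples above half maximum."""
    profile = np.asarray(profile, dtype=float)
    above = np.where(profile >= profile.max() / 2.0)[0]
    return (above[-1] - above[0]) * dx if above.size > 1 else 0.0

def peak_value(image):
    """Peak of the envelope-detected PSF image."""
    return float(np.max(image))

def patch_snr_db(image, patch):
    """SNR in dB: mean energy inside a rectangular signal patch (a tuple of
    slices) over mean energy of the region outside it; the per-pixel means
    implement the normalization by area."""
    image = np.asarray(image, dtype=float)
    mask = np.zeros(image.shape, dtype=bool)
    mask[patch] = True
    signal = np.mean(image[mask] ** 2)
    noise  = np.mean(image[~mask] ** 2)
    return 10.0 * np.log10(signal / noise)

# Example (hypothetical indices for a 0.8 mm x 0.6 mm patch around the peak):
# snr = patch_snr_db(psf_image, (slice(40, 52), slice(60, 76)))
```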
Figure 3.
Lateral FWHM and Peak value before and after refocusing.
Figure 4.
SNR computed before and after refocusing.
To demonstrate the practical value of the proposed refocusing algorithm, we conducted ex vivo human prostate tissue PA imaging with the PA camera, keeping the specimen 5 mm away from the focal plane. Fig. 5(a) shows the approximately 5 mm thick sliced specimen that was imaged. Pathological processing followed the PA imaging. Fig. 5(b) is the histopathology section from the center of the specimen; a malignant cancer region has been marked in blue by the pathologist. Fig. 5(c) shows the unfocused but envelope-detected 3D image data obtained by our PA camera. Cross-sectional images along the x–y, y–z and x–z planes passing through the malignant region of interest are shown to draw attention to the changes introduced by the refocusing algorithm. The malignant region is visible due to the dominant PA signal from blood vessels; the unfocused nature of the data makes it spread into a larger surrounding volume. Strong PA signals also originate from the black ink that is routinely used to coat the tissue boundaries (Fig. 5(a)), which can be seen in Fig. 5(c). Fig. 5(d) displays the 3D envelope-detected data after applying the refocusing algorithm. The same cancer region has been highlighted in all cross-sectional images for comparison. The spatial tightening of the PA signal in the 3D volumetric image is clearly discernible.
Figure 5.
Ex vivo prostate tissue imaging: (a) Photograph of the tissue. (b) Histology image (the malignant area is marked with a red circle in both images). (c) Cross-sectional PA images before refocusing. (d) Cross-sectional PA images after refocusing.
6. Conclusion
We have proposed a refocusing method for a lens-based PA system. Through PSF measurement experiments, we studied the amount of defocus introduced by the lens. The refocusing method was tested on the experimental data, and the amount of refocusing was analyzed quantitatively. After applying the proposed refocusing algorithm, both the lateral FWHM and the SNR are rendered nearly constant throughout the 2 cm imaging depth. There is also a significant improvement in the peak value, with less than 10% variation over the imaging depth of interest. With this new development, we hope that multi-depth imaging and the 3D imaging capability of the PA camera will become a practical reality. Additionally, because of the FFT based wave propagation model, inexpensive real-time 3D PA imaging may become feasible.
Acknowledgments
This work was supported by NIBIB, NIH through Grant No. 1R15EB019726-01. We gratefully acknowledge the Lang Memorial Foundation for providing financial support for the laser. We also acknowledge the US Fulbright program for supporting our collaborative efforts.
References
- 1. He Y, Tang Z, Chen Z, Wan W, Li J. A novel photoacoustic tomography based on a time-resolved technique and an acoustic lens imaging system. Physics in Medicine and Biology. 2006;51(10):2671. doi: 10.1088/0031-9155/51/10/019.
- 2. Rao NA, Lai D, Bhatt S, Arnold SC, Chinni B, Dogra VS. Acoustic lens characterization for ultrasound and photoacoustic C-scan imaging modalities. In: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008); 2008. pp. 2177–2180.
- 3. Chen Z, Tang Z, Wan W. Photoacoustic tomography imaging based on a 4f acoustic lens imaging system. Optics Express. 2007;15(8):4966–4976. doi: 10.1364/oe.15.004966.
- 4. Valluru K, Chinni B, Bhatt S, Dogra V, Rao N, Akata D. Probe design for photoacoustic imaging of prostate. In: 2010 IEEE International Conference on Imaging Systems and Techniques; 2010. pp. 121–124.
- 5. Dogra VS, Chinni BK, Valluru KS, Moalem J, Giampoli EJ, Evans K, Rao NA. Preliminary results of ex vivo multispectral photoacoustic imaging in the management of thyroid cancer. American Journal of Roentgenology. 202(6). doi: 10.2214/AJR.13.11433.
- 6. Chen X, Tang Z, He Y, Liu H, Wu Y. A simultaneous multiple-section photoacoustic imaging technique based on acoustic lens. Journal of Applied Physics. 2010;108(7):073116.
- 7. Francis KJ, Chinni B, Channappayya SS, Pachamuthu R, Dogra VS, Rao N. Characterization of lens based photoacoustic imaging system. Photoacoustics. 2017;18:37–47. doi: 10.1016/j.pacs.2017.09.003.
- 8. Xu M, Wang LV. Universal back-projection algorithm for photoacoustic computed tomography. Physical Review E. 2005;71(1):016706. doi: 10.1103/PhysRevE.71.016706.
- 9. Cox B, Beard P. Fast calculation of pulsed photoacoustic fields in fluids using k-space methods. The Journal of the Acoustical Society of America. 2005;117(6):3616–3627. doi: 10.1121/1.1920227.
- 10. Li ML, Zhang HF, Maslov K, Stoica G, Wang LV. Improved in vivo photoacoustic microscopy based on a virtual-detector concept. Optics Letters. 2006;31(4):474–476. doi: 10.1364/ol.31.000474.
- 11. Köstli KP, Frenz M, Bebie H, Weber HP. Temporal backward projection of optoacoustic pressure transients using Fourier transform methods. Physics in Medicine and Biology. 2001;46(7):1863. doi: 10.1088/0031-9155/46/7/309.
- 12. Lutzweiler C, Razansky D. Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification. Sensors. 2013;13(6):7345–7384. doi: 10.3390/s130607345.
- 13. Treeby BE, Cox BT. k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields. Journal of Biomedical Optics. 2010;15(2):021314. doi: 10.1117/1.3360308.
- 14. Köstli KP, Beard PC. Two-dimensional photoacoustic imaging by use of Fourier-transform image reconstruction and a detector with an anisotropic response. Applied Optics. 2003;42(10):1899–1908. doi: 10.1364/ao.42.001899.




