Journal of Medical Imaging. 2018 Apr 2;5(2):021224. doi: 10.1117/1.JMI.5.2.021224

Toward dynamic lumbar puncture guidance using needle-based single-element ultrasound imaging

Haichong K Zhang a,*, Younsu Kim a, Melissa Lin b, Mateo Paredes c, Karun Kannan c, Abhay Moghekar d, Nicholas J Durr b,c, Emad M Boctor a,b,e
PMCID: PMC5879152  PMID: 29651451

Abstract

Lumbar punctures (LPs) are interventional procedures used to collect cerebrospinal fluid. Because the target window is small, physicians have limited success conducting the procedure. The procedure is especially difficult for obese patients due to the increased distance between bone and the skin surface. We propose a simple and direct needle insertion platform, enabling image formation by sweeping a needle with a single ultrasound element at the tip. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle, such as bone, but also visually locate the structures by combining transducer location tracking and synthetic aperture focusing. The concept of the system was validated through a simulation that revealed robust image reconstruction under expected errors in tip localization. The initial prototype was built into a 14 G needle and mounted on a holster equipped with a rotation shaft allowing one degree-of-freedom rotational sweeping and a rotation tracking encoder. We experimentally evaluated the system using a metal-wire phantom mimicking highly reflective bone structures and a human spinal bone phantom. Images of the phantoms were reconstructed, and the synthetic aperture reconstruction improved the image quality. These results demonstrate the potential of the system to serve as a real-time guidance tool for improving LPs.

Keywords: ultrasound imaging, synthetic aperture, lumbar puncture guidance, ultrasound needle

1. Introduction

Lumbar punctures (LPs) are performed to collect cerebrospinal fluid (CSF), an important bodily fluid needed to diagnose a variety of central nervous system disorders or conditions, including life-threatening ones like encephalitis or meningitis, where a diagnosis delay of even a few hours can be catastrophic.1 The current standard of care utilizes anatomical landmarks to locate the L3–L5 intervertebral space; a needle must be advanced through several tissue layers, between the vertebrae, and into the subarachnoid space without hitting other obstacles (e.g., blood vessels or bone) along the way. Most LPs are performed “blindly” without the assistance of imaging or guidance mechanisms. More than 400,000 LPs are performed annually, but nearly 23.3% end in failure due to the myriad of challenges.2,3 These failures lead to misdiagnoses, treatment delays, and subsequent unnecessary and dangerous procedures.4–6 Obese patients with excess adipose tissue between skin and target structures suffer a significantly increased probability of LP failure,4 and the rate of overall complications as a result of LPs almost doubles in obese patients to nearly 50%.7–9

The conventional image guidance used when a physician is unable to collect CSF with a blind entry is fluoroscopy. While fluoroscopy is accurate, it cannot be used on all patients, such as pregnant women, and is expensive due to the required equipment. It delays diagnosis because it is usually scheduled for the following day, and it exposes patients and physicians to high levels of radiation for the duration of the procedure. Several imaging approaches are emerging to further improve LPs. A majority of these solutions tackle the issue by improving upon existing imaging technology through the introduction of tracking methods.10–15 For example, one solution introduced electromagnetic tracking of both the needle and the ultrasound transducer, which resulted in a significant increase in the success rate of facet joint injections.10 However, this technology introduces additional systems to the clinic, which can discourage adoption. Another proposed solution is a guidance system that incorporates ultrasound tracking of the needle coupled with patient-specific geometries and augmented reality to enable the accurate placement of anesthetic nerve blocks.11 This approach again forces physicians to familiarize themselves with both the tracking system and augmented reality. Some of the solutions described above have been brought to market. The eZono eZGuide magnetizes the needle to take advantage of the Hall effect to track the needle in the body.16 However, this solution does not provide adequate resolution and sensitive tracking at greater depths. Moreover, physicians have to acquire this system and adapt it into their clinical workflow. Another example is the ClearGuide One, which utilizes a combination of CT and US to calculate the optimal needle trajectory. A common trait among these existing solutions is the introduction of additional equipment into the procedure, creating substantial disruption to the existing workflow, which in turn leads to lower adoption rates.

Alternatively, integrating an ultrasound element directly with the LP needle combines the guiding and procedural tools into one, and several studies have investigated an ultrasound needle for A-line sensing.17,18 The advantage of this approach is the alignment between the sensor’s forward direction and the actual needle insertion direction. Although no additional interpolation or registration process is needed, the limitation is that A-mode lines alone are not sufficient to guide a needle to a target located several centimeters away.

The proposed system extends this ultrasound needle approach by introducing an imaging component to allow for visualization of tissues in LP procedures, aiming to eliminate failed or traumatic attempts and iatrogenic complications in obese patients. We propose a simple and direct needle insertion platform that dynamically tracks needle position, enabling image formation from sweeping a single ultrasound element at the needle tip. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle, such as bone, but also visually locate the structures by combining transducer location tracking and a synthetic aperture focusing algorithm. For dynamic image guidance, the angle of the needle relative to the holster is measured by an absolute encoder, and the received US data are sent to a tracking-based synthetic aperture imaging algorithm to produce high-resolution images. In a typical use case, the physician would insert a 16/18G spinal introducer needle containing the needle-shaped ultrasound transducer and sweep the needle in an arc above the skin to acquire images of the tissue in front of the needle. These initial images can be used to determine the insertion point. Once the needle is inserted, the needle is swept in smaller arcs within the patient’s subcutaneous fat layer. The resulting images can then be used to guide and alter the trajectory of the needle to avoid peripheral bone structures and access the correct target. Importantly, we propose that the ultrasound scanning be performed to guide the introducer needle to the correct location and that the actual LP be performed by removing the ultrasound insert and inserting a high-gauge, low-trauma needle through the same introducer needle. Compared with previously proposed guidance techniques, this system uniquely provides structural information about the spinal bone located beneath the skin without requiring any registration process, and it integrates into the LP workflow without complication.

This paper demonstrates the proof-of-concept of this technology in terms of image formation and reconstruction through simulation and phantom experiments. In simulation, point targets were imaged to evaluate resolution, and the tolerance to various errors was evaluated. For the experiment, a metal-wire phantom mimicking bone structures and a human spine bone phantom were imaged with freehand scanning. Finally, we discuss the uniqueness, benefits, and limitations of our approach compared with other image-guided procedures.

2. Approach

2.1. Single-Element Ultrasound Sensing and Imaging

The concept design and initial prototype of the proposed single-element ultrasound imaging system are shown in Fig. 1. The technology consists of two subsystems, as shown in Fig. 1(a): the needle-shaped ultrasound probe and the needle position tracking system. The needle position tracking system serves as the base component supporting the needle-shaped ultrasound probe and consists of a rigid-body holster and a tracking encoder. A needle-shaped ultrasound probe that can fit inside a spinal introducer needle is included in the system. In our prototype, depicted in Fig. 1(b), the ultrasound probe is fabricated using a stainless-steel tube with a magnet wire threaded through it, with one end of the magnet wire connected to the lead zirconate titanate (PZT) element and the other end soldered to a coaxial cable with a Bayonet Neill–Concelman (BNC) connector. The needle itself provides A-lines to the user in real time, so that the contrast from bone can be used as a warning to keep the needle from colliding with it. The holster is a platform rigidly fixed on the patient surface with holes through which the introducer needle is inserted. The rotation shaft placed in the center of the holster allows one degree-of-freedom (DoF) rotational motion of the needle. The pivot point was set 3 to 5 cm above the surface to ensure that the needle had space to sweep before insertion to find the insertion point. An angular encoder embedded in the holster was used to track the rotational orientation. The needle depth relative to the holster is indicated through a marker on the needle and is used in the reconstruction and display, as described in the following section. By sweeping the needle along the one-DoF rotation shaft, an ultrasound image can be formed during the LP procedure, which can aid in determining the direction in which the needle should proceed.

Fig. 1.

The concept of a single-element ultrasound sensing and imaging system for dynamic LP guidance. (a) The illustration of the needle-based single-element ultrasound imager. (b) The pictures of the prototype. The needle-shaped ultrasound probe is mounted on the holster and the rotation shaft allows one DoF rotational motion. The rotational position is tracked through the integrated rotational encoder.

For the practical workflow, the conventional “blind” insertion can be categorized into finding the insertion point, the actual insertion, forward motion of the needle to penetrate the subarachnoid space, and CSF collection (Fig. 2). In our proposed workflow, image/sensing guidance takes place before the needle goes beyond the subsurface fat tissue. First, the physician palpates the patient’s back, as in the current standard of care. Once the initial entry point has been determined, the tracking holster is placed on the back of the patient at the determined location and fixed rigidly to the body. Ultrasound images can be collected for finding the insertion position. To generate an image, the physician sweeps the needle in an arc. This sweeping motion allows our system to collect data from both the needle probe and the tracking holster, where an angle range of 10 to 60 deg is expected depending on the desired window size. As the algorithm processes these data, the image is updated on the screen for the physician to use in real time. When the insertion angle is secured, the physician threads the needle through the holster and into the patient. Once in the adipose tissue layer, the physician can still sweep the needle over a smaller angle range to create images for monitoring the needle trajectory. Tissue damage is not a concern because this sweeping motion is routinely performed in the current standard of care to reach the CSF. Adipose and connective tissues surrounding the spine are able to shift as the needle sweeps. The needle will not be swept close to the target or in the subarachnoid space after a dural puncture. However, the distance from bone to the needle can be updated in real time beyond the adipose tissue. The expected initial distance of an LP is 6 to 12 cm, and the image acquisition can be repeated as the needle proceeds by sweeping to generate an image as needed. Before puncturing the dura, the ultrasound element insert can be pulled out and a biopsy needle (22G or higher) can be threaded in for CSF collection, creating a smaller hole and minimizing the possibility of iatrogenic complications. This way, it is possible to safely perform an LP while avoiding structures along the way to the subarachnoid space.

Fig. 2.

The workflow diagram of the “blind” needle insertion, compared with the proposed procedure with dynamic ultrasound guidance. The proposed workflow reduces the reattempts that can be necessary in the conventional workflow without guidance. Note that the needle sweep occurs mainly at the surface of the skin and up to the subsurface adipose tissue.

2.2. Backpropagation-Based Synthetic Tracked Aperture Focusing

Synthetic aperture focusing is a technique for merging small subaperture information into a coherent wide aperture to reconstruct a resolution-enhanced B-mode image.19 Synthetic aperture is applicable in a monostatic case, where a single element both transmits and receives, in which a B-mode image can be reconstructed based on a virtual ultrasound array.20 Conventional monostatic synthetic aperture assumes predetermined and uniform data acquisition, and we extend this concept in synthetic tracked aperture focusing (STAF) to enable real-time visualization for each data acquisition by utilizing ultrasound element tracking information.21–23 The needle rotates with one tracked DoF, producing a scan of a curvilinear region. Conventional delay-and-sum based synthetic aperture focusing uses multiple A-line data points under a predefined geometry to beamform a line. A B-mode image is then formed by repeating this process. However, this method does not allow for real-time updating of the image as the position of the ultrasound element changes when the needle is pivoted. To incorporate the newly acquired A-line data collected as the needle pivots, a backpropagation approach is applied. This approach projects the collected radio frequency (RF) A-line data back onto a predefined 2-D field used for real-time visualization. The relationship between pre- and postreconstruction can be formulated as follows:

$y_{\mathrm{bf}}(m,n) = \sum_{e} y_{\mathrm{bf}_e}(m,n,e)$,  (1)

$y_{\mathrm{bf}_e}(m,n,e) = y_{\mathrm{pre}}(d,e)$,  (2)

where $y_{\mathrm{bf}}$ is the final reconstructed RF data, $y_{\mathrm{bf}_e}$ is the reconstructed RF data from a single position, and $y_{\mathrm{pre}}$ is the received raw RF data. $m$ and $n$ denote the pixel indices in the lateral and axial directions, respectively. The distance used when collecting the prebeamformed data is $d$, and the element position is $e$. The received signal distance is related to the actual image geometry by

$d^2 = (m - m_e)^2 + (n - n_e)^2$,  (3)

where $m_e$ and $n_e$ represent the element position. For each line acquisition, the received A-line data are backprojected based on Eqs. (2) and (3) and then stored in a separate data matrix. For each backpropagation process, a weighting function (a Hanning window) is applied by multiplying it with the received A-line data, minimizing the grating lobes. The number of A-lines accumulated in the matrix is then counted, and this count is used to divide and normalize the inhomogeneous data volume from multiple angle positions. A set of signal processing techniques, including envelope detection and scan conversion, is applied to the normalized data matrix to display the B-mode image in real time. This process is repeated for every line acquisition. This modified algorithm for synthetic aperture focusing is desirable for the proposed tracking-based single-element ultrasound imaging system because, compared with the conventional monostatic synthetic aperture approach, it enables the user to monitor the updated B-mode image while scanning freehand. Figure 3 illustrates an example of the backpropagation process. On the left, a single-element backpropagation from a single line is shown. As the number of poses increases, the focusing effect gradually becomes stronger, which makes the point target size smaller. Note that this process is not unidirectional, and the user can move the needle back and forth to further improve image quality.
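As a concrete illustration of this backpropagation step, the sketch below accumulates one A-line onto a fixed image grid (Python/NumPy). The pulse-echo sample-index conversion, the assumed 1540 m/s speed of sound, and all variable names are our own assumptions for a minimal sketch, not the authors’ implementation.

```python
import numpy as np

def backproject_aline(accum, counts, rf_line, elem_xy, grid_x, grid_z, fs=40e6, c=1540.0):
    """Back-project one received A-line onto a fixed 2-D image grid (STAF sketch).

    accum, counts : running sum of back-projected RF and per-pixel contribution counts
    rf_line       : raw RF samples received at this pose (monostatic pulse-echo)
    elem_xy       : (lateral, axial) element position from encoder tracking [m]
    grid_x, grid_z: 2-D arrays of pixel lateral/axial coordinates [m]
    fs, c         : sampling frequency [Hz] and assumed speed of sound [m/s]
    """
    # Hanning weighting of the A-line, as described in the text, to suppress grating lobes
    weighted = rf_line * np.hanning(rf_line.size)
    # Eq. (3): distance from the tracked element position to every pixel
    d = np.sqrt((grid_x - elem_xy[0]) ** 2 + (grid_z - elem_xy[1]) ** 2)
    # Map each pixel's distance to a sample index in the received A-line (round trip)
    idx = np.round(2.0 * d / c * fs).astype(int)
    valid = idx < weighted.size
    # Eqs. (1)-(2): accumulate y_pre(d, e) into the image and count contributions
    accum[valid] += weighted[idx[valid]]
    counts[valid] += 1
    return accum, counts

# After all poses: normalize by counts, then envelope-detect and log-compress for display
# image_rf = accum / np.maximum(counts, 1)
```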

Fig. 3.

The backpropagation reconstruction process corresponding to poses #1, #2, and #3. (a) A-line data, highlighted in red, from position #1 was backpropagated. (b) Position #2 was backpropagated and summed onto the left image. (c) Positions #1, #2, and #3 were backpropagated and summed. The yellow line represents the backpropagation geometrical loci of a point target (green dot).

3. Methods

3.1. Simulation

Simulations were performed to both validate image reconstruction and analyze the effect of needle localization errors on the displayed image. Five point targets were simulated at depths from 10 to 50 mm in 10-mm intervals. Assuming a single-element ultrasound transmitter and receiver, the rotation radius from the rotation axis to the element was 40 mm. The center frequency of the single element was set to 4 MHz, and the data were sampled at 40 MHz. The image was reconstructed with the backpropagation-based synthetic aperture algorithm. The simulated virtual scan collected 128 poses with a 0.46-deg pitch. Three types of tracking errors, axial tracking (rotation radius) error, rotational tracking error, and axial motion error, were considered for the error tolerance analysis. The axial tracking error corresponds to the misinterpretation of the needle depth relative to the rotation axis. The error was added to the simulated channel data by shifting in the axial direction. Then, for the rotational tracking error, which simulates incorrect rotational readings from the encoder, each angle position was assigned channel data from that angle with an added randomly generated bias. For example, if 25 deg is the correct angle and 1 deg is the error, data at 26 deg can be assigned to generate simulated channel data. Finally, the axial motion error indicates the effect of tissue motion and deformation during the scan. To simulate this, we introduced a different randomly generated axial shift error (Gaussian noise) for each A-line. For all of the randomly generated errors, the magnitude represents the standard deviation (SD) of the randomly generated difference between the ground truth and the value with error. The process was repeated 100 times, and the average and the SD were calculated for each error magnitude. The full-width at half-maximum (FWHM) was used as the metric to represent the point target size and the lateral resolution. The FWHM of all five point targets was measured, and the ratio to the FWHM of the data with no error was plotted as the FWHM degradation in percent, where 100% indicates an identical FWHM.
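For illustration, a minimal sketch of the per-line axial motion error injection and the FWHM-degradation metric described above is given below (Python/NumPy). The pulse-echo conversion with an assumed 1540 m/s speed of sound and all function names are illustrative assumptions, not the authors’ simulation code.

```python
import numpy as np

def fwhm(profile, pitch):
    """FWHM of a 1-D lateral point-target profile (pitch = spacing between samples)."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * pitch

def fwhm_degradation(profile_with_error, profile_no_error, pitch):
    """FWHM degradation in percent; 100% means identical to the error-free image."""
    return 100.0 * fwhm(profile_with_error, pitch) / fwhm(profile_no_error, pitch)

def add_axial_motion_error(channel_data, sigma_mm, fs=40e6, c=1540.0, rng=None):
    """Shift each simulated A-line (columns) by an independent Gaussian axial error."""
    rng = rng or np.random.default_rng()
    shift_mm = rng.normal(0.0, sigma_mm, channel_data.shape[1])           # one error per line
    shift_samples = np.round(2.0 * shift_mm * 1e-3 / c * fs).astype(int)  # pulse-echo samples
    noisy = np.empty_like(channel_data)
    for i in range(channel_data.shape[1]):
        noisy[:, i] = np.roll(channel_data[:, i], shift_samples[i])
    return noisy
```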

3.2. Experimental Setup

3.2.1. Needle with PZT element fabrication

As described in Sec. 2.1, the needle-shaped ultrasound transducer was based on a PZT-5H element placed on the tip of a wire inserted into a 14 G spinal needle. The element was placed at the tip of a wire, which was threaded through a stainless-steel tube. This wire was then attached to the end of the tube, with one electrode conductively adhered to the wire and the other electrode to the steel tube. The thickness of the wire was comparable to the element diameter, so the wire had the effect of a backing layer. The other end of the wire and tube was connected to a coaxial cable with a BNC connector so that the needle could be connected to sampling devices.

3.2.2. Ultrasound tracking

The proposed imaging approach was based on the accurate tracking of the element location informed by a 12-bit absolute magnetic angular encoder (AEAT-6012, Broadcom). The fabricated single-element transducer with 1-mm diameter24 was mounted on a holster with a rotation encoder to read precise rotational position. The encoder could provide absolute angle detection with a resolution of 0.0879 deg; it had no upper speed limit, though there were fewer samples per revolution as the speed increased. The encoder was connected to an “encoder-to-tube” adapter, which allowed the pivot angle of the needle to directly correspond to the angle read by the encoder. The current design incorporates an Arduino UNO, which collects the encoder angle. The distance from the needle tip to the rotation pivot point was 50 mm. This distance from the pivot point relates the ultrasound element tracking to the rotation information.

3.2.3. Data collection

The data collection process is summarized in Fig. 4. The transmission was triggered by the computer, and the received ultrasound signal was captured by a data acquisition system (US-Key, Lecoeur Electronique), synchronized by the computer to be collected alongside angle data from the encoder. The encoder and ultrasound reception were synchronized through MATLAB software to ensure acquisition at the same time point. In this validation experiment, we applied freehand scanning, in which the needle was swept in the one-DoF direction. The RF data were collected 2000 times over a 61-deg angle range.
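The acquisition loop could be structured as in the sketch below. The two device stubs are hypothetical placeholders: the paper does not specify the US-Key or Arduino driver calls, so only the pairing of each A-line with an encoder reading is taken from the description above.

```python
import numpy as np

# Hypothetical device stubs; they stand in for whatever driver calls the real
# US-Key data acquisition unit and Arduino-based encoder readout provide.
def acquire_aline(daq):        # trigger one transmission and return received RF samples
    raise NotImplementedError
def read_encoder_deg(board):   # read the 12-bit absolute encoder angle via the Arduino
    raise NotImplementedError

def freehand_scan(daq, board, n_lines=2000):
    """Collect RF A-lines paired with encoder angles, reading both at the same time point."""
    rf, angles = [], []
    for _ in range(n_lines):
        theta = read_encoder_deg(board)   # sample the angle immediately before firing
        rf.append(acquire_aline(daq))     # triggered single-element pulse-echo reception
        angles.append(theta)
    return np.asarray(rf), np.asarray(angles)
```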

Fig. 4.

Diagram of hardware integration for the single-element ultrasound system.

3.2.4. Imaging target and processing

To validate the concept, a wire phantom consisting of two metal wires (1-mm diameter) and a human lumbar spine bone phantom were imaged [Figs. 5(a) and 5(b)]. The phantoms were submerged in a water tank at 20-mm and 40-mm depths, respectively. The single-element ultrasound imaging system was fixed on top using a rigid microstage. The cross-section of the metal wire was imaged to evaluate the resolution improvement of a point target. For a practical assessment, the spinal phantom was scanned from three directions: one sagittal plane and two transverse planes. The sagittal plane crosses two vertebrae in the spine, with the potential needle entry space kept between them. The two transverse planes capture imaging fields both with and without a spinous process. Transverse plane 1 is an ideal image field where the needle insertion can be performed, whereas transverse plane 2 should be avoided due to the spinous process, as noted in Fig. 5(b). Figure 5(c) presents B-mode images collected using a clinical ultrasound machine (SonixCEP, Ultrasonix) with a convex probe (C5-2/60, Ultrasonix).

Fig. 5.

Imaging targets. (a) Metal-wire phantom and (b) human lumbar spinal phantom. Three planes were scanned for the spinal phantom, where the dotted lines represent the imaging planes. (c) B-mode images of the spinal phantom using a clinical ultrasound machine. The black scale bar indicates 10 mm.

The needle transducer was swept freehand, and 2000 A-lines and corresponding angle position readings were collected. After the data collection, the line data were reallocated to each angle point in 0.5-deg incremental steps. The STAF algorithm was applied to the realigned data, as described in Sec. 2.2. Using the wire phantom, a uniform number of A-lines was used to form prereconstruction data to evaluate the reconstruction performance, and both single-acquisition and 10-times-averaged data were used in reconstruction. Then, a dataset using all A-lines was used to confirm that nonuniform data would not affect the reconstruction. For the spinal phantom, all 2000 A-lines were used. Postprocessing included envelope detection, scan conversion, and log compression.
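A minimal sketch of the angular-bin reallocation and the envelope-detection/log-compression postprocessing is shown below (Python/NumPy/SciPy). Scan conversion is omitted, and the dynamic range and binning details are assumptions rather than the authors’ exact processing chain.

```python
import numpy as np
from scipy.signal import hilbert

def bin_alines_by_angle(rf, angles_deg, step_deg=0.5):
    """Reallocate freehand A-lines into 0.5-deg angular bins, averaging repeated visits."""
    bins = np.round(np.asarray(angles_deg) / step_deg).astype(int)
    grouped = {}
    for b, line in zip(bins, rf):
        grouped.setdefault(b, []).append(line)
    keys = sorted(grouped)
    centers = np.array(keys) * step_deg
    data = np.stack([np.mean(grouped[k], axis=0) for k in keys], axis=1)  # samples x angles
    return centers, data

def postprocess(beamformed_rf, dynamic_range_db=40.0):
    """Envelope detection and log compression (scan conversion omitted for brevity)."""
    env = np.abs(hilbert(beamformed_rf, axis=0))
    env = env / env.max()
    return 20.0 * np.log10(env + 10.0 ** (-dynamic_range_db / 20.0))
```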

4. Results

4.1. Simulation Results

The result of the simulated point targets is shown in Fig. 6. For the ground truth data without any errors, reconstructed point targets could be confirmed at all depths [Fig. 6(a)]. Three types of error sources introduced image deformation. The first type of error was axial tracking uncertainty simulating an incorrect estimation of the rotation radius. The example shown in Fig. 6(b) demonstrates that an error of a 3.5-mm axial shift does not introduce much image degradation but visualizes the targets at incorrect depths. The second type of error was the rotation reading error. This false reading occurs when the rotation reading has an error or the recording was faster than the actual motion. Figure 6(c) shows the case when a 0.5-deg rotational error was applied. Finally, the axial motion error reflects the depth position variation that occurs in each line acquisition along the axial axis; this error could be introduced by tissue motion artifacts or the motion of the holster itself. Although the simulated error is an extreme case, where a very fast vibrational motion is modeled by assigning a randomized error to each line depth independently, an error magnitude with an SD of 0.04 mm can result in severe image quality degradation.

Fig. 6.

The reconstructed ultrasound image using the simulated data with five point targets. Three distinct types of error were also applied to observe the system tolerance. (a) Ground truth image with no error, (b) the result with error in the axial axis simulating incorrect rotation radius, (c) the result with error in the rotation axis, and (d) the result with error from motion in the axial axis.

Next, the tolerance of the system to these errors was quantitatively evaluated by varying the magnitude of the error. Figure 7 shows the amount of image deformation for the given error conditions through the metric of FWHM. The evaluation metric for FWHM degradation was defined as the ratio between the FWHM of an image with error and that of an image with no error, where the image quality with no error is presented as 100%. As a result, high tolerance was seen for the axial axis error, where the resolution was not degraded when up to 2 to 3 mm of error was induced. For the error in rotational angle, the FWHM was acceptable up to 0.5 deg but gradually degraded as the error magnitude increased. For the axial motion, the FWHM was stable up to 0.03 mm but started degrading after introducing 0.04 mm of error. If the potential magnitude of error is predictable, it is necessary either to sweep the same region several times or to sample angles in smaller steps to average out the error. The tolerance improved substantially when 10-times averaging was applied by simulating the same data with distinct randomized errors.

Fig. 7.

The full-width at the half maximum (FWHM) of the point targets for different error sources. (a) The resolution in the presence of error in axial direction for entire channel data, (b) the resolution in the presence of error in rotational angle tracking, and (c) the resolution in the presence of error in axial direction for each receive line.

4.2. Needle Sensing Evaluation

Figure 8 shows the sensitivity of the depth detection and the accuracy of the rotation encoder. The depth detection result is shown in Fig. 8(a), showing that the ultrasound needle could accurately sense the depth information. The slope was 1.006, and the root mean square (RMS) error was 0.076 mm. The needle depth sensing itself could be used as a real-time guidance tool by indicating how deep the needle could go before hitting a high-contrast region. This works well given that, in our case, the largest contrasting structures are bone. The accuracy of the encoder was assessed experimentally prior to implementation in our system. The needle was placed vertically against a measurement surface and incrementally rotated such that, at each new angle position, the angle reading from the encoder and the horizontal displacement of the tip of the needle were recorded. With the horizontal displacement of the needle tip (x) and the known length of the arc radius (r), the actual angle can be calculated with the simple trigonometric equation:

$\theta = \tan^{-1}\left(\dfrac{x}{r}\right)$.  (4)

Fig. 8.

The evaluation results of (a) the needle and (b) the rotation encoder. (a) The distance from the needle reading was compared with the designated motion distance, and (b) the angle measurement from the encoder was compared with the actual angle. The correlation coefficients for the depth sensing and the rotation encoder were higher than 99.99% and 99.94%, respectively. (c) The A-line waveform of the needle transducer from a 1-mm metal wire in both the temporal and frequency domains is shown.

This was compared with the angle found with the encoder to find the error, and a linear trend was observed, as shown in Fig. 8(b). The slope was 1.005, and the RMS error was 0.116 deg. These two results indicate that the sensing system is sufficient to produce an image with minimal distortion from tissue or sensor motion. The correlation coefficients for the depth sensing and the rotation encoder were higher than 99.99% and 99.94%, respectively.
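A short sketch of this evaluation, using Eq. (4) with the 50-mm pivot-to-tip radius of the described prototype, is given below (Python/NumPy). The evaluation script itself is not given in the paper; the function names are illustrative.

```python
import numpy as np

def actual_angle_deg(x_mm, r_mm=50.0):
    """Eq. (4): ground-truth pivot angle from tip displacement x and arc radius r."""
    return np.degrees(np.arctan(np.asarray(x_mm) / r_mm))

def encoder_rms_error_deg(encoder_deg, x_mm, r_mm=50.0):
    """RMS difference between encoder readings and the trigonometric ground truth."""
    err = np.asarray(encoder_deg) - actual_angle_deg(x_mm, r_mm)
    return np.sqrt(np.mean(err ** 2))
```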

In a separate study, we evaluated the angle sensitivity and the signal profile of the fabricated transducer. The needle-shaped transducer was mounted on a linear translation stage, and a pure lateral translational motion was applied. The signal from a 1-mm metal wire was recorded at each lateral position, and the angle sensitivity was calculated by measuring the change of intensity. The angular sensitivity of the needle was 11.44 deg, defined as the width over which the signal remained within −6 dB of its maximum. Furthermore, Fig. 8(c) presents the signal profile. The peak frequency was 3.62 MHz, which was close to the simulated signal center frequency of 4 MHz.
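One way to compute such a −6 dB acceptance angle from the lateral sweep is sketched below. Converting lateral offsets to look angles at the wire depth is an assumption of this sketch; the paper does not detail the conversion used.

```python
import numpy as np

def angular_sensitivity_deg(lateral_mm, amplitude, wire_depth_mm):
    """-6 dB acceptance angle estimated from a lateral sweep past a wire target.

    Lateral offsets are converted to look angles at the wire depth; the returned width
    is the angular span over which the echo amplitude stays within -6 dB of its maximum.
    """
    angles = np.degrees(np.arctan(np.asarray(lateral_mm) / wire_depth_mm))
    db = 20.0 * np.log10(np.asarray(amplitude) / np.max(amplitude))
    within = np.where(db >= -6.0)[0]
    return angles[within[-1]] - angles[within[0]]
```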

4.3. Single-Element Ultrasound Imaging Results

Experimental results of the wire phantom using our prototype imager are shown in Fig. 9. The tracked rotation angle trajectory [Fig. 9(a)] and the histogram representing the number of samples for each angle [Fig. 9(b)] are shown. The first 10 data samples for each angle of the trajectory were also stored separately and used as uniform data to evaluate the effect of nonuniform sampling compared with uniformly sampled data. For the image without STAF, the acquired RF data were aligned and postprocessed without backpropagation. This is identical to using an aperture size of a single element in STAF. The reconstructed image, displayed in log scale, shows a resolution improvement for both targets. The FWHM of the rightmost shallow wire improved from 6.68 to 1.21 mm when a single data sample was used for each angle step during STAF, from 4.66 to 0.97 mm when 10 data samples were used for each angle step, and from 4.44 to 0.94 mm when all recorded data were used. Similarly, the FWHM of the left-side wire improved from 4.43 to 1.09 mm for a single sample, from 1.88 to 0.97 mm for 10 data samples, and from 3.19 to 0.98 mm for all recorded samples. This result indicates that repeated freehand scanning produced data that improved the imaging quality by averaging even though it has nonuniform spacing. The extracted cross-section of the point targets is shown in Fig. 10. The lateral profiles of the two point targets using all data are shown, and the resolution improvement is seen in both targets.

Fig. 9.

The imaging result of the metal wire phantom. (a) The tracked rotation angle trajectory and (b) the histogram representing the number of samples for each angle. Note that the first 10 data samples were used for the uniform averaging result. (c) The B-mode image with and without STAF. For each set, a comparison was made between the controlled case, where the number of samples for each angle was equivalent (for single acquisition and 10-times averaging), and the case where 2000 nonuniform freehand A-lines were used.

Fig. 10.

The lateral cross-section profile taking the maximum intensity of (a) left and (b) right targets. For each profile, the results with and without applying STAF are compared.

The spinal bone phantom was scanned as a more practical imaging target to validate clinical feasibility. Figure 11 presents the B-mode images of the spinal phantom with and without STAF in the three planes defined in Sec. 3.2. The bottom row also shows images with manually drawn phantom boundaries, and potential needle paths are highlighted with red lines where they appear. In the sagittal plane, the two spinous processes were depicted, and the gap between them, indicating a potential needle path, was captured. The high-intensity reflection signals from 50- to 60-mm depth indicate that signals were not blocked by the spinous processes located at around 20-mm depth (yellow arrow). In addition, two transverse planes were scanned to highlight the signal difference with and without a potential insertion point. For the first transverse plane, the gap between two superior articular processes was captured, and the signal from the body was also highlighted (yellow arrow). For the second transverse plane, the spinous process tip showed the highest contrast, while weak signals were recorded from the process body (white arrow). The rest of the structure below appears with a weaker contrast. The absence of signal from the vertebral body could be used as an identifier of the lack of a needle entry point.

Fig. 11.

The ultrasound image of the spine phantom. The contrast from the deep center region that appeared in both the sagittal and first transverse planes indicates that the needle cannot go through without changing the insertion orientation. The red line indicates the suggested needle path.

5. Discussions and Conclusions

The current standard of care for LPs introduces a wide range of iatrogenic complications and places a heavy financial burden on the patient, physician, and healthcare system overall. The proposed imaging system incorporates an ultrasound element at the tip of the needle to allow for imaging at any depth the needle can be inserted. The needle probe bypasses the attenuation from adipose tissue and skin that plagues traditional topical ultrasound approaches. By providing the physician with visualization of the bone structure ahead and facilitating accurate midline placement, the needle will pass through a plane known to be devoid of significant blood vessels and nerves.25 The simulation validation and experiments using our prototype successfully demonstrate the imaging potential of the tool for LP guidance.

Compared with other imaging systems, the proposed system has clear advantages. Unlike other ultrasound guidance systems, it is an independent imager that does not require any registration process, which could introduce additional needle tracking inaccuracy. The real-time sensing of information is the most direct and informative way to prevent undesired contact between the needle and bone. Because the image plane is kept in the needle frame, there is no out-of-plane error that could be introduced by ultrasound beam thickness in conventional probe-based guidance. The system also requires minimal electrical components, making it more cost effective.

Another unique aspect of the system is that it can update the sensing/imaging at intermediate insertion depths in real time, whereas conventional guidance must rely on information gathered prior to entry, such as preoperative CT or ultrasound-to-tool calibration. This up-to-date information is not only more reliable but also of higher resolution, especially as the needle approaches the real target. Conventional ultrasound can have a limited imaging depth depending on the object and frequency used. Thus, the proposed system can provide clearer and more precise information for guidance. The signal acquisition speed in our current system was 70 ms per A-line for a 75-mm depth field. We will further improve the data acquisition speed by optimizing both hardware and software, up to the theoretical limit of approximately 0.1 ms set by the acoustic time-of-flight.
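For reference, the quoted ~0.1 ms acoustic limit follows directly from the round-trip time of flight, assuming a soft-tissue speed of sound of about 1540 m/s:

```python
# Round-trip acoustic time of flight for a 75-mm depth field (assumed c ~ 1540 m/s)
c = 1540.0                      # speed of sound in soft tissue [m/s]
depth = 0.075                   # imaging depth [m]
print(2.0 * depth / c * 1e3)    # ~0.097 ms per A-line, i.e., the quoted ~0.1 ms limit
```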

Maintaining a rigid-body transformation between the needle tip frame and the spine is a hardware requirement of this system because, as demonstrated in the simulation, the motion of the holster can directly affect the image. Although motion of the holster is unlikely to occur during the sweep above the skin before needle insertion, a sweep inside the adipose layer could shift the skin surface. Therefore, the holster must be rigidly fixed over a wide area so that local displacement does not also displace the holster. Alternatively, external mechanical joints connecting the holster with a rigid body, such as the procedure bed or chair, can be used to fix the holster in place.

Regarding the limitations of the system in practical implementations, the tissue layer could cause signal attenuation that lowers the intensity from the bone surface. Therefore, a more sensitive receiver circuit and needle are desired. The SNR can be improved by electrical impedance matching between the sampling circuit and the needle. Another solution is to use a PZT element with a lower center frequency, which can increase the error tolerance due to the increase in wavelength. Beyond that, signal processing or a more adaptive beamforming algorithm could also help counter this concern. Tracking inaccuracy due to motion artifacts is another problem. There is some inevitable error from mis-synchronization between the tracking and the ultrasound data. Ideally, the tracker should be fast enough to reduce the error introduced. As suggested by the simulation analysis, recording multiple data at the same angle could mitigate the motion artifacts. Additionally, unexpected angular sensitivity from a nonorthogonal element (i.e., an element placed at a tilt from the manual fabrication process) could introduce beamforming error.

Future work includes building an adaptive image formation algorithm that takes into account the known angular sensitivity and enhancing the SNR of the image. Large-angle sweeping in the adipose tissue layer could bend the needle, so its effect on image quality should be further studied. The system will be further improved and optimized for more realistic environments, such as ex vivo tissue or in vivo settings.

Acknowledgments

The authors would like to acknowledge Shayan Roychoudhury for helping create a figure, and Nisu Patel, Larissa Chan, Ernest Scalabrin, Suraj Shah, Arden Chew, and Kush Gupta for their contributions to the project in a variety of capacities. Financial support was provided by Johns Hopkins University internal funds, NSF Grant No. IIS-1653322: Co-Robotic Ultrasound Sensing in Bioengineering, NIGMS-/NIBIB-NIH Grant No. R01EB021396: Slicer+PLUS: Point-of-Care Ultrasound, NCI-NIH Grant No. R21CA202199: Prostate Specific Membrane Antigen Targeted Photoacoustic Imaging for Prostate Cancer, and NCI-NIH Grant No. R44CA192482: Development of a Mobile and Automated Platform for Multiplexed Multi-Modality Imaging.

Biography

Biographies for the authors are not available.

Disclosures

No conflicts of interest, financial or otherwise, are declared by the authors.

References

