Author manuscript; available in PMC: 2023 Apr 1.
Published in final edited form as: IEEE Trans Biomed Eng. 2022 Mar 18;69(4):1424–1434. doi: 10.1109/TBME.2021.3118535

A minimally interactive method for labeling respiratory phases in free-breathing thoracic dynamic MRI for constructing 4D images

Changjian Sun1, Jayaram K. Udupa2,*, Yubing Tong3, Caiyun Wu4, Shuxu Guo5, Joseph M. McDonough6, Drew A. Torigian7, Patrick J. Cahill8
PMCID: PMC8990545  NIHMSID: NIHMS1790647  PMID: 34618668

Abstract

Objective:

Determination of end-expiration (EE) and end-inspiration (EI) time points in the respiratory cycle in free-breathing slice image acquisitions of the thorax is one key step needed for 4D image construction via dynamic magnetic resonance imaging. The purpose of this paper is to automate this labeling process.

Methods:

The diaphragm is used as a surrogate for tracking respiratory motion and determining the state of breathing. Regions of interest (ROIs) containing the hemi-diaphragms are set by human interaction and used to compute the optical flow matrix between two adjacent 2D time slices. Subsequently, our approach examines diaphragm speed and direction and, by considering changes in the optical flow matrix, detects the EE and EI points.

Results and conclusion:

The labeling accuracy for the lateral aspects of the left and right lungs (0.63±0.71) is significantly lower (P < 0.05) than the accuracy for other positions (0.42±0.44), but the error in almost all scenarios is less than 1 time point. Comparing automatic and manual labeling across 12 scenarios, we found that 9 scenarios showed no significant difference (P > 0.05) between the two methods. Overall, our method agrees closely with manual labeling and greatly shortens the labeling time, requiring less than 8 minutes/study compared to 4 hours/study for manual labeling.

Significance:

Our method achieves automatic labeling of EE and EI points without the need for use of patient internal or external markers.

Keywords: Auto-labeling respiratory phase, Diaphragm motion, Dynamic magnetic resonance imaging, Optical flow, 4D construction, Thoracic insufficiency syndrome (TIS)

Introduction

4D imaging of the thorax has been widely used in radiation therapy to quantify thoracic organ displacements, visualize abdominal and thoracic organ motion, and assess mechanical functions of organs [1], [2]. 4D medical imaging approaches using different modalities including computed tomography (CT) [3]–[7], magnetic resonance imaging (MRI) [2], [8]–[18], and ultrasonography (US) [19] have also been developed. MRI is the modality of choice for imaging the pediatric thorax due to the absence of ionizing radiation, excellent soft tissue contrast, sufficient temporal resolution, and the ease of implementation of dynamic protocols [2]. The motivation and rationale for the presented work stem from the need to quantify dynamic thoracic function and its change due to surgical treatment in a pediatric ailment known as thoracic insufficiency syndrome (TIS).

TIS is a complex condition involving malformation of the components of the thorax, mainly the rib cage, spine, sternum, and intercostal muscles [20], [21]. In many cases, children with TIS are born with congenital spinal deformities and/or have a neuromuscular condition leading to scoliosis. Patients with TIS are unable to support normal breathing and lung growth. As they grow, their rib cage, spine, and thoracic volume do not keep pace. As a result, their chest wall becomes deformed (sunken) and they may become dependent on nasal oxygen or ventilator support to breathe. Traditional 4D imaging methods are difficult to implement for studying TIS patients due to the physiological characteristics of these patients. For example, patients often suffer from extreme deformities of the chest wall, diaphragm, and/or spine that prevent the chest from supporting normal breathing and from meeting the requirements of imaging such as breath-holding or breathing cooperatively with a gating or tracking device [20], [21]. Additionally, young age at onset of TIS and conditions associated with TIS such as cerebral palsy are also associated with intellectual deficits preventing participation and cooperation with studies [22], [23]. Therefore, for the study of TIS, image acquisition under free-breathing conditions is the only practical option. Under this constraint, we developed a method of dynamic MRI (dMRI), wherein for each sagittal slice location through the thorax, slices are acquired over several respiratory cycles at ~200 ms per slice while the patient breathes freely. Images are acquired in this manner for all sagittal locations across the chest. This typically results in ~3000 slices which constitute a spatio-temporal sampling of the dynamic thorax without any information available to anchor the time instances to specific respiratory phases. From these data, by using a graph-based optimization technique [2], we construct an "optimal" 4D image representing the breathing thorax over one respiratory cycle, which typically consists of ~300 spatio-temporal slices. The method is purely image-based, without the requirement of sorting based on a breathing signal or using any external surrogate. The utility of this approach in studying TIS is beginning to emerge [24]–[27].

One critical processing step in that approach [2] is to label the end-expiration (EE) and end-inspiration (EI) phases in the time sequence of slices associated with each sagittal location. This step has been conventionally carried out manually, which requires an expert to examine the slices in each time sequence, observe the way the diaphragm moves, and mark the slice in the sequence where the diaphragm reaches the superior-most and inferior-most position as representing an EE-phase slice and EI-phase slice, respectively. Since all 3000 slices have to be visually assessed in this manner, this step is time-consuming (taking typically 3.5-4 hours per patient data set). In this paper, we propose a method to significantly improve the level of automation of this step so that the entire 4D construction process becomes highly automated and clinically viable.

Existing 4D image acquisition and construction methods can be categorized into four groups. (i) Real-time acquisition [28]. These methods acquire image volume data rapidly enough to cover the 3D region of interest and several respiratory phases in one cycle. Typically, the volume covered is quite small and the image quality is inferior to that obtainable by other methods. (ii) Prospective gating methods [4], [29], [30]. These utilize some device to provide respiratory signals so that images can be acquired at defined respiratory phases. Surgically implanted internal markers and external markers such as pressure-sensitive belts have been used to generate signals. (iii) Retrospective methods using gating devices [7], [31], [32]. These methods need a device to generate a "respiratory signal" as in (ii) but select slices based on the signal after image acquisition to create a 4D image. (iv) Image-based retrospective gating [2], [8]–[10], [15]–[19], [33], [34]. These methods do not need devices or signals, are least encumbering to the patient unlike methods in (ii) and (iii), and are best suited for TIS applications. Since methods under (i) have quality and body-region coverage issues, they are also not appropriate for studying TIS. The rationale and advantages of our graph-based method over other techniques under (iv) have already been elucidated in Ref. [2]. In short, these other techniques all make assumptions about the nature of the breathing cycle or the image features that seem to be valid in CT imagery. These requirements are hard to satisfy in dMRI of very sick TIS patients. 4D methods developed for CT images have not been tested for MRI acquisitions and are not guaranteed to be fully viable [5], [7], [10]. Some methods, such as the sagittal-coronal-diaphragm (SCD) point-of-intersection motion tracking method, cannot be implemented using sagittal MRI alone [8]. Other methods that are sensitive to the intensity of the respiratory signal cannot meet the requirements of 4D image construction [7], [8].

These challenges led us to develop the new graph-based approach for TIS patients. The idea underlying graph-based retrospective 4D imaging methods is to reorder the acquired slices using intra-image gating signals. These methods can be divided into three categories based on the reference used for sorting. The first group of methods derives the respiratory signal via dimensionality reduction of feature vectors, mapping the respiratory phase information contained in the slices to a low-dimensional space. Most of these methods are based on manifold learning [16], [18], [35], [36], and others have used principal component analysis [37]. Such dimensionality-reduction methods are applicable across multiple medical imaging modalities. The underlying idea is that although the slices may come from different locations, the same respiratory state causes similar physical deformation, which makes them lie on similar manifolds; by analyzing the manifold, the correspondence between slice order and breathing phase is optimized. The second group of methods uses 2D image-based internal surrogates [8]–[10], [15], [33], [34]. These methods manually select and extract one or more internal anatomical features from the multi-slice scan data, then combine the features to estimate the respiratory phases and perform reconstruction based on the determined respiratory signal correlation between the slices. The third group of methods devises and finds optimal paths in an appropriately constructed graph to determine the best way to put slices together spatially and temporally to form a 4D image [2], [17].

The graph-based approach of Ref. [2] for 4D construction of the thorax over one respiratory cycle belongs to the third group, in which the EE and EI phases of breathing constitute key time-point information, forming essential underpinnings of the graph formation stage of the method. Although EE and EI time points are determined on sagittal slice planes, the graph method weaves the 4D space together in terms of time and space slices to assure optimal spatial and temporal continuity. However, as described above, one main hurdle in that approach has been the manual identification of EE and EI time points. Automatic labeling of EE and EI phases can greatly improve the efficiency of this method. At present, motion tracking of tissues or organs in the body to identify the cycle of breathing or heartbeat relies mainly on analysis of body-region deformation or internal surrogate tracking. For respiratory motion, the most common method is to use an observation window to detect the cranio-caudal motion of the diaphragm [38]–[40]. The extraction of respiratory signals by calculating reference changes in observation windows in adjacent slices also follows this principle [17]. In addition, the state of breathing can also be estimated using the deformation of the body contour [34], but this method cannot be utilized if the patient's breathing is weak, as in many TIS conditions. Some investigators have formulated and solved a system of partial differential equations to describe cardiac dynamics [41].

In view of the physiological characteristics of patients with TIS, we need a method that can automatically detect and label weak breathing. The main idea underlying automation of this step is to track the movement of the diaphragm by analyzing the optical flow velocity information in its vicinity from adjacent time points and thereby detect and label EE and EI phases according to the movement of the diaphragm during free breathing. As explained in Section 2, the user first specifies a rectangular region of interest (ROI) on one sagittal slice enclosing the hemi-diaphragm. The velocity information is then computed within the ROI in all slices. The magnitude and direction of the velocity are utilized to determine the slices corresponding to EE and EI time points. Utilizing a set of 87 dMRI data sets acquired from TIS patients, normal subjects, and a dynamic phantom, we assess the accuracy and precision of the proposed auto-labeling method as compared to manual labeling in Section 3. Our conclusions, limitations of the method, and challenges we encountered during implementation are explained in Section 4.

Some preliminary results along the lines of the study in this paper have appeared in the proceedings of the SPIE Medical Imaging 2019 conference. This paper is a significant extension of the conference paper in the following aspects: an extensive background and literature review that was missing in the SPIE paper; a detailed description of the method and algorithms that was missing in the conference paper; more data used to test the accuracy and precision of the algorithm; considerably expanded experimental results and their analysis; and expanded concluding remarks.

Materials and Methods

A. Image data sets

Image data sets were obtained from the Children's Hospital of Philadelphia (CHOP). This retrospective study was conducted following approval from the Institutional Review Board at CHOP and the University of Pennsylvania along with a Health Insurance Portability and Accountability Act waiver. Image data sets utilized in this study all pertain to sagittal thoracic dMRI. Each patient was scanned from the right lateral end to the left lateral end of the thorax at 30-40 sagittal plane positions under breathing conditions that are natural for the patient. All subjects in this study were scanned using the same scanning protocol: 3T imager (Prisma, Siemens Healthcare, Erlangen, Germany); true fast imaging with steady-state precession sequence (trufisp tfi2d1) for free-breathing thoracic dMRI; TR/TE = 3.82/1.91 ms; voxel size approximately 1 mm x 1 mm x 6 mm; 320 x 320 x 38 matrix; bandwidth = 558 Hz; flip angle = 76°; and one signal average. For each sagittal location in the thorax, slice data were obtained during 8-14 tidal breathing cycles at approximately 470-480 ms per slice; total acquisition time per subject = 40 minutes. This process yields 2000-3000 slices in total for one patient and constitutes a spatio-temporal sampling of the patient's dynamic thorax over several respiratory cycles.

The 4D construction method we used [2] identifies a small set of about 200-300 slices among these 2000-3000 slices by using a graph-based optimization technique to build one representative and optimal 4D image to describe the breathing motion of the 3D thorax over one respiratory cycle.

A total of 87 dMRI data sets gathered from 54 subjects and one dynamic (4D) phantom were utilized in our study, as summarized in Table I. Scan 1 and Scan 2 in the table refer to different scan sessions of the same subjects; in the case of patients, they constitute pre- and post-operative data sets. For 3 of the 5 adult subjects, we acquired data in a repeated scan session. A dynamic phantom [2] was created by 3D printing a (left and right) lung segmented at one time point of the dMRI data set of a normal adult subject and immersing the lung in a water bath. A realistic tidal breathing effect with known air volume and respiratory rate was simulated by pumping air into and out of the lung shell. In the labeling experiment for the phantom, pumping air into and out of the lung shell is accompanied by changes in the water level, which is regarded as a substitute for diaphragm movement. During manual labeling, the operator observes the water level and determines the respiratory phases accordingly: a rising water level simulates the exhalation process, and the water level reaching the highest point of the lungs is regarded as EE; a falling water level simulates the inhalation process, and the water level reaching the lowest point of the lungs is regarded as EI. See Ref. [2] for details.

TABLE I.

SUMMARY OF DMRI DATA SETS UTILIZED IN OUR STUDY

                     Scan 1        Scan 2         Total dMRI data sets
TIS – pediatric      29 (pre-op)   29 (post-op)   58
Normal – pediatric   20            -              20
Normal – adult       5             3              8
Dynamic phantom      1             -              1

The TIS patient data set contains 16 males and 13 females, with age 4.5±4.2 yrs before surgery and 4.5±4.2 yrs after surgery (no significant difference in age; p > 0.05). The normal pediatric data set contains 12 males and 8 females, with age 11.0±2.3 yrs and body mass index of 18.8±2.8 kg/m². The normal adult subjects were 3 males and 2 females, with age 27.6±2.5 yrs.

B. Methods

An overview of the auto-labeling approach is schematically illustrated in Fig. 1. We assume that there is a time-varying (almost periodic) body region B(t) (in our case, the thorax) whose domain is contained in Ω = X×Y×Z mm³. Our dMRI scanning method produces a sequence of slices

A = \{ f_{z_1,t_1}, f_{z_1,t_2}, \ldots, f_{z_1,t_M}, f_{z_2,t_{M+1}}, \ldots, f_{z_2,t_{2M}}, \ldots, f_{z_N,t_{N \times M}} \}

representing a spatio-temporal sampling of Ω over a total scanning time interval [0, τ]. Each slice f_{z_i,t_j} is acquired within a short time (~480 ms), when B(t) can be assumed to be frozen in time/motion, such that z_i ∈ Z and t_j ∈ [0, τ]. Note that in our protocol the z-axis is orthogonal to the sagittal plane; typically N (the number of sagittal or z locations) is 35 to 40, meaning that slices are acquired for N sagittal slice locations, and the number of time points M for each sagittal location is usually 80. For convenience, we will denote the sequence of slices associated with a specific z-location by Az, z = z1, z2, …, zN. Since there is no time coordination (due to free-breathing acquisitions) among slices in A, it constitutes an uncoordinated spatio-temporal sampling of Ω over the time interval [0, τ]. In other words, the respiratory phases of the slices in the two time sequences Azi and Azj associated with any two distinct z-locations zi and zj are not synchronized. The 4D image construction method we previously reported [2] requires identification of the time slices that denote the EE and EI time points for each time sequence Az, z = z1, z2, …, zN. In the published approach, this step was accomplished manually, wherein an operator examined the slices in Az and marked a slice as representing either EE or EI if the hemi-diaphragm dome in that slice reached the highest (cranial direction) or lowest (caudal direction) position, respectively (see Fig. 2). Subsequently, the 4D construction method used a graph-based optimization technique to find the best 4D volume (constituting the 3D body region over one respiratory cycle) from among the set of all slices in A. The methods of graph construction and graph optimization in Ref. [2], by properly linking the space (z) and time (t) slices, guarantee space and time continuity among the subset of slices selected by the optimization process (see Ref. [2] for details). The manual labeling step requires a great deal of time and effort (typically 3.5-4 hours per dMRI data set). Although dMRI, 4D construction, and subsequent image analysis facilitated uncovering previously unknown information about the TIS process and its treatment outcome [24]–[27], the manual labor required has hindered the translation of the entire dMRI approach to routine clinical use.
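To make this data organization concrete, the following minimal Python sketch (illustrative only; the authors' implementation was in MATLAB, and the names and dimensions here are hypothetical) shows one way to hold the uncoordinated spatio-temporal sampling A as per-location time sequences Az.

```python
import numpy as np

# Hypothetical dimensions for one dMRI study (see text): N sagittal
# z-locations, M time points per location, H x W pixels per slice.
N, M, H, W = 38, 80, 320, 320

# A: the full uncoordinated spatio-temporal sampling, indexed as
# A[z, t] -> 2D slice f_{z,t}. In practice these slices would be loaded
# from the scanner output rather than initialized to zeros.
A = np.zeros((N, M, H, W), dtype=np.float32)

def time_sequence(A, z):
    """Return the time sequence A_z = {f_T1, ..., f_TM} for one z-location."""
    return A[z]          # shape (M, H, W)

# Example: the 80-slice time sequence for the 10th sagittal location.
Az = time_sequence(A, 9)
```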

Fig. 1. A schematic illustration of the auto-labeling method.

Fig. 2. Part of a time sequence Az from a patient dMRI data set illustrating the EE and EI time points marked based on visually observed movement of the diaphragm dome.

The main idea underlying the proposed auto-labeling process is to use the part of the hemi-diaphragm indicated within a region of interest (ROI) as a surrogate to automatically track the diaphragm's upward and downward motion during the expiratory and inspiratory phases of the respiratory cycle, respectively. We use time-dependent optical flow computation [42] to determine the direction and magnitude of the motion of the diaphragm. This vectorial motion (velocity) information is used to accurately determine the EE and EI time points within each time sequence Az, z = z1, z2, …, zN. In the labeling process, the optical flow matrix within the ROI is computed, and the EE and EI points are identified by noting the time points at which the direction of motion of the diaphragm changes from upward to downward (EE point) and from downward to upward (EI point).

1). Specifying an ROI as a respiratory surrogate

Some tissue regions deform more significantly and regularly during breathing than others and can therefore be selected as a respiratory surrogate [8]. In order to make the auto-labeling approach as consistent as possible with the ground truth labeling operation, we chose the hemi-diaphragm observable on sagittal slices as a surrogate of respiration, since the hemi-diaphragm satisfies the above criterion and since manual labeling also uses this structure. Selecting the diaphragm for tracking respiration has obvious advantages in the TIS application: 1) The edge of the hemi-diaphragm can be clearly discerned on the dMRI image as the border between the thorax and abdomen [8]. 2) While an unaffected patient can be expected to have motion in a compliant chest wall, the often extreme distortions of the spine, rib cage, and other skeletal structures in TIS can restrict, eliminate, or even paradoxically invert movements of the chest wall [21], [22]. Conversely, the diaphragm reliably maintains a discernible superior-inferior motion.

To accurately track the movement of solely the hemi-diaphragm and reduce the effect of deformation from other organs/tissues within the whole slice, we set an ROI interactively [43] roughly covering the superior dome of the hemi-diaphragm, as shown in Fig. 3. The ROI needs to be specified manually only for one sagittal z-location per dMRI data set. The specified ROI is then propagated automatically to all z-locations with the same size and location. We treat sagittal z-locations passing through the region of the heart differently, since the movement of the heart is not in synchrony with the motion of the diaphragm or chest wall and would mislead flow estimations and the decisions derived from flow. For these locations, the ROI specified for all other locations is split into two equal parts and only the right half is chosen so as to exclude the heart, as shown in the middle image in Fig. 3. Thus, two kinds of ROIs are set as follows. All sagittal locations from the right-most to the left-most position with respect to the patient thorax are separated into 3 regions with the ratio 30%:40%:30%. The size of the ROI varies from patient to patient; for the first and third regions it is approximately 70 x 80 pixels. The second region covers the sagittal locations passing through the heart. As illustrated in Fig. 3, the left and right images, from the first and third regions, respectively, use an ROI of the same size, while the ROI is decreased in size for the second region to avoid effects from cardiac movement, as shown in the middle image of Fig. 3. The correctness of placement of the automatically set and propagated ROIs is verified quickly by visually examining a few time slices in the three regions for each z-location. The total manual time taken for ROI selection in this manner is 10-15 minutes per dMRI study.
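A minimal sketch of the ROI propagation rule just described is given below. The rectangle representation, the function name, the rounding of the 30%:40%:30% boundaries, and the choice of which half of the rectangle corresponds to the "right half" are our own assumptions for illustration; they are not the authors' implementation (which used CAVASS [43] and MATLAB).

```python
def propagate_roi(roi, n_locations):
    """Propagate one manually drawn ROI to all sagittal z-locations.

    roi: (x0, y0, width, height) of the rectangle enclosing the hemi-diaphragm
    dome, specified on a single reference slice. Locations are split
    30%:40%:30% from the right-most to the left-most position; in the middle
    (cardiac) band only half of the ROI is kept so that heart motion does not
    contaminate the optical flow. Which half is kept depends on the image
    orientation; here we simply keep the half starting at x0.
    """
    x0, y0, w, h = roi
    first = round(0.3 * n_locations)
    second = round(0.7 * n_locations)
    rois = []
    for z in range(n_locations):
        if first <= z < second:          # sagittal locations passing through the heart
            rois.append((x0, y0, w // 2, h))
        else:                            # lateral right and left thirds: full ROI
            rois.append((x0, y0, w, h))
    return rois

# Example: a ~70 x 80 pixel ROI propagated to 38 sagittal locations.
rois = propagate_roi((150, 180, 70, 80), n_locations=38)
```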

Fig. 3. ROIs selected at different sagittal z-locations of the thorax. Slices through the mid-right lung (left), mid-thorax at the level of the heart (middle), and mid-left lung (right) are shown.

2). Computing time-dependent optical flow within an ROI

An optical flow approach [42] is employed to automatically track the motion of the hemi-diaphragm in each lung (strictly speaking, the boundary that separates the base of the lung from the surrounding tissues) within the ROI. Since this motion tracking is done separately for each time sequence Az, we will describe the method for a single time sequence of slices. Without loss of generality, let any such time sequence be denoted by Az = {fT1, fT2, …, fTM}, where the subscript z has been dropped from the notation used for the slices for simplicity, fT1, fT2, …, fTM denote slices in the time sequence Az, and T1, …, TM denote the time instances associated with the slices in Az. Fig. 4 illustrates the main idea of the approach. Consider a point such as P in the middle of the hemi-diaphragm dome. As the hemi-diaphragm in Fig. 4(a) undergoes a complete inferior-superior-inferior (EI-EE-EI) motion during one breathing cycle, P's y-location traverses a path. This is conceptually illustrated in Fig. 4(b), where dy(t) denotes this movement component of P. What is illustrated in Fig. 4(a) and (b) is an ideal situation where t is assumed to be continuous (not discrete) and an individual point (P) is tracked. In our practical set up, we can sample slices only at discrete time instances, which are indicated by small circles over one respiratory cycle in Fig. 4(b). Instead of tracking individual points (pixels), we estimate an average of the motion of all points in the vicinity of the hemi-diaphragm within the ROI by using the mechanism of optical flow estimated from each successive pair of adjacent time slices fTi and fTi+1 in Az. The optical flow value we seek is a vector that denotes the velocity (speed and direction) of motion. The component v(t) of this vector in the y direction is illustrated in Fig. 4(c). The mechanism of optical flow assumes that the motion under consideration at every pixel (x, y) is small in going from slice fTi at Ti to slice fTi+1 at Ti+1. This assumption leads to the image constraint equation shown below, where Δt = Ti+1 − Ti and (x+Δx, y+Δy) denotes a pixel neighboring pixel (x, y).

Fig. 4. (a) A sample ROI. For the image coordinate system, x and y indicate the two directions of the sagittal plane. The y direction corresponds to the superior-inferior direction and the x direction indicates the anterior-posterior direction. (b) The graph shows conceptually the continuous motion of the hemi-diaphragm over one respiratory cycle at point P. The small circles in (b) and (c) denote the sampled time slices. (c) The vertical (cranio-caudal) component v(t) of the velocity of the hemi-diaphragm at P. The detected EI and EE time points are marked in blue and orange, respectively. The EI time point corresponds to the time instance just before v(t) changes from a +ve to a −ve value, and vice versa for EE.

f_{T_{i+1}}(x + \Delta x, y + \Delta y) = f_{T_i}(x, y)    (1)

With the assumption of small motion from Ti to Ti+1, by Taylor series expansion,

f_{T_{i+1}}(x + \Delta x, y + \Delta y) = f_{T_i}(x, y) + \frac{\partial f_{T_i}}{\partial x}\Delta x + \frac{\partial f_{T_i}}{\partial y}\Delta y + \frac{\partial f_{T_i}}{\partial t}\Delta t + \varepsilon    (2)

where ε denotes residual sum over higher order terms in the series. If we ignore ε and divide throughout by Δt, the above equation leads to

\frac{\partial f_{T_i}}{\partial x} u + \frac{\partial f_{T_i}}{\partial y} v + \frac{\partial f_{T_i}}{\partial t} = 0    (3)

where (u, v) = (Δx/Δt, Δy/Δt) denotes the velocity vector with its horizontal (antero-posterior) component u and cranio-caudal component v at pixel p = (x, y) at time t = Ti. We will make use of only the v component for auto-labeling.

We employ the Lucas–Kanade (LK) method [42] to solve for u and v. The LK algorithm is based on the assumption that the optical velocities in local neighborhoods of each pixel p = (x, y) are similar. This assumption can be used to derive the basic equation of optical flow for all pixels in a small neighborhood L(p) of pixel p and solve the resultant system of equations by the least squares technique for u and v at (x, y). In our implementation, we assumed 3 × 3 neighborhoods.

\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum_{(x,y) \in L(p)} \left( \frac{\partial f_t}{\partial x} \right)^2 & \sum_{(x,y) \in L(p)} \frac{\partial f_t}{\partial x} \frac{\partial f_t}{\partial y} \\ \sum_{(x,y) \in L(p)} \frac{\partial f_t}{\partial x} \frac{\partial f_t}{\partial y} & \sum_{(x,y) \in L(p)} \left( \frac{\partial f_t}{\partial y} \right)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_{(x,y) \in L(p)} \frac{\partial f_t}{\partial x} \frac{\partial f_t}{\partial t} \\ -\sum_{(x,y) \in L(p)} \frac{\partial f_t}{\partial y} \frac{\partial f_t}{\partial t} \end{bmatrix}    (4)
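For illustration, a minimal NumPy/SciPy re-implementation of the Lucas-Kanade solution of Eq. (4) over 3 x 3 neighborhoods is sketched below. This is not the authors' MATLAB code; the derivative approximations, the handling of near-singular neighborhoods, and the sign convention (image rows increasing in the caudal direction) are assumptions made for this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

def lucas_kanade_v(f1, f2):
    """Estimate the velocity field between two adjacent time slices f1 = f_Ti
    and f2 = f_Ti+1 of the ROI and return the cranio-caudal component v of
    Eq. (4), computed with 3 x 3 neighborhoods L(p)."""
    f1 = f1.astype(np.float64)
    f2 = f2.astype(np.float64)
    fy, fx = np.gradient(f1)           # spatial derivatives (rows = y, cols = x)
    ft = f2 - f1                       # temporal derivative (delta-t = 1 time unit)

    def box(a):                        # sum over the 3 x 3 neighborhood L(p)
        return convolve(a, np.ones((3, 3)), mode="nearest")

    sxx, syy, sxy = box(fx * fx), box(fy * fy), box(fx * fy)
    sxt, syt = box(fx * ft), box(fy * ft)
    det = sxx * syy - sxy ** 2
    det[np.abs(det) < 1e-9] = np.inf   # flat neighborhoods: velocity forced to ~0
    # Closed-form least-squares solution of Eq. (4); only v is returned, since
    # auto-labeling uses the cranio-caudal (y) component alone.
    v = (sxy * sxt - sxx * syt) / det
    return v
```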

3). Determining the cranio-caudal component of motion and EE and EI time points

Let Vz(p, t) denote the image of the cranio-caudal component of velocity within the specified ROI at time t for the time sequence Az. That is, at any pixel p within the ROI, Vz(p, t) denotes the velocity component v at pixel p = (x, y) at time t estimated as described in the previous section. To avoid undue influence of noise, instead of following motion at every pixel within the ROI, we estimate the average of the signed cranio-caudal velocity components within the ROI

\mu_z(t) = \frac{\sum_{p \in \mathrm{ROI}} V_z(p, t)}{|\mathrm{ROI}|}    (5)

where |ROI| denotes the number of pixels within the ROI. In summary, μz(t) is the average cranio-caudal velocity for slice location z at time t, with the convention that a +ve value of μz(t) indicates downward (caudal direction) motion of the hemi-diaphragm (inspiration) for z at time t and a −ve value denotes upward (cranial direction) motion (expiration) for z at t.

Fig. 5 illustrates the variation of μz(t) as a function of t as estimated by the above method in a time sequence Az associated with a patient dMRI data set for the right hemi-diaphragm. The pseudo-periodic motion of the hemi-diaphragm seems to be well captured by the proposed technique. Recall that μz(t) represents the cranio-caudal velocity of the hemi-diaphragm. During inspiration, the hemi-diaphragm moves caudally and μz(t) > 0, and the EI time points are identified at time instances just before μz(t) changes from a +ve (downward motion) to a −ve value (upward motion). Similarly, EE time points are estimated from μz(t) at time instances just before μz(t) changes from a −ve (upward motion) to a +ve value (downward motion). In other words, the conditions for EI and EE are:

Fig. 5. A plot of μz estimated from the time samples in Az for a patient dMRI data set. The time axis has 80 time points, and a total of 79 optical flow values are equally spaced on the t axis. The blue and orange dots denote EI and EE time points, respectively.

EI: Time instance Ti such that

\mu_z(T_i) > 0 \;\; \mathrm{AND} \;\; \mu_z(T_{i+1}) < 0.    (6)

EE: Time instance Ti such that

\mu_z(T_i) < 0 \;\; \mathrm{AND} \;\; \mu_z(T_{i+1}) > 0.    (7)

A complete respiratory cycle in Fig. 5 extends from a colored time point to the next colored time point of the same color.
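The following sketch strings Eqs. (5)-(7) together, using the lucas_kanade_v sketch above: optical flow is computed for each adjacent slice pair of one time sequence Az, averaged over the ROI to obtain μz(t), and the EE/EI indices are read off from the sign changes. Variable names and the array layout are our own illustrative assumptions.

```python
import numpy as np

def detect_ee_ei(Az_roi):
    """Az_roi: array of shape (M, h, w) holding the ROI cropped from the M
    time slices of one sequence A_z. Returns (EE indices, EI indices, mu),
    assuming the sign convention that +ve mu means caudal motion (inspiration).
    Indices refer to the sequence of M-1 flow values mu_z(T_i)."""
    # M - 1 velocity maps from adjacent slice pairs, averaged over the ROI (Eq. 5).
    mu = np.array([lucas_kanade_v(Az_roi[i], Az_roi[i + 1]).mean()
                   for i in range(len(Az_roi) - 1)])
    # Eq. (6): EI just before mu changes from +ve to -ve.
    ei = [i for i in range(len(mu) - 1) if mu[i] > 0 and mu[i + 1] < 0]
    # Eq. (7): EE just before mu changes from -ve to +ve.
    ee = [i for i in range(len(mu) - 1) if mu[i] < 0 and mu[i + 1] > 0]
    return ee, ei, mu
```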

Experiments, Results, and Discussion

A. Experiments

We conducted experiments to ascertain the accuracy and precision of our auto-labeling method.

Accuracy: The EE and EI time points in the dynamic sequence Az (of M = 80 time points) associated with each sagittal slice location z for each of the 87 dMRI data sets (see Table I) were determined manually by a trained operator under the guidance of a radiologist (coauthor Torigian, a professor of radiology at the Hospital of the University of Pennsylvania with 24 years of experience in thoracoabdominopelvic CT, MRI, and PET imaging, interpretation, and image analysis) by visually examining the movement of the diaphragm on all ~250,000 slices of these data sets. These EE and EI markings served as ground truth for assessing the accuracy of our auto-labeling method in detecting these time points. Auto-labeling was performed on all 87 scans. Accuracy was quantified by estimating the deviation of the time instance determined by auto-labeling from the closest ground truth marking. To be specific, for a time sequence Az = {fT1, fT2, …, fTM} associated with a z-slice, let an EE time slice determined by auto-labeling be fTa and the closest "true" time slice be fTt. Then, the deviation in this instance is |t − a|. We estimated the mean εm (and standard deviation εsd) of this error over the tested cases separately for EE and EI, for EE and EI together (EE+EI), separately for the left lung (LL) and right lung (RL), and for left lung and right lung together (LL+RL). Since the performance at a z-location passing through the lateral and medial aspects of the hemi-diaphragm may be different from the performance at a z-location passing through the center of the hemi-diaphragm dome, we analyzed accuracy separately at the mid-level, lateral aspect, and medial aspect of each of the left and right hemi-diaphragms instead of determining an overall accuracy.
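As an illustration of the accuracy metric just defined (not the authors' evaluation code; the indices below are hypothetical), the deviation of each auto-labeled time point from the closest ground-truth time point can be computed as follows.

```python
import numpy as np

def labeling_error(auto_pts, true_pts):
    """Mean and standard deviation of |t - a|, the separation (in time points)
    between each auto-labeled EE or EI index and the closest ground-truth index."""
    true_pts = np.asarray(true_pts)
    deviations = [np.min(np.abs(true_pts - a)) for a in auto_pts]
    return float(np.mean(deviations)), float(np.std(deviations))

# Hypothetical example for one time sequence A_z of 80 time points.
e_m, e_sd = labeling_error(auto_pts=[12, 31, 52, 71], true_pts=[12, 30, 52, 71])
```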

Precision: Recall that the auto-labeling method requires interactive specification of the ROI. To study the dependence of the reproducibility of auto-labeling on this subjective operation, on a subset of 10 data sets (5 pre-operative scans of TIS patients and 5 scans of normal adult subjects), the auto-labeling process (including ROI specification) was repeated. The same operator who labeled EE and EI time points manually on all data sets repeated manual labeling in another repeated session conducted several months after the initial session on the same above 10 data sets. These data served to understand the variability in manual labeling itself and how the auto-labeling precision compared with this variability.

B. Results and discussion

1). Qualitative

Fig. 6 illustrates the manual and auto-labeling processes over one breathing cycle. In this example, the patient completes exhalation at time point T2, where the diaphragm reaches the highest point, and T2 is marked as EE. The patient completes inspiration at time point T5, when the diaphragm returns to the lowest point, and thus T5 is marked as EI. The lower half of Fig. 6 shows the variation in the direction of optical flow within the ROI during the breathing cycle. Overall optical flow values within the ROI can be positive or negative (shown by blue and orange arrows), indicating downward or upward motion of the diaphragm, respectively. The experimental results show that T2 and T5 are the last time points in the exhalation and inhalation processes, respectively. In many slice acquisitions, the diaphragm appears to pause for a short period of time near EI and EE, with only a slight deviation, so that the position of the diaphragm in two time-adjacent images does not change much. This phenomenon mostly occurs at the last time point of inhalation, near T5. Given the heavy workload of the annotators and the active judgment required, such closely similar time points sometimes cannot be distinguished quickly; automatic labeling is more advantageous in this situation. This inevitably results in differences between auto-labeling and manual labeling, although auto-labeling may often be more accurate due to its quantitative nature and its ability to distinguish between close cases. The proposed method does not use the complete image, but only the ROI containing the diaphragm, to track the respiratory signal. The principle is to use only the trend of change in the optical flow of the pixels within the ROI between two adjacent images. Regarding black-band artifacts of the balanced SSFP images at 3T: if a pixel has no matching point between the two adjacent images, its optical flow is not influenced and hence is not counted.

Fig. 6. The upper row shows ROIs selected over one respiratory cycle (from T1 to T6) of a time sequence associated with a z-location of a sample data set. The orange and blue lines denote the superior-most point and inferior-most point of the hemi-diaphragm during breathing, respectively. The lower row illustrates optical flow direction from T0 to T1, …, T5 to T6. The upward and downward motions detected by optical flow are denoted by orange and blue arrows, respectively.

2). Quantitative

Accuracy: εm and εsd values for the different tested scenarios are summarized in Table II. Note that there were 86 4D dMRI acquisitions (excluding the phantom; see Table I) involved in our study, where each acquisition included 35-40 sagittal z-locations. Thus, our experiment involved roughly 3,000-3,500 time sequences Az (hence auto-labeling experiments), where each experiment (time sequence) involved detecting 2-4 time points for each of EE and EI. In other words, the total number of estimations of EE and EI time points in our study was ~20,000. Table II shows overall errors in the last column computed from all estimations for each scenario, but also separately for the left lung (LL) and right lung (RL), pre-operative and post-operative data sets, the right region (RR), middle region (MR), and left region (LR) of the hemi-diaphragm, and EE and EI time points. We make the following observations from these results. Comparisons for statistically significant differences are based on t-tests.

TABLE II.

ERROR IN AUTO-LABELING COMPARED TO GROUND TRUTH

                               RR                                  MR                                  LR                                  All
                               EE         EI         EE+EI        EE         EI         EE+EI        EE         EI         EE+EI        EE+EI
TIS Pre-operative       RL     0.64±0.56  0.55±0.54  0.59±0.50    0.38±0.39  0.28±0.34  0.32±0.34    0.39±0.43  0.37±0.41  0.38±0.35    0.43±0.28
                        LL     0.48±0.54  0.38±0.41  0.43±0.44    0.46±0.46  0.40±0.68  0.43±0.54    0.90±0.65  1.04±0.87  0.98±0.73    0.61±0.39
                        All    0.56±0.39  0.46±0.36  0.51±0.34    0.42±0.33  0.34±0.36  0.37±0.31    0.65±0.45  0.70±0.51  0.68±0.45    0.52±0.28
TIS Post-operative      RL     0.73±0.65  0.71±0.71  0.72±0.63    0.34±0.35  0.30±0.31  0.32±0.26    0.54±0.41  0.31±0.33  0.42±0.29    0.48±0.27
                        LL     0.56±0.49  0.30±0.29  0.42±0.31    0.53±0.36  0.24±0.27  0.37±0.26    0.71±0.69  0.80±1.14  0.76±0.91    0.52±0.36
                        All    0.65±0.43  0.50±0.40  0.57±0.35    0.43±0.29  0.27±0.22  0.35±0.21    0.62±0.45  0.55±0.57  0.59±0.49    0.50±0.26
Normal (ped. + adult)   RL     0.25±0.30  0.41±0.53  0.33±0.40    0.12±0.16  0.04±0.06  0.08±0.08    0.34±0.51  0.28±0.35  0.31±0.41    0.23±0.16
                        LL     0.44±0.69  0.44±0.51  0.44±0.48    0.25±0.34  0.17±0.38  0.22±0.26    0.42±0.51  0.47±0.58  0.44±0.51    0.35±0.28
                        All    0.33±0.37  0.40±0.34  0.36±0.29    0.17±0.16  0.10±0.18  0.14±0.11    0.37±0.40  0.38±0.38  0.37±0.35    0.29±0.19

Each cell lists the mean ± standard deviation of the error over the tested samples. Error is expressed in terms of the number of slices (time points) separating the estimated time point from the true time point. The duration of this basic unit depends on the image acquisition rate and equals the time interval between two time-adjacent images, typically ~480 ms. The ideal value for error is 0. RL, LL: right and left lung. RR, MR, LR: right, middle, and left region of the hemi-diaphragm.

(i) The error in almost all scenarios is less than 1 time point. That is, in the sequence Az consisting of 80 time-point slices, some of which are labeled as EE and EI time points, the separation between ground truth labeling and auto-labeling is on average less than 1 time-point slice. This, we believe, is remarkable, considering that manual labeling can itself vary due to ambiguity (as explained above in Fig. 6) by about that amount. For normal subject data sets, the error (εm±εsd) over all scenarios is 0.29±0.19, and for TIS patient data sets before and after surgery, the errors over all scenarios are 0.52±0.28 and 0.50±0.26, respectively. The overall error in the case of patients (0.51±0.59) is statistically significantly higher (P < 0.05) than that in the case of normal subjects (0.29±0.19). For the phantom data set (not listed in Table II), the results achieved the highest accuracy, with an overall error of 0.27±0.26. Note that although we observed statistically significant differences in comparing between some scenarios (as noted below), the differences themselves were of little practical consequence given that most errors are less than one time point in magnitude.

(ii) Interestingly, at the LR position of the left lung and RR position of the right lung (denoted respectively by LR-LL and RR-RL), the error is statistically significantly greater (P <0.05) than in other positions. Based on all samples of patients and normal subjects, the error is (0.63±0.71) vs. (0.42±0.44). From the analysis of samples from normal subjects, the error is (0.76±0.76) vs. (0.39±0.43). At LR-LL before surgery, the errors for EI and EE are the largest among all positions, the error for EE is 0.90±0.65, the error for EI is 1.04±0.87, and the average error of EE+EI reaches 0.98±0.73. It appears that the greater the proportion of the reference structure (the hemi-diaphragm) in the ROI region, the higher will be the accuracy of the optical flow method for tracking motion. At LR-LL and RR-RL, the area of the diaphragm is much smaller than at other z locations, and it is overwhelmed by other tissues in the ROI region. The background has a significant influence on the optical flow value, resulting in lower accuracy at these two positions. This effect can be verified from results at other locations. Regardless of pre- or post-surgery condition, the average accuracy at MR-RL is greater than the accuracy at the lateral edge.

(iii) Based on the sample data of all positions of TIS patients, the accuracy of EI labeling (0.46±0.63) is greater than that of EE (0.56±0.54) (P < 0.05). This result seems reasonable. During inhalation, the rate of change in lung volume and the speed of the diaphragm are lower than during exhalation. Whether it is for manual labeling or automatic tracking, the end of the inhalation process is easier to identify accurately.

(iv) As explained previously, in the region close to the heart, we chose a smaller ROI to reduce the influence of the heart on the optical flow value. In the process of obtaining the optical flow value, the influence of other tissues in the background on the calculation is inevitable. The expansion and contraction of the heart will affect the labeling process, which can be shown by comparing errors of right and left lungs. The effect of heart motion on auto-labeling is more pronounced on the left side of the thorax. Before surgery, the accuracy of labeling at LR-RL (0.38±0.35) is better than that at RR-LL (0.59±0.50) although the difference is not statistically significant (P > 0.05). The post-surgical errors showed that although the accuracy of labeling at LR-RL is close to that at RR-LL, the accuracy for the right lung (0.48±0.27) is still higher than that for the left lung (0.52±0.36) based on all data (P > 0.05).

(v) The labeling error overall after surgery (0.50±0.26) is slightly lower than that before surgery (0.52±0.28) (P > 0.05), but for both conditions (0.51±0.58) the error is statistically significantly higher (P < 0.05) than that for normal subjects (0.29±0.19). After surgery, there was a statistically significant difference (P < 0.05) in the error in locating EE and EI time points at MR-LL, RR-LL, and LR-RL, with the values (mean ± sd) for the three scenarios being: 0.53±0.36 (EE) vs. 0.24±0.27 (EI), 0.56±0.49 (EE) vs. 0.30±0.29 (EI), and 0.54±0.41 (EE) vs. 0.31±0.33 (EI), respectively. This possibly suggests that the surgery improved the movement of the diaphragm near the heart making the distinction between EE and EI clearer.

Precision: In this part, we compare the differences among three repeated experiments to study how the reproducibility of auto-labeling compared with that of manual labeling. The three experiments are: (i) repeated automatic labeling where the same operator selected ROIs twice (Auto-1), (ii) different operators selected ROIs twice (Auto-2), and (iii) the same operator manually labeled twice (Manual). Considering the repeated experiments by the same operator (Auto-1), the deviation of auto-labeling over roughly 5000 cycles in these data sets was found to be 0.49±0.68. This is actually smaller than the deviation of our method from the manual ground truth (Manual). These results are summarized in E-Table 1 in Supplementary Material.

We compared the deviation of repeated experiments with the same operator (Auto-1) and with different operators (Auto-2) for each of the different scenarios: RR, MR, and LR locations for the right and left lung and for EE and EI. We analyzed the 12 pair-wise results using ANOVA. No pair showed a significant difference in the results of the two repeated experiments. In comparing automatic labeling (Auto-1) with manual labeling (Manual), only LR-RL for EE, LR-RL for EI, and LR-LL for EE showed statistically significant differences (P < 0.05) between manual labeling reproducibility and auto-labeling reproducibility, with the values for the three scenarios being: 0.30±0.77 (Manual) vs. 0.27±0.31 (Auto-1), 0.32±0.47 (Manual) vs. 0.27±0.18 (Auto-1), and 0.88±1.72 (Manual) vs. 0.63±0.80 (Auto-1), respectively. Again, as with accuracy, the deviations are all less than one time-point slice. This indicates that the variability found in auto-labeling is mostly comparable to that in manual labeling, and in those cases where the deviation is statistically significant, the difference with respect to manual labeling is less than one time point. This result, combined with the accuracy result, demonstrates that the proposed auto-labeling method is comparable to manual labeling (with a deviation of not more than one time point) and is as reproducible as the manual method itself.
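For completeness, the sketch below shows how such pair-wise significance tests might be run with standard tools (SciPy here; the paper's analysis used t-tests and ANOVA). The deviation values are purely illustrative placeholders, not the study data.

```python
from scipy import stats

# Hypothetical per-cycle deviations (in time points) for one scenario,
# e.g. LR-RL for EE, from two repeated auto-labeling runs and from
# repeated manual labeling. Values are illustrative only.
auto_run1 = [0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0]
auto_run2 = [0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0]
manual    = [1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0]

# Paired t-test between the two automatic runs (reproducibility of Auto-1),
# and one-way ANOVA across all three labeling conditions.
t_stat, p_paired = stats.ttest_rel(auto_run1, auto_run2)
f_stat, p_anova = stats.f_oneway(auto_run1, auto_run2, manual)
print(f"paired t-test p = {p_paired:.3f}, ANOVA p = {p_anova:.3f}")
```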

Computational time: For our MATLAB 2015a implementation on a Lenovo computer with a 4-core, 3.7 GHz CPU (AMD A10-6700), 16 GB RAM, and the Windows 7 Professional operating system, the human interaction time required per patient study for auto-labeling is at most 15 minutes. The subsequent purely computational time per study is ~8 minutes. In our experience of manually labeling all 86 human-subject dMRI data sets (Table I), a study typically takes about 4 hours for a trained technician. Thus, the auto-labeling method greatly facilitates analyzing a large number of TIS patient studies in a routine manner for studying the TIS phenomenon and its treatment outcomes.

Comparison with results from the literature: Among all retrospective 4D imaging methods, the imaging principle of Ref. [17] comes closest to our method. That method needs to identify the first EE time point in the acquired 2D image sequences as a reference for determining the weights used to find the optimal path that completes the 4D reconstruction process. The authors use NiftyReg software to calculate the dense displacement fields between two temporal slices and low-pass filtering to extract the respiratory signals. To assess the accuracy of their auto-labeling strategy, the authors compare the results of their automated EE detection algorithm with results manually labeled by 5 human operators. Data sets from 12 patients were used in their experiments, which involved 36 EE detection experiments. Note that our annotation method marks both EE and EI time points, although here we use only the results for EE for comparison. Romaguera et al. select three locations for their comparative assessment: an area covering the liver and right hemi-diaphragm, the heart, and the left hemi-diaphragm. This is because the labeling reference can be different at these three locations, and the challenge of automatic labeling is also different. These three locations correspond to MR-RL, RR-LL, and MR-LL, respectively, in our evaluation method.

The image acquisition protocol in Romaguera's paper is different from ours, with 150 time points for each spatial slice location and a much shorter time for acquiring a single slice. If the error in auto-labeling is calculated in terms of the number of time points, the error of Romaguera's method will be much larger, which is unfair to their method. To standardize the description of labeling error, we convert the error in number of time points into a time deviation (in ms) based on the scanning protocol. For example, an error of 0.12 time points in our method translates into 0.12 time slices x 480 ms per slice = 57.6 ms. The ground truth employed in their evaluation is the median among the slices manually labeled by the 5 operators.

Table III summarizes the labeling errors at the three locations for our method over all normal subjects and TIS patients and for Romaguera's method over 12 normal volunteer subjects. The average errors of our method are smaller at all three positions regardless of normal subjects or patients. For the labeling of normal subjects, the labeling error of both methods at MR-RL and MR-LL is smaller than the error at RR-LL. Our method has very high accuracy at the right and left hemi-diaphragm positions (MR-RL and MR-LL), where the error is only 57.6 ms and 120 ms, respectively. Considering that there is a short resting period before the start of the inspiratory phase, during which the diaphragm remains at the same height, these errors are extremely small compared to the errors of Romaguera's method at these two positions of 355 ms and 233 ms, respectively. The labeling of data sets of patients with TIS is more challenging than that of normal subjects. Due to the more irregular breathing rhythm of the patients compared to normal subjects and the severe deformation of the lungs, the errors in our method at RR-LL (230.4 ms pre-operatively and 268.8 ms post-operatively) and MR-LL (220.8 ms pre-operatively and 254.4 ms post-operatively) are higher than at other regions. However, the errors at RR-LL are still much lower than the errors in Romaguera's method on normal subjects.

TABLE III.
                               MR-RL    RR-LL    MR-LL
Romaguera et al., 2019 [17]    355      509      233
Our method for TIS (pre-op)    182.4    230.4    220.8
Our method for TIS (post-op)   163.2    268.8    254.4
Our method for Normal          57.6     211.2    120

The average EE auto-labeling errors (ms) of our method compared with Romaguera's method [17] at the 3 locations MR-RL, RR-LL, and MR-LL. RL & LL: right & left lung. RR & MR: right & middle region.

Concluding Remarks

In this paper, to make image-based 4D construction practical, we presented an auto-labeling method for identifying EE and EI time points in free-breathing thoracic dMRI slice acquisitions based on time-dependent optical flow concepts. Our method tracks the movement of the hemi-diaphragm using the optical flow technique to determine the respiratory phase. The method is independent of the image acquisition process and does not require setting internal or external markers on the patient. The auto-labeling process saves a great deal of time compared to the manual labeling currently performed, which in turn makes the entire process of dMRI analysis for the study of TIS significantly more practical. Our extensive evaluation based on 87 dMRI data sets suggests that the accuracy of the auto-labeling method in identifying EE and EI phases is within 1 discrete time unit of temporal sampling. More importantly, this deviation is well within the deviation found in manual labeling by an expert who labeled all 87 data sets by visually examining all ~250,000 slices. We conclude that the auto-labeling method performs at least as accurately as manual expert labeling and saves a considerable amount of human time needed in the process.

The main limitation of this approach is that at present it assumes that the acquired MRI slices constitute spatiotemporal sampling of the thorax under tidal breathing conditions. The method is not able to distinguish between normal tidal breathing cycles and abnormal cycles such as when subjects take a long deep breath or when they perform shallow breathing by almost holding their breath. Interestingly, we find such abnormal patterns of breathing more frequently in normal subjects than in TIS patients. We are developing separate techniques to automatically detect such abnormal events before auto-labeling is performed, again by using optical flow but combined with machine learning techniques.

Supplementary Material

supp1-3118535
supp2-3118535: video file (AVI, 1.7 MB)
supp3-3118535: video file (AVI, 830.6 KB)
supp4-3118535: video file (AVI, 864.9 KB)
supp5-3118535: video file (AVI, 3.2 MB)

Acknowledgment

The training of Mr. Changjian Sun in the Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, for the duration of two years is supported by the China Scholarship Council. This research is supported by NIH grant R01 HL150147, by a "Frontier Grant" from The Children's Hospital of Philadelphia, and in part by the Institute for Translational Medicine and Therapeutics of the University of Pennsylvania through a grant by the National Center for Advancing Translational Sciences of the National Institutes of Health under award number UL1TR001878.

Contributor Information

Changjian Sun, College of Electronic Science and Engineering, Jilin University, Changchun 130012, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States.

Jayaram K. Udupa, Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States.

Yubing Tong, Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States.

Caiyun Wu, Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States.

Shuxu Guo, College of Electronic Science and Engineering, Jilin University, Changchun 130012, China.

Joseph M. McDonough, Center for Thoracic Insufficiency Syndrome, Children’s Hospital of Philadelphia, Philadelphia, PA, 19104, United States

Drew A. Torigian, Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States

Patrick J. Cahill, Center for Thoracic Insufficiency Syndrome, Children’s Hospital of Philadelphia, Philadelphia, PA, 19104, United States

References

[1] Tory M et al., "4D space-time techniques: a medical imaging case study," in Proc. IEEE Visualization '01, IEEE Computer Society, 2001, pp. 473–476.
[2] Tong Y et al., "Retrospective 4D MR image construction from free-breathing slice acquisitions: a novel graph-based approach," Med. Image Anal., vol. 35, pp. 345–359, 2017.
[3] Yang D et al., "4D-CT motion estimation using deformable image registration and 5D respiratory motion modeling," Med. Phys., vol. 35, no. 10, p. 4577, 2008.
[4] Wink N et al., "Phase versus amplitude sorting of 4D-CT data," J. Appl. Clin. Med. Phys., vol. 7, no. 1, pp. 77–85, 2006.
[5] Meijs M et al., "Robust segmentation of the full cerebral vasculature in 4D CT of suspected stroke patients," Scientific Reports, vol. 7, no. 1, p. 15622, 2017.
[6] Cong T et al., "Vessel enhancement and segmentation of 4D CT lung image using stick tensor voting," Sensing and Imaging, vol. 17, no. 1, pp. 1–16, 2016.
[7] Pan T et al., "4D-CT imaging of a volume influenced by respiratory motion on multi-slice CT," Med. Phys., vol. 31, no. 2, p. 333, 2004.
[8] Liu Y et al., "Retrospective four-dimensional magnetic resonance imaging with image-based respiratory surrogate: a sagittal-coronal-diaphragm point of intersection motion tracking method," Journal of Medical Imaging, vol. 4, no. 2, p. 024007, 2017.
[9] Cai J et al., "Four-dimensional magnetic resonance imaging (4D-MRI) using image-based respiratory surrogate: a feasibility study," Med. Phys., vol. 38, no. 12, pp. 6384–6394, 2011.
[10] Siebenthal MV et al., "4D MR imaging of respiratory organ motion and its variability," Phys. Med. Biol., vol. 52, no. 6, pp. 1547–1564, 2007.
[11] Rank CM et al., "4D respiratory motion-compensated image reconstruction of free-breathing radial MR data with very high undersampling," Magn. Reson. Med., vol. 77, no. 3, p. 1170, 2016.
[12] Yang YX et al., "A hybrid approach for fusing 4D-MRI temporal information with 3D-CT for the study of lung and lung tumor motion," Med. Phys., vol. 42, no. 8, p. 4484, 2015.
[13] Breuer K et al., "Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers," Phys. Med. Biol., vol. 63, no. 7, 2018.
[14] Mickevicius NJ and Paulson E, "Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy," Phys. Med. Biol., vol. 62, no. 8, 2016.
[15] Dikaios N et al., "MRI-based motion correction of thoracic PET: initial comparison of acquisition protocols and correction strategies suitable for simultaneous PET/MRI systems," European Radiology, vol. 22, no. 2, pp. 439–446, 2012.
[16] Baumgartner CF et al., "High-resolution dynamic MR imaging of the thorax for respiratory motion correction of PET using groupwise manifold alignment," Med. Image Anal., vol. 18, no. 7, pp. 939–952, 2014.
[17] Romaguera LV et al., "Automatic self-gated 4D-MRI construction from free-breathing 2D acquisitions applied on liver images," International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 6, pp. 933–944, 2019.
[18] Clough J et al., "Weighted manifold alignment using wave kernel signatures for aligning medical image datasets," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 4, pp. 988–997, 2019.
[19] Wachinger C et al., "Manifold learning for image-based breathing gating in ultrasound and MRI," Med. Image Anal., vol. 16, no. 4, pp. 806–818, 2012.
[20] Campbell RM et al., "The characteristics of thoracic insufficiency syndrome associated with fused ribs and congenital scoliosis," J. Bone Joint Surg., vol. 85, no. 3, pp. 399–408, 2003.
[21] Campbell RM and Smith MD, "Thoracic insufficiency syndrome and exotic scoliosis," J. Bone Joint Surg., vol. 89, pp. 108–122, 2007.
[22] England SJ, "Current techniques for assessing pulmonary function in the newborn and infant: advantages and limitations," Pediatric Pulmonology, vol. 4, no. 1, pp. 48–53, 1998.
[23] Ruppel G, Manual of Pulmonary Function Testing, 6th ed. St. Louis: Mosby, 1994, p. 225.
[24] Udupa JK et al., "Understanding respiratory restrictions as a function of the scoliotic spinal curve in thoracic insufficiency syndrome: a 4D dynamic MR imaging study," Journal of Pediatric Orthopedics, vol. 40, no. 4, pp. 183–189, 2018.
[25] Tong Y et al., "Quantitative dynamic thoracic MRI: application to thoracic insufficiency syndrome in pediatric patients," Radiology, vol. 292, no. 1, pp. 206–213, 2019.
[26] Song J et al., "Architectural analysis on dynamic MRI to study thoracic insufficiency syndrome," in Proc. SPIE Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling, Houston, TX, 2018, p. 105762C.
[27] Jagadale BN et al., "Lung parenchymal analysis on dynamic MRI in thoracic insufficiency syndrome to assess changes following surgical intervention," in Proc. SPIE, Houston, TX, 2018.
[28] Kim YC et al., "Real-time 3D magnetic resonance imaging of the pharyngeal airway in sleep apnea," Magn. Reson. Med., vol. 71, no. 4, pp. 1501–1510, 2014.
[29] Kothary N et al., "Safety and efficacy of percutaneous fiducial marker implantation for image-guided radiation therapy," Journal of Vascular and Interventional Radiology, vol. 20, no. 2, pp. 235–239, 2009.
[30] Nehmeh SA et al., "Quantitation of respiratory motion during 4D-PET/CT acquisition," Med. Phys., vol. 31, no. 6, p. 1333, 2004.
[31] Wagshul ME et al., "Novel retrospective, respiratory-gating method enables 3D, high resolution, dynamic imaging of the upper airway during tidal breathing," Magn. Reson. Med., vol. 70, no. 6, pp. 1580–1590, 2013.
[32] Low DA et al., "A method for the reconstruction of four-dimensional synchronized CT scans acquired during free breathing," Med. Phys., vol. 30, no. 6, p. 1254, 2003.
[33] Li R et al., "4D CT sorting based on patient internal anatomy," Phys. Med. Biol., vol. 54, no. 15, pp. 4821–4833, 2009.
[34] Liu Y et al., "Investigation of sagittal image acquisition for 4D-MRI with body area as respiratory surrogate," Med. Phys., vol. 41, no. 10, p. 101902, 2014.
[35] Georg M et al., "Manifold learning for 4D CT reconstruction of the lung," in Proc. IEEE CVPRW, 2008, pp. 1–8.
[36] Yigitsoy M et al., "Manifold learning for image-based breathing gating in MRI," in Proc. SPIE, vol. 7962, 2011.
[37] Uh J et al., "Four-dimensional MRI using an internal respiratory surrogate derived by dimensionality reduction," Phys. Med. Biol., vol. 61, no. 21, p. 7812, 2016.
[38] Timinger H et al., "Motion compensated coronary interventional navigation by means of diaphragm tracking and elastic motion models," Phys. Med. Biol., vol. 50, no. 3, p. 491, 2005.
[39] Shechter G et al., "Respiratory motion of the heart from free breathing coronary angiograms," IEEE Transactions on Medical Imaging, vol. 23, no. 8, pp. 1046–1056, 2004.
[40] Cervino LI et al., "Tumor motion prediction with the diaphragm as a surrogate: a feasibility study," Phys. Med. Biol., vol. 55, no. 9, p. N221, 2010.
[41] Sundar H et al., "Biomechanically-constrained 4D estimation of myocardial motion," in Proc. 12th Int. Conf. MICCAI, 2009, pp. 257–265.
[42] Barron JL et al., "Performance of optical flow techniques," International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.
[43] Grevera G et al., "CAVASS: a computer-assisted visualization and analysis software system," J. Digital Imaging, vol. 20, no. 1, p. 101, 2007.
