Journal of Medical Imaging
2022 Aug 25;9(4):044006. doi: 10.1117/1.JMI.9.4.044006

Framework for lumen-based nonrigid tomographic coregistration of intravascular images

Abhishek Karmakar a, Max L Olender b, David Marlevi b, Evan Shlofmitz c, Richard A Shlofmitz c, Elazer R Edelman b, Farhad R Nezami d,*
PMCID: PMC9402451  PMID: 36043032

Abstract.

Purpose

Modern medical imaging enables clinicians to effectively diagnose, monitor, and treat diseases. However, clinical decision-making often relies on the combined evaluation of longitudinal or otherwise disparate image sets, necessitating coregistration of multiple acquisitions. Promising coregistration techniques have been proposed; however, available methods predominantly rely on time-consuming manual alignments or nontrivial feature extraction with limited clinical applicability. Addressing these issues, we present a fully automated, robust, nonrigid registration method that allows coregistration of multimodal tomographic vascular image datasets using luminal annotation as the sole alignment feature.

Approach

Registration is carried out using registration metrics defined exclusively on lumen shapes. The framework comprises two sequential parts: longitudinal and rotational registration. Both are inherently nonrigid to compensate for motion and acquisition artifacts in tomographic images.

Results

Performance was evaluated across multimodal intravascular datasets, as well as in longitudinal cases assessing pre-/postinterventional coronary images. Low registration error in both datasets highlights method utility, with longitudinal registration errors—evaluated throughout the paired tomographic sequences—of 0.29±0.14  mm (<2 longitudinal image frames) and 0.18±0.16  mm (<1 frame) for multimodal and interventional datasets, respectively. Angular registration for the interventional dataset rendered errors of 7.7°±6.7°, and 29.1°±23.2° for the multimodal set.

Conclusions

Satisfactory results across datasets, along with additional attributes such as the ability to avoid longitudinal over-fitting and correct nonlinear catheter rotation during nonrigid rotational registration, highlight the potential wide-ranging applicability of our presented coregistration method.

Keywords: coregistration, cardiovascular imaging, multimodal imaging, intravascular imaging, nonrigid registration

1. Introduction

Medical imaging has become an indispensable part of modern cardiovascular medicine, with an expanding variety of modalities visualizing and quantifying anatomical and pathological features across the cardiovascular system.1–4 During clinical monitoring and decision-making, relevant complementary information is, however, often distributed across multiple image acquisitions.

Longitudinal image sets are often compared to quantify disease progression, monitoring aneurysmal growth5 or progressive vascular disease.6 Partially overlapping image runs are sometimes required to cover the full spatial extent of an atherosclerotic lesion during percutaneous intravascular evaluation.7 Finally, different imaging modalities provide complementary vessel information, with the fusion of multimodal datasets providing a more comprehensive depiction of vessel morphology and function, elevating clinical predictions across a range of cardiovascular settings.8–10 In all of these instances, image coregistration is fundamental to quantitative and unbiased comparison and fusion of such disparate image sets.

Manual coregistration remains not only time-consuming and labor-intensive but also reader dependent.7,11 Innovative automated methods have been proposed to align intramodal, multimodal, or interventional datasets;12–18 however, they do not yet meet all clinical needs. Although successful in isolation, intramodality coregistration often relies on modality-specific fiducial landmarks,12,14 limiting clinical utility and general usage. Multimodal coregistration methods improve the applicability, but they are based on aligning nontrivial morphological features (e.g., vascular calcium patterns15), rely on semiautomated subalgorithms,13 or use modality-dependent specialized software.18 Only limited attempts have been made to coregister pre-/postinterventional datasets,15,17,19 in which procedurally induced changes in vascular profile often prevent usage of traditional coregistration methods and errors in registration can be as high as 14 mm.17 In sum, there remains a clinical need for a robust, automated, and generally applicable coregistration algorithm that allows for effective fusion of complementary datasets ranging across intramodal, multimodal, and interventional settings.

To address this, we present a comprehensive nonrigid framework (Fig. 1) for the automatic longitudinal and rotational coregistration of vascular tomographic images, with luminal annotations as the sole alignment feature. The current work is an extension of our previous lumen-based rigid registration framework,19 now accommodating both nonrigid longitudinal and rotational registration. This is introduced to compensate for inconsistent image spacing and rotation that occurs due to spurious oscillatory catheter motion during image capture. Numerous benefits can be associated with such a method: (1) the use of only luminal features makes it applicable across any lumen-based image modality; (2) the nonrigid nature of the registration procedure allows for effective image morphing across disparate interventional sets; and (3) the algorithm being packaged in a fully-automated framework makes it directly applicable for clinical and research usage. In subsequent sections, the theoretical basics of the method are presented together with a set of validation tests, purposely selected to highlight utility and viability across a range of relevant cardiovascular imaging and treatment applications.

Fig. 1.


Schematic design of the full coregistration algorithm. The vessel lumen is segmented as a binary image in each acquisition and characterized on a per-frame basis. The resulting features are then used to sequentially perform longitudinal and rotational matching of frames.

2. Materials and Methods

2.1. Features and Metrics

Classical coregistration of tomographic images requires longitudinal and rotational alignment, which generally relies on exclusive morphological features or modality-specific fiducial landmarks. Such techniques are not flexible for general implementation and are limited by modality and to cases in which the sizes of imaged entities are identical. Focusing on shape in addition to size allows us to provide generalizable registration for a far broader array of vascular imaging modalities. In the following sections, we discuss how the lumen area (LA) can drive longitudinal registration and the lumen shape signature (LSS) rotational registration. Extracting these two features from imaged lumen contours provides an effective registration protocol applicable to a range of images.

2.1.1. Lumen area

The LA guides longitudinal coregistration by identifying corresponding frame pairs between two acquisitions. LA vectors, ALA and BLA, are populated by the respective cross-sectional areas depicted in each frame, such that ALA(i) and BLA(j) are the cross-sectional LAs in the i’th frame of acquisition A and j’th frame of acquisition B, respectively. Thus, each acquisition has a unique LA vector of the same length as the number of acquired frames (N).

The LA profile across an entire pullback (effectively represented by the LA vector) may be affected by image noise originating from, e.g., pulsatile motion or vascular spasm during acquisition, poor image quality, or motion artifacts. To better capture the underlying lumen profile of the vessel segment, irregularities are therefore mitigated by an empirically determined, second-order Savitzky–Golay (SG) smoothing filter of window length equal to the greater of 11 frames or 10% of the overall acquisition. The application of the SG filter allows for the retention of the overall trend within the components of the LA vector while removing potential image noise from said components.
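The smoothing step above can be sketched with SciPy's `savgol_filter`. The helper name `smooth_lumen_areas` and the rounding of the window to an odd length are illustrative choices; the paper only specifies a second-order filter with a window equal to the greater of 11 frames or 10% of the pullback.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_lumen_areas(la: np.ndarray) -> np.ndarray:
    """Second-order Savitzky-Golay smoothing of a lumen-area (LA) vector.

    Window length is the greater of 11 frames or 10% of the pullback,
    rounded up to an odd number as required by savgol_filter.
    """
    n = len(la)
    window = max(11, int(round(0.1 * n)))
    if window % 2 == 0:
        window += 1
    window = min(window, n if n % 2 == 1 else n - 1)  # cannot exceed signal length
    return savgol_filter(la, window_length=window, polyorder=2)
```

Because the polynomial order is 2, any locally quadratic trend in the LA vector passes through the filter unchanged while high-frequency frame-to-frame noise is suppressed.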

2.1.2. Lumen shape signature

The LSS is orientation sensitive and central to rotational coregistration. The LSS is essentially a normalized polar representation of the luminal boundary. Importantly, the LSS captures side branch features, where present, in its information content. This inclusion allows the framework to match side branches—known to be highly reliable fiducial landmarks—between datasets. Definition of the polar representation is dependent upon the lumen center—the point with respect to which the polar representation of the lumen boundary is computed. Departing from our earlier approach,19 the present work does not define the centroid as the lumen center; instead, the binary distance transform is computed with respect to the lumen contour,20 and its peak is chosen as the lumen center.
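A minimal sketch of this lumen-center computation, assuming the lumen is available as a binary mask; `lumen_center` is a hypothetical helper name, and breaking ties at the first peak in row-major order is an implementation choice the paper does not specify.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def lumen_center(mask: np.ndarray) -> tuple:
    """Lumen center sketch: the pixel of a binary lumen mask farthest from
    the lumen contour, i.e., the peak of the Euclidean distance transform.
    Ties are broken at the first peak in row-major order.
    """
    dist = distance_transform_edt(mask)  # distance to nearest background pixel
    return tuple(np.unravel_index(np.argmax(dist), dist.shape))
```

Unlike the centroid, this point stays inside the lumen even for strongly concave cross-sections (e.g., at side branches), which is the practical motivation for the change.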

Using the lumen center as the origin, LSS vectors [Fig. 2(a)] of length Nθ are constructed for each frame of a given acquisition. Here, Nθ is the total number of elements in Θ, a discretized form of the interval [0, 360) defined as Θ = {θk : θk = 360·(k − 1)/Nθ, where k = {1, 2, 3, …, Nθ}}. Each entry of an LSS vector is the radial distance of the discretized and normalized lumen boundary at the angular location θk defined with respect to a reference line [Fig. 2(b)]. LSS vectors are represented as ALSS{i,k} and BLSS{j,k}, indicating the k’th component of the LSS vectors for the i’th frame of acquisition A and j’th frame of acquisition B, respectively [Fig. 2(a)].
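The LSS construction can be sketched as follows, assuming the boundary is given as an ordered list of contour points. Normalizing by the mean radius is an assumption; the paper states only that the boundary is normalized.

```python
import numpy as np

def lumen_shape_signature(boundary: np.ndarray, center: np.ndarray,
                          n_theta: int = 360) -> np.ndarray:
    """LSS sketch: radii of the lumen boundary sampled at n_theta discrete
    angles about the lumen center, normalized by their mean (the
    normalization convention is an assumption).

    boundary: (N, 2) array of (x, y) contour points.
    center:   (2,) lumen center.
    """
    d = boundary - center
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0  # angle of each point
    rad = np.hypot(d[:, 0], d[:, 1])                        # radial distance
    theta_k = 360.0 * np.arange(n_theta) / n_theta          # discretized [0, 360)
    order = np.argsort(ang)
    # periodic interpolation of radius as a function of angle
    lss = np.interp(theta_k, ang[order], rad[order], period=360.0)
    return lss / lss.mean()
```

A perfectly circular lumen yields a constant LSS of ones, while side branches and eccentricity appear as angular peaks that later drive the rotational matching.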

Fig. 2.


(a) Visual representation of a typical sequence of LSS vectors obtained from an acquisition. LSS vectors of acquisition B are shown here. The first element of each LSS vector (i.e., k=1) is highlighted in black. (b) A single LSS vector (BLSS{j}) is shown, with indexed elements increasing in the direction shown by the curved arrow. For instance, the pointer shows BLSS{j,k=11}. (c) Visual depiction of the meaning of BLSS{j} and BLSS{j}[θ]. The black arrow is shown to emphasize rotation.

For registration purposes, the difference between the LSS vectors of frames i and j is defined as ϕ[i,j]

ϕ[i,j](θ) = (1/Nθ) Σ_{k=1}^{Nθ} |BLSS{j,k}[θ] / ALSS{i,k} − 1|, (1)
ϕ[i,j] = min_{θ∈Θ} ϕ[i,j](θ), (2)

where BLSS{j,k}[θ] quantifies the lumen boundary rotated by θ in the counterclockwise direction relative to the reference line [Fig. 2(c)].
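Since the LSS is sampled on a uniform angular grid, rotating a lumen by θk amounts to circularly shifting its signature by k elements. A sketch of the shape-difference metric of Eqs. (1) and (2) under that assumption (the shift direction chosen here is a convention):

```python
import numpy as np

def shape_difference(a_lss: np.ndarray, b_lss: np.ndarray):
    """Eqs. (1)-(2) sketch: average relative radial mismatch between two
    LSS vectors for every trial rotation, and its minimum over rotations.
    Rotating B by theta_k is implemented as a circular shift of k elements
    (shift direction is a convention choice).
    Returns (phi_min, best_shift).
    """
    n = len(a_lss)
    phis = np.empty(n)
    for k in range(n):
        b_rot = np.roll(b_lss, -k)                # B_LSS rotated by theta_k
        phis[k] = np.mean(np.abs(b_rot / a_lss - 1.0))
    best = int(np.argmin(phis))
    return phis[best], best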

2.2. Registration Method

After feature extraction (Sec. 2.1), complete coregistration is conducted by performing sequential longitudinal and rotational registration (Fig. 1).

2.2.1. Longitudinal registration

Longitudinal alignment comprises the identification of a frame correspondence map between two acquisitions. Such a map is achieved by sequential steps of rigid and nonrigid longitudinal registration.

Rigid registration

Given a pair of LA vectors, initial rigid longitudinal registration of acquisition B to acquisition A is completed by identifying the first point of correspondence (FPC) between the two acquisitions. The FPC is an ordered pair (FPCA, FPCB), where FPCA and FPCB are matched frame numbers of acquisitions A and B, respectively. The optimal FPC correctly characterizes the overlap between the two acquisitions and is obtained by ranking all possible FPCs according to an introduced S1 score [Eq. (3)], which rewards overlaps with minimal variation in the lumen area difference of the set of matched frames. For any given FPC, the luminal area difference vector ABLA is calculated according to Eqs. (4) and (5). The ABLA vector is composed of consistent frames computed according to the longitudinal spacings RA and RB of the two datasets being registered. Specifically, RB denotes the number of frames traversed in acquisition B for every RA frames in acquisition A. The S1 score prioritizes overlaps in which the variation in the luminal area difference is minimal. Variation within the array is quantified as the standard deviation, σ(ABLA). In addition, the variation is weighted by the size of the evaluated overlap to exclude infinitesimal overlaps between the two datasets

S1 = No / σ(ABLA), (3)
ABLA[k] = |ALA(FPCA + RA·k) − BLA(FPCB + RB·k)|, (4)
No = min((NA − FPCA)/RA, (NB − FPCB)/RB), (5)

where k = {0, 1, 2, …, No − 1} and No is the length of the overlap between acquisitions A and B for a given FPC.
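The coarse FPC search can be sketched as below. For brevity, only candidates with FPCB fixed at the first frame of B are scanned (an assumption; the full method evaluates all ordered pairs), and a small epsilon guards the division for perfectly matching overlaps.

```python
import numpy as np

def rigid_fpc(a_la: np.ndarray, b_la: np.ndarray, ra: int = 1, rb: int = 1):
    """Coarse FPC search sketch following Eqs. (3)-(5). Candidates with
    FPC_B fixed at the first frame of B are scanned; S1 rewards long
    overlaps with a flat lumen-area difference profile.
    Returns the winning (FPC_A, FPC_B) pair (0-indexed).
    """
    eps = 1e-9  # avoids division by zero for a perfectly matching overlap
    best_s1, best_fpc = -np.inf, None
    for fa in range(len(a_la)):
        n_o = min((len(a_la) - fa) // ra, len(b_la) // rb)  # Eq. (5)
        if n_o < 2:
            continue
        k = np.arange(n_o)
        diff = np.abs(a_la[fa + ra * k] - b_la[rb * k])     # Eq. (4)
        s1 = n_o / (np.std(diff) + eps)                     # Eq. (3)
        if s1 > best_s1:
            best_s1, best_fpc = s1, (fa, 0)
    return best_fpc
```

When B is an exact sub-pullback of A, the lumen-area difference is identically zero at the true offset, so the score peaks sharply there.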

In some cases, such as during coregistration of interventional datasets in which there are large changes in LA between acquisitions, the obtained FPC may require refinement. Therefore, in our framework, after coarse longitudinal registration, an additional refinement step is always performed to make any required corrections. To do so, all FPCs within a distance of 0.25 mm of the obtained coarse FPC are reconsidered using a secondary set of metrics. This distance was set after an empirical study conducted with the datasets used in Ref. 19. Specifically, for each of the candidate refined FPCs, invariant frame pairs (IFPs), i.e., matched frame pairs that remain geometrically invariant, are defined as entries in the ABLA array [Eq. (4)] that are <1 mm2. The 1 mm2 threshold was chosen under the assumption that, after the datasets are corrected for scale (the pixel-to-mm2 conversion is known for each acquisition), the luminal area change of the unaltered vasculature will not exceed this amount. For coronary arteries, the relative pulsatile arterial radius change is about 5% and is even less for arteries affected by atherosclerosis.21 The average LA across all our datasets is 7 mm2, which allows for a maximum pulsatile area change of 0.7 mm2, so a threshold of 1 mm2 works as a tolerance that helps identify frames that have not undergone severe vascular change between acquisitions.

Calculation of the refined FPC is completed by scoring all of the reconsidered FPCs according to the S2 score, calculated using only IFPs

S2 = (N*/No*) · (ALA(IFP) · BLA(IFP)) / (‖ALA(IFP)‖2 ‖BLA(IFP)‖2), (6)

where N* is the size of the IFP list and No* is the size of the considered ABLA array subset. More details about the S2 score can be found in Ref. 19.

The output of the rigid longitudinal registration method is the FPC with the highest value of the S2 score. Theoretically, the S2 score helps correct the previously obtained coarse FPC by comparing portions of the acquisitions that remain unaltered such as a bifurcation or simply a portion of the vasculature unaffected by the intervention.
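A compact sketch of the S2 computation for one candidate FPC, reading Eq. (6) as a cosine similarity of the lumen areas at invariant frame pairs, weighted by the fraction of overlap frames that are invariant (this reading of the flattened formula is an assumption; `s2_score` is a hypothetical helper name):

```python
import numpy as np

def s2_score(a_la_overlap, b_la_overlap, thresh=1.0):
    """S2 refinement score sketch per Eq. (6): cosine similarity of the
    lumen areas at invariant frame pairs (|area difference| < 1 mm^2),
    weighted by the fraction of overlap frames that are invariant.
    """
    a = np.asarray(a_la_overlap, dtype=float)
    b = np.asarray(b_la_overlap, dtype=float)
    diff = np.abs(a - b)
    ifp = diff < thresh                       # invariant frame pairs
    n_star, n_o = int(ifp.sum()), len(a)
    if n_star == 0:
        return 0.0                            # no unaltered vasculature matched
    cos = (a[ifp] @ b[ifp]) / (np.linalg.norm(a[ifp]) * np.linalg.norm(b[ifp]))
    return (n_star / n_o) * cos
```

A candidate whose overlap contains no invariant pairs scores zero, while an overlap of identical, unaltered frames scores one.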

Nonrigid registration

Despite refinement strategies, rigid registration assumes that the image sequences in both acquisitions are monotonic in space, i.e., the images capture vascular information only along the increasing axial direction. Therefore, the rigid strategy suffers from occasional inaccuracies originating from imperfect image sequence capture leading to nonuniform frame spacing and potentially nonmonotonic progression. In the modalities considered in this paper, catheter movement during image capture can be irregular, leading to duplicative image frames. Such acquisition inaccuracies cannot be corrected by rigid registration methods alone, so we employ additional nonrigid longitudinal registration as part of our complete coregistration method. Such a procedure is herein guided by the initial rigid registration estimate; the constraint imposed by the initial rigid registration reduces the search window of the imposed refinements, reducing the required computational cost of the secondary nonrigid registration. Nonrigid longitudinal registration is completed using the dynamic time warping (DTW) algorithm15 as it helps mitigate the very shortcoming from which the rigid registration method suffers.

For a clear understanding of the algorithm, two index lists, IA and IB, are defined, containing the frame indices of acquisitions A and B, respectively, that point to the same physical vasculature

IA=FPCA:  FPCA+RA·No,m=1+RA·No, (7)
IB=FPCB:  FPCB+RB·No,n=1+RB·No. (8)

The identified information overlap between acquisitions after performing rigid longitudinal registration allows us to define the rigidly aligned overlapping LA vectors ARLA and BRLA, which serve as input to the DTW algorithm as

ARLA = ALA(IA), (9)
BRLA = BLA(IB). (10)

The DTW algorithm finds an optimal mapping between the elements of the index arrays IA and IB by minimizing a given cost function for all possible path maps M that start from (1, 1) and end at (m,n), where m and n are defined according to Eqs. (7) and (8). Maps in M are essentially a list of index pairs matching frames of one acquisition to the other. We herein minimize the cost function Fc1—an evaluation of the cost of a particular map Mi—with respect to the cost matrix (C)

Fc1=σ(C(Mi)). (11)

The matrix C of size m×n is populated such that each cell is given as the cost of matching one frame in IA to another frame in IB

C(i,j) = |ARLA(i) − BRLA(j)|. (12)

Finding the optimal map is typically achieved with dynamic programming (DP) using an accumulated cost matrix while simultaneously keeping track of all possible maps. To reduce computational time, the search window of possible maps is limited to a band of controlled thickness around the diagonal of C, where the so-called warping window length (half of the band thickness) controls the degree of tolerated nonrigidity. The warping window length is often set manually, and larger values can lead to over-fitting, resulting in incorrect matching of image frames. As demonstrated later (Sec. 4.1), the cost function Fc1 used here helps protect against overfitting, as opposed to more common cost functions, and can therefore also coregister datasets of lower quality. Theoretically speaking, the metric minimizes the variation in the luminal area difference between two registered datasets. This is particularly useful for datasets with large luminal area changes. For example, if two LA vectors share unaltered vasculature, the metric will align the respective regions in both datasets. At large warping windows, frames containing altered vasculature might otherwise be forcefully aligned to frames containing unaltered vasculature; this metric does not allow for that. This behavior is demonstrated empirically in Sec. 4.1. In our case, the warping window length was set to four OCT frames (0.8 mm) and was kept constant throughout all registrations. The final longitudinal registration determined by the presented approach is thus the optimal map MAB = {(a1, b1), (a2, b2), …, (aNNR, bNNR)}, where a and b are elements of IA and IB, respectively, conforming to the warping window and minimizing the cost function Fc1.
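The banded DTW step can be sketched as follows. Note this is a simplified illustration: it runs the classical accumulated-cost recursion within a Sakoe–Chiba band and then reports Fc1 for the recovered path, whereas the paper selects among candidate maps by minimizing Fc1 itself, a costlier search.

```python
import numpy as np

def banded_dtw(a: np.ndarray, b: np.ndarray, window: int = 4):
    """Nonrigid longitudinal alignment sketch: DTW restricted to a band of
    half-width `window` around the diagonal of the cost matrix C (Eq. (12)),
    solved with the classical accumulated-cost recursion. Fc1 = std of the
    matched costs (Eq. (11)) is reported for the recovered path.
    Returns (path, fc1).
    """
    m, n = len(a), len(b)
    C = np.abs(a[:, None] - b[None, :])          # Eq. (12)
    D = np.full((m, n), np.inf)
    D[0, 0] = C[0, 0]
    for i in range(m):
        lo = max(0, int(round(i * n / m)) - window)
        hi = min(n, int(round(i * n / m)) + window + 1)
        for j in range(lo, hi):
            if i == j == 0:
                continue
            prev = min(D[i - 1, j] if i else np.inf,
                       D[i, j - 1] if j else np.inf,
                       D[i - 1, j - 1] if i and j else np.inf)
            D[i, j] = C[i, j] + prev
    # backtrack the minimum-cost path from (m-1, n-1) to (0, 0)
    path = [(m - 1, n - 1)]
    i, j = m - 1, n - 1
    while (i, j) != (0, 0):
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((p for p in moves if p[0] >= 0 and p[1] >= 0),
                   key=lambda p: D[p])
        path.append((i, j))
    path.reverse()
    fc1 = float(np.std([C[p] for p in path]))    # Eq. (11)
    return path, fc1
```

Two identical LA vectors map onto the diagonal with Fc1 equal to zero; nonuniform catheter motion shows up as horizontal or vertical steps within the band.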

2.2.2. Nonrigid rotational registration

Following nonrigid longitudinal registration, a DP framework is implemented to determine a global frame-wise rotation angle distribution for the longitudinally registered sections of each acquisition. The idea is to construct a similarity matrix that contains the similarity scores for two lumen shapes at different orientations and then find the optimal registration angle distribution with the help of this matrix. The range of the angular search space was kept at 720° (−360° to +360°) to avoid jumps between registration angles, such as from 0.2° for one frame to 356° for the next frame. The 720° range allows the frame adjacent to the 0.2° frame to move to −4°, which is equivalent to a registration angle of 356° and thereby preserves continuity in the overall registration angle distribution.

A similarity matrix (R) of size 2Nθ × NNR is first constructed, where Nθ is the number of discrete angular positions defined above (Sec. 2.1.2) and NNR is the total number of entries in MAB. Each cell is populated with a score reflecting the degree of similarity between two given LSSs. Similarity between two given LSSs is computed according to the metric given in Eq. (1), which evaluates the average radial deformation between the two lumens. The orientation corresponding to the lowest score for a given pair of LSSs, calculated using Eq. (1), indicates the optimal relative rotation between the two lumen shapes. To this end, we define shape similarity as the negation of this shape difference value, and it is used in populating R(p,q)

θp = 360·(p − 1)/Nθ, (13)
R(p,q) = −ϕ[IA(aq), IB(bq)](θp). (14)

The resulting matrix R is thus identical above and below p = Nθ, allowing for path continuity between 0° and 360°. To promote frame-to-frame cohesion and dampen oscillation between adjacent frames, the similarity matrix R is smoothed using a linear kernel convolution, yielding R̄

R̄(p,q) = (1/5) Σ_{k=q−5}^{q+5} (1 − |k − q|/5) · R(p,k). (15)

Intercolumn uniformity of the modified similarity matrix is obtained via the standardized column-wise z-score according to

R̄z(p,q) = [R̄(p,q) − μ(R̄(1:2Nθ, q))] / σ(R̄(1:2Nθ, q)), (16)

where μ and σ are the mean and standard deviation functions, respectively.
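The smoothing and standardization of Eqs. (15) and (16) can be sketched as below. Clipping the triangular kernel to valid column indices near the ends of the pullback is an edge-handling assumption the paper does not spell out.

```python
import numpy as np

def smooth_and_standardize(R: np.ndarray) -> np.ndarray:
    """Eqs. (15)-(16) sketch: triangular smoothing of the similarity matrix
    along the frame axis (columns), followed by column-wise z-scoring.
    Columns within 5 frames of either end use a kernel clipped to valid
    indices (an edge-handling assumption).
    """
    n_rows, n_cols = R.shape
    Rs = np.zeros_like(R, dtype=float)
    for q in range(n_cols):
        acc = np.zeros(n_rows)
        for k in range(max(0, q - 5), min(n_cols, q + 6)):
            acc += (1.0 - abs(k - q) / 5.0) * R[:, k]   # triangular weights
        Rs[:, q] = acc / 5.0                            # Eq. (15)
    mu = Rs.mean(axis=0)
    sd = Rs.std(axis=0)
    sd[sd == 0] = 1.0                                   # guard degenerate columns
    return (Rs - mu) / sd                               # Eq. (16)
```

After standardization, every column (frame pair) contributes on the same scale, so no single frame can dominate the subsequent path search.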

After the construction of the standardized similarity matrix, a path that maximizes the sum of all similarity scores of points through which the path passes is then found. The optimal path is calculated by constructing an accumulated cost matrix Rzacc

Rzacc(i,j) = R̄z(i,j) + max(Rzacc(max(1, i − Lwinθ) : min(i + Lwinθ, 2Nθ), j − 1)). (17)

Here, an additional constraint, Lwinθ, is added to the algorithm to confine the recorded registration angle between adjacent matched frame-pairs to (Lwinθ/Nθ)·720°. Herein, we chose a value of Lwinθ such that a maximum difference of 30° is allowed between adjacent matched frame pairs.

Under ideal acquisition conditions amenable to the rigid transformation assumption, the detected path proceeds horizontally through the data matrix, indicating a single rotation angle for all matched frame pairs. The final rotational registration determined by the presented approach is a vector of NNR angles, corresponding to indices of the optimal calculated path through Rzacc, with each entry corresponding to a frame pair of MAB.
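The accumulation and backtracking described above can be sketched as follows; `optimal_rotation_path` is an illustrative name, and the explicit backpointer array is one of several ways to recover the path from the accumulated matrix of Eq. (17).

```python
import numpy as np

def optimal_rotation_path(Rz: np.ndarray, l_win: int = 5):
    """Eq. (17) sketch: accumulate similarity column by column, allowing the
    row (rotation-angle index) to move at most l_win rows between adjacent
    matched frame pairs, then backtrack the row indices of the best path.
    Rz: standardized similarity matrix (2*N_theta rows x N_NR columns).
    Returns the list of row indices, one per matched frame pair.
    """
    n_rows, n_cols = Rz.shape
    acc = np.zeros_like(Rz, dtype=float)
    acc[:, 0] = Rz[:, 0]
    back = np.zeros((n_rows, n_cols), dtype=int)
    for j in range(1, n_cols):
        for i in range(n_rows):
            lo, hi = max(0, i - l_win), min(n_rows, i + l_win + 1)
            k = lo + int(np.argmax(acc[lo:hi, j - 1]))  # best reachable predecessor
            acc[i, j] = Rz[i, j] + acc[k, j - 1]
            back[i, j] = k
    # backtrack the angle index for every frame pair
    rows = [int(np.argmax(acc[:, -1]))]
    for j in range(n_cols - 1, 0, -1):
        rows.append(int(back[rows[-1], j]))
    rows.reverse()
    return rows
```

For a pullback with no catheter twist, the highest-similarity row is the same in every column and the recovered path is horizontal, i.e., a single rotation angle for all frames.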

2.3. Patient Data Acquisition and Lumen Annotation

Our coregistration method (Sec. 2.2) was applied to two distinct imaging datasets: one multimodal dataset and one single-modality dataset acquired before and after intravascular intervention. All data were collected following standard clinical protocols, including informed consent and institutional approval. We specifically considered two forms of intravascular tomographic images—optical coherence tomography (OCT) and intravascular ultrasound (IVUS)—as together they are the modalities of choice in interventional cardiology and offer different benefits and challenges. They provide complementary fidelity: OCT offers greater spatial precision and IVUS greater vascular depth, and they operate at different spatial scales, so OCT and IVUS images of the same sections appear to be of different sizes. As such, they constitute a rigorous test of fusion imaging and hybrid registration.

A number of datasets were used. OCT and IVUS pullbacks were acquired for the same coronary lesion in seven patients in a published dataset.19,22 OCT images were collected under automated pullback using the C7-XR FD-OCT optical frequency domain intravascular imaging system and the DragonFly catheter (St. Jude Medical, Lightlab Imaging Inc., Westford, MA).22 IVUS data were subsequently acquired in the same vascular region using a 45-MHz rotational IVUS imaging catheter (Volcano Inc., Rancho Cordova, CA).22 The IVUS catheter was positioned using fiduciary points such as calcification spots so that the IVUS and OCT pullback spans coincided. Importantly, these datasets have different longitudinal resolutions: the IVUS datasets have 0.017-mm and the OCT datasets 0.2-mm frame spacing, yielding a 12:1 frame ratio. Acquisitions were screened manually for image quality. To limit interexpert bias, two interventional cardiologists manually annotated the vessel lumen—blinded to the images from the modality not being actively annotated—with their averaged expert annotations subsequently used as input to our proposed coregistration algorithm. An experienced reader of intravascular images manually determined the optimal longitudinal and rotational coregistration in 55 frames throughout the seven acquisitions in which clearly identified fiduciary landmarks were present. These manual registrations served as the reference for evaluating the automated method's performance.

In a separate interventional dataset, OCT pullbacks were acquired from roughly the same coronary artery segments in five patients in which lumen shape and LA changed as they underwent orbital atherectomy and subsequent stent deployment. OCT pullbacks of the lesions were acquired at baseline (before intervention), after atherectomy, and again after stent deployment. Poststenting acquisitions were available in two patients, providing a total of nine acquisition pairs for coregistration. Full pullbacks were automatically processed with lumens annotated using the basic edge detection and subsequent continuous surface smoothing described in Ref. 22. An expert experienced in interpreting OCT images provided longitudinal and rotational coregistration at recognizable points throughout the pullbacks for validation purposes.

3. Results

3.1. Coregistration of Multimodal Datasets

Based on the method presented in Sec. 2.2.1, the OCT pullbacks were longitudinally registered to the corresponding IVUS pullbacks with a registration deviation of 0.29±0.14  mm from manual annotations [Fig. 3(a)]. Expressed in terms of OCT frames, the average and maximum longitudinal registration deviation in the multimodal dataset were under two and three OCT frames, respectively.

Fig. 3.


(a) Longitudinal registration error for multimodal datasets was 0.29±0.14  mm across the seven acquisition pairs. (b) Angular offset error after rotational registration was 29.1°±23.2° relative to manual alignment. (c) and (d) Pullback cross-sections of IVUS (c) and OCT (d) of the fourth patient [fourth error bar in (a) and (b)] show good visual agreement after registration. (e) and (f) An IVUS frame (e) with its corresponding OCT frame (f) further shows the quality of registration. The lumen boundary is highlighted in yellow in both frames.

Following longitudinal registration and using the strategy described in Sec. 2.2.2, the OCT pullbacks were registered circumferentially to the corresponding IVUS pullbacks. The alignment difference relative to manual matching was 29.1°±23.2°, ranging between 0.8° and 93.8° [Fig. 3(b)].

A visual representation of one registered patient-specific dataset shows sound qualitative agreement [Figs. 3(c) and 3(d)], further illustrated by the alignment of an IVUS frame [Fig. 3(e)] with its corresponding OCT frame [Fig. 3(f)].

3.2. Coregistration of Interventional Datasets

Similarly, the preintervention OCT pullbacks were successfully registered longitudinally to the corresponding postintervention OCT pullbacks with a longitudinal registration deviation of 0.18±0.16  mm from manual alignment [Fig. 4(a)], an average variability of less than one OCT frame. The pullbacks were also successfully registered rotationally, with a cumulative deviation from manual alignment of 7.7°±6.7° and the majority of frames deviating between 1.0° and 14.5° [Fig. 4(b)]. The overall interventional dataset consisted of baseline-atherectomy, atherectomy-poststenting, and baseline-poststenting image pairs; for these cases, the longitudinal registration differences were 0.06±0.09  mm, 0.23±0.13  mm, and 0.36±0.14  mm, respectively. The corresponding rotational registration deviations were 3.26°±2.45°, 10.33°±3.41°, and 16.25°±5.97°.

Fig. 4.


(a) Longitudinal registration error for interventional datasets was limited to 0.18±0.16  mm. (b) Angular offset error after rotational registration was limited to 7.7°±6.7°. (c) Overlay of preintervention reference pullback (yellow) and registered postatherectomy pullback (blue). (d) Overlay of preintervention reference pullback (magenta) and registered poststenting pullback (green). (e) Overlay of a preintervention OCT frame with its corresponding coregistered postatherectomy OCT frame. (f) Overlay of a preintervention reference pullback frame (within the stented region) with its corresponding coregistered poststenting OCT frame. The stent struts can be seen here as thin, high-intensity strips casting distal shadows.

The quality of registration is visually confirmed by overlaying the baseline and the corresponding registered postintervention datasets [Figs. 4(c) and 4(d)]. An example of successful registration of a baseline-poststenting dataset is shown in Fig. 4(d). The overlays of frames from Figs. 4(c) and 4(d) show calcified regions to be aligned successfully, even though they were not considered as reference features during coregistration [Figs. 4(e) and 4(f)].

4. Discussion

In this paper, we present a fully automated, nonrigid framework for longitudinal and rotational coregistration of vascular tomographic images from disparate acquisition modalities, applicable across intra- and multimodality datasets, as well as allowing for superposition of image sets acquired over the course of vascular intervention. Furthermore, with our approach relying solely on luminal features, limitations associated with modality-specific morphological features are overcome, ensuring applicability across a wide range of cardiovascular settings. By applying the methodology to different types of intravascular datasets, we have shown how our lumen-based coregistration produces highly accurate longitudinal registration and reasonably accurate rotational registration results even in challenging clinical scenarios, extending the utility of previous coregistration algorithms and promising applicability and potential for future clinical and experimental usage.

4.1. Longitudinal Registration

Results indicate excellent agreement with expert manual longitudinal registration [Figs. 3(a) and 4(a)]. As reported in Sec. 3.1, the longitudinal registration error for the multimodal datasets is effectively less than two frames, highlighting the accuracy of our approach and its benefit over comparable approaches (Molony et al.15 reported longitudinal errors of around four frames). That our approach renders such accurate longitudinal coregistration indicates that the variation in the luminal profile (the LA vectors) of a vessel carries sufficient information for achieving longitudinal coregistration. The introduced nonrigid longitudinal registration is an additional processing step that allows for nonlinear warping of one acquisition over another. This is especially important when comparing images acquired before and after vascular manipulation and intervention. Indeed, high-quality longitudinal registration was achieved in these challenging datasets, consistent with the best results reported in Ref. 14.

Rigid registration works well when images are captured without motion artifacts, such that simple rotation and translation alone can align them. More often this is not the case, and registration requires additional image manipulation in time and space. This is specifically the case in interventional datasets, in which there is a significant change in LAs and the choice of large warping windows during nonrigid longitudinal registration can result in overfitting and an inaccurately predicted coregistration. To deal with this, a different cost function (Fc1), defined in Eq. (11), was used instead of the classical cost function Fc2,14,15 typically defined as

Fc2 = Σ C(Mi) / numel(Mi), (18)

where the sum runs over all matched costs in the map and the function “numel” computes the total number of elements present in a map.

The advantage of Fc1 over Fc2 becomes evident when the quality of alignment for registered acquisitions is quantified using the Pearson coefficient ρ. As shown in Fig. 5(a), Fc2 changes continuously across a wide range of warping window sizes; in comparison, Fc1 reaches a converged plateau at a comparably narrow warping window, highlighting both stability and robustness against potential overfitting. The latter is further demonstrated in Figs. 5(b) and 5(d), which show that registration by Fc2 at large window lengths synthetically warps LA vectors, causing them to lose regional identity and resulting in excessive shape changes and stretching during coregistration [Fig. 5(d), dotted black inset]. By contrast, essential regional information and identity are preserved by our proposed Fc1.

Fig. 5

(a) Variation of the quality of nonrigidly registered datasets with warping window length. Green and red curves represent the datasets registered by minimizing Fc2 and Fc1, respectively. (b) Variation of mean LA difference of registered datasets with warping window length, with the same color convention. (c, d) Visual demonstration of registration quality achieved by minimizing Fc2 as window length is increased. Blue and red lines represent an aligned pre- and poststenting dataset. Panel (d) additionally marks a location (dotted black line) where the poststenting dataset (green) is overstretched to match the baseline (blue), demonstrating that the cost function Fc2 leads to loss of important clinical information; in this case, the leading part of the deployed stent.

Successful registration of all 16 datasets, both multimodal and interventional, indicates that nonrigidly registered clinical pullbacks remain close to their rigidly registered counterparts, implying that pullbacks have near-rigid longitudinal characteristics. Nonrigid registration is nevertheless indispensable: it enables local corrections to rigidly registered frames, which in turn improves rotational registration. This becomes particularly important when constructing detailed three-dimensional (3D) geometric models of the considered vasculature.23,24
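The near-rigid longitudinal character means a single global shift captures most of the alignment, which nonrigid warping then refines locally. A minimal sketch of such a rigid step, assuming a cross-correlation of LA vectors (the paper does not specify this exact form; names are hypothetical):

```python
import numpy as np

def rigid_offset(la_fixed, la_moving):
    """Estimate the single rigid longitudinal shift (in frames) between two
    lumen-area (LA) vectors by maximizing their normalized cross-correlation.
    Hypothetical helper; the paper's implementation may differ."""
    a = (la_fixed - la_fixed.mean()) / la_fixed.std()
    b = (la_moving - la_moving.mean()) / la_moving.std()
    corr = np.correlate(a, b, mode="full")
    # Peak index maps to the signed lag k satisfying a[n + k] ~ b[n].
    return int(np.argmax(corr)) - (len(b) - 1)

# A "moving" pullback that is the fixed one shifted by 3 frames.
fixed = np.sin(np.linspace(0, 4 * np.pi, 60)) + 2.0
moving = np.roll(fixed, 3)
offset = rigid_offset(fixed, moving)  # magnitude 3; sign follows the lag convention
```

Nonrigid warping would then be applied on top of this global shift to correct residual local misalignments frame by frame.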

4.2. Rotational Registration

Fulfilling the complete task of vascular coregistration, the longitudinal registration method was complemented with a subsequent rotational one. The results of this angular coregistration indicate good agreement with expert manual annotation [Figs. 3(b) and 4(b)], although higher variation was observed in the multimodal dataset (averaging around 30°). Two factors likely affecting accuracy in the multimodal dataset are worth pointing out: discrepancies in frame spacing and differences in captured geometry.

First, in the setting of multimodal OCT-to-IVUS coregistration, comparably large discrepancies exist in longitudinal frame spacing. As such, a number of IVUS frames could plausibly be matched to a single OCT frame, resulting in varying angular alignments depending on which OCT-IVUS pair is matched (in preliminary tests, the difference between matching two adjacent IVUS frames to a corresponding OCT frame could be up to 15°). As the algorithm finds an optimal global distribution of registration angles, such deviations can locally offset the resulting registration angle distribution.
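The frame-spacing mismatch can be made concrete with a nearest-neighbor matching of frame positions along the pullback; the spacings below are illustrative, not the study's acquisition parameters:

```python
import numpy as np

def match_frames(z_oct, z_ivus):
    """For each OCT frame position (mm along the pullback), return the index
    of the nearest IVUS frame. With finer IVUS spacing, several IVUS frames
    are plausible matches for each OCT frame."""
    return np.abs(z_oct[:, None] - z_ivus[None, :]).argmin(axis=1)

z_oct = np.arange(0, 10, 0.2)    # e.g., 0.2 mm OCT frame spacing
z_ivus = np.arange(0, 10, 0.05)  # e.g., 0.05 mm IVUS frame spacing
pairs = match_frames(z_oct, z_ivus)
```

With these spacings, four IVUS frames fall within each OCT frame interval, so any of them is a defensible match; the resulting ambiguity is what locally perturbs the registration angle distribution.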

Second, variance can be introduced when certain geometric features of the lumen are expressed in one modality but absent in the other. This may result in part from differences in local image acquisition resolution or from error in the lumen annotation itself. Figure 4(b) shows that, for the dataset of the fifth patient, a single registration angle offset of 90° significantly decreases the overall accuracy. Inspection of the IVUS-OCT frame pair that caused this large deviation from the expert-annotated registration angle revealed that the segmented lumen from the OCT frame was nearly circular, whereas the matched IVUS frame was distinctly noncircular. Geometrically, rotational alignment of symmetric shapes admits nonunique solutions, and decreased accuracy naturally follows. In other words, if at least one of the IVUS- or OCT-imaged lumens is circular (i.e., shows a complete absence of fiducial landmark features), all rotational registration angles become equally likely.
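The degeneracy for circular lumens can be illustrated with a toy rotational metric on polar radius profiles (a stand-in for the actual similarity measure): an eccentric lumen yields a sharp optimum, whereas a perfect circle scores identically at every candidate angle.

```python
import numpy as np

def rotational_match(r_a, r_b):
    """Best-match rotation between two lumen contours given as polar radius
    profiles r(theta) sampled at equal angles. Returns (angle_deg, spread):
    a small spread across candidate rotations means the optimum is
    ill-defined. Toy metric, not the paper's similarity measure."""
    n = len(r_a)
    scores = np.array([-np.abs(r_a - np.roll(r_b, s)).mean() for s in range(n)])
    best = int(scores.argmax())
    return 360.0 * best / n, float(scores.max() - scores.min())

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
eccentric = 2.0 + 0.5 * np.cos(theta)  # lumen with a clear landmark
circular = np.full(360, 2.0)           # featureless, circular lumen

angle, spread = rotational_match(eccentric, np.roll(eccentric, 90))
_, flat_spread = rotational_match(circular, circular)
```

For the eccentric contour the recovered rotation is exact and the score spread is wide; for the circle the spread is zero, so every rotation is equally likely, exactly the failure mode described above.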

It is worth comparing the distribution of rotational registration angles from the multimodal dataset to that of the intramodal interventional one [Figs. 6(a) and 6(b), respectively]. For acquisitions obtained with the same modality, with or without significant differences in LAs, the global registration angle distribution is almost constant, meaning that the two datasets differ by a single rigid rotational transformation [Fig. 6(b)]. In contrast, a varying distribution is observed for the multimodal IVUS-OCT registration [Fig. 6(a)], and the goodness of rotational registration [Figs. 6(c) and 6(d)] suggests that both distributions are genuine rather than artifactual. Such behavior likely follows from varying catheter rotation during image acquisition, probably because the OCT catheter is less susceptible to pulling than the IVUS catheter.25 The interventional acquisitions were almost equal in length, so any catheter rotation biased both by the same amount and the relative rotation was minimal. These registration distributions substantiate the need for a generalized nonrigid registration algorithm, akin to that presented herein, capable of handling such datasets. Indeed, the present nonrigid alignment halved the error of the framework from 66° (achieved in the foundational work implementing rigid alignment for the same dataset19) to 30°. Figures 6(e) and 6(f) show the luminal area difference for the displayed multimodal and interventional datasets, respectively. In Fig. 6(e), a clear positive bias is seen: the measured IVUS LA is on average 2.07±1.24 mm2 larger than the measured OCT LA. This is a common, previously reported phenomenon in which IVUS overestimates LA.26 However, the exact area values are unimportant for this algorithm because the framework depends on the frame-wise area distribution rather than absolute values. For the interventional dataset [Fig. 6(f)], apart from the frames in which a stent was deployed, the luminal area difference in most matched frames hovers around 0 mm2. This shows that the algorithm is able to find and align healthy frames correctly despite the challenging inherent mismatch in true geometry within the stented region.
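The continuous path through the rotational similarity matrix [the white line in Figs. 6(a) and 6(b)] can be emulated with dynamic programming that trades per-frame similarity against angle jumps between adjacent frames. A sketch under stated assumptions: `max_step` and `penalty` are illustrative knobs, not the paper's parameters.

```python
import numpy as np

def smooth_angle_path(sim, max_step=2, penalty=0.1):
    """Pick one rotation angle per frame from a (frames x angles) similarity
    matrix, maximizing total similarity while limiting the angle change
    between adjacent frames (dynamic programming over frames, with a
    wrap-around angle index)."""
    n_f, n_a = sim.shape
    best = sim[0].copy()
    back = np.zeros((n_f, n_a), dtype=int)
    steps = np.arange(-max_step, max_step + 1)
    for f in range(1, n_f):
        # For each angle, consider predecessors within max_step (circularly);
        # cand[k, a] scores coming from angle (a + steps[k]) at a step penalty.
        cand = np.stack([np.roll(best, -s) - penalty * abs(s) for s in steps])
        pick = cand.argmax(axis=0)
        back[f] = (np.arange(n_a) + steps[pick]) % n_a
        best = cand.max(axis=0) + sim[f]
    # Backtrack the optimal path from the best final angle.
    path = [int(best.argmax())]
    for f in range(n_f - 1, 0, -1):
        path.append(int(back[f][path[-1]]))
    return path[::-1]

# Synthetic similarity matrix with a slowly drifting peak angle per frame.
n_f, n_a = 20, 36
sim = np.zeros((n_f, n_a))
truth = [(5 + f // 4) % n_a for f in range(n_f)]
for f, a in enumerate(truth):
    sim[f, a] = 1.0
path = smooth_angle_path(sim)
```

The continuity penalty is what keeps the recovered angle distribution from oscillating between adjacent frame pairs, as required of the white path in Fig. 6.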

Fig. 6

(a) Color map of the matrix Rz¯, indicating the strength of rotational alignment, obtained for a particular IVUS-OCT dataset, with the obtained angular distribution superimposed as a white line. An alternate path that leads to the same rotational registration is shown in light green. Yellow indicates a very high match percentage between two matched lumen shapes at a particular rotation; cyan indicates the worst. (b) Color map of the matrix Rz¯ obtained for a particular pre- and postintervention OCT dataset, with the obtained angular distribution superimposed as a white line. The obtained path (white) follows most of the yellow spots while maintaining proper continuity and minimal oscillation between adjacent matched frame pairs. (c) Overlay of IVUS and OCT images at various locations of the coregistered dataset. Note that the image scale differs between modalities. (d) Overlay of pre- and postintervention OCT images at different locations of the coregistered dataset. (e) Luminal area difference of the multimodal dataset shown in (a) and (c); values were 2.07±1.24 mm2. (f) Luminal area difference of the interventional dataset shown in (b) and (d); the difference mostly hovers around 0.0 mm2 except in the region where a stent was deployed to expand the final LA.

4.3. Computational Time

Minimizing the computational time of a registration framework is key to clinical translation. The framework presented here was implemented in MATLAB 2019 on a standard laptop with a seventh-generation Intel Core i5 processor. The computational time of the registration algorithm can be broken down into (1) rigid longitudinal registration, (2) nonrigid longitudinal registration, (3) building the rotational similarity matrix for rotational registration, and (4) finding the optimal path in the rotational similarity matrix to obtain the relative rotation of individual LSS. Neither the rigid nor the nonrigid longitudinal registration poses high demands on computational power; with the iterative procedures completely vectorized, processing times were below 5 s for all 16 datasets, representing effectively real-time performance. Creating the rotational similarity matrix is a comparably more demanding task, with a maximum recorded processing time just below 3 min for a dataset containing >1000 images. Nevertheless, the overall computational time across all datasets was 2.3±1.4 min for complete registration, and further gains in computational efficiency to achieve more rapid coregistration remain readily attainable.

4.4. Clinical Contextualization

Successful registration in the multimodal datasets demonstrates the flexibility of the method even for images that capture vascular information at different wall depths, time steps, and spatial scales. Our processing paradigm is not only promising in such challenging datasets but also circumvents obstacles faced by other algorithms. For example, in IVUS-OCT registration of an artery with calcified plaques buried deep in the arterial wall, OCT may fail to capture such features owing to its limited penetration depth.27 Such datasets are impossible to register with methods that rely on the location or morphology of calcified plaques as their principal registration features. The method's reliance on luminal features alone means that it can be used with any dataset offering a clear delineation of the lumen, or with images for which minor manipulation can assign lumen boundaries to the vasculature of interest. An example of the latter is the registration of computed tomography images and multiparametric magnetic resonance imaging for evaluation of dominant intraprostatic lesions.28 Modalities in which the lumen can be easily extracted, such as OCT, IVUS, multiplane angiography, and computed tomography angiography, are sometimes combined to determine plaque composition,27,29 assess plaque vulnerability, improve stent visualization,30 or generate 3D geometric models for biomechanical simulations.24,31 Registering such myriad modalities by extracting overlapping nontrivial vascular features is highly challenging; in such scenarios, the proposed method will be of great use. The use of luminal features alone allows us to leverage the superior capabilities of both modalities and compensate for their respective limitations in precision or penetration depth. Broadening the viability of coregistration thus enhances clinical decision-making through better presentation of the follow-up state and further enables image fusion.

Results with interventional datasets demonstrate the ability of the processing paradigm to handle cases with large local changes in LA. This makes the tool of particular interest for applications that evaluate the performance of procedures such as orbital atherectomy, gauge the accuracy of stent deployment location, evaluate plaque vulnerability, or quantify vascular responses to endovascular implants. Given the quality of the results obtained with the interventional dataset, the framework is expected to work well with other plaque phenotypes, such as in acute coronary syndrome, in which plaque-affected lumens undergo less drastic area changes than the lumens considered in this study. More importantly, despite relying on lumen morphology alone, the coregistration in numerous cases, for example, in Fig. 4(e), produced excellent alignment of deep tissue structures such as calcified plaque. This is particularly notable considering that such features were not considered during registration, and it implies two specific observations. First, this outcome highlights how the constituent composition of deep tissue influences luminal morphology, which is directly linked to regional hemodynamics. Second, it demonstrates the power of our method and the self-sufficiency of lumen features alone for coregistration, revealing how matched lumen shape characteristics may infer deep tissue structures and help unravel important coupling between possible clinical events and the apparent microstructure of a lesion. The method relies on the fact that the surrounding vasculature imparts unique shapes to the lumen, allowing frames to be registered without difficulty. Another example of intervention evaluation is shown in Fig. 4(f): an OCT frame containing an extensive calcified plaque (translucent magenta) altered during stenting (stretched green lumen outline). This method therefore shows promise for datasets acquired after large time intervals, which can help ascertain and quantify disease progression in which the luminal area difference is expected to be of the same order as that tested here.

The above clinical contextualization reiterates the key feature that makes this framework of significant utility: it is blind to both the image modality and the longitudinal state of the dataset. This utility, with its potential to provide clinical insight, is possible mainly because the algorithm relies only on the luminal features of the imaged vasculature rather than on modality-specific fiducial landmarks.

4.5. Limitations and Future Work

The presented registration framework shows good results with the 16 datasets considered here; however, the sample size is small, and the performance of the framework should be assessed on larger datasets. Furthermore, as these datasets were clinical and invasive validation methods were therefore not feasible, the framework is benchmarked against manual coregistration: the reported errors are not absolute but rather deviations between the manual and automated processes. Although manual coregistrations have inherent bias and are subject to human error, they remain the gold standard in clinical practice. The performance of automated methods relative to interobserver performance, as well as the impact of interobserver variability in the segmentation of the underlying lumen annotations, should be explored in future work to provide additional context for the reported performance and to evaluate robustness to input error.

The 3D and anisotropic nature of the image targets adds further challenges. It can be difficult to find a perfect match in the longitudinal plane, especially with IVUS, in which the same feature can appear several times due to motion artifact. Each imaging modality also has its own fidelity limitations. The longitudinal sampling of IVUS is twelve times finer than that of OCT, so an annotator must select a match for each OCT frame from among twelve candidate IVUS frames. The rotational registration described in this paper is sensitive to lumen segmentation and may incur large rotational inaccuracy. However, as longitudinal registration is based on the global variation of the LAs, the nature of the metric makes it less sensitive to local segmentation errors, allowing our method to achieve a coregistration error of less than 2 OCT image frames (0.4 mm). The rotational inaccuracy particularly arises when a segmented lumen resembles a circle and thus lacks circumferentially distinguishing features. The current rotational metric has difficulty registering such frames, giving rise to the 30° average error. The clinical relevance of this error is hard to assess, and its importance will depend on the context in which the coregistration is performed. Where high rotational accuracy is paramount, precision can likely be improved by including targeted mural features in addition to those considered here, chosen based on the modalities being registered. Such features could be used as a postprocessing correction to further refine the alignment of images.

Finally, the method’s ability to register datasets relies on the accuracy with which the lumen can be segmented from the image. Low-quality lumen segmentation can arise from imaging imperfections associated with some modalities, such as blooming artifacts in CT scans at implant sites or residual blood in OCT images due to insufficient flushing by an operator. In cases in which lumen segmentation is challenging, accuracy will be compromised. However, as both the longitudinal and rotational registrations are based on global metrics, the proposed method is expected to be resilient to such localized inaccuracies. An immediate extension of this work will be to evaluate the framework with other modalities and to conduct a sensitivity study of the effect of lumen segmentation on the accuracy of rotational registration. It will also be worthwhile to study the amount of mutual overlap required for successful registration with this method.

5. Conclusion

A fully automated nonrigid registration protocol using imaged lumen contours as its sole input enables accurate longitudinal and rotational coregistration of tomographic vessel images. Multiplanar image coregistration of interventional datasets from different modalities combines the advantages of each individual modality, whether in penetration depth or resolution, and thereby enhances clinical decision-making. The results of our lumen-based registration approach showcase the influence of deep tissue structures on luminal morphological features, encouraging its use in follow-up microstructural analyses that correlate lesion microstructure and micromechanics to adverse clinical events.

In summation, lumen-based vascular image coregistration offers a universal, robust, and straightforward approach for aligning tomographic datasets, applicable to most common intravascular imaging scenarios and potentially capable of registering tomographic acquisitions of any biological structure with a lumen.

Acknowledgments

ERE and FRN are supported in part by a grant (R01 HL161069) from the National Institutes of Health.

Biographies

Abhishek Karmakar received his BTech and MTech degrees in mechanical engineering from Indian Institute of Technology, Kanpur, India, in 2021. He is currently a PhD candidate in biomedical engineering at Meinig School of Biomedical Engineering, Cornell University, New York, United States.

Max L. Olender received his PhD in mechanical engineering from Massachusetts Institute of Technology, where he was also a postdoctoral associate. He received his BS and MS degrees in mechanical and biomedical engineering, respectively, from the University of Michigan. His research interests include medical imaging, artificial intelligence in medicine, computational modeling, and biomechanics. He is a member of the American Association for the Advancement of Science, Biophysical Society, and Institute of Electrical and Electronics Engineers.

David Marlevi is a Knut and Alice Wallenberg postdoctoral fellow at Karolinska Institute (KI) and Massachusetts Institute of Technology (MIT). He received his MSc degree in applied mechanics from Royal Institute of Technology (KTH) in 2014 and his PhD in biomedical engineering from KTH and KI in 2019. His research interests include vascular drug delivery and cardiovascular imaging, leading the affinity group on drug delivery devices in the Edelman Lab at MIT.

Evan Shlofmitz is an interventional cardiologist and the director of intravascular imaging at St. Francis Hospital, The Heart Center in Roslyn, New York. He completed a fellowship in interventional cardiology at Georgetown University/MedStar Washington Hospital Center. He has a strong interest in clinical education and research, lecturing internationally. His research interests have centered on intravascular imaging, the treatment of calcified coronary artery disease, in-stent restenosis, and the optimization of stent implantation.

Richard A. Shlofmitz is the chairman of cardiology at St. Francis Hospital, The Heart Center, where he has practiced since 1987. Performing over 1000 coronary interventions annually, he has one of the largest volumes of experience in percutaneous intervention. He also has the most clinical experience with orbital atherectomy and optical coherence tomography (OCT) worldwide. He has worked to advance the field with a focus on precision PCI.

Elazer R. Edelman is a professor at Massachusetts Institute of Technology and Harvard Medical School, and a senior attending physician at Brigham and Women’s Hospital. He also directs MIT’s Institute for Medical Engineering and Science, Clinical Research Center, and Harvard-MIT Biomedical Engineering Center. He is a fellow of the American College of Cardiology, American Heart Association, American Academy of Arts and Sciences, and the National Academies of Inventors, Medicine, and Engineering, among other organizations.

Farhad R. Nezami is a lead investigator in the Division of Thoracic and Cardiac Surgery in Brigham and Women’s Hospital, and a faculty member in the Department of Surgery at Harvard Medical School. He received his doctoral degree in mechanical engineering from Swiss Federal Institute of Technology (ETH) in Zürich. His current research interests include computational pathophysiology, clinical machine learning, deep learning for medical image processing, in silico predictive tools, and drug delivery.

Disclosures

No conflicts of interest, financial or otherwise, are declared by the authors.

Contributor Information

Abhishek Karmakar, Email: ak944@cornell.edu.

Max L. Olender, Email: molender@mit.edu.

David Marlevi, Email: marlevi@mit.edu.

Evan Shlofmitz, Email: eshlofmitz@gmail.com.

Richard A. Shlofmitz, Email: hartfixr1@aol.com.

Elazer R. Edelman, Email: ere@mit.edu.

Farhad R. Nezami, Email: frikhtegarnezami@bwh.harvard.edu.

6. Code, Data, and Materials Availability

The data used to support the findings of this study can be made available upon reasonable request.

References

1. Vancraeynest D., et al., “Imaging the vulnerable plaque,” J. Am. Coll. Cardiol. 57(20), 1961–1979 (2011). 10.1016/j.jacc.2011.02.018

2. Koskinas K. C., et al., “Intracoronary imaging of coronary atherosclerosis: validation for diagnosis, prognosis and treatment,” Eur. Heart J. 37(6), 524–535 (2016). 10.1093/eurheartj/ehv642

3. Athanasiou L., Nezami F. R., Edelman E. R., “Computational cardiology,” IEEE J. Biomed. Health Inf. 23, 4–11 (2018). 10.1109/JBHI.2018.2877044

4. Laviña B., “Brain vascular imaging techniques,” Int. J. Mol. Sci. 18, 70 (2017). 10.3390/ijms18010070

5. Bizjak Ž., et al., “Registration based detection and quantification of intracranial aneurysm growth,” in Proc. SPIE 10950, Medical Imaging 2019: Computer-Aided Diagnosis, p. 1095007 (2019). 10.1117/12.2512781

6. Mintz G. S., “Clinical utility of intravascular imaging and physiology in coronary artery disease,” J. Am. Coll. Cardiol. 64(2), 207–222 (2014). 10.1016/j.jacc.2014.01.015

7. Shlofmitz E., Jeremias A., “Precision percutaneous coronary intervention: is optical coherence tomography co-registration the future?” Catheter. Cardiovasc. Interv. 92(1), 38–39 (2018). 10.1002/ccd.27698

8. Donnelly P., et al., “Multimodality imaging atlas of coronary atherosclerosis,” JACC Cardiovasc. Imaging 3(8), 876–880 (2010). 10.1016/j.jcmg.2010.06.006

9. Cohade C., et al., “PET-CT: accuracy of PET and CT spatial registration of lung lesions,” Eur. J. Nucl. Med. Mol. Imaging 30, 721–726 (2003). 10.1007/s00259-002-1055-3

10. Bricq S., et al., “Automatic deformable PET/MRI registration for preclinical studies based on B-splines and non-linear intensity transformation,” Med. Biol. Eng. Comput. 56, 1531–1539 (2018). 10.1007/s11517-018-1797-0

11. Shimamura K., Guagliumi G., “Optical coherence tomography for online guidance of complex coronary interventions,” Circ. J. 80(10), 2063–2072 (2016). 10.1253/circj.CJ-16-0846

12. Timmins L. H., et al., “Framework to co-register longitudinal virtual histology-intravascular ultrasound data in the circumferential direction,” IEEE Trans. Med. Imaging 32(11), 1989–1996 (2013). 10.1109/TMI.2013.2269275

13. Pauly O., et al., “Semi-automatic matching of OCT and IVUS images for image fusion,” Proc. SPIE 6914, 69142N (2008). 10.1117/12.773805

14. Alberti M., et al., “Automatic non-rigid temporal alignment of intravascular ultrasound sequences: method and quantitative validation,” Ultrasound Med. Biol. 39(9), 1698–1712 (2013). 10.1016/j.ultrasmedbio.2013.03.005

15. Molony D. S., et al., “Evaluation of a framework for the co-registration of intravascular ultrasound and optical coherence tomography coronary artery pullbacks,” J. Biomech. 49(16), 4048–4056 (2016). 10.1016/j.jbiomech.2016.10.040

16. Zhang L., et al., “Simultaneous registration of location and orientation in intravascular ultrasound pullbacks pairs via 3D graph-based optimization,” IEEE Trans. Med. Imaging 34(12), 2550–2561 (2015). 10.1109/TMI.2015.2444815

17. Hebsgaard L., et al., “Co-registration of optical coherence tomography and X-ray angiography in percutaneous coronary intervention. The Does Optical Coherence Tomography Optimize Revascularization (DOCTOR) fusion study,” Int. J. Cardiol. 182, 272–278 (2015). 10.1016/j.ijcard.2014.12.088

18. Carlier S., et al., “A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography,” Cardiovasc. Revascularization Med. 15, 226–232 (2014). 10.1016/j.carrev.2014.03.008

19. Karmakar A., et al., “Detailed investigation of lumen-based tomographic co-registration,” in Proc. 2020 IEEE Int. Conf. Bioinf. and Biomed. (BIBM), pp. 1038–1042 (2020). 10.1109/BIBM49941.2020.9313508

20. de Macedo M. M. G., et al., “A robust fully automatic lumen segmentation method for in vivo intracoronary optical coherence tomography,” Rev. Bras. Eng. Biomed. 32(1), 35–43 (2016). 10.1590/2446-4740.0759

21. Eslami P., et al., “Effect of wall elasticity on hemodynamics and wall shear stress in patient-specific simulations in the coronary arteries,” J. Biomech. Eng. 142, 024503 (2020). 10.1115/1.4043722

22. Olender M. L., et al., “A mechanical approach for smooth surface fitting to delineate vessel walls in optical coherence tomography images,” IEEE Trans. Med. Imaging 38(6), 1384–1397 (2019). 10.1109/TMI.2018.2884142

23. Wu W., et al., “3D reconstruction of coronary artery bifurcations from coronary angiography and optical coherence tomography: feasibility, validation, and reproducibility,” Sci. Rep. 10, 18049 (2020). 10.1038/s41598-020-74264-w

24. Kadry K., et al., “A platform for high-fidelity patient-specific structural modeling of atherosclerotic arteries: from intravascular imaging to three-dimensional stress distributions,” J. R. Soc. Interface 18, 20210436 (2021). 10.1098/rsif.2021.0436

25. Su M.-I., et al., “Concise review of optical coherence tomography in clinical practice,” Acta Cardiol. Sin. 32(4), 381–386 (2016). 10.6515/acs20151026a

26. Maehara A., et al., “IVUS-guided versus OCT-guided coronary stent implantation: a critical appraisal,” JACC Cardiovasc. Imaging 10, 1487–1503 (2017). 10.1016/j.jcmg.2017.09.008

27. Wang X., et al., “In vivo calcium detection by comparing optical coherence tomography, intravascular ultrasound, and angiography,” JACC Cardiovasc. Imaging 10(8), 869–879 (2017). 10.1016/j.jcmg.2017.05.014

28. Ciardo D., et al., “Multimodal image registration for the identification of dominant intraprostatic lesion in high-precision radiotherapy treatments,” Br. J. Radiol. 90, 20170021 (2017). 10.1259/bjr.20170021

29. Nakazato R., et al., “Atherosclerotic plaque characterization by CT angiography for identification of high-risk coronary artery lesions: a comparison to optical coherence tomography,” Eur. Heart J. Cardiovasc. Imaging 16(4), 373–379 (2015). 10.1093/ehjci/jeu188

30. Mutha V., et al., “3D reconstruction of coronary arteries using multiplane angiography and optical coherence tomography to improve stent visualisation,” Heart Lung Circ. 22, S126 (2013). 10.1016/j.hlc.2013.05.300

31. Olender M. L., Edelman E. R., “The coming convergence of intravascular imaging with computational processing and modeling,” REC Interv. Cardiol. 3(3), 161–163 (2021). 10.24875/RECICE.M21000202

Articles from Journal of Medical Imaging are provided here courtesy of Society of Photo-Optical Instrumentation Engineers
