Author manuscript; available in PMC: 2022 Aug 7.
Published in final edited form as: Analyst. 2021 Jul 1;146(15):4822–4834. doi: 10.1039/d1an00103e

Multi-modal image sharpening in Fourier transform infrared (FTIR) microscopy

Rupali Mankar 1, Chalapathi Charan Gajjela 1, Farideh Foroozandeh Shahraki 1, Saurabh Prasad 1, David Mayerich 1, Rohith Reddy 1
PMCID: PMC8903170  NIHMSID: NIHMS1782315  PMID: 34198314

Abstract

Mid-infrared Spectroscopic Imaging (MIRSI) provides spatially-resolved molecular specificity by measuring wavelength-dependent mid-infrared absorbance. Infrared microscopes use large numerical aperture objectives to obtain high-resolution images of heterogeneous samples. However, the optical resolution is fundamentally diffraction-limited, and therefore wavelength-dependent. This significantly limits resolution in infrared microscopy, which relies on long wavelengths (2.5 μm to 12.5 μm) for molecular specificity. The resolution is particularly restrictive in biomedical and materials applications, where molecular information is encoded in the fingerprint region (6 μm to 12 μm), limiting the maximum resolving power to between 3 μm and 6 μm. We present an unsupervised curvelet-based image fusion method that overcomes limitations in spatial resolution by augmenting infrared images with label-free visible microscopy. We demonstrate the effectiveness of this approach by fusing images of breast and ovarian tumor biopsies acquired using both infrared and dark-field microscopy. The proposed fusion algorithm generates a hyperspectral dataset that has both high spatial resolution and good molecular contrast. We validate this technique using multiple standard approaches and through comparisons to super-resolved experimentally measured photothermal spectroscopic images. We also propose a novel comparison method based on tissue classification accuracy.

1. Introduction

Broadband vibrational spectroscopic imaging provides excellent molecular sensitivity that can identify the spatial distribution of molecular constituents. Fourier transform infrared (FTIR) spectroscopic imaging is a popular technique to measure mid-infrared absorbance spectra in materials science,1,2 forensics,3 and biomedicine4,5 by illuminating the sample with mid-infrared (mid-IR) light in the range of 750 to 4000 cm−1 (13.3 to 2.5 μm).6 This technique is commercially available, facilitating wide adoption in settings where mid-infrared spectroscopic imaging (MIRSI) is necessary to provide molecular context at each pixel.

The spatial resolution Δ of an imaging system under the Rayleigh criterion is proportional to the incident wavelength λ and inversely proportional to the objective’s numerical aperture NA:7

Δ = 0.61λ/NA. (1)

The numerical aperture is fixed and usually in the range of ≈0.5 to 0.8. Spatial resolution is therefore wavelength-dependent and varies significantly (up to 6×) across the mid-IR range (2.5 to 13.3 μm). MIRSI instrument manufacturers typically use pixel sizes between 5 μm and 7 μm based on the spatial resolution in the fingerprint region (900 to 1800 cm−1). This is sub-optimal for biomedical applications that require sub-cellular resolution to evaluate heterogeneous tissue structures. Recent commercial platforms provide high-definition8 imaging that reduces pixel sizes to ≈1.1 μm to achieve the best possible spatial resolution.8,9 These advances improve image quality in high-wavenumber bands (3000 to 3500 cm−1).10 However, images in the fingerprint region, which encode molecular contrast for a variety of organic molecules, are still diffraction limited.11 The final images are high resolution at higher wavenumbers, due to the reciprocal relationship between wavenumber and wavelength, while important molecular information at longer wavelengths is obscured in low-resolution images. Methods that improve spatial resolution at these wavenumbers can significantly improve the viability of FTIR in biomedical applications.12
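To make the wavelength dependence concrete, eqn (1) can be evaluated across the mid-IR range (a minimal sketch; the 0.62 NA value matches the objective described in Section 3.1, and the helper name is ours):

```python
import numpy as np

def rayleigh_resolution_um(wavenumber_cm1, na):
    """Diffraction-limited resolution from eqn (1): 0.61 * lambda / NA.
    Wavelength (um) = 1e4 / wavenumber (cm^-1)."""
    wavelength_um = 1e4 / np.asarray(wavenumber_cm1, dtype=float)
    return 0.61 * wavelength_um / na

# Resolution degrades roughly 4x from the high-wavenumber bands to the
# fingerprint region for a fixed 0.62 NA objective.
for wn in (3500, 1650, 1080, 900):
    print(f"{wn} cm^-1 -> {rayleigh_resolution_um(wn, 0.62):.2f} um")
```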

Resolution limits in MIRSI hinder the analysis of histological samples where small spatial features, such as collagen fibers (≈2 μm wide) and cell clusters, are clinically important. Applications requiring high spatial resolution have motivated the development of new MIRSI instruments leveraging probes to overcome the diffraction limit.13–16 Photothermal IR (PTIR)17 and optical photothermal IR (O-PTIR)18 enable submicrometer resolution with simultaneous spectroscopic contrast. While individual band images and spectra can be acquired rapidly using O-PTIR, measuring full hyperspectral cubes is currently too time-consuming for routine clinical applications. However, spectroscopic imaging data from O-PTIR provides a direct experimental measurement of submicron spatial features and spectroscopic signatures, which is challenging to obtain through other technologies. We use it as a gold standard to assess the technology proposed in the current manuscript.

This paper proposes curvelet-based multi-modal fusion to enhance spatial resolution in chemical maps, bridging the gap between MIRSI and traditional histology to achieve cellular-level resolution with FTIR instrumentation. The proposed method builds on image sharpening techniques from remote sensing19,20 and extends them to mid-IR hyperspectral datasets using a novel unsupervised approach to integrating high-frequency features into hyperspectral images. In remote sensing, low-resolution multispectral (MS) images are fused with high-resolution panchromatic (PAN) images. Such sharpening is commonly referred to as pansharpening. Our approach uses dark-field microscopy to obtain high-resolution data analogous to panchromatic images. High spatial frequency features are fused into MIRSI data using an unsupervised curvelet-based approach. This combination of dark-field microscopy and MIRSI provides several practical advantages: (1) both modalities are label-free; (2) high-resolution image acquisition requires very little additional time; and (3) no changes to sample preparation are required. Unlike current photothermal imaging, our technique does not require new instrumentation or long hyperspectral data acquisition times.

We demonstrate the efficacy of the proposed algorithm on tissue biopsies, where both molecular specificity21 and cellular-level resolution (<5 μm)22 are critical to clinical diagnosis but beyond the capabilities of FTIR imaging. We evaluate the efficacy of our fusion algorithm using quantitative metrics such as spectral distortion relative to the raw hyperspectral data. We also propose an evaluation based on the classification of fused data, comparing these results to both FTIR and traditional histopathology. This novel evaluation method is more practical for MIRSI, since it focuses on optimizing the resulting image for classification, which is currently the most common task in infrared histology.

1.1. Previous work

Multi-modal image fusion refers to a broad class of techniques combining data from two or more modalities to produce an information-rich output image. This provides clinical insights that each modality cannot furnish alone. Such techniques are common in medical imaging, where magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single-photon emission computed tomography23 are fused to provide a comprehensive data set for a single patient. Multi-modal image fusion can also speed up acquisition. Work by Kong et al.24 demonstrates that integration of autofluorescence imaging and Raman scattering acquires molecular information faster than conventional histology. Falahkheirkhah et al.25 proposed a deep learning framework to enhance spatial details of MIRSI by training on hematoxylin and eosin (H&E) stained tissue images. This approach has limited applicability in histopathology applications where data consist of several cell types and subtypes, since the spatial enhancement is not uniform across all morphological features but is biased towards the features highlighted by H&E.

Pansharpening is used extensively in remote sensing to fuse high-resolution panchromatic images with multispectral (MS) data.26 This produces images with better spatial and spectral resolution through cost-effective imaging using independent sensors optimized for (1) high spectral resolution (the MS sensor) and (2) high spatial resolution (the panchromatic sensor).27 Pansharpening commonly relies on component substitution:28 the low-resolution MS image is projected onto a new basis, such as the one provided by principal component analysis (PCA); high-frequency content is inserted by replacing a component in the projected basis, and the inverse transform then produces the fused output. In PCA-based pansharpening, the first principal component of the MS image is replaced by the high-resolution pan image to add spatial detail, and inverting the fused projection yields a sharpened MS output. Component substitution techniques are unsupervised, and do not require training or a corresponding ground truth. However, these sharpening techniques are prone to spectral distortion. The extent of these distortions depends on the correlation between the sharp pan band and the replaced component of the projected MS image.
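The component-substitution scheme described above can be sketched in a few lines of numpy (an illustrative sketch, not the PCA implementation compared in Section 4; `pca_pansharpen` is our name, and the histogram matching is a simple mean/variance match):

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """Component-substitution pansharpening sketch.
    ms:  (H, W, B) upsampled multispectral cube
    pan: (H, W) high-resolution panchromatic image"""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    xc = x - mu
    # PCA via SVD of the mean-centered pixel matrix
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    pcs = xc @ vt.T                      # pixels projected onto the PCs
    # Match the pan band to PC1's mean/std, then substitute it
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    # Invert the projection to obtain the sharpened cube
    fused = pcs @ vt + mu
    return fused.reshape(h, w, b)
```

Substituting PC1 trades spectral fidelity for spatial detail, which is the distortion mechanism discussed above.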

Another approach involves directly injecting spatial details into low-resolution images. These methods separate spatial features using multi-resolution analysis (MRA). Wavelet-based fusion, band-dependent spatial detail (BDSD) injection, and wavelet-based Bayesian fusion are examples of spatial detail injection methods.29 These methods provide better spectral fidelity, since the injected features are optimized for each band and there is no upper limit on the number of injected high-resolution bands. However, most such pansharpening algorithms are supervised. BDSD uses a ground truth image to identify optimal parameters for injecting spatial details. BDSD is also computationally intensive, since the inserted spatial features are computed using the entire MS image.26 The clustered BDSD (C-BDSD) algorithm extends BDSD with a more efficient implementation: it optimizes parameters on pixels clustered by spatial features, unlike BDSD, in which parameters are estimated globally or locally with a sliding window.30 Parameter estimation on clustered pixels makes C-BDSD faster and more accurate than BDSD. Although C-BDSD is fast and works well for remote sensing, it is a supervised algorithm that relies on a high-resolution ground truth. The unavailability of high-resolution FTIR hyperspectral images makes supervised approaches impractical.

Spatial–spectral fusion methods using the Fourier or wavelet transforms are good at retaining spectral information at the expense of spatial detail.31 Wavelet transforms are poor at representing curved edges, making them sub-optimal for microscopic images of organic materials.32 The curvelet transform is therefore preferred in medical imaging applications, such as image segmentation33 and fusion.31

2. Curvelet transform

The curvelet transform34 (CT) is an extension of wavelets35 and ridgelets.36 Images are decomposed into sub-bands of different scales using the wavelet transform, and then a localized ridgelet transform is applied to each sub-band. Curvelets can represent high-frequency contours at a range of scales using a sparse set of coefficients combined with the curvelet basis. The curvelet transform includes three stages: (1) image decomposition, (2) smooth partitioning, and (3) a ridgelet transform:

  1. Image decomposition: Each band image is decomposed into resolution-based sub-bands using a 2D isotropic wavelet transform. Each layer contains details of different frequencies.

  2. Smooth partitioning: The first layer is low frequency and can be smoothly expressed using wavelets. However, the wavelet transform is not efficient for representing high-frequency curved features. High frequency features are therefore represented with curvelets. To represent high-frequency features efficiently using curvelets, each sub-band is divided into square partitions of a size appropriate for the scale (Fig. 1). At a finer scale, curved edges are divided into smaller fragments with smaller square partitions and treated as straight edges.

  3. Ridgelet transform: The ridgelet transform is applied on each square partition of each sub-band.
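The first two stages can be sketched with a simple band-pass pyramid standing in for the 2D isotropic wavelet transform (illustrative only; `decompose_subbands` and `partition` are our names, and a real implementation would use a wavelet or FDCT library):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_subbands(img, n_scales=3):
    """Stage 1 sketch: split an image into frequency sub-bands.
    Gaussian band-pass residuals stand in for the 2D isotropic
    wavelet transform used by the curvelet transform."""
    bands, low = [], img.astype(float)
    for _ in range(n_scales):
        smooth = gaussian_filter(low, sigma=2.0)
        bands.append(low - smooth)   # high-frequency detail layer
        low = smooth
    bands.append(low)                # coarse (low-frequency) residual
    return bands

def partition(band, size):
    """Stage 2 sketch: tile a sub-band into size x size squares so that
    curved edges look locally straight before the ridgelet step."""
    h, w = band.shape
    return [band[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

img = np.random.default_rng(1).random((32, 32))
bands = decompose_subbands(img)
tiles = partition(bands[0], 8)       # finer scales get smaller partitions
```

Stage 3 would then apply a ridgelet transform to each tile; summing all layers recovers the original image, mirroring the invertibility of the real transform.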

Fig. 1.

Fig. 1

An illustration of the curvelet transform.

The implementation used here is based on the continuous curvelet transform and the fast discrete curvelet transform.37 In this section, we illustrate the utility of curvelet transforms in the context of our proposed image fusion algorithm. The curvelet transform of a two-dimensional function f(x, y) is represented using curvelet coefficients, computed by taking inner products of f with curvelets at different scales and orientations. Let φj be the mother curvelet at scale 2^(−j).37 Curvelets at scale 2^(−j) are obtained through rotations and translations of the mother curvelet. At decomposition scale 2^(−j), the orientations of the curvelets are given by the sequence of equispaced rotation angles θl = 2π · 2^(−⌊j/2⌋) · l, with l = 0, 1, … such that 0 ≤ θl < 2π, and the translations are given by the sequence of translation parameters k = (k1, k2). At scale 2^(−j), orientation θl, and position xk^(j,l) = R(θl)^(−1)(k1 · 2^(−j), k2 · 2^(−j/2)), where Rθ denotes rotation by θ, curvelets are defined (as a function of x) by:

φj,l,k(x) = φj(Rθl(x − xk^(j,l))). (2)

Any function f ∈ L²(ℝ²) can be represented as a series of curvelet coefficients, and the curvelet coefficient at scale 2^(−j), direction l, and location k is computed by taking the inner product between f and the curvelet φj,l,k:

cj,l,k = ⟨f, φj,l,k⟩. (3)

A function f, sparsely represented using discrete curvelet coefficients, is reconstructed using the formula

f = Σj,l,k ⟨f, φj,l,k⟩ φj,l,k. (4)

The curvelet transform is a multidimensional extension of the wavelet transform, which can effectively represent curved discontinuities with fewer coefficients than wavelets. Medical images are composed of many curved edges optimally represented by wedges using sparse curvelet coefficients.31,33 The sharpness of the curved edges in a hyperspectral band image changes as a function of wavelength due to diffraction. The CT decomposes images into multi-resolution sub-bands representing curved features at different scales and orientations. In Fig. 2, curvelet coefficients for two FTIR band images (1080 cm−1 and 1650 cm−1) are shown in a Cartesian concentric corona for the first four scales. The coarsest scale is in the center of the corona. The coefficient scale increases from the inner to the outer corona, with coefficients at different orientations arranged clockwise from the top left. The number of angles (orientations) changes with scale; 16 angles were selected for the second-coarsest scale (Fig. 2). The band image at 1650 cm−1 is sharper than the image at 1080 cm−1, and therefore denser coefficients are seen at the finer scales for the higher wavenumber. This multi-resolution sparse decomposition is useful for fusing high spatial frequency features without introducing artifacts into the spectral domain.
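Eqns (3) and (4) express analysis and synthesis as inner products against a family of atoms. The same identities can be checked numerically for any orthonormal basis (a toy numpy check, with a random orthonormal basis standing in for the curvelet tight frame):

```python
import numpy as np

# Eqns (3)-(4) in miniature: coefficients are inner products c_k = <f, phi_k>,
# and f is recovered as sum_k c_k * phi_k. Any orthonormal basis satisfies
# this; the paper uses the curvelet tight frame instead.
rng = np.random.default_rng(0)
n = 8
f = rng.random((n, n))

# Build an orthonormal basis of n*n atoms via QR of a random matrix
atoms, _ = np.linalg.qr(rng.random((n * n, n * n)))
coeffs = atoms.T @ f.reshape(-1)           # analysis:  c_k = <f, phi_k>
f_rec = (atoms @ coeffs).reshape(n, n)     # synthesis: f = sum c_k phi_k
```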

Fig. 2.

Fig. 2

Curvelet coefficients at four scales for two band images (wavenumbers 1080 cm−1 and 1650 cm−1) from FTIR imaged tissue, represented using a Cartesian corona. The coarsest scale (low-pass filtered) is in the center of the Cartesian concentric corona. The coefficient scale increases from the inner corona to the outer corona, and at each scale, coefficients at different orientations are arranged clockwise from the top left.

3. Materials and methods

Our curvelet-based multi-modal fusion is validated on 10 tissue samples from breast and ovarian tissue microarrays (TMAs). We procured formalin fixed paraffin embedded (FFPE) breast (AMS802) and ovarian (BC11115c) sections from commercial tissue banks, with adjacent sections placed on IR transparent CaF2 and standard glass slides. All sections went through the same deparaffinization protocol. Unstained sections on CaF2 slides were imaged using both FTIR and dark-field microscopes. Adjacent sections on glass were stained with H&E and imaged in brightfield. TMAs included tissue cores from different grades and stages of cancer to enable validation on biochemically diverse tissues. Ten 1 mm cores from each array (10 different patients) were sharpened and annotated to compute the classification accuracy of the two key histological classes.

3.1. Multi-modal imaging

Both FTIR and dark-field microscopy are used to image unstained tissue cores from breast and ovarian TMAs. Tissue sections were prepared using standard protocols11 for FTIR imaging. 5 μm thick tissue sections from FFPE blocks, mounted on IR transparent CaF2 windows, were deparaffinized for imaging; they were first imaged with an FTIR imaging system (Agilent 670 spectrometer coupled to a Cary 620 microscopy system) and then with a dark-field microscope (Nikon Eclipse Ti inverted optical microscope). The Agilent Cary 620 FTIR has a 15×, 0.62 NA objective and a 128 × 128 pixel focal plane array (FPA) detector. We collected mid-IR HS images of tissue sections in standard-definition (SD) mode with a 5.5 μm pixel size and 8 cm−1 spectral resolution over the spectral range of 1000 to 3900 cm−1.

Tissue sections were imaged with a Nikon inverted optical microscope with a 10×, 0.4 NA objective in dark-field mode. A dark-field condenser transmits a hollow cone of light and blocks light from within a disk around the optical axis. In the presence of a sample, scattered light is collected by the objective, forming a bright image against a dark background. Based on the Rayleigh criterion, the diffraction-limited spatial resolution of dark-field images collected in the visible range (400 to 700 nm) is significantly higher than that of FTIR images in the fingerprint region (6 to 12 μm).

3.2. Pre-processing

In multi-modal fusion, image registration is a critical step, since misalignment can introduce spatial artifacts in the fused result. Both FTIR and dark-field microscopy are label-free, enabling multi-modal imaging without additional tissue preparation. This prevents physical distortion of the tissue between images that can lead to misalignment. For this study, we imaged deparaffinized, label-free tissue sections with both modalities. Multi-modal images were registered by cropping both FTIR and dark-field images to the same tissue area. We then upsampled FTIR images to match the scale of the dark-field images in the x dimension. We used the image resize method in OpenCV with bilinear interpolation to resample each FTIR band image,38 and OpenCV affine transformations to align dark-field images with FTIR hyperspectral images.38 Prior to image sharpening, FTIR images underwent standard rubberband baseline correction to remove scattering artifacts.39 As baseline correction methods can impact classification, we perform these corrections before spatial frequency injection to facilitate comparison between the original and sharpened images.
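The resampling and alignment steps can be sketched as follows (a scipy stand-in for the OpenCV calls used in the paper; `matrix` and `offset` would come from registration, and the identity transform shown here is purely illustrative):

```python
import numpy as np
from scipy.ndimage import zoom, affine_transform

def upsample_band(band, factor):
    """Bilinear upsampling of one FTIR band toward the dark-field grid
    (scipy stand-in for the OpenCV resize used in the paper)."""
    return zoom(band.astype(float), factor, order=1)

def align_darkfield(df, matrix, offset):
    """Affine alignment of the dark-field image; matrix/offset would be
    estimated during registration (identity shown here)."""
    return affine_transform(df.astype(float), matrix, offset=offset, order=1)

band = np.random.default_rng(2).random((16, 16))
up = upsample_band(band, 5.0)              # e.g. 5.5 um -> 1.1 um pixels
aligned = align_darkfield(up, np.eye(2), (0.0, 0.0))
```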

3.3. Spatial–spectral fusion

In mid-IR imaging, wavenumbers in the fingerprint region (900 to 1800 cm−1) are especially important for identifying biomolecules; therefore, FTIR hyperspectral images (L) from these wavenumbers are used for histology analysis. The images have a diffraction-limited spatial resolution of 5.5 to 11.11 μm. Tissue sections are first imaged with FTIR and then with a dark-field microscope without any intermediate processing.

FTIR data consist of B band images Li ∈ L, where i = 1, ⋯, B. We performed image sharpening on each band image by fusing spatial features from the dark-field image Ψ into Li using the curvelet transform algorithm described below. In FTIR, the intensity range in each band varies with absorbance; the range among different bands can vary by a factor of 10. For better image sharpening results, the dark-field image (Ψ) was equalized to Ψi for each band image Li. Equalization of the dark-field image was performed with linear scaling to match the intensity scale of each band image. The proposed curvelet-based method uses multi-resolution analysis by decomposing each image into a set of spatial features using the fast discrete curvelet transform (FDCT).37
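The per-band equalization can be sketched as a linear rescaling (our reading of the "linear scaling" described above; `equalize_darkfield` is a hypothetical helper name):

```python
import numpy as np

def equalize_darkfield(psi, band):
    """Linearly rescale the dark-field image psi to the intensity range of
    one FTIR band image (illustrative sketch of the equalization step)."""
    psi = psi.astype(float)
    lo, hi = band.min(), band.max()
    p_lo, p_hi = psi.min(), psi.max()
    return lo + (psi - p_lo) * (hi - lo) / (p_hi - p_lo + 1e-12)

rng = np.random.default_rng(3)
psi = rng.random((8, 8)) * 255.0       # dark-field intensities
band = rng.random((8, 8)) * 0.4        # absorbance band, much smaller range
psi_i = equalize_darkfield(psi, band)  # Psi_i now matches the band's scale
```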

Larger curvelet coefficients from the dark-field image represent sharp features, which can be fused into FTIR band images based on the local magnitude ratio (LMR).34 Let Cj,l(Li(x, y)) and Cj,l(Ψi(x, y)) be the curvelet coefficients of band image Li and the equalized dark-field image Ψi at scale 2^(−j) and orientation l. The LMR at spatial location (x, y) is defined as:

LMRj,l(x, y) = Cj,l(Li(x, y)) / Cj,l(Ψi(x, y)). (5)

As the edges in dark-field images are sharper than those in FTIR data, LMRj,l(x, y) ≤ 1 at location (x, y) indicates that the spatial details of Ψi(x, y) are better than those of Li and are therefore injected into the fused image.

Fig. 3 and Algorithm 1 illustrate the data fusion process for combining the FTIR hyperspectral image L and registered dark-field image Ψ using curvelet transform.

Fig. 3.

Fig. 3

Process for sharpening broadband low spatial resolution HS images with high spatial resolution dark-field images using the curvelet transform based algorithm. High spatial resolution HS image is obtained by fusing detail coefficients of each low resolution band image with detail coefficients of equalized dark-field image for that band image.

4. Results

We demonstrate the efficacy and robustness of the proposed technique using two independent datasets consisting of breast and ovarian tissue cores derived from patients at varying stages of cancer. For the sharpened images, we evaluate spatial quality through visual inspection and assess spectral quality using quantitative metrics. We also compare the performance of curvelet based fusion against PCA based29 methods.

Algorithm 1.

Algorithm for fusing multi-modal images using curvelet transform

1. Input: L ∈ ℝ^(X×Y×B), Ψ ∈ ℝ^(X×Y); Output: F ∈ ℝ^(X×Y×B)
2. Select a single band image Li ∈ L, where i = 1, ⋯, B.
3. Compute equalized reference image Ψi for Li.
4. Apply fast discrete curvelet transform (FDCT) on Li and Ψi.
  Lc = {C(Li), D1,l(Li), …, D7,l(Li)}
  Hc = {C(Ψi), D1,l(Ψi), …, D7,l(Ψi)}
where, Lc and Hc are sets of curvelet coefficients of low-resolution band image Li and sharp dark-field image Ψi respectively. These coefficients are composed of coarse coefficients C(I) and detail coefficients Dj,l(I) of image I at scale 2j and orientation l.
5. Generate curvelet coefficients for the fused image, i.e. Fc = {C(Fi), D1,l(Fi), ⋯, D7,l(Fi)}, using the following fusion rules for coarse and detail coefficients.
 (a) Coarse coefficients from Lc are kept as-is in the fused image: C(Fi) = C(Li).
 (b) Detail coefficients from Lc and Hc are fused using the local magnitude ratio (LMR) criterion:34
  Dj,l(Fi(x, y)) = Dj,l(Li(x, y)) if LMRj,l(x, y) > 1
  Dj,l(Fi(x, y)) = Dj,l(Ψi(x, y)) if LMRj,l(x, y) ≤ 1
Here, Dj,l(Fi(x, y)) are detail coefficients at scale 2j and orientation l for fused image F at spatial location (x,y) and LMR is computed using eqn (5).
6. Apply inverse FDCT on fused coefficients Fc to reconstruct the fusion band image Fi.
7. Append the fused band image Fi to generate the HS image F.
8. Repeat steps 2 to 7 for i = 1, ⋯, B.
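Algorithm 1 can be sketched end to end with a single-level coarse/detail split standing in for the FDCT sub-bands (illustrative only; a faithful implementation would apply the LMR rule at every curvelet scale and orientation, and `fuse_band` is our name):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_band(li, psi_i):
    """One-band fusion following Algorithm 1's rules (a) and (b), with a
    Gaussian coarse/detail split standing in for curvelet sub-bands."""
    li, psi_i = li.astype(float), psi_i.astype(float)
    c_l = gaussian_filter(li, sigma=2.0)
    d_l = li - c_l                              # detail coefficients of L_i
    c_p = gaussian_filter(psi_i, sigma=2.0)
    d_p = psi_i - c_p                           # detail coefficients of Psi_i
    lmr = np.abs(d_l) / (np.abs(d_p) + 1e-12)   # eqn (5)
    d_f = np.where(lmr > 1.0, d_l, d_p)         # rule (b): keep sharper detail
    return c_l + d_f                            # rule (a): coarse part from L_i
```

Looping `fuse_band` over all B band images and stacking the outputs corresponds to steps 2 to 8 of Algorithm 1.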

Band images of tissue cores from breast TMA AMS 802 are presented in Fig. 4, which demonstrates sharpening of the raw FTIR data by fusing high spatial frequency features from the dark-field images of the same cores. Both curvelet and PCA based algorithms sharpen high-frequency features, like fibrous textures or the lining of epithelial cells in lobules. Arrows in the top row point at the fibrous texture in the stromal area, and arrows in the second and third rows (from the top) point to epithelial cells in the terminal duct lobular units (TDLUs) and terminal ducts respectively. Visual inspection of the sharpened images using curvelet based sharpening establishes the improvement in spatial quality as compared to the raw FTIR images. The curvelet-based algorithm also increases spectral localization and avoids adding spectral artifacts during sharpening. However, PCA sharpening fuses spatial details in the raw FTIR data at the cost of greater spectral distortion. Red arrows in PCA sharpened images indicate the loss of spectral information in the fused images due to dominating spatial features from the dark-field images.

Fig. 4.

Fig. 4

Multi-modal imaging for improving the spatial resolution of FTIR hyperspectral images using dark-field images. The increase in spatial detail is observed by comparing raw FTIR images of breast tissue cores with the PCA sharpened images and the curvelet sharpened images at wavenumber 1650 cm−1. PCA based sharpening adds spatial detail at the cost of spectral information (red arrows). In contrast, the proposed curvelet based sharpening enhances spatial detail while maintaining spectral mapping (black arrows).

We used three quantitative metrics to evaluate spectral quality, each capturing distinct characteristics: the spectral angle mapper (SAM),40 the area under the ROC curve (AUC) for histological classification, and classification accuracy (CA). SAM quantifies the mean angular distance between pixels in a fused image and the corresponding pixels in an upscaled raw FTIR image, as defined in eqn (6) below. SAM values range from −90 to 90 degrees, with 0 degrees being optimal.

SAM(fi, li) = arccos(⟨fi, li⟩ / (‖fi‖ ‖li‖)). (6)

Metrics presented in Table 1 are computed from 10 tissue cores of 10 different patients at varying stages of cancer. The quantitative results match the qualitative image sharpening trends (Fig. 4). SAM is typically computed with respect to a ground truth image (eqn (6)). As we cannot directly measure high-resolution HS ground truth images, we estimate SAM for the sharpened images with respect to the upsampled raw FTIR data. SAM values closer to 0 indicate less spectral distortion. Table 1 indicates that the average SAM of the proposed curvelet based sharpening algorithm (2.22) is smaller than that of the PCA sharpening algorithm (2.94) for breast tissue cores.
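The SAM computation can be reproduced in miniature (a sketch; the pixel-wise averaging follows our reading of eqn (6), and `sam_degrees` is our name):

```python
import numpy as np

def sam_degrees(fused, reference):
    """Mean spectral angle (eqn 6) between two (H, W, B) hyperspectral
    cubes, in degrees; 0 means no spectral distortion."""
    f = fused.reshape(-1, fused.shape[-1]).astype(float)
    r = reference.reshape(-1, reference.shape[-1]).astype(float)
    num = np.sum(f * r, axis=1)
    den = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + 1e-12
    angles = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return angles.mean()
```

Comparing a sharpened cube against the upsampled raw FTIR cube with `sam_degrees` yields values on the scale reported in Table 1 (smaller is better); note that the angle is invariant to per-pixel intensity scaling.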

Table 1.

Quantitative analysis of image sharpening (breast tissue)

SAM AUC Classification accuracy

Raw 0 0.9831 92%
PCA 2.94 0.9762 91.2%
Curvelet 2.22 0.9984 96.7%

We validated the quality of sharpened (fused) images by analyzing the effect of sharpening on classification performance for histological classes of interest. We were interested in accurate classification of epithelial cells, which are implicated in breast tissue carcinoma. We performed classification of raw FTIR and sharpened HS images for two histology classes: epithelium (green) and stroma (blue) (Fig. 5). We trained a binary SVM classifier on 12 optimal features selected by the GA-LDA feature selection algorithm39 and evaluated classification results using two metrics: area under the ROC curve (AUC) and classification accuracy. The corresponding results are presented in Table 1. The curvelet sharpened images have 1.5% higher AUC and 4.7% higher classification accuracy than the raw FTIR images, whereas PCA sharpened images have 0.5% lower AUC and 0.8% lower classification accuracy. The proposed curvelet algorithm demonstrates superior spectral fidelity compared to PCA because it minimizes spectral distortions.
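The AUC values in Table 1 are standard ROC areas; for reference, AUC can be computed directly from classifier scores via the rank-sum identity (a sketch with our own helper name; no tie handling, and not the paper's evaluation code):

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly."""
    s = np.concatenate([scores_pos, scores_neg]).astype(float)
    ranks = s.argsort().argsort() + 1.0    # 1-based ranks of all scores
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    u = ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)
```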

Fig. 5.

Fig. 5

Multi-modal image sharpening using PCA and curvelet based algorithms, validated by classifying sharpened breast tissue cores into two histological classes: epithelium (green) and stroma (blue). The first row shows (a) the dark-field image and band images at 1650 cm−1 wavenumber from (b) the raw FTIR image, (c) the PCA based sharpened image, and (d) the curvelet based sharpened image. The second row shows (e) an adjacent section from the same breast core stained with hematoxylin and eosin, and classified images for (f) raw data, (g) PCA sharpened data, and (h) curvelet sharpened data.

We assessed the reliability of sharpening results by classifying key histology classes. Raw FTIR images (Fig. 5(b)), PCA sharpened images (Fig. 5(c)), and curvelet sharpened images (Fig. 5(d)) were annotated for two histological classes, epithelium (green) and stroma (blue), using the H&E stained adjacent section (Fig. 5(e)) as the ground truth. A visual comparison of classification images from raw FTIR data (Fig. 5(f)) and curvelet sharpened data (Fig. 5(h)) shows that Fig. 5(h) corresponds more closely to the H&E, especially around the TDLU region as shown in the magnified insets. Several stromal pixels in the PCA sharpened images (Fig. 5(g)) are incorrectly classified as epithelial cells due to spectral distortions induced during image sharpening. Our curvelet-based sharpening improves both the sensitivity of epithelial cell detection and the localization of cells, producing sharp lobule edges that help precise grading of carcinoma. Misclassified pixels are illustrated in red in Fig. 6. The curvelet based image (Fig. 6(d)) has fewer misclassified pixels than the raw FTIR image (Fig. 6(b)) and the PCA based sharpened image (Fig. 6(c)).

Fig. 6.

Fig. 6

Effects of image sharpening on classification. (a) H&E stained tissue for ground truth and incorrectly classified pixels are highlighted in red for classification results on: (b) raw FTIR image, (c) PCA based sharpened image and (d) curvelet based sharpened image.

We performed a similar extensive analysis for ovarian cancer TMAs (Fig. 7 and 8) and observed improvements similar to those seen for breast TMAs. The results are based on a total of 337 982 spectra (149 954 for training and 188 028 for testing) from 10 tissue cores from different patients at varying stages of cancer. Training and testing spectra are taken from mutually exclusive tissue cores. By testing on spectra from mutually exclusive tissue cores measured at different times, we account for spectral measurement uncertainties under the imaging settings mentioned here. For this independent dataset, the curvelet sharpened images have 3.0% higher AUC and 6% higher classification accuracy than the raw FTIR images, whereas PCA sharpened images have 4.5% lower AUC and 2.4% lower classification accuracy than the raw FTIR images.

Fig. 7.

Fig. 7

Curvelet based image sharpening with multi-modal imaging, demonstrated using tissue cores from ovarian TMA BC11115c. Spatial information from dark-field images (left) of ovarian tissue cores is fused into raw FTIR images (middle) to achieve improved spatial resolution in the sharpened images (right).

Fig. 8.

Fig. 8

Qualitative analysis of multi-modal image sharpening using PCA and curvelet based algorithms. Top row: (a) dark-field image and band images at 1650 cm−1 wavenumber for (b) the raw FTIR image, (c) the PCA based sharpened image, and (d) the curvelet based sharpened image of the same core. Bottom row: (e) adjacent tissue section stained with hematoxylin and eosin, and classified images for (f) the raw FTIR image, (g) the PCA sharpened image, and (h) the curvelet sharpened image, each classified into epithelium (green) and stroma (blue).

Fig. 9 compares our sharpened results with O-PTIR (mIRage), showing sharpened images of eight ovarian cores from TMA BC11115c. Epithelial cells, stromal cells, adipocytes, and lymphocytes are compared with O-PTIR images at the amide I band (1650 cm−1). Qualitative comparison of sharpened images with O-PTIR images indicates that the spatial resolution achieved by the proposed multi-modal fusion lies between FTIR and O-PTIR, with spatial details comparable to optical photothermal imaging.

Fig. 9.

Fig. 9

Image sharpening results on eight ovarian cores from TMA BC11115c compared with O-PTIR imaging at the amide I band (1650 cm−1). On the right side of each sharpened (fused) ovarian core image, insets of high-frequency features (middle) are compared with raw FTIR (top) and O-PTIR (bottom) images.

5. Discussion

The goal of this study was to develop a fast, clinically viable method to enhance image quality by sharpening spatial features in a diffraction-limited mid-IR HS image while preserving spectral fidelity. The proposed multi-modal fusion method requires minimal sample preparation and can fit into a clinical workflow seamlessly. We used rapid, label-free microscopy to augment spatial details in the data from FTIR imaging. Being unsupervised, the curvelet based sharpening also eliminates the need for super-resolved ground truth images. The results of the proposed image sharpening method are robust, reliable, and generally applicable across different tissue types.

The proposed fusion method allows FTIR imaging at larger pixel sizes, thus reducing data collection time. The dark-field microscopy used in this study is faster than FTIR imaging; therefore, data fusion is a practical solution for improving image quality without increasing data collection time. It takes ≈120 minutes to image one 1 mm tissue core with FTIR in high-definition (HD) mode (pixel size 1.1 μm) with 16 coadds, ≈3 minutes for the same tissue core with FTIR in standard-definition (SD) mode (pixel size 5.5 μm), and ≈40 s with a dark-field microscope. The curvelet transform is implemented in MATLAB, and sharpening a single band image of one tissue core (1059 pixels × 1069 pixels) takes around 6 seconds on a system with 24 GB of physical memory and 0.24 seconds with 256 GB of physical memory. The proposed fusion method enables roughly 35× faster imaging than the HD mode in FTIR. Data acquisition at larger pixel sizes followed by the application of our algorithm can potentially provide high-resolution data with lower collection time and will be explored in the future. Our current instrument is limited to two pixel sizes: 5.5 μm (SD imaging mode) and 1.1 μm (HD imaging mode).
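The band-by-band fusion idea can be sketched in a few lines. The sketch below is a simplified stand-in, not the authors' algorithm: it replaces the curvelet decomposition (implemented in MATLAB with CurveLab in the paper) with a plain separable box-filter high-pass, and injects the resulting dark-field detail into a single upsampled IR band. The function name `highpass_inject` and the parameters `alpha` and `ksize` are illustrative assumptions.

```python
import numpy as np

def highpass_inject(ir_band, darkfield, alpha=0.5, ksize=5):
    """Inject high-frequency detail from a co-registered dark-field image
    into one IR band image (both 2-D arrays of the same shape; the IR band
    is assumed to be already upsampled to the dark-field grid).

    Simplified stand-in for curvelet-based fusion: a box-filter low-pass
    replaces the curvelet decomposition used in the paper.
    """
    kernel = np.ones(ksize) / ksize
    # Separable low-pass: filter rows, then columns.
    low = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, darkfield)
    low = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, low)
    detail = darkfield - low  # high-frequency structure of the dark-field image
    # Match the detail amplitude to the IR band's dynamic range before injection.
    scale = ir_band.std() / (darkfield.std() + 1e-12)
    return ir_band + alpha * scale * detail
```

Because the injection acts on one band at a time, the same routine could be applied per wavenumber without touching the band-to-band spectral shape more than the injected spatial detail requires.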

Multi-modal fusion algorithms for image sharpening (pansharpening) can be either supervised or unsupervised. Supervised algorithms require super-resolved ground truth images to find optimization parameters, whereas unsupervised algorithms do not. Obtaining super-resolved ground truth in FTIR imaging is challenging because of the diffraction limit. Recent methods25 for supervised image sharpening have relied on H&E images as a substitute for super-resolved ground truth FTIR data. Since H&E images do not contain spectroscopic information, such an algorithm can distort spectral quality in FTIR data by adding wavenumber-dependent spatial artifacts to each band image. Our unsupervised method overcomes these challenges because it does not require super-resolved ground truth images.

In building our classifiers, we performed training and validation on mutually exclusive data in both the breast and ovarian cancer TMAs, accommodating patient-to-patient variation. The independently processed breast and ovarian tissue TMAs, which contain a diverse array of tissues with varying grades of cancer from different patients, reinforce the robustness of our results.

We used two validation metrics, area under the ROC curve (AUC) and classification accuracy, to evaluate the improvement in image quality after sharpening. An ROC curve plots the true positive rate against the false positive rate of a binary classifier at thresholds varying from 0 to 1; a higher AUC indicates better classification performance across thresholds, while overall classification accuracy measures performance at an optimal threshold. While the quantitative metrics in Tables 1 and 2 measure the spectral quality of annotated pixels, a qualitative analysis of the classification images of breast (Fig. 5) and ovarian tissue cores (Fig. 8) helps evaluate spectral fidelity at both annotated and unlabeled pixels, especially in regions important to cancer diagnosis. Qualitative analysis is particularly important for tissue regions with mixed pixels, where annotation is challenging. We therefore used a combination of quantitative and qualitative analyses to evaluate the fidelity of the proposed technique. The superior qualitative and quantitative results demonstrate the utility and efficacy of the proposed curvelet based multi-modal fusion method for image sharpening. Both sets of results indicate that image sharpening improves the sensitivity and specificity of histological classification, providing a more accurate assessment of the spread of cancer cells in the tissue and, in turn, facilitating an improved understanding of disease prognosis.
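Both metrics can be computed directly from pixel-level classifier scores. The sketch below (hypothetical function names; not the classifier used in the paper) computes AUC via the rank-sum identity, i.e. the probability that a randomly chosen positive pixel is scored above a randomly chosen negative pixel, and accuracy at a fixed threshold; ties in the scores are ignored for simplicity.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney rank-sum identity (ties ignored):
    P(score of a random positive > score of a random negative)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def accuracy(labels, scores, threshold=0.5):
    """Fraction of pixels classified correctly at a fixed threshold."""
    return float(np.mean((np.asarray(scores) >= threshold)
                         == np.asarray(labels, dtype=bool)))
```

For a perfectly separating classifier (e.g. labels `[0, 0, 1, 1]` with scores `[0.1, 0.2, 0.8, 0.9]`) both functions return 1.0; AUC degrades smoothly as positives and negatives interleave.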

Table 2.

Quantitative analysis of image sharpening (ovarian tissue)

Method     SAM    AUC      Classification accuracy
Raw        0      0.9581   91.6%
PCA        1.34   0.9151   89.3%
Curvelet   1.21   0.9873   97.6%
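The SAM column above quantifies spectral distortion of the sharpened cube relative to the raw data. A minimal numpy sketch of the spectral angle mapper (function names are illustrative): the angle between the raw and fused spectrum at each pixel, averaged over the core, where 0 means the spectral shape is unchanged.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (degrees) between two spectra; 0 means identical shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def mean_sam(raw_cube, fused_cube):
    """Mean SAM over all pixels of two hyperspectral cubes
    of shape (rows, cols, bands)."""
    r = raw_cube.reshape(-1, raw_cube.shape[-1])
    f = fused_cube.reshape(-1, fused_cube.shape[-1])
    cos = (np.sum(r * f, axis=1)
           / (np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1)))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
```

An unmodified cube gives a mean SAM of 0, matching the "Raw" row of the table; fusion that rescales a spectrum uniformly also leaves its angle unchanged, which is why SAM isolates shape distortion from intensity changes.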

We further evaluated the image sharpening results by comparing sharpened data with O-PTIR images, as O-PTIR provides directly super-resolved images derived from molecular absorbance, the same intrinsic parameter measured by FTIR. However, because the mechanism by which O-PTIR measures molecular absorbance differs from that of FTIR, the absorbance values are not expected to match directly. We therefore compared our results by visual inspection of individual band images from both datasets.

Higher spectral resolution would increase the computational cost due to the larger number of bands. Since the curvelet based fusion operates on individual bands, it allows sharpening of only specific bands relevant to the histological classes of interest. Sharpening selected bands reduces the computational time and storage requirements compared with sharpening an entire HS image containing hundreds of bands. This advantage also enables selective enhancement of histological-class-specific morphological features by fusing with other specialized imaging modalities, such as second harmonic generation (SHG) imaging for collagen fibers.41,42
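Selecting the bands to sharpen reduces to mapping target wavenumbers (e.g. the amide I band at 1650 cm−1) onto the instrument's discrete wavenumber axis. A small sketch, assuming a hypothetical 4 cm−1 band spacing (the actual spacing depends on the FTIR acquisition settings):

```python
import numpy as np

def nearest_band(wavenumbers, target):
    """Index of the band closest to a target wavenumber (cm^-1)."""
    return int(np.argmin(np.abs(np.asarray(wavenumbers) - target)))

# Hypothetical fingerprint-region axis at 4 cm^-1 spacing.
wn = np.arange(900, 1801, 4)

# Sharpen only a few class-relevant bands instead of the full cube.
bands_to_sharpen = [nearest_band(wn, t) for t in (1080, 1550, 1650)]
```

Only these band images would then be passed through the fusion step, leaving the remaining bands of the hyperspectral cube untouched.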

The proposed technique has limitations when the sample under examination shows no contrast under a dark-field microscope but contains chemically distinct species that FTIR imaging resolves. In this case, the algorithm would retain the chemical sensitivity of FTIR imaging but would not improve spatial resolution. For such samples, a different modality that provides contrast (auto-fluorescence, Raman, etc.) can be substituted for the dark-field images, and we believe our algorithm would work for such image fusion as well.

6. Conclusion

We describe a novel technique to sharpen diffraction-limited FTIR images and demonstrate improved data quality using breast and ovarian cancer tissues. The proposed curvelet based multi-modal fusion technique fuses spatial information from dark-field images into FTIR hyperspectral images. Each imaging modality contributes complementary information about the sample; therefore, the resulting image combines the best spatial and spectral information of both. The curvelet based sharpening is also well suited to biomedical images containing curved discontinuities. The proposed technique is a fast and cost-effective way of enhancing spatial–spectral quality that improves the sensitivity and specificity of histological classes important for accurately grading the spread of cancer cells for cancer prognosis.

Acknowledgements

This work is supported in part by the NLM Training Program in Biomedical Informatics and Data Science T15LM007093 (RM, RR), the Cancer Prevention and Research Institute of Texas (CPRIT) #RR170075 (RR), the National Institutes of Health #R01HL146745 (DM), and the National Science Foundation CAREER Award #1943455.

Footnotes

Conflicts of interest

There are no conflicts to declare.

References

1. Kazarian SG and Chan KA, Macromolecules, 2004, 37, 579–584.
2. Theophile T, Infrared Spectroscopy: Materials Science, Engineering and Technology, BoD–Books on Demand, 2012.
3. Ewing AV and Kazarian SG, Analyst, 2017, 142, 257–272.
4. Levin IW and Bhargava R, Annu. Rev. Phys. Chem., 2005, 56, 429–474.
5. Pahlow S, Weber K, Popp J, Bayden RW, Kochan K, Rüther A, Perez-Guaita D, Heraud P, Stone N, Dudgeon A, et al., Appl. Spectrosc., 2018, 72, 52–84.
6. Baker MJ, Gazi E, Brown MD, Shanks JH, Gardner P and Clarke NW, Br. J. Cancer, 2008, 99, 1859–1866.
7. Belkebir K, Chaumet PC and Sentenac A, J. Opt. Soc. Am. A, 2005, 22, 1889–1897.
8. Reddy RK, Walsh MJ, Schulmerich MV, Carney PS and Bhargava R, Appl. Spectrosc., 2013, 67, 93–105.
9. Nasse MJ, Walsh MJ, Mattson EC, Reininger R, Kajdacsy-Balla A, Macias V, Bhargava R and Hirschmugl CJ, Nat. Methods, 2011, 8, 413–416.
10. Deutsch B, Reddy R, Mayerich D, Bhargava R and Carney PS, J. Opt. Soc. Am. A, 2015, 32, 1126–1131.
11. Baker MJ, Trevisan J, Bassan P, Bhargava R, Butler HJ, Dorling KM, Fielden PR, Fogarty SW, Fullwood NJ, Heys KA, et al., Nat. Protoc., 2014, 9, 1771–1791.
12. Pandey K, J. Appl. Polym. Sci., 1999, 71, 1969–1975.
13. Katzenmeyer AM, Holland G, Chae J, Band A, Kjoller K and Centrone A, Nanoscale, 2015, 7, 17637–17641.
14. Chan KA and Kazarian SG, Chem. Soc. Rev., 2016, 45, 1850–1864.
15. Dazzi A and Prater CB, Chem. Rev., 2017, 117, 5146–5173.
16. Zhang D, Li C, Zhang C, Slipchenko MN, Eakins G and Cheng J-X, Sci. Adv., 2016, 2, e1600521.
17. Grisedale LC, Moffat JG, Jamieson MJ, Belton PS, Barker SA and Craig DQ, Mol. Pharm., 2013, 10, 1815–1823.
18. Kansiz M and Prater C, Advanced Chemical Microscopy for Life Science and Translational Medicine, 2020, p. 112520E.
19. Du Q, Younan NH, King R and Shah VP, IEEE Geosci. Remote Sens. Lett., 2007, 4, 518–522.
20. Padwick C, Deskevich M, Pacifici F and Smallwood S, Proceedings of the ASPRS 2010 Annual Conference, San Diego, CA, USA, 2010.
21. Fernandez DC, Bhargava R, Hewitt SM and Levin IW, Nat. Biotechnol., 2005, 23, 469–474.
22. Leslie LS, Wrobel TP, Mayerich D, Bindra S, Emmadi R and Bhargava R, PLoS One, 2015, 10, e0127238.
23. Du J, Li W, Lu K and Xiao B, Neurocomputing, 2016, 215, 3–20.
24. Kong K, Rowlands CJ, Varma S, Perkins W, Leach IH, Koloydenko AA, Williams HC and Notingher I, Proc. Natl. Acad. Sci. U. S. A., 2013, 110, 15189–15194.
25. Falahkheirkhah K, Yeh K, Mittal S, Pfister L and Bhargava R, 2019, arXiv preprint arXiv:1911.04410.
26. Garzelli A, Nencini F and Capobianco L, IEEE Trans. Geosci. Remote Sens., 2008, 46, 228–236.
27. Masi G, Cozzolino D, Verdoliva L and Scarpa G, Remote Sens., 2016, 8, 594.
28. Kwarteng P and Chavez A, Photogramm. Eng. Remote Sens., 1989, 55, 339–348.
29. Loncan L, De Almeida LB, Bioucas-Dias JM, Briottet X, Chanussot J, Dobigeon N, Fabre S, Liao W, Licciardi GA, Simoes M, et al., IEEE Geosci. Remote Sens. Mag., 2015, 3, 27–46.
30. Garzelli A, IEEE Trans. Geosci. Remote Sens., 2015, 53, 2096–2107.
31. Arif M and Wang G, Soft Comput., 2020, 24, 1815–1836.
32. Bhateja V, Krishn A, Sahu A, et al., Proceedings of the Second International Conference on Computer and Communication Technologies, 2016, pp. 1–9.
33. AlZubi S, Islam N and Abbod M, Int. J. Biomed. Imaging, 2011, 2011, 136034.
34. Rao C, Rao JM, Kumar AS, Jain D and Dadhwal V, 2014 IEEE International Advance Computing Conference (IACC), 2014, pp. 952–957.
35. Pajares G and De La Cruz JM, Pattern Recognit., 2004, 37, 1855–1872.
36. Donoho DL and Flesia AG, Studies in Computational Mathematics, Elsevier, 2003, vol. 10, pp. 1–30.
37. Candes E, Demanet L, Donoho D and Ying L, Multiscale Model. Simul., 2006, 5, 861–899.
38. Joshi P, OpenCV with Python by Example, Packt Publishing Ltd, 2015.
39. Mankar R, Walsh M, Bhargava R, Prasad S and Mayerich D, Analyst, 2018, 143, 1147–1156.
40. Alparone L, Wald L, Chanussot J, Thomas C, Gamba P and Bruce LM, IEEE Trans. Geosci. Remote Sens., 2007, 45, 3012–3021.
41. Chen X, Nadiarynkh O, Plotnikov S and Campagnola PJ, Nat. Protoc., 2012, 7, 654.
42. van Huizen LM, Kuzmin NV, Barbé E, van der Velde S, te Velde EA and Groot ML, J. Biophotonics, 2019, 12, e201800297.