Highlights
• This work surveys computational data harmonisation approaches in digital healthcare.
• A comprehensive checklist that summarises common practices for data harmonisation.
• A meta-analysis is conducted to explore harmonisation studies in various modalities.
• A critique of existing harmonisation strategies is presented for future research.
Keywords: Information fusion, data harmonisation, data standardisation, domain adaptation, reproducibility
Abstract
Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which require the integration of clinical features extracted from data acquired by different scanners and protocols to improve stability and robustness. Previous surveys have described various computational approaches to fuse single-modality multicentre datasets; however, they rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise the computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, a comprehensive checklist that summarises common practices for data harmonisation studies is proposed to guide researchers in reporting their research findings more effectively. Last but not least, flowcharts presenting possible ways for methodology and metric selection are proposed, and the limitations of different methods are surveyed for future research.
1. Introduction
Computational biomedical research aims to advance digital healthcare and biomedical studies by developing computational models that improve the precision of disease diagnosis and the analysis of gene expression or time series data (e.g., electroencephalograms and electrocardiograms). These models are designed to discover novel risk biomarkers, predict disease progression, design optimal treatments, and identify new drug targets for applications such as cancer, pulmonary disease, and neurological disorders. Whilst a well-performing model should be characterised by high performance, robustness, explainability, and reproducibility, its development is hampered by the distributional bias between different datasets, which dramatically increases the difficulty of developing models from large-scale studies. Although data harmonisation is needed for almost any kind of medical data, automated methods have been extensively used mainly for medical images and gene expression analysis, with the remaining modalities being ignored or harmonised manually. Studies have shown that machine learning based approaches, especially deep neural networks, are highly sensitive to the distribution of training data. Therefore, there is an urgent need to develop approaches that can integrate the device/site-invariant information from multiple datasets. To address this issue, researchers established standard acquisition protocols [1–3] or definitions [4,5] to help data collectors acquire standardised data. For instance, Delbeke et al. [2] recommended an acquisition protocol for ¹⁸F-FDG positron emission tomography/computerised tomography (PET/CT) imaging, and Simon et al. [3] presented a standardised MR imaging protocol for multiple sclerosis. Schmidt et al. [4,5] mainly focused on integrating data from routine health information systems, including conducting manual harmonisation and rule-based alignment of electronic data. Although these acquisition protocols could effectively reduce cohort bias (non-biological variances in cross-scanner/site data), they could only assist prospective studies, because most studies were retrospective and the data could not be re-acquired under the same standard. In addition, non-standardised acquisition protocols are sometimes needed for personalised digital healthcare. Therefore, it is imperative to explore computational methods to harmonise multicentre datasets.
Although some surveys of computational data harmonisation have been released [6,7,10], covering topics such as MRI (magnetic resonance imaging) [11] or CT (computerised tomography) harmonisation, these surveys only explored methods of a single modality or application and rarely focused on evaluation metrics and research guidance (shown in Table 1). Moreover, there is a lack of a checklist that summarises common practice and gives guidance for methodology selection and development in computational data harmonisation studies. This survey summarises the computational data harmonisation strategies for multimodal data in the digital healthcare field in terms of methodologies, evaluations, and applications. Our paper covers three main areas (i.e., gene expression, radiomics, and pathology), with 96 qualified papers published within the last two decades. To the best of our knowledge, this is the largest and most comprehensive exploration of computational data harmonisation strategies. To promote better scientific practice in the community working on data harmonisation, a comprehensive checklist with all the steps is proposed to guide researchers in reporting their studies more effectively. With this checklist, the explorations (what the strategy is) and advances (how well the model performs) of a study can be clearly illustrated by reporting the items in the model and evaluation sections. Overall, the main contributions of this survey can be summarised as:
• A three-fold taxonomy that describes the methodology, evaluation and applications of computational data harmonisation strategies.
• A checklist with all the steps that can be followed in future data harmonisation studies.
• The critique and limitations of the existing data harmonisation strategies and potential studies.
The rest of the manuscript is organised as follows: (1) Section 2 describes the definition, motivation, utilisation and solution of computational data harmonisation issues; (2) Section 3 illustrates how this survey is conducted; (3) Sections 4, 5, and 6 demonstrate the three-fold taxonomy of harmonisation strategies; (4) Section 7 describes the results of the meta-analysis; (5) Section 8 presents the checklist for harmonisation studies and summarises the critiques and limitations of current strategies; and (6) Section 9 concludes this survey.
Table 1.
Comparison of existing data harmonisation review studies.
Survey | [6] | [7] | [8] | [9] | Ours |
Period | ∼2020 | ∼2019 | ∼2020 | ∼2021 | ∼2021 |
# of reviewed studies | N/A | 23 | 49 | 42 | 96 |
Domain | Radiomics | Radiomics | Radiomics | Radiomics | Radiomics, Gene, Pathology |
Metric | × | × | × | × | √ |
Checklist | × | × | × | × | √ |
Guidance | × | √ | × | × | √ |
Meta-analysis | × | √ | × | × | √ |
“# of reviewed studies” indicates the number of included papers in the survey.
2. Computational data harmonisation: definition, origin, what for and how?
This section illustrates the details of data harmonisation, including the definition, origin, purpose and solutions of computational data harmonisation tasks. To better describe these characteristics, the terminology of computational data harmonisation is defined in Table 2.
Table 2.
Terminology of computational data harmonisation.
Terminology | Definitions |
Cohort | A group of data acquired by the same acquisition protocol and devices |
Subjects | Patients (objects) involved in the study |
Category | The classes that were involved in the study, e.g., cancer vs. normal |
Cases | Samples (a subject can produce multiple samples with different acquisition protocols) involved in the study |
Cohort bias | The non-biological related variances caused by acquisition protocols (also named as “batch effect” in gene expression studies) |
Source cohorts | The cohort(s) from which data are harmonised |
Reference cohort | The cohort to which data are harmonised |
2.1. What?
Data harmonisation refers to combining data from different sources into one cohesive dataset by adjusting data formats, terminologies and measuring units [12]. It is mainly performed to address issues caused by non-identical annotations or records from different operators or systems, which requires a standard protocol for manual adjustment. The conventional approach to data harmonisation is to manually set rules or terms to integrate multicentre datasets from health information systems, which requires complex mapping of terminologies and manual harmonisation.
Different from manual harmonisation, which relies on a standard protocol and manual adjustment, computational data harmonisation in digital healthcare aims to reduce the cohort bias (non-biological variances) introduced by different data acquisition schemes. It applies computational strategies (such as machine learning and image/signal processing) to integrate multicentre datasets and reduce their non-biological heterogeneity. Compared with data cleansing, data normalisation, standardisation, etc., data harmonisation has a broader definition: it is an umbrella term for strategies that reduce cohort biases (caused by different acquisition protocols and devices). It can be conducted by removing outliers (data cleansing), aligning the location-and-scale parameters of cohorts (data normalisation), or converting multiple datasets into a common data format (data standardisation/transformation, referring to manual harmonisation). It is of note that data harmonisation is not the same as style transformation (e.g., generating T1 images from T2 in MRI, or generating CT from X-ray images); it only focuses on intra-modality datasets.
2.2. Why?
This section first illustrates the motivation of computational data harmonisation approaches, then describes the sources of non-biological variances. Computational methods refer to the automatic analysis of digital healthcare data using machine learning or mathematical modelling algorithms. This usually requires the extraction and fusion of data-derived features from the raw data. For instance, the grey level co-occurrence matrix (GLCM), one of the most commonly used textural features in radiomics, can be used as an independent prognostic factor (representing the metabolic intra-tumoural heterogeneity in ¹⁸F-FDG PET/CT images) in patients with surgically treated rectal cancer [15]. However, datasets acquired from different sites present significant variances (Fig. 1), which can hinder the effectiveness of extracted features and lead to unstable performance for both computational and manual diagnosis. In particular, Zhao et al. [13] found a considerable segmentation based inconsistency of lung tumours when repeated manual labelling was conducted by three radiologists. This inconsistency could lead to a significant reduction (from 0.76 to 0.28) of concordance correlation coefficients for certain radiomics features. Therefore, computational data harmonisation is proposed to eliminate or reduce these non-biological variances in multicentre datasets to (1) enhance the robustness and reproducibility of computational modules; (2) fuse knowledge captured beforehand with knowledge captured over a new task; and (3) promote the comprehensive performance of computational modules.
Fig. 1.
Visualised differences in (a) radiomics and (b) pathology images. (a) A lung tumour captured on the same CT scanner with six different acquisition protocols (from [13]). (b) H&E-stained tissue images from different sites [14].
The non-biological data variances are mainly from hardware (e.g., scanners and platforms), acquisition protocols (e.g., signal/imaging acquisition parameters) and laboratory preparations (e.g., staining and slicing). These variances may lead to the weak reproducibility of quantitative biomarkers and limit the time-series studies based on multi-source datasets, indicating an urgent need for data harmonisation strategies to generate reproducible features [15,16].
2.2.1. Heterogeneity of acquisition devices (inter-device variability)
Heterogeneity of acquisition devices leads to variance in multicentre data, which is mainly observed in signals, CT, MRI, and pathological images. This heterogeneity is mainly introduced by the different detector systems of vendors, the sensitivity of the coils, positional and physiologic variations during acquisition, and magnetic field variations in MRI, amongst others [17–20]. Studies have shown that even when using a fixed acquisition protocol on different brands of scanners, some radiomics features are still non-reproducible. For instance, Berenguer et al. [21] explored the reproducibility of radiomics features on five different scanners with the same acquisition protocol and observed large differences: depending on the scanner pair, only 16% to 85% of the radiomics features were reproducible. Sunderland et al. [22] explored the large variance of the standard uptake value (SUV) across different brands of scanners, finding a much higher maximum SUV on newer scanners compared with older ones.
2.2.2. Heterogeneity of acquisition protocols (intra-device variability)
Different acquisition protocols are the main reason for cross-cohort variability. They mainly include the scanning parameters (e.g., voltage, tube current, field of view, slice thickness, microns per pixel, etc.) and reconstruction approaches (e.g., different reconstruction kernels) [35]. To investigate the intra/inter reproducibility of radiomics features, several studies have been conducted with test-retest experiments (Table 3). In Table 3, good reproducibility/repeatability is defined as a high correlation coefficient (e.g., ICC, CCC, R²) or a low difference (e.g., mean difference, CoV) between two features. For instance, a certain radiomics feature is considered reproducible/repeatable when the CCC between features extracted from two repeated scans is larger than 0.90. As shown in Table 3, the scanning parameters notably affect the radiomics features, making statistical analysis difficult. For instance, only 15.2% of radiomics features are reproducible when soft and sharp kernels are used during reconstruction [34]. This weak reproducibility greatly hinders large-scale digital healthcare studies and applications of computational models. Although implementing a strict standard protocol can reduce non-biological variances, non-standard acquisition protocols are needed by physicians for personalised, centre-based image quality considerations. For instance, the slice thickness and pixel size are regularly adjusted on a case-by-case basis to improve the data quality [36]. Therefore, the heterogeneity of acquisition protocols is unavoidable, which requires a general solution.
Table 3.
Summary of the reproducibility/repeatability studies.
Reference | Intra-repro | Inter-repro | Repeatability | Condition | Variables | Object | Modality
Jha et al. [23], 2021 | 30.7% (332/1080) | 14.3% (154/1080) | 82.2% (888/1080) | ICC>0.90 | Slice Thickness | Phantoms | CT
Emaminejad et al. [24], 2021 | 8.0% (18/226) | / | / | CCC>0.90 | Reconstruction | Patients | CT
 | 7.5% (17/226) | / | / | CCC>0.90 | Radiation Dose | Patients | CT
Kim et al. [25], 2021 | 11.0% (112/1020) | / | / | CCC>0.85 | Acceleration Factors | Patients | MRI
Yamashita et al. [26], 2020 | / | 5.6% (15/266) | / | CCC>0.90 | Different Scanners | Patients | CECT
Fiset et al. [27], 2019 | / | 22.6% (398/1761) | / | ICC>0.90 | Different Scanners | Patients | MRI
Saeedi et al. [28], 2019 | 20.5% (8/39) | / | / | CoV<5% | Tube Voltage | Phantoms | CT
 | 30% (13/39) | / | / | CoV<5% | Tube Current | Phantoms | CT
Meyer et al. [29], 2019 | 20.8% (22/106) | / | / | R² | Radiation Dose | Patients | CT
 | 52.8% (56/106) | / | / | R² | Reconstruction | Patients | CT
 | 39.6% (42/106) | / | / | R² | Reconstruction | Patients | CT
 | 12.3% (13/106) | / | / | R² | Slice Thickness | Patients | CT
Perrin et al. [30], 2018 | 24.8% (63/254) | / | / | CCC>0.90 | Injection Rates | Patients | CECT
 | 13.4% (34/254) | / | / | CCC>0.90 | Resolution | Patients | CECT
Midya et al. [31], 2018 | 11.7% (29/248) | / | / | CCC>0.90 | Tube Current | Phantoms | CT
 | 19.8% (49/248) | / | / | CCC>0.90 | Noise | Phantoms | CT
 | 63.3% (157/248) | / | / | CCC>0.90 | Reconstruction | Patients | CT
Altazi et al. [32], 2017 | 21.5% (17/79) | / | / | Mean difference <25% | Reconstruction | Patients | PET
Zhao et al. [13], 2016 | 11.2% (10/89) | / | / | CCC>0.90 | Reconstruction | Patients | CT
 | / | / | 69.7% (62/89) | CCC>0.90 | / | Patients | CT
Hu et al. [33], 2016 | / | / | 64.0% (496/775) | ICC>0.80 | / | Patients | CT
Choe et al. [34], 2019 | 15.2% (107/702) | / | / | CCC>0.85 | Reconstruction | Patients | CT
CCC: concordance correlation coefficient; ICC: intraclass correlation coefficient; CoV: coefficient of variation; R²: R-squared; CT: computed tomography; MRI: magnetic resonance imaging; CECT: contrast-enhanced computed tomography; PET: positron emission tomography.
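The test-retest criterion in Table 3 can be made concrete with a short sketch. The snippet below (synthetic data; the 0.90 cut-off mirrors the Condition column) computes Lin's concordance correlation coefficient for a single feature measured on two repeated scans; function and variable names are illustrative.

```python
# Minimal sketch of the Table 3 test-retest criterion: a feature is deemed
# repeatable when Lin's CCC between two repeated measurements exceeds 0.90.
import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(0)
scan1 = rng.normal(0.0, 1.0, 100)           # feature values from the first scan
scan2 = scan1 + rng.normal(0.0, 0.1, 100)   # re-scan with a small perturbation
print(lins_ccc(scan1, scan2) > 0.90)        # True: feature counts as repeatable
```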
2.2.3. Heterogeneity of laboratory preparations (Preparation variability)
Gene expression, radiomics, and pathological data all heavily suffer from laboratory variances, including sample preparation, assay, slicing, and staining. For single-cell RNA sequencing (scRNA-seq) and microarray data, there are various analysis platforms with different biases, making it difficult to integrate and compare results from multiple centres/batches of data [37,38]. For radiomics data, variances such as injection rate and radiation dose may also affect the data quality. For pathology data, variances mainly arise from manual operations [39,40] (e.g., biopsy sectioning, sample fixation, dehydration and stain concentration); all these factors result in variation of pixel values and stain consistency.
2.3. What for?
Large scale and longitudinal studies. The challenges of integrating and utilising multicentre datasets have made researchers realise the importance of data harmonisation when conducting large-scale studies [41]. On the one hand, information fusion without harmonisation cannot achieve reproducible results in large scale and longitudinal studies [13,31,42]. Some researchers have advised that the conclusions reached must be treated with caution, since some features can vary greatly under minor non-biological changes [43]. On the other hand, data harmonisation is critical for patients who are monitored longitudinally and imaged on different scanners. For instance, longitudinal PET scans cannot provide helpful information if they are gathered from multiple scanners, since the relationship between SUV and outcomes may be concealed [16].
Transferability of computational models. Unstable performance has been found when applying computational models to multicentre datasets [44]. To address this issue, transfer learning was proposed to enhance the robustness of computational models by holding a priori knowledge of the ways data can vary. It feeds the model with further data reflecting the variability that the model may encounter at inference time. However, transfer learning requires extra training samples to reduce the uncertainty with respect to the variability of data that models can cope with, which can be inapplicable for prospective studies in the digital healthcare field. Different from transfer learning, computational data harmonisation strategies can process the data without extra training or fine-tuning, providing an applicable solution for multicentre studies. Meanwhile, there has been mounting evidence that combining data harmonisation with machine learning algorithms enables robust and accurate predictions on multicentre datasets [45].
2.4. How?
The deployment of a computational method includes preparation (acquiring datasets, e.g., staining and scanning), pre-processing, modelling and analysis, while data harmonisation can be performed through the processing of images/signals/gene matrices (i.e., sample-wise) or the alignment of data-derived features (i.e., feature-wise). Sample-wise harmonisation is usually conducted before modelling, aiming to reduce the cohort variance of all training samples and fuse multicentre samples into a single dataset. It involves image processing, synthesis and invariant feature learning approaches. After acquiring cohort-invariant data, a single model can be developed for clinically related tasks. Feature-wise harmonisation aims to reduce the bias of extracted features, such as the GLCM or the convex hull area of the region of interest. It is usually performed on extracted feature matrices, eliminating the cohort variances by fusing the extracted features (shown in the bottom-left subfigure of Fig. 2, where the red and blue dots indicate samples from different cohorts). Both sample-wise and feature-wise data harmonisation can effectively reduce the variances and improve the performance of the analysis. However, feature-wise harmonisation requires several models to extract the features of interest, leading to complex model development. Moreover, when the number of samples in each cohort is small, it is hard to develop the corresponding models.
Fig. 2.
Workflow of developing a computational data harmonisation method.
3. Methods
3.1. Literature search and review
The literature search, selection and recording were conducted independently by two researchers with experience in computer science and biomedicine. Agreement was then reached with a third reviewer with expertise in biomedical data analysis. All searches were performed on the Scopus Preview (Elsevier) database for publications up to July 10, 2021. To investigate the strategies of harmonisation for information fusion, we searched the literature using the keywords 'batch effect removal', 'deep learning' and 'harmonisation', 'data harmonisation', 'normalisation' and 'harmonisation', 'colour normalisation', 'reproducibility' and 'radiomics', and 'image standardisation'. These initial keywords were searched both independently and jointly to cover more literature. It is of note that both 'normalisation' and 'standardisation' are methods of harmonisation. Pre-screening was first conducted by viewing the abstract and title to filter out irrelevant articles. Eligibility was then checked against our criteria (given in Section 3.2) to remove unqualified works before full-text review.
A flowchart demonstrating the literature selection procedure is presented in Fig. 3. After removing the irrelevant and duplicated articles by screening the titles and abstracts, 238 articles were selected for full-text screening. Based on eligibility criteria, 139 publications were considered unqualified, and 96 papers were included in this systematic review.
Fig. 3.
Literature selection procedure.
3.2. Inclusion and exclusion criteria
The inclusion criteria were: (1) original research publications in peer-reviewed journals or international conferences; (2) a focus on the computational data harmonisation of digital healthcare data. The exclusion criteria were: (1) studies that only applied existing harmonisation strategies without further development; (2) studies that focused on manual harmonisation such as regulations; (3) review and literature survey studies; (4) studies that only explored reproducibility or stability without developing harmonisation approaches.
3.3. Data collection
Details of papers for quality review were manually summarised in a spreadsheet, including title, modality, methodology, metrics, data scale, year of publication, data property (e.g., private or public), applications, number of cohorts, and number of cases.
4. Data harmonisation strategies for information fusion
In this systematic review, data harmonisation approaches were divided into four groups: distribution based methods, image processing, synthesis, and invariant feature learning. To better illustrate the basic idea and relationships of the computational approaches, a taxonomy is shown in Fig. 4, followed by a detailed description of the harmonisation techniques.
Fig. 4.
Taxonomy of computational data harmonisation strategies.
4.1. Distribution based methods
The distribution based methods estimate/calculate the bias between cohorts from the latent space, then match/map the source data to the target ones through a bias correction vector or alignment functions.
4.1.1. Location-scale methods (LS)
The location-scale methods estimate the location-scale parameters (mean and variance) of each cohort and align all data towards the same location-scale.
ComBat: ComBat [46] robustly estimated both the mean and the variance of each batch using empirical Bayes shrinkage, then harmonised the data according to these estimates. The data was first standardised to have similar overall mean and variance, followed by the empirical Bayes estimation via parametric empirical priors. With these adjusted bias estimators, the data could be harmonised by the location-scale model based functions [47–66]. For instance, Radua et al. applied ComBat to address the heterogeneity of cortical thickness, surface area and subcortical volumes caused by various scanners and sequences [53]. Whitney et al. implemented ComBat to harmonise the radiomic features extracted across multicentre DCE-MRI datasets [54].
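To illustrate the location-scale idea, the sketch below aligns each cohort's per-feature mean and variance to the pooled statistics. It deliberately omits ComBat's empirical Bayes shrinkage, so it is a simplified stand-in rather than ComBat itself; for real studies, established implementations (e.g., the neuroCombat package) are preferable.

```python
# Simplified location-scale harmonisation (no empirical Bayes step):
# `features` is (n_samples, n_features); `batch` holds one cohort label per row.
import numpy as np

def location_scale_harmonise(features: np.ndarray, batch: np.ndarray) -> np.ndarray:
    x = features.astype(float)
    grand_mean = x.mean(axis=0)
    grand_std = x.std(axis=0) + 1e-8     # epsilon guards constant features
    out = np.empty_like(x)
    for b in np.unique(batch):
        idx = batch == b
        b_mean, b_std = x[idx].mean(axis=0), x[idx].std(axis=0) + 1e-8
        # standardise within the cohort, then restore the pooled location-scale
        out[idx] = (x[idx] - b_mean) / b_std * grand_std + grand_mean
    return out
```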
ComBat-seq: Researchers have made more extensions based on the original ComBat harmonisation. Since the assumption of Gaussian distribution in the original ComBat made it sensitive to outliers, Zhang et al. proposed ComBat-seq [67] by assuming the Negative Binomial distribution, which could better address the outlier issues. The ComBat-seq first built a negative binomial regression model and obtained the estimators of cohort bias, followed by the calculation of ‘batch free’ distributions for mapping original data.
BM-ComBat: Different from the original ComBat that shifted samples to the overall mean and variance, an M-ComBat [68] was proposed to provide a flexible solution, transferring the data to the location and scale of a pre-defined “reference”. With these efforts, Da-ano et al. [69] proposed a BM-ComBat by introducing a parametric bootstrap in M-ComBat for robust estimation, aiming to provide a more flexible and robust harmonisation strategy.
QN—ComBat: Müller et al. [70] applied a quantile normalisation before ComBat correction in longitudinal gene expression data to achieve better performance.
Distance-Weighted Discrimination (DWD): DWD [71] searched the hyperplane where the samples could be well separated and projected the different batches on the DWD plane. The data was then harmonised by subtracting the DWD plane multiplied by the batch mean. It is of note that DWD repeated the translations of samples from different cohorts until their vectors were overlapped.
4.1.2. Iterative clustering methods (IC)
The iterative clustering methods reduce cohort bias by conducting multiple bias corrections through repeated clustering procedures. These methods usually (1) cluster all samples from the different cohorts, and (2) compute the correction vectors for harmonisation based on the cluster centroids.
Cross-platform normalisation (XPN): XPN [72] took the combined standardised sample and median central gene as input to remove gross systematic differences, followed by clustering, aiming to identify homogeneous groups of genes and samples with similar expressions in the combined data. The gene clusters were then acquired by an assignment function, which was used to compute estimated model parameters via standard maximum likelihood.
Harmony: Harmony [73] first employed principal components analysis (PCA) to reduce the dimension of all samples, and classified them into several groups (one centroid per group) through k-means clustering. With these centroids, the correction factors for harmonisation were calculated. The above clustering and correction steps were repeated until convergence.
4.1.3. Nearest neighbours methods (NNM)
NNM methods first found the mutual nearest pairs, then computed the bias correction vectors based on the paired samples and subtracted these vectors from the source cohort. The differences between these methods mainly lie in the geometric space used when locating the mutual nearest pairs.
Mutual nearest neighbours (MNN): MNN identified the nearest neighbours between different cohorts and treated them as anchors to calculate the cohort bias [74]. It first pre-normalised the gene data with cosine normalisation, followed by estimation of the bias correction vector by computing the Euclidean distances between paired samples. The bias correction vector was then applied to all samples instead of only the participating pairs. It required that every participating batch share at least one common cell type with another.
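A toy version of the MNN idea is sketched below: locate mutually nearest pairs between two cohorts and subtract an averaged correction vector. The reference method [74] additionally applies cosine normalisation and Gaussian-smoothed, sample-specific corrections, which this simplified sketch omits.

```python
# Toy MNN correction: find mutual nearest pairs, average their differences,
# and shift every source sample by that single global correction vector.
import numpy as np
from scipy.spatial.distance import cdist

def mnn_correct(source: np.ndarray, target: np.ndarray, k: int = 5) -> np.ndarray:
    d = cdist(source, target)                  # (n_source, n_target) distances
    src_nn = np.argsort(d, axis=1)[:, :k]      # k nearest targets per source
    tgt_nn = np.argsort(d, axis=0)[:k, :].T    # k nearest sources per target
    pairs = [(i, j) for i in range(len(source)) for j in src_nn[i]
             if i in tgt_nn[j]]                # mutually nearest pairs (anchors)
    if not pairs:
        return source                          # no anchors: leave data unchanged
    correction = np.mean([target[j] - source[i] for i, j in pairs], axis=0)
    return source + correction                 # applied to all source samples
```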
Scanorama: Similar to the MNN method, panorama stitching (Scanorama) [75] aimed to estimate cohort bias from samples across batches. It first reduced the dimensionality of the raw (source) data using singular value decomposition (SVD). Then approximate nearest neighbour search was adopted to find the mutually linked samples across cohorts. Different from MNN, Scanorama checked the priority of dataset merging within all batches and acquired the merged panorama based on the weighted average of batch correction vectors. Finally, the harmonisation was performed with the Scanpy [76] workflow.
Batch balanced k-nearest neighbours (BBKNN): Initially, BBKNN [77] found the nearest neighbours in a principal component space based on Euclidean distances. Then it built a graph that linked all the samples across cohorts based on the neighbour information. These neighbour sets were then harmonised by the uniform manifold approximation and projection (UMAP) [78] algorithm.
Standard CCA and multi-CCA (Seurat): Different from other NNM methods, Seurat [79] performed canonical correlation analysis to acquire the canonical correlation vectors that project multiple datasets into their most correlated subspace. In this subspace, the mutual nearest pairs were located to compute the bias correction vectors that guide the data integration. When processing multi-cohort datasets (more than two cohorts), the first batch would be set as the reference for the correction of the second batch, and the harmonised second batch would then be appended to the reference batch. This procedure was repeated until all the batches were harmonised [38,79].
4.1.4. Remove unwanted variations (RUV)
These methods assumed that the cohort bias was independent of the biological variances and could therefore be estimated as "unwanted variations". For instance, the bias of negative control genes (genes known a priori not to be affected by the biological changes of interest) could be regarded as cohort bias. Based on this assumption, the raw data could be harmonised by subtracting those "unwanted variations".
Remove unwanted variations, 2-step (RUV-2): Control variables were used by RUV-2 to discover the factors related to cohort bias [80]. The negative control samples (probes that should never be expressed in any sample) were subjected to component analysis, and the resulting factors were incorporated into a linear regression model. Variations in the expression levels of these genes were thus considered unwanted. To extract low-dimensional features, Risso et al. [81] presented an extension of RUV-2 with a zero-inflated negative binomial model that accounted for dropouts, discretisation, and the count nature of the data. The cohort bias was then subtracted from the raw data to generate a harmonised gene expression matrix.
Singular value decomposition harmonic (SVDH): Singular value decomposition (SVD) could be used to reduce cohort bias by factorising the expression matrix of the input data and reconstructing it while removing the components related to the cohort bias. Alter et al. [82] suggested using SVD to harmonise the data by filtering away the eigenarrays that correspond to noise or experimental artefacts.
scMerge: scMerge [83] first constructed a graph that connected clusterings between cohorts by searching for mutual nearest neighbours. The unwanted factors were then estimated using stably expressed genes as negative controls. At last, an RUV model was used to collect and remove unwanted differences between cohorts.
Surrogate variable analysis (SVA): SVA [84] aimed to recognise and estimate the unwanted variations of data from multiple cohorts. It could be performed without any cohort information. The mixed dataset was first divided into a collection of n surrogate variables via SVD, followed by the clearance of data with large variances. SVA coefficients were then calculated for harmonisation by using a linear regression function with surrogate variables and raw diffusion intensities.
Print-tip loess normalisation (PLN): PLN [85] was initially proposed to deal with microarray data. To eliminate the cohort bias, PLN employed a blocking term to construct a linear model with the input data. The cohort bias was subtracted from the original data to produce the batch corrected expression matrix.
Removal of artificial voxel effect by linear regression (RAVEL): RAVEL [86] separated the voxel value into unwanted variation parts and biological parts. The unwanted variation factors were estimated from the region of interest by SVD, based on the prior knowledge of voxel values, which were not related to disease status.
4.1.6. Spherical harmonics (SH)
Spherical harmonics approaches were designed to harmonise MRI data, aiming to coordinate all data from different cohorts to the same spherical harmonic domain, by adjusting the spherical variables.
Rotation invariant spherical harmonics (RISH): RISH was based on mapping diffusion-weighted imaging data from source cohorts to target cohorts [17,66,87,88]. It started by calculating rotation-invariant features from the estimated spherical harmonics coefficients (of the target and source samples, respectively). These rotation invariant features were then mapped from the source cohorts to the target cohorts through region-specific linear mappings, followed by updating of the spherical harmonics coefficients. The harmonised diffusion signal was finally calculated for each subject in the source cohorts using the updated spherical harmonics coefficients along the gradient directions of the target cohorts.
Spherical moment harmonics: Due to the insufficient adjustment by location-scale parameters in some cases, researchers proposed the spherical moment method (SMM), which utilised the spherical moments to map the diffusion-weighted images from source cohorts to reference cohorts [89,90]. SMM matches the spherical mean $\mu$ and spherical variance $\sigma^2$ per b-value (the diffusion weighting), where $(\mu_{tar}, \sigma^2_{tar})$ and $(\mu_{src}, \sigma^2_{src})$ are computed from the target and source cohort data under the same shell, respectively. The mapping parameters for harmonising data from different cohorts were acquired by the linear transform $\hat{S} = \frac{\sigma_{tar}}{\sigma_{src}}\left(S_{src} - \mu_{src}\right) + \mu_{tar}$.
4.1.7. Distribution alignment (DA)
Distribution alignment methods aim to transform the distribution of the source cohort to that of the reference cohort, using cumulative distribution functions or probability density functions.
Cumulative distribution functions alignment (CDFA): CDFA [91] was first proposed for multisite MRI data harmonisation, which aligned the source voxel intensities through an estimated non-linear intensity transformation to match the target cumulative distribution functions. The estimated intensity transformation defined a one-to-one mapping between the voxels in source and target cohorts.
Gamma cumulative distribution functions alignment (GCDF): The voxel intensities were re-parameterised using a mixture model of two Gamma distributions that fitted a reference histogram [92]. This reparameterisation was based on the CDF of the Gamma component, which modelled the particular uptake, and constrained the new feature space to [0, 1].
Probability density function matching: GENESHIFT [93] estimated the empirical density and measured the distance between probability density functions. GENESHIFT first picked the common genes from different cohorts, then estimated their probability density functions to find the best matching offsets. The harmonised data would be acquired by subtracting the estimated offsets from the source cohorts.
4.2. Image processing
Image Processing employs digital image processing algorithms to harmonise multi-cohort data, including image filtering (also called image convolution), registration, resampling and normalisation.
4.2.1. Image filtering (IF)
Image filtering (also called convolution) is the process that multiplies two arrays to produce a new array of the same dimension. The 2D second-order Butterworth low-pass filter was found to be able to eliminate cohort bias between CT images with different voxel sizes [94], while the local binary pattern filtering could produce stable and reproducible radiomic features [95].
4.2.2. Physical-size resampling (Resample)
Studies have shown that physical size such as pixel/voxel size, mpp (microns per pixel of level 0 in digital pathology) can greatly affect the radiomic/pathological features. This bias can be reduced using bilinear resampling to equalise all the physical sizes [94].
4.2.3. Standardisation/normalisation (SN)
Standardisation/normalisation models were designed to reduce the variation and inter-variability of different cohorts by linear transforms. These methods usually performed location-scale shifts in image spaces (e.g., HSV, RGB, lαβ, illumination spaces, etc.) or on image histograms.
Global colour normalisation (GCN) transfers the colour statistics from the source to the target images by globally altering the image histogram [96,97]. A typical representative of GCN is Z-score normalisation: assuming the variable from cohort $i$, subject $j$ is $x_{ij}$, z-score normalisation is conducted through

$$\hat{x}_{ij} = \frac{x_{ij} - \mu_i}{\sigma_i} \tag{1}$$

where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of each cohort. However, this global alignment may lose some information.
Local colour normalisation (LCN) transfers the colour statistics of specific regions, e.g., ignoring the background regions, from source to target images. In [98], the authors first converted the source and target images from the RGB space into the lαβ space, then conducted a transformation to harmonise the source image and re-converted it into the RGB space. It is of note that the luminance of background regions is not involved during the processing. This helps the transformation preserve intensity information within the region of interest, while requiring the pre-definition of certain regions.
Histogram matching (HM): HM is a method of contrast adjustment using the histogram of images [99]. It adjusts the distribution of images by scaling the pixel values to fit the range of the specified (i.e., target) histogram:

$$I_{tar} = a + \frac{\left(I_{src} - \min(I_{src})\right)(b - a)}{\max(I_{src}) - \min(I_{src})} \tag{2}$$

where $I_{tar}$ indicates the target image and $I_{src}$ is the source image. Generally, $a$ and $b$ are 0 and 255, respectively. For instance, Shah et al. [100] investigated histogram normalisation on MRI images to harmonise cross-cohort data for multiple sclerosis lesion identification.
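A quantile-mapping sketch of histogram matching follows: source pixel intensities are remapped so that their empirical CDF matches that of the target image. An equivalent routine ships with scikit-image as skimage.exposure.match_histograms.

```python
# Histogram matching via empirical CDF (quantile) mapping, fitting source
# intensities to the target histogram.
import numpy as np

def match_histogram(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    tgt_values, tgt_counts = np.unique(target.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size   # empirical CDF of the source
    tgt_cdf = np.cumsum(tgt_counts) / target.size   # empirical CDF of the target
    # for each source quantile, look up the target intensity at that quantile
    mapped = np.interp(src_cdf, tgt_cdf, tgt_values)
    return mapped[src_idx].reshape(source.shape)
```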
Fuzzy based Reinhard colour normalisation (FRCN): To decrease the colour variation, Roy et al. [101] applied fuzzy logic to regulate the contrast enhancement in the lαβ space and adjust the colour coefficients within that space.
Category based colour normalisation (CategoryCN): To reduce the variance of global colour normalisation, researchers proposed a category based approach for accurate colour normalisation [102]. CategoryCN first classified each pixel by unsupervised approaches from the source and target images, then conducted colour normalisation based on the different classes.
Complete colour normalisation (CCN): The complete colour normalisation included the normalisation of illumination and spectrum, one to harmonise the illuminant during imaging and another to reduce spectral variation [39,103]. CCN estimated the illuminant and spectral matrices from the target cohort, then matched the source illuminant and spectral estimations to the target ones.
4.2.4. Stain separation methods (SS)
Stain separation approaches separated the input images into distinct channels (e.g., the haematoxylin channel, eosin channel, and the background channel for H&E-stained images) to evaluate the stain feature matrix and match these features from the source to the target cohort data through certain operations. The core concept of stain separation was based on the Beer-Lambert law [104] (in the RGB space, stain concentrations are nonlinearly dependent), shown as

$$OD = -\log_{10}\left(\frac{I}{I_{0}}\right) \tag{3}$$

where $I_0$ was the value of the incident light, $I$ the transmitted light, and $OD$ the value of images in optical density (OD) space. Most stain separation methods aimed to factorise the OD values into two matrices as

$$OD = D \times S \tag{4}$$

where $S$ was the stain depth matrix and $D$ was the stain colour appearance (SCA) matrix.
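The Beer-Lambert relation of Eq. (3) translates into a few lines of code; the +1 offset below is a common practical guard (an assumption of this sketch, not part of Eq. (3)) against taking the logarithm of zero-valued pixels.

```python
# RGB <-> optical density conversion for 8-bit images (I0 = 255); stain
# separation methods then factorise the resulting OD matrix as in Eq. (4).
import numpy as np

I0 = 255.0  # intensity of the incident light

def rgb_to_od(img: np.ndarray) -> np.ndarray:
    return -np.log10((img.astype(float) + 1.0) / I0)

def od_to_rgb(od: np.ndarray) -> np.ndarray:
    return np.clip(I0 * np.power(10.0, -od) - 1.0, 0, 255).astype(np.uint8)
```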
Colour deconvolution (CD): These approaches estimated the concentration of stains in pixel values and normalised the spectral variation in the separated stains [105–108]. For example, an estimate of the stain matrix was first obtained by evaluating the proportion of RGB channels within different cohorts, followed by colour deconvolution [106,107]. The inverse of the staining appearance matrix was multiplied with the optical density space intensity values to get normalised stain channels using non-linear spline mapping.
Structured-preserving colour normalisation (SPCN): SPCN assumed that most tissue regions were characterised by the most effective stain amongst the used stains [109]. It first converted a given RGB image to optical density using the Beer-Lambert Law. After that, SPCN decomposed images into several stain density maps using sparse and non-negative matrix factorization (SNMF), followed by the combination of the stain density map and colour normalisation.
StainCNNs: Inspired by SPCN, Lei et al. proposed a deep neural network for stain separation to reduce the computational consumption of SNMF [110]. The proposed stainCNNs approach took the source images as input and learned to generate the stain colour appearance matrix. It significantly reduced the processing time while retaining the high quality of the harmonised images.
Adaptive colour deconvolution (ACD): ACD first transferred the input RGB images to optical density space, then performed stain separation with adaptive colour deconvolution matrix to obtain the haematoxylin (H) channel, eosin (E) channel and residual channel [111]. At last, the harmonised images were obtained through recombining the H and E components with a stain colour appearance matrix of target cohorts.
Rough-fuzzy circular clustering based stain separation (RCCSS): In RCCSS, stain separation was carried out using an image model based on transmission light microscopy [112]. Initially, each image was transferred to OD space and then decomposed to obtain the SCA matrix and the associated stain depth matrix. Maji et al. [113] presented a circular clustering algorithm to find the 'centroid', a 'crisp lower approximation', and the 'fuzzy boundary', which could be integrated via a saturation-weighted hue histogram in the HSI colour space.
4.3. Synthesis
The objective of synthesis is to precisely reproduce a sample that belongs to a missing modality or domain, which harmonises the multi-cohort datasets. It recasts the harmonisation task as style transfer, considering each cohort as a 'style' and transferring all samples to the same 'style'. Based on the characteristics of the training samples, synthesis methods are divided into paired synthesis and unpaired synthesis.
4.3.1. Paired sample-to-sample synthesis (P-s2s)
P-s2s methods are trained using paired samples generated from the same object acquired using different protocols. These methods aim to learn the data transfer between source and reference cohorts, which require the repeated acquisition of the same subject under different protocols. Therefore, they can only be applied to radiomic data since the repeated acquisition for the same subject is impossible for gene expression and pathology.
Multi-layer perceptron harmonic (MLPH): In 2009, a pilot architecture of the autoencoder-related method was proposed by Cheng et al. [114] to generate the harmonised data by learning the nonlinear transform function.
Spherical harmonic network (SHNet): Golkov et al. [115] presented a cascaded fully connected network that employs ReLU and Batch normalisation to harmonise the diffusion MRI scans. Inspired by SHNet, Koppers et al. [116] applied the residual structure to improve the robustness while avoiding overfitting.
Deep rotation invariant spherical harmonics (Deep-RISH): Karayumak et al. [117] proposed a deep learning based non-linear mapping approach that utilises RISH features to map the raw signal (dMRI data) between scanners with the same fibre orientations. Deep-RISH was composed of five convolution layers, which took the 9 × 9 RISH feature patches as the input.
DeepHarmony: DeepHarmony was proposed to produce data with consistent contrast within different cohorts [118]. It employed a U-Net based architecture, taking data from the source cohort and producing harmonised data of the target cohort.
Deep harmonics for diffusion kurtosis imaging (Deep HDKI): Tong et al. [119] carried out a concise architecture with three 3D-convolution layers for diffusion kurtosis images (DKI). The paired data was generated using an iterative linear least squares technique and was non-linearly registered to diffusion-weighted images acquired on the target scanner using computational tools. The neural network was then trained on the paired samples for harmonisation.
Deep harmonics for slice thickness (Deep HST): Park et al. [120] studied the reproducibility of radiomic features in lung cancer under different slice thicknesses and proposed an end-to-end deep neural network to generate harmonised CT data between 1-, 3-, and 5-mm slice thickness.
Deep harmonics for reconstruction kernel (Deep HRK): Choe et al. [34] explored the influence of different reconstruction kernels on radiomic features and presented a CNN with residual learning to transfer the data from the soft kernel (B30f) to the sharp kernel (B50f).
Distribution-matching residual network (MMD-ResNet): Shaham et al. [121] presented a comprehensive multi-layer perceptron for harmonisation with residual connections [122] and batch normalisation [123]. Given two cohorts of data $X$ and $Y$, MMD-ResNet aimed to learn a map $f$ by minimising the maximum mean discrepancy [124] between $f(X)$ and $Y$. It is of note that this was a 'one-way street' distribution matching for harmonisation and required re-training for the inverse transformation.
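The quantity MMD-ResNet minimises can be sketched directly. The snippet below computes a (biased) estimate of the squared MMD with a Gaussian RBF kernel between two sample sets; the bandwidth sigma is an arbitrary choice here, not a value from [121].

```python
# Biased estimate of squared maximum mean discrepancy with an RBF kernel:
# MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
import numpy as np
from scipy.spatial.distance import cdist

def mmd2_rbf(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    def k(a, b):
        return np.exp(-cdist(a, b, "sqeuclidean") / (2 * sigma ** 2))
    return float(k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean())
```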
Pulse sequence information based contrast learning on neighbourhood ensembles (PSI-CLONE): PSI-CLONE [125] first calculated sequence parameters from the source cohorts, then applied them to the reference cohorts to produce source-style data. By training a regression model to learn the nonlinear mapping between the synthesised source-style data and the reference data, the source cohorts could be harmonised effectively. Based on PSI-CLONE, Jog et al. [126] applied multi-scale feature extraction to improve the performance.
4.3.2. Unpaired sample-to-sample synthesis (Up-s2s)
Up-s2s approaches generate the harmonised data by cycle-consistent generative adversarial networks or conditional variational autoencoder-decoder, which require sufficient samples and cohort labels from different cohorts for network training.
Cycle-consistent generative adversarial networks (CycleGAN): Most synthesis methods of unpaired sample-to-sample translation were based on CycleGAN [127,128] and its derivatives [62,129,130]. In [130], a CycleGAN with Markovian discriminator was applied to harmonise the diffusion tensor data, which was designed to further improve the ability to capture local information.
Conditional variational autoencoder-decoder (Conditional VAE): The variational autoencoder (VAE) is commonly used in data synthesis, dimensionality reduction, and feature refinement tasks. It employs an encoding network $E_{\phi}$ to decompose the input high dimensional data $x$ into a hidden representation $z$, and a decoding network $D_{\theta}$ to reconstruct the raw data $x$, where $\phi$ and $\theta$ are the parameters of $E$ and $D$. The conditional VAE modifies the decoder into a conditional decoder that maps the latent variable $z$ and a specified cohort $c$ back to harmonised data $\hat{x}$. By integrating the conditional VAE with an adversarial module, cohort transfer can be performed without paired training samples. Several studies have used the conditional VAE for data harmonisation (a minimal decoder sketch follows this list), including:
(1) SH-VAE [131] performed cohort bias correction of diffusion-weighted MRI with a conditional VAE to produce cohort-invariant encodings. Different from other conditional VAE based methods, SH-VAE took spherical harmonics coefficients as input and output.
(2) stVAE [132] applied a conditional VAE with Y-Autoencoders (an additional classification head in the encoder) and adversarial feature decomposition for single-cell RNA sequencing.
(3) scAlign [133] performed harmonisation by learning a duplex mapping of cell sequences between different cohorts in a low dimensional latent space. This mapping enabled the model to estimate a representation of certain samples under data from different cohorts. Besides, it employed the "association learning" method [134] to walk through the embeddings generated by a neural network with data from different cohorts. Association learning enabled the network to extract embeddings that capture the essence of the input data, circumventing the paired annotations required by paired synthesis. With these essence embeddings, scAlign applied a decoder to synthesise the harmonised data.
(4) iMAP [135] first presented an autoencoder architecture to learn the cohort-invariant features, then used these features to build MNN pairs via a random walk strategy. This autoencoder included one encoder $E$ and two generators $G_1$ and $G_2$, with two inputs (gene expression vectors and cohort labels) and two outputs ($G_1$ generating the cohort variations and $G_2$ reconstructing the original input). With the defined MNN pairs, a GAN model was used to produce the cohort-invariant samples.
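As referenced above, the conditional decoding step common to these methods can be sketched as a minimal PyTorch module; layer sizes, names, and the one-hot cohort encoding are illustrative assumptions, not the architecture of any of the four cited methods.

```python
# Minimal conditional VAE decoder: concatenating the cohort label c with the
# latent code z lets the same biological content be decoded in any cohort style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalDecoder(nn.Module):
    def __init__(self, latent_dim: int, n_cohorts: int, data_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_cohorts, 128),
            nn.ReLU(),
            nn.Linear(128, data_dim),
        )

    def forward(self, z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, c], dim=-1))

decoder = ConditionalDecoder(latent_dim=16, n_cohorts=3, data_dim=2000)
z = torch.randn(8, 16)                                  # cohort-invariant codes
c = F.one_hot(torch.zeros(8, dtype=torch.long), 3).float()
x_hat = decoder(z, c)                                   # all decoded to cohort 0
```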
4.4. Invariant feature learning
The invariant feature learning techniques are meant to learn the cohort-invariant features from different cohorts of data, then apply these features for the main task (e.g., segmentation, classification, regression). The concept behind representation learning approaches for harmonisation is that if a sparse dictionary/mapping can be built from data of different cohorts, these learnt representations will not include inter/intra cohort variability.
4.4.1. Dictionary learning (DictL)
Sparse dictionary learning (SDL): SDL [136,137] was a representation learning approach that aimed to reduce the complexity of the harmonisation task by decomposing the input data as a linear combination of components. SDL could be applied to identify the cohort-invariant features to reconstruct the raw data from a huge number of random features [170].
Unsupervised colour representation learning (UCRL): UCRL [138] first estimated the sparsity parameter based on SPCN, then employed a robust dictionary learning method [139] to acquire the stain colour appearance matrix. By taking the stain centroid estimation as an L1-regularised linear least-squares task, the stain mixing coefficients map of the source data was combined with the colour appearance matrix of the reference data.
4.4.2. Autoencoder based methods (AE)
DESC: DESC [140] trained a VAE to obtain the cohort-invariant feature embeddings, then iteratively optimised a clustering loss function to group the cohort data. The Louvain clustering [141], which aimed to improve modularity for community detection, was used to initialise the cluster centres.
BERMUDA: BERMUDA [142] first applied a graph based clustering to data from different cohorts individually, followed by a method (named MetaNeighbor) to identify similar clusters between cohorts to get the initial unaligned comprehensive dataset. An autoencoder was then built to reconstruct the input data while producing invariant feature embeddings in the low dimensional latent space. These feature embeddings were cohort-invariant and can be used for further analysis.
4.4.3. Adversarial learning methods (AdvL)
The adversarial learning methods develop a learning system that focuses on scanner/protocol invariant features while simultaneously maintaining performance on the main task of interest, thus reducing the cohort bias in predictions. These methods [143–146] were usually composed of an adversarial module for cohort identification, a backbone for feature extraction, and a main task head for classification, regression, and/or segmentation.
The adversarial learning methods used for harmonisation mainly had two structures, as shown in Fig. 5(a) and (b). For methods such as AD2AH (attention-guided deep domain adaptation harmonics) [143], DUH (deep unlearning harmonics) [144], and scDGN (single-cell domain generalisation network) [145], the adversarial module (domain classifier) was designed to assist the encoder in learning cohort-invariant features by maximising the adversarial loss while minimising the loss of the main task (Fig. 5(a)). To acquire a precise feature representation $z$, methods such as NormAE [146] added a decoder to reconstruct the input raw data by minimising a reconstruction loss $L_{rec}$, as shown in Fig. 5(b). By incorporating these optimisation functions, the main task could achieve stable performance when dealing with multi-cohort data.
Fig. 5.
Illustration of adversarial learning methods.
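A common way to realise the Fig. 5(a) structure is a gradient reversal layer, sketched below: the cohort classifier's gradient is negated before reaching the encoder, so the encoder unlearns cohort information while still serving the main task. Module sizes and names are illustrative assumptions.

```python
# Adversarial unlearning with a gradient reversal layer (GRL).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reversed gradient pushes the encoder towards cohort confusion
        return -ctx.lam * grad_output, None

encoder = nn.Linear(100, 32)      # feature extraction backbone
task_head = nn.Linear(32, 2)      # main task, e.g. diagnosis
cohort_head = nn.Linear(32, 4)    # adversarial cohort classifier

x = torch.randn(8, 100)
y_task, y_cohort = torch.randint(0, 2, (8,)), torch.randint(0, 4, (8,))
z = encoder(x)
loss = F.cross_entropy(task_head(z), y_task) \
     + F.cross_entropy(cohort_head(GradReverse.apply(z, 1.0)), y_cohort)
loss.backward()                   # encoder receives reversed cohort gradients
```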
5. Evaluation approaches of the data harmonisation strategies
This section explores evaluation approaches for harmonisation performance and divides them into distribution based, correlation based, value based, and task based metrics (Fig. 6). Distribution based metrics evaluate harmonisation performance by assessing the clusters or location-scale parameters amongst different cohorts. The correlation based and value based metrics assess the variability of data-derived features from different cohorts to test reproducibility. Cohort classification is also considered an evaluation method, demonstrating the harmonisation effect through the resulting cohort classification performance. Visualisation is another commonly used evaluation approach, which can directly show datasets before and after harmonisation.
Fig. 6.
Taxonomy of harmonisation metrics. The visualisation and cohort classification assessments are not presented due to their limited subcategories.
5.1. Distribution based evaluation
Distribution based metrics assess the harmonisation performance via calculating clustering or location-scale parameters. The clustering related metrics include the adjusted Rand index (ARI), the k-nearest neighbour batch-effect test (kBET), and the local inverse Simpson's index (LISI). The location-scale related metrics contain the structural similarity families, normalised median intensity and KL divergence.
5.1.1. Adjusted Rand index (ARI)
The adjusted Rand index is the corrected-for-chance version of the Rand index (RI) and can be used for harmonisation evaluations [140]. Given a set of $n$ elements and their predicted cluster assignments, the RI can be calculated through

$$RI = \frac{TP + TN}{TP + TN + FP + FN} \tag{5}$$

where $TP$ is the number of true positives, $TN$ the number of true negatives, $FP$ the number of false positives and $FN$ the number of false negatives. The ARI is illustrated as

$$ARI = \frac{RI - E(RI)}{\max(RI) - E(RI)} \tag{6}$$

where $E(RI)$ is the expectation of the RI. The ARI is bounded above by 1 (and can be negative when a clustering is worse than chance); a large ARI indicates that the cluster results are similar to the real labels.
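In practice the ARI is available off the shelf; a minimal usage example with scikit-learn (the labels are synthetic placeholders):

```python
# ARI of Eq. (6) via scikit-learn: compare cluster assignments of pooled
# samples against their true categories.
from sklearn.metrics import adjusted_rand_score

true_labels = [0, 0, 1, 1, 2, 2]
predicted   = [0, 0, 1, 2, 2, 2]
print(adjusted_rand_score(true_labels, predicted))
```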
5.1.2. k-Nearest neighbour batch-effect test (kBET)
The k-nearest neighbour batch-effect test was proposed to assess whether a distribution based harmonisation method can remove cohort bias while preserving biological variability [147]. kBET formulates a null hypothesis that the data are 'well mixed' and employs a χ²-based test on random fixed-size neighbourhoods to evaluate it. A low average rejection rate indicates good harmonisation performance and vice versa; determining whether the mean rejection rate surpasses a significance level allows the null hypothesis to be rejected for the whole dataset.
5.1.3. Local inverse Simpson's index (LISI)
LISI combines perplexity based neighbourhood construction with the inverse Simpson's index (ISI), which is sensitive to local diversity and can be well interpreted [73,135]. LISI builds Gaussian kernel based distributions of neighbourhoods via distance based weights and computes the local distributions at a fixed perplexity. Meanwhile, it uses the ISI to enhance interpretability, that is

$$ISI = \frac{1}{\sum_{x} p(x)^{2}} \tag{7}$$

where $p(x)$ denotes the batch probabilities in the local distributions.
5.1.4. Structural similarity families
Structural similarity index measure (SSIM) was designed to evaluate image quality degradation during data transmission by measuring the similarity between two samples [148]. It was initially proposed for grey level images and has been widely applied to evaluate harmonisation performance in digital pathology [62,101,108,118,126]. Assume $\mu_i$ and $\sigma_i^2$ as the average and variance of sample $i$, $\sigma_{xy}$ as the covariance between samples $x$ and $y$, and $c_1$, $c_2$ as the smoothing parameters. The SSIM can be described as

$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \tag{8}$$
To better assess the similarity of colour images, Kolaman et al. [149] proposed the quaternion structural similarity (QSSIM) to measure the size and direction of chrominance, luminance and degradation [109]. The feature similarity index (FSIM) utilises phase congruency and gradient magnitude features to evaluate low-level image visual quality [150]. The QSSIM and FSIM were employed in [110,138] and [105,110], respectively, to assess how well structure is preserved after the harmonisation process. Though most methods applied structural similarity related metrics for evaluation, studies have shown their limitations and weaknesses [151]. For instance, it has been reported that these metrics suffer from uniform pooling, distortion and instability, especially when measuring samples with hard edges or low-variance regions.
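SSIM is available off the shelf, e.g., in scikit-image; a minimal usage sketch, with random arrays standing in for a source/harmonised image pair:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
img_a = rng.random((256, 256))                           # 'source' image
img_b = img_a + 0.05 * rng.standard_normal((256, 256))   # 'harmonised' image
score = ssim(img_a, img_b, data_range=img_b.max() - img_b.min())
print(f"SSIM: {score:.3f}")
```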
5.1.5. Normalised median intensity (NMI)
Assume $u_i$ as the mean of the R, G, B values of the $i$-th pixel within the image; the NMI for assessing colour consistency is calculated as

$$\mathrm{NMI} = \frac{\mathrm{Median}(\{u_i\})}{P_{95}(\{u_i\})} \tag{9}$$

where $P_{95}$ denotes the 95th percentile [39,102,108,111,112,138,152]. The harmonisation strategy is effective when the median and near-maximum intensity values are close enough. Since the NMI does not consider the consistency of the ROI within the same biopsy set of S images, Maji et al. [113] presented an extension of the NMI, named the between-image colour constancy (BiCC) index, which measures colour constancy across the S images of the same biopsy set. The value of BiCC ranges from 0 to 1, and an efficient harmonisation algorithm for the image modality should make the value as high as possible.
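A minimal sketch of Eq. (9) on a synthetic RGB image (the function name is ours, not from the cited studies):

```python
import numpy as np

def nmi(image_rgb):
    """Normalised median intensity: the median of the per-pixel mean of
    the R, G, B channels, divided by its 95th percentile (Eq. (9))."""
    u = image_rgb.reshape(-1, 3).mean(axis=1)
    return np.median(u) / np.percentile(u, 95)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
print(f"NMI: {nmi(img):.3f}")
```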
5.1.6. Coefficient of variation (COV)
Given the mean $\mu$ and standard deviation $\sigma$, the coefficient of variation (COV) is defined as $\mathrm{COV} = \sigma/\mu$, which depicts the degree of variation with respect to the population mean [17,28,47,69,90,103,119,125,153]. The multivariate COV (MCOV) is used to quantify the variability of features between different cohorts, with a lower value indicating better reproducibility [154]. Assume $\mu_x$ and $\mu_y$ as the means of the features extracted from two different cohorts x and y, stacked as $\mu = (\mu_x, \mu_y)^{\top}$, and $\Sigma$ as the covariance matrix; the MCOV is computed via

$$\mathrm{MCOV} = \sqrt{\frac{\mu^{\top}\Sigma\,\mu}{(\mu^{\top}\mu)^{2}}} \tag{11}$$
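Minimal sketches of the COV and of the MCOV form assumed for Eq. (11); toy values stand in for real cohort features.

```python
import numpy as np

def cov(values):
    """Coefficient of variation: standard deviation over mean."""
    v = np.asarray(values, dtype=float)
    return v.std(ddof=1) / v.mean()

def mcov(mu, sigma):
    """Multivariate COV, sqrt(mu' Sigma mu / (mu' mu)^2), for the stacked
    cohort means mu and covariance matrix sigma (assumed form of Eq. (11))."""
    mu = np.asarray(mu, dtype=float)
    return np.sqrt(mu @ sigma @ mu / (mu @ mu) ** 2)

print(f"COV:  {cov([10.2, 9.8, 10.5, 10.1]):.3f}")
print(f"MCOV: {mcov([10.1, 9.9], np.array([[1.0, 0.2], [0.2, 1.5]])):.3f}")
```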
5.1.7. Kullback-Leibler divergence
Kullback-Leibler (KL) divergence measures how one probability distribution differs from another. Assume two discrete probability distributions p and q, each over k outcomes; the KL divergence is given by

$$D_{\mathrm{KL}}(p\,\|\,q) = \sum_{i=1}^{k} p_i \log\frac{p_i}{q_i} \tag{12}$$

It was applied as a metric for harmonisation strategies, with 0 indicating identical quantities of information between the two distributions [113,140]. For instance, Li et al. [140] applied the KL divergence to evaluate how randomly samples from different cohorts are mixed within each cluster.
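Eq. (12) is available as scipy.stats.entropy when a second distribution is passed, for example:

```python
import numpy as np
from scipy.stats import entropy

# KL divergence between two cohort-composition distributions;
# 0 indicates the two distributions are identical.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(f"KL(p||q): {entropy(p, q):.4f}")
```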
5.2. Correlation based evaluation
Reproducibility is usually measured through statistical analysis by calculating the correlation between data-derived features before and after harmonisation, including the concordance correlation coefficient, the intra-class correlation coefficient, etc.
5.2.1. Pearson correlation coefficient (PCC)
The Pearson correlation coefficient measures the linear correlation between two groups of variables X and Y, which is presented as

$$\rho_{X,Y} = \frac{\mathrm{cov}(X,Y)}{\sigma_X\,\sigma_Y} \tag{13}$$

where $\mathrm{cov}(X,Y)$ is the covariance and $\sigma$ indicates the standard deviation. The PCC ranges from −1 to 1, where 0 denotes no linear correlation between X and Y and ±1 represents a perfect (positive or negative) linear correlation. The PCC was used as an indicator to assess the similarity between the source and harmonised data [38,39,49,56,101,108,109].
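For example, with SciPy (toy feature vectors standing in for the source and harmonised data):

```python
from scipy.stats import pearsonr

source = [1.0, 2.1, 3.2, 4.0, 5.1]      # a feature before harmonisation
harmonised = [1.1, 2.0, 3.3, 3.9, 5.0]  # the same feature after harmonisation
r, p_value = pearsonr(source, harmonised)
print(f"PCC: {r:.3f} (p = {p_value:.3g})")
```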
5.2.2. Concordance correlation coefficient (CCC)
The CCC was proposed by Lin [155] to measure the agreement between two variables and has been used to assess reproducibility [34,61,94,95,120]. Different from the PCC, which only assesses the correlation between two groups of data, the CCC also measures how large the gap between the two groups is. Assume the two variables x and y have means $\mu_x$, $\mu_y$ and variances $\sigma_x^2$, $\sigma_y^2$; the CCC is given by

$$\mathrm{CCC} = \frac{2\rho\,\sigma_x\sigma_y}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2} \tag{14}$$

where $\rho$ is the Pearson correlation coefficient between x and y.
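A minimal NumPy implementation of Eq. (14):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (Eq. (14))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rho = np.corrcoef(x, y)[0, 1]
    return (2 * rho * x.std() * y.std()
            / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

print(f"CCC: {ccc([1, 2, 3, 4], [1.1, 2.0, 3.2, 3.9]):.3f}")
```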
5.2.3. Intra-class correlation coefficient (ICC)
Both the PCC and the CCC can only assess the correlation between two groups of data and cannot be applied to multiple cohorts. The ICC is utilised for data structured as groups rather than as paired observations; it is usually used to assess the variability across different protocols, imaging devices, or sites [47,91,93,95,156]. It is interpreted on a scale of [0, 1], with 1 illustrating perfect agreement and 0 indicating complete randomness. Essentially, the ICC employed for data harmonisation describes the confidence of how similar the variables are across cohorts; the one-way random model assumes there is no systematic bias [47]. Data from various cohorts are pooled and assessed within or across operators based on the analysis of variance (ANOVA). The one-way random model is given by

$$\mathrm{ICC} = \frac{MS_B - MS_W}{MS_B + (\bar{k} - 1)\,MS_W} \tag{15}$$

where $MS_B$ is the mean square between groups, $MS_W$ is the mean square within groups and $\bar{k}$ indicates the average group size.
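A minimal one-way ICC sketch following Eq. (15), on toy data; real studies would typically rely on a dedicated statistics package.

```python
import numpy as np

def icc_one_way(groups):
    """ICC from the one-way random-effects ANOVA (Eq. (15));
    `groups` is a list of 1-D arrays, one per group (e.g., subject)."""
    k_bar = np.mean([len(g) for g in groups])
    grand = np.concatenate(groups).mean()
    ms_b = sum(len(g) * (g.mean() - grand) ** 2
               for g in groups) / (len(groups) - 1)
    ms_w = sum(((g - g.mean()) ** 2).sum()
               for g in groups) / sum(len(g) - 1 for g in groups)
    return (ms_b - ms_w) / (ms_b + (k_bar - 1) * ms_w)

# One feature measured for three subjects, each scanned at three sites.
subjects = [np.array([5.1, 5.3, 4.9]),
            np.array([7.0, 7.4, 7.1]),
            np.array([6.2, 6.1, 5.8])]
print(f"ICC: {icc_one_way(subjects):.3f}")
```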
5.2.4. P-value
Some studies evaluate harmonisation effectiveness by computing the p-value given by paired hypothesis tests [17,47,49,51,52,54,55,66,74,79,80,87,99,102,107,117,120,121,130,157]. This statistical analysis is often conducted on paired regions of interest before and after harmonisation. In particular, a p-value < 0.05 indicates a significant difference between the data of different cohorts; an effective harmonisation strategy should remove such significant cross-cohort differences. For instance, Fortin et al. [58] analysed the number of voxels that are significantly related to cohorts, i.e., a voxel is counted when its p-value is less than 0.05.
5.2.5. Percentage of reproducible/nonreproducible features (PRF/PNF)
The percentage of nonreproducible features has been treated as an evaluation metric [58,64,65,70,91,158]; e.g., Mahon et al. [158] compared the percentage of significantly different features before and after ComBat harmonisation. Conversely, the percentage of reproducible features has also been considered as an evaluation metric of harmonisation performance [61].
5.3. Value based evaluation
The value based evaluation mainly assesses the intensity differences between the data or data-derived features before and after harmonisation. This usually requires a "ground truth" that can ideally reflect the harmonisation target; a low intensity difference indicates good harmonisation performance.
5.3.1. Mean absolute error (MAE)
The average absolute error of features (textural and clinical features) can be used to reflect harmonisation effects [52,53,57,58,62,63,66,101,114,118,144]; this usually requires the extraction of certain ROIs from the data before and after harmonisation. For instance, Wachinger et al. [63] evaluated the MAE in age prediction on the raw dataset and the ComBat-harmonised dataset to illustrate the effectiveness of ComBat. Dewey et al. compared the MAE between the synthesised and raw images to demonstrate harmonisation performance.
5.3.2. Root-mean square error (RMSE)
Many researchers measured the RMSE between the harmonised samples and the ground truth targets to assess replicability [49,57,81,90,91,103,114,115,117,119,120,125,130,131]. For instance, Moyer et al. employed the RMSE and the mean absolute error to assess the harmonisation performance between synthesised diffusion MRI and the ground truth. However, this metric requires paired datasets, which is a heavy burden for digital healthcare research.
5.3.3. Peak signal to noise ratio (PSNR)
PSNR illustrates the ratio between the maximum possible power of a signal and the power of the noise that influences the integrity of its representation. Consider two groups of variables X and Y; the PSNR between X and Y can be given as

$$\mathrm{PSNR} = 10\log_{10}\frac{MAX^{2}}{MSE} \tag{16}$$

where MAX denotes the maximum possible value (255 for 8-bit images) and MSE is the mean squared error between X and Y. PSNR is commonly used as an evaluation metric in image denoising tasks. Some researchers applied this indicator to measure the quality of the synthesised images during harmonisation [125,126].
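The three value based metrics of this section can be sketched in a few lines of NumPy; synthetic arrays stand in for the ground-truth and harmonised images.

```python
import numpy as np

def mae(x, y):
    return np.mean(np.abs(x - y))

def rmse(x, y):
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio in dB (Eq. (16))."""
    return 10 * np.log10(max_val ** 2 / np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
truth = rng.integers(0, 256, (64, 64)).astype(float)   # 'ground truth'
harmonised = truth + rng.normal(0, 2, (64, 64))        # 'harmonised' output
print(f"MAE={mae(truth, harmonised):.2f}  "
      f"RMSE={rmse(truth, harmonised):.2f}  "
      f"PSNR={psnr(truth, harmonised):.1f} dB")
```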
5.4. Main task based performance evaluation
Many studies demonstrated their effectiveness by comparing the performance of the main tasks before and after applying harmonisation methods. Although this is a result-orientated evaluation that may be affected by the random initialisation of the main-task (machine learning) models, it can prove the effectiveness of the harmonisation method to some extent. The main tasks involved in harmonisation evaluation mainly include regression [57,68,91], segmentation, and classification [84]. Segmentation is usually assessed by the Dice coefficient (Dice) or the intersection over union (IoU) [62,100,106,108,126,129,144]. In contrast, classification tasks are evaluated in various ways, e.g., using the area under the receiver operating characteristic curve (AUC) [50,54,86,92,106,111,115,138,143,146,154,159], accuracy [38,48,50,59,60,62,63,69,77,89,99,106,131–133,140,143,145,159], true positive rate [67,135], sensitivity [48,143], specificity [48] and the Matthews correlation coefficient (MCC) [69]. Note that the MCC is a balanced measurement for binary classification tasks that comprehensively accounts for TP, TN, FP, and FN; it is therefore grouped under main task based performance evaluation.
5.5. Cohort classification
Different from comparing the variability of data or data-derived features before and after harmonisation, some studies reported the effectiveness of harmonisation strategies through cohort classification [49,63,94]. The core idea of this metric is that the cohort should be more difficult to identify when an effective harmonisation strategy is employed. For instance, Wachinger et al. [63] compared the accuracy of cohort identification on the raw dataset and on datasets harmonised with z-score normalisation, a linear model and ComBat, respectively. ComBat yielded the worst cohort-identification results, which indicates the best harmonisation ability, since the classifier could not identify the cohorts after harmonisation.
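A minimal sketch of this check with scikit-learn; the feature matrix is synthetic and any classifier could be substituted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# If a classifier cannot identify the cohort from harmonised features,
# the cohort bias has been reduced (accuracy near chance is desirable).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # stand-in for harmonised features
cohort = rng.integers(0, 2, 200)    # cohort labels
acc = cross_val_score(LogisticRegression(max_iter=1000), X, cohort, cv=5).mean()
print(f"Cohort-identification accuracy: {acc:.2f} (chance = 0.50)")
```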
5.6. Visualisation
Visualisation refers to techniques that picture the data distribution in a low-dimensional feature space or through sample intensities. Approaches that visualise the data distribution in the latent space include principal component analysis (PCA) [56,57,59,68,70,81,83,146], uniform manifold approximation and projection (UMAP) [38,50,77,133,135,142,160], and t-distributed stochastic neighbour embedding (t-SNE) [54,74,75,79,83,121,132,133,143,145]. These approaches project high-dimensional data into a low-dimensional space and plot them in the same feature space. Harmonisation has performed well if the visualised distributions are mixed rather than assembling into distinct cohort clusters. In addition to visualising data distributions, some researchers also plot the intensities or locations-and-scales (mean and variance) of each sample before and after harmonisation to assess performance [46,83,86,97,98,100,103,106,107,125].
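A minimal PCA based sketch of such a mixing plot on synthetic features; UMAP or t-SNE could be substituted for PCA.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Project the data into 2-D and colour by cohort: well-harmonised data
# should appear mixed rather than forming per-cohort clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 50)),
               rng.normal(0.2, 1.0, (100, 50))])
cohort = np.array([0] * 100 + [1] * 100)
emb = PCA(n_components=2).fit_transform(X)
plt.scatter(emb[:, 0], emb[:, 1], c=cohort, cmap="coolwarm", s=10)
plt.title("2-D embedding coloured by cohort")
plt.show()
```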
6. Applications of computational data harmonisation
Data harmonisation has been widely adopted in various fields of digital healthcare, including the manual harmonisation of tabular data and the computational harmonisation of gene, radiomics and pathological data. Though there have been many efforts to remove artefacts from time series signal data (such as EEG and ECG) [161], these works mainly focus on the removal of noise caused by biological variances. For instance, researchers employed filtering [162] and wavelet transform [163] approaches to remove ocular, muscle and cardiac artefacts. However, these studies barely pay attention to device/site variances, indicating a lack of harmonisation studies for these time series signal data. This section illustrates the application of computational data harmonisation in gene expression, radiomics and pathology.
6.1. Gene expression analysis
The process of generating a functional gene product from the information within a gene is referred to as gene expression, which is one of the major research areas in biomedical research. The traditional approach for gene expression analysis is microarray technology, which relies on comprehensive chemical reactions to convert RNA to cDNA. The latest gene expression analysis method is single-cell RNA sequencing (scRNA-seq). It isolates single cells and their RNA for transcription, library generation and sequencing, using next-generation sequencing (NGS) techniques. Unfortunately, due to the various NGS platforms and experimental environments (pH, temperature), both microarray technology and scRNA-seq are highly affected by cohort bias. Therefore, computational data harmonisation is widely used in microarray [46,71,72,80,82,84,85,89,93] and scRNA-seq [56,67,68,70,73–75,77,81,83,121,132,133,135,140,142,145,160] studies to remove the cohort bias.
6.2. Radiomics analysis
Radiomics refers to the extraction and analysis of a large number of quantitative image features from medical images obtained by CT, PET, MRI, and other imaging modalities [164]. In addition to the studies on phantoms [61,65,94], research on MRI mainly focuses on the brain [17,47–49,51–53,55,57,58,63,66,86,87,90–92,95,97,99,100,115,117–119,125,126,129–131,143,144,157] and breast [101,111], while CT studies focus on the ear [60], liver [50,64] and lung [34,120]. In addition to CT, PET and MRI, optical coherence tomography also suffers severe scanner variability, which can be eased by computational data harmonisation approaches [129].
6.3. Pathology analysis
Most harmonisation strategies in pathology (also named stain/colour normalisation) aim to address stain variance. These studies mainly focused on the uterus [108], breast [39,62,102,103,107,110–113], lymphoma [103,138], skin [105], liver [106], renal tissue [59], and stomach [109].
6.4. Other modalities
Metabolomics is an omics technology that monitors and discovers metabolic changes in people in relation to illness state or in reaction to a medical or external intervention, using modern analytical instruments and pattern recognition algorithms. The nonlinear cohort bias introduced during liquid chromatography–mass spectrometry can be removed by computational methods [146].
7. Meta-analysis
In this review, the methodologies and metrics were grouped based on different ideas or theories, and the meta-analysis was conducted and reported across three areas/modalities. The results were explored and discussed per modality (gene, radiomics, and pathology) because these data have different properties: data for gene analysis are expression matrices, data for radiomics are high-dimensional volume arrays (a greyscale image per slice), and data for pathology are colour images of very large size.
7.1. Meta-analysis
Data properties and study trends. The number of studies and the data properties of harmonisation approaches from 2000 to 2021 are demonstrated in the top left of Fig. 8, together with the percentage of studies conducted on public datasets. Public data can be acquired through open source websites or archives, while in-house data cannot. There has been a dramatic increase in the number of harmonisation studies since 2019, indicating an urgent need for large-scale studies and data harmonisation strategies. In addition, we demonstrate the number of harmonisation studies on different sub-modalities in recent years (microarray and scRNA-seq for gene expression studies, CT and MRI for radiomics studies, and pathology). In terms of gene expression, harmonisation approaches for microarrays were mainly presented before 2015, while those for scRNA-seq have become the latest topic in recent years. As for radiomics studies, researchers have realised the importance of improving the reproducibility of radiomics features, especially in MRI. Data harmonisation for digital pathology has been studied for decades, and it has received more attention in recent years.
Fig. 8.
Number of publications per year in terms of data properties and modalities. Public data is open source data that can be acquired; in-house data is not available from the internet. The percentage in the top left subfigure is the ratio of studies that were conducted on public datasets.
Strategies and modalities. Due to the diversity of biomedical data modalities, the relationship between the different strategies and modalities was explored. As shown in Fig. 9, the distribution based methods were commonly applied in gene expression and radiomics studies, accounting for 79% and 59% of the employed approaches, respectively; however, only a few (5%) were employed in digital pathology. The empirical Bayes methods dominated the distribution based methods because of their generalisation ability and robustness, while RUV and SH were more commonly used in specific fields. The image processing approaches were mainly used in digital pathology, employing standardisation/normalisation and stain separation ideas to merge multi-cohort data. Unlike the distribution based and image processing based methods, invariant feature learning and synthesis have recently been found applicable to all three modalities, dominated by deep learning based algorithms.
Fig. 9.
Harmonisation strategies in terms of different modalities. ‘IFL’ indicates invariant feature learning approaches, “Img Pro” refers to image processing approaches. The percentage of sub-methods is annotated with the abbreviations of sub-methods in each pie chart.
Evaluation metric. The evaluation metric is another crucial aspect when developing computational data harmonisation strategies. It describes the performance of harmonisation methods by analysing the distributions, correlations and values between the source and target cohorts. Amongst all evaluation metrics, visualisation was the most commonly used method to present harmonisation effects, followed by evaluating the main tasks (Fig. 10). Some studies tried to evaluate via classifying the cohorts, but this has a limitation, since the inability to distinguish cohorts does not mean that all data are well harmonised. Overall, even though there are many options for the assessment of harmonisation strategies, there still exist barriers to implementing harmonisation assessment in clinical workflows. The evaluation can only be acquired when (1) there are paired datasets (which is inapplicable in real clinical settings); (2) there is a certain machine learning based module for performance comparison (demanding well-trained computational modules); (3) there are clinicians for visual assessment (subjective and time-consuming); or (4) there are predefined regions of interest (demanding manual annotation or computational modules). Moreover, data harmonisation is of utmost importance for a seamless federation of models (i.e., a naïve federation approach in which no additional algorithms are needed to cope with incoherencies in local datasets). Therefore, an open question is to what extent local datasets should be harmonised so as not to destroy the locally contextual particularities of the data that positively contribute to the local generalisation of the models.
Fig. 10.
Evaluation metrics in terms of different modalities.
In addition to reporting the utilisation of different metrics, details of the evaluation metrics in terms of different modalities are presented in Fig. 10. Visualisation is preferred for gene expression and digital pathology, including the visualisation of data distributions (e.g., using UMAP, t-SNE) and of samples before and after harmonisation. Many pathology studies applied distribution based metrics, such as structural similarity and normalised median intensity. The task based evaluation was also considered reliable, accounting for 19%, 28% and 13% of all evaluation metrics in gene expression, radiomics and pathology studies, respectively.
Data scale-images. The scale of samples in radiomics and pathology is closely related to image resolution and quality. We analysed the image size ε of the studies that involved images with width w and height h. Most radiomics studies were performed with 256-pixel images, while some were conducted with 512 or 128 pixels; this was because of the shortage of GPU memory or RAM, especially when dealing with 3D or multi-slice datasets. Moreover, it is of note that 69.7% of radiomics studies did not report the image size, while that proportion for pathological studies was 20%. For pathological images, most studies were conducted with large-scale images (large ε), since the image processing algorithms yield better results when considering larger fields of view.
Data scale-cohort. In addition to the sample (image) scales, the scale of cohorts is also important for data harmonisation approaches. For gene expression, more than half of the studies were performed on large datasets (number of cases per cohort > 5000). Most radiomic studies were conducted on small datasets, with the number of cases (scans) per cohort < 200.
7.2. Research directions
This section first presents research tracks for each kind of data harmonisation approach based on its limitations, and then states the common restrictions of previous studies.
Directions for distribution based methods. The distribution based methods mainly map the source data to the target by estimating the cohort variances. This leads to several issues: (1) most distribution based methods were conducted on refined feature vectors that require prior knowledge of the region of interest; this prior knowledge conflicts with the original purpose of harmonisation (processing multicentre datasets to build robust and precise computational tools), because the region of interest cannot be well predicted by models trained without data harmonisation; (2) although studies have proved that some distribution based methods (such as ComBat) can remove cohort bias while preserving the differences between radiomics features on phantoms, these methods cannot be applied directly to images or high-dimensional signals due to their demanding computational complexity; (3) the data harmonisation needs to be performed on the entire dataset again when new data are added; and (4) some approaches are pairwise (e.g., XPN, DWD, CCA, MNN, Seurat), leading to a complex training procedure (repeated training) when they are implemented on multicentre datasets (number of cohorts > 2). In particular, the first cohort is considered as the target cohort to correct samples in the second one, and these corrected samples are then added to the first cohort [140].
To overcome these problems, researchers may (1) focus on the harmonisation of raw datasets, instead of data-derived features; (2) develop highly efficient data harmonisation approaches that can deal with a large amount of data; (3) enhance the robustness of data harmonisation strategies; (4) develop methods that can simultaneously harmonise multicentre datasets; and (5) avoid using pair-wise samples for algorithms development.
Directions for image processing based methods. Image processing based methods can harmonise image data without complex procedures. However, these methods also have limitations: (1) some of them (such as stain separation) can only be applied to specific fields; (2) some (image filtering) methods heavily rely on empirical settings, such as filtering kernel sizes and kernel types, which are less efficient and hard to reproduce; and (3) some may lose information during non-linear transforms. To address these issues, researchers should pay attention to general data harmonisation approaches that do not heavily rely on empirical settings.
Directions for synthesis. Although deep learning based synthesis solutions have advanced rapidly and achieved significant performance, these methods still suffer from poor reproducibility and generalisability. The obvious limitations are that (1) most synthesis methods were built on existing multicentre datasets and lack evaluations on new datasets; (2) GAN based models are prone to instability and may hallucinate or introduce unrealistic changes; and (3) training a GAN based model requires a large amount of training data for all cohorts, which may not be feasible for clinical studies. To overcome these drawbacks, researchers should (1) report the data harmonisation performance on new datasets that were not involved during model development; (2) enhance the stability of data synthesis; and (3) build data harmonisation strategies using less training data.
Directions for invariant feature learning. Invariant feature learning can reduce the disadvantages of the synthesis approaches by learning to extract cohort-invariant features from datasets, but it still faces challenges. For instance, it can only extract invariant features for analysis and cannot produce harmonised data. Therefore, future studies should focus on how to generate harmonised data from the extracted invariant features.
Explainable AI and harmonisation studies. Another research niche that remains uncharted in the data harmonisation literature is the use of explainable artificial intelligence (XAI) methods [165] to identify possible reasons for incoherent data representations. We envision that XAI approaches can be exploited to gain insight into which visual artefacts present in data instances imprint a bias on the predicted outcome of a data-based model. This insight can then be analysed to decide whether the root cause of such biasing artefacts is insufficient harmonisation of the medical data before the learning phase. Furthermore, out-of-distribution examples can also be detected by virtue of local explanatory techniques (e.g., those capable of discerning which parts of the input to a model push its output towards one class or another), which upon inspection can be attributed to other exogenous phenomena related to data harmonisation, such as a possible miscalibration of the medical equipment or a change in the protocols capturing the data themselves. On the other hand, better harmonisation benefits XAI, since all the data are harmonised into the same standard and no cohort biases would be introduced into the XAI system [166,167]. All in all, we foresee an interesting research cross-fertilisation at the crossroads between harmonisation and XAI.
Limitations for methodology design. Most data harmonisation studies did not follow a stepwise design methodology and thus cannot be easily reproduced by third parties. For instance, as shown in Table 4, more than half of the radiomics studies did not report the image scale. Moreover, the differing definitions of 'reproducible' in previous studies and the various evaluation metrics greatly hinder method comparison in further research.
Table 4.
Data scale (image size) in previous studies.
| | Small | Middle | Large | N/A |
| Radiomics | 9.1% (6) | 18.2% (12) | 3.0% (2) | 69.7% (46) |
| Pathology | 5.0% (1) | 15.0% (3) | 60.0% (12) | 20.0% (4) |
* The small, middle and large image sizes are defined by thresholds on the image size ε (with width w and height h); N/A indicates there is no report of the image size.
8. Checklist and guidance
To address these issues of methodology design, we present a Checklist for Computational Data Harmonisation in Digital Healthcare (CHECDHA) to enhance reproducibility and methodological rigour, inspired by the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) [169]. Furthermore, guidance on how to choose data harmonisation strategies is also presented in this section.
8.1. Checklist criteria
The proposed checklist clarifies the common practice for data harmonisation studies across motivation, data, model, evaluation, result, and discussion, as shown in Table 5.
Table 5.
Checklist for Computational Data Harmonisation in Digital Healthcare (CHECDHA) criteria.
Category | Item | Explanation | Example | |
Motivation | Background | The application field of the dataset(s) | Information fusion of DW-MRI data from different scanners | |
Importance | Why this study is conducted, how important it is | Dramatically increase the statistical power and sensitivity of clinical studies | ||
Data | Common | Dataset | What the dataset(s) is (are), how it is (they are) collected (details of acquisition protocols, entry and exit criteria) How many categories, cohorts, subjects, and cases are included in the studies |
m healthy subjects under n protocols (m × n cases, n cohorts) Protocol 1: … Protocol 2: …
Property | Whether the dataset(s) is (are) in-house or public, provide the access link if appropriate | Public/In-house | ||
Pre-processing | How the dataset is pre-processed | Z-score normalisation | ||
Ground truth | What the ground truth is and how it is generated | Cohort x under protocol i | ||
Partition | For machine learning, how the dataset is partitioned into training, validation, and testing subsets in terms of the number of samples, patients | 7:2:1 for training, validation and test | ||
Augmentation | For machine learning, how the dataset is augmented | Randomized flip, rotation | ||
Specific | MRI sequence | What the MRI sequence is | Diffusion-weighted | |
Region | Which region(s) of the body or the subject in the dataset is (are) covered | Brain | ||
Slice size | What the sizes of each slice are | 512×512 | |
Pixel/Voxel size | What the physical length of a pixel/voxel is | 0.25 mm / 1 mm | |
WSI size | What the sizes of the whole slide images are | 12,000×30,000 | |
Patch size | What the sizes of the extracted image patches are | 256×256 | |
mpp | What the microns per pixel in the level-0 scan are | – | |
Model | Workflow | What the procedures of train and inference are, illustrated by the flow chart(s) if appropriate. | – | |
Learning approaches | What the learning method is. e.g., supervised learning, un/semi-supervised learning | Semi-supervised learning | ||
Architecture | What the structure of the proposed neural network is, if appropriate | nnUNet | ||
Task | The description of main tasks conducted on harmonised datasets, e.g., lesion segmentation/classification. | Tumour Segmentation | ||
Input domain | What the input modality of the proposed method is | 3-D images / 2D feature vectors | ||
Input size | The input sizes of the model | |||
Loss | What the optimisation functions are during the training. | Dice and cross-entropy loss | ||
Open-source | Whether the source code is available or not, provide the link if appropriate. | Open-source code www.github.com... | ||
Platform | The learning library used to build the model | TensorFlow 2.5.0 | ||
Evaluation | Statistical Analysis | What the evaluation methods of statistical analysis are | ANOVA-test | |
Metric | What indicators are used to evaluate harmonisation performance, e.g., the ratio of the reproducible features, coefficient of variation, Pearson correlation coefficient. | Intra-class correlation coefficient (>0.9 is considered reproducible) | ||
Comparison | What existing approaches are used to compare the performance of the proposed method | stVAE | ||
Visualisation | What approaches are used to visualise the data distribution before and after harmonisation strategies | t-SNE/UMAP/PCA | ||
Result | Result | What the quantitative values of evaluation metrics are. | – | |
Time consumption | The computational time of the proposed method and the comparisons. | 30 s per case | |
Discussion | Novelty | What the innovation of the proposed method is. | – | |
Strength | The importance/significance of the issue addressed by the proposed method. | – | ||
Limitation | What remained and unsolved issues are. | – | ||
Future works | Whether there will be potential studies in the future. | – |
The proposed CHECDHA checklist can greatly standardise the process of data harmonisation studies, comprehensively describing the motivation, data, data harmonisation strategy, evaluation and conclusions. Starting with a clear motivation (Fig. 12), researchers should first emphasise the importance of performing data harmonisation in a certain field. Then, the composition of the dataset(s) should be illustrated in detail, including the common and specific attributes shown in the checklist. When introducing methodologies, the authors should clearly state their ideas and implementation details (input domain, architecture, input size, development platform, etc.). During the evaluation, researchers should assess the reproducibility using new/independent data or data-derived features before and after data harmonisation with appropriate metrics. Meanwhile, the data harmonisation performance of previous approaches should be included as comparisons to reflect the advantages of the proposed method. At last, the novelty, strengths, limitations and future works should be given in the discussion and conclusion sections.
Fig. 12.
Workflow of conducting data harmonisation studies guided by the checklist.
8.2. Guidance of data harmonisation strategies and metrics
Studies have shown that implementing inaccurate data harmonisation strategies may introduce significant bias, resulting in less accurate predictions [168]. To guide method selection, a flowchart presenting possible ways of data harmonisation is shown in Fig. 13. As the flowchart illustrates, the distribution based methods perform well on refined features or gene matrices. For high-dimensional images, image processing methods are recommended when a high-performance GPU is not available. The deep learning based methods (including invariant feature learning and synthesis) can be applied to all kinds of modalities, but they require sufficient training samples. The invariant feature learning methods are recommended when the main task can be integrated with the training process, since synthesis may introduce unrealistic artefacts into the data.
Fig. 13.
Flowchart of how to select data harmonisation strategies.
For evaluation, the selection of metrics can directly affect whether the results are reliable. Here we summarise and recommend data harmonisation metrics for different conditions in Fig. 14. Visualisation is the most intuitive way to analyse data harmonisation results, which can be implemented by visualising the raw and the harmonised data with t-SNE/UMAP/PCA. Main task based evaluation can directly illustrate the effectiveness of data harmonisation strategies by comparing the main task performance on data before and after harmonisation. If the harmonised ground truth is not available, one can use distribution based metrics to assess the degree of sample mixing (although this may require cohort labels). When the harmonised ground truth can be acquired, the value based or correlation based metrics can precisely present the data harmonisation performance.
Fig. 14.
Flowchart of how to select harmonisation metrics.
9. Conclusion
Computational data harmonisation has been studied for digital healthcare research for decades. However, bridging basic science research models and data fusion into multicentre, multimodal and multi-scanner medical practice and clinical trials can be challenging unless data harmonisation is performed effectively. Furthermore, transfer/federated/multitask learning and other areas wherein knowledge is exchanged amongst models only work under ideal conditions, i.e., when the distribution shift is small enough for the exchanged knowledge to remain coherent across models/centres working over different data sources; otherwise, data harmonisation is needed. Unfortunately, it is unclear which approaches and metrics should be employed when dealing with multimodal datasets. Moreover, there is no 'standardised' stepwise design methodology, which leads to the poor reproducibility of existing studies.
To overcome these issues, this paper summarises and categorises the existing data harmonisation strategies and metrics based on their underlying theories, and subsequently presents the CHECDHA criteria. The proposed CHECDHA criteria help researchers conduct data harmonisation studies in a standardised format, which can greatly advance academic reproducibility and development. Moreover, data harmonisation approaches and evaluation metrics for the three modalities are summarised to help researchers select appropriate strategies (Fig. 9 and Fig. 10). In addition to summarising the methodologies, guidance on method and metric selection (Fig. 13 and Fig. 14) is also provided according to the different conditions. Last but not least, the limitations and directions of different methods are illustrated for future works.
Fig. 7.
Taxonomy of applications that involved computational data harmonisation strategies.
Fig. 11.
Scales of cohorts in gene expression and radiomics studies.
Data harmonisation, an important process in large multicentre studies, has drawn increasing attention in computational biomedical research. It can be well adapted to a federated learning system to promote the development of computational modules, and it plays an important role in biomedical research, including radiomics, genetic and pathological studies. Given the lack of criteria for reporting the research findings of harmonisation studies, we strongly encourage researchers to follow and expand the checklist presented in this survey.
Author statements
YN, JDS, FH and GY conceived and designed the study, contributed to data analysis, contributed to data interpretation, and contributed to the writing of the report. YN, JDS, SW1, CS, MR, IS, KH, JO, JN, JG, BE, AP, AAB, MIM, SW2, WV, NF, JPC, EVR, AC, HW, PL, LCA, LMB, FH, and GY contributed to the literature search. YN contributed to data collection and performed data curation and contributed to the tables and figures. YN, JDS, LCA, LMB, and GY contributed to Writing - Review & Editing. SW1, CS, KH, JG, AAB, MIM, SW2, WV, EVR, PL, LMB and GY contributed to Funding acquisition. NF contributed to Project administration. FH oversaw the study. GY supervised the work. All authors contributed to the article and approved the submitted version.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgement
This study was supported in part by the European Research Council Innovative Medicines Initiative (DRAGON#, H2020-JTI-IMI2 101005122), the AI for Health Imaging Award (CHAIMELEON##, H2020-SC1-FA-DTS-2019–1 952172), the UK Research and Innovation Future Leaders Fellowship (MR/V023799/1), the British Heart Foundation (Project Number: TG/18/5/34111, PG/16/78/32402), the SABRE project supported by Boehringer Ingelheim Ltd, the European Union's Horizon 2020 research and innovation programme (ICOVID, 101016131), the Euskampus Foundation (COVID19 Resilience, Ref. COnfVID19), and the Basque Government (consolidated research group MATHMODE, Ref. IT1294–19, and 3KIA project from the ELKARTEK funding program, Ref. KK-2020/00049).
# DRAGON Consortium:
Xiaodan Xinga, Ming Lia, Scott Wagersb, Rebecca Bakerc, Cosimo Nardid, Brice van Eeckhoute, Paul Skippf, Pippa Powellg, Miles Carrollh, Alessandro Ruggieroi, Muhunthan Thillaii, Judith Babari, Evis Salai, William Murchj, Julian Hiscoxk, Diana Barallel, Nicola Sverzellatim
## CHAIMELEON Consortium:
Ana Miguel Blancon, Fuensanta Bellvís Batallero, Mario Aznarp, Amelia Suarezp, Sergio Figueirasq, Katharina Krischakr, Monika Hierathr, Yisroel Mirskys, Yuval Elovicis, Jean Paul Beregit, Laure Fourniert, Francesco Sardanelliu, Tobias Penzkoferv, Karine Seymourw, Nacho Blanquerx, Emanuele Neriy, Andrea Laghiz, Manuela Françaaa, Ricard Martinezab
a National Heart and Lung Institute, Imperial College London, London, UK
b BioSci Consulting, Maasmechelen, Belgium
c Clinical Data Interchange Standards Consortium, Austin, Texas, United States
d University of Florence, Firenze, Italy
e Medical Cloud Company, Liège, Belgium
f TopMD, Southampton, UK
g European Lung Foundation, Sheffield, UK
h Department of Health, Public Health England, London, UK
i Department of Radiology, University of Cambridge, Cambridge, UK
j Owlstone Medical, Cambridge, UK
k University of Liverpool, Liverpool, UK
l University of Southampton, Southampton, UK
m University of Parma, Parma, Italy
n Medical Imaging Department, Hospital Universitari i Politècnic La Fe, Valencia, Spain
o QUIBIM, Valencia, Spain
p Matical Innovation, Madrid, Spain
q Bahía Software, A Coruña, Spain
r European Institute for Biomedical Imaging Research, Vienna, Austria
s Ben Gurion University of the Negev, Be'er Sheva, Israel
t Le Collège des Enseignants en Radiologie de France, France
u Research Hospital Policlinico San Donato, Milan, Italy
v Charité – Universitätsmedizin Berlin, Berlin, Germany
w Medexprim, Labège, France
x Valencia Polytechnic University, Valencia, Spain
y University of Pisa, Pisa, Italy
z Sapienza University of Rome, Rome, Italy
aa The Centro Hospitalar Universitário do Porto, Portugal
ab University of Valencia, Valencia, Spain
Contributor Information
Yang Nan, Email: y.nan20@imperial.ac.uk.
Guang Yang, Email: g.yang@imperial.ac.uk.
References
- 1.Clarke W.T., Mougin O., Driver I.D., Rua C., Morgan A.T., Asghar M., Clare S., Francis S., Wise R.G., Rodgers C.T. Multi-site harmonization of 7 tesla MRI neuroimaging protocols. Neuroimage. 2020;206 doi: 10.1016/j.neuroimage.2019.116335. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Delbeke D., Coleman R.E., Guiberteau M.J., Brown M.L., Royal H.D., Siegel B.A., Townsend D.W., Berland L.L., Parker J.A., Hubner K. Procedure guideline for tumor imaging with 18F-FDG PET/CT 1.0. J. Nucl. Med. 2006;47:885–895. [PubMed] [Google Scholar]
- 3.Simon J., Li D., Traboulsee A., Coyle P., Arnold D., Barkhof F., Frank J., Grossman R., Paty D., Radue E. Standardized MR imaging protocol for multiple sclerosis: consortium of MS Centers consensus guidelines. Am. J. Neuroradiol. 2006;27:455–461. [PMC free article] [PubMed] [Google Scholar]
- 4.Schmidt B.-.M., Colvin C.J., Hohlfeld A., Leon N. Defining and conceptualising data harmonisation: a scoping review protocol. Syst. Rev. 2018;7:1–6. doi: 10.1186/s13643-018-0890-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Schmidt B.-.M., Colvin C.J., Hohlfeld A., Leon N. Definitions, components and processes of data harmonisation in healthcare: a scoping review. BMC Med. Inform. Decis. Mak. 2020;20:1–19. doi: 10.1186/s12911-020-01218-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Da-Ano R., Visvikis D., Hatt M. Harmonization strategies for multicenter radiomics investigations. Phy. Med. Biol. 2020;65:24TR02. doi: 10.1088/1361-6560/aba798. [DOI] [PubMed] [Google Scholar]
- 7.Pinto M.S., Paolella R., Billiet T., Van Dyck P., Guns P.-.J., Jeurissen B., Ribbens A., den Dekker A.J., Sijbers J. Harmonization of brain diffusion MRI: concepts and methods. Front. Neurosci. 2020;14:396. doi: 10.3389/fnins.2020.00396. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Gitto S., Cuocolo R., Albano D., Morelli F., Pescatori L.C., Messina C., Imbriaco M., Sconfienza L.M. CT and MRI radiomics of bone and soft-tissue sarcomas: a systematic review of reproducibility and validation strategies. Insights Imaging. 2021;12:1–14. doi: 10.1186/s13244-021-01008-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Mali S.A., Ibrahim A., Woodruff H.C., Andrearczyk V., Müller H., Primakov S., Salahuddin Z., Chatterjee A., Lambin P. Making radiomics more reproducible across scanner and imaging protocol variations: a review of harmonization methods. J. Pers. Med. 2021;11:842. doi: 10.3390/jpm11090842. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Chen T., Philip M., Cao K.-A.Lê, Tyagi S. A multi-modal data harmonisation approach for discovery of COVID-19 drug targets. Brief. Bioinform. 2021 doi: 10.1093/bib/bbab185. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Tax C.M., Grussu F., Kaden E., Ning L., Rudrapatna U., Evans C.J., St-Jean S., Leemans A., Koppers S., Merhof D. Cross-scanner and cross-protocol diffusion MRI data harmonisation: a benchmark database and evaluation of algorithms. Neuroimage. 2019;195:285–299. doi: 10.1016/j.neuroimage.2019.01.077. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Hutchinson D.M., Silins E., Mattick R.P., Patton G.C., Fergusson D.M., Hayatbakhsh R., Toumbourou J.W., Olsson C.A., Najman J.M., Spry E. How can data harmonisation benefit mental health research? An example of the Cannabis cohorts research consortium. Australian New Zealand J. Psychiat. 2015;49:317–323. doi: 10.1177/0004867415571169. [DOI] [PubMed] [Google Scholar]
- 13.Zhao B., Tan Y., Tsai W.-.Y., Qi J., Xie C., Lu L., Schwartz L.H. Reproducibility of radiomics for deciphering tumor phenotype with imaging. Sci. Rep. 2016;6:1–7. doi: 10.1038/srep23428. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Kumar N., Verma R., Sharma S., Bhargava S., Vahadane A., Sethi A. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans. Med. Imaging. 2017;36:1550–1560. doi: 10.1109/TMI.2017.2677499. [DOI] [PubMed] [Google Scholar]
- 15.Hotta M., Minamimoto R., Gohda Y., Miwa K., Otani K., Kiyomatsu T., Yano H. Prognostic value of 18 F-FDG PET/CT with texture analysis in patients with rectal cancer treated by surgery. Ann. Nucl. Med. 2021:1–10. doi: 10.1007/s12149-021-01622-7. [DOI] [PubMed] [Google Scholar]
- 16.Mattoli M.V., Calcagni M.L., Taralli S., Indovina L., Spottiswoode B.S., Giordano A. How often do we fail to classify the treatment response with [18 F] FDG PET/CT acquired on different scanners? Data from clinical oncological practice using an automatic tool for SUV harmonization. Mol. Imaging Biol. 2019;21:1210–1219. doi: 10.1007/s11307-019-01342-5. [DOI] [PubMed] [Google Scholar]
- 17.Mirzaalian H., Ning L., Savadjiev P., Pasternak O., Bouix S., Michailovich O., Grant G., Marx C.E., Morey R.A., Flashman L.A. Inter-site and inter-scanner diffusion MRI data harmonization. Neuroimage. 2016;135:311–323. doi: 10.1016/j.neuroimage.2016.04.041. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Zhu T., Hu R., Qiu X., Taylor M., Tso Y., Yiannoutsos C., Navia B., Mori S., Ekholm S., Schifitto G. Quantification of accuracy and precision of multi-center DTI measurements: a diffusion phantom and human brain study. Neuroimage. 2011;56:1398–1411. doi: 10.1016/j.neuroimage.2011.02.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Jovicich J., Marizzoni M., Bosch B., Bartrés-Faz D., Arnold J., Benninghoff J., Wiltfang J., Roccatagliata L., Picco A., Nobili F. Multisite longitudinal reliability of tract-based spatial statistics in diffusion tensor imaging of healthy elderly subjects. Neuroimage. 2014;101:390–403. doi: 10.1016/j.neuroimage.2014.06.075. [DOI] [PubMed] [Google Scholar]
- 20.Leo P., Lee G., Shih N.N., Elliott R., Feldman M.D., Madabhushi A. Evaluating stability of histomorphometric features across scanner and staining variations: prostate cancer diagnosis from whole slide images. J. Med. Imaging. 2016;3 doi: 10.1117/1.JMI.3.4.047502. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Berenguer R., Pastor-Juan M.D.R., Canales-Vázquez J., Castro-García M., Villas M.V., Mansilla Legorburo F., Sabater S. Radiomics of CT features may be nonreproducible and redundant: influence of CT acquisition parameters. Radiology. 2018;288:407–415. doi: 10.1148/radiol.2018172361. [DOI] [PubMed] [Google Scholar]
- 22.Sunderland J.J., Christian P.E. Quantitative PET/CT scanner performance characterization based upon the society of nuclear medicine and molecular imaging clinical trials network oncology clinical simulator phantom. J. Nucl. Med. 2015;56:145–152. doi: 10.2967/jnumed.114.148056. [DOI] [PubMed] [Google Scholar]
- 23.Jha A., Mithun S., Jaiswar V., Sherkhane U., Purandare N., Prabhash K., Rangarajan V., Dekker A., Wee L., Traverso A. Repeatability and reproducibility study of radiomic features on a phantom and human cohort. Sci. Rep. 2021;11:1–12. doi: 10.1038/s41598-021-81526-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.N. Emaminejad, M.W. Wahi-Anwar, G.H.J. Kim, W. Hsu, M. Brown, M. McNitt-Gray, Reproducibility of lung nodule radiomic features: multivariable and univariable investigations that account for interactions between CT acquisition and reconstruction parameters, Med. Phys., (2021). [DOI] [PMC free article] [PubMed]
- 25.Kim M., Jung S.C., Park J.E., Park S.Y., Lee H., Choi K.M. Reproducibility of radiomic features in SENSE and compressed SENSE: impact of acceleration factors. Eur. Radiol. 2021:1–14. doi: 10.1007/s00330-021-07760-w. [DOI] [PubMed] [Google Scholar]
- 26.Yamashita R., Perrin T., Chakraborty J., Chou J.F., Horvat N., Koszalka M.A., Midya A., Gonen M., Allen P., Jarnagin W.R. Radiomic feature reproducibility in contrast-enhanced CT of the pancreas is affected by variabilities in scan parameters and manual segmentation. Eur. Radiol. 2020;30:195–205. doi: 10.1007/s00330-019-06381-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Fiset S., Welch M.L., Weiss J., Pintilie M., Conway J.L., Milosevic M., Fyles A., Traverso A., Jaffray D., Metser U. Repeatability and reproducibility of MRI-based radiomic features in cervical cancer. Radiother. Oncol. 2019;135:107–114. doi: 10.1016/j.radonc.2019.03.001. [DOI] [PubMed] [Google Scholar]
- 28.Saeedi E., Dezhkam A., Beigi J., Rastegar S., Yousefi Z., Mehdipour L.A., Abdollahi H., Tanha K. Radiomic feature robustness and reproducibility in quantitative bone radiography: a study on radiologic parameter changes. J. Clin. Densitom. 2019;22:203–213. doi: 10.1016/j.jocd.2018.06.004. [DOI] [PubMed] [Google Scholar]
- 29.Meyer M., Ronald J., Vernuccio F., Nelson R.C., Ramirez-Giraldo J.C., Solomon J., Patel B.N., Samei E., Marin D. Reproducibility of CT radiomic features within the same patient: influence of radiation dose and CT reconstruction settings. Radiology. 2019;293:583–591. doi: 10.1148/radiol.2019190928. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Perrin T., Midya A., Yamashita R., Chakraborty J., Saidon T., Jarnagin W.R., Gonen M., Simpson A.L., Do R.K. Short-term reproducibility of radiomic features in liver parenchyma and liver malignancies on contrast-enhanced CT imaging. Abdominal Radiol. 2018;43:3271–3278. doi: 10.1007/s00261-018-1600-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Midya A., Chakraborty J., Gönen M., Do R.K., Simpson A.L. Influence of CT acquisition and reconstruction parameters on radiomic feature reproducibility. J.Med. Imaging. 2018;5 doi: 10.1117/1.JMI.5.1.011020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Altazi B.A., Zhang G.G., Fernandez D.C., Montejo M.E., Hunt D., Werner J., Biagioli M.C., Moros E.G. Reproducibility of F18-FDG PET radiomic features for different cervical tumor segmentation methods, gray-level discretization, and reconstruction algorithms. J. Appl. Clin. Med. Phy. 2017;18:32–48. doi: 10.1002/acm2.12170. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Hu P., Wang J., Zhong H., Zhou Z., Shen L., Hu W., Zhang Z. Reproducibility with repeat CT in radiomics study for rectal cancer. Oncotarget. 2016;7:71440. doi: 10.18632/oncotarget.12199. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Choe J., Lee S.M., Do K.-.H., Lee G., Lee J.-.G., Lee S.M., Seo J.B. Deep learning–based image conversion of CT reconstruction kernels improves radiomics reproducibility for pulmonary nodules or masses. Radiology. 2019;292:365–373. doi: 10.1148/radiol.2019181960. [DOI] [PubMed] [Google Scholar]
- 35.Primak A.N., McCollough C.H., Bruesewitz M.R., Zhang J., Fletcher J.G. Relationship between noise, dose, and pitch in cardiac multi–detector row CT. Radiographics. 2006;26:1785–1794. doi: 10.1148/rg.266065063. [DOI] [PubMed] [Google Scholar]
- 36.Gierada D.S., Bierhals A.J., Choong C.K., Bartel S.T., Ritter J.H., Das N.A., Hong C., Pilgram T.K., Bae K.T., Whiting B.R. Effects of CT section thickness and reconstruction kernel on emphysema quantification: relationship to the magnitude of the CT emphysema index. Acad. Radiol. 2010;17:146–156. doi: 10.1016/j.acra.2009.08.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Tung P.-.Y., Blischak J.D., Hsiao C.J., Knowles D.A., Burnett J.E., Pritchard J.K., Gilad Y. Batch effects and the effective design of single-cell gene expression studies. Sci. Rep. 2017;7:1–15. doi: 10.1038/srep39921. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Stuart T., Butler A., Hoffman P., Hafemeister C., Papalexi E., Mauck W.M., III, Hao Y., Stoeckius M., Smibert P., Satija R. Comprehensive integration of single-cell data. Cell. 2019;177:1888–1902. doi: 10.1016/j.cell.2019.05.031. e1821. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Vijh S., Saraswat M., Kumar S. A new complete color normalization method for H&E stained histopatholgical images. Appl. Intell. 2021:1–14. [Google Scholar]
- 40.D.E. Chandler, R.W. Roberson, Bioimaging: current concepts in light and electron microscopy, (2009).
- 41.Sun D., Wang J., Han Y., Dong X., Ge J., Zheng R., Shi X., Wang B., Li Z., Ren P. TISCH: a comprehensive web resource enabling interactive single-cell transcriptome visualization of tumor microenvironment. Nucleic Acids Res. 2021;49:D1420–D1430. doi: 10.1093/nar/gkaa1020. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Shafiq-ul-Hassan M., Zhang G.G., Latifi K., Ullah G., Hunt D.C., Balagurunathan Y., Abdalah M.A., Schabath M.B., Goldgof D.G., Mackin D. Intrinsic dependencies of CT radiomic features on voxel size and number of gray levels. Med. Phys. 2017;44:1050–1062. doi: 10.1002/mp.12123. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Mackin D., Fave X., Zhang L., Fried D., Yang J., Taylor B., Rodriguez-Rivera E., Dodge C., Jones A.K., Court L. Measuring CT scanner variability of radiomics features. Invest. Radiol. 2015;50:757. doi: 10.1097/RLI.0000000000000180. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Mårtensson G., Ferreira D., Granberg T., Cavallin L., Oppedal K., Padovani A., Rektorova I., Bonanni L., Pardini M., Kramberger M.G. The reliability of a deep learning model in clinical out-of-distribution MRI data: a multicohort study. Med. Image Anal. 2020;66 doi: 10.1016/j.media.2020.101714. [DOI] [PubMed] [Google Scholar]
- 45.Rathore S., Bakas S., Akbari H., Shukla G., Rozycki M., Davatzikos C. Proceedings of the Medical Imaging 2018: Computer-Aided Diagnosis, International Society for Optics and Photonics. 2018. Deriving stable multi-parametric MRI radiomic signatures in the presence of inter-scanner variations: survival prediction of glioblastoma via imaging pattern analysis and machine learning techniques. [Google Scholar]
- 46.Johnson W.E., Li C., Rabinovic A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics. 2007;8:118–127. doi: 10.1093/biostatistics/kxj037. [DOI] [PubMed] [Google Scholar]
- 47.Pandey U., Saini J., Kumar M., Gupta R., Ingalhalikar M. Normative baseline for radiomics in Brain MRI: evaluating the robustness, regional variations, and reproducibility on FLAIR Images. J. Magn. Reson. Imaging. 2020 doi: 10.1002/jmri.27349. [DOI] [PubMed] [Google Scholar]
- 48.Ingalhalikar M., Shinde S., Karmarkar A., Rajan A., Rangaprakash D., Deshpande G. Functional connectivity-based prediction of Autism on site harmonized ABIDE dataset. IEEE Trans. Biomed. Eng. 2021 doi: 10.1109/TBME.2021.3080259. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Wengler K., Cassidy C., van Der Pluijm M., Weinstein J.J., Abi-Dargham A., van de Giessen E., Horga G. Cross-scanner harmonization of neuromelanin-sensitive MRI for multisite studies. J. Magn. Reson. Imaging. 2021 doi: 10.1002/jmri.27679. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Beaumont H., Iannessi A., Bertrand A.-.S., Cucchi J.M., Lucidarme O. Harmonization of radiomic feature distributions: impact on classification of hepatic tissue in CT imaging. Eur. Radiol. 2021:1–10. doi: 10.1007/s00330-020-07641-8. [DOI] [PubMed] [Google Scholar]
- 51.Garcia-Dias R., Scarpazza C., Baecker L., Vieira S., Pinaya W.H., Corvin A., Redolfi A., Nelson B., Crespo-Facorro B., McDonald C. Neuroharmony: a new tool for harmonizing volumetric MRI data from unseen scanners. Neuroimage. 2020;220 doi: 10.1016/j.neuroimage.2020.117127. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Beer J.C., Tustison N.J., Cook P.A., Davatzikos C., Sheline Y.I., Shinohara R.T., Linn K.A., for the Alzheimer's Disease Neuroimaging Initiative. Longitudinal ComBat: a method for harmonizing longitudinal multi-scanner imaging data. Neuroimage. 2020;220. doi: 10.1016/j.neuroimage.2020.117129.
- 53.Radua J., Vieta E., Shinohara R., Kochunov P., Quidé Y., Green M.J., Weickert C.S., Weickert T., Bruggemann J., Kircher T. Increased power by harmonizing structural MRI site differences with the ComBat batch adjustment method in ENIGMA. Neuroimage. 2020;218. doi: 10.1016/j.neuroimage.2020.116956.
- 54.Whitney H.M., Li H., Ji Y., Liu P., Giger M.L. Harmonization of radiomic features of breast lesions across international DCE-MRI datasets. J. Med. Imaging. 2020;7. doi: 10.1117/1.JMI.7.1.012707.
- 55.Yu M., Linn K.A., Cook P.A., Phillips M.L., McInnis M., Fava M., Trivedi M.H., Weissman M.M., Shinohara R.T., Sheline Y.I. Statistical harmonization corrects site effects in functional connectivity measurements from multi-site fMRI data. Hum. Brain Mapp. 2018;39:4213–4227. doi: 10.1002/hbm.24241.
- 56.Espín-Pérez A., Portier C., Chadeau-Hyam M., van Veldhoven K., Kleinjans J.C., de Kok T.M. Comparison of statistical methods and the use of quality control samples for batch effect correction in human transcriptome data. PLoS ONE. 2018;13. doi: 10.1371/journal.pone.0202947.
- 57.Fortin J.-P., Cullen N., Sheline Y.I., Taylor W.D., Aselcioglu I., Cook P.A., Adams P., Cooper C., Fava M., McGrath P.J. Harmonization of cortical thickness measurements across scanners and sites. Neuroimage. 2018;167:104–120. doi: 10.1016/j.neuroimage.2017.11.024.
- 58.Fortin J.-P., Parker D., Tunç B., Watanabe T., Elliott M.A., Ruparel K., Roalf D.R., Satterthwaite T.D., Gur R.C., Gur R.E. Harmonization of multi-site diffusion tensor imaging data. Neuroimage. 2017;161:149–170. doi: 10.1016/j.neuroimage.2017.08.047.
- 59.Kothari S., Phan J.H., Stokes T.H., Osunkoya A.O., Young A.N., Wang M.D. Removing batch effects from histopathological images for enhanced cancer diagnosis. IEEE J. Biomed. Health Inform. 2013;18:765–772. doi: 10.1109/JBHI.2013.2276766.
- 60.Arendt C.T., Leithner D., Mayerhoefer M.E., Gibbs P., Czerny C., Arnoldner C., Burck I., Leinung M., Tanyildizi Y., Lenga L. Radiomics of high-resolution computed tomography for the differentiation between cholesteatoma and middle ear inflammation: effects of post-reconstruction methods in a dual-center study. Eur. Radiol. 2021;31:4071–4078. doi: 10.1007/s00330-020-07564-4.
- 61.Ibrahim A., Refaee T., Leijenaar R.T., Primakov S., Hustinx R., Mottaghy F.M., Woodruff H.C., Maidment A.D., Lambin P. The application of a workflow integrating the variable reproducibility and harmonizability of radiomic features on a phantom dataset. PLoS ONE. 2021;16. doi: 10.1371/journal.pone.0251147.
- 62.Lan J., Cai S., Xue Y., Gao Q., Du M., Zhang H., Wu Z., Deng Y., Huang Y., Tong T. Unpaired stain style transfer using invertible neural networks based on channel attention and long-range residual. IEEE Access. 2021;9:11282–11295.
- 63.Wachinger C., Rieckmann A., Pölsterl S., for the Alzheimer's Disease Neuroimaging Initiative. Detect and correct bias in multi-site neuroimaging datasets. Med. Image Anal. 2021;67. doi: 10.1016/j.media.2020.101879.
- 64.Foy J.J., Al-Hallaq H.A., Grekoski V., Tran T., Guruvadoo K., Armato S.G. III, Sensakovic W.F. Harmonization of radiomic feature variability resulting from differences in CT image acquisition and reconstruction: assessment in a cadaveric liver. Phys. Med. Biol. 2020;65. doi: 10.1088/1361-6560/abb172.
- 65.Saint Martin M.-J., Orlhac F., Akl P., Khalid F., Nioche C., Buvat I., Malhaire C., Frouin F. A radiomics pipeline dedicated to breast MRI: validation on a multi-scanner phantom study. Magn. Reson. Mater. Phys. Biol. Med. 2020:1–12. doi: 10.1007/s10334-020-00892-y.
- 66.Karayumak S.C., Bouix S., Ning L., James A., Crow T., Shenton M., Kubicki M., Rathi Y. Retrospective harmonization of multi-site diffusion MRI data acquired with different acquisition parameters. Neuroimage. 2019;184:180–200. doi: 10.1016/j.neuroimage.2018.08.073.
- 67.Zhang Y., Parmigiani G., Johnson W.E. ComBat-Seq: batch effect adjustment for RNA-Seq count data. NAR Genom. Bioinform. 2020;2:lqaa078. doi: 10.1093/nargab/lqaa078.
- 68.Stein C.K., Qu P., Epstein J., Buros A., Rosenthal A., Crowley J., Morgan G., Barlogie B. Removing batch effects from purified plasma cell gene expression microarrays with modified ComBat. BMC Bioinformatics. 2015;16:1–9. doi: 10.1186/s12859-015-0478-3.
- 69.Da-Ano R., Masson I., Lucia F., Doré M., Robin P., Alfieri J., Rousseau C., Mervoyer A., Reinhold C., Castelli J. Performance comparison of modified ComBat for harmonization of radiomic features for multicenter studies. Sci. Rep. 2020;10:1–12. doi: 10.1038/s41598-020-66110-w.
- 70.Müller C., Schillert A., Röthemeier C., Trégouët D.-A., Proust C., Binder H., Pfeiffer N., Beutel M., Lackner K.J., Schnabel R.B. Removing batch effects from longitudinal gene expression: quantile normalization plus ComBat as best approach for microarray transcriptome data. PLoS ONE. 2016;11. doi: 10.1371/journal.pone.0156594.
- 71.Benito M., Parker J., Du Q., Wu J., Xiang D., Perou C.M., Marron J.S. Adjustment of systematic microarray data biases. Bioinformatics. 2004;20:105–114. doi: 10.1093/bioinformatics/btg385.
- 72.Shabalin A.A., Tjelmeland H., Fan C., Perou C.M., Nobel A.B. Merging two gene-expression studies via cross-platform normalization. Bioinformatics. 2008;24:1154–1160. doi: 10.1093/bioinformatics/btn083.
- 73.Korsunsky I., Millard N., Fan J., Slowikowski K., Zhang F., Wei K., Baglaenko Y., Brenner M., Loh P.-R., Raychaudhuri S. Fast, sensitive and accurate integration of single-cell data with Harmony. Nat. Methods. 2019;16:1289–1296. doi: 10.1038/s41592-019-0619-0.
- 74.Haghverdi L., Lun A.T., Morgan M.D., Marioni J.C. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors. Nat. Biotechnol. 2018;36:421–427. doi: 10.1038/nbt.4091.
- 75.Hie B., Bryson B., Berger B. Efficient integration of heterogeneous single-cell transcriptomes using Scanorama. Nat. Biotechnol. 2019;37:685–691. doi: 10.1038/s41587-019-0113-3.
- 76.Wolf F.A., Angerer P., Theis F.J. SCANPY: large-scale single-cell gene expression data analysis. Genome Biol. 2018;19:1–5. doi: 10.1186/s13059-017-1382-0.
- 77.Polański K., Young M.D., Miao Z., Meyer K.B., Teichmann S.A., Park J.-E. BBKNN: fast batch alignment of single cell transcriptomes. Bioinformatics. 2020;36:964–965. doi: 10.1093/bioinformatics/btz625.
- 78.McInnes L., Healy J., Melville J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. 2018.
- 79.Butler A., Hoffman P., Smibert P., Papalexi E., Satija R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nat. Biotechnol. 2018;36:411–420. doi: 10.1038/nbt.4096.
- 80.Gagnon-Bartsch J.A., Speed T.P. Using control genes to correct for unwanted variation in microarray data. Biostatistics. 2012;13:539–552. doi: 10.1093/biostatistics/kxr034.
- 81.Risso D., Perraudeau F., Gribkova S., Dudoit S., Vert J.-P. A general and flexible method for signal extraction from single-cell RNA-seq data. Nat. Commun. 2018;9:1–17. doi: 10.1038/s41467-017-02554-5.
- 82.Alter O., Brown P.O., Botstein D. Singular value decomposition for genome-wide expression data processing and modeling. Proc. Natl. Acad. Sci. 2000;97:10101–10106. doi: 10.1073/pnas.97.18.10101.
- 83.Lin Y., Ghazanfar S., Wang K.Y., Gagnon-Bartsch J.A., Lo K.K., Su X., Han Z.-G., Ormerod J.T., Speed T.P., Yang P. scMerge leverages factor analysis, stable expression, and pseudoreplication to merge multiple single-cell RNA-seq datasets. Proc. Natl. Acad. Sci. 2019;116:9775–9784. doi: 10.1073/pnas.1820006116.
- 84.Leek J.T., Johnson W.E., Parker H.S., Jaffe A.E., Storey J.D. The sva package for removing batch effects and other unwanted variation in high-throughput experiments. Bioinformatics. 2012;28:882–883. doi: 10.1093/bioinformatics/bts034.
- 85.Smyth G.K., Speed T. Normalization of cDNA microarray data. Methods. 2003;31:265–273. doi: 10.1016/s1046-2023(03)00155-5.
- 86.Fortin J.-P., Sweeney E.M., Muschelli J., Crainiceanu C.M., Shinohara R.T., for the Alzheimer's Disease Neuroimaging Initiative. Removing inter-subject technical variability in magnetic resonance imaging studies. Neuroimage. 2016;132:198–212. doi: 10.1016/j.neuroimage.2016.02.036.
- 87.Mirzaalian H., Ning L., Savadjiev P., Pasternak O., Bouix S., Michailovich O., Karmacharya S., Grant G., Marx C.E., Morey R.A. Multi-site harmonization of diffusion MRI data in a registration framework. Brain Imaging Behav. 2018;12:284–295. doi: 10.1007/s11682-016-9670-y.
- 88.Mirzaalian H., de Pierrefeu A., Savadjiev P., Pasternak O., Bouix S., Kubicki M., Westin C.-F., Shenton M.E., Rathi Y. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2015. Harmonizing diffusion MRI data across multiple sites and scanners; pp. 12–19.
- 89.Zhang Y., Jenkins D.F., Manimaran S., Johnson W.E. Alternative empirical Bayes models for adjusting for batch effects in genomic studies. BMC Bioinformatics. 2018;19:1–15. doi: 10.1186/s12859-018-2263-6.
- 90.Huynh K.M., Chen G., Wu Y., Shen D., Yap P.-T. Multi-site harmonization of diffusion MRI data via method of moments. IEEE Trans. Med. Imaging. 2019;38:1599–1609. doi: 10.1109/TMI.2019.2895020.
- 91.Wrobel J., Martin M., Bakshi R., Calabresi P., Elliot M., Roalf D., Gur R., Gur R., Henry R., Nair G. Intensity warping for multisite MRI harmonization. Neuroimage. 2020;223. doi: 10.1016/j.neuroimage.2020.117242.
- 92.Llera A., Huertas I., Mir P., Beckmann C.F. Quantitative intensity harmonization of dopamine transporter SPECT images using gamma mixture models. Mol. Imaging Biol. 2019;21:339–347. doi: 10.1007/s11307-018-1217-8.
- 93.Lazar C., Taminau J., Meganck S., Steenhoff D., Coletta A., Solís D.Y.W., Molter C., Duque R., Bersini H., Nowé A. GENESHIFT: a nonparametric approach for integrating microarray gene expression data based on the inner product as a distance measure between the distributions of genes. IEEE/ACM Trans. Comput. Biol. Bioinform. 2013;10:383–392. doi: 10.1109/TCBB.2013.12.
- 94.Mackin D., Fave X., Zhang L., Yang J., Jones A.K., Ng C.S., Court L. Harmonizing the pixel size in retrospective computed tomography radiomics studies. PLoS ONE. 2017;12. doi: 10.1371/journal.pone.0178524.
- 95.Moradmand H., Aghamiri S.M.R., Ghaderi R. Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma. J. Appl. Clin. Med. Phys. 2020;21:179–190. doi: 10.1002/acm2.12795.
- 96.Pitas I. Digital Image Processing Algorithms and Applications. John Wiley & Sons; 2000.
- 97.Shinohara R.T., Sweeney E.M., Goldsmith J., Shiee N., Mateen F.J., Calabresi P.A., Jarso S., Pham D.L., Reich D.S., Crainiceanu C.M. Statistical normalization techniques for magnetic resonance imaging. NeuroImage Clin. 2014;6:9–19. doi: 10.1016/j.nicl.2014.08.008.
- 98.Reinhard E., Ashikhmin M., Gooch B., Shirley P. Color transfer between images. IEEE Comput. Graph. Appl. 2001;21:34–41.
- 99.Loizou C.P., Murray V., Pattichis M.S., Seimenis I., Pantziaris M., Pattichis C.S. Multiscale amplitude-modulation frequency-modulation (AM–FM) texture analysis of multiple sclerosis in brain MRI images. IEEE Trans. Inf. Technol. Biomed. 2010;15:119–129. doi: 10.1109/TITB.2010.2091279.
- 100.Shah M., Xiao Y., Subbanna N., Francis S., Arnold D.L., Collins D.L., Arbel T. Evaluating intensity normalization on MRIs of human brain with multiple sclerosis. Med. Image Anal. 2011;15:267–282. doi: 10.1016/j.media.2010.12.003.
- 101.Roy S., Lal S., Kini J.R. Novel color normalization method for Hematoxylin & Eosin stained histopathology images. IEEE Access. 2019;7:28982–28998.
- 102.Zarella M.D., Yeoh C., Breen D.E., Garcia F.U. An alternative reference space for H&E color normalization. PLoS ONE. 2017;12. doi: 10.1371/journal.pone.0174489.
- 103.Li X., Plataniotis K.N. A complete color normalization approach to histopathology images using color cues computed from saturation-weighted statistics. IEEE Trans. Biomed. Eng. 2015;62:1862–1873. doi: 10.1109/TBME.2015.2405791.
- 104.Swinehart D.F. The Beer-Lambert law. J. Chem. Educ. 1962;39:333.
- 105.Tosta T.A.A., de Faria P.R., Neves L.A., do Nascimento M.Z. Color normalization of faded H&E-stained histological images using spectral matching. Comput. Biol. Med. 2019;111. doi: 10.1016/j.compbiomed.2019.103344.
- 106.Khan A.M., Rajpoot N., Treanor D., Magee D. A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution. IEEE Trans. Biomed. Eng. 2014;61:1729–1738. doi: 10.1109/TBME.2014.2303294.
- 107.Ruifrok A.C., Johnston D.A. Quantification of histochemical staining by color deconvolution. Anal. Quant. Cytol. Histol. 2001;23:291–299.
- 108.Hoque M.Z., Keskinarkaus A., Nyberg P., Seppänen T. Retinex model based stain normalization technique for whole slide image analysis. Comput. Med. Imaging Graph. 2021;90. doi: 10.1016/j.compmedimag.2021.101901.
- 109.Vahadane A., Peng T., Sethi A., Albarqouni S., Wang L., Baust M., Steiger K., Schlitter A.M., Esposito I., Navab N. Structure-preserving color normalization and sparse stain separation for histological images. IEEE Trans. Med. Imaging. 2016;35:1962–1971. doi: 10.1109/TMI.2016.2529665.
- 110.Lei G., Xia Y., Zhai D.-H., Zhang W., Chen D., Wang D. StainCNNs: an efficient stain feature learning method. Neurocomputing. 2020;406:267–273.
- 111.Zheng Y., Jiang Z., Zhang H., Xie F., Shi J., Xue C. Adaptive color deconvolution for histological WSI normalization. Comput. Methods Programs Biomed. 2019;170:107–120. doi: 10.1016/j.cmpb.2019.01.008.
- 112.Maji P., Mahapatra S. Rough-fuzzy circular clustering for color normalization of histological images. Fundam. Inform. 2019;164:103–117.
- 113.Maji P., Mahapatra S. Circular clustering in fuzzy approximation spaces for color normalization of histological images. IEEE Trans. Med. Imaging. 2019;39:1735–1745. doi: 10.1109/TMI.2019.2956944.
- 114.Cheng H.-D., Cai X., Min R. A novel approach to color normalization using neural network. Neural Comput. Appl. 2009;18:237–247.
- 115.Golkov V., Dosovitskiy A., Sperl J.I., Menzel M.I., Czisch M., Sämann P., Brox T., Cremers D. Q-space deep learning: twelve-fold shorter and model-free diffusion MRI scans. IEEE Trans. Med. Imaging. 2016;35:1344–1351. doi: 10.1109/TMI.2016.2551324.
- 116.Koppers S., Bloy L., Berman J.I., Tax C.M., Edgar J.C., Merhof D. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2019. Spherical harmonic residual network for diffusion signal harmonization; pp. 173–182.
- 117.Karayumak S.C., Kubicki M., Rathi Y. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2018. Harmonizing diffusion MRI data across magnetic field strengths; pp. 116–124.
- 118.Dewey B.E., Zhao C., Reinhold J.C., Carass A., Fitzgerald K.C., Sotirchos E.S., Saidha S., Oh J., Pham D.L., Calabresi P.A. DeepHarmony: a deep learning approach to contrast harmonization across scanner changes. Magn. Reson. Imaging. 2019;64:160–170. doi: 10.1016/j.mri.2019.05.041.
- 119.Tong Q., Gong T., He H., Wang Z., Yu W., Zhang J., Zhai L., Cui H., Meng X., Tax C.W. A deep learning-based method for improving reliability of multicenter diffusion kurtosis imaging with varied acquisition protocols. Magn. Reson. Imaging. 2020;73:31–44. doi: 10.1016/j.mri.2020.08.001.
- 120.Park S., Lee S.M., Do K.-H., Lee J.-G., Bae W., Park H., Jung K.-H., Seo J.B. Deep learning algorithm for reducing CT slice thickness: effect on reproducibility of radiomic features in lung cancer. Korean J. Radiol. 2019;20:1431–1440. doi: 10.3348/kjr.2019.0212.
- 121.Shaham U., Stanton K.P., Zhao J., Li H., Raddassi K., Montgomery R., Kluger Y. Removal of batch effects using distribution-matching residual networks. Bioinformatics. 2017;33:2539–2546. doi: 10.1093/bioinformatics/btx196.
- 122.He K., Zhang X., Ren S., Sun J. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. Deep residual learning for image recognition; pp. 770–778.
- 123.Ioffe S., Szegedy C. Proceedings of the International Conference on Machine Learning. PMLR; 2015. Batch normalization: accelerating deep network training by reducing internal covariate shift; pp. 448–456.
- 124.Gretton A., Borgwardt K., Rasch M., Schölkopf B., Smola A. A kernel method for the two-sample-problem. Adv. Neural Inf. Process. Syst. 2006;19:513–520.
- 125.Jog A., Carass A., Roy S., Pham D.L., Prince J.L. MR image synthesis by contrast learning on neighborhood ensembles. Med. Image Anal. 2015;24:63–76. doi: 10.1016/j.media.2015.05.002.
- 126.Jog A., Carass A., Roy S., Pham D.L., Prince J.L. Random forest regression for magnetic resonance image synthesis. Med. Image Anal. 2017;35:475–488. doi: 10.1016/j.media.2016.08.009.
- 127.Zhu J.-Y., Park T., Isola P., Efros A.A. Proceedings of the IEEE International Conference on Computer Vision. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks; pp. 2223–2232.
- 128.Zhao F., Wu Z., Wang L., Lin W., Xia S., Shen D., Li G. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; 2019. Harmonization of infant cortical thickness using surface-to-surface cycle-consistent adversarial networks; pp. 475–483.
- 129.Ren M., Dey N., Fishbaugh J., Gerig G. Segmentation-renormalized deep feature modulation for unpaired image harmonization. IEEE Trans. Med. Imaging. 2021;40:1519–1530. doi: 10.1109/TMI.2021.3059726.
- 130.Zhong J., Wang Y., Li J., Xue X., Liu S., Wang M., Gao X., Wang Q., Yang J., Li X. Inter-site harmonization based on dual generative adversarial networks for diffusion tensor imaging: application to neonatal white matter development. Biomed. Eng. Online. 2020;19:1–18. doi: 10.1186/s12938-020-0748-9.
- 131.Moyer D., Ver Steeg G., Tax C.M., Thompson P.M. Scanner invariant representations for diffusion MRI harmonization. Magn. Reson. Med. 2020;84:2174–2189. doi: 10.1002/mrm.28243.
- 132.Russkikh N., Antonets D., Shtokalo D., Makarov A., Vyatkin Y., Zakharov A., Terentyev E. Style transfer with variational autoencoders is a promising approach to RNA-Seq data harmonization and analysis. Bioinformatics. 2020;36:5076–5085. doi: 10.1093/bioinformatics/btaa624.
- 133.Johansen N., Quon G. scAlign: a tool for alignment, integration, and rare cell identification from scRNA-seq data. Genome Biol. 2019;20:1–21. doi: 10.1186/s13059-019-1766-4.
- 134.Haeusser P., Mordvintsev A., Cremers D. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. Learning by association: a versatile semi-supervised training method for neural networks; pp. 89–98.
- 135.Wang D., Hou S., Zhang L., Wang X., Liu B., Zhang Z. iMAP: integration of multiple single-cell datasets by adversarial paired transfer networks. Genome Biol. 2021;22:1–24. doi: 10.1186/s13059-021-02280-8.
- 136.Mairal J., Bach F., Ponce J., Sapiro G. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 2010;11.
- 137.St-Jean S., Coupé P., Descoteaux M. Non local spatial and angular matching: enabling higher spatial resolution diffusion MRI datasets through adaptive denoising. Med. Image Anal. 2016;32:115–130. doi: 10.1016/j.media.2016.02.010.
- 138.Tosta T.A.A., de Faria P.R., Servato J.P.S., Neves L.A., Roberto G.F., Martins A.S., do Nascimento M.Z. Unsupervised method for normalization of hematoxylin-eosin stain in histological images. Comput. Med. Imaging Graph. 2019;77. doi: 10.1016/j.compmedimag.2019.101646.
- 139.Lu C., Shi J., Jia J. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2013. Online robust dictionary learning; pp. 415–422.
- 140.Li X., Wang K., Lyu Y., Pan H., Zhang J., Stambolian D., Susztak K., Reilly M.P., Hu G., Li M. Deep learning enables accurate clustering with batch effect removal in single-cell RNA-seq analysis. Nat. Commun. 2020;11:1–14. doi: 10.1038/s41467-020-15851-3.
- 141.Blondel V.D., Guillaume J.-L., Lambiotte R., Lefebvre E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008:P10008.
- 142.Wang T., Johnson T.S., Shao W., Lu Z., Helm B.R., Zhang J., Huang K. BERMUDA: a novel deep transfer learning method for single-cell RNA sequencing batch correction reveals hidden high-resolution cellular subtypes. Genome Biol. 2019;20:1–15. doi: 10.1186/s13059-019-1764-6.
- 143.Guan H., Liu Y., Yang E., Yap P.-T., Shen D., Liu M. Multi-site MRI harmonization via attention-guided deep domain adaptation for brain disorder identification. Med. Image Anal. 2021;71. doi: 10.1016/j.media.2021.102076.
- 144.Dinsdale N.K., Jenkinson M., Namburete A.I. Deep learning-based unlearning of dataset bias for MRI harmonisation and confound removal. Neuroimage. 2021;228. doi: 10.1016/j.neuroimage.2020.117689.
- 145.Ge S., Wang H., Alavi A., Xing E., Bar-Joseph Z. Supervised adversarial alignment of single-cell RNA-seq data. J. Comput. Biol. 2021;28:501–513. doi: 10.1089/cmb.2020.0439.
- 146.Rong Z., Tan Q., Cao L., Zhang L., Deng K., Huang Y., Zhu Z.-J., Li Z., Li K. NormAE: deep adversarial learning model to remove batch effects in liquid chromatography mass spectrometry-based metabolomics data. Anal. Chem. 2020;92:5082–5090. doi: 10.1021/acs.analchem.9b05460.
- 147.Büttner M., Miao Z., Wolf F.A., Teichmann S.A., Theis F.J. A test metric for assessing single-cell RNA-seq batch correction. Nat. Methods. 2019;16:43–49. doi: 10.1038/s41592-018-0254-1.
- 148.Wang Z., Bovik A.C., Sheikh H.R., Simoncelli E.P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 2004;13:600–612. doi: 10.1109/tip.2003.819861.
- 149.Kolaman A., Yadid-Pecht O. Quaternion structural similarity: a new quality index for color images. IEEE Trans. Image Process. 2011;21:1526–1536. doi: 10.1109/TIP.2011.2181522.
- 150.Zhang L., Zhang L., Mou X., Zhang D. FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011;20:2378–2386. doi: 10.1109/TIP.2011.2109730.
- 151.Pambrun J.-F., Noumeir R. Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP). IEEE; 2015. Limitations of the SSIM quality metric in the context of diagnostic imaging; pp. 2960–2963.
- 152.Nyúl L.G., Udupa J.K., Zhang X. New variants of a method of MRI scale standardization. IEEE Trans. Med. Imaging. 2000;19:143–150. doi: 10.1109/42.836373.
- 153.Albert A., Zhang L. A novel definition of the multivariate coefficient of variation. Biom. J. 2010;52:667–675. doi: 10.1002/bimj.201000030.
- 154.Chirra P., Leo P., Yim M., Bloch B.N., Rastinehad A.R., Purysko A., Rosen M., Madabhushi A., Viswanath S.E. Multisite evaluation of radiomic feature reproducibility and discriminability for identifying peripheral zone prostate tumors on MRI. J. Med. Imaging. 2019;6. doi: 10.1117/1.JMI.6.2.024502.
- 155.Lin L.I.-K. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989:255–268.
- 156.Liljequist D., Elfving B., Skavberg Roaldsen K. Intraclass correlation: a discussion and demonstration of basic features. PLoS ONE. 2019;14. doi: 10.1371/journal.pone.0219854.
- 157.Orlhac F., Lecler A., Savatovski J., Goya-Outi J., Nioche C., Charbonneau F., Ayache N., Frouin F., Duron L., Buvat I. How can we combat multicenter variability in MR radiomics? Validation of a correction procedure. Eur. Radiol. 2021;31:2272–2280. doi: 10.1007/s00330-020-07284-9.
- 158.Mahon R., Ghita M., Hugo G., Weiss E. ComBat harmonization for radiomic features in independent phantom and lung cancer patient computed tomography datasets. Phys. Med. Biol. 2020;65. doi: 10.1088/1361-6560/ab6177.
- 159.Ioannidis G.S., Trivizakis E., Metzakis I., Papagiannakis S., Lagoudaki E., Marias K. Pathomics and deep learning classification of a heterogeneous fluorescence histology image dataset. Appl. Sci. 2021;11:3796.
- 160.Lotfollahi M., Wolf F.A., Theis F.J. scGen predicts single-cell perturbation responses. Nat. Methods. 2019;16:715–721. doi: 10.1038/s41592-019-0494-8.
- 161.Jiang X., Bian G.-B., Tian Z. Removal of artifacts from EEG signals: a review. Sensors. 2019;19:987. doi: 10.3390/s19050987.
- 162.He P., Kahle M., Wilson G., Russell C. Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference. IEEE; 2006. Removal of ocular artifacts from EEG: a comparison of adaptive filtering method and regression method using simulated data; pp. 1110–1113.
- 163.Kumar P.S., Arumuganathan R., Sivakumar K., Vimal C. Removal of ocular artifacts in the EEG through wavelet transform without using an EOG reference channel. Int. J. Open Problems Compt. Math. 2008;1:188–200.
- 164.Aerts H.J., Velazquez E.R., Leijenaar R.T., Parmar C., Grossmann P., Carvalho S., Bussink J., Monshouwer R., Haibe-Kains B., Rietveld D. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014;5:1–9. doi: 10.1038/ncomms5006.
- 165.Arrieta A.B., Díaz-Rodríguez N., Del Ser J., Bennetot A., Tabik S., Barbado A., García S., Gil-López S., Molina D., Benjamins R. Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inform. Fusion. 2020;58:82–115.
- 166.Yang G., Ye Q., Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: a mini-review, two showcases and beyond. Inform. Fusion. 2022;77:29–52. doi: 10.1016/j.inffus.2021.07.016.
- 167.Holzinger A., Dehmer M., Emmert-Streib F., Cucchiara R., Augenstein I., Del Ser J., Samek W., Jurisica I., Díaz-Rodríguez N. Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence. Inform. Fusion. 2022;79:263–278.
- 168.López-González F.J., Silva-Rodríguez J., Paredes-Pacheco J., Niñerola-Baizán A., Efthimiou N., Martín-Martín C., Moscoso A., Ruibal Á., Roé-Vellvé N., Aguiar P. Intensity normalization methods in brain FDG-PET quantification. Neuroimage. 2020;222. doi: 10.1016/j.neuroimage.2020.117229.
- 169.Mongan J., Moy L., Kahn C.E. Jr. Checklist for artificial intelligence in medical imaging (CLAIM): a guide for authors and reviewers. Radiol. Artif. Intell. 2020;2:e200029.
- 170.St-Jean S., Viergever M.A., Leemans A. Harmonization of diffusion MRI data sets with adaptive dictionary learning. Hum. Brain Mapp. 2020;41:4478–4499. doi: 10.1002/hbm.25117.