Nat Commun. 2022 Oct 10;13:5962. doi: 10.1038/s41467-022-33619-9

Fig. 1. Overview of the stMVC model.


a Given SRT data with four layers of profiles as input, namely histological images (I), spatial locations (S), gene expression (X), and manual region segmentation (Y), stMVC integrates them to disentangle tissue heterogeneity, particularly in tumors. b stMVC adopts the SimCLR model, with a ResNet-50 feature-extraction framework, to efficiently learn visual features (h_i) for each spot (v_i) by maximizing agreement between differently augmented views of the same spot image (I_i) via a contrastive loss in the latent space (l_i), and then constructs the HSG from the learned visual features h_i. c stMVC adopts the SGATE model to learn view-specific representations (p_i^1 and p_i^2) for each of the two graphs, HSG and SLG, taking the latent features learned from the gene-expression data by an autoencoder-based framework as the feature matrix; the SGATE for each view is trained under weak supervision from the region segmentation to capture an efficient low-dimensional manifold structure, and the two view-specific graphs are simultaneously integrated into robust representations (r_i) by learning the weights of the different views via an attention mechanism. d The robust representations R can be used to elucidate tumor heterogeneity: detecting spatial domains, visualizing the relationship distance between different domains, and further denoising the data.
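As context for panel b, the sketch below illustrates a SimCLR-style contrastive (NT-Xent) objective that pulls together the latent codes (l_i) of two augmented views of the same spot image and pushes apart codes from different spots. It is a minimal illustration in PyTorch under assumed conventions (function name, temperature, batch layout are illustrative), not the stMVC implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss (illustrative sketch).

    z1, z2: (N, d) projections of two augmented views of the same N spot images.
    The positive pair for row i in z1 is row i in z2, and vice versa.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, d), unit-norm codes
    sim = torch.mm(z, z.t()) / temperature                  # pairwise cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))              # a sample is never its own positive
    # positive index: the other augmented view of the same spot
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```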
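For panel c, the following sketch shows one common way to fuse two view-specific representations (p_i^1 from the HSG and p_i^2 from the SLG) into a robust representation r_i using a learned per-view attention weight. The module name, scoring network, and dimensions are assumptions for illustration; they are not taken from the stMVC codebase.

```python
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    """Attention-weighted fusion of two view-specific spot representations (sketch)."""

    def __init__(self, dim, hidden=32):
        super().__init__()
        # small scoring network shared across views, producing one score per view
        self.score = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1, bias=False),
        )

    def forward(self, p1, p2):
        # p1, p2: (N, dim) representations from the two graph views
        views = torch.stack([p1, p2], dim=1)              # (N, 2, dim)
        alpha = torch.softmax(self.score(views), dim=1)   # (N, 2, 1) per-view attention weights
        return (alpha * views).sum(dim=1)                 # (N, dim) fused representations r_i

# usage (hypothetical tensors): r = ViewAttentionFusion(dim=64)(p_hsg, p_slg)
```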