Author manuscript; available in PMC: 2021 Nov 19.
Published in final edited form as: Neuroimage. 2021 Aug 24;243:118514. doi: 10.1016/j.neuroimage.2021.118514

Fig. 1.

Diagram of the DeepAtrophy deep learning algorithm for quantifying progressive change in longitudinal MRI scans. During training, DeepAtrophy consists of two copies of the same “basic sub-network” (Dθ) with shared weights θ. Dθ is a 3D ResNet image classification network with 50 layers (Chen et al., 2019; He et al., 2015) whose output layer has k = 5 elements. Dθ takes as input two MRI scans from the same individual in arbitrary temporal order. The outputs from the two copies of Dθ feed into a 2k × m fully connected layer with weights ω. The resulting “super-network” Sθ,ω takes as input two pairs of same-subject images, in arbitrary order, with the constraint that the inter-scan interval of one scan pair contains the inter-scan interval of the other scan pair. DeepAtrophy minimizes a weighted sum of two loss functions: the scan temporal order (STO) loss, which measures the ability of Dθ to correctly infer the temporal order of the two input scans, and the relative inter-scan interval (RISI) loss, which measures the ability of the super-network Sθ,ω to infer which of the input scan pairs has the longer inter-scan interval. During testing, the network Dθ is applied to pairs of same-subject scans. A single measure of disease progression, the predicted inter-scan interval (PII), is computed as a linear combination of the k outputs of Dθ. The coefficients of this linear combination are obtained by fitting a linear model on a subset of the training data (the amyloid-negative normal control group), with the actual inter-scan interval as the dependent variable and the outputs of Dθ as the independent variables.
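
The training setup described in the caption can be summarized in code. The following is a minimal sketch, not the authors' implementation, assuming PyTorch. The class names (`BasicSubNetwork`, `SuperNetwork`), the channel-wise concatenation of the two input scans, the value of m, the use of the first two of the k outputs as temporal-order logits, the cross-entropy form of the losses, and the loss weights are all illustrative assumptions; only the overall structure (weight-sharing copies of Dθ, a 2k × m fully connected layer with weights ω, and a weighted sum of STO and RISI losses) is taken from the caption.

```python
# Minimal sketch of the DeepAtrophy training structure (illustrative, not the authors' code).
import torch
import torch.nn as nn

K = 5  # number of outputs of the basic sub-network D_theta (from the caption)
M = 4  # number of RISI output classes (assumed; the caption only states the layer is 2k x m)

class BasicSubNetwork(nn.Module):
    """D_theta: a 3D ResNet-50 backbone followed by a k-element output layer."""
    def __init__(self, backbone: nn.Module, feature_dim: int, k: int = K):
        super().__init__()
        self.backbone = backbone            # 3D ResNet-50 feature extractor (e.g. MedicalNet-style)
        self.head = nn.Linear(feature_dim, k)

    def forward(self, scan_a: torch.Tensor, scan_b: torch.Tensor) -> torch.Tensor:
        # Two same-subject scans in arbitrary temporal order; here they are simply
        # concatenated along the channel axis (an assumed input arrangement).
        x = torch.cat([scan_a, scan_b], dim=1)
        return self.head(self.backbone(x))  # shape (batch, k)

class SuperNetwork(nn.Module):
    """S_theta_omega: two weight-sharing copies of D_theta feeding a 2k x m FC layer."""
    def __init__(self, d_theta: BasicSubNetwork, k: int = K, m: int = M):
        super().__init__()
        self.d_theta = d_theta              # shared weights theta
        self.fc = nn.Linear(2 * k, m)       # weights omega

    def forward(self, pair1, pair2) -> torch.Tensor:
        out1 = self.d_theta(*pair1)         # outputs of D_theta for the first scan pair
        out2 = self.d_theta(*pair2)         # outputs of D_theta for the second scan pair
        return self.fc(torch.cat([out1, out2], dim=1))

def training_step(super_net: SuperNetwork, pair1, pair2,
                  order_labels: torch.Tensor, risi_labels: torch.Tensor,
                  lambda_sto: float = 1.0, lambda_risi: float = 1.0) -> torch.Tensor:
    """Weighted sum of the STO and RISI losses (weights and loss form are illustrative)."""
    ce = nn.CrossEntropyLoss()
    d_out1 = super_net.d_theta(*pair1)      # (batch, k)
    # Assumption: the first two of the k outputs serve as temporal-order logits.
    sto = ce(d_out1[:, :2], order_labels)
    risi_logits = super_net(pair1, pair2)   # (batch, m)
    risi = ce(risi_logits, risi_labels)
    return lambda_sto * sto + lambda_risi * risi
```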
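
The test-time PII computation amounts to an ordinary linear regression. The sketch below follows the caption's description, assuming the k outputs of Dθ and the actual inter-scan intervals for the amyloid-negative control subset are available as NumPy arrays; the function and variable names are illustrative.

```python
# Sketch of the predicted inter-scan interval (PII) computation (illustrative names).
import numpy as np

def fit_pii_coefficients(control_outputs: np.ndarray, control_intervals: np.ndarray) -> np.ndarray:
    """Fit a linear model: actual inter-scan interval ~ k outputs of D_theta.

    control_outputs: (n_pairs, k) outputs of D_theta on the amyloid-negative control subset.
    control_intervals: (n_pairs,) actual inter-scan intervals (e.g. in days).
    Returns the (k + 1,) coefficient vector, intercept first.
    """
    design = np.hstack([np.ones((control_outputs.shape[0], 1)), control_outputs])
    coeffs, *_ = np.linalg.lstsq(design, control_intervals, rcond=None)
    return coeffs

def predicted_interscan_interval(outputs: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """PII for test scan pairs: the fitted linear combination of D_theta outputs."""
    design = np.hstack([np.ones((outputs.shape[0], 1)), outputs])
    return design @ coeffs
```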