
Figure 4.

The framework of the Attention-Guided Generative Adversarial Network (AG-GAN) model used here, showing the transformation mapping that translates subject x to subject y. Attention mechanisms built into the generators identify the image regions with the greatest differences between subjects. Architecture adapted from [17], with component panels shown from our neuroimaging application.
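To make the attention-guided generator concept concrete, the following is a minimal PyTorch-style sketch of a generator that predicts both a translated image and a per-pixel attention mask, then blends the two so that only the attended regions are modified. The layer widths, depth, and single-mask formulation here are illustrative assumptions, not the exact architecture of [17].

```python
# Minimal sketch of an attention-guided generator (illustrative only;
# not the exact architecture of ref. 17). The generator predicts a
# translated content image and an attention mask; the mask selects
# which regions of the input are replaced by translated content.
import torch
import torch.nn as nn


class AttentionGuidedGenerator(nn.Module):
    def __init__(self, channels: int = 1, features: int = 32):
        super().__init__()
        # Shared feature extractor (illustrative depth).
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Head producing the translated content image.
        self.content_head = nn.Sequential(
            nn.Conv2d(features, channels, 3, padding=1),
            nn.Tanh(),
        )
        # Head producing a per-pixel attention mask in [0, 1] that
        # highlights the regions with the largest domain differences.
        self.attention_head = nn.Sequential(
            nn.Conv2d(features, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        content = self.content_head(h)
        mask = self.attention_head(h)
        # Attended regions take translated content; the rest of the
        # image is copied from the input, so only the parts with
        # maximal distinctions between subjects are edited.
        return mask * content + (1.0 - mask) * x


# Usage: map a batch of subject-x images toward subject y.
if __name__ == "__main__":
    g_xy = AttentionGuidedGenerator(channels=1)
    x = torch.randn(4, 1, 128, 128)  # hypothetical single-channel slices
    y_fake = g_xy(x)                 # same shape as x
    print(y_fake.shape)
```

In a full CycleGAN-style setup such as the one depicted, a second generator of the same form would map subject y back to subject x, with adversarial and cycle-consistency losses applied as usual.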