Author manuscript; available in PMC 2024 Apr 1.
Published in final edited form as: Placenta. 2023 Mar 15;135:43–50. doi: 10.1016/j.placenta.2023.03.003

Figure 2. Whole-slide learning method - diagram:

A whole-slide image (WSI) is divided into a set of smaller images (patches). Each patch passes through a feature extraction network to become a feature vector. The attention subnetwork generates an attention value for each vector, and these values weight the vectors' average (weighted features). The classifier subnetwork then generates a single label for the whole slide. The attention value for each patch is plotted on the attention map (blue to yellow: low to high attention). Note that a separate attention subnetwork and classifier subnetwork exist for each diagnosis; only the pair for infarction is shown.
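The pipeline in the figure (patch features, per-patch attention values, attention-weighted average, slide-level classifier) can be sketched numerically. The snippet below is a minimal illustration only, not the authors' implementation: it assumes a single linear attention scorer followed by a softmax and a logistic classifier, and all names (`whole_slide_predict`, `w_att`, `w_cls`) and the random feature vectors are hypothetical stand-ins for the networks described in the caption.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: normalizes attention scores to sum to 1.
    e = np.exp(x - x.max())
    return e / e.sum()

def whole_slide_predict(patch_features, w_att, w_cls, b_cls):
    """Attention-weighted pooling over patch feature vectors.

    patch_features: (n_patches, d) array, one feature vector per patch.
    w_att: (d,) hypothetical linear attention-scoring weights.
    w_cls, b_cls: hypothetical logistic-classifier weights and bias.
    Returns the slide-level probability and the per-patch attention values
    (the values that would be plotted on the attention map).
    """
    scores = patch_features @ w_att        # one raw score per patch
    attn = softmax(scores)                 # attention value per patch
    weighted = attn @ patch_features       # weighted-average feature vector
    logit = weighted @ w_cls + b_cls
    prob = 1.0 / (1.0 + np.exp(-logit))    # single label for the whole slide
    return prob, attn

# Toy usage with random "patch features" in place of a real extraction network.
rng = np.random.default_rng(0)
n_patches, d = 6, 8
feats = rng.standard_normal((n_patches, d))
prob, attn = whole_slide_predict(feats, rng.standard_normal(d),
                                 rng.standard_normal(d), 0.0)
```

In this sketch one (`w_att`, `w_cls`) pair corresponds to one diagnosis; mirroring the caption, a model covering several diagnoses would hold a separate pair per diagnosis and run them over the same patch features.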