Med Phys. 2022 Mar 3;49(5):3134–3143. doi: 10.1002/mp.15539

FIGURE 2.

Architecture of the attention‐based deep MIL. Extracted radiomics features are used as input to the transformation network, whose per‐instance embeddings are then pooled with attention. A final fully connected layer maps the attention‐pooled representation to the output probability.
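
The pipeline in the caption (radiomics features → transformation network → attention pooling → fully connected output) can be sketched as follows. This is a minimal PyTorch illustration of attention-based deep MIL, not the paper's implementation; the layer sizes, the 107-feature input dimension, and all module names (AttentionDeepMIL, transform, attention, classifier) are assumptions for illustration.

```python
import torch
import torch.nn as nn


class AttentionDeepMIL(nn.Module):
    """Sketch of attention-based deep MIL: a bag of radiomics feature
    vectors is embedded per instance, pooled with learned attention
    weights, and mapped to a single bag-level probability."""

    def __init__(self, in_features: int = 107, embed_dim: int = 64, attn_dim: int = 32):
        super().__init__()
        # Transformation network: per-instance embedding of radiomics features.
        self.transform = nn.Sequential(
            nn.Linear(in_features, embed_dim),
            nn.ReLU(),
        )
        # Attention scorer: one scalar score per instance.
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        # Final fully connected layer on the attention-pooled embedding.
        self.classifier = nn.Linear(embed_dim, 1)

    def forward(self, bag: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # bag: (num_instances, in_features) radiomics features for one bag.
        h = self.transform(bag)                        # (N, embed_dim)
        scores = self.attention(h)                     # (N, 1)
        weights = torch.softmax(scores, dim=0)         # attention weights sum to 1
        pooled = (weights * h).sum(dim=0)              # (embed_dim,) bag embedding
        prob = torch.sigmoid(self.classifier(pooled))  # bag-level probability
        return prob, weights.squeeze(-1)


if __name__ == "__main__":
    model = AttentionDeepMIL()
    bag = torch.randn(12, 107)  # hypothetical bag: 12 instances, 107 features each
    prob, weights = model(bag)
    print(prob.item(), weights.shape)
```

The returned attention weights indicate how strongly each instance contributed to the pooled bag representation, which is the usual way such models are inspected for instance-level relevance.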